Clouds and beyond

The question on most IT professionals’ minds seems to be “what’s the next paradigm shift in the datacenter space?”

About a decade ago, most IT companies started heavy adoption of virtualization and incorporated a “Virtualization first” approach. In essence, instead of the typical process of an IT manager simply procuring a hardware resource, he would first evaluate whether the intended workload could be virtualized; if yes, that would be the default choice. The reasons are obvious and aplenty – cost, flexibility, availability and so on.

Net-net, datacenters consolidated and optimized, and Moore’s law helped the cause.

In the initial days there was skepticism and rebellion of sorts, and certain industries chose to remain physical citing their own reasons, but eventually we saw almost all verticals, from banks to defense organizations, adopt virtualization.

The important thing to note is that, as the density of hypervisor vendors increased, the value proposition was no longer “virtualization” itself but who could serve it best, i.e. the actual competition was over who could provide a better quality of features on top of a virtualized platform.

One can also perceive that “virtualization” turned into a commodity, and how it could be delivered, maintained or managed became the deciding factors.

In the meanwhile, there were interesting developments above and below the virtualization layer: hyper-convergence, storage & network virtualization, and in the application stack, modernization of apps to move away from legacy models towards cloud-native models.

Putting the pieces together, we have a mixed bag of workloads: some can run in a private cloud, some on the public cloud and some in a hybrid model.

From an organizational standpoint, CIOs would build a cloud strategy with a set of policies that govern the placement of workloads (a feature to consider – policy-based cloud workload management).

The devil is in the details:

Private Cloud = increased Capex – on-prem, but better control and compliance

Public Cloud = increased Opex – off-prem, but predictable expenditure and fewer IT management complexities such as datacenter costs (power, cooling, hardware maintenance, etc.)

Over a period of time we witnessed each layer in the datacenter (bottom-up) getting commoditized.

Gartner states that by 2020, a “no-cloud” policy will be rare.

It appears that a hybrid state with shifting balances will prevail for a fair amount of time, and Nostradamus may have to help us from there!!!


Feel empowered with the all new Log Insight 3.3.x

 

The all-new vRealize Log Insight (vRLI) version 3.3.x comes with great new enhancements, but two key features hog the limelight:

 

#1 New Product Licensing

vRealize Log Insight 3.3 includes the ability to use 25 OSI licenses at no additional cost with a vCenter Server Standard (STD) installation.


This means that you get vRLI clubbed with the vCenter license, giving you a good insight into its advanced capabilities. More details on how the licensing works are outlined in the FAQs.


#2 Importer Utility

A new importer utility is available to support importing old logs and support bundles via the Log Insight ingestion API. The utility is available as an executable for Windows and Linux, supports a manifest file that is almost identical to an agent configuration file (the only difference being the directory option), can ingest messages based on their timestamp (requires authentication), and supports compressed (zip/gzip/tar) as well as recursive directory imports.
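As an illustration, the manifest could look something like the sketch below. This is only a sketch that assumes the manifest mirrors the agent configuration syntax; the section name, the path to the extracted bundle and the include pattern are hypothetical and need to be adapted to your environment.

# create a hypothetical manifest pointing the importer at logs extracted from a support bundle
cat > /tmp/import-manifest.ini <<'EOF'
[filelog|offline-bundle]
; directory option: where the support bundle was extracted
directory=/tmp/extracted-bundle/var/log
include=*.log;*.gz
EOF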


This new feature lets you work with offline logs, i.e. you can process log bundles extracted from a product, as opposed to configuring production servers to direct logs to vRLI.

Hence, if there were impediments to deploying vRLI in the environment – such as security approvals, budget approvals or the business justification to procure the product – the above two features help to get things moving by building out an isolated setup and loading the logs for offline analysis.

This can be the first line of attack for administrators before engaging VMware Support.

I will follow up this blog with some samples & guidelines on setting it up.

 

Happy troubleshooting…


vCOPs: Error 500 The call failed on the server; see server log for details (StatusCode: 500)

Error 500 is a rather annoying and quite generic “catch-all” error for web servers, and vROps is no different in this context.

In this blog, I aim to resolve one of the conditions affecting the Analytics component of vROps. Check that the symptoms and logs are relevant before you go ahead with the steps.

Symptoms

1- Log in to the admin page of the vCOps UI VM:

http://ipaddressofUIVM/admin

2- Check the status of all the services


In this case, you observe that the vCOps Manager Service and the Analytics Service have stopped.

3- This issue may also manifest as below, even though you have not changed the IP address of either of the VMs (UI or Analytics):

The UI VM running at <IP address> cannot connect to the Analytics VM at <IP address>; make sure it is running and reachable from <IP address>. If the IP address of either VM has changed, then log in to the Administration interface, which will guide you through the steps to restore connectivity between the two VMs.

 

4- Check the logs at $ALIVE_BASE/user/log/analytics.log

2015-11-12 16:38:41,225 ERROR [Thread-1] com.integrien.alive.dbaccess.AnalyticsFastLoaderCache.loadActiveAlarmsNative – Error while loading active alarms org.postgresql.util.PSQLException: PANIC: checksum mismatch: disk has 0x41ea3421, should be 0xda4b0281 filename pg_tblspc/16385/PG_9.0_201106101/16386/16435, BlockNum 171234, block specifier 16385/16386/16435/0/171234

2015-11-12 16:38:42,245 INFORMATION [Thread-1] com.integrien.analytics.AnalyticsMain.stop – Analytics is stopping
2015-11-12 16:38:42,265 INFORMATION [Thread-1] com.integrien.analytics.AnalyticsMain.stop – AnalyticsService has been stopped  

If these symptoms match, proceed to the next steps.

Root cause: the Postgres DB used by the analytics engine has hit a block checksum mismatch (as seen in the log above) and needs you to correct it.

Resolution

1- Log in to the Analytics VM

2- Switch to the postgres user:

su postgres

3- Execute: pg_ctl stop -m smart -D /data/pgsql/data

** Force it using the immediate switch if required or if the previous command errors out, as shown below.
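For example, a forced stop would look like the sketch below, using the same data directory as in step 3:

pg_ctl stop -m immediate -D /data/pgsql/data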

4- Execute the following command and replace the block specifier with the actual tablespace/database/relation/fork/blockNum recorded in the analytics log (see the grep sketch after this step)

postgres --single -D $PGDATA -c fix_block_checksum="16385/16386/16435/0/171234"

This should return with a message ending in “… fixed”.
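If you prefer to pull the block specifier straight out of the log rather than copying it by hand, a simple sketch along these lines (using the log path from the Symptoms section) should do:

grep -o "block specifier [0-9/]*" $ALIVE_BASE/user/log/analytics.log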

5- pg_ctl start -D /data/pgsql/data

6- Go ahead and restart the vCOps services
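For reference, the whole sequence on the Analytics VM looks roughly like the sketch below, assuming $PGDATA points to /data/pgsql/data; the block specifier shown is the one from the sample log above and must be replaced with the value from your own analytics.log.

su postgres
# stop the database cleanly (switch to -m immediate only if smart mode fails)
pg_ctl stop -m smart -D /data/pgsql/data
# repair the reported block in single-user mode
postgres --single -D /data/pgsql/data -c fix_block_checksum="16385/16386/16435/0/171234"
# bring the database back up
pg_ctl start -D /data/pgsql/data
# finally, restart the vCOps services as described in step 6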

Happy Monitoring …