
This one is close to my heart and comes up week after week. Whether prompted by what vendors see as the next big thing, or by a public cloud outage hitting companies that run production off premises, there is a steady stream of articles about where consumers should place their workloads.

 

Is there really a right or wrong answer?

 

The merits of private cloud strongly favour the security conscious and the holders of crown jewels so precious they may never consider running production off premises. Equally, newer startups built on agility and a DevOps culture tend to adopt a cloud-first approach and have no issue using the likes of Amazon and Azure to host their production apps. We could also argue that test/dev is well suited to public cloud, but what happens when you need portability between those workloads and your own private datacenter? Moving workloads out of Amazon or Azure, or between different hypervisors, is not always easy, so it has to be thought about.

 

Some companies have yet to dip their toe into public cloud. Is this a bad thing in this day and age?

 

Surely these companies should be experimenting now, before they become the next Blockbuster Video. Well, let me pose this question: how many of you have REAL PRODUCTION workloads in the public cloud? Of those that do, how many think about security, availability and data protection?

 

Don't get me wrong here. The cost economics of public cloud are real, and I can see why so many familiar names are trusting public cloud vendors with their Tier 1 apps. If your company hasn't yet made a decision, moving cold or archive data to the cloud could be a worthwhile first step before looking at more complex propositions. Either way, public cloud and private cloud both give us the automation and self-service the business demands.
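
If you wanted to try that first step, a minimal sketch in Python using the boto3 library to push an archive into Amazon S3 could look something like the following. The bucket name, key, file path and storage class are placeholders for illustration, not a recommendation of any particular provider or tier:

```python
import boto3

# Minimal sketch: move a piece of cold/archive data into object storage.
# Bucket name, key and local path are hypothetical placeholders.
s3 = boto3.client("s3")

s3.upload_file(
    Filename="backups/archive-2015.tar.gz",     # local cold data
    Bucket="example-company-archive",           # placeholder bucket
    Key="archives/archive-2015.tar.gz",
    ExtraArgs={"StorageClass": "STANDARD_IA"},  # cheaper tier for rarely-read data
)
print("archive uploaded")
```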

 

The final thoughts I'd like to leave you with are these. Public cloud is here to stay, but you need to think about your company culture and business motivators before settling on a strategy. Test drive various providers, but if you aren't a startup and want to deliver agility to lines of business, keep on-premises cloud in the back of your mind as well.

 

So who's all in with public cloud?

It's a story that has been told many times over the years but seems to be getting more interest this year. Converged platforms have been around for a while in the form of Vblock, FlexPod and other architectures that can be deployed in a fraction of the time of a traditional build, but does this really resonate with you?

Bringing together compute, network and storage, and knowing that you don't have to worry about firmware compatibility, device drivers, hardware patching or juggling multiple lines of support, is a compelling story, right? If not, why not?

Now hyperconverged takes this a step further, realizing the dream of making the datacenter truly software defined. I know you're thinking it can't be software defined if you still have to buy hardware and get it installed, but the secret is that the fundamental layers are orchestrated and automated by software. Having compute and storage building blocks that you scale as you grow is surely an attractive proposition, as you no longer have to worry about SAN zoning and the other mundane tasks needed to stitch a datacenter solution together. Of course, this isn't only for greenfield deployments; it's a little more difficult when you have an existing datacenter, but still possible to implement for a specific use case, such as VDI or a project that needs a cloud-native feel using a PaaS layer like Cloud Foundry or OpenShift. My point is that hyperconverged solutions have been around a while and are gaining momentum in the industry.

Policy management is where we all want to be: less thinking about the constructs of the hardware, and more focus on how we drive policy to ensure workloads are both compliant and performant. That is the nirvana, right?
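
To illustrate the idea, here is a minimal Python sketch of a policy-driven compliance check. The policy fields and workload attributes are made up for the example and don't reflect any particular product:

```python
from dataclasses import dataclass

# Hypothetical policy and workload models -- field names are illustrative only.

@dataclass
class Policy:
    name: str
    required_tier: str       # e.g. "ssd" storage tier
    max_latency_ms: float    # performance target
    allowed_zones: tuple     # compliance boundary

@dataclass
class Workload:
    name: str
    storage_tier: str
    observed_latency_ms: float
    zone: str

def is_compliant(w: Workload, p: Policy) -> bool:
    """True if the workload meets the policy's compliance and performance rules."""
    return (
        w.storage_tier == p.required_tier
        and w.observed_latency_ms <= p.max_latency_ms
        and w.zone in p.allowed_zones
    )

gold = Policy("gold", required_tier="ssd", max_latency_ms=5.0,
              allowed_zones=("eu-west", "eu-central"))
vdi_pool = Workload("vdi-pool-01", storage_tier="ssd",
                    observed_latency_ms=3.2, zone="eu-west")

print(is_compliant(vdi_pool, gold))  # True -> no remediation needed
```

The point is that the check is expressed against outcomes (tier, latency, placement boundary), not against LUNs, zoning or any other hardware construct.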

 

So my question here is: are you a storage hugger or not? Do you love LUNs, masking and multiple GUIs? Has your company gone down the road of a converged or hyperconverged solution, and what were the benefits, or indeed the downfalls?

As we enter the first official day of autumn, I'm sure a large percentage of us are reflecting on a missed vacation or day off we wish we had taken this summer, whether it was a week-long family getaway to the lake or just a day off to crack open a few cold ones and relax on the beach with friends. Too many of us let the opportunity to enjoy life outside of work get away from us.

 

Work/life balance has been in the spotlight recently, particularly in the tech sector (e.g. Amazon). My opinion is that it stems from a combination of two things: it IS the culture in IT to work hard, but it's also a by-product of the industry attracting people who care about what they do and strive to do a good job. Being on the sales side myself, I work in a super competitive environment – both internally and externally. I'm judged by numbers, and the more I work, the more opportunities I have to produce. There are also a lot of platforms viewed as competition to VMTurbo, all using similar verbiage to describe what their product does, though there is always one huge key difference – see if you can spot it:

 

One will “Help you improve performance”. The other is “Designed to assure application performance”.

 

It's a subtle difference, but it boils down to one important factor: with traditional monitoring tools, it's up to YOU to leverage the tool in order to IMPROVE performance (not assure it). On the other side, VMTurbo's platform, by design, ASSURES that performance for you.

 

So what does this have to do with work/life balance? Let's say you're only leveraging monitoring tools for your virtual environment. Part of your team's responsibilities is making sure the business's applications are always up and running, right? So what happens if you take off and something goes wrong? Even when you do get around to taking that precious day off, the phone is attached to your hip. And let's not even get into the 2am wake-up calls. These are scary things for you, as the one responsible for fixing it, and for the business, because it means something critical is not working.

 

The Wall Street Journal put out an article recently titled ‘Who Will Put Out Company Fires When Tech Workers Are at Burning Man?’ and it got me thinking: why are we still worrying about this? It's 2015, and while meeting QoS/SLA commitments in IT is as important as ever, if not more so, we've advanced. We're no longer 10% virtualized; we're 60%, 70%, 99% virtualized, and with that comes a level of complexity that no human can truly manage within the time constraints we have.

 

Imagine being on a beach – white sand, crystal clear water, blue skies all around, cold drink in hand. Ahhhhhh… feeling better already, right? Now imagine that you're not even THINKING about your VMware/Hyper-V/Citrix/hybrid cloud environment, because there's a software platform in place that prevents fires from happening in the first place. That VDI project you were tasked with? Well, you got that done ahead of schedule BEFORE you left for vacation (AND under budget!) because you weren't pulled away every hour to deal with another alert.

 

Earlier in the month we asked VMworld attendees to describe their “Desired State”: where in the world would you go if you could go anywhere? A couple of lucky entrants won $5,000 and $10,000 grand prizes to go on that dream trip. Now, what about your virtual data center? Luckily for the rest of us, VMTurbo offers everyone a free 30-day trial to help keep your data center in its own desired state, where applications work as they should and IT resources are utilized optimally, all while the IT team keeps up with the rate of change in the business.

 

Maybe you missed out on this year's Burning Man, but there's still time to use up those valuable vacation days before they expire at the end of the year. All it takes is 15 minutes to download the trial, and within an hour it's operational – less time than it takes to deal with some alerts! Then grab your significant other (remember her/him?), hop into anything that's smoking, shut off the phone, and find your own Desired State.

 

This is not a fantasy – it really could happen! Tell us: what would it mean for you and your business to have a platform in place that assures QoS for you?

This is a really excellent representation of the various tools and components that come up when we discuss DevOps. A handy reference for sure!

 

Thanks to the folks at Xebia Labs for this one.

 


Rightsizing Virtual Workloads

Posted by Matt Ray, Mar 23, 2015

Virtualization has brought an immense amount of flexibility to IT in terms of quickly provisioning workloads and allocating resources. Another great benefit is the ability to oversubscribe resources at the host level in order to fit more VMs per physical host. But this benefit comes with its own set of risks to VM performance at the compute layer.

As we often discuss at VMTurbo, these risks can be substantially mitigated by appropriately managing workloads from a placement, capacity, and sizing standpoint. Today I want to take a closer look at some of the performance risks associated with over provisioning workloads, specifically in terms of CPU and memory allocation.

 

Over Provisioning CPU

The most immediate risk to performance that I see in customer environments comes from over provisioning of vCPU. This risk stems from the way that hypervisors schedule CPU time once the host has become over allocated.

When a VM needs processor time, it must wait for the host to have enough physical CPU cores free; the number that must be available corresponds to the number of vCPUs assigned to the VM. This wait (CPU ready) time degrades performance not only for the VM experiencing it but for every other VM on the host.

Because of this phenomenon, administrators often end up trying to follow best-practice ratios for over allocation of CPU resources, although it is impossible to manage an environment on a rule of thumb. One of the best ways to reduce the amount of CPU ready time in an environment is actually to rightsize the VMs to the appropriate number of vCPUs. This has two benefits:

  1. The rightsized VM now needs fewer free physical cores before it can get processor time.
  2. The other VMs on the host now have fewer competing vCPUs.

Aside from the pure performance play, this type of rightsizing also helps to increase overall host density.
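
To make the numbers concrete, here is a minimal Python sketch of that kind of check. The per-VM figures, the 20-second real-time sampling interval, and the 5% per-vCPU threshold are illustrative assumptions, not product behaviour:

```python
# Minimal sketch: convert CPU ready summation (milliseconds) into a percentage
# and flag VMs that might benefit from fewer vCPUs.

SAMPLE_INTERVAL_S = 20        # assumed real-time stats interval
READY_THRESHOLD_PCT = 5.0     # assumed rule-of-thumb ceiling per vCPU

vms = [
    {"name": "app01", "vcpus": 8, "ready_ms": 4200, "avg_cpu_util_pct": 12},
    {"name": "db01",  "vcpus": 4, "ready_ms": 600,  "avg_cpu_util_pct": 65},
]

for vm in vms:
    ready_pct_total = vm["ready_ms"] / (SAMPLE_INTERVAL_S * 1000) * 100
    ready_pct_per_vcpu = ready_pct_total / vm["vcpus"]
    oversized = vm["avg_cpu_util_pct"] < 20 and vm["vcpus"] > 2
    verdict = ("candidate for vCPU rightsizing"
               if ready_pct_per_vcpu > READY_THRESHOLD_PCT or oversized
               else "looks OK")
    print(f"{vm['name']}: ready {ready_pct_per_vcpu:.1f}%/vCPU -> {verdict}")
```

A static threshold like this is exactly the rule-of-thumb approach described above; it illustrates the mechanics, not a way to manage the whole environment.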

 

Over Provisioning Memory

Performance degradation associated with over provisioning of memory is a little less clear cut. There is a small overhead associated with the extra memory assigned to a VM, but it is oftentimes a negligible hit to performance. The real performance hit lies in the risk that comes with over allocating resources and in the reduced ability to guarantee the performance of dynamic workloads.

The first problem we commonly see as administrators is ballooning. I want to start by pointing out that ballooning isn't ALWAYS a bad thing: the feature exists to keep the host from having to swap, and it is very beneficial in that regard.

The problem with ballooning shows up when you look at how fast the host can respond to changes in resource demand. The balloon driver takes time to inflate, so if a host is relying on ballooning to free memory for other VMs, you will see slowness beyond that of normal memory access. It also takes time to deflate, so if a VM's balloon is inflated and that VM suddenly needs more memory, it too will slow down.

Oftentimes, to prevent these problems, ballooning is turned off or reservations are set on the VMs. Those constraints introduce their own issues, the biggest being the potential for host-level swapping, which degrades performance for multiple VMs on the host.
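
If you want a quick read on whether ballooning (or worse, swapping) is already happening in a vSphere environment, a small Python sketch using pyVmomi along these lines can list the affected VMs. The vCenter host and credentials are placeholders, and this is only an illustration, not a substitute for managing the trade-offs described above:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Illustrative only: list VMs whose balloon driver is inflated or that are swapping.
# Host, user and password are placeholders.
context = ssl._create_unverified_context()  # lab use; verify certificates in production
si = SmartConnect(host="vcenter.example.local", user="readonly@vsphere.local",
                  pwd="changeme", sslContext=context)

try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        stats = vm.summary.quickStats
        if stats.balloonedMemory or stats.swappedMemory:
            print(f"{vm.name}: ballooned {stats.balloonedMemory} MB, "
                  f"swapped {stats.swappedMemory} MB")
finally:
    Disconnect(si)
```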

 

How VMTurbo helps

VMTurbo manages resource allocation across the entire environment, including sizing, placement, and capacity. Reclaiming resources can help mitigate the performance risks mentioned above, but managing reclamation independently of provisioning additional resources or placing workloads properly will only increase risk within the environment.

By continuously driving your environment to its healthiest state, VMTurbo is the only solution able to safely decrease overall risk within the virtualized environment.


Ride The Wave

Posted by david.fiore, Oct 19, 2014

I started in IT while I was in college, working in the computing lab. I would have done the job for free, just to have the access, but they actually paid me to do it, and more money than I had ever seen as a dishwasher.   I thought to myself, "this could lead to something...."

 

At that time our computer lab was a collection of Televideo terminals, IBM PCs, DEC Rainbows and a single Mac+.  And we had a hardware tech who spent the majority of his time inserting expansion cards into the various PCs.  You have to remember that PCs were pretty expensive then, so you would typically buy a computer that just met your immediate requirements.  Later, when you had more money, or a need arose, you might add RAM, upgrade your graphics card, or put in a bigger hard disk.  Adding expansion cards into PCs was not a simple thing to do in those days and you had to have a fair bit of understanding of the internal PC architecture in order to do it properly. Specifically, you had to understand 3 critical resources used by expansion cards: shared memory (which was the memory between 640K and 1M), Interrupts (IRQ), and Direct Memory Access (DMA) channels.   You had to understand how these resources were being used in the computer you were upgrading, and which ones the card needed.  You then had to configure the card (usually by means of jumper pins or DIP switches) to dovetail its resource consumption with what was available.

 

Today nobody except hardware designers has to know a thing about those resources or how they are allocated. There are many reasons for this; first, PCs are a lot cheaper, so oftentimes you just buy the computer with all the features you need and never bother with any expansion. But more importantly, even if you do have expansion slots, the underlying hardware is sophisticated enough to figure out how to allocate those resources correctly. And most 'peripheral expansion' today is via USB, which is also self-configuring.

 

Would anyone want to go back to the old days? Is it not self-evident that having the hardware configure itself automatically is a much better world than one in which we have to figure it out ourselves? Just imagine how much less would get done if every time you needed to connect new hardware to your computer you had to crack the case, consult the manuals for the new hardware and the old, and then fiddle with the settings.

 

There was a time, not so very long ago, when if you wanted cash you had to go to the bank. When it was open. Does the term "Banker's Hours" ring a bell? You would go to the bank, stand in line, and withdraw your cash after interacting with the teller. Today we have ATMs (Automated Teller Machines). The process of verifying your identity and bank balance and then dispensing cash has been completely automated. Could you even imagine life without them now? Would anyone want to go back to the days before ATMs?

 

There is a great story that has been floating around the net forever called The Ballad of Mel. It chronicles a programmer whose code was so meticulously optimized for the underlying hardware that it took the story's author two weeks to figure out how the program exited a loop.

 

The story might make one wistful for the days when people had such a deep understanding of the technology they worked with, but I ask: would we really want an army of Mels writing code, or would we rather use software tools that, at some expense in performance, allow other developers to easily understand what the code is doing and modify it if needed? Could we even imagine going back to a time when people wrote software that way?

 

I am sure there used to be an army of hardware technicians, tellers, and hex-level hardware programmers. As these changes came, many of them lost their jobs. Sure, there are still a few of each left, just as there are still a few elevator operators and even farriers. But the vast majority of people in those professions are no longer doing those tasks. Those jobs simply dried up. Of course new ones arose, and those who could adapt to the changes brought about by the new technology did well. But if you were a hardware tech or teller in the '80s, or a COBOL or Assembler programmer in the '70s, you had a choice to make: learn new technologies and adapt, become the last person doing what you do, or find yourself out of work. And you don't want to be the last person doing what you do.

 

Over the years of my career I have seen a lot of people who do not understand this dynamic. They become 'the guy' who does a particular thing, and they jealously protect their domain, not sharing the knowledge or doing anything to enable others to do what they do. They are fierce resisters of anything that diminishes their role as the unique provider of a particular service. And then I see people who realize that our industry is in a constant state of flux. These people realize that whatever they're doing today, a good part of it will be automated soon. The best ones try to implement that automation themselves. They see their value not in terms of a particular set of knowledge or skills, but in being people who help the business by bringing technology to bear on the problems of the day. To them, technology is not an end in itself, but a means to an end. Such people are always trying to learn new skills and new ways of doing things, even as they facilitate the automation and commoditization of what they're doing now. That is how to really add value!

 

Some people say that Amazon is destroying book selling.  In reply to this, Jeff Bezos said "Amazon is not what's happening to book selling, the future is what's happening to book selling".   And folks, the future is what is happening to the datacenter.  Today's virtualized infrastructures are growing exponentially in size, scope, and complexity.  Software-Driven Control is here now, and more and more companies are realizing that this is the only way to run these environments.   And while we are in transition to this way of doing things now, the day is going to come soon when nobody will be able to imagine going back to a world where the IT staff spends its time sizing and placing virtual machines.  That job will seem as quaint as a farrier.

 

So what is a virtualization engineer to do?  For what it is worth, my advice is to embrace change.  And when I say embrace, I don't mean like you embrace your obnoxious uncle when he comes over for Thanksgiving, I mean embrace like you embrace your dear ones: with love and affection. Do not fear the changes that are coming, because fear will simply paralyze you.  Change is coming, change is inevitable, so if you want to do well in IT for the long term, you must be constantly in motion, always experimenting, and always learning.  You must not find your identity or see your value in the particular task you are doing, but in your ability to apply whatever technology is available to solving the problems your organization faces. 

 

It may not be easy to predict which technology will ultimately take over, but you can be sure that whatever you are doing now will not last.  For those who adopt this perspective, the future is very bright.