Q&A: Interview with VMTurbo, Talking Software-Defined Revolution and their Unique Management Approach


 

This interview originally appeared on VMblog.com.


Whenever I get the opportunity to catch up with Shmuel Kliger, founder and president of VMTurbo, I take it.  I've been fortunate for many years now to be able to sit down and chat with him during VMworld and Citrix Synergy trade shows.  But with all the news coming out around VMTurbo lately, I thought it made sense to reach out to him directly, outside of these events, and ask him a couple of questions so I could get the inside scoop.

 

VMblog:  I always enjoy our conversations, and we share many of the same passions within this industry.  Can you explain to readers what the big problem is that VMTurbo helps to solve?

 

Shmuel Kliger: VMware created a revolution in the last decade starting with server virtualization. It then grew from server to storage and other parts of the IT stack. And in the last few years, the marketing people gave it a name: the software-defined data center.

 

The reality of this software-defined revolution is that once the IT silos of old were destroyed, once software could execute actions in the datacenter without human intervention, there was no going back to the old, siloed-IT approach of making decisions in isolation.

 

Managing this new world requires much larger thinking - beyond isolated actions that attempt to solve specific problems flagged after the fact. The software-defined revolution requires understanding the entire environment holistically, so that it can be driven toward a "Desired State".

 

This concept of "Desired State" is something you and I have discussed before: It's a dynamic equilibrium where workload demand is best satisfied by infrastructure supply. This equilibrium is a continuously fluctuating target in which application performance is assured while infrastructure utilization is maximized.

A Desired State - this dynamic equilibrium - is fundamentally a tradeoff across n dimensions of variables, none of which can be addressed in isolation:

  • between budget and cost,
  • between resiliency, performance and agility,
  • between application performance and infrastructure utilization,
  • between workload QoS and sweating assets,
  • between compute, storage and network bandwidth,
  • between compute, storage and endpoint latencies,
  • between infrastructure constraints and business constraints,
  • between compute and storage,
  • between CPU, memory, IO, network, ready queues, latency, etc.,
  • between application priorities
  • and among business priorities.

As you can imagine, these tradeoffs aren't simple. In fact, they're between huge, conflicting forces that pull the data center in different directions - in real time, all the time.
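
To make that tradeoff concrete, here is a minimal, hypothetical sketch - illustrative only, and not VMTurbo's actual engine - that treats the Desired State as the lowest-penalty balance between performance risk and idle capacity. Every name, threshold and weight in it is an assumption invented for this example.

    # Hypothetical sketch: score candidate placements of workloads onto hosts,
    # penalizing both congestion (which threatens application QoS) and idle
    # capacity (which wastes the asset). Lower score = closer to Desired State.
    from itertools import product

    hosts = {"host-a": {"cpu": 16, "mem": 64}, "host-b": {"cpu": 8, "mem": 32}}
    workloads = {"app-1": {"cpu": 6, "mem": 24}, "app-2": {"cpu": 4, "mem": 20}}

    def score(assignment):
        penalty = 0.0
        for host, caps in hosts.items():
            used = {r: sum(workloads[w][r] for w, h in assignment.items() if h == host)
                    for r in caps}
            for r, cap in caps.items():
                util = used[r] / cap
                if util > 0.8:                  # congestion risk to workload QoS
                    penalty += (util - 0.8) * 10
                penalty += 1.0 - util           # idle capacity, i.e. un-sweated assets
        return penalty

    # Enumerate every placement (feasible only in a toy example) and keep the
    # one closest to the Desired State at this moment in time.
    best = min((dict(zip(workloads, combo))
                for combo in product(hosts, repeat=len(workloads))),
               key=score)
    print(best, round(score(best), 2))

In a real environment the search is continuous rather than exhaustive, and all of the dimensions listed above - network, latency, business constraints and the rest - enter the same tradeoff.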

 

If you think about it, a datacenter is not so different from a symphony orchestra. Unless every instrument finds its proper equilibrium with every other - in real time, with every beat - the orchestra will never achieve harmony worthy of the composer's art. That harmony is the Desired State of that orchestra at that time, and it can be fleeting.

 

That's why orchestra conductors are the rock stars of classical music - they create the harmony that breathes life into some of humankind's greatest art. The tradeoffs they must negotiate in creating that harmony are not so dissimilar from those you must negotiate in your datacenter.

 

These tradeoffs can't be solved by a collection of ad-hoc point tools working in isolation in their own silos of sizing, placement or capacity. Once the physical silos of IT were destroyed by the software-defined revolution, the old silos of IT management needed to follow. But in many ways the silos of traditional IT management have been the most resistant to the revolution VMware created.

 

The only way to solve the tradeoffs the software-defined world demands is with a unified, software-driven control that can see the forest for the trees. To do that, its data model must provide a common abstraction across every layer of the IT stack, from the application all the way down to the fabric, hiding the messy details of any single tree in the managed environment while still exposing enough detail to control and maintain the environment as a whole in a healthy state.

 

Only then can some mechanism continuously derive the Desired State from the millions of possible states in the average datacenter, and drive the actions necessary to control that environment in that state of health.
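
As a rough illustration of what such a common abstraction might look like - a hypothetical sketch, not VMTurbo's actual data model - every entity from application down to host can expose the same shape: the resources it provides and the resources it consumes from a supplier, so a single control mechanism can reason about all layers uniformly. All entity names, numbers and thresholds below are assumptions made for the example.

    # Hypothetical sketch of a common abstraction across stack layers; all
    # names, capacities and thresholds here are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Entity:
        name: str
        layer: str                                    # "app", "vm", "host", ...
        provides: dict = field(default_factory=dict)  # resource -> capacity
        consumes: dict = field(default_factory=dict)  # resource -> demand
        supplier: object = None                       # entity this one consumes from

    def utilization(entity, entities):
        # Fraction of each provided resource consumed by this entity's consumers.
        demand = {r: 0.0 for r in entity.provides}
        for e in entities:
            if e.supplier is entity:
                for r, d in e.consumes.items():
                    if r in demand:
                        demand[r] += d
        return {r: demand[r] / cap for r, cap in entity.provides.items()}

    # Toy environment: one host supplying two VMs, each supplying an application.
    host = Entity("host-1", "host", provides={"cpu": 32, "mem": 128})
    vm1 = Entity("vm-1", "vm", provides={"vcpu": 8}, consumes={"cpu": 20, "mem": 64}, supplier=host)
    vm2 = Entity("vm-2", "vm", provides={"vcpu": 8}, consumes={"cpu": 16, "mem": 80}, supplier=host)
    apps = [Entity("app-1", "app", consumes={"vcpu": 7}, supplier=vm1),
            Entity("app-2", "app", consumes={"vcpu": 3}, supplier=vm2)]
    entities = [host, vm1, vm2] + apps

    # One pass of a control decision: flag anything outside a healthy band so
    # that actions (move, resize, provision) can drive it back toward health.
    for e in entities:
        for resource, util in utilization(e, entities).items():
            if util > 0.8:
                print(f"{e.layer} {e.name}: {resource} at {util:.0%} - take action")
            elif util < 0.3:
                print(f"{e.layer} {e.name}: {resource} at {util:.0%} - underutilized")

Because every layer speaks the same language of supply and demand, the decision logic stays the same whether the bottleneck is a CPU, a datastore or a fabric link.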

 

VMblog:  And if you would, talk about what's unique to the VMTurbo approach.

 

Read the rest of David Marshall's interview with Shmuel Kliger at VMblog.com.
