anson.mccook recently wrote a great blog post on one of the new capabilities we introduced in release 4.7. Below is an excerpt.
When talking with customers about their onboarding strategies, there are typically two scenarios for bringing new workloads into the datacenter that you have to be prepared for: the workloads you know about (and are lucky enough to plan for), and the requests that pop up at the last minute. In a world where both exist, it is a challenge to accommodate all of your guests.
I was recently talking with an organization about their upcoming VDI project and their plans to support around 1,000 users. They will be deploying 1,000 workstations over the next six months and are asking themselves whether they have enough resources. Because they will be rolling out the project in waves, they have some lead time for ensuring there is enough hardware to support each stage.
The biggest problem this customer faced was accommodating current workload utilization while also supporting the next wave of VDI instances that had yet to be deployed. If the current workload started demanding more resources, their plan to add another 150 instances could not be met with existing capacity. And of course, they only discovered this bottleneck once they had started deploying the next batch, resulting in a last-minute hardware purchase to squeeze it all in.
Ideally, we’d like to know when the current workload has grown to the point where we can’t support the next batch of workloads. Without hurting our brains every day crunching numbers, we need to know, in real time, when and how much capacity is needed to support both current and future demand.
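The headroom question above can be sketched as simple arithmetic: subtract current usage (plus a safety buffer) from total capacity, and divide what remains by the per-instance demand. The function name and all of the numbers below are illustrative assumptions, not figures from the customer's environment or from the product's API:

```python
# Minimal headroom check: can the cluster absorb the next wave of VDI instances?
# All values (capacity, usage, per-instance demand) are hypothetical examples.

def instances_supportable(total_capacity, current_usage, per_instance_demand,
                          headroom_pct=0.10):
    """Return how many new instances fit, after reserving a safety buffer."""
    usable = total_capacity * (1 - headroom_pct)  # keep a buffer for growth spikes
    free = max(0.0, usable - current_usage)       # capacity left for new workloads
    return int(free // per_instance_demand)

# Example: 2000 GB of RAM, 1200 GB in use, 4 GB per VDI instance, 10% buffer.
fits = instances_supportable(2000, 1200, 4)
print(fits)            # → 150
print(fits >= 150)     # → True: the next wave of 150 instances just fits
```

The point of the sketch is the fragility it exposes: if current usage creeps up by even a few hundred gigabytes before the next wave, the answer flips, which is exactly why this needs to be monitored continuously rather than computed once.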