Is there a way to calculate the largest VM (CPU/RAM) that can be created in a cluster without causing overprovisioning?
In a way, I wonder about this question... The idea of VMTurbo is that you create the VM that you want, and deploy it -- let VMTurbo figure out where to put the VM, and let it move other VMs around to make room for the new VM if necessary. So theoretically, the VM can be just slightly smaller than the host (to account for overhead of hosting the VM). You should look into Deploy, by the way, where you can set up a reservation for a VM of the size you want, and then let VMTurbo place it for you.
But that doesn't specifically answer your question. (I hate forum answers that say, "Why don't you do something else?")
Planning should help... Look at the Cluster Capacity dashboard. It's kind of an easy way to do this, because the product automatically sets up and runs cluster capacity plans.
This dashboard calculates the "headroom" for each cluster. The calculation is done by running a plan on the cluster to see how many VMs of a certain template can "fit" in the cluster with the current supply of resources (current hardware). So here the first cluster has headroom for 26 VMs that have the allocated capacity defined in the template VirtualMachine::Microsoft_SQL2008-medium:
Then you can do two things... You can see what the template definition is, and you can also define your own templates or use discovered templates to run on all clusters, or on specific clusters. You can multiply the given template capacity by the headroom to see about how big the largest VM can be. You could even then create a template to match that giant VM, then wait overnight to refresh the Cluster Capacity dashboard, and see whether there's any headroom for your giant VM.
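That multiplication is just back-of-the-envelope arithmetic, so here's a quick sketch of it. The template sizes and headroom below are hypothetical examples, not values from any real environment:

```python
# Rough headroom arithmetic: multiply the per-template capacity by the
# number of template-sized VMs that fit. All numbers below are
# hypothetical examples, not values from a real cluster.

template_cpu_mhz = 2_600   # CPU capacity allocated per template VM
template_mem_gb = 16       # memory allocated per template VM
headroom_vms = 26          # headroom reported by the Cluster Capacity plan

# Upper bound on the "giant VM" the cluster could absorb, ignoring
# per-VM overhead and fragmentation across hosts:
max_cpu_mhz = template_cpu_mhz * headroom_vms
max_mem_gb = template_mem_gb * headroom_vms

print(f"~{max_cpu_mhz} MHz CPU, ~{max_mem_gb} GB RAM")
```

Keep in mind this is only an upper bound -- a single VM still has to fit on one physical host, as noted above.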
You set the template to use in the Policy view, like so:
Here's what the template catalog looks like, where you can see the capacity that's set for the given VM template -- You could set this or any other defined template to be used on the Cluster Capacity plans:
Thank you very much for explaining that to me. I saw it in a few places, but didn't fully understand it.
I created a custom template with 8 CPUs and 64 GB of RAM, but the headroom doesn't change. I changed it back to 1 CPU and 4 GB, and it's still the same. When I hover over the cluster, I can see that it's basing the calculation on the correct template. Any thoughts?
How long did you give it to register a change? Note that the Cluster Capacity dashboard is based on plans that the product runs nightly. If you're not sure what plans are and how they work, you should read up on them in the docs.
Also, you can find out about the Cluster Capacity dashboard at:
Anyway, depending on the size of your environment, and how many clusters you have, you can see changes overnight. The product runs up to 10 plans a night -- that's 10 clusters at a time. So the soonest you would see changes would be the next day, after you change the template (and click APPLY for the setting change in the Policy view).
Now, you *can* force the product to run cluster capacity plans for all your clusters on command. There's an API call to accomplish that -- You should contact Support if you need to do this. But unless it's a burning need to know today, I'd suggest setting up your cluster capacity templates, and letting it run its own course.
An immediate way to see what your cluster can manage would be to run a plan yourself, with that mega-template you created. Go to the plan view, and start a new plan. Then set the scope to the cluster you're interested in:
Then add workload -- add your mega-template.
Then you can just run the plan and see if it provisions a new host. Or, you can use an advanced setting to disable host provisioning, and see whether the plan can place the new VM on your current hardware. Note that this calculation covers the whole cluster -- it could easily find that you need to move some VMs to different hosts to make room for this mega-VM... That's part of what the product is all about. I would think that if the plan can add your VM to the cluster, then you could use the Deploy view to actually deploy that VM to the cluster, with the calculated placement.
Again, you should look at the docs for more info on how to use plans. (I would say that... I'm the tech writer!)
So this doesn't answer your initial question -- How big of a VM can I hope to place on the cluster? But it does answer the question -- Can I place this specific VM on the cluster?
Thank you for your response. I have read that it should run every night, but it did not. When I hover over it, it seems the last time it ran was 3/29. When I try to run it manually with the API, I just get a blank screen and it doesn't seem to do anything.
How many clusters do you have in your environment? That could influence whether it could have completed overnight.
So I presume you got the correct API call to force it to run all the cluster plans on command. Did you wait long enough for it to actually run them all? Or could it be that the plan for the cluster you want hasn't run yet, or is still running? You can check whether plans are currently running in your environment via the API... Open the API Guide (in the same menu as the Help), and navigate to Rest Resources > markets. Expand the first GET button for markets. Make sure you put your credentials in the USER fields at the top of the doc, and then click Try It. If no plans are currently running, you should see only two records of market data.
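If you'd rather script that check than use the API Guide page, something like the sketch below should get you started. Note that the endpoint path (/vmturbo/api/markets), the basic-auth scheme, and the shape of the response are assumptions based on the classic REST API -- confirm all of them against your version's API Guide:

```python
# Sketch: check whether any plans are running by fetching the markets
# list. The /vmturbo/api/markets path and the basic-auth scheme are
# assumptions -- confirm them in your appliance's API Guide.
import base64
import urllib.request

def markets_request(host, user, password):
    """Build an authenticated GET request for the markets resource."""
    url = f"https://{host}/vmturbo/api/markets"
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

def plans_running(markets):
    """With no plans running you should see only the two baseline
    market records; anything beyond that is a plan market."""
    return len(markets) > 2

# Example against a hypothetical parsed response with only the two
# baseline market records (i.e., no plans currently running):
sample = [{"name": "Market"}, {"name": "Market_Default"}]
print(plans_running(sample))  # -> False
```

To actually fire the request you would pass the result of markets_request() to urllib.request.urlopen() and parse the response before handing it to plans_running().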
Also, exactly what version of the product are you running?
It is 4.7. We are in the process of upgrading it.
I don't see the API Guide. Is it somewhere else in this version?
I think it was introduced in 5.0... If you run the following URL you should see the markets information:
Of course, you need to give credentials and the correct IP address...
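For reference, in the classic REST API the markets resource typically lived at a path shaped like the one below. Both the IP address and the path here are placeholder assumptions -- check the docs for your exact version:

```python
# Assumed URL shape for the markets resource in the classic REST API.
# The host/IP and the /vmturbo/api/markets path are both placeholders.
host = "10.10.10.10"  # your appliance's IP address
url = f"https://{host}/vmturbo/api/markets"
print(url)
```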