
Product Forum


The Turbonomic Early Access program offers customers and partners access to pre-generally-available (GA) versions of Turbonomic to evaluate and test in a lab environment.


Early Access provides a great opportunity to influence development through direct contact with our Product Management and Engineering teams. In addition, all participants receive a free spot in an ACE Training class and plenty of swag.


Our Early Access timelines coincide with major releases (e.g., 6.0, 6.1, 6.2). Participants typically deploy a separate (non-production) Turbonomic instance and spend a couple of hours reviewing it and providing feedback. Of course, we will take as much time and feedback as you're willing to spare.


Turbonomic 6.3 Early Access is now live! 


If you're already part of the EA program, learn more here: 6.3 Early Access


If you’re interested in participating in Early Access, please send an email to


Stay Green! 



Could somebody please clear up which policy wins when I have more than one policy applied to a VM? For example, I have just created a group containing two VMs that require a vCPU increase. I assigned a policy to this group that automates Resize Up, but it has been overruled by a policy that sets Resize Up to Recommend. That policy is assigned at 'Migrated Policy: Settings::Virtual Machines By PM Cluster'.


Also, I have enabled non-disruptive mode in the 'Resize Up' policy discussed above. Does this cause the VMs to be non-disruptive for all actions, or just the Resize Up action in that specific policy? If it's all actions, how do I apply non-disruptive mode to a specific action such as Resize Up?



In certain situations you may want to exclude EC2 instance types or Azure Virtual Machine types from being recommended by Turbonomic, for either real-time scaling actions or migration plans. This may be because an application team prefers to run only on a certain size, because of an operational policy to use only a certain family, or as a way to reduce cost by disallowing expensive types.


Keep in mind that Turbonomic makes the best decisions to match your workload demand with the available templates across families and clouds, so whenever you define these types of constraints you may be limiting the value Turbo can add in your environment.


That being said, here is the approach:


To exclude globally, click on Settings > Policies > Default > Virtual Machine Default > scroll down to Scaling Constraint and click ADD > select Excluded Templates (figure 1).


Note that in Turbonomic 6.1 you can also exclude templates for Databases on AWS or Azure.


Figure 1: Select Exclude Templates from Scaling Constraints  


From here you can add templates to exclude by clicking on Add Template and then selecting the templates you want to exclude (figure 2).

Figure 2: Click Add Template and select the templates you want to exclude 


To exclude for a group (for example, if you want to exclude templates for a certain Region or for a set of applications defined by tags), click on Settings > Automation Policies > + Automation Policy (top right) > Virtual Machine > Scope (enter group name) > + Scaling Constraints > Add Scaling Constraint > Excluded Templates > choose the templates you'd like to exclude (figure 3).


Figure 3: Excluding Templates for a Group 


That's it. The policy now applies to both real-time scaling actions and migration plans. It will also let you know if certain workloads are out of compliance and running on templates you've excluded; in that case you will see an action to move to the best template that provides the workload with the resources it needs while minimizing your costs.

Turbonomic can automatically suspend unused virtual machines in an AWS or Azure environment. This can significantly reduce monthly bills and eliminate the need to chase down development teams to clean up their environments. 


Here is an example of how to enable this capability in your environment with Turbonomic version 6.1. 


Navigate to Settings > Groups. 


Create a new application group and select the apps to include. See example below. 

Now you need to set an Automation Policy for the application group you just created. This can be found under Settings > Policies by selecting Automation Policy.


Select Application under policy type and define the following:

  • Scope to the app group you defined ('auto suspend group' in my example above) 
  • Set the schedule, e.g. daily 5 pm to 11 pm 
  • Set Application Priority to Normal 


See example below. 


Finally, you need to set the Minimum Sustained Utilization and ensure the underlying VMs for these application workloads have the right automation policy in place to stop and start. (For a proof of concept you may want to first set this to Recommend or Manual before moving to full automation.)


To set the VM automation policy, navigate to Settings > Policies and select Automation Policy.


Select Virtual Machines under policy type and define the following:

  • Scope to the underlying VM group for the apps 
  • Set the Minimum Sustained Utilization to a current and historical vCPU utilization value below which you'd like the VM to suspend 
  • Set Suspend under Action Automation to Automated, Manual or Recommended 
  • Set Start under Action Automation to Automated, Manual or Recommended 


See examples below. 



That's it. Turbonomic will identify apps that are not consuming resources during the time window you define and drive suspend actions on the underlying VMs. And when the policy time expires, Turbonomic will turn those VMs back on.

Turbonomic supports adding OpenStack as a cloud target. The user account used to add the OpenStack target should have the admin role for a particular project; a project is a requirement when adding the target. The admin role is required because Turbonomic needs access to the underlying physical infrastructure.


Here is an example of a user, Turbo_User, who has the admin role on the project demo.
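For reference, a role assignment like this can be created and checked with the OpenStack CLI. Below is a hedged sketch using the Turbo_User and demo names from the example above; run it with admin credentials sourced into your environment:

```shell
# Grant the admin role to Turbo_User on the demo project
openstack role add --project demo --user Turbo_User admin

# Verify the assignment before adding the target in Turbonomic
openstack role assignment list --user Turbo_User --project demo --names
```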


These instructions are for when you are extending the existing hard drive and NOT adding a new hard drive. 

1. Extend hard drive space in your hypervisor (vCenter shown below):

2. To avoid restarting the VM, run the following command on your Turbonomic instance:

3. Look at your current disk partition table

4. Next, create the physical volume and set it to the Linux LVM type

5. Now we will update the partition table to include sda3

6. Next, we run the command to initialize the partition so it can be used

7. This command will add the physical volume to the volume group /dev/turbo

8. We can check that the physical volume /dev/sda3 is now part of /dev/turbo and see the total amount of free PE available


9. Now we will extend the size of the logical volume by the amount of Free PE we found in the last step (replace the number with what you find and the volume with the one you want to extend, ex: /dev/turbo/var_log | /dev/turbo/var_lib_mysql)

10. The last step for expanding the partition is to expand the XFS filesystem (replace with the volume you extended, ex: /dev/turbo/var_log | /dev/turbo/var_lib_mysql)

11. Lastly, we are just confirming the partition has been extended
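The command blocks for the steps above did not survive in this post, so here is a hedged reconstruction of the typical sequence on a Turbonomic appliance. It assumes the expanded disk is /dev/sda, the new partition is /dev/sda3, and the volume group is /dev/turbo; substitute the device names, Free PE count, and logical volume path from your own output:

```shell
# 2. Rescan the disk so the larger size is seen without rebooting the VM
echo 1 > /sys/class/block/sda/device/rescan

# 3. Look at the current partition table
fdisk -l /dev/sda

# 4. Create the new partition and set it to the Linux LVM type (8e)
fdisk /dev/sda        # n (new), p (primary), accept defaults, then t, 8e, w

# 5. Update the kernel's partition table to include sda3
partprobe /dev/sda

# 6. Initialize the partition so it can be used by LVM
pvcreate /dev/sda3

# 7. Add the physical volume to the volume group
vgextend turbo /dev/sda3

# 8. Check that /dev/sda3 is part of the group and note the Free PE count
vgdisplay turbo
pvdisplay /dev/sda3

# 9. Extend the logical volume by the Free PE found above (replace 1000
#    with your Free PE count, and the volume with the one you want to
#    extend, e.g. /dev/turbo/var_log or /dev/turbo/var_lib_mysql)
lvextend -l +1000 /dev/turbo/var_log

# 10. Grow the XFS filesystem to fill the extended volume
xfs_growfs /dev/turbo/var_log

# 11. Confirm the partition has been extended
df -h
```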

Are you seeing "Provision New Physical Machine" actions and don't understand why?

We are going to look at a specific host's details to diagnose these actions.

As we can see in the screenshot, memory does not look over-utilized, although the stated reason to provision is "Critical Mem congestion." To investigate further we need to expand the "Resources" panel.

The percentage shown under "Utilization %" is based on the full capacity of the host; however, many customers have High Availability (HA) or other settings that reduce the capacity Turbonomic can use. You can find the capacity Turbonomic actually uses under "Effective Capacity".

To find the actual Utilization % (Effective Utilization %) that is used out of the available capacity you need to do the following:

Effective Utilization % = ( Used / Effective Capacity ) * 100

Effective Utilization % = (149203328 / 201153952) * 100

Effective Utilization % = 0.7417 * 100

Effective Utilization % = 74.17

Since 74.17% is greater than 70%, which is approximately what the default desired state tries to keep resources under, a Provision New Physical Machine action is generated.
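The arithmetic above can be reproduced from a shell with awk. The Used and Effective Capacity values below are the ones from this example's screenshot; plug in your own host's numbers:

```shell
# Used and Effective Capacity values taken from the example above
used=149203328
effective=201153952

awk -v u="$used" -v c="$effective" \
  'BEGIN { printf "Effective Utilization %% = %.2f\n", (u / c) * 100 }'
# prints: Effective Utilization % = 74.17
```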



1. SSH into your Turbonomic appliance

  The username is: root
  The default password is: vmturbo


2. Next, Run the following commands:

     cd /tmp

     openssl req -out vmturbo.csr -new -newkey rsa:2048 -nodes -keyout vmturbo.key

Fill out the appropriate information.
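If you prefer to skip the interactive prompts, the same request can be generated in one shot with openssl's -subj flag. Every subject field below is a placeholder; substitute your own organization details and the appliance's hostname as the CN:

```shell
cd /tmp
# Generate the key and CSR non-interactively; all -subj values are
# placeholders to replace with your own details
openssl req -out vmturbo.csr -new -newkey rsa:2048 -nodes -keyout vmturbo.key \
  -subj "/C=US/ST=MA/L=Boston/O=Example Corp/OU=IT/CN=turbonomic.example.com"

# Confirm the CSR is well formed
openssl req -in vmturbo.csr -noout -verify -subject
```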


3. Use SCP to copy the .csr file from /tmp on the Turbonomic appliance to your local computer


4. Open the CSR file with a text editor and copy the text into the request text box on your CA. From the internal CA (Windows CA) go to 'Request cert' -> Advanced -> base-64-encoded -> 'Template Used' = Web Server (or whatever custom template you may have)


5. Make a new directory called C:\cert\ and download the certificate chain in Base-64; call it Turbonomic.p7b


6. Right-click Turbonomic.p7b, click Open, and navigate to the Certificates folder. Starting with your root cert, right-click the cert and click All Tasks -> Export; export it in Base-64 to the C:\cert directory and call it root.cer. Repeat for your intermediate CA cert if you have one, and finally for your Turbonomic cert. Call them inter.cer and turbo.cer to make it easy.


7. At this point you should have 3 or 4 files (4 if you have an intermediate CA cert) and you need to chain them together



     C:\cert\turbo.cer

     C:\cert\inter.cer (only if you have an intermediate CA)

     C:\cert\root.cer


Open a command prompt from C:\Cert and type these commands:


     more turbo.cer >> turbonomic.cer

     more inter.cer >> turbonomic.cer (only if you have an intermediate ca)

     more root.cer >>  turbonomic.cer


Now you have a turbonomic.cer that has all of the certs chained together in Base-64.


8. Back in your Turbonomic SCP session:

          upload C:\cert\turbonomic.cer to /etc/ssl/certs


9. Back in your Turbonomic SSH session:

          cd /etc/ssl/certs


10. Convert turbonomic.cer to pem:

          openssl x509 -in turbonomic.cer -out turbonomic.pem




11. Stop the apache2 service


         service apache2 stop



Stop the httpd service


         service httpd stop


12. Back up the old certs and keys: (BUT YOU TOOK A SNAPSHOT BEFORE YOU STARTED!)

      Copy the existing /etc/apache2/ssl.crt/server.crt to server-old.crt with the command below

           cp /etc/apache2/ssl.crt/server.crt /etc/apache2/ssl.crt/server-old.crt

      Copy the existing /etc/apache2/ssl.key/server.key to server-old.key with the command below

          cp /etc/apache2/ssl.key/server.key /etc/apache2/ssl.key/server-old.key


13. Copy the turbonomic.pem file to /etc/apache2/ssl.crt/ and call it server.crt

          cp turbonomic.pem /etc/apache2/ssl.crt/server.crt


14. Move /tmp/vmturbo.key (the file from step 2) into /etc/apache2/ssl.key/ and name it server.key

           mv /tmp/vmturbo.key  /etc/apache2/ssl.key/server.key




15. Start the apache2 service


         service apache2 start



Start the httpd service


         service httpd start

First, a big shout-out to Umar for his help solving our scheduling problem! Once again we received a quick response and resolution from Turbonomic's tech support.


We are big fans of Turbonomic's automation and learned a while back that you need to disable automation during your backup window. Recently we noticed some VMs were moving during our "disabled" time. It turned out to be a problem with setting options at both the top level and the cluster level, along with how we originally configured our schedules.


With Umar's help we were able to streamline our configuration.


Here are some things to consider when you are working with the schedule settings:

     When you need to set a time when moves and changes should be disabled, such as during your backups:

          Configure the items you want to automate at the top level (like "Virtual Machines by PM Cluster") to "Manual". Then

          Create a schedule that sets this to "Automate" outside of your backup window.

     We then disabled moves and resizing for a couple of clusters by setting them to disabled at the cluster level. This is much easier than changing rules for 34 clusters!


     When you set the time in the schedule, the change doesn't take effect until that time. For example, if you configure an action for "Automate" from 6 AM to midnight at 3 PM in the afternoon, the rule is not processed until midnight and then 6 AM the next morning.