
The Turbonomic Early Access program offers customers and partners access to pre-general-availability (GA) versions of Turbonomic to evaluate and test in a lab environment.


Early Access provides a great opportunity to influence development through direct contact with our Product Management and Engineering teams. In addition, all participants receive a free spot in an ACE Training class and plenty of swag.


Our Early Access timelines coincide with major releases (e.g. 6.2, 6.3, 6.4). Participants typically deploy a separate (non-production) Turbonomic instance and spend a couple of hours reviewing and providing feedback on new capabilities added in the major release. 


Of course, we will take as much time and feedback as you're willing to spare.


Turbonomic 6.4 Early Access is now live! 


If you're already part of the EA program, learn more here: 6.4 Early Access


If you’re interested in participating in Early Access, please send an email to


Stay Green! 


This post details the installation and operation of the P2V import utility recently created to aid clients with both bulk template importing, and physical to virtual migration simulations. A previous script called plan.html has been distributed with Turbonomic, and used by some clients. This utility was commissioned to fully replace the previous CGI script while keeping close parity with the new UI design. In addition to providing the p2v planning functionality, it adds support for bulk importing templates into Turbonomic from a CSV file.



Installation of the utility requires copying a single file to each instance upon which a client wishes to run P2V plans, or bulk import templates. The utility is JavaScript based, and operates fully client-side using API 2 calls.


To install, the attached import.html file must be copied to the Turbonomic instance, and placed in the /srv/www/htdocs/ folder. This operation must be performed on each Turbonomic instance that will be used. 


For example, from a Linux or OS X machine, you might copy the file to the target instance as follows:

scp ./import.html root@



Accessing the UI extension is simple. Using the same host as before, navigate to the new import.html page on that host as follows:


If not already logged in, the user will be prompted for their Turbonomic credentials as usual. Once logged in, the user will be redirected to the import utility.


The import utility contains two modes: Template Import & P2V Plan


Bulk Import Mode

  1. File selection dialog - The file must be a simple CSV file. See the attached Template Values Documentation file for details on the format
  2. Mode selection - Switches between bulk import and P2V planning modes
  3. Submit button - Imports templates from the provided CSV file
  4. Close button - Exits back to the main UI


P2V Plan Mode

  1. File selection dialog - The file must be a simple CSV file. See the attached Template Values Documentation file for details on the format
  2. Mode selection - Switches between bulk import and P2V planning modes
  3. Market - On 6.0.x this will show only the Realtime Market, while on 6.1 instances, you may apply the P2V to a previously run plan to perform a plan-on-plan simulation
  4. Submit button - Imports the templates and runs the simulation; the plan opens in a new tab, and once the plan has copied them, the temporary templates are removed
  5. Close button - Exits back to the main UI



File format: Both modes of the utility use a comma-separated values (CSV) input file. Excel XLS or XLSX files, and CSV files saved as XLS, are not supported and will generate errors if used. The file must be a basic CSV file.


Supported Fields: The attached Template Values Documentation.xlsx is the template field documentation, providing details on which fields are valid for which templates, required fields, and default field values. The sample.csv is an example file you can refer to.


Creating the CSV file:

When creating the CSV file, the first line is the field header and must contain all the field names you will be using, comma-separated. The field names are not case-sensitive, and the order in which they appear in the header does not matter. Each line after the first represents a single template entry, and the order of its columns must match the header. As shown in the sample.csv, fields that are not valid for a template (e.g. CPU for a storage template) will be ignored. This allows you to mix and match different template types within one file.


Control Fields:

In addition to the template fields, there are two special control fields:

  • class - this field indicates which template type the row is for. Template names are listed in the Template Values Documentation file, and the following short-hand template codes may also be used:
    • vm for virtual machine
    • pm for physical machine
    • st for storage
    • ct for container
  • qty - this field is required in P2V plan mode, and is ignored by the bulk template import mode. The field indicates how many copies of the template are to be used in the plan, and should be 1 or greater. 
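Putting the rules above together, here is a minimal illustrative CSV. The class and qty columns are the control fields just described; the remaining column names (name, numcpus, memsize) are placeholders for this sketch — consult the attached Template Values Documentation for the real field names.

```shell
# Illustrative CSV for the import utility. "class" and "qty" are the
# control fields; the other column names are placeholders -- check the
# Template Values Documentation for the real ones.
cat > sample.csv <<'EOF'
class,qty,name,numcpus,memsize
vm,2,web-server-template,4,16384
pm,1,rack-host-template,16,262144
st,1,datastore-template,,
EOF

# Columns that do not apply to a template type (e.g. numcpus for storage)
# are simply ignored, so mixed template types can share one file.
head -1 sample.csv
# → class,qty,name,numcpus,memsize
```

Note the storage row leaves the non-applicable columns empty; the utility ignores them, so one file can mix VM, host, and storage templates.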


Known Issues

There is a single known issue with the removal of P2V plan templates, which may affect very large instances where copying the plan takes longer than one minute. In that case, some templates may be removed from the system by the utility before they are copied into the plan. This will be addressed in a future release.



Could somebody please clear up which policy wins when I have more than one policy applied to a VM? For example, I have just created a group containing two VMs that require a vCPU increase. I assigned a policy to this group that automates Resize Up; however, it has been overruled by a policy that sets Resize Up to Recommend. That policy is assigned at 'Migrated Policy: Settings::Virtual Machines By PM Cluster'.


Also, I have enabled non-disruptive mode in the 'Resize Up' policy discussed above. Does this make the VMs non-disruptive for all actions, or just for the Resize Up action in that specific policy? If the former, how do I apply non-disruptive mode to a specific action such as Resize Up?



In certain situations you may want to exclude EC2 instance or Azure Virtual Machine types from being recommended by Turbonomic, for either real-time scaling actions or migration plans. This may be because an application team prefers to run only on a certain size, an operational policy requires a certain family, or you want to reduce cost by disallowing expensive instance types.


Keep in mind that Turbonomic makes the best decisions to match your workload demand with the available templates across families and clouds, so whenever you define these kinds of constraints you may be limiting the value Turbonomic can add in your environment.


That being said, here is the approach.


To exclude templates for a group of VMs, click Settings > Policies > Automation Policy > Virtual Machine.

To exclude templates for a group, click Settings > Automation Policies > + Automation Policy (top right) > Virtual Machine.


From here you can add Templates to exclude by defining the:


  • Scope (enter the group name(s))
  • Excluded templates: under Scaling Constraints > Add Scaling Constraint > Excluded Templates, choose the templates you'd like to exclude


Below is an example excluding the C3 family for all AWS VMs. 




That's it; the policy now applies to real-time scaling actions as well as migration and cloud optimization plans. The policy will take effect within the next discovery cycle, typically 10 minutes.


Turbo will also let you know if certain workloads are out of compliance and running on templates you've excluded. In that case you will see an action to move to the best template that provides the workload with the resources it needs while minimizing your costs.


Note that in Turbonomic 6.1 you can also exclude templates for Databases on AWS or Azure. You can no longer set this policy as a default policy, i.e. you have to select one or more groups.

Turbonomic can automatically suspend unused virtual machines in an AWS or Azure environment. This can significantly reduce monthly bills and eliminate the need to chase down development teams to clean up their environments. 


Note: in 6.4 we improved our scheduling functionality; see Automating Stop and Start AWS and Azure VMs on a Schedule in 6.4


Here is an example of how to enable this capability in your environment with Turbonomic version 6.1. 


Navigate to Settings > Groups. 


Create a new application group and select the apps to include. See example below. 


Now you need to set an Automation Policy for the application group you just created. This can be found under Settings > Policies by selecting Automation Policy.


Select Application under policy type and define the following:

  • Scope to the app group you defined ('auto suspend group' in my example above)
  • Set the schedule, e.g. daily 5 pm to 11 pm 
  • Set Application Priority to Normal 
  • Note, the schedule should only be attached to the Application Policy 


See example below. 


Finally, you need to set the Minimum Sustained Utilization and ensure the underlying VMs for these application workloads have the right automation policy in place to stop and start. (For a proof of concept you may want to set this to Recommend or Manual before moving to full automation.)


To set the VM automation policy, navigate to Settings > Policies and select Automation Policy.


Select Virtual Machines under policy type and define the following:

  • Scope to the underlying VM group for the apps 
  • Set the Minimum Sustained Utilization to a current and historical VCPU utilization value below which you'd like the VM to suspend 
  • Set Suspend under Action Automation to Automated, Manual or Recommended 
  • Set Start under Action Automation to Automated, Manual or Recommended 
  • Note: when the Application Policy schedule expires (the app goes from Normal back to Mission Critical), Turbonomic will automate the start action based on the Action Automation level you specify


See examples below. 



That's it. Turbonomic will identify apps that are not consuming resources during the time window you define and drive suspend actions on the underlying VMs. And when the policy time expires Turbonomic will turn those VMs back on. 

Turbonomic supports adding OpenStack as a cloud target. The user account used to add the target should have an admin role for a particular project; a project is required when adding the OpenStack target. The admin role is required because Turbonomic needs access to the underlying physical infrastructure.


Here is an example of a user, Turbo_User, who has the admin role on the project demo.


These instructions are for when you are extending the existing hard drive and NOT adding a new hard drive. 

1. Extend hard drive space in your hypervisor (vCenter shown below):

2. To avoid restarting the VM, run the following command on your Turbonomic instance

3. Look at your current disk partition table

4. Next, create the physical volume and set it to the Linux LVM type

5. Now we will update the partition table to include sda3

6. Next, we run the command to initialize the partition so it can be used

7. This command will add the physical volume to the volume group of /dev/turbo

8. We can check that the physical volume /dev/sda3 is now part of /dev/turbo, along with the total amount of free space (Free PE)


9. Now we will extend the size of the logical volume by the amount of Free PE we found in the last step (replace the number with what you find and the volume with the one you want to extend, ex: /dev/turbo/var_log | /dev/turbo/var_lib_mysql)

10. The last step for expanding the partition is to expand the XFS filesystem (replace with the volume you extended, ex: /dev/turbo/var_log | /dev/turbo/var_lib_mysql)

11. Lastly, we are just confirming the partition has been extended
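The individual commands behind steps 2-11 can be sketched as follows. This is only a sketch: it assumes the disk is /dev/sda, the new partition will be /dev/sda3, and the volume group is /dev/turbo (as named in the steps) — verify the device names on your own instance with `fdisk -l` and `vgdisplay` before running anything.

```shell
# Sketch of steps 2-11; device names are assumptions, verify them first.

# 2. Rescan the disk so the new size is seen without a reboot
echo 1 > /sys/class/block/sda/device/rescan

# 3. Look at the current partition table
fdisk -l /dev/sda

# 4-5. Create partition sda3, set its type to Linux LVM (8e), and
#      re-read the partition table
fdisk /dev/sda        # n (new), p (primary), 3, defaults, t, 3, 8e, w
partprobe /dev/sda

# 6. Initialize the partition as an LVM physical volume
pvcreate /dev/sda3

# 7. Add the physical volume to the /dev/turbo volume group
vgextend turbo /dev/sda3

# 8. Check that /dev/sda3 joined the group and note the Free PE count
vgdisplay turbo

# 9. Extend the logical volume by the Free PE found above (substitute
#    your Free PE count and the volume you want to extend,
#    e.g. /dev/turbo/var_log or /dev/turbo/var_lib_mysql)
lvextend -l +<FreePE> /dev/turbo/var_log

# 10. Grow the XFS filesystem to fill the extended volume
xfs_growfs /dev/turbo/var_log

# 11. Confirm the partition has been extended
df -h
```

These commands require root and modify the partition table, so take a snapshot of the instance before starting.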

Are you seeing "Provision New Physical Machine" actions and don't understand why?

We are going to look at a specific host's details to diagnose these actions.

As we can see in the screenshot, memory does not look over-utilized, yet the reason to provision is "Critical Mem congestion." To investigate further, we need to expand the "Resources" panel.

The percentage shown under "Utilization %" is based on the capacity of the host; however, many customers have High Availability (HA) or other mechanisms that reduce the capacity Turbonomic can use. You can find the capacity Turbonomic uses under "Effective Capacity".

To find the actual Utilization % (Effective Utilization %) that is used out of the available capacity you need to do the following:

Effective Utilization % = ( Used / Effective Capacity ) * 100

Effective Utilization % = (149203328 / 201153952) * 100

Effective Utilization % = 0.7417 * 100

Effective Utilization % = 74.17

Since 74.17% is greater than 70%, which is roughly the threshold the default desired state tries to keep resources under, a Provision New Physical Machine action is generated.
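The same arithmetic is easy to check from a shell; the two numbers below are the Used and Effective Capacity values read from the Resources panel above:

```shell
# Effective Utilization % = (Used / Effective Capacity) * 100,
# using the values from the host's Resources panel.
used=149203328
effective_capacity=201153952
awk -v u="$used" -v c="$effective_capacity" \
    'BEGIN { printf "Effective Utilization %% = %.2f\n", (u / c) * 100 }'
# → Effective Utilization % = 74.17
```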



1. SSH into your Turbonomic appliance

  The username is: root
  The default password is: vmturbo


2. Next, run the following commands:

     cd /tmp

     openssl req -out vmturbo.csr -new -newkey rsa:2048 -nodes -keyout vmturbo.key

Fill out the appropriate information.
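If you would rather script this step than answer the prompts interactively, openssl accepts the subject on the command line. This is a sketch; every field in -subj below is a placeholder — substitute your own organization's details and hostname:

```shell
cd /tmp
# Generate the key and CSR in one non-interactive call; all -subj
# values are placeholders for your own organization's details.
openssl req -out vmturbo.csr -new -newkey rsa:2048 -nodes \
    -keyout vmturbo.key \
    -subj "/C=US/ST=MA/L=Boston/O=Example Corp/OU=IT/CN=turbonomic.example.com"

# Confirm the CSR carries the subject you expect before sending it to the CA.
openssl req -in vmturbo.csr -noout -subject
```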


3. Use SCP to copy the .csr file from /tmp on the Turbonomic appliance to the local computer


4. Open the CSR file with a text editor and copy the text into the request text box on your CA. From the internal CA (Windows CA) go to 'Request cert' -> Advanced -> base-64-encoded -> 'Template Used' = Web Server (or whatever custom template you may have)


5. Make a new directory called C:\cert\ and download the certificate chain in Base 64; call it Turbonomic.p7b


6. Right-click Turbonomic.p7b, click Open, and navigate to the Certificates folder. Starting with your root cert, right-click the cert and click All Tasks -> Export, exporting it in Base-64 to the C:\cert directory; call it root.cer. Repeat for your intermediate CA cert if you have one, and finally for your Turbonomic cert. Call them inter.cer and turbo.cer to make it easy.


7. At this point you should have 3 files (4 if you have an intermediate CA cert), and you need to chain them together:



     C:\cert\inter.cer (only if you have an intermediate CA)


Open a command prompt from C:\Cert and type these commands:


     more turbo.cer >> turbonomic.cer

     more inter.cer >> turbonomic.cer (only if you have an intermediate CA)

     more root.cer >>  turbonomic.cer


Now you have a turbonomic.cer that contains all three certs chained together in Base 64.


8. Back in your Turbonomic SCP session:

          upload C:\cert\turbonomic.cer to /etc/ssl/certs


9. Back in your Turbonomic SSH session:

          cd /etc/ssl/certs


10. Convert turbonomic.cer to pem:

          openssl x509 -in turbonomic.cer -out turbonomic.pem




11. Stop the apache2 service:

         service apache2 stop

    Or, stop the httpd service:

         service httpd stop


12. Back up old certs and keys: (BUT YOU TOOK A SNAPSHOT BEFORE YOU STARTED!)

      Copy the existing /etc/apache2/ssl.crt/server.crt to server-old.crt with the command below

           cp /etc/apache2/ssl.crt/server.crt /etc/apache2/ssl.crt/server-old.crt

      Copy the existing /etc/apache2/ssl.key/server.key to server-old.key with the command below

          cp /etc/apache2/ssl.key/server.key /etc/apache2/ssl.key/server-old.key


13. Copy the turbonomic.pem file to /etc/apache2/ssl.crt/ and call it server.crt

          cp turbonomic.pem /etc/apache2/ssl.crt/server.crt


14. Move /tmp/vmturbo.key (the file from step 2) into /etc/apache2/ssl.key/ and name it server.key

           mv /tmp/vmturbo.key  /etc/apache2/ssl.key/server.key




15. Start the apache2 service:

         service apache2 start

    Or, start the httpd service:

         service httpd start

First, a big shout-out to Umar for his help solving our scheduling problem! Once again we received a quick response and resolution from Turbonomic's tech support.


We are big fans of Turbonomic's automation and learned a while back that you need to disable automation during your backup window. Recently we noticed some VMs were moving during our "disabled" time. It turned out to be a problem with setting options at both the top level and the cluster level, along with how we originally configured our schedules.


With Umar's help we were able to streamline our configuration.


Here are some things to consider when you are working with the schedule settings:

     When you need to set a time when moves and changes must be disabled, such as during your backups:

          Configure the items you want to automate at the top level (like "Virtual Machines by PM Cluster") to "Manual". Then

          Create a schedule that sets them to "Automate" outside of your backup window.

     We then disabled moves and resizing for a couple of clusters by setting them to Disabled at the cluster level. This is much easier than changing rules for 34 clusters!

     When you set the time in the schedule, the change doesn't take effect until that time. For example, if at 3pm you configure an action for "Automate" from 6am to midnight, the rule is not processed until midnight, and then again at 6am the next morning.