Prerequisites and Instructions for Adding Openstack to OpsMan

Blog Post created by eric.bannon on Aug 6, 2015

Hey everyone!

 

In this post we will outline all of the requirements for adding OpenStack instances to your Turbonomic platform. It is recommended that you upgrade your Turbonomic instance to v5.0 or higher before adding targets. Turbonomic is available on the Turbonomic download site in QCOW format for installation within an OpenStack / KVM environment: http://vmturbo.com/downloads/operations-manager-30-day-trial/?t=new-home

 

Supported Versions: Turbonomic has been tested in multiple Red Hat environments and is also tested periodically against OpenStack versions from Icehouse onwards.

 

Operational Requirements

Turbonomic uses four OpenStack services:

 

1. Identity (Keystone) - discovery of OpenStack Tenants

2. Compute (Nova) - discovery of Hypervisors

3. Block Storage (Cinder) - discovery of Storage

4. Telemetry (Ceilometer) - monitoring of resource usage

 

All of these services must be installed and enabled, and Turbonomic must be able to access each service at its administrative endpoint (provided by Keystone during target validation). To check whether these services are enabled, navigate to the Admin > System Information page in Horizon and confirm that the Keystone, Nova, Cinder, and Ceilometer services are configured and enabled.

 

[Screenshot: Horizon System Information page listing the enabled OpenStack services]

 

This can also be done through the OpenStack command line:

 

[root@openstack ~(keystone_admin)]# keystone service-list
+----------------------------------+------------+--------------+--------------------------------+
|                id                |    name    |     type     |          description           |
+----------------------------------+------------+--------------+--------------------------------+
| cc42edb86b7f47fbbacc4dcf35ecc307 | ceilometer |   metering   |   Openstack Metering Service   |
| 156ce39c4cfc43a4bbee0e51a83475ce |   cinder   |    volume    |         Cinder Service         |
| 3912c33409a44f09b64b96e9caff130e | cinder_v2  |   volumev2   |       Cinder Service v2        |
| 3ba34d51e51f42a39042e699ee244599 |  cinderv2  |   volumev2   |       Cinder Service v2        |
| 4c650e92fb454914aa1d3da3735c36f3 |   glance   |    image     |    Openstack Image Service     |
| 49e932929b1a497397847dca6772bfa7 |  keystone  |   identity   |   OpenStack Identity Service   |
| 507e6cfed4394c2090add57ffc05607d |  neutron   |   network    |   Neutron Networking Service   |
| 0e4a50f537b648d5bba1803f5a72d1c7 |    nova    |   compute    |   Openstack Compute Service    |
| 2767478c1523478bbb52ca0617b73106 |  nova_ec2  |     ec2      |          EC2 Service           |
| 3c2108e248ce49c5b327317cf8af5576 |   swift    | object-store | Openstack Object-Store Service |
| e9ee4c0581c64124bbcb8a19aa07ad70 |  swift_s3  |      s3      |      Openstack S3 Service      |
+----------------------------------+------------+--------------+--------------------------------+

 

OpenStack Target Validation

 

OpenStack targets added to Turbonomic Operations Manager are validated to meet these requirements:

 

1. All required OpenStack services (listed above) are enabled.

2. The target URL is the public URL of the Keystone service. Specify the port unless it is 5000. A secure "https" connection is assumed for all ports other than 5000; specify the protocol to override this default. Turbonomic Operations Manager must be able to access this URL (see below).

3. The user ID must be authenticated by OpenStack and must be authorized with an Admin role for the specified Tenant.

   Note - More detail on the permissions required can be found in the blog post Permissions required to add an OpenStack target in Turbonomic

4. Turbonomic Operations Manager must be able to reach each OpenStack service at its administrative endpoint.
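The target-URL rules in item 2 can be illustrated with a small sketch. This is not the product's code; the function name is hypothetical, and I assume the non-https default on port 5000 is plain "http":

```python
# Sketch of the target-URL rules above: port 5000 is the default and is
# assumed non-secure; any other port is assumed "https" unless an explicit
# protocol overrides the default.
from urllib.parse import urlparse

def normalize_keystone_url(target, port=5000):
    """Build the Keystone URL that would be contacted for this target."""
    if "://" in target:
        # An explicit protocol overrides the https-by-port default.
        parsed = urlparse(target)
        scheme, host = parsed.scheme, parsed.hostname
        port = parsed.port or port
    else:
        scheme = "http" if port == 5000 else "https"
        host = target
    return "%s://%s:%d" % (scheme, host, port)
```

For example, a bare address on port 5000 resolves to a plain connection, while a custom port such as 13000 is assumed to be https unless you prefix the protocol yourself.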

 

[Screenshot: OpenStack target configuration in Turbonomic]

 

Resource Monitoring Requirements

Turbonomic relies on Ceilometer to fetch the telemetry data that drives actions and brings your OpenStack environment to the desired state. The following Green Circle Community articles will help with the configuration of host and virtual machine metrics in Ceilometer:

 

Enabling CPU metrics in OpenStack

Enabling Memory metrics in OpenStack

Enabling SNMP Metrics in OpenStack

 

Automated Control Requirements

Turbonomic will recommend VM resize and live migration actions as dictated by the demand of the OpenStack workloads. These actions can be taken manually or automated entirely to provide command and control of the environment. This control requires no additional configuration beyond the metric availability described above. Additionally, Turbonomic provides a Turbonomic-specific Nova scheduler plugin that replaces the existing filters and automates placement decisions for OpenStack VM deployments based on real-time performance and capacity intelligence.

 

This Nova scheduler plugin interfaces with Turbonomic Operations Manager to obtain workload deployment recommendations. The plugin, available in the Turbonomic GitHub repository, is open source and can be downloaded and installed on the OpenStack Nova controller. The version of the scheduler for each OpenStack release can be found at the links below:

 

Icehouse: https://raw.githubusercontent.com/vmturbo/nova/stable/icehouse/nova/scheduler/vmt_scheduler.py

 

Juno: https://raw.githubusercontent.com/vmturbo/nova/stable/juno/nova/scheduler/vmt_scheduler.py

 

Kilo: https://github.com/vmturbo/nova/blob/stable/kilo/nova/scheduler/vmt_scheduler.py

 

Mitaka: https://raw.githubusercontent.com/vmturbo/nova/stable/mitaka/nova/scheduler/vmt_scheduler.py

 

The scheduler can be fetched using the following commands on the controller (change the vmt_scheduler.py URL to match the version required, as listed above):

 

cd /usr/lib/python2.6/site-packages/nova/scheduler/
curl -O https://raw.githubusercontent.com/vmturbo/nova/stable/juno/nova/scheduler/vmt_scheduler.py

 

This adds the VMTScheduler to the controller. Nova must then be configured to use this scheduler in place of the default filter scheduler. Add the following entries to the /etc/nova/nova.conf file under the [DEFAULT] section:

 

scheduler_driver = nova.scheduler.vmt_scheduler.VMTScheduler
vmturbo_rest_uri = <Turbonomic_IPAddress>
vmturbo_username = <Turbonomic_UserName>
vmturbo_password = <Turbonomic_Password>

 

Restarting the Nova scheduler service is required after applying the changes.

 

NOTE: 'scheduler_driver' might already be set to the default scheduler. In this case the existing entry must be changed rather than duplicated.

 

Once installed, you will be able to use Turbonomic to reserve capacity for future workloads and deploy instances directly into the OpenStack environment.

 

[Screenshot: OpenStack deployment through Turbonomic]

 

For more information on Turbonomic's open-source contributions, you can explore our GitHub at Turbonomic Open Source Projects.
