For the past few months, an elite team of Cloudscalers (EMCers) has been working diligently on making ScaleIO great in OpenStack-land. ScaleIO “supported” OpenStack before, but it could take a fair bit of time and energy to get it set up. No longer. As of last Friday, both Mirantis OpenStack and Canonical OpenStack work with ScaleIO at the push of a button. The free and frictionless version of ScaleIO can be installed automatically for these OpenStack distributions, and at any time you can upgrade your free & frictionless installation to a fully supported commercial version of ScaleIO.
This work was done by coordinating carefully with our partners, writing a good deal of plugin and recipe code, incorporating feedback from customers, and testing, testing, testing.
There are several key areas of work:
- Creation and testing of general purpose Puppet-based recipes for installing ScaleIO, configuring ScaleIO, configuring OpenStack, and patching OpenStack (if/when required)
- Canonical integration by building Juju Charms based on the Puppet recipes above
- Mirantis integration by creating Fuel plugins for ScaleIO, which also leverage Puppet recipes as required
ScaleIO Puppet Recipes
At the heart of much of the work is a set of Puppet recipes to install and configure ScaleIO and OpenStack. The work is split into two repositories: puppet-scaleio, which installs and configures ScaleIO itself, and puppet-scaleio-openstack, which configures OpenStack to manage ScaleIO.
These two sets of recipes together support:
- Downloading and installation of all free & frictionless ScaleIO components
- Creation and configuration of multiple protection domains, storage pools, and fault sets
- Deploying and operating ScaleIO in a hyperconverged or “2-layer” configuration
- Ubuntu 14.04 LTS
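As a sketch of what driving these recipes might look like, here is a hypothetical manifest for a hyperconverged node. The class and parameter names below are illustrative assumptions, not the module’s actual interface; consult the puppet-scaleio README for the real one.

```puppet
# Hypothetical manifest: a minimal hyperconverged ScaleIO node.
# Class and parameter names are assumptions for illustration only.
node 'scaleio-node-1' {
  class { 'scaleio':
    version           => '2.0',
    mdm_ips           => ['10.0.0.1', '10.0.0.2', '10.0.0.3'],
    protection_domain => 'pd1',
    storage_pools     => ['sp1'],
    components        => ['mdm', 'sds', 'sdc'],
  }
}
```

Applying such a manifest is then the usual `puppet apply site.pp` (or agent/master) workflow.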
The puppet-scaleio recipe configures and manages ScaleIO. It does all of the following:
- Configures firewall (iptables) rules based on which ScaleIO components are installed
- Installs dependency packages such as numactl and libaio1
- Installs oracle-java8 for the Gateway
And was tested with all of the following:
- Puppet 3.*
- ScaleIO 2.0
- Ubuntu 14.04 LTS
- Linux kernel 4.2.0-30-generic and 3.13.0-*-generic
The puppet-scaleio-openstack recipe configures and manages OpenStack to work with ScaleIO. It does all of the following:
- Adds rootwrap filters
- Modifies nova.conf
- Patches nova python files
- Modifies cinder.conf
- Adds cinder_scaleio.config for Juno and Kilo versions
- Patches cinder python files
And was tested with all of the following:
- Puppet 3.*
- ScaleIO 2.0
- Ubuntu 14.04 LTS
- Linux kernel 4.2.0-30-generic and 3.13.0-83-generic
- OpenStack Juno, Kilo, Liberty
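To make the cinder.conf changes above concrete, here is an illustrative backend stanza for the in-tree ScaleIO Cinder driver of the Liberty era. The option names and placeholder values are shown as an example and should be verified against your release; Juno and Kilo instead use the separate cinder_scaleio.config file mentioned above.

```ini
# Illustrative cinder.conf fragment for a ScaleIO backend (Liberty-era driver).
# Placeholder values must be replaced; verify option names for your release.
[DEFAULT]
enabled_backends = scaleio

[scaleio]
volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
volume_backend_name = scaleio
san_ip = <gateway-ip>
san_login = <gateway-user>
san_password = <gateway-password>
sio_protection_domain_name = pd1
sio_storage_pool_name = sp1
```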
Canonical OpenStack and ScaleIO
Juju is a state-of-the-art, open source service modelling tool used with Ubuntu Server and Canonical OpenStack to manage your datacenter. Juju allows you to model, configure, manage, maintain, deploy, and scale cloud services quickly and efficiently. It makes Canonical OpenStack easy for customers to deploy and automates many manual steps.
Canonical OpenStack depends on Juju “charms” as its primary atomic component for automated deployments. The ScaleIO Juju charms rely on the Puppet recipes above to do most of their work and use the charm framework for integration with Juju, and hence with Canonical OpenStack.
The following configuration is tested and supported:
- Canonical OpenStack Liberty with ScaleIO 2.0 running on Ubuntu 14.04 LTS
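By way of illustration, a deployment with the charms might look like the following. The charm names and relations here are assumptions for the sketch; see the published ScaleIO charms for the actual names and endpoints.

```shell
# Hypothetical Juju walkthrough -- charm names are illustrative.
juju deploy scaleio-mdm -n 3   # 3-node MDM (metadata manager) cluster
juju deploy scaleio-sds -n 3   # storage server nodes
juju deploy scaleio-sdc        # data client on the consuming nodes
juju add-relation scaleio-sds scaleio-mdm
juju add-relation scaleio-sdc scaleio-mdm
```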
Mirantis OpenStack and ScaleIO
Mirantis OpenStack depends on Fuel, an open-source deployment and lifecycle management engine. It makes Mirantis OpenStack easy for customers to deploy in optimal configurations on a wide range of generic x86 servers and network hardware. It automates manual steps that might otherwise require great familiarity with OpenStack and hours or days of engineering time; it also has a simple UI and can auto-discover servers and storage components.
Fuel supports pluggable driver modules. We have written all-new open source ScaleIO plugins for Fuel that support the fully automated deployment of ScaleIO with Mirantis OpenStack. All configuration is done via Fuel’s GUI.
The following configurations have been tested and validated by both the Cloudscaler and Mirantis teams:
- Mirantis OpenStack versions 6.1, 7.0 and 8.0 with ScaleIO 2.0 running on Ubuntu 14.04 LTS
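Installing the plugins onto the Fuel master follows the standard Fuel plugin flow; the RPM filename below is illustrative, not the actual artifact name.

```shell
# Hypothetical Fuel walkthrough -- plugin filename is illustrative.
fuel plugins --install scaleio-2.0-2.0.0-1.noarch.rpm
fuel plugins --list    # confirm the ScaleIO plugin is registered
```

After installation, the plugin is enabled and configured per-environment in the Fuel web UI.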
Minor Differences Between Distributions
There are minor differences in support across the distributions, driven by constraints of those systems themselves: while the Puppet recipes support a fairly robust set of configuration options, not all of those options are available in Canonical or Mirantis deployments. We will work to close this gap over time.
The following table summarizes the differences.
| Capability | Fuel (Mirantis) | Juju (Canonical) | Notes |
|---|---|---|---|
| Ubuntu | + | + | Tested on Ubuntu 14.04 LTS |
| Install ScaleIO without OpenStack | - | + | |
| 2-layer environment (separate nodes for each ScaleIO service) | - | + | In Fuel: SDS is installed on each compute node and optionally on controller nodes; Gateway and MDM services are installed on controller nodes; SDC is installed on all compute nodes and on all nodes with the Cinder role; Xcache is optionally installed on all nodes with SDS. In Juju: the admin is free to choose any configuration. |
| Utilize existing ScaleIO cluster | + | - | Configures OpenStack to use a ScaleIO cluster deployed by some other tool. |
| High availability for Gateway | + | + | With HAProxy |
| 1-node, 3-node, 5-node cluster configuration | + | + | |
| Multiple protection domains support | + | + | In ScaleIO, configuration of protection domains is fully available. In Fuel, protection domains are created automatically if the number of SDSes in a protection domain exceeds a configured limit; the same storage pools are created in the new protection domain with the same settings. |
| Multiple storage pools support | + | + | |
| Fault sets support | - | + | |
| SSD caching support (Xcache) | + | + | It is possible to choose which storage pools should be cached. |
| Cluster reconfiguration support | + | + | It is possible to add/remove nodes from the cluster (both compute and controller nodes). |
| Checksum protection tuning | + | + | |
| Spare policy tuning | + | + | |
| Zero padding for storage pools tuning | + | + | |
| Background device scanner tuning | + | + | |
| RAM cache tuning | - | + | In Fuel it is left at the ScaleIO default; an update is planned. |
| High priority alert capacity tuning | + | + | |
| Critical priority alert capacity tuning | + | + | |
| Separate networks for data and management | + | + | |
| High performance profile | + | + | |
| Low-latency I/O scheduler for SSD disks | + | + | |
| OpenStack Juno | + | + | MOS 6.1 for Fuel |
| OpenStack Kilo | + | + | MOS 7.0 for Fuel |
| OpenStack Liberty | + | + | MOS 8.0 for Fuel |
| OpenStack Mitaka | - | - | Planned for update (MOS 9.0 for Fuel). |
| ScaleIO backend for persistent and ephemeral volumes | + | + | |
| Thin/thick volume provisioning type | + | + | Thin/thick for both persistent and ephemeral volumes; default values are in the cinder/nova configs; can be fine-tuned per volume type / flavor. |
| Persistent volume QoS | + | + | Limits can be set dynamically* via the formula ‘SizeOfDisk * ValuePerGB’, where ValuePerGB is set by the user; the dynamic value can be limited by IOPS and bandwidth options. (*Cinder only.) |
| Volume size granularity is 8 GB | + | + | For persistent volumes, sizes are automatically rounded up to a multiple of 8 GB; this can be forbidden via the config file. |
| Live instance migration | + | + | |
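Two of the rows above (dynamic QoS and 8 GB granularity) are easy to misread, so here is the arithmetic spelled out as a small sketch. The variable names and sample numbers are mine, not from the plugins.

```shell
#!/bin/sh
# Dynamic QoS: limit = SizeOfDisk * ValuePerGB, optionally capped by the
# IOPS option (bandwidth caps work the same way).
size_gb=100          # volume size in GB
iops_per_gb=30       # ValuePerGB, set by the operator
iops_cap=5000        # optional absolute cap from the IOPS option
limit=$(( size_gb * iops_per_gb ))
[ "$limit" -gt "$iops_cap" ] && limit=$iops_cap
echo "dynamic IOPS limit: $limit"      # 3000 for this example

# 8 GB granularity: requested sizes round up to the next multiple of 8 GB.
req_gb=20
alloc_gb=$(( (req_gb + 7) / 8 * 8 ))
echo "allocated size: ${alloc_gb}GB"   # a 20 GB request becomes 24 GB
```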
Making ScaleIO Great in OpenStack (Again?)
This is the beginning of driving greater ScaleIO adoption in OpenStack-land. As a reminder, ScaleIO is a much easier-to-operate and easier-to-scale alternative to Ceph RBD. It is usually in the range of 7-10x faster (sometimes much faster than this, depending on configuration or workload). EMC is finding that some of our best ScaleIO customers are those who have been ground up on the shoals of the Ceph cloud-wrecking reef. Hopefully this program makes it easier for adopters of OpenStack to perform their own bake-off between ScaleIO and Ceph, and helps you make a better decision sooner in your OpenStack journey.
Meanwhile, we’re working on better integration with Red Hat OpenStack, updates for Mirantis OS 9.0, and support for Nova ephemeral volumes, and we have a slew of improvements and ongoing work underway to make ScaleIO even better with the OpenStack distribution of your choice.
Notes:

- “Free & frictionless” refers to our program allowing EMC software such as ScaleIO and Elastic Cloud Storage (ECS) to be downloaded at no charge and installed and used with no restrictions, except for a lack of support.
- Hyper-converged configurations mean that storage and compute run together on the same server(s), whereas a 2-layer configuration refers to a more traditional SAN architecture in which storage runs on dedicated nodes and is connected over a network to the compute that consumes it.
- Other flavors of Ubuntu and other Linux distributions will be tested in the near future.
- That means support for KVM live migration, just as we have in the VxRack Neutrino system.