Hyper-Converged Confusion

Posted on by Randy Bias


I have had my doubts about converged and hyper-converged infrastructure since the Vblock launched, but I eventually came around to understanding why enterprises love the VCE Vblock. I am now starting to think of converged infrastructure (CI) as really “enterprise computing 2.0”. CI dramatically reduces operational costs and allows for the creation of relatively homogeneous environments for “platform 2” applications. Hyper-converged infrastructure (HCI) seeks to take CI to the next most obvious point: extreme homogeneity and a drastic reduction in labor costs.

I can see why so many come to what seems like a foregone conclusion: let’s just make CI even easier! Unfortunately, I don’t think people have thought through all of the ramifications of HCI in enterprises. Hyper-converged is really an approach designed for small and medium businesses, not larger enterprises operating at scale. The primary use cases in larger environments are niche applications with less stringent scalability and security requirements: VDI, remote office / branch office servers, and the like.

There are three major challenges with HCI in larger scale enterprise production environments that I will cover today: 

  1. Security
  2. Independent scaling of resources
  3. Scaling the control plane requires scaling the data plane

I think this perspective will be controversial, even inside of EMC, so hopefully I can drive a good conversation about it.

Let’s take a look.


Posted in Cloud Computing

Vancouver OpenStack Summit – EMC Federation Presentations

Posted on by Randy Bias

Voting for presentations at the Vancouver OpenStack Summit is now open.  Please help us by voting on the sessions submitted by EMC Federation speakers along with any other sessions that cover topics that might interest you.  Please vote at your earliest convenience since each vote helps!

OpenStack community members are voting on presentations to be presented at the OpenStack Summit, May 18-22, in Vancouver, Canada. Hundreds of high-quality submissions were received, and your votes can help determine which ones to include in the schedule.

PDF containing all of the submissions by the EMC Federation: 

https://my.syncplicity.com/share/6ndhnwrkpxkgodp/OpenStack%20Liberty%20Summit%20-%20Vancouver%20-%20EMC%20Federation%20Session%20Proposals%20v2

Here is a list you can click through to vote. 

Thank you so much for taking the time to vote on these sessions!

Posted in OpenStack

The Future of OpenStack’s EC2 APIs

Posted on by Randy Bias

Recently there was renewed discussion around the future of the EC2 APIs, beginning with some comments on the openstack-operators mailing list, followed by threads on the dev and foundation mailing lists.  This ultimately resulted in a suggested commit to officially “deprecate” the EC2 APIs from Nova.  That commit was rejected, but make sure you read through the commentary if you have time; there is some really great perspective in there.  If you don’t, here’s my basic summation:

  • Many people still very much care about the EC2 and AWS APIs and are quite concerned about their state and the lack of attention to keeping them current
  • Some people are adamant about deprecating and then removing them as expeditiously as possible
  • Others are interested in keeping them around, but moving them out of the default distribution and making sure they have a good home

As many people know, I’m passionate about this subject.  If you missed the blog posting that caused a massive kerfuffle in the summer of 2013, now is the time to take a look again: OpenStack’s Future Depends on Embracing Amazon. Now. There was a pretty massive response to that original article, including a very vibrant OpenStack meetup with a live-covered debate between me and Boris Renski, co-founder of Mirantis.

I am proud of driving that conversation, but one pushback that arose could be summarized as: “put your money where your mouth is”.  At the time we were already working towards a goal that would have answered this pushback, but it has taken a lot longer than I would like to materialize.

We are finally there.  Let me explain.

The StackForge Standalone EC2 API

It’s taken a while, and the entire backstory and history aren’t really relevant for this article, but Cloudscaling (now part of EMC) has been working diligently to build a drop-in replacement for the existing Nova EC2 API. This standalone EC2 API can be found in StackForge. This re-implementation of the EC2 APIs is now ready for prime time, and serendipitously you can see from the opening comments that the community is very interested in adopting it.

Some details on the status of the new EC2 API can be found in the initial documentation in the commit.

To summarize, the new standalone API:

  • Is feature complete and at parity with the existing Nova EC2 API
  • Has equivalent or better test coverage than the existing Nova EC2 API
  • Is configured by default on a different port (it can be run in parallel with all existing APIs)
  • Includes a new set of features in the form of full coverage for the VPC APIs (a subset of EC2)
  • Has been tested exhaustively with the AWS unified CLI tool, a Python CLI for driving all of the AWS services (a minimal client sketch follows this list)
  • Calls the OpenStack REST APIs rather than any of the “internal API” or function calls, for a clean separation of duties
  • Expands the AWS Tempest tests, which were run against AWS itself as a baseline and then used to validate this API’s behavior
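
As a concrete illustration, here is a minimal client sketch for talking to such an endpoint with the boto3 SDK (the AWS unified CLI works the same way via its --endpoint-url option). The hostname, port, and credential values below are assumptions for illustration only; substitute the standalone EC2 API URL and the Keystone-issued EC2 credentials from your own deployment.

    import boto3

    # Hypothetical endpoint and credentials -- replace with your deployment's
    # standalone EC2 API URL and the EC2 access/secret keys issued by Keystone.
    ec2 = boto3.client(
        "ec2",
        endpoint_url="http://controller.example.com:8788/",
        aws_access_key_id="YOUR_EC2_ACCESS_KEY",
        aws_secret_access_key="YOUR_EC2_SECRET_KEY",
        region_name="RegionOne",
    )

    # The same calls that work against AWS should work here, e.g. listing images.
    for image in ec2.describe_images()["Images"]:
        print(image["ImageId"], image.get("Name"))

If a call behaves differently here than it does against AWS itself, that is exactly the kind of bug report the team is looking for.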

This is very exciting, and it’s what the community has been asking for.  More important, to me at least, is that the EC2 API could potentially stay in StackForge and become an optional plugin to OpenStack, letting those who care use it while allowing the team maintaining it to iterate at a slightly different speed from the current 6-month integrated release cycle.

For those who are wondering, it’s EMC’s intention to continue to invest in and maintain this API bridge to OpenStack.

The EC2 API Still Matters to OpenStack

During the “debate” that occurred in 2013, I was frequently bemused by the attempts of community members to downplay the importance of the EC2 APIs. I think it’s all settled down now and generally accepted that we want the EC2 APIs to live on and succeed in OpenStack-land and hopefully we’ll even support other APIs down the road.

For those who are still holdouts though, I think the latest OpenStack User Survey data continues to reinforce how important the EC2 and other AWS APIs are:

[Figure: OpenStack User Survey data on EC2 API usage, from “A Brief State of the Stack 2014”]

What’s enlightening here is that in 2013 I was hearing the constant refrain of “the EC2 APIs are used by only a ‘fraction’ of the community”.  That ‘fraction’ was *merely* ~30-35% at the time according to the user surveys.  As you can see, usage of the EC2 APIs has actually increased since that time and now sits at 44% for production deployments, roughly a 25% relative increase in about 18 months. This is very important.

It means that usage of the EC2 APIs is increasing, fairly dramatically, over time.

I’ll reiterate, since folks still sometimes get confused: I’m not advocating dropping the OpenStack APIs in favor of AWS.  I’m advocating embracing the AWS APIs, making them a first-class citizen, and viewing AWS as a partner, not an enemy.  A partner in making cloud big for everyone.

This reality inside the OpenStack community is starting to materialize and I need your help.

The Game Plan

Awesome, we have a new set of improved EC2 APIs, a path towards supporting them and deprecating the old.  Whether you love the EC2 APIs or hate them, it’s good for everyone to move them out of the default deployment, create greater isolation between these APIs and OpenStack internals, and to have a path forward where they can be maintained with love. 

Everybody wins, even the detractors.

Well, to get this the rest of the way, we need to do the following:

  1. Test, test, test: if you are using the existing EC2 APIs, please give these a try, break them, and file bugs
  2. If you are a developer and want to help cover any gaps in functionality or bugs that have been found, then get involved now; this is a standard StackForge project, so anyone can get in the mix
  3. There are some known challenges in the existing OpenStack APIs that need to be addressed for a more robust solution; these are documented in a new blueprint you can find here
  4. Help update and maintain the documentation so people know that this capability is available for their OpenStack deployments/products, whether DIY or product based
  5. Add a set of testing capabilities to RefStack to test for “AWS interoperability” alongside “OpenStack interoperability” 

I really appreciate all of the supporters and also the detractors who have been involved in this discussion. I believe that this kind of debate and action, like the Internet before it and the IETF mantra of old (“rough consensus and running code”), is what makes OpenStack great. Completing this project will also provide us with a blueprint for how we support the public APIs of other public clouds in OpenStack-land.

Finally, a big thanks to Alex Levine, Feodor Tersin, and Andrey Pavlov, for being the tip of the spear on this work.  Without them we wouldn’t have made it this far.

Posted in OpenStack

“Vanilla OpenStack” Doesn’t Exist and Never Will

Posted on by Randy Bias

One of the biggest failures of OpenStack to date is expectation setting.  New potential OpenStack users and customers come into OpenStack and expect to find:

  • A uniform, monolithic cloud operating system (like Linux)
  • A set of well-integrated and interoperable components
  • Interoperability with their own vendors of choice in hardware, software, and public cloud

Unfortunately, none of this exists and probably none of it should have ever been expected since OpenStack won’t ever become a unified cloud operating system.

The problem can be summed up by a request I still see regularly from customers:

I want ‘vanilla OpenStack’

Vanilla OpenStack does not exist, never has existed, and never will exist.

Examining The Request

First of all, it’s a reasonable request.  The potential new OpenStack customer is indirectly asking for those things that led them to OpenStack in the first place.  The bi-annual user survey has already told us what people care about:

[Figure: OpenStack User Survey Fall 2014 – top business drivers for adopting OpenStack]

The top reasons for OpenStack boil down to:

  • Help me reduce costs
  • Help me reduce or eliminate vendor lock-in

Hence the desire for “vanilla” OpenStack.

But what is “vanilla”?  It could be a number of things:

  1. Give me the “official” OpenStack release with no modifications
  2. Give me OpenStack with all default settings
  3. Give me an OpenStack that has no proprietary components

Which immediately leads into the next problem: what is “OpenStack”?  Which could also be a number of things[1]:

  1. Until recently, officially the principal trademark “OpenStack-powered” meant Nova + Swift *only*

  2. The de facto “baseline” set of commonly deployed OpenStack services

    1. Nova, Keystone, Glance, Cinder, Horizon, and Neutron

    2. There is no name or official stance on this arbitrary grouping

  3. Use of DefCore + RefStack to test for OpenStack

UPDATE: to be more clear, the baseline set above *does* have a name. It is called “core” and called out in section 4.1 of the bylaws, which is below. I apologize for the confusion as “core” has been overloaded a fair bit in discussions on the board and at one point trademark rights were tied to “core”.

4.1 General Powers.

(a) The business and affairs of the Foundation shall be managed by or under the direction of a Board of Directors, who may exercise all of the powers of the Foundation except as otherwise provided by these Bylaws.
(b) The management of the technical matters relating to the OpenStack Project shall be managed by the Technical Committee. The management of the technical matters for the OpenStack Project is designed to be a technical meritocracy. The “OpenStack Project” shall consist of a “Core OpenStack Project,” library projects, gating projects and supporting projects. The Core OpenStack Project means the software modules which are part of an integrated release and for which an OpenStack trademark may be used. The other modules which are part of the OpenStack Project, but not the Core OpenStack Project may not be identified using the OpenStack trademark except when distributed with the Core OpenStack Project. The role of the Board of Directors in the management of the OpenStack Project and the Core OpenStack Project are set forth in Section 4.13. On formation of the Foundation, the Core OpenStack Project is the Block Storage, Compute, Dashboard, Identity Service, Image Service, Networking, and Object Storage modules. The Secretary shall maintain a list of the modules in the Core OpenStack Project which shall be posted on the Foundation’s website.

So this is helpful, but still confusing.  If, for example, you don’t ship Swift, which some OpenStack vendors do not, then technically you can’t call your product OpenStack-powered. HP’s public cloud and Rackspace’s public clouds, last I checked anyway, don’t use the identity service (Keystone), which also means that technically they can’t be “OpenStack-powered” either. A strict reading of this section also says that all projects in “integrated” status are part of “core” and that you can’t identify “core” with an OpenStack trademark unless “core” is distributed together, which implies that if you don’t have Sahara, then you aren’t OpenStack. Which, of course, makes no sense.

So my point still stands.  There has been a disconnect between how OpenStack is packaged up by vendors, how the trademarks are used, and how integrated is defined and contrasted to “core”, etc.

This is why item #3 above is still in motion and is the intended replacement model for #1.  You can find out more about DefCore’s work here.

Understanding The OpenStack Trademark and Its History

It’s not really a secret, but it’s deeply misunderstood: until the last few weeks, the Bylaws very specifically said that “OpenStack-powered” is Nova plus Swift.  That’s it. No other projects were included in the definition.  Technically, many folks who shipped an “OpenStack-powered” product without Swift were not actually legally allowed to use the trademark and brand.  This was widely unenforced because the Board and Foundation knew the definitions were broken.  Hence DefCore.

Also, the earliest deployments out there of OpenStack were Swift-only.  Cloudscaling launched the first ever production deployment of OpenStack outside of Rackspace in January of 2011, barely 6 months after launch.  At that time, Nova was not ready for prime time.  So the earliest OpenStack deployments also technically violated the trademark bylaws, although since the Foundation and Bylaws had yet to be created this didn’t really matter.

My point here is that from the very beginning of OpenStack’s formation, drawing a line around “OpenStack” has been difficult, and it still is to this day, given the way the Bylaws are written.

FYI, the new proposed Bylaws changes are explained here.  You will notice that they get rid of a rigid definition of “OpenStack” in favor of putting the definition in the hands of the technical committee and board.  They also disconnect the trademark from “core” itself.

Explosive Growth of Projects is Making Defining OpenStack Much Harder

There are now 20 projects in OpenStack.  Removing libraries and non-shipping projects like Rally, there are still ~15 projects in “OpenStack” integrated status.  And there are many more on the way.  Don’t be surprised if by the end of 2016 there are as many as 30 shipping OpenStack projects in integrated status.

Many of these new projects sit above the imaginary waterline many have drawn in their minds for OpenStack, meaning that for many people OpenStack is an IaaS-only effort.  However, we can now see efforts like Zaqar, Sahara, and others blurring the line and moving us up into PaaS land.

[Figure: new OpenStack projects moving up the stack from IaaS toward PaaS]

So when a customer is asking for “OpenStack”, just what are they asking for?  The answer is that we don’t know and rarely do they.  The lack of definition on the part of the Board, the Foundation, and the TC has made explaining this very challenging.

Vanilla-land: A Fairy-Tale in the Making

You can’t run OpenStack at scale and in production (the only measure that matters) in a “vanilla” manner.  Here I am considering “vanilla” to include all three definitions above: default settings, no proprietary code, and no modifications.

UPDATE: I want to be clear that by “production” I mean large scale production deployments.  Anything that requires a multi-tiered switch fabric and typically over 3-5 racks in size.  Yes, people run smaller production systems; however, it’s arguable they should be all-in on public cloud instead of wasting time running infrastructure.  Also, for the purposes of this article talking about 5-10 server deployments doesn’t make sense.  At that size, you can obviously run “vanilla OpenStack”, but I haven’t engaged with any enterprise that operates at this scale.

The closest you can get to this is DevStack, which is neither scalable nor acceptable for production.
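
For reference, here is a minimal sketch of what “playing with DevStack” looks like. It assumes a single disposable test VM; the passwords and host IP are placeholders, and the available options vary by release, so treat this as illustrative only.

    # Hypothetical minimal DevStack local.conf for a single throwaway test node.
    # Never use this for anything resembling production.
    [[local|localrc]]
    ADMIN_PASSWORD=devstack
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD
    HOST_IP=10.0.0.10

    # With this file placed in a git checkout of devstack, running ./stack.sh
    # builds an all-in-one developer environment on that single host.

Useful for kicking the tires, and nothing more.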

Why?

It would really take far too long to go through all of the gory details, but I need to give you some solid examples so you can understand.  Let’s do this.

General Configuration

First, there are well over 500 configuration options for OpenStack.  Many options immediately take you into proprietary land.  Want to use your existing hypervisor, ESX?  ESX is proprietary code, creating vendor lock-in and increasing costs.  Want to use your existing storage or networking vendors?  Same deal.
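
To make that concrete, here is a hedged sketch of what such a choice looks like in Nova’s configuration. The option names follow the Nova VMware driver documentation, but the host, credentials, and cluster name are placeholders and the exact options vary by release, so check the docs for your deployment.

    # Hypothetical nova.conf fragment selecting VMware vCenter as the hypervisor.
    # Opting into this (or any vendor driver) is exactly where "vanilla" ends.
    [DEFAULT]
    compute_driver = vmwareapi.VMwareVCDriver

    [vmware]
    host_ip = vcenter.example.com
    host_username = administrator@vsphere.local
    host_password = CHANGE_ME
    cluster_name = Cluster01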

Don’t want to reuse technology you already have?  Then be prepared for a shock about what you’ll get by default from core projects.

Networking

Whether you use nova-network or Neutron, the “default” mode is what nova-network calls “single_host” mode.  Single host mode is very simple.  You attach VLANs to a single virtual machine (or a single bare metal host), which acts as the gateway and firewall for all customers and all applications.  Scalability and performance are limited, since an x86 server will never have the performance of a switch with proprietary ASICs and firmware.  Worst of all, the only real failover model here is a high availability active/passive pair.  Most people use Linux-HA, which means that on failover you’re looking at 45-60 seconds when absolutely NO network traffic is running through your cloud.  Can you imagine a system-wide networking outage of 60 seconds each time you fail over the single_host server to do maintenance?

You can’t run like this in production, which means you *will* be using a Neutron plugin that provides control over a proprietary non-OpenStack networking solution, whether that’s bare metal switching, SDN, or something else.
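
As an illustration of what adopting such a plugin looks like, here is a hedged sketch of a Neutron ML2 configuration fragment. The “vendor_x” mechanism driver is a placeholder for whichever SDN or switch vendor you choose, and the real options depend entirely on that plugin.

    # Hypothetical ml2_conf.ini fragment: the stock open source driver plus a
    # placeholder third-party mechanism driver that actually programs the fabric.
    [ml2]
    type_drivers = vxlan,vlan,flat
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch,vendor_x_mech_driver

    [ml2_type_vxlan]
    vni_ranges = 1000:2000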

Storage

Like networking, the default block-storage-on-demand capability in Cinder is not what people expect.  By default, Cinder simply assumes that each hypervisor has its own locally attached storage (either DAS or some kind of shared storage using Fibre Channel, etc.).  Calls to the Cinder API result in the hypervisor creating a new block device (disk) on its local storage.  That means:

  • The block storage is probably not network-attached
  • You can’t back the block storage up to another system
  • The block device can’t be moved between VMs like AWS EBS
  • Hypervisor failure likely means loss of not only VM, but also all storage attached to it

UPDATE: Sorry folks.  This was inaccurate.  Cinder does use iSCSI by default.

I believe this still isn’t what customers are expecting. You would need to add HA to each cinder-volume instance combined with DRBD to do disk replication and potentially iSCSI multi-pathing for failover.

That means in order to meet the actual requirements of the customer they have to deal with the feature gaps above on their own, get an OpenStack distribution that handles the gap, or load a Cinder plugin that manages a proprietary non-OpenStack block storage solution.  That could be EMC’s own ScaleIO, an open source distributed block store like Ceph, industry-standard SAN arrays like VMAX/VNX, or really anything else.
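
For example, here is a hedged sketch of what pointing Cinder at a Ceph (RBD) backend looks like. The backend name, pool, and user are placeholders, and the driver path and option names should be checked against your release and your Ceph deployment.

    # Hypothetical cinder.conf fragment: replace the default LVM/iSCSI backend
    # with a distributed Ceph RBD backend.
    [DEFAULT]
    enabled_backends = ceph-rbd

    [ceph-rbd]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-rbd
    rbd_pool = volumes
    rbd_user = cinder
    rbd_ceph_conf = /etc/ceph/ceph.conf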

If you look at the laundry list of storage actually used in production you’ll see that over half of all deployments take this option and that the default Cinder configuration is only 23% of production deployments:

[Figure: OpenStack User Survey Fall 2014 – block storage drivers used in production deployments]

Application Management

Want your developers to create great new scalable cloud-native applications?  Great, let’s do it, but it won’t be with Horizon.  Horizon is a very basic dashboard and even with Heat, there are major gaps if you want to help your developers succeed.  You’ll need Scalr or Rightscale as cloud application management frameworks (especially if you are going multi-cloud or hybrid cloud with the major public clouds) or you’ll need a PaaS like CloudFoundry that does the heavy lifting for you.

You Can Reduce Vendor Lock-in, But … You Can’t Eliminate It

Are you trying to eliminate vendor lock-in?  Power to you.  That’s the right move.  Just don’t expect to succeed.  You can reduce but not eliminate vendor lock-in.  It’s better to demand that your vendors provide open source solutions, which don’t necessarily eliminate lock-in, but do reduce it.

Why isn’t it possible?  Well, network switches, for example, are deeply proprietary.  Even if you went with something like Cumulus Linux on ODM switches from Taiwan, you will *still* run proprietary firmware and use a proprietary closed-source ASIC from someone like Marvell or Broadcom.  Not even Google gets around this.

Firmware and BIOS on standard x86 servers are all deeply proprietary and strictly licensed, and this won’t change any time soon.  Not even the Open Compute Project (OCP) can get entirely around this.

The Notion of Vanilla OpenStack is Dangerous

This idea that there is a generic “no lock-in” OpenStack is one of the most dangerous ideas in OpenStack-land and needs to be quashed ruthlessly.  Yes, you should absolutely push to have as much open source in your OpenStack deployment as possible, but since 100% isn’t possible, what you should be evaluating is what combination of open source and proprietary gets you to the place where you can solve the business problem you are trying to conquer.

Excessive navel-gazing and trying to completely eliminate proprietary components is doomed to failure, even if you have the world’s most badass infrastructure operations and development team.

If Google can’t do it, then you can’t either.  Period.

The Process for Evaluating Production OpenStack

Here’s the right process for evaluating OpenStack:

  1. Select the person in your organization to spearhead this work

  2. Make him/her read this blog posting

  3. The leader should immediately download and play with DevStack

  4. The leader should create a team to build a very simple POC (5 servers or less)

  5. Understand how the plugins and optional components work

  6. Commission a larger pilot (at least 20-40 servers) with a trusted partner or set of trusted partners who have various options for “more generic” and “more proprietary” OpenStack

  7. Kick the crap out of this pilot; make sure you come with an exhaustive testing game plan

    1. VM launch times

    2. Block storage and networking performance

    3. etc…

  8. Gather business requirements from the internal developers who will use the system

  9. Figure out the gap between “more generic” and “more proprietary” and your business requirements

  10. Dial into the right level of “lock-in” that you are comfortable with from a strategic point of view that meets the business requirements

  11. If at all possible (it probably won’t be, but try anyway), get OpenStack from a vendor who can be a “single throat to choke”

Summarizing

I am trying to put a pragmatic face on what is a very challenging problem: how do you get to the next generation of datacenter?  We all believe OpenStack is the cornerstone of such an effort.  Unfortunately, OpenStack itself is not a single monolithic turnkey system.  It’s a set of interrelated but not always interdependent projects, a set that is growing rapidly, and your own business probably needs only a subset of those projects, at least initially.

That means being realistic about what can be accomplished and what is a pipe dream.  Follow these guidelines and you’ll get there.  But whatever you do, don’t ask for “vanilla OpenStack”.  It doesn’t exist and never will.

[1] Mark Collier pointed out some inaccuracies and I have adjusted this bullet list to reflect the situation as correctly as possible.

Posted in OpenStack

My Endorsements for the 2015 OpenStack Individual Director Elections

Posted on by Randy Bias

If you are voting this year in the individual director elections, and I sincerely hope you are, I would appreciate it if you would give special consideration to the following candidates, each with a super brief “why”:

  • Kavit Munshi – international and Indian representation
  • Tim Bell – user representative and continuity with user committee and user survey
  • Jesse Proudman – operator representative and independent voice
  • Haiying Wang – international and Chinese representation

There are many other fantastic candidates running, including Monty Taylor, Rob Hirschfeld, Alex Freedland, Sean Winn, and Ken Hui.  However, I decided to cut this down to a very short list, stack-ranked as follows:

  • International representation (we need more)
  • User representation (we need more)
  • Operator representation (we need more)

Good luck to everyone.

 

Posted in OpenStack

The Future of OpenStack is Now, 2015

Posted on by Randy Bias

This year will be a crucial year in OpenStack history.  This is the year we fix much of how OpenStack is structured or die trying.  By structure, I mean the vision, the project structure, the integrated release cycle, and the board and TC’s role in driving direction.

First, stop and read this blog posting by Thierry Carrez, release manager for OpenStack.  Then, if you haven’t already, make certain you watch my related keynote (preso w/no video is here) from the OpenStackSV meeting in September of last year.

The problem can be stated simply, if somewhat brutally: OpenStack is at risk of collapsing under its own weight.

The original vision had the OpenStack community delivering a tightly integrated release focused on basic infrastructure services on a 6-month release cycle.  The problem is that this shared vision was at odds with two things: 1) the inherent inclusivity of the OpenStack community and 2) people’s wildly differing interpretations of the word “cloud”.  To be honest, the latter has been endemic to the cloud space since its inception, but the mission statement of OpenStack doesn’t clarify matters, instead using the hopelessly abused word “cloud”.

Let’s examine the challenges I outlined and a way forward, ending with a plea on how you can help.

Background

When OpenStack launched in summer of 2010, I and many others saw the immediate value of an open source infrastructure-as-a-service stack with a vibrant community.  Something Eucalyptus and CloudStack had both failed to achieve.  In fact, the very hallmark of OpenStack was its inclusivity.  If you joined the community, played by the rules, and wanted to make something happen, it was clear how to do so and you were actively encouraged to go for it.  This is an important facet of why OpenStack grew so fast and had so many amazing participants.

However, there was one dark spot in this inclusivity.  Namely, competing OpenStack projects were actively discouraged, as was adding projects written predominantly in non-Python languages.  The attitude in the former case was one of “why don’t you fix what is already broken?” and in the latter, one of “we want to allow developers to move easily between projects!”  Both laudable goals, but both ultimately unwieldy and misguided.

Although unspoken, another source of this tension was the process by which the “integrated release” was delivered every 6 months, where in theory we:

  1. release code
  2. have a summit where we discuss the next major release
  3. work for months including mid-cycle meetups to get new code ready
  4. test, test, and re-test all of the code together
  5. release code and begin again

Now with only two projects, Nova and Swift, in the beginning, this was not a problem, but as the number of projects grew, significant organizational issues began to arise.  Thierry did the most eloquent job explaining so I am just going to try and fill in the gaps and talk about this more from a product and business perspective.

The issues with the growth in projects were deeply compounded by many of the new projects being “cloud” but in entirely different areas of cloud, such as Platform-as-a-Service, Database-as-a-Service, etc.  Many of these don’t need to be part of an integrated test release every 6 months and in fact should probably be developed on their own product cycles.

All together this meant several things:

  • The integrated code cycle demanded a rethink
  • The importance of delivering a tightly integrated release was in question
  • OpenStack as a single monolithic “cloud operating system” was clearly untenable
  • The idea that developers could move seamlessly between projects was dubious
  • Delivering inclusivity is probably the “killer app” for OpenStack and its Foundation

OpenStack by Design

In my OpenStackSV Keynote, Lie of the Benevolent Dictator, I highlighted what I saw as a critical gap in the organizational structure of the community: namely, that we needed real product management and product strategy leadership.  During the 2014 Atlanta Spring Summit, the Board and the Technical Committee had their first joint session.  From that meeting it became clear that the TC was focused on managing the 6-month integrated release cycle and exclusively on tactics.  At the same time, the Board and Foundation did not feel that they had the remit to drive technical product requirements.

The result is that there is a lack of cohesive long-term (2-5 year) planning around OpenStack from a product perspective.  Instead, we rely on the grassroots-level organization that may or may not happen as each developer or company contributes code.  In effect, we suffer from the Tragedy of the Commons.

We asked for this when we encouraged an inclusive environment and I certainly don’t want to do away with a key strength of OpenStack; however, the bent for inclusivity needs to be tempered with better long term product planning.

We need OpenStack by Design and Intent, not by accident.

OpenStack’s Way Forward

Again, Thierry’s extremely eloquent outline of how to move forward from a technical point of view is fantastic, but perhaps there is room for improvement?  If inclusivity and the community is the most important aspect of OpenStack, then perhaps we should operate as such.

I believe there are a number of key items that need to happen this year at the Board, TC, and Foundation level.  Fundamentally, we need to look more like a set of loosely-coupled independent projects that MAY be put together in a variety of ways. [1]

These are the key items that need to be addressed this year:

  • Reorganize to a more scalable model, like the Apache Software Foundation
  • Discard the integrated release process, in favor of interoperability testing
  • Promote DefCore and the CI system for delivering interoperability between projects
  • Explicitly encourage non-Python OpenStack projects
  • Re-position OpenStack in the minds of the market and community
  • Create an ongoing educational process to help reinforce this re-positioning
  • Develop “integration streams” for interrelated OpenStack projects that need interop
  • Re-imagine the TC as an integration and architecture team not SDLC management
  • Plug product management kung fu into the TC

Your Help is Necessary To Enable This Vision in 2015

I have been working behind the scenes, along with many other board and TC members, to help educate on these issues.  I worked with the DefCore and RefStack teams, helping to encourage formation of the product management working groups, and bringing up key issues at board meetings.  I believe that our collective efforts helped us get to the point where change is possible and Thierry’s article shows that the appetite and willingness to change is here.

I want to continue representing the community as a whole on the OpenStack Foundation Board of Directors.  I know that I can represent your interests and help guide OpenStack down the right path.  This year is a formative year for OpenStack and I know that my particular flair for breaking the glass will be critical in encouraging change.

For the first time I’m running as an individual representative who wants to create the best OpenStack possible for everyone.  I am focusing on inclusivity, revitalizing the OpenStack community and process, and driving towards a model that ultimately is the best for vendors, customers, end-users, developers, operators, and all other stakeholders within OpenStack.

I want your vote!  Thank you.

–Randy

 


[1] Hopefully this will get rid of the banal requests from unwitting customers for “vanilla OpenStack”, something that has never existed anyway.

Posted in OpenStack

The EMC Federation Joins the OpenStack Foundation

Posted on by Randy Bias

Last week a major set of milestones was reached for the EMC Federation’s involvement with OpenStack. First, EMC and its affiliated companies and brands (VMware, VCE, Pivotal, RSA, Cloudscaling) determined a cohesive strategy for engagement with the OpenStack Foundation Board. Second, EMC appointed a VMware employee, Sean Roberts (@sarob), as the official representative of EMC and hence the EMC Federation generally. This means that I am no longer the EMC (Cloudscaling) OpenStack Foundation Gold Director.

The why of this may be confusing so I will briefly explain the background and then provide some more details on what exactly transpired.

Background
By and large the OpenStack bylaws have stood the test of time quite well at this point. Most of the upcoming proposed changes are simply things we could only have known in hindsight. One area that I think the bylaws got right is the set of articles that limit participation by “Affiliated” companies:

2.5 Affiliation Limits. Gold Members and Platinum Members may not belong to an Affiliated Group. An Affiliated Group means that for Members that are business entities, one entity is “Controlled” by the other entity. “Controlled” or “Control” means one entity owns, directly or indirectly, more than 50% of the voting securities of the Controlled entity which vote for the election of the board of directors or other managing body of an entity, or which is under common control with the Controlled entity. An Affiliated Group does not apply to government agencies, academic institutions or individuals.

What this means, in essence, is that if there are two companies with a relationship like parent/child or joint venture, in which one owns more than 50% of the other, only ONE of the companies can join the OpenStack Foundation as a Gold or Platinum Member. This is a good measure to prevent a group of companies from “stacking the deck” within the OpenStack Foundation and using that as leverage to control or dominate OpenStack, which is something no one wants. I also need to note that any company may also have one to two Individual Members represent them. Two Directors from any single affiliated group is the maximum representation on the OpenStack Board of Directors. This works out to one Gold or Platinum Director plus one Individual Director OR two Individual Directors. This is why I am allowed to run as an Individual Director in 2015. Of course, I would very much appreciate your support in this endeavour!

So, things became very interesting upon EMC’s acquisition of Cloudscaling, as EMC inherited the Gold Member status of Cloudscaling while VMware also retained its Gold Member status, creating an edge case in which the Bylaws were technically being violated. This required EMC and VMware to work closely with the Foundation staff to resolve the situation.

This is why VMware resigned their Gold Member status and why EMC appointed a VMware employee as a representative for EMC and hence the EMC Federation.

Which means we should quickly explain what the EMC Federation is.

EMC Federation
The EMC Federation is composed of a number of different entities, from security companies, to storage, to Platform-as-a-Service, big data, virtualization, converged infrastructure, and now OpenStack via the Cloudscaling acquisition. Members of the EMC Federation are already representatives on the OpenStack Foundation Board of Directors, OpenStack Foundation Gold Members, OpenStack Foundation Corporate Sponsors, and have deepening ties to OpenStack generally.

In April of 2013, EMC and VMware launched Pivotal and created a federation of their businesses. EMC is the majority owner, by a large margin, of VMware and Pivotal, and RSA is a wholly owned subsidiary. Recently, VCE, the leader in converged infrastructure, joined the Federation. Federation messaging and joint solutions were prominent during EMC World 2014. The following diagram gives you some idea of how the Federation is organized.

[Figure: how the EMC Federation companies relate to OpenStack]

When asked about why the Federation model is needed and what differentiates the companies from competitors, the answer is “choice”. While VMware is the leading hypervisor, EMC also desires the opportunity to forge alliances and solutions with Microsoft, Citrix, and others. Conversely, VMware desires to support and work with a variety of storage and security solutions.

Similarly, members of the Federation desire to operate and support OpenStack’s mission in different manners (converged infrastructure, appliance models, and software distributions) while also supporting the joint goals of empowering and promoting OpenStack within the enterprise.

Wikibon covers the EMC Federation Model extensively here:

http://wikibon.org/wiki/v/Primer_on_the_EMC_Federation

The EMC Federation OpenStack Strategy
As a group, the EMC Federation strongly desires to play by the rules of the OpenStack community while deepening our commitments and contributions. As a group we are already the #6 contributor to the latest release, and we aspire to go even further. OpenStack is a critical strategy for the Federation as a whole, even for members like Pivotal, who see a significant increase in the number of enterprises who wish to run CloudFoundry on top of OpenStack.

What this meant for us when resolving the Bylaws issue is that we wanted the entire EMC group represented as a whole, such that others like VMware, VCE, and Pivotal could all be a part of the picture. The Bylaws, however, require that the Gold Member selected be an actual legal entity.

Our final resolution was to have VMware resign its Gold Membership, have EMC retain the Cloudscaling Gold Membership, and, to show EMC Federation coordination, have EMC appoint Sean Roberts to represent EMC, and hence the entire Federation, as our Gold Member representative. Finally, all of the branding on the OpenStack Foundation website will be Federation-oriented branding (EMC2).

Meanwhile, behind the scenes, I’m working closely with Sean Roberts of VMware, Josh McKenty of Pivotal, Jay Cuthrell of VCE, and others to make sure that we have cohesion across the Federation.

Hopefully this helps explain these recent changes.

Posted in OpenStack

An OpenStack Dream Team: EMC + Cloudscaling

Posted on by Randy Bias

If you’re following the buzz surrounding the EMC acquisition of Cloudscaling, you might wonder:

Is this a mismatch, or am I missing something?

Yes. You’re missing something. Let me explain.

First, you’ll want to take a closer look at the announcements by EMC today [1]. We will join the EMC Emerging Technologies Division led by CJ Desai. Cloudscaling OCS is now a core part of EMC’s Enterprise Hybrid Cloud Solutions. The Enterprise Hybrid Cloud Solutions powered by OpenStack will be available in 2015. But this only paints half of the bigger picture needed to help you understand why this is a match made in heaven.

The other half of the backstory can be found in the Cloudscaling blog and in a few of my more notable presentations. Here is the synopsis:

  • Cloud computing is NOT virtualization on demand
  • Cloud computing was created by web-scale pioneers like Google and AWS
  • Cloud computing is a completely new kind of computing, fundamentally different from legacy enterprise computing

A couple of solid resources that will help deepen your understanding of these three points can be found in this presentation for NIST and in an older interview with Adrian Cockcroft of Netflix.

This view of the world is the cornerstone of Cloudscaling’s product strategy. Essentially, we believe that two kinds of clouds are needed for two different types of applications:

[Figure: two kinds of clouds for two different types of applications, from the Cloudscaling OCS product deck]

Yes, part of this was driven by pragmatism. VMware is king of the hill in enterprise virtualization and it’s hard to imagine a universe where that changes. But once-nascent “cloud-native” applications are now emerging very rapidly. Cloud-native apps manage their own availability and uptime, and they’re designed for scale-out with minimal or zero human engagement. This is where DevOps comes in. It’s the primary vehicle by which enterprises can successfully build cloud-native applications. It also brings into focus the prevalent “pets vs. cattle” meme that describes in a nutshell what a cloud-native application is: treating servers as disposable field replaceable units (FRUs) rather than critical pieces of infrastructure that must never fail.

EMC’s strategy is consistent with this approach to infrastructure and applications. They use a term that IDC coined called “third platform apps” to describe these new cloud-native applications. The speed at which the third platform is growing is almost unprecedented:

[Figure: growth of third platform applications, from “6 Requirements for Enterprise-Grade OpenStack” supporting material]

We didn’t realize it during our first conversations back in 2012, but EMC and Cloudscaling were slowly moving towards each other, even though we didn’t quite see eye to eye on how to get there. EMC made some impressive moves in pursuing its scale-out architecture portfolio, including the acquisitions of Isilon and ScaleIO, and internal developments such as ViPR.

Through our interactions with EMC it became clear to our leadership team that EMC was more closely aligned philosophically to Cloudscaling than anyone else.

EMC Understands Disruption
Smart companies disrupt themselves. Apple and Amazon get this. EMC does, too.

This becomes clear when you look at EMC’s philosophy of purchasing companies like VMware and Greenplum, spinning off Pivotal, and acquiring its own all-flash arrays with XtremIO. It demonstrates an appetite and willingness to take on risk and move boldly into the future.

It’s here that you see proof of EMC’s belief in the rise of “third platform” or “cloud-native” or “scale-out” applications, three terms for describing the same phenomenon. Cloudscaling’s delivery of the first enterprise-grade OpenStack-powered cloud operating system, Open Cloud System, surely did not go unnoticed. EMC saw the value in our shared vision and that a deep collaboration could mean great things.

Delivering on Greatness the EMC Way
As the Innovator’s Dilemma illustrates, a well-run company could pay too much attention to existing customers at the expense of identifying, adopting, and nurturing new technologies that new market entrants could use to disrupt existing business lines. EMC knows this, and it’s planning for it.

Look at the recent reorganization of all of EMC’s scale-out technologies under CJ Desai in the Emerging Technologies Division (ETD). This group now includes:

  • XtremIO, the leader in all flash arrays
  • ViPR, software defined storage
  • ScaleIO, scale-out block storage
  • Atmos, scale-out object storage
  • Cloudscaling, scale-out cloud operating system powered by OpenStack

What you will notice about ETD is that it is fundamentally about incubating and delivering new technologies that are potentially disruptive to the existing EMC product lines.

For these reasons, Cloudscaling finds itself in excellent company.

The Cloudscaling and EMC Dream Team
This is why EMC+Cloudscaling makes sense. Both companies are planning for cloud-native apps to be embraced by the enterprise. And OpenStack will be key to delivering the infrastructure to support these apps.

The mission, vision and go-to-market execution proof-points driving EMC toward cloud computing are perfectly aligned with Cloudscaling. The quality of EMC’s leadership team and the company’s commitment to making things happen impress me every day.

Here’s to the future. It’s going to be bright!


 

 

 

—Randy Bias

[1] Also make sure to check out Ken Hui’s blog posting What EMC is up to with OpenStack solutions

Posted in Cloud Computing

Public Cloud Economies of (Web-)Scale Aren’t About Buying Power

Posted on by Randy Bias

As you no doubt heard this week, Rackspace has announced its intention to focus on managed cloud.  Inevitably this brought observations from many about the ability of RAX, and others, to compete effectively against the web-scale public cloud giants: Amazon, Microsoft, and Google. One of the commenters was Mike Kavis (twitter link), a long-time cloud pundit and someone whose opinion I respect. Mike wrote up a fairly interesting article that he posted on Forbes, which I encourage you to read in full. Unfortunately, Mike falls into one of the older cloud tropes that I thought was well and truly dead. Today I seek to clarify, and hopefully amplify, much of what he said.

First, we need to address the so-called “economies of scale” that large public cloud providers enjoy. Simply put, economies of scale are structural cost advantages that come from sufficient size, greater speed, enhanced productivity, or scale of operation. Unfortunately, many folks, including Mike, fall into the trap of assuming that “economies of scale” == “buying power”. Buying power can be an element of achieving scale, but it is seldom a structural or sustainable advantage, certainly not against other large businesses who can command similar quantities of capital.

No, the real economies of scale that are relevant here are the tremendous investments in R&D that have led to technological innovations that directly impact the cost structures of Amazon Web Services, Google Cloud Platform, and Microsoft Azure. Here are some examples of what I mean:

These are just the first three items that are A) public and B) come to my mind without a lot of additional research. There are hundreds of other innovations. Most of these innovations share a central theme: reduction of cost through greater efficiency or the ability to deploy lower cost hardware. For example, Google’s G-Scale Network uses inexpensive Taiwanese ODM switches.

As I have mentioned previously, the rate of innovation and development at these public clouds is where the true economies of scale reside.

Much of this is alluded to when Mike covers the level of investment in infrastructure and R&D from Amazon, Microsoft, and Google.  Unfortunately Mike mixes infrastructure investment (CapEx) with R&D investment (OpEx).  We actually don’t know what the levels of R&D investment are at the big three, although we know that they literally have thousands of developers at each working diligently on new capabilities.

And this is where we go off the rails, because Mike throws IBM’s hat in the ring as a real contender on the basis that they are planning to invest $1.2B in new datacenters. This is actually uninteresting and mostly irrelevant when it comes to measuring the probability of success in the public cloud game. New datacenters and hardware won’t provide a true structural cost advantage. That can only come through investment in R&D and a proven track record of innovation in public cloud, neither of which IBM has clearly demonstrated. Perhaps they will, and perhaps they are a true public cloud contender, but it’s hard for me to see that given that so much of this is about a cultural and organizational structure that can encourage innovation.

What does it take to change? As many of you know, I pooh-poohed Microsoft’s chances for quite a while because I had no belief in their eventual success at delivering online services, mostly because I felt the organization as a whole struggled with the operating system boat anchor and couldn’t let go. Of course, this was before Satya Nadella took over the helm and declared a focus on cloud and mobile. In essence he empowered and enabled the Online/Live teams to become the new Microsoft. The Live teams have learned “web scale” the hard way, over many, many years and through the spilling of much red ink. See this article from 2011 on MSFT’s Online services operating income.  Microsoft has spent 10 years and many billions of dollars to become a credible player against the likes of Amazon and Google.

In that light, how can anyone else even pretend to the throne without similar levels of investment? Buying a hosting company is not going to get you there. This isn’t a game of buying power or outsourcing. It’s an innovation game, and that’s it. The number of players who can pull this off is vanishingly small.

You want “economies of scale?” You’re going to pay for it and at this point it’s probably too little too late.

Posted in Cloud Computing

Voting for the Fall OpenStack Summit in Paris, France is now open!

Posted on by Randy Bias

The Cloudscaling team has, once again, submitted an outstanding array of talks, and we would all appreciate it if you took the time to vote for our presentation submissions. We’ve summarized our presentations for you below, along with an easy link to cast your vote for each.

There’s no doubt that Cloudscaling would not be as great as it is without our customers and other Stacker friends.  That’s why we humbly ask you to please take the time to vote for these submissions which include user stories from companies who are using OpenStack to achieve agility.

Customer Use Cases:

How Lithium used Cloudscaling OCS to bring IT into the Modern Cloud era – https://www.openstack.org/vote-paris/Presentation/how-lithium-used-cloudscaling-ocs-to-bring-it-into-the-modern-cloud-era

This session will include a case study by Randy Bias, CEO of Cloudscaling, and Joe Sandoval of Lithium about the key benefits experienced by Lithium in their journey: increased agility, “open” architecture, application modernization, improved DevOps efficiency and a foundation for the future, just to name a few.

No Wait IT Keeps Developers Productive at Ubisoft – https://www.openstack.org/vote-paris/Presentation/no-wait-it-keeps-developers-productive-at-ubisoft

This session will include a case study by Randy Bias, CEO of Cloudscaling, and Marc Heckmann, Enterprise Cloud Architect at Ubisoft, about the key benefits experienced by Ubisoft in their journey: agility with “control”, satisfying the LOB’s need for speed, increased IT efficiency, application modernization, and improved DevOps efficiency.

Service Provider Achieves Ultra-Agile Infrastructure using Cloudscaling OCS – https://www.openstack.org/vote-paris/Presentation/service-provider-achieves-ultra-agile-infrastructure-using-cloudscaling-ocs

This session will include a case study by Randy Bias, CEO of Cloudscaling, and Matt Kinney of Idig.net about the key benefits experienced by Canadian Web Hosting in their journey: increased agility, services that are cost-competitive with major public clouds like AWS, and a slew of new, dynamic cloud applications that customers love.

Panels on OpenStack, Hybrid Cloud, and Other Business Cases:

The OpenStack Thunderdome – https://www.openstack.org/vote-paris/Presentation/the-openstack-thunderdome

“…five highly opinionated Stackers will lock horns over the best way to bring about global dominance of OpenStack as the default cloud-building platform. “

Top Hybrid Cloud Myths Debunked – https://www.openstack.org/vote-paris/Presentation/top-hybrid-cloud-myths-debunked

Four experts from diverse industries including public and private cloud, systems integration and cloud management will square off and separate reality from assumption around hybrid cloud myths and top trends.

Hybrid Cloud War Stories: Expecting the Unexpected – https://www.openstack.org/vote-paris/Presentation/hybrid-cloud-war-stories-expecting-the-unexpected

Whatever you didn’t expect to go wrong, does.  What you hoped for does not materialize.  Building a private or public cloud is hard enough, but putting them together is not for the faint of heart.  Four experts who have extensive experience in using, delivering, and architecting hybrid clouds will chime in on what to look for when going for the gold.

Compliance Slows Us Down While Cloud Speeds Us Up; Or Does It? – https://www.openstack.org/vote-paris/Presentation/compliance-slows-us-down-while-cloud-speeds-us-up-or-does-it

Compliance and governance give the appearance of slowing down IT, while Cloud gives us the hope of moving faster and faster.  Can managing down risk co-habitate with greater agility and flexibility?  We’ll ask that question and more.

Adopting Cloud? Unlearn everything you know about traditional enterprise architecture first. – https://www.openstack.org/vote-paris/Presentation/adopting-cloud-unlearn-everything-you-know-about-traditional-enterprise-architecture-first

Cloud is about a lot more than VMs-on-demand.  Traditional enterprise IT approaches are being disrupted by “web scale” techniques.  As the cloud changes everything, we need to understand how web scale ultimately dovetails with traditional enterprise requirements.  How can we interpret the lessons learned from the big guys for your every day enterprise?

Next-Gen Organizational Design – Growth hacking with “BusDevOps” – https://www.openstack.org/vote-paris/Presentation/next-gen-org-design-growth-hacking-with-busdevops

The silos between dev and ops are coming down, but is it possible to extend that thinking to the rest of the business?  Can we achieve the impossible?  Bringing together the best of dev, ops, business development, sales, and product functions, we’ll discuss how this might play out and why it could be important to the next generation of businesses.

Technology Oriented Presentations

Scale-Out Ceph: Rethinking How Distributed Storage is Deployed (Speakers: Randy Bias, Tushar Kalra) – https://www.openstack.org/vote-paris/Presentation/scale-out-ceph-rethinking-how-distributed-storage-is-deployed

At Cloudscaling, we believe that unification isn’t the answer. Good solid tiered storage architecture is the answer. In this session, see how Ceph can shine as part of a considered storage strategy rather than as ‘the only answer’.

Cloud Operations Dashboard Demo: Cloudscaling OCS User Interface – https://www.openstack.org/vote-paris/Presentation/cloud-operations-dashboard-demo-cloudscaling-ocs-user-interface

Step inside and we’ll give you a tour of Cloudscaling’s Open Cloud System cloud operator GUI, API, and CLI tools. By the operator, for the operator. Power up now!

Tales From the Field: A Day in the Life of Cloud Operations – https://www.openstack.org/vote-paris/Presentation/tales-from-the-field-a-day-in-the-life-of-cloud-operations

Cloudscaling has been supporting 24×7 production clouds since before OpenStack existed. In this session, we will discuss the kinds of problems folks run into in typical OpenStack deployments and pull out a couple of interesting incidents to perform a deep dive on.

Tempest Testing for Hybrid and Public Cloud Interoperability – https://www.openstack.org/vote-paris/Presentation/tempest-testing-for-hybrid-and-public-cloud-interoperability

In this presentation we will take a closer look at DefCore, its origins and intentions, how the initiative can be extended, how RefStack can be used as the basis not only for OpenStack interoperability testing but also public cloud interop testing, and finally give a demonstration of wrapping this all up into an actionable package.

OpenStack Design Guide Panel – https://www.openstack.org/vote-paris/Presentation/panel-with-the-authors-of-the-openstack-design-guide

Bring your real-world questions and be prepared to talk OpenStack architecture with a panel of experts from across multiple disciplines and companies. We’ll be drawing on real architecture and design problems taken from real-world experience working with, and developing solutions built on, OpenStack.

Virtual Private Cloud (VPC) Powered by Cloudscaling OCS & OpenContrail – https://www.openstack.org/vote-paris/Presentation/virtual-private-cloud-vpc-powered-by-cloudscaling-ocs-and-opencontrail

We’ll explore how the Cloudscaling VPC cloud solution leverages SDN technology which has been well-tested in the telecommunications and service provider industries to build overlay networks that scale – both within and across data centers. We’ll also take a closer look at the VPC API updates to the OpenStack EC2 API and how the development work done there is providing real fidelity with leading public cloud providers enabling true hybrid cloud solutions.

OpenStack Reference Architecture: Scaling to Infinity and Beyond – https://www.openstack.org/vote-paris/Presentation/openstack-reference-architecture-scaling-to-infinity-and-beyond

We’ll explore some of the basic principles of creating a reference architecture and discuss real-world examples that demonstrate why implementing a reference architecture allows scaling from one to thousands of racks with relative ease. We will also touch upon why your reference architecture choices can directly affect interoperability between OpenStack clouds and between OpenStack and major public clouds.

We hope that you can take the time to vote for all of our presentations and we certainly hope to see you in Paris!

Also make sure you check out the list of fantastic presentations at the Mirantis blog.

Posted in OpenStack

 
