A CoprHD Status Update

Posted by Randy Bias

I wanted to provide you with a big update on Project CoprHD and a mea culpa.  As many of you know, EMC launched CoprHD during EMC World 2015 and made the code generally available on June 5th.  Unfortunately, we are learning the hard way about proper follow-through when open sourcing a project.  As you probably noticed, since June 5th we have “gone dark” and there hasn’t been a lot of information.  Our bad.  That wasn’t intentional; we simply put a lot of focus on execution and not so much on communication.

So this blog posting is an attempt to get us back on track.  We’ll also be trying to communicate better on the mailing lists and project website.  We do believe in openness and transparency; we just aren’t very good at it yet.  :)

You can find out more about CoprHD at my previous blog postings: introducing CoprHD and CoprHD’s architecture.

CoprHD Update

One of the biggest problems to date is that we haven’t published a clear timeline for following through on the original open sourcing of the project.  We’re working on providing more regular updates via the Google Group.  Meanwhile, here’s a quick list of what has happened over the past 45 days and where we are headed.  This includes the big news that as of today (July 31st, 2015) we can accept outside pull requests!

  • Our development team has swapped over and is developing off of the public repository
  • The Jira ticketing system is open, but you’ll have to create an account to create new tickets
  • July 31st: open for bidness (i.e. external contributions)
  • August 13th: open architectural discussion for CoprHD projects
  • April is the next planned major release
  • Expect a tutorial video to be posted every Friday for at least the next several months.
    • These will range from simply how to request your dev accounts and get CoprHD up and running, to whiteboarding the architecture of CoprHD and walking through our directory structure

We also have some very exciting upcoming events:

  • August 17-19th: Join us at the Intel Developer Forum in San Francisco for a joint demonstration with Intel of CoprHD running on Intel’s Rack Scale Architecture (RSA) and managing ScaleIO, EMC’s distributed block storage software.
  • September 1st-3rd: CoprHD’s first ever developer meetup in Cambridge, MA; we are actively soliciting as many folks to come as possible, particularly other storage vendors who want to help with this initiative.  Meet the core developers face-to-face, ask questions, and join what will probably be some focused “hacking” sessions, particularly on parts of the code that need to change to allow a “bigger tent”.

Summary

Again, a brief mea culpa.  We didn’t quite mean to take so long to give everyone an update.  We are super excited over here.  So far everything has been a success, and I think we’ve been caught up in the headiness of it all.  The next step is to get as many more people on board as possible, particularly our fellow storage companies.

Posted in Cloud Computing

Project CoprHD’s Architecture

Posted by Randy Bias

Unless you had your head in the sand, you probably saw my blog post talking about Project CoprHD (“copperhead”), EMC’s first open source product. Exciting times are ahead when one of the world’s largest enterprise vendors embraces open source in a big way. Does it get any bigger than picking your flagship software-defined storage (SDS) controller and open sourcing it? But there is a bigger story here: the story of CoprHD itself and specifically its architecture. CoprHD has a modern application architecture from the so-called “third platform” or “cloud native” school of thought. You probably didn’t know this, and you may even be a little curious about what goes into CoprHD.

So I thought I would give you a basic overview of CoprHD and then a short comparison to OpenStack Cinder, which is the closest analog to CoprHD out there today. CoprHD is meant to work with Cinder, in much the same way that OpenDaylight plays nice with OpenStack Neutron (both “software-defined networking controllers”). There is overlap, but the current OpenStack cultural environment doesn’t really allow swapping out individual components without a lot of blowback, so most software-defined-* controllers simply integrate with existing OpenStack projects to reduce friction and encourage adoption. CoprHD is no different.

Let’s get to it.

Introduction to Software-Defined-* Controllers

SDS or SDN can seem a little confusing at first, but for me it’s easy. There are two parts: the control plane and the data plane:

[Diagram: SD-* controllers, showing the control plane and the data plane]

These two parts represent the two fundamental components of anything that is “software-defined”: programmability (the “software-defined” part) and abstracted resources driven through an API (“virtualization” is the commonly used term, but I will reserve that for compute only).

Frequently, the control plane and data plane are separated, although there is a movement around “hyperconverged” to collapse them, which I have an issue with. When talking about SDS or SDN, the separated control plane is referred to as the “controller”. So you will see folks talk about “SDN controllers” in the networking space, referring to things like OpenStack Neutron, OpenDaylight, the OpenContrail Controller, and the VMware NSX Controller.
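
To make the split concrete, here’s a quick sketch in Java (CoprHD’s own language) of the division of labor. Every name in it is invented for illustration; this is not CoprHD’s actual API. The point is simply that the controller handles programmable management requests and hands back abstracted resources, while application I/O never passes through it.

    /**
     * Conceptual sketch only: a hypothetical control-plane API.
     * The controller handles provisioning and export; once a volume is exported,
     * reads and writes flow host <-> storage system directly over iSCSI/FC/NFS,
     * so the controller sits outside the data path.
     */
    interface SdsControlPlane {
        String createVolume(String virtualPool, long sizeInBytes); // returns a volume id
        String exportToHost(String volumeId, String hostId);       // returns target/device info
    }

    public class ControlVsDataPlane {
        public static void main(String[] args) {
            SdsControlPlane controller = connect("https://controller.example.com:4443");

            // Control plane: programmability -- request an abstracted storage resource.
            String volumeId = controller.createVolume("gold-tier", 500L << 30); // 500 GiB

            // Control plane: make the volume reachable from a host (zoning, masking, export).
            String target = controller.exportToHost(volumeId, "host-42");
            System.out.println("Attach via the data plane at: " + target);

            // Data plane: from here on, I/O goes straight from the host to the array;
            // the controller could disappear without interrupting reads and writes.
        }

        // Stand-in client so the sketch runs end-to-end; a real client would call
        // the controller's REST API over the network.
        static SdsControlPlane connect(String url) {
            return new SdsControlPlane() {
                public String createVolume(String pool, long size) { return "vol-001"; }
                public String exportToHost(String vol, String host) { return "iqn.example:target0/lun0"; }
            };
        }
    }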

As you might infer, given the name “software-defined”, the control plane is one of the more critical components in an SDS architecture. To date there have been few SDS controllers in the marketplace other than OpenStack Cinder and ViPR Controller. So the open sourcing of ViPR into Project CoprHD is extremely important.

An SDS controller is your primary tool for managing storage systems, be they legacy storage arrays or modern scale-out software-only storage solutions; and regardless of whether they are block, file, or object. SDS controllers should:

  1. reduce complexity
  2. decrease provisioning times
  3. provide greater visibility and transparency into the managed storage systems
  4. reduce cost of storage operations and management
  5. be architecturally scalable and extensible

This last item is what I really want to talk about today, because while CoprHD gives you #1-4, #5 is where it really shines. #5 is also a large part of why we chose to open source CoprHD.

CoprHD’s Cloud Native Architecture

Now that CoprHD is out you can check out the code yourself. We’re still in the process of getting all of the pieces together to take pull requests and make our documentation public. Something like this isn’t simple, but there is some initial documentation that I want to walk you through.

CoprHD’s design principles are:

  • CoprHD must be scalable and highly available from the very beginning
    • Scalability should not be an add-on feature
  • It must use Java on Linux [1]
    • A good server platform with good support for concurrency and scalability
  • Must use solid and popular open source components extensively
    • Allowing for focus on the real issues without reinventing the wheel
  • Must avoid redundant technologies as much as possible
    • Adding new technology often adds new problems
  • The system console must be for troubleshooting only
    • All operations must have corresponding REST calls

That seems like a solid set of initial design goals.
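
To illustrate that last principle, here’s a minimal sketch of driving the controller over REST instead of the console. The endpoint, port, and auth header below are placeholders I made up for illustration; consult the CoprHD API documentation for the real resource names.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    /**
     * Minimal sketch: every management operation has a REST equivalent, so anything
     * the GUI or console can do can also be scripted. The endpoint and token below
     * are hypothetical placeholders, not CoprHD's documented API.
     */
    public class ListStorageSystems {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://coprhd.example.com:4443/storage-systems"))
                    .header("X-SDS-AUTH-TOKEN", "<token from the login endpoint>")
                    .GET()
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            System.out.println(response.statusCode());
            System.out.println(response.body());   // listing of managed storage systems
        }
    }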

Let’s take a look at a typical CoprHD deployment:

Here we can see where CoprHD sits relative to the other components in a modern cloud or “software-defined datacenter” (SDDC).

The box labeled CoprHD above, which represents a CoprHD cluster (3 or 5 identical servers), contains:

  • Reverse web proxy & load balancer (nginx)
  • Robust GUI framework (Play Framework)
  • REST API and GUI (EMC-written Java code)
  • SDS controller business logic (EMC-written Java code)
  • Distributed data store (Cassandra)
  • Distributed coordinator (Zookeeper)

Here is a more precise diagram showing the individual services, each running on identical nodes in a 3-node cluster.

I’m certain you can map the components in the list I gave you to the boxes above. Or refer to the formal documentation for more details.

Each CoprHD server is identical, using Cassandra to replicate data between nodes, Zookeeper to coordinate services across nodes, nginx as proxy and load-balancer, and VRRP for service failover.

I love this approach because it uses a bunch of well-baked technologies and doesn’t get overly experimental. VRRP and Zookeeper’s quorum are tried and true. Cassandra powers some of the world’s largest cloud services. Most importantly, this is clearly a “cloud native” pattern, using scale out techniques and distributed software. There are no single points of failure and you can run up to 5 servers according to the documentation, although I don’t see any inherent flaws here that would stop you from running 7, 9, 11, or more (always an odd number to make certain the Zookeeper quorum works).
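
To give a feel for what “Zookeeper to coordinate services across nodes” means in practice, here’s a hedged sketch of the standard leader-election recipe using Apache Curator, a common ZooKeeper client library. This is the generic pattern, not CoprHD’s actual coordination code, and the hosts and latch path are made up.

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.leader.LeaderLatch;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    /**
     * Illustrative leader election: every node runs the same code, but only the
     * node holding the latch performs singleton duties (e.g., scheduled jobs).
     * Generic ZooKeeper pattern, not CoprHD's implementation.
     */
    public class SingletonServiceElection {
        public static void main(String[] args) throws Exception {
            CuratorFramework zk = CuratorFrameworkFactory.newClient(
                    "node1:2181,node2:2181,node3:2181",     // ZooKeeper quorum (hypothetical hosts)
                    new ExponentialBackoffRetry(1000, 3));
            zk.start();

            try (LeaderLatch latch = new LeaderLatch(zk, "/services/scheduler/leader")) {
                latch.start();
                latch.await();                              // blocks until this node is elected
                System.out.println("This node now runs the singleton service");
                // ... leader-only work; if this node dies, its ephemeral znode vanishes
                // and another node is elected automatically.
            } finally {
                zk.close();
            }
        }
    }

The nice property is that any node can fail and another simply takes over, which is exactly the behavior you want from a cluster of identical servers.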

CoprHD’s Pluggable Architecture

The documentation unfortunately isn’t up yet, but CoprHD supports managing a wide variety of traditional storage arrays and cloud storage software, including EMC hardware systems (VNX, VMAX, etc.), EMC software systems (ScaleIO), NetApp, and Hitachi Data Systems. This means that the CoprHD system, and the controllersvc specifically, has been designed to be extensible, with a pluggable architecture.
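
What “pluggable” means in practice is that supporting a new array comes down to implementing a driver behind a common contract. The sketch below is my own simplification with hypothetical names, not CoprHD’s real southbound SPI, but it captures the idea: the business logic talks to every backend through the same small interface.

    /**
     * Hypothetical southbound driver contract (illustrative; not CoprHD's actual SPI).
     * Each storage platform -- VNX, VMAX, ScaleIO, NetApp, HDS, or a Cinder backend --
     * would ship one implementation, while the controller's business logic stays unchanged.
     */
    public interface BlockStorageDriver {

        /** Discover the array and report its serial number and storage pools. */
        StorageSystemInfo discover(String managementIp, Credentials credentials);

        /** Create a volume of the requested size in the given pool; returns a volume id. */
        String createVolume(String poolId, long sizeInBytes);

        /** Expose the volume to an initiator (an iSCSI IQN or FC WWN). */
        void exportVolume(String volumeId, String initiatorId);

        /** Remove the volume from the array. */
        void deleteVolume(String volumeId);
    }

    /** Supporting types for the sketch (also hypothetical). */
    record Credentials(String username, String password) { }
    record StorageSystemInfo(String serialNumber, java.util.List<String> poolIds) { }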

In addition, by integrating to OpenStack Cinder, CoprHD can support the management of additional hardware vendors. The major downside here, of course, is that it’s difficult to deploy Cinder by itself. You’ll need a fair bit of the rest of OpenStack in order to get it up and running.

CoprHD’s Scalable Control Interfaces: REST API & Non-Blocking Async GUI

One of the signatures of the third platform is that the control systems need to scale well. This is because systems are highly automated, and the new API-centric model means there will be large numbers of API calls. Relatedly, mobile applications and interfaces take advantage of technologies such as WebSockets to allow a more interactive GUI than before. Another interesting challenge is that for both the API and GUI, some interactions are inherently synchronous (you take an action and see an immediate result) and some are asynchronous (you take an action and either wait for a result to be pushed back or poll regularly for a response).

CoprHD uses a standard shared-nothing architecture, and the only state resides in the Cassandra cluster. This means the REST API can run on all active nodes and effectively scale with the number of nodes. The GUI is built on the Play Framework, a distributed, non-blocking, asynchronous web framework that is mobile friendly. This is the same framework that powers massive websites like LinkedIn.

Combined, CoprHD’s built-in API and GUI are designed for true scale.
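
To make the asynchronous interaction pattern described above concrete, here’s a hedged client-side sketch of the “submit, get a task back, poll until done” flow. The URLs, headers, and JSON are placeholders of my own, not the documented CoprHD API.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    /**
     * Illustrative async interaction: a provisioning request returns immediately
     * with a task reference, and the client polls that task until it completes.
     * Endpoints and payloads are hypothetical, not CoprHD's documented API.
     */
    public class AsyncTaskPolling {
        private static final HttpClient HTTP = HttpClient.newHttpClient();

        public static void main(String[] args) throws Exception {
            // Submit the long-running operation; the API answers right away with a task URI.
            HttpRequest create = HttpRequest.newBuilder()
                    .uri(URI.create("https://coprhd.example.com:4443/block/volumes"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(
                            "{\"name\": \"vol-demo\", \"size\": \"10GB\", \"vpool\": \"gold\"}"))
                    .build();
            String taskUri = HTTP.send(create, HttpResponse.BodyHandlers.ofString())
                    .headers().firstValue("Location").orElseThrow();

            // Poll the task until it leaves the pending state (simplified; a real client
            // would parse the JSON body and back off between attempts).
            while (true) {
                HttpRequest poll = HttpRequest.newBuilder(URI.create(taskUri)).GET().build();
                String body = HTTP.send(poll, HttpResponse.BodyHandlers.ofString()).body();
                if (!body.contains("\"state\":\"pending\"")) {
                    System.out.println("Task finished: " + body);
                    break;
                }
                Thread.sleep(2000);   // poll interval
            }
        }
    }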

Cassandra as Distributed Data Store

Cassandra is covered elsewhere, but I always like to point to this slide I used to describe Netflix’s testing of it. What you see here is near-perfect linear scaling moving from 50 to 300 servers. And I think that says pretty much all you need to know about Cassandra scalability.

[Slide: Netflix Cassandra benchmark showing near-linear scaling from 50 to 300 servers]

Comparing CoprHD vs. ScaleIO, Ceph, Cinder, and Manila

The last thing I think we need to do is a quick comparison against some of what are considered CoprHD’s contemporaries. I don’t really see them that way, but folks will draw parallels and I’d like to tackle those head on. Right off the bat we should be clear: CoprHD is designed to be an independent SDS controller, whereas the rest of these tools are not. CoprHD is also vendor neutral and designed for file, object, and block storage.

So here’s a quick comparison chart to make things clear. This comparison is purely about the SDS controller aspects of these technologies, not about storage capabilities themselves.

[Chart: comparing the SDS controller aspects of CoprHD, ScaleIO, Ceph, Cinder, and Manila]

I am not going to spend a lot of time on this chart. Some may argue about its contents, but probably any such arguments will come down to semantics. The takeaway here is that CoprHD is the only true scale-out pure-play SDS controller out there that is open source, vendor neutral, and designed for extensibility.

Summing Up CoprHD

CoprHD is written for the third platform and designed for horizontal scalability. It can act as a bridge to manage both legacy “second platform” storage systems and more modern “third platform” storage systems. It is open source and is building a community of like-minded individuals. CoprHD delivers on the criteria required to be a successful SDS controller and has an architecture that is scalable and future-proofed. It should be at the center of any SDS strategy you are designing for your private or public cloud.


[1] Some folks are allergic to Java. I’m not a fan, but I’ve never figured out why they take issue other than its verbosity. C++ isn’t exactly terse. For those critical of Java’s place in the “third platform”, you should be aware that large chunks of Amazon Web Services (AWS) are written in Java. It’s still one of the most robust language virtual machines out there, and that’s why next-generation languages like Scala and Clojure run on top of it, eh?

 

Posted in Cloud Computing

EMC and Canonical expand OpenStack Partnership

Posted by Randy Bias

As you saw at last week’s OpenStack Summit, EMC® is expanding its partnership with Canonical amongst others. I want to take a moment to talk specifically about our relationship with Canonical. We see it as a team-up between the world’s #1 storage provider and the world’s #1 cloud Linux distribution.

For the last two years, EMC has been a part of Canonical’s Cloud Partner Program and OpenStack Interoperability Lab (OIL). During this time EMC created a new Juju Charm for EMC VNX technology, which enables deployment by Canonical’s Juju modeling software. This past week, we announced the availability of a new OpenStack solution with Ubuntu OpenStack and Canonical as part of the Reference Architecture Program announced last November in Paris. The solution is built in close collaboration with Canonical in EMC labs, then tested, optimized, and certified.

Cloud workloads are driving storage requirements, making storage a crucial part of any OpenStack deployment. Companies look for scalable systems that leverage the features of advanced enterprise storage while avoiding complexity. EMC and Canonical created an easily modeled reference architecture using EMC storage platforms (VNX® and EMC XtremIO™), Ubuntu OpenStack, and Juju. This allows for repeatable and automated cloud deployments.

According to the OpenStack User Survey, 55% of production clouds today run on Ubuntu. Many of these deployments have stringent requirements for enterprise quality storage. EMC and Canonical together fulfill these requirements by providing a reference architecture combining the world’s #1 storage, #1 cloud Linux distribution, and tools for repeatable automated deployments.

We will be releasing an XtremIO (our all flash array) Charm and eventually ScaleIO (our software-only distributed block storage) as well. ScaleIO is a member of EMC’s Software Defined Storage portfolio, has been proven at massive scale, and is a great alternative to Ceph. You will soon be able to download a free, unsupported and unlimited version of ScaleIO to evaluate yourself.  Look for these products and others, such as ViPR Controller, to be available in Canonical’s Charm Store and through Canonical’s Autopilot OpenStack deployment software later this year.

This work is in support of eventually making all of EMC’s storage solutions available via OpenStack drivers for use with Ubuntu OpenStack. Given the wide acceptance of Ubuntu in the OpenStack community, EMC will use Ubuntu internally and in future products. We believe that these efforts, coupled with the quality professional services and support customers have come to expect from us, will help give enterprise customers peace of mind. This will accelerate adoption of OpenStack cloud solutions in the enterprise.

With EMC storage and Canonical solutions, customers realize these benefits:

  • A repeatable deployable cloud infrastructure
  • Reduced operating costs
  • Compatibility with multiple hardware and software vendors
  • Advanced storage features only found with enterprise storage solutions

Our reference architecture takes the Ubuntu OpenStack distribution and combines it with EMC VNX or XtremIO arrays and Brocade 6510 switches. With Juju automation, the time to production for OpenStack is dramatically reduced.

The solution for Canonical can be found at this link, and a brief video with John Zannos can be found here on EMCTV. The EMC and Canonical architecture is below for your perusal.

EMC and Canonical Ubuntu OpenStack Reference Architecture

This reference architecture underscores EMC’s commitment to providing customers choice. EMC customers can now choose to build an Ubuntu OpenStack cloud based on EMC storage and use Juju for deployment automation.

It’s an exciting time for Stackers as the community and customers continue to demand reference architectures, repeatable processes, and support for existing and future enterprise storage systems.

Posted in OpenStack

State of the Stack v4 – OpenStack In All Its Glory

Posted by Randy Bias

Yesterday I gave the seminal State of the Stack presentation at the OpenStack Summit.  This is the 4th major iteration of the deck.  This particular version took a very different direction for several reasons:

  1. Most of the audience is well steeped in OpenStack and providing the normal “speeds and feeds” seemed pedantic
  2. There were critical unaddressed issues in the community that I felt needed to be called out
  3. It seemed to me that the situation was becoming more urgent and I needed to be more direct than usual (yes, that *is* possible…)

There are two forms in which you can consume this: the Slideshare and the YouTube video from the summit.  I recommend the video first and then the Slideshare.  The reason being that with the video I provide a great deal of additional color, assuming you can keep up with my rapid-fire delivery.  Color in this case can be construed several different ways.

I hope you enjoy. If you do, please distribute widely via twitter, email, etc. :)

The video:

The Slideshare:

State of the Stack v4 – OpenStack in All It's Glory from Randy Bias

Posted in OpenStack

OpenStack Self-Improvement Mini-Survey

Posted by Randy Bias

Want to help make OpenStack great?  I’ve put together a very quick survey to get some deeper feedback than is provided by the User Survey.  The intention is to provide some additional information around the State of the Stack v4 I’m giving next week at the summit.

I will really appreciate it if you take the 2-3 minutes out of your day to answer this honestly.

Click here for the survey.

UPDATE: Horizon was missing from the survey and I have added it.  Heartfelt apologies to all of the Horizon committers.  An honest mistake on my part.  OpenStack is almost too big to keep track of.  :)

Posted in OpenStack

Introducing CoprHD (“copperhead”), the Cornerstone of a Software-Defined Future

Posted by Randy Bias

You’ve probably been wondering what I’ve been working on post-acquisition, and yesterday you saw some of the fruits of my (and many others’) labor in the CoprHD announcement.  CoprHD, pronounced “copperhead” like the snake, is EMC’s first ever open source product.  That EMC would announce open sourcing a product is probably as big a surprise to many EMCers as it may be to you, but more importantly it’s a sign of the times.  It’s a sign of where customers want to take the market.  It’s also the sign of a company willing to disrupt itself and its own thinking.

This is not your father’s EMC.  This is a new EMC and I hope that CoprHD, a core storage technology based on EMC’s existing ViPR Controller product, will show you that we are very serious about this initiative.  It’s not a me too move.

This move is partly in direct response to enterprise customer requests and our own assessment of where the market is headed.  Perhaps more importantly, this move drives freedom of choice and the maintenance of control on the part of our customers.  Any community member (partner, customer, competitor) is free to add support for any storage system.  CoprHD is central to a vendor neutral SDS controller strategy.

For those of you not familiar with ViPR Controller, it is a “software-defined storage” (SDS) controller, much like OpenDaylight is a software-defined networking (SDN) controller.  This means that ViPR can control and manage a variety of storage platforms and, in fact, today it is already multi-vendor, supporting not only EMC, but NetApp, Hitachi, and many others.  ViPR Controller has REST APIs, the ability to integrate with OpenStack Cinder APIs, and a pluggable backend, and it is truly the only software stack I’ve seen out there that fulfills the hopes and dreams of a true SDS controller: not only heterogeneous storage management, but also metering, a storage service catalog, resource pooling, and much, much more.

CoprHD is the open source version of ViPR Controller.  A comparison:

[Chart: comparing CoprHD vs. ViPR Controller]

What is “Non-essential, EMC-specific code”?  In this case, it’s simply the part of the code that enables “phone home” support to EMC, which has no relevance to users of CoprHD’s SDS services with non-EMC data stores.  CoprHD is in every way ViPR Controller and the two are interchangeable, delivering on the promise of vendor neutrality and providing customers control, choice, and community.  A quick caveat: please be aware that at this time, although this is the same code base and APIs, a clean installation is required to convert CoprHD to ViPR Controller or vice versa.  There is no “upgrade” process and it’s unclear that it ever makes sense to create one, although we might eventually create a migration tool depending on customer demand for one.

The rest of this blog post seeks to answer the key questions many have about this initiative:

  • Why ViPR Controller?
  • Why now?
  • Why would EMC do this?

Exciting times.  Let’s walk through it!

The Emerging Strategy for Enterprise: Open Source First

More and more we’re seeing from customers that proprietary software solutions have to be justified.  Today, the default is to first use open source software and open APIs to solve 80% of the problem and only to move to proprietary software when it is truly required.  This reflects the growing awareness of traditional enterprises that it is in their best interests to maintain control of their IT capabilities, reduce costs and increase agility.  This strategy, of course, mirrors what newer webscale enterprises such as Amazon and Google already knew.  Webscale players have been estimated to be as much as 80-90% open source software internally, compared to traditional enterprises which can be closer to 20-30% [1].

We heard from many enterprise customers that they were reluctant to adopt ViPR Controller, despite it being proven in production, simply because it was not open source.  No one wants “lock-in”; what they really mean is that they want vendor neutrality and to maintain control.

Businesses also want to know not only that they could switch vendors for support of an open source project, but, perhaps more importantly, that they could directly affect the prioritization of roadmap features by providing their own development resources or hiring outside engineering firms.

Finally, any open source first strategy includes the need and desire to have like-minded consumers of the same project around the table.  Businesses want to know that others like them are close by and available in public forums such as bug tracking systems, code review tools, and Internet Relay Chat (IRC).

This then is the “control” provided by an open source first strategy:

  1. Vendor neutrality and choice of support options
  2. Direct influence and contribution to the roadmap
  3. Ability to engage with like-minded businesses through public forums

You’ll probably notice that none of these equate to “free”.  Nowhere in our dialogues with customers has there been an overt focus on free software.  Certainly every business wants to cut costs, but all are willing to pay for value.

EMC Puts Customers and Business Outcomes First

EMC is renowned for being the world’s leader in storage technology, but more than a storage business, EMC is an information management business.  We put a premium on helping customers succeed even when that means that there may be an impact to our business.  If you look at today’s EMC, it is organized in such a way that an entire division, Emerging Technologies Division, is dedicated to disrupting the old way of doing things.  Software-only technologies such as ScaleIO, ViPR, and ECS (the non-appliance version) exist here.  Software that can run on anyone’s hardware, not just EMC’s.  All-flash technologies like XtremIO were birthed here.  ETD has led EMC’s community development with EMC{code} and is also leading the way in helping EMC become more involved with open source initiatives and delivering open source distributions of some of its products.

Our product strategy is to meet the customer where they are and to be “flexible on the plan, while firm on the long term mission.”  Our broader strategy is to drive standardization and clarity in the industry around “Software-Defined Storage” (SDS), to help establish open and standard APIs, and to ease the management of storage through automation and vendor neutral management systems.  This means continually evolving and adjusting our business and our products.  It also implies a need to do more than storage (hence Emerging Technologies and not Emerging Storage Technologies Division), but more on that at a later date.

Achieving this vision requires leadership and forethought.  CoprHD is a sign of our willingness to go the distance, adapt and change, and disrupt ourselves.  Software-defined infrastructure and software-defined datacenters are a critical part of EMC II’s future and CoprHD is vital to enabling the SDS layer of any SDDC future.

CoprHD Is Leading The Way in SDS

Make no mistake, CoprHD (code available in June) is leading the way in SDS.  EMC welcomes everyone who wants to participate, and we have already heard from customers who will ask their vendors to come to the party by adding support for their products to the open source project.  A truly software-defined future awaits, and EMC is using its deep storage roots and focus on software to deliver on that future.

Again, this is NOT your father’s EMC.  This is a new EMC.

Thank-Yous Are In Order

Finally, although I acted as a “lightning rod” to drive organizational change, I mostly educated, where others acted.  I want to thank a number of EMCers without whom the CoprHD open source project simply wouldn’t have happened.  A short and incomplete list of amazing people who made this possible follows:

  • Jeremy Burton: executive buy-in and sponsorship
  • Manuvir Das: engineering leadership
  • Salvatore DeSimone: architecture, thought-leadership, and co-educator
  • James Lally: project management
  • The entire ViPR Controller software team for being willing to make a change
  • Intel for stepping up and helping us become a better open source company
  • Canonical for validating our direction and intentions
  • EMC{code} team for encouragement and feedback

 


[1] An estimate from my friends at Black Duck Software.

Posted in Cloud Computing

What AWS Revenues Mean for Public Cloud and OpenStack More Generally

Posted by Randy Bias

At the risk of sounding like “I told you so”, I wanted to comment on the recent Amazon 10-Q report.  If you were paying attention you likely saw it, as it was the first time that AWS revenues were reported broken out from the rest of Amazon.com, ending years of speculation on revenue.  The net of it is that AWS revenue for Q1 2015 was $1.566B, putting it on a run rate of just over $6B this year (4 × $1.566B ≈ $6.26B), which is almost on the money for what I predicted at the 2011 Cloud Connect keynote I gave [ VIDEO, SLIDES ].  Predictions in cloud pundit land are tricky, as we’re wrong about as often as we are right; however, I do find it somewhat gratifying to have gotten this particular prediction correct, and I will explain why shortly.

The 2015 Q1 AWS 10-Q

If you don’t want to wade through the 10-Q, there are some choice pieces in it that are quite fascinating.  For example, as pointed out here, AWS is actually the fastest-growing segment of Amazon by a long shot.  It is also the most profitable in terms of gross margin, according to the 10-Q.  I remember having trouble convincing people that AWS was operating at a significant profit over the last 5 years, but here it is, laid out in plain black and white numbers.

Other interesting highlights include:

  • Growth from Q1 2014 -> Q1 2015 is 50% y/o/y, matching my original numbers of 100% y/o/y growth in the early days scaling down to 50% in 2015/2016
  • Goodwill + acquisitions is $760M, more than what was spent on Amazon.com (retail) internationally and a third of what was spent on Amazon.com in North America
  • $1.1B spent in Q1 2015, the “majority of which is to support AWS and additional capacity to support our fulfillment operations”
  • AWS y/o/y growth is 49% compared to 24% for Amazon.com in North America and AWS accounts for 7% of ALL Amazon sales

Here is a choice bit from the 10-Q:

Property and equipment acquired under capital leases were $954 million and $716 million during Q1 2015 and Q1 2014. This reflects additional investments in support of continued business growth primarily due to investments in technology infrastructure for AWS. We expect this trend to continue over time.

The AWS Public Cloud is Here to Stay

I’ve always been bullish on public cloud and I think these numbers reinforce that it’s potentially a massively disruptive business model. Similarly, I’ve been disappointed that there has been considerable knee-jerk resistance to looking at AWS as a partner, particularly in OpenStack land [1].

What does it mean now that we can all agree that AWS has built something fundamentally new?  A single business comparable to all the rest of the U.S. hosting market combined?  A business focused almost exclusively on net new “platform 3” applications that is growing at an unprecedented pace?

It means we need to get serious about public and hybrid cloud.  It means that OpenStack needs to view AWS as a partner and that we need to get serious about the AWS APIs.  It means we should also be looking closely at the Azure APIs, given Azure appears to be the clear runner-up.

As the speculation ceases, let’s remember, this is about creating a whole new market segment, not about making incremental improvements to something we’ve done before.


[1] If you haven’t yet, make sure to check out the latest release we cut of the AWS APIs for OpenStack

Posted in Cloud Computing, OpenStack

DevOps Event @ EMC World 2015

Posted by Randy Bias

I am super excited to announce that EMC is sponsoring a DevOps event at EMC World 2015.  As many of you guessed, with the acquisition of Cloudscaling, and the recent creation of the EMC{code} initiative, we are trying to become a company that engages more directly with developers and the DevOps community in particular.

We have some great guests who are going to come and speak and some of the EMC{code} evangelists will be leading sessions as well.  Here’s a list of the currently planned sessions:

  • Engaging the New Developer Paradigm
  • The DevOps Toolkit
  • The Enterprise Journey to DevOps
  • Docker 101
  • Container Management at Scale
  • Deploying Data-Centric APIs
  • Predictive Analytics to Prevent Fraud
  • Deploying Modern Apps with CloudFoundry

This will not be your normal EMC event and does not require registration for EMC World to attend.  So if you are in Las Vegas May 3rd, come join us!

REGISTER HERE

Posted in OpenStack

Hyper-Converged Confusion

Posted by Randy Bias


I have had my doubts about converged and hyper-converged infrastructure since the Vblock launched, but I eventually came around to understanding why enterprises love the VCE Vblock. I am now starting to think of converged infrastructure (CI) as really “enterprise computing 2.0”. CI dramatically reduces operational costs and allows for the creation of relatively homogeneous environments for “platform 2” applications. Hyper-converged infrastructure (HCI) seeks to take CI to the next most obvious point: extreme homogeneity and a drastic reduction in labor costs.

I can see why so many come to what seems like a foregone conclusion: let’s just make CI even easier! Unfortunately, I don’t think people have thought through all of the ramifications of HCI in enterprises. Hyper-converged is really an approach designed for small and medium businesses, not larger enterprises operating at scale.  The primary use cases in larger environments are niche applications with less stringent scalability and security requirements: VDI, remote office / branch office servers, and the like.

There are three major challenges with HCI in larger-scale enterprise production environments that I will cover today:

  1. Security
  2. Independent scaling of resources
  3. Scaling the control plane requires scaling the data plane

I think this perspective will be controversial, even inside of EMC, so hopefully I can drive a good conversation about it.

Let’s take a look.

Continue reading →

Posted in Cloud Computing

Vancouver OpenStack Summit – EMC Federation Presentations

Posted by Randy Bias

Voting for presentations at the Vancouver OpenStack Summit is now open.  Please help us by voting on the sessions submitted by EMC Federation speakers along with any other sessions that cover topics that might interest you.  Please vote at your earliest convenience since each vote helps!

OpenStack community members are voting on presentations to be presented at the OpenStack Summit, May 18-22, in Vancouver, Canada. Hundreds of high-quality submissions were received, and your votes can help determine which ones to include in the schedule.

PDF containing all of the submissions by the EMC Federation: 

https://my.syncplicity.com/share/6ndhnwrkpxkgodp/OpenStack%20Liberty%20Summit%20-%20Vancouver%20-%20EMC%20Federation%20Session%20Proposals%20v2

Here is a list you can click through to vote. 

Thank you so much for taking the time to vote on these sessions!

Posted in OpenStack

 
