This is a weekly browsing of recent relevant industry news articles, helpful for educating ourselves as well as for sharing with our peers. Please post any thoughts in the comments section!
Roughly 35 percent of cloud computing spending is wasted on instances that are over-provisioned and not optimized, according to RightScale, whose platform manages deployments across multiple clouds such as Google Cloud and Amazon Web Services. The data, compiled through RightScale's cloud cost optimization service, boils down like this:
- Compute accounts for 76 percent of cloud spending, followed by database at 15 percent, network at 3 percent, storage at 2 percent, and "other" at 4 percent.
- Spending on instances is up 76 percent from the beginning of the year.
- Spending on Amazon RDS is starting to pick up.
- The majority of enterprises are running instances 24/7.
- 39 percent of instance spend goes to virtual machines running at less than 40 percent CPU and memory utilization.
- Only 19 percent of AWS instances are reserved.
- Companies don't clean up storage that isn't being used, and 7 percent of cloud spending is wasted on unattached storage and old snapshots.
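A hedged back-of-the-envelope sketch of how a few of these percentages might combine into a waste estimate. The 50 percent rightsizing-recovery factor is my assumption for illustration, not a RightScale figure:

```python
# Hypothetical estimate built from the RightScale-style percentages above.
compute_share = 0.76        # compute as a share of total cloud spend
underutilized_share = 0.39  # instance spend on VMs below 40% utilization
unattached_waste = 0.07     # total spend wasted on orphaned storage/snapshots

# Assumption (not from the article): rightsizing recovers roughly half
# the cost of an underutilized VM.
rightsizing_factor = 0.5

compute_waste = compute_share * underutilized_share * rightsizing_factor
total_waste = compute_waste + unattached_waste
print(f"Estimated recoverable waste: {total_waste:.0%} of total cloud spend")
# → Estimated recoverable waste: 22% of total cloud spend
```

That lands below the 35 percent headline number, which makes sense: RightScale's figure presumably also counts waste from always-on instances and the low reserved-instance uptake, neither of which this sketch models.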
With all the many benefits promised by the cloud, it’s no wonder companies large and small are eager to put it to work. But it’s one thing to embrace the cloud on a limited scale; all-out migrations are a much more difficult proposition. Today’s enterprises may have hundreds of applications in their portfolios, each of them linked in complex ways to databases, external services, mainframes, and other applications. Migrating all this successfully is a critical part of adopting the cloud at scale. By some projections, within the next five years more than 40 percent of large enterprises’ workloads will run in the public cloud. Application migration remains a challenge, but there are numerous proven approaches and industry best practices to draw upon for help. Three are especially valuable.
Containers are a big deal, and they’re only going to get bigger. That’s my view after attending the latest KubeCon (and CloudNativeCon) in Seattle last week. A year ago, I was confused about what containers mean for IT, because the name ‘container’ had me thinking it was about the little box that code was stored in: the container image. I’m here to tell you that the container image format itself (Docker, rkt, whatever you like) is not the point. The most important thing about containers is the process of using them, not the things themselves. The process is heavily automated. No more installing software by sitting in front of a console and clicking ‘Next’ every five minutes. Unix people everywhere rejoice that Windows folk have discovered scripting is a good thing.
LightReading: OpenStack Punches Above Its Weight
While OpenStack's installed base is small compared with Amazon Web Services and other public hyperclouds, OpenStack has attracted power users for whom proprietary public clouds are unsuitable. "There is a necessity to have something other than the public cloud, and OpenStack is the answer," Johan Christenson, founder and CEO of CityNetwork, a European cloud provider, tells Light Reading. "Amazon is not the one vendor for everything, even though they are certainly a great company." OpenStack fills needs that the proprietary public clouds can't meet. It brings the benefits of the public cloud to companies' own data centers. "The idea of being able to provision and deprovision slices of compute within your own data center is what OpenStack provides," says Dustin Kirkland, who works on Ubuntu product and strategy at open source software provider Canonical Ltd.
Microsoft Corp. struck a partnership with Elon Musk's artificial intelligence research group, OpenAI, and said the organization will use the company's Azure cloud system for most of its large-scale experiments. OpenAI has been an early customer for Microsoft's Azure N-Series Virtual Machines, a powerful cloud-computing service that relies on Nvidia Corp. graphical processing units. The two will also collaborate on ways to advance AI research and its use, Microsoft and OpenAI said Tuesday in blog posts. "In the coming months we will use thousands to tens of thousands of these machines to increase both the number of experiments we run and the size of the models we train," OpenAI said in its post.
A little-known startup is making a big bet that it can parlay new ARM chips, and backing from a Japanese investment giant, to make its presence felt among the cloud computing giants. The company, Packet, on Tuesday is launching new rentable “bare metal” computing services based on the ARM v8 chip architecture from its data centers in New Jersey, Northern California, Amsterdam, and Tokyo. Customers can set up and launch these resources within minutes, Packet said. The move is unusual because ARM chips are not commonly found in the servers that power corporate data centers or public cloud computing services, such as those sold by Amazon Web Services. They do, however, dominate the smartphone market—scratch an Apple iPhone (God forbid) and you’ll see an ARM chip. And many techies see ARM’s energy-efficient design as an interesting option for servers going forward.