This is a weekly roundup of recent relevant industry news articles, useful for educating ourselves as well as for sharing with our peers. Please post any thoughts in the comments section!
For anyone following technology trends, the notion that many businesses are supplementing or even replacing their own data centers with a public cloud like Amazon Web Services is no longer a shock, or even news. Public cloud companies like Amazon, Microsoft, Google, and IBM aggregate vast numbers of connected servers and storage arrays in data centers around the world and rent that capacity out to multiple customers. In particular, businesses with uneven or “spiky” workloads like the ability to pay for data center resources when they need them and to shut them down when they don’t. That’s an attractive alternative to stocking their own data centers for peak loads and then using that full capacity only a few times a year.
With respect to capacity planning, containers boast two advantages over virtual machines: lower overhead and free movement across infrastructure types. Because containers share the host operating system as well as the hardware, infrastructure consumption under Docker, particularly storage and memory, is lower per container than per VM. Dozens of VMs fit on one average physical host, while the same host could run hundreds of containers. In actual implementations, however, most IT teams run containers on VMs rather than on bare metal.
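The density argument can be made concrete with a back-of-the-envelope calculation. The host size and per-instance overhead figures below are illustrative assumptions for the sake of the sketch, not benchmarks:

```python
# Back-of-the-envelope capacity comparison: VMs vs. containers on one host.
# All figures are assumed, illustrative values, not measurements.

HOST_MEMORY_GIB = 128            # assumed physical host memory
APP_MEMORY_GIB = 0.25            # memory the application itself needs

# A VM carries its own guest OS plus hypervisor overhead (assumed ~1.75 GiB);
# a container shares the host kernel, so its runtime overhead is tiny
# (assumed ~10 MiB).
VM_OVERHEAD_GIB = 1.75
CONTAINER_OVERHEAD_GIB = 0.01

vms_per_host = int(HOST_MEMORY_GIB / (APP_MEMORY_GIB + VM_OVERHEAD_GIB))
containers_per_host = int(HOST_MEMORY_GIB / (APP_MEMORY_GIB + CONTAINER_OVERHEAD_GIB))

print(f"VMs per host:        {vms_per_host}")         # dozens
print(f"Containers per host: {containers_per_host}")  # hundreds
```

With these assumed numbers the same host runs roughly 64 VMs but nearly 500 containers, which is the "dozens versus hundreds" gap described above.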
Amazon Web Services (AWS) and other public cloud providers have permanently changed the world of IT infrastructure. In the old days, IT organizations would work with their vendor of choice to buy and deploy new hardware in their own datacenters, paying a large up-front cost for the equipment. Today, the public cloud has removed the headache of buying and managing hardware, with a model that allows IT to spin up compute, storage, and networking resources in the cloud almost instantaneously. To respond to this new IT consumption model, traditional enterprise hardware companies like Hewlett Packard Enterprise (HPE), Cisco Systems, Dell EMC, Lenovo, NetApp, and others are having to change the way they do business to ensure on-premises datacenters remain a viable and cost-effective option.
Network World: Serverless: The next step in cloud computing’s evolution
First, know that “serverless” itself is a bit of a misnomer. There are servers involved behind the scenes, of course, but as you’ll see, they’re abstracted in such a way that developers are free from having to address operational concerns and instead focus on the creativity of writing code. One way to think about the concepts supporting a serverless architecture is to look at them as a set of three layers that sit atop your existing compute, network and storage resources: fabric, framework and functions.
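To make the topmost "functions" layer concrete, here is a minimal sketch following AWS Lambda's Python handler convention; the event shape (a `name` key) is a hypothetical example, not any real service's API:

```python
import json

# Minimal sketch of a function in the "functions" layer of a serverless
# architecture, in the style of an AWS Lambda Python handler. The provider's
# fabric manages the servers; the code deals only with the incoming event.
# The "name" field in the event is an assumed example, not a real schema.

def handler(event, context):
    """Entry point invoked by the platform for each request or trigger."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

In a real deployment the platform would invoke this for you; locally you could exercise it with `handler({"name": "cloud"}, None)` and inspect the returned dictionary.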
Dell, EMC, and VMware rolled out a new VxRack hyperconverged system that has migrated to Dell's PowerEdge servers. Dell Technologies portfolio companies launched the Dell EMC VxRack System for SDDC. EMC, before Dell acquired the company, launched VxRack in 2015 with white-box servers, EMC storage, and VMware software. For EMC, the VxRack system marked a departure from the VCE partnership with Cisco. EMC had teamed up with Cisco on converged VCE building blocks, but later bought out the networking giant's stake.