IT organizations are looking for consistent operations, which are enabled by consistent infrastructure. Public cloud providers like AWS and Microsoft offer an extension of their cloud infrastructure and native services to the private cloud and edge, an approach also known as Data Center as a Service.
Amazon Web Services (AWS) offers a fully managed service, AWS Outposts, that brings AWS infrastructure, services, APIs and tools to any data center or on-premises facility.
Microsoft already has Azure Stack and is working on a new Azure Stack hybrid cloud solution, codenamed “Fiji”, that provides the ability to run Azure as a managed local cloud.
What do these offerings have in common, and why would customers choose one (or even both) of these hybrid cloud options?
They bring the public cloud operating model to the private cloud or edge in the form of one or more racks and servers delivered as a fully managed service.
AWS Outposts (generally available since December 2019) and Azure Stack Fiji (in development) provide the following:
- Extension of the public cloud services to the private cloud and edge
- Consistent infrastructure with consistent operations
- Local processing of data (e.g., analytics at the data source)
- Local data residency (governance and security)
- Low latency access to on-premises systems
- Local migrations and modernization of applications with local system interdependencies
- Build, run and manage on-premises applications using existing and familiar services and tools
- Modernize applications on-premises or at the edge
- Prescriptive infrastructure and vendor managed lifecycle and maintenance (racks and servers)
- Creation of different physical pools and clusters depending on your compute and storage needs (different form factors)
- Same licensing and pricing options on-premises (like in the public cloud)
The relatively new AWS Outposts and the future Azure Stack Fiji solution are also called “Local Cloud as a Service” (LCaaS) or “Data Center as a Service” and are meant to be consumed and delivered in the on-prem data center or at the edge. It’s about bringing the public cloud to your data center or edge location.
The next phase of cloud transformation is about the “edge” of an enterprise cloud, and we know today that private and hybrid cloud strategies are critical for implementing and operating IT infrastructure.
From VMware’s standpoint, it’s not about extending the public cloud to the local data centers. It’s about extending your VMware-based private cloud to the edge or the public cloud.
This article focuses on the local (private) cloud as a service options from VMware, not the public cloud offerings.
In case you would like to know more about VMware’s multi-cloud strategy, which is about running the VMware Cloud Foundation stack on top of a public cloud like AWS, Azure or Google, please check some of my recent posts.
Features and Technologies
Before I describe the different VMware LCaaS offerings based on VMware Cloud Foundation, let me show and explain the different features and technologies my customers ask about when they plan to build a private cloud with public cloud characteristics in mind.
I work with customers from different verticals like
- fast-moving consumer goods
- transportation (travel)
which host IT infrastructure in multiple data centers all over the world, including hundreds of smaller locations. My customers belong to different vertical markets, but they are looking for the same features and technologies when it comes to edge computing and delivering a managed cloud on-premises.
Compute and Storage. They are looking for pre-validated and standardized configuration offerings to meet their (application) needs. Most of them describe hardware blueprints with t-shirt sizes (small, medium, large). These different servers or instances provide different options and attributes, which should offer enough CPU, RAM, storage and networking capacity based on their needs. Usually you’ll find terms like “general purpose”, “compute optimized” or “memory optimized” node types or instances.
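As an illustration, such t-shirt-sized blueprints can be captured in a simple lookup table. The node types and capacity figures below are hypothetical examples, not actual vendor SKUs:

```python
# Hypothetical hardware blueprints ("t-shirt sizes"); the capacity
# figures are illustrative, not actual vendor specifications.
BLUEPRINTS = {
    "small":  {"type": "general purpose",   "cpu_cores": 32, "ram_gb": 384,  "storage_tb": 15},
    "medium": {"type": "compute optimized", "cpu_cores": 64, "ram_gb": 768,  "storage_tb": 30},
    "large":  {"type": "memory optimized",  "cpu_cores": 96, "ram_gb": 1536, "storage_tb": 60},
}

def pick_blueprint(cpu_cores: int, ram_gb: int) -> str:
    """Return the smallest blueprint that satisfies the requested capacity."""
    for name in ("small", "medium", "large"):
        bp = BLUEPRINTS[name]
        if bp["cpu_cores"] >= cpu_cores and bp["ram_gb"] >= ram_gb:
            return name
    raise ValueError("no single node type satisfies the request")
```

A simple selection rule like this is usually the starting point before application owners map their workloads onto the standardized node types.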
Networking. Most of my customers look for the possibility to extend their current network (aka elastic or cloud-scale networking) to any other cloud. They prefer a way to use the existing network and security policies and to provide software-defined networking (SDN) services like routing, firewalling and IDS/IPS, load balancing – also known as virtualized network functions (VNF). Service providers are also looking at network function virtualization (NFV), which includes emerging technologies like 5G and IoT. As cloud native or containerized applications become more important, service providers also discuss containerized network functions (CNF).
Services. Applications consist of one or many (micro-)services. All my conversations are application-centric and focus on the different application components. Most of my discussions are about containers, databases and video/data analytics at the edge.
Security. Customers that run workloads in the public cloud are familiar with the shared responsibility model. The difference between a public cloud and a local cloud as a service offering lies in the physical security (racks, servers, network transits, data center access, etc.).
Scalability and Elasticity. IT providers want to deliver the same simplicity and agility on-premises that their customers (the business) would expect from a public cloud provider. Scalability is about a planned level of capacity that can grow or shrink as needed; elasticity means that capacity adapts dynamically as demand changes.
Resource Pooling and Sharing. Larger enterprises and service providers are interested in creating dedicated workload domains and resource clusters, but also look for a way to provide infrastructure multi-tenancy.
The challenge for today’s IT teams is that edge locations are often not well defined. These IT teams need an efficient way to manage different infrastructure sizes (ranging from 2 nodes up to 16 or 24 nodes), sometimes across up to 400 edge locations.
Rethinking Private Clouds
Organizations have two choices when it comes to deploying a private cloud extension to the edge. They could continue with the current approach, which includes designing, deploying and operating their own private cloud. A newer option would be subscribing to a predefined “Data Center as a Service” offering.
Enterprises need to develop and implement a cloud strategy that supports the existing workloads, which still mostly run on VMware vSphere, and build something vendor- and cloud-agnostic – something that provides a (public) cloud exit strategy at the same time.
If you decide to go for AWS Outposts or the upcoming Azure Stack Fiji solution, which certainly are great options, how would you migrate or evacuate workloads to another cloud and technology stack?
VMware Cloud on Dell EMC
At VMworld 2019 VMware announced the general availability of VMware Cloud on Dell EMC (VMC on Dell EMC). Introduced in 2018 as “Project Dimension”, the idea behind this concept was to deliver a (public) cloud experience to customers on-premises and give them the best of two worlds:
The simplicity, flexibility and cost model of the public cloud with the security and control of your private cloud infrastructure.
Initially, Project Dimension was focused primarily on edge use cases and was not optimized for larger data centers.
Note: This has changed with the introduction of the 2nd generation of VMC on Dell EMC in May 2020 to support different density and performance use cases.
VMC on Dell EMC is a VMware-managed service offering with these components:
- A software-defined data center based on VMware Cloud Foundation (VCF) running on Dell EMC VxRail
- ESXi, vSAN, NSX, vCenter Server
- HCX Advanced
- Dell servers, management & ToR switches, racks, UPS
- Standby VxRail node for expansion (unlicensed)
- Option for half or full-height rack
- Multiple cluster support in a single rack
- Clusters start with a minimum of 3 nodes (not 4 as you would expect from a regular VCF deployment)
- VMware SD-WAN (formerly known as VeloCloud) appliances for remote management purposes only at the moment
- Customer self-service provisioning through cloud.vmware.com
- Maintenance, patching and upgrades of the SDDC performed by VMware
- Maintenance, patching and upgrades of the Dell hardware performed by VMware (Dell provides firmware, drivers and BIOS updates)
- 1- or 3-year term subscription commitment (like with VMC on AWS)
There is no “one size fits all” when it comes to hosting workloads at the edge and in your data centers. VMC on Dell EMC also provides different hardware node types, which should match your defined t-shirt sizes (blueprints).
For a small edge location with a maximum of 5 server nodes, you would go for a half-height rack. The full-height rack can host up to 24 nodes (8 clusters). Currently, the largest instance type would be a good match for high-density, storage-hungry workloads such as VDI deployments, databases or video analytics.
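The rack arithmetic above can be sketched in a couple of lines. The 24-node full-height capacity and the 3-node minimum cluster size come straight from the offering described in this section:

```python
def max_clusters(rack_node_capacity: int, min_cluster_size: int = 3) -> int:
    """How many minimum-sized clusters fit into one rack."""
    return rack_node_capacity // min_cluster_size

# A full-height rack hosts up to 24 nodes, so with 3-node minimum
# clusters it can hold up to 8 clusters; a half-height rack with a
# maximum of 5 nodes can hold only a single cluster.
```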
As HCX is part of the offering, you have the right tool and license included to migrate workloads between vSphere-based private and public clouds.
The following is a list of some VMworld 2020 breakout sessions presented by subject matter experts and focused on VMware Cloud on Dell EMC:
HCP1831: Building a successful VDI solution with VMware Cloud on Dell EMC – Andrew Nielsen, Sr. Director, Workload and Technical Marketing, VMware
HCP1802: Extend Hybrid Cloud to the Edge and Data Center with VMware Cloud on Dell EMC – Varun Chhabra, VP Product Marketing, Dell
HCP1834: Second-Generation VMware Cloud on Dell EMC, Explained by Product Experts – Neeraj Patalay, Product Manager, VMware
VMware Cloud Foundation and HPE Synergy with HPE GreenLake
At VMworld 2019 VMware announced that VMware Cloud Foundation will be offered in HPE’s GreenLake program running on HPE Synergy composable infrastructure (Hybrid Cloud as a Service). This gives VMware customers the opportunity to build a fully managed private cloud with the public cloud benefits in an on-premises environment.
HPE’s vision is built on a single platform that can span across multiple clouds and GreenLake brings the cloud consumption model to joint HPE and VMware customers.
Today, this solution is fully supported and sold by HPE. In case you want to know more, have a look at the VMworld 2020 session Simplify IT with HPE GreenLake Cloud Services and VMware from Erik Vogel, Global VP, Customer Experience, HPE GreenLake, Hewlett Packard Enterprise.
VMC on AWS Outposts
If you are an AWS customer and look for a consistent hybrid cloud experience, then you would consider AWS Outposts.
There is also a VMware variant of AWS Outposts available for customers who already run their on-premises workloads on VMware vSphere or in a vSphere-based cloud environment running on top of the AWS global infrastructure (called VMC on AWS).
VMware Cloud on AWS Outposts is an on-premises as-a-service offering based on VMware Cloud Foundation. It integrates VMware’s software-defined data center software, including vSphere, vSAN and NSX. This Cloud Foundation stack runs on dedicated elastic Amazon EC2 bare-metal infrastructure, delivered on-premises with optimized access to local and remote AWS services.
Key capabilities and use cases:
- Use familiar VMware tools and skillsets
- No need to rewrite applications while migrating workloads
- Direct access to local and native AWS services
- Service is sold, operated and supported by VMware
- VMware as the single point of primary contact for support needs, supplemented by AWS for hardware shipping, installation and configuration
- Host-level HA with automated failover to VMware Cloud on AWS
- Resilient applications that continue to work in the event of WAN link downtime
- Application modernization with access to local and native AWS services
- 1- or 3-year term subscription commitment
- 42U AWS Outposts rack, fully assembled and installed by AWS (including ToR switches)
- Minimum cluster size of 3 nodes (plus 1 dark node)
- Current cluster maximum of 16 nodes
Currently, VMware is running a VMware Cloud on AWS Outposts beta program that lets you try the pre-release software on AWS Outposts infrastructure. An early access program should start in the first half of 2021; it can be considered a customer-paid proof of concept intended for new workloads only (no migrations).
VMware on Azure Stack
To date there are no plans communicated by Microsoft or VMware to make Azure VMware Solution, the vSphere-based cloud offering running on top of Azure, available on-premises on the current or future Azure Stack family.
VMware on Google Anthos
To date there are no plans communicated by Google or VMware to make Google Cloud VMware Engine, the vSphere-based cloud offering running on top of the Google Cloud Platform (GCP), available on-premises.
The only known supported combination of a Google Cloud offering running VMware on-premises is Google Anthos (Google Kubernetes Engine on-prem).
Multi-Cloud Application Portability
Multi-cloud is now the dominant cloud strategy and many of my customers are maintaining a vSphere-based cloud on-premises and use at least two of the big three public clouds (AWS, Azure, Google).
Following a cloud-appropriate approach, customers inspect each application and decide which cloud (private or public) would be the best place to run it. VMware gives customers the option to run the Cloud Foundation technology stack in any cloud – which doesn’t prevent customers from going cloud-native and adding AWS and Azure to the mix at the same time.
How can I achieve application portability in a multi-cloud environment when the underlying platform and technology formats differ from each other?
This is a question I hear a lot. Kubernetes is seen as THE container orchestration tool, which at the same time can abstract multiple public clouds and the complexity that comes with them.
A lot of people also believe that Kubernetes alone is enough to provide application portability, only to figure out later that they have to use different Kubernetes APIs and management consoles for every cloud and Kubernetes flavor (e.g., Rancher, Azure, AWS, Google, Red Hat OpenShift) they work with.
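To make that pain point concrete: with plain Kubernetes tooling you typically end up juggling one kubeconfig context per cloud and switching between them by hand. The cluster and context names below are made up for illustration:

```python
# Hypothetical kubeconfig context names, one per cluster/cloud.
CONTEXTS = {
    "onprem-tanzu": "vsphere",
    "prod-eks":     "aws",
    "prod-aks":     "azure",
    "prod-gke":     "google",
}

def switch_command(cluster: str) -> str:
    """Build the kubectl command needed to target the given cluster."""
    if cluster not in CONTEXTS:
        raise KeyError(f"unknown cluster: {cluster}")
    return f"kubectl config use-context {cluster}"
```

Every cloud-specific console, CLI and API on top of this multiplies the operational overhead, which is exactly the gap a consistent management layer is supposed to close.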
That’s the moment we have to talk about VMware Tanzu and how it can simplify things for you.
The Tanzu portfolio provides the building blocks and steps for modernizing your existing workloads while providing the capabilities of Kubernetes. Additionally, Tanzu has broad support for containerization across the entire application lifecycle.
Tanzu gives you the possibility to build, run, manage, connect and protect applications and to achieve multi-cloud application portability with a consistent platform over any cloud – the so-called “Kubernetes grid”.
Note: I’m not talking about the product “Tanzu Kubernetes Grid” here!
I’m talking about the philosophy of putting a virtual application service layer over your multi-cloud architecture, which provides a consistent application platform.
Tanzu Mission Control is a product under the Tanzu umbrella that provides central management and governance of containers and clusters across data centers, public clouds, and edge.
Enterprises must be able to extend the value of their cloud investments to the edge of the organization.
The edge is just one piece of a bigger picture and customers are looking for a hybrid cloud approach in a multi-cloud world.
Solutions like VMware Cloud on Dell EMC or running VCF on HPE Synergy with HPE GreenLake are only the first steps toward innovation in the private cloud and toward bringing the cost and operating model of the public cloud to enterprises on-premises.
Rather than building infrastructure or services themselves, IT organizations are increasingly looking for ways to consume them as a service.
The two most important differentiators for selecting an as-a-service infrastructure solution provider will be the provider’s ability to enable easy/consistent connectivity and the provider’s established software partner portfolio.
In cases where IT organizations want to host a self-managed data center or local cloud, you can expect VMware to provide a new and appropriate licensing model for it.