VMware Explore Europe 2022 is history. This year felt different and very special! Rooms were fully booked, and people were queuing up in the hallways. The crowd had a HUGE interest in technical sessions from known speakers like Cormac Hogan, Frank Denneman, Duncan Epping, William Lam, and many more!
Compared to VMware Explore US, there were not as many major announcements, but I thought it might be helpful again to list the ones that seem most interesting and relevant.
VMware Aria Hub Free Tier
For me, the biggest and most important announcement was the Aria Hub free tier. I am convinced that Aria Hub will be the next big thing for VMware and I am sure that it will change how the world manages a multi-cloud infrastructure.
VMware Aria Hub is a multi-cloud management platform that unifies the management disciplines of cost, performance, configuration, and delivery automation with a common control plane and data model for any cloud, any platform, any tool, and every persona. It helps you align multiple teams and solutions on a common understanding of resources, relationships, historical changes, applications, and accounts, fundamental to managing a multi-cloud environment.
The new free tier enables customers to inventory, map, filter, and search resources from up to two of their native public cloud accounts, currently from either AWS or Azure. It also helps you understand the relationships of your resources to other resources, policies, and other key components in your public cloud and Kubernetes environments. WOW!
Tanzu Mission Control On-Premises
Many customers have asked for it, and it is coming: Tanzu Mission Control (TMC) will become available on-premises for sovereign cloud partners/providers and enterprise customers!
There is a private beta coming. Hence, I cannot provide more information for now.
Tanzu Kubernetes Grid 2.1
At VMware Explore US 2022, VMware announced Tanzu Kubernetes Grid (TKG) 2.0, and at Explore Europe 2022, they announced TKG 2.1, which adds support for Oracle Cloud Infrastructure (OCI). Additionally, TKG 2.1 introduces the option of leveraging VMs for the management cluster. Both deployment models will feel familiar, and both now support a single, unified way of creating clusters using a new API called ClusterClass.
VMware also unveiled new enhancements for Tanzu Service Mesh (TSM) that bring VM discovery and integration into the mesh, providing the ability to combine VMs and containers in the same service mesh for secure communication and consistent policy.
VMware Cloud on Equinix Metal (VMC-E)
The last thing I want to highlight is the VMC-E announcement. It is a combination of VMware Cloud IaaS with Equinix Metal hardware as-a-service, which can be deployed in over 30 Equinix global data centers.
VMware Cloud on Equinix Metal is a great option for enterprises that want the flexibility and performance of the public cloud but whose business requirements prevent moving data or applications there. It offers full compatibility and consistency with on-premises and VMware Cloud operational models and policies, as well as zero-downtime migration.
VMware Cloud on Equinix Metal is a fully managed solution by VMware (delivered, operated, managed, supported).
Today, more than ever, both humans and machines consume and process data. We humans consume data through multiple applications hosted in different clouds, from different devices like smartphones, laptops, and tablets. Companies are building applications that need to look good and work well on any platform or device.
At the same time, developers are building new applications following cloud-native principles. A cloud-native architecture is a design pattern for applications that are built for the cloud. Most cloud-native apps are organized as microservices which are used to break up larger applications into loosely coupled units that can be managed by smaller teams. Resilience and scale are achieved through horizontal scaling, distributed processing, and automated placement of failed components.
Different people have different understandings of “cloud-native”, and the chances are high that you will get different answers depending on whom you ask. Let us look at the official definition from the CNCF:
“Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.”
A widely accepted methodology for building cloud-based applications is the “Twelve-Factor Application”. It uses declarative formats for automation to minimize time and costs, offers maximum portability between execution environments, and is suitable for deployment on modern cloud platforms. The twelve-factor methodology can be applied with any programming language and may use any combination of backing services (caching, queuing, databases).
Interestingly, we now see other factors like API-first, telemetry, and security complementing this list.
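One of the twelve factors, storing config in the environment, is easy to sketch. The following Python snippet illustrates the idea; the variable names and defaults are hypothetical, not from any particular product:

```python
import os

def load_config(env=os.environ):
    """Read configuration from the environment (twelve-factor III: Config).

    The same build artifact can then run unchanged in dev, staging, and
    production; only the environment around it differs.
    """
    return {
        # Hypothetical variable names for illustration only.
        "database_url": env.get("DATABASE_URL", "sqlite:///dev.db"),
        "cache_url": env.get("CACHE_URL", "memory://"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }

# A production-like environment simply overrides the defaults:
prod = load_config({"DATABASE_URL": "postgres://db.internal/app",
                    "LOG_LEVEL": "WARN"})
```

Because nothing is hardcoded, promoting the app from one execution environment to another means changing environment variables, not code.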
While doing research for my book about “workload mobility and application portability”, I saw the term “API-first” many times.
Then I started to remember that VMware acquired Mesh7 a while ago and they announced Tanzu Service Mesh Enterprise last year at VMworld Europe (now known as VMware Explore). API security was even one of their main topics during the networking & security solutions keynote presented by Tom Gillis.
That is why I thought it was time to better understand this topic and write a piece about APIs. Let us start with some basics first.
What is an API?
An application programming interface (API) is a way for two or more software components to communicate with each other using a set of defined protocols and definitions. APIs are here to make the developer’s life easier.
I bet you have already seen parts of Google Maps embedded in different websites when looking for a specific business or restaurant location. Most websites and developers would use Google Maps in this case, because it just makes sense, right? That is why Google exposes the Google Maps API, so developers can embed Google Maps objects very easily in a standardized way. After all, have you ever met anyone who wants to develop their own version of Google Maps?
For enterprises, APIs are a very elegant way to share data with customers or other external users. Public APIs like the Google Maps API can be used by partners, who can then access your data. And we all know that data is the new oil: companies can make a lot of money today by sharing their data.
Even when using private APIs (internal use only), you decide who can access your API and data. This is one of the reasons why API security and API management become more important. You want to provide secure access when sensitive data is being exposed.
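To make the idea concrete, here is a toy read-only API in Python. The endpoint path and the data are invented for illustration; the point is that an API exposes a defined contract (inputs, outputs, error cases) and nothing else:

```python
import json

# A hypothetical internal dataset the provider wants to share.
_PLACES = {
    "cafe-zurich": {"name": "Cafe Zurich", "lat": 47.3769, "lon": 8.5417},
}

def handle_request(method, path):
    """A toy API endpoint: defined inputs, defined outputs, nothing else exposed.

    Real APIs add authentication, versioning, and rate limits on top of this.
    """
    if method == "GET" and path.startswith("/v1/places/"):
        key = path.rsplit("/", 1)[-1]
        if key in _PLACES:
            return 200, json.dumps(_PLACES[key])
        return 404, json.dumps({"error": "not found"})
    return 405, json.dumps({"error": "method not allowed"})

status, body = handle_request("GET", "/v1/places/cafe-zurich")
```

Consumers only ever see the `/v1/places/...` contract; how the provider stores or computes the data stays hidden, which is exactly what makes APIs suitable for sharing data with external parties.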
What is an API Gateway?
For microservices-based apps, it makes sense to implement an API gateway, because it can act as a single entry point for all API calls made to your system. And it doesn’t matter if your system/application is hosted on-premises, in the public cloud, or a combination of both. The API gateway takes care of the request (API call) and returns the requested data.
API gateways can also handle other tasks like authentication, rate limiting, and statistics. This is important, for example, when you want to monetize some of your APIs by offering a service to consumers or other companies.
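The gateway's role as a single entry point can be sketched in a few lines of Python. This is a conceptual sketch, not any vendor's implementation; the service names and the rate limit are invented:

```python
import time

class ApiGateway:
    """Toy API gateway: a single entry point that routes requests by path
    prefix and applies a simple fixed-window rate limit per client."""

    def __init__(self, routes, limit_per_minute=60):
        self.routes = routes      # path prefix -> backend callable
        self.limit = limit_per_minute
        self.windows = {}         # client -> (window_start, request_count)

    def _allow(self, client, now=None):
        now = time.time() if now is None else now
        start, count = self.windows.get(client, (now, 0))
        if now - start >= 60:     # start a new one-minute window
            start, count = now, 0
        if count >= self.limit:
            return False
        self.windows[client] = (start, count + 1)
        return True

    def handle(self, client, path):
        if not self._allow(client):
            return 429, "rate limit exceeded"
        for prefix, backend in self.routes.items():
            if path.startswith(prefix):
                return 200, backend(path)   # forward and return the response
        return 404, "no route"

# Hypothetical backends; in reality these would be HTTP calls to services
# running on-premises, in the public cloud, or both.
gw = ApiGateway({"/orders": lambda p: "orders-service",
                 "/users": lambda p: "users-service"},
                limit_per_minute=2)
r1 = gw.handle("client-a", "/orders/42")
r2 = gw.handle("client-a", "/users/7")
r3 = gw.handle("client-a", "/orders/43")  # third call in the window is throttled
```

The caller never needs to know where the backends live, which is what makes the gateway pattern work equally well on-premises, in the public cloud, or in a mix of both.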
What is Spring Cloud Gateway for VMware Tanzu?
Spring Cloud Gateway for VMware Tanzu provides a simple way to route internal and external API requests to application services that expose APIs. This solution is based on the open-source Spring Cloud Gateway project and provides a library for building API gateways on top of Spring and Java.
Because Spring Cloud Gateway is designed to sit between a requester and the resource being requested, it is in a position to intercept, analyze, and modify requests.
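Spring Cloud Gateway itself is configured in Java or YAML; as a language-neutral sketch of that intercept-and-modify idea (explicitly not actual Spring code), a filter chain might look like this, with hypothetical header names:

```python
def strip_internal_headers(request):
    """Pre-filter: drop headers that must never leave the edge (mutates in place)."""
    request.get("headers", {}).pop("X-Internal-Token", None)
    return request

def add_correlation_header(request):
    """Pre-filter: attach a (hypothetical) tracing header before forwarding."""
    request.setdefault("headers", {})["X-Correlation-Id"] = "req-123"
    return request

def apply_filters(request, filters):
    # Each filter sees, and may modify, the request before it reaches the backend.
    for f in filters:
        request = f(request)
    return request

req = {"path": "/api/orders", "headers": {"X-Internal-Token": "secret"}}
filtered = apply_filters(req, [strip_internal_headers, add_correlation_header])
```

Sitting in the request path like this is what lets a gateway enforce security, observability, and routing policy without touching the application services themselves.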
Revitalize Legacy Apps with APIs
Before we had microservices, there were monolithic applications: an all-in-one application architecture in which all services are installed on the same virtual machine and depend on each other.
There are multiple reasons why such a monolith cannot be broken up into smaller pieces and modernized. Sometimes it is not (technically) possible, not worth it, or it just takes too long. Hence, many companies still use such monolithic (legacy) applications. The best example here is the mainframe, which often still runs business-critical applications.
I always thought that my customers only have two options when modernizing applications:
Start from scratch (throw the old app away)
Refactor/Rewrite an application
Rewriting an application takes time and costs money. Imagine refactoring 50 of your applications, splitting these monoliths up into microservices, connecting those hundreds or thousands of microservices, and at the same time taking care of security (e.g., vulnerabilities).
So, what are you going to do now?
APIs seem to provide a very cost-effective way to integrate some of the older applications with newer ones. With this approach, one can abstract away the data and services from the underlying (legacy) application infrastructure. APIs can extend the life of a legacy application and could be the start of a phased application modernization approach.
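The facade idea behind this approach can be sketched as follows: a thin API layer in front of a legacy routine, so consumers never touch the old interface directly. The legacy function and its record format here are invented for illustration:

```python
import json

def legacy_lookup(raw_key):
    """Stand-in for an old routine with an awkward interface
    (think fixed-width mainframe records or screen-scraped output).

    Returns a semicolon-separated record: id;name;balance
    """
    records = {"1001": "1001;ACME CORP;2500.00"}
    return records.get(raw_key, "")

def get_customer(customer_id):
    """Modern API facade: a stable JSON contract over the legacy call.

    The legacy system stays untouched; only this layer knows its format,
    so it can later be swapped for a modernized backend without breaking
    any consumer.
    """
    record = legacy_lookup(customer_id)
    if not record:
        return 404, json.dumps({"error": "unknown customer"})
    cid, name, balance = record.split(";")
    return 200, json.dumps({"id": cid, "name": name, "balance": float(balance)})

status, body = get_customer("1001")
```

This is the phased-modernization idea in miniature: consumers code against the JSON contract today, and the legacy backend behind `get_customer` can be replaced piece by piece tomorrow.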
Tanzu Service Mesh Enterprise
At the moment, we only have an API gateway that sits in front of our microservices. Multiple (micro)services in aggregate create the API you want to expose to your internal or external customers. The question now is: how do you plan to expose this API when your microservices are distributed over one or more private or public clouds?
When we talk about APIs, we talk about data in motion, which is why we must secure this data on its way from its source to any destination. And you want to secure the application and data without increasing application latency or degrading the user experience. Tanzu Service Mesh Enterprise addresses this with the following capabilities:
API Security. API security is achieved through API vulnerability detection and mitigation, API baselining, and API drift detection (including API parameter and schema validation).
Personally Identifiable Information (PII) segmentation and detection. PII data is segmented using attribute-based access control (ABAC) and is detected via proper PII data detection and tracking, and end-user detection mechanisms.
API Security Visibility. API security is monitored using API discovery, security posture dashboards, and rich event auditing.
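Schema validation and drift detection, as listed above, conceptually boil down to comparing observed API traffic against a declared contract. A toy sketch of that comparison (the schema format and field names are invented, not TSM's actual mechanism):

```python
EXPECTED_SCHEMA = {
    # Hypothetical declared contract for a POST /v1/orders payload.
    "customer_id": str,
    "amount": float,
}

def check_drift(payload, schema=EXPECTED_SCHEMA):
    """Return a list of deviations between an observed payload and the schema.

    A real product would learn the baseline automatically and continuously
    validate types, ranges, and required fields for every API call.
    """
    issues = []
    for field, ftype in schema.items():
        if field not in payload:
            issues.append(f"missing field: {field}")
        elif not isinstance(payload[field], ftype):
            issues.append(f"type drift on {field}")
    for field in payload:
        if field not in schema:
            issues.append(f"unexpected field: {field}")
    return issues

drift = check_drift({"customer_id": "c-1", "amount": "10"})  # amount is a string
```

Flagging deviations like this at the mesh layer is what allows drift to be caught without modifying the services that produce or consume the API.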
APIs are used to connect different applications. They are also used to aggregate services or functions that can be consumed by other businesses or partners. Modern, containerized applications bring with them a large number of APIs that can be hosted in any cloud.
With Spring Cloud Gateway and Tanzu Service Mesh Enterprise, VMware can deliver application connectivity services that enable improved developer experience and more secure operations.
It took me almost a year to realize the strengths of these (combined) products and why VMware acquired Mesh7, for example. But it makes sense to me now, even though I do not yet completely understand all the key features of Spring Cloud Gateway and Tanzu Service Mesh.
Everyone talks about multi-cloud, and in most cases they mean the so-called big three: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. Looking at the 2021 Gartner Magic Quadrant for Cloud Infrastructure & Platform Services, one can also spot Alibaba Cloud, Oracle, IBM, and Tencent Cloud.
VMware has a strategic partnership with six of these hyperscalers, and all six public clouds offer VMware’s software-defined data center (SDDC) stack on top of their global infrastructure:
While I mostly get to talk about AWS, AVS, and GCVE, I am finally getting the chance to attend an OCVS customer workshop led by Oracle. That is why I wanted to prepare myself accordingly and share my learnings with you.
Amazon Web Services, Microsoft Azure, and Google Cloud dominate the cloud market, but Oracle has unique capabilities and characteristics that no one else can deliver. Additionally, Oracle Cloud Infrastructure (OCI) has shown an impressive pace of innovation over the past two years, which led to a 16-point increase on Gartner’s solution scorecard for OCI (November 2021, from 62% to 78%) and put them in fourth place behind Alibaba Cloud!
What is Oracle Cloud VMware Solution?
Oracle Cloud VMware Solution (OCVS) is a result of the strategic partnership announced by VMware and Oracle in September 2019. Like other VMware Cloud solutions such as VMC on AWS, AVS, or GCVE, Oracle Cloud VMware Solution enables customers to run VMware Cloud Foundation on Oracle’s Generation 2 Cloud Infrastructure.
This means that running an on-premises VMware-based infrastructure combined with OCVS should make cloud migrations easier and faster, because both sides share the same foundation of vSphere, vSAN, and NSX.
Key Differentiator #1 – Different SDDC Bundles
Customers can choose between a multi-host SDDC (minimum of three production hosts) and a single-host SDDC made for test and dev environments. Oracle guarantees a monthly uptime percentage of at least 99.9% for the OCVS service.
OCVS offers three different ESXi software versions and supports the following versions of other components:
Key Differentiator #2 – Full Control
The VMware Cloud offerings from AWS, Azure, and Google are all vendor-controlled, and customers get limited access to the VMware hosts and infrastructure components. With Oracle Cloud VMware Solution, customers get bare-metal servers and the same operational experience as on-premises. This means full control over the VMware infrastructure and its components:
SSH access to ESXi
Edit vSAN cluster settings
Browse datastores; upload and delete files
Customer controls the upgrade policy (version, time, defer)
Oracle has NO ACCESS after the SDDC provisioning!
Note: According to Oracle, it takes about two hours to deploy a new SDDC consisting of three production hosts.
Customers can choose between Intel- and AMD-based hosts:
Two-socket BM.DenseIO2.52 with two CPUs, each running 26 cores (Intel)
Two-socket BM.DenseIO.E4.128 with two CPUs, each running 16 cores (AMD)
Two-socket BM.DenseIO.E4.128 with two CPUs, each running 32 cores (AMD)
Two-socket BM.DenseIO.E4.128 with two CPUs, each running 64 cores (AMD)
Details about the compute shapes can be found here.
Key Differentiator #3 – Availability Domains
To provide high throughput and low latency, an OCVS SDDC is deployed by default across a minimum of three fault domains within a single availability domain in a region. However, upon request it is also possible to deploy your SDDC across multiple availability domains (ADs), which comes with a few limitations:
While OCVS can scale from 3 up to 64 hosts in a single SDDC, Oracle recommends a maximum of 16 ESXi hosts in a multi-AD architecture
This architecture can have an impact on vSAN storage synchronization as well as rebuild and resync times
Most hyperscalers only let you use two availability zones and fault domains in the same region. With Oracle, it is possible to distribute the minimum of three hosts across three different availability domains. An availability domain consists of one or more data centers within the same region.
Note: Traffic between ADs within a region is free of charge.
Key Differentiator #4 – Networking
Because OCVS is customer-managed and can be operated like your on-premises environment, you also get “full” control over the network. OCVS is installed within the customer’s tenancy, which gives customers the advantage of running their VMware SDDC workloads in the same subnet as OCI native services. This provides lower latency to OCI native services, which is especially valuable for customers using Exadata, for example.
Another important advantage of this architecture is the capability to create VLAN-backed port groups on your vSphere Distributed Switch (VDS).
Key Differentiator #5 – External Storage
Since March 2022, the OCI File Storage service (NFS) has been certified as secondary storage for an OCVS cluster. This allows customers to scale the storage layer of the SDDC without having to add new compute resources at the same time.
And, as announced on 22 August 2022 with Oracle’s summer ’22 release, OCVS customers can now connect to certified OCI Block Storage through iSCSI as a second external storage option.
Block Storage provides high IOPS, and data is stored redundantly across storage servers with built-in repair mechanisms, backed by a 99.99% uptime SLA.
Key Differentiator #6 – Billing Options
OCVS is currently only sold and supported by Oracle. Like with other cloud providers and VMware Cloud offerings, customers have different pricing options depending upon their commitment levels:
The rule of thumb for any hyperscaler is that a one-year commitment gets around a 30% discount and a three-year commitment around 50%.
The unique characteristic here is the monthly commitment option, which is calculated with a discount of 16–17% depending on the compute shape.
Currently, OCI is available in 39 different cloud regions (21 countries) and Oracle announced five more by the end of 2022. On day one of each region, OCVS is available with a consistent and predictable pricing that doesn’t vary from region to region.
For comparison: AWS has launched 27 regions, of which 19 can host the VMware Cloud on AWS service. In Switzerland, AWS just opened its new data center without the VMware Cloud on AWS service being available, while OCVS is already available in Zurich.
While OCVS is a great solution for joint VMware and Oracle customers, it is not necessary for customers to use Oracle Cloud Infrastructure native solutions.
Data Center Expansion
As you just learned, OCVS is a great fit if you want to maintain the same VMware software versions on-premises and in OCI. The classic use case here is the pure data center expansion scenario, which allows you to stretch your on-premises infrastructure to OCI without the need to use OCI’s native services.
VMware Horizon on OCVS
As I mentioned at the beginning, Oracle Cloud VMware Solution is based on VMware Cloud Foundation and so it is no surprise that Horizon on OCVS is fully supported.
A Horizon deployment on OCVS works a little differently compared to the on-premises installation, and there is no feature parity yet:
Note: Support for the NSX Advanced Load Balancer (Avi) is still a roadmap item.
VMware Tanzu for OCVS
Since April 2022, it has been possible for joint VMware and Oracle customers to use Tanzu Standard and its components with Oracle Cloud VMware Solution. Tanzu Standard comes with VMware’s Kubernetes distribution, Tanzu Kubernetes Grid (TKG), and with Tanzu Mission Control, the right solution for multi-cloud, multi-cluster Kubernetes management.
With TMC you can deploy and manage TKG clusters on vSphere on-premises or on Oracle Cloud VMware Solution. You can even attach existing Kubernetes clusters from other vendors, such as Red Hat OpenShift, Amazon EKS, or Azure Kubernetes Service (AKS).
Multi-Cloud is a mess. You cannot solve that multi-cloud complexity with a single vendor or one single supercloud (or intercloud), it’s just not possible. But different vendors can help you on your multi-cloud journey to make your and the platform team’s life easier. The whole world talks about DevOps or DevSecOps and then there’s the shift-left approach which puts more responsibility on developers. It seems to me that too many times we forget the “ops” part of DevOps. That is why I would like to highlight the need for Tanzu Mission Control (which is part of Tanzu for Kubernetes Operations) and Tanzu Application Platform.
Challenges for Operations
What started as a VMware-based cloud in your data centers has evolved into a very heterogeneous architecture with two or more public clouds like Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform. IT analysts tell us that 75% of businesses are already using two or more public clouds. Businesses choose their public cloud providers based on workload or application characteristics and a public cloud’s known strengths. Companies want to modernize their legacy applications in the public clouds, because in most cases a simple rehost or migration (lift & shift) does not deliver the value or innovation they are aiming for.
A modern application is a collection of microservices, which are light, fault tolerant and small. Microservices can run in containers deployed in a private or public cloud. Many operations and platform teams see cloud-native as going to Kubernetes. But cloud-native is so much more than the provisioning and orchestration of containers with Kubernetes. It’s about collaboration, DevOps, internal processes and supply chains, observability/self-healing, continuous delivery/deployment and cloud infrastructures.
Expectation of Kubernetes
Kubernetes 1.0 was contributed as an open source seed technology by Google to the Linux Foundation in 2015, which formed the sub-foundation “Cloud Native Computing Foundation” (CNCF). Founding CNCF members include companies like Google, Red Hat, Intel, Cisco, IBM and VMware.
Currently, the CNCF has over 167k project contributors, around 800 members and more than 130 certified Kubernetes distributions and platforms. Open source projects and the adoption of cloud native technologies are constantly growing.
A look at the CNCF Cloud Native Interactive Landscape gives you an understanding of how many open source projects are supported by the CNCF and maintained by this open source community. Since its donation to the CNCF, almost every company on this planet has adopted Kubernetes, or a distribution of it:
Amazon Elastic Kubernetes Service Distro (Amazon EKS-D)
These were just a few of the 63 certified Kubernetes distributions in total. What about the certified hosted Kubernetes service offerings? Let me list some of the popular ones here:
Alibaba Cloud Container Service for Kubernetes
Amazon Elastic Kubernetes Service (EKS)
Azure Kubernetes Service (AKS)
Google Kubernetes Engine (GKE)
Oracle Container Engine
OVH Managed Kubernetes Service
Red Hat OpenShift Dedicated
All these clouds and vendors expose Kubernetes implementations, but writing software that performs equally well across all clouds still seems to be a myth. At least we have a common denominator, a consistency across all clouds, right? That is Kubernetes.
Consistent Operations and Experience
It is very interesting to see that the big three hyperscalers Amazon, Microsoft, and Google are moving towards multi-cloud-enabled services and products to provide a consistent experience from an operations standpoint, especially for Kubernetes clusters.
Microsoft has Azure Arc now, Google provides Anthos (GKE clusters) for any cloud, and AWS has also realized that the future consists of multiple clouds and offers EKS Anywhere.
They all have realized that customers need a centralized management and control plane. Customers are looking for simplified operations and consistent experience when managing multi-cloud K8s clusters.
Tanzu Mission Control (TMC)
Imagine having a centralized dashboard with management capabilities that provides a unified policy engine and allows you to manage the lifecycle of all the different K8s clusters you have.
TMC offers built-in security policies and cluster inspection capabilities (CIS benchmarks) so you can apply additional controls to your Kubernetes deployments. Leveraging the open source project Velero, Tanzu Mission Control gives ops teams the capability to easily back up and restore clusters and namespaces. Just four weeks ago, VMware announced cross-cluster backup and restore capabilities for Tanzu Mission Control that let Kubernetes-based applications “become” infrastructure- and distribution-agnostic.
Tanzu Mission Control lets you attach any CNCF-conformant K8s cluster. When attached to TMC, you can manage policies for all Kubernetes distributions such as Tanzu Kubernetes Grid (TKG), Azure Kubernetes Service, Google Kubernetes Engine or OpenShift.
A preview of lifecycle management for Amazon Elastic Kubernetes Service (EKS) clusters enables direct provisioning and management of Amazon EKS clusters, giving developers and operators less friction and more choice of cluster types. Teams will be able to simplify multi-cloud, multi-cluster Kubernetes management with centralized lifecycle management of both Tanzu Kubernetes Grid and Amazon EKS cluster types.
Note: With this announcement, I would expect support for Azure Kubernetes Service (AKS) to follow soon.
Tanzu Mission Control provides cross-cloud services for your Kubernetes clusters deployed in multiple clouds. But there is still another problem.
Developers are being asked to write code and provide business logic that could run on-premises, on AWS, on Azure, or in any other public cloud. Every cloud provider has an interest in providing you with their technologies and services. This includes hosted Kubernetes offerings (with different Kubernetes distributions), load balancers, storage, databases, APIs, observability, security tools, and many other components. To me, it sounds very painful and difficult to learn and understand the details of every cloud provider.
Cross-cloud services alone do not solve that problem. And obviously, Kubernetes does not solve it by itself either.
What if Kubernetes plus centralized management and visibility are not “the” solution, but rather the answer is something that sits on top of Kubernetes?
And Then Came PaaS
Kubernetes is a platform for building platforms and is not really meant to be used by developers.
The CNCF landscape is huge and complex to understand and integrate, so it was just a logical move that companies started looking for pre-assembled solutions like platform-as-a-service (PaaS). I think Tanzu Application Service (formerly known as Pivotal Cloud Foundry), Heroku, Red Hat OpenShift, and AWS Elastic Beanstalk are the most famous examples of PaaS.
The challenge with building applications that run on a PaaS is sometimes the need to use all the PaaS-specific components to fully benefit from it. What if someone wants to run their own database? What if the PaaS offering restricts programming languages, frameworks, or libraries? Or is it the vendor lock-in that bothers you?
PaaS solutions alone do not seem to solve the developer experience problem for everyone either.
Do you want to build the platform by yourself or get something off the shelf? There is a big difference between using a platform and running one. 🙂
Bring Your Own Kubernetes To A Portable PaaS
What’s next after IaaS has evolved to CaaS (because of Kubernetes) and PaaS? It is adPaaS (Application Developer PaaS).
The idea behind the golden path or paved road is that the (internal) platform offers some form of pre-assembled components and supported approach (best practices) that make software development faster and more scalable. Developers don’t have to reinvent the wheel by browsing through a very fragmented ecosystem of developer tooling where the best way to find out how to do things was to ask the community or your colleagues.
VMware announced Tanzu Application Platform (TAP) in September 2021 with the statement, that TAP will provide a better developer experience on any Kubernetes.
VMware Tanzu Application Platform delivers a prepaved path to production and a streamlined, end-to-end developer experience on any Kubernetes.
It is the platform team’s duty to install and configure the opinionated Tanzu Application Platform as an overlay on top of any Kubernetes cluster. They also integrate existing components of Kubernetes such as storage and networking. An opinionated platform provides the structure and abstraction you are looking for: The platform “does” it for you. In other words, TAP is a prescribed architecture and path with the necessary modularity and flexibility to boost developer productivity.
Developers can focus on writing code and do not have to fully understand details such as container image registries, image building and scanning, ingress, RBAC, or deploying and running the application.
TAP comes with many popular best-of-breed open source projects that are improving the DevSecOps experience:
Backstage. Backstage is an open platform for building developer portals, created at Spotify, donated to the CNCF, and maintained by a worldwide community of contributors.
Carvel. Carvel provides a set of reliable, single-purpose, composable tools that aid in your application building, configuration, and deployment to Kubernetes.
Cartographer. Cartographer is a VMware-backed project and is a Supply Chain Choreographer for Kubernetes. It allows App Operators to create secure and pre-approved paths to production by integrating Kubernetes resources with the elements of their existing toolchains (e.g. Jenkins).
Tekton. Tekton is a cloud-native, open source framework for creating CI/CD systems. It allows developers to build, test, and deploy across cloud providers and on-premise systems.
Grype. Grype is a vulnerability scanner for container images and file systems.
Cloud Native Runtimes for VMware Tanzu. Cloud Native Runtimes for Tanzu is a serverless application runtime for Kubernetes that is based on Knative and runs on a single Kubernetes cluster.
At VMware Explore US 2022, VMware announced new capabilities that will be released in Tanzu Application Platform 1.3. The most important added functionalities for me are:
Support for Red Hat OpenShift. Tanzu Application Platform 1.3 will be available on Red Hat OpenShift, running on vSphere and on bare metal.
Support for air-gapped installations. Support for regulated and disconnected environments, helping to ensure that the components, upgrades, and patches are made available to the system and that they operate consistently and correctly in the controlled environment and keep data secure.
Carbon Black Integration. Tanzu Application Platform expands the ecosystem of supported vulnerability scanners with a beta integration with VMware Carbon Black scanner to enable customer choice and leverage their existing investments in securing their supply chain.
The Power Combo for Multi-Cloud
A mix of different workloads, like virtual machines and containers hosted in multiple clouds, introduces complexity. With the powerful combination of Tanzu Mission Control and Tanzu Application Platform, companies can unlock the full potential of their platform teams and developers by reducing complexity while creating and using abstraction layers on top of their multi-cloud infrastructure.
VMworld is now VMware Explore and is currently happening in San Francisco! This is a consolidated list of the announcements from day 1 (August 30, 2022).
VMware Introduces vSphere 8, vSAN 8 and VMware Cloud Foundation+
VMware today introduced VMware vSphere 8 and VMware vSAN 8—major new releases of VMware’s compute and storage solutions.
vSphere 8 – vSphere 8 introduces vSphere on DPUs, previously known as Project Monterey. In close collaboration with technology partners AMD, Intel and NVIDIA as well as OEM system partners Dell Technologies, Hewlett Packard Enterprise and Lenovo, vSphere on DPUs will unlock hardware innovation helping customers meet the throughput and latency needs of modern distributed workloads. vSphere will enable this by offloading and accelerating network and security infrastructure functions onto DPUs from CPUs.
vSphere 8 will dramatically accelerate AI and machine learning applications by doubling the virtual GPU devices per VM, delivering a 4x increase of passthrough devices, and supporting vendor device groups which enable binding of high-speed networking devices and the GPU.
vSAN 8 – vSAN 8 introduces breakthrough performance and hyper-efficiency. Built from the ground up, the new vSAN Express Storage Architecture (ESA) will enhance the performance, storage efficiency, data protection and management of vSAN running on the latest generation storage devices. vSAN 8 will provide customers with a future-ready infrastructure that supports modern TLC storage devices and delivers up to a 4x performance boost.
VMware Cloud Foundation+ – VMware introduces a new cloud-connected architecture for managing and operating full-stack HCI in data centers. Built on vSphere+ and vSAN+, VMware Cloud Foundation+ will bring this architecture to your own data center or co-location facility.
VMware Cloud Foundation+ will deliver new admin, developer and hybrid cloud services through a simplified subscription model and keyless entitlement. VMware Cloud Foundation 4.5 will enable VMware Cloud Foundation+ by adding vSphere+ and vSAN+, plus a cloud gateway that provides access to the VMware Cloud Console as part of the full stack architecture.
VMware Cloud for Hyperscalers
VMC on AWS – Amazon Elastic Compute Cloud (Amazon EC2) I4i instances for I/O-intensive Workloads: Powered by 3rd generation Intel® Xeon® Scalable processors (Ice Lake), Amazon EC2 instances help deliver better workload support and delivery, lower TCO, and increased scalability and application performance. Compared to I3, the I4i instances provide nearly twice the number of physical cores, twice the memory, three times the storage capacity, and three times the network bandwidth.
Amazon FSx for NetApp ONTAP Integration Availability – as a native AWS cloud storage service that is certified as a supplemental datastore for VMware Cloud on AWS, FSx for ONTAP offers fully managed shared storage built on the familiar NetApp ONTAP file system trusted by VMware customers running on premises today. Customers can now use FSx for ONTAP as a simple and elastic datastore for VMware Cloud on AWS, enabling them to scale storage up or down independently from compute while paying only for the resources they need.
VMware Cloud Flex Storage Availability – A new VMware-managed and natively integrated cloud storage and data management solution that offers supplemental datastore-level access for VMware Cloud on AWS. With just a few clicks in the VMware Cloud Console, customers can scale their storage environment without adding hosts, and elastically adjust storage capacity up or down as needed for every application. Customers also benefit from a simple, pay-as-you-consume pricing model. Together with VMware vSAN, VMware Cloud Flex Storage offers flexibility and customer value in terms of resilience, performance, scale, and cost in the cloud.
VMware Cloud Flex Compute – “Preview” of a new cloud compute model for VMware Cloud on AWS. With it, VMware introduces a “resource-defined” compute model in place of the “hardware-defined” compute instance model, providing customers higher flexibility, elasticity, and speed to better meet the cost and performance requirements of enterprise applications. It will help customers get started faster with VMware Cloud on AWS by using smaller consumable units.
Oracle Cloud VMware Solution – New features and capabilities with VMware Tanzu Standard Edition and introduced support for single host SDDCs for non-production workloads.
VMware Cloud Management – VMware Aria
VMware unveiled a multi-cloud management portfolio called VMware Aria, which provides a set of end-to-end solutions for managing the cost, performance, configuration, and delivery of infrastructure and cloud native applications.
VMware Aria is a new brand that unifies the vRealize components, Tanzu Observability by Wavefront, and CloudHealth under one umbrella, one name.
The VMware products and services within the VMware Aria portfolio are:
VMware Aria is anchored by VMware Aria Hub (formerly known as Project Ensemble), which provides centralized views and controls to manage the entire multi-cloud environment, and leverages VMware Aria Graph to provide a common definition of applications, resources, roles, and accounts.
VMware Aria Graph provides a single source of truth that is updated in near-real time. Other solutions on the market were designed in a slower moving era, primarily for change management processes and asset tracking. By contrast, VMware Aria Graph is designed expressly for cloud-native operations.
VMware Aria provides features and functions that span management disciplines and clouds to deliver unique value for multi-cloud governance, cross-cloud migration, and actionable business insights. In addition, there are three new end-to-end management services built on top of VMware Aria Hub and VMware Aria Graph:
VMware Aria Guardrails – Automate enforcement of cloud guardrails for networking, security, cost, performance, and configuration at scale for multi-cloud environments with an everything-as-code approach
VMware Aria Migration – Accelerate and simplify the multi-cloud migration journey by automating assessment, planning, and execution in conjunction with VMware HCX
Project Northstar – Project Northstar is a SaaS-based network and security offering that will empower NSX customers with a set of on-demand multi-cloud networking and security services, end-to-end visibility, and controls. Customers will be able to use a centralized cloud console to gain instant access to networking and security services, such as network and security policy controls, Network Detection and Response (NDR), NSX Intelligence, Advanced Load Balancing (ALB), Web Application Firewall (WAF), and HCX. It will support both private cloud and VMware Cloud deployments running on public clouds and enable enterprises to build flexible network infrastructure that they can spin up and down in minutes.
DPU-based Acceleration for NSX – Formerly known as Project Monterey, VMware announced that starting with NSX 4.0 and vSphere 8.0, customers can leverage DPU-based acceleration using SmartNICs. Offloading NSX services to the DPU can accelerate networking and security functions without impacting the host CPUs, addressing the needs of modern applications and other network-intensive and latency-sensitive applications.
Project Trinidad – Available as tech preview, Project Trinidad extends VMware’s API security and analytics by deploying sensors on Kubernetes clusters and uses machine learning with business logic inference to detect anomalous behavior in east-west traffic between microservices.
Project Watch – VMware unveiled Project Watch, a new approach to multi-cloud networking and security that will provide advanced app-to-app policy controls to help with continuous risk and compliance assessment. In technology preview, Project Watch will help network security and compliance teams to continuously observe, assess, and dynamically mitigate risk and compliance problems in composite multi-cloud applications.
VMware Edge Compute Stack 2.0 – VMware announced the VMware Edge Compute Stack v1.0 last year and is now adding more features and functionalities optimized for different use cases at the enterprise edge – shipped with vSphere 8 and Tanzu Kubernetes Grid 2.0. VMware, for the first time, will introduce initial support for non-x86 processor-based specialized small form factor edge platforms to simultaneously run IT/OT workloads and workflows on a single stack.
VMware Private Mobile Network (Beta) – Delivered by service providers, this new managed service offering provides enterprises with private 4G/5G mobile connectivity in support of edge-native applications. VMware will empower partners with a single PMN orchestrator to operate multi-tenant private 4G/5G networks with an enterprise-grade solution.
Modern Applications (VMware Tanzu)
Tanzu Application Platform – VMware pre-announced new Tanzu Application Platform (TAP) 1.3 capabilities like the availability on RedHat OpenShift or the support for air-gapped installations for regulated and disconnected environments.
Tanzu Kubernetes Grid – With the release of TKG 2.0, VMware now includes a unified experience for applications running on any cloud. In the near future, Tanzu Kubernetes Grid 2.0 should support both Supervisor-based and VM-based management cluster models. On vSphere 8, both models will be supported, and VM-based management clusters will continue to be available on previous versions of vSphere and on public clouds. In other words, VMware continues with its “TKGS” and “TKGm” flavors.
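The unified cluster creation behind this comes from the upstream Cluster API concept of a ClusterClass: a Cluster resource just references a class and a topology instead of spelling out every component. As a rough sketch of what such a resource looks like when built programmatically – the class name "tkg-default" and the node pool class "node-pool" are hypothetical placeholders, not official TKG names:

```python
# Sketch of a Cluster API "Cluster" resource that instantiates a ClusterClass.
# "tkg-default" and "node-pool" are illustrative placeholders; a real TKG 2.x
# environment ships its own ClusterClass definitions.

def build_cluster_manifest(name, namespace, class_name, k8s_version, workers):
    """Return a Cluster resource (as a dict) that references a ClusterClass."""
    return {
        "apiVersion": "cluster.x-k8s.io/v1beta1",
        "kind": "Cluster",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "topology": {
                "class": class_name,     # the ClusterClass to instantiate
                "version": k8s_version,  # desired Kubernetes version
                "workers": {
                    "machineDeployments": [
                        {"class": "node-pool", "name": "md-0", "replicas": workers}
                    ]
                },
            }
        },
    }

manifest = build_cluster_manifest("demo", "default", "tkg-default", "v1.24.9", 3)
```

In practice such a manifest would be serialized to YAML and applied with kubectl or a Kubernetes client; the point is that one small, declarative document drives cluster creation on every supported infrastructure.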
Further announcements include support for customer-owned enterprise certificate authorities through an integration with Venafi; improved security with enterprise-approved container image registries, data services support, and external services support; and a global SLO dashboard that allows developers and site reliability engineers to view all managed service SLOs, helping with capacity planning, troubleshooting, and understanding the health of their applications.
VMware unveiled how it is advancing self-configuring, self-healing and self-securing outcomes across four key technology areas that are delivered by the Anywhere Workspace platform:
VDI and DaaS
Digital Employee Experience
Unified Endpoint Management
VMware is introducing a next generation of VMware Horizon Cloud that will enable multi-cloud agility and flexibility. This new release represents a major update to Horizon Cloud on Microsoft Azure that can dramatically simplify the infrastructure that needs to be deployed inside customer environments, reducing infrastructure costs in some cases by over 70% while increasing scalability and reliability of VMware’s DaaS platform.
Workspace ONE support for Windows OS multi-user mode is now available in Tech Preview for Azure Active Directory-based deployments; and it will soon be extended to Active Directory-based deployments.
VMware also announced the coming tech preview of Workspace ONE Cloud Marketplace, which will feature dashboards, widgets, reports, Freestyle Orchestrator workflows, and other resources that can be imported to help customers adopt additional solutions.
Horizon Managed Desktop – I am very excited about this announcement, because it will provide a managed service offering that takes care of lifecycle services, support, and more, on top of a customer-provided infrastructure. This will help customers that don’t have in-house experts get to value with VDI faster.
VMware Cloud Foundation+, VMware vSphere 8, VMware vSAN 8 and VMware Edge Compute Stack 2.0 are all expected to be available by October 28, 2022 (the close of VMware’s Q3 FY23). VMware Private Mobile Network is expected to be available in beta in VMware’s Q3 FY23.
Not bad for the first day, right? Stay tuned for more exciting VMware Explore announcements!
I am finally taking the time to write this piece about interclouds, workload mobility and application portability. Some of my engagements during the past four weeks led me several times to discussions about interclouds and workload mobility.
Cloud to Cloud Interoperability and Federation
Who would have thought back in 2012 that we would have so many (public) cloud providers like AWS, Azure, Google Cloud, IBM Cloud, Oracle Cloud etc. in 2022?
Ten years ago, many people and companies were convinced that the future would consist of public cloud infrastructure only and that local, self-managed data centers would disappear.
This vision and perception of cloud computing has changed dramatically over the past few years. We see public cloud providers stretching their cloud services and infrastructure into large data centers and edge locations. It seems they realized that the future is going to look different than a lot of people anticipated back then.
Apparently, the word “intercloud” – and the need for it – has existed for a long time already, which I was not aware of. Let’s take David Bernstein’s presentation as an example, which I found by googling “intercloud”:
Currently there are no implicit and transparent interoperability standards in place in order for disparate cloud computing environments to be able to seamlessly federate and interoperate amongst themselves. Proposed P2302 standards are a layered set of such protocols, called “Intercloud Protocols”, to solve the interoperability related challenges. The P2302 standards propose the overall design of decentralized, scalable, self-organizing federated “Intercloud” topology.
I do not know David Bernstein and the IEEE working group personally, but it would be great to hear from some of them, what they think about the current cloud computing architectures and how they envision the future of cloud computing for the next 5 or 10 years.
As you can see, the wish for an intercloud protocol, or an intercloud, has existed for a while. Let us quickly have a look at how others define intercloud:
Cisco in 2008 (it seems that David Bernstein worked at Cisco at that time): Intercloud is a network of clouds that are linked with each other. This includes private, public, and hybrid clouds that come together to provide a seamless exchange of data.
Teradata: Intercloud is a cloud deployment model that links multiple public cloud services together as one holistic and actively orchestrated architecture. Its activities are coordinated across these clouds to move workloads automatically and intelligently (e.g., for data analytics), based on criteria like their cost and performance characteristics.
Alvin Cheung is an associate professor at Berkeley EECS and wrote the following in his Twitter comments:
we argue that cloud computing will evolve to a new form of inter-cloud operation: instead of storing data and running code on a single cloud provider, apps will run on an inter-operating set of cloud providers to leverage their specialized services / hw / geo etc, much like ISPs.
Alvin and his colleagues wrote a publication titled “A Berkeley View on the Future of Cloud Computing”, which mentions the following very early in the PDF:
We predict that this market, with the appropriate intermediation, could evolve into one with a far greater emphasis on compatibility, allowing customers to easily shift workloads between clouds.
[…] Instead, we argue that to achieve this goal of flexible workload placement, cloud computing will require intermediation, provided by systems we call intercloud brokers, so that individual customers do not have to make choices about which clouds to use for which workloads, but can instead rely on brokers to optimize their desired criteria (e.g., price, performance, and/or execution location).
We believe that the competitive forces unleashed by the existence of effective intercloud brokers will create a thriving market of cloud services with many of those services being offered by more than one cloud, and this will be sufficient to significantly increase workload portability.
Organizations place their workloads in the cloud that makes the most sense for them. Depending on regulations, data classification, available cloud services, locations, or pricing, they decide which data or workload goes to which cloud.
The people from Berkeley do not necessarily promote a multi-cloud architecture, but have the idea of an intercloud broker that places your workload on the right cloud based on different factors. They see the intercloud as an abstraction layer with brokering services:
In my understanding, their idea goes in the direction of an intelligent and automated cloud management platform that decides where a specific workload and its data should be hosted – and that, for example, migrates the workload to another cloud that is cheaper than the current one.
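The broker idea can be illustrated with a toy placement function: given per-cloud offers for a workload, filter by constraints and pick the cheapest. All cloud names, prices, and latencies below are made up for illustration:

```python
# Toy sketch of the "intercloud broker" idea: given per-cloud offers for a
# workload, pick the cloud that satisfies the constraints at the lowest cost.
# Cloud names, prices, and latencies are illustrative, not real data.

def place_workload(offers, max_latency_ms, required_region):
    """Return the cheapest cloud whose offer meets latency and region needs."""
    eligible = [
        o for o in offers
        if o["latency_ms"] <= max_latency_ms and required_region in o["regions"]
    ]
    if not eligible:
        raise ValueError("no cloud satisfies the placement constraints")
    return min(eligible, key=lambda o: o["cost_per_hour"])

offers = [
    {"cloud": "cloud-a", "cost_per_hour": 0.42, "latency_ms": 12, "regions": {"eu"}},
    {"cloud": "cloud-b", "cost_per_hour": 0.35, "latency_ms": 30, "regions": {"eu", "us"}},
    {"cloud": "cloud-c", "cost_per_hour": 0.28, "latency_ms": 45, "regions": {"us"}},
]
choice = place_workload(offers, max_latency_ms=40, required_region="eu")
```

A real broker would of course weigh many more criteria (data gravity, egress cost, compliance), and could re-run the placement periodically to trigger migrations when a cheaper eligible cloud appears.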
Cloud Native Technologies for Multi-Cloud
Companies are modernizing or rebuilding their legacy applications, or creating new modern applications using cloud-native technologies. Modern applications are collections of microservices, which are lightweight, fault tolerant, and small. These microservices can run in containers deployed on a private or public cloud.
In other words, a modern application is something that can adapt to any environment and perform equally well.
The challenge today is that we have modern architectures, new technologies and services, and multiple clouds running with different technology stacks. And we have Kubernetes as a framework, which is available in different forms (DIY or offerings like Tanzu TKG, AKS, EKS, GKE, etc.).
Then there is the Cloud Native Computing Foundation (CNCF) and the open source community, which embrace the principle of “open” software that is created and maintained by a community.
It is about building applications and services that can run on any infrastructure, which also means avoiding vendor or cloud lock-in.
Challenges of Interoperability and Multiple Clouds
If you discuss multi-cloud and infrastructure independent applications, you mostly end up with an endless list of questions like:
How can we achieve true workload mobility or application portability?
How do we deal with the different technology formats and the “language” (API) of each cloud?
How can we standardize and automate our deployments?
Is latency between clouds a problem?
What about my stateful data?
How can we provide consistent networking and security?
What about identity federation and RBAC?
Is the performance of each cloud really the same?
How should we encrypt traffic between services in multiple clouds?
What about monitoring and observability?
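To make the standardization and "different API per cloud" questions more concrete, one common pattern is a single cloud-neutral deployment spec translated by thin per-cloud adapters. This is only an illustrative sketch – the adapter functions return strings where real code would call each cloud's API:

```python
# Illustrative sketch of standardizing deployments across clouds: one
# cloud-neutral spec, dispatched to small per-cloud adapters. The adapter
# bodies are stand-ins for real cloud API calls.

SPEC = {"name": "web", "image": "nginx:1.25", "replicas": 2}

def deploy_aws(spec):
    # Stand-in for calls to the AWS APIs.
    return f"aws: launch {spec['replicas']}x {spec['image']} as {spec['name']}"

def deploy_azure(spec):
    # Stand-in for calls to the Azure APIs.
    return f"azure: launch {spec['replicas']}x {spec['image']} as {spec['name']}"

ADAPTERS = {"aws": deploy_aws, "azure": deploy_azure}

def deploy(cloud, spec):
    """Dispatch one common spec to the adapter for the chosen cloud."""
    try:
        return ADAPTERS[cloud](spec)
    except KeyError:
        raise ValueError(f"no adapter for cloud {cloud!r}") from None

result = deploy("aws", SPEC)
```

The catch, and the reason the questions above are hard, is that every adapter must be written and maintained against a moving target – which is exactly the gap that Kubernetes, intercloud brokers, or a consistent infrastructure layer each try to close in their own way.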
Workload Mobility and Application Portability without an Intercloud
VMware has a different view of, and approach to, how workload mobility and application portability can be achieved.
The value-add and goal are the same, but with a different strategy of abstracting clouds.
VMware is not building an intercloud; instead, they provide customers a technology stack (compute, storage, networking) – a cloud operating system, if you will – that can run on top of every major public cloud provider like AWS, Azure, Google Cloud, IBM Cloud, Oracle Cloud, and Alibaba Cloud.
This consistent infrastructure makes it extremely easy to migrate virtual machines and legacy applications in particular to any location.
What about modern applications and Kubernetes? What about developers who do not care about (cloud) infrastructures?
At VMworld 2021, VMware announced the technology preview of “Project Cascade” which will provide a unified Kubernetes interface for both on-demand infrastructure (IaaS) and containers (CaaS) across VMware Cloud – available through an open command line interface (CLI), APIs, or a GUI dashboard.
The idea is to provide customers a converged IaaS and CaaS consumption service across any cloud, exposed through different Kubernetes APIs.
I heard the statement “Kubernetes is complex and hard” many times at KubeCon Europe 2022 and Project Cascade is clearly providing another abstraction layer for VM and container orchestration that should make the lives of developers and operators less complex.
Another project in tech preview since VMworld last year is “Project Ensemble”. It is a multi-cloud management platform that provides an app-centric self-service portal with predictive support.
Project Ensemble will deliver a unified consumption surface that meets the unique needs of the cloud administrator and SRE alike. From an architectural perspective, this means creating a platform designed for programmatic consumption and a firm “API First” approach.
I can imagine that it will be a service that leverages artificial intelligence and machine learning to simplify troubleshooting, and that in the future it will be capable of intelligently placing or migrating your workloads to the most appropriate cloud (for example, based on cost), including all attached networking and security policies.
I believe that VMware is on the right path by giving customers the option to build a cloud-agnostic infrastructure with the necessary abstraction layers for IaaS and CaaS, including the cloud management platform. By providing a common way, or standard, to run virtual machines and containers in any cloud, VMware is, I am convinced, becoming the de facto infrastructure standard for many enterprises.
By providing a consistent cloud infrastructure and a consistent developer model and experience, VMware bridges the gap between the developers and operators, without the need for an intercloud or intercloud protocol. That is the future of cloud computing.