VMware Explore 2023 US – Day 1 Announcements


VMware Explore 2023 US is currently happening in Las Vegas and I am onsite! Below you will find an overview of the information that was shared with us during the general session and solution keynotes.

Please be aware that this list is not complete but it should include all the major announcements including references and sources.

VMware Aria and VMware Tanzu

Starting this year, VMware Aria and VMware Tanzu form a single track at VMware Explore. VMware introduced the develop, operate, and optimize (DOO) pillars for Aria and Tanzu around April 2023.

VMware Tanzu DOO Framework

The following name changes and adjustments have been announced at VMware Explore US 2023:

  • The VMware Tanzu portfolio includes two new product categories (product families) called “Tanzu Application Platform” and “Tanzu Intelligence Services”.
  • Tanzu Application Platform includes the products Tanzu Application Platform (TAP) and Tanzu for Kubernetes Operations (TKO), and the new Tanzu Application Engine module.
  • Tanzu Intelligence Services – Aria Cost powered by CloudHealth, Aria Guardrails, Aria Insights, and Aria Migration will be rebranded as “Tanzu” and become part of this new Tanzu Intelligence Services category:
    • Tanzu Hub & Tanzu Graph
    • Tanzu CloudHealth
    • Tanzu Guardrails
    • Tanzu Insights (currently known as Aria Insights)
    • Tanzu Transformer (currently known as Aria Migration)
  • Aria Hub and Aria Graph are now called Tanzu Hub
  • VMware Cloud Packs are now called the VMware Cloud Editions (more information below)

Note: VMware expects to implement these changes by Q1 2024 at the latest

The VMware Aria and Tanzu announcement and rebranding information can be found here.

Tanzu Mission Control

After the announcement that Tanzu Mission Control supports the lifecycle management of Amazon EKS clusters, VMware announced that it is expanding these lifecycle management capabilities to Microsoft Azure AKS clusters as well.

Tanzu Application Engine (Private Beta)

VMware announced a new solution for the Tanzu Application Platform category.

VMware Tanzu for Kubernetes Operations is introducing Tanzu Application Engine, enhancing multi-cloud support with lifecycle management of Azure AKS clusters, and offering new Kubernetes FinOps (cluster cost) visibility. Tanzu Application Engine is a new abstraction that bundles workload placement, the Kubernetes runtime, data services, libraries, and infrastructure resources with a set of policies and guardrails.

The Tanzu Application Engine announcement can be found here.

VMware RabbitMQ Managed Control Plane

I know a lot of customers who built an in-house RabbitMQ cloud service.

VMware just announced a beta program for a new VMware RabbitMQ Managed Control Plane, which allows enterprises to seamlessly integrate RabbitMQ within their existing cloud environment, offering flexibility and control over data streaming processes.
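
For readers unfamiliar with what such a managed broker hosts, here is a minimal publish example using the open source pika client. Broker address, queue name, and payload are placeholders:

```python
import pika

# Connect to a RabbitMQ broker; host and credentials are placeholders.
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="rabbitmq.example.com")
)
channel = connection.channel()

# Declare a durable queue so it survives broker restarts.
channel.queue_declare(queue="orders", durable=True)

# Publish a persistent message to the queue via the default exchange.
channel.basic_publish(
    exchange="",
    routing_key="orders",
    body=b'{"order_id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),
)
connection.close()
```

A managed control plane takes care of the brokers behind this endpoint, so teams only deal with connections, queues, and messages.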

What’s New with VMware Aria?

  • What’s New with VMware Aria Operations at VMware Explore
  • Next-Gen Public Cloud Management with VMware Aria Automation

Other Aria announcements can be found here.

VMware Cloud Editions

What started as four different VMware Cloud Packs is now known as “VMware Cloud Editions”, with five different options:

VMware Cloud Editions

Here’s an overview of the different solutions/subscriptions included in each edition:

VMware Cloud Editions Connected Subscriptions

More VMware Cloud related announcements can be found here.

What’s New in vSphere 8 Update 2

As always, VMware is working on enhancing operational efficiency to make the life of an IT admin easier. And this gets better with the vSphere 8 U2 release.

In vSphere 8 Update 2, we are making significant improvements to several areas of maintenance to reduce, and in some cases eliminate, the need for downtime so vSphere administrators can make those important maintenance changes without a large impact on the wider vSphere infrastructure consumers.

These enhancements include reduced downtime upgrades for vCenter, automatic vCenter LVM snapshots before patching and updating, non-disruptive certificate management, and reliable network configuration recovery after a vCenter is restored from backup.

More information about the vSphere 8 Update 2 release can be found here.

What’s New in vSAN 8 Update 2

At VMware Explore 2022, VMware announced the new vSAN 8.0 release, which introduced the new Express Storage Architecture (ESA), and ESA got even better with the recent vSAN 8.0 Update 1 release.

VMware vSAN Max – Petabyte-Scale Disaggregated Storage

VMware vSAN Max, powered by vSAN Express Storage Architecture, is a new offering in the vSAN family delivering petabyte-scale disaggregated storage for vSphere. With its new disaggregated storage deployment model, vSAN customers can scale storage elastically and independently from compute and deploy unified block, file, and partner-based object storage to maximize utilization and achieve lower TCO.

VMware vSAN Max

vSAN Max expands the use cases in which HCI can provide exceptional value. Disaggregation through vSAN Max provides flexibility to build infrastructure with the scale and efficiency required for non-linear scaling applications, such as storage-intensive databases, modern elastic applications with large datasets and more. Customers have a choice of deploying vSAN in a traditional model or a disaggregated model with vSAN Max, while still using a single control plane to manage both deployment options.

The vSAN Max announcement can be found here.

VMware Cloud on AWS

VMware announced a VMware Cloud on AWS Advanced subscription tier that will be available on i3en.metal and i4i.metal instance types only. This subscription will include advanced cloud management, networking and security features:

  • VMware NSX+ Services (NSX+ Intelligence, NDR capabilities, NSX Advanced Load Balancer)
  • vSAN Express Storage Architecture Support
  • VMware Aria Automation
  • VMware Aria Operations
  • VMware Aria Operations for Logs

Note: Existing deployments (existing SDDCs) will be entitled to these advanced cloud management, networking, and security features over time.

The VMware Cloud on AWS Advanced Subscription Tier FAQ can be found here.

Introduction of VMware NSX+

Last year, VMware introduced Project Northstar as technology preview:

Project Northstar is a SaaS-based networking and security offering that will empower NSX customers with a set of on-demand multi-cloud networking and security services, end-to-end visibility, and controls. Customers will be able to use a centralized cloud console to gain instant access to networking and security services, such as network and security policy controls, Network Detection and Response (NDR), NSX Intelligence, Advanced Load Balancing (ALB), Web Application Firewall (WAF), and HCX. It will support both private cloud and VMware Cloud deployments running on public clouds and enable enterprises to build flexible network infrastructure that they can spin up and down in minutes.

This year, VMware announced the initial availability of the NSX+ service. VMware NSX+ is a fully managed cloud-based service offering that allows networking, security, and operations teams to consume and operate VMware NSX services from a single cloud console across private and public clouds.

NSX+ Architectural Diagram

The following services are available:

  • NSX+ Policy Management: Provides unified networking and security policy management across multiple clouds and on-premises data centers.
  • NSX+ Intelligence (Tech Preview only): Provides a big data reservoir and a system for network and security analytics, delivering real-time visibility into application traffic, from basic traffic metrics to deep packet inspection.
  • NSX+ NDR (Tech Preview only): Provides a scalable threat detection and response service offering for Security Operations Center (SOC) teams to triage real-time security threats to their data center and cloud.

There are three different NSX+ and two NSX+ distributed firewall editions available:

  • NSX+ Standard. For organizations needing a basic set of NSX connectivity and security features for single location software-defined data center deployments.
  • NSX+ Advanced. For organizations needing advanced networking and security features that are applied to multiple sites. This edition also entitles customers to VMware NSX+ Advanced Load Balancer Cloud Services.
  • NSX+ Enterprise. For organizations needing all of the capabilities NSX has to offer. This edition also entitles customers to VMware NSX+ Advanced Load Balancer Cloud Services.
  • NSX+ Distributed Firewall. For organizations that need to implement access controls for east-west traffic within the network (micro-segmentation) but are not focused on threat detection and prevention services (see the sketch after this list).
  • NSX+ Distributed Firewall with Threat Prevention. For organizations that need access control and select threat prevention features for east-west traffic within the network.
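
To make the east-west use case more concrete, here is a sketch of what a distributed firewall rule looks like, shaped after the on-premises NSX-T Policy API. The manager address, groups, and credentials are placeholders, and the NSX+ cloud console may expose a different endpoint and authentication scheme:

```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"  # placeholder manager address

# An "Application" category policy with one east-west rule:
# only the web tier may reach the DB tier on the MySQL service.
policy = {
    "resource_type": "SecurityPolicy",
    "category": "Application",
    "rules": [{
        "resource_type": "Rule",
        "display_name": "allow-web-to-db",
        "source_groups": ["/infra/domains/default/groups/web-tier"],
        "destination_groups": ["/infra/domains/default/groups/db-tier"],
        "services": ["/infra/services/MySQL"],
        "action": "ALLOW",
    }],
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/domains/default/security-policies/web-db-policy",
    json=policy,
    auth=("admin", "VMware1!"),  # placeholder credentials
    verify=False,                # lab only; validate certificates in production
)
resp.raise_for_status()
```

The point of micro-segmentation is exactly this: the allow/deny decision is expressed per application tier, not per network segment.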

An NSX+ feature overview can be found here.

Note: Currently, NSX+ only supports NSX on-premises deployments (NSX 4.1.1 or later) and VMware Cloud on AWS.

VMware Cloud Foundation

VMware announced a few innovations for H2 2023, which include support for the Distributed Services Engine (DSE, aka Project Monterey), vSAN ESA support, and NSX+.

 

Generative AI – VMware Private AI Foundation with Nvidia

VMware and Nvidia’s CEOs announced VMware Private AI Foundation as the result of their longstanding partnership. 

Built on VMware Cloud Foundation, this integrated solution with Nvidia will enable enterprises to customize models and run generative AI applications, including intelligent chatbots, assistants, search, and summarization.


Anywhere Workspace Announcements

At VMware Explore 2022, VMware shared its vision for autonomous workspaces.

Autonomous workspace is a concept (not an individual product) that is our north star for the future of end-user computing. It means going beyond creating a unified workspace with basic automations to analyzing huge amounts of data with AI and machine learning to drive more advanced, context-aware automations. This leads to a workspace that can be considered self-configuring, self-healing, and self-securing.

VMware continued working on the realization of this vision and came up with a lot of announcements, which can be found here.

Other Announcements

Please find below some announcements that VMware shared with us during the SpringOne event or before and after the general session on August 22nd, 2023:

Momentum in the Cloud: Crafting Your Winning Strategy with VMware Cloud


The time is right for VMware Cloud! In the rapidly evolving landscape of modern business, embracing the cloud has become essential for organizations seeking to stay competitive and agile. The allure of increased scalability, cost-efficiency, and flexibility has driven enterprises of all sizes to embark on cloud migration journeys. However, the road to successful cloud adoption often comes with challenges. Slow and failed migrations have given rise to what experts call the “cloud paradox”, where the very technology meant to accelerate progress ends up hindering it.

As businesses navigate through this paradox, finding the right strategy to harness the full potential of the cloud becomes paramount. One solution that has emerged as a beacon of hope in this complex landscape is VMware Cloud. With its multi-cloud approach, which is also known as supercloud, VMware Cloud provides organizations the ability to craft a winning strategy that capitalizes on momentum while minimizing the risks associated with cloud migrations.

The Experimental Phase is Over

Is it really though? The experimental phase was an exciting journey of discovery for organizations exploring the potential of multi-cloud environments. Companies have explored different cloud providers, tested a variety of cloud services, and experimented with workloads and applications in the cloud. It allowed them to understand the benefits and drawbacks of each cloud platform, assess performance, security, and compliance aspects, and determine how well each cloud provider aligns with their unique business needs.

The Paradox of Cloud and Choice

With an abundance of cloud service providers, each offering distinct features and capabilities, decision-makers can find themselves overwhelmed with options. The quest to optimize workloads across multiple clouds can lead to unintended complexities, such as increased operational overhead, inconsistent management practices/tools, and potential vendor lock-in.

Furthermore, managing data and applications distributed across various cloud environments can create challenges related to security, compliance, and data sovereignty. The lack of standardized practices and tools in a multi-cloud setup can also hinder collaboration and agility, negating the very advantages that public cloud environments promise to deliver.

Multi-Cloud Complexity

(Public) cloud computing is often praised for its cost-efficiency, enabling businesses to pay for resources on demand and avoid capital expenditures on physical infrastructure. However, the cloud paradox reveals that organizations can inadvertently accumulate hidden costs, such as data egress fees, storage overage charges, and the cost of cloud management tools. Without careful planning and oversight, the cloud’s financial benefits might be offset by unexpected expenses.

Why Cloud Migrations are Slowing Down

Failed expectations. The first reasons my customers mention are cost and complexity.

While the cloud offers potential cost savings in the long run, the initial investment and perceived uncertainty in calculating the total cost of ownership can deter some organizations from moving forward with cloud migrations. Budget constraints and difficulties in accurately estimating and analyzing cloud expenses lead to a cautious approach to cloud adoption.

One significant factor impeding cloud migrations is the complexity of the process itself. Moving entire infrastructures, applications, and data to the cloud requires thorough planning, precise execution, and in-depth knowledge of cloud platforms and technologies. Many organizations lack the in-house expertise to handle such a massive undertaking, leading to delays and apprehensions about potential risks.

Other underestimated reasons are legacy systems and applications that have been in use for many years and are often deeply ingrained within an organization’s operations. Migrating these systems to the cloud may require extensive reconfiguration or complete redevelopment, making the migration process both time-consuming and resource-intensive.

Reverse Cloud Migrations

While I don’t advertise a case for repatriation, I would like to share the idea that companies should think about workload mobility, application portability, and repatriation upfront. You can infinitely optimize your cloud spend, but if cloud costs start to outpace your transformation plans or revenue growth, it is too late already.

Embracing a Smart Approach with VMware Cloud

To address the cloud paradox and maximize the potential of multi-cloud environments, VMware is embracing the cloud-smart approach. This approach is designed to empower organizations with a unified and consistent platform to manage and operate their applications across multiple clouds.

VMware Cloud-Smart

  • Single Cloud Operating Model: A single operating model that spans private and public clouds. This consistency simplifies cloud management, enabling seamless workload migration and minimizing the complexities associated with multiple cloud providers.
  • Flexible Cloud Choice: VMware allows organizations to choose the cloud provider that best suits their specific needs, whether it is a public cloud or a private cloud infrastructure. This freedom of choice ensures that businesses can leverage the unique advantages of each cloud while maintaining operational consistency.
  • Streamlined Application Management: A cloud-smart approach centralizes application management, making it easier to deploy, secure, and monitor applications across multi-cloud environments. This streamlines processes, enhances collaboration, and improves operational efficiency.
  • Enhanced Security and Compliance: By adopting VMware’s security solutions, businesses can implement consistent security policies across all clouds, ensuring data protection and compliance adherence regardless of the cloud provider.

Why VMware Cloud?

This year I realized that a lot of VMware customers came back to me because their cloud-first strategy did not work as expected. Costs exploded, migrations were failing, and their project timeline changed many times. Also, partners like Microsoft and AWS want to collaborate more with VMware, because the public cloud giants cannot deliver as expected.

Customers and public cloud providers did not see any value in lifting and shifting workloads from on-premises data centers to the public cloud. Now the exact same people, companies, and partners (AWS, Microsoft, Google, Oracle etc.) are back, asking VMware for support and for solutions that can speed up cloud migrations while reducing risks.

This is why I am always suggesting a “lift and learn” approach, which removes pressure and reduces costs.

Organizations view the public cloud as a highly strategic platform for digital transformation. Gartner forecasted in April 2023 that Infrastructure-as-a-Service (IaaS) is going to experience the highest spending growth in 2023, followed by PaaS.

It is said that companies spend most of their money on compute, storage, and data services when using Google Cloud, AWS, and Microsoft Azure. Guess what: VMware Cloud is a perfect fit for IaaS-based workloads (instead of using AWS EC2, Google’s Compute Engine, and Azure Virtual Machine instances)!

Who doesn’t like the idea of cost savings and faster cloud migrations?

Disaster Recovery and FinOps

When you migrate workloads to the cloud, you have to rethink your disaster recovery and ransomware recovery strategy. Have a look at VMware’s DRaaS (Disaster-Recovery-as-a-Service) offering which includes ransomware recovery capabilities as well. 

If you want to analyze and optimize your cloud spend, try out VMware Aria Cost powered by CloudHealth.

Final Words

VMware’s approach is not right for everyone, but it is a future-proof cloud strategy that enables organizations to adapt their cloud strategies as business needs evolve. The cloud-smart approach offers a compelling solution, providing businesses with a unified, consistent, and flexible platform to succeed in multi-cloud environments. By embracing this approach, organizations can overcome the complexities of multi-cloud, unlock new possibilities, and set themselves on a path to cloud success.

And you still get the same access to the native public cloud services.

 

 

Supercloud – A Hybrid Multi-Cloud


I thought it was time to finally write a piece about superclouds. Call it supercloud, the new multi-cloud, a hybrid multi-cloud, cross-cloud, or a metacloud. New terms with the same meaning. I may be biased, but I am convinced that VMware is in the pole position for this new architecture and approach.

Let me also tell you this: superclouds are nothing new. Some of you believe that the idea of a supercloud is something new, something modern. Some of you may also think that cross-cloud services, workload mobility, application portability, and data gravity are new, complex topics of the “modern world” that need to be discussed or solved in 2023 and beyond. Guess what: most of these challenges and ideas have existed for more than 10 years already!

Cloud-First is not cool anymore

There is clear evidence that a cloud-first approach is not cool or ideal anymore. Do you remember, about a dozen years ago, when analysts believed that local data centers were going to disappear and the IT landscape would consist only of public clouds aka hyperscalers? Have a look at this timeline:

VMware and Public Clouds Timeline

We can clearly see when public clouds like AWS, Google Cloud, and Microsoft Azure appeared on the surface. A few years later, the world realized that the future is hybrid or multi-cloud. In 2019, AWS launched “Outposts”; Microsoft made Azure Arc and its on-premises Kubernetes offering available only a few years later.

Google, AWS, and Microsoft changed their messaging from “we are the best, we are the only cloud” to “okay, the future is multi-cloud, we also have something for you now”. Consistent infrastructure and consistent operations became almost everyone’s marketing slogan.

As you can also see above, VMware announced its hybrid cloud offering “VMware Cloud on AWS” in 2016; initial availability came a year later, and it has been generally available since 2018.

From Internet to Interclouds

Before someone coined the term “supercloud”, people were talking about the need for an “intercloud”. In 2010, Vint Cerf, the so-called “Father of the Internet”, shared his opinions and predictions on the future of cloud computing. He talked about the potential need for, and importance of, interconnecting different clouds.

Cerf already understood about 13 years ago that there is a need for an intercloud because users should be able to move data/workloads from one cloud to another (e.g., from AWS to Azure to GCP). He guessed back then that the intercloud problem could be solved around 2015.

We’re at the same point now in 2010 as we were in ’73 with internet.

In short, Vint Cerf understood that the future is multi-cloud and that interoperability standards are key.

There is also a document that proves the IEEE had a working group (P2302) trying to develop “the Standard for Intercloud Interoperability and Federation (SIIF)”. This was around 2011. What did the proposal look like back then? I found this YouTube video a few years ago with the following sketch:

Intercloud 2012

Workload Mobility and Application Portability

As we can see above, VM or workload mobility was already part of this high-level architecture from the IEEE working group. I also found a paper from NIST called “Cloud Computing Standards Roadmap” dated July 2013 with very interesting sections:

Cloud platforms should make it possible to securely and efficiently move data in, out, and among cloud providers and to make it possible to port applications from one cloud platform to another. Data may be transient or persistent, structured or unstructured and may be stored in a file system, cache, relational or non-relational database. Cloud interoperability means that data can be processed by different services on different cloud systems through common specifications. Cloud portability means that data can be moved from one cloud system to another and that applications can be ported and run on different cloud systems at an acceptable cost.

Note: VMware HCX has been available since 2018 and is still the easiest and probably the most cost-efficient way to migrate workloads from one cloud to another.

It is all about the money

Imagine it is March 2014, and you read the following announcement: Cisco is going big – they want to spend $1 billion on the creation of an intercloud.

Yes, that really happened. Details can be found in the New York Times Archive. The New York Times even mentioned at the end of their article that “it’s clear that cloud computing has become a very big money game”.

In Cisco’s announcement, money had also been mentioned:

Of course, we believe this is going to be good for business. We expect to expand the addressable cloud market for Cisco and our partners from $22Bn to $88Bn between 2013-2017.

In 2016, Cisco retired its intercloud offering because AWS and Microsoft were, and still are, very dominant. AWS posted $12.2 billion in sales for 2016, while Microsoft ended up at almost $3 billion in revenue with Azure.

Remember Cisco’s estimate about the “addressable cloud market”? In 2018, Gartner presented the number of $145B for the worldwide public cloud spend in 2017. For 2023, Gartner forecasted a cloud spend of almost $600 billion.

Data Gravity and Egress Costs

Another topic I want to highlight is “data gravity” coined by Dave McCrory in 2010:

Consider Data as if it were a Planet or other object with sufficient mass. As Data accumulates (builds mass) there is a greater likelihood that additional Services and Applications will be attracted to this data. This is the same effect Gravity has on objects around a planet. As the mass or density increases, so does the strength of gravitational pull. As things get closer to the mass, they accelerate toward the mass at an increasingly faster velocity. Relating this analogy to Data is what is pictured below.

Put data gravity together with egress costs, and one realizes that the combination limits any mobility and/or portability discussion:

Source: https://medium.com/@alexandre_43174/the-surprising-truth-about-cloud-egress-costs-d1be3f70d001
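
A quick back-of-the-envelope calculation shows why egress fees anchor data where it is. The per-GB rate below is an illustrative list price, not a quote from any specific provider:

```python
# Back-of-the-envelope egress cost; the per-GB rate is illustrative only.
egress_tb_per_month = 50
rate_per_gb = 0.09  # USD, a typical internet egress list-price tier

monthly_cost = egress_tb_per_month * 1024 * rate_per_gb
print(f"Moving {egress_tb_per_month} TB out per month costs ${monthly_cost:,.0f}")
# -> Moving 50 TB out per month costs $4,608
```

Multiply that by every migration, replication, and analytics pipeline that crosses a cloud boundary, and the gravitational pull becomes obvious.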

By the way, what happened to “economies of scale”?

The Cloud Paradox

As you should understand by now, topics like costs, lock-in, and failed expectations (technical and commercial) have been discussed for more than a decade already. That is why I highlighted NIST’s sentence above: cloud portability means that data can be moved from one cloud system to another and that applications can be ported and run on different cloud systems at an acceptable cost.

Acceptable cost.

While the (public) cloud seems to be the right choice for some companies, we now see other scenarios popping up more often: reverse cloud migrations (sometimes also called repatriation).

I have customers who tell me that the exact same VM with the exact same business logic costs 5 to 7 times more since they moved it from their private cloud to a public cloud.

Let’s park that and cover the “true costs of cloud” another time. 😀

Public Cloud Services Spend

Looking at Vantage’s report, we can see the following top 10 services on AWS, Azure and GCP ranked by the share of costs:

If they are right and the numbers are true for most enterprises, it means that customers spend most of their money on virtual machines (IaaS), databases, and storage.

What does Gartner say?

Let’s have a look at the most recent forecast called “Worldwide Public Cloud End-User Spending to Reach Nearly $600 Billion in 2023” from April 2023:

Gartner April 2023 Public Cloud Spend Forecast

All segments of the cloud market are expected to see growth in 2023. Infrastructure-as-a-service (IaaS) is forecast to experience the highest end-user spending growth in 2023 at 30.9%, followed by platform-as-a-service (PaaS) at 24.1%.

Conclusion

If most companies spend around 30% of their budget on virtual machines, and Gartner predicts that IaaS still has higher growth than SaaS or PaaS, a supercloud architecture for IaaS would make a lot of sense. You would have the same technology format, could use the same networking and security policies and existing skills, and would benefit from many other advantages as well.

Looking at the VMware Cloud approach, which allows you to run VMware’s software-defined data center (SDDC) stack on AWS, Azure, Google, and many other public clouds, customers could create a seamless hybrid multi-cloud architecture – using the same technology across clouds.

Other VMware products that fall under the supercloud category would be Tanzu Application Platform (TAP), the Aria Suite, and Tanzu for Kubernetes Operations (TKO) which belong to VMware’s Cross-Cloud Services portfolio.

Final Words

I think it is important that we understand that we are still in the early days of multi-cloud (or of using multiple clouds).

Customers get confused because it took them years to deploy or move new or existing apps to the public cloud. Now, analysts and vendors talk about cloud exit strategies, reverse cloud migrations, repatriations, exploding cloud costs, and so on.

Yes, a supercloud is about a hybrid multi-cloud architecture and a standardized design for building apps and platforms across clouds. But the most important capability, in my opinion, is the fact that it makes your IT landscape future-ready on different levels with different abstraction layers.

VMware vSphere – The Enterprise Data Platform


The world is creating and consuming more data than ever. There are multiple reasons that can explain this trend. Data creates the foundation for many digital products and services. And we read more and more about companies that want or need to keep their data on-premises because of reasons like data proximity, performance, data privacy, data sovereignty, data security, and predictable cost control. We also know that the edge is growing much faster than large data centers. These and other factors are the reasons why CIOs and decision-makers are now focusing on data more than ever before.

We live in a digital era where data is one of the most valuable assets. The whole economy from the government to local companies would not be able to function without data. Hence, it makes sense to structure and analyze the data, so a company’s data infrastructure becomes a profit center and is not just seen as a cost center anymore.

Data Sprawl

A lot of enterprises are confronted with the so-called data sprawl. Data sprawl means that an organization’s data is stored on and consumed by different devices and operating systems in different locations. There are cases where the consumers and the IT teams are not sure anymore where some of the data is stored and how it should be accessed. This is a huge risk and results in a loss of security and productivity.

Since the discussions about sovereign clouds and data sovereignty have started, it has never been more important where a company’s data resides, and where and how one can consume that data.

Enterprises have started to follow a cloud-smart approach: they put the right application and its data in the right cloud, based on the right reasons. In other words, they now think twice about where and how they store their data.

What databases are popular?

When talking to developers and IT teams, I mostly hear the following names (in no particular order):

  • Oracle
  • MSSQL
  • MySQL
  • PostgreSQL

I think it would be fair to say that a lot of customers are looking for alternatives to reduce spending on expensive databases and database management systems (DBMS). It seems that Postgres and MySQL have earned a lot of popularity over the past years, while Oracle is still considered one of the best databases on the market – even though it is also seen as one of the most expensive and least liked solutions. But I also hear other solutions like MongoDB, MariaDB, and Redis mentioned in more and more discussions.

DBaaS and Public Cloud Characteristics

It is nothing new: Developers are looking for a public-cloud-like experience for their on-premises deployments. They want an easy and smooth self-service experience without the need for opening tickets and waiting for several days to get their database up and running. And we also know that open-source and freedom of choice are becoming more important to companies and their developers. Some of the main drivers here are costs and vendor lock-in.

IT teams, on the other side, want to provide security and compliance, more standardization around versions and types, and an easy way to back up and restore databases. But the truth is that a lot of companies are struggling to provide this kind of Database-as-a-Service (DBaaS) experience to their developers.

The idea and expectation of DBaaS is to reduce management and operational efforts, with the possibility to easily scale databases up and down. The difference between the public cloud DBaaS offering and your on-premises data center infrastructure is the underlying physical and virtual platform.

On-premises, it could theoretically be any hardware, but VMware vSphere is still the most used virtualization platform for an enterprise’s data (center) infrastructure.

VMware vSphere and Data

VMware shared that studying telemetry from their customer base showed that almost 25% of VMware workloads are data workloads (databases, data warehouses, big data analytics, data queueing, and caching), and it looks like MS SQL Server still has the biggest share of all databases hosted on-premises.

They are also seeing high double-digit growth (approx. 70-90%) when it comes to MySQL and steady growth with PostgreSQL. Rank 4 is probably Redis, followed by MongoDB.

VMware Data Solutions

VMware Data Solutions, formerly known as Tanzu Data Services, is a powerful part of the entire VMware portfolio and consists of:

  • VMware GemFire – Fast, consistent data for web-scaling concurrent requests fulfills the promise of highly responsive applications.
  • VMware RabbitMQ – A fast, dependable enterprise message broker that provides reliable communication among servers, apps, and devices.
  • VMware Greenplum – VMware Greenplum is a massively parallel processing database. Greenplum is based on open-source Postgres, enabling Data Warehousing, aggregation, AI/ML and extreme query speed.
  • VMware SQL – VMware’s open-source SQL Database (Postgres & MySQL) is a Relational database service providing cost-efficient and flexible deployments on-demand and at scale. Available on any cloud, anywhere.
  • VMware Data Services Manager – Reduce operational costs and increase developer agility with VMware Data Services Manager, the modern platform to manage and consume databases on vSphere.

VMware Data Services Manager and VMware SQL

VMware SQL allows customers to deploy curated versions of PostgreSQL and MySQL, and DSM is the solution that enables customers to create the DBaaS experience their developers are looking for.

VMware DSM Personas

Data Services Manager has the following key features:

  • Provisioning – Provision different configurations of databases (MySQL, Postgres, and SQL Server) with either freely configurable or pre-defined sizing of compute and memory resources, depending on user permissions
  • Backup & Restore – Backup, Transactional log, Point in Time Recovery (PiTR), on-demand or as scheduled
  • Scaling – Modify instances depending on usage (scale up, scale down, disk extension)
  • Replication – Replicate (Cold/Hot or Read Replicas) across managed zones
  • Monitoring – Monitor database engine, vSphere infrastructure, networking, and more.

…and supports the following components and versions (with DSM v1.4):

  • MySQL 8.0.30
  • Postgres 10.23.0, 11.18.0, 12.13.0, 13.9.0
  • MSSQL Server 2019 (Standard, Developer, Enterprise Edition)

Companies with a lot of databases now at least have a way to manage, control, and secure Postgres, MySQL, and MSSQL DB instances from a centralized tool that can be accessed via the UI or API.
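
Once DSM has provisioned an instance, developers consume it like any managed database endpoint. A minimal sketch with the psycopg2 client, where the hostname and credentials are placeholders that would come from the provisioned instance:

```python
import psycopg2

# Connection details would come from the DSM-provisioned instance;
# the values below are placeholders.
conn = psycopg2.connect(
    host="pg-dev-01.dsm.example.com",
    port=5432,
    dbname="appdb",
    user="app_user",
    password="changeme",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])  # e.g. "PostgreSQL 13.9 ..."
conn.close()
```

This is the self-service outcome developers expect: request a database, receive an endpoint and credentials, and start building.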

Project Moneta

VMware’s vision is to become the cloud platform of choice. What started with compute, storage, and network continues with data: making it as easy to consume as the rest of their software-defined data center stack.

VMware has started with DSM and sees Moneta, which is still an R&D project, as the next evolution. The focus of Moneta is to bring better self-service and programmatic consumption capabilities (e.g., integration with GitHub).

Project Moneta will provide native integration with vSphere+ and the Cloud Consumption Interface (CCI). While nothing is official yet, I think of it as a vSphere+ and VMware Cloud add-on service that would provide data infrastructure capabilities. 

Final Words

If your developers want to use PostgreSQL, MySQL, and MSSQL, and if your IT struggles to deploy, manage, secure, and back up those databases, then DSM with VMware SQL can help. Both solutions are also perfectly suited for disconnected use cases or air-gapped environments.

Note: The DB engines are certified, tested and supported by VMware.

Open Source and Vendor Lock-In


When talking about multi-cloud and cost efficiency, open source is often discussed because it can be deployed and operated on all private and public clouds. From my experience and conversations with customers, open source is most of the time directly connected to discussions about vendor lock-in.

Organizations want to avoid or minimize the use of proprietary software to avoid becoming dependent on a particular vendor or service. Different factors play a role here, like proprietary technology or services, or long-term contracts. It is also about not giving a specific supplier leverage over your organization – for example, when this supplier increases their prices. Another reason to avoid vendor lock-in is the notion that proprietary software can limit or reduce innovation in your environment.

CNCF and Kubernetes

Let us take Kubernetes as an example. Kubernetes, which is also known as K8s, was contributed as an open-source seed technology by Google to the Linux Foundation in 2015, which formed the sub-foundation “Cloud Native Computing Foundation” (CNCF). Founding CNCF members include companies like Google, Red Hat, Intel, Cisco, IBM, and VMware.

Currently, the CNCF has over 167k project contributors, over 800 members, and more than 130 certified Kubernetes distributions and platforms. Open source projects and the adoption of cloud native technologies are constantly growing.

The Cloud Native Computing Foundation, its members, and contributors have the same mission in mind. They want to drive cloud native adoption by providing open and cloud native software that “can be implemented on a variety of architectures and operating systems”. This is one of the values described in the CNCF mission statement.

If we access the CNCF Cloud Native Interactive Landscape, we get an understanding of how many open source projects are supported by the CNCF and this open source community.

CNCF Landscape Jan 2023

Since it was donated to the CNCF, a lot of companies on this planet have been using Kubernetes, or at least a distribution of it:

  • Amazon Elastic Kubernetes Service Distro (Amazon EKS-D)
  • Azure (AKS) Engine
  • Cisco Intersight Kubernetes Service
  • K3s – Lightweight Kubernetes
  • MetalK8s
  • Oracle Cloud Native Environment
  • Rancher Kubernetes
  • Red Hat OpenShift
  • VMware Tanzu Kubernetes Grid (TKG)

A distribution, or distro, is when a vendor takes core Kubernetes — that’s the unmodified, open source code (although some modify it) — and packages it for redistribution. Usually, this entails finding and validating the Kubernetes software and providing a mechanism to handle cluster installation and upgrades. Many Kubernetes distributions include other proprietary or open source applications.

These were just a few of the total 66 certified Kubernetes distributions. What about the certified hosted Kubernetes service offerings? Let me list here some of the popular ones out of the 53 total:

  • Alibaba Cloud Container Service for Kubernetes (ACK)
  • Amazon Elastic Container Service for Kubernetes (EKS)
  • Azure Kubernetes Service (AKS)
  • Google Kubernetes Engine (GKE)
  • Nutanix Kubernetes Engine (formerly Karbon)
  • Oracle Container Engine for Kubernetes (OKE)
  • Red Hat OpenShift Dedicated

While Kubernetes is open source, different vendors create curated versions of Kubernetes, add some proprietary services, and then offer it as a managed service. The notion of open source is that you can take all of your applications and their components and leave a specific cloud provider if needed.

Trade-Offs

Open source software can make cloud migrations easier in some ways (e.g., if you use the same database in all the clouds). Kubernetes is designed to be cloud-agnostic, meaning that it can run on multiple cloud platforms. This can make it easier to move applications and workloads between different clouds without needing to rewrite the code or reconfigure the infrastructure. At least this was the expectation for Kubernetes. And it should be clear by now that a managed service or platform means a lock-in, no matter if this is GKE, EKS, AKS, or VMware Tanzu for Kubernetes.
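
The cloud-agnostic idea is easy to demonstrate: the same client code runs unchanged against TKG, EKS, AKS, or GKE, and only the kubeconfig context differs. A minimal sketch using the official Kubernetes Python client (the context names are placeholders):

```python
from kubernetes import client, config

# The same code lists nodes on any conformant cluster; only the
# kubeconfig context changes (these context names are placeholders).
for ctx in ["tkg-onprem", "eks-prod", "aks-dev"]:
    config.load_kube_config(context=ctx)
    nodes = client.CoreV1Api().list_node()
    print(ctx, [node.metadata.name for node in nodes.items])
```

The lock-in lives one layer down: the managed control plane, the cloud load balancers, the storage classes, and the IAM integration all differ per provider.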

You cannot avoid a (vendor) lock-in. You have the same with open source. It is about trade-offs.

If you deploy workloads in multiple clouds, you end up with different vendors/partners, different solutions, and technologies. For me, it is about operations at the end of the day. How do you manage and operate multiple clouds and their different managed services? How do you deploy and use open source software in different clouds?

I have not seen one customer saying that they moved away from AKS, EKS, GKE, or Tanzu and went back to the upstream version of Kubernetes and built the application platform around it by themselves from scratch with other open source projects. You can do it, but you need someone who did that before and can guide you. Why?

There are other container-related technologies like databases, streaming & messaging, service proxies, API gateways, cloud native storage, container runtimes, service meshes, and cloud native network projects. Let us have a look at the different categories and examples:

  • Database, 62 different projects (Cassandra, MySQL, Redis, PostgreSQL, Scylla)
  • Storage, 66 different projects (Container Storage Interface, MinIO, Velero)
  • Network, 25 different projects (Antrea, Cilium, Flannel, Container Network Interface, Open vSwitch, Calico, NGINX)
  • Service Proxy, 21 different projects (Contour, Envoy, HAProxy, MetalLB, NGINX)
  • Observability & Analysis, 145 projects (Grafana, Icinga, Nagios, Prometheus)

CNCF Cloud Native Networking

It is complex to deploy, integrate, operate and maintain different open source projects that you most probably need to integrate with proprietary software as well. So, one trade-off and disadvantage of open source software could be that it is developed and maintained by a community of volunteers. Some companies need enterprise support.
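
Even a single project from that landscape brings its own integration work. As a small example, exposing metrics for Prometheus to scrape means instrumenting every service, roughly like this (the metric name and port are illustrative):

```python
import random
import time

from prometheus_client import Counter, start_http_server

# Illustrative metric; every service needs its own instrumentation.
REQUESTS = Counter("app_requests_total", "Total requests handled")

if __name__ == "__main__":
    start_http_server(8000)  # serves /metrics for the Prometheus scraper
    while True:
        REQUESTS.inc()
        time.sleep(random.random())
```

Now multiply this by logging, ingress, backup, networking, and storage, each with its own project, configuration, and upgrade cycle.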

Note: Do not forget that even though you may be using open source software in different private and public clouds, you cannot change the fact that you most probably still have to use specific services of each cloud platform (e.g., network and storage). In this case, you have a dependency or lock-in on a different architectural layer.

If it is about costs, then open source can be helpful here, sure, but we shouldn’t forget the additional operational efforts. You will never get the costs down to zero with open source.

The Reality

Graduated and incubating CNCF projects are considered to be running stable and can be used in production. Some examples would be Envoy, etcd, Harbor, Kubernetes, Open Policy Agent, and Prometheus.

Companies and developers have different motivations for choosing open source. Open source software lowers your total cost of ownership (TCO), is created by skillful and talented people, gives you more flexibility because of non-proprietary standards, is cloud agnostic, has strong and fast support from the community when bugs are found, and is considered to be secure for use in production.

Open source is so well liked that its usage even attracts talent. There is no other community of this size collaborating on innovation and industry standardization!

But the Apache Log4j vulnerability showed the whole world that open source software needs to become more secure, and that project contributors and users need to ensure the integrity of the source code, build, and distribution in all open source software, since a growing number of companies use open source software as part of their solutions and managed services.

There are certain situations where open source software needs to be integrated with proprietary software. Commercial software can also provide more enterprise-readiness and a complete solution, whereas with open source software you have to deploy and use a combination of different projects to achieve the same. This could mean a lot of effort for a company. And you have to ensure the interoperability of the implemented software stack.

Technical issues always occur, no matter if it’s open source or proprietary software. Open source software does not provide the enterprise support some organizations are looking for.

While everyone has to decide what is best for their company and strategy, a lot of people are overwhelmed by the huge and confusing CNCF landscape that gives you so many options. Instead of deploying and integrating different open source projects themselves, organizations look for public cloud service providers that take care of the management and the ecosystem (network, storage, databases etc.) related to Kubernetes; this is seen as the easiest way to get started with cloud native.

What has started for some organizations in one public cloud with one hosted Kubernetes offering has sometimes grown to a landscape with three different public clouds and four different Kubernetes distributions or hosted services.

Example: Companies may have started with Kubernetes or VMware Tanzu on-premises and use AKS, EKS and GKE in their public clouds.

How do you cost-efficiently manage all these different distributions and services across different clouds, with different management consoles and security solutions? Tanzu Mission Control and Tanzu Application Platform could be an option.

VMware and Open Source

VMware and some of its engineers are part of the community and actively contribute to projects like Kubernetes, Harbor, Carvel, Antrea, Contour, and Velero. Interested in some stats (filtered by the last decade)?

Open source is an essential part of any software strategy—from a developer’s laptop to the data center. At VMware, we’re committed to open source and their communities so that we can all deliver better solutions: software that’s more secure, scalable, and innovative. VMware Tanzu is open source aligned and built on a foundation of open source projects.

VMware Tanzu

VMware (Tanzu) leverages some of the leading open source technologies in the Kubernetes ecosystem. They use Cluster API for cluster lifecycle management, Harbor for container registry, Contour for ingress, Fluentbit for logging, Grafana and Prometheus for monitoring, Antrea and Calico for container networking, Velero for backup and recovery, Sonobuoy for conformance testing, and Pinniped for authentication.

VMware Open Source

VMware Tanzu Application Platform

According to VMware, they built Tanzu Application Platform (TAP) with an open source-first mindset. Here are some of the most popular technologies and projects:

More information can be found here.

VMware Data Services

VMware also has a family of on-demand caching, messaging, and database software (from the acquisition of Pivotal):

  • VMware GemFire – Fast, consistent data for web-scaling concurrent requests fulfills the promise of highly responsive applications.
  • VMware RabbitMQ – A fast, dependable enterprise message broker provides reliable communication among servers, apps, and devices.
  • VMware Greenplum – VMware Greenplum is a massively parallel processing database. Greenplum is based on open source Postgres, enabling Data Warehousing, aggregation, AI/ML and extreme query speed.
  • VMware SQL – VMware’s open-source SQL Database (Postgres & MySQL) is a Relational database service providing cost-efficient and flexible deployments on-demand and at scale. Available on any cloud, anywhere.

Watch the VMware Explore 2022 session “Introduction to VMware Tanzu Data Services” to learn more about this portfolio.

Developers could start with the Tanzu Developer Center.

VMware SQL and DBaaS

If you are interested in building a DB-as-a-Service offering based on PostgreSQL, MySQL or SQL Server, I recommend the following resources from Cormac Hogan:

  1. A closer look at VMware Data Services Manager and Project Moneta
  2. VMware Data Services Manager – Architectural Overview and Provider Deployment
  3. VMware Data Services Manager – Agent Deployment
  4. VMware Data Services Manager – Database Creation
  5. VMware Data Services Manager – SQL Server Database Template
  6. Introduction to VMware Data Services Manager (video)

Closing

Like always, you or your architects have to decide what makes the most sense for your company, your IT landscape, and your applications. Make or buy? Open source or proprietary software? Happily married or locked in? What is vendor lock-in for you?

In any case, VMware embraces open source!

Share Your Opinion – Cross-Cloud Mobility and Application Portability


Do you have an opinion about cross-cloud mobility and application portability? If yes, what about it is important to you? How do you intend to achieve this kind of cloud operating model? Is it about flexibility or more about a cloud-exit strategy? Just because we can, does it mean we should? Will it ever become a reality? These are just some of the questions I am looking to answer.

Contact me via michael.rebmann@cloud13.ch. You can also reach me on LinkedIn.

I am writing a book about this topic and looking for cloud architects and decision-makers who would like to sit down with me via Zoom or MS Teams to discuss the challenges of multi-cloud and how to achieve workload mobility or application/data portability. I just started interviewing chief architects, CTOs and cloud architects from VMware, partners, customers and public cloud providers (like Microsoft, AWS and Google) as part of my research.

The questions below led me to the book idea.

What is Cross-Cloud Mobility and Application Portability about? 

Cross-cloud mobility refers to the ability of an organization to move its applications and workloads between different cloud computing environments. This is an important capability for organizations that want to take advantage of the benefits of using multiple cloud providers, such as access to a wider range of services and features, and the ability to negotiate better terms and pricing.

To achieve cross-cloud mobility, organizations need to use technologies and approaches that are compatible with multiple cloud environments. This often involves using open standards and APIs, as well as adopting a microservices architecture and containerization, which make it easier to move applications and workloads between different clouds.
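
One small, hedged example of what “open standards” means in practice: a twelve-factor style service reads all environment-specific settings from environment variables, so the same container image can run on any cloud. The variable names below are illustrative:

```python
import os

# Twelve-factor style configuration: environment-specific settings come
# from environment variables, so the same image runs on any cloud.
# Variable names are illustrative.
DATABASE_URL = os.environ["DATABASE_URL"]
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")

def describe_runtime() -> str:
    return f"db={DATABASE_URL}, log_level={LOG_LEVEL}"

if __name__ == "__main__":
    print(describe_runtime())
```

The application code stays identical across providers; only the injected configuration changes per environment.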

Another key aspect of cross-cloud mobility is the ability to migrate data between different clouds without losing any of its quality or integrity. This requires the use of robust data migration tools and processes, as well as careful planning and testing to ensure that the migrated data is complete and accurate.

In addition to the technical challenges of achieving cross-cloud mobility, there are also organizational and business considerations. For example, organizations need to carefully evaluate their use of different cloud providers, and ensure that they have the necessary contracts and agreements in place to allow for the movement of applications and workloads between those providers.

Overall, cross-cloud mobility is an important capability for organizations that want to take advantage of the benefits of using multiple cloud providers. By using the right technologies and approaches, organizations can easily and securely move their applications (application portability) and workloads between different clouds, and take advantage of the flexibility and scalability of the cloud.

What is a Cloud-Exit Strategy?

A cloud-exit strategy is a plan for transitioning an organization’s applications and workloads away from a cloud computing environment. This can be necessary for a variety of reasons, such as when an organization wants to switch to a different cloud provider, when it wants to bring its applications and data back in-house, or when it simply no longer needs to use the cloud. A cloud-exit strategy typically includes several key components, such as:

  1. Identifying the specific applications and workloads that will be transitioned away from the cloud, and determining the timeline for the transition.
  2. Developing a plan for migrating the data and applications from the cloud to the new environment, including any necessary data migration tools and processes.
  3. Testing the migration process to ensure that it is successful and that the migrated applications and data are functioning properly.
  4. Implementing any necessary changes to the organization’s network and infrastructure to support the migrated applications and data.
  5. Ensuring that the organization has a clear understanding of the costs and risks associated with the transition, and that it has a plan in place to mitigate those risks.

By having a well-defined cloud-exit strategy, organizations can ensure that they are able to smoothly and successfully transition away from a cloud computing environment when the time comes.

What is a Cloud-Native Application?

A cloud-native application is a type of application that is designed to take advantage of the unique features and characteristics of cloud computing environments. This typically includes using scalable, distributed, and highly available components, as well as leveraging the underlying infrastructure of the cloud to deliver a highly performant and resilient application. Cloud-native applications are typically built using a microservices architecture, which allows for flexibility and scalability, and are often deployed using containers to make them portable across different cloud environments.

Does Cloud-Native mean an application needs to perform equally well on any cloud?

No, being cloud-native does not necessarily mean that an application will perform equally well on any cloud. While cloud-native applications are designed to be portable and scalable, the specific cloud environment in which they are deployed can still have a significant impact on their performance and behavior.

For example, some cloud providers may offer specific services or features that can be leveraged by a cloud-native application to improve its performance, while others may not. Additionally, the underlying infrastructure of different cloud environments can vary, which can affect the performance and availability of a cloud-native application. As a result, it is important for developers to carefully consider the specific cloud environment in which their cloud-native application will be deployed, and to optimize its performance for that environment.

How can you avoid a cloud lock-in?

A cloud lock-in refers to a situation where an organization becomes dependent on a particular cloud provider and is unable to easily switch to a different provider without incurring significant costs or disruptions. To avoid a cloud lock-in, organizations can take several steps, such as:

  1. Choosing a cloud provider that offers tools and services that make it easy to migrate to a different provider, such as data migration tools and APIs for integrating with other cloud services.
  2. Adopting a multi-cloud strategy, where the organization uses multiple cloud providers for different workloads or applications, rather than relying on a single provider.
  3. Ensuring that the organization’s applications and data are portable, by using open standards and technologies that are supported by multiple cloud providers.
  4. Regularly evaluating the organization’s use of cloud services and the contracts with its cloud provider, to ensure that it is getting the best value and flexibility.
  5. Developing a cloud governance strategy that includes processes and policies for managing the organization’s use of cloud services, and ensuring that they align with the organization’s overall business goals and objectives.

By taking these steps, organizations can avoid becoming overly dependent on a single cloud provider and maintain the flexibility to switch to a different provider if needed.

Final Words

Multi-cloud is very complex and has different layers like compute, storage, network, security, monitoring and observability, operations, and cost management. Add topics like open-source software, databases, Kubernetes, developer experience, and automation to the mix, and we most probably have enough to discuss. 🙂

Looking forward to hearing from you!