From Cloud-First to Cloud-Smart to Repatriation

VMware Explore 2024 happened this week in Las Vegas. I think many people were curious about what Hock Tan, CEO of Broadcom, had to say during the general session. He delivered some interesting statements and let everyone in the audience know that “the future of enterprise is private – private cloud, private AI, fueled by your own private data”. On social media, the following slide about “repatriation” made quite a lot of noise:

VMware Explore 2024 Keynote Repatriation

The information on this slide came from Barclays’ CIO Survey from April 2024, and it says that 8 out of 10 CIOs today are planning to move workloads from the public cloud back to their on-premises data centers. It is interesting, and in some cases even funny, that other vendors in the hardware and virtualization business are now chasing this ambulance. Cloud migrations are dead, let us do reverse cloud migrations now. Hybrid cloud is dead, let us do hybrid multi-clouds now and provide workload mobility. My social media walls are full of such postings now. It seems Hock Tan presented the Holy Grail to the world.

Where does this change of mind come from? Why did only 43% plan a reverse cloud migration during COVID-19, and now “suddenly” more than 80% do?

I could tell you the story now about cloud-first not being cool anymore, about organizations starting to follow a smarter cloud approach and then concluding that cloud migrations still did not meet their expectations (e.g., regarding costs and complexity), and that it is now time to bring workloads back on-premises. It is not that simple.

I looked at Barclays’ CIO Survey and the chart (figure 20 in the survey) that served as the source for Hock Tan’s slide:

Barclays CIO Survey April 2024 Cloud Repatriation

We must be very careful with our interpretation of the results. Just because someone is “planning” a reverse cloud migration, does that mean they are executing? And if they execute such an exercise, is this going to be correctly reflected in a future survey?

And which workloads and services are being brought back to an enterprise’s data center? Are we talking about complete applications? Or is it more about load balancers, security appliances, databases and storage, and specific virtual machines? And if we understand the workloads, what are the real reasons to bring them back? Figure 22 of the survey shows “Workloads that Respondents Intend to Move Back to Private Cloud / On-Premise from Public Cloud”:

Barclays CIO Survey April 2024 Workload to migrate

Okay, we have a little bit more context now. Just because some workloads are potentially migrated back to private clouds, what does that mean for public cloud vs. private cloud spend? Question #11 of the survey, “What percentage of your workloads and what percentage of your total IT spend are going towards the public cloud, and how have those evolved over time?”, focuses on this matter.

Barclays CIO Survey April 2024 Percentage of Workloads and Spend

My interpretation? Just because one slide or illustration talks about repatriation does not mean that the entire world is only doing reverse migrations now. Cloud migrations and reverse cloud migrations can happen at the same time. You could bring one application or some databases back on-premises but decide to move all your virtual desktops to the public cloud in parallel. We could still bring workloads back to our data center and increase public cloud spend.

Sounds like cloud-smart again, doesn’t it? Maybe I am an organization that realized that applications A, B, C, and D shouldn’t run in Azure, AWS, Google, or Oracle anymore, while applications W, X, Y, and Z are better suited for these hyperscalers.

What else?

I am writing about my views and opinions here, and there is more to share. During the pandemic, everything had to happen very quickly, and everyone suddenly had money to speed up migrations and application modernization projects. I think it is natural that everything slowed down a bit after this difficult and exhausting phase.

Some of the IT teams are probably still documenting all their changes and new deployments on an internal wiki, and their bosses have started to hire FinOps specialists to analyze their cloud spend. It is no shocking surprise to me that some of the financial goals have not been met, resulting in a reverse cloud migration a few years later.

But that is not all. Try to think about the past years. What else happened?

Yes, we almost forgot about Artificial Intelligence (AI) and Sovereign Clouds.

Before 2020, not many of us were thinking about sovereign clouds, data privacy, and AI.

Most enterprises are still hosting their data on-premises behind their own firewall, and some of this data is used to train or fine-tune models. We see (internal) chatbots popping up that use Retrieval-Augmented Generation (RAG), which delivers answers based on actual data and proprietary information.
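
To make the RAG idea more tangible, here is a minimal sketch of the retrieval step in Python. The corpus, the query, and the final prompt are purely illustrative, and real deployments typically use vector embeddings and a vector database instead of the TF-IDF retrieval shown here:

```python
# Minimal RAG sketch: find the most relevant internal document and
# prepend it to the prompt that would be sent to a language model.
# Corpus and query are made up; TF-IDF stands in for an embedding model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Expense policy: travel must be approved by the line manager.",
    "The VPN endpoint for the Zurich office is vpn.example.com.",
    "Backups of the HR database run nightly at 02:00 CET.",
]

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by TF-IDF cosine similarity to the query."""
    matrix = TfidfVectorizer().fit_transform(documents + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).flatten()
    return [documents[i] for i in scores.argsort()[::-1][:top_k]]

query = "When do the HR database backups run?"
context = "\n".join(retrieve(query, corpus))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # in a real chatbot, this prompt goes to the model
```

This is exactly why the data’s location matters: the chatbot is only as useful (and as compliant) as the proprietary data it retrieves from.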

Okay. What else? 

Yep, there is more. There are new technologies and offerings available that were not here before. We just covered AI and ML (machine learning) workloads, which have become a potential cost or compliance concern.

The concept of sovereign clouds has gained traction due to increasing concerns about data sovereignty and compliance with local regulations.

The adoption of hybrid and hybrid multi-cloud strategies has been a significant trend from 2020 to 2024. Think about VMware’s Cloud Foundation approach with Azure, Google, Oracle, etc., AWS Outposts, Azure Stack, Oracle’s DRCC, or Nutanix’s offerings.

Enterprises started to upskill and train their people to deliver their own Kubernetes platforms.

Edge computing has emerged as a crucial technology, particularly for industries like manufacturing, telecommunications, and healthcare, where real-time data processing is critical.

Conclusion

Reverse cloud migrations are happening for many different reasons like cost management, performance optimization, data security and compliance, automation and operations, or because of lock-in concerns.

Yes, (cloud) repatriation became prominent, but I think this is just a reflection of the maturing cloud market – and not an ambulance.

And no, it is not a better moment to position your hybrid multi-cloud solutions, unless you understand the services and workloads that need to be migrated from one cloud to another. Just because some CIOs plan to bring some workloads back on-premises, does that mean they will actually do it? What about the sunk cost fallacy?

Perhaps IT leaders are going to be more careful in the future and will try to find other ways to achieve cost savings and strategic benefits for their business outcomes – keeping their workloads in the cloud instead of repatriating them.

Businesses are adopting a more nuanced workload-centric strategy.

What’s your opinion?

Distributed Hybrid Infrastructure Offerings Are The New Multi-Cloud

Since VMware has belonged to Broadcom, there has been less focus and messaging on multi-cloud or supercloud architectures. Broadcom has drastically changed the available offerings, and VMware Cloud Foundation is becoming the new vSphere. Additionally, we have seen big changes regarding the partnerships with hyperscalers (the Azures and AWSes of this world) and with the VMware Cloud partners and providers. So, what happened to multi-cloud, and how come nobody (at Broadcom) talks about it anymore?

What is going on?

I do not know if it’s only me, but I do not see the term “multi-cloud” that often anymore. Do you? My LinkedIn feed is full of news about artificial intelligence (AI) and how Nvidia employees got rich. So, I have to admit that I lost track of hybrid clouds, multi-clouds, and hybrid multi-cloud architectures.

Cloud-Inspired and Cloud-Native Private Clouds

It seems to me that the initial idea of multi-cloud has changed in the meantime and that private clouds are becoming platforms with features. Let me explain.

Organizations have built monolithic private clouds in their data centers for a long time. In software engineering, the word “monolithic” describes an application whose components are combined into a single, tightly coupled unit. To build data centers, we followed a similar approach by combining different components like compute, storage, and networking. And over time, IT teams started to think about automation and security, and about integrating different solutions from different vendors.

The VMware messaging was always pointing in the right direction: They want to provide a cloud operating system for any hardware and any cloud (by using VMware Cloud Foundation). On top of that, build abstraction layers and leverage a unified control plane (aka consistent automation and operations).

Since 2020, I have told all my customers that they need to think like a cloud service provider, get rid of silos, implement new processes, and define a new operating model. That is VMware by Broadcom’s messaging today, and this is where they and other vendors are headed: a platform with features that provide cloud services.

In other words, and this is my opinion, VMware Cloud Foundation is today a platform with different components like vSphere, vSAN, NSX, Aria, and so on. Tomorrow, it is still called VMware Cloud Foundation, but it is a platform that includes compute, storage, networking, automation, operations, and other features. No separate product names anymore – just capabilities and services like IaaS, CaaS, DRaaS, or DBaaS. You just choose the specs of the underlying hardware and networking, deploy your private clouds, and then start to build and consume your services.

Replace the name “VMware Cloud Foundation” in the last paragraph with AWS Outposts or Azure Stack. Do you see it now? Distributed unmanaged and managed hybrid cloud offerings with a (service) consumption interface on top.

That is the shift from monolithic data centers to cloud-native private clouds.

From Intercloud to Multi-Cloud

It is not the first time that I have written about interclouds, a term not many of us know. In 2012, there was this idea that different clouds and vendors need to be interoperable and agree on certain standards and protocols. Think about interconnected private and public clouds that allow you to provide VM mobility or application portability. Can you see the picture in front of you? What is the difference today, in 2024?

In 2023, I truly believed that VMware had figured it out when they announced VMware Cloud on Equinix Metal (VMC-E). To me, VMC-E was different and special because of Equinix, which is capable of interconnecting different clouds and at the same time can provide a bare-metal-as-a-service (BMaaS) offering.

Workload Mobility and Application Portability

Almost 2 years ago, I started to write a book about this topic because I wanted to figure out if workload mobility and application portability are things that enterprises are really looking for. I interviewed many CIOs, CTOs, chief architects, and engineers around the globe, and it became VERY clear: it seems nobody was changing anything to make app portability a design requirement.

Almost all of the people I spoke to told me that a lot would have to happen to trigger a cloud exit, and that they therefore see this as a nice-to-have capability that helps them move virtual machines or applications faster from one cloud to another.

VMware Workload Mobility

And almost all of them also told me that a lift-and-shift approach does not provide any value.

But when I talked to developers and operations teams, the answers changed. Most of them did not know that a vendor could provide mobility or portability. Anyway, what has changed now?

Interconnected Multi-Clouds and Distributed Hybrid Clouds

As I mentioned before, some vendors have realized that they need to deliver a unified and integrated programmable platform with a control plane. Ideally, this control plane can be used on-premises, as a SaaS solution, or both. And according to Gartner, these are the leaders in this area (Magic Quadrant for Distributed Hybrid Infrastructure):

Gartner Magic-Quadrant-for-Distributed-Hybrid-Infrastructure

In my opinion, VMware and Nutanix are providing a hybrid multi-cloud approach.

AWS and Microsoft are providing hybrid cloud solutions. In Microsoft’s case, we see Azure Stack HCI, Azure Kubernetes Service (AKS incl. Hybrid AKS) and Azure Arc extending Microsoft’s Azure services to on-premises data centers and edge locations.

The only vendor that currently offers true multi-cloud capabilities is Oracle. Oracle has Dedicated Region Cloud@Customer (DRCC) and Roving Edge, but also partnerships with Microsoft and Google that allow customers to host Oracle databases in Azure and Google Cloud data centers. Both partnerships come with a cross-cloud interconnection.

That is one of the big differences and changes for me at the moment. Multi-cloud has become less about mobility or portability, a single global control plane, or the same Kubernetes distribution in all clouds, and more about bringing different services from different cloud providers closer together.

This is the image I created for the VMC-E blog. Replace the words “AWS” and “Equinix” with “Oracle” and suddenly you have something that was not there before, an interconnected multi-cloud.

What’s Next?

Based on the conversations with my customers, it does not feel like public cloud migrations are happening faster than in 2020 or 2022, and we still see between 70 and 80% of workloads hosted on-premises. While we see customers who are interested in a cloud-first approach, we see many following a hybrid multi-cloud and/or multi-cloud approach. It is still about putting the right applications in the right cloud based on the right decisions. This has not changed.

But the narrative of such conversations has changed. We will see more conversations about data residency, privacy, security, gravity, proximity, and regulatory requirements. Then there are sovereign clouds.

Lastly, enterprises are going to deploy new platforms for AI-based workloads. But that could still take a while.

Final Thoughts

As enterprises continue to navigate the complexities mentioned above, the need for flexible, scalable, and secure infrastructure solutions will only grow. There are a few compelling solutions that bridge the gap between traditional on-premises systems and modern cloud environments.

And since most enterprises are still hosting their workloads on-premises, they have to decide if they want to stretch the private cloud to the public cloud, or the other way around. Both options can co-exist, but combining them would make the environment too big and too complex. What’s your conclusion?

VMware Explore 2023 US – Day 1 Announcements

VMware Explore 2023 US is currently happening in Las Vegas and I am onsite! Below you will find an overview of the information that was shared with us during the general session and solution keynotes.

Please be aware that this list is not complete, but it should include all the major announcements, including references and sources.

VMware Aria and VMware Tanzu

Starting this year, VMware Aria and VMware Tanzu form a single track at VMware Explore and VMware introduced the develop, operate, and optimize pillars (DOO) for Aria and Tanzu around April 2023.

VMware Tanzu DOO Framework

The following name changes and adjustments have been announced at VMware Explore US 2023:

  • The VMware Tanzu portfolio includes two new product categories (product family) called “Tanzu Application Platform” and “Tanzu Intelligence Services”.
  • Tanzu Application Platform includes the products Tanzu Application Platform (TAP) and Tanzu for Kubernetes Operations (TKO), and the new Tanzu Application Engine module.
  • Tanzu Intelligence Services – Aria Cost powered by CloudHealth, Aria Guardrails, Aria Insights, and Aria Migration will be rebranded as “Tanzu” and become part of this new Tanzu Intelligence Services category.
    • Tanzu Hub & Tanzu Graph
    • Tanzu CloudHealth
    • Tanzu Guardrails
    • Tanzu Insights (currently known as Aria Insights)
    • Tanzu Transformer (currently known as Aria Migration)
  • Aria Hub and Aria Graph are now called Tanzu Hub
  • VMware Cloud Packs are now called the VMware Cloud Editions (more information below)

Note: VMware expects to implement these changes by Q1 2024 at the latest

The VMware Aria and Tanzu announcement and rebranding information can be found here.

Tanzu Mission Control

After the announcement that Tanzu Mission Control supports the lifecycle management of Amazon EKS clusters, VMware announced the expansion of these lifecycle management capabilities to Microsoft AKS clusters as well.

Tanzu Application Engine (Private Beta)

VMware announced a new solution for the Tanzu Application Platform category.

VMware Tanzu for Kubernetes Operations is introducing Tanzu Application Engine, enhancing multi-cloud support with lifecycle management of Azure AKS clusters, and offering new Kubernetes FinOps (cluster cost) visibility. It is a new abstraction that includes workload placement, the Kubernetes runtime, data services, libraries, and infrastructure resources, together with a set of policies and guardrails.

The Tanzu Application Engine announcement can be found here.

VMware RabbitMQ Managed Control Plane

I know a lot of customers who built an in-house RabbitMQ cloud service.

VMware just announced a beta program for a new VMware RabbitMQ Managed Control Plane which allows enterprises to seamlessly integrate RabbitMQ within their existing cloud environment, offering flexibility and control over data streaming processes.

What’s New with VMware Aria?

Other Aria announcements can be found here:

  • What’s New with VMware Aria Operations at VMware Explore
  • Next-Gen Public Cloud Management with VMware Aria Automation

VMware Cloud Editions

What started as four different VMware Cloud Packs is now known as “VMware Cloud Editions”, with five different options:

VMware Cloud Editions

Here’s an overview of the different solutions/subscriptions included in each edition:

VMware Cloud Editions Connected Subscriptions

More VMware Cloud related announcements can be found here.

What’s New in vSphere 8 Update 2

As always, VMware is working on enhancing operational efficiency to make the life of an IT admin easier. And this gets better with the vSphere 8 U2 release.

In vSphere 8 Update 2, we are making significant improvements to several areas of maintenance to reduce and in some cases eliminate this need for downtime so vSphere administrators can make those important maintenance changes without having a large impact on the wider vSphere infrastructure consumers.

These enhancements include reduced-downtime upgrades for vCenter, automatic vCenter LVM snapshots before patching and updating, non-disruptive certificate management, and reliable network configuration recovery after a vCenter is restored from backup.

More information about the vSphere 8 Update 2 release can be found here.

What’s New in vSAN 8 Update 2

At VMware Explore 2022, VMware announced the vSAN 8.0 release, which included the new Express Storage Architecture (ESA) that got even better with the recent vSAN 8.0 Update 1 release.

VMware vSAN Max – Petabyte-Scale Disaggregated Storage

VMware vSAN Max, powered by vSAN Express Storage Architecture, is a new offering in the vSAN family delivering petabyte-scale disaggregated storage for vSphere. With its new disaggregated storage deployment model, vSAN customers can scale storage elastically and independently from compute and deploy unified block, file, and partner-based object storage to maximize utilization and achieve lower TCO.

VMware vSAN Max

vSAN Max expands the use cases in which HCI can provide exceptional value. Disaggregation through vSAN Max provides flexibility to build infrastructure with the scale and efficiency required for non-linear scaling applications, such as storage-intensive databases, modern elastic applications with large datasets and more. Customers have a choice of deploying vSAN in a traditional model or a disaggregated model with vSAN Max, while still using a single control plane to manage both deployment options.

The vSAN Max announcement can be found here.

VMware Cloud on AWS

VMware announced a VMware Cloud on AWS Advanced subscription tier that will be available on i3en.metal and i4i.metal instance types only. This subscription will include advanced cloud management, networking and security features:

  • VMware NSX+ Services (NSX+ Intelligence, NDR capabilities, NSX Advanced Load Balancer)
  • vSAN Express Storage Architecture Support
  • VMware Aria Automation
  • VMware Aria Operations
  • VMware Aria Operations for Logs

Note: Existing deployments (existing SDDCs) will be entitled to these advanced cloud management, networking, and security features over time

The VMware Cloud on AWS Advanced Subscription Tier FAQ can be found here. 

Introduction of VMware NSX+

Last year, VMware introduced Project Northstar as a technology preview:

Project Northstar is a SaaS-based networking and security offering that will empower NSX customers with a set of on-demand multi-cloud networking and security services, end-to-end visibility, and controls. Customers will be able to use a centralized cloud console to gain instant access to networking and security services, such as network and security policy controls, Network Detection and Response (NDR), NSX Intelligence, Advanced Load Balancing (ALB), Web Application Firewall (WAF), and HCX. It will support both private cloud and VMware Cloud deployments running on public clouds and enable enterprises to build flexible network infrastructure that they can spin up and down in minutes.

This year, VMware announced the initial availability of the NSX+ service. VMware NSX+ is a fully managed cloud-based service offering that allows networking, security, and operations teams to consume and operate VMware NSX services from a single cloud console across private and public clouds.

NSX+ Architectural Diagram

The following services are available:

  • NSX+ Policy Management: Provides unified networking and security policy management across multiple clouds and on-premises data centers.
  • NSX+ Intelligence (Tech Preview only): Provides a big data reservoir and a system for network and security analytics, with real-time visibility into application traffic – all the way from basic traffic metrics to deep packet inspection.
  • NSX+ NDR (Tech Preview only): Provides a scalable threat detection and response service offering for Security Operations Center (SOC) teams to triage real-time security threats to their data center and cloud.

There are three different NSX+ and two NSX+ distributed firewall editions available:

  • NSX+ Standard. For organizations needing a basic set of NSX connectivity and security features for single location software-defined data center deployments.
  • NSX+ Advanced. For organizations needing advanced networking and security features that are applied to multiple sites. This edition also entitles customers to VMware NSX+ Advanced Load Balancer Cloud Services.
  • NSX+ Enterprise. For organizations needing all of the capability NSX has to offer. This edition also entitles customers to VMware NSX+ Advanced Load Balancer Cloud Services.
  • NSX+ Distributed Firewall. For organizations needing to implement access controls for east-west traffic within the network (micro-segmentation), but not focused on threat detection and prevention services.
  • NSX+ Distributed Firewall with Threat Prevention. For organizations needing access control and select threat prevention features for east-west traffic within the network.

An NSX+ feature overview can be found here.

Note: Currently, NSX+ only supports NSX on-premises deployments (NSX 4.1.1 or later) and VMware Cloud on AWS

VMware Cloud Foundation

VMware announced a few innovations for H2 2023, which include support for the Distributed Services Engine (DSE, aka Project Monterey), vSAN ESA, and NSX+.

Generative AI – VMware Private AI Foundation with Nvidia

VMware and Nvidia’s CEOs announced VMware Private AI Foundation as the result of their longstanding partnership. 

Built on VMware Cloud Foundation, this integrated solution with Nvidia will enable enterprises to customize models and run generative AI applications, including intelligent chatbots, assistants, search, and summarization.

Anywhere Workspace Announcements

At VMware Explore 2022, VMware shared its vision for autonomous workspaces.

Autonomous workspace is a concept (not an individual product) that is our north star for the future of end-user computing. It means going beyond creating a unified workspace with basic automations, to analyzing huge amounts of data with AI and machine learning, to drive more advanced, context aware automations. This leads to a workspace that can be considered self-configuring, self-healing, and self-securing. 

VMware continued working on the realization of this vision and came up with a lot of announcements, which can be found here.

Other Announcements

Please find below some announcements that VMware shared with us during the SpringOne event or before and after the general session on August 22nd, 2023:

Momentum in the Cloud: Crafting Your Winning Strategy with VMware Cloud

Momentum in the Cloud: Crafting Your Winning Strategy with VMware Cloud

The time is right for VMware Cloud! In the rapidly evolving landscape of modern business, embracing the cloud has become essential for organizations seeking to stay competitive and agile. The allure of increased scalability, cost-efficiency, and flexibility has driven enterprises of all sizes to embark on cloud migration journeys. However, the road to successful cloud adoption often comes with challenges. Slow and failed migrations have given rise to what experts call the “cloud paradox,” where the very technology meant to accelerate progress ends up hindering it.

As businesses navigate through this paradox, finding the right strategy to harness the full potential of the cloud becomes paramount. One solution that has emerged as a beacon of hope in this complex landscape is VMware Cloud. With its multi-cloud approach, also known as supercloud, VMware Cloud provides organizations with the ability to craft a winning strategy that capitalizes on momentum while minimizing the risks associated with cloud migrations.

The Experimental Phase is Over

Is it really, though? The experimental phase was an exciting journey of discovery for organizations exploring the potential of multi-cloud environments. Companies explored different cloud providers, tested a variety of cloud services, and experimented with workloads and applications in the cloud. It allowed them to understand the benefits and drawbacks of each cloud platform, assess performance, security, and compliance aspects, and determine how well each cloud provider aligns with their unique business needs.

The Paradox of Cloud and Choice

With an abundance of cloud service providers, each offering distinct features and capabilities, decision-makers can find themselves overwhelmed with options. The quest to optimize workloads across multiple clouds can lead to unintended complexities, such as increased operational overhead, inconsistent management practices/tools, and potential vendor lock-in.

Furthermore, managing data and applications distributed across various cloud environments can create challenges related to security, compliance, and data sovereignty. The lack of standardized practices and tools in a multi-cloud setup can also hinder collaboration and agility, negating the very advantages that public cloud environments promise to deliver.

Multi-Cloud Complexity

(Public) cloud computing is often praised for its cost-efficiency, enabling businesses to pay for resources on demand and avoid capital expenditures on physical infrastructure. However, the cloud paradox reveals that organizations can inadvertently accumulate hidden costs, such as data egress fees, storage overage charges, and the cost of cloud management tools. Without careful planning and oversight, the cloud’s financial benefits might be offset by unexpected expenses.
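
To illustrate the point, here is a small, purely hypothetical monthly bill (all figures are invented) that shows how these hidden items can inflate the headline compute price:

```python
# Illustrative monthly cloud bill (all figures invented) showing how
# "hidden" items inflate the headline compute price.
bill_usd = {
    "compute (VMs)":          10_000,
    "storage":                 3_000,
    "egress fees":             1_800,
    "managed databases":       2_500,
    "cloud management tools":    900,
    "support plan":            1_000,
}
compute = bill_usd["compute (VMs)"]
total = sum(bill_usd.values())
print(f"headline compute: ${compute:,} / actual total: ${total:,}")
print(f"compute is only {compute / total:.0%} of the real bill")
```

In this made-up example, the VM line item ends up being only about half of what actually leaves the budget every month.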

Why Cloud Migrations are Slowing Down

Failed expectations. The first reasons my customers mention are cost and complexity.

While the cloud offers potential cost savings in the long run, the initial investment and perceived uncertainty in calculating the total cost of ownership can deter some organizations from moving forward with cloud migrations. Budget constraints and difficulties in accurately estimating and analyzing cloud expenses lead to a cautious approach to cloud adoption.

One significant factor impeding cloud migrations is the complexity of the process itself. Moving entire infrastructures, applications, and data to the cloud requires thorough planning, precise execution, and in-depth knowledge of cloud platforms and technologies. Many organizations lack the in-house expertise to handle such a massive undertaking, leading to delays and apprehensions about potential risks.

Other underestimated reasons are legacy systems and applications that have been in use for many years and are often deeply ingrained within an organization’s operations. Migrating these systems to the cloud may require extensive reconfiguration or complete redevelopment, making the migration process both time-consuming and resource-intensive.

Reverse Cloud Migrations

While I am not advocating for repatriation, I would like to share the idea that companies should think about workload mobility, application portability, and repatriation upfront. You can optimize your cloud spend indefinitely, but if cloud costs start to outpace your transformation plans or revenue growth, it is already too late.

Embracing a Smart Approach with VMware Cloud

To address the cloud paradox and maximize the potential of multi-cloud environments, VMware is embracing the cloud-smart approach. This approach is designed to empower organizations with a unified and consistent platform to manage and operate their applications across multiple clouds.

VMware Cloud-Smart

  • Single Cloud Operating Model: A single operating model that spans private and public clouds. This consistency simplifies cloud management, enabling seamless workload migration and minimizing the complexities associated with multiple cloud providers.
  • Flexible Cloud Choice: VMware allows organizations to choose the cloud provider that best suits their specific needs, whether it is a public cloud or a private cloud infrastructure. This freedom of choice ensures that businesses can leverage the unique advantages of each cloud while maintaining operational consistency.
  • Streamlined Application Management: A cloud-smart approach centralizes application management, making it easier to deploy, secure, and monitor applications across multi-cloud environments. This streamlines processes, enhances collaboration, and improves operational efficiency.
  • Enhanced Security and Compliance: By adopting VMware’s security solutions, businesses can implement consistent security policies across all clouds, ensuring data protection and compliance adherence regardless of the cloud provider.

Why VMware Cloud?

This year, I realized that a lot of VMware customers came back to me because their cloud-first strategy did not work as expected. Costs exploded, migrations failed, and their project timelines changed many times. Also, partners like Microsoft and AWS want to collaborate more with VMware because the public cloud giants cannot deliver as expected.

Customers and public cloud providers did not see any value in lifting and shifting workloads from on-premises data centers to the public cloud. Now the exact same people, companies, and partners (AWS, Microsoft, Google, Oracle, etc.) are back to ask VMware for support and for solutions that can speed up cloud migrations while reducing risks.

This is why I am always suggesting a “lift and learn” approach, which removes pressure and reduces costs.

Organizations view the public cloud as a highly strategic platform for digital transformation. Gartner forecasted in April 2023 that Infrastructure-as-a-Service (IaaS) is going to experience the highest spending growth in 2023, followed by PaaS.

It is said that companies spend most of their money on compute, storage, and data services when using Google Cloud, AWS, and Microsoft Azure. Guess what: VMware Cloud is a perfect fit for IaaS-based workloads (instead of using AWS EC2, Google’s Compute Engine, or Azure virtual machine instances)!

Who doesn’t like the idea of cost savings and faster cloud migrations?

Disaster Recovery and FinOps

When you migrate workloads to the cloud, you have to rethink your disaster recovery and ransomware recovery strategy. Have a look at VMware’s DRaaS (Disaster-Recovery-as-a-Service) offering which includes ransomware recovery capabilities as well. 

If you want to analyze and optimize your cloud spend, try out VMware Aria Cost powered by CloudHealth.

Final Words

VMware’s approach is not right for everyone, but it is a future-proof cloud strategy that enables organizations to adapt their cloud strategies as business needs evolve. The cloud-smart approach offers a compelling solution, providing businesses with a unified, consistent, and flexible platform to succeed in multi-cloud environments. By embracing this approach, organizations can overcome the complexities of multi-cloud, unlock new possibilities, and set themselves on a path to cloud success.

And you still get the same access to the native public cloud services.

Supercloud – A Hybrid Multi-Cloud

I thought it was time to finally write a piece about superclouds. Call it supercloud, the new multi-cloud, a hybrid multi-cloud, cross-cloud, or a metacloud. New terms with the same meaning. I may be biased, but I am convinced that VMware is in pole position for this new architecture and approach.

Let me also tell you this: superclouds are nothing new. Some of you believe that the idea of a supercloud is something new, something modern. Some of you may also think that cross-cloud services, workload mobility, application portability, and data gravity are new, complex topics of the “modern world” that need to be discussed or solved in 2023 and beyond. Guess what, most of these challenges and ideas have existed for more than 10 years already!

Cloud-First is not cool anymore

There is clear evidence that a cloud-first approach is not cool or ideal anymore. Do you remember, about a dozen years ago, when analysts believed that local data centers were going to disappear and the IT landscape would consist only of public clouds, aka hyperscalers? Have a look at this timeline:

VMware and Public Clouds Timeline

We can clearly see when public clouds like AWS, Google Cloud, and Microsoft Azure appeared on the surface. A few years later, the world realized that the future is hybrid or multi-cloud. In 2019, AWS launched Outposts; Microsoft made Azure Arc and their on-premises Kubernetes offering available only a few years later.

Google, AWS, and Microsoft changed their messaging from “we are the best, we are the only cloud” to “okay, the future is multi-cloud, we also have something for you now”. Consistent infrastructure and consistent operations became almost everyone’s marketing slogan.

As you can also see above, VMware announced their hybrid cloud offering “VMware Cloud on AWS” in 2016; initial availability came a year later, and it has been generally available since 2018.

From Internet to Interclouds

Before someone coined the term “supercloud”, people were talking about the need for an “intercloud”. In 2010, Vint Cerf, the so-called “Father of the Internet”, shared his opinions and predictions on the future of cloud computing. He was talking about the potential need for and importance of interconnecting different clouds.

Cerf already understood about 13 years ago that there is a need for an intercloud because users should be able to move data and workloads from one cloud to another (e.g., from AWS to Azure to GCP). Back then, he guessed that the intercloud problem could be solved around 2015.

We’re at the same point now in 2010 as we were in ’73 with internet.

In short, Vint Cerf understood that the future is multi-cloud and that interoperability standards are key.

There is also a document that proves that NIST had a working group (IEEE P2302) trying to develop the “Standard for Intercloud Interoperability and Federation (SIIF)”. This was around 2011. What did the proposal look like back then? I found this YouTube video a few years ago with the following sketch:

Intercloud 2012

Workload Mobility and Application Portability

As we can see above, VM or workload mobility was already part of this high-level architecture from the IEEE working group. I also found a paper from NIST called “Cloud Computing Standards Roadmap” dated July 2013 with very interesting sections:

Cloud platforms should make it possible to securely and efficiently move data in, out, and among cloud providers and to make it possible to port applications from one cloud platform to another. Data may be transient or persistent, structured or unstructured and may be stored in a file system, cache, relational or non-relational database. Cloud interoperability means that data can be processed by different services on different cloud systems through common specifications. Cloud portability means that data can be moved from one cloud system to another and that applications can be ported and run on different cloud systems at an acceptable cost.

Note: VMware HCX has been available since 2018 and is still the easiest and probably the most cost-efficient way to migrate workloads from one cloud to another.

It is all about the money

Imagine it is March 2014, and you read the following announcement: Cisco is going big – they want to spend $1 billion on the creation of an intercloud.

Yes, that really happened. Details can be found in the New York Times Archive. The New York Times even mentioned at the end of their article that “it’s clear that cloud computing has become a very big money game”.

In Cisco’s announcement, money had also been mentioned:

Of course, we believe this is going to be good for business. We expect to expand the addressable cloud market for Cisco and our partners from $22Bn to $88Bn between 2013-2017.

In 2016, Cisco retired their intercloud offering because AWS and Microsoft were, and still are, very dominant. AWS posted $12.2 billion in sales for 2016; Microsoft ended up at almost $3 billion in revenue with Azure.

Remember Cisco’s estimate about the “addressable cloud market”? In 2018, Gartner presented the number of $145B for the worldwide public cloud spend in 2017. For 2023, Gartner forecasted a cloud spend of almost $600 billion.

Data Gravity and Egress Costs

Another topic I want to highlight is “data gravity”, a term coined by Dave McCrory in 2010:

Consider Data as if it were a Planet or other object with sufficient mass. As Data accumulates (builds mass) there is a greater likelihood that additional Services and Applications will be attracted to this data. This is the same effect Gravity has on objects around a planet. As the mass or density increases, so does the strength of gravitational pull. As things get closer to the mass, they accelerate toward the mass at an increasingly faster velocity. Relating this analogy to Data is what is pictured below.

Put data gravity together with egress costs, and one realizes that the two limit any mobility and/or portability discussion:

Source: https://medium.com/@alexandre_43174/the-surprising-truth-about-cloud-egress-costs-d1be3f70d001
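
To make the egress part concrete, here is a back-of-the-envelope calculation. The $0.09 per GB rate is my assumption based on commonly cited list prices; actual rates vary by provider, tier, volume, and destination:

```python
# Back-of-the-envelope egress cost: data leaving a cloud is billed per
# GB, so "just move it elsewhere" carries a concrete price tag.
# The $0.09/GB rate is an assumption, not a quote from any provider.
EGRESS_USD_PER_GB = 0.09

def egress_cost_usd(terabytes: float, rate: float = EGRESS_USD_PER_GB) -> float:
    """USD cost to move the given data volume out of a cloud once."""
    return terabytes * 1024 * rate

for tb in (1, 50, 500):
    print(f"{tb:>4} TB -> ${egress_cost_usd(tb):>9,.0f}")
# 500 TB at $0.09/GB is roughly $46,000 for a single full copy out.
```

The more data accumulates in one cloud, the bigger that one-time exit bill becomes – which is data gravity expressed in dollars.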

By the way, what happened to “economies of scale”?

The Cloud Paradox

As you should understand by now, topics like costs, lock-in, and failed expectations (technical and commercial) have been discussed for more than a decade already. That is why I highlighted NIST’s sentence above: cloud portability means that data can be moved from one cloud system to another and that applications can be ported and run on different cloud systems at an acceptable cost.

Acceptable cost.

While the (public) cloud seems to be the right choice for some companies, we now see another scenario popping up more often: reverse cloud migrations (sometimes also called repatriation).

I have customers who tell me that the exact same VM with the exact same business logic costs 5 to 7 times more after they moved it from their private cloud to a public cloud.

Let’s park that and cover the “true costs of cloud” another time. 😀

Public Cloud Services Spend

Looking at Vantage’s report, we can see the following top 10 services on AWS, Azure and GCP ranked by the share of costs:

If they are right and the numbers are true for most enterprises, it means that customers spend most of their money on virtual machines (IaaS), databases, and storage.

What does Gartner say?

Let’s have a look at the most recent forecast called “Worldwide Public Cloud End-User Spending to Reach Nearly $600 Billion in 2023” from April 2023:

Gartner April 2023 Public Cloud Spend Forecast

All segments of the cloud market are expected to see growth in 2023. Infrastructure-as-a-service (IaaS) is forecast to experience the highest end-user spending growth in 2023 at 30.9%, followed by platform-as-a-service (PaaS) at 24.1%.

Conclusion

If most companies spend around 30% of their budget on virtual machines and Gartner predicts that IaaS will still grow faster than SaaS or PaaS, a supercloud architecture for IaaS would make a lot of sense. You would have the same technology format, could use the same networking and security policies and existing skills, and would benefit from many other advantages as well.

Looking at the VMware Cloud approach, which allows you to run VMware’s software-defined data center (SDDC) stack on AWS, Azure, Google, and many other public clouds, customers could create a seamless hybrid multi-cloud architecture – using the same technology across clouds.

Other VMware products that fall under the supercloud category would be Tanzu Application Platform (TAP), the Aria Suite, and Tanzu for Kubernetes Operations (TKO) which belong to VMware’s Cross-Cloud Services portfolio.

Final Words

I think it is important that we understand that we are still in the early days of multi-cloud (or of using multiple clouds).

Customers get confused because it took them years to deploy or move new or existing apps to the public cloud. Now, analysts and vendors talk about cloud exit strategies, reverse cloud migrations, repatriations, exploding cloud costs, and so on.

Yes, a supercloud is about a hybrid multi-cloud architecture and a standardized design for building apps and platforms across clouds. But the most important capability, in my opinion, is the fact that it makes your IT landscape future-ready on different levels with different abstraction layers.

VMware vSphere – The Enterprise Data Platform

The world is creating and consuming more data than ever. There are multiple reasons that can explain this trend. Data creates the foundation for many digital products and services. And we read more and more about companies that want or need to keep their data on-premises because of reasons like data proximity, performance, data privacy, data sovereignty, data security, and predictable cost control. We also know that the edge is growing much faster than large data centers. These and other factors are the reasons why CIOs and decision-makers are now focusing on data more than ever before.

We live in a digital era where data is one of the most valuable assets. The whole economy from the government to local companies would not be able to function without data. Hence, it makes sense to structure and analyze the data, so a company’s data infrastructure becomes a profit center and is not just seen as a cost center anymore.

Data Sprawl

A lot of enterprises are confronted with the so-called data sprawl. Data sprawl means that an organization’s data is stored on and consumed by different devices and operating systems in different locations. There are cases where the consumers and the IT teams are not sure anymore where some of the data is stored and how it should be accessed. This is a huge risk and results in a loss of security and productivity.

Since the discussions about sovereign clouds and data sovereignty have started, it has never been more important where a company’s data resides, and where and how one can consume that data.

Enterprises have started to follow a cloud-smart approach: They put the right application and its data in the right cloud, based on the right reasons. In other words, they think twice now where and how they store their data.

What databases are popular?

When talking to developers and IT teams, I mostly received the following names (in no particular order):

  • Oracle
  • MSSQL
  • MySQL
  • PostgreSQL

I think it would be fair to say that a lot of customers are looking for alternatives to expensive database and database management system (DBMS) solutions. It seems that Postgres and MySQL have earned a lot of popularity over the past years, while Oracle is still considered one of the best databases on the market – even though it is also seen as one of the most expensive and least liked solutions. But I also hear other solutions like MongoDB, MariaDB, and Redis mentioned in more and more discussions.

DBaaS and Public Cloud Characteristics

It is nothing new: Developers are looking for a public-cloud-like experience for their on-premises deployments. They want an easy and smooth self-service experience without the need for opening tickets and waiting for several days to get their database up and running. And we also know that open-source and freedom of choice are becoming more important to companies and their developers. Some of the main drivers here are costs and vendor lock-in.

IT teams, on the other side, want to provide security and compliance, more standardization around versions and types, and an easy way to back up and restore databases. But the truth is that a lot of companies are struggling to provide this kind of Database-as-a-Service (DBaaS) experience to their developers.

The idea and expectation of DBaaS are to reduce management and operational efforts with the possibility to easily scale databases up and down. The difference between the public cloud DBaaS offering and your on-premises data center infrastructure is the underlying physical and virtual platform.

On-premises, it could theoretically be any hardware, but VMware vSphere is still the most used virtualization platform for an enterprise’s data (center) infrastructure.

VMware vSphere and Data

VMware shared that telemetry from their customer base shows that almost 25% of VMware workloads are data workloads (databases, data warehouses, big data analytics, data queueing, and caching), and it looks like MS SQL Server still has the biggest share of all databases hosted on-premises.

They are also seeing high double-digit growth (approx. 70-90%) for MySQL and steady growth for PostgreSQL. Rank 4 is probably Redis, followed by MongoDB.

VMware Data Solutions

VMware Data Solutions, formerly known as Tanzu Data Services, is a powerful part of the entire VMware portfolio and consists of:

  • VMware GemFire – Fast, consistent data for web-scaling concurrent requests fulfills the promise of highly responsive applications.
  • VMware RabbitMQ – A fast, dependable enterprise message broker that provides reliable communication among servers, apps, and devices.
  • VMware Greenplum – A massively parallel processing database based on open-source Postgres, enabling data warehousing, aggregation, AI/ML, and extreme query speed.
  • VMware SQL – VMware’s open-source SQL database (Postgres & MySQL) is a relational database service providing cost-efficient and flexible deployments on demand and at scale. Available on any cloud, anywhere.
  • VMware Data Services Manager – Reduce operational costs and increase developer agility with VMware Data Services Manager, the modern platform to manage and consume databases on vSphere.

VMware Data Services Manager and VMware SQL

VMware SQL allows customers to deploy curated versions of PostgreSQL and MySQL, and Data Services Manager (DSM) is the solution that enables customers to create the DBaaS experience their developers are looking for.

VMware DSM Personas

Data Services Manager has the following key features:

  • Provisioning – Provision different configurations of databases (MySQL, Postgres, and SQL Server) with either freely configurable or pre-defined sizing of compute and memory resources, depending on user permissions
  • Backup & Restore – Backup, Transactional log, Point in Time Recovery (PiTR), on-demand or as scheduled
  • Scaling – Modify instances depending on usage (scale up, scale down, disk extension)
  • Replication – Replicate (Cold/Hot or Read Replicas) across managed zones
  • Monitoring – Monitor database engine, vSphere infrastructure, networking, and more.

…and supports the following components and versions (with DSM v1.4):

  • MySQL 8.0.30
  • Postgres 10.23.0, 11.18.0, 12.13.0, 13.9.0
  • MSSQL Server 2019 (Standard, Developer, Enterprise Edition)

Companies with a lot of databases now have a way to manage, control, and secure Postgres, MySQL, and MSSQL DB instances from a centralized tool that can be accessed via the UI or API.
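
To illustrate what programmatic consumption could look like, here is a hypothetical sketch of a provisioning call. The host, endpoint path, payload fields, and authentication handling are invented for illustration and do not reflect the actual DSM API; please consult the official API reference:

```python
# Hypothetical sketch of provisioning a database through a central
# manager's REST API. Host, path, payload fields, and token handling
# are invented for illustration and do NOT reflect the real DSM API.
import requests

DSM_URL = "https://dsm.example.internal"   # assumption: internal DSM host
API_TOKEN = "replace-with-a-real-token"    # obtained out of band

payload = {
    "engine": "postgres",                  # postgres | mysql | sqlserver
    "version": "13.9.0",
    "size": {"cpu": 4, "memory_gb": 16, "disk_gb": 100},
    "backup": {"schedule": "daily", "pitr": True},
}

response = requests.post(
    f"{DSM_URL}/api/databases",            # illustrative path only
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print("provisioned:", response.json())
```

This is the kind of self-service flow developers expect from a public cloud DBaaS: one authenticated API call instead of a ticket and several days of waiting.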

Project Moneta

VMware’s vision is to become the cloud platform of choice. What started with compute, storage, and networking continues with data: making it as easy to consume as the rest of their software-defined data center stack.

VMware has started with DSM and sees Moneta, which is still an R&D project, as the next evolution. The focus of Moneta is to bring better self-service and programmatic consumption capabilities (e.g., integration with GitHub).

Project Moneta will provide native integration with vSphere+ and the Cloud Consumption Interface (CCI). While nothing is official yet, I think of it as a vSphere+ and VMware Cloud add-on service that would provide data infrastructure capabilities. 

Final Words

If your developers want to use PostgreSQL, MySQL, and MSSQL, and if your IT struggles to deploy, manage, secure, and back up those databases, then DSM with VMware SQL can help. Both solutions are also perfectly suited for disconnected use cases or air-gapped environments.

Note: The DB engines are certified, tested and supported by VMware.