From Monolithic Data Centers to Modern Private Clouds

Behind every shift from old-school to new-school, there is a bigger story about people, power, and most of all, trust. And nowhere is that clearer than in the move from traditional monolithic data centers to what we now call a modern private cloud infrastructure.

A lot of people still think this evolution is just about better technology, faster hardware, or fancier dashboards. But it is not. If you zoom out, the core driver is not features or functions, it is trust in the executive vision, and the willingness to break from the past.

Monolithic data centers stall innovation

But here is the problem: monoliths do not scale in a modern world (or cloud). They slow down innovation, force one-size-fits-all models, and lock organizations into inflexible architectures. And as organizations grew, the burden of managing these environments became more political than practical.

The tipping point was not when better tech appeared. It was when leadership stopped trusting that the monolithic data centers and their monolithic applications could deliver what the business actually needed. That is the key. The failure of monolithic infrastructure was not technical – it was cultural.

Hypervisors are not the platform you think

Let us make that clear: hypervisors are not platforms! They are just silos and one piece of a bigger puzzle.

Yes, they play a role in virtualization. Yes, they helped abstract hardware and brought some flexibility. But let us not overstate it: they do not define modern infrastructure or a private cloud. Hypervisors solve a problem from a decade ago. Modern private infrastructure is not about stacking tools, it is about breaking silos, including the ones created by legacy virtualization models.

Private Cloud – Modern Infrastructure

So, what is a modern private infrastructure? What is a private cloud? It is not just cloud-native behind your firewall. It is not just running Kubernetes on bare metal. It is a mindset.

You do not get to “modern” by chasing features or by swapping one virtualization vendor for another. You get there by believing in the principles of openness, automation, decentralization, and speed. And that trust has to start from the top. If your CIO or CTO is still building for audit trails and risk reduction as their north star, you will end up with another monolithic data center stack. Just with fancier logos.

But if leadership leans into trust – trust in people, in automation, in feedback loops – you get a system that evolves. Call it modern. Call it next-gen.

It was never about the technology

We moved from monolithic data centers not because the tech got better (though it did), but because people stopped trusting the old system to serve the new mission.

And as we move forward, we should remember: it is not hypervisors or containers or even clouds that shape the future. It is trust in execution, leadership, and direction. That is the real platform everything else stands on. If your architecture still assumes manual control, ticketing systems, and approvals every step of the way, you are not building a modern infrastructure. You are simply replicating bureaucracy in YAML. A modern infra is about building a cloud that does not need micro-management.

Platform Thinking versus Control

A lot of organizations say they want a platform, but what they really want is control. Big difference.

Platform thinking is rooted in enablement. It is about giving teams consistent experiences, reusable services, and the freedom to ship without opening a support ticket every time they need a VM or a namespace.

And platform thinking only works when there is trust as well:

  • Trust in dev teams to deploy responsibly
  • Trust in infrastructure to self-heal and scale
  • Trust in telemetry and observability to show the truth

Trust is a leadership decision. It starts when execs stop treating infrastructure as a cost center and start seeing it as a product. Something that should deliver value, be measured, and evolve.

It is easy to get distracted. A new storage engine, a new control plane, a new AI-driven whatever. Features are tempting because they are measurable. You can point at them in a dashboard or a roadmap.

But features don’t create trust. People do. The most advanced platform in the world is useless if teams do not trust it to be available, understandable, and usable. 

So instead of asking “what tech should we buy?”, the real question is:

“Do we trust ourselves enough to let go of the old way?”

Because that is what building a modern private cloud is really about.

Trust at Scale

In Switzerland, we like things to work. Predictably. Reliably. On time. With the current geopolitical situation in the world, and especially when it comes to public institutions, that expectation is non-negotiable.

The systems behind those services are under more pressure than ever. Demands are rising and talent is shifting. Legacy infrastructure is getting more fragile and expensive. And at the same time, there is this quiet but urgent question being asked in every boardroom and IT strategy meeting:

Can we keep up without giving up control?

Public sector organizations (not only in Switzerland) face a unique set of constraints:

  • Critical infrastructure cannot go down, ever
  • Compliance and data protection are not just guidelines, they are legal obligations
  • Internal IT often has to serve a wide range of users, platforms, and expectations

So, it is no surprise that many of these organizations default to monolithic, traditional data centers. The logic is understandable: “If we can touch it, we can control it.”

But here is the reality: control does not scale. And legacy does not adapt. Staying “safe” with old infrastructure might feel responsible, but it actually increases long-term risk, cost, and technical debt. There is a temptation to approach modernization as a procurement problem: pick a new vendor, install a new platform, run a few migrations, and check the box. Done.

But transformation doesn’t work that way. You can’t buy your way out of a culture that does not trust change.

I understand, this can feel uncomfortable. Many institutions are structured to avoid mistakes. But modern IT success requires a shift from control to resilience, and it is not about perfection. A system is only perfect until it needs to adapt again.

How to start?

By now, it is clear: modern private cloud infrastructure is not about chasing trends or blindly “moving to the cloud.” It’s about designing systems that reflect what your organization values: reliability, control, and trust, while giving teams the tools to evolve. But that still leaves the hardest question of all:

Where do we start?

First, transparency is the first ingredient of trust. You can’t fix what you won’t name.

Second, modernizing safely does not mean boiling the ocean. It means starting with a thin slice of the future.

The goal is to identify a use case where you can:

  • Show real impact in under six months

  • Reduce friction for both IT and internal users

  • Create confidence that change is possible without risk

In short, it is about finding use cases with high impact but low risk.

Third, this is where a lot of transformation efforts stall. Organizations try to modernize the tech, but keep the old permission structures. The result? A shinier version of the same bottlenecks. Instead, shift from control to guardrails. Think less about who can approve what, and more about how the system enforces good behavior by default. For example:

  • Implement policy-as-code: rules embedded into the platform, not buried in documents

  • Automate security scans, RBAC, and drift detection

  • Give teams safe, constrained freedom instead of needing to ask for access

Guardrails enable trust without giving up safety. That’s the core of a modern infrastructure (private or public cloud).
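
To make the guardrails idea concrete, here is a minimal policy-as-code sketch in Python. The rules, limits, and region names are hypothetical examples; real platforms would express this in a policy engine such as OPA, but the principle is the same: policies are testable code that the platform enforces by default, instead of approvals buried in documents.

```python
from dataclasses import dataclass, field

@dataclass
class VmRequest:
    team: str
    cpus: int
    region: str
    tags: dict = field(default_factory=dict)

# Hypothetical guardrails: each rule is a small, testable function,
# not a manual approval step.
ALLOWED_REGIONS = {"ch-zurich-1", "ch-geneva-1"}

def check(request: VmRequest) -> list[str]:
    """Return all policy violations; an empty list means the request passes."""
    violations = []
    if request.cpus > 16:
        violations.append("cpu-limit: more than 16 vCPUs needs a quota change")
    if request.region not in ALLOWED_REGIONS:
        violations.append("residency: region must be on the approved list")
    if "cost-center" not in request.tags:
        violations.append("tagging: a 'cost-center' tag is mandatory")
    return violations

ok = check(VmRequest("platform", 8, "ch-zurich-1", {"cost-center": "4711"}))
bad = check(VmRequest("data", 32, "us-east-1"))
```

A compliant request passes silently; a non-compliant one gets an immediate, explainable list of violations instead of a rejected ticket three days later.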

And lastly, make trust measurable. Not just with uptime numbers or dashboards but with real signals:

  • Are teams delivering faster?

  • Are incidents down?

  • Are internal users adopting the platform voluntarily?

Make this measurable, visible, and repeatable. Success builds trust. Trust creates momentum.
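
As a sketch of what “measurable trust” could look like, the snippet below compares two of these signals before and after a platform change. The weekly numbers are invented for illustration; the point is that the signals are computed and published, not asserted.

```python
def trend(before: list[float], after: list[float]) -> float:
    """Percentage change of the average value, after vs. before."""
    avg_before = sum(before) / len(before)
    avg_after = sum(after) / len(after)
    return (avg_after - avg_before) / avg_before * 100

# Hypothetical weekly numbers, before and after the new platform
deploys_delta = trend([4, 5, 3, 4], [9, 8, 10, 9])    # deployments per week
incidents_delta = trend([6, 5, 7, 6], [3, 2, 3, 4])   # incidents per week
```

A positive delivery trend and a negative incident trend, tracked over time and visible to everyone, are exactly the kind of repeatable signals that turn success into momentum.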

Final Thoughts

IT organizations do not need moonshots. They need measured, meaningful modernization. The kind that builds belief internally, earns trust externally, and makes infrastructure feel like an asset again.

The technology matters, but how you introduce it matters even more. 

Private Cloud Autarky – You Are Safe Until The World Moves On

I believe it was 2023 when the term “autarky” first came up in my conversations with several customers who maintain their own data centers and private clouds. Interestingly, this word popped up again recently at work, but I only knew it from photovoltaic systems. And it kept my mind busy for several weeks.

What is autarky?

To understand autarky in the IT world and its implications for private clouds, an analogy from the photovoltaic (solar power) system world offers a clear parallel. Just as autarky in IT means a private cloud that is fully self-sufficient, autarky in photovoltaics refers to an “off-grid” solar setup that powers a home or facility without relying on the external electrical grid or outside suppliers.

Imagine a homeowner aiming for total energy independence – an autarkic photovoltaic system. Here is what it looks like:

  • Solar Panels: The homeowner installs panels to capture sunlight and generate electricity.
  • Battery: Excess power is stored in batteries (e.g., lithium-ion) for use at night or on cloudy days.
  • Inverter: A device converts solar DC power to usable AC power for appliances.
  • Self-Maintenance: The homeowner repairs panels, replaces batteries, and manages the system without calling a utility company or buying parts. 

This setup cuts ties with the power grid – no monthly bills, no reliance on power plants. It is a self-contained energy ecosystem, much like an autarkic private cloud aims to be a self-contained digital ecosystem.

Question: which installation partner keeps enough spare parts in stock, and how many homeowners can repair the whole system by themselves?

Let’s align this with autarky in IT:

  • Solar Panels = Servers and Hardware: Just as panels generate power, servers (compute, storage, networking) generate the cloud’s processing capability. Theoretically, an autarkic private cloud requires the organization to build its own servers, similar to crafting custom solar panels instead of buying from any vendor.
  • Battery = Spares and Redundancy: Batteries store energy for later; spare hardware (e.g., extra servers, drives, networking equipment) keeps the cloud running when parts fail. 
  • Inverter = Software Stack: The inverter transforms raw power into usable energy, like how a software stack (OS, hypervisor) turns hardware into a functional cloud.
  • Self-Maintenance = Internal Operations: Fixing a solar system solo parallels maintaining a cloud without vendor support – both need in-house expertise to troubleshoot and repair everything.

Let me repeat it: both need in-house expertise to troubleshoot and repair everything. Everything.

The goal is self-sufficiency and independence. So, what are companies doing?

An autarkic private cloud might stockpile Dell servers or Nvidia GPUs upfront, but that first purchase ties you to external vendors. True autarky would mean mining silicon and forging chips yourself – impractical, just like growing your own silicon crystals for panels.

The problem

In practice, autarky for private clouds sounds like an extreme goal. It promises maximum control – ideal for scenarios like military secrecy, regulatory isolation, or distrust of global supply chains – but it clashes with the realities of modern IT:

  • Once the last spare dies, you are done. No new tech without breaking autarky.
  • Autarky trades resilience for stagnation. Your cloud stays alive but grows irrelevant.
  • Autarky’s price tag limits it to tiny, niche clouds – not hyperscale rivals.
  • Future workloads are a guessing game. Stockpile too few servers, and you can’t expand. Too many, and you have wasted millions. A 2027 AI boom or quantum shift could make your equipment useless.

But where is this idea of self-sufficiency or sovereign operations coming from? Nowadays? Geopolitical resilience.

Sanctions or trade wars will not starve your cloud. A private (hyperscale) cloud that answers to no one, free from external risks or influence. That is the whole idea.

What is the probability of such sanctions? Who knows… but this is a number that has to be defined for each case depending on the location/country, internal and external customers, and requirements.

If it happens, is it foreseeable, and what does it force you to do? Does it trigger a cloud-exit scenario?

I just know that if there are sanctions, any hyperscaler in your country has the same problems. No matter if it is a public or dedicated region. That is the blast radius. It is not only about you and your infrastructure anymore.

What about private disconnected hyperscale clouds?

When hosting workloads in the public clouds, organizations care more about data residency, regulations, the US Cloud Act, and less about autarky.

Hyperscale clouds like Microsoft Azure and Oracle Cloud Infrastructure (OCI) are built to deliver massive scale, flexibility, and performance, but they rely on complex ecosystems that make full autarky impossible. Oracle offers solutions like OCI Dedicated Region and Oracle Alloy to address sovereignty needs, giving customers more control over their data and operations. However, even these solutions fall short of true autarky and absolute sovereign operations due to practical, technical, and economic realities.

A short explanation from Microsoft gives us a hint why that is the case:

Additionally, some operational sovereignty requirements, like Autarky (for example, being able to run independently of external networks and systems) are infeasible in hyperscale cloud-computing platforms like Azure, which rely on regular platform updates to keep systems in an optimal state.

So, what are customers asking for when they are interested in hosting their own dedicated cloud region in their data centers? Disconnected hyperscale clouds.

But hosting an OCI Dedicated Region in your data center does not change the underlying architecture of Oracle Cloud Infrastructure (OCI). Nor does it change the upgrade or patching process, or the whole operating model.

Hyperscale clouds do not exist in a vacuum. They lean on a web of external and internal dependencies to work:

  • Hardware Suppliers. For example, most public clouds use Nvidia’s GPUs for AI workloads. Without these vendors, hyperscalers could not keep up with the demand.
  • Global Internet Infrastructure. Hyperscalers need massive bandwidth to connect users worldwide. They rely on telecom giants and undersea cables for internet backbone, plus partnerships with content delivery networks (CDNs) like Akamai to speed things up.
  • Software Ecosystems. Open-source tools like Linux and Kubernetes are part of the backbone of hyperscale operations.
  • Operations. Think about telemetry data and external health monitoring.

Innovation depends on ecosystems

The tech world moves fast. Open-source software and industry standards let hyperscalers innovate without reinventing the wheel. OCI’s adoption of Linux or Azure’s use of Kubernetes shows they thrive by tapping into shared knowledge, not isolating themselves. Going it alone would skyrocket costs. Designing custom chips, giving away or sharing operational control or skipping partnerships would drain billions – money better spent on new features, services or lower prices.

Hyperscale clouds are global by nature, and this includes Oracle Dedicated Region and Alloy. In return you get:

  • Innovation
  • Scalability
  • Cybersecurity
  • Agility
  • Reliability
  • Integration and Partnerships

Again, by nature and design, hyperscale clouds – even those hosted in your data center as private clouds (OCI Dedicated Region and Alloy) – are still tied to a hyperscaler’s software repositories, third-party hardware, operations personnel, and global infrastructure.

Sovereignty is real, autarky is a dream

Autarky sounds appealing: a hyperscale cloud that answers to no one, free from external risks or influence. Imagine OCI Dedicated Region or Oracle Alloy as self-contained kingdoms, untouchable by global chaos.

Autarky sacrifices expertise for control, and the result would be a weaker, slower and probably less secure cloud. Self-sufficiency is not cheap. Hyperscalers spend billions of dollars yearly on infrastructure, leaning on economies of scale and vendor deals. Tech moves at lightning speed. New GPUs drop yearly, software patches roll out daily (think about 1’000 updates/patches a month). Autarky means falling behind. It would turn your hyperscale cloud into a relic.

Please note, there are other solutions like air-gapped isolated cloud regions, but those are for a specific industry and set of customers.

From Cloud-First to Cloud-Smart to Repatriation

VMware Explore 2024 happened this week in Las Vegas. I think many people were curious about what Hock Tan, CEO of Broadcom, had to say during the general session. He delivered interesting statements and let everyone in the audience know that “the future of enterprise is private – private cloud, private AI, fueled by your own private data”. On social media, the following slide about “repatriation” made quite some noise:

VMware Explore 2024 Keynote Repatriation

The information on this slide came from Barclays’ CIO Survey in April 2024, and it says that 8 out of 10 CIOs today are planning to move workloads from the public cloud back to their on-premises data centers. It is interesting, and in some cases even funny, that other vendors in the hardware and virtualization business are chasing this ambulance now. Cloud migrations are dead, let us do reverse cloud migrations now. Hybrid cloud is dead, let us do hybrid multi-clouds now and provide workload mobility. My social media walls are full of such postings now. It seems Hock Tan presented the Holy Grail to the world.

Where does this change of mind come from? Why did only 43% of CIOs plan a reverse cloud migration during COVID-19, and now “suddenly” more than 80% do?

I could tell you the story now about cloud-first not being cool anymore, that organizations started to follow a smarter cloud approach, and then concluded that cloud migrations are still not happening based on their expectations (e.g., costs and complexity). And that it is time now to bring workloads back on-premises. It is not that simple.

I looked at Barclays’ CIO survey and the chart (figure 20 in the survey) that served as a source for Hock Tan’s slide:

Barclays CIO Survey April 2024 Cloud Repatriation

We must be very careful with our interpretation of the results. Just because someone is “planning” a reverse cloud migration, does it mean they are executing? And if they execute such an exercise, is this going to be correctly reflected in a future survey?

And which are the workloads and services that are brought back to an enterprise’s data center? Are we talking about complete applications? Or is it more about load balancers, security appliances, databases and storage, and specific virtual machines? And if we understand the workloads, what are the real reasons to bring them back? Figure 22 of the survey shows “Workloads that Respondents Intend to Move Back to Private Cloud / On-Premise from Public Cloud”:

Barclays CIO Survey April 2024 Workload to migrate

Okay, we have a little bit more context now. Just because some workloads are potentially migrated back to private clouds, what does it mean for public cloud vs. private cloud spend? Question #11 of the survey “What percentage of your workloads and what percentage of your total IT spend are going towards the public cloud, and how have those evolved over time?” focuses on this matter.

Barclays CIO Survey April 2024 Percentage of Workloads and Spend

My interpretation? Just because one slide or illustration talks about repatriation does not mean that the entire world is just doing reverse migrations now. Cloud migrations and reverse cloud migrations can happen at the same time. You could bring one application or some databases back on-premises but decide to move all your virtual desktops to the public cloud in parallel. We could still bring workloads back to our data center and increase public cloud spend.

Sounds like cloud-smart again, doesn’t it? Maybe I am an organization that realized that the applications A, B, C, and D shouldn’t run in Azure, AWS, Google, and Oracle anymore, but the applications W, X, Y, and Z are better suited for these hyperscalers.

What else?

I am writing about my views and my opinions here. There is more to share. During the pandemic, everything had to happen very quickly, and everyone suddenly had money to speed up migrations and application modernization projects. I think it is natural that everything slowed down a bit after this difficult and exhausting phase.

Some of the IT teams are probably still documenting all their changes and new deployments on an internal wiki, and their bosses have started to hire FinOps specialists to analyze their cloud spend. It is no shocking surprise to me that some of the financial goals have not been met, resulting in reverse cloud migrations a few years later.

But that is not all. Try to think about the past years. What else happened?

Yes, we almost forgot about Artificial Intelligence (AI) and Sovereign Clouds.

Before 2020, not many of us were thinking about sovereign clouds, data privacy, and AI.

Most enterprises are still hosting their data on-premises behind their own firewall. And some of this data is used to train or finetune models. We see (internal) chatbots popping up using Retrieval Augmented Generation (RAG), which delivers answers based on actual data and proprietary information.
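
As an illustration of the RAG pattern mentioned above, here is a deliberately naive sketch: retrieve the most relevant internal documents first, then ground the prompt in them. Real systems use vector embeddings and a vector database for retrieval; the documents, question, and word-overlap scoring below are invented for the example.

```python
import re

def words(text: str) -> set:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, docs: list, k: int = 2) -> list:
    """Rank documents by word overlap with the question, keep the top k."""
    return sorted(docs, key=lambda d: len(words(question) & words(d)),
                  reverse=True)[:k]

docs = [
    "Backups of the ERP database run nightly at 02:00",
    "The cafeteria menu changes every Monday",
    "ERP database restores must be requested via the platform portal",
]
question = "How do I restore the ERP database?"
context = retrieve(question, docs)

# The retrieved context is prepended to the prompt, so the model answers
# from proprietary data instead of guessing.
prompt = "Answer using only this context:\n" + "\n".join(context) + f"\n\nQ: {question}"
```

The key property is that the answer is grounded in data that never left the enterprise, which is exactly why RAG pairs so naturally with on-premises data.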

Okay. What else? 

Yep, there is more. There are new technologies and offerings available that were not here before. We just covered AI and ML (machine learning) workloads that became a potential cost or compliance concern.

The concept of sovereign clouds has gained traction due to increasing concerns about data sovereignty and compliance with local regulations.

The adoption of hybrid and hybrid multi-cloud strategies has been a significant trend from 2020 to 2024. Think about VMware’s Cloud Foundation approach with Azure, Google, Oracle, etc., AWS Outposts, Azure Stack, Oracle’s DRCC, or Nutanix’s equivalent offerings.

Enterprises started to upskill and train their people to deliver their own Kubernetes platforms.

Edge computing has emerged as a crucial technology, particularly for industries like manufacturing, telecommunications, and healthcare, where real-time data processing is critical.

Conclusion

Reverse cloud migrations are happening for many different reasons like cost management, performance optimization, data security and compliance, automation and operations, or because of lock-in concerns.

Yes, (cloud) repatriation became prominent, but I think this is just a reflection of the maturing cloud market – and not an ambulance.

And no, it is not a better moment to position your hybrid multi-cloud solutions, unless you understand the services and workloads that need to be migrated from one cloud to another. Just because some CIOs plan to bring some workloads back on-premises, does that mean they will actually do it? What about the sunk cost fallacy?

Perhaps IT leaders are going to be more careful in the future and are trying to find other ways for potential cost savings and strategic benefits to achieve their business outcomes – and keep their workloads in the cloud versus repatriating them.

Businesses are adopting a more nuanced workload-centric strategy.

What’s your opinion?

Distributed Hybrid Infrastructure Offerings Are The New Multi-Cloud

Since VMware has belonged to Broadcom, there has been less focus and messaging on multi-cloud or supercloud architectures. Broadcom has drastically changed the available offerings, and VMware Cloud Foundation is becoming the new vSphere. Additionally, we have seen big changes regarding the partnerships with hyperscalers (the Azures and AWSes of this world) and the VMware Cloud partners and providers. So, what happened to multi-cloud, and how come nobody (at Broadcom) talks about it anymore?

What is going on?

I do not know if it’s only me, but I do not see the term “multi-cloud” that often anymore. Do you? My LinkedIn feed is full of news about artificial intelligence (AI) and how Nvidia employees got rich. So, I have to admit that I lost track of hybrid clouds, multi-clouds, or hybrid multi-cloud architectures. 

Cloud-Inspired and Cloud-Native Private Clouds

It seems to me that the initial idea of multi-cloud has changed in the meantime and that private clouds are becoming platforms with features. Let me explain.

Organizations have built monolithic private clouds in their data centers for a long time. In software engineering, the word “monolithic” describes an application whose components are combined into one large, tightly coupled unit. To build data centers, we followed the same approach by combining different components like compute, storage, and networking. And over time, IT teams started to think about automation and security, and the integration of different solutions from different vendors.

The VMware messaging was always pointing in the right direction: they want to provide a cloud operating system for any hardware and any cloud (by using VMware Cloud Foundation). On top of that, they build abstraction layers and leverage a unified control plane (aka consistent automation and operations).

And since 2020, I have told all my customers that they need to think like a cloud service provider, get rid of silos, implement new processes, and define a new operating model. That is VMware by Broadcom’s messaging today, and this is where they and other vendors are headed: a platform with features that provide cloud services.

In other words, and this is my opinion, VMware Cloud Foundation is today a platform with different components like vSphere, vSAN, NSX, Aria, and so on. Tomorrow, it will still be called VMware Cloud Foundation, a platform that includes compute, storage, networking, automation, operations, and other features. No more separate product names, just capabilities and services like IaaS, CaaS, DRaaS, or DBaaS. You just choose the specs of the underlying hardware and networking, deploy your private clouds, and then start to build and consume your services.

Replace the name “VMware Cloud Foundation” in the last paragraph with AWS Outposts or Azure Stack. Do you see it now? Distributed unmanaged and managed hybrid cloud offerings with a (service) consumption interface on top.

That is the shift from monolithic data centers to cloud-native private clouds.

From Intercloud to Multi-Cloud

It is not the first time that I have written about interclouds, a concept not many of us know. In 2012, there was this idea that different clouds and vendors need to be interoperable and agree on certain standards and protocols. Think about interconnected private and public clouds, which allow you to provide VM mobility or application portability. Can you see the picture in front of you? What is the difference today in 2024?

In 2023, I truly believed that VMware had figured it out when they announced VMware Cloud on Equinix Metal (VMC-E). To me, VMC-E was different and special because of Equinix, which is capable of interconnecting different clouds and at the same time could provide a bare-metal-as-a-service (BMaaS) offering.

Workload Mobility and Application Portability

Almost 2 years ago, I started to write a book about this topic, because I wanted to figure out if workload mobility and application portability are things that enterprises are really looking for. I interviewed many CIOs, CTOs, chief architects, and engineers around the globe, and it became VERY clear: nobody was changing anything to make app portability a design requirement.

Almost all of the people I spoke to told me that a lot of things would have to happen to trigger a cloud exit, and therefore they see this as a nice-to-have capability that helps them move virtual machines or applications faster from one cloud to another.

VMware Workload Mobility

And I have also been told that a lift & shift approach does not provide any value for almost all of them.

But when I talked to developers and operations teams, the answers changed. Most of them did not know that a vendor could provide mobility or portability. Anyway, what has changed now?

Interconnected Multi-Clouds and Distributed Hybrid Clouds

I mentioned it already before. Some vendors have realized that they need to deliver a unified and integrated programmable platform with a control plane. Ideally, this control plane can be used on-premises, as a SaaS solution, or both. And according to Gartner, these are the leaders in this area (Magic Quadrant for Distributed Hybrid Infrastructure):

Gartner Magic-Quadrant-for-Distributed-Hybrid-Infrastructure

In my opinion, VMware and Nutanix are providing a hybrid multi-cloud approach.

AWS and Microsoft are providing hybrid cloud solutions. In Microsoft’s case, we see Azure Stack HCI, Azure Kubernetes Service (AKS incl. Hybrid AKS) and Azure Arc extending Microsoft’s Azure services to on-premises data centers and edge locations.

The only vendor that currently offers true multi-cloud capabilities is Oracle. Oracle has Dedicated Region Cloud@Customer (DRCC) and Roving Edge, but also partnerships with Microsoft and Google that allow customers to host Oracle databases in Azure and Google Cloud data centers. Both partnerships come with a cross-cloud interconnection.

That is one of the big differences and changes for me at the moment. Multi-cloud has become less about mobility or portability, a single global control plane, or the same Kubernetes distribution in all the clouds, and more about bringing different services from different cloud providers closer together.

This is the image I created for the VMC-E blog. Replace the words “AWS” and “Equinix” with “Oracle” and suddenly you have something that was not there before, an interconnected multi-cloud.

What’s Next?

Based on the conversations with my customers, it does not feel like public cloud migrations are happening faster than in 2020 or 2022, and we still see between 70 and 80% of workloads hosted on-premises. While we see customers who are interested in a cloud-first approach, we see many following a hybrid multi-cloud and/or multi-cloud approach. It is still about putting the right applications in the right cloud based on the right decisions. This has not changed.

But the narrative of such conversations has changed. We will see more conversations about data residency, privacy, security, gravity, proximity, and regulatory requirements. Then there are sovereign clouds.

Lastly, enterprises are going to deploy new platforms for AI-based workloads. But that could still take a while.

Final Thoughts

As enterprises continue to navigate the complexities mentioned above, the need for flexible, scalable, and secure infrastructure solutions will only grow. There are a few compelling solutions that bridge the gap between traditional on-premises systems and modern cloud environments.

And since most enterprises are still hosting their workloads on-premises, they have to decide if they want to stretch the private cloud to the public cloud, or the other way around. Both options can co-exist, but combining them would make the environment too big and too complex. What’s your conclusion?

VMware Explore 2023 US – Day 1 Announcements


VMware Explore 2023 US is currently happening in Las Vegas and I am onsite! Below you will find an overview of the information that was shared with us during the general session and solution keynotes.

Please be aware that this list is not complete, but it should include all the major announcements, with references and sources.

VMware Aria and VMware Tanzu

Starting this year, VMware Aria and VMware Tanzu form a single track at VMware Explore, and around April 2023 VMware introduced the develop, operate, and optimize (DOO) pillars for Aria and Tanzu.

VMware Tanzu DOO Framework

The following name changes and adjustments have been announced at VMware Explore US 2023:

  • The VMware Tanzu portfolio includes two new product categories (product family) called “Tanzu Application Platform” and “Tanzu Intelligence Services”.
  • Tanzu Application Platform includes the products Tanzu Application Platform (TAP) and Tanzu for Kubernetes Operations (TKO), and the new Tanzu Application Engine module.
  • Tanzu Intelligence Services – Aria Cost powered by CloudHealth, Aria Guardrails, Aria Insights, and Aria Migration will be rebranded as “Tanzu” and become part of this new Tanzu Intelligence Services category.
    • Tanzu Hub & Tanzu Graph
    • Tanzu CloudHealth
    • Tanzu Guardrails
    • Tanzu Insights (currently known as Aria Insights)
    • Tanzu Transformer (currently known as Aria Migration)
  • Aria Hub and Aria Graph are now called Tanzu Hub
  • VMware Cloud Packs are now called the VMware Cloud Editions (more information below)

Note: VMware expects to implement these changes by Q1 2024 at the latest.

The VMware Aria and Tanzu announcement and rebranding information can be found here.

Tanzu Mission Control

After announcing that Tanzu Mission Control supports lifecycle management of Amazon EKS clusters, VMware announced the expansion of these lifecycle management capabilities to Microsoft Azure AKS clusters as well.

Tanzu Application Engine (Private Beta)

VMware announced a new solution for the Tanzu Application Platform category.

VMware Tanzu for Kubernetes Operations is introducing Tanzu Application Engine, enhancing multi-cloud support with lifecycle management of Azure AKS clusters, and offering new Kubernetes FinOps (cluster cost) visibility. Tanzu Application Engine is a new abstraction that includes workload placement, the Kubernetes runtime, data services, libraries, and infrastructure resources, together with a set of policies and guardrails.

The Tanzu Application Engine announcement can be found here.

VMware RabbitMQ Managed Control Plane

I know a lot of customers who built an in-house RabbitMQ cloud service.

VMware just announced a beta program for the new VMware RabbitMQ Managed Control Plane, which allows enterprises to seamlessly integrate RabbitMQ into their existing cloud environment, offering flexibility and control over data streaming processes.

What’s New with VMware Aria?

Other Aria announcements can be found here.

What’s New with VMware Aria Operations at VMware Explore

Next-Gen Public Cloud Management with VMware Aria Automation

VMware Cloud Editions

What started as four different VMware Cloud Packs is now known as “VMware Cloud Editions”, with five different options:

VMware Cloud Editions

Here’s an overview of the different solutions/subscriptions included in each edition:

VMware Cloud Editions Connected Subscriptions

More VMware Cloud related announcements can be found here.

What’s New in vSphere 8 Update 2

As always, VMware is working on enhancing operational efficiency to make the life of an IT admin easier. And this gets better with the vSphere 8 U2 release.

In vSphere 8 Update 2, we are making significant improvements to several areas of maintenance to reduce, and in some cases eliminate, the need for downtime, so vSphere administrators can make those important maintenance changes without having a large impact on the wider vSphere infrastructure consumers.

These enhancements include reduced-downtime upgrades for vCenter, automatic vCenter LVM snapshots before patching and updating, non-disruptive certificate management, and reliable network configuration recovery after a vCenter is restored from backup.

More information about the vSphere 8 Update 2 release can be found here.

What’s New in vSAN 8 Update 2

At VMware Explore 2022, VMware announced the vSAN 8.0 release, which included the new Express Storage Architecture (ESA) that got even better with the recent vSAN 8.0 Update 1 release.

VMware vSAN Max – Petabyte-Scale Disaggregated Storage

VMware vSAN Max, powered by vSAN Express Storage Architecture, is a new vSAN offering in the vSAN family delivering petabyte-scale disaggregated storage for vSphere. With its new disaggregated storage deployment model, vSAN customers can scale storage elastically and independently from compute and deploy unified block, file, and partner-based object storage to maximize utilization and achieve lower TCO.

VMware vSAN Max

vSAN Max expands the use cases in which HCI can provide exceptional value. Disaggregation through vSAN Max provides flexibility to build infrastructure with the scale and efficiency required for non-linear scaling applications, such as storage-intensive databases, modern elastic applications with large datasets and more. Customers have a choice of deploying vSAN in a traditional model or a disaggregated model with vSAN Max, while still using a single control plane to manage both deployment options.

The vSAN Max announcement can be found here.

VMware Cloud on AWS

VMware announced a VMware Cloud on AWS Advanced subscription tier that will be available on i3en.metal and i4i.metal instance types only. This subscription will include advanced cloud management, networking and security features:

  • VMware NSX+ Services (NSX+ Intelligence, NDR capabilities, NSX Advanced Load Balancer)
  • vSAN Express Storage Architecture Support
  • VMware Aria Automation
  • VMware Aria Operations
  • VMware Aria Operations for Logs

Note: Existing deployments (existing SDDCs) will be entitled to these advanced cloud management, networking, and security features over time.

The VMware Cloud on AWS Advanced Subscription Tier FAQ can be found here.

Introduction of VMware NSX+

Last year, VMware introduced Project Northstar as technology preview:

Project Northstar is a SaaS-based networking and security offering that will empower NSX customers with a set of on-demand multi-cloud networking and security services, end-to-end visibility, and controls. Customers will be able to use a centralized cloud console to gain instant access to networking and security services, such as network and security policy controls, Network Detection and Response (NDR), NSX Intelligence, Advanced Load Balancing (ALB), Web Application Firewall (WAF), and HCX. It will support both private cloud and VMware Cloud deployments running on public clouds and enable enterprises to build flexible network infrastructure that they can spin up and down in minutes.

This year, VMware announced the initial availability of the NSX+ service. VMware NSX+ is a fully managed cloud-based service offering that allows networking, security, and operations teams to consume and operate VMware NSX services from a single cloud console across private and public clouds.

NSX+ Architectural Diagram

The following services are available:

  • NSX+ Policy Management: Provides unified networking and security policy management across multiple clouds and on-premises data centers.
  • NSX+ Intelligence (Tech Preview only): Provides a big data reservoir and a system for network and security analytics, delivering real-time visibility into application traffic, from basic traffic metrics all the way to deep packet inspection.
  • NSX+ NDR (Tech Preview only): Provides a scalable threat detection and response service offering for Security Operations Center (SOC) teams to triage real-time security threats to their data centers and clouds.

There are three different NSX+ and two NSX+ distributed firewall editions available:

  • NSX+ Standard. For organizations needing a basic set of NSX connectivity and security features for single location software-defined data center deployments.
  • NSX+ Advanced. For organizations needing advanced networking and security features that are applied to multiple sites. This edition also entitles customers to VMware NSX+ Advanced Load Balancer Cloud Services.
  • NSX+ Enterprise. For organizations needing all of the capability NSX has to offer. This edition also entitles customers to VMware NSX+ Advanced Load Balancer Cloud Services.
  • NSX+ Distributed Firewall. For organizations needing to implement access controls for east-west traffic within the network (micro-segmentation) but not focused on threat detection and prevention services.
  • NSX+ Distributed Firewall with Threat Prevention. For organizations needing access control and select threat prevention features for east-west traffic within the network.
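Micro-segmentation can be hard to picture in the abstract. The sketch below (plain Python, not the actual NSX+ API or policy schema; all group and service names are made up) shows the core idea behind a distributed firewall: every east-west flow is evaluated against group-based rules, and anything without an explicit allow falls back to a default action such as drop.

```python
# Illustrative sketch of east-west micro-segmentation. This is NOT the
# NSX+ policy schema; it only demonstrates the concept of group-based
# rules with a default action.

from dataclasses import dataclass

@dataclass
class Rule:
    source_group: str   # logical workload group, e.g. a tag
    dest_group: str
    service: str        # e.g. "TCP/3306"
    action: str         # "ALLOW" or "DROP"

def evaluate(rules, default_action, source_group, dest_group, service):
    """Return the action of the first matching rule, else the default."""
    for r in rules:
        if (r.source_group, r.dest_group, r.service) == (source_group, dest_group, service):
            return r.action
    return default_action

# Example policy: only the app tier may reach the database tier on MySQL.
rules = [Rule("app-tier", "db-tier", "TCP/3306", "ALLOW")]

print(evaluate(rules, "DROP", "app-tier", "db-tier", "TCP/3306"))  # ALLOW
print(evaluate(rules, "DROP", "web-tier", "db-tier", "TCP/3306"))  # DROP
```

The point of the default-drop fallback is exactly what the Distributed Firewall editions target: lateral movement inside the network is blocked unless a policy explicitly permits it.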

An NSX+ feature overview can be found here.

Note: Currently, NSX+ only supports NSX on-premises deployments (NSX 4.1.1 or later) and VMware Cloud on AWS

VMware Cloud Foundation

VMware announced a few innovations for H2 2023, which includes the support for Distributed Service Engine (DSE aka Project Monterey), vSAN ESA support, and NSX+.


Generative AI – VMware Private AI Foundation with Nvidia

VMware and Nvidia’s CEOs announced VMware Private AI Foundation as the result of their longstanding partnership.

Built on VMware Cloud Foundation, this integrated solution with Nvidia will enable enterprises to customize models and run generative AI applications, including intelligent chatbots, assistants, search, and summarization.


Anywhere Workspace Announcements

At VMware Explore 2022, VMware shared its vision for autonomous workspaces.

Autonomous workspace is a concept (not an individual product) that is our north star for the future of end-user computing. It means going beyond creating a unified workspace with basic automations, to analyzing huge amounts of data with AI and machine learning to drive more advanced, context-aware automations. This leads to a workspace that can be considered self-configuring, self-healing, and self-securing.

VMware continued working on the realization of this vision and came up with a lot of announcements, which can be found here.

Other Announcements

Please find below some announcements that VMware shared with us during the SpringOne event or before and after the general session on August 22nd, 2023:

Momentum in the Cloud: Crafting Your Winning Strategy with VMware Cloud


The time is right for VMware Cloud! In the rapidly evolving landscape of modern business, embracing the cloud has become essential for organizations seeking to stay competitive and agile. The allure of increased scalability, cost-efficiency, and flexibility has driven enterprises of all sizes to embark on cloud migration journeys. However, the road to successful cloud adoption often comes with challenges. Slow and failed migrations have given rise to what experts call the “cloud paradox,” where the very technology meant to accelerate progress ends up hindering it.

As businesses navigate this paradox, finding the right strategy to harness the full potential of the cloud becomes paramount. One solution that has emerged as a beacon of hope in this complex landscape is VMware Cloud. With its multi-cloud approach, also known as supercloud, VMware Cloud gives organizations the ability to craft a winning strategy that capitalizes on momentum while minimizing the risks associated with cloud migrations.

The Experimental Phase is Over

Is it really though? The experimental phase was an exciting journey of discovery for organizations exploring the potential of multi-cloud environments. Companies have tried different cloud providers, tested a variety of cloud services, and experimented with workloads and applications in the cloud. It allowed them to understand the benefits and drawbacks of each cloud platform, assess performance, security, and compliance aspects, and determine how well each cloud provider aligns with their unique business needs.

The Paradox of Cloud and Choice

With an abundance of cloud service providers, each offering distinct features and capabilities, decision-makers can find themselves overwhelmed with options. The quest to optimize workloads across multiple clouds can lead to unintended complexities, such as increased operational overhead, inconsistent management practices/tools, and potential vendor lock-in.

Furthermore, managing data and applications distributed across various cloud environments can create challenges related to security, compliance, and data sovereignty. The lack of standardized practices and tools in a multi-cloud setup can also hinder collaboration and agility, negating the very advantages that public cloud environments promise to deliver.

Multi-Cloud Complexity

(Public) cloud computing is often praised for its cost-efficiency, enabling businesses to pay for resources on demand and avoid capital expenditures on physical infrastructure. However, the cloud paradox reveals that organizations can inadvertently accumulate hidden costs, such as data egress fees, storage overage charges, and the cost of cloud management tools. Without careful planning and oversight, the cloud’s financial benefits might be offset by unexpected expenses.
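To make the hidden-cost point concrete, here is a back-of-the-envelope sketch in Python. The per-GB rates are assumptions for illustration only, not actual provider pricing; the shape of the calculation is what matters.

```python
# Rough estimate of "hidden" cloud costs such as egress and storage
# overage. Rates below are illustrative assumptions, NOT real pricing.

EGRESS_RATE_PER_GB = 0.09       # assumed internet egress rate, USD/GB
STORAGE_OVERAGE_PER_GB = 0.02   # assumed overage rate, USD per GB-month

def monthly_hidden_costs(egress_gb, overage_gb):
    """Sum of egress and storage-overage charges for one month."""
    return egress_gb * EGRESS_RATE_PER_GB + overage_gb * STORAGE_OVERAGE_PER_GB

# 10 TB of egress plus 2 TB of storage overage in a single month:
print(round(monthly_hidden_costs(10_000, 2_000), 2))  # 940.0
```

Nearly $1,000 a month from two line items that rarely appear in the initial business case is exactly how the cloud’s financial benefits get offset by unexpected expenses.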

Why Cloud Migrations are Slowing Down

Failed expectations. The first reasons my customers mention are cost and complexity.

While the cloud offers potential cost savings in the long run, the initial investment and perceived uncertainty in calculating the total cost of ownership can deter some organizations from moving forward with cloud migrations. Budget constraints and difficulties in accurately estimating and analyzing cloud expenses lead to a cautious approach to cloud adoption.

One significant factor impeding cloud migrations is the complexity of the process itself. Moving entire infrastructures, applications, and data to the cloud requires thorough planning, precise execution, and in-depth knowledge of cloud platforms and technologies. Many organizations lack the in-house expertise to handle such a massive undertaking, leading to delays and apprehensions about potential risks.

Other underestimated reasons are legacy systems and applications that have been in use for many years and are often deeply ingrained within an organization’s operations. Migrating these systems to the cloud may require extensive reconfiguration or complete redevelopment, making the migration process both time-consuming and resource-intensive.

Reverse Cloud Migrations

While I am not advocating for repatriation, I would like to share the idea that companies should think about workload mobility, application portability, and repatriation upfront. You can optimize your cloud spend indefinitely, but if cloud costs start to outpace your transformation plans or revenue growth, it is already too late.
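The point about cloud costs outpacing growth can be made concrete with a toy projection (all figures are assumptions): if spend compounds faster than the budget it is measured against, there is a predictable month where the lines cross, and that is the moment to have already thought about portability.

```python
# Toy projection: find the first month where monthly cloud spend,
# compounding at one rate, exceeds a budget compounding at a slower
# rate. All numbers are illustrative assumptions.

def first_month_over_budget(cloud_spend, cloud_growth, budget, budget_growth, horizon=120):
    """Return the 1-based month where spend first exceeds budget, or None."""
    for month in range(1, horizon + 1):
        if cloud_spend > budget:
            return month
        cloud_spend *= 1 + cloud_growth
        budget *= 1 + budget_growth
    return None

# Spend starts below budget but grows 5%/month vs. 1%/month:
print(first_month_over_budget(80_000, 0.05, 100_000, 0.01))  # 7
```

Even starting 20% under budget, the crossover arrives in months, not years, which is why mobility and repatriation need to be part of the plan before migration, not after.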

Embracing a Smart Approach with VMware Cloud

To address the cloud paradox and maximize the potential of multi-cloud environments, VMware is embracing the cloud-smart approach. This approach is designed to empower organizations with a unified and consistent platform to manage and operate their applications across multiple clouds.

VMware Cloud-Smart

  • Single Cloud Operating Model: A single operating model that spans private and public clouds. This consistency simplifies cloud management, enabling seamless workload migration and minimizing the complexities associated with multiple cloud providers.
  • Flexible Cloud Choice: VMware allows organizations to choose the cloud provider that best suits their specific needs, whether it is a public cloud or a private cloud infrastructure. This freedom of choice ensures that businesses can leverage the unique advantages of each cloud while maintaining operational consistency.
  • Streamlined Application Management: A cloud-smart approach centralizes application management, making it easier to deploy, secure, and monitor applications across multi-cloud environments. This streamlines processes, enhances collaboration, and improves operational efficiency.
  • Enhanced Security and Compliance: By adopting VMware’s security solutions, businesses can implement consistent security policies across all clouds, ensuring data protection and compliance adherence regardless of the cloud provider.

Why VMware Cloud?

This year I realized that a lot of VMware customers came back to me because their cloud-first strategy did not work as expected. Costs exploded, migrations failed, and their project timelines changed many times. Also, partners like Microsoft and AWS want to collaborate more with VMware because the public cloud giants cannot deliver as expected.

Customers and public cloud providers did not see any value in lifting and shifting workloads from on-premises data centers to the public cloud. Now the exact same people, companies, and partners (AWS, Microsoft, Google, Oracle, etc.) are back to ask VMware for support and for solutions that can speed up cloud migrations while reducing risks.

This is why I am always suggesting a “lift and learn” approach, which removes pressure and reduces costs.

Organizations view the public cloud as a highly strategic platform for digital transformation. Gartner forecasted in April 2023 that Infrastructure-as-a-Service (IaaS) is going to experience the highest spending growth in 2023, followed by PaaS.

It is said that companies spend most of their money on compute, storage, and data services when using Google Cloud, AWS, and Microsoft Azure. Guess what: VMware Cloud is a perfect fit for IaaS-based workloads (instead of using AWS EC2, Google Compute Engine, or Azure Virtual Machines)!

Who doesn’t like the idea of cost savings and faster cloud migrations?

Disaster Recovery and FinOps

When you migrate workloads to the cloud, you have to rethink your disaster recovery and ransomware recovery strategy. Have a look at VMware’s DRaaS (Disaster-Recovery-as-a-Service) offering, which includes ransomware recovery capabilities as well.

If you want to analyze and optimize your cloud spend, try out VMware Aria Cost powered by CloudHealth.

Final Words

VMware’s approach is not right for everyone, but it is a future-proof cloud strategy that enables organizations to adapt as business needs evolve. The cloud-smart approach offers a compelling solution, providing businesses with a unified, consistent, and flexible platform to succeed in multi-cloud environments. By embracing this approach, organizations can overcome the complexities of multi-cloud, unlock new possibilities, and set themselves on a path to cloud success.

And you still get the same access to the native public cloud services.