Can a Unified Multi-Cloud Inventory Transform Cloud Management?

When we spread our workloads across clouds like Oracle Cloud, AWS, Azure, Google Cloud, maybe even IBM, or smaller niche players, we knowingly accept complexity. Each cloud speaks its own language, offers its own services, and maintains its own console. What if there were a central place where we could see everything: every resource, every relationship, across every cloud? A place that lets us truly understand how our distributed architecture lives and breathes?

I find myself wondering if we could one day explore a tool or approach that functions as a multi-cloud inventory, keeping track of every VM, container, database, and permission – regardless of the platform. Not because it’s a must-have today, but because the idea sparks curiosity: what would it mean for cloud governance, cost transparency, and risk reduction if we had this true single pane of glass?

Who feels triggered now because I said “single pane of glass”? 😀 Let’s move on!

Could a Multi-Cloud Command Center Change How We Visualize Our Environment?

Let’s imagine it: a clean interface, showing not just lists of resources, but the relationships between them. Network flows across cloud boundaries. Shared secrets between apps on “cloud A” and databases on “cloud B”. Authentication tokens moving between clouds.

What excites me here isn’t the dashboard itself, but the possibility of visualizing the hidden links across clouds. Instead of troubleshooting blindly, or juggling a dozen consoles, we could zoom out for a bird’s-eye view. Seeing in one place how data and services crisscross providers.

I don’t know if we’ll get there anytime soon (or if such a solution already exists) but exploring the idea of a unified multi-cloud visualization tool feels like an adventure worth considering.

Multi-Cloud Search and Insights

When something breaks, when we are chasing a misconfiguration, or when we want to understand where we might be exposed, it often starts with a question: Where is this resource? Where is that permission open?

What if we could type that question once and get instant answers across clouds? A global search bar that could return every unencrypted public bucket or every server with a certain tag, no matter which provider it’s on.

Wouldn’t it be interesting if that search also showed contextual information: connected resources, compliance violations, or cost impact? It’s a thought I keep returning to because the journey toward proactive multi-cloud operations might start with simple, unified answers.
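
To make this a bit more concrete, here is a minimal Python sketch of what such a unified query could look like, assuming collectors have already normalized each provider’s resources into a common schema. The Resource fields and the example inventory are invented for illustration and do not correspond to any real product’s data model.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    # Hypothetical normalized schema; real providers expose far richer metadata.
    cloud: str                 # e.g. "aws", "azure", "oci"
    kind: str                  # e.g. "bucket", "vm", "database"
    name: str
    encrypted: bool = True
    public: bool = False
    tags: dict = field(default_factory=dict)

# Example inventory, as it might look after per-cloud collectors have done their work.
inventory = [
    Resource("aws",   "bucket", "logs-archive", encrypted=False, public=True),
    Resource("azure", "vm",     "web-01", tags={"env": "prod"}),
    Resource("oci",   "bucket", "backups"),
]

def search(predicate):
    """Ask one question, get answers across every cloud at once."""
    return [r for r in inventory if predicate(r)]

# "Every unencrypted public bucket, no matter which provider it is on."
exposed = search(lambda r: r.kind == "bucket" and r.public and not r.encrypted)

# "Every server with a certain tag."
prod_vms = search(lambda r: r.kind == "vm" and r.tags.get("env") == "prod")

for r in exposed + prod_vms:
    print(f"{r.cloud}: {r.kind} {r.name}")
```

The filtering itself is trivial; the hard part, and the real value, lies in the collectors that keep that normalized inventory accurate and current across providers.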

Could a True Multi-Cloud App Require This Kind of Unified Lens?

Some teams are already building apps that stretch across clouds: an API front-end in one provider, authentication in another, ML workloads on specialized platforms, and data lakes somewhere else entirely. These aren’t cloud-agnostic apps, they are “cloud-diverse” apps. Purpose-built to exploit best-of-breed services from different providers.

That makes me wonder: if an app inherently depends on multiple clouds, doesn’t it deserve a control plane that’s just as distributed? Something that understands the unique role each cloud plays, and how they interact, in one coherent operational picture?
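
Purely as a thought experiment, such a control plane would at least need a machine-readable notion of which cloud plays which role for a given app. A small Python sketch, with made-up names and fields:

```python
# Hypothetical descriptor of one "cloud-diverse" app; every name below is invented.
app = {
    "name": "storefront",
    "components": {
        "api-frontend":   {"cloud": "aws",   "service": "managed-containers"},
        "authentication": {"cloud": "azure", "service": "identity"},
        "ml-inference":   {"cloud": "oci",   "service": "gpu-compute"},
        "data-lake":      {"cloud": "gcp",   "service": "analytics"},
    },
}

def clouds_in_blast_radius(app):
    """Which providers does this single app depend on?"""
    return sorted({c["cloud"] for c in app["components"].values()})

print(clouds_in_blast_radius(app))   # ['aws', 'azure', 'gcp', 'oci']
```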

I don’t have a clear answer, but I can’t help thinking about how multi-cloud-native apps might need true multi-cloud-native management.

VMware Aria Hub and Graph – Was It a Glimpse of the Future?

Not so long ago, VMware introduced Aria Hub and Aria Graph with an ambitious promise: a single place to collect and normalize resource data from all major clouds, connect it into a unified graph, and give operators a true multi-cloud inventory and control plane. It was one of the first serious attempts to address the challenge of understanding relationships between cloud resources spread across different providers.

The idea resonated with anyone who has struggled to map sprawling cloud estates or enforce consistent governance policies in a multi-cloud world. A central graph of every resource, dependency, and configuration sounded like a game-changer. Not only for visualization, but also for powerful queries, security insights, and cost management.

But when Broadcom acquired VMware, they shifted focus away from VMware’s SaaS portfolio. Many SaaS-based offerings were sunset or sidelined, including Aria Hub and Aria Graph, effectively burying the vision of a unified multi-cloud inventory platform along with them.

I still wonder: did VMware Aria Hub and Graph show us a glimpse of what multi-cloud operations could look like if we dared to standardize resource relationships across clouds? Or did it simply arrive before its time, in an industry not yet ready to embrace such a radical approach?

Either way, it makes me even more curious about whether we might one day revisit this idea and how much value a unified resource graph could unlock in a world where multi-cloud complexity continues to grow.

Final Thoughts

I don’t think there’s a definitive answer yet to whether we need a unified multi-cloud inventory or command center today. Some organizations already have mature processes and tooling that work well enough, even if they are built on scripts, spreadsheets, or point solutions glued together. But as multi-cloud strategies evolve, and as more teams start building apps that intentionally spread across multiple providers, I find myself increasingly curious about whether we will see renewed demand for a shared data model of our entire cloud footprint.

Because with each new cloud we adopt, complexity grows exponentially. Our assets scatter, our identities and permissions multiply, and our ability to keep track of everything by memory or siloed dashboards fades. Even something simple, like understanding “what resources talk to this database?” becomes a detective story across clouds.
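
That detective story is essentially a graph traversal. Here is a small, self-contained Python sketch of how a unified resource graph could answer it, assuming the cross-cloud relationships have already been collected; the resource names and edges are made up.

```python
# Hypothetical resource graph: nodes are cloud resources, edges are observed
# dependencies (network flows, shared secrets, tokens). All names are invented.
edges = {
    "aws:lambda/checkout":  ["azure:db/orders"],
    "gcp:gke/frontend":     ["aws:lambda/checkout", "azure:db/orders"],
    "oci:vm/batch-reports": ["azure:db/orders"],
    "azure:db/orders":      [],
}

def talks_to(target):
    """Return every resource with a direct or indirect path to `target`."""
    callers = set()
    changed = True
    while changed:
        changed = False
        for node, deps in edges.items():
            if node in callers or node == target:
                continue
            if target in deps or callers & set(deps):
                callers.add(node)
                changed = True
    return callers

print(sorted(talks_to("azure:db/orders")))
# ['aws:lambda/checkout', 'gcp:gke/frontend', 'oci:vm/batch-reports']
```

In a real system the edges would come from flow logs, IAM bindings, and configuration data, which is exactly the part that is hard to build and keep current.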

A solution that offers unified visibility, context, and even policy controls feels almost inevitable if multi-cloud architectures continue to accelerate. And yet, I’m also aware of how hard this problem is to solve. Each cloud provider evolves quickly, their APIs change, and mapping their semantics into a single, consistent model is an enormous challenge.

That’s why, for now, I see this more as a hypothesis. An idea to keep exploring rather than a clear requirement. I’m fascinated by the thought of what a central multi-cloud “graph” could unlock: faster investigations, smarter automation, tighter security, and perhaps a simpler way to make sense of our expanding environments.

Whether we build it ourselves, wait for a vendor to try again, or discover a new way to approach the problem, I’m eager to see how the industry experiments with this space in the years ahead. Because in the end, the more curious we stay, the better prepared we’ll be when the time comes to simplify the complexity we’ve created.

Why Emulating the Cloud Isn’t the Same as Being One

It’s easy to mistake progress for innovation. VMware Cloud Foundation 9.0 (VCF) introduces long-awaited features like VPC-style networking, developer-centric automation, and bundled services. But let’s be honest: this is not the future of cloud. This is infrastructure catching up to where the public cloud world already was ten years ago.

Example: Moving some concepts and features from VMware Cloud Director (vCD) to Aria Automation and then calling it VCF Automation is not innovative either. It was the right thing to do, as vCD and Aria Automation (formerly known as vRealize Automation) shared many overlapping features and concepts. In other words, we can expect VCF Automation to be the future and vCD to be retired in a few years.

Anyway, there’s a pattern here. Platform vendors continue to position themselves as “private cloud providers”, yet the experience they offer remains rooted in managing hardware, scaling clusters, and applying patches. Whether it’s VCF or Nutanix, the story is always the same: it’s better infrastructure. But that’s the problem. It’s still infrastructure.

In contrast, the real shift toward cloud doesn’t start with software-defined storage or NSX overlay networks. It starts with the service model. That’s what makes cloud work. That’s what makes it scalable, elastic, and developer-first. That’s what customers actually need.

Let’s unpack where VCF 9.0 lands and why it still misses the mark.

What’s New in VCF 9.0. And What’s Not.

Broadcom deserves credit for moving VCF closer to what customers have been asking for since at least 2020. The platform now includes a proper developer consumption layer, integrated VPC-style networking, a simplified control plane, and aligned software versions for different products. Yes, it feels more like a cloud. It automates more, hides more complexity, and makes day 2 operations less painful. All good steps!

The new virtual private cloud constructs let teams carve out self-contained network domains – complete with subnets, NAT, firewall rules, and load balancers – all provisioned from a central interface. That’s a meaningful upgrade from the old NSX workflows. Now, transit gateways can be deployed automatically, reducing the friction of multi-domain connectivity. The whole setup is better, simpler, and more cloud-like. Well done.

On the consumption side, there’s a proper push toward unified APIs. Terraform support, policy-as-code blueprints in YAML, and native Kubernetes provisioning give developers a way to consume infrastructure more like they would in a hyperscaler environment. VCF customers can onboard teams faster, and the lifecycle engine behind the scenes handles upgrades, certificates, and best-practice configurations with far less manual effort.
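
To illustrate what consuming infrastructure "more like a hyperscaler" can mean in practice, here is a small Python sketch of a declarative, policy-checked network blueprint. The field names and the policy rule are invented for illustration; they are not VCF’s actual schema or API.

```python
import json

# Hypothetical desired-state blueprint for a VPC-style network domain.
blueprint = {
    "vpc": {
        "name": "team-payments",
        "cidr": "10.20.0.0/16",
        "subnets": [
            {"name": "web", "cidr": "10.20.1.0/24", "public": True},
            {"name": "db",  "cidr": "10.20.2.0/24", "public": False},
        ],
        "nat": True,
        "firewall_rules": [
            {"from": "web", "to": "db", "port": 5432, "allow": True},
        ],
        "load_balancer": {"subnet": "web", "port": 443},
    }
}

def validate(bp):
    """A tiny policy-as-code check: the load balancer must sit in a public subnet."""
    subnets = {s["name"]: s for s in bp["vpc"]["subnets"]}
    lb_subnet = bp["vpc"]["load_balancer"]["subnet"]
    assert subnets[lb_subnet]["public"], "load balancer must live in a public subnet"

validate(blueprint)
print(json.dumps(blueprint, indent=2))  # what a team would version, review, and submit
```

The point is the workflow: developers describe the desired end state, a policy check gates it, and the platform, not a human, is responsible for making reality match.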

So yes, VCF 9.0 is a big step forward for Broadcom and for existing VMware customers. But let’s put that progress into perspective.

Cloud Features Delivered Years Too Late

The features we’re seeing now – developer APIs, VPCs, self-service provisioning, built-in security, elastic-like networking – these aren’t breakthroughs. They are basic expectations. Public cloud providers like AWS and Azure introduced the VPC concept more than 10 years ago. Public clouds have offered full-stack policy automation, service mesh observability, and integrated load balancing for most of the last decade.

What VCF 9.0 delivers in 2025 is essentially what existing on-premises customers were asking for back in 2020.

The bigger concern is that VMware has always been the benchmark for enterprise-grade virtualization and private infrastructure. When customers bought into VCF years ago, they expected these capabilities then, not now. Broadcom has simply shipped the version of VCF that many customers assumed was already on the roadmap, five years ago.

And even now, many of the services (add-ons) in VCF 9.0, such as Avi load balancing, vDefend IDS/IPS, integrated databases, and AI services, are optional components, mostly manually deployed, and not fully elastic or usage-based. These are integrations, not native services. You still need to operate them.

The Core Problem: It’s Still Infrastructure-Led

That’s the real difference. VCF and Nutanix remain infrastructure-led platforms. They require hardware planning, capacity management, lifecycle orchestration, and dependency tracking. Yes, they have APIs. Yes, they support Kubernetes. But at their core, they are platforms you need to own, operate, and scale yourself.

Cloud, on the other hand, is not about owning anything. It’s about consuming outcomes. VCF 9.0 and others are just not there yet.

The Illusion of a Private Cloud

This is why it’s time to call out the difference. Just because something looks like cloud – has some APIs, supports Kubernetes, uses words like “consumption” and “developer self-service” – doesn’t mean it actually behaves like cloud.

The illusion of a “private cloud” is seductive. You get to keep control. You get to use familiar tools. But control also means responsibility. Familiar tools mean legacy thinking. And a so-called private cloud, in most cases, just means more complex infrastructure with higher expectations.

That’s not transformation. That’s rebranding.

What VCF 9.0 delivers is an important evolution of VMware’s private infrastructure platform. But let’s not confuse that with cloud. Broadcom has moved in the right direction. They have shipped what customers needed years ago. But they are still delivering (virtual) infrastructure. Just better packaged.

Final Thought

You don’t transform your IT strategy by modernizing clusters. You transform it by changing how you consume and operate technology.

So the question isn’t whether your stack looks like “the cloud”. The question is whether you can stop operating infrastructure and start consuming services.

That’s the real line between emulating the cloud and actually being one. And as of today, VCF (and Nutanix) are still on the other side of that line. It’s not good. It’s not bad. It is what it is.

The State of Application Modernization 2025

Every few weeks, I find myself in a conversation with customers or colleagues where the topic of application modernization comes up. Everyone agrees that modernization is more important than ever. The pressure to move faster, build more resilient systems, and increase operational efficiency is not going away.

But at the same time, when you look at what has actually changed since 2020… it is surprising how much has not.

We are still talking about the same problems: legacy dependencies, unclear ownership, lack of platform strategy, organizational silos. New technologies have emerged, sure. AI is everywhere, platforms have matured, and cloud-native patterns are no longer new. And yet, many companies have not even started building the kind of modern on-premises or cloud platforms needed to support next-generation applications.

It is like we are stuck between understanding why we need to modernize and actually being able to do it.

Remind me, why do we need to modernize?

When I joined Oracle in October 2024, some people reminded me that most of us do not know why we are where we are. One could say that it is not important to know that. In my opinion, it very much is. Something has fundamentally changed in the past that has led us to our situation.

In the past, when we moved from physical servers to virtual machines (VMs), apps did not need to change. You could lift and shift a legacy app from bare metal to a VM and it would still run the same way. The platform changed, but the application did not care. It was an infrastructure-level transformation without rethinking the app itself. So, the transition (P2V) of an application was very smooth and not complicated.

But now? The platform demands change.

Cloud-native platforms like Kubernetes, serverless runtimes, or even fully managed cloud services do not just offer a new home. They offer a whole new way of doing things. To benefit from them, you often have to re-architect how your application is built and deployed.

That is the reason why enterprises have to modernize their applications.
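
A tiny, simplified example of what that re-architecting often looks like in code: instead of assuming a fixed server and local configuration, the app reads everything it needs from its environment, so the platform can freely schedule, scale, and replace instances. The variable names below are illustrative only.

```python
import os

# Lift-and-shift style: the app assumed "its" server, a local config file, and local state.
# Cloud-native style: any replica on any node (or any cloud) can start and be replaced,
# because configuration and identity are injected from the outside.
DB_URL = os.environ.get("DATABASE_URL", "postgresql://localhost:5432/app")
REPLICA_ID = os.environ.get("HOSTNAME", "local")  # injected by Kubernetes, for example

def handle_request():
    # No local state, no assumptions about which machine this runs on.
    return {"served_by": REPLICA_ID, "db": DB_URL.rsplit("/", 1)[-1]}

print(handle_request())
```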

What else is different?

User expectations, business needs, and competitive pressure have exploded as well. Companies need to:

  • Ship features faster
  • Scale globally
  • Handle variable load
  • Respond to security threats instantly
  • Reduce operational overhead

A Quick Analogy

Think of it like this: moving from physical servers to VMs was like transferring your VHS tapes to DVDs. Same content, just a better format.

But app modernization? That is like going from DVDs to Netflix. You do not just change the format, but you rethink the whole delivery model, the user experience, the business model, and the infrastructure behind it.

Why Is Modernization So Hard?

If application modernization is so powerful, why is everyone not done with it already? The truth is, it is complex, disruptive, and deeply intertwined with how a business operates. Organizations often underestimate how much effort it takes to replatform systems that have evolved over decades. Here are six common challenges companies face during modernization:

  1. Legacy Complexity – Many existing systems are tightly coupled, poorly documented, and full of business logic buried deep in spaghetti code. 
  2. Skill Gaps – Moving to cloud-native tech like Kubernetes, microservices, or DevOps pipelines requires skills many organizations do not have in-house. Upskilling or hiring takes time and money.
  3. Cultural Resistance – Modernization often challenges organizational norms, team structures, and approval processes. People do not always welcome change, especially if it threatens familiar workflows.
  4. Data Migration & Integration – Legacy apps are often tied to on-prem databases or batch-driven data flows. Migrating that data without downtime is a massive undertaking.
  5. Security & Compliance Risks – Introducing new tech stacks can create blind spots or security gaps. Modernizing without violating regulatory requirements is a balancing act.
  6. Cost Overruns – It is easy to start a cloud migration or container rollout only to realize the costs (cloud bills, consultants, delays) are far higher than expected.

Modernization is not just a technical migration. It’s a transformation of people, process, and platform (technology). That is why it is hard and why doing it well is such a competitive advantage!

Technical Debt Is Also Slowing Things Down

Also known as the silent killer of velocity and innovation: technical debt

Technical debt is the cost of choosing a quick solution now instead of a better one that would take longer. We have all seen/done it. 🙂 Sometimes it is intentional (you needed to hit a deadline), sometimes it is unintentional (you did not know better back then). Either way, it is a trade-off. And just like financial debt, it accrues interest over time.

Here is the tricky part: technical debt usually doesn’t hurt you right away. You ship the feature. The app runs. Management is happy.

But over time, debt compounds:

  • New features take longer because the system is harder to change

  • Bugs increase because no one understands the code

  • Every change becomes risky because there is no test safety net

Eventually, you hit a wall where your team is spending more time working around the system than building within it. That is when people start whispering: “Maybe we need to rewrite it.”  Or they just leave your company.

Let me say it: Cloud Can Also Introduce New Debt

Cloud-native architectures can reduce technical debt, but only if used thoughtfully.

You can still:

  • Over-complicate microservices

  • Abuse Kubernetes without understanding it

  • Ignore costs and create “cost debt”

  • Rely on too many services and lose track

Use the cloud to eliminate debt by simplifying, automating, and replacing legacy patterns, not just lifting them into someone else’s data center.

It Is More Than Just Moving to the Cloud 

Modernization is about upgrading how your applications are built, deployed, run, and evolved, so they are faster, cheaper, safer, and easier to change. Here are some core areas where I have seen organizations making real progress:

  • Improving CI/CD. You can’t build modern applications if your delivery process is stuck in 2010.
  • Data Modernization. Migrate from monolithic databases to cloud-native, distributed ones.
  • Automation & Infrastructure as Code. It is the path to resilience and scale (see the sketch after this list).
  • Serverless Computing. It is the “don’t worry about servers” mindset and ideal for many modern workloads.
  • Containerizing Workloads. Containers are a stepping stone to microservices, Kubernetes, and real DevOps maturity.
  • Zero-Trust Security & Cybersecurity Posture. One of the biggest priorities at the moment.
  • Cloud Migration. It is not about where your apps run; it is about how well they run there. “The cloud” should make you faster, safer, and leaner.
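
As a minimal sketch of the Infrastructure as Code idea referenced above: infrastructure is described as data, and an automation loop converges the real environment toward that description. The resource names and states are made up; in practice tools like Terraform or Kubernetes operators do the applying.

```python
# Desired state, as it would live in version control.
desired = {"web-01": "running", "web-02": "running", "db-01": "running"}

# Observed state, as reported by the platform: one VM is missing, one has drifted.
actual = {"web-01": "running", "db-01": "stopped"}

def reconcile(desired, actual):
    """Compute the actions needed to make the actual state match the desired state."""
    actions = []
    for name, state in desired.items():
        if actual.get(name) != state:
            actions.append(f"set {name} -> {state}")
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

for action in reconcile(desired, actual):
    print(action)
# set web-02 -> running
# set db-01 -> running
```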

As you can see, application modernization is not one thing, it’s many things. You do not have to do all of these at once. But if you are serious about modernizing, these points (and more) must be part of your blueprint. Modernization is a mindset.

Why (replatforming) now?

There are a few reasons why application modernization projects are increasing:

  • The maturity of cloud-native platforms: Kubernetes, managed databases, and serverless frameworks have matured to the point where they can handle serious production workloads. It is no longer “bleeding edge”.
  • DevOps and Platform Engineering are mainstream: We have shifted from siloed teams to collaborative, continuous delivery models. But that only works if your platform supports it.
  • AI and automation demand modern infrastructure: To leverage modern AI tools, event-driven data, and real-time analytics, your backend can’t be a 2004-era database with a web front-end duct-taped to it.

Conclusion

There is no longer much debate: (modern) applications are more important than ever. Yet despite all the talk around cloud-native technologies and modern architectures, the truth is that many organizations are still trying to catch up and work hard to modernize not just their applications, but also the infrastructure and processes that support them.

The current progress is encouraging, and many companies have learned from the experience of their first modernization projects.

One thing that is becoming harder to ignore is how much the geopolitical situation is starting to shape decisions around application modernization and cloud adoption. Concerns around data sovereignty, digital borders, national cloud regulations, and supply chain security are no longer just legal or compliance issues. They are shaping architecture choices.

Some organizations are rethinking their cloud and modernization strategies, looking at multi-cloud or hybrid models to mitigate risk. Others are delaying cloud adoption due to regional uncertainty, while a few are doubling down on local infrastructure to retain control. It is not just about performance or cost anymore, but also about resilience and autonomy.

The global context (suddenly) matters, and it is influencing how platforms are built, where data lives, and who organizations choose to partner with. If anything, it makes the case even stronger for flexible, portable, cloud-native architectures. So you are not locked into a single region or provider.

Navigating the AI Buffet – Strategies and Metrics for Successful Enterprise Implementations

Artificial intelligence (AI) is gaining momentum everywhere. We see new solutions, partnerships and even reference architectures popping up almost daily. Additionally, organizations, lawyers and country leaders are looking for the right balance between business value and compliance needs. Without going too much into detail, I said to myself that artificial intelligence has a lot in common with cloud computing and multi-clouds. Just because it is out there everywhere, does that mean we should, or are allowed to, use it? Organizations are going to use both public and private clouds to host their non-AI and AI workloads, but what is their strategy? How do enterprises implement and successfully manage AI-based technologies and processes in order to generate a sustainable strategy and long-term competitive advantages?

What I won’t do

So, I asked myself: What is my role in this whole (crazy) AI world? What do I need to know? What do I have to do?

First, let me tell you what I won’t or cannot do:

  • I do not have 4+ years of experience working with machine learning
  • I have no competencies to write ML code using TensorFlow, PyTorch or Keras
  • Python? No, no experience, sorry
  • I do not do data engineering either
  • I understand storage and compute, yes, but no clue when it comes to correlating models with parameters and data
  • No, I don’t have real knowledge of Large Language Models (LLM) or HuggingFace models
  • I do not understand a full MLOps technical stack
  • I cannot fine-tune or tweak AI models
  • No, I don’t fully understand the possibilities of confidential computing or confidential AI

All the things above? That is not me.

What are my questions?

I think most of us start at the same place. First, when this hype started, we had to figure out what AI really means, where it is coming from and what types of AI exist.

After that, how did you continue? Probably like me and many others, you tried out ChatGPT, read about LLMs and generative AI (genAI). Eventually, you also tried out new plugins or tools to enhance your productivity.

A few months ago, I had a short conversation with a CTO from a large bank. A really large bank.

Guess what? He could not tell me how they move forward with the topic “artificial intelligence”. They have not figured out or decided yet what to do in terms of data privacy and control.

Decision-Makers and Data Scientists

This conversation led me to two important questions, and I believe this is what I want to do in the next few months and coming years:

  1. What does it take to implement AI in organizations?
  2. How can the success of an AI strategy and implementation be measured?

These are the topics I want to specialize in. This is the homework I and many others need to do first. These are the conversations I want to have with my customers first before we talk about infrastructure, data, and reference architectures.

My focus

I would like to get a better understanding of how organizations plan to get value with artificial intelligence. It is important, like we had to learn with cloud computing and hybrid or multi-cloud architecture over the past decade or so, to get a complete view and understanding of the opportunities and risks, as well as an understanding of the financial and organizational resources an enterprise might need.

What are the business models and frameworks one has to implement? What is a “good” strategy and how do you manage and measure that? What are the KPIs? What about feasibility and cost-effectiveness?

I want to understand the best practices and how some decision-makers have implemented a successful long-term strategy including processes, culture and technology.

I recently learned that artificial intelligence and machine learning implementations require a huge software stack. Do we really need to understand all the options and the solutions from different vendors? If not, who has got this knowledge? Data scientists?

Conclusion

In conclusion, the journey of implementing artificial intelligence in enterprises mirrors the experience of navigating an all-you-can-eat buffet.

I (still) have so many questions. My mission is to find answers and opinions to these questions, and I would not be surprised if it takes between 12 and 24 months.

The history of AI is more than 70 years old, but it seems we just have started now. While I understand that we live with AI every day now, I also want to understand how this field will develop and what is next. What are the trends?

As enterprises continue to embrace the AI buffet, it is not just about filling plates with technology. It is about crafting a menu that satisfies the hunger for innovation and excellence.

Note: The images for this article have been created with the help of artificial intelligence

VMware Explore 2023 US – Day 1 Announcements

VMware Explore 2023 US is currently happening in Las Vegas and I am onsite! Below you will find an overview of the information that was shared with us during the general session and solution keynotes.

Please be aware that this list is not complete but it should include all the major announcements including references and sources.

VMware Aria and VMware Tanzu

Starting this year, VMware Aria and VMware Tanzu form a single track at VMware Explore, and VMware introduced the develop, operate, and optimize pillars (DOO) for Aria and Tanzu around April 2023.

The following name changes and adjustments have been announced at VMware Explore US 2023:

  • The VMware Tanzu portfolio includes two new product categories (product family) called “Tanzu Application Platform” and “Tanzu Intelligence Services”.
  • Tanzu Application Platform includes the products Tanzu Application Platform (TAP) and Tanzu for Kubernetes Operations (TKO), and the new Tanzu Application Engine module.
  • Tanzu Intelligence Services – Aria Cost powered by CloudHealth, Aria Guardrails, Aria Insights, and Aria Migration will be rebranded as “Tanzu” and become part of this new Tanzu Intelligence Services category.
    • Tanzu Hub & Tanzu Graph
    • Tanzu CloudHealth
    • Tanzu Guardrails
    • Tanzu Insights (currently known as Aria Insights)
    • Tanzu Transformer (currently known as Aria Migration)
  • Aria Hub and Aria Graph are now called Tanzu Hub
  • VMware Cloud Packs are now called the VMware Cloud Editions (more information below)

Note: VMware expects to implement these changes by Q1 2024 at the latest.

The VMware Aria and Tanzu announcement and rebranding information can be found here.

Tanzu Mission Control

After announcing that Tanzu Mission Control supports the lifecycle management of Amazon EKS clusters, VMware announced that it now provides lifecycle management capabilities for Microsoft AKS clusters as well.

Tanzu Application Engine (Private Beta)

VMware announced a new solution for the Tanzu Application Platform category.

VMware Tanzu for Kubernetes Operations is introducing Tanzu Application Engine, enhancing multi-cloud support with lifecycle management of Azure AKS clusters, and offering new Kubernetes FinOps (cluster cost) visibility. It is a new abstraction that includes workload placement, the Kubernetes runtime, data services, libraries, and infrastructure resources, together with a set of policies and guardrails.

The Tanzu Application Engine announcement can be found here.

VMware RabbitMQ Managed Control Plane

I know a lot of customers who built an in-house RabbitMQ cloud service.

VMware just announced a beta program for a new VMware RabbitMQ Managed Control Plane which allows enterprises to seamlessly integrate RabbitMQ within their existing cloud environment, offering flexibility and control over data streaming processes.

What’s New with VMware Aria?

  • What’s New with VMware Aria Operations at VMware Explore
  • Next-Gen Public Cloud Management with VMware Aria Automation

Other Aria announcements can be found here.

VMware Cloud Editions

What has started with four different VMware Cloud Packs, is now known as “VMware Cloud Editions” with five different options:

VMware Cloud Editions

Here’s an overview of the different solutions/subscriptions included in each edition:

VMware Cloud Editions Connected Subscriptions

More VMware Cloud related announcements can be found here.

What’s New in vSphere 8 Update 2

As always, VMware is working on enhancing operational efficiency to make the life of an IT admin easier. And this gets better with the vSphere 8 U2 release.

In vSphere 8 Update 2, we are making significant improvements to several areas of maintenance to reduce and in some cases eliminate this need for downtime so vSphere administrators can make those important maintenance changes without having a large impact on the wider vSphere infrastructure consumers.

These enhancements include reduced downtime upgrades for vCenter, automatic vCenter LVM snapshots before patching and updating, non-disruptive certificate management, and reliable network configuration recovery after a vCenter is restored from backup.

More information about the vSphere 8 Update 2 release can be found here.

What’s New in vSAN 8 Update 2

At VMware Explore 2022, VMware announced the new vSAN 8.0 release which included the new Express Storage Architecture (ESA), which even got better with the recent vSAN 8.0 Update 1 release.

VMware vSAN Max – Petabyte-Scale Disaggregated Storage

VMware vSAN Max, powered by vSAN Express Storage Architecture, is a new vSAN offering in the vSAN family delivering petabyte-scale disaggregated storage for vSphere. With its new disaggregated storage deployment model, vSAN customers can scale storage elastically and independently from compute and deploy unified block, file, and partner-based object storage to maximize utilization and achieve lower TCO.

vSAN Max expands the use cases in which HCI can provide exceptional value. Disaggregation through vSAN Max provides flexibility to build infrastructure with the scale and efficiency required for non-linear scaling applications, such as storage-intensive databases, modern elastic applications with large datasets and more. Customers have a choice of deploying vSAN in a traditional model or a disaggregated model with vSAN Max, while still using a single control plane to manage both deployment options.

The vSAN Max announcement can be found here.

VMware Cloud on AWS

VMware announced a VMware Cloud on AWS Advanced subscription tier that will be available on i3en.metal and i4i.metal instance types only. This subscription will include advanced cloud management, networking and security features:

  • VMware NSX+ Services (NSX+ Intelligence, NDR capabilities, NSX Advanced Load Balancer)
  • vSAN Express Storage Architecture Support
  • VMware Aria Automation
  • VMware Aria Operations
  • VMware Aria Operations for Logs

Note: Existing deployments (existing SDDCs) will be entitled to these advanced cloud management, networking and security features over time.

The VMware Cloud on AWS Advanced Subscription Tier FAQ can be found here.

Introduction of VMware NSX+

Last year, VMware introduced Project Northstar as technology preview:

Project Northstar is a SaaS-based networking and security offering that will empower NSX customers with a set of on-demand multi-cloud networking and security services, end-to-end visibility, and controls. Customers will be able to use a centralized cloud console to gain instant access to networking and security services, such as network and security policy controls, Network Detection and Response (NDR), NSX Intelligence, Advanced Load Balancing (ALB), Web Application Firewall (WAF), and HCX. It will support both private cloud and VMware Cloud deployments running on public clouds and enable enterprises to build flexible network infrastructure that they can spin up and down in minutes.

This year, VMware announced the initial availability of the NSX+ service. VMware NSX+ is a fully managed cloud-based service offering that allows networking, security, and operations teams to consume and operate VMware NSX services from a single cloud console across private and public clouds.

The following services are available:

  • NSX+ Policy Management: Provides unified networking and security policy management across multiple clouds and on-premises data centers.
  • NSX+ Intelligence (Tech Preview only): Provides a big data reservoir and a system for network and security analytics, delivering real-time visibility into application traffic, from basic traffic metrics all the way to deep packet inspection.
  • NSX+ NDR (Tech Preview only): Provides a scalable threat detection and response service offering for Security Operations Center (SOC) teams to triage real-time security threats to their data center and cloud.

There are three different NSX+ and two NSX+ distributed firewall editions available:

  • NSX+ Standard. For organizations needing a basic set of NSX connectivity and security features for single location software-defined data center deployments.
  • NSX+ Advanced. For organizations needing advanced networking and security features that are applied to multiple sites. This edition also entitles customers to VMware NSX+ Advanced Load Balancer Cloud Services.
  • NSX+ Enterprise. For organizations needing all of the capability NSX has to offer. This edition also entitles customers to VMware NSX+ Advanced Load Balancer Cloud Services.
  • NSX+ Distributed Firewall. For organizations needing to implement access controls for east-west traffic within the network (micro-segmentation), but not focused on threat detection and prevention services.
  • NSX+ Distributed Firewall with Threat Prevention. For organizations needing access control and select threat prevention features for east-west traffic within the network.

An NSX+ feature overview can be found here.

Note: Currently, NSX+ only supports NSX on-premises deployments (NSX 4.1.1 or later) and VMware Cloud on AWS.

VMware Cloud Foundation

VMware announced a few innovations for H2 2023, which include support for the Distributed Services Engine (DSE, aka Project Monterey), vSAN ESA, and NSX+.

 

Generative AI – VMware Private AI Foundation with Nvidia

VMware and Nvidia’s CEOs announced VMware Private AI Foundation as the result of their longstanding partnership. 

Built on VMware Cloud Foundation, this integrated solution with Nvidia will enable enterprises to customize models and run generative AI applications, including intelligent chatbots, assistants, search, and summarization.

Anywhere Workspace Announcements

At VMware Explore 2022, VMware shared its vision for autonomous workspaces.

Autonomous workspace is a concept (not an individual product) that is our north star for the future of end-user computing. It means going beyond creating a unified workspace with basic automations, to analyzing huge amounts of data with AI and machine learning, to drive more advanced, context aware automations. This leads to a workspace that can be considered self-configuring, self-healing, and self-securing. 

VMware continued working on the realization of this vision and came up with a lot of announcements, which can be found here.

Other Announcements

Please find below some announcements that VMware shared with us during the SpringOne event or before and after the general session on August 22nd, 2023:

Momentum in the Cloud: Crafting Your Winning Strategy with VMware Cloud

The time is right for VMware Cloud! In the rapidly evolving landscape of modern business, embracing the cloud has become essential for organizations seeking to stay competitive and agile. The allure of increased scalability, cost-efficiency, and flexibility has driven enterprises of all sizes to embark on cloud migration journeys. However, the road to successful cloud adoption often comes with challenges. Slow and failed migrations have given rise to what experts call the “cloud paradox,” where the very technology meant to accelerate progress ends up hindering it.

As businesses navigate through this paradox, finding the right strategy to harness the full potential of the cloud becomes paramount. One solution that has emerged as a beacon of hope in this complex landscape is VMware Cloud. With its multi-cloud approach, which is also known as supercloud, VMware Cloud provides organizations the ability to craft a winning strategy that capitalizes on momentum while minimizing the risks associated with cloud migrations.

The Experimental Phase is Over

Is it really though? The experimental phase was an exciting journey of discovery for organizations exploring the potential of multi-cloud environments. Companies tried out different cloud providers, tested a variety of cloud services, and experimented with workloads and applications in the cloud. It allowed them to understand the benefits and drawbacks of each cloud platform, assess performance, security, and compliance aspects, and determine how well each cloud provider aligns with their unique business needs.

The Paradox of Cloud and Choice

With an abundance of cloud service providers, each offering distinct features and capabilities, decision-makers can find themselves overwhelmed with options. The quest to optimize workloads across multiple clouds can lead to unintended complexities, such as increased operational overhead, inconsistent management practices/tools, and potential vendor lock-in.

Furthermore, managing data and applications distributed across various cloud environments can create challenges related to security, compliance, and data sovereignty. The lack of standardized practices and tools in a multi-cloud setup can also hinder collaboration and agility, negating the very advantages that public cloud environments promise to deliver.

Multi-Cloud Complexity

(Public) Cloud computing is often preached for its cost-efficiency, enabling businesses to pay for resources on-demand and avoid capital expenditures on physical infrastructure. However, the cloud paradox reveals that organizations can inadvertently accumulate hidden costs, such as data egress fees, storage overage charges, and the cost of cloud management tools. Without careful planning and oversight, the cloud’s financial benefits might be offset by unexpected expenses.

Why Cloud Migrations are Slowing Down

Failed expectations. The first reasons my customers mention are cost and complexity.

While the cloud offers potential cost savings in the long run, the initial investment and perceived uncertainty in calculating the total cost of ownership can deter some organizations from moving forward with cloud migrations. Budget constraints and difficulties in accurately estimating and analyzing cloud expenses lead to a cautious approach to cloud adoption.

One significant factor impeding cloud migrations is the complexity of the process itself. Moving entire infrastructures, applications, and data to the cloud requires thorough planning, precise execution, and in-depth knowledge of cloud platforms and technologies. Many organizations lack the in-house expertise to handle such a massive undertaking, leading to delays and apprehensions about potential risks.

Other underestimated reasons are legacy systems and applications that have been in use for many years and are often deeply ingrained within an organization’s operations. Migrating these systems to the cloud may require extensive reconfiguration or complete redevelopment, making the migration process both time-consuming and resource-intensive.

Reverse Cloud Migrations

While I am not advocating for repatriation, I would like to share the idea that companies should think about workload mobility, application portability, and repatriation upfront. You can infinitely optimize your cloud spend, but if cloud costs start to outpace your transformation plans or revenue growth, it is already too late.

Embracing a Smart Approach with VMware Cloud

To address the cloud paradox and maximize the potential of multi-cloud environments, VMware is embracing the cloud-smart approach. This approach is designed to empower organizations with a unified and consistent platform to manage and operate their applications across multiple clouds.

  • Single Cloud Operating Model: A single operating model that spans private and public clouds. This consistency simplifies cloud management, enabling seamless workload migration and minimizing the complexities associated with multiple cloud providers.
  • Flexible Cloud Choice: VMware allows organizations to choose the cloud provider that best suits their specific needs, whether it is a public cloud or a private cloud infrastructure. This freedom of choice ensures that businesses can leverage the unique advantages of each cloud while maintaining operational consistency.
  • Streamlined Application Management: A cloud-smart approach centralizes application management, making it easier to deploy, secure, and monitor applications across multi-cloud environments. This streamlines processes, enhances collaboration, and improves operational efficiency.
  • Enhanced Security and Compliance: By adopting VMware’s security solutions, businesses can implement consistent security policies across all clouds, ensuring data protection and compliance adherence regardless of the cloud provider.

Why VMware Cloud?

This year I realized that a lot of VMware customers came back to me because their cloud-first strategy did not work as expected. Costs exploded, migrations were failing, and their project timeline changed many times. Also, partners like Microsoft and AWS want to collaborate more with VMware, because the public cloud giants cannot deliver as expected.

Customers and public cloud providers did not see any value in lifting and shifting workloads from on-premises data centers to the public cloud. Now the exact same people, companies, and partners (AWS, Microsoft, Google, Oracle, etc.) are back to ask VMware for support and for solutions that can speed up cloud migrations while reducing risks.

This is why I am always suggesting a “lift and learn” approach, which removes pressure and reduces costs.

Organizations view the public cloud as a highly strategic platform for digital transformation. Gartner forecasted in April 2023 that Infrastructure-as-a-Service (IaaS) is going to experience the highest spending growth in 2023, followed by PaaS.

It is said that companies spend most of their money on compute, storage, and data services when using Google Cloud, AWS, and Microsoft Azure. Guess what, VMware Cloud is a perfect fit for IaaS-based workloads (instead of using AWS EC2, Google Compute Engine, and Azure Virtual Machine instances)!

Who doesn’t like the idea of cost savings and faster cloud migrations?

Disaster Recovery and FinOps

When you migrate workloads to the cloud, you have to rethink your disaster recovery and ransomware recovery strategy. Have a look at VMware’s DRaaS (Disaster-Recovery-as-a-Service) offering which includes ransomware recovery capabilities as well. 

If you want to analyze and optimize your cloud spend, try out VMware Aria Cost powered by CloudHealth.

Final Words

VMware’s approach is not right for everyone, but it is a future-proof cloud strategy that enables organizations to adapt their cloud strategies as business needs evolve. The cloud-smart approach offers a compelling solution, providing businesses with a unified, consistent, and flexible platform to succeed in multi-cloud environments. By embracing this approach, organizations can overcome the complexities of multi-cloud, unlock new possibilities, and set themselves on a path to cloud success.

And you still get the same access to the native public cloud services.