The State of Application Modernization 2025

Every few weeks, I find myself in a conversation with customers or colleagues where the topic of application modernization comes up. Everyone agrees that modernization is more important than ever. The pressure to move faster, build more resilient systems, and increase operational efficiency is not going away.

But at the same time, when you look at what has actually changed since 2020… it is surprising how much has not.

We are still talking about the same problems: legacy dependencies, unclear ownership, lack of platform strategy, organizational silos. New technologies have emerged, sure. AI is everywhere, platforms have matured, and cloud-native patterns are no longer new. And yet, many companies have not even started building the kind of modern on-premises or cloud platforms needed to support next-generation applications.

It is like we are stuck between understanding why we need to modernize and actually being able to do it.

Remind me, why do we need to modernize?

When I joined Oracle in October 2024, some people reminded me that most of us do not know why we are where we are. One could say that it is not important to know. In my opinion, it very much is: something fundamental changed along the way, and it explains how we got to our current situation.

In the past, when we moved from physical servers to virtual machines (VMs), apps did not need to change. You could lift and shift a legacy app from bare metal to a VM and it would still run the same way. The platform changed, but the application did not care. It was an infrastructure-level transformation without rethinking the app itself. So the physical-to-virtual (P2V) transition of an application was smooth and uncomplicated.

But now? The platform demands change.

Cloud-native platforms like Kubernetes, serverless runtimes, or even fully managed cloud services do not just offer a new home. They offer a whole new way of doing things. To benefit from them, you often have to re-architect how your application is built and deployed.
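
To make this a bit more concrete, here is a minimal Go sketch (my own illustration, not a prescribed pattern) of the kind of change such a platform pushes into the application itself: exposing a health endpoint for probes and handling SIGTERM so the scheduler can restart or move the workload without dropping in-flight requests. The paths and port are assumptions for illustration.

```go
// Minimal sketch of one platform-driven change: a service that exposes a
// health endpoint for Kubernetes probes and shuts down gracefully on SIGTERM,
// so rolling updates and rescheduling do not drop in-flight requests.
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // liveness/readiness probe target
	})
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})

	srv := &http.Server{Addr: ":8080", Handler: mux}

	// Run the server in the background.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("server error: %v", err)
		}
	}()

	// Wait for SIGTERM (sent by Kubernetes before killing the pod), then drain.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("graceful shutdown failed: %v", err)
	}
}
```

None of this is exotic, but a classic lift-and-shifted app rarely does any of it, and that is exactly the kind of re-architecting the new platforms ask for.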

That is the reason why enterprises have to modernize their applications.

What else is different?

User expectations, business needs, and competitive pressure have exploded as well. Companies need to:

  • Ship features faster
  • Scale globally
  • Handle variable load
  • Respond to security threats instantly
  • Reduce operational overhead

A Quick Analogy

Think of it like this: moving from physical servers to VMs was like transferring your VHS tapes to DVDs. Same content, just a better format.

But app modernization? That is like going from DVDs to Netflix. You do not just change the format, but you rethink the whole delivery model, the user experience, the business model, and the infrastructure behind it.

Why Is Modernization So Hard?

If application modernization is so powerful, why is everyone not done with it already? The truth is, it is complex, disruptive, and deeply intertwined with how a business operates. Organizations often underestimate how much effort it takes to replatform systems that have evolved over decades. Here are six common challenges companies face during modernization:

  1. Legacy Complexity – Many existing systems are tightly coupled, poorly documented, and full of business logic buried deep in spaghetti code. 
  2. Skill Gaps – Moving to cloud-native tech like Kubernetes, microservices, or DevOps pipelines requires skills many organizations do not have in-house. Upskilling or hiring takes time and money.
  3. Cultural Resistance – Modernization often challenges organizational norms, team structures, and approval processes. People do not always welcome change, especially if it threatens familiar workflows.
  4. Data Migration & Integration – Legacy apps are often tied to on-prem databases or batch-driven data flows. Migrating that data without downtime is a massive undertaking.
  5. Security & Compliance Risks – Introducing new tech stacks can create blind spots or security gaps. Modernizing without violating regulatory requirements is a balancing act.
  6. Cost Overruns – It is easy to start a cloud migration or container rollout only to realize the costs (cloud bills, consultants, delays) are far higher than expected.

Modernization is not just a technical migration. It’s a transformation of people, process, and platform (technology). That is why it is hard and why doing it well is such a competitive advantage!

Technical Debt Is Also Slowing Things Down

Also known as the silent killer of velocity and innovation: technical debt.

Technical debt is the cost of choosing a quick solution now instead of a better one that would take longer. We have all seen/done it. 🙂 Sometimes it is intentional (you needed to hit a deadline), sometimes it is unintentional (you did not know better back then). Either way, it is a trade-off. And just like financial debt, it accrues interest over time.

Here is the tricky part: technical debt usually doesn’t hurt you right away. You ship the feature. The app runs. Management is happy.

But over time, debt compounds:

  • New features take longer because the system is harder to change

  • Bugs increase because no one understands the code

  • Every change becomes risky because there is no test safety net

Eventually, you hit a wall where your team is spending more time working around the system than building within it. That is when people start whispering: “Maybe we need to rewrite it.”  Or they just leave your company.
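
Before reaching for a rewrite, one practical counter-measure is to put characterization tests around the legacy behavior first, so that later changes have at least a minimal safety net. A small Go sketch of the idea, with PriceWithLegacyDiscount as a purely hypothetical stand-in for the legacy code:

```go
// A minimal sketch of a characterization ("safety net") test: before changing
// legacy code, pin down what it does today so refactoring becomes less risky.
package billing

import "testing"

// PriceWithLegacyDiscount stands in for the tangled legacy code you are
// afraid to touch; in reality it would live somewhere deep in the codebase.
func PriceWithLegacyDiscount(amount float64) float64 {
	if amount >= 100 {
		return amount * 0.9
	}
	return amount
}

func TestLegacyDiscountBehaviourIsPreserved(t *testing.T) {
	cases := []struct {
		name   string
		amount float64
		want   float64
	}{
		// Expected values are recorded from current behaviour, not from a spec.
		{"small order, no discount", 40, 40},
		{"threshold order gets 10 percent off", 100, 90},
		{"large order gets 10 percent off", 250, 225},
	}
	for _, c := range cases {
		if got := PriceWithLegacyDiscount(c.amount); got != c.want {
			t.Errorf("%s: got %v, want %v", c.name, got, c.want)
		}
	}
}
```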

Let me say it: Cloud Can Also Introduce New Debt

Cloud-native architectures can reduce technical debt, but only if used thoughtfully.

You can still:

  • Over-complicate microservices

  • Abuse Kubernetes without understanding it

  • Ignore costs and create “cost debt”

  • Rely on too many services and lose track

Use the cloud to eliminate debt by simplifying, automating, and replacing legacy patterns, not just lifting them into someone else’s data center.

It Is More Than Just Moving to the Cloud 

Modernization is about upgrading how your applications are built, deployed, run, and evolved, so they are faster, cheaper, safer, and easier to change. Here are some core areas where I have seen organizations making real progress:

  • Improving CI/CD. You can’t build modern applications if your delivery process is stuck in 2010.
  • Data Modernization. Migrate from monolithic databases to cloud-native, distributed ones.
  • Automation & Infrastructure as Code. It is the path to resilience and scale.
  • Serverless Computing. It is the “don’t worry about servers” mindset and ideal for many modern workloads.
  • Containerizing Workloads. Containers are a stepping stone to microservices, Kubernetes, and real DevOps maturity (see the configuration sketch after this list).
  • Zero-Trust Security & Cybersecurity Posture. One of the biggest priorities at the moment.
  • Cloud Migration. It is not about where your apps run; it is about how well they run there. “The cloud” should make you faster, safer, and leaner.
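
As promised above, here is a small illustration of what containerizing a workload typically implies for the code itself: configuration comes from the environment instead of files baked into the image, following the twelve-factor idea. A minimal Go sketch with purely illustrative variable names and defaults:

```go
// Minimal sketch of a containerization-friendly pattern: configuration comes
// from environment variables (injected by the container platform) instead of
// config files baked into the image.
package main

import (
	"fmt"
	"os"
)

// getenv returns the value of key or a default, keeping the same image
// portable across dev, test, and production without rebuilding it.
func getenv(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

func main() {
	listenAddr := getenv("LISTEN_ADDR", ":8080")
	dbURL := getenv("DATABASE_URL", "postgres://localhost:5432/app")

	fmt.Printf("starting on %s, database at %s\n", listenAddr, dbURL)
	// ... wire up the HTTP server and database connection here ...
}
```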

As you can see, application modernization is not one thing, it is many things. You do not have to do all of these at once. But if you are serious about modernizing, these points (and more) must be part of your blueprint. Modernization is a mindset.

Why (replatforming) now?

There are a few reasons why application modernization projects are increasing:

  • The maturity of cloud-native platforms: Kubernetes, managed databases, and serverless frameworks have matured to the point where they can handle serious production workloads. It is no longer “bleeding edge”.
  • DevOps and Platform Engineering are mainstream: We have shifted from siloed teams to collaborative, continuous delivery models. But that only works if your platform supports it.
  • AI and automation demand modern infrastructure: To leverage modern AI tools, event-driven data, and real-time analytics, your backend can’t be a 2004-era database with a web front-end duct-taped to it.

Conclusion

There is no longer much debate: (modern) applications are more important than ever. Yet despite all the talk around cloud-native technologies and modern architectures, the truth is that many organizations are still catching up, working hard to modernize not just their applications but also the infrastructure and processes that support them.

The current progress is encouraging, and many companies have learned from the experience of their first modernization projects.

One thing that is becoming harder to ignore is how much the geopolitical situation is starting to shape decisions around application modernization and cloud adoption. Concerns around data sovereignty, digital borders, national cloud regulations, and supply chain security are no longer just legal or compliance issues. They are shaping architecture choices.

Some organizations are rethinking their cloud and modernization strategies, looking at multi-cloud or hybrid models to mitigate risk. Others are delaying cloud adoption due to regional uncertainty, while a few are doubling down on local infrastructure to retain control. It is not just about performance or cost anymore, but also about resilience and autonomy.

The global context (suddenly) matters, and it is influencing how platforms are built, where data lives, and who organizations choose to partner with. If anything, it makes the case for flexible, portable, cloud-native architectures even stronger, so that you are not locked into a single region or provider.

VMware Tanzu Licensing – What’s New?

Last year, VMware gave the Tanzu portfolio a fairly good facelift with all the announcements from VMware Explore 2022. It is clear to me that VMware focuses on multi-cluster and multi-cloud Kubernetes management capabilities (Tanzu for Kubernetes Operations) and a superior developer experience with any Kubernetes on any cloud (Tanzu Application Platform). VMware embraces native public clouds and so it was very exciting for many customers when they announced the lifecycle management of Amazon Elastic Kubernetes Service (EKS) clusters – the direct provisioning and management of EKS clusters with Tanzu Mission Control. But what happened in the last 6 to 9 months since VMware Explore US and Europe? And how do I get parts of the VMware Tanzu portfolio nowadays?

Tanzu Licensing

Let us start with licensing first. In October 2022, VMware made it clear that they do not want to move forward with the Tanzu Basic and Advanced editions anymore; only Tanzu Standard was left. VMware replaced Tanzu Basic with “Tanzu Kubernetes Grid” (TKG), which comes with the following components:

  • vSphere capabilities / K8s Runtime
  • K8s Cluster Lifecycle Management – Cluster API
  • Image Registry – Harbor
  • Container Networking – Antrea/Calico
  • Load Balancing – NSX Advanced Load Balancer
  • Ingress Controller – Contour
  • Observability – Fluent Bit, Prometheus, Grafana
  • Operating System – Photon OS, Ubuntu, bring-your-own node image
  • Data Protection – Velero

Note: Nothing is official yet, but according to this article intended for partners, VMware is going to announce the Tanzu Standard EOA (End of Availability) soon:

…containing updated information on Tanzu Standard entering end of availability (EOA) and the new Tanzu Kubernetes Operations and Tanzu Application Platform partner resources.

Looking at the “Tanzu Explainer” and its changelog from the 5th of May, one can find the following: “Updated to reflect new Tanzu for Kubernetes Operations SKUs“.

Tanzu for Kubernetes Operations Bundles

The Tanzu Explainer on Tech Zone lists the following new bundles/packages for Tanzu for Kubernetes Operations (TKO):

  1. Tanzu for Kubernetes Operations Foundation includes Tanzu Mission Control Advanced and Tanzu Service Mesh Advanced. Two add-on SKUs are available—one adds Antrea Advanced and Aria Operations for Applications, the other adds these plus NSX Advanced Load Balancer Enterprise. Tanzu Kubernetes Grid is not included in this bundle.
  2. Tanzu for Kubernetes Operations includes Tanzu Kubernetes Grid, Tanzu Mission Control Advanced, Tanzu Service Mesh Advanced, Antrea Advanced, and Aria Operations for Applications.
  3. Tanzu for Kubernetes Operations with NSX Advanced Load Balancer includes Tanzu Kubernetes Grid, Tanzu Mission Control Advanced, Tanzu Service Mesh Advanced, Antrea Advanced, Aria Operations for Applications, and NSX Advanced Load Balancer Enterprise.

Note: Since Tanzu Mission Control (TMC) Standard was only sold as part of the Tanzu Standard edition, we see VMware moving forward with TMC Advanced only. Which is good! But TMC Essentials still comes with vSphere+ and VMC on AWS.

Tanzu Entitlements with vSphere and VMware Cloud Foundation Editions

What about vSphere and VMware Cloud Foundation (VCF)? Let me give you an overview here as well:

  • vSphere+ Standard – No Tanzu entitlements included
  • vSphere+ – Includes TKG and TMC Essentials
  • vSphere Enterprise+ with TKG – Includes TKG
  • VMware Cloud Foundation – All VCF editions have Tanzu Standard included

Note: We do not know yet what the Tanzu Standard EOA means for the Tanzu entitlements with VCF. Need to wait for guidance.

VMware Cloud Packs

In April 2023, VMware introduced new bundles called VMware Cloud Packs and they come in four different flavours:

  1. Compute with Advanced Automation. vSphere+ and Aria Universal Suite Advanced
  2. HCI. vSphere+, vSAN+ Advanced and Aria Universal Suite Standard
  3. HCI with Advanced Automation. vSphere+, vSAN+ Advanced and Aria Universal Suite Advanced
  4. VMware Cloud Foundation. vSphere+, vSAN+ Enterprise, NSX Enterprise Plus, SDDC Manager, Aria Universal Suite Enterprise, Aria Operations for Networks Enterprise add-on

In addition to these four Cloud Packs offerings, customers can get the following add-ons:

  • Data Protection & Disaster Recovery
  • Network Detection and Response
  • Tanzu Mission Control
  • Ransomware Recovery
  • Advanced Load Balancer
  • Workload and Endpoint Security
  • Intrusion Detection and Prevention
  • VDI/Desktops

Note: As you can see, all new Cloud Packs have TKG included and TMC is an add-on. vCenter Standard is included with both connected and disconnected subscriptions.

Important: Please note as well that the individual components of the bundles cannot be upgraded independently. Example – Aria Universal Suite Standard as part of the HCI Cloud Pack cannot be upgraded to Aria Universal Suite Enterprise.

Conclusion

VMware is clearly moving in the right direction: They want to simplify their portfolio and improve how customers can consume/subscribe services. As always, it is going to take a while until they have figured out which bundles and product versions make sense for most of the customers. Be patient. 🙂

 

Open Source and Vendor Lock-In

When talking about multi-cloud and cost efficiency, open source is often discussed because it can be deployed and operated on all private and public clouds. From my experience and conversations with customers, open source is most of the time directly connected to discussions about vendor lock-in.

Organizations want to avoid or minimize the use of proprietary software so they do not become dependent on a particular vendor or service. Different factors contribute to lock-in, such as proprietary technology or services and long-term contracts. It is also about not giving a specific supplier leverage over your organization, for example when that supplier increases its prices. Another reason to avoid vendor lock-in is the notion that proprietary software can limit or reduce innovation in your environment.

CNCF and Kubernetes

Let us take Kubernetes as an example. Kubernetes, which is also known as K8s, was contributed as an open-source seed technology by Google to the Linux Foundation in 2015, which formed the sub-foundation “Cloud Native Computing Foundation” (CNCF). Founding CNCF members include companies like Google, Red Hat, Intel, Cisco, IBM, and VMware.

Currently, the CNCF has over 167k project contributors, over 800 members, and more than 130 certified Kubernetes distributions and platforms. Open source projects and the adoption of cloud native technologies are constantly growing.

The Cloud Native Computing Foundation, its members, and contributors have the same mission in mind. They want to drive cloud native adoption by providing open, cloud native software that “can be implemented on a variety of architectures and operating systems”. This is one of the values described in the CNCF mission statement.

If you access the CNCF Cloud Native Interactive Landscape, you will get an understanding of how many open source projects are supported by the CNCF and this open source community.

CNCF Landscape Jan 2023

Since its donation to the CNCF, many companies around the world have started using Kubernetes, or at least a distribution of it:

  • Amazon Elastic Kubernetes Service Distro (Amazon EKS-D)
  • Azure (AKS) Engine
  • Cisco Intersight Kubernetes Service
  • K3s – Lightweight Kubernetes
  • MetalK8s
  • Oracle Cloud Native Environment
  • Rancher Kubernetes
  • Red Hat OpenShift
  • VMware Tanzu Kubernetes Grid (TKG)

A distribution, or distro, is when a vendor takes core Kubernetes — that’s the unmodified, open source code (although some modify it) — and packages it for redistribution. Usually, this entails finding and validating the Kubernetes software and providing a mechanism to handle cluster installation and upgrades. Many Kubernetes distributions include other proprietary or open source applications.

These were just a few of the total 66 certified Kubernetes distributions. What about the certified hosted Kubernetes service offerings? Let me list here some of the popular ones out of the 53 total:

  • Alibaba Cloud Container Service for Kubernetes (ACK)
  • Amazon Elastic Container Service for Kubernetes (EKS)
  • Azure Kubernetes Service (AKS)
  • Google Kubernetes Engine (GKE)
  • Nutanix Kubernetes Engine (formerly Karbon)
  • Oracle Container Engine for Kubernetes (OKE)
  • Red Hat OpenShift Dedicated

While Kubernetes is open source, different vendors create curated versions of Kubernetes, add some proprietary services, and then offer it as a managed service. The notion of open source is that you can take all of your applications and their components and leave a specific cloud provider if needed.

Trade-Offs

Open source software can make cloud migrations easier in some ways (e.g., if you use the same database in all the clouds). Kubernetes is designed to be cloud-agnostic, meaning that it can run on multiple cloud platforms. This can make it easier to move applications and workloads between different clouds without needing to rewrite the code or reconfigure the infrastructure. At least, this was the expectation of Kubernetes. And it should be clear by now that a managed service or platform means a lock-in, no matter if it is GKE, EKS, AKS, or VMware Tanzu for Kubernetes.
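
To illustrate the “common denominator” point: the sketch below uses the standard client-go library, and the exact same code can list deployments on EKS, AKS, GKE, or an on-premises cluster; only the kubeconfig context changes. The context names are assumptions for illustration. Note that this consistency ends at the Kubernetes API; everything around it (load balancers, databases, IAM) still differs per provider.

```go
// Minimal sketch of the "same API everywhere" idea: identical client code
// talks to any conformant cluster, selected only by the kubeconfig context.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func deploymentsPerCluster(contextName string) (int, error) {
	// Build a client for one cluster, chosen by kubeconfig context.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		&clientcmd.ClientConfigLoadingRules{ExplicitPath: clientcmd.RecommendedHomeFile},
		&clientcmd.ConfigOverrides{CurrentContext: contextName},
	).ClientConfig()
	if err != nil {
		return 0, err
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return 0, err
	}
	// List deployments across all namespaces of that cluster.
	deps, err := clientset.AppsV1().Deployments("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return 0, err
	}
	return len(deps.Items), nil
}

func main() {
	// Illustrative context names; in practice these come from your kubeconfig.
	for _, ctxName := range []string{"eks-prod", "aks-prod", "onprem-tkg"} {
		n, err := deploymentsPerCluster(ctxName)
		if err != nil {
			log.Printf("%s: %v", ctxName, err)
			continue
		}
		fmt.Printf("%s: %d deployments\n", ctxName, n)
	}
}
```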

You cannot avoid a (vendor) lock-in. You have the same with open source. It is about trade-offs.

If you deploy workloads in multiple clouds, you end up with different vendors/partners, different solutions, and technologies. For me, it is about operations at the end of the day. How do you manage and operate multiple clouds and their different managed services? How do you deploy and use open source software in different clouds?

I have not seen one customer saying that they moved away from AKS, EKS, GKE, or Tanzu and went back to the upstream version of Kubernetes and built the application platform around it by themselves from scratch with other open source projects. You can do it, but you need someone who did that before and can guide you. Why?

There are other container-related technologies like databases, streaming & messaging, service proxies, API gateways, cloud native storage, container runtimes, service meshes, and cloud native network projects. Let us have a look at the different categories and examples:

  • Database, 62 different projects (Cassandra, MySQL, Redis, PostgreSQL, Scylla)
  • Storage, 66 different projects (Container Storage Interface, MinIO, Velero)
  • Network, 25 different projects (Antrea, Cilium, Flannel, Container Network Interface, Open vSwitch, Calico, NGINX)
  • Service Proxy, 21 different projects (Contour, Envoy, HAProxy, MetalLB, NGINX)
  • Observability & Analysis, 145 projects (Grafana, Icinga, Nagios, Prometheus)

CNCF Cloud Native Networking

It is complex to deploy, integrate, operate and maintain different open source projects that you most probably need to integrate with proprietary software as well. So, one trade-off and disadvantage of open source software could be that it is developed and maintained by a community of volunteers. Some companies need enterprise support.

Note: Do not forget that even though you may be using open source software in different private and public clouds, you cannot change the fact that you most probably still have to use specific services of each cloud platform (e.g., network and storage). In this case, you have a dependency or lock-in on a different architectural layer.

If it is about costs, then open source can be helpful here, sure, but we shouldn’t forget the additional operational effort. You will never get the costs down to zero with open source.

The Reality

Graduated and incubating CNCF projects are considered to be running stable and can be used in production. Some examples would be Envoy, etcd, Harbor, Kubernetes, Open Policy Agent, and Prometheus.

Companies and developers have different motivations for choosing open source. Open source software lowers your total cost of ownership (TCO), is created by skillful and talented people, gives you more flexibility because of non-proprietary standards, is cloud agnostic, has strong and fast support from the community when finding bugs, and is considered to be secure for use in production.

Open source is so well liked that its use even attracts talent. There is no other community of this size collaborating on innovation and industry standardization!

But the Apache Log4j vulnerability showed the whole world that open source software needs to become more secure, and that project contributors and users need to ensure the integrity of the source code, build, and distribution in all open source software since a growing number of companies are using open source software as part of their solutions and managed services.

There are certain situations where open source software needs to be integrated with proprietary software. Commercial software can also provide more enterprise-readiness and a complete solution, whereas with open source software you have to deploy and use a combination of different projects to achieve the same. This can mean a lot of effort for a company, and you have to ensure the interoperability of the implemented software stack.

Technical issues always occur, no matter if it’s open source or proprietary software. Open source software does not provide the enterprise support some organizations are looking for.

While everyone has to decide what is best for their company and strategy, a lot of people are overwhelmed by the huge and confusing CNCF landscape, which gives you so many options. Instead of deploying and integrating different open source projects by themselves, organizations look for public cloud service providers that take care of the management and the ecosystem (network, storage, databases, etc.) around Kubernetes; this is seen as the easiest way to get started with cloud native.

What has started for some organizations in one public cloud with one hosted Kubernetes offering has sometimes grown to a landscape with three different public clouds and four different Kubernetes distributions or hosted services.

Example: Companies may have started with Kubernetes or VMware Tanzu on-premises and use AKS, EKS and GKE in their public clouds.

How do you cost-efficiently manage all these different distributions and services across different clouds, with different management consoles and security solutions? Tanzu Mission Control and Tanzu Application Platform could be an option.

VMware and Open Source

VMware and some of their engineers are part of the community and they actively contribute to projects like Kubernetes, Harbor, Carvel, Antrea, Contour and Velero. Interested in some stats (filtered by the last decade)?

Open source is an essential part of any software strategy—from a developer’s laptop to the data center. At VMware, we’re committed to open source and their communities so that we can all deliver better solutions: software that’s more secure, scalable, and innovative. VMware Tanzu is open source aligned and built on a foundation of open source projects.

VMware Tanzu

VMware (Tanzu) leverages some of the leading open source technologies in the Kubernetes ecosystem. They use Cluster API for cluster lifecycle management, Harbor for container registry, Contour for ingress, Fluentbit for logging, Grafana and Prometheus for monitoring, Antrea and Calico for container networking, Velero for backup and recovery, Sonobuoy for conformance testing, and Pinniped for authentication.

VMware Open Source

VMware Tanzu Application Platform

According to VMware, they built Tanzu Application Platform (TAP) with an open source-first mindset, using some of the most popular cloud native technologies and projects.

More information can be found here.

VMware Data Services

VMware also has a family of on-demand caching, messaging, and database software (from the acquisition of Pivotal):

  • VMware GemFire – Fast, consistent data for web-scaling concurrent requests fulfills the promise of highly responsive applications.
  • VMware RabbitMQ – A fast, dependable enterprise message broker provides reliable communication among servers, apps, and devices.
  • VMware Greenplum – VMware Greenplum is a massively parallel processing database. Greenplum is based on open source Postgres, enabling Data Warehousing, aggregation, AI/ML and extreme query speed.
  • VMware SQL – VMware’s open-source SQL Database (Postgres & MySQL) is a Relational database service providing cost-efficient and flexible deployments on-demand and at scale. Available on any cloud, anywhere.

Watch the VMware Explore 2022 session “Introduction to VMware Tanzu Data Services” to learn more about this portfolio.

Developers could start with the Tanzu Developer Center.

VMware SQL and DBaaS

If you are interested in building a DB-as-a-Service offering based on PostgreSQL, MySQL or SQL Server, I recommend the following resources from Cormac Hogan:

  1. A closer look at VMware Data Services Manager and Project Moneta
  2. VMware Data Services Manager – Architectural Overview and Provider Deployment
  3. VMware Data Services Manager – Agent Deployment
  4. VMware Data Services Manager – Database Creation
  5. VMware Data Services Manager – SQL Server Database Template
  6. Introduction to VMware Data Services Manager (video)

Closing

Like always, you or your architects have to decide what makes the most sense for your company, your IT landscape, and your applications. Make or buy? Open source or proprietary software? Happily married or locked in? What does vendor lock-in mean for you?

In any case, VMware embraces open source!

Share Your Opinion – Cross-Cloud Mobility and Application Portability

Do you have an opinion about cross-cloud mobility and application portability? If yes, what about it is important to you? How do you intend to achieve this kind of cloud operating model? Is it about flexibility or more about a cloud-exit strategy? Just because we can, does it mean we should? Will it ever become a reality? These are just some of the questions I am looking to answer.

Contact me via michael.rebmann@cloud13.ch. You can also reach me on LinkedIn.

I am writing a book about this topic and looking for cloud architects and decision-makers who would like to sit down with me via Zoom or MS Teams to discuss the challenges of multi-cloud and how to achieve workload mobility or application/data portability. I just started interviewing chief architects, CTOs and cloud architects from VMware, partners, customers and public cloud providers (like Microsoft, AWS and Google) as part of my research.

The questions below led me to the book idea.

What is Cross-Cloud Mobility and Application Portability about? 

Cross-cloud mobility refers to the ability of an organization to move its applications and workloads between different cloud computing environments. This is an important capability for organizations that want to take advantage of the benefits of using multiple cloud providers, such as access to a wider range of services and features, and the ability to negotiate better terms and pricing.

To achieve cross-cloud mobility, organizations need to use technologies and approaches that are compatible with multiple cloud environments. This often involves using open standards and APIs, as well as adopting a microservices architecture and containerization, which make it easier to move applications and workloads between different clouds.
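
One common technique behind this kind of portability is to keep provider-specific SDK calls out of the application code by programming against a small interface. A minimal Go sketch of the idea; the interface and the adapter names mentioned in the comments are purely illustrative:

```go
// Minimal sketch of one portability technique: the application depends on a
// small interface instead of a specific provider SDK, so the cloud-specific
// part is confined to one adapter per provider.
package storage

import (
	"context"
	"io"
)

// BlobStore is the only contract the application code is allowed to use.
type BlobStore interface {
	Put(ctx context.Context, key string, body io.Reader) error
	Get(ctx context.Context, key string) (io.ReadCloser, error)
	Delete(ctx context.Context, key string) error
}

// ArchiveReport shows application code staying provider-neutral.
func ArchiveReport(ctx context.Context, store BlobStore, key string, report io.Reader) error {
	return store.Put(ctx, key, report)
}

// Provider-specific adapters (an S3 store, a GCS store, a local store, ...)
// implement BlobStore elsewhere; swapping clouds means swapping one
// constructor call, not rewriting the application.
```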

Another key aspect of cross-cloud mobility is the ability to migrate data between different clouds without losing any of its quality or integrity. This requires the use of robust data migration tools and processes, as well as careful planning and testing to ensure that the migrated data is complete and accurate.

In addition to the technical challenges of achieving cross-cloud mobility, there are also organizational and business considerations. For example, organizations need to carefully evaluate their use of different cloud providers, and ensure that they have the necessary contracts and agreements in place to allow for the movement of applications and workloads between those providers.

Overall, cross-cloud mobility is an important capability for organizations that want to take advantage of the benefits of using multiple cloud providers. By using the right technologies and approaches, organizations can easily and securely move their applications (application portability) and workloads between different clouds, and take advantage of the flexibility and scalability of the cloud.

What is a Cloud-Exit Strategy?

A cloud-exit strategy is a plan for transitioning an organization’s applications and workloads away from a cloud computing environment. This can be necessary for a variety of reasons, such as when an organization wants to switch to a different cloud provider, when it wants to bring its applications and data back in-house, or when it simply no longer needs to use the cloud. A cloud-exit strategy typically includes several key components, such as:

  1. Identifying the specific applications and workloads that will be transitioned away from the cloud, and determining the timeline for the transition.
  2. Developing a plan for migrating the data and applications from the cloud to the new environment, including any necessary data migration tools and processes.
  3. Testing the migration process to ensure that it is successful and that the migrated applications and data are functioning properly.
  4. Implementing any necessary changes to the organization’s network and infrastructure to support the migrated applications and data.
  5. Ensuring that the organization has a clear understanding of the costs and risks associated with the transition, and that it has a plan in place to mitigate those risks.

By having a well-defined cloud-exit strategy, organizations can ensure that they are able to smoothly and successfully transition away from a cloud computing environment when the time comes.

What is a Cloud-Native Application?

A cloud-native application is a type of application that is designed to take advantage of the unique features and characteristics of cloud computing environments. This typically includes using scalable, distributed, and highly available components, as well as leveraging the underlying infrastructure of the cloud to deliver a highly performant and resilient application. Cloud-native applications are typically built using a microservices architecture, which allows for flexibility and scalability, and are often deployed using containers to make them portable across different cloud environments.

Does Cloud-Native mean an application needs to perform equally well on any cloud?

No, being cloud-native does not necessarily mean that an application will perform equally well on any cloud. While cloud-native applications are designed to be portable and scalable, the specific cloud environment in which they are deployed can still have a significant impact on their performance and behavior.

For example, some cloud providers may offer specific services or features that can be leveraged by a cloud-native application to improve its performance, while others may not. Additionally, the underlying infrastructure of different cloud environments can vary, which can affect the performance and availability of a cloud-native application. As a result, it is important for developers to carefully consider the specific cloud environment in which their cloud-native application will be deployed, and to optimize its performance for that environment.

How can you avoid a cloud lock-in?

A cloud lock-in refers to a situation where an organization becomes dependent on a particular cloud provider and is unable to easily switch to a different provider without incurring significant costs or disruptions. To avoid a cloud lock-in, organizations can take several steps, such as:

  1. Choosing a cloud provider that offers tools and services that make it easy to migrate to a different provider, such as data migration tools and APIs for integrating with other cloud services.
  2. Adopting a multi-cloud strategy, where the organization uses multiple cloud providers for different workloads or applications, rather than relying on a single provider.
  3. Ensuring that the organization’s applications and data are portable, by using open standards and technologies that are supported by multiple cloud providers.
  4. Regularly evaluating the organization’s use of cloud services and the contracts with its cloud provider, to ensure that it is getting the best value and flexibility.
  5. Developing a cloud governance strategy that includes processes and policies for managing the organization’s use of cloud services, and ensuring that they align with the organization’s overall business goals and objectives.

By taking these steps, organizations can avoid becoming overly dependent on a single cloud provider and maintain the flexibility to switch to a different provider if needed.

Final Words

Multi-cloud is very complex and has different layers like compute, storage, network, security, monitoring and observability, operations, and cost management. Add topics like open-source software, databases, Kubernetes, developer experience, and automation to the mix, and we will most probably have enough to discuss. 🙂

Looking forward to hearing from you! 

The Backbone To Upgrade Your Multi-Cloud DevOps Experience

Multi-cloud is a mess. You cannot solve that multi-cloud complexity with a single vendor or one single supercloud (or intercloud); it is just not possible. But different vendors can help you on your multi-cloud journey to make your life and the platform team’s life easier. The whole world talks about DevOps or DevSecOps, and then there is the shift-left approach, which puts more responsibility on developers. It seems to me that too many times we forget the “ops” part of DevOps. That is why I would like to highlight the need for Tanzu Mission Control (which is part of Tanzu for Kubernetes Operations) and Tanzu Application Platform.

Challenges for Operations

What started with a VMware-based cloud in your data centers has evolved into a very heterogeneous architecture with two or more public clouds like Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform. IT analysts tell us that 75% of businesses are already using two or more public clouds. Businesses choose their public cloud providers based on workload or application characteristics and a public cloud’s known strengths. Companies want to modernize their current legacy applications in the public clouds because, in most cases, a simple rehost or migration (lift & shift) does not bring the value or innovation they are aiming for.

A modern application is a collection of microservices, which are light, fault tolerant and small. Microservices can run in containers deployed in a private or public cloud. Many operations and platform teams see cloud-native as going to Kubernetes. But cloud-native is so much more than the provisioning and orchestration of containers with Kubernetes. It’s about collaboration, DevOps, internal processes and supply chains, observability/self-healing, continuous delivery/deployment and cloud infrastructures.

Expectation of Kubernetes

Kubernetes 1.0 was contributed as an open source seed technology by Google to the Linux Foundation in 2015, which formed the sub-foundation “Cloud Native Computing Foundation” (CNCF). Founding CNCF members include companies like Google, Red Hat, Intel, Cisco, IBM and VMware.

Currently, the CNCF has over 167k project contributors, around 800 members and more than 130 certified Kubernetes distributions and platforms. Open source projects and the adoption of cloud native technologies are constantly growing.

If you access the CNCF Cloud Native Interactive Landscape, you will get an understanding of how many open source projects are supported by the CNCF and maintained by this open source community. Since its donation to the CNCF, almost every company on this planet is using Kubernetes, or a distribution of it:

Those were just a few of the 63 certified Kubernetes distributions in total. What about the certified hosted Kubernetes service offerings? Let me list some of the popular ones here:

  • Alibaba Cloud Container Service for Kubernetes
  • Amazon Elastic Container Service for Kubernetes (EKS)
  • Azure Kubernetes Service (AKS)
  • Google Kubernetes Engine (GKE)
  • Nutanix Karbon
  • Oracle Container Engine
  • OVH Managed Kubernetes Service
  • Red Hat OpenShift Dedicated

All these clouds and vendors expose Kubernetes implementations, but writing software that performs equally well across all clouds still seems to be a myth. At least we have a common denominator, a consistency across all clouds, right? That is Kubernetes.

Consistent Operations and Experience

It is very interesting to see that the big three hyperscalers, Amazon, Microsoft, and Google, are moving towards multi-cloud enabled services and products to provide a consistent experience from an operations standpoint, especially for Kubernetes clusters.

Microsoft now has Azure Arc, Google provides Anthos (GKE clusters) for any cloud, and AWS has also realized that the future consists of multiple clouds and plans to provide EKS “anywhere”.

They all have realized that customers need a centralized management and control plane. Customers are looking for simplified operations and consistent experience when managing multi-cloud K8s clusters.

Tanzu Mission Control (TMC)

Imagine that you have a centralized dashboard with management capabilities, which provides a unified policy engine and allows you to manage the lifecycle of all the different K8s clusters you have.

TMC offers built-in security policies and cluster inspection capabilities (CIS benchmarks) so you can apply additional controls to your Kubernetes deployments. Leveraging the open source project Velero, Tanzu Mission Control gives ops teams the capability to very easily back up and restore your clusters and namespaces. Just four weeks ago, VMware announced cross-cluster backup and restore capabilities for Tanzu Mission Control, which let Kubernetes-based applications “become” infrastructure and distribution agnostic.

Tanzu Mission Control lets you attach any CNCF-conformant K8s cluster. When attached to TMC, you can manage policies for all Kubernetes distributions such as Tanzu Kubernetes Grid (TKG), Azure Kubernetes Service, Google Kubernetes Engine or OpenShift.

Tanzu Mission Control Dashboard

In VMware’s ongoing commitment to support customers in their multi-cloud application modernization efforts, the Tanzu Mission Control team introduced the preview of lifecycle management of Amazon EKS clusters at VMware Explore US 2022:

Preview for lifecycle management of Amazon Elastic Kubernetes Service (EKS) clusters can enable direct provisioning and management of Amazon EKS clusters so that developers and operators have less friction and more choices for cluster types. Teams will be able to simplify multi-cloud, multi-cluster Kubernetes management with centralized lifecycle management of Tanzu Kubernetes Grid and Amazon EKS cluster types.

Note: With this announcement I would expect that the support for Azure Kubernetes Service (AKS) is also coming soon.

Read the Tanzu Mission Control solution brief to get more information about its benefits and capabilities.

Challenges for Developers

Tanzu Mission Control provides cross-cloud services for your Kubernetes clusters deployed in multiple clouds. But there is still another problem.

Developers are being asked to write code and provide business logic that could run on-prem, on AWS, on Azure or any other public cloud. Every cloud provider has an interest to provide you their technologies and services. This includes the hosted Kubernetes offerings (with different Kubernetes distributions), load balancers, storage, databases, APIs, observability, security tools and so many other components. To me, it sounds very painful and difficult to learn and understand the details of every cloud provider.

Cross-cloud services alone do not solve that problem. Obviously, Kubernetes does not solve that problem either.

What if Kubernetes and centralized management and visibility are not “the” solution, and the answer is rather something that sits on top of Kubernetes?

And Then Came PaaS

Kubernetes is a platform for building platforms and is not really meant to be used by developers.

The CNCF landscape is huge and complex to understand and integrate, so it was just a logical move that companies started looking for pre-assembled solutions like platform as a service (PaaS). I think that Tanzu Application Service (formerly known as Pivotal Cloud Foundry), Heroku, Red Hat OpenShift, and AWS Elastic Beanstalk are the most famous examples of PaaS.

The challenge with building applications that run on a PaaS is sometimes the need to leverage all the PaaS-specific components to fully make use of it. What if someone wants to run her own database? What if the PaaS offering restricts programming languages, frameworks, or libraries? Or is it the vendor lock-in which bothers you?

PaaS solutions alone do not seem to solve the missing developer experience for everyone either.

Do you want to build the platform by yourself or get something off the shelf? There is a big difference between using a platform and running one. 🙂

Twitter Kelsey Hightower K8s PaaS

Bring Your Own Kubernetes To A Portable PaaS

What’s next after IaaS has evolved to CaaS (because of Kubernetes) and PaaS? It is adPaaS (Application Developer PaaS).

Have you ever heard of the “Golden Path”? Spotify uses this term, and Netflix calls it the “Paved Road”.

The idea behind the golden path or paved road is that the (internal) platform offers some form of pre-assembled components and a supported approach (best practices) that make software development faster and more scalable. Developers do not have to reinvent the wheel by browsing through a very fragmented ecosystem of developer tooling, where the best way to find out how to do things is to ask the community or your colleagues.

VMware announced Tanzu Application Platform (TAP) in September 2021 with the statement that TAP will provide a better developer experience on any Kubernetes.

VMware Tanzu Application Platform delivers a prepaved path to production and a streamlined, end-to-end developer experience on any Kubernetes.

It is the platform team’s duty to install and configure the opinionated Tanzu Application Platform as an overlay on top of any Kubernetes cluster. They also integrate existing components of Kubernetes such as storage and networking. An opinionated platform provides the structure and abstraction you are looking for: The platform “does” it for you. In other words, TAP is a prescribed architecture and path with the necessary modularity and flexibility to boost developer productivity.

Diagram depicting the layered structure of TAP

The developers can focus on writing code and do not have to fully understand the details like container image registries, image building and scanning, ingress, RBAC, deploying and running the application etc.

Illustration of TAP conceptual value, starting with components that serve the developer and finishing with the components that serve the operations staff and security staff.

 

TAP comes with many popular best-of-breed open source projects that are improving the DevSecOps experience:

  • Backstage. Backstage is an open platform for building developer portals, created at Spotify, donated to the CNCF, and maintained by a worldwide community of contributors.
  • Carvel. Carvel provides a set of reliable, single-purpose, composable tools that aid in your application building, configuration, and deployment to Kubernetes.
  • Cartographer. Cartographer is a VMware-backed project and is a Supply Chain Choreographer for Kubernetes. It allows App Operators to create secure and pre-approved paths to production by integrating Kubernetes resources with the elements of their existing toolchains (e.g. Jenkins).
  • Tekton. Tekton is a cloud-native, open source framework for creating CI/CD systems. It allows developers to build, test, and deploy across cloud providers and on-premise systems.
  • Grype. Grype is a vulnerability scanner for container images and file systems.
  • Cloud Native Runtimes for VMware Tanzu. Cloud Native Runtimes for Tanzu is a serverless application runtime for Kubernetes that is based on Knative and runs on a single Kubernetes cluster.

At VMware Explore US 2022, VMware announced new capabilities that will be released in Tanzu Application Platform 1.3. The most important added functionalities for me are:

  • Support for Red Hat OpenShift. Tanzu Application Platform 1.3 will be available on Red Hat OpenShift, running on vSphere and on bare metal.
  • Support for air-gapped installations. Support for regulated and disconnected environments, helping to ensure that the components, upgrades, and patches are made available to the system and that they operate consistently and correctly in the controlled environment and keep data secure.
  • Carbon Black Integration. Tanzu Application Platform expands the ecosystem of supported vulnerability scanners with a beta integration with VMware Carbon Black scanner to enable customer choice and leverage their existing investments in securing their supply chain.

The Power Combo for Multi-Cloud

A mix of different workloads like virtual machines and containers hosted in multiple clouds introduces complexity. With the powerful combination of Tanzu Mission Control and Tanzu Application Platform, companies can unlock the full potential of their platform teams and developers by reducing complexity while creating and using abstraction layers on top of their multi-cloud infrastructure.

VMware Explore US 2022 – VMware Projects and Day 2 Announcements

Last year at VMworld 2021, VMware mentioned and announced a lot of (new) projects they are working on. What happened to them and which new VMware projects have been mentioned this year at VMware Explore so far?

Project Ensemble – VMware Aria Hub

VMware unveiled their unified multi-cloud management portfolio called VMware Aria, which provides a set of end-to-end solutions for managing the cost, performance, configuration, and delivery of infrastructure and cloud native applications.

VMware Aria is anchored by VMware Aria Hub (formerly known as Project Ensemble), which provides centralized views and controls to manage the entire multi-cloud environment, and leverages VMware Aria Graph to provide a common definition of applications, resources, roles, and accounts.

VMware Aria Graph provides a single source of truth that is updated in near-real time. Other solutions on the market were designed in a slower moving era, primarily for change management processes and asset tracking. By contrast, VMware Aria Graph is designed expressly for cloud-native operations.

VMware Explore US 2022 Session: A Unified Cloud Management Control Plane – Update on Project Ensemble [CMB2210US]

Project Monterey – DPU-based Acceleration for NSX

Project Monterey was introduced last year as a technology preview; yesterday, VMware announced its GA version, called DPU-based Acceleration for NSX.

Project Arctic – vSphere+ and vSAN+

Project Arctic was introduced last year as a technology preview and was described as “the next step in the evolution of vSphere in a multi-cloud world”. What started with the idea of bringing VMware Cloud services closer to vSphere has evolved into an even more interesting and enterprise-ready offering called vSphere+ and vSAN+. It includes developer services that consist of the Tanzu Kubernetes Grid runtime, Tanzu Mission Control Essentials, and NSX Advanced Load Balancer Essentials. VMware is going to add more VMware Cloud add-on services in the future. Additionally, VMware even introduced VMware Cloud Foundation+.

Project Iris – Application Transformer for VMware Tanzu

VMware mentioned Project Iris very briefly last year at VMworld. In February 2022, Project Iris became generally available and has since been known as Application Transformer for VMware Tanzu.

Project Northstar

At VMware Explore on day 1, VMware introduced Project Northstar, which will provide customers a centralized cloud console that gives them instant access to networking and security services, such as network and security policy controls, Network Detection and Response (NDR), NSX Intelligence, Advanced Load Balancing (ALB), Web Application Firewall (WAF), and HCX. Project Northstar will be able to apply consistent networking and security policies across private cloud, hybrid cloud, and multi-cloud environments.

VMware Explore US 2022 Session: Multi-Cloud Networking and Security with NSX [NETB2154US]

Project Watch

At VMware Explore on day 1, VMware unveiled Project Watch, a new approach to multi-cloud networking and security that will provide advanced app-to-app policy controls to help with continuous risk and compliance assessment. In technology preview, Project Watch will help network security and compliance teams to continuously observe, assess, and dynamically mitigate risk and compliance problems in composite multi-cloud applications.

Project Trinidad

Also announced at VMware Explore day 1 and further explained at day 2, Project Trinidad extends VMware’s API security and analytics by deploying sensors on Kubernetes clusters and uses machine learning with business logic inference to detect anomalous behavior in east-west traffic between microservices.

Project Narrows

Project Narrows introduces a unique addition to Harbor, allowing end users to assess the security posture of Kubernetes clusters at runtime. Previously undetected images will be scanned at the time they are introduced to a cluster, so vulnerabilities can now be caught, images flagged, and workloads quarantined.

The dynamic scanning that Project Narrows adds to your Harbor-based software supply chain is critical. It allows greater awareness and control of your running workloads than the traditional method of simply updating and storing workloads.

VMware is open sourcing the initial capabilities of Project Narrows on GitHub as the Cloud Native Security Inspector (CNSI) Project.

VMware Explore US 2022 Session: Running App Workloads in a Trusted, Secure Kubernetes Platform [VIB1443USD]

Project Keswick

Also introduced on day 2, Project Keswick is about simplifying edge deployments at scale. It is an xLabs project from the Advanced Technology Group in VMware’s Office of the CTO.

A Keswick deployment is entirely automated and uses Git as a single source of truth for a declarative way to manage your infrastructure and applications through desired state configuration enabled by GitOps. This ensures the infrastructure and applications running at the edge are always exactly what they need to be.
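
The mechanic behind such a setup is a reconciliation loop: compare the desired state declared in Git with the state that is actually running, and apply the difference. Here is a purely illustrative Go sketch of that loop; it is not Project Keswick’s implementation, just the general GitOps idea:

```go
// Minimal sketch of the desired-state idea behind GitOps: a loop compares
// what Git says should exist with what actually runs, and converges the two.
// All types and functions here are illustrative stand-ins, not a real tool's API.
package main

import (
	"fmt"
	"time"
)

type state map[string]string // component name -> version

func desiredFromGit() state { // in a real system: read manifests from a Git repo
	return state{"edge-agent": "1.4.2", "metrics-shipper": "0.9.0"}
}

func actualFromCluster() state { // in a real system: query the runtime
	return state{"edge-agent": "1.4.1"}
}

func reconcile(desired, actual state) {
	for name, want := range desired {
		if have, ok := actual[name]; !ok || have != want {
			fmt.Printf("apply %s version %s (was %q)\n", name, want, have)
		}
	}
	for name := range actual {
		if _, ok := desired[name]; !ok {
			fmt.Printf("remove %s (not declared in Git)\n", name)
		}
	}
}

func main() {
	for i := 0; i < 3; i++ { // a real controller would run this loop continuously
		reconcile(desiredFromGit(), actualFromCluster())
		time.Sleep(time.Second)
	}
}
```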

VMware Explore US 2022 Session: Edge Computing: What’s Next? [VIB1457USD]

Project Newcastle

At VMworld 2021, VMware talked for the first time (I think) about cryptographic agility and even showed a short demo of a Post Quantum Cryptography (PQC) enabled Unified Access Gateway (using a proxy-based approach):

Diagram of an HAProxy with TLS Termination and Quantum-Safe Cipher Support as a reverse proxy to communicate with a quantum-safe web browser.

At VMware Explore 2022 day 2, VMware demonstrated what they believe to be the world’s first quantum-safe multi-cloud application!

VMware developed and presented Project Newcastle, a policy-based framework enabling and orchestrating cryptographic transition in modern applications.

Integrated with Tanzu Service Mesh, Project Newcastle gives users greater insight into the cryptography in their applications. But that’s not all — as a platform for cryptographic agility, Project Newcastle automates the process of reconfiguring an application’s cryptography to comply with user-defined policies and industry standards.

Closing Comment

Which VMware projects excite you the most? I’m definitely going with Project Ensemble (Aria Hub) and Project Newcastle!