Interclouds And The Future of Cloud Computing

I am finally taking the time to write this piece about interclouds, workload mobility, and application portability. Several of my engagements over the past four weeks repeatedly led to discussions about interclouds and workload mobility.

Cloud to Cloud Interoperability and Federation

Who would have thought back in 2012 that in 2022 we would have so many (public) cloud providers like AWS, Azure, Google Cloud, IBM Cloud, and Oracle Cloud?

Ten years ago, many people and companies were convinced that the future would consist of public cloud infrastructure only and that local, self-managed data centers would disappear.

This vision and perception of cloud computing has changed dramatically over the past few years. We see public cloud providers stretching their cloud services and infrastructure to large data centers or edge locations. It seems they have realized that the future is going to look different from what a lot of people anticipated back then.

I was not aware that the word “intercloud”, and the need for it, has apparently existed for a long time already. Let’s take David Bernstein’s presentation, which I found by googling “intercloud”, as an example:

This presentation is about avoiding the mistake of using proprietary protocols and cloud infrastructures that lead to silos and a non-interoperable architecture. He was part of the IEEE Intercloud Working Group (P2302), which was working on a draft standard for “Intercloud Interoperability and Federation (SIIF)” that mentioned the following:

Currently there are no implicit and transparent interoperability standards in place in order for disparate cloud computing environments to be able to seamlessly federate and interoperate amongst themselves. Proposed P2302 standards are a layered set of such protocols, called “Intercloud Protocols”, to solve the interoperability related challenges. The P2302 standards propose the overall design of decentralized, scalable, self-organizing federated “Intercloud” topology.

David Bernstein Intercloud

I do not know David Bernstein or the IEEE working group personally, but it would be great to hear what some of them think about current cloud computing architectures and how they envision the future of cloud computing over the next 5 or 10 years.

As you can see, the wish for an intercloud protocol, or an intercloud, has existed for a while. Let us quickly look at how others define the intercloud:

Cisco in 2008 (it seems that David Bernstein worked at Cisco at the time): Intercloud is a network of clouds that are linked with each other. This includes private, public, and hybrid clouds that come together to provide a seamless exchange of data.

Teradata: Intercloud is a cloud deployment model that links multiple public cloud services together as one holistic and actively orchestrated architecture. Its activities are coordinated across these clouds to move workloads automatically and intelligently (e.g., for data analytics), based on criteria like cost and performance characteristics.

The Future of Cloud Computing

I found this post on Twitter on May 19th, 2022:

Alvin Cheung Berkeley Intercloud

Alvin Cheung is an associate professor at Berkeley EECS and wrote the following in his Twitter comments:

we argue that cloud computing will evolve to a new form of inter-cloud operation: instead of storing data and running code on a single cloud provider, apps will run on an inter-operating set of cloud providers to leverage their specialized services / hw / geo etc, much like ISPs.

Alvin and his colleagues wrote a publication titled “A Berkeley View on the Future of Cloud Computing”, which mentions the following very early in the PDF:

We predict that this market, with the appropriate intermediation, could evolve into one with a far greater emphasis on compatibility, allowing customers to easily shift workloads between clouds.

[…] Instead, we argue that to achieve this goal of flexible workload placement, cloud computing will require intermediation, provided by systems we call intercloud brokers, so that individual customers do not have to make choices about which clouds to use for which workloads, but can instead rely on brokers to optimize their desired criteria (e.g., price, performance, and/or execution location).

We believe that the competitive forces unleashed by the existence of effective intercloud brokers will create a thriving market of cloud services with many of those services being offered by more than one cloud, and this will be sufficient to significantly increase workload portability.

Intercloud Broker

Organizations place their workloads in the cloud that makes the most sense for them. Depending on regulations, data classification, available cloud services, locations, or pricing, they decide which data or workload goes to which cloud.

The people from Berkeley do not necessarily promote a multi-cloud architecture; rather, they propose an intercloud broker that places your workload on the right cloud based on different factors. They see the intercloud as an abstraction layer with brokering services:

In my understanding, their idea points towards an intelligent and automated cloud management platform that decides where a specific workload and its data should be hosted, and that, for example, migrates the workload to another cloud that is cheaper than the current one.
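To make the broker idea more concrete, here is a minimal sketch of such a placement decision. This is purely illustrative: the providers, prices, and constraints below are made up, and no real intercloud broker exposes this API:

```python
# Minimal sketch of an "intercloud broker" placement decision.
# All provider data below is hypothetical and for illustration only.

from dataclasses import dataclass

@dataclass
class CloudOffer:
    provider: str
    region: str
    price_per_hour: float   # compute cost in USD
    latency_ms: float       # measured latency to the workload's users

def place_workload(offers, required_region=None, max_latency_ms=None):
    """Pick the cheapest cloud that satisfies the workload's constraints
    (e.g., data residency via region, performance via latency)."""
    candidates = [
        o for o in offers
        if (required_region is None or o.region == required_region)
        and (max_latency_ms is None or o.latency_ms <= max_latency_ms)
    ]
    if not candidates:
        raise ValueError("no cloud satisfies the placement constraints")
    return min(candidates, key=lambda o: o.price_per_hour)

offers = [
    CloudOffer("cloud-a", "eu-central", 0.12, 25.0),
    CloudOffer("cloud-b", "us-east", 0.08, 110.0),
    CloudOffer("cloud-c", "eu-central", 0.10, 40.0),
]

# Residency-constrained workload: must stay in eu-central.
best = place_workload(offers, required_region="eu-central")
print(best.provider)  # cloud-c, the cheapest offer in eu-central
```

A real broker would of course also weigh data gravity, migration cost, and attached policies, but at its core the decision is a constrained optimization like this one.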

Cloud Native Technologies for Multi-Cloud

Companies are modernizing or rebuilding their legacy applications, or creating new modern applications, using cloud native technologies. Modern applications are collections of microservices, which are lightweight, fault tolerant, and small. These microservices can run in containers deployed on a private or public cloud.

This means that a modern application is something that can adapt to any environment and perform equally well.

The challenge today is that we have modern architectures, new technologies/services, and multiple clouds running with different technology stacks. And we have Kubernetes as an orchestration framework, which is available in different flavors (DIY or offerings like Tanzu TKG, AKS, EKS, GKE, etc.).

Then there is the Cloud Native Computing Foundation (CNCF) and the open source community, which embrace the principle of “open” software that is created and maintained by a community.

It is about building applications and services that can run on any infrastructure, which also means avoiding vendor or cloud lock-in.

Challenges of Interoperability and Multiple Clouds

If you discuss multi-cloud and infrastructure independent applications, you mostly end up with an endless list of questions like:

  • How can we achieve true workload mobility or application portability?
  • How do we deal with the different technology formats and the “language” (API) of each cloud?
  • How can we standardize and automate our deployments?
  • Is latency between clouds a problem?
  • What about my stateful data?
  • How can we provide consistent networking and security?
  • What about identity federation and RBAC?
  • Is the performance of each cloud really the same?
  • How should we encrypt traffic between services in multiple clouds?
  • What about monitoring and observability?

Workload Mobility and Application Portability without an Intercloud

VMware has a different view of, and approach to, how workload mobility and application portability can be achieved. Their value add and goal are the same, but with a different strategy of abstracting clouds.

VMware is not building an intercloud, but they provide customers a technology stack (compute, storage, networking), or a cloud operating system if you will, that can run on top of every major public cloud provider like AWS, Azure, Google Cloud, IBM Cloud, Oracle Cloud, and Alibaba Cloud.

VMware Workload Mobility

This consistent infrastructure makes it extremely easy to migrate virtual machines and legacy applications to any location.

What about modern applications and Kubernetes? What about developers who do not care about (cloud) infrastructures?

Project Cascade

At VMworld 2021, VMware announced the technology preview of “Project Cascade” which will provide a unified Kubernetes interface for both on-demand infrastructure (IaaS) and containers (CaaS) across VMware Cloud – available through an open command line interface (CLI), APIs, or a GUI dashboard.

The idea is to provide customers a converged IaaS and CaaS consumption service across any cloud, exposed through different Kubernetes APIs.

VMware Project Cascade

I heard the statement “Kubernetes is complex and hard” many times at KubeCon Europe 2022, and Project Cascade clearly provides another abstraction layer for VM and container orchestration that should make the lives of developers and operators less complex.

Project Ensemble

Another project in tech preview since last year’s VMworld is “Project Ensemble”. It is a multi-cloud management platform that provides an app-centric self-service portal with predictive support.

Project Ensemble will deliver a unified consumption surface that meets the unique needs of the cloud administrator and SRE alike. From an architectural perspective, this means creating a platform designed for programmatic consumption and a firm “API First” approach.

I can imagine that it will be a service that leverages artificial intelligence and machine learning to simplify troubleshooting, and that will in the future be capable of intelligently placing or migrating your workloads to the most appropriate cloud (for example, based on cost), including all attached networking and security policies.

Conclusion

I believe that VMware is on the right path by giving customers the option to build a cloud-agnostic infrastructure with the necessary abstraction layers for IaaS and CaaS, including the cloud management platform. By providing a common way, or standard, to run virtual machines and containers in any cloud, I am convinced VMware is becoming the de facto infrastructure standard for many enterprises.

VMware Vision and Strategy 2022

By providing a consistent cloud infrastructure and a consistent developer model and experience, VMware bridges the gap between the developers and operators, without the need for an intercloud or intercloud protocol. That is the future of cloud computing.


Current vSphere Subscription Licensing Options

VMware is giving their customers more and more options to move towards a subscription-based licensing model. In general, companies are moving away from large pay-up-front deals and replacing them with recurring subscriptions. Vendors like VMware are investing heavily to provide the structures, processes, and capabilities to offer subscription licenses (and SaaS services). Organizations see the benefits of subscription licenses, and this blog post describes the current options if you want to move from vSphere perpetual licenses to a vSphere subscription.

vSphere Advantage – vSphere Subscription Service

Since December 2021, VMware offers vSphere Advantage in limited regions (aka Initial Availability).

vSphere Advantage gives you the flexibility to manage and operate your on-premises vSphere infrastructure while leveraging several VMware Cloud capabilities:

  • Transition from vSphere perpetual to vSphere subscription-based consumption for your vSphere deployments
  • Complete view of the globally distributed on-premises vSphere inventory
  • VMware-managed vCenter Servers (aka Project Arctic, not GA yet)

From a centralized VMware Cloud Console you can monitor events, alerts, capacity utilization, and the security posture of your vSphere infrastructure.

vSphere Advantage VMware Cloud Console

It is now also possible to plan the replacement of your existing vSphere license keys with vSphere Advantage, which enables keyless entitlement. This keyless entitlement makes it very easy for customers to stay compliant at all times and to understand their current subscription usage.

vSphere Advantage VMware Cloud Console VMs

To start using vSphere Advantage, you must enable communication between your on-premises vCenter Server and VMware Cloud by using a vCenter Cloud Gateway. This requires only an outbound connection (HTTPS, port 443); no VPN is needed.
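If you want to verify that such an outbound connection is possible from your environment, a generic TCP reachability check is enough. The snippet below is my own sketch, not a VMware tool, and the endpoint is a placeholder for whatever address your vCenter Cloud Gateway actually needs to reach:

```python
# Generic check: can this host open an outbound TCP connection on port 443?
# The endpoint below is a placeholder; replace it with the address your
# vCenter Cloud Gateway needs to reach in your environment.

import socket

def can_reach(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if can_reach("console.cloud.vmware.com"):
    print("outbound 443 looks open")
else:
    print("outbound 443 blocked; check firewall/proxy rules")
```

Note that this only proves basic TCP reachability; TLS inspection or proxy rules in your network can still break the gateway connection.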

 

Current vCenter Server Requirements:

  • The vCenter Server version must be 7.0 Update 3a or later
  • Configure the vCenter Server with a backup and restore mechanism
  • Dedicate at least three ESXi hosts for the vCenter Server. (Recommended)
  • The vCenter Server must be self-managed. It must manage its own ESXi hosts and virtual machines

Unsupported vCenter Configurations:

  • Ensure that the vCenter Server is not configured in High Availability mode
  • If the vCenter Server is configured in Enhanced Linked Mode (ELM), unlink it from ELM. See Repoint a vCenter Server Node to a New Domain. ELM is no longer required because with vSphere Advantage you can monitor your entire vSphere inventory in a single pane of glass.
  • Ensure that the vCenter Server is not configured with NSX for vSphere, vRealize Operations Manager, Site Recovery Manager, vCloud Suite, or vSAN.

Project Arctic – VMware-Managed vCenter (Roadmap)

VMware introduced Project Arctic at VMworld 2021; now it’s called vSphere Advantage. While a hybrid cloud operating model for vSphere is becoming the default now, it’s not yet possible to let VMware manage your vCenter Servers. We can expect this capability to be shipped and made generally available sometime in 2022.

VMware Edge Compute Stack

Edge Compute Stack (ECS) is a purpose-built stack that is available in three different editions (information based on initial availability from VMworld 2021):

VMware Edge Compute Stack Editions

As you can see, each VMware Edge Compute Stack edition includes the vSphere Enterprise+ hypervisor. Software-defined storage with vSAN is optional, but Tanzu for running containers is always part of each edition.

Note: The Edge Compute Stack includes vSphere subscription licenses.

Other Options

If you are running the VMware Cloud Foundation (VCF) stack and are looking for a managed service offering that includes subscription-based licensing, have a look at the following alternatives:

As you can see, you can start small with vSphere Advantage and grow big with VMware Cloud Universal as the final destination.

Multi-Cloud and Sovereign Cloud – Deploy the Right Data to the Right Cloud

According to Gartner, regulated industry customers (such as finance and healthcare) and governments are looking for digital borders. Companies in these sectors want to reduce vendor lock-in and single points of failure with their cloud providers, whose data centers are sometimes outside their country (e.g., a Switzerland-based customer with an AWS data center in Frankfurt).

The market for cloud technology and services is currently dominated by US and Asian cloud providers and many (European) companies store their data in these regions. There are European regions and data centers, but the geopolitical and legal challenges, concerns about data control, industry compliance and sovereignty are driving the creation of new national clouds.

That is why Gartner sees sovereign clouds as one of the emerging technologies, currently sitting at the start of the hype cycle published in August 2021:

These are the emerging technologies in the 2021 Hype Cycle | IT-Markt

Image Source: https://www.it-markt.ch/news/2021-08-27/das-sind-die-aufstrebenden-technologien-im-hype-cycle-2021

Use Case 1 – Swiss Federal Administration

As a first use case, I would mention the Swiss federal administration, which doesn’t see the need for an independent technical infrastructure under public law.

In June 2021 they published a statement announcing that the following cloud providers had been selected to become part of the federal administration’s initial multi-cloud architecture:

  • Amazon Web Services (AWS)
  • IBM
  • Microsoft
  • Oracle
  • Alibaba

There are several reasons (pricing, market share, local data center availability) that led to the decision to build a multi-cloud architecture with these cloud providers. But it was interesting to read that the government did an assessment and concluded that no technically independent infrastructure is needed – no need for a local sovereign cloud.

This means that they want to keep their existing data centers to provide infrastructure and data sovereignty.

Interestingly, the Swiss confederation is exploring initiatives for secure and trustworthy data infrastructure for Europe and is examining participation in GAIA-X.

Use Case 2 – Current Sovereign Cloud Providers

There are other examples where organizations and governments saw the need for a sovereign cloud. Having a public cloud provider’s data center in the same country does not necessarily make it a sovereign cloud per se. Hyperscale clouds often rely on non-domestic resources to maintain their data centers or provide customer support.

Governments and regulated industries say that you need domestic resources to provide a true sovereign cloud.

A good example here is the UK government, which has chosen the provider UKCloud, which delivers a consistent experience spanning the edge, private cloud, and sovereign cloud.

Another VMware sovereign cloud provider is AUCloud, which provides IaaS to the Australian government, defense, defense industries, and Critical National Industry (CNI) communities.

The third example I would like to highlight is Saudi Telecom Company (STC), which brings sovereign cloud services to Saudi Arabia.

What do UKCloud, AUCloud, and STC have in common? They all joined the fairly new VMware Sovereign Cloud initiative and built their sovereign clouds on VMware technology.

Use Case 3 – Cloud Act

Another motivation for a sovereign cloud could be the Cloud Act, which is a U.S. law that gives American authorities unrestricted access to the data of American IT cloud providers. It does not matter where the data is effectively stored. In the event of a criminal prosecution, the authorities have a free hand and do not even have to notify the data owners.

What does this mean for cloud users? Because of the Cloud Act, they cannot be sure whether, when, and to what extent their data, or the data of their customers, will be read by foreign authorities.

Use Case 4 – GAIA-X

Let me quote the official explanation of GAIA-X:

The architecture of Gaia-X is based on the principle of decentralization. Gaia-X is the result of many individual data owners (users) and technology players (providers) – all adopting a common standard of rules and control mechanisms – the Gaia-X standard.

Together, we are developing a new concept of data infrastructure ecosystem, based on the values of openness, transparency, sovereignty, and interoperability, to enable trust. What emerges is not a new cloud physical infrastructure, but a software federation system that can connect several cloud service providers and data owners together to ensure data exchange in a trusted environment and boost the creation of new common data spaces to create digital economy.

Gaia-X aims to mitigate Europe’s dependency on non-European providers, and there seems to be no pre-defined architecture or preferred vendor when it comes to the underlying cloud platform GAIA-X sits on top of.

While one would believe that a sovereign cloud is mandatory for GAIA-X, it looks more like a cloud-agnostic data exchange platform hosted by European providers and customers.

I am curious how providers build, operate and maintain a sovereign cloud stack based on open-source software.

How real is the need for Sovereign Cloud?

If a company or government wants to keep, extend, and maintain their own local data centers, this is still a valid option of course. But the above examples showed that the need for sovereign clouds exists and that the global interest seems to be growing.

What is the VMware Sovereign Cloud Initiative?

In October 2021, VMware announced the VMware Sovereign Cloud initiative, in which they partner with cloud service providers to deliver sovereign cloud infrastructure, with cloud services on top, to customers in regulated industries.

To become a so-called VMware Sovereign Cloud Provider, partners must go through an assessment and meet specific requirements (framework) to show their capability to provide a sovereign cloud infrastructure.

VMware defines a sovereign cloud as one that:

  • Protects and unlocks the value of critical data (e.g., national data, corporate data, and personal data) for both private and public sector organizations
  • Delivers a national capability for the digital economy
  • Secures data with audited security controls
  • Ensures compliance with data privacy laws
  • Improves control of data by providing both data residency and data sovereignty with full jurisdictional control

VMware aims to help regulated industry and government customers to execute their cloud strategies by connecting them to VMware Sovereign Cloud Providers (like UKCloud, AUcloud, STC, Tietoevry, ThinkOn or OVHcloud).

Sovereign Cloud Providers in Switzerland

Currently, there is no official VMware Sovereign Cloud Provider in Switzerland. We have a few strong VMware cloud provider partners as part of the VMware Cloud Provider Program (VCPP):

Let us come back to use case 1 with the Swiss federal administration. They are building a multi-cloud, and in Switzerland there would be a potential pool of at least 10 cloud service providers that could become official VMware Sovereign Cloud Providers.

VMware Sovereign Cloud Borders 

Image Source: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/docs/vmw-sovereign-cloud-solution-brief-customer.pdf

There are other Swiss providers who are building a sovereign cloud based on open-source technologies like OpenStack.

Hyperscalers like Microsoft or Google need to partner with local providers if they want to build a sovereign cloud and deliver services.

VMware already has 4,300+ partners, with strategic partnerships and the same technology stack, in 120+ countries, and some of them are already sovereign cloud providers, as mentioned before.

VMware Sovereign Cloud initiative

Image Source: https://blogs.vmware.com/cloud/2021/10/06/vmware-sovereign-cloud/

What are the biggest challenges with a multi-cloud and a sovereign cloud infrastructure?

What do you think are the biggest challenges of an organization that builds a multi-cloud with different public cloud providers and sovereign clouds?

Let me list a few questions here:

  • How can I easily migrate my workloads to the public or sovereign cloud?
  • How long does it take to migrate my applications?
  • Which cloud is the right one for a specific workload?
  • Do I need to refactor some of my applications?
  • How can I consistently manage and operate 5 different public/sovereign cloud providers?
  • What if one of my cloud providers is no longer strategic? How can I build a cloud exit strategy?
  • How do I implement and maintain security?
  • What if I want to migrate workloads back from a public cloud to an on-premises (sovereign) cloud?
  • Which Kubernetes am I going to use in all these different clouds?
  • How do I manage and monitor all these different Kubernetes clusters, networking and security policies, create secure application communication between clouds and so on?
  • How do I control costs?

These are just a small number of questions, but I think it would take your organization or your cloud platform team a while to come up with a solution.

What is the VMware approach? Let me list some other articles of mine that will help you better understand the VMware multi-cloud approach:

Conclusion

Public cloud providers build local data centers and provide data residency. Sovereign clouds provide data sovereignty. Resident data may be accessed by a foreign authority while data sovereignty refers to data being subject to privacy laws and governance structures within the nation where that data is collected.

Controlling the location of, and access to, data in the cloud has become an important task for CIOs and CISOs. I personally believe that sovereign clouds are not something that will become important in 2 or 3 years; they are already very important and relevant, and we can expect growth in this area in the coming months.

My conclusion is that sovereign clouds and public clouds are not competitors; they complement each other.


DevSecOps with VMware Tanzu – Intrinsic Security for a Modern Application Supply Chain

Intrinsic security is something we have heard a lot about from VMware in the past. It was mostly used to describe the strategy and capabilities behind the Carbon Black portfolio (EDR), which is complemented by the advanced threat prevention from NSX (NDR); together they form the VMware XDR vision.

I see similarities between intrinsic security and the workouts I do in the gym. My goal is to build more strength and power, and to become healthier in general. For additional muscle gain and better time efficiency, I have chosen compound exercises. I am not a fan of isolation exercises, which target a single muscle group. Our body has a lot of joints for different movements, and I think it’s only natural to use multiple muscle groups and joints during a specific exercise.

When you perform compound exercises, you involve different muscles to complete the movement, which improves your intermuscular coordination. In addition, as anyone will tell you, these exercises improve your core strength and let your body work as a single unit.

While doing weight training, it is very important to use the proper technique and equipment. Otherwise, the risk of injuries and vulnerabilities increases.

This is what intrinsic security means for me! And I think this is very much relevant to understand when talking about DevSecOps.

Understanding DevSecOps

For VMware, talking to developers and about DevOps started in 2019, when they presented VMware Tanzu for the first time at VMworld. The ideas and innovation behind the name “Tanzu” are meant to bring developers and IT operators closer together for collaboration.

DevOps is the combination of different practices, tools, and philosophies that help an organization deliver applications and services at a higher pace. In the example above, it would mean that application developers and operations teams no longer work isolated in silos; they become one team, a single unit. But technology plays a very important role in supporting the success of this new mindset and culture!

DevOps is about efficiency and the automation of manual tasks and processes. You want to become fast, flexible, and efficient. When you put security at the center of this, we start talking about DevSecOps. You want to know if one of your muscles or body parts becomes weak (defective) or vulnerable.

Tanzu DevSecOps Flow

Depending on where you are right now on this application modernization journey, doing DevSecOps could mean a huge cultural and fundamental change to how you develop applications and do IT operations.

For me, DevSecOps is not about bringing together security tools from different teams and technologies. If DevOps and DevSecOps mean that you must change your mindset, then it is maybe also time to consider the importance of new technology choices.

If DevSecOps means that you put security in the center of a DevOps- or container-centric environment, then security must become an intrinsic part of a modern application supply chain.

The VMware Tanzu portfolio has a lot of products and services to bring developers, operations and security teams together.

Where do we start? We need to “shift left”, which means integrating security early in the application lifecycle.

Code – Spring Framework

Before you can deliver an application to your customer, you need to develop it: you need to code. Application frameworks are a very effective approach to developing more secure and optimized applications.

Frameworks help developers write code faster and more efficiently. Not only can a framework save your developers a lot of coding effort, but it also comes with pre-defined templates that incorporate best practices and help you simplify the overall application architecture.

Why is this important? To achieve better security for a cloud native application, it makes sense to standardize and automate. Automation is key for security. Standardization makes it easier to understand and reuse code. You can write all the code yourself, but chances are high that someone else has already done parts of your work. Less variability reduces complexity and therefore enhances security.

There is the open-source Spring Framework for example, which uses Java as the underlying language (or .NET for Steeltoe). Both projects are managed by VMware and millions of developers use them.

Tanzu Spring Steeltoe

What happens next? You would now run your continuous integration (CI) process (integration tests, unit tests) and then you are ready to package or build your application.

Build – Tanzu Build Service (TBS)

So, your code is now ready for release. If you want to deploy your application to a Kubernetes environment, you need a secure, portable, and reproducible build that can be checked for security vulnerabilities, and you need an easy way to patch those vulnerabilities.

How are you going to build the container image your application is packaged into? A lot of customers and vendors use a Dockerfile-based approach.

VMware recommends Tanzu Build Service (TBS), which uses Tanzu Buildpacks, based on the open-source Cloud Native Buildpacks CNCF project, to turn application source code into container images. So, no Dockerfiles.

TBS is constantly looking for changes in your source code and automatically builds an image based on them. This means that with TBS you don’t need advanced knowledge of container packaging formats, or of how to optimally construct a container creation script for a given programming language.

Tanzu Build Service knows all the images you have built and understands all the dependencies and components you have used. If something changes, your image is going to be rebuilt automatically and then stored in a registry of your choice. More about the registry in a second.

Tanzu Build Service

What happens if a vulnerability comes out and one of your libraries, operating systems, or components is affected? TBS would patch the vulnerability, and all affected downstream container images would be rebuilt automatically.

Imagine how happy your CISO would be about this way of building secure container images! 🙂
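Conceptually, this automatic patching behaves like a walk over a dependency graph: when a component is patched, every image that transitively depends on it gets rebuilt. The sketch below only illustrates that idea; it is not how Tanzu Build Service is actually implemented, and the image names are invented:

```python
# Conceptual sketch: rebuild all images downstream of a patched component.
# Illustration only -- not Tanzu Build Service's real implementation.

depends_on = {
    # image/component -> components it is built from (hypothetical graph)
    "base-os": [],
    "java-runtime": ["base-os"],
    "billing-app": ["java-runtime"],
    "web-shop": ["java-runtime"],
    "python-runtime": ["base-os"],
    "report-job": ["python-runtime"],
}

def images_to_rebuild(patched: str) -> set:
    """Return every image that (transitively) depends on `patched`."""
    affected = set()
    changed = True
    while changed:
        changed = False
        for image, deps in depends_on.items():
            if image in affected:
                continue
            if patched in deps or affected.intersection(deps):
                affected.add(image)
                changed = True
    return affected

# A CVE lands in the base OS layer: everything downstream gets rebuilt.
print(sorted(images_to_rebuild("base-os")))
# ['billing-app', 'java-runtime', 'python-runtime', 'report-job', 'web-shop']
```

The value of a build service is that it maintains this graph for you: you never have to work out by hand which images contain the vulnerable layer.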

Build – Harbor

We have now pushed our container image to a container repository, a so-called registry. VMware uses Harbor (an open-source cloud native registry created by VMware and donated to the CNCF in 2018) as enterprise-grade storage for container images. Additionally, Harbor provides static analysis of image vulnerabilities through open-source projects like Trivy and Clair.

Tanzu Build Service Harbor

We have now developed our applications and stored our packaged images in our Harbor registry. What else do we need?

Build – VMware Application Catalog (VAC)

Developers are not going to build everything themselves. Other services like databases or caching are needed to build the application as well, and there is a lot of well-known, pre-packaged open-source software freely available online. This brings additional security risks and allows malicious actors to publish container images that contain vulnerabilities.

How can you mitigate this risk and reduce the chance for a critical application outage or breach?

In 2019, VMware acquired Bitnami, which delivers and maintains a catalog of 130+ pre-packaged and ready-to-use open-source application components that are “continuously maintained and verifiably tested for use in production environments”.

VMware Application Catalog (VAC, formerly known as Tanzu Application Catalog) is a SaaS offering that provides your organization with a customizable private collection of open-source software and services, which can automatically be placed in your private container image registry, in this case your Harbor registry.

Example apps that are supported today:

| Language Runtimes | Databases  | App Components | Developer Tools | Business Apps |
|-------------------|------------|----------------|-----------------|---------------|
| Node.js           | MySQL      | Kafka          | Artifactory     | WordPress     |
| Python            | PostgreSQL | RabbitMQ       | Jenkins         | Drupal        |
| Ruby              | MariaDB    | TensorFlow     | Redmine         | Magento       |
| Java              | MongoDB    | Elasticsearch  | Harbor          | Moodle        |

How does it work?

VMware Application Catalog - How it works

There are two product features that I would like to highlight:

  • Build-time CVE scan reports for container images using Trivy
  • Build-time Antivirus scans for container images using ClamAV

Your application, built with Tanzu Build Service and VMware Application Catalog, is now complete and stored in your Harbor registry. And since you use VAC, you also have your own “marketplace” of applications, curated by a (security) team in your organization.

If you want to see VAC in action, have a look at this YouTube video.

Note: Yes, VAC is a SaaS-hosted application, and you may have concerns because you are a public sector/federal customer. That’s no problem. Consider VAC your trusted source from which you can copy things. No data is stored in the public cloud, nor does anything run there. Download your packages from this trusted repository over to your air-gapped environment.

Run – Tanzu Kubernetes Grid (TKG)

Your application is ready to be deployed, and the next step in your pipeline is “continuous deployment“. We can finally deploy our applications to a Kubernetes cluster.

Tanzu Kubernetes Grid or TKG is VMware’s own consistent and conformant Kubernetes distribution that can run in any cloud. VMware’s strategy is about running the same Kubernetes dial tone across data centers and public cloud, which enables a consistent and secure experience for your developers.

TKG has a tight integration with vSphere called “vSphere with Tanzu”. Since TKG is an enterprise-ready Kubernetes for multi-cloud infrastructure, it can also run in all major public clouds.

If consistent automation is important to you and you want to run Kubernetes in an air-gapped environment, where there is no AWS, Azure or any other major public cloud provider, then a consistent Kubernetes distribution like TKG adds value to your infrastructure.
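The “continuous deployment” step ultimately boils down to applying Kubernetes manifests to a TKG (or any other conformant) cluster. A minimal sketch, assuming the image built earlier was pushed to a hypothetical Harbor registry:

```yaml
# Standard Kubernetes Deployment pulling the image from a Harbor registry.
# Registry host, repository and tag are hypothetical examples.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: harbor.example.com/apps/demo-app:1.0.0
          ports:
            - containerPort: 8080
```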

Manage/Operate – Tanzu Mission Control (TMC)

How do we manage these applications on any Kubernetes cluster (VMware TKG, Amazon EKS, Microsoft AKS, Google GKE), that can run in any cloud?

Some organizations started with TKG and others already started with managed Kubernetes offerings like EKS, AKS or GKE. That’s not a problem. The question here is how you deploy, manage, operate, and secure all these different clusters.

VMware’s solution for that is Tanzu Mission Control, a SaaS-based tool hosted by VMware and the first offering I’m going to cover that is part of the global Tanzu control plane. TMC is a solution that makes your multi-cloud and multi-cluster Kubernetes management much easier.

With TMC you’ll get:

  • Centralized Cluster Lifecycle Management. TMC enables automated provisioning and lifecycle management of TKG clusters across any cloud. It provides centralized provisioning, scaling, upgrading and deletion functions for your Kubernetes clusters. Tanzu Mission Control also allows you to attach any CNCF-conformant Kubernetes cluster (K8s on-prem, K8s in public cloud, TKG, EKS, AKS, GKE, OpenShift) to the platform for management, visibility and analytics purposes. I would expect that we will be able to use TMC in the future to lifecycle-manage offerings like EKS, AKS or GKE as well.
  • Centralized Policy Management. TMC has a very powerful policy engine to apply consistent policies across clusters and clouds. You can create security, access, network, quota, registry, and custom policies (Open Policy Agent framework).
  • Identity and Access Management. Another important feature you don’t want to miss with DevSecOps in mind is centralized authentication and authorization, and identity federation from multiple sources like AD, LDAP and SAML. Make sure you give the right people or project teams the right access to the right resources.
  • Cluster Inspection. There are two types of inspections that you can run against your Kubernetes clusters. TMC leverages the built-in open-source project Sonobuoy to make sure your clusters are configured in conformance with Cloud Native Computing Foundation (CNCF) standards. Tanzu Mission Control provides CIS benchmark inspection as another option.

Tanzu Mission Control

Tanzu Mission Control integrates with other Tanzu products like Tanzu Observability and Tanzu Service Mesh, which I’m covering later.

Connect – Antrea

VMware Tanzu uses Antrea as the default container network interface (CNI) and Kubernetes NetworkPolicy to provide network connectivity and security for your pods. Antrea is an open-source project with active contributors from Intel, Nvidia/Mellanox and VMware, and it supports multiple operating systems and managed Kubernetes offerings like EKS, AKS or GKE!

Antrea uses Open vSwitch (OvS) as the networking data plane in every Kubernetes node. OvS is a high-performance and programmable virtual switch that supports not only Linux but also Windows. VMware is working to reach feature parity between the two operating systems, and support for ARM hosts in addition to x86 hosts is in the works as well.

Antrea creates overlay networks using VXLAN or Geneve for encapsulation and encrypts node-to-node communication if needed.
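Because Antrea implements the standard Kubernetes NetworkPolicy API, pod-level segmentation works the same way as on any conformant CNI. A minimal sketch (the labels and ports are hypothetical examples):

```yaml
# Allow only pods labeled app=frontend to reach the backend pods on TCP 8080;
# all other ingress traffic to the backend pods is dropped.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```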

Connect & Secure – NSX Advanced Load Balancer

Ingress is a very important Kubernetes concept and lets you configure how an application can or should be accessed. It is a set of routing rules that describe how traffic is routed to an application inside a Kubernetes cluster. So, getting an application up and running is only half of the story. The application still needs a way for users to access it. If you would like to know more about ingress, I can recommend this short introduction video.
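Such a set of routing rules is expressed as a standard Ingress resource. A minimal sketch (the hostname, service name and ingress class are hypothetical and depend on your ingress controller):

```yaml
# Route HTTP traffic for demo.example.com to the demo-app Service on port 80.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app
spec:
  ingressClassName: avi        # e.g. "avi" or "contour", depending on your controller (assumed)
  rules:
    - host: demo.example.com   # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80
```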

While Contour is a great open-source project, VMware recommends Avi (aka NSX Advanced Load Balancer), which provides many more enterprise-grade features like L4 load balancing, L7 ingress, security/WAF, GSLB and analytics. If stability, enterprise support, resiliency, automation, elasticity and analytics are important to you, then Avi Enterprise, a true software-defined multi-cloud application delivery controller, is definitely the better fit.

 

Secure – Tanzu Service Mesh (TSM)

Let’s take a step back and recap what we have achieved so far. We have a standardized and automated application supply chain with signed container images that can be deployed to any conformant Kubernetes cluster. We can access the application from the outside, and pod-to-pod communication works, so applications can talk to each other. So far, so good.

Is there maybe another way to stitch these services together or “offload” security from the containers? What if I have microservices or applications running in different clouds, that need to securely communicate with each other?

A lot of vendors including VMware realized that the network is the fabric that brings microservices together, which in the end form the application. With modernized or partially modernized apps, different Kubernetes offerings and a multi-cloud environment, we will find the reality of hybrid applications which sometimes run in multiple clouds.

This is the moment when you need to think about the connectivity and communication between your app’s microservices. Today, many Kubernetes users do that by implementing a service mesh and Istio is most probably the most used open-source project platform for that.

The thing with a service mesh is that, while everyone agrees it sounds great, it brings new challenges of its own. The installation and configuration of Istio is not easy and takes time. Besides that, an Istio installation (and therefore its data plane) is typically tied to a single Kubernetes cluster, and organizations usually prefer to keep their Kubernetes clusters independent from each other. This leaves us with security and policies tied to a single Kubernetes cluster or cloud vendor, which creates silos.

Tanzu Service Mesh, built on VMware NSX, is an offering that delivers an enterprise-grade service mesh, built on top of a VMware-administered Istio distribution.

The big difference and the value that comes with Tanzu Service Mesh (TSM) is its ability to support cross-cluster and cross-cloud use cases via Global Namespaces.

Global Namespaces

A Global Namespace is a concept unique to Tanzu Service Mesh; it connects the resources and workloads that form an application into a virtual unit. Each GNS is an isolated domain that provides automatic service discovery and manages the following functions for everything that is part of it, no matter where it is located:

  • Identity. Each global namespace has its own certificate authority (CA) that provisions identities for the resources inside that global namespace.
  • Discovery (DNS). The global namespace controls how one resource can locate another and provides a registry.
  • Connectivity. The global namespace defines how communication can be established between resources and how traffic within the global namespace and external to the global namespace is routed between resources.
  • Security. The global namespace manages security for its resources. In particular, the global namespace can enforce that all traffic between the resources is encrypted using Mutual Transport Layer Security authentication (mTLS).
  • Observability. Tanzu Service Mesh aggregates telemetry data, such as metrics for services, clusters, and nodes, inside the global namespace.
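Tanzu Service Mesh manages the mTLS enforcement for you across clusters. In plain Istio, the equivalent of requiring encrypted traffic for all workloads in a namespace would look roughly like this (a sketch of the underlying Istio mechanism, not TSM’s internal format; the namespace is hypothetical):

```yaml
# Reject any plain-text traffic between workloads in the "payments" namespace;
# all pod-to-pod communication must use mutual TLS.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: payments   # hypothetical namespace
spec:
  mtls:
    mode: STRICT
```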

Monitor – Tanzu Observability (TO)

Another important part of DevSecOps with VMware Tanzu is observability. What happens if something goes wrong? What are you doing when an application no longer works as expected? How do you troubleshoot a distributed application, split into microservices, that potentially runs in multiple clouds?

Imagine an application split into smaller services, each running in a pod, which could be running in a virtual machine on a specific host in your on-premises data center, at the edge, or somewhere in the public cloud.

You need a tool that supports the architecture of a modern application. You need a solution that understands and visualizes cloud native applications.

That’s where VMware suggests Tanzu Observability, which provides observability and deep visibility across your DevSecOps environment.

Tanzu Observability

Tanzu Observability has an integration with Tanzu Mission Control, which has the capability then to install the Wavefront Kubernetes collector on your Kubernetes clusters. The name “Wavefront” comes from the company Wavefront, which VMware acquired in 2017.

Since Tanzu Observability is only offered as a SaaS version, I would like to highlight that it is “secure by design” according to VMware:

  • Isolation of customer data
  • User & Service Account Authentication (SSO, LDAP, SAML)
  • RBAC & Authorization
  • Data encryption at rest and in transit
  • Data at rest is managed by AWS S3 (protected by KMS)
  • Certifications like ISO 27001/27017/27018 or SOC 2 Type 1

Summary – Tanzu Portfolio Capabilities

The container build and deploy process consists of the Spring runtime, Tanzu Application Catalog and Tanzu Build Service.

The global control plane (SaaS) is formed by Tanzu Mission Control, Tanzu Service Mesh and Tanzu Observability.

The networking layer consists of NSX Advanced Load Balancer for ingress & load balancing and uses Antrea for container networking.

The foundation of this architecture is built on VMware’s Kubernetes runtime called Tanzu Kubernetes Grid.

Tanzu Advanced Capabilities

Note: There are other components like Application Transformer or Tanzu SQL (part of Tanzu Data Services), which I haven’t covered in this article.

Secure – Carbon Black Cloud Container

Another solution that might be of interest to you is Carbon Black Container. CB Container also provides the visibility and control that DevSecOps teams need to secure Kubernetes clusters and the applications they deploy on top of them.

This solution provides a container vulnerability & risk dashboard, image scanning, compliance policy scanning, CI/CD integration and integration with Harbor, and it supports any upstream Kubernetes distribution like TKG, EKS, AKS, GKE or OpenShift.

Conclusion

DevSecOps with VMware Tanzu helps you simplify and secure the whole container and application lifecycle. VMware has made some strategic acquisitions (Heptio, Pivotal, Bitnami, Wavefront, Octarine, Avi Networks, Carbon Black) in the past to become a major player in the world of containerization, Kubernetes and application modernization.

I personally believe that VMware’s approach and Tanzu portfolio have a very strong position in the market. Their modular approach and the inclusion of open-source projects is a big differentiator. Tanzu is not just about Kubernetes, it’s about building, securing and managing the applications.

If you have a strong security focus, VMware can cover all the layers up from the hypervisor to the applications that can be deployed in any cloud. That’s the strength and unique value of VMware: A complete and diverse portfolio with products, that provide even more value when combined together.

Don’t forget, that VMware is number 1 when it comes to data center infrastructures and most of the customer workloads are still running on-premises. That’s why I believe that VMware and their Tanzu portfolio are very well positioned.

In case you missed the announcements a few weeks ago, check out Tanzu Application Platform and Tanzu for Kubernetes Operations, which meet the needs of all those who are concerned with DevSecOps!

And if you would like to know more about VMware Tanzu in general, have a look at my “10 Things You Didn’t Know About VMware Tanzu” article.

 

What is Tanzu for Kubernetes Operations?

Updated on March 16, 2022

The customers I worked with last year were large enterprises with a multi-cloud strategy that had just started their application modernization journey. Typically, VMware customers interested in Tanzu would take a look at the Standard edition first, which gives you:

  • Tanzu Kubernetes Grid Runtime
  • Tanzu Mission Control Standard
  • Avi Essentials (NSX Advanced Load Balancer)
  • Antrea (open-source) for container networking
  • and some other open-source software like Prometheus, Grafana, Fluent Bit, Contour

Tanzu Std vs Adv

A lot of my customers were interested in Tanzu Advanced but asked for something in between these two editions. Tanzu Standard sounded very interesting, but almost all of them asked the following questions:

  • What if I don’t build or modernize my own applications yet and get my application as a container from my ISV?
  • Prometheus and Grafana are nice, but I would like to have something more enterprise-ready for observability. How can I get Tanzu Observability?
  • Avi Essentials sounds great, but I am thinking of replacing my current load balancer. Is it possible to replace my F5 or Citrix ADC (formerly known as Citrix NetScaler) appliances?
  • Contour seems to be a nice open-source project, but I am looking for something with built-in automation and analytics capabilities for ingress. Can’t I get Avi Enterprise for that as well?
  • I am looking for zero trust application security. How can you help me to encrypt traffic between containers or microservices, which could also be hosted on different clouds (e.g., on-prem and public cloud)?

The answer to these questions is Tanzu for Kubernetes Operations (TKO), a bundle of VMware products and services that meets the requirements of cloud platform teams. It provides centralized, consistent and simplified container management and operations across clouds and currently includes the following products and services:

Important Note: The VMware product guide says that “a Core is a single physical computational unit of the Processor which may be presented as one or more vCPUs“. So, if you plan a CPU overcommit of 1:2 (cores:vCPUs) for your on-premises infrastructure, you only have to license half as many cores as you present vCPUs, e.g. 12 cores for 24 vCPUs.

TKO Reference Architecture

VMware has released TKO reference architectures for vSphere, AWS and Azure.

Figure 1 - Tanzu for Kubernetes Operations

Use this link for additional information on how to deploy and configure Tanzu Mission Control, Tanzu Observability and Tanzu Service Mesh.

What is Application Transformer for Tanzu?

Application Transformer for VMware Tanzu became generally available in February 2022.

Application Transformer can help you convert virtual machines and application components into OCI-compliant container images, which can then be deployed on the Tanzu Kubernetes stack.

Tanzu Application Transformer

 

Tanzu App Navigator

Application Transformer helps you analyze and visualize application components and dependencies. It also provides customers with scores that allow them to decide which applications should be transformed.

App Navigator is a four-to-six-week engagement that helps you decide which applications you should tackle first and how much change is needed to drive business outcomes. It’s one thing to containerize an application, but App Navigator helps you create a modernization strategy based on your goals.

Note: VMware’s App Navigator team uses Application Transformer during their service engagement.

Tanzu App Navigator

Tanzu Application Platform

Deploying an application on Kubernetes is not an easy thing if you don’t know anything about Kubernetes.

If you would like to focus more on your applications and your developer’s experience, then Tanzu Application Platform (TAP) could be very interesting for you.

With Tanzu Application Platform, application developers and operations teams can build and deliver a better multi-cloud developer experience on any Kubernetes distribution, including Azure Kubernetes Service, Amazon Elastic Kubernetes Service, Google Kubernetes Engine, as well as software offerings like Tanzu Kubernetes Grid.

VMware is known for reducing complexity and providing cloud-agnostic infrastructures. They started by abstracting the underlying server hardware, then came the virtualization of the whole data center (compute, storage, network), and the next step was the abstraction of public clouds like AWS, Azure and Google.

In the case of Tanzu Application Platform we are talking about an opinionated grouping of separate components that run on any conformant Kubernetes cluster (TKG, AKS, EKS, GKE, OpenShift etc.). From an application developer perspective an application can automatically be built, tested and deployed on Kubernetes.

Tanzu Application Platform

Meaning, with TAP you get a modular application developer PaaS (adPaaS) offering and true application platform portability with the capability of “bring-your-own-Kubernetes”.

 

A Universal License and Technology to Build a Flexible Multi-Cloud

In November 2020 I wrote an article called “VMware Cloud Foundation And The Cloud Management Platform Simply Explained“. That piece focused on the “why” and “when” VMware Cloud Foundation (VCF) makes sense for your organization. It also covers the business value and hints that VCF is about more than just technology. Cloud Foundation is one of the most important drivers and THE enabler of VMware’s multi-cloud strategy.

If you are not familiar enough with VMware’s multi-cloud strategy, then please have a look at my article “VMware Multi-Cloud and Hyperscale Computing” first.

To summarize the two articles mentioned above, one can say that VMware Cloud Foundation is a software-defined data center (SDDC) that can run in any cloud. “In any cloud” means that VCF can also be consumed as a service through other cloud provider partners like:

Additionally, Cloud Foundation and the whole SDDC can be consumed as a managed offering called DCaaS or LCaaS (Data Center / Local Cloud as a service).

Let’s say a customer is convinced that a “VCF everywhere” approach is right for them and starts building up private and public clouds based on VMware’s technologies. This means that VMware Cloud Foundation now runs in their private and public cloud.

Note: This doesn’t mean that the customer cannot use native public cloud workloads and services anymore. They can simply co-exist.

The customer has now reached a point where they have achieved a consistent infrastructure. What’s next? The next logical step is to use the same automation, management and security consoles to achieve consistent operations.

A traditional VMware customer now goes for the vRealize Suite, because they need vRealize Automation (vRA) for automation and vRealize Operations (vROps) to monitor the infrastructure.

The next topic in this customer’s journey would be application modernization, which includes topics like containerization and Kubernetes. VMware’s answer for this is the Tanzu portfolio. For the sake of this example, let’s go with “Tanzu Standard”, which is one of four editions available in the Tanzu portfolio (aka VMware Tanzu).

VMware Cloud Foundation

Let’s have a look at the customer’s bill of materials so far:

  • VMware Cloud Foundation on-premises (vSphere, vSAN, NSX)
  • VMware Cloud on AWS
  • VMware Cloud on Dell EMC (locally managed VCF service for special edge use cases)
  • vRealize Automation
  • vRealize Operations
  • Tanzu Standard (includes Tanzu Kubernetes Grid and Tanzu Mission Control)

Looking at this list above, we see that their infrastructure is equipped with three different VMware Cloud Foundation flavours (on-prem, hyperscaler managed, locally managed) complemented by products of the vRealize Suite and the Tanzu portfolio.

This infrastructure with its different technologies, components and licenses has been built up over the past few years. But organizations are nowadays asking for more flexibility than ever. By flexibility I mean license portability and a subscription model.

VMware Cloud Universal

On 31st March 2021, VMware introduced VMware Cloud Universal (VMCU). VMCU makes the customer’s life easier because it gives you the choice and flexibility to decide in which clouds you want to run your infrastructure and lets you consume VMware Cloud offerings as needed. It even allows you to convert existing on-premises VCF licenses to a VCF subscription license.

The VMCU program includes the following technologies and licenses:

  • VMware Cloud Foundation Subscription
  • VMware Cloud on AWS
  • Google Cloud VMware Engine
  • VMware Cloud on Dell EMC
  • vRealize Cloud Universal Enterprise Plus
  • Tanzu Standard Edition
  • VMware Success 360 (S360 is required with VMCU)

VMware Cloud Console

As Kit Kolbert, CTO VMware, said, “the idea is that VMware Cloud is everywhere that you want your applications to be”.

The VMware Cloud Console gives you a view into all those different locations. You can quickly see what’s going on with a specific site or cloud landing zone, what its overall utilization looks like, or whether issues occur.

The Cloud Console has a seamless integration with vROps, which also helps you regarding capacity forecasting and (future) requirements (e.g., do I have enough capacity to meet my future demand?).

VMware Cloud Console

In short, it’s the central multi-cloud console to manage your global VMware Cloud environment.

vRealize Cloud Universal

What is part of vRealize Cloud Universal (vRCU) Enterprise Plus? vRCU is a SaaS management suite that combines on-premises and SaaS capabilities for automation, operations, log analytics and network visibility into a single offering. In other words, you get to decide where you want to deploy your management and operations tools. vRealize Cloud Universal comes in four editions; VMCU includes the vRCU Enterprise Plus edition with the following components:

vRealize Cloud Universal Editions

Note: While vRCU Standard, Advanced and Enterprise are sold as standalone editions today, the Enterprise Plus edition is only sold with VMCU (and as an add-on to VMC on AWS).

vRealize AI Cloud

Have you ever heard of Project Magna? Announced at VMworld 2019, it provides adaptive optimization and a self-tuning engine for your data center. It was Pat Gelsinger who envisioned a so-called “self-driving data center”. “Intelligence-driven data center” might have been the better term, since Project Magna leverages artificial intelligence in the form of reinforcement learning, which combs through your data and runs thousands of scenarios on the Magna SaaS analytics engine, searching for the best reward output based on trial and error.

The first instantiation began with vSAN (today also known as the vRAI Cloud vSAN Optimizer), where Magna collects data, learns from it, and makes decisions that automatically self-tune your infrastructure to drive greater performance and efficiency.

Today, this SaaS service is called vRealize AI Cloud.

vRealize AI Cloud vSAN

vRealize AI (vRAI) learns about your operating environments and application demands and adapts to changing dynamics, ensuring optimization per stated KPI. vRAI Cloud is only available on vRealize Operations Cloud via the vRealize Cloud Universal subscription.

VMware Skyline

VMware Skyline is a support service that automatically collects, aggregates and analyzes product usage data, proactively identifies potential problems and helps VMware support engineers improve resolution times. Skyline is included in vRealize Cloud Universal because it just makes sense: a lot of customers have asked for a unified self-service experience between Skyline and vRealize Operations Cloud, and many customers use Skyline and vROps side by side today.

Users can now be proactive and perform troubleshooting in a single SaaS workflow. This means customers save time by automating Skyline proactive remediations in vROps Cloud. Besides vROps, Skyline supports vSphere, vSAN, NSX, vRA, VCF and VMware Horizon as well.

VMware Cloud Universal Use Cases

As already mentioned, VMCU makes a lot of sense if you are building a hybrid or multi-cloud architecture with a consistent (VMware) infrastructure. VMCU, vRCU and the Tanzu portfolio help you create a unified control plane for your cloud infrastructure.

Other use cases could be cloud migration or cloud bursting scenarios. If we switch back to the fictional customer from before, we could use VMCU to convert existing VCF licenses to VCF-S (subscription) licenses, which in the end allows you to build a VMware-based cloud on top of AWS (other public cloud providers are coming very soon!), for example.

Another good example is achieving the same service and operating model on-premises as in the public cloud: a fully managed, consumable infrastructure. In other words, moving from a self-built and self-managed VCF infrastructure to something like VMC on Dell EMC.

How can I get VMCU?

There is no monthly subscription model; VMware only supports one-year or three-year terms. Customers need to sign an Enterprise License Agreement (ELA) and purchase VMCU SPP credits.

Note: SPP credits purchased outside the program cannot be used within the VMCU program!

After purchasing the VMCU SPP credits and completing the VMware Cloud onboarding and organization setup, you can select the infrastructure offerings on which to consume your SPP credits. This can be done via the VMware Cloud Console.

Summary

I hope this article was useful to get a better understanding of VMware Cloud Universal. It might seem a little complex at first, but it isn’t. VMCU makes your life easier and helps you build and license a globally distributed cloud infrastructure based on VMware technology.

VCF Subscription