Share Your Opinion – Cross-Cloud Mobility and Application Portability

Do you have an opinion about cross-cloud mobility and application portability? If yes, what about this is important to you? How do you intend to achieve this kind of cloud operating model? Is it about flexibility or more about a cloud-exit strategy? Just because we can, does it mean we should? Will it ever become a reality? These are just some of the answers I am looking for.

Contact me via michael.rebmann@cloud13.ch. You can also reach me on LinkedIn.

I am writing a book about this topic and looking for cloud architects and decision-makers who would like to sit down with me via Zoom or MS Teams to discuss the challenges of multi-cloud and how to achieve workload mobility or application/data portability. I just started interviewing chief architects, CTOs and cloud architects from VMware, partners, customers and public cloud providers (like Microsoft, AWS and Google) as part of my research.

The questions below led me to the book idea.

What is Cross-Cloud Mobility and Application Portability about? 

Cross-cloud mobility refers to the ability of an organization to move its applications and workloads between different cloud computing environments. This is an important capability for organizations that want to take advantage of the benefits of using multiple cloud providers, such as access to a wider range of services and features, and the ability to negotiate better terms and pricing.

To achieve cross-cloud mobility, organizations need to use technologies and approaches that are compatible with multiple cloud environments. This often involves using open standards and APIs, as well as adopting a microservices architecture and containerization, which make it easier to move applications and workloads between different clouds.
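
One way to picture the “open standards and APIs” approach is to keep application code decoupled from provider-specific SDKs. The following Java sketch is a hypothetical illustration of that idea: the ObjectStore interface and both adapter classes are invented for this example and are not part of any vendor SDK.

```java
// The application depends only on this interface, never on a provider-specific SDK class.
public interface ObjectStore {
    void put(String key, byte[] data);
    byte[] get(String key);
}

// One adapter per cloud: moving to another provider means swapping the adapter,
// not rewriting the application code that uses ObjectStore.
class S3ObjectStore implements ObjectStore {
    @Override public void put(String key, byte[] data) { /* call the AWS SDK here */ }
    @Override public byte[] get(String key) { /* call the AWS SDK here */ return new byte[0]; }
}

class AzureBlobObjectStore implements ObjectStore {
    @Override public void put(String key, byte[] data) { /* call the Azure SDK here */ }
    @Override public byte[] get(String key) { /* call the Azure SDK here */ return new byte[0]; }
}
```

Containers then package the application together with such abstractions, so the same image can be scheduled on any Kubernetes cluster, regardless of the underlying cloud.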

Another key aspect of cross-cloud mobility is the ability to migrate data between different clouds without losing any of its quality or integrity. This requires the use of robust data migration tools and processes, as well as careful planning and testing to ensure that the migrated data is complete and accurate.
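
A simple and common way to verify that integrity is to compare checksums of every object before and after the transfer. The snippet below is a minimal sketch of that idea in plain Java; the file paths are placeholders for illustration only.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

public class MigrationCheck {

    // Compute a SHA-256 checksum so the source and the migrated copy can be compared.
    static String sha256(Path file) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        return HexFormat.of().formatHex(digest.digest(Files.readAllBytes(file)));
    }

    public static void main(String[] args) throws Exception {
        Path source = Path.of("/data/source/customers.db");   // placeholder path
        Path migrated = Path.of("/data/target/customers.db"); // placeholder path
        boolean intact = sha256(source).equals(sha256(migrated));
        System.out.println(intact ? "Checksums match, data is intact" : "Checksum mismatch, investigate!");
    }
}
```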

In addition to the technical challenges of achieving cross-cloud mobility, there are also organizational and business considerations. For example, organizations need to carefully evaluate their use of different cloud providers, and ensure that they have the necessary contracts and agreements in place to allow for the movement of applications and workloads between those providers.

Overall, cross-cloud mobility is an important capability for organizations that want to take advantage of the benefits of using multiple cloud providers. By using the right technologies and approaches, organizations can easily and securely move their applications (application portability) and workloads between different clouds, and take advantage of the flexibility and scalability of the cloud.

What is a Cloud-Exit Strategy?

A cloud-exit strategy is a plan for transitioning an organization’s applications and workloads away from a cloud computing environment. This can be necessary for a variety of reasons, such as when an organization wants to switch to a different cloud provider, when it wants to bring its applications and data back in-house, or when it simply no longer needs to use the cloud. A cloud-exit strategy typically includes several key components, such as:

  1. Identifying the specific applications and workloads that will be transitioned away from the cloud, and determining the timeline for the transition.
  2. Developing a plan for migrating the data and applications from the cloud to the new environment, including any necessary data migration tools and processes.
  3. Testing the migration process to ensure that it is successful and that the migrated applications and data are functioning properly.
  4. Implementing any necessary changes to the organization’s network and infrastructure to support the migrated applications and data.
  5. Ensuring that the organization has a clear understanding of the costs and risks associated with the transition, and that it has a plan in place to mitigate those risks.

By having a well-defined cloud-exit strategy, organizations can ensure that they are able to smoothly and successfully transition away from a cloud computing environment when the time comes.

What is a Cloud-Native Application?

A cloud-native application is a type of application that is designed to take advantage of the unique features and characteristics of cloud computing environments. This typically includes using scalable, distributed, and highly available components, as well as leveraging the underlying infrastructure of the cloud to deliver a highly performant and resilient application. Cloud-native applications are typically built using a microservices architecture, which allows for flexibility and scalability, and are often deployed using containers to make them portable across different cloud environments.

Does Cloud-Native mean an application needs to perform equally well on any cloud?

No, being cloud-native does not necessarily mean that an application will perform equally well on any cloud. While cloud-native applications are designed to be portable and scalable, the specific cloud environment in which they are deployed can still have a significant impact on their performance and behavior.

For example, some cloud providers may offer specific services or features that can be leveraged by a cloud-native application to improve its performance, while others may not. Additionally, the underlying infrastructure of different cloud environments can vary, which can affect the performance and availability of a cloud-native application. As a result, it is important for developers to carefully consider the specific cloud environment in which their cloud-native application will be deployed, and to optimize its performance for that environment.

How can you avoid a cloud lock-in?

A cloud lock-in refers to a situation where an organization becomes dependent on a particular cloud provider and is unable to easily switch to a different provider without incurring significant costs or disruptions. To avoid a cloud lock-in, organizations can take several steps, such as:

  1. Choosing a cloud provider that offers tools and services that make it easy to migrate to a different provider, such as data migration tools and APIs for integrating with other cloud services.
  2. Adopting a multi-cloud strategy, where the organization uses multiple cloud providers for different workloads or applications, rather than relying on a single provider.
  3. Ensuring that the organization’s applications and data are portable, by using open standards and technologies that are supported by multiple cloud providers.
  4. Regularly evaluating the organization’s use of cloud services and the contracts with its cloud provider, to ensure that it is getting the best value and flexibility.
  5. Developing a cloud governance strategy that includes processes and policies for managing the organization’s use of cloud services, and ensuring that they align with the organization’s overall business goals and objectives.

By taking these steps, organizations can avoid becoming overly dependent on a single cloud provider and maintain the flexibility to switch to a different provider if needed.

Final Words

Multi-Cloud is very complex and has different layers like compute, storage, network, security, monitoring and observability, operations, and cost management. Add topics like open-source software, databases, Kubernetes, developer experience, and automation to the mix, then we will have most probably enough to discuss. 🙂

Looking forward to hearing from you! 

VMware Cloud Foundation – A Technical Overview (based on VCF 4.5)

Update: Please follow this link to get to the updated version with VCF 5.0.

This technical overview supersedes this version, which was based on VMware Cloud Foundation 4.3, and now covers all capabilities and enhancements that were delivered with VCF 4.5.

What is VMware Cloud Foundation (VCF)?

VMware Cloud Foundation is a multi-cloud platform that provides a full-stack hyperconverged infrastructure (HCI) that is made for modernizing data centers and deploying modern container-based applications. VCF is based on different components like vSphere (compute), vSAN (storage), NSX (networking), and some parts of the Aria Suite (formerly vRealize Suite). The idea of VCF follows a standardized, automated, and validated approach that simplifies the management of all the needed software-defined infrastructure resources.

This stack provides customers with consistent infrastructure and operations in a cloud operating model that can be deployed on-premises, at the edge, or in the public cloud.

Tanzu Standard Edition is included in VMware Cloud Foundation with Tanzu Standard, Advanced, and Enterprise editions.

Note: The VMware Cloud Foundation Starter, Standard, Advanced and Enterprise editions do NOT include Tanzu Standard.

What software is being delivered in VMware Cloud Foundation?

The BoM (bill of materials) changes with each VCF release. With VCF 4.5, the following components and software versions are included:

  • VMware SDDC Manager 4.5
  • vSphere 7.0 Update 3g
  • vCenter Server 7.0 Update 3h
  • vSAN 7.0 Update 3g
  • NSX-T 3.2.1.2
  • VMware Workspace ONE Access 3.3.6
  • vRealize Log Insight 8.8.2
  • vRealize Operations 8.8.2
  • vRealize Automation 8.8.2
  • (vRealize Network Insight)

Note: Only one vCenter Server license is required for all vCenter Servers deployed in a VMware Cloud Foundation system.

VMware Cloud Foundation Components

What is VMware Cloud Foundation+ (VCF+)?

With the launch of VMware Cloud Foundation (VCF) 4.5 in early October 2022, VCF introduced new consumption and licensing models.

VCF+ is the next cloud-connected SaaS product offering, which builds on vSphere+ and vSAN+. VCF+ delivers cloud connectivity to centralize management and a new consumption-based OPEX model to consume VMware Cloud services.

VMware Cloud Foundation Consumption Models

VCF+ components are cloud entitled, metered, and billed. There are no license keys in VCF+. Once the customer is onboarded to VCF+, the components are entitled from the cloud and periodically metered and billed.

VMware Cloud Foundation+

The following components are included in VCF+:

  • vSphere+
  • vSAN+
  • NSX (term license)
  • SDDC Manager
  • Aria Universal Suite (formerly vRealize Cloud Universal aka vRCU)
  • Tanzu Standard
  • vCenter (included as part of vSphere+)

Note: In a given VCF+ instance, you can only have VCF+ licensing, you cannot mix VCF-S (term) and VCF perpetual licenses with VCF+.

What are other VCF subscription offerings?

VMware Cloud Foundation Subscription (VCF-S) is an on-premises (disconnected) term subscription offer that is available as a standalone VCF-S offer using physical core metrics and term subscription license keys.

VMware Cloud Foundation Subscription TLSS

You can also purchase VCF+ and VCF-S licenses as part of the VMware Cloud Universal program.

Note: You can mix VCF-S and perpetual license keys in the same VCF instance, as long as you use only one of the two license types within a given workload domain.

Which VMware Cloud Foundation editions are available?

A VCF comparison matrix can be found here.

VMware Cloud Foundation Architecture

VCF is made for greenfield deployments (brownfield not supported) and supports two different architecture models:

  • Standard Architecture
  • Consolidated Architecture

VMware Cloud Foundation Deployment Options

The standard architecture separates management workloads and lets them run on a dedicated management workload domain. Customer workloads are deployed on a separate virtual infrastructure workload domain (VI workload domain). Each workload domain is managed by a separate vCenter Server instance, which allows autonomous licensing and lifecycle management.

VMware Cloud Foundation Single Site Deployment

Note: The standard architecture is the recommended model because it separates management workloads from customer workloads.

Customers with a small environment (or a PoC) can start with a consolidated architecture. This allows you to run customer and management workloads together on the same workload domain (WLD).

Note: The management workload domain’s default cluster datastore must use vSAN. Other WLDs can use vSAN, NFS, FC, and vVols for the principal storage.

VMware Cloud Foundation Storage Options

What is a vSAN Stretched Cluster?

vSAN stretched clusters extend a vSAN cluster from a single site to two sites for a higher level of availability and inter-site load balancing.

VMware Cloud Foundation Stretched Cluster

Does VCF provide flexible workload domain sizing?

Yes, that’s possible. You can license the WLDs based on your needs and use the editions that make the most sense depending on your use cases.

VMware Cloud Foundation Flexible Licensing

How many physical nodes are required to deploy VMware Cloud Foundation?

A minimum of four physical nodes is required to start in a consolidated architecture or to build your management workload domain. Four nodes are required to ensure that the environment can tolerate a failure while another node is being updated.

VI workload domains require a minimum of three nodes.

This means, to start with a standard architecture, you need to have the requirements (and money) to start with at least seven physical nodes.

What are the minimum hardware requirements?

These minimum specs have been listed for the management WLD since VCF 4.0 (September 2020):

VMware Cloud Foundation Hardware Requirements

Can I mix vSAN ReadyNodes and Dell EMC VxRail deployments?

No. This is not possible.

What about edge/remote use cases?

If you would like to deploy VMware Cloud Foundation workload domains at a remote site, you can deploy so-called “VCF Remote Clusters”. These remote workload domains are managed by the VCF instance at the central site, and you can perform the same full-stack lifecycle management for the remote sites from the central SDDC Manager.

VMware Cloud Foundation Remote Cluster

Prerequisites to deploy remote clusters can be found here.

Note: If vSAN is used, VCF only supports a minimum of 3 nodes and a maximum of 4 nodes per VCF Remote Cluster. If NFS, vVols, or Fibre Channel is used as principal storage, then VCF supports a minimum of 2 and a maximum of 4 nodes.

Important: Remote clusters and remote workload domains are not supported when VCF+ is enabled.

Does VCF support HCI Mesh?

Yes. VMware Cloud Foundation 4.2 and later supports sharing remote datastores with HCI Mesh for VI workload domains.

HCI Mesh is a software-based approach for disaggregation of compute and storage resources in vSAN. HCI Mesh brings together multiple independent vSAN clusters by enabling cross-cluster utilization of remote datastore capacity within vCenter Server. HCI Mesh enables you to efficiently utilize and consume data center resources, which provides simple storage management at scale.

Note: At this time, HCI Mesh is not supported with VCF ROBO.

What is SDDC Manager?

SDDC Manager is a preconfigured virtual appliance that is deployed in the management workload domain. It is used for creating workload domains, provisioning additional virtual infrastructure, and performing lifecycle management of all the software-defined data center (SDDC) management components.

VMware Cloud Foundation SDDC Manager

You use SDDC Manager in VMware Cloud Foundation to perform the following operations:

  • Commissioning or decommissioning ESXi hosts
  • Deployment of workload domains
  • Extension of clusters in the management and workload domains with ESXi hosts
  • Adding clusters to the management domain and workload domains
  • Support for network pools for host configuration in a workload domain
  • Storage of product licenses
  • Deployment of vRealize Suite components
  • Lifecycle management of the virtual infrastructure components in all workload domains, and of vRealize Suite Lifecycle Manager components
  • Certificate management
  • Password management and rotation
  • NSX-T Edge cluster deployment in the management domain and workload domains
  • Backup configuration
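
Many of these operations are also exposed through the SDDC Manager REST API, which is useful for automation. The following Java sketch assumes the /v1/domains endpoint and a bearer token as documented in the VMware Cloud Foundation API for your release; the FQDN is a placeholder, and in a lab with self-signed certificates you would also need to handle TLS trust.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListWorkloadDomains {
    public static void main(String[] args) throws Exception {
        // Placeholder values: replace with your SDDC Manager FQDN and a valid API access token.
        String sddcManager = "https://sddc-manager.example.local";
        String apiToken = System.getenv("VCF_API_TOKEN");

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(sddcManager + "/v1/domains")) // list all workload domains
                .header("Authorization", "Bearer " + apiToken)
                .GET()
                .build();

        // Print the raw JSON response with the management and VI workload domains.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```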

VMware Cloud Foundation SDDC Manager Dashboard

How many resources does the VCF management WLD need during the bring-up process?

We know that VCF includes vSphere (ESXi and vCenter), vSAN, SDDC Manager, NSX-T, and possibly some components of the vRealize Suite. The following table should give you an idea of what the resource requirements look like to get VCF up and running:

VMware Cloud Foundation Resource Requirements

If you are interested in how many resources the Aria Suite (formerly vRealize Suite) will consume in the management workload domain, have a look at this table:

VMware Cloud Foundation Resource Requirements vRealize

How can I migrate my workloads from a non-VCF environment to a new VCF deployment?

VMware HCX provides a path to modernize from a legacy data center architecture by migrating to VMware Cloud Foundation.

VMware Cloud Foundation HCX

What is NSX Advanced Load Balancer?

NSX Advanced Load Balancer (NSX ALB), formerly known as Avi, is a solution that provides advanced load balancing capabilities for VMware Cloud Foundation.

Which security add-ons are available with VMware Cloud Foundation?

VMware has different workload and network security offerings to complement VCF:

Can I get VCF as a managed service offering?

Yes, this is possible. Please have a look at Data Center as a Service based on VMware Cloud Foundation.

Can I install VCF in my home lab?

Yes, you can. With the VLC Lab Constructor, you can deploy an automated VCF instance in a nested configuration. There is also a Slack VLC community for support.

VCF Lab Constructor

Where can I find more information about VCF?

Please consult the VMware Cloud Foundation 4.5 FAQ for more information about VMware Cloud Foundation.

10 Things You Didn’t Know About vSphere+

VMware vSphere+ is the next evolution that brings the benefits of the cloud to on-premises workloads. It transforms existing on-prem deployments into SaaS-enabled infrastructures. This allows customers to access new innovations and cloud services much faster.

I mention 4 important things to customers when they ask about vSphere+:

  • You can purchase a new subscription or upgrade your existing licenses to subscription
    • Available in 1, 3, and 5-year terms
    • Per-Core metric with a 16 core minimum per CPU (perpetual vSphere licenses use a per-socket metric with a 32 core maximum)
  • You still manage your ESXi hosts the same way. vCenter updates can be managed from the VMware Cloud console.
    • You can deploy an unlimited number of vCenters (vCenter Standard)
  • vSphere+ includes all features of the vSphere Enterprise+ edition and allows keyless management of your vSphere and vSAN infrastructure
  • You get central management and insights through the VMware Cloud Console, and add-on services

Diagram showing the architecture for vSphere+

That is vSphere+ in a nutshell. But there is much more. With this new service and connection to VMware Cloud services, customers start to ask a lot of questions. 😉

1) Which parts of the Tanzu portfolio are included in vSphere+?

vSphere+ comes with so-called developer services that include:

2) What is the Cloud Consumption Interface (CCI)?

The Cloud Consumption Interface is included with vSphere+ (powered by Aria Automation, formerly known as vRealize Automation) and gives consumers a consistent API and CLI to interact with all their cloud and IaaS operations. This means you can connect to all your Supervisor clusters from a graphical web console.

Note: Do you remember the Project Cascade announcement at VMworld 2021? That’s CCI.

3) What if I have 20 cores and want to license only 16 cores of them?

Let us say that you have a 20-core CPU and disable 4 of the cores in the BIOS; vSphere+ would then only see and activate/subscribe 16 cores. This is a supported and valid configuration option.

There is a minimum of 16 cores per CPU. If your CPUs have only 12 cores per socket, you still pay for 16 cores. If a CPU has 20 active cores, a customer pays for 20 cores.

That said, it is recommended that you activate all cores during a subscription upgrade to set the correct baseline for the future. If you never plan to activate those 4 leftover cores, then go ahead and license only 16 cores for this CPU.
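
As a simple illustration of the pricing rule described above, here is a small, hypothetical helper that computes the billable cores per CPU; it only encodes the 16-core minimum, not any other licensing logic.

```java
public class VspherePlusCores {

    // Billable cores per CPU: the number of active cores, but never less than the 16-core minimum.
    static int billableCores(int activeCoresPerCpu) {
        return Math.max(16, activeCoresPerCpu);
    }

    public static void main(String[] args) {
        System.out.println(billableCores(12)); // 12-core CPU -> 16 billable cores
        System.out.println(billableCores(20)); // 20-core CPU, all cores active -> 20 billable cores
        System.out.println(billableCores(16)); // 20-core CPU with 4 cores disabled in BIOS -> 16 billable cores
    }
}
```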

4) What if I bought VMware Cloud Foundation or vCloud Suite already?

vCloud Suite (vCS) customers can upgrade their existing perpetual license to subscription with vCloud Suite+ (vCS+).

vCloud Suite+ Editions

Existing VCF customers should have a look at VCF+.

5) What is VMware Cloud Foundation+?

VMware Cloud Foundation+ (VCF+) has been generally available since October 2022, starting with VCF 4.5. The difference from vSphere+ is that VCF+ connects the vCenter Cloud Gateway to the SDDC Manager instead of vCenter.

VMware Cloud Foundation+

The following components are included in VCF+:

  • vSphere+
  • vSAN+
  • NSX term license
  • SDDC Manager
  • Aria Universal Suite Enterprise edition (formerly known as vRealize Cloud Universal)
  • Tanzu Standard
  • Keyless entitlements (only for vSphere+ and vSAN+)

VMware Cloud Foundation+ comes in three different editions:

  • VCF+ Standard
  • VCF+ Advanced
  • VCF+ Enterprise

Note: vCenter Standard is included in vSphere+. This means that vCenter is part of VCF+ as well.

6) What if I cannot connect to the cloud yet or have an air-gapped environment?

If you are not ready yet or are not allowed to connect your environment to a cloud solution like this, you have the following alternatives for the so-called “disconnected” use cases (with term licenses):

  • vSphere Subscription (sometimes called vSphere-S)
  • vCloud Suite Subscription (vCS-S)
  • VMware Cloud Foundation Subscription (VCF-S)

Important: You cannot mix perpetual and VCF+ instances. The same is true for VCF-S and VCF+.

Note: VCF-S can be upgraded to VCF+ but you cannot go from VCF+ to VCF-S.

7) What if I lose my connection to the cloud?

No problem! If you lose your connection to the VMware Cloud, only access to cloud services and the cloud console will be affected. vCenter instances, ESXi hosts, and workloads will continue to run normally and can be managed from vCenter (through the vSphere client). This is true for vSphere+ and VCF+.

8) How many vCenters can be connected to a vCenter Cloud Gateway?

Currently, a vCenter Cloud Gateway (VCG) supports up to 8 medium vCenters. VCF+ customers need to deploy a gateway per VCF instance.

vCenter Cloud Gateway

Note: VMware periodically auto-updates vSphere+ and vCenter Cloud Gateway whenever an update is available. These auto-updates are not applicable for your vCenter Server. You must manually update the vCenter Server whenever an update is available.

9) Can I mix vSphere+ with vSAN perpetual licenses?

Yes, you can continue to use your vSAN perpetual licenses with vSphere+. But as you would expect, you should not mix vSAN perpetual and vSAN+ subscriptions.

10) What about other vSphere+ and vSAN editions?

As I mentioned, vSphere+ includes vSphere Enterprise+ features, while vSAN+ includes vSAN Enterprise features.

We can expect that VMware is going to introduce vSphere+ Standard, vSAN+ Standard and vSAN+ Advanced soon. 

Want to know more?

Here are a few additional resources:

VMware Explore Europe 2022 Major Announcements

VMware Explore Europe 2022 is history. This year felt different and very special! Rooms were fully booked, and people were queuing up in the hallways. The crowd had a HUGE interest in technical sessions from known speakers like Cormac Hogan, Frank Denneman, Duncan Epping, William Lam, and many more!

Compared to VMware Explore US, there were not that many major announcements, but I thought it might be helpful again to list the announcements that seem to be the most interesting and relevant ones.

VMware Aria Hub Free Tier

For me, the biggest and most important announcement was the Aria Hub free tier. I am convinced that Aria Hub will be the next big thing for VMware and I am sure that it will change how the world manages a multi-cloud infrastructure.

VMware Aria Hub is a multi-cloud management platform that unifies the management disciplines of cost, performance, configuration, and delivery automation with a common control plane and data model for any cloud, any platform, any tool, and every persona. It helps you align multiple teams and solutions on a common understanding of resources, relationships, historical changes, applications, and accounts, fundamental to managing a multi-cloud environment.

The new free tier enables customers to inventory, map, filter, and search resources from up to two of their native public cloud accounts, currently from either AWS or Azure. It also helps you understand the relationships of your resources to other resources, policies, and other key components in your public cloud and Kubernetes environments. WOW!

Aria Hub Free Tier Announcement: https://blogs.vmware.com/management/2022/11/announcing-vmware-aria-hub-free-tier.html 

Aria Hub Free Tier Technical Overview: https://blogs.vmware.com/management/2022/11/aria-hub-free-tier-technical-overview 

If you want to sign up for the free tier, please follow this link: https://www.vmware.com/learn/1732750_REG.html 

Tanzu Mission Control On-Premises

Many customers asked for it, and now it is coming! Tanzu Mission Control (TMC) will become available on-premises for sovereign cloud partners/providers and enterprise customers!

There is a private beta coming. Hence, I cannot provide more information for now.

Tanzu Kubernetes Grid 2.1

At VMware Explore US 2022, VMware announced Tanzu Kubernetes Grid (TKG) 2.0, and at Explore Europe 2022, they announced TKG 2.1, which adds support for Oracle Cloud Infrastructure (OCI). Additionally, it now also offers the option of leveraging VMs for the management cluster. Both deployment options will be familiar, and they now support a single, unified way of cluster creation using a new API called ClusterClass.

TKG 2.1 Announcement: https://tanzu.vmware.com/content/blog/tanzu-kubernetes-grid-2-1 

Tanzu Service Mesh Advanced Enhancements

VMware also unveiled new enhancements for Tanzu Service Mesh (TSM) that bring VM discovery and integration into the mesh, providing the ability to combine VMs and containers in the same service mesh for secure communications and consistent policy.

VMware Cloud on Equinix Metal (VMC-E)

The last thing I want to highlight is the VMC-E announcement. It is a combination of VMware Cloud IaaS with Equinix Metal hardware as-a-service, which can be deployed in over 30 Equinix global data centers.

VMware Cloud on Equinix Metal is a great option for enterprises that want the flexibility and performance of the public cloud but whose business requirements prevent moving data or applications to the public cloud. It offers full compatibility and consistency with on-premises and VMware Cloud operational models and policies, as well as zero-downtime migration.

VMware Cloud on Equinix Metal is a fully managed solution by VMware (delivered, operated, managed, supported).

VMC-E Announcement: https://blogs.vmware.com/cloud/2022/11/07/introducing-vmware-cloud-on-equinix-metal 

VMC-E Technical Preview: https://www.youtube.com/watch?v=-WpGfrxW39Y&feature=youtu.be&ab_channel=VMwareCloud  

API Security with Spring Cloud Gateway and Tanzu Service Mesh

Today, more than ever, both humans and machines consume or process data. We humans consume data through multiple applications that are hosted in different clouds, from different devices like smartphones, laptops, and tablets. Companies are building applications that need to look good and work well on any platform/device.

At the same time, developers are building new applications following cloud-native principles. A cloud-native architecture is a design pattern for applications that are built for the cloud. Most cloud-native apps are organized as microservices which are used to break up larger applications into loosely coupled units that can be managed by smaller teams. Resilience and scale are achieved through horizontal scaling, distributed processing, and automated placement of failed components.

Different people have a different understanding of “cloud-native” and the chances are high that you will get different answers. Let us look at the official definition from CNCF:

“Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.”

12-Factor App

A widely accepted methodology for building cloud-based applications is the “Twelve-Factor Application”. It uses declarative formats for automation to minimize time and costs. It should offer maximum portability between execution environments and be suitable for deployment on modern cloud platforms. The 12-factor methodology can be applied with any programming language and may use any combination of backing services (caching, queuing, databases).
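
One of the twelve factors, storing configuration in the environment, is easy to illustrate. The snippet below is a generic sketch; the variable names and defaults are invented for the example.

```java
public class TwelveFactorConfig {
    public static void main(String[] args) {
        // Factor III (config): read settings from the environment instead of hard-coding them,
        // so the same build can run unchanged in any execution environment.
        String databaseUrl = System.getenv().getOrDefault("DATABASE_URL", "jdbc:postgresql://localhost:5432/dev");
        String cacheHost = System.getenv().getOrDefault("CACHE_HOST", "localhost");

        System.out.println("Connecting to database at " + databaseUrl);
        System.out.println("Using cache backend at " + cacheHost);
    }
}
```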

Interestingly, we now see other factors like API-first, telemetry, and security complementing this list.

While doing research for my book about “workload mobility and application portability”, I saw the term “API-first” many times.

Then I started to remember that VMware acquired Mesh7 a while ago and they announced Tanzu Service Mesh Enterprise last year at VMworld Europe (now known as VMware Explore). API security was even one of their main topics during the networking & security solutions keynote presented by Tom Gillis.

VMworld 2021 API Security

That is why I thought it was time to better understand this topic and write a piece about APIs. Let us start with some basics first.

What is an API?

An application programming interface (API) is a way for two or more software components to communicate with each other using a set of defined protocols and definitions. APIs are here to make the developer’s life easier.

I bet you have seen parts of Google Maps already embedded in different websites when you were looking for a specific business or restaurant location. Most websites and developers would use Google Maps in this case, because it just makes sense for us, right? That is why Google exposes the Google Maps API so developers can embed Google Maps objects very easily in a standardized way. Or have you seen anyone who wants to develop their own version of Google Maps?

In the case of enterprises, APIs are a very elegant way to share data with customers or other external users. Public APIs like the Google Maps APIs can be used by partners, who can then access your data. And we all know that data is the new oil. Companies can make a lot of money today by sharing their data.

Even when using private APIs (internal use only), you decide who can access your API and data. This is one of the reasons why API security and API management become more important. You want to provide secure access when sensitive data is being exposed.

What is an API Gateway?

For microservices-based apps, it makes sense to implement an API gateway, because it can act as a single entry point for all API calls made to your system. And it doesn’t matter if your system/application is hosted on-premises, in the public cloud, or a combination of both. The API gateway takes care of the request (API call) and returns the requested data.

API Gateway Diagram

Image Source: https://www.tibco.com/reference-center/what-is-an-api-gateway 

API gateways can also handle other tasks like authentication, rate limiting, and usage statistics. This is important, for example, when you want to monetize some of your APIs by offering a service to consumers or other companies.
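
To make the rate-limiting part a bit more concrete, here is a deliberately simplified, hypothetical sketch of a per-API-key request counter. Real gateways use more robust algorithms (token bucket, sliding window) and shared, distributed state, so treat this only as an illustration of the concept.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class SimpleRateLimiter {
    private final int limitPerWindow;
    private final Map<String, AtomicInteger> counters = new ConcurrentHashMap<>();

    public SimpleRateLimiter(int limitPerWindow) {
        this.limitPerWindow = limitPerWindow;
    }

    // Returns true if the request is allowed, false if the client exceeded its quota for this window.
    public boolean allowRequest(String apiKey) {
        int used = counters.computeIfAbsent(apiKey, k -> new AtomicInteger()).incrementAndGet();
        return used <= limitPerWindow;
    }

    // Would be called by a scheduler at the end of every time window (e.g., every minute).
    public void resetWindow() {
        counters.clear();
    }

    public static void main(String[] args) {
        SimpleRateLimiter limiter = new SimpleRateLimiter(2);
        System.out.println(limiter.allowRequest("partner-123")); // true
        System.out.println(limiter.allowRequest("partner-123")); // true
        System.out.println(limiter.allowRequest("partner-123")); // false, quota exceeded
    }
}
```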

What is Spring Cloud Gateway for VMware Tanzu?

Spring Cloud Gateway for VMware Tanzu provides a simple way to route internal and external API requests to application services that expose APIs. This solution is based on the open-source Spring Cloud Gateway project and provides a library for building API gateways on top of Spring and Java.

Because Spring Cloud Gateway is intended to sit between a requester and the resource that is being requested, it is in a position to intercept, analyze, and modify requests.
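
To give an idea of what this looks like in code, here is a small route definition using the Java DSL of the open-source Spring Cloud Gateway project. Treat it as a sketch: the route id, path, and backend URI are invented for the example, and the commercial Spring Cloud Gateway for VMware Tanzu typically manages routes through its own configuration resources.

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    // Route all /api/orders/** requests to a (hypothetical) order service and add a header
    // on the way through: because the gateway sits between requester and resource,
    // it can intercept and modify every request.
    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("orders", r -> r.path("/api/orders/**")
                        .filters(f -> f.addRequestHeader("X-Gateway", "scg-example"))
                        .uri("http://order-service:8080"))
                .build();
    }
}
```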

Revitalize Legacy Apps with APIs

Before we had microservices, there were monolithic applications: an all-in-one application architecture where all services are installed on the same virtual machine and depend on each other.

There are multiple reasons why such a monolith cannot be broken up into smaller pieces and modernized. Sometimes it’s not (technically) possible, not worth it, or it just takes too long. Hence many companies still use such monolithic (legacy) applications. The best example here is the mainframe which often still runs business-critical applications.

I always thought that my customers only have two options when modernizing applications:

  • Start from scratch (throw the old app away)
  • Refactor/Rewrite an application

Rewriting an application takes time and costs money. Imagine that you had to refactor 50 of your applications, split these monoliths up into microservices, connect the resulting hundreds or thousands of microservices, and at the same time take care of security (e.g., vulnerabilities).

So, what are you going to do now?

APIs seem to provide a very cost-effective way to integrate some of the older applications with newer ones. With this approach, one can abstract away the data and services from the underlying (legacy) application infrastructure. APIs can extend the life of a legacy application and could be the start of a phased application modernization approach.

Tanzu Service Mesh Enterprise

At the moment, we only have an API gateway that sits in front of our microservices. Multiple (micro)services in an aggregated fashion create the API you want to expose to your internal or external customers. The question now is, how do you plan to expose this API when your microservices are distributed over one or more private or public clouds?

When we talk about APIs, we talk about data in motion. That is why we must secure this data that is sent from its source to any location. And you want to secure the application and data without increasing application latency or degrading the user experience.

Now it makes sense to me why VMware acquired Mesh7 in March 2021 and announced Tanzu Service Mesh Enterprise about 6 months later with these additional features:

  • API Security. API security is achieved through API vulnerability detection and mitigation, API baselining, and API drift detection (including API parameters and schema validation)
  • Personally Identifiable Information (PII) segmentation and detection. PII data is segmented using attribute-based access control (ABAC) and is detected via proper PII data detection and tracking, and end-user detection mechanisms.
  • API Security Visibility. API security is monitored using API discovery, security posture dashboards, and rich event auditing.

Final Words

APIs are used to connect different applications. They are also used to aggregate services or functions that can be consumed by other businesses or partners. Modern and containerized applications bring a large number of APIs with them and can be hosted in any cloud.

With Spring Cloud Gateway and Tanzu Service Mesh Enterprise, VMware can deliver application connectivity services that enable improved developer experience and more secure operations.

It took me almost a year to realize the strengths of these (combined) products and why VMware, for example, acquired Mesh7. But it makes sense to me now, even if I do not yet completely understand all the key features of Spring Cloud Gateway and Tanzu Service Mesh.

What Is Unique About Oracle Cloud VMware Solution?

Everyone talks about multi-cloud and in most cases they mean the so-called big 3 that consist of Amazon Web Services (AWS), Microsoft Azure and Google Cloud. If we are looking at the 2021 Gartner Magic Quadrant for Cloud Infrastructure & Platform Services, one can also spot Alibaba Cloud, Oracle, IBM and Tencent Cloud.

VMware has a strategic partnership with 6 of these hyperscalers and all of these 6 public clouds offer VMware’s software-defined data center (SDDC) stack on top of their global infrastructure:

While I mostly have to talk about AWS, AVS, and GCVE, I am finally getting the chance to attend an OCVS customer workshop led by Oracle. That is why I wanted to prepare myself accordingly and share my learnings with you.

Amazon Web Services, Microsoft Azure, and Google Cloud dominate the cloud market, but Oracle has unique capabilities and characteristics that no one else can deliver. Additionally, Oracle Cloud Infrastructure (OCI) has shown an impressive pace of innovation in the past two years, which led to a 16-point increase on Gartner’s solution scorecard for OCI (November 2021, from 62% to 78%) and put them in fourth place behind Alibaba Cloud!

What is Oracle Cloud VMware Solution?

Oracle Cloud VMware Solution, or OCVS, is a result of the strategic partnership announced by VMware and Oracle in September 2019. Like other VMware Cloud solutions such as VMC on AWS, AVS, or GCVE, Oracle Cloud VMware Solution enables customers to run VMware Cloud Foundation on Oracle’s Generation 2 Cloud Infrastructure.

This means that running an on-premises VMware-based infrastructure combined with OCVS should make cloud migrations easier and faster, because both are built on the same foundation of vSphere, vSAN, and NSX.

Oracle Cloud VMware Solution Key Differentiator #1 – Different SDDC Bundles

Customers can choose between a multi-host SDDC (minimum of 3 production hosts) and a single-host SDDC, which is made for test and dev environments. Oracle guarantees a monthly uptime percentage of at least 99.9% for the OCVS service.

OCVS offers three different ESXi software versions and supports the following versions of other components:

  • ESXi 7.0, 6.7 or 6.5
  • vCenter 7.0, 6.7 or 6.5
  • vSAN 7.0, 6.7 or 6.5
  • NSX-T 3.0
  • HCX Advanced 4.0, 3.5 (default option)
  • HCX Enterprise (billed upgrade)

Note: vSphere 6.5 and vSphere 6.7 reach the End of General Support from VMware on October 15, 2022.

Key Differentiator #2 – Customer-Managed & Baremetal Hosts

The VMware Cloud offerings from AWS, Azure or Google are all vendor-controlled and customers get limited access to the VMware hosts and infrastructure components. With Oracle Cloud VMware Solution, customers get baremetal servers and the same operational experience as on-premises. This means full control over VMware infrastructure and its components:

  • SSH access to ESXi
  • Edit vSAN cluster settings
  • Browse datastores; upload and delete files
  • Customer controls the upgrade policy (version, time, defer)
  • Oracle has NO ACCESS after the SDDC provisioning!

Note: According to Oracle it takes about 2 hours to deploy a new SDDC that consists of 3 production hosts.

Customers can choose between Intel- and AMD-based hosts:

  • Two-socket BM.DenseIO2.52 with two CPUs each running 26 cores (Intel)
  • Two-socket BM.DenseIO.E4.128 with two CPUs each running 16 cores (AMD)
  • Two-socket BM.DenseIO.E4.128 with two CPUs each running 32 cores (AMD)
  • Two-socket BM.DenseIO.E4.128 with two CPUs each running 64 cores (AMD)

Details about the compute shapes can be found here.

Key Differentiator #3 – Availability Domains

To provide high throughput and low latency, an OCVS SDDC is deployed by default across a minimum of three fault domains within a single availability domain in a region. But, upon request it is also possible to deploy your SDDC across multiple availability domains (AD), which comes with a few limitations:

  • While OCVS can scale from 3 up to 64 hosts in a single SDDC, Oracle recommends a maximum of 16 ESXi hosts in a multi-AD architecture
  • This architecture can have impacts on vSAN storage synchronization, and rebuild and resync times

Most hyperscalers only let you use two availability zones and fault domains in the same region. With Oracle it is possible to distribute the minimum of 3 hosts across 3 different availability domains. An availability domain consists of one or more data centers within the same region.

Note: Traffic between ADs within a region is free of charge.

Key Differentiator #4 – Networking

Because OCVS is customer-managed and can be operated like your on-premises environment, you also get “full” control over the network. OCVS is installed within the customer’s tenancy, which gives customers the advantage of running their VMware SDDC workloads in the same subnet as OCI native services. This provides lower latency to OCI native services, which is especially beneficial for customers using Exadata, for example.

Another important advantage of this architecture is the capability to create VLAN-backed port groups on your vSphere Distributed Switch (VDS).

Key Differentiator #5 – External Storage

Since March 2022, the OCI File Storage service (NFS) has been certified as secondary storage for an OCVS cluster. This allows customers to scale the storage layer of the SDDC without adding new compute resources at the same time.

And just announced on 22 August 2022, with Oracle’s summer ’22 release, OCVS customers can now connect to a certified OCI Block Storage through iSCSI as a second external storage option.

Block Storage provides high IOPS, and data is stored redundantly across storage servers with built-in repair mechanisms and a 99.99% uptime SLA.

Key Differentiator #6 – Billing Options

OCVS is currently only sold and supported by Oracle. Like with other cloud providers and VMware Cloud offerings, customers have different pricing options depending upon their commitment levels:

  • On-demand (hourly)
  • 1 month
  • 1 year
  • 3 years

The rule of thumb for any hyperscaler says that a 1-year commitment gets around a 30% discount and a 3-year commitment around 50%.

The unique characteristic here is the monthly commitment option, which is calculated with a discount of 16-17% depending on the compute shape.

Note: OCVS is not part (yet) of the VMware Cloud Universal subscription (VMCU).

Key Differentiator #7 – Global Reach

Currently, OCI is available in 39 different cloud regions (21 countries), and Oracle has announced five more by the end of 2022. OCVS is available on day one of each new region, with consistent and predictable pricing that doesn’t vary from region to region.

To compare: AWS has launched 27 different regions with 19 being able to host the VMware Cloud on AWS service. In Switzerland, AWS just opened their new data center without having the VMware Cloud on AWS service available, while OCVS is already available in Zurich.

Use Cases

While OCVS is a great solution for joint VMware and Oracle customers, it is not necessary to use Oracle Cloud Infrastructure native services to benefit from it.

Data Center Expansion

As you just learned before, OCVS is a great fit if you want to maintain the same VMware software versions on-premises and in OCI. The classic use case here is the pure data center expansion scenario, which allows you to stretch your on-premises infrastructure to OCI, without the need to use their native services.

VMware Horizon on OCVS

As I mentioned at the beginning, Oracle Cloud VMware Solution is based on VMware Cloud Foundation and so it is no surprise that Horizon on OCVS is fully supported.

The Horizon deployment on OCVS works a little differently compared to the on-premises installation, and there is no feature parity yet:

  • Horizon on OCVS does not support vGPUs yet.
  • Horizon on OCVS does not support IPv6 yet.
  • Horizon on OCVS does not support vTPM yet. In this situation it is recommended to use shielded OCVS instances.

Note: Support for the NSX Advanced Load Balancer (Avi) is still a roadmap item.

VMware Tanzu for OCVS

Since April 2022, it has been possible for joint VMware and Oracle customers to use Tanzu Standard and its components with Oracle Cloud VMware Solution. Tanzu Standard comes with VMware’s Kubernetes distribution, Tanzu Kubernetes Grid (TKG), and Tanzu Mission Control, which is the right solution for multi-cloud, multi-cluster K8s management.

With TMC you can deploy and manage TKG clusters on vSphere on-premises or on Oracle Cloud VMware Solution. You can even attach existing Kubernetes clusters from other vendors or services like Red Hat OpenShift, Amazon EKS, or Azure Kubernetes Service (AKS).

OCVS Tanzu Standard 

Oracle Cloud VMware Solution FAQ

VMware’s OCVS FAQ can be found here.

Oracle’s OCVS FAQ can be found here.

Additional Resources

Here is a list of additional resources: