Share Your Opinion – Cross-Cloud Mobility and Application Portability

Do you have an opinion about cross-cloud mobility and application portability? If yes, what about it is important to you? How do you intend to achieve this kind of cloud operating model? Is it about flexibility or more about a cloud-exit strategy? Just because we can, does it mean we should? Will it ever become a reality? These are just some of the questions I am looking to answer.

Contact me via michael.rebmann@cloud13.ch. You can also reach me on LinkedIn.

I am writing a book about this topic and looking for cloud architects and decision-makers who would like to sit down with me via Zoom or MS Teams to discuss the challenges of multi-cloud and how to achieve workload mobility or application/data portability. I just started interviewing chief architects, CTOs and cloud architects from VMware, partners, customers and public cloud providers (like Microsoft, AWS and Google) as part of my research.

The questions below led me to the book idea.

What is Cross-Cloud Mobility and Application Portability about? 

Cross-cloud mobility refers to the ability of an organization to move its applications and workloads between different cloud computing environments. This is an important capability for organizations that want to take advantage of the benefits of using multiple cloud providers, such as access to a wider range of services and features, and the ability to negotiate better terms and pricing.

To achieve cross-cloud mobility, organizations need to use technologies and approaches that are compatible with multiple cloud environments. This often involves using open standards and APIs, as well as adopting a microservices architecture and containerization, which make it easier to move applications and workloads between different clouds.
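
To make the abstraction idea more tangible, here is a minimal Python sketch (all names are hypothetical and not tied to any product mentioned here) of coding an application against a provider-neutral interface, so that the cloud-specific SDK becomes a swappable adapter rather than something baked into the application:

```python
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Provider-agnostic interface the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class S3Store(ObjectStore):
    """Adapter for AWS S3 (requires boto3; the bucket name is an example value)."""

    def __init__(self, bucket: str):
        import boto3  # imported lazily so other adapters don't need it
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()


class LocalStore(ObjectStore):
    """Adapter for a local or on-premises filesystem target (e.g. an NFS mount)."""

    def __init__(self, root: str):
        import pathlib
        self._root = pathlib.Path(root)

    def put(self, key: str, data: bytes) -> None:
        path = self._root / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self._root / key).read_bytes()


# The application only ever sees ObjectStore, so swapping providers becomes a
# configuration change rather than a code change:
def archive_report(store: ObjectStore, report: bytes) -> None:
    store.put("reports/2022-11.pdf", report)
```

Containers play the same role one level up: the application and its dependencies are packaged once and can then be scheduled by whichever platform the target cloud provides.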

Another key aspect of cross-cloud mobility is the ability to migrate data between different clouds without compromising its integrity. This requires robust data migration tools and processes, as well as careful planning and testing to ensure that the migrated data is complete and accurate.
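
As an illustration of that verification step, here is a small Python sketch that compares checksums of a source dataset and its migrated copy. It assumes both are reachable as local directories (for example via mounts or exports) and is only meant to show the idea, not to replace a proper migration tool:

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large datasets don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_migration(source_dir: Path, target_dir: Path) -> list[str]:
    """Return the relative paths whose checksums differ or that are missing."""
    mismatches = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source_dir)
        dst = target_dir / rel
        if not dst.exists() or sha256_of(src) != sha256_of(dst):
            mismatches.append(str(rel))
    return mismatches


# Example: compare a local export of the source dataset against the copy
# pulled back from the target cloud for verification.
# print(verify_migration(Path("/export/source"), Path("/export/target")))
```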

In addition to the technical challenges of achieving cross-cloud mobility, there are also organizational and business considerations. For example, organizations need to carefully evaluate their use of different cloud providers, and ensure that they have the necessary contracts and agreements in place to allow for the movement of applications and workloads between those providers.

Overall, cross-cloud mobility is an important capability for organizations that want to take advantage of the benefits of using multiple cloud providers. By using the right technologies and approaches, organizations can easily and securely move their applications (application portability) and workloads between different clouds, and take advantage of the flexibility and scalability of the cloud.

What is a Cloud-Exit Strategy?

A cloud-exit strategy is a plan for transitioning an organization’s applications and workloads away from a cloud computing environment. This can be necessary for a variety of reasons, such as when an organization wants to switch to a different cloud provider, when it wants to bring its applications and data back in-house, or when it simply no longer needs to use the cloud. A cloud-exit strategy typically includes several key components, such as:

  1. Identifying the specific applications and workloads that will be transitioned away from the cloud, and determining the timeline for the transition.
  2. Developing a plan for migrating the data and applications from the cloud to the new environment, including any necessary data migration tools and processes.
  3. Testing the migration process to ensure that it is successful and that the migrated applications and data are functioning properly.
  4. Implementing any necessary changes to the organization’s network and infrastructure to support the migrated applications and data.
  5. Ensuring that the organization has a clear understanding of the costs and risks associated with the transition, and that it has a plan in place to mitigate those risks.

By having a well-defined cloud-exit strategy, organizations can ensure that they are able to smoothly and successfully transition away from a cloud computing environment when the time comes.

What is a Cloud-Native Application?

A cloud-native application is a type of application that is designed to take advantage of the unique features and characteristics of cloud computing environments. This typically includes using scalable, distributed, and highly available components, as well as leveraging the underlying infrastructure of the cloud to deliver a highly performant and resilient application. Cloud-native applications are typically built using a microservices architecture, which allows for flexibility and scalability, and are often deployed using containers to make them portable across different cloud environments.
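
For illustration only, here is a minimal Python sketch of what cloud-native friendliness can look like at the code level: a stateless service that takes its configuration from the environment and exposes a health endpoint for the platform's probes. The variable and endpoint names are just examples, not a standard:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json
import os

# Configuration comes from the environment so the same container image can run
# unchanged in any cloud (the variable names here are only examples).
GREETING = os.environ.get("GREETING", "hello")
PORT = int(os.environ.get("PORT", "8080"))


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            # Liveness/readiness endpoint for the platform's health probes.
            body = b"ok"
        else:
            body = json.dumps({"message": GREETING}).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Stateless by design: any state lives in external services, so the
    # platform can scale replicas horizontally or reschedule them freely.
    HTTPServer(("0.0.0.0", PORT), Handler).serve_forever()
```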

Does Cloud-Native mean an application needs to perform equally well on any cloud?

No, being cloud-native does not necessarily mean that an application will perform equally well on any cloud. While cloud-native applications are designed to be portable and scalable, the specific cloud environment in which they are deployed can still have a significant impact on their performance and behavior.

For example, some cloud providers may offer specific services or features that can be leveraged by a cloud-native application to improve its performance, while others may not. Additionally, the underlying infrastructure of different cloud environments can vary, which can affect the performance and availability of a cloud-native application. As a result, it is important for developers to carefully consider the specific cloud environment in which their cloud-native application will be deployed, and to optimize its performance for that environment.

How can you avoid a cloud lock-in?

A cloud lock-in refers to a situation where an organization becomes dependent on a particular cloud provider and is unable to easily switch to a different provider without incurring significant costs or disruptions. To avoid a cloud lock-in, organizations can take several steps, such as:

  1. Choosing a cloud provider that offers tools and services that make it easy to migrate to a different provider, such as data migration tools and APIs for integrating with other cloud services.
  2. Adopting a multi-cloud strategy, where the organization uses multiple cloud providers for different workloads or applications, rather than relying on a single provider.
  3. Ensuring that the organization’s applications and data are portable, by using open standards and technologies that are supported by multiple cloud providers.
  4. Regularly evaluating the organization’s use of cloud services and the contracts with its cloud provider, to ensure that it is getting the best value and flexibility.
  5. Developing a cloud governance strategy that includes processes and policies for managing the organization’s use of cloud services, and ensuring that they align with the organization’s overall business goals and objectives.

By taking these steps, organizations can avoid becoming overly dependent on a single cloud provider and maintain the flexibility to switch to a different provider if needed.

Final Words

Multi-cloud is very complex and has different layers like compute, storage, network, security, monitoring and observability, operations, and cost management. Add topics like open-source software, databases, Kubernetes, developer experience, and automation to the mix, and we will most probably have enough to discuss. 🙂

Looking forward to hearing from you! 

VMware Cloud Foundation – A Technical Overview (based on VCF 4.5)

This technical overview supersedes the previous version, which was based on VMware Cloud Foundation 4.3, and now covers the capabilities and enhancements delivered with VCF 4.5.

What is VMware Cloud Foundation (VCF)?

VMware Cloud Foundation is a multi-cloud platform that provides a full-stack hyperconverged infrastructure (HCI) that is made for modernizing data centers and deploying modern container-based applications. VCF is based on different components like vSphere (compute), vSAN (storage), NSX (networking), and some parts of the Aria Suite (formerly vRealize Suite). The idea of VCF follows a standardized, automated, and validated approach that simplifies the management of all the needed software-defined infrastructure resources.

This stack provides customers with consistent infrastructure and operations in a cloud operating model that can be deployed on-premises, at the edge, or in the public cloud.

Tanzu Standard edition is included in the VMware Cloud Foundation with Tanzu editions (Standard, Advanced, and Enterprise).

Note: The VMware Cloud Foundation Starter, Standard, Advanced and Enterprise editions do NOT include Tanzu Standard.

What software is being delivered in VMware Cloud Foundation?

The BoM (bill of materials) changes with each VCF release. With VCF 4.5, the following components and software versions are included:

  • VMware SDDC Manager 4.5
  • vSphere 7.0 Update 3g
  • vCenter Server 7.0 Update 3h
  • vSAN 7.0 Update 3g
  • NSX-T 3.2.1.2
  • VMware Workspace ONE Access 3.3.6
  • vRealize Log Insight 8.8.2
  • vRealize Operations 8.8.2
  • vRealize Automation 8.8.2
  • (vRealize Network Insight)

Note: Only one vCenter Server license is required for all vCenter Servers deployed in a VMware Cloud Foundation system.

VMware Cloud Foundation Components

What is VMware Cloud Foundation+ (VCF+)?

With the launch of VMware Cloud Foundation (VCF) 4.5 in early October 2022, VCF introduced new consumption and licensing models.

VCF+ is the next cloud-connected SaaS offering, building on vSphere+ and vSAN+. VCF+ delivers cloud connectivity for centralized management and introduces a new consumption-based OpEx model for consuming VMware Cloud services.

VMware Cloud Foundation Consumption Models

VCF+ components are cloud-entitled, metered, and billed; there are no license keys in VCF+. Once a customer is onboarded to VCF+, the components are entitled from the cloud and periodically metered and billed.

VMware Cloud Foundation+

The following components are included in VCF+:

  • vSphere+
  • vSAN+
  • NSX (term license)
  • SDDC Manager
  • Aria Universal Suite (formerly vRealize Cloud Universal aka vRCU)
  • Tanzu Standard
  • vCenter (included as part of vSphere+)

Note: In a given VCF+ instance, you can only have VCF+ licensing; you cannot mix VCF-S (term) or VCF perpetual licenses with VCF+.

What are other VCF subscription offerings?

VMware Cloud Foundation Subscription (VCF-S) is an on-premises (disconnected) term subscription offering that is available standalone, uses physical core metrics, and comes with term subscription license keys.

VMware Cloud Foundation Subscription TLSS

You can also purchase VCF+ and VCF-S licenses as part of the VMware Cloud Universal program.

Note: You can mix VCF-S and perpetual license keys in an environment as long as each workload domain uses only one type of key (either subscription or perpetual).

Which VMware Cloud Foundation editions are available?

A VCF comparison matrix can be found here.

VMware Cloud Foundation Architecture

VCF is made for greenfield deployments (brownfield not supported) and supports two different architecture models:

  • Standard Architecture
  • Consolidated Architecture

VMware Cloud Foundation Deployment Options

The standard architecture separates management workloads and lets them run on a dedicated management workload domain. Customer workloads are deployed on a separate virtual infrastructure workload domain (VI workload domain). Each workload domain is managed by a separate vCenter Server instance, which allows autonomous licensing and lifecycle management.

VMware Cloud Foundation Single Site Deployment

Note: The standard architecture is the recommended model because it separates management workloads from customer workloads.

Customers with a small environment (or a PoC) can start with a consolidated architecture. This allows you to run customer and management workloads together on the same workload domain (WLD).

Note: The management workload domain’s default cluster datastore must use vSAN. Other WLDs can use vSAN, NFS, FC, and vVols for the principal storage.

VMware Cloud Foundation Storage Options

What is a vSAN Stretched Cluster?

vSAN stretched clusters extend a vSAN cluster from a single site to two sites for a higher level of availability and inter-site load balancing.

VMware Cloud Foundation Stretched Cluster

Does VCF provide flexible workload domain sizing?

Yes, that’s possible. You can license the WLDs based on your needs and use the editions that make the most sense depending on your use cases.

VMware Cloud Foundation Flexible Licensing

How many physical nodes are required to deploy VMware Cloud Foundation?

A minimum of four physical nodes is required to start in a consolidated architecture or to build your management workload domain. Four nodes are required to ensure that the environment can tolerate a failure while another node is being updated.

VI workload domains require a minimum of three nodes.

This means that, to start with the standard architecture, you need to meet the requirements (and have the money) for at least seven physical nodes.

What are the minimum hardware requirements?

These minimum specs have been listed for the management WLD since VCF 4.0 (September 2020):

VMware Cloud Foundation Hardware Requirements

Can I mix vSAN ReadyNodes and Dell EMC VxRail deployments?

No. This is not possible.

What about edge/remote use cases?

If you would like to deploy VMware Cloud Foundation workload domains at a remote site, you can deploy so-called “VCF Remote Clusters”. These remote workload domains are managed by the VCF instance at the central site, and you can perform the same full-stack lifecycle management for the remote sites from the central SDDC Manager.

VMware Cloud Foundation Remote Cluster

Prerequisites to deploy remote clusters can be found here.

Note: If vSAN is used, VCF only supports a minimum of 3 nodes and a maximum of 4 nodes per VCF Remote Cluster. If NFS, vVols, or Fibre Channel is used as principal storage, then VCF supports a minimum of 2 and a maximum of 4 nodes.

Important: Remote clusters and remote workload domains are not supported when VCF+ is enabled.

Does VCF support HCI Mesh?

Yes. VMware Cloud Foundation 4.2 and later supports sharing remote datastores with HCI Mesh for VI workload domains.

HCI Mesh is a software-based approach for disaggregation of compute and storage resources in vSAN. HCI Mesh brings together multiple independent vSAN clusters by enabling cross-cluster utilization of remote datastore capacity within vCenter Server. HCI Mesh enables you to efficiently utilize and consume data center resources, which provides simple storage management at scale.

Note: At this time, HCI Mesh is not supported with VCF ROBO.

What is SDDC Manager?

SDDC Manager is a preconfigured virtual appliance that is deployed in the management workload domain. It is used for creating workload domains, provisioning additional virtual infrastructure, and performing lifecycle management of all the software-defined data center (SDDC) management components.

VMware Cloud Foundation SDDC Manager

You use SDDC Manager in VMware Cloud Foundation to perform the following operations:

  • Commissioning or decommissioning ESXi hosts
  • Deployment of workload domains
  • Extension of clusters in the management and workload domains with ESXi hosts
  • Adding clusters to the management domain and workload domains
  • Support for network pools for host configuration in a workload domain
  • Storage of product licenses
  • Deployment of vRealize Suite components
  • Lifecycle management of the virtual infrastructure components in all workload domains, and of vRealize Suite Lifecycle Manager components
  • Certificate management
  • Password management and rotation
  • NSX-T Edge cluster deployment in the management domain and workload domains
  • Backup configuration

VMware Cloud Foundation SDDC Manager Dashboard

How many resources does the VCF management WLD need during the bring-up process?

We know that VCF includes vSphere (ESXi and vCenter), vSAN, SDDC Manager, NSX-T and, optionally, some components of the vRealize Suite. The following table should give you an idea of what the resource requirements look like to get VCF up and running:

VMware Cloud Foundation Resource Requirements

If you are interested in how many resources the Aria Suite (formerly vRealize Suite) will consume in the management workload domain, have a look at this table:

VMware Cloud Foundation Resource Requirements vRealize

How can I migrate my workloads from a non-VCF environment to a new VCF deployment?

VMware HCX provides a path to modernize from a legacy data center architecture by migrating to VMware Cloud Foundation.

VMware Cloud Foundation HCX

What is NSX Advanced Load Balancer?

NSX Advanced Load Balancer (NSX ALB), formerly known as Avi, is a solution that provides advanced load balancing capabilities for VMware Cloud Foundation.

Which security add-ons are available with VMware Cloud Foundation?

VMware offers different workload and network security add-ons to complement VCF.

Can I get VCF as a managed service offering?

Yes, this is possible. Please have a look at Data Center as a Service based on VMware Cloud Foundation.

Can I install VCF in my home lab?

Yes, you can. With the VCF Lab Constructor (VLC), you can deploy an automated VCF instance in a nested configuration. There is also a VLC Slack community for support.

VCF Lab Constructor

Where can I find more information about VCF?

Please consult the VMware Cloud Foundation 4.5 FAQ for more information about VMware Cloud Foundation.

VMware Explore Europe 2022 Major Announcements

VMware Explore Europe 2022 is history. This year felt different and very special! Rooms were fully booked, and people were queuing up in the hallways. The crowd had a HUGE interest in technical sessions from known speakers like Cormac Hogan, Frank Denneman, Duncan Epping, William Lam, and many more!

Compared to VMware Explore US, there were not that many major announcements, but I thought it might be helpful again to list the ones that seem most interesting and relevant.

VMware Aria Hub Free Tier

For me, the biggest and most important announcement was the Aria Hub free tier. I am convinced that Aria Hub will be the next big thing for VMware and I am sure that it will change how the world manages a multi-cloud infrastructure.

VMware Aria Hub is a multi-cloud management platform that unifies the management disciplines of cost, performance, configuration, and delivery automation with a common control plane and data model for any cloud, any platform, any tool, and every persona. It helps you align multiple teams and solutions on a common understanding of resources, relationships, historical changes, applications, and accounts, fundamental to managing a multi-cloud environment.

The new free tier enables customers to inventory, map, filter, and search resources from up to two of their native public cloud accounts, currently from either AWS or Azure. It also helps you understand the relationships of your resources to other resources, policies, and other key components in your public cloud and Kubernetes environments. WOW!

Aria Hub Free Tier Announcement: https://blogs.vmware.com/management/2022/11/announcing-vmware-aria-hub-free-tier.html 

Aria Hub Free Tier Technical Overview: https://blogs.vmware.com/management/2022/11/aria-hub-free-tier-technical-overview 

If you want to sign up for the free tier, please follow this link: https://www.vmware.com/learn/1732750_REG.html 

Tanzu Mission Control On-Premises

Many customers have asked for it, and now it is coming! Tanzu Mission Control (TMC) will become available on-premises for sovereign cloud partners/providers and enterprise customers!

There is a private beta coming. Hence, I cannot provide more information for now.

Tanzu Kubernetes Grid 2.1

At VMware Explore US 2022, VMware announced Tanzu Kubernetes Grid (TKG) 2.0, and at Explore Europe 2022, they announced TKG 2.1, which adds support for Oracle Cloud Infrastructure (OCI). Additionally, it will now also have the option of leveraging VMs as the management cluster. Both options will be familiar, and they now support a single, unified way of creating clusters using a new API called ClusterClass.

TKG 2.1 Announcement: https://tanzu.vmware.com/content/blog/tanzu-kubernetes-grid-2-1 

Tanzu Service Mesh Advanced Enhancements

VMware also unveiled new enhancements for Tanzu Service Mesh (TSM) that bring VM discovery and integration into the mesh, making it possible to combine VMs and containers in the same service mesh for secure communication and consistent policy.

VMware Cloud on Equinix Metal (VMC-E)

The last thing I want to highlight is the VMC-E announcement. It is a combination of VMware Cloud IaaS with Equinix Metal hardware as-a-service, which can be deployed in over 30 Equinix global data centers.

VMware Cloud on Equinix Metal is a great option for enterprises that want the flexibility and performance of the public cloud but whose business requirements prevent moving data or applications there. It offers full compatibility and consistency with on-premises and VMware Cloud operational models and policies, as well as zero-downtime migration.

VMware Cloud on Equinix Metal is a fully managed solution by VMware (delivered, operated, managed, supported).

VMC-E Announcement: https://blogs.vmware.com/cloud/2022/11/07/introducing-vmware-cloud-on-equinix-metal 

VMC-E Technical Preview: https://www.youtube.com/watch?v=-WpGfrxW39Y&feature=youtu.be&ab_channel=VMwareCloud  

Why AWS Developers Love VMware’s Lift and Learn Approach with VMware Cloud on AWS

Learn why AWS developers love VMware Cloud on AWS and want to present it to their internal platform team.

I had booth duty at the AWS Swiss Cloud Day 2022 and finally had the chance to talk to people who normally do not talk to VMware folks like me. I believe not a single infrastructure or cloud architect talked to me the whole day; I was approached only by Linux administrators and developers. After I explained our partnership and capabilities with AWS to them, they were mind-blown!

“Michael, what is VMware’s business with AWS?”

“Why are you here at the event? You are only a hypervisor company, right?”

“Haha, what are you guys doing here?”

“What is the reason for VMware coming here? You are a competitor of AWS, no?”

Developers don’t want to do ops

Look, the developers did not know that I have no developer background and have spent most of my time with data centers. I was already building true hybrid clouds almost 10 years ago, before we had all the different hyperscalers and providers like Amazon Web Services. After I passed the AWS Solutions Architect Associate and AWS Developer Associate exams a few months ago, I finally understood better how complex software development and cloud migrations must be.

It is said that developers do not want to deal with operational concerns. Other developers, however, want to understand the production environment to make sure that their code works. Additionally, the shift-left approach puts more pressure on developers’ shoulders; they do not have time for ops.

But after talking to a few developers, I had a light-bulb moment and the following truths came to the surface:

  • Developers had no clue how VMware can ease some of their pain
  • Developers liked my talk about infrastructure and ops
  • I need to bring more business cards to such events!!!

Developers are interested in infrastructure

Remember the questions from above? To answer the questions about VMware’s relevance and relationship with AWS, I used the first two minutes to explain VMware Cloud on AWS to them. Yes, I started talking about infrastructure and not about Tanzu, developer experience, our open source projects and contributions, or Tanzu Labs. The people visiting us at the booth were impressed that VMware and AWS even have specialists focusing solely on this solution. Still, they were not yet convinced that VMware can do something good for them.

VMC on AWS Overview

Okay, I got it. So what? What is the value?

How would someone with a VMware background answer such a question? Most of us usually see this situation as the right moment to talk about use cases like:

  • Data center exit or refresh (infrastructure modernization)
  • Burst Capacity
  • Low latency to AWS native services
  • Application modernization
  • Cloud migrations

So, which of these use cases are relevant and important to developers?

The developer’s story

The developers confirmed some statements of mine:

  • Cloud migrations take a long time and are not easy
  • Lift & shift migrations involve a lot of manual tasks
  • They either have to refactor their app on-premises first and then move to the public cloud or start from scratch on AWS

I will say it again: software development is complex. Developers need to modernize existing applications on-premises and then migrate them somehow to AWS, because you cannot always start from scratch.

Imagine this: you have an application that has been deployed and operated in your data centers for years. Most probably you don’t even understand all the dependencies and configurations anymore after all this time. Maybe you are not even the person who initially developed this application.

Note: The only thing that can be assumed is that your infrastructure is most likely running on a VMware-based cloud.

Now you need to start modernizing this application, which takes months or even years. When you are done with your task, you have to figure out how to bring this application over to AWS. Because you had to spend all your time refactoring this application, there was no time to build new AWS skills. At least not during normal office hours.

Lift and shift is easy, right?

Nope. If it were easy, why do migrations in most cases take longer and cost more than expected? When you have to exit a data center for any reason and need to bring some of your workloads over to a public cloud like AWS, then lift and shift is the best and fastest approach. But somehow organizations do not see much value in using this approach during their cloud adoption. At least not with VMware.

But if a consulting firm or AWS themselves tell a customer that lift and shift is a good idea, the customer suddenly sees the benefit, even if they have to add millions to their estimated budget. Consulting firms are not cheap, and neither are lift and shift projects with different underlying technologies, such as VMware as the source on-premises and AWS (or any other public cloud provider) as the destination. But hey, good for your company if it has this extra money.

AWS Lift and Shift

Lift and shift brings no innovation

Different organizations have different agendas and goals. For some, simply running their virtual machines and containers and using cloud-native services is enough, no matter the costs. Others expect that economies of scale will bring the necessary cost advantages over time while they implement and deliver innovation.

That is why some companies see lift and shift as an approach that brings no innovation: it is complex, takes longer, costs more, and in the end you are not (yet) using cloud-native services.

It is time to change the perspective and the narrative, because I understand why you might think that lift and shift brings no innovation.

Forget Lift and Shift – Do Lift and Learn

So, our use case here is application modernization. A developer needs to modernize and migrate an application, ideally at the same time. No wonder some of you think that lift and shift brings no innovation: you modernize later.

Developers struggle. They struggle very much. After I explained VMware Cloud on AWS and mentioned that a lift and learn approach is a better way that makes their life much easier, they asked me for my business card. It took less than 24 hours until I received the first two e-mails asking to organize a meeting.

Give developers more time.

Developers and ops teams need enough time to skill up, to learn, and to understand the new things. You have to break and fix things in the new world first before you can truly understand it. They loved the idea of lift and learn:

  1. Lift and shift your applications first with VMware Cloud on AWS. A true hybrid cloud approach, where the technology format is the same (on-prem and on AWS), will speed up your cloud adoption timeline and therefore save costs. Your workload now runs in the public cloud. Check!
  2. Since the cloud migration didn’t take 12 months, but more something like 3-4 months, your developers can use the additional time to learn and understand how to build things on AWS! The developers are happy because they have less pressure now and can play around with new stuff.
  3. After they have understood the new world, they can start modernizing different parts of the application. What has started with a legacy/traditional application, becomes a hybrid application and eventually a fully modernized app over time.

The stepping stone to becoming cloud native

Some of you may now think that VMware and its VMC on AWS solution are just a temporary stop before going completely cloud native. Let us take a step back again quickly.

When I joined VMware in 2018, they talked about 70 million workloads running on their platform. This year at VMware Explore (formerly VMworld), they showed a number of 85 million VMware-based workloads. This is proof to me that:

  • cloud adoption does not happen as fast as expected,
  • on-premises data centers and VMware are not legacy,
  • VMware is more than only a “hypervisor” company,
  • cloud native and container-based workloads do not always make sense, and
  • virtual machines are still going to exist for a while.

These are some pointers to why AWS has this partnership with VMware. As you can see, VMware is very strategic and relevant and should be part of every cloud and application modernization conversation.

Call to action

Just because a lot of people say that developers do not care about ops and are not interested in talking to “infrastructure guys” like me does not mean that this assumption is true. My conversations at AWS Swiss Cloud Day 2022 clearly showed that developers need to know more about the options and value that companies like VMware can bring to the table.

Do not let developers only talk to developers. Do lift and learn.

What Is Unique About Oracle Cloud VMware Solution?

Everyone talks about multi-cloud, and in most cases they mean the so-called big three: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. If we look at the 2021 Gartner Magic Quadrant for Cloud Infrastructure & Platform Services, we can also spot Alibaba Cloud, Oracle, IBM, and Tencent Cloud.

VMware has a strategic partnership with six of these hyperscalers, and all six of these public clouds offer VMware’s software-defined data center (SDDC) stack on top of their global infrastructure.

While I mostly have to talk about AWS, AVS, and GCVE, I am finally getting the chance to attend an OCVS customer workshop led by Oracle. That is why I wanted to prepare myself accordingly and share my learnings with you.

Amazon Web Services, Microsoft Azure and Google Cloud dominate the cloud market, but Oracle has unique capabilities and characteristics that no one else can deliver. Additionally, Oracle Cloud Infrastructure (OCI) has shown an impressive pace of innovation in the past two years, which led to a 16-point increase on Gartner’s solution scorecard for OCI (November 2021, from 62% to 78%) and put them in fourth place, behind Alibaba Cloud!

What is Oracle Cloud VMware Solution?

Oracle Cloud VMware Solution, or OCVS, is a result of the strategic partnership announced by VMware and Oracle in September 2019. Like the other VMware Cloud solutions such as VMC on AWS, AVS, or GCVE, Oracle Cloud VMware Solution enables customers to run VMware Cloud Foundation on Oracle’s Generation 2 Cloud Infrastructure.

This means that combining an on-premises VMware-based infrastructure with OCVS should make cloud migrations easier and faster, because it is the same foundation with vSphere, vSAN, and NSX.

Oracle Cloud VMware Solution Key Differentiator #1 – Different SDDC Bundles

Customers can choose between a multi-host SDDC (minimum of 3 production hosts) and a single-host SDDC, which is made for test and dev environments. Oracle guarantees a monthly uptime percentage of at least 99.9% for the OCVS service.

OCVS offers three different ESXi software versions and supports the following versions of other components:

  • ESXi 7.0, 6.7 or 6.5
  • vCenter 7.0, 6.7 or 6.5
  • vSAN 7.0, 6.7 or 6.5
  • NSX-T 3.0
  • HCX Advanced 4.0, 3.5 (default option)
  • HCX Enterprise (billed upgrade)

Note: vSphere 6.5 and vSphere 6.7 reach the End of General Support from VMware on October 15, 2022.

Key Differentiator #2 – Customer-Managed & Baremetal Hosts

The VMware Cloud offerings from AWS, Azure or Google are all vendor-controlled and customers get limited access to the VMware hosts and infrastructure components. With Oracle Cloud VMware Solution, customers get baremetal servers and the same operational experience as on-premises. This means full control over VMware infrastructure and its components:

  • SSH access to ESXi
  • Edit vSAN cluster settings
  • Browse datastores; upload and delete files
  • Customer controls the upgrade policy (version, time, defer)
  • Oracle has NO ACCESS after the SDDC provisioning!

Note: According to Oracle it takes about 2 hours to deploy a new SDDC that consists of 3 production hosts.

Customers can choose between Intel- and AMD-based hosts:

  • Two-socket BM.DenseIO2.52 with two CPUs each running 26 cores (Intel)
  • Two-socket BM.DenseIO.E4.128 with two CPUs each running 16 cores (AMD)
  • Two-socket BM.DenseIO.E4.128 with two CPUs each running 32 cores (AMD)
  • Two-socket BM.DenseIO.E4.128 with two CPUs each running 64 cores (AMD)

Details about the compute shapes can be found here.

Key Differentiator #3 – Availability Domains

To provide high throughput and low latency, an OCVS SDDC is deployed by default across a minimum of three fault domains within a single availability domain in a region. Upon request, it is also possible to deploy your SDDC across multiple availability domains (ADs), which comes with a few limitations:

  • While OCVS can scale from 3 up to 64 hosts in a single SDDC, Oracle recommends a maximum of 16 ESXi hosts in a multi-AD architecture
  • This architecture can have impacts on vSAN storage synchronization, and rebuild and resync times

Most hyperscalers only let you use two availability zones and fault domains in the same region. With Oracle, it is possible to distribute the minimum of three hosts across three different availability domains. An availability domain consists of one or more data centers within the same region.

Note: Traffic between ADs within a region is free of charge.

Key Differentiator #4 – Networking

Because OCVS is customer-managed and can be operated like your on-premises environment, you also get “full” control over the network. OCVS is installed within the customer’s tenancy, which gives customers the advantage of running their VMware SDDC workloads in the same subnet as OCI native services. This provides lower latency to the OCI native services, especially for customers that are using Exadata, for example.

Another important advantage of this architecture is the capability to create VLAN-backed port groups on your vSphere Distributed Switch (VDS).

Key Differentiator #5 – External Storage

Since March 2022, the OCI File Storage service (NFS) has been certified as secondary storage for an OCVS cluster. This allows customers to scale the storage layer of the SDDC without adding new compute resources at the same time.

And, as announced on 22 August 2022 with Oracle’s summer ’22 release, OCVS customers can now connect to certified OCI Block Storage through iSCSI as a second external storage option.

OCI Block Storage provides high IOPS, and data is stored redundantly across storage servers with built-in repair mechanisms and a 99.99% uptime SLA.

Key Differentiator #6 – Billing Options

OCVS is currently only sold and supported by Oracle. Like with other cloud providers and VMware Cloud offerings, customers have different pricing options depending upon their commitment levels:

  • On-demand (hourly)
  • 1 month
  • 1 year
  • 3 years

The rule of thumb for any hyperscaler is that a 1-year commitment gets around a 30% discount and a 3-year commitment around a 50% discount.

The unique characteristic here is the monthly commitment option, which is calculated with a discount of 16-17%, depending on the compute shape.
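
A quick back-of-the-envelope calculation of those rules of thumb (the list price below is made up; only the discount percentages come from the paragraphs above):

```python
# Hypothetical on-demand list price per host-hour, purely for illustration.
ON_DEMAND_HOURLY = 10.00

# Discount levels taken from the rules of thumb described in the text.
DISCOUNTS = {
    "on-demand": 0.00,
    "1 month":   0.165,  # roughly 16-17%
    "1 year":    0.30,
    "3 years":   0.50,
}

for term, discount in DISCOUNTS.items():
    effective = ON_DEMAND_HOURLY * (1 - discount)
    print(f"{term:>9}: {effective:.2f} per host-hour ({discount:.0%} off)")
```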

Note: OCVS is not part (yet) of the VMware Cloud Universal subscription (VMCU).

Key Differentiator #7 – Global Reach

Currently, OCI is available in 39 different cloud regions (21 countries), and Oracle has announced five more by the end of 2022. On day one of each new region, OCVS is available with consistent and predictable pricing that doesn’t vary from region to region.

For comparison: AWS has launched 27 regions, 19 of which can host the VMware Cloud on AWS service. In Switzerland, AWS has just opened its new region without the VMware Cloud on AWS service being available, while OCVS is already available in Zurich.

Use Cases

While OCVS is a great solution for joint VMware and Oracle customers, it is not necessary for customers to use Oracle Cloud Infrastructure native services alongside it.

Data Center Expansion

As you just learned, OCVS is a great fit if you want to maintain the same VMware software versions on-premises and in OCI. The classic use case here is the pure data center expansion scenario, which allows you to stretch your on-premises infrastructure to OCI without the need to use its native services.

VMware Horizon on OCVS

As I mentioned at the beginning, Oracle Cloud VMware Solution is based on VMware Cloud Foundation and so it is no surprise that Horizon on OCVS is fully supported.

The Horizon deployment on OCVS works a little bit differently compared to the on-premises installation, and there is no feature parity yet:

  • Horizon on OCVS does not support vGPUs yet.
  • Horizon on OCVS does not support IPv6 yet.
  • Horizon on OCVS does not support vTPM yet. In this situation it is recommended to use shielded OCVS instances.

Note: The support of NSX Advanced Load Balancer (Avi) is still a roadmap item.

VMware Tanzu for OCVS

Since April 2022 it is possible for joint VMware and Oracle customers to use Tanzu Standard and its components with Oracle Cloud VMware Solution. Tanzu Standard comes with VMware’s Kubernetes distribution Tanzu Kubernetes Grid (TKG) and Tanzu Mission Control, which is the right solution for multi-cloud, multi-cluster K8s management.

With TMC you can deploy and manage TKG clusters on vSphere on-premises or on Oracle Cloud VMware Solution. You can even attach existing Kubernetes clusters from other vendors, like Red Hat OpenShift, Amazon EKS, or Azure Kubernetes Service (AKS).

OCVS Tanzu Standard 

Oracle Cloud VMware Solution FAQ

VMware’s OCVS FAQ can be found here.

Oracle’s OCVS FAQ can be found here.

Additional Resources

Here is a list of additional resources:

VMware Explore US 2022 – VMware Projects and Day 2 Announcements

Last year at VMworld 2021, VMware mentioned and announced a lot of (new) projects they are working on. What happened to them and which new VMware projects have been mentioned this year at VMware Explore so far?

Project Ensemble – VMware Aria Hub

VMware unveiled their unified multi-cloud management portfolio called VMware Aria, which provides a set of end-to-end solutions for managing the cost, performance, configuration, and delivery of infrastructure and cloud native applications.

VMware Aria is anchored by VMware Aria Hub (formerly known as Project Ensemble), which provides centralized views and controls to manage the entire multi-cloud environment, and leverages VMware Aria Graph to provide a common definition of applications, resources, roles, and accounts.

VMware Aria Graph provides a single source of truth that is updated in near-real time. Other solutions on the market were designed in a slower moving era, primarily for change management processes and asset tracking. By contrast, VMware Aria Graph is designed expressly for cloud-native operations.

VMware Explore US 2022 Session: A Unified Cloud Management Control Plane – Update on Project Ensemble [CMB2210US]

Project Monterey – DPU-based Acceleration for NSX

Introduced last year as Project Monterey and available until now as a technology preview, the GA version, called DPU-based Acceleration for NSX, was announced by VMware yesterday.

Project Arctic – vSphere+ and vSAN+

Project Arctic was introduced last year as a technology preview and was described as “the next step in the evolution of vSphere in a multi-cloud world”. What started with the idea of bringing VMware Cloud services closer to vSphere has evolved into an even more interesting and enterprise-ready offering called vSphere+ and vSAN+. It includes developer services that consist of the Tanzu Kubernetes Grid runtime, Tanzu Mission Control Essentials, and NSX Advanced Load Balancer Essentials. VMware is going to add more VMware Cloud add-on services in the future. Additionally, VMware even introduced VMware Cloud Foundation+.

Project Iris – Application Transformer for VMware Tanzu

VMware mentioned Project Iris very briefly last year at VMworld. In February 2022, Project Iris became generally available and has since been known as Application Transformer for VMware Tanzu.

Project Northstar

At VMware Explore on day 1, VMware introduced Project Northstar, which will provide customers a centralized cloud console that gives them instant access to networking and security services, such as network and security policy controls, Network Detection and Response (NDR), NSX Intelligence, Advanced Load Balancing (ALB), Web Application Firewall (WAF), and HCX. Project Northstar will be able to apply consistent networking and security policies across private cloud, hybrid cloud, and multi-cloud environments.

VMware Explore US 2022 Session: Multi-Cloud Networking and Security with NSX [NETB2154US]

Project Watch

At VMware Explore on day 1, VMware unveiled Project Watch, a new approach to multi-cloud networking and security that will provide advanced app-to-app policy controls to help with continuous risk and compliance assessment. In technology preview, Project Watch will help network security and compliance teams continuously observe, assess, and dynamically mitigate risk and compliance problems in composite multi-cloud applications.

Project Trinidad

Also announced at VMware Explore on day 1 and explained further on day 2, Project Trinidad extends VMware’s API security and analytics by deploying sensors on Kubernetes clusters and uses machine learning with business logic inference to detect anomalous behavior in east-west traffic between microservices.

Project Narrows

Project Narrows introduces a unique addition to Harbor, allowing end users to assess the security posture of Kubernetes clusters at runtime. Previously undetected images will be scanned at the time they are introduced to a cluster, so vulnerabilities can now be caught, images can be flagged, and workloads can be quarantined.

Adding dynamic scanning to your software supply chain with Harbor through Project Narrows is critical. It allows greater awareness of and control over your running workloads than the traditional method of simply updating and storing workloads.

VMware is open sourcing the initial capabilities of Project Narrows on GitHub as the Cloud Native Security Inspector (CNSI) Project.

VMware Explore US 2022 Session: Running App Workloads in a Trusted, Secure Kubernetes Platform [VIB1443USD]

Project Keswick

Also introduced on day 2, Project Keswick is about simplifying edge deployments at scale. It is an xLabs project coming out of the Advanced Technology Group in VMware’s Office of the CTO.

A Keswick deployment is entirely automated and uses Git as a single source of truth for a declarative way to manage your infrastructure and applications through desired state configuration enabled by GitOps. This ensures the infrastructure and applications running at the edge are always exactly what they need to be.
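
Conceptually, that GitOps loop boils down to “pull the desired state from Git, then apply it idempotently”. The toy Python reconciler below illustrates the idea only; it is not how Project Keswick is implemented, and the repository path and manifest directory are made-up examples:

```python
import subprocess
import time

REPO_DIR = "/opt/edge-config"              # local clone of the Git repo (hypothetical path)
MANIFEST_DIR = f"{REPO_DIR}/site-zurich"   # desired state for one edge site (example)


def reconcile_once() -> None:
    # Pull the latest desired state from Git, the single source of truth ...
    subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)
    # ... and apply it declaratively; 'kubectl apply' is idempotent, so
    # re-applying an unchanged state is effectively a no-op.
    subprocess.run(["kubectl", "apply", "-f", MANIFEST_DIR, "--recursive"], check=True)


if __name__ == "__main__":
    while True:
        reconcile_once()
        time.sleep(300)  # re-check every five minutes
```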

VMware Explore US 2022 Session: Edge Computing: What’s Next? [VIB1457USD]

Project Newcastle

At VMworld 2021, VMware talked for the first time (I think) about cryptographic agility and even showed a short demo of a Post Quantum Cryptography (PQC) enabled Unified Access Gateway (using a proxy-based approach):

Diagram of an HAProxy with TLS Termination and Quantum-Safe Cipher Support as a reverse proxy to communicate with a quantum-safe web browser.

At VMware Explore 2022 day 2, VMware demonstrated what they believe to be the world’s first quantum-safe multi-cloud application!

VMware developed and presented Project Newcastle, a policy-based framework enabling and orchestrating cryptographic transition in modern applications.

Integrated with Tanzu Service Mesh, Project Newcastle gives users greater insight into the cryptography in their applications. But that’s not all — as a platform for cryptographic agility, Project Newcastle automates the process of reconfiguring an application’s cryptography to comply with user-defined policies and industry standards.
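
The general idea behind cryptographic agility is that algorithm choices live in policy or configuration rather than being hard-coded, so they can be swapped (for example for post-quantum algorithms) without touching application code. Here is a deliberately simplified Python sketch of that pattern; it has nothing to do with VMware’s actual framework:

```python
import hashlib

# Hypothetical policy document; in practice this would come from a central
# policy service. The point is that the algorithm is data, not code.
CRYPTO_POLICY = {"hash_algorithm": "sha3_256"}


def policy_hash(data: bytes) -> str:
    # Look up the algorithm at runtime so a policy change is enough to rotate it.
    algorithm = CRYPTO_POLICY["hash_algorithm"]
    return hashlib.new(algorithm, data).hexdigest()


print(policy_hash(b"payload"))
```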

Closing Comment

Which VMware projects excite you the most? I’m definitely going with Project Ensemble (Aria Hub) and Project Newcastle!