A Closer Look at VMware NSX Security

A customer of mine asked me a few days ago: “Is it not possible to get NSX security features without the network virtualization capabilities?”. I already wrote in my blog post “VMware is Becoming a Leading Cybersecurity Vendor” that you do not need NSX’s network virtualization editions or capabilities if you are only interested in “firewalling” or NSX security features.

If you google “nsx security”, you will not find much. But there is a knowledge base article that describes the NSX Security capabilities from the “Distributed Firewall” product line: Product offerings for NSX-T 3.2 Security (87077).

Believe it or not, there are customers that haven’t started their zero-trust or “micro-segmentation” journey yet. Segmentation is about preventing lateral (east-west) movement. The idea is to divide the data center infrastructure into smaller security zones and to inspect the traffic between the zones (and between workloads) based on the organization’s defined policies.

Perimeter Defense vs Micro-Segmentation

If you are one of them and want to deliver east-west traffic introspection using distributed firewalls, then these NSX Security editions are relevant for you:

VMware NSX Distributed Firewall

  • NSX Distributed Firewall (DFW)
  • NSX DFW with Threat Prevention
  • NSX DFW with Advanced Threat Prevention

VMware NSX Gateway Firewall

  • NSX Gateway Firewall (GFW)
  • NSX Gateway Firewall with Threat Prevention
  • NSX Gateway Firewall with Advanced Threat Prevention

Network Detection and Response

  • Network Detection and Response (standalone on-premises offering)

Note: If you are an existing NSX customer using network virtualization, please have a look at Product offerings for VMware NSX-T Data Center 3.2.x (86095).

VMware NSX Distributed Firewall

The NSX Distributed Firewall is a hypervisor kernel-embedded stateful firewall that lets you create access control policies based on vCenter objects like datacenters and clusters, virtual machine names and tags, IP/VLAN/VXLAN addresses, as well as user group identity from Active Directory.

If a VM gets vMotioned to another physical host, you do not need to rewrite any firewall rules.

The distributed nature of the firewall provides a scale-out architecture that automatically extends firewall capacity when additional hosts are added to a data center.
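
To make this concrete, the sketch below builds the JSON body of a distributed firewall policy in the declarative style of the NSX-T Policy API: two tag-based groups, an allow rule for HTTPS between them, and a default deny. Treat the group names, service path, and exact field layout as assumptions for illustration; consult the NSX-T Policy API reference before using such a payload against a live NSX Manager.

```python
import json

def dfw_rule(name: str, src_group: str, dst_group: str,
             service: str, action: str = "ALLOW") -> dict:
    """Build one rule of a security policy (sketch only -- field names
    approximate the NSX-T declarative Policy API and are not verified
    against a live NSX Manager)."""
    return {
        "display_name": name,
        "source_groups": [f"/infra/domains/default/groups/{src_group}"],
        "destination_groups": [f"/infra/domains/default/groups/{dst_group}"],
        "services": [f"/infra/services/{service}"],
        "action": action,
    }

policy = {
    "display_name": "web-to-app",
    "category": "Application",
    "rules": [
        dfw_rule("allow-https", "web-tier", "app-tier", "HTTPS"),
        # A default deny at the end of the section is what enforces
        # micro-segmentation: anything not explicitly allowed is dropped.
        {"display_name": "default-deny", "source_groups": ["ANY"],
         "destination_groups": ["ANY"], "services": ["ANY"], "action": "DROP"},
    ],
}

print(json.dumps(policy, indent=2))
```

Because the groups are tag-based rather than IP-based, the same policy keeps applying when a VM is vMotioned or gets a new IP address.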

If you are interested in “firewalling” only and want to implement access controls for east-west traffic (micro-segmentation), but do not need threat prevention (TP) capabilities, then the “NSX Distributed Firewall” edition is perfect for you.

So, which features does the NSX DFW edition include?

The NSX DFW edition comes with these capabilities:

  • L2 – L4 firewalling
  • L7 Application Identity-based firewalling
  • User Identity-based firewalling
  • NSX Intelligence (flow visualization and policy recommendation)
  • Aria Operations for Logs (formerly known as vRealize Log Insight)

What is the difference between NSX DFW and NSX DFW with TP?

With “NSX DFW with TP”, you would get the following additional features:

  • Distributed Intrusion Detection Services (IDS)
  • Distributed Behavioral IDS
  • Distributed Intrusion Prevention Service (IPS)
  • Distributed IDS Event Forwarding to NDR

Where does the NSX Distributed Firewall sit?

This question comes up a lot because customers understand that this is not an agent-based solution but something that is built into the VMware ESXi hypervisor.

The NSX DFW sits in the virtual patch cable, between the VM and the virtual distributed switch (VDS):

NSX Distributed Firewall

Note: Prior to NSX-T Data Center 3.2, VMs had to have their vNIC connected to an NSX overlay or VLAN segment to be DFW-protected. Since NSX-T Data Center 3.2, the distributed firewall also protects workloads that are natively connected to a VDS distributed port group (DVPG).

VMware NSX Gateway Firewall

The NSX Gateway Firewall extends the advanced threat prevention (ATP) capabilities of the NSX Distributed Firewall to physical workloads in your private cloud. It is a software-only, L2 – L7 firewall that includes capabilities such as IDS and IPS, URL filtering, and malware detection, as well as routing and VPN functionality.

If you are not interested in ATP capabilities yet, you can start with the “NSX Gateway Firewall” edition. What is the difference between all NSX GFW editions?

VMware NSX GFW Editions

The NSX GFW can be deployed as a virtual machine or from an ISO image on a physical server, and it shares the same management console as the NSX Distributed Firewall.

Share Your Opinion – Cross-Cloud Mobility and Application Portability

Do you have an opinion about cross-cloud mobility and application portability? If yes, what about this is important to you? How do you intend to achieve this kind of cloud operating model? Is it about flexibility or more about a cloud-exit strategy? Just because we can, does it mean we should? Will it ever become a reality? These are just some of the questions I am trying to answer.

Contact me via michael.rebmann@cloud13.ch. You can also reach me on LinkedIn.

I am writing a book about this topic and looking for cloud architects and decision-makers who would like to sit down with me via Zoom or MS Teams to discuss the challenges of multi-cloud and how to achieve workload mobility or application/data portability. I just started interviewing chief architects, CTOs and cloud architects from VMware, partners, customers and public cloud providers (like Microsoft, AWS and Google) as part of my research.

The questions below led me to the idea for the book.

What is Cross-Cloud Mobility and Application Portability about? 

Cross-cloud mobility refers to the ability of an organization to move its applications and workloads between different cloud computing environments. This is an important capability for organizations that want to take advantage of the benefits of using multiple cloud providers, such as access to a wider range of services and features, and the ability to negotiate better terms and pricing.

To achieve cross-cloud mobility, organizations need to use technologies and approaches that are compatible with multiple cloud environments. This often involves using open standards and APIs, as well as adopting a microservices architecture and containerization, which make it easier to move applications and workloads between different clouds.
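
In practice, much of this portability comes down to externalizing environment-specific settings instead of hard-coding a provider, as in the twelve-factor approach. A minimal sketch — the variable names and defaults are hypothetical, chosen only for illustration:

```python
import os

def load_storage_config() -> dict:
    """Read environment-specific settings from environment variables so the
    same application image can run unchanged on AWS, Azure, or on-premises.
    The variable names and fallback values here are made up for illustration."""
    return {
        "endpoint": os.environ.get("OBJECT_STORE_ENDPOINT", "http://localhost:9000"),
        "bucket": os.environ.get("OBJECT_STORE_BUCKET", "app-data"),
        "region": os.environ.get("OBJECT_STORE_REGION", "local"),
    }
```

Moving the workload to another cloud then means changing the deployment environment, not the application code.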

Another key aspect of cross-cloud mobility is the ability to migrate data between different clouds without losing integrity. This requires robust data migration tools and processes, as well as careful planning and testing to ensure that the migrated data is complete and accurate.
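
As a simple illustration of such testing, migrated files can be verified by comparing cryptographic checksums computed at the source and at the destination — a minimal sketch, not a full migration tool:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in chunks so large files
    do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_migration(source: Path, destination: Path) -> bool:
    """A migrated file is considered intact only if both digests match."""
    return sha256_of(source) == sha256_of(destination)
```

Running this check per object after a migration catches silent corruption or truncation before the source copy is decommissioned.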

In addition to the technical challenges of achieving cross-cloud mobility, there are also organizational and business considerations. For example, organizations need to carefully evaluate their use of different cloud providers, and ensure that they have the necessary contracts and agreements in place to allow for the movement of applications and workloads between those providers.

Overall, cross-cloud mobility is an important capability for organizations that want to take advantage of the benefits of using multiple cloud providers. By using the right technologies and approaches, organizations can easily and securely move their applications (application portability) and workloads between different clouds, and take advantage of the flexibility and scalability of the cloud.

What is a Cloud-Exit Strategy?

A cloud-exit strategy is a plan for transitioning an organization’s applications and workloads away from a cloud computing environment. This can be necessary for a variety of reasons, such as when an organization wants to switch to a different cloud provider, when it wants to bring its applications and data back in-house, or when it simply no longer needs to use the cloud. A cloud-exit strategy typically includes several key components, such as:

  1. Identifying the specific applications and workloads that will be transitioned away from the cloud, and determining the timeline for the transition.
  2. Developing a plan for migrating the data and applications from the cloud to the new environment, including any necessary data migration tools and processes.
  3. Testing the migration process to ensure that it is successful and that the migrated applications and data are functioning properly.
  4. Implementing any necessary changes to the organization’s network and infrastructure to support the migrated applications and data.
  5. Ensuring that the organization has a clear understanding of the costs and risks associated with the transition, and that it has a plan in place to mitigate those risks.

By having a well-defined cloud-exit strategy, organizations can ensure that they are able to smoothly and successfully transition away from a cloud computing environment when the time comes.

What is a Cloud-Native Application?

A cloud-native application is a type of application that is designed to take advantage of the unique features and characteristics of cloud computing environments. This typically includes using scalable, distributed, and highly available components, as well as leveraging the underlying infrastructure of the cloud to deliver a highly performant and resilient application. Cloud-native applications are typically built using a microservices architecture, which allows for flexibility and scalability, and are often deployed using containers to make them portable across different cloud environments.

Does Cloud-Native mean an application needs to perform equally well on any cloud?

No, being cloud-native does not necessarily mean that an application will perform equally well on any cloud. While cloud-native applications are designed to be portable and scalable, the specific cloud environment in which they are deployed can still have a significant impact on their performance and behavior.

For example, some cloud providers may offer specific services or features that can be leveraged by a cloud-native application to improve its performance, while others may not. Additionally, the underlying infrastructure of different cloud environments can vary, which can affect the performance and availability of a cloud-native application. As a result, it is important for developers to carefully consider the specific cloud environment in which their cloud-native application will be deployed, and to optimize its performance for that environment.

How can you avoid a cloud lock-in?

A cloud lock-in refers to a situation where an organization becomes dependent on a particular cloud provider and is unable to easily switch to a different provider without incurring significant costs or disruptions. To avoid a cloud lock-in, organizations can take several steps, such as:

  1. Choosing a cloud provider that offers tools and services that make it easy to migrate to a different provider, such as data migration tools and APIs for integrating with other cloud services.
  2. Adopting a multi-cloud strategy, where the organization uses multiple cloud providers for different workloads or applications, rather than relying on a single provider.
  3. Ensuring that the organization’s applications and data are portable, by using open standards and technologies that are supported by multiple cloud providers.
  4. Regularly evaluating the organization’s use of cloud services and the contracts with its cloud provider, to ensure that it is getting the best value and flexibility.
  5. Developing a cloud governance strategy that includes processes and policies for managing the organization’s use of cloud services, and ensuring that they align with the organization’s overall business goals and objectives.

By taking these steps, organizations can avoid becoming overly dependent on a single cloud provider and maintain the flexibility to switch to a different provider if needed.

Final Words

Multi-Cloud is very complex and has different layers like compute, storage, network, security, monitoring and observability, operations, and cost management. Add topics like open-source software, databases, Kubernetes, developer experience, and automation to the mix, and we will most probably have enough to discuss. 🙂

Looking forward to hearing from you! 

VMware Cloud Foundation – A Technical Overview (based on VCF 4.5)

Update: Please follow this link to get to the updated version with VCF 5.0.

This technical overview supersedes this version, which was based on VMware Cloud Foundation 4.3, and now covers all capabilities and enhancements that were delivered with VCF 4.5.

What is VMware Cloud Foundation (VCF)?

VMware Cloud Foundation is a multi-cloud platform that provides a full-stack hyperconverged infrastructure (HCI) that is made for modernizing data centers and deploying modern container-based applications. VCF is based on different components like vSphere (compute), vSAN (storage), NSX (networking), and some parts of the Aria Suite (formerly vRealize Suite). The idea of VCF follows a standardized, automated, and validated approach that simplifies the management of all the needed software-defined infrastructure resources.

This stack provides customers with consistent infrastructure and operations in a cloud operating model that can be deployed on-premises, at the edge, or in the public cloud.

Tanzu Standard Edition is included in VMware Cloud Foundation with Tanzu Standard, Advanced, and Enterprise editions.

Note: The VMware Cloud Foundation Starter, Standard, Advanced and Enterprise editions do NOT include Tanzu Standard.

What software is being delivered in VMware Cloud Foundation?

The BoM (bill of materials) changes with each VCF release. With VCF 4.5, the following components and software versions are included:

  • VMware SDDC Manager 4.5
  • vSphere 7.0 Update 3g
  • vCenter Server 7.0 Update 3h
  • vSAN 7.0 Update 3g
  • NSX-T 3.2.1.2
  • VMware Workspace ONE Access 3.3.6
  • vRealize Log Insight 8.8.2
  • vRealize Operations 8.8.2
  • vRealize Automation 8.8.2
  • (vRealize Network Insight)

Note: Only one vCenter Server license is required for all vCenter Servers deployed in a VMware Cloud Foundation system.

VMware Cloud Foundation Components

What is VMware Cloud Foundation+ (VCF+)?

With the launch of VMware Cloud Foundation (VCF) 4.5 in early October 2022, VCF introduced new consumption and licensing models.

VCF+ is the next cloud-connected SaaS product offering, which builds on vSphere+ and vSAN+. VCF+ delivers cloud connectivity to centralize management and a new consumption-based OPEX model to consume VMware Cloud services.

VMware Cloud Foundation Consumption Models

VCF+ components are cloud entitled, metered, and billed. There are no license keys in VCF+. Once the customer is onboarded to VCF+, the components are entitled from the cloud and periodically metered and billed.

VMware Cloud Foundation+

The following components are included in VCF+:

  • vSphere+
  • vSAN+
  • NSX (term license)
  • SDDC Manager
  • Aria Universal Suite (formerly vRealize Cloud Universal aka vRCU)
  • Tanzu Standard
  • vCenter (included as part of vSphere+)

Note: In a given VCF+ instance, you can only have VCF+ licensing, you cannot mix VCF-S (term) and VCF perpetual licenses with VCF+.

What are other VCF subscription offerings?

VMware Cloud Foundation Subscription (VCF-S) is an on-premises (disconnected) term subscription offer that is available as a standalone VCF-S offer using physical core metrics and term subscription license keys.

VMware Cloud Foundation Subscription TLSS

You can also purchase VCF+ and VCF-S licenses as part of the VMware Cloud Universal program.

Note: You can mix VCF-S with perpetual license keys as long as you use the same key type (one or the other) within a given workload domain.

Which VMware Cloud Foundation editions are available?

A VCF comparison matrix can be found here.

VMware Cloud Foundation Architecture

VCF is made for greenfield deployments (brownfield not supported) and supports two different architecture models:

  • Standard Architecture
  • Consolidated Architecture

VMware Cloud Foundation Deployment Options

The standard architecture separates management workloads and lets them run on a dedicated management workload domain. Customer workloads are deployed on a separate virtual infrastructure workload domain (VI workload domain). Each workload domain is managed by a separate vCenter Server instance, which allows autonomous licensing and lifecycle management.

VMware Cloud Foundation Single Site Deployment

Note: The standard architecture is the recommended model because it separates management workloads from customer workloads.

Customers with a small environment (or a PoC) can start with a consolidated architecture. This allows you to run customer and management workloads together on the same workload domain (WLD).

Note: The management workload domain’s default cluster datastore must use vSAN. Other WLDs can use vSAN, NFS, FC, and vVols for the principal storage.

VMware Cloud Foundation Storage Options

What is a vSAN Stretched Cluster?

vSAN stretched clusters extend a vSAN cluster from a single site to two sites for a higher level of availability and inter-site load balancing.

VMware Cloud Foundation Stretched Cluster

Does VCF provide flexible workload domain sizing?

Yes, that’s possible. You can license the WLDs based on your needs and use the editions that make the most sense depending on your use cases.

VMware Cloud Foundation Flexible Licensing

How many physical nodes are required to deploy VMware Cloud Foundation?

A minimum of four physical nodes is required to start in a consolidated architecture or to build your management workload domain. Four nodes are required to ensure that the environment can tolerate a failure while another node is being updated.

VI workload domains require a minimum of three nodes.

This means that to start with the standard architecture, you need the requirements (and budget) for at least seven physical nodes.

What are the minimum hardware requirements?

These minimum specs have been listed for the management WLD since VCF 4.0 (September 2020):

VMware Cloud Foundation Hardware Requirements

Can I mix vSAN ReadyNodes and Dell EMC VxRail deployments?

No. This is not possible.

What about edge/remote use cases?

If you would like to deploy VMware Cloud Foundation workload domains at a remote site, you can deploy so-called “VCF Remote Clusters”. Those remote workload domains are managed by the VCF instance at the central site, and you can perform the same full-stack lifecycle management for the remote sites from the central SDDC Manager.

VMware Cloud Foundation Remote Cluster

Prerequisites to deploy remote clusters can be found here.

Note: If vSAN is used, VCF only supports a minimum of 3 nodes and a maximum of 4 nodes per VCF Remote Cluster. If NFS, vVols, or Fibre Channel is used as principal storage, then VCF supports a minimum of 2 and a maximum of 4 nodes.
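
These sizing constraints can be captured in a small helper — a sketch that simply encodes the numbers stated in the note above:

```python
def remote_cluster_node_range(principal_storage: str) -> tuple[int, int]:
    """Return the (min, max) supported node counts for a VCF Remote Cluster,
    per the constraints above: vSAN needs 3-4 nodes, while NFS, vVols,
    or Fibre Channel as principal storage allows 2-4 nodes."""
    storage = principal_storage.lower()
    if storage == "vsan":
        return (3, 4)
    if storage in {"nfs", "vvols", "fibre channel", "fc"}:
        return (2, 4)
    raise ValueError(f"unsupported principal storage: {principal_storage}")
```

A sizing script could call this before commissioning hosts to reject unsupported remote cluster designs early.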

Important: Remote clusters and remote workload domains are not supported when VCF+ is enabled.

Does VCF support HCI Mesh?

Yes. VMware Cloud Foundation 4.2 and later supports sharing remote datastores with HCI Mesh for VI workload domains.

HCI Mesh is a software-based approach for disaggregation of compute and storage resources in vSAN. HCI Mesh brings together multiple independent vSAN clusters by enabling cross-cluster utilization of remote datastore capacity within vCenter Server. HCI Mesh enables you to efficiently utilize and consume data center resources, which provides simple storage management at scale.

Note: At this time, HCI Mesh is not supported with VCF ROBO.

What is SDDC Manager?

SDDC Manager is a preconfigured virtual appliance that is deployed in the management workload domain for creating workload domains, provisioning additional virtual infrastructure, and lifecycle management of all the software-defined data center (SDDC) management components.

VMware Cloud Foundation SDDC Manager

You use SDDC Manager in VMware Cloud Foundation to perform the following operations:

  • Commissioning or decommissioning ESXi hosts
  • Deployment of workload domains
  • Extension of clusters in the management and workload domains with ESXi hosts
  • Adding clusters to the management domain and workload domains
  • Support for network pools for host configuration in a workload domain
  • Storage of product licenses
  • Deployment of vRealize Suite components
  • Lifecycle management of the virtual infrastructure components in all workload domains, and of vRealize Suite Lifecycle Manager components
  • Certificate management
  • Password management and rotation
  • NSX-T Edge cluster deployment in the management domain and workload domains
  • Backup configuration

VMware Cloud Foundation SDDC Manager Dashboard

How many resources does the VCF management WLD need during the bring-up process?

We know that VCF includes vSphere (ESXi and vCenter), vSAN, SDDC Manager, NSX-T, and possibly some components of the vRealize Suite. The following table should give you an idea of what the resource requirements look like to get VCF up and running:

VMware Cloud Foundation Resource Requirements

If you are interested in knowing how many resources the Aria Suite (formerly vRealize Suite) will consume in the management workload domain, have a look at this table:

VMware Cloud Foundation Resource Requirements vRealize

How can I migrate my workloads from a non-VCF environment to a new VCF deployment?

VMware HCX provides a path to modernize from a legacy data center architecture by migrating to VMware Cloud Foundation.

VMware Cloud Foundation HCX

What is NSX Advanced Load Balancer?

NSX Advanced Load Balancer (NSX ALB), formerly known as Avi, is a solution that provides advanced load balancing capabilities for VMware Cloud Foundation.

Which security add-ons are available with VMware Cloud Foundation?

VMware has different workload and network security offerings to complement VCF:

Can I get VCF as a managed service offering?

Yes, this is possible. Please have a look at Data Center as a Service based on VMware Cloud Foundation.

Can I install VCF in my home lab?

Yes, you can. With the VLC Lab Constructor, you can deploy an automated VCF instance in a nested configuration. There is also a Slack VLC community for support.

VCF Lab Constructor

Where can I find more information about VCF?

Please consult the VMware Cloud Foundation 4.5 FAQ for more information about VMware Cloud Foundation.


VMware Explore Europe 2022 Major Announcements

VMware Explore Europe 2022 is history. This year felt different and very special! Rooms were fully booked, and people were queuing up in the hallways. The crowd had a HUGE interest in technical sessions from known speakers like Cormac Hogan, Frank Denneman, Duncan Epping, William Lam, and many more!

Compared to VMware Explore US, there were not that many major announcements, but I thought it might be helpful again to list the ones that seemed the most interesting and relevant.

VMware Aria Hub Free Tier

For me, the biggest and most important announcement was the Aria Hub free tier. I am convinced that Aria Hub will be the next big thing for VMware and I am sure that it will change how the world manages a multi-cloud infrastructure.

VMware Aria Hub is a multi-cloud management platform that unifies the management disciplines of cost, performance, configuration, and delivery automation with a common control plane and data model for any cloud, any platform, any tool, and every persona. It helps you align multiple teams and solutions on a common understanding of resources, relationships, historical changes, applications, and accounts, fundamental to managing a multi-cloud environment.

The new free tier enables customers to inventory, map, filter, and search resources from up to two of their native public cloud accounts, currently from either AWS or Azure. It also helps you understand the relationships of your resources to other resources, policies, and other key components in your public cloud and Kubernetes environments. WOW!

Aria Hub Free Tier Announcement: https://blogs.vmware.com/management/2022/11/announcing-vmware-aria-hub-free-tier.html 

Aria Hub Free Tier Technical Overview: https://blogs.vmware.com/management/2022/11/aria-hub-free-tier-technical-overview 

If you want to sign up for the free tier, please follow this link: https://www.vmware.com/learn/1732750_REG.html 

Tanzu Mission Control On-Premises

Many customers asked for it, it is coming! Tanzu Mission Control (TMC) will become available on-premises for sovereign cloud partners/providers and enterprise customers! 

There is a private beta coming. Hence, I cannot provide more information for now.

Tanzu Kubernetes Grid 2.1

At VMware Explore US 2022, VMware announced Tanzu Kubernetes Grid (TKG) 2.0, and at Explore Europe 2022, they announced TKG 2.1, which adds support for Oracle Cloud Infrastructure (OCI). Additionally, it will now also have the option of leveraging VMs as the management cluster. Both form factors will be familiar, but they now support a single, unified way of creating clusters using a new API called ClusterClass.

TKG 2.1 Announcement: https://tanzu.vmware.com/content/blog/tanzu-kubernetes-grid-2-1 

Tanzu Service Mesh Advanced Enhancements

VMware also unveiled new enhancements for Tanzu Service Mesh (TSM) that bring VM discovery and integration into the mesh, making it possible to combine VMs and containers in the same service mesh for secure communication and consistent policy.

VMware Cloud on Equinix Metal (VMC-E)

The last thing I want to highlight is the VMC-E announcement. It is a combination of VMware Cloud IaaS with Equinix Metal hardware as-a-service, which can be deployed in over 30 Equinix global data centers.

VMware Cloud on Equinix Metal is a great option for enterprises that want the flexibility and performance of the public cloud but whose business requirements prevent moving data or applications there. It offers full compatibility and consistency with on-premises and VMware Cloud operational models and policies, as well as zero-downtime migration.

VMware Cloud on Equinix Metal is a fully managed solution by VMware (delivered, operated, managed, supported).

VMC-E Announcement: https://blogs.vmware.com/cloud/2022/11/07/introducing-vmware-cloud-on-equinix-metal 

VMC-E Technical Preview: https://www.youtube.com/watch?v=-WpGfrxW39Y&feature=youtu.be&ab_channel=VMwareCloud  

The Backbone To Upgrade Your Multi-Cloud DevOps Experience

Multi-Cloud is a mess. You cannot solve multi-cloud complexity with a single vendor or one single supercloud (or intercloud); it’s just not possible. But different vendors can help you on your multi-cloud journey to make your and the platform team’s life easier. The whole world talks about DevOps or DevSecOps, and then there’s the shift-left approach, which puts more responsibility on developers. It seems to me that we too often forget the “ops” part of DevOps. That is why I would like to highlight the need for Tanzu Mission Control (which is part of Tanzu for Kubernetes Operations) and Tanzu Application Platform.

Challenges for Operations

What started with a VMware-based cloud in your data centers has evolved into a very heterogeneous architecture with two or more public clouds like Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform. IT analysts tell us that 75% of businesses are already using two or more public clouds. Businesses choose their public cloud providers based on workload or application characteristics and a public cloud’s known strengths. Companies want to modernize their legacy applications in the public clouds because, in most cases, a simple rehost or migration (lift & shift) doesn’t bring the value or innovation they are aiming for.

A modern application is a collection of microservices, which are lightweight, fault-tolerant, and small. Microservices can run in containers deployed in a private or public cloud. Many operations and platform teams see cloud-native as simply going to Kubernetes. But cloud-native is so much more than the provisioning and orchestration of containers with Kubernetes. It’s about collaboration, DevOps, internal processes and supply chains, observability/self-healing, continuous delivery/deployment, and cloud infrastructures.

Expectation of Kubernetes

Kubernetes 1.0 was contributed as an open source seed technology by Google to the Linux Foundation in 2015, which formed the sub-foundation “Cloud Native Computing Foundation” (CNCF). Founding CNCF members include companies like Google, Red Hat, Intel, Cisco, IBM and VMware.

Currently, the CNCF has over 167k project contributors, around 800 members and more than 130 certified Kubernetes distributions and platforms. Open source projects and the adoption of cloud native technologies are constantly growing.

If you access the CNCF Cloud Native Interactive Landscape, you will get an understanding of how many open source projects are supported by the CNCF and maintained by this open source community. Since Kubernetes was donated to the CNCF, almost every company on this planet has been using Kubernetes, or a distribution of it:

These were just a few of the 63 certified Kubernetes distributions in total. What about the certified hosted Kubernetes service offerings? Let me list some of the popular ones here:

  • Alibaba Cloud Container Service for Kubernetes
  • Amazon Elastic Container Service for Kubernetes (EKS)
  • Azure Kubernetes Service (AKS)
  • Google Kubernetes Engine (GKE)
  • Nutanix Karbon
  • Oracle Container Engine
  • OVH Managed Kubernetes Service
  • Red Hat OpenShift Dedicated

All these clouds and vendors expose Kubernetes implementations, but writing software that performs equally well across all clouds still seems to be a myth. At least we have a common denominator, a consistency across all clouds, right? That’s Kubernetes.

Consistent Operations and Experience

It is very interesting to see that the big three hyperscalers AWS, Microsoft, and Google are moving towards multi-cloud-enabled services and products to provide a consistent experience from an operations standpoint, especially for Kubernetes clusters.

Microsoft now has Azure Arc, Google provides Anthos (GKE clusters) for any cloud, and AWS has also realized that the future consists of multiple clouds and offers EKS Anywhere.

They all have realized that customers need a centralized management and control plane. Customers are looking for simplified operations and consistent experience when managing multi-cloud K8s clusters.

Tanzu Mission Control (TMC)

Imagine that you have a centralized dashboard with management capabilities that provides a unified policy engine and allows you to manage the lifecycle of all the different K8s clusters you have.

TMC offers built-in security policies and cluster inspection capabilities (CIS benchmarks) so you can apply additional controls to your Kubernetes deployments. Leveraging the open source project Velero, Tanzu Mission Control gives ops teams the capability to very easily back up and restore clusters and namespaces. Just four weeks ago, VMware announced cross-cluster backup and restore capabilities for Tanzu Mission Control that let Kubernetes-based applications become infrastructure- and distribution-agnostic.

Tanzu Mission Control lets you attach any CNCF-conformant K8s cluster. When attached to TMC, you can manage policies for all Kubernetes distributions such as Tanzu Kubernetes Grid (TKG), Azure Kubernetes Service, Google Kubernetes Engine or OpenShift.

Tanzu Mission Control Dashboard

In VMware’s ongoing commitment to support customers in their multi-cloud application modernization efforts, the Tanzu Mission Control team introduced the preview of lifecycle management of Amazon EKS clusters at VMware Explore US 2022:

Preview for lifecycle management of Amazon Elastic Kubernetes Service (EKS) clusters can enable direct provisioning and management of Amazon EKS clusters so that developers and operators have less friction and more choices for cluster types. Teams will be able to simplify multi-cloud, multi-cluster Kubernetes management with centralized lifecycle management of Tanzu Kubernetes Grid and Amazon EKS cluster types.

Note: With this announcement I would expect that the support for Azure Kubernetes Service (AKS) is also coming soon.

Read the Tanzu Mission Control solution brief to get more information about its benefits and capabilities.

Challenges for Developers

Tanzu Mission Control provides cross-cloud services for your Kubernetes clusters deployed in multiple clouds. But there is still another problem.

Developers are being asked to write code and provide business logic that could run on-prem, on AWS, on Azure or any other public cloud. Every cloud provider has an interest in providing you their technologies and services. This includes the hosted Kubernetes offerings (with different Kubernetes distributions), load balancers, storage, databases, APIs, observability, security tools and so many other components. To me, it sounds very painful and difficult to learn and understand the details of every cloud provider.

Cross-cloud services alone don’t solve that problem. Obviously, Kubernetes alone doesn’t solve it either.

What if Kubernetes and centralized management and visibility are not “the” solution, and the answer is rather something that sits on top of Kubernetes?

And Then Came PaaS

Kubernetes is a platform for building platforms and is not really meant to be used by developers.

The CNCF landscape is huge and complex to understand and integrate, so it was just a logical move that companies started looking for pre-assembled solutions like platform as a service (PaaS). I think that Tanzu Application Service (formerly known as Pivotal Cloud Foundry), Heroku, Red Hat OpenShift and AWS Elastic Beanstalk are the most famous examples of PaaS.

The challenge with building applications that run on a PaaS is sometimes the need to leverage all the PaaS-specific components to fully make use of it. What if someone wants to run her own database? What if the PaaS offering restricts programming languages, frameworks, or libraries? Or is it the vendor lock-in which bothers you?

PaaS solutions alone don’t seem to solve the developer experience problem for everyone either.

Do you want to build the platform by yourself or get something off the shelf? There is a big difference between using a platform and running one. 🙂

Kelsey Hightower on Kubernetes and PaaS (Twitter)

Bring Your Own Kubernetes To A Portable PaaS

What’s next after IaaS has evolved to CaaS (because of Kubernetes) and PaaS? It is adPaaS (Application Developer PaaS).

Have you ever heard of the “Golden Path”? Spotify uses this term and Netflix calls it “Paved Road”.

The idea behind the golden path or paved road is that the (internal) platform offers some form of pre-assembled components and a supported approach (best practices) that make software development faster and more scalable. Developers no longer have to reinvent the wheel by browsing through a very fragmented ecosystem of developer tooling, where the best way to find out how to do things was to ask the community or your colleagues.

VMware announced Tanzu Application Platform (TAP) in September 2021 with the statement that TAP will provide a better developer experience on any Kubernetes.

VMware Tanzu Application Platform delivers a prepaved path to production and a streamlined, end-to-end developer experience on any Kubernetes.

It is the platform team’s duty to install and configure the opinionated Tanzu Application Platform as an overlay on top of any Kubernetes cluster. They also integrate existing components of Kubernetes such as storage and networking. An opinionated platform provides the structure and abstraction you are looking for: The platform “does” it for you. In other words, TAP is a prescribed architecture and path with the necessary modularity and flexibility to boost developer productivity.

Diagram depicting the layered structure of TAP

The developers can focus on writing code and do not have to fully understand details like container image registries, image building and scanning, ingress, RBAC, deploying and running the application, and so on.

Illustration of TAP conceptual value, starting with components that serve the developer and finishing with the components that serve the operations staff and security staff.

 

TAP comes with many popular best-of-breed open source projects that are improving the DevSecOps experience:

  • Backstage. Backstage is an open platform for building developer portals, created at Spotify, donated to the CNCF, and maintained by a worldwide community of contributors.
  • Carvel. Carvel provides a set of reliable, single-purpose, composable tools that aid in your application building, configuration, and deployment to Kubernetes.
  • Cartographer. Cartographer is a VMware-backed project and is a Supply Chain Choreographer for Kubernetes. It allows App Operators to create secure and pre-approved paths to production by integrating Kubernetes resources with the elements of their existing toolchains (e.g. Jenkins).
  • Tekton. Tekton is a cloud-native, open source framework for creating CI/CD systems. It allows developers to build, test, and deploy across cloud providers and on-premise systems.
  • Grype. Grype is a vulnerability scanner for container images and file systems.
  • Cloud Native Runtimes for VMware Tanzu. Cloud Native Runtimes for Tanzu is a serverless application runtime for Kubernetes that is based on Knative and runs on a single Kubernetes cluster.

At VMware Explore US 2022, VMware announced new capabilities that will be released in Tanzu Application Platform 1.3. The most important added functionalities for me are:

  • Support for Red Hat OpenShift. Tanzu Application Platform 1.3 will be available on Red Hat OpenShift, running in vSphere and on bare metal.
  • Support for air-gapped installations. Support for regulated and disconnected environments, helping to ensure that components, upgrades, and patches are made available to the system, operate consistently and correctly in the controlled environment, and keep data secure.
  • Carbon Black Integration. Tanzu Application Platform expands the ecosystem of supported vulnerability scanners with a beta integration with VMware Carbon Black scanner to enable customer choice and leverage their existing investments in securing their supply chain.

The Power Combo for Multi-Cloud

A mix of different workloads like virtual machines and containers hosted in multiple clouds introduces complexity. With the powerful combination of Tanzu Mission Control and Tanzu Application Platform, companies can unlock the full potential of their platform teams and developers by reducing complexity while creating and using abstraction layers on top of their multi-cloud infrastructure.

VMware Explore US 2022 – VMware Projects and Day 2 Announcements


Last year at VMworld 2021, VMware mentioned and announced a lot of (new) projects they are working on. What happened to them and which new VMware projects have been mentioned this year at VMware Explore so far?

Project Ensemble – VMware Aria Hub

VMware unveiled their unified multi-cloud management portfolio called VMware Aria, which provides a set of end-to-end solutions for managing the cost, performance, configuration, and delivery of infrastructure and cloud native applications.

VMware Aria is anchored by VMware Aria Hub (formerly known as Project Ensemble), which provides centralized views and controls to manage the entire multi-cloud environment, and leverages VMware Aria Graph to provide a common definition of applications, resources, roles, and accounts.

VMware Aria Graph provides a single source of truth that is updated in near-real time. Other solutions on the market were designed in a slower moving era, primarily for change management processes and asset tracking. By contrast, VMware Aria Graph is designed expressly for cloud-native operations.

VMware Explore US 2022 Session: A Unified Cloud Management Control Plane – Update on Project Ensemble [CMB2210US]

Project Monterey – DPU-based Acceleration for NSX

Introduced last year as Project Monterey and shown as a technology preview, VMware yesterday announced the GA version of Monterey, called DPU-based Acceleration for NSX.

Project Arctic – vSphere+ and vSAN+

Project Arctic was introduced last year as a technology preview and was described as “the next step in the evolution of vSphere in a multi-cloud world”. What started with the idea of bringing VMware Cloud services closer to vSphere has evolved into an even more interesting and enterprise-ready offering called vSphere+ and vSAN+. It includes developer services that consist of the Tanzu Kubernetes Grid runtime, Tanzu Mission Control Essentials and NSX Advanced Load Balancer Essentials. VMware is going to add more VMware Cloud add-on services in the future. Additionally, VMware even introduced VMware Cloud Foundation+.

Project Iris – Application Transformer for VMware Tanzu

VMware mentioned Project Iris very briefly last year at VMworld. In February 2022, Project Iris became generally available and has since been known as Application Transformer for VMware Tanzu.

Project Northstar

At VMware Explore on day 1, VMware introduced Project Northstar, which will provide customers a centralized cloud console that gives them instant access to networking and security services, such as network and security policy controls, Network Detection and Response (NDR), NSX Intelligence, Advanced Load Balancing (ALB), Web Application Firewall (WAF), and HCX. Project Northstar will be able to apply consistent networking and security policies across private cloud, hybrid cloud, and multi-cloud environments.


VMware Explore US 2022 Session: Multi-Cloud Networking and Security with NSX [NETB2154US]

Project Watch

At VMware Explore on day 1, VMware unveiled Project Watch, a new approach to multi-cloud networking and security that will provide advanced app-to-app policy controls to help with continuous risk and compliance assessment. In technology preview, Project Watch will help network security and compliance teams to continuously observe, assess, and dynamically mitigate risk and compliance problems in composite multi-cloud applications.

Project Trinidad

Also announced at VMware Explore day 1 and further explained at day 2, Project Trinidad extends VMware’s API security and analytics by deploying sensors on Kubernetes clusters and uses machine learning with business logic inference to detect anomalous behavior in east-west traffic between microservices.

Project Narrows

Project Narrows introduces a unique addition to Harbor, allowing end users to assess the security posture of Kubernetes clusters at runtime. Previously undetected images will be scanned at the time they are introduced to a cluster, so vulnerabilities can now be caught, images flagged, and workloads quarantined.

By adding dynamic scanning to your software supply chain with Harbor, Project Narrows fills a critical gap. It allows greater awareness and control of your running workloads than the traditional method of simply scanning images as they are pushed and stored.

VMware is open sourcing the initial capabilities of Project Narrows on GitHub as the Cloud Native Security Inspector (CNSI) Project.

VMware Explore US 2022 Session: Running App Workloads in a Trusted, Secure Kubernetes Platform [VIB1443USD]

Project Keswick

Also introduced on day 2, Project Keswick is about simplifying edge deployments at scale. It is an xLabs project coming out of the Advanced Technology Group in VMware’s Office of the CTO.


A Keswick deployment is entirely automated and uses Git as a single source of truth for a declarative way to manage your infrastructure and applications through desired state configuration enabled by GitOps. This ensures the infrastructure and applications running at the edge are always exactly what they need to be.
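To make the declarative idea concrete: with GitOps, the desired state lives in a Git repository as ordinary Kubernetes manifests, and an agent continuously reconciles the cluster against them. A minimal sketch of such a manifest might look like this (the application name and image are hypothetical, not from the Keswick announcement):

```yaml
# Hypothetical desired state stored in Git: a GitOps controller
# ensures the edge cluster always runs exactly this Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-telemetry
spec:
  replicas: 2
  selector:
    matchLabels:
      app: edge-telemetry
  template:
    metadata:
      labels:
        app: edge-telemetry
    spec:
      containers:
        - name: agent
          image: registry.example.com/edge-telemetry:1.4.2
```

If anyone changes the cluster out of band, the reconciliation loop reverts it to what Git says, which is exactly the “single source of truth” property the project relies on.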

VMware Explore US 2022 Session: Edge Computing: What’s Next? [VIB1457USD]

Project Newcastle

At VMworld 2021, VMware talked for the first time (I think) about cryptographic agility and even showed a short demo of a Post Quantum Cryptography (PQC)-enabled Unified Access Gateway (using a proxy-based approach):

Diagram of an HAProxy with TLS Termination and Quantum-Safe Cipher Support as a reverse proxy to communicate with a quantum-safe web browser.

At VMware Explore 2022 day 2, VMware demonstrated what they believe to be the world’s first quantum-safe multi-cloud application!

VMware developed and presented Project Newcastle, a policy-based framework enabling and orchestrating cryptographic transition in modern applications.

Integrated with Tanzu Service Mesh, Project Newcastle gives users greater insight into the cryptography in their applications. But that’s not all — as a platform for cryptographic agility, Project Newcastle automates the process of reconfiguring an application’s cryptography to comply with user-defined policies and industry standards.

Closing Comment

Which VMware projects excite you the most? I’m definitely going with Project Ensemble (Aria Hub) and Project Newcastle!