I have just started diving into the OCI DevOps Professional certification course, so why not share some of the lessons and important information I gathered from the official Oracle course as part of my preparation? My goal? To pass the exam in the next few weeks. In this first part, I’m covering the core concepts of Oracle Kubernetes Engine (OKE). Please note that I am also copy-pasting parts of the official documentation.
What is Oracle Kubernetes Engine?
Oracle Kubernetes Engine (OKE) is Oracle Cloud Infrastructure’s managed Kubernetes service. It is designed to let you deploy, manage, and scale containerized applications using Kubernetes, but without the heavy lifting of setting up and maintaining the control plane yourself.
OKE is:
- Certified by the CNCF (Cloud Native Computing Foundation)
- Fully integrated with OCI services such as networking, load balancing, and IAM
- Designed for production workloads, with a choice between traditional VM-based clusters and serverless options

You get the flexibility and power of Kubernetes, while Oracle handles the control plane: updates, availability, and scaling.
Kubernetes Clusters
OKE supports two cluster types: Basic and Enhanced.
- Enhanced cluster: Enhanced clusters support all available features, including features not supported by basic clusters (such as virtual nodes, cluster add-on management, workload identity, and additional worker nodes per cluster). Enhanced clusters come with a financially-backed service level agreement (SLA).
  - Cluster add-ons: In an enhanced cluster, you can use Kubernetes Engine to manage both essential add-ons and a growing portfolio of optional add-ons. You can enable or disable specific add-ons, select add-on versions, opt into and out of automatic updates by Oracle, and manage add-on specific customizations.
- Basic cluster: Basic clusters support all the core functionality provided by Kubernetes and Kubernetes Engine, but none of the enhanced features that Kubernetes Engine provides. Basic clusters come with a service level objective (SLO), but not a financially-backed service level agreement (SLA).
  - Cluster add-ons: In a basic cluster, you have more responsibility and less flexibility when managing cluster add-ons. You are responsible for upgrading essential add-ons, but you cannot install or disable specific add-ons, select add-on versions, opt into and out of automatic updates by Oracle, or manage add-on specific customizations. In addition, you are responsible for installing, managing, and maintaining any optional add-ons you want in the cluster.
If you are aiming to build scalable, secure, and production-ready apps, enhanced clusters are the way to go.
Note: A new cluster using the console is created as an enhanced cluster by default. If you are using the CLI or API to create a cluster, a new cluster is created as a basic cluster by default.
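To make that concrete, here is roughly what requesting an enhanced cluster from the CLI looks like. Treat it as a sketch: all OCIDs are placeholders, and it is worth confirming the flags (in particular --endpoint-subnet-id) against `oci ce cluster create --help` for your CLI version.

```sh
# Create a cluster with the OCI CLI. Without --type, the CLI
# defaults to a basic cluster, so request an enhanced one explicitly.
# All OCIDs are placeholders.
oci ce cluster create \
  --name my-enhanced-cluster \
  --compartment-id ocid1.compartment.oc1..aaaa \
  --vcn-id ocid1.vcn.oc1..aaaa \
  --kubernetes-version v1.29.1 \
  --type ENHANCED_CLUSTER \
  --endpoint-subnet-id ocid1.subnet.oc1..aaaa
```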
Kubernetes Cluster Control Plane
The Kubernetes cluster control plane implements core Kubernetes functionality. It runs on compute instances (known as ‘control plane nodes’) in the Kubernetes Engine service tenancy. The cluster control plane is fully managed by Oracle.
The cluster control plane runs a number of processes, including:
- kube-apiserver to support Kubernetes API operations requested from the Kubernetes command line tool (kubectl) and other command line tools, as well as from direct REST calls. The kube-apiserver includes admissions controllers required for advanced Kubernetes operations.
- kube-controller-manager to manage different Kubernetes components (for example, replication controller, endpoints controller, namespace controller, and serviceaccounts controller)
- kube-scheduler to control where in the cluster to run jobs
- etcd to store the cluster’s configuration data
- cloud-controller-manager to update and delete worker nodes (using the node controller), to create load balancers when Kubernetes services of type LoadBalancer are created (using the service controller), and to set up network routes (using the route controller). The oci-cloud-controller-manager also implements a container storage interface, a flexvolume driver, and a flexvolume provisioner (for more information, see the OCI Cloud Controller Manager (CCM) documentation on GitHub). A sample LoadBalancer Service manifest follows this list.
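To make the service controller part concrete: applying a standard Kubernetes Service of type LoadBalancer is what triggers the cloud-controller-manager to provision an OCI load balancer. A minimal sketch, where the app label and ports are made up for illustration:

```sh
# Applying a Service of type LoadBalancer is the trigger for the
# cloud-controller-manager to provision an OCI load balancer and
# point it at the matching pods. Labels and ports are examples.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: hello-lb
spec:
  type: LoadBalancer
  selector:
    app: hello          # assumes pods labeled app=hello exist
  ports:
    - port: 80          # listener port on the load balancer
      targetPort: 8080  # container port on the pods
EOF

# EXTERNAL-IP is populated once the OCI load balancer is ready.
kubectl get service hello-lb
```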
Kubernetes Data Plane and Worker Nodes
Worker nodes are where you run the applications that you deploy in a cluster.
Each worker node runs a number of processes, including:
- kubelet to communicate with the cluster control plane
- kube-proxy to maintain networking rules
The cluster control plane processes monitor and record the state of the worker nodes and distribute requested operations between them.
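You can see the state the control plane records for each worker node with kubectl. The node name below is a placeholder (on OKE, managed node names are typically their private IP addresses):

```sh
# List worker nodes with status, Kubernetes version, and addresses,
# as recorded by the control plane.
kubectl get nodes -o wide

# Drill into one node's conditions, capacity, and running pods.
# Replace the name with one reported by the previous command.
kubectl describe node 10.0.10.2
```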
A node pool is a subset of worker nodes within a cluster that all have the same configuration. Node pools enable you to create pools of machines within a cluster that have different configurations. For example, you might create one pool of nodes in a cluster as virtual machines, and another pool of nodes as bare metal machines. A cluster must have a minimum of one node pool, but a node pool need not contain any worker nodes.
Worker nodes in a node pool are connected to a worker node subnet in your VCN.
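As an illustration, adding a node pool to an existing cluster with the OCI CLI might look like the sketch below. The OCIDs, shape, sizing, and availability domain are all placeholders, so check `oci ce node-pool create --help` before relying on it.

```sh
# Add a pool of three managed VM nodes to an existing cluster.
# OCIDs, availability domain, and sizing are placeholders.
oci ce node-pool create \
  --cluster-id ocid1.cluster.oc1..aaaa \
  --compartment-id ocid1.compartment.oc1..aaaa \
  --name pool-vm-standard \
  --node-shape VM.Standard.E4.Flex \
  --node-shape-config '{"ocpus": 2, "memoryInGBs": 32}' \
  --size 3 \
  --placement-configs '[{"availabilityDomain": "Uocm:PHX-AD-1", "subnetId": "ocid1.subnet.oc1..aaaa"}]'
```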
Supported Images and Shapes for Worker Nodes
When creating a node pool with Kubernetes Engine, you specify that the worker nodes in the node pool are to be created as one of the following:
- Virtual nodes, fully managed by Oracle. Virtual nodes provide a ‘serverless’ Kubernetes experience, enabling you to run containerized applications at scale without the operational overhead of upgrading the data plane infrastructure and managing the capacity of clusters. You can only create virtual nodes in enhanced clusters.
- Managed nodes, running on compute instances (either bare metal or virtual machine) in your tenancy, and at least partly managed by you. You are responsible for upgrading Kubernetes on managed nodes, and for managing cluster capacity. You can create managed nodes in both basic clusters and enhanced clusters.
Note: You can choose to upgrade the basic cluster to an enhanced cluster later, but you cannot downgrade an enhanced cluster to a basic cluster.
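The upgrade itself is a single call. A sketch with the OCI CLI (placeholder OCID, and worth confirming against `oci ce cluster update --help`):

```sh
# Upgrade an existing basic cluster to an enhanced cluster.
# One-way change: there is no downgrade path back to basic.
oci ce cluster update \
  --cluster-id ocid1.cluster.oc1..aaaa \
  --type ENHANCED_CLUSTER
```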
Supported Images for Managed Nodes
OKE supports the provisioning of worker nodes (managed nodes only) using some, but not all, of the latest Oracle Linux images provided by Oracle Cloud Infrastructure. You choose among three image types:

- Platform images:
  - Provided by Oracle and only contain an Oracle Linux operating system.
  - The managed nodes’ initial boot triggers a software download and setup by OKE.
- OKE images:
  - Built on platform images.
  - Optimized for use as managed node base images, with all the necessary configurations and required software.
  - Enable faster managed node provisioning during cluster creation and updates.
- Custom images:
  - Can be built on supported platform images and OKE images.
  - Contain Oracle Linux OSes with the customizations, configurations, and software that were present when you created the image.
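To see which images (and shapes) are currently accepted for managed nodes in your tenancy, the CLI can list the supported options. A sketch:

```sh
# List the images and shapes OKE currently accepts for managed
# node pools; "all" returns every available option.
oci ce node-pool-options get --node-pool-option-id all
```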
Shapes for Managed Nodes and Virtual Nodes
OKE supports the provisioning of worker nodes (both managed nodes and virtual nodes) using many, but not all, of the shapes provided by Oracle Cloud Infrastructure. More specifically:
- Managed nodes
  - Supported:
    - Flexible shapes (for example, VM.Standard.E3.Flex), except flexible shapes used to create burstable instances
    - Bare metal shapes, including standard shapes and GPU shapes
    - HPC shapes, except on bare metal instances in RDMA networks
    - VM shapes, including standard shapes and GPU shapes
    - Dense I/O shapes
    - For the list of supported GPU shapes, see GPU shapes supported by Kubernetes Engine (OKE).
  - Not supported:
    - Dedicated VM host shapes
    - Micro VM shapes
    - HPC shapes on bare metal instances in RDMA networks
    - Flexible shapes used to create burstable instances
- Virtual nodes
  - Supported: Pod.Standard.A1.Flex, Pod.Standard.E3.Flex, and Pod.Standard.E4.Flex
  - Not supported: all other shapes
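One practical consequence of the pod shapes: on virtual nodes, capacity is allocated per pod, so (as I understand the pricing model) the CPU and memory you request in the pod spec are what gets provisioned and billed, while the Pod.Standard shape determines the processor family. A standard pod spec for illustration, with the image and request values made up:

```sh
# On virtual nodes, provisioning (and billing) follows the pod's
# resource requests; the pod shape determines the processor family.
# Image and request values are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-on-virtual-nodes
spec:
  containers:
    - name: app
      image: nginx:1.25   # example image
      resources:
        requests:
          cpu: "500m"     # CPU requested for this pod
          memory: "1Gi"   # memory requested for this pod
EOF
```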
Self-Managed Nodes
A self-managed node is a worker node hosted on a compute instance (or instance pool) that you have created yourself in Compute service, rather than on a compute instance that Kubernetes Engine has created for you. Self-managed nodes are often referred to as Bring Your Own Nodes (BYON). Unlike managed nodes and virtual nodes (which are grouped into managed node pools and virtual node pools respectively), self-managed nodes are not grouped into node pools.
Using the Compute service enables you to configure compute instances for specialized workloads, including compute shape and image combinations that are not available for managed nodes and virtual nodes.
Note: You can only add self-managed nodes to enhanced clusters.
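Self-managed nodes join the cluster through a cloud-init script on the compute instance. The sketch below is modeled on the example in the OKE documentation; the API endpoint and base64-encoded CA certificate are placeholders you obtain from your own cluster, so verify the exact script against the current docs.

```sh
#!/usr/bin/env bash
# Cloud-init for a self-managed node: fetch the OKE bootstrap script
# from the instance metadata service, then run it with the cluster's
# private API endpoint and base64-encoded CA certificate.
curl --fail -H "Authorization: Bearer Oracle" -L0 \
  http://169.254.169.254/opc/v2/instance/metadata/oke_init_script \
  | base64 --decode > /var/run/oke-init.sh
bash /var/run/oke-init.sh \
  --apiserver-endpoint "10.0.0.10" \
  --kubelet-ca-cert "LS0tLS1CRUdJTi..."   # truncated placeholder
```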
Supported Images and Shapes for Self-Managed Nodes
Kubernetes Engine supports the provisioning of self-managed nodes using some, but not all, of the Oracle Linux images and shapes provided by Oracle Cloud Infrastructure. More specifically:
- Images supported for self-managed nodes: The image you select for the compute instance hosting a self-managed node must be one of the OKE Oracle Linux 7 (OL7) or Oracle Linux 8 (OL8) images, and the image must have a Release Date of March 28, 2023 or later. See Image Requirements.
- Shapes supported for self-managed nodes: The shape you can select for the compute instance hosting a self-managed node is determined by the OKE Oracle Linux 7 (OL7) or Oracle Linux 8 (OL8) image you select for the compute instance.
Prerequisites to create an OKE Cluster
Before you can use Kubernetes Engine to create a Kubernetes cluster, you must meet a number of prerequisites. The full list can be found here: https://docs.oracle.com/en-us/iaas/Content/ContEng/Concepts/contengprerequisites.htm
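Among those prerequisites is an IAM policy that lets your group manage clusters. A sketch with the OCI CLI, where the group, policy, and compartment names are placeholders (the cluster-family resource type comes from the prerequisites documentation):

```sh
# Grant a group permission to manage OKE clusters in a compartment.
# Group, policy, and compartment names/OCIDs are placeholders.
oci iam policy create \
  --compartment-id ocid1.compartment.oc1..aaaa \
  --name oke-admins-policy \
  --description "Allow oke-admins to manage OKE clusters" \
  --statements '["Allow group oke-admins to manage cluster-family in compartment my-compartment"]'
```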