Cloud Container Engine (CCE) provides highly scalable, high-performance, enterprise-grade Kubernetes clusters and supports running Docker containers. With Cloud Container Engine, you can easily deploy, manage, and scale containerized applications in the cloud.

Cloud Container Engine exposes the native Kubernetes API, supports kubectl, and also provides a graphical console, giving you a complete end-to-end experience. Before using Cloud Container Engine, you are advised to familiarize yourself with the following basic concepts.

Node

Each node corresponds to a server (a virtual machine instance or a physical server) on which container applications run. An agent (kubelet) runs on each node and manages the container instances running there. The number of nodes in a cluster can be scaled.

Virtual Private Cloud (VPC)

A Virtual Private Cloud is logically isolated at the network level, providing a secure and isolated network environment. You can define virtual networks in a VPC that are no different from traditional networks, and the VPC also provides advanced network services such as elastic IP addresses and security groups.

Security Group

A security group is a logical group that defines access policies for elastic cloud servers in the same VPC that have the same security requirements and trust each other. After a security group is created, you can define access rules in it. When an elastic cloud server joins the security group, it is protected by those rules.

Relationship Between Clusters, VPCs, Security Groups, and Nodes

As shown in Figure 1, a region can contain multiple Virtual Private Clouds (VPCs). A VPC consists of subnets, and traffic between subnets is routed through subnet gateways. A cluster is created in a particular subnet, so three scenarios are possible:

  • Different clusters can be created in different Virtual Private Clouds.
  • Different clusters can be created in the same subnet.
  • Different clusters can be created in different subnets.


Figure 1 Relationship Between Clusters, VPCs, Security Groups, and Nodes

Instance (Pod)

An instance (Pod) is the smallest basic unit for Kubernetes to deploy applications or services. A Pod encapsulates multiple application containers (it can also have only one container), storage resources, an independent network IP, and policy options for managing and controlling the way containers run.
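A minimal manifest makes the concept concrete. The sketch below defines a Pod with a single container; the names and the image are purely illustrative:

```yaml
# A minimal Pod with one container (name, labels, and image are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx           # labels let other objects select this Pod
spec:
  containers:
  - name: nginx          # container name inside the Pod
    image: nginx:1.25    # image pulled from a registry
    ports:
    - containerPort: 80  # port the container listens on
```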

Figure 2 Instance (Pod)

Container

A running instance created from a Docker image. Multiple containers can run on a single node. The essence of a container is a process, but unlike a process that runs directly on the host, the container process runs in its own independent namespace.

Workload (Project)

Workload is an abstract model of a group of Pods in Kubernetes, used to describe the running carrier of business operations. It includes various types such as Deployment, StatefulSet, DaemonSet, Job, and CronJob.

  • Stateless workload: a “Deployment” in Kubernetes. Stateless workloads support elastic scaling and rolling upgrades and suit scenarios where instances are fully independent and functionally identical, such as nginx and wordpress.

  • Stateful workload: a “StatefulSet” in Kubernetes. Stateful workloads support ordered deployment and deletion of instances as well as persistent storage. They suit scenarios where instances access one another, such as etcd and mysql-HA.

  • Daemon set: a “DaemonSet” in Kubernetes. A DaemonSet ensures that all (or some) nodes run one Pod instance and automatically adds instances to newly added nodes. It suits scenarios where one instance must run on every node, such as ceph, fluentd, and Prometheus Node Exporter.

  • One-off task: a “Job” in Kubernetes. A one-off task runs to completion once and can start immediately after deployment. A typical use is running a task that uploads an image to the image repository before a workload is created.

  • Scheduled task: a “CronJob” in Kubernetes. A scheduled task is a short task that runs on a specified schedule. A typical use is synchronizing the clocks of all running nodes at a fixed time.
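As a sketch of the most common workload type, the Deployment below keeps three identical replicas of one Pod template running (all names and the image are illustrative):

```yaml
# A stateless workload: a Deployment maintaining 3 identical replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                 # desired number of Pod instances
  selector:
    matchLabels:
      app: nginx              # which Pods this Deployment manages
  template:                   # Pod template stamped out per replica
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```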


Figure 3 The Relationship between Workloads and Pods

Orchestration

An orchestration template contains the definitions of a group of container services and their interconnections and can be used to deploy and manage multi-container applications and virtual machine applications.

Image

A Docker image is a template and the standard format for packaging a container application, used to create Docker containers. In other words, a Docker image is a special file system that provides the programs, libraries, resources, and configuration files a container needs at runtime, plus some parameters prepared for runtime (such as anonymous volumes, environment variables, and users). An image contains no dynamic data, and its content does not change after it is built. When you deploy a containerized application, you can specify an image from Docker Hub, a container image service, or your own private registry. For example, a Docker image can contain a complete Ubuntu operating system environment with only the user’s applications and their dependencies installed.

The relationship between an image and a container is just like that between a class and an instance in object-oriented programming. The image is a static definition, while the container is the entity when the image is running. Containers can be created, started, stopped, deleted, paused, and so on.


Figure 4 The Relationship between Images, Containers, and Workloads

Namespace

A namespace is an abstract collection of a group of resources and objects. Multiple namespaces can be created in the same cluster, and data in different namespaces is isolated, so they can share the same cluster’s services without interfering with each other. For example, workloads for the development environment and the test environment can be placed in separate namespaces.

Common resources such as pods, services, replication controllers, and deployments belong to a namespace (“default” by default), whereas resources such as nodes and persistentVolumes do not belong to any namespace.
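Creating a namespace takes only a short manifest; the name below is illustrative. Namespaced resources are then created with `metadata.namespace` set (or with `kubectl -n <namespace>`):

```yaml
# A namespace isolating, e.g., a test environment (name is illustrative).
apiVersion: v1
kind: Namespace
metadata:
  name: test
```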

Service

A service is an abstract method of exposing an application running on a group of pods as a network service.

With Kubernetes, you don’t need to modify the application to use an unfamiliar service discovery mechanism. Kubernetes provides its own IP addresses for pods and a single DNS name for a group of pods, and can perform load balancing among them.

Kubernetes allows you to specify a required type of service. The values and behaviors of the types are as follows:

  • ClusterIP: intra-cluster access. The Service is exposed on an internal cluster IP and is reachable only from within the cluster. This is the default ServiceType.
  • NodePort: node access. The Service is exposed on each node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is created automatically. You can access the NodePort Service from outside the cluster by requesting NodeIP:NodePort.
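A NodePort Service can be sketched as follows; it exposes Pods labeled `app: nginx` on every node’s port 30080 (all names, labels, and ports are illustrative):

```yaml
# A NodePort Service reachable from outside the cluster at NodeIP:30080.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx        # forwards to Pods carrying this label
  ports:
  - port: 80          # ClusterIP port inside the cluster
    targetPort: 80    # container port on the Pod
    nodePort: 30080   # static port opened on every node (30000-32767)
```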

Layer 7 Load Balancing (Ingress)

Ingress is a collection of rules that route requests entering the cluster. It can provide externally reachable URLs for services, load balancing, SSL termination, HTTP routing, and so on.
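A minimal Ingress sketch routes HTTP requests for one host to a backend Service (the host, names, and ports are illustrative):

```yaml
# An Ingress routing requests for one host to a backend Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: www.example.com       # requests with this Host header
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service # target Service name
            port:
              number: 80
```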

Network Policy

NetworkPolicy provides policy-based network control to isolate applications and reduce the attack surface. It uses label selectors to emulate traditional segmented networks, and policies control the traffic between segments as well as traffic from outside.
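As a sketch, the policy below allows ingress to Pods labeled `app: db` only from Pods labeled `app: backend` (all labels and names are illustrative):

```yaml
# Allow traffic to "db" Pods only from "backend" Pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-db
spec:
  podSelector:
    matchLabels:
      app: db            # Pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend   # only these Pods may connect
```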

ConfigMap

ConfigMap stores configuration data as key-value pairs and can hold a single property or an entire configuration file. A ConfigMap is similar to a Secret but is more convenient for strings that contain no sensitive information.
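The sketch below shows both uses, a single property and a whole configuration file, in one ConfigMap (names and values are illustrative):

```yaml
# A ConfigMap holding one property and one embedded config file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  log_level: "info"          # single key-value property
  app.properties: |          # an entire config file as one value
    retries=3
    timeout=30s
```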

Secret

A Secret holds sensitive data such as passwords, tokens, and keys so that it does not have to be exposed in an image or a Pod spec. A Secret can be consumed as a volume or as environment variables.
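A Secret sketch looks like a ConfigMap, except that values under `data` are base64-encoded (the credentials below are purely illustrative):

```yaml
# A Secret holding illustrative credentials; data values are base64-encoded.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=     # base64("admin")
  password: czNjcmV0     # base64("s3cret")
```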

Label

A label is a key/value pair attached to an object, such as a pod. Labels are used to mark characteristics of an object that are meaningful to users, but they carry no direct meaning for the core system.

LabelSelector

The label selector is the core grouping mechanism of Kubernetes. With a label selector, a client or user can identify a group of resource objects that share common characteristics or attributes.
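The two pieces fit together as in the fragments below: labels attached to an object’s metadata, and a selector (as used inside a Deployment or Service spec) that matches them; all values are illustrative:

```yaml
# Labels on an object's metadata:
metadata:
  labels:
    app: nginx
    env: production
---
# A label selector, e.g. inside a Deployment spec:
selector:
  matchLabels:
    env: production   # matches every object labeled env=production
```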

Annotation

An annotation, like a label, is defined as a key/value pair.

Labels have strict naming rules, define metadata for Kubernetes objects, and are used by label selectors.

Annotations are “additional” information defined arbitrarily by users so that external tools can look it up.

PersistentVolume

PersistentVolume (PV) is a piece of network storage in the cluster. Like a node, it is also a resource of the cluster.

PersistentVolumeClaim

PV is a storage resource, and PersistentVolumeClaim (PVC) is a request for PV. PVC is similar to a pod: a pod consumes node resources, while PVC consumes PV resources; a pod can request CPU and memory resources, while PVC requests a data volume of a specific size and access mode.
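A PVC sketch requests a volume by size and access mode; the name and size are illustrative:

```yaml
# A PVC requesting a 10 GiB volume with single-node read-write access.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce        # mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi      # requested capacity
```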

Horizontal Pod Autoscaling (HPA)

Horizontal Pod Autoscaling (HPA) is the Kubernetes function that scales pods horizontally and automatically. Through the scale mechanism of the replication controller, the Kubernetes cluster scales a service out or in to keep it elastic.
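As a sketch, the HPA below scales a Deployment between 2 and 10 replicas to hold average CPU utilization around 80% (the target name and the numbers are illustrative):

```yaml
# An HPA targeting a Deployment on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:            # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale to keep average CPU near 80%
```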

Affinity and Anti-affinity

Before applications were containerized, multiple components would be installed on one virtual machine and communicate with each other as processes. When an application is split into containers, the split usually follows process boundaries: for example, one container for the business process and another for monitoring, log processing, or local data, each with its own life cycle. If these containers end up far apart on the network, requests are forwarded multiple times and performance suffers badly.

  • Affinity: enables co-located deployment and, together with enhanced networking, routes traffic locally, reducing network loss. For example, if applications A and B interact frequently, affinity keeps the two as close as possible, even on the same node, to cut the performance cost of network communication.

  • Anti-affinity: serves high availability by spreading instances apart. When a node fails, only 1/N of the instances, or a single instance, is affected. For example, when an application runs multiple replicas, anti-affinity scatters the instances across different nodes to improve availability.

  • NodeAffinity: restricts, via label selection, which nodes a pod may be scheduled to.

  • NodeAntiAffinity: restricts, via label selection, which nodes a pod may not be scheduled to.

  • PodAffinity: schedules workloads onto the same node. You can co-locate workloads according to business needs so that traffic between containers is routed locally, reducing network consumption.

  • PodAntiAffinity: schedules workloads onto different nodes. Deploying multiple instances of one workload with anti-affinity reduces the impact of a node failure; deploying mutually interfering applications with anti-affinity avoids the interference.
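Inside a Pod template, pod anti-affinity can be sketched like this; it spreads replicas of one app across nodes (the label is illustrative):

```yaml
# Pod anti-affinity fragment for a Pod template's spec:
# never co-locate two Pods of the same app on one node.
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: nginx                          # avoid nodes already running this app
        topologyKey: kubernetes.io/hostname     # "different node" granularity
```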

Resource Quota

A resource quota is a mechanism for limiting users’ resource usage, typically per namespace.
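A ResourceQuota sketch caps aggregate usage in one namespace (the namespace and the limits are illustrative):

```yaml
# Cap total Pods, CPU requests, and memory requests in one namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: test
spec:
  hard:
    pods: "20"              # at most 20 Pods in the namespace
    requests.cpu: "10"      # total CPU requested by all Pods
    requests.memory: 20Gi   # total memory requested by all Pods
```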

Limit Range

By default, containers in Kubernetes run with no CPU or memory limits. A LimitRange (limits for short) adds resource constraints to a namespace, including minimum, maximum, and default values. When a pod is created, resources are allocated according to the LimitRange parameters.
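A LimitRange sketch supplies maximum and default container resources for a namespace (all values are illustrative):

```yaml
# Per-container maximums and defaults applied within one namespace.
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
  namespace: test
spec:
  limits:
  - type: Container
    max:
      cpu: "2"            # no container may exceed 2 CPUs
      memory: 1Gi
    default:              # applied when a container sets no limits
      cpu: 500m
      memory: 256Mi
    defaultRequest:       # applied when a container sets no requests
      cpu: 250m
      memory: 128Mi
```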

Environment Variables

An environment variable is a variable set in the running environment of a container. You can set up to 30 environment variables when creating a container template. Environment variables can be modified after the workload is deployed, giving the workload great flexibility.

Setting an environment variable in CCE has the same effect as “ENV” in a Dockerfile.
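In a container spec, environment variables can be sketched as below, one literal value and one drawn from a ConfigMap key (all names are illustrative):

```yaml
# Environment variables fragment inside a Pod template's container spec.
spec:
  containers:
  - name: app
    image: nginx:1.25
    env:
    - name: LOG_LEVEL
      value: "info"             # literal value, like ENV in a Dockerfile
    - name: DB_HOST
      valueFrom:
        configMapKeyRef:
          name: app-config      # read the value from a ConfigMap
          key: db_host
```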

Application Service Mesh (Istio)

Istio is an open platform for connecting, securing, controlling, and observing services.

Cloud Container Engine integrates deeply with the application service mesh to provide a non-intrusive microservice governance solution that supports full life-cycle management and traffic governance and is compatible with the Kubernetes and Istio ecosystems. Once the application service mesh is enabled with one click, it provides non-intrusive, intelligent traffic governance, including load balancing, circuit breaking, rate limiting, and other capabilities. The mesh has multiple built-in gray-release processes, such as canary and blue-green releases, for one-stop automated release management. Based on non-intrusive collection of monitoring data, it deeply integrates application performance management (APM) capabilities, providing real-time traffic topology, call chains, and other performance monitoring and runtime diagnostics that build a panoramic view of running services.

Author: chering  Created: 2024-11-26 10:51
Last edited by: chering  Updated: 2025-01-17 09:02