Kubernetes 101

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Originally developed at Google, it is now maintained by the Cloud Native Computing Foundation (CNCF). In this guide, we will explore the core components and concepts of Kubernetes and how to implement them.

Understanding Containers

Before we dive into Kubernetes, it is important to understand the concept of containers. A container is a lightweight, standalone executable package that includes everything needed to run an application, including the code, runtime, system tools, libraries, and settings. Containers allow applications to be isolated from the host system and other applications, making them portable and easily deployable across different environments. Containers provide a consistent runtime environment for applications, regardless of the underlying infrastructure.

Core Concepts of Kubernetes

Kubernetes is built on a set of core concepts that define how it operates. These concepts include:

Nodes: Nodes are the physical or virtual machines that run containerized applications. Each node has a set of resources, including CPU, memory, and storage, that can be used by the applications running on it.
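
If kubectl is already configured against a cluster, you can inspect a node's resources directly; for example (the node name is a placeholder):

# List the nodes in the cluster and their status
kubectl get nodes
# Show a node's CPU, memory, and pod capacity
kubectl describe node <node-name>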

Pods: Pods are the smallest deployable units in Kubernetes. A pod is a single instance of a container, or a group of tightly coupled containers, that share the same resources and network namespace. Pods are scheduled on nodes and can be deployed, scaled, and managed independently.
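
As a concrete illustration, here is a minimal Pod manifest that runs a single nginx container (names and image tag are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80

You would create it with kubectl apply -f pod.yaml.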

Services: Services provide a way to expose a set of pods as a network service. A service allows clients to access the pods through a stable IP address and DNS name, regardless of their location or underlying infrastructure.
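
For example, the following Service (illustrative names) gives pods labeled app: nginx a stable virtual IP and DNS name inside the cluster:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80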

ReplicaSets: ReplicaSets ensure that a specified number of pods are running at all times. If a pod fails or is terminated, the ReplicaSet will automatically create a new pod to replace it.
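
A minimal ReplicaSet manifest looks like the sketch below; in practice you usually let a Deployment create and manage ReplicaSets for you (names are illustrative):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25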

Deployments: Deployments are used to manage updates to the configuration of a set of pods. Deployments allow for rolling updates, canary releases, and rollbacks, ensuring that the application is always available and up-to-date.
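
A sketch of a Deployment configured for zero-downtime rolling updates (values are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # allow one extra pod during an update
      maxUnavailable: 0    # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25

Changing the image tag and re-applying the manifest triggers a rolling update; kubectl rollout undo deployment/nginx-deployment rolls it back.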

ConfigMaps: ConfigMaps are used to store configuration data that can be consumed by the applications running in the pods. ConfigMaps provide a way to decouple configuration from code, making it easier to manage and update.
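
For example, a ConfigMap holding application settings (names and values are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info
  DATABASE_HOST: db.internal.example.com

A container can then load every key as an environment variable by listing the ConfigMap under envFrom in its pod spec.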

Secrets: Secrets are used to store sensitive data, such as passwords and API keys, that can be consumed by the applications running in the pods. Note that Secret values are only base64-encoded by default, not encrypted; encryption at rest must be enabled separately, for example through the API server's encryption configuration.
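
A minimal Secret using stringData, which lets you write plaintext values that the API server stores base64-encoded (values are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  username: app-user
  password: <replace-me>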

Core Components of Kubernetes

Kubernetes is made up of several core components that work together to provide a powerful platform for container orchestration. These components include:

Kubelet: The Kubelet is the primary node agent that runs on each node in the Kubernetes cluster. The Kubelet is responsible for starting and stopping containers, monitoring their health, and reporting back to the Kubernetes API server.

Kube-proxy: The Kube-proxy is a network proxy that runs on each node in the Kubernetes cluster. The Kube-proxy is responsible for routing traffic to the correct pod based on the service IP and port.

API server: The API server is the central control plane for the Kubernetes cluster. The API server exposes the Kubernetes API, which can be used to manage the cluster.
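
One way to see the API directly is through kubectl proxy, which forwards authenticated requests from your machine to the API server (the port is arbitrary):

# Open an authenticated local proxy to the API server
kubectl proxy --port=8001 &
# Query the REST API directly, e.g. list pods in all namespaces
curl http://localhost:8001/api/v1/pods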

Controller manager: The controller manager is responsible for managing the core Kubernetes controllers, such as ReplicaSets and Deployments. The controller manager ensures that the desired state of the cluster is maintained at all times.

Etcd: Etcd is a distributed key-value store that is used to store the configuration data for the Kubernetes cluster. Etcd provides a highly available and consistent data store that is used by all of the components in the Kubernetes cluster.

Implementing Kubernetes on AWS

Kubernetes is a powerful platform for managing containerized applications, and Amazon Web Services (AWS) provides a variety of services that make it easy to set up and manage a Kubernetes cluster. In this section, we will walk through the process of setting up a Kubernetes cluster on AWS, including the core components involved.

Prerequisites

Before you start setting up a Kubernetes cluster on AWS, there are a few prerequisites that you need to have in place:

  1. An AWS account: To use AWS services, you need to have an AWS account. If you don’t have one, you can sign up on the AWS website; new accounts are eligible for the AWS Free Tier.
  2. An IAM user with the necessary permissions: To create and manage resources in AWS, you need to have an AWS Identity and Access Management (IAM) user with the necessary permissions. We recommend that you create a separate IAM user for managing your Kubernetes resources.
  3. AWS CLI: The AWS Command Line Interface (CLI) is a tool that allows you to interact with AWS services from the command line. You can download and install the AWS CLI from the AWS website.
  4. Kubectl: Kubectl is a command-line tool that allows you to interact with your Kubernetes cluster. You can download and install kubectl from the Kubernetes website. The quick checks after this list confirm that everything is installed correctly.
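
Once everything is installed, a few commands confirm that the tools and your credentials work (output will vary):

aws --version
aws sts get-caller-identity   # verifies that your IAM credentials are valid
kubectl version --client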

Setting up a Kubernetes Cluster on AWS

To set up a Kubernetes cluster on AWS, you can use the Amazon Elastic Kubernetes Service (EKS), which is a managed Kubernetes service provided by AWS. EKS makes it easy to set up, manage, and scale a Kubernetes cluster on AWS.

  1. Create an Amazon EKS Cluster

To create an EKS cluster, follow these steps:

  • Create an Amazon EKS Cluster VPC

Before you can create an EKS cluster, you need to create a Virtual Private Cloud (VPC) in which the cluster will run. A VPC is a logically isolated section of the AWS cloud that you can use to launch your resources. You can create a VPC using the AWS Management Console or the AWS CLI.

To create a VPC using the AWS CLI, run the following command:

aws ec2 create-vpc --cidr-block 10.0.0.0/16

This command creates a VPC with the IP address range of 10.0.0.0/16.
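
EKS also requires subnets in at least two Availability Zones inside that VPC. A minimal sketch, assuming the us-east-1 region and using the VPC ID returned by the previous command (all IDs are placeholders):

aws ec2 create-subnet --vpc-id vpc-xxxxx --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-xxxxx --cidr-block 10.0.2.0/24 --availability-zone us-east-1b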

  • Create Amazon EKS Cluster Control Plane

After creating a VPC, you can create the Amazon EKS cluster control plane using the AWS Management Console or the AWS CLI.

To create a cluster control plane using the AWS CLI, run the following command:

aws eks create-cluster --name my-cluster --role-arn arn:aws:iam::123456789012:role/eksServiceRole --resources-vpc-config subnetIds=subnet-xxxxx,subnet-yyyyy,securityGroupIds=sg-xxxxx

This command creates an EKS cluster named “my-cluster”. Replace 123456789012 with your AWS account ID, and subnet-xxxxx, subnet-yyyyy, and sg-xxxxx with the actual IDs of the subnets and security group that you created earlier.

  • Update kubectl Config

After creating the EKS cluster control plane, you need to update your kubectl configuration file to access the cluster. To do this, run the following command:

aws eks update-kubeconfig --name my-cluster

This command downloads the necessary configuration information for your cluster and updates your kubectl configuration file.
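
To confirm that kubectl can now reach the cluster, run:

kubectl cluster-info
kubectl get svc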

  2. Create Worker Nodes

Once you have created the EKS cluster control plane, you need to create worker nodes to run your containerized applications. To create worker nodes, you can use Amazon Elastic Compute Cloud (EC2) instances.

  • Create an Amazon Machine Image (AMI) for Worker Nodes

To create an AMI for worker nodes, first launch an EC2 instance and install Docker and the necessary dependencies on it; then create an AMI from the instance.

You can use the following CloudFormation template to create an EC2 instance with the necessary configuration:

Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0c94855ba95c71c99   # Amazon Linux 2 (use a current AMI for your region)
      InstanceType: t2.micro
      KeyName: my-key-pair
      SecurityGroupIds:
        - sg-1234567890abcdef0
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          # Install Docker and the AWS tooling needed on a worker node
          yum update -y
          amazon-linux-extras install docker -y
          systemctl enable --now docker
          usermod -a -G docker ec2-user
          yum install -y aws-cfn-bootstrap awscli
  LaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      # A launch configuration requires an AMI ID; !Ref EC2Instance would
      # return an instance ID. Fill this in with the AMI you create from
      # EC2Instance in the next step.
      ImageId: <ami-id>
      InstanceType: t2.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          # Join the EKS cluster at boot. On EKS-optimized AMIs this is
          # done with the bundled bootstrap script:
          /etc/eks/bootstrap.sh my-cluster
      SecurityGroups:
        - sg-1234567890abcdef0
      KeyName: my-key-pair
  AutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      LaunchConfigurationName: !Ref LaunchConfiguration
      MinSize: 1
      MaxSize: 3
      DesiredCapacity: 1
      VPCZoneIdentifier:
        - subnet-12345678

This CloudFormation template creates an EC2 instance with Docker installed, along with a launch configuration and an Auto Scaling group that starts one worker instance from the AMI you will create next. The launch configuration's user data is what joins each node to your EKS cluster at boot.

After launching the EC2 instance, you can create an AMI from it using the following command:

aws ec2 create-image --instance-id <instance-id> --name <ami-name> --description <ami-description>

Replace <instance-id> with the ID of the EC2 instance that you launched earlier, <ami-name> with a name for your AMI, and <ami-description> with a description for your AMI.
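
Image creation takes several minutes; you can block until the AMI is ready using the CLI's built-in waiter:

aws ec2 wait image-available --image-ids <ami-id>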

  3. Launch Worker Nodes with the AMI

Once you have created an AMI for worker nodes, you can use it to launch worker nodes in your EKS cluster. To launch worker nodes, you can use a CloudFormation template that creates an Auto Scaling group with EC2 instances running the AMI.

You can use the following CloudFormation template to launch worker nodes in your EKS cluster:

Resources:
  EC2InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Path: /
      Roles:
        - Ref: EC2Role
  EC2Role:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
            Action:
              - sts:AssumeRole
      Path: /
      Policies:
        - PolicyName: ec2-policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - ec2:DescribeInstances
                  - ec2:DescribeTags
                  - ec2:DescribeRegions
                  - ec2:DescribeSecurityGroups
                  - ec2:DescribeSubnets
                  - ec2:DescribeRouteTables
                  - ec2:CreateSecurityGroup
                  - ec2:CreateTags
                  - ec2:CreateRoute
                  - ec2:CreateRouteTable
                  - ec2:AuthorizeSecurityGroupIngress
                  - ec2:RevokeSecurityGroupIngress
                  - ec2:DeleteSecurityGroup
                  - ec2:CreateInternetGateway
                  - ec2:AttachInternetGateway
                  - ec2:CreateNatGateway
                  - ec2:DeleteNatGateway
                  - ec2:ModifyInstanceAttribute
                  - ec2:RunInstances
                  - ec2:TerminateInstances
                  - autoscaling:DescribeAutoScalingGroups
                  - autoscaling:DescribeLaunchConfigurations
                  - autoscaling:UpdateAutoScalingGroup
                  - autoscaling:CreateLaunchConfiguration
                  - autoscaling:DeleteLaunchConfiguration
                  - autoscaling:SetDesiredCapacity
                  - cloudformation:DescribeStackResource
                  - cloudformation:DescribeStacks
                  - cloudformation:GetTemplate
                  - cloudformation:ListStackResources
                  - cloudformation:UpdateStack
                  - cloudformation:CreateStack
                  - cloudformation:DeleteStack
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                  - logs:DescribeLogStreams
                  - logs:DescribeLogGroups
                  - iam:PassRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
  LaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateName: EKSWorkerNodeLaunchTemplate
      LaunchTemplateData:
        BlockDeviceMappings:
          - DeviceName: /dev/xvda
            Ebs:
              VolumeSize: 20
              VolumeType: gp2
              DeleteOnTermination: true
        ImageId: <ami-id>
        InstanceType: t3.medium
        KeyName: <key-pair>
        IamInstanceProfile:
          Name: <instance-profile>
        SecurityGroupIds:
          - <security-group-id>
        UserData: !Base64 |
          #!/bin/bash
          # Join the node to the EKS cluster. On EKS-optimized AMIs this
          # is done with the bundled bootstrap script:
          /etc/eks/bootstrap.sh <cluster-name>
  AutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      VPCZoneIdentifier:
        - <subnet-id>
      LaunchTemplate:
        LaunchTemplateId: !Ref LaunchTemplate
        Version: !GetAtt LaunchTemplate.LatestVersionNumber
      MaxSize: 10
      MinSize: 1
      DesiredCapacity: 1
      Tags:
        - Key: Name
          Value: eks-worker-node
          PropagateAtLaunch: true
  NodeGroup:
    Type: AWS::EKS::Nodegroup
    Properties:
      ClusterName: <cluster-name>
      NodegroupName: eks-worker-nodegroup
      NodeRole: !GetAtt EC2Role.Arn
      Subnets:
        - <subnet-id>
      InstanceTypes:
        - t3.medium
      AmiType: AL2_x86_64
      DiskSize: 20
      ScalingConfig:
        DesiredSize: 1
        MinSize: 1
        MaxSize: 10
      Labels:
        nodegroup-type: worker
      Tags:
        Name: eks-worker-node

This CloudFormation template creates an Amazon EKS worker node group that includes an Auto Scaling group, an Amazon EC2 launch template, and an Amazon EKS node group resource. The template uses the specified Amazon Machine Image (AMI) and instance type to launch worker nodes into the specified subnets.

Once you deploy this template, the worker nodes will join the EKS cluster and start running Kubernetes workloads. You can confirm this by running the kubectl get nodes command and verifying that the worker nodes show a Ready status.
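
As a final smoke test, you can deploy a small sample application and expose it through an AWS load balancer (the deployment name is illustrative):

kubectl create deployment hello-nginx --image=nginx
kubectl expose deployment hello-nginx --port=80 --type=LoadBalancer
kubectl get pods
kubectl get svc hello-nginx   # the EXTERNAL-IP column shows the load balancer address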

Conclusion

In this guide, we have covered the basic concepts of Kubernetes and how to deploy a Kubernetes cluster on Amazon Web Services (AWS) using Amazon EKS. We have also walked through the steps involved in creating an Amazon EKS cluster, deploying worker nodes, and deploying applications to the cluster.

We started by creating a VPC and subnets for the cluster to run in. We then created the Amazon EKS cluster control plane, built an AMI for the worker nodes, and launched them with CloudFormation. Finally, we deployed a sample application to the cluster and verified that it was running successfully.

With the Kubernetes cluster up and running, there are several other topics that you may want to explore to further customize and optimize your cluster. These include:

  • Configuring cluster autoscaling
  • Using Kubernetes networking to manage service discovery and load balancing
  • Implementing Kubernetes RBAC to manage user access to the cluster
  • Configuring Kubernetes persistent storage using Amazon EBS or Amazon EFS

Amazon EKS provides a powerful and flexible platform for deploying and managing Kubernetes clusters in the cloud. By leveraging the capabilities of AWS, you can build highly scalable and resilient clusters that can support a wide range of workloads and applications. We hope that this guide has provided you with a good foundation for building and managing your own Kubernetes clusters on AWS, and we encourage you to continue exploring and experimenting with this powerful technology.