Wednesday, February 16, 2022

Docker vs. Kubernetes

What is Docker?

Docker is an open source containerization platform. Basically, it’s a toolkit that makes it easier, safer and faster for developers to build, deploy and manage containers. Although it began as an open source project, Docker today also refers to Docker, Inc., the company that produces the commercial Docker product. Currently, it is the most popular tool for creating containers, whether developers use Windows, Linux or macOS.

  • Docker Engine: The runtime environment that allows developers to build and run containers.
  • Dockerfile: A simple text file that defines everything needed to build a Docker container image, such as the base OS, network specifications and file locations. It’s essentially a list of commands that Docker Engine will run in order to assemble the image.
  • Docker Compose: A tool for defining and running multi-container applications. It uses a YAML file to specify which services make up the application and can deploy and run their containers with a single command via the Docker CLI.
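As a sketch of how these pieces fit together, here is a minimal Dockerfile for a hypothetical Node.js service (the base image, port, and file names are illustrative, not from this article):

```dockerfile
# Build a container image for a hypothetical Node.js app
FROM node:18-alpine          # base OS + runtime layer
WORKDIR /app                 # working directory inside the image
COPY package*.json ./        # copy the dependency manifest first for better layer caching
RUN npm install              # install dependencies into the image
COPY . .                     # copy the application source
EXPOSE 3000                  # document the port the app listens on
CMD ["node", "server.js"]    # command Docker Engine runs when the container starts
```

Docker Engine executes these instructions top to bottom, caching each layer, to assemble the final image.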

How Does Docker Work?

To understand how Docker works, you need to get acquainted with its main components and the role they play in the platform:

  • The Docker daemon is a service that runs on the host and listens to Docker API requests. This continuous process manages Docker objects and communicates with other daemons.
  • The Docker client is a component that provides a command line interface (CLI) for interacting with Docker and instructing dockerd which commands to carry out.
  • Docker objects are elements necessary to construct the applications. They include Docker images, containers, volumes, networks, and other objects.
  • Docker Registries are content delivery systems that store Docker images. You can set up and use a private registry or use Docker Hub, a public registry from which Docker pulls images by default.
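The components above can be seen working together in a typical CLI session. The commands below are illustrative (the image tag and registry hostname are placeholders); each command is sent by the Docker client to the daemon, which pulls from or pushes to a registry as needed:

```shell
docker pull nginx:latest                       # client asks dockerd to fetch an image from Docker Hub
docker run -d -p 8080:80 nginx                 # dockerd creates and starts a container object
docker ps                                      # list running containers managed by the daemon
docker push myregistry.example.com/myapp:1.0   # push an image to a (hypothetical) private registry
```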

When do I use Docker?

Using Docker, you can move your application from a local development server to a production server without errors caused by environment differences. Docker is highly recommended for application development, and it offers many common tools (such as Node.js, PHP, Java, and various databases) as prebuilt images.
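For instance, running one of those prebuilt tool images is a one-liner; the exact tags below are illustrative:

```shell
docker run --rm node:18 node --version                    # run Node.js without installing it locally
docker run --rm -d -e MYSQL_ROOT_PASSWORD=pass mysql:8    # spin up a throwaway database
```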

Advantages and Drawbacks of Docker

Take a look at the main pros and cons of using Docker.

Advantages:

  • It is simple and fast to spin up new container instances.
  • Consistency across multiple environments.
  • Isolated environments simplify debugging.
  • Large community support.
  • Containers are lighter and use fewer resources than virtual machines.
  • The platform supports CI/CD.
  • The ability to automate repetitive tasks.

Drawbacks:

  • Possible security issues if containers are not secured properly.
  • Potential performance issues in non-native environments.
  • Since containers share the host kernel, they are not entirely isolated environments.
  • Cross-platform compatibility limitations.
  • Not well suited for applications that require rich graphical interfaces.

Build and Deploy Containers With Docker

Docker helps developers create and deploy software within containers. It’s an open source tool that allows you to “Build, Ship, and Run applications, Anywhere.”

What is Kubernetes?

Kubernetes is an open source container orchestration platform for scheduling and automating the deployment, management and scaling of containerized applications. Containers run on a group of machines called a “cluster.” A Kubernetes cluster includes a machine designated as the “master node” (the control plane), which schedules workloads for the rest of the machines, the “worker nodes,” in the cluster.

  • Deployment: Schedules and automates container deployment across multiple compute nodes, which can be VMs or bare-metal servers.
  • Service discovery and load balancing: Exposes a container on the internet and employs load balancing when traffic spikes occur to maintain stability.
  • Auto-scaling features: Automatically starts up new containers to handle heavy loads, whether based on CPU usage, memory thresholds or custom metrics.
  • Self-healing capabilities: Restarts, replaces or reschedules containers when they fail or when nodes die, and kills containers that don’t respond to user-defined health checks.
  • Automated rollouts and rollbacks: Rolls out application changes and monitors application health for any issues, rolling back changes if something goes wrong.
  • Storage orchestration: Automatically mounts a persistent local or cloud storage system of choice as needed to reduce latency — and improve user experience.
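The deployment and auto-scaling features above map directly onto Kubernetes resource definitions. The following is a hedged sketch (the app name, replica counts, and CPU threshold are illustrative):

```yaml
# Deployment: schedules 3 replicas of a hypothetical app across worker nodes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:latest
---
# HorizontalPodAutoscaler: starts new Pods when average CPU utilization passes 80%
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```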

How Does Kubernetes Work?

The basic Kubernetes components and their role in this orchestration tool include the following:

  • A Kubernetes cluster is a set of node machines for running containerized applications. The cluster consists of a control plane and one or more computing machines.
  • The manifest file is a basic file that defines the desired state of objects in a particular Kubernetes cluster and instructs the software on how you want your cluster to look.
  • Kubectl is the command-line interface used for communicating with the API server. It provides instructions to the server and directly manages resources, adding and removing containers when needed.
  • The master node is responsible for load balancing workloads and establishing and maintaining communication inside the cluster. Additionally, it assigns and administers tasks to worker nodes.
  • Worker nodes are machines for deploying containerized workloads and storage volumes. A Kubernetes cluster consists of a single master node and multiple worker nodes, each with its own task.
  • A pod is the smallest deployable Kubernetes object and consists of one or more containers that run together on the same node. Containers deployed within the same pod share resources such as their hostname, IP address, and IPC.
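To make the pod concept concrete, here is a minimal manifest for a single-container Pod; kubectl sends it to the API server, which schedules it onto a worker node (the names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: hello
    image: nginx:latest   # every container in this Pod would share the same IP and hostname
```

You would apply it with `kubectl apply -f hello-pod.yaml`.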

When do I use Kubernetes?

  1. When you want to monitor the health and performance of multiple containers.
  2. To deploy thousands of containers with a single command.
  3. To detect container failures/crashes and fix them.
  4. To scale up and scale down the number of containers.
  5. For advanced networking between containers hosted across the cluster.
  6. To customize deployment of containers.
  7. To support all cloud service environments: many cloud providers offer built-in Kubernetes services.
  8. To upgrade all the containers with a single command.
  9. To roll back container updates if something goes wrong.
  10. To support a wide variety of authentication and authorization services.
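Several of the items above correspond to one-line kubectl commands. These are standard kubectl invocations, shown here against hypothetical resource names:

```shell
kubectl get pods --watch                          # 1. monitor the health of running containers
kubectl scale deployment web --replicas=1000      # 2, 4. scale up or down with one command
kubectl set image deployment/web web=myapp:2.0    # 8. upgrade all containers with one command
kubectl rollout undo deployment/web               # 9. roll back an update if something goes wrong
```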

Advantages and Drawbacks of Kubernetes

Take a look at the main advantages and disadvantages of Kubernetes.

Advantages:

  • Simplifies rolling updates, canary deployments, horizontal autoscaling, and other deployment operations.
  • Automated processes help speed up delivery and improve general productivity.
  • Its ability to run across multiple environments eliminates infrastructure lock-in.
  • Provides the foundation for working with cloud-native apps.
  • Its features support high availability, low downtime, and overall more stable applications.

Drawbacks:

  • The complexity of the platform is inefficient for smaller applications.
  • Migrating a non-containerized application onto the Kubernetes platform can be quite challenging.
  • Due to its complexity, there is a steep learning curve that may initially reduce productivity.

Kubernetes and Docker: Finding your best container solution

Although Kubernetes and Docker are distinct technologies, they are highly complementary and make a powerful combination. Docker provides the containerization piece, enabling developers to easily package applications into small, isolated containers via the command line. Developers can then run those applications across their IT environment, without having to worry about compatibility issues. If an application runs on a single node during testing, it will run anywhere.

Docker and Kubernetes Work Together

Docker helps to “create” containers, and Kubernetes allows you to “manage” them at runtime. Use Docker for packaging and shipping the app. Employ Kubernetes to deploy and scale your app. Startups or small companies with fewer containers usually can manage them without having to use Kubernetes, but as the companies grow, their infrastructure needs will rise; hence, the number of containers will increase, which can be difficult to manage. This is where Kubernetes comes into play.

  1. The developers’ code is pushed to a Git repository.
  2. The build and test happen with Maven in Jenkins for CI.
  3. Using Ansible as a deployment tool, we will write Ansible playbooks to deploy on AWS.
  4. We will introduce JFrog Artifactory as the repository manager after the build process from Jenkins; the artifacts will be stored in Artifactory.
  5. Ansible can communicate with Artifactory, take the artifacts and deploy them onto the Amazon EC2 instance.
  6. The SonarQube can help in reviewing the code and will give static code analysis.
  7. We then introduce Docker as a containerization tool. Just like we did on Amazon EC2, we will deploy the app in a Docker container by creating a Dockerfile and Docker images.
  8. Once the above setup is done, we will introduce Kubernetes to create a Kubernetes cluster, and using the Docker images, we will deploy the app.
  9. Finally, we will use Nagios to monitor the infrastructure.
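The pipeline above can be sketched as a sequence of shell steps. The tool invocations are standard, but the job names, hosts, and file paths are hypothetical; a real setup would wire these into Jenkins stages rather than run them by hand:

```shell
git push origin main                                    # 1. code pushed to Git
mvn clean verify sonar:sonar                            # 2, 6. build, test, and static code analysis
mvn deploy                                              # 4. publish build artifacts to Artifactory
ansible-playbook -i aws_hosts deploy.yml                # 3, 5. Ansible deploys artifacts to EC2
docker build -t myapp:1.0 . && docker push myapp:1.0    # 7. containerize the app
kubectl apply -f k8s/                                   # 8. deploy the images to the Kubernetes cluster
```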

Conclusion

Microservices help companies convert large monolithic applications into smaller components so they can package and deploy them separately, without dependencies between them. Microservices give apps more agility, scalability and resilience — with microservices, apps can be updated, changed and redeployed faster.

Wednesday, December 8, 2021

The Modern Microservice

Microservices can be deployed individually on separate servers provisioned with fewer resources - only what is required by each service and the host system itself, helping to lower compute resource expenses.

Microservices-based architecture is aligned with Event-driven Architecture and Service-Oriented Architecture (SOA) principles, where complex applications are composed of small independent processes which communicate with each other through APIs over a network. APIs allow access by other internal services of the same application or external, third-party services and applications.

Each microservice is developed and written in a modern programming language, selected to be the best suitable for the type of service and its business function. This offers a great deal of flexibility when matching microservices with specific hardware when required, allowing deployments on inexpensive commodity hardware.

Although the distributed nature of microservices adds complexity to the architecture, one of the greatest benefits of microservices is scalability. With the overall application becoming modular, each microservice can be scaled individually, either manually or automated through demand-based autoscaling.

Seamless upgrades and patching processes are other benefits of microservices architecture. There is virtually no downtime and no service disruption to clients because upgrades are rolled out seamlessly - one service at a time, rather than having to re-compile, re-build and re-start an entire monolithic application. As a result, businesses are able to develop and roll-out new features and updates a lot faster, in an agile approach, having separate teams focusing on separate features, thus being more productive and cost-effective.

Monolith to Microservices


Most new companies today run their business processes in the cloud. Newer startups and enterprises which realized early enough the direction technology was headed developed their applications for the cloud.

Not all companies were so fortunate. Some built their success decades ago on top of legacy technologies - monolithic applications with all components tightly coupled and almost impossible to separate, a nightmare to manage and deployed on super-expensive hardware.

If you work for an organization that refers to its main business application as “the black box” (nobody knows what happens inside, most of the logic was never documented, and everyone is clueless about what happens from the moment a request enters the application until a response comes out), and you are tasked with converting this business application into a cloud-ready set of applications, then you may be in for a very long and bumpy ride.


Over time, new features and improvements added to the code's complexity, making development more challenging - loading, compiling, and building times increase with every new update. However, there is some ease in administration, as the application runs on a single server, typically a Virtual Machine or a mainframe.

A monolith has a rather expensive taste in hardware. Being a large, single piece of software which continuously grows, it has to run on a single system which satisfies its compute, memory, storage, and networking requirements. Hardware of such capacity is both complex and extremely pricey.

Since the entire monolith application runs as a single process, the scaling of individual features of the monolith is almost impossible. It internally supports a hardcoded number of connections and operations. However, scaling the entire application can be achieved by manually deploying a new instance of the monolith on another server, typically behind a load balancing appliance - another pricey solution.

During upgrades, patches or migrations of the monolith application, downtime is inevitable, and maintenance windows have to be planned well in advance, as disruptions in service are expected to impact clients. While there are third-party solutions that minimize customer downtime by setting up monolith applications in a highly available active/passive configuration, they introduce new challenges for system engineers, who must keep all systems at the same patch level, and they may introduce additional licensing costs.



Thursday, December 2, 2021

Kubernetes Secrets


What are Kubernetes Secrets?

Introduction to K8s Secrets

The concept of Secrets refers to any type of confidential credential that requires privileged access with which to interact. These objects often act as keys or methods of authentication with protected computing resources in secure applications, tools, or computing environments. In this article, we are going to discuss how Kubernetes handles secrets and what makes a Kubernetes secret unique.

Why are Secrets Important?

In a distributed computing environment it is important that containerized applications remain ephemeral and do not share their resources with other pods. This is especially true in relation to PKI and other confidential resources that pods need to access external resources. For this reason, applications need a way to query their authentication methods externally without being held in the application itself.

Kubernetes offers a solution to this that follows the path of least privilege. Kubernetes Secrets act as separate objects which can be queried by the application Pod to provide credentials to the application for access to external resources. Secrets can be accessed by Pods only if they are explicitly mounted as a volume, exposed as environment variables, or used by the kubelet when pulling the image for the Pod.

How Does Kubernetes Leverage Secrets?

The Kubernetes API provides various built-in secret types for a variety of use cases found in the wild. When you create a secret, you can declare its type by leveraging the `type` field of the Secret resource, or an equivalent `kubectl` command line flag. The Secret type is used for programmatic interaction with the Secret data.

Ways to create K8s Secrets

There are multiple ways to create a Kubernetes Secret.

Creating a Secret via kubectl

To create a secret via kubectl, you’re going to want to first create text files to store the contents of your secret, in this case username.txt and password.txt:

echo -n 'admin' > ./username.txt
echo -n '1f2d1e2e67df' > ./password.txt

Then you’ll want to leverage the kubectl create secret command to package these files into a Secret, with the final command looking like this:

kubectl create secret generic db-user-pass \
  --from-file=./username.txt \
  --from-file=./password.txt

The output should look like this:

secret/db-user-pass created

Creating a Secret from config file

You can also store your secure data in a JSON or YAML file and create a Secret object from that. The Secret resource contains two distinct maps:

  • data: used to store arbitrary data, encoded using base64
  • stringData: allows you to provide Secret data as unencoded strings

The keys of `data` and `stringData` must consist of alphanumeric characters, `-`, `_` or `.`.

For example, to store two strings in a Secret using the data field, you can convert the strings to base64 as follows:

echo -n 'admin' | base64

The output should look like this:

YWRtaW4=

And for the next one:

echo -n '1f2d1e2e67df' | base64

The output should look similar to:

MWYyZDFlMmU2N2Rm

You can then write a secret config that looks like this:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
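You can verify that the encoded values round-trip correctly without a cluster by decoding them again:

```shell
echo 'YWRtaW4=' | base64 --decode          # prints: admin
echo 'MWYyZDFlMmU2N2Rm' | base64 --decode  # prints: 1f2d1e2e67df
```
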

Creating a secret using kustomize

You can also generate a Secret object by defining a secretGenerator in a kustomization.yaml file that references other existing files. For example, the following kustomization file references the ./username.txt and ./password.txt files:

secretGenerator:
- name: db-user-pass
  files:
  - username.txt
  - password.txt

Then apply the directory containing the kustomization.yaml to create the Secret:

kubectl apply -k .

The output should look similar to this:

secret/db-user-pass-96mffmfh4k created

Editing a Secret

You can edit an existing Secret with the following command:

kubectl edit secrets mysecret

This command opens the default configured editor and allows for updating the base64 encoded Secret values in the data field:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: { ... }
  creationTimestamp: 2016-01-22T18:41:56Z
  name: mysecret
  namespace: default
  resourceVersion: "164619"
  uid: cfee02d6-c137-11e5-8d73-42010af00002
type: Opaque

How to Use Secrets

Secrets can be used in a variety of ways, such as being mounted as data volumes or exposed as environment variables to be used by a container in a Pod. Secrets can also be used by other parts of the system, without being directly exposed to the Pod. For example, Secrets can hold credentials that other parts of the system should use to interact with external systems on your behalf.

Using Secrets as environment variables

To use a secret in an environment variable in a Pod, you’ll want to:

  1. Create a secret or use an existing one. Multiple Pods can reference the same secret.
  2. Modify your Pod definition so that each container that needs the secret has an environment variable for each secret key you wish to consume. The environment variable should reference the secret’s name and key in env[].valueFrom.secretKeyRef.
  3. Modify your image and/or command line so that the program looks for values in the specified environment variables.

This is an example of a Pod that uses secrets from environment variables:

apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username
      - name: SECRET_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: password
  restartPolicy: Never
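Inside the container, the program simply reads ordinary environment variables; it does not need to know they came from a Secret. This can be simulated outside a cluster with the same literal values (the kubelet would normally inject these):

```shell
# Simulate the variables the kubelet would inject from the Secret
export SECRET_USERNAME='admin'
export SECRET_PASSWORD='1f2d1e2e67df'
echo "connecting as $SECRET_USERNAME"   # prints: connecting as admin
```
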

Using Immutable Secrets

Kubernetes provides an option to set individual Secrets as immutable. For clusters that extensively use Secrets (at least tens of thousands of unique Secret to Pod mounts), preventing changes to their data has the following advantages:

  • Protects you from accidental (or unwanted) updates that could cause application outages.
  • Improves the performance of your cluster by significantly reducing load on kube-apiserver, since watches for Secrets marked as immutable are closed.

This feature is controlled by the ImmutableEphemeralVolumes feature gate, which is enabled by default since v1.19. You can create an immutable Secret by setting the immutable field to true. For example:

apiVersion: v1
kind: Secret
metadata:
  ...
data:
  ...
immutable: true

Built-in Secret Types

  • Opaque Secrets – The default Secret type if omitted from a Secret configuration file.
  • Service account token Secrets – Used to store a token that identifies a service account. When using this Secret type, you need to ensure that the `kubernetes.io/service-account.name` annotation is set to an existing service account name.
  • Docker config Secrets – Stores the credentials for accessing a Docker registry for images.
  • Basic authentication Secrets – Used for storing credentials needed for basic authentication. When using this Secret type, the `data` field of the Secret must contain the `username` and `password` keys.
  • SSH authentication Secrets – Used for storing data used in SSH authentication. When using this Secret type, you will have to specify an `ssh-privatekey` key-value pair in the `data` (or `stringData`) field as the SSH credential to use.
  • TLS Secrets – For storing a certificate and its associated key that are typically used for TLS. This data is primarily used with TLS termination of the Ingress resource, but may be used with other resources or directly by a workload. When using this type of Secret, the `tls.key` and `tls.crt` keys must be provided in the `data` (or `stringData`) field of the Secret configuration.
  • Bootstrap token Secrets – Used for tokens used during the node bootstrap process. It stores tokens used to sign well known ConfigMaps.
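kubectl has dedicated subcommands for creating several of these typed Secrets directly. The file paths and names below are placeholders:

```shell
kubectl create secret docker-registry regcred \
  --docker-server=myregistry.example.com \
  --docker-username=user --docker-password=pass   # Docker config Secret
kubectl create secret tls my-tls \
  --cert=path/to/tls.crt --key=path/to/tls.key    # TLS Secret
```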

Conclusion

So now that you’ve had a brief introduction to what a secret is, you’re ready to learn how to use them in practice. Stay tuned to our blog where we’re going to be learning how to apply the concepts we learned here in a practical environment.
