Containerization of applications: the basics of Docker

Get to know Docker in a few simple steps

Introduction to Docker

1. What is Docker?

Docker is a containerization technology that allows you to create, manage, and run applications in isolated environments called containers. Containerization, in turn, is a method of packaging an application and its dependencies into one portable environment that can be run on any system that supports Docker. This makes applications more portable, scalable, and easier to manage.

2. What are the key features of Docker?

  1. Isolation - Each container runs in an isolated environment, so applications cannot communicate with each other unless we explicitly allow it, and they do not collide with other applications (e.g. no port mapping conflicts; see the example after this list).
  2. Portability - Containers can be run on different platforms and operating systems without having to change the code. The application migration itself is almost automatic.
  3. Performance - Docker containers, compared to virtual machines, are lightweight and efficient because they use the host kernel.
  4. Scalability - Docker makes it easy to scale applications both vertically (greater computing power) and horizontally (more application instances).
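
For example, the commands below (a minimal sketch using the public nginx image) start two isolated instances of the same application on different host ports, so their port mappings do not collide:

docker run -d --name web1 -p 8081:80 nginx   # first instance, published on host port 8081
docker run -d --name web2 -p 8082:80 nginx   # second instance, published on host port 8082 - no port conflict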

How does Docker work?

Docker uses container technology that leverages OS-level virtualization to run multiple isolated applications on a single physical machine. Containers are lightweight because they share the host operating system kernel, which distinguishes them from traditional virtual machines, each of which requires a full operating system.
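
You can check this kernel sharing yourself. Assuming Docker runs on a Linux host, the kernel version reported inside an Alpine container is the same as the host's:

uname -r                         # kernel version on the host
docker run --rm alpine uname -r  # the same kernel version reported from inside a container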

3. Advantages of using Docker

Docker has revolutionized the way we build, deploy, and manage applications, offering portability, scalability, and performance that were previously difficult to achieve. With Docker, developers can focus on writing code rather than managing infrastructure. Its main advantages include:

  • Fast Deployment - With Docker, applications can be quickly packaged and run in various environments, reducing deployment time.
  • Dependency management - Docker ensures that all application dependencies are contained in the container, eliminating issues related to differences in environments.
  • Performance - Docker containers, compared to virtual machines, are lightweight and efficient because they use the host kernel.
  • Better resource utilization - Containers are more resource efficient than traditional virtual machines, allowing for better use of available computing power.

Docker Basic Concepts

1. What is Docker Image?

A read-only template that contains everything needed to run the application: code, runtime, libraries, and system tools. Images are versioned and can be downloaded from repositories such as Docker Hub or GitHub Container Registry. The first stage of running an application in Docker is creating or downloading an image.
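
As a quick illustration, you can download an official image from Docker Hub and list the images stored locally:

docker pull nginx:alpine   # download the official nginx image (Alpine variant) from Docker Hub
docker images              # list the images available on the system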

2. What is a Docker Container?

A container is a running instance of a Docker image. It is an isolated environment that can be started, stopped, copied and deleted. Each container is an independent application. Importantly, Docker containers do not persist application state (e.g. database contents) on their own. If you delete a container that has no volumes attached, you will lose all data stored inside it.
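
A minimal sketch of the container lifecycle (using the public nginx image as an example); once the container is removed, anything written inside it is gone unless a volume was attached:

docker run -d --name web nginx   # create and start a container from the nginx image
docker stop web                  # stop the running container
docker rm web                    # remove it - files written inside the container are lost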

3. What is a Docker Volume?

A Docker Volume is a mechanism used to permanently store data generated and used by Docker containers. Volumes are managed by Docker and allow data to be separated from the container lifecycle. Thanks to this, data can be preserved even after the container is removed, which is crucial for the durability and reliability of the application.

We distinguish 3 types of Volumes:

  1. Managed Volumes
    These are volumes created and managed by Docker. They are stored in a special place on the host's file system
    (usually in the /var/lib/docker/volumes directory).
  2. Bind Mounts
    Using bind mounts, the user manually specifies a location on the host file system to be made available to the container. This allows you to access host files and directories from within the container.
  3. TMPFS Mounts
    These are volumes created in RAM. They are very fast, but not permanent. Data in tmpfs mounts is lost after a container or system restart.
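
Below is a minimal sketch of how each mount type can be attached when starting a container (the volume name and host path are only examples):

docker volume create app-data                                   # a managed volume created by Docker
docker run -d -v app-data:/usr/share/nginx/html nginx           # managed volume mounted into the container
docker run -d -v /home/user/site:/usr/share/nginx/html nginx    # bind mount: a host directory shared with the container
docker run -d --tmpfs /tmp nginx                                # tmpfs mount: fast, in-memory, lost when the container stops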

4. What is Dockerfile?

A text file containing a set of instructions that Docker uses to build the image. These instructions specify, among other things, which base image, libraries, and configuration settings are included in the image. It is a kind of "recipe" for the image, created by the application developer.

5. What is Docker Hub?

A public repository where you can store and share Docker images. Users can download ready-made images or upload their own. Unfortunately, the free plan offers only very limited support for private images, unlike GitHub Container Registry.
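
A typical Docker Hub workflow looks roughly like this (the user and image names below are placeholders):

docker login                                            # log in with your Docker Hub credentials
docker tag my-app:1.0 your-dockerhub-user/my-app:1.0    # tag the local image with your repository name
docker push your-dockerhub-user/my-app:1.0              # upload the image to Docker Hub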

How to install Docker?

Before you start installing Docker

Depending on your operating system, installing Docker requires different steps and components. For example, on Linux only Docker Engine is required, while on Windows it is also necessary to install Docker Desktop.

Details on installing the Docker environment can be found in the link below.

Docker Documentation

How to install Docker on Ubuntu?

Below is an example Docker installation on Ubuntu.

Uninstall old versions of Docker

for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done

Add official Docker GPG keys

sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

Add Docker Repository to apt sources

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

Docker Installation

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Verify correct installation

sudo docker run hello-world

The above command downloads the hello-world image and then runs it. If the installation was successful, you should see a confirmation message from the application.
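
Optionally, if you want to run Docker commands without sudo, the official post-installation steps add your user to the docker group (log out and back in for the change to take effect):

sudo groupadd docker            # the group usually already exists after installation
sudo usermod -aG docker $USER   # add the current user to the docker group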

Basic Docker Commands

Docker Cheat Sheet

Below are the basic commands that will cover the vast majority of your everyday work with Docker.

Docker System Commands

docker --help      Displays help for Docker commands
docker version     Displays the Docker version
docker info        Displays information about the Docker installation

Docker Image Management

docker pull [image_name]                   Retrieves an image from the Docker Hub repository
docker images -a                           Shows all images available on the system
docker build . -t [image_name:tag]         Builds an image and gives it a name and tag
docker push [repository/image_name:tag]    Pushes an image to the selected repository
docker rmi [image_id]                      Removes an image from the system

Docker Container Management

docker run [image_name]                     Runs a new container from the specified image
docker run -d [image_name]                  Runs a new container from the specified image (detached mode)
docker ps                                   Displays the list of running containers
docker ps -a                                Lists all containers
docker stop [container_id]                  Stops a container
docker rm [container_id]                    Removes a container
docker exec -it [container_id] /bin/bash    Opens a shell inside the container
docker exec -it [container_id] /bin/sh      Opens a shell inside the container (for environments without Bash)

How to safely configure a Docker image?

Why is secure Docker image configuration so important?

Secure Docker image configuration is crucial for several reasons that have a direct impact on the integrity, confidentiality and availability of the application and the data it contains. Here are the most important reasons to keep your Docker images secure:

  1. Protection against hackers
    Docker images do not differ in use from regular applications. They may be vulnerable to various types of attacks, such as malware, man-in-the-middle attacks, and exploitation of vulnerabilities in operating systems and applications. Secure configuration minimizes the risk of such attacks by ensuring current and verified software versions.
  2. Application isolation
    The main benefit of using Docker technology is application isolation. Docker containers are designed to isolate applications from the host system and other containers. Improperly configured images can lead to data leaks between containers or allow privilege escalation to the host, which can threaten the entire system and, in extreme cases, even the cluster.
  3. Ensuring Data Integrity
    Data stored and processed by applications must be protected against unauthorized access and modifications. Secure image configurations help prevent data breaches and ensure that your data remains intact.
  4. Minimize attack surface
    Docker image minimization removes unnecessary packages and dependencies, reducing the attack surface. The fewer components in the image, the less chance that any of them will have security vulnerabilities. Always try to base your production image on minimal system images.
  5. Compliance with laws and regulations
    Many industries are subject to strict data protection and privacy regulations, such as GDPR and HIPAA. Securely configuring Docker images helps you meet these requirements, which is key to avoiding financial penalties and reputational damage.
  6. Makes management and scalability easier
    Secure images are easier to maintain and monitor. Automation of updates, patching and security audits is easier, which enables effective scaling of applications in production environments. It is also worth using all kinds of automatic tests in our CI/CD from Docker itself to exclude any shortcomings.

Practices for creating secure Docker images

Let's now move on to the basic and minimum practices for securing applications hosted in a Docker environment. Implementing the solutions below should largely reduce attack vectors against our application. Please note that the steps below apply only to the security of the Docker image and not the entire application. In order to maximize the security of our systems, it is worth using monitoring tools and proxies such as WAF (Web Application Firewall), which will automatically eliminate known attacks.

Which WAF should I choose?

Depending on the application you want to host, there are many open source and commercial Web Application Firewall solutions. The most important thing is to choose the right tool for a given application. For example, PHP-based applications will have different attack vectors than those based on Node.js. For hosting e.g. WordPress, it is worth considering ModSecurity WAF (via Nginx), while for Next.js applications it is worth considering Arcjet (via NPM). Of course, there is nothing stopping you from using several WAFs at the same time, but an inappropriate selection may lead to many false positives.

  1. Using official and trusted sources
    Only use official images available on Docker Hub or other trusted registries. Official images are regularly updated and checked for security vulnerabilities. Also remember to use official packages and libraries in your application.
  2. Software update
    Regularly update all libraries and operating system images in Docker. Make sure images are rebuilt with the latest package versions and security patches. Try to perform these operations relatively regularly.
  3. Image Minimization
    Create images containing only the components necessary to run your application. Use lightweight base operating systems such as Alpine Linux or Slim versions. Remember that before selecting an image, you can verify its security on Docker Hub. Each image has its own database of vulnerabilities, which you can read about on the publisher's subpage.
  4. Secret data management
    The contents and build history of a Docker image are easy to inspect. Avoid placing secrets such as passwords, API keys, and certificates directly in the Docker image. Use environment variables or secret management tools such as Docker Secrets.
  5. Setting appropriate permissions - Root Lock
    Avoid running processes inside the container as root. Create a dedicated user, switch to it with the USER instruction in the Dockerfile, and grant the container only the permissions it actually needs.
  6. Configure appropriate security policies
    Use security policies such as SELinux or AppArmor to further secure containers and limit their access to system resources.
  7. Regular security audits
    Regularly perform manual or automated security audits of your Docker images to detect and remediate potential security vulnerabilities. Use security scanning tools such as Docker Bench for Security and Docker Scout.
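
As an example of such an automated check, Docker Scout can list known vulnerabilities in a local image (assuming the Docker Scout CLI is available; the image name is a placeholder):

docker scout quickview my-app:1.0   # short vulnerability summary for the image
docker scout cves my-app:1.0        # detailed list of known CVEs found in the image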

Summary

Secure Docker image configuration is crucial to protect your applications and data from a variety of threats. Taking care of security not only avoids problems with data breaches, but also ensures the stability and reliability of applications in production environments. By implementing good practices such as image minimization, regular updates, secret data management, and security audits, you can significantly reduce the risks associated with using Docker technology.

How to create your first secure Dockerfile?

1. Dockerfile structure

A Dockerfile is a script that contains a set of instructions and commands that Docker uses to build an image. Each statement in a Dockerfile performs a specific operation, such as installing a package, setting an environment variable, or copying files. Below is a detailed description of the basic Dockerfile structure.

Dockerfile structure description
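
The most common instructions are illustrated in the minimal sketch below (a hypothetical Node.js application; adjust the base image, ports, file names and commands to your own project):

FROM node:20-alpine          # base image the new image is built on
WORKDIR /app                 # working directory inside the image
COPY package*.json ./        # copy dependency manifests first (better layer caching)
RUN npm install --omit=dev   # install only production dependencies
COPY . .                     # copy the rest of the application code
EXPOSE 3000                  # document the port the application listens on
USER node                    # run as a non-root user (see the security practices above)
CMD ["node", "server.js"]    # command executed when the container starts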

The above information presents the minimum knowledge necessary to create your first custom Docker image. Of course, the Dockerfile itself accepts many more instructions and options, which can be found in the Docker documentation.

Dockerfile at a glance...

Imagine that every time you want to deploy an application on a new server, you have to perform a series of activities to prepare the environment for your application. You must first download the necessary files, then install the libraries, configure the databases and then run the application so that it listens on a given port.

Performing these activities manually is a monotonous and unrewarding task, as we repeat the same steps over and over again. Docker comes to our aid and performs all these operations for us.
Treat the Dockerfile as an instruction for yourself in which you perform all the steps to run your application step by step. If you need to perform any action during deployment, it should also be included in the Dockerfile.

A properly constructed Dockerfile will repay you the time you would otherwise have to spend on each manual application deployment. In addition, your application will become incredibly easy to transfer between servers.

2. How to build an image from a Dockerfile?

To create an image from a Dockerfile, execute the following command (The terminal must be in the directory with the Dockerfile)

docker build . -t image-name:tag
EXAMPLE: docker build . -t jakubwojtysiak.online:v1.0

3. How to run a Docker image from the command line?

To run the previously created image from the Dockerfile, execute the following command

docker run -p IP_ADDRESS:LOCAL_PORT:CONTAINER_PORT/PROTOCOL IMG_ID
EXAMPLE: docker run -p 127.0.0.1:80:8080/tcp IMG_ID

The above command starts the Docker container on our local machine on port 80. The application in the container listens on port 8080. The container port 8080 has been mapped to local port 80.

If everything went well, your application should be available at 127.0.0.1:80 or localhost:80.
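
You can quickly verify this from the terminal (assuming curl is installed):

docker ps                   # confirm the container is running and check the port mapping
curl -I http://127.0.0.1/   # the application should respond on the mapped local port 80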

Do I have to specify port 80 in the URL?

Web applications listen on ports 80 and 443 by default. If you use the above mapping, you can simply enter 127.0.0.1 or localhost in the browser's address bar and the browser will automatically connect on port 80.

This happens because browsers connect to port 80 by default for http:// addresses, and to port 443 for https:// addresses when an SSL/TLS certificate is available.