The world of application development and deployment has been revolutionized by the advent of containerization technologies, with Docker leading the charge. Offering an efficient, lightweight, and portable solution, Docker has quickly become an essential tool for developers seeking to simplify and streamline their workflows.
In this all-encompassing guide, we’ll introduce you to the fundamentals of Docker, from understanding its core concepts to implementing advanced techniques that will elevate your application development process.
Whether you’re a complete novice or an experienced developer looking to refine your skills, this Docker guide will equip you with the knowledge and tools you need to harness the full power of containerization and transform the way you build and deploy applications.
How to Install Docker
Before moving forward, I recommend installing Docker on your computer (give it a try, it is free and open source).
- For Windows and macOS: download the installer from the Docker website: https://docs.docker.com/get-docker/
- For Linux:
- I am creating a public repository that summarizes Docker installation and app deployment
- You can also check the Docker docs
sudo apt-get update && sudo apt-get upgrade && curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh && docker version
# Verify the installation:
docker run hello-world
How to use Docker to Containerize your Application
- Create a Dockerfile: A Dockerfile is a script that contains instructions on how to build a Docker image for your application.
- The Dockerfile specifies: the base image, application dependencies, and configurations.
- If you work in data analytics, you will find the following structure useful for a Python app:
# Use an official Python runtime as a parent image
FROM python:3.11-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME=World
# Run app.py when the container launches
CMD ["python", "app.py"]
- Prepare the requirements.txt file: here you specify all the libraries that need to be installed on a new machine to run your (in this case, Python) application.
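As a sketch, a requirements.txt for a small data app might look like this (the specific libraries and versions are just an assumption for illustration):

```text
# Hypothetical dependencies for a small data app
pandas==2.1.4
requests==2.31.0
flask==3.0.0
```

Pinning exact versions keeps the image build reproducible across machines.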
- Build the Docker image: run the following command in the same directory as your Dockerfile:
docker build -t your_image_name .
- Run the Docker container: After building the image, create and start a Docker container using the docker run command:
docker run -d -p host_port:container_port --name your_container_name your_image_name
Replace host_port with a port on your host machine, container_port with the port your application listens on inside the container, and your_container_name with a name for your container.
- Test the containerized application: access the application in your browser or through a REST client using the host machine's IP address and the host_port you specified in the docker run command.
- You can try with http://localhost:host_port (or 0.0.0.0:host_port)
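For example, if you mapped host port 8080, a quick smoke test from the terminal might look like this (the port and container name are just examples; this assumes a running Docker daemon):

```shell
# Hypothetical smoke test: assumes the app was started with -p 8080:80
curl http://localhost:8080

# Check that the container is running and inspect its logs
docker ps --filter "name=your_container_name"
docker logs your_container_name
```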
- Push the image to a container registry (optional): if you want to deploy your containerized application on other systems or cloud platforms, push your Docker image to a container registry such as Docker Hub or Google Container Registry.
docker login
docker tag your_image_name registry_url/your_username/your_image_name:tag
docker push registry_url/your_username/your_image_name:tag
Managing Docker Containers
How to name a docker container using CLI
To name a Docker container using the command-line interface (CLI), use the --name flag with the docker run command. For example:
docker run --name my_container_name -d image_name
Docker Networking
Docker provides several networking options to connect containers with each other and with the host system. The most common networking options are:
- Bridge network: Default network type, allowing containers to communicate with each other and the host.
- Host network: Containers share the host’s network stack, offering better performance but less isolation.
- Overlay network: Allows containers running on different hosts in a Docker Swarm to communicate.
You can create custom networks using the docker network create command and attach containers to them.
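As a quick sketch, creating a user-defined bridge network and attaching two containers to it might look like this (the network, container, and image names are just examples; this assumes a running Docker daemon):

```shell
# Create a user-defined bridge network (example name)
docker network create my_network

# Run two containers attached to it; on a user-defined bridge
# they can reach each other by container name
docker run -d --name app --network my_network your_image_name
docker run -d --name db --network my_network postgres:16

# Inspect the network to see the attached containers
docker network inspect my_network
```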
Docker Volumes
Docker volumes are used to persist data generated by containers and to share data between containers. You can create a volume using the docker volume create command and use the -v flag to mount the volume to a container:
docker run -d -v my_volume:/path/in/container image_name
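The surrounding volume commands, sketched end to end (my_volume is just an example name; this assumes a running Docker daemon):

```shell
# Create a named volume
docker volume create my_volume

# List existing volumes and inspect where the data lives on the host
docker volume ls
docker volume inspect my_volume

# Remove it once no container uses it anymore
docker volume rm my_volume
```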
Restart Policy
Remember to include the restart statement in the Compose (.yml) file:
version: '3.8'
services:
  your_service_name:
    image: the_image_name
    container_name: your_container_name
    restart: unless-stopped
Or, if you already have a running container named 'your_container_name':
docker stop your_container_name
docker update --restart unless-stopped your_container_name
docker start your_container_name
CLI vs .YML
Docker CLI and .yaml files (typically Docker Compose files) are two ways to interact with Docker and manage containers.
- Docker CLI: Command-line interface to run, manage, and control Docker containers. Useful for simple tasks and quick container management.
docker run -d \
--name=webtop \
-e PUID=1000 \
-e PGID=1000 \
-e TZ=Etc/UTC \
-p 2000:3000 \
-p 2001:3001 \
-v /path/to/data:/config \
--restart unless-stopped \
lscr.io/linuxserver/webtop:debian-kde
- Docker configuration files - .yml / .yaml (Docker Compose): A YAML file format to define and configure multi-container applications, services, networks, and volumes. Ideal for complex applications requiring multiple containers to work together.
- By using Docker Compose, you can define the entire application stack in a single file, simplifying deployment and management.
version: "2.1"
services:
  webtop:
    image: lscr.io/linuxserver/webtop:debian-kde # the image source; here the linuxserver container registry is used
    container_name: webtop
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /path/to/data:/config
    ports:
      - 2000:3000
      - 2001:3001
    restart: unless-stopped
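With a docker-compose.yml like the one above in the current directory, the usual lifecycle commands are (assuming a running Docker daemon):

```shell
# Start the stack in the background
docker compose up -d

# Follow the logs of all services
docker compose logs -f

# Stop and remove the containers (named volumes are kept)
docker compose down
```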
Using Containers as Isolated Environments
Python Apps
I have run into situations where neither conda nor venvs managed my Python dependencies properly (you can blame it on me).
The solution I found was pretty simple: develop inside a Docker container.
For Python apps I just need to spin up this container:
version: '3'
services:
  my-python-app:
    image: python:3.11-slim
    container_name: python-dev
    command: tail -f /dev/null
    volumes:
      - python_dev:/app
    working_dir: /app # Set the working directory to /app
    ports:
      - "8501:8501" # e.g. so Streamlit apps are accessible
volumes:
  python_dev: # everything will be stored here
Then I just need to install the requirements inside the container with pip install.
That's pretty handy, because later I know exactly what to include in the requirements.txt file when building the final Docker image that encapsulates my app.
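Under that setup, the day-to-day workflow looks roughly like this (streamlit and pandas are just example packages; python-dev matches the container_name used above):

```shell
# Open a shell inside the running dev container
docker exec -it python-dev bash

# Inside the container: install packages as you need them
pip install streamlit pandas

# Snapshot the installed packages for the final image's requirements.txt
pip freeze > requirements.txt
```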
Node Apps
I also wanted to try some web development and discovered that many SSGs (static site generators) require Node plus dependencies.
No issues, we can use this container to develop:
version: '3'
services:
  my-node-app:
    image: node # latest
    container_name: node_gatsby_ssg # trying the Gatsby SSG
    working_dir: /app
    volumes:
      - node-app-data:/app
    ports:
      - "8008:8000" # Map host port 8008 to container port 8000
    command: tail -f /dev/null
volumes:
  node-app-data:
FAQ
How to Develop using Docker Containers
Making sure that dependencies stay consistent is essential in any development project. Docker provides two helpers for this: docker init scaffolds a Dockerfile and Compose file for your project, and docker compose watch syncs or rebuilds running services as you edit the source:
docker init
docker compose watch
version: '3'
services:
  mynodeapp:
    build:
      context: ./chatly-web
      dockerfile: docker/dev.Dockerfile
    ports:
      - "8008:8000" # Map host port 8008 to container port 8000
    develop:
      watch:
        - action: sync
          path: ./chatly-web
          target: /usr/src/app
          ignore:
            - node_modules/
        - action: rebuild
          path: ./chatly-web/package.json
    # command: tail -f /dev/null
# volumes:
#   node-app-data: