Docker#

This guide is my personal take on what Docker is and includes some simple examples to illustrate the various parts of Docker.

Root access#

Docker generally requires root permission. To work as root (and avoid typing sudo all the time),

sudo -i

Alternatively, add your user to the "docker" group. To create the docker group and add your user:

# Create the docker group.
sudo groupadd docker
# Add your user to the docker group.
sudo usermod -aG docker $USER
# On Linux, you can also run the following command to activate the changes to groups:
newgrp docker
# Verify that you can run docker commands without sudo.
docker run hello-world

If you initially ran Docker CLI commands using sudo before adding your user to the docker group, you may see the following error, which indicates that your ~/.docker/ directory was created with incorrect permissions due to the sudo commands.

WARNING: Error loading config file: /home/user/.docker/config.json -
stat /home/user/.docker/config.json: permission denied

To fix this problem, either remove the ~/.docker/ directory (it is recreated automatically, but any custom settings are lost), or change its ownership and permissions using the following commands:

sudo chown "$USER":"$USER" /home/"$USER"/.docker -R
sudo chmod g+rwx "$HOME/.docker" -R

Docker Architecture#

There are four main parts to Docker - dockerfile > image > container > swarm.

  • Create a dockerfile
    • A dockerfile is a list of instructions used to build images.
    • It typically starts from an operating system or a pre-existing image (i.e. the base image).
  • Create a docker image
    • An image is like a black box with predefined input and output ports.
    • It is built from a dockerfile.
    • Alternatively, you can create an image from a running container using docker commit.
    • You can also combine both methods: first create an image by committing a running container, then use it as the base image in a dockerfile.
  • Run a container
    • A container is an instance of an image where its inputs and outputs have been defined. These options are defined either on the CLI or in a docker-compose file.
    • Once a container starts, it processes its inputs and generates outputs.
  • Run a swarm
    • A swarm is a collection of containers meant to work with one another.
    • In simple cases it is typically defined with a docker compose file; it can also be defined with docker stack.
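For example, the dockerfile > image > container flow described above can be exercised directly from the CLI; the names myimage and mycontainer below are placeholders:

# build an image from the dockerfile in the current directory
docker build -t myimage .
# start a container from that image
docker run -d --name mycontainer myimage
# snapshot the running container back into a new image
docker commit mycontainer myimage:snapshot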

Create Dockerfile#

A basic dockerfile is made up of a few simple instructions.

  1. The base image: FROM python:3.7
  2. The working directory: WORKDIR /path
  3. A predefined volume in the container (to be mounted at runtime): VOLUME /path
  4. Copying files from the local host into the container: COPY <local file> <container directory>
    • Use COPY instead of ADD. COPY is more explicit, and the extra features of ADD (fetching URLs, auto-extracting archives) can be reproduced with RUN.
  5. The steps to run while building the image: RUN <shell command>
  6. Exposing a port: EXPOSE 8000. Like VOLUME, EXPOSE only designates a container port that is meant to be reachable from outside; the exposed port still needs to be mapped to a host port at runtime.
  7. The command to run when a container is started from the image: CMD <shell command>
    • Note that unlike RUN, there should only be one CMD in the dockerfile.
    • If there are multiple CMD lines, only the last one takes effect.
    • An ENTRYPOINT can also be defined instead of, or alongside, CMD.
  8. To create a container that runs like an executable:
    • Set up CMD and ENTRYPOINT. Check this guide to understand the difference; a minimal sketch is also shown after the example below.

Below is an example of a dockerfile for setting up an mkdocs server.

FROM python:3.7

#======================
## Build / Setup
#======================
WORKDIR /usr/src/app
# Create a mount point for the host directory
# /docs will hold the mkdocs documentation source
VOLUME ["/docs"]
# Copy basic requirements.txt for mkdocs
COPY ./requirements.txt /config/
# Pip install basic requirements.txt
RUN pip install --no-cache-dir -r /config/requirements.txt
EXPOSE 8000

#======================
## Docker Run
#======================
CMD pip install --no-cache-dir -r /docs/requirements.txt; \
cd /docs; mkdocs serve --dev-addr=0.0.0.0:8000
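
The dockerfile above uses CMD alone. For an executable-style container (item 8 above), ENTRYPOINT and CMD can be combined; here is a minimal sketch, assuming a hypothetical entrypoint.py script:

FROM python:3.7
WORKDIR /app
# entrypoint.py is a hypothetical script copied into the image
COPY ./entrypoint.py /app/entrypoint.py
# ENTRYPOINT always runs; CMD supplies default arguments,
# which are overridden by anything passed after the image name
ENTRYPOINT ["python", "/app/entrypoint.py"]
CMD ["--help"]

Running docker run <image> --verbose would then pass --verbose to entrypoint.py instead of the default --help.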

Create Image from Dockerfile#

  • -f specifies the name of the dockerfile
  • --tag specifies the tag to be assigned to the image
  • . points to the directory that contains the dockerfile.
docker build -f mkdocs.dockerfile --tag macadish/mkdocs .
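
After the build completes, the new image should appear in the local image list:

docker image ls macadish/mkdocs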

Run Image#

# -d: detached mode
# -it: opens a tty
# --name: sets a name for the container
# -v: mounts a local host directory onto the container volume (read-only here)
# -p: <host port>:<exposed container port>
docker run \
  -d \
  -it \
  --name mkdocs \
  -v /volume2/nextcloud/NCdata/macadNC/files/Notebook/General:/docs:ro \
  -p 8089:8000 \
  macadish/mkdocs

Example: Ubuntu#

To run a container in the foreground,

sudo docker run -it <image> /bin/bash

To run a container in the background,

sudo docker run -it -d <image>
# with a name
sudo docker run -it -d --name pulse1 ubuntu
# -it opens a tty, which enables /bin/bash

The difference between exec and run:

# run creates and starts a new container from an image
sudo docker run -d -it <image> /bin/bash

# exec runs a command in a container that is already running
sudo docker exec -it <container> /bin/bash

To list all containers, including stopped ones,

sudo docker ps -a

To clean up containers,

# remove all stopped containers
docker container prune

# stop a running container
docker container ls # get the container id
docker stop <id>
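A stopped container can then be deleted individually with docker rm:

docker rm <id>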

To transfer files between a container and the host,

sudo docker cp <container>:<file> <destination> # container to host
sudo docker cp <file> <container>:<destination> # host to container

To mount a host directory or a volume in the container,

# Bind mount to a local directory
sudo docker run -d \
  --name devtest \
  --mount type=bind,source={path},target=/app \
  ubuntu
# Mount volume in /var/lib/docker/volumes, created with docker volume create
sudo docker run -d \
  --name devtest \
  --mount type=volume,source=myvol2,target=/app \
  ubuntu
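
The named volume myvol2 used above can also be created and inspected explicitly beforehand:

docker volume create myvol2
docker volume ls
docker volume inspect myvol2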

Network#

For containers to connect to one another, they should be on the same docker network. Each compose file comes with its own network, so containers created from different docker-compose files cannot communicate with one another by default. To connect containers initialized from different compose files, set up a shared docker network explicitly.

  • First, create the network in a docker-compose.yml file. We need to define both Name 1 and Name 2 for the network.
  • For each service defined in the same .yml, reference Name 1.
# truncated docker-compose.yml for traefik
version:...
services:
  traefik:
    # The official v2 Traefik docker image
    image: traefik:v2.1
    container_name: traefik
    # Enables the web UI and tells Traefik to listen to docker
    restart: always
    networks:
      - traefik-web # Name 1
...

# Create a docker network in bridge mode. Note the 2 names
networks:
  traefik-web: # Name 1. It is referenced by the services defined in this compose file.
    name: traefik-web # Name 2. This is the global network name referenced by services from other compose files. It can be different from Name 1.
  • For services defined in a different .yml, reference an existing network by adding the option external: true.
  • The name of the network is Name 2 as defined in the previous .yml file.
# truncated docker-compose.yml for portainer
version:...
services:
  portainer:
    # A dashboard for docker containers
    image: portainer/portainer
    container_name: portainer
    networks:
      - traefik-web # Name 2

networks:
  traefik-web: # Name 2
    external: true
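
Once both stacks are running, membership of the shared network can be checked (traefik-web is the global name from the examples above):

docker network ls
docker network inspect traefik-web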

Backup#

For boot drive backup, check out this link

rsync -aAXv / --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /target/drive

Troubleshoot#

For Traefik-related issues, refer to the Traefik documentation.