Aug 12, 2024

Docker Network Bugs and Fixes


Docker is great! Docker networks are grand. I know, controversial opinions! Docker and Docker networks are not easy to understand and often table-flippin-infuriating for newcomers. Trouble is they don’t get much easier for experienced users either.

I’m by no means a Docker expert. I’ve just made the mistakes so you don’t have to.

In this blog post I cover three common problems:

1. The Docker Footgun
2. Overlay Encryption
3. Private IP Ranges

The Docker Footgun

Any feature likely to lead to the programmer shooting themselves in the foot.

footgun — Wiktionary

A simple search for “Docker footgun” will show you just how many Docker developers are walking with a limp today. It’s a rite of passage.

Docker’s default behaviour is to bind ports to all interfaces. For example, say I run a database container on port 3306. By default Docker will bind that container to 0.0.0.0 — i.e. all IPv4 addresses on the machine. Any device on my local network has access. What if I’m running a web server with a public IP? The container is exposed to the entire internet. Docker networking will even bypass firewalls like UFW.
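To make the default concrete, here is a minimal illustration (hypothetical docker run commands, not taken from the post's compose file):

```shell
# Default: publishes 3306 on 0.0.0.0 — reachable by anything that can route to this host
docker run -d --name db-open -p 3306:3306 mariadb:latest

# Safer: bind the published port to loopback only
docker run -d --name db-local -p 127.0.0.1:3306:3306 mariadb:latest
```

The second form keeps the port off the network entirely; only processes on the host itself can connect.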

This “bug” was reported 10 years ago. Docker considers this a feature, not a bug. Technically, I would agree, but defaults matter. If something is so often misunderstood at the cost of security, perhaps it should be changed?

The Footgun Fix

The solution is Docker networks. If you search this problem online most answers stop there without so much as a “how” or “why”. I’ll explain with an example.

Below is a Docker compose file in which I have two containers:

networks:
  backend:

services:
  mariadb:
    image: mariadb:latest
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: password
      MYSQL_DATABASE: wordpress
    networks:
      - backend

  wordpress:
    image: wordpress:latest
    environment:
      WORDPRESS_DB_HOST: mariadb:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: password
      WORDPRESS_DB_NAME: wordpress
    networks:
      - backend
    ports:
      - "127.0.0.1:8080:80"

Look past the environment plumbing. You’ll see I’ve created a backend network. Without further config this is a bridge network. Both containers join this network. Docker networks use a private IP range. In my case Docker chose 172.18.0.0/16 for the backend network. I shouldn’t need to know but I can check:

# Docker compose prefixes the network name with the project directory
docker network inspect example_backend

You’ll also see I’m not binding any ports for the mariadb container. Without a ports entry Docker publishes nothing on the host; the container only has an IP on the backend network, so only other containers on that network can reach it. With this configuration I have no direct access to the database, even from my local machine.

I’ve configured wordpress with WORDPRESS_DB_HOST set to mariadb:3306. mariadb is the database container name and hostname. Docker has internal DNS to resolve it. I could change this to 172.18.0.2:3306 but that is fragile. Docker network IPs can change unless you do a lot of explicit configuration.
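Both behaviours can be checked from the command line (assuming the compose project is running; getent ships in the Debian-based wordpress image):

```shell
# On the host: nothing is listening on 3306
nc -zv 127.0.0.1 3306   # connection refused

# Inside the network: the mariadb hostname resolves via Docker's internal DNS
docker compose exec wordpress getent hosts mariadb
```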

Finally, you’ll see I am specifying a port for wordpress. I’m binding external port 8080 to port 80 inside the container. I’m also explicitly binding to localhost. This has fixed the insecure† defaults and I’ve successfully avoided shooting myself in the foot!

† Obviously do not use “password” — I don’t, I swear…

If this were a public server I might use a second network.

Here is a reduced example:

networks:
  backend:
  frontend:

services:
  mariadb:
    networks:
      - backend
  wordpress:
    networks:
      - backend
      - frontend
  caddy:
    image: caddy:latest
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
    networks:
      - frontend
    ports:
      - "443:443"

I’ve skipped a lot of config for clarity. Now the wordpress container is not exposing ports but instead attached to a second frontend network. The frontend network is shared with the caddy container. Caddy server can reverse proxy WordPress and handle TLS certificates.

I can configure Caddyfile like so:

example.com {
  reverse_proxy wordpress:80
}

The Docker internal DNS resolves wordpress to its IP on the frontend network. Because caddy is not on the backend network it cannot access the mariadb container itself. That’s good because only wordpress needs database access. The caddy container exposes port 443 because I want that publicly accessible.
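One way to see the isolation from inside the caddy container (busybox nslookup ships in the Alpine-based caddy image — an assumption worth checking against your image):

```shell
# caddy shares the frontend network, so wordpress resolves
docker compose exec caddy nslookup wordpress

# caddy is not on the backend network, so mariadb should not resolve
docker compose exec caddy nslookup mariadb
```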

Isn’t that cool?

Overlay Encryption

Docker networks are not restricted to the same machine. It’s possible to create a Docker swarm. Swarms can use overlay networks that connect containers between devices on the same local network. Be careful though; encryption is opt-in — another footgun!

docker network create \
  --driver overlay \
  --opt encrypted \
  --attachable \
    my-encrypted-network
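To double-check the option took effect (the exact output format varies between Docker versions):

```shell
# The Options map should list the "encrypted" key
docker network inspect my-encrypted-network --format '{{.Options}}'
```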

Consider my earlier example. If the Caddy container were on machine A, it could provide an HTTPS-encrypted front end to the WordPress container on machine B. That TLS encryption from the web browser terminates at Caddy (A). The proxied traffic to WordPress (B) over the swarm overlay network is therefore unencrypted HTTP. It is susceptible to all the usual man-in-the-middle attacks. Overlay network encryption must be specifically configured to protect this traffic.

Without encryption any device listening in could theoretically steal the WordPress admin password. If the swarm stays within a trusted LAN then using unencrypted overlay networks isn’t so dangerous. But if you’re using a reverse proxy like Caddy or Traefik for HTTPS web GUIs, why not go the whole nine yards?

With docker compose you can mark a network as external if it was created elsewhere.

networks:
  my-encrypted-network:
    external: true

This network persists independently of the current project.

Private IP Ranges

I mentioned above that Docker networks use a private IP range. There are three private IP ranges designated by RFC 1918. They are:

- 10.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16

The latter is most commonly used for local home networks. Docker overlay networks use the 10.0.0.0/8 range. Docker bridge networks usually prefer the 172.16.0.0/12 range. However, bridge networks can also use 192.168.0.0/16.

I don’t know the circumstances but Docker suddenly started using 192.168.64.0/20 on my Raspberry Pi. This broke access to my local network. Obviously Docker should avoid any range that clashes. One unfortunate developer debugged a similar issue for 5 days. Five days! No wonder Docker is so divisive.

If you’re hit by this problem like me, one solution is to edit the Docker daemon config.

The file is located at /etc/docker/daemon.json and can be edited with:

vim /etc/docker/daemon.json

Configure the default address pool:

{
  "default-address-pools": [
    {
      "base": "172.16.0.0/12",
      "size": 24
    }
  ]
}

Adjust the base range and size as desired. With the settings above, Docker carves the 172.16.0.0/12 block into /24 subnets, one per network. Matthew Strasiotto has written: “The definitive guide to docker’s default-address-pools option”.

And see “How do I exit Vim?” if need be.

You’ll need to stop Docker and remove any offending networks with docker network rm [...]. I also had to use ip route del [...] to get rid of them before rebooting the Raspberry Pi. New bridge networks were created in the correct range when I restarted containers. If you’re not using Docker compose you’ll have to manually create and attach networks using the Docker CLI.
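For reference, the cleanup looked roughly like this (the network name is hypothetical; the route is the 192.168.64.0/20 range from my case — yours will differ):

```shell
# Remove the offending networks while the daemon is still running
docker network rm example_backend

# Stop Docker, then delete the stale route it left behind
sudo systemctl stop docker
sudo ip route del 192.168.64.0/20

# Restart (or reboot) and recreate containers;
# new networks come back in the configured range
sudo systemctl start docker
```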

There you go! Three problems; three solutions.

Docker is great. When it works. Remember that Docker defaults are not always the most secure.

Feedback and corrections 👉 @dbushell@fosstodon.org
