Your first docker container
So, you've set up Docker and want to start doing something with it. The problem that occurs at this point is: what exactly should you do?
Well, let me tell you the good news:
There are about a quadrillion Docker containers out there, and they're all just waiting for you to use them.
Have a look at the Awesome-Selfhosted list to get some inspiration.
Be it your own website or a note-taking app: there is so much to see and explore!
For the sake of this tutorial, and since you can use it later on, we will look at a simple webserver called Caddy.
A webserver can be used for many different applications; I will show you how to host a simple website locally on your computer. In the next blog entry, I will try to explain how to access that page from the outside.
Since we want to do everything in Docker, we have to set up the Caddy container first. So head over to the Caddy Docker page and have a look around!
You will notice that there are three distinct ways to start a Docker container:
1. CLI
Using the CLI is the easiest way. You copy a command, paste it into the command line and you're good to go!
For caddy this would be something like this:
docker run -d -p 80:80 \
-v $PWD/index.html:/usr/share/caddy/index.html \
-v caddy_data:/data \
caddy
Let's go over the important elements:
- `-p` defines which ports need to be open. In this case only 80, for HTTP.
- `-v` defines the volumes. Since a Docker container uses its own file system, independent from your host, it technically cannot read any files from your host system, and it will delete all its internal files after you've stopped it. For serving a webpage that isn't ideal, as we want to keep the page on our host system so we can edit it and so it stays around when we recreate the container. That's why the line `$PWD/index.html:/usr/share/caddy/index.html` maps the file `index.html` into your Caddy container. When you restart it, it will still be there and Caddy can read it.
- There's one other flag that is important: `-e`. These are environment variables, important for some containers. You will read about them at some point.
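As a quick illustration of `-e` (Caddy doesn't need it, but many images, for example the linuxserver.io ones, expect variables like PUID and PGID; this command is shortened and only meant to show the flag):

docker run -d \
  -e PUID=1000 \
  -e PGID=1000 \
  lscr.io/linuxserver/code-server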
All in all, this method is good for testing stuff, but it's neither recommended nor used by me. The reason is pretty simple:
You start the container using the command up top. And then you forget about it. After a few days or weeks, you change something and stop the container. Now you'll need that command again, since you want the same volumes, variables etc. So hopefully you've saved it somewhere, otherwise it's quite a hassle to get it back.
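If you've lost track of a container you started this way, the standard Docker CLI at least lets you find and stop it:

docker ps            # list running containers with their IDs and names
docker stop <id>     # stop a container by ID or name
docker rm <id>       # remove it once it's stopped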
2. Dockerfiles
I don't want to go into too much detail on dockerfiles; they're used to build images when you want to customise them to a certain degree. For example, imagine you want to make your own simple container that runs a very basic Linux plus some packages that you want to install. Then you'd create a dockerfile, choose a base image and add your packages as installation steps. For example, I've set up a code-server for my LaTeX documents using this dockerfile:
# Use the LinuxServer Code Server as the base image
FROM lscr.io/linuxserver/code-server:latest
RUN apt-get update && \
    apt-get install -y \
        latexmk \
        texlive texstudio \
        texlive-full \
        xzdec \
        python3-pip && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* && \
    python3 -m pip install setuptools && \
    python3 -m pip install pygments
# Set the default working directory
WORKDIR /workspace
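If you want to try something like this, you build the image from the folder containing the dockerfile and then run it. The tag `latex-code-server` is just a name I made up here, and 8443 is the port the linuxserver code-server image listens on:

docker build -t latex-code-server .
docker run -d -p 8443:8443 latex-code-server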
This is fine, but overkill for most applications, since most of the time you just want to start an existing image without customising it.
3. Docker compose
Ah, finally the good stuff.
Imagine, you have a web application that also uses a database. Now you want to connect those two together. You can set up both individually using docker cli or dockerfiles, but the connection between them is not clear. This is where docker compose comes into play.
A docker compose file is in .yaml format and has the following structure:
services:
  service1:
    morethingshere
  service2:
    morethingshere
  ...
You just list all the services that you want to connect and boom, all done in one file, very readable and nice and easy.
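To make the web-app-plus-database idea from above a bit more concrete, here is a minimal sketch; the service names are arbitrary, `myapp` is a made-up image and the DATABASE_URL variable is purely hypothetical, since every application expects its own settings:

services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme
    volumes:
      - db_data:/var/lib/postgresql/data
  app:
    image: myapp:latest             # hypothetical application image
    depends_on:
      - db
    environment:
      DATABASE_URL: postgres://postgres:changeme@db:5432/postgres
    ports:
      - "8080:80"
volumes:
  db_data: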
The docker compose file for caddy looks like this:
version: "3.7"
services:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - $PWD/Caddyfile:/etc/caddy/Caddyfile
      - $PWD/site:/srv
      - caddy_data:/data
      - caddy_config:/config
volumes:
  caddy_data:
    external: true
  caddy_config:
And again, here you'll find some familiar elements: ports and volumes. For the volume configuration, you can either use a named Docker volume or use host mappings like I described earlier. Read this post and make up your own mind. I will use host mappings since I've gotten accustomed to them.
So, my docker compose file for Caddy looks like this:
version: "3.7"
services:
  caddy:
    image: caddy:2.7.5
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - /data/docker/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /data/docker/caddy/site:/srv
      - /data/docker/caddy/data:/data
      - /data/docker/caddy/config:/config
      - /data/docker/caddy/caddyfiles:/caddyfiles
    networks:
      - proxy-net
networks:
  proxy-net:
    external: true
One additional thing you'll find in mine is the `networks` attribute. This is because I need to bring containers from different stacks together so that Caddy can reach them; there's a small sketch of how that works below. So the next step is to make your own first docker compose file.
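If you're curious: you create such a shared network once on the host with `docker network create proxy-net`, declare it as external in each stack (as in my file above), and every container attached to it can reach the others by service name. A minimal sketch of another stack joining the same network; the service and image names are made up:

services:
  someapp:
    image: someapp:latest     # hypothetical image, just to illustrate
    networks:
      - proxy-net
networks:
  proxy-net:
    external: true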
Choose a folder structure
As you'll probably run multiple containers of varying complexity, I'd strongly suggest you choose a good folder structure for the docker compose files and for the data of those containers.
As you can see in my docker compose file, I have everything mapped under /data, which is a mount for one of my SSDs. In there, I have two folders, docker and docker-compose. Some people put their docker compose files into /opt/docker/, others in their home directory. The possibilities are endless. The reason for my setup is ease of use: when anything happens to my system, I can rip out my system SSD, put in a new one, mount the /data SSD and start all my containers right back up. The SSD is mirrored, so I don't have to worry about data loss either. I'd at least recommend having two distinct folders: one for the data of the containers and one for the configuration (mostly your docker compose files).
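To give you a rough picture, my layout looks roughly like this (simplified; the caddy folders only appear once you've done the next steps):

/data
  docker/                  # persistent data of each container
    caddy/
      Caddyfile
      site/
      data/
      config/
  docker-compose/          # one folder per stack
    caddy/
      docker-compose.yaml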
So, now is the time to overthink the million options that you are facing and then make an instinctive choice that you'll most likely regret in a few weeks. At least if you're like me.
When you're done with that, go into your folder, create a folder for caddy and `touch docker-compose.yaml`.
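With my folder layout from above, that would be (adjust the path to wherever you keep your compose files):

cd /data/docker-compose
mkdir caddy
cd caddy
touch docker-compose.yaml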
Now you can use your favourite text editor (probably `nano` or `vim`) to open it.
Paste this configuration and change the marked values:
version: "3.7"
services:
  caddy:
    image: caddy:100.0.0 # change this to the newest version from the caddy docker page linked up top; for now you can also use ':latest'
    container_name: caddy
    restart: unless-stopped
    ports:
      - 80:80
    volumes:
      # change YOURFOLDER to the folder you chose. For me this would be /data/docker/caddy/[..]:/[..].
      - YOURFOLDER/Caddyfile:/etc/caddy/Caddyfile
      - YOURFOLDER/site:/srv
      - YOURFOLDER/data:/data
      - YOURFOLDER/config:/config
Now save this document and your docker config is done for now. Please, don't start the container yet!
Set up the Caddyfile
Now it's time to serve your first page. For that, we have to create a Caddyfile. There are many ways to configure Caddy; I've only used Caddyfiles and quite like them. You can read here about the different methods.
Make sure that there is no 'Caddyfile' entry in your directory already (check with `ls -la`). If there is, you've been naughty and already started the container. In that case, Docker has automatically created the Caddyfile as a directory. Just delete it using `rm -r Caddyfile/` and you should be good.
Now create a Caddyfile using your favourite text tool, for me that's `vim Caddyfile`, and enter this:
localhost:80 {
    respond "Hello, world!"
}
Now you're ready to start your container. So go on and do `docker compose up -d; docker compose logs -f` to start it and look at the logs immediately. When I start a container for the first time, I either do (1) `docker compose up` or (2) add the log command at the end like above. The problem with option 1 is that you will automatically end up in the logs and cannot exit them without stopping the container. That's why it's easier to do it with the long command.
If your logs look good, you exit them with Ctrl+C and type in `curl http://localhost`, which should give you a response of "Hello, world!".
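For reference, the exchange should look roughly like this:

$ curl http://localhost
Hello, world!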
If so, congrats. If not, happy troubleshooting!
Serve a webpage
You may remember that in our docker compose file we added the line
YOURFOLDER/site:/srv
The folder that you chose here will hold your page for now.
Go ahead, create your own little HTML page or copy mine:
<!DOCTYPE html>
<html>
  <head>
    <title>Caddyfile</title>
  </head>
  <body>
    <h1>Greetings from Caddy!</h1>
  </body>
</html>
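Save it as index.html inside the folder you mapped to /srv; Caddy's file_server serves index.html by default. With my paths from above, that would be:

vim /data/docker/caddy/site/index.html    # or YOURFOLDER/site/index.html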
Now you also have to change the Caddyfile to serve this webpage. This can be done using this config:
:80 {
    root * /srv
    file_server
}
To reload Caddy, you should be able to open a shell in the container and reload the configuration from there (the documented command is `caddy reload`; `caddy adapt` only converts the Caddyfile to JSON without applying it). This has never worked for me and I've been too lazy to figure it out, so I always restart the Caddy container using `docker compose restart`. Not the best solution, but it works.
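For completeness, the documented reload should look roughly like this from the host; the `--config` path matches the volume mapping in our compose file (I still just restart, as I said):

docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile
# or the lazy way:
docker compose restart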
Now `curl http://localhost` should return that page, and when you open http://127.0.0.1 in your browser, you should see your first webpage.
Yay
This concludes the setup of Caddy. Next time I will focus on serving another Docker container through Caddy. Until then!