No matter where you are in your web development journey, there is a strong chance you have heard of Docker. Docker has grown from humble beginnings, as an internal tool for a web hosting company, to become a core tenet of modern web development. But what is Docker?
In this post I will explain exactly what Docker is with a fun analogy. Then we will run through a small example to show you its power and why it's worth your time to learn. From keeping a consistent developer experience to making deployments easier, there are a number of benefits to using it.
Now, let's dive in.
Docker Explained
Put simply, Docker is a way for you to make sure your web application has all the things it needs to be able to run, no matter what computer you put it on. Docker means consistency.
If the computer in question has the Docker runtime installed, then it can run your app.
In fact, this idea has its own name:
Containerisation
Docker is just the most popular flavour of containerisation.
So what is containerisation?
When you write software, you rely on lots of other code. Even if you write everything yourself and pull in no libraries or frameworks, you are still relying on the operating system having everything you need to run your application.
This is true of all software, be it an iPhone app or a server application running in the cloud. This is why you see descriptions in App Stores say they need at least iOS 15, or Windows 11. The code inside the application depends on the operating system having certain capabilities that aren't available in earlier versions.
When you are developing your own apps this sort of thing can become quite the pain in the ass, because your software is dependent not only on things in the operating system but also on other things you add into the mix, like environment variables and specific settings.
All these requirements can make it difficult to know exactly what setup is required to make an application run somewhere new. And unless you are making a little tool for yourself, your code will eventually need to run somewhere else!
Containerisation makes all of these problems go away.
Let's look at an analogy for containerisation.
In my garage I am fortunate enough to have a pretty good home gym. Squat rack, barbells, pull up bar, dumbbells, all sorts of things. No matter the gym session I want to do I can go in there and have a great workout.
In no single session do I use all the equipment I have. In fact, some things will go weeks between uses; they aren't required for every session, but eventually everything gets used.
In this example you can think of my garage as my own computer. Everything I need, no matter what I'm doing, customised in any number of ways that I can't remember.
Now, let's say I want to go and do a specific workout somewhere else. I'm going away with family and I want to be able to have a good workout everyday.
I don't need every item in my gym to have a good workout; I just need certain items to take with me. So, I grab some kind of container, like a big box, and put inside it all the items I will need. Let's grab:
- two sets of dumbbells (heavy and light)
- two resistance bands
- a single weight plate
- a skipping rope.
Not much, but plenty for a good, specific workout. In fact, I'm going to put a printout of the workout in the container as well. Also, let's write on the outside of the container where to start:
Start workout with the first exercise in the instructions
Now, so long as I have that container with me I can always do the workout in the box.
In fact, since all of the dependencies are in the container, anyone who knows how to open the box and read the instructions could do it, anywhere in the world, and they would have the same great workout as me.
Well, that's exactly what Docker is.
In this analogy:
- The box is the Docker container
- The workout items are the dependencies your code needs
- The workout instructions are your code
- The command on the outside of the box is the start command for your application
- And "any person who can read" is the equivalent of a computer with the Docker runtime installed on it
Docker NodeJS Example
Firstly we need a simple application to run. Let's whip up a small NodeJS server really quickly. Create a new directory anywhere on your machine:
mkdir dockerExample
Go into that directory and run:
cd dockerExample
npm init
This example assumes you have NodeJS installed; if you don't, head to the NodeJS website and install the LTS version for your system.
Hit enter to set all the defaults.
We are going to make a simple server so we'll use Express. Run:
npm i express
Now create an index.js file in your project
And add the following code:
const express = require("express");

const app = express();
const port = 3001;

(async () => {
  app.get("/", async (req, res) => {
    res.send("Hello from Docker");
  });

  app.listen(port, () => {
    console.log(`Listening on port ${port}`);
  });
})();
You can now run this small server from your terminal with:
node index.js
You will get a message in your terminal like the below:
Listening on port 3001
and if you go to http://localhost:3001 in a web browser you will see the following message:
Hello from Docker
Which is a bit of a lie because right now this is not yet in Docker.
Adding Docker
First we need to install Docker on your machine. Go to the Docker website and select the correct download for your operating system.
Once it's downloaded and you've launched the Docker app, we can get going.
Dockerfile vs image vs container
Now that you have Docker running you can run Docker containers. As I said above, these are little containers that run your code in a portable, isolated way.
But before you get to a container there are two other things you need to know about: a Dockerfile and a Docker image.
- A Dockerfile is a file you write that lays out what you want your Docker container to be.
- A Docker image is all the code and dependencies bundled up, ready to run whenever you need it. Like any other application on your computer, Docker will grab this and turn it into a running container when you want it.
Let's start with the file.
Dockerfile
Add a file to your project, in the same directory as index.js, called Dockerfile (no extension, just Dockerfile) and add the following contents:
FROM node:16.15.0
WORKDIR /var/www/
COPY . .
RUN npm install
RUN chown -R node:node .
USER node
CMD [ "node", "index.js" ]
Don't panic! I'm going to explain all this line by line.
- Line 1: here we are saying we want to start from another image and add to it. Docker has a big registry of images that you can use as the basis for your own. In this case we ask for a specific version of the node image, so what we get is a Linux machine with Node version 16.15.0 installed on it.
- Line 2: remember that this is a little virtual computer we are running, so it has a file structure just like your computer. Here we set the directory we want to work in to /var/www/.
- Line 3: when your container is running it won't know anything about your computer; it's totally isolated. While turning your Dockerfile into an image, however, Docker does know about your computer. COPY is a Docker command that copies files from your computer into the image. Here we say "copy the contents of the current directory on my computer into the current working directory of the image". So if you are in dockerExample when you run the build, everything in that directory is copied into /var/www/ in the Docker image. This is how your code gets into the container.
- Line 4: the RUN command lets you execute a command during the image's build process. In this case we install any dependencies our code has, just like we did locally to download Express.
- Line 5: for improved security we will switch from the root super user (which is what you are inside the Docker image at the moment) to the node user, which has far fewer privileges. We use the Linux command chown to give the node user ownership of this directory so it can run the code.
- Line 6: we use the USER command to switch to that node user.
- Line 7: finally we specify the command that will be run when the container is launched. It's the same command we were running on our local machine in the last section, so when the container comes to life it launches our code automatically.
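One side note, and an assumption on my part rather than something this example strictly needs: COPY . . copies everything in the directory, including node_modules if you have already run npm install locally. A .dockerignore file next to the Dockerfile tells Docker to skip those files when copying:

```
# .dockerignore -- files Docker should leave out of the build context
node_modules
npm-debug.log
```

Since the Dockerfile runs npm install itself, the image rebuilds node_modules anyway, so leaving it out of the copy just makes builds faster.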
With that saved to your Dockerfile you can now build the image.
In your terminal (making sure you are in the directory with the Dockerfile in it) run:
docker build . -t example
This tells Docker to build an image based on the Dockerfile in the current directory (that's what the . at the end is for). The -t example part tags the image with the name example, which will be helpful in a minute.
You should see an output like this:
[+] Building 2.0s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/node:16.15.0 1.8s
=> [1/5] FROM docker.io/library/node:16.15.0@sha256:1817bb941c9a30fe2a6d75ff8675a8f6def408efe3d3ff43dbb006e2b534fa14 0.0s
=> [internal] load build context 0.1s
=> => transferring context: 27.64kB 0.0s
=> CACHED [2/5] WORKDIR /var/www/ 0.0s
=> CACHED [3/5] COPY . . 0.0s
=> CACHED [4/5] RUN npm install 0.0s
=> CACHED [5/5] RUN chown -R node:node . 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:e263406aa4b0dee204b2043acclkjb7a890edf46f29dfe9b41af8bf2a0ebf8 0.0s
=> => naming to docker.io/library/example 0.0s
Which means it worked! 🥳
So now you have an image that Docker can run.
Docker image
Go to your terminal and type:
docker image list
and you will see something like this:
REPOSITORY TAG IMAGE ID CREATED SIZE
example latest e263406aa4b0 15 minutes ago 913MB
Look at the size! 913MB for a little server that replies with a string? Remember, though, that this is a version of Linux with Node installed on it that also holds your code. There are ways of reducing this size, but for this introduction to Docker it's fine for now.
Notice also the example at the start; that's the name we gave it with -t above.
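As a taste of those size reductions (a sketch only, and not something we'll use in the rest of this post): the official node image also comes in a slimmer Alpine Linux variant, and often all it takes is changing the first line of the Dockerfile:

```dockerfile
# Swap the base image for the Alpine-based variant to shrink the image.
# The -alpine tags are published on Docker Hub alongside the full images.
FROM node:16.15.0-alpine
```

Alpine images leave out most of the Linux tooling the full image carries, which is exactly why they are smaller, and occasionally why a dependency with native code needs extra packages installed.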
Docker container
We have an image now, but it isn't running. We need to "spin it up" into a container. So let's do it.
Remember above where I said that the container has no idea about your machine? That's true, but like any computer you can open up network connections to it, and that's what we are going to do here. When we spin up this container we are going to tell it to open a network port so we can send it messages from our browser.
In your terminal run this command:
docker container run -p 3001:3001 example
Here we are telling Docker to run a container with a link from our computer's port 3001 to the container's port 3001. The final argument is the image name (the example tag from earlier).
If you get an error like this:
docker: Error response from daemon: Ports are not available: exposing port TCP 0.0.0.0:3001 -> 0.0.0.0:0: listen tcp 0.0.0.0:3001: bind: address already in use.
It's because you left your server running from when we ran it locally. Go to that terminal window and press Ctrl + C to end the process, then rerun your docker container run command.
All being well you should see:
Listening on port 3001
in your terminal. To test it, visit http://localhost:3001 in your web browser to see the output again. This time the hello really is from Docker 🐳
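One aside before we look at the running container: started this way, the container ties up your terminal. Docker's -d (detached) and --name flags, which the rest of this post doesn't rely on, let you run it in the background under a name you choose (hello-docker here is just a name I made up):

```
# Run the container in the background and give it a memorable name.
docker container run -d -p 3001:3001 --name hello-docker example

# Later, stop it by name instead of hunting for the container ID.
docker stop hello-docker
```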
View running Docker containers
To see this container running open a new terminal window and type:
docker container list
and you will get this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
672d7e57f9ab example "docker-entrypoint.sā¦" 4 minutes ago Up 4 minutes 0.0.0.0:3001->3001/tcp epic_spence
and you can see your example image being used for this container.
Want proof that this is really doing something? Copy the CONTAINER ID from the output above and run this command in a terminal window:
docker stop 672d7e57f9ab
replacing the id I have above with whatever yours is. This will take a few seconds. Once done you can run docker container list again to see no containers running. Try that web address again and it won't work either.
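By the way, a stopped container isn't deleted; Docker keeps it around in case you want to start it again. If you'd like to tidy up (optional, and not needed for anything below), you can list and remove stopped containers:

```
# List every container, including stopped ones.
docker container list -a

# Remove a stopped container by its ID (substitute your own ID).
docker container rm 672d7e57f9ab
```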
To show you that it really is Docker running your code run this command:
docker container run -p 3002:3001 example
This links your computer's port 3002 to the container's port 3001 (the one your app is listening on).
The startup message will still be Listening on port 3001, because that message comes from inside the Docker container.
Now go to http://localhost:3001 in your browser. You won't get anything. Now go to http://localhost:3002 and you will see your "Hello from Docker" message.
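The 3001 inside the container is fixed because we hard-coded it in index.js. A common refinement, sketched below with a hypothetical resolvePort helper that isn't part of this post's example, is to read the port from an environment variable so the environment, not the code, decides where to listen:

```javascript
// Hypothetical helper: choose a port from an environment object,
// falling back to a default when PORT is missing or not a number.
function resolvePort(env, fallback) {
  const parsed = Number.parseInt(env.PORT, 10);
  return Number.isNaN(parsed) ? fallback : parsed;
}

// In index.js you could then write:
//   const port = resolvePort(process.env, 3001);
console.log(resolvePort({}, 3001)); // falls back to 3001
console.log(resolvePort({ PORT: "8080" }, 3001)); // uses 8080
```

Docker can then supply the value at launch with -e PORT=8080, and your -p mapping just needs to point at whatever port the app actually chose.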
How to delete a Docker image
Follow the steps above to stop the container, and then run:
docker image rm example
to delete it and get back that 913MB of space.
If it won't delete then run this command:
docker image rm example -f
The -f flag forces it to delete.
What is Docker: wrap up
This was a very simple overview of Docker. It is incredibly powerful, and while this example might seem a bit pointless, I want you to imagine a more complicated application, with many dependencies and multiple build steps, all wrapped up in a single image that you can put on any server or computer, run docker container run imageName, and have it up and running, doing its thing.
It's really powerful, and it's the gateway to the world of DevOps and easier deployments.
All the source code for this project can be found in the AllTheCode GitHub repo.