Deploying an app with Docker and Docker Compose

Docker

This topic is one of my favorites in cloud computing. Docker has improved and simplified the software development lifecycle, and using it in practice has been great.

A section of the Docker website reads:

Developing apps today requires so much more than writing code. Multiple languages, frameworks, architectures, and discontinuous interfaces between tools for each lifecycle stage creates enormous complexity. Docker simplifies and accelerates your workflow, while giving developers the freedom to innovate with their choice of tools, application stacks, and deployment environments for each project.

Docker is a widely used container tool that development and operations teams use to package applications and automate their deployment in lightweight containers. Because of Docker, applications can run consistently across many different environments.

Docker Compose

You can perform many activities when you use an application, say, an e-commerce application: you can log in with your account, add a product to your cart, and check out, among other things. Each of these activities can be considered a microservice: a service with a single responsibility and its own database. Each of these microservices can run in a container of its own, and this is where Docker Compose comes into play. Docker Compose runs all the microservices as a single service, where each container runs in isolation but can interact with the others when necessary.

For example:

If you have an application that requires an NGINX server and a Redis data store, you can create a Docker Compose YAML file that runs both containers as one service without the need to start each one separately.
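As a sketch of that idea, a Compose file for such a pair of services might look like the following (the image tags and ports are illustrative assumptions, not taken from a real project):

```yaml
# docker-compose.yml (hypothetical): one file starts both services together
version: '3.8'
services:
  web:
    image: nginx:alpine   # official NGINX image from Docker Hub
    ports:
      - "80:80"           # publish the web server on the host
    depends_on:
      - redis             # start redis before the web service
  redis:
    image: redis:alpine   # official Redis image
```

Running docker-compose up in the same directory starts both containers on a shared network, where the web service can reach Redis using the hostname redis.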

It's time to see how these two concepts work practically.

Prerequisites

Docker: have Docker installed

Docker Compose: bundled with Docker Desktop on Windows and macOS; on Linux, it may need to be installed separately

A Node.js application

Create Dockerfile

Let's start by creating a Dockerfile in the root directory of the Node.js application. Name it Dockerfile, with no file extension.

Below is a typical Dockerfile. You can copy and paste this.

FROM node:14.15.4

ENV NODE_ENV=production

WORKDIR /src

COPY package*.json ./

EXPOSE 8000

RUN npm install --production

COPY . ./

CMD [ "node", "app.js" ]

The first line,

FROM node:14.15.4

tells Docker which base image we are building on. There are lots of official images on Docker Hub and we are using one of them, the Node.js image that already has all the packages we need to run a Node.js application.

This explanation in the Docker documentation will help if you have a background in programming:

You can think of this in the same way you would think about class inheritance in object oriented programming. For example, if we were able to create Docker images in JavaScript, we might write something like the following.

class MyImage extends NodeBaseImage {}

This would create a class called MyImage that inherited functionality from the base class NodeBaseImage.

In the same way, when we use the FROM command, we tell Docker to include in our image all the functionality from the node:14.15.4 image.

Moving on to the next line,

ENV NODE_ENV=production

This sets an environment variable specifying which environment the application runs in, development or production; here it is production.

WORKDIR /src

creates (and switches to) a working directory inside the Docker image, so all subsequent commands are executed in the path you have set (/src).

COPY package*.json ./

If you are familiar with Node.js applications, you know that before you can install dependencies, you need to have two JSON files available, package.json and package-lock.json. That's exactly what we are doing here: copying these two files into the working directory.
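For reference, a minimal package.json for such an app might look like this (the name and version are placeholders, not from a real project; package-lock.json is generated automatically the first time you run npm install):

```json
{
  "name": "node-docker-app",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {}
}
```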

EXPOSE 8000

EXPOSE documents that the container listens on port 8000, the port this Node.js app runs on. Note that EXPOSE alone does not publish the port to the host; that is done with the -p flag when you run the container.

RUN npm install --production

And then we install the dependencies, skipping devDependencies because of the --production flag.

Next thing is to add the source code to the working directory. Hence:

COPY . ./

Instead of having many COPY commands in the Dockerfile, this single command copies everything in the build context into the image.
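Because COPY . copies everything in the build context, it is common to add a .dockerignore file next to the Dockerfile so that files you do not want in the image are skipped. The entries below are typical ones, not from any specific project:

```
node_modules
npm-debug.log
.git
.env
```

Excluding node_modules matters most: the dependencies are installed inside the image by npm install, so copying the host's copy in would only bloat the image and could clash with the container's platform.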

Lastly, we tell Docker what command to run when a container is started from the image. Replace app.js with the name of your entry file. It is worth noting that if a Dockerfile contains more than one CMD instruction, only the last one takes effect.

CMD [ "node", "app.js" ]

Remember, this Dockerfile is for a production environment. For a development environment, it will look like this (with some variations; the CMD here assumes an entry file at bin/www, as generated by tools like express-generator, so replace it with your own entry file):

FROM node:14.15.4

WORKDIR /src

COPY package*.json ./

EXPOSE 3000

ENV NODE_ENV=development

RUN npm install -g nodemon && npm install

COPY . ./

CMD ["nodemon", "bin/www"]

Build Image

docker build --tag <tagname> .

or

docker build -t <tagname> .

This command builds an image from the Dockerfile we just created. The -t or --tag flag lets you give the image a readable name; without it, the image appears with <none> as its repository name and you would have to reference it by its ID.

Run docker images to view your image. If successful, you should see your image in the table. Assuming you named your image node-docker-image, the table should look like so:

$ docker images
REPOSITORY          TAG       IMAGE ID       CREATED        SIZE
node-docker-image   latest    3809733582bc   1 minute ago   945MB

Run containers from the Docker Image

You can run as many containers as you want from a single image.

The command to do this is docker run <image_name>

However, there are flags you can add when running this command like:

-d : runs the container in detached mode (in the background)

-it : runs the container in interactive mode

-v : mounts a volume; this flag is very useful for persisting data, and I like it a lot. It prevents the need to re-run a container over and over when you make changes to the codebase.

-p or --publish : in the format [host port]:[container port], this publishes a port inside the container to a port on the host
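Putting a few of those flags together, a typical run of the image built earlier could look like this (the image name node-docker-image and the /src path are the ones assumed above, and the command of course requires Docker to be running):

```shell
# Run in the background, publish container port 8000 on host port 8000,
# and mount the current directory into /src so code changes are visible
# inside the container without rebuilding the image.
docker run -d -p 8000:8000 -v "$(pwd)":/src --name node-app node-docker-image
```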

You can start, stop and restart a container with:

docker start <container_id>

docker stop <container_id>

docker restart <container_id>

Run docker ps to see your running containers. If you have stopped a container, however, you will only see it when you run docker ps -a, which shows all containers, running or not.

So you see the flow: The Dockerfile builds into a Docker Image and the Docker Image runs as a container.

Now, let's see how Docker compose works.

Docker Compose

If you remember how I described Docker Compose, you'll remember it is a tool that runs multiple containers as a single service.

Create a YAML file in the root directory of the Node.js application named docker-compose.yml.

version: '3.8'
services:
  web:
    build:
      context: ./
    volumes:
      - .:/src
    command: npm run start
    ports:
      - "8000:8000"
    environment:
      NODE_ENV: production

First off, we tell Docker Compose which version of the Compose file format we are using. Next, we specify the services. In our case, we have only one service (web), built from the Dockerfile in the current working directory. Then, we specify the volumes: as explained earlier, mounting the project directory into the container means changes to the codebase are reflected without rebuilding the image, which is very useful. Next comes the command to start the node server. For this command to work, there has to be a start script under scripts in the package.json file.

"start": "node app.js"

Again, replace app.js with what you named your entry file.

Next, we map the host's port to the container's port and lastly, set the environment variable.

Run

docker-compose up

to start the service.
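A few other everyday Compose commands, for reference (these are standard docker-compose subcommands):

```shell
docker-compose up -d     # start the service in the background
docker-compose ps        # list the containers the service is running
docker-compose logs web  # show logs for the web service
docker-compose down      # stop and remove the containers and network
```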

Check the Docker Compose documentation for other commands you can run with Docker Compose.

Conclusion

I hope my lengthy article has helped you understand how Docker helps us conquer the seeming complexity of app development. Thank you for reading.