Deploying applications is a complex task. You have to create some VMs, be it on DigitalOcean or AWS, download and install the necessary prerequisites, and then deploy your application. It would be easy if it ended there; however, it doesn’t.

Following this you have application maintenance, which includes deploying updated versions of your application in addition to other things like bug fixes. And this is where the real complexity starts. Your updated version might need another dependent application to be installed, which in turn might need some random tool to be upgraded to a newer version. You did what was necessary, but then you find out that the deployment of the updated application failed because you forgot that one null check in some corner of your application. So you frantically download the previous version of your application and try to restore it, only to find that it doesn’t work anymore, since you upgraded that random tool to support your newer application.

While all this is happening, either your entire application is unavailable because it runs as a single instance, or, if it is indeed multi-instance behind a load balancer, all the other instances are being stressed because one of the nodes is down.

And now you are thinking: well, there has to be a better way.

Well my friend, there is.

*cue Star Wars intro soundtrack*

Docker.

Docker allows you to create and deploy application containers. What is an application container, you ask? Well, you can think of an application container like a VM, but unlike a fully-fledged VM, it’s just the bit that surrounds your application. So in essence, instead of being a virtual machine, it is a virtual container for your application. This allows application containers to start up within seconds, while a virtual machine could take minutes. Application containers also take up much less RAM, as they don’t have to load an entire operating system into memory, only the bits that surround your application.

So how can these amazing and fast application containers help simplify your deployments? Here’s how.

Because creating and destroying application containers is an inexpensive process, both in terms of compute resources and time, it encourages a philosophy of disposable infrastructure. The idea is that instead of creating your infrastructure and taking care of it for its lifetime, you create it only when you need it and destroy it when it’s not needed. Also, because all containers emerge from their respective Dockerfile(s), which is code, they can be versioned alongside your actual application. This means that if you roll back to a previous version, you will deploy a Docker container for that application using that version of the Dockerfile. And because the container holds your project and all of its dependencies, it’s self-contained and can be stood up or torn down without any impact on existing versions.

We’ve been going on about Dockerfiles for a while now, so before we go any further, let’s see what one actually looks like. Here’s an example:
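```dockerfile
FROM ubuntu:14.04

# Install Java, which we need to run the Spring Boot jar
RUN apt-get update -y
RUN apt-get install -y openjdk-7-jre-headless

# Copy the application jar from the current directory into the container
ADD *.jar /app/

# The application will listen on port 8080
EXPOSE 8080

# Run the application when the container starts
# (bootdemo2.jar is illustrative; use whatever jar name your build produces)
ENTRYPOINT ["java", "-jar", "/app/bootdemo2.jar"]
```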

That is a simple Dockerfile that deploys a Spring Boot application in an Ubuntu-based container. Briefly, the above Dockerfile defines a container image. The first line states that our image is FROM (based on) the ubuntu:14.04 base image. While building the image, Docker RUNs the apt-get update -y and apt-get install -y openjdk-7-jre-headless commands. In addition to that, it ADDs (read: copies) the *.jar file from the present working directory to the /app folder to make it available in the container. Also, because we know the application is going to run on port 8080, we EXPOSE (declare) that port so that it can be accessed from outside the container. Finally, we define the ENTRYPOINT (the command Docker executes when the container starts) as a java -jar command to run our Spring Boot application.

I’ll go into the details of all the sections that comprise a Dockerfile some other time, but for now, the thing to take away is that a Dockerfile forms the basis of your container and is used to define what your application container image looks like. The image can then be used to create and run the container.

I am using a custom Spring Boot application; however, if you follow the REST tutorial on the Spring Boot tutorials website, you should end up with the same project as me. Within your project, make sure that the Dockerfile is located at the root of the project (in the same directory as the src and build folders).

So once you have a Dockerfile, you can run the following command to create your image. Make sure that your Dockerfile is named exactly Dockerfile and that you are executing the command from the same folder as the Dockerfile:
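```bash
docker build -t bootdemo2 .
```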

The above command builds our image and tags it (via the -t option) with the bootdemo2 tag. When the version (tag) is unspecified, Docker assumes latest. However, if you do want to specify one, you just need to replace bootdemo2 with bootdemo2:1.0 or, more generically, bootdemo2:&lt;version&gt;. So building the first version of the application would look like this:
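```bash
docker build -t bootdemo2:1.0 .
```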

And then, subsequently, the next minor version would look like this:
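```bash
docker build -t bootdemo2:1.1 .
```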

If you don’t want to manage versions at all, just leave the tag off and let it default to latest.

If everything went OK, you’ll be able to see your image in the list of Docker images on your machine. This can be viewed using the following command:
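```bash
docker images
```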

You should see something like this in your output (the image IDs, dates, and sizes will differ on your machine):
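```
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
bootdemo2           latest              1a2b3c4d5e6f        10 seconds ago      680.3 MB
ubuntu              14.04               8eaa4ff06b53        2 weeks ago         188.4 MB
```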

Amazing! You’ve created your first Docker image. Hurrah! Let’s run it to start a container off that image. Simply run:
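```bash
# "bootdemo" is just a name we've picked for the container
docker run -p 8080:8080 --name bootdemo --rm -it bootdemo2
```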

You should see output like the following (trimmed here; the exact log lines depend on your application):
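```
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::

...
... Tomcat started on port(s): 8080 (http)
... Started Application in 4.312 seconds
```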

Hurrah! Your container is running. Before we open our champagne bottles, let’s break down that command to better understand what we just did.

The base command is docker run, which tells Docker to run something. All the parameters that follow tell it what to run.

The first one is the -p parameter. Remember that EXPOSE 8080? Well, that port was exposed on the container. However, if you want that port to be available on your host (similar to port forwarding), you need this bit. In 8080:8080, the first part (before the colon) is the port on your host machine and the second part (after the colon) is the port inside the container. This mapping is optional. If you don’t provide it, Docker won’t bind the container’s port to a local port, so to access your container you’ll have to use the container’s IP address instead of just localhost.

Next is the --name parameter, the name of the running container. This is entirely optional. If you don’t specify one, Docker will generate a random name.

After that is the --rm parameter. Again, this is optional; it tells Docker to remove the container after it’s stopped. By default, Docker will keep a stopped container around so that you can inspect its logs to determine why it shut down. For our test, we just need it to run our container and then remove it when we’re done.

The last set of parameters, -i and -t (combined into -it), tells Docker to run our container in interactive mode (-i) with a pseudo-TTY (-t). We’ve chosen interactive mode to make it easier to see the output and terminate the container: we can just press Ctrl + C to stop it instead of running the docker stop &lt;containerId&gt; command.

Lastly, we specify the image we want to create a container from. In this case it’s bootdemo2. Since we haven’t specified a tag, Docker assumes latest. If we did want to specify one, say 1.0, it would be bootdemo2:1.0.

To check everything that Docker is currently running (the -a flag also includes stopped containers), you can run the following in a new tab:
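```bash
docker ps -a
```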

You should get something like this (again, the IDs and timings will differ):
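```
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
3f1c0e6d9a2b        bootdemo2           "java -jar /app/boo..."  30 seconds ago      Up 29 seconds       0.0.0.0:8080->8080/tcp   bootdemo
```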

Also, just quickly, if you get an error like the following:
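```
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
```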

Just update your host environment with the following command:
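```bash
# "default" is the usual docker-machine VM name; substitute yours if it differs
eval "$(docker-machine env default)"
```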

and then re-run the above docker ps -a command.

To check whether or not the application is working fine, you can just navigate to your greeting endpoint (in my case http://localhost:8080/events) in your browser. This should work if you’ve passed in the -p parameter. However, if you are using boot2docker or Docker on a non-Linux operating system, you will have to make sure that your docker-machine has port forwarding set up for 8080 (or whatever port you are trying to expose).

As you can see, we’ve got one Docker container running. Because we are running it in interactive mode, there are two ways of stopping it. You can stop it by pressing Ctrl + C in the window where it’s running interactively, but, as you guessed, this only works in interactive mode. The normal way to stop your container is with the docker stop command:
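```bash
docker stop bootdemo
```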

You can use whatever you named your container, or, if you didn’t explicitly name it, you can obtain the container ID from the docker ps command.

Also, because we started it with the --rm flag, running docker stop will remove the container as well. However, if we didn’t have that flag, you’d have to run the below command after the stop command:
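```bash
docker rm bootdemo
```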

Once you’ve stopped your container, try re-running it, this time without the --rm flag, and see the difference when it comes to stopping your container.

Relating back to our earlier story about the problems of deploying different versions of a single application: with Docker, since you have a Docker image for every version of your application, you just swap out an old container for a new one. Even better, you can try out any version of your application locally, and since it’s a container, it will run in exactly the same way in production. If things don’t work out, just remove that version of the container and re-deploy a version that works.

Simple! I hope this post has given you a good introduction to Docker and the problem it solves for you. I’d love to hear about your experiences with Docker. Drop a note in the comments below or on my Twitter account @davemanthan.
