Shifting from virtual machines to Docker containers allows developers to deliver changes in a fraction of the time. And once an app is in a container, it’s portable. So the team can move it freely among AWS, Azure, Google Cloud and on-premise infrastructure, maximizing the benefits of a hybrid environment.
Problem is, development teams aren’t always clear on how to get started with containers. They know the benefits, but have questions about how to containerize their application.
This article outlines a particular use case as a primer on the steps involved in modernizing an application from running on a Virtual Machine to a Docker container-based deployment. Although this piece highlights one specific best practice, keep in mind that there are other ways to containerize apps that may be better suited in other situations. Ideally, this story will serve as a reference on the benefits you can expect as you consider moving your applications to containers.
Our example is a traditional three-tier Java Spring Boot application using Maven for build and dependency management. The user interface is built with React.js and the REST-based JSON API is built with Spring MVC. It uses MySQL as a relational data store. This client-facing application is critical to the business. The front and back ends are built into a single application jar which contains embedded Tomcat and depends on JDK 1.8.
This is an ideal type of application to modernize with containers. It is stateless, has an automated and repeatable build process and is being actively developed.
How Was the Application Deployed?
Before containers, the application was deployed to each environment by updating the existing virtual machines. An environment-specific deployment script ran a set of commands to copy the latest version of the application to a shared file system, and then updated each VM with the new version. The deployment was slow and frequently failed due to transient timeout errors during the update process.
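To make the fragility concrete, the legacy process looked roughly like the following sketch. This is a hypothetical reconstruction; the host names, paths and service name are illustrative, not taken from the actual scripts.

```shell
# Hypothetical sketch of the per-environment VM deployment described above.
set -e

APP_JAR=demo-0.0.1-SNAPSHOT.jar
SHARED_DIR=/mnt/shared/releases

# Copy the latest build to the shared file system
cp "target/${APP_JAR}" "${SHARED_DIR}/${APP_JAR}"

# Update each VM in turn; a single transient SSH timeout
# here fails the whole deployment
for host in app-vm-01 app-vm-02 app-vm-03; do
  ssh "deploy@${host}" "sudo systemctl stop demo && \
    cp ${SHARED_DIR}/${APP_JAR} /opt/demo/ && \
    sudo systemctl start demo"
done
```

Every environment had its own variant of a script like this, which is why troubleshooting required the team members who wrote them.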
What Problem Were We Trying to Solve Using Containers?
Setting the Scene
As the team scaled, the deployment process became a bottleneck. The team was frequently troubleshooting deployment failures, which caused delays and missed target dates. Incorporating additional tools and automation required involving specific team members familiar with the custom deployment scripts.
The amount of effort and coordination required to deploy the application to production led to an increase in the amount of time between releases. To mitigate impacts to clients, the deployments were done late at night or early in the morning.
The team identified three things this application lacked:
- Standardized CI/CD pipeline for build and deployment automation
- Stable and repeatable deployments
- Smaller and more frequent releases
Starting With a Base Image
First, we wanted to minimize the amount of code changes during containerization. This reduces the complexity, yet still gives us the ability to regression test before and after re-platforming from VM to containers. After the application is running in containers in production, we planned to leverage the benefits of containers to refactor and re-architect the application.
Since this was a “lift and shift” scenario, the operating system and libraries were defined by the existing application. So our goal was to determine the best approach to incorporate these dependencies into a container. The application required Linux and the Java Development Kit, so we leveraged an Open Source image from Docker Hub, based on Alpine Linux and OpenJDK. Depending on the situation, there are other options, such as Certified Images from the Docker Store if you need support, or building a custom image from scratch to maintain policies and tools across the enterprise. The latter offers the most flexibility but takes additional effort to support and maintain.
We created a Dockerfile that inherited from an Open Source base image, copied the application jar into the container’s file system and then ran the application by executing the jar from the command line.
FROM openjdk:8
COPY target/demo-0.0.1-SNAPSHOT.jar /usr/src/demo-0.0.1-SNAPSHOT.jar
CMD java -jar /usr/src/demo-0.0.1-SNAPSHOT.jar
Now that we had a Dockerfile defined, we needed to build an image and give it a tag:
$ docker image build --tag demo:v0.0.1 .
Sending build context to Docker daemon  20.07MB
Step 1/3 : FROM openjdk:8
Step 2/3 : COPY target/demo-0.0.1-SNAPSHOT.jar /usr/src/demo-0.0.1-SNAPSHOT.jar
 ---> Using cache
Step 3/3 : CMD java -jar /usr/src/demo-0.0.1-SNAPSHOT.jar
 ---> Using cache
Successfully built 5207b1f7b396
Successfully tagged demo:v0.0.1
The preceding command looked for the Dockerfile in the current directory (“.”), read the instructions in the Dockerfile, downloaded the base image layers and built the image with a tag of “demo:v0.0.1”.
Now that we had an image built, we could run it.
$ docker container run --detach --publish 8080:8080 demo:v0.0.1
We ran the container with the “--detach” option, so it runs in the background. Since the application listens for connections on port 8080, we needed to publish that port to the host. We also referenced the image tag we had just built: “demo:v0.0.1”. At that point, we were able to use a browser to get a response from “http://localhost:8080/”.
Once we built and tested the container locally, we needed to push it to a registry. We created an organization on Docker Hub and used a private repository to hold all of the application’s container versions.
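Pushing to the private repository followed the standard Docker Hub workflow, sketched below. The organization name “example-org” is a placeholder, not the actual organization.

```shell
$ docker login
$ docker image tag demo:v0.0.1 example-org/demo:v0.0.1
$ docker image push example-org/demo:v0.0.1
```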
Next, we updated our CI/CD pipeline to trigger a build when a change is committed. The pipeline does a compile on the application, runs a Docker build on the Dockerfile with a version tag and then pushes it to the Docker Hub repository.
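The build stage of such a pipeline boils down to three commands, shown here as a generic sketch. The exact wiring (version variable, credential handling, trigger configuration) depends on the CI system and is assumed, not taken from the team’s actual pipeline.

```shell
# Hypothetical CI build stage: compile, build the image, push it
mvn package
docker image build --tag example-org/demo:${BUILD_VERSION} .
docker image push example-org/demo:${BUILD_VERSION}
```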
Now that we have a Docker image tagged with a version, deploying a new release is simply an update to the container cluster to roll out a newer version of the container. We chose the AWS Elastic Container Service (ECS), since we wanted a managed service to reduce the time spent on operational tasks.
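With ECS, rolling out a new version amounts to registering a task definition revision that references the new image tag and pointing the service at it. The cluster, service and task definition names below are hypothetical.

```shell
$ aws ecs update-service \
    --cluster demo-cluster \
    --service demo-service \
    --task-definition demo-task:2
```

ECS then performs a rolling update, starting containers from the new revision before draining the old ones.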
What Benefits Did the Team Realize?
Going through the modernization process helped the team increase its velocity. They immediately benefitted from a standardized compile, build and push pipeline, without having to maintain custom deployment scripts.
The application start-up time went from minutes to seconds, which means changes are getting to environments quicker. This enables the team to deploy more frequently to lower environments and reduces the amount of time it takes to roll out a new version to production.
Prior to the modernization process, the team was deploying to production every three to four weeks. With the application modernized, they now consistently deploy every two weeks, getting new features into the hands of users twice as fast.
Moreover, now that the application is running in a container, it is portable between on-premise and public cloud environments.
We hope this article provided insight into how to get started containerizing your applications to benefit from the modern tooling, increased agility and portability containers provide.