The strength of Docker lies in containers. Containers are a concept that has existed on Linux for many years: one or more processes isolated from the rest of the system, somewhat like a lightweight virtual machine. A container bundles all the files those processes need to run, independent of the host system, which is why containers have become such a common deployment tool.
Docker brought the power of these Linux containers to everyone, which is why they are now used in so many different production environments.
Docker solves a problem that has given many developers headaches when working across multiple systems: containers let an application run consistently on any operating system, without worrying about errors caused by differences between development and configuration environments.
What is Docker?
In short: Docker lets you group the applications you have installed, so when you want to deploy a project, you just reuse that container instead of installing everything again. For example, if you have installed PHP, Apache, FTP, and so on for one project, you can deploy another project later without reinstalling PHP, Apache, and FTP all over again.
And the general concept of Docker is like this:
Docker is an open source tool that manages the lifecycle of containers. It simplifies how you build and deploy during development: you can have containers that include all the dependencies your application needs, and manage them throughout development.
Depending on your needs, Docker containers can replace virtual machines. Virtual machines use more resources than containers because each one needs a full virtual copy of an operating system, plus the virtualized hardware to run it, which also consumes a lot of RAM.
A Docker container, by contrast, shares the host operating system's kernel instead of carrying its own full copy of an OS. Containers use the physical server's resources directly, so the hardware does not have to be partitioned the way it is for virtual machines.
That makes containers very lightweight: they can run on almost any system configuration, and the application still runs correctly, just as it did when deployed locally.
With Docker, you can use the container for local development, then share that container with other developers and use that same container to deploy the product. When everything is ready, you can deploy your application as a container or as a coordinated service and it will run exactly the way it did locally.
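Sharing a container with other developers typically goes through a registry such as Docker Hub. A minimal sketch of that workflow, assuming a hypothetical image name (`my-app:1.0`) and Docker Hub username (`exampleuser`), neither of which comes from this article:

```shell
# Tag a local image with a registry-qualified name
# ("exampleuser" and "my-app:1.0" are hypothetical examples)
docker tag my-app:1.0 exampleuser/my-app:1.0

# Push it so other developers can pull it
docker push exampleuser/my-app:1.0

# On another machine: pull and run the exact same image
docker pull exampleuser/my-app:1.0
docker run --detach exampleuser/my-app:1.0
```

Because the image bundles the application and its dependencies, the container behaves the same on the other machine as it did locally.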
Why should you be familiar with Docker?
Containers solve the classic "it works on my machine" problem. Developers can share a container image, then build and run the same container on different machines. Once you can run your code consistently without worrying about the local environment, you can develop the application on any machine without reconfiguring a variety of machines to match your local setup.
Working with Docker containers also makes it easier to deploy to any environment. You do not have to account for the extra resources a virtual machine would consume. This helps improve application performance and reliability by giving you a tool to manage all code and container changes during development.
How to work with Docker
There are two key things you need to understand when working with Docker: images and containers.
Docker images are templates for creating containers. An image specifies which packages and preconfigured server environment are used to run your application. Images are built from a set of files that together provide the container's functionality.
These files include dependencies, your application code, and any other settings you need. There are two ways to create a new image: you can take a running container, change something, and save it as a new image, or you can build an image from scratch with a new Dockerfile.
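The first approach, saving a changed container as a new image, uses `docker commit`. A rough sketch, where the container name, the installed package, and the resulting image tag are all hypothetical examples:

```shell
# Start a container from a Node base image and keep it alive briefly
# ("my-container" is a hypothetical name for this example)
docker run --name my-container --detach node:alpine3.12 sleep 300

# Make a change inside the running container
docker exec my-container npm install -g typescript

# Save the modified container's filesystem as a new image
docker commit my-container my-image:with-typescript
```

Containers created from `my-image:with-typescript` now start with that change already in place.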
We will take a look at the Docker image below and analyze it. Let’s start by creating a Dockerfile to run a React application.
# pull official base image
FROM node:alpine3.12

# set working directory
WORKDIR /app

# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH

# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install
RUN npm install [email protected] -g

EXPOSE 3000

# add app
COPY . ./

# start app
CMD ["npm", "start"]
Each line in this file begins with a keyword that tells Docker what to do. In this file, I start from a Node base image to set up the environment needed to run the React application, then create the working directory for the container.
This is where the application code will live inside the container. Next, I set the path where dependencies will be installed and install the dependencies listed in package.json. Then I tell Docker that the container listens on port 3000. Finally, I copy the application into the working directory and launch it.
Now we can build images using the Docker command:
docker build -t local-react:0.1 .
Don’t forget the “.” at the end of the line! It tells Docker to build the image from the files and folders in your current working directory.
Now that you have built the image successfully, you can create a container from it. Run your image as a container using this Docker command:
docker run --publish 3000:3000 --detach --name lr local-react:0.1
This command takes your image and runs it as a container. Back in the Dockerfile, you exposed port 3000 inside the container. With --publish, you forward traffic from your system’s port 3000 to the container’s port 3000. This is necessary because otherwise firewall rules would prevent network traffic from reaching your container.
--detach runs the container in the background of your terminal, so it doesn’t take display input or output. This is a common option, and you can always re-attach the container to the terminal later if you need it. --name lets you give the container a name you can use in later commands. In this case, the container is named lr.
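Inspecting or re-attaching a detached container can be done with a few standard commands, using the lr name from above:

```shell
# List running containers to confirm lr is up
docker ps

# Stream the container's output without attaching your input
docker logs --follow lr

# Or re-attach the terminal to the container
docker attach lr
```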
You should now be able to visit localhost:3000 and see your application running.
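When you are done experimenting, you can stop and remove the container, and optionally the image, using the names from the commands above:

```shell
# Stop and remove the running container
docker stop lr
docker rm lr

# Remove the image as well, if you no longer need it
docker rmi local-react:0.1
```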
Docker may not be used everywhere, but it’s a popular technology that you should be aware of. It makes development across different systems much more convenient.