Docker is a containerization tool developed by Docker Inc. and released in 2013, which helps you create, deploy, and run containers smoothly. In other words, Docker acts like a wrapper around software, the OS layer, and the dependencies a particular piece of code needs to work. Even though the concept of containerization existed years before, Docker made it easy and approachable to handle containers in an efficient manner.
Docker was originally built for Linux, but it now works fine on Windows and macOS as well. It is a common scenario for issues to crop up between developers and testers while running the same code in different environments. These issues largely disappear if we use Docker, since it relies on OS-level virtualization, also known as containerization.
Containerization and virtualization are two different technologies, each with its own advantages and disadvantages.
Imagine a developer creating a website in PHP. After completing the code, the developer wants to test the site. To do so, they need to run it on a server that has an OS, PHP, and the necessary libraries. After getting the desired output, the developer passes the site on to a tester for examination. The tester also has an OS, PHP, and the necessary libraries installed on their system. But since there are multiple PHP versions, there is a chance the website won't work as intended. On top of that, the production environment can be different again.
Here comes the relevance of Docker. Using Docker, the developer can create a Docker image with the necessary software and libraries, and it will run the same way everywhere the Docker platform is installed. In addition, making updates and changes to the existing setup becomes a lot easier with Docker, as it is only a matter of updating the image and sharing it.
A Dockerfile is a set of instructions for building a Docker image. It includes the required base OS, environment variables, file locations, ports, and the commands to run when the container starts.
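As a sketch, a minimal Dockerfile for the PHP website from the earlier example might look like this (the PHP version, source path, and environment variable are illustrative assumptions, not a fixed recipe):

```dockerfile
# Base image: an official PHP image that bundles the Apache web server
# (pinning a version keeps builds reproducible)
FROM php:8.2-apache

# An environment variable that will be set inside the container
ENV APP_ENV=production

# Copy the site's source code into Apache's document root
COPY ./src /var/www/html/

# Document the port the web server listens on
EXPOSE 80
```

Each instruction adds a layer to the image, so the same file produces the same image on any machine.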
Once you have written all the instructions in a Dockerfile, it's time to make it portable by turning those instructions into a Docker image; you can use the docker build command to create images. Once the image is created, you can share it with everyone, and they can download and run the image.
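Assuming a Dockerfile sits in the current directory, the build step might look like this (the image name `my-php-site` and the tag are just examples):

```shell
# Build an image from the Dockerfile in the current directory (".")
# -t tags the image with a name and version
docker build -t my-php-site:1.0 .

# List local images to confirm the build succeeded
docker images
```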
The docker run command launches a container. You can run multiple containers at a time and also start and stop them. Docker identifies each container by a unique name, which you may specify explicitly, and by a container ID.
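A sketch of that lifecycle, using the example image from above (the container name and ports are illustrative):

```shell
# Run a container in the background (-d), mapping host port 8080
# to container port 80; --name gives it an explicit, readable name
docker run -d --name my-site -p 8080:80 my-php-site:1.0

# Stop and restart the container by name (the container ID also works)
docker stop my-site
docker start my-site

# List running containers with their names and IDs
docker ps
```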
Docker Hub is a central repository that helps you find official Docker images, and you can also use it to share your custom images. You can upload images to Docker Hub using the docker push command and download them using the docker pull command.
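The push/pull flow might look like this (`yourusername` is a placeholder for an actual Docker Hub account name):

```shell
# Log in to Docker Hub, then tag the image with your Hub username
docker login
docker tag my-php-site:1.0 yourusername/my-php-site:1.0

# Upload the image to Docker Hub
docker push yourusername/my-php-site:1.0

# Anyone can now download and run the same image
docker pull yourusername/my-php-site:1.0
```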
Docker Engine is known as the heart of Docker: the runtime on which containers run. Docker Engine has come in two different editions: the free Community Edition (CE) and the commercial Enterprise Edition (EE).
Docker Compose is generally used for defining and running multi-container applications, with the definition written in YAML. Docker Compose allows users to run commands on multiple containers at once: building images, scaling containers, restarting containers that have stopped, and much more.
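As an illustration, a Compose file pairing the example PHP site with a database might look like this (the service names, ports, and MySQL settings are assumptions for the sketch, not values from the original article):

```yaml
# docker-compose.yml: a web service plus a database, managed together
services:
  web:
    build: .            # build the image from the local Dockerfile
    ports:
      - "8080:80"       # host:container port mapping
    depends_on:
      - db              # start the database before the web service
  db:
    image: mysql:8.0    # official MySQL image pulled from Docker Hub
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
```

With this file in place, `docker compose up -d` starts both containers and `docker compose down` stops and removes them.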
Docker Swarm is a container orchestration tool that helps users manage multiple containers deployed across multiple host machines. Docker Swarm clusters the machines together: a swarm manager controls the activities of the cluster, and the machines that have joined the cluster are commonly referred to as nodes.
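A minimal sketch of setting up a swarm (the `<worker-token>` and `<manager-ip>` values are placeholders printed by the init step, and the service name and replica count are illustrative):

```shell
# On the machine that will act as the swarm manager
docker swarm init

# docker swarm init prints a join command with a token;
# run that command on each worker machine to add it as a node
docker swarm join --token <worker-token> <manager-ip>:2377

# From the manager, deploy a service with three replicas
# spread across the nodes in the cluster
docker service create --name web --replicas 3 -p 8080:80 my-php-site:1.0

# List the nodes that have joined the cluster
docker node ls
```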
To recapitulate: as per some industry reports, Docker holds around an 80% share of containerization technology. Portability, flexibility, and simplicity are the key reasons why Docker has been able to generate such strong momentum.
– By Sabu Thomas Vincent
DevOps Engineer