Today, the company no longer releases a large batch of features at once, but instead delivers small features to customers through a series of release trains. This has many advantages, such as quick feedback from customers and better software quality, which in turn lead to high customer satisfaction. To achieve this goal, the company must:
- Increase deployment frequency
- Reduce the failure rate of new releases
- Shorten the lead time between fixes
- Achieve a faster mean time to recovery when a new release crashes
DevOps satisfies all these requirements and contributes to seamless software delivery.
What are the advantages of DevOps?
- Continuous software delivery
- Less complex problems to manage
- Faster resolution of problems
- Faster delivery of features
- More stable operating environments
- More time to add value (instead of fixing and maintaining)
What is the function of CI (Continuous Integration) server?
The CI server's job is to continuously integrate the changes that different developers commit to the repository and check for compilation errors. It needs to build the code several times a day, ideally after every commit, so that when a problem occurs it can pinpoint which commit introduced it.
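As a sketch, a CI server that builds and tests on every commit might be configured like the following hypothetical GitHub Actions workflow (the repository layout and the `make` targets are assumptions, not from the original text):

```yaml
# Hypothetical .github/workflows/ci.yml: build and test on every push
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: make build   # assumed build command
      - name: Test
        run: make test    # assumed test command
```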
What is virtualization?
Virtualization allows you to run two completely different operating systems on the same hardware. Each guest operating system goes through the full process of booting and loading a kernel. You can enforce very strict security; for example, a guest operating system cannot get full access to the host operating system or to the other guests and make a mess.
Virtualization methods can be classified by how they mimic hardware for the guest operating system and emulate the guest operating environment. There are three main types of virtualization:
- Full virtualization
- Paravirtualization
- Container-based (operating-system-level) virtualization
How is Docker different from virtual machines?
Docker is not a virtualization method in itself. It relies on other tools that actually implement container-based, or operating-system-level, virtualization. For this, Docker initially used the LXC driver, then moved to libcontainer, which has since been renamed runc. Docker mainly focuses on automating the deployment of applications inside application containers. Application containers are designed to package and run a single service, whereas system containers are designed to run multiple processes, like virtual machines. Docker is therefore regarded as a container-management or application-deployment tool on containerized systems.
- Unlike virtual machines, containers do not need to boot an operating system kernel, so they can be created in under a second. This makes container-based virtualization faster and more attractive than other virtualization methods.
- Because container-based virtualization adds little or no overhead on the host, it offers near-native performance.
- Unlike other types of virtualization, container-based virtualization requires no additional software such as a hypervisor.
- All containers on the host share the host’s scheduler, thereby saving the need for additional resources.
- Compared with the virtual machine image, the size of the container state (Docker or LXC image) is very small, so the container image is easy to distribute.
- Resource management in containers is achieved through cgroups. Cgroups do not allow a container to consume more resources than it has been allocated. However, all of the host's resources are visible from inside a container even though they cannot all be used; this can be seen by running top or htop on both the container and the host: the output looks similar in both environments.
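The cgroup limits described above are set with flags on docker run; for example (the image name and the limit values are purely illustrative):

```shell
# Cap the container at 512 MB of RAM and half a CPU core;
# cgroups enforce these limits at the kernel level
docker run -d --memory=512m --cpus=0.5 nginx
```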
The internal mechanism of the container?
Around 2006, engineers, including some of Google's employees, implemented a new Linux kernel-level feature called namespaces (the idea already existed in FreeBSD). One job of an operating system is to share global resources, such as the network and disks, among processes. What if these global resources were wrapped in namespaces so that they are visible only to processes running in the same namespace? For example, you can take a chunk of disk, place it in namespace X, and processes running in namespace Y can neither see nor access it. Likewise, processes in namespace X cannot access memory allocated to namespace Y, and processes in X cannot see or communicate with processes in namespace Y. This provides a form of virtualization and isolation for global resources.
This is how Docker works: each container runs in its own namespace, but uses the exact same kernel as all other containers. Isolation occurs because the kernel knows the namespace allocated to the process and ensures that the process can only access resources in its own namespace during API calls.
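This namespace isolation can be observed directly with the unshare tool on a Linux host (shown only as an illustration; it requires root):

```shell
# Start a shell in new UTS and PID namespaces (the new mount
# namespace with a fresh /proc makes 'ps' see only this namespace)
sudo unshare --uts --pid --fork --mount-proc bash
# Inside that shell:
#   hostname isolated-host   # changes the hostname only in this namespace
#   ps aux                   # shows only processes in the new PID namespace
```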
What is Docker?
- Docker is a containerized platform that packages your application and all its dependencies together in the form of containers to ensure that your application runs seamlessly in any environment of development, testing, or production.
- A Docker container wraps a piece of software in a complete file system that contains everything it needs to run: code, runtime, system tools, system libraries, and anything else that can be installed on a server.
- This ensures that the software will always run the same regardless of its environment.
How to use Docker to build a system that has nothing to do with the environment?
There are three main features that help achieve this goal:
- Volumes
- Environment variable injection
- Read-only file systems
What is a Docker image?
A Docker image is the source of a Docker container. In other words, Docker images are used to create containers. Images are created with the build command, and they produce a container when started with run. Images are stored in a Docker registry such as registry.hub.docker.com. Because they can become quite large, images are designed to be composed of layers of other images, allowing a minimal amount of data to be sent when transferring images over the network.
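The image-to-container relationship described above can be sketched with the basic commands (image and container names are placeholders):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myapp:1.0 .
# Start a container from that image
docker run -d --name myapp-instance myapp:1.0
# Pull a shared image from the public registry
docker pull alpine
```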
What is a Docker container?
The Docker container includes the application and all its dependencies, but shares the kernel with other containers and runs as an independent process in user space on the host operating system. Docker containers do not depend on any specific infrastructure: they can run on any computer, any infrastructure, and any cloud.
What is Docker Hub?
Docker hub is a cloud-based registry service that allows you to link to code repositories, build images and test them, store manually pushed images, and links to the Docker cloud so that you can deploy images to hosts. It provides centralized resources for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development process.
What are the states of Docker containers?
Docker containers can be in one of four states:
- Running
- Paused
- Restarting
- Exited
We can identify the state of a Docker container by running the command:
docker ps -a
This will in turn list all available docker containers and their corresponding status on the host. From there we can easily identify the container of interest to check its status accordingly.
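docker ps can also filter on a specific state directly, which is handy when many containers are present (the filter value shown is one example):

```shell
# List every container regardless of state
docker ps -a
# List only containers in the 'exited' state
docker ps -a --filter "status=exited"
```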
What type of application, stateless or stateful, is better suited to Docker containers?
It is best to create stateless applications for Docker containers. We can create a container from the application and take the configurable state parameters out of the application. Now we can run the same container in production and QA environments with different parameters. This helps reuse the same image in different scenarios. It is also much easier to scale a stateless application with Docker containers than a stateful one.
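One way to externalize the configurable state described above is to inject it as environment variables, so the same image runs unchanged in QA and production (the variable names and image tag are illustrative):

```shell
# Same image, different injected configuration per environment
docker run -e DB_HOST=qa-db.internal   -e LOG_LEVEL=debug myapp:1.0
docker run -e DB_HOST=prod-db.internal -e LOG_LEVEL=warn  myapp:1.0
```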
Explain the basic Docker usage process
- Everything starts with Dockerfile. Dockerfile is the source code of the image.
- After creating a Dockerfile, you can build it to create an image of the container. The image is just the “compiled version” of the “source code”, which is the Dockerfile.
- After obtaining the container's image, you should use a registry to distribute it. The registry is like a git repository: you can push and pull images.
- Next, you can use the image to run the container. In many ways, running containers are very similar to virtual machines (but without hypervisors).
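The steps above can be sketched end to end (image and registry names are placeholders):

```shell
docker build -t myapp:1.0 .                   # Dockerfile -> image
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0    # image -> registry
docker run -d registry.example.com/myapp:1.0  # image -> running container
```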
What are the most common instructions in Dockerfile?
Some common instructions in Dockerfile are as follows:
- FROM: We use FROM to set the basic image for subsequent instructions. In each valid Dockerfile, FROM is the first instruction.
- LABEL: We use LABEL to organize our images by project, module, license, and so on. We can also use LABEL to help with automation: it specifies key-value pairs that can later be used to process the Dockerfile programmatically.
- RUN: We use RUN to execute commands in a new layer on top of the current image and commit the result. Each RUN command adds something on top of the image that is then used in subsequent steps of the Dockerfile.
- CMD: We use CMD to provide defaults for an executing container. If a Dockerfile includes more than one CMD instruction, only the last one takes effect.
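A minimal Dockerfile using all four instructions might look like this (the base image, packages, and label values are assumptions for illustration):

```dockerfile
# FROM must be the first instruction in a valid Dockerfile
FROM ubuntu:22.04
# LABEL attaches key-value metadata that tooling can query later
LABEL maintainer="team@example.com" project="demo"
# Each RUN executes in a new layer on top of the current image
RUN apt-get update && apt-get install -y python3
# CMD supplies the default command; only the last CMD takes effect
CMD ["python3", "--version"]
```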
What is the difference between the COPY and ADD commands in the Dockerfile?
In general, although ADD and COPY are functionally similar, COPY is preferred.
That's because COPY is more transparent than ADD. COPY supports only the basic copying of local files into the container, while ADD has extra features (such as local-only tar extraction and remote URL support) that are not immediately obvious. The best use of ADD is therefore the automatic extraction of a local tar file into the image, as in ADD rootfs.tar.xz /
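The difference can be shown side by side (the paths are illustrative):

```dockerfile
# COPY: plain copy of local files into the image (preferred)
COPY ./app /app
# ADD: same syntax, but a local tar archive is extracted automatically;
# ADD also accepts remote URLs, which COPY does not
ADD rootfs.tar.xz /
```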
Explain the ONBUILD instruction of dockerfile?
When an image is used as the basis for another image build, the ONBUILD instruction adds a trigger instruction to the image that will be executed later. This is useful if you want to build an image that will be used as the basis for building other images (for example, you can use a user-specific configuration to customize the application build environment or daemon).
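A hedged sketch of ONBUILD: the trigger is recorded in the base image and fires only when a downstream build uses that image in its FROM line (the base image tag and file names are assumptions):

```dockerfile
# Base image, e.g. built and tagged as 'mybase'
FROM python:3.12
WORKDIR /app
# These do NOT run now; they run at the start of a child image's build
ONBUILD COPY . /app
ONBUILD RUN pip install -r requirements.txt
```

A child Dockerfile then only needs `FROM mybase`, and the two recorded triggers execute automatically at the start of its build.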
What is the difference between Docker images and layers?
- Images: A Docker image is built from a series of read-only layers.
- Layers: Each layer represents an instruction in the image's Dockerfile.
The following Dockerfile contains four instructions, each of which creates a layer:
FROM ubuntu:15.04
COPY . /app
RUN make /app
CMD python /app/app.py
The important thing is that each layer records only the set of differences from the layer before it; unchanged files are not duplicated into the new layer.
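The layers of a built image can be inspected with docker history, which shows one row per instruction (the image tag is a placeholder):

```shell
# Show each layer of the image, its size, and the instruction that created it
docker history myapp:1.0
```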
What is Docker Swarm?
Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual Docker host. Docker Swarm serves the standard Docker API, so any tool that already communicates with a Docker daemon can use Swarm to scale transparently to multiple hosts.
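Setting up a swarm can be sketched as follows (the IP address and service name are placeholders):

```shell
# On the manager node: initialize the swarm
docker swarm init --advertise-addr 192.168.1.10
# On each worker node: join using the token printed by 'swarm init'
#   docker swarm join --token <token> 192.168.1.10:2377
# Back on the manager: run a replicated service across the host pool
docker service create --name web --replicas 3 -p 80:80 nginx
```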
How to monitor Docker in production?
Docker provides tools such as docker stats and docker events to monitor Docker in production. We can use these commands to get reports of important statistics.
- docker stats: When we call docker stats with a container ID, we get that container's CPU and memory usage, among other statistics. It is similar to the top command in Linux.
- docker events: docker events is a command for viewing the stream of ongoing activity in the Docker daemon.
Some common Docker events are: attach, commit, die, detach, rename, destroy, etc. We can also use various options to limit or filter the events we are interested in.
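For example (the filter values shown are illustrative):

```shell
# One-shot snapshot of CPU/memory usage for all running containers
docker stats --no-stream
# Stream only 'die' events from the last hour
docker events --since 1h --filter "event=die"
```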
How does Docker run containers on non-Linux systems?
Containers became possible when namespace support was added to Linux kernel version 2.6.24. Namespaces attach an identifier to every process and add new access-control checks to every system call. They are accessed through the clone() system call, which allows creating separate instances of previously global namespaces.
If containers are possible only because of features available in the Linux kernel, the obvious question is how non-Linux systems run containers. Both Docker for Mac and Docker for Windows use Linux VMs to run the containers. Docker Toolbox ran containers in a VirtualBox VM. The latest Docker, however, uses Hyper-V on Windows and Hypervisor.framework on Mac.
How to use Docker in multiple environments?
The following changes can be made:
- Remove any volume bindings of application code so that the code remains in the container and cannot be changed from outside
- Bind to different ports on the host
- Set environment variables in different ways (for example, to reduce the verbosity of logging, or enable email sending)
- Specify a restart strategy (for example, restart: always) to avoid downtime
- Add additional services (for example, log aggregator)
Therefore, you may wish to define an additional Compose file, such as production.yml, which specifies a configuration suitable for production. This configuration file only needs to contain the changes you want to make from the original Compose file.
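A sketch of this pattern, assuming a service named web (all names and values are placeholders):

```yaml
# production.yml: overrides layered on top of docker-compose.yml with
#   docker-compose -f docker-compose.yml -f production.yml up -d
services:
  web:
    ports:
      - "80:8000"          # bind to a different host port
    environment:
      LOG_LEVEL: warning   # quieter logging in production
    restart: always        # restart policy to avoid downtime
```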
Why doesn’t Docker Compose wait for the container to be ready, and then continue to start the next service in a dependent order?
Compose starts and stops containers in dependency order, where dependencies are determined by depends_on, links, volumes_from, and network_mode: "service: …".
However, on startup Compose does not wait until a container is "ready to run", only until it is running. There are good reasons for this:
- The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem with distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failure.
- To deal with this problem, design the application to try to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
- The best solution is to perform this check in the application code at startup and when the connection is lost for any reason.
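The retry loop described above can be sketched generically; connect_with_retry and flaky_connect here are illustrative helpers standing in for any real database connection attempt, not part of Docker or Compose:

```python
import time

def connect_with_retry(connect_fn, attempts=5, delay=0.01):
    """Call connect_fn until it succeeds or attempts are exhausted."""
    last_error = None
    for _ in range(attempts):
        try:
            return connect_fn()
        except ConnectionError as exc:
            last_error = exc
            time.sleep(delay)  # back off before the next attempt
    raise last_error

# Simulate a database that becomes ready only on the third attempt
calls = {"n": 0}
def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("database not ready")
    return "connected"

print(connect_with_retry(flaky_connect))  # connected
```

In a containerized setup, this logic lives in the application's startup path, so the service tolerates the database container coming up later or restarting.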