Top 10 Best Practices For Docker

  • June 30, 2021

Containers are the industry standard for packaging and shipping software. A container is a unit of software that packages up an application’s code and all of its dependencies so it runs reliably from one environment to another. More specifically, a Docker container image is a standalone, lightweight software package that includes everything needed to run your application.

Organizations from banking to e-commerce want to learn how containers fit into their applications. They are moving away from manual IT tasks toward software-driven processes, and containers help automate operations such as testing, configuration, and runtime management. Containers are also the leading choice for running microservices-based applications on platforms like AWS ECS and Kubernetes. To help you better understand and work with containers, we’ve compiled a list of 10 security and performance best practices for Docker containers.

0. Docker Images Should Have a Single, Atomic Purpose

Each image should package one application or process; if you need, for example, a web server and a database, run them as separate containers rather than baking both into one image.

 

1. Use Smaller Images When Possible

Choose images with the fewest possible OS libraries and tools to minimize the attack surface and risk to containers. 
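
For instance, a minimal sketch with the official Node.js image (the tags are illustrative; pick the smallest variant that still runs your application):
# Prefer a slim or Alpine-based variant of the base image
FROM node:10-alpine
# instead of the much larger default image:
# FROM node:10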

2. Use Users With Lower Privileges

Create a dedicated user and group on the image with minimal permissions to run the application. Use the same user to run the process. 

For example, the official Node.js image ships with a generic, non-root node user built in:
FROM node:10-alpine
USER node
CMD node index.js
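
If your base image does not ship a non-root user, a minimal sketch of creating one yourself (the appgroup/appuser names and the myapp command are illustrative):
FROM alpine:3.13
# Create a dedicated, unprivileged group and user for the application
RUN addgroup -S appgroup && adduser -S -G appgroup appuser
USER appuser
CMD ["./myapp"]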

3. Try To Avoid Persistent Volumes; Persist Information Outside the Containers

Because containers are designed to run immutable code, they consume fewer resources and start faster than a traditional application server. Their scaling and resilience mechanisms depend on how quickly instances can be created and destroyed in response to load.

If a Docker instance stores information locally, that information is not available to other instances when the platform load-balances requests across them, which it may do automatically whenever it needs to. Likewise, when the platform scales on consumption or restarts a container after a failure to keep the system available, anything stored inside the container is lost. This makes storing information in a Docker instance risky.

Instead, persist information in external stores such as databases or online services. Docker also supports mounting persistent volumes, but keep in mind that they can introduce performance problems and weaken the application design. For those reasons, mounting persistent volumes should be avoided and used only as a last resort.
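
A minimal sketch of the preferred approach (the image name and variable are illustrative): keep the container stateless and point the application at an external database through configuration:
$ docker run -d \
    -e DATABASE_URL=postgres://db.internal.example:5432/app \
    my-app:1.0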

4. Sign and Verify the Images To Mitigate MITM (Man in the Middle) Attacks

When relying on Docker images, it is essential to confirm that the image you are using is the one the publisher actually pushed and that no one has tampered with it. Always verify the authenticity of the images you pull.
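
One way to enforce this is Docker Content Trust, which refuses unsigned images once enabled; a quick sketch:
$ export DOCKER_CONTENT_TRUST=1
$ docker pull node:10-alpine   # the pull now fails if the image is not signed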

5. Find, Repair, and Monitor Open Source Vulnerabilities

As part of your continuous integration, you should scan your Docker images for any known vulnerabilities. One tool for this is Snyk, whose CLI looks for known security vulnerabilities in open-source application libraries and in Docker images.

How to use Snyk to scan Docker images:
$ snyk test --docker node:10 --file=path/to/Dockerfile

Use Snyk to monitor a Docker image and alert on recently disclosed vulnerabilities:
$ snyk monitor --docker node:10

6. Do Not Leak Confidential Information Into Docker Images

It is easy to accidentally leak tokens and keys into images while building them. To stay safe, follow these guidelines:

  • Use multi-stage builds
  • Use Docker’s “secrets” feature to mount confidential files without caching them (only supported since Docker 18.04; see the sketch after this list).
  • Use a .dockerignore file to avoid a dangerous COPY statement that pulls confidential files from the build context into the image
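
A minimal sketch of the secrets approach with BuildKit (the secret id and file names are illustrative):
# syntax=docker/dockerfile:1
FROM node:10-alpine
WORKDIR /app
COPY package*.json ./
# The .npmrc secret is available only during this RUN step and is never written to a layer
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci

$ DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=.npmrc .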

7. Use Fixed Tags For Immutability

Docker image owners can push new versions to the same tag, which leads to inconsistent images across builds and makes it difficult to track whether a vulnerability has been fixed. Pinning images to a fixed, specific tag (or to an immutable digest) mitigates this problem.
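
A short sketch of the idea (the version tag is illustrative, and <digest> is a placeholder, not a real value):
# Mutable: the "10" tag can be repointed to a different image at any time
# FROM node:10
# Better: a fully qualified version tag changes far less often
FROM node:10.16.3-alpine
# Strictest: pin the exact image by digest
# FROM node@sha256:<digest>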

8. Use COPY Instead of ADD

Arbitrary remote URLs used with ADD can point to malicious data sources or be tampered with in MITM attacks. Additionally, ADD automatically unpacks local archive files, which may not be expected and can lead to path traversal and Zip Slip vulnerabilities. Use COPY instead, unless ADD is specifically required.
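
A short sketch of the difference (the paths and URL are illustrative):
# Risky: fetches from a remote URL and auto-extracts local archives
# ADD https://example.com/app.tar.gz /opt/app/
# Safer: copies only files from the build context, with no extraction or remote fetch
COPY app/ /opt/app/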

9. Use Multi-Stage Builds To Generate Smaller And Safer Images

Using multi-stage builds to produce smaller, cleaner images minimizes the attack surface, because build-time dependencies stay behind in the earlier stages and never reach the final image.
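
A minimal sketch of a multi-stage build for a Node.js application (paths and scripts are illustrative):
# Build stage: contains npm and all build-time dependencies
FROM node:10 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Final stage: only the runtime and the built output make it into the shipped image
FROM node:10-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
USER node
CMD ["node", "dist/index.js"]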

10. Use a Linter

Use Hadolint, a static code analysis tool for Dockerfiles, to automatically apply Dockerfile best practices. It detects problems in a Dockerfile and alerts you when they are found.
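
A quick sketch of running it, either through the published container image or as a local binary:
$ docker run --rm -i hadolint/hadolint < Dockerfile
$ hadolint Dockerfile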

Understanding containers is critical when you are automating your applications. By following these best practices, you will ensure that your containers not only perform optimally but are also as secure as possible.

 

Credits
Written by: Gaston Valdes
General corrections and editing: Diego Woitasen