The Evolution of Docker Container Security: Part 3

When it comes to Docker container security, orchestration models favor integrated roles, network scrutiny, small attack surfaces and host protection.

Tom Henderson

September 3, 2018


Part 1 of this series examined the emergence of Docker and the several schools of thought it draws on about how to group processes together and make them do work without getting out of control. In Part 2 we took a look at some of the Docker container security concerns that have grown along with the platform's popularity. In Part 3 we examine how to maintain Docker container integrity.

Docker was born in the quest to sandbox workloads while making them easy to interconnect. Docker workloads are subordinated to security controls in numerous ways, but the provenance of a Docker image is critically important. Just as a bad Lego part can bring down the entire project, a bad image can compromise overall Docker container security.

For more on Docker container security, see Part 1 and Part 2 of this series.

The Lego-like components of Docker fit neatly into continuous integration/continuous delivery (CI/CD) and agile development models. These models assemble applications with a variety of code-control and integration tools that manage code bases. Inside those code bases are customizations and code modules at every stage of life, from works in progress through final and, eventually, retired production builds.

Multiple products are available within the CI/CD and agile development universe. Some aid development and testing, some are good at deployment, and some span multiple stages. Individual tools can often serve different purposes, but software composition and deployment frameworks are frequently disconnected. This forces organizations to look for tools that are not only integrated, but can also survive the scrutiny of an audit.

Kubernetes, an orchestration framework, currently dominates Docker production deployments. However, it's one important part of a much larger picture--the violin section in a very large orchestra. At every stage--coding, framework, test, delivery/deployment and end of production--there are roles that define who may use the building blocks and how the blocks relate to one another through the cycle, from initiation through production to death (or rejuvenation).

The roles, along with their security relationship to a project, define the security at each step in the life of a systems project--and, where projects overlap, the relationships among multiple complex components like libraries, configuration settings, and network design and structures. Role-based, identity-based or privilege-based security becomes a key component in the success of the delivery chain when responsibilities are taken seriously.
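
Kubernetes expresses such roles directly through its role-based access control (RBAC) API. The sketch below--with a hypothetical ci-deployer service account that may manage deployments in a staging namespace and nothing else--shows the general shape:

    # Minimal RBAC sketch: the hypothetical "ci-deployer" account may
    # manage deployments in the "staging" namespace, and nothing else.
    kubectl apply -f - <<'EOF'
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: ci-deployer
      namespace: staging
    rules:
    - apiGroups: ["apps"]
      resources: ["deployments"]
      verbs: ["get", "list", "create", "update"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: ci-deployer-binding
      namespace: staging
    subjects:
    - kind: ServiceAccount
      name: ci-deployer
      namespace: staging
    roleRef:
      kind: Role
      name: ci-deployer
      apiGroup: rbac.authorization.k8s.io
    EOF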

Container Integrity and Fleet Manifest 

Researchers and analysts note the importance of container integrity and image manifests, and the need to scrutinize each element in the container delivery chain. Although log managers can help forensically pinpoint where problems occurred, most would agree that prevention starts with defining and enforcing responsibilities. Docker offers pre-built base images but, as mentioned, doesn't always keep track of them well. Third-party apps may have this capability built in, using their own sources of container integrity and fingerprinting methods. Static code parsers--the apps that examine code for both licensing problems and configuration issues--vary widely in their ability to integrate easily into code production frameworks.
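
One low-tech fingerprinting method, independent of any third-party product, is to resolve an image's mutable tag to its immutable content digest and record that digest in the fleet manifest. A sketch, with alpine:3.8 standing in for any base image:

    # Pull by tag, then read back the immutable content digest.
    docker pull alpine:3.8
    docker image inspect --format '{{index .RepoDigests 0}}' alpine:3.8
    # Prints something like: alpine@sha256:<64-hex-digest>

    # Reference the digest from then on, so the image can't be
    # silently swapped out from under the build.
    docker pull alpine@sha256:<64-hex-digest>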

Once a container is sourced, developer code and libraries have been added, and configurations have been changed to meet the needs of orchestration and communication, a network relationship must be developed among not only the containers but also the other resources that containers will use and contribute to. The data product is an asset, and assets need protection--and, more than likely, must also conform to regulatory guidelines. The data is hot until it's moved into another, possibly regulated, asset amalgamation and is finally erased. Even the erasure may be regulated in some way, depending on the jurisdiction involved. Encryption keys, asset versioning and other resources may need to be weighed among the app controls that will service the development cycle.

Here, third-party applications may do periodic checks for container and configuration integrity while also gating updates, changes, patches and fixes, often via privilege management. Additional functions may sniff network traffic, looking for irregular sources or destinations, or for sudden fleet silence.

When containers talk across host boundaries, their conversations require encryption and may also require key/secrets management, adding an additional layer of complexity to privilege management. Here again, third-party container security applications handle the multiple layers of encryption and key management, as well as key aging, lifetime and storage. Multiply these needs over numerous projects, and the complexity begs for astute assistance.
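
Docker's own swarm mode illustrates the baseline: overlay networks can carry IPsec encryption between hosts, and secrets can be delivered to services at runtime rather than baked into images. A sketch (the network, secret and service names are hypothetical):

    # Overlay network with IPsec encryption between swarm hosts.
    docker network create --driver overlay --opt encrypted backend-net

    # Store a secret in the swarm and hand it to a service at runtime.
    echo "s3cr3t-api-key" | docker secret create api_key -
    docker service create --name api --network backend-net \
        --secret api_key myorg/api:1.0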

Minimizing Attack Surface

Another very important step is to consider the attack surface not only of the host, but of each element in the delivery chain. Even generic, signed images can be misconfigured or configured for eras gone by. Fresh cloud-native host instances aren't necessarily inspected for all of the lock-down steps and required patches (think Spectre, Meltdown, and so on).
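
One freely available starting point for host inspection is Docker's open-source Docker Bench for Security script, which checks a host against the CIS Docker Benchmark's lock-down recommendations:

    # Audit a Docker host against CIS Docker Benchmark recommendations.
    git clone https://github.com/docker/docker-bench-security.git
    cd docker-bench-security
    sudo sh docker-bench-security.sh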

Smaller images with fewer components are better, and many base images have been stripped of nearly everything but the needed working parts. Some organizations have gone to the trouble of eliminating code elements known to have a high patch rate (think systemd) to avoid high-frequency patching through the dev-to-production process. Keeping images patched is an excellent practice, but fewer components to patch means fewer versions, smaller testing matrices and, ultimately, fewer code tests and pushes. Code begs to be slimmed down.
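
Multi-stage builds (available since Docker 17.05) are one common way to slim an image down to its working parts. This sketch assumes a hypothetical, statically compiled Go service; the image names are illustrative:

    # Build in a full toolchain image; ship only the static binary
    # in a minimal runtime image.
    cat > Dockerfile <<'EOF'
    FROM golang:1.10 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /app .

    FROM alpine:3.8
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]
    EOF
    docker build -t myorg/app:slim .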

Organizations like Twistlock, Aqua Security, Zscaler and others take different approaches to addressing both coding and deployment frameworks--often with strong protection commonalities. Traffic monitoring and firewall components in these products are helpful, as is very specific image/deployment marking. Using key management tools, then enforcing their use and handling exceptions, can be a non-trivial exercise, but it guards against image poisoning, malware and configuration drift.
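
Docker's own Content Trust shows the enforcement pattern in miniature: with it enabled, pushes are signed through Notary, and pulls of unsigned or tampered images fail. (The repository name below is hypothetical.)

    # Refuse unsigned images for every push and pull in this shell.
    export DOCKER_CONTENT_TRUST=1
    docker push myorg/app:slim   # signs the tag on push (prompts for keys first time)
    docker pull myorg/app:slim   # fails unless a valid signature exists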

An old maxim is that salespeople see things as 10^6 and engineers as 10^-6; security personnel must forge a chain spanning the two. Keeping the chain light means keeping images light. Keeping the chain taut means network control and monitoring. Astute repository control--including privilege, key and chains of authority--will help ensure that the chain doesn’t rust. Then we forge a new chain.

 
