
Containerization…trouble in paradise?

Sept. 23, 2022
Container security is emerging as a serious challenge.

By Ravi Bhagavatula, senior staff engineer at Lynx Software Technologies

Virtualization technology, in which multiple operating systems run on shared hardware, is easy to understand yet inefficient in its use of resources. Just a few decades ago, everyone used virtual machines (VMs) to host and manage infrastructure; more recently, industries have shifted toward containers, using systems such as Docker and Kubernetes.

Containers aim to achieve the same goal as virtual machines while eliminating duplicated effort between machines. Instead of loading an entire operating system (OS) for each app, Docker lets containers share the host OS kernel while loading only app-specific libraries and programs, for efficiency. By tuning the container and its image together, it becomes possible to fine-tune the specific libraries and configuration your app will use. This yields performance gains without the overhead of running an entire OS.

Containers are easy to run on development machines, and deployment itself is much simpler: pre-built containers are uploaded to a container repository, and production systems simply pull the updated version.
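As a sketch of that workflow (the registry host, image name, and tag below are hypothetical), the commands are echoed via a dry-run helper so the flow reads without a running Docker daemon:

```shell
#!/bin/sh
# Typical build -> push -> pull flow (hypothetical registry and image names).
# run() echoes each command; swap it for direct execution to use it for real.
IMAGE="registry.example.com/team/myapp"   # hypothetical trusted registry
TAG="1.4.2"

run() { echo "+ $*"; }                    # dry-run helper

# On the development machine / CI:
run docker build -t "$IMAGE:$TAG" .       # build the image from a Dockerfile
run docker push "$IMAGE:$TAG"             # upload the pre-built image

# On each production host:
run docker pull "$IMAGE:$TAG"             # fetch the updated version
run docker run -d --name myapp "$IMAGE:$TAG"
```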

So…all is good, right?

Not quite. Container security is emerging as a serious challenge. Container applications are very flexible and often incorporate elements that interact over a network connection. This opens the door for security vulnerabilities, such as bugs, misconfiguration, and authentication and authorization issues, to be introduced into the system.

With the IT and OT worlds merging, the challenge becomes one of delivering both security (building resilience so that a deliberate or accidental attack on a connected platform does not cause a system to change functionality or become unreliable) and safety (ensuring an OT network continues to run in a way where human life, or other precious items, are safe regardless of any other workloads the converged system is processing).

Container security risks can be grouped into two main areas:

1. Compromise of a container image or of a container as a whole

2. Misuse of a container to attack the host OS, or other hosts or containers

These challenges haven’t gone unnoticed. The National Institute of Standards and Technology (NIST) offers a detailed container security guide, which includes comprehensive information and recommendations on container security. Its guidance can be broken down into six main areas:

1) Countermeasures to container images—NIST advises organizations to maintain a set of trusted images and registries and to ensure that only images from this set are allowed to run in their environment. It also recommends monitoring images for malware and avoiding storing secrets in images.
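One way to enforce such an allow-list in a CI gate is to refuse any image reference that is not pinned by digest or that comes from outside the approved registry. A minimal sketch, with a hypothetical registry hostname and a fake digest:

```shell
#!/bin/sh
# Reject image references that are not digest-pinned or that come from an
# untrusted registry. The registry hostname below is a hypothetical example.
ALLOWED_REGISTRY="registry.example.com"

image_allowed() {
  case "$1" in
    "$ALLOWED_REGISTRY"/*@sha256:*) return 0 ;;  # trusted host, digest-pinned
    *)                              return 1 ;;  # anything else is rejected
  esac
}

# "deadbeef" stands in for a real 64-hex-character digest.
image_allowed "registry.example.com/team/app@sha256:deadbeef" \
  && echo "deploy allowed" || echo "deploy blocked"
```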

2) Countermeasures to registries—Configure development tools, orchestrators, and container runtimes to connect to registries only over encrypted channels, and ensure that access (particularly write access) to registries containing proprietary or sensitive images requires authentication.
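The transport side of that advice can be checked mechanically. A minimal sketch (the URLs are hypothetical; authentication itself would be enforced by the registry, e.g. via `docker login`) that refuses any registry endpoint not served over TLS:

```shell
#!/bin/sh
# Refuse registry endpoints that are not served over an encrypted channel.
registry_url_ok() {
  case "$1" in
    https://*) return 0 ;;   # TLS-protected endpoint
    *)         return 1 ;;   # plain HTTP (or anything else) is refused
  esac
}

registry_url_ok "https://registry.example.com" && echo "registry endpoint ok"
```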

3) Countermeasures to orchestrators—Orchestrators should use a least-privilege access model, in which users are granted only the ability to perform the specific actions on the specific hosts, containers, and images that their job roles require.
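In Kubernetes, this maps onto RBAC. A hedged sketch (the namespace, role, and user names are hypothetical) of granting a team read-only access to pods in its own namespace, with the commands echoed rather than executed so no cluster is needed:

```shell
#!/bin/sh
# Least-privilege RBAC sketch using kubectl's imperative commands.
# Names are hypothetical; run() echoes instead of executing.
run() { echo "+ $*"; }

# A role that can only read pods, only in the team's namespace:
run kubectl create role pod-reader \
  --verb=get --verb=list --verb=watch \
  --resource=pods --namespace=team-a

# Bind that role to a single user, in that namespace only:
run kubectl create rolebinding pod-reader-binding \
  --role=pod-reader --user=alice --namespace=team-a
```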

4) Countermeasures to containers themselves—Monitor the container runtime for vulnerabilities, remediate problems quickly when they are detected, and control the egress network traffic sent by containers.

5) Countermeasures to host OS risks—Group container workloads onto hosts by sensitivity level, avoid mixing containerized and non-containerized workloads on the same host instance, and use tools that regularly check for and apply updates to all software components within the OS.

6) Harnessing automation—Enable trusted computing by a) measuring firmware, software, and configuration data before it is executed, using a Root of Trust for Measurement (RTM); b) storing measurements in a hardware root of trust, such as a trusted platform module (TPM); and c) validating that the current measurements match the expected measurements. With all of these in place, it can be attested that the platform will behave as expected.
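The measure-and-compare step (c) can be illustrated with a toy sketch. Real systems measure with a hardware RTM and store results in TPM platform configuration registers; here `sha256sum` stands in for the measurement and a shell variable for the provisioned "golden" value:

```shell
#!/bin/sh
# Toy illustration of the measure-and-compare check. sha256sum stands in
# for a hardware measurement; $golden stands in for a TPM-stored value.
measure() { sha256sum "$1" | awk '{print $1}'; }

component=$(mktemp)                 # stand-in for a firmware/software blob
printf 'boot image v1\n' > "$component"

golden=$(measure "$component")      # provisioned "expected" measurement

# Later, at attestation time:
if [ "$(measure "$component")" = "$golden" ]; then
  echo "measurement ok: platform may be trusted"
else
  echo "ATTESTATION FAILED"
fi
rm -f "$component"
```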

At Lynx, we view the fifth item as one of the fundamental challenges our customers face. A compromise of the host OS can lead to the compromise of every application running on it, opening the potential for the system's behavior to be altered. To mitigate this, NIST recommends that "organizations should use these minimalistic OSs to reduce their attack surfaces and mitigate the typical risks and hardening activities associated with general-purpose OSs."

So, what does that really entail?

First, the system must be separated using advanced hypervisor technology. This must be a true hypervisor, not one built on an underlying operating system, since such an OS is a single point of vulnerability. From there, the OS instance responsible for executing the container should be separated from any other applications. Then, the system should be run as an immutable infrastructure.

Second, there is a need to define a system architecture in which applications are only afforded the system resources they need—nothing more. This is often referred to as a “least privilege” architecture.
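At the individual-container level, a least-privilege posture can be approximated with Docker's standard hardening flags. A hedged sketch (the image name is hypothetical; the command is echoed so it reads without a daemon):

```shell
#!/bin/sh
# Grant a container only what it needs: no capabilities, no privilege
# escalation, a read-only root filesystem, and bounded memory/CPU.
run() { echo "+ $*"; }

run docker run -d --name app \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --read-only \
  --memory 256m --cpus 0.5 \
  registry.example.com/team/myapp:1.4.2
```

Capabilities, writable paths, and resources can then be added back one by one as the app demonstrably needs them, rather than granted by default.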

Third, consider how unikernels can be used in place of regular operating systems as the basis of certain applications. Unikernels are effectively a set of pre-built binary libraries; resource allocation and direct hardware interaction are left to the hypervisor. The unikernel architecture is designed to deliver the security strengths of VM-level partitioning with the speed and footprint benefits attributed to containers.

Unlike an operating system, unikernels run in user mode. Kernel mode is generally reserved for the lowest-level, most-trusted functions of the operating system. This is because crashes in kernel mode are so catastrophic that they can halt the entire system. In user mode, the executing code has no ability to directly access hardware or reference memory.

Because the hypervisor handles hardware interaction, all application-specific system calls are pushed as close to the app as possible. For secure systems, a type-1 hypervisor (one with no underlying "helper" OS) that runs directly on the hardware and loads virtual machines should be used.

Unikernels are single-threaded and single-user; adding process management carries significant overhead. An operating system must provide ways to start, stop, and inspect processes, support inter-process communication, and give multiple users authentication, authorization, and resource isolation.

For the unikernel, simplicity wins.

To summarize, containers deliver immense value for new systems. That said, they are not a cure-all. The prudent approach is to confine containers to a specific virtual machine that is provably isolated from critical system applications and operations.