Alex Artamonov | Mission Critical
Get a thorough understanding of the options.
Feeling boxed in by containerization? Unsure where it fits within your organization? Concerned about security?
Join the club. Having been around in some form since 1980, containerization today qualifies as a shape-shifting trend. It’s an updated replay of the venerable Infrastructure-as-a-Service (IaaS) vs. Platform-as-a-Service (PaaS) face-off, and on both sides the players’ needs remain largely the same. Each environment has its pros and cons, and one size definitely does not fit all.
On the PaaS side, the mega-providers (Google, Microsoft, Salesforce, et al.) are striving for stickiness and API dependence, and will use their considerable muscle to tamp down IaaS-type solutions that foster portability and vendor neutrality. Docker, the de facto standard for containerization, isn’t IaaS, but it may use an IaaS provider to host the Docker environment. Containers are more flexible than PaaS but less flexible than IaaS.
A VM offers users full control over the operating system, updates and installed software. Providers tend to manage containers, and containerization offers only limited options for installed software and the development environment. These differences in management and responsibilities are non-trivial.
Because users fully manage a VM, any security updates, software installs and other upgrades are solely their responsibility (this also presupposes a higher level of technical knowledge and IT support). Providers have been known to pre-install the OS, but not in every case. Generally speaking, for smaller, rapid deployment, a container can be a smart choice; for larger, more complex, and scalable applications, VMs might be the better option.
While there are merits to each approach, two things strike me as self-evident:
• Entirely too many organizations don’t fully understand containerization
• Organizations need to understand it, especially because of the second self-evident truth: getting containerization wrong can have adverse consequences for cybersecurity
The shared environment is the culprit here: containers rely on one and VMs don’t, hence the opportunity, even the invitation, for breaches. So while there certainly are use cases for containers, organizations often fail to grasp which use cases are appropriate (e.g., production environments are not always optimal).
Note that while VMs share and virtualize the underlying hardware, Docker (that is, containers) virtualizes and shares the underlying operating system. That approach can enlarge the attack surface and create portability issues, since containers must be built to the standard of the host Docker environment.
Grasping these distinctions needs to be the first order of business for anyone considering containerization.
Securing container hosting servers is especially challenging. As a container management suite, Docker runs on top of an operating system and, in most cases today, the operating system runs on top of a hypervisor. With so many layers, it’s often difficult to create a secure system, and a significant opportunity exists for errors and misconfiguration. While the concept of containerization has been with us for a while, its burgeoning popularity, combined with the complexity of today’s applications, is changing how we look at container security, scalability and manageability. In this sense, the technology is both new and fragmented.
Given the variety of container operating system options, finding a stable and secure host is a non-trivial matter. With so many variables, teams typically focus on security within their own environment rather than taking a more global perspective. Of course, bigger players like Docker now maintain large security teams. Even so, we’re still encountering severe public exploits, most recently CVE-2019-5736, which allowed complete root access to the host system. Root access gives the attacker access to all containers and data hosted on that system, which is definitely a situation to avoid.
One other important note: because of these multiple layers, it’s now essential to consider the update process. Users are at the mercy of the host to update the container OS and container management tools, and properly configure isolated networks and data. Going back to exploits: while it might seem that only certain versions of the application are exploitable, the real question is, how often does the provider install patches and updates, and are applications up to date now? In a similar vein, scalability is theoretically possible, but it requires running containers on top of a hypervisor, inside a virtual machine. That in turn entails additional overhead and complexity when scaling the environment. And in terms of manageability, a special set of skills is often required to properly develop, deploy and manage a Docker image.
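To make the update concern concrete, here is a minimal Dockerfile sketch (the base image, tag and commands are illustrative examples, not taken from any specific deployment): everything an image installs is frozen at whatever patch level existed when the image was built, so containers must be rebuilt and redeployed, not merely restarted, to pick up security fixes.

```dockerfile
# Illustrative sketch only; the base image and tag are examples.
# Pinning a specific tag makes builds reproducible, but it also
# freezes the OS patch level at build time.
FROM ubuntu:22.04

# Security updates land only when the image is rebuilt; containers
# already running stay at the old patch level until redeployed.
RUN apt-get update && apt-get -y upgrade \
    && rm -rf /var/lib/apt/lists/*
```

This is why asking how often the provider rebuilds and redeploys images matters as much as asking which versions are nominally exploitable.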
Smaller startups often invest in container deployments, but as the application and teams grow, organizations tend to gravitate to IaaS deployments. Because engineers are ordinarily required to configure IaaS, these deployments frequently prove to be less complex than containers while being easier to secure, scale and manage. IaaS tools are much more mature, with substantial, broad-based development efforts behind them. Security is more advanced and customization is high. With IaaS, users are no longer locked into a single provider, as with containers, where moving from, say, Docker to a different platform might not be feasible.
Fully understanding containers is the best way to ensure that you don’t box yourself or your organization in.