The Dangers of Cattle in a Containerized Environment
The growing popularity of containerized, microservices environments has arguably pushed us into a new age of computing. We once set up physical machines in physical data centers and configured them individually, at great time and expense. From that age we moved into the age of virtualization, which let us copy, clone, and spin up virtual machines with a few clicks. Docker, Kubernetes, and containerization are the next step in that evolution. With the help of containerization, applications can be split into even smaller microservices, distributing workload and creating modular applications that bring many advantages in operations, cost, and maintenance.
One of those advantages has been summed up as treating our servers as “cattle” instead of “pets.” In the old age, servers were set up and cared for at great expense, much like a house pet. When they developed issues, we debugged and fixed them because replacing them was so costly. In today’s containerized, DevOps, infrastructure-as-code age, we simply treat servers as cattle: at the first sign of serious trouble, it is much faster and more effective to destroy the container and spin up a new one in its place.
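In Kubernetes terms, the cattle approach is built into the platform: a Deployment declares a desired number of identical replicas, and the control plane replaces any pod that dies rather than attempting to repair it. A minimal sketch (the names and image here are illustrative, not from any real environment):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # hypothetical workload name
spec:
  replicas: 3                    # three interchangeable "cattle" pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myorg/web:1.4.2   # pinned tag: pods are disposable, images are not
```

Deleting a misbehaving pod (`kubectl delete pod <name>`) simply prompts the Deployment controller to schedule a fresh replacement, which is exactly the destroy-and-replace reflex described above.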
The main advantage of this shift to the “cattle” methodology is speed. When the goal is to reduce downtime and restore a service, why not take the path of least resistance? What teams need to take into account is the danger that speed can bring to an environment. Many security breaches in corporate environments have less to do with the technical vulnerabilities exploited than with the policies and processes that were not followed. The fear of downtime and lost revenue can lead network and operations teams to act before they think, introducing issues that surface later. A quickly deployed, misconfigured server or network is often a greater exposure than any finding reported by a vulnerability scanner.
When an operations or network team treats its servers like cattle, it can move faster than it should, destroying and deploying services faster than ever before, and policies and processes can be left behind in the rush to fix the problem. In the old age of physical servers and data centers this was less of a problem: deployments took anywhere from hours to days, knowledge of them spread in meetings, and security engineers could intervene and apply best practices. In a containerized DevOps environment, a deployment can come and go during a lunch break. Strong policies and procedures are needed to make sure catastrophic misconfigurations or vulnerabilities are not deployed to the environment.
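One way to keep policy from being bypassed at lunch-break speed is to automate it: a pre-deploy gate that rejects manifests containing settings commonly flagged as misconfigurations. The sketch below is illustrative only; the rule list is a tiny, assumed subset of what a real policy engine would check, and the field names follow the Kubernetes pod-spec layout.

```python
# Hypothetical pre-deploy policy gate: scan a parsed pod manifest for a few
# well-known risky settings before it reaches the cluster. Illustrative, not
# exhaustive -- a real gate would enforce a much larger rule set.

def find_violations(manifest: dict) -> list:
    """Return human-readable policy violations found in a pod manifest."""
    violations = []
    spec = manifest.get("spec", {})
    if spec.get("hostNetwork"):
        violations.append("pod shares the host network namespace")
    for container in spec.get("containers", []):
        name = container.get("name", "<unnamed>")
        if container.get("securityContext", {}).get("privileged"):
            violations.append(f"container '{name}' runs privileged")
        image = container.get("image", "")
        if ":" not in image or image.endswith(":latest"):
            violations.append(f"container '{name}' uses an unpinned image tag")
    return violations

# Example: a hastily written pod spec that would sail through without a gate.
risky_pod = {
    "kind": "Pod",
    "spec": {
        "hostNetwork": True,
        "containers": [
            {
                "name": "web",
                "image": "myorg/web",  # no tag pinned
                "securityContext": {"privileged": True},
            },
        ],
    },
}

for violation in find_violations(risky_pod):
    print("BLOCKED:", violation)
```

Wired into a CI pipeline, a check like this runs in seconds, so it adds policy back into the process without giving up the speed that makes the cattle methodology attractive.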
Amplifying the danger of misconfiguration, a containerized environment can be very different from a traditional one and brings a whole new layer of complexity. The growth of containerization has introduced many changes and improvements that are easily missed by anyone not staying on top of them. Frequent and thorough auditing of containerized environments is essential to a strong security posture. Some of the best work in this area has come from the Azure Security Team, which published its own Threat Matrix for Kubernetes. Modeled on the MITRE ATT&CK framework, the matrix documents the many Docker- and Kubernetes-centric attacks at each phase of the attack life cycle.
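An audit becomes far more actionable when each finding is tagged with the tactic it maps to in a framework like the threat matrix. The sketch below assumes a simplified input shape (each record merges a container spec with its pod's volumes, purely for illustration), and the tactic names follow the matrix's ATT&CK-style columns; the rule list is a small assumed subset.

```python
# Sketch of an audit pass that tags findings with the ATT&CK-style tactic
# they relate to. Rules and record shape are illustrative assumptions.

AUDIT_RULES = [
    # (tactic, description, predicate over a simplified workload record)
    ("Privilege Escalation", "privileged container",
     lambda c: c.get("securityContext", {}).get("privileged", False)),
    ("Privilege Escalation", "hostPath volume mounted",
     lambda c: any("hostPath" in v for v in c.get("volumes", []))),
    ("Credential Access", "secrets passed via environment variables",
     lambda c: any("secretKeyRef" in str(e) for e in c.get("env", []))),
]

def audit(workloads):
    """Yield (workload name, tactic, description) for each matching rule."""
    for record in workloads:
        for tactic, description, predicate in AUDIT_RULES:
            if predicate(record):
                yield (record.get("name", "<unnamed>"), tactic, description)

# Example input, shaped like a simplified dump of running workloads.
snapshot = [
    {"name": "logger",
     "securityContext": {"privileged": True},
     "volumes": [{"hostPath": {"path": "/var/log"}}]},
    {"name": "api",
     "env": [{"name": "DB_PASS",
              "valueFrom": {"secretKeyRef": {"name": "db"}}}]},
]

for name, tactic, description in audit(snapshot):
    print(f"{name}: [{tactic}] {description}")
```

Grouping findings by tactic this way turns a flat scanner report into a view that lines up directly with the matrix, making it easier to see which phases of the attack life cycle an environment is exposed to.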
Anyone working, or considering working, in a Docker or containerized environment should be familiar with this threat matrix. It is the clearest illustration of the extra layer of complexity that comes with Docker and Kubernetes: a layer on top of the standard security practices that must be followed in any environment.