Improving the security of your CI/CD through Shared Docker executor and OPA plugin
Hello, tekkix! This is the Platform security team, led by team lead Vladimir Bukin. Our team's main task is to protect CI/CD, in particular GitLab with K8s. Below, I will talk about how we implement, maintain, and improve our authorization plugin for the Docker socket.
It so happens in our industry that information security always lags behind the technologies adopted in IT: when a technology is introduced, risks emerge that nobody thought about during its development. For the information security world, Docker and K8s are still fairly new technologies. There is not much research on them, and there are still plenty of vulnerabilities (including undiscovered ones), which is exactly what makes working with them so interesting.
In the article, I want to tell you how we made our CI/CD processes more secure: in particular, about the shared Docker executor and the use of Open Policy Agent (OPA). I will share our rules for the OPA plugin, which can be reused in any company to secure your containers.
The article will be especially useful for information security engineers, DevOps engineers, architects, and CTOs, but developers will also find something interesting for themselves, I am sure.
Let's go!
Shared executor in CI/CD: types and risks
Let's figure out what we are talking about. GitLab is often used to build CI/CD, and we were no exception, going the same way. The pipeline is described in .gitlab-ci.yml, where you define the build instructions, the jobs. Jobs, in turn, are executed by GitLab Runner through one of its executors.
There are different types of executors:
Shell — all jobs are run within the OS on behalf of one user;
Docker — each job runs in a container on the Docker daemon. To work with Docker, the Docker API socket is passed into the job;
Kubernetes — each job runs in its own pod.
For example: to make the build environment predictable and reproducible, the build needs to be run in containers or virtual machines that are recreated after each build. In GitLab CI, this can be achieved by using the Docker executor for builds. The Docker executor runs a Docker container on the GitLab Runner, inside which the shell instructions described in GitLab CI are executed.
In GitLab, in addition to project-specific runners, there are shared ones: executors available to any team that has access to CI/CD. Such shared executors bring potential security threats into our processes. Let's discuss what can happen.
Accidentally or intentionally, one team can use them to attack another team's jobs. Or imagine that an employee of one team has been compromised. This is especially dangerous in a multitenant environment, where several teams "live" in one CI/CD and there is no trust between them.
Now let's look at the situations teams can find themselves in:
One team has access to another team's code.
Risks: there are cases when code is valuable and must not be disclosed to anyone, not even a neighboring team.
One team has access to other teams' artifacts and can overwrite them.
Risks: artifacts created in jobs are pushed to the company's registry; if one team can overwrite another team's artifact, production will not get what was expected.
A team has access to secrets used in other teams' jobs.
Risks: jobs contain secrets that are needed, for example, to push artifacts to the registry. If a secret is stolen, the attacker gains the overwrite capability from the previous point.
A team has elevated privileges on the executor.
Risks: this makes all of the above achievable: if an attacker manages to obtain elevated privileges (escape the container in Docker or Kubernetes, or, say, get root in the shell case), they will clearly have far more opportunities.
Next, let's look at these threats by executor types.
Shell
Code reading: possible
Access to artifacts: possible
Access to secrets: possible
Privilege escalation: generally not required
Consequences
When a job runs on the executor, the source code is checked out into a certain folder, and until it is deleted it can be read.
Since all jobs run in the OS as the same user, we have read permissions; and if there are write permissions, we can overwrite the artifacts built in that folder.
We can read secrets that may be present both in files and in the memory of running job processes: since those processes run as the same user as ours, we can read and analyze their memory and extract secrets from it.
Docker
Code reading: possible via Docker API socket
Access to artifacts: possible via Docker API socket
Access to secrets: possible via Docker API socket
Privilege escalation: possible via docker run --privileged
Consequences
The main problem with Docker is that its API socket gets passed into the job. Once that happens, we get a chain similar to the shell case above:
code reading via docker cp;
artifact push via docker cp in the other direction;
access to secrets via docker exec;
privilege escalation by launching a privileged container with docker run --privileged. Such a container runs with practically no restrictions on the host, which threatens us with unauthorized reading and modification of the host file system.
Kubernetes (*)
(*) — within the scope of this article, we consider only the dind (Docker in Docker) setup.
If we use dind, then we have all of the above and even more than on the Docker executor.
Consequences
By default, privileges can be escalated there, and if an attacker does this, it is not just the host that gets compromised but the entire cluster: the attacker breaks out onto the node and then escalates privileges within the cluster.
I would also like to note an unsafe nuance of using Docker in CI/CD: it has excessive functionality, being both a build tool and a runtime, whereas in CI/CD you only need the build tool, not the runtime.
There are other tools, such as kaniko and buildah: they build OCI images without a Docker daemon and do not let you run containers. But if you still really want Docker as an executor in CI/CD, you should take a look at the OPA plugin.
Open Policy Agent (OPA)
Open Policy Agent (OPA) is an open-source, general-purpose policy engine: a unified toolset and framework for writing and enforcing policy across the cloud-native stack.
One of the tools is a plugin that allows creating allow and deny rules based on the request sent to the Docker API socket. When I was looking for ready-made solutions for API request validation, I couldn't find anything suitable, so I had to write my own rules.
And here's what I came up with
First, let's talk about which operations we need to prohibit on the Docker daemon to make this setup more secure while the Docker API socket remains exposed in the shared Docker executor.
To do this, we need to prohibit (a combined policy sketch follows the list):
the ability to affect the Docker daemon itself (prohibit Docker daemon system calls);
Swarm;
the ability to create privileged (*) containers and, accordingly, exit Docker;
the ability to mount files/directories from the host OS;
the ability to read data in another container;
the ability to inject into another container.
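To show how these prohibitions fit together, here is a simplified Rego sketch of a combined policy (my own illustration rather than our production rules): a single allow rule that requires every check to pass, with each check reduced to one representative condition. Fuller variants of several of these rules appear later in the article.
package docker.authz

# Allow the request only if none of the prohibited conditions hold.
allow {
    not affects_daemon
    not privileged
    not host_mount
    not touches_other_container
}

# Daemon-level endpoints: plugins, swarm, volumes.
affects_daemon {
    blocked := {"plugins", "swarm", "volumes"}
    blocked[input.PathArr[_]]
}

# Privileged containers and added capabilities.
privileged {
    input.Body.HostConfig.Privileged == true
}

privileged {
    input.Body.HostConfig.CapAdd[_]
}

# Host bind mounts (-v).
host_mount {
    input.Body.HostConfig.Binds[_]
}

# Reaching into other containers (docker exec, docker cp).
touches_other_container {
    ops := {"exec", "archive"}
    ops[input.PathArr[_]]
}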
What does the OPA plugin work with?
The plugin receives a parsed HTTP request with the following fields (an illustrative example follows the list):
Method
Path
Query
Headers
Body
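For illustration, here is roughly what such a parsed request could look like for a container-create call, written as a Rego value. The field names follow the rules shown later in this article; the concrete values, the shape of Query and Headers, and whether PathArr includes the API version segment are assumptions and may differ between plugin versions.
package docker.authz

# Illustrative input document for a `docker run` (container create) request.
# The values here are made up; the real plugin fills these fields from the
# incoming API call.
example_input := {
    "Method": "POST",
    "Path": "/v1.43/containers/create",
    "PathArr": ["containers", "create"],
    "Query": {"name": "build-job"},
    "Headers": {"Content-Type": "application/json"},
    "Body": {
        "Image": "alpine:3.20",
        "HostConfig": {"Privileged": false, "PidMode": ""}
    }
}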
With that in mind, let's go step by step.
1. OPA — read Docker documentation
To understand how to write Rego rules that prohibit everything we do not need, let's turn to the official Docker API documentation. The key point is that no API endpoint should be overlooked; otherwise, it may be possible to get through it.
2. OPA — impact on the daemon
Next come the Docker commands you have probably seen before. The first thing we prohibit is docker plugin, because with this command the OPA plugin itself can simply be disabled.
Hint! This is done as follows:
get the list of enabled plugins with docker plugin ls --filter enabled=true;
disable OPA (or any other plugin) with docker plugin disable --force.
Next, we prohibit docker swarm and, just in case, docker volumes.
As for running containers, we must prohibit the following options (a sketch for the volume-mount case follows the list):
--privileged (privileged launch);
--cap-add (launch with capability);
--ipc (IPC namespace);
--pid (PID namespace);
--network (Network namespace);
-v (mounting volume);
--cgroup-parent;
--device (connecting device);
--security-opt apparmor/seccomp (disabling apparmor/seccomp).
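As an example for the -v item above, here is a minimal Rego sketch (an illustration, not our exact production rule, assuming the HostConfig fields of the container-create request body) that denies creating a container with host bind mounts:
package docker.authz

# Deny container creation when the request asks for host bind mounts.
allow {
    not host_mount
}

# Classic -v / --volume bind mounts arrive in HostConfig.Binds...
host_mount {
    input.Body.HostConfig.Binds[_]
}

# ...while --mount type=bind arrives in HostConfig.Mounts.
host_mount {
    input.Body.HostConfig.Mounts[_].Type == "bind"
}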
3. OPA — restrictions on working with other containers
Regarding working with other containers, we can prohibit the following (a sketch for docker cp follows the list):
docker exec;
docker cp;
docker stop/kill/pause/restart;
docker update;
docker attach/logs;
docker commit/checkpoint.
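For example, docker cp talks to the /containers/{id}/archive endpoint and docker commit to /commit, so both can be blocked by path, in the same style as the exec rule shown further below. This is a sketch rather than our exact production rule; docker stop and docker logs are deliberately not included, since they are treated as exceptions below.
package docker.authz

# Deny requests that reach into other containers via the copy and commit APIs.
allow {
    not touches_other_container
}

touches_other_container {
    blocked := {"archive", "commit"}
    blocked[input.PathArr[_]]
}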
Creating a Rego rule
This rule prohibits passing devices into a container when it is created: the request is allowed only if HostConfig.Devices in the request body is absent, null, or empty.
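The original listing is not reproduced here, but a Rego rule matching that description could look roughly like this:
package docker.authz

# Allow container creation only when no devices are requested.
allow {
    not devices
}

# True only when HostConfig.Devices contains at least one entry; undefined
# (and therefore harmless) when the field is absent, null, or empty.
devices {
    input.Body.HostConfig.Devices[_]
}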
OPA — exceptions
-p — ports on the host;
docker stop (from a threat model perspective, it only affects the availability of the runner, the code will not leak, but CI/CD may stop);
docker attach/logs (reading logs is allowed, but this is acceptable if there is an additional check for the absence of confidential information in the logs).
OPA — admin password for bypass
If the admin needs to do something with Docker, they can add a specific header that will be checked by the Rego rule. When using the special admin header, no other checks will be applied.
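A sketch of what such a bypass could look like; the header name and the way the expected value is supplied are assumptions, not the actual implementation:
package docker.authz

# Break-glass rule: if the request carries the admin header (the name here is
# an example) with the expected value, allow it regardless of other checks.
allow {
    input.Headers["X-Admin-Token"] == admin_token
}

# In practice the expected value should come from OPA's data document or an
# external secret, not be hard-coded in the policy.
admin_token := "change-me"
Because allow is defined incrementally (multiple bodies), this body being true is enough to allow the request even when the other checks would deny it.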
Now let's illustrate the above with examples:
PID ns restriction
package docker.authz

allow {
    not pid
}

# only string values "" are allowed
pid {
    not is_string(input.Body.HostConfig.PidMode)
}

pid {
    input.Body.HostConfig.PidMode != ""
}
seccomp_apparmor restriction
package docker.authz

allow {
    not seccomp_apparmor_unconfined
}

seccomp_apparmor_unconfined {
    contains(input.Body.HostConfig.SecurityOpt[_], "unconfined")
}
exec restriction
package docker.authz

allow {
    not exec
}

exec {
    val := input.PathArr[_]
    val == "exec"
}
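A convenient way to sanity-check rules like these is Rego unit tests, run with opa test. A minimal sketch for the exec rule above (the sample input only fills in the field the rule actually reads):
package docker.authz

# The exec rule should match a request to /containers/<id>/exec...
test_exec_rule_matches {
    exec with input as {"PathArr": ["containers", "abc123", "exec"]}
}

# ...and should ignore an unrelated request.
test_exec_rule_ignores_other_paths {
    not exec with input as {"PathArr": ["containers", "abc123", "logs"]}
}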
Conclusions
Using the Docker executor in CI/CD processes carries significant risks, especially in multi-user, multitenant environments where isolation between teams is important.
The high level of privileges that Docker provides creates many points of vulnerability. Although there are alternatives such as kaniko and buildah that strip away Docker's excess functionality, Docker remains in high demand.
Therefore, if the choice in favor of the Docker executor is inevitable, it is extremely important to secure it. Using Open Policy Agent (OPA) with well-thought-out Rego rules allows you to create the necessary level of protection, limiting access to critical functions and preventing unwanted actions.
By implementing OPA, many potential threats can be avoided while maintaining Docker's flexibility in CI/CD.
And the rules we have developed for the OPA plugin will help on the way to implementation.
THE END
P.S. The one who prompted our team to turn to OPA was Pasha Sorokin. Special thanks to him for that!