Using GitLab CI/CD with a Kubernetes cluster
DETAILS: Tier: Free, Premium, Ultimate Offering: SaaS, self-managed
- Introduced in GitLab 14.1.
- The pre-configured variable `$KUBECONFIG` introduced in GitLab 14.2.
- Introduced the `ci_access` attribute in GitLab 14.3.
- The ability to authorize groups was introduced in GitLab 14.3.
- Moved to GitLab Free in 14.5.
- Support for Linux package installations was introduced in GitLab 14.5.
- The ability to switch between certificate-based clusters and agents was introduced in GitLab 14.9. The certificate-based cluster context is always called `gitlab-deploy`.
- Renamed from CI/CD tunnel to CI/CD workflow in GitLab 14.9.
You can use GitLab CI/CD to safely connect to, deploy to, and update your Kubernetes clusters.
To do so, install an agent in your cluster. When done, you have a Kubernetes context and can run Kubernetes API commands in your GitLab CI/CD pipeline.
To ensure access to your cluster is safe:
- Each agent has a separate context (`kubecontext`).
- Only the project where the agent is configured, and any additional projects you authorize, can access the agent in your cluster.
To use GitLab CI/CD to interact with your cluster, runners must be registered with GitLab. However, these runners do not have to be in the cluster where the agent is.
Use GitLab CI/CD with your cluster
To update a Kubernetes cluster with GitLab CI/CD:
- Ensure you have a working Kubernetes cluster and the manifests are in a GitLab project.
- In the same GitLab project, register and install the GitLab agent.
- Update your `.gitlab-ci.yml` file to select the agent's Kubernetes context and run the Kubernetes API commands.
- Run your pipeline to deploy to or update the cluster.
If you have multiple GitLab projects that contain Kubernetes manifests:
- Install the GitLab agent in its own project, or in one of the GitLab projects where you keep Kubernetes manifests.
- Authorize the agent to access your GitLab projects.
- Optional. For added security, use impersonation.
- Update your `.gitlab-ci.yml` file to select the agent's Kubernetes context and run the Kubernetes API commands.
- Run your pipeline to deploy to or update the cluster.
Authorize the agent
If you have multiple GitLab projects, you must authorize the agent to access the project where you keep your Kubernetes manifests. You can authorize the agent to access individual projects, or authorize a group or subgroup, so all projects within have access. For added security, you can also use impersonation.
Authorization configuration can take one or two minutes to propagate.
Authorize the agent to access your projects
- Introduced in GitLab 14.4.
- Changed to remove hierarchy restrictions in GitLab 15.6.
- Changed to allow authorizing projects in a user namespace in GitLab 15.7.
To authorize the agent to access the GitLab project where you keep Kubernetes manifests:
- On the left sidebar, select Search or go to and find the project that contains the agent configuration file (`config.yaml`).
- Edit the `config.yaml` file. Under the `ci_access` keyword, add the `projects` attribute.
- For the `id`, add the path to the project:

ci_access:
  projects:
    - id: path/to/project

- Authorized projects must have the same root group or user namespace as the agent's configuration project.
- You can install additional agents into the same cluster to accommodate additional hierarchies.
- You can authorize up to 100 projects.
All CI/CD jobs now include a `kubeconfig` file with contexts for every shared agent connection.
The `kubeconfig` path is available in the environment variable `$KUBECONFIG`.
Choose the context to run `kubectl` commands from your CI/CD scripts.
Authorize the agent to access projects in your groups
- Introduced in GitLab 14.3.
- Changed to remove hierarchy restrictions in GitLab 15.6.
To authorize the agent to access all of the GitLab projects in a group or subgroup:
- On the left sidebar, select Search or go to and find the project that contains the agent configuration file (`config.yaml`).
- Edit the `config.yaml` file. Under the `ci_access` keyword, add the `groups` attribute.
- For the `id`, add the path:

ci_access:
  groups:
    - id: path/to/group/subgroup

- Authorized groups must have the same root group as the agent's configuration project.
- You can install additional agents into the same cluster to accommodate additional hierarchies.
- All of the subgroups of an authorized group also have access to the same agent (without being specified individually).
- You can authorize up to 100 groups.
All the projects that belong to the group and its subgroups are now authorized to access the agent.
All CI/CD jobs now include a `kubeconfig` file with contexts for every shared agent connection.
The `kubeconfig` path is available in the environment variable `$KUBECONFIG`.
Choose the context to run `kubectl` commands from your CI/CD scripts.
Update your .gitlab-ci.yml file to run kubectl commands
In the project where you want to run Kubernetes commands, edit your project's `.gitlab-ci.yml` file.
In the first command under the `script` keyword, set your agent's context.
Use the format `<path/to/agent/project>:<agent-name>`. For example:
deploy:
image:
name: bitnami/kubectl:latest
entrypoint: ['']
script:
- kubectl config get-contexts
- kubectl config use-context path/to/agent/project:agent-name
- kubectl get pods
If you are not sure what your agent's context is, open a terminal and connect to your cluster.
Run `kubectl config get-contexts`.
Environments that use Auto DevOps
If Auto DevOps is enabled, you must define the CI/CD variable `KUBE_CONTEXT`.
Set the value of `KUBE_CONTEXT` to the context of the agent you want Auto DevOps to use:
deploy:
variables:
KUBE_CONTEXT: path/to/agent/project:agent-name
You can assign different agents to separate Auto DevOps jobs. For instance,
Auto DevOps can use one agent for `staging` jobs, and another agent for `production` jobs.
To use multiple agents, define an environment-scoped CI/CD variable for each agent. For example:
- Define two variables named `KUBE_CONTEXT`.
- For the first variable:
  - Set the `environment` to `staging`.
  - Set the value to the context of your staging agent.
- For the second variable:
  - Set the `environment` to `production`.
  - Set the value to the context of your production agent.
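
Environment-scoped variables are defined in the project's CI/CD settings rather than in the pipeline definition. As a rough illustration of the same idea outside Auto DevOps, the following sketch (with hypothetical agent names) selects a different agent context per job by setting the variable at the job level instead:

deploy_staging:
  stage: deploy
  environment: staging
  variables:
    KUBE_CONTEXT: path/to/agent/project:staging-agent      # hypothetical staging agent
  script:
    - kubectl config use-context "$KUBE_CONTEXT"
    - kubectl get pods

deploy_production:
  stage: deploy
  environment: production
  variables:
    KUBE_CONTEXT: path/to/agent/project:production-agent   # hypothetical production agent
  script:
    - kubectl config use-context "$KUBE_CONTEXT"
    - kubectl get pods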
Environments with both certificate-based and agent-based connections
When you deploy to an environment that has both a certificate-based cluster (deprecated) and an agent connection:
- The certificate-based cluster's context is called `gitlab-deploy`. This context is always selected by default.
- In GitLab 14.9 and later, agent contexts are included in `$KUBECONFIG`. You can select them by using `kubectl config use-context <path/to/agent/project>:<agent-name>`.
- In GitLab 14.8 and earlier, you can still use agent connections, but for environments that already have a certificate-based cluster, the agent connections are not included in `$KUBECONFIG`.
To use an agent connection when certificate-based connections are present, you can manually configure a new `kubectl` configuration context. For example:
deploy:
variables:
KUBE_CONTEXT: my-context # The name to use for the new context
AGENT_ID: 1234 # replace with your agent's numeric ID
K8S_PROXY_URL: https://<KAS_DOMAIN>/k8s-proxy/ # For agent server (KAS) deployed in Kubernetes cluster (for gitlab.com use kas.gitlab.com); replace with your URL
# K8S_PROXY_URL: https://<GITLAB_DOMAIN>/-/kubernetes-agent/k8s-proxy/ # For agent server (KAS) in Omnibus
# ... any other variables you have configured
before_script:
- kubectl config set-credentials agent:$AGENT_ID --token="ci:${AGENT_ID}:${CI_JOB_TOKEN}"
- kubectl config set-cluster gitlab --server="${K8S_PROXY_URL}"
- kubectl config set-context "$KUBE_CONTEXT" --cluster=gitlab --user="agent:${AGENT_ID}"
- kubectl config use-context "$KUBE_CONTEXT"
# ... rest of your job configuration
Environments with KAS that use self-signed certificates
If you use an environment with KAS and a self-signed certificate, you must configure your Kubernetes client to trust the certificate authority (CA) that signed your certificate.
To configure your client, do one of the following:
- Set a CI/CD variable `SSL_CERT_FILE` with the KAS certificate in PEM format.
- Configure the Kubernetes client with `--certificate-authority=$KAS_CERTIFICATE`, where `KAS_CERTIFICATE` is a CI/CD variable with the CA certificate of KAS (see the sketch after this list).
- Place the certificates in an appropriate location in the job container by updating the container image or mounting via the runner.
- Not recommended. Configure the Kubernetes client with `--insecure-skip-tls-verify=true`.
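
For example, a minimal sketch of the second option, assuming `path/to/agent/project:agent-name` is your agent context and `KAS_CERTIFICATE` is a file-type CI/CD variable (so its value is the path to the PEM file):

deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    # KAS_CERTIFICATE expands to the path of the CA certificate provided by the file-type variable.
    - kubectl config use-context path/to/agent/project:agent-name
    - kubectl --certificate-authority="$KAS_CERTIFICATE" get pods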
Restrict project and group access by using impersonation
DETAILS: Tier: Premium, Ultimate Offering: SaaS, self-managed
- Introduced in GitLab 14.5.
- Changed in GitLab 15.5 to add impersonation support for environment tiers.
By default, your CI/CD job inherits all the permissions from the service account used to install the agent in the cluster. To restrict access to your cluster, you can use impersonation.
To specify impersonations, use the `access_as` attribute in your agent configuration file and use Kubernetes RBAC rules to manage impersonated account permissions.
You can impersonate:
- The agent itself (default).
- The CI/CD job that accesses the cluster.
- A specific user or system account defined within the cluster.
Authorization configuration can take one or two minutes to propagate.
Impersonate the agent
The agent is impersonated by default. You don't need to do anything to impersonate it.
Impersonate the CI/CD job that accesses the cluster
To impersonate the CI/CD job that accesses the cluster, under the `access_as` key, add the `ci_job: {}` key-value.
When the agent makes the request to the actual Kubernetes API, it sets the impersonation credentials in the following way:
- `UserName` is set to `gitlab:ci_job:<job id>`. Example: `gitlab:ci_job:1074499489`.
- `Groups` is set to:

  - `gitlab:ci_job` to identify all requests coming from CI jobs.
  - The list of IDs of groups the project is in.
  - The project ID.
  - The slug and tier of the environment this job belongs to.

  Example: for a CI job in `group1/group1-1/project1` where:

  - Group `group1` has ID 23.
  - Group `group1/group1-1` has ID 25.
  - Project `group1/group1-1/project1` has ID 150.
  - Job running in the `prod` environment, which has the `production` environment tier.

  The group list would be `[gitlab:ci_job, gitlab:group:23, gitlab:group_env_tier:23:production, gitlab:group:25, gitlab:group_env_tier:25:production, gitlab:project:150, gitlab:project_env:150:prod, gitlab:project_env_tier:150:production]`.

- `Extra` carries extra information about the request. The following properties are set on the impersonated identity:
| Property | Description |
|---|---|
| `agent.gitlab.com/id` | Contains the agent ID. |
| `agent.gitlab.com/config_project_id` | Contains the agent's configuration project ID. |
| `agent.gitlab.com/project_id` | Contains the CI project ID. |
| `agent.gitlab.com/ci_pipeline_id` | Contains the CI pipeline ID. |
| `agent.gitlab.com/ci_job_id` | Contains the CI job ID. |
| `agent.gitlab.com/username` | Contains the username of the user the CI job is running as. |
| `agent.gitlab.com/environment_slug` | Contains the slug of the environment. Only set if running in an environment. |
| `agent.gitlab.com/environment_tier` | Contains the tier of the environment. Only set if running in an environment. |
Example `config.yaml` to restrict access by the CI/CD job's identity:
ci_access:
projects:
- id: path/to/project
access_as:
ci_job: {}
Example RBAC to restrict CI/CD jobs
The following `ClusterRoleBinding` resource restricts all CI/CD jobs to view rights only.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ci-job-view
roleRef:
name: view
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
subjects:
- name: gitlab:ci_job
kind: Group
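
When CI/CD job impersonation is enabled, the per-project and per-group entries in `Groups` let you scope permissions more narrowly than the cluster-wide binding above. The following sketch (assuming a project with ID 150 and a namespace named my-app, both hypothetical) grants edit rights in a single namespace only to CI/CD jobs from that project:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: project-150-edit
  namespace: my-app            # hypothetical namespace
roleRef:
  name: edit
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
subjects:
  - name: gitlab:project:150   # group set by the agent for jobs from project ID 150
    kind: Group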
Impersonate a static identity
For a given connection, you can use a static identity for the impersonation.
Under the `access_as` key, add the `impersonate` key to make the request using the provided identity.
The identity can be specified with the following keys:

- `username` (required)
- `uid`
- `groups`
- `extra`

See the official Kubernetes documentation for details.
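
For example, a minimal sketch of a `config.yaml` entry, assuming a cluster identity named ci-deployer in a deployers group (both hypothetical) that your RBAC rules already bind to the permissions you want:

ci_access:
  projects:
    - id: path/to/project
      access_as:
        impersonate:
          username: ci-deployer   # required; hypothetical identity
          groups:
            - deployers           # hypothetical group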
Restrict project and group access to specific environments
DETAILS: Tier: Free, Premium, Ultimate Offering: SaaS, self-managed
- Introduced in GitLab 15.7.
By default, if your agent is available to a project, all of the project's CI/CD jobs can use that agent.
To restrict access to the agent to only jobs with specific environments, add `environments` to `ci_access.projects` or `ci_access.groups`. For example:
ci_access:
projects:
- id: path/to/project-1
- id: path/to/project-2
environments:
- staging
- review/*
groups:
- id: path/to/group-1
environments:
- production
In this example:
- All CI/CD jobs under `project-1` can access the agent.
- CI/CD jobs under `project-2` with `staging` or `review/*` environments can access the agent.
  - `*` is a wildcard, so `review/*` matches all environments under `review`.
- CI/CD jobs for projects under `group-1` with `production` environments can access the agent.
Related topics
- Self-paced classroom workshop (Uses AWS EKS, but you can use it for other Kubernetes clusters)
- Configure Auto DevOps
Troubleshooting
Grant write permissions to ~/.kube/cache
Tools like `kubectl`, Helm, `kpt`, and `kustomize` cache information about the cluster in `~/.kube/cache`.
If this directory is not writable, the tool fetches information on each invocation, making interactions slower and creating unnecessary load on the cluster.
For the best experience, in the image you use in your `.gitlab-ci.yml` file, ensure this directory is writable.
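
If you cannot modify the image, a possible workaround (a sketch, not from the steps above) is to point `kubectl` at a writable cache location with its `--cache-dir` flag, for example inside the build workspace:

deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    # Use a cache directory that is writable in the job container.
    - kubectl --cache-dir="$CI_PROJECT_DIR/.kube-cache" config use-context path/to/agent/project:agent-name
    - kubectl --cache-dir="$CI_PROJECT_DIR/.kube-cache" get pods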
Enable TLS
If you are on a self-managed GitLab instance, ensure your instance is configured with Transport Layer Security (TLS).
If you attempt to use `kubectl` without TLS, you might get an error like:
$ kubectl get pods
error: You must be logged in to the server (the server has asked for the client to provide credentials)
Unable to connect to the server: certificate signed by unknown authority
If you use an environment with KAS and a self-signed certificate, your `kubectl` call might return this error:
kubectl get pods
Unable to connect to the server: x509: certificate signed by unknown authority
The error occurs because the job does not trust the certificate authority (CA) that signed the KAS certificate.
To resolve the issue, configure `kubectl` to trust the CA.
Validation errors
If you use `kubectl` versions v1.27.0 or v1.27.1, you might get the following error:
error: error validating "file.yml": error validating data: the server responded with the status code 426 but did not return more information; if you choose to ignore these errors, turn validation off with --validate=false
This issue is caused by a bug with `kubectl` and other tools that use the shared Kubernetes libraries.
To resolve the issue, use another version of `kubectl`.