Jānis Orlovs

Running Containerized Tests in Kubernetes with Jenkins and TestContainers

Testcontainers (testcontainers.com) is a popular open-source framework for integration testing that provides lightweight, disposable instances of services such as databases, message brokers, and web browsers in Docker containers. Integrating it with CI environments on Kubernetes can be challenging, however, because of its reliance on the Docker API.
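For context, a typical Gradle setup for a Java project using Testcontainers looks roughly like the following sketch (artifact versions are illustrative, not prescriptive):

```groovy
// build.gradle (illustrative versions)
dependencies {
    testImplementation 'org.testcontainers:testcontainers:1.19.3'
    testImplementation 'org.testcontainers:junit-jupiter:1.19.3'
    testImplementation 'org.testcontainers:postgresql:1.19.3'
    testImplementation 'org.junit.jupiter:junit-jupiter:5.10.1'
}

test {
    useJUnitPlatform()
}
```

The `./gradlew build` step used in the pipelines below would then run these tests, which in turn need a reachable Docker API — the problem this article addresses.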




Objective


This article demonstrates two methods we have used at CWISE to run Testcontainers tests in CI on Kubernetes.


Pre 1.24 Kubernetes: Docker in Docker

Context: Before Kubernetes 1.24, it was feasible to use a Docker in Docker (DinD) style setup by mounting the node's Docker socket, allowing direct use of the Docker API without involving the kubelet.

Prerequisites:

  • Kubernetes version earlier than 1.24.

  • Nodes must use Docker as their container runtime.



Jenkins Pipeline Setup with DIND


Kubernetes Setup


To run in a DinD setup, an additional mount for the node's Docker socket is needed (make sure this is permitted by RBAC, by any mandatory access control tooling such as SELinux or AppArmor, and by your seccomp profile):

apiVersion: v1
kind: Pod
metadata:
  name: java-docker-build-pod
spec:
  volumes:
    # Expose the node's Docker socket to the build containers
    - name: socket
      hostPath:
        path: /var/run/docker.sock
  containers:
    - name: openjdk   # runs the Gradle build and the Testcontainers tests
      image: openjdk:11
      volumeMounts:
        - name: socket
          mountPath: /var/run/docker.sock
    - name: docker    # provides the docker CLI against the same socket
      image: docker
      volumeMounts:
        - name: socket
          mountPath: /var/run/docker.sock
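On pre-1.24 clusters the hostPath mount above was typically gated by a PodSecurityPolicy (PSP was removed in Kubernetes 1.25). A sketch of a policy that permits only the Docker socket path might look like this — the policy name is illustrative, and your cluster's admission setup may differ:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: docker-socket-builds   # illustrative name
spec:
  privileged: false
  allowedHostPaths:
    - pathPrefix: /var/run/docker.sock   # allow only the Docker socket
  volumes:
    - hostPath
    - emptyDir
    - secret
    - configMap
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny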

Jenkins Pipeline code

pipeline {
    agent {
        kubernetes {
            yamlFile 'path/to/pod.yaml'
        }
    }
    stages {
        stage('Build') {
            steps {
                container('openjdk') {
                    sh './gradlew build'
                }
            }
        }
    }
}

With this setup, builds can use Testcontainers on top of Kubernetes, with the containers running on the node's Docker daemon.


1.24 and Later Kubernetes: Kubedock

Kubernetes 1.24 removed the dockershim, so Docker is no longer available as the node container runtime and the DinD approach above stops working. We therefore switched to Kubedock, a minimal implementation of the Docker API that orchestrates the requested containers as pods on a Kubernetes cluster.



Jenkins Pipeline Setup with Kubedock


Kubernetes Setup

Grant RBAC permissions in the namespace so Kubedock can manage the resources it creates:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubedock
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "get", "list", "delete", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["create", "get", "list", "delete"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create", "get", "list", "delete"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["create", "get", "update"]

Assign the role to the Jenkins service account (assuming the namespace is dev and the service account used in pipelines is named jenkins):


apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: testcontainers-rolebinding
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubedock
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: dev
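If the jenkins service account does not already exist in the namespace, it can be created alongside the binding:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: dev
```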

Pod.yaml setup

apiVersion: v1
kind: Pod
metadata:
  name: java-docker-build-pod
spec:
  containers:
    - name: openjdk   # runs the Gradle build and the Testcontainers tests
      image: openjdk:11
      tty: true
    - name: kubedock  # serves the Docker API on port 2475 inside the pod
      image: joyrex2001/kubedock:latest
      command:
        - /usr/local/bin/kubedock
      args:
        - server
        - --reverse-proxy
        - --timeout=2m
      tty: true

Jenkins Pipeline code


pipeline {
    agent {
        kubernetes {
            yamlFile 'path/to/pod.yaml'
        }
    }
    environment {
        TESTCONTAINERS_RYUK_DISABLED = "true"
        TESTCONTAINERS_CHECKS_DISABLE = "true"
        DOCKER_HOST = "tcp://127.0.0.1:2475"
    }
    stages {
        stage('Build') {
            steps {
                container('openjdk') {
                    sh './gradlew build'
                }
            }
        }
    }
}
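As an alternative to pipeline environment variables, Testcontainers can read equivalent settings from a ~/.testcontainers.properties file in the build container's home directory. A sketch of the same configuration in that form (the property names correspond to the environment variables above):

```properties
# ~/.testcontainers.properties
docker.host=tcp://127.0.0.1:2475
ryuk.disabled=true
checks.disable=true
```

Ryuk (the Testcontainers cleanup container) is disabled here because it normally requires a privileged container with access to the Docker socket; with Kubedock, cleanup is instead handled by its --timeout setting.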

Conclusion

This article has shown how to run Testcontainers integration tests in a CI environment on Kubernetes, with solutions for clusters both before and after Kubernetes 1.24.


