Jenkins has long been a staple in CI/CD, but as DevOps evolves, is it still the right tool for the job?
Jenkins is the oldest and probably still the most widely used continuous integration server. Despite this reach, it has been falling out of favor in recent years, although it is still so entrenched that it will take years before it disappears from the scene. In this blog post, I explain why Jenkins is so widespread, and why we will hopefully not see much use of it in the future.
Jenkins can be used very flexibly. However, what used to be seen as a strength of the tool is now seen as a weakness. There are many ways to solve a problem in Jenkins, which means that every Jenkins installation and every setup can end up looking completely different.
Jenkins can be installed quickly. Jenkins is written in Java and therefore runs on many different kinds of host systems. In most cases, Linux servers are used, or Jenkins itself is started in a container. But while it can be quickly and easily installed, maintaining the infrastructure and creating the pipelines is a much more difficult task.
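For example, an evaluation instance can be started with a single command using the official container image; the port and volume choices below are just common defaults:

docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts

Port 8080 serves the web UI, and port 50000 is the default port for inbound agent connections.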
Architecture and Scaling
Jenkins consists of two components: The controller manages the pipeline, and the agents connect to it to execute the pipeline jobs.
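To sketch how the two components relate, here is how a containerized agent can be connected to an existing controller using the official inbound agent image; the URL, secret, and agent name are placeholders:

# the secret is generated by the controller when the agent node is created
docker run -d --init \
  -e JENKINS_URL=https://jenkins.example.com/ \
  -e JENKINS_SECRET=<secret-from-controller> \
  -e JENKINS_AGENT_NAME=agent-1 \
  jenkins/inbound-agent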
The controller and the agents used to be called the “master” and the “slaves.” This terminology is no longer officially used due to its racist history. However, these terms can still be found in many places, so it is important to recognize their meaning.
With this architecture, it can be tempting to create a central controller for all teams, on which the different pipelines are then executed. Doing so would not be a good idea, because it would mean that all dependencies, plug-ins, and so on would have to be installed on this one controller. First, this would result in a very large Jenkins controller with many pipelines and plug-ins, which cannot be easily maintained without a separate team. If several projects have to share a Jenkins controller, the settings and plug-ins almost always conflict with each other, and problems arise if many people need to access the controller at the same time. Second, many complicated pipelines would place a heavy load on the Jenkins controller. This heavy load could cause even simple pipelines to be processed at a snail’s pace, causing frustration.
These drawbacks are why companies usually set up many smaller Jenkins controllers, such as one for each team or department. However, this approach results in the problem that all the independent instances cannot be managed centrally. Administration becomes difficult and characterized by manual work, which defeats the DevOps goal of using automation wherever possible.
The Jenkins Configuration as Code plug-in provides a remedy. It allows the Jenkins controller to be defined and configured in YAML code, which previously required many clicks in the UI. Using this plug-in, there can be several YAML files for the various instances within an organization, each containing the relevant configuration, such as the necessary passwords and the plug-ins used. Ideally, the Jenkins controller should be containerized so that a new image is built and deployed to the container host with every update.
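Here is a minimal sketch of what such a YAML file can look like; the names are invented, the secrets come from environment variables, and the exact keys available depend on the installed plug-ins:

jenkins:
  systemMessage: "Team A controller, managed as code"
  # run no builds on the controller itself
  numExecutors: 0
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: "admin"
          password: "${ADMIN_PASSWORD}"
credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              scope: GLOBAL
              id: "git-access"
              username: "ci-bot"
              password: "${GIT_TOKEN}"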
Defining the configuration in code also has the advantage that the description of the pipeline can be saved with the actual project, making it versioned and portable! This is an important point for the reproducibility of the build. If pipelines are configured manually instead, the definition is saved in the Jenkins controller itself, which means the pipeline is not defined as code, so there is no versioning. (In practice, I often see that the definitions are saved and versioned separately from the actual project, which is also not helpful.)
If you are working with such outdated pipelines, you should take steps to modernize them. Doing so will increase visibility and improve maintainability, which will make subsequent DevOps-related moves much easier to make.
But even if much can be automated, the engineering effort remains high when many small installations have to be maintained and managed. In the event of problems and updates, a maintenance window must be found and communicated for each instance, and every automated change must be rolled out in a controlled manner so that problems can be detected and rectified.
In addition to the controller, the agents also need to be managed. As usual, Jenkins offers many different options for installing, managing, and scaling agents. Nowadays, you should rely on containerized agents that automatically scale up and down depending on the current workload.
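With the Kubernetes plug-in, for example, such elastic agents can be declared in the same Configuration as Code file. The following is a rough sketch with invented names and URLs; the exact schema depends on the plug-in version:

jenkins:
  clouds:
    - kubernetes:
        name: "kubernetes"
        serverUrl: "https://k8s.example.com"
        # upper limit of concurrently running agent pods
        containerCapStr: "10"
        templates:
          - name: "maven-agent"
            label: "maven"
            containers:
              - name: "maven"
                image: "maven:3.8.1-openjdk-11"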
Credentials and Permissions Management
Another problem with the maintenance and administration of Jenkins is credentials and permissions management. Because Jenkins is the central hub of automation, credentials and permissions for the entire DevOps toolchain need to be managed here. All other tools must be connected to Jenkins: the Git server, various tools for QA, security tools, deployment tools, and so on.
Credentials, such as passwords, tokens, and other methods for access control, must be stored and configured for each tool. The Jenkins server is therefore a central attack target in your infrastructure and must be secured accordingly. You absolutely need a well-thought-out system of restrictions and should regulate access as strictly as possible to prevent misuse.
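Stored credentials are then only referenced by their ID in the pipelines, so the secrets themselves never appear in the code. Here is a minimal sketch using the Credentials Binding plug-in; the credential ID and the deployment script are invented:

pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // the secret is injected as an environment variable
                // for the duration of this block only
                withCredentials([string(credentialsId: 'deploy-token', variable: 'TOKEN')]) {
                    sh './deploy.sh --token "$TOKEN"'
                }
            }
        }
    }
}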
In addition to credentials, you also need to consider permissions management in Jenkins. Permissions are closely related to credentials, as they also restrict who can and cannot take certain actions. It is usually necessary (and sensible) to duplicate the configuration of the Git server: whoever is allowed to check in and change code is also allowed to start and edit the build. This means that you have to maintain these permissions in two places, which involves a certain amount of effort and increases the complexity of the setup. Permissions can be managed in Jenkins for individual users and user groups; role-based strategies are also possible through a plug-in.
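These permissions can also be captured in the Configuration as Code file. The following is a rough sketch using the Matrix Authorization Strategy plug-in; the user and group names are invented, and the exact schema differs between plug-in versions:

jenkins:
  authorizationStrategy:
    globalMatrix:
      permissions:
        # format: "<permission>:<user or group>"
        - "Overall/Administer:admin"
        - "Overall/Read:authenticated"
        - "Job/Build:dev-team"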
Like all software, Jenkins has security vulnerabilities, and new ones are discovered from time to time, both in Jenkins itself and in its plug-ins.
Updates usually require Jenkins to be restarted, which can be quite disruptive, as it can be difficult to find a suitable time for a restart if all pipelines are located on a central server. And if there are a large number of independent instances, manual work is required to complete a restart. These difficulties mean that teams often postpone updating Jenkins, which is not good in terms of security.
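Jenkins does at least offer a gentler variant: a safe restart waits for running builds to finish before restarting. It can be triggered via the CLI, for example (the URL is a placeholder):

java -jar jenkins-cli.jar -s https://jenkins.example.com/ safe-restart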
Managing Jenkins securely and sustainably is quite complex on the infrastructure side alone and requires a team of its own, one that needs to be quite large in bigger companies.
Limited Range of Functions and Plug-in Hell
The basic scope of Jenkins is quite limited. Almost all functions can be used only if plug-ins are installed. Even important core functions are mapped to plug-ins; for example, the Git plug-in is needed to work with Git repositories. You can find a plug-in for almost any task, and if there isn’t yet one available for a given task, you can also develop a plug-in yourself. This allows you to move complexity from the pipeline into a plug-in.
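To illustrate how far this goes: even the simple git step in a Jenkinsfile only exists because the Git plug-in provides it. The repository URL and branch below are invented:

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // this step is provided by the Git plug-in, not by the Jenkins core
                git url: 'https://git.example.com/team/app.git', branch: 'main'
            }
        }
    }
}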
The problem, however, is that even more effort is required to ensure that the different plug-ins fit together. Jenkins is notorious for compatibility problems, because while the core plug-ins are developed and maintained by the Jenkins team, additional plug-ins are the responsibility of the community. Some plug-ins are actively maintained, and others are not.
This can mean that you become dependent on plug-ins that are deeply wired into your workflow and DevOps toolchain but are no longer maintained and, therefore, slow down updates of the entire system. In short, all Jenkins plug-ins that you use become part of your software supply chain and need to be maintained accordingly. And if you use many Jenkins plug-ins, this may not be possible with reasonable effort.
The prevalence of plug-ins applies not only to Jenkins but also to many similar tools. A marketplace offering plug-ins and templates seems to have become almost a flagship feature of many platforms. Be very careful with such marketplaces! By using plug-ins, you are not only making yourself functionally dependent on third-party offerings, but, in the worst case, you could also be introducing nasty security problems into your system.
In addition to security problems, using a large number of plug-ins can lead to a loss of overview. This is especially true if different plug-ins are used for the same tasks in different pipelines—developers cannot simply jump back and forth between projects, as they would first have to understand the foundation on which each set of pipelines was built.
Clarity
A continuous integration server should ensure that important information is easy to see; after all, visibility of the project status is one of the main points of continuous integration.
Specifically, it should be quick and easy to see in the server whether the project is building properly and whether all tests are running successfully. However, Jenkins provides too many different ways of presenting this information. The cleanest approach would be to map the hierarchical structure of the source code management platform in Jenkins, which would make it easy to find your way around. Unfortunately, Jenkins does not enforce such a mapping; it does not even offer it as a default, as Jenkins exists separately from the source code repository.
The connection between Jenkins and the repository must first be established, which can be done—you may have guessed it—only via a plug-in. (Previously, this connection had to be established completely manually, meaning that a separate pipeline had to be configured for each branch and for each project. As this was tedious, it was often not done, which is why pipelines were rarely executed for short-lived branches.)
To map the information from the source code management platform to Jenkins, you need the Branch Source plug-in for GitHub, GitLab, or Bitbucket. With this plug-in, projects are automatically created in Jenkins that correspond to the structure of the integrated repositories. To do this, each branch is searched for a Jenkinsfile in which the pipeline is defined. (A single Jenkinsfile is the convention, but it is only one of theoretically many conventions that could be implemented.) If a branch disappears, the corresponding job also disappears. This uniform structure makes it easy to find your way around, both in your own project and in the projects of other teams.
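Such a multibranch project can itself be defined as code, for example with the Job DSL plug-in. The following is a rough sketch with an invented organization, repository, and credential ID; the exact DSL depends on the branch source plug-in used:

multibranchPipelineJob('my-app') {
    branchSources {
        github {
            id('my-app')
            repoOwner('example-org')
            repository('my-app')
            scanCredentialsId('github-token')
        }
    }
    // immediately remove jobs whose branches have disappeared
    orphanedItemStrategy {
        discardOldItems {
            numToKeep(0)
        }
    }
}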
The main sticking point is, of course, that many Jenkins instances are not set up according to best practices, and there is often a lot of uncontrolled growth. If you use GitHub Actions or GitLab CI/CD instead, the functionality of Branch Source is a built-in feature that works without configuration, as the repositories and the continuous integration engine are directly linked.
Let’s draw a conclusion: Is it better to be able to add all conceivable functions in a modular way via plug-ins, or to rely on a monolithic one-size-fits-all solution? Both approaches have their advantages; the answer probably lies somewhere in the middle. In my opinion, the CI/CD platform should offer a certain basic set of functions that you can absolutely rely on and that are maintained by the core team. Custom extensions for details and special use cases work fine, but processes for accessing repositories and building the software must be smooth and without a lot of manual work.
However, in my experience with Jenkins, the tool does not offer this basic set of functions; far too much is outsourced to plug-ins. Their quality is inconsistent; some are good while others are mediocre. In addition, the many customizations decrease user-friendliness and increase the learning curve; no one can really claim to have mastered Jenkins because no two setups are the same. You have to be careful not to use too many different plug-ins and functions, which can make administration even more complicated. This often means that there is a dedicated Jenkins team responsible for configuration, which is something you want to avoid.
Complexity in the Construction of Pipelines
In Jenkins, pipelines can be configured and managed in many different ways. This makes administration flexible but also complicated, as each team can choose its own approach.
The oldest variant is to click the pipeline configuration together in the web interface and store the shell scripts and other integrations there. This amounts to a manually configured pipeline and should never be done, as the resulting build would not be reproducible.
Instead, the pipelines must be created in a file, namely in the already mentioned Jenkinsfile. There are two different types of pipelines: scripted pipelines and declarative pipelines. Jenkins naturally supports both approaches, and they can also be mixed, which makes pipeline construction very flexible. Though this flexibility is convenient, it is also the biggest problem with Jenkins when it comes to defining pipelines: each pipeline can be as complex as you like.
For scripted pipelines, Jenkins uses the Groovy language. Scripted pipelines written in Groovy are executed in a Java virtual machine, and typical programming language elements such as loops and functions can be used to write them.
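A minimal sketch of a scripted pipeline (the module names are invented) shows the difference from the declarative style; ordinary Groovy constructs such as variables and loops are available directly:

node {
    stage('Build') {
        // plain Groovy: define a list and loop over it
        def modules = ['core', 'api', 'web']
        for (m in modules) {
            sh "mvn -pl ${m} install -DskipTests"
        }
    }
}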
Declarative pipelines can be used in Jenkins to avoid the creation of multiple pipelines that all look different. Declarative pipelines can look relatively similar across all instances, which simplifies maintenance and makes it easier for developers to familiarize themselves with a CI/CD system.
pipeline {
    // run every stage of this pipeline inside the given Docker image
    agent {
        docker {
            image 'maven:3.8.1-openjdk-11'
        }
    }
    stages {
        stage('Build') {
            steps {
                // build without tests, non-interactively, and keep the
                // local Maven repository inside the workspace
                sh 'mvn install -DskipTests --batch-mode -Dmaven.repo.local=./.m2/repository'
            }
        }
    }
}
The code above shows an example of a declarative Jenkins pipeline. The pipeline definition declares agents and stages. The agent configuration indicates that the specified Maven image should be used for all stages. The build stage defines a step in which Maven is executed with a few parameters within the shell.
If you use Jenkins and use a colorful bouquet of different types of pipeline definitions, you should migrate to a uniform declarative standard to reduce uncontrolled growth. Ideally, however, I would recommend migrating away from Jenkins altogether.
The Role of Jenkins in the Overall DevOps Concept
There are many reasons why more and more organizations are saying goodbye to Jenkins and turning to other solutions. The main reason is the high cost of maintaining the systems built on Jenkins. With many different plug-ins in combination, it is not easy to keep the toolchain simple. And because it is very difficult to administer different Jenkins installations in the same way, isolated solutions are often created, which makes it difficult to maintain an overview of different projects. The developer experience suffers as a result, as developers end up working more on the pipelines than on the actual development work.
Theoretically, it is possible to implement every desired function in Jenkins, but in practice it is quite difficult.
Editor’s note: This post has been adapted from a section of the book DevOps: Frameworks, Techniques, and Tools by Sujeevan Vijayakumaran. Sujeevan is a senior solutions engineer at Grafana Labs. Previously, he worked at GitLab, where he helped large corporations from Germany, Austria, and Switzerland transition to a DevOps culture. He cohosts the German technology podcast, TILpod, and enjoys giving talks at open-source conferences—not only on technical topics, but also on good teamwork, efficient communication, and everything else that is part of the DevOps culture.