Issue
Poor performance of a Jenkins instance is frequently due to misconfiguration during installation and is often the result of not following best practices.
Solution
Open file and process limits
Jenkins usually opens more files than the default limits configured in almost any Linux distribution allow. When migrating or installing a fresh Jenkins instance, this is usually the first issue you will face, because Jenkins cannot open as many files as it requires.
The stack trace shown by Jenkins when this happens is:
Caused by: java.io.IOException: Too many open files
        at java.io.UnixFileSystem.createFileExclusively(Native Method)
        at java.io.File.createNewFile(File.java:1006)
        at java.io.File.createTempFile(File.java:1989)
To check the default values in the OS, run the command ulimit -a:
max user processes              (-u) 1024
open files                      (-n) 1024
The recommended values below should be set up in /etc/security/limits.conf:
jenkins soft nofile 4096
jenkins hard nofile 8192
jenkins soft nproc 30654
jenkins hard nproc 30654
- Note that this assumes jenkins is the Unix user running the Jenkins process. If you are running Operations Center (JOC), the user is probably jenkins-oc.
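To verify that the new limits are applied to the account running Jenkins (the service usually needs to be restarted first), you can query them as that user. A minimal sketch, assuming the service account is jenkins:
# Show the effective open files and max user processes limits for the jenkins user
# (use jenkins-oc instead if you are running Operations Center)
su -s /bin/bash -c 'ulimit -n -u' jenkins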
You can find detailed information about this problem in our KB article
Disable THP
Some Unix distributions have Transparent Huge Pages (THP) enabled, which is known to cause performance issues with Java workloads on big servers. If you would like more background on this, take a look at this JDK issue: https://bugs.openjdk.org/browse/JDK-8024838.
The recommendation for Jenkins is to disable THP. To do so, run this command as root:
echo "never" > /sys/kernel/mm/redhat_transparent_hugepage/enabled
Detailed information about disabling THP can be found in the RHEL KB: https://access.redhat.com/solutions/46111
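To confirm the setting, you can read the same file back; the value shown in brackets is the active one. Note that the sysfs path varies between distributions (non-RHEL kernels typically expose /sys/kernel/mm/transparent_hugepage/enabled), and one common way to keep THP disabled across reboots is to pass transparent_hugepage=never on the kernel command line. A minimal sketch, assuming a RHEL-style GRUB2 system:
# Check the current THP state; the active value is shown in brackets, e.g. [never]
cat /sys/kernel/mm/redhat_transparent_hugepage/enabled

# To persist the setting across reboots, add transparent_hugepage=never to
# GRUB_CMDLINE_LINUX in /etc/default/grub and regenerate the GRUB configuration:
grub2-mkconfig -o /boot/grub2/grub.cfg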
Run CloudBees CI in HA
Run CloudBees CI on Traditional using High Availability (active/passive)
$JENKINS_HOME Shared Storage
This is perhaps the most common mistake made by Jenkins administrators, and it has a big performance impact. It is important that the .war file is not extracted into the $JENKINS_HOME/war directory on the shared filesystem; otherwise the application performs its read operations through the shared filesystem location.
Some configurations may do this by default, but the .war extraction can easily be redirected to a local cache (ideally SSD, for better Jenkins core I/O) on the container/VM's local filesystem with the JENKINS_ARGS options --webroot=$LOCAL_FILESYSTEM/war --pluginroot=$LOCAL_FILESYSTEM/plugins. For example, on Debian installations, where $NAME refers to the name of the Jenkins instance: --webroot=/var/cache/$NAME/war --pluginroot=/var/cache/$NAME/plugins.
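Where exactly these options are set depends on how Jenkins is launched. As an illustration, on a Debian/Ubuntu package installation they are typically appended to JENKINS_ARGS in /etc/default/jenkins; a minimal sketch (file location and variable names may differ on your installation):
# /etc/default/jenkins (Debian/Ubuntu packaging) - illustrative sketch
NAME=jenkins
JENKINS_ARGS="$JENKINS_ARGS --webroot=/var/cache/$NAME/war --pluginroot=/var/cache/$NAME/plugins"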
- Note (if Jenkins is running in a web container): The --pluginroot and --webroot options are specific to Winstone. The alternative to --pluginroot is to add the system property -Dhudson.PluginManager.workDir=$LOCAL_FILESYSTEM/plugins. There is no need for an alternative to --webroot, since the .war is extracted into a local directory by the container manager. For example, in Tomcat, if the application name is jenkins, the .war is extracted under $CATALINA_HOME/webapps/jenkins.
- NOTE: The --pluginroot option and the -Dhudson.PluginManager.workDir system property only work since Jenkins 1.649, so if the argument is added on a Jenkins version lower than this one, Jenkins might not be able to start.
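If Jenkins is deployed in Tomcat, one common way to pass that system property is through CATALINA_OPTS, for example in Tomcat's setenv.sh. A minimal sketch, assuming /var/cache/jenkins/plugins is a directory on the local filesystem:
# $CATALINA_HOME/bin/setenv.sh - illustrative sketch, adjust the path to your environment
export CATALINA_OPTS="$CATALINA_OPTS -Dhudson.PluginManager.workDir=/var/cache/jenkins/plugins"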
$JENKINS_HOME is read intensively during start-up. If bandwidth to your shared storage is limited, you will see the most impact on startup performance. High latency causes a similar issue, but it can be mitigated somewhat by increasing the boot-up concurrency with the system property -Djenkins.InitReactorRunner.concurrency=8.
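This property is passed as a JVM argument when starting Jenkins. As an illustration, on a Debian/Ubuntu package installation it can be appended to JAVA_ARGS in /etc/default/jenkins; a minimal sketch (the variable and file differ between init systems and containers):
# /etc/default/jenkins - illustrative sketch; on systemd-based installs set the JVM options of the service instead
JAVA_ARGS="$JAVA_ARGS -Djenkins.InitReactorRunner.concurrency=8"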
Use NFSv3, or NFSv4.1 or higher
Use NFSv3, or NFSv4.1 or greater, as NFSv4.0 is NOT recommended due to known performance problems. Please follow the NFS best practices in the KB article below.
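For reference, an NFS-backed $JENKINS_HOME is mounted like any other NFS share, with the protocol version pinned in the mount options. The entry below is only an illustration, assuming a hypothetical server nfs.example.com exporting /export/jenkins; use the mount options recommended in the NFS KB:
# /etc/fstab - illustrative example only, not the KB's tuned option set
nfs.example.com:/export/jenkins  /var/lib/jenkins  nfs4  rw,hard,vers=4.1,rsize=1048576,wsize=1048576  0 0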
EFS configuration for performance
If you are using EFS, follow steps 1 and 2 explained in the EFS Guide.