A file descriptor is a handle that a process uses to read from or write to an open file or an open network socket (though file descriptors have other uses as well).
Operating systems limit the number of file descriptors that a process can open. In addition to these per-process limits, the OS also enforces a global limit on the number of file descriptors that all of its processes can consume together.
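For reference, both kinds of limit can be inspected on a Linux system; the values shown vary by host and configuration:

```shell
# Per-process limits for the current shell:
ulimit -Sn                    # soft limit (can be raised by the user up to the hard limit)
ulimit -Hn                    # hard limit (ceiling set by the administrator)

# System-wide maximum number of open files (Linux-specific):
cat /proc/sys/fs/file-max
```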
A common bottleneck in the default Linux operating system configuration is a lack of file descriptors.
A CloudBees Flow server uses approximately one file descriptor per running job step and three per uncompleted job.
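Those figures let you estimate a suitable limit from your expected workload. The workload numbers below are purely hypothetical; only the per-step and per-job multipliers come from the text above:

```shell
# Hypothetical workload for a CloudBees Flow server
running_job_steps=500         # ~1 file descriptor each
uncompleted_jobs=1000         # ~3 file descriptors each

# Estimated file descriptor consumption
estimate=$(( running_job_steps * 1 + uncompleted_jobs * 3 ))
echo "$estimate"              # prints 3500
```

Leave generous headroom above the estimate, since the server also opens descriptors for its own files, logs, and network connections.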
The following example configures CloudBees Flow to use a new limit of 32768:
Add the following line to the initscript for the CloudBees Flow server (in /etc/init.d/commander), before the command that starts the server:

ulimit -n 32768
Restart the CloudBees Flow server.
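A sketch of the restart, assuming the initscript at /etc/init.d/commander accepts the conventional restart action (verify the service name and supported actions on your installation):

```shell
/etc/init.d/commander restart
```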
A CloudBees Flow agent uses at least two file descriptors per running job step.
Make sure that the operating system on a high-traffic site is configured to provide enough file descriptors to CloudBees Flow.
The following example describes how to raise the maximum number of file descriptors to 32768 for the CloudBees Flow process on the Red Hat Linux distribution:
Allow all users to raise their file descriptor limit from the initial soft value of 1024 up to the hard maximum of 32768 by editing /etc/security/limits.conf. The file should contain the following two lines:
*    soft    nofile    1024
*    hard    nofile    32768
In /etc/pam.d/login, add the following line if it does not already exist:
session required pam_limits.so
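Once pam_limits.so is in place, new login sessions pick up the values from /etc/security/limits.conf. You can verify from a fresh login (the output depends on the host's configuration):

```shell
# Soft limit should start at the configured value (1024 in the example above)
ulimit -Sn
# Hard limit should report the configured ceiling (32768 in the example above)
ulimit -Hn
```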
Configure CloudBees Flow to use the new limits. Add the following line to the initscript for the CloudBees Flow agent, before the command that starts the agent:

ulimit -n 32768
Restart the CloudBees Flow agent.
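A sketch of the restart, with a placeholder for the agent initscript, whose name depends on your installation (substitute the actual path):

```shell
# <agent-initscript> stands in for the CloudBees Flow agent initscript on your host
/etc/init.d/<agent-initscript> restart
```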