Problem
You may see a "Too many open files" message when using CloudBees products on a *nix system. For example, here is a WARN reported by the CloudBees Accelerator cluster manager:
2019-10-14T07:26:00.798 | WARN | 0.0.0.0:8031} | | ServerConnector | java.io.IOException: Too many open files
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
    at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:377)
    at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:500)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
    at java.lang.Thread.run(Thread.java:748)
And here are logs written by the CloudBees CD (CloudBees Flow) server:
INFO   | jvm 1    | 2016/02/16 02:46:10.542 | 2016-02-16 02:46:10.292:WARN:oejs.ServerConnector:qtp1921015445-28-acceptor-0@6ca61f50-ServerConnector@4f24894d{HTTP/1.1}{0.0.0.0:8000}:
INFO   | jvm 1    | 2016/02/16 02:46:10.542 | java.io.IOException: Too many open files
INFO   | jvm 1    | 2016/02/16 02:46:10.542 |   at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
INFO   | jvm 1    | 2016/02/16 02:46:10.542 |   at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
INFO   | jvm 1    | 2016/02/16 02:46:10.542 |   at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
INFO   | jvm 1    | 2016/02/16 02:46:10.542 |   at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:377)
INFO   | jvm 1    | 2016/02/16 02:46:10.542 |   at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:500)
INFO   | jvm 1    | 2016/02/16 02:46:10.542 |   at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
INFO   | jvm 1    | 2016/02/16 02:46:10.542 |   at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
INFO   | jvm 1    | 2016/02/16 02:46:10.542 |   at java.lang.Thread.run(Thread.java:745)
This means that the process has exceeded the open file limit imposed by the operating system.
Solution
You can change the operating system settings to give the process a higher limit. Please refer to the related articles for more details.
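As a rough illustration only (the exact mechanism depends on your distribution and on how the service is started), per-user limits are typically raised in /etc/security/limits.conf, while services managed by systemd take their limit from a LimitNOFILE setting in the unit or an override file. The user name "cbuser" and the value 65536 below are placeholders, not recommended values:

# /etc/security/limits.conf -- limits applied at login through PAM
# ("cbuser" is a placeholder for the account that runs the CloudBees process)
cbuser  soft  nofile  65536
cbuser  hard  nofile  65536

# For a systemd-managed service, create an override with
#   sudo systemctl edit <service-name>
# add the following lines, then restart the service:
[Service]
LimitNOFILE=65536

After changing the limit, restart the affected service and re-run the checks below to confirm that the new value has taken effect.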
Before raising the limit, you may want to check the current one. There are many ways to do this; here are some convenient commands for checking the limits of the CloudBees product processes.
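As a quick first check, the standard ulimit shell built-in shows the limits of your current shell (and of any child processes started from it). Note that a service started by init or systemd does not necessarily share these values, which is why the commands below read the running process's /proc/<pid>/limits file directly:

# Soft limit on open file descriptors for the current shell
ulimit -Sn

# Hard limit (the ceiling the soft limit can be raised to without root privileges)
ulimit -Hn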
Checking the limits of the CloudBees CD (CloudBees Flow) server process:

ubuntu@ip-172-31-41-128:~$ cat /proc/`ps -ef | grep ElectricFlowServer | awk '{ print $2 }' | head -1`/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             64063                64063                processes
Max open files            1048576              1048576              files
Max locked memory         16777216             16777216             bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       64063                64063                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
ubuntu@ip-172-31-41-128:~$
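To see how close a process actually is to its limit, you can count the entries in its /proc/<pid>/fd directory and compare the result with the "Max open files" line above. This is a generic Linux technique, not specific to CloudBees CD; you may need sudo if the process runs as a different user:

# Count the file descriptors currently open by the CloudBees CD server process
sudo ls /proc/`ps -ef | grep ElectricFlowServer | awk '{ print $2 }' | head -1`/fd | wc -l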
Checking the limits of the CloudBees CD (CloudBees Flow) agent process:

ubuntu@ip-172-31-41-128:~$ cat /proc/`ps -ef | grep ElectricFlowAgent | awk '{ print $2 }' | head -1`/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             64063                64063                processes
Max open files            1048576              1048576              files
Max locked memory         16777216             16777216             bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       64063                64063                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
ubuntu@ip-172-31-41-128:~$
Checking the limits of the CloudBees CD (CloudBees Flow) repository server process:

ubuntu@ip-172-31-41-128:~$ cat /proc/`ps -ef | grep ElectricFlowRepository | awk '{ print $2 }' | head -1`/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             64063                64063                processes
Max open files            1048576              1048576              files
Max locked memory         16777216             16777216             bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       64063                64063                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
ubuntu@ip-172-31-41-128:~$
Checking the limits of the Accelerator cluster manager process:

Assuming that the cluster manager is running as the default user "eacmuser" with default parameters, the following command should work:

ubuntu@ip-172-31-41-128:~$ cat /proc/`ps -u eacmuser -f | grep accelerator | awk '{ print $2 }' | head -1`/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             64063                64063                processes
Max open files            1048576              1048576              files
Max locked memory         16777216             16777216             bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       64063                64063                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
ubuntu@ip-172-31-41-128:~$
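If you want to check all of the above in one pass, a small loop such as the one below works on most Linux systems. It uses pgrep -f instead of ps piped through grep, which avoids accidentally matching the grep process itself; the process names are the defaults used in the commands above and may differ in a customized installation:

# Print the "Max open files" line for each CloudBees process found on this host
for name in ElectricFlowServer ElectricFlowAgent ElectricFlowRepository accelerator; do
    pid=`pgrep -f "$name" | head -1`
    if [ -n "$pid" ]; then
        echo "== $name (pid $pid) =="
        grep "Max open files" /proc/$pid/limits
    fi
done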