Workspace CleanUp threads may cause performance issues

Article ID:360047019911

Issue

  • Jenkins suffers from slowness and performance issues

  • The installed branch-api plugin version is lower than 2.5.6

  • In the thread dumps, you can observe BLOCKED threads like the following:

"Computer.threadPoolForRemoting [#15979]" #398547 daemon prio=5 os_prio=0 tid=0x00007f0509fcf000 nid=0x1e982 waiting for monitor entry [0x00007f0246221000]
   java.lang.Thread.State: BLOCKED (on object monitor)
	at jenkins.branch.WorkspaceLocatorImpl.locate(WorkspaceLocatorImpl.java:158)
	- waiting to lock <0x00000003c0677358> (a hudson.slaves.DumbSlave)
	at jenkins.branch.WorkspaceLocatorImpl.locate(WorkspaceLocatorImpl.java:129)
	at jenkins.branch.WorkspaceLocatorImpl.access$100(WorkspaceLocatorImpl.java:80)
	at jenkins.branch.WorkspaceLocatorImpl$Deleter$CleanupTask.run(WorkspaceLocatorImpl.java:402)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

   Locked ownable synchronizers:
	- <0x000000049de00a48> (a java.util.concurrent.ThreadPoolExecutor$Worker)

These threads are blocked waiting on the node lock held by the "Workspace clean-up thread":

"Workspace clean-up thread" #220769 daemon prio=5 os_prio=0 tid=0x00007f054cb47000 nid=0x1414 in Object.wait() [0x00007f046e386000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	at hudson.remoting.FastPipedInputStream.read(FastPipedInputStream.java:175)
	- locked <0x000000048d215470> (a [B)
	at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
	at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
	at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
	- locked <0x000000048d227520> (a java.io.InputStreamReader)
	at java.io.InputStreamReader.read(InputStreamReader.java:184)
	at java.io.BufferedReader.fill(BufferedReader.java:161)
	at java.io.BufferedReader.readLine(BufferedReader.java:324)
	- locked <0x000000048d227520> (a java.io.InputStreamReader)
	at java.io.BufferedReader.readLine(BufferedReader.java:389)
	at jenkins.branch.WorkspaceLocatorImpl.load(WorkspaceLocatorImpl.java:222)
	at jenkins.branch.WorkspaceLocatorImpl.locate(WorkspaceLocatorImpl.java:159)
	- locked <0x00000003c0677358> (a hudson.slaves.DumbSlave)
	at jenkins.branch.WorkspaceLocatorImpl.locate(WorkspaceLocatorImpl.java:129)
	at jenkins.branch.WorkspaceLocatorImpl.locate(WorkspaceLocatorImpl.java:125)
	at hudson.model.Slave.getWorkspaceFor(Slave.java:335)
	at hudson.model.WorkspaceCleanupThread.execute(WorkspaceCleanupThread.java:78)
	at hudson.model.AsyncPeriodicWork$1.run(AsyncPeriodicWork.java:101)
	at java.lang.Thread.run(Thread.java:748)

   Locked ownable synchronizers:
	- None
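
If you want to quantify the symptom, the following standalone sketch (illustrative only, not part of Jenkins; the class name and dump file path are hypothetical) counts the threads in a saved thread dump, for example one captured with jstack <pid>, that are BLOCKED inside jenkins.branch.WorkspaceLocatorImpl.locate. A count that stays high across consecutive dumps suggests this issue applies.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Standalone helper (illustrative, not part of Jenkins): counts threads in a
// saved thread dump that are BLOCKED inside WorkspaceLocatorImpl.locate.
public class CountBlockedLocatorThreads {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get(args[0])); // args[0] = path to the dump file
        int blocked = 0;
        boolean currentThreadBlocked = false;
        for (String line : lines) {
            if (line.startsWith("\"")) {
                currentThreadBlocked = false;          // header line of a new thread entry
            } else if (line.contains("java.lang.Thread.State: BLOCKED")) {
                currentThreadBlocked = true;
            } else if (currentThreadBlocked
                    && line.contains("jenkins.branch.WorkspaceLocatorImpl.locate")) {
                blocked++;                             // count each blocked thread at most once
                currentThreadBlocked = false;
            }
        }
        System.out.println("Threads BLOCKED in WorkspaceLocatorImpl.locate: " + blocked);
    }
}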

Explanation

Thread contention on the WorkspaceLocatorImpl can cause Jenkins to hang. A large number of workspace cleanup tasks, such as those triggered by deleting large Multibranch/Organization items, can cause this behaviour.

Other causes are possible, such as poor filesystem performance.
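
The pattern visible in the dumps above is that each cleanup task first acquires the monitor of the affected node, so a single task stalled on slow remoting I/O serializes all the others and ties up Computer.threadPoolForRemoting. The following minimal model (illustrative only, not Jenkins code) reproduces that shape with a shared lock standing in for the node monitor and a sleep standing in for the slow workspace-index read.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Minimal model (not Jenkins code) of the contention shape seen above: every
// task takes the same per-node lock, so one slow lock holder serializes the
// rest and keeps the shared pool occupied.
public class NodeLockContentionSketch {
    private static final Object NODE_LOCK = new Object();       // stands in for the DumbSlave monitor

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4); // stands in for Computer.threadPoolForRemoting
        for (int i = 0; i < 8; i++) {
            final int task = i;
            pool.submit(() -> {
                synchronized (NODE_LOCK) {                       // same lock taken by every cleanup task
                    sleepQuietly(1_000);                         // stands in for a slow workspace-index read
                    System.out.println("cleanup task " + task + " done on " + Thread.currentThread().getName());
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);              // tasks finish one at a time, roughly 8 s total
    }

    private static void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}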

Resolution

Upgrade the Branch API plugin to version 2.5.6 or later.
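
To verify the installed version before and after the upgrade, you can run a short script in the Jenkins Script Console (which accepts Java-style syntax). This is a minimal sketch that relies only on Jenkins core APIs and the plugin short name branch-api; the same information is also visible on the installed plugins page in Manage Jenkins.

import jenkins.model.Jenkins;
import hudson.PluginWrapper;

// Prints the currently installed Branch API plugin version ("branch-api" is its short name).
PluginWrapper plugin = Jenkins.get().getPluginManager().getPlugin("branch-api");
System.out.println(plugin != null ? "branch-api " + plugin.getVersion() : "branch-api is not installed");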
