Issue
Jobs fail or you see exceptions in your Jenkins log with errors similar to `java.io.IOException: No space left on device`.
Resolution
Running out of disk space is one of the more common causes of issues on any production system, and Jenkins is no different. If you encounter "no space left on device" errors, the process of fixing the problem is relatively straightforward.
The error message should indicate where the system was trying to write; for example, `/var/jenkins_home`.
- On the affected system (either the Jenkins controller or possibly an agent), run `df` and look for volumes whose Use% is 100%.
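  A minimal check, assuming the example path from the error message above (substitute your own mount point):

  ```sh
  # Show block usage for the filesystem that holds the path;
  # Use% at or near 100% means the volume is full.
  df -h /var/jenkins_home
  ```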
- For volumes that are at or near 100% block storage used, you may want to search for large files using `find`; for example, `find /var/jenkins_home -size +100M`. Alternatively, run `du -sh *` in the directory and drill down accordingly. Both of these may be very slow if the volume is large.
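  A sketch of both approaches, assuming GNU find and sort and the example path above:

  ```sh
  # List files larger than 100 MB along with their sizes
  find /var/jenkins_home -type f -size +100M -exec ls -lh {} +

  # Or summarize usage per top-level directory, largest first
  du -sh /var/jenkins_home/* | sort -rh | head
  ```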
- If a regular `df` does not show any volumes at 100% usage, it is possible that the volume in question is out of inodes. `df -i` will show this as IUse% being at 100%.
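  For example, against the same mount point:

  ```sh
  # Reports Inodes/IUsed/IFree/IUse% instead of block usage;
  # IUse% at 100% means no inodes are left even if blocks are free.
  df -i /var/jenkins_home
  ```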
- When a volume is out of inodes but seems to have adequate free block storage, the cause is usually a very large number of small or empty files or directories. Every file or directory created consumes one inode, and inodes are a finite resource. To find where this might be happening, go into each directory on the volume, run a simple `find . | wc -l`, and look for unusually large numbers. Again, this may require drilling down to find the source of the problem.
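  A rough sketch of that drill-down, again assuming the example path:

  ```sh
  # Count filesystem entries under each top-level directory;
  # an unusually large count points at the inode consumer.
  cd /var/jenkins_home
  for d in */; do
    echo "$(find "$d" | wc -l) $d"
  done | sort -rn | head
  ```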
- Finally, if neither of these checks turns up anything, it is possible that a Jenkins job or other process is trying to create a single file that exceeds the amount of free space. When the volume fills up, the process fails with an error and the partial file is deleted, so afterwards it appears that there is no issue. In this case, finding the offending job or process requires looking at the Jenkins or system logs.
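  One way to narrow it down, assuming a default Jenkins home layout (build log locations vary by job type and installation):

  ```sh
  # Find build logs that recorded the error
  grep -l "No space left on device" /var/jenkins_home/jobs/*/builds/*/log
  ```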