Issue
When restoring from a backup using the CloudBees Backup plugin, the controller fails to start with the following error:
    java.nio.file.FileAlreadyExistsException: /var/jenkins_home/infradna-backup-restore.properties
        at java.base/sun.nio.fs.UnixFileSystem.move(UnixFileSystem.java:912)
        at java.base/sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:309)
        at java.base/java.nio.file.Files.move(Files.java:1431)
        at com.cloudbees.jenkins.infradna.backup.RestoreServletListener$2.visitFile(RestoreServletListener.java:228)
        at com.cloudbees.jenkins.infradna.backup.RestoreServletListener$2.visitFile(RestoreServletListener.java:211)
        at java.base/java.nio.file.Files.walkFileTree(Files.java:2799)
        at java.base/java.nio.file.Files.walkFileTree(Files.java:2870)
        at com.cloudbees.jenkins.infradna.backup.RestoreServletListener.moveRecursively(RestoreServletListener.java:211)
        at com.cloudbees.jenkins.infradna.backup.RestoreServletListener.moveRecursively(RestoreServletListener.java:204)
        at com.cloudbees.jenkins.infradna.backup.RestoreServletListener.moveRecursivelyThenDeleteSource(RestoreServletListener.java:182)
    Caused: java.lang.RuntimeException: Was not able to move recursively from /var/jenkins_home/restore-<sha-hash> to /var/jenkins_home
        at com.cloudbees.jenkins.infradna.backup.RestoreServletListener.moveRecursivelyThenDeleteSource(RestoreServletListener.java:185)
        at com.cloudbees.jenkins.infradna.backup.RestoreServletListener.doRestore(RestoreServletListener.java:145)
        at com.cloudbees.jenkins.infradna.backup.RestoreServletListener.contextInitialized(RestoreServletListener.java:86)
This causes the managed controller to enter a CrashLoopBackOff state in Kubernetes environments, preventing the controller from starting.
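On Kubernetes, the symptom is typically visible in the pod status and startup logs. For illustration (the pod name my-controller-0 and namespace cloudbees-core are placeholders for your environment):

    # Pod stuck in a restart loop:
    kubectl -n cloudbees-core get pods
    # NAME              READY   STATUS             RESTARTS
    # my-controller-0   0/1     CrashLoopBackOff   5

    # The exception appears in the controller startup log:
    kubectl -n cloudbees-core logs my-controller-0 | grep FileAlreadyExistsException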
Explanation
When a restore job is executed, the plugin creates two items in the Jenkins home directory:
- A marker file: infradna-backup-restore.properties
- A restore directory containing the backup content: restore-<sha-hash>/
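For illustration, the home directory right after a restore job (and before the restart) might look like this; everything other than the two restore items is ordinary controller content, and the hash is environment-specific:

    $ ls /var/jenkins_home
    config.xml
    infradna-backup-restore.properties
    jobs/
    plugins/
    restore-<sha-hash>/
    ...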
The restore itself is completed on the next controller startup, so the plugin expects an immediate restart after the restore job runs.
However, if an administrator runs a backup job after the restore but before restarting the controller, the new backup captures these residual restore files.
When such a backup is later restored, the staged restore-<sha-hash>/ directory contains its own copy of infradna-backup-restore.properties; when the plugin tries to move that copy into the Jenkins home, the marker file already exists there, and the move fails with the FileAlreadyExistsException.
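You can check whether a given backup captured this residue before attempting to restore from it. A minimal sketch, assuming a gzip-compressed tar archive (the archive name and exact format depend on your backup job configuration):

    # List the archive entries and search for restore residue; any match means
    # this backup was taken between a restore job and the follow-up restart.
    tar -tzf jenkins-backup.tar.gz | grep -E '(^|/)(infradna-backup-restore\.properties|restore-[^/]+/)'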
This edge case was not considered in the original implementation.
Resolution
This issue will be resolved in CloudBees CI in the January 2026 release.
The fix prevents backup jobs from including restore residual files (infradna-backup-restore.properties and restore-<sha-hash>/ directories) in new backups.
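Until a fixed release is in place, one way to avoid capturing the residue is to confirm it is absent before triggering a backup job. A minimal sketch, assuming the default home path /var/jenkins_home:

    # Warn if restore residue is still present, so the backup can be postponed
    # until the controller has restarted and the restore has completed.
    home=/var/jenkins_home
    if [ -e "$home/infradna-backup-restore.properties" ] || [ -n "$(ls -d "$home"/restore-* 2>/dev/null)" ]; then
      echo "Restore residue present in $home: restart the controller before running a backup job."
    fi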
Workaround
Important: Do not simply delete the $JENKINS_HOME/infradna-backup-restore.properties file. Doing so causes further issues and may result in an incomplete restore or loss of controller configuration.
If the controller is in a CrashLoopBackOff state due to this issue, follow these recovery steps:
For controllers that have not undergone multiple restart attempts
- Access the $JENKINS_HOME directory. On Kubernetes, stop the managed controller and use a rescue pod (a minimal sketch follows the warning below).
- Delete the problematic marker file from the restore directory:

      rm /var/jenkins_home/restore-<sha-hash>/infradna-backup-restore.properties

  Replace <sha-hash> with the actual hash value from your environment.
- Restart the controller.
Do NOT delete the $JENKINS_HOME/infradna-backup-restore.properties file, as it belongs to the current incomplete restoration process.
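The following rescue-pod sketch makes a few assumptions: a managed controller named my-controller in namespace cloudbees-core, with the StatefulSet and PVC following the common naming pattern (jenkins-home-my-controller-0). Verify the actual names with kubectl get statefulset,pvc before running anything:

    # Stop the controller so nothing holds the home volume.
    kubectl -n cloudbees-core scale statefulset my-controller --replicas=0

    # Start a throwaway pod that mounts the controller's home volume.
    kubectl -n cloudbees-core run rescue-pod --image=ubuntu --restart=Never \
      --overrides='{"apiVersion":"v1","spec":{"containers":[{"name":"rescue-pod","image":"ubuntu","command":["sleep","infinity"],"volumeMounts":[{"mountPath":"/var/jenkins_home","name":"jenkins-home"}]}],"volumes":[{"name":"jenkins-home","persistentVolumeClaim":{"claimName":"jenkins-home-my-controller-0"}}]}}'

    # Remove the marker file staged inside the restore directory.
    kubectl -n cloudbees-core exec rescue-pod -- \
      sh -c 'rm /var/jenkins_home/restore-*/infradna-backup-restore.properties'

    # Clean up and restart the controller.
    kubectl -n cloudbees-core delete pod rescue-pod
    kubectl -n cloudbees-core scale statefulset my-controller --replicas=1

If the rescue pod schedules but its volume never attaches, confirm the controller replica count is actually 0; ReadWriteOnce volumes can typically be attached to only one node at a time.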
For controllers that have undergone multiple restarts
Multiple restart attempts create multiple archived-<timestamp> directories, complicating recovery.
Simply deleting the restore marker file may result in missing job definitions or identity key issues (e.g., the operations center rejecting the controller's connection).
Instead, complete the restore manually using the rescue pod.
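As a starting point, it can help to inventory the restore artifacts from the rescue pod before moving anything. A hedged sketch, assuming the default home path; what each archived-<timestamp> directory contains depends on what each restart attempt moved aside:

    # Inventory the archived and staged restore directories.
    ls -ld /var/jenkins_home/archived-* /var/jenkins_home/restore-*

    # Check which copies hold critical files, such as the instance identity
    # key (identity.key.enc) and job definitions, before reconciling manually.
    ls /var/jenkins_home/archived-*/identity.key.enc /var/jenkins_home/archived-*/jobs 2>/dev/null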
If necessary, contact CloudBees Support for assistance with backup restoration.