Disk Usage

Disk space usage varies with the quantity and size of the jobs you run. We recommend starting with the following free-space allocations:

Server

10 GB is recommended.

Agents

5 GB each is recommended.

Sizing Artifact Cache Directory Space on Resources

By default, artifacts are retrieved into the <DATA_DIRECTORY>/artifact-cache directory of the agent installation. You can modify the agent.conf file to change the location, or you can specify the cache directory location on each resource known to CloudBees Flow.

Determining how much free space the cache partition needs to accommodate all of your artifact versions can be difficult. One approach: for each artifact, estimate how large each version will be and how many versions you plan to keep. Compute the total required space as the sum of version-size * numVersions across all artifacts, then add a 50% buffer. Allocate a disk/partition of that size and configure the cache as a directory on that disk/partition.
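As a rough sketch of this estimate (the artifact names and sizes below are invented for illustration, not measured values):

```python
# Estimate artifact cache disk needs: sum of version_size * versions_kept
# per artifact, plus a 50% buffer. All figures here are illustrative.
artifacts = {
    "web-app":   {"version_size_mb": 250, "versions_kept": 10},
    "installer": {"version_size_mb": 900, "versions_kept": 4},
    "configs":   {"version_size_mb": 5,   "versions_kept": 50},
}

total_mb = sum(a["version_size_mb"] * a["versions_kept"]
               for a in artifacts.values())
recommended_mb = total_mb * 1.5  # add the 50% buffer

print(f"Raw total: {total_mb} MB; recommended partition: {recommended_mb:.0f} MB")
```

Size the cache disk/partition to at least the buffered figure, since retention policies and version sizes tend to grow over time.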

Repository Server

If using Artifact Management functionality, the repository server might need 20-30 GB.

Although a server install includes an artifact repository, we recommend that production repository servers be installed on different machines than the CloudBees Flow server. The repository server might do a very large amount of disk and network I/O when transferring artifact versions to and from requesters, and this might adversely affect CloudBees Flow server performance.

Sizing the Repository Backingstore

For a repository installation, by default, the repository backingstore is the <DATA_DIRECTORY>/repository-data directory. You can modify the <DATA_DIRECTORY>/conf/repository/server.properties file or use ecconfigure to update the backingstore location. Determining exactly how much free space the backingstore disk/partition needs to accommodate your artifact versions can be difficult. Here is one approach to approximate the disk size you need:

For each artifact, estimate how large each version will be and how many versions you plan to keep. Compute the total required space as the sum of version-size * numVersions across all artifacts, then add a 50% buffer. Allocate a disk/partition of that size and configure the repository backingstore as a directory on that disk/partition.

Logs

You can set the following properties as Java system properties in wrapper.conf:

  • ec.logRoot controls the location of the log output. By default, logs are written to the logs directory (the main server log is logs/commander.log).

  • ec.logHistory controls the number of days of log history that is kept. The default is 30 days.

  • ec.logSize controls the size each log file can reach before it is zipped and a new log file is started. The default is 100 MB, but each rotation compresses the file, so a rotated log occupies only about 6-7 MB.

    Production systems generate multiple log files per day; an average system can generate 50-100 log files. Under this type of load, the daily space requirement is 300-700 MB, so retaining one month's worth of logs requires 9-21 GB of space.

To limit the amount of disk space for logging, the most effective approach is to use a lower ec.logHistory value.
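As an illustrative sketch, these properties are passed to the server JVM as wrapper.java.additional.* entries in wrapper.conf. The index numbers and path below are placeholder assumptions, not product defaults:

```properties
# Illustrative wrapper.conf fragment. Replace 100-101 with the next
# unused wrapper.java.additional.* indices in your file; the path and
# retention value are examples, not defaults.
wrapper.java.additional.100=-Dec.logRoot=/bigdisk/flow-logs
wrapper.java.additional.101=-Dec.logHistory=14
```

wrapper.conf is read at service startup, so restart the server service for the changes to take effect.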

DevOps Insight Server

The amount of disk space required for the DevOps Insight server depends on the shape and size of the data that you will store on it. This data is used by Elasticsearch, the underlying analytics store and search engine. The following are general guidelines based on CloudBees performance and scalability tests.

CloudBees Flow sends all deployment events, pipeline runs, and release data to the DevOps Insight server. The following table shows the average size for each data set:

Data set        Amount             Documents in corresponding Elasticsearch index    Average index size
Deployments     100 deployments    73,443 *                                          18.5 MB
Pipeline runs   4 pipeline runs    24 **                                             285 KB
Releases        5 releases         5                                                 52 KB

* The number of documents per deployment depends on the number of deployment events in your deployment process.
** The number of documents per pipeline run depends on your pipeline definition.

Based on the above table, if you run 10 deployments a week, 2 pipeline runs a month, and 1 release per month, then over one year (including weekends and holidays) you will need about 97 MB (95 MB + ~1.7 MB + ~0 MB) of disk space to store deployment events, pipeline runs, and release data in the Elasticsearch server backing the DevOps Insight server.
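This arithmetic can be reproduced from the table's per-item averages; the workload rates below are the example rates from the text:

```python
# Yearly DevOps Insight disk estimate derived from the table's averages.
per_deployment_mb = 18.5 / 100     # MB per deployment
per_pipeline_run_kb = 285 / 4      # KB per pipeline run
per_release_kb = 52 / 5            # KB per release

deployments_per_year = 10 * 52     # 10 per week
pipeline_runs_per_year = 2 * 12    # 2 per month
releases_per_year = 1 * 12         # 1 per month

total_mb = (deployments_per_year * per_deployment_mb
            + pipeline_runs_per_year * per_pipeline_run_kb / 1024
            + releases_per_year * per_release_kb / 1024)
print(f"~{total_mb:.0f} MB per year")  # roughly 98 MB; the text rounds down to ~97 MB
```

Substitute your own rates to scale the estimate to your workload.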

If you have also set up plugins to collect and send Release Command Center data to the DevOps Insight server, you will need additional disk space. The following table shows the average size for each data set:

Data set                            Amount          Documents in corresponding Elasticsearch index    Average index size
Features (stories)                  400 features    1,200 *                                           256 KB
Builds                              40 builds       40                                                248 KB
Quality (aggregated test results)   140             140                                               347 KB
Incidents                           90 incidents    270 *                                             164 KB

* Assuming that each item underwent three updates from the point it was first sent to Elasticsearch.

Based on the above table, if you run 10 builds a day, 50 aggregated test results a day, 50 features (stories) per month, and 5 incidents per month, then over one year (including weekends and holidays) you will need about 240 MB (22 MB + 44 MB + 64 MB + 109 MB) of additional disk space to store data collected by the plugins.

Based on the above metrics, the total disk space for a year would be about 340 MB (97 MB + 240 MB). Adjust these calculations to your own data-generation patterns to estimate the disk space requirements for your DevOps Insight server.

Copyright © 2010-2020 CloudBees, Inc. Online version published by CloudBees, Inc. under the Creative Commons Attribution-ShareAlike 4.0 license. CloudBees and CloudBees DevOptics are registered trademarks and CloudBees Core, CloudBees Flow, CloudBees Flow Deploy, CloudBees Flow DevOps Insight, CloudBees Flow DevOps Foresight, CloudBees Flow Release, CloudBees Accelerator, CloudBees Accelerator ElectricInsight, CloudBees Accelerator Electric Make, CloudBees CodeShip, CloudBees Jenkins Enterprise, CloudBees Jenkins Platform, CloudBees Jenkins Operations Center, and DEV@cloud are trademarks of CloudBees, Inc. Most CloudBees products are commonly referred to by their short names — Accelerator, Automation Platform, Flow, Deploy, Foresight, Release, Insight, and eMake — throughout various types of CloudBees product-specific documentation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Jenkins is a registered trademark of the non-profit Software in the Public Interest organization. Used with permission. See here for more info about the Jenkins project. The registered trademark Jenkins® is used pursuant to a sublicense from the Jenkins project and Software in the Public Interest, Inc. Read more at www.cloudbees.com/jenkins/about. Apache, Apache Ant, Apache Maven, Ant and Maven are trademarks of The Apache Software Foundation. Used with permission. No endorsement by The Apache Software Foundation is implied by the use of these marks. Other names may be trademarks of their respective owners. Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this content, and CloudBees was aware of a trademark claim, the designations have been printed in caps or initial caps.
While every precaution has been taken in the preparation of this content, the publisher and authors assume no responsibility for errors or omissions, or for damages resulting from the use of the information contained herein.