KBEC-00277 - Understanding Memory Usage in CloudBees CD (CloudBees Flow)

Article ID: 360032828592

Summary

Determining the correct memory settings for your CloudBees CD (CloudBees Flow) instance can take some trial and error.

The commander.log contains memory information that can give you perspective on your system's memory usage and help you configure these settings appropriately.

Description

The Java heap size settings are located in the wrapper.conf file within the server/conf directory.

By default, the settings are as follows:

# Initial Java Heap Size (in %)
wrapper.java.initmemory.percent=40

# Maximum Java Heap Size (in %)
wrapper.java.maxmemory.percent=40

For example, if your server were running on a machine with 8GB of memory, there would be approximately 3.2GB of heap memory available for the CloudBees CD (CloudBees Flow) server.
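As a quick check, the available heap is simply the physical memory multiplied by the configured percentage. Here is a minimal sketch of that arithmetic (the 8GB machine and the 40% default are taken from the example above):

  # Rough heap-size estimate from physical memory and the wrapper.conf percentage.
  def heap_size_gb(physical_gb, percent=40):
      return physical_gb * percent / 100.0

  print(heap_size_gb(8))   # 3.2 GB of heap with the default 40% setting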

The commander.log contains a memory snapshot every 15 minutes. Searching the logs for the phrase "MemoryGuard" will locate these entries.
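If you prefer to pull these snapshots out of the log programmatically, a minimal sketch along the following lines can help (the log path shown is an assumption; adjust it to your installation's log directory):

  # Print every MemoryGuard line found in commander.log.
  # The path below is an assumption; point it at your actual commander.log.
  log_path = "/opt/electriccloud/electriccommander/logs/commander.log"

  with open(log_path, errors="replace") as log:
      for line in log:
          if "MemoryGuard" in line:
              print(line.rstrip())

The table below is an example of what one of these snapshots reports: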

 Pool                     Type          Used        Committed       Init        Max

 Code Cache           usage        37.03 MB   37.44 MB    2.44 MB   48.00 MB
 Code Cache           peak         37.09 MB   37.44 MB    2.44 MB   48.00 MB
 G1 Eden Space        usage        1.516 GB   2.340 GB   2.520 GB      -1  B
 G1 Eden Space        collection   2.234 GB   2.355 GB   2.520 GB      -1  B
 G1 Eden Space        peak         8.898 GB   9.387 GB   2.520 GB      -1  B
 G1 Survivor Space    usage       184.00 MB  184.00 MB       0 B       -1  B
 G1 Survivor Space    collection  168.00 MB  168.00 MB       0 B       -1  B
 G1 Survivor Space    peak        368.00 MB  368.00 MB       0 B       -1  B
 G1 Old Gen           usage        7.182 GB   9.480 GB   9.480 GB  12.000 GB
 G1 Old Gen           collection   5.198 GB   6.641 GB   9.480 GB  12.000 GB
 G1 Old Gen           peak         8.282 GB   9.480 GB   9.480 GB  12.000 GB
 G1 Perm Gen          usage       108.41 MB  112.00 MB   20.00 MB  128.00 MB
 G1 Perm Gen          collection  104.63 MB  108.00 MB   20.00 MB  128.00 MB
 G1 Perm Gen          peak        128.00 MB  128.00 MB   20.00 MB  128.00 MB

Collector                          Count       Time (ms)
G1 Young Generation                 4780          528743
G1 Old Generation                     10           11698

From these tables, we can confirm the following:

  1. Use the Old Gen lines to learn that:

    • The currently committed memory is 9.48 GB [ see usage line ]

    • The max possible memory is 12 GB [ see MAX column in any of the 3 lines ]

    • The current memory in use is 7.182 GB [ see usage line ]

    • The maximum memory used since the last restart is 8.282 GB [ see peak line ]

    • The last time there was a Garbage Collection (GC) event, memory usage shrank back to 5.198 GB [ see collection line ]

  2. The collector lines expose:

    • That there have been just under 5000 collection events of Young Generation memory

      • This is memory that is intended to be transient, and therefore collected frequently.

      • By dividing the time by the count, you can find that such events are taking an average of 110ms.

    • That there have been 10 collections of Old Generation memory. These are traditional Java GC events in which the JVM cleans up stale memory. During such an event, the CloudBees CD (CloudBees Flow) server temporarily stops responding, so keeping this time to a minimum is important. Dividing the time by the count shows that these events took about 1.2s each, so in most cases they would not be noticeable. (The sketch after this list works through both averages.)
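To make those averages concrete, here is a minimal sketch of the division described above, using the Count and Time (ms) figures from the collector table:

  # Average GC pause per collection, using the figures from the collector table.
  collectors = {
      "G1 Young Generation": (4780, 528743),  # (count, total time in ms)
      "G1 Old Generation": (10, 11698),
  }

  for name, (count, total_ms) in collectors.items():
      print(f"{name}: {total_ms / count:.0f} ms per collection on average")
  # Prints roughly 111 ms for Young Generation and 1170 ms (about 1.2s) for Old Generation.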

Should any of this example data be cause for concern?

If this data is representative of what has been running in your environment for a while, the example implies that you could lower the maximum memory to 9GB or 10GB without impacting server performance. A lower maximum memory may cause Java to collect old-gen memory slightly more frequently, which can help keep the duration of individual GC events to a minimum.
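If you want to translate a target maximum heap back into the percentage that wrapper.conf expects, the arithmetic is simply the inverse of the earlier calculation. A minimal sketch (the 32GB physical memory figure is an assumption chosen for illustration):

  # Convert a target maximum heap into a wrapper.conf percentage.
  def heap_percent(target_heap_gb, physical_gb):
      return target_heap_gb / physical_gb * 100.0

  print(heap_percent(10, 32))   # 31.25 -> roughly a 31% setting on a 32GB machine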

What if I am seeing peak memory events where my USED memory reaches close to my maximum?

If this situation is a rare event, you should look into what may have caused these temporary memory peaks. Certain actions can trigger them, for example:

  • Exports of large projects

  • Exports of a full database without jobs (exports with jobs are no longer guaranteed to succeed on a running system)

  • Launching large amounts of concurrent jobs or workflows

If I am continuing to see peak events happen, should I just increase the memory I allocate to CloudBees CD (CloudBees Flow)?

Adding more memory may help you survive such peaks, but depending on your situation and what you are trying to achieve, you may also want to consider splitting your work across multiple servers (the High Availability feature is available as of v5.0). In general, if you move to HA, three servers are recommended so that the loss of one node can be absorbed by the remaining two.

What are some common physical memory allocations?

A small production system may be okay with using 12-16GB of physical memory. A common allocation of memory for CloudBees CD (CloudBees Flow) customers is 32-48GB of physical memory. We have some customers who have allocated >100GB of physical memory to handle the peak-hour loads that their system encounters.