KBEA-00162 Integrating Coverity Analysis for C/C++ into an ElectricAccelerator Build

Article ID: 360033187131

You can integrate Coverity Analysis for C/C++ into a makefile-based ElectricAccelerator® build on Linux or Windows if the build meets the prerequisites that follow.

Prerequisites

Your build must meet the following criteria before you integrate Coverity Analysis for C/C++ into an ElectricAccelerator build:

  • Your build already works with ElectricAccelerator.
    For example, on Windows, all .pdb serialization issues are resolved.

  • Your build runs on a Linux or Windows system that is supported by both Coverity Analysis and ElectricAccelerator.

  • Each ElectricAccelerator agent is running the ElectricAccelerator eRunner daemon.

  • Your build must not use Visual Studio commands, such as devenv.

  • Windows: Only makefile-based builds, such as nmake or Cygwin makefiles, are supported.

  • Windows: PCH duplication (that is, PCH+PDB splitting) is not supported. It is recommended that you set FORCE_Z7.

Gather required information

Before you begin the integration, make sure that you have:

  • A supported Coverity Analysis package for your platform.
    See the Coverity documentation for more information.

  • The hostname or IP address of your Electric Cloud Cluster Manager.

  • A username and password that can be used to access the Cluster Manager with cmtool in order to run the getAgents command, as illustrated below.
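
    For example, a quick way to verify the credentials is to log in with cmtool and run getAgents. The --cm option and login syntax shown here are assumptions based on typical cmtool usage; check the cmtool documentation for the exact syntax of your version:

      # Log in to the Cluster Manager and list the agents (syntax assumed; see the cmtool docs):
      % cmtool --cm=<cluster_manager_host> login <username> <password>
      % cmtool --cm=<cluster_manager_host> getAgents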

Initial setup

Before you can integrate Coverity Analysis for C/C++ into an ElectricAccelerator build, there are a few set-up steps that you need to perform on the build machine and on each ElectricAccelerator agent.

Installing Coverity Analysis for C/C++

Before your ElectricAccelerator build can call Coverity Analysis, you must make Coverity Analysis available and accessible to the build machine and to each agent. You can install Coverity Analysis on each machine, or share a common installation between them. For example, you can use an ElectricAccelerator Electric File System, or a shared directory.

IMPORTANT:

  • Coverity Analysis must be available in the same location on each ElectricAccelerator agent. Throughout this document, the location of Coverity Analysis is referred to as <install_dir>.

  • You can add the <install_dir> directory to the EMAKE_ROOT setting so that it is accessible from all of the ElectricAccelerator agents.

  • You must update the PATH for the build user so that ElectricAccelerator can find Coverity Analysis. You do not need to update the PATH on each agent, because the ElectricAccelerator build sets the PATH on each agent. An example follows this list.
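
For example, on Linux you might prepare the build user's environment as follows. This is only a sketch that assumes the example installation path used elsewhere in this article (/tools/cov-sa-linux-6.6.2):

# Make the Coverity Analysis tools visible to the build:
% export PATH=/tools/cov-sa-linux-6.6.2/bin:$PATH

# Optionally add the installation to the emake root so that the agents can
# access it (assumed to use the same colon-separated convention as PATH):
% export EMAKE_ROOT="$EMAKE_ROOT:/tools/cov-sa-linux-6.6.2"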

Configuring your compiler

The compiler configuration must be available on the build machine as well as on each agent. If you have separate installations on each machine, the compiler must be configured on each machine. If you have a shared installation, you only need to run the configuration once. To configure your compiler, use the cov-configure command.

On Linux systems, if you use the GCC compiler, you can use the following example:

% cov-configure --compiler gcc

On Windows systems, if you use the cl.exe compiler, you can use the following example:

C:\> cov-configure --compiler cl

Note :
You can use the ElectricAccelerator utility named clusterexec to run this command on each of your agents. For example:

  • Linux:

      % clusterexec --hosts "host1 ... hostN" \
          <install_dir>/bin/cov-configure --compiler gcc
  • Windows:

      C:\> clusterexec --hosts "host1 ... hostN" <install_dir>\bin\cov-configure --compiler cl

See the "Configuring compilers for Coverity Analysis" section (section 2.6) in the "Coverity Analysis 8.0 User and Administrator Guide" for more information about configuring your compiler.

Coverity integration files

ElectricAccelerator comes with the files that you need to integrate Coverity Analysis with an ElectricAccelerator build. These files are in the ElectricAccelerator install directory under /unsupported/coverity, or they can be obtained from Electric Cloud support. Copy them to the Coverity Analysis <install_dir>/bin directory. On Linux, the files are shell scripts. On Windows, the files are batch files.

Linux

To integrate with an ElectricAccelerator build on a Linux system, use the following shell scripts:

  • cov-ec-vars.sh

  • cov-ec-wrapper.sh

  • cov-ec-finalize-agents.sh

  • cov-ec-cleanup-agent.sh

The integration scripts must be executable and accessible on the build machine and on each agent.
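
For example, on a Linux system you can mark the scripts as executable with chmod; this assumes that the scripts were copied into the Coverity Analysis bin directory as described above:

% chmod +x <install_dir>/bin/cov-ec-*.sh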

Before you can use the scripts, you must customize them with values from your site configuration. You only need to customize the script cov-ec-vars.sh because the other scripts get their customized values from it. Edit the cov-ec-vars.sh script using the following values:

  • Cluster Manager host: Specify the IP address or hostname of your Electric Cloud Cluster Manager.

  • Cluster Manager username: Specify a valid username for the Cluster Manager. This user only needs to be able to run the getAgents command.

  • Cluster Manager password: Specify a valid password for the Cluster Manager user.

  • Agent filter: Specify the SQL query (if any) used to limit the results that the getAgents command returns. For more information about valid values for the filter, see the Electric Cloud cmtool documentation.

  • Coverity installation directory: Specify the path to the directory where Coverity Analysis is installed on each agent. The path should not include the bin directory. This location is also referred to as <install_dir>. For example: /tools/cov-sa-linux-6.6.2

  • Agent local directory: Specify the path to a local directory that each agent can use. This directory does not have to exist; however, the build user must have write permissions in the local directory that you specify.

  • Temporary directory: Specify the path to a temporary directory that can be accessed on the build machine. The default is /tmp, so you might not need to change this.

  • Intermediate directory name: Specify the default name of the resulting intermediate directory. You can customize the default name on a per-run basis. See the "Using a different intermediate directory name" section later in this document for more information.
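
As a rough illustration only, a customized cov-ec-vars.sh might end up looking something like the following. The variable names shown here are hypothetical placeholders, not the actual names used by the script; use the names that appear in your copy of cov-ec-vars.sh:

# Hypothetical settings -- the real variable names are defined in the
# cov-ec-vars.sh shipped with ElectricAccelerator:
CM_HOST=cm.example.com                       # Cluster Manager host
CM_USER=builduser                            # Cluster Manager username
CM_PASSWORD=secret                           # Cluster Manager password
AGENT_FILTER=""                              # optional getAgents filter
COVERITY_INSTALL=/tools/cov-sa-linux-6.6.2   # <install_dir> on each agent
AGENT_LOCAL_DIR=/var/tmp/cov-agent           # writable local directory on each agent
BUILD_TMP_DIR=/tmp                           # temporary directory on the build machine
INTERMEDIATE_NAME=coverity                   # default intermediate directory name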

Windows

To integrate with a Windows ElectricAccelerator build, use the following batch files:

  • cov-ec-vars.bat

  • cov-ec-wrapper.bat

  • cov-ec-finalize-agents.bat

  • cov-ec-cleanup-agent.bat

The integration batch files must be executable and accessible on the build machine and on each agent.

Before you can use the batch files, you must customize them using the following values from your site configuration. You only need to customize the batch file cov-ec-vars.bat because the other batch files get their customized values from it. Edit the batch file cov-ec-vars.bat using the following values:

  • Cluster Manager host: Specify the IP address or hostname of your Electric Cloud Cluster Manager.

  • Cluster Manager username: Specify a valid username for the Cluster Manager. This user only needs to be able to run the getAgents command.

  • Cluster Manager password: Specify a valid password for the Cluster Manager user.

  • Agent filter: Specify the SQL query (if any) used to limit the results that the getAgents command returns. For more information about valid values for the filter, see the Electric Cloud cmtool documentation.

  • Coverity installation directory: Specify the path to the directory where Coverity Analysis is installed on each agent. The path should not include the bin directory. This location is also referred to as <install_dir>. For example: C:\tools\cov-sa-win32-6.6.2

  • Agent local directory: Specify the path to a local directory that each agent can use. This directory does not have to exist; however, the build user must have write permissions in the local directory that you specify.

  • Temporary directory: Specify the path to a temporary directory that can be accessed on the build machine. The default is %TEMP%, so you might not need to change it.

  • Intermediate directory name: Specify the default name of the resulting intermediate directory. You can customize the default name on a per-run basis. See the "Using a different intermediate directory name" section later in this document for more information.
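
As with the Linux script, the following is only a hedged sketch of what a customized cov-ec-vars.bat might look like. The variable names are hypothetical placeholders; use the names that appear in your copy of cov-ec-vars.bat:

REM Hypothetical settings -- the real variable names are defined in the
REM cov-ec-vars.bat shipped with ElectricAccelerator:
set CM_HOST=cm.example.com
set CM_USER=builduser
set CM_PASSWORD=secret
set AGENT_FILTER=
set COVERITY_INSTALL=C:\tools\cov-sa-win32-6.6.2
set AGENT_LOCAL_DIR=C:\cov-agent
set BUILD_TMP_DIR=%TEMP%
set INTERMEDIATE_NAME=coverity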

Modifying the build

After the initial setup is complete, you must modify the build so that:

  • cov-translate is called alongside every native compile. The cov-translate command gathers the information necessary to perform the analysis.

  • The build gathers all of the analysis information from each of the agents at the end of the build.

To change your build, modify your makefiles.

Retaining existing functionality

When you modify your makefiles, it is important to control whether or not Coverity Analysis will run. To accomplish this, use the if functionality of make. The following example uses GNU make syntax:

ifeq ($(COVERITY),1)
     ... makefile contents for a Static Analysis build ...
else
     ... makefile contents for a non-Static Analysis build ...
endif

With this in your makefile, your existing functionality is kept intact, and you can perform a Coverity Analysis build with the emake COVERITY=1 command.

The syntax for a makefile that uses nmake is different than the syntax in the preceding example. See the "Differences using nmake makefiles" section in this document for details.

Injecting cov-translate calls into the build on Linux

To make sure that the cov-translate command is run alongside each compile, each native compile must be run through a wrapper script (on Linux) or batch file (on Windows). To make sure that the wrapper is called, the compiler macro, typically $(CC), must be overridden so that it calls the Coverity wrapper. For example, the original makefile lines might be:

CC=gcc
CXX=g++

On Linux systems, the modified makefile is:

ifeq ($(COVERITY), 1)
     CC=cov-ec-wrapper.sh gcc
     CXX=cov-ec-wrapper.sh g++
else
     CC=gcc
     CXX=g++
endif

Injecting cov-translate calls into the build on Windows with the Visual Studio add-in

On Windows systems, newer versions of the Visual Studio add-in (4.2.5/5.0 or greater) let you set environment variables that add a wrapper to the front of the cl.exe compile lines in the generated makefiles. This is much easier than editing the makefiles by hand.

The environment variable ECADDIN_COVERITY_BATCH_FILE should be set to cov-ec-wrapper.bat.

The environment variable ECADDIN_COVERITY_DATABASE_LOCATION should be set to the location of the Coverity database that is set by coverity_agent_local-dir in cov-ec-vars.bat (for example, c:\cov-agent).

When using precompiled headers, the emit database for the PCH creator is copied and used by the PCH user. Copying the PCH database is necessary to ensure subsequent compiles can be parallelized. The Visual Studio add-in will create a maximum of ECADDIN_MAX_PDB_FILES databases.

When not using precompiled headers, a unique emit database is created for each compilation.

Add --emake-exclude-env=COMPUTERNAME to ensure that the agent hostname is used when creating the emit database.

Set ECADDIN_USE_INLINE_FILES=true if your PATH contains Chinese characters.
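
For example, before launching the build from Visual Studio you might set the variables from a command prompt. The ECADDIN_MAX_PDB_FILES value below is an arbitrary illustration, and c:\cov-agent is the example agent-local directory mentioned above:

# Point the add-in at the Coverity wrapper and the emit database location:
C:\> set ECADDIN_COVERITY_BATCH_FILE=cov-ec-wrapper.bat
C:\> set ECADDIN_COVERITY_DATABASE_LOCATION=c:\cov-agent
C:\> set ECADDIN_MAX_PDB_FILES=16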

Injecting cov-translate calls into the build on Windows by hand

On Windows systems, the modified makefile is:

ifeq ($(COVERITY), 1)
     CC=cov-ec-wrapper.bat gcc
     CXX=cov-ec-wrapper.bat g++
else
     CC=gcc
     CXX=g++
endif

Alternative to modifying the makefiles

Note : An alternative to modifying each makefile is to override $(CC) on the command line. The drawback to this method is that you must remember to override the $(CC) macro each time you want to perform a Coverity Analysis build. If you use the nmake command, you might need to modify your makefiles before you can override a macro from the command line. See the "Macros set on the command line" section in this document for details.

  • Linux:

      % emake CC=cov-ec-wrapper.sh\ gcc CXX=cov-ec-wrapper.sh\ g++ ...

    The overridden macros must not contain quotes or this will not work, hence the escaped spaces.

  • Windows:

      C:\> emake CC="cov-ec-wrapper.bat gcc" CXX="cov-ec-wrapper.bat g++" ...

Gathering the build results

Because cov-translate calls are injected into the ElectricAccelerator build, cov-translate creates Coverity Analysis emit directories on each of the ElectricAccelerator agents that are used. Before the analysis, you must gather and merge all of the individual emits with the cov-ec-finalize-agents.sh script or cov-ec-finalize-agents.bat batch file; you must modify the build to call the script or batch file.

Note : The cov-ec-finalize-agents.sh script or cov-ec-finalize-agents.bat batch file uses the Electric Cloud tools clusterdownload and clusterexec to gather and clean up the emits from each of the agents. Both of these tools require that the eRunner daemon is running on each agent. You can modify the cov-ec-finalize-agents.sh script or cov-ec-finalize-agents.bat batch file to use other tools, such as ssh or scp, to perform these steps.

When you call the cov-ec-finalize-agents.sh script or cov-ec-finalize-agents.bat batch file, the following requirements apply:

  • The script or batch file is the last thing that your build calls.

  • The script or batch file is passed the ECLOUD_BUILD_ID as its first argument.

  • The script or batch file is called on the build machine and not handed out to the agents.

Gathering the results inside of make

To modify your makefiles to call the cov-ec-finalize-agents.sh script or
cov-ec-finalize-agents.bat batch file, add a new target to your makefile.

On Linux systems:

# If this is a Static Analysis build:
ifeq ($(COVERITY), 1)

# Define the finalize target to gather all of the emits:
#pragma runlocal
cov-finalize:
      @cov-ec-finalize-agents.sh $(ECLOUD_BUILD_ID)
else

# Not a Static Analysis build; Define an empty target:
cov-finalize:
endif

On Windows systems:

# If this is a Static Analysis build:
ifeq ($(COVERITY), 1)

# Define the finalize target to gather all of the emits:
#pragma runlocal
cov-finalize:
     @cov-ec-finalize-agents.bat $(ECLOUD_BUILD_ID)
else

# Not a Static Analysis build; Define an empty target:
cov-finalize:
endif

In the preceding examples:

  • Although it is not required, an empty cov-finalize target is defined for non-Coverity Analysis builds.

  • The cov-finalize target is marked with #pragma runlocal, which is an Electric Cloud directive that ensures that this target will run locally on the build machine. The directive is required.

  • You must either specify the cov-finalize target on the emake command line or add it as a dependency of the appropriate target in your build (see the sketch after this list). Keep in mind that the cov-finalize target must run last.
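
For example, if the default target of your build is all, one minimal way to run cov-finalize automatically is to make it depend on that target. This is a sketch in GNU make syntax that assumes a top-level target named all; adjust the target name for your build:

# If this is a Static Analysis build, gather the emits only after "all" completes:
ifeq ($(COVERITY), 1)
cov-finalize: all
endif

With this dependency in place, running emake COVERITY=1 cov-finalize builds the default target first and then gathers the emits.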

Gathering the results outside of make

Rather than having the build call the cov-ec-finalize-agents.sh script or the cov-ec-finalize-agents.bat batch file, you can run the script or batch file after the build is complete. However, you must run it as the same user that ran the build.

  • For example, as the build user on a Linux system:

      % cov-ec-finalize-agents.sh NNN
  • For example, as the build user on a Windows system:

      C:\> cov-ec-finalize-agents.bat NNN


    where NNN is the ECLOUD_BUILD_ID of the completed build.

Important : Until the cov-ec-finalize-agents.sh script or cov-ec-finalize-agents.bat batch file is called, all of the temporary emit directories will remain on the agents.

Performing a build

After your build is modified, it is time to try a Coverity Analysis build with ElectricAccelerator. This is as easy as running an emake build.

The actual command you run might vary slightly, so there are a few important points regarding the sample commands that follow:

  • If you use the Visual Studio add-in, run the build as you normally would (Electric Cloud -> Rebuild Solution).

  • On Windows systems without the add-in, you need to specify the appropriate --emake-emulation argument, either nmake or cygwin, depending on your build.

  • If you modified the dependencies of your makefile to call the cov-finalize target automatically, do not specify it on the command line.

  • If you did not override the CC macro within your makefiles, you need to specify it on the command line.

  • If you usually run emake without specifying a target, you need to identify the default target for the makefile and then specify that target along with the cov-finalize target on the command line. (If you added the cov-finalize target as a dependency, you do not need to specify it.) The default target in a makefile is either the first target defined in the makefile or the target specified by the .default directive.

The following examples perform a Coverity Analysis build using ElectricAccelerator.

If you previously started your build with:

  • Linux:

      % emake all
  • Windows:

      C:\> emake all

You could now run:

  • Linux:

      % emake COVERITY=1 all cov-finalize
  • Windows:

      C:\> emake COVERITY=1 all cov-finalize

The resulting emit directory is now in the current directory and is named coverity, or whatever name you customized it to be in the cov-ec-vars.sh script or cov-ec-vars.bat batch file.

Using a different intermediate directory name

The scripts or batch files use a default emit directory named coverity, or whatever name you customized it to be in the cov-ec-vars.sh script or cov-ec-vars.bat batch file. If you want a different emit directory name, you can change it on the command line by setting the COVERITY_INTERMEDIATE macro:

% emake COVERITY=1 COVERITY_INTERMEDIATE=coverity.myname all cov-finalize

On the preceding command line, the final emit directory is named coverity.myname.

Running the analysis

After you have your emit directory, you can run the analysis as usual.

  • Linux:

      % cov-analyze --dir coverity
  • Windows:

      C:\> cov-analyze --dir coverity

Validating the results

Because the wrapper script or batch file was injected by overriding the compiler $(CC) makefile macro, it is possible that some compiles were missed. For example, a Makefile might have directly called gcc.
Therefore, it is a good idea to compare the results from a serial cov-build to those that were obtained from an ElectricAccelerator build.

Linux:

# build with cov-build
% cov-build --dir coverity.covbuild make all

# build with ElectricAccelerator
% emake COVERITY=1 COVERITY_INTERMEDIATE=coverity.emake all cov-finalize

Windows:

# build with cov-build
C:\> cov-build --dir coverity.covbuild make all

# build with ElectricAccelerator
C:\> emake COVERITY=1 COVERITY_INTERMEDIATE=coverity.emake all cov-finalize

Validation 1: Number of translation units

You can check whether any translation units are missing by using the cov-manage-emit command (and a few shell commands if you are on Linux) to compare the emit that is generated by the ElectricAccelerator build with the emit that is generated by the cov-build command.

Linux:

# Generate the list of translation units for the cov-build emit:
% cov-manage-emit --dir coverity.covbuild list \
| grep -v "Translation unit" | cut -d ">" -f 2 | sort > covbuild.lst

# Generate the list of translation units for the ElectricAccelerator emit:
% cov-manage-emit --dir coverity.emake list \
| grep -v "Translation unit" | cut -d ">" -f 2 | sort > emake.lst

# Compare the results:
% diff covbuild.lst emake.lst

Windows:

# Generate the list of translation units for the cov-build emit:
C:\> cov-manage-emit --dir coverity.covbuild list

# Generate the list of translation units for the ElectricAccelerator emit:
C:\> cov-manage-emit --dir coverity.emake list
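
As a rough Windows counterpart to the Linux comparison above, you can redirect each listing to a file and compare the files, for example with the fc command. Keep in mind that the translation unit IDs may differ between the two emits (see the note below), so concentrate on the source file names:

# Capture both listings and compare them:
C:\> cov-manage-emit --dir coverity.covbuild list > covbuild.lst
C:\> cov-manage-emit --dir coverity.emake list > emake.lst
C:\> fc covbuild.lst emake.lst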

If there are any differences, find the location in your makefiles where that source file is compiled. There are a couple of likely reasons why a source file might be missed:

  • The file is not compiled using the $(CC) macro; instead, it is compiled by a direct call to the compiler.
    In this case, the solution might be to modify the makefile to use the $(CC) macro instead of calling the compiler directly. The goal is to have the compile wrapped by the cov-ec-wrapper.sh script or cov-ec-wrapper.bat batch file.

  • The file is compiled using a different macro, such as $(CXX).
    In this case, the proper solution is to override the different macro, such as $(CXX), similar to how $(CC) was overridden.

    Note : Differences marked with (no code) can safely be ignored.

    It is only important that the same translation units (source files) show up in the output. The IDs associated with each translation unit can be different between the two emits.

After you have modified the build, perform a new ElectricAccelerator build and repeat the validation step.

Validation 2: Analysis results

Although you can perform the previous validation quickly, a more thorough validation is to compare the analysis results. To do this, run an analysis on each emit and compare the results.

Linux:

# analyze the cov-build emit
% cov-analyze --dir coverity.covbuild

# analyze the ElectricAccelerator emit
% cov-analyze --dir coverity.emake

Windows:

# analyze the cov-build emit
C:\> cov-analyze --dir coverity.covbuild

# analyze the ElectricAccelerator emit
C:\> cov-analyze --dir coverity.emake

Both runs should have similar results.

Differences using nmake makefiles

There are a few important differences between GNU make and nmake that relate to the instructions previously described.

Syntax for if

One of the most noticeable differences between GNU make and nmake is the syntax used for the if functionality. An if in nmake is similar to:

!if "$(COVERITY)" == "1"
    ... makefile contents for a Static Analysis build ...
!else
    ... makefile contents for a non-Static Analysis build ...
!endif

The makefile fragment required for this integration is similar to:

# if this is a Static Analysis build
!if "$(COVERITY)" == "1"

# Define the finalize target to gather all of the emits:
#pragma runlocal
cov-finalize:
     @cov-ec-finalize-agents.bat $(ECLOUD_BUILD_ID)
!else
# Not a Static Analysis build, define an empty target
cov-finalize:
!endif

Macros set on the command line

GNU make allows you to set a macro on the command line that is passed down to all of the sub-makes and overrides any value that might be specified within the sub-makefiles. For example:

C:\> make NUMBER=1

In the preceding line, the NUMBER macro is set to 1 in all of the sub-makes regardless of what it is set to in the sub-makefiles. This behavior makes it easy to override the compiler macro, such as CC, in all sub-makefiles by specifying it on the command line.

Unfortunately, nmake behaves differently; if a macro is redefined within a sub-makefile, the recursive make does not override the value by default. Therefore, if the compiler macro is defined in all of your makefiles, you cannot override it on the command line. However, you can modify the nmake makefiles so that they behave similarly to GNU make.

There are two procedures that you can use to modify your nmake makefiles to behave similarly to GNU make:

  • To explicitly set the macro when recursively calling make

  • To explicitly pass down the MAKEFLAGS macro

To explicitly set the macro when recursively calling make

  1. You can explicitly set the macro in question when recursively calling nmake.

    For example, your recursive nmake calls might look like:

     $(MAKE) /f subdir/Makefile CC=$(CC)


    The preceding line passes down the current value of CC, or whatever macro you specify, to the sub-make.

  2. Make the preceding change everywhere that nmake is called recursively.

To explicitly pass down the MAKEFLAGS macro

  1. Change the makefile to explicitly pass down MAKEFLAGS when recursively calling make.

  2. If your recursive make call looks like:

     $(MAKE) /f subdir/Makefile



    Change it to:

     $(MAKE) /$(MAKEFLAGS) /f subdir/Makefile


  3. After all recursive make calls are updated, run nmake and specify the /e argument on the command line.

    For example:

     C:\> nmake /e ONE=1 TWO=2 THREE=3


    In the preceding example, the values for all three macros (ONE, TWO, THREE) are passed down to the sub-makes and override any values that might be set in the respective makefiles. Although this procedure is similar to the one in the "To explicitly set the macro when recursively calling make" section, it has the advantage that you do not need to explicitly name the macros that are passed down. However, it does require that you remember the /e flag on the command line.