Jenkins: Getting Karma Generated Test Results to Appear in Maven Project Job

PROBLEM

Jenkins, for some reason, does not pick up Karma-generated JUnit test reports even though they are created in the right directory… and apparently, it is a known problem. While a Freestyle project job allows us to publish these JUnit reports manually, my intention is to rely on a Maven project job to do the same thing.

PROJECT STRUCTURE

For simplicity's sake, we will use the following project structure in this example:-

testKarma
├── pom.xml
└── src
    ├── main
    │   ├── java
    │   │   └── com
    │   │       └── choonchernlim
    │   │           └── testKarma
    │   │               └── main
    │   │                   └── Main.java
    │   └── webapp
    │       └── resources
    │           └── js
    │               └── main.js
    └── test
        ├── java
        │   └── com
        │       └── choonchernlim
        │           └── testKarma
        │               └── main
        │                   └── MainTest.java
        ├── js
        │   └── main-spec.js
        └── resources
            ├── karma.conf.js
            └── karma.jenkins.conf.js

CONFIGURING KARMA CONFIG FILE

src/test/resources/karma.conf.js

This Karma config file is used for running tests locally, and it looks like this:-

module.exports = function ( config ) {
    config.set( {
        basePath         : '../../../',
        frameworks       : ['jasmine'],
        files            : [
            'src/main/webapp/resources/js/**/*.js',
            'src/test/js/**/*.js'
        ],
        exclude          : [],
        preprocessors    : {
            'src/main/webapp/resources/js/**/*.js' : ['coverage']
        },
        reporters        : ['progress', 'coverage'],
        port             : 9876,
        colors           : true,
        logLevel         : config.LOG_INFO,
        autoWatch        : true,
        browsers         : ['Chrome', 'PhantomJS'],
        singleRun        : false,
        plugins          : [
            'karma-jasmine',
            'karma-chrome-launcher',
            'karma-phantomjs-launcher',
            'karma-junit-reporter',
            'karma-coverage'
        ],
        coverageReporter : {
            type : 'html',
            dir  : 'target/coverage/'
        }
    } );
};

src/test/resources/karma.jenkins.conf.js

This Karma config file contains a slight modification of karma.conf.js, and it is used for running tests in Jenkins:-

module.exports = function ( config ) {
    config.set( {
        basePath         : '../../../',
        frameworks       : ['jasmine'],
        files            : [
            'src/main/webapp/resources/js/**/*.js',
            'src/test/js/**/*.js'
        ],
        exclude          : [],
        preprocessors    : {
            'src/main/webapp/resources/js/**/*.js' : ['coverage']
        },
        // added `junit`
        reporters        : ['progress', 'junit', 'coverage'],
        port             : 9876,
        colors           : true,
        logLevel         : config.LOG_INFO,
        // don't watch for file changes
        autoWatch        : false,
        // run only on a headless browser
        browsers         : ['PhantomJS'],
        // just run one time
        singleRun        : true,
        // removed `karma-chrome-launcher` because we will be
        // running on a headless browser in Jenkins
        plugins          : [
            'karma-jasmine',
            'karma-phantomjs-launcher',
            'karma-junit-reporter',
            'karma-coverage'
        ],
        // changed type to `cobertura`
        coverageReporter : {
            type : 'cobertura',
            dir  : 'target/coverage-reports/'
        },
        // saves report at `target/surefire-reports/TEST-*.xml` 
        // because Jenkins looks for this location and file
        // prefix by default.
        junitReporter    : {
            outputFile : 'target/surefire-reports/TEST-karma-results.xml'
        }
    } );
};

ENSURING KARMA GENERATED JUNIT REPORT SHOWS UP IN JENKINS…

This is indeed the most important part of getting the Karma-Jenkins integration to work. Instead of manually running the karma start command in Jenkins, we will rely on the maven-karma-plugin to do this for us. The key here is to specify the correct <phase> so that Jenkins picks up and presents the generated report.

pom.xml

<plugin>
    <groupId>com.kelveden</groupId>
    <artifactId>maven-karma-plugin</artifactId>
    <version>1.6</version>
    <executions>
        <execution>
            <phase>process-test-classes</phase>
            <goals>
                <goal>start</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <configFile>src/test/resources/karma.jenkins.conf.js</configFile>
        <browsers>PhantomJS</browsers>
    </configuration>
</plugin>

CONFIGURING JENKINS

Since the Karma test runner requires NodeJS, we will install the NodeJS Plugin in Jenkins. This allows us to install NodeJS automatically from Jenkins.

Once installed, go to Manage Jenkins -> Configure System and scroll to the NodeJS section:-

Although this section allows us to specify npm packages to install, I had trouble installing certain packages, such as karma-phantomjs-launcher. The phantomjs package invokes node install.js during installation; however, the node command isn't available in the PATH environment variable at this point, so the installation will always fail. Instead, the npm packages will be configured at the job level in the next step.

Next, create a Maven project job and configure it.

Configuring Build Environment, Pre Steps and Build

We exposed NodeJS to the PATH environment variable so that we can install the phantomjs package. Next, we created a pre-build step to install any necessary plugins globally. Finally, we want Maven to invoke the test goal so that it runs both the Java tests and the Karma test runner.

Configuring Coverage Report

We provided the Karma-generated coverage report XML file.

OUTCOME

When we run Build Now in Jenkins, the unit test and coverage reports will display both the Java and JavaScript execution results.

Combining and Minifying JavaScript Files with Google Closure Compiler

GOAL

The goal is to combine and minify several JS files into one JS file in the right order.

PROBLEM

Let’s assume we have the following directory structure with three JS files.

Directory Structure

appdev
└── src
    ├── appdev.blackcow.js
    ├── appdev.js
    └── appdev.whitesheep.js

appdev.js

var AppDev = {
    modules : [],

    start : function () {
        var moduleName;
        for ( moduleName in AppDev.modules ) {
            if ( AppDev.modules.hasOwnProperty( moduleName ) ) {
                AppDev.modules[moduleName]();
            }
        }
    }
};

appdev.blackcow.js

AppDev.modules.blackcow = function ( config ) {
    console.log( 'in black cow module...', config );
};

appdev.whitesheep.js

AppDev.modules.whitesheep = function ( config ) {
    console.log( 'in white sheep module...', config );
};

SOLUTION

To pull this off, we will leverage the Google Closure Compiler.

There are multiple ways to use this compiler, but if you are using a Mac, the easiest approach, in my opinion, is to install it with Homebrew:-

brew install closure-compiler

Once installed, navigate to the appdev directory and run the following command:-

closure-compiler --js `find src/**/*.js` --js_output_file appdev.min.js

Now, a new file called appdev.min.js will be created.

appdev
├── appdev.min.js
└── src
    ├── appdev.blackcow.js
    ├── appdev.js
    └── appdev.whitesheep.js

The reformatted file content looks like this:-

AppDev.modules.blackcow = function ( a ) {
    console.log( "in black cow module...", a )
};
var AppDev = {
    modules : [], start : function () {
        for ( var a in AppDev.modules ) {
            if ( AppDev.modules.hasOwnProperty( a ) ) {
                AppDev.modules[a]()
            }
        }
    }
};
AppDev.modules.whitesheep = function ( a ) {
    console.log( "in white sheep module...", a )
};

The generated code is going to cause a problem because the Closure Compiler simply concatenates the file contents in the alphabetical order of the JS file names. As a result, AppDev.modules.blackcow is assigned before the AppDev object itself is defined.

To fix this, we have to specify the dependencies so that the Closure Compiler will auto-sort the files correctly.

appdev.js

goog.provide('AppDev');

var AppDev = {
    modules : [],

    start : function () {
        var moduleName;
        for ( moduleName in AppDev.modules ) {
            if ( AppDev.modules.hasOwnProperty( moduleName ) ) {
                AppDev.modules[moduleName]();
            }
        }
    }
};

appdev.blackcow.js

goog.require('AppDev');

AppDev.modules.blackcow = function ( config ) {
    console.log( 'in black cow module...', config );
};

appdev.whitesheep.js

goog.require('AppDev');

AppDev.modules.whitesheep = function ( config ) {
    console.log( 'in white sheep module...', config );
};

After rerunning the compiler, the file content for appdev.min.js now looks correct.

var AppDev = {
    modules : [], start : function () {
        for ( var a in AppDev.modules ) {
            if ( AppDev.modules.hasOwnProperty( a ) ) {
                AppDev.modules[a]()
            }
        }
    }
};
AppDev.modules.blackcow = function ( a ) {
    console.log( "in black cow module...", a )
};
AppDev.modules.whitesheep = function ( a ) {
    console.log( "in white sheep module...", a )
};

MS SQL Server: Executing SQL Script from Command Line

PROBLEM

When opening a 150MB SQL script file in Microsoft SQL Server Management Studio, the following error appears:-

SOLUTION

Instead of opening the large SQL script file and executing it, we can execute it directly from the command line.

sqlcmd -E -d[database_name] -i[sql_file_path]

… where -E uses a trusted connection, -d specifies the database name, and -i specifies the SQL script file path.

For example,

sqlcmd -E -dshittydb -ic:\Users\shittyuser\shittydb.sql

IntelliJ: Overriding Log4J Configuration Globally for JUnit

PROBLEM

Most of the time, we may have several Log4J configurations depending on the environment, for example:-

  • log4j.xml (Log4J) or log4j2.xml (Log4J2) – Production configuration using socket appender.
  • log4j-dev.xml (Log4J) or log4j2-dev.xml (Log4J2) – Development configuration using console appender.

Since log4j.xml and log4j2.xml are the default configuration files for Log4J and Log4J2, these configurations will always be used unless we override the configuration file path.

In other words, if we don't override the configuration file path and we run our JUnit test cases offline from IntelliJ, they may take a very long time to execute due to the broken socket connection. Furthermore, it is not a good idea to clutter our production log files with non-production logs.

SOLUTION

To fix this, we need to configure IntelliJ to always use our development Log4J configuration.

First, on the menu bar, select Run -> Edit Configurations...

Then, delete all the existing JUnit configuration files.

Finally, expand Defaults -> JUnit.

Under VM options, specify the following system property:-

  • For Log4J, specify -Dlog4j.configuration=log4j-dev.xml
  • For Log4J2, specify -Dlog4j.configurationFile=log4j2-dev.xml

Please note the slight difference in the system property name depending on the Log4J version.
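If in doubt, a quick throwaway check (a hypothetical helper, not part of Log4J) can print which property the JVM actually received:-

```java
public class Log4jConfigCheck {
    public static void main(String[] args) {
        // Log4J 1.x reads `log4j.configuration`, Log4J2 reads `log4j.configurationFile`;
        // whichever VM option we set in IntelliJ should show up here (null otherwise)
        System.out.println("log4j.configuration     = " + System.getProperty("log4j.configuration"));
        System.out.println("log4j.configurationFile = " + System.getProperty("log4j.configurationFile"));
    }
}
```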

Now, when we run our JUnit test cases from IntelliJ, it will always pick up the correct custom Log4J configuration file.

Java + Groovy: Creating Immutable List

Java: Mutable List

// class java.util.Arrays$ArrayList
final List<Integer> mutableList = Arrays.asList(1, 2, 3);

Java: Immutable List

// class java.util.Collections$UnmodifiableRandomAccessList
final List<Integer> immutableList = Collections.unmodifiableList(Arrays.asList(1, 2, 3));

Java: Immutable List using Guava

// class com.google.common.collect.RegularImmutableList
final ImmutableList<Integer> guavaImmutableList = ImmutableList.of(1, 2, 3);

Groovy: Immutable List

// class java.util.Collections$UnmodifiableRandomAccessList
final def groovyImmutableList = [1, 2, 3].asImmutable()
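As a quick sanity check (a minimal runnable sketch, not from the original snippets), the difference between the fixed-size list and the unmodifiable wrapper can be demonstrated like this:-

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ImmutableListDemo {
    public static void main(String[] args) {
        // Arrays.asList returns a fixed-size view backed by an array:
        // set() is allowed, but add()/remove() throw UnsupportedOperationException
        List<Integer> mutableList = Arrays.asList(1, 2, 3);
        mutableList.set(0, 99);
        System.out.println(mutableList); // [99, 2, 3]

        // Collections.unmodifiableList rejects all mutation attempts
        List<Integer> immutableList = Collections.unmodifiableList(Arrays.asList(1, 2, 3));
        try {
            immutableList.set(0, 99);
        } catch (UnsupportedOperationException e) {
            System.out.println("mutation rejected");
        }
    }
}
```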

IntelliJ: Selectively Disable Line Wrap

PROBLEM

Sometimes, we have very lengthy statements that look like this:-

@Service
public class LengthOfStayHeuristicServiceImpl extends HeuristicService {
    @Override
    public Double compute(HeuristicBean heuristicBean) {
        ...
				
        setValue(map, surgeryLocationMap, LengthOfStayAlgorithm.VariableEnum.SURGERY_LOCATION_LUMBAR_OR_SACRAL_AND_LUMBOSACRAL_MINUS_CERVICAL_AND_NOT_SPECIFIED);
        setValue(map, surgeryLocationMap, LengthOfStayAlgorithm.VariableEnum.SURGERY_LOCATION_LUMBAR_OR_SACRAL_MINUS_LUMBOSACRAL);
        setValue(map, surgeryLocationMap, LengthOfStayAlgorithm.VariableEnum.SURGERY_LOCATION_CERVICAL_MINUS_NOT_SPECIFIED);

        return new LengthOfStayAlgorithm(map).run();
    }
}

When we reformat the code in IntelliJ, it becomes like this:-

@Service
public class LengthOfStayHeuristicServiceImpl extends HeuristicService {
    @Override
    public Double compute(HeuristicBean heuristicBean) {
        ...
				
        setValue(map,
                 surgeryLocationMap,
                 LengthOfStayAlgorithm.VariableEnum.SURGERY_LOCATION_LUMBAR_OR_SACRAL_AND_LUMBOSACRAL_MINUS_CERVICAL_AND_NOT_SPECIFIED);
        setValue(map,
                 surgeryLocationMap,
                 LengthOfStayAlgorithm.VariableEnum.SURGERY_LOCATION_LUMBAR_OR_SACRAL_MINUS_LUMBOSACRAL);
        setValue(map,
                 surgeryLocationMap,
                 LengthOfStayAlgorithm.VariableEnum.SURGERY_LOCATION_CERVICAL_MINUS_NOT_SPECIFIED);

        return new LengthOfStayAlgorithm(map).run();
    }
}

There are times when we really don't want the long statements to wrap because the result looks messy.

SOLUTION

While there is no option to selectively disable just the line wrap in IntelliJ, there is a way to selectively disable code formatting.

First, we need to enable the Formatter Control.

Now, we can annotate our code like this:-

@Service
public class LengthOfStayHeuristicServiceImpl extends HeuristicService {
    @Override
    public Double compute(HeuristicBean heuristicBean) {
        ...
				
        // @formatter:off
        setValue(map, surgeryLocationMap, LengthOfStayAlgorithm.VariableEnum.SURGERY_LOCATION_LUMBAR_OR_SACRAL_AND_LUMBOSACRAL_MINUS_CERVICAL_AND_NOT_SPECIFIED);
        setValue(map, surgeryLocationMap, LengthOfStayAlgorithm.VariableEnum.SURGERY_LOCATION_LUMBAR_OR_SACRAL_MINUS_LUMBOSACRAL);
        setValue(map, surgeryLocationMap, LengthOfStayAlgorithm.VariableEnum.SURGERY_LOCATION_CERVICAL_MINUS_NOT_SPECIFIED);
        // @formatter:on

        return new LengthOfStayAlgorithm(map).run();
    }
}

When we reformat the code, that portion of code will remain unformatted.

Java: Promoting Testability by Having Enum Implementing an Interface

OVERVIEW

This post illustrates how a minor refactoring lets us write a better test case without polluting our production code with non-production code.

PROBLEM

Let’s assume we have a simple Data Reader that reads all the lines of a given algorithm data file and returns them:-

public class DataReader {
    public List<String> getDataLines(AlgorithmEnum algorithm) {
        // we have `StS-data.txt`, `CtE-data.txt` and `TtI-data.txt` under `src/main/resources` dir
        String fileName = String.format("%s-data.txt", algorithm.getShortName());
        Scanner scanner = new Scanner(getClass().getClassLoader().getResourceAsStream(fileName));

        List<String> list = new ArrayList<String>();

        while (scanner.hasNextLine()) {
            list.add(scanner.nextLine());
        }

        return list;
    }
}

This API accepts an AlgorithmEnum, which looks something like this:-

public enum AlgorithmEnum {
    SKIN_TO_SKIN("StS"),
    CLOSURE_TO_EXIT("CtE"),
    TIME_TO_INCISION("TtI");

    private String shortName;
    
    AlgorithmEnum(String shortName) {
        this.shortName = shortName;
    }

    public String getShortName() {
        return shortName;
    }
}

Let’s assume each algorithm data file has millions of data lines.

So, how do we test this code?

SOLUTION 1: Asserting Actual Line Count == Expected Line Count

One straightforward way is to:-

  • Pass one of the Enum constants (AlgorithmEnum.SKIN_TO_SKIN, etc.) into DataReader.getDataLines(..)
  • Get the actual line count
  • Assert the actual line count against the expected line count

public class DataReaderTest {
    @Test
    public void testGetDataLines() {
        List<String> lines = new DataReader().getDataLines(AlgorithmEnum.SKIN_TO_SKIN);
        assertThat(lines, hasSize(7500));
    }
}

This is a pretty weak test because we only check the line count. Since we are dealing with a lot of data lines, it is practically impossible to verify the correctness of each data line.

SOLUTION 2: Adding a Test Constant to AlgorithmEnum

Another approach is to add a test constant to AlgorithmEnum:-

public enum AlgorithmEnum {
    SKIN_TO_SKIN("StS"),
    CLOSURE_TO_EXIT("CtE"),
    TIME_TO_INCISION("TtI"),
		
    // added a constant for testing purposes
    TEST_ABC("ABC");

    private String shortName;
    
    AlgorithmEnum(String shortName) {
        this.shortName = shortName;
    }

    public String getShortName() {
        return shortName;
    }
}

Now, we can easily test the code with our test data stored at src/test/resources/ABC-data.txt:-

public class DataReaderTest {
    @Test
    public void testGetDataLines() {
        List<String> lines = new DataReader().getDataLines(AlgorithmEnum.TEST_ABC);
        assertThat(lines, is(Arrays.asList("line 1", "line 2", "line 3")));
    }
}

While this approach works, we pretty much polluted our production code with non-production code, which may become a maintenance nightmare as the project grows larger in the future.

SOLUTION 3: AlgorithmEnum Implements an Interface

Instead of writing a mediocre test case or polluting the production code with non-production code, we can perform a minor refactoring to our existing production code.

First, we create a simple interface:-

public interface Algorithm {
    String getShortName();
}

Then, we make AlgorithmEnum implement Algorithm:-

public enum AlgorithmEnum implements Algorithm {
    SKIN_TO_SKIN("StS"),
    CLOSURE_TO_EXIT("CtE"),
    TIME_TO_INCISION("TtI");

    private String shortName;

    AlgorithmEnum(String shortName) {
        this.shortName = shortName;
    }

    public String getShortName() {
        return shortName;
    }
}

Now, instead of passing AlgorithmEnum into getDataLines(...), we will pass in the Algorithm interface.

public class DataReader {
    public List<String> getDataLines(Algorithm algorithm) {
        String fileName = String.format("%s-data.txt", algorithm.getShortName());
        Scanner scanner = new Scanner(getClass().getClassLoader().getResourceAsStream(fileName));
    
        List<String> list = new ArrayList<String>();
    
        while (scanner.hasNextLine()) {
            list.add(scanner.nextLine());
        }

        return list;
    }
}

With these minor changes, we can easily unit test the code with our mock data stored under the src/test/resources directory.

public class DataReaderTest {
    @Test
    public void testGetDataLines() {
        List<String> lines = new DataReader().getDataLines(new Algorithm() {
            @Override
            public String getShortName() {
                // we have `ABC-data.txt` under `src/test/resources` dir
                return "ABC";
            }
        });

        assertThat(lines, is(Arrays.asList("line 1", "line 2", "line 3")));
    }
}
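As a side note, since Algorithm has a single abstract method, on Java 8+ the anonymous class can be replaced with a lambda. Here is a minimal self-contained sketch (the nested Algorithm copy and the dataFileName helper are illustrative, not part of the original code):-

```java
public class LambdaAlgorithmDemo {
    // minimal copy of the Algorithm interface from the post
    interface Algorithm {
        String getShortName();
    }

    // same naming scheme DataReader.getDataLines(..) uses internally
    static String dataFileName(Algorithm algorithm) {
        return String.format("%s-data.txt", algorithm.getShortName());
    }

    public static void main(String[] args) {
        // a lambda stands in for the anonymous class in the test above
        Algorithm testAlgorithm = () -> "ABC";
        System.out.println(dataFileName(testAlgorithm)); // ABC-data.txt
    }
}
```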