Spring: Component Scan Selected Classes

PROBLEM

Let’s assume we have a package with the following classes, where each class is annotated with one of Spring’s stereotype annotations: @Service, @Component, @Controller or @Repository.

app
├── A.groovy
├── B.groovy
├── C.groovy
├── D.groovy
└── E.groovy

When writing a unit test, we want Spring to component scan only class A and class B.
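For illustration, each class is just a minimal Groovy class carrying one of the stereotype annotations; class A, for instance, might look like this sketch (the choice of @Service here is arbitrary):-

package app

import org.springframework.stereotype.Service

@Service
class A {
}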

SOLUTION

Before we begin, we configure Log4j to log Spring at debug level so that we can see exactly which classes get scanned.

<logger name="org.springframework">
    <level value="debug"/>
</logger>

Step 1

If we configure the test class like this…

@ContextConfiguration
class ASpec extends Specification {
    @Configuration
    @ComponentScan(
            basePackageClasses = [A]
    )
    static class TestConfig {
    }

    def "..."() {
        // ...
    }
}

It will scan every Spring component that resides in the same package (and subpackages) as class A; basePackageClasses is simply a type-safe way of naming the package to scan.

Debugging log:-

[DEBUG] [ClassPathBeanDefinitionScanner] [findCandidateComponents:294] - Identified candidate component class: file [/path/target/classes/app/A.class]
[DEBUG] [ClassPathBeanDefinitionScanner] [findCandidateComponents:294] - Identified candidate component class: file [/path/target/classes/app/B.class]
[DEBUG] [ClassPathBeanDefinitionScanner] [findCandidateComponents:294] - Identified candidate component class: file [/path/target/classes/app/C.class]
[DEBUG] [ClassPathBeanDefinitionScanner] [findCandidateComponents:294] - Identified candidate component class: file [/path/target/classes/app/D.class]
[DEBUG] [ClassPathBeanDefinitionScanner] [findCandidateComponents:294] - Identified candidate component class: file [/path/target/classes/app/E.class]

Step 2

We can set includeFilters to include just class A and class B…

@ContextConfiguration
class ASpec extends Specification {
    @Configuration
    @ComponentScan(
            basePackageClasses = [A],
            includeFilters = [@ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE, value = [A, B])]
    )
    static class TestConfig {
    }

    def "..."() {
        // ...
    }
}

… but the filter doesn’t appear to do anything: all five classes are still picked up.

Debugging log:-

[DEBUG] [ClassPathBeanDefinitionScanner] [findCandidateComponents:294] - Identified candidate component class: file [/path/target/classes/app/A.class]
[DEBUG] [ClassPathBeanDefinitionScanner] [findCandidateComponents:294] - Identified candidate component class: file [/path/target/classes/app/B.class]
[DEBUG] [ClassPathBeanDefinitionScanner] [findCandidateComponents:294] - Identified candidate component class: file [/path/target/classes/app/C.class]
[DEBUG] [ClassPathBeanDefinitionScanner] [findCandidateComponents:294] - Identified candidate component class: file [/path/target/classes/app/D.class]
[DEBUG] [ClassPathBeanDefinitionScanner] [findCandidateComponents:294] - Identified candidate component class: file [/path/target/classes/app/E.class]

Step 3

To fix this, we set useDefaultFilters to false to disable the automatic detection of classes annotated with Spring’s @Service, @Component, @Controller or @Repository; only our includeFilters then apply.

@ContextConfiguration
class ASpec extends Specification {
    @Configuration
    @ComponentScan(
            basePackageClasses = [A],
            useDefaultFilters = false,
            includeFilters = [@ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE, value = [A, B])]
    )
    static class TestConfig {
    }

    def "..."() {
        // ...
    }
}

Now, we get the intended behavior.

Debugging log:-

[DEBUG] [ClassPathBeanDefinitionScanner] [findCandidateComponents:294] - Identified candidate component class: file [/path/target/classes/app/A.class]
[DEBUG] [ClassPathBeanDefinitionScanner] [findCandidateComponents:294] - Identified candidate component class: file [/path/target/classes/app/B.class]

Guava: Testing equals(..) and hashCode(..)

PROBLEM

Let’s assume we want to test the following equals(..):-

public class Person {
    private String name;
    private int age;

    @Override
    public boolean equals(Object o) {
        if (o == null || getClass() != o.getClass()) {
            return false;
        }

        Person person = (Person) o;
        return Objects.equal(name, person.name);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(name);
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }
}

A correctly implemented equals(..) must be reflexive, symmetric, transitive and consistent, and it must handle null comparison.

In other words, you have to write test cases covering at least these 5 rules. Anything less is pure bullshit.
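To make those rules concrete, here is a rough Groovy sketch (reusing the Person class above) of just a few of the assertions such tests need:-

def a = new Person(name: 'Mike', age: 10)
def b = new Person(name: 'Mike', age: 20)   // equal to `a`, since equals(..) only compares `name`

assert a.equals(a)                          // reflexive
assert a.equals(b) == b.equals(a)           // symmetric
assert !a.equals(null)                      // null comparison
assert a.hashCode() == b.hashCode()         // equal objects must share a hash code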

SOLUTION

You can write these tests yourself… or you can leverage Guava’s EqualsTester. It verifies these 5 rules and also checks that objects in the same equality group produce the same hashCode.

First, include the needed dependency:-

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava-testlib</artifactId>
    <version>18.0</version>
    <scope>test</scope>
</dependency>

Instead of writing JUnit tests, I’ll be writing Spock specs, which are built on top of Groovy, because they allow me to write very clear and clean tests.

class PersonSpec extends Specification {

    def person = new Person(name: 'Mike', age: 10)

    def "equals - equal"() {
        when:
        new EqualsTester().
                addEqualityGroup(person,
                                 new Person(name: 'Mike', age: 10),
                                 new Person(name: 'Mike', age: 20)).
                testEquals()

        then:
        notThrown(AssertionFailedError.class)
    }

    def "equals - not equal"() {
        when:
        new EqualsTester().
                addEqualityGroup(person,
                                 new Person(name: 'Kurt', age: 10)).
                testEquals()

        then:
        thrown(AssertionFailedError.class)
    }
}

Oh… wait for it… BOOM!

Karma: Getting Started

Overview

This tutorial walks through the steps needed to configure Karma to work with a Maven web project. It will also serve as a base for my future Karma-related posts.

Install Node.js

First, download Node.js from http://nodejs.org/download/ and install it. Once installed, we should be able to invoke the npm command from the command line, for example:-

npm --version

Create package.json

Let’s assume we have the following Maven project:-

testKarma
├── pom.xml
├── src
│   ├── main
│   │   ├── java
│   │   ├── resources
│   │   └── webapp
│   └── test
│       └── java
└── testKarma.iml

Please ignore testKarma.iml, which is an IntelliJ-specific file.

Create a file called package.json under the project root directory…

testKarma
├── package.json
├── pom.xml
├── src
│   ├── main
│   │   ├── java
│   │   ├── resources
│   │   └── webapp
│   └── test
│       └── java
└── testKarma.iml

… with the following content…

{
  "name": "testKarma",
  "private": true
}

In this case, we marked the package as private by setting private to true, which also prevents it from ever being accidentally published to the npm registry.

Without this flag, running npm install ... produces these annoying warnings:-

npm WARN package.json testKarma@ No description
npm WARN package.json testKarma@ No repository field.
npm WARN package.json testKarma@ No README data

Install Karma and Plugins

From the project root directory, run the following command:-

npm install karma karma-jasmine@0.2.2 karma-chrome-launcher karma-phantomjs-launcher karma-junit-reporter karma-coverage --save-dev

At the time of this post, npm installs karma-jasmine 0.1.5 by default, so we explicitly request version 0.2.2, which lets us use Jasmine 2.x.

For local testing, we will run our tests against both Chrome and PhantomJS, a headless browser. So, make sure Chrome is already installed.

The project structure now contains the installed plugins for JavaScript testing:-

testKarma
├── node_modules
│   ├── karma
│   ├── karma-chrome-launcher
│   ├── karma-coverage
│   ├── karma-jasmine
│   ├── karma-junit-reporter
│   └── karma-phantomjs-launcher
├── package.json
├── pom.xml
├── src
│   ├── main
│   └── test
└── testKarma.iml

The package.json file now records every plugin we installed within this project:-

{
  "name": "testKarma",
  "private": true,
  "devDependencies": {
    "karma": "^0.12.24",
    "karma-chrome-launcher": "^0.1.5",
    "karma-coverage": "^0.2.6",
    "karma-jasmine": "^0.2.2",
    "karma-junit-reporter": "^0.2.2",
    "karma-phantomjs-launcher": "^0.1.4"
  }
}

A few important notes:-

  • Since we are not running npm install with the -g option, the plugins are not installed globally; they are installed within the project root directory.
  • With the --save-dev option, npm automatically updates package.json (if it exists) to keep track of the installed plugins and their versions.
  • To update the plugin versions later, just edit this file and rerun npm install.
  • We do not want to commit the node_modules directory into any VCS because it contains at least 5000 files. So, remember to configure a VCS exclusion on this directory (see IntelliJ: Handling SVN Global Ignore List, or the Git sketch right after this list).
  • When other peers check out this project from VCS, they simply run npm install, which automatically installs all the dependencies listed in package.json.
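For Git users, the exclusion is a one-line .gitignore entry at the project root (a minimal sketch):-

# keep npm's locally installed modules out of version control
node_modules/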

Create Karma Configuration File

Instead of running karma init karma.conf.js and stepping through the interactive prompts, we will manually create two Karma configuration files under the src/test/resources directory.

testKarma
├── node_modules
├── package.json
├── pom.xml
├── src
│   ├── main
│   │   ├── java
│   │   ├── resources
│   │   └── webapp
│   └── test
│       ├── java
│       └── resources
│           ├── karma.conf.js
│           └── karma.jenkins.conf.js

karma.conf.js

This configuration is used for local testing.

module.exports = function ( config ) {
    config.set( {
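        // all file patterns below are resolved relative to the project root,
        // which is three directories up from this configuration file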
        basePath         : '../../../',
        frameworks       : ['jasmine'],
        files            : [
            'src/main/webapp/resources/js/**/*.js',
            'src/test/js/**/*.js'
        ],
        exclude          : [],
        preprocessors    : {
            'src/main/webapp/resources/js/**/*.js' : ['coverage']
        },
        reporters        : ['progress', 'coverage'],
        port             : 9876,
        colors           : true,
        logLevel         : config.LOG_INFO,
        autoWatch        : true,
        browsers         : ['Chrome', 'PhantomJS'],
        singleRun        : false,
        plugins          : [
            'karma-jasmine',
            'karma-chrome-launcher',
            'karma-phantomjs-launcher',
            'karma-junit-reporter',
            'karma-coverage'
        ],
        coverageReporter : {
            type : 'html',
            dir  : 'target/coverage/'
        }
    } );
};

karma.jenkins.conf.js

This configuration is used for automated testing on Jenkins.

module.exports = function ( config ) {
    config.set( {
        basePath         : '../../../',
        frameworks       : ['jasmine'],
        files            : [
            'src/main/webapp/resources/js/**/*.js',
            'src/test/js/**/*.js'
        ],
        exclude          : [],
        preprocessors    : {
            'src/main/webapp/resources/js/**/*.js' : ['coverage']
        },
        // added `junit`
        reporters        : ['progress', 'junit', 'coverage'],
        port             : 9876,
        colors           : true,
        logLevel         : config.LOG_INFO,
        // don't watch for file change
        autoWatch        : false,
        // only runs on headless browser
        browsers         : ['PhantomJS'],
        // just run one time
        singleRun        : true,
        // remove `karma-chrome-launcher` because we will be running on headless
        // browser on Jenkins
        plugins          : [
            'karma-jasmine',
            'karma-phantomjs-launcher',
            'karma-junit-reporter',
            'karma-coverage'
        ],
        // changes type to `cobertura`
        coverageReporter : {
            type : 'cobertura',
            dir  : 'target/coverage-reports/'
        },
        // saves report at `target/surefire-reports/TEST-*.xml` because Jenkins
        // looks for this location and file prefix by default.
        junitReporter    : {
            outputFile : 'target/surefire-reports/TEST-karma-results.xml'
        }
    } );
};

Write JS Production Code and Test Code

Now, it’s time to write some tests and run them.

testKarma
├── node_modules
├── package.json
├── pom.xml
├── src
│   ├── main
│   │   ├── java
│   │   ├── resources
│   │   └── webapp
│   │       └── resources
│   │           └── js
│   │               └── hello.js
│   └── test
│       ├── java
│       ├── js
│       │   └── hello-spec.js
│       └── resources
│           ├── karma.conf.js
│           └── karma.jenkins.conf.js
└── testKarma.iml

hello.js

We will keep our production code to a minimum in this example:-

var hello = {
    speak : function () {
        return 'Hello!';
    }
};

hello-spec.js

A very simple test case:-

describe( 'hello module', function () {
    'use strict';

    it( 'speak()', function () {
        expect( hello.speak() ).toBe( 'Hello!' );
    } );
} );

Run Karma using karma.conf.js

From the project root directory, run Karma:-

node_modules/karma/bin/karma start src/test/resources/karma.conf.js

The console should now look like this:-

INFO [karma]: Karma v0.12.24 server started at http://localhost:9876/
INFO [launcher]: Starting browser Chrome
INFO [launcher]: Starting browser PhantomJS
INFO [PhantomJS 1.9.8 (Mac OS X)]: Connected on socket YNMziWTsyeaf6DZzw06D with id 3320523
INFO [Chrome 38.0.2125 (Mac OS X 10.9.5)]: Connected on socket rYQeni1xm1bbqfa3w06E with id 50259203
PhantomJS 1.9.8 (Mac OS X): Executed 1 of 1 SUCCESS (0.003 secs / 0.001 secs)
Chrome 38.0.2125 (Mac OS X 10.9.5): Executed 1 of 1 SUCCESS (0.008 secs / 0.001 secs)
TOTAL: 2 SUCCESS

The target/coverage directory should contain Chrome and PhantomJS subdirectories.

testKarma
├── node_modules
├── package.json
├── pom.xml
├── src
├── target
│   └── coverage
│       ├── Chrome 38.0.2125 (Mac OS X 10.9.5)
│       │   ├── index.html
│       │   ├── js
│       │   │   ├── hello.js.html
│       │   │   └── index.html
│       │   ├── prettify.css
│       │   └── prettify.js
│       └── PhantomJS 1.9.8 (Mac OS X)
│           ├── index.html
│           ├── js
│           │   ├── hello.js.html
│           │   └── index.html
│           ├── prettify.css
│           └── prettify.js
└── testKarma.iml

When opening one of the index.html files, we should see the coverage report.
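As an optional convenience, we can wrap the karma start command in an npm script. This is just a sketch; add it to package.json:-

"scripts": {
  "test": "karma start src/test/resources/karma.conf.js"
}

Because npm puts node_modules/.bin on the PATH when running scripts, the bare karma command resolves to the locally installed copy, and npm test now starts Karma.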

Run Karma using karma.jenkins.conf.js

Although karma.jenkins.conf.js is meant for automated testing on Jenkins, let’s run it once locally to see how its output differs.

Clean the target directory:-

mvn clean

Run Karma with karma.jenkins.conf.js instead:-

node_modules/karma/bin/karma start src/test/resources/karma.jenkins.conf.js

Since this configuration only runs the headless browser, there are no Chrome results here:-

INFO [karma]: Karma v0.12.24 server started at http://localhost:9876/
INFO [launcher]: Starting browser PhantomJS
INFO [PhantomJS 1.9.8 (Mac OS X)]: Connected on socket SM_5sy2wzHdL4ru9yg0X with id 42922522
PhantomJS 1.9.8 (Mac OS X): Executed 1 of 1 SUCCESS (0.002 secs / 0.001 secs)

The target directory should now contain cobertura-coverage.xml for the coverage report and TEST-karma-results.xml for the test results.

testKarma
├── node_modules
├── package.json
├── pom.xml
├── src
├── target
│   ├── coverage-reports
│   │   └── PhantomJS 1.9.8 (Mac OS X)
│   │       └── cobertura-coverage.xml
│   └── surefire-reports
│       └── TEST-karma-results.xml
└── testKarma.iml

IntelliJ: Overriding Log4J Configuration Globally for JUnit

PROBLEM

Most of the time, we have several Log4J configurations, one per environment, for example:-

  • log4j.xml (Log4J) or log4j2.xml (Log4J2) – Production configuration using socket appender.
  • log4j-dev.xml (Log4J) or log4j2-dev.xml (Log4J2) – Development configuration using console appender.

Since log4j.xml and log4j2.xml are the default configuration files for Log4J and Log4J2, these configurations will always be used unless we override the configuration file path.

In other words, if we don’t override the configuration file path and we run our JUnit test cases offline from IntelliJ, they may take a very long time to execute because every logging call waits on the broken socket connection. Furthermore, it is not a good idea to clutter our production log files with non-production logs.
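For reference, a development configuration such as log4j-dev.xml can be as simple as this console-only sketch (the appender name and pattern are placeholders):-

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
    <appender name="console" class="org.apache.log4j.ConsoleAppender">
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d{HH:mm:ss} %-5p %c{1} - %m%n"/>
        </layout>
    </appender>
    <root>
        <priority value="debug"/>
        <appender-ref ref="console"/>
    </root>
</log4j:configuration>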

SOLUTION

To fix this, we need to configure IntelliJ to always use our development Log4J configuration.

First, on the menu bar, select Run -> Edit Configurations...

Then, delete all the existing JUnit run configurations.

Finally, expand Defaults -> JUnit.

Under VM options, specify the following system property:-

  • For Log4J, specify -Dlog4j.configuration=log4j-dev.xml
  • For Log4J2, specify -Dlog4j.configurationFile=log4j2-dev.xml

Note that the system property name differs slightly between the two Log4J versions.

Now, when we run our JUnit test cases from IntelliJ, it will always pick up the correct custom Log4J configuration file.

Java: Promoting Testability by Having an Enum Implement an Interface

OVERVIEW

This post illustrates how a minor refactoring to the production code lets us write a better test case without polluting it with non-production code.

PROBLEM

Let’s assume we have a simple Data Reader that reads all the lines of a given algorithm data file and returns them:-

public class DataReader {
    public List<String> getDataLines(AlgorithmEnum algorithm) {
        // we have `StS-data.txt`, `CtE-data.txt` and `TtI-data.txt` under `src/main/resources` dir
        String fileName = String.format("%s-data.txt", algorithm.getShortName());
        Scanner scanner = new Scanner(getClass().getClassLoader().getResourceAsStream(fileName));

        List<String> list = new ArrayList<String>();

        while (scanner.hasNextLine()) {
            list.add(scanner.nextLine());
        }

        return list;
    }
}

This API accepts an AlgorithmEnum, which looks something like this:-

public enum AlgorithmEnum {
    SKIN_TO_SKIN("StS"),
    CLOSURE_TO_EXIT("CtE"),
    TIME_TO_INCISION("TtI");

    private String shortName;
    
    AlgorithmEnum(String shortName) {
        this.shortName = shortName;
    }

    public String getShortName() {
        return shortName;
    }
}

Let’s assume each algorithm data file has thousands of data lines.

So, how do we test this code?

SOLUTION 1: Asserting Actual Line Count == Expected Line Count

One straightforward way is to:-

  • Pass one of the enum constants (AlgorithmEnum.SKIN_TO_SKIN, etc.) into DataReader.getDataLines(..)
  • Get the actual line count
  • Assert the actual line count against the expected line count

public class DataReaderTest {
    @Test
    public void testGetDataLines() {
        List<String> lines = new DataReader().getDataLines(AlgorithmEnum.SKIN_TO_SKIN);
        assertThat(lines, hasSize(7500));
    }
}    

This is a pretty weak test because it only checks the line count. Since we are dealing with thousands of data lines, it is impractical to verify the correctness of each one.

SOLUTION 2: Adding a Test Constant to AlgorithmEnum

Another approach is to add a test constant to AlgorithmEnum:-

public enum AlgorithmEnum {
    SKIN_TO_SKIN("StS"),
    CLOSURE_TO_EXIT("CtE"),
    TIME_TO_INCISION("TtI"),
		
    // added a constant for testing purpose
    TEST_ABC("ABC");

    private String shortName;
    
    AlgorithmEnum(String shortName) {
        this.shortName = shortName;
    }

    public String getShortName() {
        return shortName;
    }
}

Now, we can easily test the code with our test data stored at src/test/resources/ABC-data.txt:-

public class DataReaderTest {
    @Test
    public void testGetDataLines() {
        List<String> lines = new DataReader().getDataLines(AlgorithmEnum.TEST_ABC);
        assertThat(lines, is(Arrays.asList("line 1", "line 2", "line 3")));
    }
}

While this approach works, we have polluted our production code with non-production code, which may become a maintenance nightmare as the project grows.

SOLUTION 3: AlgorithmEnum Implements an Interface

Instead of writing a mediocre test case or polluting the production code with non-production code, we can perform a minor refactoring to our existing production code.

First, we create a simple interface:-

public interface Algorithm {
    String getShortName();
}

Then, we make AlgorithmEnum implement Algorithm:-

public enum AlgorithmEnum implements Algorithm {
    SKIN_TO_SKIN("StS"),
    CLOSURE_TO_EXIT("CtE"),
    TIME_TO_INCISION("TtI");

    private String shortName;

    AlgorithmEnum(String shortName) {
        this.shortName = shortName;
    }

    public String getShortName() {
        return shortName;
    }
}

Now, instead of accepting AlgorithmEnum, getDataLines(...) accepts the Algorithm interface.

public class DataReader {
    public List<String> getDataLines(Algorithm algorithm) {
        String fileName = String.format("%s-data.txt", algorithm.getShortName());
        Scanner scanner = new Scanner(getClass().getClassLoader().getResourceAsStream(fileName));
    
        List<String> list = new ArrayList<String>();
    
        while (scanner.hasNextLine()) {
            list.add(scanner.nextLine());
        }

        return list;
    }
}

With these minor changes, we can easily unit test the code with test data stored under the src/test/resources directory.

public class DataReaderTest {
    @Test
    public void testGetDataLines() {
        List<String> lines = new DataReader().getDataLines(new Algorithm() {
            @Override
            public String getShortName() {
                // we have `ABC-data.txt` under `src/test/resources` dir
                return "ABC";
            }
        });

        assertThat(lines, is(Arrays.asList("line 1", "line 2", "line 3")));
    }
}
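As a side note, Algorithm now has exactly one abstract method, so on Java 8 the anonymous class above can shrink to a lambda: new DataReader().getDataLines(() -> "ABC").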

Spock: Reading Test Data from CSV File

Following up on my recent post about creating a Spock specification that reads test data from a CSV file without loading all the data into memory, I created a CSVReader that implements Iterable, which allows me to pull this off. You may download the source code here.
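The real implementation is in the download above; conceptually, it is close to this minimal Groovy sketch, which assumes simple comma-separated rows with no quoting:-

class CSVReader implements Iterable<String[]> {
    private final InputStream inputStream

    CSVReader(InputStream inputStream) {
        this.inputStream = inputStream
    }

    @Override
    Iterator<String[]> iterator() {
        def reader = new BufferedReader(new InputStreamReader(inputStream))

        return new Iterator<String[]>() {
            private String line = reader.readLine()

            @Override
            boolean hasNext() {
                line != null
            }

            @Override
            String[] next() {
                // split the current row, then read ahead one line so that
                // only a single row is ever held in memory
                String[] row = line.split(',')
                line = reader.readLine()
                return row
            }
        }
    }
}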

With this implementation, I can now write an elegant Spock specification:-

class MySpockSpec extends Specification {
    @Unroll
    def "#firstNum + 1 == #secondNum"() {
        expect:
        Integer.valueOf(firstNum as String) + 1 == Integer.valueOf(secondNum as String)

        where:
        [firstNum, secondNum] << new CSVReader(getClass().getClassLoader().getResourceAsStream("test.csv"))
    }
}
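Under the hood, Spock’s data pipe (<<) pulls one row per iteration from the Iterable, and the multi-variable assignment destructures each row into firstNum and secondNum, so the whole file never sits in memory at once.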

Maven: Unable to Execute Spock Specs

PROBLEM

When running mvn clean test, the Maven Surefire Plugin doesn’t pick up *Spec.groovy test files.

SOLUTION

By default, the Maven Surefire Plugin executes test files matching the following patterns: **/Test*.java, **/*Test.java and **/*TestCase.java.

To fix this, we need to modify the plugin’s inclusion list. Since both Java and Groovy files are compiled down to *.class files, it is easier to include *.class patterns instead of *.java or *.groovy ones.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.17</version>
    <configuration>
        <includes>
            <include>**/Test*.class</include>
            <include>**/*Test.class</include>
            <include>**/*TestCase.class</include>
            <include>**/*Spec.class</include>
        </includes>
    </configuration>
</plugin>