Groovy: Copying Properties Between Two Beans

PROBLEM

Given two beans…

class A {
    String name
    LocalDateTime localDateTime
}

class B {
    String name
    LocalDateTime localDateTime
}

There are several ways to copy properties from one bean to another:-

  • The most rudimentary way is to “get” each property from one bean and “set” it on the other, which is VERY verbose and tedious.
  • Another way is to leverage utilities such as BeanUtils provided by either Apache Commons or Spring. While both classes are called BeanUtils, they behave slightly differently from one another.
  • Write a home-grown reflection function… and now you have two problems: 1) it may not handle edge cases properly and 2) no one else understands your implementation.
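To see why the first option gets old fast, here is a minimal plain-Java sketch of the manual get/set approach (the A and B classes mirror the beans above; field access is used instead of getters/setters to keep the sketch short):

```java
import java.time.LocalDateTime;

public class ManualCopyDemo {
    static class A {
        String name;
        LocalDateTime localDateTime;
    }

    static class B {
        String name;
        LocalDateTime localDateTime;
    }

    public static void main(String[] args) {
        A a = new A();
        a.name = "name";
        a.localDateTime = LocalDateTime.now();

        // one line per property -- tedious, and easy to miss a line
        // whenever a new field is added to the beans
        B b = new B();
        b.name = a.name;
        b.localDateTime = a.localDateTime;

        System.out.println(b.name.equals(a.name)
                && b.localDateTime.equals(a.localDateTime));
    }
}
```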

SOLUTION

Groovy provides a helper class called InvokerHelper to solve this problem. The advantage of using it is that there’s no need to pull in yet another dependency, and it still keeps our code concise.

Scenario 1: Both beans have the exact same properties

import org.codehaus.groovy.runtime.InvokerHelper
import spock.lang.Specification

import java.time.LocalDateTime

class MySpec extends Specification {
    class A {
        String name
        LocalDateTime localDateTime
    }

    class B {
        String name
        LocalDateTime localDateTime
    }

    def "given a and b with same exact properties, should copy all properties"() {
        given:
        def a = new A(name: 'name',
                      localDateTime: LocalDateTime.now())
        def b = new B()

        when:
        InvokerHelper.setProperties(b, a.properties)

        then:
        b.name == a.name
        b.localDateTime == a.localDateTime
    }
}

Scenario 2: Source bean has additional properties

class MySpec extends Specification {
    class A {
        String name
        LocalDateTime localDateTime
        Integer extra1
        Boolean extra2
    }

    class B {
        String name
        LocalDateTime localDateTime
    }

    def "given a has additional properties not in b, should ignore the additional properties"() {
        given:
        def a = new A(name: 'name',
                      localDateTime: LocalDateTime.now(),
                      extra1: 1,
                      extra2: true)
        def b = new B()

        when:
        InvokerHelper.setProperties(b, a.properties)

        then:
        b.name == a.name
        b.localDateTime == a.localDateTime
    }
}

Scenario 3: Destination bean has additional properties

class MySpec extends Specification {
    class A {
        String name
        LocalDateTime localDateTime
    }

    class B {
        String name
        LocalDateTime localDateTime
        Integer extra1
        Boolean extra2
    }

    def "given b has additional properties not in a, should leave the additional properties as null"() {
        given:
        def a = new A(name: 'name',
                      localDateTime: LocalDateTime.now())
        def b = new B()

        when:
        InvokerHelper.setProperties(b, a.properties)

        then:
        b.name == a.name
        b.localDateTime == a.localDateTime
        b.extra1 == null
        b.extra2 == null
    }
}

Scenario 4: Same property name but different data types

The short answer is don’t do it. It’s not worth the hassle and confusion.

class MySpec extends Specification {
    class A {
        String number
    }

    class B {
        Integer number
    }

    def "given same property name but different data type, should go bat shit crazy"() {
        given:
        def a = new A(number: '0')
        def b = new B()

        when:
        InvokerHelper.setProperties(b, a.properties)

        then:
        b.number == 48 // ASCII value for character '0'
    }

    def "given same property name but different data type, should go bat shit crazy again"() {
        given:
        def a = new A(number: '10')
        def b = new B()

        when:
        InvokerHelper.setProperties(b, a.properties)

        then:
        thrown ClassCastException // unlike '0', the two-character String '10' cannot be coerced to a single character
    }
}
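The first result follows from standard character-to-number coercion: a single-character String is coerced to a char, and a char widens to its code point. The same behavior is visible in plain Java:

```java
public class CharCoercionDemo {
    public static void main(String[] args) {
        char c = '0';
        int i = c; // widening conversion: '0' has code point 48
        System.out.println(i);
    }
}
```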

Spring MVC: Failed to convert value of type ‘java.lang.String’ to required type ‘java.time.LocalDateTime’

PROBLEM

Given the following controller …

@RestController
@RequestMapping(value = '/controller')
class MyController {

    @RequestMapping(method = RequestMethod.GET)
    ResponseEntity main(@RequestParam(name = 'dateTime') LocalDateTime dateTime) {
        // ...

        return ResponseEntity.noContent().build()
    }
}

When executing …

GET https://localhost:8443/controller?dateTime=2017-06-22T17:38

… the web service call returns 400 Bad Request with the following error in the console log:-

Failed to bind request element: org.springframework.web.method.annotation.MethodArgumentTypeMismatchException: 
Failed to convert value of type 'java.lang.String' to required type 'java.time.LocalDateTime'; nested exception 
is org.springframework.core.convert.ConversionFailedException: Failed to convert from type [java.lang.String] 
to type [@org.springframework.web.bind.annotation.RequestParam java.time.LocalDateTime] for value '2017-06-22T17:38'; 
nested exception is java.lang.IllegalArgumentException: Parse attempt failed for value [2017-06-22T17:38]

SOLUTION

One solution is to accept the parameter as java.lang.String and then parse it into java.time.LocalDateTime ourselves. However, that is a little more verbose than I like.
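For reference, a minimal sketch of that String-based approach with the Spring plumbing stripped away (the toDateTime helper is hypothetical and stands in for the controller body; in the real controller the raw String would arrive as the @RequestParam):

```java
import java.time.LocalDateTime;

public class ParseDemo {
    // hypothetical helper: accept the raw request parameter as a String,
    // then parse it ourselves
    static LocalDateTime toDateTime(String raw) {
        // the default ISO-8601 format, e.g. 2017-06-22T17:38
        return LocalDateTime.parse(raw);
    }

    public static void main(String[] args) {
        System.out.println(toDateTime("2017-06-22T17:38"));
    }
}
```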

A better way is to leverage @DateTimeFormat:-

@RestController
@RequestMapping(value = '/controller')
class MyController {

    @RequestMapping(method = RequestMethod.GET)
    ResponseEntity main(@RequestParam(name = 'dateTime') @DateTimeFormat(pattern = "yyyy-MM-dd'T'HH:mm") LocalDateTime dateTime) {
        // ...

        return ResponseEntity.noContent().build()
    }
}

MS SQL Server + Hibernate 5: Incorrect syntax near ‘@P0’

PROBLEM

When upgrading to Hibernate 5, the following exception is thrown:-

Caused by: java.sql.SQLException: Incorrect syntax near '@P0'.
	at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:372) ~[jtds-1.3.1.jar:1.3.1]
	at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2988) ~[jtds-1.3.1.jar:1.3.1]
	at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2421) ~[jtds-1.3.1.jar:1.3.1]

SOLUTION

Change the MS SQL Server dialect from this…

org.hibernate.dialect.SQLServerDialect

… to this …

org.hibernate.dialect.SQLServer2012Dialect
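Where the dialect is set depends on the project. For example, in a Spring Boot application.properties it might look like this (an assumed setup; a plain Hibernate project would set hibernate.dialect in hibernate.cfg.xml instead):

```
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.SQLServer2012Dialect
```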

tar: Exiting with failure status due to previous errors

PROBLEM

When creating a compressed archive file:-

tar -zcvf apps.tar.gz apps

… the following error is thrown:-

tar: Exiting with failure status due to previous errors

SOLUTION

This error usually occurs due to permission issues.

However, the error messages are buried in the gobs of output produced by the verbose (-v) flag.

To fix this, reduce the output by removing the -v flag:-

tar -zcf apps.tar.gz apps

… and now, the error message appears:-

tar: apps/apps.key.enc: Cannot open: Permission denied
tar: Exiting with failure status due to previous errors

Once the problem is fixed, the command will run successfully.

Docker: Handling Circular Dependency between Containers

PROBLEM

Let’s assume we are going to run 3 containers: Jenkins, Nexus and Nginx.

Nginx is used to serve cleaner URLs through reverse proxies so that users will access http://server/jenkins and http://server/nexus instead of remembering specific ports.
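For context, a minimal sketch of the reverse-proxy config that the ./nginx image might build (the upstream hosts match the compose service names below; ports and paths are assumed from this setup):

```
server {
    listen 80;

    # forward /jenkins to the Jenkins container
    location /jenkins {
        proxy_pass http://jenkins:8080/jenkins;
    }

    # forward /nexus to the Nexus container
    location /nexus {
        proxy_pass http://nexus:8081/nexus;
    }
}
```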

So, the simplified docker-compose.yml looks like this:-

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
     - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home
    environment:
      JENKINS_OPTS: "--prefix=/jenkins"

  nexus:
    image: "sonatype/nexus3"
    ports:
     - "8081:8081"
    volumes:
     - nexus:/nexus-data
    environment:
      NEXUS_CONTEXT: "nexus"
    
  nginx:
    build: ./nginx
    ports:
     - "80:80"
    links:
     - jenkins
     - nexus

volumes:
  jenkins:
  nexus:

While http://server/jenkins and http://server/nexus work flawlessly, the Jenkins container is unable to communicate with Nexus through http://server/nexus/some/path, which is handled by Nginx.

Hence, when a Jenkins job tries to pull artifacts from Nexus, the following error is thrown:

[ERROR]     Unresolveable build extension: Plugin ... or one of its 
dependencies could not be resolved: Failed to collect dependencies 
at ... -> ...: Failed to read artifact descriptor for ...: Could 
not transfer artifact ... from/to server 
(http://server/nexus/repository/public/): Connect to server:80 
[server/172.19.0.2] failed: Connection refused (Connection refused) 
-> [Help 2]

SOLUTION: ATTEMPT #1

The first attempt is to set up a link between Jenkins and Nginx with the Nginx alias pointing to the hostname, which is server.

The goal is that when Jenkins communicates with Nexus through http://server/nexus/some/path, Nginx will handle the reverse proxy accordingly.

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
     - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home
    environment:
      JENKINS_OPTS: "--prefix=/jenkins"
    links:
     - nginx:${HOSTNAME}

  nexus:
    image: "sonatype/nexus3"
    ports:
     - "8081:8081"
    volumes:
     - nexus:/nexus-data
    environment:
      NEXUS_CONTEXT: "nexus"

  nginx:
    build: ./nginx
    ports:
     - "80:80"
    links:
     - jenkins
     - nexus

volumes:
  jenkins:
  nexus:

However, when running the containers, it halts with an error:-

ERROR: Circular dependency between nginx and jenkins

SOLUTION: ATTEMPT #2

In an effort to avoid the circular dependency problem, we can instead set up a link between Jenkins and Nexus, with the Nexus alias pointing to the hostname, which is server.

This way, Jenkins communicates directly with Nexus through http://server:8081/nexus/some/path and Nginx stays out of it.

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
     - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home
    environment:
      JENKINS_OPTS: "--prefix=/jenkins"
    links:
     - nexus:${HOSTNAME}

  nexus:
    image: "sonatype/nexus3"
    ports:
     - "8081:8081"
    volumes:
     - nexus:/nexus-data
    environment:
      NEXUS_CONTEXT: "nexus"

  nginx:
    build: ./nginx
    ports:
     - "80:80"
    links:
     - jenkins
     - nexus

volumes:
  jenkins:
  nexus:

This works without problem.

However, this configuration somewhat defeats the purpose of using Nginx: while the users may access Jenkins and Nexus without specifying custom ports, Jenkins still has to communicate with Nexus on port 8081.

Furthermore, that Nexus port is then fully exposed in the build logs of every Jenkins job.

SOLUTION: ATTEMPT #3

The last attempt is to configure Nginx with the hostname as a network alias.

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
     - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home
    environment:
      JENKINS_OPTS: "--prefix=/jenkins"

  nexus:
    image: "sonatype/nexus3"
    ports:
     - "8081:8081"
    volumes:
     - nexus:/nexus-data
    environment:
      NEXUS_CONTEXT: "nexus"

  nginx:
    build: ./nginx
    ports:
     - "80:80"
    links:
     - jenkins
     - nexus
    networks:
      default:
        aliases:
         - ${HOSTNAME}

volumes:
  jenkins:
  nexus:

networks:
  default:

This time, Jenkins is able to communicate successfully with Nexus through http://server/nexus/some/path, and Nginx handles the reverse proxy accordingly.

Docker: Defining Custom Location for Named Volume

PROBLEM

Let’s assume we have the following docker-compose.yml:

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
    - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home

volumes:
  jenkins:

By default, all Docker-managed named volumes are stored under the Docker installation directory… typically, /var/lib/docker/volumes/[path].

However, it is possible the /var mount is low on disk space.

SOLUTION

It appears we can create a custom location for the given named volume:-

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
    - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home

volumes:
  jenkins:
    driver_opts:
      type: none
      device: /data/jenkins
      o: bind

Keep in mind /data/jenkins must be created first on the host.

ES6 + Mocha + Sinon: Mocking Imported Dependency

PROBLEM

Let’s assume we have the following 2 files:-

apis.js

import fetch from 'isomorphic-fetch';

export const logout = () => (
  fetch('/logout')
    .then(resp => resp.json())
    .catch(err => err)
);

service.js

import { logout } from './apis';

export const kickUserOut = activeSession => (
  activeSession ? logout() : undefined
);

Let’s assume we want to test the logic in service.js without using nock to mock the HTTP call in apis.js.

While proxyquireify allows us to mock out the apis.js dependency in service.js, sometimes it is a little more complicated than needed.

SOLUTION

A simpler approach is to use sinon to stub out logout() defined in apis.js.

service-spec.js

import { beforeEach, afterEach, describe, it } from 'mocha';
import { expect } from 'chai';
import sinon from 'sinon';
import { kickUserOut } from './service';

// import everything as an object
import * as apis from './apis';

describe('service => kickUserOut', () => {
  let logoutStub;

  // before running each test, stub out `logout()`
  beforeEach(() => {
    logoutStub = sinon.stub(apis, 'logout').returns('success');
  });

  // after running each test, restore to the original method to
  // prevent "TypeError: Attempted to wrap logout which is already wrapped"
  // error when executing subsequent specs.
  afterEach(() => {
    apis.logout.restore();
  });

  it('given active session, should invoke logout API', () => {
    expect(kickUserOut(true)).to.deep.equal('success');
    expect(logoutStub.calledOnce).to.equal(true);
  });

  it('given expired session, should not invoke logout API', () => {
    expect(kickUserOut(false)).to.equal(undefined);
    expect(logoutStub.calledOnce).to.equal(false);
  });
});