Docker: Handling Circular Dependency between Containers

PROBLEM

Let’s assume we are going to run 3 containers: Jenkins, Nexus and Nginx.

Nginx is used to serve cleaner URLs through reverse proxying, so that users access http://server/jenkins and http://server/nexus instead of remembering specific ports.
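
The Nginx image is built from ./nginx, whose contents are not shown here; a minimal sketch of what its nginx.conf might contain (an assumption, not the actual file) is:-

events {}

http {
    server {
        listen 80;

        # Jenkins is started with --prefix=/jenkins, so the URI is passed through as-is
        location /jenkins {
            proxy_pass http://jenkins:8080;
        }

        # Nexus serves under /nexus because NEXUS_CONTEXT is set to nexus
        location /nexus {
            proxy_pass http://nexus:8081;
        }
    }
}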

So, the simplified docker-compose.yml looks like this:-

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
     - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home
    environment:
      JENKINS_OPTS: "--prefix=/jenkins"

  nexus:
    image: "sonatype/nexus3"
    ports:
     - "8081:8081"
    volumes:
     - nexus:/nexus-data
    environment:
      NEXUS_CONTEXT: "nexus"
    
  nginx:
    build: ./nginx
    ports:
     - "80:80"
    links:
     - jenkins
     - nexus

volumes:
  jenkins:
  nexus:

While http://server/jenkins and http://server/nexus work flawlessly for users, the Jenkins container is unable to communicate with Nexus through http://server/nexus/some/path, which is handled by Nginx. Inside the Docker network, the hostname server does not resolve to the Nginx container, so nothing answers on port 80.

Hence, when a Jenkins job tries to pull artifacts from Nexus, the following error is thrown:-

[ERROR]     Unresolveable build extension: Plugin ... or one of its 
dependencies could not be resolved: Failed to collect dependencies 
at ... -> ...: Failed to read artifact descriptor for ...: Could 
not transfer artifact ... from/to server 
(http://server/nexus/repository/public/): Connect to server:80 
[server/172.19.0.2] failed: Connection refused (Connection refused) 
-> [Help 2]

SOLUTION: ATTEMPT #1

The first attempt is to set up a link between Jenkins and Nginx with the Nginx alias pointing to the hostname, which is server.

The goal is that when Jenkins communicates with Nexus through http://server/nexus/some/path, Nginx handles the reverse proxy accordingly.

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
     - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home
    environment:
      JENKINS_OPTS: "--prefix=/jenkins"
    links:
     - nginx:${HOSTNAME}

  nexus:
    image: "sonatype/nexus3"
    ports:
     - "8081:8081"
    volumes:
     - nexus:/nexus-data
    environment:
      NEXUS_CONTEXT: "nexus"

  nginx:
    build: ./nginx
    ports:
     - "80:80"
    links:
     - jenkins
     - nexus

volumes:
  jenkins:
  nexus:

However, when bringing up the containers, Docker Compose halts with an error:-

ERROR: Circular dependency between nginx and jenkins

SOLUTION: ATTEMPT #2

In an effort to prevent the circular dependency problem, we can set up a link between Jenkins and Nexus with the Nexus alias pointing to the hostname, which is server.

This way, Jenkins communicates directly with Nexus through http://server:8081/nexus/some/path and Nginx stays out of it.

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
     - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home
    environment:
      JENKINS_OPTS: "--prefix=/jenkins"
    links:
     - nexus:${HOSTNAME}

  nexus:
    image: "sonatype/nexus3"
    ports:
     - "8081:8081"
    volumes:
     - nexus:/nexus-data
    environment:
      NEXUS_CONTEXT: "nexus"

  nginx:
    build: ./nginx
    ports:
     - "80:80"
    links:
     - jenkins
     - nexus

volumes:
  jenkins:
  nexus:

This works without a problem.

However, this configuration somewhat defeats the purpose of using Nginx: while users may access Jenkins and Nexus without specifying custom ports, Jenkins has to communicate with Nexus using port 8081.

Furthermore, this Nexus port is fully exposed in the build logs of all Jenkins jobs.

SOLUTION: ATTEMPT #3

The last attempt is to assign the hostname, server, to the Nginx container as a network alias.

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
     - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home
    environment:
      JENKINS_OPTS: "--prefix=/jenkins"

  nexus:
    image: "sonatype/nexus3"
    ports:
     - "8081:8081"
    volumes:
     - nexus:/nexus-data
    environment:
      NEXUS_CONTEXT: "nexus"

  nginx:
    build: ./nginx
    ports:
     - "80:80"
    links:
     - jenkins
     - nexus
    networks:
      default:
        aliases:
         - ${HOSTNAME}

volumes:
  jenkins:
  nexus:

networks:
  default:

This time, Jenkins is able to communicate successfully with Nexus through http://server/nexus/some/path, and Nginx handles the reverse proxy accordingly. Keep in mind that docker-compose substitutes ${HOSTNAME} from the shell environment; in Bash, HOSTNAME is set but not exported by default, so it may need to be exported (or defined in an .env file) first.
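
To verify the alias resolves correctly, we can exercise the route from inside the Jenkins container (assuming curl is available in the image):-

docker-compose exec jenkins curl -I http://server/nexus/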


Docker: Defining Custom Location for Named Volume

PROBLEM

Let’s assume we have the following docker-compose.yml:

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
    - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home

volumes:
  jenkins:

By default, all Docker-managed named volumes are stored under the Docker installation directory, typically /var/lib/docker/volumes/[path].
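
We can confirm the exact location with docker volume inspect; note that Docker Compose prefixes the volume name with the project name (myproject below is a hypothetical example):-

docker volume inspect myproject_jenkins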

However, it is possible that the /var mount is low on disk space.

SOLUTION

It appears we can create a custom location for the given named volume:-

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
    - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home

volumes:
  jenkins:
    driver_opts:
      type: none
      device: /data/jenkins
      o: bind

Keep in mind /data/jenkins must be created first on the host.
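
For example, on the host:-

sudo mkdir -p /data/jenkins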

ES6 + Mocha + Sinon: Mocking Imported Dependency

PROBLEM

Let’s assume we have the following 2 files:-

apis.js

import fetch from 'isomorphic-fetch';

export const logout = () => (
  fetch('/logout')
    .then(resp => resp.json())
    .catch(err => err)
);

service.js

import { logout } from './apis';

export const kickUserOut = activeSession => (
  activeSession ? logout() : undefined
);

Let’s assume we want to test the logic in service.js without using nock to mock the HTTP call in apis.js.

While proxyquireify allows us to mock out the apis.js dependency in service.js, sometimes it is a little more complicated than needed.

SOLUTION

A simpler approach is to use sinon to stub out logout() defined in apis.js.

service-spec.js

import { beforeEach, afterEach, describe, it } from 'mocha';
import { expect } from 'chai';
import sinon from 'sinon';
import { kickUserOut } from './service';

// import everything as an object
import * as apis from './apis';

describe('service => kickUserOut', () => {
  let logoutStub;

  // before running each test, stub out `logout()`
  beforeEach(() => {
    logoutStub = sinon.stub(apis, 'logout').returns('success');
  });

  // after running each test, restore to the original method to
  // prevent "TypeError: Attempted to wrap logout which is already wrapped"
  // error when executing subsequent specs.
  afterEach(() => {
    apis.logout.restore();
  });

  it('given active session, should invoke logout API', () => {
    expect(kickUserOut(true)).to.deep.equal('success');
    expect(logoutStub.calledOnce).to.equal(true);
  });

  it('given expired session, should not invoke logout API', () => {
    expect(kickUserOut(false)).to.equal(undefined);
    expect(logoutStub.calledOnce).to.equal(false);
  });
});
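
This stubbing works because Babel transpiles the ES module imports into CommonJS objects whose properties sinon can replace. Assuming babel-register is set up, the spec can be run with something like:-

mocha --require babel-register service-spec.js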

Synology NAS: Running CrashPlan in Docker Container

BACKGROUND

The reason to run CrashPlan in a Docker container is to prevent future Synology DSM updates from breaking the CrashPlan app.

Let’s assume the Synology NAS IP address is 1.2.3.4.

STEPS

DiskStation Manager

Log into DiskStation Manager: http://1.2.3.4:5000

Install Docker.

Package Center -> Utilities -> Third Party -> Docker

Mac

SSH into Synology NAS.

ssh admin@1.2.3.4

Pull the CrashPlan Docker image.

sudo docker pull jrcs/crashplan

Run the CrashPlan Docker container. In this example, we want to back up the photo and video directories.

sudo docker run -d --name CrashPlan \
 -p 4242:4242 -p 4243:4243 \
 -v /volume1/photo:/volume1/photo -v /volume1/video:/volume1/video \
 jrcs/crashplan:latest
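
To verify the container started cleanly, we can tail its logs:-

sudo docker logs -f CrashPlan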

Back to DiskStation Manager

Get the authentication token from the running CrashPlan container.

Docker -> Container -> CrashPlan -> Details -> 
Terminal -> Create -> bash

Run command:-

cat /var/lib/crashplan/.ui_info

The following text is printed:-

4243,########-####-####-####-############,0.0.0.0

Copy ########-####-####-####-############ somewhere for later use.

By default, CrashPlan allocates 1GB of memory. The recommendation is to allocate 1GB of memory per 1TB of storage to prevent CrashPlan from running out of memory. In this example, we are going to increase it to 3GB.

Edit /var/crashplan/conf/my.service.xml.

vi /var/crashplan/conf/my.service.xml

Change the following line:-

<config ...>
	...
	<javaMemoryHeapMax>3072m</javaMemoryHeapMax>
	...
</config>

Edit /var/crashplan/app/bin/run.conf.

vi /var/crashplan/app/bin/run.conf

Change the following line:-

SRV_JAVA_OPTS="... -Xmx3072m ..."                                                         
GUI_JAVA_OPTS="..."

Stop CrashPlan Docker container.

Docker -> Container -> CrashPlan -> Action -> Stop

Enable auto-restart on CrashPlan Docker container.

Docker -> Container -> CrashPlan -> Edit -> General Settings -> 
Enable auto-restart -> OK

Start CrashPlan Docker container.

Docker -> Container -> CrashPlan -> Action -> Start

Back to Mac

Download and install CrashPlan software.

Disable the local CrashPlan service since the UI acts only as a client.

sudo launchctl unload -w /Library/LaunchDaemons/com.crashplan.engine.plist

Edit /Applications/CrashPlan.app/Contents/Resources/Java/conf/ui.properties.

sudo nano /Applications/CrashPlan.app/Contents/Resources/Java/conf/ui.properties

Uncomment serviceHost and set it to the Synology NAS IP address.

#Fri Dec 09 09:50:22 CST 2005
serviceHost=1.2.3.4
#servicePort=4243
#pollerPeriod=1000  # 1 second
#connectRetryDelay=10000  # 10 seconds
#connectRetryAttempts=3
#showWelcome=true

#font.small=
#font.default=
#font.title=
#font.message.header=
#font.message.body=
#font.tab=                                  

Edit /Library/Application Support/CrashPlan/.ui_info.

sudo nano "/Library/Application Support/CrashPlan/.ui_info"

Replace the authentication token with the value from the step above, and replace the IP address with the Synology NAS IP address.

4243,########-####-####-####-############,1.2.3.4

Finally, run the CrashPlan app to view the backup process.

Java: Exploring Preferences API

BACKGROUND

In any script or rich client app, there is almost always a need to persist user preferences or app configurations.

Most of the time, we, the proud developers, handle that situation in a very ad-hoc manner. When storing in a file, we use different formats: from old-boring XML, to cool-kid JSON, to even cooler-kid YAML, to the kindergarten-kid key=value property format. Then, we have to decide where to write the file: whether to use C:\ and screw the non-Windows users, whether to construct the file path with backslashes or forward slashes because we are sick and tired of escaping the effing backslashes.

Long story short… yes, we, the proud developers, can do all that… or, as one of my current project peers likes to say, “make it configurable” on literally everything, to the point that it is pretty close to becoming a drinking game now.

But, the point I want to make here is… we are consistent about being inconsistent.

SOLUTION

Java provides the Preferences API as an attempt to solve this mess. Using this API, developers do not need to know where or how the user preferences or app configurations are stored. Rather, the API relies on the native facilities: the registry on Windows, .plist files on Mac and XML files on Unix/Linux.

The most interesting part is… the Preferences API has been around since JDK 1.4.

Code wise, it doesn’t get any simpler than this:-

import java.util.prefs.Preferences

// create new configuration or reference existing configuration
Preferences preferences = Preferences.userNodeForPackage(WuTangClan)

// insert/update 3 key/value pairs
preferences.put('key1', 'value1')
preferences.put('key2', 'value2')
preferences.put('key3', 'value3')

// returns 'value2'
println preferences.get('key2', '-')

// returns '-'
println preferences.get('invalid', '-')

// remove by key
preferences.remove('key3')

// delete everything
preferences.removeNode()

But, where and how exactly do Mac and Windows store this data?

There are several ways to get an instance of Preferences.

Preferences.userNodeForPackage(WuTangClan)

Mac

If the WuTangClan class is located under the wu.tang.clan.config package, the configuration file is created at ~/Library/Preferences/wu.tang.clan.plist with the following content:-

{    "/wu/tang/clan/" = {
        "config/" = {
            "key1" = "value1";
            "key2" = "value2";
        };
    };
}

64-bit Windows + 64-bit JVM

Configuration is stored in the registry with the following key:-

HKEY_CURRENT_USER\SOFTWARE\JavaSoft\Prefs\wu\tang\clan\config

Preferences.userRoot().node('path')

Example 1

Let’s assume we have this:-

Preferences.userRoot().node('wu')

Mac

The configuration is created at ~/Library/Preferences/com.apple.java.util.prefs.plist with the following content:-

{    "/" = {
        ...
        
        "wu/" = {
            "key1" = "value1";
            "key2" = "value2";
        };
        
        ...
    };
}

This file also contains configurations from other installed software.

64-bit Windows + 64-bit JVM

Configuration is stored in the registry with the following key:-

HKEY_CURRENT_USER\SOFTWARE\JavaSoft\Prefs\wu

Example 2

How about this?

Preferences.userRoot().node('wu/tang')

// ... OR ...

Preferences.userRoot().node('wu').node('tang')

Mac

The configuration still resides under ~/Library/Preferences/com.apple.java.util.prefs.plist with the following content:-

{    "/" = {
        ...
        "wu/" = {
            "tang/" = {
                "key1" = "value1";
                "key2" = "value2";
            };
        };
        ...
    };
}

64-bit Windows + 64-bit JVM

Configuration is stored in the registry with the following key:-

HKEY_CURRENT_USER\SOFTWARE\JavaSoft\Prefs\wu\tang

Example 3

How about this?

Preferences.userRoot().node('wu/tang/clan')

// ... OR ...

Preferences.userRoot().node('wu').node('tang').node('clan')

Mac

Now, the shit is about to get real here.

Mac, for some reason, creates a stub under ~/Library/Preferences/com.apple.java.util.prefs.plist with the following content:-

{    "/" = {
        ...
        "wu/" = { "tang/" = { "clan/" = { }; }; };
        ...
    };
}

The actual configuration now resides under ~/Library/Preferences/wu.tang.clan.plist:-

{    "/wu/tang/clan/" = {
        "key1" = "value1";
        "key2" = "value2";
    };
}

It appears that when the path reaches a certain depth, Mac creates a separate configuration file for it.

64-bit Windows + 64-bit JVM

Configuration is stored in the registry with the following key:-

HKEY_CURRENT_USER\SOFTWARE\JavaSoft\Prefs\wu\tang\clan

Preferences.systemNodeForPackage(WuTangClan) or Preferences.systemRoot().node('path')

Instead of storing the configuration at the user level, we may also store it at the system level.

Mac

Instead of storing under ~/Library/Preferences, the configuration is stored under /Library/Preferences.

On top of that, based on the Java Development Guide for Mac, the configuration is only persisted if the user is an administrator.

The really weird part is that the code does not throw any exception when the permission is insufficient.

64-bit Windows + 64-bit JVM

Instead of storing under HKEY_CURRENT_USER\[path], the configuration is stored under HKEY_LOCAL_MACHINE\[path].

Best Practices

I’m not sure if this is a best practice, but my personal preference is to specify my own string path through Preferences.userRoot().node(..).

Preferences.userNodeForPackage(..) worries me because if I refactor my code by moving the class files around, the API may no longer find the existing configuration due to the changed path.

When specifying the string path, do make sure the path value is reasonably unique to prevent reading an existing configuration from other installed software.
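
For example, a reverse-domain style path keeps the node unique (the names below are hypothetical):-

import java.util.prefs.Preferences

// hypothetical reverse-domain path, unlikely to collide with other software
Preferences preferences = Preferences.userRoot().node('com/acme/wu-tang-app')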

Spring + Ehcache: XML-less Spring Configuration for Ehcache 2.x vs Ehcache 3.x

BACKGROUND

The documentation on the web regarding Ehcache 3.x configuration using Spring is rather lacking, and there is a very distinct difference in Spring Java-based configuration between Ehcache 2.x and Ehcache 3.x.

Spring + Ehcache 2.x

Dependency:-

<dependency>
    <groupId>net.sf.ehcache</groupId>
    <artifactId>ehcache</artifactId>
    <version>2.10.3</version>
</dependency>

Spring configuration:-

import net.sf.ehcache.config.CacheConfiguration
import org.springframework.cache.CacheManager
import org.springframework.cache.annotation.EnableCaching
import org.springframework.cache.ehcache.EhCacheCacheManager
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

@Configuration
@EnableCaching
class Config {
    @Bean
    CacheManager cacheManager() {
        return new EhCacheCacheManager(ehCacheManager())
    }

    @Bean(destroyMethod = 'shutdown')
    net.sf.ehcache.CacheManager ehCacheManager() {
        CacheConfiguration cacheConfiguration = new CacheConfiguration(
                name: 'person',
                maxEntriesLocalHeap: 5,
                timeToLiveSeconds: 5
        )

        net.sf.ehcache.config.Configuration config = new net.sf.ehcache.config.Configuration()
        config.addCache(cacheConfiguration)

        return new net.sf.ehcache.CacheManager(config)
    }
}

Spring + Ehcache 3.x

Dependency:-

<dependency>
    <groupId>org.ehcache</groupId>
    <artifactId>ehcache</artifactId>
    <version>3.3.1</version>
</dependency>
<dependency>
    <groupId>javax.cache</groupId>
    <artifactId>cache-api</artifactId>
    <version>1.0.0</version>
</dependency>

Spring configuration:-

import org.ehcache.config.CacheConfiguration
import org.ehcache.config.builders.CacheConfigurationBuilder
import org.ehcache.config.builders.ResourcePoolsBuilder
import org.ehcache.core.config.DefaultConfiguration
import org.ehcache.expiry.Duration
import org.ehcache.expiry.Expirations
import org.ehcache.jsr107.EhcacheCachingProvider
import org.springframework.cache.annotation.EnableCaching
import org.springframework.cache.jcache.JCacheCacheManager
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

import javax.cache.CacheManager
import javax.cache.Caching
import java.util.concurrent.TimeUnit

@Configuration
@EnableCaching
class Config {
    @Bean
    JCacheCacheManager jCacheCacheManager() {
        return new JCacheCacheManager(cacheManager())
    }

    @Bean(destroyMethod = 'close')
    CacheManager cacheManager() {
        CacheConfiguration cacheConfiguration = CacheConfigurationBuilder.newCacheConfigurationBuilder(
                Object,
                Object,
                ResourcePoolsBuilder.heap(5)).
                withExpiry(Expirations.timeToLiveExpiration(new Duration(5, TimeUnit.SECONDS))).
                build()

        Map<String, CacheConfiguration> caches = ['person': cacheConfiguration]

        EhcacheCachingProvider provider = (EhcacheCachingProvider) Caching.getCachingProvider()
        DefaultConfiguration configuration = new DefaultConfiguration(caches, provider.getDefaultClassLoader())

        return provider.getCacheManager(provider.getDefaultURI(), configuration)
    }
}
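
With either version, the caches are then consumed the same way through Spring's caching annotations. A hypothetical service (PersonService and its lookup are illustrative only):-

import org.springframework.cache.annotation.Cacheable
import org.springframework.stereotype.Service

@Service
class PersonService {
    // on a cache miss, the result is stored in the 'person' cache
    // and expires per the TTL configured above
    @Cacheable('person')
    String findPersonName(long id) {
        // pretend this is an expensive lookup
        return "person-${id}"
    }
}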

Design Pattern: Re-accommodate

WHAT

Forcefully evict a random entity from the system due to an overcapacity problem caused by one's own fault. Then, spend countless hours cleaning up the mess.

USAGE

Let’s assume your system has heap size problems and it is about to run out of memory because you implemented endless recursions or have too many running threads.

  1. Randomly select 4 entities (running processes) from the system.
  2. Ask each selected entity to voluntarily quit.
  3. If the chosen entity does not comply, forcefully evict it from the system.
  4. Clean up any data corruption.

WHEN TO USE IT

Only use this design pattern when building any software systems for United Airlines.