GCP + Container Registry: Pushing/Pulling Images

PROBLEM

You want to push a new image to Google Container Registry (GCR) or pull an existing image from GCR.

SOLUTION

Pushing a New Image to GCR

Prepare your Dockerfile.

FROM alpine:3.7

# some content...

Build the image and tag it with a path pointing to GCR within your project.

There are several variations of the GCR hostname (e.g. gcr.io, us.gcr.io, eu.gcr.io), depending on the data center’s location.

The GCR path has the following format: [HOSTNAME]/[PROJECT-ID]/[IMAGE].

docker build -t gcr.io/shitty-project/shitty-repo .
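
If the image has already been built locally, it can simply be retagged instead of rebuilt. As a sketch, myapp:1.0 below is a hypothetical local image name; adding an explicit version tag is optional but makes later pulls unambiguous.

docker tag myapp:1.0 gcr.io/shitty-project/shitty-repo:1.0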

Log into GCP.

gcloud auth login

Register gcloud as a Docker credential helper.

gcloud auth configure-docker
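
This registers gcloud as a credential helper in the Docker client configuration. As a sanity check, ~/.docker/config.json should now contain entries roughly like the following (the exact registry list may differ):

cat ~/.docker/config.json

{
  "credHelpers": {
    "gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud"
  }
}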

Push the image to GCR.

docker push gcr.io/shitty-project/shitty-repo

View the pushed image.

gcloud container images list-tags gcr.io/shitty-project/shitty-repo

DIGEST        TAGS    TIMESTAMP
78b36c0b456d  latest  2019-03-07T16:19:53
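
To list every image in the project rather than the tags of a single repository, the same command group can be used; the output below is illustrative.

gcloud container images list --repository=gcr.io/shitty-project

NAME
gcr.io/shitty-project/shitty-repo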

The repository and image can also be viewed in the GCP Console.

Image in GCR

Pulling an Existing Image from GCR

Complete the authentication process first if it has not been done already.

gcloud auth login
gcloud auth configure-docker

Pull the image from GCR.

docker pull gcr.io/shitty-project/shitty-repo
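
To pin an exact image, pull by tag or by digest; <digest> below is a placeholder that must be replaced with the full sha256 value (the list-tags output above shows it truncated).

docker pull gcr.io/shitty-project/shitty-repo:latest
docker pull gcr.io/shitty-project/shitty-repo@sha256:<digest>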

Docker: Executing Startup Script When Running Container Interactively

PROBLEM

When running a Docker container interactively (e.g. docker run --rm -it myimage), you want a startup script to run every time.

SOLUTION

For Ubuntu, Debian and CentOS images, append the startup script to /root/.bashrc:

# UBUNTU
FROM ubuntu:latest
RUN echo "echo 'Welcome!'" >> /root/.bashrc
WORKDIR /home

# DEBIAN
FROM debian:latest
RUN echo "echo 'Welcome!'" >> /root/.bashrc
WORKDIR /home

# CENTOS
FROM centos:latest
RUN echo "echo 'Welcome!'" >> /root/.bashrc
WORKDIR /home
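
To verify, build and run one of these images interactively; welcome-test is a hypothetical tag.

docker build -t welcome-test .
docker run --rm -it welcome-test    # prints "Welcome!" before the bash prompt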

For the Alpine image, it’s a little different because Alpine uses the Ash shell. Besides writing the startup script to /root/.profile, you also need to point an environment variable called ENV to that path:

FROM alpine:latest
ENV ENV=/root/.profile
RUN echo "echo 'Welcome!'" > $ENV
WORKDIR /home
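
The same check works for the Alpine variant; alpine-welcome is a hypothetical tag. Ash reads the file named by the ENV variable whenever it starts an interactive shell, which is why the greeting appears.

docker build -t alpine-welcome .
docker run --rm -it alpine-welcome    # prints "Welcome!" before the sh prompt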

Docker: Handling Circular Dependency between Containers

PROBLEM

Let’s assume we are going to run 3 containers:-

  • Jenkins – http://server:8080/jenkins
  • Nexus – http://server:8081/nexus
  • Nginx – http://server

Nginx is used to serve cleaner URLs through reverse proxying, so that users access http://server/jenkins and http://server/nexus instead of having to remember specific ports.
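
The ./nginx build context is not shown here; a minimal default.conf for it might look roughly like the sketch below, assuming the service names jenkins and nexus and the context paths defined in the compose file that follows.

server {
    listen 80;

    # forward /jenkins to the Jenkins container (JENKINS_OPTS --prefix=/jenkins)
    location /jenkins {
        proxy_pass http://jenkins:8080/jenkins;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # forward /nexus to the Nexus container (NEXUS_CONTEXT=nexus)
    location /nexus {
        proxy_pass http://nexus:8081/nexus;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}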

So, the simplified docker-compose.yml looks like this:-

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
     - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home
    environment:
      JENKINS_OPTS: "--prefix=/jenkins"

  nexus:
    image: "sonatype/nexus3"
    ports:
     - "8081:8081"
    volumes:
     - nexus:/nexus-data
    environment:
      NEXUS_CONTEXT: "nexus"
    
  nginx:
    build: ./nginx
    ports:
     - "80:80"
    links:
     - jenkins
     - nexus

volumes:
  jenkins:
  nexus:

While http://server/jenkins and http://server/nexus work flawlessly, the Jenkins container is unable to communicate with Nexus through http://server/nexus/some/path, which is handled by Nginx.

Hence, when a Jenkins job tries to pull artifacts from Nexus, the following error is thrown:

[ERROR]     Unresolveable build extension: Plugin ... or one of its 
dependencies could not be resolved: Failed to collect dependencies 
at ... -> ...: Failed to read artifact descriptor for ...: Could 
not transfer artifact ... from/to server 
(http://server/nexus/repository/public/): Connect to server:80 
[server/172.19.0.2] failed: Connection refused (Connection refused) 
-> [Help 2]

SOLUTION: ATTEMPT #1

The first attempt is to set up a link between Jenkins and Nginx with the Nginx alias pointing to the hostname, which is server.

The goal is that when Jenkins communicates with Nexus through http://server/nexus/some/path, Nginx will handle the reverse proxy accordingly.

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
     - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home
    environment:
      JENKINS_OPTS: "--prefix=/jenkins"
    links:
     - nginx:${HOSTNAME}

  nexus:
    image: "sonatype/nexus3"
    ports:
     - "8081:8081"
    volumes:
     - nexus:/nexus-data
    environment:
      NEXUS_CONTEXT: "nexus"

  nginx:
    build: ./nginx
    ports:
     - "80:80"
    links:
     - jenkins
     - nexus

volumes:
  jenkins:
  nexus:

However, when the containers are started, Docker Compose halts with an error:-

ERROR: Circular dependency between nginx and jenkins

SOLUTION: ATTEMPT #2

In an effort to prevent the circular dependency problem, we can set up a link between Jenkins and Nexus with the Nexus alias pointing to the hostname, which is server.

This way, Jenkins communicates directly with Nexus through http://server:8081/nexus/some/path and Nginx stays out of it.

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
     - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home
    environment:
      JENKINS_OPTS: "--prefix=/jenkins"
    links:
     - nexus:${HOSTNAME}

  nexus:
    image: "sonatype/nexus3"
    ports:
     - "8081:8081"
    volumes:
     - nexus:/nexus-data
    environment:
      NEXUS_CONTEXT: "nexus"

  nginx:
    build: ./nginx
    ports:
     - "80:80"
    links:
     - jenkins
     - nexus

volumes:
  jenkins:
  nexus:

This works without a problem.

However, this configuration somewhat defeats the purpose of using Nginx: while users may access Jenkins and Nexus without specifying custom ports, Jenkins still has to communicate with Nexus on port 8081.

Furthermore, this Nexus port is exposed in the build logs of all Jenkins jobs.

SOLUTION: ATTEMPT #3

The last attempt is to configure Nginx with the hostname as a network alias.

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
     - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home
    environment:
      JENKINS_OPTS: "--prefix=/jenkins"

  nexus:
    image: "sonatype/nexus3"
    ports:
     - "8081:8081"
    volumes:
     - nexus:/nexus-data
    environment:
      NEXUS_CONTEXT: "nexus"

  nginx:
    build: ./nginx
    ports:
     - "80:80"
    links:
     - jenkins
     - nexus
    networks:
      default:
        aliases:
         - ${HOSTNAME}

volumes:
  jenkins:
  nexus:

networks:
  default:

This time, Jenkins is able to communicate successfully with Nexus through http://server/nexus/some/path, and Nginx handles the reverse proxy accordingly.
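
A quick way to confirm this from the Jenkins side is to hit Nexus through the alias from inside the Jenkins container. The container name below is a hypothetical Compose-generated name, and this assumes curl is available in the Jenkins image.

docker exec -it <project>_jenkins_1 curl -sI http://server/nexus/repository/public/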

Docker: Defining Custom Location for Named Volume

PROBLEM

Let’s assume we have the following docker-compose.yml:

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
    - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home

volumes:
  jenkins:

By default, all Docker-managed named volumes are stored under the Docker installation directory, typically /var/lib/docker/volumes/[path].

However, it is possible that the /var mount is low on disk space.

SOLUTION

We can define a custom location for the named volume through driver options:-

version: '2'

services:
  jenkins:
    image: "jenkinsci/jenkins"
    ports:
    - "8080:8080"
    volumes:
     - jenkins:/var/jenkins_home

volumes:
  jenkins:
    driver_opts:
      type: none
      device: /data/jenkins
      o: bind

Keep in mind /data/jenkins must be created first on the host.
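
Outside Compose, roughly the same setup can be done with the Docker CLI; the local driver with bind-mount options mirrors the driver_opts above.

sudo mkdir -p /data/jenkins

docker volume create --driver local \
  --opt type=none \
  --opt device=/data/jenkins \
  --opt o=bind \
  jenkins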

Synology NAS: Running CrashPlan in Docker Container

BACKGROUND

The reason to run CrashPlan in a Docker container is to prevent future Synology DSM updates from breaking the CrashPlan app.

Let’s assume the Synology NAS IP address is 1.2.3.4.

STEPS

DiskStation Manager

Log into DiskStation Manager: http://1.2.3.4:5000

Install Docker.

Package Center -> Utilities -> Third Party -> Docker

Mac

SSH into Synology NAS.

ssh admin@1.2.3.4

Pull the CrashPlan Docker image.

sudo docker pull jrcs/crashplan

Run the CrashPlan Docker container. In this example, we want to back up the photo and video directories.

sudo docker run -d --name CrashPlan \
 -p 4242:4242 -p 4243:4243 \
 -v /volume1/photo:/volume1/photo -v /volume1/video:/volume1/video \
 jrcs/crashplan:latest

Back to DiskStation Manager

Get the authentication token from the running CrashPlan container.

Docker -> Container -> CrashPlan -> Details -> 
Terminal -> Create -> bash

Run command:-

cat /var/lib/crashplan/.ui_info

The following text is printed:-

4243,########-####-####-####-############,0.0.0.0

Copy ########-####-####-####-############ somewhere for later use.

By default, CrashPlan allocates 1GB of memory. The recommendation is to allocate 1GB of memory per 1TB of storage to prevent CrashPlan from running out of memory. In this example, we are going to increase it to 3GB.

Edit /var/crashplan/conf/my.service.xml.

vi /var/crashplan/conf/my.service.xml

Change the following line:-

<config ...>
	...
	<javaMemoryHeapMax>3072m</javaMemoryHeapMax>
	...
</config>

Edit /var/crashplan/app/bin/run.conf.

vi /var/crashplan/app/bin/run.conf

Change the following line:-

SRV_JAVA_OPTS="... -Xmx3072m ..."                                                         
GUI_JAVA_OPTS="..."

Stop CrashPlan Docker container.

Docker -> Container -> CrashPlan -> Action -> Stop

Enable auto-restart on CrashPlan Docker container.

Docker -> Container -> CrashPlan -> Edit -> General Settings -> 
Enable auto-restart -> OK

Start CrashPlan Docker container.

Docker -> Container -> CrashPlan -> Action -> Start

Back to Mac

Download and install the CrashPlan software.

Disable the local CrashPlan service, since the UI will act only as a client to the engine running on the NAS.

sudo launchctl unload -w /Library/LaunchDaemons/com.crashplan.engine.plist
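
Should you want the local engine back later, the service can be re-enabled with the matching load command.

sudo launchctl load -w /Library/LaunchDaemons/com.crashplan.engine.plist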

Edit /Applications/CrashPlan.app/Contents/Resources/Java/conf/ui.properties.

sudo nano /Applications/CrashPlan.app/Contents/Resources/Java/conf/ui.properties

Uncomment serviceHost and set it to the Synology NAS IP address.

#Fri Dec 09 09:50:22 CST 2005
serviceHost=1.2.3.4
#servicePort=4243
#pollerPeriod=1000  # 1 second
#connectRetryDelay=10000  # 10 seconds
#connectRetryAttempts=3
#showWelcome=true

#font.small=
#font.default=
#font.title=
#font.message.header=
#font.message.body=
#font.tab=                                  

Edit /Library/Application Support/CrashPlan/.ui_info.

sudo nano "/Library/Application Support/CrashPlan/.ui_info"

Replace the authentication token with the value from the earlier step, and replace the IP address with the Synology NAS IP address.

4243,########-####-####-####-############,1.2.3.4

Finally, run the CrashPlan app to view the backup process.