Playing with Shifter Part 2 – converted Docker containers inside Slurm

This continues on from my previous blog post about NERSC's Shifter, which lets you safely use Docker containers in an HPC environment.

Getting Shifter to work in Slurm is pretty easy: it ships with a SPANK plugin that you install and tell Slurm about. My test config was just:

required /usr/lib64/shifter/shifter_slurm.so shifter_config=/etc/shifter/udiRoot.conf
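
That line goes in Slurm's SPANK plugin config, plugstack.conf. A minimal sketch of deploying it (the paths here are site-dependent, so treat them as assumptions):

# /etc/slurm/plugstack.conf (or a file included from it)
required /usr/lib64/shifter/shifter_slurm.so shifter_config=/etc/shifter/udiRoot.conf

# restart the Slurm daemon on the compute nodes so it picks the plugin up
systemctl restart slurmd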

I was installing by building RPMs (our preferred method is to put the plugin on the cluster's shared filesystem so we don't need to have it in the RAM disk of our diskless nodes). Once that is done you can add the shifter program's arguments to your Slurm batch script and then just call shifter inside it to run a process, for instance:

#!/bin/bash

#SBATCH -p debug
#SBATCH --image=debian:wheezy

shifter cat /etc/issue

results in the following on our RHEL compute nodes:

[samuel@bruce Shifter]$ cat slurm-1734069.out 
Debian GNU/Linux 7 \n \l

simply demonstrating that it works. The advantage of using the plugin and specifying the image this way is that the plugin preps the container for us at the start of the batch job and keeps it around until the job ends, so you can keep running commands in your script inside the container without the overhead of creating and destroying it each time. If you need to run something in a different image you just pass the --image option to shifter itself, which then has to set up and tear down that container, but the one you specified for your batch job is still there.
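
For example, here's a sketch mixing the job's default image with a one-off command in another image (the second image tag is just an illustration):

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=debian:wheezy

# these run in the job's pre-created debian:wheezy container,
# with no per-command setup/teardown cost
shifter cat /etc/issue
shifter cat /etc/debian_version

# this one-off command names a different image, so shifter has to
# set up and tear down that container just for this command
shifter --image=ubuntu:trusty cat /etc/issue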

That’s great for single-CPU jobs, but what about parallel applications? Well, it turns out that’s easy too – you just request the configuration you need and slap srun in front of the shifter command. You can even run MPI applications this way successfully. I grabbed the dispel4py/docker.openmpi Docker container with shifterimg pull dispel4py/docker.openmpi and tried its Python version of the MPI hello world program:

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=dispel4py/docker.openmpi
#SBATCH --ntasks=3
#SBATCH --tasks-per-node=1

shifter cat /etc/issue

srun shifter python /home/tutorial/mpi4py_benchmarks/helloworld.py

This prints the MPI rank to demonstrate that the MPI wire-up was successful, and I forced the tasks onto separate nodes (--tasks-per-node=1) so that the printed hostnames would show it was communicating over the network, not via shared memory on a single node. But the output bemused me a little:

[samuel@bruce Python]$ cat slurm-1734135.out
Ubuntu 14.04.4 LTS \n \l

libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
--------------------------------------------------------------------------
[[30199,2],0]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: bruce001

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
Hello, World! I am process 0 of 3 on bruce001.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
--------------------------------------------------------------------------
[[30199,2],1]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: bruce002

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
Hello, World! I am process 1 of 3 on bruce002.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
--------------------------------------------------------------------------
[[30199,2],2]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: bruce003

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
Hello, World! I am process 2 of 3 on bruce003.

It successfully demonstrates that it is using an Ubuntu container on 3 nodes, but the warnings are triggered because Open MPI in Ubuntu is built with Infiniband support and it is detecting the presence of the IB cards on the host nodes. This is because Shifter is (as designed) exposing the system’s /sys directory to the container. The problem is that this container doesn’t include the Mellanox user-space library needed to make use of the IB cards, so you get warnings that they aren’t working and that it will fall back to a different transport (in this case TCP/IP over gigabit Ethernet).

Open MPI allows you to specify which transports to use, so adding one line to my batch script:

export OMPI_MCA_btl=tcp,self,sm

cleans up the output a lot:

Ubuntu 14.04.4 LTS \n \l

Hello, World! I am process 0 of 3 on bruce001.
Hello, World! I am process 2 of 3 on bruce003.
Hello, World! I am process 1 of 3 on bruce002.
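
As an aside, the same transport list can be given on the command line if you launch with mpirun rather than srun; a sketch:

mpirun --mca btl tcp,self,sm python /home/tutorial/mpi4py_benchmarks/helloworld.py

The OMPI_MCA_btl environment variable form is simply easier to drop into a batch script, and srun exports it to the tasks for you.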

This raises the question – what does this do to latency? The image contains a Python version of the OSU latency testing program, which times messages of various sizes between 2 MPI ranks to give a profile of latency against message size. Running this over TCP/IP is trivial with the dispel4py/docker.openmpi container, but of course it’s lacking the Mellanox library I need, and as the whole point of Shifter is security I can’t get root access inside the container to install the package. Fortunately the author of dispel4py/docker.openmpi has published the implementation on Github, so I forked their repo, signed up for Docker Hub and pushed a version which simply adds the libmlx4-1 package I needed.
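
The change itself is tiny; the Dockerfile addition amounts to something like this sketch (treat it as illustrative rather than the exact contents of my repo):

# extend the original image with the Mellanox user-space verbs driver
# so the openib transport can actually drive the IB cards
FROM dispel4py/docker.openmpi
RUN apt-get update && apt-get install -y libmlx4-1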

Running the test over TCP/IP is simply a matter of submitting this batch script which forces it onto 2 separate nodes:

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=chrissamuel/docker.openmpi:latest
#SBATCH --ntasks=2
#SBATCH --tasks-per-node=1

export OMPI_MCA_btl=tcp,self,sm

srun shifter python /home/tutorial/mpi4py_benchmarks/osu_latency.py

giving these latency results:

[samuel@bruce MPI]$ cat slurm-1734137.out
# MPI Latency Test
# Size [B]        Latency [us]
0                        16.19
1                        16.47
2                        16.48
4                        16.55
8                        16.61
16                       16.65
32                       16.80
64                       17.19
128                      17.90
256                      19.28
512                      22.04
1024                     27.36
2048                     64.47
4096                    117.28
8192                    120.06
16384                   145.21
32768                   215.76
65536                   465.22
131072                  926.08
262144                 1509.51
524288                 2563.54
1048576                5081.11
2097152                9604.10
4194304               18651.98

To run that same test over Infiniband I just modified the export in the batch script to force it to use IB (and thus fail if it couldn’t talk between the two nodes):

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=chrissamuel/docker.openmpi:latest
#SBATCH --ntasks=2
#SBATCH --tasks-per-node=1

export OMPI_MCA_btl=openib,self,sm

srun shifter python /home/tutorial/mpi4py_benchmarks/osu_latency.py

which then gave these latency numbers:

[samuel@bruce MPI]$ cat slurm-1734138.out
# MPI Latency Test
# Size [B]        Latency [us]
0                         2.52
1                         2.71
2                         2.72
4                         2.72
8                         2.74
16                        2.76
32                        2.73
64                        2.90
128                       4.03
256                       4.23
512                       4.53
1024                      5.11
2048                      6.30
4096                      7.29
8192                      9.43
16384                    19.73
32768                    29.15
65536                    49.08
131072                   75.19
262144                  123.94
524288                  218.21
1048576                 565.15
2097152                 811.88
4194304                1619.22

So you can see that’s basically an order of magnitude improvement in latency using Infiniband compared to TCP/IP over gigabit Ethernet (which is what you’d expect).

Because there’s no virtualisation going on here there is no extra penalty to pay for doing this: no need to configure any fancy device pass-through, no loss of any CPU MSR access. So I’d argue that Shifter makes Docker containers way more useful for HPC than virtualisation, or even Docker itself, for the majority of use cases.

Am I excited about Shifter – yup! The potential to let users build an application stack themselves, right down to the OS libraries, and (with a little careful thought) to get something with native interconnect performance is fantastic. Throw in the complexity of dealing with conflicting dependencies between Python modules, system libraries, bioinformatics tools, etc., and the need to provide simple methods for handling all of that, and the advantages seem clear.

So the plan is to roll this out into production at VLSCI in the near future. Fingers crossed! 🙂

Git: Renaming/swapping “master” with a branch on Github

I was playing around with some code and, after getting it working, thought I’d make just one more quick and easy change to finish it off – only to find myself descending into a spiral of additional complexity due to the environment in which it had to work. As this was going to be “easy” I’d been pushing the commits straight to master on Github (I’m the only one using this code), and of course a few reworks in I realised that this was never going to work out well and needed to be abandoned.

So, how to fix this? The ideal situation would be to just disappear all the commits after the last good one, but that’s not really an option, so what I wanted was to create a branch from the last good point and then swap master and that branch over. Googling pointed me to some possibilities, including this “deprecated feedback” item from “githubtraining”, which was a useful guide, so I thought I’d blog what worked for me in case it helps others.

  1. git checkout -b good $LAST_GOOD_COMMIT # This creates a new branch from the last good commit
  2. git branch -m master development # This renames the "master" branch to "development"
  3. git branch -m good master # This renames the "good" branch to "master".
  4. git push origin development # This pushes the "development" branch to Github
  5. In the Github web interface I went to my repo’s “Settings” on the right hand side (just above the “clone URL” part) and changed the default branch to “development”.
  6. git push origin :master # This deletes the "master" branch on Github
  7. git push --all # This pushes our new master branch (and everything else) to Github
  8. In the Github web interface I went and changed my default branch back to “master”.

…and that was it, not too bad!

You probably don’t want to do this if anyone else is using this repo though. 😉

UniFi systemd unit file for Ubuntu 15.04

At work we’ve started using some UniFi wireless gear, and the system I’ve managed to commandeer to run the control software for it is running Kubuntu 15.04, which uses systemd. The UniFi Debian packages don’t supply systemd unit files, so I went hunting and found a blog post by Derek Horn about getting it running on CentOS 7, so I nabbed his unit file and adapted it for Ubuntu (which wasn’t that hard).

The file lives in /etc/systemd/system/unifi.service and was enabled with systemctl enable unifi.service (from memory there was another step to get systemd to rescan unit files and pick up the new one, but I don’t remember for sure – something like the sequence below).
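
A sketch of the likely full sequence (the daemon-reload being the rescan step):

sudo systemctl daemon-reload          # rescan unit files so systemd sees the new one
sudo systemctl enable unifi.service   # enable it at boot
sudo systemctl start unifi.service    # and start it now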

Here is the unit file:

#
# Systemd unit file for unifi-rapid
#

[Unit]
Description=UniFi Wireless AP Control System
After=rsyslog.target network.target

[Service]
Type=simple
User=root
#ExecStart=/usr/bin/java -Xmx1024M -jar /usr/lib/unifi/lib/ace.jar start
ExecStart=/usr/bin/jsvc -nodetach -home /usr/lib/jvm/java-7-openjdk-amd64 -cp /usr/share/java/commons-daemon.jar:/usr/lib/unifi/lib/ace.jar -pidfile /var/run/unifi/unifi.pid -procname unifi -outfile SYSLOG -errfile SYSLOG -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Xmx1024M com.ubnt.ace.Launcher start
#ExecStop=/usr/bin/java -jar /usr/lib/unifi/lib/ace.jar stop
ExecStop=/usr/bin/jsvc -home /usr/lib/jvm/java-7-openjdk-amd64 -cp /usr/share/java/commons-daemon.jar:/usr/lib/unifi/lib/ace.jar -pidfile /var/run/unifi/unifi.pid -procname unifi -outfile SYSLOG -errfile SYSLOG -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Xmx1024M -stop com.ubnt.ace.Launcher stop
# treat jsvc's exit status 255 as a successful exit
SuccessExitStatus=255

[Install]
WantedBy=multi-user.target

ARM v8 (64-bit) developer boxes

Looks like things are moving along in the world of 64-bit ARM: systems aimed at early-adopter developers are now around. For instance APM have their X-C1 Development Kit Plus, which has 8 x 2.4GHz ARMv8 cores, 16GB RAM, 500GB HDD, 1x10gigE and 3x1gigE for ~US$2,500 (or a steep discount if you qualify as a developer). Oh, and it ships with Linux by default, of course.

Found via a blog post by Steve McIntyre about bringing up Debian Jessie on ARMv8 (arm64 will be a release architecture for Jessie), which has the interesting titbit that (before ARM had their Juno developer boxes):

Then Chen Baozi and the folks running the Tianhe-2 supercomputer project in Guangzhou, China contacted us to offer access to some arm64 hardware

So it looks like (I presume) NUDT are paying arm64 some attention and building or acquiring their own ARMv8 systems.

First beta release of Vacation 1.2.8.0

Vacation 1.2.8.0-beta1 is a release that fixes a long-standing bug in the handling of wrapped email headers. Many thanks to Zdenek Havranek for the fix!

Richard Keech contributed a chkvacation script to enable/disable and check your vacation status. He also contributed some SELinux information for Vacation that you can find in the INSTALL file.

It also has some minor changes to the build system, including the ability to do “make install DESTDIR=/foo/bar” and to put the German manual page in the correct location.

Please see the ChangeLog for more information.

https://sourceforge.net/projects/vacation/files/vacation/1.2.8.0-beta1/

Please test and let me know of any bugs you find!

How to delete lots of programs in MythTV, easily

I realised I had over 60 episodes of Get Smart recorded which I was never going to get around to watching, so I wanted to delete them quickly. I had a quick poke at MythWeb but that didn’t seem to have the functionality; a quick google revealed this forum post, which says:

When in the screen for selecting a recording to watch, mark the recording with a slash (“/”).
Mark all that you want to delete.
Press M to bring up the Recordings list menu.
Select playlist options
Select Delete

Works like a charm!

There’s also Craig’s set of command line tools that can assist with this: http://taz.net.au/mythtv-tools/.

PGI ARM OpenCL compiler gone, first victim of nVidia purchase?

Last August PGI announced an update to its “PGI OpenCL Compiler for ARM” (PGCL 12.7), but if you go looking for that on the PGI news page you won’t find it. In fact if you go to their products page and follow the link for the “PGI OpenCL Compiler for ARM” you’ll find it’s gone too.

For the record that part of the products page currently looks like:

PGI Compilers and Tools for Mobile and Embedded Platforms

PGI OpenCL Compiler for ARM
PGCL™ is an OpenCL™ framework for compiling and running OpenCL 1.1 embedded profile
applications on the ST-Ericsson NovaThor™ U8500 and follow-on platforms using a single
ARM core as the OpenCL host and multiple ARM cores as an OpenCL computing device

The interesting thing is that this has happened recently: Google’s cache of the news page (dated July 25th 2013) still has the announcement listed:

 The Portland Group Updates its OpenCL Compiler for Multi-core ARM

Portland, Oregon
August 21, 2012
Latest PGCL includes automatic generation of NEON/SIMD instructions

The Portland Group® (PGI), a wholly-owned subsidiary of STMicroelectronics and the leading
independent supplier of compilers and tools for high-performance computing, today announced
the release of PGCL 12.7. PGCL™ is the PGI OpenCL framework for multi-core ARM-based
Systems-on-Chips (SoCs), currently available on ST-Ericsson NovaThor™ platforms. PGCL includes
a PGI OpenCL compiler for multi-core ARM CPUs as a compute device and complements OpenCL
for GPUs. 

So something changed in the last week – oddly enough, around the same time that nVidia announced it was buying PGI.

Disable SpamCop reporting in SpamAssassin

If you’ve been using SpamAssassin and have been reporting to SpamCop then you’ll have found that overnight you got a heap of bounces back saying things like:

<devnull@prod-sc-app7.sv4.ironport.com> (expanded from
    <spamassassin-submit@spam.spamcop.net>): unknown user: "devnull"

It turns out that the spamassassin-submit@spam.spamcop.net address appears to be something that the SpamAssassin developers set without consulting SpamCop, and SpamCop had just been blackholing those reports for an unknown amount of time. Last night that blackhole went away, and now IronPort are rejecting the reports, which was how I learnt of this. I’m not impressed by what the SA developers did here; the plugin should have required you to put in a registered SpamCop address and not reported anything if that wasn’t set.

I’ve disabled my SpamCop reporting by commenting out this line in /etc/mail/spamassassin/v310.pre on my Debian mailserver:

loadplugin Mail::SpamAssassin::Plugin::SpamCop

If you use SpamAssassin and don’t have a registered SpamCop account you’ll want to do the same.
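
For reference, the end result plus a restart so spamd picks up the change (the restart command assumes the stock Debian spamassassin service; adjust for your setup):

# in /etc/mail/spamassassin/v310.pre -- reporting disabled:
#loadplugin Mail::SpamAssassin::Plugin::SpamCop

sudo service spamassassin restart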

Speeding up Digikam’s face recognition (with risks)

It’d been a while since I’d last told Digikam to scan my collection for faces, and having just upgraded to 3.2.0 I thought it was about time to have another shot at it. However, I noticed it was taking an awfully long time and seemed to be using only one of the eight hardware threads on this system (a quad-core Ivy Bridge i7-3770K with HyperThreading, running Kubuntu 13.04), so I thought I’d see if simply taking advantage of OpenMP could improve things with multithreading.

To do that I just started a new konsole and (as a first step) told OpenMP to use all the cores with:

export OMP_NUM_THREADS=8

Running digikam from that session and starting a face scan showed that yes, it was using all 8 threads, but not to any great extent. Running iotop showed it doing about 5MB/s in reads, and latencytop showed that it was spending most of its time in fsync(). Now that’s good, because it’s making sure that the data has really hit the rust so that everything stays consistent.

However, in this case I can rebuild the entire face database should I need to, and I have about 66GB of photos to scan – plus I wanted to see just how fast this could go. 😉 So it was time to get a little dangerous and try Stewart Smith’s wonderfully named “libeatmydata”, which gives you (surprise, surprise) a library and a helper program that let you preload an fsync() function that really only does return(0); (which, you may be interested to know, is still POSIX compliant).
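
The helper program is just a convenience wrapper around LD_PRELOAD, so the manual equivalent is something like this (the library path is an assumption – it varies by distro):

# preload the no-op fsync() and friends, then run digikam as usual
LD_PRELOAD=/usr/lib/libeatmydata/libeatmydata.so digikam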

So to test that out I just needed to do:

eatmydata digikam

and suddenly I had all 8 threads running flat out. iotop showed that Digikam was now doing about 25-30MB/s of reads, and latencytop showed that most of its waiting time was now user-space lock contention, i.e. locks protecting shared data structures from threads stomping on each other and going off into the weeds. Interestingly the disks are a lot quieter than before, too. Oh, and it’s screaming through the photos now. 🙂

WARNING: Do not use eatmydata for anything you care about; it will do just what its name says should your power die, your system hang, the universe end, etc.