Automating LXC container creation with Ansible

LXC is a Linux container technology that I use for both development and production setups hosted on Debian.

This type of container acts a lot like a lightweight virtual machine, and can be administered with standard Linux tools. Once SSH access is configured, you should be able to run the same scripts against either an LXC container or a VM without noticing the difference.

This setup will provision “privileged” containers behind a NAT, which is a setup that is most useful for a developer workstation. A setup in a server rack would be more likely to use “unprivileged” containers on a host bridge, which is slightly more complex to set up. The good news is that the guest container will behave very similarly once it’s provisioned, so developers shouldn’t need to adapt their code to those details either.

Manual setup of an LXC container

You need to know how to do something manually before you can automate it.

The best reference guide for this is the current Debian documentation. This is a shorter version of those instructions, with only the parts we need.

Packages

Everything you need for LXC is in the lxc Debian package:

$ sudo apt-get install lxc
...
The following additional packages will be installed:
  bridge-utils debootstrap liblxc1 libpam-cgfs lxcfs python3-lxc uidmap
Suggested packages:
  btrfs-progs lvm2
The following NEW packages will be installed:
  bridge-utils debootstrap liblxc1 libpam-cgfs lxc lxcfs python3-lxc uidmap
0 upgraded, 8 newly installed, 0 to remove and 0 not upgraded.
Need to get 1,367 kB of archives.
After this operation, 3,762 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
...

Network

Enable the LXC bridge, and start it up:

$ echo 'USE_LXC_BRIDGE="true"' | sudo tee -a /etc/default/lxc-net
$ sudo systemctl start lxc-net
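
If you want the bridge to come up on every boot, you can also enable the service. The /etc/default/lxc-net file accepts a few more optional settings; the values in the comments below are the stock defaults, so only set them if you need a different range.

$ sudo systemctl enable lxc-net
# Optional overrides in /etc/default/lxc-net, for example:
#   LXC_ADDR="10.0.3.1"
#   LXC_NETWORK="10.0.3.0/24"
#   LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"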

This gives you an internal network for your containers to connect to. From there, they can connect out to the Internet, or communicate with each other:

$ ip addr show
...
3: lxcbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.1/24 scope global lxcbr0
       valid_lft forever preferred_lft forever

Defaults

Instruct LXC to attach a NIC to this network each time you create a container:

$ sudo vi /etc/lxc/default.conf

Replace that file with:

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx

You can then create a ‘test1’ box, using the Debian image online. Note the output here indicates that the container has no SSH server or root password.

$ sudo lxc-create --name test1 --template=download -- --dist=debian --release=stretch --arch=amd64
Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created a Debian container (release=stretch, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

The container is created in a stopped state. Start it up now:

$ sudo lxc-start --name test1 

It now appears with an automatically assigned IP.

$ sudo lxc-ls --fancy
NAME  STATE   AUTOSTART GROUPS IPV4       IPV6 
test1 RUNNING 0         -      10.0.3.250 -    

Set up login access

Start by getting your SSH public key ready. You can find it at ~/.ssh/id_rsa.pub, and can use ssh-keygen to create it if it doesn’t exist.

To SSH in, you need to install an SSH server, and get this public key into the /root/.ssh/authorized_keys file in the container.

$ sudo lxc-attach --name test1
root@test1:/# apt-get update
root@test1:/# apt-get -y install openssh-server
root@test1:/# mkdir -p ~/.ssh
root@test1:/# echo "ssh-rsa (public key) user@host" >> ~/.ssh/authorized_keys

Type exit or press Ctrl+D to quit, and try to log in from your regular account over SSH:

$ ssh root@10.0.3.250
The authenticity of host '10.0.3.250 (10.0.3.250)' can't be established.
ECDSA key fingerprint is SHA256:EWH1zUW4BEZUzfkrFL1K+24gTzpd8q8JRVc5grKaZfg.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.3.250' (ECDSA) to the list of known hosts.
Linux test1 4.14.0-3-amd64 #1 SMP Debian 4.14.13-1 (2018-01-14) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@test1:~# 

And you’re in. You may be surprised how minimal the LXC images are by default, but the full power of Debian is available from apt-get.

This container is not configured to start on boot. For that, you would add this line to /var/lib/lxc/test1/config:

lxc.start.auto = 1

Teardown

To stop the test1 container and then delete it permanently, run:

sudo lxc-stop --name test1
sudo lxc-destroy --name test1

Automated setup of LXC containers with Ansible

Now that the basic steps have been done manually, I’ll show you how to use Ansible to create a set of LXC containers. If you haven’t used it before, Ansible is an automation tool for managing computers. At its heart, it just logs into machines and runs things. These scripts are an approximate automation of the steps above, so that you can create 10 or 100 containers at once if you need to.

I use this method on a small project that I maintain on GitHub called ansible-live, which bootstraps a containerized training environment for Ansible.

Host setup

You need a few packages and config files on the host. In addition to the lxc package, we need lxc-dev and the lxc-python2 Python package (installed via pip) to manage the containers from Ansible:

- hosts: localhost
  connection: local
  become: true
  vars:
  - interface: lxcbr0

  tasks:
  - name: apt lxc packages are installed on host
    apt: name={{ item }}
    with_items:
    - lxc
    - lxc-dev
    - python-pip

  - copy:
      dest: /etc/default/lxc-net
      content: |
        USE_LXC_BRIDGE="true"

  - copy:
      dest: /etc/lxc/default.conf
      content: |
        lxc.network.type = veth
        lxc.network.link = {{ interface }}
        lxc.network.flags = up
        lxc.network.hwaddr = 00:16:3e:xx:xx:xx

  - service:
      name: lxc-net
      state: started

  - name: pip lxc packages are installed on host
    pip:
      name: "{{ item }}"
    with_items:
    - lxc-python2
    run_once: true

This can be executed with the following command:

ansible-playbook setup.yml --ask-become-pass --diff

Container creation

Add a file called inventory to specify the containers to use. These are two IP addresses in the range of the LXC network.

deb1 ansible_host=10.0.3.100
deb2 ansible_host=10.0.3.101

For local work, I find it easier to set an IP address with Ansible and use the /etc/hosts file, which is why IP addresses are included here. Without it, you need to wait for each container to boot, then detect its IP address before you can log in.
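
For example, to add matching names on the workstation by hand (a manual step, not part of the playbook; the addresses must match your inventory):

echo '10.0.3.100 deb1' | sudo tee -a /etc/hosts
echo '10.0.3.101 deb2' | sudo tee -a /etc/hosts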

Add this to setup.yml:

- hosts: all
  connection: local
  become: true
  vars:
  - interface: lxcbr0
  tasks:
  - name: Load in local SSH key path
    set_fact:
      my_ssh_key: "{{ lookup('env','HOME') }}/.ssh/id_rsa.pub"

  - name: interface device exists
    command: ip addr show {{ interface }}
    changed_when: false
    run_once: true

  - name: Local user has an SSH key
    command: stat {{ my_ssh_key }}
    changed_when: false
    run_once: true

  - name: containers exist and have local SSH key
    delegate_to: localhost
    lxc_container:
      name: "{{ inventory_hostname }}"
      container_log: true
      template: debian
      state: started
      template_options: --release stretch
      container_config:
        - "lxc.network.type = veth"
        - "lxc.network.flags = up"
        - "lxc.network.link = {{ interface }}"
        - "lxc.network.ipv4 = {{ ansible_host }}/24"
        - "lxc.network.ipv4.gateway = auto"
      container_command: |
        if [ ! -d ~/.ssh ]; then
          mkdir ~/.ssh
          echo "{{ lookup('file', my_ssh_key) }}" | tee -a ~/.ssh/authorized_keys
          sed -i 's/dhcp/manual/' /etc/network/interfaces && systemctl restart network
        fi

In the next block of setup.yml, use ssh-keyscan to collect the SSH host key of each machine as it becomes available.

- hosts: all
  connection: local
  become: false
  serial: 1
  tasks:
  - wait_for: host={{ ansible_host }} port=22

  - name: container key is up-to-date locally
    shell: ssh-keygen -R {{ ansible_host }}; (ssh-keyscan {{ ansible_host }} >> ~/.ssh/known_hosts)

Lastly, jump in via SSH and install python. This is required for any follow-up configuration that uses Ansible.

- hosts: all
  gather_facts: no
  vars:
  - ansible_user: root
  tasks:
  - name: install python on target machines
    raw: which python || (apt-get -y update && apt-get install -y python)

Next, you can execute the whole script to create the two containers.

ansible-playbook setup.yml --ask-become-pass --diff
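
At this point, you can check that both containers are reachable over SSH with an ad-hoc Ansible ping (a quick verification step, not part of setup.yml):

ansible all -i inventory -u root -m ping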

Scaling to hundreds of containers

Now that you have created two containers, it is easy enough to see how you would make 20 containers by adding a new inventory:

for i in {1..20}; do echo deb$(printf "%03d" $i).example.com ansible_host=10.0.3.$((i+1)); done | tee inventory
deb001.example.com ansible_host=10.0.3.2
deb002.example.com ansible_host=10.0.3.3
deb003.example.com ansible_host=10.0.3.4
...

And then run the script again:

ansible-playbook -i inventory setup.yml --ask-become-pass

This produces 20 machines after a few minutes.

The processes running during this setup were mostly rsync (copying the container contents), plus waiting on the network to retrieve python many times over. If you need to optimise for frequent container spin-ups, LXC supports storage back-ends with copy-on-write, and you can cache package installs with a local webserver, or build some packages into the template.
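
As a sketch of the caching idea, apt-cacher-ng on the host is one low-effort option. The package name and proxy syntax below are standard Debian; the bridge address comes from the earlier network output, so adjust it if yours differs.

# On the host; apt-cacher-ng listens on port 3142 by default
sudo apt-get install apt-cacher-ng
# In each container, point apt at the host's bridge address
echo 'Acquire::http::Proxy "http://10.0.3.1:3142";' > /etc/apt/apt.conf.d/01proxy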

Running these 20 containers plus a Debian desktop, I found that my computer was using just 2.9GB of RAM, so I figured I would test 200 empty containers at once.

for i in {1..200}; do echo deb$(printf "%03d" $i).example.com ansible_host=10.0.3.$((i+1)); done > inventory
ansible-playbook -i inventory setup.yml --ask-become-pass

It took a very long time to add Python to each install, but the result is as you would expect:

$ sudo lxc-ls --fancy
NAME               STATE   AUTOSTART GROUPS IPV4       IPV6 
deb001.example.com RUNNING 0         -      10.0.3.2   -    
deb002.example.com RUNNING 0         -      10.0.3.3   -    
deb003.example.com RUNNING 0         -      10.0.3.4   -    
...
deb198.example.com RUNNING 0         -      10.0.3.199 -    
deb199.example.com RUNNING 0         -      10.0.3.200 -    
deb200.example.com RUNNING 0         -      10.0.3.201 -    

The base resource usage of an idle container is absolutely tiny, at around 13 megabytes: the system moved from 2.9GB to 5.4GB of RAM used when I added the extra 180 containers. Containers clearly have a lower overhead than VMs, since no RAM has been reserved here.

Software updates

The containers are updated just like regular VMs:

apt-get update
apt-get dist-upgrade
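
If you would rather not log in to each container, the same updates can be pushed to every machine in the inventory with an ad-hoc Ansible command (a sketch using the apt module):

ansible all -i inventory -u root -m apt -a "update_cache=yes upgrade=dist"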

Backups

In this setup, each container’s contents are stored under /var/lib/lxc/. As long as the container is stopped, you can safely use tar or rsync to make a full copy:

$ sudo tar -czf deb001.20180209.tar.gz /var/lib/lxc/deb001.example.com/
$ rsync -avz /var/lib/lxc/deb001.example.com/ remote-computer@example.com:/backups/deb001.example.com/
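
Restoring from the tar archive is the reverse (a sketch: the archive was created with paths relative to /, so unpacking from the filesystem root puts everything back in place; stop the container first if it is running):

$ sudo lxc-stop --name deb001.example.com
$ sudo tar -xzf deb001.20180209.tar.gz -C /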

Full-machine snapshots are also available on the Ceph or LVM back-ends, if you use those.

Teardown

The same Ansible module can be used to delete all of these machines in a few seconds.

- hosts: all
  connection: local
  become: true
  tasks:
  - name: Containers do not exist
    delegate_to: localhost
    lxc_container:
      name: "{{ inventory_hostname }}"
      state: absent

Save this as teardown.yml, and run it with:

ansible-playbook -i inventory teardown.yml --ask-become-pass

Conclusion

Hopefully this post has given you some insight into one way that Linux containers can be used. I have found LXC to be a great technology to work with for standalone setups, and regularly use the same scripts to configure either an LXC container or a VM, depending on the target environment.

The low resource usage also means that I can run fairly complex setups on a laptop, where the overhead of large VMs would be prohibitive.

I don’t think that LXC is directly comparable to full container ecosystems like Docker, since they are geared towards different use cases. These are both useful tools to know, and have complementary strengths.

Using custom fonts to add new glyphs to an Epson printer

I develop a printer driver for ESC/POS receipt printers, and we regularly get feature requests for encoding text in the Chinese, Japanese and Korean languages (“CJK”).

I have recently been looking for a way to add support for these on receipt printers that have no native ability to render them, and thought I would write a bit about some progress so far.

I previously wrote a bit about printing individual bitmaps for each character, where here I am aiming to print entire scripts.

Background

Programmers usually deal with text in UTF-8, but receipt printers don’t. Instead, they still use a series of legacy code pages to represent non-ASCII text. Mapping arbitrary text to something understood by these printers is a huge challenge.

The escpos-php driver will automatically map a lot of western scripts to these code pages. However, if you attempt to send an example string like “日本語” to escpos-php currently, the driver will substitute it with “???”, since it doesn’t know how to convert these characters to ESC/POS.

On some printers, there are native commands to print Japanese, but for a driver project, we need something with broad compatibility. So, I decided to try to get this working on an Epson TM-T20 variant which has no CJK fonts.

I started by making a new standalone test script, which converts text input into ESC/POS using a cut-down version of the escpos-php printer driver.

<?php
// Assumes escpos-php is installed via Composer; these namespaces are from version 2.x of the library
require __DIR__ . '/vendor/autoload.php';
use Mike42\Escpos\PrintConnectors\FilePrintConnector;
use Mike42\Escpos\Printer;

$text = file_get_contents("php://stdin");
$connector = new FilePrintConnector("php://stdout");
$printer = new Printer($connector);
$printer -> text($text);
$printer -> cut();
$printer -> close();

I then modified this to print arbitrary UTF-8 text with a local bitmap font. These next sections go through some of the things I had to write to get it all working.

Character representation

I decided to start with the GNU Unifont project, because it ships fixed-width binary fonts in a text format that can be parsed without a font library, is freely licensed, and has excellent coverage.
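
For reference, Unifont’s .hex format is plain text: one line per glyph, with the code point, a colon, then the 16 bitmap rows as hexadecimal (two hex digits per row for 8-pixel-wide glyphs, four for 16-pixel-wide ones). Assuming you have a downloaded unifont.hex file (the filename here is an assumption), you can pull out a single glyph with grep:

# U+65E5 is 日; it is a 16-pixel-wide glyph, so 64 hex digits follow the colon
grep '^65E5:' unifont.hex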

So the first issue to solve was font size:

  • Unifont contains characters that are 8 or 16 pixels wide and 16 pixels tall, covering the entire Unicode Basic Multilingual Plane (BMP).
  • ESC/POS supports a fixed 12×24 or a smaller 9×17 font.
  • ESC/POS fonts are submitted in a 24 pixel tall format regardless of print area.

Since the characters would be surrounded by too much whitespace in the “Font A” (12×24) representation, I settled on printing in “Font B” (9×17), leaving a one-pixel space underneath and to the right of each character. These pictures show how the glyphs (grey) are laid out in the available print area (unused print area in white), and in the available memory (unused memory in red).

Note that wider characters have a two-pixel dead-zone on the right. The non-printable 7 pixels at the bottom of the images are ignored by the printer.

The format on the printer for each character stores bits in a column-major format, while most raster formats are row-major, so I wrote a quick converter to rotate the bits. The converter code is not very concise, so I’ll just share a screen capture here. The full code is linked at the end of this post.

Lastly, the output size on paper was tiny, so I set the printer to double the size, which results in text that is around 50% larger than the default output.

Storage of fonts

There is only space for 95 single-width characters in an ESC/POS font, but the scripts are much larger than this.

I treated the font as a queue in this implementation. During the print-out, new characters are added to the font as necessary, and the font is re-written from the front as space runs out. This is also known as a FIFO cache eviction policy.

Input

I converted the string input to an array of Unicode code points to avoid canonicalisation issues.

$chrArray = preg_split('//u', $text, -1, PREG_SPLIT_NO_EMPTY);
$codePoints = array_map("IntlChar::ord", $chrArray);
foreach($codePoints as $char) {
  $this -> writeChar($char);
}

The IntlChar class is provided by an extension which is very useful but not widespread, which limits the portability of this code.

Result

I got the list of languages from the sidebar of a Wikipedia article to use as a test string, since it contains short strings in a large number of scripts.

cat test.txt | php unifont-example.php > /dev/usb/lp0

The output contains a large number of correctly rendered scripts, including the CJK output, which was not previously possible on my printer.

Success!

Advantages

Previously, I have tried generating small images from system fonts to send text to the printer. This is quite costly in terms of processing and data transfer, and the printer is unable to format or wrap the text for you.

Storing glyphs in the custom font area involves transferring less raster data, and allows most text formatting commands to be used.

Limits

These characters are a different size to the native printer fonts, so we can’t mix them on the same line. This means that we can’t use this code to implement an automatic fallback in escpos-php. However, it may appear in a future version as an alternative “PrintBuffer”, which can be explicitly enabled by developers who are not interested in using the native fonts.

The esc2html utility is not able to emulate custom fonts, so the output cannot currently be rendered without an Epson printer.

Also, we simply printed a stream of characters, which is not really how text works. To implement Unicode, we need to be able to join and compose characters, and respect bi-directional text. Unicode text layout is not trivial at all.

Get the code

The full script is available in the escpos-snippets repo on GitHub, where I store prototypes of new functionality that is not yet ready for prime-time.

How to use HiDPI displays on Debian 9

I recently added a 4K monitor to my Debian box, and had to set a few things to make it display things at a good size. These high-density monitors, which are becoming common on laptops and desktops, are known as “HiDPI” displays.

Currently I get the best results with:

  • Window scaling factor of 2
  • Font scaling 0.90 to make text slightly smaller

Note that “window scaling” is not “upscaling” (stretching an image). In this version of Gnome, it means “single/double/triple DPI”. The implementations are in the process of changing: Soon you should be able to set any scaling factor.

This post assumes a Gnome version around 3.26, which is what you would get as a default if you installed Debian 9 today.

Apply to one user

Under Settings → Devices → Displays, set the Scale to 200%.

Under Tweaks → Fonts, set the Scaling Factor to 0.90.

Next, add these lines to ~/.bashrc to apply similar scaling to QT apps.

export QT_AUTO_SCREEN_SCALE_FACTOR=0
export QT_SCALE_FACTOR=2

Log out and back in to ensure that the settings have applied everywhere.

Apply to any user

If you have a shared system (eg. domain accounts), or want to style the login box as well, then you can apply the same settings system-wide, as below.

These steps are based on answers to the Ask Ubuntu question: Adjust text scaling factor for all users.

nano /usr/share/glib-2.0/schemas/org.gnome.desktop.interface.gschema.xml

Set the text-scaling-factor to 0.9, and the scaling-factor to 2.

<key name="text-scaling-factor" type="d">
  <range min="0.5" max="3.0"/>
  <default>0.9</default>
  <summary>Text scaling factor</summary>
  <description>
    Factor used to enlarge or reduce text display, without changing font size.
  </description>
</key>
<key name="scaling-factor" type="u">
  <default>2</default>
  <summary>Window scaling factor</summary>
  <description>
    Integer factor used to scale windows by. For use on high-dpi screens.
    0 means pick automatically based on monitor.
  </description>
</key>

Re-compile the schemas:

glib-compile-schemas /usr/share/glib-2.0/schemas
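
You can check that the new defaults took effect with gsettings; these are the same two keys edited above:

gsettings get org.gnome.desktop.interface scaling-factor
gsettings get org.gnome.desktop.interface text-scaling-factor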

Next, drop some similar environment variables for QT apps into /etc/profile.d/hidpi.sh to apply them to all users:

export QT_AUTO_SCREEN_SCALE_FACTOR=0
export QT_SCALE_FACTOR=2

After this, reboot. If the setting has applied, then the gdm3 login box will be scaled as well.

How to use parallel to speed up your work

GNU Parallel is a tool to execute multiple commands at once. In its basic usage, you would list your commands in a file, so that it can execute them, several at a time.

It gives the most benefit on processes that don’t fully utilise your CPU. Almost every laptop, desktop and single board computer now has multiple CPU cores available, so you are probably missing out if you frequently perform batch operations without it.

Installation

On Debian or Ubuntu:

sudo apt-get install parallel
parallel --cite

On Fedora the package name is the same:

sudo dnf install parallel
parallel --cite

Example 1: Convert loops to pipes

Using the ImageMagick tool to convert a folder of GIF images to PNG format can be done in a loop:

for i in *.gif; do convert $i -scale 200% ${i%.*}.png; done

Or, you could print each command in a loop then pass them to parallel.

for i in *.gif; do echo convert $i -scale 200% ${i%.*}.png; done | parallel

The second command is many times faster on a multi-core computer.
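
As an aside, parallel can also build the command lines itself, without the echo loop. This sketch uses its replacement strings, where {} is the input file and {.} is the same name with the extension removed:

parallel convert {} -scale 200% {.}.png ::: *.gif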

Example 2: Replace xargs with parallel

This command executes a single “pngcrush” command on each PNG file in a directory, one at a time.

find . -type f -name '*.png' -print0  | xargs -0 -n1 -r pngcrush -q -ow -brute

To convert this to use parallel, you would use the following command-line:

find . -type f -name '*.png' | parallel "pngcrush -q -ow -brute {}"

Don’t use xargs in parallel mode

Expert command line users will also know about xargs -P, which seems to do the same thing at a glance.

xargs is good at making really long command-lines, and not so good at executing multiple commands at once. It will mix the output of the commands, and requires you to specify the number of jobs to run.

Parallel is designed to do lots of things at once, and it does it well. It will choose some good defaults for the number of processes to execute, and adds an insane collection of features that you need for large batches. To name just a few (a short example follows the list):

  • Control spawning of new jobs based on things like available memory, system load, or an absolute number of jobs to keep running
  • Distribute jobs to remote computers
  • Show progress
  • Control of when to terminate the jobs
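
Here is a quick sketch combining a few of those options with the pngcrush example from earlier: cap the number of jobs, hold off starting new ones when the load average is high, and show an ETA as it runs.

find . -type f -name '*.png' | parallel --jobs 4 --load 80% --eta "pngcrush -q -ow -brute {}"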

How to generate star fields

I recently needed a texture for the skybox in a 3D space game.

I used this ImageMagick one-liner to generate a dotted canvas with some grey and white pixels. It displays well if the texture will be stretched.

convert -size 1600x900 xc: +noise Random -channel R -threshold 0.5% \
        -negate -channel RG -separate +channel \
        -compose multiply -composite stars.png

Alternative style

With a simple modification, the bright stars are made bigger, and the dull ones are made smaller, with only black and white used. This works well if the texture will be shrunk for display.

convert -size 800x450 xc: +noise Random -channel R -threshold 0.5% \
        -negate -channel RG -separate +channel \
        -compose multiply -composite -resize 200% \
        -threshold 10% stars-rounded.png

Full process

Although these are great one-liners, the actual process is a bit hard to follow without some smaller steps.

Here, we will generate a 150×90 star field in several steps. I’ve scaled each of these pictures to 200% of their original size and converted them to PNG for display on the web. I’ve used BMP in the commands only because it saves some plumbing around colour spaces.

Start with a blank canvas:

convert -size 150x90 xc: stars-01.bmp

Add random RGB noise:

convert stars-01.bmp +noise Random stars-02.bmp

In the red channel of the image, apply a black/white threshold: if the red channel is greater than 0.5%, it is set to the maximum, otherwise it is set to the minimum.

Mostly the red channel is now 100%, with random dots of 0%. The blue and green channels are still completely random:

convert stars-02.bmp -channel R -threshold 0.5% stars-03.bmp

Negate the image, so that the red channel is mostly 0%, with dots of 100%:

convert stars-03.bmp -negate stars-04.bmp

Extract the red channel and green channel into separate images (these come out as stars-05-0.bmp and stars-05-1.bmp):

convert stars-04.bmp -channel RG -separate +channel stars-05.bmp

The red channel will be dots of white, and the green channel will still be random.

Multiply the channels together, so that the white dots become grey dots, each with a random brightness:

convert stars-05-0.bmp stars-05-1.bmp -compose multiply -composite stars-06.bmp

For the alternative style, scale the image up, then apply a new threshold. Brighter dots will appear as larger patches of white, while dull stars will be smaller or invisible:

convert stars-06.bmp -resize 200% -threshold 10% stars-06-rounded.bmp

How to use a Radeon graphics card on Debian 9

I have previously blogged about Radeon graphics cards on different Debian installs.

ATI has now released a new free driver which works brilliantly on Debian. In the past, Debian users had to choose between using the community-provided free software driver, or the proprietary one. Generally the proprietary driver was more feature-rich, but the free driver worked more reliably across upgrades. So now, you can safely ignore old guides and start using it.

Here’s how:

Upgrade

Make sure you are on Debian 9 (Stretch) or newer.

These steps apply to a fresh install.

Identify

You should use lspci to confirm that you have an ATI card.

$ lspci | grep Radeon
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT [Radeon HD 7970/8970 OEM / R9 280X]

Install firmware

You need to install a package called firmware-linux-free to get the driver working at all. If you want decent graphics performance, you will need firmware-linux-nonfree as well, which involves adding non-free sources:

nano /etc/apt/sources.list

Add the words “contrib non-free” to the end of your mirror:

deb http://.../debian/debian/ stretch main contrib non-free

Add the packages:

apt-get update
apt-get install firmware-linux-free firmware-linux-nonfree

And reboot:

reboot
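
To confirm that the open-source driver is in use after the reboot, you can check which kernel driver is bound to the card, or look for its messages in the kernel log. The PCI address below is the one from the earlier lspci output, so substitute your own:

lspci -k -s 01:00.0
dmesg | grep -i -E 'radeon|amdgpu'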

What, that’s it?

Well, yes, for a fresh install that’s it. If your install is old, you might also have to remove old drivers or install the xserver-xorg-video-amdgpu and xserver-xorg-video-ati packages (in my case, these were already installed).

The Debian Wiki AtiHowto contains some more detailed information, most of which is not relevant for a simple desktop setup.

How to assemble a Linux software RAID array on a different computer

With Linux software RAID, if you ever toast your computer, you can retrieve the disks and open up the array on a different computer.

They appear as “Linux Software RAID Member” in the disk utility.

Simply install mdadm, and scan for arrays:

$ sudo apt-get install mdadm
$ sudo mdadm --assemble --scan
mdadm: /dev/md/0 has been started with 2 drives.

The array will then appear as a new disk, which can be formatted, mounted, or cloned via the usual tools.
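
For example, assuming the array came up as /dev/md/0 as in the output above, and already contains a filesystem, you could mount it with:

sudo mkdir -p /mnt/raid
sudo mount /dev/md/0 /mnt/raid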

OpenWrt setup on Netgear WNR2200

I recently wanted to connect some devices for a temporary setup, where a wireless LTE modem would provide Internet access. Unfortunately, one of the devices was not close enough to pick up the signal with its USB WiFi dongle.

Because the modem does not have a LAN port, the usual “run a cable” solution was out. There are a few other options, from range extenders, to getting a better modem, or just upgrading to a “real” USB WiFi dongle. Before purchasing new hardware, I decided to try re-purposing an old Netgear WNR2200 as a wireless client and 4 port switch.

In this setup, the LTE modem does the heavy lifting, with all of the wireless clients using it for LAN and Internet access. In the next room, the Netgear router is placed close enough to pick up the signal, and an Ethernet cable runs to the PC, beyond the reach of WiFi.

Deciding to re-flash

Replacing firmware is worth investigating when the hardware is capable, but you aren’t given the option to configure it the way you want.

The Netgear WNR2200 is a low end wireless router, and the vendor firmware does not support joining a WiFi network as a client.

It also pays to update your research. OpenWrt added support for this router a few days after I bought it, but I hadn’t looked it up again.

Uploading firmware

My main resource was this page on the OpenWRT Wiki. Firmware is organised by wireless chipset, then by router model.

The file I used to update my router was named openwrt-15.05.1-ar71xx-generic-wnr2200-squashfs-factory.img.

This is simply uploaded on the Administration → Firmware Upgrade screen:

[Screenshots: Firmware Upgrade screen]

First impressions

The first thing I noticed was that I lost WiFi, and that the page I had bookmarked for logging in was no longer valid!

This makes sense, of course. The configuration will not be carried across from the vendor firmware, and a different web administration tool is being used.

The Linux userspace is very rich compared with vendor firmware. It has things like dmesg, SSH, ifconfig, ping, and even a networked package manager.

Configuration checklist

I performed all configuration through the web interface in this setup. The “LuCi” interface allows setting the WiFi chip into “Client” mode, and then searching for and joining a network. Once this was done, I assigned it as the “WAN” interface, so that the router occupied a single IP address on the WiFi network, providing NAT and a wired, four port switch.

There are more advanced, bridged setups that are possible. You should investigate this if you want one network, so that things like printer auto-discovery and internal SSH work consistently. I was only interested in sharing the Internet connection, which is why the setup was so simple.

What didn’t work

USB did not work, but I didn’t spend long on this either. I was considering using USB to connect the modem to the Netgear router. The Wiki suggests that this is now possible, but after installing some packages for “USB tethering” and rebooting, I had no luck. Typing lsusb, only the “root hub” was listed, and the device was not getting any power.

This wasn’t necessary for the setup, so I just abandoned it. The vendor firmware couldn’t use the USB port for networking either, so no real loss.

libvirt: Migrate a VM from qemu:///session to qemu:///system

In recent versions of the libvirt virtualisation libraries, you are able to create and manage virtual machines as a regular user, using the qemu:///session connection.

This is great, but the networking is quite limited. I found that machines defined in Gnome Boxes could not speak to each other, and that libvirt commands for networking were unavailable.

For this reason, I’ve written this quick guide for booting up an existing VM image under the qemu:///system instance, which is faster than re-installing the machine. Unlike most sorts of migrations, this leaves the disk image at the same location on the same host machine.

There are many different ways to run VMs on Linux. This setup will be useful only if you use libvirt/KVM with qcow2 images on Debian. As always, consider doing a backup before trying new things.

Configurations

First, find your virtual machine in virsh, and dump its configuration to a text file in your home directory, as a regular user.

$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     foo-machine                    shut off
$ virsh dumpxml foo-machine > foo-machine.xml

Now remove the VM definition from your user:

$ virsh undefine foo-machine
Domain foo-machine has been undefined

Import the definitions into virsh as the root user:

$ sudo virsh define foo-machine.xml 
Domain foo-machine defined from foo-machine.xml

Attempt to start the new VM definition. Depending on where the disk image is, expect an error.

$ sudo virsh start foo-machine

Disk images

The disk image needs to be accessible to the libvirt-qemu user. There are two basic ways to achieve this: re-permission the directories above it, or move it.

I chose to just re-permission it, since it’s not an issue to have world-readable directories on this particular box:

$ cat foo-machine.xml | grep source
      <source file='/home/example/.local/share/gnome-boxes/images/foo-machine'/>

This one-liner outputs the commands to run to make the current directory navigable by other users:

$ dir=`pwd`; while [ "$dir" != "/" ]; do echo "chmod o+x,g+x \"$dir\""; dir=`dirname $dir`; done
chmod o+x,g+x "/home/example/.local/share/gnome-boxes/images"
chmod o+x,g+x "/home/example/.local/share/gnome-boxes"
chmod o+x,g+x "/home/example/.local/share"
chmod o+x,g+x "/home/example/.local"
chmod o+x,g+x "/home/example"
chmod o+x,g+x "/home"

And the libvirt-qemu user needs to be able to write to the disk image as well:

$ sudo chown libvirt-qemu /home/example/.local/share/gnome-boxes/images/foo-machine

Once you have the permissions right, the VM should start, using the same command as before:

$ sudo virsh start foo-machine

More importantly, you can now hook up virt-manager and view your machine on qemu:///system, allowing you to configure the VM with any network settings you need.

How to set up Docker containers in Travis CI

This post outlines a method for using Docker for testing on Travis CI. It may be useful to you if you are a web application developer who uses GitHub.

I use this setup in my web-based word puzzle generator, so that every change is spun up and tested with a web browser before it is merged.

I got the idea for writing this from a few lines in the docker-compose documentation, which suggested that Docker is an easy way to perform automated testing over a running application:

$ docker-compose up -d
$ ./run_tests
$ docker-compose down

This snippet was missing some setup and an example app, but these three lines do all the heavy lifting.

The Docker setup

In order to focus on the Docker setup, I made a server which simply responds to TCP requests on port 5000 with the text “Hello World”.

This file is called server.sh, and sits in a directory called foo_server:

#!/bin/sh
while true; do
  # Send 'Hello World' to anybody who connects on port 5000
  echo "Hello World" | nc -l 5000
done
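
If you are recreating these files by hand, note that the Dockerfile below runs the script directly, so it needs its execute bit set before the image is built:

chmod +x foo_server/server.sh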

Alongside it, I added a Dockerfile to instruct docker to execute this tiny ‘application’ in a container, after installing the dependencies. This machine is built from the Docker-official Debian image:

FROM debian
ADD . /usr/share/test-server
WORKDIR /usr/share/test-server
RUN apt-get update && apt-get install --assume-yes netcat-openbsd
CMD ./server.sh

Lastly, a .dockerignore file is used to avoid copying the Dockerfile into the container image:

# Ignore docker files
Dockerfile
.dockerignore

In the directory above, a simple test script, test.sh, is used to see that the server is returning the expected output:

#!/bin/sh
set -e
expected="Hello World"
actual=`nc -v localhost 5000`
echo "Expecting: $expected"
echo "Server says: $actual"
if [ "$expected" != "$actual" ]; then
  echo "Test failed"
  exit 1
else
  echo "Test passed"
  exit 0
fi

Alongside the test file, a file called docker-compose.yml instructs Docker to create a container out of the foo_server example, and forward port 127.0.0.1:5000 to it.

version: '2'
services:
  foo:
    build: foo_server
    ports:
     - "5000:5000"
    container_name: foo_1
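
To recap, the files described so far are laid out like this (the .travis.yml covered in the next section sits alongside docker-compose.yml):

docker-compose.yml
test.sh
foo_server/Dockerfile
foo_server/.dockerignore
foo_server/server.sh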

To try it out for yourself, you need a relatively recent version of Docker and docker-compose. The versions provided in Debian were not new enough to execute the examples, but the Docker project provides repos containing newer builds for Debian & Ubuntu. For my distro, the install was:

curl -sSL "https://get.docker.com/gpg" | sudo -E apt-key add -
echo "deb https://apt.dockerproject.org/repo debian-stretch main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install docker-engine
sudo pip install docker-compose

The versions this got me were docker 1.11.2, and docker-compose 1.7.1. Straight after the install, I could deploy & test an example locally:

$ docker-compose up --build -d
$ ./test.sh
$ docker-compose down

The CI setup

I’ll assume that if you’re reading this, you are familiar with the basics of Travis CI. The large block of code below is the .travis.yml file to set up the test machine, then execute the tests against a container.

---
# Use Ubuntu 'trusty' distribution
sudo: required
dist: trusty

install:
  # Update docker-engine using Ubuntu 'trusty' apt repo
  - >
    curl -sSL "https://get.docker.com/gpg" |
     sudo -E apt-key add -
  - >
    echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" |
     sudo tee -a /etc/apt/sources.list
  - sudo apt-get update
  - >
    sudo apt-get -o Dpkg::Options::="--force-confdef" \
     -o Dpkg::Options::="--force-confold" --assume-yes install docker-engine
  - docker version
  # Update docker-compose via pip
  - sudo pip install docker-compose
  - docker-compose version

before_script:
  - docker-compose up --build -d

script:
  - ./test.sh

after_script:
  - docker-compose down
...

Note: This uses Travis CI’s trusty distribution, which at the time of writing is the newest stable build platform available on Travis CI. This shipped an outdated version of Docker, which had to be installed over. Because the existing Docker was configured, I had to override a debconf prompt, which is why the apt addon syntax was not used to set up dependencies.

Result

The build result for each commit is displayed in Travis CI:

[Screenshot: Travis CI build status]

Under this, the output of the passing test script shows what has been set up:

[Screenshot: test script output in the build log]

Using this setup in practice

Moving this from a demo setup to a real setup would be fairly simple:

  1. Replace the installation with a real software stack
  2. Replace the server run with a command to serve the application (such as Apache HTTP Server, Tomcat or Node)
  3. Replace the tests with real tests (such as Cucumber or Selenium).

The word puzzle generator mentioned in the preamble installs a LAMP stack and tests it with Selenium in its CI build.

If your application is a bit larger, your only extra complexity will come from running multiple containers with docker-compose.

Get the code

All of these scripts in a working CI example are available on GitHub:
mike42/minimal-docker-ci