Building a 1U quiet NAS

I recently built a compact, quiet rackmount NAS for home. I haven’t seen any builds quite like it online, so I’m writing a bit about how it came together.

The problem

My old backup was a mirrored pair of 2 TB hard drives in an old desktop computer, with a portable 2 TB hard drive as an off-site copy. The disks are now 9 years old, and 2 TB is small enough that I need to ration the space. I also recently repurposed most of the components in that system, so I no longer have an up-to-date backup.

I want to solve this properly, and hopefully build a replacement setup which I can install in my network cabinet, to run 24/7.

But if I’m going to do that, the must-haves are:

  • It fits in a 1U rack-mount form-factor with maximum 25cm depth
  • It’s quiet
  • It has front USB for making an offline copy of the backups
  • It has 2 SATA disks – I don’t want to be running a NAS off USB-SATA adapters, for example
  • It has wired ethernet – my network is 1 Gigabit

Nice-to-haves would be:

  • Fast front USB
  • Hot-swap disks or more disks
  • Faster network

I’ve worked with servers, and I’ve built small-form-factor computers, so how hard can it be to build a small-form-factor server?

Parts list

I spent a lot of time sketching out possible builds in LibreOffice Draw. Every combination of parts had some compromise, which is a familiar theme from small-form-factor PC building.

I ended up deciding on parts which follow normal PC standards, hopefully giving me a good chance of keeping it working for many years.

  • Case: Athena Power RM1UC138
  • Power supply: HDPLEX GAN 250W
  • Motherboard: Topton N6005 Mini ITX
  • RAM: Crucial 16GB (2 x 8GB) DDR4 3200 SO-DIMM
  • Boot disk: Samsung 970 EVO Plus 1TB M.2 SSD
  • Storage disks: 2 x Samsung 870 QVO 8TB 2.5″ SSD
  • Fans: 4 x Noctua A4x20 PWM
  • Drive cage: Icy Dock MB608SP-B – 6 x 2.5″ SATA bays
  • Set of 6 x 50cm SATA cables

There are some compatibility issues in the above part list, and it took a little bit of problem-solving to get everything working together.

Closer look at the Athena Power RM1UC138

Short-depth 1U computer cases are nearly impossible to find in Australia. I ordered the Athena Power RM1UC138 from the United States, which is an OEM case with some flexibility built in.

On the front it has a 5.25″ bay, which I plan to use to add a drive cage. The front USB is only USB 2.0, which would normally be a disadvantage, but in this instance is a good match for the motherboard I’m using. The power button looks like a toggle switch, but is actually a momentary switch.

The side shows that the rack ears can be put on the front or back, allowing the case to be mounted in either direction.

On the back there is a wire mesh panel for cutting out a custom I/O shield, which is handy since I have an off-brand motherboard.

On the inside, it’s configured for 2 x 3.5″ hard drives by default. It also includes 2 x 2-pin 12 V fans. They are loud, roughly on par with the fans in a managed network switch, but not jet-engine loud like most servers. The included fan controller simply distributes 12 V, and has no speed control.

This is not a very common case, and I read everything I could find online about it. In order to help the next person who is searching for it, here are two more random pieces of information which I could not confirm until I had the case in-hand:

  • Stand-offs on the case are all 4mm tall and non-removable
  • Screw spacing of USB 2.0 front panel connector is approx 30.5mm (from centre of each screw). Stacked USB 3 headers that are 30mm spacing are available online and could be made to work.

Closer look at the Topton N6005 Mini ITX

I chose to use a Topton motherboard with a built-in Intel N6005 CPU for this build, since the alternatives were either too tall, used a socketed CPU, used an old CPU, required add-in cards to get multiple SATA ports, or were not sold in Australia. All of these would have made it far more difficult to complete the small, quiet build which I was aiming for.

From the few threads online about this board, I gathered that it is fussy about RAM compatibility, so I booted it up at the first opportunity with an SSD containing Pop!_OS to check that it worked. It’s not my use case, but this motherboard would definitely be viable as a lightweight desktop.

The specific memory I used was a 16 GB kit with the model number CT2K8G4SFRA32A. According to Intel’s product documentation, the N6005 only supports 16 GB maximum, and while I could find claims that higher-capacity memory does work, I couldn’t find anybody who posted actual part numbers.

I was happy to find that the built-in cooler is inaudible at idle loads. This was a bit of a risk: the cooler doesn’t have standard dimensions, so I couldn’t have easily replaced it with an alternative if it was noisy. Based on other people’s experiences with this board, I re-pasted the cooler with Noctua NT-H1 thermal paste, to hopefully help keep temperatures down at higher loads so that the fan will not need to spin up as much. I also avoided using the M.2 slot which receives the most hot air from the cooler.

Topton also sells an N5105 variant which appears to be more popular (more info here), as well as an alternative layout which has a PCIe slot instead of a second M.2 slot.

Custom power supply adapter plate

The case is designed for a Flex ATX power supply, which is not what I’m using. I’ve instead opted to use a passively-cooled HDPLEX GAN 250W power supply, which ships with both an IEC C6 and IEC C14 cable.

I needed to choose one of these cables, and figure out how to securely mount it to the case.

I designed an adapter plate in FreeCAD around the included IEC C6 cable, since it had threaded holes, and screws were included.

I ordered it from a prototype supplier in laser-cut 1mm steel, painted in matte black.

This is the first time I’ve used FreeCAD to design a part, and parametric CAD certainly has a learning curve. For this build, it was well worth it, since the result is better (and safer) than anything I could have improvised.

At the time of writing this post, HDPLEX sells plates for mounting their IEC C14 cables in cases accepting SFX and ATX power supplies, but none for cases which accept Flex ATX power supplies.

Custom fan controller

Cooling this build quietly was always going to be a challenge. The case shipped with 2 x 12 V fans, and had a simple splitter which ran them at max speed, which was just too loud.

I designed a replacement fan controller in KiCad, which allows me to upgrade to high-quality 4-pin fans with PWM speed control, and to set the speed using a potentiometer. I wrote about prototyping this in a separate blog post.

This photo shows the custom controller alongside the original one it replaces.

As you can probably guess, I’ve designed this to use the same mounting location, at the front of the case. My power supply has no Molex connectors, so I’m using a SATA-Molex power adapter.

The main drawback to this simple design is that once I close up the case, I can no longer adjust the fan speed.

Final assembly

Before continuing any further, I took apart the case completely and removed three standoffs with a belt sander, to leave a flat area for power supply installation later.

Once I got the case back together, the motherboard went in first. I raised it by 1mm using plastic washers, hoping to line it up better with the I/O shield included with the motherboard.

The I/O opening for 1U servers is narrower than standard PC builds, so I needed to cut the I/O shield, which I unfortunately did not do correctly.

Since that did not work, I carefully marked and cut out the wire mesh I/O shield included with the case instead. I still left the motherboard raised up on 1mm washers, though this is not necessary anymore. You need to use slightly longer screws if you try this.

After that I installed the four case fans, plus the fan controller. I’m using a front-to-back airflow, with 2 x 40mm fans mounted at the front, and 2 x 40mm fans at the back. I added a Y splitter to the back fans, which did not have long enough cables to reach the fan controller.

The next component I installed was the drive cage. It’s worth mentioning at this point that the drive cage also has a fan header, the same 3-pin header that you would find on a PC motherboard. It supplies a different voltage for each of the speed settings: medium is approximately 7.5 volts and is relatively quiet with the included fan (labelled “Good quality DC fan”), and high speed is 12 volts. I set mine to off but left the fan installed.

To install items into the 5.25″ bay in this case, you attach a bracket, then fasten it from above. The bracket allows the depth to be adjusted as well.

I also installed the disks in the drive cage at this point, and added numbers to the front. Disk 1 is connected to SATA0 on the motherboard, disk 2 is connected to SATA1, and so on.
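The labels also pay off on the software side, since the disks can be matched up later by model and serial number once the system is running. One way to see the mapping (output will of course vary):

$ lsblk -o NAME,MODEL,SERIAL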

Next was the power supply. I installed the custom plate for the power connector, and also installed the mounting plate on the bottom of the PSU so that it would have a flat surface. After confirming that it would fit, I cleaned both surfaces with alcohol, and applied double-sided tape.

I then followed a rehearsed path to drop the PSU into place. There is no opportunity to adjust it once it sticks.

At this point I connected everything up and booted up the system to start checking for problems, since it’s easier to troubleshoot in this state. Two modifications I made here were to disconnect the bright red HDD LED, and to introduce a SATA power Y splitter, because the power supply SATA cables were stretched to the limit.

It took a lot of work (and cable ties) to arrange the cables flat so that the case could close. In defence of cable ties, they do make maintenance more difficult, but that’s a worthwhile trade-off for keeping cables clear of airflow paths, fan blades, and the guillotine-like action of the top cover sliding shut.

Completed build

After closing the case, the build measures 434 mm × 254 mm × 44 mm, or 4.8 litres, excluding rack ears.

This is how it appears from the front.

And this is how it appears from the back.

Software

I’m starting with Proxmox, with OpenMediaVault deployed as a virtual machine. I haven’t used either of these before, but both are Debian-based and provide convenient web front-ends to the tools I would otherwise be configuring on the command-line.

I’m passing through the disks as block devices. Running the NAS like this should make it possible to provision extra workloads which need their own SATA disks in future, or to switch from OpenMediaVault to stock Debian if necessary, all without connecting a monitor.
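For reference, attaching a whole physical disk to a VM on Proxmox can be done from the command line roughly like this. The VM ID and disk identifiers below are placeholders; the stable names under /dev/disk/by-id/ are the ones to use:

# qm set 100 -scsi1 /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_SERIAL1
# qm set 100 -scsi2 /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_SERIAL2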

Within OpenMediaVault, I’ve configured Linux software RAID with an ext4 filesystem, shared it via Samba, and can now access that file share over the network.
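Under the hood this is roughly equivalent to the following commands, though OpenMediaVault drives it all from the web interface, and the device names and labels here are only examples:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# mkfs.ext4 -L backup /dev/md0
# mount /dev/md0 /srv/backup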

I’ve enabled some basic power management features such as C-states. The system idles in the range of 12-14 watts measured from the wall, and goes up to 20 watts when moving files around.
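If you want to check the same thing on other hardware, the kernel lists the available idle states under sysfs, and powertop (if installed) shows how much time is actually spent in each:

$ grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name
$ sudo powertop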

I don’t need a lot of disk capacity, so I’ve been able to preserve a useful property of my old setup, where every disk in the system has a full copy of the data, in a format which can be understood by a normal Linux system. This makes single-disk recovery possible using any surviving disk from the system on practically any computer, whether that disk is an offline copy or one of the disks from the RAID mirror.

I haven’t tested the process of making an offline copy of the backup volume, but that will be up next.

Wrap-up

This is possibly the most effort I’ve ever put into a PC build. The only unexpected issue I encountered is how heavy it is; I won’t be rack-mounting it until I get some generic rails.

The computer uses a strange mix of parts, but meets my requirements well. I hope that by writing this up, I’ll be providing some useful notes to anybody attempting to build something similar.

This project also gave me a chance to practice my entry-level CAD skills to build something which I’ll actually be using. I find a lot of utility in paper prototyping, and printed each design in 1:1 scale to check the physical dimensions before ordering anything.

For the circuit board, I used a print-out to check each part footprint, as well as the hole locations for fitting it in the case.

As with many of the projects which I blog about, I’ve put the design files up on GitHub. The fan controller is at mike42/fan-controller-athena-power, while the Flex ATX adapter plate is at mike42/flexatx-adapter-hdplex.

How to auto-scale the display in GNOME Boxes

I recently installed a virtual machine in GNOME Boxes, and the display was stuck at 1024×768.

The type of display used here is called SPICE, and it includes a channel for auto-scaling. The guest simply needs the agent to be installed.

In this case, I’m running a Debian guest, which means that I must have forgotten to install the spice-vdagent package.

# apt-get install spice-vdagent
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  spice-vdagent
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 47.6 kB of archives.
After this operation, 174 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian buster/main amd64 spice-vdagent amd64 0.18.0-1 [47.6 kB]
Fetched 47.6 kB in 0s (99.9 kB/s)  
Selecting previously unselected package spice-vdagent.
(Reading database ... 132855 files and directories currently installed.)
Preparing to unpack .../spice-vdagent_0.18.0-1_amd64.deb ...
Unpacking spice-vdagent (0.18.0-1) ...
Setting up spice-vdagent (0.18.0-1) ...
Created symlink /etc/systemd/system/sockets.target.wants/spice-vdagentd.socket → /lib/systemd/system/spice-vdagentd.socket.
[spice-vdagentd.conf:2] Line references path below legacy directory /var/run/, updating /var/run/spice-vdagentd → /run/spice-vdagentd; please update the tmpfiles.d/ drop-in file accordingly.
Processing triggers for man-db (2.8.5-2) ...
Processing triggers for systemd (241-5) ...

The easiest way to ensure that everything is running correctly is to reboot, since the agent will start on boot, and this also forces a new log-in, and a new connection to the display.

# reboot

Result

Assuming that you are otherwise on the default settings, the display in the guest VM will now automatically adjust as you resize the window.
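If the display still doesn’t resize after a reboot, it’s worth checking in the guest that the agent daemon actually started; the service name matches the spice-vdagentd socket created during installation above:

# systemctl status spice-vdagentd.service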

If the “Share Clipboard” setting is enabled for the virtual machine, then spice-vdagent will also enable you to copy & paste text between the host and guest.

How to use the qemu-bridge-helper on Debian 10

If you use the libvirt virtualisation libraries, then you will be familiar with the “user session”. This feature lets you provision virtual machines to run under a regular, unprivileged user account.

The user session is used by GNOME Boxes, and can also be managed from Virtual Machine Manager.

The main downside to this setup is that a regular user can only access a very limited range of networking options. The last time that I mentioned this in a blog post, a reader pointed out that you can actually use qemu-bridge-helper to provide bridged networking to unprivileged virtual machines.

Today I finally tried this out, and it worked really well. With a bit of configuration, you can extend proper networking to this type of VM.

The host

I’m running a graphical Debian 10 desktop, with a few basic virtualisation packages.

  • gnome-boxes for creating VMs as a local user. This depends on libvirt-daemon, which is enough to host VMs on the system.
  • virt-manager for a more advanced graphical interface.

The tool that I’m writing about today is qemu-bridge-helper, which is in the qemu-system-common package.

After installation, you will also need to ensure that libvirtd is running.

$ sudo systemctl enable libvirtd.service
$ sudo systemctl start libvirtd.service

Set up a bridge

Libvirt ships with a basic network bridge configuration; you just need to enable it.

Command-line method

Start the default network bridge, and configure it to run on startup.

$ sudo virsh net-autostart --network default
$ sudo virsh net-start --network default

Once this is set up, you should see the bridge virbr0, reporting the IP range 192.168.122.1/24.

$ ip addr show virbr0
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever

Graphical method

First, open up Virtual Machine Manager, and authenticate. Right click on QEMU/KVM, and select Details.

Under Virtual Networks → default → Autostart, check On Boot, then click Apply.

Setting up qemu-bridge-helper

Create the file /etc/qemu/bridge.conf with the content:

allow virbr0

Restrict the permissions of this file to make sure it can’t be edited by regular users.

# chown root:root /etc/qemu/bridge.conf
# chmod 0640 /etc/qemu/bridge.conf

Add setuid to the qemu-bridge-helper binary.

# chmod u+s /usr/lib/qemu/qemu-bridge-helper

If you do not correctly set this last step, then you will receive the following error when you attempt to connect a VM to the bridge:

Error starting domain: internal error: /usr/lib/qemu/qemu-bridge-helper --use-vnet --br=virbr0 --fd=28: failed to communicate with bridge helper: Transport endpoint is not connected
stderr=failed to create tun device: Operation not permitted

Setting up the VM

Create a virtual machine, either through GNOME Boxes or Virtual Machine Manager. I am using a CentOS VM as an example here, but the guest platform is not particularly important.

Using Virtual Machine Manager, change the network card to the “shared network” virbr0.

The graphical configuration above is equivalent to the following libvirt domain XML.

<interface type='bridge'>
  <mac address='52:54:00:08:5a:7c'/>
  <source bridge='virbr0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

Result

After restarting the network interface in the guest, I was able to ping the guest from the host and vice-versa.
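If you want to repeat this check, the guest’s address can be read from the host side, assuming the guest took a DHCP lease from the default network (the address below is just an example):

$ sudo virsh net-dhcp-leases default
$ ping -c 3 192.168.122.50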

This is a significant improvement from “user-mode” networking, which does not facilitate host-to-guest and guest-to-guest communication.

The default virbr0 bridge uses an internal subnet, so the guest here is still inaccessible from the wider LAN. If this doesn’t match your setup, then you can use the same technique to connect unprivileged virtual machines to another bridge of your choice.
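For example, if the host already has a LAN-facing bridge (assumed here to be named br0), it just needs to be allowed in the same configuration file, and can then be selected when configuring the VM’s network interface:

# echo "allow br0" >> /etc/qemu/bridge.conf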

Further reading

I had to adapt some paths, user accounts and package names from the sources I used in order to get this working on Debian.

Three ways to archive a website

I recently needed to archive a small website before decommissioning it. There are a few distinct reasons you might want an archive of a website:

  • To archive the information in case you need it later.
  • To archive the look and feel so that you can see how it has progressed.
  • To archive the digital artifacts so that you can host them elsewhere as a mirror.

Each of these produces files in a different format, which are useful over different time-periods. In this post, I’ll write a bit about all three, since it’s easiest to archive a website while it is still online.

1. Saving webpage content to PDF

To write an individual page to a PDF, you can use wkhtmltopdf. On Debian/Ubuntu, this can be installed with:

sudo apt-get install wkhtmltopdf

The only extra setting I use for this is the “Javascript delay”, since some parts of the page will be loaded after the main content.

mkdir -p pdf/
wkhtmltopdf --javascript-delay 1000 https://example.com/ pdf/index.pdf

This produces a PDF, which you can copy/paste text from, or print.

You then simply repeat this for every page which you want to archive.

2. Saving webpage content to an image

If you are more interested in how the website looked, rather than what it contained, then you can use the same package to write it to an image. I used the jpg format here, because the file sizes were reasonable at higher resolution. I also zoom the page 200% to get higher quality, and selected sizes which are typical of desktop, tablet and mobile screen sizes.

mkdir -p jpg/desktop jpg/mobile jpg/tablet
wkhtmltoimage --zoom 2.0 --javascript-delay 1000 --width 4380 https://example.com/ jpg/desktop/index.jpg
wkhtmltoimage --zoom 2.0 --javascript-delay 1000 --width 2048 https://example.com/ jpg/tablet/index.jpg
wkhtmltoimage --zoom 2.0 --javascript-delay 1000 --width 960 https://example.com/ jpg/mobile/index.jpg

This gives you three images for the page. This example page is quite short, but a larger page produces a very tall image.

As above, this needs to be repeated for each page which you want to archive.

3. Mirroring the entire site as HTML

A full mirror of the site is a good short-term archive. Some websites have a lot of embedded external content like maps and external social media widgets, which I would expect to gradually stop working over time as these services change. Still, you might be able to browse the website on your local computer in 10 or 20 years’ time, depending on how browsers change.

wget is the go-to tool for mirroring sites, but it has a lot of options!

mkdir -p html/
cd html
wget \
  --trust-server-names \
  -e robots=off \
  --mirror \
  --convert-links \
  --adjust-extension \
  --page-requisites \
  --no-parent \
  https://example.com

There are quite a few options here; I’ll briefly explain why I used each one:

  • --trust-server-names: Use the correct filename when a redirect is followed.
  • -e robots=off: Ignore the site’s robots.txt exclusion rules. This is only OK to do if you own the site and can be sure that mirroring it will not cause capacity issues.
  • --mirror: Short-hand for a set of options which recursively download the site.
  • --convert-links: Change links on the target site to local ones.
  • --adjust-extension: If you get a page called “foo”, save it as “foo.html”.
  • --page-requisites: Also download the CSS and Javascript files referenced on the page.
  • --no-parent: Only download sub-pages from the starting page. This is useful if you want to fetch only part of the domain.

The result can be opened locally in a web browser.

These options worked well for me on a WordPress site.

Putting it all together

The site I was mirroring was quite small, so I manually assembled a list of pages to mirror, gave each a name, and wrote them in a text file called urls.txt in this format:

https://site.example/ index
https://site.example/foo foo
https://site.example/bar bar

I then ran this script to mirror each URL as an image and PDF, before mirroring the entire site locally in HTML.

#!/bin/bash
set -exu -o pipefail

mkdir -p jpg/desktop jpg/mobile jpg/tablet html/ pdf
while read page_url page_name; do
  echo "## $page_url ($page_name)"
  # JPEG archive
  wkhtmltoimage --zoom 2.0 --javascript-delay 1000 --width 4380 $page_url jpg/desktop/$page_name.jpg
  wkhtmltoimage --zoom 2.0 --javascript-delay 1000 --width 2048 $page_url jpg/tablet/$page_name.jpg
  wkhtmltoimage --zoom 2.0 --javascript-delay 1000 --width 960 $page_url jpg/mobile/$page_name.jpg
  # Printable archive
  wkhtmltopdf --javascript-delay 1000 $page_url pdf/$page_name.pdf
done < urls.txt

# Browsable archive
MAIN_PAGE=$(head -n1 urls.txt | cut -d' ' -f1)
mkdir -p html/
(cd html && \
  wget --trust-server-names -e robots=off --mirror --convert-links --adjust-extension --page-requisites --no-parent $MAIN_PAGE)

The actual domain example.com has only one page, so after running the script against it, it downloads this set of files:

├── archive.sh
├── html
│   └── example.com
│       └── index.html
├── jpg
│   ├── desktop
│   │   └── index.jpg
│   ├── mobile
│   │   └── index.jpg
│   └── tablet
│       └── index.jpg
├── pdf
│   └── index.pdf
└── urls.txt

Happy archiving!

Monitoring network throughput with Prometheus

Today I’m writing a bit about a Prometheus deployment that I made last year on a Raspberry Pi, to get better data about congestion on my uplink to the Internet.

The problem

You have probably run an Internet speed test before, like this:

[Screenshot: Internet speed test result]

A speed test will tell you how slow your computer’s connection is, but it can’t narrow down whether it’s because of other LAN users, the line quality, or congestion at the provider.

You can start to assemble this information from the router, which has counters for each network interface:

[Screenshot: the router’s network interface statistics table]

This table is from a Sagemcom F@ST 3864, which is a consumer-grade router. It has no SNMP interface, so the only way to get these metrics is to query /statsifc.html and /info.html from the LAN.

Getting the data

We can derive throughput metrics for the uplink if we scrape these metrics every few seconds and load them into a time-series database. To do this, I wrote a small adapter (called an “exporter” in Prometheus lingo), which exposed the metrics in a more structured way.

The result was a web page on the Raspberry Pi, which returned interface data like this:

# HELP lan_network_receive_bytes Received bytes for network interface
# TYPE lan_network_receive_bytes gauge
lan_network_receive_bytes{device="eth0"} 0.0
lan_network_receive_bytes{device="eth1"} 0.0
lan_network_receive_bytes{device="eth2"} 0.0
lan_network_receive_bytes{device="eth3"} 0.0
lan_network_receive_bytes{device="wl0"} 737476060.0
# HELP lan_network_send_bytes Sent bytes for network interface
# TYPE lan_network_send_bytes gauge
lan_network_send_bytes{device="eth0"} 363957004.0
lan_network_send_bytes{device="eth1"} 0.0
lan_network_send_bytes{device="eth2"} 0.0
lan_network_send_bytes{device="eth3"} 0.0
lan_network_send_bytes{device="wl0"} 2147483647.0
# HELP lan_network_receive_packets Received packets for network interface
# TYPE lan_network_receive_packets gauge
lan_network_receive_packets{device="eth0",disposition="transfer"} 1766250.0
lan_network_receive_packets{device="eth0",disposition="error"} 0.0
lan_network_receive_packets{device="eth0",disposition="drop"} 0.0
lan_network_receive_packets{device="eth1",disposition="transfer"} 0.0
lan_network_receive_packets{device="eth1",disposition="error"} 0.0
lan_network_receive_packets{device="eth1",disposition="drop"} 0.0
lan_network_receive_packets{device="eth2",disposition="transfer"} 0.0
lan_network_receive_packets{device="eth2",disposition="error"} 0.0
lan_network_receive_packets{device="eth2",disposition="drop"} 0.0
lan_network_receive_packets{device="eth3",disposition="transfer"} 0.0
lan_network_receive_packets{device="eth3",disposition="error"} 0.0
lan_network_receive_packets{device="eth3",disposition="drop"} 0.0
lan_network_receive_packets{device="wl0",disposition="transfer"} 6622351.0
lan_network_receive_packets{device="wl0",disposition="error"} 0.0
lan_network_receive_packets{device="wl0",disposition="drop"} 0.0
# HELP lan_network_send_packets Sent packets for network interface
# TYPE lan_network_send_packets gauge
lan_network_send_packets{device="eth0",disposition="transfer"} 3148577.0
lan_network_send_packets{device="eth0",disposition="error"} 0.0
lan_network_send_packets{device="eth0",disposition="drop"} 0.0
lan_network_send_packets{device="eth1",disposition="transfer"} 0.0
lan_network_send_packets{device="eth1",disposition="error"} 0.0
lan_network_send_packets{device="eth1",disposition="drop"} 0.0
lan_network_send_packets{device="eth2",disposition="transfer"} 0.0
lan_network_send_packets{device="eth2",disposition="error"} 0.0
lan_network_send_packets{device="eth2",disposition="drop"} 0.0
lan_network_send_packets{device="eth3",disposition="transfer"} 0.0
lan_network_send_packets{device="eth3",disposition="error"} 0.0
lan_network_send_packets{device="eth3",disposition="drop"} 0.0
lan_network_send_packets{device="wl0",disposition="transfer"} 8803737.0
lan_network_send_packets{device="wl0",disposition="error"} 0.0
lan_network_send_packets{device="wl0",disposition="drop"} 0.0
# HELP wan_network_receive_bytes Received bytes for network interface
# TYPE wan_network_receive_bytes gauge
wan_network_receive_bytes{device="ppp2.1"} 3013958333.0
wan_network_receive_bytes{device="ptm0.1"} 0.0
wan_network_receive_bytes{device="eth4.3"} 0.0
wan_network_receive_bytes{device="ppp1.1"} 0.0
wan_network_receive_bytes{device="ppp3.2"} 0.0
# HELP wan_network_send_bytes Sent bytes for network interface
# TYPE wan_network_send_bytes gauge
wan_network_send_bytes{device="ppp2.1"} 717118493.0
wan_network_send_bytes{device="ptm0.1"} 0.0
wan_network_send_bytes{device="eth4.3"} 0.0
wan_network_send_bytes{device="ppp1.1"} 0.0
wan_network_send_bytes{device="ppp3.2"} 0.0
# HELP wan_network_receive_packets Received packets for network interface
# TYPE wan_network_receive_packets gauge
wan_network_receive_packets{device="ppp2.1",disposition="transfer"} 11525693.0
wan_network_receive_packets{device="ppp2.1",disposition="error"} 0.0
wan_network_receive_packets{device="ppp2.1",disposition="drop"} 0.0
wan_network_receive_packets{device="ptm0.1",disposition="transfer"} 0.0
wan_network_receive_packets{device="ptm0.1",disposition="error"} 0.0
wan_network_receive_packets{device="ptm0.1",disposition="drop"} 0.0
wan_network_receive_packets{device="eth4.3",disposition="transfer"} 0.0
wan_network_receive_packets{device="eth4.3",disposition="error"} 0.0
wan_network_receive_packets{device="eth4.3",disposition="drop"} 0.0
wan_network_receive_packets{device="ppp1.1",disposition="transfer"} 0.0
wan_network_receive_packets{device="ppp1.1",disposition="error"} 0.0
wan_network_receive_packets{device="ppp1.1",disposition="drop"} 0.0
wan_network_receive_packets{device="ppp3.2",disposition="transfer"} 0.0
wan_network_receive_packets{device="ppp3.2",disposition="error"} 0.0
wan_network_receive_packets{device="ppp3.2",disposition="drop"} 0.0
# HELP wan_network_send_packets Sent packets for network interface
# TYPE wan_network_send_packets gauge
wan_network_send_packets{device="ppp2.1",disposition="transfer"} 7728904.0
wan_network_send_packets{device="ppp2.1",disposition="error"} 0.0
wan_network_send_packets{device="ppp2.1",disposition="drop"} 0.0
wan_network_send_packets{device="ptm0.1",disposition="transfer"} 0.0
wan_network_send_packets{device="ptm0.1",disposition="error"} 0.0
wan_network_send_packets{device="ptm0.1",disposition="drop"} 0.0
wan_network_send_packets{device="eth4.3",disposition="transfer"} 0.0
wan_network_send_packets{device="eth4.3",disposition="error"} 0.0
wan_network_send_packets{device="eth4.3",disposition="drop"} 0.0
wan_network_send_packets{device="ppp1.1",disposition="transfer"} 0.0
wan_network_send_packets{device="ppp1.1",disposition="error"} 0.0
wan_network_send_packets{device="ppp1.1",disposition="drop"} 0.0
wan_network_send_packets{device="ppp3.2",disposition="transfer"} 0.0
wan_network_send_packets{device="ppp3.2",disposition="error"} 0.0
wan_network_send_packets{device="ppp3.2",disposition="drop"} 0.0
# HELP adsl_attainable_rate_down_kbps ADSL Attainable Rate down (Kbps)
# TYPE adsl_attainable_rate_down_kbps gauge
adsl_attainable_rate_down_kbps 19708.0
# HELP adsl_attainable_rate_up_kbps ADSL Attainable Rate up (Kbps)
# TYPE adsl_attainable_rate_up_kbps gauge
adsl_attainable_rate_up_kbps 1087.0
# HELP adsl_rate_down_kbps ADSL Rate down (Kbps)
# TYPE adsl_rate_down_kbps gauge
adsl_rate_down_kbps 18175.0
# HELP adsl_rate_up_kbps ADSL Rate up (Kbps)
# TYPE adsl_rate_up_kbps gauge
adsl_rate_up_kbps 1087.0
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 34197504.0
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 22441984.0
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1497148890.92
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 3254.92
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 7.0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1024.0

I then deployed Prometheus to the same Raspberry Pi, and configured it to read these metrics every few seconds by editing prometheus.yml:

global:
  scrape_interval: 5s

scrape_configs:
  - job_name: net
    static_configs:
    - targets: ["localhost:8000"]

Making some queries

Prometheus has a query language, which I find similar to spreadsheet formulas. You can enter a query directly into the web interface to get a graph or data table.

[Screenshot: graphing a query in the Prometheus web interface]

I settled on these queries to get the data I needed. They show me the maximum attainable line rate, actual sync rate, and current throughput over the WAN interface.

Downloads

Throughput:

rate(wan_network_receive_bytes{device="ppp2.1"}[10s])*8/1024/1024

ADSL attainable:

adsl_attainable_rate_down_kbps/1024

ADSL sync:

adsl_rate_down_kbps/1024

Uploads

Throughput:

rate(wan_network_send_bytes{device="ppp2.1"}[10s])*8/1024/1024

ADSL attainable:

adsl_attainable_rate_up_kbps/1024

ADSL sync:

adsl_rate_up_kbps/1024

Onto a dashboard

I then deployed the last component in this setup, Grafana, to the Raspberry Pi. This tool lets you save your queries on a dashboard.

I made two plots: one for uploads, and one for downloads.

[Screenshot: Grafana dashboard with upload and download plots]

By saturating the link with traffic (such as when running a speed test), it was now possible to compare the actual network speed with the ADSL sync speed.

[Screenshot: network throughput compared with the ADSL sync speed while running a speed test]

In my case, the best attainable network speed changed depending on the time of day, while the ADSL sync speed was constant. That’s a simple case of congestion.

Conclusion

I’ve deployed a few tiny Prometheus setups like this, because of how simple it is to work with new sources of metrics. It’s designed for much larger setups than an individual router, so it’s a worthwhile tool to be familiar with. Data is always a good reality-check for your assumptions, of course.

This setup had the level of security that you would expect of a Raspberry Pi project (none), and crashed after 4 days because I did not configure it for a RAM-limited environment, but it was a useful learning exercise, so I uploaded it to GitHub anyway. The Python and Ansible code can be found here.

How to assemble a Linux software RAID array on a different computer

With Linux software RAID, if you ever toast your computer, you can retrieve the disks and open up the array on a different computer.

They appear as “Linux Software RAID Member” in the disk utility.

Simply install mdadm, and scan for arrays:

$ sudo apt-get install mdadm
$ sudo mdadm --assemble --scan
mdadm: /dev/md/0 has been started with 2 drives.

The array will then appear as a new disk, which can be formatted, mounted, or cloned via the usual tools.
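For example, to get at the files, the assembled array can be mounted read-only somewhere temporary; the device name matches the mdadm output above, and the mount point is up to you:

$ sudo mkdir -p /mnt/recovered
$ sudo mount -o ro /dev/md/0 /mnt/recovered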

libvirt: Migrate a VM from qemu:///session to qemu:///system

In recent versions of the libvirt virtualisation libraries, you can create and manage virtual machines as a regular user, using the qemu:///session connection.

This is great, but the networking is quite limited. I found that machines defined in Gnome Boxes could not speak to each other, and that libvirt commands for networking were unavailable.

For this reason, I’ve written this quick guide for booting an existing VM image under the qemu:///system instance, which is faster than re-installing the machine. Unlike most sorts of migrations, this leaves the disk image at the same location on the same host machine.

There are many different ways to do VMs in Linux. This setup will be useful only if you use libvirt/kvm using qcow2 images on Debian. As always, consider doing a backup before trying new things.

Configurations

First, find your virtual machine in virsh, and dump its configuration to a text file in your home directory, as a regular user.

$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     foo-machine                    shut off
$ virsh dumpxml foo-machine > foo-machine.xml

Now remove the VM definition from your user:

$ virsh undefine foo-machine
Domain foo-machine has been undefined

Import the definitions into virsh as the root user:

$ sudo virsh define foo-machine.xml 
Domain foo-machine defined from foo-machine.xml

Attempt to start the new VM definition. Depending on where the disk image is, expect an error.

$ sudo virsh start foo-machine

Disk images

The disk image needs to be accessible to the libvirt-qemu user. There are two basic ways to achieve this: re-permission the directories above it, or move it.

I chose to just re-permission it, since it’s not an issue to have world-readable directories on this particular box:

$ cat foo-machine.xml | grep source
      <source file='/home/example/.local/share/gnome-boxes/images/foo-machine'/>

This one-liner outputs the commands to run to make every directory in the path navigable:

$ dir=`pwd`; while [ "$dir" != "/" ]; do echo "chmod o+x,g+x \"$dir\""; dir=`dirname $dir`; done
chmod o+x,g+x "/home/example/.local/share/gnome-boxes/images"
chmod o+x,g+x "/home/example/.local/share/gnome-boxes"
chmod o+x,g+x "/home/example/.local/share"
chmod o+x,g+x "/home/example/.local"
chmod o+x,g+x "/home/example"
chmod o+x,g+x "/home"

And the user account needs to be able to write as well:

$ sudo chown libvirt-qemu /home/example/.local/share/gnome-boxes/images/foo-machine

Once you have the permissions right, the VM should start, using the same command as before:

$ sudo virsh start foo-machine

More importantly, you can now hook up virt-manager and view your machine on qemu:///system, allowing you to configure the VM with any network settings you need.

How to empty your local user account

If you’re not going to use a user account on your computer again, but can’t delete it for some reason, then emptying it is the next best thing to do.

Note: Save anything you want to keep before you start deleting things. These are destructive commands which delete all of the files and settings in the current user’s profile. If you are at all unsure, consider using a file browser to clear out the profile instead.

Windows:

cd %USERPROFILE%
del /A /F /Q /S .

Linux or Mac:

cd ~
find . -mindepth 1 -delete

This will make sure that the disused account no longer wastes any disk space.

Backing up from a hosting provider

Backups are great, and they’re not rocket science. I’m writing up how we do backups, not because I think it’s a cool or unique setup (because it’s not), but to highlight how effective a simple solution can be.

We use rsync to take a local copy of whatever is on our web host without wasting bandwidth downloading files that aren’t needed. The layout looks like this:

Our hosting provider is accessible via ssh, and the backup box we use is a Raspberry Pi model B, costing (more or less) 50 AUD to get running.

On the server

On the server, we back up databases with mysqldump. To do this, you need to enter user details into a .my.cnf file, and then something like this will do the trick:

#!/bin/sh
# Remove old dump
rm -f database.sql.gz

# Dump and compress database
mysqldump -h sql.example.com --all-databases > database.sql
gzip database.sql

The above script is called database-dump.sh, and is called from the backup box, to dump the databases to a file before grabbing all the files.
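For reference, the credentials go in a .my.cnf file in the home directory of the user running the dump, looking something like this (the user name and password here are placeholders):

[client]
user=backup
password=changeme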

On the backup box

First, a script to get the files. You should use password-less login with ssh-copy-id for this to work non-interactively:

#!/bin/sh
# Update the database dump
ssh user@host.example.com './database-dump.sh'
# Get files
rsync -avz --delete-during user@host.example.com:/home/user .
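If you haven’t set up key-based login yet, it only takes a couple of commands from the backup box, using the same account as in the script above:

$ ssh-keygen
$ ssh-copy-id user@host.example.com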

We also save a copy of the files in a dated archive, so that we can go back later and find deleted files. At the end of the above script:

mkdir -p archive
now=$(date +"%Y-%m-%d")
tar -czf archive/backup-$now.tar.gz user

There aren’t a huge number of changes to record daily, so we got cron to run the above script weekly on the backup box. Read man crontab for how to do this.
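As a sketch, the crontab entry looks something like this, running at 2am each Sunday; the script path is just an example:

0 2 * * 0 /home/pi/backup.sh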

What backup is not

If you think you shouldn’t be doing backups, you’re wrong. The following are not good excuses:

  1. Trust — Whoever is looking after the data won’t lose it.
    Our host is pretty good, but their terms of service say they won’t be responsible for any data loss. Even providers which have support agreements can make mistakes. You’ll also be able to work faster if you’re not paranoid about any mistake being unrecoverable.
  2. Expense — It’s a nice idea but not worth it.
    It’s dirt cheap, you can learn to do it yourself, and once set up requires virtually no administration. If your organisation can’t afford some kind of backup solution, then it should probably stop using data in any form.
  3. RAID — I invested money in RAID, so I don’t need backups.
    If you accidentally delete something, or notice that some of your files have been tampered with, then RAID will not help you. If there is a problem (e.g. fire) at the hosting location, then you will be in trouble regardless of disk redundancy.