Building a 1U quiet NAS

I recently built a compact, quiet rackmount NAS for home. I haven’t seen any builds quite like it online, so I’m writing a bit about how it came together.

The problem

My old backup was a mirrored pair of 2 TB hard drives in an old desktop computer, with a portable 2 TB hard drive as an off-site copy. The disks are now 9 years old, and 2 TB is small enough that I need to ration the space. I also recently repurposed most of the components in that system, so I no longer have an up-to-date backup.

I want to solve this properly, and hopefully build a replacement setup which I can install in my network cabinet, to run 24/7.

But if I’m going to do that, the must-haves are:

  • It fits in a 1U rack-mount form-factor with maximum 25cm depth
  • It’s quiet
  • It has front USB for making an offline copy of the backups
  • It has 2 SATA disks – I don’t want to be running a NAS off USB-SATA adapters, for example
  • It has wired ethernet – my network is 1 Gigabit

Nice-to-haves would be:

  • Fast front USB
  • Hot-swap disks or more disks
  • Faster network

I’ve worked with servers, and I’ve built small-form-factor computers, so how hard can it be to build a small-form-factor server?

Parts list

I spent a lot of time sketching out possible builds in LibreOffice Draw. Every combination of parts had some compromise, which is a familiar theme from small-form-factor PC building.

I ended up deciding on parts which follow normal PC standards, hopefully giving me a good chance of keeping it working for many years.

  • Case: Athena Power RM1UC138
  • Power supply: HDPLEX GAN 250W
  • Motherboard: Topton N6005 Mini ITX
  • RAM: Crucial 16GB (2 x 8GB) DDR4 3200 SO-DIMM
  • Boot disk: Samsung 970 EVO Plus 1TB M.2 SSD
  • Storage disks: 2 x Samsung 870 QVO 8TB 2.5″ SSD
  • Fans: 4 x Noctua A4x20 PWM
  • Drive cage: Icy Dock MB608SP-B – 6 x 2.5″ SATA bays
  • Set of 6 x 50cm SATA cables

There are some compatibility issues in the above part list, and it took a little bit of problem-solving to get everything working together.

Closer look at the Athena Power RM1UC138

Short-depth 1U computer cases are nearly impossible to find in Australia. I ordered the Athena Power RM1UC138 from the United States, which is an OEM case with some flexibility built in.

On the front it has a 5.25″ bay, which I plan to use to add a drive cage. The front USB is only USB 2.0, which would normally be a disadvantage, but in this instance is a good match for the motherboard I’m using. The power button looks like a toggle switch, but is actually a momentary switch.

The side shows that the rack ears can be put on the front or back, allowing the case to be mounted in either direction.

On the back there is a wire mesh panel for cutting out a custom I/O shield, which is handy since I have an off-brand motherboard.

On the inside, it’s configured for 2 x 3.5″ hard drives by default. It also includes 2 x 2-pin 12 V fans. They are loud, like the fans you would find in most managed network switches, but not jet-engine loud like most servers. The fan controller simply distributes 12 V, and has no speed control.

This is not a very common case, and I read everything I could find online about it. In order to help the next person who is searching for it, here are two more random pieces of information which I could not confirm until I had the case in-hand:

  • Stand-offs on the case are all 4mm tall and non-removable
  • Screw spacing of USB 2.0 front panel connector is approx 30.5mm (from centre of each screw). Stacked USB 3 headers that are 30mm spacing are available online and could be made to work.

Closer look at the Topton N6005 Mini ITX

I chose to use a Topton motherboard with a built-in Intel N6005 CPU for this build, since the alternatives were either too tall, use a socketed CPU, use an old CPU, would require add-in cards to get multiple SATA ports, or were not sold in Australia. All of these would make it far more difficult to complete the small, quiet build which I was aiming for.

From the few threads online about this board, I gathered that it is fussy about RAM compatibility, so I booted it up at the first opportunity with an SSD containing Pop!_OS to check that it worked. It’s not my use case, but this motherboard would definitely be viable as a lightweight desktop.

The specific memory I used was a 16 GB kit with the model number CT2K8G4SFRA32A. According to Intel’s product documentation, the N6005 only supports 16 GB maximum, and while I could find claims that higher-capacity memory does work, I couldn’t find anybody who posted actual part numbers.

I was happy to find that the built-in cooler is inaudible at idle loads. This was a bit of a risk: the cooler doesn’t have standard dimensions, so I couldn’t have easily replaced it with an alternative if it was noisy. Based on other people’s experiences with this board, I re-pasted the cooler with Noctua NT-H1 thermal paste, to hopefully keep temperatures down at higher loads so that the fan will not need to spin up as much. I also avoided using the M.2 slot which receives the most hot air from the cooler.

Topton also sells an N5105 variant which appears to be more popular, as well as an alternative layout which has a PCIe slot instead of a second M.2 slot.

Custom power supply adapter plate

The case is designed for a Flex ATX power supply, which is not what I’m using. I’ve instead opted to use a passively-cooled HDPLEX GAN 250W power supply, which ships with both an IEC C6 and IEC C14 cable.

I needed to choose one of these cables, and figure out how to securely mount it to the case.

I designed an adapter plate in FreeCAD around the included IEC C6 cable, since it had threaded holes, and screws were included.

I ordered it from a prototype supplier in laser cut 1mm steel, painted in matte black.

This is the first time I’ve used FreeCAD to design a part, and parametric CAD certainly has a learning curve. For this build, it was well worth it, since the result is better (and safer) than anything I could have improvised.

At the time of writing this post, HDPLEX sells plates for mounting their IEC C14 cables in cases accepting SFX and ATX power supplies, but none for cases which accept Flex ATX power supplies.

Custom fan controller

Cooling this build quietly was always going to be a challenge. The case shipped with 2 x 12 V fans and a simple splitter which ran them at max speed; this was just too loud.

I designed a replacement fan controller in KiCad, which allows me to upgrade to high-quality 4-pin fans with PWM speed control, and to set the speed using a potentiometer. I wrote about prototyping this in a separate blog post.

This photo shows the custom controller alongside the original one it replaces.

As you can probably guess, I’ve designed this to use the same mounting location, at the front of the case. My power supply has no Molex connectors, so I’m using a SATA-Molex power adapter.

The main drawback to this simple design is that once I close up the case, I can no longer adjust the fan speed.

Final assembly

Before continuing any further, I took the case apart completely and removed three standoffs with a belt sander, to leave a flat area for power supply installation later.

Once I got the case back together, the motherboard went in first. I raised it by 1mm using plastic washers, hoping to line it up better with the I/O shield included with the motherboard.

The I/O opening on a 1U server is narrower than on standard PC builds, so I needed to cut down the I/O shield, which I unfortunately did not do correctly.

Since that did not work, I carefully marked and cut out the wire mesh I/O shield included with the case instead. I still left the motherboard raised up on 1mm washers, though this is not necessary anymore. You need to use slightly longer screws if you try this.

After that I installed the four case fans, plus the fan controller. I’m using a front-to-back airflow, with 2 x 40mm fans mounted at the front, and 2 x 40mm fans at the back. I added a Y splitter to the back fans, which did not have long enough cables to reach the fan controller.

The next component I installed was the drive cage. It’s worth mentioning at this point that the drive cage also has a fan header, which is the same as a 3-pin header that you would find on a PC motherboard. It supplies a different voltage for each of the speed settings: medium is approximately 7.5 volts and is relatively quiet with the included fan (marked “Good quality DC fan”), while high is the full 12 volts. I set mine to off but left the fan installed.

To install items into the 5.25″ bay in this case, you attach a bracket, then fasten it from above. The bracket allows the depth to be adjusted as well.

I also installed the disks in the drive cage at this point, and added numbers to the front. Disk 1 is connected to SATA0 on the motherboard, disk 2 is connected to SATA1, and so on.
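
Once the system is up, it’s possible to double-check that mapping from Linux before trusting the labels. A quick sketch, assuming the disks show up as ata devices (smartctl comes from the smartmontools package):

# Each by-path entry includes an ata port number, which should line up with the SATA port labels
ls -l /dev/disk/by-path/
# Match a device to a physical disk via its serial number
sudo smartctl -i /dev/sda | grep -i serial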

Next was the power supply. I installed the custom plate for the power connector, and also installed the mounting plate on the bottom of the PSU so that it would have a flat surface. After confirming that it would fit, I cleaned both surfaces with alcohol, and applied double-sided tape.

I then followed a rehearsed path to drop the PSU into place. There is no opportunity to adjust it once it sticks.

At this point I connected everything up and booted up the system to start checking for problems, since it’s easier to troubleshoot in this state. Two modifications I made here were to disconnect the bright red HDD LED, and to introduce a SATA power Y splitter, because the power supply SATA cables were stretched to the limit.

It took a lot of work (and cable ties) to arrange the cables flat so that the case could close. Cable ties do make maintenance more difficult, but that’s a worthwhile trade-off for keeping cables clear of airflow paths, fan blades, and the guillotine-like action of the top cover sliding shut.

Completed build

After closing the case, the build measures 434 mm × 254 mm × 44 mm, or 4.8 litres, excluding rack ears.

This is how it appears from the front.

And this is how it appears from the back.

Software

I’m starting with Proxmox, with OpenMediaVault deployed as a virtual machine. I haven’t used either of these before, but both are Debian-based and provide convenient web front-ends to the tools I would otherwise be configuring on the command-line.

I’m passing through the disks as block devices. Running the NAS like this should make it possible to provision extra workloads which need their own SATA disks in future, or to switch from OpenMediaVault to stock Debian if necessary, all without connecting a monitor.
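
For reference, Proxmox can attach a raw disk to a VM with its qm tool. A minimal sketch, assuming the OpenMediaVault VM has ID 100, and with placeholder serial numbers (list your own under /dev/disk/by-id):

# Attach each SATA SSD to VM 100 as a block device (serials below are placeholders)
qm set 100 -scsi1 /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_XXXXXXXXXXXXXXX
qm set 100 -scsi2 /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_YYYYYYYYYYYYYYY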

Within OpenMediaVault, I’ve configured Linux software RAID with an ext4 filesystem, shared it via Samba, and can now access that file share over the network.
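
OpenMediaVault drives this through its web interface, but under the hood it’s ordinary mdadm and mkfs. A rough command-line equivalent, assuming the two data disks appear as /dev/sdb and /dev/sdc inside the VM:

# Create a two-disk RAID1 mirror (device names are assumptions)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# Format the array with ext4
mkfs.ext4 /dev/md0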

I’ve enabled some basic power management features such as C-states. The system idles in the range of 12-14 watts measured from the wall, and goes up to 20 watts when moving files around.

I don’t need a lot of disk capacity, so I’ve been able to preserve a useful property of my old setup: every disk in the system has a full copy of the data, in a format which can be understood by a normal Linux system. This makes single-disk recovery possible on practically any computer, using any surviving disk from the system, whether that’s an offline copy or one of the disks in the RAID mirror.
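
As a sketch of what that recovery could look like: a single surviving member of an mdadm mirror can be assembled as a degraded array on any Linux machine and mounted read-only (device names are assumptions):

# Start a degraded RAID1 array from one surviving member
mdadm --assemble --run /dev/md0 /dev/sdb
# Mount it read-only and copy the data off
mount -o ro /dev/md0 /mnt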

I haven’t tested the process of making an offline copy of the backup volume, but that will be up next.

Wrap-up

This is possibly the most effort I’ve ever put into a PC build. The only unexpected issue I encountered is how heavy it is; I won’t be rack-mounting it until I get some generic rails.

The computer uses a strange mix of parts, but meets my requirements well. I hope that by writing this up, I’ll be providing some useful notes to anybody attempting to build something similar.

This project also gave me a chance to practice my entry-level CAD skills to build something which I’ll actually be using. I find a lot of utility in paper prototyping, and printed each design in 1:1 scale to check the physical dimensions before ordering anything.

For the circuit board, I used a print-out to check each part footprint, as well as the hole locations for fitting it in the case.

As with many of the projects which I blog about, I’ve put the design files up on GitHub. The fan controller is at mike42/fan-controller-athena-power, while the Flex ATX adapter plate is at mike42/flexatx-adapter-hdplex.

Three ways to archive a website

I recently needed to archive a small website before decommissioning it. There are a few distinct reasons you might want an archive of a website:

  • To archive the information in case you need it later.
  • To archive the look and feel so that you can see how it has progressed.
  • To archive the digital artifacts so that you can host them elsewhere as a mirror.

Each of these produces files in a different format, which are useful over different time periods. In this post, I’ll write a bit about all three, since it’s easiest to archive a website while it is still online.

1. Saving webpage content to PDF

To write an individual page to a PDF, you can use wkhtmltopdf. On Debian/Ubuntu, this can be installed with:

sudo apt-get install wkhtmltopdf

The only extra setting I use for this is the “Javascript delay”, since some parts of the page will be loaded after the main content.

mkdir -p pdf/
wkhtmltopdf --javascript-delay 1000 https://example.com/ pdf/index.pdf

This produces a PDF, which you can copy/paste text from, or print.

You then simply repeat this for every page which you want to archive.

2. Saving webpage content to an image

If you are more interested in how the website looked, rather than what it contained, then you can use the same package to write it to an image. I used the JPEG format here because the file sizes remain reasonable at higher resolutions. I also zoomed each page to 200% for higher quality, and selected widths which are typical of desktop, tablet and mobile screens.

mkdir -p jpg/desktop jpg/mobile jpg/tablet
wkhtmltoimage --zoom 2.0 --javascript-delay 1000 --width 4380 https://example.com/ jpg/desktop/index.jpg
wkhtmltoimage --zoom 2.0 --javascript-delay 1000 --width 2048 https://example.com/ jpg/tablet/index.jpg
wkhtmltoimage --zoom 2.0 --javascript-delay 1000 --width 960 https://example.com/ jpg/mobile/index.jpg

This gives you three images for the page. This example page is quite short, but a larger page produces a very tall image.

As above, this needs to be repeated for each page which you want to archive.

3. Mirroring the entire site as HTML

A full mirror of the site is a good short-term archive. Some websites have a lot of embedded external content like maps and social media widgets, which I would expect to gradually stop working over time as these services change. Still, you may be able to browse the website on your local computer in 10 or 20 years’ time, depending on how browsers change.

wget is the go-to tool for mirroring sites, but it has a lot of options!

mkdir -p html/
cd html
wget \
  --trust-server-names \
  -e robots=off \
  --mirror \
  --convert-links \
  --adjust-extension \
  --page-requisites \
  --no-parent \
  https://example.com

There are quite a few options here; I’ll briefly explain why I used each one:

--trust-server-names Use the filename from the final URL when a redirect is followed.
-e robots=off Ignore robots.txt. This is only OK to do if you own the site and can be sure that mirroring it will not cause capacity issues.
--mirror Short-hand for a set of options which recursively download the site.
--convert-links Change links on the downloaded pages to point at the local copies.
--adjust-extension If you get a page called “foo”, save it as “foo.html”.
--page-requisites Also download the CSS, Javascript and image files referenced on each page.
--no-parent Only download sub-pages from the starting page. This is useful if you want to fetch only part of the domain.

The result can be opened locally in a web browser.

These options worked well for me on a WordPress site.

Putting it all together

The site I was mirroring was quite small, so I manually assembled a list of pages to mirror, gave each a name, and wrote them in a text file called urls.txt in this format:

https://site.example/ index
https://site.example/foo foo
https://site.example/bar bar

I then ran this script to mirror each URL as an image and PDF, before mirroring the entire site locally in HTML.

#!/bin/bash
set -exu -o pipefail

mkdir -p jpg/desktop jpg/mobile jpg/tablet html/ pdf
while read -r page_url page_name; do
  echo "## $page_url ($page_name)"
  # JPEG archive
  wkhtmltoimage --zoom 2.0 --javascript-delay 1000 --width 4380 "$page_url" "jpg/desktop/$page_name.jpg"
  wkhtmltoimage --zoom 2.0 --javascript-delay 1000 --width 2048 "$page_url" "jpg/tablet/$page_name.jpg"
  wkhtmltoimage --zoom 2.0 --javascript-delay 1000 --width 960 "$page_url" "jpg/mobile/$page_name.jpg"
  # Printable archive
  wkhtmltopdf --javascript-delay 1000 "$page_url" "pdf/$page_name.pdf"
done < urls.txt

# Browsable archive
MAIN_PAGE=$(head -n1 urls.txt | cut -d' ' -f1)
mkdir -p html/
(cd html && \
  wget --trust-server-names -e robots=off --mirror --convert-links --adjust-extension --page-requisites --no-parent "$MAIN_PAGE")

The actual domain example.com has only one page, so running the script against it downloads this set of files:

├── archive.sh
├── html
│   └── example.com
│       └── index.html
├── jpg
│   ├── desktop
│   │   └── index.jpg
│   ├── mobile
│   │   └── index.jpg
│   └── tablet
│       └── index.jpg
├── pdf
│   └── index.pdf
└── urls.txt

Happy archiving!

How to resize a Windows VM image with virt-resize

I recently had a Windows 7 Virtual Machine stored on an undersized qcow2 file. This post steps through the simplest way that I know to produce a new, bigger disk and expand the filesystem onto it.

Zero out the empty space

Because the guest VM is stored on a QCOW2 file, we can recover unused space on disk by zeroing it out now. Download the sdelete utility from Microsoft and run it against the system drive:

sdelete -z C:

Once this is done, power off the guest.

Assuming the host is Linux, you need the qemu-utils and libguestfs-tools packages to follow these steps. On Debian:

apt-get install libguestfs-tools qemu-utils

Move the VM image to a new filename and inspect it.

mv windows.img windows.img.bak

The file command indicates that the disk is about 30 GB when fully expanded.

$ file windows.img.bak 
windows.img.bak: QEMU QCOW Image (v3), 32212254720 bytes

The qemu-img command shows that the disk is 83% full:

$ qemu-img check windows.img.bak 
No errors were found on the image.
411337/491520 = 83.69% allocated, 5.66% fragmented, 0.00% compressed clusters
Image end offset: 26961969152

Check the filesystem names, and note that /dev/sda2 is the partition we want to up-size in this case:

$ virt-filesystems -a windows.img.bak -l
Name       Type        VFS   Label            Size         Parent
/dev/sda1  filesystem  ntfs  System Reserved  104857600    -
/dev/sda2  filesystem  ntfs  -                32105299968  -

Make a new, bigger disk image

Create a new disk of the desired size. In my case, 50G is sufficient:

$ qemu-img create -f qcow2 windows.img 50G
Formatting 'windows.img', fmt=qcow2 size=53687091200 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16

The file command shows that this new empty disk image is larger than the old image.

$ file windows.img
windows.img: QEMU QCOW Image (v3), 53687091200 bytes

Copy the old disk to the new one. The --expand option names a partition which will be grown to fill the extra space.

virt-resize --expand /dev/sda2 windows.img.bak windows.img

The virt-resize command shows a progress bar while it works, and zero blocks will be reclaimed as a result of the output format.

The final line of output suggests holding on to your backup until you’ve checked it, which is wise:

Resize operation completed with no errors. Before deleting the old disk,
carefully check that the resized disk boots and works correctly.

Check that the new disk is valid and contains partitions at the expected size:

$ virt-filesystems -a windows.img -l
Name       Type        VFS   Label            Size         Parent
/dev/sda1  filesystem  ntfs  System Reserved  104857600    -
/dev/sda2  filesystem  ntfs  -                53579939840  -

Boot up the guest

When the machine boots up, you may get a disk check prompt. Because the console I was using triggered the ‘Press any key to cancel’ prompt, I had to reboot and leave the console disconnected in order for the check to start.

After booting, the C:\ drive should display at its new size.

It’s time to migrate away from Outlook Express

Outlook Express is obsolete, so you need to migrate if you’re still using it. This post is a quick guide to saving your local data so that you can jump ship.

If you are keen on desktop-based email, then there are only two real contenders for a replacement mail client:

  • Mozilla Thunderbird (suggested).
  • Windows Mail.

So where is all my data?

The script below is a Windows bat script for backing up an Outlook Express setup. Just fill in the Identity variable and run it from anywhere to produce a folder containing the Outlook saved emails and contacts.

You can find your identity string as a folder name in your profile path, under Local Settings\Application Data\Identities\. The profile path for a user called bob would usually be in a folder like C:\Documents and Settings\bob or C:\Users\bob.

These files can be read by both of the suggested replacements, so a good transition might be:

  1. Make a backup.
  2. Install an alternative & set it up.
  3. Try to import as much as you can.
  4. Once you’re happy, uninstall Outlook Express, or at least delete the shortcuts to it.

@echo off
ECHO -------------------------------------------------------------------------------
SET BACKUPDIR="%username% - Outlook Backup"
SET IDENTITY={AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA}
ECHO Backing up outlook files to: %backupdir%
ECHO --------------------------------------------------------------------------------

ECHO - Clearing backup location... (1 of 3) 
DEL /F /S /Q %backupdir% > NUL 2>&1
MKDIR %backupdir% 2> NUL
ECHO - - Done

ECHO - Copying emails... (2 of 3) 
MKDIR %backupdir%\Emails 2> NUL
XCOPY /E /H /C /R /Y "%userprofile%\Local Settings\Application Data\Identities\%IDENTITY%\Microsoft\Outlook Express" %backupdir%\Emails > NUL
ECHO - - Done

ECHO - Copying address book... (3 of 3)
MKDIR %backupdir%\AddressBook 2> NUL
XCOPY /E /H /C /R /Y "%appdata%\Microsoft\Address Book" %backupdir%\AddressBook > NUL
ECHO - - Done

ECHO - Completed 3 of 3 tasks.
ECHO -------------------------------------------------------------------------------
ECHO The backup is complete.
ECHO Please copy %BACKUPDIR% to external storage.
ECHO --------------------------------------------------------------------------------
PAUSE

Backing up from a hosting provider

Backups are great, and they’re not rocket science. I’m writing up how we do backups, not because I think it’s a cool or unique setup (because it’s not), but to highlight how effective a simple solution can be.

We use rsync to take a local copy of whatever is on our web host, without wasting bandwidth downloading files that aren’t needed.

Our hosting provider is accessible via ssh, and the backup box we use is a Raspberry Pi model B, costing (more or less) 50 AUD to get running.

On the server

On the server, we back up databases with mysqldump. To do this, you need to enter user details into a .my.cnf file, and then something like this will do the trick:

#!/bin/sh
# Remove old dump
rm -f database.sql.gz

# Dump and compress database
mysqldump -h sql.example.com --all-databases > database.sql
gzip database.sql

The above script is called database-dump.sh, and is called from the backup box, to dump the databases to a file before grabbing all the files.
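
For completeness, the .my.cnf simply holds the client credentials so that mysqldump can run without prompting. A minimal sketch, with placeholder user and password:

# Create ~/.my.cnf on the server, and keep it readable only by the owner
cat > ~/.my.cnf <<'EOF'
[client]
user=backupuser
password=changeme
EOF
chmod 600 ~/.my.cnf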

On the backup box

First, a script to get the files. You should use password-less login with ssh-copy-id for this to work non-interactively:

#!/bin/sh
# Update the database dump
ssh user@host.example.com './database-dump.sh'
# Get files
rsync -avz --delete-during user@host.example.com:/home/user .

We also keep a copy of the files in a dated archive, so that we can go back and find files which have since been deleted. At the end of the above script:

mkdir -p archive
now=$(date +"%Y-%m-%d")
tar -czf archive/backup-$now.tar.gz user

There aren’t a huge number of changes to record daily, so we got cron to run the above script weekly on the backup box. Read man crontab for how to do this.
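
As a sketch, the crontab entry on the backup box might look something like this, assuming the script above is saved as /home/pi/backup.sh:

# Run the backup at 03:00 every Sunday, appending output to a log
0 3 * * 0 /home/pi/backup.sh >> /home/pi/backup.log 2>&1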

What backup is not

If you think you shouldn’t be doing backups, you’re wrong. The following are not good excuses:

  1. Trust — Whoever is looking after the data won’t lose it.
    Our host is pretty good, but their terms of service say they won’t be responsible for any data loss. Even providers which have support agreements can make mistakes. You’ll also be able to work faster if you’re not paranoid about any mistake being unrecoverable.
  2. Expense — It’s a nice idea but not worth it.
    It’s dirt cheap, you can learn to do it yourself, and once set up requires virtually no administration. If your organisation can’t afford some kind of backup solution, then it should probably stop using data in any form.
  3. RAID — I invested money in RAID, so I don’t need backups.
    If you accidentally delete something, or notice that some of your files have been tampered with, then RAID will not help you. If there is a problem (e.g. fire) at the hosting location, then you will be in trouble regardless of disk redundancy.