How to communicate with USB and networked devices from in-browser Javascript

I recently combined a few tools on Linux to create a local WebSocket listener, which could forward raw data to a USB printer, so that the printer could be driven from Javascript in a web browser.

Why would you want this? I have point of sale (POS) applications in mind, which need to send raw data to a printer. For these applications, the browser and operating system print systems are not appropriate, since they prompt, spool, and badly render pages by converting them to low-fidelity raster images.

Web interfaces are becoming common for point-of-sale applications. The web page could be served from somewhere outside your local network, which is why we need to get the client-side Javascript involved.

The tools

To run on the client computer:

  • websockify, to bridge incoming WebSocket connections to plain TCP
  • socat, to forward TCP data to the printer device

And to generate the print data on the webserver:

  • escpos-php, which produces raw binary print data for receipt printers

We will use these tools to provide some plumbing, so that we can retrieve the print data, and send it off to the printer from client-side Javascript.

Client computer

The client computer was a Linux desktop system. Both of the tools we need are available in the Debian repositories:

sudo apt-get install websockify socat

Listen for WebSocket connections on port 5555 and pass them to localhost:7000:

websockify 5555 localhost:7000

Listen for TCP connections on localhost port 7000 and pass them to the USB device (a more advanced version of a command from a previous post):

socat -u TCP-LISTEN:7000,fork,reuseaddr,bind=127.0.0.1 OPEN:/dev/usb/lp0
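At this point, you can test the plumbing without a browser: sending a line of text to the TCP port should cause it to print (assuming a line-mode device such as an ESC/POS receipt printer):

echo "Hello socat" | socat - TCP4:localhost:7000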

Web page

I made a self-contained web page, which provides buttons to request print data and pass it to the local WebSocket.

This is slightly modified from a similar example that I used for a previous project.

<html>
<head>
    <meta charset="UTF-8">
    <title>Web-based raw printing example</title>
</head>
<body>
<h1>Web-based raw printing example</h1>

<p>This snippet forwards raw data to a local websocket.</p>

<form>
  <input type="button" onclick="directPrintBytes(printSocket, [0x1b, 0x40, 0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x20, 0x77, 0x6f, 0x72, 0x6c, 0x64, 0x0a, 0x1d, 0x56, 0x41, 0x03]);" value="Print test string"/>
  <input type="button" onclick="directPrintFile(printSocket, 'receipt-with-logo.bin');" value="Load and print 'receipt-with-logo'" />
</form>

<script type="text/javascript">
/**
 * Retrieve binary data via XMLHttpRequest and print it.
 */
function directPrintFile(socket, path) {
  // Get binary data
  var req = new XMLHttpRequest();
  req.open("GET", path, true);
  req.responseType = "arraybuffer";
  console.log("directPrintFile(): Making request for binary file");
  req.onload = function (oEvent) {
    console.log("directPrintFile(): Response received");
    var arrayBuffer = req.response; // Note: not req.responseText
    if (arrayBuffer) {
      var result = directPrint(socket, arrayBuffer);
      if(!result) {
        alert('Failed, check the console for more info.');
      }
    }
  };
  req.send(null);
}

/**
 * Extract binary data from a byte array and print it.
 */
function directPrintBytes(socket, bytes) {
  var result = directPrint(socket, new Uint8Array(bytes).buffer);
  if(!result) {
    alert('Failed, check the console for more info.');
  }
}

/**
 * Send ArrayBuffer of binary data.
 */
function directPrint(socket, printData) {
  // Type check
  if (!(printData instanceof ArrayBuffer)) {
    console.log("directPrint(): Argument type must be ArrayBuffer.");
    return false;
  }
  if (socket.readyState !== socket.OPEN) {
    console.log("directPrint(): Socket is not open!");
    return false;
  }
  // Serialise, send.
  console.log("Sending " + printData.byteLength + " bytes of print data.");
  socket.send(printData);
  return true;
}

/**
 * Connect to print server on startup.
 */
var printSocket = new WebSocket("ws://localhost:5555", ["binary"]);
printSocket.binaryType = 'arraybuffer';
printSocket.onopen = function (event) {
  console.log("Socket is connected.");
};
printSocket.onerror = function (event) {
  console.log('Socket error', event);
};
printSocket.onclose = function (event) {
  console.log('Socket is closed');
};
</script>
</body>
</html>

Webserver

On an Apache HTTP webserver, I uploaded the above web page, along with a file of raw print data called receipt-with-logo.bin. This file was generated with escpos-php, and is available in that project's repository.

For reference, the test file receipt-with-logo.bin renders as a short test receipt, with a logo image at the top.

Test

I opened up the web page on the client computer with socat, websockify and an Epson TM-T20II connected. After clicking one of the print buttons, the data was sent to my printer. Success!

Because I wasn’t closing the websocket connection, only one browser window could access the printer at a time. Still, it’s a good demo of the basic idea.
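A simple way around this is to connect on demand: open a socket for each print job, send once the connection opens, then close it. An untested sketch, using the same ws://localhost:5555 listener:

function directPrintOnce(printData) {
  // Open a fresh connection per job, so that the printer is not
  // held by a single browser window.
  var socket = new WebSocket("ws://localhost:5555", ["binary"]);
  socket.binaryType = 'arraybuffer';
  socket.onopen = function (event) {
    socket.send(printData);
    socket.close(); // Queued data is flushed before the connection closes.
  };
  socket.onerror = function (event) {
    console.log('Socket error', event);
  };
}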

To take this from an example to something you might deploy, you would basically just need to keep socat and websockify running in the background as services (via systemd; a sketch follows), close the socket when it's not being used, and integrate it into a real app.
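For the background part, a pair of systemd units would do the job. This is an untested sketch; the unit names and binary paths are my own:

# /etc/systemd/system/printer-socat.service
[Unit]
Description=Forward TCP on localhost:7000 to the USB printer

[Service]
ExecStart=/usr/bin/socat -u TCP-LISTEN:7000,fork,reuseaddr,bind=127.0.0.1 OPEN:/dev/usb/lp0
Restart=always

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/printer-websockify.service
[Unit]
Description=Forward WebSocket connections on port 5555 to localhost:7000
Requires=printer-socat.service

[Service]
ExecStart=/usr/bin/websockify 5555 localhost:7000
Restart=always

[Install]
WantedBy=multi-user.target

Enable both with systemctl enable printer-socat printer-websockify.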

Different printers, different forwarding

The socat tool can connect to USB, Serial, or Ethernet printers fairly easily.

USB

Forward TCP connections from port 7000 to the receipt printer at /dev/usb/lp0:

socat TCP4-LISTEN:7000,fork /dev/usb/lp0

You can also access the device files directly under /sys/bus/usb/devices/

Serial

Forward TCP connections from port 7000 to the receipt printer at /dev/ttyS0:

socat TCP4-LISTEN:7000,fork /dev/ttyS0

Network

Forward TCP connections from port 7000 to the receipt printer at 10.1.2.3:9100:

socat -u TCP-LISTEN:7000,fork,reuseaddr,bind=127.0.0.1 TCP4-CONNECT:10.1.2.3:9100

You can also forward WebSocket connections directly to an Ethernet printer with websockify, skipping socat entirely:

websockify 5555 10.1.2.3:9100

Other types of printer

If you have another type of printer, such as one accessible only via smbclient or lpr, then you will need to write a helper script.

Direct printing is faster, so I don’t use this method. Check the socat EXEC documentation or man socat if you want to try this.
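As a sketch of what that could look like: socat's EXEC address type hands each connection to a program, so a helper script (the script name and print queue here are my own) could pass the data along to lpr:

socat -u TCP-LISTEN:7000,fork,reuseaddr,bind=127.0.0.1 EXEC:/usr/local/bin/print-helper.sh

Where /usr/local/bin/print-helper.sh queues its standard input as raw data:

#!/bin/sh
# Send raw data from stdin to the 'receipt' print queue
lpr -P receipt -o raw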

Future

I’ve had a lot of questions on the escpos-php bug tracker from users who are attempting to print from cloud-hosted apps, which is why I tried this setup.

The browser is a moving target. I have previously written receipt-print-hq/chrome-raw-print, a dedicated app for forwarding WebSocket connections to USB, but that will stop working in a few months when Chrome apps are discontinued. Some time later, WebUSB should arrive to fill the gap, which would be enormously useful for connecting to accessories in point-of-sale setups.

The available tools for generating ESC/POS (receipt printer) binary from the browser are a long way from feature parity with the likes of escpos-php and python-escpos. If you are looking for a side-project, then this is a good choice.

Lastly, the socat -u flag makes this all unidirectional, but many types of devices (not just printers) can respond to commands. I couldn't get the end-to-end path to work without this flag, so don't expect to be able to read from the printer without doing some extra work.


Get the code

View on GitHub →

How to use HiDPI displays on Debian 9

I recently added a 4K monitor to my Debian box, and had to set a few things to make it display at a good size. These high-density monitors, which are becoming common on laptops and desktops, are known as “HiDPI” displays.

Currently I get the best results with:

  • Window scaling factor of 2
  • Font scaling 0.90 to make text slightly smaller

Note that “window scaling” is not “upscaling” (stretching an image). In this version of Gnome, it means “single/double/triple DPI”. The implementations are in the process of changing: Soon you should be able to set any scaling factor.

This post assumes a Gnome version around 3.26, which is what you would get as a default if you installed Debian 9 today.

Apply to one user

Under Settings → Devices → Displays, set the Scale to 200%.

Under Tweaks → Fonts, set the Scaling Factor to 0.90.
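The same two settings can also be applied from a terminal with gsettings, using the schema keys shown in the next section:

gsettings set org.gnome.desktop.interface scaling-factor 2
gsettings set org.gnome.desktop.interface text-scaling-factor 0.90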

Next, add these variables to ~/.bashrc to apply similar scaling to Qt apps.

export QT_AUTO_SCREEN_SCALE_FACTOR=0
export QT_SCALE_FACTOR=2

Log out and back in to ensure that the settings have applied everywhere.

Apply to any user

If you have a shared system (eg. domain accounts), or want to scale the login box as well, then you can apply the same settings system-wide, as below.

These steps are based on answers to the Ask Ubuntu question: Adjust text scaling factor for all users.

nano /usr/share/glib-2.0/schemas/org.gnome.desktop.interface.gschema.xml

Set the text-scaling-factor to 0.9, and the scaling-factor to 2.

<key name="text-scaling-factor" type="d">
  <range min="0.5" max="3.0"/>
  <default>0.9</default>
  <summary>Text scaling factor</summary>
  <description>
    Factor used to enlarge or reduce text display, without changing font size.
  </description>
</key>
<key name="scaling-factor" type="u">
  <default>2</default>
  <summary>Window scaling factor</summary>
  <description>
    Integer factor used to scale windows by. For use on high-dpi screens.
    0 means pick automatically based on monitor.
  </description>
</key>

Re-compile the schemas:

glib-compile-schemas /usr/share/glib-2.0/schemas
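Editing the packaged schema works, but a package upgrade may overwrite it. An override file in the same directory should achieve the same thing and survive upgrades; for example, create /usr/share/glib-2.0/schemas/90-hidpi.gschema.override (the filename is my own) containing:

[org.gnome.desktop.interface]
scaling-factor=uint32 2
text-scaling-factor=0.9

Then re-run glib-compile-schemas as above.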

Next, drop some similar environment variables for Qt apps in /etc/profile.d/hidpi.sh to apply them to all users:

export QT_AUTO_SCREEN_SCALE_FACTOR=0
export QT_SCALE_FACTOR=2

After this, reboot. If the setting has applied, then the gdm3 login box will be scaled as well.

How to install PHP Composer as a regular user

Composer is an essential utility for PHP programmers, and allows you to manage dependencies.

Dependencies

You can use your regular account to install Composer, use it, and even update it. You do need to have a few packages installed first, though:

sudo apt-get install git curl php-cli

Or on Fedora:

sudo dnf install git curl php-cli

Local install

Next, fetch the installer and deploy Composer to your home directory:

curl https://getcomposer.org/installer > composer-setup.php
mkdir -p ~/.local/bin
php composer-setup.php --install-dir=$HOME/.local/bin --filename=composer
rm composer-setup.php
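Between downloading and running the installer, it is worth verifying it: the Composer project publishes the expected SHA-384 hash of the current installer at https://composer.github.io/installer.sig. A sketch of the check:

EXPECTED=$(curl -s https://composer.github.io/installer.sig)
ACTUAL=$(php -r "echo hash_file('sha384', 'composer-setup.php');")
if [ "$EXPECTED" != "$ACTUAL" ]; then
  echo 'Installer corrupt'
  rm composer-setup.php
fi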

Last, add ~/.local/bin to your $PATH:

echo 'PATH=$PATH:~/.local/bin' >> ~/.bashrc
source ~/.bashrc
echo $PATH

You can now run composer:

$ composer --help
Usage:
  help [options] [--] [<command_name>]
...
$ composer self-update
You are already using composer version 1.5.6 (stable channel).

Make Composer available for all users

Just run this line if you decide that all users should have access to your copy of Composer:

sudo mv ~/.local/bin/composer /usr/local/bin/composer

If you look up how to install Composer, you will find a tempting one-liner that uses curl to fetch a script from the Composer website, then executes it as root. I don’t think it’s good practice to install software like that, so I would encourage you to just run ‘sudo mv’ at the end instead.

How to boot Debian in 4 seconds

This blog post is a throwback to “Booting Debian in 14 seconds” from debian-administration.org, where the author went through some fairly advanced steps to get his low-spec Debian laptop to boot quickly. Debian was at version 4.0 at the time, and I recall it taking around 40 seconds to boot on a default desktop install.

In a rare exception to Wirth’s law, waiting for a computer to boot is no longer “a thing”. A default desktop install of Debian includes systemd, and uses a multi-core CPU and SSD quite efficiently. Also, sleep/wake works more reliably than it used to, so boot speed is not as important as it used to be.

On a modern desktop PC, booting Debian 9 (default desktop install) takes me 14 seconds with no extra configuration, so that’s our baseline.


Mainly to illustrate how far open source operating systems have come, I’m going to step through a boot process speed-up, the way it looks in 2018.

Summary

Out

You will read about some of these older tricks if you search for Linux boot speed; in my humble opinion, they are all quite irrelevant in 2018:

  • Swapping the /bin/sh shell to dash (already the default; also, init scripts are no longer used).
  • Using readahead (gains are tiny unless you have a HDD).
  • The “noatime” setting on mounts (“relatime” has been a default mount option since Linux 2.6).

In

New things that you won’t find in pre-systemd guides:

  • systemd-analyze to instrument the boot
  • systemctl to exclude services from boot

Still relevant

  • bootchart is still useful for drawing pretty graphs
  • Configure GRUB & UEFI not to prompt for input
  • Don’t enable services you don’t need

Process

Remove bootloader delay

Between UEFI and the OS, you will get the bootloader, which by default waits 5 seconds in case you want to select a different menu item. Start by switching the GRUB timeout from 5 seconds to 0.

sudo nano /etc/default/grub

Set GRUB_TIMEOUT=0.

Run:

sudo update-grub2

Look at systemd

Use the tool systemd-analyze to draw a picture:

systemd-analyze plot > plot.svg

In my case, it was clear that 9 seconds of the boot was an optional “waiting for network” step.

So, (thank you askubuntu), I disabled that service and rebooted:

$ sudo systemctl disable NetworkManager-wait-online.service
Removed /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.
$ systemd-analyze plot > plot2.svg

The boot was still taking 4.4 seconds, so more analysis was in order:

The systemd-timesyncd service was holding things up.
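A complementary view is systemd-analyze blame, which lists units sorted by how long each took to initialise; slow units like this one show up at the top:

systemd-analyze blame | head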

This service runs early in the boot process, reads an old time from a file, and tries to update time over the network. Since I have a working RTC, this is all unnecessary for me, so I removed it and replaced it with chronyd, which is happy to operate in the background.

sudo systemctl disable systemd-timesyncd.service
sudo apt-get install chrony
sudo systemctl enable chrony

After another reboot:

systemd-analyze plot > plot4.svg

There we go, down to 4.096 seconds with a few minutes of effort. I think that’s acceptable.

The systemd developers are quite certain that you can boot in under 2 seconds, but I wasn’t willing to customise my system to that extent.

How to use parallel to speed up your work

GNU Parallel is a tool to execute multiple commands at once. In its basic usage, you would list your commands in a file, so that it can execute them, several at a time.

It gives the most benefit on processes that don’t fully utilise your CPU. Almost every laptop, desktop and single board computer now has multiple CPU cores available, so you are probably missing out if you frequently perform batch operations without it.

Installation

On Debian or Ubuntu:

sudo apt-get install parallel
parallel --cite

On Fedora the package name is the same:

sudo dnf install parallel
parallel --cite

Example 1: Convert loops to pipes

Using the ImageMagick tool to convert a folder of GIF images to PNG format can be done in a loop:

for i in *.gif; do convert $i -scale 200% ${i%.*}.png; done

Or, you could print each command in a loop then pass them to parallel.

for i in *.gif; do echo convert $i -scale 200% ${i%.*}.png; done | parallel

The second command is many times faster on a multi-core computer.
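As an aside, parallel can also build the command lines itself: arguments after ::: are substituted for {}, and {.} expands to the argument with its extension removed, so this one-liner should be equivalent to the loop-and-echo version above:

parallel convert {} -scale 200% {.}.png ::: *.gif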

Example 2: Replace xargs with parallel

This command executes a single “pngcrush” command on each PNG file in a directory, one at a time.

find . -type f -name '*.png' -print0  | xargs -0 -n1 -r pngcrush -q -ow -brute

To convert this to use parallel, you would use the following command-line:

find . -type f -name '*.png' | parallel "pngcrush -q -ow -brute {}"

Don’t use xargs in parallel mode

Expert command line users will also know about xargs -P, which seems to do the same thing at a glance.

xargs is good at making really long command-lines, and not so good at executing multiple commands at once. It will mix the output of the commands, and requires you to specify the number of jobs to run.

Parallel is designed to do lots of things at once, and it does it well. It will choose some good defaults for the number of processes to execute, and adds an insane collection of features that you need for large batches. To name just a few:

  • Control spawning of new jobs based on things like available memory, system load, or an absolute number of jobs to keep running
  • Distribute jobs to remote computers
  • Show progress
  • Control over when to terminate jobs
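For example, this variation on Example 2 above caps the system load and displays progress as it runs:

find . -type f -name '*.png' | parallel --load 80% --progress "pngcrush -q -ow -brute {}"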

How to generate star fields

I recently needed a texture for the skybox in a 3D space game.

I used this ImageMagick one-liner to generate a dotted canvas with some grey and white pixels. It displays well if the texture will be stretched.

convert -size 1600x900 xc: +noise Random -channel R -threshold 0.5% \
        -negate -channel RG -separate +channel \
        -compose multiply -composite stars.png

Alternative style

With a simple modification, the bright stars are made bigger, and the dull ones are made smaller, with only black and white used. This works well if the texture will be shrunk for display.

convert -size 800x450 xc: +noise Random -channel R -threshold 0.5% \
        -negate -channel RG -separate +channel \
        -compose multiply -composite -resize 200% \
        -threshold 10% stars-rounded.png

Full process

Although these are great one-liners, the actual process is a bit hard to follow without some smaller steps.

Here, we will generate a 150×90 star field in several steps. I’ve scaled each of these pictures to 200% of their original size and converted them to PNG for display on the web. I’ve used BMP in the commands only because it saves some plumbing around colour spaces.

Start with a blank canvas:

convert -size 150x90 xc: stars-01.bmp

Add random RGB noise:

convert stars-01.bmp +noise Random stars-02.bmp

In the red channel of the image, apply a black/white threshold: if the red channel is greater than 0.5%, it is set to the maximum; otherwise, it is set to the minimum.

Mostly the red channel is now 100%, with random dots of 0%. The blue and green channels are still completely random:

convert stars-02.bmp -channel R -threshold 0.5% stars-03.bmp

Negate the image, so that the red channel is mostly 0%, with dots of 100%:

convert stars-03.bmp -negate stars-04.bmp

Extract the red channel and green channel into separate images (stars-05-0.bmp and stars-05-1.bmp):

convert stars-04.bmp -channel RG -separate +channel stars-05.bmp

The red channel will be dots of white:

The green channel will be random:

Multiply the channels together, so that the white dots become grey dots, each with a random brightness:

convert stars-05-0.bmp stars-05-1.bmp -compose multiply -composite stars-06.bmp

For the alternative style, scale the image up, then apply a new threshold. Brighter dots will appear as larger patches of white, while dull stars will be smaller or invisible:

convert stars-06.bmp -resize 200% -threshold 10% stars-06-rounded.bmp

How to use a Radeon graphics card on Debian 9

I have previously blogged about Radeon graphics cards on different Debian installs.

AMD has now released a new free driver which works brilliantly on Debian. In the past, Debian users had to choose between the community-provided free software driver and the proprietary one. Generally, the proprietary driver was more feature-rich, but the free driver worked more reliably across upgrades. Now, you can safely ignore old guides and start using the new free driver.

Here’s how:

Upgrade

Make sure you are on Debian 9 (Stretch) or newer.

These steps apply to a fresh install.

Identify

You should use lspci to confirm that you have an AMD/ATI card.

$ lspci | grep Radeon
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT [Radeon HD 7970/8970 OEM / R9 280X]

Install firmware

You need to install a package called firmware-linux-free to get the driver working at all. If you want decent graphics performance, you will need firmware-linux-nonfree as well, which involves adding non-free sources:

nano /etc/apt/sources.list

Add the words “contrib non-free” to the end of your mirror line:

deb http://.../debian/ stretch main contrib non-free

Install the packages:

apt-get update
apt-get install firmware-linux-free firmware-linux-nonfree

And reboot:

reboot

What, that’s it?

Well, yes, for a fresh install that’s it. If your install is old, you might also have to remove old drivers or install the xserver-xorg-video-amdgpu and xserver-xorg-video-ati packages (in my case, these were already installed).
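To confirm which kernel driver is actually in use after rebooting, lspci can report it; look for the “Kernel driver in use” line (radeon or amdgpu):

lspci -k | grep -A 3 VGA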

The Debian Wiki AtiHowto contains some more detailed information, most of which is not relevant for a simple desktop setup.

How to assemble a Linux software RAID array on a different computer

With Linux software RAID, if you ever toast your computer, you can retrieve the disks and open up the array on a different computer.

The disks appear as “Linux Software RAID Member” volumes in the disk utility.

Simply install mdadm, and scan for arrays:

$ sudo apt-get install mdadm
$ sudo mdadm --assemble --scan
mdadm: /dev/md/0 has been started with 2 drives.

The array will then appear as a new disk, which can be formatted, mounted, or cloned via the usual tools.
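Once assembled, you can check the array’s state via /proc/mdstat, then mount it like any other volume (the mount point here is arbitrary):

cat /proc/mdstat
sudo mkdir -p /mnt/raid
sudo mount /dev/md/0 /mnt/raid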

How to set up Docker containers in Travis CI

This post outlines a method for using Docker for testing on Travis CI. It may be useful to you if you are a web application developer who uses GitHub.

I use this setup in my web-based word puzzle generator, so that every change is spun up and tested with a web browser before it is merged.

I got the idea for writing this from a few lines in the docker-compose documentation, which suggested that Docker is an easy way to perform automated testing over a running application:

$ docker-compose up -d
$ ./run_tests
$ docker-compose down

This snippet was missing some setup and an example app, but these three lines do all the heavy lifting.

The Docker setup

In order to focus on the Docker setup, I made a server which simply responds to TCP requests on port 5000 with the text “Hello World”.

This file is called server.sh, and sits in a directory called foo_server:

#!/bin/sh
while true; do
  # Send 'Hello World' to anybody who connects on port 5000
  echo "Hello World" | nc -l 5000
done

Alongside it, I added a Dockerfile to instruct Docker to execute this tiny ‘application’ in a container, after installing the dependencies. The machine is built from the official Debian image:

FROM debian
ADD . /usr/share/test-server
WORKDIR /usr/share/test-server
RUN apt-get update && apt-get install --assume-yes netcat-openbsd
CMD ./server.sh

Lastly, a .dockerignore file is used to keep the Docker-related files out of the container image:

# Ignore docker files
Dockerfile
.dockerignore

In the directory above, a simple test script, test.sh, is used to check that the server returns the expected output:

#!/bin/sh
set -e
expected="Hello World"
actual=`nc -v localhost 5000`
echo "Expecting: $expected"
echo "Server says: $actual"
if [ "$expected" != "$actual" ]; then
  echo "Test failed"
  exit 1
else
  echo "Test passed"
  exit 0
fi

Alongside the test file, a file called docker-compose.yml instructs Docker to create a container out of the foo_server example, and to forward local port 5000 to it.

version: '2'
services:
  foo:
    build: foo_server
    ports:
     - "5000:5000"
    container_name: foo_1

To try it out for yourself, you need a relatively recent version of Docker and docker-compose. The versions provided in Debian were not new enough to execute the examples, but the Docker project provides repos containing newer builds for Debian & Ubuntu. For my distro, the install was:

curl -sSL "https://get.docker.com/gpg" | sudo -E apt-key add -
echo "deb https://apt.dockerproject.org/repo debian-stretch main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install docker-engine
sudo pip install docker-compose

The versions this got me were Docker 1.11.2 and docker-compose 1.7.1. Straight after the install, I could deploy and test the example locally:

$ docker-compose up --build -d
$ ./test.sh
$ docker-compose down

The CI setup

I’ll assume that if you’re reading this, you are familiar with the basics of Travis CI. The large block of code below is the .travis.yml file to set up the test machine, then execute the tests against a container.

---
# Use Ubuntu 'trusty' distribution
sudo: required
dist: trusty

install:
  # Update docker-engine using Ubuntu 'trusty' apt repo
  - >
    curl -sSL "https://get.docker.com/gpg" |
     sudo -E apt-key add -
  - >
    echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" |
     sudo tee -a /etc/apt/sources.list
  - sudo apt-get update
  - >
    sudo apt-get -o Dpkg::Options::="--force-confdef" \
     -o Dpkg::Options::="--force-confold" --assume-yes install docker-engine
  - docker version
  # Update docker-compose via pip
  - sudo pip install docker-compose
  - docker-compose version

before_script:
  - docker-compose up --build -d

script:
  - ./test.sh

after_script:
  - docker-compose down
...

Note: This uses Travis CI’s trusty distribution, which at the time of writing is the newest stable build platform available on Travis CI. It shipped an outdated version of Docker, which had to be installed over. Because the existing Docker was already configured, I had to override a debconf prompt, which is why the apt addon syntax was not used to set up dependencies.

Result

The build result for each commit is displayed in Travis CI.

Under this, the output of the passing test script is shown.

Using this setup in practice

Moving this from a demo setup to a real setup would be fairly simple:

  1. Replace the installation with a real software stack
  2. Replace the server run with a command to serve the application (such as Apache HTTP Server, Tomcat or Node)
  3. Replace the tests with real tests (such as Cucumber or Selenium).

The word puzzle generator mentioned in the preamble installs a LAMP stack and tests it with Selenium in its CI build.

If your application is a bit larger, your only extra complexity will come from running multiple containers with docker-compose.
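For instance, a second service is just another entry in docker-compose.yml; this hypothetical fragment (service names and images invented for illustration) places a database alongside an application container:

version: '2'
services:
  app:
    build: app_server
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:9.5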

Get the code

All of these scripts in a working CI example are available on GitHub:
mike42/minimal-docker-ci

Quick guide: Running stock Debian on the Raspberry Pi 2

At the time of writing, the ‘Raspbian’ port of Debian is often used on the Raspberry Pi. It was created to match the CPU architecture of the original Pi, for better performance. These reasons don’t apply to the newer Raspberry Pi 2, so if you’re a Debian desktop or server user, you can do away with the fork and just run stock Debian Jessie armhf.

The info from Debian is: https://wiki.debian.org/RaspberryPi2

A bit more background about why this only applies to the Raspberry Pi 2:

  • The Raspberry Pi 1 uses an ARMv6 chip with hard floats
    • The Debian armhf port requires ARMv7
    • The Debian armel port doesn’t use hard floats, so it is unnecessarily slow on the Pi
    • So Raspbian was created for the Raspberry Pi 1’s ARMv6 with hard floats, and gets the most juice out of the CPU on the Raspberry Pi 1
  • The Raspberry Pi 2 uses ARMv7 with hard floats, so the Debian armhf port is fine

Install the image

The image is linked from the Debian wiki page above.

I will assume that your machine has an SD card slot. To find the device name, list out disks and look for one of the correct size, which appears when you plug in the card:

df
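Alternatively, lsblk lists block devices whether they are mounted or not, which makes the card easy to spot by size:

lsblk -o NAME,SIZE,TYPE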

Download a copy of the image, extract it, and dd the file onto the card. Replace /dev/sdX with your card’s device name; this will overwrite the card’s contents:

wget -c https://images.collabora.co.uk/rpi2/jessie-rpi2-20150705.img.gz
gunzip jessie-rpi2-20150705.img.gz 
sudo dd if=jessie-rpi2-20150705.img of=/dev/sdX bs=4M
sudo sync
umount /media/$USER/*

Plug in the Raspberry Pi, and then log in. If you are using SSH, then arp-scan is a good tool for finding devices on the network:

sudo apt-get install arp-scan
sudo arp-scan -l
ssh root@x.y.z.w

Configure the Pi. Settings like screen resolution and HDMI output go in config.txt:

cd /boot/firmware/
nano config.txt

Perform a software upgrade:

nano /etc/apt/sources.list
apt-get update
apt-get dist-upgrade

Start fixing security defaults. Remember that this is not a clean install, so start by setting your own password:

passwd

Check that there are no other accounts with passwords set:

cat /etc/shadow

Regenerate all the SSH server keys:

ssh-keygen -f /etc/ssh/ssh_host_ecdsa_key -N '' -t ecdsa -b 521
ssh-keygen -f /etc/ssh/ssh_host_dsa_key -N '' -t dsa
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa

Lastly, generate some locales:

sudo locale-gen en_US en_US.UTF-8 en_GB en_GB.UTF-8