Updating netplan network config on Ubuntu after hardware change

Recently my server bit the dust. Power-on gave me five beeps, which meant there was something wrong with the CMOS battery. I changed that, but no luck; it was obvious a deeper problem existed. The machine itself was a 2010 Dell desktop that was my primary machine for some years before I replaced it and moved it to a server role. It managed thirteen years of loyal service. But maybe now was the right time to replace a dual-core system with only 6GB of RAM with something more up to date.

I wanted to keep the case, the PSU, the graphics card, and the SATA expansion board. I bought a CPU/Motherboard/RAM bundle from AWD-IT. The service was fast, the price was right. I plugged everything in but no power. My old PSU was a 300W model that seemed ideal for a low-power server. I tried everything I could think of but still nothing. So I ordered a new 500W PSU. Plugged it in, and all fired up. I put everything else back in the case, tidied all the cables up and plugged in the ethernet cable.

The NIC lights on the back of the box flashed and the machine booted, but there was no network connection. Figuring it was missing drivers, I went down a search-engine rabbit hole only to learn that network drivers live in the kernel. I went looking at the network configuration, but it seemed things had moved on from the old /etc/network/interfaces file. It turns out Ubuntu now uses something called netplan.

I looked at my netplan configuration, /etc/netplan/00-network.yaml:

network:
  ethernets:
    enp4s0:
      dhcp4: no
      addresses:
      - 192.168.1.1/24
      gateway4: 192.168.1.254
      nameservers:
        addresses: [8.8.8.8, 1.1.1.1]
  version: 2

Looked fine to me. So I took a peek at my ip configuration:

> ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp37s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 01:92:03:04:05:06 brd ff:ff:ff:ff:ff:ff
...

Because of the hardware change, enp4s0 no longer existed and had been replaced with enp37s0.

I updated my /etc/netplan/00-network.yaml:

network:
  ethernets:
    enp37s0:
      dhcp4: no
      addresses:
      - 192.168.1.1/24
      gateway4: 192.168.1.254
      nameservers:
        addresses: [8.8.8.8, 1.1.1.1]
  version: 2

Then I ran sudo netplan generate followed by sudo netplan apply, and all was good.
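
For the record, the full sequence, plus sudo netplan try, which applies the config and rolls it back automatically if you don't confirm within the timeout (handy when doing this over SSH):

# Regenerate the backend configuration from the YAML files
sudo netplan generate

# Apply it immediately
sudo netplan apply

# Or, safer on a remote box: apply and revert unless confirmed
sudo netplan try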

I moved to eleventy (11ty)

It's been two years and one month since I last posted on my website. Largely this is because the site ran on Hugo. The build system involved having the Hugo binaries installed and configured and having AWS CLI installed and configured to talk to Backblaze B2. The way I had the markdown files set up meant things got messy. Whenever I had an idea to post something I just couldn't be bothered to go through the process.

A couple of weeks ago I saw a tweet by Lea Verou endorsing eleventy.

There are no end of CMS options and build systems around, but an endorsement from someone as eminent as Lea piqued my interest, so I dove into eleventy. After a couple of days playing with it, and as an almost full-time NodeJS developer these days, I was much happier with it than Hugo (which is still great btw, just not for me).

We'll see how it goes.

SSH in Docker Alpine

I have a Jingo wiki; it's where I keep notes on all sorts of things. Mostly code and configuration snippets for things I had to learn the hard way and don't want to have to learn again. There's also stuff in there about photography and other interests of mine.

Jingo works by storing everything in markdown files and automatically keeping things backed up using a Git remote repository. Because my wiki is private, I have a private repository. Access to this is controlled using SSH keys.

Jingo is built in NodeJS and requires the NodeJS runtime. Traditionally I have maintained two remote VPS instances, one with Node installed and one with PHP installed, for running various personal webapps. But I decided against this and consolidated to one server running each environment in Docker containers. This was prompted by some annoying errors I was getting when running Jingo under NodeJS v14.x. I wanted to run it under v12, but still have other apps run using v14.

Ordinarily that's not a problem, but because Jingo requires a secure connection with Git, I needed to get SSH key access inside the container. Before doing this, ensure you've created your SSH keys on the server.
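
If you haven't got a key pair yet, generating one looks something like this (I'm assuming an RSA key, to match the id_rsa filename used below):

# Generate an RSA key pair, accepting the default path of ~/.ssh/id_rsa
ssh-keygen -t rsa -b 4096 -C "<email address>"
# The public half (~/.ssh/id_rsa.pub) then needs adding to the Git host as an access key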

First approach

My first approach was to use the SSH agent and Docker's ability to forward it at build time using a mount of type ssh. Here's the short version:

# syntax=docker/dockerfile:1.0.0-experimental
FROM node:12-alpine
RUN apk update && apk add --no-cache git openssh-client
RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
RUN --mount=type=ssh,id=id_rsa git clone git@bitbucket.org:<workspace>/<my-repo>.git /app
WORKDIR /app
RUN npm i
CMD npm start

The first line above tells Docker that we'll be using some experimental features, in our case the mounting of ssh. For this to work we need to add the following to /etc/docker/daemon.json:

{
  "features" : {
    "buildkit" : true
  }
}
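
As an aside, BuildKit can also be enabled for a single build, rather than daemon-wide, by setting an environment variable on the command itself:

# Enable BuildKit for this invocation only, instead of editing daemon.json
DOCKER_BUILDKIT=1 docker build --ssh id_rsa="/home/<username>/.ssh/id_rsa" -t <container-tag> .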

Line three installs Git and OpenSSH. Line four creates the SSH directory and adds the host key of my Git host, in my case Bitbucket, to the known_hosts file inside the container.

Line five mounts the SSH-Agent and makes it available at build time. An important part to note here is the ID.

Make sure your key is added to the SSH-Agent on your system, then pass the key with the corresponding ID to the build command:

docker build --ssh id_rsa="/home/<username>/.ssh/id_rsa" -t <container-tag> .
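
If the key isn't already loaded into the agent, that's something like:

# Start an agent for this shell (if one isn't already running) and load the key
eval "$(ssh-agent -s)"
ssh-add /home/<username>/.ssh/id_rsa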

This works fine, but it doesn't provide the SSH key when running the container.

Second approach

Try as I might, I just could not get SSH_AUTH_SOCK forwarding to work, so I was forced to copy my SSH key into the container at build time. I know this is less than secure. I know there is a risk of leaving the key in the directory, and even committing it to the repository. But this is a rare occurrence. Very rarely will I need to git push inside a container.
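
For reference, the kind of SSH_AUTH_SOCK forwarding I was attempting looks roughly like this: mount the host's agent socket into the container and point SSH at it (the paths here are illustrative):

# Mount the host's SSH agent socket into the container and tell SSH where to find it
docker run -d \
  -v "$SSH_AUTH_SOCK":/ssh-agent \
  -e SSH_AUTH_SOCK=/ssh-agent \
  <container-name>

On paper that lets Git inside the container use the host's keys without ever copying them in; in practice it just wouldn't work for me.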

With that said, I created a build script that copies the private key into the current directory; the key is then ADDed by the Dockerfile and deleted afterwards. We have to do this because Docker cannot ADD a file from outside the build context. I tried to do it with symlinks, but it kept failing.

#!/bin/bash

cp /home/<username>/.ssh/id_rsa ./
docker build --no-cache -t <container-name> .
rm ./id_rsa

With that in place, the Dockerfile looks like this:

FROM node:12-alpine
RUN apk update && apk add --no-cache git openssh-client
WORKDIR /app
RUN git config --global user.email "<email address>"
RUN git config --global user.name "<user name>"
RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
ADD ./id_rsa /root/.ssh/id_rsa
RUN git clone git@bitbucket.org:<workspace>/<repository>.git /app
RUN echo "StrictHostKeyChecking no " > /root/.ssh/config
RUN npm i
CMD npm start
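
Building and running it then looks something like this (I'm calling the build script build.sh, and the port mapping is an assumption; use whatever port your Jingo config listens on):

# Build via the wrapper script so the key is copied in and cleaned up afterwards
./build.sh

# Run it; adjust the port mapping to match your Jingo configuration
docker run -d --name jingo -p 6419:6419 <container-name>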

I'm sure there must be a better way to do this, and if you feel inclined please get in touch and point me in the right direction. It's a rare need to have automated git push inside a container; in most cases the ability to clone during build is sufficient, in which case the first approach described above should work.

Port Forwarding With IPtables for Wireguard

Setting up a WireGuard VPN on Ubuntu 20.04 was pretty easy; I followed this tutorial: How to setup your own VPN server using WireGuard on Ubuntu

The problems arose when I needed to forward port 27256 on the server to the VPN client. It took me most of a Sunday to figure out.

First, set up forwarding for the different connection states (NEW, ESTABLISHED, and RELATED) between the interfaces (eth0 and wg0):

iptables -A FORWARD -i eth0 -o wg0 -p tcp --syn --dport 27256 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -i eth0 -o wg0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i wg0 -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

Second, DNAT the port to the client's VPN IP address (10.10.0.2) and SNAT it to the server's VPN IP address (10.10.0.1), so that replies go back through the tunnel:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 27256 -j DNAT --to-destination 10.10.0.2
iptables -t nat -A POSTROUTING -o wg0 -p tcp --dport 27256 -d 10.10.0.2 -j SNAT --to-source 10.10.0.1
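
For completeness: none of this forwards anything unless the kernel has IPv4 forwarding enabled (the WireGuard tutorial covers that), and the rules won't survive a reboot unless you save them:

# Enable IPv4 forwarding (persist it in /etc/sysctl.conf or a drop-in under /etc/sysctl.d)
sudo sysctl -w net.ipv4.ip_forward=1

# Save the iptables rules so they are restored at boot
sudo apt install iptables-persistent
sudo netfilter-persistent save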

New Raspberry Pi Server

When I upgrade my desktop PC I traditionally retire my old server and use the replaced PC in its place, taking the opportunity for a clean install of Ubuntu Server. I recently realised that the machine currently serving that role is overkill for what I need: a large desktop machine with 6GB RAM and a big PSU is unnecessary.

I switched to a Raspberry Pi 4 with 4GB RAM, got a little cooling fan for it and set up a simple Ubuntu server. The Pi4 has gigabit ethernet and USB3 support. For files I purchased a 4TB WD Passport drive that can be powered from the Pi's USB3 port. I only have about 2TB of data right now, so room to grow.

Backing this up to two 2TB external drives became tiresome and wasn't happening as often as it should. I purchased a second 4TB Passport drive, planning to rsync nightly rather than use software RAID. Unfortunately, there just wasn't enough power from the Pi4 to support two drives.

I realised this was a blessing in disguise. Installing Ubuntu Server on a Raspberry Pi 3 and plugging the second 4TB drive into that would give me the nightly rsync backup I need. The 100Mbit NIC and USB2 on the Pi3 would be slow in comparison, but more than enough for a nightly backup run. Not to mention the peace of mind of knowing that if the Pi4 went up in smoke and took all my data with it, I'd still have a working backup.
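
The nightly job itself is nothing fancy; something along these lines, with the hostname and paths as placeholders rather than my actual setup:

# /etc/cron.d/nightly-backup on the Pi4: mirror the data drive to the Pi3 at 03:00
# (assumes passwordless SSH from the Pi4 to the Pi3 is already set up)
0 3 * * * root rsync -a --delete /mnt/data/ <pi3-hostname>:/mnt/backup/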

This also meant I could spread the resource load. I moved the PiHole over to the Pi3, as it's only DNS traffic. I'm also using it to serve a few old, separately-powered external drives over the network. And I can use the Pi3 to try things out and easily rebuild if I screw up, without taking my main fileserver offline.

I had a spare five-port gigabit network switch that I dedicated to this little server array, keeping it all in a plastic box.

While technically this met my needs of emulating my old server's abilities, as well as cutting power consumption, it was inelegant. Three power-bricks, and two long USB cables to bridge a gap mere inches in length.

I checked the power-consumption of the switch, which turned out to be 5v 0.7a. The Pi4 needs a good 3a, and the Pi3 2.5a, both 5v. That's about 6.2 amps for the whole thing.

I hopped on one of those Chinese online stores and ordered a 12v 8a power brick, a short micro-USB cable to power the Pi3, and a short USB-C cable to power the Pi4.

To split the 12v and get it to three 5v devices I wired a connector block in parallel to three 12v-5v buck converters.

With two 30cm cat5e cables I now have a single 12v power brick and a single network cable running the whole thing. I also have two spare network ports, and plenty of headroom on the power brick: the 5v devices draw roughly 31W between them (before converter losses), while the 12v 8a brick can supply up to 96W.

Maybe I'll put a Raspberry Pi Zero (or similar SBC), or maybe an ESP32 in there and set up some LoRa.

In hindsight, I could've used a 5v power supply (with enough current), but I hadn't finalised the setup and wasn't sure everything I might want to power would be 5v. As it stands, I'm thinking of hooking up an old 12v case fan to move some air around everything.