Updating netplan network config on Ubuntu after hardware change
Recently my server bit the dust. Power-on gave me five beeps, which meant there was something wrong with the CMOS battery. I changed that, but no luck; it was obvious a deeper problem existed. The machine itself was a 2010 Dell desktop that had been my primary machine for some years before I replaced it and moved it into a server role. It managed thirteen years of loyal service. But maybe now was the right time to replace a dual-core system with only 6GB of RAM with something more up to date.
I wanted to keep the case, the PSU, the graphics card, and the SATA expansion board. I bought a CPU/motherboard/RAM bundle from AWD-IT. The service was fast, the price was right. I plugged everything in, but no power. My old PSU was a 300W model that seemed ideal for a low-power server. I tried everything I could think of, but still nothing. So I ordered a new 500W PSU, plugged it in, and it all fired up. I put everything else back in the case, tidied up all the cables and plugged in the ethernet cable.
The NIC lights on the back of the box flashed and the machine booted, but there was no network connection. Figuring it was missing drivers, I went down a search-engine rabbit hole only to learn that network drivers live in the kernel. I went looking at the network configuration, but it seems things had moved on from the old /etc/network/interfaces file. It turns out Ubuntu now uses something called netplan.
I looked at my netplan configuration, /etc/netplan/00-network.yaml:
network:
  ethernets:
    enp4s0:
      dhcp4: no
      addresses:
        - 192.168.1.1/24
      gateway4: 192.168.1.254
      nameservers:
        addresses: [8.8.8.8, 1.1.1.1]
  version: 2
Looked fine to me. So I took a peek at my IP configuration:
> ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp37s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 01:92:03:04:05:06 brd ff:ff:ff:ff:ff:ff
...
Because of the hardware change, enp4s0 no longer existed and had been replaced with enp37s0.
I updated my /etc/netplan/00-network.yaml:
network:
  ethernets:
    enp37s0:
      dhcp4: no
      addresses:
        - 192.168.1.1/24
      gateway4: 192.168.1.254
      nameservers:
        addresses: [8.8.8.8, 1.1.1.1]
  version: 2
Then I ran sudo netplan generate and sudo netplan apply, and all was good.
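For anyone repeating this, the apply-and-verify steps looked roughly like the sketch below; the interface name and addresses are the ones from my setup, so swap in your own. netplan try is also worth knowing about, since it rolls the change back automatically if you don't confirm it.
sudo netplan generate            # render the YAML for the backend
sudo netplan apply               # bring the new config up (or use: sudo netplan try)
ip addr show enp37s0             # confirm the static address is assigned
ip route | grep default          # confirm the gateway is 192.168.1.254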
I moved to eleventy (11ty)
It's been two years and one month since I last posted on my website. Largely this is because the site ran on Hugo. The build system involved having the Hugo binaries installed and configured, plus the AWS CLI installed and configured to talk to Backblaze B2. The way I had the markdown files set up meant things got messy. Whenever I had an idea to post something, I just couldn't be bothered to go through the process.
A couple of weeks ago I saw a tweet by Lea Verou:
I’m so fed up with Wordpress and generally maintaining a server and a SQL db. My blog is my last remaining non-jamstack website. I want to convert it to use @eleven_ty but dreading the amount of work (esp. if I want to preserve URLs). Are there any tools to make it less painful?
— Lea Verou (@LeaVerou) April 29, 2023
There's no end of CMS options and build systems around, but an endorsement from someone as eminent as Lea piqued my interest, so I dove into eleventy. After a couple of days playing with it (and as an almost full-time NodeJS developer these days), I was much happier with it than with Hugo (which is still great btw, just not for me).
We'll see how it goes.
SSH in Docker Alpine
I have a Jingo wiki; it's where I keep notes on all sorts of things. Mostly code and configuration snippets for things I had to learn the hard way and don't want to have to learn again. There's also stuff in there about photography, and other interests I have.
Jingo works by storing everything in markdown files and automatically keeping things backed up using a Git remote repository. Because my wiki is private, I have a private repository. Access to this is controlled using SSH keys.
Jingo is built in NodeJS and requires the NodeJS runtime. Traditionally I have maintained two remote VPS instances, one with Node installed and one with PHP installed, for running various personal webapps. But I decided against this and chose to consolidate to one server, running each environment in Docker containers. This was prompted by some annoying errors I was getting when running Jingo under NodeJS v14.x. I wanted to run it under v12, but still have other apps run under v14.
Ordinarily that's not a problem, but because Jingo requires a secure connection with Git, I needed to get SSH key access inside the container. Before doing this, ensure you've created your SSH keys on the server.
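If you haven't created keys yet, generating a pair looks something like the following (a sketch; the comment is just an example, and I'm using RSA here only because the rest of this post refers to id_rsa):
ssh-keygen -t rsa -b 4096 -C "wiki server" -f ~/.ssh/id_rsa
# then add the contents of ~/.ssh/id_rsa.pub to your Git host as an access key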
First approach
My first approach was to use SSH agent, and Docker's ability to forward this using a mount type of ssh. Here's the short version:
# syntax=docker/dockerfile:1.0.0-experimental
FROM node:12-alpine
RUN apk update && apk add --no-cache git openssh-client
RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
RUN --mount=type=ssh,id=id_rsa git clone git@bitbucket.org:<workspace>/<my-repo>.git /app
WORKDIR /app
RUN npm i
CMD npm start
The first line above tells Docker that we'll be using some experimental features, in our case the mounting of ssh. For this to work we need to add the following to /etc/docker/daemon.json:
{
  "features": {
    "buildkit": true
  }
}
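After changing daemon.json the Docker daemon needs a restart for the setting to take effect; alternatively, BuildKit can be enabled for a single build via an environment variable (a sketch, assuming a systemd-based host):
sudo systemctl restart docker
# or, without touching daemon.json:
DOCKER_BUILDKIT=1 docker build ...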
Line three installs Git and the OpenSSH client. Line four creates the SSH directory and adds the host key of the Git remote to the known_hosts file inside the container. In my case, BitBucket.
Line five mounts the SSH-Agent and makes it available at build time. An important part to note here is the ID.
Make sure your key is added to the SSH-Agent on your system, then pass the key with the corresponding ID to the build command:
docker build --ssh id_rsa="/home/<username>/.ssh/id_rsa" -t <container-tag> .
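For completeness, getting the key into the agent beforehand is just (assuming the default key location):
eval "$(ssh-agent -s)"                 # start an agent if one isn't already running
ssh-add /home/<username>/.ssh/id_rsa   # add the private key to it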
This works fine, but it doesn't provide the SSH key when running the container.
Second approach
Try as I might, I just could not get SSH_AUTH_SOCK forwarding to work. I found I was forced to copy my SSH key into the container at build time. I know this is less than secure. I know there is a risk of leaving it in the directory, and even committing it to the repository. But this is a rare occurrence: very rarely will I need to git push inside a container.
With that said, I created a build script that copies the private key to the current directory; the Dockerfile ADDs it during the build, and the script then deletes it. We have to do this because Docker cannot ADD a file from outside the build context. I tried to do it with symlinks, but it kept failing.
#!/bin/bash
cp /home/<username>/.ssh/id_rsa ./
docker build --no-cache -t <container-name> .
rm ./id_rsa
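One small mitigation for the accidental-commit risk mentioned above is to ignore the copied key in Git (it can't go in .dockerignore here, since the ADD needs it in the build context):
# .gitignore
id_rsa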
With that in place, the Dockerfile looks like this:
FROM node:12-alpine
RUN apk update && apk add --no-cache git openssh-client
WORKDIR /app
RUN git config --global user.email "<email address>"
RUN git config --global user.name "<user name>"
RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
ADD ./id_rsa /root/.ssh/id_rsa
RUN git clone git@bitbucket.org:<workspace>/<repository>.git /app
RUN echo "StrictHostKeyChecking no " > /root/.ssh/config
RUN npm i
CMD npm start
I'm sure there must be a better way to do this, and if you feel inclined, please get in touch and point me in the right direction. Needing automated git push inside a container is rare for me; in most cases the ability to clone during the build is sufficient, in which case the first approach described above should work.
Port Forwarding With IPtables for Wireguard
Setting up a WireGuard VPN on Ubuntu 20.04 was pretty easy; I followed this tutorial: How to setup your own VPN server using WireGuard on Ubuntu
The problems arose when I needed to forward port 27256 on the server to the VPN client. It took me most of a Sunday to figure out.
First, set up forwarding for the different connection states (NEW, ESTABLISHED, and RELATED) between the interfaces (eth0 and wg0):
iptables -A FORWARD -i eth0 -o wg0 -p tcp --syn --dport 27256 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -i eth0 -o wg0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i wg0 -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
Second, forward the port from the server's VPN IP address (10.10.0.1) to the client's VPN IP address (10.10.0.2):
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 27256 -j DNAT --to-destination 10.10.0.2
iptables -t nat -A POSTROUTING -o wg0 -p tcp --dport 27256 -d 10.10.0.2 -j SNAT --to-source 10.10.0.1
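Two things worth checking if the rules don't take effect, or don't survive a reboot (the linked tutorial may already cover the first): kernel IP forwarding has to be enabled, and iptables rules have to be saved somewhere persistent. A sketch, with the sysctl file name being just an example:
# enable forwarding now and on every boot
sudo sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-wireguard.conf

# persist the iptables rules across reboots
sudo apt install iptables-persistent
sudo netfilter-persistent save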