Update – January 2020

2 Months of Results

Two months ago I launched this blog, and I decided to do it to fuel my motivation. Recently I have been working on more personal tech projects, and I figure the blog will help keep me motivated. If I know others are getting value, it's an extra push for me when I am feeling lazy 🙂

So far it seems to be working. I am surprised at the amount of traffic the blog is getting: almost 2K views in December and over 300 so far this month. The numbers are not earth-shattering, but in my experience they are really good for a new blog.

Raspberry Pi PXE boot tutorial a success

The December traffic is mostly due to a Reddit post I made for my Raspberry Pi PXE boot article. I posted it to the Raspberry Pi sub-reddit on a whim. It felt like a high-quality post, so I figured others might like it. I guess I was right 🙂 It is up to 1K up-votes on Reddit, and I got some Reddit silver as well. I estimate the post took 10 hours to create between researching, writing, editing, testing and double-testing the tutorial. I am happy to see the time spent being enjoyed by others.

Reader engagement

When I first started blogging in the 90s, commenting was a big part of blogging. Authors had discussions with each other via their blogs using pingbacks. Now blogging seems to be more of a one-way communication channel. Even with 2.5K total page views on the blog, the comment count from readers is still below 10.

I am happy that readers are commenting and asking questions. I try to answer them as quickly as I can. If you have any feedback or questions please feel free to comment on a post.

Topic Requests

If you have topic requests for future posts, please drop a comment with your idea. I can’t promise I will do every suggestion, but feedback is a big motivator.

broot – Navigate, Browse and Search Directory Trees

broot – A new CLI tool

I am always amazed when a new useful CLI tool like broot is released. Having used Linux for well over 20 years, I expect everything useful to be built already. But from time to time a new tool comes out and I wonder “How did I live without this?” Some examples of CLI tools that have changed my life:

  • jq – command line JSON parser
  • pv – pipe viewer, a tool for monitoring pipeline progress
  • mosh – mobile shell
  • socat – multipurpose relay
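As a tiny taste of the first one: jq pulls a field straight out of a JSON document on the command line (this one-liner assumes jq is installed; the JSON is made up for illustration).

```shell
# jq in one line: extract the .name field from a JSON object.
echo '{"name":"broot","lang":"rust"}' | jq -r .name
# prints: broot
```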

I expect I will be adding broot to the list. Check out the broot website for more.

What is broot?

broot is a CLI tool written in Rust for exploring and searching directory trees. “I made broot to both replace the classical fuzzy finders and the old tree.” – Denys Séguret, from his Reddit post. So think of it as a combination of the tree and fzf commands.

The simplest use case is to launch it via the br shell function and start typing the name of a directory. It does a fuzzy find, showing all matching files in the tree. From within the tool you can do all sorts of things.

Once launched, you can navigate and drill down into directories. You can edit files or open them with a command configured for the given file type.

Installing broot

The broot website has installation instructions for various platforms. There are binaries for the following platforms:

  • Windows 10+
  • Linux
  • Raspberry Pi (ARM) Linux

Simply download the binary and put it in a directory in your $PATH. I installed it in /usr/local/bin.

If you are on macOS you can use Homebrew or MacPorts. Additionally, if you have a Rust development environment set up, you can install the crate via cargo: cargo install broot.

The first time you run broot it will prompt you to install a shell function called br. This is the intended way to run broot. Choosing “Yes” installs the shell function under .config/broot in your home directory. It supports a few different shells; I run bash. On my system the function was installed at ~/.config/broot/launcher/bash/br. Additionally, my .bashrc was updated to source the function: source /home/ken/.config/broot/launcher/bash/br.

The br function is a wrapper around the broot command:

# This script was automatically generated by the broot program
# More information can be found in
# This function starts broot and executes the command
# it produces, if any.
# It's needed because some shell commands, like `cd`,
# have no useful effect if executed in a subshell.
function br {
    f=$(mktemp)
    (
        set +e
        broot --outcmd "$f" "$@"
        code=$?
        if [ "$code" != 0 ]; then
            rm -f "$f"
            exit "$code"
        fi
    )
    code=$?
    if [ "$code" != 0 ]; then
        return "$code"
    fi
    d=$(<"$f")
    rm -f "$f"
    eval "$d"
}

If you change the configuration and want to restore it to the original state, use the broot --install command. Additionally, you can print the shell function for a given shell using the --print-shell-function flag. For example: broot --print-shell-function bash.


Some of the interesting options are:

  • -d show last modified dates
  • -h show hidden files
  • -f only show directories
  • -p display permissions
  • -s show file and directory size

I like running all of these flags together: br -s -p -d -f.

Running “br -s -p -d -f” to display size, permissions, date and only directories.

You can also manipulate files: copying, removing and editing them. For example, to do the equivalent of rm -rf on a directory, press the space key and then type rm. For editing files, make sure you have your $EDITOR environment variable set.
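For the edit verb to work, $EDITOR needs to be set in your shell. One way is to export it from your ~/.bashrc (the editor choice here is just an example):

```shell
# Tell broot (and many other tools) which editor to use.
export EDITOR=vim
```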

With the -g or --gitignore flag you can control if .gitignore files are honored.

One command within broot that I find useful is the print_path or pp command.

Wish List

  • Use broot like ls. For example, I want to run br -p -d to print a directory tree and immediately exit. Maybe I am missing something, but I have not found a way to do this.
  • Errors from verbs. For example, I get no feedback when the rm verb fails due to “permission denied”.

Prometheus node exporter on Raspberry Pi – How to install

Introduction – Prometheus Node Exporter on Raspberry Pi

What does this tutorial cover?

In this post we are walking through configuring the Prometheus node exporter on a Raspberry Pi. When done, you will be able to export system metrics to Prometheus. Node exporter has over a thousand data points to export. It covers the basics like CPU and memory. But as you will see later, it also goes much further.

I am using Raspbian for this tutorial. However the instructions are generic enough to work on most Linux distributions that use systemd.

I will not cover how to setup Prometheus or complementing tools like Grafana. I am writing additional monitoring focused posts to cover these topics. Stay tuned!

What do you need?


  • Raspberry Pi running Raspbian Linux
  • Existing Prometheus server running on another system. It can be on a Pi or another type of system. Alternatively you could try a hosted Prometheus service.

If you want to learn more about Prometheus, I suggest the Prometheus: Up & Running book from O'Reilly.

About Prometheus and the Node Exporter

Prometheus Logo

Prometheus is an open source metrics database and monitoring system. In a typical architecture the Prometheus server queries targets. This is called “scraping”. Scraping targets are HTTP endpoints on the systems being monitored. Targets publish metrics in the Prometheus metrics format.

Prometheus stores the data collected from endpoints. You can query the Prometheus data store for monitoring and visualization.

Many systems or stacks do not expose Prometheus-formatted metrics out of the box. For example, a Raspberry Pi running Raspbian does not have a Prometheus metrics endpoint. This is where the node exporter comes in. The node exporter is an agent. It exposes your host's metrics in the format Prometheus expects.
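For reference, the Prometheus exposition format is plain text: a HELP line, a TYPE line, then metric samples. A minimal illustrative sample (the value is made up):

```shell
# Print a sample of the Prometheus exposition format.
cat <<'EOF'
# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.21
EOF
```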

Prometheus scrapes the node exporter and stores the data in its time series database. The data can now be queried directly in Prometheus via the API, UI or other monitoring tools like Grafana.

Raspberry Pi temperature graph in Prometheus. Data from Node Exporter.

Node Exporter Setup on Raspberry Pi running Raspbian

Ok, let's dive into the actual setup of the node exporter. You might notice that I am not installing the node exporter via a package management tool like “apt”.

This is intentional. The node exporter is updated frequently. As a result, packages in a distribution's repo often lag behind the latest release. Therefore I prefer to install the latest release from the node exporter Github page.

Step 1 – Download Node Exporter to Your Pi

In this step we are simply downloading a release of the node exporter. Releases are published on the project's releases page on Github. The node exporter release binaries are architecture specific. For a Raspberry Pi 2, 3 or 4 running Raspbian, download the ArmV7 build. If you are on an original Raspberry Pi or a Pi Zero you will need the ArmV6 build.

Use the ARMv6 and ARMv7 builds for your respective platform.
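If you are unsure which build you need, `uname -m` reports the machine architecture. A small helper to map it to a release name (the mapping is my assumption; double-check against the releases page):

```shell
pick_arch() {
  # Map `uname -m` output to a node_exporter release architecture.
  case "$1" in
    armv6l)  echo armv6 ;;
    armv7l)  echo armv7 ;;
    aarch64) echo arm64 ;;
    x86_64)  echo amd64 ;;
    *)       echo "unknown ($1): check the releases page" ;;
  esac
}
pick_arch "$(uname -m)"
```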

Log into your Raspberry Pi and run the following wget command to download node exporter for the ArmV7 architecture.


Now un-tar the release using this command.

tar -xvzf node_exporter-0.18.1.linux-armv7.tar.gz

This will un-tar the files into a sub-directory named node_exporter-0.18.1.linux-armv7.

Step 2 – Install node_exporter binary and create required directories

The only file we need out of the expanded tarball is the node_exporter binary. Copy that file to /usr/local/bin.

sudo cp node_exporter-0.18.1.linux-armv7/node_exporter /usr/local/bin

Use the chmod command to make the node_exporter binary executable.

sudo chmod +x /usr/local/bin/node_exporter

Create a service account for the node_exporter.

sudo useradd -m -s /bin/bash node_exporter

Make a directory in /var/lib/ that will be used by the node_exporter, and change its ownership to the service account we just created.

sudo mkdir /var/lib/node_exporter
sudo chown -R node_exporter:node_exporter /var/lib/node_exporter

You have completed the node_exporter binary installation and setup of required directories!

Step 3 – Setup systemd unit file

Next step, setting up the unit file. The unit file will allow us to control the service via the systemctl command. Additionally it will ensure node_exporter starts on boot.

Create a file called node_exporter.service in the /etc/systemd/system directory. The full path to the file should be:

/etc/systemd/system/node_exporter.service
Put the following contents into the file:

[Unit]
Description=Node Exporter

[Service]
User=node_exporter
# Provide a text file location for data with the
# --collector.textfile.directory parameter.
ExecStart=/usr/local/bin/node_exporter --collector.textfile.directory /var/lib/node_exporter/textfile_collector

[Install]
WantedBy=multi-user.target


I also have the unit file posted on this github gist.

Now let's reload systemd, then enable and start the service.

sudo systemctl daemon-reload 
sudo systemctl enable node_exporter.service
sudo systemctl start node_exporter.service

Congratulations the node_exporter service should be running now. You can use the systemctl status node_exporter command to verify.

The output should look like this:

sudo systemctl status node_exporter.service
● node_exporter.service - Node Exporter
   Loaded: loaded (/etc/systemd/system/node_exporter.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2020-01-16 18:53:28 GMT; 21s ago
 Main PID: 1740 (node_exporter)
    Tasks: 5 (limit: 4915)
   Memory: 1.2M
   CGroup: /system.slice/node_exporter.service
           └─1740 /usr/local/bin/node_exporter /var/lib/node_exporter/textfile_collector
Jan 16 18:53:28 raspberrypi node_exporter[1740]: time="2020-01-16T18:53:28Z" level=info msg=" - sockstat" source="node_exporter.go:104"
Jan 16 18:53:28 raspberrypi node_exporter[1740]: time="2020-01-16T18:53:28Z" level=info msg=" - stat" source="node_exporter.go:104"
Jan 16 18:53:28 raspberrypi node_exporter[1740]: time="2020-01-16T18:53:28Z" level=info msg=" - textfile" source="node_exporter.go:104"
Jan 16 18:53:28 raspberrypi node_exporter[1740]: time="2020-01-16T18:53:28Z" level=info msg=" - time" source="node_exporter.go:104"
Jan 16 18:53:28 raspberrypi node_exporter[1740]: time="2020-01-16T18:53:28Z" level=info msg=" - timex" source="node_exporter.go:104"
Jan 16 18:53:28 raspberrypi node_exporter[1740]: time="2020-01-16T18:53:28Z" level=info msg=" - uname" source="node_exporter.go:104"
Jan 16 18:53:28 raspberrypi node_exporter[1740]: time="2020-01-16T18:53:28Z" level=info msg=" - vmstat" source="node_exporter.go:104"
Jan 16 18:53:28 raspberrypi node_exporter[1740]: time="2020-01-16T18:53:28Z" level=info msg=" - xfs" source="node_exporter.go:104"
Jan 16 18:53:28 raspberrypi node_exporter[1740]: time="2020-01-16T18:53:28Z" level=info msg=" - zfs" source="node_exporter.go:104"
Jan 16 18:53:28 raspberrypi node_exporter[1740]: time="2020-01-16T18:53:28Z" level=info msg="Listening on :9100" source="node_exporter.go:1

“Listening on :9100” is the key piece of information. It tells us that the node_exporter web server is up on port 9100. Try using wget or curl to query the node_exporter.

curl http://localhost:9100/metrics

The output should look similar to this. Additionally I have an example of my Raspberry Pi’s output on this github gist.
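The full output is long; you can filter it down to one metric family with grep (this assumes the exporter is up on port 9100; the node_memory_ prefix covers the memory statistics):

```shell
# Show only the memory metrics from the endpoint.
curl -s http://localhost:9100/metrics | grep '^node_memory_'
```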

Output of curling the metrics endpoint.

Next steps

Now you should add the metrics endpoint as a target on your Prometheus server. You can do this by editing the prometheus.yml configuration file on your Prometheus server. For reference, mine looks like this.

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    - targets: ['localhost:9090']

  - job_name: 'pi1'
    scrape_interval: 5s
    static_configs:
    - targets: ['']

This is the standard prometheus config file. However I added the following target at the end for my Raspberry Pi:

  - job_name: 'pi1'
    scrape_interval: 5s
    static_configs:
    - targets: ['']

The IP address of my Raspberry Pi goes in the targets list, and port 9100 is the port used by the node_exporter.

I restarted my Prometheus server and the target is now displayed in Prometheus.

The Prometheus UI displaying configured scraping target.

If you are using Prometheus, you are likely using Grafana as well. Grafana has an excellent node_exporter dashboard template available.

Which Tutorial Should I do Next?

Clearly Node Exporter is just one piece of the puzzle. Given that, I will be writing some tutorials on Prometheus, Grafana and other monitoring subjects. Vote for which I should do next by leaving a comment. Thanks!


Node Exporter is a powerful tool for getting metrics out of your Raspberry Pi. However there are some downsides. The main downside is the requirement of a Prometheus server. I use Prometheus already so it was a no-brainer for me.

Prometheus can be set up to run on a Raspberry Pi. However, I typically advise against it. Prometheus does a big job, and the Pi is not well suited for any but the smallest Prometheus workloads.

Build a Raspberry Pi image with Packer – packer-builder-arm

Introduction: Building a Raspberry Pi image with Packer

Today we are test driving packer-builder-arm, a tool that enables you to build a Raspberry Pi image with Packer (in addition to images for other ARM platforms). Packer-builder-arm is a plugin for Packer. It extends Packer to support ARM platforms including the Raspberry Pi. Packer is a tool from HashiCorp for automating OS image builds. Additionally, packer-builder-arm enables you to build these images on your local machine, a cloud server or other x86 hardware. This means you don't need your Raspberry Pi handy to build a Raspbian ARM image. To do this it leverages the ARM emulation available in QEMU. Specifically, it copies a statically built QEMU ARM emulator into the image, which allows us to run files compiled for ARM inside a chroot on an x86 system.

packer-builder-arm github screenshot of boards directory
The boards directory of the packer-builder-arm project shows some of the boards you can build images for.

Why use Packer?

Why build your Raspberry Pi image with Packer? Projects with embedded devices such as a Raspberry Pi often need tweaks to the OS installation. For example you might change config files or install some packages. Manually running these commands for one device is no big deal. However repeating this process is time consuming and error prone. It does not scale well to many devices.

A common solution is to customize your OS install and then clone the SD card or storage device. This works well enough. However each time you want to tweak your configuration you still need to manually re-run the process. Iteration is manual.

This is where Packer comes in. Packer enables you to codify your OS configuration and customization. Packer builds your image for you applying your customization. You can store your Packer files in git and now you have a repeatable process. If you need to add a new package, just add it to your packer build files and re-run packer. Presto! You have a new image. You can take this even further by setting up CI/CD pipelines to run your builds automatically.

How does Packer work?

Packer Overview

Packer automates a simple but powerful concept. Start with base installation media or a base image of operating system. Boot or chroot into the image and run commands or scripts to customize it. Then capture the output in an image artifact. You can now take that image and re-use it on multiple machines.

Packer is commonly used in large scale cloud environments. In these environments you often have thousands of machines. OS image builds need to be automated and tested in an environments of this scale. The only reasonable way to achieve this is with the automation that tools like Packer provide.

Packer Plugins

Packer supports plugins for extending its functionality. Packer’s plugin architecture is quite simple. Packer plugins are just stand alone binaries or scripts that Packer executes.

Two examples of plugin types for Packer are builders and provisioners. Builders are focused on setting up the infrastructure required to build the image. Provisioners handle the changes you make at the OS level, such as installing packages. Read the Packer plugins page for more details.

Today we are going to explore a builder plugin called packer-builder-arm.

What is packer-builder-arm?

Packer-builder-arm extends Packer to build ARM based images. Specifically it does the following:

  • Fetch a base ARM os image such as Raspbian or Arch Linux.
  • Run the commands you specify in a chroot environment with ARM emulation.
  • Save the customized chroot environment to a new image.

It achieves this by copying the QEMU static ARM emulator into the chroot environment. This allows ARM-compiled binaries in the chroot to execute as if they were running on an ARM machine. It is worth noting that this can be fairly slow compared to running directly on an ARM CPU. However, it is really handy because you don't need to run on ARM hardware.

Building and installing packer-builder-arm

You will need the following:


  • A modern linux system. We are using Ubuntu 18.04.
  • A recent version of Go. If you need help installing Go see our tutorial: How to install Go on Linux.
  • Access to the internet to install dependencies.

Step 1 – Install dependencies

Install Go. We are testing this with Go 1.13. See our tutorial: How to install Go on Linux if you need more help.

Now that we have Go installed, we install the following packages:

  • git
  • unzip
  • qemu-user-static
  • e2fsprogs
  • dosfstools
  • bsdtar

We are using Ubuntu 18.04, hence we will use apt to install our dependencies. If you are using a different distribution, you will need to adapt these commands for your platform. In our case we run:

sudo apt-get install git
sudo apt-get install unzip
sudo apt-get install qemu-user-static
sudo apt-get install e2fsprogs
sudo apt-get install dosfstools
sudo apt-get install bsdtar

Step 2 – Install Packer

We chose to install Packer directly from the Packer website instead of using the packages available in the Ubuntu package repositories, and we have a reason for this: Packer moves quickly, so the version in the Ubuntu repos is far behind the latest release. Given the rate at which cloud native tools change, we want the latest code base. The best way to do that is to get Packer directly from HashiCorp.

To install Packer we will first download the zipped binary from the Packer website using the wget command line HTTP client.


Version 1.4.5 is the latest as of this writing. However, I suggest you check the website to see if there is a newer Packer release available.

Now that we have the Packer zip file we need to unzip it:


You should now have a packer binary in your current working directory. Let's move it to a more permanent location by putting it in your /usr/local/bin directory:

sudo mv packer /usr/local/bin/
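A quick way to confirm that /usr/local/bin is actually on your PATH (a small sketch; the case pattern just looks for the directory between colons):

```shell
# Print OK if /usr/local/bin appears in PATH.
case ":$PATH:" in
  *:/usr/local/bin:*) echo "OK: /usr/local/bin is on PATH" ;;
  *) echo "Add /usr/local/bin to PATH in your shell rc file" ;;
esac
```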

More than likely you already have /usr/local/bin in your PATH environment variable. If you do then you should be able to run packer --help and see a similar output to what I have below.

packer --help
Usage: packer [--version] [--help] <command> [<args>]

Available commands are:
    build       build image(s) from template
    console     creates a console for testing variable interpolation
    fix         fixes templates from old versions of packer
    inspect     see components of a template
    validate    check that a template is valid
    version     Prints the Packer version

Congratulations! You have Packer installed. Time to move onto packer-builder-arm.

Step 3 – Install packer-builder-arm

In this step we are going to get the latest code for the packer-builder-arm plugin from Github. Next we will build it and then finally install it.

To clone the source code from github, use the following git command:

git clone

After cloning the source code we will need to change directories into the working directory, fetch go modules and build the go source code.

cd packer-builder-arm
go mod download
go build

After the go build command you should have a packer-builder-arm file in your current directory.

ubuntu@packer-test:~/packer-builder-arm$ ls -l
total 32484
-rw-rw-r--  1 ubuntu ubuntu    11357 Dec 13 22:04 LICENSE
-rw-rw-r--  1 ubuntu ubuntu     5097 Dec 13 22:04
drwxrwxr-x 11 ubuntu ubuntu     4096 Dec 13 22:04 boards
drwxrwxr-x  2 ubuntu ubuntu     4096 Dec 13 22:04 builder
drwxrwxr-x  2 ubuntu ubuntu     4096 Dec 13 22:04 config
-rw-rw-r--  1 ubuntu ubuntu      818 Dec 13 22:04 go.mod
-rw-rw-r--  1 ubuntu ubuntu    45835 Dec 13 22:04 go.sum
-rw-rw-r--  1 ubuntu ubuntu      268 Dec 13 22:04 main.go
-rwxrwxr-x  1 ubuntu ubuntu 33173413 Dec 13 22:23 packer-builder-arm

At this point we have a couple of options:

  • Run packer from within this directory. Packer will check the current directory for plugins.
  • Move the packer-builder-arm binary to /usr/local/bin/. Packer will also look for plugins in the same directory as the packer binary.
  • Move the packer-builder-arm binary to $HOME/.packer.d/plugins.

We are going to go with the first one. If you intend to use this as a permanent setup, investigate the other two.

Ok. We are almost there. Time to move onto actually building the image!

Step 4 – Build the Raspbian image

This is where the power of Packer really shines. We are going to use one of the existing configurations in packer-builder-arm to build a Raspbian image. But before we do that, we need to fix a small bug that I came across in the raspbian.json file.

Fix a bug in raspbian.json

Edit the file packer-builder-arm/boards/raspberry-pi/raspbian.json.

We need to update the value for image_chroot_env. Specifically we change the following line:

"image_chroot_env": ["PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/sbin"],

to include the /usr/sbin directory. The line should now look like this:

"image_chroot_env": ["PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin"],
The issue I ran into is that packer was unable to find the chroot command on my system. chroot is in /usr/sbin, so we add that directory to the PATH variable being passed via image_chroot_env.
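You can verify where chroot lives on your own build host (on Ubuntu 18.04 it is /usr/sbin/chroot, which is exactly why it was missing from the original chroot PATH):

```shell
# Locate the chroot binary on the build host.
ls -l /usr/sbin/chroot
```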

Understand the contents of raspbian.json

From within the packer-builder-arm working directory, we will run the following packer command with sudo after looking at the raspbian.json file:

sudo packer build boards/raspberry-pi/raspbian.json

Before running the build, let's figure out what raspbian.json is doing:

{
  "variables": {},
  "builders": [{
    "type": "arm",
    "file_urls" : [""],
    "file_checksum_url": "",
    "file_checksum_type": "sha256",
    "file_target_extension": "zip",
    "image_build_method": "reuse",
    "image_path": "raspberry-pi.img",
    "image_size": "2G",
    "image_type": "dos",
    "image_partitions": [
      {
        "name": "boot",
        "type": "c",
        "start_sector": "8192",
        "filesystem": "vfat",
        "size": "256M",
        "mountpoint": "/boot"
      },
      {
        "name": "root",
        "type": "83",
        "start_sector": "532480",
        "filesystem": "ext4",
        "size": "0",
        "mountpoint": "/"
      }
    ],
    "image_chroot_env": ["PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin"],
    "qemu_binary_source_path": "/usr/bin/qemu-arm-static",
    "qemu_binary_destination_path": "/usr/bin/qemu-arm-static"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "touch /tmp/test"
    ]
  }]
}
In the raspbian.json file we have two high-level sections, builders and provisioners. The builder in this file has the following keys or parameters:

  • "type": "arm" – The builder type we are using.
  • "file_urls": ... – The url of the image we are using as a base.
  • "file_checksum_url"... – The url of the image’s checksum
  • "file_checksum_type": "sha256" – The hashing algorithm used to generate the checksum.
  • "file_target_extension": "zip" – The extension of the image file.
  • "image_build_method": "reuse" – This tells the plugin if we want to reuse the disk image or create a new one from scratch.
  • "image_path": "raspberry-pi.img" – The name of the image we will create.
  • "image_size": "2G" – The size of the image
  • "image_type": "dos"
  • "image_partitions":... – Contains the specifications of the partitions in the image.
  • "image_chroot_env"... – Shell environment that is passed to the chroot command.
  • "qemu_binary_source_path" & "qemu_binary_destination_path" – The source where we will find the qemu static binary and the destination path inside the chroot where we will copy it.

Additionally there is the provisioners section, which is much shorter:

  • "type": "shell" – This tells packer we are using the shell provisioner to configure the image.
  • "inline"... – Inline specifies an array of commands that get passed to the shell provisioner. In this case we pass one command “touch /tmp/test“. You can add additional commands here. See the shell provisioner doc for other options.

Build your Raspberry Pi image with Packer

Ok. Almost there! Let’s run the build now:

sudo packer build boards/raspberry-pi/raspbian.json

Here is the output from my packer run:

ubuntu@packer-test:~/packer-builder-arm$ sudo packer build boards/raspberry-pi/raspbian.json
arm output will be in this color.

==> arm: Retrieving rootfs_archive
==> arm: Trying
==> arm: Trying
==> arm: => /home/ubuntu/packer-builder-arm/packer_cache/
    arm: unpacking /home/ubuntu/packer-builder-arm/packer_cache/ to raspberry-pi.img
    arm: searching for empty loop device (to map raspberry-pi.img)
    arm: mapping image raspberry-pi.img to /dev/loop2
    arm: mounting /dev/loop2p2 to /tmp/495303497
    arm: mounting /dev/loop2p1 to /tmp/495303497/boot
    arm: running extra setup
    arm: mounting /dev with: [mount --bind /dev /tmp/495303497/dev]
    arm: mounting /devpts with: [mount -t devpts /devpts /tmp/495303497/dev/pts]
    arm: mounting proc with: [mount -t proc proc /tmp/495303497/proc]
    arm: mounting binfmt_misc with: [mount -t binfmt_misc binfmt_misc /tmp/495303497/proc/sys/fs/binfmt_misc]
    arm: mounting sysfs with: [mount -t sysfs sysfs /tmp/495303497/sys]
    arm: binfmt setup found at: /proc/sys/fs/binfmt_misc/qemu-arm
    arm: copying qemu binary from /usr/bin/qemu-arm-static to: /tmp/495303497/usr/bin/qemu-arm-static
    arm: running the provision hook
==> arm: Provisioning with shell script: /tmp/packer-shell936190241
==> arm: ERROR: object '/usr/lib/arm-linux-gnueabihf/libarmmem-${PLATFORM}.so' from /etc/ cannot be preloaded (cannot open shared object file): ignored.
==> arm: ERROR: object '/usr/lib/arm-linux-gnueabihf/libarmmem-${PLATFORM}.so' from /etc/ cannot be preloaded (cannot open shared object file): ignored.
==> arm: ERROR: object '/usr/lib/arm-linux-gnueabihf/libarmmem-${PLATFORM}.so' from /etc/ cannot be preloaded (cannot open shared object file): ignored.
==> arm: ERROR: object '/usr/lib/arm-linux-gnueabihf/libarmmem-${PLATFORM}.so' from /etc/ cannot be preloaded (cannot open shared object file): ignored.
Build 'arm' finished.

==> Builds finished. The artifacts of successful builds are:
--> arm: raspberry-pi.img

As you can see some errors were reported, specifically:

==> arm: ERROR: object '/usr/lib/arm-linux-gnueabihf/libarmmem-${PLATFORM}.so' from /etc/ cannot be preloaded (cannot open shared object file): ignored.

You can safely ignore this error if you received it. It is due to the $PLATFORM variable not being set in the environment. This does not impact the image build.

I now have a Raspbian image for my Raspberry Pi. Let's check the size using the du command:

ubuntu@packer-test:~/packer-builder-arm$ du -hs raspberry-pi.img 
3.6G    raspberry-pi.img

You can now dd this image to an SD card and boot it on your Raspberry Pi.


To build a Raspberry Pi image with Packer and packer-builder-arm takes a bit of upfront work. However, once working it is a very powerful tool. As a next step I suggest adding your customizations to the shell provisioner section to build an image that meets your needs. I plan on using this to automate the image build for my Raspberry Pi PXE boot tutorial.

Learn more about Packer

If you are interested in learning more about Packer we suggest you check out James Turnbull’s book on Packer.

How to install Go on Linux using official binary releases

Installing Go on Linux

In this tutorial we walk through how to install the Go programming language on Linux using the official binary distribution. Additionally we talk about why we prefer this method over other methods for installing Go.

GoLang Language - gophers and people on computers

Why install go on Linux via this method?

Linux distributions have their own package management systems. For example apt on Debian variants and yum on Fedora variants. Package managers are powerful tools enabling you to easily install, track and upgrade software you run on your system.

However, for faster-moving software and tools the packages typically lag behind what is current. For example, the current version of Go is 1.13 as of this writing. If I install Go via apt on Ubuntu 18.04 I will get version 1.10. I can use the --dry-run parameter with apt-get install to see what apt will install.

apt-get --dry-run install golang

If I look through the output I see that 1.10 will be installed.

Inst golang-1.10 (1.10.4-2ubuntu1~18.04.1 Ubuntu:18.04/bionic-updates [all])

1.10 is too old for my needs. When I need Go on a system I prefer to install the official binaries released by the Go project. The result is that I can get whatever version I want. Additionally, I am using the builds that are supported and tested by the Go community.

Alternative methods to install Go on Linux?

Installing the binary release is not the only way to get the latest Go. For example, there are non-official distributions and packages built by users in the community. You can install a later version of Go on Ubuntu via these methods:

When considering these options remember that the Go project does not support or test these other distributions. Additionally, you are trusting the entities that maintain these builds. If the official Go binaries were ever compromised, I am confident the Go community would catch it and notify users quickly, because so many people use the official distribution. I also know I can always get the latest version without hoping a package maintainer keeps their builds up to date. Hence I choose official Go binaries and do not trust the other options for production use. I think it is great that they package up Go in easy-to-consume packages, but it does not meet my requirements.

The Go Github repo has an Ubuntu focused wiki page with more details about these options. Check it out if you are interested in investigating them further.

Install the official Go binary release

Step 1 Download

The first thing we need to do is download the latest Go binary release from the Go download site.

I will download version 1.13.5 for Linux. In this case I am getting the x86 64-bit version. I use the following wget command to fetch the archive file.


Step 2 Extract

Next I need to extract the archive file. I will do this using a tar command which decompresses the file and untars it at the same time.

 tar -xvzf go1.13.5.linux-amd64.tar.gz 

The flags to the tar command in this case mean the following:

  • x – extract. The command to extract files.
  • v – verbose. This gives us detailed output about what the command is doing.
  • z – decompress using gzip. This tells the tar command to decompress the file using gzip.
  • f – file. This tells tar that we are giving it a file path parameter, versus, for example, reading from stdin.

When you run this command you get a bunch of output displaying the files being extracted from the archive. Now I should have a directory called go.

ubuntu@packer-test:~$ ls -l
total 117264
drwxr-xr-x 10 ubuntu ubuntu      4096 Dec  4 22:53 go
-rw-rw-r--  1 ubuntu ubuntu 120074076 Dec  5 01:25 go1.13.5.linux-amd64.tar.gz

Step 3 move the go binaries

Next we will move this go directory into /usr/local so the Go binaries are located in a system-wide location. We could also move them into our home directory if our intention is to only make Go available to one user. Let's assume we want to put it in a central location. We must use sudo to move the go directory.

sudo mv go /usr/local

Step 4 Update PATH environment Variable

At this point the installation is done. The last remaining step is to update your PATH environment variable to include the go/bin directory. You can do this on the fly by setting the PATH environment variable from the command line. Do it like so:
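The command itself was lost from this copy; a sketch, assuming the /usr/local/go location from Step 3:

```shell
# Append go/bin to the existing PATH for the current shell session only.
export PATH=$PATH:/usr/local/go/bin
```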


However this change will only be good for your current session and will not persist. You should update your .bashrc file to include the following line:
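The exact line was elided from this copy; assuming the same install location, it would be the export from the previous step, appended to your .bashrc:

```shell
# Persist the PATH update so every new shell picks it up.
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc
```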


If you are thinking that line looks like the command we just ran, you are correct! We are simply leveraging .bashrc to set the variable for us, because the .bashrc file is loaded every time we start a new shell. You can immediately load your new .bashrc file like so:

source ~/.bashrc

Step 5 Validate

Now let's confirm we can run go. Do this by running go help.

ubuntu@packer-test:~$ go help
Go is a tool for managing Go source code.


        go <command> [arguments]

The commands are:

        bug         start a bug report
        build       compile packages and dependencies
        clean       remove object files and cached files
        doc         show documentation for package or symbol
        env         print Go environment information
        fix         update packages to use new APIs
        fmt         gofmt (reformat) package sources
        generate    generate Go files by processing source
        get         download and install packages and dependencies
        install     compile and install packages and dependencies
        list        list packages or modules
        mod         module maintenance
        run         compile and run Go program
        test        test packages
        tool        run specified go tool
        version     print Go version
        vet         report likely mistakes in packages


We suggest you consider using the official Go binaries when installing Go on your system. By using the official binary distribution you ensure you can always install the latest version, typically newer than what is available via the operating system's package manager. Additionally, you are not relying on untrusted parties maintaining a PPA or similar archive. If you are interested in installing and managing multiple versions of Go for development purposes, check out GVM, the Go Version Manager.

PXE Boot, What is PXE? How does it work?

PXE Boot – Introduction

What can you expect to learn about PXE from this post?

  • High level overview of PXE boot process.
  • Use cases for PXE boot.
  • Detailed end to end overview of the PXE boot process.
  • Technical details of each stage.

What is PXE?

In this post we are deep diving into PXE boot. PXE stands for Preboot Execution Environment. It is standards-based and can be implemented using open source software or vendor-supported products. PXE is a key part of data center infrastructure because it enables automated provisioning of servers or workstations over a network. An in-depth understanding of the PXE stack benefits anyone working on infrastructure deployment of bare metal servers, embedded devices and IoT devices.

Author's background

I first implemented a PXE boot environment in a production data center 15 years ago. Installing operating systems from CD-ROM was painfully slow and we desired an automated solution. The knowledge I gained from that project increased in value throughout my career. Since then I have worked with PXE in large scale deployments, provisioning thousands and thousands of hosts in data centers across the globe. I am excited to share what I have learned through years of hands-on experience.

Why did I write this guide?

PXE often seems like a dark art. Typically only a handful of people on a team truly know how their environment's PXE boot infrastructure works. Additionally, debugging it is hard, and debugging it remotely is even harder. Therefore, I wrote this guide to help demystify PXE boot by explaining it in a simple, thorough and interesting fashion.

High level overview of PXE boot

PXE Use Case, What problem does it solve?

PXE solves a problem large enterprises face. How do you automate provisioning or installation of operating systems on large quantities of machines?

Operating systems such as Windows or Linux have mechanisms to automate installation. Typically you create a seed file or configuration. The seed file provides answers to the questions asked by the OS installer. In the Linux world, examples of this are Debian preseed files or Red Hat kickstart files. But you still need access to the installation media on CD/DVD-ROM or a USB drive. A human running around touching every server with a USB drive does not scale. It is time-consuming and error prone. Let's imagine a world where a human puts a server in the rack, powers it on and is done. This has many benefits:

  • Installers can be less technical.
  • Reduced time spent per server.
  • Less error prone due to automation.
  • OS installation tools are centralized and easier to update.
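The seed files mentioned above are plain text answer files. As an illustration, a minimal Debian preseed fragment might look like this (the specific values are hypothetical, not from the original article):

```
# Illustrative debian-installer preseed directives (not a complete file).
d-i debian-installer/locale string en_US.UTF-8
d-i passwd/root-password password changeme
d-i partman-auto/method string regular
```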

This is where PXE comes in. PXE is a standards-based approach to solving the problem of getting the OS onto a system without a human putting media (USB, CD/DVD-ROM) into it. It does this by bootstrapping the machine over the network.

In a fully automated environment the human installing the server does the following:

  • Installs server in the rack.
  • Connects power and network.
  • Walks away.

The powered-on server automatically fetches a network boot file (NBF) to boot itself up and provisions an operating system. It is a beautiful thing when it's working properly πŸ™‚

How does it work?

It all starts with the NIC

The start of a PXE workflow is the network interface card (NIC). In a typical PC or laptop the NIC does nothing until the operating system boots and loads the proper driver. However, network booting requires a PXE-enabled NIC. The NIC contains firmware with a tiny network stack. This firmware is capable of connecting to the network and fetching a file to boot, commonly referred to as the network boot file (NBF). The file could be a kernel or it could be a network-enabled boot loader.

The server boots the file downloaded off the network. Typically the boot image kicks off an automated installation of an operating system. Now let's dive into the components that make this process possible.

PXE boot components

A typical PXE environment has the following components.

PXE enabled NICs

Not all NICs are equal. Many consumer-grade network cards do not have PXE capabilities, although that is rapidly changing as advances make it easier to include more features in cheaper devices. PXE-enabled NICs are the de facto standard in data center grade servers. We suggest you double check before you buy. However, I would be surprised if any major server manufacturer ships a NIC without PXE capability these days.

Some PXE-enabled NICs even use open source PXE firmware. iPXE is an open source firmware often installed on data center NICs.

DHCP Server

DHCP stands for Dynamic Host Configuration Protocol. There are two types of actors in DHCP. The DHCP server and the DHCP client.

A DHCP server provides a network configuration to clients. Specifically, DHCP provides an IP network configuration to a client. A DHCP client runs on computers that join the network and need a configuration.

An example of real world DHCP use you are probably familiar with is connecting to your office LAN. Your laptop has no idea what IP addresses are in use on the network it has joined. The DHCP client on your laptop sends a broadcast to the network indicating it is looking for a DHCP server. A response is sent from the server to announce its availability. Your client acknowledges this by sending a request for a DHCP lease. The DHCP server sees this request and finds an unused IP address. Your laptop gets a DHCP lease offer from the server. The lease offer, among other things, includes the IP address you will use. Your laptop's DHCP client accepts the offer and begins using the IP address to talk on the network. As lease expiration time approaches your laptop will ask to renew.

In a PXE boot environment there is always a DHCP server. The machines that are being provisioned are DHCP clients. The PXE enabled NIC has a DHCP client built into its firmware.

DHCP supports a wide range of options that can be provided to network clients. Typically a lease consists of an IP address for use by the client, a default gateway address and DNS servers to use for name resolution. In the case of PXE, it also includes an option that contains the IP address of the server to download the boot files from.
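As an illustration, with the ISC DHCP server the PXE-specific pieces are the next-server and filename directives. Every address below is a hypothetical placeholder, not taken from the article:

```
# /etc/dhcp/dhcpd.conf sketch -- all addresses are placeholders.
subnet netmask {
    range;       # pool handed out to clients
    option routers;                 # default gateway
    option domain-name-servers;    # DNS
    next-server;                   # TFTP server holding the boot files
    filename "pxelinux.0";                     # network boot file to request
}
```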

TFTP Server

TFTP stands for Trivial File Transfer Protocol. TFTP is a simple UDP-based protocol for getting or sending a file. Its simplicity lends itself well to being implemented in firmware environments where resources are limited. Due to its simple nature TFTP has no bells or whistles. Getting and putting files are supported, and that's it. There is no directory listing; you must know the exact path of the file you want to download. Additionally, there is no authentication or authorization.

While TFTP is still commonly used in PXE environments, advances in technology have resulted in some PXE implementations supporting more complex protocols like HTTP or iSCSI. For example, the iPXE firmware supports:

  • HTTP
  • iSCSI Storage Area Networks (SAN)
  • Fibre Channel over Ethernet (FCoE) Storage Area Networks (SAN)
  • ATA over Ethernet (AoE)

Putting it all together

This diagram illustrates the PXE boot flow from power on to network boot file download.

The above diagram illustrates a basic PXE workflow. Let's review each of the steps.


  • Client PXE enabled NIC powers on and boots firmware.
  • Firmware’s DHCP client sends a broadcast packet to the local area network indicating it needs a network configuration from the DHCP server.
  • The DHCP server responds with what is called an “offer”. The offer contains the network configuration as specified by the DHCP protocol specification.
  • The DHCP client, happy with the result now sends a DHCP request. This request basically means “I got the offer, I want to confirm before moving forward”.
  • The DHCP server then responds with a unicast packet directed at the assigned IP address. Note that up until this point all packets have been broadcast.
  • The DHCP client gets the response and starts using the network configuration.


At this point the NIC firmware in the PXE client has an IP configuration. Part of that configuration should have been what is referred to as the "next-server" option. The next-server option is a DHCP option that tells the client where to go to download the network boot file.

  • NIC firmware makes a TFTP request to the server using the IP or name specified in the next-server option of the DHCP lease.
  • TFTP server sends the requested file in a UDP data stream.
  • NIC firmware receives the file, storing it in memory.
  • The client then executes the downloaded file.

Next steps after TFTP

What happens at this point will vary depending on the environment and goal of the PXE boot configuration. Some examples are OS installation or full network boot.

OS Installation Use Case

The system boots an automated OS installer image that installs an OS to the local drive. After the installation, the machine reboots into the newly installed local OS.

Full Network Boot Use Case

In this use case the server boots entirely over the network on every boot. Typically the root file system is mounted via NFS. The pro of this configuration is that the servers can run with no local storage. The cons are that the network needs to be functional to boot the server, and performance may not be as good as local storage.


The PXE environment we just described is a simple and common configuration. It is a good starting point for newcomers trying to understand PXE for the first time.

Here are some variations you will see in the real world. Especially in enterprises.

  • DHCP relay or "helper". The relay forwards DHCP requests to a DHCP server not on the local LAN. This functionality is common on enterprise routers.
  • PXE proxy or relay. This is often used when one does not have the access required to modify the DHCP server configuration. In this case the relay responds to the DHCP request with just the server and filename of the network boot file, letting the existing DHCP server provide the standard IP configuration.
  • HTTP or HTTPS instead of TFTP for retrieval of the network boot file.


In conclusion, PXE is a very powerful tool for automating the provisioning and updating of data center infrastructure, embedded devices, IoT devices and even workstations. We have covered the basics and hope you walk away from this article with a better understanding.

Appendix & further reading


We appreciate feedback. If you have ideas on how we can make this article or site better please leave a comment.

Raspberry Pi PXE Boot – Netbooting a Pi 4 without an SD card

What does this Raspberry Pi PXE Boot tutorial cover?

This Raspberry Pi PXE boot tutorial walks you through netbooting a Raspberry Pi 4 without an SD card. We use another Raspberry Pi 4 with an SD card as the netboot server. Allocate 90-120 minutes for completing this tutorial end to end. It can go faster if you are already familiar with some of the material.

Why I wrote this tutorial

Does the world need another Raspberry Pi PXE boot tutorial? I read many amazing docs, forum posts and blog posts on the topic before starting this project. However, they all had some gaps I had to fill in myself. So I decided to write a tutorial that addresses the following gaps.

  • Most are geared toward the Pi 3. Understandable, since the Pi 4 is newer. However, there are subtle differences between PXE booting the Pi 3 and the Pi 4. This tutorial focuses on the Pi 4.
  • Glossing over the underlying technologies, assuming knowledge of PXE boot. I aim to provide more insight into the PXE boot process.
  • Troubleshooting tips tended to be lacking. I provide a troubleshooting guide in this how-to.

Why PXE boot or netboot a Raspberry Pi?

I am embarking on an IoT project using Raspberry Pis in a Kubernetes cluster. 10 Pis will be in the cluster for running containerized workloads. I want to make provisioning and re-provisioning the cluster nodes easy as pie (pun intended). As a result, the first stage of the project is figuring out how to PXE boot the Raspberry Pi 4, which led me to creating this tutorial.

My goals are:

  • Simplify Pi provisioning and maintenance as much as possible.
  • Automate updates/upgrades as much as possible.

Netbooting is a good path to achieving these goals. For example, when you netboot a Pi it does not require an SD card to boot. The OS and file system live on a central server. Because most of the provisioning happens on a central server, I can eventually automate it via scripts.

What is PXE, How does it work?

This is a basic overview of PXE. If you want to dive deeper on PXE we suggest you read our post What is PXE? How does it work?

PXE stands for Preboot Execution Environment. At a high level PXE is a standard for network booting a computer. It uses standard networking protocols to achieve network booting. Specifically IP, UDP, DHCP and TFTP. PXE is typically used in one of two ways:

  • Initial bootstrap or provisioning of a network enabled server. In this use case the PXE boot process initializes the system by installing an operating system on local storage. For example, using dd to write a disk image or using a debian preseed installer.
  • Disk-less systems which always boot off the network. For example the process we follow in this tutorial.

The diagram below shows the high level flow of the PXE boot process. Understanding the flow will help in the event you need to troubleshoot a boot failure.

The PXE boot flow. Implementations can differ. Also the server components can be spread across multiple hosts.

Overview of the PXE flow

  • Client powers on, and the client's network interface card firmware sends a DHCP request over the network.
  • A DHCP server responds with a DHCP lease offer. This lease offer will have an option for the "next-server". The "next-server" option value is the IP or name of the server the client will download its initial boot files from. The next-server field is known as option 66 in the DHCP protocol.
  • The client downloads files via TFTP from the host specified in the next-server field of the lease. Typically the files are a kernel and initrd image. However, it could be something else; for example, it could chain-load a network boot loader or another PXE client like iPXE.
  • The client boots the downloaded files and starts its bootstrap process.
  • At this point the client could start an OS installation or boot as a disk-less system.



  • You have an existing network with Internet access that can be used to install packages on your Pi 4.
  • You have a dedicated or stand alone network for running the PXE boot client and server. This can be a network switch or it can simply be an ethernet cable between the two Raspberry Pis.
  • You will use the following network IP addresses for your Raspberry Pis: a static address for the PXE server, while the PXE client will get an IP address via DHCP. Your subnet mask on the server should be a /24 ( You can tweak this however you want, but all the documentation in the tutorial assumes these addresses.

Phase 1 – PXE Boot Client Configuration

The Raspberry Pi 4 has an EEPROM, which is capable of network booting. Unfortunately, the only way I have found to configure network booting is from Linux. Hence you must boot the system at least once with an SD card to configure it.

Install Raspbian on an SD card and install needed tools

Let’s start configuring your client system for netboot. This is the Raspberry Pi that will eventually boot without a micro SD card installed.

  • Download Raspbian Lite. For this tutorial I used the Buster release. Link to direct download. Link to the torrent.
  • Copy the Buster image onto an SD card. I suggest reading this page for instructions on how to do this. I used the dd command below, replacing sdX with my SD card device. Warning! This will overwrite data on the device specified. Triple check you are writing to the SD card and not your laptop drive!
  • If your SD card already has a partition table on it your system might auto mount it on insertion. Un-mount or eject any volumes mounted from the micro SD card. Then use the dd command below to copy the image to your micro SD card. The dd command takes a few minutes to complete on my laptop.
sudo dd if=2019-09-26-raspbian-buster-lite.img of=/dev/sdX bs=4M
  • Put the SD card in your client Raspberry Pi 4 and boot it. Using the lite version of Raspbian gives you a text-only console. If you want a graphical console you can use the full version and it should work; however, I have not tested this workflow with the full version.
  • Log in via the console using the default login: pi/raspberry
  • Connect your Raspberry Pi to the internet via an ethernet cable.
  • Update the Raspbian OS via apt-get and install the rpi-eeprom package:
sudo apt-get update
sudo apt-get full-upgrade
sudo apt-get install rpi-eeprom

Configure the Raspberry Pi 4 bootloader to PXE boot

Next, let's examine your boot loader configuration using this command:

vcgencmd bootloader_config

Here is the output on my fresh out of the box Raspberry Pi 4:

pi@raspberrypi:~ $ vcgencmd bootloader_config

We need to modify the boot loader config to boot off the network using the BOOT_ORDER parameter. To do that we must extract the config from the EEPROM image, make our modifications to enable PXE boot, and finally install it back into the boot loader.

We do that with these steps:

  • Go to the directory where the bootloader images are stored:
cd /lib/firmware/raspberrypi/bootloader/beta/
  • Make a copy of the latest firmware image file. In my case it was pieeprom-2019-11-18.bin:
cp pieeprom-2019-11-18.bin new-pieeprom.bin
  • Extract the config from the eeprom image
rpi-eeprom-config new-pieeprom.bin > bootconf.txt
  • In bootconf.txt, change the BOOT_ORDER variable to BOOT_ORDER=0x21. In my case it had defaulted to BOOT_ORDER=0x1. 0x1 means only boot from SD card; 0x21 means attempt SD card boot first, then network boot. See this Raspberry Pi Bootloader page for more details on the values and what they control.
  • Now save the new bootconf.txt file to the firmware image we copied earlier:
rpi-eeprom-config --out netboot-pieeprom.bin --config bootconf.txt new-pieeprom.bin
  • Now install the new boot loader:
sudo rpi-eeprom-update -d -f ./netboot-pieeprom.bin
  • If you get an error with the above command, double check that your apt-get full-upgrade completed successfully.

Disabling automatic rpi-eeprom-update

As pointed out by a Reddit user, the EEPROM firmware will update itself by default; the rpi-eeprom-update job does this. Considering that we are using beta features, an automatic firmware update could disable PXE boot in the EEPROM. You can disable automatic updates by masking the rpi-eeprom-update service via systemctl, and manually update the EEPROM by running rpi-eeprom-update when desired. See the Raspberry Pi docs on rpi-eeprom-update for more details.

sudo systemctl mask rpi-eeprom-update

Phase 1 Conclusion

Congratulations! We are halfway to first net boot. Our Raspberry Pi net boot client is configured for PXE boot. Before you shut down the Pi 4, please make note of the ethernet interface's MAC address. You can do this by running ip addr show eth0 and copying the value from the link/ether field. In my case it was link/ether dc:a6:32:1c:6a:2a.

Unplug and put aside your Raspberry Pi PXE boot client for now. We are moving on to configuring the server. Now is also a good time to remove the SD card. It is no longer needed now that the Pi will net boot.

Phase 2 – Raspberry Pi PXE Boot Server Configuration

If you completed the client configuration you can use the same SD card for the server or use a second one. For example I use two different micro SD cards in case I need to boot the client off micro SD for debugging purposes.

If you are using two micro SD cards, make sure to install Raspbian on the second card as well, following the instructions earlier in the tutorial. Then boot your server off the SD card. Some of the initial server configuration steps will be familiar. Boot the server connected to an Internet connection; we need the Internet connection to update and install packages. Later in this phase we will remove it from the Internet and plug it directly into the other Raspberry Pi.

Update Raspbian and install rpi-eeprom, rsync and dnsmasq

Update the Raspbian OS via apt-get and install the rpi-eeprom package. Note this step can take a while; time will vary based on the speed of your Internet connection.

sudo apt-get update
sudo apt-get full-upgrade
sudo apt-get install rpi-eeprom

Install rsync, dnsmasq and the NFS server. We will use rsync to make a copy of the base OS, and dnsmasq as the DHCP and TFTP server. NFS will be used to expose the root file system to the client.

sudo apt-get install rsync dnsmasq nfs-kernel-server

Create the NFS and tftpboot directories and create our base netboot filesystem

Make the NFS and tftpboot directories. The /nfs/client1 directory will be the root of the file system for your client Raspberry Pi. If you add more Pis you will need to add more client directories. The /tftpboot directory will be used by all your netbooting Pis. It contains the bootloader and files needed to boot the system.

sudo mkdir -p /nfs/client1
sudo mkdir -p /tftpboot
sudo chmod 777 /tftpboot

Copy your Pi's OS filesystem into the /nfs/client1 directory. We are going to exclude some files from the rsync. This is a preventative measure in case you run this command again after configuring the network and dnsmasq. This command takes some time due to the IO characteristics of SD cards. They are slow πŸ™‚

sudo rsync -xa --progress --exclude /nfs/client1 \
    --exclude /etc/systemd/network/10-eth0.netdev \
    --exclude /etc/systemd/network/ \
    --exclude /etc/dnsmasq.conf \
    / /nfs/client1

Now we use chroot to change root into that directory. But before we chroot we need to bind mount the required virtual filesystems into the base client directory.

Once in the chroot we delete the server's SSH host keys. Next we reconfigure the openssh-server package, which regenerates the keys. Additionally, we enable the SSH server so we can remotely log in when the client comes online.

cd /nfs/client1
sudo mount --bind /dev dev
sudo mount --bind /sys sys
sudo mount --bind /proc proc
sudo chroot . rm /etc/ssh/ssh_host_*
sudo chroot . dpkg-reconfigure openssh-server
sudo chroot . systemctl enable ssh
sudo umount dev sys proc

Configure the PXE server to use a static IP

Our PXE server is a DHCP server, meaning it assigns IP addresses and network configuration to clients which request them; in this case, our Raspberry Pi PXE boot client. We do not want the PXE boot server itself to run a DHCP client, so we should disable it. Let's do that now. Create a new systemd file to disable the DHCP client on eth0. The path for the file we wish to create is /etc/systemd/network/10-eth0.netdev. Its contents should be:


Create the /etc/systemd/network/ file with the following contents. Please note that I am specifying a DNS server and gateway address, even though in this tutorial there is no gateway or DNS server at that address. None are needed for this tutorial; I have them there as a placeholder so that if I ever want to connect this network to others, I can drop a router in at that address. You can probably leave DNS and Gateway out if you prefer.
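The file contents did not survive in this copy. A typical systemd-networkd static configuration looks like the sketch below; stands in for the elided server address, and the DNS/Gateway values are the placeholder described above:

```
# .network file sketch -- all addresses below are placeholders.
[Match]
Name=eth0

[Network]
DNS=
Gateway=
```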



Now we are going to disable the DHCP client service dhcpcd that is enabled by default on Raspbian. Please pay extra careful attention to the fact that it is "dhcpcd" and not "dhcpd". The first is a DHCP client, the second a server.

sudo systemctl stop dhcpcd
sudo systemctl disable dhcpcd

Configure dnsmasq for PXE boot

This step configures dnsmasq to support our PXE boot. Replace your /etc/dnsmasq.conf file with the following contents:

pxe-service=0,"Raspberry Pi Boot"
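Only the pxe-service line survived in this copy of the config. A sketch of a dnsmasq configuration serving both DHCP and TFTP for this setup (the dhcp-range addresses are placeholders):

```
# /etc/dnsmasq.conf sketch -- the dhcp-range addresses are placeholders.
interface=eth0                       # serve only the PXE network
dhcp-range=,,12h
enable-tftp
tftp-root=/tftpboot
pxe-service=0,"Raspberry Pi Boot"    # string the Pi 4 firmware looks for
log-dhcp                             # verbose DHCP logging, handy when debugging
```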

Next we copy the boot files from our /boot directory into the tftpboot directory.

sudo cp -r /boot/* /tftpboot/

Enable systemd-networkd and dnsmasq. Restart dnsmasq to confirm the config is valid. Finally reboot and ensure the Pi comes up with the network configured properly.

sudo systemctl enable systemd-networkd
sudo systemctl enable dnsmasq.service
sudo systemctl restart dnsmasq.service
sudo reboot

Now we must update the cmdline.txt file in /tftpboot. This file contains the kernel parameters that are passed to our client Raspberry Pi at boot time. Edit /tftpboot/cmdline.txt and replace its contents with:

console=serial0,115200 console=tty1 root=/dev/nfs nfsroot=,vers=3 rw ip=dhcp rootwait elevator=deadline

Configure the NFS exports on the PXE boot server

This step configures the exports. Exports are file systems that are being shared, or exported, via NFS. To do this we must edit the /etc/exports file and then restart the NFS-related services.

The contents of /etc/exports should be as follows.

/nfs/client1 *(rw,sync,no_subtree_check,no_root_squash)
/tftpboot *(rw,sync,no_subtree_check,no_root_squash)
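A quick way to sanity-check the exports before restarting services is to confirm both lines carry the option this tutorial depends on. The sketch below runs against a temp copy so it is safe to run anywhere; point the grep at /etc/exports on your real server:

```shell
# Write a sample exports file to a temp path (substitute /etc/exports on the server).
exports=$(mktemp)
cat > "$exports" <<'EOF'
/nfs/client1 *(rw,sync,no_subtree_check,no_root_squash)
/tftpboot *(rw,sync,no_subtree_check,no_root_squash)
EOF
# no_root_squash must be present on both exports, otherwise the PXE client
# cannot act as root on its NFS-mounted file systems.
count=$(grep -c 'no_root_squash' "$exports")
echo "exports with no_root_squash: $count"
rm -f "$exports"
```

On the real server, sudo exportfs -ra re-reads /etc/exports without a full service restart.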

Configure the /etc/fstab to mount via NFS

We are almost done! One last step: modify the /etc/fstab file in our client’s file system. This tells the client to mount its /boot partition from the NFS export on our PXE boot server Raspberry Pi (the root file system itself is mounted via the nfsroot kernel parameter in cmdline.txt). Put the following into /nfs/client1/etc/fstab.

proc       /proc        proc     defaults           0    0
/boot      nfs          defaults,vers=3             0    0
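The /boot entry appears to have lost its first (device) field in this copy of the post. A complete entry names the NFS export on the server; with a placeholder server address it would read:

```ini
# Hypothetical complete entries; 192.168.1.2 is a placeholder server address
proc                   /proc   proc  defaults         0  0
192.168.1.2:/tftpboot  /boot   nfs   defaults,vers=3  0  0
```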

Finally enable and restart NFS related services.

sudo systemctl enable rpcbind
sudo systemctl restart rpcbind
sudo systemctl enable nfs-kernel-server
sudo systemctl restart nfs-kernel-server

Now do one last reboot on the server for good measure. Take a look at the system logs and systemctl statuses to see if everything started correctly.

Complete. Does it work?

Nice work getting through the tutorial. Now is the final test. Plug your client Raspberry Pi into the network or directly to the server via ethernet. Now connect a keyboard and LCD screen to your client Raspberry Pi. Power on and wait. Hopefully you will see the following after a few moments!

Raspberry Pi PXE Troubleshooting Guide

Hopefully you are up and running. But if you are experiencing problems this section can help you debug your kit. The trickiest part of troubleshooting this setup is that the graphical console on the client emits no information until the OS kernel starts booting. As a result I had to do all troubleshooting on the server side.

It is possible the client does emit some useful information via serial console. But I have not tried because I don’t have the right equipment today.

Troubleshooting Tools

  • Check dnsmasq is running
sudo systemctl status dnsmasq.service
  • Check the NFS server and rpcbind are running
sudo systemctl status rpcbind.service
sudo systemctl status nfs-mountd.service
  • See stats from your NFS server. Useful for seeing if the NFS client has connected
sudo nfsstat
  • Tail the daemon log file
sudo tail -f /var/log/daemon.log
  • Use tcpdump to packet trace.
tcpdump -n -i eth0 
  • Use tcpdump filters to narrow down your trace. For example to only see DHCP traffic (port 67) use the following command.
tcpdump -n -i eth0 port 67

What stage is the failure?

The key to troubleshooting PXE boot problems is figuring out where in the workflow it is failing. If you are new to PXE, re-reading the earlier section of this post (What is PXE, How does it work?) will help.

The first question you need to answer is: “What stage is the failure in?” It could be in the following stages:

  • Bootloader DHCP stage.
  • Bootloader TFTP stage.
  • Linux/OS NFS mount stage.

DHCP Stage

If your client is properly configured it should be making a DHCP request at boot time. Let’s see if DHCP is working.

  • Tail the server daemon.log file and power on your client Pi. See “Tail the daemon log file” in the Troubleshooting Tools section above.
  • Do you see dnsmasq log messages indicating it is serving dhcp requests to your client Pi? If yes, you know that the DHCP server is working, the client is properly configured and the network between the two Pis is functional.
  • If you don’t see dnsmasq messages about DHCP, your next step is probably to packet trace using tcpdump. Run tcpdump on the server. Do you see DHCP traffic coming from the client? If yes, is the server responding?

TFTP Stage

  • Tail the server daemon.log and look for dnsmasq messages related to TFTP. All client requests should be logged.
  • If DHCP is working but TFTP is not, you can probably assume the network is OK; otherwise DHCP would not work. The next step is to double check your TFTP configuration and the permissions on the /tftpboot directory.
  • Try plugging in your laptop and using a tftp client to connect. What happens?

NFS Stage

  • Check /var/log/daemon.log /var/log/syslog and /var/log/messages for clues.
  • Did you restart the nfs and rpc-bind services after updating /etc/exports?
  • Double check your /etc/exports file for typos.

If all else fails

Try again; network boot is a beta feature and could have bugs. For example, reports on the Raspberry Pi site indicate a reboot can be required if it is not working.

Room for improvement

This process is hacky; in other words, there is plenty of room for improvement. If time permits I will implement the following improvements.

  • Stop copying the files for clients off the server root file system. It is bound to cause problems at some point. For example if you make a server specific configuration change and then re-sync the files you end up with that change on the clients. Creating a pristine file system tarball and using it as your base for new client directories is a better solution.
  • Experiment with the Packer Arm Image Builder. Using Packer is a much cleaner solution and would make it much easier to automate image builds.
  • Create a small pristine base image for the client, using debootstrap or multistrap for example.
  • Make the root file system read-only and configure the client image to use tmpfs for ephemeral writes.
  • Improve the security model especially around NFS.
  • We currently support only a single client configuration. An automated process for adding and removing clients would be cleaner and scale better.
  • Fix hostname handling. This workflow currently results in the clients having the same hostname as the server unless you change it by hand.


I want to make this guide as thorough as possible. Please leave any feedback on this post in the comments. Constructive feedback will be worked into future edits.


Update Log

  • Added instructions on disabling the DHCP client via systemctl. – Dec 4th 2019
  • Found an error in the systemd network file. Gateway was in the wrong stanza. – Dec 4th 2019
  • Fixed typos where raspbian was mis-spelled raspian. – Dec 1st 2019
  • Added notes on disabling rpi-eeprom-update to prevent automatic updates. – Dec 1st 2019
  • Readability improvements. – Dec 1st 2019

WordPress Jetpack – Invalid Client, Unknown client_id

Invalid request, please go back and try again. Error Code:invalid_client Error Message: Unknown client_id.

Warning: This post has instructions that involve manipulating the WordPress database. This can break your site or result in data loss if not done properly. Take these actions at your own risk. Please back up your database before attempting this.

I received this error when attempting to activate the Jetpack WordPress plugin through the WordPress admin interface on a site I manage.

Invalid request, please go back and try again.

I don’t know exactly how I got the WordPress installation into this state, but I suspect it’s because I performed the following actions in this order:

  • I installed the Jetpack plugin on the site, did not activate it.
  • I upgraded WordPress to 5.3.
  • I attempted to activate the Jetpack plugin.

Solution Overview

To resolve this error you must do the following:

  • Backup your database.
  • Deactivate the Jetpack plugin if it is activated.
  • Delete/Uninstall the Jetpack plugin.
  • Remove all Jetpack related entries from the wp_options table in your WordPress installation’s database.
  • Re-install and re-activate the Jetpack plugin.

Solution Details

Backup your database

Backing up your database is outside the scope of this post. There are many ways to backup a WordPress site or database. If you run a WordPress site you should ideally have regular backups running already.

Deactivate and delete Jetpack

The next two steps are fairly straight forward. Through the WordPress admin interface, deactivate and delete the plugin.

Remove Jetpack rows from the wp_options table

This is where things get a little trickier. I provide two options. The first uses phpMyAdmin, a web based tool for working with SQL databases. If you are running WordPress on your own VPS or dedicated server, you probably have it installed. If not, the second option shows you how to use SQL statements via the MySQL CLI.

Remove Jetpack rows using phpMyAdmin

Log into your phpMyAdmin web interface. On my server it’s installed at

On the left hand side pane, find the database for your WordPress site. Expand the database to see the list of tables and click on the wp_options table. This will show you a paginated view of the rows in that table.

Next click on the SQL button to run a SQL query. Paste the following SQL statement in the box:

SELECT * FROM `wp_options` WHERE `option_name` LIKE '%jetpack_%'

This query will return all rows that have “jetpack” in the option name.

SELECT all jetpack rows from wp_options

Next, check the show all checkbox. Then finally click the delete button at the bottom of the query results. This will delete all of these rows.

Finally re-install Jetpack via the admin interface and re-activate it.

Remove Jetpack rows using MYSQL CLI

Typically you will run the MySQL CLI from the server itself, so log in to the server via SSH or whatever means you have. Next, start the MySQL CLI and switch to the database you are working on. Substitute your MySQL username for “username” and enter the password when prompted. Then run the “use” command, substituting your WordPress database name for “databasename”.

mysql -u username -p
mysql>use databasename;

Next we are going to run a query that does two things:

  • Runs a query to get all rows from wp_options that have “jetpack” in them.
  • Delete those rows.

We use a sub-query wrapped in a derived table to do this, because MySQL will not let you delete from a table you are selecting from directly in the same statement. The code is as follows:

DELETE FROM wp_options WHERE option_id IN (
        SELECT option_id FROM (
            SELECT option_id FROM wp_options WHERE option_name LIKE '%jetpack_%'
        ) AS p
    );

Now re-install and reactivate Jetpack.


Do you have feedback on how we can make this post better? Please leave a comment.