Freelance 👨🏻‍💻 software developer. Addicted to 🏍️ motorcycles and 🏞️ travel.
https://cannonfodder.dev @the___tourist

Fighting COVID-19 at home

COVID-19, a global pandemic. Stuck at home in quarantine. Can't go outside. Can't see family or friends. Stuck working remotely, Netflixing or playing games. Feeling useless. What can you do?

You can run the BOINC client on all your PCs, laptops, NAS and servers! If you join the Rosetta project, you'll help research COVID-19 and possibly contribute to finding a cure.

I'm running BOINC in combination with boinctui (a terminal BOINC manager) on multiple machines, and even on my Synology NAS through Docker!
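For the Synology/Docker route, a minimal docker-compose sketch using the boinc/client image from Docker Hub might look like this (the volume path and RPC password below are placeholders, not my actual setup -- adjust to taste):

```yaml
version: "3"
services:
  boinc:
    image: boinc/client
    container_name: boinc
    restart: unless-stopped
    volumes:
      # Persist BOINC state so work units survive container restarts
      - ./boinc-data:/var/lib/boinc
    environment:
      # Placeholder password -- needed if you want to manage it remotely (e.g. with boinctui)
      - BOINC_GUI_RPC_PASSWORD=changeme
      - BOINC_CMD_LINE_OPTIONS=--allow_remote_gui_rpc
```

Once the container is up, you can attach it to the Rosetta project from the BOINC manager of your choice.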

Set it up today and join the fight!

WebStorm plugins

I've been using IntelliJ products for over 10 years now, and I've been a massive fan of loads of different plugins. The past two years I've moved primarily to front-end engineering, and I've adopted WebStorm as my current go-to editor.

Here's a list of the plugins I'm currently using:

1. Material Theme UI

Plugin homepage

2. CodeGlance

Plugin homepage

3. Rainbow brackets

Plugin homepage

4. Indent Rainbow

Plugin homepage

5. .env files support

Plugin homepage

6. Styled Components & Styled JSX

Plugin homepage

7. Kubernetes

Plugin homepage

8. Swagger

Plugin homepage

9. PlantUML integration

Plugin homepage

10. Zero Width Characters locater 2

Plugin homepage

11. Main Menu Toggler

Plugin homepage

PSA: Mind your .NET loglevel

A wild issue appears

At my current client, we had been chasing a frustrating issue between our NodeJS front-end and a specific .NET service. Adding distributed tracing from front-end to back-end services didn't give us any clues. We sprinkled logging everywhere, but that didn't help either. Frustrated and burnt-out developers roamed the office, crying tears of failure.

Pinpointing a possible source

Going through the massive amount of logging, we finally pinpointed the issue to a request failing with ECONNABORTED after hitting a 3000ms timeout. A test script that hit the .NET service pod from the front-end Kubernetes pod pointed us in the direction of net.core.somaxconn, a kernel-level setting that caps the number of pending connections in a socket's listen queue. Reproducing it through Docker yielded the same results, especially since running the .NET service directly didn't give us the error.

All was not well though: one of our developers upped the number of requests in our test script against the local .NET service, and you can probably guess what happened. Those requests also started failing. Removing the Docker/k8s layer gave the service better performance, but it still ended up hitting some limit!

Solution

Several days of pair-debugging ensued. A hybrid mix of front- and backenders formed and tried out all kinds of scenarios. Then, during a moment of despair, one of the backenders peered at his Rider window, saw the logging coming in way slower than the NodeJS test script, and suddenly remembered an article he read.

We changed the settings according to the article, fired up the test script and... bliss. We had fixed the error. Now, if you're wondering what exactly this magical fix entailed, I'll give you a very, very simple tl;dr:

DISABLE INFORMATION LEVEL LOGGING IN YOUR .NET SERVICES

This immediately increased the successful response count from 1000 to about 7000. That's roughly a free 7x performance boost for this specific back-end service.
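The exact change depends on your setup, but in a typical ASP.NET Core service the tl;dr translates to raising the minimum log level in appsettings.json, something like this (a sketch -- the category names vary per project):

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Warning",
      "Microsoft": "Warning"
    }
  }
}
```

With "Warning" as the default, the per-request Information messages that flooded our console simply stop being emitted.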

Please forward this article to all your colleagues, friends, grandmothers, pals and other assorted acquaintances.

/PSA

Cheap UPS to protect your NAS

Secure. That. Media!

I bought a new Synology DS1019+ last year and have 3x 8TB disks filled with thousands of photos and videos. I also recently became a dad (🎆!), and the storage of my photos has become much, much more important. I'd like to have all digital media of my child, plus any media they might be interested in (pictures of young mom & dad boozing it up), securely stored for later consumption.

A NAS, however, is not a backup (obviously). So I'm already running nightly incremental encrypted backups to cloud-based storage, which is itself snapshotted daily and stored off-site.

Cheap alternative

With that taken care of, I'd like to be more protective of my NAS. Recently there have been two short power outages in the Amsterdam-area, plus we had a power spike due to bad weather. Protecting the NAS with a power spike plug and hooking it into a UPS seems like the best option. But what if I don't want to spend a lot of money on both? Well...

[17:16] User "Cheap-UPS-and-Spike-Protector" joined the chat.

Synology provides a list of compatible devices, but those for my model ended up being slightly too pricey. I was looking for an affordable yet reliable solution.

The solution

Enter the Eaton 3S 550, a cheap battery + spike protector in one. It's officially not supported by Synology, but I decided to take the gamble anyway. When I plugged it into my NAS, the DSM software had no trouble recognizing it, gave no warnings and allowed me to set my power-off configuration.

I'm too afraid to pull the plug on it, so I'll just have to wait and see if it's actually secure. But I'm quite confident it'll work and should give my NAS about 20 minutes to power down.

NodeJS HPE_HEADER_OVERFLOW

NodeJS HPE_HEADER_OVERFLOW in DataDog

Quick knowledge tip

We ran into an issue on production where our k8s ingress controller was returning 400 Bad Request on multiple responses a day. After debugging, the cause turned out to be a header larger than 8KB being sent, which NodeJS rejected (dropping the request). Nginx then returned a 400 since the request did not produce a valid response.

It was caused by two things:

  • An Akamai rule duplicating header values, so the same headers were sent multiple times
  • A frontender setting a ludicrous amount of cookies

Both issues are pending testing, so we did a quick fix by expanding Node's HTTP header size limit to 16KB with the following param:

--max-http-header-size=16000
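If you'd rather not edit the start command, the same flag can be fed through the NODE_OPTIONS environment variable, and the active limit can be checked via http.maxHeaderSize (available since Node 11). A quick sketch:

```shell
# Raise the header limit for every Node process started from this shell
export NODE_OPTIONS="--max-http-header-size=16000"

# Verify the active limit (in bytes) -- prints 16000
node -p "require('http').maxHeaderSize"
```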

NextJS + Ingress subpath prefixing

Prefixing your NextJS app through Ingress

What do we want to achieve?

By default, NextJS doesn't support serving an application with its assets from a custom app prefix (in both dev and prod), for example:

https://www.myapp.com/    <-- SomeOtherApp
https://www.myapp.com/portal <--- My NextJS app
https://www.myapp.com/static/portal <--- My NextJS static (cached by CDN)

NextJS configuration

To be able to do this we need to add a custom server.(js|ts), call setAssetPrefix and pass asset handling into the NextJS request handler.

import express from 'express';
import next from 'next';
import { parse, format } from 'url';

const assetPrefix = '/static/portal';

const app = next({ dev: process.env.NODE_ENV !== 'production' });
const handle = app.getRequestHandler();

// Handle the asset and rewrite the pathname
const handleAppAssets = (assetRegexp, handle) => (req, res) => {
  const parsedUrl = parse(req.url, true);
  const pathname = parsedUrl.pathname as string;
  const assetMatch = assetRegexp.exec(pathname);
  // The route pattern guarantees a match, but bail out safely just in case
  if (!assetMatch) return handle(req, res);
  const [, asset] = assetMatch;
  req.url = format({
    ...parsedUrl,
    pathname: asset,
  });

  return handle(req, res);
};

app.prepare().then(() => {
  // Set the asset prefix
  app.setAssetPrefix(assetPrefix);

  const server = express();

  // Handle the app assets and route them
  server.get(
    `${assetPrefix}/*`,
    handleAppAssets(new RegExp(`^${assetPrefix}(/.*$)`), handle),
  );

  server.all('*', (req, res) => handle(req, res));
  server.listen(3000, error => {
    if (error) throw error;
    console.log('Server started.');
  });
});

Ingress configuration

By default, Ingress proxies to our pod(s) but keeps the subpath. By using the rewrite-target annotation, requests are rewritten to the root of our container.

metadata:
  name: frontend-portal
  labels:
    owner: myorg
  annotations:
    # Route all traffic to pod, but don't keep subpath (!)
    nginx.ingress.kubernetes.io/rewrite-target: /

spec:
  rules:
    - host: {{ .Values.clusterDomain }}
      http:
        paths:
            - path: /portal(/.*|$)
              backend:
                serviceName: frontend-portal
                servicePort: 80
            - path: /static/portal(/.*|$)
              backend:
                serviceName: frontend-portal
                servicePort: 80

Result

Now your NextJS app lives in the subpath, with its assets served separately, and your dev and production environments are in sync.

Quick access to local offline reference docs

Premise

Through a simple key-binding (shift+ctrl+d) I can quickly launch or focus reference documentation (using devdocs).

Quick open reference docs after pressing a key-binding

Steps

So, how are we going to do this?

  1. We're going to install some software to facilitate the mechanics: wmctrl, nativefier and devdocs
  2. We're going to write a bash script that launches or focuses the devdocs
  3. We're going to bind the bash script to a key-binding

(nb: This guide was tested on Ubuntu 18.04)

Installing software

🛃 Installing wmctrl (1 of 3)

What's wmctrl and why do we need it?

$ man wmctrl
NAME
   wmctrl - interact with a EWMH/NetWM compatible X Window Manager.
DESCRIPTION
   wmctrl  is  a  command that can be used to interact with an X Window manager that is compatible with the EWMH/NetWM specification.  wmctrl can query the window manager for information, and it can request that certain window management actions be taken.

We'll use wmctrl to check if our API reference application is running, and focus on it. If it's not running, we'll start it. wmctrl is easily installed on Ubuntu through the package manager:

# Installs wmctrl
sudo apt install wmctrl

💡 Installing nativefier (2 of 3)

Nativefier is a tool to make a native wrapper for any web page. We'll use it to get a local version of DevDocs installed, in case we're ever without Wi-fi.

It's quite easy to install, just run the following npm command to install it as a global dependency:

# Install nativefier globally
npm install nativefier -g

📖 Installing devdocs (3 of 3)

This is the actual documentation tool we'll be using. You can insert your own if you like.

We'll use nativefier to install a local version of DevDocs:

# Create software folder
mkdir ~/Software && cd ~/Software

# Generate local electron version of devdocs
nativefier --name "DevDocs" "https://devdocs.io/"

We also want to have a .desktop file installed, so we can launch the application and save it in our favorites sidebar:

printf '%s\n' \
   "[Desktop Entry]" \
   "Type=Application" \
   "Terminal=false" \
   "Exec=/home/<your-username>/Software/dev-docs-linux-x64/dev-docs" \
   "Name=DevDocs" \
   "# Icon=/path-to-optional-icon.png" \
   > ~/.local/share/applications/devdocs.desktop

(If you're having an issue with adding the application to your favorites in a later stage, add the following to this file: StartupWMClass=dev-docs-nativefier-49fe5b. Where dev-docs-nativefier-49fe5b is the name you see when you hover over the running app icon in your application bar).

Action time

🚀 Implementing the launch script

# Create a folder to store our script
mkdir ~/bin

# Create launch/focus script
printf '%s\n' \
   '#!/bin/bash' \
   '(wmctrl -l | grep -q DevDocs) && wmctrl -a DevDocs || gtk-launch devdocs' \
   > ~/bin/devdocs.sh

# Make the script executable for the user
chmod u+x ~/bin/devdocs.sh

What exactly does this script do? Let's break it down:

  • (wmctrl -l | grep -q DevDocs): List all open windows and check for one named DevDocs
  • wmctrl -a DevDocs: If found, focus the DevDocs window
  • gtk-launch devdocs: If not found, launch the devdocs application
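Before wiring up the key-binding, you can sanity-check the detection half of the script from a terminal (harmless either way -- it assumes an X session for wmctrl to query):

```shell
# Does wmctrl currently see a window titled "DevDocs"?
if wmctrl -l 2>/dev/null | grep -q DevDocs; then
  echo "DevDocs is running -- the script would focus it"
else
  echo "DevDocs not found -- the script would launch it"
fi
```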

⌨️ Adding key-binding

Adding the key-binding is easy in Ubuntu, go to Settings -> Keyboard and hit the + icon to add a new key-binding:

Quickly configure key-binding

🎇 That's it!

Test your key-binding by hitting shift+ctrl+d (or your chosen binding). The devdocs application should launch, and if it's already running it should focus when using the key-binding.

If you want to run all of the above commands in one go, use the following gist:

bash <(curl -s https://gist.githubusercontent.com/flipflopsandrice/07d84567f4197ef253055066669078b3/raw/6976125db4d5b7b22ba69e5ed3206be223f1ea68/install-devdocs.sh)