Welcome to life

Life sucks, but we are here to make the most of it.

This book aims to keep track of things.

Have you ever wondered "wow, I wish life came with a manual that just explained this"?

This is that manual. Well... not really, but it's a start.

Housekeeping

Dishwashing

How-to, tips, and tricks

Technology Connections made a great video on how dishwashers work and tips on how to use them.

If you're interested in more content about dishwashers, check out this earlier video:

TL;DR

  • Run the tap before you turn on the dishwasher - This improves the water temperature in the prewash cycle. It is important that your water is HOT.
  • Insert detergent into the prewash slot - This is a great way to get more use out of the prewash cycle.
  • Clean filter / macerator - Helps with improved water flow and reduces the risk of clogging.

Organization

Label things

Show and hide: keep the things you actually use visible and hide the rest. Visible clutter = bad.

Zones

Sort by urgency

Zone 1

  • Batteries

Zone 2

  • "Dont know looks important"

Zone 3

Not urgent but nice to access

  • Stickers & Stamps
  • Travel tools
  • Patches & Pouches

Zone 4

Infrequent access

  • Warranty Documentation
  • Tax filings
  • Sentimental stuff

Aesthetics & decor

Rule of 3: arrange things in groups of three.

Obsidian

Tasks

3D Printing

Gridfinity

Modeling for Gridfinity

Fusion360 plugin

There is a GridfinityGenerator Fusion 360 plugin that will do almost all of the heavy lifting for you: simply pick whether you want magnets, holes, a label, and what dimensions you want, et voilà.

Baseplates

My personal favorite baseplate is the magnet snap-fit baseplate from Gridfinity Magnet Light Baseplate.

Multiboard

Multiboard is another grid-based mounting solution that works in multiple orientations. It can handle a significant amount of weight, especially when distributed across multiple mounting points.

Underware

Need under-desk cable management?

Networking

Ubiquiti

Home Assistant

Information Displays

Let's say you want a display in your office that is on all the time (24/7), toggling between Grafana dashboards, your favourite websites, or your own custom content.

The Setup

For hardware you will want a screen (any old TV will do), and a small computer (like a Raspberry Pi or a Dell Optiplex).

Software

Pre-requisites (based on Arch Linux):

chromium
xorg-server
xorg-xinit
openbox
xorg-xrandr
ddcutil
i2c-tools
konsole
nodejs
npm
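
On Arch these can be installed in one go (--needed skips anything that is already present):

sudo pacman -S --needed chromium xorg-server xorg-xinit openbox xorg-xrandr ddcutil i2c-tools konsole nodejs npm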

Create /etc/systemd/system/getty@tty1.service.d/override.conf with the following content (replace DISPLAY_USERNAME_HERE with the username of the user you want to autologin as):

[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin DISPLAY_USERNAME_HERE --noclear %I $TERM

Make sure the file has sane permissions:

chmod 0644 /etc/systemd/system/getty@tty1.service.d/override.conf
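
If you created the override by hand rather than through systemctl edit, reload systemd so it picks up the drop-in:

systemctl daemon-reload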

Then create /home/DISPLAY_USERNAME_HERE/.xinitrc with the following content:

#!/bin/sh
# Dynamically detect the connected output and rotate it
OUTPUT=$(xrandr --query | grep " connected" | cut -d ' ' -f1 | head -n 1)
xrandr --output "$OUTPUT" --rotate right --brightness 1

# Start the window manager in the background and give it a moment to come up
openbox-session &
sleep 2

# Disable screen blanking and power management
xset s off
xset s noblank
xset -dpms

# Start the helper script (swap bun for node if you installed nodejs from the list above),
# then launch Chromium in kiosk mode; Chromium keeps the X session alive
bun run /home/DISPLAY_USERNAME_HERE/script.js &
chromium --kiosk --noerrdialogs --disable-infobars --disable-session-crashed-bubble --disable-features=TranslateUI "CHROMIUM_URL_HERE"

Ensure it's owned by the user and has the correct permissions (0755).

Create /home/DISPLAY_USERNAME_HERE/.bash_profile (0644 owned by user) with the following content:

# Start X automatically when logging in on tty1
if [[ -z $DISPLAY ]] && [[ $(tty) == /dev/tty1 ]]; then
    startx
fi

Mission Control

Let's say you have a display that you want on all the time. You might want some sort of daemon running on the machine driving it so that you can control it from elsewhere. For this we can use v3x-mission-control.

Mission Control is a lightweight Rust daemon that exposes your display's backlight, brightness, and Chromium controls to Home Assistant via the MQTT integration.
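
Whichever daemon you use, it is worth confirming first that the panel actually responds to DDC/CI. ddcutil (already in the prerequisites above) can do this; VCP feature 10 is brightness. A quick sanity check:

ddcutil detect        # list displays that speak DDC/CI
ddcutil getvcp 10     # read the current brightness
ddcutil setvcp 10 70  # set brightness to 70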

Software

VSCode / Codium / Cursor / whatever IDE is hot these days

Tips & Tricks

Auto Import Missing Imports

This is going to be a life-saver. Back in the Eclipse days, ctrl+shift+o would auto-import missing imports. The same feature is already built into VSCode, but for some reason it is not a default keybind.

Open the command palette with ctrl+shift+p, type "Preferences: Open Keyboard Shortcuts (JSON)", and press enter.

Throw the following in your keybindings.json file:

{
    "key": "ctrl+shift+i",
    "command": "editor.action.codeAction",
    "args": {
        "kind": "source.addMissingImports",
        "apply": "ifSingle"
    },
    "when": "editorHasCodeActionsProvider && editorTextFocus && !editorReadonly"
}

Now every time you press ctrl+shift+i it will auto-import missing imports.

P.S. You can trigger this by hand by clicking the "Quick Fix" button next to the error and then selecting "Add All Missing Imports".

Prometheus

Label Naming

When you want to make sure metrics end up looking nice on a Grafana dashboard, stick to Prometheus's default label naming convention: lowercase snake_case, i.e. label_name rather than labelName or label-name.
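
For example, a metric exposed with snake_case labels like the following (metric and label names are purely illustrative) renders cleanly in Grafana legends and queries:

http_requests_total{service_name="api",http_method="GET"} 42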

OpenTelemetry

Semantic Conventions https://opentelemetry.io/docs/concepts/semantic-conventions/

Configuration

TOML

Rust Crates

Rust has a few crates that allow for ridiculously powerful configuration management. The most notable ones are config and figment. They are capable of partial loading from environment variables, config files, and more.

Validation

At the time of writing (2025-02-26), TOML does not yet have a standard for validation. However, the Taplo project has an implementation called directives that can be used to validate your configuration against a schema.

You can unlock validation support in VSCode by installing the Even Better TOML extension.

Adding validation to your configuration

To add validation to your configuration, you need to add the #:schema ./schema.json directive to the root of your configuration file.

#:schema https://json.schemastore.org/github-action.json

The directive also supports URLs, allowing you to easily validate your own configurations against a remotely hosted schema.
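
As a rough sketch, a hypothetical config file validated against a local schema would look like this (the schema path and keys are made up for illustration):

#:schema ./schema.json

[server]
host = "127.0.0.1"
port = 8080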

Virtual Private Networks

What is a VPN?

Private Subnets

Tailscale

Zero Tier

Authentication

Json Web Tokens

JWTs are just a way to encode JSON data into a string.

A token is made up of three base64url-encoded segments separated by dots: a header, a payload (the claims), and a signature.
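
A minimal sketch of pulling those parts apart with standard shell tools, assuming the token is in $JWT (the segments are base64url, so they need translating and padding before base64 -d will accept them):

# Convert base64url to base64, pad to a multiple of 4, then decode
b64url_decode() {
  local s
  s=$(printf '%s' "$1" | tr '_-' '/+')
  while [ $(( ${#s} % 4 )) -ne 0 ]; do s="${s}="; done
  printf '%s' "$s" | base64 -d
}

# A JWT is <header>.<payload>.<signature>
IFS='.' read -r header payload signature <<< "$JWT"

b64url_decode "$header"   # e.g. {"alg":"HS256","typ":"JWT"}
b64url_decode "$payload"  # the JSON claims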

OAuth

LDAP

Radius

Containers

Kubernetes

Role Based Access Control

Generate a key using openssl:

openssl genrsa -out john.key 2048

Create a matching certificate signing request for user john (CN=) belonging to group group1 (O=)

openssl req -new -key john.key -out john.csr -subj "/CN=john/O=group1"
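
If you want to double-check the CSR before submitting it, openssl can print the subject back out:

openssl req -in john.csr -noout -subject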

Create a CertificateSigningRequest Kubernetes object and apply it

cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: john
spec:
  groups:
  - system:authenticated
  request: $(cat john.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF

Our signing request should now be available on the cluster and ready to sign; you can confirm its existence by running

kubectl get csr

Which should output something along the lines of

NAME   AGE   SIGNERNAME                            REQUESTOR      CONDITION
john   39s   kubernetes.io/kube-apiserver-client   system:admin   Pending

Now we approve the request, causing the server to sign it

kubectl certificate approve john

Verify our request has been approved and signed

kubectl get csr

Retrieve the certificate and store it locally

kubectl get csr john -o jsonpath='{.status.certificate}'  | base64 -d > john.crt
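
It does not hurt to inspect what came back and confirm the subject matches the user we asked for:

openssl x509 -in john.crt -noout -subject -issuer -dates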

Creating the Role

Next we create the role that our user will have. For demonstration purposes I'm calling my role john-role, although any name is possible here, as roles can be bound to multiple groups, users, etc.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
    name: john-role
rules:
    - apiGroups:
          - ""
      resources:
          - pods
      verbs:
          - get
          - list

In the above example we are granting the get and list verbs on the pods resource. Let's apply our role

kubectl apply -f role.yml

Creating a RoleBinding

To connect our user to the role it should have, we create a RoleBinding. The RoleBinding should look something like below.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
    name: john-binding
roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: john-role
subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: john

Next we apply the role binding.

kubectl apply -f rolebinding.yml

Loading the user into your config

Now that we have arrived at the final steps, we can load the user into our config using the following command. This adds credentials named john to our kubeconfig, embedding the certificate and key we just generated and signed.

kubectl config set-credentials john --client-key=john.key --client-certificate=john.crt --embed-certs=true

All that's left is to create a new context (in this case also named john).

kubectl config set-context john --cluster=default --user=john
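
If your cluster is not actually called default, you can list the cluster names known to your kubeconfig first:

kubectl config get-clusters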

And voila, you're done. Your newly created user should now have all the permissions you so desire, and be ready to go.

Setting up local environment

Locally we now have the john user account. Let's put it to work. First, let's tell kubectl to use the john context.

kubectl config use-context john

You can verify that you have switched contexts by using the following command

kubectl config current-context

Now let's test our permissions.

kubectl get pods

The above command should return valid output. Now let's try something we shouldn't be allowed to do; the following should throw a permission denied error:

kubectl run web-2 --image=nginx
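
You can also ask the API server what the current context is allowed to do, without actually creating anything (a handy check while debugging RBAC):

kubectl auth can-i list pods    # yes
kubectl auth can-i create pods  # no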

Final

If you've followed all of the above steps correctly, you should now have your very own user account john with access to your cluster. If you're looking to read more about how Kubernetes RBAC works, don't forget to check the RBAC documentation. This post was inspired by this SO answer.

Metrics & Service Monitors

To keep track of the health of your services you can set up a monitoring.coreos.com/v1.ServiceMonitor resource.

This resource defines which service will be monitored and which endpoints, authentication, scrape interval, etc. to use.

Creating your first ServiceMonitor

To get started, create a ServiceMonitor resource along the lines of the following.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-service-monitor
  namespace: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: http
      interval: 30s
      path: /metrics

The above ServiceMonitor selects Services labelled app: my-app in the my-app namespace and scrapes their http port on the /metrics path every 30 seconds.

Service Implementation

On the side of your service you will need to expose an HTTP endpoint (generally /metrics) that returns the metrics in a format that Prometheus can scrape.

An example of what this output might look like is:

# HELP my_metric_1 A metric
# TYPE my_metric_1 gauge
my_metric_1 1.0
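
You can sanity-check the endpoint with curl before wiring up the ServiceMonitor; the port below is only an example, use whatever your service actually listens on:

curl -s http://localhost:8080/metrics | head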