Feuerfest

Just the private blog of a Linux sysadmin

The 11th commandment: Thou shalt not copy from the artificious intellect without understanding

It happened again. Someone asked an LLM a benign technical question. "How to check the speed of a hard drive?"

And the LLM answered!

Only the human wasn't clever enough to understand the answer. Nor was he particularly careful in formulating the question: certain details were left out, and critical information was not recognised as such. The human was too eager to finally acquire this long-sought knowledge.

But... Is an answer to a thoughtlessly asked question a reliable & helpful one?

The human didn't care. He copied & pasted the command directly into the root shell of his machine.

The sacred machine spirit awakened to life and fulfilled its divine duty.

And... It actually gave the human the answer he was looking for. Only later did the human learn that the machine had given him even more. The machine spirit answered the question, "How quickly can the hard drive overwrite all my cherished memories from 20 years ago that I never backed up?"

TL;DR: r/selfhosted: TIFU by copypasting code from AI. Lost 20 years of memories

People! Make backups! Never, ever work on your only, single, lifetime copy of your data while executing potentially harmful commands. Jesus! Why do so many people fail to grasp this? I don't get it.

And as for AI tools like ChatGPT, DeepSeek & others: Yes, they can be great & useful. But they don't understand. They have no sentience. They can't comprehend. They have no understanding of syntax or semantics. And therefore they can't check if the two match. ChatGPT won't notice that the answer it gives you doesn't match the question. Hell, there are enough people out there who won't notice. YOU have to think for the LLM! YOU have to give the full, complete context. Anything left out will not be considered in the answer.

In addition: Pick a topic you're well-versed in. Do you know how much just plain wrong stuff there is on the internet about that subject? Exactly.

Now ask yourself: Why should it be any different in a field you know nothing about?

All the AI companies have just pirated the whole internet. Copied & absorbed what they could in the vague hope of getting that sweet, sweet venture capital money. Every technically incorrect solution. Every "easy fix" that says "just do a chmod -R 777 * and it works".

And you just copy and paste that into your terminal?

If an LLM gives you an answer, you do the following:

  1. You ask the LLM to check whether its answer is correct. Oh, yes. You will be surprised how often an LLM corrects itself.
    • And we haven't even touched the topic of hallucinations...
  2. Then you find the documentation/manpage for that command.
  3. You read said documentation/manpage(s) until you understand what the code/command does and how it works.
  4. Only now may you execute that command.
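Applied to the disk-speed question from above, that homework pays off quickly: a benchmark should only ever *read* from the device, never write to it. A minimal sketch of the idea (my own illustration; it benchmarks a scratch file instead of a real device, so nothing can be destroyed):

```python
# Sequential-read benchmark sketch: the safe pattern reads data and throws
# it away, like `dd if=/dev/sda of=/dev/null`. Swapping source and target
# (writing TO the device) is exactly the mistake that wipes a disk.
import os
import tempfile
import time

def read_throughput(path: str, block_size: int = 1024 * 1024) -> float:
    """Read a file in blocks, discard the data, return MiB/s."""
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)   # read-only: nothing is written
            if not block:
                break
            total += len(block)
    elapsed = time.monotonic() - start
    return (total / (1024 * 1024)) / max(elapsed, 1e-9)

# Create a 16 MiB scratch file to benchmark -- never a real block device.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(16 * 1024 * 1024))
    scratch = tmp.name

speed = read_throughput(scratch)
os.unlink(scratch)
print(f"sequential read: {speed:.1f} MiB/s")
```

On a real disk you would point the same read-only pattern at the device node (or simply use `hdparm -t /dev/sda`, which likewise only reads); the point is that understanding which side of the command writes is the difference between a benchmark and a data loss.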

Sounds like too much work? Yeah.. About that...

Comments

Quo vadis Bludit? And a new blog theme

I switched to a new blog theme. While Solen was great, I didn't like the mandatory article images. It just makes no sense to search for the 20th "code lines displayed in some kind of terminal or IDE" image for a blogpost. Also, the overall look & feel was a bit too "early 2000s". I wanted something that looked a bit more refined. Cleaner.

Browsing through https://themes.bludit.com/ I found the Keep It Simple theme. The only catch: it was last modified in 2020 for a 2.x Bludit version, while we are now at 3.16.2.

Luckily only a few modifications were needed, and I finally took the time to create a separate Git repository for my theme modifications. Have a look if you want to use it too: https://github.com/ChrLau/keep-it-simple

It contains some CSS changes, but I kept the original CSS in via comments, so it should be rather easy to switch back. The changes mostly affect colours, blockquotes and fonts, not the general layout; hence incorporating updates from the original StyleShout template should be easy.

Some minor tweaks are still coming, as I still have to check mobile & widescreen support, and I want a different font for blockquotes. The current one, Merriweather Italic, doesn't look good, as the vertical alignment is too uneven for my liking. Especially the letter "e" sits too high and gives every word containing an "e" a somewhat strange look.

But in general I am happy with the current look & feel.

I mean, the W3 Validator still isn't completely happy, but there are some things I can't fix directly; the only option I have is to open a pull request: Bludit: Fix canoncial links in siteHead plugin. Sadly, alt attributes for images are also not possible, and the corresponding issue in the Bludit repository has been closed since 2022 as "this will be fixed with Bludit 4.x". Meanwhile we are still at Bludit 3.x, a 4.x branch doesn't even exist, development has really slowed down, and there isn't much activity from the sole developer. I seriously hope these are not bad omens...

Also, there's no activity on my cookie security issue Enhance cookie security by setting samesite attribute and adding __Secure- prefix to sessionname (Bludit issue 1582), and at least one unpatched stored XSS does exist: Bludit - Stored XSS Vulnerability (Bludit issue 1579).

Let's just hope the best for now...

Comments

gethomepage 1.0 release and new security parameter introduced

gethomepage.dev: https://gethomepage.dev/assets/homepage_demo_clip.webp

homepage (GitHub), the dashboard I use to keep an overview of all the services I run in my LAN, released its 1.0 version yesterday, on March 14th, 2025.

With that, they introduced a new parameter limiting the hosts via which the dashboard can be accessed. I haven't yet read why this was introduced, but it's quickly fixed.

As Watchtower does all the maintenance work for the containers running on my Portainer installation, I was already greeted with a broken dashboard. In the logfile of the container running homepage I saw the following error:

[2025-03-15T16:54:13.497Z] error: Host validation failed for: portainer.lan:3000. Hint: Set the HOMEPAGE_ALLOWED_HOSTS environment variable to allow requests from this host / port.

As I don't use a separate IP or hostname for the dashboard and just forward port 3000/tcp to the homepage container, I access it via the hostname of my Portainer host. Therefore this message makes sense.

Luckily the documentation for the newly required environment variable is already on their homepage: https://gethomepage.dev/installation/#homepage_allowed_hosts

Armed with this knowledge we can change the stack file (Portainer's term for a docker-compose file, not to be confused with the Docker Swarm command docker stack) and introduce the HOMEPAGE_ALLOWED_HOSTS parameter. I added the IP address too, in case the DNS servers in my LAN stop working.

services:
  homepage:
    image: ghcr.io/gethomepage/homepage:latest
    container_name: homepage
    ports:
      - 3000:3000
    environment:
      HOMEPAGE_ALLOWED_HOSTS: 192.168.178.21:3000,portainer.lan:3000
    volumes:
      - /opt/docker/homepage.dev/config:/app/config # Make sure your local config directory exists
      - /var/run/docker.sock:/var/run/docker.sock # (optional) For docker integrations
      - /opt/docker/homepage.dev/icons:/app/public/icons # icons, reference as /icons/picture.png in YAML

After that just hit the "Update the stack" button and it's working again.
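Conceptually, the check behind that error message boils down to a comma-separated allow-list of Host values. A toy sketch of the idea (my own illustration, not gethomepage's actual implementation):

```python
# Toy allow-list check in the spirit of HOMEPAGE_ALLOWED_HOSTS
# (illustration only -- not gethomepage's real code).
def host_allowed(host_header: str, allowed_hosts: str) -> bool:
    """allowed_hosts is a comma-separated list, like the env variable."""
    allowed = {h.strip() for h in allowed_hosts.split(",")}
    return host_header in allowed

allowed = "192.168.178.21:3000,portainer.lan:3000"
print(host_allowed("portainer.lan:3000", allowed))  # True: request is served
print(host_allowed("evil.example:3000", allowed))   # False: host validation fails
```

This also explains why both the hostname and the IP entry are useful: whichever value ends up in the browser's Host header has to appear in the list verbatim, port included.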

Comments

One big step for Mastodon to rule the social media world

Photo by Jonathan Cooper: https://www.pexels.com/photo/animals-head-on-exhibition-9660890/

For years Mastodon had one prominent missing feature: because of Mastodon's federated nature, you wouldn't see all replies to a toot (that's the Mastodon term for a "Tweet"). There were exceptions, but in general it meant that you had to open the post on the Mastodon instance it originated from. Only then would you be able to read all replies.

Naturally this feature had been sought after for years; a GitHub issue was opened back in 2018 (Mastodon #9409: Fetch whole conversation threads).

Now it seems the biggest technical part has been resolved! In Mastodon #32615: Add Fetch All Replies Part 1: Backend, user sneaker-the-rat did all the basic work to incorporate that functionality into Mastodon's backend. And while he mentions that there is still work to do, the Mastodon community so far seems happy that this issue is finally being fixed, providing a much smoother experience for everyone.

Comments

Every recommendation algorithm, ever

Photo by Tima Miroshnichenko: https://www.pexels.com/photo/a-computer-monitor-5380589/

Algorithm: Yo, look here! On the start page! A recommendation for a movie/video/song/article from a genre you've never watched/listened to/read! But it's one of our own productions!

Algorithm: Or content you already consumed 4 weeks ago. Surely you'd like to consume it again while the memory is still fresh, right?

Algorithm: On the other hand we have content you rated with a "Like" years ago. - But we completely ignore your recent interests and likes when proposing those.

Me: Uh, where is the notice about this new piece of content, released today, from the series I've been watching for months and whose every new part I consume on the day of its release? Do I really have to use the search?

Algorithm: Uh.. Can I interest you in some World War documentaries?

*sigh* Every. Single. Time.

Folks! Don't claim your algorithm helps users find interesting new content when all it does is advertise.

Comments

uMatrix: Fixing error 403 when opening links in the web.de/GMX webmailer

Dall-E

The problem

Acquaintances of mine receive an email at their GMX address. It contains clickable links. Clicking one, however, does not take you to the GMX redirect page and then on to the page you actually wanted to visit.
Instead you get the error message:

Ups, hier hat sich ein Fehler eingeschlichen...
Fehler-Code: 403

(Roughly: "Oops, an error has crept in... Error code: 403")

With a web.de address it's exactly the same. Fair enough, that's to be expected, since both GMX and web.de belong to the same company and the webmailers are identical, just slightly adapted in appearance.

Turns out: they are finally using an adblocker. In this case uMatrix. And uMatrix has, among other things, a feature to hide the so-called HTTP Referer from other sites.
Normally the referrer contains the address of the website from which I arrived at another website.

If, for example, I search Google for a problem and click one of the websites in the results, the referrer is transmitted to the website I open. This allows the operator to evaluate where a visitor came from and what they searched for. Quite relevant information for many website operators, but of course potentially a loss of privacy for the user.

Therefore uMatrix replaces the referrer with a different value. This is described here: https://github.com/gorhill/uMatrix/wiki/Per-scope-switches#spoof-referer-header

However, the web.de/GMX link redirect is based on the HTTP Referer. Since uMatrix replaces this data, the redirect doesn't know where to forward you, and you get error 403 (which presumably stands for HTTP 403 Forbidden).
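The failure mode can be modelled in a few lines (a toy sketch of a referrer-dependent redirect; the real deref service's logic is of course not public):

```python
# Toy model of a redirect service that refuses requests without a
# plausible Referer header (NOT web.de's actual code).
from typing import Optional

def deref_status(referer: Optional[str]) -> int:
    """Return the HTTP status such a redirect service might answer with."""
    if referer and referer.startswith("https://navigator.web.de/"):
        return 302  # redirect on to the real target
    return 403      # missing or spoofed referrer: Forbidden

print(deref_status("https://navigator.web.de/mail"))  # 302
print(deref_status(None))                             # 403, what uMatrix's spoofing triggers
```

With uMatrix's spoofing active, the service effectively always sees the second case, hence the error page.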

The solution

The option that replaces the referer is called "Referrer verschleiern" ("Spoof Referer header") and can be found in the uMatrix menu, reachable via the three dots.

Specifically, for web.de we have to disable this for the domains navigator.web.de & deref-web.de.
For GMX, do the same for the domain of the web interface and for deref-gmx.de.

First we open the uMatrix overview by clicking the uMatrix icon (usually to the right of the address bar).

Step 1: We change the scope to navigator.web.de so that the change applies to exactly this domain. By default web.de is selected here, but that's not what we want. So make sure the entire field is highlighted in blue.

Step 2: We click the three-dots menu.

Step 3: We disable the "Referrer verschleiern" ("Spoof Referer header") option so that it is greyed out, as in the picture.

Step 4: Then we click the now blue-highlighted padlock icon to save the changes.

Now we have to do exactly the same for the domain deref-web.de or deref-gmx.de, respectively. To get there, it's enough to click a link in an email so that the page with the error message opens.

Step 1: We leave the scope at deref-web.de or deref-gmx.de. Since there is no subdomain here, the correct scope is already selected.

Step 2: We click the three-dots menu.

Step 3: We disable the "Referrer verschleiern" ("Spoof Referer header") option so that it is greyed out, as in the picture.

Step 4: Then we click the now blue-highlighted padlock icon to save the changes.

Now, to be safe, it's best to close the browser completely, or at least reload the tab with the web.de/GMX web interface.

If you then click a link, the familiar redirect notice should appear, and after a few seconds you should land on the actual page.

Comments