Feuerfest

Just the private blog of a Linux sysadmin

One big step for Mastodon to rule the social media world

Photo by Jonathan Cooper: https://www.pexels.com/photo/animals-head-on-exhibition-9660890/

For years Mastodon had one prominent missing feature: you wouldn't see all replies to a toot (the Mastodon term for a "Tweet") because of Mastodon's federated nature. There were exceptions, but in general it meant that you had to open the post on the Mastodon instance it originated from. Only then would you be able to read all comments.

Naturally this feature was sought after for years. A GitHub issue was opened as far back as 2018 (Mastodon #9409: Fetch whole conversation threads).

Now it seems the biggest technical part has been resolved! In Mastodon #32615: Add Fetch All Replies Part 1: Backend, user sneaker-the-rat did all the groundwork to incorporate that functionality into the backend of Mastodon. And while he mentions that there is still work to do, the Mastodon community so far seems happy that this issue is finally getting fixed, providing a much smoother experience for everyone.


Every recommendation algorithm, ever

Photo by Tima Miroshnichenko: https://www.pexels.com/photo/a-computer-monitor-5380589/

Algorithm: Yo, look here! On the start page! A recommendation for a movie/video/song/article from a genre you've never watched/listened to/read! But it's one of our own productions!

Algorithm: Or content you already consumed 4 weeks ago. Surely you'd like to consume it again while the memory is still fresh, right?

Algorithm: On the other hand, we have content you rated with a "Like" years ago. But we completely ignore your recent interests and likes when proposing those.

Me: Uh, where is the notice about this new piece of content, released today, from the series I've been watching for months and whose new parts I always consume on the day of their release? Do I really have to use the search?

Algorithm: Uh... can I interest you in some World War documentary?

*sigh* Every. Single. Time.

Folks! Don't declare that your algorithm helps users find interesting new content when all it does is advertising.


Looking at the Kobayashi Maru simulation from a completely different perspective

Memory Alpha Wiki https://memory-alpha.fandom.com/wiki/Kobayashi_Maru

This is one of those texts that left me with my mouth hanging open.

I know Star Trek. I know about the fictional Kobayashi Maru simulation used in Starfleet to put cadets in a 100% guaranteed "no-win" scenario. But never ever have I thought about the whole simulation, its tasks, its challenges and outcomes from this perspective. Never ever have I realised how much the simulation can contradict the core values of the Federation.

Hence I really urge you to read "The thing about the Kobayashi Maru" written by Greg Pogorzelski. It really puts everything into a new perspective.

And if medium.com should ever vanish and take this glorious text with it, here is a link to an archived version: https://archive.is/Cg3pd


uMatrix: Fixing error 403 when opening links in the web.de/GMX webmailer

Dall-E

The problem

Acquaintances of mine receive mails at their GMX address which contain clickable links. However, when clicking a link you do not end up on the GMX redirect page and then on the page you actually wanted to visit.
Instead you get the error message:

Ups, hier hat sich ein Fehler eingeschlichen...
Fehler-Code: 403

(Roughly: "Oops, an error has crept in here... Error code: 403")

With a web.de address it is exactly the same. Well, that is to be expected, since both GMX and web.de belong to the same company and the webmailers are the same, just slightly adapted in appearance.

Turns out: they finally use an adblocker. In this case uMatrix. And uMatrix has, among other features, the ability to hide the so-called HTTP referer from other sites.
Normally the referer contains the address of the website through which I arrived at another website.

If I search on Google for a problem, for example, and click one of the websites in the results, the referer is transmitted to the website being opened. This makes it possible to evaluate where a visitor came from and what they searched for. Quite relevant information for many website operators, but of course, under certain circumstances, a loss of privacy for the user.
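
For illustration, the request to the clicked result could look like this (host and search term are invented):

GET /interesting-article HTTP/1.1
Host: www.example.com
Referer: https://www.google.com/search?q=some+problem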

Therefore uMatrix replaces the referer with a different value. This is described here: https://github.com/gorhill/uMatrix/wiki/Per-scope-switches#spoof-referer-header

However, the web.de/GMX link redirect relies on the HTTP referer. Since uMatrix replaces this data, the redirect does not know where it should redirect to and you get error 403 (which presumably stands for HTTP 403 Forbidden).
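
If this diagnosis is correct, the behaviour can be reproduced with curl. The deref URL below is a made-up placeholder; in reality you would copy a real one from a link in a mail:

# Hypothetical deref URL - copy a real one from a mail
URL="https://deref-web.de/mail/client/dereferrer?redirectUrl=https%3A%2F%2Fexample.com"

# Without a Referer header: should yield the 403 error
curl -s -o /dev/null -w '%{http_code}\n' "$URL"

# With the webmailer as referer: should yield the redirect page
curl -s -o /dev/null -w '%{http_code}\n' -H 'Referer: https://navigator.web.de/' "$URL"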

The solution

The option that replaces the referer is called "Spoof Referer header" ("Referrer verschleiern" in the German UI) and can be found in uMatrix's menu, which is reachable via the three dots.

Specifically, for web.de we have to disable this for the domains navigator.web.de & deref-web.de.
For GMX analogously for the domain of the web interface and for deref-gmx.de.

First we open the uMatrix overview by clicking on the uMatrix icon (usually to the right of the address bar).

Step 1: We change the scope to navigator.web.de so that the change applies to exactly this domain only. By default web.de is selected here, which we don't want. So make sure the complete field is highlighted in blue.

Step 2: We click on the three-dots menu.

Step 3: We disable the "Spoof Referer header" option so that it is greyed out, as in the picture.

Step 4: Then we click on the now blue-highlighted padlock icon to save the changes.

Now we have to do exactly the same once more for the domain deref-web.de or deref-gmx.de respectively. For this it suffices to simply click on a link in a mail so that the page with the error message opens.

Step 1: We leave the scope at deref-web.de or deref-gmx.de respectively. As there is no subdomain here, this is already correctly selected.

Step 2: We click on the three-dots menu.

Step 3: We disable the "Spoof Referer header" option so that it is greyed out, as in the picture.

Step 4: Then we click on the now blue-highlighted padlock icon to save the changes.
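
If you prefer uMatrix's text-based rules over clicking through the UI, the same result should be achievable via the "My rules" pane of the uMatrix dashboard. Assuming the referrer-spoof switch keyword from uMatrix's rule syntax, the entries would look like this:

referrer-spoof: navigator.web.de false
referrer-spoof: deref-web.de false
referrer-spoof: deref-gmx.de false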

Now, to be safe, it's best to close the browser completely, or at least reload the tab with the web.de/GMX web interface.

If you then click on a link, the familiar redirect notice should appear and after a few seconds you should be on the actual page.


Why I prefer !requiretty over "ssh -t"

Dall-E https://admin.brennt.net/bl-content/uploads/pages/dad5b98ab9f04a2cdca5de3afe2f6b0e/dall-e_sudo.jpg

Claudio Künzler, whom I know briefly from working with him on enhancing his check_equallogic back in 2010, wrote an article over at Geeker's Digest on How to use sudo inside SSH command. Of course he mentions the ssh -t parameter, as without it we would get the following error message when calling sudo (example shamelessly stolen from his article 😇):

ck@linux:~$ ssh targetserver "sudo whoami"
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
sudo: a password is required

And ssh -t is the right call here. Well, to be fair: it's not the only solution, and in my eyes not even the best one.

No, I am not talking about piping the password into the command prompt, which is recommended as a solution so often (it's not one!) that it makes me sad.

I am talking about negating requiretty in the /etc/sudoers file, or in a file under /etc/sudoers.d/ respectively.

Let's take the /etc/sudoers.d/icinga2 file I use in my article How to monitor your APT-repositories with Icinga:

Here I must use NOPASSWD for all executed commands and monitoring plugins, as well as the line Defaults:icinga2 !requiretty. The latter negates the need for a tty for the icinga2 user completely. Omitting either NOPASSWD or !requiretty will give us the error message we saw above.

root@admin:~ # cat /etc/sudoers.d/icinga2
# This line disables the need for a tty for sudo
#  else we will get all kinds of "sudo: a password is required" errors
Defaults:icinga2 !requiretty

# sudo rights for Icinga2
icinga2  ALL=(ALL) NOPASSWD: /usr/bin/unattended-upgrades
icinga2  ALL=(ALL) NOPASSWD: /usr/bin/unattended-upgrade
icinga2  ALL=(ALL) NOPASSWD: /usr/bin/apt-get
icinga2  ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/check_apt
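
With that file in place, the ssh call from the beginning works without a tty, so no -t is needed (hostnames are illustrative):

# No password prompt (NOPASSWD) and no tty required (!requiretty)
ssh icinga2@targetserver "sudo /usr/lib/nagios/plugins/check_apt"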

It's also possible to negate requiretty based just on the path to the binary, as mentioned in this StackExchange question: How to disable requiretty for a single command in sudoers?
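
A minimal sketch of that per-command variant, using one of the plugins from the file above - note the ! directly after Defaults, which binds the setting to a command instead of a user:

Defaults!/usr/lib/nagios/plugins/check_apt !requiretty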

However, keep in mind that the ordering of lines in a sudoers file is important! Quoting man sudoers from the SUDOERS FILE FORMAT section:

When multiple entries match for a user, they are applied in order. Where there are multiple matches, the last match is used (which is not necessarily the most specific match).

Why not just use ssh -t?

Personally I prefer configuring sudo-related parameters in an /etc/sudoers.d/ file. My reasons are:

When properly configured via a sudoers file, it doesn't matter whether a command is called via ssh, ssh -t or any other way. This enhances operational stability and makes things easier for users, as they don't have to remember to add the -t parameter.

And it at least serves as some form of documentation that this user/binary is called from another script/host/etc., giving you a clue what these sudo rights are needed and used for.


Puppet is dead. Long live OpenVox!

Photo by cottonbro studio: https://www.pexels.com/photo/people-toasting-wine-glasses-3171837/

This is an update to my previous blogpost Puppet goes enshittyfication (Updated).

In the meantime the fork has happened. It's called OpenVox (https://github.com/openvoxproject) in reference to Vox Pupuli - the name of the community of module maintainers, authors and various other contributors around Puppet. But as we all know by now, the community is dead. At least for Perforce. Well... as long as they can't get any real money out of it anyway...

The least surprising part? As of now there are no officially maintained container images for Puppet anymore.

Martin Alfke's betadots GmbH had taken over the maintenance of the puppetserver and puppetdb container images after Puppet by Perforce abandoned them years ago. Now, with the fork happening, Puppet effectively being a closed-source software product and - depending on how it develops - little to absolutely no support coming from Perforce, it simply makes no sense for betadots GmbH to keep doing free work for Puppet by Perforce. So they stopped, effective immediately. Understandably.

Hence they will be maintaining the openvoxserver & openvoxdb container images.

They announced this in a post on their betadots dev.to blog - in German, but machine translation should work well. Read it here: https://dev.to/betadots/das-neue-puppet-open-source-projekt-heisst-openvox-2330

A questionable decision...

This means that in 2025 there are no container images for Puppet with official vendor support.

And as much as I like how this shows the power of a strong open source community...

I can't find a phrase strongly worded enough to describe this utter strategic mismanagement, so just look at this picture instead:

Photo by Photo By: Kaboompics.com: https://www.pexels.com/photo/a-short-haired-woman-holding-a-pair-of-eyeglasses-4491435/
