Feuerfest

Just the private blog of a Linux sysadmin

Linkdump July 2024

Photo by Element5 Digital: https://www.pexels.com/photo/person-holding-book-from-shelf-1370298/

The last Linkdump was in January 2024? How time flies..

https://www.simplermachines.com/why-you-need-a-wtf-notebook/: A nice method for collecting the things where you think "WTF?" when you are in a new team or at a new customer. Just collect first, learn, list, take notes. Then, after some time, start crossing points off the list for various reasons and work on the "real WTF ones".

https://www.spacebar.news/stop-using-brave-browser/: A text about what is wrong with everything around the Brave Browser.

https://privacytests.org/: A site run by a Brave employee, listing and comparing the privacy features of the various browsers.

https://github.com/fork-maintainers/iceraven-browser: A fork of Firefox for Android giving you more options. Most notably: about:config support!

https://developer.chrome.com/blog/resuming-the-transition-to-mv3: Yep, Google really pushes on with its Manifest v3, which will considerably limit the technical capabilities of ad-blocking add-ons in Chrome. The main reason why I switched back to Firefox.

https://github.com/ratatui-org/ratatui: "Ratatui is a crate for cooking up terminal user interfaces in Rust. It is a lightweight library that provides a set of widgets and utilities to build complex Rust TUIs. Ratatui was forked from the tui-rs crate in 2023 in order to continue its development." Just bookmarked that one in case I need it in the future.

https://anytype.io/: The Everything App. Haven't used it, but someone said he is looking forward to replacing Microsoft Teams with that App in his company.

https://jamesg.blog/advent-of-technical-writing/: Lots and lots of articles from a technical writer who shares what he has learned over the years.
From the same author there is also a book "Software Technical Writing: A Guidebook" available as PDF from his site: https://jamesg.blog/book.pdf

https://www.netways.de/blog/2024/01/19/check-system-basics/: One Icinga check plugin to rule them all! This plugin bundles the checks for memory, filesystem, PSI, load, sensors and netdev.

https://www.kernel.org/doc/html/latest/accounting/psi.html: Documentation for the Pressure Stall Information (PSI) interface. If enabled in your kernel, it is reachable via /proc/pressure/. Apparently I didn't know about this and only learned of it through the Netways check_system_basics plugin.
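
For a quick look you can simply cat the files in that directory. A minimal illustration (the values obviously depend on the current load of your system):

root@server:~# cat /proc/pressure/memory
some avg10=0.00 avg60=0.00 avg300=0.00 total=0
full avg10=0.00 avg60=0.00 avg300=0.00 total=0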

https://docs.cwtch.im/docs/intro/: "Cwtch (/kʊtʃ/ - a Welsh word roughly translating to “a hug that creates a safe place”) is a decentralized, privacy-preserving, metadata resistant messaging app." I don't use it yet, but bookmarked it to see how the project develops. I would really love to uninstall Whatsapp and Telegram from my mobile...

https://bios-pw.org/: Forgot your BIOS password? This generator will tell you the master password of your BIOS if you provide the manufacturer and the displayed hash.

https://archief.ntr.nl/tuinderlusten/en.html: Ever wanted to explore Jheronimus Bosch's painting "The Garden of Earthly Delights"? Now you can in detail with audio explanations. Really impressive.

https://www.wheresyoured.at/the-men-who-killed-google/: An article about Google's shift from better search results to more revenue and user engagement, and how that came to be.

https://medium.com/@maciej.pocwierz/how-an-empty-s3-bucket-can-make-your-aws-bill-explode-934a383cb8b1: Imagine your S3 bucket gets hit with 100,000 requests each hour, because your bucket name is listed as the default config in some open source tool. Imagine then that the tool actually makes requests to your S3 bucket.
Now imagine the surprised face of the author of that text when he found out he had to pay for all the denied traffic that resulted in HTTP 4xx errors.
Hot takeaways from this: 1. Include some random characters in your bucket name. 2. If you are evil, you can drive up the bill for any S3 bucket..
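Regarding the first takeaway, a small sketch of what that could look like when creating a bucket with the AWS CLI (bucket name is made up, the random suffix just needs to be unguessable):

aws s3 mb "s3://example-corp-backups-$(openssl rand -hex 4)"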

https://www.youtube.com/watch?v=OQoqlBog7UI: Cooking content! I'd been searching for a new non-stick pan and came across Hexclad. However, as they claim to have developed a new non-stick material - not using polytetrafluoroethylene (PTFE), or in short: Teflon - I was sceptical. After all, there is a reason why Teflon has been used for decades. Turns out: the pans work, but only for about 2 years, heavily depending on usage. After that you will have to send your pan in and get a free replacement because of the lifetime guarantee. Ok, good customer service on the one hand, but terrible for the environment on the other. Also.. I don't consider pans with a lifespan of just 2 years to be of high quality.. And I found many YouTubers who made videos such as this one. Therefore this isn't a single incident, no, it's a flaw in the material/product itself. There is also a nice text from someone (which I forgot to bookmark..) who wrote about how these pans were used in a series of Gordon Ramsay's "Hell's Kitchen" and there the pans really showed that the new material is not that durable after all. There was a nice scene where one could see how Gordon Ramsay quarrels with himself as a dish couldn't be prepared properly because evidently the pan had lost its non-stick capability in some region of the surface. Yep.. I bought a Teflon pan. 😅

https://jan.wildeboer.net/2023/02/Jekyll-Mastodon-Comments/: "Client-side comments with Mastodon on a static Jekyll website" Jan Wildeboer did that and I found the implementation interesting.

https://slate.com/technology/2024/05/deviantart-what-happened-ai-decline-lawsuit-stability.html: DeviantArt was one of those sites of the relatively early Internet which got a lot of attention, providing a community for artists to share their pictures. Over the years DeviantArt took a big hit. Instagram, Facebook.. Many sites gnawed at their userbase. In recent years they made a small revival, adding long sought-after features, etc.
And now they throw everything in the trash bin by jumping right onto the AI hype train and not doing much to combat the growing army of bots that use the original art of its users to create hundreds of fake profiles with AI-generated art, then circle-boost themselves - all to make the original artists vanish from the search results.
Seems like that was it for DeviantArt if they don't do anything against it. After all.. Which artist would join such a site?
(But to be fair: That problem exists everywhere, Youtube, Tiktok, etc.)

https://mpv.io/: An OpenSource video player for the command-line.

https://www.udm14.com/: You want a Google search result free of AI-generated content? Add the search parameter "&udm=14", which is now lovingly called "the disenshittification Konami code".
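For illustration, such a search URL looks roughly like this (query made up):

https://www.google.com/search?q=debian+unattended-upgrades&udm=14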

https://tedium.co/2024/05/17/google-web-search-make-default/: A text about the "disenshittification Konami code".

https://endoflife.date/: A nice website listing all of those EOL dates for software. Handy!

https://12factor.net/: This one was listed in a job ad as a "nice to have/know" point. As "The Twelve-Factor App" didn't ring a bell, I searched for it and found the website. Basically 12 design principles on how to structure your build environment, design your software architecture and handle other processes.
And yes, I know some projects that would benefit from following these.


The Human Factor in Software Development

Photo by cottonbro studio: https://www.pexels.com/photo/man-kissing-a-gypsum-head-3693078/

By chance I stumbled upon the YouTube channel of David Tielke today.
And after 2 videos on his channel YouTube suggested his keynote from the DWX23 to me. Title: "Der Faktor Mensch in der Softwareentwicklung" ("The human factor in software development")

It's an hour long, but really entertaining and instructive.
And his points about paying more attention to your colleagues regarding work-life balance, burnout and depression, and about not having only IT in your life (privately as well as professionally): I can fully subscribe to those.

I was in a day clinic twice, for several months each time, because of depression - though due to a then-undiagnosed attention deficit disorder (where depression is the most common symptom in adults) and not because of overwork etc.
Nevertheless, because of that I changed things in my life. I looked for hobbies and friends outside of IT.

And precisely because I have had such good experiences with that, I am so open & even forward about it. Depression, burnout, etc. are not lifelong stigmata. With the right help and some adjustments it can usually be managed very well. (Of course, every case is different & individual.) But I don't see a mental illness as a knockout criterion for a career, let alone as a character flaw. To people who think that way I wish, truly(!), with all my heart, that they never end up in such a situation themselves. Because the strength you have to muster while you are lying on the ground, while it feels as if the world is beating down on you, and you are then still expected to perform circus tricks... Just to, at some point after many weeks or months, finally get an appointment with a psychologist or psychotherapist..
That is a strength I wouldn't even credit some healthy people with.

So: Take care of yourselves. No job is more important than your life. No matter how great your employer is.

The video is embedded below. Or directly as a link: https://www.youtube.com/watch?v=Eh-UaaxBYDk


Icinga2 error "check command does not exist" because of missing constant

Photo by Christina Morillo: https://www.pexels.com/photo/software-engineer-standing-beside-server-racks-1181354/

Apparently this problem kept me busy far too long, as I kept looking only into the logfiles of the Icinga2 master. Mainly because the service definition for my icinga CheckCommand was still from a time when there was only one master without any agents. This led to the check being executed on the master, and hence I never saw the problems on the agent..

Additionally, the cluster and cluster-health checks only verify that all endpoints are connected. Which was the case the whole time. Therefore I got no error there either.

But what happened?

I defined a new CheckCommand. It worked fine on the master. Then I rewrote the service apply rule so that it matches all monitored Linux hosts. And then I got "Check command not found" for all these new service checks on all agent hosts.

I deleted the API config sync directories and restarted Icinga2 on the agents to trigger a new sync:

root@agent:/etc/icinga2# rm /var/lib/icinga2/api/zones-stage/* -rf && rm /var/lib/icinga2/api/zones/* -rf
root@agent:/etc/icinga2# systemctl restart icinga2.service

And suddenly all CheckCommands which are not part of the Icinga Template Library stopped working on the agents.

Uhm, ok. At this point I suspected I had somehow messed up my /etc/icinga2/zones.conf file some time ago. Turns out, this wasn't the case.

The root cause

Some weeks ago I defined a service check which is only executed on my Icinga2 master. However, I stored the CheckCommand and service configuration under /etc/icinga2/zones.d/master anyway, as you never know when this comes in handy. (This has since been corrected in the article.) But the Telegram API requires a token. And I defined that in /etc/icinga2/constants.conf - but this file isn't synced, as it is outside of /etc/icinga2/zones.d/master. Something I did on purpose, as I didn't want to sync the token to all agents.
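
For illustration, the constant is defined on the master roughly like this (token value obviously made up) - in a file that never leaves the master:

root@master:/etc/icinga2# grep TelegramBotToken /etc/icinga2/constants.conf
const TelegramBotToken = "1234567890:ExampleTokenDoNotUse"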

This apparently caused the config file sync to run into a syntax error, as the constant for the token couldn't be resolved.
But again.. This was only logged in the logfiles on the agents..

root@agent:/etc/icinga2# cat /var/log/icinga2/icinga2.log
[...]
[2024-07-17 22:39:04 +0200] information/ApiListener: Received configuration for zone 'global-templates' from endpoint 'master.domain.tld'. Comparing the timestamp and checksums.
[2024-07-17 22:39:04 +0200] information/ApiListener: Stage: Updating received configuration file '/var/lib/icinga2/api/zones-stage/global-templates//_etc/eventcommands.conf' for zone 'global-templates'.
[2024-07-17 22:39:04 +0200] information/ApiListener: Stage: Updating received configuration file '/var/lib/icinga2/api/zones-stage/global-templates//_etc/groups.conf' for zone 'global-templates'.
[2024-07-17 22:39:04 +0200] information/ApiListener: Stage: Updating received configuration file '/var/lib/icinga2/api/zones-stage/global-templates//_etc/host-templates.conf' for zone 'global-templates'.
[2024-07-17 22:39:04 +0200] information/ApiListener: Stage: Updating received configuration file '/var/lib/icinga2/api/zones-stage/global-templates//_etc/notifications.conf' for zone 'global-templates'.
[2024-07-17 22:39:04 +0200] information/ApiListener: Stage: Updating received configuration file '/var/lib/icinga2/api/zones-stage/global-templates//_etc/service-templates.conf' for zone 'global-templates'.
[2024-07-17 22:39:04 +0200] information/ApiListener: Stage: Updating received configuration file '/var/lib/icinga2/api/zones-stage/global-templates//_etc/telegrambot-notifications.conf' for zone 'global-templates'.
[2024-07-17 22:39:04 +0200] information/ApiListener: Stage: Updating received configuration file '/var/lib/icinga2/api/zones-stage/global-templates//_etc/templates.conf' for zone 'global-templates'.
[2024-07-17 22:39:04 +0200] information/ApiListener: Stage: Updating received configuration file '/var/lib/icinga2/api/zones-stage/global-templates//_etc/timeperiods.conf' for zone 'global-templates'.
[2024-07-17 22:39:04 +0200] information/ApiListener: Stage: Updating received configuration file '/var/lib/icinga2/api/zones-stage/global-templates//_etc/users.conf' for zone 'global-templates'.
[2024-07-17 22:39:04 +0200] information/ApiListener: Applying configuration file update for path '/var/lib/icinga2/api/zones-stage/global-templates' (6688 Bytes).
[2024-07-17 22:39:04 +0200] information/ApiListener: Received configuration updates (2) from endpoint 'master.domain.tld' are different to production, triggering validation and reload.
[2024-07-17 22:39:04 +0200] critical/ApiListener: Config validation failed for staged cluster config sync in '/var/lib/icinga2/api/zones-stage/'. Aborting. Logs: '/var/lib/icinga2/api/zones-stage//startup.log'
[...]

The /var/lib/icinga2/api/zones-stage/startup.log has the details:

root@agent:/etc/icinga2# cat /var/lib/icinga2/api/zones-stage/startup.log
[2024-07-17 23:36:19 +0200] information/cli: Icinga application loader (version: r2.12.3-1)
[2024-07-17 23:36:19 +0200] information/cli: Loading configuration file(s).
[2024-07-17 23:36:19 +0200] information/ConfigItem: Committing config item(s).
[2024-07-17 23:36:19 +0200] critical/config: Error: Error while evaluating expression: Tried to access undefined script variable 'TelegramBotToken'
Location: in /var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf: 46:26-46:41
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(44):     HOSTDISPLAYNAME = "$host.display_name$"
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(45):     SERVICEDISPLAYNAME = "$service.display_name$"
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(46):     TELEGRAM_BOT_TOKEN = TelegramBotToken
                                                                                                               ^^^^^^^^^^^^^^^^
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(47):     TELEGRAM_CHAT_ID = "$user.vars.telegram_chat_id$"
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(48):

[2024-07-17 23:36:19 +0200] critical/config: Error: Error while evaluating expression: Tried to access undefined script variable 'TelegramBotToken'
Location: in /var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf: 20:26-20:41
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(18):     NOTIFICATIONCOMMENT = "$notification.comment$"
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(19):     HOSTDISPLAYNAME = "$host.display_name$"
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(20):     TELEGRAM_BOT_TOKEN = TelegramBotToken
                                                                                                               ^^^^^^^^^^^^^^^^
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(21):     TELEGRAM_CHAT_ID = "$user.vars.telegram_chat_id$"
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(22):

[2024-07-17 23:36:19 +0200] critical/config: 2 errors
[2024-07-17 23:36:19 +0200] critical/cli: Config validation failed. Re-run with 'icinga2 daemon -C' after fixing the config.

However... The tricky part is that a config validation will succeed!

root@agent:/etc/icinga2# icinga2 daemon -C
[2024-07-18 00:00:16 +0200] information/cli: Icinga application loader (version: r2.12.3-1)
[2024-07-18 00:00:16 +0200] information/cli: Loading configuration file(s).
[2024-07-18 00:00:16 +0200] information/ConfigItem: Committing config item(s).
[2024-07-18 00:00:16 +0200] information/ApiListener: My API identity: agent.domaint.tld
[2024-07-18 00:00:16 +0200] information/ConfigItem: Instantiated 1 CheckerComponent.
[2024-07-18 00:00:16 +0200] information/ConfigItem: Instantiated 5 Zones.
[2024-07-18 00:00:16 +0200] information/ConfigItem: Instantiated 1 IcingaApplication.
[2024-07-18 00:00:16 +0200] information/ConfigItem: Instantiated 2 Endpoints.
[2024-07-18 00:00:16 +0200] information/ConfigItem: Instantiated 1 FileLogger.
[2024-07-18 00:00:16 +0200] information/ConfigItem: Instantiated 235 CheckCommands.
[2024-07-18 00:00:16 +0200] information/ConfigItem: Instantiated 1 ApiListener.
[2024-07-18 00:00:16 +0200] information/ScriptGlobal: Dumping variables to file '/var/cache/icinga2/icinga2.vars'
[2024-07-18 00:00:16 +0200] information/cli: Finished validating the configuration file(s).

And this was the reason why I was too focused on the master..

What I learned later is that you can use the following command to validate the configuration from the stage directory.
The relevant documentation can be found in the Icinga docs under "Config Sync: Receive Config".

root@agent:/var/log/icinga2# icinga2 daemon -C --define System.ZonesStageVarDir=/var/lib/icinga2/api/zones-stage/
[2024-07-21 16:28:51 +0200] information/cli: Icinga application loader (version: r2.12.3-1)
[2024-07-21 16:28:51 +0200] information/cli: Loading configuration file(s).
[2024-07-21 16:28:51 +0200] information/ConfigItem: Committing config item(s).
[2024-07-21 16:28:51 +0200] critical/config: Error: Error while evaluating expression: Tried to access undefined script variable 'TelegramBotToken'
Location: in /var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf: 20:26-20:41
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(18):     NOTIFICATIONCOMMENT = "$notification.comment$"
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(19):     HOSTDISPLAYNAME = "$host.display_name$"
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(20):     TELEGRAM_BOT_TOKEN = TelegramBotToken
                                                                                                               ^^^^^^^^^^^^^^^^
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(21):     TELEGRAM_CHAT_ID = "$user.vars.telegram_chat_id$"
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(22):

[2024-07-21 16:28:51 +0200] critical/config: Error: Error while evaluating expression: Tried to access undefined script variable 'TelegramBotToken'
Location: in /var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf: 46:26-46:41
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(44):     HOSTDISPLAYNAME = "$host.display_name$"
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(45):     SERVICEDISPLAYNAME = "$service.display_name$"
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(46):     TELEGRAM_BOT_TOKEN = TelegramBotToken
                                                                                                               ^^^^^^^^^^^^^^^^
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(47):     TELEGRAM_CHAT_ID = "$user.vars.telegram_chat_id$"
/var/lib/icinga2/api/zones-stage//global-commands/_etc/telegrambot-commands.conf(48):

[2024-07-21 16:28:51 +0200] critical/config: 2 errors
[2024-07-21 16:28:51 +0200] critical/cli: Config validation failed. Re-run with 'icinga2 daemon -C' after fixing the config.

The solution

On the master I moved the 2 files from /etc/icinga2/zones.d to /etc/icinga2/conf.d and restarted the service.

root@master:/etc/icinga2# mv /etc/icinga2/zones.d/global-commands/telegrambot-commands.conf /etc/icinga2/conf.d/
root@master:/etc/icinga2# mv /etc/icinga2/zones.d/global-templates/telegrambot-notifications.conf /etc/icinga2/conf.d/
root@master:/etc/icinga2# systemctl restart icinga2.service

On the agent a simple restart is enough:

root@agent:/etc/icinga2# systemctl restart icinga2.service

And after that everything worked again.

Another problem detected - an even deeper-rooted cause

In the aftermath I was curious why & how Icinga didn't notify me that the config in the stage-dir couldn't be validated. Shouldn't there be some kind of included check for this?

Yes, turns out the built-in icinga CheckCommand does exactly this. But it was never executed on my agents, as I still had a service definition from a time when I didn't have any agents. Initially the configuration was the following:

// Checks the agent health
apply Service "icinga" {
  import "generic-service"

  check_command = "icinga"

  assign where (host.address || host.address6) && host.vars.os == "Linux"
}

This was still a remnant of having only the Icinga master and no agents. But it led to the check being executed on the master. Which is... not smart if you want to validate the configuration on the agent.

After changing it to the following:

// Checks the agent health - must be executed on the agent
apply Service "icinga" {
  import "generic-service"

  check_command = "icinga"

  command_endpoint = host.vars.agent_endpoint

  assign where host.vars.agent_endpoint
}

The check worked as intended.

Oh, and I opened a pull request to enhance Icinga's documentation regarding the config sync: https://github.com/Icinga/icinga2/pull/10101. Let's see if it gets accepted.


Using and configuring unattended-upgrades under Debian Bookworm - Part 1: Preparations

Photo by Markus Winkler: https://www.pexels.com/photo/wood-writing-mathematics-typography-19915915/

Preface

At the time of this writing Debian Bookworm is the stable release of Debian. It utilizes systemd timers for all automation tasks like updating the package lists and executing the actual apt-get upgrade. Therefore we won't need to configure APT parameters in files like /etc/apt/apt.conf.d/02periodic. In fact, some of these files don't even exist on my systems. Keep that in mind if you read this article along with others who might do things differently - or who write for older/newer releases of Debian.
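
You can list those timers with systemctl. A shortened, illustrative example (the timestamp columns are trimmed and will differ on your system):

root@server:~# systemctl list-timers 'apt-daily*'
NEXT  LEFT  LAST  PASSED  UNIT                     ACTIVATES
[...]                     apt-daily.timer          apt-daily.service
[...]                     apt-daily-upgrade.timer  apt-daily-upgrade.service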

Part 2 can be found here: Using and configuring unattended-upgrades under Debian Bookworm - Part 2: Practise

unattended-upgrades is not your enemy

I've read and heard my fair share of people condemning unattended-upgrades as it:

  • Made things unnecessarily complicated
  • Broke the system
  • Ignored APT-Pinnings and caused problems because of that
  • Just didn't work
  • Caused problems with High Availability (HA) services
  • Made the database fail
  • Split-Brains occurred only due to unattended-upgrades restarting services
  • Restarted services during work hours causing disruptions and annoyances

You should get the gist. And then there is my experience, running unattended-upgrades on over 6000 Debian servers running all sorts of services: from Apache, Tomcat, JBoss and HAProxy to DRBD, GlusterFS, Corosync, Pacemaker, some NoSQL databases and services like keepalived and ipvsadm for layer 2/3 load balancing or Quagga/FRR for BGP route announcements. And in all those years unattended-upgrades wasn't the root cause of a single problem, outage or incident.

Therefore I suspect that many of the people whose complaints I've read just installed it and never cared about it again. Or, at least, didn't care enough. Presumably they just wanted a quick solution to help them get their systems updated. Well yes, unattended-upgrades is the right tool for this. But as always:

Know your system and plan accordingly!

Things to consider beforehand

Let me give you a list of things to consider before you even install unattended-upgrades.

  1. Are all of your APT-Repositories in order?
    • GPG-keys valid and present?
    • Correct repository for your Debian release?
    • Do you use only official Debian repositories?
    • Do you use Debian repositories from other projects/vendors?
    • Are your repositories in sync on all your servers?
      • Preferably you use the same Debian Mirror on all of your systems. At least make sure you don't use different repositories for security and system updates across all your Debian systems.
      • This prevents problems from outdated mirrors. Happens rarely, but can happen.
        • Trust me: When 2 machines out of a cluster of 5 behave differently, your first thought or troubleshooting step won't be to check if the different Apt-Repositories have the same content
      • Remember: Those are often provided as-is by volunteers. They are NOT a commercial service. And yet most of these volunteers do an exceptional & awesome job despite not being paid for it.
      • Best case scenario: Your company has an internal Debian Mirror. Saving you money on bandwidth usage.
      • Have a look at https://www.debian.org/mirror/ftpmirror.en.html or Aptly if you plan on setting up a mirror.
    • Basically: If an apt-get update prints out anything other than your configured repositories, followed by the line Reading package lists... Done: Fix your repositories!
    • Yes, you can have vendors with broken Debian repositories. Most often the Release file will be buggy or missing altogether. If you can't get your vendor to fix it, well, then the best option is to not specify the repository at all and ensure you have another form of automation to update those packages.
      • Sadly a wget http://some-company.tld/some.deb && dpkg -i some.deb is still considered valuable quality work at far too many enterprise software companies out there..
        • Or you just work around that nuisance, create an internal repository yourself and upload the packages there.
      • A good time to remind them that you pay them and that you need a fully working Debian repository, following the Debian guidelines so you can automatically patch all your systems.
    • Do NOT continue otherwise. You have been warned.
  2. What kind of services does your system provide?
    • Here it is essential to know what the system does. Technically and organizationally.
    • What services are provided, why, to whom?
    • Are there any Service Level Agreements (SLAs) in place?
      • Do downtimes of a service need to be scheduled first?
      • Or can service restarts happen at any time?
      • Also automatically? Or is human supervision required by law/standards? (Looking at you PCI DSS (Wikipedia) 😉)
      • Or is there already an agreed upon maintenance window during which the service is allowed to be unresponsive?
    • Do all services have High Availability (HA)?
      • Do the service(s) survive if the primary/master/main system is shut down?
      • How many systems can be unavailable at the same time?
      • Or do other tasks need to be done (manually) on secondary/slave/standby systems?
      • Does your failover work?
      • Is the failover tested regularly?
    • How are you informed if the service(s) stop responding?
      • Is your monitoring set to check right after the updates happen? (You can specify the time at which unattended-upgrades installs the updates.)
  3. This will enable you to classify all your installed packages into the following categories for unattended-upgrades:
    • Can be updated anytime
    • Can be updated at certain times / Only a certain number of systems is allowed to be unavailable at any time
      • See: systemctl list-timers especially apt-daily-upgrade.timer and maybe apt-daily.timer
    • Only manual updates (Blacklisting)
      • Unattended-Upgrade::Package-Blacklist in /etc/apt/apt.conf.d/50unattended-upgrades is your friend (see the short example after this list)
      • Then execute apt-get update && apt-get upgrade manually to update the blacklisted packages
    • Never update this package / We need a specific version
      • This will need blacklisting AND APT-Pinnings to be in place
      • As an apt-get update && apt-get upgrade can still be executed manually
  4. In certain situations (HA-Nodes, manually triggered fail-over, etc.) you will need to run specific commands which need to be executed before or after an package update.
    • This is something unattended-upgrades can't help you with
    • Sometimes these are commands which should be included in the Debian package itself. Namely in the preinst, prerm, postinst and postrm files. The so-called package maintainer scripts (Official Debian documentation).
      • But more often the commands are customer/environment specific and adding them to the maintainer script makes no sense or can even cause disruptions in the service.
    • A feasible workaround is the utilisation of a drop-in file if your service is started via a systemd unit file, see https://www.freedesktop.org/software/systemd/man/latest/systemd.unit.html and search for "drop-in". The ArchWiki also has a good article: https://wiki.archlinux.org/title/systemd#Drop-in_files - but keep in mind that Arch is not Debian-based and hence may use different paths than Debian
    • If the correct procedure is missing from the package, or isn't suitable for your environment.. Then it's best to exclude the packages from unattended-upgrades and get yourself a tool like Ansible, Puppet or Rundeck to automate the execution of the manual tasks. (God, how I love Rundeck. 🥰) There you are able to ensure everything is valid and verified before switching your primary cluster node and running package updates.
  5. If you have some kind of ITIL change management process in place, this, of course, also can't be handled by unattended-upgrades
    • The only viable solution would be to have different repositories based on your environment classifications (development, QA testing, production) or approval classifications (untested, testing, approved) and push packages to the corresponding repository once the ChangeManagement process is completed
    • Then you can install the updates automatically from there
  6. Unattended-Upgrades is great! But you can't automate every single scenario by just using it!
    • Sometimes this even means to go back to the drawing board and optimise your internal company processes first. Especially in situations where approvals have to be given manually by real humans
  7. When you start rolling out unattended-upgrades it's better to first get an overview of how many patches are missing on each server. The more updates you install, the more likely it is that things will change or even break. Or maybe it's even just those informational log messages that some parameter or option will be deprecated in the next major release.
    • I advise to start with non-critical systems first
    • Ideally systems which are somewhat up-to-date so you can identify & troubleshoot problems easier
    • It's perfectly fine to update all systems manually first and only then enable unattended-upgrades
      • As then you will have a common ground for all your systems and bugs are easier to identify as the same bug will affect all systems at somewhat the same time
  8. You need to have a quick & easy, bureaucracy-free process to add packages to the blacklist
    • There once were broken corosync and keepalived packages in Debian in the early 2010s
    • Once we saw that, we immediately added them to the blacklist and downgraded the other systems where the update was already installed
    • You don't want to spend one hour on the phone frantically trying to reach a member of your Change Advisory Board to greenlight the change which lets you modify the blacklist while unattended-upgrades happily wrecks one system after another
  9. Point 8 sets a requirement: How do you roll out a new blacklist/config for unattended-upgrades to 6000 servers in under 15 minutes?
    • You are too slow to do that manually, no matter how fast you can type
    • for HOST in $(cat host-list.txt); do ssh $HOST some-command; done? Yeah... No. It works, sure... But.. Come on, really? And it does only one host at a time.. Still taking too long.
    • You do want something like Puppet, Ansible or Rundeck for this very reason.
      • Change the configuration in hiera
      • Commit it to git
      • Log in to Rundeck and execute the "Do a manually triggered Puppet/Ansible run" job
      • Then drink a coffee while you watch Rundeck doing its job and take care of the 10 or so servers that will fail for various other reasons.
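
Regarding the blacklist mentioned above: a minimal sketch of what such a block in /etc/apt/apt.conf.d/50unattended-upgrades could look like (the package names are just examples):

root@server:~# cat /etc/apt/apt.conf.d/50unattended-upgrades
[...]
Unattended-Upgrade::Package-Blacklist {
        "corosync";
        "keepalived";
};
[...]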

Understanding apt-cache policy

The next thing that will vastly help you in providing a smoothly running unattended-upgrades service is understanding and using the apt-cache tool, especially with the policy argument: apt-cache policy. Even more so if you use Debian repositories from other projects or vendors.

Why? unattended-upgrades comes with some default configured Origins-Pattern in /etc/apt/apt.conf.d/50unattended-upgrades. These are usable for the official Debian security updates and for when new point releases of Debian are published (12.5, 12.6, etc.), which incorporate all updates since the last point release. Non-security updates for installed packages between point releases won't be installed with the default configuration.

If you want these when they are ready, you have to uncomment the line for "origin=Debian,codename=${distro_codename}-updates";.

${distro_codename}-proposed-updates are updates which may not be stable yet. Read https://wiki.debian.org/StableProposedUpdates for the details. Personally I keep it disabled. Especially on production systems.

For Debian Bookworm the default Origins-Pattern are:

Unattended-Upgrade::Origins-Pattern {
[...]
//      "origin=Debian,codename=${distro_codename}-updates";
//      "origin=Debian,codename=${distro_codename}-proposed-updates";
        "origin=Debian,codename=${distro_codename},label=Debian";
        "origin=Debian,codename=${distro_codename},label=Debian-Security";
        "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
[...]
};

This means only packages matching these Origins-Pattern will be updated. Therefore you have to verify that these patterns include all packages you want to update and, additionally, that packages you do not wish to update are excluded. Although the use of the blacklist might be easier here, as this simply works on the name of the package.

Also note that codename=${distro_codename} is always part of an Origins-Pattern. It is important to not just use terms like stable or oldstable as these do not identify a unique distribution; they change whenever a new major Debian release happens. (This is also true if you are referencing OS images in things like Docker. debian:latest is most certainly a bad idea, debian:bookworm is better.)
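
The same logic, illustrated with the official Debian images on Docker Hub:

docker pull debian:latest      # jumps to the next major release as soon as it becomes the new stable
docker pull debian:bookworm    # stays on Bookworm and its point releases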

Matching Origin-Patterns to apt-cache policy's output

How do you translate these Origins-Patterns into the lines apt-cache policy gives us?

Short side-note: apt-cache uses the metadata (repository information) obtained via apt-get update. Therefore it also works if the configured repositories are offline, but can also show outdated data if you haven't updated the package list information via apt-get update recently.

If executed you will see output like the following:

root@lanadmin:~# apt-cache policy
Package files:
 100 /var/lib/dpkg/status
     release a=now
 500 http://debian.tu-bs.de/debian bookworm-updates/non-free-firmware amd64 Packages
     release v=12-updates,o=Debian,a=stable-updates,n=bookworm-updates,l=Debian,c=non-free-firmware,b=amd64
     origin debian.tu-bs.de
 500 http://debian.tu-bs.de/debian bookworm-updates/main amd64 Packages
     release v=12-updates,o=Debian,a=stable-updates,n=bookworm-updates,l=Debian,c=main,b=amd64
     origin debian.tu-bs.de
 500 http://security.debian.org/debian-security bookworm-security/non-free-firmware amd64 Packages
     release v=12,o=Debian,a=stable-security,n=bookworm-security,l=Debian-Security,c=non-free-firmware,b=amd64
     origin security.debian.org
 500 http://security.debian.org/debian-security bookworm-security/main amd64 Packages
     release v=12,o=Debian,a=stable-security,n=bookworm-security,l=Debian-Security,c=main,b=amd64
     origin security.debian.org
 500 http://debian.tu-bs.de/debian bookworm/non-free-firmware amd64 Packages
     release v=12.5,o=Debian,a=stable,n=bookworm,l=Debian,c=non-free-firmware,b=amd64
     origin debian.tu-bs.de
 500 http://debian.tu-bs.de/debian bookworm/main amd64 Packages
     release v=12.5,o=Debian,a=stable,n=bookworm,l=Debian,c=main,b=amd64
     origin debian.tu-bs.de
Pinned packages:
root@lanadmin:~#

For the sake of simplicity, let us look at just this single entry:

 500 http://security.debian.org/debian-security bookworm-security/non-free-firmware amd64 Packages
     release v=12,o=Debian,a=stable-security,n=bookworm-security,l=Debian-Security,c=non-free-firmware,b=amd64
     origin security.debian.org

I won't go over each and every single listed value. If you want to know more: man apt_preferences probably has all the details, and apt-cache policy packagename lists all policies for a single package.

We want to pay attention to the 2nd line starting with release. Here we have the relevant values. These are defined in the Release-File for each Debian Release/Repository. A documentation can be found here: https://wiki.debian.org/DebianRepository/Format#A.22Release.22_files

But what do they mean? man apt_preferences also explains them, but as they are relevant, let's make a short table.

  • Version (alias: v, unattended-upgrades variable: N/A): Contains the release version.
  • Origin (alias: o, unattended-upgrades variable: ${distro_id}): Originator of the packages. (Debian if used for packages from the Debian project. If you have commercial packages, most likely the company or product name will show up here.)
  • Archive or Suite (alias: a, unattended-upgrades variable: N/A): Names the archive to which all the packages in the directory tree belong (on the repository server).
  • Codename (alias: n, unattended-upgrades variable: ${distro_codename}): Codename of the Debian release. In our case: bookworm.
  • Label (alias: l, unattended-upgrades variable: N/A): Names the label of the packages in the directory tree of the Release file.
  • Component (alias: c, unattended-upgrades variable: N/A): The licensing component associated with the packages in the directory tree. You may know this as: main, contrib, non-free-firmware, etc.
  • Architecture (alias: b, unattended-upgrades variable: N/A): The processor architecture for which a package is compiled (amd64, i386, arm, etc.).
  • Site (alias: N/A, unattended-upgrades variable: N/A): FQDN of the APT repository. Useful when fields like Archive/Suite/Label are empty or ambiguous. (Example with component: "site=www.example.com,component=main";)

All this information is usually listed in the first 30 lines of /etc/apt/apt.conf.d/50unattended-upgrades. Therefore: Read it, to ensure you get the currently valid information.

Understanding this makes it easy to match the configured Origins-Pattern to your configured Debian repositories.

If you are in doubt: Browse your Debian repository via HTTP/HTTPS and have a look at the Release file, for example: http://debian.tu-bs.de/debian/dists/bookworm/Release. The first 5 lines are:

Origin: Debian
Label: Debian
Suite: stable
Version: 12.5
Codename: bookworm

Filling in the variables we see that only the following Origin-Pattern matches:

"origin=Debian,codename=${distro_codename},label=Debian";

All others either have a different label and/or codename.
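
By the way, you don't have to use a browser for this; the same header lines can be fetched from the command line as well:

curl -s http://debian.tu-bs.de/debian/dists/bookworm/Release | head -n 5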

Looking at http://debian.tu-bs.de/debian/dists/bookworm-updates/Release we can see that this matches the following commented out Origin-Pattern:

"origin=Debian,codename=${distro_codename}-updates";

This means:

http://debian.tu-bs.de/debian/dists/bookworm/ holds all packages for the current point release of Debian Bookworm.
http://debian.tu-bs.de/debian/dists/bookworm-updates/ has all published updates which came out before a new point release is made. That is the reason why I always enable this repository too. But depending on your operating strategy going only with updates when a new point release is published is also fine.

Security updates are always published via the http://security.debian.org/debian-security/ repository and will be installed when available. 

Key points / Story time

What you should have understood is:

  1. Each repository must be uniquely identifiable.
    • This means: Each Origins-Pattern should only match one of all your configured repositories
    • Of course you can have multiple repositories matching the same Origins-Pattern, but keep the possible implications in mind!
  2. Packages that share the same name must have the same content
    • And by content I mean: Their hashsums must be identical for each given version

If they don't, you are potentially in for a wild ride.

At a former company we used many internal repositories. And this was fine for a long time. Then suddenly a developer started pushing Debian packages with the exact same names as official Debian packages to those internal repositories. We immediately complained, laying out how that could wreak havoc on our infrastructure, as we had to use those repositories and at the same time couldn't blacklist those packages, since blacklisting works solely on the name of a package.

You can't do things like: "Blacklist package test-foo, but only if it's coming from the repository on repo.coolhost.tld"

We urged him to either rename those packages or at least upload them to a separate repository - as the packages contained a not-yet-included fix for a bug the company had encountered, they had to keep the same name, which left the separate repository as the only clean option. He wouldn't, as he saw no problem with his approach. "Just don't use them." (Yeah, thanks.. That's not how it works with automation.. Especially not if you - sort of - hijack repositories which are used for something entirely different..)

The workaround we implemented was utilizing the APT priority for each repository, so that packages from our internal mirror of the official Debian repositories took priority. And that worked, but it was still annoying.
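
If you ever face a similar situation: such a priority can be set via an APT preferences file. A rough sketch, assuming the internal Debian mirror is reachable as debmirror.internal.example (hostname made up, see man apt_preferences for the exact semantics):

root@server:/etc/apt# cat /etc/apt/preferences.d/prefer-internal-mirror
Package: *
Pin: origin "debmirror.internal.example"
Pin-Priority: 900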

Some weeks later those packages caused an incident and in the root cause analysis the problem was identified and those packages were moved to a separate repository.

Lesson learned. Care for your repositories.

And this ends the first part of this post. The next part will focus on the practical side. We will look at the systemd unit and timer files, how we can add new repositories to unattended-upgrades - for example to also upgrade our Proxmox installation using the Proxmox Debian repositories - how to blacklist packages, and more.


Why blocking whole countries on the Internet isn't a precise process

Photo by Yan Krukau: https://www.pexels.com/photo/close-up-of-a-person-holding-uno-cards-9068976/

I just read it again on the Internet. Someone is asking: "Hey, as we do only business in the United States, can't we simply block all other countries and be safer? All our customers and suppliers are located in the US."

This inspired me to write a short post about why this is a dangerous and - let's call it politely - sub-clever idea.

You know what "Internet" means, do you?

The term Internet is short for "interconnected networks". The Internet isn't one big network. It's thousands and thousands of smaller and bigger networks linked together via so-called routing protocols. They transport the information based on which routers decide how to route your packet so it arrives at its destination. Routers are, to use an analogy, the traffic signs along the highway, giving each packet directions on which lane it needs to take to reach its destination. In protocol terms, we speak about iBGP, eBGP, OSPF, RIP v1/v2, IGRP, EIGRP, and so on. The only real distinction is whether these are Intra-AS routing protocols (routing inside one AS - for example iBGP) or Inter-AS routing protocols (routing between several AS - for example eBGP).

What is an AS, you ask? AS is short for autonomous system (Wikipedia). That's the technical term for a network under the control of a single entity, like a company. Each AS is identified by its unique number, the ASN. This number is used in routing protocols like BGP to exactly identify to which AS a rule belongs.

And as you must have already guessed by now: None of this respects real-world borders. Packets don't stop at borders. Here in Europe, even we people don't stop at borders. You just have to love Schengen (Wikipedia).

Therefore, the task of only allowing customers from the US is a little bit complicated to set up, technically speaking. Data packets don't contain information about the country they originated from. Just the source IP address.

But.. My firewall-/router-/hosting-/DDoS-/CDN-/whatever-provider provides such an option in the control panel of my/our account? So it must be possible!

I didn't say it couldn't be done under any circumstances. I just said it's complicated and will constantly cause you pain and money loss.

Even BGP in itself isn't 100% safe and attack vectors like BGP hijacking (Wikipedia) do exist, but due to how BGP works, they are always pretty quickly noticed, and the culprit is easily and clearly identified.

So, when it is possible, how do they do it?

They are taking many and many educated guesses.

...

Yeah, ok. Sprinkled and garnished with some bureaucratic facts as their starting point for their educated guesses.

...

Ok, sometimes they outright pay internet service providers or other companies to give them that data. This might or might not be legal under your country's data privacy laws..

...

Not the answer you expected to read? Yeah, life is disappointing sometimes.

How the bureaucratic layer of the Internet works

Tackle the problem from another angle: Do you know how IP addresses are managed on the bureaucratic layer?

Do terms like IANA, RIR, RIPE and ARIN ring a bell? No? Ok, let me explain.

IANA is the Internet Assigned Numbers Authority. In their words, they "Perform the global coordination of the DNS Root, IP addressing, and other Internet protocol resources". Relevant for this article is the "IP addressing" part.

The IANA assigns chunks of IP addresses to the so-called RIRs. That's Regional Internet registry (Wikipedia). Those RIRs are (with their founding dates and current area of operations):

  • 1992 RIPE NCC - Europe, Russia, Middle East, Greenland
  • 1993 APNIC - Asian/Pacific region, Australia, China, India, etc.
  • 1997 ARIN - United States of America, Canada
  • 1999 LACNIC - Mexico, South American Continent
  • 2004 AFRINIC - African continent

These RIRs then provide companies in their assigned areas with IP addresses they can manage themselves. And to make this picture easier I left the ICANN & NRO, two other governing bodies, out of the picture.

As you can see some RIRs were founded later than others. This also means: Even if you filter based on which RIR manages the IP addresses, this isn't set in stone forever. Even if a RIR is responsible for a whole continent this can change.

What these companies offering geo-blocking do is: they look at where an IP address is located on the bureaucratic layer. Which RIR is responsible for the IP block? Which company "owns" the IPs? Where are they routed to/announced from? But this is all bureaucratic and technical information. It can't be mapped 1-to-1 to a country. And these bits of information are extremely volatile.

Side note: There is no RIR for each single country. The term LIR or Local Internet registry (Wikipedia) does exist, but it commonly refers to your Internet Service Provider (ISP) who assigns your internet modem/router an IP address so you can browse the Internet. This has nothing to do with countries. The Internet itself isn't technically designed with the concept of "countries" or "borders" in mind. It never was and most likely (hopefully!!) never will be.

Another problem is the systems that provide this information: some provide real-time information, others don't. Additionally, you don't know which metrics your vendor uses and how the vendor obtains them. And usually they don't make the process of how they obtain and classify the information publicly available.

I had customer support agents who, instead of resolving a domain name via the ping or host command, typed it into Google and used that information. Sometimes they obtained wrong information which was months old and therefore led to other errors...

And what about multi national businesses?

A company from Germany can have IPs assigned by ARIN for their US business. Maybe they have a subsidiary company for their US business, but that still makes it a German company. How do you filter that?

Keep in mind: Maybe their US subsidiary was only established for jurisdictional problems and all people working with you are sitting in Germany. Hence mails, phone calls, letters, etc. will all come from Germany.

Additionally, this company is free to use the IPs as they like. They can announce their BGP routes as they like. Nothing prevents them from using IPs assigned by the RIPE NCC in the United States. This is done regularly, as especially IPv4 addresses are scarce and sometimes IPs need to be moved around to satisfy the ever-growing demand.

Side note: The NRO publishes the data of all delegated IP blocks under https://ftp.ripe.net/pub/stats/ripencc/nro-stats/latest/. The file nro-delegated-stats contains the information on which IP blocks were assigned by which RIR. You will find lines showing that ARIN (only responsible for the US & Canada) assigned IPs to an entity in Singapore.
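
If you want to check that yourself, here is a quick sketch - assuming the file keeps its usual pipe-separated layout of registry|country|type|start|count|date|status:

curl -s https://ftp.ripe.net/pub/stats/ripencc/nro-stats/latest/nro-delegated-stats | awk -F'|' '$1 == "arin" && $2 == "SG" && $3 == "ipv4"'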

Jan Schaumann used that file to present some cool statistics about IP allocations: https://labs.ripe.net/author/jschauma/whose-cidr-is-it-anyway/

To make the picture even more complex: IPs issued by a RIR can be used in any country. There is no rule nor enforcement that IPs issued by a RIR are only to be used in its sphere of influence. Therefore even that first piece of information can differ completely from reality.

Hence my statement that all this geolocation business is based on educated guesses. Yes, many positions will be precise. But the question is "For how long?" - and do you really want to make your communication depend on that?

The technical reality

BGP routes themselves can change at any time. There is no "You can change them only once every 30 days." You can change them every 5 minutes if you like. They can even change completely automatically. Heck, they have to change automatically if we want a working Internet. There are always equipment malfunctions..

When I worked at a major German telecommunications provider, we utilized BGP to build an automatic fail-over in case an entire datacenter went offline. Both datacenters announced their routes (how traffic can reach them) via BGP towards the route reflector of our network team. Datacenter A announced with a local-preference of 200, datacenter B with a local-preference of 100. In iBGP the highest local-preference value takes priority. This means: If datacenter A should ever cease to function (the iBGP announcements from that datacenter stop reaching the route reflector) the traffic will immediately go towards datacenter B.

In our case, both datacenters A and B were located in Germany. But that was pure chance. My employer also had datacenters in France, the UK, Spain, etc. and of course also in the US. It just happened that the datacenters where my team was allocated the necessary rack space for our servers were both located in Germany.

So the endpoint can literally change every millisecond. And with it the country where traffic is sent to or originates from.

Of course we did regular fail-over tests. Now think about the following scenario: We are doing a live fail-over test. Datacenter A is switched over to B, and datacenter B happens to be located in France. The traffic will be arriving in France for 5 minutes (the duration of our test). In exactly these 5 minutes a scan from a geolocation vendor notices that traffic for all IPs affected by our test is located in France. The software will write this into its database and happily move along.

How long will that false, inaccurate and outdated information be kept in their database? What trouble will that cause your business?

Looking at it from the other side

Ok, so we clarified why geo-blocking is taking educated guesses with a bit of Voodoo. It is time to look at it from the other side, right? As this is a viewpoint which is regularly forgotten completely.

Let's go with the example above: "Hey, as we do only business in the United States, can't we simply block all other countries and be safer? All our customers and suppliers are located in the US.."

Is this really the reality? Are your suppliers and customers located in the US?

I bet 100% that you haven't even understood why you are making that claim. Most people will look at: "Where do we have stores? Where do we ship? What are our target customers?"

This usually leads to an opinion based on bureaucratic metrics. Or in other words: Delivery and invoice addresses.

But what about the customer in Idaho who just recently moved there from Spain and still uses his/her mail account from a Spanish mail provider?

Have you checked which IPs their email server uses? Are they hosted by a big cloud provider like Google, Azure or AWS? Do you have complete and absolute knowledge on how these biggest three tech companies manage their IPs and hundreds of networks today? Tomorrow? Next week?

Even they don't.

Which measures and workarounds do they undertake should a datacenter go down? Or just be in a planned maintenance state?

It's fairly normal that in times of need workarounds are done to ensure customers can use the services, for which they are paying, again as quickly as possible.

Businesses change too

What if your biggest client suddenly stops buying from you? If you are not getting any calls for bids any more?

Could it be that the company you did business with was recently acquired by another company? And now they send all their mail from an entirely different mail server hosted in an entirely different country? Could that be the reason the RFQs (requests for quotations) stopped coming?

How much money will you lose before you notice this error?

Last words

I tried to explain in easy words for non-techie people why geo-blocking is usually bad. Yes, it's used by Netflix and many others. Yes, many products offer some kind of feature to achieve some form of geo-blocking.

But keep in mind: They have to do this for jurisdictional reasons. They bought rights to movies to show in certain countries. The owners of these rights want Netflix to ensure only those customers can watch these movies. Because they themselves sold the exact same rights to at least 25 other companies in other countries. And each of their customers will sue them once they notice that a competitor has the same movie in the same country. Hence, Netflix is trapped in a never-ending cat-and-mouse game with VPN companies that constantly change their endpoints.

I haven't even talked about VPNs. I haven't talked about DNS. I haven't talked about mail. All these require IPs to function. All these add several other layers of complexity. But all these are needed for your business to work in the 21st century.

You won't be more secure by blocking China, Russia, or North Korea from your firewall.

You will be more secure by applying patches on time. Using maintained software products. Separating your production environments from your development/test networks and the networks where the PCs/Laptops of your employees are located. By running regular security audits. By following NIST recommendations regarding password security. By defining a good manageable firewall rule framework. By having a ticket system that makes changes traceable AND reproducible. By introducing ITIL or some ISO stuff if you want to go that route.

Be advised: the bad guys are not just sitting in those countries you are afraid of. China isn't solely attacking out of China in cyberspace. No, more likely they utilize a nicely hacked internet connection of some John Doe just around the corner from your shop.

Some links

If you want to read further I can recommend the site https://networklessons.com/. If you want to learn more about BGP you can visit https://networklessons.com/bgp and start from there.


Why I don't consider Outlook to be a functional mail client

Photo by Pixabay: https://www.pexels.com/photo/flare-of-fire-on-wood-with-black-smokes-57461/

This topic comes up far too often, therefore I decided to make a blog post out of it. After all, copy & pasting a link is easier than repeatedly writing the same bullet points.

Also: This is my private opinion and this article should rather be treated as a rant.

  • Mail templates are separate files? And the workflow to create them is seriously that antique?
    • Under Create an email message template (microsoft.com) Microsoft details how to create an email template. But did you notice something? They use the phrase "[...] that include information that infrequently changes [...]" - meaning only static text is allowed.
    • Yep, you can't draft mail templates where certain values get auto-filled and the like. I mean, how many employees, consultants, etc. have to send their weekly/monthly time sheet to someone? Is it so hard to automatically fill in the week number or month and automatically attach the latest file with a certain file name from a specified folder?
      • Yes! Automating this with software is surely the best way. But we all know what reality in many companies looks like, right?
    • Additionally the mail templates are stored as files on your filesystem under: C:\users\username\appdata\roaming\microsoft\templates.
      • This means: mail templates are not treated like mails in draft mode or the like. No, you have to load an external file into Outlook via a separate dialogue. That's user experience straight from the 1980s.
    • Workaround: Create a folder templates, create a sub-folder templates-for-templates. Store mail drafts (with recipients, subject, text, etc.) in templates-for-templates. When needed copy to templates. Attach file. Edit text manually. Hit send.
    • Never send directly out of templates-for-templates, or else your template is gone.
    • But seriously? Why is this process so old and convoluted? I suspect the feature is kept this way because Microsoft is afraid of people utilizing it to send spam. But.. sending spam manually? I think that stopped being a thing on May 5th, 2000 (Wikipedia) at the latest.. Every worm/virus out there has its own built-in logic to generate different subjects/texts/etc. Why deliberately keep a feature in such a broken state and punish your legitimate users?
  • No regular expressions in filter keywords
    • This probably annoys me the most. When you specify a filter "Sort all mails where the subject begins with Newsletter PC news into a folder", Outlook will only sort mails with the exact subject "Newsletter PC news".
    • Which is stupid when the subject has a static part and a changing part. I mean, it's 2024. Supporting some kind of wildcard string matching via asterisks isn't exactly new, is it? Like: "Sort all mail where the subject matches Newsletter PC news*", so that "Newsletter PC news April 2024" also gets sorted.. No. Not in Outlook. (A tiny sketch of that kind of matching follows after this list.)
  • Constant nuisance: Ctrl+F doesn't bring up the search bar - instead it opens a new mail window, because Outlook maps Ctrl+F to "Forward"..
    • I mean really? Ctrl+F is the shortcut for search everywhere. Why change that!?
    • Info: Ctrl+E activates the search field on top
  • Only one organizer for events
    • Ok, technically this isn't Outlook but rather CalDAV, and hence Google Calendar, etc. suffer from the same problem. But I still list it as a fault.
    • Why? Microsoft has repeatedly shown the middle finger to organizations like the ISO. When it suits their market share, they are basically willing to ignore a lot of common standards (just like Google, Facebook, etc.). With their Active Directory infrastructure and Office suite they have everything in-house and 100% under their own control to make this feature work in Windows environments - which most companies run. But they don't care.
    • I mean.. On the other hand I'm glad that they follow the standard here. It just turns out that multiple organizers are a feature we need so often that I've stopped counting.
    • And you already need proprietary connectors to properly integrate your Exchange calendar into other mail programs like Mozilla Thunderbird. So this really shouldn't be a big deal-breaker either..
  • Only one reminder for events
    • Due to my attention deficit disorder I tend to have what is called "altered time perception" or "time blindness". This means I won't experience 15 minutes as 15 minutes and grossly under- or overestimate how much time I really have left. The best description I can give non-ADDers is: I will think of 15 minutes as "Ah, I still have an hour left." That this can lead to situations where I am late or unable to fully prepare something for a meeting should be clear.
    • Therefore it really helps me to be able to set multiple reminders for an event.
    • Usually I do the following: 1 hour before, 30 minutes, 15 minutes. This helps me break out of the time blindness and synchronize my altered time perception with reality, enabling me to finish tasks before the meeting/event happens.
    • For events like a business trip, which take more time to prepare, I often set a reminder 1 or 2 weeks in advance. This way I have enough time to do my laundry and so on.
    • Outlook, however, only supports setting ONE reminder per event.. Yeah.. (The iCalendar format itself allows several - see the sketch after this list.)
    • My workaround is to have the events in my private calendar as well. (Of course without any details and often just a generic title/description, so as not to store client information on my private device.)
  • Remember Xobni? / The search is horrible
    • Outlook search is a single input field that searches over everything. You can't specify whether the search term you used is a name, part of a file name or part of an email address.
    • In the early 2000s there was Xobni. Slogan: "It reverts your Inbox." - hence the name Xobni ("inbox" spelled backwards). It was an add-on that added another sidebar to Outlook, listing all the people you've mailed with. When you clicked on a person you saw all mails, all mail threads and, most importantly, all attachments that person had sent to you (or you to them). You could even add links to the person's social media profiles, etc. It was brilliant and made work so easy. Often I remembered only the person who sent me a file, or the thread in which it was attached - but not the actual mail or even its subject. Xobni made it pretty easy to work around that, making it possible to search Outlook the way our brain works.
    • Well, sadly Yahoo bought Xobni in July 2013 and shut it down in July 2014.
    • But it's 2024 and Microsoft hasn't come up with a similar functionality yet? Really?
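To show how little it would take on the filter front, here is a minimal sketch (plain Python standard library; the rule and the subjects are made up for illustration) of the wildcard subject matching described above:

# Minimal sketch of the wildcard matching that Outlook filters lack.
# The rule and the subjects are invented for illustration.
from fnmatch import fnmatch

rule = "Newsletter PC news*"          # static part plus a changing part
subjects = [
    "Newsletter PC news April 2024",  # should match
    "Newsletter PC news May 2024",    # should match
    "Re: Something else entirely",    # should not match
]
for subject in subjects:
    if fnmatch(subject, rule):
        print(f"sort into folder: {subject}")

That's it. A single asterisk, a prefix match, and the whole "static & changing subject" problem is gone.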
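And on the two calendar points: the iCalendar format (RFC 5545) that CalDAV builds on indeed allows at most one ORGANIZER per event, but it explicitly allows multiple VALARM (reminder) blocks - so a single reminder per event is nothing the format forces on anyone. A hand-written example event (all names, times and the UID are invented), wrapped in Python only to keep the examples in one language:

# A hand-written iCalendar (RFC 5545) event: one ORGANIZER (the format allows
# at most one per event), but two VALARM reminder blocks (the format allows
# as many as you like). All identifiers and times are invented.
ICS_EVENT = (
    "BEGIN:VCALENDAR\r\n"
    "VERSION:2.0\r\n"
    "PRODID:-//example//blog-sketch//EN\r\n"
    "BEGIN:VEVENT\r\n"
    "UID:sketch-0001@example.invalid\r\n"
    "DTSTAMP:20240701T120000Z\r\n"
    "DTSTART:20240708T090000Z\r\n"
    "DTEND:20240708T100000Z\r\n"
    "SUMMARY:Quarterly review\r\n"
    "ORGANIZER;CN=Alice:mailto:alice@example.invalid\r\n"
    "BEGIN:VALARM\r\n"
    "ACTION:DISPLAY\r\n"
    "DESCRIPTION:1 hour heads-up\r\n"
    "TRIGGER:-PT1H\r\n"
    "END:VALARM\r\n"
    "BEGIN:VALARM\r\n"
    "ACTION:DISPLAY\r\n"
    "DESCRIPTION:15 minute heads-up\r\n"
    "TRIGGER:-PT15M\r\n"
    "END:VALARM\r\n"
    "END:VEVENT\r\n"
    "END:VCALENDAR\r\n"
)
print(ICS_EVENT)

Any standards-conforming CalDAV server should accept an event like this; it's the client that decides how many of those reminders it lets you see and edit.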