Feuerfest

Just the private blog of a Linux sysadmin

Using and configuring unattended-upgrades under Debian Bookworm - Part 1: Preparations

Photo by Markus Winkler: https://www.pexels.com/photo/wood-writing-mathematics-typography-19915915/

Preface

At the time of this writing, Debian Bookworm is the stable release of Debian. It uses Systemd timers for all automation tasks, such as updating the package lists and executing the actual apt-get upgrade. Therefore we won't need to configure APT parameters in files like /etc/apt/apt.conf.d/02periodic. In fact, some of these files don't even exist on my systems. Keep that in mind if you read this article alongside others, which might do things differently - or cover older/newer releases of Debian.
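If you want to check this on your own system, the relevant timers can be inspected directly (timer names as shipped on a default Bookworm installation):

# List the APT-related systemd timers and see when they fire next
systemctl list-timers apt-daily.timer apt-daily-upgrade.timer

# Show the timer definitions themselves
systemctl cat apt-daily.timer apt-daily-upgrade.timer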

Part 2 can be found here: Using and configuring unattended-upgrades under Debian Bookworm - Part 2: Practise

unattended-upgrades is not your enemy

I've read and heard my fair share of people condemning unattended-upgrades as it:

  • Made things unnecessarily complicated
  • Broke the system
  • Ignored APT-Pinnings and caused problems because of that
  • Just didn't work
  • Caused problems with High Availability (HA) services
  • Made the database fail
  • Split-Brains occurred only due to unattended-upgrades restarting services
  • Restarted services during work hours causing disruptions and annoyances

You should get the gist. And then there is my experience, running unattended-upgrades on over 6000 Debian servers running all sorts of services. From Apaches, Tomcats, JBoss and HAProxy to DRBD, GlusterFS, Corosync, Pacemaker, some NoSQL DBs and services like keepalived and ipvsadm for Layer 2/3 load balancing or Quagga/FRR for BGP route announcements. And in all those years unattended-upgrades wasn't the root cause of a single problem, outage or incident.

Therefore I suspect that many of the people whose complaints I've read just installed it and never cared about it again. Or, at least, didn't care enough. Presumably they just wanted a quick solution to help them get their systems updated. Well yes, unattended-upgrades is the right tool for this. But as always:

Know your system and plan accordingly!

Things to consider beforehand

Let me give you a list of things to consider before you even install unattended-upgrades.

  1. Are all of your APT-Repositories in order?
    • GPG-keys valid and present?
    • Correct repository for your Debian release?
    • Do you use only official Debian repositories?
    • Do you use Debian repositories from other projects/vendors?
    • Are your repositories in sync on all your servers?
      • Preferably you use the same Debian Mirror on all of your systems. At least make sure you don't use different repositories for security and system updates across all your Debian systems.
      • This prevents problems from outdated mirrors. Happens rarely, but can happen.
        • Trust me: When 2 machines out of a cluster of 5 behave differently, your first thought or troubleshooting step won't be to check if the different Apt-Repositories have the same content
      • Remember: Those are often provided as-is by volunteers. They are NOT a commercial service. And yet most of these volunteers do an exceptional & awesome job despite not being paid for it.
      • Best case scenario: Your company has an internal Debian Mirror. Saving you money on bandwidth usage.
      • Have a look at https://www.debian.org/mirror/ftpmirror.en.html or Aptly if you plan on setting up a mirror.
    • Basically: If an apt-get update prints out anything other than your configured repositories, followed by the line Reading package lists... Done: Fix your repositories!
    • Yes, you can have vendors with broken Debian repositories. Most often the Release file will be buggy or missing entirely. If you can't get your vendor to fix it, well, then the best option is to not specify the repository at all and ensure you have another form of automation to update those packages.
      • Sadly a wget http://some-company.tld/some.deb && dpkg -i some.deb is still considered valuable quality work at far too many enterprise software companies out there..
        • Or you just work around that nuisance, create an internal repository yourself and upload the packages there.
      • A good time to remind them that you pay them and that you need a fully working Debian repository, following the Debian guidelines so you can automatically patch all your systems.
    • Do NOT continue otherwise. You have been warned.
  2. What kind of services does your system provide?
    • Here it is essential to know what the system does. Technically and organizationally.
    • What services are provided, why, to whom?
    • Are there any Service Level Agreements (SLAs) in place?
      • Do downtimes of a service need to be scheduled first?
      • Or can service restarts happen at any time?
      • Also automatically? Or is human supervision required by law/standards? (Looking at you PCI DSS (Wikipedia) 😉)
      • Or is there already an agreed upon maintenance window during which the service is allowed to be unresponsive?
    • Do all services have High Availability (HA)?
      • Do the service(s) survive if the primary/master/main system is shut down?
      • How many systems can be unavailable at the same time?
      • Or do other tasks need to be done (manually) on secondary/slave/standby systems?
      • Does your failover work?
      • Is the failover tested regularly?
    • How are you informed if the service(s) stop responding?
      • Is your monitoring set to check right after updates happen? (You can specify the time at which unattended-upgrades installs the updates.)
  3. This will enable you to classify all your installed packages into the following categories for unattended-upgrades:
    • Can be updated anytime
    • Can be updated at certain times / Only a certain number of systems is allowed to be unavailable at any time
      • See: systemctl list-timers especially apt-daily-upgrade.timer and maybe apt-daily.timer
    • Only manual updates (Blacklisting)
      • Unattended-Upgrade::Package-Blacklist in /etc/apt/apt.conf.d/50unattended-upgrades is your friend (see the example snippet after this list)
      • Then execute apt-get update && apt-get upgrade manually to update the blacklisted packages
    • Never update this package / We need a specific version
      • This will need blacklisting AND APT-Pinnings to be in place
      • As an apt-get update && apt-get upgrade can still be executed manually
  4. In certain situations (HA nodes, manually triggered fail-over, etc.) you will need to run specific commands before or after a package update.
    • This is something unattended-upgrades can't help you with
    • This is stuff that belongs in the Debian package itself. Namely in the preinst, prerm, postinst and postrm files. The so-called package maintainer scripts (Official Debian documentation).
    • A feasible workaround is the utilization of a drop-in file if your service is started via a Systemd Unit-File, see https://www.freedesktop.org/software/systemd/man/latest/systemd.unit.html and search for "drop-in". The ArchWiki also has a good article: https://wiki.archlinux.org/title/systemd#Drop-in_files - but keep in mind that Arch is not Debian and hence may use different paths
    • If the correct procedure is missing from the package, or isn't suitable for your environment.. Then it's best to exclude the packages from unattended-upgrades and get yourself some tool like Ansible, Puppet or Rundeck to automate the execution of manual tasks. (God, how I love Rundeck. 🥰) There you are able to ensure everything is valid and verified before switching your primary cluster node and running package updates.
  5. If you have some kind of ITIL Change Management process in place, this, of course, also can't be handled by unattended-upgrades
    • The only viable solution would be to have different repositories based on your environment classifications (development, QA testing, production) or approval classifications (untested, testing, approved) and push packages to the corresponding repository once the Change Management process is completed
    • Then you can install the updates automatically from there
  6. Unattended-Upgrades is great! But you can't automate every single scenario by just using it!
    • Sometimes this even means going back to the drawing board and optimizing your internal company processes first. Especially in situations where approvals have to be given manually by real humans
  7. When you start rolling out unattended-upgrades it's better to first get an overview of how many patches are missing on each server. The more updates you install, the more likely it is that things will change or even break. Or maybe it's just those informational log messages that some parameter or option will be deprecated in the next major release.
    • I advise to start with non-critical systems first
    • Ideally systems which are somewhat up-to-date so you can identify & troubleshoot problems easier
    • It's perfectly fine to update all systems manually first and only then enable unattended-upgrades
      • As then you have common ground across all your systems and bugs are easier to identify, as the same bug will affect all systems at roughly the same time
  8. You need to have a quick & easy, bureaucracy-free process to add packages to the blacklist
    • There once were broken corosync and keepalived packages in Debian in the early 2010s
    • Once we saw that, we immediately added them to the blacklist and downgraded the other systems where the update was already installed
    • You don't want to spend one hour on the phone frantically trying to reach a member of your Change Advisory Board to greenlight the change which lets you modify the blacklist while unattended-upgrades happily wrecks one system after another
  9. Point 8 sets a requirement: How do you roll out a new blacklist/config for unattended-upgrades to 6000 servers in under 15 minutes?
    • You are too slow to do that manually, no matter how fast you can type
    • for HOST in $(cat host-list.txt); do ssh "$HOST" some-command; done? Yeah... No. It works, sure... But.. Come on, really? And it does only one host at a time.. Still taking too long.
    • You do want something like Puppet, Ansible or Rundeck for this very reason.
      • Change the configuration in hiera
      • Commit it to git
      • Log in to Rundeck and execute the "Do a manually triggered Puppet/Ansible run" job
      • Then drink a coffee while you watch Rundeck doing its job and care for the 10 or so servers that will fail for various other reasons.
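To make the blacklisting from point 3 a bit more concrete, here is a minimal sketch of what the relevant block in /etc/apt/apt.conf.d/50unattended-upgrades can look like. The package names are purely illustrative - pick whatever is critical in your environment:

Unattended-Upgrade::Package-Blacklist {
        // Never let unattended-upgrades touch the cluster stack
        "corosync";
        "pacemaker";
        // Entries are Python regular expressions, so patterns work too
        "keepalived.*";
};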

Understanding apt-cache policy

The next thing that will help you vastly in providing a smoothly running unattended-upgrades service is understanding and using the apt-cache tool, especially with the policy argument: apt-cache policy. Even more so if you use Debian repositories from other projects or vendors.

Why? unattended-upgrades comes with some Origins-Patterns configured by default in /etc/apt/apt.conf.d/50unattended-upgrades. These cover the official Debian security updates and the new point releases of Debian (12.5, 12.6, etc.), which incorporate all updates since the last point release. Non-security updates for installed packages between point releases won't be installed with the default configuration.

If you want these when they are ready, you have to uncomment the line for "origin=Debian,codename=${distro_codename}-updates";.

${distro_codename}-proposed-updates are updates which may not be stable yet. Read https://wiki.debian.org/StableProposedUpdates for the details. Personally I keep it disabled. Especially on production systems.

For Debian Bookworm the default Origins-Pattern are:

Unattended-Upgrade::Origins-Pattern {
[...]
//      "origin=Debian,codename=${distro_codename}-updates";
//      "origin=Debian,codename=${distro_codename}-proposed-updates";
        "origin=Debian,codename=${distro_codename},label=Debian";
        "origin=Debian,codename=${distro_codename},label=Debian-Security";
        "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
[...]
};

This means only packages matching these Origins-Pattern will be updated. Therefore you have to verify that these patterns include all packages you want to update and, additionally, that packages you do not wish to update are excluded. Although the use of the blacklist might be easier here, as this simply works on the name of the package.

Matching Origin-Patterns to apt-cache policy's output

How do you translate these Origins-Patterns into the lines apt-cache policy gives us?

Short side-note: apt-cache uses the metadata (repository information) obtained via apt-get update. Therefore it also works if the configured repositories are offline, but can also show outdated data if you haven't updated the package list information via apt-get update recently.

If executed you will see output like the following:

root@lanadmin:~# apt-cache policy
Package files:
 100 /var/lib/dpkg/status
     release a=now
 500 http://debian.tu-bs.de/debian bookworm-updates/non-free-firmware amd64 Packages
     release v=12-updates,o=Debian,a=stable-updates,n=bookworm-updates,l=Debian,c=non-free-firmware,b=amd64
     origin debian.tu-bs.de
 500 http://debian.tu-bs.de/debian bookworm-updates/main amd64 Packages
     release v=12-updates,o=Debian,a=stable-updates,n=bookworm-updates,l=Debian,c=main,b=amd64
     origin debian.tu-bs.de
 500 http://security.debian.org/debian-security bookworm-security/non-free-firmware amd64 Packages
     release v=12,o=Debian,a=stable-security,n=bookworm-security,l=Debian-Security,c=non-free-firmware,b=amd64
     origin security.debian.org
 500 http://security.debian.org/debian-security bookworm-security/main amd64 Packages
     release v=12,o=Debian,a=stable-security,n=bookworm-security,l=Debian-Security,c=main,b=amd64
     origin security.debian.org
 500 http://debian.tu-bs.de/debian bookworm/non-free-firmware amd64 Packages
     release v=12.5,o=Debian,a=stable,n=bookworm,l=Debian,c=non-free-firmware,b=amd64
     origin debian.tu-bs.de
 500 http://debian.tu-bs.de/debian bookworm/main amd64 Packages
     release v=12.5,o=Debian,a=stable,n=bookworm,l=Debian,c=main,b=amd64
     origin debian.tu-bs.de
Pinned packages:
root@lanadmin:~#

For the sake of simplicity, let us look at just this single entry:

 500 http://security.debian.org/debian-security bookworm-security/non-free-firmware amd64 Packages
     release v=12,o=Debian,a=stable-security,n=bookworm-security,l=Debian-Security,c=non-free-firmware,b=amd64
     origin security.debian.org

I won't go over each and every single listed value. If you want to know more: man apt_preferences probably has all the details, and apt-cache policy packagename lists all policies for a single package.

We want to pay attention to the second line, starting with release. Here we have the relevant values. These are defined in the Release file for each Debian release/repository. Documentation can be found here: https://wiki.debian.org/DebianRepository/Format#A.22Release.22_files

But what do they mean? man apt_preferences also explains them, but as they are relevant, let's make a short table.

Field | Alias | unattended-upgrades variable | Description
Version | v | N/A | Contains the release version
Origin | o | ${distro_id} | Originator of the packages (Debian for packages from the Debian project; for commercial packages most likely the company or product name will show up here)
Archive or Suite | a | N/A | Names the archive to which all the packages in the directory tree belong (on the repository server)
Codename | n | ${distro_codename} | Codename of the Debian release. In our case: bookworm
Label | l | N/A | Names the label of the packages in the directory tree of the Release file
Component | c | N/A | The licensing component associated with the packages in the directory tree. You may know this as: main, contrib, non-free-firmware, etc.
Architecture | b | N/A | The processor architecture for which a package is compiled (amd64, i386, arm, etc.)

All this information is usually listed in the first 30 lines of /etc/apt/apt.conf.d/50unattended-upgrades. Therefore: Read it, to ensure you get the currently valid information.

Understanding this makes it easy to match the configured Origin-Patterns to your configured Debian repositories.

If you are in doubt: Browse your Debian repository via HTTP/HTTPS and have a look at the Release file, for example: http://debian.tu-bs.de/debian/dists/bookworm/Release. The first 5 lines are:

Origin: Debian
Label: Debian
Suite: stable
Version: 12.5
Codename: bookworm
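If you prefer the command line over the browser, the same lines can be fetched directly (the mirror URL is simply the one used throughout this article - substitute your own):

curl -s http://debian.tu-bs.de/debian/dists/bookworm/Release | head -n 5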

Filling in the variables we see that only the following Origin-Pattern matches:

"origin=Debian,codename=${distro_codename},label=Debian";

All others either have a different label and/or codename.

Looking at http://debian.tu-bs.de/debian/dists/bookworm-updates/Release we can see that this matches the following commented out Origin-Pattern:

"origin=Debian,codename=${distro_codename}-updates";

This means:

http://debian.tu-bs.de/debian/dists/bookworm/ holds all packages for the current point release of Debian Bookworm.
http://debian.tu-bs.de/debian/dists/bookworm-updates/ has all published updates which came out before the next point release is made. That is the reason why I always enable this repository too. But depending on your operating strategy, going only with the updates from new point releases is also fine.

Security updates are always published via the http://security.debian.org/debian-security/ repository and will be installed when available.
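Putting these three repositories together, a typical /etc/apt/sources.list for Bookworm might look something like this (using the generic deb.debian.org mirror as a stand-in for whatever mirror you actually use):

deb http://deb.debian.org/debian bookworm main contrib non-free-firmware
deb http://deb.debian.org/debian bookworm-updates main contrib non-free-firmware
deb http://security.debian.org/debian-security bookworm-security main contrib non-free-firmware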

Key points / Story time

What you should have understood is:

  1. Each repository must be uniquely identifiable.
    • This means: Each Origins-Pattern should only match one of all your configured repositories
    • Of course you can have multiple repositories matching the same Origins-Pattern, but keep the possible implications in mind!
  2. Packages that share the same name must have the same content
    • And by content I mean: Their hashsums must be identical for each given version

If they don't, you are potentially in for a wild ride.

At a former company we used many internal repositories. And this was fine for a long time. Then suddenly some developer started pushing Debian packages with the exact same name as official Debian packages to those internal repositories. We immediately complained, laying out how that could wreak havoc on our infrastructure, as we have to use those repositories and at the same time we can't blacklist that package, because blacklisting works solely on the name of a package.

You can't do things like: "Blacklist package test-foo, but only if it's coming from the repository on repo.coolhost.tld"

We urged him to simply rename those packages or upload them to another repository - as those packages had to share the same name because they contained a not-yet-included fix for a bug the company had encountered. He wouldn't, as he saw no problem with his approach. "Just don't use them." (Yeah, thanks.. That's not how it works with automation.. Especially not if you - sort of - hijack repositories which are used for something entirely different..)

The workaround we implemented utilized the APT priority for each repository, so packages from our internal official Debian mirror took priority. And that worked, but it was still annoying.
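Just to illustrate the idea (the hostname is made up, and this is not our exact configuration from back then): such a preference can be expressed with an APT pin that raises the priority of the internal mirror above the default of 500.

# /etc/apt/preferences.d/prefer-internal-mirror
Package: *
Pin: origin "debian-mirror.internal.example"
Pin-Priority: 900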

Some weeks later those packages caused an incident and in the root cause analysis the problem was identified and those packages were moved to a separate repository.

Lesson learned. Care for your repositories.

And this ends the first part of this post. The next part will focus on the practical side. We will look at the Systemd Unit- and Timer-File, how we can add new repositories to unattended-upgrades, for example to also upgrade our Proxmox installation using the Proxmox Debian repositories, how to blacklist packages and more.


Why blocking whole countries on the Internet isn't a precise process

Photo by Yan Krukau: https://www.pexels.com/photo/close-up-of-a-person-holding-uno-cards-9068976/

I just read it again on the Internet. Someone is asking: "Hey, as we do only business in the United States, can't we simply block all other countries and be safer? All our customers and suppliers are located in the US."

This inspired me to write a short post about why this is a dangerous and - let's call it politely - sub-clever idea.

You know what "Internet" means, do you?

The term Internet is short for "interconnected networks". The Internet isn't one big network. It's thousands and thousands of smaller and bigger networks linked together via so-called routing protocols. They transport the information on which routers decide how to route your packet so it arrives at its destination. Routers are, to use an analogy, the traffic signs along the highway, giving each packet directions on which lane it needs to take to reach its destination. In protocol terms, we speak about iBGP, eBGP, OSPF, RIP v1/v2, IGRP, EIGRP, and so on. The only real distinction is whether these protocols are Intra-AS (routing inside one AS - for example iBGP) or Inter-AS (routing between several AS - for example eBGP) routing protocols.

What is an AS, you ask? AS is short for autonomous system (Wikipedia). That's the technical term for a network under the control of a single entity, like a company. Each AS is identified by its unique number, the ASN. This number is used in routing protocols like BGP to identify exactly to which AS a route announcement belongs.

And as you must have already guessed by now: None of this respects real-world borders. Packets don't stop at borders. Here in Europe, even we people don't stop at borders. You just have to love Schengen (Wikipedia).

Therefore, the task of only allowing customers from the US is a little bit complicated to set up. Technically speaking. Data packets don't contain information about which country they originated from. Just the source IP address.

But.. My firewall-/router-/hosting-/DDoS-/CDN-/whatever-provider provides such an option in the control panel of my/our account? So it must be possible!

I didn't say it couldn't be done under any circumstances. I just said it's complicated and will constantly cause you pain and money loss.

Even BGP in itself isn't 100% safe and attack vectors like BGP hijacking (Wikipedia) do exist, but due to how BGP works, they are always pretty quickly noticed, and the culprit is easily and clearly identified.

So, when it is possible, how do they do it?

They are taking many, many educated guesses.

...

Yeah, ok. Sprinkled and garnished with some bureaucratic facts as their starting point for their educated guesses.

...

Ok, sometimes they outright pay internet service providers or other companies to give them that data. This might or might not be legal under your country's data privacy laws..

...

Not the answer you expected to read? Yeah, life is disappointing sometimes.

How the bureaucratic layer of the Internet works

Let's tackle the problem from another angle: Do you know how IP addresses are managed on the bureaucratic layer?

Do terms like IANA, RIR, RIPE and ARIN ring a bell? No? Ok, let me explain.

IANA is the Internet Assigned Numbers Authority. In their words, they "perform the global coordination of the DNS Root, IP addressing, and other Internet protocol resources". Relevant for us in this article is the "IP addressing" part.

The IANA assigns chunks of IP addresses to the so-called RIRs. That's short for Regional Internet registry (Wikipedia). Those RIRs are (with their founding dates and current areas of operation):

  • 1992 RIPE NCC - Europe, Russia, Middle East, Greenland
  • 1993 APNIC - Asian/Pacific region, Australia, China, India, etc.
  • 1997 ARIN - United States of America, Canada
  • 1999 LACNIC - Mexico, South American Continent
  • 2004 AFRINIC - African continent

These RIRs then provide companies in their assigned areas with IP addresses they can manage themselves. And to make this picture easier I left the ICANN & NRO, two other governing bodies, out of the picture.

As you can see some RIRs were founded later than others. This also means: Even if you filter based on which RIR manages the IP addresses, this isn't set in stone forever. Even if a RIR is responsible for a whole continent this can change.

What these companies which offer geo-blocking do is: They look at where an IP address is located on the bureaucratic layer. Which RIR is responsible for the IP block? Which companies "own" the IPs? Where are they routed to/announced from? But these are all bureaucratic and technical pieces of information. This information can't be mapped 1 to 1 to a country. And these bits of information are extremely volatile.

Side note: There is no RIR for each single country. The term LIR or Local Internet registry (Wikipedia) does exist, but it commonly refers to your Internet Service Provider (ISP), who assigns your Internet modem/router an IP address so you can browse the Internet. This has nothing to do with countries. The Internet itself isn't technically designed with the concept of "countries" or "borders" in mind. Never was and most likely (hopefully!!) never will be.

Another problem is the systems that provide this information: Some provide real-time information. Others don't. Additionally you don't know which metrics your vendor uses and how the vendor obtains them. And usually they don't make the process of how they obtain and classify the information publicly available.

I had customer support agents who, instead of resolving a domain name via the ping or host command, typed it into Google and used that information. Sometimes obtaining wrong information which was months old and therefore led to other errors...

And what about multi national businesses?

A company from Germany can have IPs assigned from ARIN for their US business. Maybe they have a subsidiary company for their US business, but this still makes it a German company. How do you filter that?

Keep in mind: Maybe their US subsidiary was only established for jurisdictional reasons and all the people working with you are sitting in Germany. Hence mails, phone calls, letters, etc. will all come from Germany.

Additionally this company is free to use the IPs as they like. They can announce their BGP routes as they like. Nothing is preventing them from using IPs assigned by the RIPE NCC in the United States. This is done on a regular basis. Especially since IPv4 addresses are rare and sometimes IPs need to be moved around to satisfy the ever-growing demand.

Side note: The NRO publishes the data of all delegated IP blocks under https://ftp.ripe.net/pub/stats/ripencc/nro-stats/latest/. The file nro-delegated-stats contains the information about which IP blocks were assigned by which RIR. You will find lines showing that ARIN (only responsible for the US & Canada) assigned IPs to an entity in Singapore.

Jan Schaumann used that file to present some cool statistics about IP allocations: https://labs.ripe.net/author/jschauma/whose-cidr-is-it-anyway/
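If you want to poke at that file yourself, something along these lines should do. It assumes the standard RIR delegated-stats format of pipe-separated fields (registry|country|type|start|value|date|status|...):

# In which countries did ARIN-delegated IPv4 blocks end up,
# counting everything outside ARIN's own region (US/CA)?
curl -s https://ftp.ripe.net/pub/stats/ripencc/nro-stats/latest/nro-delegated-stats \
  | awk -F'|' '$1 == "arin" && $3 == "ipv4" && $2 != "US" && $2 != "CA" { print $2 }' \
  | sort | uniq -c | sort -rn | head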

To make the picture more complex: IPs issued by a RIR can be used in any country. There is no rule nor enforcement that IPs issued by a RIR are only to be used in its sphere of influence. Therefore even that first piece of information you start from can differ completely from reality.

Hence my statement that all this geolocation business is based on educated guesses. Yes, many positions will be precise. But the question is "For how long?" - and do you really want to make your communication dependent on that?

The technical reality

BGP routes themselves can change at any time. There is no "You can change them only once every 30 days." You can change them every 5 minutes if you like. They can even change completely automatically. Heck, they have to change automatically if we want a working Internet. There are always equipment malfunctions..

When I worked at a major German telecommunications provider, we utilized BGP to build an automatic fail-over in case an entire datacenter went offline. Both datacenters announced their routes (how traffic can reach them) via BGP towards the route reflector of our network team. Datacenter A announced with a local-preference of 200, datacenter B with a local-preference of 100. In iBGP the highest local-preference value takes priority. This means: If datacenter A should ever cease to function (the iBGP announcements from that datacenter stop reaching the route reflector) the traffic will immediately go towards datacenter B.
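To give a rough idea of what that looks like in practice, here is a minimal sketch in FRR/Quagga-style BGP configuration for the datacenter A side. All ASNs, IPs and prefixes are made up, and this is not the exact configuration we ran back then - it just shows the local-preference of 200 being attached to the announced routes:

router bgp 64512
 neighbor 192.0.2.1 remote-as 64512
 neighbor 192.0.2.1 description route-reflector
 !
 address-family ipv4 unicast
  ! the prefix this datacenter provides service on
  network 198.51.100.0/24
  ! attach the route-map that sets the local-preference
  neighbor 192.0.2.1 route-map DC-A-OUT out
 exit-address-family
!
route-map DC-A-OUT permit 10
 set local-preference 200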

In our case, both datacenters A and B were located in Germany. But that was pure chance. My employer also had datacenters in France, the UK, Spain, etc. and of course also in the US. It just happened that the datacenters where my team was allocated the necessary rack space for our servers were both located in Germany.

So the endpoint can literally change every millisecond. And with it the country where traffic is sent to or originates from.

Of course we did regular fail-over tests. Now think about the following scenario: We are doing a live fail-over test. Datacenter A switches over to B, and datacenter B happens to be located in France. The traffic will be arriving in France for 5 minutes (the duration of our test). In exactly these 5 minutes a scan from a vendor notices that traffic for all IPs affected by our test arrives in France. The software will write this into its database and happily move along.

How long will that false, inaccurate and outdated information be kept in their database? What trouble will that cause your business?

Looking at it from the other side

Ok, so we clarified why geo-blocking amounts to taking educated guesses with a bit of voodoo. It is time to look at it from the other side, right? As this is a viewpoint which is regularly forgotten completely.

Let's go with the example above: "Hey, as we do only business in the United States, can't we simply block all other countries and be safer? All our customers and suppliers are located in the US.."

Is this really the reality? Are your suppliers and customers located in the US?

I bet 100% that you haven't even really examined why you are making that claim. Most people will look at: "Where do we have stores? Where do we ship? Who are our target customers?"

This usually leads to an opinion based on bureaucratic metrics. Or in other words: Delivery and invoice addresses.

But what about the customer in Idaho who just recently moved there from Spain and still uses his/her mail account from a Spanish mail provider?

Have you checked which IPs their email server uses? Are they hosted by a big cloud provider like Google, Azure or AWS? Do you have complete and absolute knowledge of how these three biggest tech companies manage their IPs and hundreds of networks today? Tomorrow? Next week?

Even they don't.

Which measures and workarounds will they undertake should a datacenter be down? Or just in a planned maintenance state?

It's fairly normal that in times of need workarounds are applied to ensure customers can use the services they are paying for again as quickly as possible.

Businesses change too

What if your biggest client suddenly stops buying from you? Are you no longer getting any calls for bids?

Could it be that the company you did business with was recently acquired by another company? And now they send all their mail from an entirely different mail server hosted in an entirely different country? Could that be the reason the RFQs (requests for quotations) stopped coming?

How much money will you lose before you notice this error?

Last words

I tried to explain in easy words for non-techie people why geo-blocking is usually bad. Yes, it's used by Netflix and many others. Yes, many products offer some kind of feature to achieve some form of geo-blocking.

But keep in mind: They have to do this for jurisdictional reasons. They bought rights to movies to show in certain countries. The owners of these rights want Netflix to ensure only those customers can watch these movies. Because they themselves sold the exact same rights to at least 25 other companies in other countries. And each of their customers will sue them once they notice that a competitor has the same movie in the same country. Hence, Netflix is trapped in a never-ending cat-and-mouse game with VPN companies that constantly change their endpoints.

I haven't even talked about VPNs. I haven't talked about DNS. I haven't talked about mail. All these require IPs to function. All these add several other layers of complexity. But all these are needed for your business to work in the 21st century.

You won't be more secure by blocking China, Russia, or North Korea on your firewall.

You will be more secure by applying patches on time. Using maintained software products. Separating your production environments from your development/test networks and the networks where the PCs/Laptops of your employees are located. By running regular security audits. By following NIST recommendations regarding password security. By defining a good manageable firewall rule framework. By having a ticket system that makes changes traceable AND reproducible. By introducing ITIL or some ISO stuff if you want to go that route.

Be advised: The bad guys are not just sitting in those countries that you are afraid of. China isn't attacking solely out of China in cyberspace. No. They probably utilize a nicely hacked internet account from John Doe just around the corner from your shop.

Some links

If you want to read further I can recommend the site https://networklessons.com/. If you want to learn more about BGP you can visit https://networklessons.com/bgp and start from there.


Why I don't consider Outlook to be a functional mail client

Photo by Pixabay: https://www.pexels.com/photo/flare-of-fire-on-wood-with-black-smokes-57461/

This topic comes up far too often, therefore I decided to make a blogpost out of it. After all, copy & pasting a link is easier than repeatedly writing the same bullet points.

Also: This is my private opinion and this article should rather be treated as a rant.

  • Mail templates are separate files? And the workflow to create them is seriously that antique?
    • Under Create an email message template (microsoft.com) Microsoft details how to create an email template. But do you notice something? The phrase "[...] that include information that infrequently changes [...]" means only static text is allowed.
    • Yep, you can't draft mail templates where certain values get auto-filled and the like. I mean, how many employees, consultants, etc. have to send their weekly/monthly time-sheet to someone? Is it so hard to automatically fill in the week number and month, and automatically attach the latest file with a certain file name in a specified folder?
      • Yes! Automating this with software is surely the best way. But we all know how the reality in many companies looks like, right?
    • Additionally the mail templates are stored as files on your filesystem under: C:\users\username\appdata\roaming\microsoft\templates.
      • This means: Mail templates are not treated as mails in draft mode or the like. No, you have to load an external file via a separate dialogue into Outlook. That's user experience from the 1980s?
    • Workaround: Create a folder templates, create a sub-folder templates-for-templates. Store mail drafts (with recipients, subject, text, etc.) in templates-for-templates. When needed copy to templates. Attach file. Edit text manually. Hit send.
    • Never send directly out of templates-for-templates as otherwise your template is gone.
    • But seriously? Why is this process so old and convoluted? I suspect the feature is kept this way because Microsoft is afraid of people utilizing it to send spam. But.. Sending spam manually? I think this stopped being a thing on May 5th, 2000 (Wikipedia) at the latest.. Every worm/virus out there has its own built-in logic to generate different subjects/texts/etc. Why deliberately keep a feature in such a broken state and punish your legitimate users?
  • No regular expressions in filter keywords
    • This annoys me probably the most. When you specify a filter "Sort all mails where the subject begins with Newsletter PC news into a folder", Outlook will only sort mails with the exact subject of "Newsletter PC news"
    • Which is stupid when there is a static & a changing part in the subject. I mean, it's 2024. Supporting some kind of wildcard string matching via asterisks is not really new, is it? Like: "Sort all mail where the subject starts with Newsletter PC news*" - and then "Newsletter PC news April 2024" would also get sorted.. No. Not in Outlook.
  • Constant nuisance: Ctrl+F doesn't bring up the search bar - Instead it opens the new mail window..
    • I mean really? Ctrl+F is the shortcut for search everywhere. Why change that!?
    • Info: Ctrl+E activates the search field on top
  • Only one organizer for events
    • Ok, technically this isn't Outlook but rather CalDAV, and hence Google Calendar, etc. suffer from the same problems. But I still list it as a fault.
    • Why? Microsoft has repeatedly shown the middle finger to organizations like the ISO and the like. When it suits Microsoft's market share, they are basically willing to ignore a lot of common standards (like Google, Facebook, etc..). With their Active Directory infrastructure and Office suite they have everything in-house and 100% under their own control to make this feature work in Windows environments - which most companies do run. But they don't care.
    • I mean.. On the other hand I'm glad that they follow the standard. It just turns out so often to be a feature we are in need of that I stopped counting.
    • And you already need proprietary connectors to properly integrate your Exchange calendar into other mail programs like Mozilla Thunderbird. So this shouldn't really be a big deal-breaker either..
  • Only one reminder for events
    • Due to my attention deficit disorder I tend to have what is called "altered time perception" or "time blindness". This means I won't experience 15 minutes as 15 minutes, or I grossly under-/overestimate how much time I really have left. The best description I can give for non-ADDers is: I will think of 15 minutes as "Ah, I still have 1 hour left." That this can lead to situations where I am late or wasn't able to fully prepare something for a meeting should be clear.
    • Therefore it really helps me to be able to set multiple reminders for an event.
    • Usually I do the following: 1 hour before, 30min, 15min. This helps me to break out of the time blindness and synchronize my altered time perception with reality. Enabling me to finish tasks before the meeting/event happens.
    • For events like a business trip which take more time to prepare I often set a reminder 1 or 2 weeks in advance. This way I have time to do my laundry in time and so on.
    • Outlook however only supports the setting of ONE reminder.. Yeah..
    • My workaround is to have events also in my private calendar. (Of course without any details and often just a generic title/description as to not store client information on my private device.)
  • Remember Xobni? / The search is horrible
    • Outlook search is a single input field and then it searches over everything. You can't specify if the search term you used is a name, part of the name of a file or part of an email address.
    • In the early 2000s there was Xobni. Slogan: "It reverts your Inbox." - hence the name Xobni. It was an add-on which added another sidebar to Outlook. There it displayed all the people you've mailed with. And when you clicked on a person you saw all mails, all mail threads and, most importantly, all attachments this person had sent to you (or you to them). You could even add links to the person's social media profiles, etc. It was brilliant. And made work so easy. As often I remembered only the person who sent a file to me or the thread in which it was attached - but not the actual mail or even its subject. Xobni made it pretty easy to work around that, making it possible to search Outlook in the way our brain works.
    • Well, sadly Yahoo bought Xobni in July 2013 and shut it down in July 2014.
    • But it's 2024 and Microsoft hasn't come up with a similar functionality yet? Really?

Your content needs a date!

Photo by Pixabay: https://www.pexels.com/photo/clear-glass-with-red-sand-grainer-39396/

It's far too often that I encounter blogs, "What's new?" sections or other content which doesn't have any form of date or timestamp indicating when the content was first published, last modified, etc. And, to a certain degree, I find it annoying. As this information provides crucial context. It allows me to make certain assumptions and sort the content in correctly.

It's like when you read a changelog for a piece of software and the added/changed/removed features are not attributed to the version of the software in which they changed. Not helpful at all.

A political piece, written at the height of a scandal, might not include crucial information which was only discovered months later, during the lengthy and boring police investigation. About which - of course - nobody writes in detail. With a date next to that text I can sort the piece into its correct position in the timeline and explain to myself why certain arguments weren't made or are plain wrong - but maybe reflected the current state of knowledge at the time it was written.

Today I got curious about what happened to the German PC handbook publishing company Data Becker. And I found this blogpost (in German) by Thomas Vehmeier: Data Becker – eine Ära geht zu Ende (vehmeier.com). Apparently he worked at Data Becker in the middle of the 1990s. And in his text he writes about his experience and how & why Data Becker failed when the Internet, and therefore the market, began to change.

But.. There is no date. Nowhere. He also doesn't mention the year when Data Becker went out of business. A classical archaeological problem. We can only definitively say "It happened after the 1990s". But apart from that? Well, he links to the WirtschaftsWoche, a German business magazine. They do have a date on their article: 9th October 2013. And they wrote that Data Becker would go out of business in 2014.

Does this clarify when his text was written? No, but it answers the question somewhat sufficiently.

Still, it illustrates my problem. Yes, it is not an unsolvable one, but it is still annoying - for me. And, I guess, I'm again in the minority here.


ASUS RMA process is broken by design to maximize profit?

Photo by ThisIsEngineering: https://www.pexels.com/photo/woman-working-on-computer-hardware-19895718/

I watched an interesting video from the YouTube channel GamersNexus. Its title is "ASUS Scammed Us".

And in this video they show how the ASUS RMA process is broken and how many customers are faced with repair bills higher than the original cost of the device. Or ASUS claims parts need to be repaired which, according to the customers, are not broken. Another big topic is that ASUS regularly claims the customer caused the defect and hence the repair isn't covered under their warranty.

Yeah.. While watching the video you get the feeling the process was designed that way to maximize profit. This means it's opaque, not flexible enough and generally doesn't have the customer at the core of its view/goal.

Which sucks. And it earned ASUS a place on my "Do not buy from ever again" list... The video is linked below:

Or, if you prefer a link, here: https://www.youtube.com/watch?v=7pMrssIrKcY


Go home GoDaddy, you're drunk!

Photo by Tim Gouw: https://www.pexels.com/photo/man-in-white-shirt-using-macbook-pro-52608/

I'm just so fucking happy right now that I have never been a customer of GoDaddy. As I learned via Reddit yesterday, GoDaddy closed access to their DNS API for many customers.

No prior information.

No change of the documentation regarding API access.

Nothing.

For many customers this meant that their revenue stream was affected as, for example, the SSL-Certificates for web services couldn't be automatically renewed. Which is the case when you are using Let's Encrypt.

Therefore I can't say it in any other words: GoDaddy deliberately sabotaged its customers in order to maximize its income.

Yeah, fuck you GoDaddy. You are on my personal blacklist now. Never going to do business with you. Not that I planned to, but sometimes decisions like this must be called out and sanctioned.

When customers asked why their API calls returned an HTTP 403 error (Forbidden), GoDaddy provided the following answer (emphasis added by me):

Hi, We have recently updated the account requirements to access parts of our production Domains API. As part of this update, access to these APIs are now limited:

  • Availability API: Limited to accounts with 50 or more domains
  • Management and DNS APIs: Limited to accounts with 10 or more domains and/or an active Discount Domain Club plan

If you have lost access to these APIs, but feel you meet these requirements, please reply back with your account number and we will review your account and whitelist you if we have denied you access in error. Please note that this does not affect your access to any of our OTE APIs. If you have any further questions or need assistance with other API questions, please reach out. Regards, API Support Team

Wow. The mentioned OTE API, meanwhile, is no workaround. It's GoDaddy's test API, used to verify that your API calls work prior to sending them to the production API. You can't do anything there which would help GoDaddy's customers find a solution without having to pay.

Sources

Am I the only one who can't use the API? (Reddit)

Warning: Godaddy silently cut access to their DNS API unless you pay them more money. If you're using Godaddy domain with letsencrypt or acme, be aware because your autorenewal will fail. (Reddit)
