Feuerfest

Just the private blog of a Linux sysadmin

Switching Two-Factor Authentication Apps

Photo by Pixabay: https://www.pexels.com/photo/qr-code-on-screengrab-278430/

As I'm preparing to update the firmware on my mobile phone, flashing a new ROM and rooting it, I'm currently in the "backup, document and save everything" phase. This means I'm checking that I can back up and restore all my two-factor authentication (2FA) codes properly.

Partly because I didn't back up the enrolment QR codes for every service I signed up to over the years. Documenting the otpauth:// URL and/or the initial QR code is still the best way. Then it doesn't even matter if all your devices get lost: you can just enter the information into your 2FA app, use the code to sign in, then disable and re-enable 2FA to invalidate the old secret - locking out anyone else possibly using your devices/accounts.
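
For reference, such an otpauth:// URL packs the secret and all token parameters into a single line, following the well-known Key Uri Format (all values here are made up):

otpauth://totp/ExampleService:user@example.com?secret=JBSWY3DPEHPK3PXP&issuer=ExampleService&algorithm=SHA1&digits=6&period=30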

As I've played around a bit with 2FA apps over time, I ended up with three apps installed: Google Authenticator, Aegis Authenticator and FreeOTP.

And this is where my problems start.

Google Authenticator: Only allows you to export your entries as a QR code. (Besides the cloud sync, but whoever uses that hasn't properly understood 2FA, in my personal opinion...)

Aegis Authenticator: Allows the export as encrypted or unencrypted clear text, even with proper otpauth:// URLs. Nice!

FreeOTP: Exports your entries into a file called externalBackup.xml which contains JSON-structured data!? Okay... The secrets are encrypted with the password you chose when you installed the app; it cannot be changed or retrieved afterwards, so I hope you remember it. 😉

There is a discussion on GitHub about how to decrypt that file, extract the secrets and build proper otpauth:// URLs, but that solution didn't work for me.

I only got the following error message - ElementTree fails right at the first byte, so the file apparently isn't well-formed XML to begin with:

user@host:~$ python3 freeotp.py
Traceback (most recent call last):
  File "/home/user/freeotp.py", line 26, in <module>
    tree = ET.parse("externalBackup.xml")
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/xml/etree/ElementTree.py", line 1218, in parse
    tree.parse(source, parser)
  File "/usr/lib/python3.11/xml/etree/ElementTree.py", line 580, in parse
    self._root = parser._parse_whole(source)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
xml.etree.ElementTree.ParseError: not well-formed (invalid token): line 1, column 0

Anyway, as I have deleted my accounts on several services over the last years, only two entries were still needed, and it was easier and faster to just disable 2FA on those accounts. So that is what I did.

Using Aegis Authenticator to migrate to any 2FA app of choice

As for my entries in Google Authenticator: I generated the export QR code and scanned it with Aegis Authenticator. Aegis imported all entries properly, and the generated 2FA tokens matched when I checked them against Google Authenticator.

As Aegis allows me to export everything in clear text, I can use that to migrate to any 2FA app of my choice. But most likely I will stick with Aegis.
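
To double-check that an exported secret really produces the same codes, you can generate a TOTP on the command line with oathtool. A minimal sketch, assuming a base32 secret (the one below is made up); the output is whatever six-digit code is currently valid:

user@host:~$ oathtool --totp -b "JBSWY3DPEHPK3PXP"
123456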

Yes, this clear text export is a potential security risk. I get it. But if it means I have a way to easily migrate 30+ 2FA accounts, I'm willing to make that compromise. And yes, now that I have all my secrets and otpauth:// URLs documented, that shouldn't be a concern anymore, right? Well, in theory. I'm pretty sure that in the future I will forget to properly document some 2FA codes again - hence the export being the better choice.

Or are there other solutions I'm missing?

And what about the Microsoft Authenticator?

Honestly? I'm forced to use it by my employer, as we don't allow any other form of 2FA for authentication in our company. And as it implements some sort of custom 2FA mechanism that no other app supports, I couldn't be bothered to search for a migration solution.

Hence only one account is tied to it. So I did what was reasonable: I removed the app, deleted all settings and cached files, reinstalled the app and just enrolled my account again.

Yes, this required a ticket for our IT helpdesk to remove the old authenticator from my account, but I had no problem with that.


I made an Oopsie or: How to request a new Telegram BotToken

Photo by Antoni Shkraba: https://www.pexels.com/photo/man-in-green-hoodie-and-black-sunglasses-sitting-on-orange-chair-5475784/

While writing my script that notifies me when comments are pending approval (read: Development of a custom Telegram notification mechanism for new Isso blog comments), I made a mistake and committed the script to my public GitHub repository along with the BotToken.

Even though it was only up for about 20 minutes before I realised what I had done, I considered it compromised. Therefore I needed a new BotToken for the Telegram HTTP API.

Luckily this is very easy. As Telegram keeps track of which account was used to create a bot, I was able to do this in two minutes via Telegram Web (including googling the commands).

All I had to do was make sure I message @BotFather from the account with which I created the bot:

  1. Search for @BotFather on Telegram
  2. Message @BotFather with the command: /revoke
  3. Provide @BotFather with the username of the bot: @SomeExampleBotBla
  4. @BotFather will reply with the new token
  5. Update your scripts and restart services such as Icinga2
  6. Test/verify that the new token is being used everywhere (a quick check is shown below).
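
A simple way to verify the new token is a call to the Bot API's getMe method (the token here is shortened and made up); if the token is valid, Telegram answers with "ok":true and your bot's details:

user@host:~$ curl -s "https://api.telegram.org/bot123456:ABC-DEF_example/getMe"
{"ok":true,"result":{"id":123456,"is_bot":true,"username":"SomeExampleBotBla",...}}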

Done.

Cleaning Git

As you may know, even deleted files, along with their content, stay in the Git history and are easily searchable. GitHub has a good article about the topic, discussing some tools which make it easier to remove such files: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/removing-sensitive-data-from-a-repository
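
"Easily searchable" as in: Git's pickaxe search will find a leaked string in seconds, even in deleted files (token made up):

user@host:~/scripts$ git log -S "123456:ABC-DEF_example" --oneline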

I utilized git-filter-repo (Documentation: https://htmlpreview.github.io/?https://github.com/newren/git-filter-repo/blob/docs/html/git-filter-repo.html).

However, keep in mind that git-filter-repo removes your configured remotes (see the git-filter-repo documentation). You have to set them again if you plan to keep using the same repository.
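
For reference, the whole procedure boils down to a few commands. A sketch, assuming the leaked token is listed in a replacements file (filename and token are made up; run this on a fresh clone, or git-filter-repo will insist on --force):

user@host:~/scripts$ echo 'literal:123456:ABC-DEF_example==>REMOVED' > /tmp/replacements.txt
user@host:~/scripts$ git filter-repo --replace-text /tmp/replacements.txt
user@host:~/scripts$ git remote add origin git@github.com:ChrLau/scripts.git
user@host:~/scripts$ git push --force --all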

And yes, while the BotToken is gone from the history of the script (https://github.com/ChrLau/scripts/commits/master/check-isso-comments.sh), you can still view it on GitHub if you know the original commit ID.

Apparently, completely deleting stuff from a Git repository is pretty hard... Lesson learned.


Hypocrisy

Photo by Madison Inouye: https://www.pexels.com/photo/self-care-isn-t-selfish-signage-2821823/

One of the attributes used to describe me, and one that I get to hear regularly, is that I am critical. Sometimes this comes in the form of an accolade: that I have good discernment, or that I am brave enough to publicly speak out about things which many dare not to. And sometimes it comes in the form of constructive feedback: that I should focus more on the positive side of a certain task or project.

However, I always try not to be a hypocrite. I regularly ask myself whether I am the one to blame; whether I could have done better, missed a crucial piece of information, or whether my words contradict my actions. And if they do: do I have a just reason for this? A cause that explains it in a comprehensible way?

Additionally, I try to keep my emotions out of it. Yes, I do not succeed at this 100% of the time; after all, I'm not a machine. Still, I manage to succeed often enough not to look like a raging barbarian. Failing to think over an issue in a neutral way often leads to missing key points, and makes it hard to see it through the eyes of the other involved parties/stakeholders. This in turn causes inaccurate statements or incoherent lines of reasoning. None of this helps to convince other people or to get to the root of the problem.

Therefore it shouldn't surprise anyone that I don't like hypocrisy. Especially when it touches a topic that I have first-hand personal experience with and that is important to me.

Mental Health Day

October 10th is the international day of awareness for all topics related to mental health. Be it a proper work-life balance, the poor care for people suffering from illnesses such as depression (and many others), or the sadly still existing prejudices against people who have suffered - or still suffer - from mental health issues.

A complicated & delicate topic

Mental health issues are a tricky thing. In nearly all cases I got to know in detail, the ones suffering from them are not the ones responsible, nor the ones to blame. Some people crumble under all the injustice in this world - shattering while trying to just make things right, but doomed from the start, as a single person can't beat the company, let alone the system.

Others experienced acts so malevolent, even without getting hurt physically, that it left them in ruins. Just think of the child which constantly experiences injustice from its parents, never getting to know what the word family should mean.

Yet these very same people have to summon an immense amount of strength and pick up the fight for their own sanity, just to live a happy life.

And then there are outsiders who make fun of them for that. Who belittle them. Who question their ability to ever regain their mental health, or to ever be a productive person again.

These are the people to whom I strongly recommend therapy - or at least speaking with, for example, a recovered alcoholic or a rape survivor. The immense lack of sympathy and humility they show is shocking. They can't even imagine what these people have been through and how much work therapy is. Yet, again, some people make fun of therapy, thinking of it as "just singing in a circle and clapping your hands". No, it's not.

A special place in hell

And then... there are certain companies I know of. Posting on LinkedIn, Twitter, Instagram and all those other social media and business platforms about how "mental health aware" they are. How much they care about enabling their employees to live a good work-life balance. Etc. And so on. Yada yada.

All this while they engage in union-busting with the help of a specialised law firm, and have absolutely no issue with threatening, admonishing or taking people to court over nonsense. Sometimes they even utilize their knowledge of certain employees' mental health issues to speed up the process of making them resign (or leave with a severance package and a signed NDA) - effectively using it as a weapon against them, just to reach their goal of preventing a union.

And the only thing these people did was try to organize a union to get their rights and improve their situation.

Yeah, I seriously hope those people get a special place in hell.


No, you don't get Microsoft Office for free via the Office Customization Tool (OCT) and Generic Volume License Keys (GVLKs)

Photo by RDNE Stock project: https://www.pexels.com/photo/close-up-shot-of-a-person-holding-a-contract-7841486/

There are a bunch of websites and YouTube videos trying to tell you that the locally installable Microsoft Office versions can be downloaded legally, free of charge, directly from Microsoft. This is, of course, nonsense.

The bait

All the videos I watched and articles I read told their audience to visit https://config.office.com/deploymentsettings, which is the official Microsoft site for the so-called Office Customization Tool (OCT). The OCT is meant for system administrators and generates a configuration file that defines which Office version (Office LTSC Professional Plus 2024, Microsoft 365 Apps for Enterprise, etc.) and which Office components (Word, Excel, PowerPoint, Outlook, etc.) are going to be installed.

It is a tool used to ensure that installer packages for Microsoft Office are configured the same way, helping IT administrators automate and standardize Office installations on all of a company's various systems (notebooks, workstations, etc.).

The end result of using the OCT is a configuration file, not an actual installer. You won't get a .msi or .exe file. You've just created a configuration file telling the Office installer which components to install and how to configure them.

However - and this is the main reason this scam works - a license key is pre-filled. And this is almost always highlighted prominently in the videos and articles, as if it were some sort of proof that "Microsoft is giving away Office for free" or "this is legitimate". That's a lie, straight out. I'll elaborate on that in the next paragraph.

The next step is to download the official Office Deployment Tool (ODT) from Microsoft. Then the setup routine of the ODT is called and supplied with the configuration file you created with the OCT.
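
In practice this boils down to two commands: /download fetches the installation files and /configure performs the actual installation (the working directory and the configuration filename are examples - they are whatever you saved from the OCT):

C:\ODT> setup.exe /download configuration.xml
C:\ODT> setup.exe /configure configuration.xml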

And yes, of course you will end up with an installation of your chosen Microsoft Office version. Alas, this installation will only work for 30 days, and you won't be able to activate it. Why? Read the next paragraph.

The catch

What all of them don't seem to grasp - or intentionally misrepresent - is that they are using a Microsoft Office version with a pre-filled Generic Volume License Key (GVLK). This key still must be activated via a so-called Key Management Service (KMS). The KMS is NOT the Microsoft activation service used to activate your Windows installation. It's a separate software component running in the Active Directory domain of corporate networks. And guess what? You need to buy license keys and add them to the KMS. Only then will your copy be activated.
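
For context, this is roughly how a volume-licensed Office installation is pointed at a KMS host and activated, using the ospp.vbs script that ships with Office (path and hostname are examples). Without a reachable KMS host holding purchased licenses, the activation step simply fails:

C:\> cscript "C:\Program Files\Microsoft Office\Office16\ospp.vbs" /sethst:kms01.corp.example.com
C:\> cscript "C:\Program Files\Microsoft Office\Office16\ospp.vbs" /act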

The thought "Well, I've got a license key, so it's working" is outdated. That may have been the case with software in the early 2000s. Nowadays there are several other methods to validate whether a copy is genuine or not.

After all... if the license keys are so precious - the most valuable part - why are they pre-filled and non-changeable in the Office Deployment Tool from Microsoft? Why are they publicly listed in a separate article?

That's why, unsurprisingly, all the videos are tight-lipped about the whole "how to activate the Office installation" part. Usually they just show the successful installation and open a few Word documents to prove it's working. Well yes, it is - for 30 days. But after that you're back at zero again.

And... I have no idea whether using such an Office installation will have consequences for your Microsoft account...

Side-note: Generic Volume License Keys being sold as used license keys

At least here in Germany it is legal to sell used license keys. With Generic Volume License Keys (GVLKs) this adds another possibility for fraud.

Many shops which sell used Microsoft Office license keys will offer you retail keys. Think of retail keys as the stickers on the DVD box of the software you just bought in your store of choice. Here the key is almost always unique and identifies exactly one copy of Microsoft Office. The shop will gladly take your money when you buy a retail key from them - only for you to receive a mail shortly afterwards stating that, sadly, retail keys are not in stock. But hey! They offer to upgrade you to a volume license key. Maybe even together with an upgrade to a higher Office edition. As in: you bought a retail key for Microsoft Office 2024 Home and they offer to upgrade you to Microsoft Office 2024 Professional Plus.

See where this is going?

At this point you should decline that upgrade and demand your money back, as it's quite likely that you would receive a Generic Volume License Key. Of course, once you complain about your activation issues, the shop will state that "activation cannot be guaranteed", etc.

That's the reason why you should read the terms & conditions carefully prior to buying used software license keys...

Why not just use OpenSource software?

If you don't want to pay for your office software suite, why not just use LibreOffice? LibreOffice is open source and free of charge, and it can handle documents created with Microsoft Office as well. So why all the hassle?

If you've never heard of it before, feel free to read the Wikipedia article about it: https://en.wikipedia.org/wiki/LibreOffice


How to stop logging HTTP requests from check_http in Apache 2

Photo by Sena: https://www.pexels.com/photo/wood-covered-in-snow-10944259/

I encountered a situation where the HTTP(S) requests made by Icinga via the check_http check plugin seriously messed up the generated website traffic reports. The software used to generate said reports wasn't capable of filtering out certain requests based on user agents. Restructuring the report so that hits from the monitoring requests could be easily identified was also out of the question, as this would generate too much follow-up work for other involved parties and systems.

The solution (or rather: workaround...) was to simply omit said monitoring requests from the logfile of the Apache vHost.

The relevant line in the virtual host config was the following:

CustomLog /var/log/apache2/host.domain-tld-access.log combined

This defines where to store the logfile, using the "NCSA extended/combined log format" as specified on https://httpd.apache.org/docs/2.4/mod/mod_log_config.html#formats and defined in /etc/apache2/apache2.conf on Debian:

root@host:~ # grep LogFormat /etc/apache2/apache2.conf
LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

SetEnvIf to the rescue!

Apache2 allows us to set environment variables for each request via the SetEnvIf directive (among others). This is documented on the page for mod_setenvif: https://httpd.apache.org/docs/2.4/mod/mod_setenvif.html#setenvif

A typical line in the access.log looks like the following:

256.133.7.42 - - [30/Dec/2024:05:46:18 +0100] "GET / HTTP/1.0" 302 390 "-" "check_http/v2.3.1 (monitoring-plugins 2.3.1)"

Since the requests target the URI /, we can't exclude them based on Request_URI. Instead, we have to go by the supplied User-Agent. Fortunately this one is quite unique.

To catch future versions we use a wildcard regex for the version numbers. As the name of the environment variable I chose monitoring.

# Don't log Icinga monitoring requests
SetEnvIf User-Agent "check_http/v(.*) \(monitoring-plugins (.*)\)" monitoring

Now all that is needed is to adapt the CustomLog directive in the virtual host configuration. As the documentation for the CustomLog directive explains, we can check environment variables via the env= option. Negating it will simply omit all requests with that environment variable set.

Therefore our resulting CustomLog line now looks like this:

CustomLog /var/log/apache2/admin.brennt.net-access.log combined env=!monitoring

Now just check your configuration for errors via apache2ctl configtest and restart your Apache webserver.
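
To verify the change works, you can send one request with the monitoring user agent and one without, and compare the access log (hostname taken from the example above):

user@host:~$ curl -s -o /dev/null -A "check_http/v2.3.1 (monitoring-plugins 2.3.1)" https://admin.brennt.net/
user@host:~$ curl -s -o /dev/null https://admin.brennt.net/

The first request must not show up in the logfile, while the second one must.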


Using ping to monitor if your systems are working is wrong

Generated via Dall-E

I've seen this far too often: when monitoring a system, a simple ping command is used to verify that the system is "up and running". In reality nothing could be further from the truth, as that is not what you are actually checking.

Ping utilizes the Internet Control Message Protocol (ICMP) to send so-called echo request packets to a server, which the server answers with echo reply packets. This gives us a method to verify that the server is answering our requests. But the server answering those packets doesn't mean that the server itself is in working condition, or that all services are running.

And this is for several reasons:

1. The network is reliable - mostly

In https://aphyr.com/posts/288-the-network-is-reliable Kyle Kingsbury, a.k.a. "Aphyr", and Peter Bailis discuss common network fallacies, similar to the fallacies of distributed computing by L. Peter Deutsch. It is commonly assumed that the network "just works" and no strange things will happen - when indeed they happen all the time.

In regard to our ICMP ping this means:

  1. There can be a firewall blocking ICMP or simply all traffic from our monitoring system
  2. Routing can be misconfigured
  3. Datacenter outages can happen
  4. Bandwidth can be drastically reduced
  5. VLANs can be misconfigured
  6. Cables can be broken
  7. Switchports can be defective
  8. Add your own ideas about what can go wrong in a network

And you do want a monitoring method which allows you to reliably distinguish between network and system problems.

2. CPU Lockups

ICMP packets are answered by the kernel itself. This has a nasty side-effect when your server hangs, trapped in a state known as either a soft or hard lockup. And while overall they are somewhat rare, CPU soft lockups still occur from time to time in my experience - especially with earlier versions of hypervisors for virtual machines (VMs), as a CPU soft lockup can be triggered simply by too much CPU load on a system.

And the nasty side-effect of a CPU soft lockup? The system will still reply to ICMP packets while all other services are unreachable.

I once had problems with the power management (ACPI) of a server's hardware. Somehow the ACPI kernel module would lock resources without freeing them. This effectively meant that the system came to a complete stop - but it didn't reboot or shut down. Nor did it crash as in "completely unreachable". No, ICMP packets were still answered just fine.

It's just that no SSH connection was possible and no TCP or UDP services were reachable, as the CPU was stuck on a certain operation and never switched to processing other tasks.

Only disabling ACPI by adding the acpi=off parameter to the GRUB kernel boot command line "fixed" this.

Regarding soft lockups I can recommend reading the following:

  1. Linux Magic System Request Key Hacks: Here you learn how to trigger a kernel panic yourself and how to configure a few related settings
  2. https://www.baeldung.com/linux/terminal-kernel-panic also has a nice list of ways to trigger a kernel panic from the command line
  3. https://www.kernel.org/doc/Documentation/lockup-watchdogs.txt Kernel documentation regarding "Softlockup detector and hardlockup detector (aka nmi_watchdog)"
  4. This SuSE knowledge base article also has some good basic information on how to configure timers, etc.: https://www.suse.com/support/kb/doc/?id=000018705 (a quick sysctl example follows below)

Takeaways

  1. ICMP is suited to check if a system is reachable over the network
    • After all, ICMP is more cost-effective than TCP in terms of packet size and number of packets sent
  2. A TCP connect to the port providing the actual service is usually better, for the reasons stated above (see the example below)
  3. You must incorporate your network topology into your monitoring system; only then will you be able to properly distinguish between "system unreachable", "misconfigured switchport" and "service stopped responding"
    • This means switches, firewalls, routers, loadbalancers, gateways - everything your users/services depend upon to be reachable must be included in your monitoring system, and:
  4. If possible, the dependencies between them should be defined
    • Like: Router → Firewall → LoadBalancer → Switch → System → Service
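
Such a TCP connect check is what the check_tcp plugin from the monitoring-plugins package does. A quick manual run against a hypothetical host looks roughly like this (plugin path as on Debian):

user@host:~$ /usr/lib/nagios/plugins/check_tcp -H www.example.com -p 443
TCP OK - 0.012 second response time on port 443|time=0.011520s;;;0.000000;10.000000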

Conclusion

Knowing all this, you should keep the following in mind: a successful ping only verifies that the system is reachable via your network. It doesn't imply anything about the state of the OS or its services.

Yes, this is no mind-blowing truth I'm revealing here. But I still encounter monitoring setups far too often where ICMP is used to verify that a system is "up and running".
