Feuerfest

Just the private blog of a Linux sysadmin

How to write better documentation by learning about the "Bloomfield Bridge Mystery"

Photo by Pixabay: https://www.pexels.com/photo/an-opened-old-book-161366/

"No one writes down the real reason for infrastructure projects."

Through Mastodon (definitely the better Twitter 😛) I was made aware of the text "The Mystery of the Bloomfield Bridge" by Tyler Vigen.

This text starts humble. It's just someone asking why this particular bridge was built in Bloomfield, Minnesota (US), as it seemed superfluous and simply not needed.

Spoiler: It was built 70 years ago primarily for children visiting a nearby school (which now seems to be long gone) and the church (still present).

But just when it seemed that Tyler Vigen had consulted every source, spoken to everyone he could think of, and still had open questions (and, more importantly, still needed a primary source to back up his theories and link his findings), he received the following tip:

"No one writes down the real reason for infrastructure projects."

What the woman who gave him this tip meant was: Projects (especially civil ones) have a political side which is seldom actively documented, as was the case with this particular bridge project.

Curiously, I first understood it in the following way:

Hardly anyone notes down the volatile Zeitgeist knowledge. The: "We are currently at this point of our journey. We came here because of A, B and C. Now we have the following problem with C. Hence we try D."

But it's this knowledge which enables me to provide better solutions and guidelines to my clients. Contextual wisdom is important.

"Can't you just talk with your client?"

Sure thing. And I do. After all I'm not tight-lipped.

Another aspect which I encounter regularly: There is a plaque at the bridge. Prominently declaring: "Federal Aid Project FAI 494-4-32 Minnesota 1959."
Ok, yeah.. That seems to be the project which built this bridge.
Apart from that? Well, just another cryptic abbreviation which we can use for our research.

Yeah.. And this is usually the time when I tell the story of this big IT company and its KF1 test environment.

This company switched all its technical systems, all its processes to the UTF-8 character encoding after having used Latin1, also known as ISO-8859-1, for decades. As UTF-8 supports characters from any alphabet & language, it seemed only logical to use it. After all, it made expanding into markets with other alphabets (like Cyrillic or Greek) easier.

Each and every process was built up a second time in this KF1 environment; too big was the fear that a single non-migrated process could wreak havoc. Each process was tested end-to-end and all systems were switched over the course of 3 weeks, which left the company somewhat inoperable for this period.

Sometime during this project I asked: "Hey, out of curiosity what does KF1 mean? What does it stand for? Everyone just uses the abbreviation."

Nobody, not one single person knew it. Some said they did know it. Once. Years ago. After all, this project had been running for several years. And in all these years no one had seen it necessary to note down the full wording of this abbreviation. Not in one single wiki page or document.

And now we are back to our bridge in Bloomfield, the plaque and our quote on top of this article.

Another sad aspect is: each and every person whom Tyler Vigen could have interviewed is dead.
An aspect which I do encounter often in a similar form:
"Oh, we don't know precisely why it was done this way. All colleagues who built this system are in different parts of the company now or have left it."
What do we learn from this? Just because something is crystal clear to you, and you think it's absolutely obvious, self-explanatory and everyone knows it anyway: that is still not a valid reason not to document it.

After all archaeologists and historians can tell you a thing or two about this.

You don't know what I mean by this? Well..

Until today we just don't know what the Xylospongium (a sponge on a stick) was used for in Roman lavatories. (Yes, Wikipedia writes it was used to clean the butt. But (pun intended) this theory is old and doesn't match Roman hygiene customs, as there were only a few per lavatory.) Current consensus seems to be that it was used like a toilet brush.
But we can't say for sure. We simply don't know, as we have no reliable primary sources.
Oh.. No. Wait, we do have some. They complain that the Xylospongium is often used in a wrong way - but WITHOUT describing what this misuse looks like. (Sounds familiar to you? 😂)

And now don't get me started on the Roman dodecahedron.

Comments

Don't call it UUID!

Made by myself https://admin.brennt.net/bl-content/uploads/pages/44e0aefb15224b22617e9f62071dda3f/uuid.jpg

This is a rant about software.

Dear Software-Vendors,
when you write that your software expects a UUID, then please make sure that YOU actually understand first what a UUID is. Or to be precise: what the syntax of a UUID looks like and what it tries to achieve (the semantics of a UUID, so to speak).

This is a UUID: 550e8400-e29b-11d4-a716-446655440000

That is: EIGHT DASH FOUR DASH FOUR DASH FOUR DASH TWELVE.
Repeat after me: 8-4-4-4-12

All those groups are strings consisting of hexadecimal characters, meaning each and every character can be either 0-9 or a-f (NOT a-z, because that wouldn't be hexadecimal).
There is no "It has to be a 3 at the seventh position". No. All hexadecimal, all random. Well.. In UUIDv4 at least. But for the sake of ranting I won't go into detail here.
You can accept upper- and lowercase, but that is not allowed to matter.
Similar to how upper-/lowercase in email addresses doesn't matter.
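And checking that layout is a one-line regular expression. A minimal sketch (the names are made up; this checks the 8-4-4-4-12 syntax only, not the version/variant bits of a specific UUID version):

```shell
# Regex for the canonical 8-4-4-4-12 UUID layout; accepts upper- and
# lowercase hex, as the standard demands.
uuid_re='^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$'

is_uuid() {
  echo "$1" | grep -Eq "$uuid_re"
}

is_uuid "550e8400-e29b-11d4-a716-446655440000" && echo "valid"    # prints "valid"
is_uuid "550e8400-e29b-11d4-a716-44665544000Z" || echo "invalid"  # prints "invalid"
```

Anything stricter than this (plus the case-insensitivity rule above) is no longer a UUID check.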

If you request a UUID of: 9-3-4-4-12

AND/OR

expect the first character to be an upper or lowercase character

AND/OR

you accept characters from A to Z...

Then you should be ashamed and I have no words left for you.

If you then follow up with: "But it's for sEcUriTy! z0mg!" No. Just no. Stop that. Seriously.
That's just your carefully chosen incompatibility (to keep your users nicely tucked in your software ecosystem) and FUD (Fear, uncertainty and doubt). But nothing else.
Also you just broke every single tool out there which verifies and checks UUIDs. Which actually comes in very handy in.. Uhm.. Software-Security? Like you know.. Don't use the same UUID twice, etc. Or Code-Linting tools and the like - which are part of the OPERATIONAL security of your customers.
So please: Stop that BS. Seriously.

Please: Don't be that kind of software vendor. Thank you, please make sure to visit my TED-Talk. 😂

Don't get me wrong: you can make up your own unique identifier syntax. But then: DON'T call it UUID! That name is standardized world-wide by the OSF, IETF, ISO and probably many other important standardization organisations.
Instead: feel free to create another one of those lovely VS3LAs (vendor-specific 3-letter acronyms) which every software vendor seems to like..
Then at least every IT person will know that we are talking about something different.

Comments

Protect against malicious AirTags (and some other tracking devices)

Photo by cottonbro studio: https://www.pexels.com/photo/man-observing-woman-through-doorway-8626372/

When Apple introduced the AirTag (see Wikipedia) it was primarily marketed as a "Find your device/stuff" product. Allowing you to locate the item to which the AirTag is attached, even when it's hundreds of meters (or kilometers) away. As long as there is some Apple device which receives the Bluetooth signal and forwards this to the Apple servers, you will have the location of the device. Of course, depending on time passed and location it can be inaccurate. But in our connected world it's likely that some device will pick up the signal again and you'll have up-to-date location information.

Additionally, AirTags can produce a sound, so that you can get an audio hint of where the device is.

AirTags do have many useful use cases: from tracking your stolen bike (if you hide an AirTag on/in it), to locating your lost luggage at an airport.. Even pets! Sure, this is useful. But sadly.. The principle of dual use is real, and hence Apple already rolled out a feature in the beta phase that allowed you to view all AirTags in your vicinity, as the potential for illegitimate usage was too high to simply ignore. After all.. Watch someone retrieving money at an ATM, casually bump into this person and slip an AirTag into their jacket. Then just follow and wait until this person is in some place where there are no cameras and/or eyewitnesses. Or think about the whole stalking and online dating problem.

So yes, AirTags can be a security and privacy disaster. I'm sure many of you have read the story about Lilith Wittmann, who used an Apple AirTag to uncover an office of a secret German intelligence agency (article on appleinsider.com) or read the original German article, published by herself, here: Bundesservice Telekommunikation — enttarnt: Dieser Geheimdienst steckt dahinter.

Ok, so Apple has this feature included in iOS for its phones/tablets, etc. - What about Android?

Good things first: Apple and Google recognized the threat and are working together on an industry specification which aims to put an end to stalking via AirTag and similar devices. But as this was only announced in May 2023, it's still too early for it to have produced any meaningful results (sadly).

Apple press release: https://www.apple.com/newsroom/2023/05/apple-google-partner-on-an-industry-specification-to-address-unwanted-tracking/
Google blog post in their security blog: https://security.googleblog.com/2023/05/google-and-apple-lead-initiative-for.html 

Well, the Android market being as fragmented as it is, not every manufacturer has such an option included. I know that Google Pixel devices have such a feature. And I was told Samsung and OnePlus phones do too. But there are many other Android versions around. And: What about custom ROMs? I use LineageOS on my OnePlus phone and wasn't able to find such a feature. That's why I searched for an app that does this for me, and was pleasantly surprised to find one.

AirGuard is even released for iOS, and there it's also able to find trackers which Apple's own feature won't detect. So.. I guess this is a recommendation to install this app on iOS too.

Introducing AirGuard

AirGuard is an Android app, developed by the Secure Mobile Networking Lab (SEEMOO) which is part of the Technical University of Darmstadt - specifically their computer science department. You may have heard of them occasionally, as they regularly find security vulnerabilities in Apple products and do a lot of research on Bluetooth and Bluetooth security. The neat part? AirGuard is OpenSource; its code is published on GitHub. This allows me to install the app using F-Droid (which only offers OpenSource apps).

The icing on the cake? It can track not only Apple AirTags, but also Samsung SmartTags and Chipolo tags.

From here it's just a normal app installation. Allow the app to use Bluetooth, disable battery saving mechanisms (so it stays active while being executed in the background) and that's it.

As I own no AirTag or similar device I can't test it, but I will update this article once I've been able to test this.

If you want to stay up-to-date with the development, there is a Twitter account for that: https://twitter.com/AirGuardAndroid

Comments

Icinga2 Monitoring notifications via Telegram Bot (Part 1)

Photo by Kindel Media: https://www.pexels.com/photo/low-angle-shot-of-robot-8566526/

One thing I had wanted to set up for a long time was getting my Icinga2 notifications via one of the instant messaging apps I have on my mobile. So there were Threema, Telegram and WhatsApp to choose from.

Well.. Threema wants money for this kind of service. WhatsApp requires a business account which must be connected with the bot. And WhatsApp Business means I have to pay again? I don't know; I didn't pursue that path any further, as it either meant I would've needed to convert my private account into a business account or get a second account. No, sorry. Not something I want.

Telegram on the other hand? "Yeah, well, message the @botfather account, type /start, type /newbot, set a display and username, get your Bot-Token (for API usage), that's it. The only requirement we have? The username must end in bot." From here it was an easy decision which app I chose. (Telegram documentation here.)

Creating your telegram bot

  1. Search for the @botfather account on Telegram; doesn't matter if you use the mobile app or do it via Telegram Web.
  2. Type /start and a help message will be displayed.
  3. To create a new bot, type: /newbot
  4. Via the question "Alright, a new bot. How are we going to call it? Please choose a name for your bot." you are asked for the display name of your bot.
    • I chose something generic like "Icinga Monitoring Notifications".
  5. Likewise the question "Good. Now let's choose a username for your bot. It must end in `bot`. Like this, for example: TetrisBot or tetris_bot." asks for the username.
    • Choose whatever you like.
  6. If the username is already taken Telegram will state this and simply ask for a new username until you find one which is available.
  7. In the final message you will get your token to access the HTTP API. Note this down and save it in your password manager. We will need this later for Icinga.
  8. To test everything, send a message to your bot in Telegram.

That's the Telegram part. Pretty easy, right?

Testing our bot from the command line

We are now able to receive (via /getUpdates) and send (via /sendMessage) messages from/to our bot. Define the token as a shell variable and execute the following curl command to get the messages that were sent to your bot. Note: only new messages are received. If you already viewed them in Telegram Web, the response will be empty, as seen in the first executed curl command.

Just close Telegram Web and the app on your phone, send a new message to your bot and fetch it via curl. This should do the trick. Later we'll define our API token as a constant in the Icinga2 configuration.

For better readability I pipe the output through jq.

When there is a new message from your Telegram account to your bot, you will see a field named id. Note this number down. This is the Chat-ID of your account, and we need it so that your bot can actually send you messages.
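Since that id field is buried a few levels deep in the JSON, a small jq filter can dig it out directly. A sketch (the path matches the sample response below):

```shell
# Extract the Chat-ID from a getUpdates response; reads the JSON on stdin.
get_chat_id() {
  jq '.result[0].message.chat.id'
}

# Usage:
# curl --silent "https://api.telegram.org/bot${TOKEN}/getUpdates" | get_chat_id
```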

The relevant documentation is the Telegram Bot API reference: https://core.telegram.org/bots/api

user@host:~$ TOKEN="YOUR-TOKEN"
user@host:~$ curl --silent "https://api.telegram.org/bot${TOKEN}/getUpdates" | jq
{
  "ok": true,
  "result": []
}
user@host:~$ curl --silent "https://api.telegram.org/bot${TOKEN}/getUpdates" | jq
{
  "ok": true,
  "result": [
    {
      "update_id": NUMBER,
      "message": {
        "message_id": 3,
        "from": {
          "id": CHAT-ID,
          "is_bot": false,
          "first_name": "John Doe Example",
          "username": "JohnDoeExample",
          "language_code": "de"
        },
        "chat": {
          "id": CHAT-ID,
          "first_name": "John Doe Example",
          "username": "JohnDoeExample",
          "type": "private"
        },
        "date": 1694637798,
        "text": "This is a test message"
      }
    }
  ]
}
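Sending works the same way via /sendMessage. A small helper sketch (TOKEN and CHAT_ID are the values obtained above; the example call only fires when both are set):

```shell
# Send a text message through the Bot API.
# Arguments: bot token, chat id, message text.
send_telegram() {
  curl --silent "https://api.telegram.org/bot${1}/sendMessage" \
       --data-urlencode "chat_id=${2}" \
       --data-urlencode "text=${3}"
}

# Example usage (guarded so nothing happens without credentials):
if [ -n "${TOKEN:-}" ] && [ -n "${CHAT_ID:-}" ]; then
  send_telegram "$TOKEN" "$CHAT_ID" "This is a test message"
fi
```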

Configuring Icinga2

Now we need to integrate our bot into the Icinga2 notification process. Luckily there were many people before us doing this, so there are already some notification scripts and example configuration files on GitHub.

I choose the scripts found here: https://github.com/lazyfrosch/icinga2-telegram

As I use the distributed monitoring I store some configuration files beneath /etc/icinga2/zones.d/. If you don't use this, feel free to store those files somewhere else. However, as I define the token in /etc/icinga2/constants.conf, which isn't synced via the config file sync, I have to make sure that the notification configuration is also stored outside of /etc/icinga2/zones.d/. Else the distributed setup will fail, as the config file sync throws a syntax error on all other machines due to the missing TelegramBotToken constant.

First we define the API-Token in the /etc/icinga2/constants.conf file:

user@host:/etc/icinga2$ grep -B1 TelegramBotToken constants.conf
/* Telegram Bot Token */
const TelegramBotToken = "YOUR-TOKEN-HERE"

Afterwards we download the host and service notification scripts into /etc/icinga2/scripts and set the executable bit.

user@host:/etc/icinga2/scripts$ wget https://raw.githubusercontent.com/lazyfrosch/icinga2-telegram/master/telegram-host-notification.sh
user@host:/etc/icinga2/scripts$ wget https://raw.githubusercontent.com/lazyfrosch/icinga2-telegram/master/telegram-service-notification.sh
user@host:/etc/icinga2/scripts$ chmod +x telegram-host-notification.sh telegram-service-notification.sh

Based on the notifications we want to receive, we need to define the variable vars.telegram_chat_id in the appropriate user/group object(s). An example for the icingaadmin is shown below and can be found in the icinga2-example.conf on GitHub: https://github.com/lazyfrosch/icinga2-telegram/blob/master/icinga2-example.conf along with the notification commands which we are setting up after this.

user@host:~$ cat /etc/icinga2/zones.d/global-templates/users.conf
object User "icingaadmin" {
  import "generic-user"

  display_name = "Icinga 2 Admin"
  groups = [ "icingaadmins" ]

  email = "root@localhost"
  vars.telegram_chat_id = "YOUR-CHAT-ID-HERE"
}

Notifications for Host & Service

We need to define 2 new NotificationCommand objects which trigger the telegram-(host|service)-notification.sh scripts. These are stored in /etc/icinga2/conf.d/telegrambot-commands.conf.

Note: We store the NotificationCommands and Notifications in /etc/icinga2/conf.d and NOT in /etc/icinga2/zones.d/master. This is because I have only one master in my setup which sends out notifications, and because we defined the constant TelegramBotToken in /etc/icinga2/constants.conf - which is not synced via the zone config sync. Otherwise we would run into a syntax error on all Icinga2 agents.

See https://admin.brennt.net/icinga2-error-check-command-does-not-exist-because-of-missing-constant for details.

Of course we could also define the constant in a file under /etc/icinga2/zones.d/master, but I chose not to do so for security reasons.

user@host:/etc/icinga2$ cat /etc/icinga2/conf.d/telegrambot-commands.conf
/*
 * Notification Commands for Telegram Bot
 */
object NotificationCommand "telegram-host-notification" {
  import "plugin-notification-command"

  command = [ SysconfDir + "/icinga2/scripts/telegram-host-notification.sh" ]

  env = {
    NOTIFICATIONTYPE = "$notification.type$"
    HOSTNAME = "$host.name$"
    HOSTALIAS = "$host.display_name$"
    HOSTADDRESS = "$address$"
    HOSTSTATE = "$host.state$"
    LONGDATETIME = "$icinga.long_date_time$"
    HOSTOUTPUT = "$host.output$"
    NOTIFICATIONAUTHORNAME = "$notification.author$"
    NOTIFICATIONCOMMENT = "$notification.comment$"
    HOSTDISPLAYNAME = "$host.display_name$"
    TELEGRAM_BOT_TOKEN = TelegramBotToken
    TELEGRAM_CHAT_ID = "$user.vars.telegram_chat_id$"

    // optional
    ICINGAWEB2_URL = "https://host.domain.tld/icingaweb2"
  }
}

object NotificationCommand "telegram-service-notification" {
  import "plugin-notification-command"

  command = [ SysconfDir + "/icinga2/scripts/telegram-service-notification.sh" ]

  env = {
    NOTIFICATIONTYPE = "$notification.type$"
    SERVICEDESC = "$service.name$"
    HOSTNAME = "$host.name$"
    HOSTALIAS = "$host.display_name$"
    HOSTADDRESS = "$address$"
    SERVICESTATE = "$service.state$"
    LONGDATETIME = "$icinga.long_date_time$"
    SERVICEOUTPUT = "$service.output$"
    NOTIFICATIONAUTHORNAME = "$notification.author$"
    NOTIFICATIONCOMMENT = "$notification.comment$"
    HOSTDISPLAYNAME = "$host.display_name$"
    SERVICEDISPLAYNAME = "$service.display_name$"
    TELEGRAM_BOT_TOKEN = TelegramBotToken
    TELEGRAM_CHAT_ID = "$user.vars.telegram_chat_id$"

    // optional
    ICINGAWEB2_URL = "https://host.domain.tld/icingaweb2"
  }
}

As I want to get all notifications for all hosts and services, I simply apply the notification objects to all hosts and services which have host.name set, same as in the example.

user@host:/etc/icinga2$ cat /etc/icinga2/conf.d/telegrambot-notifications.conf
/*
 * Notifications for alerting via Telegram Bot
 */
apply Notification "telegram-icingaadmin" to Host {
  import "mail-host-notification"
  command = "telegram-host-notification"

  users = [ "icingaadmin" ]

  assign where host.name
}

apply Notification "telegram-icingaadmin" to Service {
  import "mail-service-notification"
  command = "telegram-service-notification"

  users = [ "icingaadmin" ]

  assign where host.name
}

Checking configuration

Now we check if our Icinga2 config has no errors and reload the service:

root@host:~ # icinga2 daemon -C
[2023-09-14 22:19:37 +0200] information/cli: Icinga application loader (version: r2.12.3-1)
[2023-09-14 22:19:37 +0200] information/cli: Loading configuration file(s).
[2023-09-14 22:19:37 +0200] information/ConfigItem: Committing config item(s).
[2023-09-14 22:19:37 +0200] information/ApiListener: My API identity: hostname.domain.tld
[2023-09-14 22:19:37 +0200] information/ConfigItem: Instantiated 1 NotificationComponent.
[...]
[2023-09-14 22:19:37 +0200] information/ScriptGlobal: Dumping variables to file '/var/cache/icinga2/icinga2.vars'
[2023-09-14 22:19:37 +0200] information/cli: Finished validating the configuration file(s).
root@host:~ # systemctl reload icinga2.service

Verify it works

Log into your Icingaweb2 frontend, click the Notification link for a host or service and trigger a custom notification.
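Alternatively, the same can be triggered from the command line via the Icinga2 REST API. A sketch (the API credentials and the host filter myhost are assumptions; adjust them to your api-users.conf and setup):

```shell
# Trigger a custom host notification through the Icinga2 API (port 5665).
# 'apiuser:apipassword' and 'myhost' are placeholders.
send_custom_notification() {
  curl -k -s -u 'apiuser:apipassword' \
    -H 'Accept: application/json' \
    -X POST 'https://localhost:5665/v1/actions/send-custom-notification' \
    -d '{ "type": "Host", "filter": "host.name==\"myhost\"", "author": "cli", "comment": "Test notification", "pretty": true }'
}

# send_custom_notification   # run once credentials are in place
```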

And we have a message in Telegram:

What about CheckMK?

If you use CheckMK see this blogpost: https://www.srcbox.net/posts/monitoring-notifications-via-telegram/

Lessons learned

And if you use this setup you will encounter one situation/question rather quickly: Do I really need to be woken up at 3am for every notification?

No. No you don't want to. Not in a professional context and even less in a private context.

Therefore: In part 2 we will register a second bot account and then use the two bots to differentiate between important and unimportant notifications. The unimportant ones will be muted in the Telegram client on our smartphone. The important ones won't be, and are therefore able to wake us at 3am.

Comments

Snap packages and SSL-certificates.. A neverending story

Photo by Magda Ehlers: https://www.pexels.com/photo/a-broken-bicycle-leaning-on-the-wall-5342343/

Alternative title: Why I hate developers who reinvent the wheel - but forget the spokes.

This is more of a rant and link-dump article. I didn't research everything in detail, as I generally avoid Snaps. So far they've caused me more trouble than benefit.. But I wanted to keep all my arguments and links to certain documentation sites in one place.. So here we go.

Snap is, compared to packaging formats like RPM and APT, relatively new. Maybe this explains why it still has some teething problems. The most annoying one for me is: Snap packages don't use your custom SSL certificates stored in /usr/local/share/ca-certificates, which is the default place to store your company's Root-CA certificates. Every browser respects this. There is tooling in every Linux distribution to take care of stored certificates and add them to the truststore. There is tooling to automatically add these certificates to any Java (or other language) truststore that might reside on your system.
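For reference, that classic Debian/Ubuntu workflow looks roughly like this. A sketch under the assumption that your CA file is PEM-encoded (the filename is a placeholder; CERT_DIR can be overridden to test without root):

```shell
# Add a company Root-CA to the system truststore the classic way.
# CERT_DIR defaults to the system path; override it for a dry run.
CERT_DIR="${CERT_DIR:-/usr/local/share/ca-certificates}"

add_company_ca() {
  # the file must be PEM-encoded and end in .crt to be picked up
  install -m 644 "$1" "$CERT_DIR/"
  # rebuilding the CA bundle needs root and the Debian tooling
  if [ "$(id -u)" -eq 0 ] && command -v update-ca-certificates >/dev/null; then
    update-ca-certificates
  fi
}

# Usage: add_company_ca company-root-ca.crt
```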

But no, that would be too easy mirroring that behaviour in Snap, right? After all.. Why just reinvent the wheel, when you can have more fun by forgetting the spokes.
Yes, I am aware that centreless/hubless wheels do exist. ;-)

The root cause is often: Snap confinement (or to be precise: the strict confinement mode). This means a snap is separated from the system and only has access to directories which are configured at build time of the snap, as stated in the Interface Management documentation (see also: https://snapcraft.io/docs/home-interface and https://snapcraft.io/docs/supported-interfaces). And as desirable and good as this is, reality teaches us that for every application you always need a way to modify and configure it, at least in certain aspects. With Snap.. Not so much?

In the latest Ubuntu releases Firefox and Chromium have been migrated exclusively to snap. You cannot install a Firefox via apt and have a normal install. You will get a snap package. Which means: Goodbye SSL Client cert authentication, goodbye internal company SSL certificates. Bug 1901586: [snap] CA Certificates from /usr/local/share/ca-certificates are not used has all the details.

Yes, clever people might add: But you can do a bind mount and mount that directory under /home where nearly every Snap application has access. But why do I need to do this? Why does Snap impose that burden on me, the user? And this doesn't fix the SSL issue..

Oh, I should copy them to /etc/ssl/certs? And what about the other troubles this might cause? .. Hello? Nothing? Oh, okay..

And it keeps getting better: if you install the video player VLC as a snap package and your files are not located under /home.. You can't access them. Snap developers will happily advise you to move your files, change the mountpoint or do a mount --bind under /home. Which can be seen here: Bug 1643706: snap apps need to be able to browse outside of user $HOME dir. for Desktop installs

And this is the point I want to make. It's OK to design a new packaging format for applications. It's okay to add security mechanisms like snap confinement. But designing them in a way that forces each and every user to manage/store/organize files the way Snap dictates? Nope, sorry.

Someone on the Internet wrote that Snap is a great idea for things like phones, PCs in public spaces (libraries, schools, etc.) or other confined environments where the user doesn't and shouldn't be allowed to configure many aspects; where a system is designed and offered for only some specific purposes which are more or less static and seldom change.

And I agree. For me the root cause of all my problems with Snap is: it isn't the only daemon on my system. It HAS to integrate into the existing system and processes, like update-ca-certificates, or even where in the filesystem I store my files. All of this existed before Snap did, and now applications which always worked won't, because someone thought it is better that way.. No, sorry. Like I said before: it HAS to integrate into the existing system. If it doesn't.. It might still have its use cases; I'm not arguing against that. But then please let me have a choice! Breaking existing workflows, file structures, etc. is not acceptable for me. And as the Internet shows, not for many other users either.

The sad part is that the decision to make Firefox only available as a Snap was made by Mozilla & Canonical. Therefore you can't download the .deb from the Mozilla webpage or the like. (Luckily there is still a PPA which builds Firefox as a .deb package, offered by volunteers. This means it can potentially vanish if there is no one left doing it. But that's more or less the risk with any piece of software/technology.)

The announcement is on their Discourse: https://discourse.ubuntu.com/t/feature-freeze-exception-seeding-the-official-firefox-snap-in-ubuntu-desktop/24210/1

Oh, and Snaps tend to auto-upgrade, even without unattended-upgrades configured. Which can be a problem if you require a specific version. Luckily snap refresh --hold can hold updates for all Snap packages. Or, if you just want it for a specific package/time, use snap refresh --hold=72h vlc. This post has more information: Hold your horses, I mean snaps! New feature lets you stop snap updates, for as long as you need (snapcraft.io)

Security considerations

Snap was invented by Canonical. The Snap Store is run by Canonical. Snap packages are slowly replacing more and more .deb packages in Ubuntu: when you type apt-get install packagename, you will automatically get a snap package if one exists.

Or if you try to execute a command which isn't present on your system, the command-not-found helper from Ubuntu will recommend the associated .deb or Snap package.

The problem? Aquasec discovered that many traditional programs are not available as a Snap, but Canonical still allows you to register Snaps with the exact same name, despite them providing an entirely different program. (aquasec.com)

Additionally, you can specify aliases or register Snaps for the literal filenames. Their example shows how, when you try to execute the command tarquingui, command-not-found recommends installing the tarquin Snap, which provides the tarquingui program. But they were able to additionally register a Snap with the name tarquingui. What happens? command-not-found now recommends both Snaps.

My personal opinion is that it is a dangerous and dumb oversight not to reserve every APT package name in the Snap Store, which would prevent the hijacking of well-established software packages by potentially malicious third parties. After all, it was Canonical who introduced Snap and it's Canonical who solely operates the Snap Store...

How to install Firefox as .deb package (Ubuntu 23.04)

Update: I recently learned of a bug in unattended-upgrades which ignores apt-pinnings if the origin is not listed in the allowed-origins: #2033646 unattended-upgrade ignores apt-pinning to not-allowed origins

To work around this bug, use the following:

root@host:~# echo 'Unattended-Upgrade::Allowed-Origins:: "LP-PPA-mozillateam:${distro_codename}";' | sudo tee /etc/apt/apt.conf.d/51unattended-upgrades-firefox
  1. Remove the Firefox snap (omit if not present)
    • snap remove firefox
    • If dpkg -l firefox still shows it as installed, execute:
      • apt-get remove --purge firefox
  2. Add the Mozilla PPA
    • add-apt-repository ppa:mozillateam/ppa
      • If the command add-apt-repository is missing, install the package software-properties-common
  3. Pin the Mozilla PPA repository with a higher priority, so packages are installed from this PPA instead of other repositories where they might be available too
    • echo -e "Package: *\nPin: release o=LP-PPA-mozillateam\nPin-Priority: 1001" | sudo tee /etc/apt/preferences.d/mozilla-firefox
  4. Update repository information
    • apt-get update
  5. Now check that your APT-Package Policies are correct
    • apt-cache policy firefox
      • If set up correctly, Candidate: will list the version from the Mozilla PPA, not the Snap transitional package. See the example below.
      • root@ubuntu:/# apt-cache policy firefox
        firefox:
          Installed: (none)
          Candidate: 1:1snap1-0ubuntu3
          Version table:
             1:1snap1-0ubuntu3 500
                500 http://archive.ubuntu.com/ubuntu lunar/main amd64 Packages
             120.0.1+build1-0ubuntu0.23.04.1~mt1 500
                500 https://ppa.launchpadcontent.net/mozillateam/ppa/ubuntu lunar/main amd64 Packages
        
        root@ubuntu:/# echo -e "Package: *\nPin: release o=LP-PPA-mozillateam\nPin-Priority: 1001" | tee /etc/apt/preferences.d/mozilla-firefox
        Package: *
        Pin: release o=LP-PPA-mozillateam
        Pin-Priority: 1001
        
        root@ubuntu:/# apt-cache policy firefox
        firefox:
          Installed: (none)
          Candidate: 120.0.1+build1-0ubuntu0.23.04.1~mt1
          Version table:
             1:1snap1-0ubuntu3 500
                500 http://archive.ubuntu.com/ubuntu lunar/main amd64 Packages
             120.0.1+build1-0ubuntu0.23.04.1~mt1 1001
               1001 https://ppa.launchpadcontent.net/mozillateam/ppa/ubuntu lunar/main amd64 Packages
  6. Install Firefox. You should see that https://ppa.launchpadcontent.net/mozillateam/ppa/ubuntu is being used to download Firefox. If not, search for errors.
    • apt-get install firefox

    How to use FreeOTP for Two-Factor-Authentication (2FA) on LinkedIn

    Photo by Pixabay: https://www.pexels.com/photo/black-android-smartphone-on-top-of-white-book-39584/

    Too Long; Didn't Read (TL;DR):

    You can skip the steps listed below if your 2FA/OTP app allows you to specify the secret key, type, algorithm and interval directly. Use the following settings for LinkedIn:

    Type: TOTP
    Number of digits: 6
    Algorithm: SHA1
    Interval: 30 seconds
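
    For the curious: these four settings are everything a TOTP app actually needs. As an illustration (not LinkedIn's or FreeOTP's code), here is a minimal standard-library Python sketch of the TOTP algorithm from RFC 6238 / RFC 4226; the function name and structure are my own:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, algorithm=hashlib.sha1, interval=30, timestamp=None):
    """Compute a TOTP code (RFC 6238) from a Base32-encoded secret."""
    s = secret_b32.replace(" ", "").upper()
    key = base64.b32decode(s + "=" * (-len(s) % 8))  # re-add any stripped padding
    now = time.time() if timestamp is None else timestamp
    counter = int(now // interval)                   # number of elapsed intervals
    digest = hmac.new(key, struct.pack(">Q", counter), algorithm).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

    With the RFC 4226/6238 test secret (the ASCII key 12345678901234567890, which is GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ in Base32) and a timestamp of 59 seconds, this yields the documented test code 287082.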

    Original article

    I try to enable Two-Factor-Authentication, or 2FA for short, on every account that supports it. But I dislike it when the 2FA codes are sent via mail or SMS. This is just too insecure, as both can be intercepted. Personally I would go so far as to say: "SMS & mail aren't a valid & secure second factor." There are simply too many reports of scammers and phishers intercepting SMS or mails. Yet many companies still default to this. LinkedIn too.

    Therefore I wanted to switch to my Authenticator App of choice: FreeOTP - https://freeotp.github.io/
    The source code is on GitHub: https://github.com/freeotp

    It is completely OpenSource (sponsored by Red Hat) and even available in the alternative Android app store F-Droid, which only offers apps that can be built completely from source.

    As naive as I sometimes am, I thought it would just be the following steps:

    1. Enable 2FA in my LinkedIn profile
    2. Provide password to authenticate
    3. Scan the QR-Code in FreeOTP
    4. Enter the generated code to verify it works
    5. Generate & Save the backup keys in my password manager

    But not so on LinkedIn: they don't display a QR code. Well, to be precise, they did, before Microsoft bought LinkedIn. After that this changed. Nowadays they only display the so-called secret key (encoded in Base32) and that's it.
    LinkedIn then tells you to install the Microsoft Authenticator app, while mentioning that you can, of course, use any other authenticator app.

    The problem? The workflow they describe for that key only works in the Microsoft Authenticator app.
    Side note: Someone told me the Google Authenticator should be able to use that code too, but I can't verify this.

    LinkedIn gives you absolutely no additional technical information.

    • No otpauth:// URL
    • No information on whether TOTP or HOTP must be used
      • Well, to be fair, we can safely assume it's TOTP.
    • Which algorithm must be used?
    • What is the lifetime (interval) of the generated codes?

    Nothing. But this is what I need with FreeOTP. I tried a few combinations, but had no luck.

    So I resorted to the Linux command-line.

    1. Enable 2FA in your account until the secret key is displayed
    2. Install qrencode (or use one of the available Web-Generators for QR-Codes at your own risk)
    3. Build the following string: otpauth://totp/LinkedIn:MyAccount?secret=KEY-YOU-GOT-FROM-LINKEDIN
      • All in one line, no spaces at the end, no enter.
      • You can change "MyAccount" to something more meaningful like your mail address
      • Example: otpauth://totp/LinkedIn:JohnDoe@company.tld?secret=U4NGHXFW6C3CLHWLQEVCBDLM5FQMAQ7E
    4. Paste that string into a text file.
      • Again, no enter or spaces at the end
    5. Execute: qrencode -r /path/to/file.txt -t png -o /path/to/image.png
      • This will generate a PNG-Image at the location specified by the -o parameter
      • -r is the input file containing the string
    6. Display the QR-Code and scan it with FreeOTP
    7. Verify the code works
    8. Generate your backup keys and save them in your password manager
    9. Profit!

    Some documentation regarding the otpauth:// URL, its syntax and the parameters you can use is available in the old Google Authenticator repository on GitHub: https://github.com/google/google-authenticator/wiki/Key-Uri-Format
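
    Based on that Key-Uri-Format page, the full URL, including the parameters LinkedIn doesn't tell you about, can be assembled in a few lines of Python. This is just a sketch; the helper name otpauth_uri is mine, while the parameter names secret, issuer, digits, algorithm and period come from that wiki page:

```python
from urllib.parse import quote

def otpauth_uri(issuer, account, secret, digits=6, algorithm="SHA1", period=30):
    """Build an otpauth:// URL as described in the Key-Uri-Format wiki page."""
    label = quote(f"{issuer}:{account}")  # percent-encodes ':' and '@' in the label
    return (f"otpauth://totp/{label}"
            f"?secret={secret}&issuer={quote(issuer)}"
            f"&digits={digits}&algorithm={algorithm}&period={period}")

# Example with the same (example) secret as in the steps above
print(otpauth_uri("LinkedIn", "JohnDoe@company.tld",
                  "U4NGHXFW6C3CLHWLQEVCBDLM5FQMAQ7E"))
```

    The percent-encoding via quote() keeps characters like : and @ safe in the label, which matters when you use your mail address as the account name.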
    (Google Authenticator once was OpenSource too, but sadly isn't any more.)

    And while I was at it, I created the corresponding GitHub issue for the FreeOTP project, #360 [Feature-Request] Allow adding of entries by just specifying label & secret, to properly take care of this nuisance. ;-)

    Lessons learned

    FreeOTP assumes SHA1 as the algorithm and an interval of 30 seconds when these parameters are not part of the otpauth URL. Choosing these settings works out of the box, and this way I can omit the QR code step entirely.
