Feuerfest

Just the private blog of a Linux sysadmin

Protect against malicious AirTags (and some other tracking devices)

Photo by cottonbro studio: https://www.pexels.com/photo/man-observing-woman-through-doorway-8626372/

When Apple introduced the AirTag (see Wikipedia) it was primarily marketed as a "find your device/stuff" product, allowing you to locate the item the AirTag is attached to, even when it's hundreds of meters (or kilometers) away. As long as some Apple device receives the Bluetooth signal and forwards it to Apple's servers, you will get the location of the device. Of course, depending on the time passed and the location, it can be inaccurate. But in our connected world it's likely that some device will pick up the signal again and you'll have up-to-date location information.

Additionally, AirTags can produce a sound, so you get an audio hint as to where the device is.

AirTags do have many useful use cases: from tracking your stolen bike (if you hide an AirTag on/in it), to locating your lost luggage at an airport, to keeping tabs on pets. Sure, this is useful. But sadly.. The principle of dual use is real, and hence even in the beta phase Apple already rolled out a feature that allowed you to view all AirTags in your vicinity, as the potential for illegitimate usage was too high to simply ignore. After all.. Watch someone retrieve money at an ATM, casually bump into this person and slip an AirTag into their jacket. Then just follow and wait until this person is in some place where there are no cameras and/or eyewitnesses. Or think about the whole stalking and online dating problem.

No, AirTags can be a security and privacy disaster. I'm sure many of you have read the story about Lilith Wittmann, who used an Apple AirTag to uncover an office of a secret German intelligence agency (article on appleinsider.com), or read the original German article, published by herself, here: Bundesservice Telekommunikation — enttarnt: Dieser Geheimdienst steckt dahinter.

Ok, so Apple has this feature included in iOS for its phones, tablets, etc. - What about Android?

Good things first: Apple and Google recognized the threat and are working together on an industry specification which aims to put an end to stalking via AirTag and similar devices. But as this was only announced in May 2023, it's still too early for it to have produced any meaningful results (sadly).

Apple press release: https://www.apple.com/newsroom/2023/05/apple-google-partner-on-an-industry-specification-to-address-unwanted-tracking/
Google blog post in their security blog: https://security.googleblog.com/2023/05/google-and-apple-lead-initiative-for.html 

Well, Android being the fragmented market it is, not every manufacturer has such an option included. I know that Google Pixel devices have such a feature, and I was told Samsung and OnePlus phones do too. But there are many other Android versions around. And: what about custom ROMs? I use LineageOS on my OnePlus phone and wasn't able to find such a feature. That's why I searched for an app that does this for me - and was pleasantly surprised to find one.

AirGuard is even released for iOS, and there it's also able to find trackers which Apple's own feature won't detect. So.. I guess this is a recommendation to install this app on iOS too.

Introducing AirGuard

AirGuard is an Android app developed by the Secure Mobile Networking Lab (SEEMOO), which is part of the computer science department at the Technical University of Darmstadt. You may have heard of them occasionally, as they regularly find security vulnerabilities in Apple products and do a lot of research on Bluetooth and Bluetooth security. The neat point? AirGuard is OpenSource; its code is published on GitHub. This allows me to install the app using F-Droid (which only offers OpenSource apps).

The icing on the cake? It can track not only Apple AirTags, but Samsung SmartTags and Chipolo tags too.

From here it's just a normal app installation. Allow the app to use Bluetooth, disable battery saving mechanisms for it (so it stays active while running in the background) and that's it.

As I own no AirTag or similar device I can't test it, but I will update this article once I've been able to do so.

If you want to stay up-to-date with the development, there is a Twitter account for that: https://twitter.com/AirGuardAndroid

Comments

Icinga2 Monitoring notifications via Telegram Bot (Part 1)

Photo by Kindel Media: https://www.pexels.com/photo/low-angle-shot-of-robot-8566526/

One thing I wanted to set up for a long time was getting my Icinga2 notifications via one of the instant messaging apps I have on my mobile. So there were Threema, Telegram and WhatsApp to choose from.

Well.. Threema wants money for this kind of service. WhatsApp requires a business account which must be connected with the bot. And WhatsApp Business means I have to pay again? I don't know - I didn't pursue that path any further, as it either meant converting my private account into a business account or getting a second account. No, sorry. Not something I want.

Telegram on the other hand? "Yeah, well, message the @botfather account, type /start, type /newbot, set a display name and username, get your bot token (for API usage), that's it. The only requirement we have? The username must end in bot." From here it was an easy decision which app I'd choose. (Telegram documentation here.)

Creating your Telegram bot

  1. Search for the @botfather account on Telegram; doesn't matter if you use the mobile app or do it via Telegram Web.
  2. Type /start and a help message will be displayed.
  3. To create a new bot, type: /newbot
  4. Via the question "Alright, a new bot. How are we going to call it? Please choose a name for your bot." you are asked for the display name of your bot.
    • I chose something generic like "Icinga Monitoring Notifications".
  5. Likewise the question "Good. Now let's choose a username for your bot. It must end in `bot`. Like this, for example: TetrisBot or tetris_bot." asks for the username.
    • Choose whatever you like.
  6. If the username is already taken Telegram will state this and simply ask for a new username until you find one which is available.
  7. In the final message you will get your token to access the HTTP API. Note this down and save it in your password manager. We will need this later for Icinga.
  8. To test everything, send a message to your bot in Telegram.

That's the Telegram part. Pretty easy, right?

Testing our bot from the command line

We are now able to receive messages sent to our bot (via /getUpdates) and send messages from it (via /sendMessage). Define the token as a shell variable and execute the following curl command to retrieve the messages that were sent to your bot. Note: only new messages are returned. If you have already viewed them in Telegram Web, the response will be empty - as seen in the first executed curl command.

Just close Telegram Web and the app on your phone, then send a message to your bot; this should do the trick. Later we will define our API token as a constant in the Icinga2 configuration.

For better readability I pipe the output through jq.

When there is a new message from your Telegram account to your bot, you will see a field named id. Note this number down. It is the chat ID of your account, and we need it so that your bot can actually send you messages.

Relevant documentation links are:

getUpdates: https://core.telegram.org/bots/api#getupdates
sendMessage: https://core.telegram.org/bots/api#sendmessage

user@host:~$ TOKEN="YOUR-TOKEN"
user@host:~$ curl --silent "https://api.telegram.org/bot${TOKEN}/getUpdates" | jq
{
  "ok": true,
  "result": []
}
user@host:~$ curl --silent "https://api.telegram.org/bot${TOKEN}/getUpdates" | jq
{
  "ok": true,
  "result": [
    {
      "update_id": NUMBER,
      "message": {
        "message_id": 3,
        "from": {
          "id": CHAT-ID,
          "is_bot": false,
          "first_name": "John Doe Example",
          "username": "JohnDoeExample",
          "language_code": "de"
        },
        "chat": {
          "id": CHAT-ID,
          "first_name": "John Doe Example",
          "username": "JohnDoeExample",
          "type": "private"
        },
        "date": 1694637798,
        "text": "This is a test message"
      }
    }
  ]
}
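
To extract just the chat ID, you can let jq do the filtering. And to test the other direction, send yourself a message via /sendMessage. A minimal sketch - chat_id and text are the regular Bot API parameters, the concrete values are of course placeholders:

user@host:~$ curl --silent "https://api.telegram.org/bot${TOKEN}/getUpdates" | jq '.result[].message.chat.id'
CHAT-ID
user@host:~$ CHAT_ID="YOUR-CHAT-ID"
user@host:~$ curl --silent "https://api.telegram.org/bot${TOKEN}/sendMessage" --data-urlencode "chat_id=${CHAT_ID}" --data-urlencode "text=Hello from curl" | jq '.ok'
true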

Configuring Icinga2

Now we need to integrate our bot into the Icinga2 notification process. Luckily there were many people before us doing this, so there are already some notification scripts and example configuration files on GitHub.

I chose the scripts found here: https://github.com/lazyfrosch/icinga2-telegram

As I use the distributed monitoring feature, I store some configuration files beneath /etc/icinga2/zones.d/. If you don't use this, feel free to store those files somewhere else. However, as I define the token in /etc/icinga2/constants.conf - which isn't synced via the config file sync - I have to make sure that the notification configuration is also stored outside of /etc/icinga2/zones.d/. Otherwise the distributed setup will fail, as the config file sync throws a syntax error on all other machines due to the missing TelegramBotToken constant.

First we define the API token in the /etc/icinga2/constants.conf file:

user@host:/etc/icinga2$ grep -B1 TelegramBotToken constants.conf
/* Telegram Bot Token */
const TelegramBotToken = "YOUR-TOKEN-HERE"

Afterwards we download the host and service notification scripts into /etc/icinga2/scripts and set the executable bit.

user@host:/etc/icinga2/scripts$ wget https://raw.githubusercontent.com/lazyfrosch/icinga2-telegram/master/telegram-host-notification.sh
user@host:/etc/icinga2/scripts$ wget https://raw.githubusercontent.com/lazyfrosch/icinga2-telegram/master/telegram-service-notification.sh
user@host:/etc/icinga2/scripts$ chmod +x telegram-host-notification.sh telegram-service-notification.sh

Based on the notifications we want to receive, we need to define the variable vars.telegram_chat_id in the appropriate user/group object(s). An example for the icingaadmin user is shown below; it can be found in icinga2-example.conf on GitHub (https://github.com/lazyfrosch/icinga2-telegram/blob/master/icinga2-example.conf), along with the notification commands which we are setting up after this.

user@host:~$ cat /etc/icinga2/zones.d/global-templates/users.conf
object User "icingaadmin" {
  import "generic-user"

  display_name = "Icinga 2 Admin"
  groups = [ "icingaadmins" ]

  email = "root@localhost"
  vars.telegram_chat_id = "YOUR-CHAT-ID-HERE"
}

Notifications for Host & Service

We need to define two new NotificationCommand objects which trigger the telegram-(host|service)-notification.sh scripts. These are stored in /etc/icinga2/conf.d/telegrambot-commands.conf.

Note: We store the NotificationCommands and Notifications in /etc/icinga2/conf.d and NOT in /etc/icinga2/zones.d/master. This is because I have only one master in my setup which sends out notifications, and because we defined the constant TelegramBotToken in /etc/icinga2/constants.conf - which is not synced via the zone config sync. We would therefore run into a syntax error on all Icinga2 agents.

See https://admin.brennt.net/icinga2-error-check-command-does-not-exist-because-of-missing-constant for details.

Of course we could also define the constant in a file under /etc/icinga2/zones.d/master, but I chose not to do so for security reasons.

user@host:/etc/icinga2$ cat /etc/icinga2/conf.d/telegrambot-commands.conf
/*
 * Notification Commands for Telegram Bot
 */
object NotificationCommand "telegram-host-notification" {
  import "plugin-notification-command"

  command = [ SysconfDir + "/icinga2/scripts/telegram-host-notification.sh" ]

  env = {
    NOTIFICATIONTYPE = "$notification.type$"
    HOSTNAME = "$host.name$"
    HOSTALIAS = "$host.display_name$"
    HOSTADDRESS = "$address$"
    HOSTSTATE = "$host.state$"
    LONGDATETIME = "$icinga.long_date_time$"
    HOSTOUTPUT = "$host.output$"
    NOTIFICATIONAUTHORNAME = "$notification.author$"
    NOTIFICATIONCOMMENT = "$notification.comment$"
    HOSTDISPLAYNAME = "$host.display_name$"
    TELEGRAM_BOT_TOKEN = TelegramBotToken
    TELEGRAM_CHAT_ID = "$user.vars.telegram_chat_id$"

    // optional
    ICINGAWEB2_URL = "https://host.domain.tld/icingaweb2"
  }
}

object NotificationCommand "telegram-service-notification" {
  import "plugin-notification-command"

  command = [ SysconfDir + "/icinga2/scripts/telegram-service-notification.sh" ]

  env = {
    NOTIFICATIONTYPE = "$notification.type$"
    SERVICEDESC = "$service.name$"
    HOSTNAME = "$host.name$"
    HOSTALIAS = "$host.display_name$"
    HOSTADDRESS = "$address$"
    SERVICESTATE = "$service.state$"
    LONGDATETIME = "$icinga.long_date_time$"
    SERVICEOUTPUT = "$service.output$"
    NOTIFICATIONAUTHORNAME = "$notification.author$"
    NOTIFICATIONCOMMENT = "$notification.comment$"
    HOSTDISPLAYNAME = "$host.display_name$"
    SERVICEDISPLAYNAME = "$service.display_name$"
    TELEGRAM_BOT_TOKEN = TelegramBotToken
    TELEGRAM_CHAT_ID = "$user.vars.telegram_chat_id$"

    // optional
    ICINGAWEB2_URL = "https://host.domain.tld/icingaweb2"
  }
}

As I want to get all notifications for all hosts and services, I simply apply the notification objects to all hosts and services which have host.name set - same as in the example.

user@host:/etc/icinga2$ cat /etc/icinga2/conf.d/telegrambot-notifications.conf
/*
 * Notifications for alerting via Telegram Bot
 */
apply Notification "telegram-icingaadmin" to Host {
  import "mail-host-notification"
  command = "telegram-host-notification"

  users = [ "icingaadmin" ]

  assign where host.name
}

apply Notification "telegram-icingaadmin" to Service {
  import "mail-service-notification"
  command = "telegram-service-notification"

  users = [ "icingaadmin" ]

  assign where host.name
}

Checking configuration

Now we check that our Icinga2 config has no errors and reload the service:

root@host:~ # icinga2 daemon -C
[2023-09-14 22:19:37 +0200] information/cli: Icinga application loader (version: r2.12.3-1)
[2023-09-14 22:19:37 +0200] information/cli: Loading configuration file(s).
[2023-09-14 22:19:37 +0200] information/ConfigItem: Committing config item(s).
[2023-09-14 22:19:37 +0200] information/ApiListener: My API identity: hostname.domain.tld
[2023-09-14 22:19:37 +0200] information/ConfigItem: Instantiated 1 NotificationComponent.
[...]
[2023-09-14 22:19:37 +0200] information/ScriptGlobal: Dumping variables to file '/var/cache/icinga2/icinga2.vars'
[2023-09-14 22:19:37 +0200] information/cli: Finished validating the configuration file(s).
root@host:~ # systemctl reload icinga2.service
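
Optionally you can verify that the apply rules actually generated Notification objects. A quick sanity check via icinga2 object list, which reads from the object cache that is refreshed when the daemon (re)starts:

root@host:~ # icinga2 object list --type Notification | grep telegram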

Verify it works

Log into your Icingaweb2 frontend, click the Notification link for a host or service and trigger a custom notification.
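
If you prefer the command line over the web frontend, the same can be triggered via the Icinga2 REST API and its send-custom-notification action. A sketch, assuming an API user named root and a host object called myhost - adjust credentials, URL and filter to your setup:

root@host:~ # curl -k -s -u root:PASSWORD -H 'Accept: application/json' -X POST 'https://localhost:5665/v1/actions/send-custom-notification' -d '{ "type": "Host", "filter": "host.name==\"myhost\"", "author": "icingaadmin", "comment": "Test notification", "force": true }'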

And we have a message in Telegram:

What about CheckMK?

If you use CheckMK see this blogpost: https://www.srcbox.net/posts/monitoring-notifications-via-telegram/

Lessons learned

And if you use this setup you will encounter one situation/question rather quickly: Do I really need to be woken up at 3am for every notification?

No. No, you don't. Not in a professional context and even less in a private one.

Therefore: In part 2 we will register a second bot account and use the two to differentiate between important and unimportant notifications. The unimportant ones will be muted in the Telegram client on our smartphone. The important ones won't be - and are therefore able to wake us at 3am.

Comments

Snap packages and SSL certificates.. A never-ending story

Photo by Magda Ehlers: https://www.pexels.com/photo/a-broken-bicycle-leaning-on-the-wall-5342343/

Alternative title: Why I hate developers who reinvent the wheel - but forget the spokes.

This is more of a rant and link-dump article. I didn't research everything in detail, as I generally avoid Snaps - so far they've caused me more trouble than benefit.. But I wanted a place to keep all my arguments and links to certain documentation sites in one place.. So here we go.

Snap is, compared to packaging formats like RPM and DEB, relatively new. Maybe this explains why it still has some teething problems. The most annoying one for me: Snap packages don't use your custom SSL certificates stored in /usr/share/ca-certificates, which is the default place to store your company's Root-CA certificates. Every browser respects this. There is tooling in every Linux distribution to take care of stored certificates and add them to the truststore. There is tooling to automatically add these certificates to any Java (or other language) truststore that might reside on your system.
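
For reference, adding a company Root-CA to the system truststore is normally a two-liner on Debian/Ubuntu - note that locally added certificates go to /usr/local/share/ca-certificates, and the filename here is just an example:

user@host:~$ sudo cp company-root-ca.crt /usr/local/share/ca-certificates/
user@host:~$ sudo update-ca-certificates
Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.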

But no, mirroring that behaviour in Snap would be too easy, right? After all.. Why just reinvent the wheel, when you can have more fun by forgetting the spokes?
Yes, I am aware that centreless/hubless wheels do exist. ;-)

The root cause is often: Snap confinement (or to be precise: the strict confinement mode). It means a snap is separated from the system and only has access to directories which are configured at build time of the snap, as stated in the Interface Management documentation (see also: https://snapcraft.io/docs/home-interface and https://snapcraft.io/docs/supported-interfaces). And as desirable and good as this is, reality teaches us that for every application you always need a way to modify it, to configure it - at least certain aspects. With Snap.. Not so much?
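
You can inspect for yourself which interfaces - and hence which parts of the system - a given snap is wired up to, using snapd's own tooling (firefox as an example; output abbreviated):

user@host:~$ snap connections firefox
Interface  Plug          Slot   Notes
home       firefox:home  :home  -
[...]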

In the latest Ubuntu releases Firefox and Chromium have been migrated exclusively to Snap. You cannot install Firefox via apt and get a normal install - you will get a snap package. Which means: goodbye SSL client certificate authentication, goodbye internal company SSL certificates. Bug 1901586: [snap] CA Certificates from /usr/local/share/ca-certificates are not used has all the details.

Yes, clever people might add: but you can do a bind mount and mount that directory under /home, where nearly every Snap application has access. But why do I need to do this? Why does Snap impose that burden on me, the user? And it doesn't even fix the SSL issue..

Oh, I should copy them to /etc/ssl/certs? And what about the other troubles this might cause? .. Hello? Nothing? Oh, okay..

And it keeps getting better: if you install the video player VLC as a snap package and your files are not located under /home.. You can't access them. Snap developers will happily advise you to move your files, change the mountpoint or do a mount --bind under /home. Which can be seen here: Bug 1643706: snap apps need to be able to browse outside of user $HOME dir. for Desktop installs
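
For completeness, this is the kind of workaround the bug report suggests - a sketch, assuming your media lives under /mnt/storage/media (a hypothetical path):

user@host:~$ mkdir -p ~/media
user@host:~$ sudo mount --bind /mnt/storage/media ~/media

Add a corresponding entry to /etc/fstab if the bind mount should survive a reboot.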

And this is the point I want to make. It's OK to design a new packaging format for applications. It's okay to add security mechanisms like snap confinement. But designing them in a way that forces each and every user to manage/store/organize files in the way Snap dictates? Nope, sorry.

Someone on the Internet wrote that Snap is a great idea for things like phones, PCs in public spaces (libraries, schools, etc.) or other confined environments where the user doesn't need to and shouldn't be allowed to configure many aspects; where a system is designed and offered for only a few specific purposes which are more or less static and seldom change.

And I agree. For me the root cause of all my problems with Snap is: it isn't the only daemon on my system. It HAS to integrate into the existing system and processes, like update-ca-certificates, or even where in the filesystem I store my files. All of this was there before Snap existed, and now applications which always worked don't any more, because someone thought it is better that way.. No, sorry. Like I said before: it HAS to integrate into the existing system. If it doesn't.. It might still have its use cases, I'm not arguing against that. But then please let me have a choice! Breaking existing workflows, file structures, etc. is not acceptable for me. And as the Internet shows, not for many other users either.

The sad part is that the decision to make Firefox only available as a Snap was made by Mozilla & Canonical. Therefore you can't download the .deb from the Mozilla webpage or the like. (Luckily there is still a PPA offered by volunteers which builds Firefox as a .deb package. This means it can potentially vanish if there is no one left doing it. But that's more or less the risk with any piece of software/technology.)

The announcement is on their Discourse: https://discourse.ubuntu.com/t/feature-freeze-exception-seeding-the-official-firefox-snap-in-ubuntu-desktop/24210/1

Oh, and Snaps tend to auto-upgrade, even without unattended-upgrades configured. This can be a problem if you require a specific version. Luckily snap refresh --hold holds updates for all Snap packages. Or, if you just want it for a specific package or time frame, use snap refresh --hold=72h vlc. This post has more information: Hold your horses, I mean snaps! New feature lets you stop snap updates, for as long as you need (snapcraft.io)

Security considerations

Snap was invented by Canonical. The Snap Store is run by Canonical. Snap packages are slowly replacing more and more .deb packages in Ubuntu: for a growing number of packages, typing apt-get install packagename will automatically get you the Snap if one exists.

The same happens if you try to execute a command which isn't present on your system: the command-not-found helper from Ubuntu will then recommend the associated .deb or Snap package.

The problem? Aquasec discovered that many traditional programs are not available as a Snap, but Canonical still allows you to register Snaps with the exact same name, despite them providing an entirely different program. (aquasec.com)

Additionally you can specify aliases or register Snaps for the literal filenames. The example shows how, when you try to execute the command tarquingui, command-not-found recommends installing the tarquin Snap, which provides the tarquingui program. But the researchers were able to additionally register a Snap with the name tarquingui. What happens? command-not-found now recommends both Snaps.

My personal opinion is that it is a dangerous and dumb oversight not to reserve the name of every APT package on the Snap Store, preventing the hijacking of well-established software packages by potentially malicious third parties. After all, it was Canonical who introduced Snap and it's Canonical who solely operates the Snap Store...

How to install Firefox as a .deb package (Ubuntu 23.04)

Update: I recently learned of a bug in unattended-upgrades which ignores apt-pinnings if the origin is not listed in the allowed-origins: #2033646 unattended-upgrade ignores apt-pinning to not-allowed origins

To work around this bug, use the following:

root@host:~# echo 'Unattended-Upgrade::Allowed-Origins:: "LP-PPA-mozillateam:${distro_codename}";' | sudo tee /etc/apt/apt.conf.d/51unattended-upgrades-firefox
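
To check that the origin is now accepted, you can do a dry run of unattended-upgrades and look at the allowed origins it reports - the LP-PPA-mozillateam origin should show up in the list:

root@host:~# unattended-upgrade --dry-run --debug 2>&1 | grep -i 'allowed origins'
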
  1. Remove the Firefox snap (omit if not present)
    • snap remove firefox
    • If dpkg -l firefox still shows it as installed, execute:
      • apt-get remove --purge firefox
  2. Add the Mozilla PPA
    • add-apt-repository ppa:mozillateam/ppa
      • If the command add-apt-repository is missing, install the package software-properties-common
  3. Pin the Mozilla PPA repository with a higher priority so packages are installed from this PPA instead of other repositories where they might also be available
    • echo -e "Package: *\nPin: release o=LP-PPA-mozillateam\nPin-Priority: 1001" | sudo tee /etc/apt/preferences.d/mozilla-firefox
  4. Update repository information
    • apt-get update
  5. Now check that your APT-Package Policies are correct
    • apt-cache policy firefox
      • If set up correctly, Candidate: will show the version coming from the PPA instead of the Snap transitional package. See the example below.
      • root@ubuntu:/# apt-cache policy firefox
        firefox:
          Installed: (none)
          Candidate: 1:1snap1-0ubuntu3
          Version table:
             1:1snap1-0ubuntu3 500
                500 http://archive.ubuntu.com/ubuntu lunar/main amd64 Packages
             120.0.1+build1-0ubuntu0.23.04.1~mt1 500
                500 https://ppa.launchpadcontent.net/mozillateam/ppa/ubuntu lunar/main amd64 Packages
        
        root@ubuntu:/# echo -e "Package: *\nPin: release o=LP-PPA-mozillateam\nPin-Priority: 1001" | tee /etc/apt/preferences.d/mozilla-firefox
        Package: *
        Pin: release o=LP-PPA-mozillateam
        Pin-Priority: 1001
        
        root@ubuntu:/# apt-cache policy firefox
        firefox:
          Installed: (none)
          Candidate: 120.0.1+build1-0ubuntu0.23.04.1~mt1
          Version table:
             1:1snap1-0ubuntu3 500
                500 http://archive.ubuntu.com/ubuntu lunar/main amd64 Packages
             120.0.1+build1-0ubuntu0.23.04.1~mt1 1001
               1001 https://ppa.launchpadcontent.net/mozillateam/ppa/ubuntu lunar/main amd64 Packages
  6. Install Firefox. You should see that https://ppa.launchpadcontent.net/mozillateam/ppa/ubuntu is being used to download Firefox. If not, check for errors.
      • apt-get install firefox
Comments

How to use FreeOTP for Two-Factor-Authentication (2FA) on LinkedIn

Photo by Pixabay: https://www.pexels.com/photo/black-android-smartphone-on-top-of-white-book-39584/

Too Long;Didn't Read (TL;DR):

You can omit the steps listed below. If your 2FA/OTP app allows you to specify the secret key, type, algorithm and interval, use the following settings for LinkedIn:

Type: TOTP
Number of digits: 6
Algorithm: SHA1
Interval: 30 seconds

Original article

I try to enable Two-Factor-Authentication, or 2FA in short, on any of my accounts that support it. But I dislike it when the 2FA codes are sent via mail or SMS. This is just too insecure, as both can be intercepted. Personally, I would go so far as to say "SMS & mail aren't a valid & secure second factor", as there are too many reports of how scammers and phishers intercept SMS or mails. Yet many companies still default to this. LinkedIn too.

Therefore I wanted to switch to my authenticator app of choice: FreeOTP - https://freeotp.github.io/
The source code is on GitHub: https://github.com/freeotp

It is completely OpenSource (sponsored by RedHat) and even available in the alternative Android app store F-Droid, which only offers apps that can be built completely from source.

As naive as I am sometimes, I thought it's just the following steps:

  1. Enable 2FA in my LinkedIn profile
  2. Provide password to authenticate
  3. Scan the QR code in FreeOTP
  4. Enter the generated code to verify it works
  5. Generate & save the backup keys in my password manager

But not so on LinkedIn. They don't display a QR code. Well.. To be precise: they did - before Microsoft bought LinkedIn. After that, this changed. Nowadays they only show you the so-called secret key (encoded in Base32) and that's it.
Then LinkedIn tells you to install the Microsoft Authenticator app, while mentioning that you can, of course, use any other authenticator app.

The problem? The described workflow on what to do with that key only works in the Microsoft Authenticator app.
Side note: someone told me Google Authenticator should be able to use that code too, but I can't verify this.

LinkedIn gives you absolutely no additional technical information.

• No otpauth:// URL
• No information on whether TOTP or HOTP must be used
  • Well, to be fair, we can safely assume it's TOTP.
• Which algorithm must be used?
• What is the lifetime (interval) of the generated codes?

Nothing. But this is what I need for FreeOTP. I tried a few combinations, but had no luck.

So I resorted to the Linux command line.

  1. Enable 2FA in your account until the secret key is displayed
  2. Install qrencode (or use one of the available web generators for QR codes, at your own risk)
  3. Build the following string: otpauth://totp/LinkedIn:MyAccount?secret=KEY-YOU-GOT-FROM-LINKEDIN
    • All in one line, no spaces at the end, no enter.
    • You can change "MyAccount" to something more meaningful like your mail address
    • Example: otpauth://totp/LinkedIn:JohnDoe@company.tld?secret=U4NGHXFW6C3CLHWLQEVCBDLM5FQMAQ7E
  4. Paste that string into a textfile.
    • Again, no enter or spaces at the end
  5. Execute: qrencode -r /path/to/file.txt -t png -o /path/to/image.png
    • This will generate a PNG image at the location specified by the -o parameter
    • -r specifies the input file containing the string
    • Alternatively you can render the QR code directly in the terminal - see the sketch after this list
  6. Display the QR code and scan it with FreeOTP
  7. Verify the code works
  8. Generate your backup keys and save them in your password manager
  9. Profit!
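
By the way: if you don't want to bother with a PNG file at all, qrencode can render the QR code directly in the terminal. A one-liner sketch, using the same example string as above:

user@host:~$ qrencode -t ANSIUTF8 'otpauth://totp/LinkedIn:JohnDoe@company.tld?secret=U4NGHXFW6C3CLHWLQEVCBDLM5FQMAQ7E'

Scan the result straight from your screen with FreeOTP.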

Some documentation regarding the otpauth:// URL, its syntax and the parameters you can use is available in the old Google Authenticator repository on GitHub: https://github.com/google/google-authenticator/wiki/Key-Uri-Format
(Google Authenticator once was OpenSource too, but sadly isn't any more.)

And while at it, I created the corresponding GitHub issue for the FreeOTP project, #360 [Feature-Request] Allow adding of entries by just specifying label & secret, to properly take care of this nuisance. ;-)

Lessons learned

FreeOTP assumes the SHA1 algorithm and an interval of 30 seconds when these parameters are not part of the otpauth URL. Choosing those settings works out of the box, and I can omit the QR code step this way.
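
For completeness, this is what the example URL from above looks like with all parameters spelled out, following the Key-Uri-Format documentation linked earlier (algorithm, digits and period match the defaults FreeOTP assumes anyway):

otpauth://totp/LinkedIn:JohnDoe@company.tld?secret=U4NGHXFW6C3CLHWLQEVCBDLM5FQMAQ7E&algorithm=SHA1&digits=6&period=30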

Comments

Why I don't accept connect/friendship requests from recruiters

Photo by Andrea Piacquadio: https://www.pexels.com/photo/cheerful-young-woman-screaming-into-megaphone-3761509/

When you work in IT you are in the privileged situation that people actively offer you positions. If I had wanted, I wouldn't have needed to search for a single one of my jobs. I got plenty of offers on Xing or LinkedIn, via email or sometimes even through Twitter direct messages or Google Hangouts.

So, as the question recently arose from a recruiter on Xing: "Why didn't I accept the request to become connected?"
Well, the short answer: my starting page feed and too much clutter.

In fact, I did tend to accept those requests years ago. Quickly, this had a rather unpleasant side effect: my feed was full of job advertisements for various positions in far too many industries. Jobs which were absolutely not relevant for me. For which I had no skills, no training, no interest, no passion. And.. 99.9% of the time I'm not searching for a job. So why should I be forced to read job ad after job ad and - sorry for the wording - waste my time with it? Especially when it is not a one-time occurrence but a constant stream of non-interesting content.

Additionally, because of all that clutter, I sometimes didn't notice crucial personal updates from old colleagues & friends. Sadly, nowadays it's the standard to have just one single feed where all postings show up - not sorted into categories or anything. Therefore I am more or less forced to read every article (and advertisement...) even if I'm not interested in it. Feel free to read my post The problem with social networks - and why I still miss Google+ if you want to know more.

This is the sole reason why I stopped doing that.
It is really nothing personal. It's just that 99.9% of the time your content is irrelevant to me - as I'm simply not on the lookout for a new challenge. And on top of that: in the short time frames when it is relevant to me, 99% of the content is - again - irrelevant, because the jobs don't fit what I'm searching for.

I like my feed/stream to be about stuff I'm interested in. I don't like it when I constantly have to read things which are neither interesting nor relevant for me.

If, for example, LinkedIn changes the way their feed works, then my position could change. But as long as things stay as they are, I sadly have to be a bit more rigid about who I accept as a contact.

Comments

    What does "rc" and ".d" stand for in file-/directory names?

Photo by Tima Miroshnichenko: https://www.pexels.com/photo/close-up-view-of-system-hacking-in-a-monitor-5380664/

This was a question which came up recently, and while I could have given an answer based on my experience of where these two strings commonly occur, I had never really researched their origin.

Turns out rc has quite a heritage. Quoting from the Indiana University Knowledge Base:

runcom (as in .cshrc or /etc/rc)

The rc command derives from the runcom facility from the MIT CTSS system, ca. 1965. From Brian Kernighan and Dennis Ritchie, as told to Vicki Brown:

"There was a facility that would execute a bunch of commands stored in a file; it was called runcom for "run commands", and the file began to be called "a runcom". rc in Unix is a fossil from that usage."

Note: The name of the shell from the Plan 9 operating system is also rc.

And then I learned about the Plan 9 OS, of which I'd never heard before: https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs

For .d there is no real definitive answer. The common consensus seems to be that it originated from /etc/rc?.d - as a way to have a directory for runcoms and to distinguish it from a normal daemon configuration file, as these are typically stored directly under /etc. And/or to prevent naming problems: as files & folders can't share the same name under Unix/Linux, the .d was added as an indicator for a directory.

From there this mechanism was adopted for other daemons as well. What seems to have started as /etc/rc1.d, /etc/rc2.d, etc. - a way to separate runcoms to be executed on different runlevels - is commonly used today for the configuration of daemons/services.

Today a something.d directory holds configuration files for the something daemon. The advantage is that you are able to split your configuration into different files. This makes it easier, for example, to deploy only certain files to certain hosts via your configuration management tool of choice (Puppet, Ansible, Chef, etc.). Additionally, it's easier to spot configuration errors in a relatively small file than in one big "contains it all" file.
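
A classic example of the pattern is logrotate, where the main configuration file simply pulls in a whole directory of per-package snippets (the listed files will of course vary per system):

user@host:~$ grep include /etc/logrotate.conf
include /etc/logrotate.d
user@host:~$ ls /etc/logrotate.d
apt  dpkg  rsyslog  ufw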

Comments