Feuerfest

Just the private blog of a Linux sysadmin

Using rgxg (ReGular eXpression Generator) to generate a RegEx that matches all IPv4 & IPv6 addresses

In "Little Helper Scripts - Part 3: My Homelab CA Management Scripts", I mention that the regular expressions I use for identifying IPv4 and IPv6 addresses are rather basic. In particular, the IPv6 RegEx simply assumes that anything containing a colon is an IPv6 address.

When I mentioned my script enhancements on Mastodon and jokingly asked if anyone had a better RegEx, my former colleague Klaus Umbach recommended rgxg (ReGular eXpression Generator) to me. It sounded like it would solve my problem exactly.

Installing rgxg

The installation on Debian is pretty easy as there is a package available.

root@host:~# apt-get install rgxg
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  librgxg0
The following NEW packages will be installed:
  librgxg0 rgxg
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 24.4 kB of archives.
After this operation, 81.9 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://debian.tu-bs.de/debian bookworm/main amd64 librgxg0 amd64 0.1.2-5 [15.3 kB]
Get:2 http://debian.tu-bs.de/debian bookworm/main amd64 rgxg amd64 0.1.2-5 [9,096 B]
Fetched 24.4 kB in 0s (200 kB/s)
Selecting previously unselected package librgxg0:amd64.
(Reading database ... 40580 files and directories currently installed.)
Preparing to unpack .../librgxg0_0.1.2-5_amd64.deb ...
Unpacking librgxg0:amd64 (0.1.2-5) ...
Selecting previously unselected package rgxg.
Preparing to unpack .../rgxg_0.1.2-5_amd64.deb ...
Unpacking rgxg (0.1.2-5) ...
Setting up librgxg0:amd64 (0.1.2-5) ...
Setting up rgxg (0.1.2-5) ...
Processing triggers for man-db (2.11.2-2) ...
Processing triggers for libc-bin (2.36-9+deb12u10) ...
root@host:~#

Generating a RegEx for IPv6 and IPv4

Klaus already delivered the example for the complete IPv6 address space. For IPv4 it is equally easy:

# RegEx for the complete IPv6 address space
user@host:~$ rgxg cidr ::0/0
((:(:[0-9A-Fa-f]{1,4}){1,7}|::|[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,6}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,5}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,4}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,3}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,2}|::|:[0-9A-Fa-f]{1,4}(::[0-9A-Fa-f]{1,4}|::|:[0-9A-Fa-f]{1,4}(::|:[0-9A-Fa-f]{1,4}))))))))|(:(:[0-9A-Fa-f]{1,4}){0,5}|[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,4}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,3}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,2}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4})?|:[0-9A-Fa-f]{1,4}(:|:[0-9A-Fa-f]{1,4})))))):(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])){3})

# RegEx for the complete IPv4 address space
user@host:~$ rgxg cidr 0.0.0.0/0
(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])){3}
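
Before wiring these into the script, a quick sanity check with grep -E (my own test commands, not rgxg output) shows the anchored IPv4 RegEx behaving as expected:

user@host:~$ REGEX='^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])){3}$'
user@host:~$ echo "192.168.0.1" | grep -E "$REGEX"
192.168.0.1
user@host:~$ echo "999.5.4.2" | grep -E "$REGEX" || echo "no match"
no match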

Modifying hostcert.sh

All that is left to do is to anchor the RegEx with definite start and end points (^ and $) and test it.

user@host:~/git/github/chrlau/scripts/ca$ git diff
diff --git a/ca/hostcert.sh b/ca/hostcert.sh
index f743881..26ec0b0 100755
--- a/ca/hostcert.sh
+++ b/ca/hostcert.sh
@@ -42,16 +42,18 @@ else
        CN="$1.lan"
 fi

-# Check if Altname is an IPv4 or IPv6 (yeah.. very basic check..)
-#  so we can set the proper x509v3 extension
+# Check if Altname is an IPv4 or IPv6 - so we can set the proper x509v3 extension
+# Note: Everything which doesn't match the IPv4 or IPv6 RegEx is treated as DNS altname!
 for ALTNAME in $*; do
-  if [[ $ALTNAME =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ || $ALTNAME =~ \.*:\.* ]]; then
+  if [[ $ALTNAME =~ ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])){3}$ || $ALTNAME =~ ^((:(:[0-9A-Fa-f]{1,4}){1,7}|::|[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,6}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,5}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,4}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,3}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,2}|::|:[0-9A-Fa-f]{1,4}(::[0-9A-Fa-f]{1,4}|::|:[0-9A-Fa-f]{1,4}(::|:[0-9A-Fa-f]{1,4}))))))))|(:(:[0-9A-Fa-f]{1,4}){0,5}|[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,4}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,3}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,2}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4})?|:[0-9A-Fa-f]{1,4}(:|:[0-9A-Fa-f]{1,4})))))):(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])){3})$ ]]; then
+  #if [[ $ALTNAME =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ || $ALTNAME =~ \.*:\.* ]]; then
     IP_ALTNAMES+=("$ALTNAME")
   else
     DNS_ALTNAMES+=("$ALTNAME")
   fi
 done

+# TODO: Add DNS check against all DNS Altnames (CN is always part of DNS Altnames)
 echo "CN: $CN"
 echo "DNS ANs: ${DNS_ALTNAMES[@]}"
 echo "IP ANs: ${IP_ALTNAMES[@]}"

And it seems to work fine. Note that all altnames that don't match either RegEx are treated as DNS altnames, which can cause trouble. Hence I'm thinking about adding a check that resolves all provided DNS names prior to certificate creation.

root@host:~/ca# ./hostcert.sh service.lan 1.2.3.4 fe80::1234 service-test.lan 2abt::0000 999.5.4.2 2a00:1:2:3:4:5::ffff
CN: service.lan
DNS ANs: service.lan service-test.lan 2abt::0000 999.5.4.2
IP ANs: 1.2.3.4 fe80::1234 2a00:1:2:3:4:5::ffff
Enter to confirm.
^C
root@host:~/ca#
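
The TODO from the diff could later be solved with a few lines like the following - an untested sketch that assumes the DNS_ALTNAMES array from hostcert.sh:

# Sketch: warn about DNS altnames that don't resolve before creating the certificate
for DNS_ALTNAME in "${DNS_ALTNAMES[@]}"; do
  if ! getent hosts "$DNS_ALTNAME" > /dev/null; then
    echo "Warning: $DNS_ALTNAME does not resolve - check for typos before continuing."
  fi
done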

Installing Unbound as recursive DNS server on my PiHole

I run a Pi-hole installation on each of my Raspberry Pi 3 & 4. As I like to keep my DNS queries as much under my control as I can, I also installed Unbound to serve as a recursive DNS server. This way all DNS queries are handled by my Raspberry Pis.

This assumes Pi-hole is already installed using one of the official methods: https://github.com/pi-hole/pi-hole/#one-step-automated-install. If you haven't done that yet, do it first.

There is a good guide on the Pi-hole website which I will basically be following:

https://docs.pi-hole.net/guides/dns/unbound/

root@host:~# apt install unbound

For the configuration file I go with the one from the guide. However, as I have had problems in the past that I needed to troubleshoot, I add the following lines regarding log levels and verbosity:

root@host:~# head /etc/unbound/unbound.conf.d/pihole.conf
server:
    # If no logfile is specified, syslog is used
    logfile: "/var/log/unbound/unbound.log"
    val-log-level: 2
    # Default is 1
    #verbosity: 4
    verbosity: 1

    interface: 127.0.0.1
    port: 5335
root@host:~# 

You can add that if you want but it's not needed to make Unbound work.
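
Whether you add these lines or not, it can't hurt to let Unbound validate the configuration afterwards:

root@host:~# unbound-checkconf
unbound-checkconf: no errors in /etc/unbound/unbound.conf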

Next the guide tells us to download the root hints: a file maintained by InterNIC which contains information about the 13 DNS root name servers. Under Debian we don't need to download the named.root file from InterNIC as shown in the guide, as Debian has its own package for that: dns-root-data.

It not only contains information about the 13 DNS root name servers but also the needed DNSSEC keys (also called root trust anchors). And together with unattended-upgrades, updating that data is even automated, saving us the creation of a cronjob or systemd timer.

root@host:~# apt install dns-root-data
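
If you want to see what the package provides, or point Unbound at the files explicitly, dpkg -L lists the shipped files (output from my system, shortened to the relevant paths):

root@host:~# dpkg -L dns-root-data | grep /usr/share/dns
/usr/share/dns
/usr/share/dns/root.ds
/usr/share/dns/root.hints
/usr/share/dns/root.key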

In order for Unbound to have a directory and logfile to write to, we need to create them first:

root@host:~# mkdir -p /var/log/unbound
root@host:~# touch /var/log/unbound/unbound.log
root@host:~# chown unbound /var/log/unbound/unbound.log

As we are running under Debian, we now need to tweak the Unbound setup a little, or we will run into problems with DNSSEC. For this we delete a Debian-generated file from Unbound and comment out the unbound_conf= line in /etc/resolvconf.conf so that it isn't included anymore.

root@host:~# sed -Ei 's/^unbound_conf=/#unbound_conf=/' /etc/resolvconf.conf
root@host:~# rm /etc/unbound/unbound.conf.d/resolvconf_resolvers.conf

Now all that is left is restarting Unbound.

root@host:~# systemctl restart unbound.service

Testing DNS resolution:

root@host:~# dig pi-hole.net @127.0.0.1 -p 5335

; <<>> DiG 9.18.33-1~deb12u2-Raspbian <<>> pi-hole.net @127.0.0.1 -p 5335
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46191
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;pi-hole.net.                   IN      A

;; ANSWER SECTION:
pi-hole.net.            300     IN      A       3.18.136.52

;; Query time: 169 msec
;; SERVER: 127.0.0.1#5335(127.0.0.1) (UDP)
;; WHEN: Sun May 25 18:21:25 CEST 2025
;; MSG SIZE  rcvd: 56

And now to verify & falsify DNSSEC validation. This request must return an A-Record for dnssec.works:

root@host:~# dig dnssec.works @127.0.0.1 -p 5335

; <<>> DiG 9.18.33-1~deb12u2-Raspbian <<>> dnssec.works @127.0.0.1 -p 5335
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14076
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;dnssec.works.                  IN      A

;; ANSWER SECTION:
dnssec.works.           3600    IN      A       46.23.92.212

;; Query time: 49 msec
;; SERVER: 127.0.0.1#5335(127.0.0.1) (UDP)
;; WHEN: Sun May 25 18:22:52 CEST 2025
;; MSG SIZE  rcvd: 57

This request must not result in an A-Record, as the DNSSEC signatures of fail01.dnssec.works are deliberately broken - Unbound answers with SERVFAIL:

root@host:~# dig fail01.dnssec.works @127.0.0.1 -p 5335

; <<>> DiG 9.18.33-1~deb12u2-Raspbian <<>> fail01.dnssec.works @127.0.0.1 -p 5335
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 1552
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;fail01.dnssec.works.           IN      A

;; Query time: 19 msec
;; SERVER: 127.0.0.1#5335(127.0.0.1) (UDP)
;; WHEN: Sun May 25 18:23:41 CEST 2025
;; MSG SIZE  rcvd: 48

Now all that is left is to connect our Pi-hole with Unbound. Log on to your Pi-hole website and navigate to Settings -> DNS. Expand the line Custom DNS servers and enter the IP and port of our Unbound server: 127.0.0.1#5335 for IPv4 and ::1#5335 for IPv6. If you don't use one of these two protocols, just don't add the line. After that hit "Save & Apply" and we are done.
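
To confirm the whole chain works, you can afterwards query the Pi-hole itself on port 53; the answer should now be delivered via Unbound. (The IP below is the one from the earlier dig test; yours may differ.)

root@host:~# dig pi-hole.net @127.0.0.1 +short
3.18.136.52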

Creating a logrotate config for Unbound

Sadly Unbound still doesn't ship a logrotate config with its package. Therefore I just copy & paste the one from my previous article Howto properly split all logfile content based on timestamps - and realizing my own fallacy.

root@host:~# cat /etc/logrotate.d/unbound
/var/log/unbound/unbound.log {
        monthly
        missingok
        rotate 12
        compress
        delaycompress
        notifempty
        sharedscripts
        create 644
        postrotate
                /usr/sbin/unbound-control log_reopen
        endscript
}
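
A dry run with logrotate's debug flag verifies the new config without actually rotating anything:

root@host:~# logrotate -d /etc/logrotate.d/unbound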

Troubleshooting

fail01.dnssec.works timed out

The host fail01.dnssec.works sometimes doesn't answer requests at all. Others have noticed this too. dig then only shows the following message:

root@host:~# dig fail01.dnssec.works @127.0.0.1 -p 5335
;; communications error to 127.0.0.1#5335: timed out
;; communications error to 127.0.0.1#5335: timed out
;; communications error to 127.0.0.1#5335: timed out

; <<>> DiG 9.18.33-1~deb12u2-Raspbian <<>> fail01.dnssec.works @127.0.0.1 -p 5335
;; global options: +cmd
;; no servers could be reached

If that is the case, just execute the command again; usually it works the second time. Or just wait a few minutes. Sometimes the line ;; communications error to 127.0.0.1#5335: timed out is printed, but the dig query works after that nonetheless.
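
If you'd rather script the retry than re-run the command by hand, a small loop of my own making (not from the Pi-hole guide) polls until the expected SERVFAIL arrives:

root@host:~# until dig fail01.dnssec.works @127.0.0.1 -p 5335 +time=2 +tries=1 | grep -q SERVFAIL; do sleep 30; done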


Little helper scripts - Part 2: automation.sh / automation2.sh

Part 1 of this series is here: Little helper scripts - Part 1: no-screenlock-during-meeting.ps1 or use the following tag-link: https://admin.brennt.net/tag/littlehelperscripts

This script is no rocket science. Nothing spectacular. But the amount of hours it saved me in various projects is astonishing.

The sad reality

There are far too many companies (even IT-focused companies!) out there that have a very low level of automation. Virtual machines are created by hand - not by some script using an API. Configurations are not deployed via some configuration management software like Puppet/OpenVox/Chef/Ansible or a runbook automation tool like Rundeck - no, they are handcrafted. Bespoke. System administration like it's 1753. With all the implications and drawbacks that brings.

Containerisation? Yeah.. Well.. "A few docker containers here and there but nobody in the company really knows how that stuff works so we leave it alone" is a phrase I have heard more than a few times. Either directly, or reported from colleagues working in other companies.

This means that I have to log on to systems manually and execute commands by hand. Something I can do and do regularly in my home lab. But to do it for dozens or even hundreds of systems? Yeah... No. Sorry, I've got better things to do. And as an external consultant, the client always keeps an eye on my performance metrics. After all, they are paying my employer a lot of money for my services. Sitting there all day and getting paid to copy and paste commands? It doesn't look good on my performance reporting spreadsheet and it doesn't meet my personal standards of what a consultant should be able to deliver.

I'm just a guest

What's more, because of my work as a consultant, I'm just an external contractor. I come in for a few months to solve a problem or help with a task, and then I move on to the next project at another company. That means I can't just do everything the way I want. I can't just go and install software on all the systems, even though I've been given root privileges. I can't just implement Ansible. I have to design my solutions so that they survive and continue to work when I'm gone. Sure, I can introduce dozens of new technologies and whole new technology stacks. I'm sure my employer would love to have the follow-on support contracts for those constructs. But going it alone will seriously damage the customer relationship. Especially with the IT people. After all, they'll be the ones stuck with new technology they don't understand and will have to spend time learning and familiarising themselves with. And I have been a sysadmin long enough to know what they will think of me if I start pulling such stunts.

Of course I can suggest changes. I can push for standardisation and automation. But for most customers, that will make no difference. After all, there are reasons why a company has stopped keeping up with technology. And fixing this takes time and usually involves a complete change of the dominating mindset. Something I cannot achieve as a lone consultant.

Scripting to the rescue!

First I went with the cheap & easy solution of for server in hosta hostb hostc; do ssh user@$server "command --some-parameter bla"; done, but I grew tired of writing it all anew for each task.

Naturally, systems are often grouped into categories (webservers, etc.) or perform the same tasks (think of clusters). Hence commands must be executed on the same set of hosts again and again. One of my colleagues had already compiled lists of hostnames grouped by task, role and installed software, as some systems had the same software installed but were configured to do different tasks with it.

Through these lists I got an idea: Why not feed them into a for or while loop and be done with it?

In the end I added some safety & DNS checks and named the script automation.sh. Later I added the capability to log the output on each host and named the script automation2.sh, which can be viewed below.

Yes, it's just a glorified nesting of if-statements, but the amount of time this script has saved me is insane. And as it utilizes only basic POSIX & Bash commands, I've yet to find a system where it can't be executed.

As always: Please check my GitHub for the most recent version as I won't update the script shown in this article.
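
To give you an idea of how it's used: a typical invocation looks roughly like this (hostlist name, user and command are made-up examples):

user@host:~$ ./automation2.sh -l webservers.list -c "systemctl is-active nginx" -u deploy -r -a NO -w /tmp/nginx-check.log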

#!/bin/bash
# vim: set tabstop=2 smarttab shiftwidth=2 softtabstop=2 expandtab foldmethod=syntax :
#
# Small script to automate custom shell command execution
# Current version can be found here:
# https://github.com/ChrLau/scripts/blob/master/automation2.sh

# Bash strict mode
#  read: http://redsymbol.net/articles/unofficial-bash-strict-mode/
set -euo pipefail
IFS=$'\n\t'

# Why we set pipefail above:
# As we use "ssh command | tee" and tee will always succeed, our check for non-zero exit-codes wouldn't work without it
#
# The exit status of a pipeline is the exit status of the last command in the pipeline,
#  unless the pipefail option is enabled (see: The Set Builtin).
# If pipefail is enabled, the pipeline's return status is the value of the last (rightmost)
#  command to exit with a non-zero status, or zero if all commands exit successfully.

VERSION="1.6"
SCRIPT="$(basename "$0")"
# "|| true" keeps "set -e" from aborting before we can print a proper error
SSH="$(command -v ssh || true)"
TEE="$(command -v tee || true)"
# Colored output
RED="\e[31m"
GREEN="\e[32m"
ENDCOLOR="\e[0m"

# Initialize optional parameters with defaults,
#  else "set -u" aborts with "unbound variable" when an option is omitted
HOSTLIST=""
COMMAND=""
SSH_USER=""
ABORT=""
SUDO="NO"
SSH_PARAMS=""
LOGFILE=""

# Test if ssh is present and executable
if [ ! -x "$SSH" ]; then
  echo -e "${RED}This script requires ssh to connect to the servers. Exiting.${ENDCOLOR}"
  exit 2;
fi

# Test if tee is present and executable
if [ ! -x "$TEE" ]; then
  echo -e "${RED}tee not found.${ENDCOLOR} ${GREEN}Script can still be used,${ENDCOLOR} ${RED}but option -w CAN NOT be used.${ENDCOLOR}"
fi

function HELP {
  echo "$SCRIPT $VERSION: Execute custom shell commands on lists of hosts"
  echo "Usage: $SCRIPT -l /path/to/host.list -c \"command\" [-u <user>] [-a <YES|NO>] [-r] [-s \"options\"] [-w \"/path/to/logfile.log\"]"
  echo ""
  echo "Parameters:"
  echo " -l   Path to the hostlist file, 1 host per line"
  echo " -c   The command to execute. Needs to be in double-quotes. Else getops interprets it as separate arguments"
  echo " -u   (Optional) The user used during SSH-Connection. (Default: \$USER)"
  echo " -a   (Optional) Abort when the ssh-command fails? Use YES or NO (Default: YES)"
  echo " -r   (Optional) When given command will be executed via 'sudo su -c'"
  echo " -s   (Optional) Any SSH parameters you want to specify Needs to be in double-quotes. (Default: empty)"
  echo "                 Example: -s \"-i /home/user/.ssh/id_user\""
  echo " -w   (Optional) Write STDERR and STDOUT to logfile (on the machine where $SCRIPT is executed)"
  echo ""
  echo "No arguments or -h will print this help."
  exit 0;
}

# Print help if no arguments are given
if [ "$#" -eq 0 ]; then
  HELP
fi

# Parse arguments
while getopts ":l:c:u:a:hrs:w:" OPTION; do
  case "$OPTION" in
    l)
      HOSTLIST="${OPTARG}"
      ;;
    c)
      COMMAND="${OPTARG}"
      ;;
    u)
      SSH_USER="${OPTARG}"
      ;;
    a)
      ABORT="${OPTARG}"
      ;;
    r)
      SUDO="YES"
      ;;
    s)
      SSH_PARAMS="${OPTARG}"
      ;;
    w)
      LOGFILE="${OPTARG}"
      ;;
    h)
      HELP
      ;;
    *)
      HELP
      ;;
# Not needed as we use : as starting char in getopts string
#    :)
#      echo "Missing argument"
#      ;;
#    \?)
#      echo "Invalid option"
#      exit 1
#      ;;
  esac
done

# Give usage message and exit if either mandatory argument is missing
if [ -z "$HOSTLIST" ] || [ -z "$COMMAND" ]; then
  echo "You need to specify -l and -c. Exiting."
  exit 1;
fi

# Check if username was provided, if not use $USER environment variable
if [ -z "$SSH_USER" ]; then
  SSH_USER="$USER"
fi

# Check for YES or NO
if [ -z "$ABORT" ]; then
  # If empty, set to YES (default)
  ABORT="YES"
# Check if it's not NO or YES - we want to ensure a definite decision here
elif [ "$ABORT" != "NO" ] && [ "$ABORT" != "YES" ]; then
  echo  "-a accepts either YES or NO (case-sensitive)"
  exit 1
fi

# If variable logfile is not empty
if [ -n "$LOGFILE" ]; then

  # Check if logfile is not present
  if [ ! -e "$LOGFILE" ]; then
    # Check if creating it was unsuccessful
    if ! touch "$LOGFILE"; then
      echo "${RED}Could not create logfile at $LOGFILE. Aborting. Please check permissions.${ENDCOLOR}"
      exit 1
    fi
  # When logfile is present..
  else
    # Check if it's writeable and abort when not
    if [ ! -w "$LOGFILE" ]; then
      echo "${RED}$LOGFILE is NOT writeable. Aborting. Please check permissions.${ENDCOLOR}"
      exit 1
    fi
  fi
fi

# Execute command via sudo or not?
if [ "$SUDO" = "YES" ]; then
  COMMANDPART="sudo su -c '${COMMAND}'"
else
  COMMANDPART="${COMMAND}"
fi

# Check if hostlist is readable
if [ -r "$HOSTLIST" ]; then
  # Check that hostlist is not 0 bytes
  if [ -s "$HOSTLIST" ]; then
  
    while IFS= read -r HOST
    do

      # getent returns a non-zero exit code (2) if a hostname isn't resolving.
      # "if !" instead of a "$?" check, as "set -e" would abort the script silently
      if ! getent hosts "$HOST" &> /dev/null; then
        echo -e "${RED}Host: $HOST is not resolving. Typo? Aborting.${ENDCOLOR}"
        exit 2
      fi

      # Log STDERR and STDOUT to $LOGFILE if specified
      if [ -n "$LOGFILE" ]; then
        echo -e "${GREEN}Connecting to $HOST ...${ENDCOLOR}" 2>&1 | tee -a "$LOGFILE"

        # Test if the ssh-command was successful. "if !" instead of a "$?" check,
        #  as "set -e" would abort the script before we could inspect "$?".
        # $SSH_PARAMS is deliberately unquoted so an empty value vanishes
        # shellcheck disable=SC2086
        if ! ssh -n -o ConnectTimeout=10 ${SSH_PARAMS} "$SSH_USER"@"$HOST" "${COMMANDPART}" 2>&1 | tee -a "$LOGFILE"; then
          echo -n -e "${RED}Command was NOT successful on $HOST ... ${ENDCOLOR}" 2>&1 | tee -a "$LOGFILE"

          # Shall we proceed or not?
          if [ "$ABORT" = "YES" ]; then
            echo -n -e "${RED}Aborting.${ENDCOLOR}\n" 2>&1 | tee -a "$LOGFILE"
            exit 1
          else
            echo -n -e "${GREEN}Proceeding, as configured.${ENDCOLOR}\n" 2>&1 | tee -a "$LOGFILE"
          fi
        fi

      else

        echo -e "${GREEN}Connecting to $HOST ...${ENDCOLOR}"
        ssh -n -o ConnectTimeout=10 "${SSH_PARAMS}" "$SSH_USER"@"$HOST" "${COMMANDPART}"

        # Test if ssh-command was successful
        # shellcheck disable=SC2181
        if [ "$?" -ne 0 ]; then
          echo -n -e "${RED}Command was NOT successful on $HOST ... ${ENDCOLOR}"

          # Shall we proceed or not?
          if [ "$ABORT" = "YES" ]; then
            echo -n -e "${RED}Aborting.${ENDCOLOR}\n"
            exit 1
          else
            echo -n -e "${GREEN}Proceeding, as configured.${ENDCOLOR}\n"
          fi
        fi

      fi

    done < "$HOSTLIST"

  else
    echo -e "${RED}Hostlist \"$HOSTLIST\" is empty. Exiting.${ENDCOLOR}"
    exit 1
  fi

else
  echo -e "${RED}Hostlist \"$HOSTLIST\" is not readable. Exiting.${ENDCOLOR}"
  exit 1
fi

Opinion: fail2ban doesn't increase system security, it's just a mere logfile cleanup tool

Like many IT people, I pay to have my own server for personal projects and self-hosting. As such, I am responsible for securing these systems as they are, of course, connected to the internet and provide services to everyone. Like this blog for example. So I often read about people installing Fail2Ban to "increase the security of their systems".

And every time I read this, I am like this popular meme from the TV series Firefly:

I don't share this view of Fail2Ban - in fact, I'm against the view that it improves security - but I usually keep quiet, knowing that starting this discussion is simply not helpful. Nor wanted.

For me, Fail2Ban is just a log cleanup tool. Its only benefit is that it will catch repeated login attempts and deny them by adding firewall rules to iptables/nftables to block traffic from the offending IPs. This prevents hundreds or thousands of extra logfile lines about unsuccessful login attempts. So it doesn't improve the security of a system, as it doesn't prevent unauthorised access or strengthen authorisation or authentication methods. No, Fail2Ban - by design - can only act when an IP has been seen enough times to trigger an action from Fail2Ban.

With enough luck on the part of the attacker - or negligence on the part of the operator - a login will still succeed. Fail2Ban won't save you if you allow root to login via SSH with the password "root" or "admin" or "toor".

Granted, even Fail2Ban knows this and they write this prominently on their project's GitHub page:

Though Fail2Ban is able to reduce the rate of incorrect authentication attempts, it cannot eliminate the risk presented by weak authentication. Set up services to use only two factor, or public/private authentication mechanisms if you really want to protect services.

Source: https://github.com/fail2ban/fail2ban

Yet, the number of people I see installing Fail2Ban to "improve SSH security" but refusing to use public/private key authentication is staggering.

I only allow public/private key login for select non-root users specified via AllowUsers. Absolutely no password logins allowed. I've changed the SSH port away from port 22/tcp and I don't run Fail2Ban. With this setup there are not that many login attempts anyway. And those that do happen tend to abort pretty early on when they realise that password authentication is disabled.

Although in all honesty: thanks to services like https://www.shodan.io/ and others, finding out the changed SSH port is not a problem. There are dozens of tools that can detect what is running behind a port and act accordingly. Therefore I do see my fair share of SSH bruteforce attempts. Denying password authentication is the real game changer.

So do yourself a favour: Don't rely on Fail2Ban for SSH security. Rely on the following points instead (a combined sshd_config sketch follows the list):

  • Keep your system up to date! As this will also remove outdated/broken ciphers and add support for new, more secure ones. All the added & improved SSH security gives you nothing if an attacker can gain root privileges via another vulnerability.
  • AllowUsers or AllowGroups: To allow only specified users to log in via SSH. This is generally preferred over using DenyUsers or DenyGroups, as it's wiser to specify "what is allowed" than "what is forbidden". The bad guys are pretty damn good at finding the flaws and holes in the latter approach.
  • DenyUsers or DenyGroups: Based on your groups this may be useful too but I try to avoid using this.
  • AuthorizedKeysFile /etc/ssh/authorized_keys/%u: This will place the authorized_keys file for each user in the /etc/ssh/authorized_keys/ directory. This ensures users can't add public keys by themselves. Only root can.
  • PermitEmptyPasswords no: Should be self-explaining. Is already a default.
  • PasswordAuthentication no and PubkeyAuthentication yes: Disables authentication via password. Enables authentication via public/private keys.
  • AuthenticationMethods publickey: To only offer publickey authentication. Normally there is publickey,password or the like.
  • PermitRootLogin no: Create a non-root account and use su. Or install sudo and use that if needed. See also AllowUsers.
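
Putting the sshd-related points together, a minimal sketch of the relevant sshd_config lines could look like this (the user names are placeholders, adapt them to your setup):

# /etc/ssh/sshd_config - example sketch
AllowUsers alice bob
AuthorizedKeysFile /etc/ssh/authorized_keys/%u
PermitEmptyPasswords no
PasswordAuthentication no
PubkeyAuthentication yes
AuthenticationMethods publickey
PermitRootLogin no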

Why I prefer !requiretty over "ssh -t"

Image: Dall-E (https://admin.brennt.net/bl-content/uploads/pages/dad5b98ab9f04a2cdca5de3afe2f6b0e/dall-e_sudo.jpg)

Claudio Künzler, whom I know briefly from working with him on enhancing his check_equallogic back in 2010, wrote an article over at Geeker's Digest on How to use sudo inside SSH command. Of course he mentions the ssh -t parameter, as without it we would get the following error message when calling sudo (example shamelessly stolen from his article 😇):

ck@linux:~$ ssh targetserver "sudo whoami"
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
sudo: a password is required

And ssh -t is the right call here. Well, to be fair: it's not the only solution, and in my eyes not even the best one.

No, I am not talking about piping the password into the command prompt, which is recommended as a solution so often (it's not one!) that it makes me sad.

I am talking about negating requiretty in the /etc/sudoers file, or in a file under /etc/sudoers.d/ respectively.

Let's take the /etc/sudoers.d/icinga2 file I use in my article How to monitor your APT-repositories with Icinga:

Here I must use NOPASSWD for all executed commands and monitoring plugins as well as the line Defaults:icinga2 !requiretty. This negates the need for a tty for the icinga2 user completely. Omitting either the NOPASSWD or the !requiretty will give us the error message we see above.

root@admin:~ # cat /etc/sudoers.d/icinga2
# This line disables the need for a tty for sudo
#  else we will get all kind of "sudo: a password is required" errors
Defaults:icinga2 !requiretty

# sudo rights for Icinga2
icinga2  ALL=(ALL) NOPASSWD: /usr/bin/unattended-upgrades
icinga2  ALL=(ALL) NOPASSWD: /usr/bin/unattended-upgrade
icinga2  ALL=(ALL) NOPASSWD: /usr/bin/apt-get
icinga2  ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/check_apt

It's also possible to just negate requiretty based on the path to the binary. As mentioned in this StackExchange question: How to disable requiretty for a single command in sudoers?
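
Going by that answer, such a per-command line could look like the following sketch - here reusing the check_apt path from the sudoers file above; verify the syntax against man sudoers before relying on it:

# Drop the tty requirement only for a single binary
Defaults!/usr/lib/nagios/plugins/check_apt !requiretty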

However keep in mind that the ordering of lines in a sudoers file is important! Quoting man sudoers from the SUDOERS FILE FORMAT section:

When multiple entries match for a user, they are applied in order. Where there are multiple matches, the last match is used (which is not necessarily the most specific match).

Why not just use ssh -t?

Personally I prefer the configuration/setting of sudo-related parameters in an /etc/sudoers.d/ file. My reasons are:

When properly configured via a sudoers file, it doesn't matter if a command is called via ssh, ssh -t or any other way. This enhances operational stability and makes life easier for users, as they don't have to remember to add the -t parameter.

And it at least serves as some form of documentation that this user/binary is called from another script/host/etc., giving you a clue what these sudo rights are needed and used for.


What the G? Exploring the current state of Google Apps & Play alternatives for Android ROMs

Photo by Kyle Roxas: https://www.pexels.com/photo/crackers-cheese-and-fruits-2122278/

I use LineageOS on my OnePlus 8T. Sadly I still run it with Android 11 and LineageOS 18.1, which is quite old in 2025. My plans to update the firmware and ROM got postponed again as, unsurprisingly, I need my phone.

However, one of the new year's resolutions I chose is to finally update my mobile, so I'm not lagging behind on Android security updates for several years. *ahem*

To de-google or to not de-google?

Along with this a long-standing idea surfaced again: I want to remove as many Google services as possible without having to give up needed tools/services.

Therefore, the first question regarding updating my mobile is: Do I still want to keep using the Google Apps? Or do I want to use another suite of alternative GApps that enhance privacy at the cost of some features? Or do I want to get rid of Google's apps altogether?

Depending on that choice, I can use all, some, or none of the Google apps.

The rule of thumb is: The more functionality you need, the more you are paying with your data. The less privacy you have.

On a normal Android phone, the Google apps (YouTube, Maps, Wallet, etc.) are almost always pre-installed. Custom ROMs, by contrast, often can't pre-install these, as a) Google doesn't allow it, and b) nowadays ROM developers know that users choose an alternative ROM to gain more control over their digital life and therefore explicitly want to disable some (or all) features related to Google's services and apps.

Alternative Google apps packages: The agony of choice

For this, several Google Apps packages exist. All vary in features and what they try to accomplish.

The big three in this field are:

MindTheGapps

Currently I am running LineageOS with MindTheGapps, which is their current standard (LineageOS Wiki). MindTheGapps uses the proprietary Google services/packages; it just serves as an installer for the official Google packages. This means you can use the Google Play Store, Google Maps, every method of authentication with Google Firebase (more relevant than you might think), and the SafetyNet feature (here is a good site with a presentation, whitepaper & lectures about SafetyNet - also known as DroidGuard) on which banking apps often rely. Among all other Google services.

The downside? Google happily leeches as much data from you as ever.
But it's helpful if you just want a custom ROM and no hassle with non-working apps.

MicroG

A viable alternative is MicroG. MicroG is, to quote the project itself, "A free-as-in-freedom re-implementation of Google’s proprietary Android user space apps and libraries." This means MicroG is a redevelopment to match the functionality while keeping privacy-harming features out. This also means that privacy-focused ROMs like CalyxOS include them and allow users to choose if they want to activate them.

There is an overview of the implementation status of each feature in the MicroG apps: https://github.com/microg/GmsCore/wiki/Implementation-Status. And it also serves as a good example: in MicroG, two Firebase authentication methods are currently not supported. This means that if an app you use relies solely on these two methods, you won't be able to sign in. It's generally advisable to check whether your important apps work with MicroG.

The neat thing is that MicroG uses signature spoofing to present itself as the Google apps. Additionally, the long-standing issue with allowing signature spoofing in LineageOS has been fixed recently. This means that if MicroG is installed via F-Droid and the app package is signed with the signature from the MicroG development team, LineageOS allows MicroG to spoof the signatures. Which eliminates many long-standing issues.

In practice this allows us to simply install MicroG via F-Droid and be done with it. No more incorporating a MicroG image during the flash process.

Information about their F-Droid repository can be found here: https://microg.org/fdroid.html

Please install MicroG only from their own repository. Don't install it from the Play Store or other GitHub repositories. It's common to find malware posing as the official MicroG package. One such example is documented in this Reddit thread: https://www.reddit.com/r/MicroG/comments/1icxc8h/malware/

And here is an interview with the MicroG developer about how it all came to be: https://community.e.foundation/t/microg-what-you-need-to-know-a-conversation-with-its-developer-marvin-wissfeld/22700

OpenGApps

Honestly, I still knew this project from when I first flashed my phone, and it was the first one I looked at. But sadly, development currently seems to be very slow:

  • The last news post is from August 23, 2019.
  • Android 11 is the last supported OS version, according to their homepage.
  • The last update in their SourceForge repository happened on May 3rd, 2022, stating that support for Android 11 is, for most variants, still in the testing phase. According to their GitHub issues, some support for Android 12 should be there, but I didn't dig deeper into this.

In this GitHub issue, it's stated that they currently suffer from a lack of supporters/maintainers and have problems with their build server infrastructure, etc.

Therefore, I didn't look any deeper into it. If you want, here is a link describing which package contains which features: https://github.com/opengapps/opengapps/wiki/Advanced-Features-and-Options

Google Play store alternatives

Now that we have decided which package we want to use, we probably want to quit using the Google Play Store. After all, the more Google apps we replace, the less we are dependent on their services running on our phone.

Luckily, there are two app store alternatives that, together, solve all our problems.

F-Droid

F-Droid is the most commonly used alternative app store. It focuses on OpenSource software, rebuilding the .apk packages from the official source repositories. Undesired features such as tracking, ads, or proprietary code are listed on each app's page. However, as they host all files themselves and focus on privacy and OpenSource, they have their own inclusion policy which apps need to fulfill in order to be added to F-Droid. Hence, many commercial apps simply can't be added, as they either directly use Google's Play services or other proprietary software to build their app - or the company simply doesn't agree to being hosted on F-Droid.

This has the direct result that many popular apps will & can never be available on F-Droid. Which leads us to...

Aurora OSS

Aurora OSS is another Google Play Store alternative - one that is usable with or without Google services, or with MicroG. As Aurora directly accesses the Google Play Store, you can download all apps, even the ones you paid for - as long as you log in with your Google account.

For the rest, I recommend reading their project description: https://gitlab.com/AuroraOSS/AuroraStore. The "nicer" looking URL is https://auroraoss.com/; however, this just forwards you to their GitLab project.

There is however a report that a freshly created Google account that solely used Aurora got deleted after just two hours: https://www.reddit.com/r/degoogle/comments/13td3iq/google_account_deleted_after_2_hours_of_aurora/ and https://news.ycombinator.com/item?id=36098970

But then again, this was the only big report on the matter. So I can't really tell what will happen to accounts that are also used for YouTube, or that have paid Play Store apps connected to them.

This discussion on Reddit provides a little bit of insight as to why this could happen to an account.

Conclusion

Aurora combined with F-Droid means we have a solution to get any and all apps we need.

And whether you plan on flashing your Android device or not: both app stores can be used on any phone, allowing you to test them out prior to making any fundamental changes.

Side note: Vanishing developers

While doing my research for this article, I spent some time in the XDA Developers forum, on GitHub, GitLab instances, and even SourceForge. And I noticed a constantly recurring pattern: many of the developers simply stopped being active in the last few years. No new posts in the XDA Developers forum, no activity on GitHub, etc.

Now it's not uncommon for OpenSource projects to lose contributors or even main project developers. After all, people do have a job, a family, a life. And somehow they need to earn money. Nothing surprising about that.

But I additionally noticed that many seem to be male and of Russian nationality. Given the fact that the Russian attack on Ukraine started in February 2022 and many developers went silent in the last 2-3 years, I have a bad feeling that many of them were conscripted into the Russian army.

Let's just hope that I am wrong...
