Feuerfest

Just the private blog of a Linux sysadmin

Using rgxg (ReGular eXpression Generator) to generate a RegEx that matches all IPv4 & IPv6 addresses

In "Little Helper Scripts - Part 3: My Homelab CA Management Scripts", I mention that the regular expressions I use for identifying IPv4 and IPv6 addresses are rather basic. In particular, the IPv6 RegEx simply assumes that anything containing a colon is an IPv6 address.

When I jokingly asked on Mastodon if anyone had a better RegEx, I mentioned my script enhancements. My former colleague Klaus Umbach recommended rgxg (ReGular eXpression Generator) to me. It sounded like it would solve my problem exactly.

Installing rgxg

The installation on Debian is pretty easy as there is a package available.

root@host:~# apt-get install rgxg
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  librgxg0
The following NEW packages will be installed:
  librgxg0 rgxg
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 24.4 kB of archives.
After this operation, 81.9 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://debian.tu-bs.de/debian bookworm/main amd64 librgxg0 amd64 0.1.2-5 [15.3 kB]
Get:2 http://debian.tu-bs.de/debian bookworm/main amd64 rgxg amd64 0.1.2-5 [9,096 B]
Fetched 24.4 kB in 0s (200 kB/s)
Selecting previously unselected package librgxg0:amd64.
(Reading database ... 40580 files and directories currently installed.)
Preparing to unpack .../librgxg0_0.1.2-5_amd64.deb ...
Unpacking librgxg0:amd64 (0.1.2-5) ...
Selecting previously unselected package rgxg.
Preparing to unpack .../rgxg_0.1.2-5_amd64.deb ...
Unpacking rgxg (0.1.2-5) ...
Setting up librgxg0:amd64 (0.1.2-5) ...
Setting up rgxg (0.1.2-5) ...
Processing triggers for man-db (2.11.2-2) ...
Processing triggers for libc-bin (2.36-9+deb12u10) ...
root@host:~#

Generating a RegEx for IPv6 and IPv4

Klaus already delivered the example for the complete IPv6 address space. For IPv4 it is equally easy:

# RegEx for the complete IPv6 address space
user@host:~$ rgxg cidr ::0/0
((:(:[0-9A-Fa-f]{1,4}){1,7}|::|[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,6}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,5}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,4}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,3}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,2}|::|:[0-9A-Fa-f]{1,4}(::[0-9A-Fa-f]{1,4}|::|:[0-9A-Fa-f]{1,4}(::|:[0-9A-Fa-f]{1,4}))))))))|(:(:[0-9A-Fa-f]{1,4}){0,5}|[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,4}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,3}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,2}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4})?|:[0-9A-Fa-f]{1,4}(:|:[0-9A-Fa-f]{1,4})))))):(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])){3})

# RegEx for the complete IPv4 address space
user@host:~$ rgxg cidr 0.0.0.0/0
(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])){3}
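To verify that the generated IPv4 RegEx behaves as expected, it can be quickly sanity-checked with grep -E once it is anchored with ^ and $. The sample addresses below are just made-up test values:

```shell
# Sanity check of the rgxg-generated IPv4 RegEx, anchored with ^ and $
IPV4_REGEX='^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])){3}$'

for IP in 1.2.3.4 192.168.1.114 999.5.4.2 256.1.1.1; do
  if printf '%s\n' "$IP" | grep -Eq "$IPV4_REGEX"; then
    echo "$IP: match"
  else
    echo "$IP: no match"
  fi
done
# 1.2.3.4 and 192.168.1.114 match; 999.5.4.2 and 256.1.1.1 do not,
# as the RegEx rejects any octet above 255
```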

Modifying hostcert.sh

All that is left to do is anchor the RegEx with definite start and end points (^ and $) and test it.

user@host:~/git/github/chrlau/scripts/ca$ git diff
diff --git a/ca/hostcert.sh b/ca/hostcert.sh
index f743881..26ec0b0 100755
--- a/ca/hostcert.sh
+++ b/ca/hostcert.sh
@@ -42,16 +42,18 @@ else
        CN="$1.lan"
 fi

-# Check if Altname is an IPv4 or IPv6 (yeah.. very basic check..)
-#  so we can set the proper x509v3 extension
+# Check if Altname is an IPv4 or IPv6 - so we can set the proper x509v3 extension
+# Note: Everything which doesn't match the IPv4 or IPv6 RegEx is treated as DNS altname!
 for ALTNAME in $*; do
-  if [[ $ALTNAME =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ || $ALTNAME =~ \.*:\.* ]]; then
+  if [[ $ALTNAME =~ ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])){3}$ || $ALTNAME =~ ^((:(:[0-9A-Fa-f]{1,4}){1,7}|::|[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,6}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,5}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,4}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,3}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,2}|::|:[0-9A-Fa-f]{1,4}(::[0-9A-Fa-f]{1,4}|::|:[0-9A-Fa-f]{1,4}(::|:[0-9A-Fa-f]{1,4}))))))))|(:(:[0-9A-Fa-f]{1,4}){0,5}|[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,4}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,3}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,2}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4})?|:[0-9A-Fa-f]{1,4}(:|:[0-9A-Fa-f]{1,4})))))):(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])){3})$ ]]; then
+  #if [[ $ALTNAME =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ || $ALTNAME =~ \.*:\.* ]]; then
     IP_ALTNAMES+=("$ALTNAME")
   else
     DNS_ALTNAMES+=("$ALTNAME")
   fi
 done

+# TODO: Add DNS check against all DNS Altnames (CN is always part of DNS Altnames)
 echo "CN: $CN"
 echo "DNS ANs: ${DNS_ALTNAMES[@]}"
 echo "IP ANs: ${IP_ALTNAMES[@]}"

And it seems to work fine. Notably, all altnames which don't match any of the RegExes are treated as DNS altnames, which can cause trouble. Hence I'm thinking about adding a check that resolves all provided DNS names prior to certificate creation.

root@host:~/ca# ./hostcert.sh service.lan 1.2.3.4 fe80::1234 service-test.lan 2abt::0000 999.5.4.2 2a00:1:2:3:4:5::ffff
CN: service.lan
DNS ANs: service.lan service-test.lan 2abt::0000 999.5.4.2
IP ANs: 1.2.3.4 fe80::1234 2a00:1:2:3:4:5::ffff
Enter to confirm.
^C
root@host:~/ca#
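Such a pre-flight check could look like the following sketch. The resolves helper and the example names are my assumptions here, not part of hostcert.sh:

```shell
# Sketch of a hypothetical pre-flight check: warn about DNS altnames
# that do not resolve before the certificate is created
resolves() {
  # getent queries NSS (/etc/hosts and DNS), so it honors the local setup
  getent hosts "$1" > /dev/null
}

for NAME in localhost service.lan; do
  if resolves "$NAME"; then
    echo "$NAME resolves"
  else
    echo "WARNING: $NAME does not resolve" >&2
  fi
done
```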

Little helper scripts - Part 3: My homelab CA management scripts

Part 2 of this series is here: Little helper scripts - Part 2: automation.sh / automation2.sh or use the following tag-link for all parts: https://admin.brennt.net/tag/littlehelperscripts

My homelab situation

Like many IT enthusiasts, I run a homelab with various services. And of course HTTPS (and SSL/TLS in general) plays a crucial role in securing the transport layer of applications. Using a self-signed certificate without a CA is possible, but then I would constantly get certificate warnings and would have to add each certificate to every browser's exception list. Having my own self-signed CA enables me to add the CA certificate once to every browser/truststore on all devices and be done with it.

While I happily use Let's Encrypt on all services reachable via the Internet (using getssl or Apache's mod_md as ACME clients), I can't do so for my homelab domains. The reason is that I deliberately chose lan. as the TLD for my home network to prevent overlap with any registered domain/TLD. And while lan. is currently not used/registered as an official TLD, it is also not listed/reserved as a Special-Use Domain Name by the IANA. Hence Let's Encrypt won't issue certificates for that domain, as it can't verify the domain.

Of course I could get in trouble if lan. is ever registered as an official TLD. But I doubt that will happen.

Managing your own self-signed Certificate Authority

Years ago I read Running one's own root Certificate Authority in 2023 and used the scripts to create my own self-signed CA (GitHub). I created my CA certificate with the cacert.sh script, and new certificates for domains were created with hostcert.sh. The sign.sh script is called automatically to sign the CSRs, but can also be called separately. This worked fine for a few years.

However, I recently noticed that some features were missing, as the logic to add SubjectAltNames only supported DNS altnames and had some shortcomings. Commit 56fe6a93675818a483be3abe02cc1ac963a76aed fixed that.

The change was rather easy: just add some regular expressions to differentiate between DNS, IPv4 and IPv6 altnames and add them with the right prefix to the X509v3 Subject Alternative Name extension (DNS altnames have to be prefixed with DNS: and IP altnames with IP:).
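The prefixing logic can be sketched as follows; the array contents are example values, and the exact variable names in hostcert.sh may differ:

```shell
# Build the X509v3 subjectAltName value: DNS altnames get the "DNS:"
# prefix, IP altnames the "IP:" prefix, joined by commas
DNS_ALTNAMES=(service.lan service-test.lan)
IP_ALTNAMES=(192.168.1.114 2a00:2d:bd11:c569:abcd:efff:cb42:1234)

SAN=""
for NAME in "${DNS_ALTNAMES[@]}"; do SAN+="DNS:$NAME,"; done
for NAME in "${IP_ALTNAMES[@]}"; do SAN+="IP:$NAME,"; done
SAN="${SAN%,}"  # strip the trailing comma

echo "subjectAltName=$SAN"
```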

Generating a new certificate with DNS, IPv4 and IPv6 altnames

Now my hostcert.sh supports the following:

user@host:~/ca$ ./hostcert.sh service.lan 192.168.1.114  2a00:2d:bd11:c569:abcd:efff:cb42:1234 service-test.lan
CN: service.lan
DNS ANs: service.lan service-test.lan
IP ANs: 192.168.1.114  2a00:2d:bd11:c569:abcd:efff:cb42:1234
Enter to confirm.

writing RSA key

Reading pass from $CAPASS
CA signing: service.lan.csr -> service.lan.crt:
Using configuration from ca.config
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName           :PRINTABLE:'DE'
localityName          :ASN.1 12:'City'
organizationName      :ASN.1 12:'LAN CA'
commonName            :ASN.1 12:'service.lan'
Certificate is to be certified until Jul  5 23:08:40 2026 GMT (365 days)

Write out database with 1 new entries
Database updated
CA verifying: service.lan.crt <-> CA cert
service.lan.crt: OK

And the resulting certificate is also fine:

user@host:~/ca# grep "X509v3 Subject Alternative Name" -A1 service.lan.crt
            X509v3 Subject Alternative Name:
                DNS:service.lan, DNS:service-test.lan, IP Address:192.168.1.114, IP Address:2a00:2d:bd11:c569:abcd:efff:cb42:1234

Bye bye Brave Browser

mthie wrote a short blog post in which he linked to the following XDA Developers article: https://www.xda-developers.com/brave-most-overrated-browser-dont-recommend/.

For me, personally, this article was a real eye-opener. I only knew about roughly half of the things that Brave has done over the years. I was especially unaware of the "Accepting donations for creators - who don't know anything about the Brave program" or the Honey-like "Let's replace referral codes in links and cookies with our own to steal the commission".

If a company pulls such stunts, it is not acting in good faith. Not acting in good faith makes you a bad actor.

Do you really want to use software from a bad actor? Let alone have it installed on your mobile phone? It's the most private piece of technical equipment we own these days.

Yeah, me neither. Hence Brave has been officially uninstalled from all my devices as of 20 minutes ago.

And I doubt I will miss it much, as Firefox on Android has officially supported all extensions since the end of 2023/beginning of 2024. Therefore extensions like uBlock Origin and uMatrix work perfectly fine. The fact that those were not supported back then was the main reason for choosing Brave in the first place.

Now that this reason is gone, so too is the crypto- and scam-infested Brave browser.


Pi-hole, IPv6 and NTP - How to fix: "No valid NTP replies received, check server and network connectivity"

The following log message would only sporadically be logged on my Pi-hole. Not every hour, and not even every day. Just... sometimes. When the stars aligned... When, 52 years ago, Monday fell on a full moon and a 12th-generation carpenter was born... You get the idea.  😄

The error message was:

"No valid NTP replies received, check server and network connectivity"

Strange. NTP works. Even if Pi-hole sometimes fancies otherwise.

Inspecting the Pi-hole configuration

pihole-FTL returned the following NTP configuration:

user@host:~$ pihole-FTL --config ntp
ntp.ipv4.active = true
ntp.ipv4.address =
ntp.ipv6.active = true
ntp.ipv6.address =
ntp.sync.active = true
ntp.sync.server = 1.de.pool.ntp.org
ntp.sync.interval = 3600
ntp.sync.count = 8
ntp.sync.rtc.set = false
ntp.sync.rtc.device =
ntp.sync.rtc.utc = true

That looked good to me.

It was here that I had my suspicions: wait, does the NTP Pool Project even offer IPv6? I have never knowingly used public NTP pools over IPv6. In customer networks, NTP servers are usually only reachable via IPv4, and I don't run an NTP server in my home network. Sadly, many services are still not IPv6-ready.

Some companies even remove IPv6 support, like DigiCert (a commercial certificate authority!), who removed IPv6 support when they switched to a new CDN provider. This left me speechless. Read https://knowledge.digicert.com/alerts/digicert-certificate-status-ip-address if you want to know more.

NTP & IPv6? Only with pools that start with a 2

A short search for IPv6 support in NTP-Pools and https://www.ntppool.org/en/use.html provided the answer:

Please also note that the system currently only provides IPv6 addresses for a zone in addition to IPv4 addresses if the zone name is prefixed by the number 2, e.g. 2.pool.ntp.org (provided there are any IPv6 NTP servers in the respective zone). Zone names not prefixed by a number, or prefixed with any of 0, 1 or 3, currently provide IPv4 addresses only.

It turns out that the problem lies in my dual-stack setup, since I use IPv4 and IPv6 in parallel. Or rather... it lies with the NTP pools. I used dig to check whether any AAAA records were returned for 1.de.pool.ntp.org, the pool I was using.

A dig aaaa 1.de.pool.ntp.org returns no AAAA records:

user@host:~$ dig aaaa 1.de.pool.ntp.org

; <<>> DiG 9.18.33-1~deb12u2-Debian <<>> aaaa 1.de.pool.ntp.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43230
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; EDE: 3 (Stale Answer)
;; QUESTION SECTION:
;1.de.pool.ntp.org.             IN      AAAA

;; AUTHORITY SECTION:
pool.ntp.org.           0       IN      SOA     d.ntpns.org. hostmaster.pool.ntp.org. 1749216969 5400 5400 1209600 3600

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1) (UDP)
;; WHEN: Fri Jun 06 16:10:31 CEST 2025
;; MSG SIZE  rcvd: 134

And sure enough, a dig aaaa 2.de.pool.ntp.org returns AAAA records:

user@host:~$ dig aaaa 2.de.pool.ntp.org

; <<>> DiG 9.18.33-1~deb12u2-Debian <<>> aaaa 2.de.pool.ntp.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47906
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;2.de.pool.ntp.org.             IN      AAAA

;; ANSWER SECTION:
2.de.pool.ntp.org.      130     IN      AAAA    2a0f:85c1:b73:62:123:123:123:123
2.de.pool.ntp.org.      130     IN      AAAA    2a01:239:2a6:d500::1
2.de.pool.ntp.org.      130     IN      AAAA    2606:4700:f1::1
2.de.pool.ntp.org.      130     IN      AAAA    2a01:4f8:141:282::5:1

;; Query time: 656 msec
;; SERVER: 127.0.0.1#53(127.0.0.1) (UDP)
;; WHEN: Fri Jun 06 16:33:32 CEST 2025
;; MSG SIZE  rcvd: 158

My new Pi-hole configuration

The fix was easy: just configure 2.de.pool.ntp.org instead of 1.de.pool.ntp.org. Done.

user@host:~$ pihole-FTL --config ntp
ntp.ipv4.active = true
ntp.ipv4.address =
ntp.ipv6.active = true
ntp.ipv6.address =
ntp.sync.active = true
ntp.sync.server = 2.de.pool.ntp.org
ntp.sync.interval = 3600
ntp.sync.count = 8
ntp.sync.rtc.set = false
ntp.sync.rtc.device =
ntp.sync.rtc.utc = true

My Pi-hole instances haven't been running long enough since the change to really verify that the error is gone, but I suspect it is.


How to fix Pi-hole FTL error: EDE: DNSSEC bogus

If you are instead searching for an explanation of the error code itself, have a look at RFC 8914.

I noticed that DNS resolution on my secondary Pi-hole instance wasn't working. host wouldn't resolve a single DNS name. As /etc/resolv.conf included only the DNS servers running on localhost (127.0.0.1 and ::1), DNS resolution didn't work at all. Naturally, I started looking at the Pi-hole logfiles.

/var/log/pihole/pihole.log logged the following for all domains:

Jun  4 00:02:54 dnsmasq[4323]: query 1.de.pool.ntp.org from 127.0.0.1
Jun  4 00:02:54 dnsmasq[4323]: forwarded 1.de.pool.ntp.org to 127.0.0.1#5335
Jun  4 00:02:54 dnsmasq[4323]: forwarded 1.de.pool.ntp.org to ::1#5335
Jun  4 00:02:54 dnsmasq[4323]: validation 1.de.pool.ntp.org is BOGUS
Jun  4 00:02:54 dnsmasq[4323]: reply error is SERVFAIL (EDE: DNSSEC bogus)

Ok, that was a first hint. I then checked /var/log/pihole/FTL.log, where this message was repeated over and over again.

2025-06-03 00:02:52.505 CEST [841/T22762] ERROR: Error NTP client: Cannot resolve NTP server address: Try again
2025-06-03 00:02:52.509 CEST [841/T22762] INFO: Local time is too inaccurate, retrying in 600 seconds before launching NTP server

NTP is not the culprit

I checked the local time and it matched the time on the primary Pi-hole instance. Strange. I even opened https://uhr.ptb.de/ which is the official clock for Germany (yes, by law). And it matched to the second. timedatectl would also print the correct time for both UTC and CEST and state that the system clock is synchronized.

root@host:~# timedatectl
               Local time: Wed 2025-06-04 00:51:07 CEST
           Universal time: Tue 2025-06-03 22:51:07 UTC
                 RTC time: n/a
                Time zone: Europe/Berlin (CEST, +0200)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no

What the heck was going on?

Unbound leftovers

I googled "EDE: DNSSEC bogus" dnsmasq and found the solution in https://www.reddit.com/r/pihole/comments/zsrjzn/2_piholes_with_unbound_breaking_dns/.

Turns out I had forgotten to execute two critical steps.

  1. I didn't delete /etc/unbound/unbound.conf.d/resolvconf_resolvers.conf
  2. I didn't comment out the line starting with unbound_conf= in /etc/resolvconf.conf

Or the files came back when I updated that Raspberry Pi from Debian Bullseye to Bookworm today. Anyway, after performing these two steps and restarting Unbound, it now works flawlessly.

And I learned which files are not kept in sync by nebula-sync. 😉


Blocking Google's AI answers in search results with uBlock Origin

If you have uBlock Origin installed as an extension in your browser, there is a way to hide/block Google's AI answers.

  1. Click the uBlock Origin icon
  2. Click the button with the three gears in the lower right to open the dashboard
  3. Go to "My filters"
  4. Copy and paste the following into the field below: google.com##.hdzaWe
  5. Make sure the checkbox next to "Activate own filters" is set
  6. Hit "Apply changes" on top

Of course this only works until Google changes the name of the element, but when that happens there is usually a thread in https://www.reddit.com/r/uBlockOrigin/ with the new filter.

Or alternatively, just put a swear word somewhere in your Google search query. Then the AI answer is also absent. 😂

Instead of: programname error-message
Google for: programname fucking error-message

😆

The udm=14 parameter

And the "cheat code" for AI-free search on Google also still exists: just add the udm=14 parameter.

https://tedium.co/2024/05/17/google-web-search-make-default/ has all the details and https://tenbluelinks.org shows you how to achieve that for various browsers.
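As a small illustration (the query string is just an example), a search URL with the parameter appended can be built like this:

```shell
# Append udm=14 to a Google search URL to get the "web only" results view
QUERY="programname error-message"
# Replace spaces with + for the query string, then append the parameter
URL="https://www.google.com/search?q=${QUERY// /+}&udm=14"
echo "$URL"
```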

Enjoy!
