Feuerfest

Just the private blog of a Linux sysadmin

Bye bye Brave Browser

mthie wrote a short blog post in which he linked to the following XDA Developers article: https://www.xda-developers.com/brave-most-overrated-browser-dont-recommend/.

For me, personally, this article was a real eye-opener. I only knew about roughly half of the things that Brave has done over the years. I was especially unaware of the "Accepting donations for creators - who don't know anything about the Brave program" or the Honey-like "Let's replace referral codes in links and cookies with our own to steal the commission".

If a company pulls such stunts, it is not acting in good faith. Not acting in good faith makes you a bad actor.

Do you really want to use software from a bad actor? Let alone have it installed on your mobile phone? It's the most private piece of technical equipment we own these days.

Yeah, me neither. Hence Brave has been officially uninstalled from all my devices as of 20 minutes ago.

And I doubt I will miss it much, as Firefox on Android has officially supported all extensions since late 2023/early 2024. Therefore extensions like uBlock Origin and uMatrix work perfectly fine. The fact that these weren't supported back then was the main reason for choosing Brave in the first place.

Now that this reason is gone, so too is the crypto- and scam-infested Brave browser.

Comments

Pi-hole, IPv6 and NTP - How to fix: "No valid NTP replies received, check server and network connectivity"

The following log message would only sporadically be logged on my Pi-hole. Not every hour, and not even every day. Just... sometimes. When the stars aligned... When, 52 years ago, Monday fell on a full moon and a 12th-generation carpenter was born... You get the idea. 😄

The error message was:

"No valid NTP replies received, check server and network connectivity"

Strange. NTP works, even if Pi-hole sometimes fancies otherwise.

Inspecting the Pi-hole configuration

pihole-FTL returned the following NTP configuration:

user@host:~$ pihole-FTL --config ntp
ntp.ipv4.active = true
ntp.ipv4.address =
ntp.ipv6.active = true
ntp.ipv6.address =
ntp.sync.active = true
ntp.sync.server = 1.de.pool.ntp.org
ntp.sync.interval = 3600
ntp.sync.count = 8
ntp.sync.rtc.set = false
ntp.sync.rtc.device =
ntp.sync.rtc.utc = true

That looked good to me.

It was here that I became suspicious: wait, does the NTP Pool Project even offer IPv6? I have never knowingly used public NTP pools over IPv6. In customer networks, NTP servers are usually only reachable via IPv4, and I don't run an NTP server in my home network. Sadly, many services are still not IPv6-ready.

Some companies even remove IPv6 support, like DigiCert (a commercial certificate authority!), which dropped IPv6 support when it switched to a new CDN provider. This left me speechless. Read https://knowledge.digicert.com/alerts/digicert-certificate-status-ip-address if you want to know more.

NTP & IPv6? Only with pools that start with a 2

A short search for IPv6 support in NTP pools and https://www.ntppool.org/en/use.html provided the answer:

Please also note that the system currently only provides IPv6 addresses for a zone in addition to IPv4 addresses if the zone name is prefixed by the number 2, e.g. 2.pool.ntp.org (provided there are any IPv6 NTP servers in the respective zone). Zone names not prefixed by a number, or prefixed with any of 0, 1 or 3, currently provide IPv4 addresses only.

It turns out that the problem lies in my dual-stack setup, since I use IPv4 and IPv6 in parallel. Or rather... it lies with the NTP pools. I checked with dig whether any AAAA records were returned for 1.de.pool.ntp.org, the pool I was using.

dig aaaa 1.de.pool.ntp.org returns no AAAA records.

user@host:~$ dig aaaa 1.de.pool.ntp.org

; <<>> DiG 9.18.33-1~deb12u2-Debian <<>> aaaa 1.de.pool.ntp.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43230
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; EDE: 3 (Stale Answer)
;; QUESTION SECTION:
;1.de.pool.ntp.org.             IN      AAAA

;; AUTHORITY SECTION:
pool.ntp.org.           0       IN      SOA     d.ntpns.org. hostmaster.pool.ntp.org. 1749216969 5400 5400 1209600 3600

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1) (UDP)
;; WHEN: Fri Jun 06 16:10:31 CEST 2025
;; MSG SIZE  rcvd: 134

And sure enough, a dig aaaa 2.de.pool.ntp.org returns AAAA records.

user@host:~$ dig aaaa 2.de.pool.ntp.org

; <<>> DiG 9.18.33-1~deb12u2-Debian <<>> aaaa 2.de.pool.ntp.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47906
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;2.de.pool.ntp.org.             IN      AAAA

;; ANSWER SECTION:
2.de.pool.ntp.org.      130     IN      AAAA    2a0f:85c1:b73:62:123:123:123:123
2.de.pool.ntp.org.      130     IN      AAAA    2a01:239:2a6:d500::1
2.de.pool.ntp.org.      130     IN      AAAA    2606:4700:f1::1
2.de.pool.ntp.org.      130     IN      AAAA    2a01:4f8:141:282::5:1

;; Query time: 656 msec
;; SERVER: 127.0.0.1#53(127.0.0.1) (UDP)
;; WHEN: Fri Jun 06 16:33:32 CEST 2025
;; MSG SIZE  rcvd: 158

My new Pi-hole configuration

The fix was easy: just configure 2.de.pool.ntp.org instead of 1.de.pool.ntp.org. Done.

user@host:~$ pihole-FTL --config ntp
ntp.ipv4.active = true
ntp.ipv4.address =
ntp.ipv6.active = true
ntp.ipv6.address =
ntp.sync.active = true
ntp.sync.server = 2.de.pool.ntp.org
ntp.sync.interval = 3600
ntp.sync.count = 8
ntp.sync.rtc.set = false
ntp.sync.rtc.device =
ntp.sync.rtc.utc = true

My Pi-hole instances haven't been running long enough yet to really verify that the error is gone, but I suspect it is.

Comments

How to fix Pi-hole FTL error: EDE: DNSSEC bogus

If you are instead searching for an explanation of the error code have a look at RFC 8914.

I noticed that the DNS resolution on my secondary Pi-hole instance wasn't working. host wouldn't resolve a single DNS name. As /etc/resolv.conf contained only the DNS servers running on localhost (127.0.0.1 and ::1), DNS resolution didn't work at all. Naturally, I started looking at the Pi-hole logfiles.

/var/log/pihole/pihole.log logged this for every domain:

Jun  4 00:02:54 dnsmasq[4323]: query 1.de.pool.ntp.org from 127.0.0.1
Jun  4 00:02:54 dnsmasq[4323]: forwarded 1.de.pool.ntp.org to 127.0.0.1#5335
Jun  4 00:02:54 dnsmasq[4323]: forwarded 1.de.pool.ntp.org to ::1#5335
Jun  4 00:02:54 dnsmasq[4323]: validation 1.de.pool.ntp.org is BOGUS
Jun  4 00:02:54 dnsmasq[4323]: reply error is SERVFAIL (EDE: DNSSEC bogus)

Ok, that was a first hint. I checked /var/log/pihole/FTL.log, where this message was repeated over and over:

2025-06-03 00:02:52.505 CEST [841/T22762] ERROR: Error NTP client: Cannot resolve NTP server address: Try again
2025-06-03 00:02:52.509 CEST [841/T22762] INFO: Local time is too inaccurate, retrying in 600 seconds before launching NTP server

NTP is not the culprit

I checked the local time and it matched the time on the primary Pi-hole instance. Strange. I even opened https://uhr.ptb.de/, which shows the official time for Germany (yes, by law), and it matched to the second. timedatectl also printed the correct time for both UTC and CEST and stated that the system clock is synchronized.

root@host:~# timedatectl
               Local time: Wed 2025-06-04 00:51:07 CEST
           Universal time: Tue 2025-06-03 22:51:07 UTC
                 RTC time: n/a
                Time zone: Europe/Berlin (CEST, +0200)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no

What the heck was going on?

Unbound leftovers

I googled "EDE: DNSSEC bogus" dnsmasq and found the solution in https://www.reddit.com/r/pihole/comments/zsrjzn/2_piholes_with_unbound_breaking_dns/.

Turns out I forgot to execute two critical steps.

  1. I didn't delete /etc/unbound/unbound.conf.d/resolvconf_resolvers.conf
  2. I didn't comment out the line starting with unbound_conf= in /etc/resolvconf.conf

Or they came back when I upgraded that Raspberry Pi from Debian Bullseye to Bookworm today. Anyway, after performing these two steps and restarting Unbound, everything works flawlessly again.

And I learned which files are not kept in sync by nebula-sync. 😉

Comments

Blocking Google's AI answers in search results with uBlock Origin

If you have uBlock Origin installed as an extension in your browser, there is a way to hide/block Google's AI answers.

  1. Click the uBlock Origin icon
  2. Click the button with the three gears in the lower right (this opens the dashboard)
  3. Go to "My filters"
  4. Copy and paste the following into the field below: google.com##.hdzaWe
  5. Make sure the checkbox next to "Activate own filters" is set
  6. Hit "Apply changes" on top

Of course this only works until Google renames the element, but when that happens there is usually a thread in https://www.reddit.com/r/uBlockOrigin/ with the new filter.

Or alternatively, just put a swear word somewhere into your Google search query. Then the AI answer is absent, too. 😂

Instead of: programname error-message
Google for: programname fucking error-message

😆

The udm=14 parameter

And the "cheat code" for AI free search with Google also still exists. Just add the udm=14 parameter.

https://tedium.co/2024/05/17/google-web-search-make-default/ has all the details and https://tenbluelinks.org shows you how to achieve that for various browsers.

Enjoy!

Comments

PagerDuty finally fixed Rundeck's Java requirements

In my rant article Get the damn memo already: Java11 reached end-of-life years ago I used PagerDuty's product Rundeck as an example of heavily outdated software requirements, which made it impossible to install on many modern Linux distributions.

Looks like they finally got their act together:

As of version 5.10.0 the Self Hosted RBA and Rundeck Open Source support either Java 11 or Java 17 runtime (JRE).
Building from source still requires Java 11 (JDK).

https://docs.rundeck.com/docs/administration/install/system-requirements.html#java

In fact, you want at least version 5.11.0, as version 5.10.0 still required Java 11 in the RPM dependencies, according to this GitHub issue: https://github.com/rundeck/rundeck/issues/8917

Finally!

Let's just hope they already have plans for Java 21 or even Java 25, as Oracle's support for Java 21 already ended in September 2024. 😜

Comments

Using nebula-sync to synchronize your Pi-hole instances

As I run two Pi-hole instances, I have the problem of keeping them in sync. I want all local DNS entries to be present on both, and I also want the same filter lists, exceptions, etc.

Enter nebula-sync. After Pi-hole released version 6 and made huge changes to the architecture and API, the widely used gravity-sync stopped working and was archived.

As nebula-sync can be run as a Docker image, the setup is fairly easy.

I just logged into my Portainer instance and spun up a new stack with the following YAML. I disable the gravity run after syncing, as this currently breaks the replica: the webserver process is killed and doesn't come back. Hence no connections to port 80 (HTTP) or 443 (HTTPS) are possible, and therefore the API can't be reached either.

This is currently being investigated in the following issue: Pi-hole FTL issue #2395: FTL takes 5 minutes to reboot?

---
services:
  nebula-sync:
    image: ghcr.io/lovelaze/nebula-sync:latest
    container_name: nebula-sync
    environment:
    - PRIMARY=http://ip1.ip1.ip1.ip1|password1
    - REPLICAS=http://ip2.ip2.ip2.ip2|password2
    - FULL_SYNC=true
    - RUN_GRAVITY=false   # running Gravity after sync breaks the replica, see: https://github.com/pi-hole/FTL/issues/2395
    - CRON=*/15 * * * *
    - TZ=Europe/Berlin

And if everything works we see the following:

2025-05-26T16:01:24Z INF Starting nebula-sync v0.11.0
2025-05-26T16:01:24Z INF Running sync mode=full replicas=1
2025-05-26T16:01:24Z INF Authenticating clients...
2025-05-26T16:01:25Z INF Syncing teleporters...
2025-05-26T16:01:25Z INF Syncing configs...
2025-05-26T16:01:25Z INF Running gravity...
2025-05-26T16:01:25Z INF Invalidating sessions...
2025-05-26T16:01:25Z INF Sync completed

When I still had RUN_GRAVITY=true in my stack, I would always see the first sync succeed. That run would however kill the webserver - hence the API wasn't reachable any more and the nebula-sync container would only log the following error messages:

2025-05-26T16:34:29Z INF Starting nebula-sync v0.11.0
2025-05-26T16:34:29Z INF Running sync mode=full replicas=1
2025-05-26T16:34:29Z INF Authenticating clients...
2025-05-26T16:34:31Z INF Syncing teleporters...
2025-05-26T16:34:31Z INF Syncing configs...
2025-05-26T16:34:31Z INF Running gravity...
2025-05-26T16:34:31Z INF Invalidating sessions...
2025-05-26T16:34:31Z INF Sync completed

2025-05-26T16:35:00Z INF Running sync mode=full replicas=1
2025-05-26T16:35:00Z INF Authenticating clients...
2025-05-26T16:35:02Z INF Invalidating sessions...
2025-05-26T16:35:04Z WRN Failed to invalidate session for target: http://ip2.ip2.ip2.ip2
2025-05-26T16:35:04Z ERR Sync failed error="authenticate: http://ip2.ip2.ip2.ip2/api/auth: Post \"http://ip2.ip2.ip2.ip2/api/auth\": dial tcp ip2.ip2.ip2.ip2:80: connect: connection refused"

Comments