Feuerfest

Just the private blog of a Linux sysadmin

This blog is now officially not indexed on Google anymore - and I don't know why [Update]

Scroll down for the updated part.

If you do a search on Google specifically for this blog, the results come up empty. Zero posts found, zero content shown. Over the last few weeks fewer and fewer pages were shown, until we reached zero on December 28th, 2025.

And I have absolutely no clue about the reason. Only vague assumptions.

I do remember my site being definitely indexed and several results being shown. This was how I spotted lingering DNS entries from other people's long-gone web projects still pointing to the IP of my server - which I blogged about in "It's always DNS." in November 2024.

This led me to implement an HTTP rewrite rule that redirects requests for those domains to a simple txt-file asking people nicely to remove those old DNS entries. It can still be found here: https://admin.brennt.net/please-delete.txt

However, since December 19th, 2025 no pages have been indexed anymore.

HTTP-302, the first mistake?

And maybe this is where I made the first mistake that contributed to this whole situation: according to the Apache documentation, a redirect uses HTTP-302 "Found" as its default status code, not HTTP-301 "Moved Permanently". Hence I signaled to Google: "Hey, the content has only moved temporarily."

And this Apache configuration DOES send an HTTP-302:

# Rewrite for old, orphaned DNS records from other people..
RewriteEngine On
<If "%{HTTP_HOST} == 'berufungimzentrum.at'">
    RewriteRule "^(.*)$" "https://admin.brennt.net/please-delete.txt"
</If>
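
In hindsight, making the redirect explicitly permanent would have been the better signal. A minimal sketch of the same rule with explicit flags (R=301 for "Moved Permanently", L to stop further rewriting):

# Rewrite for old, orphaned DNS records from other people
RewriteEngine On
<If "%{HTTP_HOST} == 'berufungimzentrum.at'">
    RewriteRule "^(.*)$" "https://admin.brennt.net/please-delete.txt" [R=301,L]
</If>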

Anyone with some knowledge of SEO/SEM will tell you to avoid HTTP-302 so you don't get punished for "duplicate content". And yeah, I knew this too, once, years ago. But I didn't care much at the time and had forgotten about it.

And my strategy of rewriting all URLs for the old, orphaned domains to this txt-file led to a situation where the old domains were still seen as valid and the content indexed through them (my blog!) as valid content.

Then suddenly my own domain appeared too, while the old domains still worked and temporarily redirected all requests to another domain (admin.brennt.net, my blog). Hence I assume my domain is currently flagged for duplicate content or as some kind of "link farm" and therefore not indexed.

And I have no clue if this situation will resolve itself automatically or when.

A slow but steady decline

Back to the beginning. Around mid-October 2025 I grew curious how my blog shows up in various search engines. And I was somewhat surprised. Only around 20 entries were shown for my blog. Why? I had no clue. While I could understand that the few posts which garnered some interest and were shared on various platforms were listed first, this didn't explain why so few pages were shown overall.

I started digging.

The world of search engines - or: Who maintains an index of their own

The real treasure trove which defines what a search engine can show you in the results is its index. Only if a site is included in that index is it known to the search engine. Everything else is treated as if it doesn't exist.

However, not every search engine maintains its own index. https://seirdy.one/posts/2021/03/10/search-engines-with-own-indexes/ has a good list of search engines and whether they really maintain an autonomous index or not. Based on this I tested a few search engines against this blog, using only the following search parameter: site:admin.brennt.net

Search Engine   Result
Google          No results
Bing            Results found
DuckDuckGo      Results found
Ecosia          Results found if using Bing, no results with Google
Brave           Results found
Yandex          Results found

Every single search engine other than Google properly indexes my blog. Some have recent posts, some are lagging behind by a few weeks. That, however, is fine and simply depends on the crawl and index update cycles of the respective search engine operator.

My webserver logs also prove this to be true. Zero visitors with a referrer from Google, but a small and steady number from Bing, DuckDuckGo and others.
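
If you want to verify this yourself, the referrer field in the access log is enough. A rough sketch, assuming Apache's default combined log format and the usual Debian log path (adjust both to your setup):

# count hits that carry a Google referrer vs. a Bing referrer
awk -F'"' '{print $4}' /var/log/apache2/access.log | grep -ci 'google\.'
awk -F'"' '{print $4}' /var/log/apache2/access.log | grep -ci 'bing\.'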

So why does only Google have a problem with my site?

Can we get insights with the Google Search Console?

I went to the Google Search Console and verified admin.brennt.net as my domain. Now I was able to have a deep dive into what Google reported about my blog.

robots.txt

My first assumption was that the robots.txt was somehow awry, but given how basic my robots.txt is I was dumbfounded as to where it could be wrong. "Maybe I missed some crucial technical development?" was the best guess I had. No - a quick search revealed that nothing has changed regarding robots.txt, and Google says my robots.txt is fine.

Just for the record, this is my robots.txt. As plain, boring and simple as it can be.

User-agent: *
Allow: /
Sitemap: https://admin.brennt.net/sitemap.xml

Inside the VirtualHost for my blog I use the following rewrite to allow both HTTP and HTTPS requests for robots.txt to succeed, as normally all HTTP requests are redirected to HTTPS. The Search Console, however, complained about an error with the HTTP version of the robots.txt.

RewriteEngine On
# Do not rewrite HTTP-Requests to robots.txt
<If "%{HTTP_HOST} == 'admin.brennt.net' && %{REQUEST_URI} != '/robots.txt'">
    RewriteRule "(.*)"      "https://%{HTTP_HOST}%{REQUEST_URI}" [R=301] [L]
</If>
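
A quick way to check what crawlers actually receive is to compare the plain-HTTP responses for robots.txt and any other page - given the rule above, the former should answer directly with a 200 while the latter should return the 301 redirect to HTTPS:

curl -I http://admin.brennt.net/robots.txt
curl -I http://admin.brennt.net/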

But this is just housekeeping, as that technical situation was already present when my blog was properly indexed. If anything, this should lead to my blog being ranked or indexed better - not vanishing.

Are security or other issues the problem?

The Search Console has the "Security & Manual Actions" menu. Under it are the two reports about security issues and issues requiring manual action.

Again, no. Everything is fine.

Is my sitemap or RSS-Feed to blame?

I read about people claiming that Google accidentally read their RSS feed as a sitemap, and that removing the link to the RSS feed from their sitemap.xml did the trick. While a good point, my RSS feed https://admin.brennt.net/rss.xml isn't listed in my sitemap. Uploading the sitemap in the Search Console also showed no problems - neither in December 2025 nor in January 2026.
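
For reference, a sitemap is just a plain XML list of URLs. A purely illustrative, minimal example of the format (not my actual sitemap, and the post URL is made up) - note that the RSS feed simply isn't listed as an entry:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <!-- one <url> entry per page that should be indexed -->
    <url>
        <loc>https://admin.brennt.net/</loc>
    </url>
    <url>
        <loc>https://admin.brennt.net/sitemap-example-post</loc>
    </url>
</urlset>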

The Search Console even successfully picked up newly created articles.

However even that doesn't guarantee that Google will index your site. It's just one minor technical detail checked for validity.

"Crawled - currently not indexed" the bane of Google

Even the "Why pages aren't indexed" report didn't provide much insight. Yes, some links deliver a 404. Perfectly fine, I deleted some tags, hence those links now end in a 404. And the admin login page is marked with noindex. Also as it should be.

The "Duplicate without user-selected canoncial" took me a while to understand, but it boils down to this: Bludit has categories and tags. If you click on such a category-/tag-link you will be redirected to an automatically generated page showing all posts in that category/with that tag. However, the Bludit canonical-plugin currently doesn't generate these links for category or tag views. Hence I fixed it myself.

Depending on how I label my content, some of these automatically generated pages can look identical, i.e. the page for category A can show exactly the same posts as the page for tag B. What you then have to do is define a canonical link in the HTML source code, so these pages can be properly distinguished and Google is told that, yes, the same content is available under different URLs and this is fine (this is especially a problem for bigger sites that have been online for many years).
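
For illustration, such a canonical link is a single tag in the page's <head>; the tag URL below is just an example of what a Bludit tag page could declare:

<head>
    <!-- tell crawlers which URL is the authoritative version of this content -->
    <link rel="canonical" href="https://admin.brennt.net/tag/linux">
</head>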

But none of this properly explains why my site isn't indexed anymore. Most importantly: all these issues were already present when my blog was still indexed. Google explains the various reasons on its "Page indexing report" help page.

There we learn that "Crawled - currently not indexed" means:

The page was crawled by Google but not indexed. It may or may not be indexed in the future; no need to resubmit this URL for crawling.
Source: https://support.google.com/webmasters/answer/7440203#crawled

And that was the moment I hit a wall. Google is crawling my blog. Nothing technical prevents Google from doing so. The overall structure is fine, and no security issues or other problems (like phishing or other fraudulent activities) exist under this domain.

So why isn't Google showing my blog!? Especially since all other search engines don't seem to have an issue with my site.

Requesting validation

It also didn't help that Google itself had technical issues which prevented the page indexing report from being up-to-date. Instead I had to work with old data at that time.

I submitted my sitemap and hoped that this would fix the issue. Alas, it didn't. While the sitemap was retrieved and processed almost instantly and showed a success status along with the info that it discovered 124 pages... none of them were added.

I requested a re-validation and the search console told me it can take a while (they stated around 2 weeks, but mine took more like 3-4 weeks).

Fast-forward to January 17th 2026 and the result was in: "Validation failed"

What this means is explained here: https://support.google.com/webmasters/answer/7440203#validation

The reason is not really explained, but the following sentence stuck with me:

You should prioritize fixing issues that are in validation state "failed" or "not started" and source "Website".

So I went to fix the "Duplicate without user-selected canonical" problem, as all the others (404, noindex) are either not problems or there intentionally. Leaving those alone shouldn't matter, as Google itself writes the following regarding validation:

It might not always make sense to fix and validate a specific issue on your website: for example, URLs blocked by robots.txt are probably intentionally blocked. Use your judgment when deciding whether to address a given issue.

With that being fixed, I requested another validation in mid-January 2026. And now I have to wait another 2-8 weeks. *sigh*

A sudden realization

And then it hit me. I vaguely remembered that Google scores pages based on links from other sites, following the mantra: "If it's good quality content, people will link to it naturally." My blog isn't big. I don't have that many backlinks. And I don't do SEO/SEM.

But my blog was available under at least one domain which had a bit of backlink traffic - traffic I can still see in my webserver logs today!

Remember how my blog's content was at first also reachable under this domain? Yep. My content was indexed under this domain. Then I changed the webserver config so that this is no longer the case. Now I send a proper HTTP-410 "Gone". With that... did I nuke all the (borrowed) reputation my blog possessed?

If that is the case (I have yet to dig into this topic), the question will likely be: how does Google treat a site whose backlinks vanished - or, more technically, how do you properly move content from one domain to another? And is there anything I can do afterwards to fix this, if I did it wrong?
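
For reference, the HTTP-410 mentioned above is what mod_rewrite's G (gone) flag produces. A minimal sketch of how such a rule can look for one of the orphaned domains:

# Old, orphaned domain: tell crawlers this content is gone for good
<If "%{HTTP_HOST} == 'berufungimzentrum.at'">
    RewriteRule "^" "-" [G]
</If>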

Anything else?

If you are familiar with the topic and have an idea what I can check, feel free to comment, as I'm currently at my wits' end.

Update: Passed the validation, still not indexed

I just noticed that I passed the "Duplicate without user-selected canonical" check a good week ago, after doing the modifications in Adding canonical links for category and tag pages in Bludit 3.16.2.

Still nothing changed regarding the indexing.


Webinars & data gathering

Someone I follow on LinkedIn announced a webinar regarding OPNsense and a paid plugin. As I currently use OPNsense a little in my home lab, I was mildly interested. Unfortunately, the webinar was scheduled to take place during a customer meeting. When I asked about a recording of the webinar, I was told that:

"Yes, there will be a recording. The link will be send to all participants after the meeting. So just register and you are fine."

Cool, I thought.

Yeah, but the mandatory fields are:

  • Full name
  • Phone number
  • Email address
  • Company
  • Position
  • Country
  • Postcode

I know that this is for gathering contacts, or 'leads' in sales terms. After all, it's a paid OPNsense plugin. I am also familiar with services such as "Frank geht ran" (Frank takes the call), which is operated by the data privacy NGO Digital Courage. They provide two numbers: One is a mobile number and the other is a landline number. If you call, a recorded message informs the caller that the person they are calling does not wish to receive any more telephone calls.

But... I just couldn't be bothered. I could have provided a disposable email address with a fake company name and a "Frank geht ran" phone number. Or I could have saved myself all the trouble and ignored it. Which is what I did.


Disabling the accuweather feature in Firefox

Mozilla incorporated yet another feature nobody asked for. And of course it's turned on by default. Screw you, Mozilla!

Some people seem to have had this feature for weeks, as it's being rolled out gradually; I got it today. Now whenever I type a city name into the address bar I get a small window from AccuWeather showing me the current temperature. And from what I read online, even location data is shared!? What the heck, Mozilla?

Naturally my immediate action was to disable this bullshit.

Open about:config and then change the following values:

browser.urlbar.weather.featureGate = false
browser.newtabpage.activity-stream.feeds.weatherfeed = false
browser.newtabpage.activity-stream.showWeather = false
browser.newtabpage.activity-stream.system.showWeatherOptIn = false
browser.newtabpage.activity-stream.weather.locationSearchEnabled = false

If you want to see all parameters associated with this feature, search for: browser.newtabpage.activity-stream.*weather
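
If you already manage your Firefox profile with a user.js file, the same prefs can be pinned there so they are re-applied on every start (a sketch, only useful if you use user.js anyway):

// disable the weather suggestions in the address bar and on the new tab page
user_pref("browser.urlbar.weather.featureGate", false);
user_pref("browser.newtabpage.activity-stream.feeds.weatherfeed", false);
user_pref("browser.newtabpage.activity-stream.showWeather", false);
user_pref("browser.newtabpage.activity-stream.system.showWeatherOptIn", false);
user_pref("browser.newtabpage.activity-stream.weather.locationSearchEnabled", false);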

Sources:


Fix keepalived error: bind unicast_src - 99 cannot assign requested address

TL;DR: The configured unicast_src IP isn't present on any network interface. In my case DHCPv6 was to blame.

I accidentally unplugged the power cable from my RaspberryPi 4 today. Due to this I learned a few things.

  1. First, that my home DSL router (a FritzBox) doesn't always honor the preferred IPv4/v6 addresses sent in DHCP requests
    • /etc/dhcpcd.conf did contain static ip_address=... and static ip6_address=...
  2. The FritzBox can't set DHCP reservations for IPv6 addresses - only IPv4 - WHY!?
  3. I have to read keepalived error messages while actually using my brain
    • I stumbled across the cannot assign requested address part, thought of DHCP and was confused why the hell keepalived does DHCP things (the word requested misled me)
    • In the following line the reason is written in plain text... entering FAULT state (src address not configured)
  4. Static IP configuration for servers was, is and always will be the best
  5. A mixed static & dynamic IPv6 configuration isn't hard at all once you read a bit about SLAAC

Long story short, this was the keepalived error I got. The VRRP-Instance immediately went into FAULT state and stayed there.

root@raspi:~# systemctl status keepalived.service
[...]
Feb 05 13:14:22 raspi Keepalived_vrrp[1279]: Delaying startup for 5 seconds
Feb 05 13:14:22 raspi Keepalived[1278]: Startup complete
Feb 05 13:14:22 raspi systemd[1]: Started keepalived.service - Keepalive Daemon (LVS and VRRP).
Feb 05 13:14:22 raspi Keepalived_vrrp[1279]: bind unicast_src fd87:f53:25b4:0:231d:4cbb:bca7:10 failed 99 - Cannot assign requested address
Feb 05 13:14:22 raspi Keepalived_vrrp[1279]: (VI_2): entering FAULT state (src address not configured)
Feb 05 13:14:22 raspi Keepalived_vrrp[1279]: (VI_2) Entering FAULT STATE
Feb 05 13:14:22 raspi Keepalived_vrrp[1279]: VRRP_Group(ALL) Syncing instances to FAULT state

At first I skipped the following line:

Feb 05 13:14:22 raspi Keepalived_vrrp[1279]: (VI_2): entering FAULT state (src address not configured)

Hence I searched a bit and found an older GitHub issue where this problem was explained by VRRP trying to do its thing too fast, while the interface wasn't ready yet. The solution mentioned in keepalived issue #2237: Keepalived entering fault state on reboot was to set vrrp_startup_delay inside the global_defs section of /etc/keepalived/keepalived.conf. However, this was already present in my case.
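
For context, this is roughly where both of these settings live in /etc/keepalived/keepalived.conf. A trimmed-down sketch with placeholder values (peer address, VIP and router ID are made up), not my full config:

global_defs {
    # wait a bit after boot before starting VRRP, so interfaces are ready
    vrrp_startup_delay 10
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 100
    # source address for unicast VRRP packets - must already exist on an
    # interface, otherwise the instance enters FAULT state
    unicast_src_ip fd87:f53:25b4:0:231d:4cbb:bca7:10
    unicast_peer {
        fd87:f53:25b4:0:231d:4cbb:bca7:11
    }
    virtual_ipaddress {
        fd87:f53:25b4::100/64
    }
}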

Yeah, it turns out the configured unicast_src IP wasn't present on any interface, as the FritzBox deemed it fit to assign a random one from the configured DHCP range. We can verify this quickly by grep'ing for the IPv6 address.

root@raspi:~ # ip -6 a | grep fd87:f53:25b4:0:231d:4cbb:bca7:10
root@raspi:~ #

The solution

In my case I finally switched to a mixed static and dynamic IPv6 setup: configuring the local ULA address statically, but still receiving and applying router advertisements (RAs) to get a global IPv6 address, so my RaspberryPi can still connect to the Internet.

Then it showed up on the interface.

root@raspi:~ # ip -6 a | grep fd87:f53:25b4:0:231d:4cbb:bca7:10
    inet6 fd87:f53:25b4:0:231d:4cbb:bca7:10/64 scope global
root@raspi:~ #

Another viable solution would of course be to just reboot the RaspberryPi and hope your DHCP server now assigns the correct IP. However, my FritzBox only allows setting IPv4 reservations in the DHCP settings; IPv6 addresses can't be used for DHCP reservations at all. So this was no solution for me.

If you want to know how to configure a mixed static and dynamic IPv6 setup, read here: Configuring a mixed IPv6 setup - static ULA, dynamic GLA


Configuring a mixed IPv6 setup - static ULA, dynamic GLA

In Fix keepalived error: bind unicast_src - 99 cannot assign requested address I mentioned that I fixed my problem with a mixed static & dynamic IPv6 setup. Here is how I did it.

Status quo

For a few years I followed the Raspbian recommendation to use DHCP to assign the static IP. And it worked - until it didn't. This was my config. Note that I didn't use a fallback profile. I like to notice when DHCP doesn't work.

root@raspi:~# cat /etc/dhcpcd.conf
[...]
interface eth0
        static ip_address=192.168.1.10/24
        static ip6_address=fd87:f53:25b4:0:231d:4cbb:bca7:10/64
        static routers=192.168.1.1
        static domain_name_servers=127.0.0.1 ::1

# It is possible to fall back to a static IP if DHCP fails:
# define static profile
#profile static_eth0
#static ip_address=192.168.1.23/24
#static routers=192.168.1.1
#static domain_name_servers=192.168.1.1

# fallback to static profile on eth0
#interface eth0
#fallback static_eth0

Due to an accidental power loss my RaspberryPi rebooted and got a new IPv4 and IPv6, totally different from the configured ones.

The changed IPv4 was easily explained: I had forgotten to set a DHCP reservation for the MAC address in my DSL router. I suspected that I had also forgotten this for IPv6 - only to notice that my FritzBox 7530 doesn't allow adding IP/MAC reservations for IPv6 at all. Only IPv4 addresses are supported.

And that was the moment I had enough and decided to ditch DHCP altogether.

For IPv4 this was easy enough.

root@raspi:~# cat /etc/network/interfaces.d/ipv4
auto eth0
iface eth0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1

However for IPv6 it took me a few minutes. Specify address and netmask, done. Right?

Well, no. Internet access wasn't working. A quick check revealed that the GLA address was missing.

root@raspi:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether aa:aa:aa:aa:aa:aa brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.10/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fd87:f53:25b4:0:231d:4cbb:bca7:10/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::aaaa:bbbb:cccc:dddd/64 scope link
       valid_lft forever preferred_lft forever

Hosts in my LAN were perfectly reachable. A ping to a public IPv6 address didn't succeed.

root@raspi:~# ping6 google.de
PING google.de(lcmuca-ah-in-x03.1e100.net (2a00:1450:4016:803::2003)) 56 data bytes
^C
--- 2a00:1450:4016:803::2003 ping statistics ---
13 packets transmitted, 0 received, 100% packet loss, time 12441ms

Turns out that when you configure a static Unique Local Address (ULA) - the IPv6 equivalent of our beloved RFC1918 IPv4 ranges (192.168.0.0/16, etc.) - Linux no longer listens to Router Advertisements (RAs) on that interface. Hence no Global Link Address (GLA).

The small but important details are to set autoconf 1 and accept_ra 2 for the interface. This is also documented in the Debian Wiki. With that knowledge I changed my config. Defining the ULA IPv6 statically instead of relying on DHCP also has other stability advantages, as I run some services on keepalived VIPs.

root@raspi:~# cat /etc/network/interfaces.d/ipv6
# IPv6
auto eth0
iface eth0 inet6 static
        address fd87:f53:25b4:0:231d:4cbb:bca7:10
        netmask 64
        # Mixing static and dynamic IPv6
        # from: https://wiki.debian.org/NetworkConfiguration
        # use SLAAC to get global IPv6 address from the router
        # we must not enable ipv6 forwarding, otherwise SLAAC gets disabled
        #
        # Automatically create IPv6 addresses based on Router Advertisements (RA)
        autoconf 1
        # Always accept RAs, even if a static IPv6 address is configured
        # as normally Linux doesn't listen to RAs anymore when a static IPv6 is assigned
        accept_ra 2
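
After bringing eth0 up again, you can check whether ifupdown actually applied the two options to the kernel - if everything worked, the sysctls should report the values from the interfaces file (2 and 1):

sysctl net.ipv6.conf.eth0.accept_ra net.ipv6.conf.eth0.autoconf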

Disabling DHCP

And don't forget to disable the DHCP service.

root@raspi:~# systemctl stop dhcpcd.service
root@raspi:~# systemctl disable dhcpcd.service
Synchronizing state of dhcpcd.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable dhcpcd
Removed "/etc/systemd/system/dhcpcd5.service".
Removed "/etc/systemd/system/multi-user.target.wants/dhcpcd.service".

After all these years?

Once again I am left wondering why I had this problem for the first time in 2026. After all IPv6 is 25 years old..


Datenschutzverständnis

Whenever I have to explain to people why data privacy is handled so oddly in practice and often somehow misses the point, I explain it using the understanding of discretion in a doctor's office.

There, the rule is also "Please keep your distance for reasons of discretion". Which achieves exactly nothing when the reception desk sits in the middle of the room, or the staff at the front desk talk so loudly that you can hear everything anyway.
