Feuerfest

Just the private blog of a Linux sysadmin

This blog is now officially not indexed on Google anymore - and I don't know why

If you do a search on Google specifically for this blog, it comes up empty. Zero posts found, zero content shown. Over the last weeks less and less content was shown, until we reached point zero on December 28th, 2025.

And I have absolutely no clue about the reason. Only vague assumptions.

I do remember my site definitely being indexed and several results being shown. This was how I spotted lingering DNS entries from other people's long-gone web projects still pointing to the IP of my server - which I blogged about in "It's always DNS." in November 2024.

This led me to implement an HTTP rewrite rule redirecting requests for those domains to a simple txt-file, asking people in a nice way to remove those old DNS entries. It can still be found here: https://admin.brennt.net/please-delete.txt

However, since December 19th, 2025 no pages have been indexed anymore.

HTTP-302, the first mistake?

And maybe here I made the first mistake which perhaps contributed to this whole situation. According to the Apache documentation, the redirect flag uses HTTP-302 "Found" as the default status code - not HTTP-301 "Moved permanently". Hence I signaled Google: "Hey, the content only moved temporarily".

And this Apache configuration DOES send an HTTP-302:

# Rewrite for old, orphaned DNS records from other people..
RewriteEngine On
<If "%{HTTP_HOST} == 'berufungimzentrum.at'">
    RewriteRule "^(.*)$" "https://admin.brennt.net/please-delete.txt"
</If>
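
In hindsight, the permanent variant would have needed an explicit R flag. A minimal sketch (hypothetical - I did not run this config) of what the 301 version of the same rule would look like:

```apache
# Signal a permanent move instead of the implicit HTTP-302
RewriteEngine On
<If "%{HTTP_HOST} == 'berufungimzentrum.at'">
    RewriteRule "^(.*)$" "https://admin.brennt.net/please-delete.txt" [R=301,L]
</If>
```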

Anyone with some knowledge of SEO/SEM will tell you to avoid HTTP-302 in order not to be punished for "duplicate content". And yeah, I knew this too, once. Sometime, years ago. But I didn't care too much and had forgotten about it.

And my strategy of rewriting all URLs for the old, orphaned domains to this txt-file led to a situation where the old domains were still seen as valid, and the content indexed through them (my blog!) was treated as valid content.

Then suddenly my domain appeared too, but the old domains were still working and all requests were temporarily redirected to some other domain (admin.brennt.net - my blog..). Hence I assume that my domain is currently flagged for duplicate content, or as some kind of "link farm", and therefore not indexed.

And I have no clue if this situation will resolve itself automatically or when.

A slow but steady decline

Back to the beginning. Around mid-October 2025 I grew curious how my blog shows up in various search engines. And I was somewhat surprised: only around 20 entries were shown for my blog. Why? I had no clue. While I could understand that the few posts which garnered some interest and were shared on various platforms were listed first, this didn't explain why only so few pages were shown.

I started digging.

The world of search engines - or: Who maintains an index of their own

The real treasure trove which defines what a search engine can show you in the results is its index. Only if a site is included in its index is it known to the search engine. Everything else is treated as if it doesn't exist.

However, not every search engine maintains its own index. https://seirdy.one/posts/2021/03/10/search-engines-with-own-indexes/ has a good list of search engines and whether they are really autonomous in maintaining their own index. Based on this I did a few tests with search engines for this blog. I solely used the following search parameter: site:admin.brennt.net

Search Engine   Result
Google          No results
Bing            Results found
DuckDuckGo      Results found
Ecosia          Results found if using Bing, no results with Google
Brave           Results found
Yandex          Results found

Every single search engine other than Google properly indexes my blog. Some have recent posts, some are lagging a few weeks behind. This, however, is fine and depends solely on the crawler and index update cycles of the search engine operator.

My webserver logs also prove this to be true: zero visitors with a referrer from Google, but a small and steady number from Bing, DuckDuckGo and others.
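
For the curious: a small sketch of how such a referrer tally can be pulled from an Apache combined-format access log. The log path and the referrer patterns are assumptions for illustration, not my exact setup.

```shell
# Tally visits per search-engine referrer from an Apache "combined" log.
# In the combined format, the referrer is the 4th double-quoted field.
count_engine_referrers() {
  awk -F'"' '{
    ref = $4
    if      (ref ~ /google\./)     n["Google"]++
    else if (ref ~ /bing\./)       n["Bing"]++
    else if (ref ~ /duckduckgo\./) n["DuckDuckGo"]++
  }
  END { for (e in n) print e, n[e] }' "$1"
}

# Example: count_engine_referrers /var/log/apache2/access.log
```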

So why does only Google have a problem with my site?

Can we get insights with the Google Search Console?

I went to the Google Search Console and verified admin.brennt.net as my domain. Now I was able to take a deep dive into what Google reported about my blog.

robots.txt

My first assumption was that the robots.txt was somehow awry, but given how basic my robots.txt is, I was dumbfounded as to where it could be wrong. "Maybe I missed some crucial technical development?" was the best guess I had. No - a quick search revealed that nothing has changed regarding the robots.txt, and Google says my robots.txt is fine.

Just for the record, this is my robots.txt. As plain, boring and simple as it can be.

User-agent: *
Allow: /
Sitemap: https://admin.brennt.net/sitemap.xml

Inside the VirtualHost for my blog I use the following rewrite to allow both HTTP and HTTPS requests for the robots.txt to succeed, as normally all HTTP requests are redirected to HTTPS. The Search Console, however, complained about an error with the robots.txt fetched over HTTP..

RewriteEngine On
# Do not rewrite HTTP-Requests to robots.txt
<If "%{HTTP_HOST} == 'admin.brennt.net' && %{REQUEST_URI} != '/robots.txt'">
    RewriteRule "(.*)"      "https://%{HTTP_HOST}$1" [R=301,L]
</If>

But this is just housekeeping, as that technical situation was already present when my blog was properly indexed. If anything, fixing it should lead to my blog being ranked or indexed better - not vanishing..

Are security or other issues the problem?

The Search Console has the "Security & Manual Actions" menu. Under it are the two reports about security issues and issues requiring manual action.

Again, no. Everything is fine.

Is my sitemap or RSS-Feed to blame?

I read reports from people who claim that Google accidentally read their RSS-Feed as a sitemap, and that excluding the link to their RSS-Feed from the sitemap.xml did the trick. While a good point, my RSS-Feed https://admin.brennt.net/rss.xml isn't listed in my sitemap. Uploading the sitemap in the Search Console also showed no problems. Not in December 2025 and not in January 2026.

It even successfully picked up the newly created articles.

However even that doesn't guarantee that Google will index your site. It's just one minor technical detail checked for validity.

"Crawled - currently not indexed" - the bane of Google

Even the "Why pages aren't indexed" report didn't provide much insight. Yes, some links deliver a 404. Perfectly fine, I deleted some tags, hence those links now end in a 404. And the admin login page is marked with noindex. Also as it should be.

The "Duplicate without user-selected canonical" took me a while to understand, but it boils down to this: Bludit has categories and tags. If you click on such a category/tag link you are redirected to an automatically generated page showing all posts in that category/with that tag. However, the Bludit canonical-plugin currently doesn't generate canonical links for these category or tag views. Hence I fixed it myself.

Depending on how I label my content, some of these automatically generated pages can look the same, i.e. a page for category A can show the exact same posts as the page for tag B. What a user then has to do is define a canonical link in the HTML source code to make these pages properly distinguishable, telling Google that, yes, the same content is available under different URLs and this is fine (this is especially a problem for bigger sites that have been online for many years).
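
As a sketch (URL exemplary), such a canonical link is a single tag in the page's <head>, identical on every duplicate view:

```html
<!-- Both the category page and an identical tag page declare the same preferred URL -->
<link rel="canonical" href="https://admin.brennt.net/category/it">
```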

But none of these properly explains why my site isn't indexed anymore. Most importantly: all these issues were already present when my blog was indexed. Google explains the various reasons on its "Page indexing report" help page.

There we learn that "Crawled - currently not indexed" means:

The page was crawled by Google but not indexed. It may or may not be indexed in the future; no need to resubmit this URL for crawling.
Source: https://support.google.com/webmasters/answer/7440203#crawled

And that was the moment I hit a wall. Google is crawling my blog. Nothing technical prevents Google from doing so. The overall structure is fine, and no security issues or other issues (like phishing or other fraudulent activities) are present under this domain.

So why isn't Google showing my blog!? Especially since all other search engines don't seem to have an issue with my site.

Requesting validation

It also didn't help that Google itself had technical issues which prevented the page indexing report from being up-to-date. Instead I had to work with old data at that time.

I submitted my Sitemap and hoped that this would fix the issue. Alas, it didn't. While the sitemap was retrieved and processed near-instantly and showed a Success status along with the info that it discovered 124 pages... none of them were added.

I requested a re-validation and the search console told me it can take a while (they stated around 2 weeks, but mine took more like 3-4 weeks).

Fast-forward to January 17th 2026 and the result was in: "Validation failed"

What this means is explained here: https://support.google.com/webmasters/answer/7440203#validation

The reason is not really explained, but I took away the following sentence:

You should prioritize fixing issues that are in validation state "failed" or "not started" and source "Website".

So I went to fix the "Duplicate without user-selected canonical" problem, as all the others (404, noindex) are either not problems or intentional. Leaving those alone shouldn't be a problem, as Google itself writes the following regarding validation:

It might not always make sense to fix and validate a specific issue on your website: for example, URLs blocked by robots.txt are probably intentionally blocked. Use your judgment when deciding whether to address a given issue.

With that fixed, I requested another validation in mid-January 2026. And now I have to wait another 2-8 weeks. *sigh*

A sudden realization

And then it hit me. I vaguely remember that Google scores pages based on links from other sites, following the mantra: "If it's good quality content, people will link to it naturally." My blog isn't big. I don't have that many backlinks. And I don't do SEO/SEM.

But my blog was available under at least one domain which had a bit of backlink traffic - traffic I can still see in my webserver logs today!

Remember how my blog's content was first also reachable under this domain? Yep. My content was indexed under this domain. Then I changed the webserver config so that this is no longer the case. Now I send a proper HTTP-410 "Gone". With that... did I nuke all the (borrowed) reputation my blog possessed?
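
The switch to a proper HTTP-410 is essentially a one-flag change in Apache's rewrite rules. A minimal sketch (hypothetical config, reusing the example domain from earlier) of what such a rule can look like:

```apache
# Answer all requests for the orphaned foreign domain with HTTP-410 "Gone"
RewriteEngine On
<If "%{HTTP_HOST} == 'berufungimzentrum.at'">
    RewriteRule "^" "-" [G]
</If>
```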

If that should be the case (I have yet to dig into this topic..), the question will likely be: how does Google treat a site whose backlinks vanished - or, more technically, how does one properly move content from one domain to another? And is there anything I can do afterwards to fix this, if I did it wrong?

Anything else?

If you are familiar with the topic and have an idea what I can check, feel free to comment. Currently I'm at my wit's end.

Comments

Adding canonical links for category and tag pages in Bludit 3.16.2

Google's Search Console has problems with my site regarding duplicate content, flagged as "Duplicate without user-selected canonical". Which is Google's wording for:

The automatically generated site-views for your categories and tags have the same content sometimes. Hence identical content is available under different URLs.

And yes, when I checked the HTML source of these pages there was no canonical link, despite the canonical-plugin being active.

A single post shows the following:

<!-- Load Bludit Plugins: Site head -->
<link rel="canonical" href="https://admin.brennt.net/please-don-t-remove-your-comment-section">
<link href="/bl-plugins/prism/css/prism.css" rel="stylesheet">

But for https://admin.brennt.net/tag/2fa or https://admin.brennt.net/category/it it only showed the following:

<!-- Load Bludit Plugins: Site head -->
<link href="/bl-plugins/prism/css/prism.css" rel="stylesheet">

ChatGPT to the rescue! - But with a twist!

As it was 6am, I wasn't in the mood to dig through the code myself. So I asked ChatGPT: "How do I retrieve the tag name element in the Bludit blogging software" - only for ChatGPT to give me an extensive answer... for PHP code the canonical-plugin didn't have.

Ah.. Yes.. Typical, isn't it? Stupid LLMs, bla bla.

No - turns out that on January 15th the canonical-plugin was extensively rewritten, fixing the missing canonical links. Great. So ChatGPT did indeed base its answer on the current code. I quickly searched the Bludit forum and GitHub for anything about a new Bludit release, but nothing showed up. And as the last release was in August 2024, I currently don't have high hopes for a release in the near future.

Instead I copy & pasted the complete current code into ChatGPT - as providing the GitHub link didn't work - and got an answer that looked good.

<?php

class pluginCanonical extends Plugin {

	public function siteHead()
	{
		// Home page
		if ($GLOBALS['WHERE_AM_I'] === 'home') {
			return '<link rel="canonical" href="'.DOMAIN_BASE.'"/>'.PHP_EOL;
		}

		// Single page / post
		elseif ($GLOBALS['WHERE_AM_I'] === 'page') {
			global $page;
			return '<link rel="canonical" href="'.$page->permalink().'"/>'.PHP_EOL;
		}

		// Tag pages
		elseif ($GLOBALS['WHERE_AM_I'] === 'tag') {
			global $url;
			$tagKey = $url->slug();
			return '<link rel="canonical" href="'.DOMAIN_TAGS.$tagKey.'"/>'.PHP_EOL;
		}

		// Category pages
		elseif ($GLOBALS['WHERE_AM_I'] === 'category') {
			global $url;
			$categoryKey = $url->slug();
			return '<link rel="canonical" href="'.DOMAIN_CATEGORIES.$categoryKey.'"/>'.PHP_EOL;
		}
	}

}

The only new lines are the ones for the tag and category pages.

Editing the bl-plugins/canonical/plugin.php file, reloading a category and a tag page, aaaaaaand we're green on canonical links.

Result for https://admin.brennt.net/tag/2fa:

<!-- Load Bludit Plugins: Site head -->
<link rel="canonical" href="https://admin.brennt.net/tag/2fa"/>
<link href="/bl-plugins/prism/css/prism.css" rel="stylesheet">

Result for https://admin.brennt.net/category/it:

<!-- Load Bludit Plugins: Site head -->
<link rel="canonical" href="https://admin.brennt.net/category/it"/>
<link href="/bl-plugins/prism/css/prism.css" rel="stylesheet">

Great. Now back to the main problem...


Goodbye, Bludit? Hello, Kirby?

For a few months now, I've been regularly coming back to the question of whether I want to continue using Bludit. The latest version dates from August 2024, and there are IT-security-related bugs open and unresolved on GitHub.

Sure, Bludit is open source. Anyone can fork it. But Jürgen Geuter (alias: tante) wrote back in 2013: "Host your own is cynical". In that text, he discusses why not everyone can set up and operate software "just like that" when a service is discontinued or its business model changes fundamentally.

And in this sense, I would like to note: "Fork your own is cynical"

I want to blog. I want to write down and publish my thoughts. I don't want to program PHP or deal with problems in dozens of different browser versions. In some cases, I would also have to (re-)acquire a lot of knowledge first. And the time spent maintaining the fork? No, thank you.

I just want to be a user. Period.

And well, as can be read in the Bludit forum, the only developer (Diego) is not working on Bludit until further notice. Apparently only minimal adjustments are being made. Too bad - also because security-related bugs are obviously not among them.

But just as I simply want to be a user, I can understand that Diego also has a life and needs to pay his bills.

So I did a little research and came across the blogging software Kirby - also a flat-file CMS. You do have to buy a licence for Kirby, but at 99€ for three years it's more than fair. And the source code is available on GitHub, so if I want to, I can dig through the code myself and see what's going on or whether there's already an issue for my problem.

What's more, the software has been on the market for over 10 years and is used by several well-known magazines and projects (e.g. OBS and Katapult Magazine). That also speaks for its reliability.

Well, I think I'll spend a weekend or so with the trial version and see how Kirby feels. The demo was nice, anyway, and didn't leave me wanting anything.


Quo vadis Bludit? And a new blog theme

I switched to a new blog theme. While Solen was great, I didn't like the mandatory article images. It just makes no sense to search for the 20th "code lines displayed in some kind of terminal or IDE" image for a blogpost. Also, the overall look & feel was a bit too "early 2000s". I wanted something that looked a bit more refined. More clean.

Browsing through https://themes.bludit.com/ I found the Keep It Simple theme. Only that it was last modified in 2020 for a 2.x Bludit version, while we are now at 3.16.2.

Luckily only a few modifications were needed, and I finally took the time to create a separate Git repository for my theme modifications. Have a look at it if you want to use it too: https://github.com/ChrLau/keep-it-simple

It contains some CSS changes, but I kept the original CSS in via comments, so it should be rather easy to switch back. The changes mostly affect colours, blockquotes and fonts - not the general layout - hence incorporating updates from the original StyleShout template should be easy.

Some minor tweaks are still coming, as I still have to check mobile & widescreen support and I want a different font for blockquotes. The current one, merriweather-italic, doesn't look good, as the vertical alignment is too uneven for my liking. Especially the letter "e" sits too high and gives every word with an "e" a somewhat strange look.

But in general I am happy with the current look & feel.

I mean, the W3 Validator still isn't completely happy, but there are some things I can't fix directly; the only option I have is to open a pull request: Bludit: Fix canoncial links in siteHead plugin. Sadly, alt-tags for images are also not possible, and the corresponding issue in the Bludit repository has been closed since 2022 as "this will be fixed with Bludit 4.x". Meanwhile we are still at Bludit 3.x, a 4.x branch doesn't even exist, development has really slowed down, and there isn't much activity from the only developer. I seriously hope these are not bad omens..

There is also no activity on my cookie security issue "Enhance cookie security by setting samesite attribute and adding __Secure- prefix to sessionname" (Bludit issue 1582), and at least one unpatched stored XSS does exist: "Bludit - Stored XSS Vulnerability" (Bludit issue 1579).

Let's just hope the best for now...


Development of a custom Telegram notification mechanism for new Isso blog comments

Photo by Ehtiram Mammadov: https://www.pexels.com/photo/wooden-door-between-drawers-on-walls-23322329/

After integrating Isso into my Bludit blog, I am slowly starting to receive comments from my dear readers. As Bludit and Isso are separate, there is no easily visible notification of new comments that need to be approved.

As this is a fairly low-traffic blog and I'm not constantly checking Isso manually, this resulted in a comment being stuck in the moderation queue for 7 days - only approved after the person contacted me directly to point it out.

Problem identified - and the solution that immediately came to mind was the following:

  1. Write a script that will be executed every n minutes by a Systemd timer
  2. This script retrieves the number of Isso comments pending approval from the sqlite3 database file
  3. If the number is not zero, send a message via Telegram

This should be a feasible solution, as long as I receive a somewhat low amount of comments.

Database internals

Isso stores the comments in a sqlite3 database, so our first task is to identify the structure (tables, columns) which stores the data we need. With .table we get the tables, and with .schema comments we get the SQL statement which was used to create the corresponding table.

However, using PRAGMA table_info(comments); we get a cleaner list of all columns and their associated datatypes. https://www.sqlite.org/pragma.html#toc lists all pragmas in SQLite.

root@admin /opt/isso/db # cp comments.db testcomments.db

root@admin /opt/isso/db # sqlite3 testcomments.db
SQLite version 3.34.1 2021-01-20 14:10:07
Enter ".help" for usage hints.

sqlite> .table
comments     preferences  threads

sqlite> .schema comments
CREATE TABLE comments (     tid REFERENCES threads(id), id INTEGER PRIMARY KEY, parent INTEGER,     created FLOAT NOT NULL, modified FLOAT, mode INTEGER, remote_addr VARCHAR,     text VARCHAR, author VARCHAR, email VARCHAR, website VARCHAR,     likes INTEGER DEFAULT 0, dislikes INTEGER DEFAULT 0, voters BLOB NOT NULL,     notification INTEGER DEFAULT 0);
CREATE TRIGGER remove_stale_threads AFTER DELETE ON comments BEGIN     DELETE FROM threads WHERE id NOT IN (SELECT tid FROM comments); END;

sqlite> PRAGMA table_info(comments);
0|tid||0||0
1|id|INTEGER|0||1
2|parent|INTEGER|0||0
3|created|FLOAT|1||0
4|modified|FLOAT|0||0
5|mode|INTEGER|0||0
6|remote_addr|VARCHAR|0||0
7|text|VARCHAR|0||0
8|author|VARCHAR|0||0
9|email|VARCHAR|0||0
10|website|VARCHAR|0||0
11|likes|INTEGER|0|0|0
12|dislikes|INTEGER|0|0|0
13|voters|BLOB|1||0
14|notification|INTEGER|0|0|0

At first look, the column mode looks like what we want. To test this I created a new comment which is pending approval.

user@host /opt/isso/db # sqlite3 testcomments.db <<< 'select * from comments where mode == 2;'
5|7||1724288099.62894||2|ip.ip.ip.ip|etertert|test||https://admin.brennt.net|0|0||0

And we got confirmation, as only the new comment is listed. After approving, the comment's mode changes to 1. Therefore we have found a way to identify comments pending approval.

The check-isso-comments.sh script

Adding a bit of fail-safety and output, we hack together the following script.

If you want to use it, you need to fill in the values for TELEGRAM_CHAT_ID & TELEGRAM_BOT_TOKEN, where TELEGRAM_CHAT_ID is the ID of the person who shall receive the messages and TELEGRAM_BOT_TOKEN takes your bot's token for accessing Telegram's HTTP API.

Please always check https://github.com/ChrLau/scripts/blob/master/check-isso-comments.sh for the current version. I'm not going to update the script in this blogpost anymore.

#!/bin/bash
# vim: set tabstop=2 smarttab shiftwidth=2 softtabstop=2 expandtab foldmethod=syntax :

# Bash strict mode
#  read: http://redsymbol.net/articles/unofficial-bash-strict-mode/
set -euo pipefail
IFS=$'\n\t'

# Generic
VERSION="1.0"
#SOURCE="https://github.com/ChrLau/scripts/blob/master/check-isso-comments.sh"
# Values
TELEGRAM_CHAT_ID=""
TELEGRAM_BOT_TOKEN=""
ISSO_COMMENTS_DB=""
# Needed binaries
SQLITE3="$(command -v sqlite3)"
CURL="$(command -v curl)"
# Colored output
RED="\e[31m"
#GREEN="\e[32m"
ENDCOLOR="\e[0m"

# Check that variables are defined
if [ -z "$TELEGRAM_CHAT_ID" ] || [ -z "$TELEGRAM_BOT_TOKEN" ] || [ -z "$ISSO_COMMENTS_DB" ]; then
  echo -e "${RED}This script requires the variables TELEGRAM_CHAT_ID, TELEGRAM_BOT_TOKEN and ISSO_COMMENTS_DB to be set. Define them at the top of this script.${ENDCOLOR}"
  exit 1;
fi

# Test if sqlite3 is present and executable
if [ ! -x "${SQLITE3}" ]; then
  echo -e "${RED}This script requires sqlite3 to connect to the database. Exiting.${ENDCOLOR}"
  exit 2;
fi

# Test if curl is present and executable
if [ ! -x "${CURL}" ]; then
  echo -e "${RED}This script requires curl to send the message via Telegram API. Exiting.${ENDCOLOR}"
  exit 2;
fi

# Test if the Isso comments DB file is readable
if [ ! -r "${ISSO_COMMENTS_DB}" ]; then
  echo -e "${RED}The ISSO sqlite3 database ${ISSO_COMMENTS_DB} is not readable. Exiting.${ENDCOLOR}"
  exit 3;
fi

COMMENT_COUNT=$(echo "select count(*) from comments where mode == 2" | sqlite3 "${ISSO_COMMENTS_DB}")

TEMPLATE=$(cat <<TEMPLATE
<strong>ISSO Comment checker</strong>

<pre>${COMMENT_COUNT} comments need approval</pre>
TEMPLATE
)

if [ "${COMMENT_COUNT}" -gt 0 ]; then

  ${CURL} --silent --output /dev/null \
    --data-urlencode "chat_id=${TELEGRAM_CHAT_ID}" \
    --data-urlencode "text=${TEMPLATE}" \
    --data-urlencode "parse_mode=HTML" \
    --data-urlencode "disable_web_page_preview=true" \
    "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage"

fi

Executing this script on the command line already sends a notification to me via Telegram. However, I want this to be automated, hence I make use of a Systemd timer.

Systemd configuration

I use the following timer to get notified between 10 and 22 o'clock, as there is no need to spam me with messages when I can't do anything anyway. I create the files as /etc/systemd/system/isso-comments.timer and /etc/systemd/system/isso-comments.service.

user@host ~ # systemctl cat isso-comments.timer
# /etc/systemd/system/isso-comments.timer
[Unit]
Description=Checks for new, unapproved Isso comments

[Timer]
# Documentation: https://www.freedesktop.org/software/systemd/man/latest/systemd.time.html#Calendar%20Events
OnCalendar=*-*-* 10..22:00:00
Unit=isso-comments.service

[Install]
WantedBy=default.target

The actual script is started by the following service unit file.

user@host ~ # systemctl cat isso-comments.service
# /etc/systemd/system/isso-comments.service
[Unit]
Description=Check for new unapproved Isso comments
After=network-online.target
Wants=network-online.target

[Service]
# Allows the execution of multiple ExecStart parameters in sequential order
Type=oneshot
# Show status "dead" after commands are executed (this is just commands being run)
RemainAfterExit=no
ExecStart=/usr/local/bin/check-isso-comments.sh

[Install]
WantedBy=default.target

After that it's the usual way of activating & starting a new Systemd unit and timer:

user@host ~ # systemctl daemon-reload

user@host ~ # systemctl enable isso-comments.service
Created symlink /etc/systemd/system/default.target.wants/isso-comments.service → /etc/systemd/system/isso-comments.service.
user@host ~ # systemctl enable isso-comments.timer
Created symlink /etc/systemd/system/default.target.wants/isso-comments.timer → /etc/systemd/system/isso-comments.timer.

user@host ~ # systemctl start isso-comments.timer
user@host ~ # systemctl status isso-comments.timer
● isso-comments.timer - Checks for new, unapproved Isso comments
     Loaded: loaded (/etc/systemd/system/isso-comments.timer; enabled; vendor preset: enabled)
     Active: active (waiting) since Mon 2024-09-09 19:33:45 CEST; 2s ago
    Trigger: Mon 2024-09-09 20:00:00 CEST; 26min left
   Triggers: ● isso-comments.service

Sep 09 19:33:45 admin systemd[1]: Started Checks for new, unapproved Isso comments.
user@host ~ # systemctl start isso-comments.service
user@host ~ # systemctl status isso-comments.service
● isso-comments.service - Check for new unapproved Isso comments
     Loaded: loaded (/etc/systemd/system/isso-comments.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Mon 2024-09-09 19:33:58 CEST; 14s ago
TriggeredBy: ● isso-comments.timer
    Process: 421812 ExecStart=/usr/local/bin/check-isso-comments.sh (code=exited, status=0/SUCCESS)
   Main PID: 421812 (code=exited, status=0/SUCCESS)
        CPU: 23ms

Sep 09 19:33:58 admin systemd[1]: Starting Check for new unapproved Isso comments...
Sep 09 19:33:58 admin systemd[1]: isso-comments.service: Succeeded.
Sep 09 19:33:58 admin systemd[1]: Finished Check for new unapproved Isso comments.

user@host ~ # systemctl list-timers
NEXT                         LEFT          LAST                         PASSED        UNIT                         ACTIVATES
Mon 2024-09-09 20:00:00 CEST 22min left    n/a                          n/a           isso-comments.timer          isso-comments.service
Tue 2024-09-10 00:00:00 CEST 4h 22min left Mon 2024-09-09 00:00:00 CEST 19h ago       logrotate.timer              logrotate.service
Tue 2024-09-10 00:00:00 CEST 4h 22min left Mon 2024-09-09 00:00:00 CEST 19h ago       man-db.timer                 man-db.service
Tue 2024-09-10 06:10:34 CEST 10h left      Mon 2024-09-09 06:06:10 CEST 13h ago       apt-daily-upgrade.timer      apt-daily-upgrade.service
Tue 2024-09-10 11:42:59 CEST 16h left      Mon 2024-09-09 19:33:17 CEST 4min 28s ago  apt-daily.timer              apt-daily.service
Tue 2024-09-10 14:22:35 CEST 18h left      Mon 2024-09-09 14:22:35 CEST 5h 15min ago  systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Sun 2024-09-15 03:10:04 CEST 5 days left   Sun 2024-09-08 03:10:27 CEST 1 day 16h ago e2scrub_all.timer            e2scrub_all.service
Mon 2024-09-16 01:21:39 CEST 6 days left   Mon 2024-09-09 00:45:54 CEST 18h ago       fstrim.timer                 fstrim.service

9 timers listed.
Pass --all to see loaded but inactive timers, too.

And now I'm getting informed of new comments in time. Yay!


Integrating Isso into Bludit via Apache2 and WSGI

Photo by Sébastien BONNEVAL: https://www.pexels.com/photo/women-posting-notes-on-brown-board-7393948/

Isso is a comment system designed to be included in static sites. As Bludit is a flat-file CMS and has no comment system of its own, this is a perfect fit. 😀 Also, the comments are stored in a local SQLite DB, which means all data stays locally on this server and no external account registration is required.

Comment moderation is of course possible via a small admin backend.

Introduction

I actually tried two separate methods of setting up and integrating Isso in my Bludit instance.

The start was always a Python virtualenv, as I didn't want to install all packages and libraries system-wide. After that, I tried two approaches, as I wasn't sure which one I'd like better.

First approach:

  • Python virtualenv
  • Isso started via Systemd Unit on localhost:8080
  • ProxyPass & ProxyPassReverse directives in the Apache vHost configuration forwarding the relevant URIs to Isso on http://localhost:8080
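
For the first approach, the forwarding boils down to two directives in the vHost. A sketch, assuming Isso's public endpoint lives under /isso (as in my configuration):

```apache
# Requires mod_proxy and mod_proxy_http to be enabled
ProxyPass        "/isso" "http://localhost:8080"
ProxyPassReverse "/isso" "http://localhost:8080"
```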

Second approach:

  • Python virtualenv
  • Isso integrated via mod_wsgi into Apache

In the end, I went with the second approach, as it spared me a Systemd unit and activating mod_proxy & mod_proxy_http in Apache. Also, the whole integration was done with one WSGI file and 8 lines in the vHost.

But as I got both into a working condition, I document them both here nonetheless.

Creating the Python virtualenv

Creating the virtualenv is a necessary step for both paths. Generally, you can omit it if you want to install the Python packages system-wide, but as I'm still testing the software, I didn't want that.

Here I followed the Install from PyPi documentation from Isso.

root@admin ~# mkdir /opt/isso
root@admin ~# apt-get install python3-dev python3-virtualenv sqlite3 build-essential
# Switching back to our non-root user
root@admin ~# exit
user@admin:~$ cd /opt/isso
user@admin:/opt/isso$ virtualenv --download /opt/isso
user@admin:/opt/isso$ source /opt/isso/bin/activate
(isso) user@admin:/opt/isso$ pip install isso

# After that I ran an upgrade for isso to get the latest package
(isso) user@admin:/opt/isso$ pip install --upgrade isso

# Also we can already create a directory for the SQLite DB where the comments get stored
# Note: This MUST be writeable by the user running isso. So either the user the Systemd unit runs under, or, in my case: the Apache2 user
(isso) user@admin:/opt/isso$ mkdir /opt/isso/db

# To change the ownership I need to become root again
# I choose www-data:adm to allow myself write-access as the used non-root user is part of that group
root@admin ~# chown www-data:adm /opt/isso/db
root@admin ~# chmod 775 /opt/isso/db

With pip list we can get an overview of all installed packages and their versions.

(isso) user@admin:/opt/isso$ pip list
Package       Version
------------- -------
bleach        6.1.0
cffi          1.16.0
html5lib      1.1
isso          0.13.0
itsdangerous  2.2.0
jinja2        3.1.4
MarkupSafe    2.1.5
misaka        2.1.1
pip           20.3.4
pkg-resources 0.0.0
pycparser     2.22
setuptools    44.1.1
six           1.16.0
webencodings  0.5.1
werkzeug      3.0.3
wheel         0.34.2

Isso configuration file

My isso.cfg looked like the following. The documentation is here: https://isso-comments.de/docs/reference/server-config/

(isso) user@admin:/opt/isso$ cat isso.cfg
[general]
dbpath = /opt/isso/db/comments.db
host = https://admin.brennt.net/
max-age = 15m
log-file = /var/log/isso/isso.log
[moderation]
enabled = true
approve-if-email-previously-approved = false
purge-after = 30d
[server]
listen = http://localhost:8080/
public-endpoint = https://admin.brennt.net/isso
[guard]
enabled = true
ratelimit = 2
direct-reply = 3
reply-to-self = false
require-author = true
require-email = false
[markup]
options = strikethrough, superscript, autolink, fenced-code
[hash]
salt = Eech7co8Ohloopo9Ol6baimi
algorithm = pbkdf2
[admin]
enabled = true
password = YOUR-PASSWORD-HERE
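A word on the salt value in the [hash] section: as far as I understand it, Isso uses it when hashing user data (email addresses/IPs), so it should be unique per installation rather than copied from an example. A random replacement can be generated with a few lines of Python (my own quick sketch, not an official Isso tool):

```python
import secrets
import string

# Generate a 24-character alphanumeric salt, similar in shape to the
# example value in isso.cfg. Any fixed random string will do.
alphabet = string.ascii_lowercase + string.digits
salt = "".join(secrets.choice(alphabet) for _ in range(24))
print(salt)
```

Keep in mind that, if I understand the mechanism correctly, changing the salt later will change the generated hashes (and thus identicons) of existing comments.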

This meant I needed to create the /var/log/isso directory and set the appropriate rights.

But I want to draw your attention to the line public-endpoint = https://admin.brennt.net/isso: here I added /isso to the host, because otherwise we will run into problems, as both Isso and Bludit use the /admin URI for backend access.

While this can be changed in Isso when you hack the code and, as far as I could tell, also in Bludit, I didn't want to do that, as such modifications cause additional work every time Isso or Bludit is updated. Therefore I added the /isso URI and adapted the ReverseProxy/WSGI configuration accordingly. Then everything works out of the box.

Decide how to proceed

Now you need to decide if you want to go with the first or the second approach. Depending on that, either start with point 1a) Creating the Systemd unit file or with point 2) Apache2 + WSGI configuration.

1a) Creating the Systemd unit file

The following unit file will start Isso. Store it under /etc/systemd/system/isso.service.

[Unit]
Description=Isso Comment Server

[Service]
Type=simple
User=user
WorkingDirectory=/opt/isso
ExecStart=/opt/isso/bin/isso -c /opt/isso/isso.cfg
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Then enable and start the service.

root@admin /opt/isso # vi /etc/systemd/system/isso.service
root@admin /opt/isso # systemctl daemon-reload
root@admin /opt/isso # systemctl enable isso.service
root@admin /opt/isso # systemctl start isso.service
root@admin /opt/isso # systemctl status isso.service
● isso.service - Isso Comment Server
     Loaded: loaded (/etc/systemd/system/isso.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2024-07-24 23:19:48 CEST; 1s ago
   Main PID: 1272753 (isso)
      Tasks: 2 (limit: 9489)
     Memory: 24.1M
        CPU: 191ms
     CGroup: /system.slice/isso.service
             └─1272753 /opt/isso/bin/python /opt/isso/bin/isso -c /opt/isso/isso.cfg

Jul 24 23:19:48 admin systemd[1]: Started Isso Comment Server.
Jul 24 23:19:48 admin isso[1272753]: 2024-07-24 23:19:48,475 INFO: Using configuration file '/opt/isso/isso.cfg'

The logfile should now also contain entries showing that the service has started:

root@admin /opt/isso # cat /var/log/isso/isso.log
Using database at '/opt/isso/db/comments.db'
connected to https://admin.brennt.net/

1b) Writing the Apache2 ReverseProxy configuration

This is the relevant part of the Apache ReverseProxy configuration.

Note: I used the ReverseProxy configuration before adding the /isso URI to the public-endpoint parameter in the isso.cfg. I adapted the ProxyPass and ProxyPassReverse rules accordingly, but they may not work exactly as displayed here, so you may have to troubleshoot a bit yourself.

# Isso comments directory
<Directory /opt/isso/>
        Options -Indexes +FollowSymLinks -MultiViews
        Require all granted
</Directory>

<Proxy *>
        Require all granted
</Proxy>

ProxyPass "/isso/" "http://localhost:8080/isso/"
ProxyPassReverse "/isso/" "http://localhost:8080/isso/"
ProxyPass "/isso/isso-admin/" "http://localhost:8080/isso/admin/"
ProxyPassReverse "/isso/isso-admin/" "http://localhost:8080/isso/admin/"
ProxyPass "/isso/login" "http://localhost:8080/isso/login"
ProxyPassReverse "/isso/login" "http://localhost:8080/isso/login"

After a configtest we restart apache2.

root@admin ~# apache2ctl configtest
Syntax OK
root@admin ~# systemctl restart apache2.service

Now that this part is complete, continue with point 3) Integrating Isso into Bludit.

2) Apache2 + WSGI configuration

WSGI has the benefit that we can serve our Python application via Apache with no changes to the application, making it easily accessible via HTTP(S). As this is the first WSGI application on this server, I need to install and activate mod_wsgi for Apache2 and create the isso.wsgi file, which mod_wsgi uses to load the application.

This is my changed /opt/isso/isso.wsgi file.

import os
import site
import sys

# Remember original sys.path.
prev_sys_path = list(sys.path)

# Add the new site-packages directory.
site.addsitedir("/opt/isso/lib/python3.9/site-packages")

# Reorder sys.path so new directories at the front.
new_sys_path = []
for item in list(sys.path):
    if item not in prev_sys_path:
        new_sys_path.append(item)
        sys.path.remove(item)
sys.path[:0] = new_sys_path

from isso import make_app
from isso import dist, config

application = make_app(
config.load(
    config.default_file(),
    "/opt/isso/isso.cfg"))
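The middle part of this file looks a bit cryptic, so here is the same reordering logic demonstrated standalone (the paths are made up for the demo): site.addsitedir() appends the virtualenv's site-packages at the *end* of sys.path, and the loop then moves every newly added entry to the front, so the virtualenv's packages shadow any system-wide installations.

```python
# Standalone demo of the sys.path reordering done in isso.wsgi.
# The paths are made up; in the real file, sys.path is the
# interpreter's actual module search path.
prev_sys_path = ["/usr/lib/python3.9", "/usr/lib/python3/dist-packages"]
sys_path = list(prev_sys_path)

# site.addsitedir() appends new directories at the end of sys.path
sys_path.append("/opt/isso/lib/python3.9/site-packages")

# Move every entry that was not there before to the front
new_sys_path = []
for item in list(sys_path):
    if item not in prev_sys_path:
        new_sys_path.append(item)
        sys_path.remove(item)
sys_path[:0] = new_sys_path

print(sys_path)  # the virtualenv path now comes first
```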

Installing mod_wsgi: as Debian's package scripts run apache2_invoke during installation (visible in the output below), I don't need to activate the module manually via a2enmod wsgi.

root@admin ~# apt-get install libapache2-mod-wsgi-py3
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  libapache2-mod-wsgi-py3
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 99.5 kB of archives.
After this operation, 292 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian bullseye/main amd64 libapache2-mod-wsgi-py3 amd64 4.7.1-3+deb11u1 [99.5 kB]
Fetched 99.5 kB in 0s (3,761 kB/s)
Selecting previously unselected package libapache2-mod-wsgi-py3.
(Reading database ... 57542 files and directories currently installed.)
Preparing to unpack .../libapache2-mod-wsgi-py3_4.7.1-3+deb11u1_amd64.deb ...
Unpacking libapache2-mod-wsgi-py3 (4.7.1-3+deb11u1) ...
Setting up libapache2-mod-wsgi-py3 (4.7.1-3+deb11u1) ...
apache2_invoke: Enable module wsgi

Then we add the necessary Directory & WSGI directives to the vHost:

# Isso comments directory
<Directory /opt/isso/>
        Options -Indexes +FollowSymLinks -MultiViews
        Require all granted
</Directory>

# For isso comments
WSGIDaemonProcess isso user=www-data group=www-data threads=5
WSGIScriptAlias /isso /opt/isso/isso.wsgi

After a configtest we restart apache2.

root@admin ~# apache2ctl configtest
Syntax OK
root@admin ~# systemctl restart apache2.service

Now that this part is complete, continue with point 3) Integrating Isso into Bludit.

3) Integrating Isso into Bludit

Now we need to integrate Isso into Bludit in two places:

  1. The <script>-Tag containing the configuration and necessary Javascript code
  2. The <section>-Tag which will actually display the comments and comment form.

For point 1 we need to find a place which is always loaded, no matter if we are on the main page or viewing a single article. For point 2 we must identify the part which renders single-page articles and add the section after the article.

These places of course depend on the theme you use. As I use the Solen theme for Bludit, I can use the following places:

  1. File bl-themes/solen-1.2/php/header-library.php, after line 28 add the <script>-Tag.
    • [...]
      <link rel="preload" href="https://fonts.googleapis.com/css2?family=Quicksand:wght@300;400;500;600;700&display=swap" as="style" onload="this.onload=null;this.rel='stylesheet'"><noscript><link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@300;400;500;600;700&display=swap" rel="stylesheet"></noscript>
      
      <script data-isso="/isso/"
              data-isso-css="true"
              data-isso-css-url="null"
              data-isso-lang="en"
              data-isso-max-comments-top="10"
              data-isso-max-comments-nested="5"
              data-isso-reveal-on-click="5"
              data-isso-sorting="newest"
              data-isso-avatar="true"
              data-isso-avatar-bg="#f0f0f0"
              data-isso-avatar-fg="#9abf88 #5698c4 #e279a3 #9163b6 ..."
              data-isso-vote="true"
              data-isso-vote-levels=""
              data-isso-page-author-hashes="f124cf6b2f01,7831fe17a8cd"
              data-isso-reply-notifications-default-enabled="false"
              src="/isso/js/embed.min.js"></script>
      
      <!-- Load Bludit Plugins: Site head -->
      <?php Theme::plugins('siteHead'); ?>
  2. File bl-themes/solen-1.2/php/page.php, line 12. This file takes care of displaying the content of single-page articles. To add the comments and comment form, we need to place the <section>-Tag after the <div class="article_spacer"></div>-Tag. By doing that we keep the tag list and a small horizontal line as an optical separator between content and comments.
    • Change:

                      </article>
                      <div class="article_spacer"></div>
                  </div>
                  <div class="section_wrapper">
    • To:
                      </article>
                      <div class="article_spacer"></div>
                      <section id="isso-thread">
                          <noscript>Javascript needs to be activated to view comments.</noscript>
                      </section>
                  </div>
                  <div class="section_wrapper">

After a reload you should see the comment form below your article content, but above the article tags.

Like in this example screenshot:

Admin backend

A picture says more than a dozen sentences, so here is a screenshot of the Admin backend with a comment awaiting moderation. Using my configuration you can access it via http://host.domain.tld/isso/admin/.

Screenshot from the Isso Admin backend

Changing the /admin URI for Isso

Isso's admin backend is reachable via /admin by default. If, for whatever reason, you can't set public-endpoint = https://host.domain.tld/prefix with a prefix of your choice, but have to use host.domain.tld, this can still be changed in the source code.

Note: The following line numbers are for version 0.13.0 and can change at any time. Also, I haven't tested this thoroughly, so proceed at your own risk.

Grep'ing for /admin, we learn that this is defined in the file site-packages/isso/views/comments.py (NOT site-packages/isso/db/comments.py).

# Finding the right comments.py
(isso) user@admin:/opt/isso$ find . -type f -wholename *views/comments.py
./lib/python3.9/site-packages/isso/views/comments.py

Now open ./lib/python3.9/site-packages/isso/views/comments.py with your text editor of choice and navigate to lines 113 and 1344.

Line 113:
Change: ('admin', ('GET', '/admin/'))
    To: ('admin', ('GET', '/isso-admin/'))
Line 1344:
Change: '/admin/',
    To: '/isso-admin/',

Of course you can change /isso-admin/ to anything you like. But keep in mind this will be overwritten once you update Isso.
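Since an update reverts the change, one could script the re-patching. Here is a hypothetical helper of my own (plain string replacement, so re-check the file after every upgrade in case upstream changed those lines):

```python
from pathlib import Path

def patch_admin_route(comments_py, new_prefix="/isso-admin/"):
    """Replace Isso's hardcoded '/admin/' route with a custom prefix.

    Returns True if the file was modified, False if it was already patched.
    """
    path = Path(comments_py)
    source = path.read_text()
    patched = source.replace("'/admin/'", "'" + new_prefix + "'")
    if patched != source:
        path.write_text(patched)
        return True
    return False

# Usage (path from my virtualenv, adjust to yours):
# patch_admin_route("./lib/python3.9/site-packages/isso/views/comments.py")
```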

Troubleshooting

AH01630: client denied by server configuration

[authz_core:error] [pid 1257781:tid 1257781] AH01630: client denied by server configuration: /opt/isso/isso.wsgi

This error was easy to fix. I simply forgot the needed Directory directive in my Apache vHost configuration.

Adding the following block to the vHost fixed that:

# Isso comments directory
<Directory /opt/isso/>
        Options -Indexes +FollowSymLinks -MultiViews
        Require all granted
</Directory>

ModuleNotFoundError: No module named 'isso'

This one was easy to fix and just an oversight on my end: I forgot to change the example path to the site-packages directory of my virtualenv. Hence the installed isso package couldn't be located.

[wsgi:error] [pid 1258908:tid 1258908] mod_wsgi (pid=1258908): Failed to exec Python script file '/opt/isso/isso.wsgi'.
[wsgi:error] [pid 1258908:tid 1258908] mod_wsgi (pid=1258908): Exception occurred processing WSGI script '/opt/isso/isso.wsgi'.
[wsgi:error] [pid 1258908:tid 1258908] Traceback (most recent call last):
[wsgi:error] [pid 1258908:tid 1258908]   File "/opt/isso/isso.wsgi", line 3, in <module>
[wsgi:error] [pid 1258908:tid 1258908]     from isso import make_app
[wsgi:error] [pid 1258908:tid 1258908] ModuleNotFoundError: No module named 'isso'

I changed the following:

# Add the new site-packages directory.
site.addsitedir("/path/to/isso_virtualenv")

To the correct path:

# Add the new site-packages directory.
site.addsitedir("/opt/isso/lib/python3.9/site-packages")

Your path can of course be different. Just execute find . -type d -name site-packages inside your virtualenv and you should get the correct path as a result.
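The same lookup can also be done from Python. A small sketch using pathlib; the directory layout in the demo is just a throwaway stand-in for a real virtualenv:

```python
import tempfile
from pathlib import Path

def find_site_packages(venv_root):
    """Mimics `find . -type d -name site-packages` inside a virtualenv."""
    return sorted(str(p) for p in Path(venv_root).rglob("site-packages") if p.is_dir())

# Demo against a temporary directory laid out like a python3.9 virtualenv
with tempfile.TemporaryDirectory() as root:
    (Path(root) / "lib" / "python3.9" / "site-packages").mkdir(parents=True)
    hits = find_site_packages(root)
    print(hits)
```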

SyntaxError: unexpected EOF while parsing

On https://isso-comments.de/docs/reference/deployment/#mod-wsgi Isso lists some WSGI example config files to work with. However, all but one have a crucial syntax error: a missing closing bracket at the end.

If you get an error like this:

[wsgi:error] [pid 1259079:tid 1259079] mod_wsgi (pid=1259079, process='', application='admin.brennt.net|/isso'): Failed to parse Python script file '/opt/isso/isso.wsgi'.
[wsgi:error] [pid 1259079:tid 1259079] mod_wsgi (pid=1259079): Exception occurred processing WSGI script '/opt/isso/isso.wsgi'.
[wsgi:error] [pid 1259079:tid 1259079]    File "/opt/isso/isso.wsgi", line 14
[wsgi:error] [pid 1259079:tid 1259079] 
[wsgi:error] [pid 1259079:tid 1259079]     ^
[wsgi:error] [pid 1259079:tid 1259079] SyntaxError: unexpected EOF while parsing

Be sure to check that every opening bracket has a matching closing one. Usually you just need to add one ) at the end of the last line, and that's it.
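A quick way to catch such errors before mod_wsgi does is to let Python parse the file yourself. A minimal sketch using the ast module; the truncated snippet below mimics the broken upstream examples:

```python
import ast

def parses(source):
    """Return True if the Python source is syntactically valid."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

# Mimics the broken upstream examples: the make_app( call is never closed
broken = "application = make_app(\nconfig.load(\n    'isso.cfg')\n"
fixed = broken + ")\n"

print(parses(broken), parses(fixed))  # False True
```

Equivalently, running python3 -m py_compile /opt/isso/isso.wsgi on the shell performs the same syntax check and exits non-zero on errors.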

SystemError: ffi_prep_closure(): bad user_data (it seems that the version of the libffi library seen at runtime is different from the 'ffi.h' file seen at compile-time)

This one baffled me at first and I had to search for an answer. Luckily StackOverflow came to the rescue: https://stackoverflow.com/a/70694565

The solution was rather easy: first install the libffi-dev package, then recompile cffi inside the virtualenv with the command from the answer. After that the error was gone.

root@admin /opt/isso # apt-get install libffi-dev
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  libffi-dev
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 56.5 kB of archives.
After this operation, 308 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian bullseye/main amd64 libffi-dev amd64 3.3-6 [56.5 kB]
Fetched 56.5 kB in 0s (2,086 kB/s)
Selecting previously unselected package libffi-dev:amd64.
(Reading database ... 57679 files and directories currently installed.)
Preparing to unpack .../libffi-dev_3.3-6_amd64.deb ...
Unpacking libffi-dev:amd64 (3.3-6) ...
Setting up libffi-dev:amd64 (3.3-6) ...
Processing triggers for man-db (2.9.4-2) ...

Then switch back to the non-root user for the virtualenv and recompile the package.

(isso) user@admin:/opt/isso$ pip install --force-reinstall --no-binary :all: cffi
Collecting cffi
  Using cached cffi-1.16.0.tar.gz (512 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
    Preparing wheel metadata ... done
Collecting pycparser
  Using cached pycparser-2.22.tar.gz (172 kB)
Skipping wheel build for pycparser, due to binaries being disabled for it.
Building wheels for collected packages: cffi
Building wheel for cffi (PEP 517) ... done
  Created wheel for cffi: filename=cffi-1.16.0-cp39-cp39-linux_x86_64.whl size=379496 sha256=cd91c1e57d0f3b6e5d23393c0aa436da657cd738262ab29d13ab665b2ad2a66c
Stored in directory: /home/user/.cache/pip/wheels/49/dd/bc/ce13506ca37e8052ad8f14cdfc175db1b3943e5d7bce7718e3
Successfully built cffi
Installing collected packages: pycparser, cffi
  Attempting uninstall: pycparser
    Found existing installation: pycparser 2.22
    Uninstalling pycparser-2.22:
      Successfully uninstalled pycparser-2.22
    Running setup.py install for pycparser ... done
  Attempting uninstall: cffi
    Found existing installation: cffi 1.16.0
    Uninstalling cffi-1.16.0:
      Successfully uninstalled cffi-1.16.0
Successfully installed cffi-1.16.0 pycparser-2.22

TypeError: load() got an unexpected keyword argument 'multiprocessing'

On https://isso-comments.de/docs/reference/deployment/#mod-wsgi Isso lists some WSGI example config files to work with. Using them brought up this error message.

To my shame I couldn't fix this one. From my understanding the syntax was correct, but I couldn't find any really helpful answer. Therefore I removed the multiprocessing argument from the provided WSGI file and the error vanished.

Note: The missing closing bracket at the end has already been added here.

import os
import site
import sys

# Remember original sys.path.
prev_sys_path = list(sys.path)

# Add the new site-packages directory.
site.addsitedir("/path/to/isso_virtualenv")

# Reorder sys.path so new directories at the front.
new_sys_path = []
for item in list(sys.path):
    if item not in prev_sys_path:
        new_sys_path.append(item)
        sys.path.remove(item)
sys.path[:0] = new_sys_path

from isso import make_app
from isso import dist, config

application = make_app(
config.load(
    config.default_file(),
    "/path/to/isso.cfg"))

Alternatives

I also found Remark42 (Official Website, GitHub) and might try it at a later time. So if Isso isn't to your liking, maybe have a look at that one.