Feuerfest

Just the private blog of a Linux sysadmin

Integrating Isso into Bludit via Apache2 and WSGI

Photo by Sébastien BONNEVAL: https://www.pexels.com/photo/women-posting-notes-on-brown-board-7393948/

Isso is a comment system designed to be included in static sites. As Bludit is a static site generator and has no comment system of its own, this is a perfect fit. 😀 Also, the comments are stored in a local SQLite DB, which means all data stays on this server and no external account registration is required.

Comment moderation is of course possible via a small admin backend.

Introduction

I actually tried two separate methods of setting up and integrating Isso in my Bludit instance.

The start was always a Python virtualenv as I didn't want to install all packages and libraries system-wide. After that, I tried two approaches as I wasn't sure which one I liked better.

First approach:

  • Python virtualenv
  • Isso started via Systemd Unit on localhost:8080
  • ProxyPass & ProxyPassReverse directives in the Apache vHost configuration forwarding the relevant URIs to Isso on http://localhost:8080

Second approach:

  • Python virtualenv
  • Isso integrated via mod_wsgi into Apache

In the end, I went with the second approach as it spared me a Systemd unit and having to activate mod_proxy & mod_proxy_http in Apache. Also, the whole integration was done with one WSGI file and 8 lines in the vHost.

But as I got both into a working condition, I document them here nonetheless.

Creating the Python virtualenv

Creating the virtualenv is a necessary step for both paths. Generally, you can omit it if you want to install the Python packages system-wide, but as I'm still testing the software I didn't want this.

Here I followed the Install from PyPi documentation from Isso.

root@admin ~# mkdir /opt/isso
root@admin ~# apt-get install python3-dev python3-virtualenv sqlite3 build-essential
# Switching back to our non-root user
root@admin ~# exit
user@admin:~$ cd /opt/isso
user@admin:/opt/isso$ virtualenv --download /opt/isso
user@admin:/opt/isso$ source /opt/isso/bin/activate
(isso) user@admin:/opt/isso$ pip install isso

# After that I ran an upgrade for isso to get the latest package
(isso) user@admin:/opt/isso$ pip install --upgrade isso

# Also we can already create a directory for the SQLite DB where the comments get stored
# Note: This MUST be writeable by the user running isso. So either the user the Systemd unit runs under, or, in my case: the Apache2 user
(isso) user@admin:/opt/isso$ mkdir /opt/isso/db

# To change the ownership I need to become root again
# I chose www-data:adm to allow myself write-access as the used non-root user is part of that group
root@admin ~# chown www-data:adm /opt/isso/db
root@admin ~# chmod 775 /opt/isso/db

With pip list we can get an overview of all installed packages and their versions.

(isso) user@admin:/opt/isso$ pip list
Package       Version
------------- -------
bleach        6.1.0
cffi          1.16.0
html5lib      1.1
isso          0.13.0
itsdangerous  2.2.0
jinja2        3.1.4
MarkupSafe    2.1.5
misaka        2.1.1
pip           20.3.4
pkg-resources 0.0.0
pycparser     2.22
setuptools    44.1.1
six           1.16.0
webencodings  0.5.1
werkzeug      3.0.3
wheel         0.34.2

Isso configuration file

My isso.cfg looked like the following. The documentation is here: https://isso-comments.de/docs/reference/server-config/

(isso) user@admin:/opt/isso$ cat isso.cfg
[general]
dbpath = /opt/isso/db/comments.db
host = https://admin.brennt.net/
max-age = 15m
log-file = /var/log/isso/isso.log
[moderation]
enabled = true
approve-if-email-previously-approved = false
purge-after = 30d
[server]
listen = http://localhost:8080/
public-endpoint = https://admin.brennt.net/isso
[guard]
enabled = true
ratelimit = 2
direct-reply = 3
reply-to-self = false
require-author = true
require-email = false
[markup]
options = strikethrough, superscript, autolink, fenced-code
[hash]
salt = Eech7co8Ohloopo9Ol6baimi
algorithm = pbkdf2
[admin]
enabled = true
password = YOUR-PASSWORD-HERE

This meant I needed to create the /var/log/isso directory and set the appropriate rights.
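A minimal sketch of that step, assuming the log path from isso.cfg above and the same www-data:adm ownership used for /opt/isso/db (run as root; adjust user and group to your setup):

```shell
# Create the log directory referenced by log-file in isso.cfg and make it
# writable for the user running Isso (www-data when served via mod_wsgi)
mkdir -p /var/log/isso
chown www-data:adm /var/log/isso
chmod 775 /var/log/isso
```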

But I want to draw your attention to the following line: public-endpoint = https://admin.brennt.net/isso. Here I added /isso to the host, as otherwise we will run into problems because both Isso and Bludit use the /admin URI for backend access.

While this can be changed in Isso by hacking the code and, as far as I could tell, also in Bludit, I didn't want to do this, as such modifications cause additional work every time either Isso or Bludit is updated. Therefore I added the /isso URI and adapted the ReverseProxy/WSGI configuration accordingly. Then everything works out of the box.

Decide how to proceed

Now you need to decide if you want to go with the first or second approach. Depending on this, either start with point 1a) Creating the Systemd unit file or 2) Apache2 + WSGI configuration.

1a) Creating the Systemd unit file

The following unit file will start Isso. Store it under /etc/systemd/system/isso.service.

[Unit]
Description=Isso Comment Server

[Service]
Type=simple
User=user
WorkingDirectory=/opt/isso
ExecStart=/opt/isso/bin/isso -c /opt/isso/isso.cfg
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Then activate it and start the service.

root@admin /opt/isso # vi /etc/systemd/system/isso.service
root@admin /opt/isso # systemctl daemon-reload
root@admin /opt/isso # systemctl start isso.service
root@admin /opt/isso # systemctl status isso.service
● isso.service - Isso Comment Server
     Loaded: loaded (/etc/systemd/system/isso.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2024-07-24 23:19:48 CEST; 1s ago
   Main PID: 1272753 (isso)
      Tasks: 2 (limit: 9489)
     Memory: 24.1M
        CPU: 191ms
     CGroup: /system.slice/isso.service
             └─1272753 /opt/isso/bin/python /opt/isso/bin/isso -c /opt/isso/isso.cfg

Jul 24 23:19:48 admin systemd[1]: Started Isso Comment Server.
Jul 24 23:19:48 admin isso[1272753]: 2024-07-24 23:19:48,475 INFO: Using configuration file '/opt/isso/isso.cfg'

The logfile should now also have an entry that the service is started:

root@admin /opt/isso # cat /var/log/isso/isso.log
Using database at '/opt/isso/db/comments.db'
connected to https://admin.brennt.net/

1b) Writing the Apache2 ReverseProxy configuration

This is the relevant part of the Apache ReverseProxy configuration.

Note: I used the ReverseProxy configuration before adding the /isso URI to the public-endpoint parameter in isso.cfg. I adapted the ProxyPass and ProxyPassReverse rules accordingly, but they may not work exactly as displayed here, and you may have to troubleshoot a bit yourself.

# Isso comments directory
<Directory /opt/isso/>
        Options -Indexes +FollowSymLinks -MultiViews
        Require all granted
</Directory>

<Proxy *>
        Require all granted
</Proxy>

ProxyPass "/isso/" "http://localhost:8080/isso/"
ProxyPassReverse "/isso/" "http://localhost:8080/isso/"
ProxyPass "/isso/isso-admin/" "http://localhost:8080/isso/admin/"
ProxyPassReverse "/isso/isso-admin/" "http://localhost:8080/isso/admin/"
ProxyPass "/isso/login" "http://localhost:8080/isso/login"
ProxyPassReverse "/isso/login" "http://localhost:8080/isso/login"

After a configtest we restart apache2.

root@admin ~# apache2ctl configtest
Syntax OK
root@admin ~# systemctl restart apache2.service

Now that this part is complete, continue with point 3) Integrating Isso into Bludit.

2) Apache2 + WSGI configuration

WSGI has the benefit that we can offer our Python application via Apache with no changes to the application and have it easily accessible via HTTP(S). As this is the first WSGI application on this server I need to install and activate mod_wsgi for Apache2 and create the isso.wsgi file which is used by mod_wsgi to load the application.

This is my changed /opt/isso/isso.wsgi file.

import os
import site
import sys

# Remember original sys.path.
prev_sys_path = list(sys.path)

# Add the new site-packages directory.
site.addsitedir("/opt/isso/lib/python3.9/site-packages")

# Reorder sys.path so new directories at the front.
new_sys_path = []
for item in list(sys.path):
    if item not in prev_sys_path:
        new_sys_path.append(item)
        sys.path.remove(item)
sys.path[:0] = new_sys_path

from isso import make_app
from isso import dist, config

application = make_app(
config.load(
    config.default_file(),
    "/opt/isso/isso.cfg"))

Installing mod_wsgi: as Debian's apache2_invoke enables the module automatically upon installation, I don't need to activate it via a2enmod wsgi.

root@admin ~# apt-get install libapache2-mod-wsgi-py3
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  libapache2-mod-wsgi-py3
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 99.5 kB of archives.
After this operation, 292 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian bullseye/main amd64 libapache2-mod-wsgi-py3 amd64 4.7.1-3+deb11u1 [99.5 kB]
Fetched 99.5 kB in 0s (3,761 kB/s)
Selecting previously unselected package libapache2-mod-wsgi-py3.
(Reading database ... 57542 files and directories currently installed.)
Preparing to unpack .../libapache2-mod-wsgi-py3_4.7.1-3+deb11u1_amd64.deb ...
Unpacking libapache2-mod-wsgi-py3 (4.7.1-3+deb11u1) ...
Setting up libapache2-mod-wsgi-py3 (4.7.1-3+deb11u1) ...
apache2_invoke: Enable module wsgi

Then we add the necessary Directory & WSGI directives to the vHost:

# Isso comments directory
<Directory /opt/isso/>
        Options -Indexes +FollowSymLinks -MultiViews
        Require all granted
</Directory>

# For isso comments
WSGIDaemonProcess isso user=www-data group=www-data threads=5
WSGIScriptAlias /isso /opt/isso/isso.wsgi

After a configtest we restart apache2.

root@admin ~# apache2ctl configtest
Syntax OK
root@admin ~# systemctl restart apache2.service

Now that this part is complete, continue with point 3) Integrating Isso into Bludit.

3) Integrating Isso into Bludit

Now we need to integrate Isso into Bludit in two places:

  1. The <script>-Tag containing the configuration and necessary Javascript code
  2. The <section>-Tag which will actually display the comments and comment form.

For point 1 we need to find a place which is always loaded, no matter if we are on the main page or viewing a single article. For point 2 we must identify the part which renders single-page articles and add the section after the article.

These places of course depend on the theme you use. As I use the Solen theme for Bludit, I can use the following places:

  1. File bl-themes/solen-1.2/php/header-library.php, after line 28 add the <script>-Tag.
    • [...]
      <link rel="preload" href="https://fonts.googleapis.com/css2?family=Quicksand:wght@300;400;500;600;700&display=swap" as="style" onload="this.onload=null;this.rel='stylesheet'"><noscript><link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@300;400;500;600;700&display=swap" rel="stylesheet"></noscript>
      
      <script data-isso="/isso/"
              data-isso-css="true"
              data-isso-css-url="null"
              data-isso-lang="en"
              data-isso-max-comments-top="10"
              data-isso-max-comments-nested="5"
              data-isso-reveal-on-click="5"
              data-isso-sorting="newest"
              data-isso-avatar="true"
              data-isso-avatar-bg="#f0f0f0"
              data-isso-avatar-fg="#9abf88 #5698c4 #e279a3 #9163b6 ..."
              data-isso-vote="true"
              data-isso-vote-levels=""
              data-isso-page-author-hashes="f124cf6b2f01,7831fe17a8cd"
              data-isso-reply-notifications-default-enabled="false"
              src="/isso/js/embed.min.js"></script>
      
      <!-- Load Bludit Plugins: Site head -->
      <?php Theme::plugins('siteHead'); ?>
  2. File bl-themes/solen-1.2/php/page.php, line 12. This file takes care of displaying the content for single-page articles. To add the comments and comment form, we need to place the <section>-Tag after the <div class="article_spacer"></div>-Tag. By doing that we keep the tag list and a small horizontal line as an optical separator between content and comments.
    • Change:

                      </article>
                      <div class="article_spacer"></div>
                  </div>
                  <div class="section_wrapper">
    • To:
                      </article>
                      <div class="article_spacer"></div>
                      <section id="isso-thread">
                          <noscript>Javascript needs to be activated to view comments.</noscript>
                      </section>
                  </div>
                  <div class="section_wrapper">

After a reload you should see the comment form below your article content, but above the article tags.

Like in this example screenshot:

Admin backend

A picture says more than a dozen sentences, so here is a screenshot of the Admin backend with a comment awaiting moderation. Using my configuration you can access it via http://host.domain.tld/isso/admin/.

Screenshot from the Isso Admin backend

Changing the /admin URI for Isso

Issos admin backend is reachable via /admin per-default. If, for whatever reason, you can't set public-endpoint = https://host.domain.tld/prefix with a prefix of your choice, but have to use host.domain.tld, this can still be changed in the source code.

Note: The following line numbers are for version 0.13.0 and can change at any time. Also, I haven't tested this thoroughly, so proceed at your own risk.

Grepping for /admin, we learn that this is defined in the site-packages/isso/views/comments.py (NOT site-packages/isso/db/comments.py) file.

# Finding the right comments.py
(isso) user@admin:/opt/isso$ find . -type f -wholename *views/comments.py
./lib/python3.9/site-packages/isso/views/comments.py

Now open ./lib/python3.9/site-packages/isso/views/comments.py with your text editor of choice and navigate to lines 113 and 1344.

Line 113:
Change: ('admin', ('GET', '/admin/'))
    To: ('admin', ('GET', '/isso-admin/'))
Line 1344:
Change: '/admin/',
    To: '/isso-admin/',

Of course you can change /isso-admin/ to anything you like. But keep in mind this will be overwritten once you update Isso.
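If you'd rather not edit the file by hand, both replacements can be scripted; a sketch using sed on sample lines matching the Isso 0.13.0 contents above (point it at your virtualenv's views/comments.py instead of the temporary stand-in file):

```shell
# Stand-in for lib/python3.9/site-packages/isso/views/comments.py with the
# two affected lines from Isso 0.13.0
f="$(mktemp)"
printf "%s\n" "('admin', ('GET', '/admin/'))" "'/admin/'," > "$f"

# Replace the /admin/ endpoint with /isso-admin/ in both places
sed -i "s|('GET', '/admin/')|('GET', '/isso-admin/')|" "$f"
sed -i "s|'/admin/',|'/isso-admin/',|" "$f"
cat "$f"
```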

Troubleshooting

AH01630: client denied by server configuration

[authz_core:error] [pid 1257781:tid 1257781] AH01630: client denied by server configuration: /opt/isso/isso.wsgi

This error was easy to fix. I had simply forgotten the needed Directory directive in my Apache vHost configuration.

Adding the following block to the vHost fixed that:

# Isso comments directory
<Directory /opt/isso/>
        Options -Indexes +FollowSymLinks -MultiViews
        Require all granted
</Directory>

ModuleNotFoundError: No module named 'isso'

This one is easy to fix and was just an oversight on my end. I forgot to change the example path to the site-packages directory of my virtualenv. Hence the installed isso package couldn't be located.

[wsgi:error] [pid 1258908:tid 1258908] mod_wsgi (pid=1258908): Failed to exec Python script file '/opt/isso/isso.wsgi'.
[wsgi:error] [pid 1258908:tid 1258908] mod_wsgi (pid=1258908): Exception occurred processing WSGI script '/opt/isso/isso.wsgi'.
[wsgi:error] [pid 1258908:tid 1258908] Traceback (most recent call last):
[wsgi:error] [pid 1258908:tid 1258908]   File "/opt/isso/isso.wsgi", line 3, in <module>
[wsgi:error] [pid 1258908:tid 1258908]     from isso import make_app
[wsgi:error] [pid 1258908:tid 1258908] ModuleNotFoundError: No module named 'isso'

I changed the following:

# Add the new site-packages directory.
site.addsitedir("/path/to/isso_virtualenv")

To the correct path:

# Add the new site-packages directory.
site.addsitedir("/opt/isso/lib/python3.9/site-packages")

Your path can of course be different. Just execute a find . -type d -name site-packages inside your virtualenv and you should get the correct path as a result.

SyntaxError: unexpected EOF while parsing

On https://isso-comments.de/docs/reference/deployment/#mod-wsgi Isso lists some WSGI example config files to work with. However, all but one have a crucial syntax error: a missing closing bracket at the end.

If you get an error like this:

[wsgi:error] [pid 1259079:tid 1259079] mod_wsgi (pid=1259079, process='', application='admin.brennt.net|/isso'): Failed to parse Python script file '/opt/isso/isso.wsgi'.
[wsgi:error] [pid 1259079:tid 1259079] mod_wsgi (pid=1259079): Exception occurred processing WSGI script '/opt/isso/isso.wsgi'.
[wsgi:error] [pid 1259079:tid 1259079]    File "/opt/isso/isso.wsgi", line 14
[wsgi:error] [pid 1259079:tid 1259079] 
[wsgi:error] [pid 1259079:tid 1259079]     ^
[wsgi:error] [pid 1259079:tid 1259079] SyntaxError: unexpected EOF while parsing

Be sure to check if every opening bracket has a closing one. Usually you just need to add one ) at the end of the last line and that's it.

SystemError: ffi_prep_closure(): bad user_data (it seems that the version of the libffi library seen at runtime is different from the 'ffi.h' file seen at compile-time)

This one baffled me at first and I had to search for an answer. Luckily StackOverflow came to the rescue: https://stackoverflow.com/a/70694565

The solution was rather easy. First install the libffi-dev package, then recompile cffi inside the virtualenv with the command from the answer. After that the error was gone.

root@admin /opt/isso # apt-get install libffi-dev
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  libffi-dev
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 56.5 kB of archives.
After this operation, 308 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian bullseye/main amd64 libffi-dev amd64 3.3-6 [56.5 kB]
Fetched 56.5 kB in 0s (2,086 kB/s)
Selecting previously unselected package libffi-dev:amd64.
(Reading database ... 57679 files and directories currently installed.)
Preparing to unpack .../libffi-dev_3.3-6_amd64.deb ...
Unpacking libffi-dev:amd64 (3.3-6) ...
Setting up libffi-dev:amd64 (3.3-6) ...
Processing triggers for man-db (2.9.4-2) ...

Then switch back to the non-root user for the virtualenv and recompile the package.

(isso) user@admin:/opt/isso$ pip install --force-reinstall --no-binary :all: cffi
Collecting cffi
  Using cached cffi-1.16.0.tar.gz (512 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
    Preparing wheel metadata ... done
Collecting pycparser
  Using cached pycparser-2.22.tar.gz (172 kB)
Skipping wheel build for pycparser, due to binaries being disabled for it.
Building wheels for collected packages: cffi
Building wheel for cffi (PEP 517) ... done
  Created wheel for cffi: filename=cffi-1.16.0-cp39-cp39-linux_x86_64.whl size=379496 sha256=cd91c1e57d0f3b6e5d23393c0aa436da657cd738262ab29d13ab665b2ad2a66c
Stored in directory: /home/clauf/.cache/pip/wheels/49/dd/bc/ce13506ca37e8052ad8f14cdfc175db1b3943e5d7bce7718e3
Successfully built cffi
Installing collected packages: pycparser, cffi
  Attempting uninstall: pycparser
    Found existing installation: pycparser 2.22
    Uninstalling pycparser-2.22:
      Successfully uninstalled pycparser-2.22
    Running setup.py install for pycparser ... done
  Attempting uninstall: cffi
    Found existing installation: cffi 1.16.0
    Uninstalling cffi-1.16.0:
      Successfully uninstalled cffi-1.16.0
Successfully installed cffi-1.16.0 pycparser-2.22

TypeError: load() got an unexpected keyword argument 'multiprocessing'

On https://isso-comments.de/docs/reference/deployment/#mod-wsgi Isso lists some WSGI example config files to work with. Using them brought up this error message.

To my shame I couldn't fix this one. From my understanding the syntax, etc. was correct, but I couldn't find any really helpful answer. Therefore I removed that keyword argument from the provided WSGI file and the error vanished.

Note: The missing closing bracket at the end has already been added here.

import os
import site
import sys

# Remember original sys.path.
prev_sys_path = list(sys.path)

# Add the new site-packages directory.
site.addsitedir("/path/to/isso_virtualenv")

# Reorder sys.path so new directories at the front.
new_sys_path = []
for item in list(sys.path):
    if item not in prev_sys_path:
        new_sys_path.append(item)
        sys.path.remove(item)
sys.path[:0] = new_sys_path

from isso import make_app
from isso import dist, config

application = make_app(
config.load(
    config.default_file(),
    "/path/to/isso.cfg"))


Alternatives

I also found Remark42 (Official Website, GitHub) and might try that at a later time. So if Isso isn't to your liking, maybe have a look at that one.


Why blocking whole countries on the Internet isn't a precise process

Photo by Yan Krukau: https://www.pexels.com/photo/close-up-of-a-person-holding-uno-cards-9068976/

I just read it again on the Internet. Someone is asking: "Hey, as we do only business in the United States, can't we simply block all other countries and be safer? All our customers and suppliers are located in the US."

This inspired me to write a short post about why this is a dangerous and - let's call it politely - sub-clever idea.

You know what "Internet" means, do you?

The term Internet is short for "interconnected networks". The Internet isn't one big network. It's thousands and thousands of small and bigger networks linked together via so-called routing protocols. They transport the information on which routers decide how to route your packet so it arrives at its destination. Routers are, to use an analogy, the traffic signs along the highway, giving each packet directions on which lane it needs to take to reach its destination. In protocol terms, we speak about iBGP, eBGP, OSPF, RIP v1/v2, IGRP, EIGRP, and so on. The only real distinction is whether these protocols are Intra-AS (routing inside one AS - for example iBGP) or Inter-AS (routing between several AS - for example eBGP) routing protocols.

What is an AS, you ask? AS is short for autonomous system (Wikipedia). That's the technical term for a network under the control of a single entity, like a company. Each AS is identified by its unique number, the ASN. This number is used in routing protocols like BGP to exactly identify to which AS a rule belongs.

And as you must have already guessed by now: None of this respects real-world borders. Packets don't stop at borders. Here in Europe, even we people don't stop at borders. You just have to love Schengen (Wikipedia).

Therefore, the task of only allowing customers from the US is, technically speaking, a little bit complicated to set up. Data packets don't contain information about which country they originated from, just the source IP address.
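Python's ipaddress module illustrates this nicely: an address object is nothing but a number plus properties derived from numeric ranges, and there is no country field anywhere (203.0.113.7 is an address from the TEST-NET-3 documentation range):

```python
import ipaddress

ip = ipaddress.ip_address("203.0.113.7")

# Everything a packet header actually carries about its source: 32 bits
print(int(ip))  # 3405803783

# All further properties are derived from numeric ranges, not geography...
print(ip.version, ip.is_multicast)

# ...and there is no country attribute: geolocation is an external guess
print(hasattr(ip, "country"))  # False
```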

But... my firewall-/router-/hosting-/DDoS-/CDN-/whatever-provider offers such an option in the control panel of my/our account? So it must be possible!

I didn't say it couldn't be done under any circumstances. I just said it's complicated and will constantly cause you pain and money loss.

Even BGP in itself isn't 100% safe and attack vectors like BGP hijacking (Wikipedia) do exist, but due to how BGP works, they are always pretty quickly noticed, and the culprit is easily and clearly identified.

So, if it is possible, how do they do it?

They take many, many educated guesses.

...

Yeah, ok. Sprinkled and garnished with some bureaucratic facts as their starting point for their educated guesses.

...

Ok, sometimes they outright pay internet service providers or other companies to give them that data. This might or might not be legal under your country's data privacy laws...

...

Not the answer you expected to read? Yeah, life is disappointing sometimes.

How the bureaucratic layer of the Internet works

Tackle the problem from another angle: Do you know how IP addresses are managed on the bureaucratic layer?

Do terms like IANA, RIR, RIPE and ARIN ring a bell? No? Ok, let me explain.

IANA is the Internet Assigned Numbers Authority. In their words, they "perform the global coordination of the DNS Root, IP addressing, and other Internet protocol resources". Relevant for this article is the "IP addressing" part.

The IANA assigns chunks of IP addresses to the so-called RIRs, short for Regional Internet Registry (Wikipedia). Those RIRs are (with their founding dates and current areas of operation):

  • 1992 RIPE NCC - Europe, Russia, Middle East, Greenland
  • 1993 APNIC - Asian/Pacific region, Australia, China, India, etc.
  • 1997 ARIN - United States of America, Canada
  • 1999 LACNIC - Mexico, South American Continent
  • 2004 AFRINIC - African continent

These RIRs then provide companies in their assigned areas with IP addresses they can manage themselves. To make this picture easier, I left ICANN & the NRO, two other governing bodies, out of it.

As you can see, some RIRs were founded later than others. This also means: even if you filter based on which RIR manages the IP addresses, this isn't set in stone forever. Even if a RIR is responsible for a whole continent, this can change.

What these companies, which offer geo-blocking, do is: they look at where an IP address is located on the bureaucratic layer. Which RIR is responsible for the IP block? Which companies "own" the IPs? Where are they routed to/announced from? But this is all bureaucratic and technical information. It can't be mapped 1:1 to a country. And these bits of information are extremely volatile.

Side note: and there is no RIR for each single country. The term LIR or Local Internet Registry (Wikipedia) does exist, but it commonly refers to your Internet Service Provider (ISP), which assigns your Internet modem/router an IP address so you can browse the Internet. This has nothing to do with countries. The Internet itself isn't technically designed with the concept of "countries" or "borders" in mind. Never was, and most likely (hopefully!!) never will be.

Another problem is the systems that provide this information: some provide real-time information, others don't. Additionally, you don't know which metrics your vendor uses and how the vendor obtains them. And usually they don't make the process of how they obtain and classify the information publicly available.

I had customer support agents who, instead of resolving a domain name via the ping or host command, typed it into Google and used that information. Sometimes they obtained wrong information which was months old and therefore led to other errors...

And what about multi national businesses?

A company from Germany can have IPs assigned by ARIN for their US business. Maybe they have a subsidiary company for their US business, but this still makes it a German company. How do you filter that?

Keep in mind: Maybe their US subsidiary was only established for jurisdictional problems and all people working with you are sitting in Germany. Hence mails, phone calls, letters, etc. will all come from Germany.

Additionally, this company is free to use the IPs as they like. They can announce their BGP routes as they like. Nothing is preventing them from using IPs assigned by the RIPE NCC in the United States. This is done on a regular basis, as IPv4 addresses especially are scarce and sometimes IPs need to be moved around to satisfy the ever-growing demand.

Side note: The NRO publishes the data of all delegated IP blocks under https://ftp.ripe.net/pub/stats/ripencc/nro-stats/latest/. The file nro-delegated-stats contains the information on which IP blocks were assigned by which RIR. You will find lines showing that ARIN (only responsible for the US & Canada) assigned IPs to an entity in Singapore.
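The file format is simple enough to inspect yourself; a sketch parsing one made-up record in the pipe-separated delegated-stats layout (registry|cc|type|start|value|date|status):

```python
# Hypothetical record: ARIN-issued IPv4 space registered to an entity
# with country code SG (Singapore)
line = "arin|SG|ipv4|203.0.113.0|256|20200101|allocated"

registry, cc, rtype, start, value, date, status = line.split("|")
print(f"{registry.upper()} assigned {value} {rtype} addresses "
      f"starting at {start} to an entity in {cc}")
```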

Jan Schaumann used that file to present some cool statistics about IP allocations: https://labs.ripe.net/author/jschauma/whose-cidr-is-it-anyway/

To make the picture more complex: IPs issued by a RIR can be used in any country. There is no rule nor enforcement that IPs issued by a RIR are only to be used in its sphere of influence. Therefore even that first piece of starting information can differ completely from reality.

Hence my statement that all this geolocation business is based on educated guesses. Yes, many locations will be accurate. But the question is "For how long?", and do you really want to make your communication depend on that?

The technical reality

BGP routes themselves can change at any time. There is no "You can change them only once every 30 days." You can change them every 5 minutes if you like. They can even change completely automatically. Heck, they have to change automatically if we want a working Internet. There are always equipment malfunctions.

When I worked at a major German telecommunications provider, we utilized BGP to build an automatic fail-over in case an entire datacenter went offline. Both datacenters announced their routes (how traffic can reach them) via BGP towards the route reflector of our network team. Datacenter A announced with a local-preference of 200, datacenter B with a local-preference of 100. In iBGP the highest local-preference value takes priority. This means: If datacenter A should ever cease to function (the iBGP announcements from that datacenter stop reaching the route reflector) the traffic will immediately go towards datacenter B.
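The described fail-over behaviour can be sketched in a few lines of Python (the route data is hypothetical; the rule shown is just the local-preference step of BGP best-path selection, where the highest value among still-announced routes wins):

```python
# Two datacenters announcing the same prefix with different local-preference
routes = [
    {"next_hop": "datacenter-A", "local_pref": 200, "announced": True},
    {"next_hop": "datacenter-B", "local_pref": 100, "announced": True},
]

def best_path(routes):
    # Withdrawn announcements drop out; highest local-preference wins
    alive = [r for r in routes if r["announced"]]
    return max(alive, key=lambda r: r["local_pref"])

print(best_path(routes)["next_hop"])  # datacenter-A
routes[0]["announced"] = False        # datacenter A goes offline
print(best_path(routes)["next_hop"])  # datacenter-B takes over immediately
```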

In our case, both datacenters A and B were located in Germany. But that was pure chance. My employer also had datacenters in France, the UK, Spain, etc. and of course also in the US. It just happened that the datacenters where my team was allocated the necessary rack space for our servers were both located in Germany.

So the endpoint can literally change every millisecond. And with it the country where traffic is sent to or originates from.

Of course we did regular fail-over tests. Now think about the following scenario: We are doing a live fail-over test. Datacenter A switched to B and datacenter B happens to be located in France. The traffic will be arriving in France for 5 minutes (the duration of our test). In exactly these 5 minutes a scan from a vendor notices that traffic for all IPs affected by our test is arriving in France. The software will write this into its database and happily move along.

How long will that false, inaccurate and outdated information be kept in their database? What trouble will that cause your business?

Looking at it from the other side

Ok, so we clarified why geo-blocking means taking educated guesses with a bit of voodoo. It is time to look at it from the other side, right? This is a viewpoint which is regularly forgotten completely.

Let's go with the example above: "Hey, as we do only business in the United States, can't we simply block all other countries and be safer? All our customers and suppliers are located in the US."

Is this really the reality? Are your suppliers and customers located in the US?

I bet 100% that you haven't even understood why you are making that claim. Most people will look at: "Where do we have stores? Where do we ship? What are our target customers?"

This usually leads to an opinion based on bureaucratic metrics. Or in other words: Delivery and invoice addresses.

But what about the customer in Idaho who just recently moved there from Spain and still uses his/her mail account from a Spanish mail provider?

Have you checked which IPs their email server uses? Are they hosted by a big cloud provider like Google, Azure or AWS? Do you have complete and absolute knowledge of how these three biggest tech companies manage their IPs and hundreds of networks today? Tomorrow? Next week?

Even they don't.

Which measures and workarounds do they undertake should a datacenter go down? Or just be in a planned maintenance state?

It's fairly normal that in times of need, workarounds are made to ensure customers can use the services they are paying for again as quickly as possible.

Businesses change too

What if your biggest client suddenly stops buying from you? What if you aren't getting any calls for bids any more?

Could it be that the company you did business with was recently acquired by another company? And now they send all their mail from an entirely different mail server hosted in an entirely different country? Could that be the reason the RFQs (requests for quotations) stopped coming?

How much money will you lose before you notice this error?

Last words

I tried to explain in easy words for non-techie people why geo-blocking is usually bad. Yes, it's used by Netflix and many others. Yes, many products offer some kind of feature to achieve some form of geo-blocking.

But keep in mind: They have to do this for jurisdictional reasons. They bought rights to movies to show in certain countries. The owners of these rights want Netflix to ensure only those customers can watch these movies. Because they themselves sold the exact same rights to at least 25 other companies in other countries. And each of their customers will sue them once they notice that a competitor has the same movie in the same country. Hence, Netflix is trapped in a never-ending cat-and-mouse game with VPN companies that constantly change their endpoints.

I haven't even talked about VPNs. I haven't talked about DNS. I haven't talked about mail. All these require IPs to function. All these add several other layers of complexity. But all these are needed for your business to work in the 21st century.

You won't be more secure by blocking China, Russia, or North Korea at your firewall.

You will be more secure by applying patches on time. Using maintained software products. Separating your production environments from your development/test networks and the networks where the PCs/Laptops of your employees are located. By running regular security audits. By following NIST recommendations regarding password security. By defining a good manageable firewall rule framework. By having a ticket system that makes changes traceable AND reproducible. By introducing ITIL or some ISO stuff if you want to go that route.

Be advised: The bad guys are not just sitting in those countries that you are afraid of. China isn't solely attacking out of China in cyberspace. No, more likely they utilize a hacked internet account from some John Doe just around the corner from your shop.

Some links

If you want to read further I can recommend the site https://networklessons.com/. If you want to learn more about BGP you can visit https://networklessons.com/bgp and start from there.

Comments

Why I don't consider Outlook to be a functional mail client

Photo by Pixabay: https://www.pexels.com/photo/flare-of-fire-on-wood-with-black-smokes-57461/

This topic comes up far too often, therefore I decided to make a blogpost out of it. After all, copy & pasting a link is easier than repeatedly writing the same bullet points.

Also: This is my private opinion and this article should rather be treated as a rant.

  • Mail templates are separate files? And the workflow to create them is seriously that antique?
    • Under Create an email message template (microsoft.com) Microsoft details how to create an email template. But do you notice something? They use the phrase "[...] that include information that infrequently changes [...]" - meaning only static text is allowed.
    • Yep, you can't draft mail templates where certain values get auto-filled and the like. I mean, how many employees, consultants, etc. have to send their weekly/monthly time-sheet to someone? Is it so hard to automatically fill in the week number and month, and automatically attach the latest file with a certain file name from a specified folder?
      • Yes! Automating this with software is surely the best way. But we all know what the reality in many companies looks like, right?
    • Additionally, the mail templates are stored as files on your filesystem under: C:\users\username\appdata\roaming\microsoft\templates.
      • This means: Mail templates are not treated as mails in draft mode or the like. No, you have to load an external file via a separate dialogue into Outlook. That's user experience from the 1980s.
    • Workaround: Create a folder templates and a sub-folder templates-for-templates. Store mail drafts (with recipients, subject, text, etc.) in templates-for-templates. When needed, copy one to templates, attach the file, edit the text manually and hit send.
    • Never send directly out of templates-for-templates, or else your template is gone.
    • But seriously? Why is this process so old and convoluted? I suspect the feature is kept this way because Microsoft is afraid of people utilizing it to send spam. But... sending spam manually? I think this stopped being a thing on May 5th, 2000 (Wikipedia) at the latest. Every worm/virus out there has its own built-in logic to generate different subjects/texts/etc. Why deliberately keep a feature in such a broken state and punish your legitimate users?
  • No regular expressions in filter keywords
    • This annoys me probably the most. When you specify a filter "Sort all mails where the subject begins with 'Newsletter PC news' into a folder", Outlook will only sort mail with the exact subject of "Newsletter PC news".
    • Which is stupid when there is a static and a changing part in the subject. I mean, it's 2024. Supporting some kind of wildcard string matching via asterisks is not exactly new, is it? Like: "Sort all mail where the subject starts with 'Newsletter PC news*'" - and then "Newsletter PC news April 2024" would also get sorted. No. Not in Outlook.
  • Constant nuisance: Ctrl+F doesn't bring up the search bar - Instead it opens the new mail window..
    • I mean really? Ctrl+F is the shortcut for search everywhere. Why change that!?
    • Info: Ctrl+E activates the search field on top
  • Only one organizer for events
    • Ok, technically this isn't Outlook but rather CalDAV, hence Google Calendar etc. suffer from the same problem. But I still list it as a fault.
    • Why? Microsoft has repeatedly shown the middle finger to organizations like the ISO and the like. When it suits Microsoft's market share, they basically are willing to ignore a lot of common standards (like Google, Facebook, etc..). With their Active Directory infrastructure and Office Suite they have everything in-house and 100% under their own control to make this feature work in Windows environments - which most companies do run. But they don't care.
    • I mean.. On the other hand I'm glad that they follow the standard here. It's just that this turns out to be a feature we need so often that I stopped counting.
    • And you already need proprietary connectors to properly integrate your Exchange calendar into other mail programs like Mozilla Thunderbird. So this shouldn't really be a big deal-breaker either.
  • Only one reminder for events
    • Due to my attention deficit disorder I tend to have what is called "altered time perception" or "time blindness". This means I won't experience 15 minutes as 15 minutes and grossly under- or overestimate how much time I really have left. The best description I can give for non-ADDers is: I will think of 15 minutes as "Ah, I still have 1 hour left." That this can lead to situations where I am late or wasn't able to fully prepare something for a meeting should be clear.
    • Therefore it really helps me to be able to set multiple reminders for an event.
    • Usually I do the following: 1 hour before, 30min, 15min. This helps me to break out of the time blindness and synchronize my altered time perception with reality. Enabling me to finish tasks before the meeting/event happens.
    • For events like a business trip which take more time to prepare I often set a reminder 1 or 2 weeks in advance. This way I have time to do my laundry in time and so on.
    • Outlook however only supports the setting of ONE reminder.. Yeah..
    • My workaround is to have events also in my private calendar. (Of course without any details and often just a generic title/description, so as not to store client information on my private device.)
  • Remember Xobni? / The search is horrible
    • Outlook search is a single input field and then it searches over everything. You can't specify if the search term you used is a name, part of the name of a file or part of an email address.
    • In the early 2000s there was Xobni. Slogan: "It reverts your Inbox." - hence the name Xobni ("Inbox" reversed). It was an add-on which added another sidebar to Outlook. There it displayed all people you've mailed with. And when you clicked on a person you saw all mails, all mail threads and, most importantly, all attachments this person had sent to you (or you to them). You could even add links to the person's social media profiles, etc. It was brilliant. And made work so easy. Often I remembered only the person who sent a file to me or the thread in which it was attached - but not the actual mail or even the subject of the mail. Xobni made it pretty easy to work around that, making it possible to search Outlook in the way our brain works.
    • Well, sadly Yahoo bought Xobni in July 2013 and shut it down in July 2014.
    • But it's 2024 and Microsoft hasn't come up with a similar functionality yet? Really?
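For the record, the kind of asterisk matching those filter rules lack is a one-liner in most environments. A minimal Python sketch (the subjects are made-up examples):

```python
from fnmatch import fnmatch

subjects = [
    "Newsletter PC news April 2024",
    "Newsletter PC news May 2024",
    "Invoice 4711",
]

# The rule Outlook can't express: a static prefix plus a changing rest.
matched = [s for s in subjects if fnmatch(s, "Newsletter PC news*")]
print(matched)  # → ['Newsletter PC news April 2024', 'Newsletter PC news May 2024']
```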
Comments

ASUS RMA process is broken by design to maximize profit?

Photo by ThisIsEngineering: https://www.pexels.com/photo/woman-working-on-computer-hardware-19895718/

I watched an interesting video from the YouTube channel GamersNexus. Its title is "ASUS Scammed Us".

And in this video they show how the ASUS RMA process is broken and how many customers are faced with repair bills higher than the original cost of the device. Or ASUS claims parts need to be repaired which, according to the customers, are not broken. Another big topic is that ASUS regularly claims the customer caused the defect and hence the repair isn't covered under warranty.

Yeah.. While watching the video you get the feeling the process was designed that way to maximize profit. It's intransparent, not flexible enough and generally doesn't put the customer at its core.

Which sucks. And it earned ASUS a place on my "Do not buy from ever again" list... The video is linked below:

Or, if you prefer a link, here: https://www.youtube.com/watch?v=7pMrssIrKcY

Comments

Go home GoDaddy, you're drunk!

Photo by Tim Gouw: https://www.pexels.com/photo/man-in-white-shirt-using-macbook-pro-52608/

I'm just so fucking happy right now that I have never been a customer of GoDaddy. As I learned via Reddit yesterday, GoDaddy closed access to their DNS API for many customers.

No prior information.

No change of the documentation regarding API access.

Nothing.

For many customers this meant that their revenue stream was affected as, for example, the SSL certificates for web services couldn't be automatically renewed - which is the case when you are using Let's Encrypt.

Therefore I can't say it in any other words: GoDaddy deliberately sabotaged its customers in order to maximize its income.

Yeah, fuck you GoDaddy. You are on my personal blacklist now. I'm never going to do business with you. Not that I planned to, but sometimes decisions like this must be called out and sanctioned.

When customers asked why their API calls returned an HTTP 403 error (Forbidden), GoDaddy provided the following answer (accentuation done by myself):

Hi, We have recently updated the account requirements to access parts of our production Domains API. As part of this update, access to these APIs are now limited:

  • Availability API: Limited to accounts with 50 or more domains
  • Management and DNS APIs: Limited to accounts with 10 or more domains and/or an active Discount Domain Club plan.

If you have lost access to these APIs, but feel you meet these requirements, please reply back with your account number and we will review your account and whitelist you if we have denied you access in error. Please note that this does not affect your access to any of our OTE APIs. If you have any further questions or need assistance with other API questions, please reach out. Regards, API Support Team

Wow. The mentioned OTE API, meanwhile, is no workaround. It's GoDaddy's test API, used to verify that your API calls work prior to sending them to the production API. You can't do anything there which would help GoDaddy's customers find a solution without having to pay.

Sources

Am I the only one who can't use the API? (Reddit)

Warning: Godaddy silently cut access to their DNS API unless you pay them more money. If you're using Godaddy domain with letsencrypt or acme, be aware because your autorenewal will fail. (Reddit)

Comments

Things to do when updating Bludit

Photo by Markus Spiske: https://www.pexels.com/photo/green-and-yellow-printed-textile-330771/

I finally got around to updating to the latest version of Bludit. And as I made two changes to files which will be overwritten, I wrote myself a small documentation.

Changing the default link target

Post with code example, here: https://admin.brennt.net/changing-the-default-link-target-in-bludits-tinymce

  1. Open the file bludit-folder/bl-plugins/tinymce/plugin.php
  2. Search for tinymce.init
  3. Then we add the default_link_target: '_blank' parameter at the end of the list
  4. Don't forget to add a semicolon behind the formerly last parameter

Keep the syntax highlighting

Original post: https://admin.brennt.net/bludit-and-syntax-highlighting

  1. Open: bludit-folder/bl-plugins/tinymce/tinymce/plugins/codesample/plugin.min.js
  2. Search for <pre and in the class property add line-numbers. It should now look like this: t.insertContent('<pre id="__new" class="language-'+a+' line-numbers">'+r+"</pre>")
  3. A little after that pre you will also find t.dom.setAttrib(e,"class","language-"+a), add the line-numbers class in there too, it should look like this: t.dom.setAttrib(e,"class","line-numbers language-"+a)
  4. Edit a random blogpost with code in it to verify that newly rendered pages get the line-numbers and syntax highlighting.

Enhancing Cookie security

Mozilla's Observatory states that the Bludit session cookie is missing the samesite attribute and that the cookie name isn't prefixed with __Secure- or __Host-. I opened an issue for this on GitHub (Bludit issue #1582 Enhance cookie security by setting samesite attribute and adding __Secure- prefix to sessionname) but until this is integrated we can fix it in the following way:

  1. Open bludit-folder/bl-kernel/helpers/session.class.php
  2. Comment out the line containing: private static $sessionName = 'BLUDIT-KEY';
  3. Copy & paste the following to change the Cookie name:
    • // Set the __Secure- prefix if site is called via HTTPS, preventing overwrites from insecure origins
      //   see: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#cookie_prefixes
      //private static $sessionName = 'BLUDIT-KEY';
      private static $sessionName = '__Secure-BLUDIT-KEY';
  4. Search for the function session_set_cookie_params
  5. Add the following line to set the samesite attribute on the cookie. Also add a comma at the end of the formerly last line.
    • 'samesite' => 'strict'
  6. It should look like this:
    • session_set_cookie_params([
          'lifetime' => $cookieParams["lifetime"],
          'path' => $path,
          'domain' => $cookieParams["domain"],
          'secure' => $secure,
          'httponly' => true,
          'samesite' => 'strict'
      ]);
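Once deployed, the browser dev tools (or curl -I) should show the new cookie name plus the Secure, HttpOnly and SameSite=Strict attributes. A small Python sketch of such a check - the header value used below is a hypothetical example of what the response should contain:

```python
def cookie_is_hardened(set_cookie: str) -> bool:
    """Check a Set-Cookie header value for the __Secure- prefix plus
    the Secure, HttpOnly and SameSite=Strict attributes."""
    name = set_cookie.split("=", 1)[0].strip()
    attrs = {a.strip().lower() for a in set_cookie.split(";")[1:]}
    return (
        name.startswith("__Secure-")
        and "secure" in attrs
        and "httponly" in attrs
        and "samesite=strict" in attrs
    )

# Example header as it should look after the change:
header = "__Secure-BLUDIT-KEY=abc123; path=/; Secure; HttpOnly; SameSite=Strict"
print(cookie_is_hardened(header))  # → True
```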

Check if the RSS feed works

  1. Access https://admin.brennt.net/rss.xml and verify there is content displayed.
    • If not: Check if the RSS plugin works and is activated
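Beyond eyeballing the page, the feed can also be checked programmatically. A minimal Python sketch - it only parses a feed string, so feed it the downloaded body of your own rss.xml (the inline sample feed below is made up):

```python
import xml.etree.ElementTree as ET

def rss_item_count(xml_text: str) -> int:
    """Count the <item> elements of an RSS 2.0 feed (they live under <channel>)."""
    root = ET.fromstring(xml_text)
    return len(root.findall("./channel/item"))

# Made-up sample feed; in practice pass the content of rss.xml.
sample = (
    '<rss version="2.0"><channel><title>Feuerfest</title>'
    '<item><title>Some post</title></item></channel></rss>'
)
print(rss_item_count(sample))  # → 1
```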

Apply / check CSS changes

I made some small changes to the Solen Theme CSS. These must be re-done when the theme is updated.

  1. Open bl-themes/solen-1.0/css/style.css
  2. Change link color:
    • Element: .plugin a, ul li a, .footer_entry a, .judul_artikel a, line 5
    • Change: color: #DE004A;
    • To: color: #F06525;
  3. Change link color when hovering:
    • Element: .plugin a:hover, ul li a:hover, .footer_entry a:hover, .judul_artikel a:hover, line 10
    • Change: color: #F06525;
    • To: color: #C68449;
  4. Fix position of the blockquote bar:
    • Element: blockquote (box-shadow property), line 24
    • Change:  box-shadow: inset 5px 0 rgb(255, 165, 0);
    • To:  box-shadow: -5px 0px 0px 0px rgb(255, 165, 0);
  5. Format code in posts:
    • Add the following element after line 37:
    • /* My custom stuff */
      code {
          font-size: 87.5%;
          color: #e83e8c;
          word-wrap: break-word;
          font-family: 'Roboto Mono', monospace;
      }
  6. Same padding-bottom as padding-top for header:
    • Element .section_title, line 136
    • Change: padding-bottom: 0px;
    • To: padding-bottom: 0.6em;
  7. Disable the white blur for the introduction texts:
    • Element: .pratinjau_artikel p:after, line 277
    • Change background: linear-gradient(to right, transparent, #ffffff 80%);
    • To: background: 0% 0%;

Solen-Theme changes

  1. Make header smaller:
    • Open solen-1.2/php/mini-hero.php
    • Remove line 4: <h2 class="hero_welcome"><?php echo $L->get('welcome'); ?></h2>
Comments