The Mastodon account Die Gerichte im echten Norden ("The courts in the real north"), operated by the German federal state of Schleswig-Holstein, made three posts regarding the legal risks of leaving negative restaurant reviews on Google:
(1/3) 🧵 Short #servicepost thread, because we keep seeing that people don't realise what the consequences can be when writing #GoogleReviews:
⚠️ ONLY make negative claims publicly if you can prove that your claims are true in a court of law. Because the burden of proof for the accuracy is usually on you as the author!
The topic of "burden of proof" should be well known to many internet users, whether it concerns eBay or Amazon reviews or (former) employer reviews on sites like Kununu or Glassdoor. As for the latter two, there is a reason why many people say "read between the lines": the companies' own legal, HR and marketing departments are constantly stalking these sites and getting anything removed that isn't a 5-star review or a very positively worded 4-star review.
I also know some people who only read the 1-star to 4-star reviews on Amazon, as these have proved to be reliable and well balanced.
🧵 (2/3) ⚠️ ONLY make negative claims publicly if it is worth around EUR 5,000 to you if the worst comes to the worst! Because that's how much a lost lawsuit over a Google comment can cost you!!!
Again "If it comes to the worst". This means: You wanting to take it to court or having no option to solve it out of court. As usually you should be able to just deleted the comment, contact the lawyer (maybe pay some legal fees) and be done with it.
🧵 (3/3) 😱 5,000 EUR????
Yes, because such comments often jeopardise the existence of the companies concerned. The courts set the amounts in dispute correspondingly high. Often around EUR 10,000. And the legal fees and court costs are then calculated on the basis of the amounts in dispute. These quickly add up to around EUR 5,000.
Such comments jeopardise the existence of companies? Wow. Yeah, I hear that bad service, low-quality food at comparatively high prices, and vermin and pests in the kitchens and storage rooms jeopardise that existence too. And many negative comments presumably refer to exactly these issues.
*sigh*
The bigger problem: Honest, non-five-star reviews are removed en masse
Unsurprisingly, many companies contest, report or challenge any review that is not five stars on their profile. This is because public review scores on platforms such as Amazon or Google are a key factor for the vast majority of internet users. In fact, there are even law firms that specialise in this area.
This leads to the problem that reviews are becoming increasingly worthless. Ultimately, this renders the entire rating system obsolete. However, most internet users are unaware of these issues. They don't realise that there is an entire industry dedicated to manipulating every aspect of the 'review economy'.
Bad reviews? Report them as false claims. Most reviewers won't take it to court and won't care. In 99.9% of cases, they wouldn't be able to prove it in court anyway.
Reviewer resists? Hire a lawyer.
Got too few reviews? Not enough 5-star reviews? Buy them in bulk. Done.
ARD Marktcheck, a German public television format, even recently made a video about this:
What now?
People in the video came to different solutions. One man posted his review on his blog, where ARD Marktcheck found it, and now he is prominently featured in the TV piece.
Another woman recommended that Google should disclose how many reviews have been deleted for each company. Something I strongly second!
And me? I think that making official reports on food inspections public could counteract this problem in the restaurant industry, as they verify legal obligations and requirements and are therefore far more relevant. Yes, this still doesn't solve the problem of these inspections happening too infrequently, but it's an improvement on the current situation.
The current situation only benefits those who provide poor service
This just goes to show how bad the situation is in the restaurant, hotel and catering industry. Some cities make their mandatory restaurant inspection reports public, while others don't, owing to problematic laws that could make a city liable for any damages caused. The DEHOGA (Deutscher Hotel- und Gaststättenverband, in English: German Hotel and Restaurant Association) has even publicly attacked the non-profit platform "Topf Secret" from FoodWatch and FragDenStaat.
Topf Secret enables citizens to utilise the German Verbraucherinformationsgesetz (VIG) – the consumer information law – to request reports on health and food safety checks. The results are then published on the platform, making them publicly available to all. The name is a German pun: "Topf" means pot or kettle, and "Topf Secret" plays on the phrase "top secret" and the fact that the reports are not actually publicly available.
DEHOGA states that the law sets out which reports must be made public and which must not. For example, only violations that result in a fine of at least €350 may be published, and reports must be deleted after six months. Likewise, reports concerning construction or documentation deficits are not to be published.
I understand their point. It's the same in every specialised field. How can information collected and edited by professionals be understood by non-experts? Are problems exaggerated? Are findings that are not at all problematic in terms of food safety presented as such? These are valid concerns, in my opinion.
Companies also need to be protected against abusive reviews. Review platform operators such as Google and Amazon must also fulfil their legal obligations. I'm not arguing against that.
However, there is one major issue that nobody has addressed so far.
I can understand, and would expect, that the DEHOGA protects all its members and lobbies for favourable legislation. On the other hand, there is also a valid concern from citizens about food safety and hygiene in restaurants. In every TV documentary I have seen about food safety inspectors doing their job, I have heard them say: "I don't eat in restaurants any more. I've seen too much." There are problems.
We are now left with a situation where customers want to be informed, but the industry association doesn't want this information published. They cite problems with some laws, and the general problem of publishing information that requires a certain amount of expertise to understand.
Hmm...
Hmmmmm...
Let's think about that for a minute.
Who has that expertise? The DEHOGA.
Who could run such a platform for all their members? The DEHOGA.
Who could ensure laws are followed precisely? The DEHOGA.
Who says it's there to ensure a high standard in gastronomy? The DEHOGA.
Who acts in the interests of all their members? The DEHOGA.
Doesn't publicly outing the bad ones automatically reward the good ones? Those who do a good job, that is. Yes, it does.
Why doesn't the DEHOGA run this platform? That's a very good question. In fact, they could even cooperate with FoodWatch and FragDenStaat, thus eliminating any doubt that a form of greenwashing is being practised here.
The current system only really serves the bad ones. Those with subpar service, prices, food quality, or even severe hygiene or safety issues.
But let's face the truth: the DEHOGA is an industry association. Its purpose is to maximise its members' revenue by lobbying for favourable legislation, and so on. It's not there to ensure customers get good service. I wonder if "good service" and "maximising revenue" are connected in any way. Hmm... But I forgot that, for most people, capitalism just means "getting rich quickly with as little work as possible".
And now restaurant owners are wondering why I visit them so rarely.
In "Little Helper Scripts - Part 3: My Homelab CA Management Scripts", I mention that the regular expressions I use for identifying IPv4 and IPv6 addresses are rather basic. In particular, the IPv6 RegEx simply assumes that anything containing a colon is an IPv6 address.
When I mentioned my script enhancements on Mastodon and jokingly asked if anyone had a better RegEx, my former colleague Klaus Umbach recommended rgxg (ReGular eXpression Generator) to me. It sounded like it would solve my problem exactly.
Installing rgxg
The installation on Debian is pretty easy as there is a package available.
root@host:~# apt-get install rgxg
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
librgxg0
The following NEW packages will be installed:
librgxg0 rgxg
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 24.4 kB of archives.
After this operation, 81.9 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://debian.tu-bs.de/debian bookworm/main amd64 librgxg0 amd64 0.1.2-5 [15.3 kB]
Get:2 http://debian.tu-bs.de/debian bookworm/main amd64 rgxg amd64 0.1.2-5 [9,096 B]
Fetched 24.4 kB in 0s (200 kB/s)
Selecting previously unselected package librgxg0:amd64.
(Reading database ... 40580 files and directories currently installed.)
Preparing to unpack .../librgxg0_0.1.2-5_amd64.deb ...
Unpacking librgxg0:amd64 (0.1.2-5) ...
Selecting previously unselected package rgxg.
Preparing to unpack .../rgxg_0.1.2-5_amd64.deb ...
Unpacking rgxg (0.1.2-5) ...
Setting up librgxg0:amd64 (0.1.2-5) ...
Setting up rgxg (0.1.2-5) ...
Processing triggers for man-db (2.11.2-2) ...
Processing triggers for libc-bin (2.36-9+deb12u10) ...
root@host:~#
Generating a RegEx for IPv6 and IPv4
Klaus already delivered the example for the complete IPv6 address space. For IPv4 it is equally easy:
# RegEx for the complete IPv6 address space
user@host:~$ rgxg cidr ::0/0
((:(:[0-9A-Fa-f]{1,4}){1,7}|::|[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,6}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,5}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,4}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,3}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,2}|::|:[0-9A-Fa-f]{1,4}(::[0-9A-Fa-f]{1,4}|::|:[0-9A-Fa-f]{1,4}(::|:[0-9A-Fa-f]{1,4}))))))))|(:(:[0-9A-Fa-f]{1,4}){0,5}|[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,4}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,3}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,2}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4})?|:[0-9A-Fa-f]{1,4}(:|:[0-9A-Fa-f]{1,4})))))):(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])){3})
# RegEx for the complete IPv4 address space
user@host:~$ rgxg cidr 0.0.0.0/0
(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])){3}
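To quickly sanity-check the generated expression, it can be anchored and fed to grep -E. A minimal sketch (the shell variable and test addresses are just for illustration):
# Store the generated RegEx and test it anchored against sample inputs
user@host:~$ IPV4_RE="$(rgxg cidr 0.0.0.0/0)"
user@host:~$ echo "192.0.2.1" | grep -E "^${IPV4_RE}$"
192.0.2.1
user@host:~$ echo "999.5.4.2" | grep -E "^${IPV4_RE}$"
user@host:~$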
Modifying hostcert.sh
All that is left to do is anchor the RegEx with definite start and end markers (^ and $) and test it.
user@host:~/git/github/chrlau/scripts/ca$ git diff
diff --git a/ca/hostcert.sh b/ca/hostcert.sh
index f743881..26ec0b0 100755
--- a/ca/hostcert.sh
+++ b/ca/hostcert.sh
@@ -42,16 +42,18 @@ else
CN="$1.lan"
fi
-# Check if Altname is an IPv4 or IPv6 (yeah.. very basic check..)
-# so we can set the proper x509v3 extension
+# Check if Altname is an IPv4 or IPv6 - so we can set the proper x509v3 extension
+# Note: Everything which doesn't match the IPv4 or IPv6 RegEx is treated as DNS altname!
for ALTNAME in $*; do
- if [[ $ALTNAME =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ || $ALTNAME =~ \.*:\.* ]]; then
+ if [[ $ALTNAME =~ ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])){3}$ || $ALTNAME =~ ^((:(:[0-9A-Fa-f]{1,4}){1,7}|::|[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,6}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,5}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,4}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,3}|::|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){1,2}|::|:[0-9A-Fa-f]{1,4}(::[0-9A-Fa-f]{1,4}|::|:[0-9A-Fa-f]{1,4}(::|:[0-9A-Fa-f]{1,4}))))))))|(:(:[0-9A-Fa-f]{1,4}){0,5}|[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,4}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,3}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4}){0,2}|:[0-9A-Fa-f]{1,4}(:(:[0-9A-Fa-f]{1,4})?|:[0-9A-Fa-f]{1,4}(:|:[0-9A-Fa-f]{1,4})))))):(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])){3})$ ]]; then
+ #if [[ $ALTNAME =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ || $ALTNAME =~ \.*:\.* ]]; then
IP_ALTNAMES+=("$ALTNAME")
else
DNS_ALTNAMES+=("$ALTNAME")
fi
done
+# TODO: Add DNS check against all DNS Altnames (CN is always part of DNS Altnames)
echo "CN: $CN"
echo "DNS ANs: ${DNS_ALTNAMES[@]}"
echo "IP ANs: ${IP_ALTNAMES[@]}"
And it seems to work fine. Notably, all altnames that don't match either RegEx are treated as DNS altnames, which can cause trouble. Hence I'm thinking about adding a check that resolves all provided DNS names prior to certificate creation; a sketch of such a check follows the test run below.
root@host:~/ca# ./hostcert.sh service.lan 1.2.3.4 fe80::1234 service-test.lan 2abt::0000 999.5.4.2 2a00:1:2:3:4:5::ffff
CN: service.lan
DNS ANs: service.lan service-test.lan 2abt::0000 999.5.4.2
IP ANs: 1.2.3.4 fe80::1234 2a00:1:2:3:4:5::ffff
Enter to confirm.
^C
root@host:~/ca#
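A rough idea of what such a resolution check could look like (a minimal sketch, not part of the script; getent hosts queries the system resolver, including /etc/hosts):
# Hypothetical pre-flight check: warn about DNS altnames that don't resolve
for DNS_ALTNAME in "${DNS_ALTNAMES[@]}"; do
  if ! getent hosts "$DNS_ALTNAME" > /dev/null; then
    echo "WARNING: $DNS_ALTNAME does not resolve - typo or missing DNS record?"
  fi
done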
Like many IT enthusiasts, I run a homelab with various services. And of course HTTPS (and SSL/TLS in general) plays a crucial role in securing the transport layer of applications. Using a self-signed certificate without a CA is possible, but then I would constantly get certificate warnings and would have to add each individual certificate to every browser's trust list. Having my own self-signed CA enables me to add the CA certificate once to every browser/truststore on all devices and be done with it.
While I happily use Let's Encrypt on all services reachable via the Internet (using getssl or Apache's mod_md as ACME clients), I can't do so for my homelab domains. The reason is that I deliberately chose lan. as the TLD for my home network to prevent overlap with any registered domain/TLD. And while lan. is currently not used/registered as an official TLD, it is also not listed/reserved as a Special-Use Domain Name by the IANA. Hence Let's Encrypt won't issue certificates for that domain, as it can't verify the domain.
Of course, I could get into trouble if lan. is ever registered as an official TLD. But I doubt that will happen at all.
Managing your own self-signed Certificate Authority
However, I recently noticed that some features were missing, as the logic to add SubjectAltNames only supported DNS AltNames and had some shortcomings. Commit 56fe6a93675818a483be3abe02cc1ac963a76aed fixed that.
The change was rather easy. Just add some regular expressions to differentiate between DNS, IPv4 and IPv6 altnames and add them with the right prefix to the X509v3 Subject Alternative Name extension (DNS altnames have to be prefixed with DNS: and IP altnames with IP:).
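In essence, the loop builds the subjectAltName value along these lines (a simplified sketch of the idea, not the literal commit; variable names are illustrative):
# Simplified sketch: prefix each altname for the X509v3 Subject Alternative Name extension
SAN=""
for ALTNAME in "${DNS_ALTNAMES[@]}"; do
  SAN="${SAN:+$SAN,}DNS:$ALTNAME"
done
for ALTNAME in "${IP_ALTNAMES[@]}"; do
  SAN="${SAN:+$SAN,}IP:$ALTNAME"
done
# Results in e.g.: subjectAltName=DNS:service.lan,DNS:service-test.lan,IP:192.168.1.114
echo "subjectAltName=$SAN"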
Generating a new certificate with DNS, IPv4 and IPv6 altnames
Now my hostcert.sh supports the following:
user@host:~/ca$ ./hostcert.sh service.lan 192.168.1.114 2a00:2d:bd11:c569:abcd:efff:cb42:1234 service-test.lan
CN: service.lan
DNS ANs: service.lan service-test.lan
IP ANs: 192.168.1.114 2a00:2d:bd11:c569:abcd:efff:cb42:1234
Enter to confirm.
writing RSA key
Reading pass from $CAPASS
CA signing: service.lan.csr -> service.lan.crt:
Using configuration from ca.config
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'DE'
localityName :ASN.1 12:'City'
organizationName :ASN.1 12:'LAN CA'
commonName :ASN.1 12:'service.lan'
Certificate is to be certified until Jul 5 23:08:40 2026 GMT (365 days)
Write out database with 1 new entries
Database updated
CA verifying: service.lan.crt <-> CA cert
service.lan.crt: OK
And the resulting certificate is also fine:
user@host:~/ca# grep "X509v3 Subject Alternative Name" -A1 service.lan.crt
X509v3 Subject Alternative Name:
DNS:service.lan, DNS:service-test.lan, IP Address:192.168.1.114, IP Address:2a00:2d:bd11:c569:abcd:efff:cb42:1234
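Alternatively, the extension can be printed directly from the certificate with openssl (assuming OpenSSL 1.1.1 or newer, which supports the -ext option):
user@host:~/ca# openssl x509 -in service.lan.crt -noout -ext subjectAltName
X509v3 Subject Alternative Name:
    DNS:service.lan, DNS:service-test.lan, IP Address:192.168.1.114, IP Address:2a00:2d:bd11:c569:abcd:efff:cb42:1234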
If a company pulls such stunts, it is not acting in good faith. Not acting in good faith makes you a bad actor.
Do you really want to use software from a bad actor? Let alone have it installed on your mobile phone, the most private piece of technical equipment we own these days?
Yeah, me neither. Hence Brave has been officially uninstalled from all my devices as of 20 minutes ago.
And I doubt I will miss it much, as Firefox on Android has officially supported all extensions since the end of 2023/beginning of 2024. Therefore, extensions like uBlock Origin and uMatrix work perfectly fine. The fact that these were not supported back then was the main reason for choosing Brave in the first place.
Now with this reason being gone, so too is the crypto- and scam-infested Brave browser.
The following log message would only sporadically be logged on my Pi-hole. Not every hour, and not even every day. Just... sometimes. When the stars aligned... When, 52 years ago, Monday fell on a full moon and a 12th-generation carpenter was born... You get the idea. 😄
The error message was:
"No valid NTP replies received, check server and network connectivity"
It was here that I had my suspicions: wait, does the NTP Pool Project even offer IPv6 yet? I have never knowingly used public NTP pools over IPv6. In customer networks, NTP servers are usually only reachable via IPv4, and I don't have an NTP server in my home network. Sadly, many services are still not IPv6-ready.
A look at the NTP Pool Project's documentation confirmed my suspicion:
"Please also note that the system currently only provides IPv6 addresses for a zone in addition to IPv4 addresses if the zone name is prefixed by the number 2, e.g. 2.pool.ntp.org (provided there are any IPv6 NTP servers in the respective zone). Zone names not prefixed by a number, or prefixed with any of 0, 1 or 3, currently provide IPv4 addresses only."
It turns out that the problem lies in my dual-stack setup, since I use IPv4 and IPv6 in parallel. Or rather... it lies with the NTP pools. I checked with dig to see if any AAAA records were returned for 1.de.pool.ntp.org, the pool I was using.
dig aaaa 1.de.pool.ntp.org returns no AAAA records.
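This matches the documentation quoted above: only zones prefixed with 2 return AAAA records. A quick comparison (sketched output; the addresses below are documentation-prefix placeholders, as the real answers rotate):
# Zone without IPv6 support: the answer section stays empty
user@host:~$ dig +short aaaa 1.de.pool.ntp.org
user@host:~$
# Zone prefixed with 2: AAAA records are returned
user@host:~$ dig +short aaaa 2.de.pool.ntp.org
2001:db8::123
2001:db8::456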