Feuerfest

Just the private blog of a Linux sysadmin

No, you don't get Microsoft Office for free via the Office Customization Tool (OCT) and Generic Volume License Keys (GVLKs)

Photo by RDNE Stock project: https://www.pexels.com/photo/close-up-shot-of-a-person-holding-a-contract-7841486/

There are a bunch of websites and YouTube videos claiming that the locally installable Microsoft Office versions can be downloaded, legally and free of charge, directly from Microsoft. This is, of course, nonsense.

The bait

All the videos I watched and articles I read told their audience to visit https://config.office.com/deploymentsettings. This is the official Microsoft site for the so-called Office Customization Tool (OCT). The OCT is meant for system administrators to generate a configuration file that defines which Office version (Office LTSC Professional Plus 2024, Microsoft 365 Apps for Enterprise, etc.) and which Office components (Word, Excel, PowerPoint, Outlook, etc.) are going to be installed.

It is a tool used to ensure that installer packages for Microsoft Office are configured the same way, helping IT administrators to automate and standardize Office installations on all of a company's various systems (notebooks, workstations, etc.).

The end result of using the OCT is a configuration file, not an actual installer. You won't get a .msi or .exe file. You've just created a configuration file telling the Office installer which components to install & how to configure them.
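For illustration, here is a sketch of what such an OCT-generated XML file typically looks like (the product ID, channel and language values are examples and depend on what you selected in the tool):

```xml
<!-- Example OCT output (values are illustrative) -->
<Configuration>
  <Add OfficeClientEdition="64" Channel="PerpetualVL2024">
    <Product ID="ProPlus2024Volume">
      <Language ID="en-us" />
      <ExcludeApp ID="Access" />
    </Product>
  </Add>
  <Display Level="Full" AcceptEULA="TRUE" />
</Configuration>
```

The Office Deployment Tool's setup routine then consumes this file, typically via `setup.exe /download configuration.xml` followed by `setup.exe /configure configuration.xml`.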

However - and this is the main reason this scam works - a license key is pre-filled. And this is almost always highlighted prominently in the videos & articles, as if it were some sort of proof that "Microsoft is giving away Office for free" or "this is legitimate". That's a lie, plain and simple. I'll elaborate on that in the next paragraph.

The next step is to download the official Office Deployment Tool (ODT) from Microsoft. Then the setup routine from the ODT is called and supplied with the configuration file you created with the OCT.

And yes, of course you will end up with an installation of your chosen Microsoft Office version. Alas, this installation will only work for 30 days, and you won't be able to activate it. Why? Read the next paragraph.

The catch

What all of them don't seem to grasp - or intentionally misrepresent - is that they are using a Microsoft Office version with a pre-filled Generic Volume License Key (GVLK). This key still must be activated via a so-called Key Management Service (KMS). The KMS is NOT the Microsoft activation service used to activate your Windows installation. It's a separate software component running in the Active Directory domain of corporate networks. And guess what? You need to buy license keys and add them to the KMS. Only then will your copy be activated.

The thought "Well, I got a license key, so it's working." is outdated. That may have been the case with software in the early 2000s. Nowadays there are several other methods to validate whether a copy is genuine or not.

After all: if the license keys are so precious - the most valuable part - why are they pre-filled and non-changeable in the Office Customization Tool from Microsoft? Why are they listed in a separate article?

That's why all the videos are, unsurprisingly, tight-lipped about the whole "How do I activate the Office installation?" part. Usually they just show the successful installation and open a few Word documents to prove it's working. Well yes, it is - for 30 days. But after that you're starting at zero again.

And I have no idea whether using such an Office installation will have consequences for your Microsoft account...

Side-note: Generic Volume License Keys being sold as used license keys

At least here in Germany it is legal to sell used license keys. With Generic Volume License Keys (GVLKs) this adds another possibility for fraud.

Many shops which sell used Microsoft Office license keys will offer retail keys in their shop. Think of a retail key as the sticker on the DVD box of the software you just bought in your store of choice. Such a key is almost always unique and identifies exactly one copy of Microsoft Office. And the shop will gladly take your money when you buy a retail key from them. Only for you to receive a mail shortly afterwards stating that, sadly, retail keys are not in stock. But hey! They offer to upgrade you to a volume license key - maybe even together with an upgrade to a higher Office edition. As in: you bought a retail key for Microsoft Office 2024 Home and they offer to upgrade you to Microsoft Office 2024 Professional Plus.

See where this is going?

At this point you should decline that upgrade and demand your money back, as it's quite likely that you would receive a Generic Volume License Key. And of course, once you complain about activation problems, the shop will point out that "activation cannot be guaranteed", etc.

That's the reason why you should read the terms & conditions carefully prior to buying used software license keys.

Why not just use OpenSource software?

If you don't want to pay for your office software suite, why not just use LibreOffice? LibreOffice is OpenSource and free-of-charge. It can handle documents created with Microsoft Office as well. So why all the hassle?

If you've never heard of it before, feel free to read the Wikipedia article about it: https://en.wikipedia.org/wiki/LibreOffice


How to stop logging HTTP requests from check_http in Apache 2

Photo by Sena: https://www.pexels.com/photo/wood-covered-in-snow-10944259/

I encountered a situation where the HTTP(S) requests made by Icinga via the check_http check plugin seriously messed up the generated website traffic reports. The software used to generate said reports wasn't capable of filtering out certain requests based on User-Agent. Restructuring the report so that hits from the monitoring requests could be easily identified was also out of the question, as this would generate too much follow-up work for other involved parties & systems.

The solution (or rather: workaround...) was to simply omit the logging of said monitoring requests to the logfile of the Apache vHost.

The relevant line in the virtual host config was the following:

CustomLog /var/log/apache2/host.domain-tld-access.log combined

This defines where to store the logfile and uses the "NCSA extended/combined log format" as specified on https://httpd.apache.org/docs/2.4/mod/mod_log_config.html#formats and in /etc/apache2/apache2.conf on Debian.

root@host:~ # grep LogFormat /etc/apache2/apache2.conf
LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

SetEnvIf to the rescue!

Apache2 allows us to set environment variables for each request via the SetEnvIf directive (among others). This is documented on the page for mod_setenvif: https://httpd.apache.org/docs/2.4/mod/mod_setenvif.html#setenvif

A typical line in the access.log would look like the following:

256.133.7.42 - - [30/Dec/2024:05:46:18 +0100] "GET / HTTP/1.0" 302 390 "-" "check_http/v2.3.1 (monitoring-plugins 2.3.1)"

Since the requests target the URI / we can't exclude them based on Request_URI. Instead, we have to go by the supplied User-Agent. Fortunately this one is quite unique.

To catch future versions we use a wildcard regex for the version numbers. As environment variable name I chose monitoring.

# Don't log Icinga monitoring requests
SetEnvIf User-Agent "check_http/v(.*) \(monitoring-plugins (.*)\)" monitoring
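Before touching the Apache config, the regex can be sanity-checked against the User-Agent string from the log line above - a quick sketch using grep -E (whose ERE syntax is comparable to the PCRE Apache uses for this pattern):

```shell
# The User-Agent string the monitoring plugin sends
ua='check_http/v2.3.1 (monitoring-plugins 2.3.1)'

# Same pattern as in the SetEnvIf directive above
regex='check_http/v(.*) \(monitoring-plugins (.*)\)'

if printf '%s\n' "$ua" | grep -Eq "$regex"; then
    echo "match"        # this branch is taken
else
    echo "no match"
fi
```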

Now all that is needed is to adapt the CustomLog directive in the virtual host configuration. As the documentation for the CustomLog directive explains, we can access environment variables via the env= option. Negating it will simply omit all requests with that environment variable set.

Therefore our resulting CustomLog line now looks like this:

CustomLog /var/log/apache2/admin.brennt.net-access.log combined env=!monitoring

Now just check your configuration for errors via apache2ctl configtest and restart your Apache webserver.


Using ping to monitor if your systems are working is wrong

Generated via Dall-E

I've seen this far too often: when monitoring a system, a simple ping command is used to verify that the system is "up and running". In reality nothing could be further from the truth, as this is not what you are actually checking.

Ping utilizes the Internet Control Message Protocol (ICMP) to send so-called echo request packets to a server, which the server answers with echo reply packets. This gives us a method to verify that the server is answering our requests. But just because the server answers the packets doesn't mean that the server itself is in working condition and that all services are running.

And this is for several reasons:

1. The network is reliable - mostly

In https://aphyr.com/posts/288-the-network-is-reliable Kyle Kingsbury, a.k.a. "Aphyr", and Peter Bailis discuss common network fallacies, similar to the Fallacies of distributed computing by L. Peter Deutsch. It is commonly assumed that the network "just works" and no strange things will happen - when in fact they happen all the time.

In regard to our ICMP ping this means:

  1. There can be a firewall blocking ICMP or simply all traffic from our monitoring system
  2. Routing can be misconfigured
  3. Datacenter outages can happen
  4. Bandwidth can be drastically reduced
  5. VLANs can be misconfigured
  6. Cables can be broken
  7. Switchports can be defective
  8. Add your own ideas what can go wrong in a network

And you do want a monitoring method which allows you to reliably distinguish between network and system problems.

2. CPU Lockups

ICMP packets are answered by the kernel itself. This can have the nasty side-effect of masking a server that literally hangs, trapped in a state known as either a soft or hard lockup. And while they are somewhat rare overall, CPU soft lockups still do occur from time to time in my experience - especially with earlier versions of hypervisors for virtual machines (VMs), as a CPU soft lockup can be triggered simply by too much CPU load on a system.

But the nasty side-effect of CPU soft lockups? The system will still reply to ICMP packets while all other services are unreachable.

I once had problems with the power management (ACPI) of a server's hardware. Somehow the ACPI kernel module would lock resources without freeing them. This effectively meant that the system came to a complete stop - but it didn't reboot or shut down. Nor did it crash in the sense of "completely unreachable". No, ICMP packets were still answered just fine.

There was just no SSH connection possible, no TCP or UDP services reachable, as the CPU was stuck at a certain operation and never switched to process other tasks.

Only disabling ACPI by adding the acpi=off parameter to the grub kernel boot command line "fixed" this.

Regarding soft lockups I can recommend reading the following:

  1. Linux Magic System Request Key Hacks: Here you learn how you can trigger a kernel panic yourself and how to configure a few things 
  2. https://www.baeldung.com/linux/terminal-kernel-panic also has a nice list of ways to trigger a kernel panic from the command line
  3. https://www.kernel.org/doc/Documentation/lockup-watchdogs.txt Kernel documentation regarding "Softlockup detector and hardlockup detector (aka nmi_watchdog)"
  4. This SuSE knowledge base article also has some good basic information on how to configure timers, etc. https://www.suse.com/support/kb/doc/?id=000018705

Takeaways

  1. ICMP is suited to check if the system is reachable in the network
    • After all, ICMP is more cost-effective than TCP in terms of packet size and the number of packets sent
  2. A TCP connect to the port providing the used service is usually better for the reasons stated above
  3. You must incorporate your network topology into your monitoring system; only then will you be able to properly distinguish between: "system unreachable", "misconfigured switchport" and "service stopped responding"
    • This means switches, firewalls, routers, loadbalancers, gateways - everything your users/services depend upon to be reachable must be included in your monitoring system, and:
  4. If possible the dependencies between them should be defined
    • Like: Router → Firewall → LoadBalancer → Switch → System → Service
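Point 2 of the takeaways can be sketched with plain bash: its /dev/tcp pseudo-device performs an actual TCP connect to the service port, which is exactly what ICMP cannot tell you. (Host and port below are placeholders, of course.)

```shell
#!/bin/bash
# Return 0 if a TCP connection to host $1, port $2 succeeds within 2 seconds.
# Unlike ping, this fails when the kernel still answers ICMP but the
# service's listening socket is gone (e.g. during a CPU soft lockup).
tcp_check() {
    timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if tcp_check "192.0.2.10" 443; then
    echo "service reachable"
else
    echo "service NOT reachable"
fi
```

A real check plugin does more (banner checks, latency thresholds), but this is the essential difference to an ICMP ping.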

Conclusion

Knowing all this, you should keep the following in mind: a successful ping only verifies that the system is reachable via your network. It doesn't imply anything about the state of the OS.

Yes, this is no mind-blowing truth that I'm revealing here. But I still encounter monitoring setups where ICMP is used to verify that a system is "up and running" far too often.


How to install DiagnosisTools on my Synology DiskStation 411, or: Why I love the Internet

Synology Inc. https://www.synology.com/img/products/detail/DS423/heading.png

I own a Synology DiskStation 411 - DS411 for short. It looks like the one in the picture - which actually shows the successor model, the DS423. The DS411 runs a custom Linux and is therefore missing a lot of common tools. Recently I had some network error which made my NAS unreachable from one of my Proxmox VMs. The debugging process was made harder as I had no tools such as lsof, strace or strings.

I googled a bit and learned that Synology offers a DiagnosisTool package which contains all these tools and more. The Package Center, however, showed no such package for me. From the search results I gathered that it should show up in the Package Center if there is a version compatible with the DSM version running on the NAS. So it seems there is no compatible version for my DS411 running DSM 6.2.4?

Luckily there is a command to install the tools: synogear install

root@DiskStation:~# synogear install
failed to get DiagnosisTool ... can't parse actual package download link from info file

Okay, that's uncool. But it still doesn't explain why we can't install the tools. What does synogear install actually do? Time to investigate. As we have no stat or whereis we have to resort to command -v, which is included in the POSIX standard. (Yes, which is available too, but as command is available everywhere, it's the better choice.)

root@DiskStation:~# command -v synogear
/usr/syno/bin/synogear
root@DiskStation:~# head /usr/syno/bin/synogear
#!/bin/sh

DEBUG_MODE="no"
TOOL_PATH="/var/packages/DiagnosisTool/target/tool/"
TEMP_PROFILE_DIR="/var/packages/DiagnosisTool/etc/"

STATUS_NOT_INSTALLED=101
STATUS_NOT_LOADED=102
STATUS_LOADED=103
STATUS_REMOVED=104

Nice! /usr/syno/bin/synogear is a simple shell script. Therefore we can just run it in debug mode and see what is happening without having to read every single line.

root@DiskStation:~# bash -x synogear install
+ DEBUG_MODE=no
+ TOOL_PATH=/var/packages/DiagnosisTool/target/tool/
+ TEMP_PROFILE_DIR=/var/packages/DiagnosisTool/etc/
+ STATUS_NOT_INSTALLED=101
[...]
++ curl -s -L 'https://pkgupdate.synology.com/firmware/v1/get?language=enu&timezone=Amsterdam&unique=synology_88f6282_411&major=6&minor=2&build=25556&package_update_channel=stable&package=DiagnosisTool'
+ reference_json='{"package":{}}'
++ echo '{"package":{}}'
++ jq -e -r '.["package"]["link"]'
+ download_url=null
+ echo 'failed to get DiagnosisTool ... can'\''t parse actual package download link from info file'
failed to get DiagnosisTool ... can't parse actual package download link from info file
+ return 255
+ return 255
+ status=255
+ '[' 255 '!=' 102 ']'
+ return 1
root@DiskStation:~#

The main problem seems to be an empty JSON response from https://pkgupdate.synology.com/firmware/v1/get?language=enu&timezone=Amsterdam&unique=synology_88f6282_411&major=6&minor=2&build=25556&package_update_channel=stable&package=DiagnosisTool and opening that URL in a browser confirms it. So there really seems to be no package for my model and DSM version combination.
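The failing step is easy to reproduce in isolation: jq's -e flag sets a non-zero exit status when the filter result is null - which is exactly what happens with the empty {"package":{}} answer the server returns (sketch below; assumes jq is installed):

```shell
# The (empty) JSON answer the Synology package server returned
reference_json='{"package":{}}'

# Same filter synogear uses; -e makes jq exit non-zero on a null result
download_url=$(echo "$reference_json" | jq -e -r '.["package"]["link"]')
echo "exit code: $? - download_url: $download_url"
# prints: exit code: 1 - download_url: null
```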

Through my search I also learned that there is a package archive at https://archive.synology.com/download/Package/DiagnosisTool/ which lists several versions of the DiagnosisTool - one package per CPU architecture. But that didn't give many clues either, as I wasn't familiar with many of the CPU architectures and nothing seemed to match my CPU. No Feroceon or 88FR131 or the like.

root@DiskStation:~# cat /proc/cpuinfo
Processor       : Feroceon 88FR131 rev 1 (v5l)
BogoMIPS        : 1589.24
Features        : swp half thumb fastmult edsp
CPU implementer : 0x56
CPU architecture: 5TE
CPU variant     : 0x2
CPU part        : 0x131
CPU revision    : 1

Hardware        : Synology 6282 board
Revision        : 0000
Serial          : 0000000000000000

As I didn't want to randomly install all sorts of packages for different CPU architectures - not knowing how good or bad Synology is at preventing the installation of non-matching packages - I opted for the r/synology subreddit and paused my side-quest at this point, focusing on the main problem.

Random redditors to the rescue!

Nothing much happened for two months and I had already forgotten about that thread. My problem with the VM had been solved in the meantime and I had no reason to pursue it any further.

Then someone replied. This person apparently had tried all sorts of packages for the DiskStation, additionally providing a link to the package that worked. It was there that I noticed something. The link provided was: https://global.synologydownload.com/download/Package/spk/DiagnosisTool/1.1-0112/DiagnosisTool-88f628x-1.1-0112.spk and I recognized the string 88f628x but couldn't pin down where I had spotted it before. Then it dawned on me: 88f for the Feroceon 88FR131 and 628x for all Synology 628x boards. Could this really be it?

Armed with this information I quickly identified https://global.synologydownload.com/download/Package/spk/DiagnosisTool/3.0.1-3008/DiagnosisTool-88f628x-3.0.1-3008.spk as the last version for my DS411 and installed it via the Package Center GUI (button "Manual installation", then navigating to the downloaded .spk file on my computer).

Note: The .spk file seems to be a normal compressed tar archive and can be opened with the usual tools. The file structure inside is roughly the same as in any RPM or DEB package, making it easy to understand what happens during/after package installation.
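That's easy to verify, since tar detects the format from the file content, not the extension. A tiny self-contained sketch (using a throwaway archive instead of the real .spk):

```shell
# Build a dummy archive with an .spk extension ...
tmp=$(mktemp -d)
echo 'package = "DiagnosisTool"' > "$tmp/INFO"
tar -czf "$tmp/demo.spk" -C "$tmp" INFO

# ... and list its contents like any other tarball
tar -tzf "$tmp/demo.spk"   # prints: INFO
```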

The installation went fine, no errors reported and after that synogear install worked:

root@DiskStation:~# synogear install
Tools are installed and ready to use.
DiagnosisTool version: 3.0.1-3008

And I was finally able to use lsof!

root@DiskStation:~# lsof -v
lsof version information:
    revision: 4.89
    latest revision: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/
    latest FAQ: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/FAQ
    latest man page: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/lsof_man
    constructed: Mon Jan 20 18:58:20 CST 2020
    compiler: /usr/local/arm-marvell-linux-gnueabi/bin/arm-marvell-linux-gnueabi-ccache-gcc
    compiler version: 4.6.4
    compiler flags: -DSYNOPLAT_F_ARMV5 -O2 -mcpu=marvell-f -include /usr/syno/include/platformconfig.h -DSYNO_ENVIRONMENT -DBUILD_ARCH=32 -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -g -DSDK_VER_MIN_REQUIRED=600 -pipe -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -D_FORTIFY_SOURCE=2 -O2 -Wno-unused-result -DNETLINK_SOCK_DIAG=4 -DLINUXV=26032 -DGLIBCV=215 -DHASIPv6 -DNEEDS_NETINET_TCPH -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE -DLSOF_ARCH="arm" -DLSOF_VSTR="2.6.32" -I/usr/local/arm-marvell-linux-gnueabi/arm-marvell-linux-gnueabi/libc/usr/include -I/usr/local/arm-marvell-linux-gnueabi/arm-marvell-linux-gnueabi/libc/usr/include -O
    loader flags: -L./lib -llsof
    Anyone can list all files.
    /dev warnings are disabled.
    Kernel ID check is disabled.

The redditor was also so nice as to let me know that I have to execute synogear install each time before I can use these tools. Huh? Why? Shouldn't they be in my PATH?

Turns out: no, the directory /var/packages/DiagnosisTool/target/tool/ isn't included in our PATH environment variable.

root@DiskStation:~# echo $PATH
/sbin:/bin:/usr/sbin:/usr/bin:/usr/syno/sbin:/usr/syno/bin:/usr/local/sbin:/usr/local/bin

synogear install does that: it copies /etc/profile to /var/packages/DiagnosisTool/etc/.profile while removing all lines starting with PATH or export PATH, adds the tool directory to the new PATH, exports it, and sets the new .profile file in the ENV environment variable.

Most likely this is just a precaution for novice users.

And to check whether the tools are "loaded" they simply grep whether /var/packages/DiagnosisTool/target/tool/ is included in $PATH.

So yeah, there is no technical reason preventing us from just adding /var/packages/DiagnosisTool/target/tool/ to $PATH and being done with it.
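In other words, one line - e.g. in root's .profile - achieves the same thing (path taken from the package as shown above; adjust if yours differs):

```shell
# Make the DiagnosisTool binaries available without running synogear install
export PATH="$PATH:/var/packages/DiagnosisTool/target/tool/"
```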

And this is why I love the internet. I would never have figured that out if someone hadn't posted a reply and included the link that worked.

New learnings

Synology mailed me about critical security vulnerabilities present in DSM 6.2.4-25556 Update 7 which are fixed in 6.2.4-25556 Update 8 (read the Release Notes). However, the update wasn't offered to me via the normal update dialog in DSM, as it is a staged rollout. Therefore I opted to download a patch file from the Synology support site. This means I did not download the whole DSM package for a specific version, but just the patch from one version to another. And here I noticed that the architecture name is included in the patch filename. Nice.

If you visit https://www.synology.com/en-global/support/download/DS411?version=6.2 do not just click on Download; instead choose your current DSM version and the target version you wish to update to. You are then offered a patch file, and the identifier/name of the architecture your DiskStation uses is part of the filename.

This should be a reliable way to identify the architecture for all Synology models - in case it isn't clear from the CPU/board name, etc., as in my case.

List of included tools

Someone asked me for a list of all tools included in the DiagnosisTool package. Here you go:

admin@DiskStation:~$ ls -l /var/packages/DiagnosisTool/target/tool/ | awk '{print $9}'
addr2line
addr2name
ar
arping
as
autojump
capsh
c++filt
cifsiostat
clockdiff
dig
domain_test.sh
elfedit
eu-addr2line
eu-ar
eu-elfcmp
eu-elfcompress
eu-elflint
eu-findtextrel
eu-make-debug-archive
eu-nm
eu-objdump
eu-ranlib
eu-readelf
eu-size
eu-stack
eu-strings
eu-strip
eu-unstrip
file
fio
fix_idmap.sh
free
gcore
gdb
gdbserver
getcap
getpcaps
gprof
iftop
iostat
iotop
iperf
iperf3
kill
killall
ld
ld.bfd
ldd
log-analyzer.sh
lsof
ltrace
mpstat
name2addr
ncat
ndisc6
nethogs
nm
nmap
nping
nslookup
objcopy
objdump
perf-check.py
pgrep
pidof
pidstat
ping
ping6
pkill
pmap
ps
pstree
pwdx
ranlib
rarpd
rdisc
rdisc6
readelf
rltraceroute6
run
sa1
sa2
sadc
sadf
sar
setcap
sid2ugid.sh
size
slabtop
sockstat
speedtest-cli.py
strace
strings
strip
sysctl
sysstat
tcpdump_wrapper
tcpspray
tcpspray6
tcptraceroute6
telnet
tload
tmux
top
tracepath
traceroute6
tracert6
uptime
vmstat
w
watch
zblacklist
zmap
ztee

"It's always DNS."

Photo by Visual Tag Mx: https://www.pexels.com/photo/white-and-black-scrabble-tiles-on-a-white-surface-5652026/

"It's always DNS."
   - Common saying among system administrators, developers and network admins alike.

Recently my blogpost about Puppet's move to go semi-open-source gained some attention and I grew curious where it was mentioned and what people thought about it. Therefore I did a quick search for "puppet goes enshittyfication" and was presented with a few results. Mostly Mastodon posts, but also one website from Austria (the country without kangaroos 😁). Strangely, that site also copied my site title, not just the article title, as it showed up as "Feuerfest | Puppet goes enshittyfication".

Strange.

I clicked on it and received a certificate warning that the domain in the certificate doesn't match the domain I'm trying to visit.

I ignored the warning and was presented with a 1:1 copy of my blog. Just the images were missing. Huh? What? Is somebody copying my blog?

A short whois on the domain name revealed nothing shady. It belonged to an Austrian organization whose goal is to inform about becoming a priest in the Catholic Church and to help seminarians. Ok, so definitely nothing shady.

I looked at the certificate and... what? It was issued for "admin.brennt.net" by Let's Encrypt. That shouldn't be possible from all I know, as that domain is validated against my Let's Encrypt account. I checked the certificate's fingerprints and... they were identical. Huh?
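For reference, comparing fingerprints boils down to one openssl x509 call. The sketch below generates a throwaway self-signed certificate just to have something to work on - against a live server you would instead feed it the certificate obtained via openssl s_client -connect host:443:

```shell
# Create a throwaway key + self-signed certificate (the CN is illustrative)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=admin.brennt.net" \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null

# Print the certificate's SHA-256 fingerprint for comparison
openssl x509 -in /tmp/demo-cert.pem -noout -fingerprint -sha256
```

Two certificates with identical fingerprints are byte-for-byte the same certificate.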

That would mean that either someone managed to get the private key for my certificate (not good!) or created a fake private key which somehow a webserver accepted. And wouldn't Firefox complain about that or would the TLS handshake fail? (If somebody knows the answer to this, please comment. Thank you!)

I was confused.

Maybe the IP/hoster of the server will shed some light on this?

Aaaaand it was the current IP of this blog/host. Nothing shady. Nothing strange. Just orphaned DNS records from a long-gone web project.

As I know that Google - and probably every other search engine too - doesn't like duplicate content, I helped myself with a RewriteRule inside this vHost.

# Rewrite for old, orphaned DNS records from other people..
RewriteEngine On
<If "%{HTTP_HOST} == 'berufungimzentrum.at'">
    RewriteRule "^(.*)$" "https://admin.brennt.net/please-delete.txt"
</If>

Now everyone visiting my site via "that other domain" will receive a nice txt file asking them to please remove/change the DNS entries.

It certainly IS always DNS.

EDIT: As I regularly read the logfiles of my webserver I noticed that there are additional domains which point to this host. So I added additional If-statements to my vHost config and added the domains to the txt file. 😇


Puppet goes enshittyfication (Updated)

Photo by Pixabay: https://www.pexels.com/photo/computer-screen-turned-on-159299/

This one came unexpectedly. Puppetlabs, the company behind the configuration management software Puppet, was purchased by Perforce Software in 2022 (and renamed "Puppet by Perforce") and now, in November 2024, we start to see the fallout of this.

As Puppetlabs announced on November 7th 2024 in a blogpost, they are, to use a euphemism, "moving the Puppet source code in-house."

Or in their words (emphasis mine):

In early 2025, Puppet will begin to ship any new binaries and packages developed by our team to a private, hardened, and controlled location. Our intention with this change is not to limit community access to Puppet source code, but to address the growing risk of vulnerabilities across all software applications today while continuing to provide the security, support, and stability our customers deserve.

They then go on at length about why this doesn't affect customers/the community and that the community can still get access to that private, hardened and controlled location via a separate "development license (EULA)". However, currently no information about the nature of that EULA is known, and information will only be released in early 2025 - i.e. after the split has been made.

To say it bluntly: I call bullshit. The whole talk about security and supply-chain risks is nonsense. If Puppet really wanted to enhance the technical security of their software product they could have achieved that in a myriad of other ways:

  • Like integrating a code scanner into their build pipelines
  • Doing regular code audits (with/from external companies)
  • Introducing a four-eyes-principle before commits are merged into the main branch
  • Participating in bug-bounty programs
  • And so on...

There is simply no reason to limit access to the source code to achieve that goal. (And I'm not even touching the topic of FUD, or how open source enables easier & faster spotting of software defects/vulnerabilities.) Therefore this can only be viewed as a straw-man argument to masquerade the real intention. I see it as what it truly is: an attempt to maximize revenue. After all, Perforce wants to see a return on their investment. And as fair as that is, the "how" leaves much to be desired...

We already have some sort of proof. Ben Ford, a former Puppet employee, wrote a blogpost Everything is on fire... in mid-October 2024 describing the negative experience he had with the executives and vice-presidents of Puppet in 2023 when he tried to explain the community to them (emphasis mine):

"I know from personal experience with the company that they do not value the community for itself. In 2023, they flew me to Boston to explain the community to execs & VPs. Halfway through my presentation, they cut me off to demand to know how we monetize and then they pivoted into speculation on how we would monetize. There was zero interest in the idea that a healthy community supported a strong ecosystem from which the entire value of the company and product was derived. None. If the community didn’t literally hand them dollars, then they didn’t want to invest in supporting it."

I find that astonishing. Puppet is such a complex piece of configuration management software, with so many community-developed tools supporting it (think of g10k), that a total neglect of everything the community has achieved is mind-blowingly short-sighted. Puppet itself doesn't manage to keep up with all the tools they have released in the past. The fates of tools like Geppetto or the Puppet plugin for IntelliJ IDEA speak for themselves. Promising a fully-fledged Puppet IDE based on Eclipse and then letting it rot? No official support from "Puppet by Perforce" for one of the most used and commercially successful integrated development environments (IDEs)? Wow. This is work the community contributes. And as we now know, Puppet doesn't give a damn about that. Cool.

EDIT November 13th 2024: I forgot to add some important facts regarding Puppet's ecosystem and the community:

I consider at least the container images to be of utmost priority for many customers. And neglecting all the important tools around your core product isn't going to help either. These are exactly the types of requirements customers have/questions they ask:

  • How can we run it ourselves? Without constantly buying expensive support.
  • How long will it take until we build up sufficient experience?
  • What technology knowledge do our employees need in order to provide a flawless service?
  • How can we ease the routine tasks? / What tools are there to help us during daily business?

Currently Puppet has a steep learning curve which is only made easier thanks to the community. And now we know Perforce doesn't see this as any kind of addition to their company's value. Great.

EDIT END

The most shocking part for me was that the Apache 2.0 license doesn't require the source code itself to be available. Simply publishing a changelog should be enough to stay legally compliant (not talking about staying morally compliant here...). And as he pointed out in another blogpost from November 8th, 2024, there is a reason they cannot change the license (emphasis mine, matching the author's):

"But here’s the problem. It was inconsistently maintained over the history of the project. It didn’t even exist for the first years of Puppet and then later on it would regularly crash or corrupt itself and months would go by without CLA enforcement. If Perforce actually tried to change the license, it would require a long and costly audit and then months or years of tracking down long-gone contributors and somehow convincing them to agree to the license change.

I honestly think that’s the real reason they didn’t change the license. Apache 2 allows them to close the source away and only requires that they tell you what they changed. A changelog might be enough to stay technically compliant. This lets them pretend to be open source without actually participating."

I absolutely agree with him. They wanted to go closed source but simply couldn't, as back then they never intended to. Or as someone on the internet said: "Luckily the CLA is ironclad." So instead they did what was possible: moving the source code to an internal repository and using that as the source for all official Puppet packages. But will we as a community still have access to those packages?

For me, based on how the blogpost is written, I tend to say: no. Say goodbye to https://yum.puppetlabs.com/ and https://apt.puppetlabs.com/. Say goodbye to an easy way of getting your Puppet Server, Agent, and Bolt packages for your Linux distribution of choice.
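For administrators who want to prepare for that scenario, one stopgap is to manage the vendor repository explicitly so its disappearance is at least noticed immediately. This is only a sketch, assuming the puppetlabs-apt module; the release codename lookup and GPG keyring details are illustrative, not verified against a current module version:

```puppet
# Hypothetical sketch using the puppetlabs-apt module: pin the current
# vendor repository explicitly while https://apt.puppetlabs.com/ is
# still reachable. If the repo goes dark, the next agent run fails
# loudly instead of silently drifting. Key details are illustrative.
apt::source { 'puppetlabs':
  location => 'https://apt.puppetlabs.com',
  release  => $facts['os']['distro']['codename'],
  repos    => 'puppet8',
  key      => {
    'name'   => 'puppetlabs-keyring.gpg',
    'source' => 'https://apt.puppetlabs.com/keyring.gpg',
  },
}
```

Mirroring the downloaded packages into an internal artifact repository on top of this would remove the dependency on the vendor repo entirely.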

Update 15th November 2024: I asked Ben Ford in one of his LinkedIn posts whether there is a decision regarding the repositories, and he replied: "We will be meeting next week to discuss details. We'll all know a bit more then." As good as it is that this topic isn't off the table, it still adds to the current uncertainty. Personally I would have thought that those are the details you finalize before making such an announcement. But ah well, everyone is different. Update End

A myriad of new problems

This creates a myriad of new problems.

1. We will see a divergence between "Puppet by Perforce" packages and community packages, in technical compatibility and, most likely, functionality too.

This mostly depends on how well Puppet contributes back to the open source repository and/or how well-written the changelog is. Regarding the latter, I invite you to check some of their release notes for new Puppet Server versions (Puppet Server 7 Release Notes / Puppet Server 8 Release Notes), although it got better with version 8... Granted, they already stated the following in their blogpost (emphasis mine):

We will release hardened Puppet releases to a new location and will slow down the frequency of commits of source code to public repositories.

This means customers using the open source variant of Puppet will be in somewhat dangerous waters regarding compatibility with the commercial variant, and vice versa. And I'm not only speaking about compatibility between Puppet servers from different packages, but also about things like how the Puppet CA works or how catalogs are generated. Keep in mind: migrating from one product to the other can also get significantly harder.

This could even affect Puppet module development, in case the commercial Puppet server contains resource types the community-based one doesn't, or implements them slightly differently. This will affect customers on both sides badly and doesn't reflect well on Puppet. Is Perforce sure this move wasn't sponsored by RedHat/Ansible, their biggest competitor?
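To make that risk concrete, here is a hypothetical manifest sketch. The resource type name is invented; it stands for any type or parameter that only one of the two server variants ships:

```puppet
# Hypothetical: 'hardened_file' stands in for any resource type (or
# parameter) that exists only in the commercial (or only in the
# community) Puppet server. On the other variant, catalog compilation
# fails with an "unknown resource type" error, so the same manifest
# and the modules built on it can no longer serve both fleets.
hardened_file { '/etc/motd':
  ensure  => file,
  content => "Managed by one Puppet server variant only\n",
}
```

Module authors would then have to guard such resources behind feature checks or maintain two branches, which is exactly the kind of ecosystem fragmentation described above.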

2. Documentation disaster

As bad as the state of the Puppet documentation is (I find it extremely lacking in every aspect), at least there is one, and it's the only one. Starting in 2025 we will have two sets of documentation. Have fun working out the kinks.

Additionally, the documentation won't get better. Apparently, as this source states, "Perforce actually axed our entire Docs team!" How good will a changelog or the documentation be when it's nobody's responsibility?

3. Community provided packages vs. vendor packages

Nearly all customers I've worked with have some kind of policy regarding what type of software is allowed on a system and how that is defined. Sometimes this goes as far as "No community-supported software. Only software with official vendor support." Starting in 2025, this would mean these customers would need to move to Puppet Enterprise and ditch the community packages. The problem I foresee is this: many customers already use Ansible in parallel, and most operations teams are tired of having to use two configuration management solutions. This gives them a strong argument in favour of Ansible. Especially in times of economic hardship and budget cuts.

But again: having packages from Puppet itself at least makes sure you have the same packages everywhere. In 2025, when the sole primary source for those packages goes dark, numerous others are likely to appear. Remember Oracle's move to make the Oracle JDK a paid-only product for commercial use? And how that fostered the creation of dozens of different JDK builds? That's a quite possible scenario for the Puppet Server and Agent too, although I doubt we will ever see more than 3-4 viable parallel solutions at any time, given the amount of work involved and the fact that Puppet isn't as widely required as a JDK is. Still, this poses a huge operational risk for every customer.

4. Was this all intentional?

I'm not really sure whether Perforce considered all this. However, they are not stupid; they surely must see the ramifications of their change. And this will lead customers to ask themselves ugly questions. Questions like: "Was this kind of uncertainty intentional?" This shatters trust on a basic level. Trust that might never be regained.

5. Community engagement & open source contributors

Another big question is community engagement. We now have a private equity company that cares nothing about the community, and the community knows this. There has already been a drop in activity since the acquisition of Puppet by Perforce, and I think this trend will continue. After all, the current situation amounts to: "We take everything from the community that we want, but decide very carefully if and what we give back in return." This doesn't work for many open source contributors. And it is the main reason why many will view Puppet as closed source from 2025 onward, despite it technically still being open source. But again: the community values morals and ethics higher than legal correctness.

So, where are we heading?

Personally, I wouldn't be too surprised if this is the moment we look back on in the future and say: "This was the start of the downfall. This is when Puppet became more and more irrelevant, until its demise." My personal viewpoint is that Puppet has lacked vision and discipline for some years. Lots of stuff was created, promoted, and abandoned. Lots of stuff was handed over to the community to maintain. But still the ecosystem wasn't made easier, wasn't streamlined. Documentation wasn't as detailed as it should be. Tools and command-line clients lacked certain features you had to work around yourself. And so on. I even ditched Puppet for my homelab recently in favour of Ansible. The overhead of just keeping Puppet running, the extra work generated by Puppet itself: Ansible doesn't have any of that.

In my text Get the damn memo already: Java11 reached end-of-life years ago I wrote:

If you sell software, every process involved in creating that piece of software should be treated as part of your core business and main revenue stream. Giving it the attention it deserves. If you don't, I'm going to make a few assumptions about your business. And those assumptions won't be favourable.

And this includes the community around your product. Even more so for open source software.

Second, I won't be surprised if many customers don't take the bait: they won't switch to a commercial license, will ride Puppet as long as it is feasible, and will then just switch to Ansible.

Maybe we will also see a fork, giving the community the possibility to break with Puppet's functionality and no longer having to maintain compatibility.

Time will tell.

New developments

This was added on November 15th: the first community-built packages are now available. So it looks like a fork is happening.

Read: https://overlookinfratech.com/2024/11/13/fork-announce/

EDIT: A newer post regarding the developments around the container topic and fork is here: Puppet is dead. Long live OpenVox!
