Feuerfest

Just the private blog of a Linux sysadmin

MeTube: A selfhosted WebUI for yt-dlp

Most people will have heard of youtube-dl and the legal battles around it, as the content industry only saw it as a tool for piracy. And yes, while many may use it solely for that, there is also a big group of people who want to download single TV news segments (videos) or documentaries produced by public-service broadcasters such as ARTE, ARD and ZDF, to name a few German ones. youtube-dl was sued into oblivion, but as it was open source, other forks were created, with yt-dlp being the currently active one.

yt-dlp is, however, not the only program offering this kind of functionality. There is also MediathekView, which gathers all broadcasters' programme information and allows for the easy download of all content. Why isn't MediathekView sued? It only allows downloading content from the online Mediathek of public broadcasting companies, such as ARD, ZDF, Arte, 3sat, SWR, BR, MDR, NDR, WDR, HR, RBB, ORF and SRF, all of which are public broadcasters from Germany, Austria and Switzerland. Hence no problems with third-party rights exist.

But... MediathekView is a local application, and I like to have a simple web frontend usable from any device. Enter MeTube: a web frontend built around yt-dlp, provided as a Docker container. The WebUI itself is minimalistic but does its job.

Screenshot of the MeTube WebUI

In my environment MeTube is configured to save videos to a share on my NAS, automatically storing them in the correct folder.

Mounting the CIFS-Share is done via the following line in /etc/fstab:

root@portainer:~# cat /etc/fstab
# /etc/fstab: static file system information.
[...]
# Metube Mount
//ip.ip.ip.ip/video/yt-dlp /mnt/yt-dlp cifs rw,vers=3.0,credentials=/root/.fileserver_smbcredentials,dir_mode=0775,file_mode=0775,uid=1002,gid=1002
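
The credentials file referenced in the mount options keeps the share login out of the world-readable /etc/fstab. A minimal sketch of its format (the username, password and domain values here are placeholders, not my actual config; the file should be chmod 600):

```ini
# /root/.fileserver_smbcredentials
username=metube
password=change-me
domain=WORKGROUP
```

After creating both the fstab entry and the credentials file, a mount -a picks up the new entry without a reboot.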

Then we use that local mount for the /downloads folder of the Docker container. After the first start I learned that I additionally need to explicitly place TEMP_DIR and STATE_DIR on local volumes on the Docker container host itself. After that I fixed the healthcheck, as it is hardcoded for HTTP.

services:
  metube:
    image: ghcr.io/alexta69/metube
    container_name: metube
    restart: unless-stopped
    ports:
      - "8081:8081"
    volumes:
      # On CIFS-Share, mounted via /etc/fstab
      - /mnt/yt-dlp:/downloads
      # HTTPS
      - /opt/docker/certs/portainer.lan.crt:/ssl/crt.pem
      - /opt/docker/certs/portainer.lan.key:/ssl/key.pem
      # Local volumes to make CIFS-Share work
      - /opt/docker/metube/temp:/temporary
      - /opt/docker/metube/state:/state
    environment:
      - PUID=1002
      - PGID=1002
      # HTTPS
      - HTTPS=true
      - CERTFILE=/ssl/crt.pem
      - KEYFILE=/ssl/key.pem
      # Needed as our /downloads folder is located on a CIFS-Share
      - TEMP_DIR=/temporary
      - STATE_DIR=/state
      # Downloaded files are deleted on the server, when they are trashed from the "Completed" section of the UI
      #  - Will delete files from /download! This is not to clear some kind of cache
      #- DELETE_FILE_ON_TRASHCAN=true
      #
      # yt-dlp options
      # Download best video (not higher than 1080p) & audio in german language, if no german is available use english, else use default
      # Note 1: Not every language has a separate "audio only" track, this is why we use best[language...], as ba[language=...] only matches "audio only" tracks
      # Note 2: yt-dlp can't reliably identify the automatically generated audiotracks as these are not clearly listed as such in the metadata
      #   format-sort res:1080 means: Not higher than 1080p
      #   ^=de means we also accept de-DE or de-AT; similarly, ^=en matches en-GB, en-US, etc.
      - 'YTDL_OPTIONS={ "format-sort": "res:1080", "format": "best[language^=de]/best[language^=en]/b", "merge_output_format": "mp4" }'
    # Internal container healthcheck is hardcoded for HTTP
    healthcheck:
        test: ["CMD-SHELL", "curl -fsS --insecure -m 2 https://localhost:8081/ || exit 1"]
        interval: 60s
        timeout: 10s
        retries: 3
        start_period: 10s

The only feature I currently miss is the ability to select which audio track to download along with the video. However, such an option would only solve one part of the problem: the other part is that each site would need to clearly state which audio track contains which language, and that isn't done reliably even on YouTube.

Hence I help myself by passing some options to yt-dlp via YTDL_OPTIONS. My config 'YTDL_OPTIONS={ "format-sort": "res:1080", "format": "best[language^=de]/best[language^=en]/b", "merge_output_format": "mp4" }' means: sort the videos by resolution and treat 1080p as the highest (best); videos in a higher resolution won't be considered for download. Based on that, we download the best video/audio available in German; if nothing is available in German we use English, and if that isn't available either, we just use the default.

As stated in the comments of the compose file, the problem is that not every language has an audio-only track, and only for those would a bestaudio filter like ba[language^=en] work. I encountered too many videos where the language was part of the video track, so I switched to just using best[language^=...], which works more reliably in my experience. And yes, these settings override everything you select in the web frontend.
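To illustrate the fallback behaviour of such a filter chain, here is a toy model in JavaScript. This is not how yt-dlp actually implements format selection; the format objects and predicates are made up purely to show how best[language^=de]/best[language^=en]/b falls through:

```javascript
// Toy model of a yt-dlp-style fallback chain: try each selector in order
// and return the first format that matches. Purely illustrative.
function pickFormat(formats, selectors) {
  for (const sel of selectors) {
    const match = formats.find(sel);
    if (match) return match;
  }
  return null;
}

// Hypothetical format list, loosely modelled on what -F prints
const formats = [
  { id: "137",  language: "en-US" },
  { id: "96-5", language: "de-DE" },
];

// "best[language^=de]/best[language^=en]/b" as a chain of predicates
const chain = [
  (f) => f.language && f.language.startsWith("de"),
  (f) => f.language && f.language.startsWith("en"),
  () => true, // "b": plain best, no language filter
];

console.log(pickFormat(formats, chain).id); // "96-5" -- the German track wins
```

If no German or English track exists, the final `() => true` predicate simply picks the first (best) format, mirroring the `/b` fallback.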

Nonetheless, for the edge cases, being able to define settings for a single download would be nice.

Last tips

If you encounter any problems when downloading a video with yt-dlp, make sure you have a JavaScript runtime installed (MeTube uses deno). You either need to specify the path to it in the config file or on the command line via yt-dlp --js-runtimes deno:/path/to/deno -F URL, as only then will -F show you all available formats. Otherwise some can be missing.

Secondly, use -s to simulate a run when testing your config. For my config above: yt-dlp --js-runtimes deno:C:\Users\Khark\.deno\bin\deno.exe -s -S res:1080 -f "best[language^=de]/best[language^=en]/b" URL

Also pay attention to the log line that states which video and audio tracks are being downloaded: [info] YouTube-VideoID: Downloading 1 format(s): 96-5. This number corresponds directly to the format IDs you can retrieve with -F. Generally, when your filter isn't working, yt-dlp mostly downloads the default video and audio track.

In general, I can recommend reading the yt-dlp README on Format Selection.

Comments

A better approach to enable line-numbers in Bludit code-blocks

In my posts Things to do when updating Bludit and Bludit and syntax highlighting I detailed how to get line numbers displayed. It always annoyed me that this involved direct changes to the code of TinyMCE's codesample plugin.

Today I updated to Bludit 3.18.4 and did the manual changes, but alas, no line numbers. Nothing worked and no errors were displayed. Strange.

I took the problem to ClaudeAI and it recommended a whole different approach, which I like even better: creating a js/custom-linenumbers.js file in the admin theme and integrating it via a <script> element. The js/custom-linenumbers.js takes care of changing the <pre class="language- tags by adding line-numbers. This all happens in the background and is saved when the article is saved. Nice!

This effectively means I don't have to change code manually anymore. I just add one line to the bl-kernel/admin/themes/booty/index.php file and maybe update prism.js/prism.css from time to time, ensuring that the line-numbers plugin is included. Sweet!

How to integrate the custom-linenumbers.js file into the Bludit admin theme

First we create the following file as bl-kernel/admin/themes/booty/js/custom-linenumbers.js:

console.log('Loaded: custom-linenumbers.js');
// Adds the line-numbers plugin from PRISM to all <pre class="language- tags
document.addEventListener('DOMContentLoaded', function() {
    if (typeof tinymce !== 'undefined') {
        tinymce.on('AddEditor', function(e) {
            var editor = e.editor;
            editor.on('GetContent', function(evt) {
                if (evt.content && evt.content.indexOf('language-') !== -1) {
                    // Create temporary DOM-Element
                    var tempDiv = document.createElement('div');
                    tempDiv.innerHTML = evt.content;

                    // Check <pre> Tags
                    var codeBlocks = tempDiv.querySelectorAll('pre[class*="language-"]');

                    codeBlocks.forEach(function(block) {
                        // Only add line-numbers if it isn't present
                        if (!block.className.includes('line-numbers')) {
                            block.className += ' line-numbers';
                            console.log('Added line-numbers to:', block.className);
                        }
                    });

                    // Return modified content
                    evt.content = tempDiv.innerHTML;
                }
            });
        });
    }
});

This code is responsible for changing any occurrence of class="language- to class="language- line-numbers while keeping the selected language.
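Outside the browser, the same transformation can be sketched with a plain string replace. This is a simplification of the DOM-based handler above, not the plugin's own code; the regex is my own and only covers the class="language-..." pattern the editor emits:

```javascript
// Add "line-numbers" to every <pre class="language-..."> that lacks it.
// Simplified, string-based sketch of what the GetContent handler does.
function addLineNumbers(html) {
  return html.replace(/<pre class="(language-[^"]*)"/g, (match, cls) =>
    cls.includes("line-numbers") ? match : `<pre class="${cls} line-numbers"`
  );
}

console.log(addLineNumbers('<pre class="language-bash"><code>ls</code></pre>'));
// <pre class="language-bash line-numbers"><code>ls</code></pre>
```

Running it twice on the same input changes nothing, which mirrors the "only add line-numbers if it isn't present" check in the editor hook.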

Secondly, we make Bludit load this script file by adding a line to bl-kernel/admin/themes/booty/index.php. Open it and search for the closing head element (I mean </head>). The included plugins should be right above it.

Here, add the following: <script src="<?php echo DOMAIN_ADMIN_THEME.'js/custom-linenumbers.js' ?>"></script>

So your result will look like this:

[...]
        ?>

        <!-- Plugins -->
        <?php Theme::plugins('adminHead') ?>
        <script src="<?php echo DOMAIN_ADMIN_THEME.'js/custom-linenumbers.js' ?>"></script>

</head>
[...]

That's it.

You still need to update your bl-plugins/prism/css/prism.css and bl-plugins/prism/js/prism.js files with a version downloaded from https://prismjs.com/. Make sure your download includes the line-numbers plugin; it's not selected by default. Also enable the Prism plugin in Bludit beforehand if you haven't already. Sadly the Bludit Prism plugin doesn't include the line-numbers plugin, so we need to go this route.

Verify it works

Log into Bludit and hit F12 to open the browser console. You should see the message Loaded: custom-linenumbers.js there.

Create a new post, add a code block and select any language. On saving, you should see the message Added line-numbers to: ...

If you want to get rid of the messages, comment them out by changing the lines to //console.log...


Termix: SelfHosted connection manager

I finally got around to setting myself up with a Termix instance (their GitHub). It's a connection manager for various protocols (SSH, RDP, Telnet, etc.), accessible via a web frontend. Termix itself runs inside a Docker container.

Here is a view of the web frontend (I resized the window to make it smaller). I generated a new SSH key solely for connecting from Termix to the configured hosts. Then I added the public key to two hosts, put them inside a folder for a better overview, hit connect, and it works.

Screenshot of the Termix WebUI

It supports the creation of tunnels and various other options, too. So far I have only used it with SSH, so I can't say much regarding RDP (or Telnet 😂). Having this reachable via HTTPS could be a nice solution in environments where direct SSH (and VPN) is blocked.

The docker compose file

I configured SSL with certificates from my own CA. These are mounted read-only into the container under /certs. This all works without Traefik, Caddy or Nginx for SSL.

services:
  termix:
    image: ghcr.io/lukegus/termix:latest
    container_name: termix
    restart: unless-stopped
    environment:
      - ENABLE_SSL=true
      - SSL_PORT=8443
      - SSL_DOMAIN=host.tld
      - PORT=8080
      - SSL_CERT_PATH=/certs/host.tld.crt
      - SSL_KEY_PATH=/certs/host.tld.key
    ports:
      - "6666:8443"
    volumes:
      - /opt/docker/termix/data:/app/data
      # Mount cert-dir for certificates read-only
      - /opt/docker/certs/:/certs:ro

A welcome surprise

I was pleasantly surprised to notice that the Termix Docker container automatically reported "Healthy" in my dashboard, without me ever having defined a healthcheck.

It turns out Termix is one of those rare projects that define a healthcheck in the container image itself:

root@host:~# docker inspect termix | grep -A 20 Healthcheck
            "Healthcheck": {
                "Test": [
                    "CMD-SHELL",
                    "wget -q -O /dev/null http://localhost:30001/health || exit 1"
                ],
                "Interval": 30000000000,
                "Timeout": 10000000000,
                "StartPeriod": 60000000000,
                "Retries": 3
            },

Nice!


WHATWG, Firefox and bad ports

When I set up my Termix instance I used port 6666/tcp. However, on my first visit I wasn't greeted by the Termix login page but by a Firefox error message, one I had never encountered before.

Huh? What? I use all kinds of strange ports in my home network and never got that error message.

I was kind of annoyed that there was no button labeled "I know the risk, take me there anyway".

However, a quick search turned up the solution.

  1. Open about:config
  2. Enter: network.security.ports.banned.override
    • The key doesn't exist by default
  3. Create it as type "String"
  4. Add the port number
    • If multiple ports are needed specify them as a comma separated list: 6666,7777


What ports are blocked? And why?

If we look at the source code, we see the list of ports that is blocked: https://searchfox.org/firefox-main/source/netwerk/base/nsIOService.cpp#122

In total, just shy of 80 ports are blocked, and there seems to be no separation between UDP and TCP ports.

A bit more Firefox context is in their Knowledge Base: https://kb.mozillazine.org/Network.security.ports.banned

They get this port list from the Web Hypertext Application Technology Working Group (WHATWG), which defines a list of "bad ports" in this document: https://fetch.spec.whatwg.org/#port-blocking

Apparently, "A port is a bad port if it is listed in the first column of the following table." Well, you never stop learning. 😉
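For a quick local check, the table boils down to a set lookup. The set below contains only a handful of entries copied from the spec table, not all of them, and the helper is my own sketch, not anything Firefox or the spec ships:

```javascript
// A handful of entries from the WHATWG "bad ports" table -- NOT the full
// list (the spec contains roughly 80 entries).
const BAD_PORTS = new Set([
  1,    // tcpmux
  7,    // echo
  21,   // ftp
  22,   // ssh
  23,   // telnet
  25,   // smtp
  6665, 6666, 6667, 6668, 6669, 6697, // irc / ircs
]);

const isBadPort = (port) => BAD_PORTS.has(port);

console.log(isBadPort(6666)); // true  -- which is why Firefox blocked Termix
console.log(isBadPort(8443)); // false -- an unremarkable HTTPS-alt port
```

Picking a port outside the list in the first place (e.g. 8443 instead of 6666) avoids needing the about:config override at all.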


OpenSSL error "error 47 at 0 depth lookup: permitted subtree violation" explained, or: Why I have to generate a new CA root certificate

I wanted to get rid of the HTTPS warning when opening the web frontend of my DSL router, as I still use the vendor-supplied self-signed certificate there. Hence I used my ca-scripts (GitHub) to generate a certificate for the IP and the standard hostname (fritz.box).

Only to get the error:

error 47 at 0 depth lookup: permitted subtree violation
error 192.168.1.1.crt: verification failed

Huh? This is how I used my script: hostcert.sh calls sign.sh to sign the CSR and verifies the signed certificate against the CA root certificate.

root@host:~/ca# ./hostcert.sh 192.168.1.1 fritz.box
CN: 192.168.1.1
DNS ANs: fritz.box
IP ANs: 192.168.1.1
Enter to confirm.

writing RSA key
Reading pass from $CAPASS
CA signing: 192.168.1.1.csr -> 192.168.1.1.crt:
Using configuration from ca.config
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName           :PRINTABLE:'DE'
localityName          :ASN.1 12:'Karlsruhe'
organizationName      :ASN.1 12:'LAN CA host cert'
commonName            :ASN.1 12:'192.168.1.1'
Certificate is to be certified until Mar 14 20:57:03 2027 GMT (365 days)

Write out database with 1 new entries
Database updated
CA verifying: 192.168.1.1.crt <-> CA cert
C=DE, L=Karlsruhe, O=LAN CA host cert, CN=192.168.1.1
error 47 at 0 depth lookup: permitted subtree violation
error 192.168.1.1.crt: verification failed

The offending command is:

root@host:~/ca# openssl verify -CAfile ca.crt fritz.box.crt 
C=DE, L=Karlsruhe, O=LAN CA host cert, CN=fritz.box
error 47 at 0 depth lookup: permitted subtree violation
error fritz.box.crt: verification failed

The root cause is that I forgot that I had added an X509v3 Name Constraints extension. It dictates that all Common Names and SubjectAltNames have to end in .lan, and clearly fritz.box is in violation of that.

root@host:~/ca# openssl x509 -in ca.crt -text | grep "X509v3 Name" -A2
            X509v3 Name Constraints: 
                Permitted:
                  DNS:lan

The solution is to generate it solely for the IP, right?

root@host:~/ca# ./hostcert.sh 192.168.1.1
CA verifying: 192.168.1.1.crt <-> CA cert
C=DE, L=Karlsruhe, O=LAN CA host cert, CN=192.168.1.1
error 47 at 0 depth lookup: permitted subtree violation
error 192.168.1.1.crt: verification failed

Yeah, no. That's wrong too: in the first certificate the IP was also defined. I just thought fritz.box was the offending SAN, as it is listed first (my script adds IP SANs after DNS SANs).

Through this I learned that as soon as a name constraint is specified, all SubjectAltNames have to follow the constraints. Constraints of type DNS and IPAddress are checked independently, and 192.168.1.1 doesn't match the permitted DNS zone of .lan.

The corresponding RFC 5280 sections are:

Looks like I have to generate a new CA. Narf! This time, however, I will make sure to extract all permitted and excluded name constraints from the CA root certificate and check them against the supplied SubjectAltNames BEFORE I create or sign the CSR.
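Such a pre-check could look like the sketch below. The matching rule follows the RFC 5280 dNSName convention (a name satisfies a constraint if it equals it or ends with "." plus the constraint); the permitted zone "lan" is hardcoded for illustration, whereas a real version would parse it out of the CA certificate first:

```javascript
// Sketch of an RFC 5280-style dNSName constraint check.
// "lan" is hardcoded here; a real pre-check would read the permitted
// subtrees from the CA root certificate.
function dnsNameAllowed(name, permitted) {
  return name === permitted || name.endsWith("." + permitted);
}

const permittedDns = "lan";
for (const san of ["host.lan", "fritz.box", "192.168.1.1"]) {
  console.log(san, dnsNameAllowed(san, permittedDns) ? "OK" : "VIOLATION");
}
// host.lan OK, fritz.box VIOLATION, 192.168.1.1 VIOLATION
```

Running this before sign.sh would have flagged both fritz.box and the bare IP up front, instead of failing at the openssl verify step.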


My n8n docker compose file (without caddy, traefik, nginx, etc. for SSL)

This is my docker compose file for n8n. I use certificates signed by my own private CA, provided via a mounted folder.

services:
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=change-me
      - N8N_HOST=HOST
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://HOST:5678/
      - GENERIC_TIMEZONE=Europe/Berlin
      - N8N_SSL_CERT=/certs/HOST.crt
      - N8N_SSL_KEY=/certs/HOST.key
      # Enable nodes "Execute Command" and "Local File Trigger"
      - NODES_EXCLUDE=[]
    volumes:
      - /opt/docker/n8n/n8n_data:/home/node/.n8n
      - /opt/docker/n8n/local-files:/files
      # Mount cert-dir read-only for certificates
      - /opt/docker/certs/:/certs:ro
    healthcheck:
      # No curl in n8n container
      test: ["CMD-SHELL", "wget --no-check-certificate --quiet -O - https://HOST:5678/healthz | grep -q '\"ok\"' || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s