Feuerfest

Just the private blog of a Linux sysadmin

Why I don't consider Outlook to be a functional mail client

Photo by Pixabay: https://www.pexels.com/photo/flare-of-fire-on-wood-with-black-smokes-57461/

This topic comes up far too often, therefore I decided to make a blogpost out of it. After all, copy & pasting a link is easier than repeatedly writing the same bullet points.

Also: This is my private opinion and this article should rather be treated as a rant.

  • Mail templates are separate files? And the workflow to create them is seriously that antique?
    • Under Create an email message template (microsoft.com) Microsoft details how to create an email template. Notice something? They use the phrase "[...] that include information that infrequently changes [...]" - meaning only static text is allowed.
    • Yep, you can't draft mail templates where certain values get auto-filled and the like. I mean, how many employees, consultants, etc. have to send their weekly/monthly time-sheet to someone? Is it so hard to automatically fill in the week number and month, and automatically attach the latest file with a certain file name from a specified folder?
      • Yes! Automating this with software is surely the best way. But we all know what reality in many companies looks like, right?
    • Additionally, the mail templates are stored as files on your filesystem under C:\users\username\appdata\roaming\microsoft\templates.
      • This means: mail templates are not treated as mails in draft mode or the like. No, you have to load an external file into Outlook via a separate dialogue. That's user experience from the 1980s?
    • Workaround: Create a folder templates, create a sub-folder templates-for-templates. Store mail drafts (with recipients, subject, text, etc.) in templates-for-templates. When needed copy to templates. Attach file. Edit text manually. Hit send.
    • Never send directly out of templates-for-templates, as otherwise your template is gone.
    • But seriously? Why is this process so old and convoluted? I suspect the feature is kept this way because Microsoft is afraid of people utilizing it to send spam. But... sending spam manually? I think that stopped being a thing on May 5th, 2000 (Wikipedia) at the latest. Every worm/virus out there has its own built-in logic to generate different subjects/texts/etc. Why deliberately keep a feature in such a broken state and punish your legitimate users?
  • No regular expressions in filter keywords
    • This annoys me probably the most. When you specify a filter "Sort all mails where the subject begins with Newsletter PC news into a folder", Outlook will only sort mails with the exact subject "Newsletter PC news".
    • Which is stupid when the subject has a static and a changing part. I mean, it's 2024. Supporting some kind of wildcard string matching via asterisks isn't really new, is it? Like: "Sort all mail where the subject starts with Newsletter PC news*", and then "Newsletter PC news April 2024" would also get sorted. No. Not in Outlook.
  • Constant nuisance: Ctrl+F doesn't bring up the search bar - instead it opens a new mail window.
    • I mean really? Ctrl+F is the shortcut for search everywhere. Why change that!?
    • Info: Ctrl+E activates the search field at the top.
  • Only one organizer for events
    • Ok, technically this isn't Outlook but rather CalDAV, and hence Google Calendar etc. suffer from the same problem. But I still list it as a fault.
    • Why? Microsoft has repeatedly shown the middle finger to organizations like the ISO. When it suits Microsoft's market share, they are basically willing to ignore a lot of common standards (just like Google, Facebook, etc.). With their Active Directory infrastructure and Office suite they have everything in-house and 100% under their own control to make this feature work in Windows environments - which most companies do run. But they don't care.
    • I mean... on the other hand I'm glad that they follow the standard here. It's just that this turns out to be a feature we need so often that I've stopped counting.
    • And you already need proprietary connectors to properly integrate your Exchange calendar into other mail programs like Mozilla Thunderbird. So this shouldn't really be a big deal-breaker either.
  • Only one reminder for events
    • Due to my attention deficit disorder I tend to have what is called "altered time perception" or "time blindness". This means I don't experience 15 minutes as 15 minutes and grossly under- or overestimate how much time I really have left. The best description I can give for non-ADDers is: I will think of 15 minutes as "Ah, I still have 1 hour left." That this can lead to situations where I am late or wasn't able to fully prepare something for a meeting should be clear.
    • Therefore it really helps me to be able to set multiple reminders for an event.
    • Usually I do the following: 1 hour before, 30 minutes, 15 minutes. This helps me to break out of the time blindness and synchronize my altered time perception with reality, enabling me to finish tasks before the meeting/event happens.
    • For events like a business trip which take more time to prepare I often set a reminder 1 or 2 weeks in advance. This way I have enough time to do my laundry and so on.
    • Outlook, however, only supports setting ONE reminder. Yeah.
    • My workaround is to have the events in my private calendar as well. (Of course without any details and often just a generic title/description, so as not to store client information on my private device.)
  • Remember Xobni? / The search is horrible
    • Outlook's search is a single input field that searches over everything. You can't specify whether the search term is a name, part of a filename or part of an email address.
    • In the early 2000s there was Xobni. Slogan: "It reverts your Inbox." - hence the name Xobni (Inbox spelled backwards). It was an add-on which added another sidebar to Outlook. There it displayed all the people you had mailed with. And when you clicked on a person you saw all mails, all mail threads and, most importantly, all attachments this person had sent to you (or you to them). You could even add links to the person's social media profiles, etc. It was brilliant. And it made work so easy, as often I only remembered the person who sent me a file or the thread it was attached to - but not the actual mail or even its subject. Xobni made it pretty easy to work around that, making it possible to search Outlook the way our brain works.
    • Well, sadly Yahoo bought Xobni in July 2013 and shut it down in July 2014.
    • But it's 2024 and Microsoft hasn't come up with a similar functionality yet? Really?

Your content needs a date!

Photo by Pixabay: https://www.pexels.com/photo/clear-glass-with-red-sand-grainer-39396/

All too often, I come across blog posts, 'What's new?' sections and other content that doesn't show when the content was first published or last modified. I find this annoying. This information provides crucial context. It allows me to make certain assumptions and correctly place the content in a timeline.

It's like reading a changelog for software where the added, changed or removed features aren't attributed to the version where they changed. It's not helpful at all.

A political piece written at the height of a scandal may omit crucial information - information which was only discovered months later, during the lengthy and boring police investigation about which nobody writes in detail, of course. If I had a date next to the text, I could place the piece in the correct position on the timeline and explain to myself why certain arguments were not used or were simply wrong, but may have represented the state of knowledge at the time the piece was written.

Today I got curious about what happened to the German PC handbook publishing company Data Becker. And I found this blogpost (in German) by Thomas Vehmeier: Data Becker – eine Ära geht zu Ende (vehmeier.com). Apparently he worked at Data Becker in the mid-1990s. And in his text he writes about his experience and how & why Data Becker failed when the Internet, and therefore the market, began to change.

But... there is no date. Nowhere. He also doesn't mention the year in which Data Becker went out of business. A classic archaeological problem: we can only say for certain "It happened after the 1990s". But apart from that? Well, he links to the WirtschaftsWoche, a German business magazine. They do have a date on their article: 9th October 2013. And they wrote that Data Becker would go out of business in 2014.

Does this tell me exactly when his text was written? No, but it narrows it down sufficiently.

Still, it illustrates my problem. Yes, it is not an unsolvable one, but it is annoying - for me. And I guess I'm in the minority here, again.

EDIT: I just re-read this article in August 2025 and now the blogpost from Thomas Vehmeier shows a date: 10th October 2013. Yay!


ASUS RMA process is broken by design to maximize profit?

Photo by ThisIsEngineering: https://www.pexels.com/photo/woman-working-on-computer-hardware-19895718/

I watched an interesting video from the YouTube channel GamersNexus. Its title is "ASUS Scammed Us".

In this video they show how the ASUS RMA process is broken and how many customers are faced with repair bills higher than the original cost of the device. Or ASUS claims parts need to be repaired which, according to the customers, are not broken at all. Another big topic is that ASUS regularly claims the customer caused the defect and hence the repair isn't covered under warranty.

Yeah... While watching the video you get the feeling the process was designed that way to maximize profit. It is opaque, not flexible enough and generally doesn't have the customer at its core.

Which sucks. And it earned ASUS a place on my "Do not buy from ever again" list... The video is linked below:

Or, if you prefer a link, here: https://www.youtube.com/watch?v=7pMrssIrKcY


Go home GoDaddy, you're drunk!

Photo by Tim Gouw: https://www.pexels.com/photo/man-in-white-shirt-using-macbook-pro-52608/

I'm just so fucking happy right now that I have never been a GoDaddy customer. As I learned via Reddit yesterday, GoDaddy closed access to their DNS API for many customers.

No prior information.

No change of the documentation regarding API access.

Nothing.

For many customers this meant that their revenue stream was affected, as, for example, the SSL certificates for their web services could no longer be renewed automatically - which is the case when you are using Let's Encrypt.
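
To illustrate what broke: a typical unattended DNS-01 renewal via acme.sh uses GoDaddy's DNS API with an API key and secret, roughly like this (a sketch, not taken from one of the affected setups; the domain and credentials are placeholders):

# acme.sh's GoDaddy DNS plugin reads the API credentials from the environment
export GD_Key="your-godaddy-api-key"
export GD_Secret="your-godaddy-api-secret"
# Issue/renew a certificate; the DNS-01 TXT record is created via the DNS API
acme.sh --issue --dns dns_gd -d example.com -d www.example.com

With the API access revoked, these calls now fail with HTTP 403 and the unattended renewal breaks.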

Therefore I can't put it any other way: GoDaddy deliberately sabotaged its customers in order to maximize its income.

Yeah, fuck you GoDaddy. You are on my personal blacklist now. I'm never going to do business with you. Not that I planned to, but sometimes decisions like this must be called out and sanctioned.

When customers asked why their API calls returned an HTTP 403 error (Forbidden), GoDaddy provided the following answer (emphasis mine):

Hi, We have recently updated the account requirements to access parts of our production Domains API. As part of this update, access to these APIs are now limited:
  • Availability API: Limited to accounts with 50 or more domains
  • Management and DNS APIs: Limited to accounts with 10 or more domains and/or an active Discount Domain Club plan.
If you have lost access to these APIs, but feel you meet these requirements, please reply back with your account number and we will review your account and whitelist you if we have denied you access in error. Please note that this does not affect your access to any of our OTE APIs. If you have any further questions or need assistance with other API questions, please reach out. Regards, API Support Team

Wow. The mentioned OTE API, meanwhile, is no workaround. It's GoDaddy's test API, used to verify that your API calls work before sending them to the production API. You can't do anything there that would help GoDaddy's customers find a solution without having to pay.

Sources

Am I the only one who can't use the API? (Reddit)

Warning: Godaddy silently cut access to their DNS API unless you pay them more money. If you're using Godaddy domain with letsencrypt or acme, be aware because your autorenewal will fail. (Reddit)


Creating a systemd timer to regularly pull Git-Repositories and getting to know an uncomfortable systemd/journalctl bug along the way

Photo by Christina Morillo: https://www.pexels.com/photo/white-dry-erase-board-with-red-diagram-1181311/

I have a VM in my LAN which serves as central admin host. There I wanted to create a systemd unit and timer to automatically update my Git repositories. The reason: it happens far too often that I push changes from other machines but forget to pull the repos before committing changes made on the admin host.

Sure, stash the changes, pull, fix any conflicts. No big deal. But still annoying. Therefore: a systemd unit file with a timer that updates the repos every hour, and be done with it.

Encountering the bug

While reading the systemd documentation I learned that you can list multiple ExecStart parameters if the service is of Type=oneshot. Then all commands will be executed in sequential order.

Unless Type= is oneshot, exactly one command must be given. When Type=oneshot is used, zero or more commands may be specified. [...] If more than one command is specified, the commands are invoked sequentially in the order they appear in the unit file. If one of the commands fails (and is not prefixed with "-"), other lines are not executed, and the unit is considered failed.

From: https://www.freedesktop.org/software/systemd/man/latest/systemd.service.html#ExecStart=

This seemed to be the easiest way to achieve my goal: just add one ExecStart= line for every Git repo, done. For testing I didn't write the timer file yet, as I first wanted to verify that the unit file works. And the following unit file works flawlessly.

user@lanadmin:~$ systemctl --user cat git-update
# /home/user/.config/systemd/user/git-update.service
[Unit]
Description=Update git-Repositories
After=network-online.target
Wants=network-online.target

[Service]
# Allows the execution of multiple ExecStart parameters in sequential order
Type=oneshot
# Show status "dead" after commands are executed (this is just commands being run)
RemainAfterExit=no
# git pull = git fetch + git merge
ExecStart=/usr/bin/git -C %h/git/github/chrlau/dotfiles pull
ExecStart=/usr/bin/git -C %h/git/github/chrlau/scripts pull

[Install]
WantedBy=default.target

However, while initially writing it and looking at the output of systemctl --user status git-update I noticed something.

user@lanadmin:~$ systemctl --user status git-update
○ git-update.service - Update git-Repositories
     Loaded: loaded (/home/user/.config/systemd/user/git-update.service; enabled; preset: enabled)
     Active: inactive (dead) since Fri 2024-04-26 22:16:21 CEST; 15s ago
    Process: 39510 ExecStart=/usr/bin/git -C /home/user/git/github/chrlau/dotfiles pull (code=exited, status=0/SUCCESS)
    Process: 39516 ExecStart=/usr/bin/git -C /home/user/git/github/chrlau/scripts pull (code=exited, status=0/SUCCESS)
   Main PID: 39516 (code=exited, status=0/SUCCESS)
        CPU: 40ms

Apr 26 22:16:18 lanadmin systemd[9234]: Starting git-update.service - Update git-Repositories...
Apr 26 22:16:21 lanadmin git[39521]: Already up to date.
Apr 26 22:16:21 lanadmin systemd[9234]: Finished git-update.service - Update git-Repositories.

There should be two log lines with git[pid]: Already up to date. After all, we call git twice. But there is only one line. Why!?

At first I considered something like rate-limiting or de-duplication of identical log messages. But I found nothing. Only old Red Hat bug reports from 2013 & 2017 about how journald can't always catch the necessary process information (cgroup, etc.) from /proc before the process is gone (read: Bug 963620 - journald: we need a way to get audit, cgroup, ... information attached to log messages instead of asynchronously reading them in and Bug 1426152 - Journalctl miss to show logs from unit). Especially with short-running processes this happens regularly. This can't be the reason, can it?

I checked the journal for the unit file. Line still missing.

user@lanadmin:~$ journalctl --user -u git-update
Apr 26 22:16:18 lanadmin systemd[9234]: Starting git-update.service - Update git-Repositories...
Apr 26 22:16:21 lanadmin git[39521]: Already up to date.
Apr 26 22:16:21 lanadmin systemd[9234]: Finished git-update.service - Update git-Repositories.

Then I accidentally executed journalctl without any parameters and...

user@lanadmin:~$ journalctl
[...]
Apr 26 22:16:18 lanadmin systemd[9234]: Starting git-update.service - Update git-Repositories...
Apr 26 22:16:20 lanadmin git[39515]: Already up to date.
Apr 26 22:16:21 lanadmin git[39521]: Already up to date.
Apr 26 22:16:21 lanadmin systemd[9234]: Finished git-update.service - Update git-Repositories.
[...]

There it is. So why does a simple journalctl display both lines, while a systemctl --user status git-update doesn't?

Remember the bug we just read about? journalctl has a verbose mode which displays all fields for every log line. This should tell us the difference between those two log messages.

First we have the entry for the Starting git-update.service - Update git-Repositories... message. Nothing suspicious here.

user@lanadmin:~$ journalctl -o verbose
Fri 2024-04-26 22:16:18.724396 CEST [s=78cb2a728dda4d579b41ba58b655d4c2;i=6a32;b=859f56a381394260854aeac3b77d87a3;m=1a58bdb7ae0;t=6170593979632;x=ed56438f3a913535]
    PRIORITY=6
    SYSLOG_FACILITY=3
    TID=9234
    SYSLOG_IDENTIFIER=systemd
    _TRANSPORT=journal
    _PID=9234
    _UID=1000
    _GID=1000
    _COMM=systemd
    _EXE=/usr/lib/systemd/systemd
    _CMDLINE=/lib/systemd/systemd --user
    _CAP_EFFECTIVE=0
    _SELINUX_CONTEXT=unconfined
    _AUDIT_SESSION=393
    _AUDIT_LOGINUID=1000
    _SYSTEMD_CGROUP=/user.slice/user-1000.slice/user@1000.service/init.scope
    _SYSTEMD_OWNER_UID=1000
    _SYSTEMD_UNIT=user@1000.service
    _SYSTEMD_USER_UNIT=init.scope
    _SYSTEMD_SLICE=user-1000.slice
    _SYSTEMD_USER_SLICE=-.slice
    _BOOT_ID=859f56a381394260854aeac3b77d87a3
    _MACHINE_ID=e83bb1062b594b79817a5c8a5605f9fd
    _HOSTNAME=lanadmin
    _RUNTIME_SCOPE=system
    CODE_FILE=src/core/job.c
    JOB_TYPE=start
    CODE_LINE=581
    CODE_FUNC=job_emit_start_message
    MESSAGE_ID=7d4958e842da4a758f6c1cdc7b36dcc5
    MESSAGE=Starting git-update.service - Update git-Repositories...
    JOB_ID=10
    USER_INVOCATION_ID=8f476f2ef43245ba89a9cb69a26f8577
    USER_UNIT=git-update.service
    _SOURCE_REALTIME_TIMESTAMP=1714162578724396

Then comes the entry for the first Already up to date. log message. Its entry is way shorter than the previous one. No systemd-related fields are attached.

Fri 2024-04-26 22:16:20.137988 CEST [s=78cb2a728dda4d579b41ba58b655d4c2;i=6a33;b=859f56a381394260854aeac3b77d87a3;m=1a58bf10cb2;t=6170593ad2804;x=730951cbebf4e84a]
    PRIORITY=6
    SYSLOG_FACILITY=3
    _UID=1000
    _GID=1000
    _BOOT_ID=859f56a381394260854aeac3b77d87a3
    _MACHINE_ID=e83bb1062b594b79817a5c8a5605f9fd
    _HOSTNAME=lanadmin
    _RUNTIME_SCOPE=system
    _TRANSPORT=stdout
    _STREAM_ID=36f2542db1e249da8c5c5b1342d065e8
    SYSLOG_IDENTIFIER=git
    MESSAGE=Already up to date.
    _PID=39515
    _COMM=git

And yep, here is the second Already up to date. log message. It contains all fields, and this is the message we see when we display the journal entries for our git-update.service unit.

Fri 2024-04-26 22:16:21.471040 CEST [s=78cb2a728dda4d579b41ba58b655d4c2;i=6a34;b=859f56a381394260854aeac3b77d87a3;m=1a58c0563ee;t=6170593c17f40;x=2574d8467f36d20]
    PRIORITY=6
    SYSLOG_FACILITY=3
    _UID=1000
    _GID=1000
    _CAP_EFFECTIVE=0
    _SELINUX_CONTEXT=unconfined
    _AUDIT_SESSION=393
    _AUDIT_LOGINUID=1000
    _SYSTEMD_OWNER_UID=1000
    _SYSTEMD_UNIT=user@1000.service
    _SYSTEMD_SLICE=user-1000.slice
    _BOOT_ID=859f56a381394260854aeac3b77d87a3
    _MACHINE_ID=e83bb1062b594b79817a5c8a5605f9fd
    _HOSTNAME=lanadmin
    _RUNTIME_SCOPE=system
    _TRANSPORT=stdout
    SYSLOG_IDENTIFIER=git
    MESSAGE=Already up to date.
    _COMM=git
    _STREAM_ID=cfc8932e3cf9431aa59873d163d624a8
    _PID=39521
    _SYSTEMD_CGROUP=/user.slice/user-1000.slice/user@1000.service/app.slice/git-update.service
    _SYSTEMD_USER_UNIT=git-update.service
    _SYSTEMD_USER_SLICE=app.slice
    _SYSTEMD_INVOCATION_ID=8f476f2ef43245ba89a9cb69a26f8577

Great. So how do I fix this? Yeah, I can't. Unless I can make the git process run longer, there is no real solution. I tried adding ExecStart=/usr/bin/sleep 1 after each git command, but that of course didn't change anything, as sleep is a different process.

Now I'm left with the following situation: sometimes both log entries are logged correctly with all fields. Sometimes just one (either the first or the second). And rarely none is logged at all. Then all I have are the standard Starting git-update.service - Update git-Repositories... and Finished git-update.service - Update git-Repositories. log messages which systemd emits when a unit starts and when it finishes.

Beautiful. Just beautiful. I mean... the syslog facility, identifier and priority are logged each time. So yeah, that's actually a reason for good old rsyslog.

A somewhat of a solution?

The best advice I can currently give is: if you have short-lived processes started via systemd and it's important that you can easily view all their log messages:

  1. Make sure ForwardToSyslog=yes is set in /etc/systemd/journald.conf. Note that the default values are usually listed as comments, so if the line #ForwardToSyslog=yes is present, you should be fine.
  2. Install rsyslog or any other traditional syslog service.
  3. Configure it to store your log messages in a separate logfile or let them go to /var/log/messages (a minimal sketch follows after this list).
  4. Don't forget to configure logrotate (or some other sort of logfile rotation) for all logfiles created by rsyslog 😉
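
A minimal sketch for steps 2 to 4 (the filenames and the filter on the syslog identifier git are assumptions, adjust them to your setup):

# /etc/rsyslog.d/10-git-update.conf
# Write everything with the syslog identifier "git" into its own logfile
:programname, isequal, "git" /var/log/git-update.log
& stop

# /etc/logrotate.d/git-update
/var/log/git-update.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}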

I, for one, learned to always execute a plain journalctl during troubleshooting sessions, just to make sure I spot messages from short-running processes.
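
If you want to check whether a machine is affected, a transient unit around a very short-lived command reproduces the behaviour quickly (a sketch; the unit name logtest is arbitrary):

# Run a short-lived command as a transient user unit
user@lanadmin:~$ systemd-run --user --unit=logtest /bin/echo "Hello journal"
# The filtered view may or may not contain the message...
user@lanadmin:~$ journalctl --user -u logtest
# ...while the unfiltered journal always has it
user@lanadmin:~$ journalctl | grep "Hello journal"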

And what about the timer?

This is the timer file I use. It runs once every hour.

user@lanadmin:~$ systemctl --user cat git-update.timer
# /home/user/.config/systemd/user/git-update.timer
[Unit]
Description=Update git-repositories every hour

[Timer]
# Documentation: https://www.freedesktop.org/software/systemd/man/latest/systemd.time.html#Calendar%20Events
OnCalendar=*-*-* *:00:00
Unit=git-update.service

[Install]
WantedBy=default.target

After creating the file you need to enable and start it.

user@lanadmin:~$ systemctl --user enable git-update.timer
Created symlink /home/user/.config/systemd/user/default.target.wants/git-update.timer → /home/user/.config/systemd/user/git-update.timer.

user@lanadmin:~$ systemctl --user start git-update.timer

Using systemctl --user list-timers we can verify that the timer is scheduled to run.

user@lanadmin:~$ systemctl --user list-timers
NEXT                         LEFT       LAST PASSED UNIT                   ACTIVATES
Sun 2024-04-28 16:00:00 CEST 49min left -    -      git-update.timer git-update.service

1 timers listed.
Pass --all to see loaded but inactive timers, too.
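
To double-check when an OnCalendar= expression will fire, systemd-analyze can evaluate it; it prints the normalized form and the time of the next elapse (hourly would be a shorthand for the same schedule):

user@lanadmin:~$ systemd-analyze calendar "*-*-* *:00:00"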

Things to do when updating Bludit

Photo by Markus Spiske: https://www.pexels.com/photo/green-and-yellow-printed-textile-330771/

I finally got around to updating to the most recent version of Bludit. And as I made two changes to files which will be overwritten, I wrote myself a small documentation.
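
Since the files edited below are replaced during an update (and the theme CSS during a theme update), backing up the customized copies first saves some pain. A minimal sketch (the backup path is an example; adjust bludit-folder and the theme version to your installation):

cd /path/to/bludit-folder
tar czf ~/bludit-customizations-$(date +%F).tar.gz \
    bl-plugins/tinymce/plugin.php \
    bl-plugins/tinymce/tinymce/plugins/codesample/plugin.min.js \
    bl-kernel/helpers/session.class.php \
    bl-themes/solen-1.0/css/style.css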

Changing the default link target

Post with code example, here: https://admin.brennt.net/changing-the-default-link-target-in-bludits-tinymce

  1. Open the file bludit-folder/bl-plugins/tinymce/plugin.php
  2. Search for tinymce.init
  3. Add the default_link_target: '_blank' parameter at the end of the option list
  4. Don't forget to add a comma after the formerly last parameter (see the sketch below)
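
For reference, the end of the tinymce.init() option list might then look roughly like this (a sketch; the surrounding options are placeholders and differ per Bludit version, default_link_target is a standard TinyMCE setting):

tinymce.init({
    /* ...existing options stay as they are... */
    plugins: "code link",        // formerly last option: note the added comma
    default_link_target: '_blank'
});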

Keep the syntax highlighting

Original post: https://admin.brennt.net/bludit-and-syntax-highlighting

  1. Open: bludit-folder/bl-plugins/tinymce/tinymce/plugins/codesample/plugin.min.js
  2. Search for <pre and in the class property add line-numbers. It should now look like this: t.insertContent('<pre id="__new" class="language-'+a+' line-numbers">'+r+"</pre>")
  3. A little after that pre you will also find t.dom.setAttrib(e,"class","language-"+a). Add the line-numbers class there too; it should look like this: t.dom.setAttrib(e,"class","line-numbers language-"+a)
  4. Edit a random blogpost with code in it to verify that newly rendered pages get the line-numbers and syntax highlighting.

Enhancing Cookie security

Mozilla's Observatory states that the Bludit session cookie is missing the samesite attribute and that the cookie name isn't prefixed with __Secure- or __Host-. I opened an issue for this on GitHub (Bludit issue #1582 Enhance cookie security by setting samesite attribute and adding __Secure- prefix to sessionname) but until it is integrated we can fix it in the following way:

  1. Open bludit-folder/bl-kernel/helpers/session.class.php
  2. Comment out the line containing: private static $sessionName = 'BLUDIT-KEY';
  3. Copy & paste the following to change the Cookie name:
    • // Set the __Secure- prefix if site is called via HTTPS, preventing overwrites from insecure origins
      //   see: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#cookie_prefixes
      //private static $sessionName = 'BLUDIT-KEY';
      private static $sessionName = '__Secure-BLUDIT-KEY';
  4. Search for the function session_set_cookie_params
  5. Add the following line to set the samesite attribute on the cookie. Also add a comma at the end of the formerly last line.
    • 'samesite' => 'strict'
  6. It should look like this:
    • session_set_cookie_params([
          'lifetime' => $cookieParams["lifetime"],
          'path' => $path,
          'domain' => $cookieParams["domain"],
          'secure' => $secure,
          'httponly' => true,
          'samesite' => 'strict'
      ]);

Check if the RSS feed works

  1. Access https://admin.brennt.net/rss.xml and verify that content is displayed (or check from the command line, see below).
    • If not: check if the RSS plugin works and is activated
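
A quick check from the command line works as well (assuming curl is installed); the output should start with the usual <?xml ... ?> declaration and an <rss> element:

curl -s https://admin.brennt.net/rss.xml | head -n 5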

Apply / check CSS changes

I made some small changes to the Solen Theme CSS. These must be re-done when the theme is updated.

  1. Open bl-themes/solen-1.0/css/style.css
  2. Change link color:
    • Element: .plugin a, ul li a, .footer_entry a, .judul_artikel a, line 5
    • Change: color: #DE004A;
    • To: color: #F06525;
  3. Change link color when hovering:
    • Element: .plugin a:hover, ul li a:hover, .footer_entry a:hover, .judul_artikel a:hover, line 10
    • Change: color: #F06525;
    • To: color: #C68449;
  4. Fix position of the blockquote bar:
    • Element: blockquote::box-shadow, line 24
    • Change:  box-shadow: inset 5px 0 rgb(255, 165, 0);
    • To:  box-shadow: -5px 0px 0px 0px rgb(255, 165, 0);
  5. Format code in posts:
    • Add the following element after line 37:
    • /* My custom stuff */
      code {
          font-size: 87.5%;
          color: #e83e8c;
          word-wrap: break-word;
          font-family: 'Roboto Mono', monospace;
      }
  6. Same padding-bottom as padding-top for header:
    • Element .section_title, line 136
    • Change: padding-bottom: 0px;
    • To: padding-bottom: 0.6em;
  7. Disable the white blur for the introduction texts:
    • Element: .pratinjau_artikel p:after, line 277
    • Change background: linear-gradient(to right, transparent, #ffffff 80%);
    • To: background: 0% 0%;

Solen-Theme changes

  1. Make header smaller:
    • Open solen-1.2/php/mini-hero.php
    • Remove line 4: <h2 class="hero_welcome"><?php echo $L->get('welcome'); ?></h2>