Recovering crashed MariaDB InnoDB Tables/Indexes

Today I came across a crashed InnoDB MariaDB database. When trying to restart the process, it immediately failed with the following messages in the systemd journal:

[Note] Starting MariaDB 10.11.6-MariaDB-0+deb12u1 source revision  as process 661197
[Note] InnoDB: Compressed tables use zlib 1.2.13
[Note] InnoDB: Number of transaction pools: 1
[Note] InnoDB: Using crc32 + pclmulqdq instructions
[Note] InnoDB: Using liburing
[Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
[Note] InnoDB: Completed initialization of buffer pool
[Note] InnoDB: File system buffers for log disabled (block size=4096 bytes)
[Note] InnoDB: Starting crash recovery from checkpoint LSN=10862992357
[Note] InnoDB: End of log at LSN=10868013717
[Note] InnoDB: Retry with innodb_force_recovery=5
[ERROR] InnoDB: Plugin initialization aborted with error Data structure corruption
[Note] InnoDB: Starting shutdown...
[ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
[Note] Plugin 'FEEDBACK' is disabled.
[ERROR] Unknown/unsupported storage engine: InnoDB
[ERROR] Aborting

Before continuing with any recovery process, I created a backup of /var/lib/mysql:

cp -rp /var/lib/mysql /var/lib/mysql.bak

Some posts recommended starting MariaDB with innodb_force_recovery = 4. However, as you can see in the messages, MariaDB automatically tried to recover with innodb_force_recovery = 5 and failed. I manually set the recovery mode to 6 in /etc/mysql/mariadb.cnf:

[mysqld]
innodb_force_recovery = 6

With that, MariaDB finally started. Sadly, this did not fully resolve the issue yet. In mode 6, MariaDB starts with all tables in read-only mode. That means I was able to access the data, but the database was not really usable or fixed. The obvious next step was to do a mysqldump, clean the database and then insert all data again. However, running mysqldump failed with the next error:

mysqldump: Error 1034: Index for table 'mytable' is corrupt; try to repair it when dumping table mytable at row: 3

Interestingly, dumping a single table did work. If anyone knows why, please let me know. I dumped a list of all table names in the database into a file called /root/tables (for example with mariadb -N -B -e "SHOW TABLES" mydatabase > /root/tables), then iterated over all tables with bash and dumped each one into a dedicated file:

while read -r TABLE; do
mysqldump -f mydatabase "$TABLE" > "/root/mydatabase/$TABLE-$(date -I).sql"
done < /root/tables

All files can then just be concatenated into one large dump file:

cat /root/mydatabase/* > /root/mydatabase.sql
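Because mysqldump ran with -f (force), a failed dump can slip through silently. A small sanity check before concatenating can catch that — a sketch, assuming the directory layout from above (check_dumps is a hypothetical helper name, not part of the original procedure):

```shell
# Sketch: verify that no per-table dump file is empty before
# concatenating. Takes the directory containing the .sql files.
check_dumps() {
    DUMP_DIR="$1"
    for f in "$DUMP_DIR"/*.sql; do
        # -s: file exists and has a size greater than zero
        [ -s "$f" ] || { echo "empty dump: $f"; return 1; }
    done
    return 0
}
```

If check_dumps reports an empty file, re-run mysqldump for that table before building the combined dump file.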

After the database content was safely stored in a plain-text SQL dump file, I bootstrapped a new /var/lib/mysql directory:

rm -rf /var/lib/mysql
apt install --reinstall mariadb-server
sudo -u mysql mariadb-install-db

Then, after removing the innodb_force_recovery setting from /etc/mysql/mariadb.cnf again and restarting MariaDB, I created the database and inserted the dump:

MariaDB [(none)]> CREATE DATABASE mydatabase;
MariaDB [(none)]> use mydatabase;
MariaDB [mydatabase]> source /root/mydatabase.sql

After this, the database was healthy again.

OpenPGP Smart Card with Re-Encrypting Mailing List

Schleuder can be used to create mailing lists that support e-mail encryption with GnuPG/PGP. Schleuder decrypts incoming mails and re-encrypts them for each subscriber of the mailing list.

It is therefore necessary to entrust the server with the private key. However, it is possible to add an additional layer of security by storing the private key on an OpenPGP card, for example a Nitrokey. As Schleuder basically uses GnuPG without any special configuration, a PGP key pair can be imported as the key for the mailing list. Obviously, your server needs a USB port.

This can be used in combination with a mailbox that forwards incoming e-mails to the Schleuder list. This allows a team to have a trusted public PGP key that does not have to be shared across the team. If someone from the outside sends an encrypted e-mail to the team mailbox, the mail gets re-encrypted for all team members with their individual public PGP keys. If they have to reply to the original sender, the x-resend command of Schleuder can be used to sign the mail with the key on the smart card.

For this to work, first install and configure Schleuder as described in the documentation. Then configure a new mailing list.

To use the OpenPGP card for your list, create a key pair on it with all needed identities. The Schleuder list address needs to be included, for example mylist@schleuder.example.com. If you want a public mailbox which redirects incoming mails to the Schleuder list, add this identity as well (for example secure@example.com). Do not forget to set the URL of the public key. Then store the public key at the provided location.

When you’re done with the smart card setup, attach it to your server (and forward the USB device to a VM, if you need to). Then, as root, import the smart card for the mailing list:

sudo -u schleuder gpg --pinentry-mode loopback --homedir /var/lib/schleuder/lists/schleuder.example.com/mylist/ --edit-card

Then, in the GnuPG console, execute the following commands:

fetch
quit

You should now have imported the key pair, with the private key stored on the smart card. Next, you need to ensure that the key is unlocked by decrypting a file and entering the PIN. Create a simple text file, encrypt it with the public key stored on the smart card, and save it to /var/lib/schleuder/unlock-pin.gpg. Then execute the following command:

sudo -u schleuder gpg --homedir /var/lib/schleuder/lists/schleuder.example.com/mylist/ --decrypt /var/lib/schleuder/unlock-pin.gpg

In some cases I had to do this twice for the smart card to be unlocked. You can now send an e-mail to the list, encrypted with the public key belonging to the private key on the smart card.

Migrating from Univention Corporate Server to Samba and Keycloak

In the past I used the Univention Corporate Server (UCS) for identity management in some organizations. However, I find that UCS is relatively huge, and nowadays I prefer to operate a Samba server in combination with Keycloak, because it is easier to integrate into server orchestration tools. Also, Keycloak provides sufficient functionality to manage users within the Samba Active Directory. It took me quite some time to figure out how to migrate the data cleanly. The goal is to migrate the data into clean Active Directory structures. I only want to migrate the username, sn, givenName, displayName and mail attributes while keeping the unique object IDs (UUIDs). Also, group memberships should be migrated.

One major caveat was importing the entryUUID of UCS into the objectGUID attribute of Samba: samba-tool has no parameter to set the objectGUID for a new user. A possible workaround is to use ldbmodify to import the user objects first. ldbmodify is available in the ldb-tools package on Debian-based distributions.

The import assumes that the Samba server has already been provisioned with

samba-tool domain provision

The following script can be used to export the data into two files:

#!/bin/bash

UIDS=$(slapcat -a "(mail=*)" | grep uid: | sed "s/uid: //")

while IFS= read -r USERUID; do
	echo "Exporting '$USERUID'"
	USER_DATA=$(slapcat -a "(uid=$USERUID)")
	USER_DN=$(echo "$USER_DATA" | grep "dn: " | sed "s/dn: //")
	USER_GUID=$(echo "$USER_DATA" | grep "entryUUID: " | sed "s/entryUUID: //")
	USER_MAIL=$(echo "$USER_DATA" | grep -E "^mail:" | sed "s/^mail://")
	USER_FIRST_NAME=$(echo "$USER_DATA" | grep "givenName:" | sed "s/^givenName://")
	USER_LAST_NAME=$(echo "$USER_DATA" | grep "sn:" | sed "s/^sn://")
	USER_DISPLAY_NAME=$(echo "$USER_DATA" | grep "displayName:" | sed "s/^displayName://")
	USER_NTHASH=$(echo "$USER_DATA" | grep "sambaNTPassword: " | sed "s/sambaNTPassword: //")

	echo "Writing $USER_DN"
	echo "dn: CN=$USERUID,CN=Users,DC=example,DC=com" >> /root/users.ldif
	echo "changetype: add" >> /root/users.ldif
	echo "objectclass: user" >> /root/users.ldif
	echo "objectGUID: $USER_GUID" >> /root/users.ldif
	echo "sAMAccountName: $USERUID" >> /root/users.ldif
	echo "mail:$USER_MAIL" >> /root/users.ldif
	echo "displayName:$USER_DISPLAY_NAME" >> /root/users.ldif
	echo "sn:$USER_LAST_NAME" >> /root/users.ldif
	echo "givenName:$USER_FIRST_NAME" >> /root/users.ldif
	echo "" >> /root/users.ldif

	MEMBERSHIPS=$(echo "$USER_DATA" | grep "memberOf: " | sed "s/memberOf: //" | sed "s/cn=//" | cut -d ',' -f 1)
	while IFS= read -r MEMBERSHIP; do
		# Skip empty lines, e.g. for users without group memberships
		[ -n "$MEMBERSHIP" ] || continue
		echo "samba-tool group addmembers '$MEMBERSHIP' $USERUID" >> /root/import.sh
	done <<< "$MEMBERSHIPS"
	echo "pdbedit -u $USERUID --set-nt-hash $USER_NTHASH" >> /root/import.sh
done <<< "$UIDS"

EXPORT_GROUPS=$(slapcat -a "(objectClass=posixGroup)" | grep "cn: " | sed "s/cn: //")
while IFS= read -r GROUPCN; do
	if [ "$GROUPCN" = "Domain Users" ] || [ "$GROUPCN" = "Domain Admins" ] || [ "$GROUPCN" = "Domain Guests" ] || [ "$GROUPCN" = "Domain Controllers" ]  || [ "$GROUPCN" = "Windows Hosts" ] || [ "$GROUPCN" = "DC Backup Hosts" ] || [ "$GROUPCN" = "DC Slave Hosts" ] || [ "$GROUPCN" = "Computers" ] || [ "$GROUPCN" = "Printer-Admins" ] || [ "$GROUPCN" = "Slave Join" ] || [ "$GROUPCN" = "Backup Join" ] ; then
		continue
	fi
	GROUP_DATA=$(slapcat -a "(cn=$GROUPCN)")
	GROUP_UUID=$(echo "$GROUP_DATA" | grep "entryUUID: " | sed "s/entryUUID: //")
	echo "Writing $GROUPCN"
	echo "dn: CN=$GROUPCN,CN=Users,DC=example,DC=com" >> /root/users.ldif
	echo "changetype: add" >> /root/users.ldif
	echo "objectClass: top" >> /root/users.ldif
	echo "objectClass: group" >> /root/users.ldif
	echo "cn: $GROUPCN" >> /root/users.ldif
	echo "name: $GROUPCN" >> /root/users.ldif
	echo "objectGUID: $GROUP_UUID" >> /root/users.ldif
	echo "sAMAccountName: $GROUPCN" >> /root/users.ldif
	echo "" >> /root/users.ldif
done <<< "$EXPORT_GROUPS"
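One caveat with the grep/sed extraction above: slapcat emits attribute values containing non-ASCII characters base64-encoded as "attr:: <base64>", which the plain greps miss. A hedged sketch of a helper that handles both forms (ldif_attr is a hypothetical name, not part of the original script):

```shell
# Sketch: extract an LDIF attribute value, decoding the base64
# form ("attr:: <base64>") that slapcat uses for non-ASCII values.
ldif_attr() {
    ATTR="$1"
    DATA="$2"
    if echo "$DATA" | grep -q "^$ATTR:: "; then
        # Base64-encoded value (e.g. names with umlauts)
        echo "$DATA" | grep "^$ATTR:: " | sed "s/^$ATTR:: //" | base64 -d
    else
        echo "$DATA" | grep "^$ATTR: " | sed "s/^$ATTR: //"
    fi
}
```

In the export script, a line like USER_LAST_NAME=$(ldif_attr sn "$USER_DATA") could then replace the plain grep/sed pipeline.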

Now copy the created users.ldif and import.sh files into the /root directory of your new Samba server. To import the data into Samba, first import the user objects with ldbmodify:

ldbmodify -H tdb:///var/lib/samba/private/sam.ldb /root/users.ldif --relax
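For reference, a single user entry in the generated users.ldif looks roughly like this (all values are hypothetical):

```
dn: CN=jdoe,CN=Users,DC=example,DC=com
changetype: add
objectclass: user
objectGUID: 2f5a1c3e-8b0d-4a7e-9c21-0e6f4d8a1b2c
sAMAccountName: jdoe
mail: jdoe@example.com
displayName: John Doe
sn: Doe
givenName: John
```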

Finally, set the group memberships and password hashes by executing import.sh:

bash /root/import.sh

Why blocking destination ports or protocols in firewalls is usually bad

TL;DR: Should a firewall block all destination ports with a list of exceptions? The answer is almost certainly “no”. You’re just breaking the internet.

I notice that there are still tons of firewall operators who block outbound internet traffic to less common destination ports and protocols “to improve the network security”. While there was some merit to that in the past, this approach is basically useless today. There were times when malware mostly used ports other than tcp/443 & TLS, but these times are gone. Every malware will try to hide within traffic of the most used protocol: HTTPS. The only things achieved by blocking other destination ports and protocols are depriving users of ways of (good) communication and, most importantly, taking away large parts of the IPv4 connection address space.

Let’s use Jitsi, a video conferencing solution, as an example of why blocking destination ports serves no purpose: the video stream is usually transmitted via UDP to a service listening on port 10000. If the port is blocked by a firewall, the software is quite often configured to fall back to a TURN server that usually listens on port 443/tcp. I think this example tells the full story: blocking the port does not prevent the client program from communicating with the outside world. Instead, the program has to resort to a less suited protocol (TCP), and the destination IP address cannot be reused for normal HTTPS traffic (unless some sort of deep packet inspection is done on the server, which also has a performance impact). On the other hand, blocked firewall ports take away an easy way of shaping and monitoring traffic based on port and protocol. For example, UDP traffic to port 10000 could be preferred over traffic to port 443/tcp. Users in video conferences need low latency, while opening a website in a browser can take a couple of seconds without a huge impact on users.
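To illustrate the traffic shaping argument: instead of blocking UDP port 10000, a firewall could mark that traffic for preferred treatment. A hedged nftables sketch (table and chain names are arbitrary, not taken from any real setup):

```
# Mark Jitsi-style media traffic (UDP 10000) with the
# "expedited forwarding" DSCP class so upstream QoS can
# prioritize it over bulk HTTPS traffic.
table inet mangle {
    chain forward {
        type filter hook forward priority mangle; policy accept;
        udp dport 10000 ip dscp set ef
    }
}
```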

Does that mean that no ports should be blocked? No, there are some edge cases where blocking ports is useful. For example, it can be useful to block port 25/tcp as a destination for all outbound traffic. We know that all sane e-mail ISPs will receive e-mails on port 25. Blocking all internal machines from sending spam to common ISPs will therefore prevent the source IP from appearing on anti-spam IP block lists. The difference is obvious: all important e-mail ISPs will usually only receive mails on port 25 with SMTP. While machines in the network could technically send e-mails to SMTP servers listening on other ports, that will not happen with any relevant e-mail ISP that could cause problems with the source IP reputation. Also, it is probably legitimate to prevent Samba or possibly even DNS traffic to the outside world.
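The port 25 exception described above could be implemented in nftables roughly like this (a sketch; 192.0.2.10 is a placeholder for a legitimate mail relay in your network):

```
# Drop outbound SMTP from all machines except the dedicated
# mail relay, to protect the source IP reputation.
table inet filter {
    chain forward {
        type filter hook forward priority filter; policy accept;
        ip saddr != 192.0.2.10 tcp dport 25 drop
    }
}
```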

To sum things up: destination ports are part of the destination address space. Blocking ports does not work to prevent malware from communicating with the outside world. There are exactly two reasons for blocking specific destination ports:

  1. A known (good) client software can accidentally be used by a user to leak information (Samba).
  2. The destination IP is known to operate a service on a specific port (for example 25) that should not be available to any machine.

Recommended FOSS tools for (software development) teams

In this post I want to collect and share my experiences with different Free and Open Source tools, mostly in the context of software development teams. The list contains one recommendation for each type of software I’d currently recommend using in a team. I’m not aiming at providing an extensive list with all pros and cons for each product, but a summary of my personal experiences. That means I’m working with the tools listed below in more than one team, and in general the feedback of the teams is positive. In all cases I have worked with alternatives, and I honestly feel that I can make a recommendation. While old-fashioned and hard-to-use GUIs were plaguing FOSS projects in the past, I do not think that this is a major concern nowadays. In my experience, all types of employees can work with the tools listed below. Many of them do not have as many features as the huge commercial alternatives, but they completely fulfill the role they need to.

On a general note, I prefer software that is easy to install and maintain. It’s a huge plus if the software can be installed from a Linux distro repository and is a community-driven project (in comparison to one driven by a company which sees open source as a selling point for its enterprise products). I’m running all tools on Debian servers, which is also community driven and in my opinion a good compromise between stability, maintenance, and up-to-dateness. If the software is not directly available in the Debian repositories, it should be easy to install with all required dependencies and come with good installation documentation.

I have a dislike for projects that have a paid enterprise plan, because the vendor often moves features that are important for teams into the paid versions. Features in paid versions are quite often not FOSS, which makes the concept of using FOSS pointless. (I do think that open source developers need to earn money, but I prefer the support and consulting business model.) Also, a tool should support LDAP for synchronizing teams to simplify permission handling.

  • Chat: Matrix & Element
    While Element feels more like a messenger and less like a team chat, Matrix allows creating rooms which all users on a server can join without invites. At the same time, federation/communication with the outside world is fully supported. The GUI is modern and the privacy/security features are awesome. While Mattermost is also an awesome community chat, it is missing LDAP in the community version. Rocket.Chat does include LDAP and also works quite well, but frequent glitches diminish the overall experience.
  • File sharing: Nextcloud
    Not much to say here. I guess it is the de facto standard and it works well. I usually disable newer “eye-candy” apps like the Dashboard and Weather. There is not much need for them in a file sharing tool. Nextcloud has experienced growing feature creep in recent years, but as most features are encapsulated in apps, these can be disabled. Many of the new features are not really powerful or helpful and distract from the main purpose of the software.
  • Kanban: Nextcloud Deck
    When already using Nextcloud, the Deck app is easy to install and a powerful enough Kanban tool. Wekan feels outdated in comparison, but I have to admit that it has more features. However, in my experience the Deck community is extremely active and probably outpacing the Wekan development.
  • Code hosting: Gitea
    Gitea is an awesome and rapidly developing community-driven project. Gitlab in comparison is really heavy in regards to maintenance & resource consumption. I also feel the GUI of Gitea is much leaner. Additionally, Gitlab sadly excludes some features from its Community Edition which I feel should definitely be part of it, for example support for assigning multiple users to an issue.
  • Project Management: Redmine
    Not all projects in software development teams are about developing software. Gitea can be used for other projects, but usually the GUI feels off in these cases. I like Redmine, which provides all important features for managing projects of all sizes. The advantage of Redmine over other tools: there is no paid version, and all features are fully FOSS.
  • Helpdesk: Zammad
    Zammad is really easy to use. It can be configured to support more complex scenarios, but the overall focus on lean processes helps to focus on the most important thing: answering the questions of customers.
  • Wiki: Wiki.js
    No team should exist without a wiki to document processes and knowledge. Gitea also provides wiki functionality, but it is again focused on supporting software development. Wiki.js has all important features: Markdown (developers like it!) and WYSIWYG support, backups with git, useful permissions and a modern GUI. The best reason for Wiki.js is the option to fully work with Markdown files in a git repository. If at any point Wiki.js becomes stale, migrating will be very easy. I also like DokuWiki for its lean interface, which could be used alternatively. However, I think that Wiki.js will be the future.

Varmilo keyboard settings / key combinations

The original documentation of Varmilo for its keyboards is rather difficult to understand and plain wrong in some cases. A Reddit thread is incomplete, at least for the 104-key version. For the 104-key keyboard, the following combinations can be used:

  • Fn + Arrow Right: Switch Backlight mode
  • Fn + Arrow Up: increase Backlight
  • Fn + Arrow Down: decrease Backlight
  • Fn + Right Win: hold for 3 seconds to switch Fn key with Right Win key (to switch back, press Left Win first, then Fn)
  • Fn + ESC: hold for 3 seconds to reset settings (if switched with the Win key, hold Left Win + ESC)
  • Fn + Left Ctrl: switch Caps Lock and left Ctrl (use Caps Lock key to switch back!)
  • Fn + W: hold for 3 seconds to switch to Windows mode
  • Fn + A: hold for 3 seconds to switch to Apple mode

Unattended OpenBSD sysupgrade with encrypted RAID1

Update 2022-10-22: As of the 7.2 release, OpenBSD supports booting from an encrypted RAID 1. The procedure below therefore becomes obsolete.

If you are running OpenBSD on (mostly) encrypted RAID1 partitions as described in https://sven-seeberg.de/wp/?p=1018, the unattended system upgrade triggered by sysupgrade will fail after rebooting into install mode. Without interaction, the system is stuck in a reboot loop. To continue with the upgrade process, follow these instructions:

  1. When the error message appears that the system cannot continue, hit Control + C to prevent the system from rebooting. You should now have a shell.
  2. Create the sd3 device: cd /dev; sh MAKEDEV sd3
  3. Decrypt the softraid: bioctl -c C -l /dev/sd3a softraid0
  4. Hit Control + D or type exit

The unattended upgrade should continue normally without any further interaction.

Install OpenBSD on (mostly) encrypted RAID1 from USB

Update 2022-10-22: As of the 7.2 release, OpenBSD supports booting from an encrypted RAID 1. The procedure below therefore becomes obsolete.

The following procedure partitions two hard disks (sd0, sd1) into an unencrypted RAID 1 (sd3) and an encrypted RAID 1 (sd4, with the decrypted volume attached as sd5) for OpenBSD, assuming that you’re installing from a USB drive (sd2 in my case). It seems that booting from an encrypted RAID 1 is not supported as of OpenBSD 6.7, therefore the root partition needs to be unencrypted. This setup is basically a modified version of https://research.kudelskisecurity.com/2013/09/19/softraid-and-crypto-for-openbsd-5-3/

    1. After booting the installer, press S to enter the shell.
    2. # cd /dev
    3. Create the sd devices:
      # sh MAKEDEV sd0 sd1 sd2 sd3 sd4 sd5
    4. Check which device is your USB drive with the installer on it:
      # disklabel sd0
      [...]
      # disklabel sd1
      [...]
      # disklabel sd2
      [...]

      Look for the line label:. In my case, sd2 is the USB device.

    5. Delete previous data on disks, if exists:
      # dd if=/dev/zero of=/dev/rsd0c count=1 bs=1M
      # dd if=/dev/zero of=/dev/rsd1c count=1 bs=1M
    6. If you made mistakes during partitioning earlier, reboot at this stage.
    7. Initialize the MBR partition tables:
      # fdisk -iy sd0
      # fdisk -iy sd1
    8. Partition sd0, and repeat for sd1. Partition a is going to contain the unencrypted root, partition b the encrypted other partitions.
      # disklabel -E sd0
      Label editor (enter '?' for help at any prompt)
      sd0> a a
      offset: [1024]
      size: [976772081] 4G
      FS type: [4.2BSD] RAID
      sd0*>a b
      offset: [8401995]
      size: [968366070]
      FS type: [4.2BSD] RAID
      sd0*> w
      sd0> q
      No label changes.
    9. Create both RAID 1 devices:
      # bioctl -c 1 -l sd0a,sd1a softraid0
      [...]
      softraid0: RAID 1 volume attached as sd3
      # bioctl -c 1 -l sd0b,sd1b softraid0
      [...]
      softraid0: RAID 1 volume attached as sd4

      sd3 will be the unencrypted root, sd4 will contain another encrypted softraid0.

    10. Remove garbage from the RAID 1 partitions:
      # dd if=/dev/zero of=/dev/rsd3c count=1 bs=1M
      # dd if=/dev/zero of=/dev/rsd4c count=1 bs=1M
    11. Partition sd3 to be used as the root partition. Use all available space.
      # disklabel -E sd3
      Label editor (enter '?' for help at any prompt)
      sd3> a a
      offset: [0]
      size: [2102963] 
      FS type: [4.2BSD]
      sd3*> w
      sd3> q
      No label changes.
    12. Partition sd4 to be used for all other encrypted partitions. Use all available space.
      # disklabel -E sd4
      Label editor (enter '?' for help at any prompt)
      sd4> a a
      offset: [0]
      size: [974668062] 
      FS type: [4.2BSD] RAID
      sd4*> w
      sd4> q
      No label changes.
    13. Finally, let’s create the encrypted softraid:
      # bioctl -c C -l sd4a softraid0
      [...]
      softraid0: CRYPTO volume attached as sd5
    14. Run install to start the installer.
    15. When asked for the disk to install on, first select sd3 and use (W)hole disk. I split the space into a 2 GB root and 2 GB swap partition.
    16. Then partition sd5 and use (W)hole disk again. Add partitions as you like. I prefer a simplified layout:
      a d   #8 GB for /tmp
      a e   #20GB for /var
      a f   #20GB for /usr
      a g   #remaining space, /home
      w
      q
    17. Complete setup
    18. The boot will fail because the partitions cannot be decrypted. Open a shell by entering sh and run bioctl -c C -l /dev/sd3a softraid0 && exit. To help with decrypting during boot, you can create an executable file /sbin/decrypt with the following content:
      #!/bin/sh
      bioctl -c C -l /dev/sd3a softraid0

Managing password for Saltstack with Passbolt

I really like the approach of Passbolt to manage passwords with PGP. Passbolt also has a decent API that enables some scripting, and some basic Python packages already exist.

That made me wonder if I could use Passbolt as a password safe for Saltstack. After some research, I came up with a pretty simple Python script that renders Pillars from Passbolt groups. After installing https://github.com/netzbegruenung/passbolt-salt, you need to add the following lines to a Pillar SLS file:

#!py
def run():
    from salt_passbolt import fetch_passbolt_passwords
    # The following UUID is the UUID of a Passbolt group
    return fetch_passbolt_passwords("27b9abd4-af9b-4c9e-9af1-cf8cb963680c") 

With that, you can access passwords in states with Jinja:

{{ pillar['passbolt']['3ec2a739-8e51-4c67-89fb-4bbfe9147e17'] }}

I have to admit that addressing groups and passwords by UUIDs is not the most convenient way, but it definitely works.
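For example, a Salt state could render such a password into a config file. A sketch (the state ID, file path and pillar UUID are placeholders):

```
# Hypothetical Salt state writing a Passbolt password into a file.
myservice_secrets:
  file.managed:
    - name: /etc/myservice/secrets.conf
    - mode: "0600"
    - contents: |
        DB_PASSWORD={{ pillar['passbolt']['3ec2a739-8e51-4c67-89fb-4bbfe9147e17'] }}
```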

Please note that the passwords are accessible to all servers that use this Pillar. Therefore, create different Passbolt groups for your different servers.