Recovering crashed MariaDB InnoDB Tables/Indexes

Today I came across a crashed InnoDB MariaDB database. When I tried to restart the service, it immediately failed with the following messages in the systemd journal:

[Note] Starting MariaDB 10.11.6-MariaDB-0+deb12u1 source revision  as process 661197
[Note] InnoDB: Compressed tables use zlib 1.2.13
[Note] InnoDB: Number of transaction pools: 1
[Note] InnoDB: Using crc32 + pclmulqdq instructions
[Note] InnoDB: Using liburing
[Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
[Note] InnoDB: Completed initialization of buffer pool
[Note] InnoDB: File system buffers for log disabled (block size=4096 bytes)
[Note] InnoDB: Starting crash recovery from checkpoint LSN=10862992357
[Note] InnoDB: End of log at LSN=10868013717
[Note] InnoDB: Retry with innodb_force_recovery=5
[ERROR] InnoDB: Plugin initialization aborted with error Data structure corruption
[Note] InnoDB: Starting shutdown...
[ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
[Note] Plugin 'FEEDBACK' is disabled.
[ERROR] Unknown/unsupported storage engine: InnoDB
[ERROR] Aborting

Before continuing with any recovery process, I created a backup of /var/lib/mysql:

cp -rp /var/lib/mysql /var/lib/mysql.bak

Some posts recommend starting MariaDB with innodb_force_recovery = 4. However, as you can see in the messages above, MariaDB automatically retried the recovery with innodb_force_recovery = 5 and still failed. I manually set the recovery mode to 6 in /etc/mysql/mariadb.cnf:

[mysqld]
innodb_force_recovery = 6

With that, MariaDB finally started. Sadly, this did not yet fully resolve the issue: in mode 6, MariaDB starts with all tables in read-only mode. That means I was able to access the data, but the database was not really usable or fixed. The obvious next step was to do a mysqldump, clean the database and then insert all data again. Running mysqldump failed with the next error:

mysqldump: Error 1034: Index for table 'mytable' is corrupt; try to repair it when dumping table mytable at row: 3

Interestingly, dumping a single table did work. If anyone knows why, please let me know. So I dumped a list of all table names in the database into a file called /root/tables and then iterated over all tables with bash, dumping each one into a dedicated file:

while read -r TABLE; do
    mysqldump -f mydatabase "$TABLE" > "/root/mydatabase/$TABLE-$(date -I).sql"
done < /root/tables
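The post above assumes /root/tables already exists; how it was produced is not shown. A likely way (an assumption on my part) is the mysql client's batch mode. The snippet below keeps the real command as a comment and simulates the server output with printf, so it runs standalone and the loop mechanics can be checked:

```shell
# Sketch of how the /root/tables list could be produced (not shown in the
# original post). Against the live server it would be roughly:
#   mysql -N -B -e "SHOW TABLES" mydatabase > /root/tables
# Below, the server output is simulated so the snippet runs without a server.
TABLES_FILE=$(mktemp)
printf 'customers\norders\nmytable\n' > "$TABLES_FILE"  # stand-in for SHOW TABLES
while read -r TABLE; do
    echo "would dump: $TABLE"
done < "$TABLES_FILE"
rm -f "$TABLES_FILE"
```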

All files can then just be concatenated into one large dump file:

cat /root/mydatabase/* > /root/mydatabase.sql
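Before importing the combined dump, a quick sanity check is to compare the number of CREATE TABLE statements with the number of tables in /root/tables. The sketch below demonstrates the check with a throwaway temp directory instead of the real /root paths:

```shell
# Sanity-check sketch: the combined dump should contain one CREATE TABLE
# statement per table in the list. Paths here are a throwaway temp directory
# rather than /root/mydatabase and /root/tables.
WORK=$(mktemp -d)
printf 'a\nb\n' > "$WORK/tables"
printf 'CREATE TABLE `a` (id INT);\n' > "$WORK/a.sql"
printf 'CREATE TABLE `b` (id INT);\n' > "$WORK/b.sql"
cat "$WORK"/*.sql > "$WORK/combined.sql"
EXPECTED=$(wc -l < "$WORK/tables")
FOUND=$(grep -c 'CREATE TABLE' "$WORK/combined.sql")
echo "tables: $EXPECTED, dumped: $FOUND"
rm -rf "$WORK"
```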

After the database content was safely stored in a plain-text SQL dump file, I bootstrapped a new /var/lib/mysql directory:

rm -rf /var/lib/mysql
apt install --reinstall mariadb-server
sudo -u mysql mariadb-install-db

Then I created the database again and inserted the dump:

MariaDB [(none)]> CREATE DATABASE mydatabase;
MariaDB [(none)]> use mydatabase;
MariaDB [mydatabase]> source /root/mydatabase.sql

After this, the database was healthy again.

Migrating from Univention Corporate Server to Samba and Keycloak

In the past I used the Univention Corporate Server (UCS) for identity management in some organizations. However, UCS is relatively huge, and nowadays I prefer to operate a Samba server in combination with Keycloak, because it is easier to integrate into server orchestration tools. Keycloak also provides sufficient functionality to manage users within the Samba Active Directory. It took me quite some time to figure out how to migrate the data cleanly. The goal is to migrate the data into clean Active Directory structures. I only want to migrate the username, sn, givenName, displayName and mail attributes while keeping the objectGUID. Group memberships should also be migrated.

One major caveat was importing the entryUUID of UCS into the objectGUID attribute of Samba: samba-tool has no parameter to set the objectGUID for a new user. A possible workaround is to use ldbmodify to import the user objects first. ldbmodify is available in the ldb-tools package on Debian-based distributions.
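For illustration, a minimal user record of the kind needed for the ldbmodify import looks like this (DN, GUID, and attribute values below are made up; the export script further down generates records of this shape):

```ldif
dn: CN=alice,CN=Users,DC=example,DC=com
changetype: add
objectclass: user
objectGUID: 0f1e2d3c-4b5a-4978-8796-a5b4c3d2e1f0
sAMAccountName: alice
mail: alice@example.com
```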

The import assumes that the Samba server has already been provisioned with

samba-tool domain provision

The following script can be used to export the data in 2 files:

#!/bin/bash

UIDS=$(slapcat -a "(mail=*)" | grep uid: | sed "s/uid: //")

while IFS= read -r USERUID; do
	echo "Exporting '$USERUID'"
	USER_DATA=$(slapcat -a "(uid=$USERUID)")
	USER_DN=$(echo "$USER_DATA" | grep "dn: " | sed "s/dn: //")
	USER_GUID=$(echo "$USER_DATA" | grep "entryUUID: " | sed "s/entryUUID: //")
	USER_MAIL=$(echo "$USER_DATA" | grep -E "^mail:" | sed "s/^mail://")
	USER_FIRST_NAME=$(echo "$USER_DATA" | grep "givenName:" | sed "s/^givenName://")
	USER_LAST_NAME=$(echo "$USER_DATA" | grep "sn:" | sed "s/^sn://")
	USER_DISPLAY_NAME=$(echo "$USER_DATA" | grep "displayName:" | sed "s/^displayName://")
	USER_NTHASH=$(echo "$USER_DATA" | grep "sambaNTPassword: " | sed "s/sambaNTPassword: //")

	echo "Writing $USER_DN"
	echo "dn: CN=$USERUID,CN=Users,DC=example,DC=com" >> /root/users.ldif
	echo "changetype: add" >> /root/users.ldif
	echo "objectclass: user" >> /root/users.ldif
	echo "objectGUID: $USER_GUID" >> /root/users.ldif
	echo "sAMAccountName: $USERUID" >> /root/users.ldif
	echo "mail:$USER_MAIL" >> /root/users.ldif
	echo "displayName:$USER_DISPLAY_NAME" >> /root/users.ldif
	echo "sn:$USER_LAST_NAME" >> /root/users.ldif
	echo "givenName:$USER_FIRST_NAME" >> /root/users.ldif
	echo "" >> /root/users.ldif

	MEMBERSHIPS=$(echo "$USER_DATA" | grep "memberOf: " | sed "s/memberOf: //" | sed "s/cn=//" | cut -d ',' -f 1)
	while IFS= read -r MEMBERSHIP; do
		echo "samba-tool group addmembers '$MEMBERSHIP' $USERUID" >> /root/import.sh
	done <<< "$MEMBERSHIPS"
	echo "pdbedit -u $USERUID --set-nt-hash $USER_NTHASH" >> /root/import.sh
done <<< "$UIDS"

EXPORT_GROUPS=$(slapcat -a "(objectClass=posixGroup)" | grep "cn: " | sed "s/cn: //")
while IFS= read -r GROUPCN; do
	case "$GROUPCN" in
		"Domain Users"|"Domain Admins"|"Domain Guests"|"Domain Controllers"|"Windows Hosts"|"DC Backup Hosts"|"DC Slave Hosts"|"Computers"|"Printer-Admins"|"Slave Join"|"Backup Join")
			continue
			;;
	esac
	GROUP_DATA=$(slapcat -a "(cn=$GROUPCN)")
	GROUP_UUID=$(echo "$GROUP_DATA" | grep "entryUUID: " | sed "s/entryUUID: //")
	echo "Writing $GROUPCN"
	echo "dn: CN=$GROUPCN,CN=Users,DC=example,DC=com" >> /root/users.ldif
	echo "changetype: add" >> /root/users.ldif
	echo "objectClass: top" >> /root/users.ldif
	echo "objectClass: group" >> /root/users.ldif
	echo "cn: $GROUPCN" >> /root/users.ldif
	echo "name: $GROUPCN" >> /root/users.ldif
	echo "objectGUID: $GROUP_UUID" >> /root/users.ldif
	echo "sAMAccountName: $GROUPCN" >> /root/users.ldif
	echo "" >> /root/users.ldif
done <<< "$EXPORT_GROUPS"
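The memberOf extraction used in the script above can be illustrated in isolation; the LDIF-style sample data below is made up:

```shell
# Standalone illustration of the memberOf extraction used in the export
# script; the sample data here is invented for demonstration.
USER_DATA='dn: uid=alice,dc=example,dc=com
memberOf: cn=developers,cn=groups,dc=example,dc=com
memberOf: cn=admins,cn=groups,dc=example,dc=com'
# Strip the attribute name and the leading cn=, then cut at the first comma,
# leaving the bare group names, one per line.
MEMBERSHIPS=$(echo "$USER_DATA" | grep "memberOf: " | sed "s/memberOf: //" | sed "s/cn=//" | cut -d ',' -f 1)
echo "$MEMBERSHIPS"
```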

Now copy the created users.ldif and import.sh files into the /root directory of your new Samba server. To import the data into Samba, first import the user objects with ldbmodify:

ldbmodify -H tdb:///var/lib/samba/private/sam.ldb /root/users.ldif --relax

Finally, set the group memberships and password hashes by executing import.sh:

bash /root/import.sh

Recommended FOSS tools for (software development) teams

In this post I want to collect and share the experiences I have made with different Free and Open Source tools, mostly in the context of software development teams. The list contains one recommendation for each type of software I would currently recommend for a team. I am not aiming at an extensive list with all pros and cons of each product, but at a summary of my personal experiences. I work with the tools listed below in more than one team, and in general the feedback of the teams is positive. In all cases I have also worked with alternatives, and I honestly feel that I can make a recommendation. While old-fashioned and hard-to-use GUIs plagued FOSS projects in the past, I do not think that this is a major concern nowadays. In my experience, all types of employees can work with the tools listed below. Many of them do not have as many features as the huge commercial alternatives, but they completely fulfill the roles they need to.

On a general note, I prefer software that is easy to install and maintain. It is a huge plus if the software can be installed from a Linux distro repository and is a community-driven project (as opposed to being driven by a company which sees open source as a selling point for its enterprise products). I run all tools on Debian servers; Debian is also community driven and, in my opinion, a good compromise between stability, maintenance effort, and up-to-dateness. If the software is not directly available in the Debian repositories, it should be easy to install, with all required dependencies and good installation documentation.

I have a dislike for projects that have a paid enterprise plan, because the vendor often moves important features for teams into the paid versions. Features in paid versions are quite often not FOSS, which makes the concept of using FOSS pointless. (I do think that open source developers need to earn money, but I prefer the support and consulting business model.) Also, the tool should support LDAP for synchronizing teams to simplify permission handling.

  • Chat: Matrix & Element
    While Element feels more like a messenger and less like a team chat, Matrix allows creating rooms which all users on a server can join without invites. At the same time federation/communication with the outside world is fully supported. The GUI is modern and privacy/security features are awesome. While Mattermost is also an awesome community chat, it is missing LDAP in the community version. Rocket.Chat does include LDAP and also works quite well, but frequent glitches are diminishing the overall experience.
  • File sharing: Nextcloud
    Not much to say here. I guess it is the de facto standard, and it works well. I usually disable newer eye-candy apps like the Dashboard and Weather; there is not much need for them in a file sharing tool. Nextcloud has experienced growing feature creep in recent years, but as most features are encapsulated in apps, they can be disabled. Many of these new features are not really powerful or helpful and distract from the main purpose of the software.
  • Kanban: Nextcloud Deck
    When already using Nextcloud, the Deck app is easy to install and a powerful enough Kanban tool. Wekan feels outdated in comparison, but I have to admit that it has more features. However, in my experience the Deck community is extremely active and probably outpacing the Wekan development.
  • Code hosting: Gitea
    Gitea is an awesome and rapidly developing community-driven project. GitLab, in comparison, is really heavy in regard to maintenance and resource consumption, and I feel the GUI of Gitea is much leaner. Sadly, GitLab also excludes some features from its Community Edition which I feel should definitely be part of it, for example support for assigning multiple users to an issue.
  • Project Management: Redmine
    Not all projects in software development teams are about developing software. Gitea can be used for other projects, but usually the GUI feels off in these cases. I like Redmine, which provides all important features for managing projects of all sizes. The advantage of Redmine over other tools: there is no paid version, and all features are fully FOSS.
  • Helpdesk: Zammad
    Zammad is really easy to use. It can be configured to support more complex scenarios, but the overall focus on lean processes helps to concentrate on the most important thing: answering the questions of customers.
  • Wiki: Wiki.js
    No team should exist without a wiki to document processes and knowledge. Gitea also provides wiki functionality, but it is again focused on supporting software development. Wiki.js has all the important features: Markdown (developers like it!) and WYSIWYG support, backups via git, useful permissions, and a modern GUI. The best reason for Wiki.js is the option to fully work with Markdown files in a git repository: if at any point Wiki.js becomes stale, migrating will be very easy. I also like DokuWiki for its lean interface, which could be used alternatively. However, I think that Wiki.js is the future.

Varmilo keyboard settings / key combinations

The original Varmilo documentation for its keyboards is rather difficult to understand and plain wrong in some cases. A Reddit thread on the topic is incomplete, at least for the 104-key version. For the 104-key keyboard, the following combinations can be used:

  • Fn + Arrow Right: switch backlight mode
  • Fn + Arrow Up: increase backlight brightness
  • Fn + Arrow Down: decrease backlight brightness
  • Fn + Right Win: hold for 3 seconds to switch Fn key with Right Win key (to switch back, press Left Win first, then Fn)
  • Fn + ESC: hold for 3 seconds to reset settings (if switched with the Win key, hold Left Win + ESC)
  • Fn + Left Ctrl: switch Caps Lock and left Ctrl (use Caps Lock key to switch back!)
  • Fn + W: hold for 3 seconds to switch to Windows mode
  • Fn + A: hold for 3 seconds to switch to Apple mode

Unattended OpenBSD sysupgrade with encrypted RAID1

Update 2022-10-22: As of the 7.2 release, OpenBSD supports booting from an encrypted RAID 1. The procedure below therefore becomes obsolete.

If you have OpenBSD running on (mostly) encrypted RAID1 partitions as described in https://sven-seeberg.de/wp/?p=1018, the unattended system upgrade triggered by sysupgrade will fail after rebooting into install mode. Without interaction, the system gets stuck in a reboot loop. To continue the upgrade process, follow these steps:

  1. When the error message appears that the system cannot continue, hit Control + C to prevent the system from rebooting. You should now have a shell.
  2. Create the sd3 device: cd /dev; sh MAKEDEV sd3
  3. Decrypt the softraid: bioctl -c C -l /dev/sd3a softraid0
  4. Hit Control + D or type exit

The unattended upgrade should continue normally without any further interaction.

Install OpenBSD on (mostly) encrypted RAID1 from USB

Update 2022-10-22: As of the 7.2 release, OpenBSD supports booting from an encrypted RAID 1. The procedure below therefore becomes obsolete.

The following procedure partitions two hard disks (sd0, sd1) into an unencrypted RAID 1 (sd3) and an encrypted RAID 1 (sd4, carrying the crypto volume sd5) for OpenBSD, assuming that you're installing from a USB drive (sd2). It seems that booting from an encrypted RAID 1 is not supported as of OpenBSD 6.7, therefore the root partition needs to be unencrypted. This setup is basically a modified version of https://research.kudelskisecurity.com/2013/09/19/softraid-and-crypto-for-openbsd-5-3/

    1. After booting the installer, press S to enter the shell.
    2. # cd /dev
    3. Create the sd devices:
      # sh MAKEDEV sd0 sd1 sd2 sd3 sd4 sd5
    4. Check which device is your USB drive with the installer on it:
      # disklabel sd0
      [...]
      # disklabel sd1
      [...]
      # disklabel sd2
      [...]

      Look for the line label:. In my case, sd2 is the USB device.

    5. Delete previous data on the disks, if any:
      # dd if=/dev/zero of=/dev/rsd0c count=1 bs=1M
      # dd if=/dev/zero of=/dev/rsd1c count=1 bs=1M
    6. If you made mistakes during partitioning earlier, reboot at this stage.
    7. Initialize the partition tables:
      # fdisk -iy sd0
      # fdisk -iy sd1
    8. Partition sd0, and repeat for sd1. Partition a is going to contain the unencrypted root, partition b the encrypted other partitions.
      # disklabel -E sd0
      Label editor (enter '?' for help at any prompt)
      sd0> a a
      offset: [1024]
      size: [976772081] 4G
      FS type: [4.2BSD] RAID
      sd0*>a b
      offset: [8401995]
      size: [968366070]
      FS type: [4.2BSD] RAID
      sd0*> w
      sd0> q
      No label changes.
    9. Create both RAID 1 devices:
      # bioctl -c 1 -l sd0a,sd1a softraid0
      [...]
      softraid0: RAID 1 volume attached as sd3
      # bioctl -c 1 -l sd0b,sd1b softraid0
      [...]
      softraid0: RAID 1 volume attached as sd4

      sd3 will be the unencrypted root, sd4 will contain another encrypted softraid0.

    10. Remove garbage from the RAID 1 partitions:
      # dd if=/dev/zero of=/dev/rsd3c count=1 bs=1M
      # dd if=/dev/zero of=/dev/rsd4c count=1 bs=1M
    11. Partition sd3 to be used as the root partition. Use all available space.
      # disklabel -E sd3
      Label editor (enter '?' for help at any prompt)
      sd3> a a
      offset: [0]
      size: [2102963] 
      FS type: [4.2BSD]
      sd3*> w
      sd3> q
      No label changes.
    12. Partition sd4 to be used for all other encrypted partitions. Use all available space.
      # disklabel -E sd4
      Label editor (enter '?' for help at any prompt)
      sd4> a a
      offset: [0]
      size: [974668062] 
      FS type: [4.2BSD] RAID
      sd4*> w
      sd4> q
      No label changes.
    13. Finally, let’s create the encrypted softraid:
      # bioctl -c C -l sd4a softraid0
      [...]
      softraid0: CRYPTO volume attached as sd5
    14. Run install to start the installer.
    15. When asked for the disk to install on, first select sd3 and use (W)hole disk. I split the space into a 2 GB root and 2 GB swap partition.
    16. Then partition sd5 and use (W)hole disk again. Add partitions as you like. I prefer a simplified layout:
      a d   #8 GB for /tmp
      a e   #20GB for /var
      a f   #20GB for /usr
      a g   #remaining space, /home
      w
      q
    17. Complete setup
    18. The first boot will fail, because the encrypted partitions cannot be decrypted yet. Open a shell by entering sh and run bioctl -c C -l /dev/sd3a softraid0 && exit. (With the installer USB drive removed, the devices are renumbered, which is why the partition that was sd4a during installation is now addressed as sd3a.) To simplify decrypting during boot, you can create a file /sbin/decrypt with the following content:
      #!/bin/sh
      bioctl -c C -l /dev/sd3a softraid0

Managing passwords for Saltstack with Passbolt

I really like the approach of Passbolt to manage passwords with PGP. Passbolt also has a decent API that enables some scripting, and some basic Python packages already exist.

That made me wonder if I could use Passbolt as a password safe for Saltstack. After some research, I came up with a pretty simple Python script that renders Pillars from Passbolt groups. After installing https://github.com/netzbegruenung/passbolt-salt, you need to add the following lines to a Pillar SLS file:

#!py
def run():
    from salt_passbolt import fetch_passbolt_passwords
    # The following UUID is the UUID of a Passbolt group
    return fetch_passbolt_passwords("27b9abd4-af9b-4c9e-9af1-cf8cb963680c") 

With that, you can access passwords in states with Jinja:

{{ pillar['passbolt']['3ec2a739-8e51-4c67-89fb-4bbfe9147e17'] }}

I have to admit that addressing groups and passwords with UUIDs is not the most convenient way, but it definitely works.

Please note that the passwords are accessible to all servers that use this Pillar. Therefore, create separate Passbolt groups for your different servers.
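For example, the pillar top file can scope each group's pillar to its server; the file names and minion IDs below are hypothetical:

```yaml
# /srv/pillar/top.sls (sketch; minion IDs and SLS names are made up)
base:
  'web01':
    - passbolt_web
  'db01':
    - passbolt_db
```

Each referenced SLS file would then contain the #!py renderer shown above, with the UUID of the Passbolt group intended for that server.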

Using multiple OpenPGP Smart Cards with the same secret keys

For redundancy I keep the same PGP private key on multiple OpenPGP smart cards. Sadly, GnuPG does not provide a way to manage multiple smart cards for the same private key stub, so the smart cards must be managed manually. (This text does not cover creating multiple smart cards with the same key material; in short, I ran the keytocard command multiple times, once per smart card.)

After importing the smart card on a device, the private key stubs are kept in the directory

~/.gnupg/private-keys-v1.d

To see which file belongs to which private (sub-)key, run

gpg --with-keygrip -K

Then move the files belonging to the smart card to backup locations, for example

cd ~/.gnupg/private-keys-v1.d 
mv AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.key \
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.key.card1

Repeat this for all private keys stored on your smart card.

After that, unplug the first smart card and plug in the second smart card. Run

gpg --edit-card
fetch

Then run gpg --with-keygrip -K again and copy the newly created stub files to new locations:

cd ~/.gnupg/private-keys-v1.d 
mv AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.key \
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.key.card2

Now you can copy the .card1 or .card2 files over the original key files and thereby switch between the smart cards. You can write a short bash script that automatically copies the correct key files. Example:

#!/bin/bash
touch ~/.gnupg/sc-toggle-status
SC=$(cat ~/.gnupg/sc-toggle-status)
if [ "$SC" == "card1" ]; then
  echo "card2" > ~/.gnupg/sc-toggle-status
  find ~/.gnupg/private-keys-v1.d -name "*.card2" | while read f; do cp "$f" "${f%.card2}"; done
  echo "Switching to SmartCard 2"
else
  echo "card1" > ~/.gnupg/sc-toggle-status
  find ~/.gnupg/private-keys-v1.d -name "*.card1" | while read f; do cp "$f" "${f%.card1}"; done
  echo "Switching to SmartCard 1"
fi
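The toggle script relies on POSIX suffix stripping to recover the original filename GnuPG looks for; in isolation (with a made-up keygrip):

```shell
# ${f%.card1} strips the trailing ".card1" suffix, turning the backup
# filename back into the stub filename GnuPG expects.
f="1234ABCD.key.card1"
echo "${f%.card1}"
```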