Migrating from Univention Corporate Server to Samba and Keycloak

In the past I used the Univention Corporate Server (UCS) for identity management in some organizations. However, I find that UCS is relatively huge, and nowadays I prefer to operate a Samba server in combination with Keycloak, because it is easier to integrate into server orchestration tools. Also, Keycloak provides sufficient functionality to manage users within the Samba Active Directory. It took me quite some time to figure out how to migrate the data cleanly. The goal is to migrate the data into clean Active Directory structures. I only want to migrate the username, sn, givenName, displayName and mail attributes while keeping the entryUUID as the objectGUID. Also, group memberships should be migrated.

One major caveat was importing the entryUUID of UCS into the objectGUID attribute of Samba. samba-tool has no parameter to set the objectGUID for a new user. A possible workaround is to use ldbmodify to import the user objects first. ldbmodify is available in the ldb-tools package on Debian based distributions.

The import assumes that the Samba server has already been provisioned with

samba-tool domain provision

The following script can be used to export the data into two files, users.ldif and import.sh:


#!/bin/bash
# Export all users that have a mail address from the UCS LDAP
UIDS=$(slapcat -a "(mail=*)" | grep uid: | sed "s/uid: //")

while IFS= read -r USERUID; do
	echo "Exporting '$USERUID'"
	USER_DATA=$(slapcat -a "(uid=$USERUID)")
	USER_DN=$(echo "$USER_DATA" | grep "dn: " | sed "s/dn: //")
	USER_GUID=$(echo "$USER_DATA" | grep "entryUUID: " | sed "s/entryUUID: //")
	USER_MAIL=$(echo "$USER_DATA" | grep -E "^mail:" | sed "s/^mail://")
	USER_FIRST_NAME=$(echo "$USER_DATA" | grep "givenName:" | sed "s/^givenName://")
	USER_LAST_NAME=$(echo "$USER_DATA" | grep "sn:" | sed "s/^sn://")
	USER_DISPLAY_NAME=$(echo "$USER_DATA" | grep "displayName:" | sed "s/^displayName://")
	USER_NTHASH=$(echo "$USER_DATA" | grep "sambaNTPassword: " | sed "s/sambaNTPassword: //")

	echo "Writing $USER_DN"
	echo "dn: CN=$USERUID,CN=Users,DC=example,DC=com" >> /root/users.ldif
	echo "changetype: add" >> /root/users.ldif
	echo "objectclass: user" >> /root/users.ldif
	echo "objectGUID: $USER_GUID" >> /root/users.ldif
	echo "sAMAccountName: $USERUID" >> /root/users.ldif
	echo "mail:$USER_MAIL" >> /root/users.ldif
	echo "displayName:$USER_DISPLAY_NAME" >> /root/users.ldif
	echo "sn:$USER_LAST_NAME" >> /root/users.ldif
	echo "givenName:$USER_FIRST_NAME" >> /root/users.ldif
	echo "" >> /root/users.ldif

	MEMBERSHIPS=$(echo "$USER_DATA" | grep "memberOf: " | sed "s/memberOf: //" | sed "s/cn=//" | cut -d ',' -f 1)
	while IFS= read -r MEMBERSHIP; do
		echo "samba-tool group addmembers '$MEMBERSHIP' $USERUID" >> /root/import.sh
	done <<< "$MEMBERSHIPS"
	echo "pdbedit -u $USERUID --set-nt-hash $USER_NTHASH" >> /root/import.sh
done <<< "$UIDS"
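As a side note, the repeated grep/sed pairs in the script can be factored into a small helper. This is just a sketch; the ldif_attr function is my own shorthand, not part of the original script:

```shell
# Extract the first value of an LDIF attribute from a blob of LDIF text.
# ldif_attr is a hypothetical helper, not part of the export script above.
ldif_attr() {
	echo "$2" | grep "^$1: " | head -n 1 | sed "s/^$1: //"
}

# Example with inline sample data:
SAMPLE="dn: uid=jdoe,dc=example,dc=com
uid: jdoe
mail: jdoe@example.com"
ldif_attr mail "$SAMPLE"   # prints jdoe@example.com
```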

EXPORT_GROUPS=$(slapcat -a "(objectClass=posixGroup)" | grep "cn: " | sed "s/cn: //")
while IFS= read -r GROUPCN; do
	case "$GROUPCN" in
		"Domain Users"|"Domain Admins"|"Domain Guests"|"Domain Controllers"|"Windows Hosts"|"DC Backup Hosts"|"DC Slave Hosts"|"Computers"|"Printer-Admins"|"Slave Join"|"Backup Join")
			# Skip UCS default groups that already exist in a provisioned Samba domain
			continue
			;;
	esac
	GROUP_DATA=$(slapcat -a "(cn=$GROUPCN)")
	GROUP_UUID=$(echo "$GROUP_DATA" | grep "entryUUID: " | sed "s/entryUUID: //")
	echo "Writing $GROUPCN"
	echo "dn: CN=$GROUPCN,CN=Users,DC=example,DC=com" >> /root/users.ldif
	echo "changetype: add" >> /root/users.ldif
	echo "objectClass: top" >> /root/users.ldif
	echo "objectClass: group" >> /root/users.ldif
	echo "cn: $GROUPCN" >> /root/users.ldif
	echo "name: $GROUPCN" >> /root/users.ldif
	echo "objectGUID: $GROUP_UUID" >> /root/users.ldif
	echo "sAMAccountName: $GROUPCN" >> /root/users.ldif
	echo "" >> /root/users.ldif
done <<< "$EXPORT_GROUPS"
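For reference, one user entry in the generated users.ldif looks roughly like this (all values here are hypothetical):

```
dn: CN=jdoe,CN=Users,DC=example,DC=com
changetype: add
objectclass: user
objectGUID: 0b36b79e-3e33-103b-8a81-f12fee723456
sAMAccountName: jdoe
mail: jdoe@example.com
displayName: Jane Doe
sn: Doe
givenName: Jane
```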

Now copy the created users.ldif and import.sh files into the /root directory of your new Samba server. To import the data into Samba, first import the user and group objects with ldbmodify:

ldbmodify -H tdb:///var/lib/samba/private/sam.ldb /root/users.ldif --relax

Finally, set the group memberships and password hashes by executing import.sh:

bash /root/import.sh

Why blocking destination ports or protocols in firewalls is usually bad

TL;DR: Should a firewall block all destination ports with a list of exceptions? The answer is almost certainly “no”. You’re just breaking the internet.

I notice that there are still tons of firewall operators who block outbound internet traffic to less common destination ports and protocols “to improve the network security”. While there was some merit to that in the past, this approach is basically useless today. There were times when malware mostly used ports other than tcp/443 with TLS, but these times are gone. Every malware will try to hide within traffic of the most used protocol: HTTPS. The only things achieved by blocking other destination ports and protocols are depriving users of (good) ways of communication and, most importantly, taking away large parts of the IPv4 connection address space.

Let’s use Jitsi, a video conferencing solution, as an example of why blocking destination ports serves no purpose: the video stream is usually transmitted via UDP to a service listening on port 10000. If the port is blocked by a firewall, the software is quite often configured to fall back to a TURN server that usually listens on port 443/tcp. I think this example tells the full story: blocking the port does not prevent the client program from communicating with the outside world. Instead, the program has to resort to a less suited protocol (TCP), and the destination IP address can not be reused for normal HTTPS traffic (unless some sort of deep packet inspection is done on the server, which also has a performance impact). Additionally, blocked firewall ports take away an easy way of shaping and monitoring traffic based on port and protocol. For example, UDP traffic to port 10000 could be preferred over traffic to port 443/tcp. Users in video conferences need low latency, while opening a website in a browser can take a couple of seconds without huge impact on users.

Does that mean that no ports should be blocked? No, there are some edge cases where blocking ports is useful. For example, it can be useful to block port 25/tcp as a destination for all outbound traffic. We know that all sane e-mail ISPs will receive e-mails on port 25. Preventing all internal machines from sending spam directly to the common ISPs will keep the source IP from appearing on anti-spam IP block lists. The difference is obvious: all important e-mail ISPs will usually only receive mails on port 25 with SMTP. While machines in the network could technically send e-mails to SMTP servers listening on other ports, that will not happen with any relevant e-mail ISP that could cause problems with the source IP reputation. Also, it is probably legitimate to prevent Samba or possibly even DNS traffic to the outside world.

To sum things up: destination ports are part of the destination address space. Blocking ports does not prevent malware from communicating with the outside world. There are exactly two reasons for blocking specific destination ports:

  1. A known (good) client software can accidentally be used by a user to leak information (Samba).
  2. The destination IP is known to operate a service on a specific port (for example 25) that should not be available to any machine.
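The SMTP case from point 2 can be expressed in a few iptables rules. This is only a sketch for /etc/iptables/rules.v4, assuming a hypothetical setup where 192.0.2.10 is the only machine allowed to deliver mail directly:

```
# allow the dedicated mail server to speak SMTP to the outside
-A FORWARD -s 192.0.2.10 -p tcp --dport 25 -j ACCEPT
# reject direct SMTP from everything else to keep the source IP off block lists
-A FORWARD -p tcp --dport 25 -j REJECT --reject-with tcp-reset
```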

Recommended FOSS tools for (software development) teams

In this post I want to collect and share my experiences with different Free and Open Source tools, mostly in the context of software development teams. The list contains one recommendation for each type of software I’d currently recommend using in a team. I’m not aiming at an extensive list with all pros and cons for each product, but a summary of my personal experiences. That means I’m working with the tools listed below in more than one team, and in general the feedback of the teams is positive. In all cases I have worked with alternatives, and I honestly feel that I can make a recommendation. While old-fashioned and hard-to-use GUIs plagued FOSS projects in the past, I do not think that this is a major concern nowadays. In my experience, all types of employees can work with the tools listed below. Many of the tools do not have as many features as the huge commercial alternatives, but they completely fulfill the role they need to.

On a general note I prefer easy to install and maintain software. It’s a huge plus if the software can be installed from a Linux distro repository and is a community driven project (in comparison to driven by a company which sees Open Source as a selling point for its enterprise products). I’m running all tools on Debian servers, which is also community driven and in my opinion a good compromise between stability, maintenance, and up-to-dateness. If the software is not directly available in the Debian repositories, it should be easy to install with all required dependencies and a good installation documentation.

I have a dislike for projects that have a paid enterprise plan, because the vendor often moves important features for teams into the paid versions. Features in paid versions are quite often not FOSS, which makes the concept of using FOSS pointless. (I do think that open source developers need to earn money, but I prefer the support and consulting business model.) Also, the tool should support LDAP for synchronizing teams to simplify permission handling.

  • Chat: Matrix & Element
    While Element feels more like a messenger and less like a team chat, Matrix allows creating rooms which all users on a server can join without invites. At the same time federation/communication with the outside world is fully supported. The GUI is modern and privacy/security features are awesome. While Mattermost is also an awesome community chat, it is missing LDAP in the community version. Rocket.Chat does include LDAP and also works quite well, but frequent glitches are diminishing the overall experience.
  • File sharing: Nextcloud
Not much to say here. I guess it is the de facto standard and it works well. I usually disable newer “eye-candy” apps like the Dashboard and Weather; there is not much need for them in a file sharing tool. Nextcloud has been experiencing growing feature creep in recent years, but as most features are encapsulated in apps, these can be disabled. Many of these new features are not really powerful or helpful and distract from the main purpose of the software.
  • Kanban: Nextcloud Deck
    When already using Nextcloud, the Deck app is easy to install and a powerful enough Kanban tool. Wekan feels outdated in comparison, but I have to admit that it has more features. However, in my experience the Deck community is extremely active and probably outpacing the Wekan development.
  • Code hosting: Gitea
Gitea is an awesome and rapidly developing community driven project. Gitlab in comparison is really heavy in regards to maintenance and resource consumption, and I feel the GUI of Gitea is much leaner. Also, Gitlab sadly excludes some features from its Community Edition which I feel should definitely be part of it, for example support for assigning multiple users to an issue.
  • Project Management: Redmine
    Not all projects in software development teams are about developing software. Gitea can be used for other projects, but usually the GUI feels off in these cases. I like Redmine, which provides all important features for managing projects of all sizes. The advantage of Redmine over other tools: there is no paid version, and all features are fully FOSS.
  • Helpdesk: Zammad
Zammad is really easy to use. It can be configured to support more complex scenarios, but the overall focus on lean processes helps to concentrate on the most important thing: answering the questions of customers.
  • Wiki: Wiki.js
    No team should exist without a wiki to document processes and knowledge. Gitea also provides a Wiki functionality, but this is again focused on supporting software development. Wiki.js has all important features: Markdown (developers like it!) and WYSIWYG support, Backups with git, useful permissions and a modern GUI. The best reason for Wiki.js is the option to fully work with Markdown files in a git repository. If at any point Wiki.js becomes stale, migrating will be very easy. I also like Dokuwiki for its lean interface, which could be used alternatively. However, I think that Wiki.js will be the future.

Varmilo keyboard settings / key combinations

The original documentation of Varmilo for its keyboards is rather difficult to understand and plain wrong in some cases. A Reddit thread is incomplete, at least for the 104 keys version. For the 104 keys keyboard, the following combinations can be used:

  • Fn + Arrow Right: Switch Backlight mode
  • Fn + Arrow Up: increase Backlight
  • Fn + Arrow Down: decrease Backlight
  • Fn + Right Win: hold for 3 seconds to switch Fn key with Right Win key (to switch back, press Left Win first, then Fn)
  • Fn + ESC: hold for 3 seconds to reset settings (if switched with the Win key, hold Left Win + ESC)
  • Fn + Left Ctrl: switch Caps Lock and left Ctrl (use Caps Lock key to switch back!)
  • Fn + W: hold for 3 seconds to switch to Windows mode
  • Fn + A: hold for 3 seconds to switch to Apple mode

Unattended OpenBSD sysupgrade with encrypted RAID1

Update 2022-10-22: As of the 7.2 release, OpenBSD supports booting from an encrypted RAID 1. The procedure below therefore becomes obsolete.

If you have an OpenBSD running on (mostly) encrypted RAID1 partitions like I described in https://sven-seeberg.de/wp/?p=1018, the unattended system upgrade triggered by sysupgrade will fail after rebooting into install mode. Without interaction, the system is stuck in a reboot loop. To continue with the upgrade process, follow these instructions:

  1. When the error message appears that the system cannot continue, hit Control + C to prevent the system from rebooting. You should now have a shell.
  2. Create the sd3 device: cd /dev; sh MAKEDEV sd3
  3. Decrypt the softraid: bioctl -c C -l /dev/sd3a softraid0
  4. Hit Control + D or type exit

The unattended upgrade should continue normally without any further interaction.

Install OpenBSD on (mostly) encrypted RAID1 from USB

Update 2022-10-22: As of the 7.2 release, OpenBSD supports booting from an encrypted RAID 1. The procedure below therefore becomes obsolete.

The following procedure partitions two hard disks (sd0, sd1) into an unencrypted RAID 1 (sd3) and an encrypted RAID 1 (sd4, carrying the crypto volume sd5) for OpenBSD, assuming that you’re installing from a USB drive (sd2 in this example). Booting from an encrypted RAID 1 is not supported as of OpenBSD 6.7, therefore the root partition needs to be unencrypted. This setup is basically a modified version of https://research.kudelskisecurity.com/2013/09/19/softraid-and-crypto-for-openbsd-5-3/

    1. After booting the installer, press S to enter the shell.
    2. # cd /dev
    3. Create the sd devices:
      # sh MAKEDEV sd0 sd1 sd2 sd3 sd4 sd5
    4. Check which device is your USB drive with the installer on it:
      # disklabel sd0
      # disklabel sd1
      # disklabel sd2

      Look for the line label:. In my case, sd2 is the USB device.

    5. Delete previous data on disks, if exists:
      # dd if=/dev/zero of=/dev/rsd0c count=1 bs=1M
      # dd if=/dev/zero of=/dev/rsd1c count=1 bs=1M
    6. If you made mistakes during partitioning earlier, reboot at this stage.
    7. Initialize the MBR partition tables:
      # fdisk -iy sd0
      # fdisk -iy sd1
    8. Partition sd0, and repeat for sd1. Partition a is going to contain the unencrypted root, partition b the encrypted other partitions.
      # disklabel -E sd0
      Label editor (enter '?' for help at any prompt)
      sd0> a a
      offset: [1024]
      size: [976772081] 4G
      FS type: [4.2BSD] RAID
      sd0*> a b
      offset: [8401995]
      size: [968366070]
      FS type: [4.2BSD] RAID
      sd0*> w
      sd0> q
      No label changes.
    9. Create both RAID 1 devices:
      # bioctl -c 1 -l sd0a,sd1a softraid0
      softraid0: RAID 1 volume attached as sd3
      # bioctl -c 1 -l sd0b,sd1b softraid0
      softraid0: RAID 1 volume attached as sd4

      sd3 will be the unencrypted root, sd4 will contain another encrypted softraid0.

    10. Remove garbage from the RAID 1 partitions:
      # dd if=/dev/zero of=/dev/rsd3c count=1 bs=1M
      # dd if=/dev/zero of=/dev/rsd4c count=1 bs=1M
    11. Partition sd3 to be used as the root partition. Use all available space.
      # disklabel -E sd3
      Label editor (enter '?' for help at any prompt)
      sd3> a a
      offset: [0]
      size: [2102963] 
      FS type: [4.2BSD]
      sd3*> w
      sd3> q
      No label changes.
    12. Partition sd4 to be used for all other encrypted partitions. Use all available space.
      # disklabel -E sd4
      Label editor (enter '?' for help at any prompt)
      sd4> a a
      offset: [0]
      size: [974668062] 
      FS type: [4.2BSD] RAID
      sd4*> w
      sd4> q
      No label changes.
    13. Finally, let’s create the encrypted softraid:
      # bioctl -c C -l sd4a softraid0
      softraid0: CRYPTO volume attached as sd5
    14. Run install to start the installer.
    15. When asked for the disk to install on, first select sd3 and use (W)hole disk. I split the space into a 2 GB root and 2 GB swap partition.
    16. Then partition sd5 and use (W)hole disk again. Add partitions as you like. I prefer a simplified layout:
      a d   #8 GB for /tmp
      a e   #20GB for /var
      a f   #20GB for /usr
      a g   #remaining space, /home
    17. Complete setup
    18. The boot will fail, because the partitions cannot be decrypted. Open a shell by entering sh and run bioctl -c C -l /dev/sd3a softraid0 && exit. To make decrypting during boot easier, you can create a file /sbin/decrypt with the following content:
      bioctl -c C -l /dev/sd3a softraid0

Managing password for Saltstack with Passbolt

I really like the approach of Passbolt to manage passwords with PGP. Passbolt also has a decent API that enables some scripting, and some basic Python packages already exist.

That made me wonder if I could use Passbolt as a password safe for Saltstack. After some research, I came up with a pretty simple Python script that renders Pillars from Passbolt groups. After installing https://github.com/netzbegruenung/passbolt-salt, you need to add the following lines to a Pillar SLS file:

def run():
    from salt_passbolt import fetch_passbolt_passwords
    # The following UUID is the UUID of a Passbolt group
    return fetch_passbolt_passwords("27b9abd4-af9b-4c9e-9af1-cf8cb963680c") 

With that, you can access passwords in states with Jinja:

{{ pillar['passbolt']['3ec2a739-8e51-4c67-89fb-4bbfe9147e17'] }}

I have to admit that addressing groups and passwords with UUIDs is not the most convenient way, but it definitely works.
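For example, a rendered password can feed a state directly. This is a sketch with a hypothetical state file path, using the group UUID from above and the file.managed contents_pillar parameter:

```
# /srv/salt/app/init.sls (hypothetical example)
/etc/app/secret:
  file.managed:
    - contents_pillar: passbolt:3ec2a739-8e51-4c67-89fb-4bbfe9147e17
    - mode: '0600'
    - user: root
```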

Please note that the passwords are accessible to all servers that use this Pillar. Therefore create different Passbolt groups for your different servers.
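One way to implement that separation is to map one Pillar SLS file per server role, each wrapping a different Passbolt group UUID, in the Pillar top file. Minion IDs and file names here are hypothetical:

```
# /srv/pillar/top.sls (hypothetical minion IDs and file names)
base:
  'web*':
    - passbolt_web   # renders the Passbolt group for the web servers
  'db*':
    - passbolt_db    # a different group, so db passwords stay off the web servers
```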

Using multiple OpenPGP Smart Cards with the same secret keys

For redundancy I am keeping the same PGP private key on multiple OpenPGP smart cards. Sadly, GnuPG does not provide a way to manage multiple smart cards for the same private key stub. Therefore, the management for the smart cards must be done manually. (This text does not cover creating multiple smart cards with the same device. Outline: I’m running the keytocard command multiple times on different smart cards.)

After importing the smart card on a device, the private key stubs are kept in the directory ~/.gnupg/private-keys-v1.d.

To see which file belongs to which private (sub-)key, run

gpg --with-keygrip -K

Then copy the files belonging to the smart card to backup locations, for example (with KEYGRIP being the keygrip shown by gpg):

cd ~/.gnupg/private-keys-v1.d
cp KEYGRIP.key KEYGRIP.key.card1

Repeat this for all private keys stored on your smart card.

After that, unplug the first smart card and plug in the second smart card. Run

gpg --card-edit

Then run gpg --with-keygrip -K again and copy the newly created stub files to new locations:

cd ~/.gnupg/private-keys-v1.d
cp KEYGRIP.key KEYGRIP.key.card2

Now you can copy the .card1 or .card2 files over the original key file and thereby switch the smart card. You can write a short bash script that automatically copies the correct key files. Example:

touch ~/.gnupg/sc-toggle-status
SC=$(cat ~/.gnupg/sc-toggle-status)
if [ "$SC" == "card1" ]; then
  echo "card2" > ~/.gnupg/sc-toggle-status
  find ~/.gnupg/private-keys-v1.d -name "*.card2" | while read -r f; do cp "$f" "${f%.card2}"; done
  echo "Switching to SmartCard 2"
else
  echo "card1" > ~/.gnupg/sc-toggle-status
  find ~/.gnupg/private-keys-v1.d -name "*.card1" | while read -r f; do cp "$f" "${f%.card1}"; done
  echo "Switching to SmartCard 1"
fi
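The `${f%.card1}` parameter expansion strips the suffix so the stub is copied back over its original keygrip filename. A self-contained demonstration of that round trip (the scratch directory and file name are made up):

```shell
# create a fake stub with a .card2 suffix in a scratch directory
DIR=$(mktemp -d)
echo "stub for card 2" > "$DIR/0123ABCD.key.card2"

# copy it back over the "real" stub name, as the toggle script does
find "$DIR" -name "*.card2" | while read -r f; do cp "$f" "${f%.card2}"; done

cat "$DIR/0123ABCD.key"   # prints: stub for card 2
rm -r "$DIR"
```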

Debian router with IPv6 prefix delegation, DMZ and dynamic DNS

Recently, I started to set up a Debian Buster based router with IPv6 prefix delegation and two /64 subnets. One subnet is used for desktop clients, the other serves as a demilitarized zone (DMZ) for servers. The Debian router is located behind a Fritz.Box home router, which serves as the DSL modem and forwards all external ports to the Debian router. Of course, traditional IPv4 with NAT is also configured. I’m using a dynamic DNS service to access the IPv6 addresses in the DMZ from the Internet. It took me quite some time to figure everything out, therefore I want to share my findings. Of course, this requires that your ISP provides you with more than just one /64 subnet. My ISP provides a /56.

The following diagram illustrates the setup, including interface names on the router:

Regarding IPv4, enp1s0 has the address, enp2s0 has and enp3s0 has

First, I had to enable prefix delegation in my Fritz.Box. Coming from the IPv4 NAT world this was something new.

Now with prefix delegation enabled in the Fritz.Box, the Debian router needs to set these prefixes to its DMZ and client network interfaces (enp2s0, enp3s0). This can be achieved with the WIDE DHCPv6 client. (https://superuser.com/questions/742792/how-do-i-deploy-ipv6-within-a-lan-using-a-debian-based-router-and-prefix-delegat was very helpful for me.)

On the router, install it (and all other required packages) with

sudo apt install wide-dhcpv6-client dnsmasq iptables-persistent

Then edit /etc/wide-dhcpv6/dhcp6c.conf and set its content to

profile default
{
  request domain-name-servers;
  request domain-name;
  script "/etc/wide-dhcpv6/dhcp6c-script";
};

interface enp1s0 {
    send rapid-commit;
    send ia-na 0;
    send ia-pd 0;
};

id-assoc na 0 {
};

id-assoc pd 0 {
    prefix ::/60 infinity;
    prefix-interface enp2s0 {
        sla-len 4;
        sla-id 0;
        ifid 1;
    };
    prefix-interface enp3s0 {
        sla-len 4;
        sla-id 1;
        ifid 1;
    };
};

Also configure the /etc/network/interfaces like this:

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

allow-hotplug enp1s0
iface enp1s0 inet dhcp
iface enp1s0 inet6 auto
    # Important to accept delegated prefixes
    post-up sysctl -w net.ipv6.conf.enp1s0.accept_ra=2

allow-hotplug enp2s0
iface enp2s0 inet static

allow-hotplug enp3s0
iface enp3s0 inet static

Now when connecting enp1s0, the delegated prefixes will automatically be set to the internal facing interfaces. The internal interfaces will receive the addresses $PREFIX::1.

Next, I’m using Dnsmasq on the internal interfaces to provide DNS and IPv6 router advertisements. Add the following lines to the /etc/dnsmasq.conf

# IPv4
# IPv6
dhcp-range=::1,constructor:enp2s0,ra-stateless,ra-names,4h
dhcp-range=::1,constructor:enp3s0,ra-stateless,ra-names,4h

Iptables rules manage inbound and outbound traffic between the different network segments. As is common, the green zone only allows outbound traffic, while the DMZ allows inbound traffic to specified hosts. The following configuration demonstrates how to allow inbound IPv6 traffic to specific hosts; the rule can be extended to specific ports as well. To restore the Iptables rules during boot, I’m using the iptables-persistent package. My /etc/iptables/rules.v4 and /etc/iptables/rules.v6 contain the following lines:

# /etc/iptables/rules.v4
:OUTPUT ACCEPT [81:8253]
-A INPUT -i enp2s0 -j ACCEPT
-A INPUT -i enp3s0 -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A FORWARD -i enp2s0 -j ACCEPT
-A FORWARD -i enp3s0 -j ACCEPT
:INPUT ACCEPT [23:1484]
:OUTPUT ACCEPT [24:1535]
# /etc/iptables/rules.v6
:OUTPUT ACCEPT [175:15496]
-A INPUT -p ipv6-icmp -j ACCEPT
-A INPUT -s fe80::/10 -j ACCEPT
-A INPUT -i enp2s0 -j ACCEPT
-A INPUT -i enp3s0 -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A FORWARD -i enp2s0 -j ACCEPT
-A FORWARD -i enp3s0 -j ACCEPT
-A FORWARD -d ::2/::ffff:ffff:ffff:ffff -o enp2s0 -p tcp -j ACCEPT

Notice the rule -A FORWARD -d ::2/::ffff:ffff:ffff:ffff -o enp2s0 -p tcp -j ACCEPT. This allows accessing the host in the DMZ from the internet. Now we need to take care that the server in the DMZ always gets the $PREFIX::2 address. This can be done by setting a token with ip. To do this every time the interface is activated, for example on boot, add the following lines to the /etc/network/interfaces configuration of the server in the DMZ:

iface enp0s31f6 inet6 auto
    pre-up /sbin/ip token set ::2 dev enp0s31f6

To publish the IPv6 address of the server on freedns.afraid.org, I’m using the following crontab line (replace $TOKEN with your private token):

* *    * * *   (IP=$(ip -6 a list dev enp0s31f6 | grep global | awk '{print $2}' | sed 's/\/64//') && wget --no-check-certificate -O - "https://freedns.afraid.org/dynamic/update.php?$TOKEN&address=$IP" >> /tmp/freedns_$HOSTNAME.log 2>&1)
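The address extraction pipeline in that crontab line can be tried against canned `ip -6 a` output; the prefix below is from the IPv6 documentation range, not a real address:

```shell
# sample lines resembling `ip -6 a list dev enp0s31f6` output
SAMPLE="    inet6 2001:db8:1:2::2/64 scope global dynamic
    inet6 fe80::1234/64 scope link"

# same pipeline as in the crontab line: keep the global address, strip the prefix length
echo "$SAMPLE" | grep global | awk '{print $2}' | sed 's/\/64//'   # prints 2001:db8:1:2::2
```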

I hope I did not forget any important part. Feel free to ping me if a setup based on this post does not work.