There are quite a few reasonable reasons why you would want to migrate your cron jobs to systemd timers. While cron used to be present on virtually any Unix-like system, that is no longer the case. You can of course still install it, but most of my systems are running systemd nowadays anyway.

Apart from that, here are a few reasons to further embrace your love-hate relationship with systemd:

  • Logging: Cron jobs may or may not write to a log file somewhere. This can become a mess, especially on multi-user systems where every crontab follows its own "standard". Systemd timers log to the systemd journal in a clean fashion, making debugging a lot easier.
  • Dependencies: Units can declare dependencies on other units, ensuring that all necessary prerequisites for a job are in place.
  • Configuration: This might be subjective and is similar to the logging point. Personally, I find unit files easier to read (and to find). The syntax is nicely documented, and anyone having to make changes will be able to do so without understanding every line of what your awesome-shellscript.sh does.
  • Autostart: Enabling or disabling a service or timer at boot just requires a systemctl enable/disable, making management less complex.
  • Other: There are more points to be made here, e.g. resource management. The ones above are the most important to me at this point. The great Arch Wiki, as always, has more information.

The Jobs

I'll be covering two cron jobs on one of my servers as examples that I want to migrate:

*/5 * * * * /var/lib/cloudflare-ddns/update.sh >> /var/log/cloudflare_ddns.log 2>&1
0   2 * * * /var/lib/borgbackup/create-backup.sh

The backup script run by the second job currently includes measures to send out an email if something has failed. It checks exit codes for some of the commands. While migrating this, I will also set up a more centralized systemd unit to send out emails for failed units and timers.

Some Basics

While a crontab entry is just a single line of configuration, systemd needs two files to accomplish the job: a timer file with the suffix .timer and a service (.service) which is controlled by it.
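As a rough sketch (the file names are placeholders and the details follow in the examples below), such a pair looks like this:

# my-awesome-job.timer
[Unit]
Description=Run my-awesome-job on a schedule

[Timer]
OnCalendar=daily

# my-awesome-job.service
[Unit]
Description=My awesome job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/awesome-shellscript.sh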

Timer Units

Timers are systemd unit files ending in .timer. They are loaded in the same way as other units but include a [Timer] section. They support both realtime timers (which run at a specified time, a.k.a. wallclock timers) and monotonic timers, which run after a given relative interval. I will be using a monotonic timer to check and set my dynamic DNS settings and a realtime timer for the backup job, since the latter is quite resource-hungry and better run at night when I'm not actively using the server.
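As a quick illustration (the values are arbitrary), the two kinds differ only in their [Timer] directives:

# monotonic: first run 2 minutes after boot, then 5 minutes after each activation
[Timer]
OnBootSec=2min
OnUnitActiveSec=5min

# realtime: every day at 02:00
[Timer]
OnCalendar=*-*-* 02:00:00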

Service Units

The corresponding my-awesome-job.service for a my-awesome-job.timer is normally named like the timer. If you really need to activate a differently named .service, you can do this with a Unit= directive in the [Timer] section, as shown below. I won't go into the details of service management, as it should be well known nowadays.
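For example, a timer that should start a service with a different name (my-other-job.service is just a placeholder here) would contain something like:

[Timer]
OnCalendar=daily
Unit=my-other-job.service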

Example 1: Run a shell script every 5 minutes

The following job runs a script to update my IP via the Cloudflare API. This serves as a DynDNS-like solution so the host is always reachable at its URL, even though it has a non-static IP address. The IP rarely changes though, so the script checks beforehand and exits if it is unchanged. The script is included in my dotfiles as a template and gets rendered by Ansible.
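The actual script is not the focus of this post, but as a minimal sketch of that early-exit logic (the cache file path, the IP lookup service and the update step are all assumptions, not my real script):

#!/bin/sh
# Compare the current public IP with the one from the last run and bail out early if nothing changed.
CACHE=/var/lib/cloudflare-ddns/last_ip
CURRENT_IP=$(curl -sf https://api.ipify.org) || exit 1

if [ -f "$CACHE" ] && [ "$CURRENT_IP" = "$(cat "$CACHE")" ]; then
    echo "[Cloudflare DDNS] IPs have not changed."
    exit 0
fi

# ...update the DNS record via the Cloudflare API here...
echo "$CURRENT_IP" > "$CACHE"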

Service file

Let's start by creating the .service file. Systemd units have a lot more options, but the following simple unit will be everything I need for this.

[Unit]
Description=Check and set DDNS IPs
Requires=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/var/lib/cloudflare-ddns/update.sh

The service contains two sections: [Unit] and [Service].

[Unit] Section

While the Description= directive is quite self-explanatory, there are a few caveats to be mentioned for the following two lines.

The Requires= directive is where dependency management comes in. The script will try to access an API and obviously requires internet connectivity for that. Somewhat confusingly, there is also an alternative Wants= directive which in most cases does the same thing. The difference is that while both directives will try to start the dependency, Wants= will continue if that fails, while Requires= will not. In this case we can't do anything meaningful without a connection, so it seems reasonable to fail if it is not present.

Looking at the documentation for the targets, there are two possible options: network.target and network-online.target. Be aware that only network-online.target actually ensures what we want here, which is often a cause of confusion. network.target only requires the networking stack (e.g. NetworkManager) to be started, but not to actually be connected.
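One more caveat: network-online.target only gives a useful guarantee if the wait-online service matching your network setup is enabled. On a NetworkManager-based system (just an example, adjust to whatever manages your network) that would be:

systemctl enable NetworkManager-wait-online.service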

Lastly in this section, we add After=. Requires= only pulls in the dependency but says nothing about ordering; the After= line makes sure our script actually runs after the network is up.

[Service] Section

While the most common Type= is simple, setting it to oneshot seems to be a better fit. The docs say the behaviour is similar to simple, but systemd considers the unit started only after the main process exits. That means follow-up units are blocked until the command has finished. Lastly, the ExecStart= directive is the command we want to run. Here I'll just pass my existing shell script.

Timer file

The timer file triggers the .service we just created. Behold, here it is in its full glory:

[Unit]
Description=Cloudflare DDNS timer

[Timer]
OnBootSec=1
OnUnitActiveSec=5min
Persistent=true

[Install]
WantedBy=basic.target

Apart from the [Unit] section which just contains a description, there are two others worth mentioning.

[Timer] Section

Finally, timer-related configuration! This section makes the timer a timer. As mentioned before, for this task a monotonic timer is the better approach. Sure, it would also be possible to run the task at specific times, but we don't really care when it runs as long as it happens in 5-minute intervals.

OnBootSec= tells the timer to run 1 second after booting into the system. In reality it will probably wait longer, since the service has network-online.target as a dependency. As you might have guessed, OnUnitActiveSec= specifies the time after which the unit should run again, measured from when it was last activated. 5 minutes seems reasonable for an IP check.

The Persistent= option is set to true. This directive controls whether missed runs should be caught up on: if at least one run was missed while the timer was inactive, the service unit is triggered immediately once the timer is active again. Note that according to the systemd.timer documentation this setting only takes effect for timers configured with OnCalendar=.

[Install] Section

This only includes a WantedBy= to let systemd know when this timer should be activated. basic.target is a special target unit covering basic boot-up.

Test, Run and Autostart

After placing the two files inside /etc/systemd/system/, we need to reload the systemd configuration so that our newly created service and timer are picked up.

systemctl daemon-reload

No errors were reported, so we can proceed to start the timer and enable it at system boot.

systemctl reenable --now cloudflare.timer
systemctl start cloudflare.timer
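Independently of the timer, the service can also be triggered once by hand, which is handy for a quick test:

systemctl start cloudflare.service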

Checking with systemctl status cloudflare.timer, we can see any errors in detail in case something went wrong.

[root@birne system]# systemctl status cloudflare.timer 
● cloudflare.timer - Cloudflare DDNS timer
     Loaded: loaded (/etc/systemd/system/cloudflare.timer; enabled; vendor preset: disabled)
     Active: active (waiting) since Tue 2020-03-17 09:29:02 CET; 41s ago
    Trigger: Tue 2020-03-17 09:34:02 CET; 4min 18s left
   Triggers: ● cloudflare.service

Mar 17 09:29:02 birne systemd[1]: Started Cloudflare DDNS timer.
[root@birne system]# journalctl -u cloudflare.timer
-- Logs begin at Tue 2020-01-21 15:08:44 CET, end at Tue 2020-03-17 09:29:03 CET. --
Mar 17 09:29:02 birne systemd[1]: Started Cloudflare DDNS timer.

As a last check there is the systemctl list-timers command, which should now show our new timer too.

[root@birne system]# systemctl list-timers
NEXT                        LEFT         LAST                        PASSED       UNIT                         ACTIVATES
Tue 2020-03-17 09:34:02 CET 3min 0s left Tue 2020-03-17 09:29:02 CET 1min 59s ago cloudflare.timer             cloudflare.service
Tue 2020-03-17 16:13:08 CET 6h left      Mon 2020-03-16 16:13:08 CET 17h ago      systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Wed 2020-03-18 00:00:00 CET 14h left     Tue 2020-03-17 00:00:10 CET 9h ago       man-db.timer                 man-db.service
Wed 2020-03-18 00:00:00 CET 14h left     Tue 2020-03-17 00:00:10 CET 9h ago       shadow.timer                 shadow.service
Wed 2020-03-18 00:00:00 CET 14h left     Tue 2020-03-17 00:00:10 CET 9h ago       updatedb.timer               updatedb.service

5 timers listed.
Pass --all to see loaded but inactive timers, too.

Everything looks good so far. After waiting a few minutes, we can check the logs, this time for the .service unit. The script seems to have run fine, so this one is finished for now.

[root@birne system]# journalctl -u cloudflare.service 
-- Logs begin at Tue 2020-01-21 15:08:44 CET, end at Tue 2020-03-17 09:29:03 CET. --
Mar 17 09:29:02 birne systemd[1]: Starting Check and set DDNS IPs...
Mar 17 09:29:02 birne update.sh[255964]: [Cloudflare DDNS] Check Initiated
Mar 17 09:29:03 birne update.sh[255964]: [Cloudflare DDNS] IPs have not changed.
Mar 17 09:29:03 birne systemd[1]: cloudflare.service: Succeeded.
Mar 17 09:29:03 birne systemd[1]: Finished Check and set DDNS IPs.

Example 2: Backup at 02:00 with E-Mail notification

Let's start with the .service and .timer for the actual backup run. These will be similar to the ones from the first example, so I will only go into the differences.

Unit files

# /etc/systemd/system/borg-backup.service
[Unit]
Description=Run Backup with Borg
Requires=network-online.target
After=network-online.target
OnFailure=status-email-user@%n.service

[Service]
Type=simple
ExecStart=/var/lib/borgbackup/create-backup.sh

# /etc/systemd/system/borg-backup.timer
[Unit]
Description=Run daily backup with Borg

[Timer]
OnCalendar=*-*-* 02:00:00

[Install]
WantedBy=basic.target

The .service file specifies the unit to be of type simple this time. For the .timer I will be using a realtime timer instead. Specific dates for the OnCalendar= directive are specified in this format:

DayOfWeek Year-Month-Day Hour:Minute:Second

The DayOfWeek is left out, meaning "every day of the week", while other parts of the format can be set to an asterisk as a wildcard. Note that it is possible to specify multiple OnCalendar= directives if you need to get really specific about run times.
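If you are unsure whether a calendar expression means what you think it does, systemd-analyze calendar (available on reasonably recent systemd versions) will parse it and print the next times it would elapse:

systemd-analyze calendar "*-*-* 02:00:00"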

Email Notifications

Cron normally sends mail to the MAILTO address when a job writes to stdout or stderr. Here, I will instead be setting up a mechanism for systemd to send out an email if a unit fails. For the actual sending I will be using msmtp, which is already configured in /etc/msmtprc. Any tool that can send mail from the command line can be used. On my system, this command would send an email to notifications@pablo.tools:

echo "Hello World, this is the message body " | msmtp notifications@pablo.tools

To simplify the unit file, the following script is placed in /usr/local/bin/systemd-email. It formats the mail correctly and takes its input as arguments, namely the address to send to and the name of the failed unit, which ends up in the subject line. Note that the script calls sendmail rather than msmtp directly; msmtp can act as a sendmail drop-in (e.g. via the msmtp-mta package).

#!/bin/sh
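# $1: recipient address, $2: name of the failed unit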
/usr/bin/sendmail -t <<ERRMAIL
To: $1
From: systemd <root@$HOSTNAME>
Subject: [$(systemctl show -p Result --value $2)] $2 on $HOSTNAME
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8

$(systemctl status --full "$2")
ERRMAIL
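Before wiring it up to systemd, the script can be tested on its own, e.g. with an arbitrary existing unit:

/usr/local/bin/systemd-email notifications@pablo.tools dbus.service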

The script will be triggered by the following unit placed in /etc/systemd/system/status-email-user@.service:

[Unit]
Description=status email for %i

[Service]
Type=oneshot
ExecStart=/usr/local/bin/systemd-email notifications@pablo.tools %i
User=root
Group=systemd-journal

The first thing to notice is that the unit's name contains an @ symbol, making it a template unit. This allows it to be used from multiple service files by adding OnFailure=status-email-user@%n.service to their [Unit] section; %n passes the failing unit's name to the template. The recipient's mail address is hard-coded, since I want all notification mails to go to the same place.

To test that emails are being sent out correctly, the Arch Wiki, which the scripts above are based on, proposes starting status-email-user@dbus.service. You will need to reload the systemd daemon before that.

systemctl daemon-reload
systemctl start status-email-user@dbus.service

If everything went well and no errors are reported, check your inbox. You should see a new mail similar to this:

Date: Wed, 25 Mar 2020 14:03:56 +0100
From: systemd <serverpablotools@gmail.com>
To: notifications@pablo.tools
Subject: dbus on birne

● dbus.service - D-Bus System Message Bus
     Loaded: loaded (/usr/lib/systemd/system/dbus.service; static; vendor preset: disabled)
     Active: active (running) since Thu 2020-03-19 02:57:26 CET; 6 days ago
TriggeredBy: ● dbus.socket
       Docs: man:dbus-daemon(1)
   Main PID: 396 (dbus-daemon)
      Tasks: 1 (limit: 9114)
     Memory: 3.5M
     CGroup: /system.slice/dbus.service
             └─396 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only

Mar 19 02:57:26 birne systemd[1]: Started D-Bus System Message Bus.

As a last note, if you want to always get a mail, even if the service ran correctly, you can add these lines to your .service files alongside the OnFailure= directive in the [Unit] section:

Wants=status-email-user@%n.service
Before=status-email-user@%n.service
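
For reference, combining everything from this example, the [Unit] section of borg-backup.service would then look roughly like this:

[Unit]
Description=Run Backup with Borg
Requires=network-online.target
After=network-online.target
OnFailure=status-email-user@%n.service
Wants=status-email-user@%n.service
Before=status-email-user@%n.service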

You should now get a mail notification every time the unit runs, whether it succeeds or not.