systemd: Shut down computer at a certain uptime

Motivation

Paid-per-time cloud services. I don’t want to forget one of those running, just to get a fat bill at the end of the month. And if the intended use is short sessions anyhow, it makes sense to ensure that the machine shuts down by itself after a given amount of time. Just make sure that a shutdown initiated by the machine itself actually cuts the costs. Any sane cloud provider does that, except for, possibly, the cost of storing the VM’s disk image.

So this is the cloud computing parallel to “did I lock the door?”.

The examples here are based upon systemd 241 on Debian GNU/Linux 10.

The main service

There is more than one way to do this. I went for two services: One that calls /sbin/shutdown with a five-minute delay (so I get a chance to cancel it), and a second one, a timer, for the uptime limit.

So the main service is this file as /etc/systemd/system/uptime-limiter.service:

[Unit]
Description=Limit uptime service

[Service]
ExecStart=/sbin/shutdown -h +5 "System is taken down by uptime-limiter.service"
Type=simple

[Install]
WantedBy=multi-user.target

The naïve approach is to just enable the service and expect it to work. Well, it does work when started manually, but when this service starts as part of the system bringup, the shutdown request is registered but later ignored. Most likely because systemd somehow cancels pending shutdown requests when it reaches the ultimate target.

I should mention that adding After=multi-user.target in the unit file didn’t help. Maybe some other target. Don’t know.

The timer service

So the way to ensure that the shutdown command is respected is to trigger it off with a timer service.

The timer service as /etc/systemd/system/uptime-limiter.timer, in this case allows for 6 hours of uptime (plus the extra 5 minutes given by the main service):

[Unit]
Description=Timer for Limit uptime service

[Timer]
OnBootSec=6h
AccuracySec=1s

[Install]
WantedBy=timers.target

and enable it:

# systemctl enable uptime-limiter.timer
Created symlink /etc/systemd/system/timers.target.wants/uptime-limiter.timer → /etc/systemd/system/uptime-limiter.timer.

Note two things here: That I enabled the timer, not the service itself, by adding the .timer suffix. And I didn’t start it. For that, there’s the --now flag.
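Had I wanted to enable and start it in one go, this would have been (same flag as appears further down this post):

# systemctl enable --now uptime-limiter.timer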

So there are two steps: When the timer fires off, the call to /sbin/shutdown takes place, and that causes nagging wall messages to start once a minute, and eventually a shutdown. Mission complete.

What timers are pending

Ah, that’s surprisingly easy:

# systemctl list-timers
NEXT                         LEFT          LAST                         PASSED       UNIT                         ACTIVATES
Sun 2021-01-31 17:38:28 UTC  14min left    n/a                          n/a          systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Sun 2021-01-31 20:50:22 UTC  3h 26min left Sun 2021-01-31 12:36:41 UTC  4h 47min ago apt-daily.timer              apt-daily.service
Sun 2021-01-31 23:23:28 UTC  5h 59min left n/a                          n/a          uptime-limiter.timer         uptime-limiter.service
Sun 2021-01-31 23:23:34 UTC  5h 59min left Sun 2021-01-31 17:23:34 UTC  44s ago      google-oslogin-cache.timer   google-oslogin-cache.service
Mon 2021-02-01 00:00:00 UTC  6h left       Sun 2021-01-31 12:36:41 UTC  4h 47min ago logrotate.timer              logrotate.service
Mon 2021-02-01 00:00:00 UTC  6h left       Sun 2021-01-31 12:36:41 UTC  4h 47min ago man-db.timer                 man-db.service
Mon 2021-02-01 06:49:19 UTC  13h left      Sun 2021-01-31 12:36:41 UTC  4h 47min ago apt-daily-upgrade.timer      apt-daily-upgrade.service

Clean and simple. And this is probably why this method is better than a long delay on shutdown, which is less clear about what it’s about to do, as shown next.

Note that a timer service can be stopped, which is the parallel of canceling a shutdown. Restarting it to push the time limit further won’t work in this case, however, because the timer is relative to OnBootSec.
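If pushing the limit further is desired, a timer that is relative to its own activation could probably be used instead, so that stopping and starting the timer unit resets the countdown. A sketch of the [Timer] section (untested):

[Timer]
OnActiveSec=6h
AccuracySec=1s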

Is there a shutdown pending?

To check if a shutdown is about to happen:

$ cat /run/systemd/shutdown/scheduled
USEC=1612103418427661
WARN_WALL=1
MODE=poweroff
WALL_MESSAGE=System is taken down by uptime-limiter.service

There are different reports on what happens when the shutdown is canceled. On my system, the file was deleted in response to “shutdown -c”, but not when the shutdown was canceled because the system had just booted up. There are other suggested ways too, but in the end, it appears that there’s no definite way to tell whether a system has a shutdown scheduled or not. At least not as of systemd 241.

That USEC line is the epoch time for when shutdown will take place. A Perl guy like me goes

$ perl -e 'print scalar gmtime(1612103418427661/1e6)'

but that’s me.
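For non-Perl people, GNU date does the same, with the six microsecond digits chopped off:

$ date -u -d @1612103418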

What didn’t work

So this shows what doesn’t work: Enable the main service (as well as start it right away with the --now flag):

# systemctl enable --now uptime-limiter
Created symlink /etc/systemd/system/multi-user.target.wants/uptime-limiter.service → /etc/systemd/system/uptime-limiter.service.

Broadcast message from root@instance-1 (Sun 2021-01-31 14:15:19 UTC):

System is taken down by uptime-limiter.service
The system is going down for poweroff at Sun 2021-01-31 14:25:19 UTC!

So the broadcast message is out there right away. But this is misleading: It won’t work at all when the service is started automatically during system boot.

Some notes on how an air conditioner works

I’m not an AC guy. Not at all.

But I do own a Tadiran Wave INV 40/3, installed in 2016, and I’ve had some issues with it. In hindsight, there was some kind of real problem a few years ago, and the rest of the problems were because the AC guy who came to fix the original problem replaced a 100k thermistor with a 50k one. So the microcontroller detected something was wrong every now and then, and stopped the show (preferably on a hot summer day). He used to come, change the thermistor again, and the machine worked nicely until the next time. Why this helped on the first occasion and on those that followed is beyond me. Eventually I acquired and replaced the thermistor myself, with the rated value given in the installation manual, and I hope that’s the end of it.

By the way, a 50k thermistor instead of a 100k thermistor makes the controller think the temperature is 16°C higher than it actually is. So it’s not wrong enough to prevent the machine from kicking off, but bad enough to make it dislike the situation and stop after a few months. Or a year.

I have no training in the field of air conditioning or anything of that sort, and my memories from the Thermodynamics course I took (and somehow passed with a fair grade) are that the world is turning into a mess. Being a human, I’m always at the risk of writing rubbish, but in this post I feel pretty free to do so. If I got something wrong, kindly correct me below. If you blew your compressor because of anything I wrote below, blame yourself for taking advice from someone who has no clue.

But the story with the thermistor is yet another piece of evidence that if something is important, it’s better to be involved in the gory details. Not necessarily those written below.

The jots

  • The name of the game is the refrigerant’s boiling temperature, which changes with the pressure. The compressor pushes coolant towards the condenser’s coil (“radiator”), where it travels downwards and cools, turning into liquid somewhere in the middle of it. As the refrigerant cools down all the way through the coil, it reaches the end of the coil at a temperature that is lower than the boiling point (but the pressure remains more or less uniform). The difference between the measured temperature and the boiling point calculated from the pressure is called subcooling, and gives an indication of the level of the refrigerant inside the coil. That is, where in the coil it turned from gas to fluid. (A worked numeric example follows this list.)
  • By the same token, inside the evaporator, the fluid refrigerant (actually, mixed with some gas due to flash evaporation at the metering device) travels upwards and is heated by the indoor air it cools. Somewhere in the middle the fluid reaches its boiling point, and continues as gas. Once again, the difference between the measured temperature at the end of the coil and the boiling point indicated by the pressure is called superheat, and indicates where in the coil this took place. This too gives an indication of the refrigerant fill.
  • The measurement of temperature and pressure is typically done at the outdoor unit’s connection to the pipes. There might be differences in pressure and temperature from those actually relevant at the evaporator coil, but this is good enough for figuring out the overall situation.
  • The cooling gas is R410A. According to the installer’s guide for my AC (page 34), the equilibrium temperature is 4°C at a pressure of 116 PSIG on the evaporator, and 50°C at 429 PSIG at the condenser (AC unit working as a cooler). It’s possible to derive these figures, because there are typical temperatures of the refrigerant at the end of both coils: The environment’s temperature at the end of the condenser’s coil, and some 15°C at the evaporator’s (?). This can however be deadly wrong if either coil (or both) fails to function properly, most likely due to poor ventilation (dirty filters, obstructions, a poor ventilation path indoors etc.).
  • The transition from high pressure to low occurs at the metering device, which is basically a narrow pipe. TXV / TEV valves are commonly in use. In a typical in/outdoor unit setting, the metering device is indoors, next to the coil. This means that the refrigerant flows in the pipe towards the indoor unit as a compressed fluid — this is referred to as the “liquid line” and is therefore the narrower and hotter pipe. Narrower, because the flow is slower and the refrigerant is expensive. In the other direction, the refrigerant flows as a gas, in the wider, cooler pipe. It’s a bit confusing that the hotter pipe is the one containing liquid, but it’s all a matter of pressures.
  • Note that in both coils, the higher part is hotter: In the condenser because hot refrigerant is pushed from above, and it cools down as it travels downwards. In the evaporator, the fluid arrives from below, and travels upwards as it heats up. Consequently, in both coils, a physically higher transition point between fluid and gas inside the coil means a higher pressure, because the temperature there is higher. This is how the fluid levels even out between the condenser and evaporator: In steady state, the compressor generates a constant pressure difference and flow on the gas side, which works against the metering valve and differences in altitude. By conservation of mass, there must be a certain amount of fluid in the system to accommodate the existing amount of refrigerant (to complete the amount that is in gas form). If the fluid level rises in one coil, it has to drop in the other. But since a rising fluid level also means a rise in pressure, it causes more pressure against the metering device as well as the compressor. So a rise in the fluid level makes the coil push fluid downwards and gas upwards harder, making refrigerant travel more towards the other coil, and the fluid level drop on the current one.
  • Another way to look at it: The pressure difference imposed by the compressor dictates a difference between the temperatures of the liquid-to-gas transition points of the two coils. Well, more or less, because the pressure to boiling point relation isn’t linear. So given the amount of refrigerant in the system, the fluid is distributed in the segment from the bottom of the condenser’s coil, through the liquid line and the metering device, to the bottom of the evaporator’s coil. The fluid levels stabilize so as to satisfy the temperature difference imposed by the compressor.
  • However if there is too little refrigerant (a leak…), then this temperature difference relation can’t be satisfied. The result, say people out there, is bubble noise in the pipes and accumulation of ice at the outdoor unit’s wider pipe.
  • The metering valve is designed to create a pressure gap with fluids, so the pressure difference is smaller when gas runs through it. The compressor drops the pressure in the (wider) gas pipe, causing the temperature to drop well below zero, and ice accumulates.
  • The viscosity of R410A is more than ten times larger as a fluid than as a gas at 10°C, so if it reaches the metering device as a gas, it flows through much more easily. Hence if there isn’t enough refrigerant in the system (that is, the segment from the condenser coil’s bottom to the metering device isn’t filled with continuous liquid), gas will travel quickly through the metering device, causing a pressure drop in the liquid line, which causes even more gas to evaporate in the liquid line (creating bubbles that are often audible).
  • Note however that accumulation of ice could be just the result of poor ventilation of the indoor unit, for example due to dust in the filters: If the gas isn’t warmed up enough in the indoor unit, its overall temperature cycle drops, and it reaches the outdoor unit very cool.
  • How do you tell? Well, air conditioning guys ask about those filters every time for a reason. In addition, inverter AC systems have temperature sensors before, in the middle of and after the coil in both the indoor and outdoor units. This way, the microcontroller can tell what the fluid level is on both sides. If it’s too low, odds are that it will correctly detect that situation and issue an error code on the user panel, and more accurately with a blinking LED on the controller PCB, either the one indoors or outdoors. It’s a good idea to download the installer’s manual for the specific AC model, if you’re brave enough to open your AC on a hot summer’s day.
  • “Heat pump” is just the same as an AC that can also do heating.
  • In English, HVAC is a good search keyword for air conditioning.
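To make the subcooling and superheat ideas concrete, here’s a worked example with made-up gauge readings (the saturation figures are those from the installer’s guide cited above; the thermometer readings are invented): Suppose the liquid side shows 429 PSIG, at which R410A boils at about 50°C, and a thermometer on the liquid line reads 42°C. The subcooling is then 50 - 42 = 8°C. Likewise, if the suction side shows 116 PSIG (boiling point about 4°C) and the pipe measures 12°C, the superheat is 12 - 4 = 8°C.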


Apache 2.4: RewriteRule with [NE] causing 500 Internal Server Error

This is the weirdest thing. With an Apache 2.4.10 on Linux Debian 8 (yes, old), and a relatively simple mod_rewrite rule in .htaccess going

RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule (.*) https://www.mysite.com/$1 [R=301,L,NE]

This is really nothing special. Just pushing users to the www host name, if they were lazy typing it.

This works almost perfectly, but then I have this thing about trying odd stuff. So I get an internal server error when I request the page https://mysite.com/%01, but not mysite.com/%1 and not mysite.com/%20. In other words, I get a server error when there’s a legal percent-escape of a character in the %01-%1f range. Not with %00, and not with %20 and up, and not when there’s just one digit. It’s really a matter of a proper percent escape.

This %-escape doesn’t have to be at the beginning of the string. https://mysite.com/mything%01.html is equally offensive. But it appears that a “?” neutralizes the problem: there is no server error if a “?” appears before the offending %-escape. Recall that characters after a “?” are normally not escaped.

And of course, there is no problem accessing https://www.mysite.com/mything%01.html (getting a 404, but fine), as the rule doesn’t come to effect.

The internal server error leaves no traces in either the error log or the access log. So there’s no hint there. Nothing in the journalctl log either, and no process dies.

The problem is clearly the NE flag, which tells mod_rewrite not to escape the string it rewrites. Why this is an issue is beyond me. I could try asking in forums and such, but I have a hunch about the chances of getting an answer.

Exactly the same behavior is observed when this rule is in an Apache configuration file (in <VirtualHost> context).

So I just dropped the NE flag. Actually, this is a note to self explaining why I did it.

One could argue that it’s pretty harmless to get an error on really weird URL requests, however often a serious vulnerability begins with something that just went a little wrong. So I prefer to stay away.

If anyone has insights on this matter, please comment below. Maybe it has been silently fixed in later versions of Apache?

Systemd services as cronjobs: No process runs away

But why?

Cronjobs typically consist of a single utility which we’re pretty confident about. Even if it takes quite some time to complete (updatedb, for example), there’s always a simple story: a single task to complete, with a known beginning and end.

If the task involves a shell script that calls a few utilities, that feeling of control fades. It’s therefore reassuring to know that everything can be cleaned up neatly by simply stopping a service. Systemd is good at that, since all processes that are involved in the service are kept in a separate cgroup. So when the service is stopped, all processes that were possibly generated eventually get a SIGKILL, typically 90 seconds after the request to stop the service, unless they terminate voluntarily in response to the initial SIGTERM.

Advantage number two is that systemd allows for a series of capabilities to limit what the cronjob is capable of doing, thanks to the cgroup arrangement. This doesn’t fall far short of the possibilities of container virtualization, with pretty simple assignments in the unit file. This includes making certain directories inaccessible or read-only, setting up temporary directories, disallowing external network connections, limiting the set of allowed syscalls, and of course limiting the amount of resources that the service consumes. They’re called Control Groups for a reason.

There’s also the RuntimeMaxSec parameter in the service unit file, which is the maximal wall clock time the service is allowed to run. The service is terminated and put in failure state if this time is exceeded. This is however supported from systemd version 229 and later, so check with “systemctl --version”.

My original idea was to use systemd timers to kick off the job, and let RuntimeMaxSec make sure it would get cleaned up if it ran too long (i.e. got stuck somehow). But because the server in question ran a rather old version of systemd, I went for a cron entry for starting the service and another one for stopping it, with a certain time difference between them. In hindsight, cron turned out to be neater for kicking off the jobs, because I had multiple variants of them at different times. So one single file enclosed them all.
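For example, the pair of crontab entries could look something like this (hypothetical times and a made-up service name):

# In /etc/crontab: start the job at 02:00, force-stop it half an hour later
0  2 * * *   root   systemctl start mycronjob.service
30 2 * * *   root   systemctl stop mycronjob.service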

The main practical difference is that if a service reaches RuntimeMaxSec, it’s terminated with a failed status. The cron solution stops the service without this. I guess there’s a systemctl way to achieve the failed status, if that’s really important.

As a side note, I have a separate post on Firejail, which is yet another possibility to use cgroups for controlling what processes do.

Timer basics

The idea is simple: A service can be started as a result of a timer event. That’s all that timer units do.

Timer units are configured like any systemd units (man systemd.unit) but have a .timer suffix and a dedicated [Timer] section. By convention, the timer unit named foo.timer activates the service foo.service, unless specified differently with the Unit= assignment (useful for generating confusion).

Units that are already running when the timer event occurs are not restarted, but are left to keep running. Exactly like systemctl start would do.

For a cronjob-style timer, use OnCalendar= to specify the times. See man systemd.time for the format. Note that AccuracySec= should be set too, to control how much systemd may play with the exact time of execution, or systemd’s behavior might be confusing.
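So a cronjob-style timer unit could go something like this (a sketch with a made-up name, firing daily at 03:00):

[Unit]
Description=Daily run of mycronjob

[Timer]
OnCalendar=*-*-* 03:00:00
AccuracySec=1min

[Install]
WantedBy=timers.target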

To see all active timers, go

$ systemctl list-timers

The unit file

As usual, the unit file (e.g. /etc/systemd/system/cronjob-test@.service) is short and concise:

[Unit]
Description=Cronjob test service

[Service]
ExecStart=/home/eli/shellout/utils/shellout.pl "%I"
Type=simple
User=eli
WorkingDirectory=/home/eli/shellout/utils
KillMode=mixed
NoNewPrivileges=true

This is a simple service, meaning that systemd expects the process launched by ExecStart to run in the foreground.

Note however that the service unit’s file name has a “@” character and that %I is used to choose what to run, based upon the unescaped instance name (see man systemd.unit). This turns the unit file into a template, and allows choosing an arbitrary command (the shellout.pl script is explained below) with something like (really, this works)

# systemctl start cronjob-test@'echo "Hello, world"'

This might seem dangerous, but recall that root privileges are required to start the service, and what you get in return is a plain-user process (possibly with no ability to escalate privileges). Not the big jackpot.

For stopping the service, exactly the same service specifier string is required. But it’s also possible to stop all instances of a service with

# systemctl stop 'cronjob-test@*'

How neat is that?

A few comments on this:

  • The service should not be systemd-wise enabled (i.e. no “systemctl enable”) — that’s what you do to get it started on boot or following some kind of event. This is not the case, as the whole point is to start the service directly by a timer or crond.
  • Accordingly, the service unit file does not have an [Install] section.
  • A side effect of this is that the service may not appear in the list made by “systemctl” (without any arguments) unless it currently has processes running on its behalf (or possibly if it’s in the failed state). Simple logic: It’s not loaded unless it has a cgroup allocated, and the cgroup is removed along with the last process. But it may appear anyhow under some conditions.
  • ExecStart must have a full path (i.e. not relative) even if the WorkingDirectory is set. In particular, it can’t be ./something.
  • A “systemctl start” on a service that is marked as failed will start it anyhow (i.e. the fact that it’s marked failed doesn’t prevent that). Quite obvious, but I tested it to be sure.
  • Also, a “systemctl start” causes the execution of ExecStart if and only if there’s no cgroup for the service, which is equivalent to not having a process running on its behalf.
  • KillMode is set to “mixed” which sends a SIGTERM only to the process that is launched directly when the service is stopped. The SIGKILL 90 seconds later, if any, is sent to all processes however. The default is to give all processes in the cgroup the SIGTERM when stopping.
  • NoNewPrivileges is a little paranoid thing: When no process has any reason to change its privileges or user IDs, block this possibility. This mitigates damage, should the job be successfully attacked in some way. But I ended up not using it, as running sendmail fails (it has some setuid thing to allow access to the mail spooler).

Stopping

There is no log entry for a service of simple type that terminates with a success status. Even though it’s stopped in the sense that it has no allocated cgroup and “systemctl start” behaves as if it was stopped, a successful termination is silent. Not sure if I like this, but that’s the way it is.

When the process doesn’t respond to SIGTERM:

Jan 16 19:13:03 systemd[1]: Stopping Cronjob test service...
Jan 16 19:14:33 systemd[1]: cronjob-test.service stop-sigterm timed out. Killing.
Jan 16 19:14:33 systemd[1]: cronjob-test.service: main process exited, code=killed, status=9/KILL
Jan 16 19:14:33 systemd[1]: Stopped Cronjob test service.
Jan 16 19:14:33 systemd[1]: Unit cronjob-test.service entered failed state.

So there’s always “Stopping” first and then “Stopped”. And if there are processes in the control group 90 seconds after “Stopping”, SIGKILL is sent, and the service gets a “failed” status. Not being able to quit properly is a failure.

A “systemctl stop” on a service that is already stopped is legit: The systemctl utility returns silently with a success status, and a “Stopped” message appears in the log without anything actually taking place. Neither does the service’s status change, so if it was considered failed before, so it remains. And if the target to stop was a group of instances (e.g. systemctl stop ‘cronjob-test@*’) and there were no instances to stop, there isn’t even a log message on that.

Same logic with “Starting” and “Started”: A superfluous “systemctl start” does nothing except for a “Started” log message, and the utility is silent, returning success.

Capturing the output

By default, the output (stdout and stderr) of the processes is logged in the journal. This is usually pretty convenient, however I wanted the good old cronjob behavior: An email is sent unless the job is completely silent and exits with a success status (actually, crond doesn’t care, but I wanted this too).

This concept doesn’t fit systemd’s spirit: You don’t start sending mails each time a service has something to say. One could use OnFailure for activating another service that calls home when the service gets into a failure status (which includes a non-success termination of the main process), but that mail won’t tell me the output. To achieve this, I wrote a Perl script. So there’s one extra process, but who cares, systemd kills’em all in the end anyhow.

Here it comes (I called it shellout.pl):

#!/usr/bin/perl

use strict;
use warnings;

# Parameters for sending mail to report errors
my $sender = 'eli';
my $recipient = 'eli';
my $sendmail = "/usr/sbin/sendmail -i -f$sender";

my $cmd = shift;
my $start = time();

my $output = '';

my $catcher = sub { finish("Received signal."); };

$SIG{HUP} = $catcher;
$SIG{TERM} = $catcher;
$SIG{INT} = $catcher;
$SIG{QUIT} = $catcher;

my $pid = open (my $fh, '-|');

finish("Failed to fork: $!")
  unless (defined $pid);

if (!$pid) { # Child process
  # Redirect stderr to stdout for child processes as well
  open (STDERR, ">&STDOUT");

  exec($cmd) or die("Failed to exec $cmd: $!\n");
}

# Parent
while (defined (my $l = <$fh>)) {
  $output .= $l;
}

close $fh
 or finish("Error: $! $?");

finish("Execution successful, but output was generated.")
 if (length $output);

exit 0; # Happy end

sub finish {
  my ($msg) = @_;

  my $elapsed = time() - $start;

  $msg .= "\n\nOutput generated:\n\n$output\n"
    if (length $output);

  open (my $fh, '|-', "$sendmail $recipient") or
    finish("Failed to run sendmail: $!");

  print $fh <<"END";
From: Shellout script <$sender>
Subject: systemd cron job issue
To: $recipient

The script with command \"$cmd\" ran $elapsed seconds.

$msg
END

  close $fh
    or die("Failed to send email: $! $?\n");

  $SIG{TERM} = sub { }; # Not sure this matters
  kill -15, $$; # Kill entire process group

  exit(1);
}

First, let’s pay attention to

open (STDERR, ">&STDOUT");

which makes sure standard error is redirected to standard output. This is inherited by child processes, which is exactly the point.

The script catches the signals (SIGTERM in particular, which is systemd’s first hint that it’s time to pack and leave) and sends a SIGTERM to all other processes in turn. This is combined with KillMode being set to “mixed” in the service unit file, so that only shellout.pl gets the signal, and not the other processes.

The rationale is that if all processes get the signal at once, it may (theoretically?) turn out that the child process terminates before the script reacted to the signal it got itself, so it will fail to report that the reason for the termination was a signal, as opposed to the termination of the child. This could miss a situation where the child process got stuck and said nothing when being killed.

Note that the script kills all processes in the process group just before quitting due to a signal it got, or when the invoked process terminates and there was output. Before doing so, it sets the signal handler to a NOP, to avoid an endless loop, since the script’s process will get it as well (?). This NOP thing appears to be unnecessary, but better safe than sorry.

Also note that the while loop quits when there’s nothing more in <$fh>. This means that if the child process forks and then terminates, the while loop will continue, because unless the forked process closes its output file handles, the write side of the pipe remains open, so reading from <$fh> keeps blocking. The first child process will remain a zombie until the forked process is done. Only then will it be reaped by virtue of the close $fh. This machinery is not intended for fork() sorcery.

I took a different approach in another post of mine, where the idea was to fork explicitly and modify the child’s attributes. Another post discusses timing out a child process in general.

Summary

Yes, cronjobs are much simpler. But in the long run, it’s a good idea to acquire the ability to run cronjobs as services for the sake of keeping the system clean from runaway processes.

Perl + DBI: Measuring the time of database queries

It’s often desired to know how much wall clock time the SQL queries take. As with Perl, there’s more than one way to do it. This is a simple way, which involves overriding DBI’s execute() method, so it measures the time and issues a warn() with basic caller info and the time in milliseconds.

The same thing can be done with any other method, of course. Go “man DBI” and look for “Subclassing the DBI” for the full story.

So the first thing is to set the RootClass attribute when generating the DBI object, so that MySubDBI is the root class. Something like

$dbh = DBI->connect( "DBI:mysql:thedatabase:localhost", "theuser", "thepassword",
		     { RaiseError => 1, AutoCommit => 1, PrintError => 0,
		       RootClass => 'MySubDBI',
		       Taint => 1});

and then, somewhere, the class needs to be defined. This can be in a separate module .pm file, but also at the end of the same file as the code for the DBI->connect:

package MySubDBI;
use strict;

use DBI;
use vars qw(@ISA);
@ISA = qw(DBI);

package MySubDBI::db;
use vars qw(@ISA);
@ISA = qw(DBI::db);

# Empty, yet necessary.

package MySubDBI::st;
use Time::HiRes qw(gettimeofday tv_interval);
use vars qw(@ISA);
@ISA = qw(DBI::st);

sub execute {
  my ($sth, @args) = @_;
  my $tic = [gettimeofday];
  my $res = $sth->SUPER::execute(@args);
  my $exectime = int(tv_interval($tic)*1000);

  my ($package0, $file0, $line0, $subroutine0) = caller(0);
  my ($package1, $file1, $line1, $subroutine1) = caller(1);

  warn("execute() call from $subroutine1 (line $line0) took $exectime ms\n");
  return $res;
}

1;

The code can be smarter, of course. For example, issue a warning only if the query time exceeds a certain limit.
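For example, a drop-in replacement for the execute() above that nags only when the query takes more than 100 ms (an arbitrary threshold) could go:

sub execute {
  my ($sth, @args) = @_;
  my $tic = [gettimeofday];
  my $res = $sth->SUPER::execute(@args);
  my $exectime = int(tv_interval($tic)*1000);

  # Report only queries that crossed the threshold
  if ($exectime > 100) {
    my ($package, $file, $line) = caller(0);
    warn("slow execute() call at $file line $line: $exectime ms\n");
  }
  return $res;
}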

MySQL: LIKE vs. REGEXP() performance testing

Introduction

At some point I needed to choose between using LIKE or REGEXP() for a not-so-simple string match. Without going into the details, the matching string contains a lot of wildcard segments, and while both would have done the job, I thought maybe REGEXP() would benefit from some extra information about the wildcard parts. It’s not that I cared that ‘%’ would match characters it shouldn’t, but I wanted to save some backtracking by telling the matching engine not to match just anything. Save some CPU, I thought. Spoiler: It was a nice thought, but no.

So I ran a few performance tests on a sample table:

CREATE TABLE product_info (
	cpe_name BLOB NOT NULL,
	title BLOB NOT NULL,
	PRIMARY KEY (cpe_name(511))
) engine = MyISAM;

Note that cpe_name is the primary key, and is hence a unique index.

I should also mention that the match patterns I use for testing are practically useless for CPE name matching, because they don’t handle escaped “:” characters properly. Just in case you know what a CPE is and you’re here for that. In short, these are just performance tests.

I did this on MySQL server version: 5.1.47, Source distribution. There are newer versions around, I know. Maybe they do better.

The art of silly queries

So there are a bit more than half a million entries:

mysql> SELECT COUNT(*) FROM product_info;
+----------+
| COUNT(*) |
+----------+
|   588094 |
+----------+
1 row in set (0.00 sec)

Let’s ask it another way:

mysql> SELECT COUNT(*) FROM product_info WHERE cpe_name LIKE '%';
+----------+
| COUNT(*) |
+----------+
|   588094 |
+----------+
1 row in set (0.08 sec)

Say what? LIKE ‘%’ is always true for a non-NULL BLOB. MySQL didn’t optimize this simple thing, and actually checked every single entry?

mysql> EXPLAIN SELECT COUNT(*) FROM product_info;
+----+-------------+-------+------+---------------+------+---------+------+------+------------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra                        |
+----+-------------+-------+------+---------------+------+---------+------+------+------------------------------+
|  1 | SIMPLE      | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL | Select tables optimized away |
+----+-------------+-------+------+---------------+------+---------+------+------+------------------------------+
1 row in set (0.00 sec)

mysql> EXPLAIN SELECT COUNT(*) FROM product_info WHERE cpe_name LIKE '%';
+----+-------------+--------------+------+---------------+------+---------+------+--------+-------------+
| id | select_type | table        | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+----+-------------+--------------+------+---------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | product_info | ALL  | NULL          | NULL | NULL    | NULL | 588094 | Using where |
+----+-------------+--------------+------+---------------+------+---------+------+--------+-------------+
1 row in set (0.00 sec)

Apparently it did. Other silly stuff:

mysql> SELECT COUNT(*) FROM product_info WHERE NOT cpe_name IS NULL;
+----------+
| COUNT(*) |
+----------+
|   588094 |
+----------+
1 row in set (0.08 sec)

mysql> EXPLAIN SELECT COUNT(*) FROM product_info WHERE NOT cpe_name IS NULL;
+----+-------------+--------------+------+---------------+------+---------+------+--------+-------------+
| id | select_type | table        | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+----+-------------+--------------+------+---------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | product_info | ALL  | PRIMARY       | NULL | NULL    | NULL | 588094 | Using where |
+----+-------------+--------------+------+---------------+------+---------+------+--------+-------------+
1 row in set (0.00 sec)

Silly or what? The column is defined as “NOT NULL”. What is there to check? So maybe the idea is that if I make stupid queries, MySQL responds with stupid behavior. Well, not really:

mysql> SELECT COUNT(*) FROM product_info WHERE 1=1;
+----------+
| COUNT(*) |
+----------+
|   588094 |
+----------+
1 row in set (0.00 sec)

mysql> EXPLAIN SELECT COUNT(*) FROM product_info WHERE 1=1;
+----+-------------+-------+------+---------------+------+---------+------+------+------------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra                        |
+----+-------------+-------+------+---------------+------+---------+------+------+------------------------------+
|  1 | SIMPLE      | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL | Select tables optimized away |
+----+-------------+-------+------+---------------+------+---------+------+------+------------------------------+
1 row in set (0.00 sec)

It’s more like MySQL makes optimizations only if they’re really obvious. MySQL != gcc.

LIKE vs REGEXP

So it turns out that LIKE is really fast, and it seems to take advantage of the index:

mysql> SELECT cpe_name FROM product_info WHERE cpe_name LIKE 'cpe:2.3:a:hummingbird:cyberdocs:-%';
+-------------------------------------------------+
| cpe_name                                        |
+-------------------------------------------------+
| cpe:2.3:a:hummingbird:cyberdocs:-:*:*:*:*:*:*:* |
+-------------------------------------------------+
1 row in set (0.00 sec)

The same query, only with a regex:

mysql> SELECT cpe_name FROM product_info WHERE cpe_name REGEXP '^cpe:2.3:a:hummingbird:cyberdocs:-.*';
+-------------------------------------------------+
| cpe_name                                        |
+-------------------------------------------------+
| cpe:2.3:a:hummingbird:cyberdocs:-:*:*:*:*:*:*:* |
+-------------------------------------------------+
1 row in set (0.21 sec)

Recall that indexing means that the rows are sorted. Because the initial part of the string is fixed, it’s possible to narrow down the number of rows to match with an index lookup, and then work on those that match that initial part. This was taken advantage of with the LIKE match, but apparently not with REGEXP():

mysql> EXPLAIN SELECT cpe_name FROM product_info WHERE cpe_name LIKE 'cpe:2.3:a:hummingbird:cyberdocs:-%';
+----+-------------+--------------+-------+---------------+---------+---------+------+------+-------------+
| id | select_type | table        | type  | possible_keys | key     | key_len | ref  | rows | Extra       |
+----+-------------+--------------+-------+---------------+---------+---------+------+------+-------------+
|  1 | SIMPLE      | product_info | range | PRIMARY       | PRIMARY | 513     | NULL |    1 | Using where |
+----+-------------+--------------+-------+---------------+---------+---------+------+------+-------------+
1 row in set (0.00 sec)

mysql> EXPLAIN SELECT cpe_name FROM product_info WHERE cpe_name REGEXP '^cpe:2.3:a:hummingbird:cyberdocs:-.*';
+----+-------------+--------------+------+---------------+------+---------+------+--------+-------------+
| id | select_type | table        | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+----+-------------+--------------+------+---------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | product_info | ALL  | NULL          | NULL | NULL    | NULL | 588094 | Using where |
+----+-------------+--------------+------+---------------+------+---------+------+--------+-------------+
1 row in set (0.00 sec)

The conclusion is quite clear: For heavy duty matching, don’t use REGEXP() if LIKE can do the job. In particular if a lot of rows can be ruled out by virtue of the first characters in the string.
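By the way, if the regular expression’s power is really needed, it’s probably worth checking whether ANDing it with a LIKE on a fixed prefix lets the index narrow down the rows first, so the regex runs only on the survivors. I haven’t benchmarked this, but something in this spirit:

mysql> SELECT cpe_name FROM product_info WHERE cpe_name LIKE 'cpe:2.3:a:hummingbird:%' AND cpe_name REGEXP 'cyberdocs:-';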

Making it harder for LIKE

Let’s warm up a bit:

mysql> SELECT cpe_name FROM product_info WHERE cpe_name LIKE 'cpe:2.3:a:humming%:cyberdocs:-%';
+-------------------------------------------------+
| cpe_name                                        |
+-------------------------------------------------+
| cpe:2.3:a:hummingbird:cyberdocs:-:*:*:*:*:*:*:* |
+-------------------------------------------------+
1 row in set (0.00 sec)

It was quicker than measurable, mainly because there was little to do:

mysql> EXPLAIN SELECT cpe_name FROM product_info WHERE cpe_name LIKE 'cpe:2.3:a:humming%:cyberdocs:-%';
+----+-------------+--------------+-------+---------------+---------+---------+------+------+-------------+
| id | select_type | table        | type  | possible_keys | key     | key_len | ref  | rows | Extra       |
+----+-------------+--------------+-------+---------------+---------+---------+------+------+-------------+
|  1 | SIMPLE      | product_info | range | PRIMARY       | PRIMARY | 513     | NULL |   80 | Using where |
+----+-------------+--------------+-------+---------------+---------+---------+------+------+-------------+
1 row in set (0.00 sec)

As expected, the index was used for the initial part of the string, and that left MySQL with 80 rows to actually do the matching (in this specific case). So it was quick.

But what if the wildcard is at the beginning of the string?

mysql> SELECT cpe_name FROM product_info WHERE cpe_name LIKE '%cpe:2.3:a:hummingbird:cyberdocs:-%';
+-------------------------------------------------+
| cpe_name                                        |
+-------------------------------------------------+
| cpe:2.3:a:hummingbird:cyberdocs:-:*:*:*:*:*:*:* |
+-------------------------------------------------+
1 row in set (0.10 sec)

So that took some considerable time, however still less than the REGEXP() case. The index didn’t help in this case, it seems:

mysql> EXPLAIN SELECT cpe_name FROM product_info WHERE cpe_name LIKE '%cpe:2.3:a:hummingbird:cyberdocs:-%';
+----+-------------+--------------+------+---------------+------+---------+------+--------+-------------+
| id | select_type | table        | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+----+-------------+--------------+------+---------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | product_info | ALL  | NULL          | NULL | NULL    | NULL | 588094 | Using where |
+----+-------------+--------------+------+---------------+------+---------+------+--------+-------------+
1 row in set (0.00 sec)

So apparently, the pattern was applied to all rows. It was still faster than REGEXP() making a very simple match. So the latter seems not to be optimized in MySQL.

And to wrap this up, an expression full of wildcards, similar to the one I thought maybe REGEXP() would do better with:

mysql> SELECT COUNT(*) FROM product_info WHERE cpe_name LIKE 'cpe:2.3:a:%:%:%:-:%:%:%:%:%:%';
+----------+
| COUNT(*) |
+----------+
|    13205 |
+----------+
1 row in set (0.11 sec)

mysql> EXPLAIN SELECT COUNT(*) FROM product_info WHERE cpe_name LIKE 'cpe:2.3:a:%:%:%:-:%:%:%:%:%:%';
+----+-------------+--------------+------+---------------+------+---------+------+--------+-------------+
| id | select_type | table        | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+----+-------------+--------------+------+---------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | product_info | ALL  | PRIMARY       | NULL | NULL    | NULL | 588094 | Using where |
+----+-------------+--------------+------+---------------+------+---------+------+--------+-------------+
1 row in set (0.00 sec)

So what happened here is that the initial part of the string didn’t help much, and the match seems to have been done on all rows. It took more or less the same time as the much simpler match pattern above.

Conclusion

There seem to be two main conclusions from this little set of experiments. The first one isn’t surprising: The most important factor is how many rows are being accessed, not so much what is done with them. And the second is that MySQL does some sensible optimizations when LIKE is used; in particular, it narrows down the number of rows with the index, when possible. Something it won’t do with REGEXP(), even with a “^” at the beginning of the regex.

MySQL: Enforcing long unique BLOB / TEXT columns with UNIQUE INDEX

Introduction

When creating indexes on TEXT / BLOB columns (and their variants), it’s required to specify how many characters the index should cover. In MySQL’s docs it’s usually suggested to keep that number small for better performance. There’s also a limit on the number of characters, which varies from one database engine to another, going from a few hundred to a few thousand characters.

However it’s not unusual to use UNIQUE INDEX for making the database enforce the uniqueness of a field in the table. ON DUPLICATE KEY, INSERT IGNORE, UPDATE IGNORE and REPLACE can then be used to gracefully keep things tidy.
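For example, with a hypothetical table that has a UNIQUE INDEX on its name column, an insert-or-update in one go reads:

INSERT INTO counters (name, hits) VALUES ('something', 1)
  ON DUPLICATE KEY UPDATE hits = hits + 1;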

But does UNIQUE INDEX mean that the entire field remains unique, or is only the part covered by the index checked? Spoiler: Only the part covered by the index. In other words, MySQL’s uniqueness enforcement may be too strict.

Almost needless to say, if the index isn’t UNIQUE, it’s just a performance issue: If the number of covered characters is small, the database will more often fetch and discard the data of a row that turns out not to match the search criteria. But it doesn’t change the logical behavior. With UNIQUE INDEX, the number of characters does.

A simple test follows.

Trying the UNIQUE INDEX

I created a table with MySQL 5.1.47, with an index covering only 5 chars. Deliberately very short.

CREATE TABLE try (
	id INT UNSIGNED NOT NULL AUTO_INCREMENT,
	message BLOB NOT NULL,
	PRIMARY KEY (id),
	UNIQUE INDEX (message(5))
) engine = MyISAM;

Which ends up with this:

mysql> DESCRIBE try;
+---------+------------------+------+-----+---------+----------------+
| Field   | Type             | Null | Key | Default | Extra          |
+---------+------------------+------+-----+---------+----------------+
| id      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| message | blob             | NO   | UNI | NULL    |                |
+---------+------------------+------+-----+---------+----------------+
2 rows in set (0.00 sec)

Inserting the first row:

mysql> INSERT INTO try(message) VALUES('Hello, world');
Query OK, 1 row affected (0.00 sec)

And now trying a second:

mysql> INSERT INTO try(message) VALUES('Hello there');
ERROR 1062 (23000): Duplicate entry 'Hello' for key 'message'

That’s it. It just looked at the first five chars. Trying a difference within this region:

mysql> INSERT INTO try(message) VALUES('Hell there');
Query OK, 1 row affected (0.00 sec)

No wonder, that worked.

mysql> SELECT * FROM try;
+----+--------------+
| id | message      |
+----+--------------+
|  1 | Hello, world |
|  2 | Hell there   |
+----+--------------+
2 rows in set (0.00 sec)

Handling large TEXT/BLOB

And that leaves the question: How is it possible to ensure uniqueness on large chunks of text or binary data? One solution I can think of is to add a column for a hash (say SHA1), and let the application calculate that hash for each row it inserts, and insert it along with it. And make the UNIQUE INDEX on the hash, not the text. Something like

CREATE TABLE try2 (
	id INT UNSIGNED NOT NULL AUTO_INCREMENT,
	message BLOB NOT NULL,
	hash TINYBLOB NOT NULL,
	PRIMARY KEY (id),
	UNIQUE INDEX (hash(40))
) engine = MyISAM;

But wait. MySQL supports hashing functions. Why not use them instead? Well, the problem is that if I want an INSERT statement to push the data and its hash in one go, the query becomes a bit nasty. What came to my mind is:

mysql> INSERT INTO try2(message, hash) (SELECT t.x, SHA1(t.x) FROM (SELECT 'Hello there' AS x) AS t);
Query OK, 1 row affected (0.00 sec)
Records: 1  Duplicates: 0  Warnings: 0

Can it get more concise than this? Suggestions are welcome. The double SELECT is required because I want the string literal to be mentioned only once.

Isn’t it easier to let the application calculate the SHA1, and send it to the server by value? It’s a matter of taste, I guess.
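For the record, the application-side version is pretty much a one-liner with Digest::SHA (a sketch, assuming $dbh is a connected DBI handle and the try2 table from above):

use Digest::SHA qw(sha1_hex);

my $message = 'Hello there';

# Calculate the hash in Perl, and use placeholders for safe quoting
$dbh->do('INSERT INTO try2 (message, hash) VALUES (?, ?)',
         undef, $message, sha1_hex($message));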

Anyhow, trying again with exactly the same:

mysql> INSERT INTO try2(message, hash) (SELECT t.x, SHA1(t.x) FROM (SELECT 'Hello there' AS x) AS t);
ERROR 1062 (23000): Duplicate entry '726c76553e1a3fdea29134f36e6af2ea05ec5cce' for key 'hash'

and with something slightly different:

mysql> INSERT INTO try2(message, hash) (SELECT t.x, SHA1(t.x) FROM (SELECT 'Hello there!' AS x) AS t);
Query OK, 1 row affected (0.00 sec)
Records: 1  Duplicates: 0  Warnings: 0

So yep, it works:

mysql> SELECT * FROM try2;
+----+--------------+------------------------------------------+
| id | message      | hash                                     |
+----+--------------+------------------------------------------+
|  1 | Hello there  | 726c76553e1a3fdea29134f36e6af2ea05ec5cce |
|  2 | Hello there! | 6b19cb3790b6da8f7c34b4d8895d78a56d078624 |
+----+--------------+------------------------------------------+
2 rows in set (0.00 sec)

Once again, even though the example I showed demonstrates how to make MySQL calculate the hash, I would do it in the application.

Decoding email’s quoted-printable with Perl

To make it short, the command at shell prompt is

$ perl -MMIME::QuotedPrint -e 'local $/; $x=<>; print decode_qp($x)' < quoted.txt > unquoted.html

and I needed this to extract an HTML segment of an email.
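Inside a full script rather than a one-liner, the same thing goes something like (reading from standard input):

use MIME::QuotedPrint;

my $x = do { local $/; <STDIN> }; # Slurp all of the input
print decode_qp($x);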

Pretty-printing data structures with Perl with Data::Dumper

As Perl allows for complicated data structures by virtue of references to hashes and arrays, it’s often useful to look into what’s going on there. In my case, it was on the output of a large JSON parse.

So to make a long story short, if $tree is a reference to the data structure, go

use Data::Dumper;
$Data::Dumper::Terse = 1;
$Data::Dumper::Purity = 1;
$Data::Dumper::Sortkeys = 1;
print Dumper $tree;

Noticed how many flags? Data::Dumper isn’t always ideal for pretty-printing (there are a few alternatives), but it wins mainly because it’s part of the commonly installed Perl libraries. One of its drawbacks, which is also its advantage, is that its output is Perl code that reconstructs the data structure. Which means that it fusses over accuracy, in particular if the data structure contains blessed references.

So I definitely miss a Data::Dumper::Hooman or something of that sort.

And even more annoyingly, if it meets a complicated value more than once (e.g. a blessed ref), it puts a reference to the first appearance of the same value on the following occasions. Which is efficient, maybe, but doesn’t help a human reader.

So to the flags.

The Sortkeys flag is recommended for human reading (as well as diffs) for obvious reasons.

The Terse flag makes sure that values are dumped as literals and not referenced. For this to work, the Purity flag was also necessary in my case, or the Terse flag was simply ignored silently. The Purity flag adds a lot of assignments at the end of the dump to fix inaccuracies.

The problem was that one of the fields was blessed as JSON::PP::Boolean, so the dump read

'acInsufInfo' => bless( do{\(my $o = 0)}, 'JSON::PP::Boolean' ),

and then it was referenced over and over again. With the Purity flag, the references appeared at the end of the dump, to correct the inaccurate (non-blessed) assignment before. It seems like without it, Dumper refused to respect the Terse flag for these, because it would break the concept that the dump can be executed to reconstruct the original.

Closing note: It’s quite unusual that mainstream Perl libraries behave in a quirky way that actually needs explanation.

Random notes about Gimp 2.10

This is just a few notes to self on this tool.

  • Version 2.10 is a big deal: It’s the first one to use GPU acceleration for scaling and rotation, as well as deformations of different types. No more small previews, no more sluggish response to scaling. No need to scale down images to work on them because the full size was too slow.
  • The default theme (dark) is a joke. It’s elegant, but it’s difficult to see the differences between dark grey and lighter grey, which is the difference between a layer lock being enabled or not. I went for “system”, which got me Mint colors and nice icon spacing. Doesn’t look very elegant, but hey, I now know what I’m doing.
  • The default icons are a joke as well. Very sleek, but the colors are gone. Once again, it’s elegance vs. functionality. There’s a “color” option for the icon. Looks like a children’s toolbox now, but it’s much easier to find the correct tool.

Linked layers

First and foremost: Layer groups are most likely the preferred solution. In particular, I used to link a layer that has a layer mask with a duplicate of it that has the same layer mask applied, in order to resize / rotate the layer along with its layer mask. This isn’t necessary anymore: Just put the layer with a layer mask inside a group, and manipulate the group instead of the layer. This is quicker than the duplicate, apply and link method, but there’s one feature still lacking: The ability to crop the layer to which the mask has been applied, and manipulate a smaller area.

Linked layers don’t seem to work on 2.10.14 (this seems to be a bug): Say, when linking two layers and scaling / rotating one of them, nothing happens with the other.

Solution / workaround: There are two options I know of:

  • Don’t use linked layers: Put the relevant layers in a layer group, and do the scaling / rotation on the group. But that doesn’t work well for manipulating a small layer in order to apply the changes on a larger one.
  • Link between a layer group (possibly with one layer) and the layers that are affected by it. This really looks like a bug workaround: Linking layers to a layer group will make the changes to the layer group apply to all other layers. But only if the manipulations are made on the layer group. Which is more or less fine, because the normal way to work is to manipulate some specific layer, and let the others follow. So put that layer in a group of its own, and link the others with the group.