Perl + Linux: Properly cleaning up a forking script after it exits
Leave no leftover children
One of the really tricky things about a Perl script that forks one way or another is how to make sure that the children vanish after the parent has exited. This is an issue whether the children were created with a plain fork() call or with a safe pipe, as with
my $pid = open(my $fd, '-|');
It may seem to work fine when the main script is terminated with a CTRL-C: the children will indeed vanish, because the SIGINT is delivered to the entire foreground process group. But try killing the main script with a “kill” command, which targets only the parent’s PID, and the parent dies, but the children remain alive and kicking.
The Linux-only solution is
use Linux::Prctl;
and then, in the part of the script that runs as a child, do
Linux::Prctl::set_pdeathsig(9);
immediately after the branch between parent and child. This tells Linux to send a SIGKILL to the process that made this call (i.e. the child) as soon as the parent exits. One might be more gentle with a SIGTERM (number 15). But the idea is the same. Parent is away, get the hammer.
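For reference, here’s a minimal sketch of how this fits together with a safe pipe (the child’s actual work is elided):

use strict;
use warnings;
use Linux::Prctl;

my $pid = open(my $fd, '-|'); # Forks; the child's STDOUT goes into the pipe
die("Failed to fork: $!\n") unless (defined $pid);

if ($pid == 0) { # This is the child
  Linux::Prctl::set_pdeathsig(9); # SIGKILL this process when the parent exits
  # ... the child's actual work goes here ...
  exit(0);
}

# This is the parent: read the child's output, then reap it
while (my $line = <$fd>) {
  print $line;
}
close($fd); # Waits for the child to terminate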
To get the Perl module:
# apt install liblinux-prctl-perl
And BTW, SIGPIPE doesn’t help here, even if there’s a pipe between the two processes: It’s delivered only when the child process attempts to write to a pipe that is closed on the other end. If it never writes, the broken pipe is never sensed. And if the child is on the reading side, there’s no SIGPIPE at all; the pipe just gives an EOF when the data is exhausted.
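To make the reading-side point concrete, here’s a little sketch: the reader below never sees a SIGPIPE, even though the writer has long exited by the time the pipe is drained; the reads simply end with an EOF.

use strict;
use warnings;

my $pid = open(my $fd, '-|'); # Child writes, parent reads
die("Failed to fork: $!\n") unless (defined $pid);

if ($pid == 0) { # Child: write something and quit
  print "some data\n";
  exit(0); # The writing end of the pipe is closed here
}

# Parent's side: this handler never fires on the reading side
$SIG{PIPE} = sub { print STDERR "SIGPIPE? Can't happen here.\n"; };

while (my $line = <$fd>) { # Returns undef (EOF) once the data is exhausted
  print $line;
}
close($fd);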
The pdeathsig mechanism can of course be used in non-Perl programs as well; this is just the Perl example.
Multiple safe pipes
When a process generates multiple children, there’s a problem: the children inherit the file descriptors that were already open when they were forked. For example, when the main script creates multiple children by virtue of safe pipes for read (calling open(my $fd, '-|') repeatedly, so the children write and the parent reads), a look at /proc/PID/fd of the children makes it clear that they have a lot of pipes opened that they have nothing to do with.
This can prevent the main script (the parent), as well as some of the children, from terminating, even after either side calls exit() or die(). These processes don’t turn into zombies, but remain plain unterminated processes, blocked on a system call. At least so it turned out on my Perl v5.26.1 on an x86_64 Linux machine.
The problem in this case occurs when a pipe has pending data as the main script attempts to terminate, for example by virtue of a print to STDOUT in a child (STDOUT being redirected to the pipe going to the parent). This is problematic because the child process attempts to write the remaining data just before quitting (STDOUT is flushed on exit), and blocks forever on this write() call if nobody drains the pipe. Since the child doesn’t terminate, the parent process blocks on wait(), and doesn’t terminate either. It’s a deadlock. Even if close() isn’t called explicitly in the main script, the automatic file descriptor close before termination behaves exactly the same: it waits for the child process.
What usually happens in this situation is that when the parent closes the file descriptor, the child gets a SIGPIPE on its next write to the pipe. The blocking write() consequently returns with an EPIPE status (Broken pipe), and the child process terminates. This allows the parent’s wait() to reap the child, and the parent process can continue.
And here’s the twist: if a pipe’s file descriptor is shared by several processes after forking, SIGPIPE is triggered only when the last copy of the file descriptor is closed. As a result, when the parent process closes one of its pipes, no SIGPIPE arrives as long as the children haven’t closed their inherited copies of the same pipe file descriptor. The deadlock described above occurs.
This can be worked around by closing the pipes so that the child processes are reaped in the reverse order of their creation. But it’s much simpler to just close the unnecessary file descriptors on the children’s side.
So the solution is to go
foreach my $fd (@safe_pipe_fds) {
  close($fd)
    and print STDERR "What? Closing unnecessary file descriptor was successful!\n";
}
on the child’s side, immediately after the call to set_pdeathsig(), as mentioned above.
All of these close() calls should fail with an ECHILD (No child processes) status: close() on a safe pipe attempts to waitpid() for the process on the other side, and here that process is the main script’s child, not the caller’s, so only the true parent can reap it. Regardless, the file descriptors are indeed closed, and each child process holds only the file descriptors it needs. And most importantly, there’s no problem terminating.
So the error message is given when the close is successful. The “and” part isn’t a mistake.
It’s also worth mentioning that exactly the same close() (with a failing wait() call) occurs anyhow when the child process terminates (I’ve checked it with strace). The code snippet above just makes it happen earlier, and solves the deadlock problem.
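Putting the pieces together, here’s a sketch of a parent spawning several children through safe pipes, with each child closing the pipes it inherited from its older siblings (the variable names are mine, for illustration):

use strict;
use warnings;
use Linux::Prctl;

my @safe_pipe_fds;

foreach my $i (1 .. 3) {
  my $pid = open(my $fd, '-|');
  die("Failed to fork: $!\n") unless (defined $pid);

  if ($pid == 0) { # Child
    Linux::Prctl::set_pdeathsig(9); # Vanish along with the parent

    # Close the pipes inherited from previously created siblings.
    # Each close() fails with ECHILD (a sibling can't be waitpid()ed for),
    # but the file descriptor is closed all the same.
    foreach my $sibling_fd (@safe_pipe_fds) {
      close($sibling_fd)
        and print STDERR "What? Closing unnecessary file descriptor was successful!\n";
    }

    print "Output of child $i\n"; # Goes into the pipe, towards the parent
    exit(0);
  }

  push @safe_pipe_fds, $fd; # The parent keeps the reading end
}

# Parent: drain each pipe, then reap its child by closing it
foreach my $fd (@safe_pipe_fds) {
  while (my $line = <$fd>) {
    print $line;
  }
  close($fd);
}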
Either way, it’s probably wiser to use explicit pipe() and fork() calls, except for really simple one-on-one IPC between the script and a single forked copy of itself, so that all this file descriptor handling and child reaping is done out in the open, as sketched below.
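For example, here’s what a plain pipe() plus fork() version might look like with a single child, every close() spelled out:

use strict;
use warnings;

pipe(my $reader, my $writer) or die("pipe() failed: $!\n");

my $pid = fork();
die("fork() failed: $!\n") unless (defined $pid);

if ($pid == 0) { # Child: the writing side
  close($reader); # The child doesn't read
  print $writer "Hello from the child\n";
  close($writer); # Flushes and closes the writing end
  exit(0);
}

close($writer); # The parent doesn't write; vital for getting an EOF below
while (my $line = <$reader>) {
  print $line;
}
close($reader);
waitpid($pid, 0); # Reap the child explicitly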
As for pipes to and from other executables with open(), that’s not a problem. I mean calls such as open(IN, "ps aux|") etc. That’s because Perl automatically closes all file descriptors except STDIN, STDOUT and STDERR when calling execve(), which is the syscall for executing another program.
Or more precisely, it sets the FD_CLOEXEC flag for all files opened with a file number above $^F (a.k.a. $SYSTEM_FD_MAX), which defaults to 2. So it’s actually Linux that automatically closes the files on a call to execve(). The possible SIGPIPE problem mentioned above is hence solved this way. Note that this is something Perl does for us, so if you’re writing a program in C and plan to call execve() after a fork, by all means close all file descriptors that aren’t needed before doing that.
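A quick way to observe this flag (a sketch; the file opened is arbitrary, /etc/hostname just happens to exist on most Linux systems):

use strict;
use warnings;
use Fcntl qw(F_GETFD FD_CLOEXEC);

print "\$^F is $^F\n"; # Typically 2, so only fds 0-2 survive an execve()

open(my $fh, '<', '/etc/hostname') or die("open() failed: $!\n");

my $flags = fcntl($fh, F_GETFD, 0) or die("fcntl() failed: $!\n");
printf("FD_CLOEXEC is %s on fd %d\n",
       ($flags & FD_CLOEXEC) ? "set" : "clear", fileno($fh));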