Empty statements in Verilog’s if-else are OK

A reminder to self: As the title implies, it’s OK to have an empty statement in a Verilog if-else. This is useful for making the following else-if clauses take effect only when the first condition isn’t met. This idiom is common in C, but I wasn’t sure whether it’s legal in Verilog too. It is.

So this is fine, for example:

always @(posedge clk)
  if (relax)
    ; // Do nothing.
  else if (this_condition)
    do_this <= do_this + 1;
  else if (that_condition)
    do_that <= do_that + 1;

This is backed up by IEEE Std 1364-2001 (standard Verilog), which says in Syntax 9-4:

conditional_statement ::=
    if ( expression ) statement_or_null [ else statement_or_null ]
  | if_else_if_statement

And then defines in section A.6.4:

statement ::=
    { attribute_instance } blocking_assignment ;
  | { attribute_instance } case_statement
  | { attribute_instance } conditional_statement
  | { attribute_instance } disable_statement
  | { attribute_instance } event_trigger
  | { attribute_instance } loop_statement
  | { attribute_instance } nonblocking_assignment ;
  | { attribute_instance } par_block
  | { attribute_instance } procedural_continuous_assignments ;
  | { attribute_instance } procedural_timing_control_statement
  | { attribute_instance } seq_block
  | { attribute_instance } system_task_enable
  | { attribute_instance } task_enable
  | { attribute_instance } wait_statement

statement_or_null ::=
  statement | { attribute_instance } ;

The attribute_instance parts are irrelevant here, in particular as they are optional.

udev, the “authorized” attribute and other failed attempts to ban a bogus USB keyboard


This is a spin-off post about failed attempts to fix the problem with a webcam’s keyboard buttons. Namely, that a shaky physical connection caused the USB device to go on and off the bus rapidly, and consequently crash X windows. The background story is in this post.

There is really nothing to learn from this post regarding how to accomplish something. The only reason I don’t trash this is that there’s some possibly useful information about udev.

What I tried to do

It’s possible to ban a USB device from being accessed by Linux, by virtue of the “authorized” attribute. Something like this:

# cd /sys/devices/pci0000:00/0000:00:14.0/usb2/2-5/
# echo 0 > authorized
^C^Z
# echo 1 > authorized
bash: echo: write error: Invalid argument

The ^C^Z after the first command is not a mistake. The first command got stuck for several seconds.

And this can be done with udev rules as well.
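For reference, such a rule would look roughly like this. This is an unverified sketch: the idVendor/idProduct values are the webcam’s (1908:2311, as appears further down), and the file name under /etc/udev/rules.d/ is arbitrary:

```
# Refuse to authorize this USB device as soon as it shows up.
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="1908", ATTR{idProduct}=="2311", ATTR{authorized}="0"
```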

But surprisingly enough, there doesn’t seem to be a way to avoid the generation of the /dev/input/event* file without ignoring the USB device completely. It’s possible to delete it early enough, but that doesn’t really help, it turns out.

ATTRS{authorized} can be set to 0 only for the entire USB device. There is no such parameter for a udev event with the “input” subsystem.

Some udev queries

While trying to figure out the ATTRS{authorized} thing, these are my little play-arounds. Nothing really useful here:

$ sudo udevadm monitor --udev --property

I got

UDEV  [5662716.427855] add      /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1 (usb)

UDEV  [5662716.430744] add      /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.1 (usb)

UDEV  [5662716.430935] add      /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0 (usb)

UDEV  [5662716.433265] add      /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/media5 (media)

UDEV  [5662716.435400] bind     /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.1 (usb)

UDEV  [5662716.436539] add      /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/video4linux/video0 (video4linux)
DEVLINKS=/dev/v4l/by-id/usb-Generic_USB2.0_PC_CAMERA-video-index0 /dev/v4l/by-path/pci-0000:00:14.0-usb-0:5.2.1:1.0-video-index0

UDEV  [5662716.436956] add      /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121 (input)
KEY=100000 0 0 0

UDEV  [5662716.591160] add      /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121/event22 (input)
DEVLINKS=/dev/input/by-path/pci-0000:00:14.0-usb-0:5.2.1:1.0-event /dev/input/by-id/usb-Generic_USB2.0_PC_CAMERA-event-if00

UDEV  [5662716.593390] bind     /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0 (usb)

UDEV  [5662716.595836] bind     /devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1 (usb)

So the device I want to avoid was /dev/input/event22 this time. What are its attributes?

$ sudo udevadm info -a -n /dev/input/event22 

Udevadm info starts with the device specified by the devpath and then
walks up the chain of parent devices. It prints for every device
found, all possible attributes in the udev rules key format.
A rule to match, can be composed by the attributes of the device
and the attributes from one single parent device.

  looking at device '/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121/event22':

  looking at parent device '/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121':
    ATTRS{name}=="USB2.0 PC CAMERA: USB2.0 PC CAM"

  looking at parent device '/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0':
    ATTRS{bAlternateSetting}==" 0"
    ATTRS{interface}=="USB2.0 PC CAMERA"

  looking at parent device '/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1':
    ATTRS{bNumInterfaces}==" 2"
    ATTRS{product}=="USB2.0 PC CAMERA"
    ATTRS{version}==" 2.00"

  looking at parent device '/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2':
    ATTRS{bNumInterfaces}==" 1"
    ATTRS{product}=="USB2.0 HUB"
    ATTRS{version}==" 2.00"

  looking at parent device '/devices/pci0000:00/0000:00:14.0/usb1/1-5':
    ATTRS{bNumInterfaces}==" 1"
    ATTRS{product}=="4-Port USB 2.0 Hub"
    ATTRS{version}==" 2.10"

  looking at parent device '/devices/pci0000:00/0000:00:14.0/usb1':
    ATTRS{bNumInterfaces}==" 1"
    ATTRS{manufacturer}=="Linux 4.15.0-20-generic xhci-hcd"
    ATTRS{product}=="xHCI Host Controller"
    ATTRS{version}==" 2.00"

  looking at parent device '/devices/pci0000:00/0000:00:14.0':

  looking at parent device '/devices/pci0000:00':

And what udev rules are currently in effect for this? Note that this doesn’t require root, and nothing really happens to the system:

$ udevadm test -a add $(udevadm info -q path -n /dev/input/event22)
calling: test
version 237
This program is for debugging only, it does not run any program
specified by a RUN key. It may show incorrect results, because
some values may be different, or not available at a simulation run.

Load module index
Parsed configuration file /etc/systemd/network/eth1.link
Skipping empty file: /etc/systemd/network/99-default.link
Created link configuration context.

[ ... reading a lot of files ... ]

rules contain 393216 bytes tokens (32768 * 12 bytes), 39371 bytes strings
25632 strings (220044 bytes), 22252 de-duplicated (184054 bytes), 3381 trie nodes used
GROUP 104 /lib/udev/rules.d/50-udev-default.rules:29
IMPORT builtin 'hwdb' /lib/udev/rules.d/60-evdev.rules:8
IMPORT builtin 'hwdb' returned non-zero
IMPORT builtin 'hwdb' /lib/udev/rules.d/60-evdev.rules:17
IMPORT builtin 'hwdb' returned non-zero
IMPORT builtin 'hwdb' /lib/udev/rules.d/60-evdev.rules:21
IMPORT builtin 'hwdb' returned non-zero
IMPORT builtin 'input_id' /lib/udev/rules.d/60-input-id.rules:5
capabilities/ev raw kernel attribute: 3
capabilities/abs raw kernel attribute: 0
capabilities/rel raw kernel attribute: 0
capabilities/key raw kernel attribute: 100000 0 0 0
properties raw kernel attribute: 0
test_key: checking bit block 0 for any keys; found=0
test_key: checking bit block 64 for any keys; found=0
test_key: checking bit block 128 for any keys; found=0
test_key: checking bit block 192 for any keys; found=1
IMPORT builtin 'hwdb' /lib/udev/rules.d/60-input-id.rules:6
IMPORT builtin 'hwdb' returned non-zero
IMPORT builtin 'usb_id' /lib/udev/rules.d/60-persistent-input.rules:11
/sys/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0: if_class 14 protocol 0
LINK 'input/by-id/usb-Generic_USB2.0_PC_CAMERA-event-if00' /lib/udev/rules.d/60-persistent-input.rules:32
IMPORT builtin 'path_id' /lib/udev/rules.d/60-persistent-input.rules:35
LINK 'input/by-path/pci-0000:00:14.0-usb-0:5.2.1:1.0-event' /lib/udev/rules.d/60-persistent-input.rules:40
PROGRAM 'libinput-device-group /sys/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121/event22' /lib/udev/rules.d/80-libinput-device-groups.rules:7
starting 'libinput-device-group /sys/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121/event22'
'libinput-device-group /sys/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121/event22'(out) '3/1908/2311:usb-0000:00:14.0-5.2'
Process 'libinput-device-group /sys/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5.2/1-5.2.1/1-5.2.1:1.0/input/input121/event22' succeeded.
IMPORT builtin 'hwdb' /lib/udev/rules.d/90-libinput-model-quirks.rules:46
IMPORT builtin 'hwdb' returned non-zero
IMPORT builtin 'hwdb' /lib/udev/rules.d/90-libinput-model-quirks.rules:50
IMPORT builtin 'hwdb' returned non-zero
handling device node '/dev/input/event22', devnum=c13:86, mode=0660, uid=0, gid=104
preserve permissions /dev/input/event22, 020660, uid=0, gid=104
preserve already existing symlink '/dev/char/13:86' to '../input/event22'
found 'c13:86' claiming '/run/udev/links/\x2finput\x2fby-id\x2fusb-Generic_USB2.0_PC_CAMERA-event-if00'
found 'c13:85' claiming '/run/udev/links/\x2finput\x2fby-id\x2fusb-Generic_USB2.0_PC_CAMERA-event-if00'
found 'c13:84' claiming '/run/udev/links/\x2finput\x2fby-id\x2fusb-Generic_USB2.0_PC_CAMERA-event-if00'
found 'c13:83' claiming '/run/udev/links/\x2finput\x2fby-id\x2fusb-Generic_USB2.0_PC_CAMERA-event-if00'
creating link '/dev/input/by-id/usb-Generic_USB2.0_PC_CAMERA-event-if00' to '/dev/input/event22'
preserve already existing symlink '/dev/input/by-id/usb-Generic_USB2.0_PC_CAMERA-event-if00' to '../event22'
found 'c13:86' claiming '/run/udev/links/\x2finput\x2fby-path\x2fpci-0000:00:14.0-usb-0:5.2.1:1.0-event'
found 'c13:85' claiming '/run/udev/links/\x2finput\x2fby-path\x2fpci-0000:00:14.0-usb-0:5.2.1:1.0-event'
found 'c13:84' claiming '/run/udev/links/\x2finput\x2fby-path\x2fpci-0000:00:14.0-usb-0:5.2.1:1.0-event'
found 'c13:83' claiming '/run/udev/links/\x2finput\x2fby-path\x2fpci-0000:00:14.0-usb-0:5.2.1:1.0-event'
creating link '/dev/input/by-path/pci-0000:00:14.0-usb-0:5.2.1:1.0-event' to '/dev/input/event22'
preserve already existing symlink '/dev/input/by-path/pci-0000:00:14.0-usb-0:5.2.1:1.0-event' to '../event22'
DEVLINKS=/dev/input/by-path/pci-0000:00:14.0-usb-0:5.2.1:1.0-event /dev/input/by-id/usb-Generic_USB2.0_PC_CAMERA-event-if00
Unload module index
Unloaded link configuration context.

Other failed attempts

I tried the following:

# Rule for disabling bogus keyboard on webcam. It causes X-Windows to
# crash if it goes on and off too much

SUBSYSTEM=="input", ENV{ID_VENDOR_ID}=="1908", ENV{ID_MODEL_ID}=="2311", MODE:="000"

(the := operator makes the assignment final, so it can’t be overridden by subsequent rules).

However, neither this rule nor a second one, which set LIBINPUT_IGNORE_DEVICE by matching on ATTRS{name}, managed to stop X from reacting.

Setting the mode to 000 made the device file inaccessible, but it was still registered. As for the second rule, it didn’t help: it did set LIBINPUT_IGNORE_DEVICE correctly, but for the wrong udev event. That’s because the udev event that triggers libinput is the one whose KERNEL attribute matches event[0-9]*, which is processed earlier (see 80-libinput-device-groups.rules), but ATTRS{name} isn’t defined for that specific udev event (see the output of udevadm info above).

I also tried RUN+="/bin/rm /dev/input/event%n", and that indeed removed the device node, but X still reacted, and complained with “libinput: USB2.0 PC CAMERA: USB2.0 PC CAM: Failed to create a device for /dev/input/event28”. Because it was indeed deleted.

But since it appears that X.org accesses keyboards through libinput, maybe use the example for ignoring a device, as given on this page, even though it’s quite similar to what I had already attempted?

So I saved this file as /etc/udev/rules.d/79-no-camera-keyboard.rules:

# Make libinput ignore webcam's button as a keyboard. As a result there's
# no event to X-Windows

ACTION=="add|change", KERNEL=="event[0-9]*", \
   ENV{ID_VENDOR_ID}=="1908", \
   ENV{ID_MODEL_ID}=="2311", \
   ENV{LIBINPUT_IGNORE_DEVICE}="1"

And then reload:

# udevadm control --reload

but that didn’t make any apparent difference (I verified that the rule was matched).

And that’s all, folks. Recall that I didn’t promise a happy end.

Perl: “$” doesn’t really mean end of string

Who ate my newline?

It’s 2023, Perl is ranked below COBOL, but I still consider it my loyal workhorse. But even the most loyal horse will give you a grand kick in the bottom every now and then.

So let’s jump to the problematic code:

use warnings;
use strict;

my $str = ".\n\n";

my $nonn = qr/[ \t]|(?<!\n)\n(?!\n)/;

my ($pre, $match, $post) = ($str =~ /^($nonn*)(.*?)($nonn*)$/s);

print "pre = \"$pre\"\n";
print "match = \"$match\"\n";
print "post = \"$post\"\n";

print "This doesn't add up!\n"
  unless ($str eq "$pre$match$post");

For now, never mind what I tried to do here. Let’s just note that $nonn doesn’t capture anything: Those two expressions with parentheses are a lookbehind and a lookahead, and hence don’t capture.

So now let’s look at

my ($pre, $match, $post) = ($str =~ /^($nonn*)(.*?)($nonn*)$/s);

The whole thing is anchored between ^ and $, and everything in the middle is captured into three matches. So no matter what, the concatenation of these three matches should equal $str, shouldn’t it? Let’s give it a test run:

$ ./try.pl
pre = ""
match = ".
"
post = ""
This doesn't add up!

So $pre and $post are empty. OK, fine. Hence $match should equal $str, which is “.\n\n”. But I see only one newline. Where’s the other one?


The one thing that I really like about Perl is that even when it plays a dirty trick, the answer is in the plain manual. As in “man perlre”, where it says, in black and white, in the description of $:

Match the end of the string (or before newline at the end of the string; or before any newline if /m is used)

So there we have it. “$” can also consider the character before the last newline as the end. Note that “$” itself will not match the last newline, so even if there’s a capture on the “$” itself, as in “($)”, that last newline is still not captured. It’s a Perl quirk. One of those things that make Perl do exactly what you really want, except for when you’re surgical about it.

I’ve been using Perl a lot for 20 years, and I wasn’t aware that “$” could match anything but the end of the string (let alone the “/m” modifier).

So that’s what happened above: $ considered the character before the last newline to be the end, and one newline went up in smoke.
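The quirk is easy to reproduce in isolation. A minimal sketch with one-liners (assuming perl is in the path):

```shell
# '$' is happy to match just before a trailing newline:
perl -e 'print "matched\n" if "a\n" =~ /a$/'    # prints "matched"

# '\z' insists on the very end of the string, so this prints nothing:
perl -e 'print "matched\n" if "a\n" =~ /a\z/'
```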

Use \z instead

The second thing that I really like about Perl, is that even when it’s quirky, there’s always a simple solution. The same “man perlre” also says:

\z     Match only at end of string

Simple, isn’t it? From now on and until the end of time, always use \z if you really mean the end of string. Like, character-wise. And if I change “$” to “\z” in the code above, I get:

my ($pre, $match, $post) = ($str =~ /^($nonn*)(.*?)($nonn*)\z/s);

and the test run gives:

$ ./try.pl
pre = ""
match = ".

"
post = ""

The workhorse is back on track again.

What I really wanted to do

Since I messed up with this regex, I should maybe explain what it does:

my $nonn = qr/[ \t]|(?<!\n)\n(?!\n)/;

First, let’s note that $nonn matches exactly one character: either a plain space, a tab or a newline. But what’s the mess with the newline?

The “(?<!\n)\n(?!\n)” part says this: Match a \n character that isn’t preceded by a \n, and isn’t followed by a \n. Or in other words, match a newline only if it isn’t part of a sequence of newlines. Only if it’s one, isolated \n.

No double \n. Or for short, “nonn”.

I needed this for a script that handles multiple newlines later on (in LaTeX, a double newline means a new paragraph, that’s the reason).

And it actually worked. The “\n\n” part in the string wasn’t matched into either $pre or $post. But the (.*?), which attempts to match as little as possible, sold off the last newline to $. Tricky stuff.

Writing about timing and timing constraints: Kind-of behind the scenes

Oh my goodness

I’ll say this from the start: This is me babbling. You have been warned. Don’t expect anything coherent in this post. Not that I necessarily keep things very tidy in my other posts, but this one is clearly me typing at full speed, not trying to be organized or anything.

I’ll buy you a pizza if you figure out the real motivation behind this post. Well, actually I won’t. At most, I’ll answer with an emoji with a pizza. But you’re welcome to try your luck anyhow. If you bother to read this, that is. Which I’m not sure is the best way to spend your time.

So these are my thoughts after publishing a series of pages about FPGA timing and timing constraints (which begins with this page). It’s not only a relatively long series of pages, but it also involves the translation of the pages into Chinese, Japanese and Korean. I know none of these languages, so I used Google Translate. I’ve already written a post about the challenges and solutions with automatic translation of technical documents.

The translations are a huge experiment. It’s not clear at all if people who speak these languages will find them appealing. There’s clearly a readability problem with an automatic translation, no matter how hard I work to make the text digestible for an AI machine.

People who speak English and one of these languages have expressed little enthusiasm when looking at the results (to say the least). But then it boils down to the choices of people who only have that East Asian language. They will surely have a laugh or a curse when encountering the hiccups of automatic translation, but do they have a better alternative? Time will tell. Or more precisely, the logs will tell.

I’ve never written a behind-the-scenes post about other things that I’ve written. Maybe I’m doing it because it’s the first time in ages that I can write English exactly the way I think, without restricting myself to writing in a way that works well with the translation machinery. One of the main limitations with automatic translation to CJK languages is that the sentences must be short and concise. The ordering of words is very different in those languages, so if I use a word like “it” or “which” that refers back to something mentioned earlier in the sentence, the translation machinery is forced to decide what it referred to, and name it explicitly in the translation. And quite often, that goes wrong. When the translation is to another European language, or even Hebrew, it’s possible to retain the ambiguity in the translation, and leave it to the intelligence of the reader.

So if this post feels like a huge ramble, yes, that’s exactly what it is. Me feeling free at the keyboard. The liberation from thinking how each and every word may be mistranslated. I mean, even a word like “generation” in the sense of creating something can be translated into something about a generation gap. Or if I say “that’s a special case”, will that become something about a bag? I’m really tired of translating and back-translating and cutting down Chinese into single characters in order to ensure that when I write “segment” I don’t get something related to marketing.

Let’s talk about timing

I’ve been thinking about writing a guide to FPGA timing and timing constraints for quite a while. It’s one of the most important topics, if not the most important topic in the field, and somehow it seems like the explanations about this field are either very theoretical, or so specific that it’s difficult to understand the whole point. It was clear to me that I needed to begin with the basics of the theory (and translate it, of course). I was, however, quite surprised at how little theory there was to convey. The entire thing is about setup time, hold time and clock-to-output. That’s it. That’s the whole story.

It was quite clear where to take it from there: To discuss the most commonly used timing constraint, that is to say the period constraint. And make that international, once again.

But how do you talk about this simple constraint? I mean, the normal explanation is just “this is what you should write”. How do you explain what it really means?

So the approach that I went for throughout the series of pages was to explain it through the timing reports. It’s a bit of a tedious way, but the idea was that the correct way to understand timing constraints, and in fact one of the most essential capabilities for working with timing, is to read the timing reports. Hence I went for explaining the timing report.

Another alternative could have been to jump into explaining the meaning of the timing constraint as a clock object. This is a more technical approach, and it’s necessary to go through it at some stage. But I deferred that to later.

To PLL or not to PLL

And by the way, that’s a title that I really wanted somewhere in those pages, but it was clearly going to be lost in translation. So all you guys from East Asia out there, if you won’t read the translated pages, just know I sacrificed proper English in favor of you getting something decent to read. Kindly appreciate it.

Anyhow, it turned out that I couldn’t finish the entire topic of the period constraint in one page, so there’s a second page on the same topic, this time for the sake of explaining how a PLL changes the picture. And that brings up the whole story of clock domain crossing between related clocks. Plus I also filled in the part about hold time.

Hold time is a neglected topic regarding timing constraints, probably because the timing constraints rarely fail on it. And still: The tools deliberately add delays to the paths in order to meet hold timing, and by doing so, they put an obstacle before themselves in the quest to attain setup timing. It’s quite amazing to see how a simple routed net suddenly gets a delay of several ns, just because hold timing required it. In particular when crossing clock domains between related, but unaligned clocks.

And that’s another expression I couldn’t use on those pages: Crossing clock domains. It always says “clock domain crossing”, and it’s protected against translation. Otherwise, I get crucifixion in the translation. And the protection against translation works only on nouns, not verbs. Did I just divert from the topic? Maybe I need a closure?

Timing closure

When is the right time to talk about it? Timing closure is often the reason we even start thinking about timing (ghosts in the FPGA are another common reason). So it’s definitely an important topic, at least to the impatient reader. But shouldn’t I put it at the end? After all, it’s the ultimate surgical operation, the one that is done once you understand it all.

So no. I went with the people on this one. It’s the ultimate topic, yes, but I also need to realize that some people will read one page or two and then give up. So better get to the important thing ASAP. Actually, I also gave away a checklist kind of page (with trans-la-tions). Maybe that’s the biggest contradiction I made in those pages. I said all the time that timing is like a divine science, and don’t be lazy about it. Be deep and profound, learn it as an art, and then I come up with a checklist. Not only that, I repeat that crime at the end of the page series. And I do that in three additional languages, too.

In my defense (or is it?), the repeated crime had a practical and silly reason. I had that page already written. More or less. So it was a bit of low-hanging fruit. And my rule (in this blog and otherwise) has always been not to throw away food. If you have some random pieces of cheese in your fridge, make a quiche. If there is mold on the cheese, you just make a different kind of quiche.

So that last page in the series is not 100% coherent with those preceding it, but it kinda summarizes the point. Besides, I already mentioned that the whole series of pages is one big experiment…?


Tachles

That’s a word in Yiddish, literally taken from the Hebrew word תכלית. Actually, it’s the pronunciation of that word in the mouth of European Jews. The meaning of the word is “purpose”, the reason for doing something, but more in the direction of the intention, what result you want to achieve etc.

So after talking about timing closure for two pages, I finally got to the pudding: actually talking about how to write timing constraints. But not so fast! There’s still one tedious phase to go through, namely learning how to select the logic that we want to be specific about.

This whole thing with Tcl commands that are used in the SDC format (that is, by Vivado, Quartus and several other tools) gives a certain level of compatibility. But as it turns out, there’s Synopsys’ set of capabilities, which seems to have been adopted in full by Vivado, and then there’s Quartus’ take on the same topic, which is same-same but completely different. So the simple commands work the same, but when it comes to more complicated stuff, Quartus diverges, and not for the better. In particular, Quartus lacks the “-filter” option, which allows more complicated expressions for deciding whether to include a logic element (actually, the object representing it) in a timing constraint. So one is limited to playing with wildcards, and that, well, sucks.
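To make the difference concrete, here’s the kind of thing I mean. This is a sketch with made-up cell names, not something from the actual pages:

```
# Vivado: -filter selects objects by an arbitrary property expression.
set_false_path -to [get_cells -hierarchical -filter {NAME =~ "*sync_reg*"}]

# Quartus: no -filter, so it's wildcards on names only.
set_false_path -to [get_registers {*sync_reg*}]
```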

So in short, this was the point where I had to admit that Vivado is superior, by a margin. I also began to suspect that Vivado is Synopsys in disguise, but that’s just a wild speculation.

It’s only after that, that I could begin to really talk about commands like set_max_delay, set_min_delay and set_false_path. That’s what I call tachles. I finally got to the point, after rambling tons.

I think that the most important point about set_false_path is that it has maximal priority. One would usually consider priority to be an advanced topic.

Ah, and here comes another messy, out-of-nowhere comment about this series of pages. The idea wasn’t to write a full and comprehensive guide that includes everything there is to know. That’s what the official docs are for. The idea was to cover the things that I personally use and feel that I must be aware of, and hence convey the minimal toolbox that one needs to carry.

Return from subroutine: I didn’t want to cover an advanced topic like priorities between timing constraints, because who really needs that? It’s quite clear that those specific timing constraints are stronger than the period constraint, and except for that, nobody should mess with more complicated combinations of contradicting constraints.

But set_false_path is scary: It overrides everything else, and it silently disables the timing check of the relevant paths. It’s literally a killer. Make a small mistake with that constraint, and your design is cooked. So I spent quite a few words on warnings about that.

Eight o’clock rock

When I first started working with FPGAs, my mentors were from the ASIC industry. For some reason, which I’m still to find out, ASIC guys like to do everything with one clock, if possible. In the FPGA world, it’s more like clocks being given away as if they were air. So the whole issue of timing constraints needs to be addressed in the context of crossing clock domains.

And here’s another word that I avoided all the time: Addressed. Because who knows what that will be translated into, when it comes to Chinese, Japanese and Korean.

The alternative to a lot of clocks is multi-cycle paths, of course: This is what the ASIC guys do in order to stick to one clock for the entire design. And in order to take advantage of the relatively relaxed timing of logic that depends on a clock enable, there are multi-cycle path constraints. Which is a risky business. I won’t repeat the explanation here, but let’s say that it’s generally not recommended.
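For reference, the constraint in question is set_multicycle_path. A hedged sketch in SDC, with a made-up path selection, for logic whose clock enable is asserted every other cycle:

```
# Give setup two clock cycles on these paths, and move the hold check
# back accordingly (the usual N-1 companion of a setup multiplier of N).
set_multicycle_path 2 -setup -to [get_cells slow_side_reg*]
set_multicycle_path 1 -hold  -to [get_cells slow_side_reg*]
```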

Who you wanna talk to?

One of the things that I actually planned and carried out as planned (well, not really as planned) was to put the discussion about I/O constraints last. This went against the principle of important things first, but since I/O constraints are technically just a special case of constraints on internal paths (that is, paths that start and end inside the FPGA), it made a lot of sense to put this part last.

Now read that last sentence again, and realize that there’s nothing even close to that length in the pages I wrote on timing constraints. No chance a sentence like that would have been translated correctly.

So where was I? Oh yes, timing constraints for I/O. But soon enough I realized that I can’t talk about timing constraints for I/O before discussing system-synchronous vs. source-synchronous vs. asynchronous I/O. But it’s not that simple. One can’t just throw these terms in the air and hope for the best. So all of a sudden I found myself with a small בלת”ם (read “baltam”, meaning “unplanned” in Hebrew), which wasn’t so small.

That’s because source-synchronous I/O means both inputs and outputs, and these are implemented in different ways. So there was no choice (or was there?) but to sit down and explain how to do that. How can you discuss the possibilities for timing constraints of a source-synchronous input without explaining how the I/O is implemented? There are quite a few possibilities, after all.

Source-synchronous outputs are actually a much simpler story, because it just means to generate all signals with registers.

On top of all this mess, there’s another topic which is important both for the timing of I/O and for ensuring that the design behaves in a consistent way: IOB registers.

But oops. IOB is a Xilinx word. Oddly enough, it doesn’t seem like there’s a standard term for the idea of having a flip-flop close to the I/O pin. It’s such a common thing, and yet there’s no common term. I think. So I used the term I’m used to, IOB. It’s quite apparent that I’m a Xilinx guy. I’ve worked a bit with Altera (should I call them AMD and Intel FPGA by now? It takes time to get used to that). I nevertheless tried to be vendor-neutral in those pages. They were supposed to be about all FPGAs, not only the two that were leading back in 202x, just before being acquired by two giants, leaving an unknown future ahead.

But that was quite impossible, because timing constraints are usually written in SDC, and SDC was invented by Synopsys, and Vivado is the closest that I know of to the full and accurate interpretation of the constraints, as well as the Tcl environment. That was yet another long sentence that I would never write for translation. But I actually expect that all tools will end up supporting Synopsys’ full vocabulary, so maybe a few years from now, these posts will really be vendor-neutral, and the only thing people won’t understand is what Vivado and Xilinx are, because some other FPGA vendor that we’ve never heard of has taken control of the market, while “AMD FPGA” put all they had on datacenter acceleration and ended up closing the FPGA department because, well, because.

Did I diverge again? Definitely. All I wanted to say is that I also wrote a page about IOB registers, and needless to say, that page is available in three additional languages, just like the rest.


I really wonder if anyone will read through this post from top to bottom. If you haven’t, and just scrolled down to the bottom to see if this goes anywhere, I can’t blame you. Truth be told, this post went nowhere, and neither was it intended to.

Any guesses about the reason I wrote this anyhow? I still promise to send a pizza emoji in response to correct answers. Even though all answers are incorrect by the nature of this foolishness.

A few posts on FPGA on my other site

I’ve been a bit silent on this blog for a while, but that’s only because I’ve been busy writing on my spin-off site lately.

So here are a few posts over there which are pretty much related to what I do on this blog. These pages are also translated to Chinese, Japanese and Korean, so this is what those links in parentheses are about.

About FIFOs:

About clock domains:

Using git send-email with Gmail + OAUTH2, but without subscribing to cloud services


There is a widespread belief that in order to use git send-email with Gmail, there’s a need to subscribe to Google Cloud services and obtain some credentials. Or that two-factor authentication (2FA) is required.

This is not the case, however. If Thunderbird can manage to fetch and send emails through Google’s mail servers (as well as other OAUTH2-authenticated mail services), there’s no reason why a utility can’t do the same.

The subscription to Google’s services is indeed required if the communication with Google’s server must be done without human supervision. That’s the whole point with API keys. If a human is around when the mail is dispatched, there’s no need for any special measures. And it’s quite obvious that there’s a responsive human around when a patch is being submitted.

What is actually needed is a client ID and a client secret, and these are indeed obtained by registering to Google’s cloud service (this explains how). But here’s the thing: Someone at Mozilla has already obtained these, and hardcoded them into Thunderbird itself. So there’s no problem using them to access Gmail with another mail client. It seems like many believe that the client ID and secret must be related to the mail account being accessed, and that therefore each and every one has to obtain their own pair. That’s a mistake that has made a lot of people angry for nothing.

This post describes how to use git send-email without any further involvement with Google, except for having a Gmail account. The same method most likely applies to other mail service providers that rely on OAUTH2; I haven’t gotten into that, but it should be quite easy to adapt the same idea to other services.

For this to work, Thunderbird must be configured to access the same email account. This doesn’t mean that you actually have to use Thunderbird for your mail exchange. It’s actually enough to configure the Gmail server as an outgoing mail server for the relevant account. In other words, you don’t even need to fetch mails from the server with Thunderbird.

The point is to make Thunderbird set up the OAUTH2 session, fetch the relevant piece of credentials from it, and take it from there with Google’s servers. Thunderbird is a good candidate for taking care of the session’s setup, because the whole idea with OAUTH2 is that the user / password session (plus possible additional authentication challenges) is done with a browser. Since Thunderbird is Firefox in disguise, it integrates the browser session well into its general flow.

If you want to use another piece of software to maintain the OAUTH2 session, that’s most likely possible, given that you can get its refresh token. This will also require obtaining its client ID and client secret. Odds are that they can be found somewhere in that software’s sources, exactly as I found them for Thunderbird. Or look at the https connection it runs to get an access token (which isn’t all that easy, what with the encryption).

Outline of solution

All below relates to Linux Mint 19, Thunderbird 91.10.0, git version 2.17.1, Perl 5.26 and msmtp 1.8.14. But except for Thunderbird and msmtp, I don’t think the versions are going to matter.

It’s highly recommended to read through my blog post on OAUTH2, in particular the section called “The authentication handshake in a nutshell”. You’re going to need to know the difference between an access token and a refresh token sooner or later.

So the first obstacle is the fact that git send-email relies on the system’s sendmail to send out the emails. That utility doesn’t support OAUTH2 at the time of writing this. So instead, I used msmtp, which is a drop-in replacement for sendmail, plus it supports OAUTH2 (since version 1.8.13).

msmtp identifies itself to the server by sending it an access token in the SMTP session (see a dump of a sample session below). This access token is short-lived (3600 seconds from Google as of writing this), so it can’t be fetched from Thunderbird just like that. In particular because most of the time Thunderbird doesn’t have it.

What Thunderbird does have is a refresh token. It’s a completely automatic task to ask Google’s server for the access token with the refresh token at hand. It’s also an easy task (once you’ve figured out how to do it, that is). It’s also easy to get the refresh token from Thunderbird, exactly in the same way as getting a saved password. In fact, Thunderbird treats the refresh token as a password.

msmtp allows executing an arbitrary program in order to get the password or the access token. So I wrote a Perl script (oauth2-helper.pl) that reads the refresh token from a file and gets an access token from Google’s server. This is how msmtp manages to authenticate itself.
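The exchange that the script performs boils down to a single HTTPS POST against Google’s token endpoint. Here’s a minimal shell sketch of the same idea, assuming curl is available; the client ID and secret are placeholders for the values extracted from Thunderbird, and the sed one-liner is a crude stand-in for the script’s proper JSON parsing:

```shell
# Placeholders -- substitute Thunderbird's hardcoded client ID / secret:
CLIENT_ID="THUNDERBIRD-CLIENT-ID"
CLIENT_SECRET="THUNDERBIRD-CLIENT-SECRET"
REFTOKEN="$(cat ~/.oauth2_reftoken 2>/dev/null)"

# Exchange the refresh token for a short-lived access token:
RESPONSE="$(curl -s https://oauth2.googleapis.com/token \
  -d client_id="$CLIENT_ID" \
  -d client_secret="$CLIENT_SECRET" \
  -d refresh_token="$REFTOKEN" \
  -d grant_type=refresh_token)"

# Crude extraction of the access_token field from the JSON response:
printf '%s\n' "$RESPONSE" | sed -n 's/.*"access_token" *: *"\([^"]*\)".*/\1/p'
```

With real credentials, the response is a JSON object containing access_token and expires_in (3600 seconds as of writing).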

So everything relies on this refresh token. In principle, it can change every time it’s used. In practice, as of today, Google’s servers don’t change it. It seems like the refresh token is automatically replaced every six months, but even if that’s true today, it may change.

But that doesn’t matter so much. All that is necessary is that the refresh token is correct once. If the refresh token goes out of sync with Google’s server, a simple user / password session rectifies this. And as of now, that virtually never happens.

So let’s get to the hands-on part.

Install msmtp

Odds are that your distribution offers msmtp, so it can be installed with something like

# apt install msmtp

Note however that the version needs to be at least 1.8.13, which wasn’t my case (Linux Mint 19). So I installed it from the sources. To do that, first install the TLS library, if it’s not installed already (as root):

# apt install gnutls-dev

Then clone the git repository, compile and install:

$ GIT_SSL_NO_VERIFY=true git clone http://git.marlam.de/git/msmtp.git
$ cd msmtp
$ git checkout msmtp-1.8.14
$ autoreconf -i
$ ./configure
$ make && echo Success
$ sudo make install

The installation goes to /usr/local/bin and other /usr/local/ paths, as one would expect.

I checked out version 1.8.14 because later versions failed to compile on my Linux Mint 19. OAUTH2 support was added in 1.8.13, and judging by the commit messages it hasn’t been changed since, except for commit 1f3f4bfd098, which is “Send XOAUTH2 in two lines, required by Microsoft servers”. Possibly cherry-pick this commit (I didn’t).

Once everything has been set up as described below, it’s possible to send an email with

$ msmtp -v -t < ~/email.eml

The -v flag is used only for debugging, and it prints out the entire SMTP session.

The -t flag tells msmtp to fetch the recipients from the mail’s own headers. Otherwise, the recipients need to be listed in the command line, just like sendmail. Without this flag or recipients, msmtp just replies with

msmtp: no recipients found

The -t flag isn’t necessary with git send-email, because it explicitly lists the recipients in the command line.

The oauth2-helper.pl script

As mentioned above, Thunderbird has the refresh token, but msmtp needs an access token. So the script that talks with Google’s server and grabs the access token can be downloaded from its Github repo. Save it, with execution permission, as /usr/local/bin/oauth2-helper.pl (or whatever, but this is what I assume in the configurations below).

Some Perl libraries may be required to run this script. On a Debian-based system, the packages’ names are probably something like libhttp-message-perl, libwww-perl and libjson-perl.

It’s written to access Google’s token server, but can be modified easily to access a different service provider by changing the parameters at its beginning. For other email providers, check if it happens to be listed in OAuth2Providers.jsm. I don’t know how well it will work with those other providers, though.

The script reads the refresh token from ~/.oauth2_reftoken as a plain file containing the blob only. There’s an inherent security risk of having this token stored like this, but it’s basically the same risk as the fact that it can be obtained from Thunderbird’s credential files. The difference is the amount of security by obscurity. Anyhow, the refresh token isn’t your password, and your password can’t be derived from it. Either way, make sure that this file has 0600 or 0400 permission, if you’re running on a multi-user computer.

The script caches the access token in ~/.oauth2_acctoken, along with an expiration timestamp. As of today, this means that the script talks with Google’s server at most once every 60 minutes.
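The caching logic can be sketched in shell along these lines. This is an illustration only, not the script’s actual code (which is Perl), and the two-line cache file layout shown here, an expiry timestamp followed by the token, is an assumption made for this sketch:

```shell
CACHE="$HOME/.oauth2_acctoken"
NOW="$(date +%s)"
TOKEN=""

# First cache line: expiry as a Unix timestamp. Second line: the token.
if [ -r "$CACHE" ]; then
  EXPIRY="$(sed -n 1p "$CACHE")"
  if [ -n "$EXPIRY" ] && [ "$NOW" -lt "$EXPIRY" ] 2>/dev/null; then
    TOKEN="$(sed -n 2p "$CACHE")"   # Cached token is still valid
  fi
fi

if [ -z "$TOKEN" ]; then
  # Cache miss or expired: this is where a fresh access token would be
  # fetched from the token endpoint and stored along with its expiry,
  # e.g. with: printf '%s\n%s\n' "$(( NOW + 3599 ))" "$TOKEN" > "$CACHE"
  :
fi
echo "$TOKEN"
```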

Setting up config files

So with msmtp installed and the script downloaded into /usr/local/bin/oauth2-helper.pl, all that is left is configuration files.

First, create ~/.msmtprc as follows (put your Gmail username instead of mail.username, of course):

account default
host smtp.gmail.com
port 587
tls on
tls_starttls on
auth xoauth2
user mail.username
passwordeval /usr/local/bin/oauth2-helper.pl
from mail.username@gmail.com

And then change the [sendemail] section in ~/.gitconfig to

        smtpServer = /usr/local/bin/msmtp

That’s it. Only that single line. It’s however possible to use smtpServerOption in the .gitconfig to add various flags. So for example, to get the entire SMTP session shown while sending the email, it should say:

        smtpServer = /usr/local/bin/msmtp
        smtpServerOption = -v

But really, don’t, unless there’s a problem sending mails.

Other than that, don’t keep old settings. For example, there should not be a “from=” entry in .gitconfig. Having one causes a “From:” header to be added to the mail’s body (so it’s visible to the reader of the mail). This header is created when there is a difference between the “From” that is generated by git send-email (which is taken from the “from=” entry) and the patch’s author, as it appears in the patch’s “From” header. The purpose of this in-body header is to tell “git am” who the real author is (i.e. not the sender of the patch). So this extra header won’t appear in the commit, but it nevertheless makes the sender of the message look somewhat clueless.

So in short, no old junk.
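A quick way to check for the troublesome leftover is with git config itself. This sketch looks for a sendemail.from entry in the global configuration and removes it if one is found:

```shell
# If a "from" entry is present under [sendemail] in ~/.gitconfig, drop it:
if git config --global --get sendemail.from >/dev/null 2>&1; then
  git config --global --unset sendemail.from
fi
```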

Sending a patch

Unless it’s the first time, I suggest just trying to send the patch to your own email address, and see if it works. There’s a good chance that the refresh token from the previous time will still be good, so it will just work, and there’s no point in hassling further.

Actually, it’s fine to try like this even on the first time, because the Perl script will fail to grab the access token and then tell you what to do to fix it, namely:

  • Make sure that Thunderbird has access to the mail account itself, possibly by attempting to send an email through Gmail’s server.
  • Go to Thunderbird’s Preferences > Privacy & Security and click on Saved Passwords. Look for the account whose Provider starts with oauth://. Right-click that line and choose “Copy Password”.
  • Create or open ~/.oauth2_reftoken, and paste the blob into that file, so it contains only that string. No need to be uptight with newlines and whitespaces: They are ignored.
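In shell terms, the last step amounts to something like this; the token string below is a made-up placeholder for the blob copied from Thunderbird:

```shell
# Hypothetical refresh token blob (paste the real one from Thunderbird):
REFTOKEN='1//0dummy-refresh-token-blob'

printf '%s\n' "$REFTOKEN" > "$HOME/.oauth2_reftoken"
chmod 600 "$HOME/.oauth2_reftoken"   # Keep it private on multi-user systems
```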

And then go, as usual:

$ git send-email --to 'my@test.mail' 0001-my.patch

I’ve added the output of a successful session (with the -v flag) below.

Room for improvements

It would have been nicer to fetch the refresh token automatically from Thunderbird’s credentials store (that is from logins.json, based upon the decryption key that is kept in key4.db), but the available scripts for that are written in Python. And to me Python is equal to “will cause trouble sooner or later”. Anyhow, this tutorial describes the mechanism (in the part about Firefox).

Besides, it could have been even nicer if the script was completely standalone, and didn’t depend on Thunderbird at all. That requires doing the whole dance with the browser, something I have no motivation to get into.

A successful session

This is what it looks like when a patch is properly sent, with the smtpServerOption = -v line in .gitconfig (so msmtp produces verbose output):

Send this email? ([y]es|[n]o|[q]uit|[a]ll): y
ignoring system configuration file /usr/local/etc/msmtprc: No such file or directory
loaded user configuration file /home/eli/.msmtprc
falling back to default account
Fetching access token based upon refresh token in /home/eli/.oauth2_reftoken...
using account default from /home/eli/.msmtprc
host = smtp.gmail.com
port = 587
source ip = (not set)
proxy host = (not set)
proxy port = 0
socket = (not set)
timeout = off
protocol = smtp
domain = localhost
auth = XOAUTH2
user = mail.username
password = *
passwordeval = /usr/local/bin/oauth2-helper.pl
ntlmdomain = (not set)
tls = on
tls_starttls = on
tls_trust_file = system
tls_crl_file = (not set)
tls_fingerprint = (not set)
tls_key_file = (not set)
tls_cert_file = (not set)
tls_certcheck = on
tls_min_dh_prime_bits = (not set)
tls_priorities = (not set)
tls_host_override = (not set)
auto_from = off
maildomain = (not set)
from = mail.username@gmail.com
set_from_header = auto
set_date_header = auto
remove_bcc_headers = on
undisclosed_recipients = off
dsn_notify = (not set)
dsn_return = (not set)
logfile = (not set)
logfile_time_format = (not set)
syslog = (not set)
aliases = (not set)
reading recipients from the command line
<-- 220 smtp.gmail.com ESMTP m8-20020a7bcb88000000b003c6d21a19a0sm3316430wmi.29 - gsmtp
--> EHLO localhost
<-- 250-smtp.gmail.com at your service, []
<-- 250-SIZE 35882577
<-- 250-8BITMIME
<-- 250-STARTTLS
<-- 250-CHUNKING
<-- 250 SMTPUTF8
<-- 220 2.0.0 Ready to start TLS
TLS session parameters:
TLS certificate information:
        C=US,O=Google Trust Services LLC,CN=GTS CA 1C3
        Activation time: Mon 26 Sep 2022 11:22:04 AM IDT
        Expiration time: Mon 19 Dec 2022 10:22:03 AM IST
        SHA256: 53:F3:CA:1D:37:F2:1F:ED:2C:67:40:A2:A2:29:C2:C8:E8:AF:9E:60:7A:01:92:EC:F0:2A:11:E8:37:A5:88:F3
        SHA1 (deprecated): D4:69:6E:59:2D:75:43:59:02:74:25:67:E7:57:40:E0:28:43:A8:62
--> EHLO localhost
<-- 250-smtp.gmail.com at your service, []
<-- 250-SIZE 35882577
<-- 250-8BITMIME
<-- 250-CHUNKING
<-- 250 SMTPUTF8
<-- 235 2.7.0 Accepted
--> MAIL FROM:<mail.username@gmail.com>
--> RCPT TO:<test@mail.com>
--> RCPT TO:<mail.username@gmail.com>
--> DATA
<-- 250 2.1.0 OK m8-20020a7bcb88000000b003c6d21a19a0sm3316430wmi.29 - gsmtp
<-- 250 2.1.5 OK m8-20020a7bcb88000000b003c6d21a19a0sm3316430wmi.29 - gsmtp
<-- 250 2.1.5 OK m8-20020a7bcb88000000b003c6d21a19a0sm3316430wmi.29 - gsmtp
<-- 354  Go ahead m8-20020a7bcb88000000b003c6d21a19a0sm3316430wmi.29 - gsmtp
--> From: Eli Billauer <mail.username@gmail.com>
--> To: test@mail.com
--> Cc: Eli Billauer <mail.username@gmail.com>
--> Subject: [PATCH v8] Gosh! Why don't you apply this patch already!
--> Date: Sun, 30 Oct 2022 07:01:14 +0200
--> Message-Id: <20221030050114.49299-1-mail.username@gmail.com>
--> X-Mailer: git-send-email 2.17.1

[ ... email body comes here ... ]

--> --
--> 2.17.1
--> .
<-- 250 2.0.0 OK  1667106108 m8-20020a7bcb88000000b003c6d21a19a0sm3316430wmi.29 - gsmtp
--> QUIT
<-- 221 2.0.0 closing connection m8-20020a7bcb88000000b003c6d21a19a0sm3316430wmi.29 - gsmtp
OK. Log says:
Sendmail: /usr/local/bin/msmtp -v -i test@mail.com mail.username@gmail.com
From: Eli Billauer <mail.username@gmail.com>
To: test@mail.com
Cc: Eli Billauer <mail.username@gmail.com>
Subject: [PATCH v8] Gosh! Why don't you apply this patch already!
Date: Sun, 30 Oct 2022 07:01:14 +0200
Message-Id: <20221030050114.49299-1-mail.username@gmail.com>
X-Mailer: git-send-email 2.17.1

Result: OK

Ah, and the fact that the access token can be copied from here is of course meaningless, as it has expired long ago.

Thunderbird debug notes

These are some random notes I made while digging in Thunderbird’s guts to find out what’s going on.

So this is Thunderbird’s official git repo. Not that I used it.

To get logging info from Thunderbird: Based upon this page, go to Thunderbird’s preferences > General and click the Config Editor button. Set mailnews.oauth.loglevel to All (was Warn). Same with mailnews.smtp.loglevel. Then open the Error Console with Ctrl+Shift+J.

The cute thing about these logs is that the access token is written in the log. So it’s possible to skip the Perl script, and use the access token from Thunderbird’s log. Really inconvenient, but possible.

The OAuth2 token request is implemented in Oauth2.jsm. It’s possible to set a breakpoint in this module through Tools > Developer Tools > Developer Toolbox, and once it opens (after requesting permission for an external connection), go to the debugger.

Find Oauth2.jsm in the sources pane to the left (of the Debugger tab), under resource:// modules > sessionstore. Add a breakpoint in requestAccessToken() so that the clientID and consumerSecret properties can be revealed.

Sending a patch from Thunderbird directly

This is a really bad idea. But if you have Thunderbird, and need to send a patch right now, this is a quick, dirty and somewhat dangerous procedure for doing that.

Why is it dangerous? Because at some point, it’s easy to pick “Send now” instead of “Send later”, and boom, a junk patch is mailed to the whole world.

The problem with Thunderbird is that it makes small changes to the patch’s body. So to work around this, there’s a really silly procedure. I used it once, and I’m not proud of that.

So here we go.

First, a very simple script that outputs the patch mail into a file. Say that I called it dumpit (should be executable, of course):


#!/bin/sh
cat > /home/eli/Desktop/git-send-email.eml

Then change ~/.gitconfig, so it reads something like this in the [sendemail] section:

        from = mail.username@gmail.com
        smtpServer = /home/eli/Desktop/dumpit

So basically it uses the silly script as a mail server, and the content goes out to a plain file.

Then run git send-email as usual. The result is the file git-send-email.eml.

And now comes the part of making Thunderbird send it.

  • Close Thunderbird. All windows.
  • Change directory to where Thunderbird keeps its profile files, under Mail/Local Folders
  • Remove “Unsent Messages” and “Unsent Messages.msf”
  • Open Thunderbird again
  • Inside Thunderbird, go to Hamburger Icon > File > Open > Saved Message… and select git-send-email.eml. The email message should appear.
  • Right-Click somewhere in the message’s body, and pick Edit as New Message…
  • Don’t send this message as is! It’s completely messed up. In particular, there are some indentations in the patch itself, which renders it useless.
  • Instead, pick File > Send Later.
  • Once again, close Thunderbird. All windows.
  • Remove “Unsent Messages.msf” (only)
  • Edit “Unsent Messages” as follows: Everything under the “Content-Transfer-Encoding: 7bit” part is the mail’s body. So remove the “From:” line after it, and paste the email’s body from git-send-email.eml instead.
  • Note that there are normally two blank lines after the mail’s body. Retain them.
  • Open Thunderbird again. Verify that those indentations are away.
  • Look at the mail inside Outbox, and verify that it’s OK now. These are the three things to look for in particular:
    • The “From:” part at the beginning of the message is gone.
    • At the end of the message, there’s a “--” and git’s version number. These should be on separate lines.
    • Look at the mail’s source. The “+” and “-” signs of the diffs must not be indented.
  • If all is fine, right-click Outbox, and pick “Send unsent messages”. And hope for the best.

Are you sure you want to do this?

Android: Compiling and running a Java command-line utility

This is a failure

This post is a messy collection of things I wrote down as I tried to make a simple Java command-line (as in adb shell) utility for making a change in the device’s config settings. It’s one of several attempts I’ve made to stop the automatic hibernation of unused apps, as discussed in this other post of mine.

Except for a lot of conclusions about what is impossible, this post shows how to compile and run a plain Java program that can access Android’s API functions. But be warned: This is most likely not very useful, because the better part of that API can’t be used. More on this below.

So I did manage to get the program running. It’s just that I learned that it’s useless.

So if you’re reading this because you want to make a quick and dirty utility for Android, my message is: Don’t. Fetch the source code of some boilerplate app somewhere, and take it from there.

Let’s be honest: Stopping the hibernation is by itself not reason enough for all the efforts I’ve put in, but it’s a good way to get acquainted with Android in a way that will hopefully help me to solve issues in the future.

And this is a good place to mention that neither Java nor Android is what I do for a living, so no wonder I get into nonsense.

I am however well-versed with Linux machines in general, which is why I’m inclined towards plain command-line programs. Which turns out to be a bad idea with Android, as evident from below.

The strategies

As mentioned in that post, the device’s config settings are quite apparently stored in an XML file, which isn’t really an XML file, but rather a file in ABX format, which is some special Android thing with currently no reliable toolset for manipulation.

Android’s own “settings” command-line utility allows manipulation of these settings, but specifically not the “config” namespace. So if the built-in utility won’t do that for me, how about writing a little thingy of my own?

Strategy #1 was to use the API methods of android.provider.Settings to change the setting values directly. But alas, the methods that manipulate the “config” namespace are not exposed at the API level. So they are accessible only from within Android’s core classes. Once again, Mr. Android tells me not to touch “config”.

Strategy #2 was to follow the SettingsService class. What this service actually does is the work requested by the “settings” command-line utility. More on that below. Anyhow, the trick is that the actual access to the settings database (XML / ABX files, actually) is through the SettingsProvider class, which extends ContentProvider.

OK, so a quick word on Android Content Providers. This is an Android-specific IPC API that allows Android components (including apps) to expose information to other components, with an SQL-like set of methods. So the Content Provider is expected to have some kind of database with row/column tables, just like good old SQL. The most common use for this is the contact list (as in a phonebook).

The side that wants to access the data calls getContentResolver() with a URI that represents the requested database, and the ContentResolver object that is returned has methods like insert(), query(), update(), and most interestingly, call(). So by using the methods of the ContentResolver object, the requests are passed on to the Content Provider on the other side, which fulfills the request and conveys the result.

So I thought I should connect with the SettingsProvider as a Content Provider, and use the same calls as in SettingService. The URI of this Content Provider isn’t given in SettingService, because it’s apparently given to the object constructor directly. A wild guess, based upon what I found in the dumpsys output, is “content://com.android.providers.settings/.SettingsProvider”. Based upon the Settings.java it’s more like just “content://settings”. Never got to try this however.

The reason I never got that far is that getContentResolver() is a method of the Context class. And that’s not a small technicality. Because a command line program of the sort I was playing around with doesn’t have a context. That’s something that apps and services have. There’s no way to create one ad-hoc, as far as I know. In the absence of a Context object, there’s no way to call getContentResolver(), and that means that there’s no access to the Content Provider. Game over.

This obstacle is probably not a coincidence either: In order to access data, the system wants to know who you are. A plain Java command-line utility has no such identity. In the case of the “config” namespace, it’s even more evident: Settings’ plain API doesn’t expose it, and the Content Provider API wants to know who fiddles with these variables.

The conclusion is hence that odds are that any valuable and interesting functionality will be impossible from a simple command-line utility. For this to have a chance to work, there must be an app or a service, which is what I wanted to avoid.

How does “settings” run

The “settings” command is in fact /system/bin/settings which consists of the following:

cmd settings "$@"

So what is “cmd”? Let’s just try:

$ adb shell cmd
cmd: No service specified; use -l to list all running services. Use -w to start and wait for a service.

And using “cmd -l” indeed lists “settings” as a service.

Same goes for “pm”, but with the “package” service instead.

So these important utilities don’t perform the action themselves. A bit like Linux’ systemd commands, they talk with a service instead. The rationale is probably the same: Not to expose the API that can touch the sensitive stuff, so only a service can do that.

Not directly related, but dumpsys is /system/bin/dumpsys, which is an ELF:

dumpsys: ELF 64-bit LSB shared object, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /system/, BuildID[md5/uuid]=eb94567e9085216f43594a212afd2160, stripped

Installing the minimal SDK for compilation

I downloaded commandlinetools-linux-8512546_latest.zip from Android Studio’s download page. Android Studio itself can be downloaded on the same page, but I’m not much of a fan of IDEs, and what I want to do is small and non-standard (Gradle is often mentioned in the context of building from the command line, but that’s for full-blown apps).

Unzip the file, and move the lib and bin directories into the following directory structure:

├── sdk
│   └── cmdline-tools
│       └── latest
│           ├── bin
│           └── lib

Or sdkmanager will complain that

Error: Could not determine SDK root.
Error: Either specify it explicitly with --sdk_root= or move this package into its expected location: <sdk>/cmdline-tools/latest/

A little silly obstacle to prepare you for what’s ahead.
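In shell terms, assuming the zip was extracted into ./cmdline-tools in the current directory, the rearrangement is:

```shell
# Create the layout that sdkmanager expects and move the tools into it:
mkdir -p sdk/cmdline-tools/latest
if [ -d cmdline-tools/bin ]; then
  mv cmdline-tools/bin cmdline-tools/lib sdk/cmdline-tools/latest/
fi
```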

Change directory to sdk/cmdline-tools/latest and get a list of installable packages:

$ bin/sdkmanager --list

And then, to download the build tools, go something like:

$ bin/sdkmanager 'build-tools;28.0.0'

Note that the exact version is specified (they were all listed before) and that the single-quote is required because of the semicolon. Expect a long download.

Remember that old build tools are better if you don’t need the features of the new ones. Generally speaking, old tools build things that are compatible to a wider range of targets.

Congrats, the sdk/ directory now consumes 1.1 GB, most of which is under emulator/ (just delete that directory).

dx is in build-tools/28.0.0/.

Next up, obtain android.jar, which contains the API stubs. This is needed by javac, or else it refuses to import Android libraries. First, figure out which API level I need. My phone runs Android 12, so I’ll go for API level 31, according to the list at the top of this page.


$ bin/sdkmanager 'platforms;android-31'

The SDK just grew to 1.3 GB. Nice. The important part is platforms/android-31/android.jar, and it’s there.

Like any jar file, it’s possible to peek inside with

$ jar xvf android.jar

and that allows seeing what is exposed in each class with javap, e.g. to see what android.provider.Settings exposes:

javap android/provider/Settings.class

Note that classes defined inside classes are put in separate .class files, e.g. Settings$System.class. The first line in javap’s output says which .java file the .class file was compiled from.

Compiling a command-line utility for Android

You know that this is probably useless, right? You’ve read the introduction above?

So let’s take this simple program as hello.java:

import java.io.*;
public class hello {
    public static void main(String[] args) {
        PrintStream p = java.lang.System.out; // Keep it short

        p.println("Hello, world");
    }
}

The only thing worth mentioning is that p is used instead of java.lang.System.out. Just System.out was ambiguous in this setting, so it has to be the fully qualified form.

Compile with

$ ./compile.sh

where compile.sh is

set -e

TARGET=hello
JAR=/path/to/sdk/platforms/android-31/android.jar  # Adjust to where the SDK went
DX=/path/to/sdk/build-tools/28.0.0/dx              # Likewise

rm -f $TARGET.class $TARGET.dex
javac -source 1.7 -target 1.7 -bootclasspath $JAR $TARGET.java
$DX --dex --output=$TARGET.dex $TARGET.class

The -source 1.7 -target 1.7 thing in javac’s arguments is to create Java 7 bytecode. Which is the reason for this warning, which can be ignored safely:

warning: [options] bootstrap class path not set in conjunction with -source 7

This is a good time to mention that Java is deprecated, and Android embraces Kotlin. As if they didn’t learn the lesson with those .NET languages.

If everything goes fine, send it to the phone (delme-tmp should be created first, of course):

$ adb push hello.dex /storage/emulated/0/delme-tmp/

Execution: Change directory to /storage/emulated/0/delme-tmp and go

$ dalvikvm -Djava.class.path=./hello.dex hello
Hello, world

Yey. It works. But that’s hardly worth anything, is it? How about using some Android API? So let’s change hello.java to this, the most basic thing I could think of (and in fact, I failed to come up with anything to do that didn’t somehow require a context):

import java.io.*;
import android.os.Build;

public class hello {
    public static void main(String[] args) {
        PrintStream p = java.lang.System.out; // Keep it short

        String manufacturer = Build.MANUFACTURER;

        p.println("Manufacturer is " + manufacturer);
    }
}

But that gave

$ dalvikvm -Djava.class.path=./hello.dex hello
Exception in thread "main" java.lang.UnsatisfiedLinkError: No implementation found for java.lang.String android.os.SystemProperties.native_get(java.lang.String, java.lang.String) (tried Java_android_os_SystemProperties_native_1get and Java_android_os_SystemProperties_native_1get__Ljava_lang_String_2Ljava_lang_String_2)
	at android.os.SystemProperties.native_get(Native Method)
	at android.os.SystemProperties.get(SystemProperties.java:165)
	at android.os.Build.getString(Build.java:1434)
	at android.os.Build.<clinit>(Build.java:54)
	at hello.main(hello.java:8)

So compiling against the libraries is fine, but actually getting it to do something is a different story. There is probably a solution for this, but given the prospects of getting something useful done with this method, I didn’t bother going any further.

Sources on compilation:

  • See the last answer on this page on how to compile something that runs with DalvikVM.
  • See this page on how to compile a command-line program, albeit with a shell wrapper for “app_process”. This page also mentions that both dalvikvm as well as app_process lack context.

General notes

Most important: There’s Android Code Search for looking up stuff in the code base.

The sources for the Android core functions can be fetched with

git clone https://github.com/aosp-mirror/platform_frameworks_base.git

It’s 4.6 GB, so it takes quite some time to download.

It’s a good idea to check out a tag that corresponds to the same API as in the android.jar you’re working with. For the sake of seeing the actual implementation of the API functions, as well as how they’re really used, when the API was in that stage. I haven’t found a clear-cut way to connect between the API level and a tag in the git repo, so I went by the dates of the commits.

  • The root of the core classes is at platform_frameworks_base/core/java
  • There’s a direct connection between the class name and the directory it resides under.
  • The package directive at the beginning of a Java file is just a way to allow wildcards for importing several classes at once.
  • Command-line utilities tend to extend the ShellCommand class. Looking for this is a good way to find the utilities and test programs in the source code.
  • However, a common way to implement a command-line utility is through the “cmd” executable, which invokes the onShellCommand() public method of a service.
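As a toy illustration of the class-name-to-directory correspondence mentioned above (plain Python, not part of any Android tooling), mapping a fully qualified class name to its source path is just string juggling:

```python
# Toy illustration: map a fully qualified Java class name to its expected
# source file under the AOSP frameworks/base tree. The root directory below
# is the one mentioned above; the function itself is mere string juggling.

SRC_ROOT = "platform_frameworks_base/core/java"

def class_to_path(fqcn: str) -> str:
    """Turn e.g. 'android.os.Build' into the path of its .java file."""
    return SRC_ROOT + "/" + fqcn.replace(".", "/") + ".java"

print(class_to_path("android.os.Build"))
# platform_frameworks_base/core/java/android/os/Build.java
```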

Dissection notes

Random pieces of info as I went along (towards failure, that is):

  • setSyncDisabledMode() is defined in core/java/android/provider/Settings.java, inside the Config subclass (search for “class Config”) in the file.
  • The command-line utility “settings” is implemented in packages/SettingsProvider/src/com/android/providers/settings/SettingsService.java.
  • The Settings functionality is implemented in packages/SettingsProvider/src/com/android/providers/settings/SettingsProvider.java. The most interesting part is the list of methods it supports, under the “call” method, which includes constants such as Settings.CALL_METHOD_LIST_GLOBAL (accessed by the “settings” utility) but also Settings.CALL_METHOD_LIST_CONFIG (which isn’t accessed by the same utility).
  • The names of the XML files are listed in packages/SettingsProvider/src/com/android/providers/settings/SettingsProvider.java (e.g. SETTINGS_FILE_CONFIG = “settings_config.xml”).

It’s quite apparent that the command-line utility (SettingsService.java) uses the public method calls (e.g. Settings.CALL_METHOD_GET_SYSTEM), and it doesn’t seem like there’s another way to access the registry (i.e. the XML files), because each such call invokes a private method that implements it (getSystemSetting() in this case).

Android 12: Turning off revocation of permissions of unused apps

No solution, yet

This post summarizes my failed attempt to get rid of the said nuisance. Actually, I was successful, but the success was short-lived: The fix that I suggest here gets reverted by the system soon enough.

I expect to get back to this, in particular if I find a way to do any of the following three:

  • Manipulate an ABX file (that’s Android’s compressed XML format)
  • Permanently disable a service.
  • Write a utility that changes the required setting by virtue of Android’s API. A failed attempt to write a command-line utility is described here.

If anyone knows how to do this, beyond anything I’ve written here, please comment below.

Note Gene Cash’s suggestion below in the comments, which is based upon turning off a specific permission for each app. It’s an interesting direction, however it requires revoking a permission for each new app that we install.

So here comes the long story.

“For your protection”

Yet another “feature” for “my own good”: If an app isn’t used for 90 days, Android 12 automatically revokes the app’s permissions and deletes its temporary data. Which sounds harmless, but if that app involves a registration, well, you’ll have to do that again when you launch it next time. Which can be extremely annoying if you need a taxi quickly because of an issue with your car. And of course you don’t need the taxi app so often…

The common term for this brilliant feature is “removing permissions from unused apps”, but the internal term is “hibernation”.

I guess the people who thought up this feature also throw away everything in their homes that they haven’t used for 90 days.

Anyhow, I like my smartphone repeatable. If I open an app after a long time, I want it to work like it did before. It’s a weird expectation in these days of constant upgrades, I know, but that’s what I want.

It seems like there’s a simple solution. If you have adb access to your phone, that is (or some other type of shell). Or maybe someone will come up with an app that reverts all the silly stuff that new versions of Android come up with. Just be sure to run it every 90 days.

I should mention that it’s possible to opt out of this “feature” for each app individually: Go to each app’s configuration, and turn off “Remove permissions and free up space” under “Unused apps”. But I didn’t find a way to do this globally. Probably because we’re not considered responsible enough to make such a decision.

The short-lived fix

Hibernation of apps is officially documented on this page.

According to that page, the timeout before hibernation is set as a parameter, auto_revoke_unused_threshold_millis2. So

$ adb shell device_config get permissions auto_revoke_unused_threshold_millis2

This returned 7776000000. That many milliseconds is 7776000 seconds, which equals 90 days. Looks like it, doesn’t it?

So the apparent workaround is to make the hibernation time very long, so it never happens. Let’s say 20 times 365 days? Good enough? That’s 630720000000 milliseconds. Yes, this number exceeds 2³², but so does the original threshold. It’s a 64-bit machine, after all.
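The arithmetic is easy to double-check; this is plain Python, nothing Android-specific:

```python
# Sanity-check the numbers quoted above.
MS_PER_DAY = 24 * 3600 * 1000

default_threshold = 7776000000         # value read back from device_config
print(default_threshold / MS_PER_DAY)  # → 90.0 (days)

twenty_years = 20 * 365 * MS_PER_DAY
print(twenty_years)                    # → 630720000000

# Both values exceed 2**32, so a 32-bit field couldn't hold them anyway.
print(default_threshold > 2**32, twenty_years > 2**32)  # → True True
```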

So just go (this doesn’t require rooting the phone):

$ adb shell device_config put permissions auto_revoke_unused_threshold_millis2 630720000000

And then verify that the new number is in effect:

$ adb shell device_config get permissions auto_revoke_unused_threshold_millis2

Yey. Does it solve the problem? Nope, that didn’t work. A couple of weeks later I got the same notification, and this time a couple of other apps were hibernated.

Taking a closer look, it turned out that the value of auto_revoke_unused_threshold_millis2 had returned to 7776000000 (after a couple of weeks). How and why, I don’t know. I tried to change it back to the desired value and rebooted, and the updated value survived that. So it seemed like values set with device_config are persistent. But another reboot later, it was back to 7776000000. So I don’t know what’s going on.

Maybe it’s because I didn’t change the parameter with the same name, but under the app_hibernation namespace?

$ adb shell device_config get app_hibernation auto_revoke_unused_threshold_millis2

And maybe the easiest way is to turn off hibernation altogether. There happens to be a parameter with an interesting name:

$ adb shell device_config get app_hibernation app_hibernation_enabled

Googling for this parameter name, I found absolutely nothing. But if it does what its name implies, maybe this will help?

$ adb shell device_config put app_hibernation app_hibernation_enabled false

I can’t even tell if this would help. The parameter got changed back soon enough to “true”.

Other ideas

Truth be told, I don’t really know why and when the values of these parameters change back to their original values.

But here are a few clues: device_config has a “set_sync_disabled_for_tests” option, so I suppose

adb shell device_config set_sync_disabled_for_tests persistent

could prevent the configuration from resetting all the time, but I didn’t try that.

It more than appears that the source of the default values for these parameters is a permanent settings file, /data/system/users/0/settings_config.xml (needless to say, root access is required to access the 0/ directory). There are several similar .xml files in the 0/ directory, as listed in the definition of the class that processes these XML files, SettingsRegistry.

But here’s the crux: Despite the .xml suffix, this isn’t an XML file, but ABX, which is a format that is new in Android 12. It’s a condensed form of XML, which can’t be edited with just a text editor. Since this format is new and specific, there’s currently no reliable toolkit for making pinpoint changes. The only thing I found today (October 2022) is described below, along with its shortcomings.

An entry like this in settings_config.xml:

<setting id="3247" name="app_hibernation/app_hibernation_enabled" package="com.google.android.gms" preserve_in_restore="true" value="true" />

appears as a row like this in the output of “dumpsys settings”:

_id:3247 name:app_hibernation/app_hibernation_enabled pkg:com.google.android.gms value:true

In other words, the ABX file is clearly maintained by the “settings” service (this is what the second argument to dumpsys means). There’s a “settings” command-line utility too, but it doesn’t give access to this attribute (I’m not sure how related it is to the “settings” service).
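If you want to script around the output of “dumpsys settings”, a row of the form just shown can be picked apart with a regular expression. This is a hypothetical helper, assuming the field layout follows the sample row above:

```python
import re

# Parse one row of "dumpsys settings" output into a dict. The pattern is
# derived from the sample row shown above; it's an assumption, not a spec.
ROW_RE = re.compile(
    r"_id:(?P<id>\d+)\s+name:(?P<name>\S+)\s+pkg:(?P<pkg>\S+)\s+value:(?P<value>\S+)"
)

def parse_row(line: str) -> dict:
    """Return the row's fields as a dict, or an empty dict on no match."""
    m = ROW_RE.search(line)
    return m.groupdict() if m else {}

row = parse_row(
    "_id:3247 name:app_hibernation/app_hibernation_enabled "
    "pkg:com.google.android.gms value:true"
)
print(row["name"], row["value"])
# app_hibernation/app_hibernation_enabled true
```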

An alternative approach could be to turn off the Hibernation service itself.

The services can be listed in Settings > System > Developer Options > Running Services, but it seems like these are only services that are linked with apps. So not there.

On the other hand, the service can be found with dumpsys, and it’s called com.android.server.apphibernation.AppHibernationService. Unfortunately, it can’t be disabled with pm disable-user as suggested on this page, because the argument to this command must be a package. The service’s name isn’t recognized by pm for this purpose.

I have no idea how to just disable a service. I’m not even sure it’s possible.

Additional trivia

In case this doesn’t work, here are some random pieces of info that might help. First of all, there’s a service called com.android.server.apphibernation.AppHibernationService. So maybe disable it somehow?

Another thing is that by looking at the output of “adb shell dumpsys package”, it appears that there’s an Intent action called android.intent.action.MANAGE_UNUSED_APPS, which is handled by com.google.android.permissioncontroller/com.android.permissioncontroller.permission.ui.ManagePermissionsActivity. Maybe disable it, in the spirit of this post? It’s not clear if that’s the Activity that disables the app, or the one that allows enabling it back.

And then we have this snippet from the output of “adb shell dumpsys”:

logs: "1663488654709 HibernationPolicy:i unused app com.google.android.apps.podcasts - last used on Thu Jan 01 02:00:00 GMT+02:00 1970 "
logs: "1663488654710 HibernationPolicy:i unused app com.google.android.apps.authenticator2 - last used on Thu Jan 01 02:00:00 GMT+02:00 1970 "
logs: "1663488654711 HibernationPolicy:i unused app com.gettaxi.android - last used on Thu Jan 01 02:00:00 GMT+02:00 1970 "
logs: " Globally hibernating apps [com.google.android.apps.podcasts, com.google.android.apps.authenticator2, com.gettaxi.android] "

The sources for the hibernation code:

git clone https://android.googlesource.com/platform/packages/modules/Permission

and the code is under PermissionController/src/com/android/permissioncontroller/. In particular, look at permission/utils/Utils.java for the list of parameters and their internal representation.

The source for the device_config utility is here. The implementation of DeviceConfig’s core methods is here.

Just in case it helps someone (future self in particular).

ABX manipulation

The only reference I found for ABX was in CCL Solution Group’s website. The said post describes the format, and also points at a git repository, which includes a couple of utilities for converting from XML to ABX and back.

The problem with this is that when I tried it on my settings_config.xml file, the back-and-forth conversion didn’t end up with exactly the same binary. And I’m not sure if the difference matters. So if I can’t be sure that everything that isn’t related to the little change I want to make stays the same, I prefer not to touch it at all.

I suppose there will be better tools in the future (or maybe I’ll write something like that myself, but beware, I will do it in Perl).

So this is how to work with the existing tool:

Downloading the pair of utilities from the git repo:

$ git clone https://github.com/cclgroupltd/android-bits

The last commit in the repository that I refer to below is 7031d0b, made in January 2022.

Change directory to ccl_abx, copy the desired ABX (.xml) file into there, and go

$ python3.6 ccl_abx.py settings_config.xml -mr > opened.xml

(python3.6 is, as the name implies, Python 3.6 on my machine. There are so many Python versions around, and the script didn’t work on earlier ones.)

This creates an XML file with the content of the ABX file.

Now, manually edit opened.xml and remove the <root> and </root> tags at the beginning and end. Not clear why they were put there to begin with.

As for back-conversion to ABX, here comes some Java.

ABX manipulation II

This is an update on 27.2.23, following a comment I got to this post: It turns out that there are utilities for ABX conversion on the phone itself, which are wrappers to a Java utility:

$ cat /system/bin/abx2xml
export CLASSPATH=/system/framework/abx.jar
exec app_process /system/bin com.android.commands.abx.Abx "$0" "$@"

$ cat /system/bin/xml2abx
export CLASSPATH=/system/framework/abx.jar
exec app_process /system/bin com.android.commands.abx.Abx "$0" "$@"

And they work:

$ abx2xml settings_config.xml settings_config.xml.unabx
$ xml2abx settings_config.xml.unabx settings_config.xml.reabx
$ abx2xml settings_config.xml.reabx settings_config.xml.todiff

Unfortunately, settings_config.xml.reabx didn’t end up identical to settings_config.xml. Comparing the outputs of the UNIX “strings” utility, I found a “true3” string in each setting in the .reabx file, which wasn’t in the original.

Even weirder, the .unabx wasn’t identical to .todiff. But this made them identical (on a regular Linux machine):

$ dos2unix settings_config.xml.unabx

Say what? The ABX was converted into a text file with CR-LF line endings the first time, and with only LF the second? I guess the answer to this is somewhere in the Java code.
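This CR-LF effect is easy to reproduce without the phone. The sketch below (plain Python, with made-up file content) shows why two files with identical text but different line terminators compare as different byte-wise, and why normalizing the terminators, which is all dos2unix does, makes them identical:

```python
# Identical text, different line terminators: this is why .unabx and .todiff
# compared as different. dos2unix effectively performs the replace below.
# The content itself is made up for the demonstration.
crlf_version = b'<settings version="-1">\r\n<setting />\r\n'
lf_version   = b'<settings version="-1">\n<setting />\n'

print(crlf_version == lf_version)                          # → False
print(crlf_version.replace(b"\r\n", b"\n") == lf_version)  # → True
```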

But maybe it’s OK. I’ll give this a go sometime.

Ugh. Java stuff

Java is definitely not my cup of tea, nor my expertise. So don’t learn anything from me on how to work with Java. The only thing I can say is that it worked.

First, I installed the Java compiler (Linux Mint 19):

# apt install default-jdk

Then change directory to makeabx/src.

Compile everything (this will probably make Java-savvy guys laugh, but hey, it worked and created .class files in the respective directories):

$ javac $(find . -iname \*.java)

There will be a few notes about deprecated API, but that’s fine.

Then run the program with

$ java com.ccl.abxmaker.Main path/to/opened.xml

If this fails, saying something like “Could not find or load main class src.com.ccl.abxmaker.Main”, it’s because I tried to run the program from the wrong directory. Even though shell’s autocomplete filled in the class nicely, it’s still wrong. This has to be done from the src/ directory (at least in the way I clumsily compiled this stuff).

This creates a file with the same name and path as the input, but adds an .abx suffix.

The ABX file that I obtained from back-conversion wasn’t identical to the original, however: For example, the first XML element is <settings version="-1">. In the original ABX, it’s encoded as a 32-bit integer (0xffffffff), but in the opened.xml.abx it’s encoded as a string ("-1").

Does it even matter? I don’t know. If the pair of utilities doesn’t pass this kind of regression test, am I ready to inject the result into my phone’s configuration system? Well, no.

Android 12: Preventing System Update (and that nagging popup window)

Android is not my profession

I just want to say this first: I do completely different things for a living. I’m not an Android expert, the last time I wrote code in Java was 20 years ago, and what’s written below is really not professional information nor advice. This is the stuff I found out while trying to get my phone under control.

When the phone doesn’t take “no” for an answer

This is an increasing phenomenon: Software treating us like kids, not asking what to do, not suggesting, but telling us what it’s about to do. Or just does it.

So let’s assume for a second that we’re grown-ups who can make our own decisions, and one of those decisions was not to update our phone’s software. Let’s leave aside the question of whether this is clever or not. It’s about who has control over the piece of electronics that you consider yours.

For the record, I would have been perfectly fine with having security patches applied frequently. The problem with updates is that there are “improvements” along with the security fixes, so there’s always something that suddenly doesn’t work after them. Because of a bug that will be fixed on the next update, of course.

The bad news is that to really stop the updates, you need a rooted phone, and if you haven’t done that yet, odds are that the price for doing that (i.e. wiping the phone completely) is much worse than an update itself.

Plus you need adb installed, and you need to know how to work with it (I discuss adb briefly in this post). Maybe that will change in the future, if someone writes a simple app performing the necessary operations. Or maybe this will be integrated into the app that assists with rooting?

In case you want to get right to the point, jump to “Disabling activities” below. That’s where it says what to do.

Everything said below relates to Android 12, build SQ1D.220105.007 on a Google Pixel 6 Pro (and it quite appears like it’s going to stay that way).

That nagging popup window

After rooting the phone, there are two immediate actions that are supposed to turn off automatic system updates. Not that it helped much in my case, but here they are:

  • Under Developer Options (which are enabled for unlocking anyhow), turn off “Automatic system updates”.
  • Go to Settings > Notifications > App Notifications > Google Play Settings and turn off System Update (to silence the “System update paused” notification)

And after some time, the famous “Install update to keep device secure” popup appears. And again, and again:

Screenshot of popup: "Install update to keep device secure"

Which is a nuisance, and there’s always the risk of mistakenly tapping “Install now”. But then it escalated to “Install now to control when your device updates”:

Screenshot of popup: "Install now to control when your device updates"

This is a simple ultimatum, made by the machine I’m supposed to own: Either you do the update yourself, or I do it anyhow. And if that turns out to blow up your phone bill, your problem. How cute.

Who’s behind the popup window?

The first step is to figure out what package initiates the request for an update.

But even before that, it’s necessary to understand how Intents and Activities work together to put things on the screen.

This page explains Activities and their relations with Intents, and this page shows a simple code example on this matter.

Activities are (primarily?) foreground tasks that represent a piece of GUI interaction. They are visible on the screen, and they get paused as soon as the user chooses something else on the GUI.

Then there’s a thing called an Intent in Android, which is an abstraction for performing a certain operation. This allows any software component to ask for a certain task to be completed, and that translates into an Activity. So the idea is that any software component requests an operation as an Intent, and some other software component (or itself) listens for these requests, and carries out the matching Activity in response.

For example, WhatsApp (like many other apps) has an Intent published for sharing files (ACTION_SEND), so when any application wants to share something, it opens a menu of candidates for sharing (that is, all Intents published for that purpose), and when the user selects WhatsApp to share with, the application calls the Activity that is registered by WhatsApp for that Intent. Which file to share is given as an “extra” to the Intent (which simply means that the call has an argument). Note that what actually happens is that WhatsApp takes over the screen completely, which is exactly the idea behind Activities.
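As a toy model of this mechanism (all names invented for illustration; real Android dispatch is of course far more elaborate), the Intent/Activity relation boils down to a registry that maps action strings to handlers:

```python
# Toy model of Intent dispatch: apps register handlers ("Activities") for
# action strings ("Intents"), and a caller asks the registry to resolve an
# action. All names here are invented for illustration only.
registry = {}

def register_activity(action, handler):
    registry.setdefault(action, []).append(handler)

def start_activity(action, **extras):
    handlers = registry.get(action, [])
    if not handlers:
        raise LookupError(f"No Activity found to handle {action}")
    # A real system would let the user pick from a chooser menu;
    # here we just take the first registered handler.
    return handlers[0](extras)

register_activity("ACTION_SEND", lambda extras: f"sharing {extras['file']}")
print(start_activity("ACTION_SEND", file="photo.jpg"))  # → sharing photo.jpg
```

Note that asking for an unregistered action fails with a lookup error, which is essentially the ActivityNotFoundException that shows up later in this post.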

Now some hands-on. When the popup appears, go

$ adb shell dumpsys window windows > dump.txt

That produces a lot of output, but there was this segment:

  Window #10 Window{28d0746 u0 com.google.android.gms/com.google.android.gms.update.phone.PopupDialog}:
    mDisplayId=0 rootTaskId=26 mSession=Session{55a992b 9996:u0a10146} mClient=android.os.BinderProxy@95ec721
    mOwnerUid=10146 showForAllUsers=false package=com.google.android.gms appop=NONE
    mAttrs={(0,0)(wrapxwrap) sim={adjust=pan forwardNavigation} ty=BASE_APPLICATION fmt=TRANSLUCENT wanim=0x10302f4 surfaceInsets=Rect(112, 112 - 112, 112)
    Requested w=1120 h=1589 mLayoutSeq=4826
    mBaseLayer=21000 mSubLayer=0    mToken=ActivityRecord{577992f u0 com.google.android.gms/.update.phone.PopupDialog t26}
    mActivityRecord=ActivityRecord{577992f u0 com.google.android.gms/.update.phone.PopupDialog t26}
    mAppDied=false    drawnStateEvaluated=true    mightAffectAllDrawn=true
    mViewVisibility=0x0 mHaveFrame=true mObscured=false
    mGivenContentInsets=[0,0][0,0] mGivenVisibleInsets=[0,0][0,0]
    mFullConfiguration={1.0 425mcc1mnc [en_US,iw_IL,ar_PS] ldltr sw411dp w411dp h834dp 560dpi nrml long hdr widecg port night finger -keyb/v/h -nav/h winConfig={ mBounds=Rect(0, 0 - 1440, 3120) mAppBounds=Rect(0, 130 - 1440, 3064) mMaxBounds=Rect(0, 0 - 1440, 3120) mWindowingMode=fullscreen mDisplayWindowingMode=fullscreen mActivityType=standard mAlwaysOnTop=undefined mRotation=ROTATION_0} as.2 s.1 fontWeightAdjustment=0}
    mLastReportedConfiguration={1.0 425mcc1mnc [en_US,iw_IL,ar_PS] ldltr sw411dp w411dp h834dp 560dpi nrml long hdr widecg port night finger -keyb/v/h -nav/h winConfig={ mBounds=Rect(0, 0 - 1440, 3120) mAppBounds=Rect(0, 130 - 1440, 3064) mMaxBounds=Rect(0, 0 - 1440, 3120) mWindowingMode=fullscreen mDisplayWindowingMode=fullscreen mActivityType=standard mAlwaysOnTop=undefined mRotation=ROTATION_0} as.2 s.1 fontWeightAdjustment=0}
    mHasSurface=true isReadyForDisplay()=true mWindowRemovalAllowed=false
    Frames: containing=[0,145][1440,3064] parent=[0,145][1440,3064] display=[0,145][1440,3064]
    mFrame=[160,810][1280,2399] last=[160,810][1280,2399]
    WindowStateAnimator{9e9cbdf com.google.android.gms/com.google.android.gms.update.phone.PopupDialog}:
      Surface: shown=true layer=0 alpha=1.0 rect=(0.0,0.0)  transform=(1.0, 0.0, 0.0, 1.0)
      mDrawState=HAS_DRAWN       mLastHidden=false
      mEnterAnimationPending=false      mSystemDecorRect=[0,0][0,0]
    mForceSeamlesslyRotate=false seamlesslyRotate: pending=null finishedFrameNumber=0
    mDrawLock=WakeLock{a82daf5 held=false, refCount=5}

Also, in the output of

$ adb shell dumpsys > all.txt

there was a much more to-the-point section saying (pretty much at the beginning of this huge file):

Display 4619827677550801152 HWC layers:
 Layer name
           Z |  Window Type |  Comp Type |  Transform |   Disp Frame (LTRB) |          Source Crop (LTRB) |     Frame Rate (Explicit) (Seamlessness) [Focused]
 Wallpaper BBQ wrapper#0
  rel      0 |         2013 |     DEVICE |          0 |    0    0 1440 3120 |   65.0  142.0 1375.0 2978.0 |                                              [ ]
  rel      0 |            1 |     DEVICE |          0 |    0    0 1440 3120 |    0.0    0.0 1440.0 3120.0 |                                              [ ]
 Dim Layer for - WindowedMagnification:0:31#0
  rel     -1 |            0 |     DEVICE |          0 |    0    0 1440 3120 |    0.0    0.0    0.0    0.0 |                                              [ ]
  rel      0 |            1 |     DEVICE |          0 |   48  698 1392 2511 |    0.0    0.0 1344.0 1813.0 |                                              [*]
  rel      0 |         2000 |     DEVICE |          0 |    0    0 1440  145 |    0.0    0.0 1440.0  145.0 |                                              [ ]
  rel      0 |         2019 |     DEVICE |          0 |    0 2952 1440 3120 |    0.0    0.0 1440.0  168.0 |                                              [ ]
  rel      0 |         2024 |     DEVICE |          0 |    0    0 1440  176 |    0.0    0.0 1440.0  176.0 |                                              [ ]
  rel      0 |         2024 |     DEVICE |          0 |    0 2944 1440 3120 |    0.0    0.0 1440.0  176.0 |                                              [ ]

This is much better, because the window in focus is clearly marked. No need to guess.

Another place to look at is

$ adb shell dumpsys activity recents > recent.txt

Where it said:

  * Recent #0: Task{51b5cb8 #26 type=standard A=10146:com.google.android.gms U=0 visible=true mode=fullscreen translucent=true sz=1}
    userId=0 effectiveUid=u0a146 mCallingUid=u0a146 mUserSetupComplete=true mCallingPackage=com.google.android.gms mCallingFeatureId=com.google.android.gms.ota_base
    intent={act=android.settings.SYSTEM_UPDATE_COMPLETE flg=0x10848000 pkg=com.google.android.gms cmp=com.google.android.gms/.update.phone.PopupDialog}
    rootWasReset=false mNeverRelinquishIdentity=true mReuseTask=false mLockTaskAuth=LOCK_TASK_AUTH_PINNABLE
    Activities=[ActivityRecord{577992f u0 com.google.android.gms/.update.phone.PopupDialog t26}]
    askedCompatMode=false inRecents=true isAvailable=true
    taskId=26 rootTaskId=26
    mResizeMode=RESIZE_MODE_RESIZEABLE_VIA_SDK_VERSION mSupportsPictureInPicture=false isResizeable=true
    lastActiveTime=232790582 (inactive for 23s)

This is interesting, as it says which Intent and Activity stand behind the popup, just by asking what the last Activity requests were. Even more important, if the popup was dismissed or disappeared for any other reason, it can still be found here.

So no doubt, it’s com.google.android.gms that stands behind this popup. That’s Google Mobile Services, and it’s a package that is responsible for a whole lot. So disabling it is out of the question (and uninstalling it is impossible).

Under the section “ACTIVITY MANAGER PENDING INTENTS (dumpsys activity intents)” there was

    #8: PendingIntentRecord{50a35f1 com.google.android.gms/com.google.android.gms.ota_base broadcastIntent}
      uid=10146 packageName=com.google.android.gms featureId=com.google.android.gms.ota_base type=broadcastIntent flags=0x2000000
      requestIntent=act=com.google.android.chimera.IntentOperation.TARGETED_INTENT dat=chimeraio:/com.google.android.gms.chimera.GmsIntentOperationService/com.google.android.gms.update.INSTALL_UPDATE pkg=com.google.android.gms (has extras)
      sent=true canceled=false

which I suppose indicates that com.google.android.gms has requested to run its own .update.INSTALL_UPDATE at a later stage. In other words, this is the indication of the recurring request to run the INSTALL_UPDATE intent.

Disabling activities

The trick to disable the popup, as well as the update itself, is to disable certain Android Activities. This is inspired by this post in XDA forums.

First, find all activities in the system:

$ adb shell dumpsys package > dumpsys-package.txt

Then look for those specific to the package:

$ grep com.google.android.gms dumpsys-package.txt | less

Actually, one can narrow it down even more to those having the “.update.” substring:

$ grep com.google.android.gms dumpsys-package.txt | grep -i \\.update\\. | less

And eventually, disable what appear to be the relevant Activities (adb shell commands as root):

# pm disable com.google.android.gms/.update.SystemUpdateActivity
# pm disable com.google.android.gms/.update.SystemUpdateService
# pm disable com.google.android.gms/.update.SystemUpdateGcmTaskService

And possibly also the popup itself:

# pm disable com.google.android.gms/.update.phone.PopupDialog
# pm disable com.google.android.gms/.update.OtaSuggestionActivity

I’m not sure if all of these are necessary. The list might change across different versions of Android.

For each command, pm responds with a confirmation, e.g.

# pm disable com.google.android.gms/.update.SystemUpdateActivity
Component {com.google.android.gms/com.google.android.gms.update.SystemUpdateActivity} new state: disabled
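As a side note, the confirmation shows how pm expands the component shorthand: a class name starting with a dot gets the package name prepended. A sketch of that expansion in plain Python, just mimicking what is visible in pm’s output above:

```python
# Expand "pkg/.Class" shorthand into the full component name, mimicking the
# expansion visible in pm's confirmation message above.
def expand_component(component: str) -> str:
    pkg, cls = component.split("/", 1)
    return f"{pkg}/{pkg}{cls}" if cls.startswith(".") else component

print(expand_component("com.google.android.gms/.update.SystemUpdateActivity"))
# com.google.android.gms/com.google.android.gms.update.SystemUpdateActivity
```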

And then reboot (not sure it’s necessary, but often “disable” doesn’t mean “stop”).

Are the changes in effect? Make a dump of the package:

$ adb shell pm dump com.google.android.gms > gms-dump.txt

and search for e.g. SystemUpdateActivity in the file. These components should appear under “disabledComponents:”.

However, running “adb shell dumpsys activity intents”, it’s evident that the com.google.android.gms.update.INSTALL_UPDATE intent is still active. So this Intent will still be requested in the future. See below what happens with that.

So it’s quite clear that the popup can be disabled, but it’s less obvious what happens if the system wants to update itself when the relevant activity is disabled. Will it or will it not prevent the update? The proof is in the pudding.

So here’s the pudding

To begin with, the phone didn’t bug me again on updating, and neither has it done anything in that direction. Regardless (or not), there’s no problem updating all apps on the phone, and neither does it provoke any unwanted stuff. I’ve seen some speculations on the web, that System Update was somehow related to Google Play, and given my experience, I don’t think this is the case.

So disabling the Activities did the trick. It’s also possible to see exactly what happened by looking at the output of

$ adb shell logcat -d > all-log.txt

where it says this somewhere around the time it should have started messing around:

08-22 18:39:19.063  1461  1972 I ActivityTaskManager: START u0 {act=android.settings.SYSTEM_UPDATE_COMPLETE flg=0x10048000 pkg=com.google.android.gms (has extras)} from uid 10146
--------- beginning of crash
08-22 18:39:19.069  2838  7279 E AndroidRuntime: FATAL EXCEPTION: [com.google.android.gms.chimera.container.intentoperation.GmsIntentOperationChimeraService-Executor] idle
08-22 18:39:19.069  2838  7279 E AndroidRuntime: Process: com.google.android.gms, PID: 2838
08-22 18:39:19.069  2838  7279 E AndroidRuntime: android.content.ActivityNotFoundException: No Activity found to handle Intent { act=android.settings.SYSTEM_UPDATE_COMPLETE flg=0x10048000 pkg=com.google.android.gms (has extras) }
08-22 18:39:19.069  2838  7279 E AndroidRuntime: 	at android.app.Instrumentation.checkStartActivityResult(Instrumentation.java:2087)
08-22 18:39:19.069  2838  7279 E AndroidRuntime: 	at android.app.Instrumentation.execStartActivity(Instrumentation.java:1747)
08-22 18:39:19.069  2838  7279 E AndroidRuntime: 	at android.app.ContextImpl.startActivity(ContextImpl.java:1085)
08-22 18:39:19.069  2838  7279 E AndroidRuntime: 	at android.app.ContextImpl.startActivity(ContextImpl.java:1056)
08-22 18:39:19.069  2838  7279 E AndroidRuntime: 	at android.content.ContextWrapper.startActivity(ContextWrapper.java:411)
08-22 18:39:19.069  2838  7279 E AndroidRuntime: 	at com.google.android.gms.update.reminder.UpdateReminderDialogIntentOperation.a(:com.google.android.gms@222615044@22.26.15 (190400-461192076):67)
08-22 18:39:19.069  2838  7279 E AndroidRuntime: 	at com.google.android.gms.update.reminder.UpdateReminderDialogIntentOperation.onHandleIntent(:com.google.android.gms@222615044@22.26.15 (190400-461192076):10)
08-22 18:39:19.069  2838  7279 E AndroidRuntime: 	at com.google.android.chimera.IntentOperation.onHandleIntent(:com.google.android.gms@222615044@22.26.15 (190400-461192076):2)
08-22 18:39:19.069  2838  7279 E AndroidRuntime: 	at uzr.onHandleIntent(:com.google.android.gms@222615044@22.26.15 (190400-461192076):4)
08-22 18:39:19.069  2838  7279 E AndroidRuntime: 	at ffi.run(:com.google.android.gms@222615044@22.26.15 (190400-461192076):3)
08-22 18:39:19.069  2838  7279 E AndroidRuntime: 	at ffh.run(:com.google.android.gms@222615044@22.26.15 (190400-461192076):11)
08-22 18:39:19.069  2838  7279 E AndroidRuntime: 	at cfvf.run(:com.google.android.gms@222615044@22.26.15 (190400-461192076):2)
08-22 18:39:19.069  2838  7279 E AndroidRuntime: 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1137)
08-22 18:39:19.069  2838  7279 E AndroidRuntime: 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:637)
08-22 18:39:19.069  2838  7279 E AndroidRuntime: 	at java.lang.Thread.run(Thread.java:1012)
08-22 18:39:19.092  1461  4210 I DropBoxManagerService: add tag=system_app_crash isTagEnabled=true flags=0x2
08-22 18:39:19.094  1461  1552 W BroadcastQueue: Background execution not allowed: receiving Intent { act=android.intent.action.DROPBOX_ENTRY_ADDED flg=0x10 (has extras) } to com.google.android.gms/.stats.service.DropBoxEntryAddedReceiver
08-22 18:39:19.094  1461  1552 W BroadcastQueue: Background execution not allowed: receiving Intent { act=android.intent.action.DROPBOX_ENTRY_ADDED flg=0x10 (has extras) } to com.google.android.gms/.chimera.GmsIntentOperationService$PersistentTrustedReceiver
08-22 18:39:19.270  1461  4215 I ActivityManager: Process com.google.android.gms (pid 2838) has died: fg  SVC
08-22 18:39:19.271  1461  4215 W ActivityManager: Scheduling restart of crashed service com.google.android.gms/.chimera.GmsIntentOperationService in 1000ms for start-requested
08-22 18:39:20.273  1461  1552 D CompatibilityChangeReporter: Compat change id reported: 135634846; UID 10146; state: DISABLED
08-22 18:39:20.274  1461  1552 D CompatibilityChangeReporter: Compat change id reported: 177438394; UID 10146; state: DISABLED
08-22 18:39:20.274  1461  1552 D CompatibilityChangeReporter: Compat change id reported: 135772972; UID 10146; state: DISABLED
08-22 18:39:20.274  1461  1552 D CompatibilityChangeReporter: Compat change id reported: 135754954; UID 10146; state: ENABLED
08-22 18:39:20.275  1461  1553 D CompatibilityChangeReporter: Compat change id reported: 143937733; UID 10146; state: ENABLED
08-22 18:39:20.296  1461  1553 I ActivityManager: Start proc 7305:com.google.android.gms/u0a146 for service {com.google.android.gms/com.google.android.gms.chimera.GmsIntentOperationService}

Clearly, disabling the Activities made them ineligible for handling the SYSTEM_UPDATE_COMPLETE Intent, so an ActivityNotFoundException was thrown. Surprisingly enough, this exception wasn’t caught, so the com.google.android.gms process simply died and was quickly restarted by the system.

I also found these later on:

08-24 07:20:00.085 16126 25006 I SystemUpdate: [Execution,InstallationIntentOperation] Received intent: Intent { act=com.google.android.gms.update.INSTALL_UPDATE cat=[targeted_intent_op_prefix:.update.execution.InstallationIntentOperation] pkg=com.google.android.gms cmp=com.google.android.gms/.chimera.GmsIntentOperationService }.
08-24 07:20:00.114 16126 25006 W GmsTaskScheduler: com.google.android.gms.update.SystemUpdateGcmTaskService is not available. This may cause the task to be lost.
08-24 07:20:00.118 16126 25006 W GmsTaskScheduler: com.google.android.gms.update.SystemUpdateGcmTaskService is not available. This may cause the task to be lost.
08-24 07:20:00.119 16126 25006 W GmsTaskScheduler: com.google.android.gms.update.SystemUpdateGcmTaskService is not available. This may cause the task to be lost.

[ ... ]

08-24 07:20:00.198 16126 25006 I SystemUpdate: [Control,InstallationControl] Installation progress updated to (0x413, -1.000).
08-24 07:20:00.273 16126 25006 I SystemUpdate: [Control,ChimeraGcmTaskService] Scheduling task: DeviceIdle.
08-24 07:20:00.273 16126 25006 W GmsTaskScheduler: com.google.android.gms.update.SystemUpdateGcmTaskService is not available. This may cause the task to be lost.
08-24 07:20:00.274 16126 25006 I SystemUpdate: [Execution,ExecutionManager] Action streaming-apply executed for 0.17 seconds.
08-24 07:20:00.283 16126 25006 I SystemUpdate: [Execution,ExecutionManager] Action fixed-delay-execution executed for 0.01 seconds.

Bottom line: Disabling the activity causes the GMS service to die and restart every time it thinks about updating the system and/or nagging about it. So it won’t happen. Mission accomplished.

Actually, I’m not sure it’s possible to update the phone now, even if I wanted to.

It wasn’t obvious that this would work: disabling the activity could have canceled just the GUI part of the update process, leaving the rest running. But apparently it stopped the whole thing.

It would have been more elegant to add another package that supplies a replacement Activity for the SYSTEM_UPDATE_COMPLETE Intent, running instead of the original, disabled one. This would have avoided the recurring crash. I don’t know if this is possible, though.

Or even better, to disable the recurring call to this Intent. Ideas are welcome.

Or did it really work?

A couple of months later, I noticed something that looked like a huge download. Running “top” in “adb shell” revealed that update_engine consumed some 60% CPU. Using logcat as above revealed several entries like these:

10-29 20:43:38.677 18745 24156 I SystemUpdate: [Control,InstallationControl] Installation progress updated to (0x111, 0.704).
10-29 20:43:38.717 18745 24156 I SystemUpdate: [Control,InstallationControl] Update engine status updated to 0x003.
10-29 20:43:38.832  1106  1106 I update_engine: [INFO:delta_performer.cc(115)] Completed 1691/2292 operations (73%), 1429588024/2027780609 bytes downloaded (70%), overall progress 71%

So there is definitely some download activity going on. The question is what will happen when it reaches 100%. And even more important, if there’s a way to prevent this from happening.

And maybe I’m just a bit too uptight. Using (as root)

# find /mnt/installer/0/emulated/ -cmin -1 2>/dev/null

to find recently modified files, it appears that the activity is in /mnt/installer/0/emulated/0/Android/data/com.google.android.googlequicksearchbox/files/download_cache, and that the downloaded files are like soda-en-US-v7025-3.zip, which appears to be Speech On Device Access. And I’m fine with that.

Or did it really work II?

I owe this part to Namelesswonder’s very useful comment below. Indeed, the logs are at /data/misc/update_engine_log/, and there are some files there from October, which is when there was some update activity. The progress of the download, in percent and in bytes, is written in these logs.

The first log has an interesting row:

[1003/115142.712102] [INFO:main.cc(55)] A/B Update Engine starting
[1003/115142.720019] [INFO:boot_control_android.cc(68)] Loaded boot control hidl hal.
[1003/115142.738330] [INFO:update_attempter_android.cc(1070)] Scheduling CleanupPreviousUpdateAction.
[1003/115142.747361] [INFO:action_processor.cc(51)] ActionProcessor: starting CleanupPreviousUpdateAction

Any idea how to schedule CleanupPreviousUpdateAction manually? Sounds like fun to me.

The logs also name the URL of the downloaded package, https://ota.googlezip.net/packages/ota-api/package/d895ce906129e5138db6141ec735740cd1cd1b07.zip, which is about 1.9 GB, so it’s definitely a full-blown update.

What is even more interesting from the logs is this line, which appears every time before starting or resuming an OTA download:

[1029/201649.018352] [INFO:delta_performer.cc(1009)] Verifying using certificates: /system/etc/security/otacerts.zip

It’s a ZIP file, which contains a single, bare-bones self-signed certificate in PEM format, with no fancy information in it.
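
A quick way to peek at such a certificate is to stream the ZIP entry straight into openssl. Since the real otacerts.zip lives on the phone (one would pull it with adb first), this sketch builds a stand-in from a throwaway self-signed certificate; all file names here are made up:

```shell
cd "$(mktemp -d)"
# Stand-in for the real /system/etc/security/otacerts.zip: generate a
# self-signed certificate and zip it, mimicking the single-entry file.
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj /CN=demo 2>/dev/null
python3 -m zipfile -c otacerts.zip cert.pem
# The actual inspection: pipe the PEM entry into openssl.
unzip -p otacerts.zip | openssl x509 -noout -subject -dates
```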

It seems quite obvious that this certificate is used to authenticate the update file. What happens if this file is absent? No authentication, no update. Probably no download. So the immediate idea is to rename this file to something else. But it’s not that easy:

# mv otacerts.zip renamed-otacerts.zip
mv: bad 'otacerts.zip': Read-only file system

That’s true: /system/etc/security/ is on the root file system, which happens to be read-only. But it’s a good candidate for a Magisk override, I guess.
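
Such an override might look roughly like this, under the assumption that Magisk’s systemless overlay replaces any file found under a module’s system/ tree. The module id “no-otacerts” and all the module.prop values are made up, and the tree is built in a temporary directory here rather than /data/adb/modules, so this is a sketch only:

```shell
# On a real device this would be /data/adb/modules/no-otacerts:
MODDIR="$(mktemp -d)/no-otacerts"
mkdir -p "$MODDIR/system/etc/security"
# module.prop is required by Magisk; all values here are invented.
cat > "$MODDIR/module.prop" <<'EOF'
id=no-otacerts
name=Hide OTA certificates
version=1.0
versionCode=1
author=anonymous
description=Shadow /system/etc/security/otacerts.zip with an empty file
EOF
# An empty file at the mirrored path should shadow the real otacerts.zip.
: > "$MODDIR/system/etc/security/otacerts.zip"
find "$MODDIR" -type f
```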

Another thing that was mentioned in the said comment is that the data goes to /data/ota_package/. That’s interesting, because on my system this directory is nearly empty, and the files’ modification timestamps are from slightly before I neutralized the update activities, as mentioned above.

So it appears like the downloaded data goes directly into a raw partition, rather than a file.

There’s also /data/misc/update_engine/prefs/, which contains the current status of the download and other metadata. For example, update-state-next-data-offset apparently says how much has been downloaded already. What happens if this directory is nonexistent? Is it recreated, or is this too much for the updater to take?

As an initial approach, I renamed the “prefs” directory to “renamed-prefs”. A few days later, a new “prefs” directory was created, with just boot-id, previous-slot and previous-version files. Their timestamp matched a new log entry in update_engine_log/, which consisted of a CleanupPreviousUpdateAction. So apparently, the previous download was aborted and restarted.

So this time I renamed update_engine to something else, and created update_engine as a plain file. Before doing this, I tried making the directory unreadable, but as root it’s accessible anyhow (this is Android, not a normal Linux system). So the idea is to make it a non-directory, and maybe that will cause an irrecoverable failure.
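
The rename-and-block trick boils down to this, shown here in a temporary directory (on the phone, it’s done as root in /data/misc):

```shell
cd "$(mktemp -d)"
mkdir update_engine                   # stands in for the real directory
mv update_engine renamed-update_engine
touch update_engine                   # a plain file now occupies the name
# Any attempt to create the prefs/ subdirectory now fails with ENOTDIR:
mkdir update_engine/prefs 2>/dev/null || echo "mkdir failed, as intended"
```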

Update 8.12.22: Nope, that didn’t help. A new log entry appeared in update_engine_log/, with not a single word about the failure to create update_engine/prefs/, but a whole lot of rows indicating the download’s progress. /data/misc/update_engine remained a plain file, so nothing could be stored in the correct place for the prefs/ subdirectory. I failed to find files with the names of those usually found in prefs/ anywhere in the filesystem, so I wonder how the download will resume after being interrupted.

These are a few unrelated topics, which didn’t turn out helpful. So they’re left here, just in case.

Downloading the source

I had the idea of grabbing the sources for the GMS package and seeing what goes on there.

The list of Git repos is here (or the GitHub mirror here). I went for this one:

git clone https://android.googlesource.com/platform/frameworks/base

or the GitHub mirror:

git clone https://github.com/aosp-mirror/platform_frameworks_base.git

however, this doesn’t seem to include the relevant parts.


Another approach I had in mind was to turn off the permission to make a system update. I don’t know if there actually is such a permission.

Android’s permission system allows granting and revoking permissions as required. Just typing “pm” (package manager) in adb shell returns help information.

To get a list of all permissions in the system:

$ adb shell pm list permissions -g > list.txt

But that covers everything, so the trick is to search for the appearances of the specific package of interest. Something like

$ grep com.google.android.gms list.txt | sort | less

and look at that list. I didn’t find anything that appears to be related to system updates.

Linux + webcam: Poor man’s DIY surveillance camera


Due to an incident that is beyond the scope of this blog, I wanted to put a 24/7 camera that watched a certain something, just in case that incident repeated itself.

Having a laptop that I barely use, and a cheap eBay web camera, I thought I’d set something up and let ffmpeg do the job.

I’m not sure if a Raspberry Pi would be up for this job, even when connected to an external hard disk through USB. It depends a lot on how well ffmpeg performs on that platform; I haven’t tried. The laptop’s clear advantage is its battery, which rides through a brief power outage.

Overall verdict: It’s as good as the stability of the USB connection with the camera.

Note to self: I keep this in the misc/utils git repo, under surveillance-cam/.

Warming up

Show the webcam’s image on screen, the ffmpeg way:

$ ffplay -f video4linux2 /dev/video0

Let ffmpeg list the formats:

$ ffplay -f video4linux2 -list_formats all /dev/video0

Or with a dedicated tool:

# apt install v4l-utils

and then

$ v4l2-ctl --list-formats-ext -d /dev/video0

Possibly also use “lsusb -v” on the device: It lists the format information, not necessarily in a user-friendly way, but that’s the actual source of information.

Get all parameters that can be tweaked:

$ v4l2-ctl --all

See an example output for this command at the bottom of this post.

If control over the exposure time is available, it will be listed as “exposure_absolute” (none of the webcams I tried had this). The exposure time is given in units of 100µs (see e.g. the definition of V4L2_CID_EXPOSURE_ABSOLUTE).
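
So on a camera that does expose this control (hypothetical here, since none of mine did), setting a 10 ms exposure would mean converting milliseconds to those 100 µs units first:

```shell
# exposure_absolute is in units of 100 µs, so multiply milliseconds by 10:
ms=10
units=$((ms * 10))   # 10 ms -> 100 units
echo "$units"
# On a camera that actually has the control, one would then run something like:
# v4l2-ctl -d /dev/video0 --set-ctrl=exposure_absolute=$units
```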

Get a specific parameter, e.g. brightness

$ v4l2-ctl --get-ctrl=brightness
brightness: 137

Set the control (can be done while the camera is capturing video)

$ v4l2-ctl --set-ctrl=brightness=255

Continuous capturing

This is a simple bash script that creates .mp4 files from the captured video:


OUTDIR=/extra/videos
SRC=/dev/v4l/by-id/usb-Generic*
DURATION=3600 # In seconds

while [ 1 ]; do
  TIME=`date +%F-%H%M%S`
  if ! ffmpeg -f video4linux2 -i $SRC -t $DURATION -r 10 $OUTDIR/video-$TIME.mp4 < /dev/null ; then
    echo 2-2 | sudo tee /sys/bus/usb/drivers/usb/unbind
    echo 2-2 | sudo tee /sys/bus/usb/drivers/usb/bind
    sleep 5
  fi
done

Comments on the script:

  • To make this a real surveillance application, there must be another script that deletes old files, so that the disk isn’t full. My script on this matter is so hacky, that I left it out here.
  • The real problem I encountered was occasional USB errors. They happened every now and then, without any specific pattern. Sometimes the camera disconnected briefly and reconnected right away; sometimes it failed to come back for a few minutes. About once a week, it didn’t come back at all, only a lot of USB errors appeared in the kernel log, and a reboot was required. This is most likely some combination of cheap hardware, a long and not-so-good USB cable, and maybe hardware + kernel driver issues. I don’t know. This wasn’t important enough to solve in a bulletproof way.
  • Because of these USB errors, those two “echo 2-2” commands attempt to reset the USB port if ffmpeg fails, and then sleep 5 seconds. The “2-2” is the physical position of the USB port to which the USB camera was connected. Ugly hardcoding, yes. I know for sure that these commands were called occasionally, but whether this helped, I’m not sure.
  • Also because of these disconnections, the length of the videos wasn’t always 60 minutes as requested. But this doesn’t matter all that much, as long as the time between the clips is short. Which it usually was (less than 5 seconds, the result of a brief disconnection).
  • Note that the device file for the camera is found using a /dev/v4l/by-id/ path rather than /dev/video0, and not just to avoid mixing up the external and built-in webcams: There were sporadic USB disconnections after which the external webcam ended up as /dev/video2, and then back to /dev/video1 after the next disconnection. The by-id path remained constant in the sense that it could be found with the * wildcard.
  • Frame rate is always a dilemma, as it ends up influencing the file’s size, and hence how long back videos are stored. At 5 fps, an hour long .mp4 took about 800 MB for daytime footage, and much less than so during night. At 10 fps, it got up to 1.1 GB, so by all means, 10 fps is better.
  • Run the recording on a text console, rather than in a terminal window inside X-Windows (i.e. use Ctrl-Alt-F1, and Ctrl-Alt-F7 to go back to X). This is because the graphical desktop crashed at some point (see below for why), so if this happens again, the recording will keep going.
  • For the purpose of running ffmpeg without a console (i.e. run in the background with an “&” and then log out), note that the ffmpeg command has a “< /dev/null”. Otherwise ffmpeg expects to be interactive, meaning it does nothing if it runs in the background. There’s supposed to be a -nostdin flag for this, and ffmpeg recognized it on my machine, but expected a console nevertheless. So I went for the old method.
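
Regarding the first point above, a minimal cleanup sketch (not the author’s script) could simply delete recordings older than some threshold; the 14-day cutoff is arbitrary, and a temporary directory with two demo files stands in for the real output directory:

```shell
OUTDIR="$(mktemp -d)"                # would be /extra/videos in the setup above
# Create one stale and one fresh recording for demonstration:
touch -d '20 days ago' "$OUTDIR/video-old.mp4"
touch "$OUTDIR/video-new.mp4"
# Delete anything older than 14 days; run periodically, e.g. from cron:
find "$OUTDIR" -name 'video-*.mp4' -mtime +14 -delete
ls "$OUTDIR"
```

A fancier version would look at free disk space instead of file age, but the age-based rule is easier to reason about.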

How a wobbling USB camera crashes X-Windows

First, the spoiler: I solved this problem by putting a physical weight on the USB cable, close to the plug. This held the connector steady in place, and the vast majority of the problems were gone.

I also have a separate post about how I tried to make Linux ignore the offending bogus keyboard altogether. Needless to say, that failed (because either you ban the entire USB device, or you don’t ban at all).

This is the smoking gun in /var/log/Xorg.0.log: Lots of

[1194182.076] (II) config/udev: Adding input device USB2.0 PC CAMERA: USB2.0 PC CAM (/dev/input/event421)
[1194182.076] (**) USB2.0 PC CAMERA: USB2.0 PC CAM: Applying InputClass "evdev keyboard catchall"
[1194182.076] (II) Using input driver 'evdev' for 'USB2.0 PC CAMERA: USB2.0 PC CAM'
[1194182.076] (**) USB2.0 PC CAMERA: USB2.0 PC CAM: always reports core events
[1194182.076] (**) evdev: USB2.0 PC CAMERA: USB2.0 PC CAM: Device: "/dev/input/event421"
[1194182.076] (--) evdev: USB2.0 PC CAMERA: USB2.0 PC CAM: Vendor 0x1908 Product 0x2311
[1194182.076] (--) evdev: USB2.0 PC CAMERA: USB2.0 PC CAM: Found keys
[1194182.076] (II) evdev: USB2.0 PC CAMERA: USB2.0 PC CAM: Configuring as keyboard
[1194182.076] (EE) Too many input devices. Ignoring USB2.0 PC CAMERA: USB2.0 PC CAM
[1194182.076] (II) UnloadModule: "evdev"

and at some point the sad end:

[1194192.408] (II) config/udev: Adding input device USB2.0 PC CAMERA: USB2.0 PC CAM (/dev/input/event423)
[1194192.408] (**) USB2.0 PC CAMERA: USB2.0 PC CAM: Applying InputClass "evdev keyboard catchall"
[1194192.408] (II) Using input driver 'evdev' for 'USB2.0 PC CAMERA: USB2.0 PC CAM'
[1194192.408] (**) USB2.0 PC CAMERA: USB2.0 PC CAM: always reports core events
[1194192.408] (**) evdev: USB2.0 PC CAMERA: USB2.0 PC CAM: Device: "/dev/input/event423"
[1194192.445] (EE)
[1194192.445] (EE) Backtrace:
[1194192.445] (EE) 0: /usr/bin/X (xorg_backtrace+0x48) [0x564128416d28]
[1194192.445] (EE) 1: /usr/bin/X (0x56412826e000+0x1aca19) [0x56412841aa19]
[1194192.445] (EE) 2: /lib/x86_64-linux-gnu/libpthread.so.0 (0x7f6e4d8b4000+0x10340) [0x7f6e4d8c4340]
[1194192.445] (EE) 3: /usr/lib/xorg/modules/input/evdev_drv.so (0x7f6e45c4c000+0x39f5) [0x7f6e45c4f9f5]
[1194192.445] (EE) 4: /usr/lib/xorg/modules/input/evdev_drv.so (0x7f6e45c4c000+0x68df) [0x7f6e45c528df]
[1194192.445] (EE) 5: /usr/bin/X (0x56412826e000+0xa1721) [0x56412830f721]
[1194192.446] (EE) 6: /usr/bin/X (0x56412826e000+0xb731b) [0x56412832531b]
[1194192.446] (EE) 7: /usr/bin/X (0x56412826e000+0xb7658) [0x564128325658]
[1194192.446] (EE) 8: /usr/bin/X (WakeupHandler+0x6d) [0x5641282c839d]
[1194192.446] (EE) 9: /usr/bin/X (WaitForSomething+0x1bf) [0x5641284142df]
[1194192.446] (EE) 10: /usr/bin/X (0x56412826e000+0x55771) [0x5641282c3771]
[1194192.446] (EE) 11: /usr/bin/X (0x56412826e000+0x598aa) [0x5641282c78aa]
[1194192.446] (EE) 12: /lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main+0xf5) [0x7f6e4c2f3ec5]
[1194192.446] (EE) 13: /usr/bin/X (0x56412826e000+0x44dde) [0x5641282b2dde]
[1194192.446] (EE)
[1194192.446] (EE) Segmentation fault at address 0x10200000adb
[1194192.446] (EE)
Fatal server error:
[1194192.446] (EE) Caught signal 11 (Segmentation fault). Server aborting
[1194192.446] (EE)

The thing is that the webcam presents itself as a keyboard, among other things. I guess the chipset has inputs for control buttons (which this specific webcam doesn’t have), so as the USB device goes on and off, X windows registers the nonexistent keyboard on and off, and eventually some bug causes it to crash (note that the number of the event device is 423, so there were quite a few ons and offs). It might very well be that the camera connected, started some kind of connection event handler, which didn’t finish its job before the disconnection. Somewhere in the code, the handler fetched information that didn’t exist, got a bad pointer instead (NULL?) and used it. Boom. Just a wild guess, but this is the typical scenario.

The crash can be avoided by making X windows ignore this “keyboard”. I did this by adding a new file named /usr/share/X11/xorg.conf.d/10-nocamera.conf as follows:

# Ignore bogus button on webcam
Section "InputClass"
 Identifier "Blacklist USB webcam button as keyboard"
 MatchUSBID "1908:2311"
 Option "Ignore" "on"
EndSection

This way, X windows didn’t fiddle with the bogus buttons, and hence didn’t care if they suddenly went away.

Anyhow, it’s a really old OS (Ubuntu 14.04.1) so this bug might have been solved long ago.

Accumulation of /dev/input/event files

Another problem with this wobbling is that /dev/input/ becomes crowded with a lot of eventN files:

$ ls /dev/input/event*
/dev/input/event0    /dev/input/event267  /dev/input/event295
/dev/input/event1    /dev/input/event268  /dev/input/event296
/dev/input/event10   /dev/input/event269  /dev/input/event297
/dev/input/event11   /dev/input/event27   /dev/input/event298
/dev/input/event12   /dev/input/event270  /dev/input/event299
/dev/input/event13   /dev/input/event271  /dev/input/event3
/dev/input/event14   /dev/input/event272  /dev/input/event30
/dev/input/event15   /dev/input/event273  /dev/input/event300
/dev/input/event16   /dev/input/event274  /dev/input/event301
/dev/input/event17   /dev/input/event275  /dev/input/event302
/dev/input/event18   /dev/input/event276  /dev/input/event303
/dev/input/event19   /dev/input/event277  /dev/input/event304
/dev/input/event2    /dev/input/event278  /dev/input/event305
/dev/input/event20   /dev/input/event279  /dev/input/event306
/dev/input/event21   /dev/input/event28   /dev/input/event307
/dev/input/event22   /dev/input/event280  /dev/input/event308
/dev/input/event23   /dev/input/event281  /dev/input/event309
/dev/input/event24   /dev/input/event282  /dev/input/event31
/dev/input/event25   /dev/input/event283  /dev/input/event310
/dev/input/event256  /dev/input/event284  /dev/input/event311
/dev/input/event257  /dev/input/event285  /dev/input/event312
/dev/input/event258  /dev/input/event286  /dev/input/event313
/dev/input/event259  /dev/input/event287  /dev/input/event314
/dev/input/event26   /dev/input/event288  /dev/input/event315
/dev/input/event260  /dev/input/event289  /dev/input/event316
/dev/input/event261  /dev/input/event29   /dev/input/event4
/dev/input/event262  /dev/input/event290  /dev/input/event5
/dev/input/event263  /dev/input/event291  /dev/input/event6
/dev/input/event264  /dev/input/event292  /dev/input/event7
/dev/input/event265  /dev/input/event293  /dev/input/event8
/dev/input/event266  /dev/input/event294  /dev/input/event9

Cute, huh? And this is even before there was a problem. So what does X windows make of this?

$ xinput list
⎡ Virtual core pointer                    	id=2	[master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer              	id=4	[slave  pointer  (2)]
⎜   ↳ ELAN Touchscreen                        	id=9	[slave  pointer  (2)]
⎜   ↳ SynPS/2 Synaptics TouchPad              	id=13	[slave  pointer  (2)]
⎣ Virtual core keyboard                   	id=3	[master keyboard (2)]
    ↳ Virtual core XTEST keyboard             	id=5	[slave  keyboard (3)]
    ↳ Power Button                            	id=6	[slave  keyboard (3)]
    ↳ Video Bus                               	id=7	[slave  keyboard (3)]
    ↳ Power Button                            	id=8	[slave  keyboard (3)]
    ↳ Lenovo EasyCamera: Lenovo EasyC         	id=10	[slave  keyboard (3)]
    ↳ Ideapad extra buttons                   	id=11	[slave  keyboard (3)]
    ↳ AT Translated Set 2 keyboard            	id=12	[slave  keyboard (3)]
    ↳ USB 2.0 PC Cam                          	id=14	[slave  keyboard (3)]
    ↳ USB 2.0 PC Cam                          	id=15	[slave  keyboard (3)]
    ↳ USB 2.0 PC Cam                          	id=16	[slave  keyboard (3)]
    ↳ USB 2.0 PC Cam                          	id=17	[slave  keyboard (3)]
    ↳ USB 2.0 PC Cam                          	id=18	[slave  keyboard (3)]
    ↳ USB 2.0 PC Cam                          	id=19	[slave  keyboard (3)]

Now, let me assure you that there were not six webcams connected when I did this. Actually, not a single one.

Anyhow, I didn’t dig further into this. The real problem is that all of these /dev/input/event files have the same major number, which means that when there are really a lot of them, the system runs out of minors. So if the normal kernel log for plugging in the webcam was this,

usb 2-2: new high-speed USB device number 22 using xhci_hcd
usb 2-2: New USB device found, idVendor=1908, idProduct=2311
usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 2-2: Product: USB2.0 PC CAMERA
usb 2-2: Manufacturer: Generic
uvcvideo: Found UVC 1.00 device USB2.0 PC CAMERA (1908:2311)
uvcvideo 2-2:1.0: Entity type for entity Processing 2 was not initialized!
uvcvideo 2-2:1.0: Entity type for entity Camera 1 was not initialized!
input: USB2.0 PC CAMERA: USB2.0 PC CAM as /devices/pci0000:00/0000:00:14.0/usb2/2-2/2-2:1.0/input/input274

after all minors ran out, I got this:

usb 2-2: new high-speed USB device number 24 using xhci_hcd
usb 2-2: New USB device found, idVendor=1908, idProduct=2311
usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 2-2: Product: USB2.0 PC CAMERA
usb 2-2: Manufacturer: Generic
uvcvideo: Found UVC 1.00 device USB2.0 PC CAMERA (1908:2311)
uvcvideo 2-2:1.0: Entity type for entity Processing 2 was not initialized!
uvcvideo 2-2:1.0: Entity type for entity Camera 1 was not initialized!
media: could not get a free minor

And then immediately after:

systemd-udevd[4487]: Failed to apply ACL on /dev/video2: No such file or directory
systemd-udevd[4487]: Failed to apply ACL on /dev/video2: No such file or directory

Why these eventN files aren’t removed is unclear. The kernel is pretty old, v4.14, so maybe this has been fixed since.

Sample output of v4l2-ctl --all

This is a small & junky webcam. Clearly, there’s no control over the exposure time.

$ v4l2-ctl --all -d /dev/v4l/by-id/usb-Generic_USB2.0_PC_CAMERA-video-index0
Driver Info (not using libv4l2):
	Driver name   : uvcvideo
	Card type     : USB2.0 PC CAMERA: USB2.0 PC CAM
	Bus info      : usb-0000:00:14.0-2
	Driver version: 4.14.0
	Capabilities  : 0x84200001
		Video Capture
		Device Capabilities
	Device Caps   : 0x04200001
		Video Capture
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
	Width/Height  : 640/480
	Pixel Format  : 'YUYV'
	Field         : None
	Bytes per Line: 1280
	Size Image    : 614400
	Colorspace    : Unknown (00000000)
	Custom Info   : feedcafe
Crop Capability Video Capture:
	Bounds      : Left 0, Top 0, Width 640, Height 480
	Default     : Left 0, Top 0, Width 640, Height 480
	Pixel Aspect: 1/1
Selection: crop_default, Left 0, Top 0, Width 640, Height 480
Selection: crop_bounds, Left 0, Top 0, Width 640, Height 480
Streaming Parameters Video Capture:
	Capabilities     : timeperframe
	Frames per second: 30.000 (30/1)
	Read buffers     : 0
                     brightness (int)    : min=0 max=255 step=1 default=128 value=128
                       contrast (int)    : min=0 max=255 step=1 default=130 value=130
                     saturation (int)    : min=0 max=255 step=1 default=64 value=64
                            hue (int)    : min=-127 max=127 step=1 default=0 value=0
                          gamma (int)    : min=1 max=8 step=1 default=4 value=4
           power_line_frequency (menu)   : min=0 max=2 default=1 value=1
                      sharpness (int)    : min=0 max=15 step=1 default=13 value=13
         backlight_compensation (int)    : min=1 max=5 step=1 default=1 value=1