Lubuntu 16.04 on ARM: Turning off the “Suspend” etc. options

In short

On an embedded ARM-based Lubuntu 16.04, LXDE's logoff dialog window offered suspend as an option, and when that was chosen, the system got itself into some nasty state with network and keyboard off. The serial console was still active, but all the same, I was better off without that option.

It turned out that the kernel was misconfigured to announce that it supported suspend to RAM:

# cat /sys/power/state
freeze mem

So no wonder that option was presented to the user in the GUI. The solution: Turn off the CONFIG_SUSPEND kernel compilation flag. Recompile, deploy, and that's it:

# cat /sys/power/state

And the faulty options were gone.

The rest of this post contains things I jotted down as I wasted time trying to find out what the problem was.

Irrelevant notes

  • There's a systemd-inhibit utility for preventing suspends and sleeps while a certain program is running. Essentially, it issues an "Inhibit" command on DBus towards "org.freedesktop.login1", with what / who / why / mode info.
  • polkit's manual ("man polkit" lacks some info) points at a list of directories with rules.d files, written in JavaScript (!!!). The actions/ directories hold policy files, in XML, which describe who may be allowed what operation, and what to write (in several languages) in the dialog box asking for authentication (typically a password).
  • Obtaining the sources for the study below
    # apt-get source lxsession-logout
    # apt-get source systemd

A journey in the sources (more wasted time)

I tried to follow how lxsession-logout, which is LXDE’s program that displays the logout dialog box, decides which low-power modes to offer. And what actually happens when suspend is requested.

  • lxsession-logout.c learns if the system can suspend (and hence the button shall be presented) by calling dbus_systemd_CanSuspend().
  • which is implemented in lxsession-logout-dbus-interface.c, and calls systemd_query() with “CanSuspend” as its parameter
  • which (implemented in the same file) in turn opens a session with "org.freedesktop.login1" over DBus and issues a query
  • Judging by the "BusName=org.freedesktop.login1" directive in /lib/systemd/system/systemd-logind.service, it's systemd-logind.service (running systemd-logind) that answers this query.
  • Looking in systemd's login-dbus.c, "CanSuspend" maps to the method method_can_suspend(), which in turn calls method_can_shutdown_or_sleep() with "org.freedesktop.login1.suspend" as the key argument, which calls bus_test_polkit() for an answer on that.
  • Implemented in systemd/shared/bus-util.c, bus_test_polkit() makes a DBus query on "org.freedesktop.PolicyKit1"
  • There are also references to upowerd in lxsession-logout.c, but since stopping this service changes nothing, I focused on logind.
  • Judging by the "BusName=org.freedesktop.PolicyKit1" directive in /lib/systemd/system/polkitd.service, polkitd.service (running /usr/lib/policykit-1/polkitd) answers this.
  • Back to login-dbus.c, a “Suspend” request causes a call to method_suspend(), which calls method_do_shutdown_or_sleep(), which calls bus_manager_shutdown_or_sleep_now_or_later(), which calls execute_shutdown_or_sleep(). The “unitname” parameter traverses these calls with the value SPECIAL_SUSPEND_TARGET. There are several checks on the way that may cancel the request, but this is the chain to fulfilling it.
  • execute_shutdown_or_sleep() issues a DBus request on “org.freedesktop.systemd1″ (I have a wild guess which process is on the other side).
  • Apparently (without following the execution chain), systemd/src/sleep.c is responsible for putting the system in suspend mode and such. Among others, it writes to /sys/power/state. This is where it hit me that maybe the kernel’s configuration was the problem.
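All the machinery above eventually reduces to what the kernel announces in /sys/power/state. As a minimal sketch (in Python; not lxsession-logout's actual code, which goes over DBus as traced above), the decision whether to offer a suspend button boils down to:

```python
# Sketch: decide whether a "Suspend" button makes sense, based on the
# contents of /sys/power/state. This is the fact that logind and polkit
# ultimately reflect back to lxsession-logout over DBus.

def supported_sleep_states(state_file_text):
    """Return the set of sleep states the kernel announces."""
    return set(state_file_text.split())

def offer_suspend(state_file_text):
    """Suspend-to-RAM is the "mem" state."""
    return "mem" in supported_sleep_states(state_file_text)

# With the misconfigured kernel from this post:
print(offer_suspend("freeze mem\n"))  # True: the GUI offers Suspend
# After recompiling without CONFIG_SUSPEND, the file is empty:
print(offer_suspend(""))              # False: the option disappears
```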

systemd random jots

As systemd seems to be here to stay (or at least I hope so), this is a post of random notes to self that I jot down as I explore it. It will probably grow with time, and become a mixture of basic issues and rather advanced stuff.

Useful references

  • man systemd.service and man systemd.unit as well as others. Really. These are the best sources, it turns out.
  • The excellent Systemd for Admins series (with several relevant and specific topics).
  • The primer for systemd: Basic concepts explained.
  • Red Hat’s guide to creating custom targets (and daemons)
  • The FAQ (with actually useful info!)
  • On the Network Target (and how to run a target only when the network is up)
  • man systemd.special for a list of built-in targets, their meaning and recommended use

systemctl is the name of the game

Forget "service", "telinit" and "initctl". "systemctl" is the new Swiss army knife for starting, stopping, enabling and disabling services, as well as for obtaining information on how services are doing. And it's really useful.

To get an idea on what runs on the system and what unit triggered it off, go

# systemctl status

Note that "systemctl status" lists, among others, all processes initiated by each login session for each user. Which is an extremely useful variant of "ps".

And for just a list of all services:

# systemctl

Ask about a specific service:

# systemctl status ssh
 ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-12-01 10:37:21 IST; 1h 17min ago
 Main PID: 1018 (sshd)
   CGroup: /system.slice/ssh.service
           └─1018 /usr/sbin/sshd -D

Dec 01 12:26:26 machine sshd[2841]: Accepted publickey for eli from port 45220 ssh2: RSA SHA256:xxx
Dec 01 12:26:26 machine sshd[2841]: pam_unix(sshd:session): session opened for user eli by (uid=0)

Really, isn’t it sweet?

Turning off a service: Find it with the "systemctl status" command above (or just "systemctl"), and then (this is an example of a service not shown in the status, because it's an LSB service):

# systemctl disable tvheadend
tvheadend.service is not a native service, redirecting to systemd-sysv-install
Executing /lib/systemd/systemd-sysv-install disable tvheadend
insserv: warning: current start runlevel(s) (empty) of script `tvheadend' overrides LSB defaults (2 3 4 5).
insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `tvheadend' overrides LSB defaults (0 1 6).

And “enable” for enabling a service.

Needless to say, services can be started, stopped and restarted with “systemctl X service” where X is either start, stop or restart.

For a list of all services (including inactive ones):

$ systemctl --all

What makes a systemd service run (on boot)

  • It’s enabled, which means that there’s a symbolic link from some /etc/systemd/*/*.wants directory to the unit file. In the example below, it’s a .path file, so it activates a watch on the path specified, but if it’s a service, it’s kicked off at boot
    # systemctl enable foo.path
        Created symlink from /etc/systemd/system/ to /etc/systemd/system/foo.path
  • In the unit file symlinked to, there's an [Install] section, whose WantedBy directive says when the unit should be kicked off. More precisely, which target or service should be active. Once again, for a plain .service unit file, this kicks off the service, and for an e.g. .path file, this starts the monitoring. man systemd.special for a list of built-in targets, and targets can be generated, of course. The most common target for "run me at boot" is multi-user.target. Dependency on services is expressed by using the service name with a .service suffix instead.
  • By default, unit files such as .path files kick off the .service file with the same non-suffix part (this can be changed with a directive in the file. But why?)
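To make this convention concrete, here's a hypothetical foo.path unit (all names invented for the example) that starts foo.service when a file appears:

```ini
# /etc/systemd/system/foo.path -- hypothetical example
[Unit]
Description=Watch for /tmp/foo-trigger

[Path]
PathExists=/tmp/foo-trigger
# No Unit= directive, so foo.service (same non-suffix part) is the
# unit that gets started when the path shows up.

[Install]
# "systemctl enable foo.path" symlinks this file into the .wants
# directory of the target named here:
WantedBy=multi-user.target
```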

See /etc/systemd/system/ for a list of services that are activated on boot. In particular note that not all are symlinks to .service unit files.

General memo jots

  • Always run the systemctl daemon-reload command after creating new unit files or modifying existing unit files. Otherwise, the systemctl start or systemctl enable commands could fail due to a mismatch between states of systemd and actual service unit files on disk.
  • Services are best run in the foreground. Unlike classic UNIX services, there’s no point in daemonizing. All processes belonging to the relevant service are enclosed in a Cgroup anyhow, and systemd handles the daemonizing for Type=simple services. In a clean and uniform manner.
  • A unit file's suffix indicates its type. When the non-suffix part of two files is the same, they indicate a functional relationship. For example, systemd-tmpfiles-clean.timer says when to launch systemd-tmpfiles-clean.service, and systemd-ask-password-console.path gives the path to be watched for firing off systemd-ask-password-console.service.
  • After= doesn’t imply a dependency, and Requires= doesn’t imply the order of starting services. Both are needed if one service depends on the other running when it starts.
  • The Type= directive’s main influence is determining when the service is active, i.e. when other services that depend on it can be launched.
  • There's also "loginctl", which lists the current users and their sessions.
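The point about After= and Requires= can be summarized in a hypothetical unit file fragment (names invented): a bar.service that needs foo.service up before it starts carries both directives:

```ini
# Fragment of a hypothetical bar.service that needs foo.service
# up and running before it starts:
[Unit]
Requires=foo.service
After=foo.service
```

Requires= alone pulls foo.service in, but lets both start in parallel; After= alone orders the start, but doesn't cause foo.service to start if nothing else requests it.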

Where to find unit files

The configuration files are considered in the following order (later overrules earlier):

  • /lib/systemd/system — where installation scripts write to
  • /run/systemd/system — runtime files
  • /etc/systemd/system — the local administrator's own configuration

Per-user files can be found in ~/.config/systemd/user and possibly also in ~/.local/share/systemd/user.

Enabling console autologin on tty1 and ttyPS0

Following this page and as explained on this page, add /etc/systemd/system/getty@tty1.service.d/autologin.conf (after creating the getty@tty1.service.d directory) as follows:

[Service]
ExecStart=
ExecStart=-/sbin/agetty -a root --noclear %I $TERM

Note that the filename autologin.conf has no significance. Its suffix and the directory it resides in are what matter.

The idea is to override the ExecStart parameter given in /lib/systemd/system/getty@.service template unit, which reads

ExecStart=-/sbin/agetty --noclear %I $TERM

but otherwise have it running the same. The reason for two ExecStart lines is that the empty assignment clears the existing assignment (or otherwise it would have been added on top of it), and the second sets the command.

Note the %I substitute parameter, which stands for the instance of the current tty.

The leading dash means that the exit value of the command is ignored, and may be nonzero without the unit considered as failed (see man systemd.service).

This can't be repeated with ttyPS0, because systemd goes another way for setting up the serial console: At an early stage of the boot, systemd-getty-generator automatically generates a unit, serial-getty@ttyPS0.service, which is instantiated from the /lib/systemd/system/serial-getty@.service template unit.

# systemctl status

[ ... ]

           │ ├─system-serial\x2dgetty.slice
           │ │ └─serial-getty@ttyPS0.service
           │ │   └─2337 /sbin/agetty --keep-baud 115200 38400 9600 ttyPS0 vt220

So the solution is adding /etc/systemd/system/serial-getty@ttyPS0.service.d/autologin.conf saying

[Service]
ExecStart=
ExecStart=-/sbin/agetty -a root --keep-baud 115200,38400,9600 %I $TERM

… more to come …

A watchdog script for restarting Cinnamon hogging memory at startup


Having installed Linux Mint 18.1 (kernel 4.4.0-53) with overlayroot on a Gigabyte Brix BACE-3160 (see other notes on this here and here) for my living room media center, I had an issue with bringing up the computer every now and then. Namely, I had a blank screen with mouse pointer only, 100% CPU on the cinnamon process, and some 50% on cinnamon-settings-daemon, eating all RAM (8 GB) gradually until the computer crashes (as it’s swapless).

The whole point with an overlayroot based media center is that it behaves like any electrical appliance. In particular, you know it will recover well from a power outage. And not crash because of some silly desktop manager messing up.

I'm not alone: There's a bug report on this. There's also this GitHub issues page, which seems to point at the problem, in particular as it was closed pointing at a commit, merged into the main repository, saying "Add detection for accountsservice background as it's ubuntu only".

It seems to be more common with SSD disks (because they’re faster).

And still there’s no solution available, almost a year later. Don’t tell me to upgrade. Upgrading means trading known bugs for new, unknown ones.

Which leaves me with making a workaround. In this case, a watchdog service, which monitors the resident memory usage of the process having the command line "cinnamon". If it goes above 200 MiB during its first five minutes (more or less), restart the mdm service, which in turn restarts Cinnamon. Plain and disgusting.

The ironic truth is that I haven’t been able to test the watchdog for real, as I’ve failed to repeat the problem, despite a lot of power cycles. But I have verified that it works by lowering the memory threshold, and confirmed that the mdm service is indeed restarted. I also know from previous experience that restarting mdm helps.

The stuff

This should be copied to /usr/local/bin/ (executable at least by root):

#!/usr/bin/perl
use warnings;
use strict;

my $cmd = 'cinnamon';
my $limit = 200; # Maximal resident memory size in MiB
my $timeout = 300; # Approximately how long this script should run, in seconds
my $interval = 5; # Polling interval in seconds (an assumption; this value was lost)

my $pid;
my $timer = 0;
my $rss = 0;

eval {
  while (1) {
    sleep $interval;
    $timer += $interval;

    if ($timer >= $timeout) {
      if (defined $pid) {
        logger("Quitting. cinnamon process seems to behave well with $rss MiB resident memory");
      } else {
        logger("Quitting without finding the cinnamon process");
      }
      exit 0;
    }

    undef $pid if ((defined $pid) && !is_target($cmd, $pid));

    $pid = find_target($cmd) unless (defined $pid);

    next unless (defined $pid); # No target process running yet?

    $rss = resident_memory($pid);

    if ($rss > $limit) {
      $timer = 0;
      logger("Restarting cinnamon: Has reached $rss MiB resident memory");
      system('/bin/systemctl', 'restart', 'mdm') == 0
        or die("Failed to restart mdm service\n");
    }
  }
};

if ($@) {
  logger("Process died: $@");
}

######## SUBROUTINES ########

sub find_target {
  my $target = shift;
  foreach my $entry (</proc/*>) {
    my ($pid) = ($entry =~ /^\/proc\/(\d+)$/);
    next unless (defined $pid);

    return $pid
      if (is_target($target, $pid));
  }
  return undef;
}

sub is_target {
  my ($target, $pid) = @_;
  my ($cmdline, $c);

  local $/; # Slurp mode

  open (F, "/proc/$pid/cmdline") or return 0;
  $cmdline = <F>;
  close F;

  return 0 unless (defined $cmdline);

  # The command line is NUL-separated; take the first component
  ($c) = ($cmdline =~ /^([^\x00]*)/);

  return $target eq $c;
}

sub resident_memory {
  my ($pid) = @_;

  local $/; # Slurp mode

  open (F, "/proc/$pid/statm") or return 0;
  my $mem = <F>;
  close F;

  return 0 unless (defined $mem);

  my @entries = ($mem =~ /(\d+)/g);

  return 0 unless (defined $entries[1]); # Should never happen

  return int($entries[1] / 256); # Convert pages to MiB, assuming 4096-byte pages
}

sub logger {
  my $msg = shift;
  system('/usr/bin/logger', "cinnamon-watchdog: $msg") == 0
    or warn("Failed to call logger\n");
}

And this service unit file to /etc/systemd/system/cinnamon-watchdog.service:

Description=Cinnamon watchdog service
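Only the Description line of the unit is shown above; a minimal sketch of a unit that would run the watchdog script (the script's path, name and Type= are my assumptions — the post doesn't name the file):

```ini
# /etc/systemd/system/cinnamon-watchdog.service (sketch)
[Unit]
Description=Cinnamon watchdog service

[Service]
Type=simple
# Script path and name are assumptions:
ExecStart=/usr/bin/perl /usr/local/bin/cinnamon-watchdog.pl

[Install]
WantedBy=multi-user.target
```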



Enable service:

# systemctl enable cinnamon-watchdog
Created symlink from /etc/systemd/system/ to /etc/systemd/system/cinnamon-watchdog.service.

That's it. The service will be active on the next reboot.

A few comments on the Perl script

  • The script monitors the memory consumption because it’s a simple and safe indication of the problem, not the problem itself.
  • It accesses the files under /proc/ directly rather than obtaining the information from e.g. ps. I suppose the proc format is more stable than ps’ output. The ps utility obtains its information from exactly the same source.
  • The script exits voluntarily after slightly more than $timeout seconds after its invocation or its last intervention. In other words, if the cinnamon process hogs memory after its first five minutes, the script won’t be there to do anything. Neither should it. Its purpose is very specific.
  • The 200 MiB limit is based upon experience with my own system: It usually ends up with 123 MiB or so.
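One more remark on resident_memory(): the division by 256 assumes 4096-byte pages (256 pages per MiB). A small Python sketch of the same computation, with the page size made explicit (the function name is mine):

```python
def statm_rss_mib(statm_text, page_size=4096):
    """Resident set size in MiB, from the contents of /proc/PID/statm.

    The second whitespace-separated field of statm is the number of
    resident pages; with 4096-byte pages, 256 pages make one MiB,
    which is where the Perl script's "/ 256" comes from.
    """
    resident_pages = int(statm_text.split()[1])
    return resident_pages * page_size // (1024 * 1024)

# 51200 resident pages at 4 KiB each is exactly 200 MiB:
print(statm_rss_mib("99999 51200 300 10 0 500 0"))  # 200
```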

Random vaguely related notes

Before going for this ugly solution, I actually tried to find the root of the problem (and obviously failed). Below are some random pieces of info I picked up while doing so.

The mdm setup files, in this order:

  • /usr/share/mdm/defaults.conf
  • /usr/share/mdm/distro.conf
  • /etc/mdm/mdm.conf

Things I tried that didn’t help:

  • Removing the Background line from /var/lib/AccountsService/users/ as suggested below (didn't make any difference)
  • Replacing autologin with a timed login of 10 seconds (I suspected that the problem was that AccountsService wasn't up when it was required to say which background to put).

Additional ramblings

It seemed to be a result of the AccountsService daemon attempting to fiddle with the background image.

So in /var/lib/AccountsService/users/ there's a file called "eli" (my user name, obviously), which contains, among other settings, a Background line pointing at the wallpaper image.

So what happens if I remove the Background line? Nothing. That is, the problem remains as before.

A sample .xsession-errors when things went wrong

initctl: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused
syndaemon: no process found
/etc/mdm/Xsession: Beginning session setup...
localuser:eli being added to access control list
** Message: couldn't access control socket: /run/user/1010/keyring/control: No such file or directory
** Message: couldn't access control socket: /run/user/1010/keyring/control: No such file or directory

(cinnamon-settings-daemon:2002): power-plugin-WARNING **: session inhibition not available, cinnamon-session is not available

(cinnamon-settings-daemon:2002): power-plugin-WARNING **: session inhibition not available, cinnamon-session is not available

systemd user services and Pulseaudio on Lubuntu 16.04


These are my notes as I made Pulseaudio work on an ARM v7-based Embedded Lubuntu 16.04, which doesn’t support Pulseaudio otherwise.

The goal: On a mini-distribution based upon Lubuntu, for use of others, make Pulseaudio work even though the Lubuntu desktop won’t start it. In fact, it’s supposed to run even without any X server running.

Being an embedded distribution, basically everything (except for certain daemons that drop their privileges) runs as root. Nothing one should try on a desktop computer, but it’s a very common practice on embedded mini-distros. Raspbian excluded.

The problem

There are basically two ways to run pulseaudio: Per-user (actually, per-session) and in system mode, which is discouraged, mostly for security reasons. Given that pulseaudio runs as root on a system where the user is root, I'm not so sure it would have made such a difference. One way or another, I went for user mode.

Which brings me to the actual problem: On a systemd-based OS, pulseaudio expects the XDG_RUNTIME_DIR environment variable to be set to /run/user/UID on its invocation, where UID is the numeric ID of the user that shall have access to Pulseaudio. Not only that, but this directory must exist when pulseaudio is launched.

Some systemd background

The /run/user/ tmpfs-mounted directory is maintained by pam_systemd (see man pam_systemd, and possibly this too), which adds an entry with a UID when the respective user starts its first session (typically by logging in somehow), and removes this directory (and its content) when the last session for this user is finished.

Among other things pam_systemd does when a user creates its first session is start a new instance of the system service user@.service (typically the template service /lib/systemd/system/user@.service), which runs the systemd user manager instance. This in turn calls "/lib/systemd/systemd --user" with the user ID set to the relevant UID, by virtue of a "User=" directive in the said service unit file. So this is how we end up with something like:

# systemctl status
[ ... ]
               │ └─init.scope
               │   ├─2227 /lib/systemd/systemd --user
               │   └─2232 (sd-pam)
               │ ├─2160 /bin/login -f
               │ ├─2238 -bash
[ ... ]

An important aspect of the systemd --user call is that user services are executed as required: The new user systemd instance reads unit files (and their *.wants directories) from ~/.config/systemd/user/, /etc/systemd/user/ and /usr/lib/systemd/user/.

This feature allows certain services to run as long as a specific user has at least one session open, running with this user’s UID. Quite helpful for the Pulseaudio service.

From a practical point of view, the difference from regular systemd services is that the service files are put in the systemd/user directory rather than systemd/system. The calls to systemctl also need to indicate that we're dealing with user services. The --user flag makes systemctl relate to user services that are enabled or disabled for each user separately, while the --global flag relates to user services that are enabled or disabled for all users. This is reflected in the following:

# systemctl --user enable tryuser
Created symlink from /root/.config/systemd/user/ to /etc/systemd/user/tryuser.service.

which is just for the user calling systemctl, versus

# systemctl --global enable tryuser
Created symlink /etc/systemd/user/, pointing to /etc/systemd/user/tryuser.service.

which enables a user service for each user that has a session in the system. Note that in both cases, the original service unit file was in the same directory, /etc/systemd/user/.
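For the record, a trivial sketch of what a throwaway tryuser.service could contain (contents are my invention, just something to enable and disable):

```ini
# /etc/systemd/user/tryuser.service (hypothetical)
[Unit]
Description=Try-out user service

[Service]
ExecStart=/bin/sleep 3600

[Install]
# For user services, default.target is the usual "run when the user's
# manager starts" hook:
WantedBy=default.target
```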

Another little remark: Since the launch of a user service depends on a very specific event (at least one user having a session), the WantedBy directive is typically set to default.target. No need to fuss.

And almost needless to say, there might be several user services running in parallel, one for each user having an active session. Note however that neither "sudo" nor "su" generates a new session, as they merely start a process (with a shell) as another user (root).

A pulseaudio service

This solution isn’t perfect, but will probably work well for most people. That is, those who simply log in as themselves (or use auto-login).

Add this as /etc/systemd/user/my_pulseaudio.service:

Description=User Pulseaudio service
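The directives below the Description line are a reconstruction based on the remarks that follow (the dependency on dbus.service and the %U substitution); the pulseaudio path and flags are assumptions:

```ini
# /etc/systemd/user/my_pulseaudio.service (sketch)
[Unit]
Description=User Pulseaudio service
Requires=dbus.service
After=dbus.service

[Service]
# %U is substituted with the numeric UID of the session's user:
Environment=XDG_RUNTIME_DIR=/run/user/%U
# Daemon path and flags are assumptions:
ExecStart=/usr/bin/pulseaudio --daemonize=no

[Install]
WantedBy=default.target
```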



Note that the service depends on, and runs after, dbus.service, since Pulseaudio attempts to connect to DBus, among others. Probably not very important, as DBus is likely to already run when a user session starts. Besides, in my specific case, the connection with DBus failed, and yet Pulseaudio worked fine. From /var/log/syslog:

Nov 30 16:28:18 localhost pulseaudio[2093]: W: [pulseaudio] main.c: Unable to contact D-Bus: org.freedesktop.DBus.Error.NotSupported: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11

Also pay attention to the %U specifier, which is substituted with the numeric UID when the environment is set.

To enable this service for all users,

 # systemctl --global enable my_pulseaudio
    Created symlink /etc/systemd/user/, pointing to /etc/systemd/user/my_pulseaudio.service.

The multi-user problem

In a way, it's quite interesting that the standard method for using Pulseaudio gives access to a single user. In particular, as Pulseaudio was originally developed to allow simultaneous access to the sound hardware by several pieces of software. Even so, it ended up tuned to the single-user scenario: One single user sitting in front of the computer. If there are processes belonging to other users, they are bogus users, such as "apache" or "dovecot", which are intended for controlling privileges.

In recent distributions, it seems like pulseaudio is started by the X server, with the user owning it (i.e. the one who has logged in, typically through the GUI login interface). Once again, access to the sound card is given to one single user, who is supposedly sitting in front of a computer. This is the model Microsoft grew its operating systems with.

There’s always the system mode alternative or allowing TCP connections, but these are not mainstream. In theory, there should have been some privilege access mechanism for Pulseaudio, allowing an arbitrary group of clients to connect as required. But since the by far most common usage scenario of sound is part of a GUI experience for the user in front of the machine, Pulseaudio was shaped accordingly.

The main problem of the solution above is that it doesn’t work in a multi-user scenario: The first user creating a session will have audio access. If another user creates a session, the attempt to launch pulseaudio will fail, leaving this user without sound.

To demonstrate how absurd it can get, suppose that user X connects to the system through ssh (and hence generates a session). This user will have pulseaudio running on its behalf, even though it can’t play any sound on the system. Then user Y logs in on the GUI console, but its pulseaudio service fails to launch, because there’s already one running. To make things even more absurd, if user X logs out, its pulseaudio service is killed, and a service for user Y doesn’t start, because what would kick it off?

It doesn’t help checking if the session that launched my_pulseaudio has a console (man loginctl for example), because user X might log in with ssh first (my_pulseaudio launched, but doesn’t activate a pulseaudio process) and then through the console (my_pulseaudio won’t be launched).

In reality, none of these problems are expected to happen often. We all end up logging into a computer with one specific user.

A less preferred alternative: Waiting for /run/user/0

As I tried to find a proper solution, I came up with the idea to launch the service when /run/user/0 is created. It's a working solution in my specific case, where only root is expected to log in.

The idea is simple. Make a path-based launch of the service when /run/user/0 is created.

/etc/systemd/system/my_pulseaudio.path is then:

Description=Detection of session for user 0 before launching Pulseaudio
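Beyond the Description line, a sketch of the directives such a path unit would need (the specifics are my assumptions, guided by the description above):

```ini
# /etc/systemd/system/my_pulseaudio.path (sketch)
[Unit]
Description=Detection of session for user 0 before launching Pulseaudio

[Path]
# Fires my_pulseaudio.service (same stem) when the directory appears:
PathExists=/run/user/0

[Install]
WantedBy=multi-user.target
```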



And the following is /etc/systemd/system/my_pulseaudio.service:

Description=Pulseaudio for user 0
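And a sketch of the service's likely directives (daemon path and flags are assumptions):

```ini
# /etc/systemd/system/my_pulseaudio.service (sketch)
[Unit]
Description=Pulseaudio for user 0

[Service]
Environment=XDG_RUNTIME_DIR=/run/user/0
# Daemon path and flags are assumptions:
ExecStart=/usr/bin/pulseaudio --daemonize=no
# No [Install] section: this unit is started by my_pulseaudio.path,
# which is the unit that gets enabled.
```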


and then enable the service with

# systemctl enable my_pulseaudio.path
Created symlink from /etc/systemd/system/ to /etc/systemd/system/my_pulseaudio.path.

This works effectively like the user service for pulseaudio suggested above, but only for user 0. It might be modified to watch for changes in /run/user, and launch a script which evaluates which actions to take (possibly killing one pulseaudio daemon in favor of another?). But it will still not solve the case where a user logs in with ssh first, and then through the console. So the perfect solution should probably hook on session starts and terminations. However that is done.

The updated DVB-T channels in Israel


On November 1, 2017, the Israeli broadcast authorities split Channel 2 into two channels, 12 and 13. To avoid unfair competition with the other channels, as channel 2 might lose viewers due to the confusion, channel 10 was moved to 14 as well. This requires a rescan of DVB receivers. The changes are rather minor, it turns out.

For the record, I still haven't attempted to receive the DVB-T2 channels. Not enough motivation, and it probably requires some work with my DVB-T2 dongle (drivers etc.)

The scan

Running a scan in Haifa, Israel, on November 1st 2017 (quite evidently, I received on 538 MHz):

$ dvbv5-scan /usr/share/dvb/dvb-t/il-All
Cannot calc frequency shift. Either bandwidth/symbol-rate is unavailable (yet).
Scanning frequency #1 514000000
Scanning frequency #2 538000000
Lock   (0x1f)
Service Ch 11, provider Idan +: digital television
Service Ch 23, provider Idan +: digital television
Service Ch 12 Keshet, provider Idan +: digital television
Service Ch 13 Reshet, provider Idan +: digital television
Service Ch 14 10, provider Idan +: digital television
Service Ch 99, provider Idan +: digital television
Service Tarbut, provider Idan +: digital radio
Service Bet, provider Idan +: digital radio
Service Gimmel, provider Idan +: digital radio
Service Makan, provider Idan +: digital radio
Service Moreshet, provider Idan +: digital radio
Service Kan88, provider Idan +: digital radio
Service Musica, provider Idan +: digital radio
Service Reka, provider Idan +: digital radio
Service Galatz, provider Idan +: digital radio
Service Galgalatz, provider Idan +: digital radio
Service Radios, provider Idan +: digital radio
Service Kol Hai, provider Idan +: digital radio
Service Kol Barama, provider Idan +: digital radio
Service Lev HaMdina, provider Idan +: digital radio
Service CLASSICAL bu, provider Idan +: digital radio

Looking at the new dvb_channel.conf, the following changes are evident:

  • "Ch 1" has been renamed to "Ch 11", and a new audio PID has been added to it (2563)
  • "Ch 2" has been replaced with "Ch 12 Keshet". Two subtitle PIDs have been dropped (2610 and 2609)
  • "Ch 10" has been replaced with "Ch 14 10", with the same PIDs for video/audio, but the subtitles have moved from 2642, 2641 and 2640 to 1039, 1038 and 1037.
  • "Ch 33" has been replaced with "Ch 13 Reshet", with subtitle PID 1033 added.
  • The radio channel "Aleph" has been renamed to "Tarbut"
  • The radio channel "Dalet" has been renamed to "Makan"
  • The radio channel "88FM" has been renamed to "Kan88".
  • A new radio channel, "Kol Hai", has been added with service ID 40.

So from a DVB point of view, “Ch 12 Keshet” is the new Channel 2.
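As an aside, dvb_channel.conf is a simple sectioned key-value format (bracketed channel names, possibly multiple space-separated values per key). A Python sketch of a parser, just to make the structure concrete (not part of any official tooling):

```python
# Sketch: parse the dvbv5 channel file format into a dict of dicts.
# Section names are in brackets; each "KEY = values" line may carry
# several space-separated values (e.g. multiple AUDIO_PID entries).

def parse_dvb_conf(text):
    channels = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]
            channels[current] = {}
        elif current is not None and "=" in line:
            key, _, values = line.partition("=")
            channels[current][key.strip()] = values.split()
    return channels

sample = """[Ch 11]
 VIDEO_PID = 2561
 AUDIO_PID = 2562 2563
 FREQUENCY = 538000000
"""
print(parse_dvb_conf(sample)["Ch 11"]["AUDIO_PID"])  # ['2562', '2563']
```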

The updated dvb_channel.conf

Obtained with the dvb5-scan as shown above:

[Ch 11]
 VIDEO_PID = 2561
 AUDIO_PID = 2562 2563
 PID_06 = 1025
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Ch 23]
 VIDEO_PID = 3585
 AUDIO_PID = 3586
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Ch 12 Keshet]
 VIDEO_PID = 2593
 AUDIO_PID = 2594 2595
 PID_06 = 2608
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Ch 13 Reshet]
 VIDEO_PID = 2657
 AUDIO_PID = 2658
 PID_06 = 1033
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Ch 14 10]
 VIDEO_PID = 2625
 AUDIO_PID = 2626 2627
 PID_06 = 1039 1038 1037
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Ch 99]
 VIDEO_PID = 2689
 AUDIO_PID = 2690
 PID_06 = 2704
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Tarbut]
 AUDIO_PID = 2817
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Bet]
 AUDIO_PID = 2833
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Gimmel]
 AUDIO_PID = 2849
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Makan]
 AUDIO_PID = 2865
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Moreshet]
 AUDIO_PID = 2881
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Kan88]
 AUDIO_PID = 2897
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Musica]
 AUDIO_PID = 2913
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Reka]
 AUDIO_PID = 2929
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Galatz]
 AUDIO_PID = 2945
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Galgalatz]
 AUDIO_PID = 2961
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Radios]
 AUDIO_PID = 3057
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Kol Hai]
 AUDIO_PID = 3121
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Kol Barama]
 AUDIO_PID = 3137
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[Lev HaMdina]
 AUDIO_PID = 3153
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

[CLASSICAL bu]
 AUDIO_PID = 3201
 FREQUENCY = 538000000
 BANDWIDTH_HZ = 8000000

The updated stream list

Obtained with ffplay, as shown on this post.

Program 1
 service_name    : ?Ch 11
 service_provider: ?Idan +
 Stream #0:26[0x401](heb): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
 Stream #0:6[0xa01]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt470bg), 720x576 [SAR 12:11 DAR 15:11], 25 fps, 50 tbr, 90k tbn, 50 tbc
 Stream #0:27[0xa02]: Audio: aac_latm (HE-AACv2) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Stream #0:28[0xa03]: Audio: aac_latm (HE-AAC) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 2
 service_name    : ?Ch 23
 service_provider: ?Idan +
 Stream #0:0[0xe01]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt470bg), 720x576 [SAR 12:11 DAR 15:11], 25 fps, 25 tbr, 90k tbn, 50 tbc
 Stream #0:1[0xe02]: Audio: aac_latm (HE-AACv2) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 3
 service_name    : ?Ch 12 Keshet
 service_provider: ?Idan +
 Stream #0:17[0xa21]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt470bg), 720x576 [SAR 12:11 DAR 15:11], 25 fps, 50 tbr, 90k tbn, 50 tbc
 Stream #0:18[0xa22]: Audio: aac_latm (HE-AACv2) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Stream #0:19[0xa23]: Audio: aac_latm (HE-AACv2) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Stream #0:20[0xa30](heb): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
 Program 4
 service_name    : ?Ch 13 Reshet
 service_provider: ?Idan +
 Stream #0:15[0x409](heb): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
 Stream #0:7[0xa61]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt470bg), 720x576 [SAR 12:11 DAR 15:11], 25 fps, 25 tbr, 90k tbn, 50 tbc
 Stream #0:16[0xa62]: Audio: aac_latm (HE-AACv2) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 5
 service_name    : ?Ch 14 10
 service_provider: ?Idan +
 Stream #0:32[0x40d](heb): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
 Stream #0:33[0x40e](rus): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
 Stream #0:34[0x40f](ara): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
 Stream #0:29[0xa41]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt470bg), 720x576 [SAR 12:11 DAR 15:11], 25 fps, 25 tbr, 90k tbn, 50 tbc
 Stream #0:35[0xa42]: Audio: aac_latm (HE-AACv2) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Stream #0:36[0xa43]: Audio: aac_latm (HE-AACv2) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 6
 service_name    : ?Ch 99
 service_provider: ?Idan +
 Stream #0:21[0xa81]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt470bg), 720x576 [SAR 12:11 DAR 15:11], 25 fps, 50 tbr, 90k tbn, 50 tbc
 Stream #0:22[0xa82]: Audio: aac_latm (HE-AACv2) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Stream #0:23[0xa90](heb): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
 Program 21
 service_name    : Tarbut
 service_provider: Idan +
 Stream #0:30[0xb01]: Audio: aac_latm (HE-AAC) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 22
 service_name    : Bet
 service_provider: Idan +
 Stream #0:24[0xb11]: Audio: aac_latm (HE-AAC) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 23
 service_name    : Gimmel
 service_provider: Idan +
 Stream #0:31[0xb21]: Audio: aac_latm (HE-AAC) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 24
 service_name    : Makan
 service_provider: Idan +
 Stream #0:12[0xb31]: Audio: aac_latm (HE-AAC) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 25
 service_name    : Moreshet
 service_provider: Idan +
 Stream #0:11[0xb41]: Audio: aac_latm (HE-AAC) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 26
 service_name    : Kan88
 service_provider: Idan +
 Stream #0:14[0xb51]: Audio: aac_latm (HE-AAC) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 27
 service_name    : Musica
 service_provider: Idan +
 Stream #0:3[0xb61]: Audio: aac_latm (HE-AAC) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 28
 service_name    : Reka
 service_provider: Idan +
 Stream #0:8[0xb71]: Audio: aac_latm (HE-AAC) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 29
 service_name    : Galatz
 service_provider: Idan +
 Stream #0:13[0xb81]: Audio: aac_latm (HE-AAC) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 30
 service_name    : Galgalatz
 service_provider: Idan +
 Stream #0:5[0xb91]: Audio: aac_latm (HE-AAC) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 36
 service_name    : Radios
 service_provider: Idan +
 Stream #0:2[0xbf1]: Audio: aac_latm (HE-AAC) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 40
 service_name    : Kol Hai
 service_provider: Idan +
 Stream #0:25[0xc31]: Audio: aac_latm (HE-AAC) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 41
 service_name    : Kol Barama
 service_provider: Idan +
 Stream #0:9[0xc41]: Audio: aac_latm (HE-AAC) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 42
 service_name    : Lev HaMdina
 service_provider: Idan +
 Stream #0:4[0xc51]: Audio: aac_latm (HE-AACv2) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp
 Program 45
 service_name    : CLASSICAL bu
 service_provider: Idan +
 Stream #0:10[0xc81]: Audio: aac_latm (HE-AAC) ([17][0][0][0] / 0x0011), 48000 Hz, stereo, fltp

Let’s hope they don’t fiddle with this anymore…

Outgoing SMTP mail servers considerations

Mail with as From address vanishing

It started really badly: Someone asked me why he hadn’t received an answer from me in two weeks, even though I had answered his mail the same day I got his.

It turned out that Gmail had thrown the mail into a black hole without any warning. Probably due to the updated DMARC policy, which has been mentioned for a long time now.

Solution: When using as a From address, also use as the outgoing SMTP mail server. This ensures that the mail arrives properly to recipients as well.

Bonus: The sent mails appear in Google’s web interface’s Sent folder as well. I would have done well without this favor.

That was really annoying, frankly speaking.

SPF entries

The topic is explained on this page. It’s important to keep this entry updated if and when I change outgoing servers.

This mechanism helps spam filters tell if the sender of the mail is authentic, by looking for an SPF record in the domain name’s TXT record.

For example, since I’m relaying through my web host’s server, the desired SPF record should read:

v=spf1 +a +mx +ip4: +ip4: +ip4: +ip4: +ip4: -all
  • The +a part means to pass the mail if the sender matches the A record of the sending domain
  • The +mx means the same for the MX entry
  • The other IP4 parts say that if these addresses (or address blocks) appear, pass the mail
  • Finally, the -all part at the end says that if none of the previous entries matches, drop the mail

(In Cpanel, go to EMail > Authentication to set this up)
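To get a feel for how the mechanisms are evaluated in order, here's a simplified Python sketch of the ip4 part of SPF checking. It handles only +ip4: and -all (the a and mx mechanisms would require DNS lookups), and the addresses below are made-up documentation ranges, not my real servers.

```python
import ipaddress

# Simplified evaluation of the ip4 mechanisms in an SPF record, in the
# order they appear, with -all as the final verdict. The +a and +mx
# mechanisms would require DNS lookups and are not handled here. The
# addresses are documentation ranges, not anyone's real mail servers.

def spf_ip4_check(record, sender_ip):
    ip = ipaddress.ip_address(sender_ip)
    for mech in record.split():
        if mech.startswith('+ip4:'):
            net = ipaddress.ip_network(mech[len('+ip4:'):], strict=False)
            if ip in net:
                return 'pass'
        elif mech == '-all':
            return 'fail'
    return 'neutral'

record = 'v=spf1 +ip4:192.0.2.0/24 +ip4:198.51.100.17 -all'
```

For example, spf_ip4_check(record, '192.0.2.55') passes, while anything outside those blocks hits -all and fails.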

A word about “include records”. I used to have one saying “”, which means “get the SPF record from”, and add whatever records they have. Which makes sense in a way, since their servers are expected to appear on the list. On the other hand, if this record is missing (or their DNS is temporarily out of business), it’s a fatal error. So I’m not happy about this idea. The solution above, copying their records (which is the long list of IP4 address blocks), is suboptimal in that I may miss some new servers or so, but this can’t cause a fatal error.

Checking the current SPF record

$ dig txt

; <<>> DiG 9.6.2-P2-RedHat-9.6.2-5.P2.fc12 <<>> txt
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1406
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;            IN    TXT

;; ANSWER SECTION:        14400    IN    TXT    "v=spf1 +a +mx +ip4: +ip4: +ip4: +ip4: +ip4: -all"

;; AUTHORITY SECTION:        28920    IN    NS        28920    IN    NS

;; ADDITIONAL SECTION: 4736 IN    A 131050 IN    A

;; Query time: 173 msec
;; WHEN: Thu Sep  7 18:23:52 2017
;; MSG SIZE  rcvd: 256

Checking directly on the authoritative DNS is better when monitoring quick changes.


Gtkwave notes


Gtkwave is a simple and convenient viewer of electronic waveforms. It’s a free software tool at its best: A bit rough to start working with, but after a while it becomes clear that the decisions have been made by someone who uses the tool himself. Really recommended.

So here are my jumpstart notes for the next time I’ll need it.


$ gtkwave this.vcd &

Note to self: My own utility for generating VCD files from raw dumps, named dump2vcd, is in the utils repository.
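For reference, the VCD format itself is plain text and easy to generate. The following Python toy (this is NOT the dump2vcd utility mentioned above; the names here are made up for illustration) emits a minimal VCD file that gtkwave can open.

```python
# A toy VCD writer, to illustrate the file format gtkwave reads.
# Only single-bit wires are handled, for simplicity.

def write_vcd(signals, samples, timescale='1 ns'):
    # signals: list of names. samples: list of (time, {name: '0' or '1'}).
    ids = {name: chr(33 + i) for i, name in enumerate(signals)}
    lines = ['$timescale %s $end' % timescale, '$scope module top $end']
    for name, ident in ids.items():
        lines.append('$var wire 1 %s %s $end' % (ident, name))
    lines += ['$upscope $end', '$enddefinitions $end']
    for t, values in samples:
        lines.append('#%d' % t)
        for name, bit in values.items():
            lines.append(bit + ids[name])
    return '\n'.join(lines) + '\n'

vcd = write_vcd(['clk'], [(0, {'clk': '0'}), (5, {'clk': '1'})])
```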

Saving a signal view set

To have a certain set of signals shown, as well as a certain time region, create a “Save File”: In the top menu, go to File > Write Save File. The automatic filename extension is .sav.

At the next session, just read the save file with File > Read Save File. Or from the command line e.g.

$ gtkwave this.vcd allview.sav &


Note that hovering the mouse pointer over a certain signal’s trace makes the marker sticky on that signal’s transitions.

  • Plain left-click: Sets the “current position”. Values seen at left.
  • Left click and drag: Time difference shown at the top
  • Right-click and drag: Zoom in
  • Middle-button (wheel) click: Sets the “base marker”. Then move the primary marker around with plain left-clicks, and the time difference between the base marker and the primary one will appear at the top.

Other nice stuff

  • Find Next / Previous Edge (plain right/left arrows at the toolbar). Select one or several signals on the Signals list, and click on these icons to jump to the next edge of those signals.
  • If a set of signals is named with index suffixes, e.g. state[0] and state[1], Gtkwave automagically shows them as a vector (e.g. state[1:0]).


Notes on USB 1.1 low-level protocol for FPGA implementation


These are the considerations and design decisions I took when designing a transparent hub for low- and full-speed USB (that is, everything covered by USB 1.1, but not the high-speed mode introduced with USB 2.0).

A transparent 1:1 hub is a device with one male USB plug, and one female plug. It basically substitutes an extension cable, and should not be visible or have any practical influence on the connected parties. The idea is to repeat the signals from one end to another by virtue of two ULPI PHY frontends, one connected to each side, and both connected to an FPGA in the middle.

This is not the recommended practice if all you want is to sniff the data: In that case, just connecting the D+/D- wires directly to a couple of 3.3V-compatible inputs of the FPGA in parallel to the cable is the way to go. Literally tapping the wire (you might need to suppress the chirps that upgrade the link to high-speed, if that’s an issue). A transparent hub is only required if intervention in the traffic is desired, on top of sniffing capabilities.


  • Full operation is required, including connect / disconnect, suspend and resume. It may appear redundant to support suspend on a system that is never turned off or hibernated, but I’ve seen Linux suspending USB hubs when nothing is connected to its downstream ports. Once something is connected to one of the hub’s ports, it issues a resume (long K, see below) signal towards the host, and the host replies with a long K resume signal before SOF packets are sent. Besides, Windows supports a Selective Suspend feature which may suspend certain idle devices to save energy. These are just examples.
  • Both low and full speed must be supported. The logic will not be able to deduce which speed is in effect based upon pullup resistors from the device.
    Reason I: In the specific project, the board has a pullup resistor only on the D+ wire going to the host. The solution was to swap the D+/D- wires on both sides when a low-speed device is used. As a result, the D+ wire seen by the FPGA will always be pulled up on the device side in either case.
    Reason II: If a (regular) hub is connected as a device to the transparent hub, it always presents itself as a full-speed device with the pullup resistor. Low-speed data is sent just by reducing the rate in this case, in both directions (see “SE0 detection when a hub is connected as a device” below).
  • There will be no additional intermediate elements (i.e. hubs) between the connected machines, but there may be hubs inside those.
  • The PHY device on both ends is TI’s TUSB1106 USB transceiver.

Comparison with a regular hub

A transparent hub, like a regular one, is required to repeat a variety of signals going from one end to another. The main challenges are to keep track of which side drives the wires, and prevent accidental signal level transitions (noise and differential signaling imbalance) from generating errors.

The USB spec defines maximal delays and jitter for a hub, which are then used to calculate the overall link jitter and turnaround delay. Since the transparent hub is designed to be the only element between the host and device (more or less, as the machines may contain internal hubs), the total jitter and delay may be generated by this single element (again, more or less).

As for jitter, the USB 1.1 spec section 7.1.15 says: “Data receivers are required to decode differential data transitions that occur in a window plus and minus a nominal quarter bit cell from the nominal (centered) data edge position”. This is indeed reflected in the total jitter allowed, as presented in tables 7-2 and 7-3 of the same document (20 ns for full speed).

The turnaround delay for low and full speed is defined in USB 1.1 spec section 7.1.19, which sets the timeout at 16-18 bit times (for both speeds). This calculation takes 5 hubs into consideration, as well as the device’s delay; in our case, the single element may add a few bit times of delay (I never calculated the exact figure, as my implementation went way below this limit).
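For quick reference, here's the arithmetic in Python: the bit times for both speed modes, and the 16-18 bit-time turnaround window converted to nanoseconds.

```python
# Bit times for the two speed modes, and the 16-18 bit-time turnaround
# timeout of USB 1.1 section 7.1.19, expressed in nanoseconds.

FULL_SPEED_BIT_NS = 1e9 / 12e6     # 12 Mbit/s -> ~83.3 ns
LOW_SPEED_BIT_NS = 1e9 / 1.5e6     # 1.5 Mbit/s -> ~666.7 ns

def turnaround_window_ns(bit_ns):
    # (minimum, maximum) time before the timeout may be declared
    return (16 * bit_ns, 18 * bit_ns)
```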

The following functionalities are required from a regular hub, but can be omitted from a transparent one:

  • Suspend: A hub is required to put itself into a low-power state (Suspend) in the event of no activity on the bus (like any USB device). This isn’t necessary, as the transparent hub doesn’t consume current from the USB wire.
  • Babble / Loss of Activity (LOA) detection: A regular hub must detect a device that transmits junk on the wires and thereby prevents the other devices (those connected to the same hub) from communicating with the host. The hub should kick in if the device doesn’t release the bus with an EOP before the end of the USB frame (once in 1 ms). A transparent hub doesn’t need to take action, since there are no neighboring devices competing for access. If a device is babbling or otherwise holding the wires, the host will soon detect that and take action.
  • Frame tracking: A regular hub must lock on the USB frames (by detecting SOF packets and maintaining a timer). However the purpose of this lock is to detect babble. Hence this isn’t required either.
  • Reset: A regular hub generates the SE0 signal required to reset a device when it’s hot-plugged to it. The transparent hub merely enables the pullup resistor on the upstream port in response to a hotplug on its downstream port, letting the host issue the reset.

Possible bus events

This is a list of events that need proper relaying to the other side.

The repeater should expect the following events from the host:

  • Packet (low or full speed): transition to K followed by SYNC and packet data
  • Long SE0 for reset (2.5 us, even though it’s expected to be several ms)
  • Keep-alive for low-speed device, 2 low-speed bits of SE0
  • Resume signaling (long K), followed by a low-speed EOP
  • PRE packets. See below.

The repeater should expect the following events from the device:

  • Packet (low or full speed): transition to K followed by SYNC and packet data
  • Long SE0 (2 us) = disconnection
  • Resume signaling (long K), followed by release of the bus (no EOP nor driven J).

Note that since the transparent hub doesn’t keep track of frame timing, it doesn’t detect 3 ms of no traffic, and hence doesn’t know when Suspend is required. Therefore, Suspend and Resume signaling is treated like any other signals, and no timeout mechanism is applied on long K periods.

One thing that might be confusing is that when the upstream port is disconnected, it might catch noise, in particular from the electrical network. Both single-ended inputs may change states at a rate of 50 or 100 Hz (assuming a 50 Hz network), since no termination is attached to either port. This is a non-issue, as there’s nothing to disrupt at that point, but it may be mistaken for a problem.

Voltage transitions

Like any USB element, a transparent hub must detect transitions between J, K, and SE0 correctly, avoiding false detections.

The TUSB1106 transceiver presents the D+/D- wire pairs’ state by virtue of three logic signals: RCV, VP and VM. RCV represents the differential voltage state (J or K, given the speed mode), while VP and VM represent the single-ended voltages. The purpose of VP and VM is detecting an SE0 state, in which case both are low. The other possibilities are either illegal (i.e. both high, as single-ended ’1′ isn’t allowed per USB spec) or redundant (if they are different, RCV should be used, as it measures the differential voltage, rather than two single-ended voltages, and is therefore more immune to noise).

Any USB receiver is particularly sensitive to transitions between J, K and SE0. For example, a transition from J to K after idling means SOP (Start of Packet) and immediately causes the hub repeater to drive the other side. Likewise, a transition into SE0 while a packet is transmitted signals the end of packet (EOP).

The timing of the transitions is crucial, as a DPLL is maintained on the receiving side to lock on the arriving bits. In short, detecting voltage transitions timely and correctly is crucial.

The main difficulty is that switching from J to K essentially involves one of the D+/D- wires going from low to high, and the other one from high to low. Somewhere in the middle, they might both be high or both low. And if both are low, an SE0 condition occurs.

The USB 1.1 spec section 7.1.4 says: “Both D+ and D- may temporarily be less than Vih(min) during differential signal transitions. This period can be up to 14ns (TFST) for full-speed transitions and up to 210ns (TLST) for low-speed transitions. Logic in the receiver must ensure that this is not interpreted as an SE0.”

So a temporary “detection” of SE0 may be false, and this is discussed next. But before that, can a transition into J or K also be false? Let’s divide this into cases:

  • A transition from J to K or vice versa: This is detected by a change in the RCV input. The TUSB1106′s datasheet promises no transition in RCV when going into SE0 in its Table 5. So if RCV toggles when the previous state was J or K, it’s not an SE0. We’ll have to trust TI’s transceiver on this. Besides, going from J or K necessarily involves both wires’ voltage changes, but going to SE0 requires only one wire to move. So a properly designed receiver will not toggle RCV before both wires have changed polarity within legal high/low ranges (did I say TI?). This doesn’t contradict that VP and VM may show an SE0 momentarily, as already mentioned.
  • A transition from SE0 to J. This is detected by one of VM or VP going high. This requires one of the wires to reach the high voltage level. This can, in principle, happen momentarily due to single-ended noise (SE0 can be 20 ms long), so requiring that the VM or VP signal remains high for say, 41 ns before declaring the exit of SE0 may be in place. As discussed below, the uses of SE0 are such that it’s never required to time the exit of this state accurately.
  • A transition from SE0 to K. Even though this kind of transition is mentioned in the USB 1.1 spec’s requirement for the hub’s repeater (e.g. see section, there is no situation in the USB protocol in which such a transition occurs. I’ve also set a trigger for such an event, and tried several pieces of electronics, none of which generated such a transition. Consequently, there’s no reason to handle this case.

Avoiding false SE0 detection

The SE0 state may appear in the following cases:

  • EOP in either direction, following a packet. The length of the SE0 is approximately 2 bit times (depending on the speed mode), but should be detected after 1 bit time. See “SE0 detection when a hub is connected as a device” below for a possible problem with this.
  • End of a resume signal from host (K state for > 20 ms): A low-speed EOP (two low-speed bit times of SE0).
  • A long SE0 from host (should be detected at 2.5 us, generated as 10 ms): Reset the device
  • A long SE0 from device (should be detected at 2.0 us): Disconnect
  • Keep-alive signaling from host (low-speed only): 2 low-speed bits times of SE0 followed by J.

As mentioned above, a false SE0 may be present for 14 ns on full-speed mode (17% of a bit time) and 210ns in low-speed mode (32% of a bit time). The difference is because the rise and fall times of the D+/D- depend on the speed mode.

Except for detecting EOP in full-speed mode, it’s fine to ignore any SE0 shorter than 210 ns, as all other possibilities are significantly longer.

For detecting EOP in full-speed mode, the 14ns lower limit is enforced instead. If the speed mode can’t be derived from the pullup resistors (as was my case), it’s known from the packet itself. For example, from timing the second transition in the packet’s SYNC pattern.

The truth is, that looking at the output of the TUSB1106 USB transceiver when connected to real devices, it seems like the transceiver was designed to allow its user to be unaware of these issues. Clearly, an effort has been made to avoid these false SE0s in its design.

The VP and VM ports tend to toggle together, and in some cases (depending what USB device is tested) VP and VM were both high for a few nanoseconds on J/K transitions. This was even more evident in low-speed mode. Most likely, seeing both low at any time is enough to detect an SE0, despite the precautions required in the spec.

Also, the RCV signal toggled somewhere in the middle between VP and VM moving, or along with one of these, but never when SE0 was about to appear. So the unaware user of this chip could simply consider any movement in RCV as a true data toggle, and if VM and VP are low, detect SE0 without fussing too much about it.

Or, one can add these mechanisms like I did, just in case.

Summary of J/K/SE0 transition detection

The precautious way:

  • From J or K: If RCV toggles, register the transition to the opposite J/K symbol immediately. Ignore further transitions for 41 ns (half a full-speed bit) to avoid false transitions due to noise (I bet this is unnecessary — I’ve never seen any glitches on the RCV signal).
    If VP and VM are both zero for 210 ns (or 14 ns, if in the midst of a full-speed packet), register a transition to SE0.
  • From SE0: If either VM or VP are high for 41 ns, register a transition to J.

Lazy man’s implementation (will most likely work fine):

  • From J or K: If RCV toggles, register the transition to the opposite J/K symbol. If VM and VP are both zero, register an SE0 (immediately).
  • From SE0: If either VM or VP are high for 41 ns, register a transition to J (immediately).
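The actual implementation was in HDL, of course, but the “precautious” scheme can be sketched in Python as a sample-by-sample decoder, just to make the logic concrete. The sample period and the thresholds (given here in samples) are the caller's problem; the structure is what matters.

```python
# Sketch of the "precautious" J/K/SE0 decoder: each sample is a tuple
# (RCV, VP, VM). An RCV toggle registers the opposite J/K symbol
# immediately; SE0 is registered only after se0_thresh consecutive
# samples with VP = VM = 0; SE0 is exited to J only after exit_thresh
# consecutive samples with VP or VM high.

def decode(samples, se0_thresh, exit_thresh):
    state = 'J'
    prev_rcv = samples[0][0]
    se0_count = exit_count = 0
    states = []
    for rcv, vp, vm in samples:
        if state in ('J', 'K'):
            if rcv != prev_rcv:
                state = 'K' if state == 'J' else 'J'
                se0_count = 0
            elif vp == 0 and vm == 0:
                se0_count += 1
                if se0_count >= se0_thresh:
                    state, se0_count = 'SE0', 0
            else:
                se0_count = 0
        else:  # SE0: leave only after a sustained non-SE0 level
            if vp or vm:
                exit_count += 1
                if exit_count >= exit_thresh:
                    state, exit_count = 'J', 0
            else:
                exit_count = 0
        prev_rcv = rcv
        states.append(state)
    return states

# A short demo: J, a toggle to K, a real SE0 (3 samples), then exit to J.
demo = decode([(1, 1, 0), (0, 0, 1), (0, 0, 0), (0, 0, 0), (0, 0, 0),
               (0, 1, 0), (0, 1, 0)], se0_thresh=3, exit_thresh=2)
```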

PRE packets

Section 8.6.5 of the USB spec states that a PRE packet should be issued by the host when it needs to make a low-speed transaction and there’s a hub in the middle. The special thing about the PRE packet is that it doesn’t end with an SE0. Rather, there are 4 full-speed bit times of “nothing happening”, followed by a low-speed SYNC and packet.

This case requires some special handling, which actually isn’t described further down this post (it’s a bit complicated, quite naturally).

One tricky thing is that the state of the wires is always K after the PRE packet. In fact, it’s always K after the PID, because the SYNC itself is like transmitting 0x80, and the PID’s 8 bits always contain an even number of zeros, as half of them are the NOT of the other half. So it’s always an odd number of J/K toggles (7 zeros from SYNC, an even number from the PID part), leaving the wires at K.

The USB standard is a bit unclear on what happens after the PRE PID has been transmitted, but it’s quite safe to conclude that it has to switch to J immediately after the PRE packet has completed. The hub is expected to start relaying to the low-speed downstream ports from that moment on (up to 4 full-speed bit times later), and if the upstream port stands at K when that happens, a false beginning of SYNC will be relayed.

And indeed, this is what I’ve observed with real hardware: Immediately after the PRE packet’s last bit, the wires switch to a J state.
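The toggle-counting argument is easy to verify mechanically. The Python snippet below simulates NRZI (a 0 toggles the J/K state, a 1 keeps it) on SYNC plus a PID byte (4 bits followed by their complement), showing the bus ends at K regardless of the PID value. The PRE PID nibble is 1100 per the spec.

```python
# NRZI demo of the parity argument: a 0 toggles the wire state, a 1
# keeps it. SYNC (seven 0s, then a 1) plus any valid PID byte (4 bits
# followed by their complement) always leaves the bus at K.

def nrzi_final_state(bits, start='J'):
    state = start
    for b in bits:
        if b == 0:
            state = 'K' if state == 'J' else 'J'
    return state

def sync_plus_pid(pid_nibble):
    sync = [0, 0, 0, 0, 0, 0, 0, 1]   # 0x80, sent LSB first
    pid = [(pid_nibble >> i) & 1 for i in range(4)]
    check = [1 - b for b in pid]      # the NOT of the PID nibble
    return sync + pid + check

PRE = 0b1100                          # the PRE PID value
```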


These are the implemented states of the transparent hub. The rationale is explained after the short outline of the states.

  • Disconnected — this is the initial state, and is also possibly invoked by virtue of SE0 timers explained below. The D+/D- pullup resistor is disconnected at the upstream port (to the host), and none of the ports is driven. When a pullup resistor has been sensed for 2.5us on the downstream port, a matching pullup resistor is enabled at the upstream port. Then wait for the upstream port to change from SE0 to J, or it will be interpreted as an SE0 signal. Then switch to the Idle state. The pullup resistor remains enabled on all other states.
  • Idle — None of the ports is driven. If a ‘K’ is sensed on the upstream port, switch to the “Down” state. If ‘K’ is sensed on the downstream port, switch to the “Up” state. In both cases, this is a J-to-K transition, so it takes place immediately when RCV toggles. An SE0 condition on either port switches the state to Host_SE0 or Device_SE0, whichever applies.
  • Down — J/K symbols are repeated from the upstream to the downstream port. This state doesn’t handle SE0. If a ‘J’ symbol is sensed continuously for 8 bit times, go to “Idle”, which isn’t a normal packet termination, but a result of a missed SE0. “Bit times” means low-speed bit times, unless a full-speed sync transition was detected at the first K-to-J transition during this state. The EOP is handled by the Host_SE0 state.
  • Up — J/K symbols are repeated from the downstream to the upstream port. Exactly like “Down”, only in the opposite direction, with the same 8-bit-time timeout mechanism for returning to “Idle”.
  • Host_SE0 — This state is invoked from Idle or Down states when an SE0 is sensed on the upstream port (after waiting TLST or TFST to prevent a false SE0). The downstream port is driven with SE0. When the SE0 condition on the upstream port is exited, switch to Host_preidle.
  • Host_preidle — This state drives the downstream port with a J for a single bit’s time (depends on the speed mode) and then switches the state machine to Idle. This completes EOPs and other similar conditions.
  • Device_SE0 — This state is invoked from Idle or Down states when an SE0 is sensed on the downstream port. The upstream is driven with SE0. When the SE0 condition is exited, switch to Device_preidle.
  • Device_preidle — This state drives the upstream port with a J for a single bit’s time (depends on the speed mode) and then switches the state machine to Idle.

The states above are outlined for illustration. In a real-life implementation, it’s convenient to collapse Down, Host_SE0 and Host_preidle into a single state (an enriched “Down” state). These three states all drive the downstream port. Host_SE0 is basically Down, as it repeats the data from the upstream to the downstream port. The only special thing is that a preidle phase follows it. This SE0 to preidle sequence is easier to implement by adding state flags, rather than adding states.

Likewise, Up, Device_SE0 and Device_preidle can be collapsed into a single state.
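Just to make the outline concrete, here's a Python skeleton of the collapsed state machine. The event names are my own invention for this sketch; the real thing reacts to the decoded line conditions discussed above, with flags standing in for the SE0/preidle phases.

```python
# Skeleton of the collapsed state machine: Disconnected, Idle, Down, Up.
# Events here are invented names for the decoded line conditions; the
# *_SE0_done events stand for "SE0 sensed, then the one-bit J preidle".

def next_state(state, event):
    if event == 'long_SE0_downstream':       # 2.0 us of SE0: disconnect
        return 'Disconnected'
    if state == 'Disconnected':
        return 'Idle' if event == 'pullup_sensed' else state
    if state == 'Idle':
        if event == 'host_K':                # J-to-K from host: packet
            return 'Down'
        if event == 'device_K':
            return 'Up'
    if state == 'Down' and event == 'host_SE0_done':
        return 'Idle'
    if state == 'Up' and event == 'device_SE0_done':
        return 'Idle'
    return state
```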

SE0 counters

Two counters are maintained to keep track of how long an SE0 condition has been continuously active, one for the upstream port, and one for the downstream port. These counters are zeroed in two situations:

  • When one of VP or VM is high, indicating not being in an SE0 condition.
  • When the port in question is driven by the other port. For example, the counter for the downstream port is held at zero in the Host_SE0 state (or a reset signal from the host could have been mistaken for a disconnection). Another possible criterion is the PHY’s Output Enable signal.

These counters serve two purposes:

  • Detect non-false SE0: When the counter goes above the relevant threshold value, the SE0 condition is valid. The state switches to Host_SE0 or Device_SE0 (whichever applies) from Idle or Down/Up. This handles EOP.
  • Detect disconnection on the downstream port: When the counter reaches the value corresponding to 2.0 us, the state switches to Disconnected

Note that the transition to the *_SE0 states, and the way they are terminated by virtue of a single bit’s J, covers all of the protocol’s uses of the SE0 condition, including reset from host and keep-alives.
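Converting the time thresholds to clock ticks is trivial, but worth writing down. The 48 MHz clock below is an assumed figure for this sketch, not necessarily what was used.

```python
# Time thresholds from the text, converted to clock ticks.
# The 48 MHz clock rate is an assumption for illustration only.

def ticks(ns, clk_hz=48_000_000):
    return int(ns * clk_hz / 1e9)

TFST_NS = 14          # max false SE0 duration, full speed
TLST_NS = 210         # max false SE0 duration, low speed
DISCONNECT_NS = 2000  # SE0 on the downstream port: disconnection
RESET_NS = 2500       # SE0 from the host: reset detection threshold
```

Note that at 48 MHz, ticks(TFST_NS) is 0, i.e. the 14 ns full-speed threshold is below one clock period, so that filter needs either a faster clock or immediate detection.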

Determining the bit length

As the speed isn’t determined by pullup resistors, a detection mechanism is applied as follows:

  • A flag, “lowspeed”, is maintained to indicate the duration of a USB bit. When ’1′, low-speed is assumed (one bit is ~666 ns), and when ’0′, full-speed (~83 ns).
  • The flag is set to ’1′ in the Idle state.
  • When one of the Up or Down states is invoked by a transition from J to K, a counter measures the time until the first transition back to J. If this occurs within 6.5 full-speed bit times (~542 ns), the flag is cleared to ’0′.
  • Following transitions (until the next Idle state) are ignored for this purpose.

Rationale: The only situation where full-speed timing is required is full-speed packets. Such begin with a SYNC word, which starts with a few J/K togglings on each bit.

The time threshold could have been set slightly above 83 ns, since the first toggle is expected on the bit following the J-to-K toggle that took the state machine out of Idle. However, 6.5 full-speed bit times is better, as it gracefully handles the unlikely event that Idle is invoked in the middle of a transmitted packet (due to a false SE0, for example). Since 6 bits is the maximal run of identical bits (bit stuffing kicks in after that), the bus must toggle within 6 bits if it runs at full speed. If it doesn’t within 6.5, it’s not full-speed.

The “lowspeed” flag correctly handles other bus events, such as keep-alive signaling, which consists of two low-speed SE0 bits followed by a bit of J. Since the flag is set on Idle, it remains so when Host_SE0 is invoked, which will correctly terminate with a low-speed bit’s worth of J.
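In Python terms, the speed detection boils down to a single comparison (bit time figures as above):

```python
# The "lowspeed" flag decision: measure the duration of the first K
# period after leaving Idle. If it ends within 6.5 full-speed bit
# times, it's a full-speed SYNC; otherwise assume low speed.

FULL_SPEED_BIT_NS = 1e9 / 12e6            # ~83.3 ns
THRESHOLD_NS = 6.5 * FULL_SPEED_BIT_NS    # ~542 ns

def is_low_speed(first_k_duration_ns):
    return first_k_duration_ns > THRESHOLD_NS
```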

SE0 detection when a hub is connected as a device

There’s a slight twist when a (real) USB hub is connected as the device to the transparent hub, and a low-speed device is connected to one of the (real) hub’s downstream ports. Any (real) hub’s upstream port runs at full-speed (or higher), so the hub repeats the low-speed device’s signals on the upstream port, using full-speed’s rise/fall times, polarity and other attributes.

A practical implication is that the EOP may consist of a full-speed SE0, if it’s generated by the hub (see citations below). It’s also the hub’s responsibility to ensure that false SE0 time periods are limited to the full-speed spec (i.e. TFST rather than TLST). Hence for the specific case of detecting an EOP that was generated by the (real) hub for truncating a packet that went beyond the end of a frame, the method of detecting SE0 outlined above won’t work, because it will ignore a full-speed SE0 in the absence of a full-speed sync pattern on that packet.

This is however a rare corner case, which stems from a misbehaving device connected to the (real) hub.

Citing 11.8.4 of the USB spec:

“The upstream connection of a hub must always be a full-speed connection. [ ... ] When low-speed data is sent or received through a hub’s upstream connection, the signaling is full-speed even though the bit times are low-speed.”

and also:

“Hubs will propagate upstream-directed packets of any speed using full-speed signaling polarity and edge rates.”


“Although a low-speed device will send a low-speed EOP to properly terminate a packet, a hub may truncate a low-speed packet at the EOF1 point with a full-speed EOP. Thus, hubs must always be able to tear down connectivity in response to a full-speed EOP regardless of the data rate of the packet.”

and finally,

“Because of the slow transitions on low-speed ports, when the D+ and D- signal lines are switching between the ‘J’ and ‘K’, they may both be below 2.0V for a period of time that is longer than a full-speed bit time. A hub must ensure that these slow transitions do not result in termination of connectivity and must not result in an SE0 being sent upstream.”

Misalignment of transitions to SE0

Preventing false SE0 comes with a price: The J/K transitions are repeated immediately to the opposite port, but transitions to SE0 must be delayed until they’re confirmed to be such. The spec relates to this issue in section 7.1.14 (Hub Signal Timings, also see Table 7-8): “The EOP must be propagated through a hub in the same way as the differential signaling. The propagation delay for sensing an SE0 must be no less than the greater of the J-to-K, or K-to-J differential data delay (to avoid truncating the last data bit in a packet), but not more than 15ns greater than the larger of these differential delays at full-speed and 200ns at low-speed (to prevent creating a bit stuff error at the end of the packet).”

It’s not clear how the 200 ns requirement can be met given the 210 ns it takes to detect SE0 properly on low-speed. But 10 ns in low-speed mode is probably not much to fuss about.

In theory, it would be possible to delay repeating of the J/K transitions to the opposite port with the relevant expected SE0 delay. This is however not feasible when the speed is deduced from the first SYNC transition (see “Determining the bit length” above).

The chosen solution was to have a digital delay line for all transitions, with a delay of 100 ns. This is equivalent to 1.25 bits delay at full-speed, and slightly longer than a full-speed hub’s delay according to Table 7-8, so it’s acceptable. J/K transitions are always delayed by the full delay amount, but transitions to SE0 manipulate the segment in the delay line, so that in full-speed mode the output SE0 starts at a time that corresponds to when the SE0 was first seen, not when it was confirmed. In low-speed mode, only 100 ns are compensated for, but that’s still within spec.

High-speed suppression

Since the transparent hub supports only USB 1.1, the transition into high speed (per USB 2.0) must not occur. This is guaranteed by the suggested implementation, since the mechanism for upgrading into high speed takes place during the long SE0 period that resets the device. Namely, the USB 2.0 spec states that a high-speed capable device should drive current into the D- wire during the initial reset, while the host drives an SE0 on the wires. This creates a chirp-K state on the wires, which is essentially a slightly negative differential voltage instead of more or less zero. This is the first stage of the transition into high speed.

But since the host has already initiated an SE0 state when the chirp-K arrives from the device, the transparent hub is in the Host_SE0 state, so the voltages at the device’s side are ignored during that period. The chirp will have no significance and has no way to get passed on to the host. Hence the host will never sense anything special, and the device will give up the speed upgrade attempt.

NXP / Freescale SDMA and the art of accessing peripheral registers


While writing a custom SDMA script for copying data arriving from an eCSPI peripheral into memory, it occurred to me that there is more than one way to fetch the data from the peripheral. This post summarizes my rather decisive finding in this matter. Spoiler: Linux’ driver could have done better.

I’ve written a tutorial on SDMA scripts in general, by the way, which is recommended before diving into this one.

Using the Peripheral DMA Unit

This is the method used by the official eCSPI driver for Linux. That is, the one obtained from Freescale’s / NXP’s Linux git repository. Specifically, spi_imx_sdma_init() in drivers/spi/spi-imx.c sets up the DMA transaction with

	spi_imx->rx_config.direction = DMA_DEV_TO_MEM;
	spi_imx->rx_config.src_addr = res->start + MXC_CSPIRXDATA;
	spi_imx->rx_config.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
	spi_imx->rx_config.src_maxburst = spi_imx_get_fifosize(spi_imx) / 2;
	ret = dmaengine_slave_config(master->dma_rx, &spi_imx->rx_config);
	if (ret) {
		dev_err(dev, "error in RX dma configuration.\n");
		goto err;
	}

Since res->start points at the address resource obtained from the device tree (0x2008000 for eCSPI1), this is the very same address used for accessing the peripheral registers (except that software accesses them through the virtual address that is mapped to the relevant region).

In essence, it means issuing an stf command to set the PSA (Peripheral Source Address), and then reading the data with an ldf command on the PD register. For example, if the physical address (e.g. 0x2008000) is in register r1:

69c3 (0110100111000011) | 	stf	r1, 0xc3	# PSA = r1 for 32-bit frozen peripheral read
62c8 (0110001011001000) | 	ldf	r2, 0xc8	# Read peripheral register into r2

One would expect this to be the correct way: otherwise, why does this unit exist, and why does Linux’ driver use it? On the other hand, if this is the right way, why is there a “DMA mapping”?

Using the Burst DMA Unit

This might sound like a bizarre idea: Use the DMA unit intended for accessing RAM for peripheral registers. I wasn’t sure this would work at all, but it does: If the same address that was fed into PSA for accessing a peripheral goes into MSA instead, the data can be read correctly from MD. After all, the same address space is used by the processor, Peripheral DMA unit and Burst DMA unit, and it turns out that the buses are interconnected (which isn’t obvious).

So the example above changes into

6910 (0110100100010000) | 	stf	r1, 0x10    # To MSA, NO prefetch, address is frozen
620b (0110001000001011) | 	ldf	r2, 0x0b    # Read peripheral register into r2

The motivation for this type of access is using copy mode — a burst of up to 8 read/write operations in a single SDMA command. This is possible only from PSA to PDA, or from MSA to MDA. But there is no burst mode from PSA to MDA. So treating the peripheral register as a memory element works around this.

Spoiler: It’s not such a good idea. The speed results below tell why.

Using the SDMA internal bus mapping

The concept is surprisingly simple: It’s possible to access some peripherals’ registers directly in the SDMA assembly code’s memory space. In other words, to access eCSPI1, one can go just

5201 (0101001000000001) | 	ld	r2, (r1, 0) # Read peripheral register from plain SDMA address space

and achieve the equivalent result of the examples above. But r1 needs to be set to a different address. And this is where it gets a bit confusing.

The base address is fairly easy to obtain. For example, i.MX6′s reference manual lists the address for eCSPI1 as 0x2000 in section 2.4 (“DMA memory map”), where it also says that the relevant section spans 4 kB. Table 55-14 (“SDMA Data Memory Space”) in the same document assigns the region 0x2000-0x2fff to “per2″, declares its size as 16 kB, and in the description it says “peripheral 2 memory space (4 Kbyte peripheral’s address space)”. So what is it? 4 kB or 16 kB?

The answer is both: The address 0x2000 is given in SDMA data address format, meaning that each address points at a 32-bit word. Therefore, the SDMA map region of 0x2000-0x2fff indeed spans 16 kB. But the mapping to the peripheral registers was done in a somewhat creative way: The address offsets of the registers apply directly on the SDMA mapping’s addresses.

For example, let’s consider the ECSPI1_STATREG, which is placed at “Base address + 18h offset”. In the Application Processor’s address space, it’s quite clear that it’s 0x2008000 + 0x18 = 0x2008018. The 0x18 offset means 0x18 (24 in decimal) bytes away from the base.

In the SDMA mapping, the same register is accessed at 0x2000 + 0x18 = 0x2018. At first glance, this might seem obvious, but an 0x18 offset means 24 x 4 = 96 bytes away from the base address. A bit odd, but that’s the way it’s implemented.

So even though each address increment in SDMA data address space moves 4 bytes, the registers’ byte offsets were mapped directly onto SDMA addresses, placing the registers 16 bytes apart (4 addresses of 4 bytes each). Attempting to access the addresses in between, like 0x2001, yields nothing noteworthy (in my experiments, they all read zero). I believe that the SDMA submodule was designed in France, by the way.
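To make the address arithmetic concrete, here’s a minimal sketch (plain C, function names are mine) of how the same register’s address comes out in each address space:

```c
#include <assert.h>
#include <stdint.h>

/* Application Processor view: byte offset added to the byte-addressed base */
static uint32_t ap_reg_addr(uint32_t ap_base, uint32_t byte_offset)
{
	return ap_base + byte_offset;
}

/* SDMA data-space view: the same byte offset is applied directly to the
   SDMA base, even though each SDMA address denotes a 32-bit word. As a
   result, consecutive registers land 16 bytes apart in SDMA terms. */
static uint32_t sdma_reg_addr(uint32_t sdma_base, uint32_t byte_offset)
{
	return sdma_base + byte_offset;
}
```

For ECSPI1_STATREG, ap_reg_addr(0x2008000, 0x18) gives 0x2008018, and sdma_reg_addr(0x2000, 0x18) gives 0x2018, matching the figures above.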

Almost needless to say, these addresses (e.g. 0x2000) can’t be used to access peripherals with Peripheral / Burst DMA units — these units work with the Application Processor’s bus infrastructure and memory map.

Speed tests

As all three methods work, the question is how fast each is. So I ran a speed test. I only tested the peripheral read operation (my application didn’t involve writes), but I would expect more or less the same results for writes. The speed tests were carried out by starting the SDMA script from a Linux kernel module, and issuing a printk when the SDMA script was kicked off. When the interrupt arrived at the completion of the script (resulting from a “done 3″ opcode, not shown in the table below), another printk was issued. The timestamps in dmesg’s output were used to measure the time difference.

In order to keep the influence of the Linux overhead delays low, the tested command was executed within a hardware loop, so that the overall execution would take a few seconds. A few milliseconds of printk delay hence became fairly negligible.
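The rate computation itself is trivial; a sketch (plain C, with my own names, and the sample figures below are illustrative rather than from the actual test):

```c
#include <assert.h>

/* Execution rate in Mops/s, given the two printk timestamps (in
   seconds, as shown in dmesg) and the number of loop iterations */
static double mops_per_sec(double t_start, double t_end, double n_ops)
{
	return n_ops / (t_end - t_start) / 1e6;
}
```

For example, 200 million iterations over roughly 6.07 seconds works out to about 33 Mops/s.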

The results are given in the following table:

Method                 Assembly code       Execution rate
---------------------  ------------------  --------------
Peripheral DMA Unit    stf r1, 0xc3        7.74 Mops/s
                       loop endloop, 0
                       ldf r2, 0xc8
Burst DMA Unit         stf r1, 0x10        3.88 Mops/s
                       loop endloop, 0
                       ldf r2, 0x0b
Internal bus mapping   loop endloop, 0     32.95 Mops/s
                       ld r2, (r1, 0)
Non-IO command         loop endloop, 0     65.97 Mops/s
                       addi r5, 2

Before drawing conclusions, a word on the rightmost result, which tested the speed of a basic command. The execution rate, almost 66 Mops/s, shows the SDMA machine’s upper limit. Where this figure comes from isn’t all that clear, as I couldn’t find a matching clock rate in any of the three clocks enabled by Linux’ SDMA driver: clk_ahb, clk_ipg and clk_per.

The reference manual’s section 55.4.6 claims that the SDMA core’s frequency is limited to 104 MHz, but calling clk_get_rate() for clk_ahb returned 132 MHz (which is 2 x 66 MHz…). As for the other two clocks, which the imx-sdma.c driver declares that it uses, clk_ipg and clk_per (the same clock, I believe), clk_get_rate() returned 60 MHz, so it’s not one of those. In short, it’s not 100% clear what’s going on, except that the figure tops out at 66 Mops/s.

By the way, I verified that the hardware loop doesn’t add extra cycles by duplicating the addi command, so it ran 10 times for each loop iteration. The execution rate dropped to exactly 1/10, so there’s definitely no loop overhead.

OK, so now to the conclusions:

  • The clear winner is using the internal bus. Note that the result isn’t all that impressive, after all. With 33 Mops/s, 4 bytes each, there’s a theoretical limit of 132 MB/s for just reading. That doesn’t include doing something with the data. More about that below.
  • Note that reading from the internal bus takes just 2 execution cycles.
  • There is a reason for using the Peripheral DMA Unit, after all: It’s twice as fast compared with the Burst DMA Unit.
  • It probably doesn’t pay off to use the Burst DMA Unit for burst copying from a peripheral to memory, even though I didn’t give it a go: The read is twice as slow, and writing to memory with autoflush is rather quick (see below).
  • The use of the Peripheral DMA Unit in the Linux kernel driver is quite questionable, given the results above. On the other hand, the standard set of scripts aren’t really designed for efficiency anyhow.

Copying data from peripheral to RAM

In this last pair of speed tests, the loop reads one value from the peripheral with Internal bus mapping (the fastest way found) and writes it to the general RAM with an stf command, using autoincrement. This is hence a realistic scenario for bulk copying of data from a peripheral data register into memory that is available to the Application Processor.

The test code had to be modified slightly, so the destination address is brought back to the beginning of the buffer every 1,000,000 write operations, since the buffer size is limited, quite naturally. So when the script begins, r7 contains the number of times to loop until resetting the destination address (that is, r7 = 1000000) and r3 contains the number of such sessions to run (was set to 200). The overhead of this larger loop is literally one in a million.

The assembly code used was:

                             | bigloop:
0000 008f (0000000010001111) | 	mov	r0, r7
0001 6e04 (0110111000000100) | 	stf	r6, 0x04	# MDA = r6, incremental write
0002 7802 (0111100000000010) | 	loop endloop, 0
0003 5201 (0101001000000001) | 	ld	r2, (r1, 0)
0004 6a0b (0110101000001011) | 	stf	r2, 0x0b	# Write 32-bit word, no flush
                             | endloop:
0005 2301 (0010001100000001) | 	subi	r3, 1		# Decrement big loop counter
0006 7cf9 (0111110011111001) | 	bf	bigloop		# Loop until r3 == 0
                             | quit:
0007 0300 (0000001100000000) | 	done 3			# Quit MCU execution

The result was 20.70 Mops/s, that is 20.7 million pairs of read-writes per second. This sets the realistic hard upper limit for reading from a peripheral to 82.8 MB/s. Note that after deducting the known time it takes to execute the peripheral read, one can estimate that the stf command runs at ~55.5 Mops/s. In other words, it’s a single-cycle instruction until an autoflush is forced every 8 writes. However, dropping the peripheral read command (leaving only the stf command) yields only 35.11 Mops/s. So it seems like the DMA burst unit takes advantage of the small pauses between accesses to it.
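The ~55.5 Mops/s estimate follows from the fact that the per-operation times (not the rates) add up. A sketch of that arithmetic, with my own function name:

```c
#include <assert.h>

/* Given the rate of the read-write pair and the standalone read rate
   (both in Mops/s), estimate the write's standalone rate: cycle times
   add, so the rates combine harmonically */
static double est_write_rate(double pair_rate, double read_rate)
{
	return 1.0 / (1.0 / pair_rate - 1.0 / read_rate);
}
```

est_write_rate(20.70, 32.95) gives roughly 55.7 Mops/s, in the ballpark of the figure cited above.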

I should mention that the Linux system was overall idle while performing these tests, so there was little or no congestion on the physical RAM. The results were repeatable within 0.1% of the execution time.

Note that automatic flush was enabled during this test, so the DMA burst unit received 8 writes (32 bytes) before flushing the data into RAM. When reattempting this test with an explicit flush on each write to RAM (exactly the same assembly code as listed above, with a peripheral read and then stf r2, 0x2b instead of 0x0b), the result dropped to 6.83 Mops/s. Which is tantalizingly similar to the 7.74 Mops/s result obtained for reading from the Peripheral DMA Unit.

Comparing with non-DMA

Even though not directly related, it’s worth comparing how fast the host accesses the same registers. For example, how much time will this take (in Linux kernel code, of course)?

  for (i=0; i<10000000; i++)
    rc += readl(ecspi_regs + MX51_ECSPI_STAT);

So the results are as follows:

  • Reading from an eCSPI register (as shown above): 4.10 Mops/s
  • The same, but from RAM (non-cacheable, allocated with dma_alloc_coherent): 6.93 Mops/s
  • The same, reading with readl() from a region handled by RAM cache (so it’s considered volatile): 58.14 Mops/s
  • Writing to an eCSPI register (with writel(), loop similar to above): 3.8696 Mops/s

This was carried out on an i.MX6 processor with a clock frequency of 996 MHz.

The figures echo well with those found in the SDMA tests, so it seems like the dominant delays come from i.MX6′s bus bridges. It’s also worth noting the surprisingly slow performance of readl() from cacheable memory, maybe because of the memory barriers.

NXP / Freescale i.MX6 as an SPI slave


Even though SPI is commonly used for controlling rather low-speed peripherals on an embedded system, it can also come handy for communicating data with an FPGA.

When using the official Linux driver, the host can only be the SPI master. It means, among other things, that transactions are initiated by the host: when the bursts take place is completely decided by software, and so is how long they are. It’s not just about who drives which lines, but also the fact that the FPGA is on the responding side. This may not be a good solution when the data rates are anything but really slow: if the FPGA is slave, it must wait for the host to poll it for data (a bit like a USB peripheral). That can become a bit tricky at the higher end of data rates.

For example, if the FPGA’s FIFO is 16 kbit deep, and is filled at 16 Mbit/s, it takes 1 ms for it to overflow, unless drained by the host. This can be a difficult real-time task for a user-space Linux program (based upon spidev, for example). Not to mention how twisted such a solution will end up, having the processor constantly spinning in a loop collecting data, whether there is data to collect or not.
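The time budget in this example is simple arithmetic; a minimal sketch (names are mine):

```c
#include <assert.h>

/* Milliseconds until a FIFO overflows if not drained: depth in bits,
   fill rate in bits per second */
static double fifo_overflow_ms(double depth_bits, double rate_bps)
{
	return 1000.0 * depth_bits / rate_bps;
}
```

With a 16 kbit FIFO filling at 16 Mbit/s, that’s the 1 ms deadline mentioned above.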

Another point is that the SPI clock is always driven by the SPI master, and it’s usually not a free-running one. Rather, bursts of clock edges are presented on the clock wire to advance the data transaction.

Handling a gated clock correctly on an FPGA isn’t easy when it’s controlled by an external device (unless its frequency is quite low). From an FPGA design point of view, it’s by far simpler to drive the SPI clock and handle the timing of the MOSI/MISO signals with respect to it.

And finally: If good utilization of the upstream (FPGA to host) SPI channel is desired, putting the FPGA as master has another advantage. For example, on i.MX6 Dual/Quad, the SPI clock is limited to a 15 ns cycle for write transactions, but to a 40 ns or 55 ns cycle for read transactions, depending on the pins used. The same figures apply regardless of whether the host is master or slave (compare the relevant sections in the datasheet, IMX6DQCEC.pdf). So if the FPGA needs to send data faster than 25 Mbps, it can only use write cycles, hence it has to be the SPI master.
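In case the conversion isn’t obvious: with one bit transferred per SPI clock cycle, the minimal cycle time translates directly into a maximal bit rate. A sketch (function name is mine):

```c
#include <assert.h>

/* Maximal SPI bit rate in Mbps, given the minimal clock cycle in ns
   (one bit is transferred per clock cycle) */
static double max_mbps(double cycle_ns)
{
	return 1000.0 / cycle_ns;
}
```

max_mbps(40) gives the 25 Mbps read limit mentioned above, and max_mbps(15) gives ~66.7 Mbps for writes.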

CS is useless…

This is the “Chip Select” signal, or “Slave Select” (SS) in Freescale / NXP terminology.

The reference manual, along with NXP’s official errata ERR009535, clearly state that deasserting the SPI’s CS wire is not a valid way to end a burst. Citing the description for the SS_CTL field of ECSPIx_CONFIGREG, section 21.7.4 in the i.MX6 Reference Manual:

In slave mode – an SPI burst is completed when the number of bits received in the shift register is equal to (BURST_LENGTH + 1). Only the n least-significant bits (n = BURST_LENGTH[4:0] + 1) of the first received word are valid. All bits subsequent to the first received word in RXFIFO are valid.

So the burst length is fixed. The question is what value to pick. Short answer: 32 bits (set BURST_LENGTH to 31).

Why 32? First, let’s recall that RXFIFO is 32 bits wide. So what is more natural than packing the incoming data into full 32-bit entries in the RXFIFO, fully utilizing its storage capacity? Well, maybe the natural data alignment isn’t 32 bits, so another packing scheme could have been better. In theory.

That’s where the second sentence in the citation above comes in. What it effectively says is that if BURST_LENGTH + 1 is set to anything other than a multiple of 32, the first word ever pushed into RXFIFO since the SPI module’s reset will contain less than 32 received bits. All the rest, no matter what BURST_LENGTH is set to, will contain 32 bits of received data. This is really what happens. So in the long run, data is packed into 32-bit words no matter what. Choosing BURST_LENGTH + 1 other than a multiple of 32 will just mess things up on the first word the RXFIFO receives after waking up from reset. Nothing else.

So why not set BURST_LENGTH to something other than 31? Simply because there’s no reason to. We’re going to end up with an SPI slave that shifts bits into RXFIFO as 32-bit words anyhow. The term “burst” has no significance, since deassertions of CS are ignored anyhow. In fact, I’m not sure it makes any difference which of the values satisfying the multiple-of-32 rule is chosen.

Note that since CS doesn’t function as a frame for bursts, it’s important that the eCSPI module is brought out of reset while there’s no traffic (i.e. clock edges), or it will pack the data in an unaligned and unpredictable manner. Also, if the FPGA accidentally toggles the clock (due to a bug), alignment is lost until the eCSPI is reset and reinitialized.

Bottom line: The SPI slave receiver just counts 32 clock edges, and packs the received data into RXFIFO. Forever. There is no other useful alternative when the host is slave.

… but must be taken care of properly

Since the burst length doesn’t depend on the CS signal, it might as well be kept asserted all the time. With the register setting given below, that means holding the pin constantly low. It’s however important to select the correct pin in the CHANNEL_SELECT field of ECSPIx_CONREG: The host will ignore the activity on the SPI bus unless CS is selected. In other words, you can’t terminate a burst with CS, but if it isn’t asserted, bits aren’t sampled.

Another important thing to note, is that the CS pin must be IOMUXed as a CS signal. In the typical device tree for the mainstream Linux SPI master driver, it’s assigned as a GPIO pin. That’s no good for an SPI slave.

So, for example, if the ECSPI entry in the device tree says:

&ecspi1 {
[ ... ]
	pinctrl-names = "default";
	pinctrl-0 = <&pinctrl_ecspi1_1>;
	status = "okay";

meaning that the IOMUX settings given in pinctrl_ecspi1_1 should be applied when the Linux driver related to ecspi1 is probed. It should say something like

&iomuxc {
	imx6qdl-var-som-mx6 {
[ ... ]

		pinctrl_ecspi1_1: ecspi1grp {
			fsl,pins = <
				MX6QDL_PAD_DISP0_DAT23__ECSPI1_SS0	0x1f0b1
[ ... ]

The actual labels differ depending on the processor’s variant, which pins were chosen etc. The point is that the _SS0 usage was selected for the pin, and not the GPIO alternative (in which case it would say MX6QDL_PAD_DISP0_DAT23__GPIO5_IO17). The list of IOMUX defines for the i.MX6 DL variant can be found in arch/arm/boot/dts/imx6dl-pinfunc.h.


The timing diagrams for SPI communication in the Reference Manual show only 8-bit examples, with MSB received first. This applies to 32-bit words as well. But what happens if 4 bytes are sent with the intention of being treated as a string of bytes?

Because the first byte is treated as the MSB of a 32-bit word, it’s going to end up as the last byte when the 32-bit word is copied (by virtue of a single 32-bit read and write) into RAM, whether done by the processor or by SDMA. This ensures that a 32-bit integer is interpreted correctly by the Little Endian processor when transmitted over the SPI bus, but messes up single bytes transmitted.

Where exactly this flipping takes place, I’m not sure, but it doesn’t really matter. Just be aware that if a sequence of bytes is sent over the SPI link, they need to be byte-swapped in groups of 4 bytes to appear in the correct order in the processor’s memory.
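A minimal sketch of the fix on the receiving side (plain C, function name is mine): reverse each group of 4 bytes in place after the buffer has landed in RAM. In kernel code, the same can be achieved with swab32() on each 32-bit word.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Undo the MSB-first 32-bit packing of the SPI slave: reverse every
   group of 4 bytes in place, so the transmitted byte sequence reads
   in order. len is assumed to be a multiple of 4. */
static void swap32_groups(uint8_t *buf, size_t len)
{
	for (size_t i = 0; i + 3 < len; i += 4) {
		uint8_t t;
		t = buf[i];     buf[i]     = buf[i + 3]; buf[i + 3] = t;
		t = buf[i + 1]; buf[i + 1] = buf[i + 2]; buf[i + 2] = t;
	}
}
```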

Register setting

In terms of a Linux kernel driver, the probe of an SPI slave is pretty much the same as the SPI master, with a few obvious differences. For example, the SPI clock’s frequency isn’t controlled by the host, so it probably doesn’t matter so much how the dividers are set (but it’s probably wise to set these dividers to 1, in case the internal clock is used for something).

  ctrl = MX51_ECSPI_CTRL_ENABLE | /* Enable module */
    /* MX51_ECSPI_CTRL_MODE_MASK not set, so it's slave mode */
    /* Both clock dividers set to 1 => 60 MHz, not clear if this matters */
    MX51_ECSPI_CTRL_CS(which_cs) | /* Select CSn */
    (31 << MX51_ECSPI_CTRL_BL_OFFSET); /* Burst len = 32 bits */

  cfg = 0; /* All defaults, in particular, no clock phase / polarity change */

  /* CTRL register always go first to bring out controller from reset */
  writel(ctrl, regs + MX51_ECSPI_CTRL);

  writel(cfg, regs + MX51_ECSPI_CONFIG);

  /*
   * Wait until the changes in the configuration register CONFIGREG
   * propagate into the hardware. It takes exactly one tick of the
   * SCLK clock, but we will wait 10 us to be sure (SCLK is 60 MHz)
   */
  udelay(10);

  /*
   * Turn off DMA requests (revert the register to its defaults),
   * but set the RXFIFO watermark as required by device tree
   * (rx_wml holds the watermark value)
   */
  writel(MX51_ECSPI_DMA_RX_WML(rx_wml), regs + MX51_ECSPI_DMA);

  /* Enable interrupt when RXFIFO reaches watermark */
  writel(MX51_ECSPI_INT_RDREN, regs + MX51_ECSPI_INT);

The example above shows the settings that apply when the host reads from the RXFIFO directly. Given the measurements I present in another post of mine, showing ~4 Mops/s with a plain readl() call, it means that at the maximal bus rate of 66 Mbit/s, which is ~2.06 Mops/s (32 bits per read), we have a processor core 50% busy just on readl() calls.

So for higher data rates, SDMA is pretty much a must.
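The 50% figure above comes from dividing the required read rate by the measured readl() rate; a sketch with my own names:

```c
#include <assert.h>

/* Fraction of one CPU core spent on readl() calls when draining the
   SPI bus: bus rate in Mbit/s, 32 bits per read, against the measured
   standalone readl() rate in Mops/s */
static double readl_core_load(double bus_mbps, double readl_mops)
{
	return (bus_mbps / 32.0) / readl_mops;
}
```

readl_core_load(66, 4.10) gives ~0.50, i.e. half a core gone on register reads alone.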

The speed test

Eventually, I ran a test. With a dedicated SDMA script, SPI clock running at 112 MHz, 108.6 Mbit/s actual throughput:

# time dd if=/dev/myspi of=/dev/null bs=64k count=500
500+0 records in
500+0 records out
32768000 bytes (33 MB, 31 MiB) copied, 2.41444 s, 13.6 MB/s

real	0m2.434s
user	0m0.000s
sys	0m1.610s

This data rate is, of course, way above the allowed SPI clock frequency of 66 MHz, but it’s not uncommon that real-life results are so much better. I didn’t bother pushing the clock higher.

I ran a long and rigorous test looking for errors on the data transmission line (~1 TB of data), and it was completely clean at 112 MHz, so the SPI slave is reliable. For a production system, I wouldn’t exceed 66 MHz, despite this result. Just to have that said.

But the bottom line is that the SPI slave mode can be used as a simple transmission link of 32-bit words. Often that’s good enough.