Microsoft’s outlook.com servers and the art of delivering mails to them

Introduction

Even in 2020, it seems like Microsoft lives up to its reputation: arrogant, assuming that anyone in business must be a huge corporation, and quite often ending up completely ridiculous. Microsoft’s mail servers, which accept mail on behalf of Hotmail, MSN, Office 365, Outlook.com and Live.com users, are no exception. This also affects companies and other entities which use their own domain names, but rely on Microsoft’s services for handling their mail.

This post summarizes my personal experience and accumulated knowledge with delivering mail to their servers. I use a simple Linux sendmail SMTP MTA on a virtual server to deliver my own private mail as well as a very low volume of transactional mail from a web server. All in all, about 100 mails / month go out from that server to all destinations.

So: one server, one IP address with a perfect reputation on all open spam reputation trackers, and SPF, DKIM and DMARC records all properly in place.
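For reference, here’s roughly what such records look like in zone-file syntax. The domain, the DKIM selector and the key are placeholders for illustration, not my actual records:

```
example.com.                  IN TXT "v=spf1 a mx -all"
sel1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq...AB"
_dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

The SPF record authorizes the server’s A/MX hosts, the DKIM record publishes the signing key under the selector, and the DMARC record tells receivers what to do when both checks fail.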

One may ask why I’m not relying on existing mail delivery services or my ISP. The answer is simple: any commercial mail delivery service is likely to have its reputation contaminated by some spammer, no matter what protective measures it takes. When that happens, odds are that emails will just disappear, because the ISP has little interest in forwarding the bounce message saying that delivery failed. On a good day they’ll handle the problem quickly, and yet the sender of the lost mail won’t be aware that the correspondence is broken.

For this reason, it’s quite likely that small businesses will go on keeping their own small email delivery servers, maintaining their own reputation. So when Outlook’s servers are nasty to a single-IP server, they’re not just being arrogant; they’re causing delivery issues for small and medium businesses.

To do when setting up the server

For starter info, go here. Microsoft is pretty upfront about not being friendly to new IP addresses (see troubleshooting page for postmasters).

So it’s a very good idea to create a Microsoft account to log into their services, and then join their Smart Network Data Services (SNDS) and Junk Mail Reporting Program. This is the start page for both of these services.

SNDS allows the owner of a mail server to register its IP address range (“Request Access”), so its status can be monitored (“View IP Status”) over time. When all is fine, the IP Status page says “All of the specified IPs have normal status”, and when they don’t like one of the IP addresses, it looks more like this (click to enlarge):

Microsoft SNDS blocked IP

The Junk Mail Reporting Program (JMRP) allows the owner of the mail server to receive notifications (by email) when a delivered mail message is deemed suspicious, either by an end user (marking it as spam) or by automatic means. So it’s a good idea to create a special email address for this purpose and fill in the JMRP form, even if only for the sake of claiming that you got no complaints when contacting support later on.

Note that this is important for delivery of mail to any institution that relies on Microsoft’s mail infrastructure. A proper IP address blacklist delisting takes you from

Mar 11 20:18:23 sm-mta[5817]: x2BKIL2H005815: to=<xxxxxxx@mit.edu>, delay=00:00:02, xdelay=00:00:02, mailer=esmtp, pri=121914, relay=mit-edu.mail.protection.outlook.com. [104.47.42.36], dsn=5.7.606, stat=User unknown

(but the bounce message indicated that it’s not an unknown user, but a blacklisted IP address) to

Mar 11 21:15:12 sm-mta[6170]: x2BLF8rT006168: to=<xxxxxxx@mit.edu>, delay=00:00:03, xdelay=00:00:03, mailer=esmtp, pri=121915, relay=mit-edu.mail.protection.outlook.com. [104.47.42.36], dsn=2.0.0, stat=Sent (<5C86CFDC.6000206@example.com> [InternalId=11420318042095, Hostname=DM5PR01MB2345.prod.exchangelabs.com] 11012 bytes in 0.191, 56.057 KB/sec Queued mail for delivery)

Note that the SMTP session response said nothing about a blacklisted IP, but the bounce message (not shown here) did.
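Incidentally, a quick way to keep an eye on how Outlook-bound deliveries fare is to grep the mail log for the DSN codes. A sketch, operating on abridged sample lines from this post; in practice you’d point the grep at your actual mail log (e.g. /var/log/maillog, whose path varies by distro):

```shell
# Abridged sample log lines (taken from this post) as a stand-in for the real log
cat <<'EOF' > /tmp/maillog.sample
Mar 11 20:18:23 sm-mta[5817]: x2BKIL2H005815: to=<xxxxxxx@mit.edu>, relay=mit-edu.mail.protection.outlook.com., dsn=5.7.606, stat=User unknown
Mar 11 21:15:12 sm-mta[6170]: x2BLF8rT006168: to=<xxxxxxx@mit.edu>, relay=mit-edu.mail.protection.outlook.com., dsn=2.0.0, stat=Sent
EOF

# Pull out the DSN code and delivery status of Outlook-bound attempts
grep 'protection\.outlook\.com' /tmp/maillog.sample | grep -o 'dsn=[0-9.]*, stat=[^,]*'
# → dsn=5.7.606, stat=User unknown
# → dsn=2.0.0, stat=Sent
```

A dsn=5.x.x is a permanent failure worth reading the bounce for; dsn=2.0.0 means the other side queued the mail.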

Finally, Microsoft suggests getting a certification from Return Path: a paid-for service, clearly intended for large companies, and in particular for mass mailers to get their spam delivered. Microsoftish irony at its best.

To do when things go wrong

First things first: read the bounce message. If it says that the IP is on Microsoft’s blacklist, go to the Office 365 Anti-Spam IP Delist Portal and delist it.

Then check the IP’s status (requires logging in). If you’re blocked, contact support. This doesn’t require a Microsoft login account, by the way. I’m not sure if this link to the support page is valid in the long run, but it’s also reachable from SNDS’ main page (“contact sender support”) as well as from the troubleshooting page.

My own ridiculous experience

I kicked off my mail server a bit more than a year ago. There was some trouble in the beginning, but that was no surprise. Then things settled and worked for a year, and only then, suddenly and out of the blue, a mail to a Hotmail address bounced with:

Action: failed
Status: 5.7.1
Diagnostic-Code: SMTP; 550 5.7.1 Unfortunately, messages from [193.29.56.92] weren't sent. Please contact your Internet service provider since part of their network is on our block list (S3140). You can also refer your provider to http://mail.live.com/mail/troubleshooting.aspx#errors. [VE1EUR01FT021.eop-EUR01.prod.protection.outlook.com]

And indeed, checking the IP status indicated that it was blocked “because of user complaints or other evidence of spamming”.

So first I went to the mail logs. Low traffic. No indication that the server had been tricked into sending a lot of mails. No indication that it had been compromised in any way. And when a server has been compromised, you know it.

There was no chance of user complaints, because I got nothing from JMRP. So what was the “evidence of spamming”?

My best guess: a handful of transactional mail messages (at most) to their servers, for authenticating email addresses, that were marked suspicious by their super software. Putting these messages in quarantine for a few hours is the common solution when that happens. Spam is about volume. If all you got was 4-5 messages, how could that be a spam server? Only if you look at percentages: 100% suspicious. Silly or what?

So I filled in the contact support form, and soon enough I got a message saying a ticket had been opened, and 30 minutes later another one saying

We have completed reviewing the IP(s) you submitted. The following table contains the results of our investigation.

Not qualified for mitigation
193.29.56.92
Our investigation has determined that the above IP(s) do not qualify for mitigation. These IP(s) have previously received mitigations from deliverability support, and have failed to maintain patterns within our guidelines, so they are ineligible for additional mitigation at this time.

Cute, heh? And that was followed by a lot of general advice, basically copied from the website, recommending to join JMRP and SNDS. Which I had done a year earlier, of course. The script that responded didn’t even bother to check that.

But it also said:

To have Deliverability Support investigate further, please reply to this email with a detailed description of the problem you are having, including specific error messages, and an agent will contact you.

And so I did. I wrote that I had joined those two programs a year ago, that the mail volume is low and so on. I doubt it really made a difference. After sending the reply, I got a somewhat automated response rather quickly, now with a more human touch:

Hello,

My name is Ayesha and I work with the Outlook.com Deliverability Support Team.

IP: 193.29.56.92

We will be looking into this issue along with the Escalations Team. We understand the urgency of this issue and will provide an update as soon as this is available. Rest assured that this ticket is being tracked and we will get back to you as soon as we have more information to offer.

Thank you for your patience.

Sincerely,
Ayesha

Outlook.com Deliverability Support

And then, a few days later, another mail:

Hello,

My name is Yaqub and I work with the Outlook.com Deliverability Support Team.

Recent activity coming from your IP(s): ( 193.29.56.92) has been flagged by our system as suspicious, causing your IP to become blocked. I have conducted an investigation into the emails originating from your IP space and have implemented mitigation for your deliverability problem. This process may take 24 – 48 hours to replicate completely throughout our system.

Please note that lifting the block does not guarantee that your email will be delivered to a user’s inbox. However, here are some things that can help you with delivery:

(and here came the same suggestions on JMRP and SDNS)

And about 24 hours later, the IP status went back to OK again. And my emails went through normally.

Well, almost. A few days further down, I attempted to send an email to a live.co.uk destination, and once again I got the same rejection message (on the block list, S3140). The only difference was that previously the mail server on the other side had been hotmail-com.olc.protection.outlook.com (residing in the US), and now it was eur.olc.protection.outlook.com (somewhere in Europe).

I checked the IP’s status in SNDS and it was fine. So updating the Europeans on the new IP status takes a bit of time, or what?

So I replied to the last email I had gotten from Microsoft’s support, saying delivery failed with live.co.uk. I didn’t get any reply, but a few hours later I tried again, and the mail went through. Coincidence or not.

This time I also caught the related messages from the mail log. It’s

May 01 15:10:28 sm-mta[2239]: 041FASMh002237: to=<xxxxx@live.co.uk>, ctladdr=<eli@billauer.co.il> (510/500), delay=00:00:00, xdelay=00:00:00, mailer=esmtp, pri=121816, relay=eur.olc.protection.outlook.com. [104.47.1.33], dsn=5.0.0, stat=Service unavailable
May 01 15:10:28 sm-mta[2239]: 041FASMh002237: 041FASMh002239: DSN: Service unavailable

for a failure, and

May 02 06:23:00 sm-mta[4024]: 0426Mx1I004021: to=<xxxxx@live.co.uk>, ctladdr=<eli@billauer.co.il> (510/500), delay=00:00:01, xdelay=00:00:01, mailer=esmtp, pri=121808, relay=eur.olc.protection.outlook.com. [104.47.18.97], dsn=2.0.0, stat=Sent (<5EAD11C3.20105@billauer.co.il> [InternalId=21887153366859, Hostname=AM6EUR05HT060.eop-eur05.prod.protection.outlook.com] 10627 bytes in 0.246, 42.064 KB/sec Queued mail for delivery -> 250 2.1.5)

for success.

Lesson learned: Contact support and insist.

And the lesson to all those using Microsoft’s mail services: Your provider cuts off your email contacts arbitrarily. Because they are Microsoft.

Ftrace: The Linux kernel hacker’s Swiss army knife

Introduction

I ran into ftrace completely by chance, while trying to figure out why a call to usb_submit_urb() took so long. In fact, it didn’t. It was pr_info() that delayed the output. And it was ftrace that got me to realize that.

Whether you’re into dissecting existing kernel code and want to know which function calls which, or you need a close look at what your own code actually does (and when), ftrace is your companion. And it does a lot more.

And maybe that’s its problem: it offers so many different possibilities that its documentation isn’t all that inviting to read. Add some confusing terminology and a focus on advanced issues, and one gets the impression that starting to use it is a project in itself.

This is definitely not the case. This post consists of some simple and useful tasks. It’s not so much about accuracy, doing things the right way, or showing the whole picture; it’s about getting stuff done. If you need to nail down something more specific, read the docs. It looks like they have virtually every useful scenario covered.

I’ll divert from keeping things simple in the part on events at the end of this post. The concept of events is fairly simple, but the implementation, well, well. But the point is partly to demonstrate exactly that.

Does your kernel support ftrace?

Ftrace is often enabled in compiled kernels, but not always. Look for /sys/kernel/debug/tracing/ (as root) and in particular go

# cat available_tracers

If function_graph and function aren’t listed in available_tracers, the kernel needs to be recompiled with the relevant options, namely CONFIG_FUNCTION_TRACER, CONFIG_FUNCTION_GRAPH_TRACER, CONFIG_STACK_TRACER and CONFIG_DYNAMIC_FTRACE.
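A sketch for checking this, assuming the usual debugfs mount point and the common kernel config locations (both may differ on your system):

```shell
# Check for ftrace support: prefer asking the running kernel directly,
# fall back to inspecting the kernel config.
T=/sys/kernel/debug/tracing
if [ -r "$T/available_tracers" ]; then
	cat "$T/available_tracers"
else
	{ zcat /proc/config.gz 2>/dev/null || cat "/boot/config-$(uname -r)" 2>/dev/null; } |
		grep -E 'CONFIG_(FUNCTION_TRACER|FUNCTION_GRAPH_TRACER|STACK_TRACER|DYNAMIC_FTRACE)=' ||
		echo "Couldn't determine; check your kernel config manually"
fi
```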

References

These are some good resources. I would usually put them at the end of the post, but no one is expected to get there.

  • A gentle introduction by Alex Dzyoba.
  • /sys/kernel/debug/tracing/README — it’s like the -h flag on user space utilities
  • Detailed documentation from the kernel tree: Documentation/trace/ftrace.rst (and other files in the same directory)
  • A tutorial in LWN, part 1 and part 2.
  • Some good slides on the topic.

Getting around a bit

They’ll tell you that there’s a special tracefs to mount, but it’s easily available under /sys/kernel:

# cd /sys/kernel/debug/tracing/

For tracing invocations of functions, there are two formats: the simple “function” and its more sophisticated sibling, “function_graph”. It’s easiest to just try them out. Select the function graph tracer:

# echo function_graph > current_tracer

Watch the tracing output

# less trace

There’s also trace_pipe, which has a “tail -f” flavor.

The output isn’t very helpful for now, as it shows every function invoked on all processors. Too much information. We’ll get to filtering just below.

Turn tracing off

# echo 0 > tracing_on

or on

# echo 1 > tracing_on

and clear (empty) the trace:

# echo > trace

Note that turning tracing off turns off everything, including the trace_printk() discussed below. It’s a complete halt. To just stop one of the function tracers, better go

# echo nop > current_tracer

The trace data is stored in circular buffers, so old entries are overwritten by newer ones if the buffers fill up. The problem is that there’s a separate buffer for each CPU, so once this overwriting begins, the overall trace may be missing entries from some CPUs for certain time segments. Therefore I prefer turning off overwriting completely. That way, at least the beginning of the trace reflects what actually happened (and the mess is left to the end):

# echo 0 > options/overwrite

Now to the “function” variant, which just records which function was invoked by which, along with a timestamp. There is no information on when the function returned (i.e. how long it took), but the absolute timestamps can be matched against dmesg’s. That, however, requires selecting ftrace’s global clock:

# echo function > current_tracer
# echo global > trace_clock

It’s very important to note that printk (and its derivatives) can take up to a few milliseconds, so an apparent mismatch between the timestamp of a printk and that of a function called immediately after it may be a result of exactly that.

Using trace_printk()

It’s quite common to use printk and its derivatives for debugging kernel code, which is fairly OK as long as it doesn’t matter that they slow down the execution. I’ve seen printk take a few milliseconds (in process context, I should say).

So for quick and time-accurate printk-like debugging, definitely go for trace_printk(). Plus you get some extra info, such as the process / interrupt context and which CPU it ran on. It’s just a matter of adding something like

trace_printk("Submitting buffer ID=%d, len=%d\n", id, count);

in your kernel code, instead of pr_info(), dev_info() or whatever. No extra header file or anything of that sort is needed, as far as I can tell.

I should mention that trace_printk() is intended for temporary debug messages, and shouldn’t be left in production code. If you want debug messages that stay there, go for events. Trickier, but the right way to go.

trace_printk() messages are logged by default, even when current_tracer is set to “nop”. However, tracing must be on. In short, a freshly booted system will show trace_printk()’s output in “trace”.

These messages will be interleaved with the other traces if current_tracer is in a non-nop mode. This can be useful when the situation of interest occurs rarely, for example when a code segment occasionally takes too long to run. Since the outcome is known only at the end, the simple solution is to run function tracing, and call trace_printk() when the time difference exceeds a threshold. This message will of course appear after the function sequence in the trace, but it’s easily looked up with plain text tools (e.g. “less”).

The system boots up with tracing on and in “nop” mode, but if you’ve been fiddling with these, bring them back with

# echo nop > current_tracer
# echo 1 > tracing_on

And then view the messages just like before (“trace”, “trace_pipe” or the CPU-individual counterparts).

The reason you don’t want to leave trace_printk() in production code is that its first call leaves a message of this form in the kernel log:

[ 1954.872204] **********************************************************
[ 1954.875230] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
[ 1954.878242] **                                                      **
[ 1954.881257] ** trace_printk() being used. Allocating extra memory.  **
[ 1954.884281] **                                                      **
[ 1954.887345] ** This means that this is a DEBUG kernel and it is     **
[ 1954.890397] ** unsafe for production use.                           **
[ 1954.893470] **                                                      **
[ 1954.896496] ** If you see this message and you are not debugging    **
[ 1954.899527] ** the kernel, report this immediately to your vendor!  **
[ 1954.902539] **                                                      **
[ 1954.905598] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
[ 1954.908713] **********************************************************

This is of course nothing to be alarmed about (if you’re the one who made the trace_printk() calls, that is). Seems like the purpose of this message is to gently convince programmers to use events in production code instead.

Tracing a specific function call

Ftrace comes with great filtering capabilities, which can get really complicated, so let’s take a simple use case. Say that I want to see how much time elapses between the call to usb_submit_urb() in my driver and the callback function. Never mind the details. First, I might want to verify that the function is really traceable: it might not be, if it has been optimized away by the compiler or if it’s a #define macro (which is definitely not the case for usb_submit_urb(), since it’s an exported function, but anyhow).

So first, look it up in the list of available functions (cwd is /sys/kernel/debug/tracing):

# grep submit_urb available_filter_functions
usb_hcd_submit_urb
usb_submit_urb

Yep, it’s there. So add it to the functions to filter, along with my driver’s callback function, which is bulk_out_completer():

# echo usb_submit_urb bulk_out_completer > set_ftrace_filter
# echo function > current_tracer
# echo > trace

Note that multiple functions can be given to set_ftrace_filter, delimited by plain whitespace. Wildcards can also be used, as shown below.

Now perform the operation that makes the driver active (a dd command in my case), and harvest the output:

# head -30 trace
# tracer: function
#
# entries-in-buffer/entries-written: 64/64   #P:4
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
              dd-847   [001] ....  1787.067325: usb_submit_urb <-try_queue_bulk_out
              dd-847   [001] ....  1787.067335: usb_submit_urb <-try_queue_bulk_out
              dd-847   [001] ....  1787.067342: usb_submit_urb <-try_queue_bulk_out
              dd-847   [001] ....  1787.067348: usb_submit_urb <-try_queue_bulk_out
              dd-847   [001] ....  1787.067353: usb_submit_urb <-try_queue_bulk_out
              dd-847   [001] ....  1787.067358: usb_submit_urb <-try_queue_bulk_out
              dd-847   [001] ....  1787.067363: usb_submit_urb <-try_queue_bulk_out
              dd-847   [001] ....  1787.067369: usb_submit_urb <-try_queue_bulk_out
          <idle>-0     [002] d.h1  1787.068019: bulk_out_completer <-__usb_hcd_giveback_urb
    kworker/2:1H-614   [002] ....  1787.068080: usb_submit_urb <-try_queue_bulk_out
          <idle>-0     [002] d.h1  1787.068528: bulk_out_completer <-__usb_hcd_giveback_urb
    kworker/2:1H-614   [002] ....  1787.068557: usb_submit_urb <-try_queue_bulk_out
          <idle>-0     [002] d.h1  1787.069039: bulk_out_completer <-__usb_hcd_giveback_urb
    kworker/2:1H-614   [002] ....  1787.069062: usb_submit_urb <-try_queue_bulk_out
          <idle>-0     [002] d.h1  1787.069533: bulk_out_completer <-__usb_hcd_giveback_urb
    kworker/2:1H-614   [002] ....  1787.069556: usb_submit_urb <-try_queue_bulk_out
          <idle>-0     [002] d.h1  1787.070012: bulk_out_completer <-__usb_hcd_giveback_urb
    kworker/2:1H-614   [002] ....  1787.070036: usb_submit_urb <-try_queue_bulk_out
          <idle>-0     [002] d.h1  1787.070506: bulk_out_completer <-__usb_hcd_giveback_urb

I’m not sure how useful the data above is, but it demonstrates how a sequence of events can be analyzed easily.
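For instance, the submit-to-completion latency can be pulled out of such a trace with a one-liner. A sketch with awk; the two sample lines below are copied from the trace above:

```shell
# Two consecutive submit/complete lines from the trace above, as a fixture
cat <<'EOF' > /tmp/trace.sample
kworker/2:1H-614   [002] ....  1787.068080: usb_submit_urb <-try_queue_bulk_out
<idle>-0     [002] d.h1  1787.068528: bulk_out_completer <-__usb_hcd_giveback_urb
EOF

# Field 4 is the timestamp ("1787.068080:"); awk's numeric conversion
# ignores the trailing colon. Remember the submit time, print the delta
# when the completer shows up.
awk '/usb_submit_urb/     { t = $4 + 0 }
     /bulk_out_completer/ { printf "%.0f us\n", ($4 - t) * 1e6 }' /tmp/trace.sample
# → 448 us
```

With repeated submit/complete pairs on the same CPU, the same script prints one latency per pair.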

Just to close the issue, let’s take a look at wildcards. This is most easily shown with an example:

# echo '*_submit_urb' > set_ftrace_filter
# cat set_ftrace_filter
usb_hcd_submit_urb
usb_submit_urb

So quite evidently, the wildcard was applied when the filter was set, and its usage is quite trivial.

Don’t forget to remove the filter when it’s no longer necessary. These filters won’t go away otherwise, and tracing may appear to suddenly not work when you move on to something else. So simply:

# echo > set_ftrace_filter
# cat set_ftrace_filter
#### all functions enabled ####

It’s not a coincidence that I didn’t use function_graph above: the functions called by the selected function(s) won’t show anyhow. It does show the duration of each call, though, which may be helpful. This brings us to

Which functions does function X call?

This is a great way to get an idea of who calls what. Let’s stick with usb_submit_urb(). Which other functions does it call, specifically in my case?

Simple. Be sure to have removed any previous filters (see above) and then just go:

# echo usb_submit_urb > set_graph_function
# echo function_graph > current_tracer
# echo > trace

Then do whatever causes the function to be called, after which “trace” reads something like:

# tracer: function_graph
#
# CPU  DURATION                  FUNCTION CALLS
# |     |   |                     |   |   |   |
 0)               |  usb_submit_urb() {
 0)   0.979 us    |    usb_urb_ep_type_check();
 0)               |    usb_hcd_submit_urb() {
 0)   0.427 us    |      usb_get_urb();
 0)               |      xhci_map_urb_for_dma [xhci_hcd]() {
 0)               |        usb_hcd_map_urb_for_dma() {
 0)   0.658 us    |          dma_direct_map_page();
 0)   1.739 us    |        }
 0)   2.642 us    |      }

[ ... ]

and it goes on.

This is a good place to mention that if all you wanted was the stack trace of the calls leading to a certain point in your code, WARN() and WARN_ONCE() may be handier.

Tracing a segment: quick and dirty

This method doesn’t really trace a specific segment; it’s a bit of a dirty trick: tracing is disabled at first, then enabled just before the function call and disabled right after it. All function calls that take place on all CPUs during that time are recorded. That may mean additional noise, but this noise may also be the explanation for why something went wrong (if it did).

There is probably a more elegant solution. In particular, ftrace triggers have “traceon” and “traceoff” actions, so this is probably the classier way to go. But let’s do something quick, dirty and simple to understand.

So say that I want to understand what usb_submit_urb() is up to, and which other functions it calls along the way. I could go, in my own driver:

tracing_on();
rc = usb_submit_urb(urb, GFP_KERNEL);
tracing_off();

With this driver loaded into the kernel, then go something like:

# cd /sys/kernel/debug/tracing/
# echo global > trace_clock
# echo 32768 > buffer_size_kb
# echo 0 > tracing_on
# echo function > current_tracer

This assumes that current_tracer was “nop” before this (which it is by default), so the trace buffer begins empty. And when it’s changed to “function” in the last command, nothing happens, because tracing was just turned off. The buffer size is set to 32 MB per CPU (which is quite a lot).

Then run whatever makes the relevant code execute, and then

# less trace

The problem is that trace output from all CPUs is interleaved in this output. But the trace itself tells us which CPU the relevant command ran on, so obtain a filtered version (e.g. for CPU #2):

# less per_cpu/cpu2/trace

Events

The term “events” is somewhat misleading. An event is just a printk-like debug message that can be enabled or disabled at runtime. It’s a convenient way to leave debugging messages in place even in production code, making it possible even for end users to harvest the information without compiling anything.

I’ll try to explain this with an (ehm-ehm) simple example. Let’s enable the kmalloc event:

# echo nop > current_tracer
# echo kmalloc > set_event

This turns off the function tracer, and enables the kmalloc event. Looking at “trace” we now have something like:

# tracer: nop
#
# entries-in-buffer/entries-written: 1047/1047   #P:4
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
            sshd-781   [003] ....  5317.446761: kmalloc: call_site=ffffffff8153b210 ptr=000000001f5ab582 bytes_req=640 bytes_alloc=1024 gfp_flags=GFP_KERNEL|__GFP_NOWARN|__GFP_NOMEMALLOC
            sshd-781   [003] ...1  5317.446788: kmalloc: call_site=ffffffff8153cbf1 ptr=000000002525fcc0 bytes_req=1024 bytes_alloc=1024 gfp_flags=GFP_ATOMIC|__GFP_NOWARN|__GFP_NOMEMALLOC
            sshd-781   [003] ....  5317.748476: kmalloc: call_site=ffffffff8153b210 ptr=0000000095781cbc bytes_req=640 bytes_alloc=1024 gfp_flags=GFP_KERNEL|__GFP_NOWARN|__GFP_NOMEMALLOC
            sshd-781   [003] ...1  5317.748501: kmalloc: call_site=ffffffff8153cbf1 ptr=00000000c9801d3d bytes_req=1024 bytes_alloc=1024 gfp_flags=GFP_ATOMIC|__GFP_NOWARN|__GFP_NOMEMALLOC
            sshd-781   [003] ....  5317.900662: kmalloc: call_site=ffffffff8153b210 ptr=000000008e7d4585 bytes_req=640 bytes_alloc=1024 gfp_flags=GFP_KERNEL|__GFP_NOWARN|__GFP_NOMEMALLOC
            sshd-781   [003] ...1  5317.900687: kmalloc: call_site=ffffffff8153cbf1 ptr=0000000004406a83 bytes_req=1024 bytes_alloc=1024 gfp_flags=GFP_ATOMIC|__GFP_NOWARN|__GFP_NOMEMALLOC
            bash-792   [000] ....  5318.420356: kmalloc: call_site=ffffffff8119f2c0 ptr=00000000a6835237 bytes_req=184 bytes_alloc=192 gfp_flags=GFP_KERNEL_ACCOUNT|__GFP_ZERO
            bash-792   [000] ....  5318.420365: kmalloc: call_site=ffffffff8119f34d ptr=000000008e7d4585 bytes_req=640 bytes_alloc=1024 gfp_flags=GFP_KERNEL_ACCOUNT|__GFP_ZERO
            bash-792   [000] ....  5318.420404: kmalloc: call_site=ffffffff8116feff ptr=00000000c30e90f8 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_KERNEL|__GFP_ZERO
            sshd-781   [003] ....  5318.420408: kmalloc: call_site=ffffffff8153b210 ptr=000000000b7b85e5 bytes_req=640 bytes_alloc=1024 gfp_flags=GFP_KERNEL|__GFP_NOWARN|__GFP_NOMEMALLOC
            bash-792   [000] ....  5318.420415: kmalloc: call_site=ffffffff811701e4 ptr=00000000f54127a0 bytes_req=32 bytes_alloc=32 gfp_flags=GFP_KERNEL|__GFP_ZERO
            bash-792   [000] ....  5318.420431: kmalloc: call_site=ffffffff812eb5e2 ptr=0000000084bbe3b4 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL|__GFP_ZERO
            sshd-781   [003] ...1  5318.420435: kmalloc: call_site=ffffffff8153cbf1 ptr=00000000a6a3ac50 bytes_req=1024 bytes_alloc=1024 gfp_flags=GFP_ATOMIC|__GFP_NOWARN|__GFP_NOMEMALLOC
            bash-792   [000] ....  5318.420473: kmalloc: call_site=ffffffff811b1aac ptr=0000000095972087 bytes_req=56 bytes_alloc=64 gfp_flags=GFP_KERNEL_ACCOUNT

These are all the kmalloc calls that were executed since the event was enabled. There are of course ways to filter the events down to specific occasions, but this is really not something I’ve gotten into (yet?).
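For what it’s worth, each event has a “filter” file under events/, accepting boolean expressions on the event’s fields. A sketch I haven’t battle-tested, logging only kmalloc calls that request more than 512 bytes:

```shell
cd /sys/kernel/debug/tracing
# The field names are listed in events/kmem/kmalloc/format
echo 'bytes_req > 512' > events/kmem/kmalloc/filter
# Write "0" to the filter file to clear it again
```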

It’s interesting, however, to see how these messages come about. To do this, try the following (after enabling the kmalloc event as shown above):

# echo '*kmalloc*' > set_graph_function
# echo function_graph > current_tracer
# echo > trace

And then the trace output can be something like:

# tracer: function_graph
#
# CPU  DURATION                  FUNCTION CALLS
# |     |   |                     |   |   |   |
 0)               |  finish_task_switch() {
 0)               |    _raw_spin_unlock_irq() {
 0)   0.794 us    |      do_raw_spin_unlock();
 0)   0.777 us    |      preempt_count_sub();
 0)   4.402 us    |    }
 0)   8.919 us    |  }
 0)               |  __kmalloc_reserve.isra.9() {
 0)               |    __kmalloc_track_caller() {
 0)   0.784 us    |      kmalloc_slab();
 0)   0.654 us    |      should_failslab();
 0)   0.711 us    |      check_irq_off();
 0)               |      cache_alloc_debugcheck_after() {
 0)   4.691 us    |        check_poison_obj();
 0)   0.685 us    |        poison_obj();
 0)   7.353 us    |      }
 0)               |      /* kmalloc: call_site=ffffffff81801a80 ptr=000000000493ffc2 bytes_req=640 bytes_alloc=1024 gfp_flags=GFP_KERNEL|__GFP_NOWARN|__GFP_NOMEMALLOC */
 0) + 15.571 us   |    }
 0) + 17.452 us   |  }

And it goes on like this several times. So now we have the event output in the middle of the function graph, and we can also see what happened: __kmalloc_reserve(), which is defined in net/core/skbuff.c, calls kmalloc_node_track_caller(), which is translated into __kmalloc_track_caller() by virtue of a #define in slab.h. That function is defined in mm/slab.c, but it just redirects the call to __do_kmalloc_node(), which makes the calls visible in the trace, and eventually calls kmem_cache_alloc_node_trace(). This call isn’t registered, most likely because it was optimized away by the compiler. And it reads:

#ifdef CONFIG_TRACING
void *
kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
{
	void *ret;

	ret = slab_alloc(cachep, flags, _RET_IP_);

	ret = kasan_kmalloc(cachep, ret, size, flags);
	trace_kmalloc(_RET_IP_, ret,
		      size, cachep->size, flags);
	return ret;
}
EXPORT_SYMBOL(kmem_cache_alloc_trace);
#endif

So there we have it. But wait! Where is trace_kmalloc() defined? The answer lies at the top of slab.c:

#include <trace/events/kmem.h>

which includes include/trace/events/kmem.h, the beginning of which reads

/* SPDX-License-Identifier: GPL-2.0 */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM kmem

#if !defined(_TRACE_KMEM_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_KMEM_H

#include <linux/types.h>
#include <linux/tracepoint.h>
#include <trace/events/mmflags.h>

DECLARE_EVENT_CLASS(kmem_alloc,

	TP_PROTO(unsigned long call_site,
		 const void *ptr,
		 size_t bytes_req,
		 size_t bytes_alloc,
		 gfp_t gfp_flags),

	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags),

	TP_STRUCT__entry(
		__field(	unsigned long,	call_site	)
		__field(	const void *,	ptr		)
		__field(	size_t,		bytes_req	)
		__field(	size_t,		bytes_alloc	)
		__field(	gfp_t,		gfp_flags	)
	),

	TP_fast_assign(
		__entry->call_site	= call_site;
		__entry->ptr		= ptr;
		__entry->bytes_req	= bytes_req;
		__entry->bytes_alloc	= bytes_alloc;
		__entry->gfp_flags	= gfp_flags;
	),

	TP_printk("call_site=%lx ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s",
		__entry->call_site,
		__entry->ptr,
		__entry->bytes_req,
		__entry->bytes_alloc,
		show_gfp_flags(__entry->gfp_flags))
);

DEFINE_EVENT(kmem_alloc, kmalloc,

	TP_PROTO(unsigned long call_site, const void *ptr,
		 size_t bytes_req, size_t bytes_alloc, gfp_t gfp_flags),

	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags)
);

Tired already? We’re almost there. I won’t get into the details of this (mostly because I don’t know), but note two crucial points: One is the TP_printk(), which matches the output format of the event’s output. The second part is the DEFINE_EVENT, which defines a function linked with the kmem_alloc class, named trace_kmalloc(). The define-macro magic takes place in include/linux/tracepoint.h, and essentially translates DEFINE_EVENT into a DECLARE_TRACE, which is then turned into a __DECLARE_TRACE, which begins with:

#define __DECLARE_TRACE(name, proto, args, cond, data_proto, data_args) \
	extern struct tracepoint __tracepoint_##name;			\
	static inline void trace_##name(proto)				\
	{								\
		if (static_key_false(&__tracepoint_##name.key))		\
			__DO_TRACE(&__tracepoint_##name,		\
				TP_PROTO(data_proto),			\
				TP_ARGS(data_args),			\
				TP_CONDITION(cond), 0);			\
		if (IS_ENABLED(CONFIG_LOCKDEP) && (cond)) {		\
			rcu_read_lock_sched_notrace();			\
			rcu_dereference_sched(__tracepoint_##name.funcs);\
			rcu_read_unlock_sched_notrace();		\
		}							\
	}								\
[ ... ]

And then it goes on with several related functions. The point with this long story, except for the obvious masochism, was to show that events really are just some kind of printk, only not all that easy to use. It also explains why that scary message is needed to get people off trace_printk().

But since there’s some code to copy from, it shouldn’t be all that bad to set up new events.

Linux driver: Creating device files for a hotpluggable device

Introduction

Most device drivers hook onto an already existing subsystem: Audio, networking, I2C, whatever. If it’s a specialized piece of hardware, a single device file is typically enough, and you get away with a misc device or, as shown in the kernel’s USB example (drivers/usb/usb-skeleton.c), with the USB framework’s usb_register_dev() call.

However in some cases, a number of dedicated device files are required, belonging to a dedicated class. Only in these cases does it make sense to generate the device files explicitly. A driver that does this for no good reason will be hard to get into the Linux kernel tree.

Hotpluggable devices are tricky in particular because of the two following reasons:

  • They may vanish at any time, so the driver must be sure to handle that properly. Namely, not to free any resources as long as they may be accessed by some other thread.
  • It’s impossible to initialize the driver once and for all against a known set of devices when it loads.

This post focuses on USB devices, and in particular on deleting the device files and releasing the kernel resources that support them when the USB device is unplugged.

Spoiler: There’s no problem deleting the device files themselves, but handling the cdev struct requires some attention.

Everything in this post relates to Linux kernel v5.3.

A reminder on device files

First, we shall look at the usual way to do it, which is in fact unsuitable for a hotpluggable device. But we have to start with something.

There are three players in this game:

  • The cdev struct: Makes the connection between a set of major/minors and a pointer to a struct file_operations, containing the pointers to the functions (methods) that implement open, release, read, write etc. The fops, in short.
  • The device files: Those files in /dev that are accessible by user space programs.
  • The class: Something that must be assigned to device files that are created, and assists in identifying their nature (in particular for the purpose of udev).

The class is typically a global variable in the driver module, and is created in its init routine:

static struct class *example_class;

static int __init example_init(void)
{
  example_class = class_create(THIS_MODULE, examplename);
  if (IS_ERR(example_class))
    return PTR_ERR(example_class);

  return 0;
}

Because a class is typically global in the module, and hence not accessible elsewhere, it’s impossible to create device files on behalf of another module’s class without going through that class’ API and its restrictions (for example, misc devices). On the other hand, if you try to push a driver which creates a new class into the kernel tree, odds are that you’ll have to explain why you need to add yet another class. Don’t expect a lot of sympathy on this matter.

The next player is the cdev struct. Its role is to connect between a major + a range of minors and file operations. It’s typically part of a larger struct which is allocated for each physical device. So it usually goes something like

struct example_device {
  struct device *dev;

  struct cdev cdev;

  int major;
  int lowest_minor; /* Highest minor = lowest_minor + num_devs - 1 */

  int num_devs;

  struct kref kref;

  /* Just some device related stuff */
  struct list_head my_list;
  __iomem void *registers;
  int fatal_error;
  wait_queue_head_t my_wait;
};

The only parts that are relevant for this post are the struct cdev and the major/minor bookkeeping around it, but I left in a few other members that often appear in a memory-mapped I/O device.

Note that the example_device struct contains the cdev struct itself, and not a pointer to it. This is the usual way, but that isn’t the correct way for a USB device. More on that below.

As mentioned above, the purpose of the cdev struct is to bind a major/minor set to a struct of fops. Something like

static const struct file_operations example_fops = {
  .owner      = THIS_MODULE,
  .read       = example_read,
  .write      = example_write,
  .open       = example_open,
  .flush      = example_flush,
  .release    = example_release,
  .llseek     = example_llseek,
  .poll       = example_poll,
};

The cdev is typically initialized and brought to life with something like

  struct example_device *mydevice;
  dev_t dev;
  int rc, major, minor;

  rc = alloc_chrdev_region(&dev, 0, /* minor start */
			   mydevice->num_devs,
			   examplename);
  if (rc) {
    dev_warn(mydevice->dev, "Failed to obtain major/minors");
    return rc;
  }

  mydevice->major = major = MAJOR(dev);
  mydevice->lowest_minor = minor = MINOR(dev);

  cdev_init(&mydevice->cdev, &example_fops);
  mydevice->cdev.owner = THIS_MODULE;

  rc = cdev_add(&mydevice->cdev, MKDEV(major, minor),
		mydevice->num_devs);
  if (rc) {
    dev_warn(mydevice->dev, "Failed to add cdev. Aborting.\n");
    goto bummer;
  }

So there are a number of steps here: First, a major and a range of minors are allocated with the call to alloc_chrdev_region(), and the result is stored in the dev_t variable.

Then the cdev is initialized and assigned a pointer to the fops struct (i.e. the struct that assigns open, read, write, release). It’s the call to cdev_add() that makes the module “live” as a device file handler, by binding the fops to the major/minor set that was just assigned.

If files with the relevant major and minor happen to exist in the file system, they can be used immediately to execute the methods in example_fops. That is however unlikely in this case, since the numbers were allocated dynamically. The common procedure is hence to create the device files in the driver (which also triggers udev events, if such are defined). So there can be several calls to something like:

    device = device_create(example_class,
			   NULL,
			   MKDEV(major, i),
			   NULL,
			   "%s", devname);

Note that this is the only use of example_class. The class has nothing to do with the cdev.

And of course, all this must be reverted when the USB device is unplugged. So we’re finally getting to business.

Is it fine to remove device files on hot-unplugging?

Yes, but remember that the file operation methods may very well be called on behalf of file descriptors that were already opened.

So it’s completely OK to call device_destroy() on a device that is still opened by a process. There is no problem creating device files with the same names again, even while the said process still has the old file open. It’s exactly like any other inode, which remains visible to the process(es) holding a file handle on it. A device file is just a file: Remove it, and it’s really gone only when there are no more references to it.

In fact, for a simple “cat” process that held a deleted device, the entry in /proc went

# ls -l /proc/756/fd
total 0
lrwx------. 1 root root 64 Mar  3 14:12 0 -> /dev/pts/0
lrwx------. 1 root root 64 Mar  3 14:12 1 -> /dev/pts/0
lrwx------. 1 root root 64 Mar  3 14:12 2 -> /dev/pts/0
lr-x------. 1 root root 64 Mar  3 14:12 3 -> /dev/example_03 (deleted)

So no drama here. Really easy.

Also recall that removing the device files doesn’t mean all that much: It’s perfectly possible (albeit weird) to generate extra device files with mknod, and use them regardless. The call to device_destroy() won’t make any difference in this case. It just removes those convenience device files in /dev.

When to release the cdev struct

Or more precisely, the question is when to release the struct that contains the cdev struct. The kernel example’s suggestion (drivers/usb/usb-skeleton.c) is to maintain a reference counter on the enclosing struct (a kref). Then increment the reference count for each file opened, decrement for each file release, and also decrement it when the device is disconnected. This way, the device information (e.g. example_device struct above) sticks around until the device is disconnected and there are no open files. There is also an issue with locking, discussed at the end of this post.

But when the cdev is part of this struct, that is not enough. cdev_del(), which is normally called in the device’s disconnect method, prevents the fops from being reached by newly opened file descriptors. But note the comment in fs/char_dev.c, above the definition of cdev_del(): “This guarantees that cdev device will no longer be able to be opened, however any cdevs already open will remain and their fops will still be callable even after cdev_del returns.”

So what’s the problem, one may ask. The kref keeps the cdev until the last release! (hopefully with proper locking, as discussed at the end of this post)

Well, that’s not good enough: It turns out that the struct cdev is accessed after the fops release method has been called, even for the last open file descriptor.

Namely, the issue is with __fput() (defined in fs/file_table.c), which is the function that calls the fops release method, and does a lot of other things that are related to the release of a file descriptor (getting it off the epoll lists, for example): If the released inode is a character device, it calls cdev_put() with the cdev struct after the release fops method has returned.

Which makes sense, after all. The cdev’s reference count must be reduced sometime, and it can’t be before calling the release, can it?

So cdev_put calls kobject_put() on the cdev’s kobject to reduce its reference count. And then module_put() on the owner of the cdev entry (the owning module, that is) as given in the @owner entry of struct cdev.

Therefore, there’s a nasty OOPS or a kernel panic if the struct cdev is in a memory segment that has been freed. Ironically enough, the call to cdev_put() brings the cdev’s reference count to zero if cdev_del() has been called previously. That, in turn, leads to a call to the kobject’s release method, which is cdev_default_release(). In other words, the oops is caused by the very mechanism that is supposed to protect the cdev (and the module) from the exact problem that it ends up creating.

Ironic, but also the hint to the solution.

The lame solution

The simplest way is to have the cdev as a static global variable of the relevant module. Is this accepted practice? Most likely, as Greg K-H himself manipulated a whole lot of these in kernel commit 7e7654a. If this went through his hands, who am I to argue. However this goes along with allocating a fixed pool of minors for the cdev: The number of allocated minors is set when calling cdev_add().

The downside is that cdev_add() can only be called once, so the range of minors must be fixed. This is commonly solved by setting up a pool of minors in the module’s init routine (256 of them in usb-skeleton.c, but there are many other examples).

Even though it’s a common solution in the kernel tree, I always get a slight allergy to this concept. How many times have we heard that “when it was designed, it was considered a lot” thing?

The elegant solution

In short: Allocate the cdev dynamically. Instead of

struct example_device {
  struct device *dev;

  struct cdev cdev;
  [ ... ]
};

go

struct example_device {
  struct device *dev;

  struct cdev *cdev;
  [ ... ]
};

so the cdev struct is referred to with a pointer instead. And instead of the call to cdev_init(), go:

  mydevice->cdev = cdev_alloc();
  if (!mydevice->cdev)
    goto bummer;

  mydevice->cdev->ops = &example_fops;
  mydevice->cdev->owner = THIS_MODULE;

And from there go ahead as usual. The good part is that there’s no need to free a cdev that has been allocated this way. The kernel frees it automatically when its reference count goes down to zero (it starts at one, of course). So all in all, the kernel counts the references to cdev as files are opened and closed. In particular, it decrements it when cdev_del() is called. So it really vanishes only when it’s not needed anymore.

Note that cdev_init() isn’t called. Calling it would cause a kernel memory leak (which won’t affect the allocation of majors and minors, by the way). See “Read the Source” below, which also shows how this solves the problem.

Only note that if cdev_add() fails, the correct unwinding is:

  rc = cdev_add(mydevice->cdev, MKDEV(major, minor),
		mydevice->num_devs);
  if (rc) {
    dev_warn(mydevice->dev, "Failed to add cdev. Aborting.\n");
    kobject_put(&mydevice->cdev->kobj);
    goto bummer2;
  }

In other words, don’t call cdev_del() if cdev_add() fails. It can’t be deleted if it hasn’t been added; decrementing its reference count is the reverse operation. This is how it’s done by __register_chrdev(), defined in char_dev.c. That’s where cdev_add() and cdev_del() are defined, so they should know…

Know cdev’s reference count rules

Since the cdev is freed by the kernel, it’s important to know how the kernel manages its reference count. So these are the rules:

  • cdev is assigned a reference count of 1 by the call to cdev_alloc() (by virtue of kobject_init). Same goes for cdev_init(), but that’s irrelevant (see code snippets below).
  • cdev’s reference count is not incremented by the call to cdev_add(). So it stays on 1, which is sensible.
  • cdev’s reference count is decremented on a call to cdev_del(). This makes sense, even though it kind-of breaks the symmetry with cdev_add(). But the latter takes a free ride on the ref count of cdev_alloc(), so that’s how it comes together.
  • A reference increment is done for each opened related file, and decremented on file release.

The bottom line is that if cdev_del() is called when no relevant device file is currently open, the cdev goes away immediately.

For the extra pedantic, it may seem necessary to call kobject_get(&mydevice->cdev->kobj) immediately after cdev_alloc(), and then kobject_put() only after freeing the example_device struct, because it contains the pointer to the cdev. This is what reference counting means: Count the pointers to the resource. However since the cdev struct is typically only used for the cdev_del() call, nothing bad is likely to happen because of this pointer to nowhere after the cdev has been freed. It’s more a matter of formality.

This extra reference count manipulation can also be done with cdev_get() and cdev_put(), but will add an unnecessary and possibly confusing (albeit practically harmless) reference count to the module itself. Just be sure to set the cdev’s @owner entry before calling cdev_get() or things will get messy.

Read the Source

Finally, I’ll explain why using cdev_alloc() really helps. The answer lies in the kernel’s fs/char_dev.c.

Let’s start with cdev_init(). It’s short:

void cdev_init(struct cdev *cdev, const struct file_operations *fops)
{
  memset(cdev, 0, sizeof *cdev);
  INIT_LIST_HEAD(&cdev->list);
  kobject_init(&cdev->kobj, &ktype_cdev_default);
  cdev->ops = fops;
}

Note the kobject_init()? It initializes a kernel object, which is used for reference counting. And it’s of type ktype_cdev_default, which in this case only means that the release method is defined as

static struct kobj_type ktype_cdev_default = {
  .release	= cdev_default_release,
};

So when cdev->kobj’s reference count goes to zero, cdev_default_release() is called. Which is:

static void cdev_default_release(struct kobject *kobj)
{
  struct cdev *p = container_of(kobj, struct cdev, kobj);
  struct kobject *parent = kobj->parent;

  cdev_purge(p);
  kobject_put(parent);
}

Arrgghh! So there’s a release function! Why can’t it free the memory as well? Wouldn’t that have been perfect? Well, a catastrophe, in fact: How could it free a memory segment that lies inside another, enclosing struct?

But in fact, there is such a release function, with a not-so-surprising name:

static void cdev_dynamic_release(struct kobject *kobj)
{
  struct cdev *p = container_of(kobj, struct cdev, kobj);
  struct kobject *parent = kobj->parent;

  cdev_purge(p);
  kfree(p);
  kobject_put(parent);
}

Exactly the same, just with the kfree() at the right spot. Backed up by

static struct kobj_type ktype_cdev_dynamic = {
  .release	= cdev_dynamic_release,
};

and guess which function uses it:

struct cdev *cdev_alloc(void)
{
  struct cdev *p = kzalloc(sizeof(struct cdev), GFP_KERNEL);
  if (p) {
    INIT_LIST_HEAD(&p->list);
    kobject_init(&p->kobj, &ktype_cdev_dynamic);
  }
  return p;
}

Now let’s compare it with cdev_init():

  • It allocates the cdev instead of using an existing one. Well, that’s the point, isn’t it?
  • It doesn’t call memset(), because the segment is already zeroed by virtue of kzalloc().
  • It doesn’t assign cdev->ops, because it doesn’t have that info. The driver is responsible for that now.
  • It sets the kernel object to have a release method that includes the kfree() part, of course.

This is why cdev_init() must not be called after cdev_alloc(): Even though it apparently does no harm, it re-inits the kernel object to ktype_cdev_default. That’s easily left unnoticed, since the only consequence is that kfree() is never called, causing a very small, barely noticeable kernel memory leak. No disaster, but people go to kernel-hell for less.

When and how to free example_device

Now back to the topic of maintaining a reference count on the device’s information (e.g. struct example_device). It should contain a struct kref, which allows keeping track of when the struct itself must be kept in memory, and when it can be deleted. As mentioned earlier, the kref is initialized with a reference count of 1, then incremented every time the open method is called for a related device file, decremented for every release of such, and once again decremented when the device itself is disconnected.

On the face of it, easy peasy: The struct goes away when there are no related open device files and the device itself is gone too. But what if there’s a race condition? What if a file is opened at the same time that the device is disconnected? This requires a mutex.

The practice for using kref is to decrement the struct’s reference count with something like

kref_put(&mydevice->kref, cleanup_dev);

where cleanup_dev is a function that is called if the reference count reached zero, with a pointer to the kref. The function then uses container_of to find the address of the structure containing the kref, and frees it. Something like

static void cleanup_dev(struct kref *kref)
{
  struct example_device *dev =
    container_of(kref, struct example_device, kref);

  kfree(dev);
}

The locking mechanism is relatively simple. All it needs to ensure is that the open method doesn’t try to access the example_device struct after it has been freed. But since the open method must do some kind of lookup to find which example_device struct is relevant (by checking if it covers the major/minor of the opened device file), the name of the game is to unlist the example_device before freeing its memory.

So if the driver implements a list of example_device structs, one for each connected USB device, all that is necessary is to protect the access to this list with a mutex, and to hold that mutex while kref_put() is called. Likewise, this mutex is taken by the open method before looking in the list, and is released only after incrementing the reference count with kref_get().

And then make sure that the list entry is removed in cleanup_dev.

Bonus: When is it OK to access the USB API’s structs?

Not directly related, but definitely worth mentioning: The memory chunk to which struct usb_interface *interface points (which is given to both probe and disconnect) is released after the call to the disconnect method returns. This means that if any other method holds a copy of the pointer and uses it, there must be some kind of lock that prevents the disconnect call from returning as long as this pointer may be in use. And of course, prevents any other thread from starting to use this pointer after that. Otherwise even something as innocent as

dev_info(&interface->dev, "Everything is going just great!\n");

may cause a nasty crash. Sleeping briefly in the disconnect method is OK, and it solves this issue. Just be sure no other thread sleeps forever with that lock taken. That should not be an issue, because asynchronous operations on the USB API have no reason to block.

This is demonstrated well in the kernel’s own usb-skeleton.c, by virtue of io_mutex. In the disconnect method, it goes

mutex_lock(&dev->io_mutex);
dev->interface = NULL;
mutex_unlock(&dev->io_mutex);

and then, whenever the driver wants to touch anything USB-related, it goes

mutex_lock(&dev->io_mutex);
if (!dev->interface) {
  mutex_unlock(&dev->io_mutex);
  retval = -ENODEV;
  goto error;
}

and keeps holding that mutex during all business with the kernel’s USB API. Once again, this is reasonable when using the asynchronous API, so no call blocks.

It’s however not possible to hold this mutex in URB completion callbacks, since they are executed in atomic context (an interrupt handler or tasklet). These callback routines are allowed to assume that the interface data is legit throughout their own execution, because the kernel’s USB subsystem makes sure to complete all URBs (with a -ENOENT status) before tearing the interface down.

This is true unless the soft_unbind flag is explicitly set by the device driver, which means “if set to 1, the USB core will not kill URBs and disable endpoints before calling the driver’s disconnect method.”

For example, in usb-skeleton.c, dev->interface->dev is used for an error message in the completion callbacks. Also, usb_unbind_interface() (in usb/core/driver.c) sets intf->condition to USB_INTERFACE_UNBINDING and then calls usb_disable_interface() to terminate all URBs before calling the disconnect method. So this ensures no new URBs are queued and the old ones are completed.

Linux kernel programming: Do I need a lock?

Introduction

Writing a device driver for Linux (or other kernel programming) always requires keeping parallel execution in mind. It’s often enough to follow common programming patterns, using spinlocks and mutexes in the same way that everyone else does. But somehow, I always find myself looking at my own code, and ask myself: Am I absolutely sure that this will work? Isn’t there any crazy race condition that will show up once in a while and make things fail horribly in the most elusive way?

Or even worse: Will this work properly if CPU 1 does this and CPU 2 does that? When can I just ignore the fact that one piece of code will run on one CPU and the other piece on another?

Maybe because my main expertise is logic design (FPGA), I have this thing about looking for by-definition guarantees for proper functionality. This is a good habit in programming in general, I believe, and in kernel programming in particular.

I’ve already written a post with a similar topic, but at the time the focus was on memory-mapped I/O. This somewhat messy post is all about the programming model. Quite ironically, it’s difficult to find the right order of topics in a post discussing parallel execution.

Comments and corrections are particularly welcome below. The main incentive to write this post is actually to validate my view on this topic.

The curse of awareness

One may do kernel programming happily until the concept of memory barriers comes to mind. The awareness that both the compiler and the processor may reorder, optimize away or even add read and write operations to memory kind of makes one wonder how any trivial piece of code even works. The notion that memory writes may not propagate as one might expect between CPUs in an SMP processor (they all are nowadays) undermines the basic, naïve assurance that all variables have the values that were last assigned to them, even if that took place earlier in a different function. I mean, what if that function ran on a different processor? Did the changes all make it to the new one?

Memory barriers, by themselves, offer a disappointing level of assurance: An SMP write barrier (smp_wmb() ) just says that all writes until that barrier are guaranteed to have been propagated before the writes that came after it. The SMP read barrier (smp_rmb() ) likewise says that all reads before the barrier are guaranteed to appear to have been done before those that came after the barrier.

This isn’t very convincing, but at least it allows setting up a “valid” flag for some data. For example, one thread can fill a data structure, call smp_wmb() and then write to a flag in the data structure that says it’s OK for use. The other thread can busy-wait on that same flag, and once it’s set, call smp_rmb() and then access the data. Because of the use of both barriers, the data is guaranteed to be valid when it’s read by the second thread.

But who’s been thinking about memory barriers? What about all the code I’ve written until now? Did it work just by chance?

As I’ve noted in that earlier post, the SMP memory barriers translate into nothing on an x86 platform. This is really bad news, because most of us develop on these platforms, so if the barrier jugglery is done incorrectly, it shows only when the code runs on a different platform. In other words, the complaint will come later and out of nowhere.

Read the kernel docs

Before diving deeper into the subtle issues, here’s a list of references. The related docs in the kernel’s sources (as of v5.3, these tend to move):

  • Documentation/memory-barriers.txt
  • tools/memory-model/Documentation/recipes.txt
  • tools/memory-model/Documentation/explanation.txt
  • Documentation/atomic_t.txt
  • Documentation/core-api/workqueue.rst

The simple rule

If you’re absolutely sure that taking a spinlock or mutex will never wait for anything, don’t take it. Maybe consider a memory barrier.

Put otherwise: If there’s a code segment A, that is guaranteed to reach its end before code segment B begins its execution, it’s functionally guaranteed to behave as if they ran sequentially on the same processor — all data is updated at the beginning of segment B as at the end of segment A.

The keyword here is guaranteed: It means that some means of the kernel API is relied upon in this matter. There are several such means available, for example:

  • Kernel locks (spinlocks, semaphores, mutexes), of course. But that’s what we’re trying to avoid.
  • Mechanisms that ensure non-reentrance. For example, a given tasklet or work item is guaranteed to run on at most one CPU in the entire system at a time.
  • Calls to synchronization functions: For example, after calling cancel_work_sync() or flush_workqueue(), the thread that called these functions has the same memory view as the last related work item had (plus possible changes that occurred afterwards, of course).
  • Device drivers can rely on disconnect() not being called concurrently with probe(), and on the release fops method not being called before the last read or write method has returned.
  • … and there are tons of others, of course.

The rationale behind this rule is that if this data dependency rule wasn’t guaranteed, virtually nothing in the kernel would work. Synchronization of data is in no programmer’s mind until there’s an explicit possibility for parallel execution (including interrupts). And even when there’s parallel execution in mind, programmers use locks, and (correctly) assume that the ball is passed to the thread holding the lock, along with the entire memory view being synchronized as well.

In short: If you hold the relevant lock, or are otherwise guaranteed by the kernel API that nobody’s currently fiddling with the exact same piece of memory, you have the right to ignore any consistency issues. Everyone else does just that, so the kernel API makes sure it actually works this way.

Waking up a waiting process

A special case is the wait queue: One thread calls one of the wait_event*() API functions to sleep until a condition occurs, and a second thread (often an interrupt) calls wake_up*(). If this relationship ensures exclusive execution (i.e. the thread that called wait_event*() won’t run again until the second one is finished), then the second thread may assume that it’s synchronized with the first thread at the point of the wake_up call.

This is important in particular regarding the condition that is passed to the wait_event*() call. It’s however quite common to take a spinlock on the second thread when accessing the common data, since it can’t be guaranteed that the first thread won’t be invoked again (in particular if it’s an ISR).

Lock-this lock-that

So here’s the catch: It’s tempting to take some kind of lock to feel safer. Something like:

spin_lock_irqsave(&x->spinlock, flags);
safe_value = x->value;
spin_unlock_irqrestore(&x->spinlock, flags);

and with this, have a comfy feeling about safe_value being correct somehow. After all, its value was obtained with protection gear on. But was that protection necessary?

A lock that is inserted today will be very difficult to remove in the future, if only because doing so requires recalling all the considerations that are fresh in your memory today. And an unnecessary lock may cause difficulties, in particular for adding features that aren’t in sight today. So by all means: “better safe than sorry” is not an excuse for a lock. Think each and every one through.

Access with no protection at all

So what happens if we read or write carelessly from a memory region that is possibly accessed by other threads? Well, the first thing to consider is that the compiler may play some nasty pranks with optimization. The classic example is this failing busy-wait loop

while (!x->ready);

which is supposed to wait until x->ready becomes true by virtue of some other thread changing it. Almost any optimizing compiler will treat this as a single-threaded program, in which x->ready can’t change by itself, and implement the equivalent of

if (!x->ready)
  while (1);

or in other words, an infinite loop. At the compiler level, this can be solved with the volatile attribute, however odds are that a patch containing that word will be rejected: It’s considered a red flag saying you’ve misunderstood something, even though quite a few drivers in the kernel tree use volatile.

Rather, READ_ONCE() and WRITE_ONCE() should be used when an explicit read or write of memory should take place. These prevent the compiler’s optimization (typically by using the volatile keyword, actually) but also the compiler’s reordering of successive READ_ONCE() and WRITE_ONCE(). Note however that these don’t imply memory barriers — they tell the compiler what to do, but not (necessarily) the processor. In particular, the implementation of these for most processors leaves it free to reorder the accesses as long as the result is the same as a single-thread program. In other words: If another CPU fiddles with the same memory regions, this reordering can bite back.

So if memory barriers are used, are READ_ONCE() and WRITE_ONCE() needed? If memory barriers are set correctly along with plain C memory accesses to ensure correct ordering with respect to another CPU, and the compiler’s optimization stunts are neutralized anyhow, why bother? And if it’s for I/O, use readl() / writel() or similar functions.

recipes.txt explains that:

If there are multiple CPUs, accesses to shared variables should use READ_ONCE() and WRITE_ONCE() or stronger to prevent load/store tearing, load/store fusing, and invented loads and stores.

Say what? Load/store tearing? That means that, for example, a 32-bit processor executes a write of a 32-bit word (an int) in a non-atomic manner? Splitting it into bytes? Huh?

This is discussed further in explanations.txt, in the section named THE READS-FROM RELATION, which also emphasizes:

It requires all accesses to be properly aligned and of the location’s actual size.

To make a long story short: No sane processor will split a natural, aligned access of a word into a non-atomic operation. But to be safe, use READ_ONCE() and WRITE_ONCE() when it’s important to nail down a memory access, and be sure that the address is aligned to the element’s size.

This discussion has another important implication: When using WRITE_ONCE() and READ_ONCE() with aligned addresses, one can rely on the value that is read with READ_ONCE() being one of those that were written by some WRITE_ONCE() operation in the past, and never some kind of intermediate junk value, even when it’s read from another CPU. So even if we don’t know exactly when the value of a WRITE_ONCE() will reach the other CPUs (not even whether it’s before or after other WRITE_ONCE’s), we know it’s always seen as one of the sane values it was assigned.

As a side note, it’s often tempting to ask oneself whether the WRITE_ONCE will be seen “immediately” by another thread. I just wrote the value, is it on the other CPU yet? And now? So well, it’s not a valid question. If the dependence between the two threads is important, a mechanism should be put in place to ensure synchronization. Locks, memory barriers, whatever. If it isn’t, the other thread should work perfectly fine reading the value before the WRITE_ONCE().

When to use atomic_t

Consider this:

spin_lock_irqsave(&x->spinlock, flags);
x->value++;
spin_unlock_irqrestore(&x->spinlock, flags);

Among other things, this makes sure that no increment gets lost because two CPUs were doing it at the same time. But what if this is the only way x->value is changed? Is the spinlock necessary at all? Why not just this?

x->value++;

Like, find me a processor that doesn’t have an atomic increment opcode.

Well, the immediate answer is that the value after the increment may not have propagated to a neighboring processor, so if it also increments the same field more or less at the same time, both add 1 to the same value, and one increment is lost.

So the correct way for doing this is to define x->value of atomic_t type, and then

atomic_inc(&x->value);

possibly without any lock whatsoever. This guarantees that the increment is accounted for properly, even when two CPUs do this in parallel. Or as written in explanations.txt:

What does it mean to say that a read-modify-write (rmw) update, such as atomic_inc(&x), is atomic? It means that the memory location (x in this case) does not get altered between the read and the write events making up the atomic operation. In particular, if two CPUs perform atomic_inc(&x) concurrently, it must be guaranteed that the final value of x will be the initial value plus two.

As for ordering and memory barriers, some commands offer this and others don’t. Refer to atomic_t.txt for more information on that. For example, atomic_inc() gives no ordering protection whatsoever.

There are, of course, a whole range of other atomic operations supported. See atomic_t.txt.

As for plain reads and writes of atomic_t, there are dedicated functions: atomic_read() and atomic_set(), which are implemented with just READ_ONCE() and WRITE_ONCE(). Or as written in atomic_t.txt:

The non-RMW ops are (typically) regular LOADs and STOREs and are canonically implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release() respectively. Therefore, if you find yourself only using the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all and are doing it wrong.

Simply put: atomic_t is for atomic modifications, like atomic_inc(). Don’t use it where READ_ONCE() and WRITE_ONCE() would do the job literally equally well.

Once again, it’s tempting to ask oneself when and how soon the other CPU sees the incremented value, and the answer remains the same: It should work just fine whether it sees the value before or after the increment, and if it doesn’t, you’re doing it wrong.

Can work items on the same workqueue run concurrently?

Or put the other way around, is sequential execution ensured? If work item A and B are queued on the same workqueue, do I need any locks to protect stuff that both access? Or can I rely on the kernel to prevent concurrent execution?

The truth is that I don’t have a definite answer on this one. Comments are welcome (below).

On one hand, the official kernel docs say

… the worker executes the functions associated with the work items one after the other.

The “one after the other” thing sounds quite reassuring. But does “after the other” mean after starting the previous item, or after it has finished? That brings me to the other hand, which is reading the source. Namely process_one_work() in kernel/workqueue.c, which is called, among others, by process_scheduled_works() for the obvious purpose. Actually, it’s enough to read the comments associated with the former:

As long as context requirement is met, any worker can call this function to process a work.

and

A single work shouldn’t be executed concurrently by multiple workers on a single cpu. Check whether anyone is already processing the work. If so, defer the work to the currently executing one.

Actually, a single work shouldn’t be executed concurrently at all, as far as I know. But the point is that the test relates to a work item against itself, and it says nothing about other work items.

So my conclusion, for which comments are welcome, is as follows:

  • A lock is required to ensure synchronization between different work items, even if they are queued on the same workqueue…
  • … but things will most likely work perfectly fine without a lock in this scenario, because odds are that only a single worker thread will be allocated for the workqueue. In other words, the fact that it works without a lock doesn’t mean you’ve done it right.
  • Sleeping in a work item is likely to delay other work items in the same workqueue. The fact that a lock is needed to protect against concurrency doesn’t mean that the kernel will produce another work thread when you want it.

Bonus: A note on locks

I didn’t find a good place to push this in, but I still want this on this post.

From recipes.txt, on locking:

Any CPU that has acquired a given lock sees any changes previously seen or made by any CPU before it released that same lock.

Note that it says “seen or made” not just made. In other words, getting the lock gives the complete memory view of the CPU that released it, not just changes it made. Also note that it says “lock” — that means spinlocks as well as mutexes.

This is most likely how the kernel API ensures the synchronization between segments of code that are guaranteed not to run in parallel: There is always some kind of lock on some kind of data structure that says “X can now run, Y can’t”.

A few epoll jots

Just a few things I wrote down while getting the hang of Linux’ epoll working with a named pipe. There’s also a little test program at Github.

  • Be sure to read this and this.
  • An event list for a file descriptor can be added only once with epoll_ctl(…, EPOLL_CTL_ADD, …). Calling epoll_ctl for adding an event entry for a file descriptor which is already listed results in an EEXIST error (the manual says so, and hey, it also happens).
  • The @events member passed to epoll_ctl() is an OR of all events to watch. The @events member in the array of events returned by epoll_wait() are the events that are in effect.
  • It’s fine to register events that are unrelated (i.e. will never happen), not that there’s any reason to do so deliberately.
  • If several events are triggered for the same file descriptor, they are ORed in one array entry by epoll_wait().
  • Without the EPOLLET flag (edge-triggered), the same event keeps appearing endlessly until cleared by some I/O action.
  • In particular, EPOLLHUP is returned continuously on a FIFO (named pipe) opened for read with the other side unopened.
  • Same for EPOLLERR with a FIFO opened for write.
  • In edge-triggered mode (with EPOLLET) an event is generated each time new data is fed or drained on the other side, even if the previous data hasn’t been cleared. In this sense, it isn’t really edge-triggered. Probably the calls to wake_up() (in different variants) in the driver cause this.
  • As expected, if a FIFO is opened for read with O_NONBLOCK, there is no event whatsoever when the other side is opened — only when data arrives.
  • Important: If the counterpart side is closed and then reopened while there is no epoll_wait() blocking, this will go unnoticed. The solution is probably to have a tight loop only picking up events, and let some other thread take it from there.

Jots on named pipes (FIFOs in Linuxish)

Major disclaimer

These are pretty random jots that I made while evaluating named pipes as a solution for a project. I eventually went for a driver in the kernel for various reasons, so I never got to verify that anything written below is actually correct.

I’ve also written a small post on epoll with named pipes (in Linux, of course) along with a tiny test program.

Linux named pipes (FIFOs)

Section 3.162 of IEEE Std 1003.1-2001, Base Definitions, defines FIFO Special File (or FIFO) trivially, and refers to POSIX IEEE Std 1003.1-2001 for “Other characteristics” related to lseek( ), open( ), read( ), and write( ). In section 3.273 it defines Pipe, and says that it “behaves identically to a FIFO special file”.

From POSIX IEEE Std 1003.1-2001, System Interfaces, Issue 6, line 27153: “When opening a FIFO with O_RDONLY or O_WRONLY set: (…) If O_NONBLOCK is clear, an open( ) for reading-only shall block the calling thread until a thread opens the file for writing. An open( ) for writing-only shall block the calling thread until a thread opens the file for reading.”

Even though the POSIX standard is available for download at a cost of a few hundred dollars, there’s the Open Group Base Specification, which matches it quite well: Base Definitions and System Interfaces.

This is a good source on pipes.

  • The buffer for each pipe is 64 kB by default on Linux
  • An anonymous pipe (created with pipe() ) is just like a named pipe (i.e. a FIFO special file), only that the node is in pipefs, which only the kernel has access to. There are slight differences in how they’re handled.

Reading the source code (fs/pipe.c)

  • This source clearly handles both named and anonymous pipes (the behavior differs slightly).
  • There are F_SETPIPE_SZ and F_GETPIPE_SZ fcntl calls, which clearly allow setting the buffer size. Added in patch 35f3d14dbbc58 from 2010 (v2.6.35).
  • fifo_open(): The is_pipe boolean is true if the FIFO is a pipe() (has a magic of PIPE_FS). In other words, if it’s false, it’s a named pipe.
  • If a FIFO is opened for read or write (not both), the open() call blocks until there’s a partner, unless O_NONBLOCK is set, in which case the behavior is somewhat messy. The comments imply that this is required by POSIX.
  • It’s perfectly fine that a FIFO is opened multiple times both for read and write. Only one reader gets each piece of data, and quite randomly. Writers contribute to the stream independently.
  • FIFOs and pipes implement the FIONREAD ioctl() command for telling how many bytes are immediately ready for reading. This is the only ioctl() command implemented however.
  • The flags returned by epoll_wait() are as understood from pipe_poll(). Not clear what purpose the list of EPOLL* flags in those calls to wake_up_interruptible_sync_poll() has.

Things to look at:

  • What if a file descriptor open for read is closed and there’s no write() blocking on the other side? Is there a way to get a software notification? A zero-length write always succeeds, even if the other side is closed (this case is handled explicitly before the part that produces the EPIPE / SIGPIPE).
  • Same in the other direction: A descriptor open for write closes, but there’s no read() to get the EOF. Likewise, a zero-length read() always succeeds, even if the other side is closed (which makes sense, because even if it’s closed, reading should continue until the end).
  • Is there a way to wait for the partner without blocking, or must a thread block on it?
  • What happens on a close() followed immediately by an open()?
  • What about user opening the file in the wrong direction?

open() with O_NONBLOCK? (not)

It’s tempting to open a FIFO with O_NONBLOCK, so that no thread needs to block while waiting for the other side to be opened. POSIX IEEE Std 1003.1-2001, System Interfaces, Issue 6, says in the part defining open(), page 836:

When opening a FIFO with O_RDONLY or O_WRONLY set:

  • If O_NONBLOCK is set, an open( ) for reading-only shall return without delay. An open( ) for writing-only shall return an error if no process currently has the file open for reading.
  • If O_NONBLOCK is clear, an open( ) for reading-only shall block the calling thread until a thread opens the file for writing. An open( ) for writing-only shall block the calling thread until a thread opens the file for reading.

In the list of error codes for open(), ENXIO is bound (along with another irrelevant scenario) to: O_NONBLOCK is set, the named file is a FIFO, O_WRONLY is set, and no process has the file open for reading.

Linux’ implementation of FIFOs follows this exactly.

This rules out using O_NONBLOCK for opening a file for write — it will simply not work. As for opening a file for read, it will work, but the epoll() call won’t wake up the process before there is data to read. Merely opening the other side doesn’t generate any event.

Windows named pipes

The basic reading: Microsoft’s introduction and a list of relevant API functions.

General notes:

  • The concept of named pipes in Windows seems to resemble UNIX domain sockets in that it’s formed with a client / server terminology. Unlike domain sockets, the client may “connect” with a plain file open() (actually CreateFile and variants). Hence Windows’ named pipes can be truly full duplex between two processes.
  • But named pipes can also be accessed remotely. The permissions need to be set explicitly to avoid that.
  • The client’s opening of a named pipe is known by the return of ConnectNamedPipe (or a respective event in OVERLAPPED mode).
  • Pipe names are not case-sensitive.
  • A pipe is created (by the server) with access mode PIPE_ACCESS_INBOUND, PIPE_ACCESS_OUTBOUND or PIPE_ACCESS_DUPLEX, indicating its direction, as well as the number of instances — the number of times the pipe can be opened by clients. There’s PIPE_TYPE_BYTE and PIPE_TYPE_MESSAGE types, the former creating a UNIX-like stream, and the second treats writes to the pipe as atomic messages.
  • A named pipe ceases to exist when its number of instances goes to zero. It’s an object, not a file. Hence for a sensible application where a single process generates the named pipe, they vanish when the process terminates.
  • Use PIPE_WAIT even when non-blocking behavior is required. Don’t use PIPE_NOWAIT flag on creation for achieving non-blocking use of the pipe. Overlapping access is the correct tool. The former is available to allow compatibility with some Microsoft software accident.
  • If a client needs to connect to a pipe which has all its instances already connected, it may wait for its turn with WaitNamedPipe.
  • When finishing a connection with a client, FlushFileBuffers() should be called to flush pending written data (if the client didn’t close the connection first) and then DisconnectNamedPipe().
  • The suitable mode for working is overlapped I/O. This is the official example.
  • There’s a scary remark on this page and this one, claiming that named pipes are effectively useless for IPC, and that the object’s name has changed; this is also discussed here. It seems however that this remark doesn’t relate to anything else than UWP apps. Or else it wouldn’t have caused the vulnerability mentioned here, and how could Docker use it this way?

Windows named pipes: Detecting disconnection of client

The word out there is that a disconnection can’t be detected unless there’s an outstanding ReadFile or WriteFile request. This holds true in particular for TCP sockets. So first try this: Make sure there’s only one instance, and let the server call WaitNamedPipeA as soon as a connection is made. This call will hopefully return when the real client disconnects. This will not work if Windows wants the server to disconnect as well before considering the pipe instance vacant enough. It might, because the client is allowed to connect before the server. It all depends on when Windows decrements the reference count and cleans up.

usbpiper: A single-threaded /dev/cuse and libusb-based endpoint to device file translator

Introduction

Based upon CUSE, libusb and the kernel’s epoll capability, this is a single-threaded utility which generates one /dev/usbpiper_* device file for each bulk / interrupt endpoint on a USB device. For example, /dev/usbpiper_bulk_in_01 and /dev/usbpiper_bulk_out_03.

It’s an unfinished project, that was stopped before a lot of obvious tasks in the TODO list were done. This is why several parameters are hardcoded and some memory allocations aren’t freed. Plus several other implications listed below.

It’s available at Github: https://github.com/billauer/usbpiper

I eventually went for a good old kernel driver instead. This post explains why, and you probably want to read it if you have plans on this utility or want to use FUSE or CUSE otherwise. That post also explains why I went right on to /dev/cuse rather than using libfuse.

Nevertheless, the project may very well be useful for development of USB projects, as a boilerplate or a getting-started utility. It also shows how to implement epoll-based asynchronous USB transfers, as well as implementing a CUSE-based device file driver in userspace, implementing the protocol of /dev/cuse directly (i.e. without relying on libfuse). And all this as a single thread program.

But what was the utility meant to do in the first place?

The underlying idea is simple: With a single-threaded userspace program, create a plain character device for each BULK (or INTERRUPT) endpoint that is found on a selected USB device, and allow data to be sent to each OUT endpoint by opening a device file, and just write data to it. With “cat” for example. And the other way around, read data from each IN endpoint by reading data from another device file. This description is simplistic, however it may work quite well when working on a USB device project. Just be sure to read the details below on setting up usbpiper. Doing that pretty much covers the necessary gory details.

What usbpiper definitely isn’t: It’s NOT a user-space driver for XillyUSB (a generic FPGA IP core for SuperSpeed USB 3.0, based upon the FPGA’s Gigabit transceivers). XillyUSB requires a dedicated driver, which implements a specific protocol with the IP core.

Confusing usbpiper with XillyUSB’s driver is easy, because both share the idea of plain device files for I/O with a USB device. In fact, usbpiper started off as a user-space driver for XillyUSB, but never got to the point of covering XillyUSB’s protocol.

Another possible source of confusion is usbfs. It’s a USB filesystem, so what is there to add? So yes, usbfs is used by libusb to allow a low-level driver for a USB device to be written in user space (usbpiper uses this interface, of course). But it doesn’t allow simple access to the data.

It’s recommended to look at this post on the protocol with /dev/cuse before diving into this one.

What works and in what ways it’s unfinished

usbpiper is executed with no arguments. It takes control of the selected USB device’s interface (which one — see below) and creates a /dev/usbpiper_* device file for each bulk or interrupt endpoint that it finds. The file’s name reflects the endpoint’s number, direction and bulk vs. interrupt.

It has however only been tested on bulk endpoints. Interrupt endpoints may work, but have not been tested, and isochronous endpoints are ignored. Also, usbpiper doesn’t free memory properly, in particular not buffers and other memory-consuming stuff related to libusb.

Several parameters would normally be set through command-line parameters, but they are hardcoded.

The verbosity level can be set by editing some defines in usbpiper.h. In particular, a lot of messages are reduced by replacing

#define DEBUG(...) { fprintf(stderr, __VA_ARGS__); }

with

#define DEBUG(...)

In usbpiper.c, max_size defines the largest number of bytes that can be handled in a CUSE READ or WRITE request.

In usb.c, the following parameters are hardcoded:

  • FIFOSIZE: The effective number of bytes in the FIFO between the CUSE and USB units. The actual FIFO size for OUT endpoints is larger by max_size, for reasons explained in the “Basic data flow principle” section below.
  • vendorID and prodID define the device to be targeted. Note that the find_device() function in usb.c explicitly finds the device from the list of devices on the bus, so it can be altered to select the device based upon other criteria.
  • int_idx and alt_idx are the Interface and Alternate Setting indexes for selection on the device. More on this issue below.
  • td_bufsize is the size of the buffer that goes with each transfer. Set to 64 kiB, which is probably overkill for most devices, but reasonable for proper bandwidth utilization with SuperSpeed devices. Also see below why it should be large when working with just some device.
  • numtd: The maximal number of outstanding transfers for each endpoint. A large number is good for high-bandwidth applications (with SuperSpeed) since it gives the hardware controller several transfers in a row before software intervention is required. Make it too big, and libusb_submit_transfer() may fail (the controller got more than it could accept).

Features that were meant to be added, but I never got there:

  • Array size of epoll should be dynamic (number of held file descriptors). Currently it’s ARRAYSIZE in usbpiper.c.
  • A file was supposed to be bidirectional. Makes no sense in this usage scenario, and bidirectional was never tested.
  • Non-blocking OPEN not supported
  • Was intended to support USB hotplugging
  • Adaption to XillyUSB’s protocol

USB Transfers and why you should care about them

There is a good reason why there isn’t any pipe-like plain device file interface for any USB device by default: usbpiper overlooks several details in the communication of a USB device.

The most important issue is that USB communication is divided into transfers, and is generally not treated as a continuous stream of data. The underlying model in the USB spec is that the host software initiates a transfer of a given number of bytes (in or out), the USB framework carries it out against the device, and then informs the software that it has been finished. The USB spec’s authors seem to have thought that the mainline usage of the USB bus would be done with a functional call saying something like “send this packet of data to the device”. Or another function saying “receive X bytes from the device”, which returns with a buffer pointing to the data buffer.

The USB framework supports asynchronous transfers, of course, but that doesn’t change the notion that the host’s software explicitly requests each transfer with a given number of bytes. All communication is cut into packet-like chunks of data with clear boundaries. The device is allowed to deviate from the host’s transfer requests in only one way: On IN endpoints, it’s allowed to terminate a transfer with fewer bytes than the host expected, and this is not considered an error.

However generally speaking, any software that communicates with a device directly (i.e. a device driver) is expected to know when the device expects transfers and of what size. usbpiper ignores this completely. Therefore, it may very well not work properly with just any device. This is less of an issue if the device is developed along with using usbpiper.

The three points to note are hence:

  • usbpiper sets the byte count of OUT transfers according to the momentary buffer fill, up to a certain limit (td_bufsize). If the device expects a certain number of bytes in the transfer (which is legit) or the transfers are longer than it can take — things will break, of course. A device may also be sensitive to transfer boundaries, which usbpiper pays no attention to. If the device expects a fixed length for all transfers, this issue can be worked around by modifying try_queue_bulkout() to never send a partially filled transfer, and to set the desired length instead of td_bufsize.
  • usbpiper sets td_bufsize as the length of IN transfers, however the host doesn’t inform the device on how long the transfer is expected to be. The device driver is supposed to know the maximal length of an IN transfer that the device will respond with, and prepare a buffer long enough. Otherwise, a babbling error results (libusb returns LIBUSB_ERROR_OVERFLOW). td_bufsize is set to 64 kiB which is unlikely to be exceeded by USB devices — but this isn’t guaranteed.
  • Another issue with IN endpoints is that the information on where the boundaries of the transfers are is lost: usbpiper just copies the data into a FIFO, which is read continuously on the other side. If the protocol of an IN endpoint relies on the driver knowing where a transfer started, usbpiper won’t be useful. This can be the case if the transfers are packets with a header, but without a data length field, which makes sense against a driver that receives the transfers directly.

Interfaces and alternate settings

A USB device may present several interfaces, and each interface may have alternate settings. This isn’t a gory technical detail, but can be the difference between getting your device working with usbpiper or not, in particular if it’s not something you designed yourself.

Even though a device is assigned an address on the USB bus, any USB driver claims control of an interface of that device. In other words, it’s perfectly normal that several, possibly independent drivers control a single physical device: A keyboard / mouse combo device, a sound card with MIDI and joystick interfaces (not so common today), or a scanner / printer which also acts as a card reader:

$ usb-devices
T:  Bus=01 Lev=03 Prnt=44 Port=03 Cnt=01 Dev#= 45 Spd=480 MxCh= 0
D:  Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs=  1
P:  Vendor=03f0 ProdID=7a11 Rev=01.00
S:  Manufacturer=HP
S:  Product=Photosmart B109a-m
S:  SerialNumber=MY5687428T02D2
C:  #Ifs= 4 Cfg#= 1 Atr=c0 MxPwr=2mA
I:  If#= 0 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=cc Prot=00 Driver=(none)
I:  If#= 1 Alt= 0 #EPs= 2 Cls=07(print) Sub=01 Prot=02 Driver=usblp
I:  If#= 2 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)
I:  If#= 3 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage

Note that the device effectively behaves like two independent devices: A scanner / printer and a USB disk.

It’s therefore important to not just set the Vendor / Product IDs correctly, but also the interface. usb-devices and lsusb -vv may help make the correct selection.

Alternate setting is less common, but a single interface may have different usage modes. If present, this must be set correctly as well.

Basic data flow principle

The purpose of the utility is to move data from a USB endpoint to a CUSE device file or vice versa. To accomplish this, there is a plain RAM FIFO allocated for each such data stream.

For an IN endpoint, the USB frontend queues asynchronous transfer requests using libusb. For each IN transfer that is finished, the data is copied into the relevant FIFO. On the FIFO’s other side, the read() calls on the device file (i.e. CUSE READ requests) are fulfilled, as necessary, by submitting data that is fetched from the FIFO. Overflow of the FIFO is prevented by queuing IN transfer requests only when there’s enough room in the FIFO to accept the data that all outstanding requests may carry, if they all return with a full buffer. Underflow is not an issue, but the read() call isn’t completed if there is no data to submit, in which case read() blocks.

For an OUT endpoint, the handler of a write() call (i.e. CUSE WRITE requests) copies the data into the relevant FIFO. As a result of the FIFO containing data, the USB frontend may queue new OUT transfers with the data available — it may also not do so, in particular if the number of already outstanding transfers stands at the maximal available. The FIFO is protected from overflow by blocking the write() call until there is enough room in the FIFO. The exact condition relates to the fact that the length of the data buffer of each CUSE WRITE request is limited by a number (max_size in the code) that is set during CUSE initialization. A WRITE request is hence not completed (hence preventing another one) until there is room for max_size additional bytes in the FIFO, after writing the current request’s data to the FIFO. This ensures that the usbpiper process always has somewhere to put the data, and doesn’t need to block — which it’s not allowed to do, being a single-threaded utility.

The requirement of always having max_size bytes vacant in the FIFO gets slightly trickier when a WRITE request is interrupted (i.e. receives an INTERRUPT request on its behalf). This forces usbpiper to complete the request immediately. In order to maintain the requirement on the FIFO, usbpiper possibly unwinds the FIFO, throwing away data so that at least max_size bytes remain vacant. This doesn’t break the data stream’s integrity or continuity, because the write() call returns with the number of bytes actually written (or -EINTR, if none). If the FIFO was unwound, the number of bytes that were discarded is deducted from write()’s return value, giving the caller of write() the correct picture of how much data was consumed.

Execution flow

Recall from above that usbpiper doesn’t rely on libfuse, but rather communicates with the CUSE framework directly through /dev/cuse.

As the utility’s single thread needs to divide attention between the USB tasks and those related to CUSE, a single epoll() file descriptor is allocated for all open /dev/cuse files as well as those supplied by the libusb framework. An epoll_wait() event loop is implemented in usbpiper.c: Each entry in the epoll_event array contains a pointer to a small structure, which contains a function to call and a pointer to private data to pass to that function.

The communication protocol with /dev/cuse is discussed in another post. For the purpose of the current topic, the CUSE kernel framework creates a device file in /dev each time /dev/cuse is opened and a simple read-write handshake is completed. After this, for each operation on the related device file (e.g. open(), read(), write() etc.) a request packet is passed to the server (i.e. usbpiper in this case) by virtue of read() calls on the /dev/cuse file handle. The operation blocks until the server responds by writing a buffer to the same file handle, which contains a status header and possibly data. Responses to requests are not necessarily written in the same order as the requests arrived. A unique ID number in the said status header ensures the pairing between requests and their responses.

read() calls from /dev/cuse block when there’s nothing to do, and are therefore subject to epoll in usbpiper. write() calls never block.

However this is not enough: For example, an epoll entry may indicate a new WRITE request on a CUSE file descriptor, which fills one of the FIFOs with data. As a result, there might be a new opportunity to queue new USB transfers. There are many software design approaches for how to make one action trigger others — the one taken in usbpiper is the simplest and messiest: Letting the performer of the action call the functions that may benefit from the opportunity directly. In the given example, this means that process_write() calls try_queue_bulkout() directly. The latter calls try_complete_write() in turn.

The function nomenclature in this utility is consistent in that several functions have a try_*() prefix to mark that they are opportunity-oriented. It would have been equally functional, cleaner and more elegant (however less efficient) to call all try_*() functions on behalf of all endpoints and device files. Alternatively, some queue of try_*() function calls could have been maintained, but this wouldn’t take away the need for awareness of which actions may open what opportunity.

Delays and timeouts

There are a couple of situations where a timer is required. A timerfd is allocated for each device file, serving the following two scenarios:

  • Related to IN endpoints: When a READ request can’t be completed with the full number of bytes that are required, usbpiper waits up to 10 ms for data from the IN endpoint to fill the relevant FIFO. After this timeout, try_complete_read() completes the request as soon as there is any data in the FIFO. The rationale is to avoid a flood of READ requests and responses if the data arrives frequently and in small chunks.
  • Related to OUT endpoints: When a RELEASE request arrives, and there is still data in the relevant FIFO, try_complete_release() waits up to 1000 ms for the FIFO to be drained by the OUT endpoint. After this, try_complete_release() completes the request, hence closing the related device file (not /dev/cuse) after emptying the FIFO.

A single timer can be used for both tasks, because a RELEASE can’t occur before all outstanding requests have been completed on the related device file (Linux’ device file API ensures that). Besides, each device file can be related only to either an IN or OUT endpoint, so once again, the timer won’t be necessary for both uses at the same time.

A similar 10 ms timeout could have been implemented for OUT endpoints, i.e. generate an OUT transfer only if the FIFO contains enough data for a full transfer buffer. This wouldn’t require another timer, for the first reason given above. However, this possibility was dropped in favor of another mechanism for preventing unnecessary I/O: try_queue_bulkout() submits a transfer with less than a full buffer only if there is no other outstanding transfer on the same endpoint. The reason for opting out of the 10 ms timer for this purpose has to do with the original purpose of usbpiper, as a driver for XillyUSB (which didn’t materialize).

FUSE / CUSE kernel driver dissection notes

What this post is about

Before anything: If you’re planning on using FUSE / CUSE for an application, be sure to read this first. It also explains why I bothered looking at the kernel code instead of using libfuse.

So these are some quite random notes I took while trying to figure out how to talk with /dev/cuse directly, by reading the sources. I’m probably not going to touch CUSE with a five-foot stick again, so maybe this will help someone out there.

Everything said here relates to Linux v5.3. As FUSE is a bit of a hack-on-demand kind of filesystem, things change all the time.

CUSE vs. FUSE

CUSE is FUSE’s little brother, allowing the generation of a single device file in /dev, with the driver implemented in user space. Compared with FUSE’s ability to mount an entire filesystem, CUSE is much lighter, and is accordingly implemented as a piggy-back on the FUSE driver.

CUSE and FUSE are reached from user space through different device files: A server (i.e. driver) for FUSE opens /dev/fuse, and a server for CUSE opens /dev/cuse.

Note that the user application program that opens /dev/cuse or /dev/fuse is called the server. It’s actually a driver, but the latter term is reserved for the FUSE kernel framework.

The driver for /dev/cuse is implemented entirely in fs/fuse/cuse.c, and it does quite little: All file operation methods for /dev/cuse are redirected to those used for /dev/fuse (by literally copying the list of methods), except for open and release.

The CUSE-specific method for open runs a slightly different initialization procedure against the server (more about this below) and eventually generates a character device file instead of making a filesystem mountable.

This character device file is assigned I/O methods that are handled in cuse.c, however their implementation relies heavily on functions that are defined in the mainline FUSE driver. Effectively, this device file is a FUSE file which is forced to use “direct I/O” methods to present a data pipe abstraction.

It might very well be that it’s possible to obtain the same result by setting up a small mounted filesystem with a file with certain settings, but I haven’t investigated this further. It seems however that the application program will have to open the file with the O_DIRECT flag for this to work. See Documentation/filesystems/fuse-io.txt in the kernel source tree.

The relevant source files

The FUSE filesystem handles I/O requests of two completely different types: Those related to the file system that is mounted in relation to it (or the device file generated on behalf of CUSE), and those related to the character device which the FUSE / CUSE server opens. This might cause a slight confusion, but the kernel code sticks to a naming convention that pretty much avoids it.

The interesting files in the kernel tree:

  • fs/fuse/file.c — Methods for handling I/O requests from the FUSE-mounted file system. The typical function name prefix is fuse_file_*.
  • fs/fuse/dev.c — Methods for handling I/O requests from /dev/fuse. The typical function name prefix is fuse_dev_*.
  • fs/fuse/cuse.c — The CUSE-specific driver. Responsible for generating /dev/cuse and making it behave quite like /dev/fuse. In fact, it routes a lot of function calls to the FUSE driver. The typical function name prefix is cuse_channel_* for methods handling I/O requests from /dev/cuse. Functions named just cuse_* are handlers for the CUSE-generated character device. Note that the /dev/cuse character device is referred to as the “channel” so it’s not confused with the other one.
  • include/uapi/linux/fuse.h — Header file with all structures and constants that are visible in user space.
  • fs/fuse/fuse_i.h — Header file with everything that isn’t visible from user space.

FUSE protocol

It’s probably necessary to be acquainted with writing a Linux kernel character device (at least) in order to understand the nuts and bolts of FUSE. It’s actually helpful to have worked with a device driver for Microsoft Windows as well, since flow of I/O requests resembles the IRP concept in Windows’ driver model:

Each I/O request by the user space program goes into the kernel and is translated into a data structure which contains the information, and that data structure is handed over to the server (i.e. the driver in user space). The server queues the request for processing and acknowledges its reception, but not its completion. Rather, the server processes the request in its own free time, and when finished, it turns it back to the I/O system that requested it, along with the status and possibly data. If the user program blocks on the completion of the I/O system call (async I/O is also supported), it does so until the server turns back the request.

So there’s a flow of requests arriving from /dev/fuse (or /dev/cuse, as applicable), and a flow of responses written to the same file descriptor by the driver. The relation between the requests and responses is asynchronous (which is the main resemblance with IRPs), so the responses may arrive in no particular order.

The main difference from Windows’ IRP model is that Windows’ kernel makes calls to I/O operation handlers in the device driver (just like a Linux driver, but with the driver’s hands tied) with a pointer to the IRP. With FUSE, all requests go through a single pipe (good old UNIX design philosophy) and the driver chooses what to do with each. Also, in Windows, there’s a special treatment of requests that can be finished immediately: the driver can return with a status saying so. FUSE’s take on this matter is: congratulations, finish the request and submit the response. Now or later makes no essential difference.

One way or another, the FUSE / CUSE server should not block or otherwise delay the reception of requests from /dev/fuse while handling a previous request (a Windows device driver is not allowed to block because it runs in arbitrary thread context, but that’s really irrelevant here). Even if it can’t or isn’t expected to handle another request before the current one is done, it must keep receiving requests while handling previous ones, at least for one reason: accepting requests to handle a signal (interrupt) for an already queued request. More on that below.

The other side of the coin: A read() call from /dev/fuse or /dev/cuse may block, and will do so until there’s a request available to handle. On the other hand, a write() never blocks (which makes sense, since it merely informs the kernel driver a request has been finished). The poll() system call is implemented, so epoll() and select() can be used on /dev/fuse and /dev/cuse, rather than blocking a thread on waiting for a request (libfuse doesn’t take advantage of this).

I/O requests

A request from /dev/fuse or /dev/cuse starts with a header of the following form (defined in the kernel’s include/uapi/linux/fuse.h and libfuse’s libfuse/include/fuse_kernel.h):

struct fuse_in_header {
	uint32_t	len;
	uint32_t	opcode;
	uint64_t	unique;
	uint64_t	nodeid;
	uint32_t	uid;
	uint32_t	gid;
	uint32_t	pid;
	uint32_t	padding;
};

The header is then followed by data that is related to the request, if necessary.

@len is the number of bytes in the request, including the fuse_in_header struct itself.

@opcode says what operation is requested, out of those listed in enum fuse_opcode in the same header files (the opcodes are also listed and explained on this page).

@unique is the identifier of the request, to be used in the response. Note that if bit 0 is set (i.e. @unique is odd), the request is an interrupt notification to another request (with the @unique after clearing bit 0). This is not true on all kernel versions however.

The rest (nodeid, uid, gid and pid) are quite obvious. But it’s noteworthy that the process ID is exposed to the driver in user space.

Reads from /dev/{cuse,fuse} are done in single read() calls, each of which dequeues one request from one of the kernel driver’s request queues: One for INTERRUPT requests, one for FORGET requests, and one for all the others. They are prioritized in this order (i.e. INTERRUPT requests go before any others, etc.).

The read() call is atomic: It must request a number of bytes that is larger than or equal to the request’s @len, or the request is discarded and -EIO is returned instead. For this reason, the number of bytes of any read() from /dev/cuse or /dev/fuse must be at least max_write plus the size of struct fuse_in_header, where @max_write is the value submitted in the cuse_init_out structure in response to an INIT request (see below) (max_write is expected to be 4096 at least).

However oddly enough, in libfuse’s fuse_lowlevel.c it says

	se->bufsize = FUSE_MAX_MAX_PAGES * getpagesize() +
		FUSE_BUFFER_HEADER_SIZE;

(se->bufsize is the session’s buffer size for arriving requests) and then libfuse’s fuse_i.h goes

#define FUSE_MAX_MAX_PAGES 256

but how is that an upper limit of something?

I/O responses

Responses are written by the server into the same file descriptor of /dev/fuse or /dev/cuse, starting with a header as follows:

struct fuse_out_header {
	uint32_t	len;
	int32_t		error;
	uint64_t	unique;
};

The meaning of @len and @unique are the same as in the request: @len includes the header itself, and @unique is a copy of the identifier of the request (with some extra care when handling interrupt requests).

@error is the status. Zero means success, negative numbers are in the well-known form of -EAGAIN, -EINVAL etc. It’s expected to be zero or negative (but not below -999). If it’s non-zero, the response must consist of a header only, or the write() call that submits the response returns -EINVAL.

A response write() is atomic as well: The number of bytes requested in the call must equal @len, or the call returns -EINVAL.

How requests are made in the kernel code

For each request to the server, a struct fuse_req is allocated and initialized to contain the information on the request to send and what the answer is expected to look like. This begins with calling fuse_get_req() or fuse_get_req_for_background(), which both call __fuse_get_req(struct fuse_conn *fc, unsigned npages, bool for_background).

To make a long story short, this function allocates the memory for the struct fuse_req itself, as well as an array of npages entries of struct page and struct fuse_page_desc. It also initializes several functional fields of the structure, among others the pages, page_descs and max_pages entries, as well as setting the reference count to 1, setting the FR_PENDING flag and initializing the two list headers and the wait queue. The pid, uid and gid fields in the information for the request are also set.

Then the fuse_req structure is set up specifically for the request. In particular, the @end entry points at the function to call by request_end() following the arrival of a response from the server or the abortion of the request.

The fuse_req has two entries, @in and @out, which are of type fuse_in and fuse_out, respectively. Note that “in” and “out” are from the server’s perspective, so “in” means kernel to server and vice versa.

struct fuse_arg {
	unsigned size;
	void *value;
};

struct fuse_in_arg {
	unsigned size;
	const void *value;
};

struct fuse_in {
	struct fuse_in_header h;
	unsigned argpages:1;
	unsigned numargs;
	struct fuse_in_arg args[3];
};

struct fuse_out {
	struct fuse_out_header h;
	unsigned argvar:1;
	unsigned argpages:1;
	unsigned page_zeroing:1;
	unsigned page_replace:1;
	unsigned numargs;
	struct fuse_arg args[2];
};

Despite the complicated outline, the usage is quite simple. It’s summarized in detail in the rest of this section, but in short: The request consists of a fuse_in_header followed by arguments, which are just a concatenation of strings (there are @in.numargs of them), set up when the request is prepared. @value and @size are set up in an array of struct fuse_in_arg.

The response is a concatenation of fuse_out_header and @out.numargs arguments, once again these are concatenated strings. The sizes and buffers are set up when the request is generated. The @argvar flag is possibly set to allow a shorter response at the expense of the last argument. Look at the function pointed by @end for how these arguments are interpreted.

And now the longer version of the two clauses above:

When a request is prepared for transmission to the server by fuse_dev_do_read(), it concatenates the @h entry in the struct fuse_in with @numargs “arguments”. Each “argument” is a string, which is represented as a fuse_in_arg entry in the @args array, by a pointer @value and the number of bytes given as @size. So it’s a plain string concatenation of @numargs + 1 strings, the first with a fixed size (of struct fuse_in_header) and some variable-length strings. What makes it seem complicated is the paging-aware data copying mechanism.

As for handling the arrival of responses from the server: Except for notifications and interrupt replies, fuse_dev_do_write() handles the write() request, which must include everything in the buffer submitted, as follows. The first bytes are copied into the fuse_req’s @out.h, or in other words, the fuse_out’s @h entry. So this consumes the number of bytes in a struct fuse_out_header.

The rest is chopped into arguments (by copy_out_args()), following the same convention of @numargs concatenated strings, each having the length of @size and written into the buffer pointed to by @value. @numargs, as well as the entries of the struct fuse_arg array, are set when preparing the request; when the response arrives, the relevant buffers are filled. And don’t confuse struct fuse_arg with struct fuse_args, which is completely different.

copy_out_args() checks the header’s @error field before copying anything. If it’s non-zero, no copying takes place: The response is supposed to consist of a struct fuse_out_header only.

The last argument of a response from the server may be shorter (possibly zero length) than its respective @size entry if and only if the @argvar entry in the related struct fuse_out is set (which is possibly done when preparing the request). If this is the case, the server simply submits fewer bytes than the sum of the header + all arguments’ @size, and the last argument is shortened accordingly. This may sound complicated, but it just means, for example, that a response to READ submits the data that it managed to collect.

Once again, all this sounds a bit scary, but take the relevant snippet from cuse_send_init() defined in the kernel’s fs/fuse/cuse.c:

	req->in.h.opcode = CUSE_INIT;
	req->in.numargs = 1;
	req->in.args[0].size = sizeof(struct cuse_init_in);
	req->in.args[0].value = arg;
	req->out.numargs = 2;
	req->out.args[0].size = sizeof(struct cuse_init_out);
	req->out.args[0].value = outarg;
	req->out.args[1].size = CUSE_INIT_INFO_MAX;
	req->out.argvar = 1;
	req->out.argpages = 1;
	req->pages[0] = page;
	req->page_descs[0].length = req->out.args[1].size;
	req->num_pages = 1;
	req->end = cuse_process_init_reply;
	fuse_request_send_background(fc, req);

It’s quite clear: The driver sends one argument (i.e. one string) after the header, and expects two back in the response. And the function that handles the response is cuse_process_init_reply(). So it’s fairly easy to tell what is sent and what is expected in return.

How CUSE implements read()

The CUSE driver (cuse.c) assigns cuse_read_iter() for the read_iter fops method. This function sets the file position to zero, and calls fuse_direct_io(), defined in file.c. Not to be confused with fuse_direct_IO(), defined in the same file.

fuse_direct_io() retrieves the number of bytes to process as its local variable @count. It then loops on sending requests and retrieving the data as follows (outlined for non-async I/O): fuse_send_read() is called for sending a READ request to the server by calling fuse_read_fill() and fuse_request_send(). The latter is defined in dev.c, and calls __fuse_request_send(), which queues the request for transmission (with queue_request()) and then waits (i.e. blocks, sleeps) until the response with a matching unique ID has arrived (by calling request_wait_answer()). This happens by virtue of the server’s invocation of a write() on its /dev/cuse file handle, with a matching unique ID.

Back to the loop on @count: fuse_send_read() returns with the number of bytes of the response’s first argument, that is, the length of the data that arrived. The loop hence continues with checking the error status of the response (in the @error field). If there was an error, or if there were fewer bytes than requested in the response, the loop terminates. The same goes if @count is zero after deducting the number of arrived bytes from it.

The return value of fuse_direct_io(), which is also the return value of cuse_read_iter(), is the number of bytes that were read (in total) if this number is non-zero, even if the loop quit because of an error. Only if no bytes were received does the function return the @error field in the response (which is zero if there was neither an error nor data).

The rationale behind the loop and the way it handles errors is that a single read() request by the application may be chopped into several READ requests if the read() can’t be fit into a single READ request (i.e. the read()’s @count is larger than max_read, as specified on the INIT response). It’s therefore necessary to iterate.

How CUSE implements write()

The CUSE driver (cuse.c) assigns cuse_write_iter() for the write_iter fops method. This function sets the file position to zero, and like cuse_read_iter(), it calls fuse_direct_io(), defined in file.c. Only with different arguments to tell the latter function that the data goes in the opposite direction.

fuse_direct_io() calls fuse_send_write() instead of fuse_send_read(), which calls fuse_write_fill() instead of fuse_read_fill(). And then fuse_request_send() is called, which sends the request and waits for its response. fuse_send_write() returns with the number of bytes that were actually written, as it appears in the @size entry of the struct fuse_write_out in the response.

Note that the kernel driver sends a buffer along with the WRITE call, and the server chooses how much to consume from it, and then tells the kernel about that in the response. This requires a small discussion on partial handling of write().

The tricky thing with a write() is that the application program supplies a buffer to write, along with the number of bytes to write. It’s perfectly fine to point to a huge buffer and set the count to the entire buffer. Any character device driver may write the entire buffer, or just as much as it can at the moment, and return the number of bytes written. The fact that a huge number of bytes were requested makes no difference, because the character device driver treats the request as if it was for the number of bytes it could write. The rest of the buffer is ignored.

So there are two problems, both arising when the buffer of the write() from the application program is large: One is how to make sure that the server has allocated a buffer large enough to receive the data in one go (recall that both requests and responses must be done in a single I/O operation). The second and smaller problem is the wasted I/O of data in a WRITE request that is eventually ignored, because the server chose to consume less than available.

To prevent huge buffers from being transmitted to the server and then ignored, the server supplies a max_write parameter in its response to an INIT request, which sets the maximal number of bytes of payload in a WRITE request to the server (it should be 4096 or larger). So the write() operation is chopped up into smaller buffers by FUSE / CUSE as necessary to meet this restriction.

This parameter is a tradeoff between reducing the number of I/Os with the server and the possibility of wasted data transfers. libfuse picks 128 kB.

There is no similar problem with read() calls, because the server simply submits the number of bytes it actually read, with the response header saying how many bytes are submitted. Nevertheless, there is a separate max_read limit for CUSE sessions (but not for FUSE, which copies it from max_write).

Handling interrupts (signals)

There is a lot of fuss about this topic, which is discussed in a separate post. To make a long story short, a server must be able to process INTERRUPT requests. To the server, such a request is just like the others, in the sense that it consists of a struct fuse_in_header followed by a single argument:

struct fuse_interrupt_in {
	uint64_t	unique;
};

The function that implements this in the kernel is fuse_read_interrupt() in dev.c.

Note that there are two @unique IDs in the request. One is in the header, which is the ID of the interrupt request itself. The second is in the argument, which is the unique ID of the request that should be interrupted. The server should not assume any special connection between the two (there is one since kernel v4.20, due to commit c59fd85e4fd07).

When a server receives an INTERRUPT request, it shall immediately send a response (i.e. completion) to the request with the @unique given in the argument. An -EINTR status may be reported, in accordance with the common POSIX rules.

Note that even though an INTERRUPT request is guaranteed to be conveyed to the server after the request it relates to, it may arrive after the server’s response has been submitted if a race condition occurs. As a result, the server may receive INTERRUPT requests with a @unique ID that it doesn’t recognize (because it has removed its records while responding). Therefore, the server should ignore such requests.

On the other hand, if multiple threads fetch requests from the same file descriptor (of /dev/cuse or /dev/fuse), one thread may decode the INTERRUPT request before the original request has been recorded. This possibility is present in the libfuse implementation, and is the reason behind the complication discussed in that other post.

POLL requests

Poll is different from many other requests in that it requires two (or even more) responses from the server:

  • An immediate response, with the bitmap informing which operations are possible right away
  • Possibly additional notifications, when one or more of the selected operations have become possible.

fuse_file_poll() in file.c handles poll() calls on a file. It queues a FUSE_POLL request, with one argument, consisting of a fuse_poll_in struct:

struct fuse_poll_in {
	uint64_t	fh;
	uint64_t	kh;
	uint32_t	flags;
	uint32_t	events;
};

The @events entry is set with

inarg.events = mangle_poll(poll_requested_events(wait));

which supplies a bitmap of the events that are waited for in POSIX style (mangle_poll() is defined in the kernel’s poll.h, which does the conversion).

@flags may have one flag set, FUSE_POLL_SCHEDULE_NOTIFY, saying that there’s a process actually waiting. If it’s set, the server is required to send a notification when the file becomes ready. If cleared, the server may send such a notification, but it will be ignored.

@fh and @kh are the file’s file handle, in userspace and kernel space respectively (the latter is systemwide unique).

If there is a process waiting, the file is then registered in a dedicated data structure (an RB tree), and will be kept there until the file is released. The underlying idea is that if a file descriptor has been polled once, it’s likely to be polled many more times.

Either way, the POLL request is submitted, and the server is expected to submit a response with a poll bitmap, which is converted back into kernel format and used as the poll() return value. Consequently, poll() blocks until the response arrives.

Should the server respond with an -ENOSYS status, no more POLL requests are sent to the server for the rest of the session, and DEFAULT_POLLMASK is returned on this and all subsequent poll() calls. Defined in poll.h:

#define DEFAULT_POLLMASK (EPOLLIN | EPOLLOUT | EPOLLRDNORM | EPOLLWRNORM)

So there’s the poll response:

struct fuse_poll_out {
	uint32_t	revents;
	uint32_t	padding;
};

Rather trivial — just the events that are active.

More interesting are the notifications. The server may send a notification at any time by setting @unique to zero and the @error field to the code of the notification request (FUSE_NOTIFY_POLL == 1). The @opcode field is ignored in this case (there is no opcode for notifications).

There’s one argument in a poll notification:

struct fuse_notify_poll_wakeup_out {
	uint64_t	kh;
};

where @kh echoes back the value in the poll request.

In dev.c, fuse_notify() calls fuse_notify_poll(), which in turn calls fuse_notify_poll_wakeup() (in file.c) after a few sanity checks.

fuse_notify_poll_wakeup() looks up the value of @kh entry in the dedicated data structure. If it’s not found, the notification is silently ignored. This is considered OK, since the server is allowed to send notifications even if FUSE_POLL_SCHEDULE_NOTIFY wasn’t set.

If the entry is found, wake_up_interruptible_sync() is called on the file’s wait queue that is used only in relation to poll (which is known from the entry in the data structure). That’s it.

poll() is supported by FUSE since kernel v2.6.29 (Git commit 95668a69a4bb8, Nov 2008).

CUSE INIT requests

The bringup of the device file is initiated by the kernel driver, which sends a CUSE_INIT request. The server sets up the connection and the device file’s attributes by responding to this request.

In cuse.c, cuse_channel_open() implements /dev/cuse’s method for open(). Aside from allocating and initializing a struct cuse_conn for containing the private data of this connection, it calls cuse_send_init() for queuing a CUSE_INIT (opcode 4096) request on the new file handle. Note that this is different from the FUSE_INIT (opcode 26) that arrives from /dev/fuse.

The request consists of a struct fuse_in_header concatenated with a struct cuse_init_in:

struct cuse_init_in {
	uint32_t	major;
	uint32_t	minor;
	uint32_t	unused;
	uint32_t	flags;
};

The major and minor fields are the FUSE_KERNEL_VERSION and FUSE_KERNEL_MINOR_VERSION, telling the server which FUSE version the kernel offers. flags is set to 0x01, which is CUSE_UNRESTRICTED_IOCTL.

The pid, uid and gid in the header are those of the process that opened /dev/cuse — not really interesting. @unique is typically 1 (but don’t rely on it — it can be anything in future versions). On fairly recent kernels, it continues with 2 and increments by 2 for each request to follow. On older kernels, it just counts upwards with steps of 1. The unique ID mechanism was changed in kernel commit c59fd85e4fd07 (September 2018, v4.20) for the purpose of allowing a hash of unique IDs in the future.

The response is a string concatenation of the following three elements (header + two arguments):

  • A struct fuse_out_header, with the header for the response (with @unique typically set to 1)
  • A struct cuse_init_out with some information (more on that below)
  • A null-terminated string that reads e.g. “DEVNAME=mydevice” (without the quotes, of course) for generating the device file /dev/mydevice. Don’t forget to actually write the null byte in the end, or the device generation fails with a “CUSE: info not properly terminated” in the kernel log.

struct cuse_init_out is defined as

struct cuse_init_out {
	uint32_t	major;
	uint32_t	minor;
	uint32_t	unused;
	uint32_t	flags;
	uint32_t	max_read;
	uint32_t	max_write;
	uint32_t	dev_major;
	uint32_t	dev_minor;
	uint32_t	spare[10];
};

The fields of cuse_init_out are as follows:

  • @major and @minor have the same meaning as these fields in struct cuse_init_in, but they reflect the version that the server is designed for, which is hence the version that rules the session. As of kernel v5.3 (which implements FUSE version 7.26), @major must be 7 and @minor at least 11, or the initialization fails. FUSE 7.11 was introduced in kernel v2.6.29 in 2008. See the kernel sources’ include/uapi/linux/fuse.h for the revision history.
  • @max_read and @max_write are the maximal number of bytes in the payload of a READ and WRITE request, respectively. Note that @max_write forces read() requests from /dev/cuse to supply a @count parameter of at least @max_write + the size of struct fuse_out_header + the size of struct fuse_write_out, or WRITE requests may fail. Same goes for @max_read and struct fuse_in_header and struct fuse_read_in. What counts is the length of the requests and their possible responses, which includes the lengths of the non-data parts.
  • @flags: If bit 0 (CUSE_UNRESTRICTED_IOCTL) is set, unrestricted ioctls is enabled.
  • @dev_major and @dev_minor are the created device file’s major and minor numbers. This means that the server needs to make sure that they aren’t already allocated.
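Putting the pieces together, here’s a minimal sketch (plain C, bypassing libfuse) of building the CUSE_INIT response described above. The struct layouts mirror the kernel’s include/uapi/linux/fuse.h; the device name “mydevice” is the document’s running example, while the device numbers and buffer sizes are arbitrary placeholders.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Local mirrors of the kernel's uapi structures (include/uapi/linux/fuse.h) */
struct fuse_out_header {
	uint32_t len;     /* total length of the response, this header included */
	int32_t  error;
	uint64_t unique;  /* must echo the request's unique ID */
};

struct cuse_init_out {
	uint32_t major, minor, unused, flags;
	uint32_t max_read, max_write;
	uint32_t dev_major, dev_minor;
	uint32_t spare[10];
};

/* Build the CUSE_INIT response into @buf (at least 128 bytes);
   returns the total length to be handed to a single write() */
static size_t build_cuse_init_reply(char *buf, uint64_t unique)
{
	const char devname[] = "DEVNAME=mydevice"; /* creates /dev/mydevice */
	struct fuse_out_header hdr = {0};
	struct cuse_init_out arg = {0};

	arg.major = 7;        /* FUSE major: must be 7 */
	arg.minor = 11;       /* at least 11 as of kernel v5.3 */
	arg.flags = 0;        /* no CUSE_UNRESTRICTED_IOCTL */
	arg.max_read = 65536;
	arg.max_write = 65536;
	arg.dev_major = 240;  /* placeholder: pick numbers that are free */
	arg.dev_minor = 0;

	hdr.unique = unique;
	hdr.len = sizeof(hdr) + sizeof(arg) + sizeof(devname); /* incl. '\0' */

	memcpy(buf, &hdr, sizeof(hdr));
	memcpy(buf + sizeof(hdr), &arg, sizeof(arg));
	memcpy(buf + sizeof(hdr) + sizeof(arg), devname, sizeof(devname));
	return hdr.len;
}
```

The whole buffer is then written to /dev/cuse in one write() call. Note that sizeof(devname) includes the terminating null byte, which must reach the kernel, as mentioned above. Also, per the @max_read / @max_write discussion, subsequent read() calls on /dev/cuse must supply a buffer large enough for max_write plus the request headers.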

FORGET requests

These requests inform a FUSE server that there’s no need to retain information on a specific inode. This request will never appear on /dev/cuse.

FUSE / CUSE signal handling: The very gory details

First: If you’re planning on using FUSE / CUSE for an application, be sure to read this first. It also explains why I didn’t just take what libfuse offered.

Overview

This is a detour from another post of mine, which dissects the FUSE / CUSE kernel driver. I wrote this separate post on signal handling because of some confusion on the matter, which in the end turned out to be little to write home about.

To understand why signals is a tricky issue, suppose that an application program is blocking on a read() from a /dev file that is generated by CUSE. The server (i.e. the driver of this device file in userspace) has collected some of the data, and is waiting for more, which is why it doesn’t complete the request. And then a “harmless” signal (say, SIGCHLD) is sent to the application program.

Even though that program is definitely not supposed to terminate on that signal, the read() should return ASAP. And because it has already collected some data (and possibly consumed it from its source), it should return with the number of bytes already read, and not with an -EINTR (which is the response if it has no data when returning on an interrupt).

So the FUSE / CUSE framework must notify the server that an interrupt has arrived, so that the relevant request is finished quickly, one way or another. To make things even trickier, it might very well be that while the notification on the interrupt is being prepared and sent to the server, the server has already finished the request, and is in the middle of returning the response.

Luckily, the FUSE / CUSE kernel interface offers a simple solution to this: An INTERRUPT request is sent to the server in response to an interrupt to the application program, with a unique ID number that matches a previously sent request. The server responds with normally returning a response for the said request, possibly with -EINTR status, exactly like a kernel character driver’s response to a signal.

The only significant race condition is when the server has already finished handling the request, for which the INTERRUPT request arrives, and has therefore forgotten the unique ID that comes with it. In this case, the server can simply ignore the INTERRUPT request — it has done the right thing anyhow.

So why this long post? Because I wanted to be sure, and because what little documentation there is on this topic, as well as the implementation in libfuse, is somewhat misleading. Anyhow, the bottom line has already been said, if you’d like to TL;DR this post.
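To make the bottom line concrete, here is a sketch of the dispatch a single-threaded server might do: parse the struct fuse_in_header that starts every request, and branch on the opcode. The header layout and the opcode values mirror the kernel’s include/uapi/linux/fuse.h; what happens per opcode is only hinted at in comments.

```c
#include <stdint.h>
#include <string.h>

/* Local mirror of the kernel's struct fuse_in_header
   (include/uapi/linux/fuse.h), which starts every request */
struct fuse_in_header {
	uint32_t len;      /* total length of the request, header included */
	uint32_t opcode;
	uint64_t unique;   /* echo this in the response's fuse_out_header */
	uint64_t nodeid;   /* irrelevant for /dev/cuse */
	uint32_t uid, gid, pid;
	uint32_t padding;
};

#define CUSE_INIT      4096  /* from the kernel's fuse.h */
#define FUSE_INTERRUPT 36    /* ditto */

/* Peek at one request read() from /dev/cuse and classify it */
static const char *classify_request(const void *buf)
{
	struct fuse_in_header hdr;
	memcpy(&hdr, buf, sizeof(hdr));

	switch (hdr.opcode) {
	case CUSE_INIT:
		return "init";       /* reply with cuse_init_out + DEVNAME */
	case FUSE_INTERRUPT:
		return "interrupt";  /* finish the request it refers to, ASAP */
	default:
		return "io";         /* OPEN / READ / WRITE / ... */
	}
}
```

A single-threaded server that handles requests in arrival order never sees an INTERRUPT before the request it refers to, which is why the “race condition” below is a multi-threading issue only.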

The official version

There is very little documentation on FUSE in general, however there is a section in the kernel source tree’s Documentation/filesystems/fuse.txt:

If a process issuing a FUSE filesystem request is interrupted, the following will happen:

  1. If the request is not yet sent to userspace AND the signal is fatal (SIGKILL or unhandled fatal signal), then the request is dequeued and returns immediately.
  2. If the request is not yet sent to userspace AND the signal is not fatal, then an “interrupted” flag is set for the request. When the request has been successfully transferred to userspace and this flag is set, an INTERRUPT request is queued.
  3. If the request is already sent to userspace, then an INTERRUPT request is queued.

INTERRUPT requests take precedence over other requests, so the userspace filesystem will receive queued INTERRUPTs before any others.

The userspace filesystem may ignore the INTERRUPT requests entirely, or may honor them by sending a reply to the original request, with the error set to EINTR.

It is also possible that there’s a race between processing the original request and its INTERRUPT request. There are two possibilities:

  1. The INTERRUPT request is processed before the original request is processed
  2. The INTERRUPT request is processed after the original request has been answered

If the filesystem cannot find the original request, it should wait for some timeout and/or a number of new requests to arrive, after which it should reply to the INTERRUPT request with an EAGAIN error. In case 1 the INTERRUPT request will be requeued. In case 2 the INTERRUPT reply will be ignored.

The description above is correct (see detailed dissection of kernel code below) however beginning from the “race condition” part it gets somewhat confusing.

Race condition?

In the rest of this post, there’s a detailed walkthrough of the involved functions in the v5.3.0 kernel, and there’s apparently no chance for the race condition mentioned in fuse.txt. It’s not even an old bug that was relevant when interrupt handling was introduced with Git commit a4d27e75ffb7b (where the text cited above in fuse.txt was added as well): Even looking at the original commit, there’s a clear locking mechanism that prevents any race condition in the kernel code. This was later replaced with memory barriers, which should work just the same.

All in all: An INTERRUPT request is queued, if at all, only after the related request has been submitted as the answer to a read() by the server.

So what is this all about, then? A multi-threaded server, which spreads requests randomly among work threads, might indeed handle requests in a random order. It seems like this is what the “race condition” comment refers to.

The solution to the non-existing problem

Had there been a possibility that an INTERRUPT request may arrive before the request it relates to, the straightforward solution would be to maintain an orphan list of unique IDs of INTERRUPT requests that had no matching request in processing when they arrived. This list would then be filled with INTERRUPT requests that arrived too early (before the related request) or too late (after the request was processed).

Then, for each non-INTERRUPT request that arrives, see if it’s in the list, and if so, remove the Unique ID from the list, and treat the request as interrupted.

But the requests that were added into the list because of the “too late” scenario will never get off the list this way. So some garbage collection mechanism is necessary.

The FUSE driver facilitates this by allowing a response with an -EAGAIN status to INTERRUPT requests. Even though no response is needed to INTERRUPT requests, an -EAGAIN response will cause the repeated queuing of the INTERRUPT request by the kernel if the related request is still pending, and otherwise do nothing.

So occasionally, the server may go through its list of orphans, and send an -EAGAIN response to each entry, and delete this entry as the response is sent. If the deleted entry is still relevant, it will be re-sent by the kernel, so it’s re-listed (or possibly handled directly if the related request has arrived in the meantime). Entries from the “too late” scenario won’t be re-listed, because the kernel will do nothing in reaction to the -EAGAIN response.

This is the solution suggested in fuse.txt on the race condition issue. The reason this solution is suggested in the kernel’s documentation, even though it relates to a problem in a specific software implementation, is probably to explain the motivation for the -EAGAIN feature. But boy, was it confusing.
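The orphan-list scheme described above can be sketched as follows. This is a toy reconstruction of the idea in fuse.txt, not libfuse’s actual code; all names are made up.

```c
#include <stdint.h>
#include <stdlib.h>

/* A toy version of the orphan list: unique IDs of INTERRUPT requests
   that had no matching request when they arrived */
struct orphan {
	uint64_t unique;
	struct orphan *next;
};

static struct orphan *orphans;

/* An INTERRUPT arrived for @unique, but no such request is in flight */
static void orphan_add(uint64_t unique)
{
	struct orphan *o = malloc(sizeof(*o));
	o->unique = unique;
	o->next = orphans;
	orphans = o;
}

/* A new request arrived: if its ID is listed, it was interrupted early */
static int orphan_match(uint64_t unique)
{
	struct orphan **p;

	for (p = &orphans; *p; p = &(*p)->next) {
		if ((*p)->unique == unique) {
			struct orphan *o = *p;
			*p = o->next;
			free(o);
			return 1;  /* handle the request as interrupted */
		}
	}
	return 0;
}

/* Garbage collection: a real server would send a response with -EAGAIN
   status for each listed ID here; the kernel re-queues the INTERRUPT
   only if the related request is still pending. Returns the number of
   such replies due. */
static int orphan_gc(void)
{
	int n = 0;

	while (orphans) {
		struct orphan *o = orphans;
		orphans = o->next;
		free(o);
		n++;
	}
	return n;
}
```

Entries from the “too late” scenario are dropped for good by orphan_gc(), since the kernel ignores the -EAGAIN; entries that were actually “too early” get re-sent by the kernel and re-listed, or handled directly if the related request has arrived in the meantime.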

How libfuse handles INTERRUPT requests

Spoiler: The solution to the non-existent problem is implemented in libfuse 3.9.0 (and way back) as described above. The related comment was written based upon a problem that arose with libfuse. Which is multithreaded, of course.

The said garbage collection mechanism is run on the first entry in the list of orphaned INTERRUPT requests each time a non-INTERRUPT request arrives and has no match against any of the list’s members. This ensures that the list is emptied quite quickly, and without risk of an endless circulation of INTERRUPT requests, because the arrival of a non-INTERRUPT request means that the kernel’s queue of INTERRUPT requests was empty at that moment. A quirky solution to a quirky problem.

Note that even when libfuse is run with debug output, it’s difficult to say anything about ordering, as the debug output shows processing, not arrival. And the log messages come from different threads.

The problem of unordered processing of INTERRUPT requests could have been solved much more elegantly of course, but libfuse is a patch on patch, so they made another one.

And for the interested, this is the which-function-calls-what in libfuse.

So in libfuse’s fuse_lowlevel.c, the method for handling interrupts, do_interrupt(), first attempts to find the related request, and if it fails, it adds an entry to a session-specific list, se->interrupts. Then there’s check_interrupt(), which is called by fuse_session_process_buf_int() for each arriving request that isn’t an INTERRUPT itself. This function looks up the list for the request, and if it’s found, it sets that request’s “interrupted” flag, and removes it from the list. Otherwise, if the list is non-empty, it removes the first entry of se->interrupts and returns it to the caller, which initiates an EAGAIN for that.

Read the source

Since this is an important topic, let’s look at how this is implemented. So from this point until the end of this post, these are dissection notes of the v5.3.0 kernel source. Commits are applied all the time in this region, but in essence it has remained the same for a long time.

Generally speaking, all I/O operations that are initiated by the application program (read(), write(), etc.) end up with the setup of a fuse_req structure containing the request information in file.c, and its submission to the server front-end with a call to fuse_request_send(), which is defined in dev.c. If the I/O is asynchronous, fuse_async_req_send() is called instead, but that’s irrelevant for the flow discussed now. fuse_request_send() calls __fuse_request_send(), which in turn calls queue_request() which puts the request in the queue, and more importantly, request_wait_answer(), which puts the process to sleep until the request is completed (or something else happens…).

And now details…

So what does request_wait_answer() do? First, let’s get acquainted with some of the flags that are related to each request (i.e. in struct fuse_req’s flags entry), see also fuse_i.h:

  • FR_FINISHED: request_end() has been called for this request, which happens when the response for this request has arrived (but not processed yet — when that is done the request is freed). Or when it has been aborted for whatever reason (and once again, the error has not been processed yet).
  • FR_PENDING: The request is on the list of requests for transmission to the server. The flag is set when the fuse_req structure of a new request is initialized, and cleared when fuse_dev_do_read() has completed a server’s read() request. Or alternatively, failed for some reason, in which case request_end() has been called to complete the request with an error. So when it’s set, the request has not been sent to the server, but when cleared, it doesn’t necessarily mean it has.
  • FR_SENT: The request has been sent to the server. This is set by fuse_dev_do_read() when nothing can fail anymore. It differs from !FR_PENDING in that FR_PENDING is cleared when there’s an error as well.
  • FR_INTERRUPTED: This flag is set if an interrupt arrived while waiting for a response from the server.
  • FR_FORCE: Force sending of the request even if interrupted
  • FR_BACKGROUND: This is a background request. Irrelevant for the blocking scenario discussed here.
  • FR_LOCKED: Never mind this: It only matters when tearing down the FUSE connection and aborting all requests, and it determines the order in which this is done. It means that data is being copied to or from the request.

request_wait_answer()

With this at hand, let’s follow request_wait_answer() step by step:

  • Wait (with wait_event_interruptible(), sleeping) for FR_FINISHED to be set. Simply put, wait until a response or any interrupt has arrived.
  • If FR_FINISHED is set, the function returns (it’s a void function, and has no return value).
  • If any interrupt occurred while waiting, set FR_INTERRUPTED and check FR_SENT. If the latter was set, call queue_interrupt() to queue the request on the list of pending interrupt requests (unless it is already queued, as fixed in commit 8f7bb368dbdda. The same struct fuse_req is likely to be listed in two lists; one for the pending request and the second for the interrupt).

Note that these three bullets above are skipped if the FUSE connection had the “no_interrupt” flag on invocation to request_wait_answer(). This flag is set if the server answered to any interrupt request in the current session’s past with an -ENOSYS.

  • Wait again for FR_FINISHED to be set, now with wait_event_killable(). This puts the process in the TASK_KILLABLE state, so it returns only when the condition is met or on a fatal signal. If wait_event_interruptible() was awakened by a fatal signal to begin with, there will be no waiting at all on this stage (because the signal is still pending).
  • If FR_FINISHED is set, the function returns. This means that a response has been received for the request itself. As explained below, this is unrelated to the interrupt request’s fate.
  • Otherwise, there’s a fatal signal pending. If FR_PENDING is set (the request has not been sent to the server yet), the request is removed from the queue for transmission to the server (with due locking). Its status is set to -EINTR, and the function returns.

Note that these three bullets are skipped if the FR_FORCE flag is set for this request. And then, there’s the final step if none of the above got the function to return:

  • Once again, wait for FR_FINISHED to be set, but this time with the dreaded, non-interruptible wait_event(). In simple words, if the server doesn’t return a response for the request, the application that is blocking on the I/O call is sleeping and non-killable. This is not so bad, because if the server is killed (and hence closes /dev/fuse or /dev/cuse), all its requests are marked with FR_FINISHED.

To see the whole picture, a close look is needed on fuse_dev_do_read() and fuse_dev_do_write(), which are the functions that handle the request and response communication (respectively) with the driver.

fuse_dev_do_write()

Starting with fuse_dev_do_write(), which handles responses: After a few sanity checks (e.g. that the data lengths are in order), it looks up the request based upon the @unique field (for responses to interrupt requests, the original request is looked for). If the request isn’t found, the function returns with -ENOENT.

If the response has an odd @unique field, it’s an interrupt request response. If the @error field is -ENOSYS, the “no_interrupt” flag is set for the current connection (see above). If it’s -EAGAIN, another interrupt request is queued immediately. Otherwise the interrupt request response is ignored and the function returns. In other words, except for the two error codes just mentioned, it’s pointless to send such responses. The desired response to an interrupt request is to complete the original request, not to respond to the interrupt request itself.

So now to handling regular responses: The first step is to clear FR_SENT, which sort-of breaks the common sense meaning of this flag, but it’s probably a small hack to reduce the chance of an unnecessary interrupt request, as the original request is just about to finish.

The response’s content is then copied into kernel memory, and request_end() is called, which sets FR_FINISHED, then removes the request from the queue of pending interrupts (if it’s queued there), and after that it returns with due error code (typically success).

So not much interesting here.
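The odd-@unique test mentioned above can be written down explicitly. FUSE_INT_REQ_BIT is the name used by the kernel commit mentioned earlier (the one that introduced the step-of-2 unique IDs); on kernels with that scheme, regular unique IDs have bit 0 clear, so the bit unambiguously marks interrupt requests.

```c
#include <stdint.h>

#define FUSE_INT_REQ_BIT (1ULL << 0)  /* equals 1, as in the kernel */

/* An INTERRUPT request's header @unique is the original request's
   @unique with FUSE_INT_REQ_BIT ORed in */
static int is_interrupt_unique(uint64_t unique)
{
	return (unique & FUSE_INT_REQ_BIT) != 0;
}

/* Recover the unique ID of the request being interrupted */
static uint64_t original_unique(uint64_t unique)
{
	return unique & ~FUSE_INT_REQ_BIT;
}
```

A well-behaved server replies to the original (even) @unique, which is also what fuse_dev_do_write() looks up; replying to the ORed value is only meaningful for the -ENOSYS and -EAGAIN cases discussed above.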

fuse_dev_do_read() step by step

The function returns with -EAGAIN if /dev/fuse or /dev/cuse was opened in non-blocking mode and there’s no data to supply. Otherwise, it waits with wait_event_interruptible_exclusive_locked() until there’s a request to send to the server in any of the three queues (INTERRUPT, FORGET or regular requests). If the server process got an interrupt, the wait function returns with -ERESTARTSYS, and so does this function (is this a bug? It should arguably be -EINTR).

First, the queue of pending interrupts is checked. If there’s any entry there, fuse_read_interrupt() is called, which generates a FUSE_INTERRUPT request with the @unique field set to the original request’s @unique, ORed with FUSE_INT_REQ_BIT (which equals 1). The request is copied into the user-space buffer, and fuse_dev_do_read() returns with the size of this request.

Second, FORGET requests are submitted, if such are queued.

If none of the INTERRUPT and FORGET were sent, the first entry in the request queue is dequeued, and its FR_PENDING flag is cleared. The I/O data handling then takes place.

Just before returning, the FR_SENT flag is set, and then FR_INTERRUPTED is checked. If the latter is set, queue_interrupt() is called to queue the request on the list of pending interrupt requests (unless it is already queued. Once again, the same struct fuse_req is likely to be listed in two lists; one for the pending request and the second for the interrupt). Together with request_wait_answer(), this ensures that an interrupt is queued as soon as FR_SENT is set: If the waiting function returned before FR_SENT was set, FR_INTERRUPTED is set by request_wait_answer() before checking FR_SENT, so fuse_dev_do_read() will call queue_interrupt() after setting FR_SENT. If the waiting function returned after FR_SENT was set, request_wait_answer() will call queue_interrupt() itself. And in case of a race condition, both will call this function; note that each of the two racers sets one flag and then checks the other, in the reverse order with respect to each other. Calling queue_interrupt() twice results in queuing the interrupt request only once.
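The set-one-flag-then-check-the-other pattern can be reduced to this sketch. The two racers are simulated here sequentially, in both possible orders; in the kernel the two run concurrently and memory barriers ensure each side sees the other’s store. queue_interrupt() stands in for the real function and is idempotent, just like the real one.

```c
/* Toy model of the FR_SENT / FR_INTERRUPTED handshake */
static int fr_sent, fr_interrupted, queued;

static void queue_interrupt(void)
{
	if (!queued)          /* queuing twice queues only once */
		queued = 1;
}

/* request_wait_answer(): an interrupt arrived while waiting */
static void waiter_side(void)
{
	fr_interrupted = 1;   /* set FR_INTERRUPTED ... */
	if (fr_sent)          /* ... then check FR_SENT */
		queue_interrupt();
}

/* fuse_dev_do_read(): the request has just been sent to the server */
static void reader_side(void)
{
	fr_sent = 1;          /* set FR_SENT ... */
	if (fr_interrupted)   /* ... then check FR_INTERRUPTED */
		queue_interrupt();
}
```

Whichever side runs second is guaranteed to see the other’s flag, so queue_interrupt() is called at least once; if both sides see each other’s flags, the idempotence makes the double call harmless.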

Linux CUSE (and FUSE): Why I ditched two months of work with it

Introduction

If you’re planning to use CUSE (or FUSE) for an application you care about, this post is for you. That includes future self. I’m summarizing my not-so-pleasant journey with this framework here, with focus on how I gradually realized that I should start from scratch with an old-school kernel module instead.

Most important, if you run CUSE on a v5.0 to v5.3 Linux kernel, you’re in for an imminent OOPS that requires an immediate reboot of the computer. This was the final straw for me (more like a huge log). Even if the user-space driver detected the kernel version and refused to run on kernels that would crash, that would mean it wouldn’t run on the most common distributions at the time of release. And I asked myself whether I wanted to depend on a subsystem that is maintained this way.

Maybe I should have listened to what Linus had to say about FUSE back in 2011:

People who think that userspace filesystems are realistic for anything but toys are just misguided.

Unfortunately, it seems like the overall attitude towards FUSE is more or less in that spirit, hence nobody gets alarmed when the relevant code gets messier than is usually allowed: FUSE is nice for that nifty GUI that allows me to copy some files from my smartphone to the computer over a USB cable. It fails when there are many files, but what did I expect? Maybe it’s a problem with FUSE, maybe with the MTP/PTP protocol, but the real problem is that it’s treated as a toy.

As for myself, I was tempted to offer a user-space device driver for a USB product I’ve designed. A simple installation, possibly from binaries, running on virtually any computer. CUSE has been around for many years, and creates a file in /dev with my name of choice. It makes the device file behave as if it were backed by a driver in the kernel (more or less). What could possibly go wrong?

And a final note before the storytelling: This post was written in the beginning of 2020. Sometimes things change after a while. Not that they usually do, but who knows?

Phase I: Why I opted out of libfuse

The natural and immediate choice for working with FUSE is to use its ubiquitous library, libfuse. OK, how does the API go? How does it work?

libfuse’s git commits date back to 2001, and the project is alive by all means, with several commits and version updates every month. As for documentation, the doc/ subdirectory doesn’t help much, and its mainpage.dox says it straight out:

The authoritative source of information about libfuse internals (including the protocol used for communication with the FUSE kernel module) is the source code.

Simply put, nothing is really documented; read the source and figure it out yourself. There’s also an example/ directory with example code, showing how to get it done, including a couple of examples for CUSE. But no API documentation at all. Nothing on the fine details that make the difference between “look it works, oops, now it doesn’t” and something you can rely upon.

As for the self-documenting code, it isn’t a very pleasant experience, as it’s clearly written in “hack now, clean up later (that is, never)” style.

There are, however, a few scattered pieces of documentation here and there.

So with the notion that messy code is likely to bite back, I decided to skip libfuse and talk with /dev/cuse directly. I mean, kernel code can’t be that messy, can it?

It took me quite some time to reverse-engineer the CUSE protocol, and I’ve written a couple of posts on this matter: This and this.

Phase II: Accessing /dev/cuse causing a major OOPS

After nearly finishing my CUSE-based (plus libusb and epoll) driver on a Linux v4.15 machine, I gave it a test run on a different computer, running kernel v5.3. And that went boooom.

Namely, just when trying to close /dev/cuse, an OOPS message as follows appeared, leaving Linux limping, requiring an immediate reboot:

kernel: BUG: spinlock bad magic on CPU#0, cat/951
kernel: general protection fault: 0000 [#1] PREEMPT SMP PTI
kernel: CPU: 0 PID: 951 Comm: cat Tainted: G           O      5.3.0-USBTEST1 #1
kernel: RIP: 0010:spin_bug+0x6a/0x96
kernel: Code: 04 00 00 48 8d 88 88 06 00 00 48 c7 c7 90 ef d5 81 e8 8c af 00 00 41 83 c8 ff 48 85 db 44 8b 4d 08 48 c7 c1 85 ab d9 81 74 0e <44> 8b 83 c8 04 00 00 48 8d 8b 88 06 00 00 8b 55 04 48 89 ee 48 c7
kernel: RSP: 0018:ffffc900008abe18 EFLAGS: 00010202
kernel: RAX: 0000000000000029 RBX: 6b6b6b6b6b6b6b6b RCX: ffffffff81d9ab85
kernel: RDX: 0000000000000000 RSI: ffff88816da16478 RDI: 00000000ffffffff
kernel: RBP: ffff88815a109248 R08: 00000000ffffffff R09: 000000006b6b6b6b
kernel: R10: ffff888159b58c50 R11: ffffffff81c5cd00 R12: ffff88816ae00010
kernel: R13: ffff88816a165e78 R14: 0000000000000012 R15: 0000000000008000
kernel: FS:  00007ff8be539700(0000) GS:ffff88816da00000(0000) knlGS:0000000000000000
kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
kernel: CR2: 00007ffe4fa2fee8 CR3: 000000016b5d0002 CR4: 00000000003606f0
kernel: Call Trace:
kernel: do_raw_spin_lock+0x19/0x84
kernel: fuse_prepare_release+0x3b/0xe7 [fuse]
kernel: fuse_sync_release+0x37/0x49 [fuse]
kernel: cuse_release+0x16/0x22 [cuse]
kernel: __fput+0xf0/0x1c2
kernel: task_work_run+0x73/0x86
kernel: exit_to_usermode_loop+0x4e/0x92
kernel: do_syscall_64+0xc9/0xf4
kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9

OK, so why did this happen?

To make a long story short, because a change was made in the FUSE kernel code without testing it on CUSE. I mean, no test at all.

The gory details: A spinlock was added to struct fuse_inode, but someone forgot that a CUSE device doesn’t have such a struct related to it, because it lives in /dev and not on a FUSE-mounted filesystem. Small mistake, no test, big oops.

Even more gory details: Linux kernel commit f15ecfef058d94d03bdb35dcdfda041b3de9d543 adds a spinlock check in fuse_prepare_release() (among others), saying

	if (likely(fi)) {
		spin_lock(&fi->lock);
		list_del(&ff->write_entry);
		spin_unlock(&fi->lock);
	}

For this to even be possible, an earlier commit (ebf84d0c7220c7c9b904c405e61175d2a50cfb39) adds a struct fuse_inode *fi argument to fuse_prepare_release(), and also makes sure that it’s populated correctly. In particular, in cuse.c, it goes:

struct fuse_inode *fi = get_fuse_inode(inode);

(what would I do without git blame?).

But wait. What inode? Apparently, the idea was to get the inode’s struct fuse_inode, which is allocated and initialized by fuse_alloc_inode() in fs/fuse/inode.c. However, this function is called only as a superblock operation — in other words, when the kernel wants to create a new inode on a mounted FUSE filesystem. A CUSE device file doesn’t have such an entry allocated at all!

get_fuse_inode() is just a container_of(). In other words, it assumes that @inode is an entry inside a struct fuse_inode, and returns the address of the struct. But it’s not. In the CUSE case, the inode belongs to a devfs or something. get_fuse_inode() returns just some random address, and no wonder do_raw_spin_lock() whines that it’s called on something that isn’t a spinlock at all.
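To see why this returns garbage, here’s container_of() in a nutshell, applied to a hypothetical pair of structs standing in for struct inode and struct fuse_inode. The macro only subtracts a fixed offset from the member’s address, so the result is valid only if the member really is embedded in the assumed container.

```c
#include <stddef.h>

/* The kernel's container_of(), simplified: given a pointer to a member,
   recover the address of the containing struct */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical stand-ins for struct inode and struct fuse_inode */
struct my_inode { int dummy; };

struct my_fuse_inode {
	int lock;               /* the spinlock the OOPSing code took */
	struct my_inode inode;  /* the embedded member */
};
```

When the inode really is embedded, container_of() recovers its container. Apply it to an inode that was allocated elsewhere (as in the CUSE case), and it happily returns a pointer into whatever memory happens to precede that inode, which is exactly the “spinlock bad magic” above.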

The relevant patches were submitted by Kirill Tkhai and committed by Miklos Szeredi. None of whom made the simplest go-no-go test on CUSE after this change, of course, or they would have spotted the problem right away. What damage could a simple change in cuse.c make, after all?

The patch that fixed it

This issue is fixed in kernel commit 56d250ef9650edce600c96e2f918b9b9bafda85e (effective in kernel v5.4) by Miklos Szeredi, saying “It’s a small wonder it didn’t blow up until now”. Irony at its best. He should have written “the FUSE didn’t blow”.

So this bug lived from v5.0 to v5.3 (inclusive), something like 8 months. 8 months without a single minimal regression test by the maintainer or anyone else.

The patch removes the get_fuse_inode() call in cuse.c, and calls fuse_prepare_release() with NULL instead, meaning that there is no fuse_inode, as it should be.