On Drawing Squares

I’ve been looking at the sample drawing in speccy to try and speed it up. For a given frame, it renders about 10000 points as 9-pixel anti-aliased squares using Cairo, in Python. On my desktop (Debian stable, Nouveau driver) it gets a pokey 1 FPS.

My graphics knowledge is about 20 years old at this point, and while I did once know various arcane tricks on the hardware of the day, none of that is relevant now. So I know basically nothing about modern graphics. It's still not clear to me to what extent the 2D primitives are accelerated, and it doesn't help that, for every year I've been asleep on this topic, a new API seems to have been introduced or retired: today there are at least XRender, XAA, EXA, UXA, SNA, and Glamor. Did I miss any?

Anyway, here are the results on my desktop from things that I’ve tried so far [1]:

test                    FPS
----                    -------
orig (aa)                0.904780
no_aa                    1.103061
no_cairo                 8.372354
no_cairo_aa              2.408143
batched                 19.873722
batched_aa              29.045358
batched_by_color         7.611148
batched_by_color_aa     14.856336

The tests are:

  • orig: the current code, where each rectangle is filled immediately in turn.
  • no_cairo: skips Cairo completely and renders manually into a GdkPixbuf, on the theory that per-draw function call overhead is the big killer.
  • batched: uses Cairo with a single cr.fill() call for all of the rectangles. This renders everything in the same color, so it couldn't actually be used as-is, but it is interesting as a data point on batching vs. not.
  • batched_by_color: one cr.fill() call per distinct color.
  • the aa suffix indicates that antialiasing was enabled.

Batching appears to make a huge difference, and binning and batching by color is the most performant usable setup. It’s not really difficult to batch by color since the colors are internally just an 8-bit intensity value, though in my quick test I am hashing the string representation of the float triples so it’s not as quick as it could be.
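The binning itself is just a dictionary keyed on the color triple. A minimal sketch of the idea, with made-up point and context shapes rather than speccy's actual data model:

```python
from collections import defaultdict

def draw_points_batched(cr, points, half=1):
    """One cairo fill() per distinct color rather than one per point.
    `points` is an iterable of (x, y, (r, g, b)) tuples; the names and
    shapes here are illustrative, not speccy's real API."""
    buckets = defaultdict(list)
    for x, y, color in points:
        # The colors derive from an 8-bit intensity, so keying the dict
        # on the tuple directly beats hashing its str() representation.
        buckets[color].append((x, y))
    for color, pts in buckets.items():
        cr.set_source_rgb(*color)
        for x, y in pts:
            # Accumulate a 3x3 square per point into the current path.
            cr.rectangle(x - half, y - half, 2 * half + 1, 2 * half + 1)
        cr.fill()  # a single fill for every rectangle of this color
```

The `cr` argument is anything with the cairo context's `set_source_rgb` / `rectangle` / `fill` methods, so it slots into a GTK draw handler unchanged.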

I don’t understand the performance increase from batching with AA enabled. Perhaps some accelerated path is only taken when AA is on, and gets disabled otherwise? At any rate, the outcome is repeatable on this hardware / driver.

Then I tested on my laptop and got these numbers (Debian Testing, Intel graphics):

test                    FPS
----                    -------
orig (aa)               28.554757
no_aa                   27.632071
no_cairo                 9.954314
no_cairo_aa              2.958732
batched                 43.917732
batched_aa              28.750845
batched_by_color        19.933492
batched_by_color_aa     19.803839

Here, just using Cairo is the best. My guess is batching by color is only worse because of the higher CPU cost of the hashing and the rather inefficient way I did it. Given the close performance of batched_aa and the original test though, batching done well probably isn’t going to make much difference.

Finally, I went back to the desktop and tried Nvidia binary drivers (oh, how I don’t miss those).

test                    FPS
----                    -------
orig                     6.586327
no_aa                   15.216683
no_cairo                 9.072748
no_cairo_aa              2.368938
batched                 39.192580
batched_aa              27.717076
batched_by_color        18.681605
batched_by_color_aa     16.450811

It seems Nouveau still has some tricks to learn. In this case AA is, as expected, slower than non-AA. Overall, batching still seems to give a decent speedup on this hardware, so I may look into actually doing that. Moral of the story: use the laptop.

As for speccy, I actually had a use for it recently: I needed to see whether a particular testmode on ath5k was functional. Although I can’t really measure anything precisely with the tool, I can at least see if the spectrum looks close enough. Below is a snapshot from my testing, showing that in this testmode, the device emits power on 2412 MHz with a much narrower bandwidth than normal.

This matches, as far as I can tell, the spectrum shown on the NDA-encumbered datasheet.

[1] Approaches not tried yet: reducing the point count by partial redraws, using something like VBOs in GL, or rewriting parts in C. Feel free to let me know of others.

memory µ-optimizing

While looking at some kmalloc statistics for my little corner of wireless, I found a new tool for the toolbox: perf kmem. To test it out, I ran a simulated connected 25-node mesh in mac80211_hwsim.

Here’s what perf kmem stat --caller had to say:

---------------------------------------------------------------------
 Callsite            | Total_alloc/Per | Total_req/Per  | Hit   | ...
---------------------------------------------------------------------
 mesh_table_alloc+3e |      1600/32    |       400/8    |    50 |
 mesh_rmc_init+23    |    204800/8192  |    102600/4104 |    25 |
 mesh_path_new.const |    315904/512   |    241864/392  |   617 |
[...]

The number of mesh nodes shows up in the ‘Hit’ column in various ways: we can pretty easily see that mesh_table_alloc is called twice per mesh device, mesh_rmc_init is called once, and mesh_path_new is called at least once for every peer (617 ~= 25 * 24). We might scan the hit column to see if there are any cases in our code that perform unexpectedly high numbers of allocations, signifying a bug or poor design.
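That sanity check is easy to write down directly. A quick sketch, assuming the 25-node run above:

```python
devices = 25

# Hit counts copied from the perf kmem output above.
hits = {
    "mesh_table_alloc": 50,
    "mesh_rmc_init": 25,
    "mesh_path_new": 617,
}

assert hits["mesh_table_alloc"] == 2 * devices        # two tables per device
assert hits["mesh_rmc_init"] == devices               # once per device
# At least one path per ordered peer pair; a few extra is fine.
assert hits["mesh_path_new"] >= devices * (devices - 1)
```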

The first two columns are also interesting because they show potentially wasteful allocations. Taking mesh_table_alloc as an example:

(gdb) l *mesh_table_alloc+0x3e
0x749fe is in mesh_table_alloc (net/mac80211/mesh_pathtbl.c:66).
61		newtbl = kmalloc(sizeof(struct mesh_table), GFP_ATOMIC);
62		if (!newtbl)
63			return NULL;
64
65		newtbl->known_gates = kzalloc(sizeof(struct hlist_head), GFP_ATOMIC);
66		if (!newtbl->known_gates) {
67			kfree(newtbl);
68			return NULL;
69		}
70		INIT_HLIST_HEAD(newtbl->known_gates);

We are allocating the size of an hlist_head, which is one pointer. This is kind of unusual: typically the list pointer is embedded directly into the structure that holds it (here, struct mesh_table). The reason this was originally done is that the table pointer itself could be switched out using RCU, and a level of indirection for the gates list ensured that the same head pointer is seen by iterators with both the old and new table. With my rhashtable rework, this indirection is no longer necessary.

To track allocations with 8-byte granularity would impose an unacceptable level of overhead, so these 8-byte allocations all get rounded up to 32 bytes (first column). Because we allocated 50 of them (25 devices, two tables each), in all we asked for 400 bytes, and actually got 1600, a waste of 1200 bytes. Forty-eight wasted bytes per device is generally nothing worth worrying about, considering we usually only have one or two devices — but as this kmalloc is no longer needed, we can just drop it and also shed a bit of error-handling code.
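The arithmetic from that row, spelled out with the numbers from the perf output above:

```python
# An hlist_head is one pointer (8 bytes), but on this kernel the
# request lands in a 32-byte slab object, per the perf kmem output.
devices = 25
allocs_per_device = 2          # two tables per mesh device
requested, granted = 8, 32     # bytes per allocation

total_req = devices * allocs_per_device * requested
total_alloc = devices * allocs_per_device * granted
waste = total_alloc - total_req

assert (total_req, total_alloc) == (400, 1600)
assert waste == 1200
assert waste // devices == 48  # wasted bytes per device
```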

In the case of mesh_rmc_init, we see an opportunity for tuning. The requested allocation is the unfortunate size of 4104 bytes, just a bit larger than the page size of 4096. Allocations of up to a page (so-called order-0 allocations) are much easier to fulfill than the order-1 (two contiguous pages) allocations that actually happen here. There are a few options: remove or move the u32 in this struct that pushes it over the page size, reduce the number of buckets in the hashtable, or switch the internal lists to hlists in order to get the struct to fit into 4k, freeing up a page per device. The last is the simplest approach, and the one I chose.
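For intuition, here is a rough model of the allocation order involved. This is only a sketch of the buddy-allocator view, not kernel code:

```python
import math

PAGE_SIZE = 4096

def alloc_order(size):
    """Smallest order n such that 2**n contiguous pages hold `size`
    bytes -- roughly how the page allocator sees a large kmalloc."""
    pages = max(1, math.ceil(size / PAGE_SIZE))
    return math.ceil(math.log2(pages))

# 4104 bytes spills into an order-1 (two-page) allocation; squeezing
# the struct under 4096 drops it to order-0 and saves a page per device.
assert alloc_order(4104) == 1
assert alloc_order(4096) == 0
```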

I have patches for these and a few other minor things in my local tree and will send them out soon.

HSTS applied

Now that I’ve had a few months’ experience with Let’s Encrypt, and I have cert renewals completely automated, bobcopeland.com is now fully https and serving HSTS headers. I’m now using acme-tiny to renew the certs, thanks to their dedication to making the code easily auditable.

On mac vi and error exits

Sometimes I have to use a Mac, and various things drive me nuts about it. The worst is git commit aborting because I typoed the writeout command after having written the best commit log ever. Here’s a google crumb for others experiencing this, and ideally for someone who wants to track down and fix the problem upstream.

Synopsis: run the commands below, and type the following once vim starts up: :Wq[enter]:q[enter]

$ vi || echo broken
broken
$ vim || echo broken
$ vim --version
VIM - Vi IMproved 7.4 (2010 Aug 15, compiled Jul  9 2015 23:58:42)
Compiled by root@apple.com
[...]

/usr/bin/vi is of course a symlink to /usr/bin/vim that makes it act worse. Now, there are some folks who have various theories about why vi is different in this respect, or what vimrc stuff might cause vi to exit with an error if some error happened once upon a time in the editing session, but honestly I can’t find myself caring. Instead I just set my $EDITOR to pretend vi doesn’t exist anymore.

Carbs

My brother-in-law gave me a copy of the bread cookbook Flour Water Salt Yeast by Ken Forkish (as far as I know, his actual name) for Christmas, and so I made five pizzas. The one pictured below is a touch over-baked on top, but pretty close. It was a little too chewy owing to having used bread flour, yet had a nice airy crumb. I think this replaces Peter Reinhart’s recipe as my go-to from now on — both are about 70% hydration but this one had more bubbles.

The book has some tips for baking it in a home oven, which were pretty useful, though I found his suggestion of temporarily using the broiler caused my oven temp to drop 30 degrees in the process. Still working on the heat aspect, but I’m getting better at dealing with wet doughs.

Incidentally, this one is for the kids: my own favorite toppings, stolen from a pizza joint in Crystal City, VA, are prosciutto and arugula.

Homemade pizza

Also cooked last week: pot roast with homemade egg noodles, and butter chicken.

Homemade noodles

Whew, I think it’s going to be takeout all this week.

wherein I did a git push

As I wrote somewhere in the Metaverse, I’m running the wireless-testing tree now, which is about as minimal a contribution as one could make. It essentially means running a script once daily; cron could do it all, if not for the part where I enter my ssh passphrase to actually push out the updates. For the curious, here’s the script.

One of the killer features of git used here is git rerere, which remembers conflict resolutions and reapplies them when the same conflicts come up again. A human (me) still needs to resolve each conflict once, but after that it’s pretty much on autopilot for the rest of that kernel series. Normally one should reinspect the automatic merge made by git-rerere, but at least for this testing tree, throwing caution to the wind is warranted, and thus the script has some kludges around that.

Every pushed tag had a successful build test with whatever (not claimed to be exhaustive) kernel config I am using locally. I’ve considered some more extensive tests here, such as running the hostapd test suite (takes an hour with 8 VMs on my hardware) or boot testing on my mesh APs (requires a giant stack of out-of-tree patches), but, given the parentheticals, I’m keeping it simple for now.

Observant persons may have noted some missing dates in the tags. Generally, this means either nothing actually changed in the trees I’m pulling, or it is during the merge window when things are too in flux to be useful. Or maybe I forgot.

sae.py

For a while now, I’ve wanted an easy way to decrypt a mesh packet capture when I know the SAE passphrase. This would be quite handy for debugging.

I reasoned that, if I knew the shared secret, and had captured the over-the-air exchange, it should be possible to make such a tool, right? We know everything each station knows in that case, don’t we?

So I spent a bit of time last week reimplementing SAE in python to come to grips with its various steps. And I found my reasoning was flawed.

Similarly to Diffie-Hellman, SAE exchanges values that each combine a random value (call them r1 and r2) with some public element P, in such a way that it is hard to recover r1 or r2 even if you know P (e.g. r1*P and r2*P with ECC, although this is not exactly how SAE is specified). From these values, each peer can arrive at a shared secret using its own random number (e.g., something like r1 * r2 * P). The crucial point is that r1 and r2 cannot be determined from the exchange alone: they exist only in memory on each peer during the exchange.
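A toy analogue in Python over a multiplicative group makes the point. The parameters here are purely illustrative (SAE derives its element from the password and typically runs over ECC); the snippet only shows why the sniffed values don't reveal the secret:

```python
import secrets

# Toy Diffie-Hellman-style exchange mod a prime. Far too small for
# real use, and not how SAE is actually specified.
p = 2**61 - 1   # a Mersenne prime, demo-sized only
g = 2           # stands in for the public element P

r1 = secrets.randbelow(p - 2) + 1   # station 1's in-memory secret
r2 = secrets.randbelow(p - 2) + 1   # station 2's in-memory secret

msg1 = pow(g, r1, p)   # what a sniffer captures from station 1
msg2 = pow(g, r2, p)   # what a sniffer captures from station 2

# Each side combines its own secret with the other's public value.
k1 = pow(msg2, r1, p)
k2 = pow(msg1, r2, p)
assert k1 == k2        # shared key; r1 and r2 never cross the air
```

A passive observer holds only msg1 and msg2, and recovering r1 or r2 from those is the discrete log problem, which is exactly where my decrypt-from-the-passphrase plan fell apart.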

So if the secrecy doesn’t actually depend on the password, what is it there for? The answer is authentication: SAE is a password authenticated key exchange (PAKE). The password ensures that the party we are talking to is who we think it is, or at least someone who also knows the password. In the case of SAE, P is derived from the password.

As for the original goal, what we can do instead is use the CCMP keys from a wpa_supplicant log and decrypt frames with those. That is not nearly as convenient, though.

And thus my postings for 2015 are at an end; happy new year, all!

RFID reader

In the previous installment, I was messing with the serial library on Arduino to get it to work on the older board. The purpose was this:

Pictured is an RFID reader from SparkFun that I picked up years ago; it has sat in my parts box ever since. This RFID reader is intended to read Mifare tags, and as such predates the whole NFC thing by a year or three. The programming interface is to send commands to the device over serial, such as “Authenticate sector N”, “Read sector N”, and so on. I’ve connected D7 to D2 so that I could use D2 as the interrupt pin to suit my Arduino.

It appears to work, in that I can dump sectors from my yubikey and a tap-and-go credit card. The sector trailer parts look valid, and the first few bytes of sector 0 have the device ID (verified by the NFC reader on my phone). I can’t seem to read any NDEF information; whether that is due to limitations in the reader, bugs in my code, or my almost zero knowledge about how all this stuff works, I’m not sure. Sketch code is over here.

As to possible applications, for that I have no real idea either, which is probably why it sat in the parts bin all this time.

Software Serial on the Arduino NG

As mentioned before, I have a really old Arduino, the Arduino NG with an Atmega8 on board. No fancy embedded Linux wifi-enabled SoC on that thing.

Recently, I found myself doing a bit of yak shaving when I was trying to use code that was meant for Arduinos of a more recent vintage. And here is what I learned:

The Atmega8 does not have the PCICR (which I believe stands for “pin change interrupt control”) register. This feature allows multiplexing of interrupts so that you can have multiple pins signal the same external interrupt, and the pins that trigger interrupts are programmable. On the Atmega8, you can only use Arduino pins D2 and D3 to directly signal interrupts 0 and 1 respectively.

So the device I was trying to use operates over serial, and the supplied code uses the built-in SoftwareSerial bit-banging serial library to talk to it.
The SoftwareSerial library attaches an interrupt to the configured Arduino RX pin (device TX) to trigger when it goes low, indicating the start of a transmission from the device. This library was only written with the more configurable chips in mind, so it didn’t support the direct interrupts available on the Atmega8. Google didn’t turn up any obvious fixes, just people running into the same compiler error I initially hit:

error: 'PCICR' was not declared in this scope.

In the end a few short changes are needed to make it work: one to disable digitalPinToPCICR() and related macros (as there is no such register on this device); and another change to attach the proper interrupt inside SoftwareSerial. A patch is here.

To debug this, I hooked up the TX side of a USB-serial adapter to the RX pin on the Arduino, ran screen(1) on the computer to control it, and started mashing keys. Until I got the irq triggering working properly, I also monitored the output with a DMM just to make sure the output was going low when I held down a key. A test sketch for that looks like this:

#include <SoftwareSerial.h>

SoftwareSerial port2(2, 3);

void setup()
{
  Serial.begin(9600);
  port2.begin(9600);
  Serial.println("Listening...");
}

void loop()
{
  while (port2.available()) {
    Serial.write(port2.read());
  }
}

It works fine. I’ll write about what I was doing with it in some future installment.

Now (mostly) encrypted

I signed up for the Let’s Encrypt beta and got my cert set up today. I let my old CA-purchased cert lapse when it came up for renewal a couple of years ago, so I’m happy there’s soon to be a free alternative.

Enjoy the HTTPS! (Except for some stray mixed content I need to kill — sorry for that.)