++router

As a sometime wireless hacker, it’s a bit embarrassing to admit that I’ve had the factory firmware on my wifi router all this time, but when I first tried OpenWrt on it, ath9k was only a few months old and dropped connections all the time. Thus, I made do with the factory install but ran many of the essential services (DNS, DHCP, TFTP, etc.) from my Linux workstation. And life continued apace.

After a recent network upgrade, I found I could no longer make my router understand IPv6, and so it was time to put the original firmware out to pasture. In the intervening years, ath9k grew up, so I gave OpenWrt another try. The install took about 20 minutes, most of which went to configuring the firewall and translating my existing dnsmasq config into a uci-friendly format. Everything works great and my IPv6 is back. Nice job, all involved!
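For anyone doing the same translation, the bulk of it reduces to uci commands along these lines; the domain, hostnames, and addresses here are made-up placeholders, and the exact option names can differ between OpenWrt releases.

# dnsmasq options move into /etc/config/dhcp via uci
uci set dhcp.@dnsmasq[0].domain='example.lan'
uci set dhcp.@dnsmasq[0].local='/example.lan/'

# each static lease becomes a 'host' section
uci add dhcp host
uci set dhcp.@host[-1].name='nas'
uci set dhcp.@host[-1].mac='00:11:22:33:44:55'
uci set dhcp.@host[-1].ip='192.168.1.10'

uci commit dhcp
/etc/init.d/dnsmasq restart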

I suppose I could now eat even more dogfood by running a mesh interface on one of the radios. In the past, I’ve tinkered with mesh as a wireless distribution system, but I don’t have much of a use for that currently with every room in the new place being wired. Perhaps my backyard could use expanded coverage?

Fake Wireless Errors

When I did my previous work with mac80211_hwsim, I wrote the channel model in matlab and pre-generated a huge lookup table of frame error rates for different SNR values and transmission rates so that the simulator didn’t have to do any thinking for each packet. Obviously that’s a bit limiting and not in any way upstreamable in something like wmediumd.

So, I sat down today and rewrote it in C to see how bad the computation is. Actually, it’s not awful: I didn’t carefully benchmark it, but it sits at around 30 µsec per calculation, and there is probably a good deal of low-hanging fruit, such as making factorial() cache its computations or fitting the output curves with cubics. I stuck the initial code on a wmediumd topic branch over here.
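For the curious, the per-packet cost is essentially that of evaluating a union-bound packet error model for the convolutionally coded 802.11a rates. The sketch below is only my recollection of the standard form (the exact expressions, weight spectra, and modulation-specific bit error rates are in the Qiao/Choi paper cited below), but it shows where the factorial() calls come from:

P_d = \sum_{k=\lceil (d+1)/2 \rceil}^{d} \binom{d}{k} \rho^k (1-\rho)^{d-k}   (for odd d; even d adds a half term)

P_u \le \sum_{d=d_{\mathrm{free}}}^{\infty} a_d P_d, \qquad P_{\mathrm{pkt}}(L) \approx 1 - (1 - P_u)^{8L}

Here \rho is the raw channel bit error probability at a given SNR and modulation, a_d are the code’s weight-spectrum coefficients, and L is the frame length in octets; the binomial coefficients are the factorial hot spot.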

I verified the output matched my matlab code and charted it as above. Careful observers will note that the 9 Mbps rate is always worse than 12 Mbps; this was a finding of D. Qiao and S. Choi in “Goodput enhancement of IEEE 802.11a wireless LAN via link adaptation,” in Proc. IEEE ICC ’01, from which most of my math was appropriated.

Simulating Wireless

A Study of 802.11 Bitrate Selection in Linux (January 2010).

I didn’t think too much of this paper when I wrote it as a term project in grad school. As an academic paper, it doesn’t really present anything novel. The equations underlying my wireless medium simulation, for example, are lifted wholesale from other sources. In the few academic papers still being written on the subject, rate controllers that do not specifically look at collisions are old news (even though Minstrel tends to get loss differentiation implicitly through the magic of probability). Even at the time, looking at non-QoS 802.11 DCF and only 802.11a rates made the whole exercise a bit dated, and the world has definitely moved on in the intervening years. The paper did, however, find a few flaws (or perhaps over-exuberances) in Minstrel’s multi-rate-retry mechanism that may still be unfixed upstream, and many more flaws in PID (one of which I fixed upstream, though PID is still not usable). I wanted to go back and redo the physical experiments before submitting patches to Minstrel, but life intervened.

However, I’ve recently been talking to the good folks at cozybit, who picked up where I left off by creating wmediumd (which does more or less the same thing but in a more polished fashion). There were still some things in my version that wmediumd lacks today, so I’m posting the paper to give it a slightly wider audience. I’d be interested to hear of any glaring flaws in the model or approach. Given time, I’d like to bring those missing features (namely, signal-level-based loss, and optional transmission time simulation) to wmediumd and repeat the experiments there.

As for the fixes to Minstrel, the basic theme is reducing the number of retries to avoid backoff, since at some point it is better to drop packets and send the next batch at a lower rate rather than retrying for tens of ms. This patch (untested) addresses one of the two points I mentioned in the paper. The other fix, to compute the backoff time per-slot, was an über-kludge in my experiments; I’ll have to see if there’s an upstreamable way to do that. Pretty much everyone (even for pre-11n devices) is using Minstrel-HT now, so it would be worthwhile to refresh and see if the issues were carried over there as well.

My drive is now solid state

I always learn something new when doing some misguided thing such as “let’s copy our OS onto a new disk using tar like they did back in the day!” The impetus was a shiny new OCZ Vertex 3 to replace the spinning rust in my ancient MacBook. As a co-worker says, it’s like putting a gold steering wheel in a 1970 Pinto. My plan was roughly to:

  • Stuff the new drive in the desktop
  • Make a tar backup of the old laptop drive across the network to the desktop
  • Partition new drive, aligning to flash block sizes
  • Mount the new partition somewhere, and untar the backup
  • Chroot into the new directory and run grub-install

Simple. All of that went great, I thought.
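In command form, the plan looked roughly like the sketch below. The hostnames, device names, and partition sizes are placeholders, and the bootloader step assumes whichever grub flavor is already installed, so treat it as an outline rather than a recipe.

# on the laptop: stream a backup of the root filesystem to the desktop
tar --one-file-system -cpf - / | ssh desktop 'cat > /srv/backup/laptop.tar'

# on the desktop: partition the SSD, starting at 1 MiB to stay aligned
# with common flash erase block sizes
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary ext4 1MiB 100%
mkfs.ext4 /dev/sdb1

# restore the backup and reinstall the bootloader from a chroot
mount /dev/sdb1 /mnt/new
tar -xpf /srv/backup/laptop.tar -C /mnt/new
mount --bind /dev /mnt/new/dev
mount --bind /proc /mnt/new/proc
mount --bind /sys /mnt/new/sys
chroot /mnt/new grub-install /dev/sdb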

The first hurdle in swapping the new drive in was a physical one: the laptop disk bay uses Torx screws smaller than any of the bits I have. This obstacle was cleared by creative use of needle-nose pliers, and I was on to my first boot attempt, and its subsequent failure.

Among the exciting new discoveries:

  • You really, really need to remember to mark /boot as bootable. Otherwise, the dumb Mac firmware will give you the blinking question mark despite all of your previous care. (A few of the relevant commands are sketched after this list.)
  • If your user/group IDs don’t match between the two systems, you probably just borked up a bunch of uid/gids. Luckily, find has -nouser and -nogroup args, and almost everything outside of /home is root:root.
  • Once again, Macs won’t boot a rescue USB flash drive unless it has EFI crud lying around in the root directory.
  • UUID-based fstab and grub.conf are not happy when you have an entirely new drive.
  • Debian tar doesn’t understand xattrs; that’s a RedHat feature (although I caught this one in time and compiled RedHat’s tar on the Debian system).
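For the record, here is the sort of thing that fixes a few of those; device names and paths are placeholders, and the details obviously depend on your partition layout.

# mark the boot partition bootable (MBR-style partition table)
parted /dev/sda set 1 boot on

# find files whose owner or group didn't survive the move
find / -xdev \( -nouser -o -nogroup \) -ls

# grab the new partition's UUID for fstab and grub.conf
blkid /dev/sda1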

On the whole though, it wasn’t too bad. Despite the mistakes, the install took maybe twice as long as installing from media and restoring my usual backup, and this way I don’t have to reinstall huge numbers of packages to get back to where I was. And so far the drive seems quite speedy, even over my sad 1.5 Gbps SATA interface. Plus it doesn’t hurt that the installation did a giant defrag of the whole OS.

Updated to version 6

I bought a new wireless router the other day, a D-Link DIR-825 dual-band a/b/g/n. This is a step up in several regards from my previous 2.4 GHz 11g Linksys, not least of which is the D-Link’s ability to run OpenWrt with ath9k. Installing OpenWrt from prebuilt binaries is dirt simple; it took all of about 5 minutes to set up.

One of my motivations was to do some IPv6 testing in advance of World IPv6 Day. Comcast doesn’t yet offer IPv6 addresses to the public at large, so I set up a Hurricane Electric 6in4 tunnel. This is quite easy to configure in OpenWrt.
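For reference, the whole thing boils down to a handful of uci commands along these lines; the addresses are placeholders for the values Hurricane Electric hands you, and the exact option names can vary between OpenWrt releases.

# create a 6in4 interface for the HE tunnel (placeholder addresses)
uci set network.henet=interface
uci set network.henet.proto=6in4
uci set network.henet.peeraddr=216.66.0.1       # HE tunnel server IPv4 endpoint
uci set network.henet.ip6addr=2001:db8:1::2/64  # client IPv6 from the tunnel details
uci commit network
ifup henet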

So now I can load ipv6.google.com in my browser. It’s approximately 2 better than the normal site.

log2 in bash

Turns out bash supports enough arithmetic operators to implement lg() just as you might in some real language:


function log2 {
    # ceiling of log base 2: count how many right shifts it takes
    # to reduce (n - 1) to zero
    local x=0 y
    for (( y = $1 - 1; y > 0; y >>= 1 )); do
        (( x++ ))
    done
    echo $x
}

z=$(log2 64)


I had a somewhat valid reason to want to do this. The one thing that continually gets me when I try dumb things like this is knowing when to use the raw variable name and when to use the dollar sign. Yes, the semantics are well-defined, but my perl habits run deep. Similarly, in ruby, all my variables are global.

Backup re-revisited

I’ve reconsidered using git as my backup choice du jour. The main problem (and feature) of using git was having a checked-out repo in my home directory. Given my admittedly horrid muscle-memory habit of periodically doing ‘git reset --hard’ inside source trees, I was forever worried that I’d accidentally do it from the wrong directory and lose any recent work. Luckily, I never did that.

Two obvious solutions: don’t use a checkout in the home directory and instead rsync into the repo, or use a wrapper/custom command name for the git-as-backup program to avoid accidents. Well, I went with the third option: use rdiff-backup like a normal person. It’s packaged by both Fedora and Debian, so I only needed a small tweak to my backup scripts to make it happen every night. And someone wrote a FUSE filesystem (archfs) to mount the backups as normal directories, so there’s no real loss of convenience under this scheme.
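The nightly job is not much more than the following; the paths are placeholders and the retention window is just a number picked for illustration.

# nightly cron job: mirror the home directory into the backup repository,
# keeping reverse diffs so older versions stay restorable
rdiff-backup $HOME /backup/home
rdiff-backup --remove-older-than 6M /backup/home

# pull a file back the way it looked three days ago
rdiff-backup -r 3D /backup/home/some/file /tmp/file.3days.ago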

My RPM database is now corrupted. Just in time.

Smolt

The stats from smolt are pretty interesting if, like all stats, entirely useless. Some curiosities:

  • i686 still beats x86_64 by a ton
  • A (very) few people change their runlevel
  • Acer is high in the vendor list, I guess they are still killing the netbook market
  • People don’t configure their swap appropriately
  • SMP is now the norm, outside of embedded kit anyway
  • I want a 4+ GHz cpu
  • No one uses omfs :( …I guess I need to submit a profile.

F10

So, I’ve been a user of Debian (and lately Ubuntu) since around 2001, with RedHat, Mandrake, and Slackware in use before then. Debian was like a revelation: ‘apt’ is how package management should be! I still have my server running Debian stable, but I thought I’d try putting Fedora 10 on my laptop this go-round to see how it compares to Ubuntu. All the marketing hype about Ubuntu being mere aggregators of others’ hard work had something to do with that as well. Besides, yum has been around for years now, so surely it is as good as apt by now.

Here are my thoughts: I still find yum a little clunky for a few things; maybe that’s just my expertise in apt speaking. LVM was the first thing to go, which wasn’t hard to do from the graphical installer. The much-hailed boot graphics stuff only worked with vesafb for me, since they dropped the Intel modesetting code from the kernel. I had to overhaul the installkernel script to properly update grub and not bother with an initrd, since I hate them. Finally, all configuration seems to be HAL-driven now, which just means putting more random undocumented crap into huge XML files in /etc to get your touchpad working. Lovely; I’m sure Ubuntu is busy adopting that mess. On the plus side: a nice-looking gnome setup with reasonable defaults. On the whole, Fedora 10 is a solid release, though it will still take some time to get it configured to my liking. Perhaps by then I’ll give openSUSE a spin.

Backups revisited

I spent most of last weekend doing home IT tasks. That involved upgrading my main desktop machine from a Pentium III to an Athlon XP. Welcome to 7 years ago! But most of the work was spent reorganizing my data and coming up with a better backup regime.

Now that hard drives are so cheap, and now that we rent a storage space, spending $1/GB-month for off-site network backup is just not worth it any more. Also, with my off-site backup I was only keeping a single full backup, which is not terribly useful if a few weeks elapse before you notice something is missing. So, I have been playing around with incremental backups using rsync and hard links, similar to the way Apple’s Time Machine supposedly works. Then I stumbled across ‘gibak,’ a set of shell scripts that use the git version control system as the backup tool.
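The rsync-and-hard-links trick looks roughly like this (paths and naming are made up for illustration): each dated directory appears to be a full copy, but unchanged files are just hard links into the previous snapshot.

# make today's snapshot, hard-linking unchanged files against the last one
# (assumes /backup/latest points at the previous snapshot)
today=$(date +%F)
rsync -a --delete --link-dest=/backup/latest $HOME/ /backup/$today/
ln -snf /backup/$today /backup/latest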

In the end, I went with my own dozen-line script that uses git and metastore, with rsync/cifs to collect the stuff in Windows-land for backup in separate repositories. A cron job does a daily commit and push from the checked-out repo in my home directory. So far, the result is pretty nice. If I screw something up, a ‘git reset’ gets me back to any earlier date. It also solves a minor annoyance with keeping files in sync across multiple machines: both can use a clone of the git repo, and then syncing is as easy as a push from one and a pull to the other. I can rotate portable hard drives to the storage area to cover the ‘apartment burning down’ scenario, though I’m admittedly still vulnerable to the ‘global thermonuclear war’ scenario.
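The nightly job amounts to something like the sketch below; the remote name, share path, and repo layout are placeholders, and the metastore call assumes the usual save-on-backup / apply-on-restore workflow.

#!/bin/bash
# nightly home-directory backup: record ownership/permissions with
# metastore, commit everything, and push to the backup remote
cd "$HOME" || exit 1
metastore -s                # save file metadata that git doesn't track
git add -A
git commit -q -m "backup $(date +%F)"
git push -q backup master

# separately, collect the windows machine's share into its own repo
rsync -a /mnt/windows-share/ /backup/windows-repo/
(cd /backup/windows-repo && git add -A && git commit -q -m "backup $(date +%F)")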

I’ve already used this scheme to rebuild a machine’s home dir and it worked flawlessly. Hopefully the same will hold when I move my laptop from Ubuntu 8.04 to Fedora 10. Anyway, this should keep me satisfied until btrfs is everywhere and I can just use filesystem snapshots.