AWS precursor

Let’s say it’s 1975, and you have a mountain of data (2 megs) to process, and a heap of cash. Whom do you call to run your Cobol? Your local airline, of course!

Excerpt:

IBM 360/195 for only 50 cents a SECOND

Guaranteed Turnaround! 2meg; 2314’s – 3330’s – 3420’s
OS/MVT
HASP/RJE
MPSX-GPSS-PMS-SSP-CSMP
Ans Cobol, Fortran G, G1, H, Assembler F & H, PL/1 F and PL/1 Optimizing and Checkout Compilers.

Our typical customer is knowledgeable in OS; has good working knowledge of JCL, Utilities and the functions of the compilers/assemblers he uses.
[…]
Call or Write
UNITED AIRLINES

Courtesy of a yellowed copy of Computerworld my dad found among his things. I noticed Google has scanned a lot of old issues but I couldn’t find this one in their archive.

No, I don’t remember those days.

Grepping 300 gigs

It’s fun when a problem is simple, yet the program to solve it takes long enough to run that you can write a second, or third, much faster version while waiting on the first to complete. Today I had a 1.8 billion line, 300 GB sorted input file of the form “key\tvalue”, and another file of 20k interesting keys. I wanted the lines matching those keys sent to a third file; the same key might appear multiple times in the input.

What doesn’t work: any version of grep -f, which I believe is something like O(N*M).

What could have worked: a one-off program to load the keys into a hash, and output the data in a single pass. Actually, I had written this one-off program before for a different task. In this case, the input file doesn’t need to be sorted, but the key set must be small and well-defined. It is O(N+M).
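
A minimal sketch of that kind of hash filter (made-up file names, not the actual one-off) looks something like:

#!/usr/bin/python
# Minimal sketch of the O(N+M) hash-filter approach: load the small key set
# into a set, then stream the big file once. File names are hypothetical.
import sys

def filter_by_keys(keys_path, input_path, output_path):
    with open(keys_path) as f:
        keys = set(line.rstrip("\n") for line in f)

    with open(input_path) as fin, open(output_path, "w") as fout:
        for line in fin:
            key = line.split("\t", 1)[0]
            if key in keys:
                fout.write(line)

if __name__ == "__main__":
    filter_by_keys(sys.argv[1], sys.argv[2], sys.argv[3])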

What worked: a series of sorted grep invocations (sgrep, but not the one in Ubuntu). That is O(M lg N), which can be a win when M is small compared to N. See also comm(1) and join(1), but I don’t know off the top of my head whether they start with a binary search. I had fairly low selectivity in my input set, so the binary search helped immensely.
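
To make the binary search concrete, here is a rough sketch of the same trick in python (not the actual sgrep; it assumes the file is sorted lexicographically on the key field):

#!/usr/bin/python
# Rough sketch of the O(M lg N) idea: binary-search a sorted "key<TAB>value"
# file by byte offset. Illustration only, not the actual sgrep.
import os

def key_after(f, pos):
    # return the key of the first complete line starting after offset pos
    # (or at pos 0), or None if there is no complete line left
    f.seek(pos)
    if pos > 0:
        f.readline()                  # discard the partial line we landed in
    line = f.readline()
    return line.split("\t", 1)[0] if line else None

def grep_sorted(path, key):
    # yield every line in the sorted file whose first field equals key
    size = os.path.getsize(path)
    with open(path) as f:
        lo, hi = 0, size
        while lo < hi:                # binary search over byte offsets
            mid = (lo + hi) // 2
            k = key_after(f, mid)
            if k is None or k >= key:
                hi = mid
            else:
                lo = mid + 1
        f.seek(lo)
        if lo > 0:
            f.readline()              # realign to the next line boundary
        while True:
            line = f.readline()
            if not line or line.split("\t", 1)[0] != key:
                break
            yield line

At 1.8 billion lines each lookup is only about 31 probes, so 20k keys cost well under a million reads instead of a scan of the whole 300 GB.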

What also could have worked: a streaming map-reduce or gnu parallel job (the input file was map-reduce output, but my cluster was busy at the time I was processing this, and I am lazy). That would still be O(N+M), but distributing across P machines in the cluster would reduce it by around the factor P. P would need to be large here to be competitive with the previous solution from a complexity standpoint.

Making up numbers, these take about 0, 5, 1, and 20 minutes to write, respectively.

Of course, algorithmic complexity isn’t the whole picture. The last option, or otherwise splitting the input/output across disks on a single machine, would have gone a long way to address this problem:

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda              10.50     0.00  361.00    9.75 83682.00  4992.00   478.35   139.42  174.20    6.43 6385.95   2.70 100.00
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00

You can’t go any faster when blocked on I/O.

Note the similarities with join algorithms in databases — variations on nested loop, hash, and sort-merge joins are all present above.

Mavened.

So, I’m applying a one line patch to a Java package, and oh yes, it needs maven. Hooray!

$ apt-cache depends --recurse maven | grep Depends | 
     grep -v '<' | egrep lib.*java | sort | uniq | wc -l
203
[... start build and watch maven download another pile ...]
$ find ~/.m2 -name *pom | wc -l
851

I find it hard to believe that there are that many useful libraries in the world.

goto in bash

If you are as old and nerdy as I, you may have spent your grade school days hacking in the BASIC computer language. One of the (mostly hated) features of the (mostly hated) language was that any statement required a line number; this provided both the ability to edit individual lines of the program without a screen editor, as well as de facto labels for the (mostly hated) GOTO and GOSUB commands. But you could also use line numbers to run your program starting from any random point: “RUN 250” would start in the middle of the program, typically after the program had died with a syntax error at line 250 and that line had been fixed.

Today, in bash, we have no such facility. Why on earth would anyone want it, with the presence of actual flow control constructs? Who knows, but asking Google about “bash goto” shows that I am not the first.

For my part, at $work, I have a particular script which takes several days to run, each part of which may take many hours, and, due to moon phases, may fail haphazardly. If a command fails, the state up to that point is preserved, so I just need to continue where that left off. Each major part of the job is already factored into individual scripts, so I could cut-and-paste commands from the failure point onward, but I’m lazy.

Thus, I present bash goto. It runs sed on itself to strip out any parts of the script that shouldn’t run, and then evals it all. Prepare to cringe.


#!/bin/bash
# include this boilerplate
function jumpto
{
    label=$1
    cmd=$(sed -n "/$label:/{:a;n;p;ba};" $0 | grep -v ':$')
    eval "$cmd"
    exit
}

start=${1:-"start"}

jumpto $start

start:
# your script goes here...
x=100
jumpto foo

mid:
x=101
echo "This is not printed!"

foo:
x=${x:-10}
echo x is $x

results in:


$ ./test.sh
x is 100
$ ./test.sh foo
x is 10
$ ./test.sh mid
This is not printed!
x is 101

My quest to make bash look like assembly language draws ever nearer to completion.

Update 2019/05/21: A reader pointed out that executing one of the labels results in bash complaining “command not found” and suggested putting the labels in a comment, which works just fine without any other changes (but if you like, you can drop the “grep -v” in that case). You might also be interested in the takes of folks on reddit, or my own take of being discussed on reddit.

Update 2023/08/04: Greetings, and my apologies, to Hacker News readers all these years later. Hi!

Fake Wireless Errors

When I did my previous work with mac80211_hwsim, I wrote the channel model in matlab and pre-generated a huge lookup table of frame error rates for different SNR values and transmission rates so that the simulator didn’t have to do any thinking for each packet. Obviously that’s a bit limiting and not in any way upstreamable in something like wmediumd.

So, I sat down today and rewrote it in C to see how bad the computation is. Actually, it’s not awful: I didn’t carefully benchmark it, but it sits at around 30 µsec per calculation, and there is probably a good deal of low hanging fruit such as making factorial() cache its computations, or fitting the output curves with cubics. I stuck the initial code on a wmediumd topic branch over here.
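
As a toy illustration of the factorial caching (in python here rather than the C in the branch, so purely for flavor):

# Toy sketch of memoizing factorial so repeated per-packet error-rate
# calculations do not recompute it from scratch every time.
_fact_cache = {0: 1}

def factorial(n):
    if n not in _fact_cache:
        _fact_cache[n] = n * factorial(n - 1)
    return _fact_cache[n]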

I verified the output matched my matlab code and charted it as above. Careful observers will note the 9 Mbps rate is always worse than 12 Mbps; this was a finding of D. Qiao and S. Choi in “Goodput enhancement of IEEE 802.11a wireless LAN via link adaptation,” in Proc. IEEE ICC ’01, from which most of my math was appropriated.

Simulating Wireless

A Study of 802.11 Bitrate Selection in Linux (January 2010).

I didn’t think too much of this paper when I wrote it as a term project in grad school. As an academic paper, it doesn’t really present anything novel. The equations underlying my wireless medium simulation, for example, are wholesale lifted from sources. In the few academic papers still being written on the subject, rate controllers that do not specifically look at collisions are old news (even though Minstrel tends to get loss differentiation implicitly through the magic of probability). Even at the time, looking at non-QoS 802.11 DCF and only 802.11a rates made the whole exercise a bit dated, and the world has definitely moved on in the intervening years. The paper did, however, find a few flaws (or perhaps over-exuberances) in Minstrel’s multi-rate-retry mechanism that may still be unfixed upstream, and many more flaws in PID (one I fixed upstream, but it is still not usable). I wanted to go back and redo the physical experiments before submitting patches to Minstrel, but life intervened.

However, I’ve recently been talking to the good folks at cozybit, who picked up where I left off by creating wmediumd (which does more or less the same thing but in a more polished fashion). There were still some things in my version that wmediumd lacks today, so I’m posting the paper to give it a slightly wider audience. I’d be interested to hear of any glaring flaws in the model or approach. Given time, I’d like to bring those missing features (namely, signal-level-based loss, and optional transmission time simulation) to wmediumd and repeat the experiments there.

As for the fixes to Minstrel, the basic theme is reducing the number of retries to avoid backoff, since at some point it is better to drop packets and send the next batch at a lower rate rather than retrying for tens of ms. This patch (untested) addresses one of the two points I mentioned in the paper. The other fix, to compute the backoff time per-slot, was an über-kludge in my experiments; I’ll have to see if there’s an upstreamable way to do that. Pretty much everyone (even for pre-11n devices) is using Minstrel-HT now, so it would be worthwhile to refresh and see if the issues were carried over there as well.

Packet Timing

Continuing with the charting theme, I wrote a quick python script yesterday to generate timing diagrams from pcap files while learning matplotlib. It’s just at the proof-of-concept stage but could be useful to debug things like power-saving in wireless. Surely something like this exists somewhere already, right? I couldn’t find it in the Google.
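
The core of such a script is not much more than pulling timestamps out of the pcap records and handing them to matplotlib; here is an illustrative from-scratch sketch (not the actual script):

#!/usr/bin/python
# Illustrative sketch (not the script described above): pull per-packet
# timestamps and captured lengths out of a classic libpcap file and plot
# them as a crude timing diagram.
import struct
import sys
import matplotlib.pyplot as plt

def read_pcap_times(path):
    # return a list of (timestamp in seconds, captured length) tuples
    pkts = []
    with open(path, "rb") as f:
        hdr = f.read(24)                              # pcap global header
        magic = struct.unpack("<I", hdr[:4])[0]
        endian = "<" if magic == 0xa1b2c3d4 else ">"  # microsecond pcap only
        while True:
            rec = f.read(16)                          # per-record header
            if len(rec) < 16:
                break
            ts_sec, ts_usec, incl_len, _orig = struct.unpack(endian + "IIII", rec)
            pkts.append((ts_sec + ts_usec / 1e6, incl_len))
            f.seek(incl_len, 1)                       # skip the frame bytes
    return pkts

if __name__ == "__main__":
    pkts = read_pcap_times(sys.argv[1])
    t0 = pkts[0][0]
    plt.vlines([t - t0 for t, _ in pkts], 0, [length for _, length in pkts])
    plt.xlabel("time (s)")
    plt.ylabel("captured frame size (bytes)")
    plt.savefig("timing.png")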

Along the way I had to write a python radiotap parser; that might actually be useful to others so I’ll try to put it on github at some point.
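
The fixed part of the radiotap header is simple enough to show the flavor; here is a hedged sketch of just that piece (version, header length, and the chain of "present" bitmaps):

# Hedged sketch (not the parser mentioned above): unpack only the fixed part
# of a radiotap header. Radiotap is little-endian; bit 31 of a 'present'
# word means another 'present' word follows.
import struct

RADIOTAP_EXT_BIT = 31

def parse_radiotap_header(data):
    # return (version, header length, list of 'present' words)
    version, _pad, hdr_len = struct.unpack_from("<BBH", data, 0)
    present = []
    ofs = 4
    while True:
        word = struct.unpack_from("<I", data, ofs)[0]
        present.append(word)
        ofs += 4
        if not (word & (1 << RADIOTAP_EXT_BIT)):
            break
    return (version, hdr_len, present)

Everything after that is the fiddly part: each present bit's field has its own size and natural alignment.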

Graphing HBase Splits

I was asked for a blog post on this topic. I also do birthdays!

HBase achieves balance by splitting regions when they reach a certain size, and by evenly distributing the number of regions among cluster machines. However, the balancer will not run in some cases (e.g. if there are regions stuck in transition), and balancing the number of regions alone may not help if the regions are not all roughly the same size. If a region server ends up hosting more regions (or more data) than the others, requests to that server experience higher latency, and batch (map-reduce) jobs take longer to complete due to parallel skew.

At $work we graph these data hourly, and here’s how we do it.

First, we run the following JRuby script from cron. [Note: I’ve been advised (thanks ntelford!) that HServerInfo is gone in newer releases and you now need to get HServerLoad via ClusterStatus.getLoad(server_name).]

# This ruby script dumps a text file of region sizes and the servers
# they are on, for determining balance/split effectiveness.
#
# Usage: hbase org.jruby.Main region_hist.rb
#
include Java
import org.apache.hadoop.hbase.ClusterStatus
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.HServerInfo
import org.apache.hadoop.hbase.HServerLoad
import org.apache.hadoop.hbase.HTableDescriptor
import org.apache.hadoop.hbase.client.HBaseAdmin

def main()
    conf = HBaseConfiguration.new()
    client = HBaseAdmin.new(conf)

    status = client.clusterStatus
    status.serverInfo.each do |server|
        server_name = server.serverName
        printed_server = false

        load = server.load
        rload = load.regionsLoad
        rload.each do |region|
            region_name = region.nameAsString
            size = region.storefileSizeMB
            puts "#{server_name}t#{region_name}t#{size}"
            printed_server = true
        end
        if !printed_server then
            puts "#{server_name}tNONEt0"
        end
    end
end

main()

The script generates a flat file with server name, region name, and region
store file size fields:

c1.example.com,60020,1333483861849  .META.,,1   2
c2.example.com,60020,1333484982245  edges,175192748,176293002,1331824785017.fc03e947e571dfbcf65aa16dfd073804    1723
...

We then process this through a pile of Perl (I’ll spare the details) to generate several other data files. First, there’s a flat file with a sum of region sizes and region count per table:

server  -ROOT-  -ROOT-  .META.  .META.  edges   edges   ids ids verts   verts   userdata    userdata    types   types   stats   stats   topics  topics  topics_meta topics_meta maps    maps
c1  0   0   0   0   51041   41  0   0   27198   12  0   0   0   0   585 2   0   0   0   0   0   0
c2  0   0   0   0   49260   40  3501    1   20090   10  0   0   0   0   772 3   0   0   0   0   0   0
...
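
The Perl itself is not very interesting; a rough python equivalent of that first aggregation step (hypothetical, reading the tab-separated dump on stdin and emitting size and count columns per table) would be something like:

#!/usr/bin/python
# Rough, hypothetical equivalent of the Perl aggregation step: sum region
# store-file sizes and count regions per (server, table).
import sys
from collections import defaultdict

sizes = defaultdict(int)    # (server, table) -> total store size in MB
counts = defaultdict(int)   # (server, table) -> region count
servers, tables = set(), set()

for line in sys.stdin:
    server, region, size = line.rstrip("\n").split("\t")
    server = server.split(".")[0]       # c1.example.com,60020,... -> c1
    table = region.split(",")[0]        # edges,175192748,...      -> edges
    servers.add(server)
    if table == "NONE":
        continue                        # placeholder row for an empty server
    tables.add(table)
    sizes[(server, table)] += int(size)
    counts[(server, table)] += 1

tables = sorted(tables)
print "server\t" + "\t".join("%s\t%s" % (t, t) for t in tables)
for s in sorted(servers):
    row = [s]
    for t in tables:
        row += [str(sizes[(s, t)]), str(counts[(s, t)])]
    print "\t".join(row)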

Next, for each table, we generate a file showing the total size of each region:

1   230
2   510
3   1200

The first column is just a line number. The region sizes are sorted to make the final chart easier to read.

From there we use gnuplot to generate a histogram of the regions by numbers and by size, and then a per-table chart of the region size distribution. The gnuplot file looks like this:

set terminal png
set key invert reverse Left outside
set key autotitle columnheader
set boxwidth 0.75
set xtics nomirror rotate by -45 font ",8"
set key noenhanced
set yrange[0:*]
set style histogram rowstacked gap 1 title offset 2, 0.25
set style data histogram
set style fill solid border -1
set output 'hbase-fig1.png'
plot 'load.dat' using 3:xtic(1), for [i=2:11] '' using 2*i+1

set output 'hbase-fig2.png'
plot 'load.dat' using 2:xtic(1), for [i=2:11] '' using 2*i

set ylabel "Store Size (MB)"
set xlabel "Store"
unset xtics
set output 'hbase-fig3.png'
plot 'splits_edges.dat' using 2:xtic(1) title 'edges'

Here’s the end result (I changed the server and table names but this is otherwise real data):

In the above figure, we can see there’s a good balance of the number of regions across the region servers. We can also easily see which servers are hosting which regions, such as the important, but small, -ROOT- and .META. tables. So far, so good.

In this image, we see that the total size is not very well balanced: server c13 has a lot more data than the others. Taken together, these graphs indicate that our regions are not all the same size. The next image shows this more dramatically.

Here we see that around 60% of our regions for this table are smaller than 1 Gig, and the remaining 40% are split between 1-2G and 2-4G sizes. We would rather see a baseline at 2G (half the max region size), and the midpoint around 3G assuming evenly distributed splits. In our case, we had increased the region size of our largest table late in the game, so there are a ton of small regions here that we should try to merge.

Seeing the regions at a glance has been a useful tool. In one case, we got a factor of 8 speedup in a map-reduce job by re-splitting and manually moving regions to ensure that all the regions were evenly distributed across the cluster — the difference between running a job once a week vs. running it once a day.

Orders of magnitude

I had a Hadoop map-reduce job that kept timing out, which led to this interesting discovery:

$ time ./json-parser-test.py

real	0m0.205s
user	0m0.152s
sys	0m0.032s

$ time ./json-parser-test-no-speedups.py

real	0m2.069s
user	0m2.044s
sys	0m0.024s

$ time jython ./json-parser-test-no-speedups.py

real	79m59.785s
user	80m23.709s
sys	0m14.441s

Moral: use Java-based JSON libraries if you have to use Jython and JSON. Also, Java sucks.
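
By way of illustration, leaning on a Java JSON parser from Jython can be as simple as something like this (hypothetical snippet; it assumes, say, the org.json jar is on the classpath):

# Hypothetical Jython snippet: parse JSON with a Java library (org.json,
# assumed to be on the classpath) instead of a pure-Python parser.
from org.json import JSONObject

obj = JSONObject('{"key": 42, "name": "example"}')
print obj.getInt("key")       # -> 42
print obj.getString("name")   # -> example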

Parsing HRegionInfo in Python

I’ve been doing a fair amount of HBase work lately at $work, not least of which is pybase, a python module that encapsulates Thrift and puts it under an API that looks more or less like the Cassandra wrapper pycassa (which we also use).

When running an HBase cluster, one must very quickly learn the stack from top to bottom and be ready to fix the metadata when catastrophe strikes. Most of the necessary information about HBase regions is stored in the .META. table; unfortunately some of the values therein are serialized HBase Writables. One usually uses JRuby and directly loads Java classes to deal with the deserialization, but we’re a Python shop and doing it all over thrift would be ideal.

Thus, here’s a quick module to parse out HRegionInfo along with a few generic helpers for Writables. I haven’t decided yet whether this kind of thing belongs in pybase.

I’m curious whether there is an idiomatic way to do advancing pointer type operations in python without returning an index everywhere. Perhaps converting an array to a file-like object?

#!/usr/bin/python
import struct

def vint_size(byte):
    # total size in bytes of a Hadoop vint, given its first (signed) byte
    if byte >= -112:
        return 1

    if byte < -120:
        return -119 - byte

    return -111 - byte

def vint_neg(byte):
    # true if the first byte marks a negative vint
    return byte < -120 or -112 <= byte < 0

def read_byte(data, ofs):
    # read a signed byte, matching Java's byte type for the vint logic above
    return (struct.unpack_from(">b", data, ofs)[0], ofs + 1)

def read_long(data, ofs):
    val = struct.unpack_from(">q", data, offset=ofs)[0]
    return (val, ofs + 8)

def read_vint(data, ofs):
    firstbyte, ofs = read_byte(data, ofs)

    sz = vint_size(firstbyte)
    if sz == 1:
        return (firstbyte, ofs)

    # accumulate the remaining sz - 1 bytes, most significant first
    val = 0
    for i in xrange(0, sz - 1):
        (nextb, ofs) = read_byte(data, ofs)
        val = (val << 8) | (nextb & 0xff)

    if vint_neg(firstbyte):
        val = ~val

    return (val, ofs)

def read_bool(data, ofs):
    byte, ofs = read_byte(data, ofs)
    return (byte != 0, ofs)

def read_array(data, ofs):
    sz, ofs = read_vint(data, ofs)
    val = data[ofs:ofs+sz]
    ofs += sz
    return (val, ofs)

def parse_regioninfo(data, ofs):
    end_key, ofs = read_array(data, ofs)
    offline, ofs = read_bool(data, ofs)
    region_id, ofs = read_long(data, ofs)
    region_name, ofs = read_array(data, ofs)
    split, ofs = read_bool(data, ofs)
    start_key, ofs = read_array(data, ofs)
    # tabledesc: not about to parse this
    # hashcode: int

    result = {
        'end_key' : end_key,
        'offline' : offline,
        'region_id' : region_id,
        'region_name' : region_name,
        'split' : split,
        'start_key' : start_key,
    }
    return result
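
As for the advancing-pointer question above, one option is to wrap the value in a file-like object so that each helper just consumes bytes instead of threading an offset around; an illustrative sketch of a couple of the readers in that style:

# Sketch of the file-like-object alternative: wrap the cell value in StringIO
# and let each helper consume bytes from it.
from StringIO import StringIO
import struct

def read_byte(buf):
    # signed, like Java's byte
    return struct.unpack(">b", buf.read(1))[0]

def read_long(buf):
    return struct.unpack(">q", buf.read(8))[0]

# usage: buf = StringIO(value); region_id = read_long(buf); ...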