35 seconds

After a bit of a break, I am back to doing the NYTXW every day, except when I forget. Out of laziness, I am just using their webapp instead of my own, thereby saving at least two clicks. As a result, I get their fancy stats deal for free.

As a completely pointless milestone, I’d like to break the five-minute barrier sometime this year. I got pretty close with last Monday’s puz, and it didn’t feel like it was possible to type any faster. But I understand that in the competitive world, times are typically half that. Five minutes works out to somewhere around four seconds per answer, which seems doable.
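Checking that arithmetic, assuming a typical Monday grid near the NYT’s 78-answer word-count cap:

```python
# Rough pace math for a five-minute solve.
# Assumes ~78 answers, the word-count cap for a standard NYT 15x15.
ANSWERS = 78
TARGET_SECONDS = 5 * 60

seconds_per_answer = TARGET_SECONDS / ANSWERS
print(f"{seconds_per_answer:.1f} seconds per answer")  # about 3.8
```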

Finis

Dear reader, I am writing this from my bunker in lockdown. Here we are on the final day of 2020, and all I can say is, whew, we made it.

The front page of this blog goes back a year and a half, which doesn’t say a lot about my level of commitment to this here writing thing of late. I shall try to make up for that by summarizing some of the projects I actually completed in 2020, but which I was too lazy to document contemporaneously, so they might as well not have happened at all. Well, here’s your bunch of write-ups all squashed into one blog entry. Happy now?

I celebrated my first anniversary at Amazon in October, a year which saw us ship a huge cross-team project and deliver an outage-free Christmas. If you are one of the millions who interacted with Alexa this year, my many colleagues and I helped make that happen. If Alexa responded with something completely nonsensical or useless, well, then, that was probably some other team’s fault.

I’m happy to still be working in software, 22 years since I took my first full-time job. Back then, I took one of my first paychecks to the local hi-fi stereo vendor and purchased a Paradigm home theater setup. My roommate and I didn’t have any furniture to speak of, but who needs that when you can watch VHS tapes in 5.1 surround on a giant tube TV! These days, the surrounds and center channel speakers are gathering dust, but I still use the bookshelf speakers and sub. Recently, I noticed these poor old Atoms were rattling whenever the bass kicked in. YouTubers said that this is common and you need to replace your foam surrounds and you can buy a kit and do it yourself and did you know that you could just buy a new pair of monitors for as little as $5000 and also get some gold-plated optical cables while you are at it for the warmest possible digital sound. So yes, I did buy such a kit and I did do it myself.

Well, this was an epically bad glue job, but there are no longer any clicks while listening to Technotronic’s _Pump Up The Jam_, so we are good for another 20 years or so.

Dining: for those that don’t know, in 2020, we were hit with a global SARS-CoV-2 pandemic. Never a family to eat out much anyway, we cut out the few remaining visits to eateries in the interest of not dying. The odd craving did strike though, so I fashioned a cheesecake, bagels, fried chicken, lattes, and doughnuts _with my bare hands_!

As in previous years, I grew a garden over the summer. The new-to-me crops: cucumbers and potatoes. Of the former, I had a lot: I ended up with something like ten pints of pickles even after having cukes in salads every day. I had only a few potatoes, but was surprised to find that the home grown varieties had a much different, nuttier taste than supermarket spuds. Both will probably make an appearance next year but I’ll need to balance out the yields. Also ended up with quite a few jalapenos which turned into a dozen jars of pepper jelly, and the usual amount of tomatoes (sauces, paste, pizza toppers, and so on). All this despite a family of rabbits literally living in my raised bed.

Anyway, this is all I can remember doing in 2020, or at least those things I have pictures of. Here’s hoping we get some vaccines in 2021 and we can go outside again. Wake me when that happens.

In which I faked a person

Having successfully shipped a project at $dayjob after some extended crunch time, I took this week off to recharge. This naturally gave me the opportunity to, um, write more code. In particular, I worked a bit on my crossword constructor while also constructing a crossword. I’m a bit rusty in this area, so while I was able to fill a puzzle with a reasonable theme, I’m probably going to end up redoing the fill before trying to publish that one because some areas are pretty yuck.

Which brings me to computing project number two: a neural net that tries to grade crosswords. Now, I can and have done this using some composite scores of the entries, based on some kind of word list rankings, but for this go-round I thought it would be fun to emulate the semi-cantankerous NYT crossword critic, Rex Parker. Parker (his nom de plume) is infamous for picking apart the puzzle every day and identifying its weak spots. Some time ago, Dave Murchie set up a website, Did Rex Parker Like The Puzzle, which, as the URL suggests, gives the short-on-time enthusiast the Reader’s Digest version. What if we had, say, wouldrexparkerlikethepuzzle.com: would this level of precognition inevitably lead us into an apocalyptic nightmare, even worse than the one we currently inhabit? Let us throw caution to the wind like so many Jurassic Park scientists and see what happens.

I didn’t do anything so fancy as to generate prose with GPT-3; instead I just trained a classifier using images of the puzzles themselves. Maybe, thought I, a person (and therefore a NN) can tell whether the puzzle is good or bad just by looking at the grid. Let’s assume Rex is consistent on what he likes — if so we could use simple image recognition to tell whether something is Rex-worthy or not. Thanks to Murchie’s work, I already had labels for 4 years of puzzles, so I downloaded all of those puzzles and trained an NN on them, as one does.

I tried a couple of options for the grid images. In one experiment, I used images derived from the filled grids, letters and all; in another, I considered only the empty grid shape itself. It didn’t make much difference either way, which suggests the language aspect of the puzzle is not really useful or adequately captured by the model.
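The training setup can be sketched roughly like so, assuming a fastai-v2-style pipeline and a hypothetical filename convention (e.g. `2020-10-31_liked.png` vs. `2020-10-31_disliked.png`) for the labels scraped from Did Rex Parker Like The Puzzle; the path and hyperparameters are made up:

```python
# Sketch of the grid classifier. The filename convention and "grids/" path
# are assumptions for illustration, not the actual setup.

def label_for(filename: str) -> str:
    """Map a grid image's filename to its Rex verdict."""
    return "disliked" if "disliked" in filename else "liked"

def train(path: str = "grids/"):
    # Heavy imports kept local; run this with the labeled images in place.
    from fastai.vision.all import (
        ImageDataLoaders, Resize, accuracy, cnn_learner, get_image_files, resnet34)

    dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), lambda p: label_for(p.name),
        valid_pct=0.2, item_tfms=Resize(224))
    learn = cnn_learner(dls, resnet34, metrics=accuracy)
    learn.fine_tune(4)
    return learn
```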

How well did it work? Better than a coin flip, but not by a lot.

When trained with filled grids, it achieved an accuracy of 58.7%. When trained with just the grid shape, it achieved an accuracy of 61.4%.

Both models said he would like today’s (10-31-2020) puzzle, about which he was actually fairly ambivalent. My guess is the model is really keying in on the number of black squares as a proxy for it being a Friday or Saturday puz, the days he tends to like better than any other, which is why this one was ranked highly. Probably just predicting on the black-square count alone would have performed similarly.
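That trivial baseline would look something like this; the `#`-for-black grid encoding and the threshold are invented for illustration (in practice you’d fit the threshold on the training labels):

```python
# A made-up baseline: predict "liked" whenever the grid is open (few black
# squares), since themeless Friday/Saturday grids tend to be wide open.

def count_black(grid: list[str]) -> int:
    """Count black squares in a grid given as rows of '.' (white) and '#' (black)."""
    return sum(row.count("#") for row in grid)

def predict_liked(grid: list[str], threshold: int = 34) -> bool:
    return count_black(grid) < threshold

# Tiny illustration on a 3-row fragment rather than a real 15x15 grid:
fragment = ["...#...", "..##...", "......."]
print(count_black(fragment))  # 3
```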

Socially distant

I haven’t posted at all since COVID-19 hit in this area, partly because work (from home) has been all-consuming and a welcome distraction from the outside world, and partly because, during my non-work hours, the old brain has loaded up an endless patter of anxiety:

Was that coughing gentleman really two meters away or maybe one and a half? Is it drafty in here, or do I have the chills? Is the shortness of breath and periodic chest pain a sign of COVID-19, or just your run-of-the-mill heart attack? Should I write another blog post and if so, will it be my last one, and if it is the last, would that really be the blog post I want to end on?

And so on.

But I have decided that in some future generation there will be a Ken Burns style documentary on this whole thing and the future Ken Burns will need contemporaneous writing for his voice-overs. And who am I to deny future Ken Burns that material?

So know, dear reader, that so far in Month Six of the Apocalypse, we are all doing well. We have our health, food, shelter, and depressions in the driveway where our cars have sat motionless for half a year.

One silver lining: it turns out my hobbies were fairly pandemic-aligned: I already had sourdough starter going, a garden planned, puzzles for months, and a sewing machine at the ready. Since then, I also learned how to cut my own hair, make my own espresso, and service my own furnace. The children have traded meat-space friends for 24/7 screen time and, from their point of view, this seems to have been an auspicious swap. If they are any indication, we’ll all be just fine when the singularity hits. Which will probably be next year at this rate.

Watts up

One of my goals with this new computer is to be more aggressive about power saving: keeping it in suspend more often, using wake-on-lan for external access, etc. To that end, I dusted off the old kill-a-watt and took some baseline measurements:

Off, but plugged in: 2W
Suspend: 2W
On, idle: 48W (old machine: 100!)
Kernel build: 200W (old machine: 150, but also took 15x longer)
ML training with GPU at 100%: 400W

So long as I don’t run ML training 24/7, I am already going to save a lot of energy with this build.
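Back-of-the-envelope, the idle-power drop alone adds up. A quick sketch, assuming the machine mostly sits idle and a made-up electricity price of $0.12/kWh:

```python
# Rough annual savings from the idle-power drop alone (100 W -> 48 W),
# assuming the machine sits idle around the clock.
OLD_IDLE_W, NEW_IDLE_W = 100, 48
HOURS_PER_YEAR = 24 * 365

saved_kwh = (OLD_IDLE_W - NEW_IDLE_W) * HOURS_PER_YEAR / 1000
print(f"{saved_kwh:.0f} kWh/year")  # about 456
print(f"${saved_kwh * 0.12:.0f}/year at $0.12/kWh")
```

Suspending instead of idling (48 W down to 2 W) obviously saves even more, which is the whole point of the wake-on-lan setup.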

New build

Last year, I spent a few weeks dabbling in machine learning, which remains an interesting area to explore though not directly related to my day-to-day work. Although the economics generally work in favor of doing ML in the cloud, there’s something to be said for having all of your code and data local and not having to worry about shutting down virtual hosts all the time. My 10+ year old PC just doesn’t cut it for ML tasks, and so I made a new one.

The main requirements for me are lots of cores (for kernel builds) and a hefty GPU or four (for ML training). For more than two GPUs, you’re looking at AMD Threadrippers; for exactly two you can go with normal AMD or Intel processors. The Threadrippers cost about $500 more factoring in the motherboard. I decided that chances of me using more than two GPUs (or even more than one) were pretty darn slim and not worth the premium.

In the end I settled on a 12-core Ryzen 9 3900X with RTX 2070 GPU coming in around $1800 USD with everything. Unfortunately, in this arena everything is marketed to gamers, so I have all kinds of unasked-for bling from militaristic motherboard logos to RGB LEDs in the cooler. Anyway, it works.

Just to make up a couple of CPU benchmarks based on software I care about:

filling a 7x7 word square (single core performance)
~~~~~~~~~~
old:
real	0m10.689s
user	0m10.534s
sys	0m0.105s

new:
real	0m2.274s
user	0m2.243s
sys	0m0.016s

allmodconfig kernel build
with -j $CORES_TIMES_TWO (multicore performance)
~~~~~~~~~~
old:
real	165m11.219s
user	455m42.557s
sys	135m37.557s

new:
real	9m31.778s
user	193m31.477s
sys	23m19.117s

This is all with stock clock settings and so on. I haven’t tried ML training yet, but the speedup there would be +inf considering it didn’t work at all on my old box.
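For the record, the speedups implied by the `real` times above:

```python
# Old-vs-new speedups from the wall-clock times in the benchmarks above.
def to_seconds(minutes: int, seconds: float) -> float:
    return minutes * 60 + seconds

word_square = to_seconds(0, 10.689) / to_seconds(0, 2.274)
kernel_build = to_seconds(165, 11.219) / to_seconds(9, 31.778)
print(f"word square:  {word_square:.1f}x")   # ~4.7x
print(f"kernel build: {kernel_build:.1f}x")  # ~17.3x
```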

Virtual doorbell

We had some cameras installed at our house last year, partly for security, but mainly so we could know who is at the door before answering it in our PJs. Unfortunately, the software that comes with the camera DVR is pretty clunky, so it takes way too long to bring up the feed when the doorbell rings and I often don’t bother.

Luckily, the DVR exposes RTSP streams that you can capture and play back with your favorite MPEG player. And I just learned how to build a pretty good image classifier that needed a practical application.

A ridiculously good-looking person is at the door

Thus, I built an app to tell whether someone is at the door, before they ring the bell. I labeled some 4000 historical images as person or non-person, trained a CNN, and made a quick python app to run inference on the live feed. When a person is in range of the door camera, it aims the lasers and tells you so.
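The live-feed loop can be sketched like this, assuming OpenCV for the RTSP capture; the stream URL is hypothetical and `predict` stands in for the trained CNN wrapped as a frame-to-probability function:

```python
# Sketch of the doorbell watcher. The RTSP URL and predict() callable are
# placeholders, not the actual DVR or model.

def should_alert(prob: float, threshold: float = 0.8) -> bool:
    """Decide whether the CNN's person-probability warrants an alert."""
    return prob > threshold

def watch(rtsp_url: str, predict) -> None:
    """Poll the DVR's RTSP feed and alert when the model sees a person."""
    import cv2  # OpenCV reads RTSP streams via VideoCapture

    cap = cv2.VideoCapture(rtsp_url)
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        if should_alert(predict(frame)):
            print("Someone is at the door!")  # or: aim the lasers

# e.g. watch("rtsp://dvr.local:554/door", model_predict)  <- URL is made up
```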

Doorbell MVP

Not bad for having two whole weeks of deep learning under my belt. The interface could stand to be much better, of course. A little web page that sends a browser notification and link to the image or live feed would be the obvious next step. Perhaps the lasers are after that.

I know this is something that comes out of the box with commercial offerings such as Ring, but at least my images aren’t being streamed to the local police.

Uprooting

Summer came late and is leaving early this year, so it’s about harvest time now. This year, I did some radishes, onions, salad greens, carrots, green beans, jalapenos, bell peppers, basil, and tomatoes. Most of the carrots didn’t grow long roots, but I got a few. The rabbits got to the peppers and green beans before I did, while leaving the lettuce mostly alone — I realized when I had a dozen full heads that we just don’t eat that much salad, so a lot of it bolted in the end. Now I know what 4-foot tall red lettuce looks like.

Not planted this time were tomatillos, but I had a bunch of vines grow from last year’s seeds anyway. I have been pulling up most of those while keeping one large bush for possible salsa verde in a month or so.

Anyhow, I got enough produce to make a few jars of red salsa (when supplemented with some store-bought locally grown tomatoes), and I didn’t work as hard at it this year as previously, so I’m calling it a win.

In a few weeks I’ll be pulling up all the remaining plants, and trying to get a cover crop started, which is something I didn’t know about until recently.

In other uprooting news: I previously wrote a post about ending my engagement at Facebook. I dropped that post, because the reasons aren’t actually very interesting and that bridge has plenty of water under it by now. But the outcome of all of that is that I’ll be starting at Amazon next week, in an actual, real live office. That last part is going to be a bit of a trial: after ten years of working from home, I’m not especially looking forward to commuting downtown, and it will certainly be a stress on the family to rearrange our schedules to accommodate that. But it will be nice to have non-virtual coworkers for a change, and I’m excited about tackling some of the challenges facing my new team.

In which I trained a neural net

The entire sum of my machine learning experience thus far is a couple of courses in grad school, in which I wrote a terrible handwriting recognizer and various half-baked natural language parsers. As luck would have it, I was just a couple of years too early for the Deep Learning revolution — at the time support vector machines were all the rage — so I’ve been watching the advancements of the last few years with equal measures of idle interest and bewilderment. Thus, when I recently stumbled across the fast.ai MOOC, I couldn’t resist following along.

I have to say I really enjoy the approach of “use the tools first, then learn the theory.” In the two days since I started the course, I already built a couple of classifiers and got very good results, much more easily than with my handwriting recognizer of yore.

My first model was trained on 450 baby pictures of my children and achieved 98% accuracy. Surprisingly, the mistakes did not confirm our priors, that Alex and Sam look most similar as babies — instead it tended to confuse them both with Ian. The CNN predicts that I am 80% Alex. Can’t argue with that math.

The second classifier was trained on Pokemon, Transformers, and Jaegers (giant robots from Pacific Rim). This gets about 90% accuracy; not surprisingly, it has a hard time telling apart the robot classes, but has no trouble picking out the Pokemons.

I’m still looking for a practical application, but all in all, it’s a fun use for a GPU.