Watts up

One of my goals with this new computer is to be more aggressive about power saving: keeping it in suspend more often, using wake-on-lan for external access, etc. To that end, I dusted off the old kill-a-watt and took some baseline measurements:

  • Off, but plugged in: 2W
  • Suspend: 2W
  • On, idle: 48W (old machine: 100W!)
  • Kernel build: 200W (old machine: 150W, but also took 15x longer)
  • ML training with GPU at 100%: 400W

So long as I don’t run ML training 24/7, I’m already going to save a lot of energy with this build.
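
For scale, here’s the back-of-envelope math on the idle savings alone (just arithmetic on the numbers above, with Python as the calculator):

~~~~~~~~~~
# Idle draw dropped from 100W to 48W. Assuming the box sits idle
# essentially all day, the annual savings work out to:
old_idle_w, new_idle_w = 100, 48
hours_per_year = 24 * 365
saved_kwh = (old_idle_w - new_idle_w) * hours_per_year / 1000
print(f"~{saved_kwh:.0f} kWh/year")  # ~456 kWh/year
~~~~~~~~~~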

New build

Last year, I spent a few weeks dabbling in machine learning, which remains an interesting area to explore, though not directly related to my day-to-day work. Although the economics generally work in favor of doing ML in the cloud, there’s something to be said for having all of your code and data local and not having to worry about shutting down virtual hosts all the time. My 10+ year old PC just doesn’t cut it for ML tasks, and so I built a new one.

The main requirements for me are lots of cores (for kernel builds) and a hefty GPU or four (for ML training). For more than two GPUs, you’re looking at AMD Threadrippers; for exactly two, you can go with normal AMD or Intel processors. The Threadrippers cost about $500 more once you factor in the motherboard. I decided that the chances of my using more than two GPUs (or even more than one) were pretty darn slim and not worth the premium.

In the end I settled on a 12-core Ryzen 9 3900X with an RTX 2070 GPU, coming in around $1800 USD with everything. Unfortunately, in this arena everything is marketed to gamers, so I have all kinds of unasked-for bling, from militaristic motherboard logos to RGB LEDs in the cooler. Anyway, it works.

Just to make up a couple of CPU benchmarks based on software I care about:

filling a 7x7 word square (single core performance)
~~~~~~~~~~
old:
real	0m10.689s
user	0m10.534s
sys	0m0.105s

new:
real	0m2.274s
user	0m2.243s
sys	0m0.016s
~~~~~~~~~~

allmodconfig kernel build with -j $CORES_TIMES_TWO (multicore performance)
~~~~~~~~~~
old:
real	165m11.219s
user	455m42.557s
sys	135m37.557s

new:
real	9m31.778s
user	193m31.477s
sys	23m19.117s
~~~~~~~~~~

This is all with stock clock settings and so on. I haven’t tried ML training yet, but the speedup there would be +inf, considering it didn’t work at all on my old box.

Virtual doorbell

We had some cameras installed at our house last year, partly for security, but mainly so we could know who is at the door before answering it in our PJs. Unfortunately, the software that comes with the camera DVR is pretty clunky: it takes way too long to bring up the feed when the doorbell rings, so I often don’t bother.

Luckily, the DVR exposes RTSP streams that you can capture and play back with your favorite mpeg player. And I had just learned how to build a pretty good image classifier, a skill in need of a practical application.

A ridiculously good-looking person is at the door

Thus, I built an app to tell whether someone is at the door, before they ring the bell. I labeled some 4000 historical images as person or non-person, trained a CNN, and made a quick Python app to run inference on the live feed. When a person is in range of the door camera, it tells you so (the lasers come later).
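
The app itself isn’t much code. Here’s a minimal sketch of the inference loop, assuming a fastai-v1-style exported model and OpenCV for the RTSP capture (the stream URL, model filename, and notify() are placeholders, not my actual setup):

~~~~~~~~~~
# Sketch: watch the DVR's RTSP feed and flag frames containing a person.
# Stream URL, model file, and notify() below are made up.
import time
import numpy as np
import cv2
from fastai.vision import load_learner, Image, pil2tensor

learn = load_learner('.', 'door-cnn.pkl')            # exported person/non-person CNN
cap = cv2.VideoCapture('rtsp://dvr.local:554/door')  # DVR door-camera stream

def notify(frame):
    cv2.imwrite('/tmp/visitor.jpg', frame)           # stand-in for a real alert

while True:
    ok, frame = cap.read()
    if not ok:
        time.sleep(1)
        continue
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    img = Image(pil2tensor(rgb, np.float32).div_(255))
    pred, _, _ = learn.predict(img)
    if str(pred) == 'person':
        notify(frame)
    time.sleep(0.5)  # a couple of frames per second is plenty
~~~~~~~~~~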

Doorbell MVP

Not bad for having two whole weeks of deep learning under my belt. The interface could stand to be much better, of course. A little web page that sends a browser notification and link to the image or live feed would be the obvious next step. Perhaps the lasers are after that.

I know this is something that comes out of the box with commercial offerings such as Ring, but at least my images aren’t being streamed to the local police.

In which I trained a neural net

The entire sum of my machine learning experience thus far is a couple of courses in grad school, in which I wrote a terrible handwriting recognizer and various half-baked natural language parsers. As luck would have it, I was just a couple of years too early for the Deep Learning revolution — at the time, support vector machines were all the rage — so I’ve been watching the advancements of the last few years with equal measures of idle interest and bewilderment. Thus, when I recently stumbled across the fast.ai MOOC, I couldn’t resist following along.

I have to say I really enjoy the approach of “use the tools first, then learn the theory.” In the two days since I started the course, I’ve already built a couple of classifiers and gotten very good results, much more easily than with my handwriting recognizer of yore.
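
To give a sense of just how little code is involved, the lesson-one style of classifier is only a handful of lines (fastai v1 API; the folder layout here is an assumption, with one subdirectory of images per label):

~~~~~~~~~~
# Train an image classifier on a directory of labeled images.
# 'data/babies' is a made-up path; substitute your own folders.
from fastai.vision import (ImageDataBunch, cnn_learner, models,
                           accuracy, get_transforms)

data = ImageDataBunch.from_folder('data/babies', train='.', valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224)
learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)   # a few epochs of transfer learning
learn.export()           # writes export.pkl for later inference
~~~~~~~~~~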

My first model was trained on 450 baby pictures of my children and achieved 98% accuracy. Surprisingly, the mistakes did not confirm our prior that Alex and Sam look the most similar as babies — instead it tended to confuse them both with Ian. The CNN predicts that I am 80% Alex. Can’t argue with that math.

The second classifier was trained on Pokemon, Transformers, and Jaegers (giant robots from Pacific Rim). This gets about 90% accuracy; not surprisingly, it has a hard time telling apart the robot classes, but has no trouble picking out the Pokemons.

I’m still looking for a practical application, but all in all, it’s a fun use for a GPU.

New Directions in Commandline Editing

Update: I finally isolated this to GNOME’s multiple input method support. Shift+Space is bound by default to switch input sources, and the strange behavior below happens in whatever input method comes next in the cycle. Turn that stuff off in the keyboard shortcuts! The original post describing the symptoms follows.

Dear lazyweb,

Where in {bash, readline, gnome terminal, wayland, kernel input} does the following brain damage, newly landed in Debian testing, originate?

  • pressing ctrl-u no longer erases to the beginning of the line, but instead starts unicode entry with a U+ prompt, much like shift-ctrl-u used to, except the U is upper-cased now
  • hitting slash twice rapidly underlines the first slash (as if to indicate some kind of special entry prompt, though one I’m unfamiliar with and can’t get to do anything useful anyhow), then just eats the second slash and the underline goes away
  • characters typed quickly (mostly spaces) get dropped

This stuff is nearly impossible to google for, and it is killing my touch typing. Email me if you know, and I’ll update this accordingly so that maybe Google will know next time.

More filler

I noticed some HTML5 crossword construction apps have sprung up over the last year, so I no longer have first-mover status there. Also, their UIs are generally better and I dislike UI work, so it seemed reasonable to join forces. Thus, I sent some PRs around.

It was surprising to me that the general SAT solver used by Phil, written in C and compiled to asm.js, was so much slower than a pure-JavaScript, purpose-built crossword solver (mine). I had assumed the SAT solver’s general search optimizations would outweigh whatever advantage mine gains from being specialized to crosswords. So, yay, my code is not that bad?
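
For the curious, the purpose-built approach is nothing exotic: backtrack over the slots, trying only words compatible with the letters already placed by crossing entries. A toy sketch of the idea in Python (the real filler is JavaScript and adds scored word lists and smarter slot ordering):

~~~~~~~~~~
# Toy crossword filler: assign a word to each slot by backtracking.
# A slot is a list of (row, col) cells; crossing slots share cells.

def fill(slots, words, cells=None, i=0):
    cells = cells or {}
    if i == len(slots):
        return cells                      # every slot got a word
    slot = slots[i]
    pattern = [cells.get(c) for c in slot]
    for word in words:
        if len(word) != len(slot):
            continue
        if any(p is not None and p != ch for p, ch in zip(pattern, word)):
            continue                      # clashes with a crossing letter
        trial = dict(cells)
        trial.update(zip(slot, word))
        result = fill(slots, words, trial, i + 1)
        if result is not None:
            return result
    return None                           # dead end; backtrack

# 2x2 word square: two across slots and two down slots
slots = [[(0, 0), (0, 1)], [(1, 0), (1, 1)],   # across
         [(0, 0), (1, 0)], [(0, 1), (1, 1)]]   # down
print(fill(slots, ['ad', 'do', 'at', 'to']))
~~~~~~~~~~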

In these apps the filler runs as a web worker, so even when slow it isn’t too noticeable (except for hogging a core).

Anyway, you can try out my filler today with Kevin (filler code is here, which is exactly the code in my own app except with some janky whitespace, because I stripped out all flow annotations).

Making copies

One of my early goals this year has been to revamp the backup regime at the old homestead. Previously, I relied on rotating external drives connected to my main desktop and on putting most content on a samba share that also got backed up. But it was all a bit ad hoc, and things not on the share only got backed up sporadically. I’d rather not use a cloud backup service because reasons, so I bought a NAS, and now it looks like this:

  • Daily:
    • download cloud assets (google photo)
    • backup all disks to NAS (borg backup on Linux, Windows backup; the borg side is sketched below), keeping incrementals at 7 daily / 4 weekly / 6 monthly
    • backup NAS to external drive (rsync)
  • Weekly:
    • borg check the latest backup
    • swap external disk with one in fire safe
  • Monthly:
    • swap external disk with off-site disk
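
For concreteness, the daily disk-to-NAS step might look roughly like this on the Linux side (a sketch only; the repo path and source directories are placeholders, and the prune windows match the retention above):

~~~~~~~~~~
# Nightly borg job: one archive per day, pruned to 7 daily /
# 4 weekly / 6 monthly. Repo and source paths are made up.
import subprocess

REPO = 'nas:/volume1/backups/desktop.borg'
SOURCES = ['/home', '/etc']

subprocess.run(['borg', 'create', '--stats',
                f'{REPO}::{{hostname}}-{{now}}', *SOURCES], check=True)
subprocess.run(['borg', 'prune',
                '--keep-daily', '7', '--keep-weekly', '4',
                '--keep-monthly', '6', REPO], check=True)
~~~~~~~~~~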

I think this should work well enough until my data outgrows the individual external disks; then I’ll have to rethink things. Too far, or not too far enough?

Router redux

I had just a little bit of downtime over the last few weeks while Angeline and I reacquainted ourselves with how to take care of an infant. It’s like riding a bicycle: there are tons of things you’ve forgotten since last time.

One of the long-on-my-todo-list items finally got completed: I upgraded all of my wifi routers. I have five: an old dual-band 11n router and four TP-Link 11ac units, all of which were running some oldish build of OpenWRT.

I decided the 11n router’s time had come for the trash heap, so I put the latest stable LEDE build on one of the TP-Links and swapped it out.

The other three are just access points, not routers: all the ports on each unit are bridged together and connected to the main router through a wired switch. I also have a mesh wifi interface up on each unit so that I can place a unit anywhere regardless of wired connectivity (though in practice I have wired drops everywhere, so I don’t really use this). For these, I build from source with just the required bits. I added a serial port to one of the units so I can test builds there before rolling them out to the other two.

All in all, it was pretty painless, since the LEDE build is more or less the same as OpenWRT. I did go through (LEDE) recovery once and hit this fun issue:

~~~~~~~~~~
root@(none):/tmp# sysupgrade -n lede-ar71xx-generic-archer-c7-v2-squashfs-sysupgrade.bin
Image metadata not found
killall: watchdog: no process killed
Commencing upgrade. All shell sessions will be closed now.
Failed to connect to ubus
root@(none):/tmp#
~~~~~~~~~~

…because sysupgrade takes different code paths in failsafe mode versus normal mode, and for some reason $FAILSAFE is not always set. The workaround:

~~~~~~~~~~
root@(none):/tmp# export FAILSAFE=1
root@(none):/tmp# sysupgrade -n lede-ar71xx-generic-archer-c7-v2-squashfs-sysupgrade.bin
Image metadata not found
killall: watchdog: no process killed
Commencing upgrade. All shell sessions will be closed now.
root@(none):/tmp# Connection to 192.168.1.1 closed by remote host.
~~~~~~~~~~

React reaction

An embarrassing admission to make: I wrote something in javascript.[1]

Actually, I wrote it in React, because I still think of javascript as that awful language that works differently in every browser, and so, you know, to stay relevant, I decided to learn the framework that web developers everywhere have already cast into yesterday’s wastebin for the new shiny. This, pretty much.[2]

Anyway, I replaced the third-party crossword app on my website with my own NIH version and made it somewhat responsive [3] so that it works on my phone. Also there are two new-ish puzzles since the last time I posted, dated 2016-11-25 and 2016-12-30. Each of them has its own set of problems, but I think I am at least learning where I need to improve.

My neophyte impression is that using React (and ES6) is much nicer than the terrible language they compile down to. On the other hand, the pattern of moving state up the object hierarchy, away from an object’s view, feels backwards and not very OO-like. Perhaps if I also used Flux I might become enlightened on this point, but that is for another day.

[1] Here’s a nickel, kid. Buy yourself a real language.
[2] The day that my relevance depends on knowing the latest JS framework is the day that I will take up a completely different career.
[3] Anything worth doing is worth doing somewhat.