My drive is now solid state

I always learn something new when doing some misguided thing such as “let’s copy our OS onto a new disk using tar like they did back in the day!” The impetus was a shiny new OCZ Vertex 3 to replace the spinning rust in my ancient Macbook. As a co-worker says, it’s like putting a gold steering wheel in a 1970 Pinto. My plan was roughly to:

  • Stuff the new drive in the desktop
  • Make a tar backup of the old laptop drive across the network to the desktop
  • Partition new drive, aligning to flash block sizes
  • Mount the new partition somewhere, and untar the backup
  • Chroot in the new directory, and do a grub-install

Simple. All of that went great, I thought.
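
In command form, the plan works out to something like the sketch below; the device names, host name, mount point, and tarball name are placeholders rather than what I actually typed, and the tar options assume a build that understands --xattrs:

# on the desktop: pull a tarball of the laptop's root over ssh
# (run as root on the laptop; assumes the whole OS lives on one filesystem)
ssh laptop 'tar --xattrs --numeric-owner --one-file-system -cpf - -C / .' > laptop-root.tar

# partition the SSD with the first partition starting at 1 MiB so it is
# aligned to the flash erase blocks, and remember the bootable flag
# (/dev/sdb and the msdos label are examples, not my actual layout)
parted /dev/sdb mklabel msdos
parted /dev/sdb mkpart primary ext4 1MiB 100%
parted /dev/sdb set 1 boot on
mkfs.ext4 /dev/sdb1

# unpack the backup and reinstall grub from a chroot
mount /dev/sdb1 /mnt/newroot
tar --xattrs --numeric-owner -xpf laptop-root.tar -C /mnt/newroot
for fs in dev proc sys; do mount --bind /$fs /mnt/newroot/$fs; done
chroot /mnt/newroot grub-install /dev/sdb
# (regenerating grub's config file varies by distro and grub version)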

The first hurdle when swapping the new drive in was a physical one: the laptop disk bay uses Torx, smaller ones than any adapters I have. This obstacle was cleared by creative use of needle-nosed pliers, and I was on to my first boot attempt, and subsequent failure.

Among the exciting new discoveries:

  • You really, really need to remember to mark /boot as bootable. Otherwise, the dumb Mac firmware will give you the blinking question mark despite all of your previous care.
  • If your users/groups don’t match between the two systems, you probably just borked up a bunch of uid/gids. Luckily there are the -nouser and -nogroup args to find (sketched after this list), and almost everything outside of /home is root:root.
  • Once again, Macs won’t boot a rescue USB flash drive unless it has EFI crud laying around in the root directory.
  • UUID-based fstab and grub.conf are not happy when you have an entirely new drive.
  • Debian tar doesn’t understand xattrs; that’s a RedHat feature (although I caught this one in time and compiled RedHat’s tar on the Debian system).
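
For the uid/gid and UUID items, the cleanup looks roughly like this (the mount point, username, and device are again placeholders):

# find files whose owner or group no longer maps to a name on the new system
find /mnt/newroot -xdev \( -nouser -o -nogroup \) -ls

# most of the damage is under /home, so reassigning a home directory
# covers it ('bob' stands in for the real account)
chown -R bob:bob /mnt/newroot/home/bob

# grab the new partition's UUID, then swap it into fstab and grub's config
blkid /dev/sdb1
vi /mnt/newroot/etc/fstab
vi /mnt/newroot/boot/grub/grub.conf   # or menu.lst / grub.cfg, depending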

On the whole though, it wasn’t too bad. Despite the mistakes, the whole exercise took maybe twice as long as installing from media and restoring my usual backup would have, and this way I don’t have to reinstall huge numbers of packages to get back to where I was. And so far the drive seems quite speedy, even with my sad 1.5 Gbps SATA interface. Plus it doesn’t hurt that the installation did a giant defrag of the whole OS.

Updated to version 6

I bought a new wireless router the other day, a D-Link DIR-825 dual-band 802.11a/b/g/n. This is a step up in several regards from my previous 2.4 GHz 11g Linksys, not least of which is the D-Link’s ability to run OpenWRT with ath9k. Installing OpenWRT with prebuilt binaries is dirt simple; it took all of about 5 minutes to set up.

One of my motivations was to do some IPv6 testing in advance of World IPv6 Day. Comcast doesn’t yet offer IPv6 addresses to the public at large, so I set up a Hurricane Electric 6in4 tunnel. This is quite easy to configure in OpenWRT.
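
For reference, the tunnel boils down to a handful of uci commands, roughly as below. This assumes the 6in4 protocol package is installed; the addresses are placeholders from the HE tunnel details page, and option names can differ between OpenWRT releases:

# define a 6in4 interface for the Hurricane Electric tunnel
uci set network.henet=interface
uci set network.henet.proto='6in4'
uci set network.henet.peeraddr='216.66.0.1'           # HE tunnel server (IPv4)
uci set network.henet.ip6addr='2001:470:aaaa::2/64'   # client IPv6 endpoint
uci commit network
/etc/init.d/network restart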

So now I can load ipv6.google.com in my browser. It’s approximately 2 better than the normal site.

log2 in bash

Turns out bash supports enough arithmetic operators to implement lg() just as you might in some real language:


function log2 {
    # integer ceil(log2(n)) for n >= 1: count how many right shifts
    # it takes to reduce n-1 to zero
    local x=0 y
    for (( y = $1 - 1; y > 0; y >>= 1 )) ; do
        (( x += 1 ))
    done
    echo $x
}

z=$(log2 64)


I had a somewhat valid reason to want to do this. The one thing that continually gets me when I try to do dumb things like this is when to use the raw variable name and when to use the dollar sign. Yes, the semantics are well-defined, but my perl habits run deep. Similarly, in ruby, all my variables are global.
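
The rule I keep having to relearn: inside an arithmetic context the dollar sign is optional, everywhere else it is required. A tiny illustration:

n=5
(( n += 1 ))        # arithmetic context: bare names work
echo "$n"           # ordinary expansion: needs the dollar sign
echo $(( n * 2 ))   # $(( )) also takes bare names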

Backup re-revisited

I’ve reconsidered using git as my backup choice du jour. The main problem (and feature) of using git was having a checked-out repo in my home directory. I found I was forever worried, given my admittedly horrid muscle-memory habit of doing ‘git reset --hard’ periodically inside source trees, that I’d accidentally do it from the wrong directory and lose any recent work. Luckily, I never did that.

Two obvious solutions: don’t use a checkout in the home directory, instead using rsync to the repo; or use a wrapper/custom command name for the git-as-backup program to avoid accidents. Well, I went with the third option: use rdiff-backup like a normal person. It’s packaged by Fedora and Debian, so I only needed a small tweak to my backup scripts to make that happen every night. And someone wrote a FUSE filesystem (archfs) to mount the backups as normal directories, so there’s no real loss of convenience under this scheme.
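
The nightly job amounts to a couple of rdiff-backup invocations, something like this sketch (the paths and host name are placeholders):

#!/bin/sh
# nightly home directory backup (hypothetical /etc/cron.daily/backup-home)
rdiff-backup /home/bob backuphost::/srv/backups/bob-home
# keep a year's worth of increments around
rdiff-backup --force --remove-older-than 1Y backuphost::/srv/backups/bob-home

# restoring a file as it looked three days ago:
# rdiff-backup -r 3D backuphost::/srv/backups/bob-home/some/file /tmp/file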

My RPM database is now corrupted. Just in time.

Smolt

The stats from smolt are pretty interesting if, like all stats, entirely useless. Some curiosities:

  • i686 still beats x86_64 by a ton
  • A (very) few people change their runlevel
  • Acer is high in the vendor list; I guess they are still killing it in the netbook market
  • People don’t configure their swap appropriately
  • SMP is now the norm, outside of embedded kit anyway
  • I want a 4+ GHz cpu
  • No one uses omfs 🙁 …I guess I need to submit a profile.
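
Fixing that last one should just be a matter of running the smolt client on the box with the omfs partition, if memory serves:

# send this machine's hardware profile to the smolt server
smoltSendProfile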

F10

So, I’ve been a user of Debian (and lately Ubuntu) since around 2001, with RedHat, Mandrake, and Slackware in use before then. Debian was like a revelation: ‘apt’ is how package management should be! I still have my server running Debian stable, but I thought I’d try putting Fedora 10 on my laptop this go-round to see how it compares to Ubuntu. All the marketing hype about Ubuntu being mere aggregators of others’ hard work had something to do with that as well. Besides, yum has been around for years, so surely it is as good as apt by now.

Here are my thoughts: I still find yum a little clunky for a few things; maybe that’s just my expertise in apt speaking. LVM was the first thing to go; removing it wasn’t hard to do from the graphical installer. The much-hailed boot graphics stuff only worked with vesafb for me, since they dropped the modesetting code for Intel from the kernel. I had to overhaul the installkernel script to properly update grub and not bother with an initrd, since I hate them. Finally, all configuration seems to be HAL-driven now, which just means putting more random undocumented crap into huge XML files in /etc to get your touchpad working. Lovely; I’m sure Ubuntu is busy adopting that mess. On the plus side, it’s a nice-looking GNOME setup with reasonable defaults. On the whole, Fedora 10 is a solid release, though it will still take some time to get it configured to my liking. Perhaps by then I’ll give openSUSE a spin.

Backups revisited

I spent most of last weekend doing home IT tasks. That involved upgrading my main desktop machine from a Pentium III to an Athlon XP. Welcome to 7 years ago! But most of the work was spent reorganizing my data and coming up with a better backup regime.

Now that hard drives are so cheap and we rent a storage space anyway, spending $1/GB-month for off-site network backup is just not worth it any more. Also, with my off-site backup I was only keeping a single full backup, which is not terribly useful if a few weeks elapse before you notice something is missing. So, I have been playing around with incremental backups using rsync and hard links, similar to the way Apple’s Time Machine supposedly works. Then I stumbled across ‘gibak,’ a set of shell scripts that use the git version control system as the backup tool.
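
The rsync-and-hard-links trick is just --link-dest pointed at the previous snapshot; a minimal sketch with made-up paths:

# each day's tree hard-links unchanged files against the previous snapshot
today=$(date +%F)
rsync -a --delete --link-dest=/backups/latest /home/bob/ "/backups/$today/"
ln -sfn "/backups/$today" /backups/latest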

In the end, I went with my own dozen-liner script to use git and metastore, with rsync/cifs to collect the stuff in windowsland for backup in separate repositories. A cron job does a daily commit and push from the checked-out repo in my home directory. So far, the result is pretty nice. If I screw something up, a ‘git reset’ gets me back to any earlier date. It also solves a minor annoyance with keeping files in sync across multiple machines: both can use a clone of the git repo and then syncing is as easy as a push from one and a pull to the other. I can rotate portable hard drives to the storage area to solve the ‘apartment burning down’ scenario, though I’m admittedly vulnerable to the ‘global thermonuclear war’ scenario.
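
The dozen-liner is roughly the following, reconstructed here as a sketch; the remote name, paths, and commit message are placeholders rather than my actual script:

#!/bin/sh
# nightly home-directory backup: git tracks content, metastore keeps the
# ownership/permissions/mtimes that git does not
cd "$HOME" || exit 1
metastore -s                                # writes ./.metadata
git add -A
git commit -m "backup $(date +%F)" || true  # nothing to commit is fine
git push backup master
# (the windowsland shares get rsync'd from their cifs mounts into separate
#  repositories by a similar job)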

I’ve already used this scheme to rebuild a machine’s home dir and it worked flawlessly. Hopefully the same will hold when I move my laptop from Ubuntu 8.04 to Fedora 10. Anyway, this should keep me satisfied until btrfs is everywhere and I can just use filesystem snapshots.

XMLization

The libpam-mount configuration file has changed to a new XML format.

Aaaaaaghhh, no!!!!

sed

Sometimes I look at a long Unix pipeline and think, “I should do this in perl.” Other times, it is, “I bet I can do this all in sed.” So, here’s how to print everything in a file from the first matching line onward, in sed:

sed -n ':s; /^regexp$/{b l}; n; b s; :l; p; n; b l' file.txt

Lazyweb, is there a shorter way?
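
For the record, if the goal is everything from the first match to the end of the file, a plain address range looks equivalent and is certainly shorter; skipping the matched line itself takes one more trick:

sed -n '/^regexp$/,$p' file.txt   # from the first match through end of file
sed '1,/^regexp$/d' file.txt      # the same, minus the matched line itself
                                  # (GNU sed's 0,/^regexp$/d also copes with a
                                  #  match on the very first line)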