How Big Was North Korea’s Bomb?

My good friend Alec wrote on hearing about the DPRK nuclear test:

One presumes that there is a small chance it’ll have been staged with conventionals;

That got me thinking – how large a bomb was it? We know the USGS detected a magnitude 4.2 shock, so I went hunting around to see if there was an algorithm for converting magnitudes on the Richter Scale into energy, and, hopefully, into kilotons or megatons. It turns out J.C. Lahr wrote up a method for the “Comparison of earthquake energy to nuclear explosion energy” and helpfully included a piece of Fortran code to create a table of comparisons.

A quick “apt-get install gfortran” and a bit of mucking around with the code and I had an approximate answer:

Mag.   Energy      Energy      TNT         TNT         TNT          Hiroshima
       Joules      ft-lbs      tons        megatons    equiv. tons  bombs
4.2    0.126E+12   0.929E+11   0.301E+02   0.301E-04   0.201E+04    0.134E+00

So a magnitude 4.2 earthquake is roughly equivalent to a 2 kiloton device, less than one fifth the size of the Hiroshima bomb. This makes it unlikely to have been a conventional device.
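For anyone who wants to play with the numbers without installing gfortran, the calculation can be sketched in a few lines of Python. The Gutenberg-Richter energy relation and the ~1.5% seismic coupling factor below are my own assumptions, back-fitted to reproduce Lahr’s table – this isn’t his actual Fortran code:

```python
def seismic_energy_joules(magnitude):
    # Gutenberg-Richter energy relation: log10(E) = 1.5*M + 4.8 (E in joules)
    return 10 ** (1.5 * magnitude + 4.8)

TNT_TON_JOULES = 4.184e9   # energy released by one ton of TNT
SEISMIC_COUPLING = 0.015   # assumed fraction of a blast's energy radiated as seismic waves
HIROSHIMA_TONS = 15000.0   # Hiroshima yield, roughly 15 kilotons

e = seismic_energy_joules(4.2)
tnt_direct = e / TNT_TON_JOULES            # tons of TNT with the same *seismic* energy
tnt_equiv = tnt_direct / SEISMIC_COUPLING  # estimated actual yield in tons of TNT

print("Seismic energy: %.3e J" % e)                      # ~1.26e11 J
print("Equivalent yield: %.1f kt" % (tnt_equiv / 1000))  # ~2 kt
print("Hiroshima bombs: %.2f" % (tnt_equiv / HIROSHIMA_TONS))
```

The coupling factor matters: a bomb’s total energy is much larger than the fraction that shows up as seismic waves, which is why the “TNT equiv. tons” column is two orders of magnitude bigger than the raw “TNT tons” one.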

So what North Korea tested was fairly small in these days of megaton devices, but certainly nothing you’d want to be anywhere near.

LUV (Melbourne Chapter) October General Meeting: Intel Architecture and Hacked Slugs

Paraphrased from the original.

Start: Oct 3 2006 – 19:00
End: Oct 3 2006 – 21:00

Location: The Buzzard Lecture Theatre. Evan Burge Building. Trinity College Main Campus. The University of Melbourne. Parkville. Melways Map: 2B C5.

Intel’s Core Architecture by David Jones

David Jones is a Solutions Specialist with Intel Australia specialising in Server Architecture, working directly with end users such as Westpac Bank, Ludwig Cancer Research, VPAC and others, advising on the latest technologies available from Intel. David has been with Intel for 10 years and in IT for 20 years, coming from a UNIX background. Today David will introduce Intel’s latest architecture (Core Architecture) and explain the differences between Hyper-Threading and Dual Core technologies.

Hacked slugs, solving all your problems with little NAS boxes by Michael Still

This talk will discuss how to get your own version of Linux running on a Linksys NSLU2, known to the Linux community as a slug. This is a consumer grade network attached storage (NAS) system. These devices are quite inexpensive, are physically small, and run on low voltage DC power. I also discuss how to handle having your firmware flash go bad, and provide some thoughts on projects made possible by these devices. The presentation will also include extra demonstrations of the process of flashing and setting up one of these devices.

Ed: as usual there will be a pre-meeting curry at 6pm

The Vacation Mail Responder – 1.2.6.2 Released

The Vacation Mail Responder has been abandoned for over 5 years now, so I contacted the former maintainers and asked them about taking on the project. They were happy about that and so now I find myself looking after it, along with Brian May.

I’ve made a minor bug fix (adding the Precedence: bulk header to all responses it generates), updated the maintainer information, and just released 1.2.6.2, over 5 years after the 1.2.6.1 release.
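For the curious, Precedence: bulk marks a message as automatically generated, which tells well-behaved mail software (including other autoresponders) not to answer it, and so helps avoid mail loops. A rough Python sketch of what such a response looks like – the addresses and wording are made up, and this is an illustration, not vacation’s actual C code:

```python
from email.message import EmailMessage

# Build a vacation-style auto-reply. The Precedence: bulk header marks
# it as automated mail so other autoresponders won't answer it in a loop.
reply = EmailMessage()
reply["From"] = "me@example.org"      # hypothetical addresses
reply["To"] = "sender@example.com"
reply["Subject"] = "Re: your mail (vacation auto-reply)"
reply["Precedence"] = "bulk"
reply.set_content("I'm away from my mail at the moment and will reply when I return.")

print(reply.as_string())
```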

The main question now, of course, is where do we go from here? One of the options we’re seriously considering is rebasing on the native packages in Debian & Ubuntu, as their version has been independently developed and has gone much further than this one.

But for now I can go to sleep tonight feeling happy that I’ve taken on my first open source project and started to breathe some life into it once more.

Linux File System Development

Been trying to catch up on some of my LWN reading (I’m weeks behind at the moment) and have stumbled upon one of those gems of information that LWN so often has – a report on the 2006 Linux File Systems Workshop.

The first page gives a useful introduction to how disk technologies are advancing, and to the problems that massively increasing capacity combined with slowly increasing seek times creates for filesystem developers. For instance:

In summary, over the next 7 years, disk capacity will increase by 16 times, while disk bandwidth will increase only 5 times, and seek time will barely budge! Today it takes a theoretical minimum 4,000 seconds, or about 1 hour to read an entire disk sequentially (in reality, it’s longer due to a variety of factors). In 2013, it will take a minimum of 12,800 seconds, or about 3.5 hours, to read an entire disk – an increase of 3 times. Random I/O workloads are even worse, since seek times are nearly flat. A workload that reads, e.g., 10% of the disk non-sequentially will take much longer on our 8TB 2013-era disk than it did on our 500GB 2006-era disk.
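Those figures are easy to sanity-check: the theoretical minimum sequential read time is just capacity divided by bandwidth. The 125 MB/s and 625 MB/s numbers below are back-calculated from the article’s totals, not specifications quoted in it:

```python
def full_read_seconds(capacity_bytes, bandwidth_bytes_per_sec):
    # Theoretical minimum time to read an entire disk sequentially
    return capacity_bytes / bandwidth_bytes_per_sec

t_2006 = full_read_seconds(500e9, 125e6)  # 500 GB disk at 125 MB/s
t_2013 = full_read_seconds(8e12, 625e6)   # 8 TB disk at 625 MB/s

print(t_2006)           # 4000 seconds, about an hour
print(t_2013)           # 12800 seconds, about 3.5 hours
print(t_2013 / t_2006)  # 3.2x longer, despite 5x the bandwidth
```

The capacity grows 16x while bandwidth grows only 5x, so the full-disk read time goes up by the ratio, about 3x – exactly the article’s point.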

The second page reports on the first day of the workshop, which covered hardware, errors and recovery, and current filesystem design. If you are interested in filesystems, or are just curious about how they work and how they can break, then please go read it – it’s an outstanding article! Also read the comments; there’s some interesting stuff there too.

The last page then gets into new ideas for filesystem techniques designed around fixing the problems identified on the first day. This is nicely summarised by the comment:

These goals can be summarized as “repair-driven file system design” – designing our file system to be quickly and easily repaired from the beginning, rather than bolting it on afterward.

Very encouraging. It goes on to describe a number of different filesystem concepts that could be incorporated into new filesystems, including one (chunkfs) that could be pretty much a new filesystem in its own right.

Linux FUSE Port of “Open” Solaris ZFS

Because Sun unfortunately chose to create a new, GPL-incompatible license for “Open” Solaris, it is not legally possible to port their interesting ZFS filesystem code directly into the Linux kernel, so any Linux kernel implementation would need to be a clean-room rewrite under a GPL-compatible license.

However, there is a way around this license incompatibility for filesystems: the Linux Filesystem in Userspace (FUSE) project. It (as the name implies) allows a filesystem to run in user space rather than in the kernel, an approach now used by many other filesystem projects as an easy way of providing a filesystem paradigm for all sorts of wacky ideas (including filesystem access to Wikipedia through WikipediaFS).

So the Google Summer of Code 2006 project is sponsoring Ricardo Correia in Portugal to port ZFS from “Open” Solaris to FUSE – he’s keeping a blog of his progress too.

Ricardo writes:

I’m very pleased to announce that, thanks to Google, Linux will (hopefully) have a working ZFS implementation by August 21st, 2006.

Good luck to him – I’ve had a demo of ZFS on Alec’s laptop and it looked quite snazzy – it’s kind of a fusion of an online resizeable filesystem and a logical volume manager.