The Parable of the Tulsa

In the beginning was the Xeon, and it was 32-bit.

Then Intel moved over the face of the Xeon and created Nocona, which was 64-bit, and Intel thought it was, well, OK.

So Intel took Nocona and added more L2 cache, and thus begat Irwindale. Intel saw Irwindale was good, but Opteron was still better.

So Intel was wrathful and split Irwindale asunder internally, creating Paxville DP, with dual cores.

Intel looked at Paxville DP and said unto itself “still not enough cache!” and soon more cache grew within the Paxville DP and thus begat Tulsa.

Thus endeth the lesson, from the book of Wikipedia, Chapter Xeon..

Ahem..

Google To Warn About Pages With Malware

The BBC is reporting that Google will try to warn people about pages in its search results that may contain malware.

Initially the warnings seen via the search site will be generic and simply alert people to the fact that a site has been flagged as dangerous. Eventually the warnings will become more detailed as Stop Badware researchers visit harmful sites and analyse how they try to subvert users’ machines.

I had a play with one example that the BBC quotes:

A research report released in May 2006 looked at the safety of the results returned by a search and found that, on average, 4-6% of the sites had harmful content on them. For some keywords, such as “free screensavers” the number of potentially dangerous sites leapt to 64%.

But I couldn’t get it to warn me – perhaps it’s because Google knows I’m not running Windows? 🙂

Microsoft, Firefox and Bad (X)HTML

So at one point Microsoft stuffed up their redirect of real web browsers away from their preview page (you’ll need to set your browser to lie and say it’s IE 6 on XP to avoid the now-working redirect to the standard MS home page).

I think it’s a device to try to hide the fact that it’s the usual MS-generated broken markup: the W3C validator spat out 113 errors and 13 info messages, and their CSS doesn’t fare much better!

They’ve got their work cut out for them if they want to avoid a significant regression from the current home page, which has only 2 HTML errors (though still a considerable number of CSS bugs).

419 Spam Giggle

Had a 419 spam this morning that slipped through the filters (now fed to SpamAssassin) that started with the following – do they know something that I don’t? 🙂

Dear Fiend,

Sadly it’s probably just an attempt to evade the “Dear Friend” test..

0.8 DEAR_FRIEND BODY: Dear Friend? That’s not very dear!
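For the curious, that test is just a regular expression run over the message body, so a one-character change in the salutation is enough to dodge it. Here’s a quick Python sketch (the pattern is only an approximation of the stock SpamAssassin rule, not its exact text) showing why “Dear Fiend” slips past:

```python
import re

# Rough approximation of SpamAssassin's DEAR_FRIEND body test:
# a "Dear Friend" salutation at the start of a line.
DEAR_FRIEND = re.compile(r"^Dear Friend\b", re.MULTILINE | re.IGNORECASE)

for body in ("Dear Friend,\n\nI write to you in confidence...",
             "Dear Fiend,\n\nI write to you in confidence..."):
    salutation = body.splitlines()[0]
    print(f"{salutation!r}: matches DEAR_FRIEND = {bool(DEAR_FRIEND.search(body))}")
```

Running that prints a match for “Dear Friend,” and no match for “Dear Fiend,”, which is presumably exactly what the spammer was counting on.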

Linux File System Development

Been trying to catch up on some of my LWN reading (I’m weeks behind at the moment) but have stumbled upon one of those gems of information that LWN has so often – a report on the 2006 Linux File Systems Workshop.

The first page gives a useful introduction to how disk technologies are advancing and the problems that rapidly increasing capacity, slowly improving bandwidth and near-flat seek times create for filesystem developers. For instance:

In summary, over the next 7 years, disk capacity will increase by 16 times, while disk bandwidth will increase only 5 times, and seek time will barely budge! Today it takes a theoretical minimum 4,000 seconds, or about 1 hour to read an entire disk sequentially (in reality, it’s longer due to a variety of factors). In 2013, it will take a minimum of 12,800 seconds, or about 3.5 hours, to read an entire disk – an increase of 3 times. Random I/O workloads are even worse, since seek times are nearly flat. A workload that reads, e.g., 10% of the disk non-sequentially will take much longer on our 8TB 2013-era disk than it did on our 500GB 2006-era disk.
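To put those numbers into perspective, here’s a trivial bit of Python arithmetic. The transfer rates aren’t stated directly in the quote; they’re just what the quoted capacities and read times imply (roughly 125MB/s for the 2006 disk and 625MB/s for the 2013 one):

```python
# Figures implied by the quote: a 500GB disk read sequentially in ~4,000s
# works out to ~125MB/s; an 8TB disk in ~12,800s works out to ~625MB/s.
disks = {
    "2006, 500GB": (500e9, 125e6),  # (capacity in bytes, bandwidth in bytes/s)
    "2013, 8TB":   (8e12, 625e6),
}

for label, (capacity, bandwidth) in disks.items():
    seconds = capacity / bandwidth
    print(f"{label}: full sequential read takes {seconds:,.0f}s ({seconds / 3600:.1f}h)")
```

So even though the 2013 drive is 5 times faster, its 16-fold capacity means anything that has to touch the whole disk still takes over 3 times as long.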

The second page reports on the first day of the workshop, which covered hardware, errors and recovery, and current filesystem design. If you are interested in filesystems, or are just curious about how they work and how they can break, then please go read it; it’s an outstanding article! Also read the comments; there’s some interesting stuff there too.

The last page then gets into new ideas for filesystem techniques designed around fixing the problems that were identified on the first day. This is nicely summarised by the comment:

These goals can be summarized as “repair-driven file system design” – designing our file system to be quickly and easily repaired from the beginning, rather than bolting it on afterward.

Very encouraging. It goes on to describe a number of different filesystem concepts that could be incorporated into new filesystems, including one (chunkfs) that could be pretty much a new filesystem in its own right.