ZFS versus XFS with Bonnie++ patched to use random data

I’ve patched Bonnie++ (( it’s not ready for production use as it isn’t controlled by a command line switch and relies on /dev/urandom existing )) to use a block of data from /dev/urandom instead of all 0’s for its block write tests. The intention is to see how the file systems react to less predictable data and to remove the unfair advantage that ZFS has with compression (( yes, I’m going to send the patch to Russell to look at )).
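
The guts of the change are small. As a rough sketch of the idea (the real patch is to Bonnie++’s C++ write loop; the chunk and file sizes below are just illustrative), it amounts to filling a buffer once from /dev/urandom and writing that where the stock code writes zeros:

    import os

    CHUNK_SIZE = 8192                  # illustrative chunk size, not Bonnie++'s actual one
    FILE_SIZE = 256 * 1024 * 1024      # illustrative test file size

    # Grab one chunk of unpredictable data up front, so the benchmark
    # measures the filesystem rather than /dev/urandom throughput.
    with open("/dev/urandom", "rb") as rand:
        block = rand.read(CHUNK_SIZE)

    # Write that same random block repeatedly, where the unpatched
    # benchmark would write a block of zeros.
    with open("bonnie_testfile", "wb") as out:
        for _ in range(FILE_SIZE // CHUNK_SIZE):
            out.write(block)
        out.flush()
        os.fsync(out.fileno())

Reading /dev/urandom only once keeps the random number generator out of the timings, which is why the patch uses a single block of random data rather than streaming fresh random data for every write.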

First Alpha Release of ZFS Using FUSE for Linux with Write Support

Ricardo Correia, who is porting Sun’s ZFS from Solaris to Linux using FUSE, has announced on his blog that he has an alpha release with working write support out:

Performance sucks right now, but should improve before 0.4.0 final, when a multi-threaded event loop and kernel caching support are working (both of these should be easy to implement, FUSE provides the kernel caching).

He might be being a little modest about performance; one commenter (Stan) wrote:

Awesome! I compared a zpool with a single file (rather than a partition) compared to ext2 on loopback to a single file. With bonnie++, I was impressed to see the performance of zfs-fuse was only 10-20% slower than ext2.

Stan then went and tried another interesting test:

For fun, check out what happens when you turn compression on and run bonnie++. The bonnie++ test files compress 28x, and the read and write rates quadruple! It’s not a realistic scenario, but interesting to see.

Ricardo’s list of what should be working in this release is pretty impressive:

  • Creation, modification and destruction of ZFS pools, filesystems, snapshots and clones.
  • Dynamic striping (RAID-0), mirroring (RAID-1), RAID-Z and RAID-Z2.
  • It supports any vdev configuration which is supported by the original Solaris implementation.
  • You can use any block device or file as a vdev (except files stored inside ZFS itself); there’s a rough sketch of creating a file-backed pool after this list.
  • Compression, checksumming, error detection, self-healing (on redundant pools).
  • Quotas and reservations.
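
If you want to play with the file-backed vdev support, a minimal sketch looks something like this (the pool name and backing file path are made up, zfs-fuse needs to be running, and ZFS wants file vdevs to be at least 64 MB):

    import subprocess

    POOL = "testpool"                        # hypothetical pool name
    BACKING_FILE = "/var/tmp/zfs-vdev.img"   # hypothetical backing file

    # Create a 256 MB sparse file to act as the vdev.
    with open(BACKING_FILE, "wb") as f:
        f.truncate(256 * 1024 * 1024)

    # Build a pool on the file; zpool wants an absolute path for file vdevs.
    subprocess.check_call(["zpool", "create", POOL, BACKING_FILE])

    # Turn compression on, as in Stan's second test above.
    subprocess.check_call(["zfs", "set", "compression=on", POOL])

    # ... run bonnie++ or whatever against /testpool here ...

    # Tear the pool down again afterwards.
    subprocess.check_call(["zpool", "destroy", POOL])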

His STATUS file lists what isn’t working yet (the main omission I spotted was zfs send and recv).

Caveat: this is an alpha release, so it might eat your data.

Google Earth Overlay of DSE Bushfire Updates in Victoria

Back in January 2006 some clueful person came up with the idea of creating a Google Earth overlay to monitor bushfires in Victoria.

It pulls in the latest image of current fire incidents from the Department of Sustainability and Environment (DSE) current incidents page and overlays it on the satellite imagery.

Red circles are controlled fires, red stars are contained fires, and red fire symbols mark fires that are “going” (i.e. not yet controlled or contained).
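
For anyone curious how this sort of overlay works, it’s essentially a KML GroundOverlay pointing at an image that Google Earth re-fetches periodically. A rough sketch of generating one (the image URL and bounding box below are placeholders, not the values the real overlay uses):

    # Placeholders only; the real overlay points at the DSE incident image
    # and uses a proper bounding box for Victoria.
    IMAGE_URL = "http://www.example.org/dse-current-incidents.gif"
    NORTH, SOUTH, EAST, WEST = -34.0, -39.2, 150.0, 141.0

    kml = """<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://earth.google.com/kml/2.1">
      <GroundOverlay>
        <name>DSE current incidents</name>
        <Icon>
          <href>%s</href>
          <refreshMode>onInterval</refreshMode>
          <refreshInterval>600</refreshInterval>
        </Icon>
        <LatLonBox>
          <north>%s</north><south>%s</south>
          <east>%s</east><west>%s</west>
        </LatLonBox>
      </GroundOverlay>
    </kml>
    """ % (IMAGE_URL, NORTH, SOUTH, EAST, WEST)

    # Write the overlay out; opening the file in Google Earth adds the layer.
    open("dse_fires.kml", "w").write(kml)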

Australian Government Upsets Google

The ABC is reporting that there is draft Australian copyright legislation (( legislation here thanks to KatteKrab )) that could make it a requirement for all commercial search engines to contact the copyright holder of every web page/site in Australia and obtain permission for their site to be spidered for indexing.

This is because apparently the proposed legislation will only

protect libraries, archives and research institutions but leave commercial entities like Google out in the cold.

Google’s submission is quoted as saying:

“Given the vast size of the Internet it is impossible for a search engine to contact personally each owner of a web page to determine whether the owner desires its web page to be searched, indexed or cached,” Google submitted.

“If such advanced permission was required, the Internet would promptly grind to a halt.”

I disagree; the Internet wouldn’t grind to a halt, but we might find that Australian-based sites dropped off the wider world’s radar as they were expunged from search engines. I don’t know how the legislation would affect sites like mine, which, whilst written by someone in Australia (( OK, I’m in LA at the moment, but I’ll be back soon )), are hosted overseas.

LUV (Melbourne Chapter) October General Meeting: Intel Architecture and Hacked Slugs

Paraphrased from the original.

Start: Oct 3 2006 – 19:00
End: Oct 3 2006 – 21:00

Location: The Buzzard Lecture Theatre. Evan Burge Building. Trinity College Main Campus. The University of Melbourne. Parkville. Melways Map: 2B C5.

Intel’s Core Architecture by David Jones

David Jones is a Solutions Specialist with Intel Australia specialising in server architecture, working directly with end users such as Westpac Bank, Ludwig Cancer Research, VPAC and others, advising on the latest technologies available from Intel. David has been with Intel for 10 years and in IT for 20 years, coming from a UNIX background. Today David will introduce Intel’s latest architecture (the Core architecture) and explain the differences between Hyper-Threading and dual-core technologies.

Hacked slugs, solving all your problems with little NAS boxes by Michael Still

This talk will discuss how to get your own version of Linux running on a Linksys NSLU2, known to the Linux community as a slug. The NSLU2 is a consumer-grade network attached storage (NAS) device; these devices are quite inexpensive, physically small, and run on low-voltage DC power. The talk will also cover how to handle a firmware flash going bad, and offer some thoughts on projects made possible by these devices. The presentation will include demonstrations of the process of flashing and setting up one of them.

Ed: as usual there will be a pre-meeting curry at 6pm

Google Co-Op – Annotating The Web

Looks like Google is working on a new service to allow users to add labels to topics that they (hopefully) know something about. The idea is that other people subscribe to your labels if they feel you are accurate, and those subscriptions then influence their search results. Sort of like routing-by-rumour protocols in computer networks.

So their intention is to get around the fact that webmasters don’t yet put explicit semantic markup in their pages, by exploiting something easier: getting people who know about a topic to annotate existing pages through a third-party site, annotations that (many) others can then use in their normal searches.
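
Mechanically the idea is straightforward; here is a toy sketch of how subscribed labels could nudge an existing result ordering (purely my illustration of the concept, nothing to do with how Google Co-Op is actually implemented):

    # Toy illustration only; the labels, URLs and scoring are invented.
    # Annotations from people you subscribe to: URL -> set of labels.
    subscribed_annotations = {
        "http://example.org/zfs-guide": {"filesystems", "recommended"},
        "http://example.org/junk-page": {"spam"},
    }

    def rerank(results, interesting_labels):
        """Boost results carrying labels you care about, demote ones
        your subscribed annotators have flagged as spam."""
        def adjusted_rank(item):
            rank, url = item
            labels = subscribed_annotations.get(url, set())
            boost = -1 if labels & interesting_labels else 0
            penalty = 3 if "spam" in labels else 0
            return rank + boost + penalty
        return sorted(results, key=adjusted_rank)

    # (rank, url) pairs as the search engine originally ordered them.
    results = [
        (1, "http://example.org/junk-page"),
        (2, "http://example.org/zfs-guide"),
        (3, "http://example.org/unlabelled-page"),
    ]
    print(rerank(results, {"filesystems"}))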

I guess the first thing that springs to mind for me is “what an opportunity for guerrilla marketing”: PR companies subscribe as “ordinary people”, but skew their recommendations towards the people paying them. If that sounds far-fetched then don’t forget that techniques like this have been around for over two decades; consider it the marketeer’s version of computer security’s “social engineering”.

Initially found via the Evolving Trends blog.

Google To Warn About Pages With Malware

The BBC is reporting that Google will try to warn people when pages in its search results may contain malware.

Initially the warnings seen via the search site will be generic and simply alert people to the fact that a site has been flagged as dangerous. Eventually the warnings will become more detailed as Stop Badware researchers visit harmful sites and analyse how they try to subvert users’ machines.

I had a play with one example that the BBC quotes:

A research report released in May 2006 looked at the safety of the results returned by a search and found that, on average, 4-6% of the sites had harmful content on them. For some keywords, such as “free screensavers” the number of potentially dangerous sites leapt to 64%.

But I couldn’t get it to warn me; perhaps it’s because Google knows I’m not running Windows? 🙂