XFS, JFS and ZFS/FUSE Benchmarks on Ubuntu Feisty Fawn

Having upgraded to the Feisty beta I thought it would be fun to see what (if any) effect it had on filesystem performance (especially given my previous aide-memoire).

For these tests I stuck to my three favourites: JFS (from IBM), XFS (from SGI) and ZFS (from Sun, ported to Linux using FUSE by Ricardo Correia because of Sun’s GPL-incompatible license). This is a follow-on from the slew of earlier ZFS & XFS benchmarking I reported on previously (( here, here, here and here )).

Summary: in Bonnie++ JFS is fastest, XFS next and ZFS slowest, and Feisty made both XFS and ZFS go faster (sadly I didn’t record my previous JFS results).

The fact that ZFS is the slowest of the three is not surprising, as the Linux FUSE port hasn’t yet been optimised (Ricardo is concentrating on just getting it running) and is also hampered by running in user space. That said, it still manages a respectable speed on this hardware and has functionality that makes it worth using for me.
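For reference, the runs here used Bonnie++ in roughly this fashion; the mount point and user are placeholders, and the test file size should be at least twice physical RAM so the page cache can’t flatter the results:

```shell
#!/bin/sh
# Pick a test size of twice physical RAM (Bonnie++'s -s takes MiB);
# /proc/meminfo reports the total in KB.
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
size_mb=$(( ram_kb * 2 / 1024 ))
echo "Test file size: ${size_mb} MiB"

# Run Bonnie++ against the filesystem under test. /mnt/test and the
# user "nobody" are placeholders; only attempted if bonnie++ is
# installed and the test directory exists.
if command -v bonnie++ >/dev/null && [ -d /mnt/test ]; then
    bonnie++ -d /mnt/test -s "$size_mb" -u nobody
fi
```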


ZFS on Linux with FUSE reaches first beta release

I’m a bit behind at the moment, but this is something worth a mention.

Ricardo Correia’s port of Sun’s ZFS (which I’ve been playing with for a while) has finally reached its first beta release!

He’s made some useful performance improvements recently as well as tidying up some of the memory handling and fixing various bugs, including that bug from the upstream that yours truly got bitten by.

Here’s an updated Bonnie++ run for comparison.

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
inside           2G           18838   5  6995   2           18277   2 144.4   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  2795   5  9658   9  3462   5  2739   5 13736  11  4015   6
inside,2G,,,18838,5,6995,2,,,18277,2,144.4,0,16,2795,5,9658,9,3462,5,2739,5,13736,11,4015,6

real    10m12.258s
user    0m0.840s
sys     0m18.605s

Compared to previous results the write speed has improved, but the read speed seems to have dropped off a bit. Still, it’s early days yet.
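As an aside, that comma-separated line at the bottom is Bonnie++’s machine-readable output; the fifth field is the sequential block write rate in K/sec, so a quick awk one-liner turns it into MB/s:

```shell
#!/bin/sh
# Bonnie++'s CSV line from the run above (per-char fields are blank).
csv='inside,2G,,,18838,5,6995,2,,,18277,2,144.4,0,16,2795,5,9658,9,3462,5,2739,5,13736,11,4015,6'

# Field 5 is the sequential block write rate in K/sec; convert to MB/s.
blk_write_mbs=$(echo "$csv" | awk -F, '{printf "%.1f", $5 / 1024}')
echo "Block write: ${blk_write_mbs} MB/s"
```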

ZFS Bug From Solaris Found in Linux FUSE Version and Fixed

Those who know me in my day job know that I’m pretty good at breaking things, so I guess I shouldn’t be surprised that I found a ZFS bug that came from the OpenSolaris code base and had been sitting there unnoticed for about a year. The ZFS on Linux developer has now fixed the bug and sent a patch back upstream, so hopefully there will be a fix in OpenSolaris because of work done on Linux!

The good thing is that because I found it on Linux running ZFS using FUSE the bug didn’t take my system down when the ZFS file system daemon died. 🙂

Must Remember for Future ZFS on Linux Testing..

Linus added support for block device based filesystems into 2.6.20, so it’ll be interesting to see what effect (if any) it has on ZFS/FUSE, especially given it’s named in the commit. 🙂

I never intended this, but people started using fuse to implement block device based “real” filesystems (ntfs-3g, zfs).

Looks like Ubuntu’s Feisty Fawn will ship with this, as the 2.6.20 kernels in the development versions show the fuseblk filesystem in /proc/filesystems once you’ve loaded the fuse module, and the fuse-utils package seems to support it too.
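A quick way to check whether a given kernel has this support (a sketch; loading the module needs the fuse module available, and may need root):

```shell
#!/bin/sh
# Try to load the fuse module (ignore failure if it's built in,
# or we're not root), then look for the new fuseblk type.
modprobe fuse 2>/dev/null || true

if grep -qw fuseblk /proc/filesystems; then
    echo "fuseblk: supported"
else
    echo "fuseblk: not available (kernel older than 2.6.20, or fuse not loaded)"
fi
```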

Update: Sadly it appears that this isn’t much use for ZFS. 🙁

ZFS Disk Mirroring, Striping and RAID-Z

This is the third in a series of tests (( the previous ones are ZFS on Linux Works! and ZFS versus XFS with Bonnie++ patched to use random data )), but this time we’re going to test how it handles multiple drives natively, rather than running over an existing software RAID+LVM setup. ZFS can dynamically add disks to a pool for striping (the default), mirroring or RAID-Z (with single or double parity), which are designed to improve speed (striping), reliability (mirroring) or both performance and reliability (RAID-Z).


ZFS versus XFS with Bonnie++ patched to use random data

I’ve patched Bonnie++ (( it’s not ready for production use as it isn’t controlled by a command line switch and relies on /dev/urandom existing )) to use a block of data from /dev/urandom instead of all 0’s for its block write tests. The intention is to see how the file systems react to less predictable data and to remove the unfair advantage that ZFS has with compression (( yes, I’m going to send the patch to Russell to look at )).
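The effect the patch is chasing is easy to demonstrate outside Bonnie++: a block of zeroes compresses to almost nothing, while the same amount of /dev/urandom barely compresses at all, which is exactly the difference a compressing filesystem like ZFS sees. A quick illustration with gzip standing in for the filesystem’s compressor:

```shell
#!/bin/sh
# Compare how well 1 MiB of zeroes vs 1 MiB of random data compresses.
dd if=/dev/zero    of=/tmp/zeros.dat  bs=1M count=1 2>/dev/null
dd if=/dev/urandom of=/tmp/random.dat bs=1M count=1 2>/dev/null

zeros_gz=$(gzip -c /tmp/zeros.dat | wc -c)
random_gz=$(gzip -c /tmp/random.dat | wc -c)

# Zeroes shrink to a few KB; random data ends up slightly LARGER.
echo "zeroes: 1 MiB -> ${zeros_gz} bytes gzipped"
echo "random: 1 MiB -> ${random_gz} bytes gzipped"
```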


First Alpha Release of ZFS Using FUSE for Linux with Write Support

Ricardo Correia has announced on his blog that his port of Sun Solaris’s ZFS to Linux using FUSE now has an alpha release with working write support:

Performance sucks right now, but should improve before 0.4.0 final, when a multi-threaded event loop and kernel caching support are working (both of these should be easy to implement, FUSE provides the kernel caching).

He might be being a little modest about performance; one commenter (Stan) wrote:

Awesome! I compared a zpool with a single file (rather than a partition) compared to ext2 on loopback to a single file. With bonnie++, I was impressed to see the performance of zfs-fuse was only 10-20% slower than ext2.

Stan then went and tried another interesting test:

For fun, check out what happens when you turn compression on and run bonnie++. The bonnie++ test files compress 28x, and the read and write rates quadruple! It’s not a realistic scenario, but interesting to see.

Ricardo’s list of what should be working in this release is pretty impressive:

  • Creation, modification and destruction of ZFS pools, filesystems, snapshots and clones.
  • Dynamic striping (RAID-0), mirroring (RAID-1), RAID-Z and RAID-Z2.
  • It supports any vdev configuration which is supported by the original Solaris implementation.
  • You can use any block device or file as a vdev (except files stored inside ZFS itself).
  • Compression, checksumming, error detection, self-healing (on redundant pools).
  • Quotas and reservations.

Read his STATUS file to find out what isn’t working too (the main one there I spotted was zfs send and recv).

Caveat: this is an alpha release, so it might eat your data.
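For the curious, the working feature list above maps onto commands like these. This is a sketch only: “tank” and the filesystem names are placeholders, and it needs root plus an existing pool, so everything is guarded behind a check:

```shell
#!/bin/sh
# Exercise a few of the features in the list above. All names are
# placeholders; skipped entirely unless a pool called "tank" exists.
if command -v zfs >/dev/null && zpool list tank >/dev/null 2>&1; then
    zfs_available=yes
    zfs create tank/home                    # a new filesystem
    zfs set compression=on tank/home        # transparent compression
    zfs set quota=10G tank/home             # quotas
    zfs set reservation=1G tank/home        # reservations
    zfs snapshot tank/home@before-upgrade   # a snapshot
    zfs clone tank/home@before-upgrade tank/home-copy  # a writable clone
else
    zfs_available=no
    echo "no ZFS pool available; commands shown for illustration only"
fi
```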

Minimum Memory for OpenSolaris?

Dear Lazyweb,

Alec has been bugging me to try OpenSolaris with ZFS on something (a small laptop he suggested) but I’ve run into problems. My only spare box is an ancient Olivetti Netstrada, about 10 years old with 4 (yes, four) Pentium Pro 200MHz CPUs and a whopping (for its time) 256MB RAM.

Problem is that whilst Linux happily boots and runs on it, the two OpenSolaris LiveCDs I’ve tried (Nexenta and Belenix) both fail. Nexenta says that there’s not enough RAM to unpack the RAM disk (not surprising, as their site says it needs 512MB to run) and the Belenix one just leaves the screen in a mess of pretty colours as soon as Grub tries to run the loaded kernel.

[Image: Solaris kernel crash on the Olivetti Netstrada quad 200MHz PPro, 256MB RAM boat anchor]

I then tried to boot the Nexenta install CD (they claim it can run in 256MB, though there’s no mention of its installer’s needs) and got the same pretty pattern of colours when the kernel executed. 🙁

I do have one other PC; the only problem is that it’s got even less RAM and the CD drive doesn’t appear to want to open any more, grrr..

Linux FUSE Port of “Open” Solaris ZFS

Because Sun unfortunately chose to create a new and GPL incompatible license for “Open” Solaris it is not legally possible to directly port their interesting ZFS filesystem code into the Linux kernel, so any Linux kernel implementation would need to be a clean room rewrite under a GPL compatible license.

However, there is a way around this license incompatibility problem for filesystems: the Linux Filesystem in Userspace (FUSE) project. It (as the name implies) allows a filesystem to run in user space rather than in the kernel, an approach now used by many other projects as an easy way of providing a filesystem interface for all sorts of wacky ideas (including filesystem access to Wikipedia through WikipediaFS).

So the Google Summer of Code 2006 project is sponsoring Ricardo Correia in Portugal to port ZFS from “Open” Solaris to FUSE – he’s keeping a blog of his progress too.

Ricardo writes:

I’m very pleased to announce that, thanks to Google, Linux will (hopefully) have a working ZFS implementation by August 21st, 2006.

Good luck to him – I’ve had a demo of ZFS on Alec’s laptop and it looked quite snazzy – it’s kind of a fusion of an online resizeable filesystem & logical volume manager.