btrfs 0.13 and XFS benchmarks

Back in February Chris Mason announced btrfs 0.13, so I thought I’d give it a quick go as I’d not touched it since testing btrfs 0.5 back in August. Back then, on some pretty meaty hardware, there was a considerable difference between XFS and btrfs and I was curious as to how they’d compare now.

The test hardware this time is a quad core Intel box with 8GB RAM and a pair of 750GB SATA drives in a RAID-1 mirror. It is running Kubuntu Hardy Heron (now in beta) with a 2.6.25-rc6 kernel.

A quick blast with Bonnie++ surprised me: btrfs matched XFS for reads, writes and rewrites (though with higher CPU usage, presumably because it is checksumming all the data) and then blew XFS away for metadata operations.

Operation                  XFS    btrfs
Block write (KB/s)       50572    42087
Block rewrite (KB/s)     23739    23296
Block read (KB/s)        52512    53108
Sequential creates (/s)   4095    23569
Sequential deletes (/s)   3404    15901
Random creates (/s)       1819    27919
Random deletes (/s)       1397    21561

Here are the full results:


Version 1.03b       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
quad            16G           50572  10 23739   4           52512   6 431.6   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  4095  19 +++++ +++  3404  14  1819   8 +++++ +++  1397   6

real    23m32.841s
user    0m1.340s
sys     1m33.566s


Version 1.03b       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
quad            16G           42087  42 23296   7           53108   9 345.9   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 23569 100 +++++ +++ 15901  99 27919 100 +++++ +++ 21561 100

real    24m28.868s
user    0m1.436s
sys     4m18.040s
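For a feel for what the sequential create/delete numbers actually measure, here's a minimal sketch (my own illustration, not what Bonnie++ itself does internally): time how fast the filesystem can create and then unlink a pile of empty files in a single directory.

```python
import os
import tempfile
import time

def metadata_rate(n=1000):
    """Rough sketch of Bonnie++'s sequential create/delete phases:
    create n empty files in one directory, then unlink them all,
    and report operations per second for each phase."""
    with tempfile.TemporaryDirectory() as d:
        t0 = time.perf_counter()
        for i in range(n):
            # creat() an empty file
            open(os.path.join(d, f"f{i:06d}"), "w").close()
        t1 = time.perf_counter()
        for i in range(n):
            # unlink() it again
            os.unlink(os.path.join(d, f"f{i:06d}"))
        t2 = time.perf_counter()
    return n / (t1 - t0), n / (t2 - t1)

creates, deletes = metadata_rate()
print(f"creates/s: {creates:.0f}  deletes/s: {deletes:.0f}")
```

On most systems this will mostly exercise the page cache rather than the disk, which is exactly why the big metadata differences above show up: it's the filesystem's directory data structures doing the work, not the platters.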

13 thoughts on “btrfs 0.13 and XFS benchmarks”

  1. When I first added the test for creat/unlink speed I was accused of tailoring the benchmark to ReiserFS. :-#

    I expect any modern filesystem to use some type of balanced trees for directories and therefore easily beat the older filesystems.

    I would be interested to see Postal results for both filesystems. Get Postal doing 20 TCP connections to the server at the same time and have a RAID array with 4 or more disks for storage and the result should be interesting.

    My theory is that Postal will show some significant benefits for XFS at the high end which are not shown by Bonnie++. But I have not had time to test this theory.

  2. I just switched from a 6-disk RAID-5 XFS system to reiserfs on my free hosting with 100k+ users, and I've noticed a real change, but it was nowhere near what my tests showed. XFS being 10 or even 50 times slower on some tests is of course not even twice as slow in real life, but I have noticed the system is about 20-30% faster, meaning fewer tasks are waiting on I/O under reiserfs than they were under XFS.

    So my load average is lower now, and the system is more responsive. So for free hosting, for example, where I limit the max file size to just 3MB, XFS is just a bad choice.

    Other than that, btrfs blew away XFS, and even reiserfs, in my tests. It was matched only by reiser4, and reiser4 is still really not stable.

    For me, btrfs is what I wanted reiser4 to be, plus quick fsck plus some other great stuff. Meaning: as soon as btrfs is stable, I'm going to switch to it. If it's as good as reiser4 in tests, I just can't believe it won't be at least very good in real life. reiser4 blew away every fs other than btrfs, and I tested it in real life: it is EXTREMELY fast. Not counting the lockups 😉

    So please, keep btrfs quick for metadata; don't overcomplicate it like every other fs out there. Make it as quick as reiser4, but stable, and write it cleanly so it gets into the kernel quickly. Then it will be the #1 choice for performance geeks and, after many patches, maybe the #1 competitor to ext3/4.

  3. Forgot to mention: in the benchmarking I did, Reiser4 (when it didn't crash the box) was slower than ReiserFS, and both ext3 and ext4 were faster than both of the Reiser filesystems for the workload I was using.

    It’s horses for courses, people should test their workload and pick whichever works best for them!

  4. The XFS developers do a lot of regression testing at the high end. XFS doesn't look as good on a simple benchmark as it does when you have some unusual high-load situation in real life.

    If you consider the value of your time and the value of uptime vs the cost of more disks then XFS starts to look a little more attractive too.

  5. I don't know; reiser4 was extremely fast for me. It was used as the users' partition only, with files usually 10-100KB in size. But it was not stable, I had a few kernel oopses, and it was not worth it.

    XFS was slow for metadata, and operating on a directory with 100,000 subdirectories was so slow it was just crazy.

    reiserfs is like 20-30% faster, talking about a real-life example, not tests. I personally think that ext4 would be even faster than reiserfs, but I didn't dare put the users' data partition on a filesystem still in development.

    So, considering only stable filesystems, I can only say that reiserfs is faster than XFS (XFS tweaked, meaning about 8 different mkfs/mount-time options) in a real-life example with many small files and many directories.

    Big files? I'm sure XFS would blow reiserfs away easily, but I don't need that.

  6. Most people don't even try to create 100,000 sub-directories. Either hash the names or use the first character or two of the names so that there are multiple levels of directory. The way Squid stores its files is an example of how to work around such bottlenecks.
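    That suggestion can be sketched in a few lines. This is purely illustrative (the function name and the two-level, two-character layout are my own, loosely modelled on how Squid lays out its cache directories): hash each name and use the leading hash characters to pick the subdirectory, so no single directory ever accumulates 100,000 entries.

```python
import hashlib
import os

def shard_path(root, name):
    """Map a filename to a two-level subdirectory keyed on a hash of
    the name, e.g. store/ab/cd/user12345.dat, so entries are spread
    over up to 256 * 256 small directories instead of one huge one."""
    h = hashlib.md5(name.encode()).hexdigest()
    return os.path.join(root, h[:2], h[2:4], name)

print(shard_path("store", "user12345.dat"))
```

    Lookups stay O(1) for the application (the path is recomputed from the name), and each leaf directory stays small enough that even a filesystem with slow linear directory scans copes fine.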

  7. Juice, that’s fine, my point is that (as is a general rule in computing) the only benchmark that really matters is your workload.

    For the sort of workload I was testing the results were different, but that’s just to be expected. If we were testing a video streaming system it’d be different again! 😉

  8. Pingback: The Musings of Chris Samuel » Blog Archive » ZFS-FUSE Bonnie++ benchmark update

  9. Until it becomes the default, it can be worth tweaking XFS parameters to improve the speed of metadata operations. A quick test on my workstation (Linux, 2GB RAM):

    # mkfs.xfs -f
    # mount -t xfs
    # bonnie++ -u nobody

    Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    attale           4G 10871  29 10931   2  8050   3 23348  61 37155   4 134.3   0
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16   646   3 +++++ +++   660   2   655   3 +++++ +++   319   1

    # mkfs.xfs -f -l lazy-count=1,version=2,size=128m -i attr=2 -d agcount=4
    # mount -o logbsize=256k,nobarrier
    # bonnie++ -u nobody

    Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    attale           4G 11028  30 10690   2  8091   3 18795  48 34657   4 132.3   0
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  1011   4 +++++ +++   707   2  1093   4 +++++ +++   526   2

  10. Pingback: Bonnie++ Results for XFS on Dell E4200 SSD « The Musings of Chris Samuel

  11. Pingback: HOWTO - Kubuntu 9.04, RAID-10, LVM2, and XFS… « Linux Free Trade Zone
