I’ve patched Bonnie++ (( it’s not ready for production use, as the change isn’t controlled by a command-line switch and relies on /dev/urandom existing )) to use a block of data from /dev/urandom instead of all 0’s for its block write tests. The intention is to see how the file systems react to less predictable data, and to remove the unfair advantage that ZFS has with compression (( yes, I’m going to send the patch to Russell to look at )).
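The idea behind the patch can be sketched outside Bonnie++’s own source. This is not the actual patch, just a minimal Python illustration of the approach: read one chunk of unpredictable data up front and reuse it for every write, so the benchmark measures disk I/O rather than the kernel’s random number generator (the chunk size here is illustrative, not Bonnie++’s):

```python
CHUNK = 8192  # illustrative chunk size; Bonnie++ uses its own

# Read one block from /dev/urandom up front and reuse it for every
# write. All-zero data would let a compressing file system like ZFS
# cheat; urandom data is essentially incompressible.
with open("/dev/urandom", "rb") as rnd:
    block = rnd.read(CHUNK)

def write_test(path, total_bytes):
    """Sequentially write total_bytes of the urandom block to path."""
    written = 0
    with open(path, "wb") as f:
        while written < total_bytes:
            f.write(block)
            written += CHUNK
    return written
```

Reading the random data once, rather than per write, matters: /dev/urandom is far slower than a disk, and pulling fresh randomness inside the timed loop would turn the benchmark into a test of the kernel RNG.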
XFS
Version 1.03        ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
inside           2G           40005  10 18433   5           41640   5 197.1   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1432   4 +++++ +++  1493   2  1446   3 +++++ +++   212   0
inside,2G,,,40005,10,18433,5,,,41640,5,197.1,1,16,1432,4,+++++,+++,1493,2,1446,3,+++++,+++,212,0

real    6m15.955s
user    0m0.256s
sys     0m16.697s
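Each run also emits the comma-separated summary line shown at the end of the block. A quick sketch of pulling the sequential block figures out of that CSV (field positions follow Bonnie++ 1.03’s output; the empty fields are the skipped per-character tests):

```python
# Bonnie++ 1.03 machine-readable result line from the XFS run above.
csv_line = ("inside,2G,,,40005,10,18433,5,,,41640,5,197.1,1,"
            "16,1432,4,+++++,+++,1493,2,1446,3,+++++,+++,212,0")

fields = csv_line.split(",")

# Fields 4, 6 and 10 are the sequential block write, rewrite and block
# read rates in K/sec; fields 2-3 and 8-9 are empty because the
# per-character tests were not run.
block_write = int(fields[4])   # K/sec
rewrite     = int(fields[6])   # K/sec
block_read  = int(fields[10])  # K/sec

print(block_write, rewrite, block_read)
```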
ZFS without compression
Version 1.03        ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
inside           2G           15233   1  8584   1           29187   2  83.6   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  2084   4  6209   4  1579   3  1832   4 10994   7  1578   4
inside,2G,,,15233,1,8584,1,,,29187,2,83.6,0,16,2084,4,6209,4,1579,3,1832,4,10994,7,1578,4

real    9m58.832s
user    0m1.276s
sys     0m9.609s
ZFS with compression
Version 1.03        ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
inside           2G           12466   1  7399   1           29136   2  74.9   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  2056   4  6703   3  1733   3  1491   4 10076   8  1360   3
inside,2G,,,12466,1,7399,1,,,29136,2,74.9,0,16,2056,4,6703,3,1733,3,1491,4,10076,8,1360,3

real    11m22.843s
user    0m1.176s
sys     0m9.773s
ZFS+tcmalloc without compression
Version 1.03        ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
inside           2G           14401   1  9314   2           32431   2  86.1   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  2063   4  8081   5  1646   2  1992   4 10481  10  1945   3
inside,2G,,,14401,1,9314,2,,,32431,2,86.1,0,16,2063,4,8081,5,1646,2,1992,4,10481,10,1945,3

real    9m34.116s
user    0m1.576s
sys     0m9.217s
ZFS+tcmalloc with compression
Version 1.03        ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
inside           2G           11441   1  8403   1           31642   1  85.8   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  2099   5  8058   3  1831   2  2069   4 10915   9  1870   3
inside,2G,,,11441,1,8403,1,,,31642,1,85.8,0,16,2099,5,8058,3,1831,2,2069,4,10915,9,1870,3

real    10m37.041s
user    0m1.336s
sys     0m8.073s
Summary
A pretty conclusive win for XFS at the moment, though ZFS is still ahead for file creation and deletion (with the exception of sequential deletes), and it’s really early days for ZFS, so there’s plenty of room for manoeuvre yet.