Introduction and experimental questions
I’m setting up a home server/NAS using Snow Leopard and MacZFS. I have an odd mixture of WD disks: 2x30EZRSDTL, 2x20EADS, and 1x20EARS. I want to join them in one large storage array, so, naturally, I am interested in what the optimal configuration would be from a performance/redundancy standpoint.
I need long-term archive storage for music, photos, and videos, so I am looking at a raidz2, or possibly raidz, setup. Yes, I know that neither is a sufficient replacement for a good backup strategy, but I’m setting up an offsite backup for the most important data. I can potentially reconstruct the rest of the information; I just don’t want to lose it to a trivial disk error. I figure that raidz2 is a much more reliable configuration, albeit a slower one. But how much slower?
My first question is How different are raidz and raidz2 arrays in performance?
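For reference, the two layouts differ only in the vdev type passed to zpool create. A minimal sketch, with a placeholder pool name and device numbers (you would pick one or the other, not run both):

    # raidz: single parity, survives one failed drive
    sudo zpool create tank raidz /dev/disk1 /dev/disk2 /dev/disk3 /dev/disk4 /dev/disk5
    # raidz2: double parity, survives two failed drives (the alternative layout)
    sudo zpool create tank raidz2 /dev/disk1 /dev/disk2 /dev/disk3 /dev/disk4 /dev/disk5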
I read that giving a whole drive to ZFS improves performance because ZFS can make use of the disk’s write cache. On the other hand, setting the drives up following the MacZFS Quick Start guide (basically creating a small EFI partition on each drive) saves some hassle in getting the final pool to mount and unmount correctly.
My second question is How different is performance of the array when ZFS uses whole disks vs. ZFS only using the disk partitions as recommended by the MacZFS guide?
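For context, the guide’s partitioned setup boils down to one diskutil call per drive; diskutil adds the small hidden EFI slice itself, and the ZFS vdev ends up on slice 2. A sketch along the lines of what the guide suggests (the disk number is specific to your system):

    # Lay down a GPT with a single ZFS partition covering the disk; diskutil
    # silently adds the ~200 MB EFI slice, so the partition to hand to zpool
    # becomes disk2s2.
    sudo diskutil partitionDisk /dev/disk2 GPTFormat ZFS %noformat% 100%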
Considering that my disks are of different sizes, giving ZFS whole drives seemed like wasting good TBs (I would have to sacrifice a TB from each 3TB drive), but I am willing to do that if the performance hit from partitioning turns out to be significant. On the other hand, it would be good to have a few relatively small slices (1TB each 8-)) for things that do not require redundancy but do require Spotlight support.
My third question is How different is performance of a whole disk array vs. an array consisting of disk slices?
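To make the slicing concrete, here is a sketch of how one of the 3TB drives could be split: a 2TB slice that joins the pool, and part of the remainder as journaled HFS+ that Spotlight can index. The sizes, names, and disk number are illustrative, and the exact diskutil size syntax varies between OS X releases, so treat this as a shape rather than a recipe.

    # Two partitions on a GPT disk: a 2 TB slice destined for the ZFS pool and
    # the rest as HFS+ scratch space. The second size may need adjusting so
    # both fit once diskutil adds the hidden EFI slice.
    sudo diskutil partitionDisk /dev/disk3 2 GPT ZFS %noformat% 2000G JHFS+ Scratch 700G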
Finally, some of my drives, specifically the 20EARS and 30EZRSDTL, are Advanced Format (AF) drives with 4K sectors advertised as 512B sectors. I read about the sector size problem (the ‘ashiftgate’?) and how a wrong sector size may decrease performance. I downloaded the MacZFS source, applied the ashift patch suggested by JasonRM on the MacZFS mailing list, and compiled myself a copy of the zpool tool with ashift=12 hardcoded. Now I can use that version of zpool (zpool12) to initialize my pools.
My last question is How different is the performance of a pool created with ashift 9 vs one with ashift 12?
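Pool creation with the patched binary looks the same as with the stock tool, and the resulting ashift can be checked afterwards with zdb. The pool name and device numbers below are placeholders; the grep trick is a common way to inspect the cached pool configuration, though the exact output format may differ on this old code base.

    # Create the pool with the patched binary so the vdevs are labeled ashift=12
    sudo ./zpool12 create tank raidz2 /dev/disk1 /dev/disk2 /dev/disk3 /dev/disk4 /dev/disk5
    # Inspect the cached pool config; expect "ashift: 12" rather than "ashift: 9"
    sudo zdb | grep ashift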
The test system is an HP ProLiant MicroServer with an AMD Athlon II NEO processor @ 1.30 GHz and 4GB of DDR3 ECC memory, running OS X 10.6.8 and MacZFS 74.1.0. The disk write cache is enabled in the BIOS. At least for now, the kernel runs in 32-bit mode.
To test pools of different configurations I ran
- dd bs=1m count=10000 if=/dev/zero of=test
- dd bs=1m count=10000 of=/dev/null if=test
3 times on each configuration and averaged the results.
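A trivial wrapper makes the three runs repeatable. This is just a sketch; the mount point is an example, and since dd prints the transfer rate on stderr, the averaging was done by hand.

    #!/bin/sh
    # Run the sequential write and read test three times against the mounted pool.
    TESTFILE=/Volumes/tank/test   # example path; adjust to the pool under test
    for i in 1 2 3; do
        dd bs=1m count=10000 if=/dev/zero of="$TESTFILE"   # ~10 GiB sequential write
        dd bs=1m count=10000 if="$TESTFILE" of=/dev/null   # sequential read back
        rm "$TESTFILE"
    done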
I considered 3 arrays:
- 5 drives: 2x30EZRSDTL, 2x20EADS, and 1x20EARS
- 4 drives: 2x30EZRSDTL and 2x20EADS
- 3 drives: 2x20EADS and 1x20EARS
and also looked at individual drives. The speed numbers are in MiB/s.
Here I summarize the findings. Let’s start with the most dramatic result. Question 4: How different is the performance of a pool created with ashift 9 vs one with ashift 12?
With the AF drives (WD 20EARS, 30EZRSDTL), enabling ashift=12 (4K sectors) is a must.
A single-drive pool’s write speed increased by 28.3% for the 20EARS and by 7.7% for the 30EZRSDTL when the pool was created with ashift=12. The write speed was not affected for a single 20EADS. The read speed was not affected for any of the drives.
For the multi-disk array configurations the effect was even more pronounced.
Performance benefits range from 14% to 668% depending on the configuration. Apparently there is a huge performance hole when my collection of disks is joined into a raidz2 array with ashift=9: write and read speeds are abysmal at 12 MiB/s and 46 MiB/s. When using zpool12 to initialize the storage I observe 92 and 232 MiB/s for the same configuration.
Another exception is the read speed of the 3-drive raidz2 array (who would want to build this anyway?), which did not change significantly between ashift 9 and 12.
Question 1: How different are raidz and raidz2 arrays in performance?
The table shows that there is about a 30% drop in write speed when going from raidz to raidz2 with 5 drives. That’s OK; I can live with it, since I’m not going to write to the storage that often. There is also an 11% drop in read speed. That’s OK too, as long as the speed stays above gigabit rates (~119 MiB/s), since my network is going to be the system’s bottleneck anyhow. Of course, the speed will drop as the drives fill up, but I cannot test that right now.
Question 2: How different is performance of the array when ZFS uses whole disks vs. ZFS only using the disk partitions as recommended by the MacZFS guide? and Question 3: How different is performance of a whole disk array vs. an array consisting of disk slices?
| disk count | operation | whole drive | full disk partition | change vs full disk | 2TB partition | change vs full disk |
|---|---|---|---|---|---|---|
I see that I am taking about a 5% hit on writes and a 4% hit on reads when going from whole-disk vdevs to 2TB partitions as vdevs. But I’m saving 2TB of space in the process. I’ll take it. Did I say it’s a server on a budget?
My final configuration uses 6 disks (I have one more 30EZRSDTL; I could not experiment with it because at the time it held all my data). Now that the system is complete, there is a raidz2 array of 6x2TB partitions. The write and read speeds of the array at 25% of full capacity are 80.9 and 250.9 MiB/s.
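For completeness, a sketch of how such a pool might be assembled with the patched binary. The pool name, disk numbers, and slice numbers are assumptions; with the diskutil layout described earlier, the 2TB ZFS slice is s2 on each disk.

    sudo ./zpool12 create tank raidz2 /dev/disk1s2 /dev/disk2s2 /dev/disk3s2 /dev/disk4s2 /dev/disk5s2 /dev/disk6s2
    sudo zpool status tank   # confirm all six slices are online in the raidz2 vdev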
A note of caution. Firstly, the performance numbers reflect the sustained read/write speed of the array; I cannot draw any conclusions about random read/write performance. But given the nature of the storage, that’s less important. Secondly, I have an odd collection of disks and an odd system, so my conclusions may not transfer accurately to another setup; YMMV. I thought I would share them to provide a perspective on a real-life application of MacZFS.
Here is the output from bonnie++
    Version 1.03d       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
                  Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
                  300M 17916  32 82532  44 46038  44 47504  88 160887 71  1359  15
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  7718  96 32695  99 10259  98  7863  90 +++++ +++ 10386  97