MacZFS Speed

Introduction and experimental questions

I’m setting up a home server/NAS using Snow Leopard and MacZFS. I have an odd mixture of WD disks: 2x30EZRSDTL, 2x20EADS, and 1x20EARS. I want to join them in one large storage array, so, naturally, I am interested in what the optimal configuration would be from a performance/redundancy standpoint.

I need long-term archive storage for music, photos, and videos, so I am looking at a raidz2 or possibly raidz setup. Yes, I know that neither is a sufficient replacement for a good backup strategy, but I’m setting up an offsite backup for the most important data. I can potentially reconstruct the rest of the information; I just don’t want to lose it to a trivial disk error. I figure that raidz2 is a much more reliable configuration, albeit a slower one. But how much slower?

My first question is: How different are raidz and raidz2 arrays in performance?

I read that giving a whole drive to ZFS improves performance because ZFS can then make use of the disk write cache. On the other hand, setting the drives up following the MacZFS Quick Start guide (basically creating a small EFI partition on each drive) saves some hassle in getting the final pool to mount and unmount correctly.
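
To make the comparison concrete, the two setups differ only in what gets handed to zpool. Roughly like this, where disk1 and the pool name tank are placeholders, and the diskutil line is the one I recall from the guide, so check the guide for its exact invocation:

    # guide-style: GPT partitioning leaves a small EFI slice; ZFS gets slice 2
    diskutil partitiondisk /dev/disk1 GPTFormat ZFS %noformat% 100%
    sudo zpool create tank /dev/disk1s2

    # whole-disk: give zpool the raw device and let ZFS label it itself
    sudo zpool create tank /dev/disk1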

My second question is: How different is the performance of the array when ZFS uses whole disks vs. when ZFS only uses disk partitions, as recommended by the MacZFS guide?

Considering that my disks are of different sizes, giving ZFS whole drives seemed like wasting good TBs (I would have to sacrifice a TB from each 3TB drive), but I am willing to do that if the performance hit is significant. On the other hand, it would be good to have a few relatively small slices (1TB each 8-)) for things that do not require redundancy but do require Spotlight support.

My third question is: How different is the performance of a whole-disk array vs. an array consisting of disk slices?

Finally, some of my drives, specifically the 20EARS and 30EZRSDTL, are Advanced Format (AF) drives with 4K sectors advertised as 512B sectors. I read about the sector size problem (the ‘ashiftgate’?) and how the wrong sector size may decrease performance. I downloaded the MacZFS source, applied the ashift patch suggested by JasonRM on the MacZFS mailing list, and compiled myself a copy of the zpool tool with ashift=12 hardcoded. Now I can use that version of zpool (zpool12) to initialize my pools.
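
The patched binary is only needed at pool creation time; after that the stock tools manage the pool as usual. A minimal sketch of the difference, with the pool name tank and the device name as placeholders:

    # stock zpool: the vdev gets ashift=9 (512B sectors assumed)
    sudo zpool create tank /dev/disk1

    # patched build: same syntax, but the vdev is labeled with ashift=12
    sudo ./zpool12 create tank /dev/disk1

    # if your MacZFS build ships zdb, the resulting ashift can be checked
    # with something like: zdb -C tank | grep ashift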

My last question is: How different is the performance of a pool created with ashift 9 vs. one with ashift 12?

Experimental setup

The test system is an HP ProLiant MicroServer with an AMD Athlon II Neo processor @ 1.30 GHz and 4GB of DDR3 ECC memory, running OS X 10.6.8 and MacZFS 74.1.0. The disk write cache is enabled in the BIOS. At least for now, the kernel runs in 32-bit mode.

To test pools of different configurations I ran

  • dd bs=1m count=10000 if=/dev/zero of=test
  • dd bs=1m count=10000 of=/dev/null if=test

3 times on each configuration and averaged the results.
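
The runs are easy to script. A rough sketch of the loop, where the dataset path /tank/bench is hypothetical; writing zeros is only meaningful here because compression is off, which is the default:

    #!/bin/sh
    # run the 10GB write and read passes three times on the current pool
    cd /tank/bench || exit 1
    for i in 1 2 3; do
        dd bs=1m count=10000 if=/dev/zero of=test 2>&1 | tail -1   # write pass
        dd bs=1m count=10000 of=/dev/null if=test 2>&1 | tail -1   # read pass
        rm test
    done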

I considered 3 arrays:

  • 5 drives: 2x30EZRSDTL, 2x20EADS, and 1x20EARS
  • 4 drives: 2x30EZRSDTL and 2x20EADS
  • 3 drives: 2x20EADS and 1x20EARS

and also looked at individual drives. The speed numbers are in MiB/s.
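
Each configuration amounts to a zpool create, the dd runs, and a zpool destroy before moving on to the next layout. For example, for the 5-drive case (device names are placeholders):

    # 5-drive raidz with ashift=12 via the patched binary
    sudo ./zpool12 create tank raidz disk1 disk2 disk3 disk4 disk5
    # ... run the dd tests, then tear the pool down ...
    sudo zpool destroy tank

    # 5-drive raidz2, same drives
    sudo ./zpool12 create tank raidz2 disk1 disk2 disk3 disk4 disk5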

Results

Here I summarize the findings. Let’s start with the most dramatic result. Question 4: How different is the performance of a pool created with ashift 9 vs one with ashift 12?

With the AF drives (WD 20EARS, 30EZRSDTL), enabling ashift=12 (4K sectors) is a must.

  drive       operation   ashift=9   ashift=12   change
  30EZRSDTL   Write          76.04       81.85     7.6%
              Read          120.43      118.70    -1.4%
  20EADS      Write          66.78       67.54     1.1%
              Read           95.97       95.81    -0.2%
  20EARS      Write          55.36       71.05    28.3%
              Read           99.74       98.97    -0.8%

  Table 1. Single-drive pool. ZFS owns the whole disk.

A single-drive pool’s write speed increased by 28.3% for the 20EARS and by 7.6% for the 30EZRSDTL when the pool was created with ashift=12. The write speed was not affected for a single 20EADS, and the read speed was not affected for any of the drives.

For the multi-disk array configurations the effect was even more pronounced.

  disk count   operation   ashift=9   ashift=12   change
  5            Write          79.04      132.74    67.9%
               Read          175.76      262.01    49.1%
  4            Write          86.12       98.21    14.0%
               Read          172.72      202.69    17.4%
  3            Write          58.26       94.25    61.8%
               Read           82.22      114.12    38.8%

  Table 2. raidz. ZFS owns whole disks.

  disk count   operation   ashift=9   ashift=12   change
  5            Write          12.03       92.40   668.4%
               Read           46.18      232.11   402.6%
  4            Write          50.68       71.18    40.5%
               Read          126.17      170.74    35.3%
  3            Write          38.79       58.65    51.2%
               Read          116.38      113.68    -2.3%

  Table 3. raidz2. ZFS owns whole disks.

Performance benefits range from 14% to 668% depending on the configuration. Apparently there is a huge performance hole when my collection of disks is joined into a raidz2 array with ashift=9: write and read speeds are abysmal at 12 MiB/s and 46 MiB/s. When using zpool12 to initialize the storage I observe 92 and 232 MiB/s for the same configuration.

Another exception is the read speed of the 3-drive raidz2 array (who would want to build this anyway?), which did not change significantly between ashift 9 and 12.

Question 1: How different are raidz and raidz2 arrays in performance?

  disk count   operation    raidz    raidz2    change
  5            Write       132.74     92.40   -30.39%
               Read        262.01    232.11   -11.41%
  4            Write        98.21     71.18   -27.52%
               Read        202.69    170.74   -15.76%
  3            Write        94.25     58.65   -37.77%
               Read        114.12    113.68    -0.38%

  Table 5. ashift=12. ZFS owns whole disks.

The table shows that there is about a 30% drop in write speed when going from raidz to raidz2 with 5 drives. That’s OK; I can live with it, since I’m not going to write to the storage that often. There is also an 11% drop in read speed. That’s OK too, assuming the speed stays above 1 Gbit/s, because my network is going to be the system’s bottleneck anyhow. Of course, the speed will drop as the drives fill up, but I cannot test that right now.

Question 2: How different is the performance of the array when ZFS uses whole disks vs. only the disk partitions recommended by the MacZFS guide? and Question 3: How different is the performance of a whole-disk array vs. an array consisting of disk slices?

  disk count   operation   whole drive   full-disk partition   change   2TB partition   change
  5            Write             92.40                 90.98   -1.54%           87.88   -4.90%
               Read             232.11                222.66   -4.07%          223.28   -3.81%
  4            Write             71.18                 70.72   -0.65%           68.63   -3.59%
               Read             170.74                173.32    1.51%          168.61   -1.25%
  3            Write             58.65                 57.92   -1.25%
               Read             113.68                112.17   -1.33%

  Table 6. raidz2, ashift=12. Changes are relative to the whole-drive configuration.

I see that I am taking about a 5% hit on writes and a 4% hit on reads when going from whole-disk vdevs to 2TB partitions as vdevs. But I’m saving 2TB of space in the process. I’ll take it. Did I mention it’s a server on a budget?

My final configuration uses 6 disks: I have one more 30EZRSDTL, which I could not experiment with because at the time it held all my data. Now that the system is complete, there is a raidz2 array of 6x2TB partitions. The write and read speeds of the array at 25% capacity are 80.9 and 250.9 MiB/s.
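
For the record, the final pool is built the same way, just from the 2TB slices rather than raw devices. A sketch with hypothetical slice names (the actual sN numbers depend on how each disk ended up partitioned):

    # 6-wide raidz2 over the 2TB ZFS slices, ashift=12 via the patched binary
    sudo ./zpool12 create tank raidz2 disk1s2 disk2s2 disk3s2 disk4s2 disk5s2 disk6s2
    zpool status tank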

A note of caution. First, the performance numbers reflect the sustained read/write speed of the array; I cannot draw any conclusions about random read/write performance, but given the nature of the storage, that is less important. Second, I have an odd collection of disks and an odd system, so my conclusions may not transfer accurately to another setup (YMMV). I thought I would share them to provide a perspective on a real-life application of MacZFS.

Here is the output from bonnie++:

Version 1.03d

     ------Sequential Output------ --Sequential Input- --Random-
     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
300M 17916  32 82532  44 46038  44 47504  88 160887  71  1359  15
     ------Sequential Create------ --------Random Create--------
     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
  16  7718  96 32695  99 10259  98  7863  90 +++++ +++ 10386  97
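
Judging by the Size and files columns, the output above corresponds to an invocation along these lines (the test directory is a placeholder; -s is the file size in MB and -n the number of small files in multiples of 1024; bonnie++ may also insist that -r be lowered when the file size is smaller than twice the RAM):

    bonnie++ -d /tank/bench -s 300 -n 16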

Comments
  1. I don’t know if JasonRM deleted his github repository or what, but I can’t find that ashift patch anymore. There’s a code bounty on ‘-o ashift=12’ so if you could post that patch to the list or email it to me, that’d be gr8.

    • Anton said:

      This one is a slight change to the original patch: I hardcoded the ashift=12 value into zpool. Jason had code (commented out in this patch) that asked for the ashift value.

      --- maczfs/usr/src/cmd/zpool/zpool_vdev.c       2011-08-27 11:31:35.000000000 -0700
      +++ maczfs-a/usr/src/cmd/zpool/zpool_vdev.c     2011-08-27 11:41:47.000000000 -0700
      @@ -499,6 +499,25 @@
              verify(nvlist_add_string(vdev, ZPOOL_CONFIG_PATH, path) == 0);
              verify(nvlist_add_string(vdev, ZPOOL_CONFIG_TYPE, type) == 0);
              verify(nvlist_add_uint64(vdev, ZPOOL_CONFIG_IS_LOG, is_log) == 0);
      +
      +#ifdef __APPLE__
      +//    fprintf(stderr, "Please enter the ashift value you would like to use. Default is 9 (512b blocks).\n");
      +//    fprintf(stderr, "ASHIFT Value (9-14)[9]: ");
      +//    int ashift = 0;
      +//    char c, buff [ 13 ]; /* signed 32-bit value, extra room for '\n' and '\0' */
      +//    fgets(buff, sizeof buff, stdin) && sscanf(buff, "%d%c", &ashift, &c) == 2 && (c == '\n' || c == '\0');
      +//
      +//    if (ashift < 9 || ashift > 14){
      +//        ashift = 9;
      +//    }
      +//    fprintf(stderr, "Setting ashift=%d\n", ashift);
      +//    verify(nvlist_add_uint64(vdev, ZPOOL_CONFIG_ASHIFT, ashift) == 0);
      +
      +    int ashift = 12;
      +    fprintf(stderr, "Setting ashift=%d\n", ashift);
      +    verify(nvlist_add_uint64(vdev, ZPOOL_CONFIG_ASHIFT, ashift) == 0);
      +#endif
      +
              if (strcmp(type, VDEV_TYPE_DISK) == 0)
                      verify(nvlist_add_uint64(vdev, ZPOOL_CONFIG_WHOLE_DISK,
                          (uint64_t)wholedisk) == 0);
      
