Archive

NAS

I finally got a bit of time to write about the new server I built last year. It has been up and running since May 2012. It runs Mountain Lion Server with Open Directory, letting me manage multiple users and devices on my home network; it stores and shares my media collection, handles Time Machine backups, and serves as a web server for some of my small projects.

The Microserver now serves as an offsite backup unit storing nightly ZFS snapshots of the data. It seems to work fine for this purpose.

Let's start by listing the requirements I had for the server box. Here they are in no particular order.

  1. Capacity. I want the server to maintain a huge data storage pool. The box has to hold a lot of hard drives. How many hard drives should I plan for? That sort of comes down to the next topic…
  2. Reliability. I want the storage to be resilient to hard drive failures. If I use ZFS for storage, I can dedicate some drives to the actual data and some to redundant information that can recover files in case of a drive failure. (It does not work exactly like this, but it's OK for the analysis.) I estimated that I need space for 6 drives: 4 to hold the data and 2 for redundancy (see the sketch after this list). If I go with 2TB drives, that gives me 8TB of storage. If I use 3TB drives, it comes to 12TB, which should serve me well for quite a while.
  3. Blu-ray. I need space for a Blu-ray drive to watch movies and write an occasional disc.
  4. Finally, I need space for a system drive, because it’s very likely I would not be able to boot from a ZFS drive.
  5. The box has to be small. I have a Mac Pro at home and I do not need another box of similar size sitting around.
  6. It has to be quiet. The box will be sitting in a living room and I do not want to hear it.
  7. I want the box to run Mac OS X. I could configure a Linux box, but it would take time and effort that I do not want to expend.
  8. It has to be powerful enough to run some computational tasks, like indexing and searching of document collections.
  9. It’s a server, so it does not need a very good graphics card. At the same time, I want to be able to plug in a DVI monitor occasionally.
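In ZFS terms, the 4-data-plus-2-redundancy layout from requirement #2 is a six-drive raidz2 pool. Here is a minimal sketch of creating one; the pool name and the disk identifiers are hypothetical:

sudo zpool create tank raidz2 disk2 disk3 disk4 disk5 disk6 disk7
sudo zpool status tank    # should show all six disks under a single raidz2 vdev

With raidz2 any two of the six drives can fail without losing data.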

Why did I want to replace the Microserver? With 6 drives mounted inside, it has no space for an optical drive (fail on #3). It uses a very power-efficient CPU, which is fine for a file server but insufficient for any other tasks (fail on #8). It uses an AMD CPU, and OSX for AMD is getting less and less support from the hackintosh community; AMD compatibility with current OSX versions is falling behind. For example, the OSX kernel runs in 32-bit mode on an AMD CPU, while the latest ZEVO requires a 64-bit kernel. So I cannot update my ZFS setup on the Microserver beyond the ZEVO Developer Edition beta from last summer.

So, why did I not go with a Mac Pro? For three reasons: the Mac Pro is huge (#5), there is not enough space for 6 hard drives in it (#1), and it's expensive. Very expensive. Finally, while researching the Microserver, I got drawn into the experience of building a computer and I wanted to build one.

This story will have multiple parts. I will cover the hardware, the assembly, the system installation, the software, and the configuration. I plan to organize the notes I made, write down the reasons for the choices I made while assembling the server, and describe the lessons I learned along the way.

The Microserver works well for me except for a couple of small things: it noticeably chokes on concurrent reads and writes over the network, and it cannot handle any hard CPU work. I decided to see if I could put together a replacement with more power. I still want a small package, an Intel processor, and ECC support. I went looking for a board. There are two of them out there now: the Portwell WADE-8011 and the Intel S1200KP. The former is a dream with 6 SATA ports, but it's difficult to find, and I saw someone on a forum quoting a $390 asking price from Portwell. The latter is available everywhere, e.g., $170 at Newegg, but it only has 4 SATA ports. Putting it together with a 4-port SATA controller card, a small case, a Xeon E3-1235, and a decent power supply, the bill comes to about $800 before tax. Expensive.

The main problem is that I cannot get a good feel for whether a rig like that would run OSX. There are some hints on the forums that someone managed to get Lion running on the Xeon, but it's not clear if that was with the same board.

One of the 3TB drives started acting weird. I got emails from smartd that the drive kept failing the offline tests. I did a scrub – it found a couple of errors. The drive went to WD for replacement, and I got another one the next day.

Two weeks later another drive – a 2TB one – started misbehaving. smartctl showed 300+ “Pending Sectors”. I took the drive offline and wiped it with zeroes using diskutil zeroDisk /dev/diska; the Pending Sector count went down to 0 with no more errors reported. It looks like the drive fixed itself. I placed the drive back into the pool and resilvered it.
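For reference, the ZFS side of that exercise is only a few commands. A rough sketch, assuming the pool is named tank and the suspect drive shows up as disk3 (both names are hypothetical):

smartctl -a /dev/disk3            # review SMART attributes such as Current_Pending_Sector
sudo zpool offline tank disk3     # take the suspect drive out of the pool
diskutil zeroDisk disk3           # overwrite it with zeroes; forces pending sectors to reallocate
sudo zpool replace tank disk3     # put the now-blank drive back, which kicks off a resilver
sudo zpool status tank            # watch the resilver progress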

In part 2 I considered how the new kernel and some kernel TCP variables affect network transfer speed. In that analysis I evaluated disks shared over AFP and ran a simple file copy from and to the share. Here I compare AFP with NFS.

NFS and SMB vs. AFP. All numbers in MB/s.
             AFP      NFS   change vs AFP      SMB   change vs AFP
  Write    57.37    25.02         -90.57%    36.71         -24.00%
  Read     63.99    72.99          25.06%    41.50         -28.89%

The table shows that NFS is significantly faster than AFP when reading from the server, but it is also extremely slow when writing to the server. I cannot figure out why this is happening and I need to run more experiments.

For comparison, I also measured the same read/write operations while mounting the share via SMB. It is slower than AFP in both directions.
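For reference, all three protocols can be mounted from the command line, which makes it easy to script the copy tests. A sketch with placeholder server, share, and user names:

mkdir -p ~/mnt/nfs ~/mnt/afp ~/mnt/smb
sudo mount -t nfs server.local:/tank/media ~/mnt/nfs
mount_afp -i afp://user@server.local/media ~/mnt/afp    # -i prompts for the password
mount_smbfs //user@server.local/media ~/mnt/smb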

I found an interesting blog that enumerates possible hardware for a DIY home NAS. It's a very succinct analysis covering motherboards, cases, hard drive enclosures, etc. I probably did a very similar analysis when looking to build my NAS, but I did not have the time and energy to write about it. I'm glad someone did. Enjoy the read…

And, here is another blog post that describes different SATA controllers and boards suitable for software RAID and ZFS. Also very educational.

I have already looked at the network transfer speed and concluded that increasing the MTU significantly improves it. I decided to revisit this question after updating the kernel. The second reason is that I keep having weird issues with the network.

My two computers, the Microserver and the Mac Pro, are connected to an AirPort Extreme base station. Since I increased the MTU on the Mac Pro, some outgoing HTTP connections have stopped responding. For example, Xcode stopped accessing my svn at the 127.0.0.1 address. I also lost access to some web pages on the Mac Pro when using my external hostname: I can open http://my.hostname.net/ but http://my.hostname.net/afolder/ fails, even though the apache log claims it served the page. It looked like the data could not find its way back to me. Also, Wake on Demand stopped working for the Mac Pro. I was not happy with the situation. Update: Apparently, WOD does not work on the Mac Pro even after resetting the network parameters to their default values. It looks like a Lion problem (booting into 10.6.7 brings WOD back).

I also realized that my experiments measuring the network speed were somewhat flawed – I ran a copy from a disk on one machine to a disk on another machine. That speed is affected both by the network throughput and by the disk speed. This time I copied a 1GB file to and from a RAM disk on the Mac Pro to eliminate the Pro’s drives from the equation.

To make a RAM disk you can use this one-liner:

diskutil erasevolume HFS+ "ram disk" \
 `hdiutil attach -nomount ram://2500000`
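The number after ram:// is the size in 512-byte blocks, so 2500000 comes to roughly 1.2GB, enough to hold the 1GB test file. The copies themselves can then be timed with something along these lines; the share path is a placeholder:

mkfile 1g "/Volumes/ram disk/test.bin"                         # create the 1GB test file on the RAM disk
time cp "/Volumes/ram disk/test.bin" /Volumes/share/           # write to the server
time cp /Volumes/share/test.bin "/Volumes/ram disk/copy.bin"   # read it back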
Table 1. Comparing network speed transfers from a RAM disk on the server to the RAM disk on the client. The values are in MB/s.
           MTU 9000   MTU 1500   9000 vs 1500
  Write        90.5       87.6          0.53%
  Read         81.1       80.7          3.33%

You can see that with the new kernel the improvement from MTU 9000 is much less dramatic. The new kernel gave me a 10% speed improvement on local server IO, and I observe a similar increase in speed over the network. It looks like the new kernel is more efficient and gives the CPU more room to breathe.

You can also see that I’m getting almost 90MB/s writes and 80MB/s reads over AFP. The theoretical limit of a gigabit link is somewhere around 125MB/s. I did some experiments with iperf and saw transfer speeds of around 114MB/s, so my link is running close to the theoretical maximum. However, AFP does add some noticeable overhead to the transfer.
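For reference, the iperf check is a one-liner on each side; the server address below is a placeholder:

iperf -s                    # on the server
iperf -c 10.0.1.10 -t 30    # on the client: a 30-second TCP test against the server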

I have been reading TCP tuning guides on the web. Most of them suggest tweaking TCP parameters; specifically, they recommend increasing the values of net.inet.tcp.sendspace and net.inet.tcp.recvspace. The default values for these variables are 64K (check with sysctl net.inet.tcp.sendspace). I decided to raise them to 524288.
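To experiment before making anything permanent, the values can be inspected and raised at runtime; changes made this way are lost on reboot:

sysctl net.inet.tcp.sendspace net.inet.tcp.recvspace    # show the current values
sudo sysctl -w net.inet.tcp.sendspace=524288
sudo sysctl -w net.inet.tcp.recvspace=524288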

This time I measure the speed from the RAM disk on the client to the raidz array on the server:

Table 2. Comparing network speed transfers for different values of net.inet.tcp.sendspace and net.inet.tcp.recvspace. The values are in MB/s.
          default   increased   difference
  Write     48.30       57.37        9.64%
  Read      58.36       63.99       18.77%

Unfortunately, you cannot directly compare those numbers with the numbers in the old table – the experimental conditions were different. But you can see the overhead caused by using the physical drive vs. using the RAM disk.

Would changing the tcp constants increase the speed for the higher MTU value? A quick run showed that they do not affect it, but a more detailed analysis would be needed to explore that question. I’ll see if I can run more experiments. However, given the problems created by the higher MTU values, I’m inclined to go back to the default MTU setting and raise the tcp constants instead.
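Reverting the MTU is straightforward; en0 below stands in for whatever the wired interface is called:

sudo ifconfig en0 mtu 1500             # takes effect immediately, but is lost on reboot
sudo networksetup -setMTU en0 1500     # makes the change persistent across reboots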

I have created an /etc/sysctl.conf file with the following lines on both the server and the client:

net.inet.tcp.sendspace=524288
net.inet.tcp.recvspace=524288

There is a new legacy kernel out for 10.6.8. Let’s see if it makes a difference for my setup.

The numbers are in MB/s.
            old      new   change
  Write    84.8     96.7    14.0%
  Read    263.1    290.7    10.5%
Version 1.03d
       ------Sequential Output------ --Sequential Input- --Random-
       -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
  Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
  300M 32849  72 81068  40 45718  38 53773  94 160056  68  2092  19
       ------Sequential Create------ --------Random Create--------
       -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
   16  8128  88 12142  83  7367  93  7690  93 14005  99  7919  99

The ZFS array became significantly faster with the new kernel.
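The raw listing above is bonnie++ output (version 1.03d). For reference, a run along these lines produces that kind of table; the target directory is a placeholder:

bonnie++ -d /Volumes/tank/bench -s 300 -n 16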