Archive

Hardware

The server runs 6 Western Digital 3TB Green hard drives. They are quiet, cheap, run cool, and are fast enough for my storage needs. I also installed an LG Blu-ray combo drive in the optical slot and an OWC 60GB SSD in the slot underneath the optical bay.

The motherboard has only 4 SATA connectors and I needed to plug in 8 drives, so I needed a SATA card. This page gives a nice overview of why you do not need an expensive RAID card for a ZFS-based system. I needed a 4-port HBA card that would work with Mac OS X. Officially, the only computer where you would use such a card is a Mac Pro, and Mac Pros already have as many SATA ports as hard drive slots. The market for Mac-compatible HBA cards with internal SATA connectors is very small and only a few cards are available. One of them is the Sonnet Tempo. It is quite expensive, though. I took a gamble and bought a HighPoint Rocket 640L for about a third of the price of the Tempo. It is not officially supported by HighPoint as a Mac card, but its twin, the 644L, which has 4 external SATA connectors, is supported. It turned out I was lucky! The card works with Mac OS X out of the box and I could even boot from it.
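
For what it's worth, a quick generic way to confirm that OS X sees an add-on SATA controller and everything attached to it (nothing specific to the 640L, just the checks I would run):

    # List the SATA/AHCI controllers the system recognizes and the drives
    # attached to each; the add-on card should appear as its own entry.
    system_profiler SPSerialATADataType

    # Cross-check that all eight drives received device nodes.
    diskutil list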

The CPU is rated for 95W of power. The drives are rather efficient, drawing around 6W each at peak. The total power consumption of the system is well below 200W. Finding a good inexpensive PSU proved to be a challenge. I finally ended up with a SeaSonic S12II 380W. It is 80 Plus Bronze certified and has very positive reviews around the web. There are a few issues with the supply, though. Firstly, it is only Bronze certified, and you can now get Platinum certified PSUs, which produce one half to one third less heat at the server's power levels. Less heat means less work for the cooling system and less current draw from the wall, so these PSUs result in quieter and more efficient systems. Alas, they were not available when I was building the system. The second problem is that it is not modular: all the wires are firmly attached to the box, and you have to deal with the wires you are not using by finding a place to stuff them inside the case. Such a place is not easy to find in a case this small. I wanted to put in a modular PSU; I even had my eyes on the Seasonic 400W fanless PSU. However, it looks like most modular PSUs are also long: 160mm. The case is rated for a 140mm-long PSU. You can put in a 150mm-long one, but a 160mm PSU is a very tight fit and the cables are likely not going to bend well.
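
As a rough sanity check on that figure (my own back-of-the-envelope arithmetic from the numbers above, not a measurement): 95W for the CPU plus six drives at about 6W each comes to roughly 131W, so even with a generous allowance for the motherboard, RAM, SSD, optical drive, and fans, the total stays comfortably under 200W and the 380W supply has plenty of headroom.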

Finding a CPU cooler was interesting as well. The distance between the PSU and the motherboard is about 110mm, so you cannot put a tower heatsink in that space. Reading the Silent PC Review articles, I found a number of good heatsinks that would fit the case. It looked like the Noctua NH-L12 would fit and provide exceptional cooling performance. But looking over the compatibility page on the Noctua web site, I found out that the cooler blocks the PCIe slot on the Intel board, and I needed that slot for the HBA card. So I got one of the other coolers on that list: the Scythe Samurai ZZ.

One thing that surprised me while I was building the server is how small the selection of possible components was. Living in the Mac universe and listening to stories about how much variety and choice the PC side has, I sort of assumed that if you set out to build a computer from parts, there would be many different models to choose from. Not exactly true…

Let's start with the case. As I said, I wanted the server to be as small as possible. That comes down to a Mini-ITX motherboard and a corresponding case. So let's find a case designed for a Mini-ITX board that can hold 6 hard drives, one optical drive, and one SSD. After searching high and low I found only two cases that satisfy these requirements: the Lian Li PC-Q08 and the Lian Li PC-Q18.

Lian Li PC-Q08A

Lian Li PC-Q18A

If you know of any other cases that meet the above requirements, please drop me a note in the comment section.

The former case is about two years old; the latter showed up in stores right when I was building the server. After looking at the specs I picked the PC-Q08 for two simple reasons: it has space for a 110mm CPU cooler (the PC-Q18 can only accommodate an 80mm cooler) and it does not have the ugly Lian Li label on the front of the box. And it was about $50 cheaper at the time. There are some aspects where the newer case is better: it has a 140mm fan on top of the case (instead of 120mm); it has a dedicated motherboard tray inside (the older case mounts the motherboard on a side panel), with some space behind the tray for better cable management. It also has a bit more room for a PSU (160mm vs 140mm). Four of the drives mount on a “real” SATA backplane, making installation and management of the drives easier. I have read that the overall construction is more solid, resulting in significantly reduced vibration. However, cooler height was a very important factor for me, and I will talk about it in one of the next posts.

I finally got a bit of time to write about the new server I built last year. It has been up and running since May 2012. It runs Mountain Lion Server with Open Directory, allowing me to manage multiple users and devices on my home network; stores and shares my media collection; handles Time Machine backups; and serves as a web server for some of my small projects.

The Microserver now serves as an offsite backup unit, storing nightly ZFS snapshots of the data. It seems to work fine for this purpose.
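
I have not posted the exact scripts, but a nightly snapshot-and-send setup looks roughly like the sketch below; the pool and dataset names (tank/media, backup), the snapshot dates, and the host name microserver are placeholders rather than my actual configuration.

    # Take a date-stamped snapshot of the dataset on the main server.
    zfs snapshot tank/media@nightly-2013-04-02

    # Send the increment since the previous night's snapshot to the
    # Microserver over ssh and receive it into its backup pool.
    zfs send -i tank/media@nightly-2013-04-01 tank/media@nightly-2013-04-02 | \
        ssh microserver zfs receive backup/media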

Let's start by listing the requirements I had for the server box. Here they are, in no particular order.

  1. Capacity. I want the server to maintain a huge data storage pool. The box has to hold a lot of hard drives. How many hard drives should I plan for? That sort of comes down to the next topic…
  2. Reliability. I want the storage to be resilient to hard drive failures. If I use ZFS for storage, I can dedicate some drives to the actual data and some to redundant information that can recover files in case of a drive failure. (It does not work exactly like this, but it’s close enough for the analysis.) I estimated that I need space for 6 drives: 4 to hold the data and 2 for redundancy (see the sketch after this list). If I go with 2TB drives, that gives me 8TB of storage. If I use 3TB drives, it comes to 12TB, which should serve me well for quite a while.
  3. Blu-ray. I need space for a Blu-ray drive to watch movies and to write an occasional disc.
  4. Finally, I need space for a system drive, because it’s very likely I would not be able to boot from a ZFS drive.
  5. The box has to be small. I have a Mac Pro at home and I do not need another box of similar size sitting around.
  6. It has to be quiet. The box will be sitting in a living room and I do not want to hear it.
  7. I want the box to run Mac OS X. I could configure a Linux box, but it would take more time and effort than I want to expend.
  8. It has to be powerful enough to run some computational tasks, like indexing and searching of document collections.
  9. It’s a server, so it does not need a very good graphics card. At the same time, I want to be able to plug in a DVI monitor occasionally.
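
For the curious, the 4-data-plus-2-redundancy layout in item 2 corresponds to a single raidz2 vdev. A minimal sketch of creating such a pool is below; the pool name tank and the disk identifiers are hypothetical, and on Mac OS X with ZEVO the device names will be whatever diskutil reports.

    # Create a 6-drive raidz2 pool: any two drives can fail without data
    # loss, leaving roughly four drives' worth of usable capacity.
    zpool create tank raidz2 disk2 disk3 disk4 disk5 disk6 disk7

    # Check the resulting layout and health.
    zpool status tank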

Why did I want to replace the Microserver? With 6 drives mounted inside, it has no space for an optical drive (fail on #3). It uses a very power-efficient CPU, which is fine for a file server but insufficient for other tasks (fail on #8). And it uses an AMD CPU. OS X for AMD is getting less and less support from the hackintosh community, and AMD compatibility with the current OS X versions is falling behind. For example, the OS X kernel runs in 32-bit mode on an AMD CPU, while the latest ZEVO requires a 64-bit kernel. So I cannot update my ZFS setup on the Microserver beyond the ZEVO Developer Edition beta from last summer.
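
As an aside, a quick way to see which mode the kernel booted in is to look at its version string; on the OS X releases I have used, it ends in RELEASE_X86_64 for a 64-bit kernel and RELEASE_I386 for a 32-bit one.

    # The suffix of the version string shows the kernel architecture.
    uname -v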

So why did I not go with a Mac Pro? For three reasons: the Mac Pro is huge (#5), there is not enough space for 6 hard drives in a Mac Pro (#1), and it’s expensive. Very expensive. Finally, while researching the Microserver, I got drawn into the experience of building a computer and wanted to build one myself.

This story will have multiple parts. I will cover the hardware, the assembly, the system installation, the software, and the configuration. I plan to organize the notes I made, write down the reasons for the choices I made while assembling the server, and describe the lessons I learned along the way.

The Microserver works well for me except for a couple of small things: it noticeably chokes on concurrent reads and writes over the network, and it cannot do any heavy CPU work. I decided to see if I could put together a replacement with more power. I still want a small package, an Intel processor, and ECC support. I went looking for a board. There are two of them out there now: the Portwell WADE-8011 and the Intel S1200KP. The former is a dream with 6 SATA ports, but it’s difficult to find, and I saw someone on a forum quoting a $390 asking price from Portwell. The latter is available everywhere, e.g., $170 at Newegg, but it only has 4 SATA ports. Putting it together with a 4-port SATA controller card, a small case, a Xeon E3-1235, and a decent power supply, the bill comes to about $800 before tax. Expensive.

The main problem is that I cannot get a good feel for whether a rig like that would run OS X. There are some small hints on the forums that someone managed to get Lion running on the Xeon, but it’s not clear if that was the same board.

One of the 3TB drives started acting weird. I got emails from smartd saying that the drive keeps failing its offline tests. I ran a scrub, and it found a couple of errors. The drive went to WD for replacement, and I got another one the next day.
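
For reference, the checks involved look roughly like this; /dev/disk3 and the pool name tank are placeholders for whatever your setup uses.

    # Kick off the drive's own extended self-test, then read the results
    # (self-test log and SMART attributes).
    smartctl -t long /dev/disk3
    smartctl -a /dev/disk3

    # Have ZFS re-read and verify the checksum of every block in the pool.
    zpool scrub tank
    zpool status -v tank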

Two weeks later another drive, a 2TB this time, started to misbehave. smartctl showed 300+ “Pending Sectors”. I took the drive offline and wiped it with zeroes using the diskutil command-line tool (diskutil zeroDisk /dev/diska); the Pending Sector count went down to 0 with no more errors reported. It looks like the drive fixed itself. I placed the drive back into the pool and resilvered it.
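
One plausible version of that sequence, with the pool name tank and the disk identifier disk4 again as placeholders:

    # Stop ZFS from using the suspect drive.
    zpool offline tank disk4

    # Overwrite the whole drive with zeroes; this forces the firmware to
    # remap the sectors it had marked as pending.
    diskutil zeroDisk disk4

    # The wipe destroys the ZFS labels, so rejoin the drive by replacing
    # it with itself, which triggers a full resilver.
    zpool replace tank disk4
    zpool status tank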

In part 2 I considered how the new kernel and some kernel TCP variables affect network transfer speed. In that analysis I evaluated disks shared over AFP and ran a simple file copy from and to the share. Here I compare AFP with NFS (and, for reference, SMB).

NFS and SMB vs. AFP. All numbers are in MB/s.

           AFP     NFS     change vs AFP   SMB     change vs AFP
    Write  57.37   25.02   -90.57%         36.71   -24.00%
    Read   63.99   72.99   +25.06%         41.50   -28.89%

The table shows that NFS is significantly faster than AFP when reading from the server, but it is also extremely slow when writing to the server. I cannot figure out why this is happening and need to run more experiments.

For comparison, I also measured the same reads and writes with the share mounted via SMB. It is slower than AFP in both directions.
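
The measurement itself was just a large file copy; something along these lines reproduces it, with the user, server, share, and file names as placeholders rather than my exact setup.

    # Mount the same share each way in turn (AFP shown; NFS and SMB
    # variants commented out).
    mkdir -p /Volumes/test
    mount_afp afp://user@server/share /Volumes/test
    # mount_nfs server:/export/share /Volumes/test
    # mount_smbfs //user@server/share /Volumes/test

    # Write test: copy a large file to the share and time it.
    time cp ~/bigfile.mkv /Volumes/test/

    # Read test: stream the file back and discard it, so the local disk
    # is not timed. MB/s is the file size divided by the elapsed time.
    time dd if=/Volumes/test/bigfile.mkv of=/dev/null bs=1m

    umount /Volumes/test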

I found an interesting blog that enumerates possible hardware for a DIY home NAS. It’s a very succinct analysis covering motherboards, cases, hard drive enclosures, etc. I probably did a very similar analysis when looking to build my NAS, but I did not have the time and energy to write about it. I’m glad someone did. Enjoy the reading…

And, here is another blog post that describes different SATA controllers and boards suitable for software RAID and ZFS. Also very educational.