AppCode is still a toy. It might be fine for a small Objective-C project, but it fails miserably on C++ code. Its inline indexing and code analysis are slow and unresponsive, locking up the whole app when presented with a large C++ file. It also gives you a lot of false-positive warnings (and often errors) on code that compiles fine with clang.
I have to say, version 2 is better than version 1, which was a complete disaster. But it is still far from ready to replace Xcode.
It’s kind of a bummer that they are writing it in Java. IDEA is/was sort of OK: you are writing Java code in an IDE written in Java, and IDEA is cross-platform, so you’d better use a cross-platform language. I can also see why they are using Java for AppCode — they can reuse a lot of pieces from their other IDEs. But it is still slower and clunkier than a native app, and the interface looks all wrong…
The server runs 6 Western Digital 3TB Green hard drives. They are quiet, cheap, run cool, and are fast enough for my storage needs. I also installed an LG Blu-ray combo drive in the optical slot, and an OWC 60GB SSD in the slot underneath the optical bay.
The motherboard has only 4 SATA connectors and I needed to plug in 8 drives, so I needed a SATA card. This page gives a nice overview of why you do not need an expensive RAID card for a ZFS-based system. I needed a 4-port HBA card that would work with Mac OS X. Officially, the only computer where you would use such a card is a Mac Pro, and Mac Pros already have as many SATA ports as hard drive slots. The market for Mac-compatible HBA cards with internal SATA connectors is very small and only a few cards are available. One of them is the Sonnet Tempo. It is quite expensive, though. I took a gamble and bought a HighPoint Rocket 640L for about a third of the price of the Tempo. It is not officially supported by HighPoint as a Mac card, but its twin, the 644L, which has 4 external SATA connectors, is. It turned out I was lucky! The card works with Mac OS X out of the box and I could even boot from it.
The CPU is rated for 95W of power. The drives are rather efficient, drawing around 6W each at peak. The total power consumption of the system is well below 200W. Finding a good inexpensive PSU proved to be a challenge. In the end I settled on the SeaSonic S12II 380W. It is 80 Plus Bronze certified and has very positive reviews around the web. There are a few issues with this supply, though. Firstly, it’s only Bronze certified, and you can get Platinum-certified PSUs now, which produce one half to one third less heat at the server’s power levels. Less heat means less work for the cooling system and less current draw from the wall, so these PSUs result in quieter and more efficient systems. Alas, they were not available when I was building the system. The second problem is that it is not modular: all the wires are firmly attached to the box, and you have to deal with the wires you are not using by finding a place to stuff them inside the case. Such a place is not easy to find in a case this small. I wanted to put in a modular PSU; I even had my eyes on SeaSonic’s 400W fanless PSU. However, it looks like most modular PSUs are also long: 160mm. The case is rated for a 140mm PSU. You can put in a 150mm one, but a 160mm PSU is a very tight fit and the cables are not likely to bend well.
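As a sanity check, the power budget can be sketched with some back-of-the-envelope arithmetic. The SSD and optical drive figures below are my assumptions, and TDP is only a rough proxy for actual draw:

```shell
# Rough peak power budget for the build (all figures in watts)
cpu=95        # Xeon TDP, from the spec sheet
drive=6       # per WD Green drive at peak
drives=6
ssd=3         # assumed SSD draw
optical=8     # assumed Blu-ray combo drive draw at peak
total=$((cpu + drives * drive + ssd + optical))
echo "estimated peak draw: ${total}W"
```

Even with headroom for the motherboard and PSU conversion losses, that lands comfortably under 200W.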
Finding a CPU cooler was interesting as well. The distance between the PSU and the motherboard is about 110mm, so you cannot put a tower heatsink in that space. Reading the Silent PC Review articles, I found a number of good heatsinks that would fit the case. It looked like the Noctua NH-L12 would fit and provide exceptional cooling performance. But looking over the compatibility page on the Noctua website, I found out that the cooler blocks the PCIe slot on the Intel board, and I needed that slot for the HBA card. So I got one of the other coolers on that list: the Scythe Samurai ZZ.
I recall reading somewhere that one reason for the name Apple Computer was that the word “Apple” precedes “Atari” in a phonebook. So it is no surprise for a company name to serve hidden (or not so hidden) marketing goals. But sometimes the company name just screams marketing: the other week I ran into a company called Schiit Audio. Yes, it is pronounced the way you think it is pronounced. They make HiFi headphone amplifiers and DACs. The reviewer comments are full of easy puns like “that’s some serious s**t” and “I got my sh**t in the mail.”
While the name works and easily sticks in memory (pun intended), there is a danger here: the funny (and somewhat derogatory) company name gets linked to the products, and no smart craftsman wants that. To offset the effect, their product catalog reads like an index of Norse mythology. “What amp do you have? – I got a Valhalla” sounds much better than “… I have Schiit.” (Have you ever noticed how luxury car makers do it the other way around? Who knows the difference between an Acura TSX and an NSX? Who cares? It’s an Acura! While names like Civic and Accord are self-explanatory.)
The other danger is that with a name like Schiit, the amplifiers have to be at least above average, because every potential problem would be amplified (yes, pun intended) by the name: “I heard their amp can destroy your headphones — hey, they make some broken s**t!” So the name forces them to keep up the quality of their work.
So, does the name work? How would I know? I’m not an expert in HiFi. The reviews I read are full of praise. They are also full of disclaimers like “this is good, for the price they are asking.” Considering the price they are asking is way above what a sane person would pay for a piece of an audio playback chain, I’d say the products must be rather awesome.
One more comment: for a specialized engineering company, they have a very nice website. It is well-designed, clean, and has good photos. Some of the photos show devices with vacuum tubes sticking out. Apparently, vacuum tubes are all the rage with HiFi people. Some claim that tube amplifiers sound different from, and better than, solid-state amplifiers. But I was surprised that someone still makes vacuum tubes. I dug into the website and found out that (a) the tubes are almost the only thing they import and (b) they import them from Russia. Running a few searches on the web turned up a few sites selling the tubes. It looks like the highest-quality tubes do come from Russia. I guess that makes sense: the USSR had a very strong engineering-oriented industry that made very competitive electronic components, and then the industry got stuck in limbo because of the political and economic turmoil. When it advanced, it did not advance fast enough to completely abandon tube production, and now these tubes have found a serendipitous market. Fascinating. Well, score one for the old country.
Let’s talk about the motherboard. As I wrote in the previous post, I was looking for the smallest possible board: that’s the Mini ITX size. I wanted to put the latest Intel CPU on it, which means an LGA 1155 socket. I wanted the server to use ECC memory. There are a lot of discussions on the web talking ECC up and down. After reading some boards and some papers, I came to the conclusion that if I am building a box to handle a huge amount of data, I’d rather minimize the chance of that data getting corrupted in memory. Finally, I wanted decent graphics support from the Intel CPU itself, which means the C206 chipset. I could find only two motherboards that satisfy these requirements: the WADE-8011 from Portwell and the S1200KP from Intel. The latter was widely available and relatively cheap compared to the former, so the decision was simple: I ordered the Intel board. Please note that the S1200KP does not support Ivy Bridge CPUs; Intel has an updated version, the S1200KPR, that supports both Sandy and Ivy Bridge.
The choice of the CPU was pretty straightforward as well — I went with the cheapest Xeon that supported graphics and 8 threads: the E3-1235. In retrospect, I could have used a less power-hungry CPU, e.g., the i3-2125. I thought the 30W tradeoff was worth the additional speed and performance. I might have been wrong, but boy, does this thing fly!
One thing that surprised me while I was building the server is how small the selection of possible components was. Living in the Mac universe and listening to stories about how much variety and choice the PC side has, I sort of assumed that if you are building a computer from parts, there will be many different models to choose from. Not exactly true…
Let’s start with the case. As I said, I wanted the server to be as small as possible. That comes down to a Mini ITX motherboard and a corresponding case. Let’s find a case designed for a Mini ITX board that can hold 6 hard drives, one optical drive, and one SSD. After searching high and low, I found only two cases that satisfy these requirements: the Lian Li PC-Q08 and the Lian Li PC-Q18.
Lian Li PC-Q08A
Lian Li PC-Q18A
If you know of any other cases that meet the above requirements, please drop me a note in the comment section.
The former case is about two years old; the latter showed up in stores right when I was building the server. After looking at the specs, I picked the PC-Q08 for a few simple reasons: it has space for a 110mm CPU cooler (the PC-Q18 can only accommodate an 80mm cooler), it does not have the ugly Lian Li label on the front of the box, and it was about $50 cheaper at the time. There are some aspects where the newer case is better: it has a 140mm fan on top of the case (instead of 120mm), and it has a dedicated motherboard tray inside (the older case mounts the motherboard on a side panel) with some space behind the tray for better cable management. It also has a bit more room for a PSU (160mm vs 140mm). Four of the drives mount on a “real” SATA backplane, making installation and management of the drives easier. I have read that the overall construction is more solid, resulting in significantly less vibration. However, the cooler height was a very important factor for me, and I will talk about it in one of the next posts.
I finally got a bit of time to write about the new server I built last year. It has been up and running since May 2012. It runs Mountain Lion Server with Open Directory, allowing me to manage multiple users and devices on my home network; it stores and shares my media collection, handles Time Machine backups, and serves as a web server for some of my small projects.
The Microserver now serves as an offsite backup unit, storing nightly zfs snapshots of the data. It seems to work fine for this purpose.
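A nightly backup job along these lines could produce those snapshots and ship them to the Microserver. The pool name, host name, and target dataset below are made up for illustration, and the `date -v` adjustment syntax is the BSD/OS X flavor:

```shell
# Snapshot the pool and send the nightly increment offsite (hypothetical names)
POOL=tank
TODAY=$(date +%Y-%m-%d)
YESTERDAY=$(date -v-1d +%Y-%m-%d)

zfs snapshot -r "${POOL}@${TODAY}"
zfs send -R -i "${POOL}@${YESTERDAY}" "${POOL}@${TODAY}" | \
  ssh microserver zfs receive -d backup
```

Incremental sends keep the nightly transfer small: only the blocks changed since yesterday’s snapshot cross the wire.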
Let’s start by listing the requirements I had for the server box. Here they are, in no particular order.
1. Capacity. I want the server to maintain a huge data storage pool, so the box has to hold a lot of hard drives. How many hard drives should I plan for? That sort of comes down to the next topic…
2. Reliability. I want the storage to be resilient to hard drive failures. If I use zfs for storage, I can dedicate some drives to the actual data and some to redundant information that can recover files in case of a drive failure. (It does not work exactly like this, but it’s close enough for the analysis.) I estimated that I need space for 6 drives: 4 to hold the data and 2 for redundancy. If I go with 2TB drives, that gives me 8TB of storage. If I use 3TB drives, it comes to 12TB, which should serve me well for quite a while.
3. Blu-ray. I need space for a Blu-ray drive to watch movies and write an occasional disc.
4. I need space for a system drive, because it’s very likely I would not be able to boot from a ZFS drive.
5. The box has to be small. I have a Mac Pro at home and I do not need another box of similar size sitting around.
6. It has to be quiet. The box will be sitting in a living room and I do not want to hear it.
7. I want the box to run Mac OS X. I could configure a Linux box, but it would take more time and effort than I want to expend.
8. It has to be powerful enough to run some computational tasks, like indexing and searching document collections.
9. It’s a server, so it does not need a very good graphics card. At the same time, I want to be able to plug in a DVI monitor occasionally.
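The capacity arithmetic in the reliability item works out like this. A raidz2-style layout (2 drives’ worth of redundancy) is assumed; real-world ZFS usable space is slightly lower because of metadata overhead:

```shell
# Usable space with 6 drives, 2 of which are dedicated to redundancy
drives=6
redundancy=2
usable_2tb=$(( (drives - redundancy) * 2 ))   # with 2TB drives
usable_3tb=$(( (drives - redundancy) * 3 ))   # with 3TB drives
echo "2TB drives: ${usable_2tb}TB; 3TB drives: ${usable_3tb}TB"
```

With raidz2 the pool also survives any two simultaneous drive failures, which is what makes the 2-drive redundancy worth its cost here.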
Why did I want to replace the Microserver? With 6 drives mounted inside, it has no space for an optical drive (fail on #3). It uses a very power-efficient CPU, which is fine for a file server but insufficient for any other tasks (fail on #8). And it uses an AMD CPU. OS X for AMD is getting less and less support from the hackintosh community, and AMD compatibility with current OS X versions is falling behind. For example, the OS X kernel runs in 32-bit mode on an AMD CPU, while the latest ZEVO requires a 64-bit kernel. So I cannot update my ZFS setup on the Microserver beyond the ZEVO Developer Edition beta from last summer.
So why did I not go with a Mac Pro? For three reasons: a Mac Pro is huge (fail on #5), there is not enough space for 6 hard drives in it (fail on #1), and it’s expensive. Very expensive. Finally, while researching the Microserver, I got drawn into the experience of building a computer, and I wanted to build one myself.
This story will have multiple parts. I will cover the hardware, the assembly, the system installation, the software, and the configuration. I plan to organize the notes I made, write down the reasons for the choices I made while assembling the server, and describe the lessons I learned along the way.
Reading the ZEVO forums at greenbytes.com, I ran into a nifty way to solve most of the ZFS-over-AFP sharing problems. To recap: I have a few zfs file systems defined on my server pool. When I enable file sharing, only some of those systems show up as volumes on the network, and there seems to be no clear indication of why some show up as volumes users can connect to while others do not. From reading comments around the web, it looks like Apple is now hardcoding some HFS file system properties into the software, which relegates alternative file systems to the role of second-class citizens and breaks functionality (including AFP sharing) for them. Bad Apple.
The solution is to trick the AFP server into believing that every file system it sees is HFS. A small library installed on the server does exactly that, and now all volumes show up in the AFP network browser on the client machine.
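For what it’s worth, once the shim is in place, a zfs volume can be managed like any other share point with the stock OS X `sharing` tool; the mount path and share name below are made up for illustration:

```shell
# Add an AFP share point for a zfs volume, then list share points to verify
sudo sharing -a /Volumes/media -A Media
sharing -l
```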