Reading the ZEVO forums, I ran into a nifty way to solve most of my ZFS-over-AFP sharing problems. To recap: I have a few ZFS file systems defined on my server pool. When I enable file sharing, only some of those file systems show up as volumes on the network, and there is no clear indication of why some show up as volumes that users can connect to while others do not. From reading comments around the web, it looks like Apple is now hardcoding some HFS file system properties into the software, which relegates alternative file systems to the role of second-class citizens and breaks functionality (including AFP sharing) for them. Bad Apple.

The solution is to trick the AFP server into believing that every file system it sees is HFS. I installed a small library on the server, and now all volumes show up in the AFP network browser on the client machine.


I spent a weekend trying to get a client machine to automount a volume from the server via Samba. I need three users on the client to be able to read and write files on the server. I also want to preserve the information about each file’s creator, so each user has his or her own login credentials for the server; I don’t want a single shared login for everyone. That means the volume has to be mounted three times, each time for a different user with different login credentials. Finally, I need the mount point to be well-defined: the traditional “Connect to Server” in Finder does not work too well here, because it uses /Volumes as the mount location and names each new volume mount sequentially, e.g., Media, Media1, Media2, etc.

One solution that looked promising was to automount the volume into each user’s home directory under that user’s server login credentials. The volume would then look like just another folder, with all files owned by the user. I looked over the autofs guide and created a direct map file that looked like this:

/Users/user1/Media -fstype=smbfs,soft ://user1:pwd1@server.local/Media
/Users/user2/Media -fstype=smbfs,soft ://user2:pwd2@server.local/Media
/Users/user3/Media -fstype=smbfs,soft ://user3:pwd3@server.local/Media
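A direct map file on its own does nothing; autofs only consults maps that are listed in /etc/auto_master. As a sketch of the registration step (the map file name /etc/auto_direct_media is my own choice, not a standard one):

```shell
# "/-" in auto_master marks a direct map; the second field names the map file
ENTRY='/-    /etc/auto_direct_media'
printf '%s\n' "$ENTRY"
# on the real machine, append the entry and flush the automounter cache:
#   echo "$ENTRY" | sudo tee -a /etc/auto_master
#   sudo automount -vc
```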

It looked like it was supposed to work. And it did work, for two users. For the third, the mounted directory assumed root ownership and was inaccessible. I relaunched automount a few times, and suddenly all three users could access their respective Media folders. Then I rebooted the client machine; now users 2 and 3 could see, mount, and use the folders, but user1 could not. I spent most of a day trying to figure out why the mounting was so unstable, and finally gave up on this approach.

My current solution, the one that seems to work reliably, is to create an indirect map for every user into a hidden folder somewhere on the startup disk:

sudo mkdir /UsersVolumes
sudo chflags hidden /UsersVolumes

add this to /etc/auto_master

/UsersVolumes my_indirect_map -nosuid

and then /etc/my_indirect_map looks like

user1 \
     /Media -fstype=smbfs,soft ://user1:pwd1@server.local/Media

user2 \
     /Media -fstype=smbfs,soft ://user2:pwd2@server.local/Media

user3 \
     /Media -fstype=smbfs,soft ://user3:pwd3@server.local/Media
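The three stanzas are identical except for the user name, so the map file can be generated. A sketch (the user list and the password scheme are placeholders; in the real setup each password is different and should come from somewhere safer than a script):

```shell
# generate an autofs indirect map: one key per user, each mounting
# the same SMB share under that user's credentials
MAP=$(mktemp)
for u in user1 user2 user3; do
  p="pwd_for_${u}"   # placeholder password
  printf '%s \\\n     /Media -fstype=smbfs,soft ://%s:%s@server.local/Media\n\n' \
    "$u" "$u" "$p" >> "$MAP"
done
cat "$MAP"
```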

I also put a soft link in every user’s home folder to the appropriate Media folder.

cd /Users/user1
ln -s /UsersVolumes/user1/Media
chmod -h 0700 Media

The last line should ensure that only user1 will be able to access the link and trigger the mount.
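The link-and-chmod step repeats for every user, so it can be looped. A sketch (HOME_ROOT defaults to a scratch directory so this is harmless to try; on the server it would be /Users, run as root):

```shell
# create a per-user symlink to the automounted share and lock the link down
HOME_ROOT="${HOME_ROOT:-$(mktemp -d)}"   # /Users on the real server
VOLS="${VOLS:-/UsersVolumes}"
for u in user1 user2 user3; do
  mkdir -p "$HOME_ROOT/$u"
  ln -sfn "$VOLS/$u/Media" "$HOME_ROOT/$u/Media"
  # -h changes the link itself, not its target (BSD/macOS chmod only)
  chmod -h 0700 "$HOME_ROOT/$u/Media" 2>/dev/null || true
done
readlink "$HOME_ROOT/user1/Media"
```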

It is useful to have a mail server running on the server machine. For example, the S.M.A.R.T. monitoring daemon can let me know when a problem occurs with one of the disks. Here is a very detailed guide on how to set up postfix.
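With postfix running, smartd just needs to be told where to send its alerts. A minimal /etc/smartd.conf sketch (the address is a placeholder; -M test sends one trial message at daemon startup so you can confirm delivery works):

```
# scan all devices, monitor all SMART attributes, mail alerts to admin
DEVICESCAN -a -m admin@example.com -M test
```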

Setting up the server so that files created on a share are always readable (and writable) by a group proved to be a bit tricky. Lion clients tend to create files that are accessible only by the user who created them. That works well for private folders but creates problems for common shares like media archives: if one user saves a photo on the common share, another user cannot access it even if they are both in the same group. So here are the steps to share the share:

  1. assign a common group to the share: sudo chgrp -R media /Volumes/Media
  2. set the setgid bit on the directory, so files created in it inherit the required group ownership: sudo chmod g+s /Volumes/Media
  3. set an ACL for the media group that allows reading and writing on the share, with inheritance applied to files, folders, and all descendants. You can do it from the command line; I used Sandbox, a free tool by Michael Watson.
  4. propagate the ACL permissions down the share subtree. Use Sandbox again.
  5. enable ACL for samba shares: sudo defaults write /Library/Preferences/SystemConfiguration/ AclsEnabled -bool YES
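Steps 1 and 2 can be verified without any ACL tooling: once the setgid bit is set on a directory, new files inherit the directory’s group. A quick check (a sketch using a scratch directory and the invoking user’s group rather than the real media group):

```shell
# demonstrate group inheritance via the setgid bit
SHARE=$(mktemp -d)
chmod g+s "$SHARE"                 # step 2: setgid on the directory
touch "$SHARE/photo.jpg"           # a file created "on the share"
# the file's group should match the directory's group,
# not necessarily the creator's primary group
ls -ld "$SHARE" | awk '{print $4}'
ls -l  "$SHARE/photo.jpg" | awk '{print $4}'
```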

The new Samba file sharing in Lion (and in Mountain Lion) sometimes breaks things. I have a ZFS drive that I’m sharing over Samba from Lion, and a strange thing is happening: I cannot mount the share from the command line on another machine:

> mount -t smbfs '//user:pwd@server.local/Media' /Users/user/Media
mount_smbfs: server rejected the connection: Authentication error

However, if I go to the server, disable and enable file sharing, everything works as expected. I traced the problem to a race condition during the server OS startup. Apparently, file sharing starts up before some security configuration is finalized, so when I try to mount the share, the server fails to correctly authenticate the request (I see errors in kdc.log: NTLM domain not configured). If I restart the file sharing, all the prerequisites are in place and authentication succeeds. I added a small startup script to /Library/LaunchDaemons that restarts smbd after the system is done loading:

cat >
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
        <string>sleep 60;touch "/Library/Preferences/SystemConfiguration/"</string>
</plist>

Update: Do not forget to change the owner of the file to root and change the permissions:

sudo chown root:wheel
sudo chmod 0644

It will ask you for an administrator password.
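Since the snippet above lost some of its structure, here is roughly what the complete LaunchDaemon plist looks like (the label and the SMB_PREFS path are placeholders of my own; the real touch target is the SMB preferences file elided above). Written to a scratch path here so the sketch is harmless to run:

```shell
# write a LaunchDaemon that restarts smbd a minute after boot
# (label, file name, and SMB_PREFS path are placeholders)
PLIST="${PLIST:-$(mktemp)}"            # really /Library/LaunchDaemons/<label>.plist
SMB_PREFS="/path/to/smb/preferences"   # placeholder for the elided path
cat > "$PLIST" <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>local.restart-smbd</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>-c</string>
        <string>sleep 60; touch "$SMB_PREFS"</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
EOF
grep -c '<key>' "$PLIST"
```

Touching the preferences file is what makes launchd restart smbd, which is exactly what the manual disable/enable of file sharing was doing.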

The MicroServer works well for me except for a couple of small things: it noticeably chokes on concurrent reads and writes over the network, and it cannot do any heavy CPU work. I decided to see if I could put together a replacement with more power. I still want a small package, an Intel processor, and ECC support. I went looking for a board. There are two of them out there now: the Portwell WADE-8011 and the Intel S1200KP. The former is a dream with 6 SATA ports, but it’s difficult to find, and I saw someone on a forum quoting the $390 Portwell is asking for it. The latter is available everywhere, e.g., $170 at Newegg, but it only has 4 SATA ports. Putting it together with a 4-port SATA controller card, a small case, a Xeon E3-1235, and a decent power supply, the bill comes to about $800 before tax. Expensive.

The main problem is that I cannot get a good feel for whether a rig like that would run OS X. There are some hints on the forums that someone managed to get Lion running on the Xeon, but it’s not clear whether that was the same board.

One of the 3TB drives started acting weird. I got emails from smartd saying that the drive keeps failing its offline tests. I did a scrub; it found a couple of errors. The drive went to WD for replacement, and I got another one the next day.

Two weeks later another drive, a 2TB one, started misbehaving: smartctl showed 300+ “Pending Sectors”. I took the drive offline and wiped it with zeroes using diskutil: diskutil zeroDisk /dev/diska. The pending sector count went down to 0, with no more errors reported. It looks like the drive fixed itself. I placed the drive back into the pool and resilvered it.
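To keep an eye on that attribute without waiting for a smartd email, the Current_Pending_Sector count can be pulled out of smartctl -A output. A sketch (it parses a canned one-line excerpt here, since the machine running it may have no failing disk; on the server the input would come from smartctl -A against the real device):

```shell
# extract the raw pending-sector count from smartctl -A output;
# in the ATA attribute table, field 2 is the attribute name and
# the last field is the raw value
pending_sectors() {
  awk '$2 == "Current_Pending_Sector" {print $NF}'
}

# canned smartctl -A excerpt standing in for a real report
REPORT='197 Current_Pending_Sector  0x0032   199   199   000    Old_age   Always       -       312'
printf '%s\n' "$REPORT" | pending_sectors   # prints 312
```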