I have already looked at network transfer speeds and concluded that increasing the MTU significantly improves throughput. I decided to revisit the question after updating the kernel. The second reason is that I keep running into weird network issues. My two computers, the Microserver and the Mac Pro, are connected to an AirPort Extreme base station. Since I increased the MTU on the Mac Pro, some outgoing HTTP connections have stopped responding. For example, Xcode stopped accessing my svn repository at the 127.0.0.1 address. I also lost access to some web pages on the Mac Pro when using my external hostname: e.g., I can access http://my.hostname.net/, but accessing http://my.hostname.net/afolder/ fails, even though the Apache log claims the page was served. It looked like the data could not find its way back to me.
Also, Wake on Demand stopped working on the Mac Pro. I was not happy with the situation. Update: apparently, Wake on Demand does not work on the Mac Pro even after resetting the network parameters to their default values. It looks like a Lion problem (booting into 10.6.7 brings Wake on Demand back).
I also realized that my earlier experiments measuring network speed were somewhat flawed – I copied from a disk on one machine to a disk on another, so the result was affected both by network throughput and by disk speed. This time I copied a 1 GB file to and from a RAM disk on the Mac Pro to eliminate the Pro’s drives from the equation.
To make a RAM disk you can use the one-liner
diskutil erasevolume HFS+ "ram disk" `hdiutil attach -nomount ram://2500000`
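The ram://2500000 argument is a sector count, not a byte count: hdiutil uses 512-byte sectors, so this creates a disk of roughly 1.28 GB, comfortably large enough for the 1 GB test file. A quick sanity check of the arithmetic:

```python
# ram://N allocates N 512-byte sectors; compute the resulting
# RAM disk capacity for the one-liner above.
SECTOR_SIZE = 512          # bytes per sector
sectors = 2_500_000        # the argument passed to ram://

size_bytes = sectors * SECTOR_SIZE
size_gb = size_bytes / 10**9

print(f"{size_bytes} bytes = {size_gb:.2f} GB")  # 1280000000 bytes = 1.28 GB
```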
[Table: AFP transfer speeds at MTU 9000 vs. MTU 1500, and the ratio between them]
You can see that the improvement for MTU 9000 with the new kernel is less dramatic. I saw a 10% speed improvement in local server I/O; now I observe a similar increase over the network. It looks like the new kernel is more efficient and gives the CPU more room to breathe.
You can also see that I’m getting almost 90 MB/s writes and 80 MB/s reads over AFP. The theoretical limit of a gigabit link is somewhere around 125 MB/s. I did some experiments with iperf and saw transfer speeds around 114 MB/s, so my link is running close to the theoretical maximum. However, AFP does add some noticeable overhead to the transfer.
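For illustration, the kind of measurement iperf performs can be sketched in a few lines: push bytes through a TCP socket and time it. This is not iperf’s actual implementation, just a toy version of the idea; over loopback it measures memory and CPU rather than the wire, but the mechanics are the same.

```python
# Toy throughput measurement in the spirit of iperf: a server thread
# drains a TCP connection while the client times a bulk transfer.
import socket
import threading
import time

PAYLOAD = b"x" * 65536           # send in 64 KB chunks
TOTAL = 64 * 1024 * 1024         # 64 MB in total (illustrative size)

def run_server(server_sock, result):
    conn, _ = server_sock.accept()
    received = 0
    while True:
        data = conn.recv(1 << 20)
        if not data:                 # client closed the connection
            break
        received += len(data)
    conn.close()
    result["received"] = received

def measure_throughput():
    server = socket.socket()
    server.bind(("127.0.0.1", 0))    # any free loopback port
    server.listen(1)
    result = {}
    t = threading.Thread(target=run_server, args=(server, result))
    t.start()

    client = socket.socket()
    client.connect(server.getsockname())
    start = time.monotonic()
    sent = 0
    while sent < TOTAL:
        client.sendall(PAYLOAD)
        sent += len(PAYLOAD)
    client.close()
    t.join()
    elapsed = time.monotonic() - start
    server.close()
    return result["received"], elapsed

if __name__ == "__main__":
    received, elapsed = measure_throughput()
    print(f"{received / elapsed / 1e6:.0f} MB/s over loopback")
```

A real tool like iperf additionally lets you set the socket buffer size (its -w flag), which is exactly the knob the TCP tuning below adjusts system-wide.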
I have been reading TCP tuning guides on the web. Most of them suggest tweaking TCP parameters; specifically, they recommend increasing the values of net.inet.tcp.sendspace and net.inet.tcp.recvspace. The default values of these variables are 64K (you can check them with sysctl net.inet.tcp.sendspace net.inet.tcp.recvspace). I decided to raise them to 524288.
This time I measured the speed from the RAM disk on the client to the raidz array on the server:
Unfortunately, you cannot directly compare these numbers with those in the old table – the experimental conditions were different. But you can see the overhead of using a physical drive vs. the RAM disk.
Would raising the TCP constants increase the speed at the higher MTU value? A quick run showed that they do not affect it, but a more detailed analysis would be needed to explore that question. I’ll see if I can run more experiments. However, given the problems created by the higher MTU value, I’m inclined to go back to the default MTU setting and raise the TCP constants instead.
I have created an /etc/sysctl.conf file with the following lines on both the server and the client:
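Given the 524288 value chosen above, the file would contain the two buffer variables in sysctl.conf’s key=value format:

```
net.inet.tcp.sendspace=524288
net.inet.tcp.recvspace=524288
```

These settings take effect on the next boot; to apply them immediately without rebooting, you can run sudo sysctl -w with the same key=value pairs.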