Rolande's Ramblings
Information fit to ignore
Posted by: rolande | December 30, 2010

Performance Tuning the Network Stack on Mac OS X

(Image: https://rolande.files.wordpress.com/2010/12/osx-hammer.jpg)

There is a decent amount of documentation out there that details all of the tunable parameters on the Mac OS X IP stack. However, most of these documents either provide basic suggestions without much background on a particular setting, or they discuss some of the implications of changing certain parameters but don't give you very solid guidance or recommendations on the best configuration in a particular scenario. Many of the parameters are dependent upon others, so the configuration should be addressed with that in mind. This document applies to OS X 10.5 Leopard, 10.6 Snow Leopard, and 10.7 Lion.

The closest thing I have found to a full discussion and tutorial on the topic can be found here
(http://www.macgeekery.com/tips/configuration/mac_os_x_network_tuning_guide_revisited).

The above document is a great reference to bookmark. I thought I would also include my own
thoughts on this topic to shed some additional light on the subject.

UPDATED

This post has generated a fair amount of feedback due to issues encountered. I have attempted to provide an update to address certain strange connectivity behaviors. I believe that most issues were caused by the aggressive TCP keepalive timers. I have not yet been able to recreate any of the strange side effects with these updates. So far, so good. After nearly 5 months of using these settings, I can report that all local applications and systems are running well. I definitely notice more zip in webpage display inside Chrome, and I am able to sustain higher throughput on various speed tests compared to before.

For reference, here are the custom settings I have added to my own sysctl.conf file:

kern.ipc.maxsockbuf=4194304
kern.ipc.somaxconn=512
kern.ipc.maxsockets=2048
kern.ipc.nmbclusters=2048
net.inet.tcp.rfc1323=1
net.inet.tcp.win_scale_factor=3
net.inet.tcp.sockthreshold=16
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
net.inet.tcp.mssdflt=1440
net.inet.tcp.msl=15000
net.inet.tcp.always_keepalive=0
net.inet.tcp.delayed_ack=0
net.inet.tcp.slowstart_flightsize=4
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
net.inet.icmp.icmplim=50

The easiest way to edit this file is to open a Terminal window and execute sudo nano
/etc/sysctl.conf. The sudo command allows you to elevate your rights to admin. You will be
prompted to enter your password if you have admin rights. nano is the name of the command
line text editor program. The above entries just get added to this file one line at a time.

You can also update your running settings without rebooting by using the sudo sysctl -w
command. Just append each of the above settings one at a time after this command.
kern.ipc.maxsockets and kern.ipc.nmbclusters can only be modified from the sysctl.conf file upon
reboot.
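
For example, to apply one of the settings above to the running system and then read it back to verify, something like this works in Terminal (a minimal sketch; substitute whichever parameter you are adjusting):

sudo sysctl -w net.inet.tcp.delayed_ack=0
sysctl net.inet.tcp.delayed_ack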

Below you will find my explanations of each of the parameters I have customized or included in my sysctl.conf file:

1. One suggestion out there is to set the kern.ipc.maxsockbuf value to the sum of the net.inet.tcp.sendspace and net.inet.tcp.recvspace variables. The key is that this value can't be any less than the sum of those two values, or it can cause some fatal errors with your system. The default value, at least what I found on 10.6.5, is 4194304. That is more than enough. In my case, I have just hard-coded this value into my sysctl.conf file to ensure that it does not change, to prevent problems. If you are trying to tune for high throughput with Gigabit connectivity, you may want to increase this value as recommended in the TCP Tuning Guide for FreeBSD (http://fasterdata.es.net/TCP-tuning/FreeBSD.html). Generally, the suggestion seems to be to set this value to at least twice the Bandwidth Delay Product; see the worked example after this list. For the majority of the world that is using their Mac on a DSL or cable connection, that value would be much less than what you would need to support local transfers on your LAN. Personally, I'd leave this at the default but hard-code it just to be sure.
2. kern.ipc.somaxconn limits the maximum number of pending connections allowed in a listening socket's queue. The default here is just 128. If an attacker can flood you with a sufficiently high number of SYN packets in a short enough period of time, all of your available connection slots will be used up, thus successfully denying your users access to the service. Increasing this value is also beneficial if you run any programs, like P2P clients, that can drain your connection pool very quickly.
3. kern.ipc.maxsockets and kern.ipc.nmbclusters set the connection thresholds for the entire system. One socket is created per network connection, and one per unix domain socket connection. While remote servers and clients will connect to you on the network, more and more local applications are taking advantage of unix domain sockets for inter-process communication. There is far less overhead, as full TCP packets don't have to be constructed. Unix domain socket communication is also much faster, as data does not have to go through the network stack but can instead go almost directly to the application. The number of sockets you'll need depends on what applications will be running. I would recommend starting with a value matching the number of network buffers, and then tuning it as appropriate. You can find out how many network buffer clusters are in use with the command netstat -m; see the example after this list. The defaults are usually fine for most people. However, if you want to host Torrents, you will likely want to tune these values to 2 or 4 times the default of 512. The kern.ipc.nmbclusters value appears to default to 32768 on 10.7 Lion, so this should not be something you have to tune going forward.
4. I have hard-coded the enabling of RFC1323 (net.inet.tcp.rfc1323) which is the TCP High
Performance Options (Window Scaling). This should be on by default on all future OSX
builds. I have also hard-coded the Window Scaling factor to a value of 3. This is also a
default now. Unfortunately the Internet moves slowly when it comes to a significant change
to a fundamental protocol behavior. Setting this value to 3 is a balance between performance
and causing your connection performance to suck. If you notice unacceptably poor
performance with key applications you use, I would suggest you disable this option and
make sure your net.inet.tcp.sendspace and net.inet.tcp.recvspace values are set no higher than
65535. Any applications with load balanced servers that are using a Layer 5 type ruleset can
exhibit performance problems with window scaling if the explicit window scaling
configuration has not been properly addressed on the Load Balancer.
5. The next option I have set is net.inet.tcp.sockthreshold. This parameter sets the number of open
sockets at which the system will begin obeying the values you set in the net.inet.tcp.recvspace
and net.inet.tcp.sendspace parameters. The default value is 64. Essentially think of this as a
Quality of Service threshold. Prior to reaching this number of simultaneous sockets, the
system will restrict itself to a max window of 65536 bytes per connection. As long as your
window sizes are set above 65536, once you hit the socket threshold, the performance should
always be better than anything to which you were previously accustomed. The higher you
set this value, the more opportunity you give the system to take over all of your network
resources.
6. The net.inet.tcp.sendspace and net.inet.tcp.recvspace settings control the maximum TCP window size the system will allow when sending from or receiving to the machine. Up until the latest releases of most operating systems, these values defaulted to 65535 bytes. This has been the de facto standard essentially from the beginning of the TCP protocol's existence in the 1970s. Now that the RFC1323 High Performance TCP Options are starting to be more widely accepted and configured, these values can be increased to improve performance. I have set mine both to 262144 bytes. That is essentially 4 times the previous limit. You must have the RFC1323 options enabled in order to set these values above 65535.
7. The net.inet.tcp.mssdflt setting seems simple to configure on the surface. However, arriving at the optimum setting for your particular network setup and requirements can be a mathematical exercise that is not straightforward. The default MSS value that Apple has configured is a measly 512 bytes. This is not really noticeable on a high speed LAN segment, but it can be a performance bottleneck across a typical residential broadband connection. This setting adjusts the Maximum Segment Size that your system can transmit. You need to understand the characteristics of your own network connection in order to determine the appropriate value. For a machine that only communicates with other hosts across a normal Ethernet network, the answer is simple: the standard Ethernet MTU is 1500 bytes, and after the normal 40 bytes of IP and TCP headers that leaves an MSS of 1460 bytes. In my case, I have a DSL line that uses PPPoE for its transport protocol. In order to get the most out of my DSL line and avoid wasteful protocol overhead, I want this value to be exactly equal to the amount of payload data I can attach within a single PPPoE frame. Fragmenting segments creates additional PPPoE frames and ATM cells, which adds to the overall overhead on my DSL line and reduces my effective bandwidth. There are quite a few references out there to help you determine the appropriate setting. So, to configure for a DSL line that uses PPPoE like mine, start from the normal Ethernet MSS of 1460 bytes and subtract an additional 8 bytes of overhead for the PPPoE header. That leaves you with an MSS of 1452 bytes. There is one other element to account for: ATM. Many DSL providers, like mine, use the ATM protocol as the underlying transport carrier for PPPoE data. That used to be the only way it was done. ATM uses 53 byte cells, of which each cell has a 5 byte header, leaving 48 bytes of payload per cell. If I set my MSS to 1452 bytes, that does not divide evenly across ATM's 48 byte cell payloads: 1452 / 48 = 30.25, so the last cell carries only 12 bytes of data and ATM pads it with 36 bytes of null data. To avoid this overhead, I reduce the MSS to 1440 bytes (30 * 48 = 1440), which fits evenly into the ATM cells; the arithmetic is spelled out in the worked example after this list.
8. net.inet.tcp.msl defines the Maximum Segment Life. This is the maximum amount of time to wait for an ACK in reply to a SYN-ACK or FIN-ACK, in milliseconds. If an ACK is not received in this time, the segment can be considered lost and the network connection is freed. This setting is primarily about DoS protection. There are two implications of this. When you are trying to close a connection, if the final ACK is lost or delayed, the socket will still close, and more quickly. However, if a client is trying to open a connection to you and their ACK is delayed more than 7500ms, the connection will not form. RFC 793 defines the MSL as 120 seconds (120000ms); however, this was written in 1981 and network conditions have changed since then. Today, FreeBSD's default is 30000ms. This is sufficient for most conditions, but for stronger DoS protection you will want to lower this. I have set mine to 15000.
9. It appears that the aggressive TCP keepalive timers below are not well liked by quite a few
built-in applications. I have removed these adjustments and kept the defaults for the time
being. net.inet.tcp.keepidle sets the interval in milliseconds when the system will send a
keepalive packet to test an idle connection to see if it is still active. I set this to 120000 or 120
seconds which is a fairly common interval. The default is 2 hours.
10. net.inet.tcp.keepinit sets the keepalive probe interval in milliseconds during initiation of a TCP
connection. I have set mine to the same as the regular interval, which is 1500, or 1.5 seconds.
11. net.inet.tcp.keepintvl sets the interval, in milliseconds, between keepalive probes sent to
remote machines. After TCPTV_KEEPCNT (default 8) probes are sent, with no response, the
(TCP) connection is dropped. I have set this value to 1500 or 1.5 seconds.
12. net.inet.tcp.delayed_ack controls the behavior when sending TCP acknowledgements.
Allowing delayed ACKs can slow some systems down. There is also some known poor
interaction with the Nagle algorithm in the TCP stack when dealing with slow start and
congestion control. I have disabled this feature.
13. net.inet.tcp.slowstart_flightsize sets the number of outstanding packets permitted with non-
local systems during the slowstart phase of TCP ramp up. In order to more quickly overcome
TCP slowstart, I have bumped this up to a value of 4.
14. net.inet.tcp.blackhole defines what happens when a TCP packet is received on a closed port. When set to 1, SYN packets arriving on a closed port will be dropped without a RST packet being sent back. When set to 2, all packets arriving on a closed port are dropped without an RST being sent back. This saves both CPU time, because packets don't need to be processed as much, and outbound bandwidth, as reply packets are not sent out.
15. net.inet.udp.blackhole is similar to net.inet.tcp.blackhole in its function. As the UDP protocol does not have states like TCP, there is only a need for one choice when it comes to dropping UDP packets. When net.inet.udp.blackhole is set to 1, all UDP packets arriving on a closed port will be dropped.
16. The name net.inet.icmp.icmplim is somewhat misleading. This sysctl controls the maximum
number of ICMP Unreachable and also TCP RST packets that will be sent back every
second. It helps curb the effects of attacks which generate a lot of reply packets. I have set
mine to a value of 50.
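
As a follow-up to item 1, here is the Bandwidth Delay Product arithmetic worked through in Terminal. This is only a sketch with hypothetical numbers (a 50 Mbit/s link with a 40 ms round-trip time); plug in the figures for your own connection:

# BDP (bytes) = bandwidth (bits per second) * round-trip time (seconds) / 8
echo $(( 50000000 * 40 / 1000 / 8 ))    # prints 250000 (bytes)
# Twice that BDP is 500000 bytes, well under the default kern.ipc.maxsockbuf of 4194304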
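
Item 3 mentions netstat -m as the way to see how many network buffer clusters are in use. A quick way to narrow the output to the relevant lines (just a sketch; the grep pattern is only a convenience) is:

netstat -m | grep -i clusters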
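
Finally, here is the MSS calculation from item 7 spelled out the same way, for a PPPoE-over-ATM DSL line like mine. If your provider does not run PPPoE over ATM, stop after the first step:

echo $(( 1500 - 40 - 8 ))    # Ethernet MTU minus TCP/IP headers minus the PPPoE header = 1452
echo $(( 1452 % 48 ))        # 12 bytes would spill into a final, mostly empty ATM cell
echo $(( 30 * 48 ))          # rounding down to 30 full 48-byte cell payloads gives 1440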

Posted in Mac, Network, OSX, Technology | Tags: IP, OSX, Performance, TCP

1. Hi, thanks for this article. However, I noticed some side effects after installing your sysctl.conf file on my 2 computers (an iMac and a MacBook Pro):
1. The printer got uninstalled
2. The computers don't see each other via Bonjour. No external access to the disk and no screen sharing possible. The Bonjour protocol does work: a NAS server is visible to both computers via Bonjour.
Can you help me out?

By: W. Pisarek on February 17, 2011 at 4:53 pm

When I first updated my settings everything was fine, but as the system ran and more processes started up, things would get weird. It started with not being able to use my VPN configurations. When I tried to create a VPN, the GUI would say something like "unrecoverable error, check your configuration", but this was a working configuration an hour earlier.

The console log had

configd[32] IPSec Controller: cannot create racoon control socket


configd[32] IPSec Controller: cannot connect racoon control socket (errno = 61)

So I thought I would just remove it and try to reconfigure. The weird thing was I could delete the VPN configuration, but when I hit Apply it would reappear, so I could not save any changes in the System Preferences pane. In the log I had these messages:

.com.apple.systempreferences[9546] _SCHelperOpen connect() failed: Connection refused

The next thing I noticed was that my IP printer was gone and the system would not allow me to add it back. So I turned to Google and found a site talking about the sysctl MIBs and socket availability that had the same errors in the logs (the site was in Russian). The light bulb went on. I backed out all the variables and voila, all my services were working.

I believe the culprit is the

net.inet.tcp.sockthreshold=16

variable. I have added all of the others back, and no problems so far.

By: philbert919 on July 2, 2011 at 2:01 pm

2. Just a quick update: I renamed the sysctl.conf file on both computers, restarted them, and lo and behold, everything is back to normal. The printer is there, and screen sharing and remote access work as they should.
So now we have a question: which line in the sysctl.conf file caused this trouble?

By: W. Pisarek on February 17, 2011 at 5:33 pm

3. Interesting find. I don't use Bonjour or desktop sharing, so I hadn't noticed. I will see if I can recreate the issue and identify which particular parameter is causing the headache. That is really odd behavior. As far as the printer disappearing, is it locally attached or over the network? I wonder if it has something to do with the MSL timer setting? Try setting that back to 30000, which is the default for BSD.

By: rolande on February 17, 2011 at 5:53 pm

4. The printer is attached via USB port to the iMac. I tried reinstalling the drivers first, but it failed: drivers could not be found via Apple despite the info that they are available there. I downloaded the driver from Canon's website, but the installation of these failed too.
The MSL timer: first I'll do some reading about what it is, then give it a try.
Thanks a lot for your quick reply. W.

By: W. Pisarek on February 18, 2011 at 3:57 am

MSL is the maximum segment lifetime. That is the maximum amount of time a segment
of data can be outstanding on the network without acknowledgement or response from
the remote end.

I narrowed the issue down to one of the TCP timer options, at least for the printing problem that I was able to recreate. It appears that the OSX Printer & Fax settings screen uses local network communication to talk to the CUPS printing system to identify the installed printers and their status. Something about the tuned TCP timers/keepalives was causing this particular communication to fail. I am not 100% sure yet which one is at fault. I pulled out the MSL setting and the keepalive settings, put them back to default, and the printers reappeared without rebooting.

I set these back to default:

net.inet.tcp.msl=10000 -> 15000
net.inet.tcp.always_keepalive=1 -> 0
net.inet.tcp.delayed_ack=0 -> 3
net.inet.tcp.blackhole=2 -> 0
net.inet.udp.blackhole=1 -> 0

By: rolande on February 19, 2011 at 12:28 am

5. Hello again, Scott.


While perusing the subject of TCP tuning, I came across IPNETTUNERX by http://www.sustworks.com.
I don't think this software is worth its money, but the interesting thing was the list of parameters it modifies. They are (name, current, default):
kern.ipc.maxsockbuf 4194304 262144
kern.ipc.maxsockets 512 512
kern.maxfiles 12288 12288
kern.maxvnodes 33792 -
net.inet.ip.accept_sourceroute 0 0
net.inet.ip.fastforwarding 0 0
net.inet.ip.forwarding 0 0
net.inet.ip.fw.dyn_count 0 0
net.inet.ip.fw.dyn_max 4096 4096
net.inet.ip.fw.enable 1 1
net.inet.ip.fw.verbose 2 0
net.inet.ip.fw.verbose_limit 0 0
net.inet.ip.intr_queue_drops 0 0
net.inet.ip.intr_queue_maxlen 50 50
net.inet.ip.keepfaith 0 0
net.inet.ip.portrange.first 49152 49152
net.inet.ip.portrange.hifirst 49152 49152
net.inet.ip.portrange.hilast 65535 65535
net.inet.ip.portrange.last 65535 65535
net.inet.portrange.lowfirst 1023 1023
net.inet.ip.portrange.lowlast 600 600
net.inet.ip.redirect 1 1
net.inet.ip.rtexpire 140 3600
net.inet.ip.rtmaxcache 128 128
net.inet.ip.rtminexpire 10 10
net.inet.ip.sourceroute 0 0
net.inet.ip.ttl 64 64
net.inet.raw.maxdgram 8192 8192
net.inet.raw.recvspace 8192 8192
net.inet.tcp.aways_keepalive 0 0
net.inet.tcp.blackhole 2 0
net.inet.tcp.delacktime 50
net.inet.tcp.delayed_ack 3 3
net.inet.tcp.do_tcpdrain 0 0
net.inet.tcp.drop_synfin 1 1
net.inet.tcp.icmp_may_rst 1 1
net.inet.tcp.isn_reseed_interval 0 0
net.inet.tcp.keepidle 7200000 144000
net.inet.tcp.keepinit 75000 1500
net.inet.tcp.keepintvl 75000 1500
net.inet.tcp.local_slowstart_flightsize 8 4
net.inet.tcp.log_in_vain 3 0
net.inet.tcp.minmss 216 216
net.inet.tcp.minmssoverload 0 0
net.inet.tcp.msl 15000 600
net.inet.tcp.mssdflt 512 512
net.inet.tcp.newreno 0 0
net.inet.tcp.path_mtu_discovery 1 1
net.inet.tcp.pcbcount 28
net.inet.tcp.recvspace 65536 32768
net.inet.tcp.rfc1323 1 1
net.inet.tcp.rfc1644 0 0
net.inet.tcp.rttdflt 3 (not available in Mac OS 10.2 or later)
net.inet.tcp.sack 1 1
net.inet.tcp.sack_globalholes 0 0
net.inet.tcp.sack_globalmaxholes 65536 65536
net.inet.tcp.sack_maxholes 128 128
net.inet.tcp.sendspace 65536 32768
net.inet.tcp.slowlink_wsize 8192 8192
net.inet.tcp.slowstart_flightsize 1 1
net.inet.tcp.sockthreshold 64 256
net.inet.tcp.strict_rfc1948 0 0
net.inet.tcp.tcbhashsize 4096 4096
net.inet.tcp.tcp_lq_overflow 1 1
net.inet.tcp.v6mssdflt 50
net.inet.udp.blackhole 1 0
net.inet.udp.checksum 1 1
net.inet.udp.log_in_vain 3 0
net.inet.udp.maxdgram 9216 9216
net.inet.udp.pcbcount 41
net.inet.udp.recvspace 42080 42080
net.link.ether.inet.apple_hwcksum_rx 1 1
net.link.ether.inet.apple_hwcksum_tx 1 1
net.link.ether.inet.host_down_time 20 20
net.link.ether.inet.log_arp_wrong_iface 0
net.link.ether.inet.max_age 1200 1200
net.link.ether.inet.maxtries 5 5
net.link.ether.inet.proxyall 0 0
net.link.ether.inet.prune_intvl 300 300
net.link.ether.inet.useloopback 1 1
net.local.stream.recvspace 8192 8192
net.local.stream.sendspace 8192 8192

What do you think?

By: W. Pisarek on March 12, 2011 at 2:39 pm

6. Thanks for sharing. Note that OS X Server variants have some different values from the desktop. Some noticeable differences (derived from a 10.5 OS X Server that was upgraded from 10.4):

kern.maxvnodes: 120000
kern.ipc.somaxconn: 2500
kern.ipc.maxsockbuf: 8388608
net.inet.tcp.recvspace: 65536
net.inet.tcp.sendspace: 65536

By: Pro Backup - online backup on March 29, 2011 at 12:20 pm

7. I've been trying to tune my IP stack on and off for a couple of months without success. I am unable to get the send and receive spaces to be anything other than 65536. I updated my parameters to match what you prescribed above, but when I do a sudo sysctl net.inet.tcp.sendspace, I get a response of 65536 for both. Any idea what is overriding what I have in sysctl.conf? (OS X 10.6.7)

Thanks in advance,

Frederick

By: frederick on May 20, 2011 at 11:12 am

You have to set kern.ipc.maxsockbuf to a value equivalent to the sum of those 2 fields or
you will cause problems on your machine. You also must already have
net.inet.tcp.rfc1323=1. If that is not set, which it should be by default, the system will not
allow a window size greater than 65535.

By: rolande on June 20, 2011 at 2:30 pm

8. It hid all my system monitor actions.

By: Harun Makandi on June 18, 2011 at 2:40 am

You likely did not set kern.ipc.maxsockbuf to a value equivalent to the sum of the send
and receive window sizes. If you do not do this you will cause strange issues with local
services on your Mac.

By: rolande on June 20, 2011 at 2:32 pm

9. Thanks for the post, I've found the detailed description of each variable modification very useful.

However, when I tried to increase kern.ipc.somaxconn to 32768, clients on my home LAN could not connect to my local FTP server. I cannot tell how kern.ipc.somaxconn should affect this in any way, but that's what happens.

By default (without any sysctl mods) clients have no problem connecting. If I create /etc/sysctl.conf with the only line "kern.ipc.somaxconn=32768" and reboot, clients fail to connect. Do you have any clue what might cause this?

I don't know if this helps, but here's my config and a few parameters:
- MacBook Pro (Intel CPU, bought in Dec. 2006) with 2 GB RAM
- runs ipfw (configured via WaterRoof)
- has Little Snitch (but that should not interfere with values set in kern.ipc.somaxconn)
- FTP server is PureFTPd 1.0.29
- clients are running Windows (Win7, and the FTP client is FileZilla)

I googled for potential pitfalls with changing kern.ipc.somaxconn, but nothing came up.

One other thing: do I understand correctly that kern.ipc.somaxconn determines the max. number of _incoming_ connections for a listening socket? I.e. a local webserver, FTP server or torrent client?

And does kern.ipc.maxsockets determine the max. number of sockets (of all kinds)? The latter has a default of 512. Does this mean that my Mac can have only 512 network connections (incoming + outgoing together) at any given moment? Do I have to set other variables too if I want to increase the max. number of network connections? I'm just asking because I've read in a blog post that maybe the max. number of open files/handles must be increased as well, because they're somehow related (?).

Thanks for your help!

By: mzso on June 25, 2011 at 1:37 pm

First of all, I apologize for the extremely delayed response.

The kern.ipc.somaxconn variable is highly dependent upon the amount of available
memory on your system. This value represents the number of simultaneously permitted
connections in a single socket's listening queue. Your assessment is accurate. Most
documentation I have read indicates that the default value of 128 is usually fine for a
home/work machine. You really only need to tune this up if you are doing some high
volume server hosting and experiencing connection failures. If the value is too high, it is
likely you will run into resource issues.

kern.ipc.maxsockets and kern.ipc.nmbclusters kind of go hand in hand. They do set the threshold for maximum connections on the system. I have set mine both to 2048.

By: rolande on October 22, 2011 at 9:47 am

No problem. Thanks for the reply. Hope you get well soon.

By: mzso on October 22, 2011 at 10:30 am

10. [...] have been reading TCP tuning guides on the web. Most of them suggest tweaking tcp
parameters. Specifically, they recommend increasing values for net.inet.tcp.sendspace [...]

By: Network Speed. Part 2. Oceanside Coding on September 22, 2011 at 2:48 am

11. This post has generated a fair amount of feedback, due to issues encountered with various
applications. Unfortunately, I was mostly offline and unable to focus on responding for the
past 6 or 7 months because of a serious medical issue I was dealing with. I have finally
updated this post to address the observed local application connectivity failures resulting from a couple of settings. I believe that most issues were caused by aggressive TCP keepalive timers. I have not yet been able to recreate any of the previous strange side effects. So far, so good. As
always, your feedback is welcome. Thank you.

By: rolande on October 22, 2011 at 8:56 am

12. Rolande,
Thank you for a very informative article. As you may be aware, P2P clients can occasionally put OS X into a system freeze. Other than your references above, do you have any other suggestions for settings that might reduce the possibility of this unpleasant occurrence?
Also, I wonder if certain settings such as net.inet.tcp.send/recvspace are, in at least some contexts, more like a soft limit only. While using P2P, for example, getsockopt returns values double and quadruple the defaults shown in sysctl -a for tcp.send/recvspace. Also, the default value of kern.ipc.maxsockbuf (4194304) could easily be exceeded when using P2P, but perhaps that is just a soft limit as well? Do you have any thoughts on the responsibility of the application code to try to limit socket buffer size via setsockopt? Hope at least some of this makes sense!

By: queuerious on December 15, 2011 at 1:28 am

I don't believe those are soft limits from a network stack perspective. If you allow the tcp.send/recvspace values to exceed the maxsockbuf in aggregate, it will cause your system to puke. Typically, it is the sheer volume of TCP connections generated by a P2P client that causes the system trouble, in that it will eat up a decent amount of memory and chew on the CPU to perform all of the connection management. I would tune the P2P client to be less aggressive by setting reasonable connection limits, and not try to use system settings to account for the P2P client's aggressive behavior.

By: rolande on April 30, 2012 at 10:39 pm

13. Hi, are those settings also suitable for direct connections via crossover cable? I have my Mac Pro connected directly to a PC, both with gigabit LAN, and I want to get the best performance out of it. Is there a way to reduce CPU usage for network connections?

By: Gabe on February 16, 2012 at 12:52 pm

These settings will work for a LAN-only connection. They aren't 100% optimal, but they are definitely better than the defaults out of the box. If you want to squeeze all the juice you can out of a high speed LAN link, there are a number of things you can do. However, the trade-off is that if you need to access a slower broadband Internet connection using that same high speed interface, your performance can and will suffer. The key is to optimize performance on a single interface to cover the majority of traffic scenarios.

By: rolande on March 19, 2012 at 9:36 pm
