Dear all:

I'm embarking on some new network stack locking work, which requires me to address a number of loose ends in the current model. A few years ago, my attention was drawn to a largely theoretical race that has existed in the BSD code since its inception. The race is detected and handled in practice, but the handling relies on type stability of TCP connection data structures, which will need to change in the future due to ongoing virtualization work. I didn't fix it at the time, but I did add a counter so that we could see whether it was happening in the field -- that counter, net.inet.tcp.timer_race, indicates whether the stack has detected the race occurring (and then handled it). This e-mail is to collect the results of that in-the-field survey.

Please check the results of the following command:

  % sysctl net.inet.tcp.timer_race
  net.inet.tcp.timer_race: 0

If your system shows a non-zero value, please send me a *private e-mail* with the output of that command, as well as the output of "sysctl kern.smp" and "uptime", and a brief description of the workload and network interface configuration -- for example: a busy 8-core web server handling roughly X connections/second, with three em network interfaces used to load balance from an upstream source; IPsec is used for management traffic (but not bulk traffic), and there is a local MySQL database.
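
For convenience, something along these lines should gather all of the requested command output in one go (the report file name is just an example):

  % ( sysctl net.inet.tcp.timer_race ; sysctl kern.smp ; uptime ) > ~/timer_race_report.txt

You can then paste the contents of that file, along with the workload description, into your e-mail.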

I've already seen one non-zero report, but I'd like to know more about the kinds of situations in which the race occurs, both so that I can prioritize the fix appropriately and so that I can reason about how frequently it happens, which will help us select a fix that avoids adding significant overhead in the common case.

Thanks,

Robert N M Watson
Computer Laboratory
University of Cambridge