I have a review pending to add missing sysctls to inet.4 and icmp.4
(https://reviews.freebsd.org/D36003). I'd like to do something similar
for tcp.4, but it is a bigger project. I have a start, which I put in
review https://reviews.freebsd.org/D36004. I started with sysctls
related to recvbuf and sendbuf scaling, updating recvspace and sendspace
as well. I also added references to sysctls documented in separate man
pages (not sure I got all of those). There are a number of other sysctls
that are not yet included. Not all of them should be included, though;
those used by tools such as tcpdrop, for example, probably should not be.
I'll append a list for anyone interested to peruse: the sysctls I added
so far, along
with a longer list of things I wasn't sure about. They are listed as
sysctl -d output to provide a little info.
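Since the entries below are raw sysctl -d output, here is a small sketch of
how one might mechanically turn such lines into mdoc-style entries for
tcp.4. The .It Va markup and the prefix-stripping are my assumptions about
the target formatting, not anything from the review itself:

```shell
# Split "name: description" lines (sysctl -d format) into mdoc-ish
# ".It Va" entries; index() finds the first ": " so any colons inside
# the description text are preserved.
printf '%s\n' \
  'net.inet.tcp.tso: Enable TCP Segmentation Offload' \
  'net.inet.tcp.minmss: Minimum TCP Maximum Segment Size' |
awk '{
  i = index($0, ": ")
  name = substr($0, 1, i - 1)
  sub(/^net\.inet\.tcp\./, "", name)  # tcp.4 lists names without the prefix
  printf ".It Va %s\n%s\n", name, substr($0, i + 2)
}'
```

The same filter could be run over live "sysctl -d net.inet.tcp" output on a
FreeBSD box to get a first draft of the man page entries.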
The size of the sysctl list in tcp.4 may deserve a little thought. There
were 85 entries when I started; I'm now at 105. There are 50 unresolved
entries on the list. Some of them may deserve separate man pages, and
many may not be of sufficient interest.
Comments? Feel free to comment on the review, or on items in this email.
Mike
Added sysctls:
net.inet.tcp.drop_synfin: Drop TCP packets with SYN+FIN set
net.inet.tcp.minmss: Minimum TCP Maximum Segment Size
net.inet.tcp.persmax: maximum persistence interval
net.inet.tcp.persmin: minimum persistence interval
net.inet.tcp.recvbuf_auto: Enable automatic receive buffer sizing
net.inet.tcp.recvbuf_max: Max size of automatic receive buffer
net.inet.tcp.require_unique_port: Require globally-unique ephemeral port for
outgoing connections
net.inet.tcp.rexmit_drop_options: Drop TCP options from 3rd and later
retransmitted SYN
net.inet.tcp.sack.globalholes: Global number of TCP SACK holes currently
allocated
net.inet.tcp.sendbuf_auto: Enable automatic send buffer sizing
net.inet.tcp.sendbuf_auto_lowat: Modify threshold for auto send buffer growth
to account for SO_SNDLOWAT
net.inet.tcp.sendbuf_inc: Incrementor step size of automatic send buffer
net.inet.tcp.sendbuf_max: Max size of automatic send buffer
net.inet.tcp.tso: Enable TCP Segmentation Offload
net.inet.tcp.v6mssdflt: Default TCP Maximum Segment Size for IPv6
Added as references:
net.inet.tcp.blackhole_local: Enforce net.inet.tcp.blackhole for locally
originated packets
net.inet.tcp.cc.newreno: New Reno related settings
net.inet.tcp.cc: Congestion control related settings (see mod_cc(4))
net.inet.tcp.syncache: TCP SYN cache
net.inet.tcp.syncookies_only: Use only TCP SYN cookies
Modified:
net.inet.tcp.delayed_ack: Delay ACK to try and piggyback it onto a data packet
net.inet.tcp.recvspace: Initial receive socket buffer size
net.inet.tcp.sendspace: Initial send socket buffer size
Don't have, unclear whether we should (or where):
net.inet.tcp.abc_l_var: Cap the max cwnd increment during slow-start to this
number of segments
net.inet.tcp.ack_war_cnt: If the tcp_stack does ack-war prevention how many
acks can be sent in its time window?
net.inet.tcp.ack_war_timewindow: If the tcp_stack does ack-war prevention how
many milliseconds are in its time window?
net.inet.tcp.bb: TCP Black Box controls
net.inet.tcp.bb.disable_all: Disable all BB logging for all connections
net.inet.tcp.bb.log_auto_all: Auto-select from all sessions (rather than just
those with IDs)
net.inet.tcp.bb.log_auto_mode: Logging mode for auto-selected sessions (default
is TCP_LOG_STATE_HEAD_AUTO)
net.inet.tcp.bb.log_auto_ratio: Do auto capturing for 1 out of N sessions
net.inet.tcp.bb.log_global_entries: Current number of events maintained for all
TCP sessions
net.inet.tcp.bb.log_global_limit: Maximum number of events maintained for all
TCP sessions
net.inet.tcp.bb.log_id_entries: Current number of log IDs
net.inet.tcp.bb.log_id_limit: Maximum number of log IDs
net.inet.tcp.bb.log_id_tcpcb_entries: Current number of tcpcbs with log IDs
net.inet.tcp.bb.log_id_tcpcb_limit: Maximum number of tcpcbs with log IDs
net.inet.tcp.bb.log_session_limit: Maximum number of events maintained for each
TCP session
net.inet.tcp.bb.log_verbose: Force verbose logging for TCP traces
net.inet.tcp.bb.log_version: Version of log formats exported
net.inet.tcp.bb.pcb_ids_cur: Number of pcb IDs allocated in the system
net.inet.tcp.bb.pcb_ids_tot: Total number of pcb IDs that have been allocated
net.inet.tcp.cc.hystartplusplus.bblogs: Do we enable HyStart++ Black Box logs
to be generated if BB logging is on
net.inet.tcp.cc.hystartplusplus.css_growth_div: The divisor to the growth when
in Hystart++ CSS
net.inet.tcp.cc.hystartplusplus.css_rounds: The number of rounds HyStart++
lasts in CSS before falling to CA
net.inet.tcp.cc.hystartplusplus.maxrtt_thresh: HyStarts++ maximum RTT thresh
used in clamp (in microseconds)
net.inet.tcp.cc.hystartplusplus.minrtt_thresh: HyStarts++ minimum RTT thresh
used in clamp (in microseconds)
net.inet.tcp.cc.hystartplusplus.n_rttsamples: The number of RTT samples that
must be seen to consider HyStart++
net.inet.tcp.cc.hystartplusplus: New Reno related HyStart++ settings
net.inet.tcp.function_info: List TCP function block name-to-ID mappings
net.inet.tcp.initcwnd_segments: Slow-start flight size (initial congestion
window) in number of segments
net.inet.tcp.log_debug: Log errors caused by incoming TCP segments
net.inet.tcp.lro.compressed: Number of lro's compressed and sent to transport
net.inet.tcp.lro.entries: default number of LRO entries
net.inet.tcp.lro.extra_mbuf: Number of times we had an extra compressed ack
dropped into the tp
net.inet.tcp.lro.fullqueue: Number of lro's fully queued to transport
net.inet.tcp.lro.lockcnt: Number of lro's inp_wlocks taken
net.inet.tcp.lro.lro_badcsum: Number of packets that the common code saw with
bad csums
net.inet.tcp.lro.lro_cpu_threshold: Number of interrupts in a row on the same
CPU that will make us declare an 'affinity' cpu?
net.inet.tcp.lro.with_m_ackcmp: Number of mbufs queued with M_ACKCMP flags set
net.inet.tcp.lro.without_m_ackcmp: Number of mbufs queued without M_ACKCMP
net.inet.tcp.lro.wokeup: Number of lro's where we woke up transport via hpts
net.inet.tcp.lro.would_have_but: Number of times we would have had an extra
compressed, but mget failed
net.inet.tcp.map_limit: Total sendmap entries limit
net.inet.tcp.newcwv: Enable New Congestion Window Validation per RFC7661
net.inet.tcp.pacing_count: Number of TCP connections being paced
net.inet.tcp.pacing_limit: If the TCP stack does pacing, is there a limit (-1 =
no, 0 = no pacing N = number of connections)
net.inet.tcp.per_cpu_timers: run tcp timers on all cpus
net.inet.tcp.reass.new_limit: Do we use the new limit method we are discussing?
net.inet.tcp.reass.queueguard: Number of TCP Segments in Reassembly Queue where
we flip over to guard mode
net.inet.tcp.rfc3465: Enable RFC 3465 (Appropriate Byte Counting)
net.inet.tcp.soreceive_stream: Using soreceive_stream for TCP sockets
net.inet.tcp.split_limit: Total sendmap split entries limit