Darren Reed wrote:
> So there are two considerations:
> 1) buffer allocation on sockets when we're running 1000s
>    of TCP connections: @1M, 10,000 sockets = 20GB of RAM,
>    @48k, 10,000 sockets = 900MB of RAM
>    ... yes, the buffers aren't allocated "straight away"
>    but it is a measure of the obligation being advertised.
FWIW, this is something I've been thinking about. How to do
resource control in the network stack? Right now, a sys admin
cannot say, "I want this network service to only consume X
amount of memory," or "I want this service to have a new request
accept rate of Y." The latter one can more or less be controlled
in an app (except for the fact that the internal stack accept
queue cannot be controlled). Also note that the existing bandwidth
control does not have this kind of granularity. The first one
is very tricky to do. There are probably other cases to consider.
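To be concrete about the accept rate part, something like the
sketch below is what an app can already do on its own (the limit
and the helper are made up for illustration); the listen() backlog
is the only knob it gets for the in-stack accept queue:

    /* Pace accept() to at most MAX_ACCEPTS_PER_SEC new connections.
     * The listen() backlog is the only control over the in-stack
     * accept queue; everything beyond that is up to the app. */
    #include <sys/socket.h>
    #include <unistd.h>
    #include <time.h>

    #define MAX_ACCEPTS_PER_SEC 100   /* hypothetical policy value */

    void accept_loop(int lfd)
    {
        int accepted = 0;
        time_t window = time(NULL);

        for (;;) {
            if (time(NULL) != window) {   /* new one-second window */
                window = time(NULL);
                accepted = 0;
            }
            if (accepted >= MAX_ACCEPTS_PER_SEC) {
                usleep(10000);            /* back off; stack queue fills */
                continue;
            }
            int fd = accept(lfd, NULL, NULL);
            if (fd >= 0) {
                accepted++;
                /* hand fd off to a worker ... */
                close(fd);
            }
        }
    }

Anything the app refuses to take just sits in (or overflows) that
in-stack queue, which is exactly the part an admin cannot control
today.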
Having said the above, I don't think changing the default size
really matters. Any app can do that now. For most apps that
don't, say some old ftp or http clients, the above issue (too many
connections) normally does not apply. In the cases where it does
apply, I guess we are talking about systems managed by experienced
admins. They already choose whatever settings they want.
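For example, an app can already pick its own buffer sizes with
plain setsockopt(); a rough sketch (the 48k value is just an
illustration, and the stack may round or clamp whatever is asked
for):

    /* Ask for 48 KB send/receive buffers on one socket, instead of
     * relying on the system-wide defaults. */
    #include <sys/socket.h>

    int set_buf_sizes(int fd)
    {
        int sz = 48 * 1024;               /* illustrative size */

        if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sz, sizeof (sz)) < 0)
            return (-1);
        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &sz, sizeof (sz)) < 0)
            return (-1);
        return (0);
    }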
> 2) reduced security through a larger window making it easier
>    to guess sequence numbers. 48k = 1 in ~87,000,
>    1M = 1 in 4096
Oh, so the window scale option should be disallowed completely
and the window size reduced to 1 packet ;-) While we do want to
make the transport protocol robust and not easily compromised,
security is really not a goal of TCP. If a person is concerned
about security, IPsec is the thing to check out. Crippling TCP
this way and thinking that it "becomes" more secure is doing it
the wrong way, IMHO. It is *not* more secure and we should not
give a false sense of security.
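For reference, the odds quoted in 2) are simply the chance that a
blindly guessed sequence number lands inside the advertised
window, i.e. window size over the 2^32 sequence space; a quick
check of the arithmetic:

    /* Chance that one blind guess falls inside the receive window:
     * window / 2^32.  48 KB -> about 1 in 87,381; 1 MB -> 1 in 4,096. */
    #include <stdio.h>

    int main(void)
    {
        double seq_space = 4294967296.0;   /* 2^32 sequence numbers */

        printf("48 KB window: 1 in %.0f\n", seq_space / (48.0 * 1024));
        printf("1 MB window:  1 in %.0f\n", seq_space / (1024.0 * 1024));
        return (0);
    }

So the numbers themselves are right; my point is just that
shrinking the window is not where the fix belongs.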
--
K. Poon.
[email protected]