Hi guys,

I'm having trouble understanding the proper use of the tcpsndbuf and
tcprcvbuf values.

I am running a managed file transfer service (think RapidShare) where each
customer instance lives in its own OpenVZ container. The host runs Debian
Lenny amd64 with plenty of RAM (12 GB physical, 12 GB swap), shared by 20 VEs.
The VE configs were created by vzsplit, and I still have plenty of memory
unallocated.

As soon as I put somewhat more serious traffic on the system, tcpsndbuf
climbs up to the barrier, failcnt grows, TCP sockets get stuck in CLOSE_WAIT,
and the load goes up.

If I then stop the traffic, tcpsndbuf goes back down, but the system does
not recover: sockets remain in CLOSE_WAIT, Apache processes stay blocked,
and the load remains high.
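
For reference, this is how I read the counters on the host (CTID 101 below
is just a placeholder for one of my containers):

  # held vs. barrier/limit and the failcnt column for the TCP resources
  egrep 'uid|numtcpsock|tcpsndbuf|tcprcvbuf' /proc/user_beancounters

  # count sockets stuck in CLOSE_WAIT inside VE 101 (placeholder CTID)
  vzctl exec 101 netstat -tn | awk '$6 == "CLOSE_WAIT"' | wc -l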

Here are my current values:
NUMTCPSOCK="3076:3076"
TCPSNDBUF="10485760:18360320"
TCPRCVBUF="10485760:18360320"
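
These come straight from the vzsplit-generated config. If I were to raise
them, I assume it would be something like this (101 and the byte values are
only an illustration, not a recommendation):

  # 101 is an example CTID, sizes are illustrative barrier:limit pairs
  vzctl set 101 --tcpsndbuf 33554432:41943040 \
                --tcprcvbuf 33554432:41943040 --save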

If I understand the definitions right, the following should hold for memory
distribution (not considering overcommitment):

total host memory >= number of VEs x (kmemsize + all socket buffers)

Is that right?
If so, that means I could set the values *much* higher than the current
meager 10 MB barrier, right?
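
To make that concrete with made-up numbers: if I raised both barriers to,
say, 64 MB per VE, the worst case would be

  20 VEs x (64 MB tcpsndbuf + 64 MB tcprcvbuf) = 2560 MB

of socket buffers, which still leaves roughly 9.5 GB of the 12 GB of
physical RAM for kmemsize and everything else.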

Any recommendations would be much appreciated. Thanks!
Hank

-- 
My other signature is a regular expression.
http://www.pray4snow.de