During cvs up of huge modules (a few gigs or so, all files in one directory;
in our case the rpm/SOURCES module in the PLD Linux cvs repository), we
noticed that the cvs pserver process grows until the cvs server runs out of swap.
After some investigation I found that it's because of the async buffers in
cvs. When you run cvs up and press ctrl-z, you can see that the pserver
process reads all the files into RAM (it grows until it reaches the size
of the repo). This is simply how an async buffer behaves: if it cannot write,
it buffers :) But I guess there should be some limit placed on it.
I arbitrarily chose a limit of 100 data_buffer structures; you will probably
want to make it configurable (I have had reports that it can be too small).
Also, calling usleep() where I did is probably neither portable nor
particularly wise, but it works.
Cheers
--
: Michal ``,/\/\, '' Moskal | | : GCS {C,UL}++++$
: | |alekith @ |)|(| . org . pl : {E--, W, w-,M}-
: Linux: We are dot in .ORG. | : {b,e>+}++ !tv h
: CurProj: Gont Compiler: http://gont.pld.org.pl/ : PLD Team member
diff -ur cvs-1.11.1p1.orig/src/buffer.c cvs-1.11.1p1/src/buffer.c
--- cvs-1.11.1p1.orig/src/buffer.c Thu Apr 19 21:29:05 2001
+++ cvs-1.11.1p1/src/buffer.c Thu Feb 14 13:22:23 2002
@@ -1,6 +1,7 @@
/* Code for the buffer data structure. */
#include <assert.h>
+#include <unistd.h>
#include "cvs.h"
#include "buffer.h"
@@ -292,15 +293,28 @@
if (nbytes != data->size)
{
+ struct buffer_data *p;
+ int cnt;
+
/* Not all the data was written out. This is only
permitted in nonblocking mode. Adjust the buffer,
and return. */
assert (buf->nonblocking);
+ cnt = 0;
+ for (p = data; p->next; p = p->next)
+ cnt++;
+
data->size -= nbytes;
data->bufp += nbytes;
+ /* Don't allow buffers to grow over 100 pages. */
+ if (cnt > 100) {
+ usleep(100000);
+ continue;
+ }
+
return 0;
}
}