The time has come to change the NR_OPEN value: some production servers hit the not-so-'ridiculously high' limit of 1024*1024 file descriptors per process.

AFAIK it is safe to raise this value, because alloc_fd_array() uses vmalloc() for large arrays, and vmalloc() returns NULL if too large an allocation is attempted (or in case of memory shortage).

Signed-off-by: Eric Dumazet <[EMAIL PROTECTED]>

diff -Nru /tmp/fs.h include/linux/fs.h
--- linux.orig/include/linux/fs.h   2005-01-31 15:28:01.926685144 +0100
+++ linux/include/linux/fs.h  2005-01-31 15:29:37.047224624 +0100
@@ -32,7 +32,8 @@
  * It's silly to have NR_OPEN bigger than NR_FILE, but you can change
  * the file limit at runtime and only root can increase the per-process
  * nr_file rlimit, so it's safe to set up a ridiculously high absolute
- * upper limit on files-per-process.
+ * upper limit on files-per-process. Actual limit depends on vmalloc()
+ * constraints.
  *
  * Some programs (notably those using select()) may have to be
  * recompiled to take full advantage of the new limits..
@@ -40,7 +41,7 @@

 /* Fixed constants first: */
 #undef NR_OPEN
-#define NR_OPEN (1024*1024)    /* Absolute upper limit on fd num */
+#define NR_OPEN (16*1024*1024) /* Absolute upper limit on fd num */
 #define INR_OPEN 1024          /* Initial setting for nfile rlimits */

 #define BLOCK_SIZE_BITS 10
