Re: [HACKERS] Too-many-files errors on OS X

2004-06-07 Thread Kevin Brown
Larry Rosenman wrote:
 I had to hack on the code some more for FreeBSD:
 (the realloc call needed the multiplication).  I ran this same code
 on UnixWare.

I feel like a moron, having missed that.  Probably explains the bad
file number error I was getting on AIX, too...




-- 
Kevin Brown   [EMAIL PROTECTED]



Re: [HACKERS] Too-many-files errors on OS X

2004-06-06 Thread Kevin Brown
Tom Lane wrote:
 However, it seems that the real problem here is that we are so far off
 base about how many files we can open.  I wonder whether we should stop
 relying on sysconf() and instead try to make some direct probe of the
 number of files we can open.  I'm imagining repeatedly open() until
 failure at some point during postmaster startup, and then save that
 result as the number-of-openable-files limit.

I strongly favor this method.  In particular, the probe should probably
be done after all shared libraries have been loaded and initialized.

I originally thought that each shared library that was loaded would eat
a file descriptor (since I thought it would be implemented via mmap())
but that doesn't seem to be the case, at least under Linux (for those
who are curious, you can close the underlying file after you perform
the mmap() and the mapped region still works).  If it's true under any
OS then it would certainly be prudent to measure the available file
descriptors after the shared libs have been loaded (another reason is
that the init function of a library might itself open a file and keep
it open, but this isn't likely to happen very often).
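
For anyone curious, here is a minimal sketch (not from the original test
program) of the mmap()-then-close behavior described above; the file name is
just a placeholder and the example assumes a readable, non-empty regular file:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	struct stat st;
	char *p;
	int fd = open("/etc/passwd", O_RDONLY);	/* any readable, non-empty file */

	if (fd < 0 || fstat(fd, &st) < 0) {
		perror("open/fstat");
		return 1;
	}
	p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	close(fd);				/* the descriptor is no longer needed ... */
	fwrite(p, 1, st.st_size, stdout);	/* ... but the mapping still works */
	munmap(p, st.st_size);
	return 0;
}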

 I also notice that OS X 10.3 seems to have working SysV semaphore
 support.  I am tempted to change template/darwin to use SysV where
 available, instead of Posix semaphores.  I wonder whether inheriting
 100-or-so open file descriptors every time we launch a backend isn't
 in itself a nasty performance hit, quite aside from its effect on how
 many normal files we can open.

I imagine this could easily be tested.  I rather doubt that the
performance hit would be terribly large, but we certainly shouldn't rule
it out without testing it first.


-- 
Kevin Brown   [EMAIL PROTECTED]



Re: [HACKERS] Too-many-files errors on OS X

2004-02-28 Thread Vic Abell
[EMAIL PROTECTED] (Tom Lane) wrote in message (in part)
 ...
 Hmm.  This may be OS-specific.  The shlibs certainly show up in the
 output of lsof in every variant I've checked, but do they count against
 your open-file limit?

From the lsof FAQ:

 5.2   Why doesn't Apple Darwin lsof report text file information?

   At the first port of lsof to Apple Darwin, revision 4.53,
   insufficient information was available -- logic and header
   files -- to permit the installation of VM space scanning
   for text files.  As of lsof 4.70 it is still not available.
   Text file support will be added to Apple Darwin lsof after
   the necessary information becomes available.

Lsof calls the executable and shared libraries text files.  The
lsof FAQ may be found at:

  ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/FAQ

I have developed a hack that will be released in lsof revision
4.71.  A pre-release source distribution of 4.71, for Darwin only, is
available at:

  ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/NEW/lsof_4.71C.darwin.tar.bz2

Note that you must build the lsof executable from that distribution
and building lsof requires that you download the XNU headers from
www.opensource.apple.com/darwinsource/.  Downloading the XNU headers
requires an Apple ID and password.

Vic Abell, lsof author



Re: [HACKERS] Too-many-files errors on OS X

2004-02-23 Thread Larry Rosenman


--On Sunday, February 22, 2004 23:00:31 -0500 Tom Lane [EMAIL PROTECTED] wrote:

 Kevin Brown [EMAIL PROTECTED] writes:
  I wasn't able to test on HP-UX

 I get the same result on HPUX, after whacking the test program around
 a bit: no change in the number of files we can open.  Confirmations on
 other platforms please, anyone?

 For anyone else who has problems getting it to compile, try copying
 the relevant version of pg_dlopen from src/backend/port/dynloader/.
 I attach the code I actually ran on HPUX.

 			regards, tom lane

On FreeBSD 5:

$ ./eatfds3 /usr/local/lib/libpq.so /usr/lib/libm.so
dup() failed: Too many open files
Was able to use 7146 file descriptors
dup() failed: Too many open files
Was able to use 7146 file descriptors after opening 2 shared libs
$
On UnixWare 7.1.4:
$ ./eatfds3 /usr/lib/libpq.so.3 /usr/lib/libm.so.1
dup() failed: Too many open files
Was able to use 2045 file descriptors
dup() failed: Too many open files
Was able to use 2045 file descriptors after opening 2 shared libs
$
I had to hack on the code some more for FreeBSD:
(the realloc call needed the multiplication).  I ran this same code
on UnixWare.




$ cat eatfds3.c
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <stdlib.h>
#include <unistd.h>
#include <dlfcn.h>
// these seem to be needed on HPUX:
//#include <a.out.h>
//#include <dl.h>

int *fd;
int size = 3072;

void *
pg_dlopen(char *filename)
{
    /*
     * Use BIND_IMMEDIATE so that undefined symbols cause a failure return
     * from shl_load(), rather than an abort() later on when we attempt to
     * call the library!
     */
    caddr_t handle = dlopen(filename, RTLD_LAZY);

    return (void *) handle;
}

int eatallfds(void) {
    int i = 0;
    int j, myfd;

    while (1) {
        myfd = dup(0);
        if (myfd < 0) {
            fprintf(stderr, "dup() failed: %s\n", strerror(errno));
            break;
        }
        if (i >= size) {
            size *= 2;
            fd = realloc(fd, size * sizeof(*fd));
            if (fd == NULL) {
                fprintf(stderr, "Can't allocate: %s\n", strerror(errno));
                fprintf(stderr, "Had used %d descriptors\n", i);
                exit(1);
            }
        }
        fd[i++] = myfd;
    }
    for (j = 0; j < i; ++j) {
        close(fd[j]);
    }
    return i;
}

int main(int argc, char *argv[]) {
    int n, na;
    int i;
    void *addr;

    size = 3072;
    fd = malloc((size + 1) * sizeof(*fd));
    if (fd == NULL) {
        fprintf(stderr, "Can't allocate: %s\n", strerror(errno));
        return 1;
    }
    n = eatallfds();
    printf("Was able to use %d file descriptors\n", n);
    na = 0;
    for (i = 1; i < argc; ++i) {
        addr = pg_dlopen(argv[i]);
        if (addr != NULL) na++;
    }
    n = eatallfds();
    printf("Was able to use %d file descriptors after opening %d shared libs\n", n, na);
    return 0;
}

$



--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749




Re: [HACKERS] Too-many-files errors on OS X

2004-02-23 Thread Kevin Brown
I wrote:
 Larry Rosenman wrote:
  I had to hack on the code some more for FreeBSD:
  (the realloc call needed the multiplication).  I ran this same code
  on UnixWare.
 
 I feel like a moron, having missed that.  Probably explains the bad
 file number error I was getting on AIX, too...

And sure enough, that was it.  Got the same results on AIX 5 as on other
systems:

[EMAIL PROTECTED]:~$ ./eatfds /usr/lib/librpm.so.0 /usr/lib/librpmbuild.so.0
dup() failed: Too many open files
Was able to use 1997 file descriptors
dup() failed: Too many open files
Was able to use 1997 file descriptors after opening 2 shared libs
[EMAIL PROTECTED]:~$ uname -a
AIX m048 1 5 0001063A4C00



-- 
Kevin Brown   [EMAIL PROTECTED]



Re: [HACKERS] Too-many-files errors on OS X

2004-02-23 Thread Andrew Rawnsley
On Slackware 8.1:
[EMAIL PROTECTED]:~/src$ ./eatallfds libm.so libtcl.so libjpeg.so
dup() failed: Too many open files
Was able to use 1021 file descriptors
dup() failed: Too many open files
Was able to use 1021 file descriptors after opening 3 shared libs
On OpenBSD 3.1:
grayling# ./eatallfds libcrypto.so.10.0 libkrb5.so.13.0 libncurses.so.9.0
dup() failed: Too many open files
Was able to use 125 file descriptors
dup() failed: Too many open files
Was able to use 125 file descriptors after opening 3 shared libs



On Feb 22, 2004, at 10:41 PM, Tom Lane wrote:

 Kevin Brown [EMAIL PROTECTED] writes:
  Tom Lane wrote:
  Hmm.  This may be OS-specific.  The shlibs certainly show up in the
  output of lsof in every variant I've checked, but do they count against
  your open-file limit?

  It seems not, for both shared libraries that are linked in at startup
  time by the dynamic linker and shared libraries that are explicitly
  opened via dlopen().

 It would certainly make life a lot easier if we could assume that dlopen
 doesn't reduce your open-files limit.

  Attached is the test program I used.

 Can folks please try this on other platforms?

 			regards, tom lane




Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: [HACKERS] Too-many-files errors on OS X

2004-02-23 Thread Larry Rosenman


--On Monday, February 23, 2004 04:52:09 -0800 Kevin Brown [EMAIL PROTECTED] wrote:

 Larry Rosenman wrote:
  I had to hack on the code some more for FreeBSD:
  (the realloc call needed the multiplication).  I ran this same code
  on UnixWare.

 I feel like a moron, having missed that.  Probably explains the bad
 file number error I was getting on AIX, too...

It was a coredump for me, which is why I had to look at it,
and it took a while :-)
--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749




Re: [HACKERS] Too-many-files errors on OS X

2004-02-23 Thread Tom Lane
Kevin Brown [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 However, it seems that the real problem here is that we are so far off
 base about how many files we can open.  I wonder whether we should stop
 relying on sysconf() and instead try to make some direct probe of the
 number of files we can open.  I'm imagining repeatedly open() until
 failure at some point during postmaster startup, and then save that
 result as the number-of-openable-files limit.

 I strongly favor this method.  In particular, the probe should probably
 be done after all shared libraries have been loaded and initialized.

I've now committed changes in 7.4 and HEAD branches to do this.  Per the
recent tests, the code does not worry about tracking dlopen() calls, but
assumes that loading a shared library has no long-term impact on the
available number of FDs.
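
For reference, here is a rough sketch (not the actual committed code, which
differs in detail) of the dup()-until-failure probe; the helper name is made
up for illustration:

#include <unistd.h>

/* Probe how many file descriptors we can actually use, then release them. */
static int
probe_max_usable_fds(void)
{
	int	fds[10000];		/* arbitrary cap for the sketch */
	int	used = 0;
	int	j;

	while (used < 10000) {
		int newfd = dup(0);	/* assumes stdin is open */

		if (newfd < 0)
			break;		/* EMFILE/ENFILE: hit the limit */
		fds[used++] = newfd;
	}
	for (j = 0; j < used; j++)
		close(fds[j]);
	return used;
}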

 I also notice that OS X 10.3 seems to have working SysV semaphore
 support.  I am tempted to change template/darwin to use SysV where
 available, instead of Posix semaphores.  I wonder whether inheriting
 100-or-so open file descriptors every time we launch a backend isn't
 in itself a nasty performance hit, quite aside from its effect on how
 many normal files we can open.

 I imagine this could easily be tested.

In some simplistic tests, I couldn't find any clear difference in
backend startup time on Darwin with max_connections set to 5 vs 100.
So the idea that the extra FDs hurt us on backend startup seems wrong.
I am still a bit concerned about the possible impact of having an
unreasonably small number of available FDs, but against that we also
would have to determine whether Posix semaphores might be faster than
SysV semaphores on Darwin.  I think I'll leave well enough alone unless
someone feels like running some benchmarks.

regards, tom lane



Re: [HACKERS] Too-many-files errors on OS X

2004-02-22 Thread Kevin Brown
Tom Lane wrote:
 Kevin Brown [EMAIL PROTECTED] writes:
  I originally thought that each shared library that was loaded would eat
  a file descriptor (since I thought it would be implemented via mmap())
  but that doesn't seem to be the case, at least under Linux
 
 Hmm.  This may be OS-specific.  The shlibs certainly show up in the
 output of lsof in every variant I've checked, but do they count against
 your open-file limit?

It seems not, for both shared libraries that are linked in at startup
time by the dynamic linker and shared libraries that are explicitly
opened via dlopen().  This seems to be true for Linux and Solaris (I
wasn't able to test on HP-UX, and AIX yields a strange bad file number
error that I've yet to track down).

Attached is the test program I used.  It takes as its arguments a list
of files to hand to dlopen(), and will show how many files it was able
to open before and after running a batch of dlopen() commands.


-- 
Kevin Brown   [EMAIL PROTECTED]
#include <stdio.h>
#include <errno.h>
#include <stdlib.h>
#include <dlfcn.h>

int *fd;
int size = 1024;

int eatallfds(void) {
	int i = 0;
	int j, myfd;

	while (1) {
		myfd = dup(0);
		if (myfd < 0) {
			fprintf (stderr, "dup() failed: %s\n", strerror(errno));
			break;
		}
		fd[i++] = myfd;
		if (i >= size) {
			size *= 2;
			fd = realloc(fd, size);
			if (fd == NULL) {
				fprintf (stderr, "Can't allocate: %s\n",
						strerror(errno));
				fprintf (stderr, "Had used %d descriptors\n",
						i);
				exit(1);
			}
		}
	}
	for (j = 0 ; j < i ; ++j) {
		close(fd[j]);
	}
	return i;
}


int main (int argc, char *argv[]) {
	int n, na;
	int i;
	void *addr;

	size = 1024;
	fd = malloc(size * sizeof(*fd));
	if (fd == NULL) {
		fprintf (stderr, "Can't allocate: %s\n", strerror(errno));
		return 1;
	}
	n = eatallfds();
	printf ("Was able to use %d file descriptors\n", n);

	na = 0;
	for (i = 1 ; i < argc ; ++i) {
		addr = dlopen(argv[i], RTLD_LAZY);
		if (addr != NULL) na++;
	}
	n = eatallfds();
	printf ("Was able to use %d file descriptors after opening %d shared libs\n", n, na);
	return 0;
}




Re: [HACKERS] Too-many-files errors on OS X

2004-02-22 Thread Tom Lane
Kevin Brown [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Hmm.  This may be OS-specific.  The shlibs certainly show up in the
 output of lsof in every variant I've checked, but do they count against
 your open-file limit?

 It seems not, for both shared libraries that are linked in at startup
 time by the dynamic linker and shared libraries that are explicitly
 opened via dlopen().

It would certainly make life a lot easier if we could assume that dlopen
doesn't reduce your open-files limit.

 Attached is the test program I used.

Can folks please try this on other platforms?

regards, tom lane



Re: [HACKERS] Too-many-files errors on OS X

2004-02-22 Thread Tom Lane
Kevin Brown [EMAIL PROTECTED] writes:
 I wasn't able to test on HP-UX

I get the same result on HPUX, after whacking the test program around
a bit: no change in the number of files we can open.  Confirmations on
other platforms please, anyone?

For anyone else who has problems getting it to compile, try copying
the relevant version of pg_dlopen from src/backend/port/dynloader/.
I attach the code I actually ran on HPUX.

regards, tom lane


#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <stdlib.h>
#include <unistd.h>
//#include <dlfcn.h>
// these seem to be needed on HPUX:
#include <a.out.h>
#include <dl.h>

int *fd;
int size = 1024;

void *
pg_dlopen(char *filename)
{
    /*
     * Use BIND_IMMEDIATE so that undefined symbols cause a failure return
     * from shl_load(), rather than an abort() later on when we attempt to
     * call the library!
     */
    shl_t       handle = shl_load(filename,
                                  BIND_IMMEDIATE | BIND_VERBOSE | DYNAMIC_PATH,
                                  0L);

    return (void *) handle;
}


int eatallfds(void) {
    int i = 0;
    int j, myfd;

    while (1) {
        myfd = dup(0);
        if (myfd < 0) {
            fprintf (stderr, "dup() failed: %s\n", strerror(errno));
            break;
        }
        fd[i++] = myfd;
        if (i >= size) {
            size *= 2;
            fd = realloc(fd, size);
            if (fd == NULL) {
                fprintf (stderr, "Can't allocate: %s\n",
                         strerror(errno));
                fprintf (stderr, "Had used %d descriptors\n",
                         i);
                exit(1);
            }
        }
    }
    for (j = 0 ; j < i ; ++j) {
        close(fd[j]);
    }
    return i;
}


int main (int argc, char *argv[]) {
    int n, na;
    int i;
    void *addr;

    size = 1024;
    fd = malloc(size * sizeof(*fd));
    if (fd == NULL) {
        fprintf (stderr, "Can't allocate: %s\n", strerror(errno));
        return 1;
    }
    n = eatallfds();
    printf ("Was able to use %d file descriptors\n", n);

    na = 0;
    for (i = 1 ; i < argc ; ++i) {
        addr = pg_dlopen(argv[i]);
        if (addr != NULL) na++;
    }
    n = eatallfds();
    printf ("Was able to use %d file descriptors after opening %d shared libs\n",
            n, na);
    return 0;
}



Re: [HACKERS] Too-many-files errors on OS X

2004-02-22 Thread Claudio Natoli

  Attached is the test program I used.
 
 Can folks please try this on other platforms?

[after some minor changes] Can confirm that LoadLibrary (aka dlopen) does
not reduce the files limit under Win32.
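
For anyone curious, a hedged sketch of what the Win32 variant might look
like (these are not Claudio's actual changes): LoadLibraryA() stands in for
dlopen(), and the CRT's _dup()/_close() stand in for the POSIX calls.

#include <windows.h>
#include <io.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
	int fds[10000];		/* arbitrary cap for the sketch */
	int i, n = 0;

	/* load each DLL named on the command line */
	for (i = 1; i < argc; i++)
		if (LoadLibraryA(argv[i]) == NULL)
			fprintf(stderr, "could not load %s\n", argv[i]);

	/* eat C-runtime file descriptors until _dup() fails */
	while (n < 10000 && (fds[n] = _dup(0)) >= 0)
		n++;
	printf("Was able to use %d file descriptors after opening the libs\n", n);
	while (n > 0)
		_close(fds[--n]);
	return 0;
}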

Cheers,
Claudio



Re: [HACKERS] Too-many-files errors on OS X

2004-02-22 Thread Joe Conway
Tom Lane wrote:
 Confirmations on other platforms please, anyone?

 For anyone else who has problems getting it to compile, try copying
 the relevant version of pg_dlopen from src/backend/port/dynloader/.
 I attach the code I actually ran on HPUX.
FWIW:

RH9
---
# ./eatallfds libperl.so libR.so libtcl.so
dup() failed: Too many open files
Was able to use 1021 file descriptors
dup() failed: Too many open files
Was able to use 1021 file descriptors after opening 3 shared libs
Fedora
---
# ./eatallfds libR.so libtcl.so libperl.so
dup() failed: Too many open files
Was able to use 1021 file descriptors
dup() failed: Too many open files
Was able to use 1021 file descriptors after opening 3 shared libs
Joe
