Jeff Davis <[EMAIL PROTECTED]> writes:
> On Wed, 2007-05-09 at 17:29 -0700, Joshua D. Drake wrote:
>> Sounds to me like you just need to up the total amount of open files
>> allowed by the operating system.
> It looks more like the opposite, here's the docs for
> max_files_per_process:
I think J...
Ralph Mason wrote:
> I have several databases. They are each about 35 GB in size and have about
> 10.5K relations (count from pg_stat_all_tables) in them. pg_class has about
> 26k rows and the data directory contains about 70k files. These are busy
> machines; they run about 50 xactions per second (approx. insert/update ...
Just adding a bit of relevant information:
We have the kernel file-max setting set to 297834 (256 per 4 MB of RAM).
/proc/sys/fs/file-nr tells us that we have roughly 13000 allocated handles,
and the free-handle count it reports is always zero.
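(For anyone following along: those figures come straight from the proc
filesystem. The paths below are the standard Linux ones, and
/proc/sys/fs/file-nr prints three numbers: allocated handles, free/unused
handles, and the file-max ceiling. On 2.6 kernels the middle figure is
normally 0 because unused handles are released rather than cached.)

    cat /proc/sys/fs/file-max    # system-wide ceiling on open file handles
    cat /proc/sys/fs/file-nr     # "allocated  free  max", e.g. 13000  0  297834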
On 10/05/07, Jeff Davis <[EMAIL PROTECTED]> wrote:
> On Wed, 2007-05-09 ...
> To me, that means that his machine is allowing the new FD to be created,
> but then can't really support that many so it gives an error.
file-max is 297834
ulimit is 100
(doesn't make sense, but there you go)
What I don't really understand is why, with max_files_per_process at 800,
we don't get th...
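For what it's worth, the per-process limit that "ulimit is 100" presumably
refers to can be checked like this; the limits.conf entry is only a sketch,
and it has to apply to whatever account actually runs the postmaster:

    ulimit -n     # soft limit on open files in the current shell
    ulimit -Hn    # hard limit

    # A persistent change usually goes in /etc/security/limits.conf, e.g.:
    #   postgres  soft  nofile  8192
    #   postgres  hard  nofile  8192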
On Wed, 2007-05-09 at 17:29 -0700, Joshua D. Drake wrote:
> > 2007-05-09 03:07:50.083 GMT 1146975740: LOCATION: BasicOpenFile,
> > fd.c:471
> >
> > 2007-05-09 03:07:50.091 GMT 0: LOG: 0: duration: 12.362 ms
> >
> > 2007-05-09 03:07:50.091 GMT 0: LOCATION: exec_simple_query,
Hello,
You likely need to increase your file-max parameters using sysctl.conf.
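Something along these lines, for example (the number is only illustrative,
size it to the machine's RAM):

    # raise the kernel ceiling immediately
    sysctl -w fs.file-max=300000

    # and persist it across reboots
    echo 'fs.file-max = 300000' >> /etc/sysctl.conf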
Sincerely,
Joshua D. Drake
--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of CAJ CAJ
Sent: 10 May 2007 12:26
To: Ralph Mason
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance Woes
2007-05-09 03:07:50.083 GMT 1146975740: LOCATION: BasicOpenFile,
fd.c:471
2007-05-09 03:07:50.091 GMT 0: LOG: 0: duration: 12.362 ms
2007-05-09 03:07:50.091 GMT 0: LOCATION: exec_simple_query,
postgres.c:1090
So we decreased the max_files_per_process to ...
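For illustration only: lowering it amounts to a postgresql.conf edit plus a
restart, since max_files_per_process can only be set at server start. The
number below is just a placeholder, not the value chosen here:

    #  in postgresql.conf:
    #    max_files_per_process = 500

    # confirm the running value, then restart to pick up a change
    psql -c "SHOW max_files_per_process;"
    pg_ctl restart -D "$PGDATA"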
Hi,
I have several databases. They are each about 35 GB in size and have about
10.5K relations (count from pg_stat_all_tables) in them. pg_class has about
26k rows and the data directory contains about 70k files. These are busy
machines; they run about 50 xactions per second (approx. insert/update ...
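(Counts like the ones above are typically gathered with something along these
lines; the database name and data directory path here are just placeholders:)

    psql -d mydb -c "SELECT count(*) FROM pg_stat_all_tables;"   # ~10.5K relations
    psql -d mydb -c "SELECT count(*) FROM pg_class;"             # ~26k rows
    find /var/lib/pgsql/data -type f | wc -l                     # ~70k files under the data directory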