On Mon, Aug 5, 2013 at 5:01 PM, KONDO Mitsumasa
<kondo.mitsum...@lab.ntt.co.jp> wrote:
> Hi Amit,
>
>
> (2013/08/05 15:23), Amit Langote wrote:
>>
>> May the routines in fd.c become a bottleneck with a large number of
>> concurrent connections to the above database, say something like "pgbench
>> -j 8 -c 128"? Is there any other place I should be paying attention
>> to?
>
> What kind of file system did you use?
>
> When opening a file, the ext3/ext4 file system appears to do a sequential
> scan of the directory entries to find the file's inode.
> Also, PostgreSQL limits FDs to 1000 per process, which seems too small.
> Please change "max_files_per_process = 1000" in
> src/backend/storage/file/fd.c.
> If we rewrite it, we can change the FD limit per process. I have already
> created a patch that makes this configurable in postgresql.conf, and will
> submit it in the next CF.

Thank you for replying, Kondo-san.
The file system is ext4.
So, as long as we stay within max_files_per_process, the routines in fd.c
should not become a bottleneck?
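For what it's worth, here is a minimal sketch of how one might check the
OS-side limits that interact with max_files_per_process on Linux before
raising it (the commands are standard Linux tooling, not PostgreSQL-specific,
and BACKEND_PID is a hypothetical placeholder for a real backend PID):

```shell
# Per-process soft limit on open file descriptors that a postgres
# backend would inherit from its environment
ulimit -n

# Count the FDs currently open by one backend process
# (replace $BACKEND_PID with an actual PID from pg_stat_activity)
ls "/proc/$BACKEND_PID/fd" 2>/dev/null | wc -l

# System-wide cap on open file handles
cat /proc/sys/fs/file-max
```

If ulimit -n is at or below max_files_per_process, raising the latter alone
would not help, since the kernel limit is what the backend actually hits.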


-- 
Amit Langote


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers