On 12/3/14, 8:50 AM, José Luis Tallón wrote:

>>> May I possibly suggest a file-per-schema model instead? This approach
>>> would certainly solve the excessive inode consumption problem that --
>>> I guess -- Andres is trying to address here.
>>
>> I don't think that really has any advantages.

> Just spreading the I/O load, nothing more, it seems.

> Just to elaborate a bit on the reasoning, for completeness' sake: given
> that a relation segment's maximum size is 1 GB and each sequence would
> occupy one 8 kB page, we'd have 1048576/8 = 128k sequences per relation
> segment. Arguably, not many real use cases will have that many
> sequences, save for *massively* multi-tenant databases.
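(Editorially, the figure above is easy to sanity-check; the only assumption is one 8 kB page per sequence, which is what the all-sequences-in-one-relation proposal implies.)

```python
# Sanity check of the sequences-per-segment figure quoted above.
# Assumption: each sequence occupies one full page in the proposed
# all-sequences-in-one-relation design.
SEGMENT_SIZE_KB = 1024 * 1024  # relation segment maximum size: 1 GB, in kB
PAGE_SIZE_KB = 8               # PostgreSQL's default block size

sequences_per_segment = SEGMENT_SIZE_KB // PAGE_SIZE_KB
print(sequences_per_segment)   # 131072, i.e. the "128k" in the text
```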

> The downside being that all that random I/O (in general, it can't really
> be sequential unless there are very, very few sequences) can't be spread
> to other spindles. Maybe add a "sequence_default_tablespace" GUC plus
> ALTER SEQUENCE ... SET TABLESPACE, so that an SSD could be used for this
> purpose?

Why not? RAID arrays typically use stripe sizes in the 128-256 kB range, which 
at one 8 kB page per sequence means only 16 or 32 sequences per stripe.
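(The stripe arithmetic here is the same division; again the one-page-per-sequence assumption is mine, carried over from the segment-size math earlier in the thread.)

```python
# Sequences per RAID stripe, assuming one 8 kB page per sequence and
# the typical stripe sizes mentioned above.
PAGE_SIZE_KB = 8
for stripe_kb in (128, 256):
    print(f"{stripe_kb} kB stripe -> {stripe_kb // PAGE_SIZE_KB} sequences")
# 128 kB stripe -> 16 sequences
# 256 kB stripe -> 32 sequences
```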

It still might make sense to allow controlling what tablespace a sequence is 
in, but IMHO the default should just be pg_default.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
