On 12/11/2013 02:39 PM, Martijn van Oosterhout wrote:
> In this discussion we've mostly used block = 1 PostgreSQL block of 8kB.
> But when reading from a disk, once you've read one block you can read
> the following ones practically for free.
> 
> So I wonder if you could make your sampling always read 16 consecutive
> blocks, but then use only 25-50% of the tuples.  That way you get many
> more tuples for the same number of disk seeks.

Yeah, that's what I meant by "tune this for the FS".  We'll probably
have to test a lot of different "block sizes" on different FSes before
we arrive at a reasonable size, and even then I'll bet we'll have to
offer a GUC.
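
For illustration only, here's a toy, self-contained sketch of the
trade-off Martijn is describing (the relation size, tuples per block,
16-block chunk, and 25% keep rate are made-up numbers, and this is not
the actual ANALYZE code path): each seek buys a run of consecutive
blocks, and we only keep a fraction of the tuples from each run.

/* Toy model of block-cluster sampling: pick a random starting block,
 * read BLOCKS_PER_CHUNK consecutive blocks, and keep ~TUPLE_FRACTION
 * of the tuples in that run.  All constants are hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TOTAL_BLOCKS      100000   /* hypothetical relation size in blocks */
#define TUPLES_PER_BLOCK  60       /* hypothetical tuples per 8kB block */
#define BLOCKS_PER_CHUNK  16       /* consecutive blocks read per seek */
#define TUPLE_FRACTION    0.25     /* fraction of tuples kept per chunk */
#define TARGET_TUPLES     30000    /* sample size we want */

int main(void)
{
    long sampled_tuples = 0;
    long blocks_read = 0;
    long seeks = 0;

    srand((unsigned) time(NULL));

    while (sampled_tuples < TARGET_TUPLES)
    {
        /* one seek buys us a run of consecutive blocks */
        long start = rand() % (TOTAL_BLOCKS - BLOCKS_PER_CHUNK);

        seeks++;
        for (int b = 0; b < BLOCKS_PER_CHUNK; b++)
        {
            blocks_read++;
            for (int t = 0; t < TUPLES_PER_BLOCK; t++)
            {
                if ((double) rand() / RAND_MAX < TUPLE_FRACTION)
                    sampled_tuples++;
            }
        }
        (void) start;   /* real code would use this to drive the reads */
    }

    printf("tuples sampled: %ld, blocks read: %ld, seeks: %ld\n",
           sampled_tuples, blocks_read, seeks);
    return 0;
}

With these particular constants, each seek yields about 240 sampled
tuples in this toy model instead of the 60 you'd get from a single
block, i.e. roughly a quarter of the seeks for the same sample size.
The right chunk size and fraction are exactly the knobs we'd have to
test per FS (and probably expose as a GUC).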

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

