"Kevin Grittner" <kevin.gritt...@wicourts.gov> writes:
> [ uniformly sample the TID space defined as (1..P, 1..M) ]
> Shouldn't that get us the randomly chosen sample we're looking for?
> Is there a problem you think this ignores?

Not sure.  The issue that I'm wondering about is that the line number
part of the space is not uniformly populated, i.e., small line numbers
are much more likely to exist than large ones.  (In the limit that
density goes to zero, when you pick M much too large.)  It's not clear
to me whether this gives an unbiased probability of picking real
tuples, as opposed to hypothetical TIDs.

Another issue is efficiency.  In practical cases you'll have to greatly
overestimate M compared to the typical actual-number-of-tuples-per-page,
which will lead to a number of target TIDs N that's much larger than
necessary, which will make the scan slow; I think in practice you'll
end up doing a seqscan or something that might as well be one, because
unless S is *really* tiny it'll hit just about every page.  We can have
that today without months worth of development effort, using the
"WHERE random() < S" technique.

			regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
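[Editor's note: a toy simulation, not PostgreSQL code, of the two schemes discussed above. The page count P, the per-page tuple counts, M, and S are all made-up illustrative numbers. It suggests that in this simplified model each real tuple corresponds to exactly one point of the (1..P, 1..M) space and so is sampled uniformly, while also showing the efficiency concern: the TID scheme touches nearly every page, which is why it degenerates toward the seqscan that "WHERE random() < S" already gives you.]

```python
import random

# Toy model: P pages, each holding a varying number of live tuples,
# with M chosen as an overestimate of the per-page maximum line number.
# All numbers here are illustrative assumptions.
random.seed(42)
P = 1000                                    # pages in the table
tuples_per_page = [random.randint(5, 30) for _ in range(P)]
total = sum(tuples_per_page)
M = 50                                      # overestimated max line number
S = 0.05                                    # desired sampling fraction

# Scheme 1: uniformly sample hypothetical TIDs from the (1..P, 1..M)
# space.  To expect about S * total real tuples, we need roughly
# N = S * P * M candidate TIDs, most of which miss.
N = int(S * P * M)
sampled_tids = {(random.randrange(P), random.randrange(M) + 1)
                for _ in range(N)}
hits = sum(1 for (p, ln) in sampled_tids if ln <= tuples_per_page[p])
pages_touched = {p for (p, _) in sampled_tids}

# Each real TID is one point of the space, so real tuples are hit with
# equal probability -- but note how many distinct pages get visited.
print(f"TID scheme: {hits}/{total} tuples, "
      f"{len(pages_touched)}/{P} pages touched")

# Scheme 2: the "WHERE random() < S" technique -- a Bernoulli filter
# applied during a sequential scan, which reads every page exactly once.
bernoulli_hits = sum(1 for _ in range(total) if random.random() < S)
print(f"Bernoulli scheme: {bernoulli_hits}/{total} tuples, "
      f"{P}/{P} pages touched")
```

With these numbers, both schemes return close to 5% of the tuples, and the TID scheme ends up reading the large majority of the table's pages, matching the "might as well be a seqscan" observation.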