On Thu, Mar 6, 2014 at 11:14 PM, Robert Haas <robertmh...@gmail.com> wrote:
>> I've been tempted to implement a new type of hash index that allows both
>> WAL-logging and high concurrency, simply by disallowing bucket splits.  At
>> index creation time you use a storage parameter to specify the number of
>> buckets, and that is that.  If you mis-planned, build a new index with more
>> buckets, possibly concurrently, and drop the too-small one.
>
> Yeah, we could certainly do something like that.  It sort of sucks,
> though.  I mean, it's probably pretty easy to know that starting with
> the default 2 buckets is not going to be enough; most people will at
> least be smart enough to start with, say, 1024.  But are you going to
> know whether you need 32768 or 1048576 or 33554432?  A lot of people
> won't, and we have more than enough reasons for performance to degrade
> over time as it is.
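
For concreteness, a minimal sketch of the fixed-bucket mapping being
discussed; this is not actual hash-AM code, and "nbuckets" merely stands
in for the hypothetical storage parameter:

    /*
     * Sketch only: with splits disallowed, the key-to-bucket mapping is
     * fixed for the life of the index, so a probe never races against a
     * concurrent split and no split ever needs WAL-logging.
     */
    #include <stdint.h>

    /* Chosen at CREATE INDEX time via the storage parameter; immutable. */
    static const uint32_t nbuckets = 1024;  /* assumed power of two */

    static uint32_t
    key_to_bucket(uint32_t hashvalue)
    {
        /* A power-of-two bucket count lets masking replace modulo. */
        return hashvalue & (nbuckets - 1);
    }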

The other thought I had was that you could do the rehashing lazily in
vacuum: when you probe, you have to check multiple candidate pages until
vacuum comes along and rehashes everything.
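
Roughly: if the bucket count has logically doubled from N to 2N but vacuum
hasn't moved anything yet, a key that belongs in bucket (hash % 2N) may
still be filed under (hash % N), so a probe has to look in both places.  A
sketch, with "bucket_contains" standing in for an assumed helper that
scans one bucket's pages:

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed helper: scan one bucket's pages for a matching hash value. */
    extern bool bucket_contains(uint32_t bucket, uint32_t hashvalue);

    static bool
    probe_lazy(uint32_t hashvalue, uint32_t old_nbuckets, bool rehash_done)
    {
        uint32_t new_bucket = hashvalue % (2 * old_nbuckets);

        if (bucket_contains(new_bucket, hashvalue))
            return true;

        /*
         * Until vacuum finishes rehashing, the tuple may still be sitting
         * in the pre-split bucket.
         */
        if (!rehash_done)
        {
            uint32_t old_bucket = hashvalue % old_nbuckets;

            if (old_bucket != new_bucket &&
                bucket_contains(old_bucket, hashvalue))
                return true;
        }
        return false;
    }

Once vacuum has moved everything, rehash_done flips and probes go back to
touching a single bucket.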

-- 
greg

