Bruce Momjian <[EMAIL PROTECTED]> writes:
> Also, what does an in-memory bitmapped index look like?
One idea that might work: a binary search tree in which each node represents a single page of the table, and contains a bit array with one bit for each possible item number on the page. You would not need more than BLCKSZ/(sizeof(HeapTupleHeaderData)+sizeof(ItemIdData)) bits in a node, or about 36 bytes at default BLCKSZ --- for most tables you could probably prove it would be a great deal less. You only allocate nodes for pages that have at least one interesting row.

I think this would represent a reasonable compromise between size and insertion speed. It would only get large if the indexscan output demanded visiting many different pages --- but at some point you could abandon index usage and do a sequential scan, so I think that property is okay.

A variant is to make the per-page bit arrays entries in a hash table, with page number as the hash key. This would reduce insertion to a nearly constant-time operation, but the drawback is that you'd need an explicit sort at the end to put the per-page entries into page-number order before you scan 'em. You might come out ahead anyway, not sure.

Or we could try a true linear bitmap (indexed by page number times max-items-per-page plus item number) that's compressed in some fashion, probably just by eliminating large runs of zeroes. The difficulty here is that inserting a new one-bit could be pretty expensive, and we need it to be cheap.

Perhaps someone can come up with other, better ideas ...

			regards, tom lane
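[Illustrative aside, not part of the original mail: a minimal standalone sketch of the search-tree variant described above. The names (PageNode, tbm_insert, tbm_output) and the item-count constant are invented here for illustration; a real implementation would size the bit array from BLCKSZ, sizeof(HeapTupleHeaderData), and sizeof(ItemIdData), and would use the backend's memory-allocation and TID machinery.]

/*
 * Sketch only: binary search tree keyed by block number, one small
 * bitmap per node with a bit for each possible item number on that page.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define MAX_ITEMS_PER_PAGE 291	/* illustrative; ~ BLCKSZ / (tuple header + line pointer) */
#define BITMAP_BYTES ((MAX_ITEMS_PER_PAGE + 7) / 8)	/* roughly 36 bytes per node */

typedef struct PageNode
{
	uint32_t	blockno;			/* table page this node covers */
	uint8_t		bits[BITMAP_BYTES];	/* one bit per possible item number */
	struct PageNode *left;
	struct PageNode *right;			/* binary search tree ordered by blockno */
} PageNode;

/*
 * Insert (blockno, itemno); a node is created only when the page has at
 * least one interesting row.
 */
static PageNode *
tbm_insert(PageNode *root, uint32_t blockno, uint16_t itemno)
{
	if (root == NULL)
	{
		root = calloc(1, sizeof(PageNode));
		root->blockno = blockno;
	}

	if (blockno < root->blockno)
		root->left = tbm_insert(root->left, blockno, itemno);
	else if (blockno > root->blockno)
		root->right = tbm_insert(root->right, blockno, itemno);
	else
		root->bits[itemno / 8] |= (uint8_t) (1 << (itemno % 8));

	return root;
}

/*
 * In-order traversal visits pages in block-number order, which is the
 * order the eventual heap scan wants; emit each set bit as (block, item).
 */
static void
tbm_output(const PageNode *node)
{
	if (node == NULL)
		return;
	tbm_output(node->left);
	for (int i = 0; i < MAX_ITEMS_PER_PAGE; i++)
		if (node->bits[i / 8] & (1 << (i % 8)))
			printf("(%u,%d)\n", (unsigned) node->blockno, i);
	tbm_output(node->right);
}

int
main(void)
{
	PageNode   *root = NULL;

	/* simulate indexscan output arriving in arbitrary order */
	root = tbm_insert(root, 42, 7);
	root = tbm_insert(root, 3, 1);
	root = tbm_insert(root, 42, 2);

	tbm_output(root);			/* prints (3,1) (42,2) (42,7), i.e. page order */
	return 0;
}

[The hash-table variant mentioned above would replace the tree with a hash table keyed on block number, making insertion nearly constant-time, at the cost of an explicit sort of the per-page entries by block number before the output step.]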