Carsten Kropf <ckro...@fh-hof.de> writes:
> I have a strange issue using a custom-built index structure. My index access
> method supports a document type composed of words (as a tsvector) and points
> (1-dimensional arrays of them). For internal reasons, I have to save the
> documents as a whole inside my structure (for proper reorganisations).
> So, I form the tuples using index_form_tuple with the proper description.
> Everything works fine, as long as the documents are quite small. However, if
> the tsvector becomes too large, I run into a problem of not being able to
> store the documents, because (obviously) the tsvector is too large for one
> page.
Well, of course. I think this is a fundamentally bad index design. You
didn't say exactly what sort of searches you want this index type to
accelerate, but perhaps you need a design closer to GIN, in which you'd
make index entries for individual words, not whole documents.

> What I tried, to solve this issue, is to extract the words from the
> document (in my index) and call 'Datum toast_compress_datum(Datum value)'
> in order to compress the tsvector into a proper toast table.

Indexes don't have toast tables.

			regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
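[Editor's sketch of the per-word (GIN-style) approach suggested above, at the SQL level. The table and column names here are illustrative, not from the thread; the point is that GIN stores one index entry per lexeme, so whole documents never have to fit on an index page.]

```sql
-- Hypothetical table holding documents alongside their tsvector.
CREATE TABLE docs (id serial PRIMARY KEY, body text, tsv tsvector);

-- GIN makes index entries for individual lexemes, not whole documents.
CREATE INDEX docs_tsv_idx ON docs USING gin (tsv);

-- Searches match against the per-lexeme entries.
SELECT id FROM docs WHERE tsv @@ to_tsquery('english', 'example');
```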