Dear List,

I have an index with 2,000,000 articles. All of these texts are tokenized during indexing. On this data I run a faceted query like the following (to retrieve associated words):

select?q=a_spell:{some word}&facet.method=enum&facet=true&facet.field=Paragraph&facet.limit=10&facet.prefix={some prefix}&facet.mincount=1500&indent=1&fl=_id&wt=json&rows=0
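
For reference, the full request looks roughly like this (host, port and core name are placeholders for my local setup, not the real ones):

  curl 'http://localhost:8983/solr/articles/select?q=a_spell:{some word}&facet.method=enum&facet=true&facet.field=Paragraph&facet.limit=10&facet.prefix={some prefix}&facet.mincount=1500&indent=1&fl=_id&wt=json&rows=0'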


I have more than 5,000,000 unique tokens in the index and the facet query is quite slow. I also tried different FastLRUCache settings for the filterCache.
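
As an example, one of the filterCache variants I tried in solrconfig.xml looked roughly like this (the sizes shown are just one of the settings I experimented with, not a recommendation):

  <filterCache class="solr.FastLRUCache"
               size="16384"
               initialSize="4096"
               autowarmCount="4096"/>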

Does anybody have a hint on how I could improve performance with this setup?

Thank you all

--
Andreas Niekler, Dipl. Ing. (FH)
NLP Group | Department of Computer Science
University of Leipzig
Johannisgasse 26 | 04103 Leipzig

mail: aniek...@informatik.uni-leipzig.de
