Hi,

It's good to hear some feedback on using the Hunspell dictionaries.
 Lucene's support is pretty new, so we're obviously looking to improve it.
 Could you open a JIRA issue so we can explore whether there are some ways
to reduce memory consumption?

On Tue, Dec 13, 2011 at 5:37 PM, Maciej Lisiewski <c2h...@poczta.fm> wrote:

> Hi,
>>
>> in my index schema I have defined a
>> DictionaryCompoundWordTokenFilterFactory and a
>> HunspellStemFilterFactory. Each FilterFactory has a dictionary with
>> about 100k entries.
>>
>> To avoid an out of memory error I have to set the heap space to 128m
>> for 1 index.
>>
>> Is there a way to reduce the memory consumption when parsing the
>> dictionary?
>> I need to create several indexes and 128m for each index is too much.
>>
>
> Same problem here - even with an empty index (no data yet) and two fields
> using Hunspell (pl_PL), I had to increase the heap size to over 2GB for Solr
> to start at all.
>
> Stempel, using the very same dictionary, works fine with 128M.
>
> --
> Maciej Lisiewski
>
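For anyone following along, a minimal sketch of the kind of analyzer chain being described above. The field type name, dictionary file names, and tokenizer choice are assumptions, not taken from the original schema:

```xml
<!-- Hypothetical schema.xml fragment; file names and the "text_pl"
     name are placeholders, not the poster's actual configuration. -->
<fieldType name="text_pl" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- Decompounds tokens against a word list (~100k entries here) -->
    <filter class="solr.DictionaryCompoundWordTokenFilterFactory"
            dictionary="compound-dict.txt" minWordSize="5"/>
    <!-- Stems tokens using Hunspell .dic/.aff files, e.g. pl_PL -->
    <filter class="solr.HunspellStemFilterFactory"
            dictionary="pl_PL.dic" affix="pl_PL.aff" ignoreCase="true"/>
  </analyzer>
</fieldType>
```

Each filter loads its dictionary into memory per core, which is where the per-index heap cost reported above comes from.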



-- 
Chris Male | Software Developer | DutchWorks | www.dutchworks.nl
