On 01.08.19 at 13:57, Salmaan Rashid Syed wrote:
After I make the -Djute.maxbuffer changes to Solr deployed in production,
do I need to restart Solr to be able to add synonym files larger than 1 MB?

Yes, you have to restart Solr.
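
Once it is back up you can verify that the flag is active, e.g. with a quick
check like this (a sketch, assuming a Unix-like host; the value shown is
whatever you configured):

  # show the jute.maxbuffer setting of the running Solr JVM
  ps aux | grep -o 'jute.maxbuffer=[0-9]*'

or by looking at the Java Properties page in the Solr admin UI.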



Or was this supposed to be done before ever putting Solr into production?
Can we make changes while Solr is running in production?

It depends on your system. In my cloud with 5 shards and 3 replicas I can
take the nodes offline one by one (stop, modify, start again) without problems.
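
Such a rolling restart could look roughly like this on each node in turn
(a sketch; the port and ZooKeeper hosts are assumptions, adjust to your setup):

  # stop this Solr node
  bin/solr stop -p 8983
  # add/adjust -Djute.maxbuffer in bin/solr.in.sh, then start again
  bin/solr start -c -p 8983 -z zk1:2181,zk2:2181,zk3:2181
  # wait until all replicas on this node are active before moving on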



Thanks.

Regards,
Salmaan



On Tue, Jul 30, 2019 at 4:53 PM Bernd Fehling <
bernd.fehl...@uni-bielefeld.de> wrote:

You have to increase -Djute.maxbuffer for large configs.

In Solr's bin/solr.in.sh use e.g.
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=10000000"
This increases jute.maxbuffer for the ZooKeeper client on the Solr side to 10 MB.

In ZooKeeper's conf/zookeeper-env.sh use
SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Djute.maxbuffer=10000000"
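
ZooKeeper itself also has to be restarted for the flag to take effect, one
ensemble member at a time, e.g. (path assumed for a standalone install):

  zookeeper/bin/zkServer.sh restart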

I have a thesaurus larger than 10 MB and use 30 MB for jute.maxbuffer; it works perfectly.

Regards


On 30.07.19 at 13:09, Salmaan Rashid Syed wrote:
Hi Solr Users,

I have a very big synonyms file (>5 MB). I am unable to start Solr in cloud
mode because it throws an error stating that the synonyms file is too
large. I figured out that ZooKeeper by default doesn't accept files larger
than about 1 MB.

I tried to break my synonyms file down into smaller chunks of less than 1 MB
each, but I am not sure how to include all the filenames in the
Solr schema.

Should they be separated by commas, like synonyms = "__1_synonyms.txt,
__2_synonyms.txt, __3synonyms.txt"?
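
In the schema that would presumably look something like this sketch (the
field type name, tokenizer, and filter options are just for illustration;
only the comma-separated synonyms attribute is the point here):

  <fieldType name="text_syn" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.SynonymGraphFilterFactory"
              synonyms="__1_synonyms.txt,__2_synonyms.txt,__3synonyms.txt"
              ignoreCase="true" expand="true"/>
    </analyzer>
  </fieldType>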

Or is there a better way of doing this? Will the bigger file, when broken
down into smaller chunks, be uploaded to ZooKeeper as well?

Please help, or guide me to the relevant documentation.

Thank you.

Regards.
Salmaan.


