http://lucene.apache.org/solr/guide/6_6/command-line-utilities.html
"Upload a configuration directory"

Take my advice and read the SolrCloud section of the Solr Ref Guide.
It will answer most of your questions and is a good start.
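For what it's worth, on Solr 6.x the upload could be sketched like this (names are assumptions for illustration: the config dir, the confname "mycollection_conf", the collection "mycollection", and ZooKeeper on localhost:2181 all need adjusting to your setup):

```shell
# upload the edited config directory with zkcli.sh (ships with Solr)
server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd upconfig \
  -confdir server/solr/configsets/_default/conf -confname mycollection_conf

# the same upload via the bin/solr zk subcommand
bin/solr zk upconfig -z localhost:2181 -n mycollection_conf \
  -d server/solr/configsets/_default/conf

# reload the collection so running nodes pick up the changed config
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection"
```

These commands need a running cluster, so treat them as a template rather than something to paste verbatim.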



On 02.08.19 at 08:30, Salmaan Rashid Syed wrote:
Hi Bernd,

Yet, another noob question.

Consider that my conf directory for creating a collection is _default. Suppose
I have now made changes to managed-schema and config.xml. How do I upload them
to the external ZooKeeper on port 2181?

Can you please give me the command that uploads the altered config.xml and
managed-schema to ZooKeeper?

Thanks.


On Fri, Aug 2, 2019 at 11:53 AM Bernd Fehling <
bernd.fehl...@uni-bielefeld.de> wrote:


to 1) yes, because -Djute.maxbuffer is passed to Java as a startup parameter.

to 2) I don't know, because I never use the internal ZooKeeper

to 3) the configs are located at solr/server/solr/configsets/
        - choose one configset, make your changes and upload it to ZooKeeper
        - when creating a new collection, choose your uploaded config
        - whenever you change something in your config, you have to upload it
          to ZooKeeper again
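Those three steps could be sketched roughly like this (a sketch only; the configset name "myconf", collection name "mycoll", ports, paths and the ZooKeeper host are all made-up assumptions):

```shell
# 1) copy a stock configset and edit the copy
cp -r server/solr/configsets/_default server/solr/configsets/myconf
#    ... edit server/solr/configsets/myconf/conf/managed-schema etc. ...

# 2) upload the edited configset to ZooKeeper
bin/solr zk upconfig -z localhost:2181 -n myconf -d server/solr/configsets/myconf

# 3) create a collection that references the uploaded config by name
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycoll&numShards=2&replicationFactor=2&collection.configName=myconf"
```

Step 2 is also what you repeat (followed by a collection RELOAD) after every later config change.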

I don't know which Solr version you are using, but a good starting point
for SolrCloud is
http://lucene.apache.org/solr/guide/6_6/solrcloud.html

Regards
Bernd



On 02.08.19 at 07:59, Salmaan Rashid Syed wrote:
Hi Bernd,

Sorry for noob questions.

1) What do you mean by restart? Do you mean that I should issue ./bin/solr
stop -all?

And then issue these commands,

bin/solr restart -cloud -s example/cloud/node1/solr -p 8983

bin/solr restart -c -p 7574 -z localhost:9983 -s example/cloud/node2/solr


2) Where can I find the Solr internal ZooKeeper folder for issuing this
command:
SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Djute.maxbuffer=10000000"?


3) Where can I find the schema.xml and config.xml files for the SolrCloud
cores to make changes to the schema and configuration? Or do I have to make
the changes in the directory that contains the managed-schema and config.xml
files with which I initialized and created the collections? And will Solr
then pick them up from there when it restarts?


Regards,

Salmaan



On Thu, Aug 1, 2019 at 5:40 PM Bernd Fehling <
bernd.fehl...@uni-bielefeld.de>
wrote:



On 01.08.19 at 13:57, Salmaan Rashid Syed wrote:
After I make the -Djute.maxbuffer changes to Solr, deployed in production,
do I need to restart Solr to be able to add synonyms >1MB?

Yes, you have to restart Solr.



Or was this supposed to be done before ever putting Solr into production?
Can we make changes while Solr is running in production?

It depends on your system. In my cloud with 5 shards and 3 replicas I can
take the nodes offline one by one, stop, modify and start them again without
problems.
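A rolling restart along those lines might look like this (a sketch; the port, node directory and ZooKeeper hosts are assumptions, not values from this thread):

```shell
# take one node offline
bin/solr stop -p 8983

# ... edit bin/solr.in.sh on that node, e.g. add -Djute.maxbuffer to SOLR_OPTS ...

# start the node again in cloud mode, pointing at the external ZooKeeper ensemble
bin/solr start -cloud -p 8983 -z zk1:2181,zk2:2181,zk3:2181 -s /var/solr/node1

# wait until this node's replicas have recovered before repeating on the next node
```

Restarting one node at a time keeps the collection serving queries throughout, provided each shard has replicas on other nodes.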



Thanks.

Regards,
Salmaan



On Tue, Jul 30, 2019 at 4:53 PM Bernd Fehling <
bernd.fehl...@uni-bielefeld.de> wrote:

You have to increase the -Djute.maxbuffer for large configs.

In Solr's bin/solr.in.sh use e.g.
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=10000000"
This will increase the maxbuffer for ZooKeeper on the Solr side to 10MB.

In Zookeeper zookeeper/conf/zookeeper-env.sh
SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Djute.maxbuffer=10000000"

I have a >10MB thesaurus and use 30MB for jute.maxbuffer; it works
perfectly.

Regards


On 30.07.19 at 13:09, Salmaan Rashid Syed wrote:
Hi Solr Users,

I have a very big synonyms file (>5MB). I am unable to start Solr in cloud
mode because it throws an error message stating that the synonyms file is
too large. I figured out that ZooKeeper doesn't accept a file larger than
1MB.

I tried to break my synonyms file down into smaller chunks of less than 1MB
each. But I am not sure how to include all the filenames in the Solr schema.

Should they be separated by commas, like synonyms = "__1_synonyms.txt,
__2_synonyms.txt, __3synonyms.txt"?
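If splitting is the route taken, one way to produce line-aligned chunks is GNU split with -C, sketched below (the /tmp paths, file names and the 1 KB demo limit are made up; for ZooKeeper's 1 MB default you would use something like -C 900k):

```shell
# build a small sample synonyms file, one rule per line
mkdir -p /tmp/syn_demo && cd /tmp/syn_demo
for i in $(seq 1 100); do echo "word$i, term$i, alias$i"; done > synonyms.txt

# -C packs whole lines into chunks of at most the given size,
# so no synonym rule is ever cut in half; -d gives numeric suffixes
split -C 1024 -d --additional-suffix=.txt synonyms.txt synonyms_

ls synonyms_*.txt   # synonyms_00.txt synonyms_01.txt ...
```

The resulting files can then be listed comma-separated in the filter's synonyms attribute, and each chunk still has to be uploaded to ZooKeeper with the rest of the configset, so raising jute.maxbuffer as described earlier in the thread is usually the simpler fix.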

Or is there a better way of doing this? Will the bigger file, when broken
down into smaller chunks, be uploaded to ZooKeeper as well?

Please help, or guide me to the relevant documentation regarding this.

Thank you.

Regards.
Salmaan.






