I want to design a control mechanism for my SolrCloud. I have two
choices.

The first is controlling every Solr node from a single point: when I
want to start or stop Jetty remotely, I will connect to my nodes via an
SSH library in Java. Backup commands and the recovery process will be
handled the same way.
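
For the first approach, here is a minimal sketch of what a remote
start/stop could look like with JSch, one common Java SSH library. The
host, credentials, and the Jetty stop command are placeholders I made up
for illustration:

import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class RemoteJettyControl {

    // Host, user, password and the command below are hypothetical
    // placeholders; adjust them to your own environment.
    public static void runRemoteCommand(String host, String user,
            String password, String command) throws Exception {
        JSch jsch = new JSch();
        Session session = jsch.getSession(user, host, 22);
        session.setPassword(password);
        // Skipping host key checking to keep the sketch short;
        // use a known_hosts file in production.
        session.setConfig("StrictHostKeyChecking", "no");
        session.connect();
        try {
            ChannelExec channel = (ChannelExec) session.openChannel("exec");
            channel.setCommand(command);
            channel.connect();
            while (!channel.isClosed()) {
                Thread.sleep(100); // wait for the remote command to finish
            }
            channel.disconnect();
        } finally {
            session.disconnect();
        }
    }

    public static void main(String[] args) throws Exception {
        runRemoteCommand("solr-node-1", "solr", "secret",
                "cd /opt/solr && java -DSTOP.PORT=8079 -DSTOP.KEY=stopkey -jar start.jar --stop");
    }
}

Opening one SSH session per node like this is cheap for a handful of
nodes, but it is also where the resource cost mentioned in the cons
below comes from once there are hundreds of them.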

The second is running a custom jar on each of my Solr nodes. I will send
backup commands to those custom jars, they will trigger the backup via
HttpSolrServer, and the recovery process will be managed by those jars
as well. Starting and stopping Jetty will be done from those jars too.
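
For the second approach, here is a minimal sketch of how such an agent
jar could trigger a backup through SolrJ's HttpSolrServer by calling the
replication handler. The core URL and backup location are assumptions:

import java.io.IOException;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class BackupAgent {

    // The core URL and backup location are assumptions for this sketch.
    public static void triggerBackup(String coreUrl, String backupLocation)
            throws SolrServerException, IOException {
        HttpSolrServer solr = new HttpSolrServer(coreUrl);
        try {
            ModifiableSolrParams params = new ModifiableSolrParams();
            params.set("command", "backup");         // replication handler backup command
            params.set("location", backupLocation);  // where the snapshot is written
            QueryRequest request = new QueryRequest(params);
            request.setPath("/replication");         // route to the replication handler
            solr.request(request);                   // async on the Solr side; poll command=details for status
        } finally {
            solr.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        triggerBackup("http://localhost:8983/solr/collection1", "/backups/solr");
    }
}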

There are some pros and cons to each approach.

First one's pros:
* When I start my custom management server, there is no need to deploy
small jars to every Solr node machine.

First one's cons:
* Everything has to be monitored from a single point.
* I have to expose the /admin URLs of every Solr node to the outside,
because my custom management server must be able to reach them. If
anybody else can reach those URLs, he/she can run a delete-all-documents
command. (If I put my management server inside the same environment as
the Solr nodes, I think I can overcome that issue.)
* Connecting to every Solr node via an SSH library may be resource
consuming.

Second one's pros:
* I can distribute the workload. If I have hundreds of Solr nodes within
my SolrCloud, I can send backup/recovery commands to my custom jars and
each of them can handle its own process.
* I can forbid access to all Solr admin pages from outside the Solr
nodes' environment. My custom jars can run only the commands I have
defined inside them (see the sketch after this list). Each jar can
access the Solr node it is responsible for via HttpSolrServer.
* These custom jars may be used for further purposes (advice is welcome).
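
As a rough illustration of the whitelisting point above, the agent could
map incoming command names to predefined actions and reject everything
else. The command names and the dispatch style here are my own
assumptions, and the backup action reuses triggerBackup from the sketch
earlier in this mail:

import java.util.HashMap;
import java.util.Map;

public class AgentCommandDispatcher {

    interface Command {
        void execute() throws Exception;
    }

    // Only commands registered here can ever be executed; the names
    // ("backup", "recover", ...) are assumptions for this sketch.
    private final Map<String, Command> whitelist = new HashMap<String, Command>();

    public AgentCommandDispatcher() {
        whitelist.put("backup", new Command() {
            public void execute() throws Exception {
                BackupAgent.triggerBackup(
                        "http://localhost:8983/solr/collection1", "/backups/solr");
            }
        });
        // "recover", "start-jetty", "stop-jetty" would be registered
        // the same way.
    }

    public void dispatch(String name) throws Exception {
        Command command = whitelist.get(name);
        if (command == null) {
            throw new IllegalArgumentException("Command not allowed: " + name);
        }
        command.execute();
    }
}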

Second one's cons:
* I have to deploy a small jar to each Solr node.

What do folks think about these scenarios, and what would you suggest?
