Here are the details of the new replication:
http://wiki.apache.org/solr/SolrReplication
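
For anyone finding this in the archives: the master side boils down to a
handler registration in each core's solrconfig.xml. A minimal sketch based
on that wiki page (the replicateAfter/confFiles values are illustrative):

   <requestHandler name="/replication" class="solr.ReplicationHandler">
     <lst name="master">
       <str name="replicateAfter">commit</str>
       <str name="confFiles">schema.xml,stopwords.txt</str>
     </lst>
   </requestHandler>

and on each slave core, pointing at the matching master core (host, port
and core name below are placeholders):

   <requestHandler name="/replication" class="solr.ReplicationHandler">
     <lst name="slave">
       <str name="masterUrl">http://master-host:8983/solr/core1/replication</str>
       <str name="pollInterval">00:00:60</str>
     </lst>
   </requestHandler>

Since replication runs over HTTP per core, there are no extra daemons,
ports or shell scripts to manage.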

On Wed, Feb 25, 2009 at 4:59 PM, Otis Gospodnetic
<otis_gospodne...@yahoo.com> wrote:
>
> Hm, I know what you did is recommended, but I *think* I once set up a Solr 
> instance that had multiple indices and only a single rsyncd.  It's been a 
> while, so I don't recall the details.  If you feel comfortable with 1.3-dev, 
> grab a nightly and use the new replication mechanism instead.
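>
> (The trick, if you do go the single-rsyncd route: rsyncd.conf allows
> multiple [module] sections, so one daemon on one port can export every
> core's data directory. A hand-rolled sketch, with made-up uid/gid and
> paths, instead of the generated per-core files:
>
>    uid = solr
>    gid = solr
>    use chroot = no
>    list = no
>
>    [solr-core1]
>        path = /path/to/solr/core1/data
>    [solr-core2]
>        path = /path/to/solr/core2/data
>
> Each core's snappuller would then have to reference its own module name.)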
>
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> ----- Original Message ----
>> From: Jérôme Etévé <jerome.et...@gmail.com>
>> To: solr-user@lucene.apache.org
>> Sent: Tuesday, February 24, 2009 12:57:20 PM
>> Subject: Collection distribution in a multicore environment
>>
>> Hi fellow Solr fans,
>>
>>   I'm setting up some collection distribution along with multicore
>> solr . I'm using version 1.3
>>
>>   I have no problem with the snapshooter, since it can be configured
>> within each core's solrconfig.xml.
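>>
>>   (For reference, that's the standard postCommit listener in each core's
>> solrconfig.xml; a sketch, with the exe path adjusted per core:
>>
>>     <listener event="postCommit" class="solr.RunExecutableListener">
>>       <str name="exe">solr/core1/bin/snapshooter</str>
>>       <str name="dir">.</str>
>>       <bool name="wait">true</bool>
>>     </listener>
>>   )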
>>
>>   My question is more about the rsyncd.
>>   rsyncd-start creates an rsyncd.conf in the conf directory relative
>> to where it lives, so what I did was copy bin/rsyncd-start into each
>> core directory:
>>
>>   solr/
>>      core1/
>>          bin/
>>             rsyncd-start
>>          conf/
>>             rsyncd.conf
>>      core2/
>>          - same thing -
>>
>> Then for each core, I launch an rsyncd:
>>    /../solr/core1/bin/rsyncd-start -p 18080 -d /../solr/core1/data/
>>
>> This way, each daemon can be stopped properly with rsyncd-stop, which
>> reads conf/rsyncd.conf from its containing core:
>>   /../solr/core1/bin/rsyncd-stop
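>>
>> To spell the start-up out, it amounts to something like this
>> (/path/to/solr, the core names and the starting port are placeholders):
>>
>>    PORT=18080
>>    for CORE in core1 core2; do
>>        /path/to/solr/$CORE/bin/rsyncd-start -p $PORT -d /path/to/solr/$CORE/data/
>>        PORT=$((PORT + 1))
>>    done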
>>
>> The problem is that I'm not very comfortable with having one running
>> daemon per core (each on a different port), plus a copy of each script
>> inside each core.
>>
>> Is there a better way to set this up?
>>
>> Cheers!!
>>
>> Jerome Eteve.
>>
>> --
>> Jerome Eteve.
>>
>> Chat with me live at http://www.eteve.net
>>
>> jer...@eteve.net
>
>



-- 
--Noble Paul
