Is there anything wrong with this setup?

On Tue, Mar 5, 2013 at 5:43 PM, Sujatha Arun <suja.a...@gmail.com> wrote:

> Hi Otis,
>
> Since we are currently planning for only one slave due to cost
> considerations, can we have an ELB fronting the master and slave for HA?
>
>    1. All index requests will go to the master.
>    2. The slave replicates from the master.
>    3. Search requests can go to either the master or the slave via the ELB.
>
> Is that a reasonable HA setup for search?
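>
> For the ELB health check we were thinking of something like the standard
> ping handler in each core's solrconfig.xml (just a sketch; the handler
> placement and the ping query below are placeholders, not our final config):
>
>   <!-- sketch: target for the ELB's HTTP health check -->
>   <requestHandler name="/admin/ping" class="solr.PingRequestHandler">
>     <lst name="invariants">
>       <str name="q">solrpingquery</str>
>     </lst>
>   </requestHandler>
>
> The ELB would then health-check /solr/<core>/admin/ping on both boxes and
> stop routing searches to whichever one fails.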
>
> Regards
> Sujatha
>
>
>
> On Tue, Mar 5, 2013 at 5:12 PM, Otis Gospodnetic <
> otis.gospodne...@gmail.com> wrote:
>
>> Hi Sujatha,
>>
>> If I understand correctly, you will have only 1 slave (and 1 master), so
>> that's not really an HA architecture.  You could manually promote the
>> slave to master, but that's going to mean some downtime...
>>
>> Otis
>> --
>> Solr & ElasticSearch Support
>> http://sematext.com/
>>
>>
>> On Tue, Mar 5, 2013 at 3:05 AM, Sujatha Arun <suja.a...@gmail.com> wrote:
>>
>> > Hi,
>> >
>> > We are planning to set up 2 High-Memory Quadruple Extra Large Instances
>> > as master and slave for our multicore Solr setup, which has more than
>> > 200 cores spread across a couple of webapps on a single JVM, on AWS.
>> > All indexing (via a queue) will go to the master. One slave server will
>> > replicate all the core-level indexes from the master; the slave
>> > configurations are defined in solr.xml at the webapp level, with a
>> > different poll interval for each webapp (see the sketch below).
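>> >
>> > For reference, the slave side of each core looks roughly like this in
>> > solrconfig.xml (the masterUrl host and poll interval below are
>> > placeholders; the real values come from per-core properties so each
>> > webapp can poll at its own interval):
>> >
>> >   <requestHandler name="/replication" class="solr.ReplicationHandler">
>> >     <lst name="slave">
>> >       <!-- placeholder host; the real URL points at the matching core on the master -->
>> >       <str name="masterUrl">http://master-host:8983/solr/${solr.core.name}</str>
>> >       <!-- hh:mm:ss; set per webapp -->
>> >       <str name="pollInterval">00:05:00</str>
>> >     </lst>
>> >   </requestHandler>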
>> >
>> > We are planning to load-balance the search requests by fronting the
>> > master and slave with an AWS ELB. The master configuration will not
>> > enable the slave properties, as the master does not replicate from any
>> > other machine. The master and slave have similar hardware configurations
>> > [High-Memory Quadruple Extra Large Instance]. This is mainly for HA if
>> > the slave goes down.
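>> >
>> > On the master, the same handler has only the master section enabled,
>> > something like this (the replicateAfter and confFiles values here are
>> > illustrative, not our exact settings):
>> >
>> >   <requestHandler name="/replication" class="solr.ReplicationHandler">
>> >     <lst name="master">
>> >       <str name="replicateAfter">commit</str>
>> >       <!-- illustrative; list whatever config files the slave needs -->
>> >       <str name="confFiles">schema.xml,stopwords.txt</str>
>> >     </lst>
>> >   </requestHandler>
>> >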
>> > Any issues with the above setup? Please advise.
>> >
>> > Regards,
>> > Sujatha
>> >
>>
>
>
