Alternatively, do you still want to be protected against a single failure
during scheduled maintenance?
With a three node ensemble, when one ZooKeeper node is being updated or moved
to a new instance, one more failure means it does not have a quorum. With a
five node ensemble, three nodes would still be up, which is enough for a quorum.
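The majority arithmetic behind these ensemble sizes can be sketched in a few lines (Python here is purely illustrative; it is not part of the thread):

```python
def quorum_size(ensemble: int) -> int:
    """Smallest strict majority of the ensemble."""
    return ensemble // 2 + 1

def tolerable_failures(ensemble: int) -> int:
    """Nodes you can lose while still holding quorum."""
    return ensemble - quorum_size(ensemble)

for n in (3, 5, 7):
    print(f"{n} nodes: quorum={quorum_size(n)}, "
          f"survives {tolerable_failures(n)} failure(s)")
```

So a 5-node ensemble survives two simultaneous losses (one planned, one unplanned), while a 3-node ensemble survives only one.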
NP. My usual question though is "how often do you expect to lose a
second ZK node before you can replace the first one that died?"
My tongue-in-cheek statement is often "If you're losing two nodes
regularly, you have problems with your hardware that you're not really
going to address by adding more ZooKeeper nodes."
It's not a typo, but I was wrong: for ZooKeeper, 2 nodes out of 3 still count as a majority.
It's not the desirable configuration, but it is tolerable.
Thanks Erick.
\--
/Yago Riveiro
> On Jan 21 2016, at 4:15 am, Erick Erickson wrote:
>
> bq: 3 is too risky, if you lose one you lose quorum
bq: 3 is too risky, if you lose one you lose quorum
Typo? You need to lose two.
On Wed, Jan 20, 2016 at 6:25 AM, Yago Riveiro wrote:
> Our ZooKeeper cluster is an ensemble of 5 machines; it's a good starting point.
> 3 is too risky (if you lose one you lose quorum), and with 7 the sync cost increases.
>
>
Our ZooKeeper cluster is an ensemble of 5 machines; it's a good starting point.
3 is too risky (if you lose one you lose quorum), and with 7 the sync cost increases.
The ZK cluster is on machines with no other I/O load and rotational HDDs (you don't
need SSDs to gain I/O performance; ZooKeeper is optimized for spinning disks).
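On the disk point: ZooKeeper's sensitivity is mostly about fsync latency on its transaction log, which is why the usual advice is to give the log its own device. A minimal `zoo.cfg` sketch (paths are placeholders, not from the thread):

```properties
# zoo.cfg: keep the write-ahead log on its own spindle so fsyncs
# are sequential writes with no seek contention from snapshots
tickTime=2000
dataDir=/var/lib/zookeeper/data      # snapshots
dataLogDir=/zk-txnlog                # transaction log on a dedicated disk
clientPort=2181
```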
Thank you for sharing your experiences/ideas.
Yago since you have 8 billion documents over 500 collections, can you share
what/how you do index maintenance (e.g. add field)? And how are you loading
data into the index? Any experiences around how Zookeeper ensemble behaves
with so many collections?
What I can say is:
* SSD (crucial for performance if the index doesn't fit in memory, and at this
volume it will not fit)
* Divide and conquer: for that volume of docs you will need more than 6 nodes.
* DocValues, to avoid stressing the Java heap.
* Will you aggregate data? If yes, what is your max
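On the DocValues bullet: it's a per-field attribute in the Solr schema that moves sorting and faceting data off the Java heap into on-disk, memory-mapped column storage. A minimal sketch (field names are invented for illustration):

```xml
<!-- schema.xml: docValues="true" builds column-oriented structures
     on disk, so faceting and sorting don't inflate the heap -->
<field name="category" type="string" indexed="true" stored="false" docValues="true"/>
<field name="created"  type="tdate"  indexed="true" stored="false" docValues="true"/>
```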
On 1/19/2016 1:30 PM, Troy Edwards wrote:
We are currently "beta testing" a SolrCloud with 2 nodes and 2 shards with
2 replicas each. The number of documents is about 125000.
We now want to scale this to about 10 billion documents.
What are the steps to prototyping, hardware estimation and stress testing?
We are currently "beta testing" a SolrCloud with 2 nodes and 2 shards with
2 replicas each. The number of documents is about 125000.
We now want to scale this to about 10 billion documents.
What are the steps to prototyping, hardware estimation and stress testing?
Thanks
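One way to put rough numbers on the hardware question (every constant below is an assumption to calibrate against a small prototype, not advice from the thread):

```python
import math

TOTAL_DOCS = 10_000_000_000     # target corpus size from the thread
DOCS_PER_SHARD = 200_000_000    # assumed comfortable per-shard ceiling
BYTES_PER_DOC = 500             # assumed average on-disk index size per doc
REPLICATION = 2                 # replicas per shard, as in the beta setup

shards = math.ceil(TOTAL_DOCS / DOCS_PER_SHARD)
index_tb = TOTAL_DOCS * BYTES_PER_DOC * REPLICATION / 1e12

print(f"shards needed: {shards}")
print(f"total index size: {index_tb:.1f} TB")
```

With these assumptions you land at 50 shards and about 10 TB of index across the cluster; the real numbers come from indexing a representative sample and measuring, which is what the stress-testing step is for.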
That way I
> could just send the dataimport request up through the load balancer and
> forget about it.
>
> Anyway, I thought I would see how others are handling this issue.
>
> Cheers, Jim
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Scaling-SolrCloud-and-DIH-tp4047049.html
> Sent from the Solr - User mailing list archive at Nabble.com.