Extrapolating from what Jack said in his reply ... with 100 shards and
4 replicas, you have 400 cores of about 2.8GB each. That comes to a
total index size of just over a terabyte, with 140GB of index data on each of
the eight servers.
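The arithmetic above can be checked in a few lines (the 2.8GB per-core figure comes from Andrew's earlier message; everything else follows from it):

```python
# Cluster sizing figures quoted in the thread
shards, replicas = 100, 4
per_core_gb = 2.8
servers = 8

cores = shards * replicas           # 400 cores across the cluster
total_gb = cores * per_core_gb      # ~1120 GB, "just over a terabyte"
per_server_gb = total_gb / servers  # ~140 GB of index data per server

print(cores, round(total_gb), round(per_server_gb))
```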
Assuming you have only one Solr instance
Andrew Butkus [andrew.but...@c6-intelligence.com] wrote:
[Shawn/Jack: Ideal amount of RAM]
We have less than this :/ :( - with not much likelihood of an upgrade anytime soon
The right amount of RAM is what satisfies your requirements and is tightly
correlated to the speed of your underlying
On 1/8/2015 8:57 AM, Andrew Butkus wrote:
We have 4GB of usage (because the index is split into 100 shards, each shard is
approx. 2.8GB on disk). We have allocated a 14GB minimum and a 16GB maximum of
RAM to Solr, so it has plenty to use (the RAM shown in the dashboard never goes
above about 8GB, so there is still plenty of headroom).
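For reference, a 14GB-min / 16GB-max heap like the one described is typically set in solr.in.sh (the file path and variable shown are the standard Solr startup convention, not taken from this thread):

```shell
# solr.in.sh - JVM heap limits for the Solr process.
# -Xms sets the minimum (initial) heap, -Xmx the maximum.
SOLR_JAVA_MEM="-Xms14g -Xmx16g"
```

Note that Lucene relies on the operating system's page cache, outside the Java heap, to cache index data, so a heap much larger than Solr actually needs can hurt performance by taking RAM away from that cache.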
On 1/8/2015 7:26 AM, Andrew Butkus wrote:
Hi, we have 8 Solr servers, split 4x4 across 2 data centers.
We have a collection of around ½ billion documents, split over 100 shards; each
shard is replicated 4 times on separate nodes (evenly distributed across both
data centers).
The problem we have is that when we use cursorMark (and also when
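For context, cursorMark deep paging follows a fixed loop: start with cursorMark=*, sort on a total ordering that includes the uniqueKey field, and feed each response's nextCursorMark back in until it stops changing. A minimal sketch of that loop (the URL, collection name, and "id" uniqueKey in the comment are assumptions, not details from this thread):

```python
def iterate_cursor(fetch_page):
    """fetch_page(cursor) -> (docs, next_cursor); yields docs until exhausted."""
    cursor = "*"
    while True:
        docs, next_cursor = fetch_page(cursor)
        yield from docs
        if next_cursor == cursor:  # Solr signals the end by repeating the mark
            break
        cursor = next_cursor

# Against a real server, fetch_page would look something like (hypothetical URL):
#   import requests
#   def fetch_page(cursor):
#       r = requests.get("http://localhost:8983/solr/mycoll/select",
#                        params={"q": "*:*", "sort": "id asc", "rows": 100,
#                                "cursorMark": cursor, "wt": "json"}).json()
#       return r["response"]["docs"], r["nextCursorMark"]

# Stand-in pages to show the loop terminating:
pages = {"*": ([1, 2], "A"), "A": ([3], "B"), "B": ([], "B")}
docs = list(iterate_cursor(lambda c: pages[c]))
```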
Hi Shawn,
Thank you for your reply.
The part about memory usage is not clear. That 4GB and 16GB could refer to
the operating system view of memory, or the view of memory within the JVM.
I'm curious about how much total RAM each machine has, how large the Java
heap is, and what the total size