On 8/10/2014 11:07 PM, anand.mahajan wrote:
> Thank you for your suggestions. With the autoCommit (every 10 mins) and
> softCommit (every 10 secs) frequencies reduced, things work much better now.
> The CPU usage has gone down considerably too (by about 60%) and the
> read/write throughput is showing ... yet)
Thanks,
Anand
...should not autoCommit with openSearcher too frequently.
[solrconfig snippet; surviving values: 360, true, 1000, 100, 1]
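For reference, the commit policy being recommended in this thread - hard
commits on a timer with openSearcher=false, plus soft commits every few
seconds - would look roughly like this in solrconfig.xml. The intervals below
are only illustrative, taken from the "10 mins / 10 secs" figures mentioned
above, not the poster's actual file:

  <autoCommit>
    <maxTime>600000</maxTime>            <!-- hard commit every 10 minutes, flushes to disk -->
    <openSearcher>false</openSearcher>   <!-- do not open a new searcher on hard commits -->
  </autoCommit>
  <autoSoftCommit>
    <maxTime>10000</maxTime>             <!-- soft commit every 10 seconds, controls visibility -->
  </autoSoftCommit>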
On 8/2/2014 2:46 PM, anand.mahajan wrote:
> Also, since there are already 18 JVMs per machine - how do I go about
> merging these existing cores under just 1 JVM? Would it be that I'd need to
> create 1 Solr instance with 18 cores inside and then migrate data from these
> separate JVMs into the new instance?
... go to the same shard. Will splitting these up with the existing set of
hardware help at all?
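One way to consolidate replicas under a single Solr process on 4.8 - a sketch,
not necessarily what was done here - is the Collections API: add a replica of
each shard on the surviving node, wait for it to sync, then delete the old
replica. The collection name, host, and core node name below are made up for
illustration:

  http://host1:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1&node=host1:8983_solr
  http://host1:8983/solr/admin/collections?action=DELETEREPLICA&collection=mycollection&shard=shard1&replica=core_node7

(ADDREPLICA was added in 4.8, if I remember correctly; on older releases the
same effect needs a CoreAdmin CREATE with the collection and shard parameters.)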
>> ... would go?
>> Is there a pattern / rule that Solr follows when it creates replicas for
>> split shards?
>>
> 6. I read somewhere that creating a core would cost the OS one thread and a
> file handle. Since a core represents an index in its entirety, would it not
> be allocated the configured number of write threads? (The default is 8.)
>
> 7. The Zookeeper cluster is deployed on the same boxes as the Solr instance
> - Would separating the ZK cluster out help?
>
> Sorry for the long thread - I thought of asking these all at once rather
> than posting separate ones.
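The "write threads" default of 8 mentioned in question 6 presumably refers to
maxIndexingThreads in the indexConfig section of solrconfig.xml, which caps
concurrent indexing threads per core - something like:

  <indexConfig>
    <!-- max concurrent threads feeding the IndexWriter; 8 is the 4.x default -->
    <maxIndexingThreads>8</maxIndexingThreads>
  </indexConfig>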
On 8/1/2014 4:19 AM, anand.mahajan wrote:
> My current deployment :
> i) I'm using Solr 4.8 and have set up a SolrCloud with 6 dedicated machines
> - 24 Core + 96 GB RAM each.
> ii) There are over 190M docs in the SolrCloud at the moment (for all
> replicas it's consuming overall disk 2340GB which ...
... Node cluster? (Sorry if I'm deviating here a bit from the core problem
I'm trying to fix - but if DSE could work with a very minimal time and effort
requirement - I won't mind trying it out.)
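(For a rough sense of scale from the numbers above, assuming the 2340GB is
spread evenly: 2340GB / 6 machines is about 390GB of index per box, against
96GB of RAM per box, so the OS page cache can hold at most roughly a quarter
of the index each machine serves. That ratio alone usually explains heavy
disk I/O and slow queries.)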
--
Regards,
Shalin Shekhar Mangar.
Sent from my Windows Phone

From: anand.mahajan
Sent: 8/1/2014 9:40 AM
To: solr-user@lucene.apache.org
Subject: Re: SolrCloud Scale Struggle
Oops - my bad - it's autoSoftCommit that is set after every doc and not an
autoCommit. Following snippet from the solrconfig -
[snippet; surviving values: 1, true, ?]
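That stripped-down snippet most likely corresponds to a soft commit after
every single document, i.e. something along these lines in solrconfig.xml (a
sketch of the likely intent, not the poster's exact file):

  <autoSoftCommit>
    <maxDocs>1</maxDocs>   <!-- opens a new searcher after every document added -->
  </autoSoftCommit>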
Separating ZooKeeper out is the recommended practice, if only because a slow
ZK can cause shards to go into recovery and leader failures. I doubt it will
make things faster in your case. However, if you can, you should move ZK
instances to separate machines.
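If the ensemble does move to dedicated machines, the Solr nodes only need
their zkHost pointed at it; with the stock Jetty start used by 4.x that is a
single system property (hostnames below are made up):

  java -DzkHost=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181 -jar start.jar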
>
> Sorry for the long thread - I thought of asking these all at once rather
> than posting separate ones.
>
> Thanks,
> Anand
--
Regards,
Shalin Shekhar Mangar.