On 8/10/2014 11:07 PM, anand.mahajan wrote:
> Thank you for your suggestions. With the autoCommit (every 10 mins) and
> softCommit (every 10 secs) frequencies reduced, things work much better now.
> The CPU usage has gone down considerably too (by about 60%) and the
> read/write throughput is showing considerable improvements too.
Hello all,
Thank you for your suggestions. With the autoCommit (every 10 mins) and
softCommit (every 10 secs) frequencies reduced, things work much better now.
The CPU usage has gone down considerably too (by about 60%) and the
read/write throughput is showing considerable improvements too.
Ther
The autoCommit is now set so that it does not openSearcher too frequently.
360
true
1000
100
1
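For reference, a minimal solrconfig.xml sketch of the commit policy described above (10-minute hard commit without opening a searcher, 10-second soft commit); the element values here are illustrative, not the poster's exact settings:

```xml
<!-- Hard commit: flush to disk every 10 minutes, but do not open a
     new searcher (openSearcher=false keeps hard commits cheap). -->
<autoCommit>
  <maxTime>600000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<!-- Soft commit: make new docs visible to searches every 10 seconds. -->
<autoSoftCommit>
  <maxTime>10000</maxTime>
</autoSoftCommit>
```

With this split, visibility latency is governed by the soft commit while the expensive fsync/searcher work stays on the longer hard-commit cycle.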
--
View this message in context:
http://lucene.472066.n3.nabble.com/SolrCloud-Scale-Struggle-tp4150592p4152229.html
Sent from the Solr - User mailing list archive
On 8/2/2014 2:46 PM, anand.mahajan wrote:
> Also, since there are already 18 JVMs per machine - how do I go about
> merging these existing cores under just one JVM? Would I need to
> create one Solr instance with 18 cores inside and then migrate data from
> these separate JVMs into the new
Thanks Shawn. I'm using two-level composite id routing right now. These are
all used-car listings, and all search queries always have the car year and
make in the search criteria - hence it made sense to have Year+Make as level
1 of the composite id. Beyond that, the second level of the composite id is
based on
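For context, Solr's compositeId router splits the document id on `!` and routes on the prefix, so a two-level scheme with Year+Make as the first level might produce ids shaped like these (the second-level key and all values here are hypothetical examples, not from the thread):

```
2012Toyota!dealer42!listing98765
2012Toyota!dealer42!listing98766
2011Honda!dealer07!listing55501
```

Documents sharing the same first-level prefix hash to the same shard, so a query filtered on year and make can be routed to a subset of shards instead of fanning out to all of them.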
Thank you everyone for your responses. Increased the hard commit to 10 mins
and autoSoftCommit to 10 secs. (I won't really need a real-time get - tweaked
the app code to cache the doc and use the app-side cached version instead of
fetching it from Solr.) Will watch it for a day or two and clock the
throughput.
Auto correct not good
Corrected below
Bill Bell
Sent from mobile
> On Aug 2, 2014, at 11:11 AM, Bill Bell wrote:
>
> Seems way overkill. Are you using /get at all ? If you need the docs avail
> right away - why ? How about after 30 seconds ? How many docs do you get
> added per second duri
Seems way overkill. Are you using /get at all? If you need the docs available
right away - why? How about after 30 seconds? How many docs do you get added
per second during peak? Even Google has a delay when you do AdWords.
One idea is to have an empty core that you insert into and then shard it
On 8/1/2014 4:19 AM, anand.mahajan wrote:
> My current deployment :
> i) I'm using Solr 4.8 and have set up a SolrCloud with 6 dedicated machines
> - 24 Core + 96 GB RAM each.
> ii) There are over 190M docs in the SolrCloud at the moment (for all
> replicas it's consuming 2340GB of disk overall, which
Thanks for the reply Shalin.
1. I'll try increasing the softCommit interval and the autoSoftCommit too.
One mistake I made that I realized just now is that I am using /solr/select
and expecting it to do NRT reads - for real-time get it's the /get handler
that needs to be used. Please confirm.
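For reference, real-time get is served by Solr's `/get` handler rather than `/select`; a request shaped like the following returns the latest copy of a document even before a new searcher has been opened (the collection name and id here are hypothetical):

```
http://localhost:8983/solr/cars/get?id=listing98765
```

Regular `/select` searches only see documents after the next (soft or hard) commit that opens a searcher, which is why `/get` is the right tool when a doc must be readable immediately after indexing.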
Increasing the autoCommit interval doesn't increase RAM consumption. It just
means that more items would be in the transaction log and that node
restart/recovery will be slower.
On Fri, Aug 1, 2014 at 7:10 PM, anand.mahajan wrote:
> Oops - my bad - Its autoSoftCommit that is set after every doc and not an
> aut
Sent from my Windows Phone
From: anand.mahajan
Sent: 8/1/2014 9:40 AM
To: solr-user@lucene.apache.org
Subject: Re: SolrCloud Scale Struggle
Oops - my bad - it's autoSoftCommit that is set after every doc and not
autoCommit.
Following snippet from the solrconfig -
1
true
1
Shall I increase the autoCommit time as well? But would that mean more RAM
is consumed by all instances running on the box?
Comments inline:
On Fri, Aug 1, 2014 at 3:49 PM, anand.mahajan wrote:
> Hello all,
>
> Struggling to get this going with SolrCloud -
>
> Requirement in brief :
> - Ingest about 4M used-car listings a day and track all unique cars for
> changes
> - 4M automated searches a day (during the inge