The machines were 32 GB RAM boxes. You must do the RAM requirement
calculation for your indexes; the number of indexes alone won't be enough
to arrive at the RAM requirement.
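As a rough illustration of that calculation, here is a back-of-envelope sketch. Every number in it is an assumption for illustration only (per-core heap and overhead vary wildly with index size, caches, and query load) — you have to measure your own cores:

```python
# Back-of-envelope heap estimate for a many-core Solr node.
# All numbers below are illustrative assumptions, not measurements.
cores_loaded = 2000        # assumed transient cores resident at once
heap_per_core_mb = 10      # assumed average heap per open core
base_overhead_gb = 4       # assumed JVM + caches + buffers overhead

heap_gb = cores_loaded * heap_per_core_mb / 1024 + base_overhead_gb
print(f"rough heap needed: {heap_gb:.1f} GB")
```

On top of the JVM heap you also want free OS memory for the page cache over the hot parts of the on-disk indexes, which is why index size, not just index count, drives the requirement.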


On Tue, Aug 12, 2014 at 6:59 PM, Ramprasad Padmanabhan <
ramprasad...@gmail.com> wrote:

> On 12 August 2014 18:18, Noble Paul <noble.p...@gmail.com> wrote:
>
> > Hi Ramprasad,
> >
> >
> > I have used it in a cluster with millions of users (1 user per core) in
> > legacy cloud mode. We used the on-demand core loading feature where each
> > Solr had 30,000 cores and at a time only 2000 cores were in memory. You
> are
> > just hitting 400 and I don't see much of a problem. What is your h/w
> BTW?
> >
> >
> > On Tue, Aug 12, 2014 at 12:10 PM, Ramprasad Padmanabhan <
> > ramprasad...@gmail.com> wrote:
> >
> > > I need to store in Solr all data of my clients' mailing activity.
> > >
> > > The data contains metadata like From, To, Date, Time, Subject, etc.
> > >
> > > I would easily have 1000 million records every 2 months.
> > >
> > > What I am currently doing is creating cores per client. So I have 400
> > cores
> > > already.
> > >
> > > Is this a good idea to do?
> > >
> > > What is the general practice for creating cores?
> > >
> >
> >
> I have a single machine with 16 GB RAM and 16 CPU cores.
>
> What is the h/w you are using?
>
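For reference, the on-demand core loading described above is Solr's transient-core ("lots of cores") support. A minimal configuration sketch follows; the cache size and core name are illustrative values, and the exact solr.xml syntax depends on your Solr version (the attribute-style `<cores transientCacheSize="...">` form was used in older 4.x setups):

```xml
<!-- solr.xml: cap how many transient cores stay open at once;
     least-recently-used cores beyond this are unloaded -->
<solr>
  <int name="transientCacheSize">2000</int>
</solr>
```

```properties
# core.properties for each per-client core (core name is illustrative)
name=client_core_001
transient=true
loadOnStartup=false
```

With `loadOnStartup=false` the node starts quickly even with tens of thousands of cores, and `transient=true` lets Solr evict idle cores so only the configured number consume heap at any time.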



-- 
-----------------------------------------------------
Noble Paul
