On 12 Mar 2018 05:51, "Shawn Heisey" <apa...@elyograg.org> wrote:

On 3/11/2018 11:35 AM, BlackIce wrote:

> I have some questions regarding performance.
>
> Let's say I have a dual-CPU machine with a total of 8 cores and 24 GB RAM for my
> Solr and some other stuff.
>
> Would it be more beneficial to run only 1 instance of Solr with the
> collection stored on 4 HDs in RAID 0? Or... have several virtual
> machines, each running off its own HD, i.e. have 4 VMs running Solr?
>

Performance is always going to be better on bare metal than on virtual
machines.  Modern virtualization is very good, so the difference *might* be
minimal, but there is ALWAYS overhead.

*****Deepak*****

I doubt this. It would be great if someone could substantiate this with hard
facts.
*****Deepak*****


I used to create virtual machines on my hardware for Solr, initially with
VMware ESXi and later natively in Linux with KVM.  At that time, I was
running one index core per VM.  Just for some testing, I took a similar
machine and set up one Solr instance handling all the same cores on bare
metal.  I do not remember HOW much faster it was, but it was definitely
faster.  One big thing I like about bare metal is that there's only one
"machine", IP address, and Solr instance to administer.

Unless you're willing to completely rebuild the whole thing in the event of
drive failure, don't use RAID0.  If one drive dies (and every hard drive IS
eventually going to die if it's used long enough), then *all* of the data
on the whole RAID volume is gone.
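To put a rough number on that risk: a striped array only survives if every
member drive survives, so the odds get worse with every drive you add.  A
quick back-of-the-envelope sketch in Python, where the per-drive annual
failure rate is just an assumed illustrative figure, not a measurement:

    # Probability a RAID0 array survives one year.  The array is lost
    # if ANY member drive fails; drives are assumed to fail independently.
    ANNUAL_FAILURE_RATE = 0.03   # assumed 3% per drive per year
    for num_drives in (1, 2, 4, 8):
        survives = (1 - ANNUAL_FAILURE_RATE) ** num_drives
        print(f"{num_drives} drives: {survives:.1%} chance the array survives the year")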

You could do RAID5, which has decent redundancy and good space efficiency,
but if you're not familiar with the RAID5 write penalty, do some research
on it, and you'll probably come out of it not wanting to EVER use it.  If
you like, I can explain exactly why you should avoid any RAID level that
incorporates 5 or 6.
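For context, the penalty comes from the fact that a small random write on
RAID5 turns into four disk operations: read old data, read old parity,
write new data, write new parity.  Here's a rough sketch of what that does
to random-write throughput, using the classic rule-of-thumb penalty factors
and an assumed per-drive IOPS figure (both are illustrative, not measured):

    # Rough random-write IOPS estimate for common RAID levels.
    # Classic write-penalty rules of thumb: RAID0=1, RAID10=2,
    # RAID5=4 (read data, read parity, write data, write parity), RAID6=6.
    DRIVE_IOPS = 150          # assumed figure for one 7200 rpm SATA drive
    NUM_DRIVES = 4
    PENALTY = {"RAID0": 1, "RAID10": 2, "RAID5": 4, "RAID6": 6}
    for level, penalty in PENALTY.items():
        write_iops = NUM_DRIVES * DRIVE_IOPS / penalty
        print(f"{level:6s} ~{write_iops:.0f} random-write IOPS")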

Overall, the best level is RAID10 ... but it has a glaring disadvantage
from a cost perspective -- you lose half of your raw capacity.  Since
drives are relatively cheap, I always build my servers with RAID10, using a
1MB stripe size and a battery-backed caching controller.  For the typical
hardware I'm using, that means that I'm going to end up with 6 to 12TB of
usable space instead of 10 to 20TB (RAID5), but the volume is FAST.
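To make the capacity trade-off concrete: with n identical drives, RAID0
gives you n drives' worth of space, RAID5 gives n-1, and RAID10 gives n/2.
A tiny sketch, with drive count and size picked only as example values that
happen to match the 6TB/10TB end of the figures above:

    # Usable capacity for common RAID levels given identical drives.
    NUM_DRIVES = 6            # example values, not a specific server
    DRIVE_TB = 2
    print("RAID0 :", NUM_DRIVES * DRIVE_TB, "TB usable")
    print("RAID5 :", (NUM_DRIVES - 1) * DRIVE_TB, "TB usable")
    print("RAID10:", NUM_DRIVES // 2 * DRIVE_TB, "TB usable")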

Thanks,
Shawn
