I would suggest you start some smaller scale tests to get a feeling
for the performance before committing to a large purchase of this
hardware type.
Indeed, without some solid pointers, this is the only way left.
Even with solid pointers, that's the best way. :-)
--
*Craig Lewis*
Senior
Hi Dan,
On 13/05/2014 13:42, Dan van der Ster wrote:
> Hi,
> I think you're not getting many replies simply because those are
> rather large servers and not many have such hardware in prod.
Good point.
> We run with 24x3TB drives, 64GB ram, one 10Gbit NIC. Memory-wise there
> are no problems. T
Hi,
I think you're not getting many replies simply because those are rather
large servers and not many have such hardware in prod.
We run with 24x3TB drives, 64GB ram, one 10Gbit NIC. Memory-wise there
are no problems. Throughput-wise, the bottleneck is somewhere between
the NIC (~1GB/s) and
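As a sanity check on the throughput figure above (my own arithmetic, not part of the original message): a 10Gbit NIC tops out at 1.25 GB/s raw, so ~1GB/s of usable payload after protocol overhead is plausible. The 80% efficiency factor below is an illustrative assumption, not a measured value:

```python
# Rough conversion of NIC line rate to an estimated payload bandwidth.
# efficiency=0.8 is an assumed factor for TCP/IP + Ethernet framing
# overhead, chosen only to illustrate the ~1GB/s figure.

def nic_payload_gbytes_per_s(line_rate_gbit: float, efficiency: float = 0.8) -> float:
    """Convert a NIC line rate in Gbit/s to an estimated payload rate in GB/s."""
    return line_rate_gbit / 8 * efficiency

raw = 10 / 8                            # 1.25 GB/s theoretical max for a 10Gbit NIC
usable = nic_payload_gbytes_per_s(10)   # ~1.0 GB/s, matching the figure above
```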
Thanks for your answers, Craig. It seems this is a niche use case for Ceph;
not a lot of replies from the ML.
Cheers
--
Cédric Lemarchand
> On 11 May 2014 at 00:35, Craig Lewis wrote:
>
>> On 5/10/14 12:43 , Cédric Lemarchand wrote:
>> Hi Craig,
>>
>> Thanks, I really appreciate the well d
On 5/10/14 12:43 , Cédric Lemarchand wrote:
Hi Craig,
Thanks, I really appreciate the well-detailed response.
I have carefully noted your advice, specifically about the CPU starvation
scenario, which, as you said, sounds scary.
Regarding IO, the data will be very resilient; in case of a crash, losing not
Hi Craig,
Thanks, I really appreciate the well-detailed response.
I have carefully noted your advice, specifically about the CPU starvation
scenario, which, as you said, sounds scary.
Regarding IO, the data will be very resilient; in case of a crash, losing
not-fully-written objects will not be a problem (th
I'm still a noob too, so don't take anything I say with much weight. I
was hoping that somebody with more experience would reply.
I see a few potential problems.
With that CPU to disk ratio, you're going to need to slow recovery down
a lot to make sure you have enough CPU available after a n
Another thought: I would hope that with EC, the spread of data chunks would
benefit from the write capability of each drive on which they are stored.
I have not gotten any reply so far! Does this kind of configuration (hardware
& software) look crazy?! Am I missing something?
Looking forward to your comments, t
Some more details: the IO pattern will be around 90% write / 10% read,
mainly sequential.
Recent posts show that the max_backfills, recovery_max_active and
recovery_op_priority settings will be helpful in case of
backfilling/rebalancing.
Any thoughts on such a hardware setup?
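For reference, the recovery-tuning settings named above live under the [osd] section of ceph.conf (the full option names carry an osd prefix). A sketch only — the values below are illustrative placeholders for throttling recovery, not recommendations; the right numbers depend on the cluster:

```ini
[osd]
; Throttle backfill/recovery so client IO keeps enough CPU and disk headroom.
; Values are illustrative, not recommendations.
osd max backfills = 1
osd recovery max active = 1
osd recovery op priority = 1
```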
On 07/05/2014 11:43, Cedric
Hello,
This build is intended for archiving purposes only; what matters here is
lowering the $/TB/W ratio.
Access to the storage would be via radosgw, installed on each node. I need
each node to sustain an average 1Gb/s write rate, which I think should not be
a problem. Erasure encoding will
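To make the storage-ratio motivation concrete (a sketch with hypothetical erasure-coding parameters — the post does not specify k and m): an EC profile with k data chunks and m coding chunks stores k+m chunks per logical object, so the raw-storage overhead is (k+m)/k, versus 3.0 for classic 3x replication:

```python
# Raw-storage overhead of erasure coding vs. replication.
# k = data chunks, m = coding chunks (k=8, m=3 are hypothetical values
# chosen for illustration, not taken from the thread).

def ec_overhead(k: int, m: int) -> float:
    """Raw bytes stored per logical byte with a k+m erasure-coding profile."""
    return (k + m) / k

replication_overhead = 3.0            # classic 3x replication
archive_profile = ec_overhead(8, 3)   # 1.375x raw overhead for k=8, m=3
savings = 1 - archive_profile / replication_overhead  # fraction of raw capacity saved
```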