Hi Scott,

Just some observations from here.

We run 8 nodes: 2U units with 12 OSDs each (4x 500GB SSD, 8x 4TB spinning
disk) attached to 2x LSI 2308 controllers. Each node has an Intel E5-2620
and 32GB of RAM.

Granted, we only have about 25 VMs on that cluster (some of them fairly
IO-hungry, both IOPS- and throughput-wise), but we hardly see any CPU
usage at all. We have ~6k PGs, and according to Munin our average CPU
time is ~9% (that is across all cores, so 9% out of 1200%: 6 cores plus
6 hyper-threads, i.e. 12 logical cores).
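
For a rough per-OSD comparison against your numbers below (a
back-of-the-envelope sketch only; I'm assuming your "25% of the total
CPU time" means 25% of 12 physical cores, which may be off if
hyper-threading is counted):

    # Quick per-OSD CPU estimate, purely illustrative (Python)
    def cores_per_osd(cpu_fraction, logical_cores, osds_per_node):
        # busy cores on the whole machine, spread over its OSDs
        return cpu_fraction * logical_cores / osds_per_node

    print(cores_per_osd(0.09, 12, 12))  # our nodes: ~0.09 cores per OSD
    print(cores_per_osd(0.25, 12, 8))   # yours (assumed): ~0.38 cores per OSD

So if that assumption holds, your per-OSD cost is roughly 4x ours, but
again, our cluster is lightly loaded.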

Sadly I didn't record CPU usage while stress-testing or breaking it.

We're running Cuttlefish on XFS. And again, this cluster is still pretty
underused, so the CPU usage above does not reflect a more active system.

Cheers,
Martin


On Mon, Oct 7, 2013 at 6:15 PM, Scott Devoid <dev...@anl.gov> wrote:

> I brought this up within the context of the RAID discussion, but it did
> not garner any responses. [1]
>
> In our small test deployments (160 HDDs and OSDs across 20 machines),
> our performance is quickly bounded by CPU and memory overhead. These are
> 2U machines with 2x 6-core Nehalem CPUs, and running 8 OSDs consumed 25%
> of the total CPU time. This was a Cuttlefish deployment.
>
> This seems like rather high CPU overhead, particularly since we are
> looking to hit a density target of 10-15 4TB drives per U within 1.5
> years. Does anyone have suggestions for hitting this requirement? Are
> there ways to reduce the CPU and memory overhead per OSD?
>
> My one suggestion was to use some form of RAID to join multiple drives
> and present them to a single OSD. A 2-drive RAID-0 would halve the OSD
> overhead while doubling the failure rate and doubling the rebalance
> overhead. It is not clear to me whether that is a net win.
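
Quick aside on the RAID-0 math, just to make the doubling explicit (a
sketch only; p below is a made-up per-drive annual failure probability):

    # Expected re-replication volume per pair of 4TB drives, illustrative
    p = 0.03                      # assumed annual failure probability per drive
    separate_osds = 2 * p * 4     # two OSDs: each failure rebalances 4 TB
    raid0_osd = (2 * p) * 8       # one OSD: ~2p failure rate, 8 TB to rebalance
    print(separate_osds, raid0_osd)   # 0.24 vs 0.48 TB/year expected

So the expected rebalance traffic does double along with the failure
rate, while the per-OSD CPU/memory overhead halves.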
>
> [1]
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/004833.html
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
