Hi,
On 28/11/2015 04:24, Brian Felton wrote:
> Greetings Ceph Community,
>
> We are running a Hammer cluster (0.94.3-1) in production that recently
> experienced asymptotic performance degradation. We've been migrating
> data from an older non-Ceph cluster at a fairly steady pace for the
> past eight weeks (about 5TB a week). [...]
>
On 28 November 2015 at 13:24, Brian Felton wrote:
> Each storage server contains 72 6TB SATA drives for Ceph (648 OSDs, ~3.5PB
> in total). Each disk is set up as its own ZFS zpool. Each OSD has a 10GB
> journal, located within the disk's zpool.
>
I doubt I have much to [...]
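For reference, the sizing figures quoted above can be cross-checked with a quick calculation. The server count of 9 is not stated in the thread; it is inferred here from 648 OSDs at 72 drives per server:

```python
# Cross-check of the cluster-sizing figures quoted above.
drives_per_server = 72
drive_tb = 6
total_osds = 648

servers = total_osds // drives_per_server  # 9 servers (inferred, not stated)
raw_tb = total_osds * drive_tb             # 3888 TB raw
raw_pb = raw_tb / 1000                     # ~3.9 PB raw

print(servers, raw_tb, round(raw_pb, 2))
```

The ~3.9 PB raw figure is consistent with the "~3.5 PB" quoted above once journal space (10GB per OSD) and filesystem overhead are subtracted, though the thread does not state the exact overhead.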
Greetings Ceph Community,
We are running a Hammer cluster (0.94.3-1) in production that recently
experienced asymptotic performance degradation. We've been migrating data
from an older non-Ceph cluster at a fairly steady pace for the past eight
weeks (about 5TB a week). Overnight, the ingress [...]
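As a quick sanity check, assuming the stated rate held for the full eight weeks, the total migrated so far works out to:

```python
# Back-of-the-envelope ingress total from the figures stated above.
weeks = 8
tb_per_week = 5
total_tb = weeks * tb_per_week            # 40 TB migrated so far

# Against the ~3.5 PB usable capacity quoted elsewhere in the thread,
# that is only a little over 1% of the cluster.
pct_of_cluster = total_tb / 3500 * 100    # ~1.1%
print(total_tb, round(pct_of_cluster, 1))
```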
On Fri, Nov 27, 2015 at 9:52 PM, Daniel Maraio wrote:
> Hello,
>
> Can you provide some further details? What is the size of your objects,
> and how many objects do you have in your buckets? Are you using bucket
> index sharding, and are you sharding your objects over multiple
> buckets? [...]
On Fri, Nov 27, 2015 at 9:53 PM, Gregory Farnum wrote:
> > Nothing about the cluster has changed recently -- no OS patches, no Ceph
> > patches, no software updates of any kind. For the months we've had the
> > cluster operational, we've had no performance-related issues.
Hello,
Can you provide some further details? What is the size of your objects,
and how many objects do you have in your buckets? Are you using bucket
index sharding, and are you sharding your objects over multiple buckets?
Is the cluster doing any scrubbing during these periods? It sounds
like [...]
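For anyone following along: on Hammer, bucket index sharding is controlled by a radosgw option that only applies to buckets created after it is set. A minimal ceph.conf sketch (the shard count of 64 and the client section name are arbitrary examples, not recommendations):

```ini
; ceph.conf (radosgw section) -- example only; affects NEW buckets
[client.radosgw.gateway]
; Split each new bucket's index object into 64 shards (0 = unsharded default)
rgw override bucket index max shards = 64
```

Existing buckets keep their original index layout on Hammer, so checking per-bucket object counts with `radosgw-admin bucket stats --bucket=<name>` will show whether a single large, unsharded bucket index could explain the slowdown.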
On Fri, Nov 27, 2015 at 10:24 PM, Brian Felton wrote:
> Greetings Ceph Community,
>
> We are running a Hammer cluster (0.94.3-1) in production that recently
> experienced asymptotic performance degradation. We've been migrating data
> from an older non-Ceph cluster at a fairly steady pace for the past
> eight weeks (about 5TB a week). [...]