Hi Josef, 

Are you saying that there is no Ceph config option that can be used to keep 
IO flowing to the VMs while the cluster is doing heavy data movement? I am 
really struggling to believe that this could be the case. I've read so much 
about Ceph being the answer to modern storage needs, with all of its 
components designed to be redundant so that storage stays available through 
upgrades and hardware failures. Has something been overlooked? 
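
For reference, the only knobs I'm aware of are the recovery throttles already 
discussed further down in this thread. In case it helps anyone else, this is 
roughly how I understand they are applied (a sketch only, not something I 
have verified end to end on our Jewel cluster): 

# apply to all running OSDs at runtime, no restart needed
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1 --osd-client-op-priority 63'

# and persist the same values in the [osd] section of ceph.conf
osd_max_backfills = 1
osd_recovery_max_active = 1
osd_recovery_op_priority = 1
osd_client_op_priority = 63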

Also, judging by the low number of people reporting similar issues, I suspect 
there are a lot of Ceph users still running a non-optimal tunables profile, 
either because they don't want to risk the downtime or simply because they 
don't know about the latest crush tunables. 
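
For anyone wanting to check where they stand: I believe the current tunables 
can be inspected with the first command below, and the second one is what 
actually switches profiles and kicks off the big data movement (this is the 
Jewel CLI as I understand it, so please double-check before running anything): 

ceph osd crush show-tunables     # dump the tunable values currently in effect
ceph osd crush tunables optimal  # switch profile; this triggers the backfill

so obviously hold off on the second one until a quiet window. 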

For future upgrades, should I be scheduling a maintenance day or two and 
shutting down all VMs before updating the cluster? That feels like the 
backwards approach of the 90s and early 2000s ((( 
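
If it does come to that, I suppose the gentler alternative is the incremental 
route Josef describes below: pull the crushmap, move the changed tunable one 
step at a time, and let the cluster settle in between. Roughly something like 
this (just a sketch, the filenames are made up): 

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit the "tunable ..." lines at the top of crushmap.txt, one small step only
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
# wait for backfill to finish and HEALTH_OK before taking the next step

Though if the only change is a boolean tunable there may be no intermediate 
step to take, in which case I'm back to the maintenance-window question above. 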

Cheers 

Andrei 

> From: "Josef Johansson" <jose...@gmail.com>
> To: "Gregory Farnum" <gfar...@redhat.com>, "Daniel Swarbrick"
> <daniel.swarbr...@profitbricks.com>
> Cc: "ceph-users" <ceph-users@lists.ceph.com>, "ceph-devel"
> <ceph-de...@vger.kernel.org>
> Sent: Monday, 20 June, 2016 20:22:02
> Subject: Re: [ceph-users] cluster down during backfilling, Jewel tunables and
> client IO optimisations

> Hi,

> People ran into this when there were some changes in tunables that caused
> 70-100% movement; the solution was to find out which values changed and
> increment them in the smallest steps possible.

> I've found that with major rearrangement in ceph the VMs do not necessarily
> survive (last time on an SSD cluster), so Linux and its timeouts don't cope
> well, is my assumption. Which is true with any other storage backend out
> there ;)

> Regards,
> Josef
> On Mon, 20 Jun 2016, 19:51 Gregory Farnum, < gfar...@redhat.com > wrote:

>> On Mon, Jun 20, 2016 at 8:33 AM, Daniel Swarbrick
>> < daniel.swarbr...@profitbricks.com > wrote:
>> > We have just updated our third cluster from Infernalis to Jewel, and are
>> > experiencing similar issues.

>> > We run a number of KVM virtual machines (qemu 2.5) with RBD images, and
>> > have seen a lot of D-state processes and even jbd2 timeouts and kernel
>> > stack traces inside the guests. At first I thought the VMs were being
>> > starved of IO, but this is still happening after throttling back the
>> > recovery with:

>> > osd_max_backfills = 1
>> > osd_recovery_max_active = 1
>> > osd_recovery_op_priority = 1

>> > After upgrading the cluster to Jewel, I changed our crushmap to use the
>> > newer straw2 algorithm, which resulted in a little data movement, but no
>> > problems at that stage.

>> > Once the cluster had settled down again, I set tunables to optimal
>> > (hammer profile -> jewel profile), which has triggered between 50% and
>> > 70% misplaced PGs on our clusters. This is when the trouble started each
>> > time, and when we had cascading failures of VMs.

>> > However, after performing hard shutdowns on the VMs and restarting them,
>> > they seemed to be OK.

>> > At this stage, I have a strong suspicion that it is the introduction of
>> > "require_feature_tunables5 = 1" in the tunables. This seems to require
>> > all RADOS connections to be re-established.

>> Do you have any evidence of that besides the one restart?

>> I guess it's possible that we aren't kicking requests if the crush map
>> but not the rest of the osdmap changes, but I'd be surprised.
>> -Greg



>> > On 20/06/16 13:54, Andrei Mikhailovsky wrote:
>> >> Hi Oliver,

>> >> I am also seeing this as a strange behaviour indeed! I was going through
>> >> the logs and I was not able to find any errors or issues. There were also
>> >> no slow/blocked requests that I could see during the recovery process.

>> >> Does anyone have an idea what could be the issue here? I don't want to
>> >> shut down all vms every time there is a new release with updated tunable
>> >> values.


>> >> Andrei



>> >> ----- Original Message -----
>> >>> From: "Oliver Dzombic" < i...@ip-interactive.de >
>> >>> To: "andrei" < and...@arhont.com >, "ceph-users" < ceph-users@lists.ceph.com >
>> >>> Sent: Sunday, 19 June, 2016 10:14:35
>> >>> Subject: Re: [ceph-users] cluster down during backfilling, Jewel tunables
>> >>> and client IO optimisations

>> >>> Hi,

>> >>> so far the key values for that are:

>> >>> osd_client_op_priority = 63 (the default anyway, but I set it to remember it)
>> >>> osd_recovery_op_priority = 1


>> >>> In addition i set:

>> >>> osd_max_backfills = 1
>> >>> osd_recovery_max_active = 1



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
