Re: [ceph-users] 10.2.4 Jewel released

2016-12-09 Thread David Turner
"all OSDs are running jewel or later but the 'require_jewel_osds' osdmap flag 
is not set"

It's noted in the release notes that this will happen and that you then just 
set the flag and it goes away.
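
For reference, a minimal sketch of clearing the warning, assuming every OSD in
the cluster really is on jewel already (output will vary per cluster):

    # confirm the warning in the health summary
    ceph -s

    # set the flag; the HEALTH_WARN should clear shortly afterwards
    ceph osd set require_jewel_osds

    # verify the flag now appears in the osdmap
    ceph osd dump | grep flags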



David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation <https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943



If you are not the intended recipient of this message or received it 
erroneously, please notify the sender and delete it, together with any 
attachments, and be advised that any dissemination or copying of this message 
is prohibited.




From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Andrey Shevel 
[shevel.and...@gmail.com]
Sent: Friday, December 09, 2016 1:33 PM
To: ceph-users
Subject: Re: [ceph-users] 10.2.4 Jewel released

I did
yum update

and found out that ceph is now at version 10.2.4, and after the update I
also get the message

"all OSDs are running jewel or later but the 'require_jewel_osds'
osdmap flag is not set"

===
[ceph@ceph-swift-gateway ~]$ ceph -v
ceph version 10.2.4 (9411351cc8ce9ee03fbd46225102fe3d28ddf611)

[ceph@ceph-swift-gateway ~]$ cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1


[ceph@ceph-swift-gateway ~]$ ceph -s
    cluster 65b8080e-d813-45ca-9cc1-ecb242967694
     health HEALTH_WARN
            all OSDs are running jewel or later but the
            'require_jewel_osds' osdmap flag is not set
     monmap e22: 4 mons at {osd2=10.10.1.12:6789/0,osd3=10.10.1.13:6789/0,osd4=10.10.1.14:6789/0,stor=10.10.1.41:6789/0}
            election epoch 317068, quorum 0,1,2,3 osd2,osd3,osd4,stor
      fsmap e1854002: 1/1/1 up {0=osd2=up:active}, 2 up:standby
     osdmap e1889715: 22 osds: 22 up, 22 in
      pgmap v6294339: 3472 pgs, 28 pools, 10805 MB data, 3218 objects
            10890 MB used, 81891 GB / 81902 GB avail
                 3472 active+clean


Is ceph 10.2.4-1 still in the test stage?



On Fri, Dec 9, 2016 at 9:30 PM, Udo Lembke <ulem...@polarzone.de> wrote:
> Hi,
>
> unfortunately there are no Debian Jessie packages...
>
>
> I didn't know that a recompile takes such a long time for ceph... I think
> such an important fix should hit the repos faster.
>
>
> Udo
>
>
> On 09.12.2016 18:54, Francois Lafont wrote:
>> On 12/09/2016 06:39 PM, Alex Evonosky wrote:
>>
>>> Sounds great.  May I ask what procedure you did to upgrade?
>> Of course. ;)
>>
>> It's here: https://shaman.ceph.com/repos/ceph/wip-msgr-jewel-fix2/
>> (I think this link was pointed by Greg Farnum or Sage Weil in a
>> previous message).
>>
>> Personally I use Ubuntu Trusty, so the page above leads me
>> to use this line in my "sources.list":
>>
>> deb http://3.chacra.ceph.com/r/ceph/wip-msgr-jewel-fix2/5d3c76c1c6e991649f0beedb80e6823606176d9e/ubuntu/trusty/flavors/default/ trusty main
>>
>> And after that "apt-get update && apt-get upgrade" etc.
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
Andrey Y Shevel
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-09 Thread Andrey Shevel
I did
yum update

and found out that ceph is now at version 10.2.4, and after the update I
also get the message

"all OSDs are running jewel or later but the 'require_jewel_osds'
osdmap flag is not set"

===
[ceph@ceph-swift-gateway ~]$ ceph -v
ceph version 10.2.4 (9411351cc8ce9ee03fbd46225102fe3d28ddf611)

[ceph@ceph-swift-gateway ~]$ cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1


[ceph@ceph-swift-gateway ~]$ ceph -s
    cluster 65b8080e-d813-45ca-9cc1-ecb242967694
     health HEALTH_WARN
            all OSDs are running jewel or later but the
            'require_jewel_osds' osdmap flag is not set
     monmap e22: 4 mons at {osd2=10.10.1.12:6789/0,osd3=10.10.1.13:6789/0,osd4=10.10.1.14:6789/0,stor=10.10.1.41:6789/0}
            election epoch 317068, quorum 0,1,2,3 osd2,osd3,osd4,stor
      fsmap e1854002: 1/1/1 up {0=osd2=up:active}, 2 up:standby
     osdmap e1889715: 22 osds: 22 up, 22 in
      pgmap v6294339: 3472 pgs, 28 pools, 10805 MB data, 3218 objects
            10890 MB used, 81891 GB / 81902 GB avail
                 3472 active+clean


Is ceph 10.2.4-1 still in the test stage?



On Fri, Dec 9, 2016 at 9:30 PM, Udo Lembke  wrote:
> Hi,
>
> unfortunately there are no Debian Jessie packages...
>
>
> I didn't know that a recompile takes such a long time for ceph... I think
> such an important fix should hit the repos faster.
>
>
> Udo
>
>
> On 09.12.2016 18:54, Francois Lafont wrote:
>> On 12/09/2016 06:39 PM, Alex Evonosky wrote:
>>
>>> Sounds great.  May I ask what procedure you did to upgrade?
>> Of course. ;)
>>
>> It's here: https://shaman.ceph.com/repos/ceph/wip-msgr-jewel-fix2/
>> (I think this link was pointed by Greg Farnum or Sage Weil in a
>> previous message).
>>
>> Personally I use Ubuntu Trusty, so the page above leads me
>> to use this line in my "sources.list":
>>
>> deb http://3.chacra.ceph.com/r/ceph/wip-msgr-jewel-fix2/5d3c76c1c6e991649f0beedb80e6823606176d9e/ubuntu/trusty/flavors/default/ trusty main
>>
>> And after that "apt-get update && apt-get upgrade" etc.
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Andrey Y Shevel
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-09 Thread Udo Lembke
Hi,

unfortunately there are no Debian Jessie packages...


I didn't know that a recompile takes such a long time for ceph... I think
such an important fix should hit the repos faster.


Udo


On 09.12.2016 18:54, Francois Lafont wrote:
> On 12/09/2016 06:39 PM, Alex Evonosky wrote:
>
>> Sounds great.  May I ask what procedure you did to upgrade?
> Of course. ;)
>
> It's here: https://shaman.ceph.com/repos/ceph/wip-msgr-jewel-fix2/
> (I think this link was pointed by Greg Farnum or Sage Weil in a
> previous message).
>
> Personally I use Ubuntu Trusty, so the page above leads me
> to use this line in my "sources.list":
>
> deb http://3.chacra.ceph.com/r/ceph/wip-msgr-jewel-fix2/5d3c76c1c6e991649f0beedb80e6823606176d9e/ubuntu/trusty/flavors/default/ trusty main
>
> And after that "apt-get update && apt-get upgrade" etc.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-09 Thread Alex Evonosky
Thank you sir.  Ubuntu here as well.





On Fri, Dec 9, 2016 at 12:54 PM, Francois Lafont <
francois.lafont.1...@gmail.com> wrote:

> On 12/09/2016 06:39 PM, Alex Evonosky wrote:
>
> > Sounds great.  May I ask what procedure you did to upgrade?
>
> Of course. ;)
>
> It's here: https://shaman.ceph.com/repos/ceph/wip-msgr-jewel-fix2/
> (I think this link was pointed by Greg Farnum or Sage Weil in a
> previous message).
>
> Personally I use Ubuntu Trusty, so the page above leads me
> to use this line in my "sources.list":
>
> deb http://3.chacra.ceph.com/r/ceph/wip-msgr-jewel-fix2/5d3c76c1c6e991649f0beedb80e6823606176d9e/ubuntu/trusty/flavors/default/ trusty main
>
> And after that "apt-get update && apt-get upgrade" etc.
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-09 Thread Francois Lafont
On 12/09/2016 06:39 PM, Alex Evonosky wrote:

> Sounds great.  May I ask what procedure you did to upgrade?

Of course. ;)

It's here: https://shaman.ceph.com/repos/ceph/wip-msgr-jewel-fix2/
(I think this link was pointed by Greg Farnum or Sage Weil in a
previous message).

Personally I use Ubuntu Trusty, so the page above leads me
to use this line in my "sources.list":

deb http://3.chacra.ceph.com/r/ceph/wip-msgr-jewel-fix2/5d3c76c1c6e991649f0beedb80e6823606176d9e/ubuntu/trusty/flavors/default/ trusty main

And after that "apt-get update && apt-get upgrade" etc.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-09 Thread Alex Evonosky
Francois-

Sounds great.  May I ask what procedure you did to upgrade?

Thank you!




On Fri, Dec 9, 2016 at 12:20 PM, Francois Lafont <
francois.lafont.1...@gmail.com> wrote:

> Hi,
>
> Just for information: since upgrading my whole cluster (osd, mon and mds)
> to version 10.2.4-1-g5d3c76c (5d3c76c1c6e991649f0beedb80e6823606176d9e)
> about 30 hours ago, I have had no problems (it's a small cluster with
> 5 nodes, 4 osds per node, 3 monitors, and I only use cephfs).
>
> Bye.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-09 Thread Francois Lafont
Hi,

Just for information: since upgrading my whole cluster (osd, mon and mds)
to version 10.2.4-1-g5d3c76c (5d3c76c1c6e991649f0beedb80e6823606176d9e)
about 30 hours ago, I have had no problems (it's a small cluster with
5 nodes, 4 osds per node, 3 monitors, and I only use cephfs).

Bye.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-09 Thread Graham Allan
On Thu, Dec 8, 2016 at 5:19 AM, Francois Lafont <
francois.lafont.1...@gmail.com> wrote:

> On 12/08/2016 11:24 AM, Ruben Kerkhof wrote:
>
> > I've been running this on one of my servers now for half an hour, and
> > it fixes the issue.
>
> It's the same for me. ;)
>
> ~$ ceph -v
> ceph version 10.2.4-1-g5d3c76c (5d3c76c1c6e991649f0beedb80e6823606176d9e)
>

In our case I applied the above version only to our rados gateways, and
this resolved the problem well enough. There is still high load on the OSD
nodes, but they seem responsive and otherwise stable. Before patching the
gateway nodes, radosgw would eventually run away to a load average of 3000+
and stop responding.

Should we expect a more general release of a revised 10.2.4? If it's a
matter of a few days or a week, then I'm inclined to just sit tight and
wait.

Thanks for the rapid fix!

Graham
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-08 Thread Francois Lafont
On 12/08/2016 11:24 AM, Ruben Kerkhof wrote:

> I've been running this on one of my servers now for half an hour, and
> it fixes the issue.

It's the same for me. ;)

~$ ceph -v
ceph version 10.2.4-1-g5d3c76c (5d3c76c1c6e991649f0beedb80e6823606176d9e)

Thanks for the help.
Bye.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-08 Thread Ruben Kerkhof
On Thu, Dec 8, 2016 at 1:24 AM, Gregory Farnum  wrote:
> Okay, we think we know what's happened; explanation at
> http://tracker.ceph.com/issues/18184 and first PR at
> https://github.com/ceph/ceph/pull/12372
>
> If you haven't already installed the previous branch, please try
> wip-msgr-jewel-fix2 instead. That's a cleaner and more precise
> solution to the real problem. :)
> -Greg

I've been running this on one of my servers now for half an hour, and
it fixes the issue.

Kind regards,

Ruben
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-08 Thread Micha Krause

Hi,


If you haven't already installed the previous branch, please try
wip-msgr-jewel-fix2 instead. That's a cleaner and more precise
solution to the real problem. :)


Any predictions when this fix will hit the Debian repositories?

Micha Krause
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released -- IMPORTANT

2016-12-07 Thread Francois Lafont
On 12/08/2016 12:38 AM, Gregory Farnum wrote:

> Yep!

Ok, thanks for the confirmations Greg.
Bye.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-07 Thread Gregory Farnum
On Wed, Dec 7, 2016 at 3:11 PM, Sage Weil  wrote:
> On Thu, 8 Dec 2016, Ruben Kerkhof wrote:
>> On Wed, Dec 7, 2016 at 11:58 PM, Samuel Just  wrote:
>> > Actually, Greg and Sage are working up other branches, nvm.
>> > -Sam
>>
>> Ok, I'll hold. If the issue is in the SimpleMessenger, would it be
>> safe to switch to ms type = async as a workaround?
>> I heard that it will become the default in Kraken, but how stable is
>> it in Jewel?
>
> We haven't verified that all the fixes are backported to jewel or tested
> it on jewel, so I wouldn't recommend it.
>
> I have a branch that cherry-picks the fix on top of 10.2.4.  It is
> building now and repos should appear in the next hour or so at
>
> 
> https://shaman.ceph.com/repos/ceph/wip-msgr-jewel-fix/aa0b6a1618df3c942b1206cd3cfaa99ecb43978e/
>
> You can watch builds here
>
> 
> https://shaman.ceph.com/builds/ceph/wip-msgr-jewel-fix/aa0b6a1618df3c942b1206cd3cfaa99ecb43978e/
>
> Once the build is done, can you try it out on one machine and confirm that
> it resolves the problem?

Okay, we think we know what's happened; explanation at
http://tracker.ceph.com/issues/18184 and first PR at
https://github.com/ceph/ceph/pull/12372

If you haven't already installed the previous branch, please try
wip-msgr-jewel-fix2 instead. That's a cleaner and more precise
solution to the real problem. :)
-Greg
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released -- IMPORTANT

2016-12-07 Thread Francois Lafont
On 12/08/2016 12:06 AM, Sage Weil wrote:

> Please hold off on upgrading to this release.  It triggers a bug in 
> SimpleMessenger that causes threads for broken connections to spin, eating 
> CPU.
> 
> We're making sure we understand the root cause and preparing a fix.

While waiting for the fix and its release, can you confirm that restarting the
osd daemons every 15 minutes is a possible workaround? In my case I have a
little cluster (5 nodes with 4 osds each) and it's possible for me to restart
the daemons every 15 minutes without taking the cluster completely down. ;)
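
As a hedged sketch of that kind of periodic restart on a systemd-based jewel
node (assuming the OSDs are managed via ceph-osd.target, and noting that
restarting every OSD on a host at once will briefly disturb its PGs):

    # hypothetical /etc/cron.d/ceph-osd-restart-workaround
    # restart all OSD daemons on this host every 15 minutes
    */15 * * * * root systemctl restart ceph-osd.target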
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-07 Thread Ruben Kerkhof
Hi Gregory,

On Thu, Dec 8, 2016 at 12:10 AM, Gregory Farnum  wrote:
> In slightly more detail: you are clearly seeing a problem with the
> messenger, as indicated by the sock_recvmsg at the top of the CPU
> usage list. We've seen this elsewhere very rarely, which is why
> there's already a backport queued up which we didn't block on.
> The 15-minute period you're seeing is the default timeout we set on
> sockets before we start marking them closed if there's no activity.
>
> We're not quite sure why it's causing trouble now, although we have
> one or two patches we are speculating about and looking into.
>
> This didn't turn up in testing because as best we can tell it's only a
> situation you can expect to encounter when you have idle TCP
> connections between systems (or in fairly artificial failed
> networking).
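
For what it's worth, the 15-minute figure matches the "ms tcp read timeout"
option, which defaults to 900 seconds; a hedged way to check the value a
running OSD is actually using, assuming the admin socket is reachable on that
node:

    ceph daemon osd.0 config get ms_tcp_read_timeout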

For the OSDs stuck at 100% cpu, strace indeed shows a lot of EAGAIN on some
of the sockets.
I'll try to get some packet captures if I can.
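
A rough sketch of that kind of check, assuming a node running a single OSD
(otherwise substitute the PID of the affected daemon), and bearing in mind
that strace adds noticeable overhead on a busy process:

    # find the busiest threads of the ceph-osd process
    top -H -p $(pidof -s ceph-osd)

    # watch its receive calls; spinning threads show a stream of EAGAIN results
    strace -f -e trace=recvfrom,recvmsg -p $(pidof -s ceph-osd)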

Kind regards,

Ruben
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-07 Thread Sage Weil
On Thu, 8 Dec 2016, Ruben Kerkhof wrote:
> On Wed, Dec 7, 2016 at 11:58 PM, Samuel Just  wrote:
> > Actually, Greg and Sage are working up other branches, nvm.
> > -Sam
> 
> Ok, I'll hold. If the issue is in the SimpleMessenger, would it be
> safe to switch to ms type = async as a workaround?
> I heard that it will become the default in Kraken, but how stable is
> it in Jewel?

We haven't verified that all the fixes are backported to jewel or tested 
it on jewel, so I wouldn't recommend it.

I have a branch that cherry-picks the fix on top of 10.2.4.  It is 
building now and repos should appear in the next hour or so at


https://shaman.ceph.com/repos/ceph/wip-msgr-jewel-fix/aa0b6a1618df3c942b1206cd3cfaa99ecb43978e/

You can watch builds here


https://shaman.ceph.com/builds/ceph/wip-msgr-jewel-fix/aa0b6a1618df3c942b1206cd3cfaa99ecb43978e/

Once the build is done, can you try it out on one machine and confirm that 
it resolves the problem?

Thanks!
sage
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released -- IMPORTANT

2016-12-07 Thread Sage Weil
Hi everyone,

Please hold off on upgrading to this release.  It triggers a bug in 
SimpleMessenger that causes threads for broken connections to spin, eating 
CPU.

We're making sure we understand the root cause and preparing a fix.

Thanks!
sage




On Wed, 7 Dec 2016, Abhishek L wrote:

> This point release fixes several important bugs in RBD mirroring, RGW
> multi-site, CephFS, and RADOS.
> 
> We recommend that all v10.2.x users upgrade. Also note the following when 
> upgrading from hammer
> 
> Upgrading from hammer
> -
> 
> When the last hammer OSD in a cluster containing jewel MONs is
> upgraded to jewel, as of 10.2.4 the jewel MONs will issue this
> warning: "all OSDs are running jewel or later but the
> 'require_jewel_osds' osdmap flag is not set" and change the
> cluster health status to HEALTH_WARN.
> 
> This is a signal for the admin to do "ceph osd set require_jewel_osds" - by
> doing this, the upgrade path is complete and no more pre-Jewel OSDs may be 
> added
> to the cluster.
> 
> 
> Notable Changes
> ---
> * build/ops: aarch64: Compiler-based detection of crc32 extended CPU type is 
> broken (issue#17516 , pr#11492 , Alexander Graf)
> * build/ops: allow building RGW with LDAP disabled (issue#17312 , pr#11478 , 
> Daniel Gryniewicz)
> * build/ops: backport 'logrotate: Run as root/ceph' (issue#17381 , pr#11201 , 
> Boris Ranto)
> * build/ops: ceph installs stuff in %_udevrulesdir but does not own that 
> directory (issue#16949 , pr#10862 , Nathan Cutler)
> * build/ops: ceph-osd-prestart.sh fails confusingly when data directory does 
> not exist (issue#17091 , pr#10812 , Nathan Cutler)
> * build/ops: disable LTTng-UST in openSUSE builds (issue#16937 , pr#10794 , 
> Michel Normand)
> * build/ops: i386 tarball gitbuilder failure on master (issue#16398 , 
> pr#10855 , Vikhyat Umrao, Kefu Chai)
> * build/ops: include more files in "make dist" tarball (issue#17560 , 
> pr#11431 , Ken Dreyer)
> * build/ops: incorrect value of CINIT_FLAG_DEFER_DROP_PRIVILEGES (issue#16663 
> , pr#10278 , Casey Bodley)
> * build/ops: remove SYSTEMD_RUN from initscript (issue#7627 , issue#16441 , 
> issue#16440 , pr#9872 , Vladislav Odintsov)
> * build/ops: systemd: add install section to rbdmap.service file (issue#17541 
> , pr#11158 , Jelle vd Kooij)
> * common: Enable/Disable of features is allowed even the features are already 
> enabled/disabled (issue#16079 , pr#11460 , Lu Shi)
> * common: Log.cc: Assign LOG_INFO priority to syslog calls (issue#15808 , 
> pr#11231 , Brad Hubbard)
> * common: Proxied operations shouldn't result in error messages if replayed 
> (issue#16130 , pr#11461 , Vikhyat Umrao)
> * common: Request exclusive lock if owner sends -ENOTSUPP for proxied 
> maintenance op (issue#16171 , pr#10784 , Jason Dillaman)
> * common: msgr/async: Messenger thread long time lock hold risk (issue#15758 
> , pr#10761 , Wei Jin)
> * doc: fix description for rsize and rasize (issue#17357 , pr#11171 , Andreas 
> Gerstmayr)
> * filestore: can get stuck in an unbounded loop during scrub (issue#17859 , 
> pr#12001 , Sage Weil)
> * fs: Failure in snaptest-git-ceph.sh (issue#17172 , pr#11419 , Yan, Zheng)
> * fs: Log path as well as ino when detecting metadata damage (issue#16973 , 
> pr#11418 , John Spray)
> * fs: client: FAILED assert(root_ancestor->qtree == __null) (issue#16066 , 
> issue#16067 , pr#10107 , Yan, Zheng)
> * fs: client: add missing client_lock for get_root (issue#17197 , pr#10921 , 
> Patrick Donnelly)
> * fs: client: fix shutdown with open inodes (issue#16764 , pr#10958 , John 
> Spray)
> * fs: client: nlink count is not maintained correctly (issue#16668 , pr#10877 
> , Jeff Layton)
> * fs: multimds: allow_multimds not required when max_mds is set in ceph.conf 
> at startup (issue#17105 , pr#10997 , Patrick Donnelly)
> * librados: memory leaks from ceph::crypto (WITH_NSS) (issue#17205 , pr#11409 
> , Casey Bodley)
> * librados: modify Pipe::connect() to return the error code (issue#15308 , 
> pr#11193 , Vikhyat Umrao)
> * librados: remove new setxattr overload to avoid breaking the C++ ABI 
> (issue#18058 , pr#12207 , Josh Durgin)
> * librbd: cannot disable journaling or remove non-mirrored, non-primary image 
> (issue#16740 , pr#11337 , Jason Dillaman)
> * librbd: discard after write can result in assertion failure (issue#17695 , 
> pr#11644 , Jason Dillaman)
> * librbd::Operations: update notification failed: (2) No such file or 
> directory (issue#17549 , pr#11420 , Jason Dillaman)
> * mds: Crash in Client::_invalidate_kernel_dcache when reconnecting during 
> unmount (issue#17253 , pr#11414 , Yan, Zheng)
> * mds: Duplicate damage table entries (issue#17173 , pr#11412 , John Spray)
> * mds: Failure in dirfrag.sh (issue#17286 , pr#11416 , Yan, Zheng)
> * mds: Failure in snaptest-git-ceph.sh (issue#17271 , pr#11415 , Yan, Zheng)
> * mon: Ceph Status - Segmentation Fault (issue#16266 , pr#11408 , Brad 
> Hubbard)
> * mon: Display full flag in ceph status if full flag 

Re: [ceph-users] 10.2.4 Jewel released

2016-12-07 Thread Ruben Kerkhof
On Wed, Dec 7, 2016 at 11:58 PM, Samuel Just  wrote:
> Actually, Greg and Sage are working up other branches, nvm.
> -Sam

Ok, I'll hold. If the issue is in the SimpleMessenger, would it be
safe to switch to ms type = async as a workaround?
I heard that it will become the default in Kraken, but how stable is
it in Jewel?
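
For context, the workaround being asked about would look roughly like this in
ceph.conf (Sage advises against it on jewel in his reply above, since the
async messenger fixes were not all verified or backported there):

    [global]
    ms type = async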

Kind regards,

Ruben
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-07 Thread Ruben Kerkhof
Hi Samuel,

On Wed, Dec 7, 2016 at 11:52 PM, Samuel Just  wrote:
> I just pushed a branch wip-14120-10.2.4 with a possible fix.
>
> https://github.com/ceph/ceph/pull/12349/ is a fix for a known bug
> which didn't quite make it into 10.2.4; it's possible that
> 165e5abdbf6311974d4001e43982b83d06f9e0cc, which did make it in, made the
> bug much more likely to happen.  wip-14120-10.2.4 has that fix cherry-picked
> on top of 10.2.4.  Can you try it and let us know the result?
> -Sam

Great, I'll give it a shot! Does the CI produce binary rpms by any chance?
Otherwise I'll have to figure out how to rebuild them myself, but that
will take a bit longer.

Kind regards,

Ruben
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-07 Thread Samuel Just
Actually, Greg and Sage are working up other branches, nvm.
-Sam

On Wed, Dec 7, 2016 at 2:52 PM, Samuel Just  wrote:
> I just pushed a branch wip-14120-10.2.4 with a possible fix.
>
> https://github.com/ceph/ceph/pull/12349/ is a fix for a known bug
> which didn't quite make it into 10.2.4; it's possible that
> 165e5abdbf6311974d4001e43982b83d06f9e0cc, which did make it in, made the
> bug much more likely to happen.  wip-14120-10.2.4 has that fix cherry-picked
> on top of 10.2.4.  Can you try it and let us know the result?
> -Sam
>
> On Wed, Dec 7, 2016 at 2:42 PM, Ruben Kerkhof  wrote:
>> On Wed, Dec 7, 2016 at 11:33 PM, Ruben Kerkhof  
>> wrote:
>>>> And another possibly interesting detail: I have ceph-osd processes with
>>>> high cpu load (as Steve said, no iowait and no excessive memory usage).
>>>> If I restart the ceph-osd daemon, the cpu load stays OK for exactly 15
>>>> minutes; after 15 minutes the high cpu load is back. It's curious, this
>>>> number of 15 minutes, isn't it?
>>>
>>> Thanks, I'll check how long it takes for this to happen on my cluster.
>>
>> Indeed, 15 minutes, on the dot.
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-07 Thread Samuel Just
I just pushed a branch wip-14120-10.2.4 with a possible fix.

https://github.com/ceph/ceph/pull/12349/ is a fix for a known bug
which didn't quite make it into 10.2.4; it's possible that
165e5abdbf6311974d4001e43982b83d06f9e0cc, which did make it in, made the
bug much more likely to happen.  wip-14120-10.2.4 has that fix cherry-picked
on top of 10.2.4.  Can you try it and let us know the result?
-Sam

On Wed, Dec 7, 2016 at 2:42 PM, Ruben Kerkhof  wrote:
> On Wed, Dec 7, 2016 at 11:33 PM, Ruben Kerkhof  wrote:
>>> And another possibly interesting detail: I have ceph-osd processes with
>>> high cpu load (as Steve said, no iowait and no excessive memory usage).
>>> If I restart the ceph-osd daemon, the cpu load stays OK for exactly 15
>>> minutes; after 15 minutes the high cpu load is back. It's curious, this
>>> number of 15 minutes, isn't it?
>>
>> Thanks, I'll check how long it takes for this to happen on my cluster.
>
> Indeed, 15 minutes, on the dot.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-07 Thread Ruben Kerkhof
On Wed, Dec 7, 2016 at 11:33 PM, Ruben Kerkhof  wrote:
>> And another possibly interesting detail: I have ceph-osd processes with
>> high cpu load (as Steve said, no iowait and no excessive memory usage).
>> If I restart the ceph-osd daemon, the cpu load stays OK for exactly 15
>> minutes; after 15 minutes the high cpu load is back. It's curious, this
>> number of 15 minutes, isn't it?
>
> Thanks, I'll check how long it takes for this to happen on my cluster.

Indeed, 15 minutes, on the dot.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-07 Thread Ruben Kerkhof
On Wed, Dec 7, 2016 at 11:37 PM, Francois Lafont
 wrote:
> On 12/07/2016 11:33 PM, Ruben Kerkhof wrote:
>
>> Thanks, I'll check how long it takes for this to happen on my cluster.
>>
>> I did just pause scrub and deep-scrub. Are there scrubs running on
>> your cluster now by any chance?
>
> Yes, but normally not at the moment, because I have:
>
>   osd scrub begin hour = 3
>   osd scrub end hour   = 5
>
> in the ceph.conf of all my cluster nodes, so normally there is currently no
> scrubbing.
>
> Why do you think it's related to scrubbing?

Just a hunch, because my cluster wasn't doing anything else (no
client-I/O), and there was a bug related to scrubbing fixed in 10.2.4.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-07 Thread Francois Lafont
On 12/07/2016 11:33 PM, Ruben Kerkhof wrote:

> Thanks, I'll check how long it takes for this to happen on my cluster.
> 
> I did just pause scrub and deep-scrub. Are there scrubs running on
> your cluster now by any chance?

Yes, but normally not at the moment, because I have:

  osd scrub begin hour = 3
  osd scrub end hour   = 5

in the ceph.conf of all my cluster nodes, so normally there is currently no
scrubbing.

Why do you think it's related to scrubbing?
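
For anyone wanting to double-check, a hedged way to confirm the scrub window
an OSD is running with, and whether any PGs are scrubbing right now (assuming
access to the OSD admin socket):

    ceph daemon osd.0 config get osd_scrub_begin_hour
    ceph daemon osd.0 config get osd_scrub_end_hour
    ceph pg dump pgs_brief 2>/dev/null | grep -ci scrubbing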

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-07 Thread Ruben Kerkhof
On Wed, Dec 7, 2016 at 11:20 PM, Francois Lafont
 wrote:
> On 12/07/2016 11:16 PM, Steve Taylor wrote:
>> I'm seeing the same behavior with very similar perf top output. One server 
>> with 32 OSDs has a load average approaching 800. No excessive memory usage 
>> and no iowait at all.
>
> Exactly!
>
> And another possibly interesting detail: I have ceph-osd processes with
> high cpu load (as Steve said, no iowait and no excessive memory usage).
> If I restart the ceph-osd daemon, the cpu load stays OK for exactly 15
> minutes; after 15 minutes the high cpu load is back. It's curious, this
> number of 15 minutes, isn't it?

Thanks, I'll check how long it takes for this to happen on my cluster.

I did just pause scrub and deep-scrub. Are there scrubs running on
your cluster now by any chance?
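
For reference, pausing scrubs cluster-wide as described above is typically
done with the noscrub flags; a minimal sketch:

    # pause new scrubs and deep-scrubs
    ceph osd set noscrub
    ceph osd set nodeep-scrub

    # re-enable them afterwards
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub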

Kind regards,

Ruben
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-07 Thread Francois Lafont
On 12/07/2016 11:16 PM, Steve Taylor wrote:
> I'm seeing the same behavior with very similar perf top output. One server 
> with 32 OSDs has a load average approaching 800. No excessive memory usage 
> and no iowait at all.

Exactly!

And another possibly interesting detail: I have ceph-osd processes with high
cpu load (as Steve said, no iowait and no excessive memory usage). If I restart
the ceph-osd daemon, the cpu load stays OK for exactly 15 minutes; after 15
minutes the high cpu load is back. It's curious, this number of 15 minutes,
isn't it?

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-07 Thread Steve Taylor
I'm seeing the same behavior with very similar perf top output. One server with 
32 OSDs has a load average approaching 800. No excessive memory usage and no 
iowait at all.




Steve Taylor | Senior Software Engineer | StorageCraft Technology Corporation <https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2799 |



If you are not the intended recipient of this message or received it 
erroneously, please notify the sender and delete it, together with any 
attachments, and be advised that any dissemination or copying of this message 
is prohibited.



-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ruben 
Kerkhof
Sent: Wednesday, December 7, 2016 3:08 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] 10.2.4 Jewel released

On Wed, Dec 7, 2016 at 8:46 PM, Francois Lafont 
<francois.lafont.1...@gmail.com> wrote:
> Hi,
>
> On 12/07/2016 01:21 PM, Abhishek L wrote:
>
>> This point release fixes several important bugs in RBD mirroring, RGW
>> multi-site, CephFS, and RADOS.
>>
>> We recommend that all v10.2.x users upgrade. Also note the following
>> when upgrading from hammer
>
> Well... little warning: after upgrading from 10.2.3 to 10.2.4, I have high
> cpu load on the osd and mds daemons.

Yes, same here. perf top shows:

  8.23%  [kernel]  [k] sock_recvmsg
  8.16%  libpthread-2.17.so[.] __libc_recv
  7.33%  [kernel]  [k] fget_light
  7.24%  [kernel]  [k] tcp_recvmsg
  6.41%  [kernel]  [k] sock_has_perm
  6.19%  [kernel]  [k] _raw_spin_lock_bh
  4.89%  [kernel]  [k] system_call
  4.74%  [kernel]  [k] avc_has_perm_flags
  3.93%  [kernel]  [k] SYSC_recvfrom
  3.18%  [kernel]  [k] fput
  3.15%  [kernel]  [k] system_call_after_swapgs
  3.12%  [kernel]  [k] local_bh_enable_ip
  3.11%  [kernel]  [k] release_sock
  2.90%  libpthread-2.17.so[.] __pthread_enable_asynccancel
  2.71%  libpthread-2.17.so[.] __pthread_disable_asynccancel
  2.57%  [kernel]  [k] inet_recvmsg
  2.43%  [kernel]  [k] local_bh_enable
  2.16%  [kernel]  [k] local_bh_disable
  2.03%  [kernel]  [k] tcp_cleanup_rbuf
  1.44%  [kernel]  [k] sockfd_lookup_light
  1.26%  [kernel]  [k] _raw_spin_unlock
  1.20%  [kernel]  [k] sysret_check
  1.18%  [kernel]  [k] lock_sock_nested
  1.07%  [kernel]  [k] selinux_socket_recvmsg
  0.98%  [kernel]  [k] _raw_spin_unlock_bh
  0.97%  ceph-osd  [.] Pipe::do_recv
  0.87%  [kernel]  [k] _cond_resched
  0.73%  [kernel]  [k] tcp_release_cb
  0.52%  [kernel]  [k] security_socket_recvmsg

Kind regards,

Ruben
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-07 Thread Ruben Kerkhof
On Wed, Dec 7, 2016 at 8:46 PM, Francois Lafont
 wrote:
> Hi,
>
> On 12/07/2016 01:21 PM, Abhishek L wrote:
>
>> This point release fixes several important bugs in RBD mirroring, RGW
>> multi-site, CephFS, and RADOS.
>>
>> We recommend that all v10.2.x users upgrade. Also note the following when 
>> upgrading from hammer
>
> Well... little warning: after upgrading from 10.2.3 to 10.2.4, I have high
> cpu load on the osd and mds daemons.

Yes, same here. perf top shows:

   8.23%  [kernel]  [k] sock_recvmsg
   8.16%  libpthread-2.17.so[.] __libc_recv
   7.33%  [kernel]  [k] fget_light
   7.24%  [kernel]  [k] tcp_recvmsg
   6.41%  [kernel]  [k] sock_has_perm
   6.19%  [kernel]  [k] _raw_spin_lock_bh
   4.89%  [kernel]  [k] system_call
   4.74%  [kernel]  [k] avc_has_perm_flags
   3.93%  [kernel]  [k] SYSC_recvfrom
   3.18%  [kernel]  [k] fput
   3.15%  [kernel]  [k] system_call_after_swapgs
   3.12%  [kernel]  [k] local_bh_enable_ip
   3.11%  [kernel]  [k] release_sock
   2.90%  libpthread-2.17.so[.] __pthread_enable_asynccancel
   2.71%  libpthread-2.17.so[.] __pthread_disable_asynccancel
   2.57%  [kernel]  [k] inet_recvmsg
   2.43%  [kernel]  [k] local_bh_enable
   2.16%  [kernel]  [k] local_bh_disable
   2.03%  [kernel]  [k] tcp_cleanup_rbuf
   1.44%  [kernel]  [k] sockfd_lookup_light
   1.26%  [kernel]  [k] _raw_spin_unlock
   1.20%  [kernel]  [k] sysret_check
   1.18%  [kernel]  [k] lock_sock_nested
   1.07%  [kernel]  [k] selinux_socket_recvmsg
   0.98%  [kernel]  [k] _raw_spin_unlock_bh
   0.97%  ceph-osd  [.] Pipe::do_recv
   0.87%  [kernel]  [k] _cond_resched
   0.73%  [kernel]  [k] tcp_release_cb
   0.52%  [kernel]  [k] security_socket_recvmsg

Kind regards,

Ruben
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10.2.4 Jewel released

2016-12-07 Thread Francois Lafont
Hi,

On 12/07/2016 01:21 PM, Abhishek L wrote:

> This point release fixes several important bugs in RBD mirroring, RGW
> multi-site, CephFS, and RADOS.
> 
> We recommend that all v10.2.x users upgrade. Also note the following when 
> upgrading from hammer

Well... little warning: after upgrading from 10.2.3 to 10.2.4, I have high
cpu load on the osd and mds daemons. Something like this:

top - 18:53:40 up  2:11,  1 user,  load average: 32.14, 29.49, 27.36
Tasks: 192 total,   2 running, 190 sleeping,   0 stopped,   0 zombie
%Cpu(s): 19.4 us, 80.6 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:  32908088 total,  1876820 used, 31031268 free,31464 buffers
KiB Swap:  8388604 total,0 used,  8388604 free.   412340 cached Mem
 
  PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND 
 
 2174 ceph  20   0  492408  79260   8688 S 169.7  0.2 139:49.77 ceph-mds

 2318 ceph  20   0 1081428 166700  25832 S 160.4  0.5 178:32.18 ceph-osd

 2288 ceph  20   0 1256604 241796  22896 S 159.4  0.7 189:25.19 ceph-osd

 2301 ceph  20   0 1261172 261040  23664 S 156.1  0.8 197:11.24 ceph-osd

 2337 ceph  20   0 1247904 260048  19084 S 154.8  0.8 191:01.90 ceph-osd

 2171 ceph  20   0  466160  58292  10992 S   0.3  0.2   0:29.89 ceph-mon

On IRC, two other people have the same behavior after the upgrade.

The cluster is HEALTH_OK. I don't see any I/O on the disks. If I restart the
daemons, all is ok, but after a few minutes the cpu load starts again.

I currently have no idea about the problem.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] 10.2.4 Jewel released

2016-12-07 Thread Abhishek L
This point release fixes several important bugs in RBD mirroring, RGW
multi-site, CephFS, and RADOS.

We recommend that all v10.2.x users upgrade. Also note the following when 
upgrading from hammer

Upgrading from hammer
-

When the last hammer OSD in a cluster containing jewel MONs is
upgraded to jewel, as of 10.2.4 the jewel MONs will issue this
warning: "all OSDs are running jewel or later but the
'require_jewel_osds' osdmap flag is not set" and change the
cluster health status to HEALTH_WARN.

This is a signal for the admin to do "ceph osd set require_jewel_osds" - by
doing this, the upgrade path is complete and no more pre-Jewel OSDs may be added
to the cluster.


Notable Changes
---
* build/ops: aarch64: Compiler-based detection of crc32 extended CPU type is 
broken (issue#17516 , pr#11492 , Alexander Graf)
* build/ops: allow building RGW with LDAP disabled (issue#17312 , pr#11478 , 
Daniel Gryniewicz)
* build/ops: backport 'logrotate: Run as root/ceph' (issue#17381 , pr#11201 , 
Boris Ranto)
* build/ops: ceph installs stuff in %_udevrulesdir but does not own that 
directory (issue#16949 , pr#10862 , Nathan Cutler)
* build/ops: ceph-osd-prestart.sh fails confusingly when data directory does 
not exist (issue#17091 , pr#10812 , Nathan Cutler)
* build/ops: disable LTTng-UST in openSUSE builds (issue#16937 , pr#10794 , 
Michel Normand)
* build/ops: i386 tarball gitbuilder failure on master (issue#16398 , pr#10855 
, Vikhyat Umrao, Kefu Chai)
* build/ops: include more files in "make dist" tarball (issue#17560 , pr#11431 
, Ken Dreyer)
* build/ops: incorrect value of CINIT_FLAG_DEFER_DROP_PRIVILEGES (issue#16663 , 
pr#10278 , Casey Bodley)
* build/ops: remove SYSTEMD_RUN from initscript (issue#7627 , issue#16441 , 
issue#16440 , pr#9872 , Vladislav Odintsov)
* build/ops: systemd: add install section to rbdmap.service file (issue#17541 , 
pr#11158 , Jelle vd Kooij)
* common: Enable/Disable of features is allowed even the features are already 
enabled/disabled (issue#16079 , pr#11460 , Lu Shi)
* common: Log.cc: Assign LOG_INFO priority to syslog calls (issue#15808 , 
pr#11231 , Brad Hubbard)
* common: Proxied operations shouldn't result in error messages if replayed 
(issue#16130 , pr#11461 , Vikhyat Umrao)
* common: Request exclusive lock if owner sends -ENOTSUPP for proxied 
maintenance op (issue#16171 , pr#10784 , Jason Dillaman)
* common: msgr/async: Messenger thread long time lock hold risk (issue#15758 , 
pr#10761 , Wei Jin)
* doc: fix description for rsize and rasize (issue#17357 , pr#11171 , Andreas 
Gerstmayr)
* filestore: can get stuck in an unbounded loop during scrub (issue#17859 , 
pr#12001 , Sage Weil)
* fs: Failure in snaptest-git-ceph.sh (issue#17172 , pr#11419 , Yan, Zheng)
* fs: Log path as well as ino when detecting metadata damage (issue#16973 , 
pr#11418 , John Spray)
* fs: client: FAILED assert(root_ancestor->qtree == __null) (issue#16066 , 
issue#16067 , pr#10107 , Yan, Zheng)
* fs: client: add missing client_lock for get_root (issue#17197 , pr#10921 , 
Patrick Donnelly)
* fs: client: fix shutdown with open inodes (issue#16764 , pr#10958 , John 
Spray)
* fs: client: nlink count is not maintained correctly (issue#16668 , pr#10877 , 
Jeff Layton)
* fs: multimds: allow_multimds not required when max_mds is set in ceph.conf at 
startup (issue#17105 , pr#10997 , Patrick Donnelly)
* librados: memory leaks from ceph::crypto (WITH_NSS) (issue#17205 , pr#11409 , 
Casey Bodley)
* librados: modify Pipe::connect() to return the error code (issue#15308 , 
pr#11193 , Vikhyat Umrao)
* librados: remove new setxattr overload to avoid breaking the C++ ABI 
(issue#18058 , pr#12207 , Josh Durgin)
* librbd: cannot disable journaling or remove non-mirrored, non-primary image 
(issue#16740 , pr#11337 , Jason Dillaman)
* librbd: discard after write can result in assertion failure (issue#17695 , 
pr#11644 , Jason Dillaman)
* librbd::Operations: update notification failed: (2) No such file or directory 
(issue#17549 , pr#11420 , Jason Dillaman)
* mds: Crash in Client::_invalidate_kernel_dcache when reconnecting during 
unmount (issue#17253 , pr#11414 , Yan, Zheng)
* mds: Duplicate damage table entries (issue#17173 , pr#11412 , John Spray)
* mds: Failure in dirfrag.sh (issue#17286 , pr#11416 , Yan, Zheng)
* mds: Failure in snaptest-git-ceph.sh (issue#17271 , pr#11415 , Yan, Zheng)
* mon: Ceph Status - Segmentation Fault (issue#16266 , pr#11408 , Brad Hubbard)
* mon: Display full flag in ceph status if full flag is set (issue#15809 , 
pr#9388 , Vikhyat Umrao)
* mon: Error EINVAL: removing mon.a at 172.21.15.16:6789/0, there will be 1 
monitors (issue#17725 , pr#12267 , Joao Eduardo Luis)
* mon: OSDMonitor: only reject MOSDBoot based on up_from if inst matches 
(issue#17899 , pr#12067 , Samuel Just)
* mon: OSDMonitor: Missing nearfull flag set (issue#17390 , pr#11272 , Igor 
Podoski)
* mon: Upgrading 0.94.6 -> 0.94.9 saturating mon node networking (issue#17365 , 
issue#17386 ,