Re: [ceph-users] rbd-fuse Transport endpoint is not connected

2015-07-30 Thread Eric Eastman
It is great having access to features that are not fully production-ready,
but it would be nice to know which Ceph features are ready and
which are not.  Just as the Ceph File System is clearly marked as
not yet fully production-ready, it would be nice if rbd-fuse could
be marked as not ready, and why. That might motivate someone in the
community to work on it.

Eric

On Thu, Jul 30, 2015 at 2:30 AM, Ilya Dryomov idryo...@gmail.com wrote:

 This is an error from rados_connect(), so I'd say there is something
 wrong with your ceph.conf or the environment.  We can try to debug
 this, but yeah, rbd-fuse, as it is, is not intended for serious use.

 Thanks,

 Ilya
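
A failure like the one Ilya describes — rados_connect() erroring out before
rbd-fuse does anything — can often be narrowed down without rbd-fuse at all.
The sketch below is a stdlib-only debugging aid, not part of rbd-fuse or the
Ceph tree; the "mon host" key and default monitor port 6789 are the usual
conventions, but treat the whole thing as illustrative:

```python
# Sketch: sanity-check the environment behind a rados_connect() failure.
# Parse ceph.conf for the monitor addresses and try a plain TCP
# connection to each.  Pure stdlib; a debugging aid only.
import configparser
import socket

def mon_addrs(conf_text):
    """Extract (host, port) pairs from the 'mon host' option in [global]."""
    cp = configparser.ConfigParser(strict=False, delimiters=("=",))
    cp.read_string(conf_text)
    raw = cp.get("global", "mon host", fallback="")
    addrs = []
    for entry in raw.replace(",", " ").split():
        host, _, port = entry.partition(":")
        addrs.append((host, int(port) if port else 6789))  # 6789 = default mon port
    return addrs

def reachable(host, port, timeout=3.0):
    """True if a plain TCP connection to the monitor endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo on an inline config; in practice, read /etc/ceph/ceph.conf instead.
example = "[global]\nmon host = 10.0.0.1:6789, 10.0.0.2\n"
print(mon_addrs(example))  # -> [('10.0.0.1', 6789), ('10.0.0.2', 6789)]
```

If the monitors are reachable but rados_connect() still fails, the problem is
more likely keyring/auth configuration than the network.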
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Sage Weil
As time marches on it becomes increasingly difficult to maintain proper 
builds and packages for older distros.  For example, as we make the 
systemd transition, maintaining the kludgey sysvinit and udev support for 
centos6/rhel6 is a pain in the butt and eats up time and energy to 
maintain and test that we could be spending doing more useful work.

Dropping them would mean:

 - Ongoing development on master (and future versions like infernalis and 
jewel) would not be tested on these distros.

 - We would stop building upstream release packages on ceph.com for new 
releases.

 - We would probably continue building hammer and firefly packages for 
future bugfix point releases.

 - The downstream distros would probably continue to package them, but the 
burden would be on them.  For example, if Ubuntu wanted to ship Jewel on 
precise 12.04, they could, but they'd probably need to futz with the 
packaging and/or build environment to make it work.

So... given that, I'd like to gauge user interest in these old distros.  
Specifically,

 CentOS6 / RHEL6
 Ubuntu precise 12.04
 Debian wheezy

Would anyone miss them?

In particular, dropping these three would mean we could drop sysvinit 
entirely and focus on systemd (and continue maintaining the existing 
upstart files for just a bit longer).  That would be a relief.  (The 
sysvinit files wouldn't go away in the source tree, but we wouldn't worry 
about packaging and testing them properly.)

Thanks!
sage


Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Jan Schermer
Not at all.
We have this: http://ceph.com/docs/master/releases/

I would expect that whatever distribution I install a Ceph LTS release on will
be supported for the time specified.
That means if I install Hammer on CentOS 6 now, it will stay supported
until Q3 2016.

Of course, if in the meantime the distribution itself becomes unsupported,
then it makes sense to stop supporting it for Ceph as well, but that’s
probably not the case here:

https://access.redhat.com/support/policy/updates/errata

I don’t expect Ceph to be supported until the EOL of the distro itself.

Jan
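
Jan's expectation can be stated mechanically: a given pairing of Ceph LTS
release and distro is supportable until the earlier of the two EOL dates.
A toy sketch of that rule (the dates are pulled loosely from this thread —
Hammer around Q3 2016, RHEL 6 maintained until Nov 2020 — and are
illustrative, not authoritative):

```python
from datetime import date

# Toy sketch of the support-window rule discussed above.  Dates are
# approximations taken from the thread, illustrative only.
EOL = {
    ("ceph", "hammer"): date(2016, 9, 30),
    ("distro", "rhel6"): date(2020, 11, 30),
    ("distro", "wheezy"): date(2018, 5, 31),
}

def supported_until(ceph_release, distro):
    """The pairing ends with whichever EOL comes first."""
    return min(EOL[("ceph", ceph_release)], EOL[("distro", distro)])

print(supported_until("hammer", "rhel6"))  # 2016-09-30: Ceph's own EOL wins
```

On these numbers, the Ceph release EOL is always the binding constraint,
which is exactly Jan's point: he asks for support until the Ceph release is
EOL, not until the distro is.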



 On 30 Jul 2015, at 16:34, Handzik, Joe joseph.t.hand...@hp.com wrote:
 
 So, essentially, you'd vote that all LTS/enterprise releases be supported 
 until their vendor's (Canonical, SUSE, Red Hat) designated EOL date? Not 
 voting either way, just trying to put a date stamp on some of this.
 
 Joe
 
 On Jul 30, 2015, at 9:30 AM, Jan “Zviratko” Schermer zvira...@zviratko.net 
 wrote:
 
 I understand your reasons, but dropping support for an LTS release like this
 is not right.
 
 You should lege artis support every distribution the LTS release could ever
 have been installed on - that’s what the LTS label is for and what we rely on
 once we build a project on top of it.
 
 CentOS 6 in particular is still very widely used and even being installed;
 enterprise apps rely on it to this day. Someone out there is surely
 maintaining their LTS Ceph release on this distro, and not having tested
 packages will hurt badly. We don’t want our project managers selecting an
 EMC SAN over Ceph SDS because of such uncertainty, and you should benchmark
 yourself against those vendors, maybe...
 
 Every developer loves dropping support and concentrating on the bleeding-edge
 interesting stuff, but that’s not how it should work.
 
 Just my 2 cents...
 
 Jan
 
 On 30 Jul 2015, at 15:54, Sage Weil sw...@redhat.com wrote:
 
 [...]



Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Marc

Hi,

Much like Debian already does, I would suggest not making systemd a 
dependency for Ceph (or anything else, for that matter). The reason being 
that we desperately need sysvinit until the systemd forks are ready 
which offer the systemd init system without all those slapped-on 
appendages that systemd has accumulated.


Alternatively, would it be possible to maintain a dedicated changelog where 
all changes to the init scripts are collected, so that people can 
maintain their own sysvinit scripts? I just can't foresee systemd 
becoming usable in the near future, and thus we will have to rely on 
sysvinit for quite a while longer.



RHEL6 will be maintained until Nov 2020 and Debian wheezy LTS until May 
2018, so I feel it would be premature to drop those already. I 
can't give any names, but I know of at least 2 customers I am consulting 
for that have blocked the RHEL7 upgrade due to systemd.


Regards,
Marc

On 07/30/2015 03:54 PM, Sage Weil wrote:

[...]




Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Stijn De Weirdt
I would certainly like all client libs and/or kernel modules to stay 
tested and supported on these OSes for future Ceph releases. Not sure 
how much work that is, but at least the client side shouldn't be 
affected by the init move.


stijn

On 07/30/2015 04:43 PM, Marc wrote:

[...]





Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Jan “Zviratko” Schermer
I understand your reasons, but dropping support for an LTS release like this
is not right.

You should lege artis support every distribution the LTS release could ever
have been installed on - that’s what the LTS label is for and what we rely on
once we build a project on top of it.

CentOS 6 in particular is still very widely used and even being installed;
enterprise apps rely on it to this day. Someone out there is surely maintaining
their LTS Ceph release on this distro, and not having tested packages will hurt
badly. We don’t want our project managers selecting an EMC SAN over Ceph SDS
because of such uncertainty, and you should benchmark yourself against those
vendors, maybe...

Every developer loves dropping support and concentrating on the bleeding-edge
interesting stuff, but that’s not how it should work.

Just my 2 cents...

Jan

 On 30 Jul 2015, at 15:54, Sage Weil sw...@redhat.com wrote:
 
 [...]





Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Asif Murad Khan
I don't prefer it. You have to maintain those releases up to their EOL.

On Thu, Jul 30, 2015 at 8:48 PM, Stijn De Weirdt stijn.dewei...@ugent.be
wrote:

 [...]




-- 
Asif Murad Khan
Cell: +880-1713-114230


Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Jon Meacham
If hammer and firefly bugfix releases will still be packaged for these distros, 
I don't see a problem with this. Anyone who is operating an existing LTS 
deployment on CentOS 6, etc. will continue to receive fixes for said LTS 
release.

Jon

From: ceph-users on behalf of Jan “Zviratko” Schermer
Date: Thursday, July 30, 2015 at 8:29 AM
To: Sage Weil
Cc: ceph-devel, ceph-us...@ceph.com
Subject: Re: [ceph-users] dropping old distros: el6, precise 12.04, debian 
wheezy?

[...]



Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Mark Nelson

Hi Jan,

From my reading of Sage's email, hammer would continue to be supported 
on older distros, but new development would not target those releases. 
Was that your impression as well?


As a former system administrator I feel your pain.  Upgrading to new 
distros is a ton of work and incurs a ton of uncertainty and potential 
problems and liability.  I truly think, though, that we will have a lot 
more luck testing and bug-fixing new Ceph releases if we can focus 
specifically on new distro support and not get distracted with trying to 
maintain compatibility for new Ceph releases on previous-generation LTS 
distro releases.


I.e., it's one thing if we've already tested, say, Hammer on those distros 
and minor bug fixes aren't likely to hit weird lurking kernel or distro 
bugs that aren't likely to get fixed.  With new releases, though, 
there's a ton of things we change, and some of them may be tied to 
expecting certain behavior in the kernel (random example: XFS not 
blowing up with sparse writes when non-default extent sizes are used). 
At some point we need to stop making exceptions for stuff like this, 
because an old kernel may not have a patch or behavior that we need to 
move Ceph forward.


Mark

On 07/30/2015 09:29 AM, Jan “Zviratko” Schermer wrote:

[...]






[ceph-users] Ceph Tech Talk Today!

2015-07-30 Thread Patrick McGarry
Hey cephers,

Just sending a friendly reminder that our online CephFS Tech Talk is
happening today at 13:00 EDT (17:00 UTC). Please stop by and hear a
technical deep dive on CephFS and ask any questions you might have.
Thanks!

http://ceph.com/ceph-tech-talks/

direct link to the video conference:  https://bluejeans.com/172084437/browser



-- 

Best Regards,

Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com  ||  http://community.redhat.com
@scuttlemonkey || @ceph


Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Jan “Zviratko” Schermer
I understand your reasons, but dropping support for LTS release like this
is not right.

You should lege artis support every distribution the LTS release could have
ever been installed on - that’s what the LTS label is for and what we rely on
once we build a project on top of it

CentOS 6 in particular is still very widely used and even installed, enterprise
apps rely on it to this day. Someone out there is surely maintaining their LTS
Ceph release on this distro and not having tested packages will hurt badly.
We don’t want out project managers selecting EMC SAN over CEPH SDS
because of such uncertainty, and you should benchmark yourself to those
vendors, maybe...

Every developer loves dropping support and concentrating on the bleeding
edge interesting stuff but that’s not how it should work.

Just my 2 cents...

Jan

 On 30 Jul 2015, at 15:54, Sage Weil sw...@redhat.com wrote:
 
 As time marches on it becomes increasingly difficult to maintain proper 
 builds and packages for older distros.  For example, as we make the 
 systemd transition, maintaining the kludgey sysvinit and udev support for 
 centos6/rhel6 is a pain in the butt and eats up time and energy to 
 maintain and test that we could be spending doing more useful work.
 
 Dropping them would mean:
 
 - Ongoing development on master (and future versions like infernalis and 
 jewel) would not be tested on these distros.
 
 - We would stop building upstream release packages on ceph.com for new 
 releases.
 
 - We would probably continue building hammer and firefly packages for 
 future bugfix point releases.
 
 - The downstream distros would probably continue to package them, but the 
 burden would be on them.  For example, if Ubuntu wanted to ship Jewel on 
 precise 12.04, they could, but they'd probably need to futz with the 
 packaging and/or build environment to make it work.
 
 So... given that, I'd like to gauge user interest in these old distros.  
 Specifically,
 
 CentOS6 / RHEL6
 Ubuntu precise 12.04
 Debian wheezy
 
 Would anyone miss them?
 
 In particular, dropping these three would mean we could drop sysvinit 
 entirely and focus on systemd (and continue maintaining the existing 
 upstart files for just a bit longer).  That would be a relief.  (The 
 sysvinit files wouldn't go away in the source tree, but we wouldn't worry 
 about packaging and testing them properly.)
 
 Thanks!
 sage
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Jan Schermer
It is possible I misunderstood Sage’s message - I apologize if that’s the case.

This is what made me uncertain:
 - We would probably continue building hammer and firefly packages for
 future bugfix point releases.

Decision for new releases (Infernalis, Jewel, K*) regarding distro support
should be made officially somewhere. Not sure if packages for them exist today
and where? I don’t think new releases need to be made for CentOS 6 etc.; parts
of it are just too old for new stuff to make sense. But once that “commitment”
is made (like http://ceph.com/docs/master/releases/ ) I expect them to be
“supported” until the Ceph release itself is EOL.

Of course whoever is “freeloading” must be ready for anything, anytime, no 
promises :-)

Jan

 On 30 Jul 2015, at 16:47, Mark Nelson mnel...@redhat.com wrote:
 
 Hi Jan,
 
 From my reading of Sage's email, hammer would continue to be supported on 
 older distros, but new development would not target those releases. Was that 
 your impression as well?
 
 As a former system administrator I feel your pain.  Upgrading to new distros 
 is a ton of work and incurs a ton of uncertainty and potential problems and 
 liability.  I truly think though that we will have a lot more luck testing 
 and bug fixing new Ceph releases if we can focus specifically on new distro 
 support and not get distracted with trying to maintain compatibility for new 
 Ceph releases on previous generation LTS distro releases.
 
 I.e., it's one thing if we've already tested, say, Hammer on those distros and 
 minor bug fixes aren't likely to hit weird lurking kernel or distro bugs that 
 won't get fixed.  With new releases though, there's a ton of 
 things we change, and some of them may be tied to expecting certain behavior 
 in the kernel (random example: XFS not blowing up with sparse writes when 
 non-default extent sizes are used). At some point we need to stop making 
 exceptions for stuff like this because an old kernel may not have a patch or 
 behavior that we need to move Ceph forward.
 
 Mark
 
 On 07/30/2015 09:29 AM, Jan “Zviratko” Schermer wrote:
 I understand your reasons, but dropping support for LTS release like this
 is not right.
 
 You should /lege artis/ support every distribution the LTS release could
 ever have been installed on - that’s what the LTS label is for and what we
 rely on once we build a project on top of it.
 
 CentOS 6 in particular is still very widely used and even installed; enterprise
 apps rely on it to this day. Someone out there is surely maintaining their LTS
 Ceph release on this distro, and not having tested packages will hurt badly.
 We don’t want our project managers selecting an EMC SAN over Ceph SDS
 because of such uncertainty, and you should benchmark yourselves against those
 vendors, maybe...
 
 Every developer loves dropping support and concentrating on the bleeding-edge
 interesting stuff, but that’s not how it should work.
 
 Just my 2 cents...
 
 Jan
 
 On 30 Jul 2015, at 15:54, Sage Weil sw...@redhat.com wrote:
 
 As time marches on it becomes increasingly difficult to maintain proper
 builds and packages for older distros.  For example, as we make the
 systemd transition, maintaining the kludgey sysvinit and udev support for
 centos6/rhel6 is a pain in the butt and eats up time and energy to
 maintain and test that we could be spending doing more useful work.
 
 Dropping them would mean:
 
 - Ongoing development on master (and future versions like infernalis and
 jewel) would not be tested on these distros.
 
 - We would stop building upstream release packages on ceph.com for new
 releases.
 
 - We would probably continue building hammer and firefly packages for
 future bugfix point releases.
 
 - The downstream distros would probably continue to package them, but the
 burden would be on them.  For example, if Ubuntu wanted to ship Jewel on
 precise 12.04, they could, but they'd probably need to futz with the
 packaging and/or build environment to make it work.
 
 So... given that, I'd like to gauge user interest in these old distros.
 Specifically,
 
 CentOS6 / RHEL6
 Ubuntu precise 12.04
 Debian wheezy
 
 Would anyone miss them?
 
 In particular, dropping these three would mean we could drop sysvinit
 entirely and focus on systemd (and continue maintaining the existing
 upstart files for just a bit longer).  That would be a relief.  (The
 sysvinit files wouldn't go away in the source tree, but we wouldn't worry
 about packaging and testing them properly.)
 
 Thanks!
 sage

Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Robert LeBlanc
I agree that for the distros and versions in question, Ceph releases
already released on them should receive bug-fix support until EoL of Ceph
or the distro version, whichever is shorter.
Since we are so far into the Infernalis and Jewel development cycle, would
it be better to make Jewel the last supported release on these
distros/versions? It would not be as much of a surprise and would give
admins a little more time to transition (we are currently looking to
migrate to CentOS 7 on the client side, but it will take some time, so
this is also a personal request). I can understand and appreciate that
the Ceph team is stretched thin trying to maintain such a large
breadth of compatibility.

Thank you for reading,


Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Thu, Jul 30, 2015 at 7:54 AM, Sage Weil sw...@redhat.com wrote:
 As time marches on it becomes increasingly difficult to maintain proper
 builds and packages for older distros.  For example, as we make the
 systemd transition, maintaining the kludgey sysvinit and udev support for
 centos6/rhel6 is a pain in the butt and eats up time and energy to
 maintain and test that we could be spending doing more useful work.

 Dropping them would mean:

  - Ongoing development on master (and future versions like infernalis and
 jewel) would not be tested on these distros.

  - We would stop building upstream release packages on ceph.com for new
 releases.

  - We would probably continue building hammer and firefly packages for
 future bugfix point releases.

  - The downstream distros would probably continue to package them, but the
 burden would be on them.  For example, if Ubuntu wanted to ship Jewel on
 precise 12.04, they could, but they'd probably need to futz with the
 packaging and/or build environment to make it work.

 So... given that, I'd like to gauge user interest in these old distros.
 Specifically,

  CentOS6 / RHEL6
  Ubuntu precise 12.04
  Debian wheezy

 Would anyone miss them?

 In particular, dropping these three would mean we could drop sysvinit
 entirely and focus on systemd (and continue maintaining the existing
 upstart files for just a bit longer).  That would be a relief.  (The
 sysvinit files wouldn't go away in the source tree, but we wouldn't worry
 about packaging and testing them properly.)

 Thanks!
 sage


Re: [ceph-users] ceph-mon cpu usage

2015-07-30 Thread Spillmann, Dieter
I saw this behavior when the servers are not in time sync.
Check your NTP settings.

Dieter

From: ceph-users ceph-users-boun...@lists.ceph.com on behalf of Quentin Hartman qhart...@direwolfdigital.com
Date: Wednesday, July 29, 2015 at 5:47 PM
To: Luis Periquito periqu...@gmail.com
Cc: Ceph Users ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-mon cpu usage

I just had my ceph cluster exhibit this behavior (two of three mons eat all 
CPU, cluster becomes unusably slow) which is running 0.87.1

It seems to be tied to deep scrubbing, as the behavior almost immediately 
surfaces if that is turned on, but if it is off the behavior eventually seems 
to return to normal and stays that way while scrubbing is off. I have not yet 
found anything in the cluster to indicate a hardware problem.

Any thoughts or further insights on this subject would be appreciated.

QH

On Sat, Jul 25, 2015 at 12:31 AM, Luis Periquito periqu...@gmail.com wrote:
I think I figured it out! All 4 of the OSDs on one host (OSD 107-110) were sending
massive amounts of auth requests to the monitors, seemingly overwhelming them.

Weird bit is that I removed them (osd crush remove, auth del, osd rm), dd the 
box and all of the disks, reinstalled and guess what? They are still doing a 
lot of requests to the MONs... this will require some further investigations.

As this is happening during my holidays, I just disabled them, and will 
investigate further when I get back.


On Fri, Jul 24, 2015 at 11:11 PM, Kjetil Jørgensen kje...@medallia.com wrote:
It sounds slightly similar to what I just experienced.

I had one monitor out of three which seemed to essentially run one core at
full tilt continuously and had its virtual address space allocated at the
point where top started calling it Tb. Requests hitting this monitor did not
get very timely responses (although I don't know if this was happening
consistently or arbitrarily).

I ended up re-building the monitor from the two healthy ones I had, which made 
the problem go away for me.

After the fact inspection of the monitor I ripped out, clocked it in at 1.3Gb 
compared to the 250Mb of the other two, after rebuild they're all comparable in 
size.

In my case, this started out for me on firefly and persisted after upgrading
to hammer, which prompted the rebuild, suspecting that in my case it was
related to something persistent for this monitor.

I do not have that much more useful to contribute to this discussion, since 
I've more-or-less destroyed any evidence by re-building the monitor.

Cheers,
KJ

On Fri, Jul 24, 2015 at 1:55 PM, Luis Periquito periqu...@gmail.com wrote:

The leveldb is smallish: around 70mb.

I ran debug mon = 10 for a while,  but couldn't find any interesting 
information. I would run out of space quite quickly though as the log partition 
only has 10g.

On 24 Jul 2015 21:13, Mark Nelson mnel...@redhat.com wrote:
On 07/24/2015 02:31 PM, Luis Periquito wrote:
Now it's official,  I have a weird one!

Restarted one of the ceph-mons with jemalloc and it didn't make any
difference. It's still using a lot of cpu and still not freeing up memory...

The issue is that the cluster almost stops responding to requests, and
if I restart the primary mon (that had almost no memory usage nor cpu)
the cluster goes back to its merry way responding to requests.

Does anyone have any idea what may be going on? The worst bit is that I
have several clusters just like this (well they are smaller), and as we
do everything with puppet, they should be very similar... and all the
other clusters are just working fine, without any issues whatsoever...

We've seen cases where leveldb can't compact fast enough and memory balloons, 
but it's usually associated with extreme CPU usage as well. It would be showing 
up in perf though if that were the case...


On 24 Jul 2015 10:11, Jan Schermer j...@schermer.cz wrote:

You don’t (shouldn’t) need to rebuild the binary to use jemalloc. It
should be possible to do something like

LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 ceph-osd …

The last time we tried it segfaulted after a few minutes, so YMMV
and be careful.

Jan

On 23 Jul 2015, at 18:18, Luis Periquito periqu...@gmail.com wrote:

Hi Greg,

I've been looking at the tcmalloc issues, but did seem to affect
osd's, and I do notice it in heavy read workloads (even after the
patch and
increasing TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728). This
is affecting the mon process though.

looking at perf top I'm getting most of the CPU usage 

Re: [ceph-users] ceph-mon cpu usage

2015-07-30 Thread Quentin Hartman
Thanks for the suggestion. NTP is fine in my case. Turns out it was a
networking problem that wasn't triggering error counters on the NICs so it
took a bit to track it down.

QH

On Thu, Jul 30, 2015 at 4:16 PM, Spillmann, Dieter 
dieter.spillm...@arris.com wrote:

 I saw this behavior when the servers are not in time sync.
 Check your ntp settings

 Dieter

 From: ceph-users ceph-users-boun...@lists.ceph.com on behalf of Quentin
 Hartman qhart...@direwolfdigital.com
 Date: Wednesday, July 29, 2015 at 5:47 PM
 To: Luis Periquito periqu...@gmail.com
 Cc: Ceph Users ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] ceph-mon cpu usage

 I just had my ceph cluster exhibit this behavior (two of three mons eat
 all CPU, cluster becomes unusably slow) which is running 0.87.1

 It seems to be tied to deep scrubbing, as the behavior almost immediately
 surfaces if that is turned on, but if it is off the behavior eventually
 seems to return to normal and stays that way while scrubbing is off. I have
 not yet found anything in the cluster to indicate a hardware problem.

 Any thoughts or further insights on this subject would be appreciated.

 QH

 On Sat, Jul 25, 2015 at 12:31 AM, Luis Periquito periqu...@gmail.com
 wrote:

 I think I figured it out! All 4 of the OSDs on one host (OSD 107-110) were
 sending massive amounts of auth requests to the monitors, seemingly
 overwhelming them.

 Weird bit is that I removed them (osd crush remove, auth del, osd rm), dd
 the box and all of the disks, reinstalled and guess what? They are still
 doing a lot of requests to the MONs... this will require some further
 investigations.

 As this is happening during my holidays, I just disabled them, and will
 investigate further when I get back.


 On Fri, Jul 24, 2015 at 11:11 PM, Kjetil Jørgensen kje...@medallia.com
 wrote:

 It sounds slightly similar to what I just experienced.

 I had one monitor out of three which seemed to essentially run one core
 at full tilt continuously and had its virtual address space allocated at
 the point where top started calling it Tb. Requests hitting this monitor
 did not get very timely responses (although I don't know if this was
 happening consistently or arbitrarily).

 I ended up re-building the monitor from the two healthy ones I had,
 which made the problem go away for me.

 After the fact inspection of the monitor I ripped out, clocked it in at
 1.3Gb compared to the 250Mb of the other two, after rebuild they're all
 comparable in size.

 In my case, this started out for me on firefly and persisted after
 upgrading to hammer, which prompted the rebuild, suspecting that in my case
 it was related to something persistent for this monitor.

 I do not have that much more useful to contribute to this discussion,
 since I've more-or-less destroyed any evidence by re-building the monitor.

 Cheers,
 KJ

 On Fri, Jul 24, 2015 at 1:55 PM, Luis Periquito periqu...@gmail.com
 wrote:

 The leveldb is smallish: around 70mb.

 I ran debug mon = 10 for a while,  but couldn't find any interesting
 information. I would run out of space quite quickly though as the log
 partition only has 10g.
 On 24 Jul 2015 21:13, Mark Nelson mnel...@redhat.com wrote:

 On 07/24/2015 02:31 PM, Luis Periquito wrote:

 Now it's official,  I have a weird one!

 Restarted one of the ceph-mons with jemalloc and it didn't make any
 difference. It's still using a lot of cpu and still not freeing up
 memory...

 The issue is that the cluster almost stops responding to requests, and
 if I restart the primary mon (that had almost no memory usage nor cpu)
 the cluster goes back to its merry way responding to requests.

 Does anyone have any idea what may be going on? The worst bit is that
 I
 have several clusters just like this (well they are smaller), and as
 we
 do everything with puppet, they should be very similar... and all the
 other clusters are just working fine, without any issues whatsoever...


 We've seen cases where leveldb can't compact fast enough and memory
 balloons, but it's usually associated with extreme CPU usage as well. It
 would be showing up in perf though if that were the case...


  On 24 Jul 2015 10:11, Jan Schermer j...@schermer.cz wrote:

 You don’t (shouldn’t) need to rebuild the binary to use jemalloc.
 It
 should be possible to do something like

 LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 ceph-osd …

 The last time we tried it segfaulted after a few minutes, so YMMV
 and be careful.

 Jan

  On 23 Jul 2015, at 18:18, Luis Periquito periqu...@gmail.com wrote:

 Hi Greg,

 I've been looking at the tcmalloc issues, but did seem to affect
 osd's, and I do notice it in heavy read workloads (even after the
 patch and
 increasing TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728). This
 is affecting the mon process though.

 looking at perf top I'm getting most of the CPU usage in 

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-30 Thread Z Zhang

 Date: Thu, 30 Jul 2015 13:11:11 +0300
 Subject: Re: [ceph-users] which kernel version can help avoid kernel client 
 deadlock
 From: idryo...@gmail.com
 To: zhangz.da...@outlook.com
 CC: chaofa...@owtware.com; ceph-users@lists.ceph.com
 
 On Thu, Jul 30, 2015 at 12:46 PM, Z Zhang zhangz.da...@outlook.com wrote:
 
  Date: Thu, 30 Jul 2015 11:37:37 +0300
  Subject: Re: [ceph-users] which kernel version can help avoid kernel
  client deadlock
  From: idryo...@gmail.com
  To: zhangz.da...@outlook.com
  CC: chaofa...@owtware.com; ceph-users@lists.ceph.com
 
  On Thu, Jul 30, 2015 at 10:29 AM, Z Zhang zhangz.da...@outlook.com
  wrote:
  
   
   Subject: Re: [ceph-users] which kernel version can help avoid kernel
   client
   deadlock
   From: chaofa...@owtware.com
   Date: Thu, 30 Jul 2015 13:16:16 +0800
   CC: idryo...@gmail.com; ceph-users@lists.ceph.com
   To: zhangz.da...@outlook.com
  
  
   On Jul 30, 2015, at 12:48 PM, Z Zhang zhangz.da...@outlook.com wrote:
  
   We also hit the similar issue from time to time on centos with 3.10.x
   kernel. By iostat, we can see kernel rbd client's util is 100%, but no
   r/w
   io, and we can't umount/unmap this rbd client. After restarting OSDs, it
   will become normal.
 
  3.10.x is rather vague, what is the exact version you saw this on? Can you
  provide syslog logs (I'm interested in dmesg)?
 
  The kernel version should be 3.10.0.
 
  I don't have sys logs at hand. It is not easily reproduced, and it happened
  at very low memory situation. We are running DB instances over rbd as
  storage. DB instances will use lot of memory when running high concurrent
  rw, and after running for a long time, rbd might hit this problem, but not
  always. Enabling rbd log makes our system behave strange during our test.
 
  I back-ported one of your fixes:
  https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/block/rbd.c?id=5a60e87603c4c533492c515b7f62578189b03c9c
 
  So far test looks fine for few days, but still under observation. So want to
  know if there are some other fixes?
 
 I'd suggest following 3.10 stable series (currently at 3.10.84).  The
 fix you backported is crucial in low memory situations, so I wouldn't
 be surprised if it alone fixed your problem.  (It is not in 3.10.84,
 I assume it'll show up in 3.10.85 - for now just apply your backport.)
 
Cool, looking forward to 3.10.85 to see what else will be brought in.
Thanks.
 Thanks,
 
 Ilya


[ceph-users] RGW + civetweb + SSL

2015-07-30 Thread Italo Santos
Hello,
I’d like to know if someone knows how to set up an SSL implementation of RGW
with civetweb.

The only “documentation” that I found about that is a “bug” -
http://tracker.ceph.com/issues/11239 - and I’d like to know whether this kind
of implementation really works.

Regards.

Italo Santos
http://italosantos.com.br/



Re: [ceph-users] questions on editing crushmap for ceph cache tier

2015-07-30 Thread van


 On Jul 31, 2015, at 2:55 AM, Robert LeBlanc rob...@leblancnet.us wrote:
 
 
 You are close...
 
 I've done it by creating a new SSD root in the CRUSH map, then put the
 SSD OSDs into a -ssd entry. I then created a new crush rule to choose
 from the SSD root, then have the tiering pool use that rule. If you
 look at the example in the document and think of ceph-osd-ssd-server-1
 and ceph-osd-platter-server-1 as the same physical server with just
 logical separation, you can follow the rest of the example. You will
 need to modify either the ceph.conf to have the ssd OSDs have a
 different CRUSH map location or write a location script to do it
 automatically [1].
 
 ceph.conf example:
 [osd.70]
crush location = root=ssd host=nodez-ssd

Thanks. It works. Modifying ceph.conf is very important; otherwise, after I
start the OSDs, they will automatically
go back to the original host.
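
For reference, a sketch of what the decompiled CRUSH map could end up looking
like after the move (bucket names, ids and weights here are illustrative, not
taken from the thread; the SSD OSDs are assumed to have been removed from the
original host buckets):

```
host node0-ssd {
        id -4
        alg straw
        hash 0
        item osd.2 weight 0.5
}

host node1-ssd {
        id -5
        alg straw
        hash 0
        item osd.5 weight 0.5
}

root ssd {
        id -6
        alg straw
        hash 0
        item node0-ssd weight 0.5
        item node1-ssd weight 0.5
}

rule ssd_ruleset {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}
```

The tiering pool can then be pointed at the new rule with something like
"ceph osd pool set <cachepool> crush_ruleset 1" (substituting the actual pool
name and ruleset id).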

 
 To do it programmaticly, you can use /usr/bin/ceph-crush-location as a
 base. I extended it by finding the device, then searching through
 hdparm to see if the rotation was greater than zero. If it wasn't
 then I output the hostname with the ssd portion, otherwise just the
 hostname. It was only a few lines of code, but I can't find it right
 now. I was waiting for ACLs to be modified so that I could query the
 data center location from our inventory management system (that was a
 few months ago) and I'm still waiting.
 
 [1] 
 http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-January/045673.html
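
A location hook along these lines might look like the sketch below. It is an
illustration only: the `root=ssd` / `host=<host>-ssd` naming follows the
ceph.conf example above, the script path is hypothetical, and instead of
hdparm it reads the kernel's /sys rotational flag ("0" usually means a
non-rotational device, i.e. an SSD). A real hook wired up via the
`osd crush location hook` option must print the CRUSH location on stdout.

```python
#!/usr/bin/env python3
"""Hypothetical CRUSH location hook: place SSDs under a separate root."""
import socket
import sys


def crush_location(hostname, rotational):
    # "0" in /sys/block/<dev>/queue/rotational indicates a non-rotational
    # device (SSD); anything else is treated as a spinning disk.
    if str(rotational).strip() == "0":
        return "root=ssd host={}-ssd".format(hostname)
    return "root=default host={}".format(hostname)


def location_for_device(dev, hostname=None):
    # Resolve the short hostname, then inspect the device's rotational flag.
    hostname = hostname or socket.gethostname().split(".")[0]
    try:
        with open("/sys/block/{}/queue/rotational".format(dev)) as f:
            rotational = f.read()
    except IOError:
        rotational = "1"  # if we cannot tell, assume a spinning disk
    return crush_location(hostname, rotational)


if __name__ == "__main__":
    print(location_for_device(sys.argv[1] if len(sys.argv) > 1 else "sda"))
```

With something like `osd crush location hook = /usr/local/bin/ssd-crush-location`
(path is an assumption) each OSD would then land in the right bucket at start-up
instead of drifting back to its original host.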
 
 Robert LeBlanc
 PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
 
 
 On Wed, Jul 29, 2015 at 9:21 PM, van chaofa...@owtware.com wrote:
 Hi, list,
 
 Ceph cache tier seems very promising for performance.
 According to
 http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
 , I need to create a new pool based on SSD OSDs.
 
 Currently, I’ve two servers with several HDD-based OSDs. I plan to add one
 SSD-based OSD to each server, and then use these two OSDs to build a cache
 pool.
 But I’ve run into problems editing the crushmap.
 The example in the link uses two new hosts to hold the SSD OSDs and then
 creates a new ruleset that takes the new hosts.
 But in my environment, I do not have new servers to use.
 Can I create a ruleset that chooses part of the OSDs in a host?
 For example, in the crushmap shown below, osd.2 and osd.5 are newly added
 SSD-based OSDs; how can I create a ruleset that chooses only these two OSDs,
 and how can I avoid the default ruleset choosing osd.2 and osd.5?
 Is this possible, or do I have to add a new server to deploy the cache tier?
 Thanks.
 
 host node0 {
  id -2
  alg straw
  hash 0
  item osd.0 weight 1.0 # HDD
  item osd.1 weight 1.0 # HDD
  item osd.2 weight 0.5 # SSD
 }
 
 host node1 {
  id -3
  alg straw
  hash 0
  item osd.3 weight 1.0 # HDD
  item osd.4 weight 1.0 # HDD
  item osd.5 weight 0.5 # SSD
 }
 
 root default {
id -1   # do not change unnecessarily
# weight 1.560
alg straw
hash 0  # rjenkins1
item node0 weight 2.5
item node1 weight 2.5
 }
 
 # typical ruleset
 rule replicated_ruleset {
ruleset 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
 }
 
 
 
 van
 chaofa...@owtware.com
 
 
 
 


Re: [ceph-users] Check networking first?

2015-07-30 Thread Stijn De Weirdt
wouldn't it be nice if ceph did something like this in the background 
(some sort of network-scrub)? debugging the network like this is not that 
easy (you can't expect admins to install e.g. perfsonar on all nodes and/or 
clients)

something like: every X min, each service X picks a service Y on another 
host (assuming X and Y will exchange some communication at some point, 
like osd with other osd), sends 1MB of data, and makes the timing data 
available so we can monitor it and detect underperforming links over time.

ideally clients would also do this, but I'm not sure where they should 
report/store the data.

interpreting the data can be a bit tricky, but extreme outliers will be 
spotted easily, and the main issue with this sort of debugging is 
collecting the data.

simply reporting / keeping track of ongoing communications is already a 
big step forward, but then we need to have the size of the exchanged 
data to allow interpretation (and the timing should be about the network 
part, not e.g. flushing data to disk in the case of an osd). (and obviously 
sampling is enough, no need to have details of every bit sent.)
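
As a rough illustration of the kind of 1MB timing probe described above, here
is a minimal loopback sketch (all names are assumptions; a real network-scrub
would send to a peer daemon on another host rather than to 127.0.0.1, and
would report the result somewhere central):

```python
"""Toy bandwidth probe: time how long 1 MiB takes to reach a listener."""
import socket
import threading
import time

PAYLOAD = b"x" * (1 << 20)  # 1 MiB, as suggested above


def _sink(server, result):
    # Accept one connection and drain everything the peer sends.
    conn, _ = server.accept()
    total = 0
    while True:
        chunk = conn.recv(65536)
        if not chunk:
            break
        total += len(chunk)
    conn.close()
    result["bytes"] = total


def probe_throughput(host="127.0.0.1"):
    """Send 1 MiB to a listener and return (bytes moved, MB/s)."""
    server = socket.socket()
    server.bind((host, 0))  # let the OS pick a free port
    server.listen(1)
    result = {}
    t = threading.Thread(target=_sink, args=(server, result))
    t.start()
    client = socket.create_connection(server.getsockname())
    start = time.time()
    client.sendall(PAYLOAD)
    client.close()  # EOF lets the sink finish counting
    t.join()
    server.close()
    elapsed = max(time.time() - start, 1e-9)
    return result["bytes"], result["bytes"] / elapsed / 1e6
```

Extreme outliers (like the 1.7Mb/s link in the message below) would stand out
immediately in the returned rate.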




stijn

On 07/30/2015 08:04 PM, Mark Nelson wrote:

Thanks for posting this!  We see issues like this more often than you'd
think.  It's really important too because if you don't figure it out the
natural inclination is to blame Ceph! :)

Mark

On 07/30/2015 12:50 PM, Quentin Hartman wrote:

Just wanted to drop a note to the group that I had my cluster go
sideways yesterday, and the root of the problem was networking again.
Using iperf I discovered that one of my nodes was only moving data at
1.7Mb / s. Moving that node to a different switch port with a different
cable has resolved the problem. It took a while to track down because
none of the server-side error metrics for disk or network showed
anything was amiss, and I didn't think to test network performance (as
suggested in another thread) until well into the process.

Check networking first!

QH




Re: [ceph-users] fuse mount in fstab

2015-07-30 Thread Alvaro Simon Garcia
Hi

More info about this issue: we have opened a ticket with Red Hat; here is
the feedback:

https://bugzilla.redhat.com/show_bug.cgi?id=1248003

Cheers
Alvaro

On 16/07/15 15:19, Alvaro Simon Garcia wrote:
 Hi

 I have tested this a bit with different ceph-fuse versions and linux
 distros, and it seems to be a mount issue in CentOS 7. The problem is that
 mount first tries to look up the param=value from the fstab fs_spec field
 among the blkid block device attributes; of course this flag is not there,
 and you always get an error like this:

 mount: can't find param=value

 and it stops there; the mount values are never parsed by the
 /sbin/mount.fuse.ceph helper...

 The only workaround that I found without changing the mount version is to
 replace the spurious = with another special character, like a colon for
 example:

 id:admin  /mnt/ceph fuse.ceph defaults 0 0

 but you also have to change /sbin/mount.fuse.ceph parser:

 ...
 # convert device string to options
 fs_spec=`echo $1 | sed 's/:/=/g'`
 cephargs='--'`echo $fs_spec | sed 's/,/ --/g'`
 ...

 but this is a bit annoying...
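
The conversion the patched helper performs can be sketched in a few lines
(illustrative only: `fstab_spec_to_args` is a hypothetical name, and a naive
global replace like this one would break any value that legitimately contains
the separator character):

```python
def fstab_spec_to_args(fs_spec, sep=":"):
    # Mirror the patched /sbin/mount.fuse.ceph parser: turn the fstab
    # fs_spec field back into ceph-fuse command-line arguments, e.g.
    # "id:admin,conf:/etc/ceph/ceph.conf" -> --id=admin --conf=...
    normalized = fs_spec.replace(sep, "=")
    return ["--" + opt for opt in normalized.split(",")]


print(fstab_spec_to_args("id:admin,conf:/etc/ceph/ceph.conf"))
# → ['--id=admin', '--conf=/etc/ceph/ceph.conf']
```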

 Has anyone else found the same mount fuse issue in RHEL 7 or CentOS?

 Cheers
 Alvaro

 On 09/07/15 12:22, Kenneth Waegeman wrote:
 Hmm, it looks like a version issue..

 I am testing with these versions on centos7:
  ~]# mount -V
 mount from util-linux 2.23.2 (libmount 2.23.0: selinux, debug, assert)
  ~]# ceph-fuse -v
 ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff)

 This does not work...


 On my fedora box, with these versions from repo:
 # mount -V
 mount from util-linux 2.24.2 (libmount 2.24.0: selinux, debug, assert)
 # ceph-fuse -v
 ceph version 0.80.9 (b5a67f0e1d15385bc0d60a6da6e7fc810bde6047)

 this works..


 Which versions are you running?
 And does anyone know which versions, or which version
 combinations, do work?

 Thanks a lot!
 K

 On 07/09/2015 11:53 AM, Thomas Lemarchand wrote:
 Hello Kenneth,

 I have a working ceph fuse in fstab. The only difference I see is that I
 don't use conf; your configuration file is at the default path
 anyway.
 I tried it with and without conf, but it always complains about id
 id=recette-files-rw,client_mountpoint=/recette-files/files
   /mnt/wimi/ceph-files  fuse.ceph noatime,_netdev 0 0







Re: [ceph-users] rbd-fuse Transport endpoint is not connected

2015-07-30 Thread Ilya Dryomov
On Wed, Jul 29, 2015 at 11:42 PM, pixelfairy pixelfa...@gmail.com wrote:
 Copied ceph.conf from the servers. Hope this helps. Should this be
 considered an unsupported feature?

 # rbd-fuse /cmnt -c /etc/ceph/ceph.conf -d
 FUSE library version: 2.9.2
 nullpath_ok: 0
 nopath: 0
 utime_omit_ok: 0
 unique: 1,
 opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
 INIT: 7.22 flags=0xf7fb max_readahead=0x0002 Error connecting to
 cluster: No such file or directory
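The "Error connecting to cluster: No such file or directory" above typically means librados could not find the conf file, the keyring, or a usable monitor address. A minimal pre-flight sketch, assuming the usual default paths (CONF and KEYRING are overridable placeholders):

```shell
#!/bin/sh
# Pre-flight sanity check before running rbd-fuse. rados_connect() returns
# ENOENT ("No such file or directory") when it cannot find the conf file,
# the keyring, or any monitor to talk to.
CONF=${CONF:-/etc/ceph/ceph.conf}
KEYRING=${KEYRING:-/etc/ceph/ceph.client.admin.keyring}

check_ceph_env() {
    [ -r "$CONF" ]    || { echo "missing conf: $CONF"; return 1; }
    [ -r "$KEYRING" ] || { echo "missing keyring: $KEYRING"; return 1; }
    grep -q 'mon' "$CONF" || { echo "no mon entries in $CONF"; return 1; }
    echo "basic checks passed; now try: ceph -s -c $CONF"
}

check_ceph_env || echo "fix the issue above, then retry rbd-fuse"
```

If `ceph -s` with the same conf works but rbd-fuse still fails, the problem is in the environment rbd-fuse runs under rather than in the conf itself.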

This is an error from rados_connect(), so I'd say there is something
wrong with your ceph.conf or the environment.  We can try to debug
this, but yeah, rbd-fuse, as it is, is not intended for serious use.

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Squeeze packages for 0.94.2

2015-07-30 Thread Sebastian Köhler
Hello,

it seems that there are no Debian Squeeze packages in the repository for the 
current Hammer version. Is this an oversight or is there another reason those 
are not provided?

Sebastian
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to identify MDS client failing to respond to capability release?

2015-07-30 Thread John Spray
For sufficiently recent clients we do this for you (clients send some 
metadata like hostname, which is used in the MDS to generate an 
easier-to-understand identifier).


To do it by hand, use the admin socket command ceph daemon mds.id 
session ls command, and look out for the client IP addresses in the 
address part of each entry.  By the way, there have been various other 
bugs in 3.13-ish kernels, so you should consider using either a more 
recent kernel or a fuse client.
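To illustrate the by-hand approach, the client id and IP can be pulled out of the session ls output with a little text processing. A rough sketch; the SAMPLE below is invented to mimic the shape of the real JSON, not taken from an actual cluster:

```shell
#!/bin/sh
# Extract "client.<id> <ip>" pairs from `ceph daemon mds.<id> session ls`
# output. SAMPLE is a hypothetical excerpt standing in for the real JSON.
SAMPLE='
    "id": 4305,
    "inst": "client.4305 192.168.1.21:0/1234567",
    "id": 4310,
    "inst": "client.4310 192.168.1.35:0/7654321",
'

# Print "client.<id> <ip>" for every session entry, stripping port/nonce.
list_clients() {
    echo "$SAMPLE" | sed -n 's/.*"inst": "\(client\.[0-9]*\) \([0-9.]*\):.*/\1 \2/p'
}

list_clients
```

In practice you would pipe the live command output into the same sed expression instead of using SAMPLE.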


John

On 30/07/15 08:32, Oliver Schulz wrote:

Hello Ceph Experts,

lately, ceph status on our cluster often states:

mds0: Client CLIENT_ID failing to respond to capability release

How can I identify which client is at fault (hostname or IP address)
from the CLIENT_ID?

What could be the source of the failing to respond to capability 
release -

Linux kernel on the client too old? We use ceph-0.94.2 on the cluster,
and the CephFS kernel client on the clients (kernel 3.13.0 on Ubuntu 
12.04

and Ubuntu 14.04). But it's possible that there's a machine with an
older Kernel around somewhere ...


Cheers and thanks,

Oliver
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] mount rbd image with iscsi

2015-07-30 Thread Daleep Bais
hi,

I am trying to mount an RBD image using iSCSI following URL :

http://www.sebastien-han.fr/blog/2014/07/07/start-with-the-rbd-support-for-tgt/

However, I don't get the rbd flag when I run the command

sudo tgtadm --lld iscsi --op show --mode system | grep rbd

The expected output, rbd (bsoflags sync:direct), is not shown.


Can someone help with this so that I can export the RBD image over iSCSI?

thanks..
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Unable to mount Format 2 striped RBD image

2015-07-30 Thread Daleep Bais
hi Ilya,

I had used the below command to create the rbd image

rbd -p fool create strpimg --image-format 2 --order 22 --size 2048M
--stripe-unit 65536 --stripe-count 3 --image-feature striping --image-shared

I am confused by your question about why I use sysfs instead of the rbd cli tool.

Can you please help me? I don't wish to use qemu or virt-manager. I wish to
use this as a block device on a client PC.

Thanks.

Daleep

On Wed, Jul 29, 2015 at 4:19 PM, Ilya Dryomov idryo...@gmail.com wrote:

 On Wed, Jul 29, 2015 at 1:45 PM, Daleep Bais dal...@mask365.com wrote:
  Hi,
 
  I have created a format 2 striped image, however, I am not able to mount
 it
  on client machine..
 
  # rbd -p foo info strpimg
  rbd image 'strpimg':
  size 2048 MB in 513 objects
  order 22 (4096 kB objects)
  block_name_prefix: rbd_data.20c942ae8944a
  format: 2
  features: striping
  flags:
  stripe unit: 65536 bytes
  stripe count: 3
 
 
  Getting below error when trying to map using echo ... > /sys/bus/rbd/add

  write error: invalid argument
 
  if i use a format 1 image, I am able to mount the RBD to client and use
 it.

 Custom striping settings (i.e. non-default stripe_unit and
 stripe_count) are not yet supported by the kernel client.

 Unrelated, why are you using sysfs directly instead of rbd cli tool?

 Thanks,

 Ilya

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-30 Thread Ilya Dryomov
On Thu, Jul 30, 2015 at 10:29 AM, Z Zhang zhangz.da...@outlook.com wrote:

 
 Subject: Re: [ceph-users] which kernel version can help avoid kernel client
 deadlock
 From: chaofa...@owtware.com
 Date: Thu, 30 Jul 2015 13:16:16 +0800
 CC: idryo...@gmail.com; ceph-users@lists.ceph.com
 To: zhangz.da...@outlook.com


 On Jul 30, 2015, at 12:48 PM, Z Zhang zhangz.da...@outlook.com wrote:

 We also hit the similar issue from time to time on centos with 3.10.x
 kernel. By iostat, we can see kernel rbd client's util is 100%, but no r/w
 io, and we can't umount/unmap this rbd client. After restarting OSDs, it
 will become normal.

3.10.x is rather vague; what is the exact version you saw this on?  Can you
provide syslog logs (I'm interested in dmesg)?

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Unable to mount Format 2 striped RBD image

2015-07-30 Thread Ilya Dryomov
On Thu, Jul 30, 2015 at 11:15 AM, Daleep Bais dal...@mask365.com wrote:
 hi Ilya,

 I had used the below command to create the rbd image

 rbd -p fool create strpimg --image-format 2 --order 22 --size 2048M
 --stripe-unit 65536 --stripe-count 3 --image-feature striping --image-shared

 I am confused by your question about why I use sysfs instead of the rbd cli tool.

Well, there is no way to create an image via sysfs, but you can map an
image by writing to /sys/bus/rbd/... instead of using rbd map.  You
wrote this:

 Getting below error when trying to map using echo ... > /sys/bus/rbd/add

 write error: invalid argument

which is why I asked if you had used sysfs directly, instead of
something like rbd map strpimg.


 Can you please help me? I don't wish to use qemu or virt-manager. I wish to
 use this as a block device on a client PC.

You'd have to drop --stripe-unit 65536 --stripe-count 3 from your
rbd create for that to work, at least for now.
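In other words, recreating the image with default striping should make it mappable by the kernel client. A sketch of the commands (echoed rather than executed so they can be reviewed first; the pool/image names are the ones from this thread):

```shell
#!/bin/sh
# Recreate the image without --stripe-unit/--stripe-count so the kernel
# client can map it. Commands are echoed for review; uncomment the last
# line to actually run them against a cluster.
POOL=foo
IMG=strpimg

CREATE_CMD="rbd -p $POOL create $IMG --image-format 2 --order 22 --size 2048M --image-shared"
MAP_CMD="rbd map $POOL/$IMG"

echo "$CREATE_CMD"
echo "$MAP_CMD"
# $CREATE_CMD && $MAP_CMD
```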

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Recovery question

2015-07-30 Thread Peter Hinman
For the record, I have been able to recover.  Thank you very much for 
the guidance.


I hate searching the web and finding only partial information on threads 
like this, so I'm going to document and post what I've learned as best I 
can in hopes that it will help someone else out in the future.


--
Peter Hinman

On 7/29/2015 5:15 PM, Robert LeBlanc wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

If you had multiple monitors, you should recover, if possible, more than
50% of them (they will need to form a quorum). If you can't, it is
messy, but you can manually remove enough monitors to start a quorum.
From /etc/ceph/ you will want the keyring and the ceph.conf at a
minimum. The keys for the monitor I think are in the store.db, which
will let the monitors start, but the keyring has the admin key which
lets you manage the cluster once you get it up. rbdmap is not needed
for recovery (only for automatically mounting RBDs at boot time); we can
deal with that later if needed.
- 
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Wed, Jul 29, 2015 at 4:40 PM, Peter Hinman  wrote:

Ok - that is encouraging.  I've believe I've got data from a previous
monitor. I see files in a store.db dated yesterday, with a MANIFEST-
file that is significantly greater than the MANIFEST-07 file listed for
the current monitors.

I've actually found data for two previous monitors.  Any idea which one I
should select? The one with the highest manifest number? The most recent
time stamp?

What files should I be looking for in /etc/conf?  Just the keyring and
rbdmap files?  How important is it to use the same keyring file?

--
Peter Hinman


On 7/29/2015 3:47 PM, Robert LeBlanc wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

The default is /var/lib/ceph/mon/- (/var/lib/ceph/mon/ceph-mon1 for
me). You will also need the information from /etc/ceph/ to reconstruct
the data. I *think* you should be able to just copy this to a new box
with the same name and IP address and start it up.

I haven't actually done this, so there still may be some bumps.
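A hedged sketch of that copy-and-restart idea, with the commands echoed for review rather than executed. MON_ID and BACKUP are placeholders, and the path follows the /var/lib/ceph/mon/ceph-<id> default mentioned above:

```shell
#!/bin/sh
# Sketch: restore a recovered monitor directory onto a replacement box
# with the same hostname and IP as the failed monitor. MON_ID and BACKUP
# are placeholders; review the echoed commands before running them.
MON_ID=${MON_ID:-mon1}
BACKUP=${BACKUP:-/root/mon-backup}
MON_DIR="/var/lib/ceph/mon/ceph-$MON_ID"

echo "cp -a $BACKUP/ceph-$MON_ID /var/lib/ceph/mon/"
echo "service ceph start mon.$MON_ID"
```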
- 
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Wed, Jul 29, 2015 at 3:44 PM, Peter Hinman  wrote:

Thanks Robert -

Where would that monitor data (database) be found?

--
Peter Hinman


On 7/29/2015 3:39 PM, Robert LeBlanc wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

If you built new monitors, this will not work. You would have to
recover the monitor data (database) from at least one monitor and
rebuild the monitor. The new monitors would not have any information
about pools, OSDs, PGs, etc to allow an OSD to be rejoined.
- 
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Wed, Jul 29, 2015 at 2:46 PM, Peter Hinman  wrote:

Hi Greg -

So at the moment, I seem to be trying to resolve a permission error.

=== osd.3 ===
Mounting xfs on stor-2:/var/lib/ceph/osd/ceph-3
2015-07-29 13:35:08.809536 7f0a0262e700  0 librados: osd.3
authentication
error (1) Operation not permitted
Error connecting to cluster: PermissionError
failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf
--name=osd.3
--keyring=/var/lib/ceph/osd/ceph-3/keyring osd crush create-or-move --
3
3.64 host=stor-2 root=default'
ceph-disk: Error: ceph osd start failed: Command
'['/usr/sbin/service',
'ceph', '--cluster', 'ceph', 'start', 'osd.3']' returned non-zero exit
status 1
ceph-disk: Error: One or more partitions failed to activate


Is there a way to identify the cause of this PermissionError?  I've
copied
the client.bootstrap-osd key from the output of ceph auth list, and
pasted
it into /var/lib/ceph/bootstrap-osd/ceph.keyring, but that has not resolved
the error.

But it sounds like you are saying that even once I get this resolved, I
have
no hope of recovering the data?

--
Peter Hinman

On 7/29/2015 1:57 PM, Gregory Farnum wrote:

This sounds like you're trying to reconstruct a cluster after
destroying
the
monitors. That is...not going to work well. The monitors define the
cluster
and you can't move OSDs into different clusters. We have ideas for how
to
reconstruct monitors and it can be done manually with a lot of hassle,
but
the process isn't written down and there aren't really tools to help with
it.
:/
-Greg

On Wed, Jul 29, 2015 at 5:48 PM Peter Hinman  wrote:

I've got a situation that seems on the surface like it should be
recoverable, but I'm struggling to understand how to do it.

I had a cluster of 3 monitors, 3 osd disks, and 3 journal ssds. After
multiple hardware failures, I pulled the 3 osd disks and 3 journal
ssds
and am attempting to bring them back up again on new hardware in a new
cluster.  I see plenty of documentation on how to zap and initialize
and
add new osds, but I don't see anything on rebuilding with existing
osd
disks.

Could somebody provide guidance on how to do this?  I'm 

[ceph-users] ceph osd mounting issue with ocfs2

2015-07-30 Thread gjprabu
Hi All,



   We are using ceph with two OSDs and three clients. The clients try to mount a 
shared RBD image with the OCFS2 file system. Only two clients can mount it 
properly; the third client gives the errors below. Sometimes the third client 
does mount, but data does not sync to it.





mount /dev/rbd/rbd/integdownloads /soho/build/downloads



mount.ocfs2: Invalid argument while mounting /dev/rbd0 on 
/soho/build/downloads. Check 'dmesg' for more information on this error.



dmesg



[1280548.676688] (mount.ocfs2,1807,4):dlm_send_nodeinfo:1294 ERROR: node 
mismatch -22, node 0

[1280548.676766] (mount.ocfs2,1807,4):dlm_try_to_join_domain:1681 ERROR: status 
= -22

[1280548.677278] (mount.ocfs2,1807,8):dlm_join_domain:1950 ERROR: status = -22

[1280548.677443] (mount.ocfs2,1807,8):dlm_register_domain:2210 ERROR: status = 
-22

[1280548.677541] (mount.ocfs2,1807,8):o2cb_cluster_connect:368 ERROR: status = 
-22

[1280548.677602] (mount.ocfs2,1807,8):ocfs2_dlm_init:2988 ERROR: status = -22

[1280548.677703] (mount.ocfs2,1807,8):ocfs2_mount_volume:1864 ERROR: status = 
-22

[1280548.677800] ocfs2: Unmounting device (252,0) on (node 0)

[1280548.677808] (mount.ocfs2,1807,8):ocfs2_fill_super:1238 ERROR: status = -22







OCFS2 configuration



cluster:

   node_count=3

   heartbeat_mode = local

   name=ocfs2



node:

ip_port = 

ip_address = 192.168.112.192

number = 0

name = integ-hm5

cluster = ocfs2

node:

ip_port = 

ip_address = 192.168.113.42

number = 1

name = integ-soho

cluster = ocfs2

node:

ip_port = 7778

ip_address = 192.168.112.115

number = 2

name = integ-hm2

cluster = ocfs2



Ceph configuration

# ceph -s

cluster 944fa0af-b7be-45a9-93ff-b9907cfaee3f

 health HEALTH_OK

 monmap e2: 3 mons at 
{integ-hm5=192.168.112.192:6789/0,integ-hm6=192.168.112.193:6789/0,integ-hm7=192.168.112.194:6789/0}

election epoch 54, quorum 0,1,2 integ-hm5,integ-hm6,integ-hm7

 osdmap e10: 2 osds: 2 up, 2 in

  pgmap v32626: 64 pgs, 1 pools, 10293 MB data, 8689 objects

14575 MB used, 23651 GB / 24921 GB avail

  64 active+clean

  client io 2047 B/s rd, 1023 B/s wr, 2 op/s





Regards

GJ









___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] How to identify MDS client failing to respond to capability release?

2015-07-30 Thread Oliver Schulz

Hello Ceph Experts,

lately, ceph status on our cluster often states:

mds0: Client CLIENT_ID failing to respond to capability release

How can I identify which client is at fault (hostname or IP address)
from the CLIENT_ID?

What could be the source of the failing to respond to capability release -
Linux kernel on the client too old? We use ceph-0.94.2 on the cluster,
and the CephFS kernel client on the clients (kernel 3.13.0 on Ubuntu 12.04
and Ubuntu 14.04). But it's possible that there's a machine with an
older Kernel around somewhere ...


Cheers and thanks,

Oliver
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Brian Kroth

Sage Weil sw...@redhat.com 2015-07-30 06:54:

As time marches on it becomes increasingly difficult to maintain proper
builds and packages for older distros.  For example, as we make the
systemd transition, maintaining the kludgey sysvinit and udev support for
centos6/rhel6 is a pain in the butt and eats up time and energy to
maintain and test that we could be spending doing more useful work.

Dropping them would mean:

- Ongoing development on master (and future versions like infernalis and
jewel) would not be tested on these distros.

- We would stop building upstream release packages on ceph.com for new
releases.

- We would probably continue building hammer and firefly packages for
future bugfix point releases.

- The downstream distros would probably continue to package them, but the
burden would be on them.  For example, if Ubuntu wanted to ship Jewel on
precise 12.04, they could, but they'd probably need to futz with the
packaging and/or build environment to make it work.

So... given that, I'd like to gauge user interest in these old distros.
Specifically,

CentOS6 / RHEL6
Ubuntu precise 12.04
Debian wheezy

Would anyone miss them?

In particular, dropping these three would mean we could drop sysvinit
entirely and focus on systemd (and continue maintaining the existing
upstart files for just a bit longer).  That would be a relief.  (The
sysvinit files wouldn't go away in the source tree, but we wouldn't worry
about packaging and testing them properly.)

Thanks!
sage


As I still haven't heard or seen anything about upstream packages for Debian 
Jessie (see also [1]), I am still running Debian Wheezy, and as that is 
supposed to be supported for another ~4 years by Debian, it would be 
very nice if there were at least stability and security fixes backported 
to the upstream ceph package repositories for that platform.


Additionally, I'll note that I'm personally likely to continue to use 
sysvinit so long as I still can, even when I am able to make the switch 
to Jessie.


Thanks,
Brian

[1] http://www.spinics.net/lists/ceph-users/msg19959.html


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Udo Lembke
Hi,
dropping Debian wheezy seems quite fast - up to now there are no packages
for jessie?!
Dropping squeeze I understand, but wheezy at this time?


Udo


On 30.07.2015 15:54, Sage Weil wrote:
 As time marches on it becomes increasingly difficult to maintain proper 
 builds and packages for older distros.  For example, as we make the 
 systemd transition, maintaining the kludgey sysvinit and udev support for 
 centos6/rhel6 is a pain in the butt and eats up time and energy to 
 maintain and test that we could be spending doing more useful work.

 Dropping them would mean:

  - Ongoing development on master (and future versions like infernalis and 
 jewel) would not be tested on these distros.

  - We would stop building upstream release packages on ceph.com for new 
 releases.

  - We would probably continue building hammer and firefly packages for 
 future bugfix point releases.

  - The downstream distros would probably continue to package them, but the 
 burden would be on them.  For example, if Ubuntu wanted to ship Jewel on 
 precise 12.04, they could, but they'd probably need to futz with the 
 packaging and/or build environment to make it work.

 So... given that, I'd like to gauge user interest in these old distros.  
 Specifically,

  CentOS6 / RHEL6
  Ubuntu precise 12.04
  Debian wheezy

 Would anyone miss them?

 In particular, dropping these three would mean we could drop sysvinit 
 entirely and focus on systemd (and continue maintaining the existing 
 upstart files for just a bit longer).  That would be a relief.  (The 
 sysvinit files wouldn't go away in the source tree, but we wouldn't worry 
 about packaging and testing them properly.)

 Thanks!
 sage
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Check networking first?

2015-07-30 Thread Quentin Hartman
Just wanted to drop a note to the group that I had my cluster go sideways
yesterday, and the root of the problem was networking again. Using iperf I
discovered that one of my nodes was only moving data at 1.7Mb / s. Moving
that node to a different switch port with a different cable has resolved
the problem. It took a while to track down because none of the server-side
error metrics for disk or network showed anything was amiss, and I didn't
think to test network performance (as suggested in another thread) until
well into the process.

Check networking first!

QH
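One way to make "check networking first" routine is a small loop that exercises every node with iperf. The NODES list below is a placeholder, and `iperf -s` must already be running on each node:

```shell
#!/bin/sh
# Print the iperf client command to run against each cluster node, so a
# single bad NIC/cable/switch port stands out. NODES is a placeholder
# list; start `iperf -s` on every node before running the client side.
NODES="node1 node2 node3"

for n in $NODES; do
    echo "iperf -c $n -t 10   # expect near line rate; 1.7 Mb/s means a bad link"
done
```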
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Check networking first?

2015-07-30 Thread Mark Nelson
Thanks for posting this!  We see issues like this more often than you'd 
think.  It's really important too because if you don't figure it out the 
natural inclination is to blame Ceph! :)


Mark

On 07/30/2015 12:50 PM, Quentin Hartman wrote:

Just wanted to drop a note to the group that I had my cluster go
sideways yesterday, and the root of the problem was networking again.
Using iperf I discovered that one of my nodes was only moving data at
1.7Mb / s. Moving that node to a different switch port with a different
cable has resolved the problem. It took a while to track down because
none of the server-side error metrics for disk or network showed
anything was amiss, and I didn't think to test network performance (as
suggested in another thread) until well into the process.

Check networking first!

QH


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Johannes Formann
I agree. For the existing stable series the distribution support should be 
continued.
But for new releases (infernalis, jewel...) I see no problem dropping the older 
versions of the distributions.

greetings

Johannes

 On 30.07.2015 at 16:39, Jon Meacham jomea...@adobe.com wrote:
 
 If hammer and firefly bugfix releases will still be packaged for these 
 distros, I don't see a problem with this. Anyone who is operating an existing 
 LTS deployment on CentOS 6, etc. will continue to receive fixes for said LTS 
 release.
 
 Jon
 
 From: ceph-users on behalf of Jan “Zviratko” Schermer
 Date: Thursday, July 30, 2015 at 8:29 AM
 To: Sage Weil
 Cc: ceph-devel, ceph-us...@ceph.com
 Subject: Re: [ceph-users] dropping old distros: el6, precise 12.04, debian 
 wheezy?
 
  I understand your reasons, but dropping support for an LTS release like this
  is not right.
  
  You should, lege artis, support every distribution the LTS release could have
  ever been installed on - that's what the LTS label is for and what we rely on
  once we build a project on top of it.
 
  CentOS 6 in particular is still very widely used and even newly installed; 
  enterprise apps rely on it to this day. Someone out there is surely maintaining 
  their LTS Ceph release on this distro, and not having tested packages will hurt 
  badly. We don't want our project managers selecting an EMC SAN over Ceph SDS
  because of such uncertainty, and you should benchmark yourself against those
  vendors, maybe...
 
 Every developer loves dropping support and concentrating on the bleeding
 edge interesting stuff but that’s not how it should work.
 
 Just my 2 cents...
 
 Jan
 
 On 30 Jul 2015, at 15:54, Sage Weil sw...@redhat.com wrote:
 
 As time marches on it becomes increasingly difficult to maintain proper 
 builds and packages for older distros.  For example, as we make the 
 systemd transition, maintaining the kludgey sysvinit and udev support for 
 centos6/rhel6 is a pain in the butt and eats up time and energy to 
 maintain and test that we could be spending doing more useful work.
 
 Dropping them would mean:
 
 - Ongoing development on master (and future versions like infernalis and 
 jewel) would not be tested on these distros.
 
 - We would stop building upstream release packages on ceph.com for new 
 releases.
 
 - We would probably continue building hammer and firefly packages for 
 future bugfix point releases.
 
 - The downstream distros would probably continue to package them, but the 
 burden would be on them.  For example, if Ubuntu wanted to ship Jewel on 
 precise 12.04, they could, but they'd probably need to futz with the 
 packaging and/or build environment to make it work.
 
 So... given that, I'd like to gauge user interest in these old distros.  
 Specifically,
 
 CentOS6 / RHEL6
 Ubuntu precise 12.04
 Debian wheezy
 
 Would anyone miss them?
 
 In particular, dropping these three would mean we could drop sysvinit 
 entirely and focus on systemd (and continue maintaining the existing 
 upstart files for just a bit longer).  That would be a relief.  (The 
 sysvinit files wouldn't go away in the source tree, but we wouldn't worry 
 about packaging and testing them properly.)
 
 Thanks!
 sage
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] questions on editing crushmap for ceph cache tier

2015-07-30 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

You are close...

I've done it by creating a new SSD root in the CRUSH map, then put the
SSD OSDs into a -ssd entry. I then created a new crush rule to choose
from the SSD root, then have the tiering pool use that rule. If you
look at the example in the document and think of ceph-osd-ssd-server-1
and ceph-osd-platter-server-1 as the same physical server with just
logical separation, you can follow the rest of the example. You will
need to modify either the ceph.conf to have the ssd OSDs have a
different CRUSH map location or write a location script to do it
automatically [1].

ceph.conf example:
[osd.70]
crush location = root=ssd host=nodez-ssd
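On the CRUSH side, the split described above (same physical servers, logical ssd/platter separation) could look roughly like this in a decompiled crushmap. Bucket names, ids, and weights here are illustrative assumptions, not taken from the thread:

```
# Illustrative only: the same two physical servers appear twice, once for
# HDDs (node0/node1 below) and once for their SSDs (ids/names invented).
host node0-ssd {
   id -4
   alg straw
   hash 0
   item osd.2 weight 0.5
}

host node1-ssd {
   id -5
   alg straw
   hash 0
   item osd.5 weight 0.5
}

root ssd {
   id -6
   alg straw
   hash 0
   item node0-ssd weight 0.5
   item node1-ssd weight 0.5
}

rule ssd_ruleset {
   ruleset 1
   type replicated
   min_size 1
   max_size 10
   step take ssd
   step chooseleaf firstn 0 type host
   step emit
}
```

For the default ruleset to stop selecting osd.2 and osd.5, they would also need to be removed from the original node0/node1 host buckets.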

To do it programmatically, you can use /usr/bin/ceph-crush-location as a
base. I extended it by finding the device, then checking with hdparm
whether its rotation rate was greater than zero. If it wasn't,
then I output the hostname with the ssd portion, otherwise just the
hostname. It was only a few lines of code, but I can't find it right
now. I was waiting for ACLs to be modified so that I could query the
data center location from our inventory management system (that was a
few months ago) and I'm still waiting.

[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-January/045673.html
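A minimal sketch of a location hook in the spirit described above, using the sysfs rotational flag rather than hdparm. The SYSFS prefix is a made-up override so the logic can be exercised without real hardware:

```shell
#!/bin/sh
# Crush location hook sketch: report "<host>-ssd" for non-rotational
# devices, plain "<host>" otherwise. SYSFS is overridable for testing.
SYSFS=${SYSFS:-/sys}

crush_host_for() {
    dev=$1   # bare device name, e.g. "sdb"
    rot=$(cat "$SYSFS/block/$dev/queue/rotational" 2>/dev/null)
    if [ "$rot" = "0" ]; then
        echo "$(hostname -s)-ssd"
    else
        hostname -s
    fi
}

# Example (hypothetical device): crush_host_for sdb
```

A real hook would also print the full `root=... host=...` location string expected by the OSD startup scripts.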

Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Wed, Jul 29, 2015 at 9:21 PM, van chaofa...@owtware.com wrote:
 Hi, list,

  Ceph cache tier seems very promising for performance.
  According to
 http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
 , I need to create a new pool based on SSD OSDs。

  Currently, I’ve two servers with several HDD-based OSDs. I mean to add one
 SSD-based OSD for each server, and then use these two OSDs to build a cache
 pool.
  But I’ve found problems editing crushmap.
  The example in the link use two new hosts to build SSD OSDs and then create
 a new ruleset take the new hosts.
  But in my environment, I do not have new servers to use.
  Can I create a ruleset choose part of OSDs in a host?
  For example, as the crushmap shown below, osd.2 and osd.5 are new added
 SSD-based OSDs, how can I create a ruleset choose these two OSDs only and
 how can I avoid default ruleset to choose osd.2 and osd.5?
  Is this possible, or I have to add a new server to deploy cache tier?
  Thanks.

 host node0 {
   id -2
   alg straw
   hash 0
   item osd.0 weight 1.0 # HDD
   item osd.1 weight 1.0 # HDD
   item osd.2 weight 0.5 # SSD
 }

 host node1 {
   id -3
   alg straw
   hash 0
   item osd.3 weight 1.0 # HDD
   item osd.4 weight 1.0 # HDD
   item osd.5 weight 0.5 # SSD
 }

 root default {
 id -1   # do not change unnecessarily
 # weight 1.560
 alg straw
 hash 0  # rjenkins1
 item node0 weight 2.5
 item node1 weight 2.5
 }

  # typical ruleset
  rule replicated_ruleset {
 ruleset 0
 type replicated
 min_size 1
 max_size 10
 step take default
 step chooseleaf firstn 0 type host
 step emit
 }



 van
 chaofa...@owtware.com




 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Recovery question

2015-07-30 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

I'm glad you were able to recover. I'm sure you learned a lot about
Ceph through the exercise (always seems to be the case for me with
things). I'll look forward to your report so that we can include it in
our operations manual, just in case.
- 
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Thu, Jul 30, 2015 at 12:41 PM, Peter Hinman  wrote:
 For the record, I have been able to recover.  Thank you very much for the
 guidance.

 I hate searching the web and finding only partial information on threads
 like this, so I'm going to document and post what I've learned as best I can
 in hopes that it will help someone else out in the future.

 --
 Peter Hinman

 On 7/29/2015 5:15 PM, Robert LeBlanc wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 If you had multiple monitors, you should recover, if possible, more than
 50% of them (they will need to form a quorum). If you can't, it is
 messy, but you can manually remove enough monitors to start a quorum.
 From /etc/ceph/ you will want the keyring and the ceph.conf at a
 minimum. The keys for the monitor I think are in the store.db, which
 will let the monitors start, but the keyring has the admin key which
 lets you manage the cluster once you get it up. rbdmap is not needed
 for recovery (only for automatically mounting RBDs at boot time); we can
 deal with that later if needed.
 - 
 Robert LeBlanc
 PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


 On Wed, Jul 29, 2015 at 4:40 PM, Peter Hinman  wrote:

 Ok - that is encouraging.  I've believe I've got data from a previous
 monitor. I see files in a store.db dated yesterday, with a
 MANIFEST-
 file that is significantly greater than the MANIFEST-07 file listed
 for
 the current monitors.

 I've actually found data for two previous monitors.  Any idea which one I
 should select? The one with the highest manifest number? The most recent
 time stamp?

 What files should I be looking for in /etc/conf?  Just the keyring and
 rbdmap files?  How important is it to use the same keyring file?

 --
 Peter Hinman


 On 7/29/2015 3:47 PM, Robert LeBlanc wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 The default is /var/lib/ceph/mon/- (/var/lib/ceph/mon/ceph-mon1 for
 me). You will also need the information from /etc/ceph/ to reconstruct
 the data. I *think* you should be able to just copy this to a new box
 with the same name and IP address and start it up.

 I haven't actually done this, so there still may be some bumps.
 - 
 Robert LeBlanc
 PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


 On Wed, Jul 29, 2015 at 3:44 PM, Peter Hinman  wrote:

 Thanks Robert -

 Where would that monitor data (database) be found?

 --
 Peter Hinman


 On 7/29/2015 3:39 PM, Robert LeBlanc wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 If you built new monitors, this will not work. You would have to
 recover the monitor data (database) from at least one monitor and
 rebuild the monitor. The new monitors would not have any information
 about pools, OSDs, PGs, etc to allow an OSD to be rejoined.
 - 
 Robert LeBlanc
 PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


 On Wed, Jul 29, 2015 at 2:46 PM, Peter Hinman  wrote:

 Hi Greg -

 So at the moment, I seem to be trying to resolve a permission error.

 === osd.3 ===
 Mounting xfs on stor-2:/var/lib/ceph/osd/ceph-3
 2015-07-29 13:35:08.809536 7f0a0262e700  0 librados: osd.3
 authentication
 error (1) Operation not permitted
 Error connecting to cluster: PermissionError
 failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf
 --name=osd.3
 --keyring=/var/lib/ceph/osd/ceph-3/keyring osd crush create-or-move
 --
 3
 3.64 host=stor-2 root=default'
 ceph-disk: Error: ceph osd start failed: Command
 '['/usr/sbin/service',
 'ceph', '--cluster', 'ceph', 'start', 'osd.3']' returned non-zero
 exit
 status 1
 ceph-disk: Error: One or more partitions failed to activate


 Is there a way to identify the cause of this PermissionError?  I've
 copied
 the client.bootstrap-osd key from the output of ceph auth list, and
 pasted
 it into /var/lib/ceph/bootstrap-osd/ceph.keyring, but that has not
 resolved
 the error.
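For what it's worth, pasting keys by hand is error-prone. A small illustrative parser (a hypothetical helper, not a Ceph tool) for pulling one entity's key out of `ceph auth list` output might look like:

```python
# Hypothetical sketch: extract one entity's key from the plain-text
# output of `ceph auth list` and format it as a keyring snippet.
def extract_keyring(auth_list_text, entity):
    """Return a keyring snippet for `entity`, or raise KeyError."""
    lines = auth_list_text.splitlines()
    for i, line in enumerate(lines):
        if line.strip() == entity:
            # The key appears on an indented "key:" line below the entity name.
            for follow in lines[i + 1:]:
                follow = follow.strip()
                if follow.startswith("key:"):
                    key = follow.split("key:", 1)[1].strip()
                    return "[%s]\n\tkey = %s\n" % (entity, key)
    raise KeyError("entity %s not found" % entity)

# Dummy sample output (the key below is a made-up value).
sample = """\
client.bootstrap-osd
        key: AQDLarVVAAAAABAAfRXyuTRsXZNxW09cBLCrWw==
        caps: [mon] allow profile bootstrap-osd
"""
print(extract_keyring(sample, "client.bootstrap-osd"))
```

In practice, `ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring` writes a properly formatted keyring directly, which avoids copy/paste mistakes entirely.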

 But it sounds like you are saying that even once I get this resolved,
 I
 have
 no hope of recovering the data?

 --
 Peter Hinman

 On 7/29/2015 1:57 PM, Gregory Farnum wrote:

 This sounds like you're trying to reconstruct a cluster after
 destroying
 the
 monitors. That is...not going to work well. The monitors define the
 cluster
 and you can't move OSDs into different clusters. We have ideas for
 how
 to
 reconstruct monitors and it can be done manually with a lot of
 hassle,
 but
 the process isn't written down and there aren't really tools to help
 with
 it.
 :/
 -Greg

 On Wed, Jul 29, 2015 at 5:48 PM Peter 

Re: [ceph-users] Elastic-sized RBD planned?

2015-07-30 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

I'll take a stab at this.

I don't think it will be a feature that you will find in Ceph, because
Ceph doesn't really understand what is going on inside
the RBD. There are too many technologies that can use RBD, so it is
not feasible to try to support something like this.

You can, however, run a service in your VM that monitors free space.
When the free space gets too low, it can call a service you write
which expands the RBD on the fly; the VM can then resize the
partition and the file system after the RBD is expanded.
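The decision logic for such a watchdog is simple; here is a minimal sketch (hypothetical, nothing like this ships with Ceph). The actual growth step would be `rbd resize --size <MB> <pool>/<image>` followed by growing the partition and filesystem inside the VM:

```python
# Hypothetical watchdog logic for the auto-grow service described above.
# Only the decision logic is shown; the actual resize is left to the
# operator (e.g. `rbd resize` plus growpart/resize2fs inside the VM).
def needs_expand(used_bytes, total_bytes, low_water=0.90):
    """True when usage crosses the low-water mark (default: 90% full)."""
    return used_bytes >= low_water * total_bytes

def next_size_gb(current_gb, step_gb=10):
    """Grow in fixed steps; the upper cap is a policy decision."""
    return current_gb + step_gb

print(needs_expand(95, 100))   # True
print(next_size_gb(100))       # 110
```

Growing in fixed steps (rather than matching usage exactly) keeps the number of resize events, and thus filesystem-grow operations inside the guest, low.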

I'm not sure how CephFS fits into all of this as it is different than
RBD. CephFS is to NFS as RBD is to SAN block storage.

If I misunderstood the question, please clarify.

Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Wed, Jul 29, 2015 at 11:32 PM, Shneur Zalman Mattern
shz...@eimsys.co.il wrote:
 Hi to all!


 Perhaps, somebody already thought about, but my Googling had no results.


 How can I create an RBD that grows on demand with the VM/client's disk space?

 Are there some options in Ceph for this?

 Is it planned?

 Is it a utopian idea?

 Or does such a client need CephFS instead?


 Thanks,

 Shneur





 
 This footnote confirms that this email message has been scanned by
 PineApp Mail-SeCure for the presence of malicious code, vandals & computer
 viruses.
 



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Crash and question

2015-07-30 Thread Khalid Ahsein
Hi,

I tried to add a new monitor, but now I am unable to use the ceph command
after running ceph-deploy mon create myhostname.

I’ve got :
# ceph status
2015-07-30 10:42:39.682038 7f7b16d90700  0 librados: client.admin 
authentication error (1) Operation not permitted
Error connecting to cluster: PermissionError

Could you help me fix this and explain how to change the keys, please?

Thanks a lot in advance, and sorry, I'm a newbie on this topic.
K

 On 30 Jul 2015, at 11:39, Khalid Ahsein kahs...@gmail.com wrote:
 
 Good morning christian,
 
 thank you for your quick response.
 so I need to upgrade to 64 GB or 96 GB to be more secure ?
 
 And sorry, I thought that 2 monitors was the minimum. We will work to add a new 
 host quickly.
 
 About osd_pool_default_min_size should I change something for the future ? 
 
 thank you again
 K
 
 On 30 Jul 2015, at 11:12, Christian Balzer ch...@gol.com wrote:
 
 
 Hello,
 
 On Thu, 30 Jul 2015 10:55:30 +0200 Khalid Ahsein wrote:
 
 Hello everybody,
 
 I’m running since 4 months a ceph cluster configured with two monitors :
 
 1 host : 16GB RAM - 12x 4TB disks - 12 OSD - 1 monitor - RAID-1 for
 system 1 host : 16GB RAM - 12x 4TB disks - 12 OSD - 1 monitor - RAID-1
 for system
 
 Too little RAM, just 2 monitors, just 2 nodes...
 
 This night I’ve encountered an issue with the crash of the first host.
 
 My first question is why with 1 host down, all my cluster was down
 (unable to do ceph status — hang command) and all my rbd was stuck
 without possibility to R/W. 
 
 Re-read the documentation, you need at least 3 monitors to survive the
 loss of one (monitor) node.
 
 Your osd_pool_default_min_size would have left you in a usable situation; 2
 nodes is really a minimal case.
 
 I rebooted the first host, and 2 hours later
 the second go down with the same issue (all rbd down and ceph hang).
 
 After reboot, here is ceph status :
 
 # ceph status
cluster 9c29f469-7bad-4b64-97bf-3fbb1bbc0c5f
 health HEALTH_ERR
3 pgs inconsistent
1 pgs peering
1 pgs stuck inactive
1 pgs stuck unclean
36 requests are blocked > 32 sec
928 scrub errors
clock skew detected on mon.drt-becks
 monmap e1: 2 mons at
 {drt-becks=172.16.21.6:6789/0,drt-marco=172.16.21.4:6789/0} election
 epoch 26, quorum 0,1 drt-marco,drt-becks osdmap e961: 24 osds: 24 up, 24
 in pgmap v2532968: 400 pgs, 1 pools, 512 GB data, 130 kobjects
1039 GB used, 88092 GB / 89177 GB avail
 393 active+clean
   3 active+clean+scrubbing+deep
   3 active+clean+inconsistent
   1 peering
  client io 57290 B/s wr, 7 op/s
 
 You will want to:
 a) fix your NTP, clock skew.
 b) check your logs about the scrub errors
 c) same for the stuck requests
 
 Also I found this error on DMESG about the crash :
 
 Message from syslogd@drt-marco at Jul 30 04:03:57 ...
 kernel:[4876519.657178] BUG: soft lockup - CPU#7 stuck for 22s!
 [btrfs-cleaner:32713]
 
 All my volumes are on BTRFS, maybe it was not a good idea ?
 
 Depending on your OS, kernel version, most definitely. 
 Plenty of BTRFS problems in the ML archives to be found.
 
 Christian
 
 -- 
 Christian Balzer        Network/Systems Engineer
 ch...@gol.com           Global OnLine Japan/Fusion Communications
 http://www.gol.com/

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Weird behaviour of cephfs with samba

2015-07-30 Thread Jörg Henne
Gregory Farnum greg@... writes:
 
 You can mount subtrees with the -r option to ceph-fuse.

Yay! That did the trick to properly mount via fuse. And I can confirm that
the directory list results are now stable both locally and via samba.

 Once you've started it up you should find a file like
 client.admin.[0-9]*.asok in (I think?) /var/run/ceph. You can run
 ceph --admin-daemon /var/run/ceph/{client_asok} status and provide
 the output to see if it's doing anything useful. Or set debug client
 = 20 in the config and then upload the client log file either
 publicly or with ceph-post-file and I'll take a quick look to see
 what's going on.

root@gru:/mnt# ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok status
{
    "metadata": {
        "ceph_sha1": "5fb85614ca8f354284c713a2f9c610860720bbf3",
        "ceph_version": "ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)",
        "entity_id": "admin",
        "hostname": "gru",
        "mount_point": "/mnt/test"
    },
    "dentry_count": 0,
    "dentry_pinned_count": 0,
    "inode_count": 1,
    "mds_epoch": 1075,
    "osd_epoch": 2391,
    "osd_epoch_barrier": 0
}

It seems like if cephfs was only ever mounted with a non-root basepath, the
cephfs root does not contain a directory entry for that base path. At least
that's what I guess from dentry_count: 0.

 Mmm, that looks like a Samba config issue which unfortunately I don't
 know much about. Perhaps you need to install these modules
 individually? It looks like our nightly tests are just getting the
 Ceph VFS installed by default. :/

Ubuntu's samba package isn't configured with --with-libcephfs and therefore
simply doesn't ship ceph.so. I am currently trying to recompile it with that
flag. If that doesn't work, that's no biggie, because the re-export of the
ceph-fuse mounted directory behaves flawlessly now.

Thanks a ton for your help!

Joerg Henne




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-30 Thread Z Zhang

 Date: Thu, 30 Jul 2015 11:37:37 +0300
 Subject: Re: [ceph-users] which kernel version can help avoid kernel client 
 deadlock
 From: idryo...@gmail.com
 To: zhangz.da...@outlook.com
 CC: chaofa...@owtware.com; ceph-users@lists.ceph.com
 
 On Thu, Jul 30, 2015 at 10:29 AM, Z Zhang zhangz.da...@outlook.com wrote:
 
  
  Subject: Re: [ceph-users] which kernel version can help avoid kernel client
  deadlock
  From: chaofa...@owtware.com
  Date: Thu, 30 Jul 2015 13:16:16 +0800
  CC: idryo...@gmail.com; ceph-users@lists.ceph.com
  To: zhangz.da...@outlook.com
 
 
  On Jul 30, 2015, at 12:48 PM, Z Zhang zhangz.da...@outlook.com wrote:
 
  We also hit the similar issue from time to time on centos with 3.10.x
  kernel. By iostat, we can see kernel rbd client's util is 100%, but no r/w
  io, and we can't umount/unmap this rbd client. After restarting OSDs, it
  will become normal.
 
 3.10.x is rather vague, what is the exact version you saw this on?  Can you
 provide syslog logs (I'm interested in dmesg)?
The kernel version should be 3.10.0.  
I don't have sys logs at hand. It is not easily reproduced, and it happened in 
a very low memory situation. We are running DB instances over rbd as storage. DB 
instances use a lot of memory when running highly concurrent rw, and after 
running for a long time, rbd might hit this problem, but not always. Enabling 
the rbd log makes our system behave strangely during our test.
I back-ported one of your fixes: 
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/block/rbd.c?id=5a60e87603c4c533492c515b7f62578189b03c9c
So far the test looks fine after a few days, but it is still under observation. So I want to 
know if there are some other fixes?
 
 Thanks,
 
 Ilya
  ___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-30 Thread Ilya Dryomov
On Thu, Jul 30, 2015 at 12:46 PM, Z Zhang zhangz.da...@outlook.com wrote:

 Date: Thu, 30 Jul 2015 11:37:37 +0300
 Subject: Re: [ceph-users] which kernel version can help avoid kernel
 client deadlock
 From: idryo...@gmail.com
 To: zhangz.da...@outlook.com
 CC: chaofa...@owtware.com; ceph-users@lists.ceph.com

 On Thu, Jul 30, 2015 at 10:29 AM, Z Zhang zhangz.da...@outlook.com
 wrote:
 
  
  Subject: Re: [ceph-users] which kernel version can help avoid kernel
  client
  deadlock
  From: chaofa...@owtware.com
  Date: Thu, 30 Jul 2015 13:16:16 +0800
  CC: idryo...@gmail.com; ceph-users@lists.ceph.com
  To: zhangz.da...@outlook.com
 
 
  On Jul 30, 2015, at 12:48 PM, Z Zhang zhangz.da...@outlook.com wrote:
 
  We also hit the similar issue from time to time on centos with 3.10.x
  kernel. By iostat, we can see kernel rbd client's util is 100%, but no
  r/w
  io, and we can't umount/unmap this rbd client. After restarting OSDs, it
  will become normal.

 3.10.x is rather vague, what is the exact version you saw this on? Can you
 provide syslog logs (I'm interested in dmesg)?

 The kernel version should be 3.10.0.

 I don't have sys logs at hand. It is not easily reproduced, and it happened
 in a very low memory situation. We are running DB instances over rbd as
 storage. DB instances use a lot of memory when running highly concurrent
 rw, and after running for a long time, rbd might hit this problem, but not
 always. Enabling the rbd log makes our system behave strangely during our test.

 I back-ported one of your fixes:
 https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/block/rbd.c?id=5a60e87603c4c533492c515b7f62578189b03c9c

 So far the test looks fine after a few days, but it is still under observation. So I want to
 know if there are some other fixes?

I'd suggest following 3.10 stable series (currently at 3.10.84).  The
fix you backported is crucial in low memory situations, so I wouldn't
be surprised if it alone fixed your problem.  (It is not in 3.10.84,
I assume it'll show up in 3.10.85 - for now just apply your backport.)

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Squeeze packages for 0.94.2

2015-07-30 Thread Christian Balzer

Hello,

On Thu, 30 Jul 2015 08:49:16 + Sebastian Köhler wrote:

 Hello,
 
 it seems that there are no Debian Squeeze packages in the repository for
 the current Hammer version. Is this an oversight or is there another
 reason those are not provided?
 
Most likely because it's 2 versions behind the curve?
Doing a fresh install on Squeeze is probably not something people consider
to be likely or wise.

Is there any reason you can't use Wheezy or Jessie?

I can think of MANY reasons for not going to Jessie, main point for me is
the lack of pacemaker, however Wheezy would make a good platform for the
time being, seemingly also with LTS support.

Christian

 Sebastian
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 


-- 
Christian BalzerNetwork/Systems Engineer
ch...@gol.com   Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to identify MDS client failing to respond to capability release?

2015-07-30 Thread Oliver Schulz

Hi John,

thanks a lot - I was indeed able to identify the machine
in question.

As for the kernel, we'll certainly update to a newer kernel
(3.16 and later 3.19) for the Ubuntu 14.04 clients. For
the 12.04 clients, we'll have to see, but these machines will
be phased out over time anyhow. I'd like to avoid the FUSE
client, as the performance is so much worse than with the
kernel client.


Thanks again!

Oliver


On 30.07.2015 09:50, John Spray wrote:

For sufficiently recent clients we do this for you (clients send some
metadata like hostname, which is used in the MDS to generate an
easier-to-understand identifier).

To do it by hand, use the admin socket command ceph daemon mds.<id>
session ls, and look out for the client IP addresses in the
address part of each entry.  By the way, there have been various other
bugs in 3.13-ish kernels, so you should consider using either a more
recent kernel or a fuse client.
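As a rough illustration, the output of `session ls` can be post-processed to map client ids to IP addresses. The `inst` field and its layout match hammer-era output, but treat that as an assumption for other versions:

```python
# Sketch: turn the JSON output of `ceph daemon mds.<id> session ls`
# into a {client_id: ip_address} map. The "inst" field is assumed to
# look like "client.4305 172.16.21.9:0/1234567" (hammer-era format).
import json

def sessions_by_client(session_ls_json):
    """Return {client_id: ip} from session-ls style JSON."""
    result = {}
    for entry in json.loads(session_ls_json):
        name, addr = entry["inst"].split()
        client_id = name.split(".", 1)[1]     # "client.4305" -> "4305"
        result[client_id] = addr.split(":", 1)[0]  # drop port/nonce
    return result

sample = '[{"inst": "client.4305 172.16.21.9:0/1234567"}]'
print(sessions_by_client(sample))   # {'4305': '172.16.21.9'}
```

With the id-to-IP map in hand, the CLIENT_ID from the health warning can be looked up directly and resolved to a hostname via reverse DNS.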

John

On 30/07/15 08:32, Oliver Schulz wrote:

Hello Ceph Experts,

lately, ceph status on our cluster often states:

mds0: Client CLIENT_ID failing to respond to capability release

How can I identify which client is at fault (hostname or IP address)
from the CLIENT_ID?

What could be the source of the failing to respond to capability
release -
Linux kernel on the client too old? We use ceph-0.94.2 on the cluster,
and the CephFS kernel client on the clients (kernel 3.13.0 on Ubuntu
12.04
and Ubuntu 14.04). But it's possible that there's a machine with an
older Kernel around somewhere ...


Cheers and thanks,

Oliver




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] A cache tier issue with rate only at 20MB/s when data move from cold pool to hot pool

2015-07-30 Thread Kenneth Waegeman



On 06/16/2015 01:17 PM, Kenneth Waegeman wrote:

Hi!

We also see this at our site:  When we cat a large file from cephfs to
/dev/null, we get about 10MB/s data transfer.  I also do not see a
system resource bottleneck.
Our cluster consists of 14 servers with each 16 disks, together forming
a EC coded pool. We also have 2SSDs per server for the cache. Running
0.94.1

Hi,

Does someone has an idea about this? Is there some debugging or testing 
we can do to find the problem here?


Thank you!

Kenneth




So we are having the same question.

Our cache pool is in writeback mode, can it help to set it in readonly
for this?

Kenneth

On 06/16/2015 12:58 PM, liukai wrote:

Hi all,
   A cache tier: 2 hot nodes with 8 SSD OSDs, and 2 cold nodes with 24 SATA
OSDs.

   The public network rate is 1Mb/s and cluster network rate is
1000Mb/s.

   Using fuse-client to access the files.
The issue is:

   When the files are in the hot pool, the copy rate is very fast.

   But when the files are only in the cold pool, the rate only reaches 20MB/s.

   I know that when files are not in the hot pool, they must be copied
from the cold pool to the hot pool first, and then from the hot pool to the
client.

   But the CPU, RAM and network do not seem to be the bottleneck. Could this
be caused by the system design?

   Are there some params to adjust to improve the rate from cold pool to
hot pool?

Thanks
2015-06-16

liukai



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Crash and question

2015-07-30 Thread Khalid Ahsein
Good morning christian,

thank you for your quick response.
so I need to upgrade to 64 GB or 96 GB to be more secure ?

And sorry, I thought that 2 monitors was the minimum. We will work to add a new 
host quickly.

About osd_pool_default_min_size should I change something for the future ? 

thank you again
K

 On 30 Jul 2015, at 11:12, Christian Balzer ch...@gol.com wrote:
 
 
 Hello,
 
 On Thu, 30 Jul 2015 10:55:30 +0200 Khalid Ahsein wrote:
 
 Hello everybody,
 
 I’m running since 4 months a ceph cluster configured with two monitors :
 
 1 host : 16GB RAM - 12x 4TB disks - 12 OSD - 1 monitor - RAID-1 for
 system 1 host : 16GB RAM - 12x 4TB disks - 12 OSD - 1 monitor - RAID-1
 for system
 
 Too little RAM, just 2 monitors, just 2 nodes...
 
 This night I’ve encountered an issue with the crash of the first host.
 
 My first question is why with 1 host down, all my cluster was down
 (unable to do ceph status — hang command) and all my rbd was stuck
 without possibility to R/W. 
 
 Re-read the documentation, you need at least 3 monitors to survive the
 loss of one (monitor) node.
 
 Your osd_pool_default_min_size would have left you in a usable situation; 2
 nodes is really a minimal case.
 
 I rebooted the first host, and 2 hours later
 the second go down with the same issue (all rbd down and ceph hang).
 
 After reboot, here is ceph status :
 
 # ceph status
cluster 9c29f469-7bad-4b64-97bf-3fbb1bbc0c5f
 health HEALTH_ERR
3 pgs inconsistent
1 pgs peering
1 pgs stuck inactive
1 pgs stuck unclean
 36 requests are blocked > 32 sec
928 scrub errors
clock skew detected on mon.drt-becks
 monmap e1: 2 mons at
 {drt-becks=172.16.21.6:6789/0,drt-marco=172.16.21.4:6789/0} election
 epoch 26, quorum 0,1 drt-marco,drt-becks osdmap e961: 24 osds: 24 up, 24
 in pgmap v2532968: 400 pgs, 1 pools, 512 GB data, 130 kobjects
1039 GB used, 88092 GB / 89177 GB avail
 393 active+clean
   3 active+clean+scrubbing+deep
   3 active+clean+inconsistent
   1 peering
  client io 57290 B/s wr, 7 op/s
 
 You will want to:
 a) fix your NTP, clock skew.
 b) check your logs about the scrub errors
 c) same for the stuck requests
 
 Also I found this error on DMESG about the crash :
 
 Message from syslogd@drt-marco at Jul 30 04:03:57 ...
 kernel:[4876519.657178] BUG: soft lockup - CPU#7 stuck for 22s!
 [btrfs-cleaner:32713]
 
 All my volumes are on BTRFS, maybe it was not a good idea ?
 
 Depending on your OS, kernel version, most definitely. 
 Plenty of BTRFS problems in the ML archives to be found.
 
 Christian
 
 -- 
 Christian Balzer        Network/Systems Engineer
 ch...@gol.com           Global OnLine Japan/Fusion Communications
 http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Crash and question

2015-07-30 Thread Khalid Ahsein
Hello everybody,

I’m running since 4 months a ceph cluster configured with two monitors :

1 host : 16GB RAM - 12x 4TB disks - 12 OSD - 1 monitor - RAID-1 for system
1 host : 16GB RAM - 12x 4TB disks - 12 OSD - 1 monitor - RAID-1 for system

This night I’ve encountered an issue with the crash of the first host.

My first question is why with 1 host down, all my cluster was down (unable to 
do ceph status — hang command) and all my rbd was stuck without possibility to 
R/W.
I rebooted the first host, and 2 hours later the second go down with the same 
issue (all rbd down and ceph hang).

After reboot, here is ceph status :

# ceph status
cluster 9c29f469-7bad-4b64-97bf-3fbb1bbc0c5f
 health HEALTH_ERR
3 pgs inconsistent
1 pgs peering
1 pgs stuck inactive
1 pgs stuck unclean
36 requests are blocked > 32 sec
928 scrub errors
clock skew detected on mon.drt-becks
 monmap e1: 2 mons at 
{drt-becks=172.16.21.6:6789/0,drt-marco=172.16.21.4:6789/0}
election epoch 26, quorum 0,1 drt-marco,drt-becks
 osdmap e961: 24 osds: 24 up, 24 in
  pgmap v2532968: 400 pgs, 1 pools, 512 GB data, 130 kobjects
1039 GB used, 88092 GB / 89177 GB avail
 393 active+clean
   3 active+clean+scrubbing+deep
   3 active+clean+inconsistent
   1 peering
  client io 57290 B/s wr, 7 op/s

Also I found this error on DMESG about the crash :

Message from syslogd@drt-marco at Jul 30 04:03:57 ...
 kernel:[4876519.657178] BUG: soft lockup - CPU#7 stuck for 22s! 
[btrfs-cleaner:32713]

All my volumes are on BTRFS, maybe it was not a good idea ?

Thanks a lot for your help, on the bottom more hardware information

K

# cat /proc/cpuinfo 
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model   : 26
model name  : Intel(R) Xeon(R) CPU   E5506  @ 2.13GHz
stepping: 5
microcode   : 0x19
cpu MHz : 2133.433
cache size  : 4096 KB
physical id : 1
siblings: 4
core id : 0
cpu cores   : 4
apicid  : 16
initial apicid  : 16
fpu : yes
fpu_exception   : yes
cpuid level : 11
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm 
constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc 
aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca 
sse4_1 sse4_2 popcnt lahf_lm dtherm tpr_shadow vnmi flexpriority ept vpid
bogomips: 4266.86
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

processor   : 1
vendor_id   : GenuineIntel
cpu family  : 6
model   : 26
model name  : Intel(R) Xeon(R) CPU   E5506  @ 2.13GHz
stepping: 5
microcode   : 0x19
cpu MHz : 2133.433
cache size  : 4096 KB
physical id : 0
siblings: 4
core id : 0
cpu cores   : 4
apicid  : 0
initial apicid  : 0
fpu : yes
fpu_exception   : yes
cpuid level : 11
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm 
constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc 
aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca 
sse4_1 sse4_2 popcnt lahf_lm dtherm tpr_shadow vnmi flexpriority ept vpid
bogomips: 4266.74
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

processor   : 2
vendor_id   : GenuineIntel
cpu family  : 6
model   : 26
model name  : Intel(R) Xeon(R) CPU   E5506  @ 2.13GHz
stepping: 5
microcode   : 0x19
cpu MHz : 2133.433
cache size  : 4096 KB
physical id : 1
siblings: 4
core id : 1
cpu cores   : 4
apicid  : 18
initial apicid  : 18
fpu : yes
fpu_exception   : yes
cpuid level : 11
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm 
constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc 
aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca 
sse4_1 sse4_2 popcnt lahf_lm dtherm tpr_shadow vnmi flexpriority ept vpid
bogomips: 4266.86
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

processor   : 3
vendor_id   : GenuineIntel
cpu family  : 6
model   : 26
model name  : Intel(R) Xeon(R) CPU   E5506  @ 2.13GHz
stepping: 5
microcode   : 0x19
cpu MHz : 2133.433
cache 

Re: [ceph-users] Crash and question

2015-07-30 Thread Christian Balzer

Hello,

On Thu, 30 Jul 2015 10:55:30 +0200 Khalid Ahsein wrote:

 Hello everybody,
 
 I’m running since 4 months a ceph cluster configured with two monitors :
 
 1 host : 16GB RAM - 12x 4TB disks - 12 OSD - 1 monitor - RAID-1 for
 system 1 host : 16GB RAM - 12x 4TB disks - 12 OSD - 1 monitor - RAID-1
 for system
 
Too little RAM, just 2 monitors, just 2 nodes...

 This night I’ve encountered an issue with the crash of the first host.
 
 My first question is why with 1 host down, all my cluster was down
 (unable to do ceph status — hang command) and all my rbd was stuck
 without possibility to R/W. 

Re-read the documentation, you need at least 3 monitors to survive the
loss of one (monitor) node.

Your osd_pool_default_min_size would have left you in a usable situation; 2
nodes is really a minimal case.
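The arithmetic behind this: monitors form a Paxos quorum, which needs a strict majority, so an even monitor count buys no extra failure tolerance. A quick sketch:

```python
# Monitor quorum math: Paxos requires a strict majority of monitors,
# so with 2 mons the loss of either one stalls the whole cluster.
def quorum(n_mons):
    """Smallest strict majority of n monitors."""
    return n_mons // 2 + 1

def survivable_failures(n_mons):
    """How many monitors may fail while a quorum still remains."""
    return n_mons - quorum(n_mons)

for n in (1, 2, 3, 5):
    print(n, quorum(n), survivable_failures(n))
# 2 mons -> quorum of 2 -> 0 failures tolerated; 3 mons -> quorum of 2 -> 1.
```

This is why odd monitor counts (3, 5, ...) are recommended: going from 2 to 3 monitors adds a whole failure domain of tolerance, while going from 3 to 4 adds none.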

 I rebooted the first host, and 2 hours later
 the second go down with the same issue (all rbd down and ceph hang).
 
 After reboot, here is ceph status :
 
 # ceph status
 cluster 9c29f469-7bad-4b64-97bf-3fbb1bbc0c5f
  health HEALTH_ERR
 3 pgs inconsistent
 1 pgs peering
 1 pgs stuck inactive
 1 pgs stuck unclean
 36 requests are blocked > 32 sec
 928 scrub errors
 clock skew detected on mon.drt-becks
  monmap e1: 2 mons at
 {drt-becks=172.16.21.6:6789/0,drt-marco=172.16.21.4:6789/0} election
 epoch 26, quorum 0,1 drt-marco,drt-becks osdmap e961: 24 osds: 24 up, 24
 in pgmap v2532968: 400 pgs, 1 pools, 512 GB data, 130 kobjects
 1039 GB used, 88092 GB / 89177 GB avail
  393 active+clean
3 active+clean+scrubbing+deep
3 active+clean+inconsistent
1 peering
   client io 57290 B/s wr, 7 op/s
 
You will want to:
a) fix your NTP, clock skew.
b) check your logs about the scrub errors
c) same for the stuck requests

 Also I found this error on DMESG about the crash :
 
 Message from syslogd@drt-marco at Jul 30 04:03:57 ...
  kernel:[4876519.657178] BUG: soft lockup - CPU#7 stuck for 22s!
 [btrfs-cleaner:32713]
 
 All my volumes are on BTRFS, maybe it was not a good idea ?
 
Depending on your OS, kernel version, most definitely. 
Plenty of BTRFS problems in the ML archives to be found.

Christian

-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Squeeze packages for 0.94.2

2015-07-30 Thread Sebastian Köhler
July 30 2015 11:05 AM, Christian Balzer ch...@gol.com wrote:
 Is there any reason you can't use Wheezy or Jessie?

Our cluster is running on trusty however nearly all our clients are running on 
squeeze and can not be updated for compatibility reasons in the short term. 
Packages of older Hammer versions were provided so we assumed future releases 
of Hammer would be provided for Debian 6.


Sebastian
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com