Re: [ceph-users] Recovery stuck after adjusting to recent tunables

2016-07-22 Thread Brad Hubbard
On Sat, Jul 23, 2016 at 12:17 AM, Kostis Fardelas wrote: > Hello, being on latest Hammer, I think I hit a bug with tunables more recent than legacy. Having been on legacy tunables for a while, I decided to experiment with "better" tunables. So first I went from argonaut

Re: [ceph-users] Try to install ceph hammer on CentOS7

2016-07-22 Thread Brad Hubbard
On Sat, Jul 23, 2016 at 1:41 AM, Ruben Kerkhof wrote: > Please keep the mailing list on the CC. > On Fri, Jul 22, 2016 at 3:40 PM, Manuel Lausch wrote: >> Oh, this was a copy failure. Of course I checked my config again. Some other variations

Re: [ceph-users] CephFS Samba VFS RHEL packages

2016-07-22 Thread Blair Bethwaite
Hi Brett, Don't think so, thanks, but we'll shout if we find any breakage/weirdness. Cheers, On 23 July 2016 at 05:42, Brett Niver wrote: > So Blair, as far as you know right now, you don't need anything from the CephFS team, correct? Thanks, Brett > On Fri, Jul

[ceph-users] Ceph performance calculator

2016-07-22 Thread EP Komarla
Team, I have a performance-related question on Ceph. I know the performance of a Ceph cluster depends on many factors: the type of storage servers, the processors (number of processors and their raw performance), memory, network links, the type of disks, journal disks, etc. On top of the hardware
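There is no official calculator, but a commonly cited back-of-envelope estimate for filestore clusters is: aggregate client write throughput is roughly (number of OSD disks x per-disk throughput) / replica count / 2, the final 2 accounting for the journal double-write when journals share the data disk. A minimal sketch with illustrative numbers (a rule of thumb, not a measurement):

    # All figures below are assumptions for illustration only
    osds=40          # total OSD data disks
    disk_mb=120      # sustained MB/s per disk
    size=3           # pool replica count
    # Divide by 2 for the filestore journal double-write (colocated journals)
    echo "~$(( osds * disk_mb / size / 2 )) MB/s aggregate client writes"   # -> ~800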

Re: [ceph-users] CephFS Samba VFS RHEL packages

2016-07-22 Thread Brett Niver
So Blair, As far as you know right now, you don't need anything from the CephFS team, correct? Thanks, Brett On Fri, Jul 22, 2016 at 2:18 AM, Yan, Zheng wrote: > On Fri, Jul 22, 2016 at 11:15 AM, Blair Bethwaite wrote: >> Thanks Zheng,

Re: [ceph-users] Try to install ceph hammer on CentOS7

2016-07-22 Thread Ruben Kerkhof
Please keep the mailing list on the CC. On Fri, Jul 22, 2016 at 3:40 PM, Manuel Lausch wrote: > Oh, this was a copy failure. Of course I checked my config again. Some other variations of configuration didn't help either. Finally I took the

Re: [ceph-users] Try to install ceph hammer on CentOS7

2016-07-22 Thread Gaurav Goyal
It will be a smooth installation; I recently installed hammer on CentOS 7. Regards, Gaurav Goyal On Fri, Jul 22, 2016 at 7:22 AM, Ruben Kerkhof wrote: > On Thu, Jul 21, 2016 at 7:26 PM, Manuel Lausch wrote: >> Hi, > Hi, >> I try to

Re: [ceph-users] Terrible RBD performance with Jewel

2016-07-22 Thread Anthony D'Atri
> FWIW, the xfs -n size=64k option is probably not a good idea. Agreed; moreover, it's a really bad idea. You get memory allocation slowdowns as described in the linked post, and eventually the OSD dies. It can be mitigated somewhat by periodically (say every 2 hours, ymmv) flushing the
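The message is truncated above, so the exact flush mechanism is an assumption, but the usual mitigation is dropping the reclaimable slab cache so XFS allocations stop stalling. A hypothetical cron entry:

    # Assumed /etc/cron.d/drop-caches entry: every 2 hours, sync then drop
    # reclaimable slab objects (dentries and inodes) via vm.drop_caches.
    0 */2 * * * root /bin/sync && /bin/echo 2 > /proc/sys/vm/drop_caches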

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Nick Fisk
From: Frédéric Nass [mailto:frederic.n...@univ-lorraine.fr] Sent: 22 July 2016 15:13 To: n...@fisk.me.uk Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] ceph + vmware On 22/07/2016 14:10, Nick Fisk wrote: From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com]

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Frédéric Nass
On 22/07/2016 14:10, Nick Fisk wrote: From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Frédéric Nass Sent: 22 July 2016 11:19 To: n...@fisk.me.uk; 'Jake Young'; 'Jan Schermer' Cc: ceph-users@lists.ceph.com

[ceph-users] Recovery stuck after adjusting to recent tunables

2016-07-22 Thread Kostis Fardelas
Hello, being on latest Hammer, I think I hit a bug with tunables more recent than legacy. Having been on legacy tunables for a while, I decided to experiment with "better" tunables. So first I went from the argonaut profile to bobtail and then to firefly. However, I decided to make the changes on
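For reference, stepping between the profiles described above is done with the crush tunables command (real ceph CLI; each step can trigger substantial data movement, so let recovery settle in between):

    # Move one profile at a time
    ceph osd crush tunables bobtail
    ceph osd crush tunables firefly
    # Inspect the resulting tunables
    ceph osd crush show-tunables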

Re: [ceph-users] change of dns names and IP addresses of cluster members

2016-07-22 Thread Andrei Mikhailovsky
Hi Henrik, Many thanks for your answer. Which settings in ceph.conf are you referring to? These: "mon_initial_members =" and "mon_host ="? I was under the impression that mon_initial_members is only used when the cluster is being set up and is not used by the live cluster. Is this the case?
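For context, the two ceph.conf settings under discussion look like this (values are placeholders). mon_host is what clients use to find the monitor quorum on every connect, while mon_initial_members only matters when the cluster is first bootstrapped:

    [global]
    mon_initial_members = mon1, mon2, mon3
    mon_host = 192.168.0.11, 192.168.0.12, 192.168.0.13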

Re: [ceph-users] Uncompactable Monitor Store at 69GB -- Re: Cluster in warn state, not sure what to do next.

2016-07-22 Thread Brian ::
Such great detail in this post, David. This will come in very handy for people in the future. On Thu, Jul 21, 2016 at 8:24 PM, David Turner wrote: > The Mon store is important, and since your cluster isn't healthy, they need to hold onto it to make sure that when
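For readers landing on this thread later, the usual compaction knobs are these (real commands and options; as the thread notes, they may not reclaim much while the cluster is unhealthy and the monitors must retain history):

    # Ask a monitor to compact its store online
    ceph tell mon.a compact
    # Or compact at daemon startup, via ceph.conf:
    [mon]
    mon_compact_on_start = true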

Re: [ceph-users] rbd export-diff question

2016-07-22 Thread Jason Dillaman
Nothing immediately pops out at me as incorrect when looking at your scripts. What do you mean when you say the diff is "always a certain size"? Any chance that these images are clones? If it's the first write to the backing object from the clone, librbd will need to copy the full object from the
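A typical incremental export/import cycle such scripts follow looks like this (real rbd subcommands; pool, image, and snapshot names are placeholders):

    # Take a new snapshot and export only the changes since the last one
    rbd snap create rbd/vm-disk@snap2
    rbd export-diff --from-snap snap1 rbd/vm-disk@snap2 vm-disk.snap1-snap2.diff
    # Replay the diff onto a backup copy that already has snap1
    rbd import-diff vm-disk.snap1-snap2.diff backup/vm-disk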

Re: [ceph-users] Infernalis -> Jewel, 10x+ RBD latency increase

2016-07-22 Thread Jason Dillaman
You aren't, by chance, sharing the same RBD image between multiple VMs, are you? An order-of-magnitude performance degradation would not be unexpected if you have multiple clients concurrently accessing the same image with the "exclusive-lock" feature enabled on the image. 4000 IOPS for 4K random
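Checking for, and if the image really is shared, removing the feature can be done like this (real commands; the image name is a placeholder). Note that object-map and fast-diff depend on exclusive-lock and would have to be disabled first if enabled:

    # See which features the image has enabled
    rbd info rbd/shared-disk
    # Disable dependent features first if present, then the lock itself
    rbd feature disable rbd/shared-disk object-map fast-diff
    rbd feature disable rbd/shared-disk exclusive-lock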

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Nick Fisk
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Frédéric Nass Sent: 22 July 2016 11:19 To: n...@fisk.me.uk; 'Jake Young'; 'Jan Schermer' Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] ceph + vmware Le

Re: [ceph-users] change of dns names and IP addresses of cluster members

2016-07-22 Thread Henrik Korkuc
On 16-07-22 13:33, Andrei Mikhailovsky wrote: Hello, We are planning to make changes to our IT infrastructure, and as a result the FQDNs and IPs of the Ceph cluster will change. Could someone suggest the best way of dealing with this to make sure we have minimal Ceph downtime? Can old and

Re: [ceph-users] Try to install ceph hammer on CentOS7

2016-07-22 Thread Ruben Kerkhof
On Thu, Jul 21, 2016 at 7:26 PM, Manuel Lausch wrote: > Hi, Hi, > I try to install ceph hammer on centos7 but something with the RPM repository seems to be wrong. In my yum.repos.d/ceph.repo file I have the following configuration: [ceph] name=Ceph packages
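For comparison (the poster's actual stanza is cut off above), the repo stanza the upstream docs gave for hammer on el7 looked roughly like this, with the baseurl and key served from download.ceph.com:

    [ceph]
    name=Ceph packages for $basearch
    baseurl=https://download.ceph.com/rpm-hammer/el7/$basearch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc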

[ceph-users] change of dns names and IP addresses of cluster members

2016-07-22 Thread Andrei Mikhailovsky
Hello, We are planning to make changes to our IT infrastructure, and as a result the FQDNs and IPs of the Ceph cluster will change. Could someone suggest the best way of dealing with this to make sure we have minimal Ceph downtime? Many thanks, Andrei

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Frédéric Nass
On 22/07/2016 11:48, Nick Fisk wrote: From: Frédéric Nass [mailto:frederic.n...@univ-lorraine.fr] Sent: 22 July 2016 10:40 To: n...@fisk.me.uk; 'Jake Young'; 'Jan Schermer' Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] ceph +

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Nick Fisk
From: Frédéric Nass [mailto:frederic.n...@univ-lorraine.fr] Sent: 22 July 2016 10:40 To: n...@fisk.me.uk; 'Jake Young'; 'Jan Schermer' Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] ceph + vmware On 22/07/2016 10:23, Nick Fisk wrote

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Frédéric Nass
On 22/07/2016 10:23, Nick Fisk wrote: From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Frédéric Nass Sent: 22 July 2016 09:10 To: n...@fisk.me.uk; 'Jake Young'; 'Jan Schermer' Cc: ceph-users@lists.ceph.com

[ceph-users] Re: Infernalis -> Jewel, 10x+ RBD latency increase

2016-07-22 Thread Yoann Moulin
Hi, >>> I just upgraded from Infernalis to Jewel and see an approximate 10x latency increase. Quick facts: - 3x replicated pool - 4x 2x-"E5-2690 v3 @ 2.60GHz", 128GB RAM, 6x 1.6 TB Intel S3610 SSDs - LSI3008 controller with up-to-date firmware and upstream driver,
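A fio job along these lines would measure the RBD latency being compared (an assumption; the thread does not show the original job file, and names are placeholders):

    # Requires fio built with the rbd engine
    fio --name=rbd-4k-randwrite --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=bench --rw=randwrite --bs=4k --iodepth=1 \
        --direct=1 --runtime=60 --time_based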

Re: [ceph-users] Infernalis -> Jewel, 10x+ RBD latency increase

2016-07-22 Thread Nick Fisk
> -----Original Message----- From: Martin Millnert [mailto:mar...@millnert.se] Sent: 22 July 2016 10:32 To: n...@fisk.me.uk; 'Ceph Users' Subject: Re: [ceph-users] Infernalis -> Jewel, 10x+ RBD latency increase > On Fri, 2016-07-22 at 08:56 +0100, Nick Fisk

Re: [ceph-users] Infernalis -> Jewel, 10x+ RBD latency increase

2016-07-22 Thread Martin Millnert
On Fri, 2016-07-22 at 08:56 +0100, Nick Fisk wrote: >> -----Original Message----- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Martin Millnert Sent: 22 July 2016 00:33 To: Ceph Users Subject: [ceph-users] Infernalis

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Nick Fisk
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Frédéric Nass Sent: 22 July 2016 09:10 To: n...@fisk.me.uk; 'Jake Young'; 'Jan Schermer' Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] ceph + vmware Le

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Frédéric Nass
On 22/07/2016 09:47, Nick Fisk wrote: From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Frédéric Nass Sent: 22 July 2016 08:11 To: Jake Young; Jan Schermer Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users]

Re: [ceph-users] Infernalis -> Jewel, 10x+ RBD latency increase

2016-07-22 Thread Nick Fisk
> -----Original Message----- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Martin Millnert Sent: 22 July 2016 00:33 To: Ceph Users Subject: [ceph-users] Infernalis -> Jewel, 10x+ RBD latency increase > Hi, I just upgraded

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Nick Fisk
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Frédéric Nass Sent: 22 July 2016 08:11 To: Jake Young; Jan Schermer Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] ceph + vmware On 20/07/2016 21:20, Jake Young

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Frédéric Nass
On 20/07/2016 21:20, Jake Young wrote: On Wednesday, July 20, 2016, Jan Schermer wrote: > On 20 Jul 2016, at 18:38, Mike Christie wrote: >> On 07/20/2016 03:50 AM, Frédéric Nass wrote:

Re: [ceph-users] CephFS Samba VFS RHEL packages

2016-07-22 Thread Yan, Zheng
On Fri, Jul 22, 2016 at 11:15 AM, Blair Bethwaite wrote: > Thanks Zheng, > On 22 July 2016 at 12:12, Yan, Zheng wrote: >> We actively back-port fixes to the RHEL 7.x kernel. When RHCS 2.0 releases, the RHEL kernel should contain fixes up to 3.7
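For anyone looking for the configuration side rather than the packages, a minimal smb.conf share using the CephFS VFS module the thread is about looks like this (real vfs_ceph parameters; the path and cephx user are placeholder assumptions):

    [cephfs]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no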