On Sat, Jul 23, 2016 at 12:17 AM, Kostis Fardelas wrote:
> Hello,
> Running the latest Hammer, I think I hit a bug with tunables more
> recent than legacy.
>
> Having been on legacy tunables for a while, I decided to experiment with
> "better" tunables. So first I went from the argonaut
On Sat, Jul 23, 2016 at 1:41 AM, Ruben Kerkhof wrote:
> Please keep the mailing list on the CC.
>
> On Fri, Jul 22, 2016 at 3:40 PM, Manuel Lausch wrote:
>> Oh, that was a copy-paste error.
>> Of course I checked my config again. Some other variations
Hi Brett,
Don't think so, thanks, but we'll shout if we find any breakage/weirdness.
Cheers,
On 23 July 2016 at 05:42, Brett Niver wrote:
> So Blair,
>
> As far as you know right now, you don't need anything from the CephFS team,
> correct?
> Thanks,
> Brett
>
>
> On Fri, Jul
Team,
Have a performance-related question on Ceph.
I know the performance of a Ceph cluster depends on many factors, such as the
type of storage servers, processors (number of processors, raw per-processor
performance), memory, network links, type of disks, journal disks, etc. On top
of the hardware
So Blair,
As far as you know right now, you don't need anything from the CephFS team,
correct?
Thanks,
Brett
On Fri, Jul 22, 2016 at 2:18 AM, Yan, Zheng wrote:
> On Fri, Jul 22, 2016 at 11:15 AM, Blair Bethwaite
> wrote:
> > Thanks Zheng,
> >
> >
Please keep the mailing list on the CC.
On Fri, Jul 22, 2016 at 3:40 PM, Manuel Lausch wrote:
> Oh, that was a copy-paste error.
> Of course I checked my config again. Some other variations of the
> configuration didn't help either.
>
> Finally I took the
It will be a smooth installation. I have recently installed Hammer on CentOS
7.
Regards
Gaurav Goyal
On Fri, Jul 22, 2016 at 7:22 AM, Ruben Kerkhof
wrote:
> On Thu, Jul 21, 2016 at 7:26 PM, Manuel Lausch
> wrote:
> > Hi,
>
> Hi,
> >
> > I try to
> FWIW, the xfs -n size=64k option is probably not a good idea.
Agreed; moreover, it's a really bad idea. You get memory-allocation slowdowns
as described in the linked post, and eventually the OSD dies.
It can be mitigated somewhat by periodically (say every 2 hours, YMMV) flushing
the
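A minimal sketch of such a periodic mitigation, assuming the flush being
described means dropping the kernel's reclaimable dentry/inode caches (the
message is cut off here, so this is an assumption rather than the poster's
exact recipe):

  # /etc/cron.d/drop-caches (hypothetical example)
  # Every 2 hours, write out dirty data and ask the kernel to reclaim
  # dentries and inodes, avoiding large-order XFS allocation stalls.
  0 */2 * * * root /bin/sync && /bin/echo 2 > /proc/sys/vm/drop_caches

Not formatting with -n size=64k in the first place avoids the problem
entirely.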
From: Frédéric Nass [mailto:frederic.n...@univ-lorraine.fr]
Sent: 22 July 2016 15:13
To: n...@fisk.me.uk
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph + vmware
On 22/07/2016 14:10, Nick Fisk wrote:
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com]
On 22/07/2016 14:10, Nick Fisk wrote:
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On
Behalf Of Frédéric Nass
Sent: 22 July 2016 11:19
To: n...@fisk.me.uk; 'Jake Young' ; 'Jan
Schermer'
Cc: ceph-users@lists.ceph.com
Hello,
Running the latest Hammer, I think I hit a bug with tunables more recent
than legacy.
Having been on legacy tunables for a while, I decided to experiment with
"better" tunables. So first I went from the argonaut profile to bobtail
and then to firefly. However, I decided to make the changes on
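For reference, the profile switches described above are applied with the
CRUSH tunables command; a minimal sketch (each step triggers data movement,
so waiting for the cluster to settle in between is assumed):

  # move from the argonaut/legacy profile to bobtail, then to firefly
  ceph osd crush tunables bobtail
  # wait for rebalancing to finish and HEALTH_OK before the next step
  ceph osd crush tunables firefly
  # inspect the tunables the cluster ended up with
  ceph osd crush show-tunables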
Hi Henrik,
Many thanks for your answer.
What settings in the ceph.conf are you referring to? These:
mon_initial_members =
mon_host =
I was under the impression that mon_initial_members is only used when the
cluster is being set up and is not used by the live cluster. Is this the case?
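For context, the two options typically look something like this in ceph.conf
(the monitor names and addresses below are made-up placeholders):

  [global]
  # monitors the cluster was bootstrapped with (mainly relevant at deploy time)
  mon_initial_members = mon1, mon2, mon3
  # addresses clients and daemons use to reach the monitors at runtime
  mon_host = 192.168.0.1, 192.168.0.2, 192.168.0.3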
Such great detail in this post, David. This will come in very handy
for people in the future.
On Thu, Jul 21, 2016 at 8:24 PM, David Turner
wrote:
> The Mon store is important and since your cluster isn't healthy, they need
> to hold onto it to make sure that when
Nothing immediately pops out at me as incorrect when looking at your
scripts. What do you mean when you say the diff is "always a certain
size"? Any chance that these images are clones? If it's the first
write to the backing object from the clone, librbd will need to copy
the full object from the
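A quick way to tell whether an image is a clone, and to break the dependency
if it is (pool and image names here are placeholders):

  # a clone reports a "parent:" line pointing at the parent snapshot
  rbd info mypool/myimage
  # optionally copy all parent data into the clone and detach it
  rbd flatten mypool/myimage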
You aren't, by chance, sharing the same RBD image between multiple
VMs, are you? An order-of-magnitude performance degradation would not
be unexpected if you have multiple clients concurrently accessing the
same image with the "exclusive-lock" feature enabled on the image.
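If that turns out to be the case, a quick way to check and, if the image
really is meant to be shared, turn the feature off (the image name is a
placeholder):

  # list the enabled features; look for "exclusive-lock" in the output
  rbd info mypool/myimage
  # disable it for images accessed read/write by several clients at once
  rbd feature disable mypool/myimage exclusive-lock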
4000 IOPS for 4K random
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Frédéric Nass
Sent: 22 July 2016 11:19
To: n...@fisk.me.uk; 'Jake Young' ; 'Jan Schermer'
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph + vmware
On
On 16-07-22 13:33, Andrei Mikhailovsky wrote:
Hello
We are planning to make changes to our IT infrastructure and as a
result the fqdn and IPs of the ceph cluster will change. Could someone
suggest the best way of dealing with this to make sure we have a
minimal ceph downtime?
Can old and
On Thu, Jul 21, 2016 at 7:26 PM, Manuel Lausch wrote:
> Hi,
Hi,
>
> I try to install Ceph Hammer on CentOS 7, but something with the RPM
> repository seems to be wrong.
>
> In my yum.repos.d/ceph.repo file I have the following configuration:
>
> [ceph]
> name=Ceph packages
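For comparison, a working Hammer repo definition for CentOS 7 usually looks
roughly like this (a sketch assuming the standard download.ceph.com layout;
adjust to your mirror):

  [ceph]
  name=Ceph packages for $basearch
  baseurl=http://download.ceph.com/rpm-hammer/el7/$basearch
  enabled=1
  gpgcheck=1
  gpgkey=https://download.ceph.com/keys/release.asc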
Hello
We are planning to make changes to our IT infrastructure and as a result the
fqdn and IPs of the ceph cluster will change. Could someone suggest the best
way of dealing with this to make sure we have a minimal ceph downtime?
Many thanks
Andrei
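The part that usually needs the most care is re-addressing the monitors. One
common approach is to edit and re-inject the monitor map; a rough sketch with
made-up monitor names and addresses, not a complete runbook:

  # extract the current monitor map from a running mon
  ceph mon getmap -o /tmp/monmap
  # replace a mon's old entry with its new address
  monmaptool --rm mon1 /tmp/monmap
  monmaptool --add mon1 10.0.0.11:6789 /tmp/monmap
  # with that mon daemon stopped, inject the edited map, then restart it
  ceph-mon -i mon1 --inject-monmap /tmp/monmap

mon_host entries in ceph.conf on clients and OSD nodes have to be updated to
match.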
On 22/07/2016 11:48, Nick Fisk wrote:
From: Frédéric Nass [mailto:frederic.n...@univ-lorraine.fr]
Sent: 22 July 2016 10:40
To: n...@fisk.me.uk; 'Jake Young' ; 'Jan
Schermer'
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph +
From: Frédéric Nass [mailto:frederic.n...@univ-lorraine.fr]
Sent: 22 July 2016 10:40
To: n...@fisk.me.uk; 'Jake Young' ; 'Jan Schermer'
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph + vmware
On 22/07/2016 10:23, Nick Fisk wrote
On 22/07/2016 10:23, Nick Fisk wrote:
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On
Behalf Of Frédéric Nass
Sent: 22 July 2016 09:10
To: n...@fisk.me.uk; 'Jake Young' ; 'Jan
Schermer'
Cc: ceph-users@lists.ceph.com
Hi,
>>> I just upgraded from Infernalis to Jewel and see an approximate 10x
>>> latency increase.
>>>
>>> Quick facts:
>>> - 3x replicated pool
>>> - 4x 2x-"E5-2690 v3 @ 2.60GHz", 128GB RAM, 6x 1.6 TB Intel S3610
>>> SSDs,
>>> - LSI3008 controller with up-to-date firmware and upstream driver,
> -Original Message-
> From: Martin Millnert [mailto:mar...@millnert.se]
> Sent: 22 July 2016 10:32
> To: n...@fisk.me.uk; 'Ceph Users'
> Subject: Re: [ceph-users] Infernalis -> Jewel, 10x+ RBD latency increase
>
> On Fri, 2016-07-22 at 08:56 +0100, Nick Fisk
On Fri, 2016-07-22 at 08:56 +0100, Nick Fisk wrote:
> >
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On
> > Behalf Of Martin Millnert
> > Sent: 22 July 2016 00:33
> > To: Ceph Users
> > Subject: [ceph-users] Infernalis
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Frédéric Nass
Sent: 22 July 2016 09:10
To: n...@fisk.me.uk; 'Jake Young' ; 'Jan Schermer'
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph + vmware
On
On 22/07/2016 09:47, Nick Fisk wrote:
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On
Behalf Of Frédéric Nass
Sent: 22 July 2016 08:11
To: Jake Young ; Jan Schermer
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users]
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Martin Millnert
> Sent: 22 July 2016 00:33
> To: Ceph Users
> Subject: [ceph-users] Infernalis -> Jewel, 10x+ RBD latency increase
>
> Hi,
>
> I just upgraded
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Frédéric Nass
Sent: 22 July 2016 08:11
To: Jake Young ; Jan Schermer
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph + vmware
On 20/07/2016 21:20, Jake Young wrote
On 20/07/2016 21:20, Jake Young wrote:
On Wednesday, July 20, 2016, Jan Schermer wrote:
> On 20 Jul 2016, at 18:38, Mike Christie wrote:
>
> On 07/20/2016 03:50 AM, Frédéric Nass wrote:
On Fri, Jul 22, 2016 at 11:15 AM, Blair Bethwaite
wrote:
> Thanks Zheng,
>
> On 22 July 2016 at 12:12, Yan, Zheng wrote:
>> We actively back-port fixes to the RHEL 7.x kernel. When RHCS 2.0 is
>> released, the RHEL kernel should contain fixes up to 3.7