> > I’m running Proxmox VE 5.2 which includes ceph version 12.2.7
> > (94ce186ac93bb28c3c444bccfefb8a31eb0748e4) luminous (stable)
> 12.2.8 is in the repositories. ;)
I forgot to reply to this part. I did notice the update afterwards and have
since updated, but performance was the same.
I redid
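(For what it's worth, a quick way to confirm the daemons actually picked up the
new version after a package update; these are generic Luminous-era commands,
not taken from the original mail, and the OSD id is a placeholder:)

  ceph versions                      # per-daemon report of running versions
  ceph tell osd.* version            # ask each OSD directly
  systemctl restart ceph-osd@<id>    # daemons keep the old binary until restarted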
-Original message-
> From: Alwin Antreich
> Sent: Thursday 13th September 2018 14:41
> To: Menno Zonneveld
> Cc: ceph-users ; Marc Roos
>
> Subject: Re: [ceph-users] Rados performance inconsistencies, lower than
> expected performance
>
> > Am I doing s
On Thu, Sep 13, 2018 at 02:17:20PM +0200, Menno Zonneveld wrote:
> Update on the subject. Warning: lengthy post, but with reproducible results
> and a workaround to get performance back to the expected level.
>
> One of the servers had a broken disk controller causing some performance
> issues on this one host
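(For anyone hitting something similar, a rough sketch of how a single bad
host/controller tends to show up; standard Ceph commands, not part of the
original post:)

  ceph osd perf           # per-OSD commit/apply latency; one host standing out is a red flag
  ceph tell osd.* bench   # crude per-OSD write benchmark to compare hosts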
-Original message-
> From: Alwin Antreich
> Sent: Thursday 6th September 2018 18:36
> To: ceph-users
> Cc: Menno Zonneveld ; Marc Roos
> Subject: Re: [ceph-users] Rados performance inconsistencies, lower than
> expected performance
>
> On Thu, Sep 06, 2018 a
Stddev Latency(s): 0.103518
Max latency(s): 1.08162
Min latency(s): 0.0218688
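(Those numbers look like output from rados bench; a typical invocation would be
something along these lines, with the pool name assumed:)

  rados bench -p testpool 60 write -b 4M -t 16 --no-cleanup
  rados bench -p testpool 60 seq -t 16
  rados bench -p testpool 60 rand -t 16
  rados -p testpool cleanup     # remove the benchmark objects afterwards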
-Original message-
> From: Marc Roos
> Sent: Thursday 6th September 2018 17:15
> To: ceph-users ; Menno Zonneveld
> Subject: RE: [ceph-users] Rados performance inconsistencies, lower than
> expected performance
the samsung sm863.
write-4k-seq: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
randwrite-4k-seq: (g=1): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
read-4k-seq: (g=2): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
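(That job list looks like the header of a multi-job fio run; roughly
reconstructed below. The filename, size, runtime and --direct flag are
placeholders/assumptions, not taken from the thread:)

  fio --ioengine=libaio --iodepth=1 --bs=4k --direct=1 --size=10G --runtime=60 \
      --filename=/tmp/fio-testfile \
      --name=write-4k-seq --rw=write \
      --name=randwrite-4k-seq --stonewall --rw=randwrite \
      --name=read-4k-seq --stonewall --rw=read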
On Thu, Sep 06, 2018 at 05:15:26PM +0200, Marc Roos wrote:
>
> It is idle, still testing, running backups at night on it.
> How do you fill up the cluster so you can test between empty and full?
> Do you have a "ceph df" from empty and full?
>
> I have done another test disabling new scrubs
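(For reference, the scrub flags and the usage report mentioned above are set and
read like this; generic commands, not quoted from Marc's mail:)

  ceph osd set noscrub
  ceph osd set nodeep-scrub
  ceph df                      # compare pool usage when "empty" vs. "full"
  # ... run the benchmark ...
  ceph osd unset noscrub
  ceph osd unset nodeep-scrub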
ure what to make of that.
Are your machines actively used perhaps? Mine are mostly idle as it's
still a test setup.
-Original message-
> From: Marc Roos
> Sent: Thursday 6th September 2018 16:23
> To: ceph-users ; Menno Zonneveld
>
> Subject: RE: [ceph-users] Rados performance inconsistencies, lower than
> expected performance
-Original message-
> From: Alwin Antreich
> Sent: Thursday 6th September 2018 16:27
> To: ceph-users
> Cc: Menno Zonneveld
> Subject: Re: [ceph-users] Rados performance inconsistencies, lower than
> expected performance
>
> Hi,
Hi!
> On Thu, Sep 06, 2018
default size = 3
>
>
> -Original message-
> > From: Marc Roos
> > Sent: Thursday 6th September 2018 15:43
> > To: ceph-users ; Menno Zonneveld
> > Subject: RE: [ceph-users] Rados performance inconsistencies, lower than
> > expected performance
>
From: Menno Zonneveld [mailto:me...@1afa.com]
Sent: Thursday 6 September 2018 15:52
To: Marc Roos; ceph-users
Subject: RE: [ceph-users] Rados performance inconsistencies, lower than
expected performance
Ah yes, 3x replicated with a minimum of 2 (min_size).
My ceph.conf is pretty bare, just in case it might be relevant
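(For reference, that replication setup corresponds to something like the
following; the pool name is an assumption on my part:)

  ceph osd pool get testpool size        # -> 3
  ceph osd pool get testpool min_size    # -> 2

  # or as cluster-wide defaults in ceph.conf:
  [global]
  osd pool default size = 3
  osd pool default min size = 2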
Test pool is 3x replicated?
-Original Message-
From: Menno Zonneveld [mailto:me...@1afa.com]
Sent: Thursday 6 September 2018 15:29
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Rados performance inconsistencies, lower than
expected performance
I've set up a Ceph cluster to test things before going into production, but I've
run into some performance issues that I cannot resolve or explain.
Hardware in use in each storage machine (x3):
- dual 10Gbit Solarflare Communications SFC9020 (Linux bond, mtu 9000; example bond config sketched below)
- dual 10Gbit EdgeSwitch 16-Port XG
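(A minimal sketch of the kind of bond/MTU setup meant above, in Debian/Proxmox
ifupdown style; interface names, bond mode and hash policy are assumptions, the
original post does not specify them:)

  auto bond0
  iface bond0 inet manual
      bond-slaves enp4s0f0 enp4s0f1
      bond-mode 802.3ad
      bond-xmit-hash-policy layer3+4
      bond-miimon 100
      mtu 9000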