Quoting Robert Sander (r.san...@heinlein-support.de):
> On 07.12.18 18:33, Scharfenberg, Buddy wrote:
>
> > We have 3 nodes set up, 1 with several large drives, 1 with a handful of
> > small ssds, and 1 with several nvme drives.
>
> This is a very unusual setup. Do you really have all your HDDs in one node,
> the SSDs in another and NVMe in the third?
I think this is an April Fools' Day joke from someone who did not set his
system time correctly.
-Original Message-
From: Robert Sander [mailto:r.san...@heinlein-support.de]
Sent: 10 December 2018 09:49
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Performance Problems
On 07.12.18 18:33, Scharfenberg, Buddy wrote:
> We have 3 nodes set up, 1 with several large drives, 1 with a handful of
> small ssds, and 1 with several nvme drives.
This is a very unusual setup. Do you really have all your HDDs in one
node, the SSDs in another and NVMe in the third?
How do you ...
> From: Paul Emmerich [mailto:paul.emmer...@croit.io]
> Sent: Friday, December 07, 2018 12:31 PM
> To: Scharfenberg, Buddy
> Cc: Ceph Users
> Subject: Re: [ceph-users] Performance Problems
>
> What are the exact parameters you are using? I often see people using dd in a
> way that effectively just measures write latency.
`dd if=/dev/zero of=/mnt/test/writetest bs=1M count=1000 oflag=dsync`
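For comparison, a parallel benchmark such as rados bench avoids the single-threaded synchronous-write pattern that dd with oflag=dsync produces. A minimal sketch, assuming a throwaway pool named testpool (the pool name is a placeholder, and the write test leaves objects behind unless cleaned up):

# 10-second write test with 4 MB objects and 16 concurrent operations
rados bench -p testpool 10 write -b 4M -t 16 --no-cleanup
# sequential read test against the objects written above
rados bench -p testpool 10 seq -t 16
# remove the benchmark objects afterwards
rados -p testpool cleanup

Comparing the aggregate bandwidth reported there with the dd number should show how much of the gap is simply the serialized dsync pattern.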
-Original Message-
From: Paul Emmerich [mailto:paul.emmer...@croit.io]
Sent: Friday, December 07, 2018 12:31 PM
To: Scharfenberg, Buddy
Cc: Ceph Users
Subject: Re: [ceph-users] Performance Problems
What are the exact parameters you are using? I often see people using dd in a
way that effectively just measures write latency.
-Original Message-
> From: Paul Emmerich [mailto:paul.emmer...@croit.io]
> Sent: Friday, December 07, 2018 11:52 AM
> To: Scharfenberg, Buddy
> Cc: Ceph Users
> Subject: Re: [ceph-users] Performance Problems
>
> How are you measuring the performance when using CephFS?
>
I'm measuring with dd, writing from /dev/zero with a 1 MB block size 1000 times,
to get client write speeds.
-Original Message-
From: Paul Emmerich [mailto:paul.emmer...@croit.io]
Sent: Friday, December 07, 2018 11:52 AM
To: Scharfenberg, Buddy
Cc: Ceph Users
Subject: Re: [ceph-users] Performance Problems
How are you measuring the performance when using CephFS?
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Fri, 7 Dec 2018 at 18:34, Scharfenberg, Buddy wrote:
Hello all,
I'm new to Ceph management, and we're having some performance issues with a
basic cluster we've set up.
We have 3 nodes set up, 1 with several large drives, 1 with a handful of small
ssds, and 1 with several nvme drives. We have 46 OSDs in total, a healthy FS
being served out, and 1 ...
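With every HDD on one host, every SSD on another and every NVMe drive on the third, the default CRUSH rule (one replica per host) makes each write wait for the slowest device class. A hedged sketch of how to confirm the layout and rules on such a cluster; output is omitted and nothing here is specific to this setup:

# which OSDs sit on which host, and their weights
ceph osd tree
# replication size and CRUSH rule per pool
ceph osd pool ls detail
# failure domain used by each rule (typically "host")
ceph osd crush rule dump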
On Friday, 12 April 2013 at 19:45 +0200, Olivier Bonvalet wrote:
> On Friday, 12 April 2013 at 10:04 -0500, Mark Nelson wrote:
On 04/11/2013 07:25 PM, Ziemowit Pierzycki wrote:
No, I'm not using RDMA in this configuration since this will eventually get
deployed to production with 10G ethernet (yes RDMA is faster). I would
prefer Ceph because it has a storage driver built into OpenNebula which my
company is using and as you mentioned individual drives.
I'm not sure what t
With GlusterFS are you using the native RDMA support?
Ceph and Gluster tend to prefer pretty different disk setups too. AFAIK
RH still recommends RAID6 behind each brick while we do better with
individual disks behind each OSD. You might want to watch the OSD admin
socket and see if operation ...
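A hedged sketch of querying the OSD admin socket as suggested; the socket path and OSD id are examples and will differ per host and Ceph version:

# requests the OSD is currently processing, with their ages
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight
# internal performance counters (journal, op queue and commit latencies)
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump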
When executing ceph -w I see the following warning:
2013-04-09 22:38:07.288948 osd.2 [WRN] slow request 30.180683 seconds old,
received at 2013-04-09 22:37:37.108178: osd_op(client.4107.1:9678
102.01df [write 0~4194304 [6@0]] 0.4e208174 snapc 1=[])
currently waiting for subops from [0]
Neither made a difference. I also have a glusterFS cluster with two nodes
in replicating mode residing on 1TB drives:
[root@triton speed]# dd conv=fdatasync if=/dev/zero of=/mnt/speed/test.out
bs=512k count=1
1+0 records in
1+0 records out
524288 bytes (5.2 GB) copied, 43.573 s, 1
I'm running DDR in this setup but I also have QDR setup.
On Tue, Apr 9, 2013 at 2:31 AM, Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com> wrote:
2013/4/8 Ziemowit Pierzycki
> Hi,
>
> I have a 3 node SSD-backed cluster connected over infiniband (16K MTU) and
> here is the performance I am seeing:
>
Which kind of IB ? SDR, DDR, QDR ... ?
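If it is not obvious from the hardware, the negotiated link rate answers this; a rough sketch using the standard InfiniBand diagnostics (for a 4x port, roughly 10 Gb/s = SDR, 20 Gb/s = DDR, 40 Gb/s = QDR):

# show HCA ports and their rate, e.g. "Rate: 20" for 4x DDR
ibstat
# alternative per-port summary including link state and rate
ibstatus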
On 04/08/2013 04:12 PM, Ziemowit Pierzycki wrote:
There is one SSD in each node. IPoIB performance is about 7 Gbps between
each host. CephFS is mounted via the kernel client. Ceph version
is ceph-0.56.3-1. I have a 1GB journal on the same drive as the OSD but on
a separate file system split via LVM.
Here is output of another test with fdatasync: ...
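For reference, a minimal sketch of what a FileStore journal like the one described above might look like in ceph.conf; the LVM path is a hypothetical example, not taken from this cluster:

[osd]
    ; 1 GB journal, matching the size mentioned above
    osd journal size = 1024

[osd.0]
    ; hypothetical logical volume carved from the same SSD as the data file system
    osd journal = /dev/vg_ssd0/journal-osd0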
Hi,
How many drives? Have you tested your IPoIB performance with iperf? Is
this CephFS with the kernel client? What version of Ceph? How are your
journals configured? etc. It's tough to make any recommendations
without knowing more about what you are doing.
Also, please use conv=fdatasync.
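A hedged sketch of the IPoIB check being asked for, using iperf between two of the nodes (the host name is a placeholder for the IPoIB address of the other node):

# on one node, start the server
iperf -s
# on another node, run a 10-second test over the IPoIB interface
iperf -c node1-ib -t 10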
Hi,
The first test was writing a 500 MB file and was clocked at 1.2 GB/s. The
second test was writing a 5000 MB file at 17 MB/s. The third test was
reading the file at ~400 MB/s.
On Mon, Apr 8, 2013 at 2:56 PM, Gregory Farnum wrote:
More details, please. You ran the same test twice and performance went
up from 17.5MB/s to 394MB/s? How many drives in each node, and of what
kind?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Mon, Apr 8, 2013 at 12:38 PM, Ziemowit Pierzycki wrote:
Hi,
I have a 3 node SSD-backed cluster connected over infiniband (16K MTU) and
here is the performance I am seeing:
[root@triton temp]# !dd
dd if=/dev/zero of=/mnt/temp/test.out bs=512k count=1000
1000+0 records in
1000+0 records out
524288000 bytes (524 MB) copied, 0.436249 s, 1.2 GB/s
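For what it's worth, a hedged variant of the same test that avoids measuring the page cache, along the lines of the conv=fdatasync suggestion elsewhere in the thread (same paths as above; dropping caches requires root):

# write test: fdatasync flushes the data before dd reports a rate
dd if=/dev/zero of=/mnt/temp/test.out bs=512k count=1000 conv=fdatasync
# drop the page cache so a read test actually hits the cluster
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/temp/test.out of=/dev/null bs=512k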