Re: [ceph-users] cephfs change metadata pool?

2016-07-20 Thread Di Zhang
Update: After upgrading to Jewel and moving the journals to SSD, I no longer see the slow/blocked request warnings during normal data copying. Thank you all.

Zhang Di

On Wed, Jul 13, 2016 at 11:04 PM, Christian Balzer wrote:
>
> Hello,
>
> On Wed, 13 Jul 2016 22:47:05
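For anyone wanting to do the same, the journal move is roughly the following (FileStore-era commands; the OSD id and the SSD partition are placeholders, adapt to your own layout):

ceph osd set noout                        # keep data from rebalancing while the OSD is down
systemctl stop ceph-osd@0                 # or: service ceph stop osd.0
ceph-osd -i 0 --flush-journal             # drain the old journal into the object store
ln -sf /dev/disk/by-partuuid/SSD-PART /var/lib/ceph/osd/ceph-0/journal
ceph-osd -i 0 --mkjournal                 # create the new journal on the SSD
systemctl start ceph-osd@0
ceph osd unset noout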

Re: [ceph-users] cephfs change metadata pool?

2016-07-13 Thread Christian Balzer
Hello,

On Wed, 13 Jul 2016 22:47:05 -0500 Di Zhang wrote:
> Hi,
> I changed to only use the infiniband network. For the 4KB write, the
> IOPS doesn’t improve much.

That's mostly going to be bound by latencies (as I just wrote in the other thread), both network and internal Ceph ones.
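One quick way to see that latency floor directly is a single-threaded small-write bench (pool name "test" as used elsewhere in the thread; with only one op in flight, IOPS is roughly 1 / average latency):

rados bench -p test 10 write -b 4096 -t 1 --no-cleanup
# the reported average latency is the full per-write round trip;
# e.g. ~2.5 ms per op works out to ~400 IOPS per client
rados -p test cleanup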

Re: [ceph-users] cephfs change metadata pool?

2016-07-13 Thread Di Zhang
Hi,

I changed to only use the infiniband network. For the 4KB write, the IOPS doesn’t improve much. I also logged into the OSD nodes, and atop showed the disks are not always at 100% busy. Please check a snapshot of one node below:

DSK | sdc | busy 72% | read 20/s |
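If it helps to cross-check atop, sysstat's iostat gives the same picture per disk (sampling once a second; %util corresponds to atop's "busy" and await is the per-I/O latency in ms):

iostat -x 1
# watch %util and await for the journal and data disks while the copy is running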

Re: [ceph-users] cephfs change metadata pool?

2016-07-13 Thread Di Zhang
I also tried a 4K write bench. The IOPS is ~420. I used to have better bandwidth when I used the same network for both the cluster and the clients; now the bandwidth must be limited by the 1G ethernet. What would you suggest I do?

Thanks,

On Wed, Jul 13, 2016 at 11:37 AM, Di Zhang
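For reference, a 4K bench of that sort can be run against the same test pool as before (block size and queue depth here are just example values):

rados bench -p test 30 write -b 4096 -t 16 --no-cleanup   # 4 KiB writes, 16 in flight
rados bench -p test 30 rand -t 16                         # optional: small random reads against the same objects
rados -p test cleanup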

Re: [ceph-users] cephfs change metadata pool?

2016-07-13 Thread Di Zhang
Hello,

Sorry for the misunderstanding about IOPS. Here are some summary stats from my benchmark (does 20-30 IOPS seem normal to you?):

ceph osd pool create test 512 512
rados bench -p test 10 write --no-cleanup

Total time run:       10.480383
Total writes made:    288
Write size:
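A quick sanity check on those figures, assuming rados bench's default 4 MiB op size:

  288 writes / 10.48 s ≈ 27 ops/s
  27 ops/s x 4 MiB     ≈ 110 MB/s

So the 20-30 "IOPS" here are 20-30 full 4 MiB objects per second (roughly 100 MB/s of throughput), not 4K I/Os, which is consistent with the much higher number the 4K bench reports elsewhere in the thread.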

Re: [ceph-users] cephfs change metadata pool?

2016-07-13 Thread John Spray
On Wed, Jul 13, 2016 at 12:14 AM, Di Zhang wrote:
> Hi,
>
> Is there any way to change the metadata pool for a cephfs without losing
> any existing data? I know how to clone the metadata pool using rados cppool.
> But the filesystem still links to the original metadata
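For context, the cloning step being referred to is just an object-level copy, and on its own it does not re-point the filesystem (pool names here are the defaults used in this thread; as noted elsewhere in the thread, cppool may not capture everything either):

rados cppool cephfs_metadata cephfs_metadata_new   # copy the metadata objects to a new pool
ceph fs ls                                         # the filesystem still lists the original metadata pool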

Re: [ceph-users] cephfs change metadata pool?

2016-07-12 Thread Christian Balzer
Hello,

On Tue, 12 Jul 2016 20:57:00 -0500 Di Zhang wrote:
> I am using 10G infiniband for cluster network and 1G ethernet for public.

Hmm, very unbalanced, but I guess that's HW you already had.

> Because I don't have enough slots on the node, so I am using three files on
> the OS drive (SSD)

Re: [ceph-users] cephfs change metadata pool?

2016-07-12 Thread Di Zhang
I am using 10G infiniband for the cluster network and 1G ethernet for public. Because I don't have enough slots on the node, I am using three files on the OS drive (SSD) for journaling, which helped a lot but did not entirely solve the problem. I am quite happy with the current IOPS, which range
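For anyone wondering how file-backed journals on the OS SSD are wired up, a minimal FileStore-era ceph.conf sketch (the path, OSD id and size are illustrative only):

[osd.3]
osd journal = /var/lib/ceph/journals/osd.3.journal   # journal as a plain file on the OS SSD
osd journal size = 5120                              # in MB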

Re: [ceph-users] cephfs change metadata pool?

2016-07-12 Thread Christian Balzer
Hello,

On Tue, 12 Jul 2016 19:54:38 -0500 Di Zhang wrote:
> It's a 5 nodes cluster. Each node has 3 OSDs. I set pg_num = 512 for both
> cephfs_data and cephfs_metadata. I experienced some slow/blocked requests
> issues when I was using hammer 0.94.x and prior. So I was thinking if the
> pg_num

Re: [ceph-users] cephfs change metadata pool?

2016-07-12 Thread Di Zhang
It's a 5-node cluster. Each node has 3 OSDs. I set pg_num = 512 for both cephfs_data and cephfs_metadata. I experienced some slow/blocked request issues when I was using hammer 0.94.x and prior, so I was wondering whether the pg_num is too large for metadata. I just upgraded the cluster to Jewel
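For what it's worth, the usual rule of thumb for the total PG count (OSD count taken from the numbers above, assuming the default replica size of 3; the split across pools is a judgment call):

  (15 OSDs x 100) / 3 replicas ≈ 500  ->  round to ~512 PGs for the whole cluster

so 512 + 512 across the two CephFS pools is roughly double that target, and the metadata pool, which stores comparatively little data, normally only needs a small fraction of the total.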

Re: [ceph-users] cephfs change metadata pool?

2016-07-12 Thread Gregory Farnum
I'm not at all sure that rados cppool actually captures everything (it might). Doug has been working on some similar stuff for disaster recovery testing and can probably walk you through moving over. But just how large *is* your metadata pool in relation to others? Having a too-large pool doesn't
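Pool sizes are easy to compare from the command line (output columns vary a bit between releases):

ceph df detail   # per-pool usage and object counts
rados df         # the same view from the rados side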

[ceph-users] cephfs change metadata pool?

2016-07-12 Thread Di Zhang
Hi,

Is there any way to change the metadata pool for a cephfs without losing any existing data? I know how to clone the metadata pool using rados cppool. But the filesystem still links to the original metadata pool no matter what you name it. The motivation here is to decrease the pg_num
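For background, the reason a straight pg_num change does not work here is that releases of that era only allow pg_num to be increased, never reduced, which is what forces the copy-to-a-new-pool idea (pool name as above):

ceph osd pool get cephfs_metadata pg_num      # show the current value
ceph osd pool set cephfs_metadata pg_num 64   # refused: pg_num can only be raised on these releases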