Re: [ceph-users] where does 100% RBD utilization come from?

2020-01-14 Thread Philip Brown
"turn that knob" up? - Original Message - From: "Wido den Hollander" To: "Philip Brown" , "ceph-users" Sent: Tuesday, January 14, 2020 12:42:48 AM Subject: Re: [ceph-users] where does 100% RBD utilization come from? The util is calculated

Re: [ceph-users] where does 100% RBD utilization come from?

2020-01-14 Thread Philip Brown
The odd thing is: the network interfaces on the gateways don't seem to be at 100% capacity, and the OSD disks don't seem to be at 100% utilization. So I'm confused about where this could be getting held up. - Original Message - From: "Wido den Hollander" To: "Philip Brown"
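(One more place to look when neither NICs nor disks appear saturated is per-OSD latency; a minimal check, not from the thread:)
    # Dump commit/apply latency per OSD to spot a slow outlier holding up the rest
    ceph osd perf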

[ceph-users] where does 100% RBD utilization come from?

2020-01-10 Thread Philip Brown
only goes as high as about 60% on a per-device basis. CPU is idle. Doesn't seem like the network interface is capped either. So... how do I improve RBD throughput? -- Philip Brown | Sr. Linux System Administrator | Medata, Inc. 5 Peters Canyon Rd Suite 250 Irvine CA 92606 Office 714.918
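(A minimal way to test whether more parallelism raises aggregate throughput; the pool and image names are placeholders, not from the thread:)
    # Drive the image with 16 concurrent 4 MiB writes and report aggregate throughput
    rbd bench --io-type write --io-size 4M --io-threads 16 --io-total 2G testpool/testimage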

Re: [ceph-users] Separate disk sets for high IO?

2019-12-16 Thread Philip Brown
Yes, I saw that, thanks. Unfortunately, that doesn't show use of "custom classes" as someone hinted at. - Original Message - From: dhils...@performair.com To: "ceph-users" Cc: "Philip Brown" Sent: Monday, December 16, 2019 3:38:49 PM Subject: RE: Separate
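(For reference, the device-class approach looks roughly like the sketch below; the OSD id, rule name, and pool name are made up for illustration:)
    # Tag the fast OSDs with a device class (built-in classes include hdd, ssd, nvme)
    ceph osd crush rm-device-class osd.12
    ceph osd crush set-device-class ssd osd.12
    # Create a replicated CRUSH rule restricted to that class
    ceph osd crush rule create-replicated fast-rule default host ssd
    # Point a pool at the rule so its PGs land only on those disks
    ceph osd pool set highio-pool crush_rule fast-rule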

Re: [ceph-users] Separate disk sets for high IO?

2019-12-16 Thread Philip Brown
Sounds very useful. Any online example documentation for this? Haven't found any so far. - Original Message - From: "Nathan Fish" To: "Marc Roos" Cc: "ceph-users" , "Philip Brown" Sent: Monday, December 16, 2019 2:07:44 PM Subject: Re: [ce

[ceph-users] Separate disk sets for high IO?

2019-12-16 Thread Philip Brown
performance group, and allocate certain RBDs to only use that set of disks. Pools only control things like the replication count and number of placement groups. I'd have to set up a whole new Ceph cluster for the type of behavior I want. Am I correct? -- Philip Brown | Sr. Linux System

Re: [ceph-users] sharing single SSD across multiple HD based OSDs

2019-12-10 Thread Philip Brown
es: /dev/sdb I had seen various claims here and there about how Ceph would just automatically figure things out, but I hadn't seen any real-world examples. Thank you for posting. - Original Message - From: "Daniel Sung" To: "Philip Brown" Cc: "ceph-users"

Re: [ceph-users] sharing single SSD across multiple HD based OSDs

2019-12-10 Thread Philip Brown
Interesting. What did the partitioning look like? - Original Message - From: "Daniel Sung" To: "Nathan Fish" Cc: "Philip Brown" , "ceph-users" Sent: Tuesday, December 10, 2019 1:21:36 AM Subject: Re: [ceph-users] sharing single SSD acro

[ceph-users] sharing single SSD across multiple HD based OSDs

2019-12-09 Thread Philip Brown
otherwise, and then go hand-manage the slicing? ceph-volume lvm create --data /dev/sdc --block.wal /dev/sdx1 ceph-volume lvm create --data /dev/sdd --block.wal /dev/sdx2 ceph-volume lvm create --data /dev/sde --block.wal /dev/sdx3 ? Can I not get away with some other, more simplified usage? -- Philip
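(Depending on the Ceph release, ceph-volume can do the slicing itself with the batch subcommand; a sketch under that assumption, reusing the device names from the question:)
    # Let ceph-volume carve the shared SSD into WAL slices, one per data disk
    ceph-volume lvm batch --bluestore /dev/sdc /dev/sdd /dev/sde --wal-devices /dev/sdx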

Re: [ceph-users] best pool usage for vmware backing

2019-12-05 Thread Philip Brown
- Original Message - From: "Paul Emmerich" To: "Philip Brown" Cc: "ceph-users" Sent: Thursday, December 5, 2019 11:08:23 AM Subject: Re: [ceph-users] best pool usage for vmware backing No, you obviously don't need multiple pools for load balancing. -- Paul E

Re: [ceph-users] best pool usage for vmware backing

2019-12-05 Thread Philip Brown
an association between pools and a theoretical preferred iSCSI gateway. - Original Message - From: "Paul Emmerich" To: "Philip Brown" Cc: "ceph-users" Sent: Thursday, December 5, 2019 8:16:09 AM Subject: Re: [ceph-users] best pool usage for vmware backing

Re: [ceph-users] best pool usage for vmware backing

2019-12-05 Thread Philip Brown
Interesting. I thought that when you defined a pool, and then defined an RBD within that pool, any auto-replication stayed within that pool? So what kind of "load balancing" do you mean? I'm confused. - Original Message - From: "Paul Emmerich" To: "Phili
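(Replication settings are indeed a per-pool property; one way to confirm, with a placeholder pool name:)
    # Replica count and CRUSH rule are both set per pool
    ceph osd pool get vmware-pool size
    ceph osd pool get vmware-pool crush_rule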

[ceph-users] best pool usage for vmware backing

2019-12-04 Thread Philip Brown
the storage be divided up? The big questions are: * 1 pool, or multiple, and why? * many RBDs, few RBDs, or a single RBD per pool? Why? -- Philip Brown | Sr. Linux System Administrator | Medata, Inc. 5 Peters Canyon Rd Suite 250 Irvine CA 92606 Office 714.918.1310 | Fax 714.918.1325 pbr
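(For what it's worth, a common starting layout looks like the sketch below; the pool name, image names, sizes, and PG count are made up and depend on the cluster:)
    # One replicated pool for all VMware-backing images
    ceph osd pool create vmware 128 128
    ceph osd pool application enable vmware rbd
    # Several images, each exported as its own iSCSI LUN / datastore
    rbd create vmware/datastore01 --size 2T
    rbd create vmware/datastore02 --size 2T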

[ceph-users] osds way ahead of gateway version?

2019-12-03 Thread Philip Brown
for this and keep it all BlueStore 2. We only use the cluster for RBDs. -- Philip Brown | Sr. Linux System Administrator | Medata, Inc. 5 Peters Canyon Rd Suite 250 Irvine CA 92606 Office 714.918.1310 | Fax 714.918.1325 pbr...@medata.com | www.medata.com
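(To see how far apart the daemons actually are, the per-daemon version breakdown can be dumped directly, assuming Luminous or later:)
    # Shows running versions grouped by mon/mgr/osd, useful when gateways lag behind
    ceph versions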