[ceph-users] Re: The confusing output of ceph df command

2020-09-10 Thread Igor Fedotov
Norman, >default-fs-data0    9 374 TiB 1.48G 939 TiB 74.71   212 TiB — given the above numbers, the 'default-fs-data0' pool has an average object size of around 256K (374 TiB / 1.48G objects). Are you sure that the absolute majority of your objects in this pool are 4M?
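For reference, the arithmetic behind that estimate, spelled out (rough numbers, base-2 units as ceph df reports them):

    374 TiB stored / 1.48G objects
      = (374 * 2^40 bytes) / (1.48 * 10^9 objects)
      ≈ 4.11e14 / 1.48e9
      ≈ 278 kB ≈ 271 KiB per object

so the average object is on the order of 256K, not 4M.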

[ceph-users] ceph config dump question

2020-09-10 Thread Dave Baukus
A naive ceph user asks: I have a 3-node cluster with 72 BlueStore OSDs running on Ubuntu 20.04, Ceph Octopus 15.2.4. The cluster is configured via ceph-ansible stable-5.0. No configuration changes have been made outside of what is generated by ceph-ansible. I expected "ceph config
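For anyone following this thread, the relevant commands are sketched below (standard Octopus CLI, nothing cluster-specific); note that options ceph-ansible writes into ceph.conf generally do not show up in the mon config database, only in the per-daemon view:

    ceph config dump                        # options stored centrally in the mon config database
    ceph config show osd.0                  # effective non-default options of a running daemon, with their source
    ceph config show-with-defaults osd.0    # the same, plus compiled-in defaults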

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-10 Thread George Shuklin
Latency from the client side is not an issue; it just combines with other latencies in the stack. The more the client lags, the easier it is for the cluster. What I'm talking about here is slightly different: when you want to establish baseline performance for the OSD daemon (disregarding the block device and

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-10 Thread vitalif
Yeah, of course... but RBD is primarily used for KVM VMs, so the results from a VM are the thing that real clients see. So they do mean something... :) I know. I tested fio before testing ceph with fio. On the null ioengine fio can handle up to 14M IOPS (on my dusty lab's R220). On blk_null it gets

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-10 Thread George Shuklin
I know. I tested fio before testing ceph with fio. On the null ioengine fio can handle up to 14M IOPS (on my dusty lab's R220). On blk_null it gets down to 2.4-2.8M IOPS. On brd it drops to a sad 700k IOPS. BTW, never run synthetic high-performance benchmarks on KVM. My old server with

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-10 Thread Виталий Филиппов
By the way, DON'T USE rados bench. It's an incorrect benchmark. ONLY use fio. On September 10, 2020, 22:35:53 GMT+03:00, vita...@yourcmc.ru wrote: >Hi George > >Author of Ceph_performance here! :) > >I suspect you're running tests with 1 PG. Every PG's requests are >always serialized, that's why the OSD

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-10 Thread vitalif
Hi George, Author of Ceph_performance here! :) I suspect you're running tests with 1 PG. Every PG's requests are always serialized, that's why the OSD doesn't utilize all threads with 1 PG. You need something like 8 PGs per OSD. More than 8 usually doesn't improve results. Also note that read
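A minimal sketch of the kind of test this advice implies; the pool name, PG count, and image size below are placeholders, not taken from the thread:

    # test pool with enough PGs that a single OSD isn't serialized on one PG (aim for ~8 PGs per OSD)
    ceph osd pool create fio-test 256
    rbd pool init fio-test
    rbd create fio-test/bench --size 10G
    # drive the image with fio's rbd engine rather than rados bench
    fio --name=randwrite --ioengine=rbd --clientname=admin --pool=fio-test --rbdname=bench \
        --rw=randwrite --bs=4k --iodepth=128 --numjobs=1 --runtime=60 --time_based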

[ceph-users] Re: slow "rados ls"

2020-09-10 Thread Stefan Kooman
On 2020-09-01 10:51, Marcel Kuiper wrote: > As a matter of fact we did. We doubled the storage nodes from 25 to 50. > Total osds now 460. > > You want to share your thoughts on that? OK, I'm really curious if you observed the following behaviour: During, or shortly after the rebalance, did you

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-10 Thread George Shuklin
Thank you! I know that article, but they promise 6-core use per OSD, and I got barely over three, and all this in a totally synthetic environment with no SSD to blame (brd is more than fast and has very consistent latency under any kind of load). On Thu, Sep 10, 2020, 19:39 Marc Roos wrote: >

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-10 Thread Marc Roos
Hi George, Very interesting, and also a somewhat expected result. Some messages posted here already indicate that buying expensive top-of-the-line hardware does not really result in any performance increase above a certain level. Vitaliy has documented something similar [1] [1]

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-10 Thread Mark Nelson
On 9/10/20 11:03 AM, George Shuklin wrote: I'm creating a benchmark suite for Ceph. While benchmarking the benchmark itself, I've checked how fast ceph-osd works. I decided to skip all the 'SSD mess' and use brd (block RAM disk, modprobe brd) as the underlying storage. Brd itself can yield up to 2.7M IOPS

[ceph-users] Re: The confusing output of ceph df command

2020-09-10 Thread Milan Kupcevic
On 2020-09-08 19:30, norman kern wrote: > Hi, > > I have changed most of the pools from 3-replica to EC 4+2 in my cluster, > when I use the ceph df command to show > > the used capacity of the cluster: > [...] > > The USED = 3 * STORED in 3-replica mode is completely right, but for an EC > 4+2 pool
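For context, the expected raw-space ratios work out as follows (generic EC arithmetic, not specific to this cluster):

    3-replica : USED ≈ 3.0 * STORED
    EC 4+2    : USED ≈ (4+2)/4 = 1.5 * STORED
    EC 6+2    : USED ≈ (6+2)/6 ≈ 1.33 * STORED

Ratios well above these usually point at allocation overhead (for example, many small objects against bluestore's min_alloc_size on HDD) rather than at the EC profile itself.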

[ceph-users] ceph-osd performance on ram disk

2020-09-10 Thread George Shuklin
I'm creating a benchmark suite for Ceph. While benchmarking the benchmark itself, I've checked how fast ceph-osd works. I decided to skip all the 'SSD mess' and use brd (block RAM disk, modprobe brd) as the underlying storage. Brd itself can yield up to 2.7M IOPS in fio. In single-thread mode (iodepth=1) it
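A rough sketch of that kind of setup; the sizes and device names are illustrative only:

    # two RAM-backed block devices of 8 GiB each (rd_size is in KiB)
    modprobe brd rd_nr=2 rd_size=8388608
    # baseline of the ram disk itself, before putting ceph-osd on top of it
    fio --name=baseline --filename=/dev/ram0 --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --iodepth=1 --runtime=30 --time_based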

[ceph-users] Re: Multipart uploads with partsizes larger than 16MiB failing on Nautilus

2020-09-10 Thread Casey Bodley
On Thu, Sep 10, 2020 at 10:19 AM shubjero wrote: > > Hi Casey, > > I was never setting rgw_max_chunk_size in my ceph.conf, so it must have > been the default? Funnily enough, I don't even see this configuration > parameter in the documentation > https://docs.ceph.com/docs/nautilus/radosgw/config-ref/ . >

[ceph-users] Re: Octopus dashboard: rbd-mirror page shows error for primary site

2020-09-10 Thread Jason Dillaman
On Thu, Sep 10, 2020 at 10:05 AM Eugen Block wrote: > > Thank you, Jason. > The report can be found at https://tracker.ceph.com/issues/47390 > > By the way, I think your link to the rbd issues should be the other > way around, http://tracker.ceph.com/issues/rbd gives me a 404. ;-) > This is

[ceph-users] Orchestrator cephadm not setting CRUSH weight on OSD

2020-09-10 Thread Robert Sander
Hi, I stumbled across an issue where an OSD that gets redeployed has a CRUSH weight of 0 after cephadm finishes. I have created a service definition for the orchestrator to automatically deploy OSDs on SSDs: service_type: osd service_id: SSD_OSDs placement: label: 'osd' data_devices:
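Until the root cause is found, one possible manual workaround (the OSD id and weight below are placeholders, not from this report):

    # confirm which OSDs came back with weight 0
    ceph osd tree
    # set the CRUSH weight back to the device's capacity in TiB, e.g. for a 1.8 TiB SSD
    ceph osd crush reweight osd.12 1.8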

[ceph-users] Re: The confusing output of ceph df command

2020-09-10 Thread Steven Pine
Multiple people have posted to this mailing list with this exact problem, and presumably others have hit it as well, but the developers don't believe it is worth even a warning in the documentation; for all the good that Ceph does, this issue is oddly treated with little urgency. Basically, Ceph

[ceph-users] Re: Multipart uploads with partsizes larger than 16MiB failing on Nautilus

2020-09-10 Thread shubjero
Hi Casey, I was never setting rgw_max_chunk_size in my ceph.conf, so it must have been the default? Funnily enough, I don't even see this configuration parameter in the documentation https://docs.ceph.com/docs/nautilus/radosgw/config-ref/ . Armed with your information, I tried setting the following in my
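For reference, one way to inspect and override that option on Nautilus and later; the value shown is just the compiled-in default, not a recommended fix for this problem:

    # value seen by a running gateway (the <gateway-id> is a placeholder)
    ceph config show client.rgw.<gateway-id> rgw_max_chunk_size
    # or set it centrally for all RGW instances (4194304 = 4 MiB, the default)
    ceph config set client.rgw rgw_max_chunk_size 4194304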

[ceph-users] Re: Octopus dashboard: rbd-mirror page shows error for primary site

2020-09-10 Thread Eugen Block
Thank you, Jason. The report can be found at https://tracker.ceph.com/issues/47390 By the way, I think your link to the rbd issues should be the other way around, http://tracker.ceph.com/issues/rbd gives me a 404. ;-) This is better: https://tracker.ceph.com/projects/rbd/issues Regards,

[ceph-users] Re: Octopus: snapshot errors during rbd import

2020-09-10 Thread Eugen Block
Hi Jason, > Sure, it's probably worth creating a new tracker ticket at [1]. > Is your system configured to enable journaling by default on all new images? Yes, I have it in ceph.conf: rbd default features = 125, and the features are enabled: ceph1:~ # rbd info rbd-pool1/cloud7 | grep
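As a side note, the feature mask 125 decodes exactly to the set that includes journaling, which matches the default above (quick sketch, using the image name from the post):

    # 125 = 1 (layering) + 4 (exclusive-lock) + 8 (object-map)
    #     + 16 (fast-diff) + 32 (deep-flatten) + 64 (journaling)
    rbd info rbd-pool1/cloud7 | grep features
    # enabling journaling explicitly on an existing image, if it were missing:
    rbd feature enable rbd-pool1/cloud7 journaling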

[ceph-users] Re: Octopus: snapshot errors during rbd import

2020-09-10 Thread Jason Dillaman
On Thu, Sep 10, 2020 at 7:44 AM Eugen Block wrote: > > Hi *, > > I'm currently testing rbd-mirror on ceph version > 15.2.4-864-g0f510cb110 (0f510cb1101879a5941dfa1fa824bf97db6c3d08) > octopus (stable) and saw this during an rbd import of a fresh image on > the primary site: > > ---snip--- >

[ceph-users] Re: The confusing output of ceph df command

2020-09-10 Thread Frank Schilder
We might have the same problem. EC 6+2 on a pool for RBD images on spindles. Please see the earlier thread "mimic: much more raw used than reported". In our case, this seems to be a problem exclusively for RBD workloads and here, in particular, Windows VMs. I see no amplification at all on our

[ceph-users] Octopus: snapshot errors during rbd import

2020-09-10 Thread Eugen Block
Hi *, I'm currently testing rbd-mirror on ceph version 15.2.4-864-g0f510cb110 (0f510cb1101879a5941dfa1fa824bf97db6c3d08) octopus (stable) and saw this during an rbd import of a fresh image on the primary site: ---snip--- ceph1:~ # rbd import /mnt/SUSE-OPENSTACK-CLOUD-7-x86_64-GM-DVD1.iso

[ceph-users] Octopus dashboard: rbd-mirror page shows error for primary site

2020-09-10 Thread Eugen Block
Hi *, I was just testing rbd-mirror on ceph version 15.2.4-864-g0f510cb110 (0f510cb1101879a5941dfa1fa824bf97db6c3d08) octopus (stable) and noticed mgr errors on the primary site (also in version 15.2.2): ---snip--- 2020-09-10T11:20:01.724+0200 7f1c1b46a700 0 [dashboard ERROR

[ceph-users] Re: Moving OSD from one node to another

2020-09-10 Thread huxia...@horebdata.cn
Thanks a lot for the information. Samuel huxia...@horebdata.cn From: Eugen Block Date: 2020-09-10 08:50 To: ceph-users Subject: [ceph-users] Re: Moving OSD from one node to another Hi, I haven't done this myself yet but you should be able to simply move the (virtual) disk to the new host

[ceph-users] Re: Moving OSD from one node to another

2020-09-10 Thread Eugen Block
Hi, I haven't done this myself yet but you should be able to simply move the (virtual) disk to the new host and start the OSD, depending on the actual setup. If those are stand-alone OSDs (no separate DB/WAL) it shouldn't be too difficult [1]. If you're using ceph-volume you could run
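A rough sketch of the ceph-volume path on the destination host, assuming LVM-based OSDs (the exact steps are not spelled out in this thread):

    # list the OSD logical volumes ceph-volume can see on the moved disk
    ceph-volume lvm list
    # activate them: this mounts the OSD's tmpfs directory and enables the systemd units
    ceph-volume lvm activate --all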

[ceph-users] Re: The confusing output of ceph df command

2020-09-10 Thread norman
Has anyone else met the same problem? Using EC instead of replica is supposed to save space, but now it's worse than replica... On 9/9/2020 7:30 AM, norman kern wrote: Hi, I have changed most of the pools from 3-replica to EC 4+2 in my cluster; when I use the ceph df command to show the used capacity of the