[ceph-users] RFC: Possible replacement for ceph-disk

2020-10-01 Thread Nico Schottelius
Good evening, since 2018 we have been using a custom script to create disks / partitions, because at the time both ceph-disk and ceph-volume exhibited bugs that made them unreliable for us. We recently re-tested ceph-volume and while, generally speaking [0], it seems to work, using LVM seems to

[ceph-users] Re: rgw index shard much larger than others

2020-10-01 Thread Eric Ivancich
Hi Dan, One way to tell would be to do a: radosgw-admin bi list --bucket= And see if any of the lines output contains (perhaps using `grep`): "type": "olh", That would tell you if there were any versioned objects in the bucket. The “fix” we currently have only prevents this
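A minimal sketch of the check Eric describes, with a hypothetical bucket name "mybucket" standing in for the real one:

    radosgw-admin bi list --bucket=mybucket | grep '"type": "olh"'

Any matching lines mean the bucket index contains OLH entries, i.e. the bucket holds (or held) versioned objects; no output suggests it does not.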

[ceph-users] Re: ceph-volume quite buggy compared to ceph-disk

2020-10-01 Thread tri
Hi Matt, Marc, I'm using Ceph Octopus with cephadm as the orchestration tool. I've tried adding OSDs with ceph orch daemon add ... but it's pretty limited. For one, you can't create a dmcrypt OSD with it, nor give it a separate db device. I found that the most reliable way to create OSDs with
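For reference, the plain form mentioned above is a single-device call, while cephadm's OSD service spec is the usual route to dmcrypt and separate db devices; the host, device, and spec contents below are illustrative assumptions, not necessarily what the poster goes on to recommend:

    # one OSD from one device; no dmcrypt, no separate db device
    ceph orch daemon add osd host1:/dev/sdb

    # more flexible: apply an OSD service spec
    # sketch of such a spec; verify field names against the cephadm docs:
    #   service_type: osd
    #   service_id: dmcrypt-osds
    #   placement:
    #     host_pattern: '*'
    #   data_devices:
    #     rotational: 1
    #   db_devices:
    #     rotational: 0
    #   encrypted: true
    ceph orch apply osd -i osd_spec.yml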

[ceph-users] Re: Feedback for proof of concept OSD Node

2020-10-01 Thread Brian Topping
Welcome to Ceph! I think better questions to start with are “what are your objectives in your study?” Is it just seeing Ceph run with many disks, or are you trying to see how much performance you can get out of it with distributed disk? What is your budget? Do you want to try different

[ceph-users] Re: cephfs tag not working

2020-10-01 Thread Andrej Filipcic
On 2020-10-01 15:56, Frank Schilder wrote: There used to be / is a bug in ceph fs commands when using data pools. If you enable the application cephfs on a pool explicitly before running cephfs add datapool, the fs-tag is not applied. Maybe it's that? There is an older thread on the topic in

[ceph-users] Re: cephfs tag not working

2020-10-01 Thread Frank Schilder
There used to be / is a bug in ceph fs commands when using data pools. If you enable the application cephfs on a pool explicitly before running cephfs add datapool, the fs-tag is not applied. Maybe it's that? There is an older thread on the topic in the users-list and also a fix/workaround.
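A minimal sketch of the ordering Frank describes, with hypothetical names (filesystem "cephfs", pool "cephfs_extra"): let the fs command apply the application tag itself rather than enabling the application by hand first, then verify:

    # create the pool, but do NOT run "ceph osd pool application enable ... cephfs" yourself
    ceph osd pool create cephfs_extra 64

    # adding it to the filesystem sets the cephfs application tag for you
    ceph fs add_data_pool cephfs cephfs_extra

    # verify the tag landed
    ceph osd pool application get cephfs_extra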

[ceph-users] Re: Ceph Tech Talk: Karan Singh - Scale Testing Ceph with 10Billion+ Objects

2020-10-01 Thread Marc Roos
P, thanks, you are right, I was too blind and impatient to look under the options. -Original Message- Cc: ceph-users; miperez Subject: *SPAM* Re: [ceph-users] Ceph Tech Talk: Karan Singh - Scale Testing Ceph with 10Billion+ Objects You can click "join without audio and video"

[ceph-users] Re: Ceph Tech Talk: Karan Singh - Scale Testing Ceph with 10Billion+ Objects

2020-10-01 Thread Peter Sarossy
You can click "join without audio and video" at the bottom On Thu, Oct 1, 2020 at 1:10 PM Marc Roos wrote: > > Mike, > > Can you allow access without mic and cam? > > Thanks, > Marc > > > > -Original Message- > > To: ceph-users@ceph.io > Subject: *SPAM* [ceph-users] Ceph Tech

[ceph-users] Ceph Tech Talk: Karan Singh - Scale Testing Ceph with 10Billion+ Objects

2020-10-01 Thread Marc Roos
Mike, Can you allow access without mic and cam? Thanks, Marc -Original Message- To: ceph-users@ceph.io Subject: *SPAM* [ceph-users] Ceph Tech Talk: Karan Singh - Scale Testing Ceph with 10Billion+ Objects Hey all, We're live now with the latest Ceph tech talk! Join us:

[ceph-users] Ceph Tech Talk: Karan Singh - Scale Testing Ceph with 10Billion+ Objects

2020-10-01 Thread Mike Perez
Hey all, We're live now with the latest Ceph tech talk! Join us: https://bluejeans.com/908675367/browser -- Mike Perez he/him Ceph Community Manager M: +1-951-572-2633 494C 5D25 2968 D361 65FB 3829 94BC D781 ADA8 8AEA @Thingee Thingee

[ceph-users] Re: rgw index shard much larger than others

2020-10-01 Thread Dan van der Ster
Thanks Matt and Eric, Sorry for the basic question, but how can I, as a ceph operator, tell if a bucket is versioned? And for fixing this current situation, I would wait for the fix and then reshard? (We want to reshard this bucket anyway because listing perf is way too slow for the user with 512
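For the reshard step itself, the manual form on Nautilus looks like the following; the bucket name and target shard count are placeholders:

    radosgw-admin bucket reshard --bucket=mybucket --num-shards=1024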

[ceph-users] Re: rgw index shard much larger than others

2020-10-01 Thread Eric Ivancich
Hi Matt and Dan, I too suspect it’s the issue Matt linked to. That bug only affects versioned buckets, so I’m guessing your bucket is versioned, Dan. This bug is triggered when the final instance of an object in a versioned bucket is deleted, but for reasons we do not yet understand, the

[ceph-users] Re: ceph-volume quite buggy compared to ceph-disk

2020-10-01 Thread Matt Larson
Hi Marc, Did you have any success with `ceph-volume` for activating your OSD? I am having a similar problem where `ceph-bluestore-tool` fails to read the label of a previously created OSD on an LVM partition. I had previously been using the OSD without issues, but after a
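The failing label read can be reproduced by hand, which may help when reporting it; the LV path is a placeholder for the OSD's actual logical volume:

    # print the bluestore label ceph-volume needs in order to activate the OSD
    ceph-bluestore-tool show-label --dev /dev/<vg>/<osd-lv>

    # list what ceph-volume itself knows about the LVs
    ceph-volume lvm list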

[ceph-users] Re: cephfs tag not working

2020-10-01 Thread Eugen Block
Hi, I have a one-node-cluster (also 15.2.4) for testing purposes and just created a cephfs with the tag; it works for me. But my node is also its own client, so there's that. And it was installed with 15.2.4, no upgrade. For the 2nd, mds works, files can be created or removed, but client

[ceph-users] Re: rgw index shard much larger than others

2020-10-01 Thread Matt Benjamin
Hi Dan, Possibly you're reproducing https://tracker.ceph.com/issues/46456. That explains how the underlying issue worked; I don't remember how a bucket exhibiting this is repaired. Eric? Matt On Thu, Oct 1, 2020 at 8:41 AM Dan van der Ster wrote: > > Dear friends, > > Running 14.2.11, we

[ceph-users] CEPH iSCSI issue - ESXi command timeout

2020-10-01 Thread Golasowski Martin
Dear All, a week ago we had to reboot our ESXi nodes since our CEPH cluster suddenly stopped serving all I/O. We have identified a VM (vCenter appliance) which was swapping heavily and causing heavy load. However, since then we have been experiencing strange issues, as if the cluster cannot handle

[ceph-users] cephfs tag not working

2020-10-01 Thread Andrej Filipcic
Hi, on octopus 15.2.4 I have an issue with cephfs tag auth. The following works fine: client.f9desktop key: caps: [mds] allow rw caps: [mon] allow r caps: [osd] allow rw pool=cephfs_data, allow rw pool=ssd_data, allow rw pool=fast_data, allow rw
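For comparison, the tag-based cap that the subject line refers to would typically be set like this; the filesystem name is a placeholder:

    ceph auth caps client.f9desktop \
        mds 'allow rw' \
        mon 'allow r' \
        osd 'allow rw tag cephfs data=<fs_name>'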

[ceph-users] Re: hdd pg's migrating when converting ssd class osd's

2020-10-01 Thread Frank Schilder
Dear Mark and Nico, I think this might be the time to file a tracker report. As far as I can see, your set-up is as it should be, OSD operations on your clusters should behave exactly as on ours. I don't know of any other configuration option that influences placement calculation. The

[ceph-users] bugs ceph-volume scripting

2020-10-01 Thread Marc Roos
I have been creating LVM OSDs with: ceph-volume lvm zap --destroy /dev/sdf && ceph-volume lvm create --data /dev/sdf --dmcrypt Because this procedure failed: ceph-volume lvm zap --destroy /dev/sdf (waiting on slow human typing) ceph-volume lvm create --data /dev/sdf --dmcrypt However when
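Spelled out, the two variants being compared are, with /dev/sdf as in the original:

    # chained form that has been working
    ceph-volume lvm zap --destroy /dev/sdf && \
    ceph-volume lvm create --data /dev/sdf --dmcrypt

    # two-step form that failed, with a pause between the commands
    ceph-volume lvm zap --destroy /dev/sdf
    # (waiting on slow human typing)
    ceph-volume lvm create --data /dev/sdf --dmcrypt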