Re: [ceph-users] RBD EC images for a ZFS pool

2020-01-09 Thread JC Lopez
Hi, you can actually specify the features you want enabled at creation time, so there is no need to remove any feature afterwards. To illustrate Ilya’s message: rbd create rbd/test --size=128M --image-feature=layering,striping --stripe-count=8 --stripe-unit=4K The object size is hereby left to the
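A quick way to verify what the image actually got (assuming the rbd/test name from the command above) is rbd info, which reports the size, object size, enabled features, stripe unit and stripe count:

    # Inspect the image's features and striping parameters
    rbd info rbd/test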

Re: [ceph-users] Large OMAP Object

2019-11-14 Thread JC Lopez
Hi, this probably comes from your RGW, which is a big consumer/producer of OMAP for bucket indexes. Have a look at this previous post and just adapt the pool name to match the one where it’s detected: https://www.spinics.net/lists/ceph-users/msg51681.html Regards JC > On Nov 14, 2019, at
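As a rough sketch of how to confirm which object tripped the warning (the pool and object names below are placeholders, adapt them to what the health detail reports): the cluster log records the exact object, and listomapkeys lets you count its keys.

    # Find which object exceeded the omap threshold
    grep -i 'large omap object' /var/log/ceph/ceph.log
    # Count the omap keys on a suspect bucket index object
    rados -p default.rgw.buckets.index listomapkeys '.dir.<bucket-marker>' | wc -l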

Re: [ceph-users] Create containers/buckets in a custom rgw pool

2019-11-11 Thread JC Lopez
Hi Soumya, have a look at this page, which shows how to map your special pool from the RADOS Gateway perspective. Luminous: https://docs.ceph.com/docs/luminous/radosgw/placement/
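As a rough illustration (the placement-id and pool names below are invented for the example), a dedicated placement target is normally declared in the zonegroup, wired to the custom pools in the zone, and then selected when the bucket is created:

    # Declare the placement target in the zonegroup
    radosgw-admin zonegroup placement add --rgw-zonegroup=default --placement-id=special-placement
    # Point it at the custom data and index pools in the zone
    radosgw-admin zone placement add --rgw-zone=default --placement-id=special-placement \
        --data-pool=special.rgw.buckets.data --index-pool=special.rgw.buckets.index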

Re: [ceph-users] Optimizing terrible RBD performance

2019-10-04 Thread JC Lopez
Hi, your RBD bench and RADOS bench use a 4MB IO request size by default, while your FIO is configured for a 4KB IO request size. If you want to compare apples to apples (bandwidth) you need to change the FIO IO request size to 4194304. Plus, you tested a sequential workload with RADOS bench but
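As a sketch of an apples-to-apples run (the pool and image names are placeholders), fio can drive the same 4 MiB sequential writes through its rbd engine that rados bench issues by default:

    # 4 MiB sequential writes against an RBD image, comparable to the rados bench defaults
    fio --name=seq-write-4m --ioengine=rbd --clientname=admin --pool=rbd --rbdname=test \
        --rw=write --bs=4M --iodepth=16 --numjobs=1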

Re: [ceph-users] file location

2019-08-20 Thread JC Lopez
Hi, find out the inode number, identify from the data pool all the objects that belong to this inode, and then run ceph osd map {pool} {objectname} for each of them; this will tell you all the PGs your inode’s objects are located in. printf '%x\n' $(stat -c %i {filepath}) 100
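Putting the steps together as a sketch (the file path and data pool name are placeholders):

    # 1. Hex inode number of the file
    INO=$(printf '%x\n' $(stat -c %i /mnt/cephfs/somefile))
    # 2. List the objects backing that inode in the CephFS data pool
    rados -p cephfs_data ls | grep "^${INO}\."
    # 3. Map each returned object to its PG and acting OSDs
    ceph osd map cephfs_data ${INO}.00000000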

Re: [ceph-users] Error Mounting CephFS

2019-08-07 Thread JC Lopez
Hi, see https://docs.ceph.com/docs/nautilus/cephfs/kernel/ and use -o mds_namespace={fsname} Regards JC > On Aug 7, 2019, at 10:24, dhils...@performair.com wrote: > > All; > > Thank you for your assistance, this led me to the fact that I hadn't
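For example, a kernel-client mount that selects a specific file system would look roughly like this (the monitor address, secret file and fs name are placeholders):

    # Mount CephFS and pick the file system by name (pre-Octopus option name)
    mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=myfs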

Re: [ceph-users] MON crashing when upgrading from Hammer to Luminous

2019-07-22 Thread JC Lopez
.ceph.com/docs/mimic/install/upgrading-ceph/#upgrade-procedures> to be consistent. JC > On Jul 22, 2019, at 13:38, JC Lopez wrote: > > Hi > > you’ll have to go from Hammer to Jewel then from Jewel to Luminous for a > smooth upgrade. > - http://docs.ceph.com/docs/mimi

Re: [ceph-users] MON crashing when upgrading from Hammer to Luminous

2019-07-22 Thread JC Lopez
Hi, you’ll have to go from Hammer to Jewel, then from Jewel to Luminous, for a smooth upgrade. - http://docs.ceph.com/docs/mimic/install/upgrading-ceph/#upgrade-procedures -
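Before each hop it is worth confirming that every daemon is already running the release you think it is; one way to sanity-check this (the exact invocation and output format vary between these releases) is:

    # Report the version each monitor and OSD daemon is running
    ceph tell mon.* version
    ceph tell osd.* version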

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-08-29 Thread JC Lopez
bd ls | grep vm-101-disk-2" As I’m not familiar with Proxmox, I’d suggest the following: If yes to 1, to be safe, copy it somewhere else and then do a rados -p rbd rm vm-101-disk-2. If no to 1, to be safe, copy this file somewhere else and then do a rm -rf vm-101-disk-2__head_383C3223__0
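A sketch of the "copy it somewhere else first" step for the RADOS-level case (the backup path is illustrative):

    # Keep a copy of the object before removing it
    rados -p rbd get vm-101-disk-2 /root/vm-101-disk-2.backup
    rados -p rbd rm vm-101-disk-2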

Re: [ceph-users] Cache Tiering Question

2015-10-15 Thread JC Lopez
Hi Robert, usable bytes, so before replication: the size of the actual original objects you write. Cheers JC > On 15 Oct 2015, at 16:33, Robert LeBlanc wrote: > > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA256 > > One more question. Is max_{bytes,objects} before or
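For example (the cache pool name and values are illustrative), the thresholds are set on the cache pool in those raw, pre-replication terms:

    # Start flushing/evicting once ~1 TiB of user data or 1M objects sit in the cache pool
    ceph osd pool set hot-cache target_max_bytes 1099511627776
    ceph osd pool set hot-cache target_max_objects 1000000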