[ceph-users] Upcoming Ceph Days for 2020

2020-01-22 Thread Mike Perez
Hi Cephers, We have just posted some upcoming Ceph Days. We are looking for sponsors and content: * Ceph Day Istanbul: March 17 * Ceph Day Oslo: May 13 * Ceph Day Vancouver: May 13 https://ceph.com/cephdays/ Also, don't forget about our big event, Cephalocon Seoul, March 3-5. Registration,

[ceph-users] Re: Migrate Jewel from leveldb to rocksdb

2020-01-22 Thread Robert LeBlanc
On Wed, Jan 22, 2020 at 9:01 AM Alexandru Cucu wrote: > Hi, > > There is no need to rebuild all OSDs. You can follow the procedure > described by RedHat[0] to convert the DB and tell the OSD to use > rocksdb. > Couldn't find this documented elsewhere. You may need a RedHat account > to access the
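
For readers without a Red Hat subscription, here is a rough sketch of the kind of per-OSD conversion such a procedure involves. ceph-kvstore-tool and its store-copy command exist, but the paths, the keys-per-transaction value and the trailing backend argument below are assumptions from memory, not the verified Red Hat steps -- check "ceph-kvstore-tool --help" on your Jewel build before trying anything:

    ID=12                                     # hypothetical OSD id
    systemctl stop ceph-osd@$ID
    # copy the leveldb omap into a new rocksdb store (the trailing "rocksdb"
    # argument is an assumption; older builds may not accept it)
    ceph-kvstore-tool leveldb /var/lib/ceph/osd/ceph-$ID/current/omap \
        store-copy /var/lib/ceph/osd/ceph-$ID/current/omap.rocksdb 10000 rocksdb
    # swap the stores, keeping the old leveldb around until the OSD is healthy
    mv /var/lib/ceph/osd/ceph-$ID/current/omap{,.leveldb}
    mv /var/lib/ceph/osd/ceph-$ID/current/omap{.rocksdb,}
    systemctl start ceph-osd@$ID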

[ceph-users] Re: cephfs : write error: Operation not permitted

2020-01-22 Thread Patrick Donnelly
Hi Yoann, On Tue, Jan 21, 2020 at 11:58 PM Yoann Moulin wrote: > > Hello, > > On a fresh install (Nautilus 14.2.6) deployed with the ceph-ansible playbook > stable-4.0, I have an issue with cephfs. I can create a folder, I can > create empty files, but cannot write data to them, as if I'm not allowed to
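
The usual first suspect for "can create files but not write data" on a fresh CephFS is the client's OSD caps missing write access to the data pool. A minimal check and re-grant, assuming a filesystem named "cephfs" and a client named "client.cephfs" (both placeholders):

    # inspect the caps the mount is using
    ceph auth get client.cephfs
    # a read/write client typically needs something like:
    #   caps mds = "allow rw"
    #   caps mon = "allow r"
    #   caps osd = "allow rw tag cephfs data=cephfs"
    # "ceph fs authorize" generates a correct set in one step
    ceph fs authorize cephfs client.cephfs / rw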

[ceph-users] Re: ceph-volume lvm filestore OSDs fail to start on reboot. Permission denied on journal partition

2020-01-22 Thread Marco Gaiarin
Mandi! Wesley Dillingham wrote: > Upon restart of the server containing these OSDs they fail to start with the > following error in the logs: I've hit exactly the same trouble. Look at:

[ceph-users] Re: ceph-volume lvm filestore OSDs fail to start on reboot. Permission denied on journal partition

2020-01-22 Thread Janne Johansson
On Wed, 22 Jan 2020 at 18:01, Wesley Dillingham wrote: > After upgrading to Nautilus 14.2.6 from Luminous 12.2.12 we are seeing the > following behavior on OSDs which were created with "ceph-volume lvm create > --filestore --osd-id --data --journal " > > Upon restart of the server containing

[ceph-users] ceph-volume lvm filestore OSDs fail to start on reboot. Permission denied on journal partition

2020-01-22 Thread Wesley Dillingham
After upgrading to Nautilus 14.2.6 from Luminous 12.2.12, we are seeing the following behavior on OSDs which were created with "ceph-volume lvm create --filestore --osd-id --data --journal ". Upon restart of the server containing these OSDs, they fail to start with the following error in the logs:
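
A workaround often suggested for this symptom (offered as a sketch, not a verified fix for this exact regression): if the journal device node comes back owned by root after reboot, re-owning it to ceph:ceph lets the OSD start. The LV path below is hypothetical; take the real one from the "journal device" field of "ceph-volume lvm list":

    ls -l /dev/ceph-journal-vg/journal-osd-12    # hypothetical journal LV
    chown ceph:ceph /dev/ceph-journal-vg/journal-osd-12
    systemctl start ceph-osd@12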

[ceph-users] Re: Migrate Jewel from leveldb to rocksdb

2020-01-22 Thread Janne Johansson
On Wed, 22 Jan 2020 at 16:30, Robert LeBlanc wrote: > In the last release of Jewel [0] it mentions that omap data can be stored > in rocksdb instead of leveldb. We are seeing high latencies from compaction > of leveldb on our Jewel cluster (can't upgrade at this time). I installed > the latest

[ceph-users] Auto create rbd snapshots

2020-01-22 Thread Marc Roos
Is it possible to schedule the creation of snapshots on specific rbd images within ceph?
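
As far as I know there is no built-in scheduler for plain RBD snapshots in the Nautilus-era releases discussed on this list, so the usual answer is an external cron job around "rbd snap create". A minimal sketch (pool, image name and schedule are placeholders):

    # /etc/cron.d/rbd-snap -- hourly snapshot of one image
    0 * * * *  root  rbd snap create rbd/myimage@auto-$(date +\%Y\%m\%d-\%H\%M)
    # old snapshots still have to be pruned separately, e.g. with
    # "rbd snap ls" plus "rbd snap rm" in a companion script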

[ceph-users] Migrate Jewel from leveldb to rocksdb

2020-01-22 Thread Robert LeBlanc
In the last release of Jewel [0] it mentions that omap data can be stored in rocksdb instead of leveldb. We are seeing high latencies from compaction of leveldb on our Jewel cluster (can't upgrade at this time). I installed the latest version, but apparently that is not enough to do the
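
The knob being referred to is filestore_omap_backend. A sketch of the ceph.conf change, with the caveat (my understanding, not a verified statement) that it only takes effect for newly created filestore OSDs -- existing OSDs keep their leveldb omap until it is converted, which is what the rest of this thread is about:

    [osd]
    filestore_omap_backend = rocksdb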

[ceph-users] Large omap objects in radosgw .usage pool: is there a way to reshard the rgw usage log?

2020-01-22 Thread Ingo Reimann
Hi All, >On 09/10/2019 09:07, Florian Haas wrote: >[...] >the question about resharding the usage log still stands. (The untrimmed >usage log, in my case, would have blasted the old 2M keys threshold, too.) > >Cheers, Florian Is there any new wisdom about resharding the usage log for one
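
There is, as far as I know, no resharding mechanism for the usage log itself; the usual mitigation for the large-omap warning is trimming old entries with "radosgw-admin usage trim". A hedged sketch (the uid and cut-off date are placeholders):

    # per-user trim up to a cut-off date
    radosgw-admin usage trim --uid=someuser --end-date=2019-12-31
    # trimming across all users at once may additionally require
    # --yes-i-really-mean-it on recent releases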