[ceph-users] Config parameters for system tuning

2017-06-20 Thread Maged Mokhtar
Hi, 1) I am trying to set some of the following config values, which seem to be present in most config examples relating to performance tuning: journal_queue_max_ops, journal_queue_max_bytes, filestore_queue_committing_max_bytes, filestore_queue_committing_max_ops. I am using 10.2.7 but not able
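One way to check whether a given option still exists under the running version (several of the filestore/journal throttles were reworked around Jewel) is to query an OSD's admin socket; a quick sketch, assuming an OSD id of 0 and that the commands are run on that OSD's host:

    # ceph daemon osd.0 config show | grep -E 'journal_queue|filestore_queue'
    # ceph daemon osd.0 config get journal_queue_max_ops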

Re: [ceph-users] Ceph packages for Debian Stretch?

2017-06-20 Thread Alfredo Deza
On Mon, Jun 19, 2017 at 8:25 PM, Christian Balzer wrote: > > Hello, > > can we have the status and projected release date of the Ceph packages for > Debian Stretch? We don't have a projected release date yet. The current status is that this has not been prioritized. I

Re: [ceph-users] Erasure Coding: Wrong content of data and coding chunks?

2017-06-20 Thread Jonas Jaszkowic
> Am 20.06.2017 um 16:06 schrieb David Turner : > > Ceph is a large scale storage system. You're hoping that it is going to care > about and split files that are 9 bytes in size. Do this same test with a 4MB > file and see how it splits up the content of the file. > >

Re: [ceph-users] cephfs-data-scan pg_files missing

2017-06-20 Thread John Spray
On Tue, Jun 20, 2017 at 4:06 PM, Mazzystr wrote: > > I'm on Red Hat Storage 2.2 (ceph-10.2.7-0.el7.x86_64) and I see this... > # cephfs-data-scan > Usage: > cephfs-data-scan init [--force-init] > cephfs-data-scan scan_extents [--force-pool] > cephfs-data-scan

[ceph-users] Recovering rgw index pool with large omap size

2017-06-20 Thread Sam Wouters
Hi list, we need to recover an index pool distributed over 4 SSD-based OSDs. We needed to kick out one of the OSDs because it was blocking all rgw access due to leveldb compacting. Since then we've restarted the OSD with "leveldb compact on mount = true" and the noup flag set, running the leveldb
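A sketch of the restart-with-compaction approach described above, assuming a hypothetical osd.12 and a systemd-based Jewel install:

    # ceph osd set noup
    # cat >> /etc/ceph/ceph.conf <<'EOF'
    [osd.12]
    leveldb compact on mount = true
    EOF
    # systemctl start ceph-osd@12
    # tail -f /var/log/ceph/ceph-osd.12.log    # wait for the compaction to finish
    # ceph osd unset noup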

[ceph-users] cephfs-data-scan pg_files missing

2017-06-20 Thread Mazzystr
I'm on Red Hat Storage 2.2 (ceph-10.2.7-0.el7.x86_64) and I see this... # cephfs-data-scan Usage: cephfs-data-scan init [--force-init] cephfs-data-scan scan_extents [--force-pool] cephfs-data-scan scan_inodes [--force-pool] [--force-corrupt] --force-corrupt: overrite apparently
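For reference, upstream Jewel builds of the tool also ship a pg_files subcommand; on a build that has it, the invocation looks roughly like this (the path inside the filesystem and the PG ids are placeholders):

    # cephfs-data-scan pg_files /some/dir 1.66 1.7a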

Re: [ceph-users] Prioritise recovery on specific PGs/OSDs?

2017-06-20 Thread David Turner
Setting an osd to 0.0 in the crush map will tell all PGs to move off of the osd. It's much the same as removing the osd from the cluster, except it allows the osd to help move the data that it has and prevents having degraded PGs and objects while you do it. The limit to weighting osds to 0.0 is
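A sketch of the reweight-to-zero approach, with a hypothetical osd.12:

    # ceph osd crush reweight osd.12 0.0
    # ceph -w                        # watch the PGs drain off the OSD
    # ceph osd out 12                # once it is empty, mark it out
    # systemctl stop ceph-osd@12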

Re: [ceph-users] Prioritise recovery on specific PGs/OSDs?

2017-06-20 Thread Peter Maloney
These settings apply to a specific OSD: > osd recovery max active = 1 > osd max backfills = 1 I don't know if it will behave as you expect if you set 0... (I tested setting 0, which didn't complain, but is 0 actually 0, unlimited, or an error?) Maybe you could parse the ceph pg dump, then look at
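For reference, a sketch of applying those two values to a single running OSD at runtime (osd.7 is just an example id; the admin-socket check has to be run on that OSD's host):

    # ceph tell osd.7 injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    # ceph daemon osd.7 config get osd_max_backfills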

Re: [ceph-users] Erasure Coding: Wrong content of data and coding chunks?

2017-06-20 Thread David Turner
Ceph is a large scale storage system. You're hoping that it is going to care about and split files that are 9 bytes in size. Do this same test with a 4MB file and see how it splits up the content of the file. On Tue, Jun 20, 2017, 6:48 AM Jonas Jaszkowic wrote: >
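A minimal way to run the suggested 4MB test, assuming an existing erasure-coded pool named ecpool:

    # dd if=/dev/urandom of=/tmp/test4m bs=4M count=1
    # rados -p ecpool put test-obj-4m /tmp/test4m
    # ceph osd map ecpool test-obj-4m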

Re: [ceph-users] Prioritise recovery on specific PGs/OSDs?

2017-06-20 Thread David Turner
If you're planning to remove the next set of disks, I would recommend weighting them to 0.0 in the crush map if you have the room for it. The process at this point would be weighting the next set to 0.0 when you add the previous set back in. That way when you finish removing the next set there is

Re: [ceph-users] Prioritise recovery on specific PGs/OSDs?

2017-06-20 Thread Sam Wouters
Yes. I don't know exactly in which release it was introduced, but in latest Jewel and beyond there is: "Please use pool level options recovery_priority and recovery_op_priority for enabling the pool level recovery priority feature": # ceph osd pool set default.rgw.buckets.index recovery_priority
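The full form of those commands, with the priority values here being examples only:

    # ceph osd pool set default.rgw.buckets.index recovery_priority 5      # example value
    # ceph osd pool set default.rgw.buckets.index recovery_op_priority 5   # example value
    # ceph osd pool get default.rgw.buckets.index recovery_priority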

Re: [ceph-users] Prioritise recovery on specific PGs/OSDs?

2017-06-20 Thread Logan Kuhn
Is there a way to prioritize specific pools during recovery? I know there are issues open for it, but I wasn't aware it was implemented yet... Regards, Logan - On Jun 20, 2017, at 8:20 AM, Sam Wouters wrote: | Hi, | Are they all in the same pool? Otherwise you could

Re: [ceph-users] Prioritise recovery on specific PGs/OSDs?

2017-06-20 Thread Sam Wouters
Hi, Are they all in the same pool? Otherwise you could prioritize pool recovery. If not, maybe you can play with the osd max backfills number, no idea if it accepts a value of 0 to actually disable it for specific OSDs. r, Sam On 20-06-17 14:44, Richard Hesketh wrote: > Is there a way, either

[ceph-users] Prioritise recovery on specific PGs/OSDs?

2017-06-20 Thread Richard Hesketh
Is there a way, either by individual PG or by OSD, I can prioritise backfill/recovery on a set of PGs which are currently particularly important to me? For context, I am replacing disks in a 5-node Jewel cluster, on a node-by-node basis - mark out the OSDs on a node, wait for them to clear,
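The node-by-node procedure described here maps to roughly these commands (OSD ids are placeholders):

    # ceph osd out 10 11 12 13                               # mark out every OSD on the node
    # ceph pg dump pgs_brief | egrep -c 'backfill|recover'   # count PGs still moving
    # ceph -w                                                # or just watch until recovery completes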

[ceph-users] Erasure Coding: Wrong content of data and coding chunks?

2017-06-20 Thread Jonas Jaszkowic
I am currently evaluating erasure coding in Ceph. I wanted to know where my data and coding chunks are located, so I followed the example at http://docs.ceph.com/docs/master/rados/operations/erasure-code/#creating-a-sample-erasure-coded-pool
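For readers following along, the linked documentation example is essentially the following (the profile parameters shown are illustrative, not necessarily the ones used in this test):

    # ceph osd erasure-code-profile set myprofile k=3 m=2 ruleset-failure-domain=osd
    # ceph osd pool create ecpool 12 12 erasure myprofile
    # echo ABCDEFGHI | rados -p ecpool put NYAN -
    # ceph osd map ecpool NYAN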

Re: [ceph-users] CephFS | flapping OSD locked up NFS

2017-06-20 Thread John Spray
On Tue, Jun 20, 2017 at 11:13 AM, David wrote: > Hi John > > I've had nfs-ganesha testing on the to do list for a while, I think I might > move it closer to the top! I'll certainly report back with the results. > > I'd still be interested to hear any kernel nfs

Re: [ceph-users] CephFS | flapping OSD locked up NFS

2017-06-20 Thread David
Hi John I've had nfs-ganesha testing on the to-do list for a while, I think I might move it closer to the top! I'll certainly report back with the results. I'd still be interested to hear any kernel NFS experiences/tips; my understanding is NFS is included in the Ceph testing suite so there is
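For anyone who wants to try the nfs-ganesha route, a minimal FSAL_CEPH export block looks roughly like this (the export id, pseudo path and config path are common defaults/examples, not anything specific to this thread):

    # cat >> /etc/ganesha/ganesha.conf <<'EOF'
    EXPORT {
        Export_ID = 1;
        Path = "/";
        Pseudo = "/cephfs";
        Access_Type = RW;
        Squash = No_Root_Squash;
        FSAL {
            Name = CEPH;
        }
    }
    EOF
    # systemctl restart nfs-ganesha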

Re: [ceph-users] Erasure Coding: Determine location of data and coding chunks

2017-06-20 Thread Jonas Jaszkowic
Thank you! I already knew about the ceph osd map command, but I am not sure how to interpret the output. For example, on the described erasure coded pool, the output is: osdmap e30 pool 'ecpool' (1) object 'sample-obj' -> pg 1.fa0b8566 (1.66) -> up ([1,4,2,0,3], p1) acting ([1,4,2,0,3], p1)
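For an erasure-coded pool the order of the up/acting set corresponds to the shard index, so in this output shard 0 is on osd.1, shard 1 on osd.4, and so on. On filestore OSDs the shards live in per-shard PG directories, so something like the following (default data paths assumed) lists the chunk files:

    # ls /var/lib/ceph/osd/ceph-1/current/1.66s0_head/
    # ls /var/lib/ceph/osd/ceph-4/current/1.66s1_head/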

Re: [ceph-users] FW: radosgw: stale/leaked bucket index entries

2017-06-20 Thread Pavan Rallabhandi
Hi Orit, No, we do not use multi-site. Thanks, -Pavan.

Re: [ceph-users] FW: radosgw: stale/leaked bucket index entries

2017-06-20 Thread Orit Wasserman
Hi Pavan, On Tue, Jun 20, 2017 at 8:29 AM, Pavan Rallabhandi <prallabha...@walmartlabs.com> wrote: > Trying one more time with ceph-users > > On 19/06/17, 11:07 PM, "Pavan Rallabhandi" > wrote: > > On many of our clusters running Jewel (10.2.5+), am running
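For inspecting a suspect bucket index on Jewel, these radosgw-admin subcommands are useful; the bucket name is a placeholder, and whether --fix is safe depends on the root cause, so treat this as a sketch only:

    # radosgw-admin bi list --bucket=mybucket > bi.json
    # radosgw-admin bucket check --bucket=mybucket --check-objects
    # radosgw-admin bucket check --bucket=mybucket --check-objects --fix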

[ceph-users] RadosGW not working after upgrade to Hammer

2017-06-20 Thread Gerson Jamal
Hi everyone, I upgraded Ceph from Firefly to Hammer and everything looked OK during the upgrade, but since then RadosGW is not working: I can list all buckets but I can't list the objects inside the buckets, and I receive the following error: format=json 400 Bad Request []{"Code":"InvalidArgument"} On
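Not a definitive diagnosis, but after a Firefly-to-Hammer upgrade it is worth confirming the region/zone configuration and that bucket metadata is still readable; a few Hammer-era checks (the bucket name is a placeholder):

    # radosgw-admin region get
    # radosgw-admin zone get
    # radosgw-admin bucket stats --bucket=mybucket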