Re: [ceph-users] MDS damaged

2018-07-13 Thread Alessandro De Salvo
Alessandro De Salvo wrote: However, I cannot reduce the number of MDSes anymore; I used to do that with e.g. "ceph fs set cephfs max_mds 1". Trying this with 12.2.6 has apparently no effect, and I am left with 2 active MDSes. Is this another bug? Are you following this procedure? http://docs.ceph.com/docs
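
For reference, the procedure the linked docs describe for luminous is roughly to lower max_mds and then explicitly stop the surplus rank; a minimal sketch, assuming a filesystem named cephfs and that rank 1 is the one to remove:

  # Lower the target number of active MDS daemons first
  ceph fs set cephfs max_mds 1
  # On luminous the surplus rank is not stopped automatically:
  # deactivate it explicitly so it drains and drops back to standby
  ceph mds deactivate cephfs:1
  # Watch the rank go through stopping -> standby
  ceph fs status cephfs

(On later releases the deactivate step is no longer needed; ranks above max_mds are stopped automatically.)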

Re: [ceph-users] MDS damaged

2018-07-13 Thread Alessandro De Salvo
On Jul 12, 2018 at 11:39 PM Alessandro De Salvo wrote: Some progress, and more pain... I was able to recover the 200. object using ceph-objectstore-tool on one of the OSDs (all identical copies), but trying to re-inject it just with rados put gave no error, while the get was still
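
A rough sketch of the kind of workflow described here; the OSD id, PG id, object name and paths below are placeholders, not the actual values from this thread:

  # Stop the OSD holding a good copy, then dump the object's bytes from its store
  systemctl stop ceph-osd@12
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
      --pgid 10.0 '200.00000000' get-bytes > /tmp/200.object
  systemctl start ceph-osd@12
  # Re-inject the recovered bytes into the metadata pool (object name is a placeholder)
  rados -p cephfs_metadata put 200.00000000 /tmp/200.object
  # Read it back and compare checksums to confirm the copy in the pool matches
  rados -p cephfs_metadata get 200.00000000 /tmp/200.check
  md5sum /tmp/200.object /tmp/200.check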

Re: [ceph-users] MDS damaged

2018-07-12 Thread Alessandro De Salvo
error) Can I safely try to do the same as for object 200.? Should I check something before trying it? Again, checking the copies of the object, they have identical md5sums on all the replicas. Thanks, Alessandro. On 12/07/18 16:46, Alessandro De Salvo wrote: Unfortunately
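
Before overwriting such an object, a few checks one might run (a sketch; the full object name and PG id are truncated in the quote above and have to be substituted):

  # Locate the object and its acting OSD set
  ceph osd map cephfs_metadata <full-object-name>
  # Deep-scrub the PG and see whether any inconsistency is reported
  ceph pg deep-scrub <pgid>
  ceph health detail
  rados list-inconsistent-obj <pgid> --format=json-pretty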

Re: [ceph-users] MDS damaged

2018-07-12 Thread Alessandro De Salvo
up when trying to read an object, but not on scrubbing, which magically disappeared after restarting the OSD. However, in my case it was clearly related to https://tracker.ceph.com/issues/22464, which doesn't seem to be the issue here. Paul 2018-07-12 13:53 GMT+02:00 Alessandro De Salvo

Re: [ceph-users] MDS damaged

2018-07-12 Thread Alessandro De Salvo
On 12/07/18 11:20, Alessandro De Salvo wrote: On 12/07/18 10:58, Dan van der Ster wrote: On Wed, Jul 11, 2018 at 10:25 PM Gregory Farnum wrote: On Wed, Jul 11, 2018 at 9:23 AM Alessandro De Salvo wrote: OK, I found where the object is: ceph osd map cephfs_metadata

Re: [ceph-users] MDS damaged

2018-07-12 Thread Alessandro De Salvo
On 12/07/18 10:58, Dan van der Ster wrote: On Wed, Jul 11, 2018 at 10:25 PM Gregory Farnum wrote: On Wed, Jul 11, 2018 at 9:23 AM Alessandro De Salvo wrote: OK, I found where the object is: ceph osd map cephfs_metadata 200. osdmap e632418 pool 'cephfs_metadata' (10) object
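
For context, "ceph osd map" prints the PG an object hashes to and the OSDs currently serving it; a minimal example with a placeholder object name:

  # Which PG and which OSDs (acting set) hold this metadata object?
  ceph osd map cephfs_metadata 200.00000000
  # The output ends with the up and acting sets, e.g. up ([A,B,C], pA) acting ([A,B,C], pA);
  # the reported PG can then be queried directly
  ceph pg <pgid> query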

Re: [ceph-users] MDS damaged

2018-07-12 Thread Alessandro De Salvo
> On 11 Jul 2018, at 23:25, Gregory Farnum wrote: >> On Wed, Jul 11, 2018 at 9:23 AM Alessandro De Salvo wrote: >> OK, I found where the object is: >> ceph osd map cephfs_metadata 200.

Re: [ceph-users] MDS damaged

2018-07-11 Thread Alessandro De Salvo
The OSDs with 10.14 are on a SAN system and one on a different one, so I would tend to exclude that they both had (silent) errors at the same time. Thanks, Alessandro. On 11/07/18 18:56, John Spray wrote: On Wed, Jul 11, 2018 at 4:49 PM Alessandro De Salvo wrote: Hi John, in fact I get an I/O

Re: [ceph-users] MDS damaged

2018-07-11 Thread Alessandro De Salvo
On Wed, Jul 11, 2018 at 4:10 PM Alessandro De Salvo wrote: Hi, after the upgrade to luminous 12.2.6 today, all our MDSes have been marked as damaged. Trying to restart the instances only results in standby MDSes. We currently have 2 filesystems active and 2 MDSes each. I found the following error messages

Re: [ceph-users] MDS damaged

2018-07-11 Thread Alessandro De Salvo
damage before issuing the "repaired" command? What is the history of the filesystems on this cluster? On Wed, Jul 11, 2018 at 8:10 AM Alessandro De Salvo <alessandro.desa...@roma1.infn.it> wrote: Hi, after the upgrade to luminous 12.2.6 today, all our MDSes have
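
For reference, the "repaired" command mentioned here just clears the damaged flag on an MDS rank; a sketch, assuming a filesystem named cephfs and rank 0:

  # List what the MDS recorded as damaged (needs a running MDS for that filesystem)
  ceph tell mds.<daemon-name> damage ls
  # Only once the underlying metadata objects are believed good again,
  # clear the damaged flag so a standby can take over the rank
  ceph mds repaired cephfs:0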

[ceph-users] MDS damaged

2018-07-11 Thread Alessandro De Salvo
Hi, after the upgrade to luminous 12.2.6 today, all our MDSes have been marked as damaged. Trying to restart the instances only results in standby MDSes. We currently have 2 filesystems active, with 2 MDSes each. I found the following error messages in the mon: mds.0 :6800/2412911269
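
When all ranks show up as damaged like this, a first pass at diagnosis (a sketch, not the exact steps taken in this thread) is usually to look at the filesystem and journal state before touching anything:

  # Overall filesystem / MDS state and the health detail the mons report
  ceph fs status
  ceph health detail
  # Read-only inspection of the MDS journal stored in the metadata pool
  # (on clusters with more than one filesystem the tool has to be pointed at the right rank)
  cephfs-journal-tool journal inspect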

Re: [ceph-users] Migrating cephfs data pools and/or mounting multiple filesystems belonging to the same cluster

2018-06-14 Thread Alessandro De Salvo
Hi, On 14/06/18 06:13, Yan, Zheng wrote: On Wed, Jun 13, 2018 at 9:35 PM Alessandro De Salvo wrote: Hi, On 13/06/18 14:40, Yan, Zheng wrote: On Wed, Jun 13, 2018 at 7:06 PM Alessandro De Salvo wrote: Hi, I'm trying to migrate a cephfs data pool to a different one in order

Re: [ceph-users] Migrating cephfs data pools and/or mounting multiple filesystems belonging to the same cluster

2018-06-13 Thread Alessandro De Salvo
Hi, On 13/06/18 14:40, Yan, Zheng wrote: On Wed, Jun 13, 2018 at 7:06 PM Alessandro De Salvo wrote: Hi, I'm trying to migrate a cephfs data pool to a different one in order to reconfigure with new pool parameters. I've found some hints but no specific documentation on how to migrate pools
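
As an aside (my own assumption, not necessarily what was suggested in this thread): when the goal is just to move data onto a pool with different parameters, a common approach is to attach the new pool as an additional data pool and steer files to it with file layouts, then copy the data inside the filesystem. Pool and path names below are examples:

  # Create the new data pool and attach it to the filesystem
  ceph osd pool create cephfs_data_new 128
  ceph fs add_data_pool cephfs cephfs_data_new
  # New files written under this directory land in the new pool;
  # existing files stay in the old pool until they are copied or rewritten
  setfattr -n ceph.dir.layout.pool -v cephfs_data_new /mnt/cephfs/migrated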

[ceph-users] Migrating cephfs data pools and/or mounting multiple filesystems belonging to the same cluster

2018-06-13 Thread Alessandro De Salvo
Hi, I'm trying to migrate a cephfs data pool to a different one in order to reconfigure with new pool parameters. I've found some hints but no specific documentation on how to migrate pools. I'm currently trying with rados export + import, but I get errors like these: Write
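
The raw copy being attempted looks roughly like this (pool and file names are examples). Note that even when it succeeds, the CephFS metadata still refers to the original data pool by its id, which is why a plain object copy alone is generally not a complete migration:

  # Dump every object in the old data pool to a local file, then load it into the new pool
  rados -p cephfs_data export /backup/cephfs_data.dump
  rados -p cephfs_data_new import /backup/cephfs_data.dump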

Re: [ceph-users] Luminous 12.2.2 OSDs with Bluestore crashing randomly

2018-01-31 Thread Alessandro De Salvo
2018 at 5:49 AM Alessandro De Salvo <alessandro.desa...@roma1.infn.it> wrote: Hi, several times a day we have different OSDs running Luminous 12.2.2 and Bluestore crashing with errors like this: starting osd.2 at - osd_d

[ceph-users] Luminous 12.2.2 OSDs with Bluestore crashing randomly

2018-01-30 Thread Alessandro De Salvo
Hi, several times a day we have different OSDs running Luminous 12.2.2 and Bluestore crashing with errors like this: starting osd.2 at - osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal 2018-01-30 13:45:28.440883 7f1e193cbd00 -1 osd.2 107082 log_to_monitors {default=true}
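
One generic check for a crashing BlueStore OSD (not something prescribed in this thread; the OSD id is an example) is to run the built-in consistency check with the daemon stopped, and to pull the full backtrace that goes with log lines like the above:

  # Stop the OSD and fsck its BlueStore; a clean run rules out local store corruption
  systemctl stop ceph-osd@2
  ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-2
  # Collect the crash backtrace from the system journal
  journalctl -u ceph-osd@2 --since "today" | less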

Re: [ceph-users] cephfs degraded on ceph luminous 12.2.2

2018-01-11 Thread Alessandro De Salvo
2018 05:40 PM, Alessandro De Salvo wrote: > Thanks Lincoln, > indeed, as I said the cluster is recovering, so there are pending ops: > pgs: 21.034% pgs not active > 1692310/24980804 objects degraded (6.774%)

Re: [ceph-users] cephfs degraded on ceph luminous 12.2.2

2018-01-08 Thread Alessandro De Salvo
+0100, Alessandro De Salvo wrote: Hi, I'm running on ceph luminous 12.2.2 and my cephfs suddenly degraded. I have 2 active mds instances and 1 standby. All the active instances are now in replay state and show the same error in the logs: mds1 2018-01-08 16:04:15.765637 7fc2e92451c0  0

[ceph-users] cephfs degraded on ceph luminous 12.2.2

2018-01-08 Thread Alessandro De Salvo
Hi, I'm running on ceph luminous 12.2.2 and my cephfs suddenly degraded. I have 2 active mds instances and 1 standby. All the active instances are now in replay state and show the same error in the logs: mds1 2018-01-08 16:04:15.765637 7fc2e92451c0  0 ceph version 12.2.2
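
MDS ranks normally sit in up:replay while PGs in the metadata pool are unavailable, so a usual first step (a sketch) is to see what is still inactive or recovering:

  # Cluster-wide view: how many PGs are inactive/degraded and why
  ceph -s
  ceph health detail
  # List the PGs that are stuck inactive; the MDSes cannot finish replay
  # until the metadata pool's PGs are active again
  ceph pg dump_stuck inactive
  ceph fs status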

[ceph-users] ceph-fuse hanging on df with ceph luminous >= 12.1.3

2017-08-21 Thread Alessandro De Salvo
Hi, when trying to use df on a ceph-fuse mounted cephfs filesystem with ceph luminous >= 12.1.3, I'm seeing hangs, with the following kind of messages in the logs: 2017-08-22 02:20:51.094704 7f80addb7700 0 client.174216 ms_handle_reset on 192.168.0.10:6789/0 The logs are only showing
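
To get more than the ms_handle_reset line out of the client, one option (an assumption about how to debug this, not advice taken from the thread) is to remount with verbose client logging; the mount point, monitor address and log path below are examples:

  # Run ceph-fuse with verbose client/messenger debugging so the hanging
  # df request shows up in the client log
  ceph-fuse -m 192.168.0.10:6789 /mnt/cephfs \
      --debug-client=20 --debug-ms=1 \
      --log-file /var/log/ceph/ceph-fuse.debug.log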