Alessandro De Salvo
wrote:
However, I cannot reduce the number of MDSes anymore. I used to do
that with e.g.:
ceph fs set cephfs max_mds 1
Trying this with 12.2.6 apparently has no effect; I am left with 2
active MDSes. Is this another bug?
Are you following this procedure?
http://docs.ceph.com/docs
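For reference, on Luminous simply lowering max_mds does not stop a rank that is
already active; the extra rank normally has to be deactivated explicitly
afterwards. A minimal sketch, assuming the filesystem is named cephfs and rank 1
is the one being removed:

ceph fs set cephfs max_mds 1
ceph mds deactivate cephfs:1

The deactivated rank passes through the stopping state and its daemon then
returns to standby.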
On Thu, Jul 12, 2018 at 11:39 PM Alessandro De Salvo wrote:
Some progress, and more pain...
I was able to recover the 200. object using ceph-objectstore-tool from one
of the OSDs (all copies are identical), but trying to re-inject it just with
rados put gave no error, while the get was still failing (same error).
Can I safely try to do the same as for object 200.? Should I
check something before trying it? Again, checking the copies of the
object, they have identical md5sums on all the replicas.
Thanks,
Alessandro
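A rough sketch of the extract-and-reinject steps described above, with a
placeholder OSD path and PG id, and assuming the object in question is
200.00000000 (the rank-0 journal header); ceph-objectstore-tool must be run with
the OSD stopped:

# extract the object from one replica (OSD stopped)
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-1 --pgid 10.14 200.00000000 get-bytes > /tmp/200.00000000.bin
# compare the copies pulled from the different OSDs
md5sum /tmp/200.00000000.bin
# write the recovered copy back through librados
rados -p cephfs_metadata put 200.00000000 /tmp/200.00000000.bin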
On 12/07/18 16:46, Alessandro De Salvo wrote:
Unfortunately I've also seen this pop up when trying to read an object,
but not on scrubbing; it magically disappeared after restarting the OSD.
However, in my case it was clearly related to
https://tracker.ceph.com/issues/22464, which doesn't seem to be the issue here.
Paul
2018-07-12 13:53 GMT+02:00 Alessandro De Salvo
On 12/07/18 11:20, Alessandro De Salvo wrote:
On 12/07/18 10:58, Dan van der Ster wrote:
On Wed, Jul 11, 2018 at 10:25 PM Gregory Farnum wrote:
On Wed, Jul 11, 2018 at 9:23 AM Alessandro De Salvo
wrote:
OK, I found where the object is:
ceph osd map cephfs_metadata 200.
osdmap e632418 pool 'cephfs_metadata' (10) object
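For readers following along, the full output of that command normally has this
shape (the hash, PG and OSD ids below are placeholders):

ceph osd map cephfs_metadata 200.00000000
osdmap e632418 pool 'cephfs_metadata' (10) object '200.00000000' -> pg 10.844f3494 (10.14) -> up ([1,2,3], p1) acting ([1,2,3], p1)

The up and acting sets list the OSDs holding the object's PG, i.e. the replicas
worth inspecting.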
> On 11 Jul 2018, at 23:25, Gregory Farnum wrote:
>
>> On Wed, Jul 11, 2018 at 9:23 AM Alessandro De Salvo
>> wrote:
>> OK, I found where the object is:
>>
>>
>> ceph osd map cephfs_metadata 200.
>>
The OSDs with 10.14 are on a SAN system, and one is on a different one, so I
would tend to rule out that they both had (silent) errors at the same time.
Thanks,
Alessandro
On 11/07/18 18:56, John Spray wrote:
On Wed, Jul 11, 2018 at 4:49 PM Alessandro De Salvo
wrote:
Hi John,
in fact I get an I/O error
, 2018 at 4:10 PM Alessandro De Salvo
wrote:
Hi,
after the upgrade to luminous 12.2.6 today, all our MDSes have been
marked as damaged. Trying to restart the instances only results in
standby MDSes. We currently have 2 active filesystems, with 2 MDSes each.
I found the following error messages
age before issuing the "repaired" command?
What is the history of the filesystems on this cluster?
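For context, a sketch of the commands usually involved around this step (the
filesystem name and rank are placeholders): a rank that the monitors have
flagged as damaged is cleared with the repaired command, after which a standby
is allowed to take it over, and a running MDS's recorded damage entries can be
listed via tell:

ceph mds repaired cephfs:0
ceph tell mds.<active-daemon> damage ls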
On Wed, Jul 11, 2018 at 8:10 AM Alessandro De Salvo
<alessandro.desa...@roma1.infn.it> wrote:
Hi,
after the upgrade to luminous 12.2.6 today, all our MDSes have been
marked as damaged. Trying to restart the instances only results in
standby MDSes. We currently have 2 active filesystems, with 2 MDSes each.
I found the following error messages in the mon:
mds.0 :6800/2412911269
Hi,
On 14/06/18 06:13, Yan, Zheng wrote:
On Wed, Jun 13, 2018 at 9:35 PM Alessandro De Salvo
wrote:
Hi,
On 13/06/18 14:40, Yan, Zheng wrote:
On Wed, Jun 13, 2018 at 7:06 PM Alessandro De Salvo
wrote:
Hi,
I'm trying to migrate a cephfs data pool to a different one in order to
reconfigure with new pool parameters. I've found some hints but no
specific documentation to migrate pools.
I'm currently trying with rados export + import, but I get errors like
these:
Write
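A sketch of the export/import attempt being described, plus the add-a-data-pool
route that is often suggested instead; the pool names cephfs_data and
cephfs_data_new and the mount point are placeholders:

# bulk copy of one pool's contents into another
rados -p cephfs_data export cephfs_data.dump
rados -p cephfs_data_new import cephfs_data.dump

# alternative: attach the new pool to the filesystem and direct new files to it
ceph fs add_data_pool cephfs cephfs_data_new
setfattr -n ceph.dir.layout.pool -v cephfs_data_new /mnt/cephfs/somedir

Note that the layout change only affects files created afterwards; existing
file data stays in the original pool.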
, 2018 at 5:49 AM Alessandro De Salvo
<alessandro.desa...@roma1.infn.it> wrote:
Hi,
several times a day we have different OSDs, running Luminous 12.2.2 with
Bluestore, crashing with errors like this:
starting osd.2 at - osd_data /var/lib/ceph/osd/ceph-2
/var/lib/ceph/osd/ceph-2/journal
2018-01-30 13:45:28.440883 7f1e193cbd00 -1 osd.2 107082 log_to_monitors
{default=true}
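When a BlueStore OSD crashes repeatedly, a common first check is an fsck of the
store with the OSD stopped; a sketch, using the osd.2 data path shown above:

ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-2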
018 05:40 PM, Alessandro De Salvo wrote:
> > Thanks Lincoln,
> >
> > indeed, as I said the cluster is recovering, so there are pending ops:
> >
> >
> > pgs: 21.034% pgs not active
> > 1692310/24980804 objects degraded (6.774%)
>
+0100, Alessandro De Salvo wrote:
Hi,
I'm running ceph luminous 12.2.2 and my cephfs suddenly became degraded.
I have 2 active mds instances and 1 standby. All the active instances
are now in replay state and show the same error in the logs:
mds1
2018-01-08 16:04:15.765637 7fc2e92451c0 0 ceph version 12.2.2
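A small sketch of commands often used at this point to check the rank states
and the integrity of the journal being replayed (the daemon name is a
placeholder; journal inspect is read-only):

ceph fs status
ceph daemon mds.<name> status
cephfs-journal-tool journal inspect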
Hi,
when trying to use df on a ceph-fuse-mounted cephfs filesystem with ceph
luminous >= 12.1.3, I get hangs with the following kind of messages
in the logs:
2017-08-22 02:20:51.094704 7f80addb7700 0 client.174216 ms_handle_reset
on 192.168.0.10:6789/0
The logs are only showing