Hi,
Will future releases of Ceph continue to support ceph-deploy, or will cephadm
be the only option?
Thanks,
Amudhan
On Fri, Oct 9, 2020 at 11:56 PM Alexander E. Patrakov
wrote:
>
> Hello,
>
> I found that the documentation available on the Internet is inconsistent on
> whether I can safely have two CephFS filesystems in my cluster.
> For the record, I don't use snapshots.
>
> FOSDEM 19 presentation by Sage Weil
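(For reference: in releases of that era, a second filesystem can be created once
the multi-filesystem flag is enabled; a minimal sketch, with hypothetical pool
and filesystem names:

    ceph fs flag set enable_multiple true --yes-i-really-mean-it
    ceph osd pool create cephfs2_metadata 32
    ceph osd pool create cephfs2_data 64
    ceph fs new cephfs2 cephfs2_metadata cephfs2_data

Clients then select the filesystem by name, e.g. via the mds_namespace= / fs=
mount option.)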
Hello,
Why is it necessary to enable an application on a pool? As per the
documentation, we need to enable an application before using the pool.
However, in my case I have a single pool on the cluster, used for RBD, and I
am able to run all RBD operations on it even though I never enabled the
application.
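For what it's worth, tagging the pool is a one-liner; a minimal sketch, assuming
the pool is named 'rbd_pool' (hypothetical name):

    ceph osd pool application enable rbd_pool rbd
    # for RBD, 'rbd pool init' tags the pool and creates the initial
    # rbd metadata in one step:
    rbd pool init rbd_pool
    # verify which applications are set on the pool:
    ceph osd pool application get rbd_pool

Without the tag, I/O itself still works, but recent releases raise a
POOL_APP_NOT_ENABLED health warning for untagged pools.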
Thank you Martin! I am familiar with that process, I just didn’t understand
that the monmap was the only difference between the monitor databases. This
makes sense if Paxos is maintaining the full DB synchronization, but the
structure and contents of the database were not clear to me. I have discovered
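If you want to see for yourself what a monitor keeps in its store, the monmap
can be pulled out of a stopped mon's database; a minimal sketch, with a
hypothetical mon id 'a':

    # with the mon daemon stopped:
    ceph-monstore-tool /var/lib/ceph/mon/ceph-a get monmap -- --out /tmp/monmap
    monmaptool --print /tmp/monmap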
Hello Mike,
do your OSDs go down from time to time? I once had an issue with
unrecoverable objects because I had only n+1 (size 2) redundancy and
Ceph wasn't able to decide which copy of the object was the correct one. In
my case there were half-deleted snapshots in one of the copies. I used
ceph-objectstore-tool to sort that out.
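For anyone reading along, a minimal sketch of that kind of offline inspection
(OSD id, data path and PG id are placeholders; the OSD must be stopped first):

    systemctl stop ceph-osd@0
    # list the objects of one PG as stored on this OSD:
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 2.5 --op list
    # a copy of the whole PG can also be exported for safekeeping:
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 2.5 \
        --op export --file /tmp/pg2.5.export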
Dear Michael,
> I have other tasks I need to perform on the filesystem (removing OSDs,
> adding new OSDs, increasing PG count), but I feel like I need to address
> these degraded/lost objects before risking any more damage.
I would probably not attempt any such maintenance before there was a peri
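Before any of that, the unfound objects can at least be enumerated, and only as
a very last resort written off; a minimal sketch (the PG id 2.5 is a
placeholder):

    ceph health detail
    ceph pg 2.5 list_unfound
    # last resort - this gives up on the unfound objects (data loss):
    ceph pg 2.5 mark_unfound_lost revert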
Hello Brian,
as long as you have at least one working MON, it's fairly easy to recover.
Shut down all MONs, modify the monmap by hand so that it contains only one of
the working MONs, and then start that one up. After that, redeploy the other
MONs to get your quorum and redundancy back.
You can find more details in the Ceph documentation.
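A minimal sketch of that monmap surgery, assuming the surviving mon is 'mon-a'
and the dead ones are 'mon-b' and 'mon-c' (hypothetical names; all mons stopped
first):

    ceph-mon -i mon-a --extract-monmap /tmp/monmap
    monmaptool --print /tmp/monmap
    monmaptool --rm mon-b /tmp/monmap
    monmaptool --rm mon-c /tmp/monmap
    ceph-mon -i mon-a --inject-monmap /tmp/monmap
    systemctl start ceph-mon@mon-a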
Thanks, Anthony, for your quick response.
I'll remove the disk and replace it.
Javier.-
On 10/10/20 at 00:17, Anthony D'Atri wrote:
* Monitors now have a config option ``mon_osd_warn_num_repaired``, 10 by
default.
  If any OSD has repaired more than this many I/O errors in stored data, an
  ``OSD_TOO_MANY_REPAIRS`` health warning is generated.
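If the repairs are all coming from the disk that is about to be replaced
anyway, the threshold can be raised or the warning muted; e.g.:

    ceph config set mon mon_osd_warn_num_repaired 20
    # or temporarily silence the existing warning (optionally with a duration):
    ceph health mute OSD_TOO_MANY_REPAIRS 1w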
Is it possible to disable the check for 'x pool(s) have no replicas
configured', so that I don't get this HEALTH_WARN constantly?
Or is there some other disadvantage of keeping some empty 1x replication
test pools?
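For what it's worth, there is a dedicated knob for this particular warning; a
minimal sketch of two ways to quiet it:

    # disable the check entirely:
    ceph config set mon mon_warn_on_pool_no_redundancy false
    # or mute just the health code, optionally for a limited time:
    ceph health mute POOL_NO_REDUNDANCY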