On Tue, Dec 13, 2022 at 2:02 PM Mevludin Blazevic
wrote:
>
> Hi all,
>
> in Ceph Pacific 16.2.5, the MDS failover does not work. The
> host with the active MDS had to be rebooted, and after that the
> standby daemons did not take over. The fs was not accessible; instead all
> mds rema
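For anyone hitting a similar state: a few standard Ceph commands for inspecting MDS/standby state and forcing a failover (a sketch only; `<fsname>` and the MDS daemon name are placeholders for your own values):

```shell
# Show the filesystem and which MDS ranks are active vs. standby
ceph fs status
ceph mds stat

# A degraded fs can be stuck refusing new MDS daemons; make sure it is joinable
ceph fs set <fsname> joinable true

# Manually fail the active MDS so a standby can take over the rank
ceph mds fail <mds-daemon-name>
```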
Hi,
thanks for the quick response!
CEPH STATUS:

  cluster:
    id:     8c774934-1535-11ec-973e-525400130e4f
    health: HEALTH_ERR
            7 failed cephadm daemon(s)
            There are daemons running an older version of ceph
            1 filesystem is degraded
            1 filesystem ha
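With HEALTH_ERR plus failed cephadm daemons, these standard diagnostics usually narrow things down (generic commands, not specific to this cluster):

```shell
# Expand the health summary into per-item detail
ceph health detail

# List the MDS daemons cephadm manages and their current state
ceph orch ps --daemon-type mds

# On the affected host: what cephadm sees in the local containers
cephadm ls
```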
On Tue, Dec 13, 2022 at 2:21 PM Mevludin Blazevic
wrote:
>
> Hi,
>
> thanks for the quick response!
>
> CEPH STATUS:
>
>   cluster:
>     id:     8c774934-1535-11ec-973e-525400130e4f
>     health: HEALTH_ERR
>             7 failed cephadm daemon(s)
>             There are daemons running an olde
Hi,
while upgrading to Ceph Pacific 16.2.7, the upgrade process got stuck exactly
at the MDS daemons. Before that, I had tried to increase/shrink their
placement size, but nothing happened. Currently I have 4/3
running daemons; one daemon should be stopped and removed.
Do you suggest to force re
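A stuck 4/3 placement can usually be reconciled with the cephadm orchestrator; a sketch of the usual commands (`<fsname>` and the daemon name are placeholders, and `--force` should be a last resort):

```shell
# Re-apply the desired MDS placement; cephadm should then
# stop and remove the surplus daemon on its own
ceph orch apply mds <fsname> --placement="3"

# If one specific daemon has to go, remove it directly
ceph orch daemon rm mds.<fsname>.<host>.<suffix> --force

# Check whether the upgrade is actually progressing or paused
ceph orch upgrade status
```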
On Thu, Dec 15, 2022 at 7:24 AM Mevludin Blazevic
wrote:
>
> Hi,
>
> while upgrading to Ceph Pacific 16.2.7, the upgrade process got stuck exactly
> at the MDS daemons. Before that, I had tried to increase/shrink their
> placement size, but nothing happened. Currently I have 4/3
> running daemons. One
Ceph fs dump:
e62
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client
writeable ranges,3=default file layouts on dirs,4=dir inode in separate
object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
anchor table,9=file
On Thu, Dec 15, 2022 at 3:17 PM Mevludin Blazevic
wrote:
>
> Ceph fs dump:
>
> e62
> enable_multiple, ever_enabled_multiple: 1,1
> default compat: compat={},rocompat={},incompat={1=base v0.20,2=client
> writeable ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses ver
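For a failover problem, the interesting parts of `ceph fs dump` are further down in the output: the per-rank states (up:active, up:replay, ...) and the standby count. A few ways to pull them out (a sketch; `<fsname>` is a placeholder):

```shell
# Full FSMap, including per-rank MDS states and the standby list
ceph fs dump

# Structured view of a single filesystem's MDSMap
ceph fs get <fsname>

# Rough count of standby daemons available to take over a failed rank
ceph fs dump | grep -c standby
```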