This is supposed to indicate that the directory is hot and being replicated
to another active MDS to spread the load.

But skimming the code, it looks like there may be a bug: this path isn't
gated on having multiple active MDSs the way it's supposed to be. (Though I
don't anticipate any issues for you.) Patrick, Zheng, any thoughts?
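
For illustration, here's a minimal standalone sketch of that threshold
check, modeled on my reading of the Luminous MDBalancer. The config option
mds_bal_replicate_threshold is real (default 8000, if I remember right);
the variable names and arithmetic are a paraphrase from memory, not the
actual code:

    // Standalone model of the balancer's "replicate?" decision.
    // Numbers are taken from the log line quoted below.
    #include <iostream>

    int main() {
        double dir_pop = 13139;     // smoothed popularity of the dirfrag
        double rdp = 191;           // this rank's own read popularity
        int num_in_mds = 1;         // David reports a single active MDS
        double threshold = 8000;    // default mds_bal_replicate_threshold

        if (dir_pop >= threshold) {
            // rd_adj estimates how much read load replication would shed.
            // With one active MDS it comes out to 0 -- nothing to shed --
            // which is why I suspect this path isn't gated on multi-active
            // the way it should be.
            double rd_adj = (rdp / num_in_mds - rdp) / 2.0;
            std::cout << "replicating dir ... pop " << dir_pop
                      << " .. rdp " << rdp << " adj " << rd_adj << "\n";
        }
        return 0;
    }

Note how the "adj 0" in the log below falls out of num_in_mds == 1.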
-Greg

On Mon, Sep 25, 2017 at 4:59 AM David <dclistsli...@gmail.com> wrote:

> Hi All
>
> Since upgrading a cluster from Jewel to Luminous, I'm seeing a lot of the
> following line in my ceph-mds log (path name changed by me; the messages
> refer to different dirs):
>
> 2017-09-25 12:47:23.073525 7f06df730700  0 mds.0.bal replicating dir [dir
> 0x10000003e5b /path/to/dir/ [2,head] auth v=50477 cv=50465/50465 ap=0+3+4
> state=1610612738|complete f(v0 m2017-03-27 11:04:17.935529 51=19+32)
> n(v3297 rc2017-09-25 12:46:13.379651 b14050737379 13086=10218+2868)/n(v3297
> rc2017-09-25 12:46:13.052651 b14050862881 13083=10215+2868) hs=51+0,ss=0+0
> dirty=1 | child=1 dirty=1 waiter=0 authpin=0 0x7f0707298000] pop 13139 ..
> rdp 191 adj 0
>
> I've not had any issues reported; I'm just interested to know why I'm
> suddenly seeing a lot of these messages, as the client versions and
> workload haven't changed. Anything to be concerned about?
>
> Single MDS with standby-replay
> Luminous 12.2.0
> Kernel clients: 3.10.0-514.2.2.el7.x86_64
>
> Thanks,
> David
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
