Dear Cephers,

      I have a Ceph cluster with 5 nodes, 5*24 = 120 OSDs, all
running Octopus 15.2.8. I encounter the same problem frequently, but I
cannot find the cause.
     https://tracker.ceph.com/issues/20742
      I see that its status is 'Can't reproduce'.


      Maybe my workload can help reproduce it.


      Three ceph-fuse clients write sequentially to three separate
directories, and the problem occurs when PGs remap.


      If there are no PG remaps, there is no problem.
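In case it helps anyone trying to reproduce, here is a minimal sketch of the write pattern. The mount points, file sizes, and file counts are assumptions, not the exact workload:

```shell
#!/bin/sh
# Sketch of the workload: one sequential writer per directory.
# In the real setup each directory is a separate ceph-fuse mount,
# mounted beforehand with something like (hypothetical mount points):
#   ceph-fuse /mnt/cephfs1
# and PG remaps are triggered while this runs (e.g. by reweighting an OSD).

write_seq() {
    dir=$1
    i=0
    while [ "$i" -lt 10 ]; do          # file count is an assumption
        # 4 MiB per file is an assumption; fsync to force writeback
        dd if=/dev/zero of="$dir/file$i" bs=1M count=4 conv=fsync 2>/dev/null
        i=$((i + 1))
    done
}

# Run the three writers concurrently, one per directory argument.
for d in "$@"; do
    write_seq "$d" &
done
wait
```

Invoked as `./writers.sh /mnt/cephfs1 /mnt/cephfs2 /mnt/cephfs3` (names are illustrative), this gives three concurrent sequential writers, one per directory.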
      


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
