I have run into this situation several times, due to strange behavior in the XFS filesystem. I initially ran on Debian, then reinstalled the nodes with CentOS 7 (kernel 3.10.0-229.14.1.el7.x86_64, package xfsprogs-3.2.1-6.el7.x86_64). At around 75-80% usage as reported by df, the disk is effectively already full.
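
In my case df still reported free space while XFS could no longer allocate. A quick way to check for this (the device and mount point below are made-up examples, not my actual layout):

    # Compare df's view with the XFS free-space extent histogram.
    df -h /var/lib/ceph/osd/ceph-3
    df -i /var/lib/ceph/osd/ceph-3    # rule out inode exhaustion, too

    # Many small free extents and few large ones would explain ENOSPC
    # at ~80% usage. -r opens the device read-only; the output can be
    # slightly stale on a mounted filesystem.
    xfs_db -r -c freesp /dev/sdb1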

To delete PGs in order to restart the OSD, I first lowered the weight of the affected OSD and observed which PGs started backfilling elsewhere. Then I deleted the local copies of some of those backfilling PGs from the full OSD before trying to restart it. It worked without data loss.
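Roughly, the sequence was the following (a sketch only: the OSD id 3 and PG id 1.2f are made-up examples, and a PG copy should only be deleted once the cluster holds healthy copies of it elsewhere):

    # Lower the weight so PGs start backfilling off the full OSD.
    ceph osd reweight 3 0.8

    # Watch which PGs enter a backfill state.
    ceph pg dump pgs_brief | grep backfill

    # On the OSD host (Jewel/FileStore layout), remove the local copy
    # of one of those PGs while the OSD is down, then start it again.
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 \
        --journal-path /var/lib/ceph/osd/ceph-3/journal \
        --pgid 1.2f --op remove
    systemctl start ceph-osd@3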


On 08-08-2016 08:19, Mykola Dvornik wrote:
@Shinobu

According to
http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/

"If you cannot start an OSD because it is full, you may delete some data by deleting some placement group directories in the full OSD."


On 8 August 2016 at 13:16, Shinobu Kinjo <shinobu...@gmail.com> wrote:

    On Mon, Aug 8, 2016 at 8:01 PM, Mykola Dvornik <mykola.dvor...@gmail.com> wrote:
    > Dear ceph community,
    >
    > One of the OSDs in my cluster cannot start due to the
    >
    > ERROR: osd init failed: (28) No space left on device
    >
    > A while ago it was recommended to manually delete PGs on the OSD to
    > let it start.

    Who recommended that?

    >
    > So I am wondering what is the recommended way to fix this issue for
    > the cluster running Jewel release (10.2.2)?
    >
    > Regards,
    >
    > --
    >  Mykola
    >



    --
    Email:
    shin...@linux.com
    shin...@redhat.com




--
 Mykola



--
Gerd Jakobovitsch
Technology Directorate, Mandic Cloud Solutions
+55 11 3030-3456




--

As informações contidas nesta mensagem são CONFIDENCIAIS, protegidas pelo 
sigilo legal e por direitos autorais. A divulgação, distribuição, reprodução ou 
qualquer forma de utilização do teor deste documento depende de autorização do 
emissor, sujeitando-se o infrator às sanções legais. Caso esta comunicação 
tenha sido recebida por engano, favor avisar imediatamente, respondendo esta 
mensagem.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
