Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-18 Thread Adeel Nazir
Discard is supported in kernel 3.18-rc1 or greater, as per
https://lkml.org/lkml/2014/10/14/450
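
For anyone on 3.18-rc1 or later who wants to check it, a minimal sketch, assuming a pool 'rbd' and an image 'myimage' (both placeholders):

  rbd map rbd/myimage                # maps the image, e.g. as /dev/rbd0
  mkfs.ext4 /dev/rbd0
  mount -o discard /dev/rbd0 /mnt    # online discard on every delete
  fstrim -v /mnt                     # or trim manually/periodically instead

A rough way to see whether space was actually reclaimed is to sum the allocated extents:

  rbd diff rbd/myimage | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'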


 -----Original Message-----
 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
 Robert Sander
 Sent: Friday, December 12, 2014 7:01 AM
 To: ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] Ceph Block device and Trim/Discard
 
 On 12.12.2014 12:48, Max Power wrote:
 
  It would be great to shrink the used space. Is there a way to achieve
  this? Or have I done something wrong? In a professional environment you
  may be able to live with filesystems that only grow, but on my small
  home-cluster this really is a problem.
 
 As Wido already mentioned, the kernel RBD does not support discard.
 
 When using qemu+rbd you cannot use the virtio driver, as it also does not
 support discard. My best experience is with the virtual SATA driver and the
 options cache=writeback and discard=on.
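
 For example (a sketch only - the pool and image names here are placeholders,
 and the exact RBD drive string depends on your setup):

   qemu-system-x86_64 ... \
     -drive file=rbd:rbd/vm-disk,format=raw,if=none,id=sata0,cache=writeback,discard=on \
     -device ich9-ahci,id=ahci \
     -device ide-hd,drive=sata0,bus=ahci.0

 The guest still has to issue the trims, e.g. by mounting its filesystems with
 -o discard or running fstrim from time to time.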
 
 Regards
 --
 Robert Sander
 Heinlein Support GmbH
 Schwedter Str. 8/9b, 10119 Berlin
 
 http://www.heinlein-support.de
 
 Tel: 030 / 405051-43
 Fax: 030 / 405051-19
 
 Mandatory information per §35a GmbHG:
 HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
 Managing Director: Peer Heinlein -- Registered office: Berlin



Re: [ceph-users] Proper procedure for osd/host removal

2014-12-15 Thread Adeel Nazir
I'm going through something similar, and it seems like the double backfill
you're experiencing is about par for the course. According to the CERN
presentation (http://www.slideshare.net/Inktank_Ceph/scaling-ceph-at-cern, slide
19), doing a 'ceph osd crush rm osd.ID' should avoid the double backfill, but
I haven't seen that in my 0.80.5 cluster. Even after I do the crush rm, and
finally remove the OSD via 'ceph osd rm osd.ID', it computes a new map and does
the backfill again. As far as I can tell, there's no way around it without
editing the CRUSH map manually, making whatever changes you require and then
pushing the new map. I personally am not experienced enough to feel comfortable
making that kind of a change.
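
For reference, a rough sketch of what that manual route would involve (untested
on my end; the file names are just placeholders):

  ceph osd getcrushmap -o crush.bin      # grab the current CRUSH map
  crushtool -d crush.bin -o crush.txt    # decompile it to editable text
  # edit crush.txt: drop the OSD and adjust the host bucket weight
  crushtool -c crush.txt -o crush.new    # recompile
  ceph osd setcrushmap -i crush.new      # push the new map in one step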


Adeel

 -----Original Message-----
 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
 Dinu Vlad
 Sent: Monday, December 15, 2014 11:35 AM
 To: ceph-users@lists.ceph.com
 Subject: [ceph-users] Proper procedure for osd/host removal
 
 Hello,
 
 I've been working to upgrade the hardware on a semi-production ceph
 cluster, following the instructions for OSD removal from
 http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual.
 Basically, I've added the new hosts to the cluster and now I'm removing
 the old ones from it.
 
 What I found curious is that after the sync triggered by 'ceph osd out ID'
 finishes and I stop the osd process and remove it from the crush map,
 another round of synchronization is triggered - sometimes this one takes
 longer than the first. Also, removing an empty host bucket from the crush
 map triggered another resynchronization.
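
 For reference, the sequence I'm following from that page is roughly the
 following, with ID as a placeholder:

   ceph osd out ID                   # then wait for the backfill to finish (ceph -w)
   sudo stop ceph-osd id=ID          # on the OSD host (upstart on Ubuntu)
   ceph osd crush remove osd.ID      # this is where the second sync kicks in
   ceph auth del osd.ID
   ceph osd rm ID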
 
 I noticed that the overall weight of the host bucket does not change in the
 crush map as a result of one OSD being marked out, so what is happening is
 more or less normal behavior - however, it remains time-consuming. Is there
 something that can be done to avoid the double resync?
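
 The weights I'm referring to are the ones visible with:

   ceph osd tree            # per-OSD and per-bucket CRUSH weights
   ceph osd crush dump      # full CRUSH hierarchy as JSON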
 
 I'm running 0.72.2 on top of Ubuntu 12.04 on the OSD hosts.
 
 Thanks,
 Dinu