On Thursday, November 22, 2012 at 4:33 AM, Jimmy Tang wrote:
> Hi All,
>  
> Is it possible at this point in time to set up some form of tiering of storage 
> pools in Ceph by modifying the CRUSH map? For example, I want to have my most 
> recently used data on a small set of nodes that have SSDs and, over time, 
> migrate data from the SSDs to some bulk spinning disk using an LRU policy?
There's no way to have Ceph do this automatically at this time. Tiering in this 
fashion traditionally requires the sort of centralized metadata that Ceph and 
RADOS are designed to avoid, and while interest in it is heating up, we haven't 
yet come up with a new solution. ;)

If your system allows you to do this manually, though — yes. You can create 
multiple (non-overlapping, presumably) trees within your CRUSH map, one of 
which would be an "SSD" storage group and one of which would be a "normal" 
storage group. Then create a CRUSH rule which draws from the SSD group and a 
rule which draws from the normal group, create a pool using each of those, and 
write to whichever pool is appropriate at the time.
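As a rough sketch of what that could look like in a decompiled CRUSH map — the bucket names, ids, weights, and ruleset numbers below are all made up for illustration, and the host buckets are assumed to be declared earlier in the map:

```
# Hypothetical fragment: two non-overlapping trees, one per device class.
root ssd {
        id -10                          # example internal bucket id
        alg straw
        hash 0
        item ssd-host1 weight 1.00      # host buckets defined elsewhere in the map
        item ssd-host2 weight 1.00
}
root platter {
        id -11
        alg straw
        hash 0
        item hdd-host1 weight 4.00
        item hdd-host2 weight 4.00
}

# One rule per tree; a pool pointed at ruleset 4 lands only on SSD hosts.
rule ssd {
        ruleset 4
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}
rule platter {
        ruleset 5
        type replicated
        min_size 1
        max_size 10
        step take platter
        step chooseleaf firstn 0 type host
        step emit
}
```

You'd then create a pool for each and point it at the matching ruleset, something along the lines of `ceph osd pool create ssd-pool 128` followed by `ceph osd pool set ssd-pool crush_ruleset 4` (pool name and PG count are placeholders).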
Alternatively, you could also place all the primaries on SSD storage but the 
replicas on regular drives — this won't speed up your writes much but will mean 
SSD-speed reads. :)
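That primaries-on-SSD layout can be expressed in a single CRUSH rule with two take/emit passes — again a sketch, reusing the hypothetical "ssd" and "platter" roots from above:

```
# Hypothetical rule: first replica (the primary) from the ssd tree,
# remaining replicas from the platter tree.
rule ssd-primary {
        ruleset 6
        type replicated
        min_size 2
        max_size 10
        step take ssd
        step chooseleaf firstn 1 type host      # pick 1 host for the primary
        step emit
        step take platter
        step chooseleaf firstn -1 type host     # pick (pool size - 1) more hosts
        step emit
}
```

`firstn 1` selects exactly one OSD (which becomes the primary, and thus serves reads), while `firstn -1` fills in the rest of the replica count from the spinning-disk tree.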
-Greg
  
>  
> Regards,
> Jimmy Tang
>  
> --
> Senior Software Engineer, Digital Repository of Ireland (DRI)
> Trinity Centre for High Performance Computing,
> Lloyd Building, Trinity College Dublin, Dublin 2, Ireland.
> http://www.tchpc.tcd.ie/ | jt...@tchpc.tcd.ie
>  
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html


