Re: [ceph-users] OSD capacity variance ?
Hi Howard,

By default each OSD is weighted automatically based on its capacity, so the smaller OSDs will receive less data than the bigger ones. Be careful in this case to properly monitor the utilization of all OSDs in your cluster so that none of them reaches the OSD full ratio. Read this link for a better view of Ceph's data placement mechanisms.

Cheers
JC

On Jan 31, 2015, at 14:39, Howard Thomson h...@thomsons.co.uk wrote:
> Hi All,
> I am developing a custom disk storage backend for the Bacula backup system, and am in the process of setting up a trial Ceph system, intending to use a direct interface to RADOS.
> I have a variety of 1Tb, 250Mb and 160Mb disk drives that I would like to use, but it is not [as yet] obvious as to whether having differences in capacity at different OSDs matters.
> Can anyone comment, or point me in the right direction on docs.ceph.com ?
> Thanks, Howard

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
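JC's warning about one OSD hitting the full ratio before the others can be sketched as a small check. This is a hypothetical illustration, not Ceph code: the `flag_full_osds` helper and the `usage` data are made up, and the 0.85/0.95 thresholds mirror Ceph's default nearfull/full ratios (`mon_osd_nearfull_ratio` / `mon_osd_full_ratio`).

```python
# Hypothetical sketch: flag OSDs approaching the cluster's full ratio.
# Thresholds mirror Ceph's defaults (nearfull 0.85, full 0.95); in a
# real cluster you would read utilization from `ceph osd df` instead.

def flag_full_osds(utilization, nearfull=0.85, full=0.95):
    """utilization: dict mapping OSD name -> used fraction (0.0-1.0)."""
    warnings = {}
    for osd, used in utilization.items():
        if used >= full:
            warnings[osd] = "full"
        elif used >= nearfull:
            warnings[osd] = "nearfull"
    return warnings

# With mixed capacities, the smallest OSD often fills first even though
# CRUSH weights it down, so it is the one to watch.
usage = {"osd.0": 0.40, "osd.1": 0.87, "osd.2": 0.96}
alerts = flag_full_osds(usage)
print(alerts)
```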
Re: [ceph-users] OSD capacity variance ?
Hi Howard,

I assume the 160 MB and 250 MB figures are a typo for GB. Ceph OSDs must be at least 10 GB to get a weight of 0.01.

Udo

On 31.01.2015 23:39, Howard Thomson wrote:
> [quoted message snipped]
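Udo's 10 GB minimum follows from the weighting convention of 1.00 per TB shown to two decimal places. A minimal sketch (the `crush_weight` helper is hypothetical, for illustration only):

```python
# Hypothetical sketch of the rounding Udo describes: CRUSH weights are
# conventionally 1.00 per TB, displayed to two decimals, so a device
# under ~10 GB rounds down to a weight of 0.00 and receives no data.

def crush_weight(capacity_gb):
    return round(capacity_gb / 1000.0, 2)

print(crush_weight(160))   # a 160 GB disk -> 0.16
print(crush_weight(0.16))  # a 160 MB disk -> 0.0, too small to weight
```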
Re: [ceph-users] OSD capacity variance ?
Hello,

As far as I know, you should weight the OSDs according to their capacity in the CRUSH map. Here is a sample from the Ceph docs (http://ceph.com/docs/master/rados/operations/crush-map/):

    Weighting Bucket Items

    Ceph expresses bucket weights as doubles, which allows for fine
    weighting. A weight is the relative difference between device
    capacities. We recommend using 1.00 as the relative weight for a 1TB
    storage device. In such a scenario, a weight of 0.5 would represent
    approximately 500GB, and a weight of 3.00 would represent
    approximately 3TB. Higher level buckets have a weight that is the sum
    total of the leaf items aggregated by the bucket.

Regards,
Sudarshan Pathak

On Sun, Feb 1, 2015 at 4:24 AM, Howard Thomson h...@thomsons.co.uk wrote:
> [quoted message snipped]
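Applying the quoted convention (1.00 per TB) to Howard's drive mix, assuming his sizes are really 1 TB, 250 GB, and 160 GB, shows how unevenly the data would land. This is a hypothetical illustration; the helper names and OSD ids are made up:

```python
# Hypothetical sketch: CRUSH weights at 1.00 per TB, and the fraction of
# cluster data each OSD would receive in proportion to its weight.

def weights_for(capacities_gb):
    return {name: gb / 1000.0 for name, gb in capacities_gb.items()}

drives = {"osd.0": 1000, "osd.1": 250, "osd.2": 160}  # GB, per Howard's mix
w = weights_for(drives)
total = sum(w.values())
shares = {name: round(wt / total, 3) for name, wt in w.items()}
print(shares)  # the 1 TB disk takes roughly 70% of the data
```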
[ceph-users] OSD capacity variance ?
Hi All,

I am developing a custom disk storage backend for the Bacula backup system, and am in the process of setting up a trial Ceph system, intending to use a direct interface to RADOS.

I have a variety of 1Tb, 250Mb and 160Mb disk drives that I would like to use, but it is not [as yet] obvious as to whether having differences in capacity at different OSDs matters.

Can anyone comment, or point me in the right direction on docs.ceph.com ?

Thanks, Howard