On Thu, 12 Apr 2012, Bernard Grymonpon wrote:
On 09 Apr 2012, at 21:22, Tommi Virtanen wrote:
On Mon, Apr 9, 2012 at 11:16, Sage Weil s...@newdream.net wrote:
One thing we need to keep in mind here is that the individual disks are
placed in the CRUSH hierarchy based on the host/rack/etc location in the
datacenter. Moving disks around arbitrarily will break the placement
constraints if
On Fri, Apr 6, 2012 at 12:45, Bernard Grymonpon bern...@openminds.be wrote:
Let's go wild and say, if you have hundreds of machines, summing up to
thousands of disks, all already migrated/moved to other machines/..., and
it reports that OSD 536 is offline, how will you find which disk is
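Bernard's question — how to find which physical disk backs a given OSD — is exactly what a UUID-keyed scheme makes answerable. A minimal sketch, assuming (hypothetically, for illustration only) that each OSD's data directory records the UUID of its backing filesystem in a file named `uuid`; that file name and layout are not the actual on-disk format:

```shell
# Sketch: resolve an OSD id to the /dev node of its backing disk.
# ASSUMPTION: /var/lib/ceph/osd/$id/uuid holds the filesystem UUID
# (hypothetical layout, used only to illustrate the lookup).
find_osd_device() {
    uuid_file="/var/lib/ceph/osd/$1/uuid"
    if [ -r "$uuid_file" ]; then
        blkid -U "$(cat "$uuid_file")"   # maps a filesystem UUID to a device node
    else
        echo "no uuid recorded for osd.$1" >&2
        return 1
    fi
}

find_osd_device 536 || true
```

With that mapping, "OSD 536 is offline" turns into a concrete device path regardless of how often disks have been shuffled between machines.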
On Fri, 6 Apr 2012, Tommi Virtanen wrote:
On Thu, Apr 5, 2012 at 22:12, Sage Weil s...@newdream.net wrote:
Here's what I'm thinking:
- No data paths are hard-coded except for /etc/ceph/*.conf
- We initially mount osd volumes in some temporary location (say,
/var/lib/ceph/temp/$uuid)
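The temporary, UUID-named mount point Sage describes is independent of cluster name and osd id, so a volume can be attached before its identity is known. A runnable sketch of just the path convention, using a scratch root and a made-up UUID (a real mount needs privileges and a real device):

```shell
# Stand-in for mounting a freshly discovered OSD volume: the mount point is
# keyed only by the volume's UUID, not by cluster or osd id.
root="${TMPDIR:-/tmp}/ceph-demo"              # scratch root; a real system uses /
uuid="4d3c9a70-1111-2222-3333-444455556666"   # illustrative volume UUID
tmp_mount="$root/var/lib/ceph/temp/$uuid"
mkdir -p "$tmp_mount"       # a real system would mount the volume here
echo "$tmp_mount"
```

Because the temporary path encodes nothing but the UUID, the same mount step works before the tooling has decided which cluster and daemon the volume belongs to.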
On Fri, Apr 6, 2012 at 00:37, Bernard Grymonpon bern...@openminds.be wrote:
Storage lives by UUIDs; I would suggest moving the code in the direction of
dropping all custom naming and labeling, and just sticking to UUIDs. I could
not care less if the data on that disk is internally
On Fri, Apr 6, 2012 at 10:55, Sage Weil s...@newdream.net wrote:
Hopefully we can keep things as general as possible, so that brave souls
can go out of bounds without getting bitten. For example, never parse the
directory name if the same information can be had from the directory
contents.
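Sage's rule — never parse the directory name when the same information can be read from the directory contents — might look like this sketch, with hypothetical marker files `type` and `id` (illustrative names, not the real on-disk format):

```shell
# Identity lives in the directory, not in its path: a tool reads marker
# files instead of parsing the basename. File names here are assumptions.
dir="${TMPDIR:-/tmp}/ceph-contents-demo"
mkdir -p "$dir"
echo osd > "$dir/type"
echo 536 > "$dir/id"
printf '%s.%s\n' "$(cat "$dir/type")" "$(cat "$dir/id")"   # prints "osd.536"
```

Since identity travels with the contents, a brave soul can mount the same data at any path without confusing the tooling.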
On Thu, 5 Apr 2012, Tommi Virtanen wrote:
As I think it is a very unusual scenario for a machine to participate in
multiple Ceph clusters, I'd vote for:
/var/lib/ceph/$type/$id
I really want to avoid having two different cases, two different code
paths to test, a more rare
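The single-hierarchy layout voted for above composes paths uniformly for every daemon type; this sketch only demonstrates the naming convention:

```shell
# /var/lib/ceph/$type/$id — one scheme, one code path, for every daemon type.
ceph_data_dir() {
    printf '/var/lib/ceph/%s/%s\n' "$1" "$2"
}

ceph_data_dir osd 536   # prints /var/lib/ceph/osd/536
ceph_data_dir mon a     # prints /var/lib/ceph/mon/a
```

This is the point of the objection that follows: a second, multi-cluster variant of the layout would double the cases to test, whereas a single scheme lets the same path logic serve every daemon.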