Re: ceph-deploy progress and CDS session

2013-08-06 Thread Mark Kirkwood
One thing that comes to mind is the ability to create (or activate)
OSDs with a custom CRUSH specification from (say) a supplied file.
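Today this kind of thing has to be done by hand after deployment; a minimal sketch of the manual route (the rule below is an illustrative fragment, and the bucket name `ssd` is an assumption about your CRUSH hierarchy):

```shell
# Write a custom CRUSH rule in the text format crushtool understands.
# This is only a fragment for illustration; a full map also needs
# devices, types, and bucket definitions.
cat > custom-crush.txt <<'EOF'
rule ssd_only {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take ssd
    step chooseleaf firstn 0 type host
    step emit
}
EOF

# On a live cluster one would then compile and inject the full map, e.g.:
#   crushtool -c custom-crush.txt -o custom-crush.map
#   ceph osd setcrushmap -i custom-crush.map
echo "wrote $(wc -l < custom-crush.txt) lines of CRUSH spec"
```

Having ceph-deploy take such a file at OSD create/activate time would save that round trip.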


Regards

Mark

On 03/08/13 06:02, Sage Weil wrote:

There is a session at CDS scheduled to discuss ceph-deploy (4:40pm PDT on
Monday).  We'll be going over what we currently have in backlog for
improvements, but if you have any opinions about what else ceph-deploy
should or should not do or areas where it is problematic, please reply to
this thread to let us know what you think, and/or join the CDS discussion
hangout.

For those who haven't noticed, we now have a full-time developer,
Alfredo Deza, who is working on ceph-deploy.  He's been making huge
progress over the last couple of weeks improving error reporting,
visibility into what ceph-deploy is doing, and fixing various bugs.  We
have a long list of things we want to do with the tool, but any feedback
from users is helpful to make sure we're working on the right things
first!



--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [ceph-users] ceph-deploy progress and CDS session

2013-08-02 Thread Eric Eastman

Hi,

First I would like to state that, with all its limitations, I have
managed to build multiple clusters with ceph-deploy; without it, I would
have been totally lost.  Things that I feel would improve it include:

A debug mode where it lists everything it is doing.  This will be
helpful in the future when I move to a more integrated tool than
ceph-deploy, as I could see exactly how ceph-deploy built my test cluster.
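In the meantime, one way to approximate this is to capture everything ceph-deploy prints to a log file; a sketch (the `ceph-deploy new` invocation is illustrative and is left commented out, with an `echo` stand-in so the sketch runs anywhere):

```shell
logfile="ceph-deploy-run.log"
# On a real admin host the command would be something like:
#   ceph-deploy new node1 2>&1 | tee "$logfile"
# Stand-in command so this sketch is runnable without a cluster:
echo "simulated ceph-deploy output" 2>&1 | tee "$logfile"
# The captured log can then be studied or diffed against later runs.
wc -l < "$logfile"
```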

Support for more types of Linux storage devices.  I have spent hours
trying to make it understand multipath devices, as I happen to have a
large number of these in my lab, but so far I have not made it work.
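For reference, this is the sort of invocation one would hope to see work for a multipath LUN (the host and device names are placeholders, using ceph-deploy's HOST:DISK syntax; the ceph-deploy lines are left commented since they need a real cluster):

```shell
# On a real node:
#   multipath -ll                                    # list multipath maps
#   ceph-deploy osd prepare node1:/dev/mapper/mpatha
# A quick portable check that a given path is a device-mapper node,
# which is how multipath devices appear:
dev=/dev/mapper/mpatha
case "$dev" in
  /dev/mapper/*) kind="device-mapper" ;;
  *)             kind="plain" ;;
esac
echo "$dev looks like a $kind block device path"
```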

Really good documentation on all the ceph-deploy options.

Lastly, this is not just a ceph-deploy thing, but documentation
explaining how things boot up and interact would help.  Ceph-deploy
depends on tools like ceph-disk to mount OSD disks on the servers during
boot, and I learned the hard way that if an OSD is on a LUN that is seen
by more than one OSD node, you can corrupt data, as each OSD node tries
to mount all the OSDs it can find.
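One conceivable guard against that failure mode: an OSD data directory carries a `whoami` file with its OSD id, so a node could refuse to activate ids it does not own.  A sketch of the idea (the id list and paths are illustrative assumptions; a throwaway directory stands in for the mounted LUN):

```shell
# Fake OSD data directory standing in for a mounted shared LUN.
osd_dir=$(mktemp -d)
echo 3 > "$osd_dir/whoami"

my_osds=" 1 3 5 "            # OSD ids this node may activate (assumption)
id=$(cat "$osd_dir/whoami")  # id the on-disk marker claims

case "$my_osds" in
  *" $id "*) verdict="safe to activate OSD $id" ;;
  *)         verdict="foreign OSD $id: skip to avoid corruption" ;;
esac
echo "$verdict"
```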

There is a session at CDS scheduled to discuss ceph-deploy (4:40pm PDT on
Monday).  We'll be going over what we currently have in backlog for
improvements, but if you have any opinions about what else ceph-deploy
should or should not do or areas where it is problematic, please reply to
this thread to let us know what you think, and/or join the CDS discussion
hangout.



Thanks,
Eric


ceph-deploy progress and CDS session

2013-08-02 Thread Sage Weil
There is a session at CDS scheduled to discuss ceph-deploy (4:40pm PDT on 
Monday).  We'll be going over what we currently have in backlog for 
improvements, but if you have any opinions about what else ceph-deploy 
should or should not do or areas where it is problematic, please reply to 
this thread to let us know what you think, and/or join the CDS discussion 
hangout.

For those who haven't noticed, we now have a full-time developer,
Alfredo Deza, who is working on ceph-deploy.  He's been making huge
progress over the last couple of weeks improving error reporting,
visibility into what ceph-deploy is doing, and fixing various bugs.  We
have a long list of things we want to do with the tool, but any feedback
from users is helpful to make sure we're working on the right things 
first!

sage

