On 01/24/2013 03:07 PM, Dan Mick wrote:
...
> Yeah; it's probably mostly just that one-OSD configurations are so
> uncommon that we never special-cased that small user set.  Also, you can
> run with a cluster in that state forever (well, until that one OSD dies
> at least); I do that regularly with the default vstart.sh local test
> cluster

Well, this goes back to the quick start guide: to me a more natural way
to start is with one host and then add another. That's what I was trying
to do; however, the quick start page ends with

"When your cluster echoes back HEALTH_OK, you may begin using Ceph."

and that doesn't happen with one host: you get "384 pgs stuck unclean"
instead of "HEALTH_OK". To me that means I may *not* begin using Ceph.

I did run "ceph osd pool set ... size 1" on each of the 3 default pools,
verified that it took with "ceph osd dump | grep 'rep size'", and gave
it a good half hour to settle. I still got "384 pgs stuck unclean" from
"ceph health".

So I redid it with 2 OSDs and got the expected HEALTH_OK right from
the start.

John,

a) a note saying "if you have only one OSD, you won't get HEALTH_OK
until you add another one, but you can start using the cluster anyway"
may be a useful addition to the quick start,

b) more importantly, if there are any plans to write more quick start
pages, I'd love to see "add another OSD (MDS, MON) to an existing
cluster in 5 minutes".

Thanks all,
-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
