Re: [ceph-users] perplexed by unmapped groups on fresh firefly install

2014-06-10 Thread Miki Habryn
Thanks, that did the trick! I think there are some puzzling things that
change depending on the timing of commands during setup, and at one point I
noticed that the script output said "Installing stable release Emperor" (or
the equivalent), so possibly I have no idea what my own commands are doing.
But, for posterity, the following script builds a working dual-OSD cluster
for me.

#!/bin/sh

set -x

# Tear down any previous installation on usrv1
sudo stop ceph-all
ceph-deploy uninstall usrv1
sudo rm -rf /var/lib/ceph/osd/ceph-*/*
ceph-deploy purgedata usrv1
ceph-deploy forgetkeys

# Start over with a fresh deployment directory and create the new cluster
rm -rf ~/ceph
mkdir ~/ceph
cd ~/ceph
ceph-deploy new usrv1

# Strip blank lines and the omap line from the generated ceph.conf,
# then append the single-node settings
perl -nli -e 'print unless /^$/ or /omap/' ceph.conf
cat >>ceph.conf <
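
(The rest of the script was cut off at the heredoc above. A rough
reconstruction of the remainder, based on the settings discussed in this
thread and the standard ceph-deploy quick start, would look something like
the following; the exact lines and OSD paths are assumptions, not what was
actually run.)

cat >>ceph.conf <<EOF
osd pool default size = 2
osd crush chooseleaf type = 0
EOF

ceph-deploy install usrv1
ceph-deploy mon create-initial
ceph-deploy osd prepare usrv1:/var/lib/ceph/osd/ceph-0 usrv1:/var/lib/ceph/osd/ceph-1
ceph-deploy osd activate usrv1:/var/lib/ceph/osd/ceph-0 usrv1:/var/lib/ceph/osd/ceph-1
ceph-deploy admin usrv1
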
On Mon, Jun 9, 2014, John Wilkins wrote:

> Miki,
>
> osd crush chooseleaf type is set to 1 by default, which means CRUSH tries to
> place each replica of a placement group on a separate node rather than on the
> same node. You would need to set it to 0 for a single-node cluster.
>
> John
>
>
> On Sun, Jun 8, 2014 at 10:40 PM, Miki Habryn  wrote:
>
>> I set up a single-node, dual-osd cluster following the Quick Start on
>> ceph.com with Firefly packages, adding "osd pool default size = 2".
>> All of the pgs came up in active+remapped or active+degraded status. I
>> read up on tunables and set them to optimal, to no result, so I added
>> a third osd instead. About 39 pgs moved to active status, but the rest
>> stayed in active+remapped or active+degraded. When I raised the
>> replication level to 3 with "ceph osd pool set ... size 3", all the
>> pgs went back to degraded or remapped. Just for kicks, I tried to set
>> the replication level to 1, and I still only got 39 pgs active. Is
>> there something obvious I'm doing wrong?
>>
>> m.
>
>
>
> --
> John Wilkins
> Senior Technical Writer
> Inktank
> john.wilk...@inktank.com
> (415) 425-9599
> http://inktank.com
>


Re: [ceph-users] perplexed by unmapped groups on fresh firefly install

2014-06-09 Thread John Wilkins
Miki,

osd crush chooseleaf type is set to 1 by default, which means CRUSH tries to
place each replica of a placement group on a separate node rather than on the
same node. You would need to set it to 0 for a single-node cluster.
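
For a fresh single-node deployment, one way to do that (a rough sketch, not
tested here) is to add the settings to ceph.conf before creating the initial
monitor:

[global]
osd pool default size = 2
osd crush chooseleaf type = 0

For a cluster that is already running, the equivalent is to change the CRUSH
rule so it chooses leaves of type osd rather than host, roughly:

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit: change "step chooseleaf firstn 0 type host" to "... type osd"
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new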

John


On Sun, Jun 8, 2014 at 10:40 PM, Miki Habryn  wrote:

> I set up a single-node, dual-osd cluster following the Quick Start on
> ceph.com with Firefly packages, adding "osd pool default size = 2".
> All of the pgs came up in active+remapped or active+degraded status. I
> read up on tunables and set them to optimal, to no result, so I added
> a third osd instead. About 39 pgs moved to active status, but the rest
> stayed in active+remapped or active+degraded. When I raised the
> replication level to 3 with "ceph osd pool set ... size 3", all the
> pgs went back to degraded or remapped. Just for kicks, I tried to set
> the replication level to 1, and I still only got 39 pgs active. Is
> there something obvious I'm doing wrong?
>
> m.



-- 
John Wilkins
Senior Technical Writer
Inktank
john.wilk...@inktank.com
(415) 425-9599
http://inktank.com


[ceph-users] perplexed by unmapped groups on fresh firefly install

2014-06-08 Thread Miki Habryn
I set up a single-node, dual-osd cluster following the Quick Start on
ceph.com with Firefly packages, adding "osd pool default size = 2".
All of the pgs came up in active+remapped or active+degraded status. I
read up on tunables and set them to optimal, to no result, so I added
a third osd instead. About 39 pgs moved to active status, but the rest
stayed in active+remapped or active+degraded. When I raised the
replication level to 3 with "ceph osd pool set ... size 3", all the
pgs went back to degraded or remapped. Just for kicks, I tried to set
the replication level to 1, and I still only got 39 pgs active. Is
there something obvious I'm doing wrong?
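
For reference, the commands were roughly the following (reconstructed from
memory, using the default rbd pool as the example):

ceph osd crush tunables optimal
ceph osd pool set rbd size 3
ceph osd pool set rbd size 1
ceph -s   # pgs still report active+remapped / active+degraded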

m.