> Can I safely remove the default pools?
Yes, as long as you're not using the default pools to store data, you can
delete them.
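For example, assuming the stock 'data', 'metadata' and 'rbd' pools are
still the empty defaults, something like the following should do it (the
pool name is given twice as a confirmation):

    ceph osd pool delete data data --yes-i-really-really-mean-it
    ceph osd pool delete metadata metadata --yes-i-really-really-mean-it
    ceph osd pool delete rbd rbd --yes-i-really-really-mean-it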

> Why is the total size about 1 TB? It should be about 500 GB since there
> are 2 replicas.
I'm assuming that you're talking about the output of 'ceph df' or 'rados
df'. These commands report *raw* storage capacity. It's up to you to
divide the raw capacity by the number of replicas. This is intentional,
since you could have multiple pools, each with a different replica count.
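
As a rough sketch of the math, assuming a single pool with 2 replicas
(the pool name 'rbd' here is just an example):

    ceph df                       # GLOBAL SIZE / AVAIL are raw capacity
    ceph osd pool get rbd size    # replica count for the pool, e.g. size: 2

    # usable capacity ~= raw capacity / replica count
    # e.g. ~1 TB raw / 2 replicas ~= ~500 GB of usable space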


By the way, I'd strongly urge you to re-deploy your OSDs with XFS instead of
BTRFS. The most recent data I've seen shows that BTRFS slows down drastically
after only a few hours with a high file count in the filesystem. Better to
re-deploy now than after you have data serving in production.
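
For reference, a rough sketch of re-deploying one OSD on XFS with
ceph-deploy (the node name and disk path below are placeholders, and the
exact flags may differ between ceph-deploy versions):

    # remove the old OSD from the cluster first:
    ceph osd out 1
    # stop the ceph-osd daemon on that node, then:
    ceph osd crush remove osd.1
    ceph auth del osd.1
    ceph osd rm 1

    # wipe the disk and re-create the OSD on XFS:
    ceph-deploy disk zap node1:/dev/sdX
    ceph-deploy osd create --fs-type xfs node1:/dev/sdX

Keep in mind that with only 2 OSDs and 2 replicas, your data has a single
copy while one OSD is being rebuilt, so do them one at a time.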

Thanks,

Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services


On Sat, Apr 19, 2014 at 5:51 PM, Gonzalo Aguilar Delgado <
gagui...@aguilardelgado.com> wrote:

> Hi Michael,
>
> It worked. I hadn't realized this because the docs install two OSD nodes
> and say the cluster should become active+clean after installing them.
> (Something that didn't work for me because of the 3-replica problem.)
>
> http://ceph.com/docs/master/start/quick-ceph-deploy/
>
> Now I can shut down the second node and still retrieve the data stored there.
>
> So my last questions are:
>
> Can I safely remove the default pools?
> Why is the total size about 1 TB? It should be about 500 GB since there
> are 2 replicas.
>
>
> Thank you very much for your help.
>
> PS: I will now try the OpenStack integration.
>
>
> On Sat, Apr 19, 2014 at 6:53, Michael J. Kidd <
> michael.k...@inktank.com> wrote:
>
> You may also want to check your 'min_size'... if it's 2, then PGs will be
> incomplete even with 1 complete copy.
>
> ceph osd dump | grep pool
>
> You can reduce the min size with the following syntax:
>
> ceph osd pool set <poolname> min_size 1
>
> Thanks,
> Michael J. Kidd
>
> Sent from my mobile device.  Please excuse brevity and typographical
> errors.
> On Apr 19, 2014 12:50 PM, "Jean-Charles Lopez" <jc.lo...@inktank.com>
> wrote:
>
>> Hi again
>>
>> Looked at your ceph -s.
>>
>> You have only 2 OSDs, one on each node. The default replica count is 2,
>> and the default crush map puts each replica on a different host, or maybe
>> you set it to 2 different OSDs. Anyway, when one of your OSDs goes down,
>> Ceph can no longer find another OSD to host the second replica it must
>> create.
>>
>> If we could look at your crushmap, we would know better.
>>
>> Recommendation: for efficient testing with the most options functionally
>> available, my best practice is to deploy a cluster with 3 nodes and 3 OSDs
>> each.
>>
>> Or make 1 node with 3 OSDs, modifying your crushmap to "choose type osd"
>> in your rulesets.
>>
>> JC
>>
>>
>> On Saturday, April 19, 2014, Gonzalo Aguilar Delgado <
>> gagui...@aguilardelgado.com> wrote:
>>
>>> Hi,
>>>
>>> I'm building a cluster in which two nodes replicate objects. I found
>>> that shutting down just one of the nodes (the second one) makes everything
>>> "incomplete".
>>>
>>> I cannot figure out why, since the crushmap looks good to me.
>>>
>>> After shutting down one node:
>>>
>>>     cluster 9028f4da-0d77-462b-be9b-dbdf7fa57771
>>>      health HEALTH_WARN 192 pgs incomplete; 96 pgs stuck inactive; 96
>>> pgs stuck unclean; 1/2 in osds are down
>>>      monmap e9: 1 mons at {blue-compute=172.16.0.119:6789/0}, election
>>> epoch 1, quorum 0 blue-compute
>>>      osdmap e73: 2 osds: 1 up, 2 in
>>>       pgmap v172: 192 pgs, 3 pools, 275 bytes data, 1 objects
>>>             7552 kB used, 919 GB / 921 GB avail
>>>                  192 incomplete
>>>
>>>
>>> Both nodes have a WD Caviar Black 500 GB disk with a btrfs filesystem on
>>> it. The full disk is used.
>>>
>>> I cannot understand why it does not replicate to both nodes.
>>>
>>> Can someone help?
>>>
>>> Best regards,
>>>
>>
>>
>> --
>> Sent while moving
>> Pardon my French and any spelling &| grammar glitches
>>
>>
>>