So it was a PG problem. I added a couple of OSDs per host, reconfigured the
CRUSH map, and the cluster began to work properly.
Thanks
Giuseppe
2015-04-14 19:02 GMT+02:00 Saverio Proto:
> No error message. You just exhaust the RAM and blow up the
> cluster because you have too many PGs.
>
> Saverio
Subject: Re: [ceph-users] Binding a pool to certain OSDs
I use this to quickly check pool stats:
[root@ceph-mon01 ceph]# ceph osd dump | grep pool
pool 0 'data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins
pg_num 64 pgp_num 64 last_change 1 flags ...
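If you only need the PG count for a single pool, a per-pool query works too
(the pool name here is just an example):
[root@ceph-mon01 ceph]# ceph osd pool get data pg_num
pg_num: 64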
You may also be interested in the cbt code that does this kind of thing
for creating cache tiers:
https://github.com/ceph/cbt/blob/master/cluster/ceph.py#L295
The idea is that you create a parallel crush hierarchy for the SSDs and
then you can assign that to the pool used for the cache tier.
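A minimal sketch of that assignment with the CLI, assuming a root bucket
named ssd-root that already contains the SSD hosts (all names here are
illustrative, not taken from the thread):

# rule that draws OSDs only from the SSD hierarchy, one replica per host
ceph osd crush rule create-simple ssd-rule ssd-root host
# check which id the new rule was given
ceph osd crush rule dump
# bind the cache pool to that rule (rule id 1 assumed)
ceph osd pool set hot-pool crush_ruleset 1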
No error message. You just exhaust the RAM and blow up the
cluster because you have too many PGs.
Saverio
2015-04-14 18:52 GMT+02:00 Giuseppe Civitella:
> Hi Saverio,
>
> I first made a test in my staging lab, where I have only 4 OSDs.
> On my mon servers (which run other services) I have
Subject: Re: [ceph-users] Binding a pool to certain OSDs
Hi Saverio,
I first made a test in my staging lab, where I have only 4 OSDs.
On my mon servers (which run other services) I have 16GB RAM, 15GB used but
5GB cached. On the OSD servers I have 3GB RAM, 3GB used but 2GB cached.
"ceph -s" tells me nothing about PGs; shouldn't I get an error message from Ceph?
You only have 4 OSDs?
How much RAM per server?
I think you already have too many PGs. Check your RAM usage.
Check the Ceph wiki guidelines for sizing the correct number of PGs.
Remember that every time you create a new pool you add PGs to the
system.
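For reference, the commonly cited rule of thumb from the Ceph docs, worked
for a 4-OSD cluster (the result is a total across all pools, not per pool):

total_pgs ~= (num_osds * 100) / replica_count
           = (4 * 100) / 3
          ~= 133, rounded to the nearest power of two: 128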
Saverio
2015-04-14 17:58 GMT+02:00 Giuseppe Civitella:
Hi all,
I've been following this tutorial to build my setup:
http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
I got this CRUSH map from my test lab:
http://paste.openstack.org/show/203887/
then I modified the map and uploaded it. This is the final version:
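For reference, the standard round trip to pull the map out, edit it, and
load it back is (file names are arbitrary):

ceph osd getcrushmap -o crushmap.bin       # grab the compiled map
crushtool -d crushmap.bin -o crushmap.txt  # decompile to editable text
# ... edit crushmap.txt ...
crushtool -c crushmap.txt -o crushmap.new  # recompile
ceph osd setcrushmap -i crushmap.new       # upload to the cluster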
Hi Giuseppe,
There is also this article from Sébastien Han that you might find useful:
http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
Best regards,
Vincenzo.
2015-04-14 10:34 GMT+02:00 Saverio Proto:
> Yes you can.
> You have to write your own crushmap.
Yes you can.
You have to write your own crushmap.
At the end of the crushmap you have rulesets.
Write a ruleset that selects only the OSDs you want. Then you have to
assign the pool to that ruleset.
I have seen examples online from people who wanted some pools only on SSD
disks and other pools only on SATA disks.
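A sketch of what such a ruleset looks like in the decompiled crushmap,
assuming a root bucket named ssd-root that holds only the OSDs you want
(pre-Luminous syntax, matching the crush_ruleset field shown earlier):

rule ssd-only {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd-root
        step chooseleaf firstn 0 type host
        step emit
}

Then assign the pool (name illustrative) to that ruleset:

ceph osd pool set secure-pool crush_ruleset 1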
Hi all,
I've got a Ceph cluster which serves volumes to a Cinder installation. It
runs Emperor.
I'd like to be able to replace some of the disks with OPAL (self-encrypting)
disks and create a new pool which uses exclusively the latter kind of disk.
I'd like to have a "traditional" pool and a "secure" one coexisting in the
same cluster.