Hi Saverio,

I first ran a test on my staging lab, where I have only 4 OSDs.
On my mon servers (which also run other services) I have 16GB RAM, 15GB
used but 5GB of it cached. On the OSD servers I have 3GB RAM, 3GB used
but 2GB cached.
"ceph -s" tells me nothing about the PGs; if there really were too many,
shouldn't I see a warning in its output?
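
For reference, this is how I checked the per-pool PG counts (assuming
the standard ceph CLI here; "rbd" is just an example pool name):

    ceph osd dump | grep pg_num
    ceph osd pool get rbd pg_num

The pgmap line of "ceph -s" only seems to report the cluster-wide total.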

Thanks
Giuseppe

2015-04-14 18:20 GMT+02:00 Saverio Proto <ziopr...@gmail.com>:

> You only have 4 OSDs?
> How much RAM per server?
> I think you already have too many PGs. Check your RAM usage.
>
> Check the guidelines on the Ceph wiki for dimensioning the correct
> number of PGs. Remember that every time you create a new pool you add
> PGs to the system.
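>
> As a rough sketch of that guideline (assuming replica size 3 and
> counting all pools together):
>
>     total PGs ~= (number of OSDs * 100) / replica size
>     4 OSDs * 100 / 3 ~= 133 -> round to a power of two: 128
>
> With 576 PGs on 4 OSDs you are already well above that.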
>
> Saverio
>
>
> 2015-04-14 17:58 GMT+02:00 Giuseppe Civitella <giuseppe.civite...@gmail.com>:
> > Hi all,
> >
> > I've been following this tutorial to build my setup:
> > http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
> >
> > I got this CRUSH map from my test lab:
> > http://paste.openstack.org/show/203887/
> >
> > then I modified the map and uploaded it. This is the final version:
> > http://paste.openstack.org/show/203888/
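> >
> > The relevant change is one rule per root, roughly like this (the
> > names are from my map; the ruleset id is just whatever was free):
> >
> >     rule sed {
> >             ruleset 3
> >             type replicated
> >             min_size 1
> >             max_size 10
> >             step take sed
> >             step chooseleaf firstn 0 type host
> >             step emit
> >     }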
> >
> > When I applied the new CRUSH map, after some rebalancing, I got this
> > health status:
> > [-> avalon1 root@controller001 Ceph <-] # ceph -s
> >     cluster af09420b-4032-415e-93fc-6b60e9db064e
> >      health HEALTH_WARN crush map has legacy tunables;
> >        mon.controller001 low disk space; clock skew detected on
> >        mon.controller002
> >      monmap e1: 3 mons at {controller001=10.235.24.127:6789/0,
> >        controller002=10.235.24.128:6789/0,
> >        controller003=10.235.24.129:6789/0},
> >        election epoch 314, quorum 0,1,2
> >        controller001,controller002,controller003
> >      osdmap e3092: 4 osds: 4 up, 4 in
> >       pgmap v785873: 576 pgs, 6 pools, 71548 MB data, 18095 objects
> >             8842 MB used, 271 GB / 279 GB avail
> >                  576 active+clean
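> >
> > (As far as I understand, the warnings themselves can be chased down
> > separately; the commands below are my assumption of the usual fixes,
> > with <ntp-server> as a placeholder:
> >
> >     ceph osd crush tunables optimal   # should clear the legacy-tunables
> >                                       # warning, but triggers rebalancing
> >     ntpdate <ntp-server>              # on controller002, for the clock skew
> >
> > plus freeing disk space on controller001 for the mon store.)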
> >
> > and this osd tree:
> > [-> avalon1 root@controller001 Ceph <-] # ceph osd tree
> > # id    weight  type name       up/down reweight
> > -8      2       root sed
> > -5      1               host ceph001-sed
> > 2       1                       osd.2   up      1
> > -7      1               host ceph002-sed
> > 3       1                       osd.3   up      1
> > -1      2       root default
> > -4      1               host ceph001-sata
> > 0       1                       osd.0   up      1
> > -6      1               host ceph002-sata
> > 1       1                       osd.1   up      1
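> >
> > To double-check which rule each pool uses, I believe these standard
> > commands apply ("sata" is just an example pool name):
> >
> >     ceph osd crush rule dump
> >     ceph osd pool get sata crush_ruleset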
> >
> > which does not seem like a bad situation. The problem arises when I try
> > to create a new pool: the command "ceph osd pool create sed 128 128"
> > gets stuck and never returns. I also noticed that my Cinder installation
> > is no longer able to create volumes.
> > I've looked through the logs for errors and found nothing.
> > Any hints on how to restore my Ceph cluster?
> > Is there something wrong with the steps I took to update the CRUSH map?
> > Is the problem related to Emperor?
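> >
> > I suppose the usual diagnostics apply here; these are the standard
> > commands I know of, in case I'm missing an obvious one:
> >
> >     ceph health detail
> >     ceph pg dump_stuck inactive
> >     ceph mon stat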
> >
> > Regards,
> > Giuseppe
> >
> >
> >
> >
> > 2015-04-13 18:26 GMT+02:00 Giuseppe Civitella <giuseppe.civite...@gmail.com>:
> >>
> >> Hi all,
> >>
> >> I've got a Ceph cluster which serves volumes to a Cinder installation.
> >> It runs Emperor.
> >> I'd like to replace some of the disks with OPAL disks and create a new
> >> pool which uses exclusively that kind of disk, so that a "traditional"
> >> pool and a "secure" one coexist on the same Ceph host. I'd then use the
> >> Cinder multi-backend feature to serve them.
> >> My question is: how can I realize such a setup? How can I bind a pool
> >> to certain OSDs?
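> >>
> >> (What I have in mind, assuming it works the way I hope: a CRUSH rule
> >> rooted at the "secure" disks, with the pool pointed at it, e.g.
> >>
> >>     ceph osd crush rule create-simple secure-rule secure host
> >>     ceph osd pool set secure crush_ruleset <ruleset-id>
> >>
> >> where "secure" and "secure-rule" are names I made up and <ruleset-id>
> >> is whatever id the new rule gets.)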
> >>
> >> Thanks
> >> Giuseppe
> >
> >
> >
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
