Hi,

What values did you use for pg_num and pgp_num?
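
As a quick check, something along these lines should show the current values for every pool (the "rbd" pool name below is only an example):

    ceph osd lspools                  # list the pools
    ceph osd dump | grep pg_num       # pg_num / pgp_num for each pool
    ceph osd pool get rbd pg_num      # query a single pool
    ceph osd pool get rbd pgp_num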

Some useful documentation:
https://access.redhat.com/documentation/en/red-hat-ceph-storage/1.3/storage-strategies/chapter-14-pg-count#pg-count-for-small-clusters
http://www.sebastien-han.fr/blog/2015/06/16/ceph-is-complaining-too-many-pgs/
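
If I read the output below correctly, the math is roughly this: with only 2 OSDs, each OSD ends up hosting a copy of essentially every one of the 576 PGs, so the per-OSD count is 576, well over the default mon_pg_warn_max_per_osd = 300. That is also why removing and re-adding the OSDs changes nothing: the pools keep their pg_num, and pg_num cannot be decreased on an existing pool.

A rough sketch of the two usual ways out (again, "rbd" is only an example pool name, and deleting a pool destroys its data):

    # recreate each pool with a smaller pg_num; with 2 OSDs and 5 pools,
    # something like 32 PGs per pool stays well under the 300-per-OSD limit
    ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
    ceph osd pool create rbd 32 32

    # or just raise the warning threshold in ceph.conf
    # (silences the warning, does not reduce the actual PG count)
    [global]
    mon_pg_warn_max_per_osd = 600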

Thanks,
Marius

On Sun, Feb 7, 2016 at 7:02 PM, Mircea MITU <mir...@sigu.ro> wrote:
> Hi,
>
> On a system (CentOS 6), whenever I try to create a Ceph storage cluster (out of
> 2 OSDs), it ends up in HEALTH_WARN status as shown below:
>
> # ceph -w
>     cluster idxxxx
>      health HEALTH_WARN
>             too many PGs per OSD (576 > max 300)
>      monmap e1: 1 mons at {x=1.2.3.4:6789/0}
>             election epoch 1, quorum 0 vditest
>      osdmap e58: 2 osds: 2 up, 2 in
>       pgmap v35876: 576 pgs, 5 pools, 7506 MB data, 949 objects
>             25335 MB used, 1837 GB / 1862 GB avail
>                  576 active+clean
>   client io 40918 kB/s wr, 9 op/s
>
> No matter what I do (remove the OSDs, add them back, rebuild from scratch), I am
> still left with "too many PGs per OSD (576 > max 300)" and 576 active+clean.
> Does anyone have an idea where else I could look?
>
>
>
> Thanks,
> Mircea
_______________________________________________
RLUG mailing list
RLUG@lists.lug.ro
http://lists.lug.ro/mailman/listinfo/rlug
