root ssds {
    id -9       # do not change unnecessarily
    # weight 0.000
    alg straw
    hash 0  # rjenkins1
}

The "ssds" root is empty: weight 0.000 and no hosts or OSDs under it. Since your ssdpool rule does "step take ssds", CRUSH has nothing to map the PGs to, so they can never become active and any write to that pool (including "rbd create", which has to write the image header object) just hangs. You need to move your SSD hosts/OSDs under this root.
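A minimal sketch of how to do that from the CLI (the host bucket name, OSD id and weight below are made up, adjust them to your cluster); the same can be done by editing and recompiling the crushmap:

  ceph osd crush add-bucket ceph02-ssd host                  # hypothetical host bucket for the SSD OSDs
  ceph osd crush move ceph02-ssd root=ssds                   # hang it under the ssds root
  ceph osd crush set osd.24 0.5 root=ssds host=ceph02-ssd    # hypothetical OSD id and CRUSH weight

Repeat for each SSD host and OSD; afterwards "ceph osd tree" should show non-zero weight under root ssds.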



rule ssdpool {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take ssds
    step chooseleaf firstn 0 type host
    step emit
}
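With the ssds root empty, this rule has nowhere to place data, which you can confirm directly (the file and object names below are just examples, and --num-rep should match your pool size):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -i crushmap.bin --test --rule 1 --num-rep 2 --show-mappings
  # with the empty root every mapping comes back as [], i.e. no OSDs chosen
  ceph osd map ssdpool testobject
  # likewise shows an empty "up" set for the would-be placement

Once the SSD OSDs are under root ssds, re-run the crushtool test; every input should then map to the expected number of OSDs and the hanging "rbd create" should complete.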

At 2017-08-03 18:10:31, "Stanislav Kopp" <stask...@gmail.com> wrote:
>Hello,
>
>I was running a Ceph cluster with HDDs for the OSDs. Now I've created a
>new dedicated SSD pool within the same cluster. Everything looks fine and
>the cluster is "healthy", but if I try to create a new rbd image in this
>new SSD pool it just hangs. I've tried both the "rbd" command and the
>Proxmox GUI; "rbd" just hangs and Proxmox says "rbd error: got lock
>timeout". Creating volumes in the old pool is no problem. Is there any
>way to see what's wrong? I've grepped the Ceph logs but haven't found
>anything useful.
>
>
>Ceph 11.2, here is my crushmap: https://pastebin.com/YVUVCvqu
>
>
>ceph01:/etc/ceph# ceph -s
>    cluster 4f23f683-21e6-49f3-ae2c-c95b150b9dc6
>     health HEALTH_OK
>     monmap e4: 3 mons at
>{ceph02=10.1.8.32:6789/0,ceph03=10.1.8.33:6789/0,ceph04=10.1.8.34:6789/0}
>            election epoch 38, quorum 0,1,2 ceph02,ceph03,ceph04
>        mgr no daemons active
>     osdmap e1100: 24 osds: 24 up, 24 in
>            flags sortbitwise,require_jewel_osds,require_kraken_osds
>      pgmap v3606818: 528 pgs, 2 pools, 1051 GB data, 271 kobjects
>            3243 GB used, 116 TB / 119 TB avail
>                 528 active+clean
>  client io 28521 B/s rd, 1140 kB/s wr, 6 op/s rd, 334 op/s wr
>
>ceph01:/etc/ceph# ceph osd lspools
>0 rbd,1 ssdpool
>
>Thanks,
>Stan
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
