[ceph-users] "rbd create" hangs for specific pool

2017-08-03 Thread Stanislav Kopp
Hello, I was running a ceph cluster with HDDs for OSDs; now I've created a new dedicated SSD pool within the same cluster. Everything looks fine and the cluster is "healthy", but if I try to create a new rbd image in this new SSD pool, it just hangs. I've tried both the "rbd" command and the proxmox GUI, " rbd"
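
A minimal diagnostic sketch for a hang like this (the pool name is not shown in the preview; "ssd" below is a placeholder). A common cause is that the new pool's crush rule maps to no usable OSDs, leaving its PGs inactive, so any I/O against that pool blocks:

    $ ceph osd dump | grep ssd       # confirm which crush rule/ruleset backs the new pool
    $ ceph pg dump_stuck inactive    # inactive PGs in that pool will make "rbd create" hang
    $ ceph osd tree                  # check that the SSD OSDs the rule points at are up and in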

[ceph-users] How to force "rbd unmap"

2017-07-05 Thread Stanislav Kopp
Hello, I have a problem that sometimes I can't unmap an rbd device; I get "sysfs write failed rbd: unmap failed: (16) Device or resource busy", there are no open files and the "holders" directory is empty. I saw on the mailing list that you can "force" unmapping the device, but I can't find how does it
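
For reference, a hedged sketch of the force option being asked about (the device path is illustrative; the "force" unmap option needs a reasonably recent kernel and rbd client):

    $ rbd showmapped                  # find the mapped device and its pool/image
    $ rbd unmap -o force /dev/rbd0    # force the unmap even though the kernel reports it busy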

[ceph-users] slow cluster performance during snapshot restore

2017-06-29 Thread Stanislav Kopp
Hi, we're testing a ceph cluster as the storage backend for our virtualization (proxmox); we're using RBD for raw VM images. If I try to restore a snapshot with "rbd snap rollback", the whole cluster becomes really slow: "apply_latency" goes to 4000-6000 ms from the normal 0-10 ms, I see load
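
A hedged sketch of the clone-based alternative usually suggested instead of a full rollback (pool, image, and snapshot names are made up). "rbd snap rollback" rewrites every object of the image from the snapshot, which is what produces the sustained load, while a clone only copies objects on demand:

    $ rbd snap protect rbd/vm-100-disk-1@before-upgrade
    $ rbd clone rbd/vm-100-disk-1@before-upgrade rbd/vm-100-disk-1-restored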

Re: [ceph-users] slow performance: sanity check

2017-04-06 Thread Stanislav Kopp
I've reduced the OSDs to 12 and moved the journals to SSD drives, and now I get a "boost" with writes up to ~33-35 MB/s. Is that the maximum without full SSD pools? Best, Stan 2017-04-06 9:34 GMT+02:00 Stanislav Kopp <stask...@gmail.com>: > Hello, > > I'm evaluating a ceph cluster, to see
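
A minimal sketch of the raw benchmark usually used for this kind of sanity check, assuming a throwaway test pool named "bench" (not from the thread); it measures cluster write/read throughput independently of the VM and RBD stack:

    $ ceph osd pool create bench 128 128
    $ rados bench -p bench 60 write --no-cleanup   # 60 s sequential write test
    $ rados bench -p bench 60 seq                  # sequential read of the objects just written
    $ ceph osd pool delete bench bench --yes-i-really-really-mean-it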

[ceph-users] slow performance: sanity check

2017-04-06 Thread Stanislav Kopp
Hello, I'm evaluating a ceph cluster to see if we can use it for our virtualization solution (proxmox). I'm using 3 nodes running Ubuntu 16.04 with stock ceph (10.2.6); every OSD uses a separate 8 TB spinning drive (XFS), the MONITORs are installed on the same nodes, and all nodes are connected via 10G
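
A hedged sketch of the basic checks that usually go with a setup description like this (the hostname is a placeholder; iperf3 is a separate package, not part of ceph):

    $ ceph -s            # overall health and current client I/O rates
    $ ceph osd perf      # per-OSD commit/apply latency; a slow spindle stands out here
    $ iperf3 -c node2    # rule out the 10G links between the nodes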