Hello,
I was running a Ceph cluster with HDDs for the OSDs; now I've created a new
dedicated SSD pool within the same cluster. Everything looks fine and the
cluster is "healthy", but if I try to create a new RBD image in this new SSD
pool, it just hangs. I've tried both the "rbd" command and the Proxmox GUI.
Hello,
I have a problem where sometimes I can't unmap an RBD device; I get "sysfs
write failed rbd: unmap failed: (16) Device or resource busy", even though
there are no open files and the "holders" directory is empty. I saw on the
mailing list that you can "force" unmapping the device, but I can't find how
to do it.
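What I have so far is sketched below; /dev/rbd0 stands in for the stuck device,
and I'm not sure the force option is even available with my rbd/kernel version:

    # check for anything still holding the device
    $ lsof /dev/rbd0
    $ fuser -v /dev/rbd0
    $ ls /sys/block/rbd0/holders/

    # force the unmap -- needs a reasonably recent kernel and rbd CLI
    $ rbd unmap -o force /dev/rbd0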
Hi,
we're testing a Ceph cluster as the storage backend for our virtualization
(Proxmox), using RBD for raw VM images. If I try to restore a snapshot with
"rbd snap rollback", the whole cluster becomes really slow: "apply_latency"
goes to 4000-6000 ms from the normal 0-10 ms, I see load
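For context, the rollback is the plain command below (pool, image and snapshot
names are placeholders), and this is how I watch the per-OSD latency climb:

    # the rollback that makes the cluster crawl (names are placeholders)
    $ rbd snap rollback rbd/vm-100-disk-1@before-upgrade

    # per-OSD commit/apply latency while the rollback runs
    $ ceph osd perf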
I've reduced the OSDs to 12 and moved the journals to SSD drives, and writes
now get a "boost" to ~33-35 MB/s. Is that the maximum without full SSD
pools?
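As a sanity check I measure raw pool write throughput with rados bench,
independent of the VMs; the "bench" pool name and PG count below are just what
I picked for the test:

    # throwaway pool for benchmarking (name and pg count are placeholders)
    $ ceph osd pool create bench 64 64

    # 30-second write benchmark, a sequential read pass, then cleanup
    $ rados bench -p bench 30 write --no-cleanup
    $ rados bench -p bench 30 seq
    $ rados -p bench cleanup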
Best,
Stan
2017-04-06 9:34 GMT+02:00 Stanislav Kopp <stask...@gmail.com>:
> Hello,
>
> I'm evaluating a Ceph cluster to see
Hello,
I'm evaluating a Ceph cluster to see if we can use it for our virtualization
solution (Proxmox). I'm using 3 nodes running Ubuntu 16.04 with stock Ceph
(10.2.6); every OSD uses a separate 8 TB spinning drive (XFS), the MONs are
installed on the same nodes, and all nodes are connected via 10G
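For completeness, these are the commands I use to double-check that layout
(overall health, OSD tree and monitor quorum), nothing specific to my setup:

    # cluster health plus OSD and MON layout
    $ ceph -s
    $ ceph osd tree
    $ ceph mon stat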