On Thu, Jun 29, 2017 at 1:33 PM, Gregory Farnum wrote:
> I'm not sure if there are built-in tunable commands available (check the
> manpages? Or Jason, do you know?), but if not you can use any generic
> tooling which limits how much network traffic the RBD command can run.
Long-running RBD actio
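To make the generic-throttling suggestion concrete, here is a sketch. The pool/image/snapshot names are placeholders, and I have not verified that the rollback path honors the management-ops throttle on every release, so treat both approaches as things to test first:

```shell
# Cap how many object operations the rbd CLI issues in parallel
# (the default is 10); fewer concurrent ops means a slower rollback
# but less pressure on the cluster. Names below are illustrative.
rbd snap rollback --rbd-concurrent-management-ops 2 rbd/vm-100-disk-1@before-upgrade

# Alternatively, cap the client process's network bandwidth (KB/s)
# with an LD_PRELOAD shaper such as trickle, which only works if the
# rbd binary is dynamically linked:
trickle -s -d 20480 -u 20480 rbd snap rollback rbd/vm-100-disk-1@before-upgrade
```

The first option throttles at the librbd level and is usually the gentler knob; the second shapes the socket traffic of the whole process regardless of what it is doing.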
On Thu, Jun 29, 2017 at 7:44 AM Stanislav Kopp wrote:
> Hi,
>
> we're testing a Ceph cluster as a storage backend for our
> virtualization (Proxmox); we're using RBD for raw VM images. If I try
> to restore a snapshot with "rbd snap rollback", the whole cluster
> becomes really slow, the "ap
Many others I’m sure will comment on the snapshot specifics.
However, running a cluster with some 8TB drives, I have noticed huge differences
between 4TB and 8TB drives in their peak latencies when busy. So along with
the known snapshot performance you may find the higher seek time and higher
TB
Hi,
we're testing a Ceph cluster as a storage backend for our virtualization
(Proxmox); we're using RBD for raw VM images. If I try to restore a
snapshot with "rbd snap rollback", the whole cluster becomes really
slow: "apply_latency" jumps from the normal 0-10 ms to 4000-6000 ms, and
I see load o
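For watching the effect of a rollback as it runs, the per-OSD latency counters the poster quotes can be polled live; this is a sketch using standard `ceph` CLI commands (the `watch` interval and `head` count are arbitrary):

```shell
# Show current commit/apply latency per OSD (in ms)
ceph osd perf

# Poll every 2 seconds while the rollback is in flight, worst OSDs first
watch -n 2 "ceph osd perf | sort -k3 -n -r | head -20"
```

If only a handful of OSDs spike, the rollback traffic is concentrated on a few PGs; if they all spike, the client-side throttling discussed earlier in the thread is the more likely fix.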