Hi,
On 26.07.24 at 21:47, Jonathan Nicklin via pve-devel wrote:
>
> Hi Fiona,
>
> In hyper-converged deployments, the node performing the backup is sourcing
> ((nodes-1)/nodes)*bytes of backup data (i.e., ingress traffic) and then
> sending 1*bytes to PBS (i.e., egress traffic). If PBS were to pull the data
> from the nodes directly, the maximum load on any one host would drop
> considerably.
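To make the traffic asymmetry concrete, here is a small sketch (my own illustration, not code from the thread) of the per-node traffic under the even-distribution model Jonathan describes; replication factors and metadata overhead are ignored:

```python
def backup_traffic(nodes: int, backup_bytes: int) -> tuple[float, int]:
    """Traffic seen by the single node running the backup when the
    image's data is spread evenly across the cluster.

    Roughly (nodes - 1) / nodes of the data lives on other nodes and
    must be pulled over the network first (ingress); the whole image
    is then pushed to PBS (egress).
    """
    ingress = (nodes - 1) / nodes * backup_bytes
    egress = backup_bytes
    return ingress, egress

# On a 5-node cluster backing up 1 TiB, the backup node moves
# ~0.8 TiB in plus 1 TiB out, whereas a direct pull by the backup
# server would read only ~0.2 TiB from each node.
```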
> The biggest issue we see reported related to QEMU bitmaps is
> persistence. The lack of durability results in unpredictable backup
> behavior at scale. If a host, rack, or data center loses power, you're
> in for a full backup cycle. Even if several VMs are powered off for
> some reason, it means a full backup for each of them on the next run.
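To spell out the failure mode (my own sketch, not code from QEMU or PVE): the dirty bitmap only exists while the QEMU process runs, so any event that stops it forces the next backup back to a full read.

```python
def bytes_to_read(bitmap_valid: bool, disk_bytes: int, dirty_bytes: int) -> int:
    """How much data the next backup must read from the disk image.

    QEMU dirty bitmaps are kept in memory by the running process; a
    host power loss or a plain VM shutdown discards them, so the next
    backup cannot know which blocks changed and must read everything.
    (Server-side deduplication can still avoid re-uploading unchanged
    chunks, but the full read and hashing cost is paid on the node.)
    """
    return dirty_bytes if bitmap_valid else disk_bytes

# e.g. a 1 TiB disk with 5 GiB actually dirty: with a valid bitmap
# only the 5 GiB are read; after a power loss, the whole 1 TiB is.
```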
> Today, I believe the client is reading the data and pushing it to
> PBS. In the case of CEPH, wouldn't this involve sourcing data from
> multiple nodes and then sending it to PBS? Wouldn't it be more
> efficient for PBS to read it directly from storage? In the case of
> centralized storage, we'd avoid the extra hop through the client
> entirely.
> Would adding support for offloading incremental difference detection
> to the underlying storage be feasible with the API updates? The QEMU
> bitmap strategy works for all storage devices but is far from
> optimal.
Sorry, but why do you think this is far from optimal?
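For reference, storage-side change tracking of the kind proposed above already exists for Ceph RBD: `rbd diff` reports the extents that differ between two snapshots, computed by the cluster rather than by the hypervisor. A sketch of consuming its JSON output (function names are mine; this is illustrative, not an existing PVE/PBS integration):

```python
import json
import subprocess

def parse_rbd_diff(json_text: str) -> list[tuple[int, int]]:
    """Parse `rbd diff --format json` output into (offset, length)
    extents describing ranges that differ from the base snapshot."""
    return [(e["offset"], e["length"]) for e in json.loads(json_text)]

def rbd_changed_extents(image: str, from_snap: str) -> list[tuple[int, int]]:
    """Ask the Ceph cluster which extents changed since `from_snap`.
    The diff is computed storage-side, so nothing needs to survive on
    the hypervisor between backup runs."""
    out = subprocess.run(
        ["rbd", "diff", "--from-snap", from_snap, image,
         "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    return parse_rbd_diff(out)
```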
--- Begin Message ---
Hi Fiona,
Would adding support for offloading incremental difference detection
to the underlying storage be feasible with the API updates? The QEMU
bitmap strategy works for all storage devices but is far from
optimal. If backup coordinated a storage snapshot, the underlying
storage could compute the incremental differences itself.
==
A backup provider needs to implement a storage plugin as well as a
backup provider plugin. The storage plugin is for integration in
Proxmox VE's front-end, so users can manage the backups via
UI/API/CLI. The backup provider plugin is for interfacing with the
backup provider's backend to