Thanks, Rohit, Wei.

My team and I are keeping a close watch on gaps on the orchestrator side and
are addressing them as we identify them. You will see these improvements as
soon as we can submit them as PRs; at the moment we are waiting for our
previous PR to be merged, and we are working with Daan to expedite that.

Regarding the snapshot workflow, we will honor the global setting and ensure
that users are clearly informed when snapshots are retained on primary
storage.
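To make that concrete, below is a minimal sketch of the placement decision we
have in mind. The class and method names are illustrative only, not actual
CloudStack or plugin code; the real implementation would read the global
setting via the framework's configuration mechanism:

```java
// Hypothetical sketch: decide where a snapshot is retained based on the
// backup-to-secondary setting and whether the provider can copy snapshots.
public class SnapshotPlacement {

    /**
     * Returns "SECONDARY" only when the global setting requests backup to
     * secondary storage AND the provider supports copying snapshots there;
     * otherwise the snapshot stays on the primary storage pool.
     */
    static String decidePlacement(boolean backupToSecondary, boolean providerSupportsCopy) {
        if (backupToSecondary && providerSupportsCopy) {
            return "SECONDARY";
        }
        // Ceph/ONTAP-style managed storage: snapshots remain on the pool,
        // and the user should be clearly informed of this.
        return "PRIMARY";
    }

    public static void main(String[] args) {
        System.out.println(decidePlacement(false, false)); // PRIMARY
        System.out.println(decidePlacement(true, true));   // SECONDARY
    }
}
```

The point of the sketch is only that the setting is consulted in one place, so
the "snapshot retained on primary" message can be surfaced consistently.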

Regards
Rajiv Jain

On Sun, Mar 1, 2026 at 1:12 AM Wei ZHOU <[email protected]> wrote:

> +1 to what Rohit said.
>
> It is not mandatory for every storage provider to support copying snapshots
> to secondary storage. For example, Ceph only supports snapshots on the
> storage itself, and users need to set the global setting
> snapshot.backup.to.secondary to false.
>
> I believe a similar approach can be implemented here - it would just need
> to be clearly documented.
>
>
> -Wei
>
>
> On Sat, Feb 28, 2026 at 6:08 PM Rohit Yadav <[email protected]>
> wrote:
>
> > Hi Rajiv,
> >
> > You may be hitting a bug - feel free to refactor / fix the code (any
> > fixes beyond the storage plugin are also welcome, as long as they pass
> > our integration tests and don't break other plugins).
> >
> > To answer other questions - it's not mandatory to have the snapshots be
> > copied/moved to the secondary storage, though most plugins do that. For
> > example, ceph and other managed storage plugins keep snapshots on the
> > storage itself. There's even a global setting for such primary storages
> > where you can keep the snapshots on the primary storage itself.
> >
> >
> > Regards.
> >
> > ________________________________
> > From: Rajiv Jain <[email protected]>
> > Sent: Friday, February 27, 2026 18:14
> > To: [email protected] <[email protected]>
> > Subject: Re: Clarification on VM Snapshot Behavior with Mixed Volume
> > Configurations
> >
> > Rohit, appreciate your guidance.
> >
> > If the storage plugin relies on the hypervisor‑side utilities (such as
> > QEMU for KVM), then equivalent mechanisms will be required for every
> > supported hypervisor to ensure consistent behavior. Considering this, we
> > can plan to introduce VM quiescing support in a subsequent release of the
> > storage plugin.
> >
> > Additionally, we would like clarification on whether it is mandatory to
> > move snapshots to secondary storage. For ONTAP‑managed pools
> > specifically, the plugin currently plans to retain snapshots on the
> > primary storage. Since ONTAP snapshots are inherently read‑only,
> > transferring them over the data path to secondary storage is not
> > feasible. Therefore, retaining snapshots on primary storage may be the
> > only practical approach unless an alternative workflow is required.
> >
> > Thanks
> > Rajiv Jain
> >
> > On Fri, Feb 27, 2026 at 5:12 PM Rohit Yadav <[email protected]>
> > wrote:
> >
> > > Hi Rajiv - it depends, if the managed storage along with the hypervisor
> > > can do the quiesce based snapshot (wherein they generally pause the
> > > instance or flush the filesystem to have a consistent snapshot). I've
> > > asked my colleagues to review your questions and get back to you.
> > >
> > >
> > > Regards.
> > >
> > > ________________________________
> > > From: Rajiv Jain <[email protected]>
> > > Sent: Saturday, February 21, 2026 15:46
> > > To: [email protected] <[email protected]>
> > > Subject: Clarification on VM Snapshot Behavior with Mixed Volume
> > > Configurations
> > >
> > > Hello Team,
> > >
> > > I am currently developing VM‑level snapshot workflows for the NFSv3
> > > protocol, specifically for environments that leverage NetApp storage
> > > through the NetApp plugin. In certain scenarios, a virtual machine may
> > > contain a mix of managed and non‑managed volumes.
> > >
> > > In such cases, could you please confirm whether the existing VM
> > > snapshot workflow is expected to fail when the quiesce option is
> > > enabled?
> > >
> > > Thank you,
> > > Rajiv Jain
> > >
> >
>