Thank you very much for the help, I will start looking into alternatives.
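
(For anyone who lands on this thread later: the CloudMonkey/API automation Andrija describes below - create a manual snapshot, delete the oldest - can also be driven directly from Python over the API. This is only a sketch: the endpoint, API/secret keys, and volume ID are placeholders; the request signing shown is CloudStack's documented HMAC-SHA1 scheme, but double-check the parameter names against your version's API reference.)

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, secret):
    # CloudStack API signature: sort the parameters, URL-encode the values,
    # lowercase the whole query string, HMAC-SHA1 it with the secret key,
    # then base64-encode the digest.
    query = "&".join(
        "%s=%s" % (k, urllib.parse.quote(str(v), safe=""))
        for k, v in sorted(params.items())
    )
    digest = hmac.new(secret.encode(), query.lower().encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

def build_url(endpoint, params, secret):
    # Append the signature and produce a ready-to-GET URL.
    signed = dict(params, signature=sign_request(params, secret))
    return endpoint + "?" + urllib.parse.urlencode(signed)

# Placeholder endpoint, keys and volume ID - for illustration only.
url = build_url(
    "http://mgmt.example.com:8080/client/api",
    {"command": "createSnapshot", "volumeid": "VOLUME-UUID",
     "apikey": "API-KEY", "response": "json"},
    "SECRET-KEY",
)
```

The same signed-URL pattern works for listSnapshots and deleteSnapshot, which is all a "create newest, drop oldest" loop needs.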

On Mon, Oct 29, 2018 at 11:08 AM Ivan Kudryavtsev <kudryavtsev...@bw-sw.com>
wrote:

> Alexandre, I don't recommend going with such an idea. If you take that
> HA approach, your customers will end up with non-functional VMs. Better to
> go with synchronous, uncached storage and some sort of snapshotting, like
> ZFS send/receive or a proprietary technology. Also, think about managed HA
> services, like MySQL Galera. Snapshotting a VM causes a freeze/unfreeze
> every time you do it; for some VMs that can take a second, for others tens
> of seconds. I guarantee you and your users will not be happy with such
> snapshots for general-purpose VMs.
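>
> A minimal sketch of the ZFS route, run from cron every 5 minutes - the
> dataset name tank/vmdata and the standby host name are placeholders for
> your own setup:
>
> ```shell
> #!/bin/sh
> # Hypothetical 5-minute replication loop using ZFS send/receive.
> # Dataset and host names are placeholders.
> SRC="tank/vmdata"
> DST="standby"
> NOW="$(date +%Y%m%d-%H%M)"
> PREV="$(zfs list -H -t snapshot -o name -s creation -d 1 "$SRC" | tail -1)"
>
> zfs snapshot "${SRC}@${NOW}"
> if [ -n "$PREV" ]; then
>     # Incremental send: only the blocks changed since the last snapshot.
>     zfs send -i "$PREV" "${SRC}@${NOW}" | ssh "$DST" zfs receive -F "$SRC"
> else
>     # First run: full send.
>     zfs send "${SRC}@${NOW}" | ssh "$DST" zfs receive "$SRC"
> fi
> ```
>
> (You would also want to prune old snapshots on both sides, which is
> omitted here.)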
>
> Mon, 29 Oct 2018 at 10:35, Alexandre Bruyere <bruyere.alexan...@gmail.com>:
>
> > Tests will be done for sure.
> >
> > The use case is 5-minute snapshots on VMs for an ultra-high-availability
> > hybrid cloud - to provide small and medium businesses with a reliable
> > system that, in case of issues, loses as little work as possible.
> >
> > On Sun, Oct 28, 2018 at 6:00 AM Andrija Panic <andrija.pa...@gmail.com>
> > wrote:
> >
> > > I'm not sure what your use case is - what you want to achieve - but
> > > make sure to test this thoroughly.
> > >
> > > You can always make a snapshot of the volume "manually" (outside of
> > > ACS), but you need to make sure this doesn't collide with CloudStack in
> > > any way. There are also VM-level snapshots in KVM if you are using NFS
> > > as Primary Storage, so check whether that works for you. Here, for
> > > example, you have the limitation (if I remember correctly) that you
> > > cannot attach an additional volume (or something similar) to the VM
> > > until you have deleted all VM-level snapshots, etc. (which makes sense,
> > > of course).
> > >
> > > I guess it takes a lot of work to skip Secondary Storage (the snapshot
> > > workflow inside CloudStack), because you would need to provide a
> > > workflow for all the different Primary Storage providers (there are a
> > > bunch of them, not only NFS...), and then there are a bunch of
> > > hypervisors supported, and so on - so it's a big challenge (I'm not a
> > > developer, but that is my assumption).
> > >
> > > Cheers
> > >
> > > On Sun, 28 Oct 2018 at 00:06, Alexandre Bruyere <
> > > bruyere.alexan...@gmail.com>
> > > wrote:
> > >
> > > > Well... sounds like the new scripters who are coming in tomorrow will
> > > > come in handy. I'll probably have them script something to pull
> > > > snapshots from KVM directly instead of going through CloudStack.
> > > >
> > > > Is there anything that would stop this from working?
> > > >
> > > > On Fri, Oct 26, 2018 at 4:15 PM Andrija Panic <andrija.pa...@gmail.com>
> > > > wrote:
> > > >
> > > > > Yes.
> > > > >
> > > > > There are improvements being done at the moment (AFAIK) to manage
> > > > > snapshots on the primary storage (for NFS and maybe Ceph; it's
> > > > > already implemented on e.g. SolidFire).
> > > > >
> > > > > This is simply how it has worked so far - snapshots are meant to be
> > > > > moved to Secondary Storage (and can later be converted to templates,
> > > > > downloaded from the SSVM, converted to volumes, etc.).
> > > > > I agree with you, but that is how it was implemented - I assume for
> > > > > compatibility reasons, since different hypervisors manage things in
> > > > > different ways: you have to support different hypervisors, different
> > > > > storage solutions, etc. (it's NOT only NFS...).
> > > > >
> > > > > Cheers
> > > > >
> > > > >
> > > > > On Fri, 26 Oct 2018 at 22:08, Alexandre Bruyere <
> > > > > bruyere.alexan...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > So wait. Are you telling me that CloudStack does a full backup of
> > > > > > the volume every time a snapshot is taken?
> > > > > >
> > > > > > What's the point of snapshots then? Making specific operations
> > > > > > faster?
> > > > > >
> > > > > > --
> > > > > > Alexandre Bruyère
> > > > > >
> > > > > > -----Original Message-----
> > > > > > Re: Questions on snapshots
> > > > > > From: Andrija Panic <andrija.pa...@gmail.com>
> > > > > > To: users <users@cloudstack.apache.org>
> > > > > > Friday, October 26, 2018 at 3:38 PM
> > > > > >
> > > > > > So :)
> > > > > >
> > > > > > 1. Snap interval: scheduled snaps are at most one per hour, via the
> > > > > > so-called "hourly" schedule - so that matches what you saw :) You
> > > > > > could do some automation by creating manual snapshots and deleting
> > > > > > the oldest ones - e.g. you can use CloudMonkey, a CLI utility that
> > > > > > talks to the API and is great for any kind of automation - unless
> > > > > > you talk to the API directly over HTTPS from e.g. Python.
> > > > > >
> > > > > > 2. Number of snaps: go to Global Configuration; there is a
> > > > > > parameter "snapshot.max.hourly" that you can change, I assume to
> > > > > > <= 24 (restart the management server and you are good). There are
> > > > > > similar parameters for daily and monthly.
> > > > > >
> > > > > > Now, about snapshots - when you decide to really use them (i.e. in
> > > > > > production) - a BIG warning: make sure you "know" what you are
> > > > > > doing. As it stands, when you create a snapshot of a volume on
> > > > > > Primary Storage (NFS or Ceph), a snapshot of that volume really is
> > > > > > created almost instantly, but then the whole image (i.e. the whole
> > > > > > image at that point in time) is copied over (qemu-img) to the
> > > > > > Secondary Storage NFS - and with too-frequent snaps or modest
> > > > > > networking, this can at some point saturate your network and also
> > > > > > break some logic inside CloudStack.
> > > > > > For example: I had clients who expected to do hourly snapshots of a
> > > > > > 2TB image (right... perhaps too high an expectation on their side),
> > > > > > and this can fail with a timeout (in my case it was modest Ceph
> > > > > > performance).
> > > > > >
> > > > > > Also pay attention to schedules, so you don't have an hourly snap
> > > > > > (one of the hourly runs) begin at e.g. 17:00 and a daily
> > > > > > (/weekly/monthly) snap configured at 17:00 (or about the same
> > > > > > time) - those later snaps will simply fail, because there is
> > > > > > already an ongoing snap on the same volume.
> > > > > >
> > > > > > Sorry long post...
> > > > > >
> > > > > >
> > > > > > Cheers
> > > > > > Andrija
> > > > > >
> > > > > > On Fri, 26 Oct 2018 at 20:53, Alexandre Bruyere <
> > > > > > bruyere.alexan...@gmail.com> wrote:
> > > > > >
> > > > > > > Hello.
> > > > > > >
> > > > > > > I'm currently investigating the features of CloudStack, and
> > > > > > > looked into snapshots.
> > > > > > >
> > > > > > > As far as I can tell, the smallest possible interval for
> > > > > > > snapshots is one hour. Is there a way to schedule them more
> > > > > > > frequently? For my use, 5-minute snapshots would be ideal.
> > > > > > >
> > > > > > > Also, it limits me to 8 snapshots kept. Is it possible to keep a
> > > > > > > larger number of them - whether by changing configuration, via
> > > > > > > some other mechanism, or any other way?
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > >
> > > > > > Andrija Panić
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > >
> > > > > Andrija Panić
> > > > >
> > > >
> > >
> > >
> > > --
> > >
> > > Andrija Panić
> > >
> >
>
>
> --
> With best regards, Ivan Kudryavtsev
> Bitworks LLC
> Cell RU: +7-923-414-1515
> Cell USA: +1-201-257-1512
> WWW: http://bitworks.software/
>
