Hi Bryan,

Sorry for the late reply. Yes, the storage volumes had zero downtime, but provisioning of new machines was affected.
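For what it's worth, the LINSTOR controller is only the control plane, so the existing DRBD volumes keep serving I/O even while the controller is unhappy; only new provisioning stops. Below is a rough sketch of the kind of health check one could script around the linstor client to confirm that during such an outage. It is only an illustration: it assumes the linstor CLI can reach a controller, and the "bad" state strings are examples that may differ between client versions.

    #!/usr/bin/env python3
    # Quick-and-dirty LINSTOR health check (sketch only).
    # Assumes the 'linstor' CLI can reach a controller; the state strings
    # below are examples of unhealthy states and may differ per version.
    import subprocess

    BAD_STATES = ("Inconsistent", "Failed", "Unknown", "Outdated", "Connecting")

    def linstor(*args):
        # Run a linstor client command and return its text output.
        out = subprocess.run(("linstor",) + args,
                             capture_output=True, text=True, check=True)
        return out.stdout

    try:
        listing = linstor("resource", "list") + linstor("storage-pool", "list")
    except subprocess.CalledProcessError as err:
        # Controller unreachable: provisioning (control plane) is down,
        # but that by itself says nothing about the DRBD data path.
        raise SystemExit("linstor client failed: %s" % err)

    problems = [line for line in listing.splitlines()
                if any(state in line for state in BAD_STATES)]
    print("\n".join(problems) if problems
          else "All listed resources/pools look healthy.")

Running something like this during the stress tests (node reboots, controller restarts) should show the existing resources staying UpToDate even while create/provision operations fail.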
On Thu, Oct 19, 2023 at 1:48 PM Bryan Tiang <bryantian...@hotmail.com> wrote:
> Hi Pratik,
>
> Thanks for the response.
>
> Just to confirm, your storage volumes had zero downtime, right?
>
> Regards,
> Bryan
> On 19 Oct 2023 at 3:18 PM +0800, m...@swen.io, wrote:
> > Hey Pratik,
> >
> > Can you elaborate more on these stability problems? We are also doing a
> > CS + Linstor PoC at the moment, and we did a lot of stress testing
> > without any problems on the Linstor side. I am curious whether we missed
> > some tests. We are using a place count of 2 in a 3 node cluster.
> >
> > Regards,
> > Swen
> >
> > -----Original Message-----
> > From: Pratik Chandrakar <chandrakarpra...@gmail.com>
> > Sent: Thursday, 19 October 2023 07:15
> > To: users@cloudstack.apache.org
> > Subject: Re: Comparing Hyperconverged + Converged Setup with Cloudstack + Linbit
> >
> > Hi Bryan,
> >
> > We did a small PoC with CloudStack + Linbit SDS (3x replica) in a
> > hyperconverged setup. There was no issue with HA; the VMs successfully
> > restarted on different nodes. However, we did face stability problems
> > with Linbit HA, which prevented us from provisioning new storage or
> > virtual machines.
> >
> > On Wed, Oct 18, 2023 at 3:42 PM Bryan Tiang <bryantian...@hotmail.com> wrote:
> >
> > > Hi Guys,
> > >
> > > We are doing some evaluation with CloudStack + Linbit SDS.
> > >
> > > Has anyone had any experience using these with a converged or
> > > hyperconverged setup?
> > >
> > > My understanding is that converged is the best for HA because:
> > >
> > > • If any storage node goes down, there is zero downtime (3x replica).
> > > • If any compute node goes down, its VMs will be restarted on another
> > > node as part of the HA feature.
> > >
> > > But what about a hyperconverged setup? Can we also get zero downtime
> > > for storage and fast VM recovery?
> > >
> > > Regards,
> > > Bryan
> >
> > --
> > Regards,
> > Pratik Chandrakar


--
Regards,
Pratik Chandrakar
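PS, for anyone reproducing such a PoC: as far as I understand it, the replica count Swen mentions is simply the placement count on the LINSTOR resource group that the CloudStack Linstor primary-storage plugin is pointed at. A minimal sketch of that setup is below; the names cloudstack-rg and DfltStorPool are placeholders only, adjust them to your own storage pools.

    #!/usr/bin/env python3
    # Sketch: create the LINSTOR resource group + volume group that a
    # CloudStack Linstor primary storage pool can use. Names are placeholders.
    import subprocess

    RESOURCE_GROUP = "cloudstack-rg"  # hypothetical resource group name
    STORAGE_POOL = "DfltStorPool"     # hypothetical storage pool name
    PLACE_COUNT = "3"                 # 3x replica; Swen's layout would use "2"

    def linstor(*args):
        print("+ linstor " + " ".join(args))
        subprocess.run(("linstor",) + args, check=True)

    # The resource group carries the placement policy (replica count, pool).
    linstor("resource-group", "create", RESOURCE_GROUP,
            "--storage-pool", STORAGE_POOL,
            "--place-count", PLACE_COUNT)
    # Volume group so resources spawned from the group get a volume
    # definition automatically.
    linstor("volume-group", "create", RESOURCE_GROUP)

On the CloudStack side you would then, if I remember correctly, add primary storage with the Linstor provider, the controller URL, and this resource group name; changing --place-count from 3 to 2 gives the 2-out-of-3 layout Swen describes.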