Hi Rishi,
There are a few approaches you can consider:
1. Arch-specific cluster:
CloudStack assumes clusters are homogeneous, so you can start with
arch-specific clusters - one cluster of x86 hosts and another of arm64,
within the same
zone. You'll then need to decide which arch you want to
Hello guys,
I discussed this with the organizers, and it was decided that the CloudStack
talks will not be targeted to an individual track; however, the track
"Cloud and runtime" was renamed to "CloudStack, Cloud, and Runtime",
given our strong presence in it, and we will be co-chairing this
Hmm, interesting. Thanks Alex. Will try it out.
Regards,
Shiv
(Sent from mobile device. Please excuse brevity and typos.)
On Fri, 19 Apr 2024, 21:10 Alex Mattioli,
wrote:
> You can set your upstream router with a static ARP entry for the public
> IP, this way if your user changes the IP they'll
Hi Daan
Sorry - I forgot to post back what the issue turned out to be.
We had a power outage in our building where our Dev environment is racked.
After doing some tcp dumps on some vxlan interfaces I was seeing lots of bad
checksums.
Our network engineer swore that the switches were all OK
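For anyone hitting similar symptoms, a quick way to check is below; the interface and NIC names are placeholders, not from this thread:

```shell
# Capture on the vxlan interface and look for bad checksums in the decode
tcpdump -i vxlan100 -vv -c 50 2>/dev/null | grep -i 'bad cksum'

# If hardware offloading is the culprit, disabling checksum offload on the
# underlying NIC is a common isolation step
ethtool -K eth0 tx off rx off
```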
GitHub user GutoVeronezi added a comment to the discussion: logging standards
in CloudStack
I believe that logging an error message at `ERROR` followed by the stack trace
at `DEBUG` is not ideal. For instance, if we have the `INFO` level
enabled and an exception occurs, we would not
different clusters should be fine.
Maybe @Rohit Yadav can give some advice.
-Wei
On Fri, Apr 19, 2024 at 4:56 PM Daan Hoogland wrote:
>
> good point Rishi,
> I think you would have to separate the hardware into different
> clusters at least, but maybe even separate zones. I never heard of
>
You can set your upstream router with a static ARP entry for the public IP,
this way if your user changes the IP they'll simply lose their own
connectivity. Should be quite easy to automate.
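On a Linux-based upstream router, such a static entry could look roughly like this (the IP, MAC, and interface names are placeholders):

```shell
# Pin the public IP to the VNF's MAC; a user changing the IP inside the
# VNF then only breaks their own connectivity, not the ARP mapping
ip neigh replace 203.0.113.10 lladdr 52:54:00:12:34:56 dev eth0 nud permanent
```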
-Original Message-
From: K B Shiv Kumar
Sent: Friday, April 19, 2024 4:30 PM
To:
Looks like you've been quite thorough in your summary, Jimmy.
On Fri, Apr 12, 2024 at 3:41 AM Jimmy Huybrechts wrote:
>
> Hi,
>
> What are the current backup options that work with CloudStack?
>
> - Veeam enterprise
> - Networker
> - Backroll
>
> The above I found, backroll seems to not be very
Piotr,
sounds good. Are you willing to propose and mentor this effort?
On Thu, Apr 11, 2024 at 11:26 AM Piotr Pisz wrote:
>
> Hi,
>
>
>
> I have an idea that could be implemented as part of GSOC 2024.
>
>
>
> We currently use vSphere+Automation Center for internal purposes, but we
> plan to
Good point, Rishi.
I think you would have to separate the hardware into different
clusters at least, but maybe even separate zones. I never heard of
anybody doing a setup like yours.
On Wed, Apr 10, 2024 at 6:56 PM Rishi Misra wrote:
>
> Can a CloudStack instance running on x86 manage deployments
Hi Wei,
The main concern with that solution is how to prevent the user from
going into the VNF and changing the public IP. That can cause an ARP clash
and bring down someone else's system too. That's the only, but very major,
drawback of that solution. Was wondering if there's any workaround for
Sorry no one could help you, Gary.
Have you gotten any further on this issue?
On Fri, Apr 5, 2024 at 4:59 PM Gary Dixon
wrote:
> Hi all
>
>
>
> ACS 4.15.2
>
> Ubuntu 20.04
>
> KVM
>
> Adv Zone no sec groups
>
>
>
> We recently had to move all of our dev ACS environment virtual management
> and
Hi,
You can deploy the VNF appliance on a shared network as the first network
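With CloudMonkey, that could look roughly like this (all UUIDs are placeholders):

```shell
# Deploy the VNF appliance with the shared network as its first (default) NIC
cmk deploy virtualmachine \
    zoneid=<zone-uuid> \
    serviceofferingid=<offering-uuid> \
    templateid=<vnf-template-uuid> \
    networkids=<shared-network-uuid>
```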
-Wei
On Fri, Apr 19, 2024 at 3:38 PM Kaushik Bora
wrote:
>
> Dear Community,
>
>
>
> We have been going through the VNF appliance deployment which is
> available in release 4.19.0. We have successfully tested the
Dear Community,
We have been going through the VNF appliance deployment which is
available in release 4.19.0. We have successfully tested the VNF deployment
scenario with the Virtual Router. However, we want to evaluate whether the VNF
appliance can be deployed without the Virtual Router as well,
The Apache CloudStack project is pleased to announce the release of
CloudStack 4.18.2.0.
The CloudStack 4.18.2.0 release is a maintenance release as part of
its 4.18.x LTS branch and contains around 100 fixes and
improvements since the CloudStack 4.18.1.0 release. Some of the
highlights include:
Hi everyone,
To anyone that may face this issue, the problem is in terraform
parallelism when executing *for_each*.
As long as entities are listed manually, the devices are attached as
expected.
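Another workaround that is sometimes used in such cases (not from this thread, and it slows the whole run down) is to serialize resource creation:

```shell
# Limit Terraform to one concurrent operation so devices attach one at a time
terraform apply -parallelism=1
```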
Best regards,
Jordan
On Wed, Apr 17, 2024 at 2:22 PM jordan j wrote:
> Hi everyone,
>
> We are
We do not use any raid in front of the NVMe drives, because of possible
performance bottlenecks. We are using LVMThin on top of the drives.
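For reference, an LVM thin pool on a bare NVMe drive can be set up roughly like this (device and volume-group names are assumptions):

```shell
# Create a thin pool spanning the whole NVMe drive for linstor to consume
pvcreate /dev/nvme0n1
vgcreate vg_nvme /dev/nvme0n1
lvcreate --type thin-pool -l 100%FREE -n thinpool vg_nvme
```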
-Original Message-
From: David Sekne
Sent: Friday, 19 April 2024 11:04
To: users@cloudstack.apache.org
Subject: Re: Re: Storage
Hi Swen,
Since you need a pool to present to linstor on each node, do you use any
RAID (SW or HW) for the NVMes (or do you use ZFS / VGs)?
Regards,
David
On 19. 04. 24 08:49, m...@swen.io wrote:
Hi David,
1. Are you running linstor just as HCI, or do you also have a dis-aggregated
cluster
Nice to see this discussion being had again, and sorry to be late in
replying, João.
I see no replies have come forward yet. Let's add a new
https://github.com/apache/cloudstack/discussions/new/choose for this
as well. I still feel my proposal is good and should have been
applied, but we can gather
Hi David,
1. Are you running linstor just as HCI, or do you also have a dis-aggregated
cluster somewhere (and some recommendations regarding it)?
> we run only HCI clusters, so I cannot tell you a lot about standalone
> clusters.
2. What's the resource usage of linstor (even better if you have data
Hello Swen, Bryan,
We are just evaluating linstor at the moment as well (looks very
promising). I would like to run a standalone cluster (not HCI) just for
linbit (so we would use only diskless + nfs for secondary storage).
I would have a couple of questions for both:
1. Are you running