I just stood up an Ubuntu cluster and started adding hosts to it while
decommissioning hosts from the CentOS cluster and restarting virtual
machines onto it. The whole point of a cluster is that it is homogeneous,
so that things like virtual machine migration are assured to work.
There's no ...
We have used both CentOS and Ubuntu. Currently we have standardized on
Ubuntu 20.04 LTS due to Red Hat's shenanigans with CentOS and for
reliability reasons (Red Hat often broke Cloudstack with bug fixes for
CentOS; Ubuntu has never done so). Once a version of Cloudstack is
released that has ...
I tried to configure SAML on my Cloudstack a while back and never got it
to work either, though I must admit I wasn't trying very hard. So if you
get it to work, please enlighten us!
On 3/8/2022 10:59 AM, Piotr Pisz wrote:
Hi,
I am trying to configure Cloudstack with Keycloak SSO, with no ...
Not currently using the VMware integration, but it's on our roadmap.
The main reason we chose Cloudstack was that it worked well with
Linux KVM hosts without needing to purchase expensive add-ons or
licenses. This allowed us to spin up our cluster rapidly using existing
machines in ...
On KVM, Cloudstack relies on the underlying Linux OS to do the base
network configuration. What VMware calls "port groups" are "bonds" in
Linux, and virtual switches are "bridges". In the Linux OS you set up
bond0 with all of the ports that will be part of the port group, with
whatever parameters ...
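Concretely, the bond + bridge setup looks something like this. A minimal
sketch with iproute2; the interface names (eno1/eno2), the bond mode, and the
bridge name cloudbr0 are placeholders, and a real install would persist this
in the distro's network configuration instead of running ad hoc commands:

    # Create the bond (the Linux "port group"); 802.3ad mode is just an example
    ip link add bond0 type bond mode 802.3ad
    ip link set eno1 down && ip link set eno1 master bond0
    ip link set eno2 down && ip link set eno2 master bond0
    # Create the bridge (the Linux "virtual switch") and put the bond in it
    ip link add cloudbr0 type bridge
    ip link set bond0 master cloudbr0
    ip link set eno1 up && ip link set eno2 up && ip link set bond0 up
    ip link set cloudbr0 up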
Cloudstack will not, however, manage existing KVM virtual machines,
which is what Chris wants to do. While that is theoretically possible,
there's currently no practical way to populate the Cloudstack MySQL
database with the information needed to make that happen. It appears his
desire is to ...
On 7/29/2021 3:48 AM, Andrija Panic wrote:
AND, the "insufficient capacity" has, wait one 99% of the case NOTHING
to do with not having enough capacity here or there, it's the stupid,
generic message on failure.
Talking about which, a bit off-topic here I know, I dug through the
source
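The real cause does land in the management server log, so when that message
shows up, something like this usually digs it out (the log path assumes a
standard package install, and the exception name is what I found in the
source; adjust both if yours differ):

    # Show context around the last real capacity-check failure
    grep -B5 'InsufficientServerCapacityException' \
        /var/log/cloudstack/management/management-server.log | tail -50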
I am with KVM.
I am sure it's the core count preventing me from starting VMs, because when I
hack the database to tell it I have 48 cores rather than 24 cores on my hosts,
I can then start the VM.
The only thing the logs say is that I can't create a new VM due to lack of
resources. Then it ...
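For anyone curious, the hack was along these lines. The table and column names
(cloud.host, cpus) are from memory, so verify them against your schema, back
up the database first, and the host id is a placeholder:

    # Back up, then bump the advertised core count for one host
    mysqldump -u cloud -p cloud host > host-backup.sql
    mysql -u cloud -p cloud -e 'UPDATE host SET cpus = 48 WHERE id = 42;'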
You can add yourself to the PR to get notified when things are moving on it.
>>
>> https://github.com/apache/cloudstack/pull/1709
>>
>> On Wed, Jan 17, 2018 at 10:56 AM, Eric Green <eric.lee.gr...@gmail.com>
>> wrote:
>>
Theoretically, on CentOS 7 as the host KVM OS it could be done with a couple of
pauses and the snapshotting mechanism built into qcow2, but there is no simple
way to do it directly via virsh, the libvirtd/qemu control program that is used
to manage virtualization. It's not as simple as issuing a ...
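What I had in mind was roughly this. A sketch, not a tested procedure: the
domain name and disk path are placeholders, and qemu-img should only touch a
qcow2 file while the guest is paused or shut off:

    # Pause the guest so the qcow2 file is quiescent
    virsh suspend myvm
    # Take an internal qcow2 snapshot named 'pre-change'
    qemu-img snapshot -c pre-change /var/lib/libvirt/images/myvm.qcow2
    # Resume the guest
    virsh resume myvm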
Official EOL for CentOS 6 / RHEL 6 as declared by Red Hat Software is
11/30/2020. Jumping the gun a bit there, Padme.
People on CentOS 6 should certainly be working on a migration strategy right
now, but the end is not here *yet*. Furthermore, the install documentation is
still written for ...
If all else fails, change its state to the correct one in the MySQL
database and restart the management service. Sadly, that was the only way I
could fix it when my Cloudstack got confused and left an instance stuck in an
intermediate state where I couldn't do anything with it.
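Roughly what that looked like, with the obvious caveats: back up the database
first, stop the management server while you edit, and double-check the table
and column names (vm_instance, state) against your own schema; the instance id
here is a placeholder:

    mysql -u cloud -p cloud -e \
        "UPDATE vm_instance SET state = 'Stopped' WHERE id = 1234;"
    systemctl restart cloudstack-management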
On Dec 22, 2017 at ...
Did you try the same test from the exact same physical host that one of the
guests is running on? There may be congestion between the Cloudstack network
and the NFS network.
I just tested this by creating a compute offering that had the 200Mbit limit
and assigning it to an instance. I mounted ...
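A quick way to isolate where the congestion is: run iperf3 from the hypervisor
itself and then again from inside a guest on that host, against the NFS
server. A sketch only; the server address is a placeholder and iperf3 has to
be installed on both ends:

    # On the NFS server (or any box on the storage network):
    iperf3 -s
    # From the KVM host, then repeat from inside a guest:
    iperf3 -c 10.100.255.3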
Okay. So:
1) Don't use EXT4 with LVM/RAID; it performs terribly with QCOW2. Use XFS.
2) I didn't do anything to my NFS mount options and they came out fine:
10.100.255.3:/storage/primary3 on /mnt/0ab13de9-2310-334c-b438-94dfb0b8ec84
type nfs4 ...
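If you want to see the full set of options an NFS mount actually negotiated,
nfsstat will show them; and reformatting a storage volume as XFS is a
one-liner (destructive, and the device path is a placeholder):

    # Show negotiated options for all NFS mounts
    nfsstat -m
    # Reformat as XFS; destroys existing data on the volume
    mkfs.xfs -f /dev/vg0/primary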
Not happening unless your instances on Amazon are CentOS or some other
"standard" Linux distribution, not standard Amazon Linux. Amazon Linux is its
own thing and won't run outside the Amazon ecosphere, and Windows instances on
AWS don't react well at all to having their hypervisor yanked out from under
them.
> On Aug 18, 2017, at 03:22, Asanka Gunasekara wrote:
>
> Hi Eric,
>
> SSVM can access my nfs and I can manually mount it :(
>
> This "s-397-VM:/# grep com.cloud.agent.api.SecStorageSetupCommand
> /var/log/cloud.log" did not produced any output, but found below error
>
> From
> On Aug 7, 2017, at 23:44, Asanka Gunasekara wrote:
> NFS is running on a different server; I can manually mount this share as NFS
> and SMB
> Cloudstack - 4.9
> OS is Centos 7 (64)
* Make sure that it's accessible from the *storage* network (the network that
you configured ...
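A quick sanity check from a host that sits on the storage network; the server
IP and export path below are placeholders:

    # Does the server export anything to this network?
    showmount -e 10.100.255.3
    # Can we actually mount and read it?
    mount -t nfs 10.100.255.3:/storage/secondary /mnt/test && ls /mnt/test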
> On Aug 5, 2017, at 21:03, Ivan Kudryavtsev wrote:
>
> Hi, I think Eric's comments are too tough. E.g. I have 11x 1TB SSDs with
> Linux soft RAID 5 and ext4, and it works like a charm without special
> tuning.
>
> Qcow2 is also not so bad. LVM2 does it better, of course.
qcow2 performance has been historically bad regardless of the underlying
storage (it is an absolutely terrible storage format), which is why most
OpenStack installations from Kilo onward instead use managed LVM and
present LVM volumes as iSCSI volumes to QEMU, because using raw LVM ...
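In outline, that pattern is: carve a raw logical volume per guest disk and
export it as an iSCSI LUN. A sketch with lvcreate and targetcli; the volume
group, size, and IQN are all placeholders:

    # Carve a raw LV for the guest disk
    lvcreate -L 40G -n guest01 vg0
    # Export it via the LIO target (targetcli); names are placeholders
    targetcli /backstores/block create name=guest01 dev=/dev/vg0/guest01
    targetcli /iscsi create iqn.2017-01.com.example:guest01
    targetcli /iscsi/iqn.2017-01.com.example:guest01/tpg1/luns create /backstores/block/guest01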
First, about me -- I've been administering Linux systems since 1995. No, that's
not a typo -- that's 22 years. I've also worked for a firewall manufacturer in
the past, where I designed its layer 2 VLAN support, so I know VLANs and such.
I run a fairly complex production ...