Re: Dissimilar host OS within the same cluster - Not allowed?

2022-10-13 Thread Eric Green
I just stood up an Ubuntu cluster and started adding hosts to it while decommissioning hosts from the Centos cluster and restarting virtual machines onto it. The whole point of a cluster is that it is homogeneous, so that things like virtual machine migration can be assured to work. There's no

Re: Which linux system is recommended

2022-09-20 Thread Eric Green
We have used both CentOS and Ubuntu. Currently we have standardized on Ubuntu 20.04 LTS due to Red Hat shenanigans with CentOS and for reliability reasons (Red Hat often broke Cloudstack with bug fixes for CentOS; Ubuntu has never done so). Once a version of Cloudstack is released that has

Re: keycloak saml

2022-03-08 Thread Eric Green
I tried to configure SAML on my Cloudstack a while back and never got it to work either, though I must admit I wasn't trying too hard. So if you get it to work, please enlighten us! On 3/8/2022 10:59 AM, Piotr Pisz wrote: Hi, I am trying to configure Cloudstack with Keycloak SSO, with no

Re: CloudStack vs. vCloud Director

2021-10-26 Thread Eric Green
Not currently using the VMware integration, but it's on our road map. The main reason for using Cloudstack for us was that it worked well with Linux KVM hosts without needing to purchase expensive add-ons or licenses. This allowed us to spin up our cluster rapidly using existing machines in

Re: kvm ovs vm with trunk

2021-10-07 Thread Eric Green
On KVM, Cloudstack relies on the underlying Linux OS to do the base network configuration. Linux "port groups" are called "bonds" and virtual switches are called "bridges". In the Linux OS you set up the bond0 for all of the ports that will be part of the port group, with whatever parameters
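The bond-plus-bridge layout described can be sketched with iproute2; the interface names, the bond mode, and the bridge name `cloudbr0` are assumptions for illustration, not taken from the original message:

```shell
# Create bond0 ("port group") and enslave two NICs to it; slaves must
# be down before being enslaved. Bond mode is set at creation time.
ip link add bond0 type bond mode 802.3ad
ip link set enp1s0f0 down; ip link set enp1s0f0 master bond0
ip link set enp1s0f1 down; ip link set enp1s0f1 master bond0

# Create the bridge ("virtual switch") and put the bond underneath it.
ip link add cloudbr0 type bridge
ip link set bond0 master cloudbr0

# Bring everything up.
ip link set enp1s0f0 up; ip link set enp1s0f1 up
ip link set bond0 up; ip link set cloudbr0 up
```

In practice this is made persistent through the distribution's own network configuration (ifcfg files, netplan, or similar) rather than run by hand at boot.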

Re: Will Cloudstack work with existing KVM server?

2021-08-12 Thread Eric Green
Cloudstack will not, however, manage existing KVM virtual machines, which is what Chris wants to do. While that is theoretically possible, there's currently no practical way to populate the Cloudstack MySQL database with the information needed to make that happen. It appears his desire is to

Re: CPU Core Count Incorrect

2021-07-29 Thread Eric Green
On 7/29/2021 3:48 AM, Andrija Panic wrote: AND, the "insufficient capacity" has, wait for it, in 99% of cases NOTHING to do with not having enough capacity here or there; it's the stupid, generic message on failure. Talking about which, a bit off-topic here I know, I dug through the source

RE: number of cores

2018-11-19 Thread Eric Green
I am with KVM. I am sure it’s the core count preventing me from starting VMs, because when I hack the database to tell it I have 48 cores rather than 24 cores on my hosts, I can then start the VM. The only thing the logs say is that I can’t create a new VM due to lack of resources. Then it
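The database hack described presumably amounts to something like the sketch below. The table and column names (`host.cpus` in the `cloud` database) match CloudStack's schema, but the host id and core count are placeholders; the supported way to achieve the same effect is the `cpu.overprovisioning.factor` cluster setting:

```shell
# Unsupported sketch: overstate a host's core count in the cloud DB.
# Back up the database first; restart cloudstack-management afterwards.
mysql -u cloud -p cloud -e \
  "UPDATE host SET cpus = 48 WHERE id = 3;"
```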

Re: kvm live volume migration

2018-01-19 Thread Eric Green
n add yourself to the PR to get notified when things are moving on it. > https://github.com/apache/cloudstack/pull/1709 > On Wed, Jan 17, 2018 at 10:56 AM, Eric Green <eric.lee.gr...@gmail.com> wrote: > Theoretically on Centos 7 a

Re: kvm live volume migration

2018-01-17 Thread Eric Green
Theoretically on Centos 7 as the host KVM OS it could be done with a couple of pauses and the snapshotting mechanism built into qcow2, but there is no simple way to do it directly via virsh, the libvirtd/qemu control program that is used to manage virtualization. It's not as with issuing a
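For reference, later libvirt releases do expose a block-copy primitive that covers much of this. A minimal sketch, assuming a domain named `myvm`, a disk target of `vda`, and a placeholder destination path (on older libvirt the domain had to be made transient before blockcopy would run):

```shell
# Mirror the live disk to the new location, then pivot the running
# domain onto the copy once the mirror converges.
virsh blockcopy myvm vda /mnt/newprimary/myvm-root.qcow2 \
    --wait --verbose --pivot
```

Note that CloudStack manages disks itself, so doing this behind its back would leave the database pointing at the old path.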

Re: [PROPOSE] EOL for supported OSes & Hypervisors

2018-01-12 Thread Eric Green
Official EOL for Centos 6 / RHEL 6 as declared by Red Hat Software is 11/30/2020. Jumping the gun a bit there, padme. People on Centos 6 should certainly be working on a migration strategy right now, but the end is not here *yet*. Furthermore, the install documentation is still written for

Re: Recover VM after KVM host down (and HA not working) ?

2017-12-23 Thread Eric Green
If all else fails, change its state to the correct state in the MySQL database and restart the management service. Sadly that is the only way I could do it when my Cloudstack got confused and stuck an instance in an intermediate state where I couldn't do anything with it. On Dec 22, 2017 at
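A sketch of that last-resort fix, assuming a management server with the standard `cloud` database; the table and column (`vm_instance.state`) are from CloudStack's schema, while the instance name is a placeholder. Back up the database before touching it:

```shell
# Force a stuck instance into a known state, then restart management.
mysql -u cloud -p cloud <<'SQL'
UPDATE vm_instance SET state = 'Stopped'
 WHERE instance_name = 'i-2-42-VM' AND removed IS NULL;
SQL
systemctl restart cloudstack-management
```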

Re: Bandwith limit on guests

2017-10-30 Thread Eric Green
Did you try the same test from the exact same physical host that one of the guests is running on? There may be congestion between the Cloudstack network and the NFS network. I just tested this by creating a compute offering that had the 200Mbit limit and assigning it to an instance. I mounted
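One way to run the suggested test, assuming iperf3 is installed on both ends and a placeholder guest IP; running the client on the guest's own KVM host rules out inter-host congestion:

```shell
# On the guest: start a throughput server.
iperf3 -s

# On the physical host running that guest: measure for 30 seconds.
iperf3 -c 10.1.1.25 -t 30
```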

Re: Rsize / Wsize configuration for NFS share

2017-10-20 Thread Eric Green
Okay. So: 1) Don't use EXT4 with LVM/RAID, it performs terribly with QCOW2. Use XFS. 2) I didn't do anything to my NFS mount options and they came out fine: 10.100.255.3:/storage/primary3 on /mnt/0ab13de9-2310-334c-b438-94dfb0b8ec84 type nfs4
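To see what rsize/wsize the client actually negotiated, rather than setting them by hand (the mount-point UUID in the example above will differ per install):

```shell
# Show negotiated options, including rsize/wsize, for all NFS mounts.
nfsstat -m

# Or inspect a specific mount directly.
grep primary3 /proc/mounts
```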

Re: Migrating VMs from AWS to CloudStack

2017-09-07 Thread Eric Green
Not happening unless your instances on Amazon are Centos or some other "standard" Linux distribution, not standard Amazon Linux. Amazon Linux is its own thing and won't run outside the Amazon ecosphere, and Windows instances on AWS don't react well at all to having their hypervisor yanked out

Re: Secondary storage is not secondary properly

2017-08-18 Thread Eric Green
> On Aug 18, 2017, at 03:22, Asanka Gunasekara wrote: > > Hi Eric, > > SSVM can access my NFS and I can manually mount it :( > > This "s-397-VM:/# grep com.cloud.agent.api.SecStorageSetupCommand > /var/log/cloud.log" did not produce any output, but found below error > > From

Re: Secondary storage is not secondary properly

2017-08-08 Thread Eric Green
> On Aug 7, 2017, at 23:44, Asanka Gunasekara wrote: > NFS is running on a different server, I can manual mount this share as NFS > and SMB > Cloud stack - 4.9 > Os is Centos 7 (64) * Make sure that it's accessible from the *storage* network (the network that you configured

Re: KVM qcow2 perfomance

2017-08-06 Thread Eric Green
> On Aug 5, 2017, at 21:03, Ivan Kudryavtsev wrote: > > Hi, I think Eric's comments are too tough. E.g. I have 11xSSD 1TB with > linux soft raid 5 and Ext4 and it works like a charm without special > tunning. > > Qcow2 also not so bad. LVM2 does it better of course

Re: KVM qcow2 perfomance

2017-08-05 Thread Eric Green
qcow2 performance has been historically bad regardless of the underlying storage (it is an absolutely terrible storage format), which is why most OpenStack Kilo and later installations instead usually use managed LVM and present LVM volumes as iSCSI volumes to QEMU, because using raw LVM
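The two approaches contrasted here can be sketched as follows; volume names, sizes, and the volume group are placeholders, and the raw logical volume is what an OpenStack-style setup would then export over iSCSI:

```shell
# qcow2: a sparse file-backed image (metadata preallocation helps
# somewhat, but it is still a copy-on-write file format).
qemu-img create -f qcow2 -o preallocation=metadata /vm/disk0.qcow2 100G

# Raw LVM: a logical volume handed to QEMU as a raw block device.
lvcreate -L 100G -n disk0 vg_vm
```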

Some things I found out installing on Centos 7

2017-08-02 Thread Eric Green
First, about me -- I've been administering Linux systems since 1995. No, that's not a typo -- that's 22 years. I've also worked for a firewall manufacturer in the past, I designed the layer 2 VLAN support for a firewall vendor, so I know VLAN's and such. I run a fairly complex production