Re: VM HA enabled when on ceph
Jess,

Yes, that works.

-Si

From: jesse.wat...@gmail.com
Sent: Wednesday, June 26, 2019 3:55 PM
To: users@cloudstack.apache.org
Subject: VM HA enabled when on ceph

Hi all,

I have ACS 4.12.0.0 running with 2 KVM hosts and Ceph for primary storage. How do I enable HA for my VM(s)? The manual hints that "HA features work with iSCSI or NFS primary storage" and leaves out Ceph. Can I just go into the database and set the VM to HA?

mysql> update vm_instance set ha_enabled = 1 where id = ;

TIA,
Jesse
VM HA enabled when on ceph
Hi all,

I have ACS 4.12.0.0 running with 2 KVM hosts and Ceph for primary storage. How do I enable HA for my VM(s)? The manual hints that "HA features work with iSCSI or NFS primary storage" and leaves out Ceph. Can I just go into the database and set the VM to HA?

mysql> update vm_instance set ha_enabled = 1 where id = ;

TIA,
Jesse
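For anyone who would rather not edit the database directly: the same flag is exposed through the API, where updateVirtualMachine accepts a haenable parameter (verify against your version's API reference). Below is a minimal Python sketch that only builds a signed request URL query string, following the documented CloudStack signing scheme (sort parameters, lowercase the serialized string, HMAC-SHA1 with the secret key, base64-encode). The VM UUID and keys are placeholders, not real values.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote


def sign_request(params, secret):
    """Build a signed CloudStack API query string.

    Sketch of the documented signing scheme: sort params by key,
    lowercase the serialized query, HMAC-SHA1 it with the secret
    key, and base64-encode the digest.
    """
    query = "&".join(
        f"{k}={quote(str(v), safe='*')}" for k, v in sorted(params.items())
    )
    digest = hmac.new(secret.encode(), query.lower().encode(), hashlib.sha1).digest()
    return query + "&signature=" + quote(base64.b64encode(digest).decode(), safe="")


# haenable is the updateVirtualMachine flag discussed above;
# the id and keys here are placeholders.
params = {
    "command": "updateVirtualMachine",
    "id": "REPLACE-WITH-VM-UUID",
    "haenable": "true",
    "apiKey": "YOUR-API-KEY",
    "response": "json",
}
print(sign_request(params, "YOUR-SECRET-KEY"))
```

Append the printed string to `http://<management-server>:8080/client/api?` to issue the call. This keeps the state change inside the management server rather than behind its back in MySQL.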
Re: Cpu Overprovisioning factor
No (currently). Perhaps in the next year or so we *might* have some changes around that, but at the moment - no.

Andrija

On Wed, 26 Jun 2019 at 14:52, Fariborz Navidan wrote:
> Hello All,
>
> Is there any way to force ACS to take into consideration the new CPU
> overprovisioning factor without having to shut down lots of VMs on the host?
>
> Thanks

--
Andrija Panić
Re: Juniper SRX Support in 4.13
+users@

The larger community does not have the infra to help with regression testing of SRX plugin related changes. I would reach out to our users list as well and ask if there are any SRX users who want to share their thoughts. Otherwise, we only have Richard and his team's changes and testing effort to rely on, and we can help run smoketests to validate any regressions. Thanks.

Regards,
Rohit Yadav

From: Richard Lawley
Sent: Wednesday, 26 June, 6:37 PM
Subject: Juniper SRX Support in 4.13
To: d...@cloudstack.apache.org

Hi,

In the current release, we list CloudStack as supporting Juniper SRX (model srx100b) versions 10.3 to 10.4 R7.5. As I've mentioned in a previous email, this is very old (EOL 2014). The main thing that causes a problem here is that somewhere between JunOS 10 and 15, port forwarding changed from being a single port per rule to a port range per rule.

I'm prepared to modify the SRX plugin so that it works fully with JunOS 15 (I've done some basics), but to fully complete this would require breaking compatibility with older versions. I think, given that JunOS 15 is the lowest currently available version, and 19 is also available, it would be better if we upped the minimum requirements. In addition, when I asked previously for input from anyone using it, there was only one response (from Jayapal Uradi), so I don't think this is being widely used.

We're trying to use SRXs with JunOS 15 (and also testing with vSRX appliances). What are anyone's thoughts on bumping the supported SRX version to 15?

Regards,
Richard

rohit.ya...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London WC2E 9DP, UK
@shapeblue
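To make the compatibility break Richard describes concrete: a forwarded range that once required one rule per port can, on newer JunOS, be expressed as a single range rule. Here is a rough Python sketch of the two rule-generation strategies; the match-stanza text is purely illustrative, not actual SRX plugin output or verbatim JunOS syntax.

```python
def srx_pf_rules(start_port, end_port, legacy=False):
    """Illustrate the rule-count difference between old and new JunOS.

    legacy=True mimics the older behaviour described in the thread
    (one port-forwarding rule per port); legacy=False mimics the
    newer behaviour (one rule covering the whole range).
    """
    if legacy:
        # One rule per port in the range.
        return [f"match destination-port {p}" for p in range(start_port, end_port + 1)]
    # A single rule covering the whole range.
    return [f"match destination-port {start_port} to {end_port}"]


legacy = srx_pf_rules(8000, 8004, legacy=True)  # 5 rules
modern = srx_pf_rules(8000, 8004)               # 1 rule
print(len(legacy), len(modern))
```

The plugin cannot emit both shapes from one code path without version branching, which is why supporting JunOS 15 fully means breaking the pre-15 format.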
Cpu Overprovisioning factor
Hello All,

Is there any way to force ACS to take into consideration the new CPU overprovisioning factor without having to shut down lots of VMs on the host?

Thanks
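For context on what the factor does: with cpu.overprovisioning.factor set at cluster scope, CloudStack multiplies the physical CPU capacity it advertises to the allocator. A toy Python illustration of that arithmetic follows; the host numbers are made up and this is not CloudStack source code.

```python
def advertised_cpu_mhz(cores, speed_mhz, factor):
    # CloudStack advertises allocatable CPU capacity as roughly
    # physical capacity times the overprovisioning factor.
    # (Illustrative arithmetic only.)
    return cores * speed_mhz * factor


physical = advertised_cpu_mhz(32, 2600, 1.0)  # raw host capacity
overprov = advertised_cpu_mhz(32, 2600, 4.0)  # capacity with factor 4
print(physical, overprov)
```

The question in this thread is whether capacity already consumed by running VMs gets recalculated against the new factor without a stop/start cycle; per Andrija's reply above, it currently does not.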
Re: Very BIG mess with networking
I'm fine with just a beer :P

Glad you solved that one!

On Wed, 26 Jun 2019 at 11:53, Alessandro Caviglione wrote:
> Hi Andrija,
> I want to say a big THANK YOU for your suggestions. I changed the bond mode and
> network name on the XenServer pool and updated the network name in CS... and it
> works now!!
> Thank you again!!
> A big hug! :)
>
> On Tue, Jun 25, 2019 at 12:45 PM Andrija Panic wrote:
> > I would say, make sure you have an identical network/bond setup as before
> > (irrelevant of which child PIFs are in the bond/network) - so from
> > CloudStack's point of view, you made zero changes (same networks/bonds).
> >
> > Limited reading capabilities from my side on mobile... but I would say that
> > plugging in the VIF for a guest network is the problem - in the XenServer
> > logs, you can clearly see an error while joining the slave interface (VIF)
> > to the network - but again, not sure which network it is (public or guest
> > network).
> >
> > On Tue, Jun 25, 2019, 10:50 Alessandro Caviglione <c.alessan...@gmail.com>
> > wrote:
> > > No, the new bond has the same name...
> > > In the log there is:
> > >
> > > Found more than one network with the name Public
> > > 2019-06-25 00:14:35,802 DEBUG [c.c.h.x.r.XsLocalNetwork]
> > > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) *Found a network called
> > > Public on host=192.168.200.39;
> > > Network=9fa48b75-d68e-feaf-2eb4-8a7340f8c89b;
> > > pif=ca4c1679-fa36-bc93-37de-28a74ddc4f2c*
> > > 2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created a vif
> > > dfaab3d7-7921-e4d5-ba27-537e8d549a5c on 2
> > > 2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VIF for r-899-VM
> > > on nic [Nic:Guest-10.122.12.1-vlan://384]
> > > 2019-06-25 00:14:35,809 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for network named
> > > GuestVM
> > > 2019-06-25 00:14:35,825 DEBUG [c.c.h.x.r.XsLocalNetwork]
> > > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found a network called
> > > GuestVM on host=192.168.200.39;
> > > Network=300e55f0-88ff-a460-e498-e75424bc292a;
> > > pif=b67841c5-6361-0dbf-a63d-a3e9c1b9f2fc
> > >
> > > So it seems that it gets the right network and then continues looking
> > > for the other network.
> > > Do you think this is the issue, even though CS continues with other tasks
> > > instead of stopping after finding more than one network named Public?
> > > Could I simply change the network name in the Xen pool and update it in CS?
> > >
> > > On Tue, Jun 25, 2019 at 9:52 AM Andrija Panic wrote:
> > > > If your new bond has a changed name, have you also changed the
> > > > XenServer Traffic Label in CloudStack?
> > > > Active-active is known to be sometimes very problematic; switch back
> > > > to active-passive until you solve your issues. Experiment later with
> > > > active-active.
> > > >
> > > > On Tue, Jun 25, 2019, 09:34 Michael Kesper wrote:
> > > > > Hi Alessandro,
> > > > >
> > > > > On 25.06.19 08:43, Alessandro Caviglione wrote:
> > > > > > complains on more than one network with name Public... ???
> > > > >
> > > > > [...]
> > > > >
> > > > > >> 2019-06-25 00:14:35,792 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > > > > >> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for
> > > > > >> network named Public
> > > > > >> 2019-06-25 00:14:35,793 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > > > > >> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found more than
> > > > > >> one network with the name Public
> > > > >
> > > > > Bye
> > > > > Michael

--
Andrija Panić
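For readers hitting the same "Found more than one network" message: when several XenServer networks share a name-label, CloudStack has to pick one, and the log lines in this thread suggest it settles on the network that has a PIF on the host in question. A hedged Python sketch of that selection idea follows; the data structures and field names are invented for illustration and are not taken from the CitrixResourceBase/XsLocalNetwork source.

```python
def pick_network(networks, host):
    """Pick the network whose PIF lives on the given host.

    `networks` is a list of dicts shaped like
    {"name": ..., "uuid": ..., "pifs": {host: pif_uuid}}.
    Loosely mirrors the behaviour visible in the log: among
    same-named networks, the one with a local PIF wins.
    """
    matches = [n for n in networks if host in n["pifs"]]
    if len(matches) != 1:
        raise ValueError(f"expected exactly one local network, got {len(matches)}")
    return matches[0]


# Two networks both labelled "Public", only one with a PIF on this host
# (UUIDs shortened/made up for the example).
networks = [
    {"name": "Public", "uuid": "9fa48b75", "pifs": {"192.168.200.39": "ca4c1679"}},
    {"name": "Public", "uuid": "deadbeef", "pifs": {"other-host": "ffffffff"}},
]
print(pick_network(networks, "192.168.200.39")["uuid"])
```

If both same-named networks had a PIF on the host, this kind of disambiguation falls apart, which is consistent with the fix in this thread: give the networks distinct name-labels in the pool and update the traffic labels in CloudStack to match.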
Re: Very BIG mess with networking
Hi Andrija,

I want to say a big THANK YOU for your suggestions. I changed the bond mode and network name on the XenServer pool and updated the network name in CS... and it works now!!
Thank you again!!
A big hug! :)

On Tue, Jun 25, 2019 at 12:45 PM Andrija Panic wrote:
> I would say, make sure you have an identical network/bond setup as before
> (irrelevant of which child PIFs are in the bond/network) - so from
> CloudStack's point of view, you made zero changes (same networks/bonds).
>
> Limited reading capabilities from my side on mobile... but I would say that
> plugging in the VIF for a guest network is the problem - in the XenServer
> logs, you can clearly see an error while joining the slave interface (VIF)
> to the network - but again, not sure which network it is (public or guest
> network).
>
> On Tue, Jun 25, 2019, 10:50 Alessandro Caviglione wrote:
> > No, the new bond has the same name...
> > In the log there is:
> >
> > Found more than one network with the name Public
> > 2019-06-25 00:14:35,802 DEBUG [c.c.h.x.r.XsLocalNetwork]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) *Found a network called
> > Public on host=192.168.200.39;
> > Network=9fa48b75-d68e-feaf-2eb4-8a7340f8c89b;
> > pif=ca4c1679-fa36-bc93-37de-28a74ddc4f2c*
> > 2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created a vif
> > dfaab3d7-7921-e4d5-ba27-537e8d549a5c on 2
> > 2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VIF for r-899-VM
> > on nic [Nic:Guest-10.122.12.1-vlan://384]
> > 2019-06-25 00:14:35,809 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for network named
> > GuestVM
> > 2019-06-25 00:14:35,825 DEBUG [c.c.h.x.r.XsLocalNetwork]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found a network called
> > GuestVM on host=192.168.200.39;
> > Network=300e55f0-88ff-a460-e498-e75424bc292a;
> > pif=b67841c5-6361-0dbf-a63d-a3e9c1b9f2fc
> >
> > So it seems that it gets the right network and then continues looking
> > for the other network.
> > Do you think this is the issue, even though CS continues with other tasks
> > instead of stopping after finding more than one network named Public?
> > Could I simply change the network name in the Xen pool and update it in CS?
> >
> > On Tue, Jun 25, 2019 at 9:52 AM Andrija Panic wrote:
> > > If your new bond has a changed name, have you also changed the
> > > XenServer Traffic Label in CloudStack?
> > > Active-active is known to be sometimes very problematic; switch back
> > > to active-passive until you solve your issues. Experiment later with
> > > active-active.
> > >
> > > On Tue, Jun 25, 2019, 09:34 Michael Kesper wrote:
> > > > Hi Alessandro,
> > > >
> > > > On 25.06.19 08:43, Alessandro Caviglione wrote:
> > > > > complains on more than one network with name Public... ???
> > > >
> > > > [...]
> > > >
> > > > >> 2019-06-25 00:14:35,792 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > > > >> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for
> > > > >> network named Public
> > > > >> 2019-06-25 00:14:35,793 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > > > >> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found more than
> > > > >> one network with the name Public
> > > >
> > > > Bye
> > > > Michael