We found the root cause of our problem. The cluster was dedicated to a
domain, and the VM was not part of this domain. CS does not show any error in
the UI when you try this. I will check the latest version of CS for this.
Best regards,
Swen
-Original Message-
From: Swen - swen.io [m
Thanks... But I think something else is now broken, too...:
The SystemVMs are now no longer being provisioned: They come up "empty"
with "systemvm type=".
I also deleted the Console Proxy VM, and the new one is plain, too...
I tried with Git branch 4.11 (producing 4.11.1-SNAPSHOT RPMs), same
I investigated further, and opened an issue:
https://github.com/apache/cloudstack/issues/2561
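In case it helps with debugging: a rough way to see what boot arguments the
hypervisor actually hands to a system VM (XenServer assumed, UUID is a placeholder):
# xe vm-param-get uuid=<systemvm-uuid> param-name=PV-args
And inside the system VM the parsed copy normally ends up here:
# cat /var/cache/cloud/cmdline
If "type=" is missing there as well, the management server never passed it along.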
Cheers,
Martin
On 11.04.18 at 12:18, Martin Emrich wrote:
> Thanks... But I think something else is now broken, too...:
> The SystemVMs are now no longer being provisioned: They come up "empty" with
Hi Martin,
I've just read your issue on GitHub and was wondering how you've been able to
select Debian 9.
But maybe you did a fresh installation.
We did an update from 4.9.2 to 4.11.0 and were able to select "Debian GNU/Linux
7 (64-bit)" as the highest possible Debian version. The documentation said
virt-what will give 'xen-domU' for paravirtualized guests. Didn't XenServer
make some kind of change around this as a Meltdown/Spectre mitigation?
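For what it's worth, a rough way to check from the XenServer side whether a given
guest runs PV or HVM (UUID is a placeholder):
# xe vm-param-get uuid=<vm-uuid> param-name=HVM-boot-policy
An empty value means the VM boots paravirtualized; "BIOS order" means HVM.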
Kind regards,
Paul Angus
[email protected]
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue
-Original Message-
AFAIK not for 6.5 SP1.
https://xen-orchestra.com/blog/meltdown-and-spectre-for-xenserver/ shows that
7.x is fixed and hints that HVM guests are not affected (at least for Spectre).
https://support.citrix.com/article/CTX231390
" 6.2 SP1, and 6.5 SP1 versions of XenServer require extensive
Just tried a Debian 9 running on XenServer 6.5 SP1 with model "Other 2.6x Linux
(64-bit)":
# virt-what --version
1.15
# virt-what
hyperv
xen
xen-domU
#
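The extra "hyperv" line is presumably just the Viridian (Hyper-V) enlightenments
XenServer enables for HVM guests; if you want to confirm, the platform record
should show it (UUID is a placeholder):
# xe vm-param-get uuid=<vm-uuid> param-name=platform
Look for "viridian: true" in the output.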
On Wednesday, 11.04.2018 at 13:50 +0200, Stephan Seitz wrote:
> AFAIK not for 6.5 SP1.
> https://xen-orchestra.com/blog/meltdown-and-spectre-
Can you execute the following command in your XenServer?
> xe vm-param-list uuid=
>
Then, what is the content of these parameters?
- PV-legacy-args
- PV-bootloader
- PV-bootloader-args
- HVM-boot-policy
- HVM-boot-params
- HVM-shadow-multiplier
It is just to make sure that th
# xe vm-param-list uuid=c1bcef11-ffc2-24bd-7c5e-0840fb4f8f49 | grep -e PV-legacy-args -e PV-boot -e HVM-boot -e HVM-shadow
HVM-boot-policy ( RW): BIOS order
HVM-boot-params (MRW): order: dc
HVM-shadow-multiplier ( RW): 1.000
PV-legacy-args ( R
That is interesting. The VM is indeed in HVM mode.
On Wed, Apr 11, 2018 at 9:04 AM, Stephan Seitz wrote:
> # xe vm-param-list uuid=c1bcef11-ffc2-24bd-7c5e-0840fb4f8f49 | grep -e
> PV-legacy-args -e PV-boot -e HVM-boot -e HVM-shadow
>HVM-boot-policy ( RW): BIOS order
>
Rafael,
don't get confused, I'm not the OP, I just added a few thoughts. We are running a
very similar infrastructure to the OP's, but our systemvm template is Debian 7
instead of Debian 9 (which he has).
The guest you asked about is set to "other linux2.x 64bit", so it *should* (as
verified :) ) run in H
For me:
[root@csdev-xen1 ~]# xe vm-param-list
uuid=68daf990-0cc6-174c-c114-30f52940af1d
uuid ( RO) : 68daf990-0cc6-174c-c114-30f52940af1d
HVM-boot-policy ( RW): BIOS order
HVM-boot-params (MRW): order: dc
HVM-shadow-multiplier (
Hi!
On 11.04.18 at 13:38, Stephan Seitz wrote:
> Hi Martin,
> I've just read your issue on GitHub and was wondering how you've been able to
> select Debian 9.
> But maybe you did a fresh installation.
No, it was an upgrade from 4.9.2.0. I set the OS type to Debian 8 in ACS.
"Debian 9.3" is what XenCenter rep
Hi all,
on one of our two CloudStack instances we have an issue adding NICs to
VMs after upgrading to 4.11.0.0. On the other instance the problem does
not occur after the upgrade.
Before posting an issue on GitHub I would like to know if someone else
can reproduce the following problem:
Our setup: Adv
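To rule out the UI, the same operation can also be driven through the API, e.g.
a rough CloudMonkey sketch (IDs are placeholders):
# cloudmonkey add nictovirtualmachine virtualmachineid=<vm-id> networkid=<network-id>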
A small update about my problem.
I've recreated the zone from scratch this morning, and one of my "cloudbr"
bridges used for the secondary storage was misconfigured.
So now I can ping the secondary storage from the KVM host, CS-MGMT and SSVM, and
mount the NFS on them... but... the agent is still not coming up an
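If it helps, these are the basic checks I'd run from the KVM host to confirm the
export is reachable (IP and paths are placeholders):
# showmount -e <secondary-storage-ip>
# mount -t nfs <secondary-storage-ip>:/export/secondary /mnt/sectest
If that works but the agent still stays down, /var/log/cloudstack/agent/agent.log
usually shows why.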
There is a global setting you have to set to use an internal, non-public IP.
Try setting secstorage.allowed.internal.sites to an internal network CIDR.
You may need to destroy the SSVM for the setting to take effect. In my case there
is some sort of minor bug where it takes upwards of 5 minutes for the SSVM to
pro
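For reference, a rough CloudMonkey sketch of the two steps (the setting name and
value should be double-checked against your Global Settings; the ID is a placeholder):
# cloudmonkey update configuration name=secstorage.allowed.internal.sites value=<internal-cidr>
# cloudmonkey destroy systemvm id=<ssvm-id>
A management server restart may also be needed before some globals are picked up.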
Hi Ilya,
Thanks for your answer.
Yes, I did it already with no effect :/
Best regards,
N.B
From: ilya musayev
Sent: Wednesday, April 11, 2018 20:26:34
To: [email protected]
Subject: Re: ssvm NFS public ip
There is a global setting you have to set to u
Hi, list:
I am in the process of upgrading my hypervisor clusters from XenServer 6.5 to
7.0 for all my ACS instances.
My XS 7.0 clusters are patched up to XS70E050.
During cluster rolling upgrade, VM instances are live migrated several times,
eventually all of them running on XS 7.0 hosts.
H