I am currently using CloudStack 4.3 with Citrix XenServer 6.2.
After upgrading from 4.2.1 to 4.3, the console VM does not restart:
cloudstack-sysvmadm -a
Stopping and starting 1 secondary storage vm(s)...
Done stopping and starting secondary storage vm(s)
Stopping and starting 1 console proxy vm(s)...
Hi Suresh,
Thanks for your update.
Is there already a submitted bug (what is the bug ID)? Will it be fixed in 4.3.1 or
committed to 4.4?
--
Serg
> On 21 Apr 2014, at 08:50, Suresh Sadhu wrote:
>
> It's temporary, and it's a regression bug caused by another last-minute commit.
> Due to this, traffic lab
Hi Ameen,
comments inline.
On 19-Apr-2014, at 11:02 pm, Ameen Ali wrote:
> Dear CloudStackers,
>
> I've been having this issue for quite a while. My SSVM is not able to ping
> the Management Server or DNS, or resolve download.cloud.com. Therefore I am
> not able to download any template even if I
Hi,
I am using CS 4.3 with ESXi & vCenter 5.5. While creating system VMs, it
gives the error below, with reference to this bug fix:
https://issues.apache.org/jira/browse/CLOUDSTACK-4875. I thought it should
be in CS 4.3.
2014-04-21 11:09:25,158 DEBUG [c.c.a.m.DirectAgentAttache]
(DirectAgent-82:ctx-9d
It's temporary, and it's a regression bug caused by another last-minute commit. Due
to this, traffic labels are not being considered.
Regards
Sadhu
-Original Message-
From: Serg Senko [mailto:kernc...@gmail.com]
Sent: 21 April 2014 11:12
To: users@cloudstack.apache.org
Subject: Re: Cloudstack 4.
Sorry, actually I see the 'connection refused' is just your own test
after the fact. By that time the vm may be shut down, so connection
refused would make sense.
What happens if you do this:
'virsh dumpxml v-1-VM > /tmp/v-1-VM.xml' while it is running
stop the cloudstack agent
'virsh destroy v-1
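A sketch of the full sequence this seems to be suggesting. The original message is truncated, so the steps after `virsh destroy` are an assumption; the VM name `v-1-VM` comes from the thread.

```shell
# Dump the domain XML while the VM is still running.
virsh dumpxml v-1-VM > /tmp/v-1-VM.xml

# Stop the CloudStack agent so it cannot restart or tear down the VM.
service cloudstack-agent stop

# Hard-stop the qemu process for the VM.
virsh destroy v-1-VM

# Assumed final step: start the VM directly from the dumped XML, so any
# qemu/libvirt error is reported to you instead of being swallowed by the agent.
virsh create /tmp/v-1-VM.xml
```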
Hi,
Yes sure,
root@r-256-VM:~# cat /etc/cloudstack-release
Cloudstack Release 4.3.0 (64-bit) Wed Jan 15 00:27:19 UTC 2014
Also, I tried to destroy the VR and re-create it; the VR came up with the same problem.
The "cloudstack-sysvmadm" script hasn't received a success answer from the VRs.
I have a finish rolling
Hi,
What does "In 4.3 traffic labels are not considered" mean?
Is it temporary, or are "traffic labels" deprecated now?
Does this mean that anyone with a KVM traffic-labels environment can't upgrade to
4.3.0?
On Thu, Apr 10, 2014 at 5:05 PM, Suresh Sadhu wrote:
> Did you use traffic name labels?
>
Hi,
I have the same issue after upgrading from 4.1.1 to 4.3.0.
Take a look: in a CS 4.2 VR you have NICs eth0, eth1, eth2.
In a CS 4.3 VR you have 4 NICs, where eth2 and eth3 are the same.
How did CS 4.3 pass QA?
On Sat, Apr 12, 2014 at 12:16 AM, motty cruz wrote:
> I have a testing cloudstack cluster,
Type brctl show
and check whether the public interface of your router is plugged into cloudbr0 or
cloudbr1. If it is plugged into cloudbr0, you need to detach it from cloudbr0,
attach that interface to cloudbr1, and re-apply all the iptables rules.
Take a backup of the iptables rules with ipt
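A minimal sketch of the bridge fix described above, assuming the router's public NIC shows up as `vnet2` under `cloudbr0` (the interface name is a placeholder; check the `brctl show` output first):

```shell
# Back up the current iptables rules before touching anything.
iptables-save > /root/iptables-backup.rules

# See which bridge each interface is attached to.
brctl show

# Detach the router's public interface from the wrong bridge...
brctl delif cloudbr0 vnet2

# ...and attach it to the public bridge.
brctl addif cloudbr1 vnet2

# Re-apply the saved firewall rules.
iptables-restore < /root/iptables-backup.rules
```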
You may want to look in the qemu log of the vm to see if there's
something deeper going on, perhaps the qemu process is not fully
starting due to some other issue. /var/log/libvirt/qemu/v-1-VM.log, or
something like that.
On Sun, Apr 20, 2014 at 11:22 PM, Marcus wrote:
> No, it has nothing to do
No, it has nothing to do with ssh or libvirt daemon. It's the literal
unix socket that is created for virtio-serial communication when the
qemu process starts. The question is why the system is refusing access
to the socket. I assume this is being attempted as root.
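Something like the following may help check that. The socket path is the libvirt default on many distros and is an assumption here; confirm it against the `<channel>` element in `virsh dumpxml v-1-VM`.

```shell
# Inspect ownership and mode of the virtio-serial socket for the VM.
ls -l /var/lib/libvirt/qemu/v-1-VM.agent

# See which user the qemu process itself runs as.
ps -o user=,pid=,cmd= -p "$(pgrep -f 'qemu.*v-1-VM')"

# If the socket owner and the connecting user differ, or SELinux/AppArmor is
# denying the connect, that would explain the refused access.
```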
On Sat, Apr 19, 2014 at 9:58 AM
No idea, but have you verified that the vm is running the new system
vm template? What happens if you destroy the router and let it
recreate?
On Sun, Apr 20, 2014 at 6:20 PM, Serg Senko wrote:
> Hi
>
> After the upgrade and restarting the system VMs,
> all VRs started with some bad network configuration,
Hi
After the upgrade and restarting the system VMs,
all VRs started with some bad network configuration; egress rules stopped
working, as did some static NAT rules.
Here is "ip addr show" from one of the VRs:
root@r-256-VM:~# ip addr show
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:
Hello,
Question — I am following the directions found here:
http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/latest/hypervisor_installation.html?highlight=network
I am wondering something about the interfaces. I do as it says; however, I am
unable to connect to my b
Geoff,
Thank you, that is what I wanted. I am planning to use NFS for
secondary storage and CLVM for primary, as CloudStack doesn't support direct SAN.
I am not sure whether there is any other solution to present my SAN LUNs to all the VM
hosts.
Ram
On Sun, Apr 20, 2014 at 3:07 PM, Geoff Higginbottom <
geoff
Ram,
The management server(s) need to have access to secondary storage, not primary.
If you have placed your pri and sec storage devices on a common network
(perfectly acceptable config) then you just need to ensure the management
servers have access to the sec storage devices. Best practice
Hello all,
There is some bug after upgrading from 4.1.1 to 4.3.0.
KVM hypervisor.
Agent settings labels:
guest.network.device=cloudbr1
private.network.device=cloudbr1
public.network.device=cloudbr0
After the upgrade to the 4.3 SSVM, all VRs started with multiple [public]
interfaces.
I have some VRs with
Thanks for the video, I have one more question.
So the CS/management server needs to have access to the storage network? I have two
networks, one for storage and one for regular traffic; my hypervisor hosts
can connect to storage, but my management server doesn't have connectivity
to storage. I am wonderin
Hi Michael,
I usually build on CentOS, but I've had a run at the Ubuntu build; it all looks
OK.
Have you got somewhere I can upload these debs to?
Regards
Paul Angus
Cloud Architect
S: +44 20 3603 0540 | M: +447711418784 | T: CloudyAngus
paul.an...@shapeblue.com
-Original Message-
Fro
I am attempting to assist myself in this, but I am not quite finding out
what I should do. I suspect this is exactly why I am getting a 404 error
for the CS UI:
https://www.dropbox.com/s/2xsrwj931hi4948/Screenshot%202014-04-20%2010.33.46.png
2014-04-20 10:31:25,965 WARN [utils.nio.NioConnection
On 20.04.2014 13:24, Salvatore Sciacco wrote:
2014-04-20 12:31 GMT+02:00 Nux! :
It looks like a bug, "qemu-img convert" should be used instead of "cp -f",
among others.
I suppose some code was added to do a simple copy when the format is the
same; this wasn't the case with the 4.1.1 version.
2014-04-20 12:31 GMT+02:00 Nux! :
> It looks like a bug, "qemu-img convert" should be used instead of "cp -f",
> among others.
>
I suppose some code was added to do a simple copy when the format is the
same; this wasn't the case with the 4.1.1 version.
>
Do you mind opening an issue in https://is
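For reference, a hedged sketch of what the copy should look like for a CLVM target (paths and volume names are placeholders): a QCOW2 source has to be converted to raw when written onto a logical volume, which is why a plain `cp -f` breaks here.

```shell
# Convert the QCOW2 source volume to raw directly onto the target LV.
qemu-img convert -f qcow2 -O raw \
    /mnt/primary1/source-volume.qcow2 \
    /dev/vg-primary2/target-volume

# `cp -f` would write the QCOW2 container bytes onto the LV, so the pool
# (which expects raw) ends up with an unusable volume.
```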
On 20.04.2014 10:57, Salvatore Sciacco wrote:
ACS version: 4.2.1
Hypervisors: KVM
Storage pool type: CLVM
Since we upgraded from 4.1 to 4.2.1, moving volumes to a different primary
storage pool fails. I've enabled debug on the agent side and I think there
is a problem with the format type con
ACS version: 4.2.1
Hypervisors: KVM
Storage pool type: CLVM
Since we upgraded from 4.1 to 4.2.1, moving volumes to a different primary
storage pool fails. I've enabled debug on the agent side and I think there
is a problem with the format type conversion.
Volume on database has format QCOW2
these