disk IO throttling IOPS feature

2014-06-20 Thread Praveen Buravilli
Hi,

Can anyone tell me which CloudStack version supports "hypervisor type" based 
disk IOPS throttling, and for which hypervisors?
I checked on CentOS 6.5 KVM and also on ESXi 5.1, but it is not working.

So I am not sure whether my hypervisor doesn't support this IOPS feature or 
CloudStack doesn't.

I confirmed that the disk IOPS parameters are being passed from the management 
server to these hypervisors in the volume attach commands.
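
For reference, here is how I checked on the KVM side - assuming the limits are 
applied through libvirt's <iotune> element, they should show up in the domain 
XML and via blkdeviotune (the VM name and disk device below are just examples):

[root@kvm-ovs-002 ~]# virsh dumpxml <vm-name> | grep -A4 '<iotune>'
[root@kvm-ovs-002 ~]# virsh blkdeviotune <vm-name> vda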

*Attached is a screenshot of the IOPS fields when defining a new disk offering, 
for your reference.

Thanks,
Praveen Kumar Buravilli



RE: Adding ceph RBD storage failed in CentOS 6.5 KVM

2014-06-20 Thread Praveen Buravilli
Hi Andrija,

Thanks for providing detailed instructions.
I have executed all of the steps given in http://pastebin.com/HwCZEASR and also 
in http://admintweets.com/centos-kvm-and-ceph-client-side-setup/.
But I am still facing the same issue. Any other ideas?

Also, when I tried to install the qemu-* RPMs, it reported a lot of dependency 
issues (output file attached).

Please note that "qemu-img" was reporting support for rbd earlier as well.
Probably the ceph OSD daemon was also running on the same KVM node, which 
might have added "rbd" to the list of supported formats.

Error:

[root@kvm-ovs-002 src-DONT-TOUCH]# virsh pool-define /tmp/rbd.xml
error: Failed to define pool from /tmp/rbd.xml
error: internal error: missing backend for pool type 8 (rbd)

[root@kvm-ovs-002 src-DONT-TOUCH]# qemu-img | grep rbd
Supported formats: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat qcow2 qed 
vhdx parallels nbd blkdebug host_cdrom host_floppy host_device file gluster 
gluster gluster gluster rbd

Thanks,
Praveen Kumar Buravilli


-Original Message-
From: Andrija Panic [mailto:andrija.pa...@gmail.com] 
Sent: 20 June 2014 14:35
To: users@cloudstack.apache.org
Subject: Re: Adding ceph RBD storage failed in CentOS 6.5 KVM

Been there, done that:

This libvirt error "error: internal error missing backend for pool type 8"
means that libvirt was not compiled with RBD backend support.

Here are my steps to compile libvirt 1.2.3 from a few months ago - change the 
configure options if you want; I tried to use as many options as possible.
http://pastebin.com/HwCZEASR (note that "ceph-devel" must be installed, as in 
the instructions, in order to be able to compile with RBD support)
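
Roughly, the sequence is as below - only a sketch, the exact configure options 
are in the pastebin; --with-storage-rbd is the flag that pulls in the RBD 
backend:

yum install -y ceph-devel
./configure --with-storage-rbd
make && make install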

Also note that if you are using CentOS 6.5, the "-s" flag was removed from the 
qemu-img packages, meaning you will not be able to use snapshot functionality 
in CloudStack (not related to libvirt) - that is, ANY snapshotting will be 
broken. There is a workaround below :)


Also, besides making sure libvirt can talk to RBD/CEPH, you MUST be sure your 
qemu-img and qemu-kvm were compiled with RBD support. Check like this:
qemu-img | grep "Supported formats"
You should get something like this - note the "rbd" at the end of the output:

*Supported formats: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat qcow2 qed 
parallels nbd blkdebug host_cdrom host_floppy host_device file rbd*
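
You can also check the qemu-kvm binary directly - if it is linked against 
librbd, RBD support is compiled in (the path below is the stock CentOS 
location):

ldd /usr/libexec/qemu-kvm | grep librbd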


If both qemu and libvirt are fine, you will be able to add CEPH to ACS 4.2 or 
newer.

If you have the stock (CentOS) versions of qemu-img and qemu-kvm - they are NOT 
compiled with RBD support, so you will not be able to use CEPH.
You need to install Inktank's versions of the RPM packages, which are based on 
the official RedHat stock code of those packages but are patched for RBD/CEPH 
support.

Refer to http://admintweets.com/centos-kvm-and-ceph-client-side-setup/ in order 
to download Inktank's RPMs - note that the latest RPMs you will find there are 
probably also based on the RHEL 6.5 version, which is missing the "-s" flag, so 
you will still NOT be able to use disk snapshotting in ACS...

I solved this by installing slightly older RPMs from Inktank (qemu-img and 
qemu-kvm based on RHEL 6.2, which still has that famous "-s" flag present). 
Please let me know if you need these provided, since they are NOT present on 
the Inktank download page at the moment. The exact versions of the packages I 
installed, which are working fine for me:
qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph.x86_64
qemu-img-0.12.1.2-2.355.el6.2.cuttlefish.x86_64
qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64
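
If you have the RPM files locally, installing them is roughly as below (a 
sketch only - the file names are assumed to match the versions above):

yum remove qemu-img qemu-kvm qemu-kvm-tools
rpm -ivh qemu-img-0.12.1.2-2.355.el6.2.cuttlefish.x86_64.rpm \
 qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64.rpm \
 qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph.x86_64.rpm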

This whole setup is working fine for me...

Hope that helps - I was going through the same pain as you are now... :)

Best,
Andrija Panic




On 20 June 2014 04:38, Praveen Buravilli wrote:

>  [quoted original message trimmed - the full text appears below in this digest]

Recall: Adding ceph RBD storage failed in CentOS 6.5 KVM

2014-06-20 Thread Praveen Buravilli
Praveen Buravilli would like to recall the message, "Adding ceph RBD storage 
failed in CentOS 6.5 KVM".

RE: Adding ceph RBD storage failed in CentOS 6.5 KVM

2014-06-20 Thread Praveen Buravilli
Thanks Andrija for your detailed instructions.

A question here: can I execute all the steps mentioned at 
http://pastebin.com/HwCZEASR on the CentOS KVM node which already has LibVirt 
compiled from its git source?

Praveen Kumar Buravilli
Cloud Platform Implementation Engineer, APAC Cloud Services
M +91-9885456905
praveen.buravi...@citrix.com



-Original Message-
From: Andrija Panic [mailto:andrija.pa...@gmail.com] 
Sent: 20 June 2014 14:35
To: users@cloudstack.apache.org
Subject: Re: Adding ceph RBD storage failed in CentOS 6.5 KVM

[quoted reply trimmed - identical to Andrija's message earlier in this digest]

Adding ceph RBD storage failed in CentOS 6.5 KVM

2014-06-19 Thread Praveen Buravilli
Hi,

I am facing an issue in adding ceph RBD storage to CloudStack. It is failing 
with a "Failed to add datasource" error.
I have followed all the available instructions related to CloudStack, KVM and 
ceph storage integration.

CentOS 6.5 KVM is used as the KVM node here. I have read in a blog that we need 
to compile LibVirt on CentOS KVM nodes to make Ceph storage work with 
CloudStack.
Hence I git-cloned the LibVirt package from its source and upgraded the LibVirt 
and Qemu versions.
(Commands used --> git clone #, ./autogen.sh, make, make install).

It seems CentOS 6.5 KVM needs RBD (driver) support enabled, which has to be 
specified as a parameter while compiling LibVirt.

Can anyone give some pointers on how to rectify this problem?

Management Server Exception:
2014-06-20 09:58:03,757 DEBUG [agent.transport.Request] (catalina-exec-6:null) 
Seq 1-1602164611: Received:  { Ans: , MgmtId: 52234925782, via: 1, Ver: v1, 
Flags: 10, { Answer } }
2014-06-20 09:58:03,757 DEBUG [agent.manager.AgentManagerImpl] 
(catalina-exec-6:null) Details from executing class 
com.cloud.agent.api.ModifyStoragePoolCommand: java.lang.NullPointerException
at 
com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:531)
at 
com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:185)
at 
com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:177)

I even tried defining a pool using the virsh command; even that is failing with 
"internal error missing backend for pool type 8".
This indicates my KVM LibVirt was not built with RBD support.
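
If I understand correctly, "virsh -V" lists the storage backends libvirt was 
compiled with, so "rbd" should appear in its Storage line when support is 
built in:

[root@kvm-ovs-002 agent]# virsh -V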

Virsh exception on manual pool definition (/tmp/rbd.xml):

<pool type='rbd'>
  <name>c574980a-19fc-37e9-b6e3-788a7439575d</name>
  <uuid>c574980a-19fc-37e9-b6e3-788a7439575d</uuid>
  <source>
    <name>cloudstack</name>
  </source>
</pool>

[root@kvm-ovs-002 agent]# virsh pool-define /tmp/rbd.xml
error: Failed to define pool from /tmp/rbd.xml
error: internal error missing backend for pool type 8

The Ceph storage itself is working fine, as confirmed by the following 
statistics.

Ceph output
[root@kvm-ovs-002 ~]# ceph auth list
installed auth entries:

osd.0
key: AQCwTKFTSOudGhAAsWAMRFuCqHjvTQKEV0zjvw==
caps: [mon] allow profile osd
caps: [osd] allow *
client.admin
key: AQBRQqFTWOjBKhAA2s7KnL1z3h7PuKeqXMd7SA==
caps: [mds] allow
caps: [mon] allow *
caps: [osd] allow *
client.bootstrap-mds
key: AQBSQqFTYKm6CRAAjjZotpN68yJaOjS2QTKzKg==
caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
key: AQBRQqFT6GzXNxAA4ZTmVX6LIu0k4Sk7bh2Ifg==
caps: [mon] allow profile bootstrap-osd
client.cloudstack
key: AQBNTaFTeCuwFRAA0NE7CCm9rwuq3ngLcGEysQ==
caps: [mon] allow r
caps: [osd] allow rwx pool=cloudstack

[root@ceph ~]# ceph status
cluster 9c1be0b6-f600-45d7-ae0f-df7bcd3a82cd
 health HEALTH_WARN 292 pgs degraded; 292 pgs stale; 292 pgs stuck stale; 
292 pgs stuck unclean; 1/1 in osds are down; clock skew detected on 
mon.kvm-ovs-002
 monmap e1: 2 mons at 
{ceph=192.168.153.25:6789/0,kvm-ovs-002=192.168.160.3:6789/0}, election epoch 
10, quorum 0,1 ceph,kvm-ovs-002
 osdmap e8: 1 osds: 0 up, 1 in
  pgmap v577: 292 pgs, 4 pools, 0 bytes data, 0 objects
26036 MB used, 824 GB / 895 GB avail
 292 stale+active+degraded

[root@kvm-ovs-002 agent]# cat /etc/redhat-release
CentOS release 6.5 (Final)

The compiled LibVirt shows the upgraded version in virsh, but I still find the 
old rpm packages on the KVM node.
Can you give me a hint on whether to clean up these old RPMs?
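
For reference, I am also checking which libvirtd binary is actually running - 
a source build installs under /usr/local by default unless --prefix=/usr is 
given, so the stock RPM binary can still be present alongside it:

[root@kvm-ovs-002 agent]# ps -ef | grep libvirtd
[root@kvm-ovs-002 agent]# /usr/sbin/libvirtd --version
[root@kvm-ovs-002 agent]# /usr/local/sbin/libvirtd --version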

Virsh version
[root@kvm-ovs-002 agent]# virsh version
Compiled against library: libvirt 1.2.6
Using library: libvirt 1.2.6
Using API: QEMU 1.2.6
Running hypervisor: QEMU 0.12.1

[root@kvm-ovs-002 agent]# rpm -qa | grep qemu
qemu-kvm-tools-0.12.1.2-2.415.el6_5.10.x86_64
qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
qemu-kvm-0.12.1.2-2.415.el6_5.10.x86_64
qemu-img-0.12.1.2-2.415.el6.3ceph.x86_64
qemu-img-0.12.1.2-2.415.el6_5.10.x86_64
qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph.x86_64
qemu-guest-agent-0.12.1.2-2.415.el6_5.10.x86_64
[root@kvm-ovs-002 agent]# rpm -qa | grep libvirt
libvirt-python-0.10.2-29.el6_5.9.x86_64
libvirt-java-0.4.9-1.el6.noarch
libvirt-cim-0.6.1-9.el6_5.1.x86_64
libvirt-client-0.10.2-29.el6_5.9.x86_64
libvirt-devel-0.10.2-29.el6_5.9.x86_64
fence-virtd-libvirt-0.2.3-15.el6.x86_64
libvirt-0.10.2-29.el6_5.9.x86_64
libvirt-snmp-0.0.2-4.el6.x86_64

*Attached are all the log files from the management and KVM servers.

Thanks,
Praveen Kumar Buravilli



Ovs option not listed in a new network offering when it is enabled

2014-06-09 Thread Praveen Buravilli
Hi,

I have enabled the "Ovs" network service provider in CloudStack version 4.3, 
but it is not getting listed as a selectable provider for static NAT, port 
forwarding, virtual networking, or load balancing services while creating a new 
network offering. I noticed the supported services are displayed as BLANK for 
Ovs; not sure if this has any link with the options listed in the offering 
services.
I followed the steps given in 
http://docs.cloudstack.apache.org/en/latest/networking/ovs-plugin.html.

Has anyone enabled Ovs and worked with it? If this is confirmed to be a UI bug, 
is there any DB hack to proceed working with Ovs? (See the query sketch below.)
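
For what it's worth, the provider registration can be inspected read-only in 
the cloud database - the table name below is my assumption about the schema, 
so treat this as a sketch:

mysql -u cloud -p -e "SELECT id, provider_name, state FROM cloud.physical_network_service_providers;"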

Thanks,
Praveen Kumar



RE: VPN for VPC feature in 4.3

2014-04-02 Thread Praveen Buravilli
Yes Benoit, I have 40% of my public IPs free, so that should not be an issue.

I don't see any errors in the log file either. Did you notice any exceptions in 
the log files by any chance when you encountered this issue?

Thanks,
Praveen Kumar

-Original Message-
From: benoit lair [mailto:kurushi4...@gmail.com] 
Sent: 02 April 2014 17:30
To: users@cloudstack.apache.org
Subject: Re: VPN for VPC feature in 4.3

Hi Praveen,


I have already had this issue with the VPC VR:
Have you checked whether you have some public IP addresses available in your 
zone?


Regards, Benoit.


2014-04-02 12:24 GMT+02:00 Praveen Buravilli :

> Thanks Geoff. Actually, eth1 for the VPC router is missing.
>
> When I looked at the log file, surprisingly a request was sent to
> create the router VM with two NICs (one link-local and the other
> public), whereas the router was created with only one NIC.
>
> Any thoughts? FYI, I'm running CloudStack 4.3 with KVM nodes.
>
>
>
> Attached here is a log file snippet containing both the request and
> response info for the router start command
> (NIC entries in the log highlighted in red and green).
>
> ====================
>
> 2014-04-02 06:00:47,968 DEBUG [c.c.a.t.Request]
> (Job-Executor-35:ctx-544b3513 ctx-5d9c4b47) Seq 6-1545667825: Sending  
> { Cmd , MgmtId: 52237010300, via: 6(localhost.localdomain), Ver: v1, Flags:
> 100111,
> [{"com.cloud.agent.api.StartCommand":{"vm":{"id":43,"name":"r-43-VM","
> type":"DomainRouter","cpus":1,"minSpeed":500,"maxSpeed":500,"minRam":1
> 34217728,"maxRam":134217728,"arch":"x86_64","os":"Debian
> GNU/Linux 7(64-bit)","bootArgs":" 
> vpccidr=10.201.0.0/16domain=cs7cloud.internal dns1=8.8.8.8 
> template=domP name=r-43-VM
> eth0ip=169.254.1.131 eth0mask=255.255.0.0 type=vpcrouter 
> disable_rp_filter=true","rebootOnCrash":false,"enableHA":true,"limitCp
> uUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"21a870dc77
> 23830","params":{},"uuid":"05b714cf-a511-42d9-b24a-6d077342865f","disk
> s":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid"
> :"b61da4e1-121e-4e02-b345-35719deec994","volumeType":"ROOT","dataStore
> ":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"90ff
> a1df-e8bd-3e46-893d-bb9b63e0b180","id":2,"poolType":"NetworkFilesystem
> ","host":"172.20.105.2","path":"/export/praveen/csprimary","port":2049
> ,"url":"NetworkFilesystem://
> 172.20.105.2//export/praveen/csprimary/?ROLE=Primary&STOREUUID=90ffa1d
> f-e8bd-3e46-893d-bb9b63e0b180 
> "}},"name":"ROOT-43","size":262144,"path":"b61da4e1-121e-4e02-b345-35719deec994","volumeId":46,"vmName":"r-43-VM","accountId":7,"format":"QCOW2","id":46,"deviceId":0,"hypervisorType":"KVM"}},"diskSeq":0,"path":"b61da4e1-121e-4e02-b345-35719deec994","type":"ROOT","_details":{"managed":"false","storagePort":"2049","storageHost":"172.20.105.2","volumeSize":"262144"}}],"nics":[{"deviceId":0,"networkRateMbps":-1,"defaultNic":false,"uuid":"2d4b2574-5e7d-45e7-bcbb-f64d1d9237c1","ip":"169.254.1.131","netmask":"255.255.0.0","gateway":"169.254.0.1","mac":"0e:00:a9:fe:01:83","broadcastType":"LinkLocal","type":"Control","isSecurityGroupEnabled":false}]},"hostIp":"172.20.210.7","executeInSequence":false,"wait":0}},{"com.cloud.agent.api.check.CheckSshCommand":{"ip":"169.254.1.131","port":3922,"interval":6,"retries":100,"name":"r-43-VM","wait":0}},{"com.cloud.agent.api.GetDomRVersionCmd":{"accessDetails":{"router.ip":"169.254.1.131","
> router.name
> ":"r-43-VM"},"wait":0}},{"com.cloud.agent.api.PlugNicCommand":{"nic":{"deviceId":1,"networkRateMbps":200,"defaultNic":true,"uuid

RE: VPN for VPC feature in 4.3

2014-04-02 Thread Praveen Buravilli
ot;:false,"broadcastUri":"211","vlanGateway":"172.20.211.1","vlanNetmask":"255.255.255.0","vifMacAddress":"06:41:1a:00:00:20","networkRate":200,"trafficType":"Public","networkName":"cloudbr1"}],"accessDetails":{"router.guest.ip":"172.20.211.132","zone.network.type":"Advanced","router.ip":"169.254.1.131","router.name":"r-43-VM"},"wait":0}},{"com.cloud.agent.api.routing.SetSourceNatCommand":{"ipAddress":{"accountId":7,"publicIp":"172.20.211.132","sourceNat":true,"add":true,"oneToOneNat":false,"firstIP":false,"broadcastUri":"211","vlanGateway":"172.20.211.1","vlanNetmask":"255.255.255.0","vifMacAddress":"06:41:1a:00:00:20","networkRate":200,"trafficType":"Public","networkName":"cloudbr1"},"add":true,"accessDetails":{"zone.network.type":"Advanced","router.ip":"169.254.1.131","router.name":"r-43-VM"},"wait":0}},{}]
 }





2014-04-02 06:01:43,695 DEBUG [c.c.a.t.Request] (AgentManager-Handler-10:null) 
Seq 6-1545667825: Processing:  { Ans: , MgmtId: 52237010300, via: 6, Ver: v1, 
Flags: 110, 
[{"com.cloud.agent.api.StartAnswer":{"vm":{"id":43,"name":"r-43-VM","type":"DomainRouter","cpus":1,"minSpeed":500,"maxSpeed":500,"minRam":134217728,"maxRam":134217728,"arch":"x86_64","os":"Debian
 GNU/Linux 7(64-bit)","bootArgs":" vpccidr=10.201.0.0/16 
domain=cs7cloud.internal dns1=8.8.8.8 template=domP name=r-43-VM 
eth0ip=169.254.1.131 eth0mask=255.255.0.0 type=vpcrouter 
disable_rp_filter=true","rebootOnCrash":false,"enableHA":true,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"21a870dc7723830","vncAddr":"172.20.210.7","params":{},"uuid":"05b714cf-a511-42d9-b24a-6d077342865f","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"b61da4e1-121e-4e02-b345-35719deec994","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"90ffa1df-e8bd-3e46-893d-bb9b63e0b180","id":2,"poolType":"NetworkFilesystem","host":"172.20.105.2","path":"/export/praveen/csprimary","port":2049,"url":"NetworkFilesystem://172.20.105.2//export/praveen/csprimary/?ROLE=Primary&STOREUUID=90ffa1df-e8bd-3e46-893d-bb9b63e0b180"}},"name":"ROOT-43","size":262144,"path":"b61da4e1-121e-4e02-b345-35719deec994","volumeId":46,"vmName":"r-43-VM","accountId":7,"format":"QCOW2","id":46,"deviceId":0,"hypervisorType":"KVM"}},"diskSeq":0,"path":"b61da4e1-121e-4e02-b345-35719deec994","type":"ROOT","_details":{"managed":"false","storagePort":"2049","storageHost":"172.20.105.2","volumeSize":"262144"}}],"nics":[{"deviceId":0,"networkRateMbps":-1,"defaultNic":false,"uuid":"2d4b2574-5e7d-45e7-bcbb-f64d1d9237c1","ip":"169.254.1.131","netmask":"255.255.0.0","gateway":"169.254.0.1","mac":"0e:00:a9:fe:01:83","broadcastType":"LinkLocal","type":"Control","isSecurityGroupEnabled":false}]},"result":true,"wait":0}},{"com.cloud.agent.api.check.CheckSshAnswer":{"result":true,"wait":0}},{"com.cloud.agent.api.GetDomRVersionAnswer":{"templateVersion":"Cloudstack
 Release 4.3.0 (64-bit) Wed Jan 15 00:27:19 UTC 
2014","scriptsVersion":"07277b52f67248060835ca19947016cf","result":true,"details":"Cloudstack
 Release 4.3.0 (64-bit) Wed Jan 15 00:27:19 UTC 
2014&07277b52f67248060835ca19947016cf","wait":0}},{"com.cloud.agent.api.PlugNicAnswer":{"result":true,"details":"success","wait":0}},{"com.cloud.agent.api.routing.IpAssocA

VPN for VPC feature in 4.3

2014-04-01 Thread Praveen Buravilli
Hi,

I have noticed an issue in working with VPN for VPC, a new feature introduced 
in CloudStack 4.3 (see the "1.1.6 Remote Access VPN for VPC" section of the 
CloudStack 4.3 release notes).
Regular remote access VPN for guest networks works fine without any problem, 
whereas VPN for VPC isn't working.

When I checked the VPC router, there is no IP address assigned to its public 
NIC.
Has anyone noticed this behaviour? Does this seem like a bug? Is any known 
workaround available?
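
Would restarting the VPC with cleanup recreate the public NIC? For example via 
cloudmonkey (a sketch - the id is a placeholder):

restart vpc id=<vpc-uuid> cleanup=true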

VPC router interface details:
===
root@r-38-VM:/etc/network# ifconfig -a
eth0  Link encap:Ethernet  HWaddr 0e:00:a9:fe:00:7d
  inet addr:169.254.0.125  Bcast:169.254.255.255  Mask:255.255.0.0
  inet6 addr: fe80::c00:a9ff:fefe:7d/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:1478 errors:0 dropped:0 overruns:0 frame:0
  TX packets:722 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:126628 (123.6 KiB)  TX bytes:124068 (121.1 KiB)

eth1  Link encap:Ethernet  HWaddr 06:59:68:00:00:1a
  BROADCAST MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:2 errors:0 dropped:0 overruns:0 frame:0
  TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:214 (214.0 B)  TX bytes:214 (214.0 B)

Thanks,
Praveen Kumar