[jira] [Commented] (CLOUDSTACK-9646) [Usage] No usage is generated for uploaded templates/volumes from local

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721555#comment-15721555
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9646:


Github user cloudmonger commented on the issue:

https://github.com/apache/cloudstack/pull/1809
  
### ACS CI BVT Run
 **Summary:**
 Build Number 156
 Hypervisor xenserver
 NetworkType Advanced
 Passed=102
 Failed=3
 Skipped=7

_Link to logs Folder (search by build_no):_ 
https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0


**Failed tests:**
* test_snapshots.py

 * test_01_snapshot_root_disk Failing since 3 runs

* test_deploy_vm_iso.py

 * test_deploy_vm_from_iso Failing since 27 runs

* test_vm_life_cycle.py

 * test_10_attachAndDetach_iso Failing since 28 runs


**Skipped tests:**
test_01_test_vm_volume_snapshot
test_vm_nic_adapter_vmxnet3
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm

**Passed test suites:**
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_non_contigiousvlan.py
test_login.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_volumes.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_routers_network_ops.py
test_disk_offerings.py


> [Usage] No usage is generated for uploaded templates/volumes from local
> ---
>
> Key: CLOUDSTACK-9646
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9646
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.0, 4.8.0, 4.9.0
>Reporter: Rajani Karuturi
>Assignee: Rajani Karuturi
>Priority: Critical
> Fix For: 4.9.1.0
>
>
> Repro steps:
> 1. Upload a template from local 
> 2. Upload a volume from local
> Bug:
> No usage events are recorded, and consequently no usage is generated for the 
> uploaded template and volume.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9635) fix test_privategw_acl.py

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721572#comment-15721572
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9635:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1802
  
LGTM. Merging this now, @murali-reddy in case we hit the issue, we'll need 
to rework the fix. Thank you for the PR.


> fix test_privategw_acl.py
> -
>
> Key: CLOUDSTACK-9635
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9635
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.8.0, 4.9.0, 4.10.0.0
>Reporter: Murali Reddy
> Fix For: 4.8.1, 4.10.0.0, 4.9.2.0
>
>
> Marvin test cases in the test suite test_privategw_acl.py fail intermittently 
> with the error 'createprivategateway failed, due to: errorCode: 431, 
> errorText:Network with vlan vlan://549 already exists in zone 1'.
> Test cases use a VLAN from the zone VLAN range to create the VPC private 
> gateway. But the VPC private gateway implementation does not bookkeep this 
> info in op_dc_vent_alloc, so when the test cases create networks they end up 
> using the same VLAN, causing createVpcPrivateGateway to fail.
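The fix described in the commits below can be sketched as follows. This is a simplified illustration with hypothetical names, not the Marvin test code itself: the idea is to choose the private-gateway VLAN only after the VPC guest networks exist, skipping the VLANs they were allocated.

```python
# Hypothetical sketch of the test-side fix: determine the VLAN for
# createPrivateGateway after the guest networks are created, so their
# allocated VLANs are skipped.
def pick_private_gateway_vlan(zone_vlan_range, allocated_guest_vlans):
    """Return the first VLAN in the zone range not used by a guest network."""
    used = set(allocated_guest_vlans)
    for vlan in zone_vlan_range:
        if vlan not in used:
            return vlan
    raise RuntimeError("no free VLAN in zone range")
```

Picking the VLAN before the guest networks exist is the race the original test lost: the guest network could grab the same VLAN first, producing the "Network with vlan ... already exists" error.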





[jira] [Commented] (CLOUDSTACK-9635) fix test_privategw_acl.py

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721576#comment-15721576
 ] 

ASF subversion and git services commented on CLOUDSTACK-9635:
-

Commit d540015bc8bb8da6b53615bf1e1b30a06688ecce in cloudstack's branch 
refs/heads/4.8 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=d540015 ]

Merge pull request #1802 from murali-reddy/test_privategw_acl

CLOUDSTACK-9635: fix test_privategw_acl.py

ensure VLAN used for createPrivateGateway is determined after the guest
networks in the VPC is created, so that we skip VLAN allocated for guest
network for the private network of vpc gateway

* pr/1802:
  CLOUDSTACK-9635: fix test_privategw_acl.py

Signed-off-by: Rohit Yadav 




[jira] [Commented] (CLOUDSTACK-9635) fix test_privategw_acl.py

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721578#comment-15721578
 ] 

ASF subversion and git services commented on CLOUDSTACK-9635:
-

Commit d540015bc8bb8da6b53615bf1e1b30a06688ecce in cloudstack's branch 
refs/heads/4.8 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=d540015 ]

Merge pull request #1802 from murali-reddy/test_privategw_acl

CLOUDSTACK-9635: fix test_privategw_acl.py

ensure VLAN used for createPrivateGateway is determined after the guest
networks in the VPC is created, so that we skip VLAN allocated for guest
network for the private network of vpc gateway

* pr/1802:
  CLOUDSTACK-9635: fix test_privategw_acl.py

Signed-off-by: Rohit Yadav 




[jira] [Commented] (CLOUDSTACK-9635) fix test_privategw_acl.py

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721580#comment-15721580
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9635:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1802




[jira] [Commented] (CLOUDSTACK-9635) fix test_privategw_acl.py

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721575#comment-15721575
 ] 

ASF subversion and git services commented on CLOUDSTACK-9635:
-

Commit db39a060858005a15d55201e72a10069f24ef2f1 in cloudstack's branch 
refs/heads/4.8 from [~muralireddy]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=db39a06 ]

CLOUDSTACK-9635: fix test_privategw_acl.py

ensure VLAN used for createPrivateGateway is determined after the guest
networks in the VPC is created, so that we skip VLAN allocated for guest
network for the private network of vpc gateway




[jira] [Commented] (CLOUDSTACK-9635) fix test_privategw_acl.py

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721583#comment-15721583
 ] 

ASF subversion and git services commented on CLOUDSTACK-9635:
-

Commit d540015bc8bb8da6b53615bf1e1b30a06688ecce in cloudstack's branch 
refs/heads/4.9 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=d540015 ]

Merge pull request #1802 from murali-reddy/test_privategw_acl

CLOUDSTACK-9635: fix test_privategw_acl.py

ensure VLAN used for createPrivateGateway is determined after the guest
networks in the VPC is created, so that we skip VLAN allocated for guest
network for the private network of vpc gateway

* pr/1802:
  CLOUDSTACK-9635: fix test_privategw_acl.py

Signed-off-by: Rohit Yadav 




[jira] [Commented] (CLOUDSTACK-9635) fix test_privategw_acl.py

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721581#comment-15721581
 ] 

ASF subversion and git services commented on CLOUDSTACK-9635:
-

Commit db39a060858005a15d55201e72a10069f24ef2f1 in cloudstack's branch 
refs/heads/4.9 from [~muralireddy]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=db39a06 ]

CLOUDSTACK-9635: fix test_privategw_acl.py

ensure VLAN used for createPrivateGateway is determined after the guest
networks in the VPC is created, so that we skip VLAN allocated for guest
network for the private network of vpc gateway




[jira] [Commented] (CLOUDSTACK-9635) fix test_privategw_acl.py

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721584#comment-15721584
 ] 

ASF subversion and git services commented on CLOUDSTACK-9635:
-

Commit d540015bc8bb8da6b53615bf1e1b30a06688ecce in cloudstack's branch 
refs/heads/4.9 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=d540015 ]

Merge pull request #1802 from murali-reddy/test_privategw_acl

CLOUDSTACK-9635: fix test_privategw_acl.py

ensure VLAN used for createPrivateGateway is determined after the guest
networks in the VPC is created, so that we skip VLAN allocated for guest
network for the private network of vpc gateway

* pr/1802:
  CLOUDSTACK-9635: fix test_privategw_acl.py

Signed-off-by: Rohit Yadav 




[jira] [Commented] (CLOUDSTACK-9339) Virtual Routers don't handle Multiple Public Interfaces

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721587#comment-15721587
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9339:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1659
  
@blueorangutan package


> Virtual Routers don't handle Multiple Public Interfaces
> ---
>
> Key: CLOUDSTACK-9339
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9339
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
>Reporter: dsclose
>Assignee: Murali Reddy
>  Labels: firewall, nat, router
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> There are a series of issues with the way Virtual Routers manage multiple 
> public interfaces. These are more pronounced on redundant virtual router 
> setups. I have not attempted to examine these issues in a VPC context. 
> Outside of a VPC context, however, the following is expected behaviour:
> * eth0 connects the router to the guest network.
> * In RvR setups, keepalived manages the guests' gateway IP as a virtual IP on 
> eth0.
> * eth1 provides a local link to the hypervisor, allowing Cloudstack to issue 
> commands to the router.
> * eth2 is the router's public interface. By default, a single public IP will 
> be set up on eth2 along with the necessary iptables and ip rules to source-NAT 
> guest traffic to that public IP.
> * When a public IP address is assigned to the router that is on a separate 
> subnet to the source-NAT IP, a new interface is configured, such as eth3, and 
> the IP is assigned to that interface.
> * This can result in eth3, eth4, eth5, etc. being created depending upon how 
> many public subnets the router has to work with.
> The above all works. The following, however, is currently not working:
> * Public interfaces should be set to DOWN on backup redundant routers. The 
> master.py script is responsible for setting public interfaces to UP during a 
> keepalived transition. Currently the check_is_up method of the CsIP class 
> brings all interfaces UP on both RvR. A proposed fix for this has been 
> discussed on the mailing list. That fix will leave public interfaces DOWN on 
> RvR allowing the keepalived transition to control the state of public 
> interfaces. Issue #1413 includes a commit that contradicts the proposed fix 
> so it is unclear what the current state of the code should be.
> * Newly created interfaces should be set to UP on master redundant routers. 
> Assuming public interfaces should by default be DOWN on an RvR, we need to 
> accommodate the fact that, as interfaces are created, no keepalived 
> transition occurs. This means that assigning an IP from a new public subnet 
> will have no effect (as the interface will be down) until the network is 
> restarted with a "clean up."
> * Public interfaces other than eth2 do not forward traffic. There are two 
> iptables rules in the FORWARD chain of the filter table created for eth2 that 
> allow forwarding between eth2 and eth0. Equivalent rules are not created for 
> other public interfaces so forwarded traffic is dropped.
> * Outbound traffic from guest VMs does not honour static-NAT rules. Instead, 
> outbound traffic is source-NAT'd to the network's default source-NAT IP. New 
> connections from guests that are destined for public networks are processed 
> like so:
> 1. Traffic is matched against the following rule in the mangle table that 
> marks the connection with a 0x0:
> *mangle
> -A PREROUTING -i eth0 -m state --state NEW -j CONNMARK --set-xmark 
> 0x0/0x
> 2. There are no "ip rule" statements that match a connection marked 0x0, so 
> the kernel routes the connection via the default gateway. That gateway is on 
> the source-NAT subnet, so the connection is routed out of eth2.
> 3. The following iptables rules are then matched in the filter table:
> *filter
> -A FORWARD -i eth0 -o eth2 -j FW_OUTBOUND
> -A FW_OUTBOUND -j FW_EGRESS_RULES
> -A FW_EGRESS_RULES -j ACCEPT
> 4. Finally, the following rule is matched from the nat table, where the IP 
> address is the source-NAT IP:
> *nat
> -A POSTROUTING -o eth2 -j SNAT --to-source 123.4.5.67
>  
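The third bullet above (forwarding rules exist only for eth2) can be sketched as a small generator that mirrors the eth2 FORWARD rules onto every public interface. This is an illustrative assumption about the shape of a fix, not CloudStack's CsNetfilter code; the RELATED,ESTABLISHED return rule is inferred, since the report only shows one of the two eth2 rules.

```python
# Hypothetical sketch: generate the FORWARD-chain rules that exist for
# eth2 but are missing for additional public interfaces (eth3, eth4, ...).
def forward_rules(guest_if, public_ifs):
    """Mirror the eth2<->eth0 forwarding rules for every public interface."""
    rules = []
    for pub in public_ifs:
        # outbound guest traffic goes through the egress firewall chain
        rules.append(f"-A FORWARD -i {guest_if} -o {pub} -j FW_OUTBOUND")
        # return traffic for established connections is accepted (assumed rule)
        rules.append(f"-A FORWARD -i {pub} -o {guest_if} -m state "
                     f"--state RELATED,ESTABLISHED -j ACCEPT")
    return rules
```

Without the equivalent rules for eth3 and later, forwarded traffic on those interfaces falls through and is dropped, which is the behaviour the report describes.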





[jira] [Commented] (CLOUDSTACK-9339) Virtual Routers don't handle Multiple Public Interfaces

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721588#comment-15721588
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9339:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1659
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.




[jira] [Commented] (CLOUDSTACK-9635) fix test_privategw_acl.py

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721591#comment-15721591
 ] 

ASF subversion and git services commented on CLOUDSTACK-9635:
-

Commit d540015bc8bb8da6b53615bf1e1b30a06688ecce in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=d540015 ]

Merge pull request #1802 from murali-reddy/test_privategw_acl

CLOUDSTACK-9635: fix test_privategw_acl.py

ensure VLAN used for createPrivateGateway is determined after the guest
networks in the VPC is created, so that we skip VLAN allocated for guest
network for the private network of vpc gateway

* pr/1802:
  CLOUDSTACK-9635: fix test_privategw_acl.py

Signed-off-by: Rohit Yadav 




[jira] [Commented] (CLOUDSTACK-9635) fix test_privategw_acl.py

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721590#comment-15721590
 ] 

ASF subversion and git services commented on CLOUDSTACK-9635:
-

Commit db39a060858005a15d55201e72a10069f24ef2f1 in cloudstack's branch 
refs/heads/master from [~muralireddy]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=db39a06 ]

CLOUDSTACK-9635: fix test_privategw_acl.py

ensure VLAN used for createPrivateGateway is determined after the guest
networks in the VPC is created, so that we skip VLAN allocated for guest
network for the private network of vpc gateway




[jira] [Commented] (CLOUDSTACK-9635) fix test_privategw_acl.py

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721592#comment-15721592
 ] 

ASF subversion and git services commented on CLOUDSTACK-9635:
-

Commit d540015bc8bb8da6b53615bf1e1b30a06688ecce in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=d540015 ]

Merge pull request #1802 from murali-reddy/test_privategw_acl

CLOUDSTACK-9635: fix test_privategw_acl.py

ensure VLAN used for createPrivateGateway is determined after the guest
networks in the VPC is created, so that we skip VLAN allocated for guest
network for the private network of vpc gateway

* pr/1802:
  CLOUDSTACK-9635: fix test_privategw_acl.py

Signed-off-by: Rohit Yadav 




[jira] [Commented] (CLOUDSTACK-9646) [Usage] No usage is generated for uploaded templates/volumes from local

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721599#comment-15721599
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9646:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1809
  
Thanks @karuturi 
@blueorangutan package




[jira] [Commented] (CLOUDSTACK-9646) [Usage] No usage is generated for uploaded templates/volumes from local

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721600#comment-15721600
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9646:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1809
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.


> [Usage] No usage is generated for uploaded templates/volumes from local
> ---
>
> Key: CLOUDSTACK-9646
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9646
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.0, 4.8.0, 4.9.0
>Reporter: Rajani Karuturi
>Assignee: Rajani Karuturi
>Priority: Critical
> Fix For: 4.9.1.0
>
>
> Repro steps:
> 1. Upload a template from local 
> 2. Upload a volume from local
> Bug:
> No usage events are emitted, and consequently no usage is generated, for the 
> uploaded template and volume.





[jira] [Commented] (CLOUDSTACK-9632) Upgrade bountycastle to 1.55+

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721604#comment-15721604
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9632:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1799
  
@blueorangutan package


> Upgrade bountycastle to 1.55+
> -
>
> Key: CLOUDSTACK-9632
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9632
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0
>
>
> Upgrade the bouncycastle library to the latest version (1.55+).





[jira] [Commented] (CLOUDSTACK-9632) Upgrade bountycastle to 1.55+

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721606#comment-15721606
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9632:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1799
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.


> Upgrade bountycastle to 1.55+
> -
>
> Key: CLOUDSTACK-9632
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9632
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0
>
>
> Upgrade the bouncycastle library to the latest version (1.55+).





[jira] [Commented] (CLOUDSTACK-9648) Checkstyle module version fails to update by build_asf.sh

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721744#comment-15721744
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9648:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1808
  
I'll proceed with merging this at my discretion; I tested it locally.


> Checkstyle module version fails to update by build_asf.sh
> -
>
> Key: CLOUDSTACK-9648
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9648
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> As reported on users@, build_asf.sh fails to update the checkstyle module's 
> pom.xml, which breaks builds from source tarballs.
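The fix amounts to having the release script rewrite the checkstyle module's version as well. A rough, hypothetical sketch of that kind of version bump (the real build_asf.sh uses different tooling; the regex approach here is only illustrative):

```python
import re

def bump_pom_version(pom_text, new_version):
    # Replace only the first <version> element; in a simple module pom this
    # conventionally holds the module/parent version being released.
    return re.sub(r"<version>[^<]*</version>",
                  "<version>%s</version>" % new_version,
                  pom_text, count=1)

pom = "<project><version>4.9.0</version><artifactId>checkstyle</artifactId></project>"
print(bump_pom_version(pom, "4.9.1.0"))
```

A robust implementation would use an XML-aware tool (e.g. Maven's versions plugin) rather than a regex, but the sketch shows the step the script was missing for the checkstyle module.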





[jira] [Commented] (CLOUDSTACK-9633) test_snapshot is failing due to incorrect string construction in utils.py

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721747#comment-15721747
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9633:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1807
  
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + xenserver-65sp1) has 
been kicked to run smoke tests


> test_snapshot is failing due to incorrect string construction in utils.py
> -
>
> Key: CLOUDSTACK-9633
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9633
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.10.0.0
> Environment: https://github.com/apache/cloudstack/pull/1800
>Reporter: Boris Stoyanov
> Fix For: 4.10.0.0
>
>
> When searching for the snapshot VHD on the NFS storage, the path construction 
> appends a duplicate extension ([name].vhd.vhd). I removed the extension for 
> XenServer and the test passed.
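The duplicated-extension bug can be reproduced with a few lines (the snapshot name is illustrative, not taken from the actual test run):

```python
snapshot_name = "snap-1234.vhd"  # hypervisor-side name, already carries the extension

# Buggy construction: unconditionally appending the extension yields .vhd.vhd
buggy_path = snapshot_name + ".vhd"

# Safer construction: append the extension only when it is missing
fixed_path = snapshot_name if snapshot_name.endswith(".vhd") else snapshot_name + ".vhd"

print(buggy_path)   # snap-1234.vhd.vhd
print(fixed_path)   # snap-1234.vhd
```

Searching the NFS mount for the buggy path finds nothing, which is why the test failed until the extra extension was dropped.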





[jira] [Commented] (CLOUDSTACK-9633) test_snapshot is failing due to incorrect string construction in utils.py

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721746#comment-15721746
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9633:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1807
  
@blueorangutan test centos7 xenserver-65sp1



> test_snapshot is failing due to incorrect string construction in utils.py
> -
>
> Key: CLOUDSTACK-9633
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9633
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.10.0.0
> Environment: https://github.com/apache/cloudstack/pull/1800
>Reporter: Boris Stoyanov
> Fix For: 4.10.0.0
>
>
> When searching for the snapshot VHD on the NFS storage, the path construction 
> appends a duplicate extension ([name].vhd.vhd). I removed the extension for 
> XenServer and the test passed.





[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721771#comment-15721771
 ] 

ASF subversion and git services commented on CLOUDSTACK-9564:
-

Commit 90a3d97c5e20b625b528c527a2f31474082214ef in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=90a3d97 ]

CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

In a recent management server crash, it was found that the largest contributor
to memory leak was in VmwareContextPool where a registry is held (arraylist)
that grows indefinitely. The list itself is not used anywhere or consumed. There
exists a hashmap (pool) that returns a list of contexts for existing poolkey
(address/username) that is used instead.

This fixes the issue by removing the arraylist registry, and limiting the
length of the context list for a given poolkey.

Signed-off-by: Rohit Yadav 
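The shape of the fix — dropping the unbounded registry and capping the per-poolkey context list — can be sketched in a few lines. This is a simplified Python model under assumed names, not the actual Java VmwareContextPool code:

```python
from collections import defaultdict, deque

class ContextPool:
    """Simplified model of a context pool bounded per poolkey."""
    def __init__(self, max_per_key=10):
        # deque(maxlen=...) silently evicts the oldest entry once full,
        # so no poolkey's context list can grow without bound.
        self._pool = defaultdict(lambda: deque(maxlen=max_per_key))

    def register(self, address, username, context):
        self._pool[(address, username)].append(context)

    def acquire(self, address, username):
        contexts = self._pool[(address, username)]
        return contexts.pop() if contexts else None

pool = ContextPool(max_per_key=2)
for i in range(5):
    pool.register("vcenter.example.com", "admin", "ctx-%d" % i)
# Only the two most recent contexts are retained
print(pool.acquire("vcenter.example.com", "admin"))  # ctx-4
```

The key design point is that bounding happens at registration time, so idle contexts can no longer accumulate the way entries in the removed ArrayList registry did.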


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to the memory leak was VmwareContextPool, where a registry (an 
> ArrayList) is held that grows indefinitely. The list itself is never used or 
> consumed; a hashmap (pool) that returns a list of contexts for an existing 
> poolkey (address/username) is used instead. The fix is to get rid of the 
> registry and limit the length of the hashmap's context list for any poolkey.





[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721775#comment-15721775
 ] 

ASF subversion and git services commented on CLOUDSTACK-9564:
-

Commit 8d14e8e8b5909d4bfe0b505880eb92194e0b08a4 in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=8d14e8e ]

Merge pull request #1729 from shapeblue/vmware-memleak-fix

CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

In a recent management server crash, it was found that the largest contributor
to memory leak was in VmwareContextPool where a registry is held (arraylist)
that grows indefinitely. The list itself is not used anywhere or consumed. There
exists a hashmap (pool) that returns a list of contexts for existing poolkey
(address/username) that is used instead.

This fixes the issue by removing the arraylist registry, and limiting the
length of the context list for a given poolkey.

@blueorangutan package

* pr/1729:
  CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

Signed-off-by: Rohit Yadav 


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to the memory leak was VmwareContextPool, where a registry (an 
> ArrayList) is held that grows indefinitely. The list itself is never used or 
> consumed; a hashmap (pool) that returns a list of contexts for an existing 
> poolkey (address/username) is used instead. The fix is to get rid of the 
> registry and limit the length of the hashmap's context list for any poolkey.





[jira] [Commented] (CLOUDSTACK-9648) Checkstyle module version fails to update by build_asf.sh

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721774#comment-15721774
 ] 

ASF subversion and git services commented on CLOUDSTACK-9648:
-

Commit 9e4246a26d1d9813646cc754346c62d6c176c422 in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=9e4246a ]

Merge pull request #1808 from shapeblue/fix-release-script-checkstyle

CLOUDSTACK-9648: Fix release script to update checkstyle pom

This fixes build_asf.sh release script to update checkstyle pom.xml with the
provided new version.

* pr/1808:
  CLOUDSTACK-9648: Fix release script to update checkstyle pom

Signed-off-by: Rohit Yadav 


> Checkstyle module version fails to update by build_asf.sh
> -
>
> Key: CLOUDSTACK-9648
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9648
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> As reported on users@, build_asf.sh fails to update the checkstyle module's 
> pom.xml, which breaks builds from source tarballs.





[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721769#comment-15721769
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1729
  
Thanks @murali-reddy @abhinandanprateek, I'll proceed with merging this. We 
can explore using apache-commons pool in the future.


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to the memory leak was VmwareContextPool, where a registry (an 
> ArrayList) is held that grows indefinitely. The list itself is never used or 
> consumed; a hashmap (pool) that returns a list of contexts for an existing 
> poolkey (address/username) is used instead. The fix is to get rid of the 
> registry and limit the length of the hashmap's context list for any poolkey.





[jira] [Commented] (CLOUDSTACK-9648) Checkstyle module version fails to update by build_asf.sh

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721772#comment-15721772
 ] 

ASF subversion and git services commented on CLOUDSTACK-9648:
-

Commit 77d2984494aa5fb32662c990503ae4f251fc4c6f in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=77d2984 ]

CLOUDSTACK-9648: Fix release script to update checkstyle pom

This fixes build_asf.sh release script to update checkstyle pom.xml with the
provided new version.

Signed-off-by: Rohit Yadav 


> Checkstyle module version fails to update by build_asf.sh
> -
>
> Key: CLOUDSTACK-9648
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9648
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> As reported on users@, build_asf.sh fails to update the checkstyle module's 
> pom.xml, which breaks builds from source tarballs.





[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721776#comment-15721776
 ] 

ASF subversion and git services commented on CLOUDSTACK-9564:
-

Commit 8d14e8e8b5909d4bfe0b505880eb92194e0b08a4 in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=8d14e8e ]

Merge pull request #1729 from shapeblue/vmware-memleak-fix

CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

In a recent management server crash, it was found that the largest contributor
to memory leak was in VmwareContextPool where a registry is held (arraylist)
that grows indefinitely. The list itself is not used anywhere or consumed. There
exists a hashmap (pool) that returns a list of contexts for existing poolkey
(address/username) that is used instead.

This fixes the issue by removing the arraylist registry, and limiting the
length of the context list for a given poolkey.

@blueorangutan package

* pr/1729:
  CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

Signed-off-by: Rohit Yadav 


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to the memory leak was VmwareContextPool, where a registry (an 
> ArrayList) is held that grows indefinitely. The list itself is never used or 
> consumed; a hashmap (pool) that returns a list of contexts for an existing 
> poolkey (address/username) is used instead. The fix is to get rid of the 
> registry and limit the length of the hashmap's context list for any poolkey.





[jira] [Commented] (CLOUDSTACK-9648) Checkstyle module version fails to update by build_asf.sh

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721773#comment-15721773
 ] 

ASF subversion and git services commented on CLOUDSTACK-9648:
-

Commit 9e4246a26d1d9813646cc754346c62d6c176c422 in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=9e4246a ]

Merge pull request #1808 from shapeblue/fix-release-script-checkstyle

CLOUDSTACK-9648: Fix release script to update checkstyle pom

This fixes build_asf.sh release script to update checkstyle pom.xml with the
provided new version.

* pr/1808:
  CLOUDSTACK-9648: Fix release script to update checkstyle pom

Signed-off-by: Rohit Yadav 


> Checkstyle module version fails to update by build_asf.sh
> -
>
> Key: CLOUDSTACK-9648
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9648
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> As reported on users@, build_asf.sh fails to update the checkstyle module's 
> pom.xml, which breaks builds from source tarballs.





[jira] [Commented] (CLOUDSTACK-9648) Checkstyle module version fails to update by build_asf.sh

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721790#comment-15721790
 ] 

ASF subversion and git services commented on CLOUDSTACK-9648:
-

Commit 77d2984494aa5fb32662c990503ae4f251fc4c6f in cloudstack's branch 
refs/heads/4.9 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=77d2984 ]

CLOUDSTACK-9648: Fix release script to update checkstyle pom

This fixes build_asf.sh release script to update checkstyle pom.xml with the
provided new version.

Signed-off-by: Rohit Yadav 


> Checkstyle module version fails to update by build_asf.sh
> -
>
> Key: CLOUDSTACK-9648
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9648
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> As reported on users@, build_asf.sh fails to update the checkstyle module's 
> pom.xml, which breaks builds from source tarballs.





[jira] [Commented] (CLOUDSTACK-9648) Checkstyle module version fails to update by build_asf.sh

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721791#comment-15721791
 ] 

ASF subversion and git services commented on CLOUDSTACK-9648:
-

Commit 9e4246a26d1d9813646cc754346c62d6c176c422 in cloudstack's branch 
refs/heads/4.9 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=9e4246a ]

Merge pull request #1808 from shapeblue/fix-release-script-checkstyle

CLOUDSTACK-9648: Fix release script to update checkstyle pom

This fixes build_asf.sh release script to update checkstyle pom.xml with the
provided new version.

* pr/1808:
  CLOUDSTACK-9648: Fix release script to update checkstyle pom

Signed-off-by: Rohit Yadav 


> Checkstyle module version fails to update by build_asf.sh
> -
>
> Key: CLOUDSTACK-9648
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9648
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> As reported on users@, build_asf.sh fails to update the checkstyle module's 
> pom.xml, which breaks builds from source tarballs.





[jira] [Commented] (CLOUDSTACK-9648) Checkstyle module version fails to update by build_asf.sh

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721797#comment-15721797
 ] 

ASF subversion and git services commented on CLOUDSTACK-9648:
-

Commit 9e4246a26d1d9813646cc754346c62d6c176c422 in cloudstack's branch 
refs/heads/4.8 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=9e4246a ]

Merge pull request #1808 from shapeblue/fix-release-script-checkstyle

CLOUDSTACK-9648: Fix release script to update checkstyle pom

This fixes build_asf.sh release script to update checkstyle pom.xml with the
provided new version.

* pr/1808:
  CLOUDSTACK-9648: Fix release script to update checkstyle pom

Signed-off-by: Rohit Yadav 


> Checkstyle module version fails to update by build_asf.sh
> -
>
> Key: CLOUDSTACK-9648
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9648
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> As reported on users@, build_asf.sh fails to update the checkstyle module's 
> pom.xml, which breaks builds from source tarballs.





[jira] [Commented] (CLOUDSTACK-9648) Checkstyle module version fails to update by build_asf.sh

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721796#comment-15721796
 ] 

ASF subversion and git services commented on CLOUDSTACK-9648:
-

Commit 9e4246a26d1d9813646cc754346c62d6c176c422 in cloudstack's branch 
refs/heads/4.8 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=9e4246a ]

Merge pull request #1808 from shapeblue/fix-release-script-checkstyle

CLOUDSTACK-9648: Fix release script to update checkstyle pom

This fixes build_asf.sh release script to update checkstyle pom.xml with the
provided new version.

* pr/1808:
  CLOUDSTACK-9648: Fix release script to update checkstyle pom

Signed-off-by: Rohit Yadav 


> Checkstyle module version fails to update by build_asf.sh
> -
>
> Key: CLOUDSTACK-9648
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9648
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> As reported on users@, build_asf.sh fails to update the checkstyle module's 
> pom.xml, which breaks builds from source tarballs.





[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721794#comment-15721794
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1729


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to the memory leak was VmwareContextPool, where a registry (an 
> ArrayList) is held that grows indefinitely. The list itself is never used or 
> consumed; a hashmap (pool) that returns a list of contexts for an existing 
> poolkey (address/username) is used instead. The fix is to get rid of the 
> registry and limit the length of the hashmap's context list for any poolkey.





[jira] [Commented] (CLOUDSTACK-9648) Checkstyle module version fails to update by build_asf.sh

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721795#comment-15721795
 ] 

ASF subversion and git services commented on CLOUDSTACK-9648:
-

Commit 77d2984494aa5fb32662c990503ae4f251fc4c6f in cloudstack's branch 
refs/heads/4.8 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=77d2984 ]

CLOUDSTACK-9648: Fix release script to update checkstyle pom

This fixes build_asf.sh release script to update checkstyle pom.xml with the
provided new version.

Signed-off-by: Rohit Yadav 


> Checkstyle module version fails to update by build_asf.sh
> -
>
> Key: CLOUDSTACK-9648
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9648
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> As reported on users@, build_asf.sh fails to update the checkstyle module's 
> pom.xml, which breaks builds from source tarballs.





[jira] [Commented] (CLOUDSTACK-9648) Checkstyle module version fails to update by build_asf.sh

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721798#comment-15721798
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9648:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1808


> Checkstyle module version fails to update by build_asf.sh
> -
>
> Key: CLOUDSTACK-9648
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9648
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> As reported on users@, build_asf.sh fails to update the checkstyle module's 
> pom.xml, which breaks builds from source tarballs.





[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721789#comment-15721789
 ] 

ASF subversion and git services commented on CLOUDSTACK-9564:
-

Commit 90a3d97c5e20b625b528c527a2f31474082214ef in cloudstack's branch 
refs/heads/4.9 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=90a3d97 ]

CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

In a recent management server crash, it was found that the largest contributor
to memory leak was in VmwareContextPool where a registry is held (arraylist)
that grows indefinitely. The list itself is not used anywhere or consumed. There
exists a hashmap (pool) that returns a list of contexts for existing poolkey
(address/username) that is used instead.

This fixes the issue by removing the arraylist registry, and limiting the
length of the context list for a given poolkey.

Signed-off-by: Rohit Yadav 


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to the memory leak was VmwareContextPool, where a registry (an 
> ArrayList) is held that grows indefinitely. The list itself is never used or 
> consumed; a hashmap (pool) that returns a list of contexts for an existing 
> poolkey (address/username) is used instead. The fix is to get rid of the 
> registry and limit the length of the hashmap's context list for any poolkey.





[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721793#comment-15721793
 ] 

ASF subversion and git services commented on CLOUDSTACK-9564:
-

Commit 8d14e8e8b5909d4bfe0b505880eb92194e0b08a4 in cloudstack's branch 
refs/heads/4.9 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=8d14e8e ]

Merge pull request #1729 from shapeblue/vmware-memleak-fix

CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

In a recent management server crash, it was found that the largest contributor
to memory leak was in VmwareContextPool where a registry is held (arraylist)
that grows indefinitely. The list itself is not used anywhere or consumed. There
exists a hashmap (pool) that returns a list of contexts for existing poolkey
(address/username) that is used instead.

This fixes the issue by removing the arraylist registry, and limiting the
length of the context list for a given poolkey.

@blueorangutan package

* pr/1729:
  CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

Signed-off-by: Rohit Yadav 


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to the memory leak was VmwareContextPool, where a registry (an 
> ArrayList) is held that grows indefinitely. The list itself is never used or 
> consumed; a hashmap (pool) that returns a list of contexts for an existing 
> poolkey (address/username) is used instead. The fix is to get rid of the 
> registry and limit the length of the hashmap's context list for any poolkey.





[jira] [Commented] (CLOUDSTACK-9648) Checkstyle module version fails to update by build_asf.sh

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721792#comment-15721792
 ] 

ASF subversion and git services commented on CLOUDSTACK-9648:
-

Commit 9e4246a26d1d9813646cc754346c62d6c176c422 in cloudstack's branch 
refs/heads/4.9 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=9e4246a ]

Merge pull request #1808 from shapeblue/fix-release-script-checkstyle

CLOUDSTACK-9648: Fix release script to update checkstyle pom

This fixes the build_asf.sh release script to update the checkstyle pom.xml 
with the provided new version.

* pr/1808:
  CLOUDSTACK-9648: Fix release script to update checkstyle pom

Signed-off-by: Rohit Yadav 


> Checkstyle module version fails to update by build_asf.sh
> -
>
> Key: CLOUDSTACK-9648
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9648
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> As reported on users@, build_asf.sh fails to update the checkstyle module's 
> pom.xml, which breaks builds done from source tarballs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721800#comment-15721800
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1729
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, the largest contributor to the memory 
> leak was found to be VmwareContextPool, where a registry (an ArrayList) is 
> held that grows indefinitely. The list itself is never used or consumed 
> anywhere; a hashmap (the pool) that returns a list of contexts for an 
> existing poolkey (address/username) is used instead. The fix is to get rid 
> of the registry and limit the hashmap context list length for any poolkey.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9403) Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin test coverage

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721802#comment-15721802
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9403:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1579
  
@blueorangutan package


> Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin 
> test coverage
> 
>
> Key: CLOUDSTACK-9403
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9403
> Project: CloudStack
>  Issue Type: Task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation, Network Controller
>Reporter: Rahul Singal
>Assignee: Nick Livens
>
> This is the first phase of support for shared networks in CloudStack through 
> the NuageVsp network plugin. A shared network is a type of virtual network 
> that is shared between multiple accounts, i.e. a shared network can be 
> accessed by virtual machines that belong to many different accounts. This 
> basic functionality will be supported for the following common use cases:
> - A shared network can be used for monitoring purposes: it can be assigned 
> to a domain and used for monitoring VMs belonging to all accounts in that 
> domain.
> - Public accessibility of shared networks.
> The current NuageVsp plugin implementation supports overlapping IP 
> addresses, public access, and adding IP ranges in a shared network.
> In VSD, it is implemented in the following manner:
> - In order to have tenant isolation for shared networks, we create a Shared 
> L3 Subnet for each shared network and instantiate it across the relevant 
> enterprises. A shared network will only exist under an enterprise when it is 
> needed, i.e. when the first VM is spun up under that ACS domain inside that 
> shared network.
> - For a public shared network it will also create a floating IP subnet pool 
> in VSD, in addition to the above.
> PR contents:
> 1) Support for shared networks with tenant isolation on master with the 
> Nuage VSP SDN plugin.
> 2) Support for shared networks with publicly accessible IP ranges.
> 3) Marvin test coverage for shared networks on master with the Nuage VSP SDN 
> plugin.
> 4) Enhancements to our existing Marvin test code (nuagevsp plugins 
> directory).
> 5) PEP8 & PyFlakes compliance of our Marvin test code.
> Test results:
> Valiate that ROOT admin is NOT able to deploy a VM for a user in ROOT domain 
> in a shared network with ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_ROOTuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is NOT able to deploy a VM for a admin user in a 
> shared network with ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_differentdomain | 
> Status : SUCCESS ===
> ok
> Valiate that ROOT admin is NOT able to deploy a VM for admin user in the same 
> domain but in a ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_domainadminuser | 
> Status : SUCCESS ===
> ok
> Valiate that ROOT admin is NOT able to deploy a VM for user in the same 
> domain but in a different ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_domainuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is able to deploy a VM for regular user in a shared 
> network with scope=account ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_user | Status : SUCCESS 
> ===
> ok
> Valiate that ROOT admin is able to deploy a VM for user in ROOT domain in a 
> shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_ROOTuser | Status : SUCCESS 
> ===
> ok
> Valiate that ROOT admin is able to deploy a VM for a domain admin users in a 
> shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_domainadminuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is able to deploy a VM for other users in a shared 
> network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_domainuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is able to deploy a VM for admin user in a domain in 
> a shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_subdomainadminuser | Status 
> : SUCCESS ===
> ok
> Valiate that ROOT admin is able to deploy a VM for any user in a subdomain in 
> a shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_subdomainuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is NOT able to deploy a VM for parent domain admin 
> user

[jira] [Commented] (CLOUDSTACK-9403) Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin test coverage

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721804#comment-15721804
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9403:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1579
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.


> Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin 
> test coverage
> 
>
> Key: CLOUDSTACK-9403
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9403
> Project: CloudStack
>  Issue Type: Task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation, Network Controller
>Reporter: Rahul Singal
>Assignee: Nick Livens
>
> This is the first phase of support for shared networks in CloudStack through 
> the NuageVsp network plugin. A shared network is a type of virtual network 
> that is shared between multiple accounts, i.e. a shared network can be 
> accessed by virtual machines that belong to many different accounts. This 
> basic functionality will be supported for the following common use cases:
> - A shared network can be used for monitoring purposes: it can be assigned 
> to a domain and used for monitoring VMs belonging to all accounts in that 
> domain.
> - Public accessibility of shared networks.
> The current NuageVsp plugin implementation supports overlapping IP 
> addresses, public access, and adding IP ranges in a shared network.
> In VSD, it is implemented in the following manner:
> - In order to have tenant isolation for shared networks, we create a Shared 
> L3 Subnet for each shared network and instantiate it across the relevant 
> enterprises. A shared network will only exist under an enterprise when it is 
> needed, i.e. when the first VM is spun up under that ACS domain inside that 
> shared network.
> - For a public shared network it will also create a floating IP subnet pool 
> in VSD, in addition to the above.
> PR contents:
> 1) Support for shared networks with tenant isolation on master with the 
> Nuage VSP SDN plugin.
> 2) Support for shared networks with publicly accessible IP ranges.
> 3) Marvin test coverage for shared networks on master with the Nuage VSP SDN 
> plugin.
> 4) Enhancements to our existing Marvin test code (nuagevsp plugins 
> directory).
> 5) PEP8 & PyFlakes compliance of our Marvin test code.
> Test results:
> Valiate that ROOT admin is NOT able to deploy a VM for a user in ROOT domain 
> in a shared network with ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_ROOTuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is NOT able to deploy a VM for a admin user in a 
> shared network with ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_differentdomain | 
> Status : SUCCESS ===
> ok
> Valiate that ROOT admin is NOT able to deploy a VM for admin user in the same 
> domain but in a ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_domainadminuser | 
> Status : SUCCESS ===
> ok
> Valiate that ROOT admin is NOT able to deploy a VM for user in the same 
> domain but in a different ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_domainuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is able to deploy a VM for regular user in a shared 
> network with scope=account ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_user | Status : SUCCESS 
> ===
> ok
> Valiate that ROOT admin is able to deploy a VM for user in ROOT domain in a 
> shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_ROOTuser | Status : SUCCESS 
> ===
> ok
> Valiate that ROOT admin is able to deploy a VM for a domain admin users in a 
> shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_domainadminuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is able to deploy a VM for other users in a shared 
> network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_domainuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is able to deploy a VM for admin user in a domain in 
> a shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_subdomainadminuser | Status 
> : SUCCESS ===
> ok
> Valiate that ROOT admin is able to deploy a VM for any user in a subdomain in 
> a shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_subdomainuser | Status : 
> SUCCESS ===
> ok
>

[jira] [Commented] (CLOUDSTACK-9339) Virtual Routers don't handle Multiple Public Interfaces

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721815#comment-15721815
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9339:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1659
  
@murali-reddy @abhinandanprateek let me know if any help is needed from my end.


> Virtual Routers don't handle Multiple Public Interfaces
> ---
>
> Key: CLOUDSTACK-9339
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9339
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
>Reporter: dsclose
>Assignee: Murali Reddy
>  Labels: firewall, nat, router
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> There are a series of issues with the way Virtual Routers manage multiple 
> public interfaces. These are more pronounced on redundant virtual router 
> setups. I have not attempted to examine these issues in a VPC context. 
> Outside of a VPC context, however, the following is expected behaviour:
> * eth0 connects the router to the guest network.
> * In RvR setups, keepalived manages the guests' gateway IP as a virtual IP on 
> eth0.
> * eth1 provides a local link to the hypervisor, allowing Cloudstack to issue 
> commands to the router.
> * eth2 is the router's public interface. By default, a single public IP will 
> be set up on eth2 along with the necessary iptables and ip rules to 
> source-NAT guest traffic to that public IP.
> * When a public IP address is assigned to the router that is on a separate 
> subnet to the source-NAT IP, a new interface is configured, such as eth3, and 
> the IP is assigned to that interface.
> * This can result in eth3, eth4, eth5, etc. being created depending upon how 
> many public subnets the router has to work with.
> The above all works. The following, however, is currently not working:
> * Public interfaces should be set to DOWN on backup redundant routers. The 
> master.py script is responsible for setting public interfaces to UP during a 
> keepalived transition. Currently the check_is_up method of the CsIP class 
> brings all interfaces UP on both RvR. A proposed fix for this has been 
> discussed on the mailing list. That fix will leave public interfaces DOWN on 
> RvR allowing the keepalived transition to control the state of public 
> interfaces. Issue #1413 includes a commit that contradicts the proposed fix 
> so it is unclear what the current state of the code should be.
> * Newly created interfaces should be set to UP on master redundant routers. 
> Assuming public interfaces should by default be DOWN on an RvR, we need to 
> accommodate the fact that, as interfaces are created, no keepalived 
> transition occurs. This means that assigning an IP from a new public subnet 
> will have no effect (as the interface will be down) until the network is 
> restarted with a "clean up."
> * Public interfaces other than eth2 do not forward traffic. There are two 
> iptables rules in the FORWARD chain of the filter table created for eth2 that 
> allow forwarding between eth2 and eth0. Equivalent rules are not created for 
> other public interfaces so forwarded traffic is dropped.
> * Outbound traffic from guest VMs does not honour static-NAT rules. Instead, 
> outbound traffic is source-NAT'd to the networks default source-NAT IP. New 
> connections from guests that are destined for public networks are processed 
> like so:
> 1. Traffic is matched against the following rule in the mangle table, which 
> marks the connection with mark 0x0:
> *mangle
> -A PREROUTING -i eth0 -m state --state NEW -j CONNMARK --set-xmark 
> 0x0/0x
> 2. There are no "ip rule" statements that match a connection marked 0x0, so 
> the kernel routes the connection via the default gateway. That gateway is on 
> the source-NAT subnet, so the connection is routed out of eth2.
> 3. The following iptables rules are then matched in the filter table:
> *filter
> -A FORWARD -i eth0 -o eth2 -j FW_OUTBOUND
> -A FW_OUTBOUND -j FW_EGRESS_RULES
> -A FW_EGRESS_RULES -j ACCEPT
> 4. Finally, the following rule is matched from the nat table, where the IP 
> address is the source-NAT IP:
> *nat
> -A POSTROUTING -o eth2 -j SNAT --to-source 123.4.5.67
>  
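The missing per-interface forwarding described above can be sketched as a small rule generator in the style of the VR's Python configuration code. This is a hedged illustration: the helper and the RELATED,ESTABLISHED return rule are assumptions, not the actual CsIP implementation; only the FW_OUTBOUND rule text mirrors an iptables line quoted in the issue:

```python
def forwarding_rules(guest_if, public_ifs):
    """Emit the FORWARD-chain filter rules that every public interface
    (not just eth2) would need to pair it with the guest interface.
    Sketch only; in the reported behaviour these rules exist for eth2
    but are never created for eth3, eth4, etc."""
    rules = []
    for pub in public_ifs:
        # Outbound guest traffic leaving via this public interface.
        rules.append("-A FORWARD -i %s -o %s -j FW_OUTBOUND" % (guest_if, pub))
        # Return traffic for established connections (assumed rule).
        rules.append("-A FORWARD -i %s -o %s -m state --state RELATED,ESTABLISHED -j ACCEPT"
                     % (pub, guest_if))
    return rules
```

Generating the pair for every public interface, rather than hard-coding eth2, is what would stop forwarded traffic on eth3+ from being dropped.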



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9453) Optimizing Marvin

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721818#comment-15721818
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9453:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1675
  
@abhinandanprateek is this still WIP? Let me know if any help is needed from 
my end; also, please rebase and fix conflicts.


> Optimizing Marvin
> -
>
> Key: CLOUDSTACK-9453
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9453
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Reporter: Abhinandan Prateek
>Assignee: Abhinandan Prateek
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> Currently, running all Marvin tests can take up to 4 days. The tests are not 
> optimized for the nested cloud setups where most of the test automation 
> runs. There are some simple things that can be done to optimize the runs:
> 1. Have a smaller default template: if we install the macchinina template by 
> default and use it wherever there is no specific dependency on the OS, it 
> will speed up many of the Marvin tests.
> 2. Most of the tests have template names hard-coded. It would be a good idea 
> to allow some form of configuration so that test writers can use templates 
> that better suit their test scenarios.
> 3. Some test timeouts are unnecessarily long; a failure can be detected much 
> earlier instead of waiting through several long timeouts.
> 4. The ability to tune service offerings to better suit Marvin environments.
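The configurable-template idea in point 2 could be sketched as a small helper that Marvin tests call instead of hard-coding a template name. The configuration keys and the macchinina fallback here are illustrative assumptions, not Marvin's actual API:

```python
def resolve_template(test_data, service="default"):
    """Return the template name a test should use: prefer a per-service
    override from the test configuration, then a global override, then a
    small built-in default (e.g. macchinina). All keys are illustrative."""
    overrides = test_data.get("templates", {})
    if service in overrides:
        return overrides[service]
    return overrides.get("default", "macchinina")
```

With such a hook, environments that need an OS-specific image override only the tests that depend on it, while everything else falls back to the small default template.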



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9586) When using local storage with Xenserver prepareTemplate does not work with multiple primary store

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721822#comment-15721822
 ] 

ASF subversion and git services commented on CLOUDSTACK-9586:
-

Commit 20aea27dc0dd2a212acd830d47945ea5ea579f0c in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=20aea27 ]

Merge pull request #1765 from shapeblue/CLOUDSTACK-9586

Cloudstack 9586: When using local storage with Xenserver prepareTemplate does 
not work with multiple primary store

The race condition happens whenever there are multiple primary storages and 
CloudStack tries to mount the secondary store to a XenServer host from 
several threads simultaneously.

Due to the synchronised block, one mount succeeds and the other thread gets 
the already mounted SR. Without the fix, the two threads would try to mount 
it in parallel and one would fail on XenServer.

* pr/1765:
  Cloudstack 9586: When using local storage with Xenserver prepareTemplate 
does not work with multiple primary store

Signed-off-by: Rohit Yadav 


> When using local storage with Xenserver prepareTemplate does not work with 
> multiple primary store
> -
>
> Key: CLOUDSTACK-9586
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9586
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Secondary Storage, XenServer
>Affects Versions: 4.5.2
> Environment: XenServer 6.5 SP1
> Local Storage
>Reporter: Abhinandan Prateek
>Assignee: Abhinandan Prateek
>Priority: Critical
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> 2016-11-09 15:05:15,876 DEBUG [c.c.h.x.r.XenServerStorageProcessor] 
> (DirectAgent-29:ctx-8d890b55) Failed to destroy pbd
> SR_BACKEND_FAILURE_40The SR scan failed  [opterr=['INTERNAL_ERROR', 
> 'Db_exn.Uniqueness_constraint_violation("VDI", "uuid", 
> "703f59ca-6e5e-38d3-bbef-707b5b14c704")']]
>   at com.xensource.xenapi.Types.checkResponse(Types.java:2021)
>   at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
>   at 
> com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:462)
>   at com.xensource.xenapi.SR.scan(SR.java:1257)
>   at 
> com.cloud.hypervisor.xenserver.resource.Xenserver625StorageProcessor.createFileSR(Xenserver625StorageProcessor.java:113)
>   at 
> com.cloud.hypervisor.xenserver.resource.Xenserver625StorageProcessor.createFileSr(Xenserver625StorageProcessor.java:139)
>   at 
> com.cloud.hypervisor.xenserver.resource.Xenserver625StorageProcessor.copyTemplateToPrimaryStorage(Xenserver625StorageProcessor.java:173)
> Root Cause: CloudPlatform creates an SR on each host, which points to the 
> template location on the secondary storage 
> (secondary_Storage/template/tmpl//). This causes a database 
> unique-constraint violation when each XenServer host scans the SR created on 
> it. The host that scans the SR last throws the exception, because the VDI 
> was already recognized from the SR scan of the first host.
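The mount-once-per-path behaviour the fix relies on can be sketched as follows. This is a Python illustration of the synchronisation pattern, not the actual Java Xenserver625StorageProcessor code; `mount_fn` is a hypothetical stand-in for the hypervisor call that mounts and scans the SR:

```python
import threading

class SrMounter:
    """Sketch of the synchronised-mount fix: only one thread mounts/scans
    the SR for a given secondary-storage path; later threads reuse the
    already mounted SR instead of racing into a duplicate scan."""

    def __init__(self, mount_fn):
        self._mount_fn = mount_fn
        self._lock = threading.Lock()
        self._mounted = {}  # secondary-storage path -> SR handle

    def get_or_create_sr(self, path):
        with self._lock:
            sr = self._mounted.get(path)
            if sr is None:
                sr = self._mount_fn(path)  # performed at most once per path
                self._mounted[path] = sr
            return sr
```

Because the check and the mount happen under the same lock, a second thread arriving for the same path always observes the first thread's SR rather than triggering a second scan, which is what raised the uniqueness-constraint error above.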



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9586) When using local storage with Xenserver prepareTemplate does not work with multiple primary store

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721825#comment-15721825
 ] 

ASF subversion and git services commented on CLOUDSTACK-9586:
-

Commit 20aea27dc0dd2a212acd830d47945ea5ea579f0c in cloudstack's branch 
refs/heads/4.9 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=20aea27 ]

Merge pull request #1765 from shapeblue/CLOUDSTACK-9586

Cloudstack 9586: When using local storage with Xenserver prepareTemplate does 
not work with multiple primary store

The race condition happens whenever there are multiple primary storages and 
CloudStack tries to mount the secondary store to a XenServer host from 
several threads simultaneously.

Due to the synchronised block, one mount succeeds and the other thread gets 
the already mounted SR. Without the fix, the two threads would try to mount 
it in parallel and one would fail on XenServer.

* pr/1765:
  Cloudstack 9586: When using local storage with Xenserver prepareTemplate 
does not work with multiple primary store

Signed-off-by: Rohit Yadav 


> When using local storage with Xenserver prepareTemplate does not work with 
> multiple primary store
> -
>
> Key: CLOUDSTACK-9586
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9586
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Secondary Storage, XenServer
>Affects Versions: 4.5.2
> Environment: XenServer 6.5 SP1
> Local Storage
>Reporter: Abhinandan Prateek
>Assignee: Abhinandan Prateek
>Priority: Critical
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> 2016-11-09 15:05:15,876 DEBUG [c.c.h.x.r.XenServerStorageProcessor] 
> (DirectAgent-29:ctx-8d890b55) Failed to destroy pbd
> SR_BACKEND_FAILURE_40The SR scan failed  [opterr=['INTERNAL_ERROR', 
> 'Db_exn.Uniqueness_constraint_violation("VDI", "uuid", 
> "703f59ca-6e5e-38d3-bbef-707b5b14c704")']]
>   at com.xensource.xenapi.Types.checkResponse(Types.java:2021)
>   at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
>   at 
> com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:462)
>   at com.xensource.xenapi.SR.scan(SR.java:1257)
>   at 
> com.cloud.hypervisor.xenserver.resource.Xenserver625StorageProcessor.createFileSR(Xenserver625StorageProcessor.java:113)
>   at 
> com.cloud.hypervisor.xenserver.resource.Xenserver625StorageProcessor.createFileSr(Xenserver625StorageProcessor.java:139)
>   at 
> com.cloud.hypervisor.xenserver.resource.Xenserver625StorageProcessor.copyTemplateToPrimaryStorage(Xenserver625StorageProcessor.java:173)
> Root Cause: CloudPlatform creates an SR on each host, which points to the 
> template location on the secondary storage 
> (secondary_Storage/template/tmpl//). This causes a database 
> unique-constraint violation when each XenServer host scans the SR created on 
> it. The host that scans the SR last throws the exception, because the VDI 
> was already recognized from the SR scan of the first host.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9586) When using local storage with Xenserver prepareTemplate does not work with multiple primary store

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721832#comment-15721832
 ] 

ASF subversion and git services commented on CLOUDSTACK-9586:
-

Commit 20aea27dc0dd2a212acd830d47945ea5ea579f0c in cloudstack's branch 
refs/heads/4.8 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=20aea27 ]

Merge pull request #1765 from shapeblue/CLOUDSTACK-9586

Cloudstack 9586: When using local storage with Xenserver prepareTemplate does 
not work with multiple primary store

The race condition happens whenever there are multiple primary storages and 
CloudStack tries to mount the secondary store to a XenServer host from 
several threads simultaneously.

Due to the synchronised block, one mount succeeds and the other thread gets 
the already mounted SR. Without the fix, the two threads would try to mount 
it in parallel and one would fail on XenServer.

* pr/1765:
  Cloudstack 9586: When using local storage with Xenserver prepareTemplate 
does not work with multiple primary store

Signed-off-by: Rohit Yadav 


> When using local storage with Xenserver prepareTemplate does not work with 
> multiple primary store
> -
>
> Key: CLOUDSTACK-9586
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9586
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Secondary Storage, XenServer
>Affects Versions: 4.5.2
> Environment: XenServer 6.5 SP1
> Local Storage
>Reporter: Abhinandan Prateek
>Assignee: Abhinandan Prateek
>Priority: Critical
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> 2016-11-09 15:05:15,876 DEBUG [c.c.h.x.r.XenServerStorageProcessor] 
> (DirectAgent-29:ctx-8d890b55) Failed to destroy pbd
> SR_BACKEND_FAILURE_40The SR scan failed  [opterr=['INTERNAL_ERROR', 
> 'Db_exn.Uniqueness_constraint_violation("VDI", "uuid", 
> "703f59ca-6e5e-38d3-bbef-707b5b14c704")']]
>   at com.xensource.xenapi.Types.checkResponse(Types.java:2021)
>   at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
>   at 
> com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:462)
>   at com.xensource.xenapi.SR.scan(SR.java:1257)
>   at 
> com.cloud.hypervisor.xenserver.resource.Xenserver625StorageProcessor.createFileSR(Xenserver625StorageProcessor.java:113)
>   at 
> com.cloud.hypervisor.xenserver.resource.Xenserver625StorageProcessor.createFileSr(Xenserver625StorageProcessor.java:139)
>   at 
> com.cloud.hypervisor.xenserver.resource.Xenserver625StorageProcessor.copyTemplateToPrimaryStorage(Xenserver625StorageProcessor.java:173)
> Root Cause: CloudPlatform creates an SR on each host, which points to the 
> template location on the secondary storage 
> (secondary_Storage/template/tmpl//). This causes a database 
> unique-constraint violation when each XenServer host scans the SR created on 
> it. The host that scans the SR last throws the exception, because the VDI 
> was already recognized from the SR scan of the first host.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-9653) listCapacity API shows incorrect output when sortBy=usage option is added

2016-12-05 Thread Rashmi Dixit (JIRA)
Rashmi Dixit created CLOUDSTACK-9653:


 Summary: listCapacity API shows incorrect output when sortBy=usage 
option is added
 Key: CLOUDSTACK-9653
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9653
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: API
Affects Versions: 4.9.0, 4.8.0, 4.7.0, 4.6.0
Reporter: Rashmi Dixit
 Fix For: 4.9.1.0


listCapacity API does not sum up values correctly when used with the 
sortBy=usage option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9339) Virtual Routers don't handle Multiple Public Interfaces

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721847#comment-15721847
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9339:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1659
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-323


> Virtual Routers don't handle Multiple Public Interfaces
> ---
>
> Key: CLOUDSTACK-9339
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9339
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
>Reporter: dsclose
>Assignee: Murali Reddy
>  Labels: firewall, nat, router
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> There are a series of issues with the way Virtual Routers manage multiple 
> public interfaces. These are more pronounced on redundant virtual router 
> setups. I have not attempted to examine these issues in a VPC context. 
> Outside of a VPC context, however, the following is expected behaviour:
> * eth0 connects the router to the guest network.
> * In RvR setups, keepalived manages the guests' gateway IP as a virtual IP on 
> eth0.
> * eth1 provides a local link to the hypervisor, allowing Cloudstack to issue 
> commands to the router.
> * eth2 is the router's public interface. By default, a single public IP is 
> set up on eth2 along with the necessary iptables and ip rules to source-NAT 
> guest traffic to that public IP.
> * When a public IP address is assigned to the router that is on a separate 
> subnet to the source-NAT IP, a new interface is configured, such as eth3, and 
> the IP is assigned to that interface.
> * This can result in eth3, eth4, eth5, etc. being created depending upon how 
> many public subnets the router has to work with.
> The above all works. The following, however, is currently not working:
> * Public interfaces should be set to DOWN on backup redundant routers. The 
> master.py script is responsible for setting public interfaces to UP during a 
> keepalived transition. Currently the check_is_up method of the CsIP class 
> brings all interfaces UP on both RvR. A proposed fix for this has been 
> discussed on the mailing list. That fix will leave public interfaces DOWN on 
> RvR allowing the keepalived transition to control the state of public 
> interfaces. Issue #1413 includes a commit that contradicts the proposed fix 
> so it is unclear what the current state of the code should be.
> * Newly created interfaces should be set to UP on master redundant routers. 
> Assuming public interfaces should by default be DOWN on an RvR, we need to 
> accommodate the fact that no keepalived transition occurs as interfaces are 
> created. This means that assigning an IP from a new public subnet will have 
> no effect (as the interface will be down) until the network is restarted 
> with a "clean up."
> * Public interfaces other than eth2 do not forward traffic. There are two 
> iptables rules in the FORWARD chain of the filter table created for eth2 that 
> allow forwarding between eth2 and eth0. Equivalent rules are not created for 
> other public interfaces so forwarded traffic is dropped.
> * Outbound traffic from guest VMs does not honour static-NAT rules. Instead, 
> outbound traffic is source-NAT'd to the network's default source-NAT IP. New 
> connections from guests that are destined for public networks are processed 
> like so:
> 1. Traffic is matched against the following rule in the mangle table that 
> marks the connection with a 0x0:
> *mangle
> -A PREROUTING -i eth0 -m state --state NEW -j CONNMARK --set-xmark 
> 0x0/0x
> 2. There are no "ip rule" statements that match a connection marked 0x0, so 
> the kernel routes the connection via the default gateway. That gateway is on 
> source-NAT subnet, so the connection is routed out of eth2.
> 3. The following iptables rules are then matched in the filter table:
> *filter
> -A FORWARD -i eth0 -o eth2 -j FW_OUTBOUND
> -A FW_OUTBOUND -j FW_EGRESS_RULES
> -A FW_EGRESS_RULES -j ACCEPT
> 4. Finally, the following rule is matched from the nat table, where the IP 
> address is the source-NAT IP:
> *nat
> -A POSTROUTING -o eth2 -j SNAT --to-source 123.4.5.67
>  
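The forwarding gap described in the third bullet can be illustrated with a small sketch. This is a hypothetical Python helper, not CloudStack's actual systemvm code; the second (return-path) rule is an assumption modeled on common stateful-firewall setups, since the issue only quotes the outbound eth2 rule:

```python
def forward_rules(public_if, guest_if="eth0"):
    """Return the filter-table FORWARD rules that eth2 already gets,
    parameterized for another public interface (e.g. eth3).

    Hypothetical helper for illustration only; the return-path rule is
    an assumed stateful ACCEPT, not quoted from the router's config."""
    return [
        # Outbound: guest -> public, subject to egress firewall rules.
        f"-A FORWARD -i {guest_if} -o {public_if} -j FW_OUTBOUND",
        # Return path: only established/related traffic is allowed back in.
        f"-A FORWARD -i {public_if} -o {guest_if} "
        "-m state --state RELATED,ESTABLISHED -j ACCEPT",
    ]

# Without equivalents of these rules, traffic forwarded via eth3+ is dropped.
for rule in forward_rules("eth3"):
    print(rule)
```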



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9339) Virtual Routers don't handle Multiple Public Interfaces

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721852#comment-15721852
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9339:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1659
  
@blueorangutan test matrix




[jira] [Commented] (CLOUDSTACK-9339) Virtual Routers don't handle Multiple Public Interfaces

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721854#comment-15721854
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9339:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1659
  
@rhtyd a Trillian-Jenkins matrix job (centos6 mgmt + xs65sp1, centos7 mgmt 
+ vmware55u3, centos7 mgmt + kvmcentos7) has been kicked to run smoke tests




[jira] [Commented] (CLOUDSTACK-9646) [Usage] No usage is generated for uploaded templates/volumes from local

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721888#comment-15721888
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9646:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1809
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-324


> [Usage] No usage is generated for uploaded templates/volumes from local
> ---
>
> Key: CLOUDSTACK-9646
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9646
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.0, 4.8.0, 4.9.0
>Reporter: Rajani Karuturi
>Assignee: Rajani Karuturi
>Priority: Critical
> Fix For: 4.9.1.0
>
>
> Repro steps:
> 1. Upload a template from local 
> 2. Upload a volume from local
> Bug:
> no usage events are recorded, and consequently no usage is generated for the 
> uploaded template and volume





[jira] [Commented] (CLOUDSTACK-9632) Upgrade bouncycastle to 1.55+

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721890#comment-15721890
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9632:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1799
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-325


> Upgrade bouncycastle to 1.55+
> -
>
> Key: CLOUDSTACK-9632
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9632
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0
>
>
> Upgrade the bouncycastle library to the latest version.





[jira] [Commented] (CLOUDSTACK-9646) [Usage] No usage is generated for uploaded templates/volumes from local

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721892#comment-15721892
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9646:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1809
  
@blueorangutan test




[jira] [Commented] (CLOUDSTACK-9646) [Usage] No usage is generated for uploaded templates/volumes from local

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721894#comment-15721894
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9646:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1809
  
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been 
kicked to run smoke tests




[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721969#comment-15721969
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1729
  
Packaging result: ✖centos6 ✔centos7 ✔debian. JID-326


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, the largest contributor to the memory 
> leak was found to be VmwareContextPool, which holds a registry (an ArrayList) 
> that grows indefinitely. The list itself is never used or consumed. A hashmap 
> (pool) that returns a list of contexts for an existing poolkey 
> (address/username) is used instead. The fix is to remove the registry and 
> limit the length of the hashmap's context list for any poolkey.
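A minimal sketch of the proposed fix, assuming a simple cap-and-evict policy per poolkey; this is illustrative Python, while the actual VmwareContextPool fix lives in CloudStack's Java codebase:

```python
from collections import defaultdict, deque

class BoundedContextPool:
    """Keep only the keyed pool (no separate unbounded registry) and cap
    the number of idle contexts retained per poolkey (address/username).

    Sketch only: class name, cap policy, and API are assumptions."""

    def __init__(self, max_per_key=10):
        self.max_per_key = max_per_key
        self.pool = defaultdict(deque)  # poolkey -> idle contexts

    def release(self, poolkey, context):
        """Return a context to the pool, evicting the oldest if full."""
        q = self.pool[poolkey]
        if len(q) >= self.max_per_key:
            q.popleft()  # drop the oldest instead of growing forever
        q.append(context)

    def acquire(self, poolkey):
        """Take the most recently released context, or None if empty."""
        q = self.pool[poolkey]
        return q.pop() if q else None

pool = BoundedContextPool(max_per_key=2)
for ctx in ("ctx1", "ctx2", "ctx3"):
    pool.release("10.1.1.1/admin", ctx)
print(len(pool.pool["10.1.1.1/admin"]))  # capped at 2
```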





[jira] [Commented] (CLOUDSTACK-9403) Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin test coverage

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15721970#comment-15721970
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9403:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1579
  
Packaging result: ✖centos6 ✔centos7 ✔debian. JID-327


> Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin 
> test coverage
> 
>
> Key: CLOUDSTACK-9403
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9403
> Project: CloudStack
>  Issue Type: Task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation, Network Controller
>Reporter: Rahul Singal
>Assignee: Nick Livens
>
> This is the first phase of support for shared networks in CloudStack through 
> the NuageVsp network plugin. A shared network is a type of virtual network 
> that is shared between multiple accounts, i.e. a shared network can be 
> accessed by virtual machines that belong to many different accounts. This 
> basic functionality will be supported with the below common use cases:
> - A shared network can be used for monitoring purposes. It can be assigned 
> to a domain and used for monitoring VMs belonging to all accounts in that 
> domain.
> - Public accessibility of the shared network.
> The current implementation with the NuageVsp plugin supports overlapping IP 
> addresses, public access, and adding IP ranges in a shared network.
> In VSD, it is implemented in the following manner:
> - In order to have tenant isolation for shared networks, we have to create a 
> Shared L3 Subnet for each shared network and instantiate it across the 
> relevant enterprises. A shared network will only exist under an enterprise 
> when it is needed, i.e. when the first VM is spun up under that ACS domain 
> inside that shared network.
> - For a public shared network, a floating IP subnet pool is also created in 
> VSD, along with everything mentioned in the above point.
> PR contents:
> 1) Support for shared networks with tenant isolation on master with the 
> Nuage VSP SDN Plugin.
> 2) Support for shared networks with publicly accessible IP ranges.
> 3) Marvin test coverage for shared networks on master with the Nuage VSP SDN 
> Plugin.
> 4) Enhancements to our existing Marvin test code (nuagevsp plugins 
> directory).
> 5) PEP8 & PyFlakes compliance of our Marvin test code.
> Test results:
> Validate that ROOT admin is NOT able to deploy a VM for a user in ROOT domain 
> in a shared network with ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_ROOTuser | Status : 
> SUCCESS ===
> ok
> Validate that ROOT admin is NOT able to deploy a VM for an admin user in a 
> shared network with ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_differentdomain | 
> Status : SUCCESS ===
> ok
> Validate that ROOT admin is NOT able to deploy a VM for admin user in the same 
> domain but in a ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_domainadminuser | 
> Status : SUCCESS ===
> ok
> Validate that ROOT admin is NOT able to deploy a VM for user in the same 
> domain but in a different ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_domainuser | Status : 
> SUCCESS ===
> ok
> Validate that ROOT admin is able to deploy a VM for regular user in a shared 
> network with scope=account ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_user | Status : SUCCESS 
> ===
> ok
> Validate that ROOT admin is able to deploy a VM for user in ROOT domain in a 
> shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_ROOTuser | Status : SUCCESS 
> ===
> ok
> Validate that ROOT admin is able to deploy a VM for a domain admin user in a 
> shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_domainadminuser | Status : 
> SUCCESS ===
> ok
> Validate that ROOT admin is able to deploy a VM for other users in a shared 
> network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_domainuser | Status : 
> SUCCESS ===
> ok
> Validate that ROOT admin is able to deploy a VM for admin user in a domain in 
> a shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_subdomainadminuser | Status 
> : SUCCESS ===
> ok
> Validate that ROOT admin is able to deploy a VM for any user in a subdomain in 
> a shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_subdomainuser | Status : 
> SUCCESS ===
> ok
> Validate that ROOT admin is NOT able to deplo
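The lazy, per-enterprise instantiation described above (a Shared L3 Subnet is only created under an enterprise when the first VM of that ACS domain lands on the network) can be sketched as follows; class and method names are illustrative, not the actual NuageVsp plugin API:

```python
class SharedNetwork:
    """Sketch of lazy shared-network instantiation per enterprise.

    Illustrative only: the real plugin drives VSD via its own API; the
    names and print-based placeholder below are assumptions."""

    def __init__(self, name):
        self.name = name
        self.instantiated_for = set()  # enterprises that have the subnet

    def deploy_vm(self, enterprise, vm):
        # Create the Shared L3 Subnet under this enterprise only on the
        # first VM deployment for that enterprise (ACS domain).
        if enterprise not in self.instantiated_for:
            self._create_shared_l3_subnet(enterprise)
            self.instantiated_for.add(enterprise)
        return f"{vm} deployed in {self.name} for {enterprise}"

    def _create_shared_l3_subnet(self, enterprise):
        # Placeholder for the VSD call that instantiates the subnet.
        print(f"instantiating shared L3 subnet of {self.name} in {enterprise}")

net = SharedNetwork("shared-net-1")
print(net.deploy_vm("enterprise-A", "vm-1"))  # triggers subnet creation
print(net.deploy_vm("enterprise-A", "vm-2"))  # reuses the existing subnet
```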

[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15722003#comment-15722003
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


GitHub user rhtyd opened a pull request:

https://github.com/apache/cloudstack/pull/1816

CLOUDSTACK-9564: Fix NPE due to intermittent test assertion

The test assertion on a pool object may return a null object, as objects
can be randomly expired/tombstoned. This fixes an NPE sometimes seen due
to the recently merged fix for CLOUDSTACK-9564.

(I'll merge this if Travis passes)

/cc @abhinandanprateek @murali-reddy 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shapeblue/cloudstack 4.9-fix-npe-vmware

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1816.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1816


commit dcbf3c8689ed3eaed8653763ec27d2907671c72b
Author: Rohit Yadav 
Date:   2016-12-05T11:15:33Z

CLOUDSTACK-9564: Fix NPE due to intermittent test assertion

The test assertion on a pool object may return a null object, as objects
can be randomly expired/tombstoned. This fixes an NPE sometimes seen due
to the recently merged fix for CLOUDSTACK-9564.

Signed-off-by: Rohit Yadav 
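The null-tolerant assertion pattern this PR describes can be sketched like so; the helper name and dict-based pool are hypothetical illustrations, not the actual Java test code:

```python
def assert_pool_entry_valid(pool, key):
    """Null-tolerant check: an idle context may have been expired or
    tombstoned, so a missing entry is acceptable; only a present entry
    is validated. Hypothetical helper mirroring the fix's idea."""
    ctx = pool.get(key)  # may legitimately be None after expiry
    if ctx is not None:
        assert ctx["poolkey"] == key  # validate only when present
    return ctx

# An empty pool yields None instead of crashing the assertion.
print(assert_pool_entry_valid({}, "10.1.1.1/admin"))
```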






[jira] [Commented] (CLOUDSTACK-9403) Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin test coverage

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15722006#comment-15722006
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9403:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1579
  
@blueorangutan test centos7 vmware-55u3



[jira] [Commented] (CLOUDSTACK-9403) Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin test coverage

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15722009#comment-15722009
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9403:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1579
  
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + vmware-55u3) has been 
kicked to run smoke tests


> Nuage VSP Plugin : Support for SharedNetwork fuctionality including Marvin 
> test coverage
> 
>
> Key: CLOUDSTACK-9403
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9403
> Project: CloudStack
>  Issue Type: Task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation, Network Controller
>Reporter: Rahul Singal
>Assignee: Nick Livens
>
> This is the first phase of support for shared networks in CloudStack through the 
> NuageVsp network plugin. A shared network is a type of virtual network that 
> is shared between multiple accounts, i.e. a shared network can be accessed by 
> virtual machines that belong to many different accounts. This basic 
> functionality will be supported for the following common use cases:
> - A shared network can be used for monitoring purposes: it can be 
> assigned to a domain and used for monitoring VMs belonging to all 
> accounts in that domain.
> - Publicly accessible shared networks.
> The current NuageVsp plugin implementation supports overlapping IP addresses, 
> public access, and adding IP ranges in shared networks.
> In VSD, this is implemented as follows:
> - In order to have tenant isolation for shared networks, we create a 
> Shared L3 Subnet for each shared network and instantiate it across 
> the relevant enterprises. A shared network will only exist under an 
> enterprise when it is needed, i.e. when the first VM is spun up under that ACS 
> domain inside that shared network.
> - For a public shared network, a floating IP subnet pool is also created in 
> VSD, along with everything mentioned in the point above.
> PR contents:
> 1) Support for shared networks with tenant isolation on master with the Nuage VSP 
> SDN plugin.
> 2) Support for shared networks with publicly accessible IP ranges.
> 3) Marvin test coverage for shared networks on master with the Nuage VSP SDN 
> plugin.
> 4) Enhancements to our existing Marvin test code (nuagevsp plugins directory).
> 5) PEP8 & PyFlakes compliance of our Marvin test code.
> Test results:
> Validate that ROOT admin is NOT able to deploy a VM for a user in ROOT domain 
> in a shared network with ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_ROOTuser | Status : 
> SUCCESS ===
> ok
> Validate that ROOT admin is NOT able to deploy a VM for an admin user in a 
> shared network with ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_differentdomain | 
> Status : SUCCESS ===
> ok
> Validate that ROOT admin is NOT able to deploy a VM for admin user in the same 
> domain but in a ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_domainadminuser | 
> Status : SUCCESS ===
> ok
> Validate that ROOT admin is NOT able to deploy a VM for user in the same 
> domain but in a different ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_domainuser | Status : 
> SUCCESS ===
> ok
> Validate that ROOT admin is able to deploy a VM for regular user in a shared 
> network with scope=account ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_user | Status : SUCCESS 
> ===
> ok
> Validate that ROOT admin is able to deploy a VM for user in ROOT domain in a 
> shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_ROOTuser | Status : SUCCESS 
> ===
> ok
> Validate that ROOT admin is able to deploy a VM for domain admin users in a 
> shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_domainadminuser | Status : 
> SUCCESS ===
> ok
> Validate that ROOT admin is able to deploy a VM for other users in a shared 
> network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_domainuser | Status : 
> SUCCESS ===
> ok
> Validate that ROOT admin is able to deploy a VM for admin user in a domain in 
> a shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_subdomainadminuser | Status 
> : SUCCESS ===
> ok
> Validate that ROOT admin is able to deploy a VM for any user in a subdomain in 
> a shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_subdomainuser | Status : 
> SUCCESS ===
> ok
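The deploy permutations exercised above all reduce to the shared-network scope rule: scope=all is usable by anyone, scope=domain restricts to a domain subtree, and scope=account restricts to one account. A minimal sketch of that rule, assuming simplified dict shapes and a hypothetical `can_deploy` helper (this is not the plugin's actual code):

```python
def can_deploy(network, account):
    """Decide whether an account may deploy a VM in a shared network,
    based on the network's scope. Illustrative only."""
    scope = network["scope"]
    if scope == "all":
        # usable by any account in any domain
        return True
    if scope == "domain":
        # account must sit inside the network's domain subtree
        return account["domain_path"].startswith(network["domain_path"])
    if scope == "account":
        # exact domain and account match required
        return (account["domain_path"] == network["domain_path"]
                and account["name"] == network["account"])
    return False

user = {"name": "bob", "domain_path": "/ROOT/d1/sub/"}
net_all = {"scope": "all", "domain_path": "/ROOT/", "account": "admin"}
net_dom = {"scope": "domain", "domain_path": "/ROOT/d1/", "account": "admin"}
net_acc = {"scope": "account", "domain_path": "/ROOT/d1/", "account": "alice"}
```

The NOT-able cases in the test list correspond to scope=account networks owned by a different account or domain.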

[jira] [Commented] (CLOUDSTACK-9403) Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin test coverage

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15722010#comment-15722010
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9403:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1579
  
@blueorangutan test centos7 xenserver-65sp1



[jira] [Commented] (CLOUDSTACK-9403) Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin test coverage

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15722013#comment-15722013
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9403:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1579
  
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + xenserver-65sp1) has 
been kicked to run smoke tests



[jira] [Commented] (CLOUDSTACK-9637) Template create from snapshot does not populate vm_template_details

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1572#comment-1572
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9637:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1805
  
@blueorangutan package


> Template create from snapshot does not populate vm_template_details
> ---
>
> Key: CLOUDSTACK-9637
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9637
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.8.0
> Environment:  VMware ESX , CS 4.8.0
>Reporter: Sudhansu Sahu
>Assignee: Sudhansu Sahu
>
> ISSUE
> 
> Template create from snapshot does not populate vm_template_details
> TROUBLESHOOTING
> ==
> {noformat}
> mysql> select id,name,uuid,instance_name,vm_template_id from vm_instance
>     -> where uuid='453313f5-ef97-461a-94f5-0838617fe826';
> +----+-------+--------------------------------------+---------------+----------------+
> | id | name  | uuid                                 | instance_name | vm_template_id |
> +----+-------+--------------------------------------+---------------+----------------+
> |  9 | vm001 | 453313f5-ef97-461a-94f5-0838617fe826 | i-2-9-VM      |            202 |
> +----+-------+--------------------------------------+---------------+----------------+
> 1 row in set (0.00 sec)
> mysql> select id,name,source_template_id from vm_template where id=202;
> +-----+--------+--------------------+
> | id  | name   | source_template_id |
> +-----+--------+--------------------+
> | 202 | Debian |               NULL |
> +-----+--------+--------------------+
> 1 row in set (0.00 sec)
> mysql> select * from vm_template_details where template_id=202;
> +----+-------------+--------------------+-------+---------+
> | id | template_id | name               | value | display |
> +----+-------------+--------------------+-------+---------+
> |  1 |         202 | keyboard           | us    |       1 |
> |  2 |         202 | nicAdapter         | E1000 |       1 |
> |  3 |         202 | rootDiskController | scsi  |       1 |
> +----+-------------+--------------------+-------+---------+
> 3 rows in set (0.00 sec)
> mysql> select id,name,source_template_id from vm_template
>     -> where source_template_id=202;
> +-----+----------------+--------------------+
> | id  | name           | source_template_id |
> +-----+----------------+--------------------+
> | 203 | derived-debian |                202 |
> +-----+----------------+--------------------+
> 1 row in set (0.00 sec)
> mysql> select * from vm_template_details where template_id=203;
> Empty set (0.00 sec)
> {noformat}
> REPRO STEPS
> ==
> 1. Register template A and specify the properties:
> Root disk controller: scsi
> NIC adapter type: E1000
> Keyboard type: us
> 2. Create a VM instance from template A
> 3. Take a volume snapshot of the VM instance
> 4. Delete the VM instance
> 5. Switch to "Storage->Snapshots" and convert the snapshot to a template B
> 6. Observe that template B does not inherit the properties from template A; 
> its rows in vm_template_details are missing
> EXPECTED BEHAVIOR
> ==
> The new template should inherit the properties from its source template
>  
> ACTUAL BEHAVIOR
> ==
> The template detail properties are lost
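The expected behavior amounts to cloning the vm_template_details rows (keyboard, nicAdapter, rootDiskController, ...) when the derived template is created. A hedged sketch of that inheritance step, using plain dicts as stand-ins for the table rows (the `copy_template_details` helper is illustrative, not the actual CloudStack DAO API):

```python
def copy_template_details(details_table, source_template_id, new_template_id):
    """Clone detail rows from the source template to the new template,
    mirroring what template-from-snapshot should do but currently skips."""
    inherited = [
        dict(row, template_id=new_template_id)  # same name/value, new owner
        for row in details_table
        if row["template_id"] == source_template_id
    ]
    details_table.extend(inherited)
    return inherited

# Rows matching the troubleshooting output above (template 202 = Debian)
details = [
    {"template_id": 202, "name": "keyboard", "value": "us"},
    {"template_id": 202, "name": "nicAdapter", "value": "E1000"},
    {"template_id": 202, "name": "rootDiskController", "value": "scsi"},
]
copied = copy_template_details(details, source_template_id=202, new_template_id=203)
```

After the copy, template 203 (derived-debian) would carry the same three detail rows instead of an empty set.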



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9637) Template create from snapshot does not populate vm_template_details

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1575#comment-1575
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9637:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1805
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9637) Template create from snapshot does not populate vm_template_details

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15722280#comment-15722280
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9637:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1805
  
Packaging result: ✔centos6 ✔centos7 ✖debian. JID-328





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9637) Template create from snapshot does not populate vm_template_details

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15722286#comment-15722286
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9637:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1805
  
@blueorangutan test





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9637) Template create from snapshot does not populate vm_template_details

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15722288#comment-15722288
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9637:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1805
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9558) Cleanup the snapshots on the primary storage of Xenserver after VM/Volume is expunged

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15722330#comment-15722330
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9558:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1722
  
I've manually tested the PR and it seems to work as expected. 
I'll kick the smoke tests to check for any regressions.
LGTM
@blueorangutan test


> Cleanup the snapshots on the primary storage of Xenserver after VM/Volume is 
> expunged
> -
>
> Key: CLOUDSTACK-9558
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9558
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Affects Versions: 4.8.0
> Environment: Xen Server
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> Steps to reproduce the issue
> ===
> i) Deploy a new VM in CCP on XenServer
> ii) Create a snapshot of the volume created in step i) from CCP. This step 
> creates a snapshot on the primary storage and keeps it there, since it is 
> used as the reference for incremental snapshots
> iii) Now destroy and expunge the VM created in step i)
> You will notice that the volume of the VM created in step i) is deleted 
> from the primary storage. However, the snapshot created on primary storage as 
> part of step ii) still exists there and has to be deleted 
> manually by the admin.
> The snapshot remains on the primary storage even after the volume is deleted.
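The fix being tested boils down to extending volume expunge so it also removes the reference snapshot left on primary storage. A minimal sketch of that cleanup under stated assumptions (the dict-based store and `expunge_volume` are illustrative, not the XenServer resource code):

```python
def expunge_volume(primary_store, volume_id):
    """Remove a volume from primary storage, and also remove any snapshots
    kept on primary that reference it (the incremental-snapshot base)."""
    primary_store.pop(volume_id, None)
    # find snapshots on primary that were taken from this volume
    stale = [sid for sid, obj in primary_store.items()
             if obj.get("kind") == "snapshot" and obj.get("volume_id") == volume_id]
    for sid in stale:
        primary_store.pop(sid)
    return stale

primary = {
    "vol-1": {"kind": "volume"},
    "snap-1": {"kind": "snapshot", "volume_id": "vol-1"},
    "vol-2": {"kind": "volume"},
}
removed = expunge_volume(primary, "vol-1")
```

Without the second pass, `snap-1` would linger on primary storage after `vol-1` is expunged, which is exactly the symptom in the report.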



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9558) Cleanup the snapshots on the primary storage of Xenserver after VM/Volume is expunged

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15722332#comment-15722332
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9558:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1722
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9597) Incorrect updateResourceCount()

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15722376#comment-15722376
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9597:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1764
  
Tested the fix manually; it looks OK. Smoke test results also look good. 


> Incorrect updateResourceCount()
> ---
>
> Key: CLOUDSTACK-9597
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9597
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Reporter: Marc-Aurèle Brothier
>Assignee: Marc-Aurèle Brothier
>
> h3. Currently
> On management server startup, {{ConfigurationServerImpl}} compares the number 
> of resource types multiplied by the number of accounts and domains against 
> the number of resourceCount rows, to verify that every account and domain 
> has a resource count set for each resource type. The lists of accounts and 
> domains are fetched excluding removed ones, but the resourceCount rows per 
> account and domain include all of them, so the arithmetic check is incorrect.
> The API command {{updateResourceCount}} can also crash with an incorrect SQL 
> query.
> I discovered the problem while adding a new {{ResourceType}}.
> h3. Changes
> Fetch the number of resourceCount rows per domain and account excluding the 
> removed ones.
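The consistency check described above can be sketched as follows. This is an illustrative model with hypothetical data, not the actual CloudStack code: the point is that both sides of the comparison must exclude removed owners, or the check fails spuriously.

```python
# Minimal sketch of the startup consistency check: the expected number of
# resource_count rows is (#active owners) * (#resource types), so the
# per-owner row counts must likewise exclude removed owners.

RESOURCE_TYPES = ["user_vm", "volume", "snapshot"]  # illustrative subset

accounts = [
    {"id": 1, "removed": None},
    {"id": 2, "removed": "2016-11-30"},  # a removed account
]
resource_counts = {  # owner id -> number of resource_count rows in the DB
    1: len(RESOURCE_TYPES),
    2: len(RESOURCE_TYPES),  # rows for the removed account still exist
}

def check(include_removed_in_count):
    active = [a for a in accounts if a["removed"] is None]
    expected = len(active) * len(RESOURCE_TYPES)
    owners = accounts if include_removed_in_count else active
    actual = sum(resource_counts.get(a["id"], 0) for a in owners)
    return expected == actual

# Buggy behaviour: row counts include removed owners, check fails spuriously.
assert check(include_removed_in_count=True) is False
# Fixed behaviour: both sides exclude removed owners, check passes.
assert check(include_removed_in_count=False) is True
```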



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9633) test_snapshot is failing due to incorrect string construction in utils.py

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15722439#comment-15722439
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9633:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1807
  
LGTM


> test_snapshot is failing due to incorrect string construction in utils.py
> -
>
> Key: CLOUDSTACK-9633
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9633
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.10.0.0
> Environment: https://github.com/apache/cloudstack/pull/1800
>Reporter: Boris Stoyanov
> Fix For: 4.10.0.0
>
>
> When searching for the snapshot VHD on the NFS storage, the string 
> construction appends the extension twice ([name].vhd.vhd). Removing the 
> extra extension for XenServer made the test pass.
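The double-extension bug above can be sketched as a one-line guard. This is a hypothetical helper, not the actual utils.py code: append the hypervisor extension only when the name does not already carry it.

```python
# Hypothetical sketch of the fix: avoid producing "[name].vhd.vhd" when the
# snapshot name already ends with the hypervisor's extension.

def snapshot_path(name, extension=".vhd"):
    """Append the extension only when it is not already present."""
    return name if name.endswith(extension) else name + extension

assert snapshot_path("abc123") == "abc123.vhd"
# Naive concatenation would have produced "abc123.vhd.vhd" here:
assert snapshot_path("abc123.vhd") == "abc123.vhd"
```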



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9339) Virtual Routers don't handle Multiple Public Interfaces

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15722923#comment-15722923
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9339:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1659
  
Trillian test result (tid-581)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 28738 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1659-t581-kvm-centos7.zip
Test completed. 45 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_redundant_VPC_default_routes | `Failure` | 860.01 | 
test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 463.64 | 
test_privategw_acl.py
test_01_create_template | `Error` | 70.62 | test_templates.py
test_01_vpc_site2site_vpn | Success | 135.36 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 86.44 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 258.72 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 250.02 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 566.53 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 521.44 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1543.24 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 566.83 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1300.59 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.91 | test_volumes.py
test_08_resize_volume | Success | 15.42 | test_volumes.py
test_07_resize_fail | Success | 20.49 | test_volumes.py
test_06_download_detached_volume | Success | 15.62 | test_volumes.py
test_05_detach_volume | Success | 100.29 | test_volumes.py
test_04_delete_attached_volume | Success | 10.25 | test_volumes.py
test_03_download_attached_volume | Success | 15.38 | test_volumes.py
test_02_attach_volume | Success | 74.61 | test_volumes.py
test_01_create_volume | Success | 681.63 | test_volumes.py
test_deploy_vm_multiple | Success | 304.30 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.65 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.18 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 35.96 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.16 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 130.89 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.87 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.18 | test_vm_life_cycle.py
test_01_stop_vm | Success | 35.32 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 126.28 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.16 | test_templates.py
test_03_delete_template | Success | 5.12 | test_templates.py
test_02_edit_template | Success | 90.15 | test_templates.py
test_10_destroy_cpvm | Success | 131.46 | test_ssvm.py
test_09_destroy_ssvm | Success | 168.60 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.38 | test_ssvm.py
test_07_reboot_ssvm | Success | 103.47 | test_ssvm.py
test_06_stop_cpvm | Success | 101.52 | test_ssvm.py
test_05_stop_ssvm | Success | 133.51 | test_ssvm.py
test_04_cpvm_internals | Success | 1.06 | test_ssvm.py
test_03_ssvm_internals | Success | 3.20 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.14 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 16.54 | test_snapshots.py
test_04_change_offering_small | Success | 235.00 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.14 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.19 | test_secondary_storage.py
test_09_reboot_router | Success | 35.32 | test_routers.py
test_08_start_router | Success | 30.32 | test_routers.py
test_07_stop_router | Success | 10.16 | test_routers.py
test_06_router_advanced | Success | 0.06 | test_routers.py
test_05_router_basic | Success | 0.04 | test_routers.py

[jira] [Commented] (CLOUDSTACK-9619) Fixes for PR 1600

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15722998#comment-15722998
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9619:


Github user mike-tutkowski commented on the issue:

https://github.com/apache/cloudstack/pull/1749
  
I've updated the commit summary, @rhtyd


> Fixes for PR 1600
> -
>
> Key: CLOUDSTACK-9619
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9619
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.10.0.0
> Environment: All
>Reporter: Mike Tutkowski
> Fix For: 4.10.0.0
>
>
> In StorageSystemDataMotionStrategy.performCopyOfVdi we call 
> getSnapshotDetails. In one such scenario, the source snapshot in question is 
> coming from secondary storage (when we are creating a new volume on managed 
> storage from a snapshot of ours that’s on secondary storage).
> This usually “worked” in the regression tests due to a bit of "luck": We 
> retrieve the ID of the snapshot (which is on secondary storage) and then try 
> to pull out its StorageVO object (which is for primary storage). If you 
> happen to have a primary storage that matches the ID (which is the ID of a 
> secondary storage), then getSnapshotDetails populates its Map 
> with inapplicable data (that is later ignored) and you don’t easily see a 
> problem. However, if you don’t have a primary storage that matches that ID 
> (which I didn’t today because I had removed that primary storage), then a 
> NullPointerException is thrown.
> I have fixed that issue by skipping getSnapshotDetails if the source is 
> coming from secondary storage.
> While fixing that, I noticed a couple more problems:
>   We can invoke grantAccess on a snapshot that’s actually on secondary 
> storage (this doesn’t amount to much because the VolumeServiceImpl ignores 
> the call when it’s not for a primary-storage driver).
>   We can invoke revokeAccess on a snapshot that’s actually on secondary 
> storage (this doesn’t amount to much because the VolumeServiceImpl ignores 
> the call when it’s not for a primary-storage driver).
> I have corrected those issues, as well.
> I then came across one more problem:
> When using a SAN snapshot and copying it to secondary storage or creating a 
> new managed-storage volume from a snapshot of ours on secondary storage, we 
> attach to the SR in the XenServer code, but detach from it in the 
> StorageSystemDataMotionStrategy code (by sending a message to the XenServer 
> code to perform an SR detach). Since we know to detach from the SR after the 
> copy is done, we should detach from the SR in the XenServer code (without 
> that code having to be explicitly called from outside of the XenServer logic).
> I went ahead and changed that, as well.
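The first fix described above can be sketched as a storage-role guard. This is an illustrative model, not the actual StorageSystemDataMotionStrategy code: snapshot details, grantAccess, and revokeAccess only apply to snapshots on primary storage, so they are skipped when the source is on secondary storage.

```python
# Illustrative sketch of the guard: only call primary-storage-specific
# operations when the snapshot actually lives on primary storage.

def copy_snapshot(snapshot):
    calls = []  # record which storage-driver operations would be invoked
    on_primary = snapshot["store_role"] == "Primary"
    if on_primary:
        calls.append("getSnapshotDetails")
        calls.append("grantAccess")
    # ... perform the actual copy here ...
    if on_primary:
        calls.append("revokeAccess")
    return calls

assert copy_snapshot({"store_role": "Primary"}) == [
    "getSnapshotDetails", "grantAccess", "revokeAccess"]
# Snapshot on secondary (image) storage: skip all three, avoiding the NPE.
assert copy_snapshot({"store_role": "Image"}) == []
```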



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9620) KVM Improvements for Managed Storage

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723013#comment-15723013
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


Github user mike-tutkowski commented on the issue:

https://github.com/apache/cloudstack/pull/1748
  
I've updated the commit summary, @rhtyd.


> KVM Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server
>Affects Versions: Future
> Environment: KVM
>Reporter: Mike Tutkowski
> Fix For: Future
>
>
> Allow zone-wide primary storage based on a custom plug-in to be added via the 
> GUI in a KVM-only environment.
> Support for root disks on managed storage with KVM
> Template caching with managed storage and KVM
> Support for volume snapshots with managed storage on KVM
> Added the ability to revert a volume to a snapshot on KVM
> Updated some integration tests
> Enforce that a SolidFire volume’s Min IOPS cannot exceed 15,000 and its Max 
> and Burst IOPS cannot exceed 100,000.
> A SolidFire volume must be at least one GB.
> The storage driver should not remove the row from the 
> cloud.template_spool_ref table.
> Enable cluster-scoped managed storage
> Only volumes from zone-wide managed storage can be storage motioned from a 
> host in one cluster to a host in another cluster (cannot do so at the time 
> being with volumes from cluster-scoped managed storage).
> Updates for SAN-assisted snapshots
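The SolidFire limits listed above can be expressed as a small validation routine. This is a hedged sketch using constants taken from the description, not from the actual driver code.

```python
# Assumed limits, drawn from the issue description above.
GiB = 1024 ** 3
MIN_IOPS_CAP = 15_000         # Min IOPS cannot exceed 15,000
MAX_BURST_IOPS_CAP = 100_000  # Max and Burst IOPS cannot exceed 100,000
MIN_SIZE_BYTES = 1 * GiB      # a SolidFire volume must be at least one GB

def validate_volume(size_bytes, min_iops, max_iops, burst_iops):
    """Raise ValueError when a requested volume violates the stated limits."""
    if size_bytes < MIN_SIZE_BYTES:
        raise ValueError("volume must be at least 1 GB")
    if min_iops > MIN_IOPS_CAP:
        raise ValueError("Min IOPS cannot exceed 15,000")
    if max_iops > MAX_BURST_IOPS_CAP or burst_iops > MAX_BURST_IOPS_CAP:
        raise ValueError("Max/Burst IOPS cannot exceed 100,000")
    return True

assert validate_volume(2 * GiB, 1000, 5000, 10_000) is True
```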



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9633) test_snapshot is failing due to incorrect string construction in utils.py

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723215#comment-15723215
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9633:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1807
  
Trillian test result (tid-578)
Environment: xenserver-65sp1 (x2), Advanced Networking with Mgmt server 7
Total time taken: 37415 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1807-t578-xenserver-65sp1.zip
Test completed. 48 look ok, 1 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_05_rvpc_multi_tiers | `Failure` | 516.08 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Failure` | 1356.90 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 615.09 
| test_vpc_redundant.py
test_01_vpc_site2site_vpn | Success | 361.88 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 176.89 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 673.53 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 341.19 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 760.63 | test_vpc_router_nics.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 891.92 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 1102.85 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 20.94 | test_volumes.py
test_08_resize_volume | Success | 111.07 | test_volumes.py
test_07_resize_fail | Success | 116.09 | test_volumes.py
test_06_download_detached_volume | Success | 20.32 | test_volumes.py
test_05_detach_volume | Success | 100.27 | test_volumes.py
test_04_delete_attached_volume | Success | 10.20 | test_volumes.py
test_03_download_attached_volume | Success | 15.28 | test_volumes.py
test_02_attach_volume | Success | 16.47 | test_volumes.py
test_01_create_volume | Success | 393.70 | test_volumes.py
test_03_delete_vm_snapshots | Success | 280.20 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 197.71 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 141.88 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 289.05 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.04 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 46.82 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.18 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 136.56 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.11 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 15.20 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 20.25 | test_vm_life_cycle.py
test_02_start_vm | Success | 25.27 | test_vm_life_cycle.py
test_01_stop_vm | Success | 30.29 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 166.28 | test_templates.py
test_08_list_system_templates | Success | 0.04 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.11 | test_templates.py
test_04_extract_template | Success | 5.16 | test_templates.py
test_03_delete_template | Success | 5.12 | test_templates.py
test_02_edit_template | Success | 90.10 | test_templates.py
test_01_create_template | Success | 65.63 | test_templates.py
test_10_destroy_cpvm | Success | 261.86 | test_ssvm.py
test_09_destroy_ssvm | Success | 229.85 | test_ssvm.py
test_08_reboot_cpvm | Success | 157.00 | test_ssvm.py
test_07_reboot_ssvm | Success | 144.08 | test_ssvm.py
test_06_stop_cpvm | Success | 172.18 | test_ssvm.py
test_05_stop_ssvm | Success | 139.45 | test_ssvm.py
test_04_cpvm_internals | Success | 1.12 | test_ssvm.py
test_03_ssvm_internals | Success | 3.50 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.13 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 21.32 | test_snapshots.py
test_04_change_offering_small | Success | 129.16 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.08 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.13 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.19 | test_secondary_storage.py
test_01_scale_vm | Success | 5.21 | test_scale_vm.py
test_09_reboot_router | Success | 75.60 | test_routers.py
test_08_start_router | Success | 60.96 

[jira] [Commented] (CLOUDSTACK-9558) Cleanup the snapshots on the primary storage of Xenserver after VM/Volume is expunged

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723400#comment-15723400
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9558:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1722
  
Trillian test result (tid-589)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 26461 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1722-t589-kvm-centos7.zip
Test completed. 40 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_05_rvpc_multi_tiers | `Failure` | 237.54 | test_vpc_redundant.py
test_router_dhcp_opts | `Failure` | 21.13 | test_router_dhcphosts.py
test_04_rvpc_privategw_static_routes | `Failure` | 456.25 | 
test_privategw_acl.py
ContextSuite context=TestVPCRedundancy>:teardown | `Error` | 539.96 | 
test_vpc_redundant.py
test_01_vpc_site2site_vpn | Success | 169.81 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 76.03 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 285.16 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 328.78 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 538.80 | test_vpc_router_nics.py
test_04_rvpc_network_garbage_collector_nics | Success | 1347.76 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 601.75 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 754.11 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1334.46 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.32 | test_volumes.py
test_08_resize_volume | Success | 15.24 | test_volumes.py
test_07_resize_fail | Success | 20.50 | test_volumes.py
test_06_download_detached_volume | Success | 15.24 | test_volumes.py
test_05_detach_volume | Success | 100.25 | test_volumes.py
test_04_delete_attached_volume | Success | 10.29 | test_volumes.py
test_03_download_attached_volume | Success | 15.21 | test_volumes.py
test_02_attach_volume | Success | 73.96 | test_volumes.py
test_01_create_volume | Success | 758.31 | test_volumes.py
test_deploy_vm_multiple | Success | 288.00 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.01 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 21.56 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 185.28 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.93 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.89 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.90 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.11 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.31 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 70.58 | test_templates.py
test_08_list_system_templates | Success | 0.02 | test_templates.py
test_07_list_public_templates | Success | 0.02 | test_templates.py
test_05_template_permissions | Success | 0.03 | test_templates.py
test_04_extract_template | Success | 5.27 | test_templates.py
test_03_delete_template | Success | 5.17 | test_templates.py
test_02_edit_template | Success | 90.17 | test_templates.py
test_01_create_template | Success | 55.47 | test_templates.py
test_10_destroy_cpvm | Success | 161.54 | test_ssvm.py
test_09_destroy_ssvm | Success | 164.01 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.56 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.89 | test_ssvm.py
test_06_stop_cpvm | Success | 161.92 | test_ssvm.py
test_05_stop_ssvm | Success | 133.60 | test_ssvm.py
test_04_cpvm_internals | Success | 1.16 | test_ssvm.py
test_03_ssvm_internals | Success | 3.28 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.10 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.16 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.17 | test_snapshots.py
test_04_change_offering_small | Success | 240.18 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.02 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.04 | test_service_offerings.py
test_01_create_service_offering | Success | 0.07 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.07 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.10 | test_secondary_storage.py
test_09_reboot_router | Success | 40.27 | test_routers.py
test_08_start_router | Success | 50.58 | test_routers.py
test_07_stop_ro

[jira] [Commented] (CLOUDSTACK-9637) Template create from snapshot does not populate vm_template_details

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723434#comment-15723434
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9637:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1805
  
Trillian test result (tid-588)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 28632 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1805-t588-kvm-centos7.zip
Test completed. 45 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_redundant_VPC_default_routes | `Failure` | 908.56 | 
test_vpc_redundant.py
ContextSuite context=TestTemplates>:setup | `Error` | 329.37 | 
test_templates.py
ContextSuite context=TestListIdsParams>:setup | `Error` | 0.00 | 
test_list_ids_parameter.py
test_01_vpc_site2site_vpn | Success | 155.00 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 65.83 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 259.93 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 311.69 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 528.97 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 550.42 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1345.47 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 563.04 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1364.70 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 16.18 | test_volumes.py
test_08_resize_volume | Success | 15.42 | test_volumes.py
test_07_resize_fail | Success | 20.34 | test_volumes.py
test_06_download_detached_volume | Success | 15.31 | test_volumes.py
test_05_detach_volume | Success | 100.23 | test_volumes.py
test_04_delete_attached_volume | Success | 10.34 | test_volumes.py
test_03_download_attached_volume | Success | 15.23 | test_volumes.py
test_02_attach_volume | Success | 73.76 | test_volumes.py
test_01_create_volume | Success | 727.54 | test_volumes.py
test_deploy_vm_multiple | Success | 332.93 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.56 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.16 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.70 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.09 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.63 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.64 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.13 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.25 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 105.65 | test_templates.py
test_01_create_template | Success | 60.41 | test_templates.py
test_10_destroy_cpvm | Success | 131.69 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.51 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.70 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.55 | test_ssvm.py
test_06_stop_cpvm | Success | 161.76 | test_ssvm.py
test_05_stop_ssvm | Success | 133.84 | test_ssvm.py
test_04_cpvm_internals | Success | 1.29 | test_ssvm.py
test_03_ssvm_internals | Success | 5.14 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.09 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.09 | test_ssvm.py
test_01_snapshot_root_disk | Success | 16.05 | test_snapshots.py
test_04_change_offering_small | Success | 237.70 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.03 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.08 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.09 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.12 | test_secondary_storage.py
test_09_reboot_router | Success | 40.26 | test_routers.py
test_08_start_router | Success | 30.22 | test_routers.py
test_07_stop_router | Success | 10.13 | test_routers.py
test_06_router_advanced | Success | 0.04 | test_routers.py
test_05_router_basic | Success | 0.03 | test_routers.py
test_04_restart_network_wo_cleanup | Success | 5.65 | test_routers.py
test_03_restart_network_cleanup | Success | 75.45 | test_routers.py
test_02_router_internal_adv | Success | 1.10 | test_routers.py
test_01_router_internal_basic | Success | 0.56 | test_routers.py
test_router_dns_guestipquery |

[jira] [Commented] (CLOUDSTACK-9403) Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin test coverage

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723441#comment-15723441
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9403:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1579
  
Trillian test result (tid-585)
Environment: xenserver-65sp1 (x2), Advanced Networking with Mgmt server 7
Total time taken: 35923 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1579-t585-xenserver-65sp1.zip
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_05_rvpc_multi_tiers | `Failure` | 642.60 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Failure` | 1355.28 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 587.49 
| test_vpc_redundant.py
ContextSuite context=TestRVPCSite2SiteVpn>:setup | `Error` | 0.00 | 
test_vpc_vpn.py
test_05_rvpc_multi_tiers | `Error` | 838.87 | test_vpc_redundant.py
ContextSuite context=TestVPCRedundancy>:teardown | `Error` | 843.98 | 
test_vpc_redundant.py
test_01_vpc_site2site_vpn | Success | 365.40 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 136.50 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 348.87 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 691.97 | test_vpc_router_nics.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 839.57 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 1119.72 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.68 | test_volumes.py
test_08_resize_volume | Success | 85.61 | test_volumes.py
test_07_resize_fail | Success | 95.65 | test_volumes.py
test_06_download_detached_volume | Success | 20.25 | test_volumes.py
test_05_detach_volume | Success | 100.23 | test_volumes.py
test_04_delete_attached_volume | Success | 10.15 | test_volumes.py
test_03_download_attached_volume | Success | 15.20 | test_volumes.py
test_02_attach_volume | Success | 10.71 | test_volumes.py
test_01_create_volume | Success | 393.83 | test_volumes.py
test_03_delete_vm_snapshots | Success | 280.21 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 224.39 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 100.73 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 237.47 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 36.74 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.16 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 65.99 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.07 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.11 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 10.12 | test_vm_life_cycle.py
test_02_start_vm | Success | 15.15 | test_vm_life_cycle.py
test_01_stop_vm | Success | 30.20 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 181.28 | test_templates.py
test_08_list_system_templates | Success | 0.02 | test_templates.py
test_07_list_public_templates | Success | 0.03 | test_templates.py
test_05_template_permissions | Success | 0.08 | test_templates.py
test_04_extract_template | Success | 5.15 | test_templates.py
test_03_delete_template | Success | 5.09 | test_templates.py
test_02_edit_template | Success | 90.11 | test_templates.py
test_01_create_template | Success | 80.88 | test_templates.py
test_10_destroy_cpvm | Success | 231.75 | test_ssvm.py
test_09_destroy_ssvm | Success | 198.93 | test_ssvm.py
test_08_reboot_cpvm | Success | 151.88 | test_ssvm.py
test_07_reboot_ssvm | Success | 144.06 | test_ssvm.py
test_06_stop_cpvm | Success | 196.72 | test_ssvm.py
test_05_stop_ssvm | Success | 168.90 | test_ssvm.py
test_04_cpvm_internals | Success | 1.13 | test_ssvm.py
test_03_ssvm_internals | Success | 3.37 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.09 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.10 | test_ssvm.py
test_01_snapshot_root_disk | Success | 16.23 | test_snapshots.py
test_04_change_offering_small | Success | 91.20 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.05 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.04 | test_service_offerings.py
test_01_create_service_offering | Success | 0.06 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.11 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.12 | test_second

[jira] [Commented] (CLOUDSTACK-9403) Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin test coverage

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723697#comment-15723697
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9403:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1579
  
Trillian test result (tid-584)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
Total time taken: 41829 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1579-t584-vmware-55u3.zip
Test completed. 48 look ok, 1 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_vpc_site2site_vpn | `Error` | 582.72 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | `Error` | 738.70 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 186.91 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 446.26 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 718.54 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 696.81 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1540.80 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 741.13 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 797.51 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1418.36 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 30.97 | test_volumes.py
test_06_download_detached_volume | Success | 90.76 | test_volumes.py
test_05_detach_volume | Success | 110.33 | test_volumes.py
test_04_delete_attached_volume | Success | 15.25 | test_volumes.py
test_03_download_attached_volume | Success | 20.34 | test_volumes.py
test_02_attach_volume | Success | 53.81 | test_volumes.py
test_01_create_volume | Success | 531.24 | test_volumes.py
test_03_delete_vm_snapshots | Success | 280.24 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 232.41 | test_vm_snapshots.py
test_01_test_vm_volume_snapshot | Success | 212.01 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 167.15 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 298.80 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 27.09 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.27 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 81.31 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.11 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.16 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.15 | test_vm_life_cycle.py
test_02_start_vm | Success | 25.30 | test_vm_life_cycle.py
test_01_stop_vm | Success | 10.15 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 367.43 | test_templates.py
test_08_list_system_templates | Success | 0.04 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 25.50 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.16 | test_templates.py
test_01_create_template | Success | 146.06 | test_templates.py
test_10_destroy_cpvm | Success | 297.40 | test_ssvm.py
test_09_destroy_ssvm | Success | 273.98 | test_ssvm.py
test_08_reboot_cpvm | Success | 157.04 | test_ssvm.py
test_07_reboot_ssvm | Success | 158.62 | test_ssvm.py
test_06_stop_cpvm | Success | 237.05 | test_ssvm.py
test_05_stop_ssvm | Success | 209.20 | test_ssvm.py
test_04_cpvm_internals | Success | 1.24 | test_ssvm.py
test_03_ssvm_internals | Success | 4.41 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.14 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 96.79 | test_snapshots.py
test_04_change_offering_small | Success | 132.33 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.09 | test_service_offerings.py
test_01_create_service_offering | Success | 0.12 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.14 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.19 | test_secondary_storage.py
test_09_reboot_router | Success | 166.11 | test_routers.py
test_08_start_router | Success | 146.01 | test_routers.py
test_07_stop_router | Success | 20.23 | test_routers.py
test_06_router_advanced | Success

[jira] [Created] (CLOUDSTACK-9654) Incorrect hypervisor mapping of various SUSE Linux guest os versions on VMware

2016-12-05 Thread Sateesh Chodapuneedi (JIRA)
Sateesh Chodapuneedi created CLOUDSTACK-9654:


 Summary: Incorrect hypervisor mapping of various SUSE Linux guest 
os versions on VMware
 Key: CLOUDSTACK-9654
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9654
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.9.0
 Environment: ACS 4.9
VMware 5.1
Reporter: Sateesh Chodapuneedi
Assignee: Sateesh Chodapuneedi
 Fix For: 4.9.1.0


Currently, many versions of SUSE Linux do not have a hypervisor mapping 
entry in the guest_os_hypervisor table in the cloud database for VMware 6.0. 
We also observed that the guest_os_name field is incorrect for some SUSE 
Linux variants, which results in deployed SUSE Linux instances being set to 
guest OS type "Other (64-bit)" on vCenter, which does not represent the 
guest OS accurately on the hypervisor.
The current (4.9) mappings in the database look as below:
{noformat}
mysql> select id,display_name from guest_os where display_name like '%suse%';
+-----+----------------------------------------------+
| id  | display_name                                 |
+-----+----------------------------------------------+
|  40 | SUSE Linux Enterprise Server 9 SP4 (32-bit)  |
|  41 | SUSE Linux Enterprise Server 10 SP1 (32-bit) |
|  42 | SUSE Linux Enterprise Server 10 SP1 (64-bit) |
|  43 | SUSE Linux Enterprise Server 10 SP2 (32-bit) |
|  44 | SUSE Linux Enterprise Server 10 SP2 (64-bit) |
|  45 | SUSE Linux Enterprise Server 10 SP3 (64-bit) |
|  46 | SUSE Linux Enterprise Server 11 (32-bit)     |
|  47 | SUSE Linux Enterprise Server 11 (64-bit)     |
|  96 | SUSE Linux Enterprise 8(32-bit)              |
|  97 | SUSE Linux Enterprise 8(64-bit)              |
| 107 | SUSE Linux Enterprise 9(32-bit)              |
| 108 | SUSE Linux Enterprise 9(64-bit)              |
| 109 | SUSE Linux Enterprise 10(32-bit)             |
| 110 | SUSE Linux Enterprise 10(64-bit)             |
| 151 | SUSE Linux Enterprise Server 10 SP3 (32-bit) |
| 152 | SUSE Linux Enterprise Server 10 SP4 (64-bit) |
| 153 | SUSE Linux Enterprise Server 10 SP4 (32-bit) |
| 154 | SUSE Linux Enterprise Server 11 SP1 (64-bit) |
| 155 | SUSE Linux Enterprise Server 11 SP1 (32-bit) |
| 185 | SUSE Linux Enterprise Server 11 SP2 (64-bit) |
| 186 | SUSE Linux Enterprise Server 11 SP2 (32-bit) |
| 187 | SUSE Linux Enterprise Server 11 SP3 (64-bit) |
| 188 | SUSE Linux Enterprise Server 11 SP3 (32-bit) |
| 202 | Other SUSE Linux(32-bit)                     |
| 203 | Other SUSE Linux(64-bit)                     |
| 244 | SUSE Linux Enterprise Server 12 (64-bit)     |
+-----+----------------------------------------------+
26 rows in set (0.00 sec)

mysql> select o.id,o.display_name, h.guest_os_name from guest_os as o, 
guest_os_hypervisor as h where o.id=h.guest_os_id and 
h.hypervisor_version='6.0' and h.hypervisor_type='vmware' and o.display_name 
like '%SUSE%';
+-----+----------------------------------------------+---------------+
| id  | display_name                                 | guest_os_name |
+-----+----------------------------------------------+---------------+
|  96 | SUSE Linux Enterprise 8(32-bit)              | suseGuest     |
|  97 | SUSE Linux Enterprise 8(64-bit)              | suse64Guest   |
| 107 | SUSE Linux Enterprise 9(32-bit)              | suseGuest     |
| 108 | SUSE Linux Enterprise 9(64-bit)              | suse64Guest   |
| 109 | SUSE Linux Enterprise 10(32-bit)             | suseGuest     |
| 110 | SUSE Linux Enterprise 10(64-bit)             | suse64Guest   |
| 202 | Other SUSE Linux(32-bit)                     | suseGuest     |
| 203 | Other SUSE Linux(64-bit)                     | suse64Guest   |
+-----+----------------------------------------------+---------------+
8 rows in set (0.00 sec)

mysql> select * from version;
+----+---------+---------------------+----------+
| id | version | updated             | step     |
+----+---------+---------------------+----------+
|  1 | 4.0.0   | 2016-12-05 17:35:27 | Complete |
|  2 | 4.1.0   | 2016-12-05 12:05:57 | Complete |
|  3 | 4.2.0   | 2016-12-05 12:05:58 | Complete |
|  4 | 4.2.1   | 2016-12-05 12:05:58 | Complete |
|  5 | 4.3.0   | 2016-12-05 12:05:58 | Complete |
|  6 | 4.4.0   | 2016-12-05 12:05:58 | Complete |
|  7 | 4.4.1   | 2016-12-05 12:05:58 | Complete |
|  8 | 4.4.2   | 2016-12-05 12:05:58 | Complete |
|  9 | 4.5.0   | 2016-12-05 12:05:58 | Complete |
| 10 | 4.5.1   | 2016-12-05 12:05:58 | Complete |
| 11 | 4.5.2   | 2016-12-05 12:05:58 | Complete |
| 12 | 4.6.0   | 2016-12-05 12:05:58 | Complete |
| 13 | 4.6.1   | 2016-12-05 12:05:58 | Complete |
| 14 | 4.7.0   | 2016-12-05 12:05:58 | Complete |
| 15 | 4.7.1   | 2016-12-05 12:05:58 | Complete |
| 16 | 4.8.0   | 2016-12-05 12:05:58 | Complete |
| 17 | 4.8.1   | 2016-12-05 12:05:58 | Complete |
| 18 | 4.9.0   | 2016-12-05 12:05:58 | Complete |
| 19 | 4.9.1.0 | 2016-12-05 12:05:58 | Complete |
++---

[jira] [Updated] (CLOUDSTACK-9654) Incorrect hypervisor mapping of various SUSE Linux guest os versions on VMware

2016-12-05 Thread Sateesh Chodapuneedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sateesh Chodapuneedi updated CLOUDSTACK-9654:
-
Description: 
Currently, many versions of SUSE Linux do not have a hypervisor mapping 
entry in the guest_os_hypervisor table in the cloud database for VMware 6.0. 
We also observed that the guest_os_name field is incorrect for some SUSE 
Linux variants, which results in deployed SUSE Linux instances being set to 
guest OS type "Other (64-bit)" on vCenter, which does not represent the 
guest OS accurately on the hypervisor.
The current (4.9) list of SUSE Linux guest OS entries in the database is as below:
{noformat}
mysql> select id,display_name from guest_os where display_name like '%suse%';
+-----+----------------------------------------------+
| id  | display_name                                 |
+-----+----------------------------------------------+
|  40 | SUSE Linux Enterprise Server 9 SP4 (32-bit)  |
|  41 | SUSE Linux Enterprise Server 10 SP1 (32-bit) |
|  42 | SUSE Linux Enterprise Server 10 SP1 (64-bit) |
|  43 | SUSE Linux Enterprise Server 10 SP2 (32-bit) |
|  44 | SUSE Linux Enterprise Server 10 SP2 (64-bit) |
|  45 | SUSE Linux Enterprise Server 10 SP3 (64-bit) |
|  46 | SUSE Linux Enterprise Server 11 (32-bit)     |
|  47 | SUSE Linux Enterprise Server 11 (64-bit)     |
|  96 | SUSE Linux Enterprise 8(32-bit)              |
|  97 | SUSE Linux Enterprise 8(64-bit)              |
| 107 | SUSE Linux Enterprise 9(32-bit)              |
| 108 | SUSE Linux Enterprise 9(64-bit)              |
| 109 | SUSE Linux Enterprise 10(32-bit)             |
| 110 | SUSE Linux Enterprise 10(64-bit)             |
| 151 | SUSE Linux Enterprise Server 10 SP3 (32-bit) |
| 152 | SUSE Linux Enterprise Server 10 SP4 (64-bit) |
| 153 | SUSE Linux Enterprise Server 10 SP4 (32-bit) |
| 154 | SUSE Linux Enterprise Server 11 SP1 (64-bit) |
| 155 | SUSE Linux Enterprise Server 11 SP1 (32-bit) |
| 185 | SUSE Linux Enterprise Server 11 SP2 (64-bit) |
| 186 | SUSE Linux Enterprise Server 11 SP2 (32-bit) |
| 187 | SUSE Linux Enterprise Server 11 SP3 (64-bit) |
| 188 | SUSE Linux Enterprise Server 11 SP3 (32-bit) |
| 202 | Other SUSE Linux(32-bit)                     |
| 203 | Other SUSE Linux(64-bit)                     |
| 244 | SUSE Linux Enterprise Server 12 (64-bit)     |
+-----+----------------------------------------------+
26 rows in set (0.00 sec)
{noformat}
The current (4.9) hypervisor mappings for SUSE Linux guest OS on VMware 6.0 
in the database look as below. The query result below lists all hypervisor 
mappings for SUSE Linux guest OS on VMware 6.0; many of the guest OS entries 
from the previous query result are missing. Hence the need to add the 
missing hypervisor mappings.
{noformat}
mysql> select o.id,o.display_name, h.guest_os_name, h.hypervisor_version from 
guest_os as o, guest_os_hypervisor as h where o.id=h.guest_os_id and 
h.hypervisor_version='6.0' and h.hypervisor_type='vmware' and o.display_name 
like '%SUSE%';
+-----+----------------------------------------------+---------------+--------------------+
| id  | display_name                                 | guest_os_name | hypervisor_version |
+-----+----------------------------------------------+---------------+--------------------+
|  96 | SUSE Linux Enterprise 8(32-bit)              | suseGuest     | 6.0                |
|  97 | SUSE Linux Enterprise 8(64-bit)              | suse64Guest   | 6.0                |
| 107 | SUSE Linux Enterprise 9(32-bit)              | suseGuest     | 6.0                |
| 108 | SUSE Linux Enterprise 9(64-bit)              | suse64Guest   | 6.0                |
| 109 | SUSE Linux Enterprise 10(32-bit)             | suseGuest     | 6.0                |
| 110 | SUSE Linux Enterprise 10(64-bit)             | suse64Guest   | 6.0                |
| 202 | Other SUSE Linux(32-bit)                     | suseGuest     | 6.0                |
| 203 | Other SUSE Linux(64-bit)                     | suse64Guest   | 6.0                |
+-----+----------------------------------------------+---------------+--------------------+
8 rows in set (0.00 sec)
{noformat}
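A fix along these lines would insert the missing rows into guest_os_hypervisor. As a rough, hypothetical sketch only (the real CloudStack upgrade SQL also populates columns such as uuid and created, and the exact VMware guest identifier for each guest OS must be verified), one missing VMware 6.0 mapping might be added like this:

```sql
-- Hypothetical sketch: add a VMware 6.0 mapping for guest_os id 46,
-- "SUSE Linux Enterprise Server 11 (32-bit)". The guest_os_name value
-- 'sles11Guest' and the trimmed column list are assumptions for
-- illustration, not the actual upgrade script.
INSERT INTO guest_os_hypervisor (hypervisor_type, hypervisor_version, guest_os_name, guest_os_id)
VALUES ('VMware', '6.0', 'sles11Guest', 46);
```

One such INSERT would be needed per missing guest OS / hypervisor-version pair.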


  was:
Currently, many versions of SUSE Linux do not have a hypervisor mapping 
entry in the guest_os_hypervisor table in the cloud database for VMware 6.0. 
We also observed that the guest_os_name field is incorrect for some SUSE 
Linux variants, which results in deployed SUSE Linux instances being set to 
guest OS type "Other (64-bit)" on vCenter, which does not represent the 
guest OS accurately on the hypervisor.
The current (4.9) mappings in the database look as below:
{noformat}
mysql> select id,display_name from guest_os where display_name like '%suse%';
+-----+----------------------------------------------+
| id  | display_name                                 |
+-----+----------------------------------------------+
|  40 | SUSE Linux Enterprise Server 9 SP4 (32-bit)  |
|  41 | SUSE Linux Enterprise Server 10 SP1 (32-bit) |
|  42 | SUSE Linux Enterprise Server 10 SP1 (64-bit) |
|  43 | SUSE Linu

[jira] [Commented] (CLOUDSTACK-9339) Virtual Routers don't handle Multiple Public Interfaces

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724094#comment-15724094
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9339:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1659
  
Trillian test result (tid-591)
Environment: xenserver-65sp1 (x2), Advanced Networking with Mgmt server 6
Total time taken: 34361 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1659-t591-xenserver-65sp1.zip
Test completed. 43 look ok, 5 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_05_rvpc_multi_tiers | `Failure` | 507.47 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Failure` | 1382.51 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 565.36 | test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 772.68 | test_privategw_acl.py
ContextSuite context=TestSnapshotRootDisk>:teardown | `Error` | 57.32 | test_snapshots.py
test_router_dns_guestipquery | `Error` | 5.23 | test_router_dns.py
ContextSuite context=TestRouterDHCPOpts>:teardown | `Error` | 107.60 | test_router_dhcphosts.py
test_01_vpc_site2site_vpn | Success | 331.78 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 167.16 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 583.93 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 321.44 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 714.76 | test_vpc_router_nics.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 926.95 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 1054.20 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 20.79 | test_volumes.py
test_08_resize_volume | Success | 111.21 | test_volumes.py
test_07_resize_fail | Success | 121.33 | test_volumes.py
test_06_download_detached_volume | Success | 25.41 | test_volumes.py
test_05_detach_volume | Success | 100.29 | test_volumes.py
test_04_delete_attached_volume | Success | 10.25 | test_volumes.py
test_03_download_attached_volume | Success | 20.39 | test_volumes.py
test_02_attach_volume | Success | 10.74 | test_volumes.py
test_01_create_volume | Success | 387.57 | test_volumes.py
test_03_delete_vm_snapshots | Success | 280.32 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 224.58 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 130.87 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 243.93 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 27.06 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.27 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 66.30 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.14 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.19 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 20.28 | test_vm_life_cycle.py
test_02_start_vm | Success | 25.33 | test_vm_life_cycle.py
test_01_stop_vm | Success | 30.34 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 126.16 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.07 | test_templates.py
test_05_template_permissions | Success | 0.09 | test_templates.py
test_04_extract_template | Success | 5.19 | test_templates.py
test_03_delete_template | Success | 5.13 | test_templates.py
test_02_edit_template | Success | 90.20 | test_templates.py
test_01_create_template | Success | 60.70 | test_templates.py
test_10_destroy_cpvm | Success | 226.81 | test_ssvm.py
test_09_destroy_ssvm | Success | 234.24 | test_ssvm.py
test_08_reboot_cpvm | Success | 171.73 | test_ssvm.py
test_07_reboot_ssvm | Success | 184.10 | test_ssvm.py
test_06_stop_cpvm | Success | 166.78 | test_ssvm.py
test_05_stop_ssvm | Success | 174.09 | test_ssvm.py
test_04_cpvm_internals | Success | 1.15 | test_ssvm.py
test_03_ssvm_internals | Success | 3.68 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.15 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 16.73 | test_snapshots.py
test_04_change_offering_small | Success | 129.25 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.05 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.11 | test_service_offerings.py
test_01_create_service_offering 

[jira] [Updated] (CLOUDSTACK-9654) Incorrect hypervisor mapping of various SUSE Linux guest os versions on VMware

2016-12-05 Thread Sateesh Chodapuneedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sateesh Chodapuneedi updated CLOUDSTACK-9654:
-
Description: 
Currently, many versions of SUSE Linux do not have a hypervisor mapping 
entry in the guest_os_hypervisor table in the cloud database for VMware 6.0. 
We also observed that the guest_os_name field is incorrect for some SUSE 
Linux variants, which results in deployed SUSE Linux instances being set to 
guest OS type "Other (64-bit)" on vCenter, which does not represent the 
guest OS accurately on the hypervisor.
The current (4.9) list of SUSE Linux guest OS entries in the database is as below:
{noformat}
mysql> select id,display_name from guest_os where display_name like '%suse%';
+-----+----------------------------------------------+
| id  | display_name                                 |
+-----+----------------------------------------------+
|  40 | SUSE Linux Enterprise Server 9 SP4 (32-bit)  |
|  41 | SUSE Linux Enterprise Server 10 SP1 (32-bit) |
|  42 | SUSE Linux Enterprise Server 10 SP1 (64-bit) |
|  43 | SUSE Linux Enterprise Server 10 SP2 (32-bit) |
|  44 | SUSE Linux Enterprise Server 10 SP2 (64-bit) |
|  45 | SUSE Linux Enterprise Server 10 SP3 (64-bit) |
|  46 | SUSE Linux Enterprise Server 11 (32-bit)     |
|  47 | SUSE Linux Enterprise Server 11 (64-bit)     |
|  96 | SUSE Linux Enterprise 8(32-bit)              |
|  97 | SUSE Linux Enterprise 8(64-bit)              |
| 107 | SUSE Linux Enterprise 9(32-bit)              |
| 108 | SUSE Linux Enterprise 9(64-bit)              |
| 109 | SUSE Linux Enterprise 10(32-bit)             |
| 110 | SUSE Linux Enterprise 10(64-bit)             |
| 151 | SUSE Linux Enterprise Server 10 SP3 (32-bit) |
| 152 | SUSE Linux Enterprise Server 10 SP4 (64-bit) |
| 153 | SUSE Linux Enterprise Server 10 SP4 (32-bit) |
| 154 | SUSE Linux Enterprise Server 11 SP1 (64-bit) |
| 155 | SUSE Linux Enterprise Server 11 SP1 (32-bit) |
| 185 | SUSE Linux Enterprise Server 11 SP2 (64-bit) |
| 186 | SUSE Linux Enterprise Server 11 SP2 (32-bit) |
| 187 | SUSE Linux Enterprise Server 11 SP3 (64-bit) |
| 188 | SUSE Linux Enterprise Server 11 SP3 (32-bit) |
| 202 | Other SUSE Linux(32-bit)                     |
| 203 | Other SUSE Linux(64-bit)                     |
| 244 | SUSE Linux Enterprise Server 12 (64-bit)     |
+-----+----------------------------------------------+
26 rows in set (0.00 sec)
{noformat}
The current (4.9) hypervisor mappings for SUSE Linux guest OS on VMware 6.0 
in the database look as below. The query result below lists all hypervisor 
mappings for SUSE Linux guest OS on VMware 6.0; many of the guest OS entries 
listed in the previous query result are missing their mappings for VMware 
6.0. Hence the need to add the missing hypervisor mappings.
{noformat}
mysql> select o.id,o.display_name, h.guest_os_name, h.hypervisor_version from 
guest_os as o, guest_os_hypervisor as h where o.id=h.guest_os_id and 
h.hypervisor_version='6.0' and h.hypervisor_type='vmware' and o.display_name 
like '%SUSE%';
+-----+----------------------------------------------+---------------+--------------------+
| id  | display_name                                 | guest_os_name | hypervisor_version |
+-----+----------------------------------------------+---------------+--------------------+
|  96 | SUSE Linux Enterprise 8(32-bit)              | suseGuest     | 6.0                |
|  97 | SUSE Linux Enterprise 8(64-bit)              | suse64Guest   | 6.0                |
| 107 | SUSE Linux Enterprise 9(32-bit)              | suseGuest     | 6.0                |
| 108 | SUSE Linux Enterprise 9(64-bit)              | suse64Guest   | 6.0                |
| 109 | SUSE Linux Enterprise 10(32-bit)             | suseGuest     | 6.0                |
| 110 | SUSE Linux Enterprise 10(64-bit)             | suse64Guest   | 6.0                |
| 202 | Other SUSE Linux(32-bit)                     | suseGuest     | 6.0                |
| 203 | Other SUSE Linux(64-bit)                     | suse64Guest   | 6.0                |
+-----+----------------------------------------------+---------------+--------------------+
8 rows in set (0.00 sec)
{noformat}
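The set of guest OS entries still lacking a VMware 6.0 mapping can be enumerated directly with an anti-join over the two tables queried above; a minimal sketch, assuming only the columns already shown in those queries:

```sql
-- List SUSE guest OS entries that have no VMware 6.0 mapping yet.
SELECT o.id, o.display_name
FROM guest_os o
LEFT JOIN guest_os_hypervisor h
       ON h.guest_os_id = o.id
      AND h.hypervisor_type = 'vmware'
      AND h.hypervisor_version = '6.0'
WHERE o.display_name LIKE '%SUSE%'
  AND h.guest_os_id IS NULL;
```

Against the data shown above this should return the 18 SUSE rows that appear in the first query but not in the second.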


  was:
Currently, many versions of SUSE Linux do not have a hypervisor mapping 
entry in the guest_os_hypervisor table in the cloud database for VMware 6.0. 
We also observed that the guest_os_name field is incorrect for some SUSE 
Linux variants, which results in deployed SUSE Linux instances being set to 
guest OS type "Other (64-bit)" on vCenter, which does not represent the 
guest OS accurately on the hypervisor.
The current (4.9) list of SUSE Linux guest OS entries in the database is as below:
{noformat}
mysql> select id,display_name from guest_os where display_name like '%suse%';
+-----+----------------------------------------------+
| id  | display_name                                 |
+-----+----------------------------------------------+
|  40 | SUSE Linux Enterprise Server 9 SP4 (32-bit)  |
|  41 | SUSE Linux Enterprise Server 10 SP1 (32-bit) |
|  42 | SUSE Linux Enterpris

[jira] [Updated] (CLOUDSTACK-9654) Incorrect & missing hypervisor mapping of various SUSE Linux guest os versions on VMware 6.0

2016-12-05 Thread Sateesh Chodapuneedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sateesh Chodapuneedi updated CLOUDSTACK-9654:
-
Summary: Incorrect & missing hypervisor mapping of various SUSE Linux guest 
os versions on VMware 6.0  (was: Incorrect hypervisor mapping of various SUSE 
Linux guest os versions on VMware)

> Incorrect & missing hypervisor mapping of various SUSE Linux guest os 
> versions on VMware 6.0
> 
>
> Key: CLOUDSTACK-9654
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9654
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.9.0
> Environment: ACS 4.9
> VMware 5.1
>Reporter: Sateesh Chodapuneedi
>Assignee: Sateesh Chodapuneedi
> Fix For: 4.9.1.0
>
>
> Currently, many versions of SUSE Linux do not have a hypervisor mapping 
> entry in the guest_os_hypervisor table in the cloud database for VMware 
> 6.0. We also observed that the guest_os_name field is incorrect for some 
> SUSE Linux variants, which results in deployed SUSE Linux instances being 
> set to guest OS type "Other (64-bit)" on vCenter, which does not represent 
> the guest OS accurately on the hypervisor.
> The current (4.9) list of SUSE Linux guest OS entries in the database is as below:
> {noformat}
> mysql> select id,display_name from guest_os where display_name like '%suse%';
> +-----+----------------------------------------------+
> | id  | display_name                                 |
> +-----+----------------------------------------------+
> |  40 | SUSE Linux Enterprise Server 9 SP4 (32-bit)  |
> |  41 | SUSE Linux Enterprise Server 10 SP1 (32-bit) |
> |  42 | SUSE Linux Enterprise Server 10 SP1 (64-bit) |
> |  43 | SUSE Linux Enterprise Server 10 SP2 (32-bit) |
> |  44 | SUSE Linux Enterprise Server 10 SP2 (64-bit) |
> |  45 | SUSE Linux Enterprise Server 10 SP3 (64-bit) |
> |  46 | SUSE Linux Enterprise Server 11 (32-bit)     |
> |  47 | SUSE Linux Enterprise Server 11 (64-bit)     |
> |  96 | SUSE Linux Enterprise 8(32-bit)              |
> |  97 | SUSE Linux Enterprise 8(64-bit)              |
> | 107 | SUSE Linux Enterprise 9(32-bit)              |
> | 108 | SUSE Linux Enterprise 9(64-bit)              |
> | 109 | SUSE Linux Enterprise 10(32-bit)             |
> | 110 | SUSE Linux Enterprise 10(64-bit)             |
> | 151 | SUSE Linux Enterprise Server 10 SP3 (32-bit) |
> | 152 | SUSE Linux Enterprise Server 10 SP4 (64-bit) |
> | 153 | SUSE Linux Enterprise Server 10 SP4 (32-bit) |
> | 154 | SUSE Linux Enterprise Server 11 SP1 (64-bit) |
> | 155 | SUSE Linux Enterprise Server 11 SP1 (32-bit) |
> | 185 | SUSE Linux Enterprise Server 11 SP2 (64-bit) |
> | 186 | SUSE Linux Enterprise Server 11 SP2 (32-bit) |
> | 187 | SUSE Linux Enterprise Server 11 SP3 (64-bit) |
> | 188 | SUSE Linux Enterprise Server 11 SP3 (32-bit) |
> | 202 | Other SUSE Linux(32-bit)                     |
> | 203 | Other SUSE Linux(64-bit)                     |
> | 244 | SUSE Linux Enterprise Server 12 (64-bit)     |
> +-----+----------------------------------------------+
> 26 rows in set (0.00 sec)
> {noformat}
> The current (4.9) hypervisor mappings for SUSE Linux guest OS on VMware 
> 6.0 in the database look as below. The query result below lists all 
> hypervisor mappings for SUSE Linux guest OS on VMware 6.0; many of the 
> guest OS entries listed in the previous query result are missing their 
> mappings for VMware 6.0. Hence the need to add the missing hypervisor mappings.
> {noformat}
> mysql> select o.id,o.display_name, h.guest_os_name, h.hypervisor_version from 
> guest_os as o, guest_os_hypervisor as h where o.id=h.guest_os_id and 
> h.hypervisor_version='6.0' and h.hypervisor_type='vmware' and o.display_name 
> like '%SUSE%';
> +-----+----------------------------------------------+---------------+--------------------+
> | id  | display_name                                 | guest_os_name | hypervisor_version |
> +-----+----------------------------------------------+---------------+--------------------+
> |  96 | SUSE Linux Enterprise 8(32-bit)              | suseGuest     | 6.0                |
> |  97 | SUSE Linux Enterprise 8(64-bit)              | suse64Guest   | 6.0                |
> | 107 | SUSE Linux Enterprise 9(32-bit)              | suseGuest     | 6.0                |
> | 108 | SUSE Linux Enterprise 9(64-bit)              | suse64Guest   | 6.0                |
> | 109 | SUSE Linux Enterprise 10(32-bit)             | suseGuest     | 6.0                |
> | 110 | SUSE Linux Enterprise 10(64-bit)             | suse64Guest   | 6.0                |
> | 202 | Other SUSE Linux(32-bit)                     | suseGuest     | 6.0                |
> | 203 | Other SUSE Linux(64-bit)                     | suse64Guest   

[jira] [Commented] (CLOUDSTACK-9619) Fixes for PR 1600

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724185#comment-15724185
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9619:


Github user mike-tutkowski commented on the issue:

https://github.com/apache/cloudstack/pull/1749
  
@rhtyd Are we having trouble with Travis? I've amended my SHA a few times 
and get different Travis failures. Thanks!


> Fixes for PR 1600
> -
>
> Key: CLOUDSTACK-9619
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9619
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.10.0.0
> Environment: All
>Reporter: Mike Tutkowski
> Fix For: 4.10.0.0
>
>
> In StorageSystemDataMotionStrategy.performCopyOfVdi we call 
> getSnapshotDetails. In one such scenario, the source snapshot in question is 
> coming from secondary storage (when we are creating a new volume on managed 
> storage from a snapshot of ours that’s on secondary storage).
> This usually “worked” in the regression tests due to a bit of "luck": We 
> retrieve the ID of the snapshot (which is on secondary storage) and then try 
> to pull out its StorageVO object (which is for primary storage). If you 
> happen to have a primary storage that matches the ID (which is the ID of a 
> secondary storage), then getSnapshotDetails populates its Map 
> with inapplicable data (that is later ignored) and you don’t easily see a 
> problem. However, if you don’t have a primary storage that matches that ID 
> (which I didn’t today because I had removed that primary storage), then a 
> NullPointerException is thrown.
> I have fixed that issue by skipping getSnapshotDetails if the source is 
> coming from secondary storage.
> While fixing that, I noticed a couple more problems:
>   We can invoke grantAccess on a snapshot that’s actually on secondary 
> storage (this doesn’t amount to much because the VolumeServiceImpl ignores 
> the call when it’s not for a primary-storage driver).
>   We can invoke revokeAccess on a snapshot that’s actually on secondary 
> storage (this doesn’t amount to much because the VolumeServiceImpl ignores 
> the call when it’s not for a primary-storage driver).
> I have corrected those issues, as well.
> I then came across one more problem:
> · When using a SAN snapshot and copying it to secondary storage or creating a 
> new managed-storage volume from a snapshot of ours on secondary storage, we 
> attach to the SR in the XenServer code, but detach from it in the 
> StorageSystemDataMotionStrategy code (by sending a message to the XenServer 
> code to perform an SR detach). Since we know to detach from the SR after the 
> copy is done, we should detach from the SR in the XenServer code (without 
> that code having to be explicitly called from outside of the XenServer logic).
> I went ahead and changed that, as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9633) test_snapshot is failing due to incorrect string construction in utils.py

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724215#comment-15724215
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9633:


Github user abhinandanprateek commented on the issue:

https://github.com/apache/cloudstack/pull/1807
  
@syed can you look at this 'revert'? I guess this is going to be an issue 
with the managed storage change that has been put in. cc @rhtyd 


> test_snapshot is failing due to incorrect string construction in utils.py
> -
>
> Key: CLOUDSTACK-9633
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9633
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.10.0.0
> Environment: https://github.com/apache/cloudstack/pull/1800
>Reporter: Boris Stoyanov
> Fix For: 4.10.0.0
>
>
> When searching for the snapshot vhd on the nfs storage it adds 
> ([name].vhd.vhd) I've removed the extension for xenserver and it passed. 





[jira] [Commented] (CLOUDSTACK-9633) test_snapshot is failing due to incorrect string construction in utils.py

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724394#comment-15724394
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9633:


Github user abhinandanprateek commented on the issue:

https://github.com/apache/cloudstack/pull/1807
  
@syed yes, I assume that the PR in itself is good. You need to consider the 
broken test and the broken upgrade, where upgrading users will have paths 
without the extension.


> test_snapshot is failing due to incorrect string construction in utils.py
> -
>
> Key: CLOUDSTACK-9633
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9633
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.10.0.0
> Environment: https://github.com/apache/cloudstack/pull/1800
>Reporter: Boris Stoyanov
> Fix For: 4.10.0.0
>
>
> When searching for the snapshot vhd on the nfs storage it adds 
> ([name].vhd.vhd) I've removed the extension for xenserver and it passed. 





[jira] [Commented] (CLOUDSTACK-9633) test_snapshot is failing due to incorrect string construction in utils.py

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724414#comment-15724414
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9633:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1807
  
@abhinandanprateek @syed I'm seeing known intermittent failures with vpc/vr 
related tests, can I get LGTM on this? thanks.


> test_snapshot is failing due to incorrect string construction in utils.py
> -
>
> Key: CLOUDSTACK-9633
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9633
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: marvin
>Affects Versions: 4.10.0.0
> Environment: https://github.com/apache/cloudstack/pull/1800
>Reporter: Boris Stoyanov
> Fix For: 4.10.0.0
>
>
> When searching for the snapshot vhd on the nfs storage it adds 
> ([name].vhd.vhd) I've removed the extension for xenserver and it passed. 






[jira] [Updated] (CLOUDSTACK-9654) Missing hypervisor mapping of various SUSE Linux guest os versions on VMware 6.0

2016-12-05 Thread Sateesh Chodapuneedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sateesh Chodapuneedi updated CLOUDSTACK-9654:
-
Summary: Missing hypervisor mapping of various SUSE Linux guest os versions 
on VMware 6.0  (was: Incorrect & missing hypervisor mapping of various SUSE 
Linux guest os versions on VMware 6.0)

> Missing hypervisor mapping of various SUSE Linux guest os versions on VMware 
> 6.0
> 
>
> Key: CLOUDSTACK-9654
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9654
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.9.0
> Environment: ACS 4.9
> VMware 5.1
>Reporter: Sateesh Chodapuneedi
>Assignee: Sateesh Chodapuneedi
> Fix For: 4.9.1.0
>
>
> Currently many versions of SUSE Linux do not have a hypervisor mapping 
> entry in the guest_os_hypervisor table in the cloud database for VMware 6.0. 
> It was also observed that the guest_os_name field is incorrect for some SUSE 
> Linux variants, which results in a deployed SUSE Linux instance being set to 
> guest OS type "Other (64-bit)" on vCenter, which does not represent the 
> guest OS accurately on the hypervisor.
> The current (4.9) list of SUSE Linux guest OS entries in the database is 
> shown below:
> {noformat}
> mysql> select id,display_name from guest_os where display_name like '%suse%';
> +-----+----------------------------------------------+
> | id  | display_name                                 |
> +-----+----------------------------------------------+
> |  40 | SUSE Linux Enterprise Server 9 SP4 (32-bit)  |
> |  41 | SUSE Linux Enterprise Server 10 SP1 (32-bit) |
> |  42 | SUSE Linux Enterprise Server 10 SP1 (64-bit) |
> |  43 | SUSE Linux Enterprise Server 10 SP2 (32-bit) |
> |  44 | SUSE Linux Enterprise Server 10 SP2 (64-bit) |
> |  45 | SUSE Linux Enterprise Server 10 SP3 (64-bit) |
> |  46 | SUSE Linux Enterprise Server 11 (32-bit)     |
> |  47 | SUSE Linux Enterprise Server 11 (64-bit)     |
> |  96 | SUSE Linux Enterprise 8(32-bit)              |
> |  97 | SUSE Linux Enterprise 8(64-bit)              |
> | 107 | SUSE Linux Enterprise 9(32-bit)              |
> | 108 | SUSE Linux Enterprise 9(64-bit)              |
> | 109 | SUSE Linux Enterprise 10(32-bit)             |
> | 110 | SUSE Linux Enterprise 10(64-bit)             |
> | 151 | SUSE Linux Enterprise Server 10 SP3 (32-bit) |
> | 152 | SUSE Linux Enterprise Server 10 SP4 (64-bit) |
> | 153 | SUSE Linux Enterprise Server 10 SP4 (32-bit) |
> | 154 | SUSE Linux Enterprise Server 11 SP1 (64-bit) |
> | 155 | SUSE Linux Enterprise Server 11 SP1 (32-bit) |
> | 185 | SUSE Linux Enterprise Server 11 SP2 (64-bit) |
> | 186 | SUSE Linux Enterprise Server 11 SP2 (32-bit) |
> | 187 | SUSE Linux Enterprise Server 11 SP3 (64-bit) |
> | 188 | SUSE Linux Enterprise Server 11 SP3 (32-bit) |
> | 202 | Other SUSE Linux(32-bit)                     |
> | 203 | Other SUSE Linux(64-bit)                     |
> | 244 | SUSE Linux Enterprise Server 12 (64-bit)     |
> +-----+----------------------------------------------+
> 26 rows in set (0.00 sec)
> {noformat}
> The current (4.9) hypervisor mappings for SUSE Linux guest OS over VMware 
> 6.0 are shown below. As the query result lists all hypervisor mappings for 
> SUSE Linux guest OS over VMware 6.0, we can observe that many guest OS 
> entries from the previous query are missing mappings for VMware 6.0; hence 
> the need to add the missing hypervisor mappings.
> {noformat}
> mysql> select o.id,o.display_name, h.guest_os_name, h.hypervisor_version from 
> guest_os as o, guest_os_hypervisor as h where o.id=h.guest_os_id and 
> h.hypervisor_version='6.0' and h.hypervisor_type='vmware' and o.display_name 
> like '%SUSE%';
> +-----+----------------------------------+---------------+--------------------+
> | id  | display_name                     | guest_os_name | hypervisor_version |
> +-----+----------------------------------+---------------+--------------------+
> |  96 | SUSE Linux Enterprise 8(32-bit)  | suseGuest     | 6.0                |
> |  97 | SUSE Linux Enterprise 8(64-bit)  | suse64Guest   | 6.0                |
> | 107 | SUSE Linux Enterprise 9(32-bit)  | suseGuest     | 6.0                |
> | 108 | SUSE Linux Enterprise 9(64-bit)  | suse64Guest   | 6.0                |
> | 109 | SUSE Linux Enterprise 10(32-bit) | suseGuest     | 6.0                |
> | 110 | SUSE Linux Enterprise 10(64-bit) | suse64Guest   | 6.0                |
> | 202 | Other SUSE Linux(32-bit)         | suseGuest     | 6.0                |
> | 203 | Other SUSE Linux(64-bit)         | suse64Guest   | 6.0                |
> +-----+----------------------------------+---------------+--------------------+
> 8 rows in set (0.00 sec)
> {noformat}

[jira] [Commented] (CLOUDSTACK-9654) Missing hypervisor mapping of various SUSE Linux guest os versions on VMware 6.0

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724426#comment-15724426
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9654:


GitHub user sateesh-chodapuneedi opened a pull request:

https://github.com/apache/cloudstack/pull/1817

CLOUDSTACK-9654 Missing hypervisor mapping of various SUSE Linux gues…

…t os versions on VMware 6.0

Issue: Currently many versions of SUSE Linux do not have a hypervisor 
mapping entry in the guest_os_hypervisor table in the cloud database for VMware 
6.0. It was also observed that the guest_os_name field is incorrect for some 
SUSE Linux variants, which results in a deployed SUSE Linux instance being set 
to guest OS type "Other (64-bit)" on vCenter, which does not represent the 
guest OS accurately on the hypervisor.

Fix: Add the missing hypervisor mappings
Signed-off-by: Sateesh Chodapuneedi 

The current (4.9) list of SUSE Linux guest OS entries in the database is shown below:

> mysql> select id,display_name from guest_os where display_name like 
'%suse%';
> +-----+----------------------------------------------+
> | id  | display_name                                 |
> +-----+----------------------------------------------+
> |  40 | SUSE Linux Enterprise Server 9 SP4 (32-bit)  |
> |  41 | SUSE Linux Enterprise Server 10 SP1 (32-bit) |
> |  42 | SUSE Linux Enterprise Server 10 SP1 (64-bit) |
> |  43 | SUSE Linux Enterprise Server 10 SP2 (32-bit) |
> |  44 | SUSE Linux Enterprise Server 10 SP2 (64-bit) |
> |  45 | SUSE Linux Enterprise Server 10 SP3 (64-bit) |
> |  46 | SUSE Linux Enterprise Server 11 (32-bit)     |
> |  47 | SUSE Linux Enterprise Server 11 (64-bit)     |
> |  96 | SUSE Linux Enterprise 8(32-bit)              |
> |  97 | SUSE Linux Enterprise 8(64-bit)              |
> | 107 | SUSE Linux Enterprise 9(32-bit)              |
> | 108 | SUSE Linux Enterprise 9(64-bit)              |
> | 109 | SUSE Linux Enterprise 10(32-bit)             |
> | 110 | SUSE Linux Enterprise 10(64-bit)             |
> | 151 | SUSE Linux Enterprise Server 10 SP3 (32-bit) |
> | 152 | SUSE Linux Enterprise Server 10 SP4 (64-bit) |
> | 153 | SUSE Linux Enterprise Server 10 SP4 (32-bit) |
> | 154 | SUSE Linux Enterprise Server 11 SP1 (64-bit) |
> | 155 | SUSE Linux Enterprise Server 11 SP1 (32-bit) |
> | 185 | SUSE Linux Enterprise Server 11 SP2 (64-bit) |
> | 186 | SUSE Linux Enterprise Server 11 SP2 (32-bit) |
> | 187 | SUSE Linux Enterprise Server 11 SP3 (64-bit) |
> | 188 | SUSE Linux Enterprise Server 11 SP3 (32-bit) |
> | 202 | Other SUSE Linux(32-bit)                     |
> | 203 | Other SUSE Linux(64-bit)                     |
> | 244 | SUSE Linux Enterprise Server 12 (64-bit)     |
> +-----+----------------------------------------------+
> 26 rows in set (0.00 sec)

The current (4.9) hypervisor mappings for SUSE Linux guest OS over VMware 
6.0 are shown below. As the query result lists all hypervisor mappings for 
SUSE Linux guest OS over VMware 6.0, we can observe that many guest OS entries 
from the previous query are missing mappings for VMware 6.0; hence the need to 
add the missing hypervisor mappings.

```
mysql> select o.id,o.display_name, h.guest_os_name, h.hypervisor_version 
from guest_os as o, guest_os_hypervisor as h where o.id=h.guest_os_id and 
h.hypervisor_version='6.0' and h.hypervisor_type='vmware' and o.display_name 
like '%SUSE%';

+-----+----------------------------------+---------------+--------------------+
| id  | display_name                     | guest_os_name | hypervisor_version |
+-----+----------------------------------+---------------+--------------------+
|  96 | SUSE Linux Enterprise 8(32-bit)  | suseGuest     | 6.0                |
|  97 | SUSE Linux Enterprise 8(64-bit)  | suse64Guest   | 6.0                |
| 107 | SUSE Linux Enterprise 9(32-bit)  | suseGuest     | 6.0                |
| 108 | SUSE Linux Enterprise 9(64-bit)  | suse64Guest   | 6.0                |
| 109 | SUSE Linux Enterprise 10(32-bit) | suseGuest     | 6.0                |
| 110 | SUSE Linux Enterprise 10(64-bit) | suse64Guest   | 6.0                |
| 202 | Other SUSE Linux(32-bit)         | suseGuest     | 6.0                |
| 203 | Other SUSE Linux(64-bit)         | suse64Guest   | 6.0                |
+-----+----------------------------------+---------------+--------------------+
8 rows in set (0.00 sec)

```
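
For reference, the gap between the two query results above can be computed 
directly; the id sets below are transcribed from that output:

```python
# Guest OS ids from the first query (all SUSE Linux entries, 26 rows).
all_suse_ids = {
    40, 41, 42, 43, 44, 45, 46, 47, 96, 97, 107, 108, 109, 110,
    151, 152, 153, 154, 155, 185, 186, 187, 188, 202, 203, 244,
}
# Guest OS ids that already have a VMware 6.0 mapping (second query, 8 rows).
mapped_ids = {96, 97, 107, 108, 109, 110, 202, 203}

# The 18 guest OS entries that still need a guest_os_hypervisor row for 6.0.
missing_ids = sorted(all_suse_ids - mapped_ids)
print(len(missing_ids), missing_ids)
```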


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sateesh-chodapuneedi/cloudstack 
pr-cloudstack-9654

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloud

[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724458#comment-15724458
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


GitHub user rhtyd reopened a pull request:

https://github.com/apache/cloudstack/pull/1816

CLOUDSTACK-9564: Fix NPE due to intermittent test assertion

The test assertion on a pool object may return a null object, as objects
can be randomly expired/tombstoned. This will fix a NPE sometimes seen due
to recently merge for the fix for CLOUDSTACK-9564.

(we can merge this if Travis passes)

/cc @abhinandanprateek @murali-reddy 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shapeblue/cloudstack 4.9-fix-npe-vmware

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1816.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1816


commit dcbf3c8689ed3eaed8653763ec27d2907671c72b
Author: Rohit Yadav 
Date:   2016-12-05T11:15:33Z

CLOUDSTACK-9564: Fix NPE due to intermittent test assertion

The test assertion on a pool object may return a null object, as objects
can be randomly expired/tombstoned. This will fix a NPE sometimes seen due
to recently merge for the fix for CLOUDSTACK-9564.

Signed-off-by: Rohit Yadav 




> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to memory leak was in VmwareContextPool where a registry is held 
> (arraylist) that grows indefinitely. The list itself is not used anywhere or 
> consumed. There exists a hashmap (pool) that returns a list of contexts for 
> existing poolkey (address/username) that is used instead. The fix would be to 
> get rid of the registry and limit the hashmap context list length for any 
> poolkey.
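
The fix idea described above (drop the unused registry, cap the per-key context 
list) can be sketched in a few lines. This is an illustrative Python analogue 
of that design, not the actual Java code in VmwareContextPool:

```python
from collections import defaultdict, deque

class BoundedContextPool:
    """Illustrative analogue of the proposed fix: keep only the per-key
    pool, and cap each context list so it cannot grow without bound."""

    def __init__(self, max_per_key=10):
        # Hypothetical default cap; the real limit is a design choice.
        self._pool = defaultdict(lambda: deque(maxlen=max_per_key))

    def register(self, poolkey, context):
        # A bounded deque evicts the oldest context instead of growing
        # indefinitely, unlike the removed registry arraylist.
        self._pool[poolkey].append(context)

    def get(self, poolkey):
        contexts = self._pool.get(poolkey)
        return contexts.pop() if contexts else None
```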





[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724457#comment-15724457
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user rhtyd closed the pull request at:

https://github.com/apache/cloudstack/pull/1816


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to memory leak was in VmwareContextPool where a registry is held 
> (arraylist) that grows indefinitely. The list itself is not used anywhere or 
> consumed. There exists a hashmap (pool) that returns a list of contexts for 
> existing poolkey (address/username) that is used instead. The fix would be to 
> get rid of the registry and limit the hashmap context list length for any 
> poolkey.





[jira] [Commented] (CLOUDSTACK-9339) Virtual Routers don't handle Multiple Public Interfaces

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724472#comment-15724472
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9339:


Github user abhinandanprateek commented on the issue:

https://github.com/apache/cloudstack/pull/1659
  
LGTM on code review and testing @murali-reddy @rhtyd 



> Virtual Routers don't handle Multiple Public Interfaces
> ---
>
> Key: CLOUDSTACK-9339
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9339
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
>Reporter: dsclose
>Assignee: Murali Reddy
>  Labels: firewall, nat, router
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> There are a series of issues with the way Virtual Routers manage multiple 
> public interfaces. These are more pronounced on redundant virtual router 
> setups. I have not attempted to examine these issues in a VPC context. 
> Outside of a VPC context, however, the following is expected behaviour:
> * eth0 connects the router to the guest network.
> * In RvR setups, keepalived manages the guests' gateway IP as a virtual IP on 
> eth0.
> * eth1 provides a local link to the hypervisor, allowing Cloudstack to issue 
> commands to the router.
> * eth2 is the routers public interface. By default, a single public IP will 
> be setup on eth2 along with the necessary iptables and ip rules to source-NAT 
> guest traffic to that public IP.
> * When a public IP address is assigned to the router that is on a separate 
> subnet to the source-NAT IP, a new interface is configured, such as eth3, and 
> the IP is assigned to that interface.
> * This can result in eth3, eth4, eth5, etc. being created depending upon how 
> many public subnets the router has to work with.
> The above all works. The following, however, is currently not working:
> * Public interfaces should be set to DOWN on backup redundant routers. The 
> master.py script is responsible for setting public interfaces to UP during a 
> keepalived transition. Currently the check_is_up method of the CsIP class 
> brings all interfaces UP on both RvR. A proposed fix for this has been 
> discussed on the mailing list. That fix will leave public interfaces DOWN on 
> RvR allowing the keepalived transition to control the state of public 
> interfaces. Issue #1413 includes a commit that contradicts the proposed fix 
> so it is unclear what the current state of the code should be.
> * Newly created interfaces should be set to UP on master redundant routers. 
> Assuming public interfaces should be default be DOWN on an RvR we need to 
> accommodate the fact that, as interfaces are created, no keepalived 
> transition occurs. This means that assigning an IP from a new public subnet 
> will have no effect (as the interface will be down) until the network is 
> restarted with a "clean up."
> * Public interfaces other than eth2 do not forward traffic. There are two 
> iptables rules in the FORWARD chain of the filter table created for eth2 that 
> allow forwarding between eth2 and eth0. Equivalent rules are not created for 
> other public interfaces so forwarded traffic is dropped.
> * Outbound traffic from guest VMs does not honour static-NAT rules. Instead, 
> outbound traffic is source-NAT'd to the networks default source-NAT IP. New 
> connections from guests that are destined for public networks are processed 
> like so:
> 1. Traffic is matched against the following rule in the mangle table that 
> marks the connection with a 0x0:
> *mangle
> -A PREROUTING -i eth0 -m state --state NEW -j CONNMARK --set-xmark 
> 0x0/0x
> 2. There are no "ip rule" statements that match a connection marked 0x0, so 
> the kernel routes the connection via the default gateway. That gateway is on 
> source-NAT subnet, so the connection is routed out of eth2.
> 3. The following iptables rules are then matched in the filter table:
> *filter
> -A FORWARD -i eth0 -o eth2 -j FW_OUTBOUND
> -A FW_OUTBOUND -j FW_EGRESS_RULES
> -A FW_EGRESS_RULES -j ACCEPT
> 4. Finally, the following rule is matched from the nat table, where the IP 
> address is the source-NAT IP:
> *nat
> -A POSTROUTING -o eth2 -j SNAT --to-source 123.4.5.67
>  





[jira] [Commented] (CLOUDSTACK-9403) Nuage VSP Plugin : Support for SharedNetwork fuctionality including Marvin test coverage

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724484#comment-15724484
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9403:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1579
  
Test LGTM. 


> Nuage VSP Plugin : Support for SharedNetwork fuctionality including Marvin 
> test coverage
> 
>
> Key: CLOUDSTACK-9403
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9403
> Project: CloudStack
>  Issue Type: Task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation, Network Controller
>Reporter: Rahul Singal
>Assignee: Nick Livens
>
> This is the first phase of Shared Network support in CloudStack through the 
> NuageVsp network plugin. A shared network is a type of virtual network that 
> is shared between multiple accounts, i.e. a shared network can be accessed by 
> virtual machines that belong to many different accounts. This basic 
> functionality will be supported for the below common use cases:
> - A shared network can be used for monitoring purposes: it can be 
> assigned to a domain and used for monitoring VMs belonging to all 
> accounts in that domain.
> - Public accessibility of shared networks.
> With the current implementation, the NuageVsp plugin supports overlapping 
> IP addresses, public access, and adding IP ranges in shared networks.
> In VSD, it is implemented in the below manner:
> - In order to have tenant isolation for shared networks, we create a 
> Shared L3 Subnet for each shared network and instantiate it across the 
> relevant enterprises. A shared network will only exist under an enterprise 
> when it is needed, i.e. when the first VM is spun up under that ACS domain 
> inside that shared network.
> - For a public shared network it will also create a floating IP subnet pool 
> in VSD, along with everything mentioned in the above point.
> PR contents:
> 1) Support for shared networks with tenant isolation on master with the 
> Nuage VSP SDN plugin.
> 2) Support for shared networks with publicly accessible IP ranges.
> 3) Marvin test coverage for shared networks on master with the Nuage VSP SDN 
> plugin.
> 4) Enhancements to our existing Marvin test code (nuagevsp plugins directory).
> 5) PEP8 & PyFlakes compliance of our Marvin test code.
> Test Results are:-
> Valiate that ROOT admin is NOT able to deploy a VM for a user in ROOT domain 
> in a shared network with ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_ROOTuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is NOT able to deploy a VM for a admin user in a 
> shared network with ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_differentdomain | 
> Status : SUCCESS ===
> ok
> Valiate that ROOT admin is NOT able to deploy a VM for admin user in the same 
> domain but in a ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_domainadminuser | 
> Status : SUCCESS ===
> ok
> Valiate that ROOT admin is NOT able to deploy a VM for user in the same 
> domain but in a different ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_domainuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is able to deploy a VM for regular user in a shared 
> network with scope=account ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_user | Status : SUCCESS 
> ===
> ok
> Valiate that ROOT admin is able to deploy a VM for user in ROOT domain in a 
> shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_ROOTuser | Status : SUCCESS 
> ===
> ok
> Valiate that ROOT admin is able to deploy a VM for a domain admin users in a 
> shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_domainadminuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is able to deploy a VM for other users in a shared 
> network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_domainuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is able to deploy a VM for admin user in a domain in 
> a shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_subdomainadminuser | Status 
> : SUCCESS ===
> ok
> Valiate that ROOT admin is able to deploy a VM for any user in a subdomain in 
> a shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_subdomainuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is NOT able to deploy a VM for parent domain admin 
> user in a share

[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724487#comment-15724487
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9564:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1816
  
LGTM, merging this on last TravisCI's first job result. Only the first job 
runs/builds with unit tests.


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to memory leak was in VmwareContextPool where a registry is held 
> (arraylist) that grows indefinitely. The list itself is not used anywhere or 
> consumed. There exists a hashmap (pool) that returns a list of contexts for 
> existing poolkey (address/username) that is used instead. The fix would be to 
> get rid of the registry and limit the hashmap context list length for any 
> poolkey.





[jira] [Commented] (CLOUDSTACK-9632) Upgrade bountycastle to 1.55+

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724557#comment-15724557
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9632:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1799
  
@blueorangutan test centos7 vmware-55u3



> Upgrade bountycastle to 1.55+
> -
>
> Key: CLOUDSTACK-9632
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9632
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0
>
>
> Upgrade bountycastle library to latest versions.





[jira] [Commented] (CLOUDSTACK-9632) Upgrade bountycastle to 1.55+

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724559#comment-15724559
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9632:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1799
  
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + vmware-55u3) has been 
kicked to run smoke tests


> Upgrade bountycastle to 1.55+
> -
>
> Key: CLOUDSTACK-9632
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9632
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0
>
>
> Upgrade bountycastle library to latest versions.





[jira] [Commented] (CLOUDSTACK-9619) Fixes for PR 1600

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724560#comment-15724560
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9619:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1749
  
@mike-tutkowski yes, the failure was due to an intermittent unit test 
failure. This will be fixed with #1816 


> Fixes for PR 1600
> -
>
> Key: CLOUDSTACK-9619
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9619
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.10.0.0
> Environment: All
>Reporter: Mike Tutkowski
> Fix For: 4.10.0.0
>
>
> In StorageSystemDataMotionStrategy.performCopyOfVdi we call 
> getSnapshotDetails. In one such scenario, the source snapshot in question is 
> coming from secondary storage (when we are creating a new volume on managed 
> storage from a snapshot of ours that’s on secondary storage).
> This usually “worked” in the regression tests due to a bit of "luck": We 
> retrieve the ID of the snapshot (which is on secondary storage) and then try 
> to pull out its StorageVO object (which is for primary storage). If you 
> happen to have a primary storage that matches the ID (which is the ID of a 
> secondary storage), then getSnapshotDetails populates its Map 
> with inapplicable data (that is later ignored) and you don’t easily see a 
> problem. However, if you don’t have a primary storage that matches that ID 
> (which I didn’t today because I had removed that primary storage), then a 
> NullPointerException is thrown.
> I have fixed that issue by skipping getSnapshotDetails if the source is 
> coming from secondary storage.
> While fixing that, I noticed a couple more problems:
>   We can invoke grantAccess on a snapshot that’s actually on secondary 
> storage (this doesn’t amount to much because the VolumeServiceImpl ignores 
> the call when it’s not for a primary-storage driver).
>   We can invoke revokeAccess on a snapshot that’s actually on secondary 
> storage (this doesn’t amount to much because the VolumeServiceImpl ignores 
> the call when it’s not for a primary-storage driver).
> I have corrected those issues, as well.
> I then came across one more problem:
> · When using a SAN snapshot and copying it to secondary storage or creating a 
> new managed-storage volume from a snapshot of ours on secondary storage, we 
> attach to the SR in the XenServer code, but detach from it in the 
> StorageSystemDataMotionStrategy code (by sending a message to the XenServer 
> code to perform an SR detach). Since we know to detach from the SR after the 
> copy is done, we should detach from the SR in the XenServer code (without 
> that code having to be explicitly called from outside of the XenServer logic).
> I went ahead and changed that, as well.
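
The guard described above (skip the snapshot-detail lookup, and the access 
grant/revoke, when the source snapshot lives on secondary storage) can be 
sketched as follows; the names are illustrative, not the actual 
StorageSystemDataMotionStrategy API:

```python
from enum import Enum

class DataStoreRole(Enum):
    PRIMARY = "primary"
    IMAGE = "image"  # secondary storage

def snapshot_details_for_copy(snapshot_id, store_role, lookup_details):
    """Return driver-specific details only for primary-storage snapshots.

    Illustrative sketch: looking up primary-storage details by the id of
    a secondary-storage snapshot risks matching an unrelated storage pool
    (or none at all), which is the NullPointerException described above.
    """
    if store_role != DataStoreRole.PRIMARY:
        return None  # secondary storage: nothing to look up or grant
    return lookup_details(snapshot_id)
```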





[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724573#comment-15724573
 ] 

ASF subversion and git services commented on CLOUDSTACK-9564:
-

Commit 8d506a624bbb8cf7fdc1fcd381551f319a90a810 in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=8d506a6 ]

Merge pull request #1816 from shapeblue/4.9-fix-npe-vmware

CLOUDSTACK-9564: Fix NPE due to intermittent test assertion

The test assertion 
on a pool object may return a null object, as objects
can be randomly expired/tombstoned. This will fix a NPE sometimes seen due
to recently merge for the fix for CLOUDSTACK-9564.

(we can merge this if Travis passes)

/cc @abhinandanprateek @murali-reddy

* pr/1816:
  CLOUDSTACK-9564: Fix NPE due to intermittent test assertion

Signed-off-by: Rohit Yadav 


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to memory leak was in VmwareContextPool where a registry is held 
> (arraylist) that grows indefinitely. The list itself is not used anywhere or 
> consumed. There exists a hashmap (pool) that returns a list of contexts for 
> existing poolkey (address/username) that is used instead. The fix would be to 
> get rid of the registry and limit the hashmap context list length for any 
> poolkey.





[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724566#comment-15724566
 ] 

ASF subversion and git services commented on CLOUDSTACK-9564:
-

Commit 8d506a624bbb8cf7fdc1fcd381551f319a90a810 in cloudstack's branch 
refs/heads/4.9 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=8d506a6 ]

Merge pull request #1816 from shapeblue/4.9-fix-npe-vmware

CLOUDSTACK-9564: Fix NPE due to intermittent test assertion

The test assertion 
on a pool object may return a null object, as objects
can be randomly expired/tombstoned. This will fix a NPE sometimes seen due
to recently merge for the fix for CLOUDSTACK-9564.

(we can merge this if Travis passes)

/cc @abhinandanprateek @murali-reddy

* pr/1816:
  CLOUDSTACK-9564: Fix NPE due to intermittent test assertion

Signed-off-by: Rohit Yadav 


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to the memory leak was VmwareContextPool, where a registry 
> (an ArrayList) is held that grows indefinitely. The list itself is never 
> used or consumed anywhere. A HashMap (pool) that returns a list of contexts 
> for an existing poolkey (address/username) is used instead. The fix is to 
> get rid of the registry and limit the length of the hashmap's context list 
> for any poolkey.





[jira] [Commented] (CLOUDSTACK-9564) Fix memory leak in VmwareContextPool

2016-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724572#comment-15724572
 ] 

ASF subversion and git services commented on CLOUDSTACK-9564:
-

Commit 8d506a624bbb8cf7fdc1fcd381551f319a90a810 in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=8d506a6 ]

Merge pull request #1816 from shapeblue/4.9-fix-npe-vmware

CLOUDSTACK-9564: Fix NPE due to intermittent test assertion

The test assertion on a pool object may return a null object, as objects
can be randomly expired/tombstoned. This fixes an NPE sometimes seen due
to the recent merge of the fix for CLOUDSTACK-9564.

(we can merge this if Travis passes)

/cc @abhinandanprateek @murali-reddy

* pr/1816:
  CLOUDSTACK-9564: Fix NPE due to intermittent test assertion

Signed-off-by: Rohit Yadav 


> Fix memory leak in VmwareContextPool
> 
>
> Key: CLOUDSTACK-9564
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9564
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> In a recent management server crash, it was found that the largest 
> contributor to the memory leak was VmwareContextPool, where a registry 
> (an ArrayList) is held that grows indefinitely. The list itself is never 
> used or consumed anywhere. A HashMap (pool) that returns a list of contexts 
> for an existing poolkey (address/username) is used instead. The fix is to 
> get rid of the registry and limit the length of the hashmap's context list 
> for any poolkey.




