[jira] [Commented] (CLOUDSTACK-9402) Nuage VSP Plugin : Support for underlay features (Source & Static NAT to underlay) including Marvin test coverage on master

2016-11-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695173#comment-15695173
 ] 

ASF subversion and git services commented on CLOUDSTACK-9402:
--------------------------------------------------------------

Commit 8d4dc81223032f104bd4c7d576aceed329c6a099 in cloudstack's branch 
refs/heads/master from [~nlivens]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=8d4dc81 ]

CLOUDSTACK-9402 : Support for underlay features (Source & Static NAT to 
underlay) in Nuage VSP plugin

CLOUDSTACK-9402 : Marvin tests for Source NAT and Static NAT features 
verification with NuageVsp (both overlay and underlay infra).

Co-Authored-By: Prashanth Manthena , 
Frank Maximus 


> Nuage VSP Plugin : Support for underlay features (Source & Static NAT to 
> underlay) including Marvin test coverage on master
> ------------------------------------------------------------------------
>
> Key: CLOUDSTACK-9402
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9402
> Project: CloudStack
>  Issue Type: Task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation, Network Controller
>Affects Versions: 4.10.0.0
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
>
> Support for underlay features (Source & Static NAT to underlay) with Nuage 
> VSP SDN Plugin including Marvin test coverage for corresponding Source & 
> Static NAT features on master. Moreover, our Marvin tests are written in such 
> a way that they can validate our supported feature set with both Nuage VSP 
> SDN platform's overlay and underlay infra.
> PR contents:
> 1) Support for Source NAT to underlay feature on master with Nuage VSP SDN 
> Plugin.
> 2) Support for Static NAT to underlay feature on master with Nuage VSP SDN 
> Plugin.
> 3) Marvin test coverage for Source & Static NAT to underlay on master with 
> Nuage VSP SDN Plugin.
> 4) Enhancements on our existing Marvin test code (nuagevsp plugins directory).
> 5) PEP8 & PyFlakes compliance with our Marvin test code.
> Our Marvin test code PEP8 & PyFlakes compliance:
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
> CloudStack$
> CloudStack$ pyflakes test/integration/plugins/nuagevsp/*.py
> CloudStack$
> Validations:
> 1) Underlay infra (Source & Static NAT to underlay)
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> plugins/nuagevsp/test_nuage_source_nat.py
> Test results:
> Test Nuage VSP Isolated networks with different combinations of Source NAT 
> service providers ... === TestName: test_01_nuage_SourceNAT_isolated_networks 
> | Status : SUCCESS ===
> ok
> Test Nuage VSP VPC networks with different combinations of Source NAT service 
> providers ... === TestName: test_02_nuage_SourceNAT_vpc_networks | Status : 
> SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality for Isolated network by performing 
> (wget) traffic tests to the ... === TestName: 
> test_03_nuage_SourceNAT_isolated_network_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality for VPC network by performing (wget) 
> traffic tests to the Internet ... === TestName: 
> test_04_nuage_SourceNAT_vpc_network_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with different Egress 
> Firewall/Network ACL rules by performing (wget) ... === TestName: 
> test_05_nuage_SourceNAT_acl_rules_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with VM NIC operations by performing 
> (wget) traffic tests to the ... === TestName: 
> test_06_nuage_SourceNAT_vm_nic_operations_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with VM migration by performing 
> (wget) traffic tests to the Internet ... === TestName: 
> test_07_nuage_SourceNAT_vm_migration_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with network restarts by performing 
> (wget) traffic tests to the ... === TestName: 
> test_08_nuage_SourceNAT_network_restarts_traffic | Status : SUCCESS ===
> ok
> ----------------------------------------------------------------------
> Ran 8 tests in 13360.858s
> OK
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> plugins/nuagevsp/test_nuage_static_nat.py
> Test results:
> Test Nuage VSP Public IP Range creation and deletion ... === TestName: 
> test_01_nuage_StaticNAT_public_ip_range | Status : SUCCESS ===
> ok
> Test Nuage VSP Nuage Underlay (underlay networking) enabled Public IP Range 
> creation and deletion ... === TestName: 
> test_02_nuage_StaticNAT_underlay_public_ip_range | Status : SUCCESS ===
> ok
> Test Nuage VSP Isolated networks with different combinations of Static NAT 
> service providers ... === TestName: test_03_nuage_StaticNAT_isolated_networks 
> | Status : SUCCESS ===
> ok
> Test Nuage VSP VPC networks with different combinations of Static NAT service 
> providers ... === TestName: test_04_nuage_StaticNAT_vpc_networks | Status : 
> SUCCESS ===
> ok
> Test Nuage VSP Static NAT functionality for Isolated network by performing 
> (wget) traffic tests to the ... === TestName: 
> test_05_nuage_Stat

[jira] [Commented] (CLOUDSTACK-9402) Nuage VSP Plugin : Support for underlay features (Source & Static NAT to underlay) including Marvin test coverage on master

2016-11-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695175#comment-15695175
 ] 

ASF subversion and git services commented on CLOUDSTACK-9402:
--------------------------------------------------------------

Commit 62c8496d7e38365f8cf5bebfc8b98ecc5a371d8b in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=62c8496 ]

Merge pull request #1580 from nlivens/nuage_vsp_pat_fip2ul

CLOUDSTACK-9402 : Support for underlay features (Source & Static NAT to 
underlay) in Nuage VSP plugin

Support for underlay features (Source & Static NAT to underlay) with Nuage 
VSP SDN Plugin including Marvin test coverage for corresponding Source & 
Static NAT features on master. Moreover, our Marvin tests are written in such 
a way that they can validate our supported feature set with both Nuage VSP 
SDN platform's overlay and underlay infra.

PR contents:
1) Support for Source NAT to underlay feature on master with Nuage VSP SDN 
Plugin.
2) Support for Static NAT to underlay feature on master with Nuage VSP SDN 
Plugin.
3) Marvin test coverage for Source & Static NAT to underlay on master with 
Nuage VSP SDN Plugin.
4) Enhancements on our existing Marvin test code (nuagevsp plugins directory).
5) PEP8 & PyFlakes compliance with our Marvin test code.

* pr/1580:
  CLOUDSTACK-9402 : Support for underlay features (Source & Static NAT to 
underlay) in Nuage VSP plugin

Signed-off-by: Rohit Yadav 





[jira] [Commented] (CLOUDSTACK-9321) Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's haproxy.cfg file

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695180#comment-15695180
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9321:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1577
  
LGTM. A few failures are related to the environment; merging this now.


> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file
> ------------------------------------------------------------------------
>
> Key: CLOUDSTACK-9321
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9321
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Network Controller
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
>Priority: Critical
> Fix For: 4.9.1.0
>
>
> Multiple Internal LB rules (more than one Internal LB rule with the same 
> source IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file. Moreover, each time a new Internal LB rule is 
> added to the corresponding InternalLbVm instance, it replaces the existing 
> one. Thus, traffic corresponding to these unresolved (old) Internal LB rules 
> is dropped by the InternalLbVm instance.
> PR contents:
> 1) Fix for this bug.
> 2) Marvin test coverage for Internal LB feature on master with native ACS 
> setup (component directory) including validations for this bug fix.
> 3) Enhancements on our existing Internal LB Marvin test code (nuagevsp plugins 
> directory) to validate this bug fix.
> 4) PEP8 & PyFlakes compliance with the added Marvin test code.
> Added Marvin test code PEP8 & PyFlakes compliance:
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pyflakes 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
> CloudStack$
> CloudStack$ pyflakes test/integration/plugins/nuagevsp/*.py
> CloudStack$
> Validations:
> 1) Made sure that we didn't break any Public LB (VpcVirtualRouter) 
> functionality.
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> test/integration/component/test_vpc_network_lbrules.py
> Test results:
> Test case no 210 and 227: List Load Balancing Rules belonging to a VPC ... 
> === TestName: test_01_VPC_LBRulesListing | Status : SUCCESS ===
> ok
> Test Create LB rules for 1 network which is part of a two/multiple virtual 
> networks of a ... === TestName: test_02_VPC_CreateLBRuleInMultipleNetworks | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_03_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_04_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 214 : Delete few(not all) LB rules for a single virtual network 
> of a ... === TestName: test_05_VPC_CreateAndDeleteLBRule | Status : SUCCESS 
> ===
> ok
> Test Delete few(not all) LB rules for a single virtual network of ... === 
> TestName: test_06_VPC_CreateAndDeleteLBRuleVRStopppedState | Status : SUCCESS 
> ===
> ok
> Test Delete all LB rules for a single virtual network of a ... === TestName: 
> test_07_VPC_CreateAndDeleteAllLBRule | Status : SUCCESS ===
> ok
> Test Delete all LB rules for a single virtual network of a ... === TestName: 
> test_08_VPC_CreateAndDeleteAllLBRuleVRStoppedState | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule for a VM that belongs to 
> a different VPC. ... === TestName: test_09_VPC_LBRuleCreateFailMultipleVPC | 
> Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule for a VM that does not 
> belong to any VPC. ... === TestName: 
> test_10_VPC_FailedToCreateLBRuleNonVPCNetwork | Status : SUCCESS ===
> ok
> Test case no 217 and 236: User should not be allowed to create a LB rule for 
> a ... === TestName: test_11_VPC_LBRuleCreateNotAllowed | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule on an Ipaddress that 
> Source Nat enabled. ... === TestName: test_12_VPC_LBRuleCreateFailForRouterIP 
> | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule on an Ipaddress that 
> already has a PF rule. ... === TestName:

[jira] [Commented] (CLOUDSTACK-9402) Nuage VSP Plugin : Support for underlay features (Source & Static NAT to underlay) including Marvin test coverage on master

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695176#comment-15695176
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9402:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1580



[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695181#comment-15695181
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9538:


Github user ustcweizhou commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1710#discussion_r89575783
  
--- Diff: 
engine/storage/snapshot/src/org/apache/cloudstack/storage/snapshot/XenserverSnapshotStrategy.java
 ---
@@ -268,7 +268,9 @@ public boolean deleteSnapshot(Long snapshotId) {
 SnapshotDataStoreVO snapshotOnPrimary = 
snapshotStoreDao.findBySnapshot(snapshotId, DataStoreRole.Primary);
 if (snapshotOnPrimary != null) {
 SnapshotInfo snapshotOnPrimaryInfo = 
snapshotDataFactory.getSnapshot(snapshotId, DataStoreRole.Primary);
-if 
(((PrimaryDataStoreImpl)snapshotOnPrimaryInfo.getDataStore()).getPoolType() == 
StoragePoolType.RBD) {
+long volumeId = snapshotOnPrimary.getVolumeId();
+VolumeVO volumeVO = volumeDao.findById(volumeId);
--- End diff --

@rhtyd Hi Rohit, each snapshot should have a volume_id (at least on all our 
platforms).
Actually, in 
engine/storage/snapshot/src/org/apache/cloudstack/storage/snapshot/SnapshotObject.java,
 the getVolumeId() also returns long, not Long.

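For clarity, the lookup pattern the diff introduces — resolving the snapshot's volume first and only treating the snapshot as RBD-backed when that lookup succeeds — can be sketched as below. This is an illustrative stand-in, not CloudStack's real API: `Volume`, the `volumeDao` map, and `needsRbdCleanup` are hypothetical simplifications of `VolumeVO`/`VolumeDao`, and the explicit null guard is one defensive variant of the change under review.

```java
import java.util.HashMap;
import java.util.Map;

public class SnapshotCleanupSketch {
    enum StoragePoolType { RBD, NetworkFilesystem }

    // Minimal stand-in for CloudStack's VolumeVO (illustrative only).
    static class Volume {
        final StoragePoolType poolType;
        Volume(StoragePoolType poolType) { this.poolType = poolType; }
    }

    // Stand-in for volumeDao; findById is modeled as Map.get, which
    // returns null when the volume row no longer exists.
    static final Map<Long, Volume> volumeDao = new HashMap<>();

    // Decide whether RBD-side snapshot cleanup should even be attempted:
    // only when the snapshot's volume still exists and sits on an RBD pool.
    static boolean needsRbdCleanup(long volumeId) {
        Volume volume = volumeDao.get(volumeId); // may be null if VM expunged
        return volume != null && volume.poolType == StoragePoolType.RBD;
    }

    public static void main(String[] args) {
        volumeDao.put(1L, new Volume(StoragePoolType.RBD));
        System.out.println(needsRbdCleanup(1L)); // true: RBD volume still present
        System.out.println(needsRbdCleanup(2L)); // false: volume already gone
    }
}
```

Under this sketch, a snapshot whose volume has already been expunged along with its VM would simply skip the primary-storage cleanup instead of retrying and failing.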

> Deleting Snapshot From Primary Storage Fails on RBD Storage if you already 
> delete vm's itself
> ------------------------------------------------------------------------
>
> Key: CLOUDSTACK-9538
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9538
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Snapshot, Storage Controller
>Affects Versions: 4.9.0
> Environment: Ubuntu 14.04 Management Server +  Ubuntu 14.04 KVM
>Reporter: Özhan Rüzgar Karaman
>
> Hi;
> We plan to store VM snapshots as VM backups on secondary storage while the 
> related VM itself is destroyed/expunged. The idea is good; there was a bug 
> that blocked it from working, and it was fixed with CLOUDSTACK-9297. 
> With the 4.9 release we therefore expected this to work in our 4.9 ACS 
> environment, but we noticed that, because we use RBD as primary storage, one 
> more minor bug needs to be fixed first.
> The problem occurs because the fix for CLOUDSTACK-8302 in the 4.9 release 
> blocks our use case. If you destroy a VM whose primary storage is RBD, all 
> related snapshots of that VM on the primary RBD storage are deleted as well, 
> so after the VM is destroyed no disk file or snapshot file remains on RBD 
> storage. This is good for cleanup purposes on primary storage, but the 
> XenserverSnapshotStrategy.deleteSnapshot method does not expect it.
> org.apache.cloudstack.storage.snapshot.XenserverSnapshotStrategy.deleteSnapshot
>  receives an exception. The code retries 10 times on the KVM node to remove 
> the RBD snapshot, but because there is no snapshot left on the RBD side it 
> gets an exception after 10 retries; it also spends nearly 5 minutes trying to 
> delete the snapshots and finally ends with a "Failed to delete snapshot" 
> error.
> I think we need to skip snapshot cleanup on primary storage, for RBD-type 
> primary storage only, if the related VM has already been destroyed. (The VM 
> destroy stage already removed all snapshots related to the VM on primary 
> storage, so no action is needed there.)
> We ran the tests below to make this issue clear.
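The proposed behavior — skip primary-storage snapshot cleanup for RBD pools when the backing volume (and hence the VM) is already gone — can be sketched as follows. This is a minimal illustration, not the actual CloudStack code; `pool_type` and `volume` stand in for the `StoragePoolType` and `VolumeVO` lookups in `XenserverSnapshotStrategy.deleteSnapshot`.

```python
def should_clean_snapshot_on_primary(pool_type, volume):
    """Minimal sketch of the proposed guard (not the actual CloudStack code).

    On RBD primary storage, destroying a VM already removes its snapshots,
    so if the backing volume is gone there is nothing left to clean up and
    retrying the delete would only fail after ~5 minutes.
    """
    if pool_type == "RBD" and volume is None:
        return False  # snapshot went away with the VM; skip primary cleanup
    return True       # other pool types, or live volumes, clean up as before


# Hypothetical records: a live NFS volume, and an RBD volume whose VM is gone.
print(should_clean_snapshot_on_primary("NetworkFilesystem", {"id": 42}))  # True
print(should_clean_snapshot_on_primary("RBD", None))                      # False
```

The guard only changes behavior for the RBD-plus-destroyed-VM combination, so existing cleanup paths stay untouched.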
> 1) We create a vm with 3 snapshots on ACS.
> mysql> select * from snapshot_store_ref where snapshot_id in (93,94,95);
> +-+--+-+-+--+++-+---+++---+--+-+-+---+
> | id  | store_id | snapshot_id | created | last_updated | job_id 
> | store_role | size| physical_size | parent_snapshot_id | 
> install_path  
>  | state | update_count | ref_cnt | updated | volume_id |
> +-+--+-+-+--+++-+---+++---+--+-+-+---+
> | 185 |1 |  93 | 2016-10-12 10:13:44 | NULL | NULL   
> | Primary| 28991029248 |   28991029248 |  0 | 
> cst4/bb9ca3c7-96d6-4465-85b5-cd01f4d635f2/5400

[jira] [Commented] (CLOUDSTACK-9321) Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's hapro

2016-11-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695184#comment-15695184
 ] 

ASF subversion and git services commented on CLOUDSTACK-9321:
-

Commit 62e858131fcc0650d61699efffcf7eb57721e1b1 in cloudstack's branch 
refs/heads/master from [~nlivens]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=62e8581 ]

CLOUDSTACK-9321 : Multiple Internal LB rules (more than one Internal LB rule 
with same source IP address) are not getting resolved in the corresponding 
InternalLbVm instance's haproxy.cfg file

CLOUDSTACK-9321 : Adding component tests for VPC Network functionality - 
Internal LB rules

CLOUDSTACK-9321 : Extending Nuage VSP Internal LB Marvin tests

Co-Authored-By: Prashanth Manthena , 
Frank Maximus 


> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file
> --
>
> Key: CLOUDSTACK-9321
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9321
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Network Controller
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
>Priority: Critical
> Fix For: 4.9.1.0
>
>
> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file. Moreover, each time a new Internal LB rule is 
> added to the corresponding InternalLbVm instance, it replaces the existing 
> one. Thus, traffic corresponding to these un-resolved (old) Internal LB rules 
> are getting dropped by the InternalLbVm instance.
> PR contents:
> 1) Fix for this bug.
> 2) Marvin test coverage for Internal LB feature on master with native ACS 
> setup (component directory) including validations for this bug fix.
> 3) Enhancements on our existing Internal LB Marvin test code (nuagevsp plugins 
> directory) to validate this bug fix.
> 4) PEP8 & PyFlakes compliance with the added Marvin test code.
> Added Marvin test code PEP8 & PyFlakes compliance:
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pyflakes 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/.py
> CloudStack$
> CloudStack$ pyflakes test/integration/plugins/nuagevsp/.py
> CloudStack$
> Validations:
> 1) Made sure that we didn't break any Public LB (VpcVirtualRouter) 
> functionality.
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> test/integration/component/test_vpc_network_lbrules.py
> Test results:
> Test case no 210 and 227: List Load Balancing Rules belonging to a VPC ... 
> === TestName: test_01_VPC_LBRulesListing | Status : SUCCESS ===
> ok
> Test Create LB rules for 1 network which is part of a two/multiple virtual 
> networks of a ... === TestName: test_02_VPC_CreateLBRuleInMultipleNetworks | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_03_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_04_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 214 : Delete few(not all) LB rules for a single virtual network 
> of a ... === TestName: test_05_VPC_CreateAndDeleteLBRule | Status : SUCCESS 
> ===
> ok
> Test Delete few(not all) LB rules for a single virtual network of ... === 
> TestName: test_06_VPC_CreateAndDeleteLBRuleVRStopppedState | Status : SUCCESS 
> ===
> ok
> Test Delete all LB rules for a single virtual network of a ... === TestName: 
> test_07_VPC_CreateAndDeleteAllLBRule | Status : SUCCESS ===
> ok
> Test Delete all LB rules for a single virtual network of a ... === TestName: 
> test_08_VPC_CreateAndDeleteAllLBRuleVRStoppedState | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule for a VM that belongs to 
> a different VPC. ... === TestName: test_09_VPC_LBRuleCreateFailMultipleVPC | 
> Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule for a VM that does not 
> belong to any VPC. ... === TestName: 
> test_10_VPC_FailedToCreateLBRuleNonVPCNetwork | Status : SUCCESS ===
> ok
> Test cas

[jira] [Commented] (CLOUDSTACK-9403) Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin test coverage

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695191#comment-15695191
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9403:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1579
  
@nlivens @prashanthvarma @singalrahul please squash your changes, fix 
conflicts, rebase against latest master. Pending lgtm/review is requested. /cc 
@jburwell 


> Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin 
> test coverage
> 
>
> Key: CLOUDSTACK-9403
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9403
> Project: CloudStack
>  Issue Type: Task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation, Network Controller
>Reporter: Rahul Singal
>Assignee: Nick Livens
>
> This is the first phase of Shared Network support in CloudStack through the 
> NuageVsp network plugin. A shared network is a type of virtual network that 
> is shared between multiple accounts, i.e. a shared network can be accessed by 
> virtual machines that belong to many different accounts. This basic 
> functionality will be supported for the following common use cases:
> - A shared network can be used for monitoring purposes. It can be assigned to 
> a domain and used for monitoring VMs belonging to all accounts in that 
> domain.
> - Public accessibility of a shared network.
> With the current NuageVsp plugin implementation, it supports overlapping IP 
> addresses, public access, and adding IP ranges to a shared network.
> In VSD, this is implemented in the following manner:
> - In order to have tenant isolation for shared networks, we have to create a 
> Shared L3 Subnet for each shared network and instantiate it across the 
> relevant enterprises. A shared network will only exist under an enterprise 
> when it is needed, i.e. when the first VM is spun up under that ACS domain 
> inside that shared network.
> - For a public shared network, a floating IP subnet pool is also created in 
> VSD, along with everything mentioned in the previous point.
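The lazy, per-enterprise instantiation described above can be modeled as follows. This is an illustrative sketch only; the names (`SharedNetworkManager`, `on_vm_deploy`) are hypothetical and do not come from the Nuage VSP API.

```python
class SharedNetworkManager:
    """Illustrative model of lazy Shared L3 Subnet instantiation in VSD.

    A shared network is materialized under an enterprise only when the
    first VM of that ACS domain is deployed into it (hypothetical names).
    """

    def __init__(self, shared_network):
        self.shared_network = shared_network
        self.instantiated = set()  # enterprises that already have the subnet

    def on_vm_deploy(self, enterprise):
        if enterprise not in self.instantiated:
            # First VM in this enterprise: create the Shared L3 Subnet there.
            self.instantiated.add(enterprise)
            return f"created {self.shared_network} under {enterprise}"
        return "already instantiated"


mgr = SharedNetworkManager("shared-net-1")
print(mgr.on_vm_deploy("enterprise-A"))  # first VM triggers subnet creation
print(mgr.on_vm_deploy("enterprise-A"))  # later VMs reuse the existing subnet
```

The design keeps VSD clean: enterprises that never deploy a VM into the shared network never see the subnet at all.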
> PR contents:
> 1) Support for shared networks with tenant isolation on master with Nuage VSP 
> SDN Plugin.
> 2) Support for shared networks with publicly accessible IP ranges.
> 3) Marvin test coverage for shared networks on master with Nuage VSP SDN 
> Plugin.
> 4) Enhancements on our existing Marvin test code (nuagevsp plugins directory).
> 5) PEP8 & PyFlakes compliance with our Marvin test code.
> Test results:
> Validate that ROOT admin is NOT able to deploy a VM for a user in ROOT domain 
> in a shared network with ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_ROOTuser | Status : 
> SUCCESS ===
> ok
> Validate that ROOT admin is NOT able to deploy a VM for an admin user in a 
> shared network with ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_differentdomain | 
> Status : SUCCESS ===
> ok
> Validate that ROOT admin is NOT able to deploy a VM for admin user in the same 
> domain but in a ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_domainadminuser | 
> Status : SUCCESS ===
> ok
> Validate that ROOT admin is NOT able to deploy a VM for user in the same 
> domain but in a different ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_domainuser | Status : 
> SUCCESS ===
> ok
> Validate that ROOT admin is able to deploy a VM for regular user in a shared 
> network with scope=account ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_user | Status : SUCCESS 
> ===
> ok
> Validate that ROOT admin is able to deploy a VM for user in ROOT domain in a 
> shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_ROOTuser | Status : SUCCESS 
> ===
> ok
> Validate that ROOT admin is able to deploy a VM for a domain admin user in a 
> shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_domainadminuser | Status : 
> SUCCESS ===
> ok
> Validate that ROOT admin is able to deploy a VM for other users in a shared 
> network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_domainuser | Status : 
> SUCCESS ===
> ok
> Validate that ROOT admin is able to deploy a VM for admin user in a domain in 
> a shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_subdomainadminuser | Status 
> : SUCCESS ===
> ok
> Validate that ROOT admin is able to deploy a VM for any user in a subdomain in 
> a shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_

[jira] [Commented] (CLOUDSTACK-9321) Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's hapro

2016-11-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695189#comment-15695189
 ] 

ASF subversion and git services commented on CLOUDSTACK-9321:
-

Commit 185be24ed8ddcda5250ca17f230ca9c640ba4d11 in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=185be24 ]

Merge pull request #1577 from nlivens/CLOUDSTACK-9321

CLOUDSTACK-9321 : Multiple Internal LB rules (more than one Internal LB rule 
with same source IP address) are not getting resolved in the corresponding 
InternalLbVm instance's haproxy.cfg file

Multiple Internal LB rules (more than one Internal LB rule with same source IP 
address) are not getting resolved in the corresponding InternalLbVm instance's 
haproxy.cfg file. Moreover, each time a new Internal LB rule is added to the 
corresponding InternalLbVm instance, it replaces the existing one. Thus, 
traffic corresponding to these un-resolved (old) Internal LB rules is getting 
dropped by the InternalLbVm instance.
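The fix amounts to emitting every configured rule when regenerating the configuration, instead of letting the newest rule overwrite the rest. A minimal sketch of such rendering follows; the `listen` block layout is illustrative and not the exact haproxy.cfg the InternalLbVm writes.

```python
def render_haproxy_cfg(rules):
    """Render one haproxy 'listen' section per LB rule (illustrative only).

    rules: list of dicts with 'source_ip', 'source_port', and 'vm_ips'.
    All rules are emitted, so several rules sharing a source IP coexist
    in the generated config instead of the last one replacing the others.
    """
    sections = []
    for rule in rules:
        name = f"{rule['source_ip']}-{rule['source_port']}"
        lines = [f"listen {name} {rule['source_ip']}:{rule['source_port']}"]
        for i, vm_ip in enumerate(rule["vm_ips"]):
            lines.append(f"    server {name}_{i} {vm_ip} check")
        sections.append("\n".join(lines))
    return "\n\n".join(sections)


# Two internal LB rules on the same source IP, different ports.
cfg = render_haproxy_cfg([
    {"source_ip": "10.1.1.10", "source_port": 80, "vm_ips": ["10.1.1.5"]},
    {"source_ip": "10.1.1.10", "source_port": 443, "vm_ips": ["10.1.1.6"]},
])
print(cfg)  # both rules for 10.1.1.10 appear in the rendered config
```

Keying each section by (source IP, port) is what prevents the replace-on-add behavior described in the bug.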

PR contents:
1) Fix for this bug.
2) Marvin test coverage for Internal LB feature on master with native ACS setup 
(component directory) including validations for this bug fix.
3) Enhancements on our existing Internal LB Marvin test code (nuagevsp plugins 
directory) to validate this bug fix.
4) PEP8 & PyFlakes compliance with the added Marvin test code.

* pr/1577:
  CLOUDSTACK-9321 : Multiple Internal LB rules (more than one Internal LB rule 
with same source IP address) are not getting resolved in the corresponding 
InternalLbVm instance's haproxy.cfg file

Signed-off-by: Rohit Yadav 


> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file
> --
>
> Key: CLOUDSTACK-9321
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9321
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Network Controller
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
>Priority: Critical
> Fix For: 4.9.1.0
>
>
> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file. Moreover, each time a new Internal LB rule is 
> added to the corresponding InternalLbVm instance, it replaces the existing 
> one. Thus, traffic corresponding to these un-resolved (old) Internal LB rules 
> are getting dropped by the InternalLbVm instance.
> PR contents:
> 1) Fix for this bug.
> 2) Marvin test coverage for Internal LB feature on master with native ACS 
> setup (component directory) including validations for this bug fix.
> 3) Enhancements on our existing Internal LB Marvin test code (nuagevsp plugins 
> directory) to validate this bug fix.
> 4) PEP8 & PyFlakes compliance with the added Marvin test code.
> Added Marvin test code PEP8 & PyFlakes compliance:
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pyflakes 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/.py
> CloudStack$
> CloudStack$ pyflakes test/integration/plugins/nuagevsp/.py
> CloudStack$
> Validations:
> 1) Made sure that we didn't break any Public LB (VpcVirtualRouter) 
> functionality.
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> test/integration/component/test_vpc_network_lbrules.py
> Test results:
> Test case no 210 and 227: List Load Balancing Rules belonging to a VPC ... 
> === TestName: test_01_VPC_LBRulesListing | Status : SUCCESS ===
> ok
> Test Create LB rules for 1 network which is part of a two/multiple virtual 
> networks of a ... === TestName: test_02_VPC_CreateLBRuleInMultipleNetworks | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_03_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_04_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 214 : Delete few(not all) LB rules for a single virtual network 
> of a ... === TestName: test_05_VPC_CreateAndDeleteLBRule | Status : SUC

[jira] [Commented] (CLOUDSTACK-8676) Deploy user instance from vm snapshot for VMware hypervisor

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695193#comment-15695193
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8676:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1664
  
Thanks for sharing @sateesh-chodapuneedi 


> Deploy user instance from vm snapshot for VMware hypervisor
> ---
>
> Key: CLOUDSTACK-8676
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8676
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, VMware
>Reporter: Sateesh Chodapuneedi
>Assignee: Sateesh Chodapuneedi
> Fix For: Future
>
>
> Currently, ACS provides the ability to deploy a VM from a template or ISO. 
> However, ACS does not provide the ability to deploy a VM(s) directly from a 
> VM snapshot. 
> VM snapshots are stored in the primary storage and have a hierarchical or 
> parent/child relationship. The requirement would be to provide the ability to 
> deploy user instances from selected VM snapshots. Additionally, any VM 
> snapshot in the hierarchy can be deployed concurrently.  
> Even though this can be  supported and applicable to all hypervisors, to 
> start with this feature is supported only for VMware hypervisor.
> Feature specification is at 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Deploy+instance+from+VM+snapshot



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9321) Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's hapro

2016-11-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695185#comment-15695185
 ] 

ASF subversion and git services commented on CLOUDSTACK-9321:
-

Commit 62e858131fcc0650d61699efffcf7eb57721e1b1 in cloudstack's branch 
refs/heads/master from [~nlivens]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=62e8581 ]

CLOUDSTACK-9321 : Multiple Internal LB rules (more than one Internal LB rule 
with same source IP address) are not getting resolved in the corresponding 
InternalLbVm instance's haproxy.cfg file

CLOUDSTACK-9321 : Adding component tests for VPC Network functionality - 
Internal LB rules

CLOUDSTACK-9321 : Extending Nuage VSP Internal LB Marvin tests

Co-Authored-By: Prashanth Manthena , 
Frank Maximus 
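
The overwrite bug this commit fixes can be illustrated with a minimal Python
sketch (the function and names here are hypothetical, not CloudStack's actual
InternalLbVm config generator): a correct generator must accumulate backends
per (source IP, port) frontend instead of replacing the previous haproxy.cfg
entry each time a rule is added.

```python
# Hedged sketch: accumulate Internal LB rules per (source IP, port)
# frontend instead of overwriting the previous haproxy.cfg entry,
# which is the bug described in this commit. Names are illustrative.
from collections import OrderedDict

def build_haproxy_cfg(rules):
    """rules: iterable of (source_ip, port, vm_ip) tuples."""
    listeners = OrderedDict()
    for source_ip, port, vm_ip in rules:
        # Key by frontend; append backends rather than replacing them.
        listeners.setdefault((source_ip, port), []).append(vm_ip)
    stanzas = []
    for (source_ip, port), backends in listeners.items():
        lines = [
            "listen lb_%s_%d" % (source_ip.replace(".", "_"), port),
            "    bind %s:%d" % (source_ip, port),
            "    mode tcp",
            "    balance roundrobin",
        ]
        for i, vm_ip in enumerate(backends):
            lines.append("    server vm%d %s:%d check" % (i, vm_ip, port))
        stanzas.append("\n".join(lines))
    return "\n\n".join(stanzas)

cfg = build_haproxy_cfg([
    ("10.1.1.5", 80, "10.1.1.10"),
    ("10.1.1.5", 80, "10.1.1.11"),   # same source IP and port: must coexist
    ("10.1.1.5", 8080, "10.1.1.12"), # same source IP, new port: own stanza
])
print(cfg)
```

With the buggy replace-instead-of-append behaviour, only the last rule would
survive in the generated configuration.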



[jira] [Commented] (CLOUDSTACK-9321) Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's hapro

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695190#comment-15695190
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9321:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1577


> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file
> --
>
> Key: CLOUDSTACK-9321
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9321
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Network Controller
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
>Priority: Critical
> Fix For: 4.9.1.0
>
>
> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file. Moreover, each time a new Internal LB rule is 
> added to the corresponding InternalLbVm instance, it replaces the existing 
> one. Thus, traffic corresponding to these unresolved (old) Internal LB rules 
> is dropped by the InternalLbVm instance.
> PR contents:
> 1) Fix for this bug.
> 2) Marvin test coverage for Internal LB feature on master with native ACS 
> setup (component directory) including validations for this bug fix.
> 3) Enhancements on our existing Internal LB Marvin test code (nuagevsp plugins 
> directory) to validate this bug fix.
> 4) PEP8 & PyFlakes compliance with the added Marvin test code.
> Added Marvin test code PEP8 & PyFlakes compliance:
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pyflakes 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
> CloudStack$
> CloudStack$ pyflakes test/integration/plugins/nuagevsp/*.py
> CloudStack$
> Validations:
> 1) Made sure that we didn't break any Public LB (VpcVirtualRouter) 
> functionality.
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> test/integration/component/test_vpc_network_lbrules.py
> Test results:
> Test case no 210 and 227: List Load Balancing Rules belonging to a VPC ... 
> === TestName: test_01_VPC_LBRulesListing | Status : SUCCESS ===
> ok
> Test Create LB rules for 1 network which is part of a two/multiple virtual 
> networks of a ... === TestName: test_02_VPC_CreateLBRuleInMultipleNetworks | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_03_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_04_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 214 : Delete few(not all) LB rules for a single virtual network 
> of a ... === TestName: test_05_VPC_CreateAndDeleteLBRule | Status : SUCCESS 
> ===
> ok
> Test Delete few(not all) LB rules for a single virtual network of ... === 
> TestName: test_06_VPC_CreateAndDeleteLBRuleVRStopppedState | Status : SUCCESS 
> ===
> ok
> Test Delete all LB rules for a single virtual network of a ... === TestName: 
> test_07_VPC_CreateAndDeleteAllLBRule | Status : SUCCESS ===
> ok
> Test Delete all LB rules for a single virtual network of a ... === TestName: 
> test_08_VPC_CreateAndDeleteAllLBRuleVRStoppedState | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule for a VM that belongs to 
> a different VPC. ... === TestName: test_09_VPC_LBRuleCreateFailMultipleVPC | 
> Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule for a VM that does not 
> belong to any VPC. ... === TestName: 
> test_10_VPC_FailedToCreateLBRuleNonVPCNetwork | Status : SUCCESS ===
> ok
> Test case no 217 and 236: User should not be allowed to create a LB rule for 
> a ... === TestName: test_11_VPC_LBRuleCreateNotAllowed | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule on an Ipaddress that 
> Source Nat enabled. ... === TestName: test_12_VPC_LBRuleCreateFailForRouterIP 
> | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule on an Ipaddress that 
> already has a PF rule. ... === TestName: 
> test_13_VPC_LBRuleCreateFailForPFSourceNATIP | Status :

[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695198#comment-15695198
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9538:


Github user rhtyd commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1710#discussion_r89576049
  
--- Diff: engine/storage/snapshot/src/org/apache/cloudstack/storage/snapshot/XenserverSnapshotStrategy.java ---
@@ -268,7 +268,9 @@ public boolean deleteSnapshot(Long snapshotId) {
         SnapshotDataStoreVO snapshotOnPrimary = snapshotStoreDao.findBySnapshot(snapshotId, DataStoreRole.Primary);
         if (snapshotOnPrimary != null) {
             SnapshotInfo snapshotOnPrimaryInfo = snapshotDataFactory.getSnapshot(snapshotId, DataStoreRole.Primary);
-            if (((PrimaryDataStoreImpl)snapshotOnPrimaryInfo.getDataStore()).getPoolType() == StoragePoolType.RBD) {
+            long volumeId = snapshotOnPrimary.getVolumeId();
+            VolumeVO volumeVO = volumeDao.findById(volumeId);
--- End diff --

Thanks @ustcweizhou 
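
The guard added in the diff above (looking up the snapshot's backing volume
before the RBD pool-type check) has a simple intent: skip the primary-storage
revoke when the volume was already removed along with its VM. A hedged Python
illustration of that intent follows; the names are hypothetical, not the
actual Java code path:

```python
# Hedged sketch of the guard's intent: when the backing volume is gone
# (VM destroy on RBD already wiped its snapshots on primary storage),
# skip the revoke that would otherwise retry 10 times and fail.
def should_cleanup_on_primary(volume_id, volumes, pool_type):
    """Return False when the primary-storage revoke should be skipped."""
    volume = volumes.get(volume_id)
    if pool_type == "RBD" and (volume is None or volume.get("removed")):
        # The VM destroy path already deleted the RBD snapshots.
        return False
    return True

# Volume 4774 was expunged with its VM (removed timestamp set).
volumes = {4774: {"id": 4774, "removed": "2016-10-12 10:16:52"}}
assert should_cleanup_on_primary(4774, volumes, "RBD") is False
assert should_cleanup_on_primary(4774, volumes, "NFS") is True
```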



[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695201#comment-15695201
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9538:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1710
  
There were some intermittent errors seen, I'll re-kick tests.
@blueorangutan package



[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695202#comment-15695202
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9538:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1710
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.


> Deleting Snapshot From Primary Storage Fails on RBD Storage if you already 
> delete vm's itself
> -
>
> Key: CLOUDSTACK-9538
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9538
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Snapshot, Storage Controller
>Affects Versions: 4.9.0
> Environment: Ubuntu 14.04 Management Server +  Ubuntu 14.04 KVM
>Reporter: Özhan Rüzgar Karaman
>
> Hi;
> We plan to keep VM snapshots as VM backups on secondary storage even after 
> the related VM has been destroyed/expunged. The idea is good, and the bug 
> that blocked it from working was fixed with CLOUDSTACK-9297. 
> With the 4.9 release we expected this to work on our 4.9 ACS environment, but 
> we noticed that, because we are using RBD as primary storage, one more minor 
> bug needs to be fixed first.
> The problem occurs because of the CLOUDSTACK-8302 fix in the 4.9 release. If 
> you destroy a VM whose primary storage is RBD, the destroy also deletes any 
> related snapshots of that VM on the primary RBD storage, so afterwards no 
> disk or snapshot file remains on RBD storage. This is good for cleanup 
> purposes on primary storage, but the XenserverSnapshotStrategy.deleteSnapshot 
> method does not expect it.
> org.apache.cloudstack.storage.snapshot.XenserverSnapshotStrategy.deleteSnapshot
>  receives an exception: the code retries 10 times on the KVM node to remove 
> the RBD snapshot, but because there is no snapshot on the RBD side it fails 
> after 10 retries, spends nearly 5 minutes doing so, and finally ends with a 
> "Failed to delete snapshot" error.
> I think we need to skip snapshot cleanup on primary storage, for RBD-type 
> primary storage only, if the related VM has already been destroyed. (The VM 
> destroy stage already removed all of the VM's snapshots on primary storage, 
> so there is no further action to take there.)
> We ran the tests below to make this issue clear.
> 1) We create a VM with 3 snapshots on ACS.
> mysql> select * from snapshot_store_ref where snapshot_id in (93,94,95);
> +-+--+-+-+--+++-+---+++---+--+-+-+---+
> | id  | store_id | snapshot_id | created | last_updated | job_id 
> | store_role | size| physical_size | parent_snapshot_id | 
> install_path  
>  | state | update_count | ref_cnt | updated | volume_id |
> +-+--+-+-+--+++-+---+++---+--+-+-+---+
> | 185 |1 |  93 | 2016-10-12 10:13:44 | NULL | NULL   
> | Primary| 28991029248 |   28991029248 |  0 | 
> cst4/bb9ca3c7-96d6-4465-85b5-cd01f4d635f2/54008bf3-43dd-469d-91a7-4acd146d7b84
>  | Ready |2 |   0 | 2016-10-12 10:13:45 |  4774 |
> | 186 |1 |  93 | 2016-10-12 10:13:45 | NULL | NULL   
> | Image  | 28991029248 |   28991029248 |  0 | 
> snapshots/2/4774/54008bf3-43dd-469d-91a7-4acd146d7b84 
>  | Ready |2 |   0 | 2016-10-12 10:15:04 |  4774 |
> | 187 |1 |  94 | 2016-10-12 10:15:38 | NULL | NULL   
> | Primary| 28991029248 |   28991029248 |  0 | 
> cst4/bb9ca3c7-96d6-4465-85b5-cd01f4d635f2/45fc4f44-b377-49c0-9264-5d813fefe93f
>  | Ready |2 |   0 | 2016-10-12 10:15:39 |  4774 |
> | 188 |1 |  94 | 2016-10-12 10:15:39 | NULL | NULL   
> | Image  | 28991029248 |   28991029248 |  0 | 
> snapshots/2/4774/45fc4f44-b377-49c0-9264-5d813fefe93f 
>  | Ready |2 |   0 | 2016-10-12 10:1

[jira] [Commented] (CLOUDSTACK-9339) Virtual Routers don't handle Multiple Public Interfaces

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695218#comment-15695218
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9339:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1659
  
@blueorangutan package


> Virtual Routers don't handle Multiple Public Interfaces
> ---
>
> Key: CLOUDSTACK-9339
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9339
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
>Reporter: dsclose
>Assignee: Murali Reddy
>  Labels: firewall, nat, router
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> There are a series of issues with the way Virtual Routers manage multiple 
> public interfaces. These are more pronounced on redundant virtual router 
> setups. I have not attempted to examine these issues in a VPC context. 
> Outside of a VPC context, however, the following is expected behaviour:
> * eth0 connects the router to the guest network.
> * In RvR setups, keepalived manages the guests' gateway IP as a virtual IP on 
> eth0.
> * eth1 provides a local link to the hypervisor, allowing CloudStack to issue 
> commands to the router.
> * eth2 is the router's public interface. By default, a single public IP will 
> be set up on eth2 along with the necessary iptables and ip rules to 
> source-NAT guest traffic to that public IP.
> * When a public IP address is assigned to the router that is on a separate 
> subnet to the source-NAT IP, a new interface is configured, such as eth3, and 
> the IP is assigned to that interface.
> * This can result in eth3, eth4, eth5, etc. being created depending upon how 
> many public subnets the router has to work with.
> The above all works. The following, however, is currently not working:
> * Public interfaces should be set to DOWN on backup redundant routers. The 
> master.py script is responsible for setting public interfaces to UP during a 
> keepalived transition. Currently the check_is_up method of the CsIP class 
> brings all interfaces UP on both RvR. A proposed fix for this has been 
> discussed on the mailing list. That fix will leave public interfaces DOWN on 
> RvR allowing the keepalived transition to control the state of public 
> interfaces. Issue #1413 includes a commit that contradicts the proposed fix 
> so it is unclear what the current state of the code should be.
> * Newly created interfaces should be set to UP on master redundant routers. 
> Assuming public interfaces should be default be DOWN on an RvR we need to 
> accommodate the fact that, as interfaces are created, no keepalived 
> transition occurs. This means that assigning an IP from a new public subnet 
> will have no effect (as the interface will be down) until the network is 
> restarted with a "clean up."
> * Public interfaces other than eth2 do not forward traffic. There are two 
> iptables rules in the FORWARD chain of the filter table created for eth2 that 
> allow forwarding between eth2 and eth0. Equivalent rules are not created for 
> other public interfaces so forwarded traffic is dropped.
> * Outbound traffic from guest VMs does not honour static-NAT rules. Instead, 
> outbound traffic is source-NAT'd to the network's default source-NAT IP. New 
> connections from guests that are destined for public networks are processed 
> like so:
> 1. Traffic is matched against the following rule in the mangle table that 
> marks the connection with a 0x0:
> *mangle
> -A PREROUTING -i eth0 -m state --state NEW -j CONNMARK --set-xmark 
> 0x0/0x
> 2. There are no "ip rule" statements that match a connection marked 0x0, so 
> the kernel routes the connection via the default gateway. That gateway is on 
> the source-NAT subnet, so the connection is routed out of eth2.
> 3. The following iptables rules are then matched in the filter table:
> *filter
> -A FORWARD -i eth0 -o eth2 -j FW_OUTBOUND
> -A FW_OUTBOUND -j FW_EGRESS_RULES
> -A FW_EGRESS_RULES -j ACCEPT
> 4. Finally, the following rule is matched from the nat table, where the IP 
> address is the source-NAT IP:
> *nat
> -A POSTROUTING -o eth2 -j SNAT --to-source 123.4.5.67
>  
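
The interface-state behaviour proposed in the report above (public interfaces
left DOWN on the backup router so that keepalived transitions, via master.py,
control their state, instead of CsIP.check_is_up forcing everything UP on both
routers) can be sketched as follows. This is a hedged illustration, not the
actual CsIP/master.py code:

```python
# Hedged sketch of the proposed RvR interface policy: guest/control nics
# stay up on both routers; public nics follow the keepalived (VRRP) role.
def desired_link_state(device, is_public, is_master):
    """Return the link state a redundant VR should give `device`."""
    if not is_public:
        # eth0 (guest) and eth1 (control) must be up on master and backup.
        return "up"
    # Public nics (eth2, eth3, ...) come up only on the master; the backup
    # leaves them down until a keepalived transition promotes it.
    return "up" if is_master else "down"

# Backup router: every public nic stays DOWN until a transition occurs.
for dev in ("eth2", "eth3", "eth4"):
    assert desired_link_state(dev, is_public=True, is_master=False) == "down"
assert desired_link_state("eth0", is_public=False, is_master=False) == "up"
assert desired_link_state("eth3", is_public=True, is_master=True) == "up"
```

Note the second bullet in the report: because no keepalived transition fires
when a new public interface is created, a master router also needs to apply
this policy at interface-creation time, not only during transitions.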



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
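
The forwarding gap described in CLOUDSTACK-9339 above (FORWARD-chain rules
exist only for eth2, so traffic via eth3+ is dropped) suggests generating the
equivalent rules per public interface. A hedged sketch: only the FW_OUTBOUND
rule appears verbatim in the report; the RELATED,ESTABLISHED return-path rule
is an assumption about what the eth2 pair looks like.

```python
# Hedged sketch: emit the per-interface FORWARD rules that the report says
# exist only for eth2. Illustrative generator, not actual VR script code.
def forward_rules(public_devs, guest_dev="eth0"):
    rules = []
    for dev in public_devs:
        # Return traffic from the public nic back to the guest network
        # (assumed shape of the existing eth2 rule pair).
        rules.append("-A FORWARD -i %s -o %s -m state "
                     "--state RELATED,ESTABLISHED -j ACCEPT" % (dev, guest_dev))
        # Outbound guest traffic through this public nic, hooked into the
        # egress chain (this rule is quoted verbatim in the report for eth2).
        rules.append("-A FORWARD -i %s -o %s -j FW_OUTBOUND" % (guest_dev, dev))
    return rules

rules = forward_rules(["eth2", "eth3", "eth4"])
assert "-A FORWARD -i eth0 -o eth3 -j FW_OUTBOUND" in rules
assert len(rules) == 6
```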


[jira] [Commented] (CLOUDSTACK-9339) Virtual Routers don't handle Multiple Public Interfaces

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695221#comment-15695221
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9339:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1659
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.







[jira] [Commented] (CLOUDSTACK-9402) Nuage VSP Plugin : Support for underlay features (Source & Static NAT to underlay) including Marvin test coverage on master

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695231#comment-15695231
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9402:


Github user prashanthvarma commented on the issue:

https://github.com/apache/cloudstack/pull/1580
  
@rhtyd @jburwell Thank you for reviewing and helping us merge this PR, much 
appreciated !!


> Nuage VSP Plugin : Support for underlay features (Source & Static NAT to 
> underlay) including Marvin test coverage on master
> ---
>
> Key: CLOUDSTACK-9402
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9402
> Project: CloudStack
>  Issue Type: Task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation, Network Controller
>Affects Versions: 4.10.0.0
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
>
> Support for underlay features (Source & Static NAT to underlay) with Nuage 
> VSP SDN Plugin including Marvin test coverage for corresponding Source & 
> Static NAT features on master. Moreover, our Marvin tests are written in such 
> a way that they can validate our supported feature set with both Nuage VSP 
> SDN platform's overlay and underlay infra.
> PR contents:
> 1) Support for Source NAT to underlay feature on master with Nuage VSP SDN 
> Plugin.
> 2) Support for Static NAT to underlay feature on master with Nuage VSP SDN 
> Plugin.
> 3) Marvin test coverage for Source & Static NAT to underlay on master with 
> Nuage VSP SDN Plugin.
> 4) Enhancements on our existing Marvin test code (nuagevsp plugins directory).
> 5) PEP8 & PyFlakes compliance with our Marvin test code.
> Our Marvin test code PEP8 & PyFlakes compliance:
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
> CloudStack$
> CloudStack$ pyflakes test/integration/plugins/nuagevsp/*.py
> CloudStack$
> Validations:
> 1) Underlay infra (Source & Static NAT to underlay)
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> plugins/nuagevsp/test_nuage_source_nat.py
> Test results:
> Test Nuage VSP Isolated networks with different combinations of Source NAT 
> service providers ... === TestName: test_01_nuage_SourceNAT_isolated_networks 
> | Status : SUCCESS ===
> ok
> Test Nuage VSP VPC networks with different combinations of Source NAT service 
> providers ... === TestName: test_02_nuage_SourceNAT_vpc_networks | Status : 
> SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality for Isolated network by performing 
> (wget) traffic tests to the ... === TestName: 
> test_03_nuage_SourceNAT_isolated_network_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality for VPC network by performing (wget) 
> traffic tests to the Internet ... === TestName: 
> test_04_nuage_SourceNAT_vpc_network_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with different Egress 
> Firewall/Network ACL rules by performing (wget) ... === TestName: 
> test_05_nuage_SourceNAT_acl_rules_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with VM NIC operations by performing 
> (wget) traffic tests to the ... === TestName: 
> test_06_nuage_SourceNAT_vm_nic_operations_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with VM migration by performing 
> (wget) traffic tests to the Internet ... === TestName: 
> test_07_nuage_SourceNAT_vm_migration_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with network restarts by performing 
> (wget) traffic tests to the ... === TestName: 
> test_08_nuage_SourceNAT_network_restarts_traffic | Status : SUCCESS ===
> ok
> --
> Ran 8 tests in 13360.858s
> OK
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> plugins/nuagevsp/test_nuage_static_nat.py
> Test results:
> Test Nuage VSP Public IP Range creation and deletion ... === TestName: 
> test_01_nuage_StaticNAT_public_ip_range | Status : SUCCESS ===
> ok
> Test Nuage VSP Nuage Underlay (underlay networking) enabled Public IP Range 
> creation and deletion ... === TestName: 
> test_02_nuage_StaticNAT_underlay_public_ip_range | Status : SUCCESS ===
> ok
> Test Nuage VSP Isolated networks with different combinations of Static NAT 
> service providers ... === TestName: test_03_nuage_StaticNAT_isolated_networks 
> | Status : SUCCESS ===
> ok
> Test Nuage VSP VPC networks with different combinations of Static NAT service 
> providers ... === TestName: test_04_nuage_StaticNAT_vpc_networks | Status : 
> SUCCESS ===
> ok
> Test Nuage VSP Static NAT functionality fo

[jira] [Resolved] (CLOUDSTACK-9562) Linux Guest VM get wrong default route when there are multiple Nic with a nic with vpc

2016-11-25 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou resolved CLOUDSTACK-9562.
--
Resolution: Fixed

PR 1766 for CLOUDSTACK-9598 fixes this issue as well.

> Linux Guest VM get wrong default route when there are multiple Nic with a nic 
> with vpc
> --
>
> Key: CLOUDSTACK-9562
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9562
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: sudharma jain
>
> REPRO STEPS
> ==
> 1. Log as admin, create a VM CentOSx64 integrate with default network 
> "Admin-Network" (gateway is 10.1.1.1)
> 2. Create a VPC "TestVpc" and inside network named "TechNet" (gateway is 
> 10.0.0.1)
> 3. Add VPC network to VM as NIC 2
> 4. Reboot VM and examine VM default VR changed to VPC default gateway





[jira] [Commented] (CLOUDSTACK-9321) Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's hapro

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695234#comment-15695234
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9321:


Github user prashanthvarma commented on the issue:

https://github.com/apache/cloudstack/pull/1577
  
@rhtyd @jburwell Thank you for reviewing and helping us merge this PR, much 
appreciated !!


> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file
> --
>
> Key: CLOUDSTACK-9321
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9321
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Network Controller
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
>Priority: Critical
> Fix For: 4.9.1.0
>
>
> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file. Moreover, each time a new Internal LB rule is 
> added to the corresponding InternalLbVm instance, it replaces the existing 
> one. Thus, traffic corresponding to these unresolved (old) Internal LB rules 
> is getting dropped by the InternalLbVm instance.
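The fix described above amounts to aggregating Internal LB rules per source IP instead of replacing them; a minimal sketch of that idea (illustrative data structures, not the actual haproxy.cfg generator):

```python
from collections import defaultdict

def build_listeners(lb_rules):
    """Group Internal LB rules by (source IP, port) so that adding a new
    rule extends the set of listeners instead of replacing existing ones."""
    listeners = defaultdict(list)
    for rule in lb_rules:
        key = (rule["source_ip"], rule["source_port"])
        listeners[key].extend(rule["vm_ips"])
    return dict(listeners)

rules = [
    {"source_ip": "10.1.2.10", "source_port": 80, "vm_ips": ["10.1.2.5"]},
    {"source_ip": "10.1.2.10", "source_port": 443, "vm_ips": ["10.1.2.6"]},
]
listeners = build_listeners(rules)
# both rules for the same source IP survive, one listener per port
assert len(listeners) == 2
```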
> PR contents:
> 1) Fix for this bug.
> 2) Marvin test coverage for Internal LB feature on master with native ACS 
> setup (component directory) including validations for this bug fix.
> 3) Enhancements on our existing Internal LB Marvin test code (nuagevsp plugins 
> directory) to validate this bug fix.
> 4) PEP8 & PyFlakes compliance with the added Marvin test code.
> Added Marvin test code PEP8 & PyFlakes compliance:
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pyflakes 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
> CloudStack$
> CloudStack$ pyflakes test/integration/plugins/nuagevsp/*.py
> CloudStack$
> Validations:
> 1) Made sure that we didn't break any Public LB (VpcVirtualRouter) 
> functionality.
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> test/integration/component/test_vpc_network_lbrules.py
> Test results:
> Test case no 210 and 227: List Load Balancing Rules belonging to a VPC ... 
> === TestName: test_01_VPC_LBRulesListing | Status : SUCCESS ===
> ok
> Test Create LB rules for 1 network which is part of a two/multiple virtual 
> networks of a ... === TestName: test_02_VPC_CreateLBRuleInMultipleNetworks | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_03_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_04_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 214 : Delete few(not all) LB rules for a single virtual network 
> of a ... === TestName: test_05_VPC_CreateAndDeleteLBRule | Status : SUCCESS 
> ===
> ok
> Test Delete few(not all) LB rules for a single virtual network of ... === 
> TestName: test_06_VPC_CreateAndDeleteLBRuleVRStopppedState | Status : SUCCESS 
> ===
> ok
> Test Delete all LB rules for a single virtual network of a ... === TestName: 
> test_07_VPC_CreateAndDeleteAllLBRule | Status : SUCCESS ===
> ok
> Test Delete all LB rules for a single virtual network of a ... === TestName: 
> test_08_VPC_CreateAndDeleteAllLBRuleVRStoppedState | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule for a VM that belongs to 
> a different VPC. ... === TestName: test_09_VPC_LBRuleCreateFailMultipleVPC | 
> Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule for a VM that does not 
> belong to any VPC. ... === TestName: 
> test_10_VPC_FailedToCreateLBRuleNonVPCNetwork | Status : SUCCESS ===
> ok
> Test case no 217 and 236: User should not be allowed to create a LB rule for 
> a ... === TestName: test_11_VPC_LBRuleCreateNotAllowed | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule on an Ipaddress that 
> Source Nat enabled. ... === TestName: test_12_VPC_LBRuleCreateFailForRouterIP 
> | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule on an Ipaddress that

[jira] [Commented] (CLOUDSTACK-9403) Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin test coverage

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695244#comment-15695244
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9403:


Github user prashanthvarma commented on the issue:

https://github.com/apache/cloudstack/pull/1579
  
@rhtyd @jburwell We will rebase this PR with the latest master asap, and 
update you.

As mentioned in the previous comments, we wanted to merge this PR after 
merging PR #1580 as there are some feature interactions and dependencies, which 
we will fix during the rebase of this PR with the latest master.


> Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin 
> test coverage
> 
>
> Key: CLOUDSTACK-9403
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9403
> Project: CloudStack
>  Issue Type: Task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation, Network Controller
>Reporter: Rahul Singal
>Assignee: Nick Livens
>
> This is the first phase of support for shared networks in CloudStack through 
> the NuageVsp network plugin. A shared network is a type of virtual network 
> that is shared between multiple accounts, i.e. a shared network can be 
> accessed by virtual machines that belong to many different accounts. This 
> basic functionality will be supported for the below common use cases:
> - A shared network can be used for monitoring purposes. It can be assigned to 
> a domain and used for monitoring VMs belonging to all accounts in that domain.
> - Public accessibility of a shared network.
> The current implementation with the NuageVsp plugin supports overlapping IP 
> addresses, public access, and adding IP ranges in a shared network.
> In VSD, this is implemented as follows:
> - In order to have tenant isolation for shared networks, we create a Shared 
> L3 Subnet for each shared network and instantiate it across the relevant 
> enterprises. A shared network will only exist under an enterprise when it is 
> needed, i.e. when the first VM is spun up under that ACS domain inside that 
> shared network.
> - For a publicly accessible shared network, a floating IP subnet pool is also 
> created in VSD along with everything mentioned in the above point.
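The lazy instantiation described above can be sketched as follows: the shared network is only materialized under an enterprise (ACS domain) when the first VM from that domain is deployed on it. All names here are illustrative, not the plugin's actual API:

```python
# Sketch: instantiate a Shared L3 Subnet under an enterprise only when the
# first VM of that domain is deployed on the shared network.
class SharedNetworkOrchestrator:
    def __init__(self):
        self._instantiated = set()  # (enterprise, network) pairs in VSD
        self.created = []           # record of Shared L3 Subnet creations

    def _create_shared_l3_subnet(self, enterprise, network):
        self.created.append((enterprise, network))

    def deploy_vm(self, enterprise, network):
        key = (enterprise, network)
        if key not in self._instantiated:
            # first VM of this domain on this shared network
            self._create_shared_l3_subnet(enterprise, network)
            self._instantiated.add(key)

orch = SharedNetworkOrchestrator()
orch.deploy_vm("acme", "shared-net-1")
orch.deploy_vm("acme", "shared-net-1")    # no second create
orch.deploy_vm("globex", "shared-net-1")  # new enterprise, new instantiation
assert orch.created == [("acme", "shared-net-1"), ("globex", "shared-net-1")]
```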
> PR contents:
> 1) Support for shared networks with tenant isolation on master with Nuage VSP 
> SDN Plugin.
> 2) Support for shared networks with publicly accessible IP ranges.
> 3) Marvin test coverage for shared networks on master with Nuage VSP SDN 
> Plugin.
> 4) Enhancements on our existing Marvin test code (nuagevsp plugins directory).
> 5) PEP8 & PyFlakes compliance with our Marvin test code.
> Test results:
> Valiate that ROOT admin is NOT able to deploy a VM for a user in ROOT domain 
> in a shared network with ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_ROOTuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is NOT able to deploy a VM for a admin user in a 
> shared network with ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_differentdomain | 
> Status : SUCCESS ===
> ok
> Valiate that ROOT admin is NOT able to deploy a VM for admin user in the same 
> domain but in a ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_domainadminuser | 
> Status : SUCCESS ===
> ok
> Valiate that ROOT admin is NOT able to deploy a VM for user in the same 
> domain but in a different ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_domainuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is able to deploy a VM for regular user in a shared 
> network with scope=account ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_user | Status : SUCCESS 
> ===
> ok
> Valiate that ROOT admin is able to deploy a VM for user in ROOT domain in a 
> shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_ROOTuser | Status : SUCCESS 
> ===
> ok
> Valiate that ROOT admin is able to deploy a VM for a domain admin users in a 
> shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_domainadminuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is able to deploy a VM for other users in a shared 
> network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_domainuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is able to deploy a VM for admin user in a domain in 
> a shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_subdomainadminuser | Status 
> : SUCCESS ===
> ok
> Valiate that 

[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695279#comment-15695279
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9538:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1710
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-252


> Deleting Snapshot From Primary Storage Fails on RBD Storage if you already 
> delete vm's itself
> -
>
> Key: CLOUDSTACK-9538
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9538
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Snapshot, Storage Controller
>Affects Versions: 4.9.0
> Environment: Ubuntu 14.04 Management Server +  Ubuntu 14.04 KVM
>Reporter: Özhan Rüzgar Karaman
>
> Hi;
> We plan to store VM snapshots as VM backups on secondary storage even after 
> the related VM has been destroyed/expunged. There was a bug that blocked this 
> from working, and it was fixed with CLOUDSTACK-9297.
> With the 4.9 release we expected this to work on our 4.9 ACS environment, but 
> we noticed that, because we are using RBD as primary storage, one more minor 
> bug needs to be fixed first.
> The problem occurs because of the CLOUDSTACK-8302 fix in the 4.9 release. If 
> you destroy a VM whose volumes are on RBD primary storage, it also deletes 
> any related snapshots of that VM on the primary RBD storage, so after the VM 
> is destroyed no disk file or snapshot file remains on the RBD storage. This 
> is good for cleanup purposes on primary storage, but the 
> XenserverSnapshotStrategy.deleteSnapshot method does not expect it.
> org.apache.cloudstack.storage.snapshot.XenserverSnapshotStrategy.deleteSnapshot
>  receives an exception. The code tries 10 times on the KVM node to remove the 
> RBD snapshot, but because there is no snapshot on the RBD side it gets an 
> exception after 10 retries; it also spends nearly 5 minutes trying to delete 
> the snapshots and then ends with a "Failed to delete snapshot" error.
> I think we need to skip snapshot cleanup on primary storage, only for 
> RBD-type primary storage, if the related VM has already been destroyed 
> (because the VM destroy stage removed all snapshots related to the VM on 
> primary storage, so there is nothing left to do there).
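The proposed behaviour can be sketched in a few lines (hypothetical names, not CloudStack's actual API): skip the primary-storage delete for RBD-backed snapshots whose VM is already gone, instead of retrying ten times:

```python
# Hypothetical sketch: decide whether to attempt snapshot cleanup on
# primary storage. For RBD pools, destroying the VM already removed the
# snapshots, so retrying the delete only wastes ~5 minutes and then fails.
def should_delete_on_primary(pool_type, vm_destroyed):
    if pool_type == "RBD" and vm_destroyed:
        return False  # nothing left on primary storage to delete
    return True

assert should_delete_on_primary("RBD", vm_destroyed=True) is False
assert should_delete_on_primary("RBD", vm_destroyed=False) is True
assert should_delete_on_primary("NFS", vm_destroyed=True) is True
```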
> The tests below make this issue clearer.
> 1) We create a vm with 3 snapshots on ACS.
> mysql> select * from snapshot_store_ref where snapshot_id in (93,94,95);
> +-+--+-+-+--+++-+---+++---+--+-+-+---+
> | id  | store_id | snapshot_id | created | last_updated | job_id 
> | store_role | size| physical_size | parent_snapshot_id | 
> install_path  
>  | state | update_count | ref_cnt | updated | volume_id |
> +-+--+-+-+--+++-+---+++---+--+-+-+---+
> | 185 |1 |  93 | 2016-10-12 10:13:44 | NULL | NULL   
> | Primary| 28991029248 |   28991029248 |  0 | 
> cst4/bb9ca3c7-96d6-4465-85b5-cd01f4d635f2/54008bf3-43dd-469d-91a7-4acd146d7b84
>  | Ready |2 |   0 | 2016-10-12 10:13:45 |  4774 |
> | 186 |1 |  93 | 2016-10-12 10:13:45 | NULL | NULL   
> | Image  | 28991029248 |   28991029248 |  0 | 
> snapshots/2/4774/54008bf3-43dd-469d-91a7-4acd146d7b84 
>  | Ready |2 |   0 | 2016-10-12 10:15:04 |  4774 |
> | 187 |1 |  94 | 2016-10-12 10:15:38 | NULL | NULL   
> | Primary| 28991029248 |   28991029248 |  0 | 
> cst4/bb9ca3c7-96d6-4465-85b5-cd01f4d635f2/45fc4f44-b377-49c0-9264-5d813fefe93f
>  | Ready |2 |   0 | 2016-10-12 10:15:39 |  4774 |
> | 188 |1 |  94 | 2016-10-12 10:15:39 | NULL | NULL   
> | Image  | 28991029248 |   28991029248 |  0 | 
> snapshots/2/4774/45fc4f44-b377-49c0-9264-5d813fefe93f 
>  | Ready |2 |   0 | 2016-10-12 10:16:52 |  4774 |
> | 189 |1 |  

[jira] [Created] (CLOUDSTACK-9615) Ingress Firewall Rules with blank start and end ports don't get applied

2016-11-25 Thread Jayapal Reddy (JIRA)
Jayapal Reddy created CLOUDSTACK-9615:
-

 Summary: Ingress Firewall Rules with blank start and end ports 
don't get applied
 Key: CLOUDSTACK-9615
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9615
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Network Controller
Reporter: Jayapal Reddy
Assignee: Jayapal Reddy
 Fix For: 4.9.2.0


   1. Navigate to Network -> View IP Addresses -> Source NAT -> Configuration 
-> Firewall.
   2. Add a new TCP rule without giving any start port or end port.
   3. The rule creation succeeds in the API, but the rule is not applied in 
the VR.
   4. Only when specific ports are provided is the rule applied.
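One plausible fix, sketched below (illustrative names, not the actual CloudStack code): when no start/end port is given for a TCP/UDP ingress rule, expand it to the full port range before programming the VR, so the rule is never silently skipped:

```python
FULL_RANGE = (1, 65535)

def normalize_ports(protocol, start_port=None, end_port=None):
    """Return a concrete (start, end) port tuple for the VR to apply."""
    if protocol.lower() in ("tcp", "udp"):
        if start_port is None and end_port is None:
            return FULL_RANGE              # blank ports mean "all ports"
        if end_port is None:
            return (start_port, start_port)
    return (start_port, end_port)

assert normalize_ports("TCP") == (1, 65535)
assert normalize_ports("tcp", 22) == (22, 22)
assert normalize_ports("udp", 500, 500) == (500, 500)
```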




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-9616) KVM iSCSI adaptor: allow other adaptors to disconnect disks

2016-11-25 Thread Peter Pentchev (JIRA)
Peter Pentchev created CLOUDSTACK-9616:
--

 Summary: KVM iSCSI adaptor: allow other adaptors to disconnect 
disks
 Key: CLOUDSTACK-9616
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9616
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: KVM
 Environment: KVM hypervisor, more than one storage adaptor
Reporter: Peter Pentchev


Hi,

Thanks for developing CloudStack!

If the following conditions are met:
- more than one storage adaptor is defined for a KVM storage pool
- a disconnectPhysicalDiskByPath() request comes in for the pool (the 
KVMStoragePoolManager.disconnectPhysicalDiskByPath() method is invoked)
- the loop in that method finds the iSCSI adaptor first
- the disk being disconnected is *not* managed by the iSCSI adaptor

then the IscsiAdmStorageAdaptor.disconnectPhysicalDiskByPath() will return 
a "true" value, signalling that the request has been handled, while it has 
actually been silently ignored.

A patch making the iSCSI storage adaptor return "false" (thus allowing other 
storage adaptors to process the request and actually disconnect the disk) is 
available as a GitHub pull request at 
https://github.com/apache/cloudstack/pull/1780
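The behaviour described above can be sketched as follows (illustrative names, not the actual KVMStoragePoolManager code): each adaptor should return True only when it actually owns and disconnects the disk, so the dispatch loop can keep trying other adaptors:

```python
class IscsiAdaptor:
    def disconnect_by_path(self, path):
        if not path.startswith("/dev/disk/by-path/ip-"):  # not an iSCSI disk
            return False   # the fix: let other adaptors handle it
        # ... perform the iscsiadm logout here ...
        return True

class RbdAdaptor:
    def disconnect_by_path(self, path):
        if not path.startswith("rbd:"):
            return False
        # ... unmap the RBD device here ...
        return True

def disconnect(adaptors, path):
    # dispatch loop analogous to disconnectPhysicalDiskByPath()
    for adaptor in adaptors:
        if adaptor.disconnect_by_path(path):
            return True    # genuinely handled by this adaptor
    return False

adaptors = [IscsiAdaptor(), RbdAdaptor()]
assert disconnect(adaptors, "rbd:pool/volume") is True
assert disconnect(adaptors, "/mnt/nfs/disk.qcow2") is False
```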

Thanks in advance for your time and consideration!

Best regards,
Peter






[jira] [Created] (CLOUDSTACK-9617) Enabling Remote access Vpn Fails when there is a portforwarding rule of the reserved ports ( 1701 , 500 , 4500) under TCP protocol.

2016-11-25 Thread Jayapal Reddy (JIRA)
Jayapal Reddy created CLOUDSTACK-9617:
-

 Summary: Enabling Remote access Vpn Fails when there is a 
portforwarding rule of the reserved ports ( 1701 , 500 , 4500) under TCP 
protocol.
 Key: CLOUDSTACK-9617
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9617
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Network Controller
Reporter: Jayapal Reddy
Assignee: Jayapal Reddy
 Fix For: 4.9.2.0


Navigate to Network -> Configuration -> Port Forwarding.
Create a new rule with 1701 as both the private and public port for the TCP 
protocol and assign a VM.
Now enable Remote Access VPN on the network.

Observation:
Enabling VPN access fails with the following error: "The range 
specified, 1701-1701, conflicts with rule 53 which has 1701-1701"

Expected result:

Enabling VPN should succeed, as the port forwarding rule that was added uses 
the TCP protocol, while the firewall rules populated when VPN is enabled use 
the UDP protocol.
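A sketch of the expected conflict check (hypothetical helper, not the actual CloudStack code): two rules should only conflict when their port ranges overlap and their protocols match, so a TCP 1701 port forward must not block the UDP 500/1701/4500 rules that Remote Access VPN installs:

```python
def conflicts(rule_a, rule_b):
    (proto_a, start_a, end_a) = rule_a
    (proto_b, start_b, end_b) = rule_b
    if proto_a != proto_b:
        return False          # TCP and UDP port ranges can coexist
    return start_a <= end_b and start_b <= end_a  # range overlap test

existing_pf = ("tcp", 1701, 1701)     # the user's port-forwarding rule
vpn_rules = [("udp", 500, 500), ("udp", 1701, 1701), ("udp", 4500, 4500)]

# none of the UDP VPN rules should conflict with the TCP rule...
assert all(not conflicts(existing_pf, r) for r in vpn_rules)
# ...but an identical TCP range still does
assert conflicts(("tcp", 1701, 1701), existing_pf)
```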





[jira] [Commented] (CLOUDSTACK-9339) Virtual Routers don't handle Multiple Public Interfaces

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695293#comment-15695293
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9339:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1659
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-254


> Virtual Routers don't handle Multiple Public Interfaces
> ---
>
> Key: CLOUDSTACK-9339
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9339
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
>Reporter: dsclose
>Assignee: Murali Reddy
>  Labels: firewall, nat, router
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> There are a series of issues with the way Virtual Routers manage multiple 
> public interfaces. These are more pronounced on redundant virtual router 
> setups. I have not attempted to examine these issues in a VPC context. 
> Outside of a VPC context, however, the following is expected behaviour:
> * eth0 connects the router to the guest network.
> * In RvR setups, keepalived manages the guests' gateway IP as a virtual IP on 
> eth0.
> * eth1 provides a local link to the hypervisor, allowing Cloudstack to issue 
> commands to the router.
> * eth2 is the router's public interface. By default, a single public IP will 
> be setup on eth2 along with the necessary iptables and ip rules to source-NAT 
> guest traffic to that public IP.
> * When a public IP address is assigned to the router that is on a separate 
> subnet to the source-NAT IP, a new interface is configured, such as eth3, and 
> the IP is assigned to that interface.
> * This can result in eth3, eth4, eth5, etc. being created depending upon how 
> many public subnets the router has to work with.
> The above all works. The following, however, is currently not working:
> * Public interfaces should be set to DOWN on backup redundant routers. The 
> master.py script is responsible for setting public interfaces to UP during a 
> keepalived transition. Currently the check_is_up method of the CsIP class 
> brings all interfaces UP on both RvR. A proposed fix for this has been 
> discussed on the mailing list. That fix will leave public interfaces DOWN on 
> RvR allowing the keepalived transition to control the state of public 
> interfaces. Issue #1413 includes a commit that contradicts the proposed fix 
> so it is unclear what the current state of the code should be.
> * Newly created interfaces should be set to UP on master redundant routers. 
> Assuming public interfaces should by default be DOWN on an RvR, we need to 
> accommodate the fact that, as interfaces are created, no keepalived 
> transition occurs. This means that assigning an IP from a new public subnet 
> will have no effect (as the interface will be down) until the network is 
> restarted with a "clean up."
> * Public interfaces other than eth2 do not forward traffic. There are two 
> iptables rules in the FORWARD chain of the filter table created for eth2 that 
> allow forwarding between eth2 and eth0. Equivalent rules are not created for 
> other public interfaces so forwarded traffic is dropped.
> * Outbound traffic from guest VMs does not honour static-NAT rules. Instead, 
> outbound traffic is source-NAT'd to the network's default source-NAT IP. New 
> connections from guests that are destined for public networks are processed 
> like so:
> 1. Traffic is matched against the following rule in the mangle table that 
> marks the connection with a 0x0:
> *mangle
> -A PREROUTING -i eth0 -m state --state NEW -j CONNMARK --set-xmark 
> 0x0/0x
> 2. There are no "ip rule" statements that match a connection marked 0x0, so 
> the kernel routes the connection via the default gateway. That gateway is on 
> the source-NAT subnet, so the connection is routed out of eth2.
> 3. The following iptables rules are then matched in the filter table:
> *filter
> -A FORWARD -i eth0 -o eth2 -j FW_OUTBOUND
> -A FW_OUTBOUND -j FW_EGRESS_RULES
> -A FW_EGRESS_RULES -j ACCEPT
> 4. Finally, the following rule is matched from the nat table, where the IP 
> address is the source-NAT IP:
> *nat
> -A POSTROUTING -o eth2 -j SNAT --to-source 123.4.5.67
>  
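The intended behaviour for step 4 above can be sketched like this (hypothetical helper, not the VR's actual code): outbound traffic from a guest with a static-NAT mapping should be SNAT'd to its static-NAT IP, falling back to the network's default source-NAT IP only when no mapping exists. The IP addresses below are illustrative:

```python
def pick_snat_ip(guest_ip, static_nat_map, default_source_nat_ip):
    # Honour static-NAT first; fall back to the default source-NAT IP.
    return static_nat_map.get(guest_ip, default_source_nat_ip)

static_nat = {"10.1.1.50": "123.4.5.68"}   # guest IP -> static-NAT public IP
assert pick_snat_ip("10.1.1.50", static_nat, "123.4.5.67") == "123.4.5.68"
assert pick_snat_ip("10.1.1.51", static_nat, "123.4.5.67") == "123.4.5.67"
```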





[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695300#comment-15695300
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9538:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1710
  
@blueorangutan test


> Deleting Snapshot From Primary Storage Fails on RBD Storage if you already 
> delete vm's itself
> -
>
> Key: CLOUDSTACK-9538
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9538
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Snapshot, Storage Controller
>Affects Versions: 4.9.0
> Environment: Ubuntu 14.04 Management Server +  Ubuntu 14.04 KVM
>Reporter: Özhan Rüzgar Karaman
>
> Hi;
> We plan to store VM snapshots as VM backups on secondary storage even after 
> the related VM has been destroyed/expunged. There was a bug that blocked this 
> from working, and it was fixed with CLOUDSTACK-9297.
> With the 4.9 release we expected this to work on our 4.9 ACS environment, but 
> we noticed that, because we are using RBD as primary storage, one more minor 
> bug needs to be fixed first.
> The problem occurs because of the CLOUDSTACK-8302 fix in the 4.9 release. If 
> you destroy a VM whose volumes are on RBD primary storage, it also deletes 
> any related snapshots of that VM on the primary RBD storage, so after the VM 
> is destroyed no disk file or snapshot file remains on the RBD storage. This 
> is good for cleanup purposes on primary storage, but the 
> XenserverSnapshotStrategy.deleteSnapshot method does not expect it.
> org.apache.cloudstack.storage.snapshot.XenserverSnapshotStrategy.deleteSnapshot
>  receives an exception. The code tries 10 times on the KVM node to remove the 
> RBD snapshot, but because there is no snapshot on the RBD side it gets an 
> exception after 10 retries; it also spends nearly 5 minutes trying to delete 
> the snapshots and then ends with a "Failed to delete snapshot" error.
> I think we need to skip snapshot cleanup on primary storage, only for 
> RBD-type primary storage, if the related VM has already been destroyed 
> (because the VM destroy stage removed all snapshots related to the VM on 
> primary storage, so there is nothing left to do there).
> The tests below make this issue clearer.
> 1) We create a vm with 3 snapshots on ACS.
> mysql> select * from snapshot_store_ref where snapshot_id in (93,94,95);
> +-+--+-+-+--+++-+---+++---+--+-+-+---+
> | id  | store_id | snapshot_id | created | last_updated | job_id 
> | store_role | size| physical_size | parent_snapshot_id | 
> install_path  
>  | state | update_count | ref_cnt | updated | volume_id |
> +-+--+-+-+--+++-+---+++---+--+-+-+---+
> | 185 |1 |  93 | 2016-10-12 10:13:44 | NULL | NULL   
> | Primary| 28991029248 |   28991029248 |  0 | 
> cst4/bb9ca3c7-96d6-4465-85b5-cd01f4d635f2/54008bf3-43dd-469d-91a7-4acd146d7b84
>  | Ready |2 |   0 | 2016-10-12 10:13:45 |  4774 |
> | 186 |1 |  93 | 2016-10-12 10:13:45 | NULL | NULL   
> | Image  | 28991029248 |   28991029248 |  0 | 
> snapshots/2/4774/54008bf3-43dd-469d-91a7-4acd146d7b84 
>  | Ready |2 |   0 | 2016-10-12 10:15:04 |  4774 |
> | 187 |1 |  94 | 2016-10-12 10:15:38 | NULL | NULL   
> | Primary| 28991029248 |   28991029248 |  0 | 
> cst4/bb9ca3c7-96d6-4465-85b5-cd01f4d635f2/45fc4f44-b377-49c0-9264-5d813fefe93f
>  | Ready |2 |   0 | 2016-10-12 10:15:39 |  4774 |
> | 188 |1 |  94 | 2016-10-12 10:15:39 | NULL | NULL   
> | Image  | 28991029248 |   28991029248 |  0 | 
> snapshots/2/4774/45fc4f44-b377-49c0-9264-5d813fefe93f 
>  | Ready |2 |   0 | 2016-10-12 10:16:52 |  4774 |
> | 189 |1 |  95 | 2016-10-12 10:17:08 | NULL  

[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695301#comment-15695301
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9538:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1710
  
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been 
kicked to run smoke tests


> Deleting Snapshot From Primary Storage Fails on RBD Storage if you already 
> delete vm's itself
> -
>
> Key: CLOUDSTACK-9538
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9538
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Snapshot, Storage Controller
>Affects Versions: 4.9.0
> Environment: Ubuntu 14.04 Management Server +  Ubuntu 14.04 KVM
>Reporter: Özhan Rüzgar Karaman
>
> Hi;
> We plan to keep VM snapshots as VM backups on secondary storage even after
> the related VM has been destroyed/expunged. A bug that blocked this from
> working was fixed with CLOUDSTACK-9297, so with the 4.9 release we expected
> it to work on our 4.9 ACS environment, but we noticed that one more minor
> bug needs fixing because we use RBD as primary storage.
> The problem occurs because of the CLOUDSTACK-8302 fix that went into the
> 4.9 release. If you destroy a VM whose disks are on RBD primary storage,
> that fix also deletes all of the VM's snapshots on the primary RBD storage,
> so after the VM destroy there is no disk file or snapshot file left on RBD.
> This is good for cleanup purposes on primary storage, but
> org.apache.cloudstack.storage.snapshot.XenserverSnapshotStrategy.deleteSnapshot
> does not expect it. That method retries the RBD snapshot removal 10 times
> on the KVM node, but because the snapshot no longer exists on the RBD side
> it gets an exception on every attempt, spends nearly 5 minutes on the
> retries, and finally fails with a "Failed to delete snapshot" error.
> I think we need to skip snapshot cleanup on primary storage, for RBD-type
> primary storage only, when the related VM has already been destroyed (the
> VM destroy stage already removed all of the VM's snapshots on primary
> storage, so there is no need to take any action there).
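The suggested behaviour can be sketched as follows. This is a minimal illustration with hypothetical names (it is not the actual XenserverSnapshotStrategy code): a missing RBD snapshot is treated as already cleaned up instead of being retried ten times.

```python
# Hypothetical sketch of the proposed fix: skip primary-storage cleanup
# when the snapshot is already gone (e.g. RBD removed it at VM destroy).

class SnapshotNotFound(Exception):
    """Raised by the (hypothetical) storage driver when no snapshot exists."""

def delete_primary_snapshot(driver, snapshot_id, retries=10):
    for attempt in range(1, retries + 1):
        try:
            driver.delete(snapshot_id)
            return "deleted"
        except SnapshotNotFound:
            # The VM destroy stage already removed the RBD snapshots:
            # succeed immediately instead of burning ~5 minutes of retries.
            return "already-absent"
        except IOError:
            if attempt == retries:
                raise  # a genuine storage error still surfaces
    return "unreachable"

class FakeRbdDriver:
    """Stand-in driver simulating RBD after the VM was destroyed."""
    def delete(self, snapshot_id):
        raise SnapshotNotFound(snapshot_id)

print(delete_primary_snapshot(FakeRbdDriver(), 93))  # already-absent
```

With this shape, the ten-retry loop is reserved for transient storage errors, while "snapshot not found" ends the operation successfully.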
> The tests below make the issue concrete.
> 1) We create a VM with 3 snapshots on ACS.
> mysql> select * from snapshot_store_ref where snapshot_id in (93,94,95);
> (last_updated and job_id are NULL for all rows)
> id=185 store_id=1 snapshot_id=93 created=2016-10-12 10:13:44 store_role=Primary size=28991029248 physical_size=28991029248 parent_snapshot_id=0 install_path=cst4/bb9ca3c7-96d6-4465-85b5-cd01f4d635f2/54008bf3-43dd-469d-91a7-4acd146d7b84 state=Ready update_count=2 ref_cnt=0 updated=2016-10-12 10:13:45 volume_id=4774
> id=186 store_id=1 snapshot_id=93 created=2016-10-12 10:13:45 store_role=Image size=28991029248 physical_size=28991029248 parent_snapshot_id=0 install_path=snapshots/2/4774/54008bf3-43dd-469d-91a7-4acd146d7b84 state=Ready update_count=2 ref_cnt=0 updated=2016-10-12 10:15:04 volume_id=4774
> id=187 store_id=1 snapshot_id=94 created=2016-10-12 10:15:38 store_role=Primary size=28991029248 physical_size=28991029248 parent_snapshot_id=0 install_path=cst4/bb9ca3c7-96d6-4465-85b5-cd01f4d635f2/45fc4f44-b377-49c0-9264-5d813fefe93f state=Ready update_count=2 ref_cnt=0 updated=2016-10-12 10:15:39 volume_id=4774
> id=188 store_id=1 snapshot_id=94 created=2016-10-12 10:15:39 store_role=Image size=28991029248 physical_size=28991029248 parent_snapshot_id=0 install_path=snapshots/2/4774/45fc4f44-b377-49c0-9264-5d813fefe93f state=Ready update_count=2 ref_cnt=0 updated=2016-10-12 10:16:52 volume_id=4774
> id=189 store_id=1 snapshot_id=95 created=2016-10-12 10:17:08 ...

[jira] [Commented] (CLOUDSTACK-9339) Virtual Routers don't handle Multiple Public Interfaces

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695308#comment-15695308
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9339:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1659
  
@blueorangutan test


> Virtual Routers don't handle Multiple Public Interfaces
> ---
>
> Key: CLOUDSTACK-9339
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9339
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
>Reporter: dsclose
>Assignee: Murali Reddy
>  Labels: firewall, nat, router
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> There are a series of issues with the way Virtual Routers manage multiple 
> public interfaces. These are more pronounced on redundant virtual router 
> setups. I have not attempted to examine these issues in a VPC context. 
> Outside of a VPC context, however, the following is expected behaviour:
> * eth0 connects the router to the guest network.
> * In RvR setups, keepalived manages the guests' gateway IP as a virtual IP on 
> eth0.
> * eth1 provides a local link to the hypervisor, allowing Cloudstack to issue 
> commands to the router.
> * eth2 is the router's public interface. By default, a single public IP will 
> be set up on eth2 along with the necessary iptables and ip rules to source-NAT 
> guest traffic to that public IP.
> * When a public IP address is assigned to the router that is on a separate 
> subnet to the source-NAT IP, a new interface is configured, such as eth3, and 
> the IP is assigned to that interface.
> * This can result in eth3, eth4, eth5, etc. being created depending upon how 
> many public subnets the router has to work with.
> The above all works. The following, however, is currently not working:
> * Public interfaces should be set to DOWN on backup redundant routers. The 
> master.py script is responsible for setting public interfaces to UP during a 
> keepalived transition. Currently the check_is_up method of the CsIP class 
> brings all interfaces UP on both RvR. A proposed fix for this has been 
> discussed on the mailing list. That fix will leave public interfaces DOWN on 
> RvR allowing the keepalived transition to control the state of public 
> interfaces. Issue #1413 includes a commit that contradicts the proposed fix 
> so it is unclear what the current state of the code should be.
> * Newly created interfaces should be set to UP on master redundant routers. 
> Assuming public interfaces are by default DOWN on an RvR, we need to 
> accommodate the fact that, as interfaces are created, no keepalived 
> transition occurs. This means that assigning an IP from a new public subnet 
> will have no effect (as the interface will be down) until the network is 
> restarted with a "clean up."
> * Public interfaces other than eth2 do not forward traffic. There are two 
> iptables rules in the FORWARD chain of the filter table created for eth2 that 
> allow forwarding between eth2 and eth0. Equivalent rules are not created for 
> other public interfaces so forwarded traffic is dropped.
> * Outbound traffic from guest VMs does not honour static-NAT rules. Instead, 
> outbound traffic is source-NAT'd to the network's default source-NAT IP. New 
> connections from guests that are destined for public networks are processed 
> like so:
> 1. Traffic is matched against the following rule in the mangle table, which 
> marks the connection with the mark 0x0:
> *mangle
> -A PREROUTING -i eth0 -m state --state NEW -j CONNMARK --set-xmark 
> 0x0/0x
> 2. There are no "ip rule" statements that match a connection marked 0x0, so 
> the kernel routes the connection via the default gateway. That gateway is on 
> the source-NAT subnet, so the connection is routed out of eth2.
> 3. The following iptables rules are then matched in the filter table:
> *filter
> -A FORWARD -i eth0 -o eth2 -j FW_OUTBOUND
> -A FW_OUTBOUND -j FW_EGRESS_RULES
> -A FW_EGRESS_RULES -j ACCEPT
> 4. Finally, the following rule is matched from the nat table, where the IP 
> address is the source-NAT IP:
> *nat
> -A POSTROUTING -o eth2 -j SNAT --to-source 123.4.5.67
>  
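The missing-forwarding point in the report can be illustrated with a short sketch. The helper below is hypothetical; the guest-to-public rule text is taken from the eth2 rules quoted above, and the return-traffic rule is an assumed counterpart. It simply emits the same FORWARD-chain pair for every public interface rather than only eth2:

```python
# Sketch: generate the FORWARD-chain rules that exist for eth2 but are
# reported missing for eth3, eth4, ..., so that every public interface
# forwards traffic between the guest network and its public subnet.

def forward_rules(public_ifaces, guest_iface="eth0"):
    rules = []
    for pub in public_ifaces:
        # guest -> public, through the egress pipeline (as for eth2)
        rules.append(f"-A FORWARD -i {guest_iface} -o {pub} -j FW_OUTBOUND")
        # public -> guest, return traffic only (assumed counterpart rule)
        rules.append(f"-A FORWARD -i {pub} -o {guest_iface} "
                     f"-m state --state RELATED,ESTABLISHED -j ACCEPT")
    return rules

for rule in forward_rules(["eth2", "eth3", "eth4"]):
    print(rule)
```

Generating the pair per interface, instead of hard-coding eth2, would stop forwarded traffic on additional public subnets from being dropped.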



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9339) Virtual Routers don't handle Multiple Public Interfaces

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695310#comment-15695310
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9339:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1659
  
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been 
kicked to run smoke tests





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9617) Enabling Remote access Vpn Fails when there is a portforwarding rule of the reserved ports ( 1701 , 500 , 4500) under TCP protocol.

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695366#comment-15695366
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9617:


GitHub user jayapalu opened a pull request:

https://github.com/apache/cloudstack/pull/1782

CLOUDSTACK-9617: Fixed enabling remote access after PF configured on …

Enabling remote access VPN fails when there is a port forwarding rule on the 
reserved ports (1701, 500, 4500) under TCP protocol on the Source NAT IP

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jayapalu/cloudstack CLOUDSTACK-9617

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1782.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1782


commit c4f855deb20ee11a4022d400e5db5bb10259a435
Author: Jayapalu 
Date:   2016-11-25T08:53:03Z

CLOUDSTACK-9617: Fixed enabling remote access after PF configured on vpn 
tcp ports




> Enabling Remote access Vpn Fails when there is a portforwarding rule of the 
> reserved ports ( 1701 , 500 , 4500) under TCP protocol.
> ---
>
> Key: CLOUDSTACK-9617
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9617
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
> Fix For: 4.9.2.0
>
>
> Navigate to Network -> Configuration -> Portforwarding
> Create a new rule for port 1701 under both private and public ports for TCP 
> protocol and assign a VM.
> Now enable remote access VPN on the network.
> Observation:
> Enabling VPN access fails with the following error: "The range 
> specified, 1701-1701, conflicts with rule 53 which has 1701-1701"
> Expected result:
> Enabling VPN should be successful, as the port forwarding rule added is TCP 
> protocol, and the firewall rules populated when VPN is enabled are UDP 
> protocol.
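The expected behaviour amounts to a protocol-aware conflict check. A minimal sketch (hypothetical function, not the CloudStack implementation): a TCP port forwarding rule on 1701 should not block the UDP firewall rules (500, 4500, 1701) that remote access VPN installs.

```python
# Hypothetical sketch of a protocol-aware port-conflict check.

def conflicts(existing_rules, proto, start, end):
    """Return True if (proto, start-end) overlaps an existing rule."""
    for r in existing_rules:
        if r["proto"] != proto:
            continue  # different protocols never clash on the same IP
        if not (end < r["start"] or start > r["end"]):
            return True  # port ranges overlap on the same protocol
    return False

pf_rules = [{"proto": "tcp", "start": 1701, "end": 1701}]
# VPN's UDP 1701 must be allowed despite the TCP 1701 rule:
print(conflicts(pf_rules, "udp", 1701, 1701))  # False
print(conflicts(pf_rules, "tcp", 1701, 1701))  # True
```

Comparing the protocol before comparing port ranges is what lets the UDP VPN ports coexist with TCP port forwarding rules on the same IP.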



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9615) Ingress Firewall Rules with blank start and End ports doesnt get applied

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695378#comment-15695378
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9615:


GitHub user jayapalu opened a pull request:

https://github.com/apache/cloudstack/pull/1783

CLOUDSTACK-9615: Fixed applying ingress rules without ports

When an ingress rule is applied without ports (the port start and port end 
params are not passed), the API/UI shows the rule as applied, but in the VR 
the iptables rule does not get applied.

Fixed this issue in the VR script.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jayapalu/cloudstack CLOUDSTACK-9615

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1783.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1783


commit 35c7222ee8e8c788d8ed4bed8c33f985557dbf75
Author: Jayapalu 
Date:   2016-11-25T08:49:11Z

CLOUDSTACK-9615: Fixed applying ingress rules without ports




> Ingress Firewall Rules with blank start and End ports doesnt get applied
> 
>
> Key: CLOUDSTACK-9615
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9615
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
> Fix For: 4.9.2.0
>
>
>1.  Navigate to Network -> View IP Address -> Source NAT -> Configuration 
> -> Firewall.
>2.  Add a new TCP rule without giving any start port or end port.
>3. The rule creation succeeds in the API, but the rule is not applied in the VR.
>4.  Only when specific ports are provided is the rule applied.
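The VR-side fix can be sketched as follows. The chain name and helper are hypothetical (not the actual VR script): the point is that a missing start/end port should mean "all ports", i.e. the `--dport` clause is simply omitted, rather than producing a rule that never gets installed.

```python
# Hypothetical sketch: build the iptables ingress rule so that absent
# ports match all traffic for the protocol instead of being dropped.
# FIREWALL_INGRESS is an assumed chain name for illustration.

def ingress_rule(proto, cidr, start=None, end=None, iface="eth2"):
    rule = f"-A FIREWALL_INGRESS -i {iface} -s {cidr} -p {proto}"
    if start is not None and end is not None:
        rule += f" --dport {start}:{end}"  # only when both ports are given
    return rule + " -j ACCEPT"

print(ingress_rule("tcp", "0.0.0.0/0"))          # no --dport clause
print(ingress_rule("tcp", "0.0.0.0/0", 22, 22))  # restricted to port 22
```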



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9617) Enabling Remote access Vpn Fails when there is a portforwarding rule of the reserved ports ( 1701 , 500 , 4500) under TCP protocol.

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695377#comment-15695377
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9617:


Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1782
  
@jayapalu what about load balancer ports? Shall we consider them as well?





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9615) Ingress Firewall Rules with blank start and End ports doesnt get applied

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695386#comment-15695386
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9615:


Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1783
  
@jayapalu line 179 should be changed as well; the same applies to line 143





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-9402) Nuage VSP Plugin : Support for underlay features (Source & Static NAT to underlay) including Marvin test coverage on master

2016-11-25 Thread Mani Prashanth Varma Manthena (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mani Prashanth Varma Manthena resolved CLOUDSTACK-9402.
---
Resolution: Fixed

Closing this issue as the corresponding upstream PR was merged into master


> Nuage VSP Plugin : Support for underlay features (Source & Static NAT to 
> underlay) including Marvin test coverage on master
> ---
>
> Key: CLOUDSTACK-9402
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9402
> Project: CloudStack
>  Issue Type: Task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation, Network Controller
>Affects Versions: 4.10.0.0
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
>
> Support for underlay features (Source & Static NAT to underlay) with Nuage 
> VSP SDN Plugin including Marvin test coverage for corresponding Source & 
> Static NAT features on master. Moreover, our Marvin tests are written in such 
> a way that they can validate our supported feature set with both Nuage VSP 
> SDN platform's overlay and underlay infra.
> PR contents:
> 1) Support for Source NAT to underlay feature on master with Nuage VSP SDN 
> Plugin.
> 2) Support for Static NAT to underlay feature on master with Nuage VSP SDN 
> Plugin.
> 3) Marvin test coverage for Source & Static NAT to underlay on master with 
> Nuage VSP SDN Plugin.
> 4) Enhancements on our existing Marvin test code (nuagevsp plugins directory).
> 5) PEP8 & PyFlakes compliance with our Marvin test code.
> Our Marvin test code PEP8 & PyFlakes compliance:
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
> CloudStack$
> CloudStack$ pyflakes test/integration/plugins/nuagevsp/*.py
> CloudStack$
> Validations:
> 1) Underlay infra (Source & Static NAT to underlay)
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> plugins/nuagevsp/test_nuage_source_nat.py
> Test results:
> Test Nuage VSP Isolated networks with different combinations of Source NAT 
> service providers ... === TestName: test_01_nuage_SourceNAT_isolated_networks 
> | Status : SUCCESS ===
> ok
> Test Nuage VSP VPC networks with different combinations of Source NAT service 
> providers ... === TestName: test_02_nuage_SourceNAT_vpc_networks | Status : 
> SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality for Isolated network by performing 
> (wget) traffic tests to the ... === TestName: 
> test_03_nuage_SourceNAT_isolated_network_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality for VPC network by performing (wget) 
> traffic tests to the Internet ... === TestName: 
> test_04_nuage_SourceNAT_vpc_network_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with different Egress 
> Firewall/Network ACL rules by performing (wget) ... === TestName: 
> test_05_nuage_SourceNAT_acl_rules_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with VM NIC operations by performing 
> (wget) traffic tests to the ... === TestName: 
> test_06_nuage_SourceNAT_vm_nic_operations_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with VM migration by performing 
> (wget) traffic tests to the Internet ... === TestName: 
> test_07_nuage_SourceNAT_vm_migration_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with network restarts by performing 
> (wget) traffic tests to the ... === TestName: 
> test_08_nuage_SourceNAT_network_restarts_traffic | Status : SUCCESS ===
> ok
> --
> Ran 8 tests in 13360.858s
> OK
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> plugins/nuagevsp/test_nuage_static_nat.py
> Test results:
> Test Nuage VSP Public IP Range creation and deletion ... === TestName: 
> test_01_nuage_StaticNAT_public_ip_range | Status : SUCCESS ===
> ok
> Test Nuage VSP Nuage Underlay (underlay networking) enabled Public IP Range 
> creation and deletion ... === TestName: 
> test_02_nuage_StaticNAT_underlay_public_ip_range | Status : SUCCESS ===
> ok
> Test Nuage VSP Isolated networks with different combinations of Static NAT 
> service providers ... === TestName: test_03_nuage_StaticNAT_isolated_networks 
> | Status : SUCCESS ===
> ok
> Test Nuage VSP VPC networks with different combinations of Static NAT service 
> providers ... === TestName: test_04_nuage_StaticNAT_vpc_networks | Status : 
> SUCCESS ===
> ok
> Test Nuage VSP Static NAT functionality for Isolated network by performing 
> (wget) traffic tests to the ... === TestName: 
> test_05_nuage_StaticNAT_isolated_networks_traffic | Status : SUCCESS ===

[jira] [Closed] (CLOUDSTACK-9402) Nuage VSP Plugin : Support for underlay features (Source & Static NAT to underlay) including Marvin test coverage on master

2016-11-25 Thread Mani Prashanth Varma Manthena (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mani Prashanth Varma Manthena closed CLOUDSTACK-9402.
-

Closing this task as the corresponding upstream PR was merged into master



[jira] [Resolved] (CLOUDSTACK-9321) Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's haprox

2016-11-25 Thread Mani Prashanth Varma Manthena (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mani Prashanth Varma Manthena resolved CLOUDSTACK-9321.
---
Resolution: Fixed

> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file
> --
>
> Key: CLOUDSTACK-9321
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9321
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Network Controller
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
>Priority: Critical
> Fix For: 4.9.1.0
>
>
> Multiple Internal LB rules (more than one Internal LB rule with the same 
> source IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file. Moreover, each time a new Internal LB rule is 
> added to the corresponding InternalLbVm instance, it replaces the existing 
> one. Thus, traffic corresponding to these unresolved (old) Internal LB rules 
> is getting dropped by the InternalLbVm instance.
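The fix described above boils down to aggregating all rules into one haproxy.cfg instead of letting each new rule overwrite the previous one. A minimal sketch with hypothetical names (not the actual InternalLbVm code), assuming one `listen` block per rule:

```python
# Hypothetical sketch: render every Internal LB rule that shares a source
# IP into haproxy.cfg, instead of replacing the previously written block.

def haproxy_cfg(rules):
    blocks = []
    for r in rules:  # one listen block per rule, all of them kept
        blocks.append(
            f"listen lb_{r['src_ip'].replace('.', '_')}_{r['port']}\n"
            f"    bind {r['src_ip']}:{r['port']}\n"
            + "".join(f"    server vm{i} {ip}:{r['port']}\n"
                      for i, ip in enumerate(r["members"]))
        )
    return "\n".join(blocks)

rules = [
    {"src_ip": "10.1.2.10", "port": 80,  "members": ["10.1.2.21"]},
    {"src_ip": "10.1.2.10", "port": 443, "members": ["10.1.2.21"]},
]
cfg = haproxy_cfg(rules)
print("bind 10.1.2.10:80" in cfg and "bind 10.1.2.10:443" in cfg)  # True
```

With both `bind` lines present, traffic on every configured port of the shared source IP reaches its backends rather than being dropped.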
> PR contents:
> 1) Fix for this bug.
> 2) Marvin test coverage for Internal LB feature on master with native ACS 
> setup (component directory) including validations for this bug fix.
> 3) Enhancements on our existing Internal LB Marvin test code (nuagevsp plugins 
> directory) to validate this bug fix.
> 4) PEP8 & PyFlakes compliance with the added Marvin test code.
> Added Marvin test code PEP8 & PyFlakes compliance:
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pyflakes 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
> CloudStack$
> CloudStack$ pyflakes test/integration/plugins/nuagevsp/*.py
> CloudStack$
> Validations:
> 1) Made sure that we didn't break any Public LB (VpcVirtualRouter) 
> functionality.
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> test/integration/component/test_vpc_network_lbrules.py
> Test results:
> Test case no 210 and 227: List Load Balancing Rules belonging to a VPC ... 
> === TestName: test_01_VPC_LBRulesListing | Status : SUCCESS ===
> ok
> Test Create LB rules for 1 network which is part of a two/multiple virtual 
> networks of a ... === TestName: test_02_VPC_CreateLBRuleInMultipleNetworks | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_03_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_04_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 214 : Delete few(not all) LB rules for a single virtual network 
> of a ... === TestName: test_05_VPC_CreateAndDeleteLBRule | Status : SUCCESS 
> ===
> ok
> Test Delete few(not all) LB rules for a single virtual network of ... === 
> TestName: test_06_VPC_CreateAndDeleteLBRuleVRStopppedState | Status : SUCCESS 
> ===
> ok
> Test Delete all LB rules for a single virtual network of a ... === TestName: 
> test_07_VPC_CreateAndDeleteAllLBRule | Status : SUCCESS ===
> ok
> Test Delete all LB rules for a single virtual network of a ... === TestName: 
> test_08_VPC_CreateAndDeleteAllLBRuleVRStoppedState | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule for a VM that belongs to 
> a different VPC. ... === TestName: test_09_VPC_LBRuleCreateFailMultipleVPC | 
> Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule for a VM that does not 
> belong to any VPC. ... === TestName: 
> test_10_VPC_FailedToCreateLBRuleNonVPCNetwork | Status : SUCCESS ===
> ok
> Test case no 217 and 236: User should not be allowed to create a LB rule for 
> a ... === TestName: test_11_VPC_LBRuleCreateNotAllowed | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule on an Ipaddress that 
> Source Nat enabled. ... === TestName: test_12_VPC_LBRuleCreateFailForRouterIP 
> | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule on an Ipaddress that 
> already has a PF rule. ... === TestName: 
> test_13_VPC_LBRuleCreateFailForPFSourceNATIP | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule on an Ipaddress that 
> already 

[jira] [Closed] (CLOUDSTACK-9321) Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's haproxy.

2016-11-25 Thread Mani Prashanth Varma Manthena (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mani Prashanth Varma Manthena closed CLOUDSTACK-9321.
-

Closing this bug as the corresponding upstream PR got merged into master


> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file
> --
>
> Key: CLOUDSTACK-9321
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9321
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Network Controller
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
>Priority: Critical
> Fix For: 4.9.1.0
>
>
> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file. Moreover, each time a new Internal LB rule is 
> added to the corresponding InternalLbVm instance, it replaces the existing 
> one. Thus, traffic corresponding to these unresolved (old) Internal LB rules 
> is getting dropped by the InternalLbVm instance.
> PR contents:
> 1) Fix for this bug.
> 2) Marvin test coverage for Internal LB feature on master with native ACS 
> setup (component directory) including validations for this bug fix.
> 3) Enhancements on our existing Internal LB Marvin test code (nuagevsp plugins 
> directory) to validate this bug fix.
> 4) PEP8 & PyFlakes compliance with the added Marvin test code.
> Added Marvin test code PEP8 & PyFlakes compliance:
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pyflakes 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
> CloudStack$
> CloudStack$ pyflakes test/integration/plugins/nuagevsp/*.py
> CloudStack$
> Validations:
> 1) Made sure that we didn't break any Public LB (VpcVirtualRouter) 
> functionality.
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> test/integration/component/test_vpc_network_lbrules.py
> Test results:
> Test case no 210 and 227: List Load Balancing Rules belonging to a VPC ... 
> === TestName: test_01_VPC_LBRulesListing | Status : SUCCESS ===
> ok
> Test Create LB rules for 1 network which is part of a two/multiple virtual 
> networks of a ... === TestName: test_02_VPC_CreateLBRuleInMultipleNetworks | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_03_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_04_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 214 : Delete few(not all) LB rules for a single virtual network 
> of a ... === TestName: test_05_VPC_CreateAndDeleteLBRule | Status : SUCCESS 
> ===
> ok
> Test Delete few(not all) LB rules for a single virtual network of ... === 
> TestName: test_06_VPC_CreateAndDeleteLBRuleVRStopppedState | Status : SUCCESS 
> ===
> ok
> Test Delete all LB rules for a single virtual network of a ... === TestName: 
> test_07_VPC_CreateAndDeleteAllLBRule | Status : SUCCESS ===
> ok
> Test Delete all LB rules for a single virtual network of a ... === TestName: 
> test_08_VPC_CreateAndDeleteAllLBRuleVRStoppedState | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule for a VM that belongs to 
> a different VPC. ... === TestName: test_09_VPC_LBRuleCreateFailMultipleVPC | 
> Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule for a VM that does not 
> belong to any VPC. ... === TestName: 
> test_10_VPC_FailedToCreateLBRuleNonVPCNetwork | Status : SUCCESS ===
> ok
> Test case no 217 and 236: User should not be allowed to create a LB rule for 
> a ... === TestName: test_11_VPC_LBRuleCreateNotAllowed | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule on an Ipaddress that 
> Source Nat enabled. ... === TestName: test_12_VPC_LBRuleCreateFailForRouterIP 
> | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule on an Ipaddress that 
> already has a PF rule. ... === TestName: 
> test_13_VPC_LBRuleCreateFailForPFSourceNATIP | Status : SUCCESS ===
> ok
> Test User should not be allowed to 

[jira] [Issue Comment Deleted] (CLOUDSTACK-9402) Nuage VSP Plugin : Support for underlay features (Source & Static NAT to underlay) including Marvin test coverage on master

2016-11-25 Thread Mani Prashanth Varma Manthena (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mani Prashanth Varma Manthena updated CLOUDSTACK-9402:
--
Comment: was deleted

(was: Closing this bug as the corresponding upstream PR got merged into master
)

> Nuage VSP Plugin : Support for underlay features (Source & Static NAT to 
> underlay) including Marvin test coverage on master
> ---
>
> Key: CLOUDSTACK-9402
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9402
> Project: CloudStack
>  Issue Type: Task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation, Network Controller
>Affects Versions: 4.10.0.0
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
>
> Support for underlay features (Source & Static NAT to underlay) with Nuage 
> VSP SDN Plugin including Marvin test coverage for corresponding Source & 
> Static NAT features on master. Moreover, our Marvin tests are written in such 
> a way that they can validate our supported feature set with both Nuage VSP 
> SDN platform's overlay and underlay infra.
> PR contents:
> 1) Support for Source NAT to underlay feature on master with Nuage VSP SDN 
> Plugin.
> 2) Support for Static NAT to underlay feature on master with Nuage VSP SDN 
> Plugin.
> 3) Marvin test coverage for Source & Static NAT to underlay on master with 
> Nuage VSP SDN Plugin.
> 4) Enhancements on our existing Marvin test code (nuagevsp plugins directory).
> 5) PEP8 & PyFlakes compliance with our Marvin test code.
> Our Marvin test code PEP8 & PyFlakes compliance:
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
> CloudStack$
> CloudStack$ pyflakes test/integration/plugins/nuagevsp/*.py
> CloudStack$
> Validations:
> 1) Underlay infra (Source & Static NAT to underlay)
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> plugins/nuagevsp/test_nuage_source_nat.py
> Test results:
> Test Nuage VSP Isolated networks with different combinations of Source NAT 
> service providers ... === TestName: test_01_nuage_SourceNAT_isolated_networks 
> | Status : SUCCESS ===
> ok
> Test Nuage VSP VPC networks with different combinations of Source NAT service 
> providers ... === TestName: test_02_nuage_SourceNAT_vpc_networks | Status : 
> SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality for Isolated network by performing 
> (wget) traffic tests to the ... === TestName: 
> test_03_nuage_SourceNAT_isolated_network_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality for VPC network by performing (wget) 
> traffic tests to the Internet ... === TestName: 
> test_04_nuage_SourceNAT_vpc_network_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with different Egress 
> Firewall/Network ACL rules by performing (wget) ... === TestName: 
> test_05_nuage_SourceNAT_acl_rules_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with VM NIC operations by performing 
> (wget) traffic tests to the ... === TestName: 
> test_06_nuage_SourceNAT_vm_nic_operations_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with VM migration by performing 
> (wget) traffic tests to the Internet ... === TestName: 
> test_07_nuage_SourceNAT_vm_migration_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with network restarts by performing 
> (wget) traffic tests to the ... === TestName: 
> test_08_nuage_SourceNAT_network_restarts_traffic | Status : SUCCESS ===
> ok
> --
> Ran 8 tests in 13360.858s
> OK
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> plugins/nuagevsp/test_nuage_static_nat.py
> Test results:
> Test Nuage VSP Public IP Range creation and deletion ... === TestName: 
> test_01_nuage_StaticNAT_public_ip_range | Status : SUCCESS ===
> ok
> Test Nuage VSP Nuage Underlay (underlay networking) enabled Public IP Range 
> creation and deletion ... === TestName: 
> test_02_nuage_StaticNAT_underlay_public_ip_range | Status : SUCCESS ===
> ok
> Test Nuage VSP Isolated networks with different combinations of Static NAT 
> service providers ... === TestName: test_03_nuage_StaticNAT_isolated_networks 
> | Status : SUCCESS ===
> ok
> Test Nuage VSP VPC networks with different combinations of Static NAT service 
> providers ... === TestName: test_04_nuage_StaticNAT_vpc_networks | Status : 
> SUCCESS ===
> ok
> Test Nuage VSP Static NAT functionality for Isolated network by performing 
> (wget) traffic tests to the ... === TestName: 
> test_05_nuage_StaticNAT_isolated_networ

[jira] [Updated] (CLOUDSTACK-9402) Nuage VSP Plugin : Support for underlay features (Source & Static NAT to underlay) including Marvin test coverage on master

2016-11-25 Thread Mani Prashanth Varma Manthena (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mani Prashanth Varma Manthena updated CLOUDSTACK-9402:
--
Fix Version/s: 4.10.0.0

> Nuage VSP Plugin : Support for underlay features (Source & Static NAT to 
> underlay) including Marvin test coverage on master
> ---
>
> Key: CLOUDSTACK-9402
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9402
> Project: CloudStack
>  Issue Type: Task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation, Network Controller
>Affects Versions: 4.10.0.0
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
> Fix For: 4.10.0.0
>
>
> Support for underlay features (Source & Static NAT to underlay) with Nuage 
> VSP SDN Plugin including Marvin test coverage for corresponding Source & 
> Static NAT features on master. Moreover, our Marvin tests are written in such 
> a way that they can validate our supported feature set with both Nuage VSP 
> SDN platform's overlay and underlay infra.
> PR contents:
> 1) Support for Source NAT to underlay feature on master with Nuage VSP SDN 
> Plugin.
> 2) Support for Static NAT to underlay feature on master with Nuage VSP SDN 
> Plugin.
> 3) Marvin test coverage for Source & Static NAT to underlay on master with 
> Nuage VSP SDN Plugin.
> 4) Enhancements on our existing Marvin test code (nuagevsp plugins directory).
> 5) PEP8 & PyFlakes compliance with our Marvin test code.
> Our Marvin test code PEP8 & PyFlakes compliance:
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
> CloudStack$
> CloudStack$ pyflakes test/integration/plugins/nuagevsp/*.py
> CloudStack$
> Validations:
> 1) Underlay infra (Source & Static NAT to underlay)
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> plugins/nuagevsp/test_nuage_source_nat.py
> Test results:
> Test Nuage VSP Isolated networks with different combinations of Source NAT 
> service providers ... === TestName: test_01_nuage_SourceNAT_isolated_networks 
> | Status : SUCCESS ===
> ok
> Test Nuage VSP VPC networks with different combinations of Source NAT service 
> providers ... === TestName: test_02_nuage_SourceNAT_vpc_networks | Status : 
> SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality for Isolated network by performing 
> (wget) traffic tests to the ... === TestName: 
> test_03_nuage_SourceNAT_isolated_network_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality for VPC network by performing (wget) 
> traffic tests to the Internet ... === TestName: 
> test_04_nuage_SourceNAT_vpc_network_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with different Egress 
> Firewall/Network ACL rules by performing (wget) ... === TestName: 
> test_05_nuage_SourceNAT_acl_rules_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with VM NIC operations by performing 
> (wget) traffic tests to the ... === TestName: 
> test_06_nuage_SourceNAT_vm_nic_operations_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with VM migration by performing 
> (wget) traffic tests to the Internet ... === TestName: 
> test_07_nuage_SourceNAT_vm_migration_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with network restarts by performing 
> (wget) traffic tests to the ... === TestName: 
> test_08_nuage_SourceNAT_network_restarts_traffic | Status : SUCCESS ===
> ok
> --
> Ran 8 tests in 13360.858s
> OK
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> plugins/nuagevsp/test_nuage_static_nat.py
> Test results:
> Test Nuage VSP Public IP Range creation and deletion ... === TestName: 
> test_01_nuage_StaticNAT_public_ip_range | Status : SUCCESS ===
> ok
> Test Nuage VSP Nuage Underlay (underlay networking) enabled Public IP Range 
> creation and deletion ... === TestName: 
> test_02_nuage_StaticNAT_underlay_public_ip_range | Status : SUCCESS ===
> ok
> Test Nuage VSP Isolated networks with different combinations of Static NAT 
> service providers ... === TestName: test_03_nuage_StaticNAT_isolated_networks 
> | Status : SUCCESS ===
> ok
> Test Nuage VSP VPC networks with different combinations of Static NAT service 
> providers ... === TestName: test_04_nuage_StaticNAT_vpc_networks | Status : 
> SUCCESS ===
> ok
> Test Nuage VSP Static NAT functionality for Isolated network by performing 
> (wget) traffic tests to the ... === TestName: 
> test_05_nuage_StaticNAT_isolated_networks_traffic | Status : SUCCESS ===
> ok
> Test

[jira] [Updated] (CLOUDSTACK-9321) Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's haproxy

2016-11-25 Thread Mani Prashanth Varma Manthena (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mani Prashanth Varma Manthena updated CLOUDSTACK-9321:
--
Fix Version/s: (was: 4.9.1.0)
   4.10.0.0

> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file
> --
>
> Key: CLOUDSTACK-9321
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9321
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Network Controller
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
>Priority: Critical
> Fix For: 4.10.0.0
>
>
> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file. Moreover, each time a new Internal LB rule is 
> added to the corresponding InternalLbVm instance, it replaces the existing 
> one. Thus, traffic corresponding to these unresolved (old) Internal LB rules 
> is getting dropped by the InternalLbVm instance.
> PR contents:
> 1) Fix for this bug.
> 2) Marvin test coverage for Internal LB feature on master with native ACS 
> setup (component directory) including validations for this bug fix.
> 3) Enhancements on our existing Internal LB Marvin test code (nuagevsp plugins 
> directory) to validate this bug fix.
> 4) PEP8 & PyFlakes compliance with the added Marvin test code.
> Added Marvin test code PEP8 & PyFlakes compliance:
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pyflakes 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
> CloudStack$
> CloudStack$ pyflakes test/integration/plugins/nuagevsp/*.py
> CloudStack$
> Validations:
> 1) Made sure that we didn't break any Public LB (VpcVirtualRouter) 
> functionality.
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> test/integration/component/test_vpc_network_lbrules.py
> Test results:
> Test case no 210 and 227: List Load Balancing Rules belonging to a VPC ... 
> === TestName: test_01_VPC_LBRulesListing | Status : SUCCESS ===
> ok
> Test Create LB rules for 1 network which is part of a two/multiple virtual 
> networks of a ... === TestName: test_02_VPC_CreateLBRuleInMultipleNetworks | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_03_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_04_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 214 : Delete few(not all) LB rules for a single virtual network 
> of a ... === TestName: test_05_VPC_CreateAndDeleteLBRule | Status : SUCCESS 
> ===
> ok
> Test Delete few(not all) LB rules for a single virtual network of ... === 
> TestName: test_06_VPC_CreateAndDeleteLBRuleVRStopppedState | Status : SUCCESS 
> ===
> ok
> Test Delete all LB rules for a single virtual network of a ... === TestName: 
> test_07_VPC_CreateAndDeleteAllLBRule | Status : SUCCESS ===
> ok
> Test Delete all LB rules for a single virtual network of a ... === TestName: 
> test_08_VPC_CreateAndDeleteAllLBRuleVRStoppedState | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule for a VM that belongs to 
> a different VPC. ... === TestName: test_09_VPC_LBRuleCreateFailMultipleVPC | 
> Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule for a VM that does not 
> belong to any VPC. ... === TestName: 
> test_10_VPC_FailedToCreateLBRuleNonVPCNetwork | Status : SUCCESS ===
> ok
> Test case no 217 and 236: User should not be allowed to create a LB rule for 
> a ... === TestName: test_11_VPC_LBRuleCreateNotAllowed | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule on an Ipaddress that 
> Source Nat enabled. ... === TestName: test_12_VPC_LBRuleCreateFailForRouterIP 
> | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule on an Ipaddress that 
> already has a PF rule. ... === TestName: 
> test_13_VPC_LBRuleCreateFailForPFSourceNATIP | Status : SUCCESS ===
> ok
> Test User should not be allowed to create

[jira] [Updated] (CLOUDSTACK-9416) ACS master GUI: Enabling Static NAT on an associated Public IP to one of the NICs (networks) of a multi-NIC VM fails due to a wrong (default) Guest VM IP being selec

2016-11-25 Thread Mani Prashanth Varma Manthena (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mani Prashanth Varma Manthena updated CLOUDSTACK-9416:
--
Fix Version/s: 4.9.2.0

> ACS master GUI: Enabling Static NAT on an associated Public IP to one of the 
> NICs (networks) of a multi-NIC VM fails due to a wrong (default) Guest VM IP 
> being selected in the GUI
> ---
>
> Key: CLOUDSTACK-9416
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9416
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
> Fix For: 4.9.2.0
>
> Attachments: network1.png, network2.png
>
>
> ACS master GUI: Enabling Static NAT on an associated Public IP to one of the 
> NICs (networks) of a multi-NIC VM fails due to a wrong (default) Guest VM IP 
> being selected in the GUI:
> {noformat}
> 2016-06-13 04:50:00,456 DEBUG [o.s.b.f.s.DefaultListableBeanFactory] 
> (catalina-exec-7:ctx-83926837 ctx-fc7aa5ed) Returning cached instance of 
> singleton bean 'alertManagerImpl'
> 2016-06-13 04:50:00,456 DEBUG [o.s.b.f.a.InjectionMetadata] 
> (catalina-exec-7:ctx-83926837 ctx-fc7aa5ed) Processing injected method of 
> bean 'org.apache.cloudstack.api.command.user.nat.EnableStaticNatCmd': 
> AutowiredFieldElement for public com.cloud.utils.db.UUIDManager 
> org.apache.cloudstack.api.BaseCmd._uuidMgr
> 2016-06-13 04:50:00,456 DEBUG [o.s.b.f.s.DefaultListableBeanFactory] 
> (catalina-exec-7:ctx-83926837 ctx-fc7aa5ed) Returning cached instance of 
> singleton bean 'uUIDManagerImpl'
> 2016-06-13 04:50:00,472 DEBUG [o.s.b.f.s.DefaultListableBeanFactory] 
> (catalina-exec-7:ctx-83926837 ctx-fc7aa5ed) Returning cached instance of 
> singleton bean 'messageBus'
> 2016-06-13 04:50:00,479 INFO  [c.c.a.ApiServer] (catalina-exec-7:ctx-83926837 
> ctx-fc7aa5ed) VM ip 10.10.2.163 address not belongs to the vm
> 2016-06-13 04:50:00,480 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-7:ctx-83926837 ctx-fc7aa5ed) ===END===  10.31.56.146 -- GET  
> command=enableStaticNat&response=json&sessionkey=Mfs%2B%2F0LCnpWSNQ1SdTi1Q8MxLBc%3D&ipaddressid=36262bcc-282a-46c2-8a80-472e2a24ab5e&virtualmachineid=af160cde-6762-4756-b97f-f3829f6d9802&vmguestip=10.10.2.163&_=1465818687943
> 2016-06-13 04:50:02,261 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-9:ctx-6050366f) ===START===  10.31.56.146 -- GET  
> command=queryAsyncJobResult&jobId=7850c125-54a2-4e99-ab78-9f0e3578c304&response=json&sessionkey=Mfs%2B%2F0LCnpWSNQ1SdTi1Q8MxLBc%3D&_=1465818689752
> 2016-06-13 04:50:02,264 DEBUG [o.s.b.f.a.InjectionMetadata] 
> (catalina-exec-9:ctx-6050366f ctx-00689e3f) Processing injected method of 
> bean 'org.apache.cloudstack.api.command.user.job.QueryAsyncJobResultCmd': 
> AutowiredFieldElement for public com.cloud.configuration.ConfigurationService 
> org.apache.cloudstack.api.BaseCmd._configService
> 2016-06-13 04:50:02,264 DEBUG [o.s.b.f.s.DefaultListableBeanFactory] 
> (catalina-exec-9:ctx-6050366f ctx-00689e3f) Returning cached instance of 
> singleton bean 'configurationManagerImpl'
> 2016-06-13 04:50:02,264 DEBUG [o.s.b.f.a.InjectionMetadata] 
> (catalina-exec-9:ctx-6050366f ctx-00689e3f) Processing injected method of 
> bean 'org.apache.cloudstack.api.command.user.job.QueryAsyncJobResultCmd': 
> AutowiredFieldElement for public com.cloud.user.AccountService 
> org.apache.cloudstack.api.BaseCmd._accountService
> 2016-06-13 04:50:02,264 DEBUG [o.s.b.f.s.DefaultListableBeanFactory] 
> (catalina-exec-9:ctx-6050366f ctx-00689e3f) Returning cached instance of 
> singleton bean 'accountManagerImpl'
> 2016-06-13 04:50:02,264 DEBUG [o.s.b.f.a.InjectionMetadata] 
> (catalina-exec-9:ctx-6050366f ctx-00689e3f) Processing injected method of 
> bean 'org.apache.cloudstack.api.command.user.job.QueryAsyncJobResultCmd': 
> AutowiredFieldElement for public com.cloud.vm.UserVmService 
> org.apache.cloudstack.api.BaseCmd._userVmService
> {noformat}
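A hedged sketch of the NIC-selection logic the GUI should apply when enabling
Static NAT on a multi-NIC VM: pick the guest IP of the NIC on the network the
Public IP is associated with, rather than defaulting to the first NIC's IP.
The helper name is illustrative, not CloudStack's actual code; the dict keys
mirror the NIC fields of the CloudStack API response ('networkid',
'ipaddress').

```python
def guest_ip_for_network(nics, network_id):
    """Return the VM's guest IP on the given network.

    nics: list of NIC dicts as returned by the listNics /
    listVirtualMachines API; network_id: the network that the Public IP
    being static-NATed is associated with.
    """
    for nic in nics:
        if nic.get("networkid") == network_id:
            return nic.get("ipaddress")
    raise ValueError("VM has no NIC on network %s" % network_id)
```

The value returned here is what should be passed as the vmguestip parameter of
enableStaticNat, instead of the default NIC's IP that the GUI currently
selects.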



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-9398) Nuage VSP : Creation of persistent isolated networks is being restricted (isolated network offerings with isPersistent flag set to True)

2016-11-25 Thread Mani Prashanth Varma Manthena (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mani Prashanth Varma Manthena updated CLOUDSTACK-9398:
--
Fix Version/s: (was: 4.9.1.0)
   4.10.0.0

> Nuage VSP : Creation of persistent isolated networks is being restricted 
> (isolated network offerings with isPersistent flag set to True)
> 
>
> Key: CLOUDSTACK-9398
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9398
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Network Controller
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
> Fix For: 4.10.0.0
>
>
> Creation of persistent isolated networks is being restricted (isolated 
> network offerings with isPersistent flag set to True).
> In master, isPersistent flag in the network offering is used as a check to 
> determine whether the network offering is for VPC tiers (or) isolated 
> networks as same services provider name "NuageVsp" is used for both isolated 
> and VPC network offerings.





[jira] [Commented] (CLOUDSTACK-9593) User data check is inconsistent with python

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695511#comment-15695511
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9593:


Github user marcaurele commented on the issue:

https://github.com/apache/cloudstack/pull/1760
  
right, I forgot about the match required with the version in the pom. I 
thought I should prepare the stuff for the next 4.9.x release but that cannot 
work unless the pom files are updated too. I will just move the code inside 
Upgrade490to4910.java to Upgrade4910to4920.java when it's available.


> User data check is inconsistent with python
> ---
>
> Key: CLOUDSTACK-9593
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9593
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.4.2, 4.4.3, 4.3.2, 4.5.1, 4.4.4, 4.5.2, 4.6.0, 4.6.1, 
> 4.6.2, 4.7.0, 4.7.1, 4.8.0, 4.9.0
>Reporter: Marc-Aurèle Brothier
>Assignee: Marc-Aurèle Brothier
>
> The user data is validated through the Apache commons codec library, but this 
> library does not check that the length is a multiple of 4 characters. The RFC 
> does not require it either. But the Python script in the virtual router that 
> loads the user data does check for correct padding, which requires the string 
> length to be a multiple of 4 characters.
> {code:python}
> >>> import base64
> >>> base64.b64decode('foo')
> Traceback (most recent call last):
>   File "", line 1, in 
>   File 
> "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/base64.py",
>  line 78, in b64decode
> raise TypeError(msg)
> TypeError: Incorrect padding
> >>> base64.b64decode('foo=')
> '~\x8a'
> {code}
> Currently, since the Java check is less restrictive, the user data gets saved 
> into the database, but the VR script crashes when it receives this VM user 
> data. On a single VM it is not really a problem. The critical issue is when a 
> VR is restarted: the base64 string that Python considers invalid makes the 
> vmdata.py script crash, resulting in a VR not starting at all.
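A minimal sketch of a stricter server-side check that matches what Python's
decoder will later accept (the function name is illustrative, not CloudStack's
actual validator): on top of the alphabet check, require the
whitespace-stripped length to be a multiple of 4.

```python
import base64
import re


def is_valid_user_data(data):
    """Validate base64 user data the way the VR's Python will decode it:
    base64 alphabet only, at most two '=' pads at the end, and a total
    length that is a multiple of 4 after stripping whitespace."""
    s = re.sub(r"\s+", "", data)
    if len(s) % 4 != 0:
        return False
    if not re.fullmatch(r"[A-Za-z0-9+/]*={0,2}", s):
        return False
    try:
        base64.b64decode(s)
        return True
    except (TypeError, ValueError):
        return False
```

With this check, 'foo' (length 3) is rejected before it reaches the database,
while properly padded strings such as 'foo=' pass, so the VR's vmdata.py never
sees data it cannot decode.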





[jira] [Commented] (CLOUDSTACK-9416) ACS master GUI: Enabling Static NAT on an associated Public IP to one of the NICs (networks) of a multi-NIC VM fails due to a wrong (default) Guest VM IP being sel

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695566#comment-15695566
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9416:


GitHub user prashanthvarma opened a pull request:

https://github.com/apache/cloudstack/pull/1785

CLOUDSTACK-9416 : Enabling Static NAT on an associated Public IP to one of 
the NICs (networks) of a multi-NIC VM fails due to a wrong (default) Guest VM 
IP being selected in the GUI



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/prashanthvarma/cloudstack 4.9

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1785.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1785


commit 094c4cf02bf75ed3e5bd227f7b4cfe614f386871
Author: Nick Livens 
Date:   2016-06-15T12:47:50Z

CLOUDSTACK-9416 : Enabling Static NAT on an associated Public IP to one of 
the NICs (networks) of a multi-NIC VM fails due to a wrong (default) Guest VM 
IP being selected in the GUI




> ACS master GUI: Enabling Static NAT on an associated Public IP to one of the 
> NICs (networks) of a multi-NIC VM fails due to a wrong (default) Guest VM IP 
> being selected in the GUI
> ---
>
> Key: CLOUDSTACK-9416
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9416
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
> Fix For: 4.9.2.0
>
> Attachments: network1.png, network2.png
>
>
> ACS master GUI: Enabling Static NAT on an associated Public IP to one of the 
> NICs (networks) of a multi-NIC VM fails due to a wrong (default) Guest VM IP 
> being selected in the GUI:
> {noformat}
> 2016-06-13 04:50:00,456 DEBUG [o.s.b.f.s.DefaultListableBeanFactory] 
> (catalina-exec-7:ctx-83926837 ctx-fc7aa5ed) Returning cached instance of 
> singleton bean 'alertManagerImpl'
> 2016-06-13 04:50:00,456 DEBUG [o.s.b.f.a.InjectionMetadata] 
> (catalina-exec-7:ctx-83926837 ctx-fc7aa5ed) Processing injected method of 
> bean 'org.apache.cloudstack.api.command.user.nat.EnableStaticNatCmd': 
> AutowiredFieldElement for public com.cloud.utils.db.UUIDManager 
> org.apache.cloudstack.api.BaseCmd._uuidMgr
> 2016-06-13 04:50:00,456 DEBUG [o.s.b.f.s.DefaultListableBeanFactory] 
> (catalina-exec-7:ctx-83926837 ctx-fc7aa5ed) Returning cached instance of 
> singleton bean 'uUIDManagerImpl'
> 2016-06-13 04:50:00,472 DEBUG [o.s.b.f.s.DefaultListableBeanFactory] 
> (catalina-exec-7:ctx-83926837 ctx-fc7aa5ed) Returning cached instance of 
> singleton bean 'messageBus'
> 2016-06-13 04:50:00,479 INFO  [c.c.a.ApiServer] (catalina-exec-7:ctx-83926837 
> ctx-fc7aa5ed) VM ip 10.10.2.163 address not belongs to the vm
> 2016-06-13 04:50:00,480 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-7:ctx-83926837 ctx-fc7aa5ed) ===END===  10.31.56.146 -- GET  
> command=enableStaticNat&response=json&sessionkey=Mfs%2B%2F0LCnpWSNQ1SdTi1Q8MxLBc%3D&ipaddressid=36262bcc-282a-46c2-8a80-472e2a24ab5e&virtualmachineid=af160cde-6762-4756-b97f-f3829f6d9802&vmguestip=10.10.2.163&_=1465818687943
> 2016-06-13 04:50:02,261 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-9:ctx-6050366f) ===START===  10.31.56.146 -- GET  
> command=queryAsyncJobResult&jobId=7850c125-54a2-4e99-ab78-9f0e3578c304&response=json&sessionkey=Mfs%2B%2F0LCnpWSNQ1SdTi1Q8MxLBc%3D&_=1465818689752
> 2016-06-13 04:50:02,264 DEBUG [o.s.b.f.a.InjectionMetadata] 
> (catalina-exec-9:ctx-6050366f ctx-00689e3f) Processing injected method of 
> bean 'org.apache.cloudstack.api.command.user.job.QueryAsyncJobResultCmd': 
> AutowiredFieldElement for public com.cloud.configuration.ConfigurationService 
> org.apache.cloudstack.api.BaseCmd._configService
> 2016-06-13 04:50:02,264 DEBUG [o.s.b.f.s.DefaultListableBeanFactory] 
> (catalina-exec-9:ctx-6050366f ctx-00689e3f) Returning cached instance of 
> singleton bean 'configurationManagerImpl'
> 2016-06-13 04:50:02,264 DEBUG [o.s.b.f.a.InjectionMetadata] 
> (catalina-exec-9:ctx-6050366f ctx-00689e3f) Processing injected method of 
> bean 'org.apache.cloudstack.api.command.user.job.QueryAsyncJobResultCmd': 
> AutowiredFieldElement for public com.cloud.user.AccountService 
> org.apache.cloudstack.api.BaseCmd._accountService
> 2016-06-13 04:50:02,264 DEBUG [o.s.b.f.s.DefaultListableBeanFactory] 
> (catalina-exec-9:ctx-6050366f ctx-00689e3f) Returning cached instance of 
> singleton bean 'accountManagerImpl'
> 2016-06-13 04:50:02,264 DEBUG [o.s.b.f.a.InjectionMetadata] 
> (catalina-exec-9:ctx-6050366f ctx-00689e3f) Processing injected method of 
> bean 'org.apache.cloudstack.api.command.user.job.QueryAsyncJobResultCmd': 
> AutowiredFieldElement for public com.cloud.vm.UserVmService 
> org.apache.cloudstack.api.BaseCmd._userVmService
> {noformat}

[jira] [Commented] (CLOUDSTACK-9416) ACS master GUI: Enabling Static NAT on an associated Public IP to one of the NICs (networks) of a multi-NIC VM fails due to a wrong (default) Guest VM IP being sel

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695585#comment-15695585
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9416:


Github user prashanthvarma commented on the issue:

https://github.com/apache/cloudstack/pull/1592
  
@rhtyd I had difficulties changing the base branch to 4.9 and re-basing 
against 4.9 as the source branch on my fork is master (prashanthvarma:master).

Anyhow, I have opened a new PR #1785 with this UI bug fix commit against 
4.9 branch. 

Once you merge PR #1785, I will close this PR.

Sorry for the inconvenience !!


> ACS master GUI: Enabling Static NAT on an associated Public IP to one of the 
> NICs (networks) of a multi-NIC VM fails due to a wrong (default) Guest VM IP 
> being selected in the GUI
> ---
>
> Key: CLOUDSTACK-9416
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9416
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
> Fix For: 4.9.2.0
>
> Attachments: network1.png, network2.png
>
>
> ACS master GUI: Enabling Static NAT on an associated Public IP to one of the 
> NICs (networks) of a multi-NIC VM fails due to a wrong (default) Guest VM IP 
> being selected in the GUI.





[jira] [Commented] (CLOUDSTACK-9416) ACS master GUI: Enabling Static NAT on an associated Public IP to one of the NICs (networks) of a multi-NIC VM fails due to a wrong (default) Guest VM IP being sel

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695593#comment-15695593
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9416:


Github user prashanthvarma commented on the issue:

https://github.com/apache/cloudstack/pull/1785
  
@rhtyd  This is a shadow of PR #1592 , rebased against 4.9 as requested.

Note: The original PR #1592 has enough LGTMs to merge this PR.

Once you merge this PR, I will close the PR #1592 .

Sorry for the inconvenience !!


> ACS master GUI: Enabling Static NAT on an associated Public IP to one of the 
> NICs (networks) of a multi-NIC VM fails due to a wrong (default) Guest VM IP 
> being selected in the GUI
> ---
>
> Key: CLOUDSTACK-9416
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9416
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
> Fix For: 4.9.2.0
>
> Attachments: network1.png, network2.png
>
>
> ACS master GUI: Enabling Static NAT on an associated Public IP to one of the 
> NICs (networks) of a multi-NIC VM fails due to a wrong (default) Guest VM IP 
> being selected in the GUI.





[jira] [Commented] (CLOUDSTACK-9595) Transactions are not getting retried in case of database deadlock errors

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695640#comment-15695640
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9595:


Github user rafaelweingartner commented on the issue:

https://github.com/apache/cloudstack/pull/1762
  
@serg38 if that "AssignIpAddressFromPodVlanSearch" object was being used to 
generate the SQL, shouldn't we see a join with "pod_vlan_map" too? For me, 
this SC is very confusing.

Following the same idea of what I would do if using Spring to manage 
transactions, the method "fetchNewPublicIp" does not need the "@DB" annotation 
(assuming this is the annotation that opens a transaction and locks tables in 
ACS). The method “fetchNewPublicIp” is a simple "retrieve/get" method. Whenever 
we have to lock the table that is being used by this method, we could use the 
"fetchNewPublicIp" in a method that has the "@DB" annotation (assuming it has 
transaction propagation). This is something that already seems to happen. 
Methods "allocateIp" and "assignDedicateIpAddress" use “fetchNewPublicIp” and 
they have their own “@DB” annotation.

Methods “assignPublicIpAddressFromVlans” and “assignPublicIpAddress” do not 
seem to do anything that requires a transaction; despite names that 
misleadingly (at least to me) suggest something will be assigned to someone, 
they just call the “fetchNewPublicIp” method and return its response. 
Therefore, I do not think they require a locking transaction.
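The transaction-propagation idea in the comment above (a plain retrieve/get helper needs no annotation of its own, because annotated callers such as "allocateIp" open the transaction and the helper joins it) can be sketched outside Java. This is a hedged Python analogue, not CloudStack's "@DB" machinery; all names here are invented for illustration:

```python
import threading

_ctx = threading.local()

def with_db(fn):
    """Analogue of a @DB-style annotation with propagation: open a transaction
    only if none is active, and close it only at the outermost level."""
    def wrapper(*args, **kwargs):
        opened_here = not getattr(_ctx, "in_tx", False)
        if opened_here:
            _ctx.in_tx = True       # begin transaction
        try:
            return fn(*args, **kwargs)
        finally:
            if opened_here:
                _ctx.in_tx = False  # commit/close in the outermost caller only
    return wrapper

def fetch_new_public_ip():
    """Plain retrieve/get helper: carries no transaction annotation itself."""
    return {"ip": "203.0.113.10", "in_tx": getattr(_ctx, "in_tx", False)}

@with_db
def allocate_ip():
    """Annotated caller: owns the transaction; the helper runs inside it."""
    return fetch_new_public_ip()

assert fetch_new_public_ip()["in_tx"] is False  # standalone: no transaction
assert allocate_ip()["in_tx"] is True           # joins the caller's transaction
```

The helper locks nothing on its own, yet still runs inside a transaction whenever a caller that needs locking opened one.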



> Transactions are not getting retried in case of database deadlock errors
> 
>
> Key: CLOUDSTACK-9595
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9595
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.8.0
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> Customer is seeing occasional error 'Deadlock found when trying to get lock; 
> try restarting transaction' messages in their management server logs.  It 
> happens regularly at least once a day.  The following is the error seen 
> 2015-12-09 19:23:19,450 ERROR [cloud.api.ApiServer] 
> (catalina-exec-3:ctx-f05c58fc ctx-39c17156 ctx-7becdf6e) unhandled exception 
> executing api command: [Ljava.lang.String;@230a6e7f
> com.cloud.utils.exception.CloudRuntimeException: DB Exception on: 
> com.mysql.jdbc.JDBC4PreparedStatement@74f134e3: DELETE FROM 
> instance_group_vm_map WHERE instance_group_vm_map.instance_id = 941374
>   at com.cloud.utils.db.GenericDaoBase.expunge(GenericDaoBase.java:1209)
>   at sun.reflect.GeneratedMethodAccessor360.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> com.cloud.utils.db.TransactionContextInterceptor.invoke(TransactionContextInterceptor.java:34)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
>   at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
>   at 
> org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
>   at com.sun.proxy.$Proxy237.expunge(Unknown Source)
>   at 
> com.cloud.vm.UserVmManagerImpl$2.doInTransactionWithoutResult(UserVmManagerImpl.java:2593)
>   at 
> com.cloud.utils.db.TransactionCallbackNoReturn.doInTransaction(TransactionCallbackNoReturn.java:25)
>   at com.cloud.utils.db.Transaction$2.doInTransaction(Transaction.java:57)
>   at com.cloud.utils.db.Transaction.execute(Transaction.java:45)
>   at com.cloud.utils.db.Transaction.execute(Transaction.java:54)
>   at 
> com.cloud.vm.UserVmManagerImpl.addInstanceToGroup(UserVmManagerImpl.java:2575)
>   at 
> com.cloud.vm.UserVmManagerImpl.updateVirtualMachine(UserVmManagerImpl.java:2332)
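The fix the issue title asks for is a bounded retry around the transactional unit whenever the database reports a deadlock, instead of surfacing it as an unhandled CloudRuntimeException. A generic sketch of that pattern follows; the exception class, attempt count, and delays are illustrative, not CloudStack's actual values:

```python
import time

class DeadlockError(Exception):
    """Stand-in for MySQL error 1213 ('Deadlock found ... try restarting')."""

def run_with_retry(tx_fn, attempts=3, backoff=0.05):
    """Re-run a transaction body on deadlock, up to a fixed attempt limit."""
    for attempt in range(1, attempts + 1):
        try:
            return tx_fn()
        except DeadlockError:
            if attempt == attempts:
                raise                       # give up after the last attempt
            time.sleep(backoff * attempt)   # brief, growing pause before retry

calls = []
def flaky_delete():
    """Simulates the DELETE that deadlocks once, then succeeds on retry."""
    calls.append(1)
    if len(calls) < 2:
        raise DeadlockError()
    return "expunged"

assert run_with_retry(flaky_delete) == "expunged"
assert len(calls) == 2  # one deadlocked attempt plus one successful retry
```

The key point is that the retry must wrap the whole transaction body, so the restarted attempt re-acquires its locks from scratch.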





[jira] [Created] (CLOUDSTACK-9618) Load Balancer configuration page does not have "Source" method in the drop down list

2016-11-25 Thread Nitin Kumar Maharana (JIRA)
Nitin Kumar Maharana created CLOUDSTACK-9618:


 Summary: Load Balancer configuration page does not have "Source" 
method in the drop down list
 Key: CLOUDSTACK-9618
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9618
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Nitin Kumar Maharana


If we create an isolated network with NetScaler published service offering for 
Load balancing service, then the load balancing configuration UI does not show 
"Source" as one of the supported LB methods in the drop down list. It only 
shows "Round-Robin" and "LeastConnection" methods in the list. However, it 
successfully creates an LB rule with "Source" as the LB method via the API.





[jira] [Commented] (CLOUDSTACK-9618) Load Balancer configuration page does not have "Source" method in the drop down list

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695772#comment-15695772
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9618:


GitHub user nitin-maharana opened a pull request:

https://github.com/apache/cloudstack/pull/1786

CLOUDSTACK-9618: Load Balancer configuration page does not have "Source" 
method in the drop down list.

If we create an isolated network with NetScaler published service offering 
for Load balancing service, then the load balancing configuration UI does not 
show "Source" as one of the supported LB methods in the drop down list. It only 
shows "Round-Robin" and "LeastConnection" methods in the list. However, it 
successfully creates an LB rule with "Source" as the LB method via the API.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/nitin-maharana/CloudStack-Nitin nitin5

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1786.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1786


commit 709343a0abf9ee00ef7c5ad6ebc70ff7cf1df57a
Author: Nitin Kumar Maharana 
Date:   2016-11-25T12:37:17Z

CLOUDSTACK-9618: Load Balancer configuration page does not have "Source" 
method in the drop down list

Added the source method to supported algorithm list in Netscaler element.
Added a validation check.




> Load Balancer configuration page does not have "Source" method in the drop 
> down list
> 
>
> Key: CLOUDSTACK-9618
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9618
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nitin Kumar Maharana
>
> If we create an isolated network with NetScaler published service offering 
> for Load balancing service, then the load balancing configuration UI does not 
> show "Source" as one of the supported LB methods in the drop down list. It 
> only shows "Round-Robin" and "LeastConnection" methods in the list. However, 
> it successfully creates an LB rule with "Source" as the LB method via the API.





[jira] [Commented] (CLOUDSTACK-9416) ACS master GUI: Enabling Static NAT on an associated Public IP to one of the NICs (networks) of a multi-NIC VM fails due to a wrong (default) Guest VM IP being sel

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695871#comment-15695871
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9416:


Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1785
  
LGTM
this PR solves the same issue as #1778 


> ACS master GUI: Enabling Static NAT on an associated Public IP to one of the 
> NICs (networks) of a multi-NIC VM fails due to a wrong (default) Guest VM IP 
> being selected in the GUI
> ---
>
> Key: CLOUDSTACK-9416
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9416
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
> Fix For: 4.9.2.0
>
> Attachments: network1.png, network2.png
>
>
> ACS master GUI: Enabling Static NAT on an associated Public IP to one of the 
> NICs (networks) of a multi-NIC VM fails due to a wrong (default) Guest VM IP 
> being selected in the GUI.





[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15695934#comment-15695934
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9539:


Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1727
  
@rhtyd @koushik-das @serg38 thanks, I'll work on the second option


> Support changing Service offering for instance with VM Snapshots
> 
>
> Key: CLOUDSTACK-9539
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9539
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> CloudStack doesn't support changing service offering for vm instances which 
> have vm snapshots, they should be removed before changing service offering.
> h3. Goal
> Extend actual behaviour by supporting changing service offering for vms which 
> have vm snapshots. In that case, previously taken snapshots (if reverted) 
> should use previous service offering, future snapshots should use the newest.
> h3. Proposed solution:
> 1. Adding {{service_offering_id}} column on {{vm_snapshots}} table: This way 
> snapshot can be reverted to original state even though service offering can 
> be changed for vm instance.
> NOTE: Existing vm snapshots are populated on update script by {{UPDATE 
> vm_snapshots s JOIN vm_instance v ON v.id = s.vm_id SET s.service_offering_id 
> = v.service_offering_id;}}
> 2. New vm snapshots will use instance vm service offering id as 
> {{service_offering_id}}
> 3. Revert to vm snapshots should use vm snapshot's {{service_offering_id}} 
> value.
> h3. Example use case:
> - Deploy vm using service offering A
> - Take vm snapshot -> snap1 (service offering A)
> - Stop vm
> - Change vm service offering to B
> - Revert to VM snapshot snap 1
> - Start vm
> It is expected that vm has service offering A after last step
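The proposed solution and the use case above can be sketched as a small model: the snapshot records the VM's current offering when it is taken (step 2), and revert restores that recorded offering (step 3). This is an illustrative sketch of the data flow, not CloudStack's actual classes:

```python
class Vm:
    def __init__(self, offering):
        self.offering = offering
        self.snapshots = []

    def take_snapshot(self, name):
        # Step 2: pin the VM's current offering on the snapshot
        # (the service_offering_id column on vm_snapshots).
        self.snapshots.append({"name": name,
                               "service_offering_id": self.offering})

    def revert(self, name):
        # Step 3: revert restores the snapshot's recorded offering,
        # regardless of any scaling done since the snapshot was taken.
        snap = next(s for s in self.snapshots if s["name"] == name)
        self.offering = snap["service_offering_id"]

vm = Vm(offering="A")
vm.take_snapshot("snap1")   # snap1 pins offering A
vm.offering = "B"           # stop the VM and change it to offering B
vm.revert("snap1")          # revert to snap1
assert vm.offering == "A"   # after starting, the VM is back on offering A
```

Without the recorded offering id, the revert step would have nothing to restore from, which is exactly the gap the schema change closes.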





[jira] [Commented] (CLOUDSTACK-9610) Disabled Host Keeps Being up status after unmanaging cluster

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15696035#comment-15696035
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9610:


Github user syed commented on the issue:

https://github.com/apache/cloudstack/pull/1779
  
Hi @priyankparihar,

Thanks for the patch. Can you provide a more descriptive message about the 
bug and how this fix addresses that? 




> Disabled Host Keeps Being up status after unmanaging cluster 
> -
>
> Key: CLOUDSTACK-9610
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9610
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
>
> ENVIRONMENT 
> = 
> XenServer Version : 6.2 
> ISSUE 
> == 
> Disabled host keeps showing "Up" status after unmanaging the cluster 
> Repro steps followed 
> == 
> Disabled Host from UI 
> Unmanaged the cluster which the host was in. 
> Still can see the Host showing up in UI 
> Expected: the host should be removed from the UI and should not show up after 
> disabling it and unmanaging the cluster. 





[jira] [Commented] (CLOUDSTACK-9606) While IP address is released, tag are not deleted

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15696042#comment-15696042
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9606:


Github user syed commented on the issue:

https://github.com/apache/cloudstack/pull/1775
  
can you please provide a description and steps to reproduce


> While IP address is released, tag are not deleted
> -
>
> Key: CLOUDSTACK-9606
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9606
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
>
> IP address release API call (disassociateIpAddress) does not have any 
> mechanism to remove the tags.
> All though the IP address is not allocated, corresponding tag still exists.
> REPRO STEPS
> ==
> 1. Acquire an IP address by Domain-Admin account A. 
> 2. Add tag to the target IP address by Domain-Admin account A. 
> 3. Release the target IP address without deleting the tag. 
> ⇒We found out that the state of the IP address is "Free" at this point, 
> but the tag which was added by Domain-Admin account A still remains. 
> 4. Acquire the target IP address by Domain-Admin account B. 
> ⇒The tag still remains without change. 
> If account B tries to delete the tag, in our lab we could delete it as 
> domain admin, although the customer reported that they cannot complete it 
> because of an authorization error.
> EXPECTED BEHAVIOR
> ==
> When we release an IP address, the corresponding tags should be removed from 
> related tables
> ACTUAL BEHAVIOR
> ==
> When we release an IP address, the corresponding tags are not removed from 
> related tables
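The expected behaviour in the report can be sketched as: releasing an address must also delete its tags, so the next account to acquire it starts clean. This is an illustrative model with invented names, not the disassociateIpAddress implementation:

```python
def release_ip(ip_store, tag_store, ip):
    """Expected behaviour: mark the address Free AND drop its tags,
    so they cannot leak to the next account that acquires the address."""
    ip_store[ip] = "Free"
    tag_store.pop(ip, None)  # delete tags along with the release

ips = {"203.0.113.7": "Allocated"}
tags = {"203.0.113.7": {"owner": "account-A"}}

release_ip(ips, tags, "203.0.113.7")
assert ips["203.0.113.7"] == "Free"
assert "203.0.113.7" not in tags  # no stale tag left for account B to inherit
```

The reported bug corresponds to omitting the `tag_store.pop` step: the state flips to "Free" while account A's tag lingers on the address.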





[jira] [Commented] (CLOUDSTACK-9612) Restart Network with clean up fails for networks whose offering has been changed from Isolated -> RVR

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15696061#comment-15696061
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9612:


Github user cloudmonger commented on the issue:

https://github.com/apache/cloudstack/pull/1781
  
### ACS CI BVT Run
 **Summary:**
 Build Number 141
 Hypervisor xenserver
 NetworkType Advanced
 Passed=78
 Failed=17
 Skipped=7

_Link to logs Folder (search by build_no):_ 
https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0


**Failed tests:**
* test_vm_snapshots.py

 * test_01_create_vm_snapshots Failed

 * test_02_revert_vm_snapshots Failed

 * test_03_delete_vm_snapshots Failed

* test_service_offerings.py

 * test_04_change_offering_small Failed

* test_routers_iptables_default_policy.py

 * test_01_single_VPC_iptables_policies Failed

* test_loadbalance.py

 * test_01_create_lb_rule_src_nat Failed

 * test_02_create_lb_rule_non_nat Failed

 * test_assign_and_removal_lb Failed

* test_router_dns.py

 * test_router_dns_guestipquery Failing since 2 runs

* test_deploy_vm_iso.py

 * test_deploy_vm_from_iso Failing since 23 runs

* test_volumes.py

 * test_01_create_volume Failed

 * test_02_attach_volume Failed

* test_vm_life_cycle.py

 * test_10_attachAndDetach_iso Failing since 24 runs

* test_routers_network_ops.py

 * test_01_isolate_network_FW_PF_default_routes_egress_true Failed

 * test_02_isolate_network_FW_PF_default_routes_egress_false Failed

 * test_01_RVR_Network_FW_PF_SSH_default_routes_egress_true Failed

 * test_02_RVR_Network_FW_PF_SSH_default_routes_egress_false Failed


**Skipped tests:**
test_01_test_vm_volume_snapshot
test_vm_nic_adapter_vmxnet3
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm

**Passed test suites:**
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_routers.py
test_reset_vm_on_reboot.py
test_snapshots.py
test_deploy_vms_with_varied_deploymentplanners.py
test_non_contigiousvlan.py
test_login.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_disk_offerings.py


> Restart Network with clean up fails for networks whose offering has been 
> changed from Isolated -> RVR
> -
>
> Key: CLOUDSTACK-9612
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9612
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
> Fix For: 4.9.2.0
>
>
> Deploy a network N1 with "Offering for Isolated networks with Source Nat 
> service enabled". Ensure both the VM and the VR are up.
> Create an RVR offering and change the network offering from the current one 
> to the RVR offering.
> Ensure both Master and Backup are up and running.
> Now restart the network with the clean-up option enabled.
> Observation:
> Restarting the network with clean-up fails with the error below.
> {noformat}
> 2016-11-24 15:49:32,432 DEBUG [c.c.v.VirtualMachineManagerImpl] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Start completed for VM VM[DomainRouter|r-21-QA]
> 2016-11-24 15:49:32,432 DEBUG [c.c.v.VmWorkJobHandlerProxy] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Done executing VM work job: 
> com.cloud.vm.VmWorkStart{"dcId":0,"rawParams":{"RestartNetwork":"rO0ABXNyABFqYXZhLmxhbmcuQm9vbGVhbs0gcoDVnPruAgABWgAFdmFsdWV4cAE"},"userId":2,"accountId":2,"vmId":21,"handlerName":"VirtualMachineManagerImpl"}
> 2016-11-24 15:49:32,432 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Complete async job-104, jobStatus: SUCCEEDED, resultCode: 0, 
> result: null
> 2016-11-24 15:49:32,434 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Publish async job-104 complete on message bus
> 2016

[jira] [Commented] (CLOUDSTACK-9563) ExtractTemplate returns malformed URL after migrating NFS to s3

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15696059#comment-15696059
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9563:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1733
  
Trillian test result (tid-436)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 25816 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1733-t436-kvm-centos7.zip
Test completed. 41 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_router_dhcp_opts | `Failure` | 21.31 | test_router_dhcphosts.py
test_10_attachAndDetach_iso | `Error` | 33.22 | test_vm_life_cycle.py
test_01_vpc_site2site_vpn | Success | 166.34 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 91.68 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 352.73 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 265.96 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 587.30 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 529.40 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1362.26 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 555.51 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 746.95 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1335.23 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.52 | test_volumes.py
test_08_resize_volume | Success | 15.55 | test_volumes.py
test_07_resize_fail | Success | 20.75 | test_volumes.py
test_06_download_detached_volume | Success | 15.44 | test_volumes.py
test_05_detach_volume | Success | 100.32 | test_volumes.py
test_04_delete_attached_volume | Success | 10.33 | test_volumes.py
test_03_download_attached_volume | Success | 15.43 | test_volumes.py
test_02_attach_volume | Success | 74.02 | test_volumes.py
test_01_create_volume | Success | 651.93 | test_volumes.py
test_deploy_vm_multiple | Success | 259.76 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.09 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.27 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 36.00 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.12 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 126.05 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 127.02 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.18 | test_vm_life_cycle.py
test_01_stop_vm | Success | 35.36 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 161.42 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.03 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.48 | test_templates.py
test_03_delete_template | Success | 5.16 | test_templates.py
test_02_edit_template | Success | 90.23 | test_templates.py
test_01_create_template | Success | 40.55 | test_templates.py
test_10_destroy_cpvm | Success | 162.02 | test_ssvm.py
test_09_destroy_ssvm | Success | 138.33 | test_ssvm.py
test_08_reboot_cpvm | Success | 161.76 | test_ssvm.py
test_07_reboot_ssvm | Success | 103.30 | test_ssvm.py
test_06_stop_cpvm | Success | 132.03 | test_ssvm.py
test_05_stop_ssvm | Success | 133.45 | test_ssvm.py
test_04_cpvm_internals | Success | 1.23 | test_ssvm.py
test_03_ssvm_internals | Success | 3.05 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_01_snapshot_root_disk | Success | 16.36 | test_snapshots.py
test_04_change_offering_small | Success | 209.90 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.10 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.17 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.12 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.20 | test_secondary_storage.py
test_09_reboot_router | Success | 50.49 | test_routers.py
test_08_start_router | Success | 30.60 | test_routers.py
test_07_stop_router | Success | 10.19 | test_routers.py
test_06_router_advanced | Success | 0.05 | test_routers.py
test_05_router_basic | Success | 0.04 | test_routers.py
test_04_restart_n

[jira] [Commented] (CLOUDSTACK-9595) Transactions are not getting retried in case of database deadlock errors

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15696396#comment-15696396
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9595:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1762
  
@rafaelweingartner You might be right that pod_vlan_map should be in the 
join. Maybe I didn't find the correct methods after all. @jburwell @rhtyd What 
do you think?

I was able to find the management server log for Deadlock 1. It looks like 
one of the transactions came from the findAndUpdateDirectAgentToLoad method in 
HostDaoImpl, which creates a rather complex transaction:

2016-11-24 15:04:39,284 DEBUG [host.dao.HostDaoImpl] (ClusteredAgentManager 
Timer:ctx-a8e9449c) Resetting hosts suitable for reconnect
2016-11-24 15:04:39,320 DEBUG [db.Transaction.Transaction] 
(ClusteredAgentManager Timer:ctx-a8e9449c) Rolling back the transaction: Time = 
36 Name =  ClusteredAgentManager Timer; called by 
-TransactionLegacy.rollback:879-TransactionLegacy.removeUpTo:822-TransactionLegacy.close:646-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy48.findAndUpdateDirectAgentToLoad:-1-ClusteredAgentManagerImpl.scanDirectAgentToLoad:195-ClusteredAgentManagerImpl.runDirectAgentScanTimerTask:185-ClusteredAgentManagerImpl.access$100:99
2016-11-24 15:04:39,322 ERROR [agent.manager.ClusteredAgentManagerImpl] 
(ClusteredAgentManager Timer:ctx-a8e9449c) Unexpected exception DB Exception 
on: com.mysql.jdbc.JDBC4PreparedStatement@1e58727c: SELECT host.id, 
host.disconnected, host.name, host.status, host.type, host.private_ip_address, 
host.private_mac_address, host.private_netmask, host.public_netmask, 
host.public_ip_address, host.public_mac_address, host.storage_ip_address, 
host.cluster_id, host.storage_netmask, host.storage_mac_address, 
host.storage_ip_address_2, host.storage_netmask_2, host.storage_mac_address_2, 
host.hypervisor_type, host.proxy_port, host.resource, host.fs_type, 
host.available, host.setup, host.resource_state, host.hypervisor_version, 
host.update_count, host.uuid, host.data_center_id, host.pod_id, 
host.cpu_sockets, host.cpus, host.url, host.speed, host.ram, host.parent, 
host.guid, host.capabilities, host.total_size, host.last_ping, 
host.mgmt_server_id, host.dom0_memory, host.version, host.created, host.removed 
FROM host WHERE host.resource IS NOT NULL  AND host.mgmt_server_id = 
345048964870  AND host.last_ping <= 1445339907  AND host.cluster_id IS NOT NULL 
 AND host.status IN ('Disconnected','Down','Alert')  AND host.removed IS NULL  
FOR UPDATE 
Caused by: 
com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException: Deadlock 
found when trying to get lock; try restarting transaction

The beginning of the second transaction was:
SELECT host.id, host.disconnected, host.name, host.status, host.type, 
host.private_ip_address, host.private_mac_address, host.private_netmask, 
host.public_netmask, host.public_ip_address, host.public_mac_address, 
host.storage_ip_address, host.cluster_id, host.storage_netmask, 
host.storage_mac_address, host.storage_ip_address_2, host.storage_netmask_2, 
host.storage_mac_address_2, host.hypervisor_type, host.proxy_port, 
host.resource, host.fs_type, host.available, host.setup, host.resource_state, 
host.hypervisor_version, host.update_count, host.uuid, host.data_center_id, 
host.pod_id, host.cpu_sockets, host.cpus, host.url, host.speed, host.ram, 
host.parent, host.guid, host.capabilities, host.total_size, host.last_ping, 
host.mgmt_server_id, host.dom0_memory, host.version, host.created, host.removed 
FROM host  LEFT OUTER JOIN op_host_transfer ON host.id=op_host_transfer.id  IN

I will try to trace it to the ACS method.
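The retry behaviour the ticket asks for can be sketched generically: re-run the transactional work when the database reports a deadlock, up to a bounded number of attempts. This is a hedged sketch, not CloudStack's Transaction framework; `DeadlockException` stands in for MySQL's deadlock rollback error (error 1213, surfaced by Connector/J as `MySQLTransactionRollbackException`).

```java
import java.util.concurrent.Callable;

// Minimal retry-on-deadlock wrapper: the work is re-executed until it
// succeeds or the attempt budget is exhausted, with a simple linear back-off
// between attempts. Illustrative only; not CloudStack's transaction API.
public class DeadlockRetry {
    static class DeadlockException extends RuntimeException {}

    static <T> T withRetry(Callable<T> tx, int maxAttempts) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return tx.call();
            } catch (DeadlockException e) {
                if (attempt >= maxAttempts) throw e;   // give up after N tries
                Thread.sleep(50L * attempt);           // linear back-off before retrying
            }
        }
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Fails twice with a deadlock, then succeeds on the third attempt.
        String result = withRetry(() -> {
            if (++calls[0] < 3) throw new DeadlockException();
            return "committed";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts"); // committed after 3 attempts
    }
}
```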


> Transactions are not getting retried in case of database deadlock errors
> 
>
> Key: CLOUDSTACK-9595
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9595
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.8.0
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> Customer is seeing occasional error 'Deadlock found when trying to get lock; 
> try restarting transaction' messages in their management server logs.  It 
> happens regularly at least once a day.  The following is the error seen 
> 2015-12-09 19:23:19,450 ERROR [cloud.api.ApiServer] 
> (catalina-exec-3:ctx-f05c58fc ctx-39c17156 ctx-7becdf6e) unhandled exception 
> executing api command: [Ljava.lang.String;@230a6e7f
> com.cloud.utils.exception.CloudRuntimeException: DB Exception on: 
> com.mysql.jdbc.

[jira] [Commented] (CLOUDSTACK-9339) Virtual Routers don't handle Multiple Public Interfaces

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15696434#comment-15696434
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9339:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1659
  
Trillian test result (tid-440)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 33701 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1659-t440-kvm-centos7.zip
Test completed. 35 look ok, 13 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_VPC_default_routes | `Failure` | 812.61 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | `Failure` | 802.74 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | `Failure` | 344.39 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Failure` | 292.52 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | `Failure` | 308.11 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | `Failure` | 850.49 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 369.71 | test_vpc_redundant.py
test_02_attach_volume | `Failure` | 668.65 | test_volumes.py
test_01_create_volume | `Failure` | 683.96 | test_volumes.py
test_10_attachAndDetach_iso | `Failure` | 684.02 | test_vm_life_cycle.py
test_04_change_offering_small | `Failure` | 794.64 | test_service_offerings.py
test_router_dns_guestipquery | `Failure` | 277.48 | test_router_dns.py
test_router_dhcphosts | `Failure` | 188.70 | test_router_dhcphosts.py
test_router_dhcp_opts | `Failure` | 21.10 | test_router_dhcphosts.py
test_04_rvpc_privategw_static_routes | `Failure` | 994.91 | test_privategw_acl.py
test_03_vpc_privategw_restart_vpc_cleanup | `Failure` | 934.94 | test_privategw_acl.py
test_02_vpc_privategw_static_routes | `Failure` | 914.87 | test_privategw_acl.py
test_isolate_network_password_server | `Failure` | 188.81 | test_password_server.py
test_reboot_router | `Failure` | 442.02 | test_network.py
test_network_rules_acquired_public_ip_3_Load_Balancer_Rule | `Failure` | 831.79 | test_network.py
test_network_rules_acquired_public_ip_2_nat_rule | `Failure` | 679.53 | test_network.py
test_network_rules_acquired_public_ip_1_static_nat_rule | `Failure` | 675.65 | test_network.py
test_02_port_fwd_on_non_src_nat | `Failure` | 678.93 | test_network.py
test_01_port_fwd_on_src_nat | `Failure` | 673.81 | test_network.py
test_assign_and_removal_lb | `Failure` | 110.44 | test_loadbalance.py
test_02_create_lb_rule_non_nat | `Failure` | 110.40 | test_loadbalance.py
test_01_create_lb_rule_src_nat | `Failure` | 110.55 | test_loadbalance.py
test_02_internallb_roundrobin_1RVPC_3VM_HTTP_port80 | `Failure` | 275.29 | test_internal_lb.py
test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80 | `Failure` | 209.92 | test_internal_lb.py
test_01_vpc_site2site_vpn | `Error` | 295.74 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | `Error` | 376.30 | test_vpc_vpn.py
test_05_rvpc_multi_tiers | `Error` | 405.30 | test_vpc_redundant.py
ContextSuite context=TestRouterDHCPHosts>:teardown | `Error` | 234.13 | test_router_dhcphosts.py
test_04_rvpc_internallb_haproxy_stats_on_all_interfaces | `Error` | 230.33 | test_internal_lb.py
test_03_vpc_internallb_haproxy_stats_on_all_interfaces | `Error` | 215.26 | test_internal_lb.py
test_01_vpc_remote_access_vpn | Success | 61.08 | test_vpc_vpn.py
test_09_delete_detached_volume | Success | 15.48 | test_volumes.py
test_08_resize_volume | Success | 15.38 | test_volumes.py
test_07_resize_fail | Success | 20.50 | test_volumes.py
test_06_download_detached_volume | Success | 15.29 | test_volumes.py
test_05_detach_volume | Success | 100.28 | test_volumes.py
test_04_delete_attached_volume | Success | 10.23 | test_volumes.py
test_03_download_attached_volume | Success | 15.28 | test_volumes.py
test_deploy_vm_multiple | Success | 289.44 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.20 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 30.93 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.84 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.81 | test_vm_life_cycle.py
test_02_start_vm | Success | 5.14 | test_vm_life_cycle.py
test_01_stop_vm | Success | 125.89 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 171.46 | test_templates.py
   

[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15696440#comment-15696440
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9538:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1710
  
Trillian test result (tid-438)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 34149 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1710-t438-kvm-centos7.zip
Test completed. 35 look ok, 13 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_VPC_default_routes | `Failure` | 797.63 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | `Failure` | 797.52 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | `Failure` | 349.40 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Failure` | 287.55 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | `Failure` | 323.23 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | `Failure` | 860.28 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 380.03 | test_vpc_redundant.py
test_02_attach_volume | `Failure` | 668.60 | test_volumes.py
test_01_create_volume | `Failure` | 684.03 | test_volumes.py
test_10_attachAndDetach_iso | `Failure` | 698.83 | test_vm_life_cycle.py
test_04_change_offering_small | `Failure` | 794.89 | test_service_offerings.py
test_router_dns_guestipquery | `Failure` | 277.55 | test_router_dns.py
test_router_dhcphosts | `Failure` | 188.81 | test_router_dhcphosts.py
test_router_dhcp_opts | `Failure` | 21.25 | test_router_dhcphosts.py
test_04_rvpc_privategw_static_routes | `Failure` | 1045.68 | test_privategw_acl.py
test_03_vpc_privategw_restart_vpc_cleanup | `Failure` | 899.94 | test_privategw_acl.py
test_02_vpc_privategw_static_routes | `Failure` | 924.67 | test_privategw_acl.py
test_isolate_network_password_server | `Failure` | 188.92 | test_password_server.py
test_reboot_router | `Failure` | 426.89 | test_network.py
test_network_rules_acquired_public_ip_3_Load_Balancer_Rule | `Failure` | 831.83 | test_network.py
test_network_rules_acquired_public_ip_2_nat_rule | `Failure` | 679.49 | test_network.py
test_network_rules_acquired_public_ip_1_static_nat_rule | `Failure` | 675.67 | test_network.py
test_02_port_fwd_on_non_src_nat | `Failure` | 678.76 | test_network.py
test_01_port_fwd_on_src_nat | `Failure` | 673.86 | test_network.py
test_assign_and_removal_lb | `Failure` | 110.42 | test_loadbalance.py
test_02_create_lb_rule_non_nat | `Failure` | 110.41 | test_loadbalance.py
test_01_create_lb_rule_src_nat | `Failure` | 110.63 | test_loadbalance.py
test_02_internallb_roundrobin_1RVPC_3VM_HTTP_port80 | `Failure` | 280.18 | test_internal_lb.py
test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80 | `Failure` | 225.12 | test_internal_lb.py
test_01_vpc_site2site_vpn | `Error` | 290.79 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | `Error` | 381.59 | test_vpc_vpn.py
test_05_rvpc_multi_tiers | `Error` | 410.40 | test_vpc_redundant.py
ContextSuite context=TestRouterDHCPHosts>:teardown | `Error` | 239.26 | test_router_dhcphosts.py
test_04_rvpc_internallb_haproxy_stats_on_all_interfaces | `Error` | 230.23 | test_internal_lb.py
test_03_vpc_internallb_haproxy_stats_on_all_interfaces | `Error` | 220.37 | test_internal_lb.py
test_01_vpc_remote_access_vpn | Success | 66.23 | test_vpc_vpn.py
test_09_delete_detached_volume | Success | 15.50 | test_volumes.py
test_08_resize_volume | Success | 15.44 | test_volumes.py
test_07_resize_fail | Success | 20.55 | test_volumes.py
test_06_download_detached_volume | Success | 15.33 | test_volumes.py
test_05_detach_volume | Success | 100.28 | test_volumes.py
test_04_delete_attached_volume | Success | 10.20 | test_volumes.py
test_03_download_attached_volume | Success | 15.27 | test_volumes.py
test_deploy_vm_multiple | Success | 289.25 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.23 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 81.46 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.84 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.86 | test_vm_life_cycle.py
test_02_start_vm | Success | 5.15 | test_vm_life_cycle.py
test_01_stop_vm | Success | 125.89 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 105.88 | test_templates.py
  

[jira] [Commented] (CLOUDSTACK-9595) Transactions are not getting retried in case of database deadlock errors

2016-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15696452#comment-15696452
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9595:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1762
  
@rafaelweingartner I might be wrong, but the 2nd one came from 
findAndUpdateDirectAgentToLoad in HostDaoImpl, which also creates a large 
transaction.


> Transactions are not getting retried in case of database deadlock errors
> 
>
> Key: CLOUDSTACK-9595
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9595
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.8.0
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> Customer is seeing occasional error 'Deadlock found when trying to get lock; 
> try restarting transaction' messages in their management server logs.  It 
> happens regularly at least once a day.  The following is the error seen 
> 2015-12-09 19:23:19,450 ERROR [cloud.api.ApiServer] 
> (catalina-exec-3:ctx-f05c58fc ctx-39c17156 ctx-7becdf6e) unhandled exception 
> executing api command: [Ljava.lang.String;@230a6e7f
> com.cloud.utils.exception.CloudRuntimeException: DB Exception on: 
> com.mysql.jdbc.JDBC4PreparedStatement@74f134e3: DELETE FROM 
> instance_group_vm_map WHERE instance_group_vm_map.instance_id = 941374
>   at com.cloud.utils.db.GenericDaoBase.expunge(GenericDaoBase.java:1209)
>   at sun.reflect.GeneratedMethodAccessor360.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> com.cloud.utils.db.TransactionContextInterceptor.invoke(TransactionContextInterceptor.java:34)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
>   at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
>   at 
> org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
>   at com.sun.proxy.$Proxy237.expunge(Unknown Source)
>   at 
> com.cloud.vm.UserVmManagerImpl$2.doInTransactionWithoutResult(UserVmManagerImpl.java:2593)
>   at 
> com.cloud.utils.db.TransactionCallbackNoReturn.doInTransaction(TransactionCallbackNoReturn.java:25)
>   at com.cloud.utils.db.Transaction$2.doInTransaction(Transaction.java:57)
>   at com.cloud.utils.db.Transaction.execute(Transaction.java:45)
>   at com.cloud.utils.db.Transaction.execute(Transaction.java:54)
>   at 
> com.cloud.vm.UserVmManagerImpl.addInstanceToGroup(UserVmManagerImpl.java:2575)
>   at 
> com.cloud.vm.UserVmManagerImpl.updateVirtualMachine(UserVmManagerImpl.java:2332)





[jira] [Created] (CLOUDSTACK-9619) Fixes for PR 1600

2016-11-25 Thread Mike Tutkowski (JIRA)
Mike Tutkowski created CLOUDSTACK-9619:
--

 Summary: Fixes for PR 1600
 Key: CLOUDSTACK-9619
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9619
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.10.0.0
 Environment: All
Reporter: Mike Tutkowski
 Fix For: 4.10.0.0


In StorageSystemDataMotionStrategy.performCopyOfVdi we call getSnapshotDetails. 
In one such scenario, the source snapshot in question is coming from secondary 
storage (when we are creating a new volume on managed storage from a snapshot 
of ours that’s on secondary storage).

This usually “worked” in the regression tests due to a bit of "luck": We 
retrieve the ID of the snapshot (which is on secondary storage) and then try to 
pull out its StorageVO object (which is for primary storage). If you happen to 
have a primary storage that matches the ID (which is the ID of a secondary 
storage), then getSnapshotDetails populates its Map with 
inapplicable data (that is later ignored) and you don’t easily see a problem. 
However, if you don’t have a primary storage that matches that ID (which I 
didn’t today because I had removed that primary storage), then a 
NullPointerException is thrown.

I have fixed that issue by skipping getSnapshotDetails if the source is coming 
from secondary storage.
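A minimal sketch of that guard, hedged: `DataStoreRole` values and the detail map below are simplified stand-ins for the CloudStack types, and `detailsFor` is a hypothetical helper, not the actual `getSnapshotDetails` signature. The idea is just that the primary-storage lookup is skipped when the snapshot lives on secondary storage.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: only perform the primary-storage snapshot-details lookup
// when the source snapshot actually lives on primary storage; otherwise the
// snapshot ID would be resolved against the wrong (or a missing) store.
public class SnapshotGuardSketch {
    enum DataStoreRole { Primary, Image }  // Image == secondary storage

    static Map<String, String> detailsFor(long snapshotId, DataStoreRole role) {
        if (role != DataStoreRole.Primary) {
            // Source is on secondary storage: skip the lookup entirely.
            return Collections.emptyMap();
        }
        Map<String, String> details = new HashMap<>();
        details.put("snapshotId", Long.toString(snapshotId));
        return details;
    }

    public static void main(String[] args) {
        System.out.println(detailsFor(42L, DataStoreRole.Image).isEmpty());   // true
        System.out.println(detailsFor(42L, DataStoreRole.Primary).isEmpty()); // false
    }
}
```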

While fixing that, I noticed a couple more problems:

· We can invoke grantAccess on a snapshot that’s actually on secondary storage 
(this doesn’t amount to much because the VolumeServiceImpl ignores the call 
when it’s not for a primary-storage driver).
· We can invoke revokeAccess on a snapshot that’s actually on secondary storage 
(this doesn’t amount to much because the VolumeServiceImpl ignores the call 
when it’s not for a primary-storage driver).

I have corrected those issues, as well.

I then came across one more problem:
· When using a SAN snapshot and copying it to secondary storage or creating a 
new managed-storage volume from a snapshot of ours on secondary storage, we 
attach to the SR in the XenServer code, but detach from it in the 
StorageSystemDataMotionStrategy code (by sending a message to the XenServer 
code to perform an SR detach). Since we know to detach from the SR after the 
copy is done, we should detach from the SR in the XenServer code (without that 
code having to be explicitly called from outside of the XenServer logic).

I went ahead and changed that, as well.





[jira] [Created] (CLOUDSTACK-9620) KVM Improvements for Managed Storage

2016-11-25 Thread Mike Tutkowski (JIRA)
Mike Tutkowski created CLOUDSTACK-9620:
--

 Summary: KVM Improvements for Managed Storage
 Key: CLOUDSTACK-9620
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: KVM, Management Server
Affects Versions: Future
 Environment: KVM
Reporter: Mike Tutkowski
 Fix For: Future


Allow zone-wide primary storage based on a custom plug-in to be added via the 
GUI in a KVM-only environment.

Support for root disks on managed storage with KVM

Template caching with managed storage and KVM

Support for volume snapshots with managed storage on KVM

Added the ability to revert a volume to a snapshot on KVM

Updated some integration tests

Enforce that a SolidFire volume’s Min IOPS cannot exceed 15,000 and its Max and 
Burst IOPS cannot exceed 100,000.

A SolidFire volume must be at least one GB.

The storage driver should not remove the row from the cloud.template_spool_ref 
table.

Enable cluster-scoped managed storage

Only volumes from zone-wide managed storage can be storage motioned from a host 
in one cluster to a host in another cluster (cannot do so at the time being 
with volumes from cluster-scoped managed storage).

Updates for SAN-assisted snapshots


