[jira] [Commented] (CLOUDSTACK-9356) VPC add VPN User fails same error as CLOUDSTACK-8927

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15820402#comment-15820402
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9356:


GitHub user ustcweizhou opened a pull request:

https://github.com/apache/cloudstack/pull/1903

[4.9] CLOUDSTACK-9356: FIX Cannot add users in VPC VPN

This happens if the VPC has redundant VRs.
The results from the VRs are combined into a single list in commit 13eb789.
This PR simply separates the results into two parts and checks each part when
there are two VRs.
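The failure mode described above can be sketched in a few lines. With redundant VRs the combined list holds one answer per router, so validating it as if a single router replied ("Expected 1 answers ... but received 2") indexes past the expected size. This is an illustrative sketch with a hypothetical `splitPerRouter` helper, not the actual CloudStack code:

```java
import java.util.ArrayList;
import java.util.List;

public class AnswerSplitDemo {
    // Hypothetical helper: split the flat answer list returned by redundant
    // VRs into one sub-list per router before validating the answer count.
    static List<List<String>> splitPerRouter(List<String> combined, int routerCount) {
        int perRouter = combined.size() / routerCount;
        List<List<String>> parts = new ArrayList<>();
        for (int r = 0; r < routerCount; r++) {
            parts.add(new ArrayList<>(combined.subList(r * perRouter, (r + 1) * perRouter)));
        }
        return parts;
    }

    public static void main(String[] args) {
        // Two redundant VRs each answered the VpnUsersCfgCommand once, so the
        // combined list has 2 entries; checking it against an expected size of
        // 1 is what triggered the IndexOutOfBoundsException in the bug report.
        List<String> combined = List.of("answer-from-VR1", "answer-from-VR2");
        List<List<String>> perRouter = splitPerRouter(combined, 2);
        System.out.println(perRouter.size());        // 2
        System.out.println(perRouter.get(0).size()); // 1
    }
}
```

After the split, each per-router part can be checked against the expected single answer independently.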

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ustcweizhou/cloudstack vpc-vpn-add-user

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1903.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1903


commit 2ec3ba36bdffa99f5cff9837893d7a697f393ef5
Author: Wei Zhou 
Date:   2017-01-12T07:00:44Z

CLOUDSTACK-9356: FIX Cannot add users in VPC VPN




> VPC add VPN User fails same error as CLOUDSTACK-8927
> 
>
> Key: CLOUDSTACK-9356
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9356
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, VPC, XenServer
>Affects Versions: 4.8.0, 4.9.0
> Environment: Two CentOS7 MGMT Servers, Two XenServerClusters, 
> Advanced Networking, VLAN isolated
>Reporter: Thomas
>Priority: Critical
>
> When we try to add a VPN User on a VPC, the following error occurs:
> Management Server:
> ---
> Apr 20 09:24:43 WARN  [resource.virtualnetwork.VirtualRoutingResource] 
> (DirectAgent-68:ctx-de5cbf45) (logid:180e35ed) Expected 1 answers while 
> executing VpnUsersCfgCommand but received 2
> Apr 20 09:24:43 admin02 server: WARN  [c.c.a.r.v.VirtualRoutingResource] 
> (DirectAgent-68:ctx-de5cbf45) (logid:180e35ed) Expected 1 answers while 
> executing VpnUsersCfgCommand but received 2
> Apr 20 09:24:47 WARN  [resource.virtualnetwork.VirtualRoutingResource] 
> (DirectAgent-268:ctx-873174f6) (logid:180e35ed) Expected 1 answers while 
> executing VpnUsersCfgCommand but received 2
> Apr 20 09:24:47 admin02 server: WARN  [c.c.a.r.v.VirtualRoutingResource] 
> (DirectAgent-268:ctx-873174f6) (logid:180e35ed) Expected 1 answers while 
> executing VpnUsersCfgCommand but received 2
> Apr 20 09:24:47 WARN  [network.vpn.RemoteAccessVpnManagerImpl] 
> (API-Job-Executor-58:ctx-7f86f610 job-1169 ctx-1073feac) (logid:180e35ed) 
> Unable to apply vpn users
> Apr 20 09:24:47 localhost java.lang.IndexOutOfBoundsException: Index: 1, 
> Size: 1
> Apr 20 09:24:47 localhost at 
> java.util.ArrayList.rangeCheck(ArrayList.java:653)
> Apr 20 09:24:47 localhost at java.util.ArrayList.get(ArrayList.java:429)
> Apr 20 09:24:47 localhost at 
> com.cloud.network.vpn.RemoteAccessVpnManagerImpl.applyVpnUsers(RemoteAccessVpnManagerImpl.java:532)
> Apr 20 09:24:47 localhost at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Apr 20 09:24:47 localhost at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Apr 20 09:24:47 localhost at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Apr 20 09:24:47 localhost at 
> java.lang.reflect.Method.invoke(Method.java:498)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
> Apr 20 09:24:47 localhost at 
> com.sun.proxy.$Proxy234.applyVpnUsers(Unknown Source)
> Apr 20 09:24:47 localhost at 
> org.apache.cloudstack.api.command.user.vpn.AddVpnUserCmd.execute(AddVpnUserCmd.java:122)
> Apr 20 09:24:47 localhost at 
> com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:150)
> Apr 20 09:24:47 localhost at 
> com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:108)
> Apr 20 0

[jira] [Commented] (CLOUDSTACK-9405) listDomains API call takes an extremely long time to respond

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15820315#comment-15820315
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9405:


Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1901
  
@serg38 what do you want to validate, if it is necessary?


> listDomains API call takes an extremely long time to respond
> 
>
> Key: CLOUDSTACK-9405
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9405
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Affects Versions: 4.8.0
>Reporter: dsclose
>  Labels: performance
>
> We recently upgraded from Cloudstack 4.5.2 to Cloudstack 4.8.0. Since this 
> update, the listDomains API call has started taking an extremely long time to 
> respond. This has caused issues with our services that rely on this API call. 
> Initially they simply timed out until we increased the thresholds. Now we 
> have processes that used to take a few seconds taking many minutes.
> This is so problematic for us that our organisation has put a halt on further 
> updates of CloudStack 4.5.2 installations. If reversing the update of zones 
> already on 4.8.0 were feasible, we would have reverted to 4.5.2.
> Here is a table of the times we're seeing:
> ||CS Version||Domain Count||API Response Time||
> |4.5.2|251|~3s|
> |4.8.0|182|~26s|
> |4.8.0|<10|<1s|
> This small data sample indicates that the response time for zones with a 
> larger number of domains is significantly worse after the update to 4.8.0. 
> Zones with few domains aren't able to reproduce this issue.
> I recall a bug being resolved recently that concerned reducing the response 
> time for list* API calls. I also recall [~remibergsma] resolving a bug 
> concerning the sorting of the listDomains response. Is it possible that these 
> issues are connected?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9712) Establishing Remote access VPN is failing due to mismatch of preshared secrets post Disable/Enable VPN.

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15820304#comment-15820304
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9712:


Github user ustcweizhou closed the pull request at:

https://github.com/apache/cloudstack/pull/1890


> Establishing Remote access VPN  is failing due to mismatch of preshared 
> secrets post Disable/Enable VPN.
> 
>
> Key: CLOUDSTACK-9712
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9712
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.9.0
>Reporter: DeepthiMachiraju
>Priority: Critical
> Attachments: management-server.rar
>
>
> - On an Isolated Network, enable VPN and configure a few VPN users.
> - Deploy a Windows 2012 R2 VM in the shared network. Create a new VPN 
> connection by providing the NAT IP, select L2TP in the configuration and 
> provide the PSK provided by CloudStack.
> - Try logging in with the VPN users created above.
> Observations : 
> - The user fails to log in with the following error message at the client: 
> "Error 789: The L2TP connection attempt failed because the security layer 
> encountered a processing error during initial negotiations with the remote 
> computer."
> - Each time VPN is Disabled/Enabled, a new key is stored in ipsec.any.secrets.
> root@r-5-VM:~# cat /etc/ipsec.d/ipsec.any.secrets
> : PSK "O3rEXqxgMXRvNkPRXaqtkg43"
> : PSK "ZwEcGeHKnE9z2zpPht9eh77T"
> : PSK "7CUjMgwO8sbMJXjyHhRg2NDp"
> Note: when the older PSKs are deleted and only the current key is retained in 
> the file, remote VPN is established successfully.
> =auth.log==
> Dec 28 10:49:30 r-5-VM pluto[2828]: packet from 10.147.52.62:500: ignoring 
> Vendor ID payload [MS NT5 ISAKMPOAKLEY 0009]
> Dec 28 10:49:30 r-5-VM pluto[2828]: packet from 10.147.52.62:500: received 
> Vendor ID payload [RFC 3947] method set to=109
> Dec 28 10:49:30 r-5-VM pluto[2828]: packet from 10.147.52.62:500: received 
> Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already 
> using method 109
> Dec 28 10:49:30 r-5-VM pluto[2828]: packet from 10.147.52.62:500: ignoring 
> Vendor ID payload [FRAGMENTATION]
> Dec 28 10:49:30 r-5-VM pluto[2828]: packet from 10.147.52.62:500: ignoring 
> Vendor ID payload [MS-Negotiation Discovery Capable]
> Dec 28 10:49:30 r-5-VM pluto[2828]: packet from 10.147.52.62:500: ignoring 
> Vendor ID payload [Vid-Initial-Contact]
> Dec 28 10:49:30 r-5-VM pluto[2828]: packet from 10.147.52.62:500: ignoring 
> Vendor ID payload [IKE CGA version 1]
> Dec 28 10:49:30 r-5-VM pluto[2828]: "L2TP-PSK"[3] 10.147.52.62 #18: 
> responding to Main Mode from unknown peer 10.147.52.62
> Dec 28 10:49:30 r-5-VM pluto[2828]: "L2TP-PSK"[3] 10.147.52.62 #18: 
> OAKLEY_GROUP 20 not supported.  Attribute OAKLEY_GROUP_DESCRIPTION
> Dec 28 10:49:30 r-5-VM pluto[2828]: "L2TP-PSK"[3] 10.147.52.62 #18: 
> OAKLEY_GROUP 19 not supported.  Attribute OAKLEY_GROUP_DESCRIPTION
> Dec 28 10:49:30 r-5-VM pluto[2828]: "L2TP-PSK"[3] 10.147.52.62 #18: multiple 
> ipsec.secrets entries with distinct secrets match endpoints: first secret used
> Dec 28 10:49:30 r-5-VM pluto[2828]: "L2TP-PSK"[3] 10.147.52.62 #18: multiple 
> ipsec.secrets entries with distinct secrets match endpoints: first secret used
> Dec 28 10:49:30 r-5-VM pluto[2828]: "L2TP-PSK"[3] 10.147.52.62 #18: 
> transition from state STATE_MAIN_R0 to state STATE_MAIN_R1
> Dec 28 10:49:30 r-5-VM pluto[2828]: "L2TP-PSK"[3] 10.147.52.62 #18: 
> STATE_MAIN_R1: sent MR1, expecting MI2
> Dec 28 10:49:30 r-5-VM pluto[2828]: "L2TP-PSK"[3] 10.147.52.62 #18: 
> NAT-Traversal: Result using RFC 3947 (NAT-Traversal): no NAT detected
> Dec 28 10:49:30 r-5-VM pluto[2828]: "L2TP-PSK"[3] 10.147.52.62 #18: multiple 
> ipsec.secrets entries with distinct secrets match endpoints: first secret used
> Dec 28 10:49:30 r-5-VM pluto[2828]: "L2TP-PSK"[3] 10.147.52.62 #18: multiple 
> ipsec.secrets entries with distinct secrets match endpoints: first secret used
> Dec 28 10:49:30 r-5-VM pluto[2828]: "L2TP-PSK"[3] 10.147.52.62 #18: 
> transition from state STATE_MAIN_R1 to state STATE_MAIN_R2
> Dec 28 10:49:30 r-5-VM pluto[2828]: "L2TP-PSK"[3] 10.147.52.62 #18: 
> STATE_MAIN_R2: sent MR2, expecting MI3
> Dec 28 10:49:30 r-5-VM pluto[2828]: "L2TP-PSK"[3] 10.147.52.62 #18: next 
> payload type of ISAKMP Identification Payload has an unknown value: 255
> Dec 28 10:49:30 r-5-VM pluto[2828]: "L2TP-PSK"[3] 10.147.52.62 #18: probable 
> authentication failure (mismatch of preshared secrets?): malformed payload in 
> p

[jira] [Commented] (CLOUDSTACK-9712) Establishing Remote access VPN is failing due to mismatch of preshared secrets post Disable/Enable VPN.

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15820305#comment-15820305
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9712:


GitHub user ustcweizhou reopened a pull request:

https://github.com/apache/cloudstack/pull/1890

CLOUDSTACK-9712: FIX issue on preshared key if we disable/enable remote 
access vpn

Way to reproduce the issue
(1) enable remote access vpn
root@r-8349-VM:~# cat /etc/ipsec.d/ipsec.any.secrets
: PSK "mVSx5KDXCPYX7X5DGb2W8yNW"

(2) disable/enable vpn
root@r-8349-VM:~# cat /etc/ipsec.d/ipsec.any.secrets
: PSK "mVSx5KDXCPYX7X5DGb2W8yNW"
: PSK "HeV3dHZpZXt4chhfvhx8D83C"

Expected configuration:
root@r-8349-VM:~# cat /etc/ipsec.d/ipsec.any.secrets
: PSK "HeV3dHZpZXt4chhfvhx8D83C"
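The append-vs-overwrite behaviour above can be illustrated with a small sketch. This is hypothetical Java (the real fix lives in the virtual router's configuration scripts): the buggy path appends each new PSK line on every disable/enable cycle, while the fixed path truncates the file so only the current key survives:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class PskWriteDemo {
    // Hypothetical helper: write one ": PSK ..." line, either appending
    // (the bug) or truncating the file first (the fix).
    static void writePsk(Path secrets, String psk, boolean append) throws IOException {
        String line = ": PSK \"" + psk + "\"\n";
        Files.writeString(secrets, line, StandardOpenOption.CREATE,
                append ? StandardOpenOption.APPEND : StandardOpenOption.TRUNCATE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("ipsec.any", ".secrets");
        writePsk(f, "mVSx5KDXCPYX7X5DGb2W8yNW", true);
        writePsk(f, "HeV3dHZpZXt4chhfvhx8D83C", true);    // buggy: keys accumulate
        System.out.println(Files.readAllLines(f).size()); // 2
        writePsk(f, "HeV3dHZpZXt4chhfvhx8D83C", false);   // fixed: overwrite
        System.out.println(Files.readAllLines(f).size()); // 1
    }
}
```

With multiple PSK lines present, pluto logs "multiple ipsec.secrets entries with distinct secrets match endpoints: first secret used" and may pick a stale key, which matches the auth.log in the issue.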

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ustcweizhou/cloudstack vpn-preshared-key-issue

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1890.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1890


commit 16c2cd0244e65238fa1aa7fe85fe2636a2298a7c
Author: Wei Zhou 
Date:   2017-01-05T11:14:13Z

FIX issue on preshared key if we disable/enable remote access vpn

Way to reproduce the issue
(1) enable remote access vpn
root@r-8349-VM:~# cat /etc/ipsec.d/ipsec.any.secrets
: PSK "mVSx5KDXCPYX7X5DGb2W8yNW"

(2) disable/enable vpn
root@r-8349-VM:~# cat /etc/ipsec.d/ipsec.any.secrets
: PSK "mVSx5KDXCPYX7X5DGb2W8yNW"
: PSK "HeV3dHZpZXt4chhfvhx8D83C"

Expected configuration:
root@r-8349-VM:~# cat /etc/ipsec.d/ipsec.any.secrets
: PSK "HeV3dHZpZXt4chhfvhx8D83C"




> Establishing Remote access VPN  is failing due to mismatch of preshared 
> secrets post Disable/Enable VPN.
> 
>
> Key: CLOUDSTACK-9712
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9712
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.9.0
>Reporter: DeepthiMachiraju
>Priority: Critical
> Attachments: management-server.rar
>
>
> - On an Isolated Network, enable VPN and configure a few VPN users.
> - Deploy a Windows 2012 R2 VM in the shared network. Create a new VPN 
> connection by providing the NAT IP, select L2TP in the configuration and 
> provide the PSK provided by CloudStack.
> - Try logging in with the VPN users created above.
> Observations : 
> - The user fails to log in with the following error message at the client: 
> "Error 789: The L2TP connection attempt failed because the security layer 
> encountered a processing error during initial negotiations with the remote 
> computer."
> - Each time VPN is Disabled/Enabled, a new key is stored in ipsec.any.secrets.
> root@r-5-VM:~# cat /etc/ipsec.d/ipsec.any.secrets
> : PSK "O3rEXqxgMXRvNkPRXaqtkg43"
> : PSK "ZwEcGeHKnE9z2zpPht9eh77T"
> : PSK "7CUjMgwO8sbMJXjyHhRg2NDp"
> Note: when the older PSKs are deleted and only the current key is retained in 
> the file, remote VPN is established successfully.
> =auth.log==
> Dec 28 10:49:30 r-5-VM pluto[2828]: packet from 10.147.52.62:500: ignoring 
> Vendor ID payload [MS NT5 ISAKMPOAKLEY 0009]
> Dec 28 10:49:30 r-5-VM pluto[2828]: packet from 10.147.52.62:500: received 
> Vendor ID payload [RFC 3947] method set to=109
> Dec 28 10:49:30 r-5-VM pluto[2828]: packet from 10.147.52.62:500: received 
> Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already 
> using method 109
> Dec 28 10:49:30 r-5-VM pluto[2828]: packet from 10.147.52.62:500: ignoring 
> Vendor ID payload [FRAGMENTATION]
> Dec 28 10:49:30 r-5-VM pluto[2828]: packet from 10.147.52.62:500: ignoring 
> Vendor ID payload [MS-Negotiation Discovery Capable]
> Dec 28 10:49:30 r-5-VM pluto[2828]: packet from 10.147.52.62:500: ignoring 
> Vendor ID payload [Vid-Initial-Contact]
> Dec 28 10:49:30 r-5-VM pluto[2828]: packet from 10.147.52.62:500: ignoring 
> Vendor ID payload [IKE CGA version 1]
> Dec 28 10:49:30 r-5-VM pluto[2828]: "L2TP-PSK"[3] 10.147.52.62 #18: 
> responding to Main Mode from unknown peer 10.147.52.62
> Dec 28 10:49:30 r-5-VM pluto[2828]: "L2TP-PSK"[3] 10.147.52.62 #18: 
> OAKLEY_GROUP 20 not supported.  Attribute OAKLEY_GROUP_DESCRIPTION
> Dec 28 10:49:30 r-5-VM pluto[2828]: "L2TP-PSK"[3] 10.147.52.62 #18: 
> OAKLEY_GROUP 19 not supported.  Attribute OAKLEY_GROUP_DESCRI

[jira] [Commented] (CLOUDSTACK-9710) Switch to JDK 1.8

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15820262#comment-15820262
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9710:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1888
  
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been 
kicked to run smoke tests


> Switch to JDK 1.8
> -
>
> Key: CLOUDSTACK-9710
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9710
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0
>
>
> Switch to using JDK1.8 by default for building and running CloudStack.
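In a Maven build like CloudStack's, switching the default to JDK 1.8 typically means raising the compiler source/target levels in the parent pom. A hedged sketch of the usual change (the property names below are standard Maven conventions; the actual PR may differ):

```xml
<!-- Sketch: pin compilation to Java 8 in the parent pom.xml.
     Illustrative only; not necessarily the exact change in PR #1888. -->
<properties>
  <maven.compiler.source>1.8</maven.compiler.source>
  <maven.compiler.target>1.8</maven.compiler.target>
</properties>
```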





[jira] [Commented] (CLOUDSTACK-9710) Switch to JDK 1.8

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15820260#comment-15820260
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9710:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1888
  
@blueorangutan test


> Switch to JDK 1.8
> -
>
> Key: CLOUDSTACK-9710
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9710
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0
>
>
> Switch to using JDK1.8 by default for building and running CloudStack.





[jira] [Updated] (CLOUDSTACK-9718) Revamp the dropdown showing lists of hosts available for migration in a Zone

2017-01-11 Thread Rashmi Dixit (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rashmi Dixit updated CLOUDSTACK-9718:
-
Status: Reviewable  (was: In Progress)

> Revamp the dropdown showing lists of hosts available for migration in a Zone
> 
>
> Key: CLOUDSTACK-9718
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9718
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.7.0, 4.8.0, 4.9.0
>Reporter: Rashmi Dixit
>Assignee: Rashmi Dixit
> Fix For: 4.10.0.0
>
> Attachments: MigrateInstance-SeeHosts-Search.PNG, 
> MigrateInstance-SeeHosts.PNG
>
>
> There are a couple of issues:
> 1. When looking for the possible hosts for migration, not all are displayed.
> 2. If there is a large number of hosts, the dropdown is not easy to use.
> To fix this, we propose changing the view to a list that shows the hosts 
> with radio buttons. Additionally, add a search option so the hostname can 
> be searched within this list, making it more usable.





[jira] [Updated] (CLOUDSTACK-9675) Cloudstack Metrics: Miscellaneous bug fixes

2017-01-11 Thread Rashmi Dixit (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rashmi Dixit updated CLOUDSTACK-9675:
-
Status: Reviewable  (was: In Progress)

> Cloudstack Metrics: Miscellaneous bug fixes
> ---
>
> Key: CLOUDSTACK-9675
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9675
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.7.1
>Reporter: Rashmi Dixit
>Assignee: Rashmi Dixit
> Fix For: 4.9.0
>
>
> There are a number of issues in the CloudStack metrics feature:
> 1. Go to Zone metrics or Hosts metrics. Numerical values are not listed under 
> the Mem Usage and Mem Allocation columns. Instead, 'NaN' is displayed.
> 2. Create a Windows instance on a Xen cluster. No IOPS data is generated or 
> shown in the Disk Usage tab for that instance.
> 3. Changing the storage.overprovisioning factor should change the values in 
> the storage metrics. This doesn't happen currently.
> 4. Allocated memory is not correctly calculated on the Hosts Metrics page for 
> a XenServer host with multiple instances.
> 5. List of Virtual Machines will be incorrect if the number is greater than 
> the default.page.size.
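Item 1 in the list above (the 'NaN' display) has a classic cause worth sketching: dividing by a zero or missing total yields NaN in floating point, and rendering that value verbatim puts "NaN" in the column. Illustrative Java only, with a hypothetical `memUsagePercent` helper; the actual metrics view is UI code:

```java
import java.util.Locale;

public class MemUsageDemo {
    // Hypothetical calculation: memory usage as a percentage of total.
    // 0.0 / 0.0 is NaN in Java, which would otherwise render as "NaN".
    static String memUsagePercent(double usedBytes, double totalBytes) {
        double pct = usedBytes / totalBytes * 100.0;
        // Guard: show a readable placeholder instead of the raw NaN.
        return Double.isNaN(pct) ? "N/A" : String.format(Locale.ROOT, "%.2f%%", pct);
    }

    public static void main(String[] args) {
        System.out.println(memUsagePercent(2048, 8192)); // 25.00%
        System.out.println(memUsagePercent(0, 0));       // N/A
    }
}
```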





[jira] [Commented] (CLOUDSTACK-9675) Cloudstack Metrics: Miscellaneous bug fixes

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15820020#comment-15820020
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9675:


Github user rashmidixit commented on the issue:

https://github.com/apache/cloudstack/pull/1826
  
@rhtyd Please take a look now. I have squashed the changes.


> Cloudstack Metrics: Miscellaneous bug fixes
> ---
>
> Key: CLOUDSTACK-9675
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9675
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.7.1
>Reporter: Rashmi Dixit
>Assignee: Rashmi Dixit
> Fix For: 4.9.0
>
>
> There are a number of issues in the CloudStack metrics feature:
> 1. Go to Zone metrics or Hosts metrics. Numerical values are not listed under 
> the Mem Usage and Mem Allocation columns. Instead, 'NaN' is displayed.
> 2. Create a Windows instance on a Xen cluster. No IOPS data is generated or 
> shown in the Disk Usage tab for that instance.
> 3. Changing the storage.overprovisioning factor should change the values in 
> the storage metrics. This doesn't happen currently.
> 4. Allocated memory is not correctly calculated on the Hosts Metrics page for 
> a XenServer host with multiple instances.
> 5. List of Virtual Machines will be incorrect if the number is greater than 
> the default.page.size.





[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819298#comment-15819298
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9539:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1727
  
Trillian test result (tid-778)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
Total time taken: 51612 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1727-t778-vmware-55u3.zip
Intermittent failure detected: /marvin/tests/smoke/test_internal_lb.py
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 47 look ok, 1 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 252.62 | 
test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 441.94 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 172.18 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 719.34 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 410.07 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 760.20 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 724.28 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1555.35 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 810.39 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 696.00 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1537.82 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 31.08 | test_volumes.py
test_06_download_detached_volume | Success | 85.78 | test_volumes.py
test_05_detach_volume | Success | 100.46 | test_volumes.py
test_04_delete_attached_volume | Success | 15.24 | test_volumes.py
test_03_download_attached_volume | Success | 20.31 | test_volumes.py
test_02_attach_volume | Success | 58.83 | test_volumes.py
test_01_create_volume | Success | 537.15 | test_volumes.py
test_change_service_offering_for_vm_with_snapshots | Success | 575.69 | 
test_vm_snapshots.py
test_03_delete_vm_snapshots | Success | 280.30 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 241.07 | test_vm_snapshots.py
test_01_test_vm_volume_snapshot | Success | 201.58 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 166.72 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 272.93 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.84 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.22 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 81.17 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.10 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.14 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.15 | test_vm_life_cycle.py
test_02_start_vm | Success | 25.27 | test_vm_life_cycle.py
test_01_stop_vm | Success | 10.15 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 271.95 | test_templates.py
test_08_list_system_templates | Success | 0.04 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.07 | test_templates.py
test_04_extract_template | Success | 15.30 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.12 | test_templates.py
test_01_create_template | Success | 141.10 | test_templates.py
test_10_destroy_cpvm | Success | 297.06 | test_ssvm.py
test_09_destroy_ssvm | Success | 269.16 | test_ssvm.py
test_08_reboot_cpvm | Success | 156.74 | test_ssvm.py
test_07_reboot_ssvm | Success | 158.66 | test_ssvm.py
test_06_stop_cpvm | Success | 207.08 | test_ssvm.py
test_05_stop_ssvm | Success | 173.82 | test_ssvm.py
test_04_cpvm_internals | Success | 1.25 | test_ssvm.py
test_03_ssvm_internals | Success | 3.86 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.13 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 66.51 | test_snapshots.py
test_04_change_offering_small | Success | 126.93 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering 

[jira] [Commented] (CLOUDSTACK-9710) Switch to JDK 1.8

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819203#comment-15819203
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9710:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1888
  
Trillian test result (tid-779)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
Total time taken: 42810 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1888-t779-vmware-55u3.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_routers.py
Intermittent failure detected: /marvin/tests/smoke/test_ssvm.py
Test completed. 48 look ok, 1 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 212.50 | 
test_privategw_acl.py
test_03_vpc_privategw_restart_vpc_cleanup | `Failure` | 349.56 | 
test_privategw_acl.py
test_02_vpc_privategw_static_routes | `Failure` | 207.38 | 
test_privategw_acl.py
test_03_vpc_privategw_restart_vpc_cleanup | `Error` | 359.82 | 
test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 371.81 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 171.87 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 588.01 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 476.92 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 761.82 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 673.22 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1559.94 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 745.25 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 675.58 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1380.72 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 36.05 | test_volumes.py
test_06_download_detached_volume | Success | 55.55 | test_volumes.py
test_05_detach_volume | Success | 110.35 | test_volumes.py
test_04_delete_attached_volume | Success | 20.26 | test_volumes.py
test_03_download_attached_volume | Success | 20.70 | test_volumes.py
test_02_attach_volume | Success | 58.75 | test_volumes.py
test_01_create_volume | Success | 513.45 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.25 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 227.22 | test_vm_snapshots.py
test_01_test_vm_volume_snapshot | Success | 176.48 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 161.66 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 262.79 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.04 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.82 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 185.26 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 81.19 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.27 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.15 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.14 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.23 | test_vm_life_cycle.py
test_01_stop_vm | Success | 10.15 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 272.10 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 15.24 | test_templates.py
test_03_delete_template | Success | 5.13 | test_templates.py
test_02_edit_template | Success | 90.17 | test_templates.py
test_01_create_template | Success | 121.07 | test_templates.py
test_10_destroy_cpvm | Success | 267.03 | test_ssvm.py
test_09_destroy_ssvm | Success | 269.29 | test_ssvm.py
test_08_reboot_cpvm | Success | 156.69 | test_ssvm.py
test_07_reboot_ssvm | Success | 188.55 | test_ssvm.py
test_06_stop_cpvm | Success | 176.85 | test_ssvm.py
test_05_stop_ssvm | Success | 178.85 | test_ssvm.py
test_04_cpvm_internals | Success | 1.13 | test_ssvm.py
test_03_ssvm_internals | Success | 2.92 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.14 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 21.20 | test_snapshots.py
test_04_change_offering_small | Suc

[jira] [Commented] (CLOUDSTACK-9710) Switch to JDK 1.8

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15818877#comment-15818877
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9710:


Github user milamberspace commented on the issue:

https://github.com/apache/cloudstack/pull/1888
  
Thanks for the docs update.

I've tested the PR with my test topology on Ubuntu 14.04 + OpenJDK PPA, 
with the new systemvm (generated from the PR too).
Installation works; simple deployment works (ssvm, cpvm, RV).

LGTM.

Note: the docker build command on Ubuntu 14.04 doesn't work if the 
OpenJDK 8 PPA is not installed first.


> Switch to JDK 1.8
> -
>
> Key: CLOUDSTACK-9710
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9710
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0
>
>
> Switch to using JDK1.8 by default for building and running CloudStack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8746) VM Snapshotting implementation for KVM

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15818401#comment-15818401
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8746:


Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/977
  
@kiwiflyer indeed, we need to make some changes in 
./test/integration/smoke/test_vm_snapshots.py

1. 
'''
vm_snapshot = VmSnapshot.create(
self.apiclient,
self.virtual_machine.id,
"false",
"TestSnapshot",
"Dsiplay Text"
)
'''
The 4th line ("false") should be "true" if self.hypervisor is KVM.

2.
'''
self.virtual_machine.stop(self.apiclient)

VmSnapshot.revertToSnapshot(
self.apiclient,
list_snapshot_response[0].id)

self.virtual_machine.start(self.apiclient)
'''

This part should also be changed, as the VM should be running when reverting a 
VM snapshot.

3. It seems we need to add a new test to back up a snapshot from a VM snapshot 
on KVM.
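The hypervisor-dependent flag in point 1 could be computed once in the test. A minimal sketch of the idea, assuming a hypothetical helper (`snapshot_memory_flag` is not part of Marvin):

```python
def snapshot_memory_flag(hypervisor):
    """Return the snapshotmemory argument for VmSnapshot.create().

    KVM refuses a disk-only snapshot of a running VM, so memory must be
    included there; other hypervisors can keep the existing "false".
    """
    return "true" if hypervisor.lower() == "kvm" else "false"
```

The test would then pass `snapshot_memory_flag(self.hypervisor)` instead of the hard-coded "false".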


> VM Snapshotting implementation for KVM
> --
>
> Key: CLOUDSTACK-8746
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8746
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> Currently it is not supported.
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Snapshots



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9736) Incoherent validation and error message when you change the vm.password.length configuration parameter

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15818179#comment-15818179
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9736:


GitHub user milamberspace opened a pull request:

https://github.com/apache/cloudstack/pull/1902

CLOUDSTACK-9736 Incoherent validation and error message when you chan…

…ge the vm.password.length configuration parameter

The default value introduced in schema-430to440.sql is 6 for the length.


Probably needs to be merged into the LTS branch (4.9) and others since 4.4.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/milamberspace/cloudstack PasswordLengthFix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1902.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1902


commit 9d3d3b4c24500986b3286baf4a3b2bb1d4f32611
Author: Milamber 
Date:   2017-01-11T12:29:09Z

CLOUDSTACK-9736 Incoherent validation and error message when you change the 
vm.password.length configuration parameter

The default value introduced in schema-430to440.sql is 6 for the length.




> Incoherent validation and error message when you change the 
> vm.password.length configuration parameter
> --
>
> Key: CLOUDSTACK-9736
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9736
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.7.0, 4.8.0, 4.9.0, 4.10.0.0
>Reporter: Milamber
>Assignee: Milamber
>Priority: Minor
> Fix For: 4.10.0.0
>
>
> When you try to change the value of the vm.password.length parameter, if the 
> value is < 10 the error message says:
> "Please enter a value greater than 6 for the configuration parameter"
> In the code server/src/com/cloud/configuration/ConfigurationManagerImpl.java 
> the validation uses 10 as the length while the message says 6:
> if ("vm.password.length".equalsIgnoreCase(name) && val < 10) {
> throw new InvalidParameterValueException("Please enter a 
> value greater than 6 for the configuration parameter:" + name);



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-9736) Incoherent validation and error message when you change the vm.password.length configuration parameter

2017-01-11 Thread Milamber (JIRA)
Milamber created CLOUDSTACK-9736:


 Summary: Incoherent validation and error message when you change 
the vm.password.length configuration parameter
 Key: CLOUDSTACK-9736
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9736
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.9.0, 4.8.0, 4.7.0, 4.10.0.0
Reporter: Milamber
Assignee: Milamber
Priority: Minor
 Fix For: 4.10.0.0


When you try to change the value of the vm.password.length parameter, if the value 
is < 10 the error message says:
"Please enter a value greater than 6 for the configuration parameter"

In the code server/src/com/cloud/configuration/ConfigurationManagerImpl.java 
the validation uses 10 as the length while the message says 6:

if ("vm.password.length".equalsIgnoreCase(name) && val < 10) {
throw new InvalidParameterValueException("Please enter a 
value greater than 6 for the configuration parameter:" + name);
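The root cause is that the threshold and the message are maintained separately. A minimal language-agnostic sketch (in Python, with a hypothetical `MIN_VM_PASSWORD_LENGTH` constant) of the obvious fix, keeping both in sync:

```python
# Hypothetical constant; must match the default seeded in schema-430to440.sql.
MIN_VM_PASSWORD_LENGTH = 6

def validate_vm_password_length(value):
    """Reject values below the minimum, citing the same number in the message.

    Using one constant in both the check and the message prevents the kind
    of drift seen in ConfigurationManagerImpl (check says 10, message says 6).
    """
    if value < MIN_VM_PASSWORD_LENGTH:
        raise ValueError(
            "Please enter a value greater than or equal to %d for the "
            "configuration parameter: vm.password.length"
            % MIN_VM_PASSWORD_LENGTH)
    return value
```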




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-9513) Migrate transifex workflow and format to json

2017-01-11 Thread Milamber (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Milamber closed CLOUDSTACK-9513.

   Resolution: Fixed
Fix Version/s: (was: Future)

The work is done.

> Migrate transifex workflow and format to json
> -
>
> Key: CLOUDSTACK-9513
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9513
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0
>
>
> With the changes introduced by the PR 
> https://github.com/apache/cloudstack/pull/1669 we no longer need 
> messages.properties format. Therefore, we can migrate Transifex workflow to 
> the json format. /cc [~milamber]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9650) Allow starting VMs regardless of cpu/memory cluster.disablethreshold setting

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817894#comment-15817894
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9650:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1899
  
@koushik-das as an RM I want to ensure that all changes, irrespective of 
their size, are merged after a QA process.

For UI/translation/docs changes tests are not necessary. Please understand 
that it's difficult to review and keep track of each and every PR in detail, 
and help and support from the community is appreciated to make this work. We 
have agreed to gate the branches, yet you've gone ahead with merging without 
following proper guidelines; testing against one hypervisor, with errors 
that may or may not be related to the PR, is not good enough, given we have 
systems like Trillian to test against at least three major hypervisors.

Lastly, I disagree, I think the setting should be available per zone for 
admins to override this on a per-zone basis.


> Allow starting VMs regardless of cpu/memory cluster.disablethreshold setting
> 
>
> Key: CLOUDSTACK-9650
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9650
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.10.0.0
>Reporter: Koushik Das
>Assignee: Koushik Das
> Fix For: 4.10.1.0
>
>
> VM deployments are not allowed on clusters where resource (cpu/memory) 
> allocation has exceeded the cluster disabled thresholds. The same policy also 
> gets applied in case of start VM if last host where the VM was running 
> doesn't have enough capacity and a new host is picked up. In certain 
> scenarios this can be restrictive and despite having capacity, the VM cannot 
> be started.
> This improvement is to provide administrator an option to disable/enable 
> cluster threshold enforcement during start of a stopped VM as long as 
> sufficient capacity is available.
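The proposed behavior can be sketched as follows (hypothetical names and a simplified ratio-based check; the actual CloudStack allocator logic differs):

```python
def cluster_allows_allocation(used_ratio, disable_threshold,
                              starting_stopped_vm, skip_threshold_on_start):
    """Decide whether a cluster may receive the VM.

    If the admin enables skip_threshold_on_start, starting a previously
    stopped VM bypasses the cluster.disablethreshold check; the actual
    capacity checks (is there room at all?) still apply elsewhere.
    """
    if starting_stopped_vm and skip_threshold_on_start:
        return True
    return used_ratio < disable_threshold
```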



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8746) VM Snapshotting implementation for KVM

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817883#comment-15817883
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8746:


Github user kiwiflyer commented on the issue:

https://github.com/apache/cloudstack/pull/977
  
@ustcweizhou @rhtyd 

So the vmsnapshot test seems like it might need some work. Here's the 
exception it threw:

2017-01-10 17:27:01,025 - CRITICAL - EXCEPTION: 
test_01_create_vm_snapshots: ['Traceback (most recent call last):\n', '  File 
"/usr/lib64/python2.7/unittest/case.py", line 369, in run\ntestMethod()\n', 
'  File "/marvin/tests/smoke/test_vm_snapshots.py", line 158, in 
test_01_create_vm_snapshots\n"Dsiplay Text"\n', '  File 
"/usr/lib/python2.7/site-packages/marvin/lib/base.py", line 4702, in create\n   
 return VmSnapshot(apiclient.createVMSnapshot(cmd).__dict__)\n', '  File 
"/usr/lib/python2.7/site-packages/marvin/cloudstackAPI/cloudstackAPIClient.py", 
line 1281, in createVMSnapshot\nresponse = 
self.connection.marvinRequest(command, response_type=response, 
method=method)\n', '  File 
"/usr/lib/python2.7/site-packages/marvin/cloudstackConnection.py", line 379, in 
marvinRequest\nraise e\n', 'CloudstackAPIException: Execute cmd: 
createvmsnapshot failed, due to: errorCode: 431, errorText:KVM VM does not 
allow to take a disk-only snapshot when VM is in running state\n']

The nuance here is that you need to specify snapshotmemory=true in order to 
take a KVM snapshot with the VM running.


> VM Snapshotting implementation for KVM
> --
>
> Key: CLOUDSTACK-8746
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8746
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> Currently it is not supported.
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Snapshots



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9650) Allow starting VMs regardless of cpu/memory cluster.disablethreshold setting

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817835#comment-15817835
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9650:


Github user koushik-das commented on the issue:

https://github.com/apache/cloudstack/pull/1899
  
@rhtyd If you had checked #1812 you wouldn't have asked these questions :)
This config was introduced in #1812 and the scope was incorrectly put as 
zone, the config is meant to be a global one.


> Allow starting VMs regardless of cpu/memory cluster.disablethreshold setting
> 
>
> Key: CLOUDSTACK-9650
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9650
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.10.0.0
>Reporter: Koushik Das
>Assignee: Koushik Das
> Fix For: 4.10.1.0
>
>
> VM deployments are not allowed on clusters where resource (cpu/memory) 
> allocation has exceeded the cluster disabled thresholds. The same policy also 
> gets applied in case of start VM if last host where the VM was running 
> doesn't have enough capacity and a new host is picked up. In certain 
> scenarios this can be restrictive and despite having capacity, the VM cannot 
> be started.
> This improvement is to provide administrator an option to disable/enable 
> cluster threshold enforcement during start of a stopped VM as long as 
> sufficient capacity is available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9650) Allow starting VMs regardless of cpu/memory cluster.disablethreshold setting

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817815#comment-15817815
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9650:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1899
  
@koushik-das there are no test results on this PR, also can you explain why 
you've removed the option to override this setting at the zone level?


> Allow starting VMs regardless of cpu/memory cluster.disablethreshold setting
> 
>
> Key: CLOUDSTACK-9650
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9650
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.10.0.0
>Reporter: Koushik Das
>Assignee: Koushik Das
> Fix For: 4.10.1.0
>
>
> VM deployments are not allowed on clusters where resource (cpu/memory) 
> allocation has exceeded the cluster disabled thresholds. The same policy also 
> gets applied in case of start VM if last host where the VM was running 
> doesn't have enough capacity and a new host is picked up. In certain 
> scenarios this can be restrictive and despite having capacity, the VM cannot 
> be started.
> This improvement is to provide administrator an option to disable/enable 
> cluster threshold enforcement during start of a stopped VM as long as 
> sufficient capacity is available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9731) Hardcoded label appears on the Add zone wizard

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817808#comment-15817808
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9731:


Github user koushik-das commented on the issue:

https://github.com/apache/cloudstack/pull/1892
  
@sureshanaparti please post a screen shot of the UI change


> Hardcoded label appears on the Add zone wizard
> --
>
> Key: CLOUDSTACK-9731
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9731
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
> Attachments: hardcoded_label.jpg
>
>
> Repro Steps: 
> 1. Setup basic environments as normal.
> 2. Open a browser, go to CloudStack Web Console.
> 3. Go on "Infrastructure" on left panel, choose Zone and click "View all".
> 4. Click on "Add Zone", choose "Advanced" and click "Next".
> 5. Input name and IPv4 NDS on "Setup Zone", click "next".
> 6. Add "Physical network 2", move mouse to "X" (close) button, check the tip.
> 7. Tip shows the label name: label.remove.this.physical.network.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9699) Metrics: Add a global setting to enable/disable Metrics view

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817807#comment-15817807
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9699:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1884
  
@rashmidixit I'm working on writing separate metrics view APIs 
(backend/APIs); they will improve the performance significantly. An explicit 
global setting is not necessary, please hold this PR.


> Metrics: Add a global setting to enable/disable Metrics view
> 
>
> Key: CLOUDSTACK-9699
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9699
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.7.0, 4.8.0, 4.9.0
>Reporter: Rashmi Dixit
>Assignee: Rashmi Dixit
> Fix For: 4.10.0.0
>
> Attachments: enable-metrics-flag.PNG, metrics-disabled.PNG, 
> metrics-enabled.PNG
>
>
> The Metrics view for each type of entity basically fires APIs and calculates 
> required values on the client end. For example, to display memory usage at 
> the zone level, it will fetch all zones, and for each zone it will fetch 
> pods->clusters->hosts->VMs.
> For a very large CloudStack installation this will have a major impact on 
> performance. 
> Ideally, there should be an API which calculates all this in the backend and 
> the UI should simply show the values. However, for the time being, introduce a 
> global setting called enable.metrics which will be set to false. This will 
> cause the metrics button not to be shown on any of the pages.
> If the Admin changes this to true, then the button will be visible and 
> Metrics functionality will work as usual.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9650) Allow starting VMs regardless of cpu/memory cluster.disablethreshold setting

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817802#comment-15817802
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9650:


Github user koushik-das commented on the issue:

https://github.com/apache/cloudstack/pull/1899
  
@rhtyd Tests are present in #1812


> Allow starting VMs regardless of cpu/memory cluster.disablethreshold setting
> 
>
> Key: CLOUDSTACK-9650
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9650
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.10.0.0
>Reporter: Koushik Das
>Assignee: Koushik Das
> Fix For: 4.10.1.0
>
>
> VM deployments are not allowed on clusters where resource (cpu/memory) 
> allocation has exceeded the cluster disabled thresholds. The same policy also 
> gets applied in case of start VM if last host where the VM was running 
> doesn't have enough capacity and a new host is picked up. In certain 
> scenarios this can be restrictive and despite having capacity, the VM cannot 
> be started.
> This improvement is to provide administrator an option to disable/enable 
> cluster threshold enforcement during start of a stopped VM as long as 
> sufficient capacity is available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9710) Switch to JDK 1.8

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1581#comment-1581
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9710:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1888
  
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + vmware-55u3) has been 
kicked to run smoke tests


> Switch to JDK 1.8
> -
>
> Key: CLOUDSTACK-9710
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9710
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0
>
>
> Switch to using JDK1.8 by default for building and running CloudStack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9710) Switch to JDK 1.8

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817776#comment-15817776
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9710:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1888
  
@blueorangutan test centos7 vmware-55u3


> Switch to JDK 1.8
> -
>
> Key: CLOUDSTACK-9710
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9710
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0
>
>
> Switch to using JDK1.8 by default for building and running CloudStack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9710) Switch to JDK 1.8

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817769#comment-15817769
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9710:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1888
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-457


> Switch to JDK 1.8
> -
>
> Key: CLOUDSTACK-9710
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9710
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0
>
>
> Switch to using JDK1.8 by default for building and running CloudStack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9710) Switch to JDK 1.8

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817738#comment-15817738
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9710:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1888
  
@milamberspace @terbolous I've added the information for Ubuntu users on 
release notes for 4.10 (preview) now:

http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.10/upgrade/upgrade-4.9.html#java-8-jre-on-ubuntu

http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.10/upgrade/upgrade_notes.html#java-8-jre-on-ubuntu

I think with this we don't have any blockers, I'll merge this PR with a 
final round of testing now. Thanks everyone.


> Switch to JDK 1.8
> -
>
> Key: CLOUDSTACK-9710
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9710
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0
>
>
> Switch to using JDK1.8 by default for building and running CloudStack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9457) Allow retrieval and modification of VM and template details via API and UI

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817709#comment-15817709
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9457:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1767
  
Trillian test result (tid-776)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 43663 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1767-t776-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: 
/marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_templates.py
Intermittent failure detected: /marvin/tests/smoke/test_volumes.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 45 look ok, 4 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_redundant_VPC_default_routes | `Failure` | 869.57 | 
test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 391.44 | 
test_privategw_acl.py
test_01_create_volume | `Error` | 279.98 | test_volumes.py
test_03_delete_template | `Error` | 5.18 | test_templates.py
test_01_vpc_site2site_vpn | Success | 195.95 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 66.47 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 352.03 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 453.60 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 827.87 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 537.38 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1336.10 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 580.77 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1292.65 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 156.61 | test_volumes.py
test_08_resize_volume | Success | 156.44 | test_volumes.py
test_07_resize_fail | Success | 161.53 | test_volumes.py
test_06_download_detached_volume | Success | 156.58 | test_volumes.py
test_05_detach_volume | Success | 241.36 | test_volumes.py
test_04_delete_attached_volume | Success | 151.27 | test_volumes.py
test_03_download_attached_volume | Success | 156.33 | test_volumes.py
test_02_attach_volume | Success | 185.35 | test_volumes.py
test_deploy_vm_multiple | Success | 338.47 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.04 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.72 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.22 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.98 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.14 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.93 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 126.01 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.20 | test_vm_life_cycle.py
test_01_stop_vm | Success | 125.91 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 70.76 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.18 | test_templates.py
test_02_edit_template | Success | 90.16 | test_templates.py
test_01_create_template | Success | 50.54 | test_templates.py
test_10_destroy_cpvm | Success | 161.84 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.62 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.73 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.64 | test_ssvm.py
test_06_stop_cpvm | Success | 131.94 | test_ssvm.py
test_05_stop_ssvm | Success | 133.80 | test_ssvm.py
test_04_cpvm_internals | Success | 1.21 | test_ssvm.py
test_03_ssvm_internals | Success | 3.48 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.14 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.15 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.32 | test_snapshots.py
test_04_change_offering_small | Success | 242.88 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.12 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.14 | test_secondary_storage.

[jira] [Commented] (CLOUDSTACK-9405) listDomains API call takes an extremely long time to respond

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817697#comment-15817697
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9405:


GitHub user ustcweizhou opened a pull request:

https://github.com/apache/cloudstack/pull/1901

CLOUDSTACK-9405: add details parameter in listDomains API to reduce the 
execution time

Computing the resource limits makes the listDomains API take a long time. The 
resource details are not needed in some cases, so they can be skipped to 
reduce the execution time.
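To illustrate how a client would use such a parameter, here is a sketch that builds a signed listDomains request; `details=min` as a value is an assumption based on this PR's description, while the HMAC-SHA1 signing scheme is the standard CloudStack API signing procedure:

```python
import base64
import hashlib
import hmac
import urllib.parse

def build_list_domains_url(endpoint, api_key, secret_key, details="min"):
    """Build a signed listDomains URL, skipping resource-limit details.

    The 'details' value "min" is assumed from the PR; CloudStack signs
    the sorted, lower-cased query string with HMAC-SHA1 of the secret key.
    """
    params = {
        "command": "listDomains",
        "response": "json",
        "details": details,
        "apiKey": api_key,
    }
    query = "&".join("%s=%s" % (k, urllib.parse.quote(str(v)))
                     for k, v in sorted(params.items()))
    digest = hmac.new(secret_key.encode(),
                      query.lower().encode(), hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode())
    return "%s?%s&signature=%s" % (endpoint, query, signature)
```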

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ustcweizhou/cloudstack listDomains-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1901.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1901


commit 471139eaa71a155add894ac9c1c1277f80264894
Author: Wei Zhou 
Date:   2016-11-14T10:57:59Z

CLOUDSTACK-9405: add details parameter in listDomains API to reduce the 
execution time




> listDomains API call takes an extremely long time to respond
> 
>
> Key: CLOUDSTACK-9405
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9405
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Affects Versions: 4.8.0
>Reporter: dsclose
>  Labels: performance
>
> We recently upgraded from Cloudstack 4.5.2 to Cloudstack 4.8.0. Since this 
> update, the listDomains API call has started taking an extremely long time to 
> respond. This has caused issues with our services that rely on this API call. 
> Initially they simply timed out until we increased the thresholds. Now we 
> have processes that used to take a few seconds taking many minutes.
> This is so problematic for us that our organisation has put a halt on further 
> updates of Cloudstack 4.5.2 installations. If reversing the update of zones 
> already on 4.8.0 was feasible, we would have reverted back to 4.5.2.
> Here is a table of the times we're seeing:
> ||CS Version||Domain Count||API Response Time||
> |4.5.2|251|~3s|
> |4.8.0|182|~26s|
> |4.8.0|<10|<1s|
> This small data sample indicates that the response time for zones with a 
> larger amount of domains is significantly worse after the update to 4.8.0. 
> Zones with few domains aren't able to reproduce this issue.
> I recall a bug being resolved recently that concerned reducing the response 
> time for list* API calls. I also recall [~remibergsma] resolving a bug 
> concerning the sorting of the listDomains response. Is it possible that these 
> issues are connected?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9710) Switch to JDK 1.8

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817689#comment-15817689
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9710:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1888
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.


> Switch to JDK 1.8
> -
>
> Key: CLOUDSTACK-9710
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9710
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0
>
>
> Switch to using JDK1.8 by default for building and running CloudStack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9710) Switch to JDK 1.8

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817687#comment-15817687
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9710:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1888
  
@blueorangutan package


> Switch to JDK 1.8
> -
>
> Key: CLOUDSTACK-9710
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9710
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0
>
>
> Switch to using JDK1.8 by default for building and running CloudStack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9570) Bug in listSnapshots for snapshots with deleted data stores

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817653#comment-15817653
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9570:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1735
  
Trillian test result (tid-774)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 42807 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1735-t774-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_volumes.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 45 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 345.63 | 
test_privategw_acl.py
test_01_create_volume | `Error` | 249.60 | test_volumes.py
test_02_list_snapshots_with_removed_data_store | `Error` | 0.04 | 
test_snapshots.py
test_01_vpc_site2site_vpn | Success | 165.13 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 71.17 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 301.46 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 445.31 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 831.58 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 516.55 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1406.00 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 548.52 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 769.66 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1284.91 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 156.68 | test_volumes.py
test_08_resize_volume | Success | 156.39 | test_volumes.py
test_07_resize_fail | Success | 161.38 | test_volumes.py
test_06_download_detached_volume | Success | 156.25 | test_volumes.py
test_05_detach_volume | Success | 236.36 | test_volumes.py
test_04_delete_attached_volume | Success | 151.22 | test_volumes.py
test_03_download_attached_volume | Success | 156.32 | test_volumes.py
test_02_attach_volume | Success | 214.74 | test_volumes.py
test_deploy_vm_multiple | Success | 388.71 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 46.70 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.23 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 66.21 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.12 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 130.85 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.84 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.17 | test_vm_life_cycle.py
test_01_stop_vm | Success | 125.82 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 65.57 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.17 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.18 | test_templates.py
test_01_create_template | Success | 65.56 | test_templates.py
test_10_destroy_cpvm | Success | 166.69 | test_ssvm.py
test_09_destroy_ssvm | Success | 164.22 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.57 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.39 | test_ssvm.py
test_06_stop_cpvm | Success | 136.78 | test_ssvm.py
test_05_stop_ssvm | Success | 163.33 | test_ssvm.py
test_04_cpvm_internals | Success | 1.25 | test_ssvm.py
test_03_ssvm_internals | Success | 2.83 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.08 | test_snapshots.py
test_04_change_offering_small | Success | 204.53 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service

[jira] [Commented] (CLOUDSTACK-9570) Bug in listSnapshots for snapshots with deleted data stores

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817588#comment-15817588
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9570:


Github user mike-tutkowski commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1735#discussion_r94660627
  
--- Diff: server/src/com/cloud/api/ApiResponseHelper.java ---
@@ -493,7 +494,9 @@ public SnapshotResponse createSnapshotResponse(Snapshot 
snapshot) {
 snapshotInfo = (SnapshotInfo)snapshot;
 } else {
 DataStoreRole dataStoreRole = getDataStoreRole(snapshot, 
_snapshotStoreDao, _dataStoreMgr);
-
--- End diff --

I would say the default should be DataStoreRole.Image for getDataStoreRole 
and so getDataStoreRole should probably never return null.
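The suggested fallback behavior can be sketched like this. It is a hypothetical model (the names `get_data_store_role` and the dict-based store lookup are illustrative, not the actual CloudStack API): when the referenced primary store can no longer be found, default to the Image role instead of returning null.

```python
# Hypothetical sketch of the suggested default: a missing (removed)
# primary data store means the snapshot lives on secondary storage,
# so fall back to the Image role rather than returning None.

PRIMARY = "Primary"
IMAGE = "Image"

def get_data_store_role(snapshot_store_ref, data_stores):
    store_id = snapshot_store_ref["store_id"]
    store = data_stores.get(store_id)  # None if the store was removed
    if store is None:
        # Default to Image instead of None, so callers never see null.
        return IMAGE
    return PRIMARY if store["role"] == PRIMARY else IMAGE

stores = {1: {"role": PRIMARY}}
role_live = get_data_store_role({"store_id": 1}, stores)     # Primary
role_removed = get_data_store_role({"store_id": 2}, stores)  # Image
```

With this default in place, downstream callers such as createSnapshotResponse would not need their own null handling.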


> Bug in listSnapshots for snapshots with deleted data stores
> ---
>
> Key: CLOUDSTACK-9570
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9570
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> If there is a snapshot on a data store that has been removed, {{listSnapshots}} 
> still tries to enumerate it and gives an error (in this example data store 2 
> has been removed):
> {code:xml|title=/client/api?command=listSnapshots&isrecursive=true&listall=true|borderStyle=solid}
> 
>530
>4250
>Unable to locate datastore with id 2
> 
> {code}
> h3. Reproduce error
> These steps can be followed to reproduce the issue:
> * Take a snapshot of a volume (this creates references for primary storage 
> and secondary storage in the snapshot_store_ref table)
> * Simulate retiring the primary data store where the snapshot is cached (in this 
> example X is a fake data store id and Y is the snapshot id):
> {{UPDATE `cloud`.`snapshot_store_ref` SET `store_id`='X', `state`="Destroyed" 
> WHERE `id`='Y';}}
> * List snapshots
> {{/client/api?command=listSnapshots&isrecursive=true&listall=true}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9570) Bug in listSnapshots for snapshots with deleted data stores

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817585#comment-15817585
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9570:


Github user mike-tutkowski commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1735#discussion_r94661240
  
--- Diff: 
api/src/org/apache/cloudstack/api/command/user/snapshot/ListSnapshotsCmd.java 
---
@@ -115,8 +115,10 @@ public void execute() {
 List snapshotResponses = new 
ArrayList();
 for (Snapshot snapshot : result.first()) {
 SnapshotResponse snapshotResponse = 
_responseGenerator.createSnapshotResponse(snapshot);
-snapshotResponse.setObjectName("snapshot");
-snapshotResponses.add(snapshotResponse);
+if (snapshotResponse != null) {
+snapshotResponse.setObjectName("snapshot");
--- End diff --

I've added a few comments.

If we implement those comments, then null should not be a problem here (in 
other words, the code in this class could remain unchanged).


> Bug in listSnapshots for snapshots with deleted data stores
> ---
>
> Key: CLOUDSTACK-9570
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9570
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> If there is a snapshot on a data store that has been removed, {{listSnapshots}} 
> still tries to enumerate it and gives an error (in this example data store 2 
> has been removed):
> {code:xml|title=/client/api?command=listSnapshots&isrecursive=true&listall=true|borderStyle=solid}
> 
>530
>4250
>Unable to locate datastore with id 2
> 
> {code}
> h3. Reproduce error
> These steps can be followed to reproduce the issue:
> * Take a snapshot of a volume (this creates references for primary storage 
> and secondary storage in the snapshot_store_ref table)
> * Simulate retiring the primary data store where the snapshot is cached (in this 
> example X is a fake data store id and Y is the snapshot id):
> {{UPDATE `cloud`.`snapshot_store_ref` SET `store_id`='X', `state`="Destroyed" 
> WHERE `id`='Y';}}
> * List snapshots
> {{/client/api?command=listSnapshots&isrecursive=true&listall=true}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9570) Bug in listSnapshots for snapshots with deleted data stores

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817586#comment-15817586
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9570:


Github user mike-tutkowski commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1735#discussion_r94660806
  
--- Diff: 
engine/storage/volume/src/org/apache/cloudstack/storage/datastore/manager/PrimaryDataStoreProviderManagerImpl.java
 ---
@@ -56,7 +55,7 @@ public void config() {
 
 @Override
 public PrimaryDataStore getPrimaryDataStore(long dataStoreId) {
-StoragePoolVO dataStoreVO = dataStoreDao.findById(dataStoreId);
+StoragePoolVO dataStoreVO = 
dataStoreDao.findByIdIncludingRemoved(dataStoreId);
--- End diff --

Is getPrimaryDataStore(long) called from many places? If so, it might be a 
bit risky to change this from findById to findByIdIncludingRemoved unless we 
are pretty sure all of the calling code is OK with that change.
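The risk being flagged can be illustrated with a minimal soft-delete lookup (hypothetical names, not the actual CloudStack DAO): findById hides rows whose removed flag is set, while findByIdIncludingRemoved does not, so widening getPrimaryDataStore to the latter can surprise callers that assume they only ever see live stores.

```python
# Illustrative soft-delete DAO sketch (names are assumptions):
# find_by_id filters out removed rows; find_by_id_including_removed
# returns them, which changes behavior for every caller.

class StoragePoolDao:
    def __init__(self, rows):
        self.rows = {row["id"]: row for row in rows}

    def find_by_id(self, pool_id):
        row = self.rows.get(pool_id)
        if row is None or row["removed"] is not None:
            return None  # soft-deleted rows are invisible
        return row

    def find_by_id_including_removed(self, pool_id):
        return self.rows.get(pool_id)  # soft-deleted rows still returned

dao = StoragePoolDao([
    {"id": 1, "removed": None},          # live pool
    {"id": 2, "removed": "2016-11-01"},  # retired pool
])
```

A caller that previously received None for pool 2 (and treated that as "no such store") would now get the retired row back, which is why the reviewer asks how widely getPrimaryDataStore(long) is called.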


> Bug in listSnapshots for snapshots with deleted data stores
> ---
>
> Key: CLOUDSTACK-9570
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9570
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> If there is a snapshot on a data store that has been removed, {{listSnapshots}} 
> still tries to enumerate it and gives an error (in this example data store 2 
> has been removed):
> {code:xml|title=/client/api?command=listSnapshots&isrecursive=true&listall=true|borderStyle=solid}
> 
>530
>4250
>Unable to locate datastore with id 2
> 
> {code}
> h3. Reproduce error
> These steps can be followed to reproduce the issue:
> * Take a snapshot of a volume (this creates references for primary storage 
> and secondary storage in the snapshot_store_ref table)
> * Simulate retiring the primary data store where the snapshot is cached (in this 
> example X is a fake data store id and Y is the snapshot id):
> {{UPDATE `cloud`.`snapshot_store_ref` SET `store_id`='X', `state`="Destroyed" 
> WHERE `id`='Y';}}
> * List snapshots
> {{/client/api?command=listSnapshots&isrecursive=true&listall=true}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9570) Bug in listSnapshots for snapshots with deleted data stores

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817587#comment-15817587
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9570:


Github user mike-tutkowski commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1735#discussion_r94659104
  
--- Diff: server/src/com/cloud/api/ApiResponseHelper.java ---
@@ -526,16 +529,18 @@ public static DataStoreRole getDataStoreRole(Snapshot 
snapshot, SnapshotDataStor
 }
 
 long storagePoolId = snapshotStore.getDataStoreId();
-DataStore dataStore = dataStoreMgr.getDataStore(storagePoolId, 
DataStoreRole.Primary);
--- End diff --

Another possibility here is that we could simply still try to retrieve 
"dataStore" and then perform this check:

    DataStore dataStore = dataStoreMgr.getDataStore(storagePoolId, DataStoreRole.Primary);

    if (dataStore == null) {
        return DataStoreRole.Image;
    }

If "dataStore" equals null, then it was removed, which should only be 
something that happened when unmanaged storage is being used (thus when the the 
snapshot resides on secondary storage).


> Bug in listSnapshots for snapshots with deleted data stores
> ---
>
> Key: CLOUDSTACK-9570
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9570
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> If there is a snapshot on a data store that has been removed, {{listSnapshots}} 
> still tries to enumerate it and gives an error (in this example data store 2 
> has been removed):
> {code:xml|title=/client/api?command=listSnapshots&isrecursive=true&listall=true|borderStyle=solid}
> 
>530
>4250
>Unable to locate datastore with id 2
> 
> {code}
> h3. Reproduce error
> These steps can be followed to reproduce the issue:
> * Take a snapshot of a volume (this creates references for primary storage 
> and secondary storage in the snapshot_store_ref table)
> * Simulate retiring the primary data store where the snapshot is cached (in this 
> example X is a fake data store id and Y is the snapshot id):
> {{UPDATE `cloud`.`snapshot_store_ref` SET `store_id`='X', `state`="Destroyed" 
> WHERE `id`='Y';}}
> * List snapshots
> {{/client/api?command=listSnapshots&isrecursive=true&listall=true}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)