[jira] [Commented] (CLOUDSTACK-9612) Restart Network with clean up fails for networks whose offering has been changed from Isolated -> RVR

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701260#comment-15701260
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9612:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1781
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-267


> Restart Network with clean up fails for networks whose offering has been 
> changed from Isolated -> RVR
> -
>
> Key: CLOUDSTACK-9612
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9612
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
> Fix For: 4.9.2.0
>
>
> Deploy a network N1 with "Offering for Isolated networks with Source Nat 
> service enabled". Ensure both the VM and the VR are up.
> Create an RVR offering and change the network offering from the current one 
> to the RVR offering.
> Ensure both the Master and Backup routers are up and running.
> Now restart the network with the clean up option enabled.
> Observations:
> Restarting the network with clean up fails with the below error.
> {noformat}
> 2016-11-24 15:49:32,432 DEBUG [c.c.v.VirtualMachineManagerImpl] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Start completed for VM VM[DomainRouter|r-21-QA]
> 2016-11-24 15:49:32,432 DEBUG [c.c.v.VmWorkJobHandlerProxy] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Done executing VM work job: 
> com.cloud.vm.VmWorkStart{"dcId":0,"rawParams":{"RestartNetwork":"rO0ABXNyABFqYXZhLmxhbmcuQm9vbGVhbs0gcoDVnPruAgABWgAFdmFsdWV4cAE"},"userId":2,"accountId":2,"vmId":21,"handlerName":"VirtualMachineManagerImpl"}
> 2016-11-24 15:49:32,432 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Complete async job-104, jobStatus: SUCCEEDED, resultCode: 0, 
> result: null
> 2016-11-24 15:49:32,434 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Publish async job-104 complete on message bus
> 2016-11-24 15:49:32,434 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Wake up jobs related to job-104
> 2016-11-24 15:49:32,434 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Update db status for job-104
> 2016-11-24 15:49:32,435 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Wake up jobs joined with job-104 and disjoin all subjobs 
> created from job-104
> 2016-11-24 15:49:32,446 DEBUG [c.c.v.VmWorkJobDispatcher] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104) (logid:fb2d5b7b) Done with 
> run of VM work job: com.cloud.vm.VmWorkStart for VM 21, job origin: 99
> 2016-11-24 15:49:32,446 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104) (logid:fb2d5b7b) Done 
> executing com.cloud.vm.VmWorkStart for job-104
> 2016-11-24 15:49:32,448 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104) (logid:fb2d5b7b) Remove 
> job-104 from job monitoring
> 2016-11-24 15:49:32,455 WARN  [o.a.c.e.o.NetworkOrchestrator] 
> (API-Job-Executor-10:ctx-d835fe9f job-99 ctx-2cd2b41c) (logid:fb2d5b7b) 
> Failed to implement network Ntwk[204|Guest|16] elements and resources as a 
> part of network restart due to 
> com.cloud.exception.ResourceUnavailableException: Resource [DataCenter:1] is 
> unreachable: Can't find all necessary running routers!
>   at 
> com.cloud.network.element.VirtualRouterElement.implement(VirtualRouterElement.java:226)
>   at 
> org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.implementNetworkElementsAndResources(NetworkOrchestrator.java:1132)
>   at 
> org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.restartNetwork(NetworkOrchestrator.java:2740)
>   at 
> com.cloud.network.NetworkServiceImpl.restartNetwork(NetworkServiceImpl.java:1907)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framew

[jira] [Commented] (CLOUDSTACK-9558) Cleanup the snapshots on the primary storage of Xenserver after VM/Volume is expunged

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701261#comment-15701261
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9558:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1722
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-268


> Cleanup the snapshots on the primary storage of Xenserver after VM/Volume is 
> expunged
> -
>
> Key: CLOUDSTACK-9558
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9558
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Affects Versions: 4.8.0
> Environment: Xen Server
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> Steps to reproduce the issue
> ===
> i) Deploy a new VM in CCP on Xenserver
> ii) Create a snapshot for the volume created in step i) from CCP. This step 
> creates a snapshot on the primary storage and keeps it there, as it is used 
> as the reference for incremental snapshots.
> iii) Now destroy and expunge the VM created in step i).
> You will notice that the volume for the VM (created in step i) is deleted 
> from the primary storage. However, the snapshot created on the primary (as part 
> of step ii) still exists there and needs to be deleted manually by the admin.
> The snapshot remains on the primary storage even after the volume is deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701304#comment-15701304
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9584:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1755
  
@blueorangutan package


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701305#comment-15701305
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9584:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1755
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9359) Return ip6address in Basic Networking

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701355#comment-15701355
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9359:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1700
  
@blueorangutan test matrix


> Return ip6address in Basic Networking
> -
>
> Key: CLOUDSTACK-9359
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9359
> Project: CloudStack
>  Issue Type: Sub-task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API, Management Server
> Environment: CloudStack Basic Networking
>Reporter: Wido den Hollander
>Assignee: Wido den Hollander
>  Labels: api, basic-networking, ipv6
> Fix For: Future
>
>
> In Basic Networking Instances will obtain their IPv6 address using SLAAC 
> (Stateless Autoconfiguration) as described in the Wiki: 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+in+Basic+Networking
> When an ip6cidr is configured and is a /64, we can calculate the IPv6 address 
> an Instance will obtain.
> There is no need to store an IPv6 address in the database: with the /64 subnet 
> (ip6cidr) and the MAC address we can calculate the address using EUI-64:
> "A 64-bit interface identifier is most commonly derived from its 48-bit MAC 
> address. A MAC address 00:0C:29:0C:47:D5 is turned into a 64-bit EUI-64 by 
> inserting FF:FE in the middle: 00:0C:29:FF:FE:0C:47:D5. When this EUI-64 is 
> used to form an IPv6 address it is modified:[1] the meaning of the 
> Universal/Local bit (the 7th most significant bit of the EUI-64, starting 
> from 1) is inverted, so that a 1 now means Universal. To create an IPv6 
> address with the network prefix 2001:db8:1:2::/64 it yields the address 
> 2001:db8:1:2:020c:29ff:fe0c:47d5 (with the underlined U/L (=Universal/Local) 
> bit inverted to a 1, because the MAC address is universally unique)."
> The API should return this address in the ip6address field for a NIC in Basic 
> Networking.
> End-Users can use this, but it can also be used internally by Security 
> Grouping to program rules.
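
A minimal, self-contained Java sketch of the EUI-64 derivation quoted above, assuming an 
ip6cidr written as a /64 prefix ending in "::" and a colon-separated MAC address. This is 
only an illustration of the calculation, not CloudStack's implementation; the class and 
method names are made up for the example.

{noformat}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class Eui64Sketch {

    /** Derive the SLAAC IPv6 address from a /64 prefix (e.g. "2001:db8:1:2::/64") and a MAC. */
    static String ipv6FromMac(String ip6cidr, String mac) throws UnknownHostException {
        String[] parts = mac.split(":");
        int[] b = new int[6];
        for (int i = 0; i < 6; i++) {
            b[i] = Integer.parseInt(parts[i], 16);
        }
        b[0] ^= 0x02; // invert the universal/local bit of the first MAC octet

        // Interface identifier: first three MAC octets, FF:FE, last three MAC octets.
        String iid = String.format("%02x%02x:%02xff:fe%02x:%02x%02x",
                b[0], b[1], b[2], b[3], b[4], b[5]);

        // "2001:db8:1:2::/64" -> "2001:db8:1:2:" + interface identifier.
        String prefix = ip6cidr.split("/")[0];              // drop the "/64"
        prefix = prefix.substring(0, prefix.length() - 1);  // drop one trailing ':'
        return InetAddress.getByName(prefix + iid).getHostAddress();
    }

    public static void main(String[] args) throws UnknownHostException {
        // Prints 2001:db8:1:2:20c:29ff:fe0c:47d5 (canonical form of ...:020c:29ff:fe0c:47d5).
        System.out.println(ipv6FromMac("2001:db8:1:2::/64", "00:0C:29:0C:47:D5"));
    }
}
{noformat}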



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9359) Return ip6address in Basic Networking

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701358#comment-15701358
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9359:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1700
  
@karuturi a Trillian-Jenkins matrix job (centos6 mgmt + xs65sp1, centos7 
mgmt + vmware55u3, centos7 mgmt + kvmcentos7) has been kicked to run smoke tests


> Return ip6address in Basic Networking
> -
>
> Key: CLOUDSTACK-9359
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9359
> Project: CloudStack
>  Issue Type: Sub-task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API, Management Server
> Environment: CloudStack Basic Networking
>Reporter: Wido den Hollander
>Assignee: Wido den Hollander
>  Labels: api, basic-networking, ipv6
> Fix For: Future
>
>
> In Basic Networking Instances will obtain their IPv6 address using SLAAC 
> (Stateless Autoconfiguration) as described in the Wiki: 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+in+Basic+Networking
> When an ip6cidr is configured and is a /64, we can calculate the IPv6 address 
> an Instance will obtain.
> There is no need to store an IPv6 address in the database: with the /64 subnet 
> (ip6cidr) and the MAC address we can calculate the address using EUI-64:
> "A 64-bit interface identifier is most commonly derived from its 48-bit MAC 
> address. A MAC address 00:0C:29:0C:47:D5 is turned into a 64-bit EUI-64 by 
> inserting FF:FE in the middle: 00:0C:29:FF:FE:0C:47:D5. When this EUI-64 is 
> used to form an IPv6 address it is modified:[1] the meaning of the 
> Universal/Local bit (the 7th most significant bit of the EUI-64, starting 
> from 1) is inverted, so that a 1 now means Universal. To create an IPv6 
> address with the network prefix 2001:db8:1:2::/64 it yields the address 
> 2001:db8:1:2:020c:29ff:fe0c:47d5 (with the underlined U/L (=Universal/Local) 
> bit inverted to a 1, because the MAC address is universally unique)."
> The API should return this address in the ip6address field for a NIC in Basic 
> Networking.
> End-Users can use this, but it can also be used internally by Security 
> Grouping to program rules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701363#comment-15701363
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9584:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1755
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-269


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701377#comment-15701377
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9584:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1755
  
@blueorangutan test


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701378#comment-15701378
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9584:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1755
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701395#comment-15701395
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9584:


Github user abhinandanprateek commented on the issue:

https://github.com/apache/cloudstack/pull/1755
  
LGTM on code review @rhtyd 


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701440#comment-15701440
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9584:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1755
  
@borisstoyanov thanks, but a Trillian test is not necessary as all changes 
are related to Travis. Since Travis is green, I'll need a couple of LGTMs 
to proceed with merging this.


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701460#comment-15701460
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9584:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1755
  
yes, you have a point @rhtyd, but I've kicked it since I noticed there were 
some changes in ../integration/smoke/ as well. I thought it was worth 
confirming that the smoke tests run OK with Trillian as well.


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701471#comment-15701471
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9584:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1755
  
@borisstoyanov okay, though changes in smoketests have been confirmed by 
Travis test results as well.


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701481#comment-15701481
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9584:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1755
  
yes @rhtyd , I'll cancel the run to free resources in the Lab.
LGTM on code review and Travis results


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8958) add dedicated ips to domain

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701516#comment-15701516
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8958:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1357
  
@ustcweizhou can you rebase against 4.8/4.9 and change the base branch to 
4.8/4.9?


> add dedicated ips to domain
> ---
>
> Key: CLOUDSTACK-8958
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8958
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> add dedicated ips to domain 
> IPs are dedicated to an Account for now, so other customers and projects in the 
> same domain will use the system IP; this is not what we need.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701560#comment-15701560
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9584:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1755
  
Thanks @borisstoyanov I'll proceed with merging this as Travis is all green 
now.


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701580#comment-15701580
 ] 

ASF subversion and git services commented on CLOUDSTACK-9584:
-

Commit fd6833b9cb331429a3b0dccfe178717f06dad46a in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=fd6833b ]

Merge pull request #1755 from shapeblue/4.9-component-tests

CLOUDSTACK-9584: run component tests in Travis run

This would run additional component tests in Travis run.

* pr/1755:
  CLOUDSTACK-9584: run component tests in Travis run

Signed-off-by: Rohit Yadav 


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701579#comment-15701579
 ] 

ASF subversion and git services commented on CLOUDSTACK-9584:
-

Commit fd6833b9cb331429a3b0dccfe178717f06dad46a in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=fd6833b ]

Merge pull request #1755 from shapeblue/4.9-component-tests

CLOUDSTACK-9584: run component tests in Travis run

This would run additional component tests in Travis run.

* pr/1755:
  CLOUDSTACK-9584: run component tests in Travis run

Signed-off-by: Rohit Yadav 


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701577#comment-15701577
 ] 

ASF subversion and git services commented on CLOUDSTACK-9584:
-

Commit 7a96d32c7eeb98990eb53aa4c8c3052e7592fecc in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=7a96d32 ]

CLOUDSTACK-9584: run component tests in Travis run

This would run additional component tests in Travis run

Signed-off-by: Rohit Yadav 


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9624) Incorrect hypervisor mapping of guest os Windows 2008 Server R2 (64-bit) on VMware

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701601#comment-15701601
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9624:


GitHub user sateesh-chodapuneedi opened a pull request:

https://github.com/apache/cloudstack/pull/1793

CLOUDSTACK-9624 Incorrect hypervisor mapping of guest os Windows 2008 
Server R2 (64-bit) for VMware

**JIRA ticket** 
CLOUDSTACK-9624 Incorrect hypervisor mapping of guest os Windows 2008 
Server R2 (64-bit) for VMware

**Issue**
Guest OS Windows Server 2008 R2 (64-bit) is being mapped to incorrect guest 
os at hypervisor, which is winLonghorn64Guest, same as that of Windows Server 
2008 (64-bit).
Due to this the VM's guest os type was set to "Other (64-bit)", which would 
not represent the guest OS accurately on hypervisor.

**Solution**
The fix is to update the incorrect guest_os_name field value in the DB table 
cloud.guest_os_hypervisor.
The query is:
UPDATE IGNORE `cloud`.`guest_os_hypervisor` SET guest_os_name = 
'windows7Server64Guest' WHERE guest_os_id IN (SELECT id FROM guest_os WHERE 
display_name LIKE 'windows%2008%r2%64%') AND hypervisor_type = 'VMware' AND 
hypervisor_version != 'default';

After running the above query, the 6 updated rows look like:

UPDATE IGNORE `cloud`.`guest_os_hypervisor` SET guest_os_name = 
'windows7Server64Guest' WHERE guest_os_id IN (SELECT id FROM guest_os WHERE 
display_name LIKE 'windows%2008%r2%64%') AND hypervisor_type = 'VMware' AND 
hypervisor_version != 'default';
Query OK, 6 rows affected (0.01 sec)
Rows matched: 6  Changed: 6  Warnings: 0

mysql> select * from guest_os_hypervisor where guest_os_id in (select id 
from guest_os where display_name like 'windows%2008%r2%64%') and 
hypervisor_type = 'VMware' and hypervisor_version != 'default';

+------+-----------------+-----------------------+-------------+--------------------+--------------------------------------+---------------------+---------+-----------------+
| id   | hypervisor_type | guest_os_name         | guest_os_id | hypervisor_version | uuid                                 | created             | removed | is_user_defined |
+------+-----------------+-----------------------+-------------+--------------------+--------------------------------------+---------------------+---------+-----------------+
| 1307 | VMware          | windows7Server64Guest |          54 | 4.0                | 98fce372-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:44 | NULL    |               0 |
| 1448 | VMware          | windows7Server64Guest |          54 | 4.1                | 990abdcc-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:45 | NULL    |               0 |
| 1589 | VMware          | windows7Server64Guest |          54 | 5.0                | 99166f75-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:45 | NULL    |               0 |
| 1730 | VMware          | windows7Server64Guest |          54 | 5.1                | 9930ff30-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:45 | NULL    |               0 |
| 1871 | VMware          | windows7Server64Guest |          54 | 5.5                | 993acb18-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:45 | NULL    |               0 |
| 2381 | VMware          | windows7Server64Guest |          54 | 6.0                | 9cb53675-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 18:12:51 | NULL    |               0 |
+------+-----------------+-----------------------+-------------+--------------------+--------------------------------------+---------------------+---------+-----------------+
6 rows in set (0.01 sec)

**Tests**
Registered a template with the Windows 2008 R2 (64-bit) guest OS and deployed 
an instance from the template. Found that the VM appeared in vCenter with a valid 
guest OS type instead of the "Other (64-bit)" shown before the fix.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sateesh-chodapuneedi/cloudstack pr-cs-9624

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1793.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1793


commit b025fa34739f7bd2d197adb1f531572ba6d6bb9e
Author: Sateesh Chodapuneedi 
Date:   2016-11-27T21:54:17Z

CLOUDSTACK-9624 Incorrect hypervisor mapping of guest os Windows 2008 
Server R2 (64-bit) for VMware

Issue:Guest OS Windows Server 2008 R2 (64-bit) is being mapped to incorrect 
guest os at hypervisor, which is winLonghorn64Guest, same as that of Windows 
Server 2008 (64-bit).
Due to this the VM's guest os type was set to "Other (64-bit)", which would 
not represent the guest OS accurately on hypervi

[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701604#comment-15701604
 ] 

ASF subversion and git services commented on CLOUDSTACK-9584:
-

Commit fd6833b9cb331429a3b0dccfe178717f06dad46a in cloudstack's branch 
refs/heads/4.9 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=fd6833b ]

Merge pull request #1755 from shapeblue/4.9-component-tests

CLOUDSTACK-9584: run component tests in Travis run

This would run additional component tests in Travis run.

* pr/1755:
  CLOUDSTACK-9584: run component tests in Travis run

Signed-off-by: Rohit Yadav 


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701602#comment-15701602
 ] 

ASF subversion and git services commented on CLOUDSTACK-9584:
-

Commit 7a96d32c7eeb98990eb53aa4c8c3052e7592fecc in cloudstack's branch 
refs/heads/4.9 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=7a96d32 ]

CLOUDSTACK-9584: run component tests in Travis run

This would run additional component tests in Travis run

Signed-off-by: Rohit Yadav 


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701603#comment-15701603
 ] 

ASF subversion and git services commented on CLOUDSTACK-9584:
-

Commit fd6833b9cb331429a3b0dccfe178717f06dad46a in cloudstack's branch 
refs/heads/4.9 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=fd6833b ]

Merge pull request #1755 from shapeblue/4.9-component-tests

CLOUDSTACK-9584: run component tests in Travis run

This would run additional component tests in Travis run.

* pr/1755:
  CLOUDSTACK-9584: run component tests in Travis run

Signed-off-by: Rohit Yadav 


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701605#comment-15701605
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9584:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1755


> Increase component tests coverage in Travis run
> ---
>
> Key: CLOUDSTACK-9584
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9584
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
>
> Increase component tests in Travis for PRs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9624) Incorrect hypervisor mapping of guest os Windows 2008 Server R2 (64-bit) on VMware

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701612#comment-15701612
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9624:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1793
  
@sateesh-chodapuneedi thanks, this looks useful. Can you rebase against 
4.9, and change the PR's base branch to 4.9?


> Incorrect hypervisor mapping of guest os Windows 2008 Server R2 (64-bit) on 
> VMware
> --
>
> Key: CLOUDSTACK-9624
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9624
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.9.0
> Environment: VMware 6.0, 
> ACS master commit c6bb8c6f415edae8073f5d28b3a81a2eef372fed
>Reporter: Sateesh Chodapuneedi
>Assignee: Sateesh Chodapuneedi
> Fix For: 4.10.0.0
>
>
> Guest OS Windows Server 2008 R2 (64-bit) is being mapped to incorrect guest 
> os at hypervisor, which is winLonghorn64Guest, same as that of Windows Server 
> 2008 (64-bit). Due to this the VM's guest os type was set to "Other 
> (64-bit)", which would not represent the guest OS accurately on hypervisor.
> The current (4.9) mappings in database looks as below,
> {noformat}
> mysql> select * from guest_os where display_name like 'windows%2008%r2%64%';
> +----+-------------+------+--------------------------------------+---------------------------------+---------------------+---------+-----------------+
> | id | category_id | name | uuid                                 | display_name                    | created             | removed | is_user_defined |
> +----+-------------+------+--------------------------------------+---------------------------------+---------------------+---------+-----------------+
> | 54 |           6 | NULL | 94b8ab90-b271-11e6-b56b-4e61adb7c6b1 | Windows Server 2008 R2 (64-bit) | 2016-11-24 23:42:43 | NULL    |               0 |
> +----+-------------+------+--------------------------------------+---------------------------------+---------------------+---------+-----------------+
> 1 row in set (0.00 sec)
> mysql> select * from guest_os_hypervisor where guest_os_id in (select id from guest_os where display_name like 'windows%2008%r2%64%') and hypervisor_type = 'VMware' and hypervisor_version != 'default';
> +------+-----------------+--------------------+-------------+--------------------+--------------------------------------+---------------------+---------+-----------------+
> | id   | hypervisor_type | guest_os_name      | guest_os_id | hypervisor_version | uuid                                 | created             | removed | is_user_defined |
> +------+-----------------+--------------------+-------------+--------------------+--------------------------------------+---------------------+---------+-----------------+
> | 1307 | VMware          | winLonghorn64Guest |          54 | 4.0                | 98fce372-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:44 | NULL    |               0 |
> | 1448 | VMware          | winLonghorn64Guest |          54 | 4.1                | 990abdcc-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:45 | NULL    |               0 |
> | 1589 | VMware          | winLonghorn64Guest |          54 | 5.0                | 99166f75-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:45 | NULL    |               0 |
> | 1730 | VMware          | winLonghorn64Guest |          54 | 5.1                | 9930ff30-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:45 | NULL    |               0 |
> | 1871 | VMware          | winLonghorn64Guest |          54 | 5.5                | 993acb18-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:45 | NULL    |               0 |
> | 2381 | VMware          | winLonghorn64Guest |          54 | 6.0                | 9cb53675-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 18:12:51 | NULL    |               0 |
> +------+-----------------+--------------------+-------------+--------------------+--------------------------------------+---------------------+---------+-----------------+
> 6 rows in set (0.01 sec)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-9625) Unable to scale VM from any offering to a dynamic offering

2016-11-28 Thread Sudhansu Sahu (JIRA)
Sudhansu Sahu created CLOUDSTACK-9625:
-

 Summary: Unable to scale VM from any offering to a dynamic 
offering
 Key: CLOUDSTACK-9625
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9625
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Reporter: Sudhansu Sahu
Assignee: Sudhansu Sahu


Test cpu limits of account while deploying VM with dynamic ... === TestName: 
test_max_account_cpus_deploy_VM_1_ADMIN_ACCOUNT | Status : SUCCESS ===
ok
Test cpu limits of account while deploying VM with dynamic ... === TestName: 
test_max_account_cpus_deploy_VM_2_USER_ACCOUNT | Status : SUCCESS ===
ok
Test cpu limits of account while scaling VM with dynamic ... === TestName: 
test_max_account_cpus_scale_VM_1_ADMIN_ACCOUNT | Status : SUCCESS ===
ok
Test cpu limits of account while scaling VM with dynamic ... === TestName: 
test_max_account_cpus_scale_VM_2_USER_ACCOUNT | Status : SUCCESS ===
ok
Test memory limits of account while deploying VM with dynamic ... === TestName: 
test_max_account_memory_deploy_VM_1_ADMIN_ACCOUNT | Status : SUCCESS ===
ok
Test memory limits of account while deploying VM with dynamic ... === TestName: 
test_max_account_memory_deploy_VM_2_USER_ACCOUNT | Status : SUCCESS ===
ok
Test memory limits of account while scaling VM with ... === TestName: 
test_max_account_memory_scale_VM_1_ADMIN_ACCOUNT | Status : SUCCESS ===
ok
Test memory limits of account while scaling VM with ... === TestName: 
test_max_account_memory_scale_VM_2_USER_ACCOUNT | Status : SUCCESS ===
ok
Test deploy VMs with affinity group and dynamic compute offering ... === 
TestName: test_deploy_VM_with_affinity_group_1_ADMIN_ACCOUNT | Status : SUCCESS 
===
ok
Test deploy VMs with affinity group and dynamic compute offering ... === 
TestName: test_deploy_VM_with_affinity_group_2_USER_ACCOUNT | Status : SUCCESS 
===
ok
=== TestName: test_deploy_VM_with_affinity_group_2_USER_ACCOUNT | Status : 
EXCEPTION ===
ERROR
Test scale running VM from dynamic offering to dynamic offering ... === 
TestName: test_change_so_running_vm_dynamic_to_dynamic_1_ADMIN_ACCOUNT | Status 
: FAILED ===
FAIL
Test scale running VM from dynamic offering to dynamic offering ... === 
TestName: test_change_so_running_vm_dynamic_to_dynamic_2_USER_ACCOUNT | Status 
: FAILED ===
FAIL
Test scale running VM from dynamic offering to static offering ... === 
TestName: test_change_so_running_vm_dynamic_to_static_1_ADMIN_ACCOUNT | Status 
: FAILED ===
FAIL
Test scale running VM from dynamic offering to static offering ... === 
TestName: test_change_so_running_vm_dynamic_to_static_2_USER_ACCOUNT | Status : 
FAILED ===
FAIL
Test scale running VM from static offering to dynamic offering ... === 
TestName: test_change_so_running_vm_static_to_dynamic_1_ADMIN_ACCOUNT | Status 
: FAILED ===
FAIL
Test scale running VM from static offering to dynamic offering ... === 
TestName: test_change_so_running_vm_static_to_dynamic_2_USER_ACCOUNT | Status : 
FAILED ===
FAIL
Test scale running VM from static offering to static offering ... === TestName: 
test_change_so_running_vm_static_to_static_1_ADMIN_ACCOUNT | Status : FAILED ===
FAIL
Test scale running VM from static offering to static offering ... === TestName: 
test_change_so_running_vm_static_to_static_2_USER_ACCOUNT | Status : FAILED ===
FAIL
Test scale stopped VM from dynamic offering to dynamic offering ... === 
TestName: test_change_so_stopped_vm_dynamic_to_dynamic_1_ADMIN_ACCOUNT | Status 
: FAILED ===
FAIL
Test scale stopped VM from dynamic offering to dynamic offering ... === 
TestName: test_change_so_stopped_vm_dynamic_to_dynamic_2_USER_ACCOUNT | Status 
: FAILED ===
FAIL
Test scale stopped VM from dynamic offering to static offering ... === 
TestName: test_change_so_stopped_vm_dynamic_to_static_1_ADMIN_ACCOUNT | Status 
: SUCCESS ===
ok
Test scale stopped VM from dynamic offering to static offering ... === 
TestName: test_change_so_stopped_vm_dynamic_to_static_2_USER_ACCOUNT | Status : 
SUCCESS ===
ok
Test scale stopped VM from static offering to dynamic offering ... === 
TestName: test_change_so_stopped_vm_static_to_dynamic_1_ADMIN_ACCOUNT | Status 
: FAILED ===
FAIL
Test scale stopped VM from static offering to dynamic offering ... === 
TestName: test_change_so_stopped_vm_static_to_dynamic_2_USER_ACCOUNT | Status : 
FAILED ===
FAIL
Test scale stopped VM from static offering to static offering ... === TestName: 
test_change_so_stopped_vm_static_to_static_1_ADMIN_ACCOUNT | Status : SUCCESS 
===
ok
Test scale stopped VM from static offering to static offering ... === TestName: 
test_change_so_stopped_vm_static_to_static_2_USER_ACCOUNT | Status : SUCCESS ===
ok

==
ERROR: test suite for 
--

[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701666#comment-15701666
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9538:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1710
  
Trillian test result (tid-468)
Environment: xenserver-65sp1 (x2), Advanced Networking with Mgmt server 7
Total time taken: 34374 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1710-t468-xenserver-65sp1.zip
Test completed. 46 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_05_rvpc_multi_tiers | `Failure` | 592.04 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Failure` | 1387.60 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 583.98 
| test_vpc_redundant.py
test_router_dhcp_opts | `Failure` | 31.05 | test_router_dhcphosts.py
test_01_vpc_site2site_vpn | Success | 431.71 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 197.20 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 653.52 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 459.21 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 851.36 | test_vpc_router_nics.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 996.05 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 1094.06 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.81 | test_volumes.py
test_08_resize_volume | Success | 96.12 | test_volumes.py
test_07_resize_fail | Success | 105.95 | test_volumes.py
test_06_download_detached_volume | Success | 30.40 | test_volumes.py
test_05_detach_volume | Success | 100.70 | test_volumes.py
test_04_delete_attached_volume | Success | 10.20 | test_volumes.py
test_03_download_attached_volume | Success | 15.27 | test_volumes.py
test_02_attach_volume | Success | 10.88 | test_volumes.py
test_01_create_volume | Success | 392.49 | test_volumes.py
test_03_delete_vm_snapshots | Success | 280.32 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 211.55 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 100.69 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 253.58 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.71 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.16 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 66.11 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.10 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.14 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 15.21 | test_vm_life_cycle.py
test_02_start_vm | Success | 25.27 | test_vm_life_cycle.py
test_01_stop_vm | Success | 30.26 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 141.16 | test_templates.py
test_08_list_system_templates | Success | 0.04 | test_templates.py
test_07_list_public_templates | Success | 0.06 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.15 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.16 | test_templates.py
test_01_create_template | Success | 75.62 | test_templates.py
test_10_destroy_cpvm | Success | 231.81 | test_ssvm.py
test_09_destroy_ssvm | Success | 234.72 | test_ssvm.py
test_08_reboot_cpvm | Success | 121.59 | test_ssvm.py
test_07_reboot_ssvm | Success | 153.85 | test_ssvm.py
test_06_stop_cpvm | Success | 136.69 | test_ssvm.py
test_05_stop_ssvm | Success | 174.05 | test_ssvm.py
test_04_cpvm_internals | Success | 1.15 | test_ssvm.py
test_03_ssvm_internals | Success | 3.47 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.13 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 31.41 | test_snapshots.py
test_04_change_offering_small | Success | 119.02 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.08 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.13 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.18 | test_secondary_storage.py
test_01_scale_vm | Success | 5.19 | test_scale_vm.py
test_09_reboot_router | Succes

[jira] [Commented] (CLOUDSTACK-9624) Incorrect hypervisor mapping of guest os Windows 2008 Server R2 (64-bit) on VMware

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701671#comment-15701671
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9624:


Github user sateesh-chodapuneedi commented on the issue:

https://github.com/apache/cloudstack/pull/1793
  
Thanks @rhtyd 
Rebased the commit over 4.9, and updated base branch to 4.9.



> Incorrect hypervisor mapping of guest os Windows 2008 Server R2 (64-bit) on 
> VMware
> --
>
> Key: CLOUDSTACK-9624
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9624
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.9.0
> Environment: VMware 6.0, 
> ACS master commit c6bb8c6f415edae8073f5d28b3a81a2eef372fed
>Reporter: Sateesh Chodapuneedi
>Assignee: Sateesh Chodapuneedi
> Fix For: 4.10.0.0
>
>
> Guest OS Windows Server 2008 R2 (64-bit) is being mapped to incorrect guest 
> os at hypervisor, which is winLonghorn64Guest, same as that of Windows Server 
> 2008 (64-bit). Due to this the VM's guest os type was set to "Other 
> (64-bit)", which would not represent the guest OS accurately on hypervisor.
> The current (4.9) mappings in database looks as below,
> {noformat}
> mysql> select * from guest_os where display_name like 'windows%2008%r2%64%';
> +----+-------------+------+--------------------------------------+---------------------------------+---------------------+---------+-----------------+
> | id | category_id | name | uuid                                 | display_name                    | created             | removed | is_user_defined |
> +----+-------------+------+--------------------------------------+---------------------------------+---------------------+---------+-----------------+
> | 54 |           6 | NULL | 94b8ab90-b271-11e6-b56b-4e61adb7c6b1 | Windows Server 2008 R2 (64-bit) | 2016-11-24 23:42:43 | NULL    |               0 |
> +----+-------------+------+--------------------------------------+---------------------------------+---------------------+---------+-----------------+
> 1 row in set (0.00 sec)
> mysql> select * from guest_os_hypervisor where guest_os_id in (select id from guest_os where display_name like 'windows%2008%r2%64%') and hypervisor_type = 'VMware' and hypervisor_version != 'default';
> +------+-----------------+--------------------+-------------+--------------------+--------------------------------------+---------------------+---------+-----------------+
> | id   | hypervisor_type | guest_os_name      | guest_os_id | hypervisor_version | uuid                                 | created             | removed | is_user_defined |
> +------+-----------------+--------------------+-------------+--------------------+--------------------------------------+---------------------+---------+-----------------+
> | 1307 | VMware          | winLonghorn64Guest |          54 | 4.0                | 98fce372-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:44 | NULL    |               0 |
> | 1448 | VMware          | winLonghorn64Guest |          54 | 4.1                | 990abdcc-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:45 | NULL    |               0 |
> | 1589 | VMware          | winLonghorn64Guest |          54 | 5.0                | 99166f75-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:45 | NULL    |               0 |
> | 1730 | VMware          | winLonghorn64Guest |          54 | 5.1                | 9930ff30-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:45 | NULL    |               0 |
> | 1871 | VMware          | winLonghorn64Guest |          54 | 5.5                | 993acb18-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:45 | NULL    |               0 |
> | 2381 | VMware          | winLonghorn64Guest |          54 | 6.0                | 9cb53675-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 18:12:51 | NULL    |               0 |
> +------+-----------------+--------------------+-------------+--------------------+--------------------------------------+---------------------+---------+-----------------+
> 6 rows in set (0.01 sec)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701677#comment-15701677
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9538:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1710
  
Tests LGTM.


> Deleting Snapshot From Primary Storage Fails on RBD Storage if you already 
> delete vm's itself
> -
>
> Key: CLOUDSTACK-9538
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9538
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Snapshot, Storage Controller
>Affects Versions: 4.9.0
> Environment: Ubuntu 14.04 Management Server +  Ubuntu 14.04 KVM
>Reporter: Özhan Rüzgar Karaman
>
> Hi;
> We plan to keep VM snapshots as VM backups on secondary storage even after the 
> related VM has been destroyed/expunged. There was a bug which blocked this idea 
> from working, and it was fixed with CLOUDSTACK-9297.
> With the 4.9 release we expected this to work on our 4.9 ACS environment, but 
> we noticed that, because we are using RBD as primary storage, we need to fix 
> one more minor bug for it to work.
> The problem occurs because of the CLOUDSTACK-8302 fix in the 4.9 release. If 
> you destroy a VM whose volumes are on RBD primary storage, it also deletes any 
> related snapshots of that VM on the primary RBD storage, so after the VM 
> destroy there is no disk file or snapshot file left on the RBD storage. This is 
> good for cleanup purposes on primary storage, but the 
> XenserverSnapshotStrategy.deleteSnapshot method does not expect it.
> org.apache.cloudstack.storage.snapshot.XenserverSnapshotStrategy.deleteSnapshot
> receives an exception: the code tries 10 times on the KVM node to remove the 
> RBD snapshot, but because there is no snapshot on the RBD side it gets an 
> exception after 10 retries. It also spends nearly 5 minutes trying to delete 
> the snapshots and then ends with a "Failed to delete snapshot" error.
> I think we need to skip snapshot cleanup on primary storage, only for RBD-type 
> primary storage, if the related VM has already been destroyed. (Because the VM 
> destroy stage removed all snapshots related to the VM on primary storage, there 
> is no need to take any action on primary storage.)
> We ran the tests below to make this issue clear.
> 1) We create a VM with 3 snapshots on ACS.
> mysql> select * from snapshot_store_ref where snapshot_id in (93,94,95);
> +-+--+-+-+--+++-+---+++---+--+-+-+---+
> | id  | store_id | snapshot_id | created | last_updated | job_id 
> | store_role | size| physical_size | parent_snapshot_id | 
> install_path  
>  | state | update_count | ref_cnt | updated | volume_id |
> +-+--+-+-+--+++-+---+++---+--+-+-+---+
> | 185 |1 |  93 | 2016-10-12 10:13:44 | NULL | NULL   
> | Primary| 28991029248 |   28991029248 |  0 | 
> cst4/bb9ca3c7-96d6-4465-85b5-cd01f4d635f2/54008bf3-43dd-469d-91a7-4acd146d7b84
>  | Ready |2 |   0 | 2016-10-12 10:13:45 |  4774 |
> | 186 |1 |  93 | 2016-10-12 10:13:45 | NULL | NULL   
> | Image  | 28991029248 |   28991029248 |  0 | 
> snapshots/2/4774/54008bf3-43dd-469d-91a7-4acd146d7b84 
>  | Ready |2 |   0 | 2016-10-12 10:15:04 |  4774 |
> | 187 |1 |  94 | 2016-10-12 10:15:38 | NULL | NULL   
> | Primary| 28991029248 |   28991029248 |  0 | 
> cst4/bb9ca3c7-96d6-4465-85b5-cd01f4d635f2/45fc4f44-b377-49c0-9264-5d813fefe93f
>  | Ready |2 |   0 | 2016-10-12 10:15:39 |  4774 |
> | 188 |1 |  94 | 2016-10-12 10:15:39 | NULL | NULL   
> | Image  | 28991029248 |   28991029248 |  0 | 
> snapshots/2/4774/45fc4f44-b377-49c0-9264-5d813fefe93f 
>  | Ready |2 |   0 | 2016-10-12 10:16:52 |  4774 |
> | 189 |1 |  95 | 2016-10-12 10:17:08 | NULL | NUL
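
For context, the behaviour requested in the report above amounts to a guard before the 
primary-storage delete: if the RBD snapshot is already gone because the whole image was 
removed when the VM was destroyed, report success instead of retrying for minutes. A 
minimal Java sketch of that idea follows; the PrimarySnapshotStore interface and its 
method names are invented for illustration, and this is not CloudStack's actual 
XenserverSnapshotStrategy code nor necessarily the shape of the committed fix.

{noformat}
public class RbdSnapshotCleanupSketch {

    interface PrimarySnapshotStore {
        boolean snapshotExists(String snapshotPath);  // e.g. "pool/image@snap" on RBD
        boolean deleteSnapshot(String snapshotPath);
    }

    static boolean cleanupOnPrimary(PrimarySnapshotStore store, String snapshotPath) {
        if (!store.snapshotExists(snapshotPath)) {
            // Nothing left to delete: the VM destroy path already removed the
            // backing image and its snapshots, so report success right away.
            return true;
        }
        return store.deleteSnapshot(snapshotPath);
    }

    public static void main(String[] args) {
        // Stub store where the snapshot is already gone, as after a VM destroy on RBD.
        PrimarySnapshotStore gone = new PrimarySnapshotStore() {
            public boolean snapshotExists(String path) { return false; }
            public boolean deleteSnapshot(String path) { throw new IllegalStateException("not expected"); }
        };
        System.out.println(cleanupOnPrimary(gone, "pool/volume-uuid@snapshot-uuid")); // true
    }
}
{noformat}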

[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701696#comment-15701696
 ] 

ASF subversion and git services commented on CLOUDSTACK-9538:
-

Commit 784c33585fbce93b363543c362d7b821e5896be8 in cloudstack's branch 
refs/heads/4.9 from Wei Zhou
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=784c335 ]

CLOUDSTACK-9538: FIX failure in Deleting Snapshot From Primary Storage RBD 
Storage if vm has been removed


> Deleting Snapshot From Primary Storage Fails on RBD Storage if you already 
> delete vm's itself
> -
>
> Key: CLOUDSTACK-9538
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9538
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Snapshot, Storage Controller
>Affects Versions: 4.9.0
> Environment: Ubuntu 14.04 Management Server +  Ubuntu 14.04 KVM
>Reporter: Özhan Rüzgar Karaman
>
> Hi;
> We plan to keep VM snapshots as VM backups on secondary storage even after the 
> related VM has been destroyed/expunged. There was a bug which blocked this idea 
> from working, and it was fixed with CLOUDSTACK-9297.
> With the 4.9 release we expected this to work on our 4.9 ACS environment, but 
> we noticed that, because we are using RBD as primary storage, we need to fix 
> one more minor bug for it to work.
> The problem occurs because of the CLOUDSTACK-8302 fix in the 4.9 release. If 
> you destroy a VM whose volumes are on RBD primary storage, it also deletes any 
> related snapshots of that VM on the primary RBD storage, so after the VM 
> destroy there is no disk file or snapshot file left on the RBD storage. This is 
> good for cleanup purposes on primary storage, but the 
> XenserverSnapshotStrategy.deleteSnapshot method does not expect it.
> org.apache.cloudstack.storage.snapshot.XenserverSnapshotStrategy.deleteSnapshot
> receives an exception: the code tries 10 times on the KVM node to remove the 
> RBD snapshot, but because there is no snapshot on the RBD side it gets an 
> exception after 10 retries. It also spends nearly 5 minutes trying to delete 
> the snapshots and then ends with a "Failed to delete snapshot" error.
> I think we need to skip snapshot cleanup on primary storage, only for RBD-type 
> primary storage, if the related VM has already been destroyed. (Because the VM 
> destroy stage removed all snapshots related to the VM on primary storage, there 
> is no need to take any action on primary storage.)
> We ran the tests below to make this issue clear.
> 1) We create a VM with 3 snapshots on ACS.
> mysql> select * from snapshot_store_ref where snapshot_id in (93,94,95);
> | id  | store_id | snapshot_id | created             | last_updated | job_id | store_role | size        | physical_size | parent_snapshot_id | install_path                                                                    | state | update_count | ref_cnt | updated             | volume_id |
> | 185 |        1 |          93 | 2016-10-12 10:13:44 | NULL         | NULL   | Primary    | 28991029248 |   28991029248 |                  0 | cst4/bb9ca3c7-96d6-4465-85b5-cd01f4d635f2/54008bf3-43dd-469d-91a7-4acd146d7b84 | Ready |            2 |       0 | 2016-10-12 10:13:45 |      4774 |
> | 186 |        1 |          93 | 2016-10-12 10:13:45 | NULL         | NULL   | Image      | 28991029248 |   28991029248 |                  0 | snapshots/2/4774/54008bf3-43dd-469d-91a7-4acd146d7b84                           | Ready |            2 |       0 | 2016-10-12 10:15:04 |      4774 |
> | 187 |        1 |          94 | 2016-10-12 10:15:38 | NULL         | NULL   | Primary    | 28991029248 |   28991029248 |                  0 | cst4/bb9ca3c7-96d6-4465-85b5-cd01f4d635f2/45fc4f44-b377-49c0-9264-5d813fefe93f | Ready |            2 |       0 | 2016-10-12 10:15:39 |      4774 |
> | 188 |        1 |          94 | 2016-10-12 10:15:39 | NULL         | NULL   | Image      | 28991029248 |   28991029248 |                  0 | snapshots/2/4774/45fc4f44
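
The guard proposed in the report above amounts to checking whether the snapshot still exists on primary storage before entering the retry loop. The sketch below only illustrates that idea; it is not the actual CloudStack code path nor the fix merged via PR 1710 (commit 784c335), and SnapshotBackend with its two methods is a hypothetical stand-in for the real storage-layer calls.

{noformat}
// Hedged sketch of the guard described above; SnapshotBackend is a hypothetical
// stand-in, not a CloudStack interface.
interface SnapshotBackend {
    boolean snapshotExists(String volumePath, String snapshotName);
    void deleteSnapshot(String volumePath, String snapshotName) throws Exception;
}

public class RbdSnapshotCleanupSketch {
    private static final int MAX_RETRIES = 10;

    // Returns true when the snapshot is gone from primary storage afterwards.
    // If the snapshot (or its whole RBD image) no longer exists -- e.g. the vm
    // was already destroyed and its volumes expunged -- treat the cleanup as a
    // no-op instead of retrying ten times and reporting a failure.
    public boolean cleanupOnPrimary(SnapshotBackend backend, String volumePath, String snapshotName) {
        if (!backend.snapshotExists(volumePath, snapshotName)) {
            return true; // nothing left to delete on the RBD side
        }
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            try {
                backend.deleteSnapshot(volumePath, snapshotName);
                return true;
            } catch (Exception e) {
                // transient error: retry; a missing snapshot is handled above
            }
        }
        return false;
    }
}
{noformat}

With a check like this, a vm that was already destroyed (and whose RBD snapshots were removed with it) makes deleteSnapshot a fast no-op instead of a 10-retry, multi-minute failure.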

[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701688#comment-15701688
 ] 

ASF subversion and git services commented on CLOUDSTACK-9538:
-

Commit 784c33585fbce93b363543c362d7b821e5896be8 in cloudstack's branch 
refs/heads/master from Wei Zhou
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=784c335 ]

CLOUDSTACK-9538: FIX failure in Deleting Snapshot From Primary Storage RBD 
Storage if vm has been removed



[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701691#comment-15701691
 ] 

ASF subversion and git services commented on CLOUDSTACK-9538:
-

Commit a5d5784859029f5abae6ceff2dbd370f50e79ae2 in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=a5d5784 ]

Merge pull request #1710 from ustcweizhou/CLOUDSTACK-9538-deletesnapshot

CLOUDSTACK-9538: FIX failure in Deleting Snapshot From Primary Storage RBD 
Storage if vm has been removed

* pr/1710:
  CLOUDSTACK-9538: FIX failure in Deleting Snapshot From Primary Storage RBD 
Storage if vm has been removed

Signed-off-by: Rohit Yadav 



[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701692#comment-15701692
 ] 

ASF subversion and git services commented on CLOUDSTACK-9538:
-

Commit a5d5784859029f5abae6ceff2dbd370f50e79ae2 in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=a5d5784 ]

Merge pull request #1710 from ustcweizhou/CLOUDSTACK-9538-deletesnapshot

CLOUDSTACK-9538: FIX failure in Deleting Snapshot From Primary Storage RBD 
Storage if vm has been removed

* pr/1710:
  CLOUDSTACK-9538: FIX failure in Deleting Snapshot From Primary Storage RBD 
Storage if vm has been removed

Signed-off-by: Rohit Yadav 



[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701689#comment-15701689
 ] 

ASF subversion and git services commented on CLOUDSTACK-9538:
-

Commit a5d5784859029f5abae6ceff2dbd370f50e79ae2 in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=a5d5784 ]

Merge pull request #1710 from ustcweizhou/CLOUDSTACK-9538-deletesnapshot

CLOUDSTACK-9538: FIX failure in Deleting Snapshot From Primary Storage RBD 
Storage if vm has been removed

* pr/1710:
  CLOUDSTACK-9538: FIX failure in Deleting Snapshot From Primary Storage RBD 
Storage if vm has been removed

Signed-off-by: Rohit Yadav 



[jira] [Commented] (CLOUDSTACK-9624) Incorrect hypervisor mapping of guest os Windows 2008 Server R2 (64-bit) on VMware

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701693#comment-15701693
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9624:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1793
  
Thanks @sateesh-chodapuneedi 
@blueorangutan package


> Incorrect hypervisor mapping of guest os Windows 2008 Server R2 (64-bit) on 
> VMware
> --
>
> Key: CLOUDSTACK-9624
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9624
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.9.0
> Environment: VMware 6.0, 
> ACS master commit c6bb8c6f415edae8073f5d28b3a81a2eef372fed
>Reporter: Sateesh Chodapuneedi
>Assignee: Sateesh Chodapuneedi
> Fix For: 4.10.0.0
>
>
> Guest OS Windows Server 2008 R2 (64-bit) is being mapped to an incorrect guest os on the hypervisor, winLonghorn64Guest, the same identifier used for Windows Server 2008 (64-bit). Because of this the VM's guest os type was set to "Other (64-bit)", which does not represent the guest OS accurately on the hypervisor.
> The current (4.9) mappings in the database look as shown below; an illustrative sketch of the intended mapping follows the listing.
> {noformat}
> mysql> select * from guest_os where display_name like 'windows%2008%r2%64%';
> | id | category_id | name | uuid                                 | display_name                    | created             | removed | is_user_defined |
> | 54 |           6 | NULL | 94b8ab90-b271-11e6-b56b-4e61adb7c6b1 | Windows Server 2008 R2 (64-bit) | 2016-11-24 23:42:43 | NULL    |               0 |
> 1 row in set (0.00 sec)
> mysql> select * from guest_os_hypervisor where guest_os_id in (select id from guest_os where display_name like 'windows%2008%r2%64%') and hypervisor_type = 'VMware' and hypervisor_version != 'default';
> | id   | hypervisor_type | guest_os_name      | guest_os_id | hypervisor_version | uuid                                 | created             | removed | is_user_defined |
> | 1307 | VMware          | winLonghorn64Guest |          54 | 4.0                | 98fce372-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:44 | NULL    |               0 |
> | 1448 | VMware          | winLonghorn64Guest |          54 | 4.1                | 990abdcc-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:45 | NULL    |               0 |
> | 1589 | VMware          | winLonghorn64Guest |          54 | 5.0                | 99166f75-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:45 | NULL    |               0 |
> | 1730 | VMware          | winLonghorn64Guest |          54 | 5.1                | 9930ff30-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:45 | NULL    |               0 |
> | 1871 | VMware          | winLonghorn64Guest |          54 | 5.5                | 993acb18-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 23:42:45 | NULL    |               0 |
> | 2381 | VMware          | winLonghorn64Guest |          54 | 6.0                | 9cb53675-b271-11e6-b56b-4e61adb7c6b1 | 2016-11-24 18:12:51 | NULL    |               0 |
> 6 rows in set (0.01 sec)
> {noformat}
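
For context, the intended outcome is simply a more specific mapping for this display name. The snippet below is an illustrative sketch only, not the change made in PR 1793: the map is a plain Java stand-in for the guest_os_hypervisor rows above, and windows7Server64Guest is the vSphere guest OS identifier normally used for Windows Server 2008 R2 (64-bit).

{noformat}
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for the guest_os_hypervisor mapping shown above;
// not the actual CloudStack upgrade or fix code.
public class GuestOsMappingSketch {
    public static void main(String[] args) {
        Map<String, String> vmwareGuestOsName = new HashMap<>();
        vmwareGuestOsName.put("Windows Server 2008 (64-bit)", "winLonghorn64Guest");
        // Previously 2008 R2 also mapped to winLonghorn64Guest, so the VM ended
        // up shown as "Other (64-bit)"; 2008 R2 should get its own identifier.
        vmwareGuestOsName.put("Windows Server 2008 R2 (64-bit)", "windows7Server64Guest");

        System.out.println(vmwareGuestOsName.get("Windows Server 2008 R2 (64-bit)"));
    }
}
{noformat}

Giving these rows a 2008 R2-specific guest_os_name should let the hypervisor report the guest OS correctly instead of falling back to "Other (64-bit)".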



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701698#comment-15701698
 ] 

ASF subversion and git services commented on CLOUDSTACK-9538:
-

Commit a5d5784859029f5abae6ceff2dbd370f50e79ae2 in cloudstack's branch 
refs/heads/4.9 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=a5d5784 ]

Merge pull request #1710 from ustcweizhou/CLOUDSTACK-9538-deletesnapshot

CLOUDSTACK-9538: FIX failure in Deleting Snapshot From Primary Storage RBD 
Storage if vm has been removed

* pr/1710:
  CLOUDSTACK-9538: FIX failure in Deleting Snapshot From Primary Storage RBD 
Storage if vm has been removed

Signed-off-by: Rohit Yadav 



[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701697#comment-15701697
 ] 

ASF subversion and git services commented on CLOUDSTACK-9538:
-

Commit a5d5784859029f5abae6ceff2dbd370f50e79ae2 in cloudstack's branch 
refs/heads/4.9 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=a5d5784 ]

Merge pull request #1710 from ustcweizhou/CLOUDSTACK-9538-deletesnapshot

CLOUDSTACK-9538: FIX failure in Deleting Snapshot From Primary Storage RBD 
Storage if vm has been removed

* pr/1710:
  CLOUDSTACK-9538: FIX failure in Deleting Snapshot From Primary Storage RBD 
Storage if vm has been removed

Signed-off-by: Rohit Yadav 



[jira] [Commented] (CLOUDSTACK-9624) Incorrect hypervisor mapping of guest os Windows 2008 Server R2 (64-bit) on VMware

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701701#comment-15701701
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9624:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1793
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701702#comment-15701702
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9538:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1710



[jira] [Commented] (CLOUDSTACK-9538) Deleting Snapshot From Primary Storage Fails on RBD Storage if you already delete vm's itself

2016-11-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701699#comment-15701699
 ] 

ASF subversion and git services commented on CLOUDSTACK-9538:
-

Commit a5d5784859029f5abae6ceff2dbd370f50e79ae2 in cloudstack's branch 
refs/heads/4.9 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=a5d5784 ]

Merge pull request #1710 from ustcweizhou/CLOUDSTACK-9538-deletesnapshot

CLOUDSTACK-9538: FIX failure in Deleting Snapshot From Primary Storage RBD 
Storage if vm has been removed

* pr/1710:
  CLOUDSTACK-9538: FIX failure in Deleting Snapshot From Primary Storage RBD 
Storage if vm has been removed

Signed-off-by: Rohit Yadav 



[jira] [Commented] (CLOUDSTACK-9624) Incorrect hypervisor mapping of guest os Windows 2008 Server R2 (64-bit) on VMware

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701819#comment-15701819
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9624:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1793
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-273





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9624) Incorrect hypervisor mapping of guest os Windows 2008 Server R2 (64-bit) on VMware

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701835#comment-15701835
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9624:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1793
  
@blueorangutan test centos7 vmware-55u3





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9624) Incorrect hypervisor mapping of guest os Windows 2008 Server R2 (64-bit) on VMware

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701837#comment-15701837
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9624:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1793
  
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + vmware-55u3) has been 
kicked to run smoke tests





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9584) Increase component tests coverage in Travis run

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15702089#comment-15702089
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9584:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1755
  
Trillian test result (tid-474)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 19367 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1755-t474-kvm-centos7.zip
Test completed. 47 look ok, 1 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_router_dhcp_opts | `Failure` | 21.85 | test_router_dhcphosts.py
test_deploy_vm_multiple | Success | 253.64 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 27.04 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.24 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 35.98 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.85 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.89 | test_vm_life_cycle.py
test_02_start_vm | Success | 5.14 | test_vm_life_cycle.py
test_01_stop_vm | Success | 35.30 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 110.90 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.17 | test_templates.py
test_03_delete_template | Success | 5.13 | test_templates.py
test_02_edit_template | Success | 90.12 | test_templates.py
test_01_create_template | Success | 35.70 | test_templates.py
test_10_destroy_cpvm | Success | 161.47 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.77 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.77 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.53 | test_ssvm.py
test_06_stop_cpvm | Success | 131.74 | test_ssvm.py
test_05_stop_ssvm | Success | 134.43 | test_ssvm.py
test_04_cpvm_internals | Success | 1.38 | test_ssvm.py
test_03_ssvm_internals | Success | 3.84 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.14 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.17 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.32 | test_snapshots.py
test_04_change_offering_small | Success | 239.67 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.09 | test_service_offerings.py
test_01_create_service_offering | Success | 0.13 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.15 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.29 | test_secondary_storage.py
test_09_reboot_router | Success | 35.47 | test_routers.py
test_08_start_router | Success | 45.43 | test_routers.py
test_07_stop_router | Success | 10.17 | test_routers.py
test_06_router_advanced | Success | 0.08 | test_routers.py
test_05_router_basic | Success | 0.05 | test_routers.py
test_04_restart_network_wo_cleanup | Success | 5.72 | test_routers.py
test_03_restart_network_cleanup | Success | 60.62 | test_routers.py
test_02_router_internal_adv | Success | 1.07 | test_routers.py
test_01_router_internal_basic | Success | 0.57 | test_routers.py
test_router_dns_guestipquery | Success | 76.75 | test_router_dns.py
test_router_dns_externalipquery | Success | 0.09 | test_router_dns.py
test_router_dhcphosts | Success | 274.60 | test_router_dhcphosts.py
test_01_updatevolumedetail | Success | 0.22 | test_resource_detail.py
test_01_reset_vm_on_reboot | Success | 130.92 | test_reset_vm_on_reboot.py
test_createRegion | Success | 0.05 | test_regions.py
test_create_pvlan_network | Success | 5.25 | test_pvlan.py
test_dedicatePublicIpRange | Success | 0.46 | test_public_ip_range.py
test_04_rvpc_privategw_static_routes | Success | 498.32 | test_privategw_acl.py
test_03_vpc_privategw_restart_vpc_cleanup | Success | 476.38 | test_privategw_acl.py
test_02_vpc_privategw_static_routes | Success | 426.97 | test_privategw_acl.py
test_01_vpc_privategw_acl | Success | 88.30 | test_privategw_acl.py
test_01_primary_storage_nfs | Success | 35.79 | test_primary_storage.py
test_createPortablePublicIPRange | Success | 15.19 | test_portable_publicip.py
test_createPortablePublicIPAcquire | Success | 15.44 | test_portable_publicip.py
test_isolate_network_password_s

[jira] [Commented] (CLOUDSTACK-9359) Return ip6address in Basic Networking

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15702463#comment-15702463
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9359:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1700
  
Trillian test result (tid-472)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 28250 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1700-t472-kvm-centos7.zip
Test completed. 47 look ok, 1 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_extract_template | `Error` | 5.15 | test_templates.py
test_03_delete_template | `Error` | 5.09 | test_templates.py
test_01_vpc_site2site_vpn | Success | 185.40 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 71.24 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 271.20 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 314.06 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 542.82 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 540.93 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1331.95 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 575.59 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 761.84 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1333.65 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.84 | test_volumes.py
test_08_resize_volume | Success | 15.42 | test_volumes.py
test_07_resize_fail | Success | 20.50 | test_volumes.py
test_06_download_detached_volume | Success | 15.31 | test_volumes.py
test_05_detach_volume | Success | 100.29 | test_volumes.py
test_04_delete_attached_volume | Success | 10.21 | test_volumes.py
test_03_download_attached_volume | Success | 15.30 | test_volumes.py
test_02_attach_volume | Success | 74.04 | test_volumes.py
test_01_create_volume | Success | 713.84 | test_volumes.py
test_deploy_vm_multiple | Success | 289.17 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.04 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.71 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.26 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 41.27 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.15 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.92 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.92 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.18 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.58 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 80.71 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_02_edit_template | Success | 90.19 | test_templates.py
test_01_create_template | Success | 66.31 | test_templates.py
test_10_destroy_cpvm | Success | 161.79 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.75 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.66 | test_ssvm.py
test_07_reboot_ssvm | Success | 135.16 | test_ssvm.py
test_06_stop_cpvm | Success | 131.92 | test_ssvm.py
test_05_stop_ssvm | Success | 165.15 | test_ssvm.py
test_04_cpvm_internals | Success | 1.31 | test_ssvm.py
test_03_ssvm_internals | Success | 3.46 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.13 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.15 | test_ssvm.py
test_01_snapshot_root_disk | Success | 16.47 | test_snapshots.py
test_04_change_offering_small | Success | 239.76 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.13 | test_service_offerings.py
test_01_create_service_offering | Success | 0.12 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.14 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.21 | test_secondary_storage.py
test_09_reboot_router | Success | 45.40 | test_routers.py
test_08_start_router | Success | 35.41 | test_routers.py
test_07_stop_router | Success | 10.19 | test_routers.py
test_06_router_advanced | Success | 0.07 | test_routers.py
test_05_router_basic | Success | 0.05 | test_routers.py
test_04_restart_network_wo_cleanup | Success | 5.78 | test_routers.py
test_03_restart_n

[jira] [Commented] (CLOUDSTACK-9359) Return ip6address in Basic Networking

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15702702#comment-15702702
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9359:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1700
  
Trillian test result (tid-471)
Environment: xenserver-65sp1 (x2), Advanced Networking with Mgmt server 6
Total time taken: 33700 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1700-t471-xenserver-65sp1.zip
Test completed. 45 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_05_rvpc_multi_tiers | `Failure` | 536.50 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Failure` | 1332.07 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 581.30 | test_vpc_redundant.py
test_01_snapshot_root_disk | `Failure` | 16.42 | test_snapshots.py
ContextSuite context=TestSnapshotRootDisk>:teardown | `Error` | 62.10 | test_snapshots.py
test_01_port_fwd_on_src_nat | `Error` | 10.37 | test_network.py
test_01_vpc_site2site_vpn | Success | 341.55 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 147.12 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 624.93 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 397.90 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 730.93 | test_vpc_router_nics.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 902.62 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 1043.85 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.78 | test_volumes.py
test_08_resize_volume | Success | 95.97 | test_volumes.py
test_07_resize_fail | Success | 101.13 | test_volumes.py
test_06_download_detached_volume | Success | 20.37 | test_volumes.py
test_05_detach_volume | Success | 100.34 | test_volumes.py
test_04_delete_attached_volume | Success | 10.23 | test_volumes.py
test_03_download_attached_volume | Success | 15.54 | test_volumes.py
test_02_attach_volume | Success | 11.17 | test_volumes.py
test_01_create_volume | Success | 397.92 | test_volumes.py
test_03_delete_vm_snapshots | Success | 280.33 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 214.46 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 100.86 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 253.84 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.76 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.19 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 61.25 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 5.29 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 10.19 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.25 | test_vm_life_cycle.py
test_01_stop_vm | Success | 30.30 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 141.95 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.19 | test_templates.py
test_03_delete_template | Success | 5.12 | test_templates.py
test_02_edit_template | Success | 90.19 | test_templates.py
test_01_create_template | Success | 80.77 | test_templates.py
test_10_destroy_cpvm | Success | 226.75 | test_ssvm.py
test_09_destroy_ssvm | Success | 229.33 | test_ssvm.py
test_08_reboot_cpvm | Success | 141.76 | test_ssvm.py
test_07_reboot_ssvm | Success | 143.81 | test_ssvm.py
test_06_stop_cpvm | Success | 166.68 | test_ssvm.py
test_05_stop_ssvm | Success | 169.17 | test_ssvm.py
test_04_cpvm_internals | Success | 1.37 | test_ssvm.py
test_03_ssvm_internals | Success | 4.06 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.14 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.16 | test_ssvm.py
test_04_change_offering_small | Success | 96.55 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.05 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.10 | test_service_offerings.py
test_01_create_service_offering | Success | 0.10 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.13 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.24 | test_secondary_storage.py
 

[jira] [Commented] (CLOUDSTACK-9625) Unable to scale VM from any offerring to a dynamic offerring

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15702749#comment-15702749
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9625:


GitHub user sudhansu7 opened a pull request:

https://github.com/apache/cloudstack/pull/1795

CLOUDSTACK-9625:Unable to scale VM from any offering to a dynamic offering

1. Create a custom service offering.
2. Stop the running VM.
3. Scale the VM offering from small to the custom offering by providing 
(customcpunumber=4, customcpuspeed=512, custommemory=256).

Tried with other values as well.

Actual result:
scaleVirtualMachine fails with an error saying to enter a valid value for CPU 
cores, i.e. a value between 1 and 2147483647, even though we entered 
cpucore as 4.

test/integration/component/test_dynamic_compute_offering.py is also failing.

Test scale running VM from dynamic offering to dynamic offering ... === 
TestName: test_change_so_running_vm_dynamic_to_dynamic_1_ADMIN_ACCOUNT | Status 
: FAILED ===
FAIL
Test scale running VM from dynamic offering to dynamic offering ... === 
TestName: test_change_so_running_vm_dynamic_to_dynamic_2_USER_ACCOUNT | Status 
: FAILED ===
FAIL
Test scale running VM from dynamic offering to static offering ... === 
TestName: test_change_so_running_vm_dynamic_to_static_1_ADMIN_ACCOUNT | Status 
: FAILED ===
FAIL
Test scale running VM from dynamic offering to static offering ... === 
TestName: test_change_so_running_vm_dynamic_to_static_2_USER_ACCOUNT | Status : 
FAILED ===
FAIL
Test scale running VM from static offering to dynamic offering ... === 
TestName: test_change_so_running_vm_static_to_dynamic_1_ADMIN_ACCOUNT | Status 
: FAILED ===
FAIL
Test scale running VM from static offering to dynamic offering ... === 
TestName: test_change_so_running_vm_static_to_dynamic_2_USER_ACCOUNT | Status : 
FAILED ===
FAIL
Test scale running VM from static offering to static offering ... === 
TestName: test_change_so_running_vm_static_to_static_1_ADMIN_ACCOUNT | Status : 
FAILED ===
FAIL
Test scale running VM from static offering to static offering ... === 
TestName: test_change_so_running_vm_static_to_static_2_USER_ACCOUNT | Status : 
FAILED ===
FAIL
Test scale stopped VM from dynamic offering to dynamic offering ... === 
TestName: test_change_so_stopped_vm_dynamic_to_dynamic_1_ADMIN_ACCOUNT | Status 
: FAILED ===
FAIL
Test scale stopped VM from dynamic offering to dynamic offering ... === 
TestName: test_change_so_stopped_vm_dynamic_to_dynamic_2_USER_ACCOUNT | Status 
: FAILED ===
FAIL
Test scale stopped VM from dynamic offering to static offering ... === 
TestName: test_change_so_stopped_vm_dynamic_to_static_1_ADMIN_ACCOUNT | Status 
: SUCCESS ===
ok
Test scale stopped VM from dynamic offering to static offering ... === 
TestName: test_change_so_stopped_vm_dynamic_to_static_2_USER_ACCOUNT | Status : 
SUCCESS ===
ok
Test scale stopped VM from static offering to dynamic offering ... === 
TestName: test_change_so_stopped_vm_static_to_dynamic_1_ADMIN_ACCOUNT | Status 
: FAILED ===
FAIL
Test scale stopped VM from static offering to dynamic offering ... === 
TestName: test_change_so_stopped_vm_static_to_dynamic_2_USER_ACCOUNT | Status : 
FAILED ===
FAIL
Test scale stopped VM from static offering to static offering ... === 
TestName: test_change_so_stopped_vm_static_to_static_1_ADMIN_ACCOUNT | Status : 
SUCCESS ===
ok
Test scale stopped VM from static offering to static offering ... === 
TestName: test_change_so_stopped_vm_static_to_static_2_USER_ACCOUNT | Status : 
SUCCESS ===


Root Cause:
ParamUnpackWorker creates a Map<String, Map<String, String>>, which 
should be converted to a Map<String, String>.
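
For illustration only (hypothetical class and method names, not the code touched 
by this PR), a minimal Java sketch of the conversion described above: flattening 
the index-keyed map produced by parameter unpacking into the flat 
Map<String, String> the command expects.

{code:java}
// Minimal sketch only -- hypothetical names, not the PR's actual change.
// Parameter unpacking turns "details[0].cpuNumber=4" style input into a map keyed
// by index ("0") whose values are maps; the command expects one flat Map<String, String>.
import java.util.HashMap;
import java.util.Map;

public class DetailsFlattenSketch {

    @SuppressWarnings("unchecked")
    static Map<String, String> flatten(Map<String, Object> unpacked) {
        Map<String, String> flat = new HashMap<>();
        if (unpacked == null) {
            return flat;
        }
        for (Object value : unpacked.values()) {
            if (value instanceof Map) {
                // each entry is e.g. {cpuNumber=4, cpuSpeed=512, memory=256}
                flat.putAll((Map<String, String>) value);
            }
        }
        return flat;
    }

    public static void main(String[] args) {
        Map<String, String> inner = new HashMap<>();
        inner.put("cpuNumber", "4");
        inner.put("cpuSpeed", "512");
        inner.put("memory", "256");

        Map<String, Object> unpacked = new HashMap<>();
        unpacked.put("0", inner);

        // prints the three custom values in one flat map
        System.out.println(flatten(unpacked));
    }
}
{code}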




You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sudhansu7/cloudstack CLOUDSTACK-9625

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1795.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1795


commit e97dda6da3a0222619c044c17c99ff7a577f902f
Author: Sudhansu 
Date:   2016-11-28T18:20:00Z

CLOUDSTACK-9625: Unable to scale VM from any offering to a dynamic offering




> Unable to scale VM from any offerring to a dynamic offerring
> 
>
> Key: CLOUDSTACK-9625
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9625
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Reporter: Sudhansu Sahu
>As

[jira] [Created] (CLOUDSTACK-9626) Instance fails to start after unsuccesful compute offering upgrade.

2016-11-28 Thread Sudhansu Sahu (JIRA)
Sudhansu Sahu created CLOUDSTACK-9626:
-

 Summary: Instance fails to start after unsuccesful compute 
offering upgrade.
 Key: CLOUDSTACK-9626
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9626
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.8.0
Reporter: Sudhansu Sahu
Assignee: Sudhansu Sahu


ISSUE

Instance fails to start after unsuccessful compute offering upgrade.

 
TROUBLESHOOTING
==
We observed that the VM instance had the compute values "cpuNumber", "cpuSpeed" and 
"memory" removed from the table "user_vm_details", which causes the instance to fail 
to start up next time on XenServer.

mysql> select * from user_vm_details where vm_id=10;
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| id  | vm_id | name                               | value                                           | display |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| 218 |    10 | platform                           | viridian:true;acpi:1;apic:true;pae:true;nx:true |       1 |
| 219 |    10 | hypervisortoolsversion             | xenserver56                                     |       1 |
| 220 |    10 | Message.ReservedCapacityFreed.Flag | true                                            |       1 |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
3 rows in set (0.00 sec)
 

Unexpected exception while executing 
org.apache.cloudstack.api.command.user.vm.ScaleVMCmd
java.lang.NullPointerException
at 
com.cloud.vm.UserVmManagerImpl.upgradeStoppedVirtualMachine(UserVmManagerImpl.java:953)
at 
com.cloud.vm.UserVmManagerImpl.upgradeVirtualMachine(UserVmManagerImpl.java:1331)
at 
com.cloud.vm.UserVmManagerImpl.upgradeVirtualMachine(UserVmManagerImpl.java:1271)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at 
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at 
com.cloud.event.ActionEventInterceptor.invoke(ActionEventInterceptor.java:50)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
at 
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at 
org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at $Proxy169.upgradeVirtualMachine(Unknown Source)
at 
org.apache.cloudstack.api.command.user.vm.ScaleVMCmd.execute(ScaleVMCmd.java:127)
at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:167)
at 
com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:97)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:543)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:50)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:47)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:500)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:679)

2015-02-09 15:23:46,578 TRACE [c.c.u.d.Gen

[jira] [Commented] (CLOUDSTACK-9403) Nuage VSP Plugin : Support for SharedNetwork fuctionality including Marvin test coverage

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15702807#comment-15702807
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9403:


Github user prashanthvarma commented on the issue:

https://github.com/apache/cloudstack/pull/1579
  
@rhtyd @jburwell 

UPDATE: We are currently re-qualifying this PR (internally, re-based with 
latest master and commits squashed) as we hit the issue "systemvm: Fix 
regression from 825935" in our weekend regression runs on this PR, which was 
fixed yesterday on master.

Took us some time to find the root cause :)

Anyhow, once all our internal regression runs and added Marvin tests in 
this PR pass, we will update this PR accordingly (re-based with latest master 
and commits squashed).

Let me know, if you need anything from my side.

Thank you for your support !!




> Nuage VSP Plugin : Support for SharedNetwork fuctionality including Marvin 
> test coverage
> 
>
> Key: CLOUDSTACK-9403
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9403
> Project: CloudStack
>  Issue Type: Task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation, Network Controller
>Reporter: Rahul Singal
>Assignee: Nick Livens
>
> This is the first phase of support for shared networks in CloudStack through the 
> NuageVsp network plugin. A shared network is a type of virtual network that 
> is shared between multiple accounts, i.e. a shared network can be accessed by 
> virtual machines that belong to many different accounts. This basic 
> functionality will be supported for the following common use cases:
> - A shared network can be used for monitoring purposes. A shared network can be 
> assigned to a domain and can be used for monitoring VMs belonging to all 
> accounts in that domain.
> - Public accessibility of a shared network.
> With the current NuageVsp plugin implementation, it supports overlapping IP 
> addresses, public access, and adding IP ranges in a shared network.
> In VSD, it is implemented in the following manner:
> - In order to have tenant isolation for shared networks, we will have to 
> create a Shared L3 Subnet for each shared network, and instantiate it across 
> the relevant enterprises. A shared network will only exist under an 
> enterprise when it is needed, i.e. when the first VM is spun up under that ACS 
> domain inside that shared network.
> - For a public shared network it will also create a floating IP subnet pool in 
> VSD, along with everything mentioned in the point above.
> PR contents:
> 1) Support for shared networks with tenant isolation on master with the Nuage VSP 
> SDN plugin.
> 2) Support for shared networks with publicly accessible IP ranges.
> 3) Marvin test coverage for shared networks on master with the Nuage VSP SDN 
> plugin.
> 4) Enhancements to our existing Marvin test code (nuagevsp plugins directory).
> 5) PEP8 & PyFlakes compliance for our Marvin test code.
> Test results:
> Valiate that ROOT admin is NOT able to deploy a VM for a user in ROOT domain 
> in a shared network with ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_ROOTuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is NOT able to deploy a VM for a admin user in a 
> shared network with ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_differentdomain | 
> Status : SUCCESS ===
> ok
> Valiate that ROOT admin is NOT able to deploy a VM for admin user in the same 
> domain but in a ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_domainadminuser | 
> Status : SUCCESS ===
> ok
> Valiate that ROOT admin is NOT able to deploy a VM for user in the same 
> domain but in a different ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_domainuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is able to deploy a VM for regular user in a shared 
> network with scope=account ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_account_user | Status : SUCCESS 
> ===
> ok
> Valiate that ROOT admin is able to deploy a VM for user in ROOT domain in a 
> shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_ROOTuser | Status : SUCCESS 
> ===
> ok
> Valiate that ROOT admin is able to deploy a VM for a domain admin users in a 
> shared network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admin_scope_all_domainadminuser | Status : 
> SUCCESS ===
> ok
> Valiate that ROOT admin is able to deploy a VM for other users in a shared 
> network with scope=all ... === TestName: 
> test_deployVM_in_sharedNetwork_as_admi

[jira] [Created] (CLOUDSTACK-9627) Template Doens't get sync when using Swift as Secondary Storage

2016-11-28 Thread Syed Ahmed (JIRA)
Syed Ahmed created CLOUDSTACK-9627:
--

 Summary: Template Doens't get sync when using Swift as Secondary 
Storage
 Key: CLOUDSTACK-9627
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9627
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Secondary Storage, Template
Affects Versions: 4.9.0, Future
Reporter: Syed Ahmed


When using a region store like Swift or S3 as secondary storage,
the `zoneId` can be null. This causes an exception when we try
to convert it to a `long`. This fix guards against that.

Also, on the secondary storage side, we are writing the incorrect
unique name, which prevents the sync logic from associating the template
on Swift with the template in the DB.

Before this fix, if you restart the management server, all the templates
would change to "NOT READY" because the code which syncs the NFS cache
and the object store crashes due to the above mentioned issue.
This PR fixes that.

https://github.com/apache/cloudstack/pull/1772/
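
For illustration only, a minimal Java sketch (hypothetical names, not the actual 
CloudStack code) of the null guard described above:

{code:java}
// Illustration only -- hypothetical names, not the CloudStack API: guard a nullable
// zone id before unboxing it to a primitive long, which is what throws the NPE for
// region-wide stores such as Swift/S3 that are not tied to a single zone.
public class ZoneIdGuardSketch {

    static long toZoneId(Long zoneId) {
        // use a sentinel instead of letting Long -> long auto-unboxing throw an NPE
        return zoneId == null ? -1L : zoneId.longValue();
    }

    public static void main(String[] args) {
        System.out.println(toZoneId(null)); // -1
        System.out.println(toZoneId(4L));   // 4
    }
}
{code}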



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9627) Template Doens't get sync when using Swift as Secondary Storage

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15702835#comment-15702835
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9627:


Github user syed commented on the issue:

https://github.com/apache/cloudstack/pull/1772
  
Thanks @rhtyd . I've created an issue in JIRA and updated the summary.




> Template Doens't get sync when using Swift as Secondary Storage
> ---
>
> Key: CLOUDSTACK-9627
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9627
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Secondary Storage, Template
>Affects Versions: 4.9.0, Future
>Reporter: Syed Ahmed
>
> When using a region store like Swift or S3 as secondary storage,
> the `zoneId` can be null. This causes an exception when we try
> to convert it to a `long`. This fix guards against that.
> Also, on the secondary storage side, we are writing the incorrect
> unique name, which prevents the sync logic from associating the template
> on Swift with the template in the DB.
> Before this fix, if you restart the management server, all the templates
> would change to "NOT READY" because the code which syncs the NFS cache
> and the object store crashes due to the above mentioned issue.
> This PR fixes that.
> https://github.com/apache/cloudstack/pull/1772/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9359) Return ip6address in Basic Networking

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15702940#comment-15702940
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9359:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1700
  
Trillian test result (tid-473)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
Total time taken: 39222 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1700-t473-vmware-55u3.zip
Test completed. 45 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_05_rvpc_multi_tiers | `Failure` | 707.12 | test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 288.49 | test_privategw_acl.py
test_01_vpc_site2site_vpn | `Error` | 557.18 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | `Error` | 783.54 | test_vpc_vpn.py
test_05_rvpc_multi_tiers | `Error` | 808.19 | test_vpc_redundant.py
test_01_vpc_remote_access_vpn | Success | 232.09 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 557.10 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 786.09 | test_vpc_router_nics.py
test_04_rvpc_network_garbage_collector_nics | Success | 1580.18 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 871.08 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 897.66 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1438.42 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 30.95 | test_volumes.py
test_06_download_detached_volume | Success | 70.92 | test_volumes.py
test_05_detach_volume | Success | 105.36 | test_volumes.py
test_04_delete_attached_volume | Success | 10.18 | test_volumes.py
test_03_download_attached_volume | Success | 25.34 | test_volumes.py
test_02_attach_volume | Success | 66.06 | test_volumes.py
test_01_create_volume | Success | 535.02 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.21 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 232.29 | test_vm_snapshots.py
test_01_test_vm_volume_snapshot | Success | 231.83 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 161.83 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 273.55 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.87 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.24 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 121.72 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.10 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.14 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.13 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.28 | test_vm_life_cycle.py
test_01_stop_vm | Success | 10.14 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 281.77 | test_templates.py
test_08_list_system_templates | Success | 0.04 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.05 | test_templates.py
test_04_extract_template | Success | 15.24 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.11 | test_templates.py
test_01_create_template | Success | 161.12 | test_templates.py
test_10_destroy_cpvm | Success | 266.80 | test_ssvm.py
test_09_destroy_ssvm | Success | 238.90 | test_ssvm.py
test_08_reboot_cpvm | Success | 156.53 | test_ssvm.py
test_07_reboot_ssvm | Success | 158.53 | test_ssvm.py
test_06_stop_cpvm | Success | 241.96 | test_ssvm.py
test_05_stop_ssvm | Success | 203.83 | test_ssvm.py
test_04_cpvm_internals | Success | 1.21 | test_ssvm.py
test_03_ssvm_internals | Success | 3.57 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.12 | test_ssvm.py
test_01_snapshot_root_disk | Success | 31.28 | test_snapshots.py
test_04_change_offering_small | Success | 98.28 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.12 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.13 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.20 | test_secondary_storage.py
test_09_reboot_router | Success | 181.13 | test_routers.p

[jira] [Commented] (CLOUDSTACK-9612) Restart Network with clean up fails for networks whose offering has been changed from Isolated -> RVR

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15702957#comment-15702957
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9612:


Github user syed commented on the issue:

https://github.com/apache/cloudstack/pull/1781
  
:+1: LGTM


> Restart Network with clean up fails for networks whose offering has been 
> changed from Isolated -> RVR
> -
>
> Key: CLOUDSTACK-9612
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9612
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
> Fix For: 4.9.2.0
>
>
> Deploy a network N1 with " Offering for Isolated networks with Source Nat 
> service enabled" . Ensure both vm and vr are UP .
> Create a RVR offering and edit the network offering from the current to 
> RVR ofefring .
> Ensure both Master and Backup are up and running.
> Now restart the network with clean up option enabled.
> Observations :
> Restarting the nw with clean up is creating is failing with the below error.
> {noformat}
> 2016-11-24 15:49:32,432 DEBUG [c.c.v.VirtualMachineManagerImpl] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Start completed for VM VM[DomainRouter|r-21-QA]
> 2016-11-24 15:49:32,432 DEBUG [c.c.v.VmWorkJobHandlerProxy] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Done executing VM work job: 
> com.cloud.vm.VmWorkStart{"dcId":0,"rawParams":{"RestartNetwork":"rO0ABXNyABFqYXZhLmxhbmcuQm9vbGVhbs0gcoDVnPruAgABWgAFdmFsdWV4cAE"},"userId":2,"accountId":2,"vmId":21,"handlerName":"VirtualMachineManagerImpl"}
> 2016-11-24 15:49:32,432 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Complete async job-104, jobStatus: SUCCEEDED, resultCode: 0, 
> result: null
> 2016-11-24 15:49:32,434 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Publish async job-104 complete on message bus
> 2016-11-24 15:49:32,434 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Wake up jobs related to job-104
> 2016-11-24 15:49:32,434 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Update db status for job-104
> 2016-11-24 15:49:32,435 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104 ctx-8f4ab192) 
> (logid:fb2d5b7b) Wake up jobs joined with job-104 and disjoin all subjobs 
> created from job- 104
> 2016-11-24 15:49:32,446 DEBUG [c.c.v.VmWorkJobDispatcher] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104) (logid:fb2d5b7b) Done with 
> run of VM work job: com.cloud.vm.VmWorkStart for VM 21, job origin: 99
> 2016-11-24 15:49:32,446 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104) (logid:fb2d5b7b) Done 
> executing com.cloud.vm.VmWorkStart for job-104
> 2016-11-24 15:49:32,448 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
> (Work-Job-Executor-47:ctx-a1f65072 job-99/job-104) (logid:fb2d5b7b) Remove 
> job-104 from job monitoring
> 2016-11-24 15:49:32,455 WARN  [o.a.c.e.o.NetworkOrchestrator] 
> (API-Job-Executor-10:ctx-d835fe9f job-99 ctx-2cd2b41c) (logid:fb2d5b7b) 
> Failed to implement network Ntwk[204|Guest|16] elements and resources as a 
> part of network restart due to 
> com.cloud.exception.ResourceUnavailableException: Resource [DataCenter:1] is 
> unreachable: Can't find all necessary running routers!
>   at 
> com.cloud.network.element.VirtualRouterElement.implement(VirtualRouterElement.java:226)
>   at 
> org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.implementNetworkElementsAndResources(NetworkOrchestrator.java:1132)
>   at 
> org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.restartNetwork(NetworkOrchestrator.java:2740)
>   at 
> com.cloud.network.NetworkServiceImpl.restartNetwork(NetworkServiceImpl.java:1907)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(Refle

[jira] [Created] (CLOUDSTACK-9628) Fix Template Size in Swift as Secondary Storage

2016-11-28 Thread Syed Ahmed (JIRA)
Syed Ahmed created CLOUDSTACK-9628:
--

 Summary: Fix Template Size in Swift as Secondary Storage
 Key: CLOUDSTACK-9628
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9628
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.9.0, Future
Reporter: Syed Ahmed


CloudStack incorrectly uses the physical size as the size of the
template. Ideally, the size should reflect the virtual size. This
PR fixes that issue.

https://github.com/apache/cloudstack/pull/1770
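
As an illustration (assuming a QCOW2 template; this is not CloudStack's actual 
code path), a minimal standalone sketch showing how the virtual size differs from 
the physical size, the former being the 64-bit big-endian 'size' field at offset 24 
of the QCOW2 header:

{code:java}
// Standalone sketch, not CloudStack's actual code path: read the virtual size of a
// QCOW2 file from its header (64-bit big-endian 'size' field at offset 24) and
// compare it with the physical size on disk. The file path is just an example.
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class Qcow2VirtualSizeSketch {

    static long virtualSize(String path) throws IOException {
        try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
            if (f.readInt() != 0x514649FB) {   // QCOW magic "QFI\xfb"
                throw new IOException("not a QCOW2 file: " + path);
            }
            f.seek(24);                        // 'size' field of the QCOW2 header
            return f.readLong();               // virtual size in bytes
        }
    }

    public static void main(String[] args) throws IOException {
        String path = args.length > 0 ? args[0] : "/tmp/template.qcow2";
        System.out.println("virtual size  = " + virtualSize(path) + " bytes");
        System.out.println("physical size = " + new File(path).length() + " bytes");
    }
}
{code}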



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9339) Virtual Routers don't handle Multiple Public Interfaces

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15703345#comment-15703345
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9339:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1659
  
Packaging result: ✖centos6 ✔centos7 ✖debian. JID-277


> Virtual Routers don't handle Multiple Public Interfaces
> ---
>
> Key: CLOUDSTACK-9339
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9339
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
>Reporter: dsclose
>Assignee: Murali Reddy
>  Labels: firewall, nat, router
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> There are a series of issues with the way Virtual Routers manage multiple 
> public interfaces. These are more pronounced on redundant virtual router 
> setups. I have not attempted to examine these issues in a VPC context. 
> Outside of a VPC context, however, the following is expected behaviour:
> * eth0 connects the router to the guest network.
> * In RvR setups, keepalived manages the guests' gateway IP as a virtual IP on 
> eth0.
> * eth1 provides a local link to the hypervisor, allowing Cloudstack to issue 
> commands to the router.
> * eth2 is the routers public interface. By default, a single public IP will 
> be setup on eth2 along with the necessary iptables and ip rules to source-NAT 
> guest traffic to that public IP.
> * When a public IP address is assigned to the router that is on a separate 
> subnet to the source-NAT IP, a new interface is configured, such as eth3, and 
> the IP is assigned to that interface.
> * This can result in eth3, eth4, eth5, etc. being created depending upon how 
> many public subnets the router has to work with.
> The above all works. The following, however, is currently not working:
> * Public interfaces should be set to DOWN on backup redundant routers. The 
> master.py script is responsible for setting public interfaces to UP during a 
> keepalived transition. Currently the check_is_up method of the CsIP class 
> brings all interfaces UP on both RvR. A proposed fix for this has been 
> discussed on the mailing list. That fix will leave public interfaces DOWN on 
> RvR allowing the keepalived transition to control the state of public 
> interfaces. Issue #1413 includes a commit that contradicts the proposed fix 
> so it is unclear what the current state of the code should be.
> * Newly created interfaces should be set to UP on master redundant routers. 
> Assuming public interfaces should by default be DOWN on an RvR, we need to 
> accommodate the fact that, as interfaces are created, no keepalived 
> transition occurs. This means that assigning an IP from a new public subnet 
> will have no effect (as the interface will be down) until the network is 
> restarted with a "clean up."
> * Public interfaces other than eth2 do not forward traffic. There are two 
> iptables rules in the FORWARD chain of the filter table created for eth2 that 
> allow forwarding between eth2 and eth0. Equivalent rules are not created for 
> other public interfaces so forwarded traffic is dropped.
> * Outbound traffic from guest VMs does not honour static-NAT rules. Instead, 
> outbound traffic is source-NAT'd to the networks default source-NAT IP. New 
> connections from guests that are destined for public networks are processed 
> like so:
> 1. Traffic is matched against the following rule in the mangle table that 
> marks the connection with a 0x0:
> *mangle
> -A PREROUTING -i eth0 -m state --state NEW -j CONNMARK --set-xmark 
> 0x0/0x
> 2. There are no "ip rule" statements that match a connection marked 0x0, so 
> the kernel routes the connection via the default gateway. That gateway is on 
> source-NAT subnet, so the connection is routed out of eth2.
> 3. The following iptables rules are then matched in the filter table:
> *filter
> -A FORWARD -i eth0 -o eth2 -j FW_OUTBOUND
> -A FW_OUTBOUND -j FW_EGRESS_RULES
> -A FW_EGRESS_RULES -j ACCEPT
> 4. Finally, the following rule is matched from the nat table, where the IP 
> address is the source-NAT IP:
> *nat
> -A POSTROUTING -o eth2 -j SNAT --to-source 123.4.5.67
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9624) Incorrect hypervisor mapping of guest os Windows 2008 Server R2 (64-bit) on VMware

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15703367#comment-15703367
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9624:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1793
  
Trillian test result (tid-476)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
Total time taken: 37204 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1793-t476-vmware-55u3.zip
Test completed. 44 look ok, 4 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_router_dhcp_opts | `Failure` | 21.20 | test_router_dhcphosts.py
test_01_vpc_site2site_vpn | `Error` | 537.56 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | `Error` | 739.82 | test_vpc_vpn.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | `Error` | 628.04 | test_vpc_redundant.py
test_09_reboot_router | `Error` | 191.29 | test_routers.py
test_01_vpc_remote_access_vpn | Success | 217.83 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 367.90 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 760.10 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 693.21 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1545.51 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 762.69 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1452.53 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 26.25 | test_volumes.py
test_06_download_detached_volume | Success | 55.55 | test_volumes.py
test_05_detach_volume | Success | 105.30 | test_volumes.py
test_04_delete_attached_volume | Success | 10.21 | test_volumes.py
test_03_download_attached_volume | Success | 20.73 | test_volumes.py
test_02_attach_volume | Success | 63.84 | test_volumes.py
test_01_create_volume | Success | 525.28 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.26 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 237.31 | test_vm_snapshots.py
test_01_test_vm_volume_snapshot | Success | 151.40 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 161.71 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 203.59 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.82 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.22 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 136.60 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.10 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.16 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.16 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.25 | test_vm_life_cycle.py
test_01_stop_vm | Success | 10.19 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 267.13 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 25.41 | test_templates.py
test_03_delete_template | Success | 5.14 | test_templates.py
test_02_edit_template | Success | 90.21 | test_templates.py
test_01_create_template | Success | 146.19 | test_templates.py
test_10_destroy_cpvm | Success | 211.92 | test_ssvm.py
test_09_destroy_ssvm | Success | 208.90 | test_ssvm.py
test_08_reboot_cpvm | Success | 156.65 | test_ssvm.py
test_07_reboot_ssvm | Success | 218.70 | test_ssvm.py
test_06_stop_cpvm | Success | 166.86 | test_ssvm.py
test_05_stop_ssvm | Success | 173.94 | test_ssvm.py
test_04_cpvm_internals | Success | 1.23 | test_ssvm.py
test_03_ssvm_internals | Success | 3.45 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.14 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.16 | test_ssvm.py
test_01_snapshot_root_disk | Success | 26.48 | test_snapshots.py
test_04_change_offering_small | Success | 97.02 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.09 | test_service_offerings.py
test_01_create_service_offering | Success | 0.13 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.14 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.22 | test_secondary_storage.py
test_08_start_router | Success | 120.88 | test_routers.py
test_07_stop_router 

[jira] [Commented] (CLOUDSTACK-9626) Instance fails to start after unsuccesful compute offering upgrade.

2016-11-28 Thread Sudhansu Sahu (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15704416#comment-15704416
 ] 

Sudhansu Sahu commented on CLOUDSTACK-9626:
---

Workaround:

insert into `user_vm_details` (`vm_id`, `name`, `value`, `display`)
select ue.`resource_id`, ued.name, ued.value, 1
from `usage_event_details` ued, `usage_event` ue
where ued.`usage_event_id` = ue.`id`
  and ue.type like 'VM.%'
  and ue.`resource_id` = <vm_id>
order by ue.`created` desc
limit 3;
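
For illustration only, a hypothetical defensive sketch (not the actual fix) of 
falling back to the service offering value when a custom detail is missing, 
rather than unboxing a null:

{code:java}
// Hypothetical, illustrative only -- not the actual fix: when a custom detail such as
// cpuNumber is missing from user_vm_details, fall back to the service offering value
// instead of unboxing a null and hitting a NullPointerException.
import java.util.Collections;
import java.util.Map;

public class CustomDetailFallbackSketch {

    static int detailOrDefault(Map<String, String> vmDetails, String key, int offeringValue) {
        String raw = (vmDetails == null) ? null : vmDetails.get(key);
        return (raw == null) ? offeringValue : Integer.parseInt(raw);
    }

    public static void main(String[] args) {
        // simulates a VM whose cpuNumber detail was lost; falls back to 2 from the offering
        Map<String, String> details = Collections.emptyMap();
        System.out.println(detailOrDefault(details, "cpuNumber", 2)); // 2
    }
}
{code}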

> Instance fails to start after unsuccesful compute offering upgrade.
> ---
>
> Key: CLOUDSTACK-9626
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9626
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.8.0
>Reporter: Sudhansu Sahu
>Assignee: Sudhansu Sahu
>
> ISSUE
> 
> Instance fails to start after unsuccesful compute offering upgrade.
>  
> TROUBLESHOOTING
> ==
> We observed VM instance get compute values "cpuNumber","cpuSpeed","memory" 
> removed from table "user_vm_details", which cause instance fail to startup 
> next time on XenServer 
> mysql> select * from user_vm_details where vm_id=10;
> +-----+-------+------------------------------------+-------------------------------------------------+---------+
> | id  | vm_id | name                               | value                                           | display |
> +-----+-------+------------------------------------+-------------------------------------------------+---------+
> | 218 |    10 | platform                           | viridian:true;acpi:1;apic:true;pae:true;nx:true |       1 |
> | 219 |    10 | hypervisortoolsversion             | xenserver56                                     |       1 |
> | 220 |    10 | Message.ReservedCapacityFreed.Flag | true                                            |       1 |
> +-----+-------+------------------------------------+-------------------------------------------------+---------+
> 3 rows in set (0.00 sec)
>  
> Unexpected exception while executing 
> org.apache.cloudstack.api.command.user.vm.ScaleVMCmd
> java.lang.NullPointerException
>   at 
> com.cloud.vm.UserVmManagerImpl.upgradeStoppedVirtualMachine(UserVmManagerImpl.java:953)
>   at 
> com.cloud.vm.UserVmManagerImpl.upgradeVirtualMachine(UserVmManagerImpl.java:1331)
>   at 
> com.cloud.vm.UserVmManagerImpl.upgradeVirtualMachine(UserVmManagerImpl.java:1271)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:616)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> com.cloud.event.ActionEventInterceptor.invoke(ActionEventInterceptor.java:50)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
>   at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
>   at 
> org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
>   at $Proxy169.upgradeVirtualMachine(Unknown Source)
>   at 
> org.apache.cloudstack.api.command.user.vm.ScaleVMCmd.execute(ScaleVMCmd.java:127)
>   at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:167)
>   at 
> com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:97)
>   at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:543)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:50)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>   at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>   at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run

[jira] [Commented] (CLOUDSTACK-9626) Instance fails to start after unsuccesful compute offering upgrade.

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15704418#comment-15704418
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9626:


GitHub user sudhansu7 opened a pull request:

https://github.com/apache/cloudstack/pull/1796

CLOUDSTACK-9626: Instance fails to start after unsuccesful compute

offering upgrade.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sudhansu7/cloudstack CLOUDSTACK-9626

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1796.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1796


commit 43cac6202c558b8b0f7a02901817997603bf621d
Author: Sudhansu 
Date:   2016-11-29T05:18:04Z

CLOUDSTACK-9626: Instance fails to start after unsuccesful compute
offering upgrade.




> Instance fails to start after unsuccesful compute offering upgrade.
> ---
>
> Key: CLOUDSTACK-9626
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9626
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.8.0
>Reporter: Sudhansu Sahu
>Assignee: Sudhansu Sahu
>
> ISSUE
> 
> Instance fails to start after unsuccesful compute offering upgrade.
>  
> TROUBLESHOOTING
> ==
> We observed VM instance get compute values "cpuNumber","cpuSpeed","memory" 
> removed from table "user_vm_details", which cause instance fail to startup 
> next time on XenServer 
> mysql> select * from user_vm_details where vm_id=10;
> +-----+-------+------------------------------------+-------------------------------------------------+---------+
> | id  | vm_id | name                               | value                                           | display |
> +-----+-------+------------------------------------+-------------------------------------------------+---------+
> | 218 |    10 | platform                           | viridian:true;acpi:1;apic:true;pae:true;nx:true |       1 |
> | 219 |    10 | hypervisortoolsversion             | xenserver56                                     |       1 |
> | 220 |    10 | Message.ReservedCapacityFreed.Flag | true                                            |       1 |
> +-----+-------+------------------------------------+-------------------------------------------------+---------+
> 3 rows in set (0.00 sec)
>  
> Unexpected exception while executing 
> org.apache.cloudstack.api.command.user.vm.ScaleVMCmd
> java.lang.NullPointerException
>   at 
> com.cloud.vm.UserVmManagerImpl.upgradeStoppedVirtualMachine(UserVmManagerImpl.java:953)
>   at 
> com.cloud.vm.UserVmManagerImpl.upgradeVirtualMachine(UserVmManagerImpl.java:1331)
>   at 
> com.cloud.vm.UserVmManagerImpl.upgradeVirtualMachine(UserVmManagerImpl.java:1271)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:616)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> com.cloud.event.ActionEventInterceptor.invoke(ActionEventInterceptor.java:50)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
>   at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
>   at 
> org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
>   at $Proxy169.upgradeVirtualMachine(Unknown Source)
>   at 
> org.apache.cloudstack.api.command.user.vm.ScaleVMCmd.execute(ScaleVMCmd.java:127)
>   at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:167)
>   at 
> com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:97)
>   at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:543)
>   at 
> org.apache.cloudstack.managed.context.