[jira] [Commented] (CLOUDSTACK-9851) travis CI build failure after merge of PR#1953

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15944720#comment-15944720
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9851:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/2019
  
LGTM


> travis CI build failure after merge of PR#1953
> --
>
> Key: CLOUDSTACK-9851
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9851
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: sudharma jain
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9099) SecretKey is returned from the APIs

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15944724#comment-15944724
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9099:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1996
  
@jayapalu this is a useful security fix for 4.9 as well. Can you please 
rebase against the 4.9 branch and change the base branch of the PR to 4.9?


> SecretKey is returned from the APIs
> ---
>
> Key: CLOUDSTACK-9099
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9099
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>Assignee: Kshitij Kansal
>
> The secretKey parameter is returned from the following APIs:
> createAccount
> createUser
> disableAccount
> disableUser
> enableAccount
> enableUser
> listAccounts
> listUsers
> lockAccount
> lockUser
> registerUserKeys
> updateAccount
> updateUser
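The general shape of such a fix is to scrub the sensitive field from every response before it is serialized. Below is a minimal sketch of that idea; the function name, field set, and response shape are illustrative, not CloudStack's actual code:

```python
# Hypothetical sketch: drop sensitive keys from an API response structure
# before it is serialized. Names are illustrative, not CloudStack's code.
SENSITIVE_FIELDS = {"secretkey"}

def scrub_response(response):
    """Recursively remove sensitive keys from a nested response dict."""
    clean = {}
    for key, value in response.items():
        if key.lower() in SENSITIVE_FIELDS:
            continue  # never echo the secret key back to the caller
        if isinstance(value, dict):
            value = scrub_response(value)
        elif isinstance(value, list):
            value = [scrub_response(v) if isinstance(v, dict) else v
                     for v in value]
        clean[key] = value
    return clean
```

Matching keys case-insensitively covers both `secretkey` and `secretKey` spellings in serialized responses.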





[jira] [Updated] (CLOUDSTACK-8310) commit to commit db upgrades and db version control

2017-03-28 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8310:

Security: (was: Public)

> commit to commit db upgrades and db version control
> ---
>
> Key: CLOUDSTACK-8310
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8310
> Project: CloudStack
>  Issue Type: Improvement
>Reporter: Rajani Karuturi
>  Labels: gsoc2017
>
> CloudStack currently uses a homegrown tool to do its database upgrades. 
> The challenge with the current tool is that it can only do version-to-version 
> upgrades, not commit-to-commit upgrades.
> Because of this, when different people work on the same branch and have db 
> changes, the environment is broken until you do a fresh db deploy.
> To fix this, we can use existing and well-tested tools like Liquibase, 
> Flyway, etc., or improve on the existing one. 
> Related discussions on the dev list:
> http://markmail.org/thread/aicijeu6g5mzx4sc
> http://markmail.org/thread/r7wv36o356nolq7f
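The commit-to-commit idea amounts to tracking each migration file in a version table keyed by name and checksum, which is roughly what Flyway and Liquibase provide. A minimal illustration of the mechanism (SQLite is used here only as a stand-in for CloudStack's MySQL schema; names are illustrative):

```python
import hashlib
import sqlite3

def apply_migrations(conn, migrations):
    """Apply each (name, sql) migration exactly once, tracked by checksum,
    so every commit can ship its own migration file and all environments
    converge without a fresh db deploy."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version "
                 "(name TEXT PRIMARY KEY, checksum TEXT NOT NULL)")
    for name, sql in migrations:
        checksum = hashlib.sha256(sql.encode()).hexdigest()
        row = conn.execute("SELECT checksum FROM schema_version WHERE name = ?",
                           (name,)).fetchone()
        if row is None:
            conn.executescript(sql)  # new migration: run it and record it
            conn.execute("INSERT INTO schema_version VALUES (?, ?)",
                         (name, checksum))
        elif row[0] != checksum:
            # an applied migration must never be edited in place
            raise RuntimeError("migration %s changed after being applied" % name)
    conn.commit()
```

Re-running the same set of migrations is a no-op, which is what makes a branch shared by several developers safe to upgrade incrementally.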





[jira] [Updated] (CLOUDSTACK-9804) Add Cinder as a storage driver to Cloudstack

2017-03-28 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-9804:

Security: (was: Public)

> Add Cinder as a storage driver to Cloudstack
> 
>
> Key: CLOUDSTACK-9804
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9804
> Project: CloudStack
>  Issue Type: New Feature
>  Components: Storage Controller
>Reporter: Syed Ahmed
>Priority: Minor
>  Labels: GSoC2017, mentor
>






[jira] [Updated] (CLOUDSTACK-8104) Add TRIM/Discard support to Qemu

2017-03-28 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8104:

Security: (was: Public)

> Add TRIM/Discard support to Qemu
> 
>
> Key: CLOUDSTACK-8104
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8104
> Project: CloudStack
>  Issue Type: New Feature
>  Components: KVM
>Reporter: Wido den Hollander
>Assignee: Wido den Hollander
>Priority: Minor
>  Labels: gsoc2017
>
> Thinly provisioned volumes on storage devices continue to grow because the 
> storage device has no idea of which blocks are in use.
> For SSDs the TRIM/Discard feature was invented to give free/unused blocks 
> back to the flash device, but it can also be used by Qemu.
> Ceph's RBD for example supports trimming so that volumes can shrink again 
> when blocks are no longer in use.
> This is supported since Qemu 1.5, but since 1.6 it also works for QCOW2 
> images.
> It does, however, require the new virtio-scsi controller to work optimally, 
> so some changes are needed.
> For more information see:
> * http://wiki.qemu.org/ChangeLog/1.5#Block_devices
> * http://wiki.qemu.org/ChangeLog/1.6#Block_devices
> * http://ceph.com/docs/master/rbd/qemu-rbd/#enabling-discard-trim
> * http://wiki.qemu.org/Features/VirtioSCSI
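As a rough illustration of what this involves on the libvirt side, a disk attached over virtio-scsi with discard enabled looks roughly like the fragment below (the pool/volume name and target device are placeholders):

```xml
<!-- virtio-scsi controller; disks with bus='scsi' attach to it -->
<controller type='scsi' model='virtio-scsi'/>
<disk type='network' device='disk'>
  <!-- discard='unmap' passes guest TRIM/discard requests to the backend -->
  <driver name='qemu' type='raw' discard='unmap'/>
  <source protocol='rbd' name='pool/volume'/>
  <target dev='sda' bus='scsi'/>
</disk>
```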





[jira] [Updated] (CLOUDSTACK-8239) Add support for VirtIO-SCSI for KVM hypervisors

2017-03-28 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8239:

Security: (was: Public)

> Add support for VirtIO-SCSI for KVM hypervisors
> ---
>
> Key: CLOUDSTACK-8239
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8239
> Project: CloudStack
>  Issue Type: New Feature
>  Components: KVM, Storage Controller
>Affects Versions: 4.6.0
> Environment: KVM
>Reporter: Andrei Mikhailovsky
>Assignee: Wido den Hollander
>Priority: Critical
>  Labels: ceph, gsoc2017, kvm, libvirt, rbd, storage_drivers, 
> virtio
> Fix For: Future
>
>
> It would be nice to have support for virtio-scsi for KVM hypervisors.
> The reasons for using virtio-scsi instead of virtio-blk are the increased 
> number of devices that can be attached to a VM and the ability to use discard 
> to reclaim unused blocks from backend storage such as Ceph RBD. There are 
> also reports of a performance advantage.





[jira] [Updated] (CLOUDSTACK-9778) Replace custom console with NoVNC console

2017-03-28 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-9778:

Security: (was: Public)

> Replace custom console with NoVNC console
> -
>
> Key: CLOUDSTACK-9778
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9778
> Project: CloudStack
>  Issue Type: New Feature
>Reporter: Syed Ahmed
>  Labels: GSoC2017, considerForGsoc
>
> There are many advantages to using noVNC:
> * Uses WebSockets, so connections are persistent
> * Supports operations like copy/paste
> * Has better browser compatibility
> * Is more responsive overall and has a nice feel to it





[jira] [Updated] (CLOUDSTACK-8629) Use Ceph RBD storage pool for writing HA heartbeats

2017-03-28 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8629:

Security: (was: Public)

> Use Ceph RBD storage pool for writing HA heartbeats
> ---
>
> Key: CLOUDSTACK-8629
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8629
> Project: CloudStack
>  Issue Type: Improvement
>  Components: KVM
>Reporter: Wido den Hollander
>Assignee: Wido den Hollander
>  Labels: gsoc2017
> Fix For: Future
>
>
> Just like NFS, we should write a heartbeat for each Instance to RADOS.
> Each host could write a simple object like: /
> It would simply write the timestamp to the object, encoded in JSON.
> Other hosts can read that object and see whether the host wrote its timestamp 
> recently. If it did, the host is still up and running and fencing is not 
> needed.
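The scheme described above can be sketched as follows; a plain dict stands in for the RADOS pool, and the 60-second staleness threshold is an assumed example value, not part of the proposal:

```python
import json
import time

STALE_AFTER = 60  # seconds; an assumed threshold, not from the proposal

def write_heartbeat(pool, host, now=None):
    """Write this host's heartbeat object. 'pool' (a dict here) stands in
    for a RADOS pool of named objects."""
    now = time.time() if now is None else now
    pool["hb-%s" % host] = json.dumps({"host": host, "timestamp": now})

def is_alive(pool, host, now=None):
    """True if the host wrote its heartbeat recently enough; a missing or
    stale heartbeat object means the host may need fencing."""
    obj = pool.get("hb-%s" % host)
    if obj is None:
        return False
    now = time.time() if now is None else now
    return now - json.loads(obj)["timestamp"] <= STALE_AFTER
```

In a real implementation the dict operations would be replaced by RADOS object writes and reads against the primary storage pool.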





[jira] [Updated] (CLOUDSTACK-9777) decouple cloudstack UI

2017-03-28 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-9777:

Security: (was: Public)

> decouple cloudstack UI
> --
>
> Key: CLOUDSTACK-9777
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9777
> Project: CloudStack
>  Issue Type: Bug
>Reporter: Rajani Karuturi
>  Labels: gsoc2017
>
> Just like CloudMonkey, decouple the CloudStack UI into a separate project.
> It should be able to talk to any CloudStack endpoint.





[jira] [Commented] (CLOUDSTACK-9778) Replace custom console with NoVNC console

2017-03-28 Thread Rajani Karuturi (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15944769#comment-15944769
 ] 

Rajani Karuturi commented on CLOUDSTACK-9778:
-

Thanks for the heads-up [~arafalov]. I changed the security level for all 
gsoc-tagged issues.

> Replace custom console with NoVNC console
> -
>
> Key: CLOUDSTACK-9778
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9778
> Project: CloudStack
>  Issue Type: New Feature
>Reporter: Syed Ahmed
>  Labels: GSoC2017, considerForGsoc
>





[jira] [Updated] (CLOUDSTACK-8223) generate ui widgets for api sets

2017-03-28 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8223:

Security: (was: Public)

> generate ui widgets for api sets
> 
>
> Key: CLOUDSTACK-8223
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8223
> Project: CloudStack
>  Issue Type: New Feature
>Reporter: Daan Hoogland
>  Labels: gsoc2017
>
> Based on the update, create, and list API calls, widgets that can be used in 
> custom user interfaces can be generated. A good framework must be chosen and 
> the generation code written. Most likely a v2 API must be created to deal 
> with the inconsistencies in the present API.





[jira] [Updated] (CLOUDSTACK-8223) generate ui widgets for api sets

2017-03-28 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8223:

Labels: gsoc2017  (was: GSoC)

> generate ui widgets for api sets
> 
>
> Key: CLOUDSTACK-8223
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8223
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Daan Hoogland
>  Labels: gsoc2017
>





[jira] [Updated] (CLOUDSTACK-6045) [GSoC] Create GUI to add primary storage based on plug-ins

2017-03-28 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-6045:

Labels: gsoc gsoc2014 gsoc2017  (was: gsoc gsoc2014)

> [GSoC] Create GUI to add primary storage based on plug-ins
> --
>
> Key: CLOUDSTACK-6045
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6045
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.4.0
> Environment: All browsers that CloudStack supports
>Reporter: Mike Tutkowski
>Assignee: Seif Eddine Jemli
>  Labels: gsoc, gsoc2014
> Fix For: 4.4.0
>
>
> At present, if an admin wants to add primary storage to CloudStack 
> that is NOT based on the default storage plug-in, the admin must invoke the 
> addStoragePool API outside of the CloudStack GUI.
> It would be beneficial to CloudStack admins if they could add this kind of 
> primary storage to CloudStack via its standard GUI.
> This project will require a degree of usability work in that the designer 
> must analyze CloudStack's GUI sufficiently to come up with a plan for where 
> the necessary information can be input.
> Once a GUI prototype has been developed (one could use a tool like PowerPoint 
> for this purpose), then the developer must analyze the necessary HTML and 
> JavaScript logic to add the proposed support.
> This project could take the form of an optional GUI plug-in.
> It is possible this project may add one or more parameters to the 
> addStoragePool API. If so, then this will require Java changes on the backend.
> It is likely the developer will have to create a new CloudStack API to 
> retrieve the list of installed storage plug-ins.





[jira] [Updated] (CLOUDSTACK-8206) Support Bhyve as a hypervisor in CloudStack

2017-03-28 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8206:

Security: (was: Public)

> Support Bhyve as a hypervisor in CloudStack
> ---
>
> Key: CLOUDSTACK-8206
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8206
> Project: CloudStack
>  Issue Type: New Feature
>Reporter: Rohit Yadav
>  Labels: cloud, gsoc2015, gsoc2017, java
>
> Support Bhyve (from the FreeBSD community) as a hypervisor in CloudStack. This 
> would require using libvirt, and investigating what is possible with respect 
> to basic/advanced zones and isolated/shared networking.
> Suggested Mentor: Rohit Yadav





[jira] [Updated] (CLOUDSTACK-6045) [GSoC] Create GUI to add primary storage based on plug-ins

2017-03-28 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-6045:

Labels: gsoc gsoc2014  (was: gsoc gsoc2014 gsoc2017)

> [GSoC] Create GUI to add primary storage based on plug-ins
> --
>
> Key: CLOUDSTACK-6045
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6045
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.4.0
> Environment: All browsers that CloudStack supports
>Reporter: Mike Tutkowski
>Assignee: Seif Eddine Jemli
>  Labels: gsoc, gsoc2014
> Fix For: 4.4.0
>
>





[jira] [Updated] (CLOUDSTACK-8206) Support Bhyve as a hypervisor in CloudStack

2017-03-28 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8206:

Labels: cloud gsoc2015 gsoc2017 java  (was: cloud gsoc gsoc2015 java)

> Support Bhyve as a hypervisor in CloudStack
> ---
>
> Key: CLOUDSTACK-8206
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8206
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>  Labels: cloud, gsoc2015, gsoc2017, java
>





[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15944774#comment-15944774
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
@karuturi I'm working on the changes.


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. Changed this limit to a higher value directly in the DB for 
> VMware and tried attaching more than 14 disks. This failed with the 
> exception below:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.
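The underlying allocation problem can be illustrated with a small sketch: pick the lowest free device id up to the hypervisor's limit and fail with a clear error instead of running off the end of an iterator, as the NoSuchElementException above shows. This is a simplified stand-in, not the actual VolumeApiServiceImpl code (skipping id 3 mirrors CloudStack's CD-ROM reservation):

```python
def next_device_id(used_ids, max_devices):
    """Return the lowest free data-volume device id, or fail clearly when
    the hypervisor limit is reached. Simplified stand-in for getDeviceId;
    id 3 is skipped, mirroring CloudStack's CD-ROM reservation."""
    for dev_id in range(1, max_devices + 1):
        if dev_id != 3 and dev_id not in used_ids:
            return dev_id
    raise ValueError("all %d device ids are in use; check the hypervisor's "
                     "max data volumes limit" % max_devices)
```

The point of the fix is that the limit comes from hypervisor capabilities rather than a hardcoded constant, and exhaustion produces an actionable error.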





[jira] [Commented] (CLOUDSTACK-9830) QuotaAlertManagerTest fails testGetDifferenceDays on day before DST change

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15944777#comment-15944777
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9830:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/2001
  
LGTM. @abhinandanprateek ?


> QuotaAlertManagerTest fails testGetDifferenceDays on day before DST change
> --
>
> Key: CLOUDSTACK-9830
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9830
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
> Environment: master
>Reporter: Nathan Johnson
>Assignee: Nathan Johnson
>Priority: Minor
>
> this line (182 as of right now):
> assertTrue(QuotaAlertManagerImpl.getDifferenceDays(now, new 
> DateTime(now).plusDays(1).toDate()) == 1L);
> fails on days where we're about to "spring forward" and lose an hour.





[jira] [Commented] (CLOUDSTACK-9830) QuotaAlertManagerTest fails testGetDifferenceDays on day before DST change

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15944779#comment-15944779
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9830:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/2001
  
@nathanejohnson can we continue to use Joda-Time while still fixing your issue?


> QuotaAlertManagerTest fails testGetDifferenceDays on day before DST change
> --
>
> Key: CLOUDSTACK-9830
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9830
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
> Environment: master
>Reporter: Nathan Johnson
>Assignee: Nathan Johnson
>Priority: Minor
>





[jira] [Commented] (CLOUDSTACK-9851) travis CI build failure after merge of PR#1953

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945053#comment-15945053
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9851:


Github user SudharmaJain commented on the issue:

https://github.com/apache/cloudstack/pull/2019
  
@koushik-das As you suggested, I have added changes to correct the 
maxDataVolume limits.


> travis CI build failure after merge of PR#1953
> --
>
> Key: CLOUDSTACK-9851
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9851
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: sudharma jain
>






[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945077#comment-15945077
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@blueorangutan package


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>
> Currently the root disk size of an instance is locked to that of the 
> template. This creates unnecessary template duplicates, prevents the 
> creation of a marketplace, wastes time and disk space, and generally makes 
> work more complicated.
> Real life example - a small VPS provider might want to offer the following 
> sizes (in GB):
> 10,20,40,80,160,240,320,480,620
> That's 9 offerings.
> The template selection could look like this, including real disk space used:
> Windows 2008 ~10GB
> Windows 2008+Plesk ~15GB
> Windows 2008+MSSQL ~15GB
> Windows 2012 ~10GB
> Windows 2012+Plesk ~15GB
> Windows 2012+MSSQL ~15GB
> CentOS ~1GB
> CentOS+CPanel ~3GB
> CentOS+Virtualmin ~3GB
> CentOS+Zimbra ~3GB
> CentOS+Docker ~2GB
> Debian ~1GB
> Ubuntu LTS ~1GB
> In this case the total disk space used by templates will be 828 GB, that's 
> almost 1 TB. If your storage is expensive and limited SSD this can get 
> painful!
> If the root resize feature is enabled we can reduce this to under 100 GB.
> Specifications and Description 
> Administrators don't want to deploy duplicate OS templates of differing 
> sizes just to support different storage packages. Instead, the VM deployment 
> can accept a size for the root disk and adjust the template clone 
> accordingly. In addition, CloudStack already supports data disk resizing for 
> existing volumes, we can extend that functionality to resize existing root 
> disks. 
>   As mentioned, we can leverage the existing design for resizing an existing 
> volume. The difference with root volumes is that we can't resize via disk 
> offering, therefore we need to verify that no disk offering was passed, just 
> a size. The existing enforcement of new size > existing size will still 
> serve its purpose.
>For deployment-based resize (ROOT volume size different from template 
> size), we pass the rootdisksize parameter when the existing code allocates 
> the root volume. In the process, we validate that the root disk size is > 
> existing template size, and non-zero. This will persist the root volume as 
> the desired size regardless of whether or not the VM is started on deploy. 
> Then hypervisor specific code needs to be made to pay attention to the 
> VolumeObjectTO's size attribute and use that when doing the work of cloning 
> from template, rather than inheriting the template's size. This can be 
> implemented one hypervisor at a time, and as such there needs to be a check 
> in UserVmManagerImpl to fail unsupported hypervisors with 
> InvalidParameterValueException when the rootdisksize is passed.
>
> Hypervisor specific changes
> XenServer
> Resize ROOT volume is only supported for stopped VMs.
> A newly created ROOT volume will be resized after cloning from the template.
> VMware
> Resize ROOT volume is only supported for stopped VMs.
> The new size must be larger than the previous size.
> A newly created ROOT volume will be resized after cloning from the template 
> only if:
> there is no root disk chaining (i.e., a full clone is used),
> and the root disk controller setting is not IDE.
> A previously created ROOT volume can be resized only if:
> there is no root disk chaining,
> and the root disk controller setting is not IDE.
> Web Services APIs
> resizeVolume API call will not change, but it will accept volume UUIDs of 
> root volumes in id parameter for resizing.
> deployVirtualMachine API call will allow new rootdisksize parameter to be 
> passed. This parameter will be used as the disk size (in GB) when cloning 
> from template.
> UI
> 1) (refer to attached image 1) The resize volume option is added for ROOT 
> disks.
> 2) (refer to attached image 2) When the user resizes a ROOT volume, only the 
> size option is shown. For DATADISK volumes, disk offerings are shown.
> 3) (refer to attached image 3) When the user deploys a VM, a new option for 
> root disk size is added.
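The validation rules stated above can be condensed into a small check. The sketch below is a simplified illustration of those rules, not CloudStack's actual implementation; the parameter names and supported-hypervisor list are assumptions for the example:

```python
def validate_root_disk_size(root_disk_gb, template_gb, hypervisor,
                            supported=("KVM", "XenServer", "VMware")):
    """Enforce the rules described above: rootdisksize is rejected on
    unsupported hypervisors, must be non-zero and positive, and must
    exceed the template size. A simplified sketch, not CloudStack code."""
    if hypervisor not in supported:
        # corresponds to the InvalidParameterValueException described above
        raise ValueError("rootdisksize is not supported on %s" % hypervisor)
    if root_disk_gb <= 0:
        raise ValueError("rootdisksize must be a positive number of GB")
    if root_disk_gb <= template_gb:
        raise ValueError("rootdisksize %d GB must exceed the template's %d GB"
                         % (root_disk_gb, template_gb))
    return root_disk_gb
```

The returned size is what would be persisted on the ROOT volume regardless of whether the VM is started on deploy.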





[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945078#comment-15945078
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>
> Currently the root size of an instance is locked to that of the template. 
> This creates unnecessary template duplicates, prevents the creation of a 
> market place, wastes time and disk space and generally makes work more 
> complicated.
> Real life example - a small VPS provider might want to offer the following 
> sizes (in GB):
> 10,20,40,80,160,240,320,480,620
> That's 9 offerings.
> The template selection could look like this, including real disk space used:
> Windows 2008 ~10GB
> Windows 2008+Plesk ~15GB
> Windows 2008+MSSQL ~15GB
> Windows 2012 ~10GB
> Windows 2012+Plesk ~15GB
> Windows 2012+MSSQL ~15GB
> CentOS ~1GB
> CentOS+CPanel ~3GB
> CentOS+Virtualmin ~3GB
> CentOS+Zimbra ~3GB
> CentOS+Docker ~2GB
> Debian ~1GB
> Ubuntu LTS ~1GB
> In this case the total disk space used by templates will be 828 GB, that's 
> almost 1 TB. If your storage is expensive and limited SSD this can get 
> painful!
> If the root resize feature is enabled we can reduce this to under 100 GB.
> Specifications and Description 
> Administrators don't want to deploy duplicate OS templates of differing 
> sizes just to support different storage packages. Instead, the VM deployment 
> can accept a size for the root disk and adjust the template clone 
> accordingly. In addition, CloudStack already supports data disk resizing for 
> existing volumes, we can extend that functionality to resize existing root 
> disks. 
>   As mentioned, we can leverage the existing design for resizing an existing 
> volume. The difference with root volumes is that we can't resize via disk 
> offering, therefore we need to verify that no disk offering was passed, just 
> a size. The existing enforcements of new size > existing size will still 
> server their purpose.
>For deployment-based resize (ROOT volume size different from template 
> size), we pass the rootdisksize parameter when the existing code allocates 
> the root volume. In the process, we validate that the root disk size is > 
> existing template size, and non-zero. This will persist the root volume as 
> the desired size regardless of whether or not the VM is started on deploy. 
> Then hypervisor specific code needs to be made to pay attention to the 
> VolumeObjectTO's size attribute and use that when doing the work of cloning 
> from template, rather than inheriting the template's size. This can be 
> implemented one hypervisor at a time, and as such there needs to be a check 
> in UserVmManagerImpl to fail unsupported hypervisors with 
> InvalidParameterValueException when the rootdisksize is passed.
>
> Hypervisor specific changes
> XenServer
> Resize ROOT volume is only supported for stopped VMs
> Newly created ROOT volume will be resized after clone from template
> VMware  
> Resize ROOT volume is only supported for stopped VMs.
> The new size must be larger than the previous size.
> A newly created ROOT volume will be resized after clone from template only if:
> there is no root disk chaining (i.e., a full clone is used),
> and the Root Disk controller setting is not IDE.
> A previously created ROOT volume can be resized only if:
> there is no root disk chaining,
> and the Root Disk controller setting is not IDE.
> Web Services APIs
> resizeVolume API call will not change, but it will accept volume UUIDs of 
> root volumes in id parameter for resizing.
> deployVirtualMachine API call will allow new rootdisksize parameter to be 
> passed. This parameter will be used as the disk size (in GB) when cloning 
> from template.
> UI
> 1) (refer to attached image 1) A resize volume option is added for ROOT 
> disks.
> 2) (refer to attached image 2) When the user resizes a ROOT volume, only a 
> size field is shown; for DATADISK volumes, disk offerings are shown.
> 3) (refer to attached image 3) When the user deploys a VM, a new option for 
> root disk size is added.
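As a hedged illustration of the proposed API surface (not part of the spec itself), a deployVirtualMachine request carrying the new rootdisksize parameter could be built and signed following CloudStack's documented signing scheme (sorted parameters, lowercased URL-encoded query string, HMAC-SHA1, base64). All UUIDs, keys, and values below are placeholders:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, secret_key):
    # CloudStack's documented signing scheme: sort the parameters,
    # lowercase the URL-encoded query string, HMAC-SHA1 it with the
    # secret key, then base64-encode the digest.
    query = "&".join(
        f"{k.lower()}={urllib.parse.quote(str(v), safe='*').lower()}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(secret_key.encode(), query.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical deployVirtualMachine call carrying the new rootdisksize
# parameter (size in GB). All UUIDs and keys below are placeholders.
params = {
    "command": "deployVirtualMachine",
    "serviceofferingid": "11111111-2222-3333-4444-555555555555",
    "templateid": "66666666-7777-8888-9999-000000000000",
    "zoneid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
    "rootdisksize": 40,
    "apiKey": "EXAMPLE_API_KEY",
    "response": "json",
}
params["signature"] = sign_request(params, "EXAMPLE_SECRET_KEY")
```

The signed parameter map would then be sent as the query string of a GET request to the management server's API endpoint.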



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945122#comment-15945122
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + xenserver-65sp1) 
has been kicked to run smoke tests


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>
> Currently the root size of an instance is locked to that of the template. 
> This creates unnecessary template duplicates, prevents the creation of a 
> marketplace, wastes time and disk space, and generally makes work more 
> complicated.
> Real-life example: a small VPS provider might want to offer the following 
> sizes (in GB):
> 10,20,40,80,160,240,320,480,620
> That's 9 offerings.
> The template selection could look like this, including real disk space used:
> Windows 2008 ~10GB
> Windows 2008+Plesk ~15GB
> Windows 2008+MSSQL ~15GB
> Windows 2012 ~10GB
> Windows 2012+Plesk ~15GB
> Windows 2012+MSSQL ~15GB
> CentOS ~1GB
> CentOS+CPanel ~3GB
> CentOS+Virtualmin ~3GB
> CentOS+Zimbra ~3GB
> CentOS+Docker ~2GB
> Debian ~1GB
> Ubuntu LTS ~1GB
> In this case the total disk space used by templates will be 828 GB, that's 
> almost 1 TB. If your storage is expensive and limited SSD this can get 
> painful!
> If the root resize feature is enabled we can reduce this to under 100 GB.
> Specifications and Description 
> Administrators don't want to deploy duplicate OS templates of differing 
> sizes just to support different storage packages. Instead, the VM deployment 
> can accept a size for the root disk and adjust the template clone 
> accordingly. In addition, CloudStack already supports data disk resizing for 
> existing volumes; we can extend that functionality to resize existing root 
> disks. 
>   As mentioned, we can leverage the existing design for resizing an existing 
> volume. The difference with root volumes is that we can't resize via disk 
> offering; therefore we need to verify that no disk offering was passed, just 
> a size. The existing enforcements of new size > existing size will still 
> serve their purpose.
>For deployment-based resize (ROOT volume size different from template 
> size), we pass the rootdisksize parameter when the existing code allocates 
> the root volume. In the process, we validate that the root disk size is > 
> existing template size, and non-zero. This will persist the root volume as 
> the desired size regardless of whether or not the VM is started on deploy. 
> Then hypervisor-specific code needs to pay attention to the 
> VolumeObjectTO's size attribute and use that when doing the work of cloning 
> from template, rather than inheriting the template's size. This can be 
> implemented one hypervisor at a time, and as such there needs to be a check 
> in UserVmManagerImpl to fail unsupported hypervisors with 
> InvalidParameterValueException when the rootdisksize is passed.
>
> Hypervisor specific changes
> XenServer
> Resize ROOT volume is only supported for stopped VMs
> Newly created ROOT volume will be resized after clone from template
> VMware  
> Resize ROOT volume is only supported for stopped VMs.
> The new size must be larger than the previous size.
> A newly created ROOT volume will be resized after clone from template only if:
> there is no root disk chaining (i.e., a full clone is used),
> and the Root Disk controller setting is not IDE.
> A previously created ROOT volume can be resized only if:
> there is no root disk chaining,
> and the Root Disk controller setting is not IDE.
> Web Services APIs
> resizeVolume API call will not change, but it will accept volume UUIDs of 
> root volumes in id parameter for resizing.
> deployVirtualMachine API call will allow new rootdisksize parameter to be 
> passed. This parameter will be used as the disk size (in GB) when cloning 
> from template.
> UI
> 1) (refer to attached image 1) A resize volume option is added for ROOT 
> disks.
> 2) (refer to attached image 2) When the user resizes a ROOT volume, only a 
> size field is shown; for DATADISK volumes, disk offerings are shown.
> 3) (refer to attached image 3) When the user deploys a VM, a new option for 
> root disk size is added.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945121#comment-15945121
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@blueorangutan test centos7 xenserver-65sp1


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945109#comment-15945109
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-602


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945135#comment-15945135
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@borisstoyanov Can you also kick off vmware test in parallel?


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945140#comment-15945140
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + vmware-55u3) has 
been kicked to run smoke tests


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945138#comment-15945138
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@serg38 are you reading my mind somehow? :)
@blueorangutan test centos7 vmware-55u3


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9830) QuotaAlertManagerTest fails testGetDifferenceDays on day before DST change

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945165#comment-15945165
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9830:


Github user nathanejohnson commented on the issue:

https://github.com/apache/cloudstack/pull/2001
  
@rhtyd I'm not sure if this was a Joda-Time bug or (more likely) misuse of 
Joda-Time.  I'm not even sure how best to verify that.  All I know is that when 
using the Java date methods the issue went away.


> QuotaAlertManagerTest fails testGetDifferenceDays on day before DST change
> --
>
> Key: CLOUDSTACK-9830
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9830
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
> Environment: master
>Reporter: Nathan Johnson
>Assignee: Nathan Johnson
>Priority: Minor
>
> this line (182 as of right now):
> assertTrue(QuotaAlertManagerImpl.getDifferenceDays(now, new 
> DateTime(now).plusDays(1).toDate()) == 1L);
> fails on days where we're about to "spring forward" and lose an hour.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CLOUDSTACK-9854) Fix test_primary_storage test failure due to live migration

2017-03-28 Thread Nicolas Vazquez (JIRA)
Nicolas Vazquez created CLOUDSTACK-9854:
---

 Summary: Fix test_primary_storage test failure due to live 
migration
 Key: CLOUDSTACK-9854
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9854
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Test
Reporter: Nicolas Vazquez
Assignee: Nicolas Vazquez


Fix for test_primary_storage integration tests.

When finding storage pool migration options for a volume on a running VM, the 
API returns None because the hypervisor doesn't support live migration.

{noformat}
2017-03-28 06:07:55,958 - DEBUG - Sending GET Cmd : 
findStoragePoolsForMigration===
2017-03-28 06:07:55,977 - DEBUG - Response : None
2017-03-28 06:07:55,983 - CRITICAL - EXCEPTION: 
test_03_migration_options_storage_tags: ['Traceback (most recent call 
last):\n', '  File "/opt/python/2.7.12/lib/python2.7/unittest/case.py", line 
329, in run\ntestMethod()\n', '  File 
"/home/travis/.local/lib/python2.7/site-packages/marvin/lib/decoratorGenerators.py",
 line 30, in test_wrapper\nreturn test(self, *args, **kwargs)\n', '  File 
"/home/travis/build/apache/cloudstack/test/integration/smoke/test_primary_storage.py",
 line 547, in test_03_migration_options_storage_tags\npools_suitable = 
filter(lambda p : p.suitableformigration, pools_response)\n', "TypeError: 
'NoneType' object is not iterable\n"]
{noformat}

So we simply stop the VM before sending the findStoragePoolsForMigration command.
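The traceback boils down to filtering a `None` response. Independent of the chosen fix (stopping the VM first), a defensive pattern on the test side would be to coalesce the response before filtering; a sketch with a stand-in `pools_response`:

```python
# Stand-in for the findStoragePoolsForMigration response; the API returns
# None when the hypervisor does not support live migration.
pools_response = None

# filter(lambda p: p.suitableformigration, pools_response) raises
# "TypeError: 'NoneType' object is not iterable"; coalescing None to an
# empty list lets the test reach its own assertion instead of crashing.
pools_suitable = [p for p in (pools_response or []) if p.suitableformigration]
print(pools_suitable)  # []
```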





[jira] [Updated] (CLOUDSTACK-9854) Fix test_primary_storage test failure due to live migration

2017-03-28 Thread Nicolas Vazquez (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Vazquez updated CLOUDSTACK-9854:

Description: 
Fix for test_primary_storage integration tests on simulator.

When finding storage pool migration options for a volume on a running VM, the 
API returns None because the hypervisor doesn't support live migration.

{noformat}
2017-03-28 06:07:55,958 - DEBUG - Sending GET Cmd : 
findStoragePoolsForMigration===
2017-03-28 06:07:55,977 - DEBUG - Response : None
2017-03-28 06:07:55,983 - CRITICAL - EXCEPTION: 
test_03_migration_options_storage_tags: ['Traceback (most recent call 
last):\n', '  File "/opt/python/2.7.12/lib/python2.7/unittest/case.py", line 
329, in run\ntestMethod()\n', '  File 
"/home/travis/.local/lib/python2.7/site-packages/marvin/lib/decoratorGenerators.py",
 line 30, in test_wrapper\nreturn test(self, *args, **kwargs)\n', '  File 
"/home/travis/build/apache/cloudstack/test/integration/smoke/test_primary_storage.py",
 line 547, in test_03_migration_options_storage_tags\npools_suitable = 
filter(lambda p : p.suitableformigration, pools_response)\n', "TypeError: 
'NoneType' object is not iterable\n"]
{noformat}

So we simply stop the VM before sending the findStoragePoolsForMigration command.

  was:
Fix for test_primary_storage integration tests.

When finding storage pool migration options for a volume on a running VM, the 
API returns None because the hypervisor doesn't support live migration.

{noformat}
2017-03-28 06:07:55,958 - DEBUG - Sending GET Cmd : 
findStoragePoolsForMigration===
2017-03-28 06:07:55,977 - DEBUG - Response : None
2017-03-28 06:07:55,983 - CRITICAL - EXCEPTION: 
test_03_migration_options_storage_tags: ['Traceback (most recent call 
last):\n', '  File "/opt/python/2.7.12/lib/python2.7/unittest/case.py", line 
329, in run\ntestMethod()\n', '  File 
"/home/travis/.local/lib/python2.7/site-packages/marvin/lib/decoratorGenerators.py",
 line 30, in test_wrapper\nreturn test(self, *args, **kwargs)\n', '  File 
"/home/travis/build/apache/cloudstack/test/integration/smoke/test_primary_storage.py",
 line 547, in test_03_migration_options_storage_tags\npools_suitable = 
filter(lambda p : p.suitableformigration, pools_response)\n', "TypeError: 
'NoneType' object is not iterable\n"]
{noformat}

So we simply stop the VM before sending the findStoragePoolsForMigration command.


> Fix test_primary_storage test failure due to live migration
> ---
>
> Key: CLOUDSTACK-9854
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9854
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Test
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> Fix for test_primary_storage integration tests on simulator.
> When finding storage pool migration options for a volume on a running VM, the 
> API returns None because the hypervisor doesn't support live migration.
> {noformat}
> 2017-03-28 06:07:55,958 - DEBUG - Sending GET Cmd : 
> findStoragePoolsForMigration===
> 2017-03-28 06:07:55,977 - DEBUG - Response : None
> 2017-03-28 06:07:55,983 - CRITICAL - EXCEPTION: 
> test_03_migration_options_storage_tags: ['Traceback (most recent call 
> last):\n', '  File "/opt/python/2.7.12/lib/python2.7/unittest/case.py", line 
> 329, in run\ntestMethod()\n', '  File 
> "/home/travis/.local/lib/python2.7/site-packages/marvin/lib/decoratorGenerators.py",
>  line 30, in test_wrapper\nreturn test(self, *args, **kwargs)\n', '  File 
> "/home/travis/build/apache/cloudstack/test/integration/smoke/test_primary_storage.py",
>  line 547, in test_03_migration_options_storage_tags\npools_suitable = 
> filter(lambda p : p.suitableformigration, pools_response)\n', "TypeError: 
> 'NoneType' object is not iterable\n"]
> {noformat}
> So we simply stop the VM before sending the findStoragePoolsForMigration command.





[jira] [Commented] (CLOUDSTACK-9854) Fix test_primary_storage test failure due to live migration

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945286#comment-15945286
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9854:


GitHub user nvazquez opened a pull request:

https://github.com/apache/cloudstack/pull/2021

CLOUDSTACK-9854: Fix test_primary_storage test failure due to live migration

Fix for test_primary_storage integration tests on simulator.

When finding storage pool migration options for a volume on a running VM, the 
API returns None because the hypervisor doesn't support live migration.


2017-03-28 06:07:55,958 - DEBUG - Sending GET Cmd : 
findStoragePoolsForMigration===
2017-03-28 06:07:55,977 - DEBUG - Response : None
2017-03-28 06:07:55,983 - CRITICAL - EXCEPTION: 
test_03_migration_options_storage_tags: ['Traceback (most recent call 
last):\n', '  File "/opt/python/2.7.12/lib/python2.7/unittest/case.py", line 
329, in run\ntestMethod()\n', '  File 
"/home/travis/.local/lib/python2.7/site-packages/marvin/lib/decoratorGenerators.py",
 line 30, in test_wrapper\nreturn test(self, *args, **kwargs)\n', '  File 
"/home/travis/build/apache/cloudstack/test/integration/smoke/test_primary_storage.py",
 line 547, in test_03_migration_options_storage_tags\npools_suitable = 
filter(lambda p : p.suitableformigration, pools_response)\n', "TypeError: 
'NoneType' object is not iterable\n"]


So we simply stop the VM before sending the findStoragePoolsForMigration command.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/nvazquez/cloudstack CLOUDSTACK-9854

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/2021.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2021


commit e313dafea46cf281bf09cc66cfcaf6a38d53ca90
Author: nvazquez 
Date:   2017-03-28T14:35:55Z

CLOUDSTACK-9854: Fix test_primary_storage test failure due to live migration




> Fix test_primary_storage test failure due to live migration
> ---
>
> Key: CLOUDSTACK-9854
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9854
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Test
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> Fix for test_primary_storage integration tests on simulator.
> When finding storage pool migration options for a volume on a running VM, the 
> API returns None because the hypervisor doesn't support live migration.
> {noformat}
> 2017-03-28 06:07:55,958 - DEBUG - Sending GET Cmd : 
> findStoragePoolsForMigration===
> 2017-03-28 06:07:55,977 - DEBUG - Response : None
> 2017-03-28 06:07:55,983 - CRITICAL - EXCEPTION: 
> test_03_migration_options_storage_tags: ['Traceback (most recent call 
> last):\n', '  File "/opt/python/2.7.12/lib/python2.7/unittest/case.py", line 
> 329, in run\ntestMethod()\n', '  File 
> "/home/travis/.local/lib/python2.7/site-packages/marvin/lib/decoratorGenerators.py",
>  line 30, in test_wrapper\nreturn test(self, *args, **kwargs)\n', '  File 
> "/home/travis/build/apache/cloudstack/test/integration/smoke/test_primary_storage.py",
>  line 547, in test_03_migration_options_storage_tags\npools_suitable = 
> filter(lambda p : p.suitableformigration, pools_response)\n', "TypeError: 
> 'NoneType' object is not iterable\n"]
> {noformat}
> So we simply stop the VM before sending the findStoragePoolsForMigration command.





[jira] [Commented] (CLOUDSTACK-8855) Improve Error Message for Host Alert State

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945478#comment-15945478
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8855:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/837#discussion_r108469206
  
--- Diff: server/src/com/cloud/alert/AlertManagerImpl.java ---
@@ -767,7 +767,9 @@ public void sendAlert(AlertType alertType, long 
dataCenterId, Long podId, Long c
 // set up a new alert
 AlertVO newAlert = new AlertVO();
 newAlert.setType(alertType.getType());
-newAlert.setSubject(subject);
+//do not have a seperate column for content.
+//appending the message to the subject for now.
+newAlert.setSubject(subject+content);
--- End diff --

I agree with you regarding the contributor's time. I also find it great 
that you documented this and opened a Jira ticket. However, for this specific 
case, I am really not comfortable with the change as it is. As I said before, 
the code at line 772 is opening the gates for unexpected runtime exceptions 
(A.K.A. bugs). If others are willing to take the risk of merging and then later 
dealing with the consequences, I cannot do anything against it. I am only 
pointing at the problem and making it quite clear what I think.

I really do not see any trouble in doing things the right way here. It is only 
a matter of creating an ALTER TABLE SQL statement that adds a column to the table. 
Then, you have to create this new field in `AlertVO` and use it; as simple as that.
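The concern about concatenating `subject` and `content` can be made concrete: when either side is missing, a blind concatenation mangles the text (or fails outright, depending on the language). A null-safe join in the spirit of the discussion, sketched in Python with illustrative names that are not CloudStack's:

```python
def build_alert_subject(subject, content):
    """Join subject and content for display, skipping missing parts
    instead of concatenating a None/null value into the text."""
    return " - ".join(part for part in (subject, content) if part)

print(build_alert_subject("Host in ALERT state", "agent stopped pinging"))
# Host in ALERT state - agent stopped pinging
print(build_alert_subject("Host in ALERT state", None))
# Host in ALERT state
```

A separate column, as suggested in the review, avoids the need for any such joining in the first place.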



> Improve Error Message for Host Alert State
> --
>
> Key: CLOUDSTACK-8855
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8855
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Bharat Kumar
>Assignee: Bharat Kumar
>






[jira] [Commented] (CLOUDSTACK-8672) NCC Integration with CloudStack

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945614#comment-15945614
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8672:


Github user rafaelweingartner commented on the issue:

https://github.com/apache/cloudstack/pull/1859
  
Folks, what about a middle ground here?

I was checking the commits. For instance, all of the commits styled 
"Added/implemented XXX" could be squashed by the same author. There are a 
bunch of commits in this style that each introduce a single class. Also, 
subsequent commits that change the introduced classes by the same author can 
also be squashed. Therefore, no one loses merit and the history is maintained.

After the squashing process is done, we can evaluate and discuss the 
situation further. 



> NCC Integration with CloudStack
> ---
>
> Key: CLOUDSTACK-8672
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8672
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Devices
>Affects Versions: 4.6.0
>Reporter: Rajesh Battala
>Assignee: Rajesh Battala
>Priority: Critical
> Fix For: Future
>
>






[jira] [Commented] (CLOUDSTACK-9198) VR gets created in the disabled POD

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945627#comment-15945627
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9198:


Github user rafaelweingartner commented on the issue:

https://github.com/apache/cloudstack/pull/1278
  
@anshul1886, this pointing finger thing is not good.

I do not know why people did not do the work as it should have been done 
before. I was probably not around when that was done. I only asked you to 
remove those variables because you were touching the code in which they are 
found. It is not only with you; every time I review code and there is room 
for improvement, I always suggest it. I also measure my suggestions; I will 
never ask for something huge. Normally I ask/suggest small and concise 
improvements such as the removal of unused variables/blocks of code.

I was probably present in most of the PRs created by @nvazquez; you can see 
how this type of discussion greatly improved all of the code he has already 
worked on.

If you do not want to remove something that is not being used, that is fine. 
However, I would like a clarification. If the variables you are changing are 
not used (as you finally admitted), then how can changing them solve the 
problem you reported on CLOUDSTACK-9198?


> VR gets created in the disabled POD
> ---
>
> Key: CLOUDSTACK-9198
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9198
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>Assignee: Anshul Gangwar
>
> VR gets created in the disabled POD





[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946330#comment-15946330
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
Trillian test result (tid-964)
Environment: xenserver-65sp1 (x2), Advanced Networking with Mgmt server 7
Total time taken: 42693 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1813-t964-xenserver-65sp1.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: 
/marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_05_rvpc_multi_tiers | `Failure` | 500.62 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Failure` | 1346.02 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 532.74 
| test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 719.05 | 
test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 316.10 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 136.58 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 542.83 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 324.82 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 668.07 | test_vpc_router_nics.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 873.93 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 1072.21 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.78 | test_volumes.py
test_08_resize_volume | Success | 100.92 | test_volumes.py
test_07_resize_fail | Success | 121.04 | test_volumes.py
test_06_download_detached_volume | Success | 25.33 | test_volumes.py
test_05_detach_volume | Success | 100.28 | test_volumes.py
test_04_delete_attached_volume | Success | 10.19 | test_volumes.py
test_03_download_attached_volume | Success | 15.30 | test_volumes.py
test_02_attach_volume | Success | 15.73 | test_volumes.py
test_01_create_volume | Success | 397.46 | test_volumes.py
test_change_service_offering_for_vm_with_snapshots | Success | 374.28 | 
test_vm_snapshots.py
test_03_delete_vm_snapshots | Success | 280.21 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 186.28 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 133.68 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 177.15 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.72 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.25 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 61.06 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.10 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.15 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 20.22 | test_vm_life_cycle.py
test_02_start_vm | Success | 25.26 | test_vm_life_cycle.py
test_01_stop_vm | Success | 30.27 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 80.69 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.16 | test_templates.py
test_03_delete_template | Success | 5.10 | test_templates.py
test_02_edit_template | Success | 90.08 | test_templates.py
test_01_create_template | Success | 55.51 | test_templates.py
test_10_destroy_cpvm | Success | 226.72 | test_ssvm.py
test_09_destroy_ssvm | Success | 208.94 | test_ssvm.py
test_08_reboot_cpvm | Success | 356.88 | test_ssvm.py
test_07_reboot_ssvm | Success | 178.89 | test_ssvm.py
test_06_stop_cpvm | Success | 166.71 | test_ssvm.py
test_05_stop_ssvm | Success | 168.94 | test_ssvm.py
test_04_cpvm_internals | Success | 1.14 | test_ssvm.py
test_03_ssvm_internals | Success | 3.36 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_02_list_snapshots_with_removed_data_store | Success | 105.15 | 
test_snapshots.py
test_01_snapshot_root_disk | Success | 26.38 | test_snapshots.py
test_04_change_offering_small | Success | 121.04 | test_service_offerings.py
test_03_delete_service_offering | Su

[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946374#comment-15946374
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
Trillian test result (tid-965)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
Total time taken: 43960 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1813-t965-vmware-55u3.zip
Intermittent failure detected: 
/marvin/tests/smoke/test_deploy_vm_root_resize.py
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: 
/marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_vm_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_volumes.py
Test completed. 46 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_test_vm_volume_snapshot | `Failure` | 316.73 | test_vm_snapshots.py
test_04_rvpc_privategw_static_routes | `Failure` | 846.71 | 
test_privategw_acl.py
test_02_vpc_privategw_static_routes | `Failure` | 121.36 | 
test_privategw_acl.py
test_02_deploy_vm_root_resize | `Failure` | 65.56 | 
test_deploy_vm_root_resize.py
test_01_deploy_vm_root_resize | `Failure` | 40.42 | 
test_deploy_vm_root_resize.py
test_00_deploy_vm_root_resize | `Failure` | 211.19 | 
test_deploy_vm_root_resize.py
test_01_vpc_site2site_vpn | Success | 350.38 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 151.17 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 556.72 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 366.84 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 679.38 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 645.41 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1551.81 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 677.85 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 692.05 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1352.22 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 20.68 | test_volumes.py
test_06_download_detached_volume | Success | 60.41 | test_volumes.py
test_05_detach_volume | Success | 105.22 | test_volumes.py
test_04_delete_attached_volume | Success | 15.16 | test_volumes.py
test_03_download_attached_volume | Success | 15.19 | test_volumes.py
test_02_attach_volume | Success | 53.88 | test_volumes.py
test_01_create_volume | Success | 450.07 | test_volumes.py
test_change_service_offering_for_vm_with_snapshots | Success | 448.48 | 
test_vm_snapshots.py
test_03_delete_vm_snapshots | Success | 275.18 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 228.99 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 158.60 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 282.04 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.70 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 185.19 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 60.90 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.07 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.11 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.11 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.17 | test_vm_life_cycle.py
test_01_stop_vm | Success | 10.11 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 206.09 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.03 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 15.18 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.14 | test_templates.py
test_01_create_template | Success | 110.63 | test_templates.py
test_10_destroy_cpvm | Success | 266.59 | test_ssvm.py
test_09_destroy_ssvm | Success | 238.22 | test_ssvm.py
test_08_reboot_cpvm | Success | 156.26 | test_ssvm.py
test_07_reboot_ssvm | Success | 158.20 | test_ssvm.py
test_06_stop_cpvm | Success | 171.43 | test_ssvm.py
test_05_stop_ssvm | Success | 208.71 | test_ssvm.py
test_04_cpvm_internals | Success | 0.96 | test_ssvm.py
test_03_ssv

[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946436#comment-15946436
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user serg38 commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1813#discussion_r108580807
  
--- Diff: test/integration/smoke/test_deploy_vm_root_resize.py ---
@@ -114,36 +134,46 @@ def test_00_deploy_vm_root_resize(self):
 # 2. root disk has new size per listVolumes
 # 3. Rejects non-supported hypervisor types
 """
-if(self.hypervisor.lower() == 'kvm'):
-newrootsize = (self.template.size >> 30) + 2
-self.virtual_machine = VirtualMachine.create(
-self.apiclient,
-self.testdata["virtual_machine"],
-accountid=self.account.name,
-zoneid=self.zone.id,
-domainid=self.account.domainid,
-serviceofferingid=self.service_offering.id,
-templateid=self.template.id,
-rootdisksize=newrootsize
+
+
+newrootsize = (self.template.size >> 30) + 2
+if(self.hypervisor.lower() == 'kvm' or self.hypervisor.lower() ==
+'xenserver'or self.hypervisor.lower() == 'vmware'  ):
+
+if self.hypervisor=="vmware":
+self.virtual_machine = VirtualMachine.create(
+self.apiclient, self.services["virtual_machine"],
+zoneid=self.zone.id,
+accountid=self.account.name,
+domainid=self.domain.id,
+serviceofferingid=self.services_offering_vmware.id,
+templateid=self.template.id
+)
+
--- End diff --

B.O. tests are failing because for VMware you don't specify 
rootdisksize=newrootsize. You would probably be better off removing the if-else entirely.
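For reference, the `(self.template.size >> 30) + 2` expression in the test converts the template size from bytes to whole GB and adds 2 GB of headroom; a quick check of that arithmetic with a hypothetical 10 GB template:

```python
template_size = 10 * 2**30               # hypothetical 10 GB template, in bytes
newrootsize = (template_size >> 30) + 2  # >> 30 divides by 2**30 (bytes -> GB)
print(newrootsize)  # 12
```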


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>
> Currently the root size of an instance is locked to that of the template. 
> This creates unnecessary template duplicates, prevents the creation of a 
> market place, wastes time and disk space and generally makes work more 
> complicated.
> Real life example - a small VPS provider might want to offer the following 
> sizes (in GB):
> 10,20,40,80,160,240,320,480,620
> That's 9 offerings.
> The template selection could look like this, including real disk space used:
> Windows 2008 ~10GB
> Windows 2008+Plesk ~15GB
> Windows 2008+MSSQL ~15GB
> Windows 2012 ~10GB
> Windows 2012+Plesk ~15GB
> Windows 2012+MSSQL ~15GB
> CentOS ~1GB
> CentOS+CPanel ~3GB
> CentOS+Virtualmin ~3GB
> CentOS+Zimbra ~3GB
> CentOS+Docker ~2GB
> Debian ~1GB
> Ubuntu LTS ~1GB
> In this case the total disk space used by templates will be 828 GB, that's 
> almost 1 TB. If your storage is expensive and limited SSD this can get 
> painful!
> If the root resize feature is enabled we can reduce this to under 100 GB.
> Specifications and Description 
> Administrators don't want to deploy duplicate OS templates of differing 
> sizes just to support different storage packages. Instead, the VM deployment 
> can accept a size for the root disk and adjust the template clone 
> accordingly. In addition, CloudStack already supports data disk resizing for 
> existing volumes, we can extend that functionality to resize existing root 
> disks. 
>   As mentioned, we can leverage the existing design for resizing an existing 
> volume. The difference with root volumes is that we can't resize via disk 
> offering, therefore we need to verify that no disk offering was passed, just 
> a size. The existing enforcement of new size > existing size will still 
> serve its purpose.
>For deployment-based resize (ROOT volume size different from template 
> size), we pass the rootdisksize parameter when the existing code allocates 
> the root volume. In the process, we validate that the root disk size is > 
> existing template size, and non-zero. This will persist the root volume as 
> the desired size regardless of whether or not the VM is started on deploy. 
> Then hypervisor specific code needs to be made to pay attention to the 
> VolumeObjectTO's size attribute and use that when doing the work of cloning 
> from template, rather tha

[jira] [Commented] (CLOUDSTACK-9630) Cannot use listNics API as advertised

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946500#comment-15946500
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9630:


Github user PranaliM commented on the issue:

https://github.com/apache/cloudstack/pull/1797
  
Test LGTM based on manual testing of the fix:

**Before Fix:**

id = d82b2278-ca19-46da-b532-0a044e778bb8
networkid = 71424164-015c-4163-9dde-74019fb22ce2
netmask = 255.255.255.0
gateway = 10.1.1.1
ipaddress = 10.1.1.119
traffictype = Guest
isdefault = true
macaddress = 02:00:1c:fe:00:01
deviceid = 0
virtualmachineid = 71a0be44-69e4-4821-b8d9-e579ed04d52a

**After Fix:**

id = d82b2278-ca19-46da-b532-0a044e778bb8
networkid = 71424164-015c-4163-9dde-74019fb22ce2
netmask = 255.255.255.0
gateway = 10.1.1.1
ipaddress = 10.1.1.119
traffictype = Guest
**type = Isolated**
isdefault = true
macaddress = 02:00:1c:fe:00:01
deviceid = 0
virtualmachineid = 71a0be44-69e4-4821-b8d9-e579ed04d52a



> Cannot use listNics API as advertised
> -
>
> Key: CLOUDSTACK-9630
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9630
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Sudhansu Sahu
>
> For listNics on a VM, "type" was not returned within the API response. 
> EXPECTED BEHAVIOR
> ==
> The listNics API response returns the type of NIC (type), as specified in 
> https://cloudstack.apache.org/api/apidocs-4.8/user/listNics.html
>  
> ACTUAL BEHAVIOR
> ==
> The listNics API response does not return the type of NIC.
> (local) 🐵 > list nics virtualmachineid=a69edaf5-8f21-41ff-8c05-263dc4bd5354 
> count = 1
> nic:
> id = 211e0d46-6b94-4425-99f7-e8e9efea2472
> deviceid = 0
> gateway = 10.1.1.1
> ipaddress = 10.1.1.45
> isdefault = True
> macaddress = 02:00:06:f6:00:01
> netmask = 255.255.255.0
> networkid = c08fddf1-fd77-4810-a062-ea9d03c5c7e6
> virtualmachineid = a69edaf5-8f21-41ff-8c05-263dc4bd5354
>  
>  


