[jira] [Commented] (CLOUDSTACK-9457) Allow retrieval and modification of VM and template details via API and UI

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685942#comment-15685942
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9457:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1767
  
@koushik-das No, I don't. I tend to agree with you, but obviously the 
implementation will be messier since the configs are random. @nvazquez Do you think 
it can be changed so that modification is done via the details tag of the 
updateVirtualMachine and updateTemplate APIs?
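
For reference, a rough sketch of what such a change via the existing update API could look like, assuming the unauthenticated integration API port (global setting integration.api.port, commonly 8096) is enabled and that updateVirtualMachine accepts a details map in the details[0].<key>=<value> form; the endpoint and the rootDiskController key are illustrations only:

{code:python}
# Hedged sketch: set a single VM detail through the existing updateVirtualMachine API.
# Assumes integration.api.port (e.g. 8096) is enabled so no request signing is needed;
# the detail key "rootDiskController" is just an example.
import requests

ENDPOINT = "http://mgmt-server:8096/client/api"   # hypothetical management server

def update_vm_detail(vm_id, key, value):
    params = {
        "command": "updateVirtualMachine",
        "response": "json",
        "id": vm_id,
        # Map parameters are passed in the details[0].<key>=<value> form.
        "details[0].%s" % key: value,
    }
    return requests.get(ENDPOINT, params=params).json()

# Example: switch the root disk controller detail on a stopped VM.
# print(update_vm_detail("VM_UUID", "rootDiskController", "scsi"))
{code}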


> Allow retrieval and modification of VM and template details via API and UI
> --
>
> Key: CLOUDSTACK-9457
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9457
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>Priority: Minor
>
> h2. Introduction
> As suggested on [9379|https://issues.apache.org/jira/browse/CLOUDSTACK-9379], 
> it would be nice to be able to customize vm details through API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685929#comment-15685929
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9539:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1727
  
@koushik-das Good catch, thanks. I think there are 2 options: either don't 
allow changing the service offering for a VM with VM snapshots if the current 
offering is custom, or save the custom offering details to the vm_snapshot details 
and apply them during snapshot reversion. Which option do you think is more 
applicable? I think option 2 is better.


> Support changing Service offering for instance with VM Snapshots
> 
>
> Key: CLOUDSTACK-9539
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9539
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> CloudStack doesn't support changing the service offering for VM instances which 
> have VM snapshots; the snapshots must be removed before changing the service offering.
> h3. Goal
> Extend the current behaviour by supporting changing the service offering for VMs which 
> have VM snapshots. In that case, previously taken snapshots (if reverted) 
> should use the previous service offering, while future snapshots should use the new one.
> h3. Proposed solution:
> 1. Add a {{service_offering_id}} column to the {{vm_snapshots}} table: this way a 
> snapshot can be reverted to its original state even though the service offering can 
> be changed for the VM instance.
> NOTE: Existing vm snapshots are populated by the upgrade script with {{UPDATE 
> vm_snapshots s JOIN vm_instance v ON v.id = s.vm_id SET s.service_offering_id 
> = v.service_offering_id;}}
> 2. New VM snapshots will use the instance's current service offering id as 
> {{service_offering_id}}
> 3. Reverting to a VM snapshot should use the VM snapshot's {{service_offering_id}} 
> value.
> h3. Example use case:
> - Deploy a VM using service offering A
> - Take a VM snapshot -> snap1 (service offering A)
> - Stop the VM
> - Change the VM's service offering to B
> - Revert to VM snapshot snap1
> - Start the VM
> It is expected that the VM has service offering A after the last step (the API sketch below walks through these steps)
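
A minimal sketch of that use case driven through the public API, assuming the unauthenticated integration API port (integration.api.port, e.g. 8096) is enabled; async-job polling (queryAsyncJobResult) is omitted and all ids are placeholders:

{code:python}
# Hedged sketch of the example use case above, issued against the CloudStack HTTP API.
# Assumptions: integration.api.port (e.g. 8096) is enabled, async jobs are awaited out
# of band (queryAsyncJobResult polling omitted), and all ids below are placeholders.
import requests

API = "http://mgmt-server:8096/client/api"

def call(command, **params):
    """Issue one API command and return the parsed JSON response."""
    params.update(command=command, response="json")
    r = requests.get(API, params=params)
    r.raise_for_status()
    return r.json()

# 1. Deploy a VM with service offering A.
deploy = call("deployVirtualMachine", serviceofferingid="OFFERING_A_ID",
              templateid="TEMPLATE_ID", zoneid="ZONE_ID")
vm_id = deploy["deployvirtualmachineresponse"]["id"]

# 2. Take a VM snapshot (snap1) while the VM still uses offering A.
call("createVMSnapshot", virtualmachineid=vm_id)

# 3-4. Stop the VM and change its service offering to B.
call("stopVirtualMachine", id=vm_id)
call("changeServiceForVirtualMachine", id=vm_id, serviceofferingid="OFFERING_B_ID")

# 5. Revert to snap1 (take its id from the listVMSnapshot response).
snapshots = call("listVMSnapshot", virtualmachineid=vm_id)
snap_id = "SNAP1_ID"   # placeholder: extract snap1's id from `snapshots`
call("revertToVMSnapshot", vmsnapshotid=snap_id)

# 6. Start the VM; after the fix, the VM should report service offering A again.
call("startVirtualMachine", id=vm_id)
print(call("listVirtualMachines", id=vm_id))
{code}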



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685881#comment-15685881
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9539:


Github user koushik-das commented on the issue:

https://github.com/apache/cloudstack/pull/1727
  
What about custom offerings? In the case of custom offerings, are the CPU/RAM 
values stored in the VM details table? How will this case be handled?


> Support changing Service offering for instance with VM Snapshots
> 
>
> Key: CLOUDSTACK-9539
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9539
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> CloudStack doesn't support changing the service offering for VM instances which 
> have VM snapshots; the snapshots must be removed before changing the service offering.
> h3. Goal
> Extend the current behaviour by supporting changing the service offering for VMs which 
> have VM snapshots. In that case, previously taken snapshots (if reverted) 
> should use the previous service offering, while future snapshots should use the new one.
> h3. Proposed solution:
> 1. Add a {{service_offering_id}} column to the {{vm_snapshots}} table: this way a 
> snapshot can be reverted to its original state even though the service offering can 
> be changed for the VM instance.
> NOTE: Existing vm snapshots are populated by the upgrade script with {{UPDATE 
> vm_snapshots s JOIN vm_instance v ON v.id = s.vm_id SET s.service_offering_id 
> = v.service_offering_id;}}
> 2. New VM snapshots will use the instance's current service offering id as 
> {{service_offering_id}}
> 3. Reverting to a VM snapshot should use the VM snapshot's {{service_offering_id}} 
> value.
> h3. Example use case:
> - Deploy a VM using service offering A
> - Take a VM snapshot -> snap1 (service offering A)
> - Stop the VM
> - Change the VM's service offering to B
> - Revert to VM snapshot snap1
> - Start the VM
> It is expected that the VM has service offering A after the last step



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9457) Allow retrieval and modification of VM and template details via API and UI

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685878#comment-15685878
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9457:


Github user koushik-das commented on the issue:

https://github.com/apache/cloudstack/pull/1767
  
@serg38 @nvazquez Thanks for the update. As I understand from some of the 
detail parameter examples, these are tied to the lifecycle of the entity 
(create/destroy or start/stop of the VM). If that's the case then it makes sense 
to pass these details along with the corresponding lifecycle APIs rather 
than creating new APIs. Do you see any use case where the details are not tied 
to the entity lifecycle?


> Allow retrieval and modification of VM and template details via API and UI
> --
>
> Key: CLOUDSTACK-9457
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9457
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>Priority: Minor
>
> h2. Introduction
> As suggested on [9379|https://issues.apache.org/jira/browse/CLOUDSTACK-9379], 
> it would be nice to be able to customize vm details through API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9458) Some VMs are being stopped when agent is reconnecting

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685741#comment-15685741
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9458:


Github user koushik-das commented on the issue:

https://github.com/apache/cloudstack/pull/1640
  
@marcaurele That's correct. In the case of shared/remote storage the same disk 
is used to spawn the VM on another host once the VM is successfully fenced. If 
the fencer has successfully fenced off a VM, it is assumed that the original VM 
is correctly stopped. Now if you are saying that the original VM continues to 
run, then that means that the specific fencer has bugs and needs fixing. Note 
that there are different types of fencers available in CloudStack, based on the 
hypervisor type.

@abhinandanprateek In the scenario you mentioned, vmsync won't be able to 
mark the VM as stopped because the ping command is no longer running, as the 
host is in alert/down state.


> Some VMs are being stopped when agent is reconnecting
> -
>
> Key: CLOUDSTACK-9458
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9458
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Marc-Aurèle Brothier
>Assignee: Marc-Aurèle Brothier
>
> If you lose the communication between the management server and one of the 
> agents for a few minutes, even though HA mode is not active the 
> HighAvailabilityManager kicks in and starts to schedule VM restarts. Those 
> tasks are inserted as async jobs in the DB, and if the agent comes back 
> online while the jobs are still in the async table, they are pushed 
> to the agent and shut down the VMs. Then, since HA is not active, the VMs are 
> not restarted.
> The expected behavior in my opinion is that the VMs should not be restarted at 
> all if HA mode is not active on them, and the agent should be left to update 
> the VM state with the power report.
> The bug lies in 
> {{HighAvailabilityManagerImpl.scheduleRestartForVmsOnHost(final HostVO host, 
> boolean investigate)}}; a PR will follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9585) UI doesn't give an option to select the xentools version for non ROOT users

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685723#comment-15685723
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9585:


Github user jayakarteek commented on the issue:

https://github.com/apache/cloudstack/pull/1756
  
@yvsubhash While testing this PR, I faced issues while registering new templates.

LOG:

command=listConfigurations=json=xenserver.pvdriver.version&_=1479725144072
The given command:listConfigurations does not exist or it is not available 
for user with id:7

command=listConfigurations=json=xenserver.pvdriver.version&_=1479725144072


> UI doesn't give an option to select the xentools version for non ROOT users
> ---
>
> Key: CLOUDSTACK-9585
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9585
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.8.0
> Environment: Xen Server
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> UI doesn't give an option to select the xentools version while registering a 
> template for any user other than the ROOT admin. Templates registered by other 
> users are marked as 'xenserver56', which results in unusable VMs due to the 
> device_id:002 issue with Windows if the template has a xentools version 
> higher than 6.1.
> Repro Steps
> Register a template as any user other than the ROOT domain admin; the UI 
> doesn't give an option to select the xentools version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9458) Some VMs are being stopped when agent is reconnecting

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685711#comment-15685711
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9458:


Github user abhinandanprateek commented on the issue:

https://github.com/apache/cloudstack/pull/1640
  
@marcaurele For a host that is found to be down we go ahead and schedule a 
restart for HA-enabled VMs; this is good.

VMs that are not HA-enabled will continue to show as running. This works 
in the scenario where the host eventually comes back. But what if the host is 
gone for a long time, or forever? Then the VMs will continue to show as running, 
and the user will have to guess that he has to stop and then start the VM. Can you 
check whether the VMs will eventually be marked down by VM sync? If that is the 
case, I think this fix should be good.

Another suggestion: in the specific case where a host drops and then comes 
back within a certain interval, can we make the timeout that marks a 
host as down configurable? In your case you could increase it to several hours, 
so HA is not started during that time and the host can still connect back.




> Some VMs are being stopped when agent is reconnecting
> -
>
> Key: CLOUDSTACK-9458
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9458
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Marc-Aurèle Brothier
>Assignee: Marc-Aurèle Brothier
>
> If you lose the communication between the management server and one of the 
> agents for a few minutes, even though HA mode is not active the 
> HighAvailabilityManager kicks in and starts to schedule VM restarts. Those 
> tasks are inserted as async jobs in the DB, and if the agent comes back 
> online while the jobs are still in the async table, they are pushed 
> to the agent and shut down the VMs. Then, since HA is not active, the VMs are 
> not restarted.
> The expected behavior in my opinion is that the VMs should not be restarted at 
> all if HA mode is not active on them, and the agent should be left to update 
> the VM state with the power report.
> The bug lies in 
> {{HighAvailabilityManagerImpl.scheduleRestartForVmsOnHost(final HostVO host, 
> boolean investigate)}}; a PR will follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9570) Bug in listSnapshots for snapshots with deleted data stores

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684663#comment-15684663
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9570:


Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1735
  
@jburwell I added a new Marvin test in `test_snapshots.py`.



> Bug in listSnapshots for snapshots with deleted data stores
> ---
>
> Key: CLOUDSTACK-9570
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9570
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> If there is a snapshot on a data store that has been removed, {{listSnapshots}} still 
> tries to enumerate it and returns an error (in this example data store 2 has been 
> removed):
> {code:xml|title=/client/api?command=listSnapshots=true=true|borderStyle=solid}
> 
> <errorcode>530</errorcode>
> <cserrorcode>4250</cserrorcode>
> <errortext>Unable to locate datastore with id 2</errortext>
> 
> {code}
> h3. Reproduce error
> These steps can be followed to reproduce the issue:
> * Take a snapshot of a volume (this creates references for primary storage 
> and secondary storage in the snapshot_store_ref table)
> * Simulate retiring the primary data storage where the snapshot is cached (in this 
> example X is a fake data store id and Y is the snapshot id):
> {{UPDATE `cloud`.`snapshot_store_ref` SET `store_id`='X', `state`="Destroyed" 
> WHERE `id`='Y';}}
> * List snapshots:
> {{/client/api?command=listSnapshots=true=true}}
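
The last step can also be expressed as a small automated check. This is a hedged sketch assuming the unauthenticated integration API port (integration.api.port, e.g. 8096) is enabled; listall=true stands in for the garbled query string above:

{code:python}
# Hedged sketch: call listSnapshots and fail fast if the "Unable to locate datastore"
# error comes back. Assumes integration.api.port (e.g. 8096) is enabled.
import requests

API = "http://mgmt-server:8096/client/api"

resp = requests.get(API, params={"command": "listSnapshots",
                                 "listall": "true",
                                 "response": "json"}).json()
body = resp.get("listsnapshotsresponse", {})
assert "errortext" not in body, body.get("errortext")
print("listSnapshots returned %s snapshot(s)" % body.get("count", 0))
{code}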



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9593) User data check is inconsistent with python

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684628#comment-15684628
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9593:


Github user marcaurele commented on the issue:

https://github.com/apache/cloudstack/pull/1760
  
@rhtyd The SQL functions I'm using to fix the current user data in the database 
are not present in MySQL 5.5, only in 5.6 (TO_BASE64, FROM_BASE64). I have 
to find a workaround, either in SQL or in Java, to fix the existing data.


> User data check is inconsistent with python
> ---
>
> Key: CLOUDSTACK-9593
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9593
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.4.2, 4.4.3, 4.3.2, 4.5.1, 4.4.4, 4.5.2, 4.6.0, 4.6.1, 
> 4.6.2, 4.7.0, 4.7.1, 4.8.0, 4.9.0
>Reporter: Marc-Aurèle Brothier
>Assignee: Marc-Aurèle Brothier
>
> The user data is validated through the Apache commons codec library, but this 
> library does not check that the length is a multiple of 4 characters. The RFC 
> does not require it either. But the Python script in the virtual router that 
> loads the user data does check for the padding, requiring the string length 
> to be a multiple of 4 characters.
> {code:python}
> >>> import base64
> >>> base64.b64decode('foo')
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File 
> "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/base64.py",
>  line 78, in b64decode
> raise TypeError(msg)
> TypeError: Incorrect padding
> >>> base64.b64decode('foo=')
> '~\x8a'
> {code}
> Currently, since the Java check is less restrictive, the user data gets saved 
> into the database, but the VR script crashes when it receives this VM user 
> data. For a single VM it is not really a problem. The critical issue is when a 
> VR is restarted: the base64 string that is invalid for Python makes the vmdata.py 
> script crash, resulting in a VR that does not start at all.
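
A small Python illustration of the mismatch, and of the stricter check the Java side would need to mirror; the helper name below is made up for this sketch:

{code:python}
# Hedged sketch: the padding check that commons-codec skips but Python's base64 enforces.
# is_python_safe_userdata() is a made-up helper name, for illustration only.
import base64
import binascii

def is_python_safe_userdata(data):
    # Python's base64.b64decode() rejects input whose length is not a multiple of 4,
    # so a stricter validator has to enforce the same constraint before persisting.
    if len(data) % 4 != 0:
        return False
    try:
        base64.b64decode(data)
        return True
    except (TypeError, binascii.Error):
        return False

print(is_python_safe_userdata("foo"))    # False: 3 chars, would crash vmdata.py
print(is_python_safe_userdata("foo="))   # True: padded, decodes to '~\x8a'
{code}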



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9491) Vmware resource: incorrect parsing of device list to find ethener index of plugged nic

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684609#comment-15684609
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9491:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1681
  
Trillian test result (tid-378)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
Total time taken: 35100 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1681-t378-vmware-55u3.zip
Test completed. 45 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 881.18 | test_privategw_acl.py
test_01_vpc_site2site_vpn | `Error` | 491.85 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | `Error` | 692.86 | test_vpc_vpn.py
test_03_restart_network_cleanup | `Error` | 141.19 | test_routers.py
test_03_vpc_privategw_restart_vpc_cleanup | `Error` | 679.12 | test_privategw_acl.py
test_03_vpc_privategw_restart_vpc_cleanup | `Error` | 729.79 | test_privategw_acl.py
test_01_vpc_remote_access_vpn | Success | 166.72 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 380.57 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 712.43 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 692.84 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1665.06 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 720.00 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 735.66 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1377.56 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 25.86 | test_volumes.py
test_06_download_detached_volume | Success | 85.81 | test_volumes.py
test_05_detach_volume | Success | 105.23 | test_volumes.py
test_04_delete_attached_volume | Success | 10.19 | test_volumes.py
test_03_download_attached_volume | Success | 20.38 | test_volumes.py
test_02_attach_volume | Success | 48.73 | test_volumes.py
test_01_create_volume | Success | 477.51 | test_volumes.py
test_03_delete_vm_snapshots | Success | 280.24 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 242.30 | test_vm_snapshots.py
test_01_test_vm_volume_snapshot | Success | 337.50 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 159.40 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 228.68 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 27.19 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.17 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 71.37 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.11 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.15 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.14 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.24 | test_vm_life_cycle.py
test_01_stop_vm | Success | 5.12 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 261.64 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 20.59 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.13 | test_templates.py
test_01_create_template | Success | 125.87 | test_templates.py
test_10_destroy_cpvm | Success | 236.74 | test_ssvm.py
test_09_destroy_ssvm | Success | 268.77 | test_ssvm.py
test_08_reboot_cpvm | Success | 156.56 | test_ssvm.py
test_07_reboot_ssvm | Success | 158.70 | test_ssvm.py
test_06_stop_cpvm | Success | 176.78 | test_ssvm.py
test_05_stop_ssvm | Success | 213.78 | test_ssvm.py
test_04_cpvm_internals | Success | 1.29 | test_ssvm.py
test_03_ssvm_internals | Success | 3.96 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.14 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 31.23 | test_snapshots.py
test_04_change_offering_small | Success | 91.85 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.05 | test_service_offerings.py
test_01_create_service_offering | Success | 0.15 | test_service_offerings.py
test_02_sys_template_ready | Success | 

[jira] [Commented] (CLOUDSTACK-9402) Nuage VSP Plugin : Support for underlay features (Source & Static NAT to underlay) including Marvin test coverage on master

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684332#comment-15684332
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9402:


Github user prashanthvarma commented on the issue:

https://github.com/apache/cloudstack/pull/1580
  
LGTM - based on our Internal regression testing and code review on the 
latest code in this PR.

@rhtyd @jburwell


> Nuage VSP Plugin : Support for underlay features (Source & Static NAT to 
> underlay) including Marvin test coverage on master
> ---
>
> Key: CLOUDSTACK-9402
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9402
> Project: CloudStack
>  Issue Type: Task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation, Network Controller
>Affects Versions: 4.10.0.0
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
>
> Support for underlay features (Source & Static NAT to underlay) with Nuage 
> VSP SDN Plugin including Marvin test coverage for corresponding Source & 
> Static NAT features on master. Moreover, our Marvin tests are written in such 
> a way that they can validate our supported feature set with both Nuage VSP 
> SDN platform's overlay and underlay infra.
> PR contents:
> 1) Support for Source NAT to underlay feature on master with Nuage VSP SDN 
> Plugin.
> 2) Support for Static NAT to underlay feature on master with Nuage VSP SDN 
> Plugin.
> 3) Marvin test coverage for Source & Static NAT to underlay on master with 
> Nuage VSP SDN Plugin.
> 4) Enhancements on our existing Marvin test code (nuagevsp plugins directory).
> 5) PEP8 & PyFlakes compliance with our Marvin test code.
> Our Marvin test code PEP8 & PyFlakes compliance:
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
> CloudStack$
> CloudStack$ pyflakes test/integration/plugins/nuagevsp/*.py
> CloudStack$
> Validations:
> 1) Underlay infra (Source & Static NAT to underlay)
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> plugins/nuagevsp/test_nuage_source_nat.py
> Test results:
> Test Nuage VSP Isolated networks with different combinations of Source NAT 
> service providers ... === TestName: test_01_nuage_SourceNAT_isolated_networks 
> | Status : SUCCESS ===
> ok
> Test Nuage VSP VPC networks with different combinations of Source NAT service 
> providers ... === TestName: test_02_nuage_SourceNAT_vpc_networks | Status : 
> SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality for Isolated network by performing 
> (wget) traffic tests to the ... === TestName: 
> test_03_nuage_SourceNAT_isolated_network_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality for VPC network by performing (wget) 
> traffic tests to the Internet ... === TestName: 
> test_04_nuage_SourceNAT_vpc_network_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with different Egress 
> Firewall/Network ACL rules by performing (wget) ... === TestName: 
> test_05_nuage_SourceNAT_acl_rules_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with VM NIC operations by performing 
> (wget) traffic tests to the ... === TestName: 
> test_06_nuage_SourceNAT_vm_nic_operations_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with VM migration by performing 
> (wget) traffic tests to the Internet ... === TestName: 
> test_07_nuage_SourceNAT_vm_migration_traffic | Status : SUCCESS ===
> ok
> Test Nuage VSP Source NAT functionality with network restarts by performing 
> (wget) traffic tests to the ... === TestName: 
> test_08_nuage_SourceNAT_network_restarts_traffic | Status : SUCCESS ===
> ok
> --
> Ran 8 tests in 13360.858s
> OK
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> plugins/nuagevsp/test_nuage_static_nat.py
> Test results:
> Test Nuage VSP Public IP Range creation and deletion ... === TestName: 
> test_01_nuage_StaticNAT_public_ip_range | Status : SUCCESS ===
> ok
> Test Nuage VSP Nuage Underlay (underlay networking) enabled Public IP Range 
> creation and deletion ... === TestName: 
> test_02_nuage_StaticNAT_underlay_public_ip_range | Status : SUCCESS ===
> ok
> Test Nuage VSP Isolated networks with different combinations of Static NAT 
> service providers ... === TestName: test_03_nuage_StaticNAT_isolated_networks 
> | Status : SUCCESS ===
> ok
> Test Nuage VSP VPC networks with different combinations of Static NAT service 
> providers ... === TestName: test_04_nuage_StaticNAT_vpc_networks | Status : 
> SUCCESS ===
> ok
> Test Nuage VSP Static NAT 

[jira] [Commented] (CLOUDSTACK-9321) Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's hapro

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684307#comment-15684307
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9321:


Github user prashanthvarma commented on the issue:

https://github.com/apache/cloudstack/pull/1577
  
LGTM - based on our Internal regression testing and code review on the 
latest code in this PR.

@rhtyd @jburwell 


> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file
> --
>
> Key: CLOUDSTACK-9321
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9321
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Network Controller
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
>Priority: Critical
> Fix For: 4.9.1.0
>
>
> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file. Moreover, each time a new Internal LB rule is 
> added to the corresponding InternalLbVm instance, it replaces the existing 
> one. Thus, traffic corresponding to these un-resolved (old) Internal LB rules 
> are getting dropped by the InternalLbVm instance.
> PR contents:
> 1) Fix for this bug.
> 2) Marvin test coverage for Internal LB feature on master with native ACS 
> setup (component directory) including validations for this bug fix.
> 3) Enhancements on our existing Internal LB Marvin test code (nuagevsp plugins 
> directory) to validate this bug fix.
> 4) PEP8 & PyFlakes compliance with the added Marvin test code.
> Added Marvin test code PEP8 & PyFlakes compliance:
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pyflakes 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
> CloudStack$
> CloudStack$ pyflakes test/integration/plugins/nuagevsp/*.py
> CloudStack$
> Validations:
> 1) Made sure that we didn't break any Public LB (VpcVirtualRouter) 
> functionality.
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> test/integration/component/test_vpc_network_lbrules.py
> Test results:
> Test case no 210 and 227: List Load Balancing Rules belonging to a VPC ... 
> === TestName: test_01_VPC_LBRulesListing | Status : SUCCESS ===
> ok
> Test Create LB rules for 1 network which is part of a two/multiple virtual 
> networks of a ... === TestName: test_02_VPC_CreateLBRuleInMultipleNetworks | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_03_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_04_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 214 : Delete few(not all) LB rules for a single virtual network 
> of a ... === TestName: test_05_VPC_CreateAndDeleteLBRule | Status : SUCCESS 
> ===
> ok
> Test Delete few(not all) LB rules for a single virtual network of ... === 
> TestName: test_06_VPC_CreateAndDeleteLBRuleVRStopppedState | Status : SUCCESS 
> ===
> ok
> Test Delete all LB rules for a single virtual network of a ... === TestName: 
> test_07_VPC_CreateAndDeleteAllLBRule | Status : SUCCESS ===
> ok
> Test Delete all LB rules for a single virtual network of a ... === TestName: 
> test_08_VPC_CreateAndDeleteAllLBRuleVRStoppedState | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule for a VM that belongs to 
> a different VPC. ... === TestName: test_09_VPC_LBRuleCreateFailMultipleVPC | 
> Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule for a VM that does not 
> belong to any VPC. ... === TestName: 
> test_10_VPC_FailedToCreateLBRuleNonVPCNetwork | Status : SUCCESS ===
> ok
> Test case no 217 and 236: User should not be allowed to create a LB rule for 
> a ... === TestName: test_11_VPC_LBRuleCreateNotAllowed | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule on an Ipaddress that 
> Source Nat enabled. ... === TestName: test_12_VPC_LBRuleCreateFailForRouterIP 
> | Status : SUCCESS ===
> ok
> Test User should not be allowed to create a LB rule on an 

[jira] [Commented] (CLOUDSTACK-9321) Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's hapro

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684304#comment-15684304
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9321:


Github user prashanthvarma commented on the issue:

https://github.com/apache/cloudstack/pull/1577
  
@rhtyd @jburwell I have briefly investigated the above failed tests 
"test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL" and 
"test_oobm_enabledisable_across_clusterzones", here are my findings:

1) "test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL " fails during 
establishment of SSH connection to the VM via its public IP (SSH connection 
failed to setup, timeout error). Moreover, this test passed in the previous 
blueorangutan run (refer my previous comment on investigations of the previous 
blueorangutan run).

2) "test_oobm_enabledisable_across_clusterzones" failure looks like a test 
environment (i.e. host hardware issue). Moreover, this test passed in the 
previous blueorangutan run (refer my previous comment on investigations of the 
previous blueorangutan run).

Here is the error from the logs:
errorcode : 530, errortext : u'Out-of-band Management action (STATUS) on 
host (d2049cf2-47ba-4a81-8e72-d01c1dc77dd0) failed with error: > Error: no 
response from RAKP 1 message\nError: Unable to establish IPMI v2 / RMCP+ 
session\nRunning Get PICMG Properties my_addr 0x20, transit 0, target 0x20\nNo 
Response from Get PICMG Properties\nNo PICMG Extenstion discovered\nUnable to 
get Chassis Power Status\n'}

IMHO, the above failing tests are most likely due to test environment 
issues and have nothing to do with the code changes in this PR.


> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file
> --
>
> Key: CLOUDSTACK-9321
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9321
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Network Controller
>Reporter: Mani Prashanth Varma Manthena
>Assignee: Nick Livens
>Priority: Critical
> Fix For: 4.9.1.0
>
>
> Multiple Internal LB rules (more than one Internal LB rule with same source 
> IP address) are not getting resolved in the corresponding InternalLbVm 
> instance's haproxy.cfg file. Moreover, each time a new Internal LB rule is 
> added to the corresponding InternalLbVm instance, it replaces the existing 
> one. Thus, traffic corresponding to these un-resolved (old) Internal LB rules 
> are getting dropped by the InternalLbVm instance.
> PR contents:
> 1) Fix for this bug.
> 2) Marvin test coverage for Internal LB feature on master with native ACS 
> setup (component directory) including validations for this bug fix.
> 3) Enhancements on our existing Internal LB Marvin test code (nuagevsp plugins 
> directory) to validate this bug fix.
> 4) PEP8 & PyFlakes compliance with the added Marvin test code.
> Added Marvin test code PEP8 & PyFlakes compliance:
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pyflakes 
> test/integration/component/test_vpc_network_internal_lbrules.py
> CloudStack$
> CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
> CloudStack$
> CloudStack$ pyflakes test/integration/plugins/nuagevsp/*.py
> CloudStack$
> Validations:
> 1) Made sure that we didn't break any Public LB (VpcVirtualRouter) 
> functionality.
> Marvin test run:
> nosetests --with-marvin --marvin-config=nuage.cfg 
> test/integration/component/test_vpc_network_lbrules.py
> Test results:
> Test case no 210 and 227: List Load Balancing Rules belonging to a VPC ... 
> === TestName: test_01_VPC_LBRulesListing | Status : SUCCESS ===
> ok
> Test Create LB rules for 1 network which is part of a two/multiple virtual 
> networks of a ... === TestName: test_02_VPC_CreateLBRuleInMultipleNetworks | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_03_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 222 : Create LB rules for a two/multiple virtual networks of a 
> ... === TestName: test_04_VPC_CreateLBRuleInMultipleNetworksVRStoppedState | 
> Status : SUCCESS ===
> ok
> Test case no 214 : Delete few(not all) LB rules for a single virtual network 
> of a ... === TestName: test_05_VPC_CreateAndDeleteLBRule | Status : 

[jira] [Commented] (CLOUDSTACK-9489) When upgrading, Config.java new configuration are not updated.

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684217#comment-15684217
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9489:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1684
  
Trillian test result (tid-375)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 26478 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1684-t375-kvm-centos7.zip
Test completed. 46 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 389.90 | test_vpc_redundant.py
test_02_vpc_privategw_static_routes | `Failure` | 199.18 | test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 168.16 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 66.16 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 386.11 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 283.79 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 570.99 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 512.61 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1447.91 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 561.81 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 771.72 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.48 | test_volumes.py
test_08_resize_volume | Success | 15.44 | test_volumes.py
test_07_resize_fail | Success | 20.56 | test_volumes.py
test_06_download_detached_volume | Success | 15.32 | test_volumes.py
test_05_detach_volume | Success | 100.28 | test_volumes.py
test_04_delete_attached_volume | Success | 10.21 | test_volumes.py
test_03_download_attached_volume | Success | 15.40 | test_volumes.py
test_02_attach_volume | Success | 45.16 | test_volumes.py
test_01_create_volume | Success | 714.83 | test_volumes.py
test_deploy_vm_multiple | Success | 303.97 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 27.01 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.26 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 41.48 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.91 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.87 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.19 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.36 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 85.72 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.23 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.18 | test_templates.py
test_01_create_template | Success | 35.43 | test_templates.py
test_10_destroy_cpvm | Success | 136.69 | test_ssvm.py
test_09_destroy_ssvm | Success | 168.72 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.59 | test_ssvm.py
test_07_reboot_ssvm | Success | 134.60 | test_ssvm.py
test_06_stop_cpvm | Success | 162.03 | test_ssvm.py
test_05_stop_ssvm | Success | 134.04 | test_ssvm.py
test_04_cpvm_internals | Success | 1.20 | test_ssvm.py
test_03_ssvm_internals | Success | 5.29 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.13 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 21.46 | test_snapshots.py
test_04_change_offering_small | Success | 239.71 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.13 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.20 | test_secondary_storage.py
test_09_reboot_router | Success | 35.30 | test_routers.py
test_08_start_router | Success | 30.30 | test_routers.py
test_07_stop_router | Success | 10.17 | test_routers.py
test_06_router_advanced | Success | 0.07 | test_routers.py
test_05_router_basic | Success | 0.04 | test_routers.py

[jira] [Commented] (CLOUDSTACK-9503) The router script times out resulting in failure of deployment

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684194#comment-15684194
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9503:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1745
  
Trillian test result (tid-376)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 25923 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1745-t376-kvm-centos7.zip
Test completed. 43 look ok, 0 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_vpc_site2site_vpn | Success | 260.69 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 126.83 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 258.09 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 331.81 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 534.70 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 526.50 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1449.74 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 573.28 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 763.84 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1289.88 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.59 | test_volumes.py
test_08_resize_volume | Success | 15.37 | test_volumes.py
test_07_resize_fail | Success | 20.56 | test_volumes.py
test_06_download_detached_volume | Success | 15.39 | test_volumes.py
test_05_detach_volume | Success | 100.31 | test_volumes.py
test_04_delete_attached_volume | Success | 10.35 | test_volumes.py
test_03_download_attached_volume | Success | 15.40 | test_volumes.py
test_02_attach_volume | Success | 45.38 | test_volumes.py
test_01_create_volume | Success | 720.17 | test_volumes.py
test_deploy_vm_multiple | Success | 295.56 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.05 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.04 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 27.45 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.24 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 41.00 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.14 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.99 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 126.29 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.22 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.43 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 80.79 | test_templates.py
test_08_list_system_templates | Success | 0.07 | test_templates.py
test_07_list_public_templates | Success | 0.05 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.17 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.16 | test_templates.py
test_01_create_template | Success | 60.80 | test_templates.py
test_10_destroy_cpvm | Success | 191.91 | test_ssvm.py
test_09_destroy_ssvm | Success | 164.28 | test_ssvm.py
test_08_reboot_cpvm | Success | 102.16 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.66 | test_ssvm.py
test_06_stop_cpvm | Success | 131.78 | test_ssvm.py
test_05_stop_ssvm | Success | 133.69 | test_ssvm.py
test_04_cpvm_internals | Success | 1.22 | test_ssvm.py
test_03_ssvm_internals | Success | 3.37 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.11 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.12 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.54 | test_snapshots.py
test_04_change_offering_small | Success | 209.95 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.05 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.20 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.46 | test_secondary_storage.py
test_09_reboot_router | Success | 35.39 | test_routers.py
test_08_start_router | Success | 30.37 | test_routers.py
test_07_stop_router | Success | 10.16 | test_routers.py
test_06_router_advanced | Success | 0.05 | test_routers.py
test_05_router_basic | Success | 0.04 | test_routers.py
test_04_restart_network_wo_cleanup | Success | 5.76 | test_routers.py
test_03_restart_network_cleanup 

[jira] [Commented] (CLOUDSTACK-9457) Allow retrieval and modification of VM and template details via API and UI

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684142#comment-15684142
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9457:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1767
  
@koushik-das @ustcweizhou Just want to add on the use case. There is a 
growing need to change template/VM details after deployment, e.g. to 
switch the root disk controller, change the cores/socket ratio with 
cpuid.coresPerSocket, add vGPU startup parameters, or switch the boot 
firmware from BIOS to EFI as required by some OSes. Currently users are unable to 
easily do this with the API and/or UI. As implemented now, all settings with 
display=1 in user_vm_details and vm_template_details will be returned by the API 
and allowed to be changed. If desired we can open another PR to switch details 
created automatically by other APIs to display=0, so they will not 
show up in the response of these APIs and users won't be able to edit/delete 
them. We thought this to be out of scope for now.


> Allow retrieval and modification of VM and template details via API and UI
> --
>
> Key: CLOUDSTACK-9457
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9457
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>Priority: Minor
>
> h2. Introduction
> As suggested on [9379|https://issues.apache.org/jira/browse/CLOUDSTACK-9379], 
> it would be nice to be able to customize vm details through API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9280) System VM volumes cannot be deleted when there are no system VMs

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684055#comment-15684055
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9280:


Github user ProjectMoon commented on the issue:

https://github.com/apache/cloudstack/pull/1559
  
Addressed the comments. For some reason the tests for VolumeDataFactoryImpl 
were being skipped. Will try to see why.


> System VM volumes cannot be deleted when there are no system VMs
> 
>
> Key: CLOUDSTACK-9280
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9280
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0, 4.7.0
>Reporter: Jeff Hair
>
> Scenario: When deleting a zone, everything under it must be removed. This 
> results in the system VMs being destroyed as there are no more hosts running.
> The storage cleanup thread properly detects that there are volumes to be 
> deleted, but it cannot delete them because the endpoint selection fails with 
> "No remote endpoint to send DeleteCommand, check if host or ssvm is down?"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9321) Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's hapro

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683840#comment-15683840
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9321:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1577
  
Trillian test result (tid-373)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 25521 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1577-t373-kvm-centos7.zip
Test completed. 47 look ok, 1 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 369.69 | test_vpc_redundant.py
test_oobm_enabledisable_across_clusterzones | `Error` | 52.36 | test_outofbandmanagement.py
test_01_vpc_site2site_vpn | Success | 225.71 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 71.25 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 240.59 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 283.66 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 530.76 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 497.05 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1322.71 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 590.02 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 750.51 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.51 | test_volumes.py
test_08_resize_volume | Success | 15.42 | test_volumes.py
test_07_resize_fail | Success | 20.47 | test_volumes.py
test_06_download_detached_volume | Success | 15.29 | test_volumes.py
test_05_detach_volume | Success | 100.30 | test_volumes.py
test_04_delete_attached_volume | Success | 10.24 | test_volumes.py
test_03_download_attached_volume | Success | 15.32 | test_volumes.py
test_02_attach_volume | Success | 45.73 | test_volumes.py
test_01_create_volume | Success | 681.45 | test_volumes.py
test_deploy_vm_multiple | Success | 268.87 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.04 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 27.16 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.25 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.98 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.11 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 126.41 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.87 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.17 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.33 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 126.09 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.18 | test_templates.py
test_03_delete_template | Success | 5.14 | test_templates.py
test_02_edit_template | Success | 90.14 | test_templates.py
test_01_create_template | Success | 40.47 | test_templates.py
test_10_destroy_cpvm | Success | 161.57 | test_ssvm.py
test_09_destroy_ssvm | Success | 168.54 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.75 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.40 | test_ssvm.py
test_06_stop_cpvm | Success | 137.04 | test_ssvm.py
test_05_stop_ssvm | Success | 133.59 | test_ssvm.py
test_04_cpvm_internals | Success | 1.34 | test_ssvm.py
test_03_ssvm_internals | Success | 4.20 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.13 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_01_snapshot_root_disk | Success | 16.28 | test_snapshots.py
test_04_change_offering_small | Success | 243.19 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.14 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.14 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.20 | test_secondary_storage.py
test_09_reboot_router | Success | 40.37 | test_routers.py
test_08_start_router | Success | 30.30 | test_routers.py
test_07_stop_router | Success | 10.17 | test_routers.py
test_06_router_advanced | Success | 0.06 | test_routers.py
test_05_router_basic | Success | 0.04 | test_routers.py

[jira] [Commented] (CLOUDSTACK-9457) Allow retrieval and modification of VM and template details via API and UI

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683805#comment-15683805
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9457:


Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1767
  
Hi @koushik-das @ustcweizhou,
For these new methods, as they require the entity id (VM or template), I added 
a basic validation to check whether the entity is found in the DB, but I realize that 
I should also validate that the entity is not destroyed. This data is to be stored in 
the *details tables.
About the existing API methods: I checked the `updateTemplate` and 
`updateVirtualMachine` methods before adding these new methods, and they 
provide a way to add/update details (although updateTemplate overrides existing 
details). Also, details can be listed on `listTemplates` and 
`listVirtualMachines`, but I thought the best approach was introducing these new 
methods to reduce overhead and treat entity details separately instead of 
updating/listing the whole entity every time. What do you think of this approach? 


> Allow retrieval and modification of VM and template details via API and UI
> --
>
> Key: CLOUDSTACK-9457
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9457
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>Priority: Minor
>
> h2. Introduction
> As suggested on [9379|https://issues.apache.org/jira/browse/CLOUDSTACK-9379], 
> it would be nice to be able to customize vm details through API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9458) Some VMs are being stopped when agent is reconnecting

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683739#comment-15683739
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9458:


Github user marcaurele commented on the issue:

https://github.com/apache/cloudstack/pull/1640
  
To get back to your previous comment @koushik-das on the broken scenario: 
what happens if the host is not reachable and the VMs are using remote 
storage? With the fencing operation marking the VM as stopped, does it mean 
that the same remote disk volume is used if the VM is spawned on another host 
(while the other one is still running on the first host)?

@abhinandanprateek if the reason to fence off the VM is to clean up 
resources, IMO this should be the job of the VM sync, on the ping 
command/startup command. In case a host is lost, the capacity of the cluster 
should reflect the loss of that host, and the capacity statistics should be 
calculated only from hosts that are Up. When a host comes back (possibly 
with some VMs still running), the startup command should sync the VM states and 
the capacity of the cluster/zone should be updated. 
In short, cleaning up resources that are not "reachable" anymore should not 
be needed, and such resources should not be taken into account when calculating 
the actual capacity of the cluster/zone.
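
A minimal illustration of the accounting suggested above (a sketch only, not 
CapacityManager code, assuming host state is available): lost hosts simply stop 
contributing to usable capacity, rather than having their VMs fenced.

    import java.util.List;

    class CapacitySketch {
        enum HostStatus { Up, Disconnected, Down }

        static class Host {
            final HostStatus status;
            final long totalCpuMhz;
            Host(HostStatus status, long totalCpuMhz) {
                this.status = status;
                this.totalCpuMhz = totalCpuMhz;
            }
        }

        // Only hosts that are Up count towards the cluster's usable CPU capacity.
        static long usableCpuMhz(List<Host> hosts) {
            long total = 0;
            for (Host h : hosts) {
                if (h.status == HostStatus.Up) {
                    total += h.totalCpuMhz;
                }
            }
            return total;
        }
    }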


> Some VMs are being stopped when agent is reconnecting
> -
>
> Key: CLOUDSTACK-9458
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9458
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Marc-Aurèle Brothier
>Assignee: Marc-Aurèle Brothier
>
> If you lose communication between the management server and one of the 
> agents for a few minutes, the HighAvailabilityManager kicks in and starts to 
> schedule VM restarts even though HA mode is not active. Those tasks are 
> inserted as async jobs in the DB, and if the agent comes back online while the 
> jobs are still in the async table, they are pushed to the agent and shut down 
> the VMs. Then, since HA is not active, the VMs are not restarted.
> The expected behavior in my opinion is that the VMs should not be restarted at 
> all if HA mode is not active on them, and the agent should be left to update 
> the VM state with the power report.
> The bug lies in 
> {{HighAvailabilityManagerImpl.scheduleRestartForVmsOnHost(final HostVO host, 
> boolean investigate)}}; PR will follow.
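
A rough sketch of the behaviour argued for in this report (illustrative only, 
not the actual HighAvailabilityManagerImpl code): when a host merely loses its 
connection, only HA-enabled VMs should ever be queued for restart; the rest are 
left for the power report to reconcile.

    import java.util.List;

    class HaSchedulingSketch {
        interface Vm {
            boolean isHaEnabled();
            String name();
        }

        static void scheduleRestartForVmsOnHost(List<Vm> vmsOnHost) {
            for (Vm vm : vmsOnHost) {
                if (!vm.isHaEnabled()) {
                    // skip: let VM power-state sync handle it when the agent reconnects
                    continue;
                }
                System.out.println("queueing HA restart work item for " + vm.name());
            }
        }
    }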



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8781) Superfluous field during VPC creation

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683768#comment-15683768
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8781:


Github user prashanthvarma commented on the issue:

https://github.com/apache/cloudstack/pull/756
  
@rhtyd Here are the screen-shots without and with this UI bug fix:


![6a4ded44-adb4-4821-bceb-3eb8c44c79f8](https://cloud.githubusercontent.com/assets/3722369/20486648/d037587a-b000-11e6-9cda-f82e52747626.jpg)


![rsz_screenshot_from_2016-11-21_15-15-52](https://cloud.githubusercontent.com/assets/3722369/20486654/d65af0ae-b000-11e6-9201-d2df4e8bcd7f.jpg)

As pointed out by @nlivens in the previous comment, "The actual Public Load 
Balancer provider is derived from the VPC offering, and not from this extra 
field "Public Load Balancer Provider". So, this field has no use at all since 
it's completely ignored."

A cleaner implementation for this UI tab would be to list only the "Public 
Load Balancer Providers" mentioned in the selected VPC offering, to hide the 
tab entirely when the selected VPC offering mentions no "Public Load Balancer 
Providers", and finally to make use of the selected "Public Load Balancer 
Provider" in the actual implementation of the VPC.


> Superfluous field during VPC creation
> -
>
> Key: CLOUDSTACK-8781
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8781
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.6.0
>Reporter: Nick Livens
>Assignee: Nick Livens
>Priority: Trivial
> Fix For: Future
>
> Attachments: addVpc.png
>
>
> When creating a VPC, there is a superfluous field "Public Load Balancer 
> Provider" which is ignored, since the LB Provider is specified in the 
> VPC offering. This might confuse users into thinking they can use a different 
> LB provider than the one specified in the VPC offering.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9379) Support nested virtualization at VM level on VMware Hypervisor

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683719#comment-15683719
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9379:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1542
  
LGTM 


> Support nested virtualization at VM level on VMware Hypervisor
> --
>
> Key: CLOUDSTACK-9379
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9379
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.9.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> h2. Introduction
> It is desired to support nested virtualization at the VM level for the VMware 
> hypervisor. Current behaviour supports enabling/disabling nested virtualization 
> globally by modifying the global config {{'vmware.nested.virtualization'}}. 
> The goal is to improve this feature by providing control at the VM level 
> instead of only a global control.
> h2. Proposal
> A new global configuration is added to enable/disable VM-level nested 
> virtualization control: {{'vmware.nested.virtualization.perVM'}}. Default 
> value=false
> h2. Behaviour
> After a VM deployment or start command, the VM params include the 
> {{nestedVirtualizationFlag}} key and its value is:
> * true -> nested virtualization enabled
> * false -> nested virtualization disabled
> Whether nested virtualization is enabled or disabled is determined by examining:
> * (1) global configuration {{'vmware.nested.virtualization'}} value
> * (2) global configuration {{'vmware.nested.virtualization.perVM'}} value
> * (3) {{'nestedVirtualizationFlag'}} value in {{user_vm_details}} if present, 
> null if not.
> Using these 3 values, there are different use cases (see the sketch after this list):
> # (1) = TRUE, (2) = TRUE, (3) is null -> ENABLED
> # (1) = TRUE, (2) = TRUE, (3) = TRUE -> ENABLED
> # (1) = TRUE, (2) = TRUE, (3) = FALSE -> DISABLED
> # (1) = TRUE, (2) = FALSE -> ENABLED
> # (1) = FALSE, (2) = TRUE, (3) is null -> DISABLED
> # (1) = FALSE, (2) = TRUE, (3) = TRUE -> ENABLED
> # (1) = FALSE, (2) = TRUE, (3) = FALSE -> DISABLED
> # (1) = FALSE, (2) = FALSE -> DISABLED
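
The eight cases above reduce to a small rule; here is a sketch of that rule 
(not the actual VMware guru code) under the mapping global = (1), 
perVmControl = (2), vmDetailFlag = (3), where (3) may be null:

    class NestedVirtSketch {
        static boolean nestedVirtualizationEnabled(boolean global, boolean perVmControl, Boolean vmDetailFlag) {
            if (perVmControl && vmDetailFlag != null) {
                return vmDetailFlag;   // cases 2, 3, 6, 7: the per-VM detail wins
            }
            return global;             // cases 1, 4, 5, 8: fall back to the global flag
        }
    }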



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9280) System VM volumes cannot be deleted when there are no system VMs

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683717#comment-15683717
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9280:


Github user ProjectMoon commented on the issue:

https://github.com/apache/cloudstack/pull/1559
  
Updated to latest 4.8. Will also address the comments.


> System VM volumes cannot be deleted when there are no system VMs
> 
>
> Key: CLOUDSTACK-9280
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9280
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0, 4.7.0
>Reporter: Jeff Hair
>
> Scenario: When deleting a zone, everything under it must be removed. This 
> results in the system VMs being destroyed as there are no more hosts running.
> The storage cleanup thread properly detects that there are volumes to be 
> deleted, but it cannot delete them because the endpoint selection fails with 
> "No remote endpoint to send DeleteCommand, check if host or ssvm is down?"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9317) Disabling static NAT on many IPs can leave wrong IPs on the router

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683606#comment-15683606
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9317:


Github user ProjectMoon commented on the issue:

https://github.com/apache/cloudstack/pull/1623
  
Updated to latest 4.8.


> Disabling static NAT on many IPs can leave wrong IPs on the router
> --
>
> Key: CLOUDSTACK-9317
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9317
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Virtual Router
>Affects Versions: 4.7.0, 4.7.1, 4.7.2
>Reporter: Jeff Hair
>
> The current behavior of enabling or disabling static NAT will call the apply 
> IP associations method in the management server. The method is not 
> thread-safe. If it's called from multiple threads, each thread will load up 
> the list of public IPs in different states (add or revoke)--correct for the 
> thread, but not correct overall. Depending on execution order on the virtual 
> router, the router can end up with public IPs assigned to it that are not 
> supposed to be on it anymore. When another account acquires the same IP, this 
> of course leads to network problems.
> The problem has been in CS since at least 4.2, and likely affects all 
> recently released versions. Affected version is set to 4.7.x because that's 
> what we verified against.
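
One way to picture the kind of fix being discussed, as a minimal sketch 
(illustrative only; the actual change in the PR may well differ): serialize the 
load-and-apply of public IP associations per network, so concurrent 
enable/disable static NAT calls cannot interleave their IP lists on the router.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;

    class IpAssocSketch {
        private final ConcurrentHashMap<Long, ReentrantLock> networkLocks = new ConcurrentHashMap<>();

        void applyIpAssociations(long networkId, Runnable loadAndApply) {
            ReentrantLock lock = networkLocks.computeIfAbsent(networkId, id -> new ReentrantLock());
            lock.lock();
            try {
                loadAndApply.run(); // compute the add/revoke list and push it to the VR atomically
            } finally {
                lock.unlock();
            }
        }
    }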



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9457) Allow retrieval and modification of VM and template details via API and UI

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683376#comment-15683376
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9457:


Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1767
  
I have the same concerns as @koushik-das .


> Allow retrieval and modification of VM and template details via API and UI
> --
>
> Key: CLOUDSTACK-9457
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9457
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>Priority: Minor
>
> h2. Introduction
> As suggested on [9379|https://issues.apache.org/jira/browse/CLOUDSTACK-9379], 
> it would be nice to be able to customize vm details through API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9457) Allow retrieval and modification of VM and template details via API and UI

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683249#comment-15683249
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9457:


Github user koushik-das commented on the issue:

https://github.com/apache/cloudstack/pull/1767
  
@nvazquez Can you add details of the use cases that will be addressed by these 
new methods? Also, what kind of validation will be performed on the inputs and 
on the state of the entity (VM or template)? Will it be metadata for bookkeeping 
only, or will it impact the entity during runtime? I am assuming that the data 
will be stored in the corresponding *details table in the DB. How will these 
APIs impact already existing data that was created using some other APIs and 
stored in the details table? 


> Allow retrieval and modification of VM and template details via API and UI
> --
>
> Key: CLOUDSTACK-9457
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9457
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>Priority: Minor
>
> h2. Introduction
> As suggested on [9379|https://issues.apache.org/jira/browse/CLOUDSTACK-9379], 
> it would be nice to be able to customize vm details through API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9491) Vmware resource: incorrect parsing of device list to find ethener index of plugged nic

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683151#comment-15683151
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9491:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1681
  
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + vmware-55u3) has been 
kicked to run smoke tests


> Vmware resource: incorrect parsing of device list to find ethener index of 
> plugged nic
> --
>
> Key: CLOUDSTACK-9491
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9491
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Murali Reddy
>Assignee: Murali Reddy
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> In VmwareResource.java, there is logic (in findRouterEthDeviceIndex) to find 
> the ethernet interface a MAC address is associated with.
> After a NIC is plugged into a VM through vSphere, it takes some time for the 
> device to show up in the guest VM.
> The logic loops through the device list obtained from /proc/sys/net/ipv4/conf 
> in the VM and matches it against the MAC.
> However, '/proc/sys/net/ipv4/conf' is not refreshed, hence the logic always 
> loops through the old device list.
> In addition, no exception is thrown and the error is marked by returning -1. 
> Eventually, VR scripts get -1 as the device number, causing failures in 
> processing the scripts.
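
A rough sketch of the corrected lookup pattern implied by the description 
(not the actual VmwareResource code; the GuestShell interface is hypothetical): 
re-read the device list on every attempt rather than reusing a stale listing, 
and fail loudly instead of returning -1.

    import java.util.List;

    class EthIndexSketch {
        interface GuestShell {
            List<String> listDevices() throws Exception;   // e.g. listing /proc/sys/net/ipv4/conf
            String macOf(String device) throws Exception;  // MAC address of a given device
        }

        static int findEthDeviceIndex(GuestShell shell, String mac, int attempts, long waitMs) throws Exception {
            for (int i = 0; i < attempts; i++) {
                for (String dev : shell.listDevices()) {   // fresh listing on every retry
                    if (dev.startsWith("eth") && mac.equalsIgnoreCase(shell.macOf(dev))) {
                        return Integer.parseInt(dev.substring(3)); // "eth2" -> 2
                    }
                }
                Thread.sleep(waitMs); // the freshly plugged NIC may not be visible yet
            }
            throw new Exception("no guest interface with MAC " + mac + " appeared after " + attempts + " attempts");
        }
    }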



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9491) Vmware resource: incorrect parsing of device list to find ethener index of plugged nic

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683150#comment-15683150
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9491:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1681
  
@blueorangutan test centos7 vmware-55u3



> Vmware resource: incorrect parsing of device list to find ethener index of 
> plugged nic
> --
>
> Key: CLOUDSTACK-9491
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9491
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Murali Reddy
>Assignee: Murali Reddy
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> In VmwareResource.java, there is logic (in findRouterEthDeviceIndex) to find 
> the ethernet interface a MAC address is associated with.
> After a NIC is plugged into a VM through vSphere, it takes some time for the 
> device to show up in the guest VM.
> The logic loops through the device list obtained from /proc/sys/net/ipv4/conf 
> in the VM and matches it against the MAC.
> However, '/proc/sys/net/ipv4/conf' is not refreshed, hence the logic always 
> loops through the old device list.
> In addition, no exception is thrown and the error is marked by returning -1. 
> Eventually, VR scripts get -1 as the device number, causing failures in 
> processing the scripts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8715) Add support for qemu-guest-agent to libvirt provider

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683121#comment-15683121
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8715:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1545
  
@wido the Trillian env failed to run the tests for your latest PR due to the 
following error:
2016-11-21 09:59:54,000 INFO  [cloud.agent.AgentShell] (main:null) (logid:) 
Agent started
2016-11-21 09:59:54,005 INFO  [cloud.agent.AgentShell] (main:null) (logid:) 
Implementation Version is 4.10.0.0-SNAPSHOT
2016-11-21 09:59:54,007 INFO  [cloud.agent.AgentShell] (main:null) (logid:) 
agent.properties found at /etc/cloudstack/agent/agent.properties
2016-11-21 09:59:54,013 INFO  [cloud.agent.AgentShell] (main:null) (logid:) 
Defaulting to using properties file for storage
2016-11-21 09:59:54,016 INFO  [cloud.agent.AgentShell] (main:null) (logid:) 
Defaulting to the constant time backoff algorithm
2016-11-21 09:59:54,035 INFO  [cloud.utils.LogUtils] (main:null) (logid:) 
log4j configuration found at /etc/cloudstack/agent/log4j-cloud.xml
2016-11-21 09:59:54,050 INFO  [cloud.agent.AgentShell] (main:null) (logid:) 
Using default Java settings for IPv6 preference for agent connection
2016-11-21 09:59:54,211 INFO  [cloud.agent.Agent] (main:null) (logid:) id is
2016-11-21 09:59:54,251 ERROR [cloud.agent.AgentShell] (main:null) (logid:) 
Unable to start agent:
java.lang.NullPointerException
at java.io.File.<init>(File.java:277)
at 
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:786)
at com.cloud.agent.Agent.<init>(Agent.java:165)
at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:397)
at 
com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:367)
at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:351)
at com.cloud.agent.AgentShell.start(AgentShell.java:456)
at com.cloud.agent.AgentShell.main(AgentShell.java:491)

Please modify the code to still work where an explicit path may not be 
defined, and ensure that with your change the KVM agent still runs on CentOS 6/7. Thanks.
/cc @jburwell 
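
For reference, the kind of guard being requested could look roughly like this 
(a sketch only; the property name and default path are assumptions, not the 
PR's actual code):

    import java.io.File;

    class AgentConfigSketch {
        // Avoid new File(null): fall back to a default when no explicit path is configured,
        // so the agent can still start on CentOS 6/7 without the new property set.
        static File resolveQemuSocketDir(String configuredPath) {
            if (configuredPath == null || configuredPath.trim().isEmpty()) {
                return new File("/var/lib/libvirt/qemu"); // assumed default, for illustration
            }
            return new File(configuredPath);
        }
    }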


> Add support for qemu-guest-agent to libvirt provider
> 
>
> Key: CLOUDSTACK-8715
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8715
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Reporter: Sten Spans
>Assignee: Wido den Hollander
>  Labels: kvm, libvirt, qemu, systemvm
> Fix For: Future
>
>
> The qemu guest agent is a newer part of qemu/kvm/libvirt which exposes quite 
> a lot of useful functionality, which can only be provided by having an agent 
> on the VM. This includes things like freezing/thawing filesystems (for 
> backups), reading files on the guest, listing interfaces / ip addresses, etc.
> This feature has been requested by users, but is currently not implemented.
> http://users.cloudstack.apache.narkive.com/3TTmy3zj/enabling-qemu-guest-agent
> The first change needed is to add the following to the XML generated for KVM 
> virtual machines:
> <channel type='unix'>
>   <source mode='bind'/>
>   <target type='virtio' name='org.qemu.guest_agent.0'/>
> </channel>
> This provides the communication channel between libvirt and the agent on the 
> host. All in all a pretty simple change to LibvirtComputingResource.java / 
> LibvirtVMDef.java.
> Secondly, the qemu-guest-agent package needs to be added to the systemvm 
> template.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9458) Some VMs are being stopped when agent is reconnecting

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683064#comment-15683064
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9458:


Github user koushik-das commented on the issue:

https://github.com/apache/cloudstack/pull/1640
  
I had already mentioned in a previous comment that there is no need for 
this PR in 4.9/master. So that means a -1.


> Some VMs are being stopped when agent is reconnecting
> -
>
> Key: CLOUDSTACK-9458
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9458
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Marc-Aurèle Brothier
>Assignee: Marc-Aurèle Brothier
>
> If you lose communication between the management server and one of the 
> agents for a few minutes, the HighAvailabilityManager kicks in and starts to 
> schedule VM restarts even though HA mode is not active. Those tasks are 
> inserted as async jobs in the DB, and if the agent comes back online while the 
> jobs are still in the async table, they are pushed to the agent and shut down 
> the VMs. Then, since HA is not active, the VMs are not restarted.
> The expected behavior in my opinion is that the VMs should not be restarted at 
> all if HA mode is not active on them, and the agent should be left to update 
> the VM state with the power report.
> The bug lies in 
> {{HighAvailabilityManagerImpl.scheduleRestartForVmsOnHost(final HostVO host, 
> boolean investigate)}}; PR will follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9491) Vmware resource: incorrect parsing of device list to find ethener index of plugged nic

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15682908#comment-15682908
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9491:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1681
  
@rhtyd unsupported parameters provided. Supported mgmt server os are: 
`centos6, centos7, ubuntu`. Supported hypervisors are: `kvm-centos6, 
kvm-centos7, kvm-ubuntu, xenserver-65sp1, xenserver-62sp1, vmware-60u2, 
vmware-55u3, vmware-51u1, vmware-50u1`


> Vmware resource: incorrect parsing of device list to find ethener index of 
> plugged nic
> --
>
> Key: CLOUDSTACK-9491
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9491
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Murali Reddy
>Assignee: Murali Reddy
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> In VmwareResource.java, there is logic (in findRouterEthDeviceIndex) to find 
> the ethernet interface a MAC address is associated with.
> After a NIC is plugged into a VM through vSphere, it takes some time for the 
> device to show up in the guest VM.
> The logic loops through the device list obtained from /proc/sys/net/ipv4/conf 
> in the VM and matches it against the MAC.
> However, '/proc/sys/net/ipv4/conf' is not refreshed, hence the logic always 
> loops through the old device list.
> In addition, no exception is thrown and the error is marked by returning -1. 
> Eventually, VR scripts get -1 as the device number, causing failures in 
> processing the scripts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9491) Vmware resource: incorrect parsing of device list to find ethener index of plugged nic

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15682907#comment-15682907
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9491:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1681
  
@blueorangutan test centos7 vmware55u3


> Vmware resource: incorrect parsing of device list to find ethener index of 
> plugged nic
> --
>
> Key: CLOUDSTACK-9491
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9491
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Murali Reddy
>Assignee: Murali Reddy
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> In VmwareResource.java, there is logic (in findRouterEthDeviceIndex) to find 
> the ethernet interface a MAC address is associated with.
> After a NIC is plugged into a VM through vSphere, it takes some time for the 
> device to show up in the guest VM.
> The logic loops through the device list obtained from /proc/sys/net/ipv4/conf 
> in the VM and matches it against the MAC.
> However, '/proc/sys/net/ipv4/conf' is not refreshed, hence the logic always 
> loops through the old device list.
> In addition, no exception is thrown and the error is marked by returning -1. 
> Eventually, VR scripts get -1 as the device number, causing failures in 
> processing the scripts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9491) Vmware resource: incorrect parsing of device list to find ethener index of plugged nic

2016-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15682895#comment-15682895
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9491:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1681
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-226


> Vmware resource: incorrect parsing of device list to find ethener index of 
> plugged nic
> --
>
> Key: CLOUDSTACK-9491
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9491
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Murali Reddy
>Assignee: Murali Reddy
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> In VmwareResource.java, there is logic (in findRouterEthDeviceIndex) to find 
> the ethernet interface a MAC address is associated with.
> After a NIC is plugged into a VM through vSphere, it takes some time for the 
> device to show up in the guest VM.
> The logic loops through the device list obtained from /proc/sys/net/ipv4/conf 
> in the VM and matches it against the MAC.
> However, '/proc/sys/net/ipv4/conf' is not refreshed, hence the logic always 
> loops through the old device list.
> In addition, no exception is thrown and the error is marked by returning -1. 
> Eventually, VR scripts get -1 as the device number, causing failures in 
> processing the scripts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)