[jira] [Commented] (CLOUDSTACK-9811) VR will not start, looking to configure eth3 while no such device exists on the VR. On KVM-CentOS6.8 physical host

2017-03-13 Thread Wei Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15906941#comment-15906941
 ] 

Wei Zhou commented on CLOUDSTACK-9811:
--

Hi [~bstoyanov], can you please revert the commit on cs_ip.py mentioned by 
[~remibergsma] and test again?


> VR will not start, looking to configure eth3 while no such device exists on 
> the VR. On KVM-CentOS6.8 physical host
> --
>
> Key: CLOUDSTACK-9811
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9811
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.10.0.0
>Reporter: Boris Stoyanov
>Priority: Blocker
> Attachments: agent.log, cloud.log, management.log
>
>
> This issue appears only on 4.10. When you add an instance with a new network, 
> the VR starts and then fails at the configuration point. It appears to be 
> configuring the eth3 adapter, although no such device should exist on the VR. 
> As a result the VR does not start, which aborts the deployment of the VM. 
> Please note that this issue was reproduced on physical KVM hosts in our lab.
> Hardware host details:
> - 4x Dell C6100
> - Using: American Megatrends MegaRAC Baseboard Management (IPMI v2 compliant)
> OS: CentOS 6.8
> Management: VM running CentOS 6.8
> ACS version: 4.10 RC1. SHA: 7c1d003b5269b375d87f4f6cfff8a144f0608b67
> In a nested virtualization environment it was working fine with CentOS 6.8. 
> Attached are the management log and the cloud.log from the VR. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9727) Password reset discrepancy in RVR when one of the Router is not in Running state.

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15906953#comment-15906953
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9727:


Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1965
  
@bvbharatk Yes, as you may know, only a VM with an SSH keypair attached has 
its password stored in the VM details. If the VM has no SSH keypair, the 
password will not be saved into user_vm_details.

Actually, the VM password should be synced between master and backup. Saving 
it to only one of them, or to both independently, does not work.
For example, if we save the password on the master but not on the backup, 
then once the master is down the VM cannot get the password from the backup VR.
Another example: if we save the password on both master and backup, and the 
VM gets the password from the master and resets it, then after the master 
goes down (or a master->backup switch) and the VM is rebooted later, the VM 
will get the old password from the backup again.
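The two failure modes in this comment can be sketched as a toy simulation (plain Python, illustrative only; not CloudStack code, and the names are made up):

```python
# Toy model of redundant VR password handling. A router stores a pending
# password; the VM fetches it from whichever router is currently master,
# and the router clears it after serving it once.

class Router:
    def __init__(self, name):
        self.name = name
        self.password = None  # pending password for the VM

def reset_password(routers, new_pw):
    for r in routers:
        r.password = new_pw

def vm_fetch(router):
    # VM boots and asks the current master for its password
    pw, router.password = router.password, None
    return pw

# Scenario 1: password saved on the master only.
master, backup = Router("master"), Router("backup")
reset_password([master], "s3cret")
# Master dies before the VM boots; backup becomes master.
assert vm_fetch(backup) is None      # VM cannot get the new password

# Scenario 2: password saved on both, but cleared only where it was served.
master, backup = Router("master"), Router("backup")
reset_password([master, backup], "s3cret")
assert vm_fetch(master) == "s3cret"  # VM gets and applies the password
# Failover, then the VM reboots later.
assert vm_fetch(backup) == "s3cret"  # stale copy is served again
```

Either way the two routers disagree, which is why the comment argues the password state must be actively synced between master and backup.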


> Password reset discrepancy in RVR when one of the Router is not in Running 
> state.
> -
>
> Key: CLOUDSTACK-9727
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9727
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.9.0
>Reporter: Bharat Kumar
>Assignee: Bharat Kumar
> Fix For: 4.9.2.0
>
>
> - Deploy an instance, place the "cloud-set-guest-password" script in 
> /etc/init.d, and give it executable permission.
> - Create a template from the above VM.
> - Create a new network offering with RVR enabled.
> - Deploy a new VM from the template created above and select the RVR 
> offering.
> - Ensure that the password script is running successfully.
> - Put the backup router in the Stopped state and ensure only the master is 
> running.
> - Now stop the VM and reset the password.
> - Do not start the VM. Now stop the current master and start the backup.
> - The backup now becomes the master. Start the VM.
> Observations:
> - The password is saved only on the old master, which is now stopped (or on 
> the backup if we start it).
> - The current master, which was the backup earlier, does not have the new 
> password, so the user cannot log in with the new password.
> - In this scenario there is a discrepancy between the passwords stored on 
> the two RVRs.
> The only way to sync the passwords now is to ensure both RVRs are running 
> and reset the password on the VM. 





[jira] [Created] (CLOUDSTACK-9831) Previous pod_id still remains in the vm_instance table after VM migration with migrateVirtualMachineWithVolume

2017-03-13 Thread Sudhansu Sahu (JIRA)
Sudhansu Sahu created CLOUDSTACK-9831:
-

 Summary: Previous pod_id still remains in the vm_instance table 
after VM migration with migrateVirtualMachineWithVolume
 Key: CLOUDSTACK-9831
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9831
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.10.0.0
Reporter: Sudhansu Sahu


Previous pod_id still remains in the vm_instance table after VM migration with 
migrateVirtualMachineWithVolume

{noformat}
Before migrateVirtualMachineWithVolume:

mysql> select v.id, v.instance_name, h.name, v.pod_id as pod_id_from_instance_tb, h.pod_id as pod_id_from_host_tb from vm_instance v, host h where v.host_id = h.id and v.id = 2;
+----+---------------+--------+-------------------------+---------------------+
| id | instance_name | name   | pod_id_from_instance_tb | pod_id_from_host_tb |
+----+---------------+--------+-------------------------+---------------------+
|  2 | i-2-2-VM      | testVM |                       1 |                   1 |
+----+---------------+--------+-------------------------+---------------------+
1 row in set (0.00 sec)

After migrateVirtualMachineWithVolume:

mysql> select v.id, v.instance_name, h.name, v.pod_id as pod_id_from_instance_tb, h.pod_id as pod_id_from_host_tb from vm_instance v, host h where v.host_id = h.id and v.id = 3;
+----+---------------+---------+-------------------------+---------------------+
| id | instance_name | name    | pod_id_from_instance_tb | pod_id_from_host_tb |
+----+---------------+---------+-------------------------+---------------------+
|  3 | i-2-3-VM      | testVm1 |                       1 |                   2 |
+----+---------------+---------+-------------------------+---------------------+
1 row in set (0.00 sec)
{noformat}
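A query along these lines would flag every VM whose cached pod_id disagrees with the pod of the host it runs on (an illustrative consistency check built only from the columns used above, not part of CloudStack):

```sql
-- Illustrative check: VMs whose vm_instance.pod_id no longer matches
-- the pod of the host they are placed on.
select v.id, v.instance_name, h.name,
       v.pod_id as pod_id_from_instance_tb,
       h.pod_id as pod_id_from_host_tb
from vm_instance v
join host h on v.host_id = h.id
where v.pod_id <> h.pod_id;
```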





[jira] [Commented] (CLOUDSTACK-9718) Revamp the dropdown showing lists of hosts available for migration in a Zone

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907009#comment-15907009
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9718:


Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1889
  
@rashmidixit We have ported this to our branch and found it inconvenient to 
frequently click the OK button in the dialog with the warning "No more hosts 
are available for migration".

Could you remove the lines marked with `-` below?
```
             cloudStack.dialog.notice({
                 message: _l('message.no.host.available')
             }); // Only a single host in the setup
-        } else {
-            cloudStack.dialog.notice({
-                message: _l('message.no.more.hosts.available')
-            });
         }
     }
 });
```


> Revamp the dropdown showing lists of hosts available for migration in a Zone
> 
>
> Key: CLOUDSTACK-9718
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9718
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.7.0, 4.8.0, 4.9.0
>Reporter: Rashmi Dixit
>Assignee: Rashmi Dixit
> Fix For: 4.10.0.0
>
> Attachments: MigrateInstance-SeeHosts.PNG, 
> MigrateInstance-SeeHosts-Search.PNG
>
>
> There are a couple of issues:
> 1. When looking for possible hosts for migration, not all are displayed.
> 2. If there is a large number of hosts, the dropdown is not easy to use.
> To fix this, we propose changing the view to a list view that shows the 
> hosts with radio buttons. Additionally, add a search option so the hostname 
> can be searched in this list, making it more usable.





[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907025#comment-15907025
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user priyankparihar commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@serg38 and @borisstoyanov Thanks for giving your precious time.

@sadhugit is looking for test cases related suggestions.




> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>
> Currently the root disk size of an instance is locked to that of the 
> template. This creates unnecessary template duplicates, prevents the 
> creation of a marketplace, wastes time and disk space, and generally makes 
> work more complicated.
> Real life example - a small VPS provider might want to offer the following 
> sizes (in GB):
> 10,20,40,80,160,240,320,480,620
> That's 9 offerings.
> The template selection could look like this, including real disk space used:
> Windows 2008 ~10GB
> Windows 2008+Plesk ~15GB
> Windows 2008+MSSQL ~15GB
> Windows 2012 ~10GB
> Windows 2012+Plesk ~15GB
> Windows 2012+MSSQL ~15GB
> CentOS ~1GB
> CentOS+CPanel ~3GB
> CentOS+Virtualmin ~3GB
> CentOS+Zimbra ~3GB
> CentOS+Docker ~2GB
> Debian ~1GB
> Ubuntu LTS ~1GB
> In this case the total disk space used by templates will be 828 GB, that's 
> almost 1 TB. If your storage is expensive and limited SSD this can get 
> painful!
> If the root resize feature is enabled we can reduce this to under 100 GB.
> Specifications and Description 
> Administrators don't want to deploy duplicate OS templates of differing 
> sizes just to support different storage packages. Instead, the VM deployment 
> can accept a size for the root disk and adjust the template clone 
> accordingly. In addition, CloudStack already supports data disk resizing for 
> existing volumes; we can extend that functionality to resize existing root 
> disks. 
>   As mentioned, we can leverage the existing design for resizing an existing 
> volume. The difference with root volumes is that we can't resize via disk 
> offering, therefore we need to verify that no disk offering was passed, just 
> a size. The existing enforcement of new size > existing size will still 
> serve its purpose.
>For deployment-based resize (ROOT volume size different from template 
> size), we pass the rootdisksize parameter when the existing code allocates 
> the root volume. In the process, we validate that the root disk size is 
> larger than the existing template size and non-zero. This persists the root 
> volume at the desired size regardless of whether the VM is started on 
> deploy. Hypervisor-specific code then needs to honor the VolumeObjectTO's 
> size attribute and use it when cloning from the template, rather than 
> inheriting the template's size. This can be implemented one hypervisor at a 
> time, so there needs to be a check in UserVmManagerImpl that fails 
> unsupported hypervisors with InvalidParameterValueException when 
> rootdisksize is passed.
>
> Hypervisor specific changes
> XenServer
> Resize ROOT volume is only supported for stopped VMs
> Newly created ROOT volume will be resized after clone from template
> VMware  
> Resize ROOT volume is only supported for stopped VMs.
> The new size must be larger than the previous size.
> A newly created ROOT volume will be resized after clone from template iff:
> - there is no root disk chaining (i.e. a full clone is used), and
> - the root disk controller setting is not IDE.
> A previously created ROOT volume can be resized iff:
> - there is no root disk chaining, and
> - the root disk controller setting is not IDE.
> Web Services APIs
> resizeVolume API call will not change, but it will accept volume UUIDs of 
> root volumes in id parameter for resizing.
> deployVirtualMachine API call will allow new rootdisksize parameter to be 
> passed. This parameter will be used as the disk size (in GB) when cloning 
> from template.
> UI
> 1) (See attached image 1.) The resize volume option is added for ROOT disks.
> 2) (See attached image 2.) When the user calls resize volume on a ROOT 
> volume, only a size option is shown; for DATADISK volumes, disk offerings 
> are shown.
> 3) (See attached image 3.) When the user deploys a VM, a new option for the 
> root disk size is added.
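The validation rules described above can be sketched as follows (a hypothetical helper for illustration; this is not the actual UserVmManagerImpl code, and all names are made up):

```python
# Sketch of the proposed ROOT-volume resize validation: a bare size is
# accepted (never a disk offering), the new size must exceed the current
# one, and unsupported hypervisors are rejected outright.

def validate_root_resize(current_gb, new_gb, disk_offering=None,
                         hypervisor="KVM", supported=("KVM",)):
    if disk_offering is not None:
        raise ValueError("ROOT volumes are resized by size, not disk offering")
    if new_gb <= 0 or new_gb <= current_gb:
        raise ValueError("new root disk size must be larger than current size")
    if hypervisor not in supported:
        raise ValueError("rootdisksize is not supported on " + hypervisor)
    return new_gb

# A 10 GB template deployed with rootdisksize=20 passes validation:
assert validate_root_resize(10, 20) == 20
```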





[jira] [Commented] (CLOUDSTACK-9831) Previous pod_id still remains in the vm_instance table after VM migration with migrateVirtualMachineWithVolume

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907124#comment-15907124
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9831:


GitHub user sudhansu7 opened a pull request:

https://github.com/apache/cloudstack/pull/2002

CLOUDSTACK-9831: Previous pod_id still remains in the vm_instance table

Previous pod_id still remains in the vm_instance table after VM migration 
with migrateVirtualMachineWithVolume

{noformat}
Before migrateVirtualMachineWithVolume:

mysql> select v.id, v.instance_name, h.name, v.pod_id as pod_id_from_instance_tb, h.pod_id as pod_id_from_host_tb from vm_instance v, host h where v.host_id = h.id and v.id = 2;
+----+---------------+--------+-------------------------+---------------------+
| id | instance_name | name   | pod_id_from_instance_tb | pod_id_from_host_tb |
+----+---------------+--------+-------------------------+---------------------+
|  2 | i-2-2-VM      | testVM |                       1 |                   1 |
+----+---------------+--------+-------------------------+---------------------+
1 row in set (0.00 sec)

After migrateVirtualMachineWithVolume:

mysql> select v.id, v.instance_name, h.name, v.pod_id as pod_id_from_instance_tb, h.pod_id as pod_id_from_host_tb from vm_instance v, host h where v.host_id = h.id and v.id = 3;
+----+---------------+---------+-------------------------+---------------------+
| id | instance_name | name    | pod_id_from_instance_tb | pod_id_from_host_tb |
+----+---------------+---------+-------------------------+---------------------+
|  3 | i-2-3-VM      | testVm1 |                       1 |                   2 |
+----+---------------+---------+-------------------------+---------------------+
1 row in set (0.00 sec)
{noformat}

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sudhansu7/cloudstack CLOUDSTACK-9831

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/2002.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2002


commit 7018bf13e51e9d71dcd4ab2393fa95cd6a437def
Author: Sudhansu 
Date:   2017-03-13T07:44:12Z

CLOUDSTACK-9831: Previous pod_id still remains in the vm_instance table 
after
 VM migration with migrateVirtualMachineWithVolume





[jira] [Commented] (CLOUDSTACK-9811) VR will not start, looking to configure eth3 while no such device exists on the VR. On KVM-CentOS6.8 physical host

2017-03-13 Thread Boris Stoyanov (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907149#comment-15907149
 ] 

Boris Stoyanov commented on CLOUDSTACK-9811:


ok will try, thanks [~remibergsma] and [~ustcweiz...@gmail.com]






[jira] [Commented] (CLOUDSTACK-9720) [VMware] template_spool_ref table is not getting updated with correct template physical size in template_size column.

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907175#comment-15907175
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9720:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1880
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


> [VMware] template_spool_ref table is not getting updated with correct 
> template physical size in template_size column.
> -
>
> Key: CLOUDSTACK-9720
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9720
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Template, VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> CloudStack is not updating the template_spool_ref table with the correct 
> template physical_size in the template_size column, which leads to an 
> incorrect calculation of allocated primary storage.





[jira] [Commented] (CLOUDSTACK-9730) [VMware] Unable to add a host with space in its name to existing VMware cluster

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907173#comment-15907173
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9730:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1891
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


> [VMware] Unable to add a host with space in its name to existing VMware 
> cluster
> ---
>
> Key: CLOUDSTACK-9730
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9730
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> ISSUE
> ==
> Unable to add a host with a space in its name to an existing VMware cluster.
> While adding a host, CloudStack persists the validated inventory URL path in 
> the database. The URL from the API parameter string is converted to a URI 
> object as part of validation, which encodes the URL path, so the encoded URL 
> is stored in the database and whitespace ends up persisted as '+' symbols.
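The '+' symptom is the classic path-encoding versus form-encoding mixup. A small Python illustration of the general pitfall (CloudStack itself hits this through Java's URI handling, not through Python; the host path below is made up):

```python
# Form-style encoding ('+' for spaces) versus path-style percent-encoding.
# Persisting a form-encoded string as a literal path is what turns
# "esxi host 01" into "esxi+host+01" in the database.
from urllib.parse import quote, quote_plus, unquote, unquote_plus

path = "/Datacenter/host/Cluster 1/esxi host 01"

form_encoded = quote_plus(path)   # spaces become '+'
path_encoded = quote(path)        # spaces become '%20'

assert "+" in form_encoded
assert "%20" in path_encoded and "+" not in path_encoded

# Decoding before persisting restores the original name either way:
assert unquote_plus(form_encoded) == path
assert unquote(path_encoded) == path
```

The fix direction implied by the issue is to store the decoded (or consistently path-encoded) URL rather than whatever the validation step happened to emit.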





[jira] [Commented] (CLOUDSTACK-9720) [VMware] template_spool_ref table is not getting updated with correct template physical size in template_size column.

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907174#comment-15907174
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9720:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1880
  
@blueorangutan package







[jira] [Commented] (CLOUDSTACK-9730) [VMware] Unable to add a host with space in its name to existing VMware cluster

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907281#comment-15907281
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9730:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1891
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-590







[jira] [Commented] (CLOUDSTACK-9730) [VMware] Unable to add a host with space in its name to existing VMware cluster

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907326#comment-15907326
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9730:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1891
  
@blueorangutan test centos7 vmware-65u1







[jira] [Commented] (CLOUDSTACK-9730) [VMware] Unable to add a host with space in its name to existing VMware cluster

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907328#comment-15907328
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9730:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1891
  
@borisstoyanov unsupported parameters provided. Supported mgmt server os 
are: `centos6, centos7, ubuntu`. Supported hypervisors are: `kvm-centos6, 
kvm-centos7, kvm-ubuntu, xenserver-65sp1, xenserver-62sp1, vmware-60u2, 
vmware-55u3, vmware-51u1, vmware-50u1`








[jira] [Commented] (CLOUDSTACK-9730) [VMware] Unable to add a host with space in its name to existing VMware cluster

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907329#comment-15907329
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9730:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1891
  
@blueorangutan test centos7 vmware-55u3







[jira] [Commented] (CLOUDSTACK-9730) [VMware] Unable to add a host with space in its name to existing VMware cluster

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907332#comment-15907332
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9730:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1891
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + vmware-55u3) has 
been kicked to run smoke tests







[jira] [Commented] (CLOUDSTACK-8239) Add support for VirtIO-SCSI for KVM hypervisors

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907341#comment-15907341
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8239:


Github user kiwiflyer commented on the issue:

https://github.com/apache/cloudstack/pull/1955
  
@karuturi  3 x LGTM, testing successful. Ready for Merge.


> Add support for VirtIO-SCSI for KVM hypervisors
> ---
>
> Key: CLOUDSTACK-8239
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8239
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Storage Controller
>Affects Versions: 4.6.0
> Environment: KVM
>Reporter: Andrei Mikhailovsky
>Assignee: Wido den Hollander
>Priority: Critical
>  Labels: ceph, gsoc2017, kvm, libvirt, rbd, storage_drivers, 
> virtio
> Fix For: Future
>
>
> It would be nice to have support for virtio-scsi for KVM hypervisors.
> The reasons for using virtio-scsi instead of virtio-blk are that it 
> increases the number of devices you can attach to a VM and adds the ability 
> to use discard to reclaim unused blocks from backend storage such as Ceph 
> RBD. There is also talk of a performance advantage.
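For reference, a minimal libvirt disk definition using virtio-scsi with discard looks roughly like this (a hand-written sketch, not output generated by CloudStack; the pool and volume names are made up):

```xml
<!-- virtio-scsi controller plus one RBD-backed disk attached to it -->
<controller type='scsi' model='virtio-scsi'/>
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' discard='unmap'/>
  <source protocol='rbd' name='cloudstack/vm-root-volume'/>
  <target dev='sda' bus='scsi'/>
</disk>
```

The `discard='unmap'` attribute is what lets the guest's TRIM/discard requests reach the backend so unused blocks can be reclaimed.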





[jira] [Commented] (CLOUDSTACK-9720) [VMware] template_spool_ref table is not getting updated with correct template physical size in template_size column.

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907364#comment-15907364
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9720:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1880
  
@blueorangutan test centos7 vmware-55u3





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9720) [VMware] template_spool_ref table is not getting updated with correct template physical size in template_size column.

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907365#comment-15907365
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9720:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1880
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + vmware-55u3) has 
been kicked to run smoke tests


> [VMware] template_spool_ref table is not getting updated with correct 
> template physical size in template_size column.
> -
>
> Key: CLOUDSTACK-9720
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9720
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Template, VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> CloudStack is not updating template_spool_ref table with correct template 
> physical_size in template_size column which leads to incorrect calculation of 
> allocated primary storage.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9827) Storage tags stored in multiple places

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907426#comment-15907426
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9827:


Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1994
  
@mike-tutkowski awesome, thanks for testing this PR!

@rafaelweingartner thanks for reviewing, I'll work on changes proposed

@karuturi sure, I'll work on it, thanks!


> Storage tags stored in multiple places
> --
>
> Key: CLOUDSTACK-9827
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9827
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.10.0.0
> Environment: N/A
>Reporter: Mike Tutkowski
>Assignee: Nicolas Vazquez
>Priority: Blocker
> Fix For: 4.10.0.0
>
>
> I marked this as a Blocker because it concerns me that we are not handling 
> storage tags correctly in 4.10 and, as such, VM storage might get placed in 
> locations that users don't want.
> From e-mails I sent to dev@ (most recent to oldest):
> If I add a new primary storage and give it a storage tag, the tag ends up in 
> storage_pool_details.
> If I edit an existing storage pool’s storage tags, it places them in 
> storage_pool_tags.
> **
> I believe I have found another bug (one that we should either fix or examine 
> in detail before releasing 4.10).
> It looks like we have a new table: cloud.storage_pool_tags.
> The addition of this table seems to have broken the listStorageTags API 
> command. When this command runs, it doesn’t pick up any storage tags for me 
> (and I know I have one storage tag).
> This data used to be stored in the cloud.storage_pool_details table. It’s 
> good to put it in its own table, but will our upgrade process move the 
> existing tags from storage_pool_details to storage_pool_tags?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9811) VR will not start, looking to configure eth3 while no such device exists on the VR. On KVM-CentOS6.8 physical host

2017-03-13 Thread Will Stevens (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907466#comment-15907466
 ] 

Will Stevens commented on CLOUDSTACK-9811:
--

I just checked the logs.  I see the key error.  I will review further...

> VR will not start, looking to configure eth3 while no such device exists on 
> the VR. On KVM-CentOS6.8 physical host
> --
>
> Key: CLOUDSTACK-9811
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9811
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.10.0.0
>Reporter: Boris Stoyanov
>Priority: Blocker
> Attachments: agent.log, cloud.log, management.log
>
>
> This issue appears only on 4.10. When you add an instance with a new network 
> the VR starts and fails at the configuration point. Looks like it is looking 
> to configure eth3 adapter while no such device should be available on the VR. 
> The VR does not start and aborts the deployment of the VM. 
> Please note that this issue was reproduced on physical KVM hosts in our lab.
> Hardware Hosts details:
> - 4x Dell C6100
> - Using: American Megatrends MegaRAC Baseboard Management (IPMI v2 compliant)
> OS:
> CentOS 6.8. 
> Management: 
> VM, running CentOS 6.8
> ACS version: 4.10 RC 1. SHA: 7c1d003b5269b375d87f4f6cfff8a144f0608b67
> In a nested virtualization environment it was working fine with CentOS6.8. 
> Attached are the management log and the cloud.log from the VR. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9811) VR will not start, looking to configure eth3 while no such device exists on the VR. On KVM-CentOS6.8 physical host

2017-03-13 Thread Will Stevens (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907463#comment-15907463
 ] 

Will Stevens commented on CLOUDSTACK-9811:
--

I really doubt my change in https://github.com/apache/cloudstack/pull/1741 is 
causing this.  The change to `cs_ip.py` in that PR fixes a bug that caused the 
IP addresses to be reordered on reboot.  If there are secondary IPs on a nic, 
the source NAT IP would no longer be the primary IP on that nic after a 
reboot; one of the secondary IPs would become the primary IP instead.  The 
change I made results in the same network config after a reboot by recording 
the index and modifying that index rather than removing it and re-adding it 
at the end.
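
The ordering behaviour described above can be illustrated with a minimal 
sketch (hypothetical data and field names, not the actual cs_ip.py databag 
schema):

```python
# Hypothetical illustration: updating an IP entry in place at its recorded
# index keeps the source NAT IP first, while the old remove-and-append
# behaviour pushes it behind the secondary IPs after a reboot.

def update_in_place(ips, target, new_entry):
    # Record the index of the matching IP and overwrite it at that index.
    for i, entry in enumerate(ips):
        if entry["cidr"] == target:
            ips[i] = new_entry
            return ips
    ips.append(new_entry)
    return ips

def remove_and_append(ips, target, new_entry):
    # The old behaviour: drop the match, then re-add it at the end.
    ips = [e for e in ips if e["cidr"] != target]
    ips.append(new_entry)
    return ips

ips = [{"cidr": "10.0.0.1/24", "snat": True},   # source NAT (primary)
       {"cidr": "10.0.0.2/24", "snat": False}]  # secondary

a = update_in_place(list(ips), "10.0.0.1/24",
                    {"cidr": "10.0.0.1/24", "snat": True})
b = remove_and_append(list(ips), "10.0.0.1/24",
                      {"cidr": "10.0.0.1/24", "snat": True})
print(a[0]["snat"])  # True  - source NAT IP is still first
print(b[0]["snat"])  # False - source NAT IP was reordered
```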

I know Murali made a lot of VR changes that got merged into 4.10 as well; I 
have not reviewed all of them.

I will review the logs and see if anything pops up.  [~boriss], let me know if 
reverting that change to cs_ip.py makes any difference.

> VR will not start, looking to configure eth3 while no such device exists on 
> the VR. On KVM-CentOS6.8 physical host
> --
>
> Key: CLOUDSTACK-9811
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9811
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.10.0.0
>Reporter: Boris Stoyanov
>Priority: Blocker
> Attachments: agent.log, cloud.log, management.log
>
>
> This issue appears only on 4.10. When you add an instance with a new network 
> the VR starts and fails at the configuration point. Looks like it is looking 
> to configure eth3 adapter while no such device should be available on the VR. 
> The VR does not start and aborts the deployment of the VM. 
> Please note that this issue was reproduced on physical KVM hosts in our lab.
> Hardware Hosts details:
> - 4x Dell C6100
> - Using: American Megatrends MegaRAC Baseboard Management (IPMI v2 compliant)
> OS:
> CentOS 6.8. 
> Management: 
> VM, running CentOS 6.8
> ACS version: 4.10 RC 1. SHA: 7c1d003b5269b375d87f4f6cfff8a144f0608b67
> In a nested virtualization environment it was working fine with CentOS6.8. 
> Attached are the management log and the cloud.log from the VR. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9811) VR will not start, looking to configure eth3 while no such device exists on the VR. On KVM-CentOS6.8 physical host

2017-03-13 Thread Will Stevens (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907527#comment-15907527
 ] 

Will Stevens commented on CLOUDSTACK-9811:
--

I will admit that I am a bit confused that the IP was matched by looping 
through the databag and an index was found, but then when it tries to set the 
IP, it is not found.  Is `eth3` supposed to exist?  Can you post the ips.json 
and ip_associations.json databags so we understand what the config is there?

We can easily 'get rid' of the error by changing this line of code: 
https://github.com/apache/cloudstack/blob/master/systemvm/patches/debian/config/opt/cloud/bin/cs_ip.py#L48

from:
if index != -1:

to:
if index != -1 and ip['device'] in dbag and index in dbag[ip['device']]:
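
A minimal runnable sketch of the suggested guard (hypothetical databag shape 
and field names; the real cs_ip.py databag may differ):

```python
# Hypothetical sketch: only touch the databag entry when both the device
# and the recorded index actually exist, instead of assuming they do and
# hitting a KeyError for a device like eth3 that is absent on the VR.

dbag = {"eth0": {0: {"public_ip": "10.0.0.1"}}}  # hypothetical databag shape

def set_ip(dbag, ip, index):
    device = ip["device"]
    # Guard: index found AND device present AND index present for the device.
    if index != -1 and device in dbag and index in dbag[device]:
        dbag[device][index] = ip
        return True
    return False

ok = set_ip(dbag, {"device": "eth0", "public_ip": "10.0.0.2"}, 0)
bad = set_ip(dbag, {"device": "eth3", "public_ip": "10.0.0.3"}, 0)  # no eth3
print(ok, bad)  # True False
```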

I am curious if the `nic_dev_id` variable is correct.  It was introduced here 
(it looks to be relatively consistent with the functionality before): 
https://github.com/apache/cloudstack/commit/6749785caba78a9379e94bf3aaf0c1fbc44c5445#diff-a7d6f7150cca74029f23c19b72ad0622R24

Looking at the logs, I am unclear how the `nic_dev_id` is getting set to `3`.

Let me know if the code change I suggested in this comment works to fix your 
issue.  If it does, I will create a PR with that change for you.

Cheers...

> VR will not start, looking to configure eth3 while no such device exists on 
> the VR. On KVM-CentOS6.8 physical host
> --
>
> Key: CLOUDSTACK-9811
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9811
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.10.0.0
>Reporter: Boris Stoyanov
>Priority: Blocker
> Attachments: agent.log, cloud.log, management.log
>
>
> This issue appears only on 4.10. When you add an instance with a new network 
> the VR starts and fails at the configuration point. Looks like it is looking 
> to configure eth3 adapter while no such device should be available on the VR. 
> The VR does not start and aborts the deployment of the VM. 
> Please note that this issue was reproduced on physical KVM hosts in our lab.
> Hardware Hosts details:
> - 4x Dell C6100
> - Using: American Megatrends MegaRAC Baseboard Management (IPMI v2 compliant)
> OS:
> CentOS 6.8. 
> Management: 
> VM, running CentOS 6.8
> ACS version: 4.10 RC 1. SHA: 7c1d003b5269b375d87f4f6cfff8a144f0608b67
> In a nested virtualization environment it was working fine with CentOS6.8. 
> Attached are the management log and the cloud.log from the VR. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Reopened] (CLOUDSTACK-4603) Ability to configure a syslog destination for a virtual router / systemvm

2017-03-13 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rafael Weingärtner reopened CLOUDSTACK-4603:


Reopened at the request of thinktwo (Jan-Arve Nygård)

> Ability to configure a syslog destination for a virtual router / systemvm
> -
>
> Key: CLOUDSTACK-4603
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4603
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: SystemVM, Virtual Router
>Reporter: Roeland Kuipers
>
> In order to improve monitoring of and insight into system / router VMs, we 
> would like an option to send logging from system / router VMs to a syslog 
> server.
> The destination should preferably be configurable per network and globally.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9717) [VMware] RVRs have mismatching MAC addresses for extra public NICs

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907605#comment-15907605
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9717:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1878#discussion_r105676576
  
--- Diff: engine/schema/src/com/cloud/vm/dao/NicDaoImpl.java ---
@@ -302,4 +309,17 @@ public int countNicsForStartingVms(long networkId) {
 List results = customSearch(sc, null);
 return results.get(0);
 }
+
+@Override
+public Long getPeerRouterId(String publicMacAddress, final long routerId) {
+    final SearchCriteria sc = PeerRouterSearch.create();
+    sc.setParameters("instanceId", routerId);
+    sc.setParameters("macAddress", publicMacAddress);
+    sc.setParameters("vmType", VirtualMachine.Type.DomainRouter);
+    NicVO nicVo = findOneBy(sc);
+    if (nicVo != null) {
+        return (new Long(nicVo.getInstanceId()));
--- End diff --

Let the auto-boxing and auto-unboxing do this for you.
When you box the value manually, Java's cached pool of String/number wrapper 
objects is not used.
You just need to return `nicVo.getInstanceId()`


> [VMware] RVRs have mismatching MAC addresses for extra public NICs
> --
>
> Key: CLOUDSTACK-9717
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9717
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller, VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> [CLOUDSTACK-985|https://issues.apache.org/jira/browse/CLOUDSTACK-985] doesn't 
> seem to be completely fixed.
> ISSUE
> ==
> If there are two public networks on two VLANs, and a pair redundant VRs 
> acquire IPs from both, the associated NICs on the redundant VRs will have 
> mismatching MAC addresses.  
> The example below shows the eth2 NICs for the first public network 
> (210.140.168.0/21) have matching MAC addresses (06:c4:b6:00:03:df) as 
> expected, but the eth3 NICs for the second one (210.140.160.0/21) have 
> mismatching MACs (02:00:50:e1:6c:cd versus 02:00:5a:e6:6c:d5).
> *r-43584-VM (Master)*
> 6: eth2:  mtu 1500 qdisc mq state UNKNOWN 
> qlen 1000 
> link/ether 06:c4:b6:00:03:df brd ff:ff:ff:ff:ff:ff 
> inet 210.140.168.42/21 brd 210.140.175.255 scope global eth2 
> inet 210.140.168.20/21 brd 210.140.175.255 scope global secondary eth2 
> 8: eth3:  mtu 1500 qdisc mq state UNKNOWN 
> qlen 1000 
> link/ether 02:00:50:e1:6c:cd brd ff:ff:ff:ff:ff:ff 
> inet 210.140.162.124/21 brd 210.140.167.255 scope global eth3 
> inet 210.140.163.36/21 brd 210.140.167.255 scope global secondary eth3 
> *r-43585-VM (Backup)*
> 6: eth2:  mtu 1500 qdisc noop state DOWN qlen 1000 
> link/ether 06:c4:b6:00:03:df brd ff:ff:ff:ff:ff:ff 
> inet 210.140.168.42/21 brd 210.140.175.255 scope global eth2 
> inet 210.140.168.20/21 brd 210.140.175.255 scope global secondary eth2 
> 8: eth3:  mtu 1500 qdisc noop state DOWN qlen 1000 
> link/ether 02:00:5a:e6:6c:d5 brd ff:ff:ff:ff:ff:ff 
> inet 210.140.162.124/21 brd 210.140.167.255 scope global eth3 
> inet 210.140.163.36/21 brd 210.140.167.255 scope global secondary eth3 
> CloudStack should ensure that the NICs for all public networks have matching 
> MACs.
> REPRO STEPS
> ==
> 1) Set up redundant VR.
> 2) Set up multiple public networks on different VLANs.
> 3) Acquire IPs in the RVR network until the VRs get IPs in the different 
> public networks.
> 4) Confirm the mismatching MAC addresses.
> EXPECTED BEHAVIOR
> ==
> Redundant VRs have matching MACs for all public networks.
> ACTUAL BEHAVIOR
> ==
> Redundant VRs have matching MACs only for the first public network.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9717) [VMware] RVRs have mismatching MAC addresses for extra public NICs

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907603#comment-15907603
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9717:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1878#discussion_r105678507
  
--- Diff: plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java ---
@@ -1928,6 +1929,54 @@ protected StartAnswer execute(StartCommand cmd) {
 VirtualDevice nic;
 int nicMask = 0;
 int nicCount = 0;
+
+if (vmSpec.getType() == VirtualMachine.Type.DomainRouter) {
+    int extraPublicNics = mgr.getRouterExtraPublicNics();
+    if (extraPublicNics > 0 && vmSpec.getDetails().containsKey("PeerRouterInstanceName")) {
+        //Set identical MAC address for RvR on extra public interfaces
+        String peerRouterInstanceName = vmSpec.getDetails().get("PeerRouterInstanceName");
+
+        VirtualMachineMO peerVmMo = hyperHost.findVmOnHyperHost(peerRouterInstanceName);
+        if (peerVmMo == null) {
+            peerVmMo = hyperHost.findVmOnPeerHyperHost(peerRouterInstanceName);
+        }
+
+        if (peerVmMo != null) {
+            StringBuffer sbOldMacSequence = new StringBuffer();
+            for (NicTO oldNicTo : sortNicsByDeviceId(nics)) {
+                sbOldMacSequence.append(oldNicTo.getMac()).append("|");
+            }
+            if (!sbOldMacSequence.toString().isEmpty()) {
+                sbOldMacSequence.deleteCharAt(sbOldMacSequence.length() - 1); //Remove extra '|' char appended at the end
+            }
+
+            for (int nicIndex = nics.length - extraPublicNics; nicIndex < nics.length; nicIndex++) {
+                VirtualDevice nicDevice = peerVmMo.getNicDeviceByIndex(nics[nicIndex].getDeviceId());
+                if (nicDevice != null) {
+                    String mac = ((VirtualEthernetCard)nicDevice).getMacAddress();
+                    if (mac != null) {
+                        s_logger.info("Use same MAC as previous RvR, the MAC is " + mac + " for extra NIC with device id: " + nics[nicIndex].getDeviceId());
+                        nics[nicIndex].setMac(mac);
+                    }
+                }
+            }
+
+            String bootArgs = vmSpec.getBootArgs();
+            if (!StringUtils.isEmpty(bootArgs)) {
+                StringBuffer sbNewMacSequence = new StringBuffer();
--- End diff --

This method is still quite big.
What about extracting lines 1966-1971 to a method? These lines are used to 
generate/create the `sbNewMacSequence`. Then we could have documentation 
describing its workings, plus test cases.


> [VMware] RVRs have mismatching MAC addresses for extra public NICs
> --
>
> Key: CLOUDSTACK-9717
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9717
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller, VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> [CLOUDSTACK-985|https://issues.apache.org/jira/browse/CLOUDSTACK-985] doesn't 
> seem to be completely fixed.
> ISSUE
> ==
> If there are two public networks on two VLANs, and a pair redundant VRs 
> acquire IPs from both, the associated NICs on the redundant VRs will have 
> mismatching MAC addresses.  
> The example below shows the eth2 NICs for the first public network 
> (210.140.168.0/21) have matching MAC addresses (06:c4:b6:00:03:df) as 
> expected, but the eth3 NICs for the second one (210.140.160.0/21) have 
> mismatching MACs (02:00:50:e1:6c:cd versus 02:00:5a:e6:6c:d5).
> *r-43584-VM (Master)*
> 6: eth2:  mtu 1500 qdisc mq state UNKNOWN 
> qlen 1000 
> link/ether 06:c4:b6:00:03:df brd ff:ff:ff:ff:ff:ff 
> inet 210.140.168.42/21 brd 210.140.175.255 scope global eth2 
> inet 210.140.168.20/21 brd 210.140.175.255 scope global secondary eth2 
> 8: eth3:  mtu 1500 qdisc mq state UNKNOWN 
> qlen 1000 
> link/ether 02:00:50:e1:6c:cd brd ff:ff:ff:ff:ff:ff 
> inet 210.140.162.124/21 brd 210.140.167.255 scope global eth3 
> inet 210.140.163.36/

[jira] [Commented] (CLOUDSTACK-9717) [VMware] RVRs have mismatching MAC addresses for extra public NICs

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907604#comment-15907604
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9717:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1878#discussion_r105677642
  
--- Diff: plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java ---
@@ -2072,6 +2121,17 @@ protected StartAnswer execute(StartCommand cmd) {
 }
 
 /**
+ * Update boot args with the new nic mac addresses.
+ */
+protected String replaceNicsMacSequenceInBootArgs(String oldMacSequence, String newMacSequence, VirtualMachineTO vmSpec) {
+    String bootArgs = vmSpec.getBootArgs();
+    if (!StringUtils.isEmpty(bootArgs) && !StringUtils.isEmpty(oldMacSequence) && !StringUtils.isEmpty(newMacSequence)) {
+        return bootArgs.replace(oldMacSequence, newMacSequence);
+    }
+    return "";
--- End diff --

Is this case possible: `bootArgs` not empty, but `oldMacSequence` or 
`newMacSequence` empty?

If so, would it not be better to return the variable `bootArgs` at line 2131?
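
A hedged Python sketch of the method under review (hypothetical boot-args 
format; the real Java method operates on the VR's actual boot args), using 
the reviewer's suggested fallback of returning the boot args unchanged:

```python
# Hypothetical sketch: boot args embed a '|'-separated MAC sequence, and the
# new sequence is swapped in only when all three inputs are non-empty.
# Falling back to the original boot args (rather than "") avoids dropping
# valid boot args when only one of the sequences is empty.

def replace_mac_sequence(boot_args, old_seq, new_seq):
    if boot_args and old_seq and new_seq:
        return boot_args.replace(old_seq, new_seq)
    return boot_args  # reviewer's suggestion: keep boot args intact

args = "nic_macs=02:00:aa:00:00:01|02:00:aa:00:00:02 type=router"
out = replace_mac_sequence(args, "02:00:aa:00:00:02", "02:00:bb:00:00:02")
print(out)  # nic_macs=02:00:aa:00:00:01|02:00:bb:00:00:02 type=router
```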


> [VMware] RVRs have mismatching MAC addresses for extra public NICs
> --
>
> Key: CLOUDSTACK-9717
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9717
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller, VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> [CLOUDSTACK-985|https://issues.apache.org/jira/browse/CLOUDSTACK-985] doesn't 
> seem to be completely fixed.
> ISSUE
> ==
> If there are two public networks on two VLANs, and a pair redundant VRs 
> acquire IPs from both, the associated NICs on the redundant VRs will have 
> mismatching MAC addresses.  
> The example below shows the eth2 NICs for the first public network 
> (210.140.168.0/21) have matching MAC addresses (06:c4:b6:00:03:df) as 
> expected, but the eth3 NICs for the second one (210.140.160.0/21) have 
> mismatching MACs (02:00:50:e1:6c:cd versus 02:00:5a:e6:6c:d5).
> *r-43584-VM (Master)*
> 6: eth2:  mtu 1500 qdisc mq state UNKNOWN 
> qlen 1000 
> link/ether 06:c4:b6:00:03:df brd ff:ff:ff:ff:ff:ff 
> inet 210.140.168.42/21 brd 210.140.175.255 scope global eth2 
> inet 210.140.168.20/21 brd 210.140.175.255 scope global secondary eth2 
> 8: eth3:  mtu 1500 qdisc mq state UNKNOWN 
> qlen 1000 
> link/ether 02:00:50:e1:6c:cd brd ff:ff:ff:ff:ff:ff 
> inet 210.140.162.124/21 brd 210.140.167.255 scope global eth3 
> inet 210.140.163.36/21 brd 210.140.167.255 scope global secondary eth3 
> *r-43585-VM (Backup)*
> 6: eth2:  mtu 1500 qdisc noop state DOWN qlen 1000 
> link/ether 06:c4:b6:00:03:df brd ff:ff:ff:ff:ff:ff 
> inet 210.140.168.42/21 brd 210.140.175.255 scope global eth2 
> inet 210.140.168.20/21 brd 210.140.175.255 scope global secondary eth2 
> 8: eth3:  mtu 1500 qdisc noop state DOWN qlen 1000 
> link/ether 02:00:5a:e6:6c:d5 brd ff:ff:ff:ff:ff:ff 
> inet 210.140.162.124/21 brd 210.140.167.255 scope global eth3 
> inet 210.140.163.36/21 brd 210.140.167.255 scope global secondary eth3 
> CloudStack should ensure that the NICs for all public networks have matching 
> MACs.
> REPRO STEPS
> ==
> 1) Set up redundant VR.
> 2) Set up multiple public networks on different VLANs.
> 3) Acquire IPs in the RVR network until the VRs get IPs in the different 
> public networks.
> 4) Confirm the mismatching MAC addresses.
> EXPECTED BEHAVIOR
> ==
> Redundant VRs have matching MACs for all public networks.
> ACTUAL BEHAVIOR
> ==
> Redundant VRs have matching MACs only for the first public network.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9717) [VMware] RVRs have mismatching MAC addresses for extra public NICs

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907606#comment-15907606
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9717:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1878#discussion_r105679910
  
--- Diff: 
plugins/hypervisors/vmware/test/com/cloud/hypervisor/vmware/resource/VmwareResourceTest.java
 ---
@@ -216,6 +216,20 @@ public void testScaleVMF1() throws Exception {
 }
 
 @Test
+public void testReplaceNicsMacSequenceInBootArgs() throws Exception {
--- End diff --

Do you need this `throws Exception` here?
It does not seem to be required by any of the method calls you have here.


> [VMware] RVRs have mismatching MAC addresses for extra public NICs
> --
>
> Key: CLOUDSTACK-9717
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9717
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller, VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> [CLOUDSTACK-985|https://issues.apache.org/jira/browse/CLOUDSTACK-985] doesn't 
> seem to be completely fixed.
> ISSUE
> ==
> If there are two public networks on two VLANs, and a pair redundant VRs 
> acquire IPs from both, the associated NICs on the redundant VRs will have 
> mismatching MAC addresses.  
> The example below shows the eth2 NICs for the first public network 
> (210.140.168.0/21) have matching MAC addresses (06:c4:b6:00:03:df) as 
> expected, but the eth3 NICs for the second one (210.140.160.0/21) have 
> mismatching MACs (02:00:50:e1:6c:cd versus 02:00:5a:e6:6c:d5).
> *r-43584-VM (Master)*
> 6: eth2:  mtu 1500 qdisc mq state UNKNOWN 
> qlen 1000 
> link/ether 06:c4:b6:00:03:df brd ff:ff:ff:ff:ff:ff 
> inet 210.140.168.42/21 brd 210.140.175.255 scope global eth2 
> inet 210.140.168.20/21 brd 210.140.175.255 scope global secondary eth2 
> 8: eth3:  mtu 1500 qdisc mq state UNKNOWN 
> qlen 1000 
> link/ether 02:00:50:e1:6c:cd brd ff:ff:ff:ff:ff:ff 
> inet 210.140.162.124/21 brd 210.140.167.255 scope global eth3 
> inet 210.140.163.36/21 brd 210.140.167.255 scope global secondary eth3 
> *r-43585-VM (Backup)*
> 6: eth2:  mtu 1500 qdisc noop state DOWN qlen 1000 
> link/ether 06:c4:b6:00:03:df brd ff:ff:ff:ff:ff:ff 
> inet 210.140.168.42/21 brd 210.140.175.255 scope global eth2 
> inet 210.140.168.20/21 brd 210.140.175.255 scope global secondary eth2 
> 8: eth3:  mtu 1500 qdisc noop state DOWN qlen 1000 
> link/ether 02:00:5a:e6:6c:d5 brd ff:ff:ff:ff:ff:ff 
> inet 210.140.162.124/21 brd 210.140.167.255 scope global eth3 
> inet 210.140.163.36/21 brd 210.140.167.255 scope global secondary eth3 
> CloudStack should ensure that the NICs for all public networks have matching 
> MACs.
> REPRO STEPS
> ==
> 1) Set up redundant VR.
> 2) Set up multiple public networks on different VLANs.
> 3) Acquire IPs in the RVR network until the VRs get IPs in the different 
> public networks.
> 4) Confirm the mismatching MAC addresses.
> EXPECTED BEHAVIOR
> ==
> Redundant VRs have matching MACs for all public networks.
> ACTUAL BEHAVIOR
> ==
> Redundant VRs have matching MACs only for the first public network.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9804) Add Cinder as a storage driver to Cloudstack

2017-03-13 Thread Syed Ahmed (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907741#comment-15907741
 ] 

Syed Ahmed commented on CLOUDSTACK-9804:


Hi Shanika. Thank you for your interest! Can you post to the mailing list 
d...@cloudstack.apache.org, along with the JIRA ID of the project you are 
interested in, and we will pick it up from there.



> Add Cinder as a storage driver to Cloudstack
> 
>
> Key: CLOUDSTACK-9804
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9804
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Reporter: Syed Ahmed
>Priority: Minor
>  Labels: GSoC2017
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CLOUDSTACK-9804) Add Cinder as a storage driver to Cloudstack

2017-03-13 Thread Syed Ahmed (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Syed Ahmed updated CLOUDSTACK-9804:
---
Labels: GSoC2017 mentor  (was: GSoC2017)

> Add Cinder as a storage driver to Cloudstack
> 
>
> Key: CLOUDSTACK-9804
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9804
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Reporter: Syed Ahmed
>Priority: Minor
>  Labels: GSoC2017, mentor
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9569) VR on shared network not starting on KVM

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907787#comment-15907787
 ] 

ASF subversion and git services commented on CLOUDSTACK-9569:
-

Commit 714221234d41920ccb131367cca000cd4da7b261 in cloudstack's branch 
refs/heads/master from Wei Zhou
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=7142212 ]

CLOUDSTACK-9569: propagate global configuration 
router.aggregation.command.each.timeout to KVM agent


> VR on shared network not starting on KVM
> 
>
> Key: CLOUDSTACK-9569
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9569
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller
>Affects Versions: 4.9.0
>Reporter: John Burwell
>Priority: Critical
> Fix For: 4.9.2.0, 4.10.1.0, 4.11.0.0
>
> Attachments: cloud.log
>
>
> A VR for a shared network on KVM fails to complete startup with the following 
> behavior:
> # VR starts on KVM
> # Agent pings VR
> # Increase timeout from 120 seconds to 1200 seconds
> # API configuration starts
> The Management Server reports that the command times out.  Please see the 
> attached {cloud.log} which depicts the activity of the VR through the 
> timeout.  This failure does not occur on VMware.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9569) VR on shared network not starting on KVM

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907794#comment-15907794
 ] 

ASF subversion and git services commented on CLOUDSTACK-9569:
-

Commit 7b719c71fc15ce118fb3c2825790d615975eaefd in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=7b719c7 ]

Merge pull request #1856 from ustcweizhou/set-kvm-host-params

[4.9] CLOUDSTACK-9569: propagate global configuration 
router.aggregation.command.each.timeout to KVM agent

The router.aggregation.command.each.timeout setting in the global 
configuration is only applied to newly created KVM hosts.
For existing KVM hosts, changing the value is not effective.
We need to propagate the configuration to existing hosts when the 
cloudstack-agent connects.

* pr/1856:
  CLOUDSTACK-9569: propagate global configuration 
router.aggregation.command.each.timeout to KVM agent

Signed-off-by: Rajani Karuturi 
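A minimal sketch of the idea behind this commit, in Python with purely illustrative names (the real change lives in CloudStack's Java agent and management-server code): the management server includes its current global settings in the handshake performed when an agent connects, so a changed value reaches existing hosts rather than only newly added ones.

```python
# Sketch: push a global setting to an agent on (re)connect, so that changed
# values reach existing hosts, not only newly created ones.
# All class and method names here are illustrative, not CloudStack's API.

class Agent:
    def __init__(self):
        # Default baked in when the host was first added.
        self.params = {"router.aggregation.command.each.timeout": 3}

    def apply_params(self, params):
        # Overwrite stale values with whatever the server currently holds.
        self.params.update(params)

class ManagementServer:
    def __init__(self):
        self.global_config = {"router.aggregation.command.each.timeout": 600}

    def on_agent_connect(self, agent):
        # Propagate current settings as part of the connection handshake.
        agent.apply_params(dict(self.global_config))

agent = Agent()
ManagementServer().on_agent_connect(agent)
print(agent.params["router.aggregation.command.each.timeout"])  # 600
```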




[jira] [Commented] (CLOUDSTACK-9569) VR on shared network not starting on KVM

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907803#comment-15907803
 ] 

ASF subversion and git services commented on CLOUDSTACK-9569:
-



[jira] [Commented] (CLOUDSTACK-9569) VR on shared network not starting on KVM

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907813#comment-15907813
 ] 

ASF subversion and git services commented on CLOUDSTACK-9569:
-

Commit 56e851ca46e3e3c1d0d1d0a544acc7ed964fb5bc in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=56e851c ]

Merge release branch 4.9 to master

* 4.9:
  moved logrotate from cron.daily to cron.hourly for vpcrouter in 
cloud-early-config
  CLOUDSTACK-9569: propagate global configuration 
router.aggregation.command.each.timeout to KVM agent




[jira] [Commented] (CLOUDSTACK-9569) VR on shared network not starting on KVM

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907822#comment-15907822
 ] 

ASF subversion and git services commented on CLOUDSTACK-9569:
-

Commit 714221234d41920ccb131367cca000cd4da7b261 in cloudstack's branch 
refs/heads/4.9 from Wei Zhou
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=7142212 ]

CLOUDSTACK-9569: propagate global configuration 
router.aggregation.command.each.timeout to KVM agent




[jira] [Commented] (CLOUDSTACK-9698) Make the wait timeout for NIC adapter hotplug as configurable

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907826#comment-15907826
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9698:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1861


> Make the wait timeout for NIC adapter hotplug as configurable
> -
>
> Key: CLOUDSTACK-9698
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9698
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.9.0.1
> Environment: ACS 4.9 branch commit 
> a0e36b73aebe43bfe6bec3ef8f53e8cb99ecbc32
> vSphere 5.5
>Reporter: Sateesh Chodapuneedi
>Assignee: Sateesh Chodapuneedi
> Fix For: 4.9.1.0
>
>
> Currently ACS waits 15 seconds (*hard-coded*) for a hot-plugged NIC in the VR 
> to be detected by the guest OS. The time taken to detect a hot-plugged NIC 
> depends on the NIC adapter type (E1000, VMXNET3, E1000e, etc.) and on the 
> guest OS itself. In uncommon scenarios NIC detection may take longer than 15 
> seconds; in such cases the hotplug is treated as a failure, which in turn 
> fails the VPC tier configuration. Making the wait timeout for NIC adapter 
> hotplug configurable helps admins in such scenarios.
> Also, if VMware introduces new NIC adapter types in the future that take 
> longer to be detected by the guest OS, it is good to have the flexibility of 
> configuring the wait timeout.
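The fix pattern described above, replacing a hard-coded 15-second wait with a configurable timeout, can be sketched as a simple poll loop; the names below are illustrative, not the actual CloudStack code:

```python
import time

def wait_for_nic(nic_present, timeout_ms=15000, poll_interval_s=0.5):
    """Poll nic_present() until it returns True or timeout_ms elapses.
    timeout_ms replaces the previously hard-coded 15-second wait."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if nic_present():
            return True
        time.sleep(poll_interval_s)
    return False

# A slow guest OS: the NIC only "appears" on the third probe.
probes = iter([False, False, True])
assert wait_for_nic(lambda: next(probes), timeout_ms=5000, poll_interval_s=0.01)
```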





[jira] [Commented] (CLOUDSTACK-5806) Storage types other than NFS/VMFS can't overprovision

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907827#comment-15907827
 ] 

ASF GitHub Bot commented on CLOUDSTACK-5806:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1958


> Storage types other than NFS/VMFS can't overprovision
> -
>
> Key: CLOUDSTACK-5806
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5806
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0, 4.3.0, Future
>Reporter: Marcus Sorensen
>Assignee: edison su
>Priority: Critical
> Fix For: 4.4.0
>
>
> Edison, Mike, or I can probably fix this. The management server hardcodes the 
> storage types that can overprovision; this needs to be fixed.
> Edison suggests:
> We can move the check into the storage driver's capabilities method.
> Each storage driver reports its capabilities via 
> DataStoreDriver.getCapabilities(), which returns a Map<String, String>; we 
> can change the signature to Map<String, Object>.
> CloudStackPrimaryDataStoreDriverImpl (the default storage driver) could then 
> return something like:
> StorageOverProvision comparator = new StorageOverProvision() {
>     public boolean isOverProvisionSupported(DataStore store) {
>         PrimaryDataStoreInfo storagePool = (PrimaryDataStoreInfo) store;
>         StoragePoolType type = storagePool.getPoolType();
>         return type == StoragePoolType.NetworkFilesystem
>             || type == StoragePoolType.VMFS;
>     }
> };
> Map<String, Object> caps = new HashMap<String, Object>();
> caps.put("storageOverProvision", comparator);
> return caps;
> Wherever the management server needs to check the overprovisioning 
> capability, we can do the following:
> DataStore primaryStore = dataStoreManager.getPrimaryDataStore(primaryStoreId);
> Map<String, Object> caps = primaryStore.getDriver().getCapabilities();
> StorageOverProvision overprovision = 
>     (StorageOverProvision) caps.get("storageOverProvision");
> boolean result = overprovision.isOverProvisionSupported(primaryStore);
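Edison's suggestion boils down to replacing a hardcoded list of storage types with a capability object that each driver publishes; the same shape in a minimal Python sketch (illustrative names only):

```python
# Each driver exposes a capabilities map; overprovisioning support becomes an
# entry in that map instead of a type check hardcoded in the management server.

class DefaultDriver:
    OVERPROVISIONING_TYPES = {"NFS", "VMFS"}

    def get_capabilities(self):
        # Map of capability name -> callable, mirroring Map<String, Object>.
        return {
            "storageOverProvision":
                lambda store: store["pool_type"] in self.OVERPROVISIONING_TYPES
        }

def supports_overprovisioning(driver, store):
    caps = driver.get_capabilities()
    return caps["storageOverProvision"](store)

print(supports_overprovisioning(DefaultDriver(), {"pool_type": "NFS"}))  # True
print(supports_overprovisioning(DefaultDriver(), {"pool_type": "RBD"}))  # False
```

A driver that supports overprovisioning on other pool types then only overrides its own capabilities map; no management-server change is needed.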





[jira] [Commented] (CLOUDSTACK-9569) VR on shared network not starting on KVM

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907828#comment-15907828
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9569:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1856




[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907834#comment-15907834
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9794:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1953


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. After raising this limit directly in the DB and attaching more 
> than 14 disks, the attach failed with the exception below:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.
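The NoSuchElementException comes from running off the end of a fixed-size candidate list when picking a device id. A sketch of a bounds-checked allocator (a hypothetical helper, not the actual VolumeApiServiceImpl code; the reserved CD-ROM slot 3 is an assumption drawn from common CloudStack behavior):

```python
def next_free_device_id(used_ids, max_devices):
    """Return the lowest free device id below max_devices, skipping the
    (assumed) CD-ROM slot 3, and raise a clear error instead of running
    off the end of a hardcoded candidate list."""
    for dev_id in range(1, max_devices):
        if dev_id != 3 and dev_id not in used_ids:
            return dev_id
    raise ValueError(f"all {max_devices} device slots are in use")

print(next_free_device_id({1, 2}, 16))  # 3 is reserved, so -> 4
```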





[jira] [Commented] (CLOUDSTACK-9569) VR on shared network not starting on KVM

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907836#comment-15907836
 ] 

ASF subversion and git services commented on CLOUDSTACK-9569:
-

Commit 7b719c71fc15ce118fb3c2825790d615975eaefd in cloudstack's branch 
refs/heads/4.9 from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=7b719c7 ]



[jira] [Commented] (CLOUDSTACK-9638) Problems caused when inputting double-byte numbers for custom compute offerings

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907841#comment-15907841
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9638:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1967
  
@bvbharat can you start internal CI and post results?


>  Problems caused when inputting double-byte numbers for custom compute 
> offerings
> 
>
> Key: CLOUDSTACK-9638
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9638
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.9.0
>Reporter: Bharat Kumar
>Assignee: Bharat Kumar
> Fix For: 4.9.1.0
>
>
> When creating a VM with a custom compute offering, CloudPlatform allows the 
> input of double-byte numbers. The VM will be created but listVirtualMachines 
> will subsequently fail with an exception. The problem seems to be with the 
> value of detail_value for the associated entry in the user_vm_view table. If 
> you manually change it to a regular number the issue is resolved. Double-byte 
> numbers should either be considered invalid for custom offerings and 
> rejected, or listVMs should work when they are used for a VM.
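A server-side guard would normalize full-width ("double-byte") digits before validating and storing the value; a sketch of the idea (illustrative, not the actual fix):

```python
import unicodedata

def parse_custom_offering_value(raw):
    """Normalize full-width digits (e.g. '２０４８') to ASCII, then validate,
    so only plain numbers ever reach the detail_value column backing
    user_vm_view."""
    normalized = unicodedata.normalize("NFKC", raw)
    if not normalized.isascii() or not normalized.isdigit():
        raise ValueError(f"invalid numeric value: {raw!r}")
    return int(normalized)

print(parse_custom_offering_value("２０４８"))  # 2048
```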





[jira] [Commented] (CLOUDSTACK-8880) Allocated memory more than total memory on a KVM host

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907839#comment-15907839
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8880:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/847


> Allocated memory more than total memory on a KVM host
> -
>
> Key: CLOUDSTACK-8880
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8880
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Reporter: Kishan Kavala
>Assignee: Kishan Kavala
>
> With memory over-provisioning set to 1, when the management server starts VMs 
> in parallel on one host, the memory allocated on that KVM host can end up 
> larger than its actual physical memory.
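This is a classic check-then-act race: parallel deployments each see enough free memory before any of them records its own allocation. A sketch of the guard (illustrative names, not the actual CloudStack capacity manager):

```python
import threading

class HostCapacity:
    """Serialize the free-memory check and the reservation so parallel VM
    starts cannot jointly exceed physical memory when overprovisioning is 1."""

    def __init__(self, total_mb):
        self.total_mb = total_mb
        self.allocated_mb = 0
        self._lock = threading.Lock()

    def try_reserve(self, vm_mb):
        with self._lock:
            if self.allocated_mb + vm_mb > self.total_mb:
                return False  # would exceed physical memory
            self.allocated_mb += vm_mb
            return True

host = HostCapacity(total_mb=8192)
print(host.try_reserve(6144))  # True
print(host.try_reserve(4096))  # False: only 2048 MB remain
```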





[jira] [Commented] (CLOUDSTACK-9569) VR on shared network not starting on KVM

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907843#comment-15907843
 ] 

ASF subversion and git services commented on CLOUDSTACK-9569:
-



[jira] [Commented] (CLOUDSTACK-9698) Make the wait timeout for NIC adapter hotplug as configurable

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907847#comment-15907847
 ] 

ASF subversion and git services commented on CLOUDSTACK-9698:
-

Commit d171bb78570416f9a54263805c59422e7be5a195 in cloudstack's branch 
refs/heads/master from [~sateeshc]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=d171bb7 ]

CLOUDSTACK-9698 Make the wait timeout for NIC adapter hotplug as configurable

Currently ACS waits 15 seconds (hard-coded) for a hot-plugged NIC in the VR to 
be detected by the guest OS. The time taken to detect a hot-plugged NIC depends 
on the NIC adapter type (E1000, VMXNET3, E1000e, etc.) and on the guest OS 
itself. In uncommon scenarios NIC detection may take longer than 15 seconds; in 
such cases the hotplug is treated as a failure, which in turn fails the VPC 
tier configuration. Making the wait timeout for NIC adapter hotplug 
configurable helps admins in such scenarios.

Also, if VMware introduces new NIC adapter types in the future that take longer 
to be detected by the guest OS, it is good to have the flexibility of 
configuring the wait timeout as a fallback mechanism.

Signed-off-by: Sateesh Chodapuneedi 




[jira] [Commented] (CLOUDSTACK-9698) Make the wait timeout for NIC adapter hotplug as configurable

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907852#comment-15907852
 ] 

ASF subversion and git services commented on CLOUDSTACK-9698:
-

Commit 09802e0f3c7a488545248e03431b9741076b1942 in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=09802e0 ]

Merge pull request #1861 from sateesh-chodapuneedi/pr-cloudstack-9698

CLOUDSTACK-9698 [VMware] Make hardcoded wait timeout for NIC adapter hotplug 
as configurable

Jira
===
CLOUDSTACK-9698 [VMware] Make hardcoded wait timeout for NIC adapter hotplug as 
configurable

Description
=
Currently ACS waits 15 seconds (hard-coded) for a hot-plugged NIC in a VR 
running on VMware to be detected by the guest OS. The time taken to detect a 
hot-plugged NIC depends on the VMware NIC adapter type (E1000, VMXNET3, E1000e, 
etc.) and on the guest OS itself. In uncommon scenarios NIC detection may take 
longer than 15 seconds; in such cases the hotplug is treated as a failure, 
which in turn fails the VPC tier configuration. Making the wait timeout 
configurable helps admins in such scenarios. This is specific to VRs running 
on the VMware hypervisor.

Also, if VMware introduces new NIC adapter types in the future that take longer 
to be detected by the guest OS, it is good to have the flexibility of 
configuring the wait timeout as a fallback mechanism.

Fix
===
Introduce new configuration parameter (via ConfigKey) 
"vmware.nic.hotplug.wait.timeout" which is "Wait timeout (milli seconds) for 
hot plugged NIC of VM to be detected by guest OS." as fallback instead of hard 
coded timeout, to ensure flexibility for admins given the listed scenarios 
above.

Signed-off-by: Sateesh Chodapuneedi 

* pr/1861:
  CLOUDSTACK-9698 Make the wait timeout for NIC adapter hotplug as configurable

Signed-off-by: Rajani Karuturi 
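The ConfigKey approach described above amounts to "use the admin-configured value, fall back to the old hard-coded default"; sketched in Python (the real fix is a Java ConfigKey, and only the setting name is taken from the text above):

```python
DEFAULT_NIC_HOTPLUG_WAIT_MS = 15000  # the previously hard-coded 15 seconds

def nic_hotplug_wait_timeout_ms(settings):
    """Return the configured hotplug wait timeout in milliseconds, falling
    back to the old default when the admin has not set the key."""
    raw = settings.get("vmware.nic.hotplug.wait.timeout")
    return int(raw) if raw is not None else DEFAULT_NIC_HOTPLUG_WAIT_MS

print(nic_hotplug_wait_timeout_ms({}))  # 15000
print(nic_hotplug_wait_timeout_ms({"vmware.nic.hotplug.wait.timeout": "30000"}))  # 30000
```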




[jira] [Commented] (CLOUDSTACK-9698) Make the wait timeout for NIC adapter hotplug as configurable

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907856#comment-15907856
 ] 

ASF subversion and git services commented on CLOUDSTACK-9698:
-



[jira] [Commented] (CLOUDSTACK-9698) Make the wait timeout for NIC adapter hotplug as configurable

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907859#comment-15907859
 ] 

ASF subversion and git services commented on CLOUDSTACK-9698:
-

Commit 09802e0f3c7a488545248e03431b9741076b1942 in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=09802e0 ]

Merge pull request #1861 from sateesh-chodapuneedi/pr-cloudstack-9698

CLOUDSTACK-9698 [VMware] Make hardcorded wait timeout for NIC adapter hotplug 
as configurableJira
===
CLOUDSTACK-9698 [VMware] Make hardcoded wait timeout for NIC adapter hotplug as 
configurable

Description
=
Currently ACS waits 15 seconds (hard-coded) for a hot-plugged NIC in a VR 
running on VMware to be detected by the guest OS.
The time taken to detect a hot-plugged NIC depends on the type of VMware NIC 
adapter (E1000, VMXNET3, E1000e, etc.) and on the guest OS itself. In uncommon 
scenarios NIC detection may take longer than 15 seconds;
in such cases the NIC hotplug is treated as a failure, which results in VPC 
tier configuration failure.
Making the wait timeout for NIC adapter hotplug configurable will be helpful 
for admins in such scenarios. This is specific to VRs running on the VMware 
hypervisor.

Also, if VMware introduces new NIC adapter types in the future that take 
longer to be detected by the guest OS, it is good to have the flexibility of 
configuring the wait timeout as a fallback mechanism.

Fix
===
Introduce a new configuration parameter (via ConfigKey), 
"vmware.nic.hotplug.wait.timeout": "Wait timeout (milliseconds) for hot 
plugged NIC of VM to be detected by guest OS." This replaces the hard-coded 
timeout and gives admins the flexibility needed in the scenarios listed 
above.
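The fix described above amounts to replacing a fixed 15-second wait with a configurable deadline. A minimal sketch of the underlying polling loop (function and parameter names are hypothetical, not the actual ACS code; 15000 ms mirrors the previously hard-coded default):

```python
import time

def wait_for_nic(detect_fn, timeout_ms=15000, poll_interval_ms=1000):
    """Poll detect_fn until the hot-plugged NIC is reported or timeout_ms elapses.

    timeout_ms plays the role of a configurable setting such as the
    vmware.nic.hotplug.wait.timeout parameter described above.
    """
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if detect_fn():  # e.g. check the guest OS for the new adapter
            return True
        time.sleep(poll_interval_ms / 1000.0)
    return False  # caller treats this as a NIC hotplug failure
```

Raising the timeout for a slow guest OS then becomes a configuration change rather than a code change.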

Signed-off-by: Sateesh Chodapuneedi 

* pr/1861:
  CLOUDSTACK-9698 Make the wait timeout for NIC adapter hotplug as configurable

Signed-off-by: Rajani Karuturi 


> Make the wait timeout for NIC adapter hotplug as configurable
> -
>
> Key: CLOUDSTACK-9698
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9698
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.9.0.1
> Environment: ACS 4.9 branch commit 
> a0e36b73aebe43bfe6bec3ef8f53e8cb99ecbc32
> vSphere 5.5
>Reporter: Sateesh Chodapuneedi
>Assignee: Sateesh Chodapuneedi
> Fix For: 4.9.1.0
>
>
> Currently ACS waits 15 seconds (*hard coded*) for a hot-plugged NIC in a VR 
> to be detected by the guest OS. The time taken to detect a hot-plugged NIC 
> depends on the type of NIC adapter (E1000, VMXNET3, E1000e, etc.) and on the 
> guest OS itself. In uncommon scenarios NIC detection may take longer than 15 
> seconds; in such cases the NIC hotplug is treated as a failure, which 
> results in VPC tier configuration failure. Making the wait timeout for NIC 
> adapter hotplug configurable will be helpful for admins in such scenarios. 
> Also, if VMware introduces new NIC adapter types in the future that take 
> longer to be detected by the guest OS, it is good to have the flexibility of 
> configuring the wait timeout.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-5806) Storage types other than NFS/VMFS can't overprovision

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907867#comment-15907867
 ] 

ASF subversion and git services commented on CLOUDSTACK-5806:
-

Commit 9b85cbca4187f34aee52dfe5b5d02257d7dcd5e7 in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=9b85cbc ]

Merge pull request #1958 from shapeblue/CLOUDSTACK-5806

CLOUDSTACK-5806: add presetup to storage types that support over provisioning

Ideally this should be configurable via global settings

* pr/1958:
  CLOUDSTACK-5806: add presetup to storage types that support over provisioning 
Ideally this should be configurable via global settings

Signed-off-by: Rajani Karuturi 


> Storage types other than NFS/VMFS can't overprovision
> -
>
> Key: CLOUDSTACK-5806
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5806
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0, 4.3.0, Future
>Reporter: Marcus Sorensen
>Assignee: edison su
>Priority: Critical
> Fix For: 4.4.0
>
>
> Edison, Mike, or I can probably fix this. The mgmt server hardcodes the 
> storage types that can overprovision. Need to fix this.
> Edison suggests:
> We can move it to the storage driver's capabilities method.
> Each storage driver can report its capabilities in 
> DataStoreDriver.getCapabilities(), which returns a Map<String, String>; we 
> can change the signature to Map<String, Object>.
> In CloudStackPrimaryDataStoreDriverImpl (the default storage driver), 
> getCapabilities() can return something like:
> StorageOverProvision comparator = new StorageOverProvision() {
>     public boolean isOverProvisionSupported(DataStore store) {
>         PrimaryDataStoreInfo storagePool = (PrimaryDataStoreInfo) store;
>         return storagePool.getPoolType() == StoragePoolType.NetworkFilesystem
>                 || storagePool.getPoolType() == StoragePoolType.VMFS;
>     }
> };
> Map<String, Object> caps = new HashMap<String, Object>();
> caps.put("storageOverProvision", comparator);
> return caps;
> Wherever else in the mgmt server we want to check the over-provisioning 
> capability, we can do the following:
> DataStore primaryStore = dataStoreManager.getPrimaryDataStore(primaryStoreId);
> Map<String, Object> caps = primaryStore.getDriver().getCapabilities();
> StorageOverProvision overprovision = (StorageOverProvision) 
> caps.get("storageOverProvision");
> boolean result = overprovision.isOverProvisionSupported(primaryStore);
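Edison's Java-flavored sketch above can be condensed into a minimal Python rendering of the same capabilities-map idea. This is an illustrative assumption, not CloudStack code: a plain dict stands in for PrimaryDataStoreInfo, and only the class and key names (DataStoreDriver, storageOverProvision) follow the comment. The "PreSetup" entry reflects the type added by the merged fix:

```python
class DataStoreDriver:
    """Base driver: get_capabilities() maps a capability name to an object."""
    def get_capabilities(self):
        return {}

class DefaultPrimaryDataStoreDriver(DataStoreDriver):
    # Storage types allowed to over-provision; PR #1958 adds PreSetup
    # alongside the original NFS/VMFS pair.
    OVERPROVISION_TYPES = {"NFS", "VMFS", "PreSetup"}

    def get_capabilities(self):
        # The capability value is a callable, mirroring the comparator
        # object in the Java sketch.
        def is_over_provision_supported(store):
            return store.get("pool_type") in self.OVERPROVISION_TYPES
        return {"storageOverProvision": is_over_provision_supported}

def supports_overprovision(driver, store):
    """What callers elsewhere in the mgmt server would do with the map."""
    predicate = driver.get_capabilities().get("storageOverProvision")
    return bool(predicate and predicate(store))
```

With this shape, a driver that never reports the capability simply falls back to "not supported" instead of being matched against a hardcoded type list.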



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-5806) Storage types other than NFS/VMFS can't overprovision

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907863#comment-15907863
 ] 

ASF subversion and git services commented on CLOUDSTACK-5806:
-

Commit b6c259d72a5d5776634e07cb56b74e8cb4828434 in cloudstack's branch 
refs/heads/master from [~abhi_shapeblue]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=b6c259d ]

CLOUDSTACK-5806: add presetup to storage types that support over provisioning
Ideally this should be configurable via global settings


> Storage types other than NFS/VMFS can't overprovision
> -
>
> Key: CLOUDSTACK-5806
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5806
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0, 4.3.0, Future
>Reporter: Marcus Sorensen
>Assignee: edison su
>Priority: Critical
> Fix For: 4.4.0
>
>
> Edison, Mike, or I can probably fix this. The mgmt server hardcodes the 
> storage types that can overprovision. Need to fix this.
> Edison suggests:
> We can move it to the storage driver's capabilities method.
> Each storage driver can report its capabilities in 
> DataStoreDriver.getCapabilities(), which returns a Map<String, String>; we 
> can change the signature to Map<String, Object>.
> In CloudStackPrimaryDataStoreDriverImpl (the default storage driver), 
> getCapabilities() can return something like:
> StorageOverProvision comparator = new StorageOverProvision() {
>     public boolean isOverProvisionSupported(DataStore store) {
>         PrimaryDataStoreInfo storagePool = (PrimaryDataStoreInfo) store;
>         return storagePool.getPoolType() == StoragePoolType.NetworkFilesystem
>                 || storagePool.getPoolType() == StoragePoolType.VMFS;
>     }
> };
> Map<String, Object> caps = new HashMap<String, Object>();
> caps.put("storageOverProvision", comparator);
> return caps;
> Wherever else in the mgmt server we want to check the over-provisioning 
> capability, we can do the following:
> DataStore primaryStore = dataStoreManager.getPrimaryDataStore(primaryStoreId);
> Map<String, Object> caps = primaryStore.getDriver().getCapabilities();
> StorageOverProvision overprovision = (StorageOverProvision) 
> caps.get("storageOverProvision");
> boolean result = overprovision.isOverProvisionSupported(primaryStore);



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907878#comment-15907878
 ] 

ASF subversion and git services commented on CLOUDSTACK-9794:
-

Commit 93f5b6e8a391ce8b09be484d029c54d48a2b88aa in cloudstack's branch 
refs/heads/master from [~sureshkumar.anaparti]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=93f5b6e ]

CLOUDSTACK-9794: Unable to attach more than 14 devices to a VM

Updated hardcoded value with max data volumes limit from hypervisor 
capabilities.


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. Changed this limit to a higher value directly in the DB for 
> VMware and tried attaching more than 14 disks. This failed with the 
> exception below:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.
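The fix replaces the hard-coded cap with the max data volumes limit from hypervisor_capabilities when searching for a free device id. A rough sketch of that selection with hypothetical names; the reservation of device ids 0 (root disk) and 3 is an illustrative assumption, and the explicit error replaces the bare NoSuchElementException seen in the stack trace:

```python
def next_device_id(used_ids, max_data_volumes):
    """Pick the first free device id for a new data volume.

    max_data_volumes stands in for the per-hypervisor limit read from
    hypervisor_capabilities, replacing the hard-coded limit of 15 devices.
    """
    reserved = {0, 3}  # assumed reserved slots (root disk, CD-ROM style)
    for dev_id in range(1, max_data_volumes + 1):
        if dev_id not in reserved and dev_id not in used_ids:
            return dev_id
    raise RuntimeError("All device ids up to the hypervisor limit are in use")
```

Raising the DB limit then genuinely raises the number of attachable devices instead of tripping over an in-code constant.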



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9794) Unable to attach more than 14 devices to a VM

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907880#comment-15907880
 ] 

ASF subversion and git services commented on CLOUDSTACK-9794:
-

Commit 3f0fbf251c6bea5829b524077c342ea810b52323 in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=3f0fbf2 ]

Merge pull request #1953 from Accelerite/CLOUDSTACK-9794

CLOUDSTACK-9794: Unable to attach more than 14 devices to a VM. Updated the 
hardcoded value with the max data volumes limit from hypervisor capabilities.

* pr/1953:
  CLOUDSTACK-9794: Unable to attach more than 14 devices to a VM

Signed-off-by: Rajani Karuturi 


> Unable to attach more than 14 devices to a VM
> -
>
> Key: CLOUDSTACK-9794
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9794
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> A limit of 13 disks is set in hypervisor_capabilities for the VMware 
> hypervisor. Changed this limit to a higher value directly in the DB for 
> VMware and tried attaching more than 14 disks. This failed with the 
> exception below:
> {noformat}
> 2016-08-12 18:42:53,694 ERROR [c.c.a.ApiAsyncJobDispatcher] 
> (API-Job-Executor-40:ctx-56068c6b job-1015) (logid:b22938fd) Unexpected 
> exception while executing 
> org.apache.cloudstack.api.command.admin.volume.AttachVolumeCmdByAdmin
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:794)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.getDeviceId(VolumeApiServiceImpl.java:2439)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1308)
>   at 
> com.cloud.storage.VolumeApiServiceImpl.attachVolumeToVM(VolumeApiServiceImpl.java:1173)
>   at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
> {noformat}
> There was a hardcoded limit of 15 on the number of devices for a VM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8880) Allocated memory more than total memory on a KVM host

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907890#comment-15907890
 ] 

ASF subversion and git services commented on CLOUDSTACK-8880:
-

Commit 9a021904af9475c8d1a4ec0f981ff76729546cb2 in cloudstack's branch 
refs/heads/master from [~kishan]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=9a02190 ]

 Bug-ID: CLOUDSTACK-8880: calculate free memory on host before deploying VM. 
free memory = total memory - (all VM memory)


> Allocated memory more than total memory on a KVM host
> -
>
> Key: CLOUDSTACK-8880
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8880
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Reporter: Kishan Kavala
>Assignee: Kishan Kavala
>
> With memory over-provisioning set to 1, when the mgmt server starts VMs in 
> parallel on one host, the memory allocated on that KVM host can be larger 
> than its actual physical memory.
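The capacity check from the commit message ("free memory = total memory - (all vm memory)") can be sketched as follows. Names and units are hypothetical, not the actual CapacityManager code; the over-provisioning factor of 1.0 corresponds to the no-over-provisioning setting in the bug report:

```python
def can_start_vm(host_total_mb, running_vm_mb, requested_mb,
                 overprovision_factor=1.0):
    """Check free memory on a host before deploying a VM.

    free memory = (total memory * over-provisioning factor)
                  - sum of memory of all VMs already on the host.
    """
    effective_total = host_total_mb * overprovision_factor
    free_mb = effective_total - sum(running_vm_mb)
    return requested_mb <= free_mb
```

Running this check at deploy time, rather than only at allocation time, is what prevents parallel starts from collectively exceeding the host's physical memory.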



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8880) Allocated memory more than total memory on a KVM host

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907892#comment-15907892
 ] 

ASF subversion and git services commented on CLOUDSTACK-8880:
-

Commit ad7ed7a1783e468f1570b232f9b9dbf2ae88ae01 in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=ad7ed7a ]

Merge pull request #847 from kishankavala/CLOUDSTACK-8880

Bug-ID: CLOUDSTACK-8880: calculate free memory on host before deploying VM. 
free memory = total memory - (all VM memory). With memory over-provisioning 
set to 1, when the mgmt server starts VMs in parallel on one host, the memory 
allocated on that KVM host can be larger than its actual physical memory.

Fixed by checking free memory on host before starting Vm.
Added test case to check memory usage on Host.
Verified Vm deploy on Host with enough capacity and also without capacity

* pr/847:
  Bug-ID: CLOUDSTACK-8880: calculate free memory on host before deploying Vm.  
free memory = total memory - (all vm memory)

Signed-off-by: Rajani Karuturi 


> Allocated memory more than total memory on a KVM host
> -
>
> Key: CLOUDSTACK-8880
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8880
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Reporter: Kishan Kavala
>Assignee: Kishan Kavala
>
> With memory over-provisioning set to 1, when the mgmt server starts VMs in 
> parallel on one host, the memory allocated on that KVM host can be larger 
> than its actual physical memory.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8855) Improve Error Message for Host Alert State

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907914#comment-15907914
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8855:


Github user sureshanaparti commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/837#discussion_r105722036
  
--- Diff: 
engine/orchestration/src/com/cloud/agent/manager/AgentManagerImpl.java ---
@@ -1049,7 +1044,13 @@ public boolean executeUserRequest(final long hostId, 
final Event event) throws A
 }
 return true;
 } else if (event == Event.ShutdownRequested) {
-return reconnect(hostId);
+//should throw a exception here as well.instead of eating this 
up.
+   try {
+   reconnect(hostId);
+   } catch (CloudRuntimeException e) {
--- End diff --

@bvbharatk Is it possible to take the failure reason forward?


> Improve Error Message for Host Alert State
> --
>
> Key: CLOUDSTACK-8855
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8855
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Bharat Kumar
>Assignee: Bharat Kumar
>






[jira] [Commented] (CLOUDSTACK-9666) Add configuration validation for the config drive global settings

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907919#comment-15907919
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9666:


Github user cloudmonger commented on the issue:

https://github.com/apache/cloudstack/pull/1968
  
### ACS CI BVT Run
 **Summary:**
 Build Number 462
 Hypervisor xenserver
 NetworkType Advanced
 Passed=102
 Failed=3
 Skipped=7

_Link to logs Folder (search by build_no):_ 
https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0


**Failed tests:**
* test_non_contigiousvlan.py

 * test_extendPhysicalNetworkVlan Failed

* test_routers_network_ops.py

 * test_02_RVR_Network_FW_PF_SSH_default_routes_egress_false Failed

 * test_03_RVR_Network_check_router_state Failed


**Skipped tests:**
test_01_test_vm_volume_snapshot
test_vm_nic_adapter_vmxnet3
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm

**Passed test suits:**
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_login.py
test_deploy_vm_iso.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_volumes.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_vm_life_cycle.py
test_disk_offerings.py


> Add configuration validation for the config drive global settings
> -
>
> Key: CLOUDSTACK-9666
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9666
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.9.0
>Reporter: Bharat Kumar
>Assignee: Bharat Kumar
> Fix For: 4.9.2.0
>
>






[jira] [Commented] (CLOUDSTACK-8855) Improve Error Message for Host Alert State

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15910237#comment-15910237
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8855:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/837#discussion_r105725769
  
--- Diff: server/src/com/cloud/alert/AlertManagerImpl.java ---
@@ -767,7 +767,9 @@ public void sendAlert(AlertType alertType, long 
dataCenterId, Long podId, Long c
 // set up a new alert
 AlertVO newAlert = new AlertVO();
 newAlert.setType(alertType.getType());
-newAlert.setSubject(subject);
+//do not have a separate column for content.
+//appending the message to the subject for now.
+newAlert.setSubject(subject+content);
--- End diff --

Are you sure this is a good idea?
If the column for content does not exist, what about creating it with this PR? 
That way we can avoid this kind of half-measure solution, especially since the 
column `subject` has a size limit (`@Column(name = "subject", length = 999)`); 
this can potentially cause problems under certain conditions at runtime.
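The size concern above can be made concrete with a small sketch. The 999 limit comes from the `@Column` annotation quoted in the review; everything else here (class and method names) is hypothetical, not CloudStack code:

```java
// Hypothetical guard for the column-size issue discussed above: the
// `subject` column is declared with length = 999, so a naive
// subject + content concatenation can overflow it on INSERT.
public class AlertSubject {
    static final int SUBJECT_COLUMN_LENGTH = 999;

    static String buildSubject(String subject, String content) {
        String combined = subject + content;
        // Truncate defensively so persisting the alert cannot fail
        // on the column size.
        return combined.length() <= SUBJECT_COLUMN_LENGTH
                ? combined
                : combined.substring(0, SUBJECT_COLUMN_LENGTH);
    }

    public static void main(String[] args) {
        // Short message: passes through unchanged.
        System.out.println(buildSubject("host down: ", "agent unreachable").length());
        // Oversized message: capped at the column length.
        System.out.println(buildSubject("x", "y".repeat(2000)).length());
    }
}
```

Truncation silently drops detail, which is exactly why the reviewer suggests a dedicated content column instead.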


> Improve Error Message for Host Alert State
> --
>
> Key: CLOUDSTACK-8855
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8855
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Bharat Kumar
>Assignee: Bharat Kumar
>






[jira] [Commented] (CLOUDSTACK-8855) Improve Error Message for Host Alert State

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15910233#comment-15910233
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8855:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/837#discussion_r105723252
  
--- Diff: api/src/com/cloud/resource/ResourceService.java ---
@@ -50,7 +52,7 @@
 
 Host cancelMaintenance(CancelMaintenanceCmd cmd);
 
-Host reconnectHost(ReconnectHostCmd cmd);
+Host reconnectHost(ReconnectHostCmd cmd) throws CloudRuntimeException, 
AgentUnavailableException;
--- End diff --

The `CloudRuntimeException` is a `RuntimeException`; you do not need to 
declare it in the method signature.
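The reviewer's point rests on Java's checked/unchecked exception distinction: subclasses of `RuntimeException` compile without a `throws` clause. A minimal illustration, using a stand-in class name rather than CloudStack's actual `CloudRuntimeException`:

```java
// Unchecked exceptions (subclasses of RuntimeException) need no
// `throws` declaration; listing them in a signature is redundant.
public class UncheckedDemo {

    static class CloudRuntimeLikeException extends RuntimeException {
        CloudRuntimeLikeException(String msg) { super(msg); }
    }

    // Compiles without declaring the unchecked exception.
    static void reconnect(long hostId) {
        if (hostId < 0) {
            throw new CloudRuntimeLikeException("no such host: " + hostId);
        }
    }

    public static void main(String[] args) {
        try {
            reconnect(-1);
        } catch (CloudRuntimeLikeException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

Declaring it anyway is legal Java, but it suggests a checked contract that the compiler will not actually enforce on callers.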


> Improve Error Message for Host Alert State
> --
>
> Key: CLOUDSTACK-8855
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8855
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Bharat Kumar
>Assignee: Bharat Kumar
>






[jira] [Commented] (CLOUDSTACK-8855) Improve Error Message for Host Alert State

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15910234#comment-15910234
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8855:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/837#discussion_r105723986
  
--- Diff: engine/components-api/src/com/cloud/agent/AgentManager.java ---
@@ -141,7 +142,7 @@
 
 public void pullAgentOutMaintenance(long hostId);
 
-boolean reconnect(long hostId);
+void reconnect(long hostId) throws CloudRuntimeException, 
AgentUnavailableException;
--- End diff --

The `CloudRuntimeException` is a `RuntimeException`; you do not need to 
declare it in the method signature.


> Improve Error Message for Host Alert State
> --
>
> Key: CLOUDSTACK-8855
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8855
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Bharat Kumar
>Assignee: Bharat Kumar
>






[jira] [Commented] (CLOUDSTACK-8855) Improve Error Message for Host Alert State

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15910238#comment-15910238
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8855:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/837#discussion_r105724153
  
--- Diff: 
plugins/network-elements/netscaler/src/com/cloud/network/element/NetscalerElement.java
 ---
@@ -512,7 +512,11 @@ public void 
doInTransactionWithoutResult(TransactionStatus status) {
 });
 HostVO host = _hostDao.findById(lbDeviceVo.getHostId());
 
-_agentMgr.reconnect(host.getId());
+try {
+_agentMgr.reconnect(host.getId());
+} catch (Exception e ) {
--- End diff --

Cannot you use a more specific `catch` here?
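The review comment above is about catch specificity: a blanket `catch (Exception e)` also swallows unrelated failures such as programming errors. A sketch of the preferred shape, with a stand-in exception name rather than CloudStack's real one:

```java
// Prefer catching the specific, expected exception type over a
// blanket `catch (Exception e)`, so unrelated failures (e.g. NPEs)
// still propagate and surface the real bug.
public class SpecificCatch {

    static class AgentUnavailableLikeException extends Exception {
        AgentUnavailableLikeException(String msg) { super(msg); }
    }

    static void reconnect(long hostId) throws AgentUnavailableLikeException {
        throw new AgentUnavailableLikeException("agent " + hostId + " unavailable");
    }

    public static void main(String[] args) {
        try {
            reconnect(42);
        } catch (AgentUnavailableLikeException e) {
            // Only the expected failure mode is handled here.
            System.out.println("handled: " + e.getMessage());
        }
    }
}
```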


> Improve Error Message for Host Alert State
> --
>
> Key: CLOUDSTACK-8855
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8855
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Bharat Kumar
>Assignee: Bharat Kumar
>






[jira] [Commented] (CLOUDSTACK-9831) Previous pod_id still remains in the vm_instance table after VM migration with migrateVirtualMachineWithVolume

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15910437#comment-15910437
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9831:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/2002
  
@sudhansu7 Shouldn't we update the pod id only upon a successful migration? If 
the migration fails, we will have the wrong pod id in the DB.


> Previous pod_id still remains in the vm_instance table after VM migration 
> with migrateVirtualMachineWithVolume
> --
>
> Key: CLOUDSTACK-9831
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9831
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.10.0.0
>Reporter: Sudhansu Sahu
>
> Previous pod_id still remains in the vm_instance table after VM migration 
> with migrateVirtualMachineWithVolume
> {noformat}
> Previous pod_id still remains in the vm_instance table after VM migration 
> with migrateVirtualMachineWithVolume
> Before migrateVirtualMachineWithVolume
> mysql> select v.id,v.instance_name,h.name,v.pod_id as 
> pod_id_from_instance_tb,h.pod_id as pod_id_from_host_tb from vm_instance v, 
> host h where v.host_id=h.id and v.id=2;
> +----+---------------+--------+-------------------------+---------------------+
> | id | instance_name | name   | pod_id_from_instance_tb | pod_id_from_host_tb |
> +----+---------------+--------+-------------------------+---------------------+
> |  2 | i-2-2-VM      | testVM |                       1 |                   1 |
> +----+---------------+--------+-------------------------+---------------------+
> 1 row in set (0.00 sec)
> After migrateVirtualMachineWithVolume
> mysql> select v.id,v.instance_name,h.name,v.pod_id as 
> pod_id_from_instance_tb,h.pod_id as pod_id_from_host_tb from vm_instance v, 
> host h where v.host_id=h.id and v.id=3;
> +----+---------------+---------+-------------------------+---------------------+
> | id | instance_name | name    | pod_id_from_instance_tb | pod_id_from_host_tb |
> +----+---------------+---------+-------------------------+---------------------+
> |  3 | i-2-3-VM      | testVm1 |                       1 |                   2 |
> +----+---------------+---------+-------------------------+---------------------+
> 1 row in set (0.00 sec)
> {noformat}





[jira] [Commented] (CLOUDSTACK-8855) Improve Error Message for Host Alert State

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15910235#comment-15910235
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8855:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/837#discussion_r105723834
  
--- Diff: 
engine/orchestration/src/com/cloud/agent/manager/AgentManagerImpl.java ---
@@ -986,33 +986,28 @@ public Answer easySend(final Long hostId, final 
Command cmd) {
 }
 
 @Override
-public boolean reconnect(final long hostId) {
+public void reconnect(final long hostId) throws CloudRuntimeException, 
AgentUnavailableException{
--- End diff --

You added `AgentUnavailableException` to the method signature, but I did 
not see you throwing this exception anywhere.


> Improve Error Message for Host Alert State
> --
>
> Key: CLOUDSTACK-8855
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8855
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Bharat Kumar
>Assignee: Bharat Kumar
>






[jira] [Commented] (CLOUDSTACK-8855) Improve Error Message for Host Alert State

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15910236#comment-15910236
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8855:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/837#discussion_r105723444
  
--- Diff: 
api/src/org/apache/cloudstack/api/command/admin/host/ReconnectHostCmd.java ---
@@ -100,17 +103,18 @@ public Long getInstanceId() {
 @Override
 public void execute() {
 try {
-Host result = _resourceService.reconnectHost(this);
-if (result != null) {
-HostResponse response = 
_responseGenerator.createHostResponse(result);
-response.setResponseName(getCommandName());
-this.setResponseObject(response);
-} else {
-throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, 
"Failed to reconnect host");
-}
-} catch (Exception ex) {
-s_logger.warn("Exception: ", ex);
-throw new 
ServerApiException(ApiErrorCode.RESOURCE_UNAVAILABLE_ERROR, ex.getMessage());
+Host result =_resourceService.reconnectHost(this);
--- End diff --

Before, the `result` was checked for `null`; can the method 
`_resourceService.reconnectHost(this)` return null?


> Improve Error Message for Host Alert State
> --
>
> Key: CLOUDSTACK-8855
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8855
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Bharat Kumar
>Assignee: Bharat Kumar
>






[jira] [Commented] (CLOUDSTACK-9811) VR will not start, looking to configure eth3 while no such device exists on the VR. On KVM-CentOS6.8 physical host

2017-03-13 Thread Boris Stoyanov (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15922738#comment-15922738
 ] 

Boris Stoyanov commented on CLOUDSTACK-9811:


That is it [~wstevens], I've reverted the changes to 
https://github.com/swill/cloudstack/blob/8b4c36ef501a96742c52b4d532cc3adda25aa71b/systemvm/patches/debian/config/opt/cloud/bin/cs_ip.py
which is the previous version before StrongSwan. The VR did come up and the 
instance got deployed. I guess we need to create a separate PR to address this 
issue and restructure the StrongSwan changes; I'll be really happy to help with 
testing.


> VR will not start, looking to configure eth3 while no such device exists on 
> the VR. On KVM-CentOS6.8 physical host
> --
>
> Key: CLOUDSTACK-9811
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9811
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.10.0.0
>Reporter: Boris Stoyanov
>Priority: Blocker
> Attachments: agent.log, cloud.log, management.log
>
>
> This issue appears only on 4.10. When you add an instance with a new network 
> the VR starts and fails at the configuration point. Looks like it is looking 
> to configure eth3 adapter while no such device should be available on the VR. 
> The VR does not start and aborts the deployment of the VM. 
> Please note that this issue was reproduced on physical KVM hosts in our lab.
> Hardware Hosts details:
> - 4x Dell C6100
> - Using: American Megatrends MegaRAC Baseboard Management (IPMI v2 compliant)
> OS:
> CentOS 6.8. 
> Management: 
> VM, running CentOS 6.8
> ACS version: 4.10 RC 1. SHA: 7c1d003b5269b375d87f4f6cfff8a144f0608b67
> In a nested virtualization environment it was working fine with CentOS6.8. 
> Attached are the management log and the cloud.log from the VR. 





[jira] [Commented] (CLOUDSTACK-9717) [VMware] RVRs have mismatching MAC addresses for extra public NICs

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15922743#comment-15922743
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9717:


Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1878
  
@rafaelweingartner Thanks for reviewing, will work on the changes suggested.


> [VMware] RVRs have mismatching MAC addresses for extra public NICs
> --
>
> Key: CLOUDSTACK-9717
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9717
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller, VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> [CLOUDSTACK-985|https://issues.apache.org/jira/browse/CLOUDSTACK-985] doesn't 
> seem to be completely fixed.
> ISSUE
> ==
> If there are two public networks on two VLANs, and a pair redundant VRs 
> acquire IPs from both, the associated NICs on the redundant VRs will have 
> mismatching MAC addresses.  
> The example below shows the eth2 NICs for the first public network 
> (210.140.168.0/21) have matching MAC addresses (06:c4:b6:00:03:df) as 
> expected, but the eth3 NICs for the second one (210.140.160.0/21) have 
> mismatching MACs (02:00:50:e1:6c:cd versus 02:00:5a:e6:6c:d5).
> *r-43584-VM (Master)*
> 6: eth2:  mtu 1500 qdisc mq state UNKNOWN 
> qlen 1000 
> link/ether 06:c4:b6:00:03:df brd ff:ff:ff:ff:ff:ff 
> inet 210.140.168.42/21 brd 210.140.175.255 scope global eth2 
> inet 210.140.168.20/21 brd 210.140.175.255 scope global secondary eth2 
> 8: eth3:  mtu 1500 qdisc mq state UNKNOWN 
> qlen 1000 
> link/ether 02:00:50:e1:6c:cd brd ff:ff:ff:ff:ff:ff 
> inet 210.140.162.124/21 brd 210.140.167.255 scope global eth3 
> inet 210.140.163.36/21 brd 210.140.167.255 scope global secondary eth3 
> *r-43585-VM (Backup)*
> 6: eth2:  mtu 1500 qdisc noop state DOWN qlen 1000 
> link/ether 06:c4:b6:00:03:df brd ff:ff:ff:ff:ff:ff 
> inet 210.140.168.42/21 brd 210.140.175.255 scope global eth2 
> inet 210.140.168.20/21 brd 210.140.175.255 scope global secondary eth2 
> 8: eth3:  mtu 1500 qdisc noop state DOWN qlen 1000 
> link/ether 02:00:5a:e6:6c:d5 brd ff:ff:ff:ff:ff:ff 
> inet 210.140.162.124/21 brd 210.140.167.255 scope global eth3 
> inet 210.140.163.36/21 brd 210.140.167.255 scope global secondary eth3 
> CloudStack should ensure that the NICs for all public networks have matching 
> MACs.
> REPRO STEPS
> ==
> 1) Set up redundant VR.
> 2) Set up multiple public networks on different VLANs.
> 3) Acquire IPs in the RVR network until the VRs get IPs in the different 
> public networks.
> 4) Confirm the mismatching MAC addresses.
> EXPECTED BEHAVIOR
> ==
> Redundant VRs have matching MACs for all public networks.
> ACTUAL BEHAVIOR
> ==
> Redundant VRs have matching MACs only for the first public network.





[jira] [Commented] (CLOUDSTACK-9811) VR will not start, looking to configure eth3 while no such device exists on the VR. On KVM-CentOS6.8 physical host

2017-03-13 Thread Will Stevens (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15922827#comment-15922827
 ] 

Will Stevens commented on CLOUDSTACK-9811:
--

[~bstoyanov] here is a PR to fix this issue: 
https://github.com/apache/cloudstack/pull/2003

I still think there is a different problem in play that is getting us into this 
condition without a valid dev, but that is a different story. This will fix 
the problem of the code breaking when the dev does not exist, while keeping the 
bug fix this change was originally introduced for.

> VR will not start, looking to configure eth3 while no such device exists on 
> the VR. On KVM-CentOS6.8 physical host
> --
>
> Key: CLOUDSTACK-9811
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9811
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.10.0.0
>Reporter: Boris Stoyanov
>Priority: Blocker
> Attachments: agent.log, cloud.log, management.log
>
>
> This issue appears only on 4.10. When you add an instance with a new network 
> the VR starts and fails at the configuration point. Looks like it is looking 
> to configure eth3 adapter while no such device should be available on the VR. 
> The VR does not start and aborts the deployment of the VM. 
> Please note that this issue was reproduced on physical KVM hosts in our lab.
> Hardware Hosts details:
> - 4x Dell C6100
> - Using: American Megatrends MegaRAC Baseboard Management (IPMI v2 compliant)
> OS:
> CentOS 6.8. 
> Management: 
> VM, running CentOS 6.8
> ACS version: 4.10 RC 1. SHA: 7c1d003b5269b375d87f4f6cfff8a144f0608b67
> In a nested virtualization environment it was working fine with CentOS6.8. 
> Attached are the management log and the cloud.log from the VR. 





[jira] [Commented] (CLOUDSTACK-8672) NCC Integration with CloudStack

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15922865#comment-15922865
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8672:


Github user nitin-maharana commented on the issue:

https://github.com/apache/cloudstack/pull/1859
  
It already has two LGTMs and all test results are successful. This is 
a big change; as time passes, there are more chances of conflicts appearing 
(already resolved once). If anyone wants to review, please do so; otherwise 
we should consider merging this.


> NCC Integration with CloudStack
> ---
>
> Key: CLOUDSTACK-8672
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8672
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Devices
>Affects Versions: 4.6.0
>Reporter: Rajesh Battala
>Assignee: Rajesh Battala
>Priority: Critical
> Fix For: Future
>
>






[jira] [Commented] (CLOUDSTACK-8672) NCC Integration with CloudStack

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15922880#comment-15922880
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8672:


Github user rafaelweingartner commented on the issue:

https://github.com/apache/cloudstack/pull/1859
  
@nitin-maharana I was looking at the PR.
Do you need to split everything there into separate commits?
I think that we still do not have a clear understanding of when and how to 
separate things into commits; however, anything like 90+ commits in a PR seems 
exaggerated to me.

I like the philosophy that for every commit we should be able to get the 
system, build it and use it. This, for instance, would not happen here. Again, 
as I said, I think we do not have a clear rule about that, but I would like 
others to check this situation as well.

@DaanHoogland, @rhtyd, @swill any thoughts here?



> NCC Integration with CloudStack
> ---
>
> Key: CLOUDSTACK-8672
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8672
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Devices
>Affects Versions: 4.6.0
>Reporter: Rajesh Battala
>Assignee: Rajesh Battala
>Priority: Critical
> Fix For: Future
>
>






[jira] [Commented] (CLOUDSTACK-9827) Storage tags stored in multiple places

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923166#comment-15923166
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9827:


Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1994
  
@rafaelweingartner I pushed the changes and squashed my commits so it is 
easier to review. I also added unit tests for the new methods.


> Storage tags stored in multiple places
> --
>
> Key: CLOUDSTACK-9827
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9827
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.10.0.0
> Environment: N/A
>Reporter: Mike Tutkowski
>Assignee: Nicolas Vazquez
>Priority: Blocker
> Fix For: 4.10.0.0
>
>
> I marked this as a Blocker because it concerns me that we are not handling 
> storage tags correctly in 4.10 and, as such, VM storage might get placed in 
> locations that users don't want.
> From e-mails I sent to dev@ (most recent to oldest):
> If I add a new primary storage and give it a storage tag, the tag ends up in 
> storage_pool_details.
> If I edit an existing storage pool’s storage tags, it places them in 
> storage_pool_tags.
> **
> I believe I have found another bug (one that we should either fix or examine 
> in detail before releasing 4.10).
> It looks like we have a new table: cloud.storage_pool_tags.
> The addition of this table seems to have broken the listStorageTags API 
> command. When this command runs, it doesn’t pick up any storage tags for me 
> (and I know I have one storage tag).
> This data used to be stored in the cloud.storage_pool_details table. It’s 
> good to put it in its own table, but will our upgrade process move the 
> existing tags from storage_pool_details to storage_pool_tags?





[jira] [Commented] (CLOUDSTACK-9827) Storage tags stored in multiple places

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923167#comment-15923167
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9827:


Github user nvazquez commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1994#discussion_r105796076
  
--- Diff: 
engine/schema/src/com/cloud/storage/dao/StoragePoolTagsDaoImpl.java ---
@@ -77,4 +90,71 @@ public void deleteTags(long poolId) {
 txn.commit();
 }
 
+@Override
+public List searchByIds(Long... stIds) {
+String batchCfg = _configDao.getValue("detail.batch.query.size");
--- End diff --

Done, thanks!


> Storage tags stored in multiple places
> --
>
> Key: CLOUDSTACK-9827
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9827
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.10.0.0
> Environment: N/A
>Reporter: Mike Tutkowski
>Assignee: Nicolas Vazquez
>Priority: Blocker
> Fix For: 4.10.0.0
>
>
> I marked this as a Blocker because it concerns me that we are not handling 
> storage tags correctly in 4.10 and, as such, VM storage might get placed in 
> locations that users don't want.
> From e-mails I sent to dev@ (most recent to oldest):
> If I add a new primary storage and give it a storage tag, the tag ends up in 
> storage_pool_details.
> If I edit an existing storage pool’s storage tags, it places them in 
> storage_pool_tags.
> **
> I believe I have found another bug (one that we should either fix or examine 
> in detail before releasing 4.10).
> It looks like we have a new table: cloud.storage_pool_tags.
> The addition of this table seems to have broken the listStorageTags API 
> command. When this command runs, it doesn’t pick up any storage tags for me 
> (and I know I have one storage tag).
> This data used to be stored in the cloud.storage_pool_details table. It’s 
> good to put it in its own table, but will our upgrade process move the 
> existing tags from storage_pool_details to storage_pool_tags?





[jira] [Commented] (CLOUDSTACK-9589) vmName entries from host_details table for the VM's whose state is Expunging should be deleted during upgrade from older versions

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923170#comment-15923170
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9589:


Github user cloudmonger commented on the issue:

https://github.com/apache/cloudstack/pull/1759
  
### ACS CI BVT Run
 **Summary:**
 Build Number 463
 Hypervisor xenserver
 NetworkType Advanced
 Passed=104
 Failed=1
 Skipped=7

_Link to logs Folder (search by build_no):_ 
https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0


**Failed tests:**
* test_routers_network_ops.py

 * test_02_RVR_Network_FW_PF_SSH_default_routes_egress_false Failing since 
2 runs


**Skipped tests:**
test_01_test_vm_volume_snapshot
test_vm_nic_adapter_vmxnet3
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm

**Passed test suits:**
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_non_contigiousvlan.py
test_login.py
test_deploy_vm_iso.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_volumes.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_vm_life_cycle.py
test_disk_offerings.py


> vmName entries from host_details table for the VM's whose state is Expunging 
> should be deleted during upgrade from older versions
> -
>
> Key: CLOUDSTACK-9589
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9589
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Baremetal
>Affects Versions: 4.4.4
> Environment: Baremetal zone
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> Having vmName entries for VMs in the 'Expunging' state would cause deployment 
> of VMs with matching host tags to fail, so they are removed during upgrade.





[jira] [Commented] (CLOUDSTACK-9827) Storage tags stored in multiple places

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923173#comment-15923173
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9827:


Github user nvazquez commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1994#discussion_r105796786
  
--- Diff: 
engine/schema/src/com/cloud/storage/dao/StoragePoolTagsDaoImpl.java ---
@@ -77,4 +90,71 @@ public void deleteTags(long poolId) {
 txn.commit();
 }
 
+@Override
+public List searchByIds(Long... stIds) {
+String batchCfg = _configDao.getValue("detail.batch.query.size");
--- End diff --

About the number 2000, I assumed it was the default value for that configuration.


> Storage tags stored in multiple places
> --
>
> Key: CLOUDSTACK-9827
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9827
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.10.0.0
> Environment: N/A
>Reporter: Mike Tutkowski
>Assignee: Nicolas Vazquez
>Priority: Blocker
> Fix For: 4.10.0.0
>
>
> I marked this as a Blocker because it concerns me that we are not handling 
> storage tags correctly in 4.10 and, as such, VM storage might get placed in 
> locations that users don't want.
> From e-mails I sent to dev@ (most recent to oldest):
> If I add a new primary storage and give it a storage tag, the tag ends up in 
> storage_pool_details.
> If I edit an existing storage pool’s storage tags, it places them in 
> storage_pool_tags.
> **
> I believe I have found another bug (one that we should either fix or examine 
> in detail before releasing 4.10).
> It looks like we have a new table: cloud.storage_pool_tags.
> The addition of this table seems to have broken the listStorageTags API 
> command. When this command runs, it doesn’t pick up any storage tags for me 
> (and I know I have one storage tag).
> This data used to be stored in the cloud.storage_pool_details table. It’s 
> good to put it in its own table, but will our upgrade process move the 
> existing tags from storage_pool_details to storage_pool_tags?





[jira] [Commented] (CLOUDSTACK-9827) Storage tags stored in multiple places

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923174#comment-15923174
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9827:


Github user nvazquez commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1994#discussion_r105796872
  
--- Diff: 
engine/schema/src/com/cloud/storage/dao/StoragePoolTagsDaoImpl.java ---
@@ -77,4 +90,71 @@ public void deleteTags(long poolId) {
 txn.commit();
 }
 
+@Override
+public List searchByIds(Long... stIds) {
+String batchCfg = _configDao.getValue("detail.batch.query.size");
+
+final int detailsBatchSize = batchCfg != null ? 
Integer.parseInt(batchCfg) : 2000;
+
+// query details by batches
+List uvList = new ArrayList();
+int curr_index = 0;
+
+if (stIds.length > detailsBatchSize) {
+while ((curr_index + detailsBatchSize) <= stIds.length) {
+Long[] ids = new Long[detailsBatchSize];
+
+for (int k = 0, j = curr_index; j < curr_index + 
detailsBatchSize; j++, k++) {
+ids[k] = stIds[j];
+}
+
+SearchCriteria sc = 
StoragePoolIdsSearch.create();
--- End diff --

Done, created method `searchForStoragePoolIdsInternal`
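
The batching logic in the diff above (splitting the id array into chunks bounded by `detail.batch.query.size`, defaulting to 2000 when the setting is absent) can be sketched in isolation. The helper below is a hypothetical stand-alone version for illustration, not CloudStack's actual DAO code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BatchQuery {
    // Mirrors the fallback in the diff: use the configured value when
    // present, otherwise default to 2000.
    static int batchSize(String configValue) {
        return configValue != null ? Integer.parseInt(configValue) : 2000;
    }

    // Split the id array into consecutive chunks of at most batchSize
    // elements, mirroring the index arithmetic in searchByIds above.
    static List<Long[]> partition(Long[] ids, int batchSize) {
        List<Long[]> batches = new ArrayList<>();
        int curr = 0;
        while (curr < ids.length) {
            int end = Math.min(curr + batchSize, ids.length);
            batches.add(Arrays.copyOfRange(ids, curr, end));
            curr = end;
        }
        return batches;
    }

    public static void main(String[] args) {
        Long[] ids = new Long[5000];
        for (int i = 0; i < ids.length; i++) ids[i] = (long) i;
        // With the default batch size of 2000, 5000 ids yield 3 batches:
        // 2000 + 2000 + 1000.
        List<Long[]> batches = partition(ids, batchSize(null));
        System.out.println(batches.size());
        System.out.println(batches.get(2).length);
    }
}
```

Each chunk would then be fed to one `SearchCriteria` query, keeping every SQL `IN` clause bounded regardless of how many ids the caller passes.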







[jira] [Commented] (CLOUDSTACK-9827) Storage tags stored in multiple places

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923179#comment-15923179
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9827:


Github user nvazquez commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1994#discussion_r105797159
  
--- Diff: 
engine/schema/src/org/apache/cloudstack/storage/datastore/db/PrimaryDataStoreDaoImpl.java
 ---
@@ -409,15 +460,13 @@ public StoragePoolVO persist(StoragePoolVO pool, 
Map details) {
 sc.and(sc.entity().getScope(), Op.EQ, ScopeType.ZONE);
 return sc.list();
 } else {
-Map details = tagsToDetails(tags);
-
-StringBuilder sql = new 
StringBuilder(ZoneWideDetailsSqlPrefix);
+StringBuilder sql = new StringBuilder(ZoneWideTagsSqlPrefix);
--- End diff --

Thanks for pointing this out, I had missed it. Created methods 
`getSqlPreparedStatement` and `searchStoragePoolsPreparedStatement`, which are 
called from many methods and handle storage pool retrieval.







[jira] [Commented] (CLOUDSTACK-9730) [VMware] Unable to add a host with space in its name to existing VMware cluster

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923310#comment-15923310
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9730:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1891
  
Trillian test result (tid-952)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
Total time taken: 45011 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1891-t952-vmware-55u3.zip
Intermittent failure detected: /marvin/tests/smoke/test_internal_lb.py
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_vm_snapshots.py
Test completed. 46 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_test_vm_volume_snapshot | `Failure` | 277.24 | test_vm_snapshots.py
test_04_rvpc_privategw_static_routes | `Failure` | 878.79 | 
test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 121.10 | 
test_snapshots.py
test_02_list_snapshots_with_removed_data_store | `Error` | 126.19 | 
test_snapshots.py
ContextSuite context=TestSnapshotRootDisk>:teardown | `Error` | 156.48 | 
test_snapshots.py
test_02_vpc_privategw_static_routes | `Error` | 646.76 | 
test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 366.15 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 151.59 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 558.22 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 329.08 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 707.72 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 661.92 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1529.30 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 725.10 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 714.27 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1524.70 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 30.78 | test_volumes.py
test_06_download_detached_volume | Success | 55.48 | test_volumes.py
test_05_detach_volume | Success | 105.28 | test_volumes.py
test_04_delete_attached_volume | Success | 20.23 | test_volumes.py
test_03_download_attached_volume | Success | 15.28 | test_volumes.py
test_02_attach_volume | Success | 58.70 | test_volumes.py
test_01_create_volume | Success | 511.69 | test_volumes.py
test_change_service_offering_for_vm_with_snapshots | Success | 476.60 | 
test_vm_snapshots.py
test_03_delete_vm_snapshots | Success | 275.20 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 232.18 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 161.63 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 222.39 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.71 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.18 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 65.98 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.10 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 5.11 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.12 | test_vm_life_cycle.py
test_02_start_vm | Success | 15.19 | test_vm_life_cycle.py
test_01_stop_vm | Success | 5.11 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 216.45 | test_templates.py
test_08_list_system_templates | Success | 0.04 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.07 | test_templates.py
test_04_extract_template | Success | 15.22 | test_templates.py
test_03_delete_template | Success | 5.12 | test_templates.py
test_02_edit_template | Success | 90.13 | test_templates.py
test_01_create_template | Success | 115.92 | test_templates.py
test_10_destroy_cpvm | Success | 211.69 | test_ssvm.py
test_09_destroy_ssvm | Success | 268.64 | test_ssvm.py
test_08_reboot_cpvm | Success | 156.47 | test_ssvm.py
test_07_reboot_ssvm | Success | 158.25 | test_ssvm.py
test_06_stop_cpvm | Success | 176.81 | test_ssvm.py
test_05_stop_ssvm | Success | 183.65 | test_ssvm.py
test_04_cpvm_internals | Success | 1.09 | test_ssvm.py
test_03_

[jira] [Commented] (CLOUDSTACK-9720) [VMware] template_spool_ref table is not getting updated with correct template physical size in template_size column.

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923354#comment-15923354
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9720:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1880
  
Trillian test result (tid-953)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
Total time taken: 45570 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1880-t953-vmware-55u3.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_vm_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_router_nics.py
Test completed. 46 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_test_vm_volume_snapshot | `Failure` | 342.55 | test_vm_snapshots.py
test_04_rvpc_privategw_static_routes | `Failure` | 919.54 | 
test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 81.22 | 
test_snapshots.py
test_02_list_snapshots_with_removed_data_store | `Error` | 86.31 | 
test_snapshots.py
test_01_vpc_site2site_vpn | Success | 371.26 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 166.74 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 587.67 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 379.85 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 795.54 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 667.55 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1580.39 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 715.16 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 669.22 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1391.68 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 30.82 | test_volumes.py
test_06_download_detached_volume | Success | 51.00 | test_volumes.py
test_05_detach_volume | Success | 110.30 | test_volumes.py
test_04_delete_attached_volume | Success | 10.19 | test_volumes.py
test_03_download_attached_volume | Success | 15.28 | test_volumes.py
test_02_attach_volume | Success | 53.71 | test_volumes.py
test_01_create_volume | Success | 514.68 | test_volumes.py
test_change_service_offering_for_vm_with_snapshots | Success | 494.29 | 
test_vm_snapshots.py
test_03_delete_vm_snapshots | Success | 275.15 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 230.09 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 161.74 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 217.48 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.95 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.25 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 66.05 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.10 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.15 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.13 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.23 | test_vm_life_cycle.py
test_01_stop_vm | Success | 10.16 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 217.42 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 10.22 | test_templates.py
test_03_delete_template | Success | 5.13 | test_templates.py
test_02_edit_template | Success | 90.19 | test_templates.py
test_01_create_template | Success | 105.85 | test_templates.py
test_10_destroy_cpvm | Success | 236.95 | test_ssvm.py
test_09_destroy_ssvm | Success | 238.81 | test_ssvm.py
test_08_reboot_cpvm | Success | 156.64 | test_ssvm.py
test_07_reboot_ssvm | Success | 158.49 | test_ssvm.py
test_06_stop_cpvm | Success | 176.85 | test_ssvm.py
test_05_stop_ssvm | Success | 173.74 | test_ssvm.py
test_04_cpvm_internals | Success | 1.25 | test_ssvm.py
test_03_ssvm_internals | Success | 3.43 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.13 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test

[jira] [Commented] (CLOUDSTACK-9198) VR gets created in the disabled POD

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923436#comment-15923436
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9198:


Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1278
  
Hi @rafaelweingartner @anshul1886 @GabrielBrascher,
I've read this PR's comments several times and I think I understand 
@anshul1886's point. Please correct me if I'm wrong. The execution of 
`getCallingAccount()` is setting the context with the proper account, and I 
think that's fine, as next methods will use it (e.g. `orchestrateStart` in 
`VirtualMachineManagerImpl` lines 829-831).
I also agree with @rafaelweingartner and @GabrielBrascher that even though 
the context is being set, variables `user` and `caller` on `start` method 
(defined on line 266) are not being used. @anshul1886, if no validations are 
required and the context is already set, don't you think that those unused 
parameters can be removed?


> VR gets created in the disabled POD
> ---
>
> Key: CLOUDSTACK-9198
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9198
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>Assignee: Anshul Gangwar
>
> VR gets created in the disabled POD





[jira] [Commented] (CLOUDSTACK-9811) VR will not start, looking to configure eth3 while no such device exists on the VR. On KVM-CentOS6.8 physical host

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923531#comment-15923531
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9811:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/2003
  
Thanks Will. Can you please add the bug id (CLOUDSTACK-9811) to the PR and 
commit message?


> VR will not start, looking to configure eth3 while no such device exists on 
> the VR. On KVM-CentOS6.8 physical host
> --
>
> Key: CLOUDSTACK-9811
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9811
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.10.0.0
>Reporter: Boris Stoyanov
>Priority: Blocker
> Attachments: agent.log, cloud.log, management.log
>
>
> This issue appears only on 4.10. When you add an instance with a new network, 
> the VR starts and fails at the configuration point. It appears to be trying to 
> configure the eth3 adapter even though no such device should be available on 
> the VR. The VR does not start and this aborts the deployment of the VM. 
> Please note that this issue was reproduced on physical KVM hosts in our lab.
> Hardware Hosts details:
> - 4x Dell C6100
> - Using: American Megatrends MegaRAC Baseboard Management (IPMI v2 compliant)
> OS:
> CentOS 6.8. 
> Management: 
> VM, running CentOS 6.8
> ACS version: 4.10 RC 1. SHA: 7c1d003b5269b375d87f4f6cfff8a144f0608b67
> In a nested virtualization environment it was working fine with CentOS6.8. 
> Attached are the management log and the cloud.log from the VR. 





[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923545#comment-15923545
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user cloudsadhu commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@serg38 - thanks for your comment - I have added NFS support





> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>
> Currently the root size of an instance is locked to that of the template. 
> This creates unnecessary template duplicates, prevents the creation of a 
> marketplace, wastes time and disk space, and generally makes work more 
> complicated.
> Real life example - a small VPS provider might want to offer the following 
> sizes (in GB):
> 10,20,40,80,160,240,320,480,620
> That's 9 offerings.
> The template selection could look like this, including real disk space used:
> Windows 2008 ~10GB
> Windows 2008+Plesk ~15GB
> Windows 2008+MSSQL ~15GB
> Windows 2012 ~10GB
> Windows 2012+Plesk ~15GB
> Windows 2012+MSSQL ~15GB
> CentOS ~1GB
> CentOS+CPanel ~3GB
> CentOS+Virtualmin ~3GB
> CentOS+Zimbra ~3GB
> CentOS+Docker ~2GB
> Debian ~1GB
> Ubuntu LTS ~1GB
> In this case the total disk space used by templates will be 828 GB, that's 
> almost 1 TB. If your storage is expensive and limited SSD this can get 
> painful!
> If the root resize feature is enabled we can reduce this to under 100 GB.
> Specifications and Description 
> Administrators don't want to deploy duplicate OS templates of differing 
> sizes just to support different storage packages. Instead, the VM deployment 
> can accept a size for the root disk and adjust the template clone 
> accordingly. In addition, CloudStack already supports data disk resizing for 
> existing volumes, we can extend that functionality to resize existing root 
> disks. 
>   As mentioned, we can leverage the existing design for resizing an existing 
> volume. The difference with root volumes is that we can't resize via disk 
> offering, therefore we need to verify that no disk offering was passed, just 
> a size. The existing enforcement that the new size must be greater than the 
> existing size will still serve its purpose.
>For deployment-based resize (ROOT volume size different from template 
> size), we pass the rootdisksize parameter when the existing code allocates 
> the root volume. In the process, we validate that the root disk size is > 
> existing template size, and non-zero. This will persist the root volume as 
> the desired size regardless of whether or not the VM is started on deploy. 
> Then hypervisor-specific code needs to honor the VolumeObjectTO's size 
> attribute and use it when doing the work of cloning 
> from template, rather than inheriting the template's size. This can be 
> implemented one hypervisor at a time, and as such there needs to be a check 
> in UserVmManagerImpl to fail unsupported hypervisors with 
> InvalidParameterValueException when the rootdisksize is passed.
>
> Hypervisor-specific changes
> XenServer
> Resizing the ROOT volume is only supported for stopped VMs.
> A newly created ROOT volume will be resized after being cloned from the template.
> VMware
> Resizing the ROOT volume is only supported for stopped VMs.
> The new size must be larger than the previous size.
> A newly created ROOT volume will be resized after being cloned from the template only if:
> - there is no root disk chaining (i.e. a full clone is used), and
> - the Root Disk controller setting is not IDE.
> A previously created ROOT volume can be resized only if:
> - there is no root disk chaining, and
> - the Root Disk controller setting is not IDE.
> Web Services APIs
> resizeVolume API call will not change, but it will accept volume UUIDs of 
> root volumes in id parameter for resizing.
> deployVirtualMachine API call will allow new rootdisksize parameter to be 
> passed. This parameter will be used as the disk size (in GB) when cloning 
> from template.
> UI
> 1) (refer attached image 1) shows UI that resize volume option is added for 
> ROOT disks.
> 2) (refer attached image 2) when user calls the resize volume on ROOT volume. 
> Here only size option is shown. For DATADISK disk offerings are shown.
> 3) (refer attached image 3) when user deploys VM. New option for Root disk 
> size is added.
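>
> The deploy-time validation described above (rootdisksize must be non-zero and
> greater than the template size) can be sketched as follows. This is a
> hypothetical illustration; the class and method names are not CloudStack's
> actual ones, and the real check lives in UserVmManagerImpl.
>
> ```java
> public class RootDiskSizeCheck {
>     // Validate a requested root disk size (in GB, as passed to
>     // deployVirtualMachine) against the template's physical size in bytes.
>     // Names and signature are illustrative assumptions.
>     static long validateRootDiskSize(long requestedGb, long templateSizeBytes) {
>         if (requestedGb <= 0) {
>             throw new IllegalArgumentException(
>                 "rootdisksize must be a positive number of GB");
>         }
>         long requestedBytes = requestedGb * 1024L * 1024L * 1024L;
>         // Enforce new size > existing template size, as the spec requires.
>         if (requestedBytes <= templateSizeBytes) {
>             throw new IllegalArgumentException("rootdisksize (" + requestedGb
>                 + " GB) must be greater than the template size");
>         }
>         return requestedBytes; // size to persist for the ROOT volume
>     }
>
>     public static void main(String[] args) {
>         long tenGb = 10L * 1024 * 1024 * 1024;
>         // A 20 GB root disk over a 10 GB template is accepted.
>         System.out.println(validateRootDiskSize(20, tenGb));
>     }
> }
> ```
>
> The returned byte count is what would be persisted on the ROOT volume record,
> so the size survives even if the VM is not started on deploy.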





[jira] [Commented] (CLOUDSTACK-8672) NCC Integration with CloudStack

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923544#comment-15923544
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8672:


Github user nitin-maharana commented on the issue:

https://github.com/apache/cloudstack/pull/1859
  
@rafaelweingartner : As there are multiple contributors to this feature, If 
I squash it to one commit, then others are going to lose their part of 
contributions. Initially, we thought of making it to one commit, but this is 
the main reason we pushed with multiple commits. Let's wait for others to 
comment on this, after that we will decide. Thanks, @rafaelweingartner for 
pitching in.


> NCC Integration with CloudStack
> ---
>
> Key: CLOUDSTACK-8672
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8672
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Devices
>Affects Versions: 4.6.0
>Reporter: Rajesh Battala
>Assignee: Rajesh Battala
>Priority: Critical
> Fix For: Future
>
>






[jira] [Commented] (CLOUDSTACK-9208) Assertion Error in VM_POWER_STATE handler.

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923607#comment-15923607
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9208:


Github user ramkatru commented on the issue:

https://github.com/apache/cloudstack/pull/1997
  
@jayapalu, Please see Daan's and Wido's comments on the referenced PR 
#1307. 


> Assertion Error in VM_POWER_STATE handler.
> --
>
> Key: CLOUDSTACK-9208
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9208
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>Assignee: Kshitij Kansal
>Priority: Minor
>
> 1. Enable the assertions.
> LOG
> 2015-12-31 04:09:06,687 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
> (RouterStatusMonitor-1:ctx-981a85d4) (logid:863754b8) Found 0 networks to 
> update RvR status.
> 2015-12-31 04:09:07,394 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Ping from 5(10.147.40.18)
> 2015-12-31 04:09:07,394 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Process host VM state 
> report from ping process. host: 5
> 2015-12-31 04:09:07,416 INFO [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Unable to find matched 
> VM in CloudStack DB. name: New Virtual Machine
> 2015-12-31 04:09:07,420 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Process VM state report. 
> host: 5, number of records in report: 5
> 2015-12-31 04:09:07,420 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) VM state report. host: 
> 5, vm id: 69, power state: PowerOff
> 2015-12-31 04:09:07,530 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) VM state report is 
> updated. host: 5, vm id: 69, power state: PowerOff
> 2015-12-31 04:09:07,540 INFO [c.c.v.VirtualMachineManagerImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) VM r-69-VM is at Stopped 
> and we received a power-off report while there is no pending jobs on it
> 2015-12-31 04:09:07,541 ERROR [o.a.c.f.m.MessageDispatcher] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Unexpected exception 
> when calling 
> com.cloud.vm.ClusteredVirtualMachineManagerImpl.HandlePowerStateReport
> java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.cloudstack.framework.messagebus.MessageDispatcher.dispatch(MessageDispatcher.java:75)
> at 
> org.apache.cloudstack.framework.messagebus.MessageDispatcher.onPublishMessage(MessageDispatcher.java:45)
> at 
> org.apache.cloudstack.framework.messagebus.MessageBusBase$SubscriptionNode.notifySubscribers(MessageBusBase.java:441)
> at 
> org.apache.cloudstack.framework.messagebus.MessageBusBase.publish(MessageBusBase.java:178)
> at 
> com.cloud.vm.VirtualMachinePowerStateSyncImpl.processReport(VirtualMachinePowerStateSyncImpl.java:87)
> at 
> com.cloud.vm.VirtualMachinePowerStateSyncImpl.processHostVmStatePingReport(VirtualMachinePowerStateSyncImpl.java:70)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.processCommands(VirtualMachineManagerImpl.java:2879)
> at 
> com.cloud.agent.manager.AgentManagerImpl.handleCommands(AgentManagerImpl.java:309)
> at 
> com.cloud.agent.manager.DirectAgentAttache$PingTask.runInContext(DirectAgentAttache.java:192)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.

[jira] [Commented] (CLOUDSTACK-9595) Transactions are not getting retried in case of database deadlock errors

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923610#comment-15923610
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9595:


Github user cloudmonger commented on the issue:

https://github.com/apache/cloudstack/pull/1762
  
### ACS CI BVT Run
 **Summary:**
 Build Number 464
 Hypervisor xenserver
 NetworkType Advanced
 Passed=104
 Failed=1
 Skipped=7

_Link to logs Folder (search by build_no):_ 
https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0


**Failed tests:**
* test_routers_network_ops.py

 * test_01_RVR_Network_FW_PF_SSH_default_routes_egress_true Failed


**Skipped tests:**
test_01_test_vm_volume_snapshot
test_vm_nic_adapter_vmxnet3
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm

**Passed test suits:**
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_non_contigiousvlan.py
test_login.py
test_deploy_vm_iso.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_volumes.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_vm_life_cycle.py
test_disk_offerings.py


> Transactions are not getting retried in case of database deadlock errors
> 
>
> Key: CLOUDSTACK-9595
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9595
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.8.0
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> The customer is seeing occasional 'Deadlock found when trying to get lock; 
> try restarting transaction' error messages in their management server logs. 
> It happens regularly, at least once a day. The following is the error seen: 
> 2015-12-09 19:23:19,450 ERROR [cloud.api.ApiServer] 
> (catalina-exec-3:ctx-f05c58fc ctx-39c17156 ctx-7becdf6e) unhandled exception 
> executing api command: [Ljava.lang.String;@230a6e7f
> com.cloud.utils.exception.CloudRuntimeException: DB Exception on: 
> com.mysql.jdbc.JDBC4PreparedStatement@74f134e3: DELETE FROM 
> instance_group_vm_map WHERE instance_group_vm_map.instance_id = 941374
>   at com.cloud.utils.db.GenericDaoBase.expunge(GenericDaoBase.java:1209)
>   at sun.reflect.GeneratedMethodAccessor360.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>   at 
> com.cloud.utils.db.TransactionContextInterceptor.invoke(TransactionContextInterceptor.java:34)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
>   at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
>   at 
> org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
>   at com.sun.proxy.$Proxy237.expunge(Unknown Source)
>   at 
> com.cloud.vm.UserVmManagerImpl$2.doInTransactionWithoutResult(UserVmManagerImpl.java:2593)
>   at 
> com.cloud.utils.db.TransactionCallbackNoReturn.doInTransaction(TransactionCallbackNoReturn.java:25)
>   at com.cloud.utils.db.Transaction$2.doInTransaction(Transaction.java:57)
>   at com.cloud.utils.db.Transaction.execute(Transaction.java:45)
>   at com.cloud.utils.db.Transaction.execute(Transaction.java:54)
>   at 
> com.cloud.vm.UserVmManagerImpl.addInstanceToGroup(UserVmManagerImpl.java:2575)
>   at 
> com.cloud.vm.UserVmManagerImpl.updateVirt
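
The premise of this ticket (a transaction that fails with a MySQL deadlock should be re-run rather than surfaced as a `CloudRuntimeException`) can be sketched as a generic retry wrapper. This is an illustrative sketch only, not CloudStack's actual `Transaction.execute` implementation; the class name `DeadlockRetry`, the method name, and the attempt count are hypothetical.

```java
import java.sql.SQLException;
import java.util.concurrent.Callable;

public class DeadlockRetry {
    // MySQL reports deadlocks with vendor error code 1213
    // ("Deadlock found when trying to get lock; try restarting transaction").
    static final int MYSQL_DEADLOCK = 1213;

    // Run a transactional unit of work, re-running it up to maxAttempts
    // times if it fails with a deadlock; any other SQLException is
    // rethrown immediately, since retrying would not help.
    public static <T> T executeWithRetry(Callable<T> txn, int maxAttempts) throws Exception {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return txn.call();
            } catch (SQLException e) {
                if (e.getErrorCode() != MYSQL_DEADLOCK) {
                    throw e; // not a deadlock -- do not retry
                }
                last = e; // deadlock: loop and re-run the whole transaction
            }
        }
        throw last; // every attempt deadlocked
    }
}
```

The key point the ticket makes is that the retry must re-execute the whole transaction body, not just the failing statement, because MySQL rolls back the transaction it chose as the deadlock victim.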

[jira] [Commented] (CLOUDSTACK-9718) Revamp the dropdown showing lists of hosts available for migration in a Zone

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923622#comment-15923622
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9718:


Github user rashmidixit commented on the issue:

https://github.com/apache/cloudstack/pull/1889
  
@ustcweizhou The popup is there to indicate that no hosts were found. If I remove 
it, there will be no visual indication that the search has returned no 
results. 

We need an alternative way to show that 0 results were found. Let me see 
if I can put a message beneath the list to say the same. I will get back to you 
in a day or so.

Thanks for trying this out!


> Revamp the dropdown showing lists of hosts available for migration in a Zone
> 
>
> Key: CLOUDSTACK-9718
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9718
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.7.0, 4.8.0, 4.9.0
>Reporter: Rashmi Dixit
>Assignee: Rashmi Dixit
> Fix For: 4.10.0.0
>
> Attachments: MigrateInstance-SeeHosts.PNG, 
> MigrateInstance-SeeHosts-Search.PNG
>
>
> There are a couple of issues:
> 1. When looking for the possible hosts for migration, not all are displayed.
> 2. If there is a large number of hosts, the dropdown showing them is not 
> easy to use.
> To fix this, the proposal is to change the view to a list of hosts with 
> radio buttons, and to add a search option so that the hostname can be 
> searched within this list, making it more usable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9198) VR gets created in the disabled POD

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923645#comment-15923645
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9198:


Github user anshul1886 commented on the issue:

https://github.com/apache/cloudstack/pull/1278
  
@nvazquez @rafaelweingartner @GabrielBrascher, That method is called from 
multiple places, and there are many places where we could make these kinds of 
changes. I would prefer to make them in a PR dedicated to that purpose, so that 
they are easy to track and test. 


> VR gets created in the disabled POD
> ---
>
> Key: CLOUDSTACK-9198
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9198
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>Assignee: Anshul Gangwar
>
> VR gets created in the disabled POD

