[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15862302#comment-15862302
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9539:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1727
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + xenserver-65sp1) 
has been kicked to run smoke tests


> Support changing Service offering for instance with VM Snapshots
> 
>
> Key: CLOUDSTACK-9539
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9539
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> CloudStack doesn't support changing the service offering for VM instances which 
> have VM snapshots; the snapshots must be removed before changing the service offering.
> h3. Goal
> Extend the current behaviour by supporting changing the service offering for VMs which 
> have VM snapshots. In that case, previously taken snapshots (if reverted) should use the 
> previous service offering, while future snapshots should use the new one.
> h3. Proposed solution:
> 1. Adding {{service_offering_id}} column on {{vm_snapshots}} table: This way 
> snapshot can be reverted to original state even though service offering can 
> be changed for vm instance.
> NOTE: Existing vm snapshots are populated on update script by {{UPDATE 
> vm_snapshots s JOIN vm_instance v ON v.id = s.vm_id SET s.service_offering_id 
> = v.service_offering_id;}}
> 2. New vm snapshots will use instance vm service offering id as 
> {{service_offering_id}}
> 3. Revert to vm snapshots should use vm snapshot's {{service_offering_id}} 
> value.
> h3. Example use case:
> - Deploy vm using service offering A
> - Take vm snapshot -> snap1 (service offering A)
> - Stop vm
> - Change vm service offering to B
> - Revert to VM snapshot snap 1
> - Start vm
> It is expected that vm has service offering A after last step
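
A self-contained toy sketch of the proposed semantics (class and field names are illustrative only, not the actual CloudStack implementation): each snapshot records the offering in force when it was taken, and a revert restores that offering on the VM.

```
// Toy model of steps 2 and 3 of the proposal; names are illustrative, not CloudStack classes.
class VmSnapshot {
    final long id;
    final long serviceOfferingId;   // mirrors the new vm_snapshots.service_offering_id column

    VmSnapshot(long id, long serviceOfferingId) {
        this.id = id;
        this.serviceOfferingId = serviceOfferingId;
    }
}

class Vm {
    long serviceOfferingId;

    Vm(long serviceOfferingId) {
        this.serviceOfferingId = serviceOfferingId;
    }

    VmSnapshot takeSnapshot(long snapshotId) {
        // step 2: a new snapshot copies the VM's current offering id
        return new VmSnapshot(snapshotId, serviceOfferingId);
    }

    void revertTo(VmSnapshot snapshot) {
        // step 3: revert restores the offering recorded at snapshot time
        this.serviceOfferingId = snapshot.serviceOfferingId;
    }
}
```

With offerings A=1 and B=2, the example use case plays out as `Vm vm = new Vm(1); VmSnapshot snap1 = vm.takeSnapshot(10); vm.serviceOfferingId = 2; vm.revertTo(snap1);`, after which `vm.serviceOfferingId == 1`, i.e. the VM is back on service offering A.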



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9780) Default to Java8 if JAVA_HOME is not set

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15862296#comment-15862296
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9780:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1938
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


> Default to Java8 if JAVA_HOME is not set
> 
>
> Key: CLOUDSTACK-9780
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9780
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Will Stevens
>
> Now that PR1888 is merged, Java8 is required.  Unfortunately the file pushed 
> to `/etc/cloudstack/management/classpath.conf` will default to Java7 if the 
> JAVA_HOME is not set.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9780) Default to Java8 if JAVA_HOME is not set

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15862295#comment-15862295
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9780:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1938
  
@blueorangutan package


> Default to Java8 if JAVA_HOME is not set
> 
>
> Key: CLOUDSTACK-9780
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9780
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Will Stevens
>
> Now that PR1888 is merged, Java8 is required.  Unfortunately the file pushed 
> to `/etc/cloudstack/management/classpath.conf` will default to Java7 if the 
> JAVA_HOME is not set.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9780) Default to Java8 if JAVA_HOME is not set

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15862292#comment-15862292
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9780:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1938
  
@BlueOrangutan package


> Default to Java8 if JAVA_HOME is not set
> 
>
> Key: CLOUDSTACK-9780
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9780
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Will Stevens
>
> Now that PR1888 is merged, Java8 is required.  Unfortunately the file pushed 
> to `/etc/cloudstack/management/classpath.conf` will default to Java7 if the 
> JAVA_HOME is not set.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9780) Default to Java8 if JAVA_HOME is not set

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861782#comment-15861782
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9780:


Github user rafaelweingartner commented on the issue:

https://github.com/apache/cloudstack/pull/1938
  
Ok, got it.
Maybe on CentOS the symbolic links get created during the installation of Java, or 
maybe during the installation of ACS. The machine I am using is not used to run ACS, 
but I do use Java on it, which is why I asked you.

my output: 
```
lrwxrwxrwx  1 root root   20 Dec  1 12:48 java-1.6.0-openjdk-amd64 -> java-6-openjdk-amd64/
-rw-r--r--  1 root root 2387 Dec  1 12:48 .java-1.6.0-openjdk-amd64.jinfo
lrwxrwxrwx  1 root root   20 Feb  7 20:10 java-1.7.0-openjdk-amd64 -> java-7-openjdk-amd64/
-rw-r--r--  1 root root 2439 Feb  7 20:10 .java-1.7.0-openjdk-amd64.jinfo
drwxr-xr-x  5 root root 4096 Feb 10 14:00 java-6-openjdk-amd64/
drwxr-xr-x  3 root root 4096 Feb 10 13:59 java-6-openjdk-common/
drwxr-xr-x  5 root root 4096 Feb 10 13:57 java-7-openjdk-amd64/
```
Anyway, I understand that you are following the current pattern, so we can leave this 
problem (if it turns out to be one) to our future selves, in case people start having 
trouble when updating to a Java JRE version newer than 1.8.0.

Having said that, LGTM


> Default to Java8 if JAVA_HOME is not set
> 
>
> Key: CLOUDSTACK-9780
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9780
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Will Stevens
>
> Now that PR1888 is merged, Java8 is required.  Unfortunately the file pushed 
> to `/etc/cloudstack/management/classpath.conf` will default to Java7 if the 
> JAVA_HOME is not set.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9780) Default to Java8 if JAVA_HOME is not set

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861758#comment-15861758
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9780:


Github user swill commented on the issue:

https://github.com/apache/cloudstack/pull/1938
  
The short answer is that I ran into this issue when I upgraded: `jenv` did not correctly 
set the `JAVA_HOME` directory, so I hit this problem. I don't have a deep understanding 
of all the details. I made this change in my environment and everything worked. I 
understand this is not a good answer, but that is what I have.

I don't install to that path either, but for some reason it was populated 
in my environment.  Can you do an `$ ll /usr/lib/jvm/` and post what you get?  
I am curious what it would return.  

Here is the result from my CentOS 6.8 setup.


![image](https://cloud.githubusercontent.com/assets/13644/22841453/ae384516-ef9f-11e6-835a-60736d6a03dc.png)

The way I see it right now, people WILL run into this issue.  This at least 
reduces the number of people who have problems.  



> Default to Java8 if JAVA_HOME is not set
> 
>
> Key: CLOUDSTACK-9780
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9780
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Will Stevens
>
> Now that PR1888 is merged, Java8 is required.  Unfortunately the file pushed 
> to `/etc/cloudstack/management/classpath.conf` will default to Java7 if the 
> JAVA_HOME is not set.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9780) Default to Java8 if JAVA_HOME is not set

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861720#comment-15861720
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9780:


Github user rafaelweingartner commented on the issue:

https://github.com/apache/cloudstack/pull/1938
  
@Swill, I have a question (actually two) :)

If “JAVA_HOME” is not set, or if the path “JAVA_HOME” points to does not exist, we 
check whether there is a Java installation. I have a doubt about the last path we are 
checking. Is the path “/usr/lib/jvm/jre-1.8.0” a default (some sort of standard)?

I ask because (at least on Debian, or at least the Debian I am using) whenever I 
install using “aptitude”, the Java versions are added to folders following the pattern 
“/usr/lib/jvm/java-<version>-openjdk-<architecture>/jre/”. Of course, 
this is for OpenJDK installed with “aptitude”.

Are we assuming that a user installing manually (e.g. installing the JRE from Oracle) 
will put the Java JRE files in a folder like “/usr/lib/jvm/jre-1.8.0”?

This will also only work for Java JRE 1.8.0; if users install Java JRE 1.8.1 and use 
that version as the folder name, it will not work. Of course, all of this can be fixed 
by setting JAVA_HOME. 

Wouldn't it be better to throw an exception and stop the deployment with a message 
saying that we require “JAVA_HOME” to be set? That way we would be consistent; 
otherwise, I can imagine a user complaining that he/she has installed Java 8 (1.8.x, 
where x > 0), but ACS still does not work. 

What do you think?


> Default to Java8 if JAVA_HOME is not set
> 
>
> Key: CLOUDSTACK-9780
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9780
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Will Stevens
>
> Now that PR1888 is merged, Java8 is required.  Unfortunately the file pushed 
> to `/etc/cloudstack/management/classpath.conf` will default to Java7 if the 
> JAVA_HOME is not set.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9780) Default to Java8 if JAVA_HOME is not set

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861680#comment-15861680
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9780:


Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1938
  
Thanks @swill!

LGTM


> Default to Java8 if JAVA_HOME is not set
> 
>
> Key: CLOUDSTACK-9780
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9780
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Will Stevens
>
> Now that PR1888 is merged, Java8 is required.  Unfortunately the file pushed 
> to `/etc/cloudstack/management/classpath.conf` will default to Java7 if the 
> JAVA_HOME is not set.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9780) Default to Java8 if JAVA_HOME is not set

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861675#comment-15861675
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9780:


Github user swill commented on the issue:

https://github.com/apache/cloudstack/pull/1938
  
@rhtyd, @wido, @nvazquez, @rafaelweingartner please review...

This fixes an issue with #1888.


> Default to Java8 if JAVA_HOME is not set
> 
>
> Key: CLOUDSTACK-9780
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9780
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Will Stevens
>
> Now that PR1888 is merged, Java8 is required.  Unfortunately the file pushed 
> to `/etc/cloudstack/management/classpath.conf` will default to Java7 if the 
> JAVA_HOME is not set.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9780) Default to Java8 if JAVA_HOME is not set

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861673#comment-15861673
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9780:


GitHub user swill opened a pull request:

https://github.com/apache/cloudstack/pull/1938

CLOUDSTACK-9780: Fixed the default JAVA_HOME value to be Java8 if not set

Now that PR-1888 is merged, Java8 is required.  Unfortunately, the file 
pushed to `/etc/cloudstack/management/classpath.conf` on ACS install will 
default the version to Java7 instead of Java8 if JAVA_HOME is unset.  This fix 
sets the default to Java8 if JAVA_HOME is not set.
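
For illustration only (the actual change is in the shell-style classpath.conf file, not in Java, and the fallback path below is taken from this discussion rather than verified): the defaulting logic described above amounts to using JAVA_HOME when it is set and otherwise falling back to a Java 8 location instead of a Java 7 one.

```
public class JavaHomeDefaultExample {
    // Hypothetical mirror of the classpath.conf behaviour described in this PR:
    // prefer JAVA_HOME, otherwise fall back to a Java 8 install location.
    static String resolveJavaHome() {
        String javaHome = System.getenv("JAVA_HOME");
        if (javaHome == null || javaHome.isEmpty()) {
            javaHome = "/usr/lib/jvm/jre-1.8.0";   // assumed default path from this thread
        }
        return javaHome;
    }

    public static void main(String[] args) {
        System.out.println(resolveJavaHome());
    }
}
```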

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/swill/cloudstack classpath

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1938.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1938


commit 6ee4a195f27fe59c61e96d0ab1c56dee0c05a52b
Author: Will Stevens 
Date:   2017-02-10T18:42:58Z

Fixed the default JAVA_HOME value to be Java8 if not set




> Default to Java8 if JAVA_HOME is not set
> 
>
> Key: CLOUDSTACK-9780
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9780
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Will Stevens
>
> Now that PR1888 is merged, Java8 is required.  Unfortunately the file pushed 
> to `/etc/cloudstack/management/classpath.conf` will default to Java7 if the 
> JAVA_HOME is not set.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CLOUDSTACK-9780) Default to Java8 if JAVA_HOME is not set

2017-02-10 Thread Will Stevens (JIRA)
Will Stevens created CLOUDSTACK-9780:


 Summary: Default to Java8 if JAVA_HOME is not set
 Key: CLOUDSTACK-9780
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9780
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.10.0.0
Reporter: Will Stevens


Now that PR1888 is merged, Java8 is required.  Unfortunately the file pushed to 
`/etc/cloudstack/management/classpath.conf` will default to Java7 if the 
JAVA_HOME is not set.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861532#comment-15861532
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1935#discussion_r100575787
  
--- Diff: server/src/com/cloud/user/DomainManagerImpl.java ---
@@ -273,79 +274,97 @@ public boolean deleteDomain(long domainId, Boolean 
cleanup) {
 
 @Override
 public boolean deleteDomain(DomainVO domain, Boolean cleanup) {
-// mark domain as inactive
-s_logger.debug("Marking domain id=" + domain.getId() + " as " + 
Domain.State.Inactive + " before actually deleting it");
-domain.setState(Domain.State.Inactive);
-_domainDao.update(domain.getId(), domain);
-boolean rollBackState = false;
-boolean hasDedicatedResources = false;
+GlobalLock lock = GlobalLock.getInternLock("AccountCleanup");
+if (lock == null) {
+s_logger.debug("Couldn't get the global lock");
+return false;
+}
+
+if (!lock.lock(30)) {
+s_logger.debug("Couldn't lock the db");
+return false;
+}
 
 try {
-long ownerId = domain.getAccountId();
-if ((cleanup != null) && cleanup.booleanValue()) {
-if (!cleanupDomain(domain.getId(), ownerId)) {
-rollBackState = true;
-CloudRuntimeException e =
-new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
-domain.getId() + ").");
-e.addProxyObject(domain.getUuid(), "domainId");
-throw e;
-}
-} else {
-//don't delete the domain if there are accounts set for 
cleanup, or non-removed networks exist, or domain has dedicated resources
-List networkIds = 
_networkDomainDao.listNetworkIdsByDomain(domain.getId());
-List accountsForCleanup = 
_accountDao.findCleanupsForRemovedAccounts(domain.getId());
-List dedicatedResources = 
_dedicatedDao.listByDomainId(domain.getId());
-if (dedicatedResources != null && 
!dedicatedResources.isEmpty()) {
-s_logger.error("There are dedicated resources for the 
domain " + domain.getId());
-hasDedicatedResources = true;
-}
-if (accountsForCleanup.isEmpty() && networkIds.isEmpty() 
&& !hasDedicatedResources) {
-_messageBus.publish(_name, 
MESSAGE_PRE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
-if (!_domainDao.remove(domain.getId())) {
+// mark domain as inactive
+s_logger.debug("Marking domain id=" + domain.getId() + " as " 
+ Domain.State.Inactive + " before actually deleting it");
+domain.setState(Domain.State.Inactive);
+_domainDao.update(domain.getId(), domain);
+boolean rollBackState = false;
+boolean hasDedicatedResources = false;
+
+try {
+long ownerId = domain.getAccountId();
+if ((cleanup != null) && cleanup.booleanValue()) {
+if (!cleanupDomain(domain.getId(), ownerId)) {
 rollBackState = true;
 CloudRuntimeException e =
-new CloudRuntimeException("Delete failed on 
domain " + domain.getName() + " (id: " + domain.getId() +
-"); Please make sure all users and sub 
domains have been removed from the domain before deleting");
+new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
+domain.getId() + ").");
 e.addProxyObject(domain.getUuid(), "domainId");
 throw e;
 }
-_messageBus.publish(_name, 
MESSAGE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
 } else {
-rollBackState = true;
-String msg = null;
-if (!accountsForCleanup.isEmpty()) {
-msg = accountsForCleanup.size() + " accounts to 
cleanup";
-} else if (!networkIds.isEmpty()) {
-msg = 

[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861529#comment-15861529
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1935#discussion_r100574299
  
--- Diff: server/src/com/cloud/user/DomainManagerImpl.java ---
@@ -273,79 +274,97 @@ public boolean deleteDomain(long domainId, Boolean 
cleanup) {
 
 @Override
 public boolean deleteDomain(DomainVO domain, Boolean cleanup) {
-// mark domain as inactive
-s_logger.debug("Marking domain id=" + domain.getId() + " as " + 
Domain.State.Inactive + " before actually deleting it");
-domain.setState(Domain.State.Inactive);
-_domainDao.update(domain.getId(), domain);
-boolean rollBackState = false;
-boolean hasDedicatedResources = false;
+GlobalLock lock = GlobalLock.getInternLock("AccountCleanup");
+if (lock == null) {
+s_logger.debug("Couldn't get the global lock");
+return false;
+}
+
+if (!lock.lock(30)) {
+s_logger.debug("Couldn't lock the db");
+return false;
+}
 
 try {
-long ownerId = domain.getAccountId();
-if ((cleanup != null) && cleanup.booleanValue()) {
-if (!cleanupDomain(domain.getId(), ownerId)) {
-rollBackState = true;
-CloudRuntimeException e =
-new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
-domain.getId() + ").");
-e.addProxyObject(domain.getUuid(), "domainId");
-throw e;
-}
-} else {
-//don't delete the domain if there are accounts set for 
cleanup, or non-removed networks exist, or domain has dedicated resources
-List networkIds = 
_networkDomainDao.listNetworkIdsByDomain(domain.getId());
-List accountsForCleanup = 
_accountDao.findCleanupsForRemovedAccounts(domain.getId());
-List dedicatedResources = 
_dedicatedDao.listByDomainId(domain.getId());
-if (dedicatedResources != null && 
!dedicatedResources.isEmpty()) {
-s_logger.error("There are dedicated resources for the 
domain " + domain.getId());
-hasDedicatedResources = true;
-}
-if (accountsForCleanup.isEmpty() && networkIds.isEmpty() 
&& !hasDedicatedResources) {
-_messageBus.publish(_name, 
MESSAGE_PRE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
-if (!_domainDao.remove(domain.getId())) {
+// mark domain as inactive
+s_logger.debug("Marking domain id=" + domain.getId() + " as " 
+ Domain.State.Inactive + " before actually deleting it");
+domain.setState(Domain.State.Inactive);
+_domainDao.update(domain.getId(), domain);
+boolean rollBackState = false;
+boolean hasDedicatedResources = false;
+
+try {
+long ownerId = domain.getAccountId();
+if ((cleanup != null) && cleanup.booleanValue()) {
+if (!cleanupDomain(domain.getId(), ownerId)) {
 rollBackState = true;
 CloudRuntimeException e =
-new CloudRuntimeException("Delete failed on 
domain " + domain.getName() + " (id: " + domain.getId() +
-"); Please make sure all users and sub 
domains have been removed from the domain before deleting");
+new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
+domain.getId() + ").");
 e.addProxyObject(domain.getUuid(), "domainId");
 throw e;
 }
-_messageBus.publish(_name, 
MESSAGE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
 } else {
--- End diff --

What about extracting this "else block" to a method?
Lines 308-340


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: 

[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861530#comment-15861530
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1935#discussion_r100574038
  
--- Diff: server/src/com/cloud/user/DomainManagerImpl.java ---
@@ -273,79 +274,97 @@ public boolean deleteDomain(long domainId, Boolean 
cleanup) {
 
 @Override
 public boolean deleteDomain(DomainVO domain, Boolean cleanup) {
-// mark domain as inactive
-s_logger.debug("Marking domain id=" + domain.getId() + " as " + 
Domain.State.Inactive + " before actually deleting it");
-domain.setState(Domain.State.Inactive);
-_domainDao.update(domain.getId(), domain);
-boolean rollBackState = false;
-boolean hasDedicatedResources = false;
+GlobalLock lock = GlobalLock.getInternLock("AccountCleanup");
+if (lock == null) {
+s_logger.debug("Couldn't get the global lock");
+return false;
+}
+
+if (!lock.lock(30)) {
+s_logger.debug("Couldn't lock the db");
+return false;
+}
 
 try {
-long ownerId = domain.getAccountId();
-if ((cleanup != null) && cleanup.booleanValue()) {
-if (!cleanupDomain(domain.getId(), ownerId)) {
-rollBackState = true;
-CloudRuntimeException e =
-new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
-domain.getId() + ").");
-e.addProxyObject(domain.getUuid(), "domainId");
-throw e;
-}
-} else {
-//don't delete the domain if there are accounts set for 
cleanup, or non-removed networks exist, or domain has dedicated resources
-List networkIds = 
_networkDomainDao.listNetworkIdsByDomain(domain.getId());
-List accountsForCleanup = 
_accountDao.findCleanupsForRemovedAccounts(domain.getId());
-List dedicatedResources = 
_dedicatedDao.listByDomainId(domain.getId());
-if (dedicatedResources != null && 
!dedicatedResources.isEmpty()) {
-s_logger.error("There are dedicated resources for the 
domain " + domain.getId());
-hasDedicatedResources = true;
-}
-if (accountsForCleanup.isEmpty() && networkIds.isEmpty() 
&& !hasDedicatedResources) {
-_messageBus.publish(_name, 
MESSAGE_PRE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
-if (!_domainDao.remove(domain.getId())) {
+// mark domain as inactive
+s_logger.debug("Marking domain id=" + domain.getId() + " as " 
+ Domain.State.Inactive + " before actually deleting it");
+domain.setState(Domain.State.Inactive);
+_domainDao.update(domain.getId(), domain);
+boolean rollBackState = false;
+boolean hasDedicatedResources = false;
+
+try {
+long ownerId = domain.getAccountId();
+if ((cleanup != null) && cleanup.booleanValue()) {
--- End diff --

What about using "org.apache.commons.lang.BooleanUtils.toBoolean(Boolean)" 
here?
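
For reference, a minimal sketch of the suggested change (assuming org.apache.commons.lang.BooleanUtils is available on the classpath, as it is elsewhere in CloudStack):

```
import org.apache.commons.lang.BooleanUtils;

public class CleanupFlagExample {
    // BooleanUtils.toBoolean(Boolean) is null-safe: it returns false for null,
    // so it can replace "(cleanup != null) && cleanup.booleanValue()".
    static boolean shouldCleanup(Boolean cleanup) {
        return BooleanUtils.toBoolean(cleanup);
    }

    public static void main(String[] args) {
        System.out.println(shouldCleanup(null));         // false
        System.out.println(shouldCleanup(Boolean.TRUE)); // true
    }
}
```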


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that {{deleteDomain}} task failed 
> for domains with multiple accounts and resources. Examining logs it was found 
> out that if Account Cleanup Task got executed after domain (and all of its 
> subchilds) got marked as Inactive; and before delete domain task finishes, it 
> produces a failure.
> {{AccountCleanupTask}} gets executed every {{account.cleanup.interval}} 
> seconds looking for:
> * Removed accounts
> * Disabled accounts
> * Inactive domains
> As {{deleteDomain}} marks domain to delete (and its subchilds) as Inactive 
> before deleting them, when {{AccountCleanupTask}} is executed, it removes 
> marked domains. When there are 

[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861531#comment-15861531
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1935#discussion_r100575967
  
--- Diff: server/src/com/cloud/user/DomainManagerImpl.java ---
@@ -273,79 +274,97 @@ public boolean deleteDomain(long domainId, Boolean 
cleanup) {
 
 @Override
 public boolean deleteDomain(DomainVO domain, Boolean cleanup) {
-// mark domain as inactive
-s_logger.debug("Marking domain id=" + domain.getId() + " as " + 
Domain.State.Inactive + " before actually deleting it");
-domain.setState(Domain.State.Inactive);
-_domainDao.update(domain.getId(), domain);
-boolean rollBackState = false;
-boolean hasDedicatedResources = false;
+GlobalLock lock = GlobalLock.getInternLock("AccountCleanup");
+if (lock == null) {
+s_logger.debug("Couldn't get the global lock");
+return false;
+}
+
+if (!lock.lock(30)) {
+s_logger.debug("Couldn't lock the db");
+return false;
+}
 
 try {
-long ownerId = domain.getAccountId();
-if ((cleanup != null) && cleanup.booleanValue()) {
-if (!cleanupDomain(domain.getId(), ownerId)) {
-rollBackState = true;
-CloudRuntimeException e =
-new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
-domain.getId() + ").");
-e.addProxyObject(domain.getUuid(), "domainId");
-throw e;
-}
-} else {
-//don't delete the domain if there are accounts set for 
cleanup, or non-removed networks exist, or domain has dedicated resources
-List networkIds = 
_networkDomainDao.listNetworkIdsByDomain(domain.getId());
-List accountsForCleanup = 
_accountDao.findCleanupsForRemovedAccounts(domain.getId());
-List dedicatedResources = 
_dedicatedDao.listByDomainId(domain.getId());
-if (dedicatedResources != null && 
!dedicatedResources.isEmpty()) {
-s_logger.error("There are dedicated resources for the 
domain " + domain.getId());
-hasDedicatedResources = true;
-}
-if (accountsForCleanup.isEmpty() && networkIds.isEmpty() 
&& !hasDedicatedResources) {
-_messageBus.publish(_name, 
MESSAGE_PRE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
-if (!_domainDao.remove(domain.getId())) {
+// mark domain as inactive
+s_logger.debug("Marking domain id=" + domain.getId() + " as " 
+ Domain.State.Inactive + " before actually deleting it");
+domain.setState(Domain.State.Inactive);
+_domainDao.update(domain.getId(), domain);
+boolean rollBackState = false;
+boolean hasDedicatedResources = false;
+
+try {
+long ownerId = domain.getAccountId();
+if ((cleanup != null) && cleanup.booleanValue()) {
+if (!cleanupDomain(domain.getId(), ownerId)) {
 rollBackState = true;
 CloudRuntimeException e =
-new CloudRuntimeException("Delete failed on 
domain " + domain.getName() + " (id: " + domain.getId() +
-"); Please make sure all users and sub 
domains have been removed from the domain before deleting");
+new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
+domain.getId() + ").");
 e.addProxyObject(domain.getUuid(), "domainId");
 throw e;
 }
-_messageBus.publish(_name, 
MESSAGE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
 } else {
-rollBackState = true;
-String msg = null;
-if (!accountsForCleanup.isEmpty()) {
-msg = accountsForCleanup.size() + " accounts to 
cleanup";
-} else if (!networkIds.isEmpty()) {
-msg = 

[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861408#comment-15861408
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9539:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1727
  
@borisstoyanov @DaanHoogland @rhtyd KVM tests passed, but the key smoke test was skipped: 
test_change_service_offering_for_vm_with_snapshots: Skipped
To be on the safe side, can we test it on VMware and Xen as well?


> Support changing Service offering for instance with VM Snapshots
> 
>
> Key: CLOUDSTACK-9539
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9539
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> CloudStack doesn't support changing the service offering for VM instances which 
> have VM snapshots; the snapshots must be removed before changing the service offering.
> h3. Goal
> Extend the current behaviour by supporting changing the service offering for VMs which 
> have VM snapshots. In that case, previously taken snapshots (if reverted) should use the 
> previous service offering, while future snapshots should use the new one.
> h3. Proposed solution:
> 1. Adding {{service_offering_id}} column on {{vm_snapshots}} table: This way 
> snapshot can be reverted to original state even though service offering can 
> be changed for vm instance.
> NOTE: Existing vm snapshots are populated on update script by {{UPDATE 
> vm_snapshots s JOIN vm_instance v ON v.id = s.vm_id SET s.service_offering_id 
> = v.service_offering_id;}}
> 2. New vm snapshots will use instance vm service offering id as 
> {{service_offering_id}}
> 3. Revert to vm snapshots should use vm snapshot's {{service_offering_id}} 
> value.
> h3. Example use case:
> - Deploy vm using service offering A
> - Take vm snapshot -> snap1 (service offering A)
> - Stop vm
> - Change vm service offering to B
> - Revert to VM snapshot snap 1
> - Start vm
> It is expected that vm has service offering A after last step



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861398#comment-15861398
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9539:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1727
  
Trillian test result (tid-809)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 31586 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1727-t809-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 48 tests look OK, 1 has error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 310.09 | 
test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 160.06 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 61.24 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 241.12 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 258.78 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 532.79 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 506.07 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1394.10 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 545.37 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 751.62 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1288.57 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 156.49 | test_volumes.py
test_08_resize_volume | Success | 151.34 | test_volumes.py
test_07_resize_fail | Success | 156.41 | test_volumes.py
test_06_download_detached_volume | Success | 151.59 | test_volumes.py
test_05_detach_volume | Success | 150.73 | test_volumes.py
test_04_delete_attached_volume | Success | 151.10 | test_volumes.py
test_03_download_attached_volume | Success | 156.22 | test_volumes.py
test_02_attach_volume | Success | 84.11 | test_volumes.py
test_01_create_volume | Success | 711.26 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.18 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 95.76 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 158.73 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 263.02 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.64 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.15 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 50.93 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.12 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.78 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.90 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.18 | test_vm_life_cycle.py
test_01_stop_vm | Success | 35.41 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 60.58 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.15 | test_templates.py
test_03_delete_template | Success | 5.13 | test_templates.py
test_02_edit_template | Success | 90.18 | test_templates.py
test_01_create_template | Success | 40.40 | test_templates.py
test_10_destroy_cpvm | Success | 191.41 | test_ssvm.py
test_09_destroy_ssvm | Success | 168.60 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.48 | test_ssvm.py
test_07_reboot_ssvm | Success | 103.35 | test_ssvm.py
test_06_stop_cpvm | Success | 131.79 | test_ssvm.py
test_05_stop_ssvm | Success | 133.21 | test_ssvm.py
test_04_cpvm_internals | Success | 1.15 | test_ssvm.py
test_03_ssvm_internals | Success | 3.94 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.20 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.29 | test_snapshots.py
test_04_change_offering_small | Success | 239.54 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.05 | test_service_offerings.py
test_01_create_service_offering | Success | 0.10 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.12 | test_secondary_storage.py

[jira] [Commented] (CLOUDSTACK-9752) [Vmware] Optimization of volume attachness to vm

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861331#comment-15861331
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9752:


Github user nvazquez commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1913#discussion_r100543231
  
--- Diff: 
plugins/hypervisors/vmware/src/com/cloud/storage/resource/VmwareStorageProcessor.java
 ---
@@ -1577,11 +1577,15 @@ public Answer createVolume(CreateObjectCommand cmd) 
{
 }
 
 synchronized (this) {
-// s_logger.info("Delete file if exists in datastore 
to clear the way for creating the volume. file: " + volumeDatastorePath);
-VmwareStorageLayoutHelper.deleteVolumeVmdkFiles(dsMo, 
volumeUuid.toString(), dcMo);
-
-vmMo.createDisk(volumeDatastorePath, 
(int)(volume.getSize() / (1024L * 1024L)), morDatastore, 
vmMo.getScsiDeviceControllerKey());
-vmMo.detachDisk(volumeDatastorePath, false);
+try {
+vmMo.createDisk(volumeDatastorePath, 
(int)(volume.getSize() / (1024L * 1024L)), morDatastore, 
vmMo.getScsiDeviceControllerKey());
+vmMo.detachDisk(volumeDatastorePath, false);
+}
+catch (Exception e) {
+s_logger.error("Deleting file " + 
volumeDatastorePath + " due to error: " + e.getMessage());
+
VmwareStorageLayoutHelper.deleteVolumeVmdkFiles(dsMo, volumeUuid.toString(), 
dcMo);
--- End diff --

Hi @syed @karuturi @SudharmaJain,
What do you think about this approach? We will delete the vmdk file if either 
createDisk or detachDisk fails; this way we make sure the file doesn't exist in case 
CloudStack retries the operation. By the way, VolumeOrchestrator lines 556-588 contain 
the retry logic, which retries only if the failure message contains "request template 
reload":


for (int i = 0; i < 2; i++) {
    // retry one more time in case of template reload is required for Vmware case
    AsyncCallFuture<VolumeApiResult> future = null;
    boolean isNotCreatedFromTemplate = volume.getTemplateId() == null ? true : false;
    if (isNotCreatedFromTemplate) {
        future = volService.createVolumeAsync(volume, store);
    } else {
        TemplateInfo templ = tmplFactory.getTemplate(template.getId(), DataStoreRole.Image);
        future = volService.createVolumeFromTemplateAsync(volume, store.getId(), templ);
    }
    try {
        VolumeApiResult result = future.get();
        if (result.isFailed()) {
            if (result.getResult().contains("request template reload") && (i == 0)) {
                s_logger.debug("Retry template re-deploy for vmware");
                continue;
            } else {
                s_logger.debug("create volume failed: " + result.getResult());
                throw new CloudRuntimeException("create volume failed:" + result.getResult());
            }
        }

        return result.getVolume();
    } catch (InterruptedException e) {
        s_logger.error("create volume failed", e);
        throw new CloudRuntimeException("create volume failed", e);
    } catch (ExecutionException e) {
        s_logger.error("create volume failed", e);
        throw new CloudRuntimeException("create volume failed", e);
    }
}
throw new CloudRuntimeException("create volume failed even after template re-deploy");
}


To preserve this logic, I've passed the caught exception's message into the exception 
thrown at line 1587. Do you agree with this solution?
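
A self-contained toy illustration of why preserving the original failure message matters (the names and the simulated failure below are assumptions, not CloudStack APIs): the retry in the loop above fires only when the message propagated from the storage layer contains "request template reload".

```
public class CreateVolumeRetryExample {
    // Simulated storage-side call: the first attempt fails with a stale-template error.
    static String createDisk(boolean simulateStaleTemplate) {
        if (simulateStaleTemplate) {
            throw new RuntimeException("request template reload");
        }
        return "volume-1";
    }

    // Mirrors the orchestrator loop quoted above: retry once, but only when the
    // propagated failure message contains "request template reload".
    static String createVolumeWithRetry() {
        for (int i = 0; i < 2; i++) {
            try {
                return createDisk(i == 0);
            } catch (RuntimeException e) {
                if (e.getMessage().contains("request template reload") && i == 0) {
                    continue;   // template re-deploy path; try again
                }
                throw e;        // any other failure (or a second failure) is fatal
            }
        }
        throw new RuntimeException("create volume failed even after template re-deploy");
    }

    public static void main(String[] args) {
        System.out.println(createVolumeWithRetry());   // prints "volume-1" after one retry
    }
}
```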


> [Vmware] Optimization of volume attachness to vm
> 
>
> Key: CLOUDSTACK-9752
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9752
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> This optimization aims to reduce volume attach slowness caused by searching the datastore 
> for vmdk files before creating the volume (searching for {{.vmdk}}, {{-flat.vmdk}} and 
> {{-delta.vmdk}} files to delete them if they exist). This search is not necessary when 
> attaching a volume in Allocated state, due

[jira] [Commented] (CLOUDSTACK-9317) Disabling static NAT on many IPs can leave wrong IPs on the router

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861277#comment-15861277
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9317:


Github user ProjectMoon commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1908#discussion_r100535848
  
--- Diff: setup/db/db/schema-4920to41000.sql ---
@@ -62,3 +62,4 @@ INSERT INTO `cloud`.`role_permissions` (`uuid`, 
`role_id`, `rule`, `permission`,
 INSERT INTO `cloud`.`role_permissions` (`uuid`, `role_id`, `rule`, 
`permission`, `sort_order`) values (UUID(), 3, 'createSnapshotFromVMSnapshot', 
'ALLOW', 302) ON DUPLICATE KEY UPDATE rule=rule;
 INSERT INTO `cloud`.`role_permissions` (`uuid`, `role_id`, `rule`, 
`permission`, `sort_order`) values (UUID(), 4, 'createSnapshotFromVMSnapshot', 
'ALLOW', 260) ON DUPLICATE KEY UPDATE rule=rule;
 
+ALTER TABLE `user_ip_address` ADD COLUMN `staticnat_state` VARCHAR(32) 
COMMENT 'static  rule state while removing'
--- End diff --

This column name does not match the name of the column in the VO 
(`rule_state`).


> Disabling static NAT on many IPs can leave wrong IPs on the router
> --
>
> Key: CLOUDSTACK-9317
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9317
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Virtual Router
>Affects Versions: 4.7.0, 4.7.1, 4.7.2
>Reporter: Jeff Hair
>
> The current behavior of enabling or disabling static NAT will call the apply 
> IP associations method in the management server. The method is not 
> thread-safe. If it's called from multiple threads, each thread will load up 
> the list of public IPs in different states (add or revoke)--correct for the 
> thread, but not correct overall. Depending on execution order on the virtual 
> router, the router can end up with public IPs assigned to it that are not 
> supposed to be on it anymore. When another account acquires the same IP, this 
> of course leads to network problems.
> The problem has been in CS since at least 4.2, and likely affects all 
> recently released versions. Affected version is set to 4.7.x because that's 
> what we verified against.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861192#comment-15861192
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that {{deleteDomain}} task failed 
> for domains with multiple accounts and resources. Examining logs it was found 
> out that if Account Cleanup Task got executed after domain (and all of its 
> subchilds) got marked as Inactive; and before delete domain task finishes, it 
> produces a failure.
> {{AccountCleanupTask}} gets executed every {{account.cleanup.interval}} 
> seconds looking for:
> * Removed accounts
> * Disabled accounts
> * Inactive domains
> As {{deleteDomain}} marks domain to delete (and its subchilds) as Inactive 
> before deleting them, when {{AccountCleanupTask}} is executed, it removes 
> marked domains. When there are resources to cleanup on domain accounts, 
> domain is not found throwing exception: 
> {{com.cloud.exception.InvalidParameterValueException: Please specify a valid 
> domain ID}}
> h3. Example
> {{account.cleanup.interval}} = 100
> {noformat}
> 2017-01-26 06:07:03,621 DEBUG [cloud.api.ApiServlet] 
> (catalina-exec-8:ctx-50cfa3b6 ctx-92ad5b38) ===END===  10.39.251.17 -- GET  
> command=deleteDomain=1910a3dc-6fa6-457b-ab3a-602b0cfb6686=true=json&_=1485439623475
> ...
> // Domain and its subchilds marked as Inactive
> 2017-01-26 06:07:03,640 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Marking domain id=27 
> as Inactive before actually deleting it
> 2017-01-26 06:07:03,646 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=27
> 2017-01-26 06:07:03,670 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=28
> 2017-01-26 06:07:03,685 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=29
> ...
> // AccountCleanupTask removes Inactive domain id=29, no rollback for it
> 2017-01-26 06:07:44,285 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 removed accounts to cleanup
> 2017-01-26 06:07:44,287 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled accounts to cleanup
> 2017-01-26 06:07:44,289 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 3 inactive domains to cleanup
> 2017-01-26 06:07:44,292 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=27
> 2017-01-26 06:07:44,297 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,301 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=28
> 2017-01-26 06:07:44,304 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,307 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=29
> 2017-01-26 06:07:44,319 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled 

[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861191#comment-15861191
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@blueorangutan test


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that {{deleteDomain}} task failed 
> for domains with multiple accounts and resources. Examining logs it was found 
> out that if Account Cleanup Task got executed after domain (and all of its 
> subchilds) got marked as Inactive; and before delete domain task finishes, it 
> produces a failure.
> {{AccountCleanupTask}} gets executed every {{account.cleanup.interval}} 
> seconds looking for:
> * Removed accounts
> * Disabled accounts
> * Inactive domains
> As {{deleteDomain}} marks domain to delete (and its subchilds) as Inactive 
> before deleting them, when {{AccountCleanupTask}} is executed, it removes 
> marked domains. When there are resources to cleanup on domain accounts, 
> domain is not found throwing exception: 
> {{com.cloud.exception.InvalidParameterValueException: Please specify a valid 
> domain ID}}
> h3. Example
> {{account.cleanup.interval}} = 100
> {noformat}
> 2017-01-26 06:07:03,621 DEBUG [cloud.api.ApiServlet] 
> (catalina-exec-8:ctx-50cfa3b6 ctx-92ad5b38) ===END===  10.39.251.17 -- GET  
> command=deleteDomain=1910a3dc-6fa6-457b-ab3a-602b0cfb6686=true=json&_=1485439623475
> ...
> // Domain and its subchilds marked as Inactive
> 2017-01-26 06:07:03,640 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Marking domain id=27 
> as Inactive before actually deleting it
> 2017-01-26 06:07:03,646 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=27
> 2017-01-26 06:07:03,670 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=28
> 2017-01-26 06:07:03,685 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=29
> ...
> // AccountCleanupTask removes Inactive domain id=29, no rollback for it
> 2017-01-26 06:07:44,285 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 removed accounts to cleanup
> 2017-01-26 06:07:44,287 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled accounts to cleanup
> 2017-01-26 06:07:44,289 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 3 inactive domains to cleanup
> 2017-01-26 06:07:44,292 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=27
> 2017-01-26 06:07:44,297 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,301 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=28
> 2017-01-26 06:07:44,304 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,307 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=29
> 2017-01-26 06:07:44,319 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled projects to cleanup
> ...
> // Failure due to domain is already removed
> 2017-01-26 

[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861153#comment-15861153
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-478


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that {{deleteDomain}} task failed 
> for domains with multiple accounts and resources. Examining logs it was found 
> out that if Account Cleanup Task got executed after domain (and all of its 
> subchilds) got marked as Inactive; and before delete domain task finishes, it 
> produces a failure.
> {{AccountCleanupTask}} gets executed every {{account.cleanup.interval}} 
> seconds looking for:
> * Removed accounts
> * Disabled accounts
> * Inactive domains
> As {{deleteDomain}} marks domain to delete (and its subchilds) as Inactive 
> before deleting them, when {{AccountCleanupTask}} is executed, it removes 
> marked domains. When there are resources to cleanup on domain accounts, 
> domain is not found throwing exception: 
> {{com.cloud.exception.InvalidParameterValueException: Please specify a valid 
> domain ID}}
> h3. Example
> {{account.cleanup.interval}} = 100
> {noformat}
> 2017-01-26 06:07:03,621 DEBUG [cloud.api.ApiServlet] 
> (catalina-exec-8:ctx-50cfa3b6 ctx-92ad5b38) ===END===  10.39.251.17 -- GET  
> command=deleteDomain=1910a3dc-6fa6-457b-ab3a-602b0cfb6686=true=json&_=1485439623475
> ...
> // Domain and its subchilds marked as Inactive
> 2017-01-26 06:07:03,640 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Marking domain id=27 
> as Inactive before actually deleting it
> 2017-01-26 06:07:03,646 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=27
> 2017-01-26 06:07:03,670 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=28
> 2017-01-26 06:07:03,685 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=29
> ...
> // AccountCleanupTask removes Inactive domain id=29, no rollback for it
> 2017-01-26 06:07:44,285 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 removed accounts to cleanup
> 2017-01-26 06:07:44,287 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled accounts to cleanup
> 2017-01-26 06:07:44,289 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 3 inactive domains to cleanup
> 2017-01-26 06:07:44,292 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=27
> 2017-01-26 06:07:44,297 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,301 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=28
> 2017-01-26 06:07:44,304 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,307 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=29
> 2017-01-26 06:07:44,319 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled projects to cleanup
> ...
> // Failure due to domain is 

[jira] [Commented] (CLOUDSTACK-9569) VR on shared network not starting on KVM

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861130#comment-15861130
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9569:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1933
  
Testing has PASSED.
The hardcoded timeout of 10 minutes can now be overridden by setting 
"router.aggregation.command.each.timeout" in the 
"/etc/cloudstack/agent/agent.properties" file.
Testing steps:
1. Set router.aggregation.command.each.timeout=1
2. Added a sleep of 15 seconds in router_proxy.sh
3. Restarted the agent
4. Started a new VM with a new network
In the log, the following observations were noted:

```
2017-01-20 12:04:05,787 DEBUG [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-2:null) Executing: 
/usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh vr_cfg.sh 
169.254.3.57 -c /var/cache/cloud/VR-a20603a7-8e10-4378-96bb-7a2dbc7c6c0b.cfg
2017-01-20 12:04:06,792 WARN  [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-2:null) Timed out: 
/usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh vr_cfg.sh 
169.254.3.57 -c /var/cache/cloud/VR-a20603a7-8e10-4378-96bb-7a2dbc7c6c0b.cfg .  
Output is:
2017-01-20 12:05:20,419 DEBUG [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-1:null) Executing: 
/usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh vr_cfg.sh 
169.254.0.41 -c /var/cache/cloud/VR-090cae6a-f07d-40bb-9f19-809ccdcca16b.cfg
2017-01-20 12:05:21,423 WARN  [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-1:null) Timed out: 
/usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh vr_cfg.sh 
169.254.0.41 -c /var/cache/cloud/VR-090cae6a-f07d-40bb-9f19-809ccdcca16b.cfg .  
Output is:
2017-01-20 12:06:33,620 DEBUG [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-1:null) Executing: 
/usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh vr_cfg.sh 
169.254.2.33 -c /var/cache/cloud/VR-63f52dbd-b710-4e57-a21b-1f4bcd146ec3.cfg
2017-01-20 12:06:34,624 WARN  [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-1:null) Timed out: 
/usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh vr_cfg.sh 
169.254.2.33 -c /var/cache/cloud/VR-63f52dbd-b710-4e57-a21b-1f4bcd146ec3.cfg .  
Output is:
```

The VR failed to start with the message "Unable to start a VM due to 
insufficient capacity".
I think we could add a log message in the agent log saying that the command 
timed out, because right now it leaves no clue about where the timeout happened.
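
As a side note, the sketch below is only an illustration of the override mentioned 
above; it is not the agent's actual code, and the class and method names are 
invented for the example. It reads router.aggregation.command.each.timeout from 
/etc/cloudstack/agent/agent.properties and falls back to the hardcoded 10-minute 
default when the key is absent or the file is missing.

```
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Hypothetical helper, for illustration only.
public class AgentTimeoutSketch {
    private static final int DEFAULT_TIMEOUT_SECONDS = 600; // the hardcoded 10-minute default

    static int readEachTimeout(String propertiesPath) {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(propertiesPath)) {
            props.load(in);
        } catch (IOException e) {
            return DEFAULT_TIMEOUT_SECONDS; // no file or unreadable file: keep the default
        }
        String raw = props.getProperty("router.aggregation.command.each.timeout");
        if (raw == null || raw.trim().isEmpty()) {
            return DEFAULT_TIMEOUT_SECONDS; // key absent: keep the default
        }
        return Integer.parseInt(raw.trim()); // e.g. "1" in the test described above
    }

    public static void main(String[] args) {
        System.out.println(readEachTimeout("/etc/cloudstack/agent/agent.properties"));
    }
}
```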


> VR on shared network not starting on KVM
> 
>
> Key: CLOUDSTACK-9569
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9569
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller
>Affects Versions: 4.9.0
>Reporter: John Burwell
>Priority: Critical
> Fix For: 4.9.2.0, 4.10.1.0, 4.11.0.0
>
> Attachments: cloud.log
>
>
> A VR for a shared network on KVM fails to complete startup with the following 
> behavior:
> # VR starts on KVM
> # Agent pings VR
> # Increase timeout from 120 seconds to 1200 seconds
> # API configuration starts
> The Management Server reports that the command times out. Please see the 
> attached {{cloud.log}}, which depicts the activity of the VR through the 
> timeout. This failure does not occur on VMware.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9772) Perform HEAD request to retrieve header information

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861128#comment-15861128
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9772:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1934
  
@marcaurele there are some Marvin tests for extract and delete template, 
could you please have a look?


> Perform HEAD request to retrieve header information
> ---
>
> Key: CLOUDSTACK-9772
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9772
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Template
>Affects Versions: 4.2.0, 4.2.1, 4.3.0, 4.4.0, 4.5.0, 4.3.1, 4.4.1, 4.4.2, 
> 4.4.3, 4.3.2, 4.5.1, 4.4.4, 4.5.2, 4.6.0, 4.6.1, 4.6.2, 4.7.0, 4.7.1, 4.8.0, 
> 4.9.0, 4.8.1.1, 4.9.0.1, 4.5.2.2
>Reporter: Marc-Aurèle Brothier
>Assignee: Marc-Aurèle Brothier
>
> The function in UriUtils that checks the file size of a template at an 
> arbitrary URL sends a `GET` request only to retrieve the response headers. A 
> `HEAD` request is the correct way of retrieving such information from the 
> response headers.
> This was affecting the restart of a management server, since all templates 
> were retrieved when the startup command was received from the secondary 
> storage sysvm.
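
For illustration, here is a minimal sketch of the idea; it is not the UriUtils 
implementation, the class and method names are invented, and the URL in main is 
just a placeholder. A HEAD request returns only the response headers (including 
Content-Length), so no template body is transferred while checking the size.

```
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical sketch, for illustration only.
public class RemoteSizeSketch {
    static long remoteSize(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try {
            conn.setRequestMethod("HEAD"); // headers only, no payload
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            conn.connect();
            return conn.getContentLengthLong(); // -1 when Content-Length is absent
        } finally {
            conn.disconnect();
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder URL; any reachable HTTP template location would do.
        System.out.println(remoteSize("http://example.com/template.qcow2"));
    }
}
```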



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861125#comment-15861125
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that the {{deleteDomain}} task failed 
> for domains with multiple accounts and resources. Examining the logs, it was 
> found that if the Account Cleanup Task runs after the domain (and all of its 
> children) has been marked as Inactive, but before the delete domain task 
> finishes, the deletion fails.
> {{AccountCleanupTask}} runs every {{account.cleanup.interval}} seconds, 
> looking for:
> * Removed accounts
> * Disabled accounts
> * Inactive domains
> As {{deleteDomain}} marks the domain to delete (and its children) as Inactive 
> before actually deleting it, {{AccountCleanupTask}} removes the marked domains 
> when it runs. If there are still resources to clean up on the domain's 
> accounts, the domain is no longer found and the exception 
> {{com.cloud.exception.InvalidParameterValueException: Please specify a valid 
> domain ID}} is thrown.
> h3. Example
> {{account.cleanup.interval}} = 100
> {noformat}
> 2017-01-26 06:07:03,621 DEBUG [cloud.api.ApiServlet] 
> (catalina-exec-8:ctx-50cfa3b6 ctx-92ad5b38) ===END===  10.39.251.17 -- GET  
> command=deleteDomain=1910a3dc-6fa6-457b-ab3a-602b0cfb6686=true=json&_=1485439623475
> ...
> // Domain and its subchilds marked as Inactive
> 2017-01-26 06:07:03,640 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Marking domain id=27 
> as Inactive before actually deleting it
> 2017-01-26 06:07:03,646 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=27
> 2017-01-26 06:07:03,670 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=28
> 2017-01-26 06:07:03,685 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=29
> ...
> // AccountCleanupTask removes Inactive domain id=29, no rollback for it
> 2017-01-26 06:07:44,285 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 removed accounts to cleanup
> 2017-01-26 06:07:44,287 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled accounts to cleanup
> 2017-01-26 06:07:44,289 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 3 inactive domains to cleanup
> 2017-01-26 06:07:44,292 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=27
> 2017-01-26 06:07:44,297 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,301 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=28
> 2017-01-26 06:07:44,304 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,307 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=29
> 2017-01-26 06:07:44,319 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled projects 

[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861121#comment-15861121
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@blueorangutan package


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that the {{deleteDomain}} task failed 
> for domains with multiple accounts and resources. Examining the logs, it was 
> found that if the Account Cleanup Task runs after the domain (and all of its 
> children) has been marked as Inactive, but before the delete domain task 
> finishes, the deletion fails.
> {{AccountCleanupTask}} runs every {{account.cleanup.interval}} seconds, 
> looking for:
> * Removed accounts
> * Disabled accounts
> * Inactive domains
> As {{deleteDomain}} marks the domain to delete (and its children) as Inactive 
> before actually deleting it, {{AccountCleanupTask}} removes the marked domains 
> when it runs. If there are still resources to clean up on the domain's 
> accounts, the domain is no longer found and the exception 
> {{com.cloud.exception.InvalidParameterValueException: Please specify a valid 
> domain ID}} is thrown.
> h3. Example
> {{account.cleanup.interval}} = 100
> {noformat}
> 2017-01-26 06:07:03,621 DEBUG [cloud.api.ApiServlet] 
> (catalina-exec-8:ctx-50cfa3b6 ctx-92ad5b38) ===END===  10.39.251.17 -- GET  
> command=deleteDomain=1910a3dc-6fa6-457b-ab3a-602b0cfb6686=true=json&_=1485439623475
> ...
> // Domain and its subchilds marked as Inactive
> 2017-01-26 06:07:03,640 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Marking domain id=27 
> as Inactive before actually deleting it
> 2017-01-26 06:07:03,646 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=27
> 2017-01-26 06:07:03,670 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=28
> 2017-01-26 06:07:03,685 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=29
> ...
> // AccountCleanupTask removes Inactive domain id=29, no rollback for it
> 2017-01-26 06:07:44,285 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 removed accounts to cleanup
> 2017-01-26 06:07:44,287 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled accounts to cleanup
> 2017-01-26 06:07:44,289 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 3 inactive domains to cleanup
> 2017-01-26 06:07:44,292 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=27
> 2017-01-26 06:07:44,297 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,301 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=28
> 2017-01-26 06:07:44,304 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,307 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=29
> 2017-01-26 06:07:44,319 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled projects to cleanup
> ...
> // Failure due to domain is already removed
> 2017-01-26 
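
To make the check-then-act race above concrete, here is a self-contained toy model; 
the names are invented and this is neither CloudStack code nor the fix in the PR. 
deleteDomain marks the domain Inactive and only removes it later, while the periodic 
cleanup task removes any Inactive domain it sees, so the final removal no longer 
finds the row.

```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of the race, for illustration only.
public class DomainCleanupRaceSketch {
    enum State { ACTIVE, INACTIVE, REMOVED }

    static final Map<Long, State> DOMAINS = new ConcurrentHashMap<>();

    // What the periodic cleanup effectively does every account.cleanup.interval
    // seconds: remove every domain it finds marked Inactive.
    static void accountCleanupTask() {
        DOMAINS.replaceAll((id, s) -> s == State.INACTIVE ? State.REMOVED : s);
    }

    // Simplified deleteDomain: mark Inactive first, clean up, remove last.
    static void deleteDomain(long id) {
        DOMAINS.put(id, State.INACTIVE);
        // ... account and resource cleanup runs here and can take a while ...
        accountCleanupTask(); // the periodic task fires in the meantime
        if (DOMAINS.get(id) != State.INACTIVE) {
            // Mirrors "Please specify a valid domain ID": the row is already gone.
            throw new IllegalStateException("Domain " + id + " was removed concurrently");
        }
        DOMAINS.put(id, State.REMOVED);
    }

    public static void main(String[] args) {
        DOMAINS.put(27L, State.ACTIVE);
        deleteDomain(27L); // throws: the cleanup task must not touch domains that a
                           // deleteDomain job is still processing
    }
}
```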

[jira] [Commented] (CLOUDSTACK-9773) Don't break API output with non-printable characters

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861119#comment-15861119
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9773:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1936
  
This is a really small change; why is the Travis build failing?
@marcaurele @koushik-das 


> Don't break API output with non-printable characters
> 
>
> Key: CLOUDSTACK-9773
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9773
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Marc-Aurèle Brothier
>Assignee: Marc-Aurèle Brothier
>
> When non-printable characters are sent, the response returns them exactly as 
> they were sent, potentially breaking the JSON/XML parser on the other side. It 
> would be best not to include such invalid characters in the response.
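
As a rough illustration of the idea (not necessarily the approach taken in the fix; 
the class and method names are invented): strip control characters from a response 
value while keeping tab and line breaks, so downstream JSON/XML parsers are not fed 
unprintable bytes.

```
// Hypothetical sketch, for illustration only.
public class ResponseSanitizerSketch {
    static String stripNonPrintable(String value) {
        if (value == null) {
            return null;
        }
        // \p{Cntrl} matches ASCII control characters; the class intersection keeps \t, \n and \r.
        return value.replaceAll("[\\p{Cntrl}&&[^\\t\\n\\r]]", "");
    }

    public static void main(String[] args) {
        System.out.println(stripNonPrintable("name\u0001with\u0007bells")); // prints "namewithbells"
    }
}
```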



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9779) Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to load balancing rule

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861095#comment-15861095
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9779:


Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1937
  
File modes are also changed from 100644 → 100755, which is not necessary.


> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> load balancing rule
> 
>
> Key: CLOUDSTACK-9779
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9779
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Reporter: Nitesh Sarda
>
> ISSUE 
> =
> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> load balancing rule
> REPRO STEPS
> ==
> 1. Create two isolated guest networks with same CIDR
> 2. Deploy VMs on both networks
> 3. Acquire secondary IP on NICs of both VMs and make sure they have the same 
> value, user can input the IP address.
> 4. Configure Loadbalancing rule on one of the secondary IP address and try 
> releasing the other secondary IP address.
> 5. The operation would fail
> EXPECTED BEHAVIOR
> ==
> Secondary IP address should be released if there are no LB rules associated 
> with it.
> ACTUAL BEHAVIOR
> ==
> Releasing the secondary IP address fails even if there are no LB rules 
> associated with it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9779) Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to load balancing rule

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861092#comment-15861092
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9779:


Github user ustcweizhou commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1937#discussion_r100511377
  
--- Diff: server/src/com/cloud/network/NetworkServiceImpl.java ---
@@ -852,7 +852,8 @@ public boolean releaseSecondaryIpFromNic(long ipAddressId) {
             throw new InvalidParameterValueException("Can' remove the ip " + secondaryIp + "is associate with static NAT rule public IP address id " + publicIpVO.getId());
         }

-        if (_lbService.isLbRuleMappedToVmGuestIp(secondaryIp)) {
+        List lbRuleIdList = _firewallDao.listIdByNetworkAndPurposeAndNotRevoked(network.getId(), Purpose.LoadBalancing);
--- End diff --

These two checks can be done with a single join search in LoadBalancerVMMapDao.java.
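
To illustrate the scoping point behind these review comments (purely hypothetical 
code, not the NetworkServiceImpl or DAO change itself): whatever query is used, the 
check that blocks the release has to match the mapping on both the guest IP and the 
NIC's network, otherwise an identical secondary IP in another same-CIDR network 
produces the false positive described in the issue.

```
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch, for illustration only.
public class SecondaryIpReleaseSketch {
    static class LbMapping {
        final long networkId;
        final String instanceIp;
        final boolean revoked;
        LbMapping(long networkId, String instanceIp, boolean revoked) {
            this.networkId = networkId;
            this.instanceIp = instanceIp;
            this.revoked = revoked;
        }
    }

    // Only an active LB rule mapping in the *same* network should block the release.
    static boolean blocksRelease(long nicNetworkId, String secondaryIp, List<LbMapping> mappings) {
        return mappings.stream().anyMatch(m ->
                m.networkId == nicNetworkId && !m.revoked && secondaryIp.equals(m.instanceIp));
    }

    public static void main(String[] args) {
        List<LbMapping> mappings = Arrays.asList(new LbMapping(200L, "10.1.1.50", false));
        System.out.println(blocksRelease(100L, "10.1.1.50", mappings)); // false: other network, release allowed
        System.out.println(blocksRelease(200L, "10.1.1.50", mappings)); // true: LB rule in this network
    }
}
```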


> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> load balancing rule
> 
>
> Key: CLOUDSTACK-9779
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9779
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Reporter: Nitesh Sarda
>
> ISSUE 
> =
> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> load balancing rule
> REPRO STEPS
> ==
> 1. Create two isolated guest networks with same CIDR
> 2. Deploy VMs on both networks
> 3. Acquire secondary IP on NICs of both VMs and make sure they have the same 
> value, user can input the IP address.
> 4. Configure Loadbalancing rule on one of the secondary IP address and try 
> releasing the other secondary IP address.
> 5. The operation would fail
> EXPECTED BEHAVIOR
> ==
> Secondary IP address should be released if there are no LB rules associated 
> with it.
> ACTUAL BEHAVIOR
> ==
> Releasing the secondary IP address fails even if there are no LB rules 
> associated with it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9779) Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to load balancing rule

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861075#comment-15861075
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9779:


Github user ustcweizhou commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1937#discussion_r100509837
  
--- Diff: server/src/com/cloud/network/NetworkServiceImpl.java ---
@@ -852,7 +852,8 @@ public boolean releaseSecondaryIpFromNic(long ipAddressId) {
             throw new InvalidParameterValueException("Can' remove the ip " + secondaryIp + "is associate with static NAT rule public IP address id " + publicIpVO.getId());
--- End diff --

Line 849: the issue also happens with static NAT. Could you fix it as well?


> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> load balancing rule
> 
>
> Key: CLOUDSTACK-9779
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9779
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Reporter: Nitesh Sarda
>
> ISSUE 
> =
> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> load balancing rule
> REPRO STEPS
> ==
> 1. Create two isolated guest networks with same CIDR
> 2. Deploy VMs on both networks
> 3. Acquire secondary IP on NICs of both VMs and make sure they have the same 
> value, user can input the IP address.
> 4. Configure Loadbalancing rule on one of the secondary IP address and try 
> releasing the other secondary IP address.
> 5. The operation would fail
> EXPECTED BEHAVIOR
> ==
> Secondary IP address should be released if there are no LB rules associated 
> with it.
> ACTUAL BEHAVIOR
> ==
> Releasing the secondary IP address fails even if there are no LB rules 
> associated with it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9779) Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to load balancing rule

2017-02-10 Thread Nitesh Sarda (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861030#comment-15861030
 ] 

Nitesh Sarda commented on CLOUDSTACK-9779:
--

Hello [~weizhou],

I have raised a PR for this issue. Here is the link:

https://github.com/apache/cloudstack/pull/1937

> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> load balancing rule
> 
>
> Key: CLOUDSTACK-9779
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9779
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Reporter: Nitesh Sarda
>
> ISSUE 
> =
> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> load balancing rule
> REPRO STEPS
> ==
> 1. Create two isolated guest networks with same CIDR
> 2. Deploy VMs on both networks
> 3. Acquire secondary IP on NICs of both VMs and make sure they have the same 
> value, user can input the IP address.
> 4. Configure Loadbalancing rule on one of the secondary IP address and try 
> releasing the other secondary IP address.
> 5. The operation would fail
> EXPECTED BEHAVIOR
> ==
> Secondary IP address should be released if there are no LB rules associated 
> with it.
> ACTUAL BEHAVIOR
> ==
> Releasing the secondary IP address fails even if there are no LB rules 
> associated with it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9779) Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to load balancing rule

2017-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861019#comment-15861019
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9779:


GitHub user niteshsarda opened a pull request:

https://github.com/apache/cloudstack/pull/1937

CLOUDSTACK-9779 : Releasing secondary guest IP fails with error VM nic Ip 
x.x.x.x is mapped to load balancing rule

ISSUE 
=
Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped 
to load balancing rule

REPRO STEPS
==
1. Create two isolated guest networks with same CIDR
2. Deploy VMs on both networks
3. Acquire secondary IP on NICs of both VMs and make sure they have the 
same value, user can input the IP address.
4. Configure Loadbalancing rule on one of the secondary IP address and try 
releasing the other secondary IP address.
5. The operation would fail

EXPECTED BEHAVIOR
==
Secondary IP address should be released if there are no LB rules associated 
with it.

ACTUAL BEHAVIOR
==
Releasing the secondary IP address fails even if there are no LB rules 
associated with it.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Accelerite/cloudstack CS-50136

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1937.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1937






> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> load balancing rule
> 
>
> Key: CLOUDSTACK-9779
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9779
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Reporter: Nitesh Sarda
>
> ISSUE 
> =
> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> load balancing rule
> REPRO STEPS
> ==
> 1. Create two isolated guest networks with same CIDR
> 2. Deploy VMs on both networks
> 3. Acquire secondary IP on NICs of both VMs and make sure they have the same 
> value, user can input the IP address.
> 4. Configure Loadbalancing rule on one of the secondary IP address and try 
> releasing the other secondary IP address.
> 5. The operation would fail
> EXPECTED BEHAVIOR
> ==
> Secondary IP address should be released if there are no LB rules associated 
> with it.
> ACTUAL BEHAVIOR
> ==
> Releasing the secondary IP address fails even if there are no LB rules 
> associated with it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)