[jira] [Commented] (CLOUDSTACK-9861) Expire VM snapshots after configured duration

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956292#comment-15956292
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9861:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/2026
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-616


> Expire VM snapshots after configured duration
> -
>
> Key: CLOUDSTACK-9861
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9861
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Abhinandan Prateek
>
> Currently, users can keep VM snapshots for an indefinite time period.  
> Long-lived snapshots can cause stability issues on the hypervisor.
> Requirement: Add a timeout for VM Snapshots, whereby snapshots get 
> automatically deleted after a given time period. This would be available at 
> account level, with a default value determined by a global setting. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9861) Expire VM snapshots after configured duration

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956281#comment-15956281
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9861:


Github user abhinandanprateek commented on the issue:

https://github.com/apache/cloudstack/pull/2026
  
@blueorangutan package




[jira] [Commented] (CLOUDSTACK-9861) Expire VM snapshots after configured duration

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956282#comment-15956282
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9861:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/2026
  
@abhinandanprateek a Jenkins job has been kicked to build packages. I'll 
keep you posted as I make progress.




[jira] [Commented] (CLOUDSTACK-9861) Expire VM snapshots after configured duration

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956240#comment-15956240
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9861:


GitHub user abhinandanprateek opened a pull request:

https://github.com/apache/cloudstack/pull/2026

CLOUDSTACK-9861: Expire VM snapshots after configured duration

The default value of the account-level global config vmsnapshot.expire.interval 
is -1, which preserves the legacy behaviour. 
A positive value expires the VM snapshots for the respective account after 
that many hours.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shapeblue/cloudstack ir25-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/2026.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2026


commit 1c6a30a0e65f2bc9b1d97920d1562b35f3b682ee
Author: Abhinandan Prateek 
Date:   2017-03-28T12:07:59Z

CLOUDSTACK-9861: Expire VM snapshots after configured duration
Default value of the account level global config vmsnapshot.expire.interval 
is -1 that conforms to legacy behaviour
A positive value will expire the VM snapshots for the respective account in 
that many hours
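The expiry rule the PR describes can be illustrated with a minimal sketch. This is a hypothetical illustration, not CloudStack's actual code: the function and field names are invented, and only `vmsnapshot.expire.interval` and its -1/positive-hours semantics come from the PR itself.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the vmsnapshot.expire.interval semantics described in
# the PR; names and data shapes here are invented for illustration.

def find_expired_snapshots(snapshots, interval_hours, now=None):
    """Return snapshots older than interval_hours; empty if expiry is disabled."""
    if interval_hours <= 0:  # -1 (the default) keeps legacy behaviour: never expire
        return []
    now = now or datetime.utcnow()
    cutoff = now - timedelta(hours=interval_hours)
    return [s for s in snapshots if s["created"] < cutoff]
```

With an interval of -1 the function selects nothing, matching the legacy never-expire behaviour; with 24 it would select snapshots created more than a day ago.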






[jira] [Created] (CLOUDSTACK-9861) Expire VM snapshots after configured duration

2017-04-04 Thread Abhinandan Prateek (JIRA)
Abhinandan Prateek created CLOUDSTACK-9861:
--

 Summary: Expire VM snapshots after configured duration
 Key: CLOUDSTACK-9861
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9861
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Abhinandan Prateek


Currently, users can keep VM snapshots for an indefinite time period.  
Long-lived snapshots can cause stability issues on the hypervisor.

Requirement: Add a timeout for VM Snapshots, whereby snapshots get 
automatically deleted after a given time period. This would be available at 
account level, with a default value determined by a global setting. 






[jira] [Commented] (CLOUDSTACK-9679) Allow master user to manage subordinate user uploaded template

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955821#comment-15955821
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9679:


Github user pdion891 commented on the issue:

https://github.com/apache/cloudstack/pull/1834
  
@pdumbre @karuturi @syed 
I think this introduces an issue where a domain admin cannot get a template 
by id. 
A domain admin can list featured templates and get a template via `ids=` but 
not with `id=`. 
The example below is performed with a domain admin account to list 
public+featured templates at the root domain:

```
(beta2r1-ninja) > list templates templatefilter=featured filter=name,id
count = 9
template:
+--+-+
|  id  |   name  |
+--+-+
| 513b3a6d-c011-46f0-a4a3-2a954cadb673 |  CoreOS Alpha 1367.5.0  |
| 0c04d876-1f85-45a7-b6f4-504de435bf12 |Debian 8.5 PV base (64bit)   |
| 285f2203-449a-428f-997a-1ffbebbf1382 |   CoreOS Alpha  |
| 332b6ca8-b3d6-42c7-83e5-60fe87be6576 |  CoreOS Stable  |
| 3b705008-c186-464d-ad59-312d902420af |   Windows Server 2016 std SPLA  |
| 4256aebe-a1c1-4b49-9993-de2bc712d521 |   Ubuntu 16.04.01 HVM   |
| 59e6b00a-b88e-4539-aa3c-75c9c7e9fa6c | Ubuntu 14.04.5 HVM base (64bit) |
| 3ab936eb-d8c2-44d8-a64b-17ad5adf8a51 |  CentOS 6.8 PV  |
| 7de5d423-c91e-49cc-86e8-9d6ed6abd997 |  CentOS 7.2 HVM |
+--+-+
(beta2r1-ninja) > list templates templatefilter=featured 
id=7de5d423-c91e-49cc-86e8-9d6ed6abd997 filter=name,id
Error 531: Acct[b285d62e-0ec2-4a7c-b773-961595ec6356-Ninja-5664] does not 
have permission to operate within domain id=c9b4f83d-16eb-11e7-a8b9-367e6fe958a9
cserrorcode = 4365
errorcode = 531
errortext = Acct[b285d62e-0ec2-4a7c-b773-961595ec6356-Ninja-5664] does not 
have permission to operate within domain id=c9b4f83d-16eb-11e7-a8b9-367e6fe958a9
uuidList:
(beta2r1-ninja) > list templates templatefilter=featured 
ids=7de5d423-c91e-49cc-86e8-9d6ed6abd997 filter=name,id
count = 1
template:
+--++
|  id  |  name  |
+--++
| 7de5d423-c91e-49cc-86e8-9d6ed6abd997 | CentOS 7.2 HVM |
+--++
```
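The equivalence the example above expects can be sketched as follows. This is a hypothetical illustration of the intended API semantics (a single `id=` behaving like `ids=` with one element), not CloudStack's actual implementation:

```python
# Hypothetical sketch: how the id= and ids= parameters of listTemplates should
# both behave. Names and data shapes are invented for illustration.

def list_templates(templates, template_id=None, template_ids=None):
    """Filter a template list; template_ids is a comma-separated id string."""
    wanted = set(template_ids.split(",")) if template_ids else set()
    if template_id:
        wanted.add(template_id)  # a single id should act like ids with one element
    if not wanted:
        return templates  # no filter given: return everything visible
    return [t for t in templates if t["id"] in wanted]
```

Under these semantics, filtering by `template_id="x"` and by `template_ids="x"` always returns the same result, which is what the failing `id=` call above does not do.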


> Allow master user to manage subordinate user uploaded template
> --
>
> Key: CLOUDSTACK-9679
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9679
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: pallavi Dumbre
>
> REPRO STEPS
> ==
> 1. Create master user Aaron
> 2. Create common user feixiang within tenant Aaron
> 3. Log in as feixiang, upload template CentOS56
> 4. Log in as Aaron, go to the Template view; feixiang's template 
> CentOS56 is not visible
> EXPECTED BEHAVIOR
> ==
> Logging in as the master user should allow viewing all subordinate users' 
> templates
>  
> ACTUAL BEHAVIOR
> ==
> Only templates uploaded by the master user itself are manageable
>  





[jira] [Commented] (CLOUDSTACK-9512) listTemplates ids returns all templates instead of the requested ones

2017-04-04 Thread Pierre-Luc Dion (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955741#comment-15955741
 ] 

Pierre-Luc Dion commented on CLOUDSTACK-9512:
-

Hi [~rajanik], 

I've just looked at this issue and it is not visible in 4.10 at the moment. Should 
we close this issue?

{code}
(beta1t2-ninja) > list templates templatefilter=all 
ids=e85972ad-9267-4798-a9c2-175141653bb9,74fa60e2-93a7-46bf-b410-c64471a21e78 
filter=id,name
count = 2
template:
+--+-+
|  id  |   name  |
+--+-+
| e85972ad-9267-4798-a9c2-175141653bb9 | Windows Server 2008 R2 ent BYOL |
| 74fa60e2-93a7-46bf-b410-c64471a21e78 |  Windows Server 2016 CORE SPLA  |
+--+-+
{code}


> listTemplates ids returns all templates instead of the requested ones
> -
>
> Key: CLOUDSTACK-9512
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9512
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API, Template
>Affects Versions: 4.8.1
> Environment: CentOS 7.2 + VMWare 5.5u3 + NFS primary/secondary storage
>Reporter: Boris Stoyanov
>Priority: Critical
>  Labels: 4.8.2.0-smoke-test-failure
> Fix For: 4.10.0.0, 4.9.1.0, 4.8.2.0
>
>
> Actual call form the logs:
> {code}{'account': u'test-a-TestListIdsParams-KYBF19', 
> 'domainid': u'41c6fda1-84cf-11e6-bbd2-066638010710', 
> 'ids': 
> u'a4bcd20f-0c4c-4999-bb5e-02aab8f763a1,10a429da-829b-4492-9674-26b1c172462e,d3a9a86d-17a1-4199-8116-ceefc6ef31d5',
>  
> 'apiKey': 
> u'LIN6rqXuaJwMPfGYFh13qDwYz5VNNz1J2J6qIOWcd3oLQOq0WtD4CwRundBL6rzXToa3lQOC_vKjI3nkHtiD8Q',
>  
> 'command': 'listTemplates', 
> 'listall': True, 
> 'signature': 'Yo7dRnPdSch+mEzcF8TTo1xhxpo=', 
> 'templatefilter': 'all', 
> 'response': 'json', 
> 'listAll': True}{code}
> When asking to list 3 or more templates by their ids:
> {code}(local) SBCM5> list templates templatefilter=all 
> ids=a4bcd20f-0c4c-4999-bb5e-02aab8f763a1,10a429da-829b-4492-9674-26b1c172462e,d3a9a86d-17a1-4199-8116-ceefc6ef31d5{code}
> You receive all templates (count: 14)
> Response
> {code}
> {
>   "count": 14,
>   "template": [
> {
>   "account": "system",
>   "checksum": "4b415224fe00b258f66cad9fce9f73fc",
>   "created": "2016-09-27T17:38:31+0100",
>   "crossZones": true,
>   "displaytext": "SystemVM Template (vSphere)",
>   "domain": "ROOT",
>   "domainid": "41c6fda1-84cf-11e6-bbd2-066638010710",
>   "format": "OVA",
>   "hypervisor": "VMware",
>   "id": "6114746a-aefa-4be7-8234-f0d76ff175d0",
>   "isdynamicallyscalable": true,
>   "isextractable": false,
>   "isfeatured": false,
>   "ispublic": false,
>   "isready": true,
>   "name": "SystemVM Template (vSphere)",
>   "ostypeid": "41db0847-84cf-11e6-bbd2-066638010710",
>   "ostypename": "Debian GNU/Linux 5.0 (64-bit)",
>   "passwordenabled": false,
>   "size": 3145728000,
>   "sshkeyenabled": false,
>   "status": "Download Complete",
>   "tags": [],
>   "templatetype": "SYSTEM",
>   "zoneid": "b8d4cea4-6b4b-4cfb-9f17-0a6b31fec09f",
> .
> .
> .
> .
> .
> 
> }{code}
> Marvin failure:
> {code}2016-09-29 11:43:39,819 - CRITICAL - FAILED: test_02_list_templates: 
> ['Traceback (most recent call last):\n', '  File 
> "/usr/lib64/python2.7/unittest/case.py", line 369, in run\n
> testMethod()\n', '  File "/marvin/tests/smoke/test_list_ids_parameter.py", 
> line 253, in test_02_list_templates\n"ListTemplates response expected 3 
> Templates, received %s" % len(list_template_response)\n', '  File 
> "/usr/lib64/python2.7/unittest/case.py", line 553, in assertEqual\n
> assertion_func(first, second, msg=msg)\n', '  File 
> "/usr/lib64/python2.7/unittest/case.py", line 546, in _baseAssertEqual\n
> raise self.failureException(msg)\n', 'AssertionError: ListTemplates response 
> expected 3 Templates, received 14\n']{code}





[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955355#comment-15955355
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@blueorangutan test


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that the {{deleteDomain}} task failed 
> for domains with multiple accounts and resources. Examining the logs, it was found 
> that if the Account Cleanup Task is executed after the domain (and all of its 
> children) has been marked as Inactive, but before the delete domain task finishes, a 
> failure is produced.
> {{AccountCleanupTask}} is executed every {{account.cleanup.interval}} 
> seconds, looking for:
> * Removed accounts
> * Disabled accounts
> * Inactive domains
> As {{deleteDomain}} marks the domain to delete (and its children) as Inactive 
> before deleting them, when {{AccountCleanupTask}} is executed it removes the 
> marked domains. When there are resources to clean up on the domain's accounts, the 
> domain is not found, throwing the exception: 
> {{com.cloud.exception.InvalidParameterValueException: Please specify a valid 
> domain ID}}
> h3. Example
> {{account.cleanup.interval}} = 100
> {noformat}
> 2017-01-26 06:07:03,621 DEBUG [cloud.api.ApiServlet] 
> (catalina-exec-8:ctx-50cfa3b6 ctx-92ad5b38) ===END===  10.39.251.17 -- GET  
> command=deleteDomain&id=1910a3dc-6fa6-457b-ab3a-602b0cfb6686&cleanup=true&response=json&_=1485439623475
> ...
> // Domain and its subchilds marked as Inactive
> 2017-01-26 06:07:03,640 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Marking domain id=27 
> as Inactive before actually deleting it
> 2017-01-26 06:07:03,646 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=27
> 2017-01-26 06:07:03,670 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=28
> 2017-01-26 06:07:03,685 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=29
> ...
> // AccountCleanupTask removes Inactive domain id=29, no rollback for it
> 2017-01-26 06:07:44,285 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 removed accounts to cleanup
> 2017-01-26 06:07:44,287 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled accounts to cleanup
> 2017-01-26 06:07:44,289 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 3 inactive domains to cleanup
> 2017-01-26 06:07:44,292 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=27
> 2017-01-26 06:07:44,297 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,301 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=28
> 2017-01-26 06:07:44,304 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,307 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=29
> 2017-01-26 06:07:44,319 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled projects to cleanup
> ...
> // Failure due to domain is already removed
> 2017-01-26 

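The interleaving above can be sketched as follows. This is a hypothetical illustration with invented names, and the guard shown is only one possible approach, not necessarily the fix in the PR:

```python
# Hypothetical sketch of the race: deleteDomain marks a domain (and its
# children) Inactive before cleaning up their accounts; if AccountCleanupTask
# runs in that window, it removes the Inactive domain and the in-flight delete
# later fails to find it. One possible guard is to let the periodic cleanup
# remove only Inactive domains that have no accounts left to clean up.

class Domain:
    def __init__(self, domain_id, state="Active", accounts=0):
        self.domain_id = domain_id
        self.state = state          # "Active" or "Inactive"
        self.accounts = accounts    # accounts still awaiting cleanup

def account_cleanup_task(domains):
    """Return the domains that survive one guarded cleanup pass."""
    return [d for d in domains
            if not (d.state == "Inactive" and d.accounts == 0)]
```

With this guard, a domain that deleteDomain has marked Inactive but is still cleaning up (accounts remaining) is left for the delete job to finish, instead of being removed out from under it.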
[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955356#comment-15955356
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests



[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955335#comment-15955335
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-614



[jira] [Commented] (CLOUDSTACK-9842) Unable to map root volume usage to VM

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955316#comment-15955316
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9842:


Github user yvsubhash commented on the issue:

https://github.com/apache/cloudstack/pull/2012
  
Tested this change. LGTM for test


> Unable to map root volume usage to VM
> -
>
> Key: CLOUDSTACK-9842
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9842
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Sudhansu Sahu
>Assignee: Sudhansu Sahu
>
> If a VM is cold migrated, the vm_instance_id and uuid of the volume are 
> nullified, so there is no link between the volume and the VM. 
> With this, the ROOT volume usage cannot be mapped to a VM.





[jira] [Commented] (CLOUDSTACK-9208) Assertion Error in VM_POWER_STATE handler.

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955310#comment-15955310
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9208:


Github user DaanHoogland commented on the issue:

https://github.com/apache/cloudstack/pull/1997
  
@jayapalu your check is OK; what is not OK is that it is needed. Somehow a 
stop can be sent to a VM for which no host is known?
I am worried about the reason for your fix and that we might be obscuring it. Why 
is a stop command attempted on a VM for which no host is known?


> Assertion Error in VM_POWER_STATE handler.
> --
>
> Key: CLOUDSTACK-9208
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9208
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>Assignee: Kshitij Kansal
>Priority: Minor
>
> 1. Enable the assertions.
> LOG
> 2015-12-31 04:09:06,687 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
> (RouterStatusMonitor-1:ctx-981a85d4) (logid:863754b8) Found 0 networks to 
> update RvR status.
> 2015-12-31 04:09:07,394 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Ping from 5(10.147.40.18)
> 2015-12-31 04:09:07,394 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Process host VM state 
> report from ping process. host: 5
> 2015-12-31 04:09:07,416 INFO [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Unable to find matched 
> VM in CloudStack DB. name: New Virtual Machine
> 2015-12-31 04:09:07,420 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Process VM state report. 
> host: 5, number of records in report: 5
> 2015-12-31 04:09:07,420 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) VM state report. host: 
> 5, vm id: 69, power state: PowerOff
> 2015-12-31 04:09:07,530 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) VM state report is 
> updated. host: 5, vm id: 69, power state: PowerOff
> 2015-12-31 04:09:07,540 INFO [c.c.v.VirtualMachineManagerImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) VM r-69-VM is at Stopped 
> and we received a power-off report while there is no pending jobs on it
> 2015-12-31 04:09:07,541 ERROR [o.a.c.f.m.MessageDispatcher] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Unexpected exception 
> when calling 
> com.cloud.vm.ClusteredVirtualMachineManagerImpl.HandlePowerStateReport
> java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.cloudstack.framework.messagebus.MessageDispatcher.dispatch(MessageDispatcher.java:75)
> at 
> org.apache.cloudstack.framework.messagebus.MessageDispatcher.onPublishMessage(MessageDispatcher.java:45)
> at 
> org.apache.cloudstack.framework.messagebus.MessageBusBase$SubscriptionNode.notifySubscribers(MessageBusBase.java:441)
> at 
> org.apache.cloudstack.framework.messagebus.MessageBusBase.publish(MessageBusBase.java:178)
> at 
> com.cloud.vm.VirtualMachinePowerStateSyncImpl.processReport(VirtualMachinePowerStateSyncImpl.java:87)
> at 
> com.cloud.vm.VirtualMachinePowerStateSyncImpl.processHostVmStatePingReport(VirtualMachinePowerStateSyncImpl.java:70)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.processCommands(VirtualMachineManagerImpl.java:2879)
> at 
> com.cloud.agent.manager.AgentManagerImpl.handleCommands(AgentManagerImpl.java:309)
> at 
> com.cloud.agent.manager.DirectAgentAttache$PingTask.runInContext(DirectAgentAttache.java:192)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
> at 
> 
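The stack trace above originates in the power-state sync path: a power-off report arrives for a VM that is already Stopped with no pending jobs, and an enabled assertion turns that benign condition into the InvocationTargetException shown. A minimal sketch of handling the redundant report defensively instead of asserting (the class and enum names here are illustrative stand-ins, not CloudStack's actual types):

```java
// Hypothetical sketch: accept a power-off report for a VM that is already
// Stopped without tripping an assertion. VmState/PowerState are illustrative
// stand-ins, not CloudStack's actual classes.
public class PowerStateSketch {
    enum VmState { RUNNING, STOPPED }
    enum PowerState { POWER_ON, POWER_OFF }

    /** Returns the new state; a redundant power-off is logged and ignored. */
    static VmState handleReport(VmState current, PowerState reported) {
        if (current == VmState.STOPPED && reported == PowerState.POWER_OFF) {
            // Already consistent with the report: ignore it instead of
            // asserting, which is what blew up in the handler above.
            System.out.println("VM already Stopped; ignoring redundant power-off report");
            return current;
        }
        return reported == PowerState.POWER_OFF ? VmState.STOPPED : VmState.RUNNING;
    }

    public static void main(String[] args) {
        System.out.println(handleReport(VmState.STOPPED, PowerState.POWER_OFF));
    }
}
```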

[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955306#comment-15955306
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1935#discussion_r103529571
  
--- Diff: server/src/com/cloud/user/DomainManagerImpl.java ---
@@ -273,82 +284,145 @@ public boolean deleteDomain(long domainId, Boolean 
cleanup) {
 
 @Override
 public boolean deleteDomain(DomainVO domain, Boolean cleanup) {
-// mark domain as inactive
-s_logger.debug("Marking domain id=" + domain.getId() + " as " + 
Domain.State.Inactive + " before actually deleting it");
-domain.setState(Domain.State.Inactive);
-_domainDao.update(domain.getId(), domain);
-boolean rollBackState = false;
-boolean hasDedicatedResources = false;
+GlobalLock lock = getGlobalLock("AccountCleanup");
+if (lock == null) {
+s_logger.debug("Couldn't get the global lock");
+return false;
+}
+
+if (!lock.lock(30)) {
+s_logger.debug("Couldn't lock the db");
+return false;
+}
 
 try {
-long ownerId = domain.getAccountId();
-if ((cleanup != null) && cleanup.booleanValue()) {
-if (!cleanupDomain(domain.getId(), ownerId)) {
-rollBackState = true;
-CloudRuntimeException e =
-new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
-domain.getId() + ").");
-e.addProxyObject(domain.getUuid(), "domainId");
-throw e;
-}
-} else {
-//don't delete the domain if there are accounts set for 
cleanup, or non-removed networks exist, or domain has dedicated resources
-List<Long> networkIds = 
_networkDomainDao.listNetworkIdsByDomain(domain.getId());
-List<AccountVO> accountsForCleanup = 
_accountDao.findCleanupsForRemovedAccounts(domain.getId());
-List<DedicatedResourceVO> dedicatedResources = 
_dedicatedDao.listByDomainId(domain.getId());
-if (dedicatedResources != null && 
!dedicatedResources.isEmpty()) {
-s_logger.error("There are dedicated resources for the 
domain " + domain.getId());
-hasDedicatedResources = true;
-}
-if (accountsForCleanup.isEmpty() && networkIds.isEmpty() 
&& !hasDedicatedResources) {
-_messageBus.publish(_name, 
MESSAGE_PRE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
-if (!_domainDao.remove(domain.getId())) {
-rollBackState = true;
-CloudRuntimeException e =
-new CloudRuntimeException("Delete failed on 
domain " + domain.getName() + " (id: " + domain.getId() +
-"); Please make sure all users and sub 
domains have been removed from the domain before deleting");
-e.addProxyObject(domain.getUuid(), "domainId");
-throw e;
-}
-_messageBus.publish(_name, 
MESSAGE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
+// mark domain as inactive
+s_logger.debug("Marking domain id=" + domain.getId() + " as " 
+ Domain.State.Inactive + " before actually deleting it");
+domain.setState(Domain.State.Inactive);
+_domainDao.update(domain.getId(), domain);
+
+boolean rollBackState = false;
+
+try {
+long ownerId = domain.getAccountId();
+if (BooleanUtils.toBoolean(cleanup)) {
+tryCleanupDomain(domain, ownerId);
 } else {
-rollBackState = true;
-String msg = null;
-if (!accountsForCleanup.isEmpty()) {
-msg = accountsForCleanup.size() + " accounts to 
cleanup";
-} else if (!networkIds.isEmpty()) {
-msg = networkIds.size() + " non-removed networks";
-} else if (hasDedicatedResources) {
-msg = "dedicated resources.";
-}
+
removeDomainWithNoAccountsForCleanupNetworksOrDedicatedResources(domain);
+}
 
- 
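The refactored code in the diff above serializes deleteDomain with the account cleanup task by taking a global lock before marking the domain inactive. The acquire/release discipline it relies on can be sketched as follows, using a minimal stand-in for CloudStack's GlobalLock (the real class is com.cloud.utils.db.GlobalLock; this stub only models lock/unlock):

```java
// Sketch of the lock discipline from the diff: bounded acquire, do the work,
// and always release in a finally block. GlobalLock here is a stub.
public class LockSketch {
    static class GlobalLock {
        private boolean held;
        boolean lock(int timeoutSeconds) { held = true; return true; }
        void unlock() { held = false; }
        boolean isHeld() { return held; }
    }

    static boolean deleteDomainGuarded(GlobalLock lock, Runnable deleteWork) {
        if (!lock.lock(30)) {          // bounded wait, as in the diff
            return false;              // could not serialize with AccountCleanupTask
        }
        try {
            deleteWork.run();          // mark inactive + delete while holding the lock
            return true;
        } finally {
            lock.unlock();             // always release, even on exception
        }
    }

    public static void main(String[] args) {
        GlobalLock lock = new GlobalLock();
        boolean ok = deleteDomainGuarded(lock, () -> {});
        System.out.println(ok + " heldAfter=" + lock.isHeld());
    }
}
```

With the lock held for the whole mark-inactive-then-delete sequence, AccountCleanupTask can no longer observe (and remove) a half-deleted Inactive domain.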

[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955307#comment-15955307
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1935#discussion_r103529869
  
--- Diff: server/test/com/cloud/user/DomainManagerImplTest.java ---
@@ -134,4 +164,69 @@ public void testFindDomainByIdOrPathValidId() {
 Assert.assertEquals(domain, domainManager.findDomainByIdOrPath(1L, 
"/validDomain/"));
 }
 
+@Test(expected=InvalidParameterValueException.class)
+public void testDeleteDomainNullDomain() {
+Mockito.when(_domainDao.findById(DOMAIN_ID)).thenReturn(null);
+domainManager.deleteDomain(DOMAIN_ID, testDomainCleanup);
+}
+
+@Test(expected=PermissionDeniedException.class)
+public void testDeleteDomainRootDomain() {
+
Mockito.when(_domainDao.findById(Domain.ROOT_DOMAIN)).thenReturn(domain);
+domainManager.deleteDomain(Domain.ROOT_DOMAIN, testDomainCleanup);
+}
+
+@Test
+public void testDeleteDomainNoCleanup() {
+domainManager.deleteDomain(DOMAIN_ID, testDomainCleanup);
+Mockito.verify(domainManager).deleteDomain(domain, 
testDomainCleanup);
+
Mockito.verify(domainManager).removeDomainWithNoAccountsForCleanupNetworksOrDedicatedResources(domain);
+Mockito.verify(domainManager).cleanupDomainOfferings(DOMAIN_ID);
+Mockito.verify(lock).unlock();
+}
+
+@Test
+public void 
testRemoveDomainWithNoAccountsForCleanupNetworksOrDedicatedResourcesRemoveDomain()
 {
+
domainManager.removeDomainWithNoAccountsForCleanupNetworksOrDedicatedResources(domain);
+
Mockito.verify(domainManager).publishRemoveEventsAndRemoveDomain(domain);
+}
+
+@Test(expected=CloudRuntimeException.class)
+public void 
testRemoveDomainWithNoAccountsForCleanupNetworksOrDedicatedResourcesDontRemoveDomain()
 {
+domainNetworkIds.add(2l);
+
domainManager.removeDomainWithNoAccountsForCleanupNetworksOrDedicatedResources(domain);
+Mockito.verify(domainManager).failRemoveOperation(domain, 
domainAccountsForCleanup, domainNetworkIds, false);
+}
+
+@Test
+public void testPublishRemoveEventsAndRemoveDomainSuccessfulDelete() {
+domainManager.publishRemoveEventsAndRemoveDomain(domain);
+Mockito.verify(_messageBus).publish(Mockito.anyString(), 
Matchers.eq(DomainManager.MESSAGE_PRE_REMOVE_DOMAIN_EVENT),
+Matchers.eq(PublishScope.LOCAL), Matchers.eq(domain));
+Mockito.verify(_messageBus).publish(Mockito.anyString(), 
Matchers.eq(DomainManager.MESSAGE_REMOVE_DOMAIN_EVENT),
+Matchers.eq(PublishScope.LOCAL), Matchers.eq(domain));
+Mockito.verify(_domainDao).remove(DOMAIN_ID);
+}
+
+@Test(expected=CloudRuntimeException.class)
+public void testPublishRemoveEventsAndRemoveDomainExceptionDelete() {
+Mockito.when(_domainDao.remove(DOMAIN_ID)).thenReturn(false);
+domainManager.publishRemoveEventsAndRemoveDomain(domain);
+Mockito.verify(_messageBus).publish(Mockito.anyString(), 
Matchers.eq(DomainManager.MESSAGE_PRE_REMOVE_DOMAIN_EVENT),
+Matchers.eq(PublishScope.LOCAL), Matchers.eq(domain));
+Mockito.verify(_messageBus, 
Mockito.never()).publish(Mockito.anyString(), 
Matchers.eq(DomainManager.MESSAGE_REMOVE_DOMAIN_EVENT),
+Matchers.eq(PublishScope.LOCAL), Matchers.eq(domain));
+Mockito.verify(_domainDao).remove(DOMAIN_ID);
+}
+
+@Test
+public void testFailRemoveOperation() {
+try {
+domainManager.failRemoveOperation(domain, 
domainAccountsForCleanup, domainNetworkIds, true);
--- End diff --

Now that you removed the use of `rollBackState`, the method is less 
problematic to test. Therefore, you can use `@Test(expected=...)` instead 
of this very unusual construction here.
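The suggestion is to let the test framework assert the exception. As a framework-free sketch of the same idea, a helper like the hypothetical expectThrows below plays the role of @Test(expected = CloudRuntimeException.class); the exception class and method are minimal stand-ins:

```java
// Sketch: assert that an action throws an expected exception type, instead
// of wrapping the call in a manual try/catch inside the test body.
public class ExpectedExceptionSketch {
    static class CloudRuntimeException extends RuntimeException {
        CloudRuntimeException(String m) { super(m); }
    }

    // Stand-in for DomainManagerImpl.failRemoveOperation(...).
    static void failRemoveOperation() {
        throw new CloudRuntimeException("domain has accounts pending cleanup");
    }

    /** Returns true iff running the action throws the expected exception type. */
    static boolean expectThrows(Class<? extends Throwable> expected, Runnable action) {
        try {
            action.run();
        } catch (Throwable t) {
            return expected.isInstance(t);
        }
        return false; // no exception thrown: the test should fail
    }

    public static void main(String[] args) {
        System.out.println(expectThrows(CloudRuntimeException.class,
                ExpectedExceptionSketch::failRemoveOperation));
    }
}
```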


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that {{deleteDomain}} task failed 
> for domains with multiple accounts and resources. Examining 

[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955305#comment-15955305
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1935#discussion_r102538324
  
--- Diff: server/src/com/cloud/user/DomainManagerImpl.java ---
@@ -109,6 +112,20 @@
 @Inject
 MessageBus _messageBus;
 
+static boolean rollBackState = false;
--- End diff --

@nvazquez I have been thinking about this variable you introduced here. I 
think it can cause concurrency problems. The `DomainManagerImpl` is a 
singleton, so it should not hold state in fields. `rollBackState` is acting 
as a state variable for requests that use 
`com.cloud.user.DomainManagerImpl.deleteDomain(DomainVO, Boolean)`. The problem 
is that every call should have its own context/state for `rollBackState`, 
which will not happen with the current implementation.

I think we should rework the use of that variable. What do you think?
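The hazard described here can be shown with a small sketch: a shared flag on a singleton is overwritten by whichever request runs last, while a local variable is confined to each call's stack. Names are illustrative, not the actual DomainManagerImpl code:

```java
// Sketch of the concurrency problem: a static/shared rollBackState flag on a
// singleton is written by every concurrent deleteDomain call, so one request
// can observe another's state. A local variable gives each call its own
// context.
public class SingletonStateSketch {
    // Shared flag: last writer wins across ALL in-flight requests (the bug).
    static boolean sharedRollBackState = false;

    static boolean deleteDomainShared(boolean shouldRollBack) {
        sharedRollBackState = shouldRollBack;   // racy: visible to other calls
        return sharedRollBackState;
    }

    // Per-call flag: each invocation owns its own state (the fix).
    static boolean deleteDomainLocal(boolean shouldRollBack) {
        boolean rollBackState = shouldRollBack; // confined to this call's stack
        return rollBackState;
    }

    public static void main(String[] args) {
        deleteDomainShared(true);   // request A wants a rollback
        deleteDomainShared(false);  // request B overwrites A's decision
        System.out.println("shared=" + sharedRollBackState
                + " local=" + deleteDomainLocal(true));
    }
}
```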


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> It was noticed in production environments that {{deleteDomain}} task failed 
> for domains with multiple accounts and resources. Examining logs it was found 
> out that if Account Cleanup Task got executed after domain (and all of its 
> subchilds) got marked as Inactive; and before delete domain task finishes, it 
> produces a failure.
> {{AccountCleanupTask}} gets executed every {{account.cleanup.interval}} 
> seconds looking for:
> * Removed accounts
> * Disabled accounts
> * Inactive domains
> As {{deleteDomain}} marks domain to delete (and its subchilds) as Inactive 
> before deleting them, when {{AccountCleanupTask}} is executed, it removes 
> marked domains. When there are resources to cleanup on domain accounts, 
> domain is not found throwing exception: 
> {{com.cloud.exception.InvalidParameterValueException: Please specify a valid 
> domain ID}}
> h3. Example
> {{account.cleanup.interval}} = 100
> {noformat}
> 2017-01-26 06:07:03,621 DEBUG [cloud.api.ApiServlet] 
> (catalina-exec-8:ctx-50cfa3b6 ctx-92ad5b38) ===END===  10.39.251.17 -- GET  
> command=deleteDomain=1910a3dc-6fa6-457b-ab3a-602b0cfb6686=true=json&_=1485439623475
> ...
> // Domain and its subchilds marked as Inactive
> 2017-01-26 06:07:03,640 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Marking domain id=27 
> as Inactive before actually deleting it
> 2017-01-26 06:07:03,646 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=27
> 2017-01-26 06:07:03,670 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=28
> 2017-01-26 06:07:03,685 DEBUG [cloud.user.DomainManagerImpl] 
> (API-Job-Executor-29:ctx-23415942 job-7165 ctx-fe3d13d6) Cleaning up domain 
> id=29
> ...
> // AccountCleanupTask removes Inactive domain id=29, no rollback for it
> 2017-01-26 06:07:44,285 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 removed accounts to cleanup
> 2017-01-26 06:07:44,287 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 0 disabled accounts to cleanup
> 2017-01-26 06:07:44,289 INFO  [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Found 3 inactive domains to cleanup
> 2017-01-26 06:07:44,292 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=27
> 2017-01-26 06:07:44,297 DEBUG [db.Transaction.Transaction] 
> (AccountChecker-1:ctx-b8a01824) Rolling back the transaction: Time = 2 Name = 
>  AccountChecker-1; called by 
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy63.remove:-1-DomainManagerImpl.removeDomain:248-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:62
> 2017-01-26 06:07:44,301 DEBUG [cloud.user.AccountManagerImpl] 
> (AccountChecker-1:ctx-b8a01824) Removing inactive domain id=28
> 2017-01-26 06:07:44,304 DEBUG [db.Transaction.Transaction] 
> 

[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955300#comment-15955300
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>

[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955297#comment-15955297
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@blueorangutan package


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>

[jira] [Commented] (CLOUDSTACK-9764) Delete domain failure due to Account Cleanup task

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955177#comment-15955177
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9764:


Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@borisstoyanov I've rebased against the master branch; can we re-run the tests on this PR?


> Delete domain failure due to Account Cleanup task
> -
>
> Key: CLOUDSTACK-9764
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9764
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>

[jira] [Commented] (CLOUDSTACK-9690) Scale CentOS7 VM fails with error

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955082#comment-15955082
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9690:


Github user jayantpatil1234 commented on the issue:

https://github.com/apache/cloudstack/pull/1849
  
 @sudhansu7 
There is one file, "Upgrade4910to4920.java", which does not have any changes 
but still got added to this PR. If possible, could you remove that file? 

Otherwise the code looks good. LGTM.


> Scale CentOS7 VM fails with error 
> --
>
> Key: CLOUDSTACK-9690
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9690
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Sudhansu Sahu
>Assignee: Sudhansu Sahu
>
> Scale CentOS7 VM fails with error "Cannot scale up the vm because of memory 
> constraint violation"
> When creating a VM from a CentOS 7 template on XenServer with dynamic 
> scaling enabled, the instance starts with the base specified memory as the 
> static limit instead of memory * 4.
> As the result, attempt to scale VM throws error in MS log:
> {noformat}
> java.lang.RuntimeException: Job failed due to exception Unable to scale vm 
> due to Catch exception com.cloud.utils.exception.CloudRuntimeException when 
> scaling VM:i-24-3976-VM due to 
> com.cloud.utils.exception.CloudRuntimeException: Cannot scale up the vm 
> because of memory constraint violation: 0 <= memory-static-min(2147483648) <= 
> memory-dynamic-min(8589934592) <= memory-dynamic-max(8589934592) <= 
> memory-static-max(2147483648)
> {noformat}
> REPRO STEPS
> =
> # Enable dynamic scaling in Global settings
> # Register a CentOS 7 template (with tools) and tick dynamic scaling
> # Deploy VM with this template
> # Start the VM and try to change service offering
> EXPECTED RESULT: VM should start with a static limit of 4x and 
> scale up when the offering is changed
> ACTUAL RESULT: VM starts with maximum static limit of  and 
> doesn't scale up, with this error in the MS log:
> Cannot scale up the vm because of memory constraint violation: 
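
The check XenServer enforces in the error above is the ordering 0 <= static-min <= dynamic-min <= dynamic-max <= static-max. A minimal sketch (hypothetical helper, not CloudStack code) using the values from the error message:

```python
def check_memory_constraint(static_min, dynamic_min, dynamic_max, static_max):
    """True if XenServer's required memory-limit ordering holds:
    0 <= static-min <= dynamic-min <= dynamic-max <= static-max."""
    return 0 <= static_min <= dynamic_min <= dynamic_max <= static_max

GiB = 1024 ** 3
# From the error above: the static limits stayed at the base 2 GiB while the
# dynamic limits were raised to 8 GiB, so the scale request is rejected.
print(check_memory_constraint(2 * GiB, 8 * GiB, 8 * GiB, 2 * GiB))  # -> False
# With static-max set to 4x the base memory at deploy time, scaling fits.
print(check_memory_constraint(2 * GiB, 8 * GiB, 8 * GiB, 8 * GiB))  # -> True
```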



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9560) Root volume of deleted VM left unremoved

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955053#comment-15955053
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9560:


Github user yvsubhash commented on the issue:

https://github.com/apache/cloudstack/pull/1726
  
tag:mergeready


> Root volume of deleted VM left unremoved
> 
>
> Key: CLOUDSTACK-9560
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9560
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Affects Versions: 4.8.0
> Environment: XenServer
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> In the following scenario the root volume is left unremoved.
> Steps to reproduce the issue
> 1. Create a VM.
> 2. Stop this VM.
> 3. On the page of the volume of the VM, click 'Download Volume' icon.
> 4. Wait for the popup screen to display and cancel out with/without clicking 
> the download link.
> 5. Destroy the VM
> Even after the corresponding VM is deleted and expunged, the root volume is left 
> unremoved in the 'Expunging' state.





[jira] [Commented] (CLOUDSTACK-9690) Scale CentOS7 VM fails with error

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955048#comment-15955048
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9690:


Github user jayakarteek commented on the issue:

https://github.com/apache/cloudstack/pull/1849
  
@sudhansu7 

Tested the fix, LGTM.
I am able to change from the small compute offering to the medium compute 
offering for CentOS 7.


> Scale CentOS7 VM fails with error 
> --
>
> Key: CLOUDSTACK-9690
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9690
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Sudhansu Sahu
>Assignee: Sudhansu Sahu
>
> Scale CentOS7 VM fails with error "Cannot scale up the vm because of memory 
> constraint violation"
> When creating a VM from a CentOS 7 template on XenServer with dynamic 
> scaling enabled, the instance starts with the base specified memory instead of 
> memory * 4 as the static limit.
> As a result, an attempt to scale the VM throws an error in the MS log:
> {noformat}
> java.lang.RuntimeException: Job failed due to exception Unable to scale vm 
> due to Catch exception com.cloud.utils.exception.CloudRuntimeException when 
> scaling VM:i-24-3976-VM due to 
> com.cloud.utils.exception.CloudRuntimeException: Cannot scale up the vm 
> because of memory constraint violation: 0 <= memory-static-min(2147483648) <= 
> memory-dynamic-min(8589934592) <= memory-dynamic-max(8589934592) <= 
> memory-static-max(2147483648)
> {noformat}
> REPRO STEPS
> =
> # Enable dynamic scaling in Global settings
> # Register a CentOS 7 template (with tools) and tick dynamic scaling
> # Deploy a VM with this template
> # Start the VM and try to change the service offering
> EXPECTED RESULT: VM should start with a static limit of 4x and 
> scale up when the offering is changed
> ACTUAL RESULT: VM starts with maximum static limit of  and 
> doesn't scale up, with an error in the MS log:
> Cannot scale up the vm because of memory constraint violation: 





[jira] [Commented] (CLOUDSTACK-9462) Systemd packaging for Ubuntu 16.04

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955002#comment-15955002
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9462:


Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1950
  
It would be nice if this could be merged into 4.10.0.0.


> Systemd packaging for Ubuntu 16.04
> --
>
> Key: CLOUDSTACK-9462
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9462
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> Support for building deb packages that will work on Ubuntu 16.04





[jira] [Commented] (CLOUDSTACK-9462) Systemd packaging for Ubuntu 16.04

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955000#comment-15955000
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9462:


Github user ustcweizhou commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1950#discussion_r109633875
  
--- Diff: debian/control ---
@@ -16,14 +16,15 @@ Description: A common package which contains files 
which are shared by several C
 Package: cloudstack-management
 Architecture: all
 Depends: ${misc:Depends}, ${python:Depends}, openjdk-8-jre-headless | 
java8-runtime-headless | java8-runtime, cloudstack-common (= 
${source:Version}), tomcat6 | tomcat7, sudo, jsvc, python-mysql.connector, 
libmysql-java, augeas-tools, mysql-client, adduser, bzip2, ipmitool, lsb-release
+Recommends: init-system-helpers (>= 1.14)
--- End diff --

@rhtyd Sorry, I just noticed your comment. I will remove all 3 lines of
Recommends: init-system-helpers (>= 1.14)

I am not sure if it will cause any issues. Please test it.




> Systemd packaging for Ubuntu 16.04
> --
>
> Key: CLOUDSTACK-9462
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9462
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> Support for building deb packages that will work on Ubuntu 16.04





[jira] [Commented] (CLOUDSTACK-9208) Assertion Error in VM_POWER_STATE handler.

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954977#comment-15954977
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9208:


Github user jayapalu commented on the issue:

https://github.com/apache/cloudstack/pull/1997
  
@ramkatru I am not sure about Daan's comments. 
The test results are positive.


> Assertion Error in VM_POWER_STATE handler.
> --
>
> Key: CLOUDSTACK-9208
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9208
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>Assignee: Kshitij Kansal
>Priority: Minor
>
> 1. Enable the assertions.
> LOG
> 2015-12-31 04:09:06,687 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
> (RouterStatusMonitor-1:ctx-981a85d4) (logid:863754b8) Found 0 networks to 
> update RvR status.
> 2015-12-31 04:09:07,394 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Ping from 5(10.147.40.18)
> 2015-12-31 04:09:07,394 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Process host VM state 
> report from ping process. host: 5
> 2015-12-31 04:09:07,416 INFO [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Unable to find matched 
> VM in CloudStack DB. name: New Virtual Machine
> 2015-12-31 04:09:07,420 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Process VM state report. 
> host: 5, number of records in report: 5
> 2015-12-31 04:09:07,420 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) VM state report. host: 
> 5, vm id: 69, power state: PowerOff
> 2015-12-31 04:09:07,530 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) VM state report is 
> updated. host: 5, vm id: 69, power state: PowerOff
> 2015-12-31 04:09:07,540 INFO [c.c.v.VirtualMachineManagerImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) VM r-69-VM is at Stopped 
> and we received a power-off report while there is no pending jobs on it
> 2015-12-31 04:09:07,541 ERROR [o.a.c.f.m.MessageDispatcher] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Unexpected exception 
> when calling 
> com.cloud.vm.ClusteredVirtualMachineManagerImpl.HandlePowerStateReport
> java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.cloudstack.framework.messagebus.MessageDispatcher.dispatch(MessageDispatcher.java:75)
> at 
> org.apache.cloudstack.framework.messagebus.MessageDispatcher.onPublishMessage(MessageDispatcher.java:45)
> at 
> org.apache.cloudstack.framework.messagebus.MessageBusBase$SubscriptionNode.notifySubscribers(MessageBusBase.java:441)
> at 
> org.apache.cloudstack.framework.messagebus.MessageBusBase.publish(MessageBusBase.java:178)
> at 
> com.cloud.vm.VirtualMachinePowerStateSyncImpl.processReport(VirtualMachinePowerStateSyncImpl.java:87)
> at 
> com.cloud.vm.VirtualMachinePowerStateSyncImpl.processHostVmStatePingReport(VirtualMachinePowerStateSyncImpl.java:70)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.processCommands(VirtualMachineManagerImpl.java:2879)
> at 
> com.cloud.agent.manager.AgentManagerImpl.handleCommands(AgentManagerImpl.java:309)
> at 
> com.cloud.agent.manager.DirectAgentAttache$PingTask.runInContext(DirectAgentAttache.java:192)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> 

[jira] [Commented] (CLOUDSTACK-8939) VM Snapshot size with memory is not correctly calculated in cloud.usage_event (XenServer)

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954965#comment-15954965
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8939:


Github user niteshsarda commented on the issue:

https://github.com/apache/cloudstack/pull/914
  
I have tested this and **LGTM** for the test.

Following are the test results:

**Before applying fix :**

[root@xenserver-jay ~]# xe vbd-list vm-name-label=i-2-120-VM empty=false 
params=vdi-uuid
vdi-uuid ( RO): 674c2c07-954e-4fa0-943a-979a71748727


[root@xenserver-jay ~]# xe vdi-list 
params=physical-utilisation,sm-config,is-a-snapshot 
uuid=674c2c07-954e-4fa0-943a-979a71748727
is-a-snapshot ( RO)   : false
physical-utilisation ( RO): 81256448
   sm-config (MRO): 
host_OpaqueRef:a3c1c534-b5e3-6cb5-10af-b56dead63fc0: RW; vhd-parent: 
1aac168d-22d2-4796-8669-48a3f4361dfc

   
[root@xenserver-jay ~]# xe vdi-list 
params=physical-utilisation,sm-config,is-a-snapshot 
uuid=1aac168d-22d2-4796-8669-48a3f4361dfc
is-a-snapshot ( RO)   : false
physical-utilisation ( RO): 233251328
   sm-config (MRO): vhd-blocks: 
eJxjYEAB7AykARFGPc1cN4UVJGqDAUYG5tZJF0ymbpi4bYUCAxPJ+nn2sDQLMGwTYzzI0OvAQob1AgIBip6BrAIrG0jWDAEAQmQNjA==;
 vhd-parent: 55943089-24fa-4fbe-a44e-e57fe9a080ed
   
[root@xenserver-jay ~]# xe vdi-list name-label=Suspend\ image params=all
uuid ( RO): 105fbe1d-973b-4d8d-82e7-96246669a769
  name-label ( RW): Suspend image
name-description ( RW): Suspend image
   is-a-snapshot ( RO): true
 snapshot-of ( RO): 
   snapshots ( RO):
   snapshot-time ( RO): 20170401T07:49:50Z
  allowed-operations (SRO): forget; generate_config; update; resize; 
destroy; clone; copy; snapshot
  current-operations (SRO):
 sr-uuid ( RO): 99b84794-d0d6-46c6-fa42-07af5b423b8c
   sr-name-label ( RO): d2fecabd-61cf-3458-a120-ea01a4f91c29
   vbd-uuids (SRO):
 crashdump-uuids (SRO):
virtual-size ( RO): 782237696
physical-utilisation ( RO): 6144
location ( RO): 105fbe1d-973b-4d8d-82e7-96246669a769
type ( RO): Suspend
sharable ( RO): false
   read-only ( RO): false
storage-lock ( RO): false
 managed ( RO): true
  parent ( RO): 
 missing ( RO): false
other-config (MRW): content_id: 
842a46b4-bffd-70fb-b00a-47c39de8959b
   xenstore-data (MRO):
   sm-config (MRO): vhd-parent: 
638118a6-5f85-4c35-9aa8-d19793a144a8
 on-boot ( RW): persist
   allow-caching ( RW): false
 metadata-latest ( RO): false
metadata-of-pool ( RO): 
tags (SRW):


[root@xenserver-jay ~]# xe vdi-list 
uuid=638118a6-5f85-4c35-9aa8-d19793a144a8 params=all
uuid ( RO): 638118a6-5f85-4c35-9aa8-d19793a144a8
  name-label ( RW): base copy
name-description ( RW):
   is-a-snapshot ( RO): false
 snapshot-of ( RO): 
   snapshots ( RO):
   snapshot-time ( RO): 19700101T00:00:00Z
  allowed-operations (SRO): forget; generate_config; update; resize; 
destroy; clone; copy; snapshot
  current-operations (SRO):
 sr-uuid ( RO): 99b84794-d0d6-46c6-fa42-07af5b423b8c
   sr-name-label ( RO): d2fecabd-61cf-3458-a120-ea01a4f91c29
   vbd-uuids (SRO):
 crashdump-uuids (SRO):
virtual-size ( RO): 782237696
physical-utilisation ( RO): 548430336
location ( RO): 638118a6-5f85-4c35-9aa8-d19793a144a8
type ( RO): User
sharable ( RO): false
   read-only ( RO): true
storage-lock ( RO): false
 managed ( RO): false
  parent ( RO): 
 missing ( RO): false
other-config (MRW):
   xenstore-data (MRO):
   sm-config (MRO): vhd-blocks: eJz7/x8/+MGAAgD69CDZ
 on-boot ( RW): persist
   allow-caching ( RW): false
 metadata-latest ( RO): false
metadata-of-pool ( RO): 
tags (SRW):

   
   
mysql> select * from cloud.usage_event where resource_name like 
"i-2-120-VM%" \G
*** 1. row ***
   id: 84
 type: 

[jira] [Commented] (CLOUDSTACK-9848) VR commands exist status is not checked in python config files

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954941#comment-15954941
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9848:


Github user bvbharatk commented on the issue:

https://github.com/apache/cloudstack/pull/2018
  
code changes LGTM.


> VR commands exist status is not checked in python config files
> --
>
> Key: CLOUDSTACK-9848
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9848
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
>
> When iptables rules are configured on the VR, failures or exceptions are not 
> detected in the VR because the iptables commands' exit/return status is not 
> checked. Also, in the exception catch block, the failure is not returned.
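
A minimal sketch of the kind of check the fix adds (hypothetical helper name; the actual VR python scripts differ): run the command, inspect its exit status, and return failure from the except block instead of swallowing it:

```python
import subprocess

def execute_and_check(cmd):
    """Run a VR configuration command; return False on any failure
    instead of silently ignoring a non-zero exit status."""
    try:
        result = subprocess.run(cmd, shell=True,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    except OSError:
        # Previously failures here were logged and dropped; now propagate.
        return False
    return result.returncode == 0

print(execute_and_check("true"))   # -> True on a POSIX shell
print(execute_and_check("false"))  # -> False
```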





[jira] [Created] (CLOUDSTACK-9860) CloudStack should be able to pass 'hard' shutdown instruction to hosts to force a guest instance shutdown

2017-04-04 Thread Paul Angus (JIRA)
Paul Angus created CLOUDSTACK-9860:
--

 Summary: CloudStack should be able to pass 'hard' shutdown 
instruction to hosts to force a guest instance shutdown
 Key: CLOUDSTACK-9860
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9860
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Hypervisor Controller, Management Server
Reporter: Paul Angus


The 'force' option provided with the stopVirtualMachine API command is often 
assumed to be a hard shutdown sent to the hypervisor, when in fact it is for 
CloudStack's internal use.

CloudStack should be able to send the 'hard' power-off request to the hosts.





[jira] [Commented] (CLOUDSTACK-8944) Template download possible from new secondary storages before the download is 100 % complete

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954681#comment-15954681
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8944:


Github user yvsubhash closed the pull request at:

https://github.com/apache/cloudstack/pull/921


> Template download possible from new secondary storages before the download is 
> 100 % complete
> 
>
> Key: CLOUDSTACK-8944
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8944
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.5.2
> Environment: xenserver host with nfs storage
>Reporter: subhash yedugundla
>
> ISSUE
> ==
> Secondary Storage (Parent is NULL in the database) 
> after the secondary storage is added from the CloudStack GUI, which in turn 
> leads to an invalid download URL for a template.
>  
> TROUBLESHOOTING
> ===
> The parameters provided when the secondary storage was created: 
> Name: 
> Provider:NFS 
> Zone:dev3-z1 
> Server:192.168.125.12 
> Path:/vol/dev03/test 
>   when we add secondary storage
> {noformat}
> 2015-06-11 07:27:40,686 TRACE [c.c.u.d.T.Statement] 
> (catalina-exec-19:ctx-11906a2c ctx-550a6e46) (logid:0fb48736) Closing: 
> com.mysql.jdbc.JDBC4PreparedStatement@7e703121: INSERT INTO image_store 
> (image_store.id, image_store.name, image_store.uuid, image_store.protocol, 
> image_store.url, image_store.image_provider_name, image_store.data_center_id, 
> image_store.scope, image_store.created, image_store.role, image_store.parent, 
> image_store.total_size, image_store.used_bytes) VALUES (0, _binary'sec3', 
> _binary'471d5edc-424e-41fb-a21e-47e53670fe62', _binary'nfs', 
> _binary'nfs://10.104.49.65/nfs/sec3', _binary'NFS', 1, 'ZONE', '2015-06-11 
> 01:57:40', 'Image', null, null, null) 
> mysql> select * from image_store where id=3 \G; 
> *** 1. row *** 
> id: 3 
> name: sec3 
> image_provider_name: NFS 
> protocol: nfs 
> url: nfs://10.104.49.65/nfs/sec3 
> data_center_id: 1 
> scope: ZONE 
> role: Image 
> uuid: 471d5edc-424e-41fb-a21e-47e53670fe62 
> parent: NULL 
> created: 2015-06-11 01:57:40 
> removed: NULL 
> total_size: NULL 
> used_bytes: NULL 
> 1 row in set (0.00 sec) 
> {noformat}
>  Template download fails if the parent is NULL.
> The URL published when the customer extracts the template gives a 403 Forbidden 
> error. 
> Example :
> Template id:3343 
> The URL is below. 
> https://210-140-168-1.systemip.idcfcloud.com/userdata/8aa50513-e60e-481f-989d-5bbd119504df.ova
>  
> The template is stored on the new mount-point (je01v-secstr01-02 )
> {noformat}
> root@s-1-VM:/var/www/html/userdata# df -h 
> Filesystem Size Used Avail Use% Mounted on 
> rootfs 276M 144M 118M 55% / 
> udev 10M 0 10M 0% /dev 
> tmpfs 201M 224K 201M 1% /run 
> /dev/disk/by-uuid/1458767f-a01a-4237-89e8-930f8c42fffe 276M 144M 118M 55% / 
> tmpfs 5.0M 0 5.0M 0% /run/lock 
> tmpfs 515M 0 515M 0% /run/shm 
> /dev/sda1 45M 22M 21M 51% /boot 
> /dev/sda6 98M 5.6M 88M 6% /home 
> /dev/sda8 368M 11M 339M 3% /opt 
> /dev/sda10 63M 5.3M 55M 9% /tmp 
> /dev/sda7 610M 518M 61M 90% /usr 
> /dev/sda9 415M 248M 146M 63% /var 
> 10.133.245.11:/je01v-secstr01-01 16T 11T 5.5T 66% 
> /mnt/SecStorage/8c0f1709-5d1d-3f0e-b100-ccfb873cf3ff 
> 10.133.245.11:/je01v-secstr01-02 5.9T 4.0T 1.9T 69% 
> /mnt/SecStorage/22836274-19c4-301a-80d8-690f16530e0a **THIS ONE 
> From the SSVM
> root@s-1-VM:/var/www/html/userdata# ls -lah | grep 3343 
> lrwxrwxrwx 1 root root 83 May 20 06:11 
> 8aa50513-e60e-481f-989d-5bbd119504df.ova -> 
> /mnt/SecStorage/null/template/tmpl/19/3343/d93d6fcf-bb4e-3287-8346-a7781c39ecdb.ova
>  
> {noformat}
> The symbolic link is 
> "/mnt/SecStorage/null/template/tmpl/19/3343/d93d6fcf-bb4e-3287-8346-a7781c39ecdb.ova".
>  
> We assumed the problem is that the link contains a "null" directory. 
> The correct symbolic link should be 
> "/mnt/SecStorage/22836274-19c4-301a-80d8-690f16530e0a/template/tmpl/19/3343/d93d6fcf-bb4e-3287-8346-a7781c39ecdb.ova"
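
The "null" path segment is consistent with the NULL parent column being concatenated directly into the symlink target. An illustrative sketch (hypothetical helper, not the actual SSVM code) reproducing both the broken and the correct links:

```python
def template_symlink_target(parent, account_id, template_id, filename):
    """Build the symlink target the SSVM publishes; a missing (NULL)
    parent shows up literally as 'null' in the path."""
    mount = parent if parent is not None else "null"
    return "/mnt/SecStorage/{}/template/tmpl/{}/{}/{}".format(
        mount, account_id, template_id, filename)

# NULL parent reproduces the broken link from the report:
print(template_symlink_target(None, 19, 3343,
                              "d93d6fcf-bb4e-3287-8346-a7781c39ecdb.ova"))
# -> /mnt/SecStorage/null/template/tmpl/19/3343/d93d6fcf-bb4e-3287-8346-a7781c39ecdb.ova

# With the parent column populated the link points at the real mount:
print(template_symlink_target("22836274-19c4-301a-80d8-690f16530e0a", 19, 3343,
                              "d93d6fcf-bb4e-3287-8346-a7781c39ecdb.ova"))
```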





[jira] [Commented] (CLOUDSTACK-8944) Template download possible from new secondary storages before the download is 100 % complete

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954680#comment-15954680
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8944:


Github user yvsubhash commented on the issue:

https://github.com/apache/cloudstack/pull/921
  
Closing this PR, as this is no longer needed


> Template download possible from new secondary storages before the download is 
> 100 % complete
> 
>
> Key: CLOUDSTACK-8944
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8944
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.5.2
> Environment: xenserver host with nfs storage
>Reporter: subhash yedugundla
>
> ISSUE
> ==
> Secondary Storage (Parent is NULL in the database) 
> after the secondary storage is added from the CloudStack GUI, which in turn 
> leads to an invalid download URL for a template.
>  
> TROUBLESHOOTING
> ===
> The parameters provided when the secondary storage was created: 
> Name: 
> Provider:NFS 
> Zone:dev3-z1 
> Server:192.168.125.12 
> Path:/vol/dev03/test 
>   when we add secondary storage
> {noformat}
> 2015-06-11 07:27:40,686 TRACE [c.c.u.d.T.Statement] 
> (catalina-exec-19:ctx-11906a2c ctx-550a6e46) (logid:0fb48736) Closing: 
> com.mysql.jdbc.JDBC4PreparedStatement@7e703121: INSERT INTO image_store 
> (image_store.id, image_store.name, image_store.uuid, image_store.protocol, 
> image_store.url, image_store.image_provider_name, image_store.data_center_id, 
> image_store.scope, image_store.created, image_store.role, image_store.parent, 
> image_store.total_size, image_store.used_bytes) VALUES (0, _binary'sec3', 
> _binary'471d5edc-424e-41fb-a21e-47e53670fe62', _binary'nfs', 
> _binary'nfs://10.104.49.65/nfs/sec3', _binary'NFS', 1, 'ZONE', '2015-06-11 
> 01:57:40', 'Image', null, null, null) 
> mysql> select * from image_store where id=3 \G; 
> *** 1. row *** 
> id: 3 
> name: sec3 
> image_provider_name: NFS 
> protocol: nfs 
> url: nfs://10.104.49.65/nfs/sec3 
> data_center_id: 1 
> scope: ZONE 
> role: Image 
> uuid: 471d5edc-424e-41fb-a21e-47e53670fe62 
> parent: NULL 
> created: 2015-06-11 01:57:40 
> removed: NULL 
> total_size: NULL 
> used_bytes: NULL 
> 1 row in set (0.00 sec) 
> {noformat}
>  Template download fails if the parent is NULL.
> The URL published when the customer extracts the template gives a 403 Forbidden 
> error. 
> Example :
> Template id:3343 
> The URL is below. 
> https://210-140-168-1.systemip.idcfcloud.com/userdata/8aa50513-e60e-481f-989d-5bbd119504df.ova
>  
> The template is stored on the new mount-point (je01v-secstr01-02 )
> {noformat}
> root@s-1-VM:/var/www/html/userdata# df -h 
> Filesystem Size Used Avail Use% Mounted on 
> rootfs 276M 144M 118M 55% / 
> udev 10M 0 10M 0% /dev 
> tmpfs 201M 224K 201M 1% /run 
> /dev/disk/by-uuid/1458767f-a01a-4237-89e8-930f8c42fffe 276M 144M 118M 55% / 
> tmpfs 5.0M 0 5.0M 0% /run/lock 
> tmpfs 515M 0 515M 0% /run/shm 
> /dev/sda1 45M 22M 21M 51% /boot 
> /dev/sda6 98M 5.6M 88M 6% /home 
> /dev/sda8 368M 11M 339M 3% /opt 
> /dev/sda10 63M 5.3M 55M 9% /tmp 
> /dev/sda7 610M 518M 61M 90% /usr 
> /dev/sda9 415M 248M 146M 63% /var 
> 10.133.245.11:/je01v-secstr01-01 16T 11T 5.5T 66% 
> /mnt/SecStorage/8c0f1709-5d1d-3f0e-b100-ccfb873cf3ff 
> 10.133.245.11:/je01v-secstr01-02 5.9T 4.0T 1.9T 69% 
> /mnt/SecStorage/22836274-19c4-301a-80d8-690f16530e0a **THIS ONE 
> From the SSVM
> root@s-1-VM:/var/www/html/userdata# ls -lah | grep 3343 
> lrwxrwxrwx 1 root root 83 May 20 06:11 
> 8aa50513-e60e-481f-989d-5bbd119504df.ova -> 
> /mnt/SecStorage/null/template/tmpl/19/3343/d93d6fcf-bb4e-3287-8346-a7781c39ecdb.ova
>  
> {noformat}
> The symbolic link is 
> "/mnt/SecStorage/null/template/tmpl/19/3343/d93d6fcf-bb4e-3287-8346-a7781c39ecdb.ova".
>  
> We assumed the problem is that the link contains a "null" directory. 
> The correct symbolic link should be 
> "/mnt/SecStorage/22836274-19c4-301a-80d8-690f16530e0a/template/tmpl/19/3343/d93d6fcf-bb4e-3287-8346-a7781c39ecdb.ova"





[jira] [Updated] (CLOUDSTACK-9099) SecretKey is returned from the APIs

2017-04-04 Thread Jayapal Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayapal Reddy updated CLOUDSTACK-9099:
--
Affects Version/s: 4.9.0
Fix Version/s: 4.9.3.0
   4.10.0.0

> SecretKey is returned from the APIs
> ---
>
> Key: CLOUDSTACK-9099
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9099
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.9.0
>Reporter: Kshitij Kansal
>Assignee: Kshitij Kansal
> Fix For: 4.10.0.0, 4.9.3.0
>
>
> The secretKey parameter is returned from the following APIs:
> createAccount
> createUser
> disableAccount
> disableUser
> enableAccount
> enableUser
> listAccounts
> listUsers
> lockAccount
> lockUser
> registerUserKeys
> updateAccount
> updateUser
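
A minimal sketch of the kind of fix implied here (hypothetical helper, not the actual response-view code): filter the sensitive field out of the serialized user/account response before returning it:

```python
def strip_secret_key(response):
    """Remove the secretkey field from an API response dict before
    it is returned to the caller."""
    return {k: v for k, v in response.items() if k.lower() != "secretkey"}

resp = {"id": "42", "username": "bob", "secretkey": "s3cr3t", "state": "enabled"}
print(strip_secret_key(resp))
# -> {'id': '42', 'username': 'bob', 'state': 'enabled'}
```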





[jira] [Commented] (CLOUDSTACK-9099) SecretKey is returned from the APIs

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954666#comment-15954666
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9099:


Github user jayapalu commented on the issue:

https://github.com/apache/cloudstack/pull/1996
  
@rhtyd Once this PR gets the LGTMs, I can rebase it onto 4.9. Can you please 
review this PR?


> SecretKey is returned from the APIs
> ---
>
> Key: CLOUDSTACK-9099
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9099
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.9.0
>Reporter: Kshitij Kansal
>Assignee: Kshitij Kansal
> Fix For: 4.10.0.0, 4.9.3.0
>
>
> The secretKey parameter is returned from the following APIs:
> createAccount
> createUser
> disableAccount
> disableUser
> enableAccount
> enableUser
> listAccounts
> listUsers
> lockAccount
> lockUser
> registerUserKeys
> updateAccount
> updateUser


