[jira] [Commented] (CLOUDSTACK-7516) test_snapshots.py - VM Deploy failed because the account was using template belonging to different account to deploy the instance

2014-09-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135015#comment-14135015
 ] 

ASF subversion and git services commented on CLOUDSTACK-7516:
-

Commit 13357cff7d15d439f1393b18558776831260a1bd in cloudstack's branch 
refs/heads/master from [~gauravaradhye]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=13357cf ]

CLOUDSTACK-7516: Fixed resource permission issue in test_snapshots.py, account 
was using template registered with other account

Signed-off-by: SrikanteswaraRao Talluri tall...@apache.org


 test_snapshots.py - VM Deploy failed because the account was using template 
 belonging to different account to deploy the instance
 -

 Key: CLOUDSTACK-7516
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7516
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation
Affects Versions: 4.5.0
Reporter: Gaurav Aradhye
Assignee: Gaurav Aradhye
  Labels: automation
 Fix For: 4.5.0


 Following test case failed:
 integration.component.test_snapshots.TestCreateVMSnapshotTemplate.test_01_createVM_snapshotTemplate
 Reason:
 Execute cmd: deployvirtualmachine failed, due to: errorCode: 531, 
 errorText:Acct[51d00171-895e-4893-90c8-6630b98f852a-test-TestCreateVMSnapshotTemplate-BJ9XFN]
  does not have permission to operate with resource 
 Acct[e7b7973c-3512-11e4-9ac6-1a6f7bb0d0a8-admin]
 Solution:
 Create the template with the API client of the account itself, and not the 
 API client of the root domain account, so that the account will have 
 permission to use the resource (template).
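The permission rule behind error 531 can be sketched with a minimal model (the names and dict layout below are illustrative stand-ins, not the CloudStack API): an account may deploy from a template only if it owns the template or the template is public.

```python
# Minimal model of the resource-permission rule behind errorCode 531:
# an account may deploy from a template only if it owns the template
# or the template is public. Illustrative only, not the CloudStack API.
def can_use_template(account, template):
    return template["ispublic"] or template["owner"] == account

admin_template = {"owner": "admin", "ispublic": False}
own_template   = {"owner": "test-acct", "ispublic": False}

# Failing case: the test account used a template registered by admin.
assert not can_use_template("test-acct", admin_template)
# Fixed case: the template is registered with the account's own API client.
assert can_use_template("test-acct", own_template)
```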



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7387) [Automation] Fix the script test_vpc_host_maintenance.py - Code is hardcoded to use certain host tags

2014-09-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135020#comment-14135020
 ] 

ASF subversion and git services commented on CLOUDSTACK-7387:
-

Commit 1b14fa6abef5811079eeb7cbd26ab718f6f69405 in cloudstack's branch 
refs/heads/master from [~gauravaradhye]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=1b14fa6 ]

CLOUDSTACK-7387: Corrected code related to adding host tags

Signed-off-by: SrikanteswaraRao Talluri tall...@apache.org


 [Automation] Fix the script test_vpc_host_maintenance.py - Code is 
 hardcoded to use certain host tags
 ---

 Key: CLOUDSTACK-7387
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7387
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation, Test
Affects Versions: 4.5.0
Reporter: Chandan Purushothama
Assignee: Gaurav Aradhye
Priority: Critical
 Fix For: 4.5.0


 *Script is present at:*
 component/maint/test_vpc_host_maintenance.py
 Currently the script assumes that the deployment has hosts with the host tag 
 HOST_TAGS_HERE and uses two service offerings with this host tag. The 
 script is hardcoded with this information. The proper design and 
 correction should be as follows:
 # Find a cluster with two hosts.
 ## If no two-host cluster is found, error out with a proper message.
 # Edit the host tags on the two hosts to two different unique names.
 # Create corresponding service offerings with the two different unique names.
 # Conduct the tests.
 # In the teardown section of the script, reset the host tags on the hosts 
 to empty.
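The redesign above can be sketched in plain Python (the cluster/host structures below are simplified stand-ins for Marvin objects, not real API calls):

```python
import uuid

def pick_two_host_cluster(clusters):
    """Return the first cluster with at least two hosts, else fail loudly."""
    for name, hosts in clusters.items():
        if len(hosts) >= 2:
            return name, hosts[:2]
    raise RuntimeError("No cluster with two hosts found; cannot run the test")

def assign_unique_tags(hosts):
    """Give each host a unique tag instead of a hardcoded HOST_TAGS_HERE."""
    return {host: "hosttag-" + uuid.uuid4().hex[:8] for host in hosts}

clusters = {"c1": ["h1"], "c2": ["h2", "h3"]}
name, hosts = pick_two_host_cluster(clusters)
tags = assign_unique_tags(hosts)
assert name == "c2" and len(set(tags.values())) == 2

# Teardown: reset the tags to empty, mirroring the last step above.
tags = {host: "" for host in tags}
assert all(tag == "" for tag in tags.values())
```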



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-7266) Deleting account is not cleaning the snapshot entries in secondary storage

2014-09-16 Thread manasaveloori (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

manasaveloori closed CLOUDSTACK-7266.
-

Verified the issue on the latest build. Working fine.

[root@RHEL63testVM 31]# pwd
/home/secondaryVMw/snapshots/31
[root@RHEL63testVM 31]# ls -lrt
total 0

commit:
[root@RHEL63testVM ~]# cloudstack-sccs
1148c318b392e9ca9fa93692a99850e105d293b7

Closing the issue.

 Deleting account is not cleaning the snapshot entries in secondary storage
 --

 Key: CLOUDSTACK-7266
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7266
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Snapshot
Affects Versions: 4.5.0
Reporter: manasaveloori
Assignee: Min Chen
Priority: Critical
 Fix For: 4.5.0

 Attachments: cloud.log-20140806.gz, management-server.rar, 
 mysqldump45.dmp


 Steps:
 1. Deployed CS with ESXi5.1
 2. Created VM with data disk.
 3. Created snapshots on both root and data disks.
 4. Now deleted the account.
  id: 4
account_name: test
uuid: c55b8251-7bb0-4531-bd7e-7811d55160e6
type: 0
   domain_id: 1
   state: enabled
 removed: 2014-08-06 06:04:50
  cleanup_needed: 0
  network_domain: NULL
 default_zone_id: NULL
 default: 0
 5. All the snapshots got deleted as a part of account cleanup in the DB:
 mysql> select * from snapshots where account_id=4\G;
 *** 1. row ***
   id: 38
   data_center_id: 1
   account_id: 4
domain_id: 1
volume_id: 18
 disk_offering_id: 1
   status: Destroyed
 path: NULL
 name: testvmacct1_ROOT-11_2014080605
 uuid: 716f0dd5-056e-4cab-b814-125fc6fe84e6
snapshot_type: 0
 type_description: MANUAL
 size: 2147483648
  created: 2014-08-06 05:44:44
  removed: NULL
   backup_snap_id: NULL
 swift_id: NULL
   sechost_id: NULL
 prev_snap_id: NULL
  hypervisor_type: VMware
  version: 2.2
s3_id: NULL
 *** 2. row ***
   id: 39
   data_center_id: 1
   account_id: 4
domain_id: 1
volume_id: 18
 disk_offering_id: 1
   status: Destroyed
 path: NULL
 name: testvmacct1_ROOT-11_20140806054650
 uuid: ea949547-5ef7-40b0-99a6-152d346a0ad6
snapshot_type: 3
 type_description: HOURLY
 size: 2147483648
  created: 2014-08-06 05:46:50
  removed: NULL
   backup_snap_id: NULL
 swift_id: NULL
   sechost_id: NULL
 prev_snap_id: NULL
  hypervisor_type: VMware
  version: 2.2
s3_id: NULL
 *** 3. row ***
   id: 40
   data_center_id: 1
   account_id: 4
domain_id: 1
volume_id: 18
 disk_offering_id: 1
   status: Destroyed
 path: NULL
 name: testvmacct1_ROOT-11_20140806055150
 uuid: b8af0810-b08a-40bd-a114-13a4d75907ca
snapshot_type: 4
 type_description: DAILY
 size: 2147483648
  created: 2014-08-06 05:51:50
  removed: NULL
   backup_snap_id: NULL
 swift_id: NULL
   sechost_id: NULL
 prev_snap_id: NULL
  hypervisor_type: VMware
  version: 2.2
s3_id: NULL
 *** 4. row ***
   id: 41
   data_center_id: 1
   account_id: 4
domain_id: 1
volume_id: 18
 disk_offering_id: 1
   status: Destroyed
 path: NULL
 name: testvmacct1_ROOT-11_20140806055150
 uuid: dda5cfb7-f5ab-4bd0-bdd3-953932e77f5e
snapshot_type: 5
 type_description: WEEKLY
 size: 2147483648
  created: 2014-08-06 05:51:50
  removed: NULL
   backup_snap_id: NULL
 swift_id: NULL
   sechost_id: NULL
 prev_snap_id: NULL
  hypervisor_type: VMware
  version: 2.2
s3_id: NULL
 *** 5. row ***
   id: 42
   data_center_id: 1
   account_id: 4
domain_id: 1
volume_id: 19
 disk_offering_id: 3
   status: Destroyed
 path: NULL
 name: testvmacct1_DATA-11_20140806055150
 uuid: 69c8b7d1-baab-42ac-9a7d-dbdb0688654f
snapshot_type: 3
 type_description: HOURLY
 size: 5368709120
  created: 2014-08-06 05:51:50
  removed: NULL
   backup_snap_id: NULL
 swift_id: NULL
   sechost_id: NULL
 prev_snap_id: NULL
  hypervisor_type: VMware
  

[jira] [Created] (CLOUDSTACK-7555) [Automation] Fix the script - /component/test_usage.py - Template should belong to the Regular Account to test TEMPLATE.CREATE Event

2014-09-16 Thread Chandan Purushothama (JIRA)
Chandan Purushothama created CLOUDSTACK-7555:


 Summary: [Automation] Fix the script - /component/test_usage.py - 
Template should belong to the Regular Account to test TEMPLATE.CREATE Event
 Key: CLOUDSTACK-7555
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7555
 Project: CloudStack
  Issue Type: Test
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Automation, Test
Affects Versions: 4.5.0
Reporter: Chandan Purushothama
Assignee: Chandan Purushothama
Priority: Critical
 Fix For: 4.5.0


*TestTemplateUsage.test_01_template_usage* fails with the following error 
message and stack trace:

{noformat}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 332, in run
    testMethod()
  File "/root/cloudstack/test/integration/component/test_usage.py", line 802, in test_01_template_usage
    "Check TEMPLATE.CREATE event in events table"
  File "/usr/lib/python2.7/unittest/case.py", line 516, in assertEqual
    assertion_func(first, second, msg=msg)
  File "/usr/lib/python2.7/unittest/case.py", line 509, in _baseAssertEqual
    raise self.failureException(msg)
'Check TEMPLATE.CREATE event in events table\n
{noformat}

This is because the Template is being created as admin and it belongs to the 
admin account. The template should belong to the Regular User in order to check 
for the TEMPLATE.CREATE Event.
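The failing assertion can be modelled minimally (the event rows below are simplified stand-ins for the events table, not the CloudStack schema):

```python
def template_create_event_seen(events, account):
    """True if a TEMPLATE.CREATE event was recorded for `account`."""
    return any(e["type"] == "TEMPLATE.CREATE" and e["account"] == account
               for e in events)

# The template was created as admin, so the regular account sees no event.
events = [{"type": "TEMPLATE.CREATE", "account": "admin"}]
assert not template_create_event_seen(events, "regular-user")

# After the fix, the regular account creates the template itself.
events.append({"type": "TEMPLATE.CREATE", "account": "regular-user"})
assert template_create_event_seen(events, "regular-user")
```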



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-7556) [API]Non live Migration of root and data disk from cluster wide primary to zone wide and vice versa is not failing

2014-09-16 Thread manasaveloori (JIRA)
manasaveloori created CLOUDSTACK-7556:
-

 Summary: [API]Non live Migration of root and data disk from 
cluster wide primary to zone wide and vice versa is not failing
 Key: CLOUDSTACK-7556
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7556
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Storage Controller
Affects Versions: 4.5.0
 Environment: 1zone,1pod,2 clusters with ESXi5.1 HV each
Reporter: manasaveloori
Priority: Critical
 Fix For: 4.5.0


1. Cluster C1 has primary storage PS1 and C2 has primary storage PS2. Also 
added one zone-wide primary storage, PSZW.
2. Deployed a VM with a data disk under cluster 1.
3. Stopped the VM.
4. Fired the API for migrating the root/data disk from PS1 to PSZW.

Migration of root/data from cluster wide to zone wide and vice versa should 
fail as there is a scope change.
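The expected validation can be modelled as a scope-equality check (a deliberate simplification of the storage controller's logic, for illustration only):

```python
def offline_volume_migration_allowed(src_scope, dst_scope):
    """Expected behaviour per this report: offline migration between a
    cluster-wide and a zone-wide pool changes scope and should be rejected.
    Simplified to a scope-equality check for illustration."""
    return src_scope == dst_scope

assert offline_volume_migration_allowed("CLUSTER", "CLUSTER")
assert not offline_volume_migration_allowed("CLUSTER", "ZONE")   # PS1 -> PSZW
assert not offline_volume_migration_allowed("ZONE", "CLUSTER")   # and back
```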



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7552) [Automation][HyperV] Fix the script - /smoke/test_volumes.py - TestCreateVolume.test_01_create_volume

2014-09-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135044#comment-14135044
 ] 

ASF subversion and git services commented on CLOUDSTACK-7552:
-

Commit 8567701f07a3bf43a4f7532745e967d029229589 in cloudstack's branch 
refs/heads/master from sanjeev
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=8567701 ]

CLOUDSTACK-7552: In hyper-v additional data disks will be mapped to /dev/sdb
Made changes to test_volumes.py accordingly


 [Automation][HyperV] Fix the script - /smoke/test_volumes.py - 
 TestCreateVolume.test_01_create_volume
 ---

 Key: CLOUDSTACK-7552
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7552
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation, Test
Affects Versions: 4.5.0
Reporter: Chandan Purushothama
Assignee: Chandan Purushothama
Priority: Critical
 Fix For: 4.5.0


 The test case 
 *integration.smoke.test_volumes.TestCreateVolume.test_01_create_volume* 
 failed on Hyper-V because it queried the wrong disk on the guest VM: it 
 queried */dev/sda* instead of */dev/sdb*.
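A tiny sketch of the corrected lookup (the mapping below is a simplification for this test setup, where the root volume occupies /dev/sda):

```python
def guest_device_for(disk_role):
    """Device node the test should query inside the guest: the root volume
    occupies /dev/sda, so the attached data disk appears as /dev/sdb.
    Simplified mapping for illustration."""
    return {"root": "/dev/sda", "data": "/dev/sdb"}[disk_role]

# The failing test queried the root device when checking the data disk.
assert guest_device_for("data") == "/dev/sdb"
assert guest_device_for("root") == "/dev/sda"
```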



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7552) [Automation][HyperV] Fix the script - /smoke/test_volumes.py - TestCreateVolume.test_01_create_volume

2014-09-16 Thread Sanjeev N (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjeev N updated CLOUDSTACK-7552:
--
Status: Reviewable  (was: In Progress)

 [Automation][HyperV] Fix the script - /smoke/test_volumes.py - 
 TestCreateVolume.test_01_create_volume
 ---

 Key: CLOUDSTACK-7552
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7552
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation, Test
Affects Versions: 4.5.0
Reporter: Chandan Purushothama
Assignee: Sanjeev N
Priority: Critical
 Fix For: 4.5.0


 The test case 
 *integration.smoke.test_volumes.TestCreateVolume.test_01_create_volume* 
 failed on Hyper-V because it queried the wrong disk on the guest VM: it 
 queried */dev/sda* instead of */dev/sdb*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-7552) [Automation][HyperV] Fix the script - /smoke/test_volumes.py - TestCreateVolume.test_01_create_volume

2014-09-16 Thread Sanjeev N (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjeev N resolved CLOUDSTACK-7552.
---
Resolution: Fixed

 [Automation][HyperV] Fix the script - /smoke/test_volumes.py - 
 TestCreateVolume.test_01_create_volume
 ---

 Key: CLOUDSTACK-7552
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7552
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation, Test
Affects Versions: 4.5.0
Reporter: Chandan Purushothama
Assignee: Sanjeev N
Priority: Critical
 Fix For: 4.5.0


 The test case 
 *integration.smoke.test_volumes.TestCreateVolume.test_01_create_volume* 
 failed on Hyper-V because it queried the wrong disk on the guest VM: it 
 queried */dev/sda* instead of */dev/sdb*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-7557) test_vpc_network.TestVPCNetworkUpgrade.test_01_network_services_upgrade failed with Network state should change to Allocated, it is Implemented

2014-09-16 Thread Gaurav Aradhye (JIRA)
Gaurav Aradhye created CLOUDSTACK-7557:
--

 Summary: 
test_vpc_network.TestVPCNetworkUpgrade.test_01_network_services_upgrade failed 
with Network state should change to Allocated, it is Implemented
 Key: CLOUDSTACK-7557
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7557
 Project: CloudStack
  Issue Type: Test
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Automation
Affects Versions: 4.5.0
Reporter: Gaurav Aradhye
Assignee: Gaurav Aradhye
 Fix For: 4.5.0


After stopping VMs, the network state should change from Implemented to 
Allocated after some time.

Root cause of failure:
The wait time in the test case is too short: 6 seconds instead of 60. Hence 
the test case failed before the state changed to Allocated.

Resolution:
Fix the wait time.
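Rather than a single fixed sleep, the check can poll with a proper deadline; a sketch with illustrative names (not the actual test code):

```python
import time

def wait_for_state(get_state, expected, timeout=60, interval=2):
    """Poll get_state() until it returns `expected` or `timeout` elapses."""
    deadline = time.time() + timeout
    while True:
        if get_state() == expected:
            return True
        if time.time() >= deadline:
            return False
        time.sleep(interval)

# Simulated network: Implemented for two polls, then Allocated.
states = iter(["Implemented", "Implemented", "Allocated"])
assert wait_for_state(lambda: next(states, "Allocated"),
                      "Allocated", timeout=60, interval=0.01)
```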




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-7558) [UI]list storage pools under Migrate root volume is not listing the primary storage of other clusters.

2014-09-16 Thread manasaveloori (JIRA)
manasaveloori created CLOUDSTACK-7558:
-

 Summary: [UI]list storage pools under Migrate root volume is not 
listing the primary storage of other clusters.
 Key: CLOUDSTACK-7558
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7558
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Storage Controller, UI
Affects Versions: 4.5.0
 Environment: 1zone,1pod,2 clusters with 2 cluster wide primary storage 
pools.
Reporter: manasaveloori
 Fix For: 4.5.0


1. Cluster C1 with PS1 and cluster C2 with PS2; both are cluster-wide primary 
storages.
2. Deployed a VM with a data disk under C1.
3. Stopped the VM.
4. Migrated the root/data volume from PS1 to PS2.

Observation:

Under the volume migration dialog, the drop-down is not listing the primary 
storages of other clusters.
As this is a supported operation, the UI should list them.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7554) [Automation] Fix the script - /component/test_templates.py - User Account does not have permissions to the Template created by Admin

2014-09-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135060#comment-14135060
 ] 

ASF subversion and git services commented on CLOUDSTACK-7554:
-

Commit 50990c40423062ed0b724472c95a1d3120b8b66d in cloudstack's branch 
refs/heads/master from [~chandanp]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=50990c4 ]

CLOUDSTACK-7554 : Fixed the script - /component/test_templates.py - User 
Account now has permissions to the Template created by Admin


 [Automation] Fix the script - /component/test_templates.py - User Account 
 does not have permissions to the Template created by Admin
 ---

 Key: CLOUDSTACK-7554
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7554
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation, Test
Affects Versions: 4.5.0
Reporter: Chandan Purushothama
Assignee: Chandan Purushothama
Priority: Critical
 Fix For: 4.5.0


 Two test cases failed due to a template permissions issue:
 *test_01_create_template_volume*
 *test_04_template_from_snapshot*
 Template created by Admin should have public permissions so that the Regular 
 Account can use it to deploy VMs in the test cases.
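A minimal model of the fix (the dict below stands in for a template; it loosely mirrors what CloudStack's updateTemplatePermissions API does, but is not real API code):

```python
def update_template_permissions(template, ispublic):
    """Toy stand-in for updateTemplatePermissions: flip the public flag."""
    template["ispublic"] = ispublic
    return template

template = {"owner": "admin", "ispublic": False}

# Before the fix, the regular account cannot use the admin's template...
assert not (template["ispublic"] or template["owner"] == "regular-acct")
# ...and after marking it public, it can.
update_template_permissions(template, True)
assert template["ispublic"] or template["owner"] == "regular-acct"
```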



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7557) test_vpc_network.TestVPCNetworkUpgrade.test_01_network_services_upgrade failed with Network state should change to Allocated, it is Implemented

2014-09-16 Thread Gaurav Aradhye (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Aradhye updated CLOUDSTACK-7557:
---
Status: Reviewable  (was: In Progress)

 test_vpc_network.TestVPCNetworkUpgrade.test_01_network_services_upgrade 
 failed with Network state should change to Allocated, it is Implemented
 -

 Key: CLOUDSTACK-7557
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7557
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation
Affects Versions: 4.5.0
Reporter: Gaurav Aradhye
Assignee: Gaurav Aradhye
 Fix For: 4.5.0


 After stopping VMs, the network state should change from Implemented to 
 Allocated after some time.
 Root cause of failure:
 The wait time in the test case is too short: 6 seconds instead of 60. 
 Hence the test case failed before the state changed to Allocated.
 Resolution:
 Fix the wait time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-7516) test_snapshots.py - VM Deploy failed because the account was using template belonging to different account to deploy the instance

2014-09-16 Thread Gaurav Aradhye (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Aradhye resolved CLOUDSTACK-7516.

Resolution: Fixed

 test_snapshots.py - VM Deploy failed because the account was using template 
 belonging to different account to deploy the instance
 -

 Key: CLOUDSTACK-7516
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7516
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation
Affects Versions: 4.5.0
Reporter: Gaurav Aradhye
Assignee: Gaurav Aradhye
  Labels: automation
 Fix For: 4.5.0


 Following test case failed:
 integration.component.test_snapshots.TestCreateVMSnapshotTemplate.test_01_createVM_snapshotTemplate
 Reason:
 Execute cmd: deployvirtualmachine failed, due to: errorCode: 531, 
 errorText:Acct[51d00171-895e-4893-90c8-6630b98f852a-test-TestCreateVMSnapshotTemplate-BJ9XFN]
  does not have permission to operate with resource 
 Acct[e7b7973c-3512-11e4-9ac6-1a6f7bb0d0a8-admin]
 Solution:
 Create the template with the API client of the account itself, and not the 
 API client of the root domain account, so that the account will have 
 permission to use the resource (template).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-7441) [Automation] Fix the script test_resource_limits.py - Templates are registered as admin's

2014-09-16 Thread Gaurav Aradhye (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Aradhye resolved CLOUDSTACK-7441.

Resolution: Fixed

 [Automation] Fix the script test_resource_limits.py - Templates are 
 registered as admin's
 ---

 Key: CLOUDSTACK-7441
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7441
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation, Test
Affects Versions: 4.5.0
Reporter: Chandan Purushothama
Assignee: Gaurav Aradhye
Priority: Critical
 Fix For: 4.5.0


 Two test cases are failing in the test suite: *test_05_templates_per_account* 
 and *test_05_templates_per_domain*
 *Error Message*
 Exception not raised   Logs available at 
 http://xenrt.hq.xensource.com/control/queue.cgi?action=testlogs&id=804076&phase=Parallel&test=resource_limits
 Stacktrace
   File /usr/lib/python2.7/unittest/case.py, line 332, in run
 testMethod()
   File /root/cloudstack/test/integration/component/test_resource_limits.py, 
 line 844, in test_05_templates_per_account
 domainid=self.account_1.domainid,
   File /usr/lib/python2.7/unittest/case.py, line 116, in __exit__
 {0} not raised.format(exc_name))
 'Exception not raised\n
 Logs available at 
 http://xenrt.hq.xensource.com/control/queue.cgi?action=testlogs&id=804076&phase=Parallel&test=resource_limits

 *Error Message*
 Exception not raised   Logs available at 
 http://xenrt.hq.xensource.com/control/queue.cgi?action=testlogs&id=804076&phase=Parallel&test=resource_limits
 Stacktrace
   File /usr/lib/python2.7/unittest/case.py, line 332, in run
 testMethod()
   File /root/cloudstack/test/integration/component/test_resource_limits.py, 
 line 1355, in test_05_templates_per_domain
 domainid=self.account.domainid,
   File /usr/lib/python2.7/unittest/case.py, line 116, in __exit__
 {0} not raised.format(exc_name))
 'Exception not raised\n
 Logs available at 
 http://xenrt.hq.xensource.com/control/queue.cgi?action=testlogs&id=804076&phase=Parallel&test=resource_limits

 *Bug Information in the Client result logs:*
 {code}
 ==
 FAIL: Test Templates limit per account
 --
 Traceback (most recent call last):
   File /root/cloudstack/test/integration/component/test_resource_limits.py, 
 line 844, in test_05_templates_per_account
 domainid=self.account_1.domainid,
 AssertionError: Exception not raised
   begin captured stdout  -
 === TestName: test_05_templates_per_account | Status : FAILED ===
 .
 .
 .
 .
 test_05_templates_per_account 
 (integration.component.test_resource_limits.TestResourceLimitsAccount): 
 DEBUG: Response : {jobprocstatus : 0, created : u'2014-08-24T14:19:11+', 
 jobresult : {domain : u'ROOT', domainid : 
 u'56ab18f0-2b4d-11e4-89bd-1e5d0e053e75', ostypename : u'CentOS 5.3 (64-bit)', 
 zoneid : u'eb811e7d-59d4-4c72-a965-80d9e30572d1', displaytext : u'Cent OS 
 Template', ostypeid : u'56b1e2f2-2b4d-11e4-89bd-1e5d0e053e75', 
 passwordenabled : False, id : u'c406715b-aa97-4361-a212-32cdfba76b00', size : 
 21474836480, isready : True, format : u'VHD', templatetype : u'USER', details 
 : {platform : u'viridian:true;acpi:1;apic:true;pae:true;nx:true', 
 Message.ReservedCapacityFreed.Flag : u'false'}, zonename : u'XenRT-Zone-0', 
 status : u'Download Complete', isdynamicallyscalable : False, tags : [], 
 isfeatured : False, sshkeyenabled : False, account : u'admin', isextractable 
 : True, crossZones : False, sourcetemplateid : 
 u'56af8f20-2b4d-11e4-89bd-1e5d0e053e75', name : u'Cent OS Template-LS47A4', 
 created : u'2014-08-24T14:19:11+', hypervisor : u'XenServer', ispublic : 
 False}, cmd : 
 u'org.apache.cloudstack.api.command.admin.template.CreateTemplateCmdByAdmin', 
 userid : u'77df13d2-2b4d-11e4-89bd-1e5d0e053e75', jobstatus : 1, jobid : 
 u'e9295a94-4dfd-4c27-a405-fab4975f0eee', jobresultcode : 0, jobinstanceid : 
 u'c406715b-aa97-4361-a212-32cdfba76b00', jobresulttype : u'object', 
 jobinstancetype : u'Template', accountid : 
 u'77df055e-2b4d-11e4-89bd-1e5d0e053e75'}
 test_05_templates_per_account 
 (integration.component.test_resource_limits.TestResourceLimitsAccount): 
 DEBUG: ===Jobid:e9295a94-4dfd-4c27-a405-fab4975f0eee ; StartTime:Sun Aug 24 
 14:19:12 2014 ; EndTime:Sun Aug 24 14:20:52 2014 ; TotalTime:-100===
 test_05_templates_per_account 
 (integration.component.test_resource_limits.TestResourceLimitsAccount): 
 DEBUG: Response : {jobprocstatus : 0, created : u'2014-08-24T14:19:11+', 
 jobresult : {domain : 

[jira] [Resolved] (CLOUDSTACK-7135) [Automation] Fix the script test_baremetal.py - Can't have more than one Guest network in zone with network type Basic

2014-09-16 Thread Gaurav Aradhye (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Aradhye resolved CLOUDSTACK-7135.

Resolution: Fixed

 [Automation] Fix the script test_baremetal.py - Can't have more than one 
 Guest network in zone with network type Basic
 

 Key: CLOUDSTACK-7135
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7135
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation, Test
Affects Versions: 4.5.0
Reporter: Chandan Purushothama
Assignee: Gaurav Aradhye
Priority: Critical
 Fix For: 4.5.0


 
 Error Message:
 
 test_baremetal (integration.component.test_baremetal.TestBaremetal): DEBUG: 
 Sending GET Cmd : createNetwork===
 requests.packages.urllib3.connectionpool: INFO: Starting new HTTP connection 
 (1): 10.220.135.73
 requests.packages.urllib3.connectionpool: DEBUG: GET 
 /client/api?apiKey=Ra1mlXzCZU0K1l4MKDWdRbQDU67PCQuRnKYv3hyc-Q8hSvCSFjB32UtifLbS6oYpMeKaf0BCuUidMw0LqZeCMA&zoneid=1&displaytext=defaultBaremetalNetwork&networkofferingid=84c3e203-139e-4412-ab81-8a6abecb3e35&response=json&name=defaultBaremetalNetwork&command=createNetwork&signature=alemHuTsxw31sTOaAyaZn2Cw4N8%3D
  HTTP/1.1 431 165
 test_baremetal (integration.component.test_baremetal.TestBaremetal): ERROR: 
 Exception:['Traceback (most recent call last):\n', '  File 
 /local/jenkins/workspace/xenrt-reg-basic-xs/work.64/env/local/lib/python2.7/site-packages/marvin/cloudstackConnection.py,
  line 308, in __parseAndGetResponse\nresponse_cls)\n', '  File 
 /local/jenkins/workspace/xenrt-reg-basic-xs/work.64/env/local/lib/python2.7/site-packages/marvin/jsonHelper.py,
  line 150, in getResultObj\nraise 
 cloudstackException.CloudstackAPIException(respname, errMsg)\n', 
 CloudstackAPIException: Execute cmd: createnetwork failed, due to: 
 errorCode: 431, errorText:Can't have more than one Guest network in zone with 
 network type Basic\n]
 Traceback (most recent call last):
   File 
 /local/jenkins/workspace/xenrt-reg-basic-xs/work.64/env/local/lib/python2.7/site-packages/marvin/cloudstackConnection.py,
  line 308, in __parseAndGetResponse
 response_cls)
   File 
 /local/jenkins/workspace/xenrt-reg-basic-xs/work.64/env/local/lib/python2.7/site-packages/marvin/jsonHelper.py,
  line 150, in getResultObj
 raise cloudstackException.CloudstackAPIException(respname, errMsg)
 CloudstackAPIException: Execute cmd: createnetwork failed, due to: errorCode: 
 431, errorText:Can't have more than one Guest network in zone with network 
 type Basic
 test_baremetal (integration.component.test_baremetal.TestBaremetal): ERROR: 
 marvinRequest : CmdName: marvin.cloudstackAPI.createNetwork.createNetworkCmd 
 object at 0x302ebd0 Exception: ['Traceback (most recent call last):\n', '  
 File 
 /local/jenkins/workspace/xenrt-reg-basic-xs/work.64/env/local/lib/python2.7/site-packages/marvin/cloudstackConnection.py,
  line 375, in marvinRequest\nraise self.__lastError\n', 
 CloudstackAPIException: Execute cmd: createnetwork failed, due to: 
 errorCode: 431, errorText:Can't have more than one Guest network in zone with 
 network type Basic\n]
 Traceback (most recent call last):
   File 
 /local/jenkins/workspace/xenrt-reg-basic-xs/work.64/env/local/lib/python2.7/site-packages/marvin/cloudstackConnection.py,
  line 375, in marvinRequest
 raise self.__lastError
 CloudstackAPIException: Execute cmd: createnetwork failed, due to: errorCode: 
 431, errorText:Can't have more than one Guest network in zone with network 
 type Basic
 test_baremetal (integration.component.test_baremetal.TestBaremetal): 
 CRITICAL: EXCEPTION: test_baremetal: ['Traceback (most recent call last):\n', 
 '  File /usr/lib/python2.7/unittest/case.py, line 332, in run\n
 testMethod()\n', '  File 
 /home/jenkins/workspace/xenrt-reg-basic-xs/cloudstack.git/test/integration/component/test_baremetal.py,
  line 110, in test_baremetal\nnetwork = Network.create(self.apiclient, 
 self.services[network], zoneid=self.zoneid, 
 networkofferingid=networkoffering.id)\n', '  File 
 /local/jenkins/workspace/xenrt-reg-basic-xs/work.64/env/local/lib/python2.7/site-packages/marvin/lib/base.py,
  line 2591, in create\nreturn 
 Network(apiclient.createNetwork(cmd).__dict__)\n', '  File 
 /local/jenkins/workspace/xenrt-reg-basic-xs/work.64/env/local/lib/python2.7/site-packages/marvin/cloudstackAPI/cloudstackAPIClient.py,
  line 1854, in createNetwork\nresponse = 
 self.connection.marvinRequest(command, response_type=response, 
 method=method)\n', '  File 
 

[jira] [Resolved] (CLOUDSTACK-7351) [Automation] test_02_deploy_ha_vm_from_iso test case fails during VM deploy

2014-09-16 Thread Gaurav Aradhye (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Aradhye resolved CLOUDSTACK-7351.

Resolution: Fixed

 [Automation] test_02_deploy_ha_vm_from_iso test case fails during VM deploy 
 

 Key: CLOUDSTACK-7351
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7351
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.5.0
 Environment: KVM (RHEL 6.3)
Reporter: Rayees Namathponnan
Assignee: Gaurav Aradhye
Priority: Critical
 Fix For: 4.5.0

 Attachments: CLOUDSTACK-7351.rar


 This issue was observed while running the test case 
 integration.component.test_stopped_vm.TestDeployHaEnabledVM.test_02_deploy_ha_vm_from_iso
 This test case deploys a VM with the below command: 
 2014-08-14 15:59:45,255 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-10:ctx-4e4260d3) ===START===  10.223.240.194 -- GET  
 account=test-TestVMAccountLimit-test_02_deploy_ha_vm_from_iso-AYL50Y&domainid=8b53537a-23f9-11e4-9ac6-1a6f7bb0d0a8&displayname=testserver&signature=4xBMTxK5iiazeFgwm2GisNo1SvM%3D&zoneid=a99226f1-d924-4156-8157-90bec0fa6579&apiKey=uBqUNp_2XuCg6uwv_LMLO2W6drySk_RYAiVlcdSda1yBfLTiC2SAlFk2LX9HLLpPkAs0zoTzASxzSN0OSUnfoQ&startvm=True&templateid=5cc1e055-5f49-4f12-91da-d01bf7ee509c&command=deployVirtualMachine&response=json&diskofferingid=543e345e-645a-4bf9-bd4e-af1db46470e7&serviceofferingid=db22034a-1bdd-494f-9627-fb6fd4e16585
 This deployment failed with the below error: 
 2014-08-14 15:59:45,353 DEBUG [o.a.c.e.o.NetworkOrchestrator] 
 (catalina-exec-10:ctx-4e4260d3 ctx-b9541cb6 ctx-d0edfeec) Releasing lock for 
 Acct[76597f29-a3e7-41a8-abc7-1cef552cf748-test-TestVMAccountLimit-test_02_deploy_ha_vm_from_iso-AYL50Y]
 2014-08-14 15:59:45,388 INFO  [c.c.a.ApiServer] 
 (catalina-exec-10:ctx-4e4260d3 ctx-b9541cb6 ctx-d0edfeec) hypervisor 
 parameter is needed to deploy VM or the hypervisor parameter value passed is 
 invalid
 Is it required to pass hypervisor type ? 
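The rejected request above omitted a hypervisor parameter while deploying from an ISO. A minimal sketch of the kind of validation the ApiServer message implies (hypothetical helper and names, not the actual management-server code; the real check lives server-side):

```python
def validate_deploy_params(params, iso_ids,
                           valid_hypervisors=("XenServer", "KVM", "VMware")):
    """Mimic the API-server check that rejected the deploy above.

    `params` is the deployVirtualMachine query dict; `iso_ids` is the set of
    template ids that are actually ISOs. Hypothetical helper for illustration.
    """
    deploying_from_iso = params.get("templateid") in iso_ids
    hypervisor = params.get("hypervisor")
    # An ISO carries no hypervisor-specific format, so the caller must say
    # which hypervisor to deploy on; a template already implies one.
    if deploying_from_iso and hypervisor is None:
        raise ValueError("hypervisor parameter is needed to deploy VM")
    if hypervisor is not None and hypervisor not in valid_hypervisors:
        raise ValueError("the hypervisor parameter value passed is invalid")
    return True

# The failing request supplied templateid of an ISO but no hypervisor:
params = {"templateid": "5cc1e055-5f49-4f12-91da-d01bf7ee509c", "startvm": "True"}
try:
    validate_deploy_params(params, iso_ids={"5cc1e055-5f49-4f12-91da-d01bf7ee509c"})
except ValueError as e:
    print(e)  # hypervisor parameter is needed to deploy VM
```

So the answer to the question appears to be yes: when deploying from an ISO, the hypervisor type must be passed explicitly.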



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-7391) [Automation] Fix the script test_host_high_availability.py - Error Message: suitablehost should not be None

2014-09-16 Thread Gaurav Aradhye (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Aradhye resolved CLOUDSTACK-7391.

Resolution: Fixed

 [Automation] Fix the script test_host_high_availability.py - Error Message: 
 suitablehost should not be None
 ---

 Key: CLOUDSTACK-7391
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7391
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation, Test
Affects Versions: 4.5.0
Reporter: Chandan Purushothama
Assignee: Gaurav Aradhye
 Fix For: 4.5.0


 ==
 Client Code:
 ==
 {code}
 def test_03_cant_migrate_vm_to_host_with_ha_positive(self):
     """Verify you can not migrate VMs to hosts with an ha.tag (positive)"""
     .
     .
     .
     vm = vms[0]
     self.debug("Deployed VM on host: %s" % vm.hostid)
     # Find out a suitable host for VM migration
     list_hosts_response = list_hosts(
         self.apiclient,  # *BUG: Query the list of hosts with vm id. Only
         # then the response will have the list of suitable and non-suitable
         # hosts. Else suitableforMigration is not returned in the response*
     )
     self.assertEqual(
         isinstance(list_hosts_response, list),
         True,
         "The listHosts API returned an invalid list"
     )
     self.assertNotEqual(
         len(list_hosts_response),
         0,
         "The listHosts returned nothing."
     )
     suitableHost = None
     for host in list_hosts_response:
         if host.suitableformigration == True and host.hostid != vm.hostid:
             suitableHost = host
             break
     self.assertTrue(suitableHost is not None, "suitablehost should not be None")
 {code}
 *Error Message: suitablehost should not be None*
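The BUG note above can be illustrated with a small, self-contained sketch of the host-selection loop (hypothetical `SimpleNamespace` stand-ins for host records; in the real test, `list_hosts` must be called with the VM's id so that `suitableformigration` appears in each host record):

```python
from types import SimpleNamespace

def pick_suitable_host(hosts, current_hostid):
    """Return the first host that is flagged suitable for migration and is
    not the host the VM is already on; None if no such host exists."""
    for host in hosts:
        # suitableformigration is only present in the listHosts response when
        # the request includes virtualmachineid -- otherwise the attribute is
        # absent and getattr falls back to False, so no host ever qualifies.
        if getattr(host, "suitableformigration", False) and host.hostid != current_hostid:
            return host
    return None

hosts = [
    SimpleNamespace(hostid="h1", suitableformigration=False),
    SimpleNamespace(hostid="h2", suitableformigration=True),
]
assert pick_suitable_host(hosts, current_hostid="h1").hostid == "h2"

# Without virtualmachineid the flag is missing, reproducing the failure above:
bare = [SimpleNamespace(hostid="h2")]
assert pick_suitable_host(bare, current_hostid="h1") is None
```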
 {code}
 Cmd : listHosts===
 requests.packages.urllib3.connectionpool: INFO: Starting new HTTP connection 
 (1): 10.220.135.39
 requests.packages.urllib3.connectionpool: DEBUG: GET 
 /client/api?apiKey=NpffyWZkfwK7gPcNpx28Ohv6K56ftl57A409SyokqHjJ2ZNe3AvvF3F0teTETeIIqrtlcWpQOooM3cQyPveGXw&command=listHosts&response=json&signature=gh2gh3mSzQNAcfMdspqc9v1JE3U%3D
  HTTP/1.1 200 3708
 test_03_cant_migrate_vm_to_host_with_ha_positive 
 (integration.component.maint.test_host_high_availability.TestHostHighAvailability):
  DEBUG: Response : [{name : u's-2-VM', created : u'2014-08-20T04:31:37+', 
 ipaddress : u'10.220.136.107', islocalstorageactive : False, podid : 
 u'027c1e45-5867-40f8-8ad9-685b5eb63dd2', resourcestate : u'Enabled', zoneid : 
 u'f2acfe0c-c8c8-4353-8f97-a3e0f14d6357', state : u'Up', version : 
 u'4.5.0-SNAPSHOT', managementserverid : 231707544610094, podname : 
 u'XenRT-Zone-0-Pod-0', id : u'bb004159-d510-42b4-bfd5-878140a11f78', 
 lastpinged : u'1970-01-16T22:04:57+', type : u'SecondaryStorageVM', 
 events : u'AgentDisconnected; PingTimeout; Remove; ShutdownRequested; 
 AgentConnected; HostDown; ManagementServerDown; Ping; StartAgentRebalance', 
 zonename : u'XenRT-Zone-0'}, {name : u'v-1-VM', created : 
 u'2014-08-20T04:31:37+', ipaddress : u'10.220.136.105', 
 islocalstorageactive : False, podid : 
 u'027c1e45-5867-40f8-8ad9-685b5eb63dd2', resourcestate : u'Enabled', zoneid : 
 u'f2acfe0c-c8c8-4353-8f97-a3e0f14d6357', state : u'Up', version : 
 u'4.5.0-SNAPSHOT', managementserverid : 231707544610094, podname : 
 u'XenRT-Zone-0-Pod-0', id : u'f328a0d1-f4cb-4486-9550-dd46c403c3ed', 
 lastpinged : u'1970-01-16T22:04:57+', type : u'ConsoleProxy', events : 
 u'AgentDisconnected; PingTimeout; Remove; ShutdownRequested; AgentConnected; 
 HostDown; ManagementServerDown; Ping; StartAgentRebalance', zonename : 
 u'XenRT-Zone-0'}, {cpuwithoverprovisioning : u'28800.0', version : 
 u'4.5.0-SNAPSHOT', memorytotal : 31073792896, zoneid : 
 u'f2acfe0c-c8c8-4353-8f97-a3e0f14d6357', cpunumber : 12, managementserverid : 
 231707544610094, cpuallocated : u'2.08%', memoryused : 4211653, id : 
 u'1f5f180e-3eb1-4a6a-92f8-8df71df57962', cpuused : u'0.03%', 
 hypervisorversion : u'6.2.0', clusterid : 
 u'af55ad36-15c8-424b-916b-db1550aae5ff', capabilities : u'xen-3.0-x86_64 , 
 xen-3.0-x86_32p , hvm-3.0-x86_32 , hvm-3.0-x86_32p , hvm-3.0-x86_64', state : 
 u'Up', memoryallocated : 268435456, networkkbswrite : 5383, cpuspeed : 2400, 
 cpusockets : 2, type : u'Routing', events : u'AgentDisconnected; PingTimeout; 
 Remove; ShutdownRequested; AgentConnected; HostDown; ManagementServerDown; 
 Ping; StartAgentRebalance', zonename : u'XenRT-Zone-0', podid : 
 u'027c1e45-5867-40f8-8ad9-685b5eb63dd2', clustertype : u'CloudManaged', 
 hahost : False, lastpinged : u'1970-01-16T22:04:56+', ipaddress : 
 

[jira] [Closed] (CLOUDSTACK-7215) [Automation] Make expunge=True as default parameter value for destroyVirtualMachine api call through base library

2014-09-16 Thread Gaurav Aradhye (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Aradhye closed CLOUDSTACK-7215.
--
Resolution: Fixed

 [Automation] Make expunge=True as default parameter value for 
 destroyVirtualMachine api call through base library
 -

 Key: CLOUDSTACK-7215
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7215
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation
Affects Versions: 4.5.0
Reporter: Gaurav Aradhye
Assignee: Gaurav Aradhye
  Labels: automation
 Fix For: 4.5.0


 In almost 90% of the scenarios where VMs are created through a test case, the 
 VMs are added to the cleanup list and their delete method is called through the 
 cleanup_resources method in the utils.py file.
 These VMs remain in the Destroyed state for a long time and keep blocking 
 resources (IP addresses etc.), which increases the load on the setup on which 
 the regression build is fired.
 Making expunge=True the default parameter in the destroyVirtualMachine api call 
 through the base library will make all these VMs expunge quickly, making 
 resources available for the next test cases.
 Expunge can still be passed as False whenever we don't want the VM to expunge 
 immediately, e.g. when a test case recovers the VM after destroying it. Pass 
 expunge=False in all such scenarios.
 This will also hugely boost test case execution speed.
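A minimal sketch of the proposed base-library change (class and method names modeled on Marvin's `base.py` but simplified here; the stub api client stands in for the real CloudStack connection):

```python
class DestroyVMCmd:
    """Stand-in for cloudstackAPI.destroyVirtualMachine's command object."""
    def __init__(self):
        self.id = None
        self.expunge = None

class VirtualMachine:
    """Simplified sketch of the Marvin base-library VM wrapper."""
    def __init__(self, vm_id):
        self.id = vm_id

    def delete(self, apiclient, expunge=True):
        # expunge defaults to True so destroyed VMs release their resources
        # (IP addresses etc.) immediately; callers pass expunge=False when a
        # test intends to recover the VM after destroying it.
        cmd = DestroyVMCmd()
        cmd.id = self.id
        cmd.expunge = expunge
        return apiclient.destroyVirtualMachine(cmd)

class StubApiClient:
    """Records what would be sent to the destroyVirtualMachine API."""
    def destroyVirtualMachine(self, cmd):
        return {"id": cmd.id, "expunge": cmd.expunge}

vm = VirtualMachine("vm-1")
client = StubApiClient()
assert vm.delete(client)["expunge"] is True              # default: expunge now
assert vm.delete(client, expunge=False)["expunge"] is False  # keep recoverable
```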



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-2266) Automation : IP Address reservation within a network

2014-09-16 Thread Gaurav Aradhye (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Aradhye resolved CLOUDSTACK-2266.

Resolution: Fixed

 Automation : IP Address reservation within a network
 

 Key: CLOUDSTACK-2266
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2266
 Project: CloudStack
  Issue Type: Sub-task
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Sudha Ponnaganti
Assignee: Ashutosk Kelkar
 Fix For: 4.3.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-2266) Automation : IP Address reservation within a network

2014-09-16 Thread Gaurav Aradhye (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Aradhye updated CLOUDSTACK-2266:
---
Assignee: Ashutosk Kelkar  (was: Gaurav Aradhye)

 Automation : IP Address reservation within a network
 

 Key: CLOUDSTACK-2266
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2266
 Project: CloudStack
  Issue Type: Sub-task
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Sudha Ponnaganti
Assignee: Ashutosk Kelkar
 Fix For: 4.3.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7421) [Automation] Exceptions in orchestrate* methods from virtualMachineManagerImpl are shown in log

2014-09-16 Thread Koushik Das (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135079#comment-14135079
 ] 

Koushik Das commented on CLOUDSTACK-7421:
-

Rayees, What is the expectation here? The exception has the proper message and 
it is logged in MS logs with category as ERROR.

 [Automation] Exceptions in orchestrate* methods from 
 virtualMachineManagerImpl are shown in log
 ---

 Key: CLOUDSTACK-7421
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7421
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.5.0
Reporter: Rayees Namathponnan
 Fix For: 4.5.0


 Steps to reproduce: 
 1) Deploy a VM 
 2) Add one more nic
 3) Remove the default nic from the VM 
 Result: 
 The below exception is thrown in the log; it should be handled with a proper 
 message.
 cloud.vm.VmWorkRemoveNicFromVm for VM 30, job origin: 248
 2014-08-23 09:50:33,665 ERROR [c.c.v.VmWorkJobDispatcher] 
 (Work-Job-Executor-61:ctx-5c747fe0 job-248/job-249) Unable to complete 
 AsyncJobVO {id:249, userId: 2, accountId: 2, instanceType: null, instanceId: 
 null, cmd: com.cloud.vm.VmWorkRemoveNicFromVm, cmdInfo: 
 rO0ABXNyACJjb20uY2xvdWQudm0uVm1Xb3JrUmVtb3ZlTmljRnJvbVZtxM1Xh9nBu10CAAFMAAVuaWNJZHQAEExqYXZhL2xhbmcvTG9uZzt4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1ldAASTGphdmEvbGFuZy9TdHJpbmc7eHAAAgACAB50ABlWaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsc3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhyABBqYXZhLmxhbmcuTnVtYmVyhqyVHQuU4IsCAAB4cABI,
  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
 null, initMsid: 90928106758026, completeMsid: null, lastUpdated: null, 
 lastPolled: null, created: Sat Aug 23 09:50:32 PDT 2014}, job origin:248
 com.cloud.utils.exception.CloudRuntimeException: Failed to remove nic from 
 VM[User|i-18-30-TestVM] in Ntwk[222|Guest|17], nic is default.
 at 
 com.cloud.vm.VirtualMachineManagerImpl.orchestrateRemoveNicFromVm(VirtualMachineManagerImpl.java:2963)
 at 
 com.cloud.vm.VirtualMachineManagerImpl.orchestrateRemoveNicFromVm(VirtualMachineManagerImpl.java:4690)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
 at 
 com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4738)
 at 
 com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
 at 
 org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:503)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
 at 
 org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:460)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7549) Apache cloudstack failed to authenticate using a novell NIM openldap server

2014-09-16 Thread JF Vincent (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135093#comment-14135093
 ] 

JF Vincent commented on CLOUDSTACK-7549:


Had a look at the packets sent and received by the NIM server. CloudStack 
correctly bound to the server (which correctly reported one entry for the 
user) but did not ask the server to check the password. Just 3 packets were 
exchanged. 
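For context, the usual LDAP authentication pattern the packet trace suggests is missing: after the service bind and the user search, the server should be asked to verify the password via a second bind as the user's own DN. A minimal sketch with a stubbed connection (hypothetical names, not CloudStack's LdapManagerImpl; real code would use an LDAP client library):

```python
class StubLdapConnection:
    """Stand-in for an LDAP connection, keyed dn -> password."""
    def __init__(self, directory):
        self.directory = directory

    def search_user_dn(self, username):
        # Service-account bind + search: find exactly one entry for the user.
        matches = [dn for dn in self.directory if dn.startswith("cn=%s," % username)]
        return matches[0] if len(matches) == 1 else None

    def bind(self, dn, password):
        # The password-verifying bind that, per the packet trace above,
        # was never requested from the NIM server.
        return self.directory.get(dn) == password

def ldap_authenticate(conn, username, password):
    dn = conn.search_user_dn(username)
    return dn is not None and conn.bind(dn, password)

conn = StubLdapConnection({"cn=b11,ou=users,o=nim": "secret"})
assert ldap_authenticate(conn, "b11", "secret") is True
assert ldap_authenticate(conn, "b11", "wrong") is False
```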

 Apache cloudstack failed to authenticate using a novell NIM openldap server
 ---

 Key: CLOUDSTACK-7549
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7549
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Novell NIM openldap server
Reporter: JF Vincent
Priority: Critical

 Succeeded to connect to a A.D. server.
 When trying to connect to a Novel NIM authentication server, authentication 
 failed while correctly configured :
 DEBUG [c.c.a.ApiServlet] (http-6443-exec-6:ctx-f6badad3) ===START===  
 10.26.238.65 -- POST
 DEBUG [c.c.u.AccountManagerImpl] (http-6443-exec-6:ctx-f6badad3) Attempting 
 to log in user: b11 in domain 1
 DEBUG [c.c.s.a.SHA256SaltedUserAuthenticator] (http-6443-exec-6:ctx-f6badad3) 
 Retrieving user: b11
 DEBUG [c.c.s.a.MD5UserAuthenticator] (http-6443-exec-6:ctx-f6badad3) 
 Retrieving user: b11
 DEBUG [c.c.s.a.MD5UserAuthenticator] (http-6443-exec-6:ctx-f6badad3) Password 
 does not match
 INFO  [o.a.c.l.LdapManagerImpl] (http-6443-exec-6:ctx-f6badad3) Failed to 
 authenticate user: b11. incorrect password.
 DEBUG [c.c.s.a.PlainTextUserAuthenticator] (http-6443-exec-6:ctx-f6badad3) 
 Retrieving user: b11
 DEBUG [c.c.s.a.PlainTextUserAuthenticator] (http-6443-exec-6:ctx-f6badad3) 
 Password does not match
 DEBUG [c.c.u.AccountManagerImpl] (http-6443-exec-6:ctx-f6badad3) Unable to 
 authenticate user with username b11 in domain 1
 DEBUG [c.c.u.AccountManagerImpl] (http-6443-exec-6:ctx-f6badad3) User: 
 a543197 in domain 1 has failed to log in
 DEBUG [c.c.a.ApiServlet] (http-6443-exec-6:ctx-f6badad3) ===END===  
 10.26.238.65 -- POST



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-7559) After migrating root volume to other cluster wide storage, start VM is not running the VM with root disk from new storage.

2014-09-16 Thread manasaveloori (JIRA)
manasaveloori created CLOUDSTACK-7559:
-

 Summary: After migrating root volume to other cluster wide 
storage, start VM is not running the VM with root disk from new storage.
 Key: CLOUDSTACK-7559
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7559
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Storage Controller
Affects Versions: 4.5.0
 Environment: 1zone ,1pod, 2 clusters with ESXi HV each cluster wide 
primary storage.
Reporter: manasaveloori
Priority: Critical
 Fix For: 4.5.0


1. 2 clusters C1 and C2 with PS1 and PS2 (both cluster wide)
2. Deploy a VM with a data disk under cluster C1.
3. Stop the VM and detach the data disk.
4. Migrate the root volume to PS2.
5. Start the VM.

Observed :

Initially the VM deployment failed, then the root volume was migrated back to 
PS1, and then the VM started.

Observed the following in MS logs:

2014-09-15 17:02:07,096 ERROR [c.c.v.VmWorkJobHandlerProxy] 
(Work-Job-Executor-18:ctx-a3ba0611 job-911/job-912 ctx-769e4a3b) Invocation 
exception, caused by: com.cloud.exception.ResourceUnavailableException: 
Resource [Cluster:1] is unreachable: Root volume is ready in different cluster, 
Deployment plan provided cannot be satisfied, unable to create a deployment for 
VM[User|i-2-59-VM]

Attaching the MS logs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7559) After migrating root volume to other cluster wide storage, start VM is not running the VM with root disk from new storage.

2014-09-16 Thread manasaveloori (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

manasaveloori updated CLOUDSTACK-7559:
--
Attachment: management-server.log.rar

 After migrating root volume to other cluster wide storage, start VM is not 
 running the VM with root disk from new storage.
 --

 Key: CLOUDSTACK-7559
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7559
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller
Affects Versions: 4.5.0
 Environment: 1zone ,1pod, 2 clusters with ESXi HV each cluster wide 
 primary storage.
Reporter: manasaveloori
Priority: Critical
 Fix For: 4.5.0

 Attachments: management-server.log.rar


 1. 2 clusters C1 and C2 with PS1 and PS2 (both cluster wide)
 2. Deploy a VM with a data disk under cluster C1.
 3. Stop the VM and detach the data disk.
 4. Migrate the root volume to PS2.
 5. Start the VM.
 Observed :
 Initially the VM deployment failed, then the root volume was migrated back to 
 PS1, and then the VM started.
 Observed the following in MS logs:
 2014-09-15 17:02:07,096 ERROR [c.c.v.VmWorkJobHandlerProxy] 
 (Work-Job-Executor-18:ctx-a3ba0611 job-911/job-912 ctx-769e4a3b) Invocation 
 exception, caused by: com.cloud.exception.ResourceUnavailableException: 
 Resource [Cluster:1] is unreachable: Root volume is ready in different 
 cluster, Deployment plan provided cannot be satisfied, unable to create a 
 deployment for VM[User|i-2-59-VM]
 Attaching the MS logs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-6465) vmware.reserve.mem is missing from cluster level settings

2014-09-16 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135208#comment-14135208
 ] 

Rohit Yadav commented on CLOUDSTACK-6465:
-

Any update on this one?

 vmware.reserve.mem is missing from cluster level settings 
 --

 Key: CLOUDSTACK-6465
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6465
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.4.0
Reporter: Harikrishna Patnala
Assignee: Harikrishna Patnala
Priority: Critical
 Fix For: 4.4.0


 vmware.reserve.mem is missing from cluster level settings 
 steps
 ===
 infrastructure -> cluster -> select a cluster -> settings: you should see 
 vmware.reserve.mem 
 DB:
 ===
 mysql> select name from configuration where scope = 'cluster';
 +--+
 | name |
 +--+
 | cluster.cpu.allocated.capacity.disablethreshold  |
 | cluster.cpu.allocated.capacity.notificationthreshold |
 | cluster.memory.allocated.capacity.disablethreshold   |
 | cluster.memory.allocated.capacity.notificationthreshold  |
 | cluster.storage.allocated.capacity.notificationthreshold |
 | cluster.storage.capacity.notificationthreshold   |
 | cpu.overprovisioning.factor  |
 | mem.overprovisioning.factor  |
 | vmware.reserve.cpu   |
 | xen.vm.vcpu.max  |
 +--+
 10 rows in set (0.00 sec)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-6459) Unable to enable maintenance mode on a Primary storage that crashed

2014-09-16 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135209#comment-14135209
 ] 

Rohit Yadav commented on CLOUDSTACK-6459:
-

Any update on this?

 Unable to enable maintenance mode on a Primary storage that crashed
 ---

 Key: CLOUDSTACK-6459
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6459
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.4.0
Reporter: Chandan Purushothama
Assignee: Min Chen
Priority: Critical
 Fix For: 4.4.0

 Attachments: kern.zip, management-server.log.2014-04-18.gz, 
 mysql_cloudstack_dump.zip


 Primary storage in my setup got powered off. I am not able to enable 
 maintenance mode on this primary storage.
 Enabling maintenance mode on the primary storage fails with the following 
 error. It eventually timed out after trying many times
 2014-04-18 16:43:50,020 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-1:ctx-bd92e323 ctx-7d4c2498) ===END===  10.214.5.40 -- GET  
 command=queryAsyncJobResult&jobId=62f6830a-c409-4449-a9c5-6a35b7b9fbed&response=json&sessionkey=WBpwG%2FryPRNNB1GRuHqam1zbtS8%3D&_=1397865006850
 2014-04-18 16:43:50,495 DEBUG [c.c.a.m.AgentManagerImpl] 
 (AgentManager-Handler-9:null) SeqA 2-792: Processing Seq 2-792:  { Cmd , 
 MgmtId: -1, via: 2, Ver: v1, Flags: 11, 
 [{com.cloud.agent.api.ConsoleProxyLoadReportCommand:{_proxyVmId:1,_loadInfo:{\n
   \connections\: []\n},wait:0}}] }
 2014-04-18 16:43:50,504 DEBUG [c.c.a.m.AgentManagerImpl] 
 (AgentManager-Handler-9:null) SeqA 2-792: Sending Seq 2-792:  { Ans: , 
 MgmtId: 6638073284439, via: 2, Ver: v1, Flags: 100010, 
 [{com.cloud.agent.api.AgentControlAnswer:{result:true,wait:0}}] }
 2014-04-18 16:43:52,539 WARN  [c.c.h.x.r.CitrixResourceBase] 
 (DirectAgent-143:ctx-16ea61bc) Async 600 seconds timeout for task 
 com.xensource.xenapi.Task@8aa497e8
 2014-04-18 16:43:52,563 DEBUG [c.c.h.x.r.CitrixResourceBase] 
 (DirectAgent-143:ctx-16ea61bc) unable to destroy 
 task(com.xensource.xenapi.Task@8aa497e8) on 
 host(0d2ea73b-12c0-433c-b1c3-e1f193e68f6e) due to You gave an invalid object 
 reference.  The object may have recently been deleted.  The class parameter 
 gives the type of reference given, and the handle parameter echoes the bad 
 value given.
 2014-04-18 16:43:52,564 DEBUG [c.c.h.x.r.CitrixResourceBase] 
 (DirectAgent-143:ctx-16ea61bc) Catch exception 
 com.cloud.utils.exception.CloudRuntimeException when stop VM:i-3-3-DR due to 
 com.cloud.utils.exception.CloudRuntimeException: Shutdown VM catch 
 HandleInvalid and VM is not in HALTED state
 2014-04-18 16:43:52,569 DEBUG [c.c.h.x.r.CitrixResourceBase] 
 (DirectAgent-143:ctx-16ea61bc) 10. The VM i-3-3-DR is in Running state
 2014-04-18 16:43:52,572 DEBUG [c.c.a.m.DirectAgentAttache] 
 (DirectAgent-143:ctx-16ea61bc) Seq 1-2385781902599520418: Response Received:
 2014-04-18 16:43:52,573 DEBUG [c.c.a.t.Request] 
 (DirectAgent-143:ctx-16ea61bc) Seq 1-2385781902599520418: Processing:  { Ans: 
 , MgmtId: 6638073284439, via: 1, Ver: v1, Flags: 10, 
 [{com.cloud.agent.api.StopAnswer:{platform:viridian:true;acpi:1;apic:true;pae:true;nx:true,result:false,details:Catch
  exception com.cloud.utils.exception.CloudRuntimeException when stop 
 VM:i-3-3-DR due to com.cloud.utils.exception.CloudRuntimeException: Shutdown 
 VM catch HandleInvalid and VM is not in HALTED state,wait:0}}] }
 2014-04-18 16:43:52,576 DEBUG [c.c.a.t.Request] 
 (Work-Job-Executor-2:job-30/job-31 ctx-191e1825) Seq 1-2385781902599520418: 
 Received:  { Ans: , MgmtId: 6638073284439, via: 1, Ver: v1, Flags: 10, { 
 StopAnswer } }
 2014-04-18 16:43:52,591 WARN  [c.c.v.VirtualMachineManagerImpl] 
 (Work-Job-Executor-2:job-30/job-31 ctx-191e1825) Unable to stop vm 
 VM[User|i-3-3-DR]
 2014-04-18 16:43:52,616 DEBUG [c.c.c.CapacityManagerImpl] 
 (Work-Job-Executor-2:job-30/job-31 ctx-191e1825) VM state transitted from 
 :Stopping to Running with event: OperationFailedvm's original host id: 1 new 
 host id: 1 host id before state transition: 1
 2014-04-18 16:43:52,616 ERROR [c.c.v.VmWorkJobHandlerProxy] 
 (Work-Job-Executor-2:job-30/job-31 ctx-191e1825) Invocation exception, caused 
 by: com.cloud.utils.exception.CloudRuntimeException: Unable to stop 
 VM[User|i-3-3-DR]
 2014-04-18 16:43:52,617 INFO  [c.c.v.VmWorkJobHandlerProxy] 
 (Work-Job-Executor-2:job-30/job-31 ctx-191e1825) Rethrow exception 
 com.cloud.utils.exception.CloudRuntimeException: Unable to stop 
 VM[User|i-3-3-DR]
 2014-04-18 16:43:52,617 DEBUG [c.c.v.VmWorkJobDispatcher] 
 (Work-Job-Executor-2:job-30/job-31) Done with run of VM work job: 
 com.cloud.vm.VmWorkStop for VM 3, job origin: 30
 2014-04-18 

[jira] [Commented] (CLOUDSTACK-6496) addHost fails for XenServer with vSwitch networking

2014-09-16 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135216#comment-14135216
 ] 

Rohit Yadav commented on CLOUDSTACK-6496:
-

Can we document this? Any update on this issue?

 addHost fails for XenServer with vSwitch networking
 ---

 Key: CLOUDSTACK-6496
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6496
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: XenServer
Affects Versions: Future, 4.4.0
 Environment: MS: ACS Master 
 (http://jenkins.buildacloud.org/job/package-rhel63-master/2647)
 XenServer 6.2
Reporter: Doug Clark
Assignee: Anthony Xu
Priority: Critical
 Fix For: 4.4.0

 Attachments: management-server.log


 Attempt to add a XenServer host (with the default vSwitch networking) to a 
 Basic Networking Zone fails.  Adding a XenServer host configured to use 
 bridge works ok.
 From MS log (attached):
 {noformat}
 2014-04-24 13:41:07,361 WARN  [c.c.h.x.r.CitrixResourceBase] 
 (DirectAgent-1:ctx-3e360a0c) Failed to configure brige firewall
 2014-04-24 13:41:07,361 WARN  [c.c.h.x.r.CitrixResourceBase] 
 (DirectAgent-1:ctx-3e360a0c) Check host 10.81.40.102 for CSP is installed or 
 not and check network mode for bridge
 2014-04-24 13:41:07,361 DEBUG [c.c.a.m.DirectAgentAttache] 
 (DirectAgent-1:ctx-3e360a0c) Seq 1-7133701809754865665: Response Received:
 2014-04-24 13:41:07,363 DEBUG [c.c.a.t.Request] (DirectAgent-1:ctx-3e360a0c) 
 Seq 1-7133701809754865665: Processing:  { Ans: , MgmtId: 275410316893143, 
 via: 1, Ver: v1, Flags: 110, [{com.cloud.agent.api.Set
 upAnswer:{_reconnect:true,result:false,details:Failed to configure 
 brige firewall,wait:0}}] }
 2014-04-24 13:41:07,363 DEBUG [c.c.a.t.Request] (catalina-exec-2:ctx-407da4e1 
 ctx-e02434c0 ctx-a13beb18) Seq 1-7133701809754865665: Received:  { Ans: , 
 MgmtId: 275410316893143, via: 1, Ver: v1, Flags: 110,
 { SetupAnswer } }
 2014-04-24 13:41:07,363 WARN  [c.c.h.x.d.XcpServerDiscoverer] 
 (catalina-exec-2:ctx-407da4e1 ctx-e02434c0 ctx-a13beb18) Unable to setup 
 agent 1 due to Failed to configure brige firewall
 2014-04-24 13:41:07,364 INFO  [c.c.u.e.CSExceptionErrorCode] 
 (catalina-exec-2:ctx-407da4e1 ctx-e02434c0 ctx-a13beb18) Could not find 
 exception: com.cloud.exception.ConnectionException in error code list for
  exceptions
 2014-04-24 13:41:07,364 WARN  [c.c.a.m.AgentManagerImpl] 
 (catalina-exec-2:ctx-407da4e1 ctx-e02434c0 ctx-a13beb18) Monitor 
 XcpServerDiscoverer says there is an error in the connect process for 1 due 
 to Reini
 tialize agent after setup.
 2014-04-24 13:41:07,364 INFO  [c.c.a.m.AgentManagerImpl] 
 (catalina-exec-2:ctx-407da4e1 ctx-e02434c0 ctx-a13beb18) Host 1 is 
 disconnecting with event AgentDisconnected
 2014-04-24 13:41:07,364 DEBUG [c.c.a.m.AgentAttache] 
 (DirectAgent-1:ctx-3e360a0c) Seq 1-7133701809754865665: No more commands found
 2014-04-24 13:41:07,366 DEBUG [c.c.a.m.AgentManagerImpl] 
 (catalina-exec-2:ctx-407da4e1 ctx-e02434c0 ctx-a13beb18) The next status of 
 agent 1is Alert, current status is Connecting
 2014-04-24 13:41:07,366 DEBUG [c.c.a.m.AgentManagerImpl] 
 (catalina-exec-2:ctx-407da4e1 ctx-e02434c0 ctx-a13beb18) Deregistering link 
 for 1 with state Alert
 2014-04-24 13:41:07,366 DEBUG [c.c.a.m.AgentManagerImpl] 
 (catalina-exec-2:ctx-407da4e1 ctx-e02434c0 ctx-a13beb18) Remove Agent : 1
 {noformat}
 ...snip...
 {noformat}
 2014-04-24 13:41:07,460 DEBUG [c.c.a.m.AgentManagerImpl] 
 (catalina-exec-2:ctx-407da4e1 ctx-e02434c0 ctx-a13beb18) Sending Disconnect 
 to listener: com.cloud.network.router.VirtualNetworkApplianceManagerImpl
 2014-04-24 13:41:07,460 DEBUG [c.c.h.Status] (catalina-exec-2:ctx-407da4e1 
 ctx-e02434c0 ctx-a13beb18) Transition:[Resource state = Enabled, Agent event 
 = AgentDisconnected, Host id = 1, name = xrtuk-09-03]
 2014-04-24 13:41:07,766 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
 (catalina-exec-2:ctx-407da4e1 ctx-e02434c0 ctx-a13beb18) Notifying other 
 nodes of to disconnect
 2014-04-24 13:41:07,770 WARN  [c.c.r.ResourceManagerImpl] 
 (catalina-exec-2:ctx-407da4e1 ctx-e02434c0 ctx-a13beb18) Unable to connect 
 due to
 com.cloud.exception.ConnectionException: Reinitialize agent after setup.
 at 
 com.cloud.hypervisor.xen.discoverer.XcpServerDiscoverer.processConnect(XcpServerDiscoverer.java:657)
 at 
 com.cloud.agent.manager.AgentManagerImpl.notifyMonitorsOfConnection(AgentManagerImpl.java:514)
 at 
 com.cloud.agent.manager.AgentManagerImpl.handleDirectConnectAgent(AgentManagerImpl.java:1428)
 at 
 com.cloud.resource.ResourceManagerImpl.createHostAndAgent(ResourceManagerImpl.java:1767)
 at 
 

[jira] [Commented] (CLOUDSTACK-6465) vmware.reserve.mem is missing from cluster level settings

2014-09-16 Thread Harikrishna Patnala (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135224#comment-14135224
 ] 

Harikrishna Patnala commented on CLOUDSTACK-6465:
-

Yes, this issue is still there, and my patch on the review board fixes it.


 vmware.reserve.mem is missing from cluster level settings 
 --

 Key: CLOUDSTACK-6465
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6465
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.4.0
Reporter: Harikrishna Patnala
Assignee: Harikrishna Patnala
Priority: Critical
 Fix For: 4.4.0


 vmware.reserve.mem is missing from cluster level settings 
 steps
 ===
 infrastructure -> cluster -> select a cluster -> settings: you should see 
 vmware.reserve.mem 
 DB:
 ===
 mysql> select name from configuration where scope = 'cluster';
 +--+
 | name |
 +--+
 | cluster.cpu.allocated.capacity.disablethreshold  |
 | cluster.cpu.allocated.capacity.notificationthreshold |
 | cluster.memory.allocated.capacity.disablethreshold   |
 | cluster.memory.allocated.capacity.notificationthreshold  |
 | cluster.storage.allocated.capacity.notificationthreshold |
 | cluster.storage.capacity.notificationthreshold   |
 | cpu.overprovisioning.factor  |
 | mem.overprovisioning.factor  |
 | vmware.reserve.cpu   |
 | xen.vm.vcpu.max  |
 +--+
 10 rows in set (0.00 sec)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-5469) Snapshot creation fails with following exception - Failed to backup snapshot: qemu-img: Could not delete snapshot '89eced14-9121-44a7-bb97-26b567795726': -2 (No s

2014-09-16 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135229#comment-14135229
 ] 

Rohit Yadav commented on CLOUDSTACK-5469:
-

Ping, any update on this?

 Snapshot creation fails with following exception - Failed to backup 
 snapshot: qemu-img: Could not delete snapshot 
 '89eced14-9121-44a7-bb97-26b567795726': -2 (No such file or directory)
 --

 Key: CLOUDSTACK-5469
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5469
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Build from 4.3
Reporter: Sangeetha Hariharan
Assignee: edison su
Priority: Critical
 Fix For: 4.4.0

 Attachments: deletesnapshot.rar


 Set up: 
 Advanced Zone with 2 KVM (RHEL 6.3) hosts.
  2 NFS secondary stores set up. 
 Steps to reproduce the problem:
 1. Deploy 5 VMs on each of the hosts with a 10 GB ROOT volume size, so we 
 start with 10 VMs. 
 2. Start concurrent snapshots for the ROOT volumes of all the VMs. 
 One of the secondary stores (ss1) had its NFS server down for 1.5 hours. 
 The other secondary store (ss2) was always reachable. 
 Snapshot tasks that went to ss1 succeeded after the NFS server was 
 brought up (they temporarily halted while the NFS server was down and 
 resumed when it became available). 
 The first set of snapshot tasks that went to ss2 all succeeded.
 But a few of the next hourly snapshot tasks failed with the following 
 exception:
 2013-12-11 16:33:22,427 DEBUG [c.c.s.s.SnapshotManagerImpl] 
 (Job-Executor-64:ctx-9c70ad77 ctx-3d959fa6) Failed to create snapshot
 com.cloud.utils.exception.CloudRuntimeException: Failed to backup snapshot: 
 qemu-img: Could not delete snapshot '89eced14-9121-44a7-bb97-26b567795726': 
 -2 (No such file or directory)Failed to delete snapshot 
 89eced14-9121-44a7-bb97-26b567795726 for path 
 /mnt/c20ea198-e8ca-33c3-9f11-e361ec9b5532/71a5dce2-da7c-4692-8f25-ba37e5296886
  at org.apache.cloudstack.storage.snapshot.SnapshotServiceImpl.backupSnapshot(SnapshotServiceImpl.java:275)
  at org.apache.cloudstack.storage.snapshot.XenserverSnapshotStrategy.backupSnapshot(XenserverSnapshotStrategy.java:135)
  at org.apache.cloudstack.storage.snapshot.XenserverSnapshotStrategy.takeSnapshot(XenserverSnapshotStrategy.java:294)
  at com.cloud.storage.snapshot.SnapshotManagerImpl.takeSnapshot(SnapshotManagerImpl.java:951)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:601)
  at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
  at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
  at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
  at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
  at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
  at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
  at $Proxy161.takeSnapshot(Unknown Source)
  at org.apache.cloudstack.storage.volume.VolumeServiceImpl.takeSnapshot(VolumeServiceImpl.java:1341)
  at com.cloud.storage.VolumeApiServiceImpl.takeSnapshot(VolumeApiServiceImpl.java:1461)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:601)
  at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
 The copy to the secondary succeeded. The failure happens after this. 
 [root@Rack3Host5 118]# ls -ltr
 total 10002852
 -rw-r--r--. 1 root root 3637903360 Dec 11 20:33 
 89eced14-9121-44a7-bb97-26b567795726
 -rw-r--r--. 1 root root 3638755328 Dec 11 21:37 
 b38d93db-4c14-45a7-9274-639ad95a3f29
 -rw-r--r--. 1 root root 2956619776 Dec 11 22:24 
 452c8841-2025-41da-b6ec-49cea2a49da8
 [root@Rack3Host5 118]#
 Following are the volumes which are in the CreatedOnPrimary state for which the 
 failure occurred. 
 | 113 | 

[jira] [Commented] (CLOUDSTACK-6465) vmware.reserve.mem is missing from cluster level settings

2014-09-16 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135240#comment-14135240
 ] 

Rohit Yadav commented on CLOUDSTACK-6465:
-

Can you get this reviewed by any of our VMware maintainers, such as Koushik?

 vmware.reserve.mem is missing from cluster level settings 
 --

 Key: CLOUDSTACK-6465
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6465
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.4.0
Reporter: Harikrishna Patnala
Assignee: Harikrishna Patnala
Priority: Critical
 Fix For: 4.4.0


 vmware.reserve.mem is missing from cluster level settings 
 steps
 ===
 Infrastructure -> Cluster -> select a cluster -> Settings: you should see 
 vmware.reserve.mem 
 DB:
 ===
 mysql> select name from configuration where scope = 'cluster';
 +--+
 | name |
 +--+
 | cluster.cpu.allocated.capacity.disablethreshold  |
 | cluster.cpu.allocated.capacity.notificationthreshold |
 | cluster.memory.allocated.capacity.disablethreshold   |
 | cluster.memory.allocated.capacity.notificationthreshold  |
 | cluster.storage.allocated.capacity.notificationthreshold |
 | cluster.storage.capacity.notificationthreshold   |
 | cpu.overprovisioning.factor  |
 | mem.overprovisioning.factor  |
 | vmware.reserve.cpu   |
 | xen.vm.vcpu.max  |
 +--+
 10 rows in set (0.00 sec)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7049) APIs return sensitive information which CloudStack does not manage and which caller did not request

2014-09-16 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135246#comment-14135246
 ] 

Rohit Yadav commented on CLOUDSTACK-7049:
-

Every API since at least 4.3.0 has requestHasSensitiveInfo and 
responseHasSensitiveInfo flags, which can be set to true as appropriate to make 
sure CloudStack does not leak sensitive output in logs. As far as individual 
APIs are concerned, what you've presented is a very valid issue, but can you 
share or list the APIs which do that so we can have a look and fix them? 
Historically, we kept adding stuff to the API to suit the needs of the UI, so 
we don't have a very clean RESTful API implementation.

 APIs return sensitive information which CloudStack does not manage and which 
 caller did not request
 ---

 Key: CLOUDSTACK-7049
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7049
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: API
Affects Versions: 4.4.0
Reporter: Demetrius Tsitrelis
Priority: Critical
  Labels: security

 CloudStack stores sensitive information such as passwords and keys.  Some of 
 this information it creates such as the users’ secret keys.  Admins configure 
 CloudStack with the other types of sensitive information such as host 
 passwords, S3 secret keys, etc.
  
 There are two problems with the way the API returns sensitive information:
 1)  Many of the APIs return the entire state of the modified object on 
 which they operate.  For example, if the API to remove a NIC from a VM is 
 called then the response returns the VM password even though the caller did 
 not ask for it.
 2)  Some of the APIs return sensitive information which is not created 
 nor managed by CloudStack.  For instance, the listS3s API returns the S3 
 secret key.  There doesn’t seem to be any legitimate use case for returning 
 this category of information; this type of sensitive data could go into 
 CloudStack for its internal use but should not come out via the API (i.e., 
 CloudStack is not a password manager app!).
 Substantial changes cannot be made to the API without bumping the API 
 version.  A near-term mitigation for these problems then is simply to return 
 empty strings in the response for the sensitive information which is not 
 requested or which is not managed by CloudStack.  So for the 
 removeNicFromVirtualMachine API, for instance, return an empty string for the 
 password value.  A caller could still use getVMPassword to obtain the 
 password if he needed it since it is CloudStack which generated the VM 
 password.  For the S3 case, ALWAYS return an empty value for the S3 secret 
 key since that key is managed by Amazon and not CloudStack.
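 The near-term mitigation described above can be sketched as a response 
 filter. A hedged illustration (the field names and the filter function are 
 hypothetical simplifications, not the actual CloudStack response code):

 ```python
 # Secrets CloudStack generates itself (recoverable via dedicated APIs such
 # as getVMPassword) vs. externally managed secrets that should never be
 # returned by any listing API at all.
 CLOUDSTACK_MANAGED = {"password"}
 EXTERNALLY_MANAGED = {"secretkey"}

 def scrub_response(response):
     """Blank sensitive values in an API response dict before returning it."""
     scrubbed = dict(response)
     for field in CLOUDSTACK_MANAGED | EXTERNALLY_MANAGED:
         if field in scrubbed:
             scrubbed[field] = ""
     return scrubbed

 # A listS3s-style entry: the secret key is blanked, other fields survive.
 s3_entry = {"id": "s3-1", "accesskey": "AKIAEXAMPLE", "secretkey": "topsecret"}
 print(scrub_response(s3_entry))
 ```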



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-6748) Creating an instance with user-data when network doesn't support user-data should error

2014-09-16 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135247#comment-14135247
 ] 

Rohit Yadav commented on CLOUDSTACK-6748:
-

This is open for discussion: do you propose we fail and return an error, or 
start the VM but warn the user?

 Creating an instance with user-data when network doesn't support user-data 
 should error
 ---

 Key: CLOUDSTACK-6748
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6748
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: API
Affects Versions: 4.4.0
Reporter: Harikrishna Patnala
Assignee: Harikrishna Patnala
Priority: Critical
 Fix For: 4.4.0


 While deploying a VM we provide user-data in order to configure appliance 
 instances. Right now we do not throw an error if user data is sent while 
 deploying the VM and the network does not support userdata. 
 We should not allow sending userdata when the network does not support the 
 UserData service, since this may create eventual problems. We should fail fast 
 and error the creation of the instance if user-data is supplied when the guest 
 network doesn't support this capability.
 The same applies to ssh keys and password-enabled templates. For example, if a 
 VM is deployed using a password-enabled template, we cannot send the password 
 to the router if the network does not support the userdata service. 
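 The fail-fast behaviour proposed above can be sketched as a pre-deployment 
 check. A hedged illustration (the service name, parameters, and exception are 
 hypothetical simplifications of the deploy path):

 ```python
 class InvalidParameterValueError(Exception):
     """Raised when deploy parameters conflict with network capabilities."""

 def validate_deploy_params(network_services, userdata=None, ssh_keypair=None,
                            password_enabled_template=False):
     """Fail fast if user-data style parameters are sent to a network that
     does not offer the UserData service."""
     if "UserData" in network_services:
         return  # network can deliver userdata, keys, and passwords
     if userdata is not None:
         raise InvalidParameterValueError(
             "user-data supplied but network does not support UserData")
     if ssh_keypair is not None:
         raise InvalidParameterValueError(
             "ssh keypair supplied but network does not support UserData")
     if password_enabled_template:
         raise InvalidParameterValueError(
             "password-enabled template but network does not support UserData")
 ```

 Calling this at the top of the deploy path turns the silent misconfiguration 
 into an immediate API error instead of a VM that boots without its userdata.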



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-6696) UI: createAccount under sub-domain is created with ROOT domain id

2014-09-16 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav updated CLOUDSTACK-6696:

Attachment: Screen Shot 2014-09-16 at 12.28.06 pm 1.png

Could not reproduce it on the latest 4.4 branch. When you create a new account, 
the form gives you an option to select a domain.

 UI: createAccount under sub-domain is created with ROOT domain id
 -

 Key: CLOUDSTACK-6696
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6696
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: API
Affects Versions: 4.4.0
Reporter: Parth Jagirdar
Assignee: Jessica Wang
Priority: Critical
 Attachments: Screen Shot 2014-05-16 at 10.48.49 PM.png, Screen Shot 
 2014-09-16 at 12.28.06 pm 1.png


 Steps::
 Create subdomains D1 under ROOT.
 - Go to domains and under ROOT create a new domain D1.
 Attempt to create a domain admin user under D1 through UI.
 - Go to domains, expand domains and go to D1.
 - Select view account from top right of the window.
 - Observe there are no accounts.
 - Create a new account of type Admin.
 Verify this::
 -- Click on accounts and you will be able to see this new account with domain 
 ROOT. 
 As a side effect listAccounts with a sub-domainid will return nothing as all 
 the accounts are created using ROOT Domain ID.
 A Screen is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-6696) UI: createAccount under sub-domain is created with ROOT domain id

2014-09-16 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav closed CLOUDSTACK-6696.
---
Resolution: Cannot Reproduce

 UI: createAccount under sub-domain is created with ROOT domain id
 -

 Key: CLOUDSTACK-6696
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6696
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: API
Affects Versions: 4.4.0
Reporter: Parth Jagirdar
Assignee: Jessica Wang
Priority: Critical
 Attachments: Screen Shot 2014-05-16 at 10.48.49 PM.png, Screen Shot 
 2014-09-16 at 12.28.06 pm 1.png


 Steps::
 Create subdomains D1 under ROOT.
 - Go to domains and under ROOT create a new domain D1.
 Attempt to create a domain admin user under D1 through UI.
 - Go to domains, expand domains and go to D1.
 - Select view account from top right of the window.
 - Observe there are no accounts.
 - Create a new account of type Admin.
 Verify this::
 -- Click on accounts and you will be able to see this new account with domain 
 ROOT. 
 As a side effect listAccounts with a sub-domainid will return nothing as all 
 the accounts are created using ROOT Domain ID.
 A Screen is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-6748) Creating an instance with user-data when network doesn't support user-data should error

2014-09-16 Thread Harikrishna Patnala (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135271#comment-14135271
 ] 

Harikrishna Patnala commented on CLOUDSTACK-6748:
-

The fix I made returns an error when userdata or an sshkey (and also a password 
for password-enabled templates) is sent as a parameter and the network does not 
support the userdata service. 

 Creating an instance with user-data when network doesn't support user-data 
 should error
 ---

 Key: CLOUDSTACK-6748
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6748
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: API
Affects Versions: 4.4.0
Reporter: Harikrishna Patnala
Assignee: Harikrishna Patnala
Priority: Critical
 Fix For: 4.4.0


 While deploying a VM we provide user-data in order to configure appliance 
 instances. Right now we do not throw an error if user data is sent while 
 deploying the VM and the network does not support userdata. 
 We should not allow sending userdata when the network does not support the 
 UserData service, since this may create eventual problems. We should fail fast 
 and error the creation of the instance if user-data is supplied when the guest 
 network doesn't support this capability.
 The same applies to ssh keys and password-enabled templates. For example, if a 
 VM is deployed using a password-enabled template, we cannot send the password 
 to the router if the network does not support the userdata service. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7527) XenServer heartbeat-script: make it reboot faster (when fencing)

2014-09-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135275#comment-14135275
 ] 

ASF subversion and git services commented on CLOUDSTACK-7527:
-

Commit 7a694d4deb0d74a18c7bac8bfffa8faf6fa5d835 in cloudstack's branch 
refs/heads/hotfix/4.4/CLOUDSTACK-7184 from [~dahn]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=7a694d4 ]

CLOUDSTACK-7527 reboot faster by writing to /proc/sysrq-trigger

 XenServer heartbeat-script: make it reboot faster (when fencing)
 

 Key: CLOUDSTACK-7527
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7527
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: XenServer
Affects Versions: 4.3.0, 4.4.0
Reporter: Remi Bergsma
Assignee: Daan Hoogland
Priority: Minor

 xenheartbeat.sh:
 I've seen the 'reboot' command hang, even though it has the force option 
 specified (last line of the script). Wouldn't it be better to invoke it like 
 this:
 echo b > /proc/sysrq-trigger
 Tested it, starts boot sequence immediately.
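 The suggested command writes the sysrq 'b' key, which reboots immediately 
 without syncing or unmounting. A minimal sketch of that fencing step (the 
 trigger path is parameterised here purely so it can be exercised outside a 
 real host):

 ```python
 def fence_reboot(trigger_path="/proc/sysrq-trigger"):
     """Force an immediate reboot via the Linux magic-sysrq interface.

     Writing 'b' reboots the machine at once, without syncing disks or
     unmounting filesystems -- which is what fencing wants, since the goal
     is to stop all I/O from this host as fast as possible.
     """
     with open(trigger_path, "w") as f:
         f.write("b")
 ```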



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7184) HA should wait for at least 'xen.heartbeat.interval' sec before starting HA on vm's when host is marked down

2014-09-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135276#comment-14135276
 ] 

ASF subversion and git services commented on CLOUDSTACK-7184:
-

Commit b82f27be4150e70c017ed2597137319daa79560b in cloudstack's branch 
refs/heads/hotfix/4.4/CLOUDSTACK-7184 from [~dahn]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=b82f27b ]

CLOUDSTACK-7184 retry-wait loop config to deal with network glitches


 HA should wait for at least 'xen.heartbeat.interval' sec before starting HA 
 on vm's when host is marked down
 

 Key: CLOUDSTACK-7184
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7184
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Hypervisor Controller, Management Server, XenServer
Affects Versions: 4.3.0, 4.4.0, 4.5.0
 Environment: CloudStack 4.3 with XenServer 6.2 hypervisors
Reporter: Remi Bergsma
Assignee: Daan Hoogland
Priority: Blocker

 Hypervisor got isolated for 30 seconds due to a network issue. CloudStack did 
 discover this and marked the host as down, and immediately started HA. Just 
 18 seconds later the hypervisor returned and we ended up with 5 vm's that 
 were running on two hypervisors at the same time. 
 This, of course, resulted in file system corruption and the loss of the vm's. 
 One side of the story is why XenServer allowed this to happen (will not 
 bother you with this one). The CloudStack side of the story: HA should only 
 start after at least xen.heartbeat.interval seconds. If the host is down long 
 enough, the Xen heartbeat script will fence the hypervisor and prevent 
 corruption. If it is not down long enough, nothing should happen.
 Logs (short):
 2014-07-25 05:03:28,596 WARN  [c.c.a.m.DirectAgentAttache] 
 (DirectAgent-122:ctx-690badc5) Unable to get current status on 505(mccpvmXX)
 .
 2014-07-25 05:03:31,920 ERROR [c.c.a.m.AgentManagerImpl] 
 (AgentTaskPool-10:ctx-11b9af3e) Host is down: 505-mccpvmXX.  Starting HA on 
 the VMs
 .
 2014-07-25 05:03:49,655 DEBUG [c.c.h.Status] (ClusteredAgentManager 
 Timer:ctx-0e00979c) Transition:[Resource state = Enabled, Agent event = 
 AgentDisconnected, Host id = 505, name = mccpvmXX]
 cs marks host down: 2014-07-25  05:03:31,920
 cs marks host up: 2014-07-25  05:03:49,655
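 The requested guard can be sketched as a grace-period check in the HA path 
 (the function and timings are illustrative, not the actual AgentManagerImpl 
 logic):

 ```python
 def should_start_ha(last_seen_ms, now_ms, heartbeat_interval_s):
     """Only start HA once the host has been unreachable for at least the
     heartbeat interval, so the Xen heartbeat script has had a chance to
     fence the host and prevent split-brain disk corruption."""
     return (now_ms - last_seen_ms) >= heartbeat_interval_s * 1000

 # The log above shows an ~18 s outage; with a 60 s heartbeat interval the
 # guard would have suppressed HA and avoided the double-running VMs.
 down_ms = 0
 up_ms = 18_000
 print(should_start_ha(down_ms, up_ms, 60))
 # -> False
 ```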



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7184) HA should wait for at least 'xen.heartbeat.interval' sec before starting HA on vm's when host is marked down

2014-09-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135274#comment-14135274
 ] 

ASF subversion and git services commented on CLOUDSTACK-7184:
-

Commit 4d065b9a3a336d59902c266202c1094509c007d2 in cloudstack's branch 
refs/heads/hotfix/4.4/CLOUDSTACK-7184 from [~dahn]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=4d065b9 ]

CLOUDSTACK-7184: xenheartbeat gets passed timeout and interval

 HA should wait for at least 'xen.heartbeat.interval' sec before starting HA 
 on vm's when host is marked down
 

 Key: CLOUDSTACK-7184
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7184
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Hypervisor Controller, Management Server, XenServer
Affects Versions: 4.3.0, 4.4.0, 4.5.0
 Environment: CloudStack 4.3 with XenServer 6.2 hypervisors
Reporter: Remi Bergsma
Assignee: Daan Hoogland
Priority: Blocker

 Hypervisor got isolated for 30 seconds due to a network issue. CloudStack did 
 discover this and marked the host as down, and immediately started HA. Just 
 18 seconds later the hypervisor returned and we ended up with 5 vm's that 
 were running on two hypervisors at the same time. 
 This, of course, resulted in file system corruption and the loss of the vm's. 
 One side of the story is why XenServer allowed this to happen (will not 
 bother you with this one). The CloudStack side of the story: HA should only 
 start after at least xen.heartbeat.interval seconds. If the host is down long 
 enough, the Xen heartbeat script will fence the hypervisor and prevent 
 corruption. If it is not down long enough, nothing should happen.
 Logs (short):
 2014-07-25 05:03:28,596 WARN  [c.c.a.m.DirectAgentAttache] 
 (DirectAgent-122:ctx-690badc5) Unable to get current status on 505(mccpvmXX)
 .
 2014-07-25 05:03:31,920 ERROR [c.c.a.m.AgentManagerImpl] 
 (AgentTaskPool-10:ctx-11b9af3e) Host is down: 505-mccpvmXX.  Starting HA on 
 the VMs
 .
 2014-07-25 05:03:49,655 DEBUG [c.c.h.Status] (ClusteredAgentManager 
 Timer:ctx-0e00979c) Transition:[Resource state = Enabled, Agent event = 
 AgentDisconnected, Host id = 505, name = mccpvmXX]
 cs marks host down: 2014-07-25  05:03:31,920
 cs marks host up: 2014-07-25  05:03:49,655



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-7560) Usage Event is not generated when VM state is transited from RUNNING to STOPPED directly

2014-09-16 Thread Damodar Reddy T (JIRA)
Damodar Reddy T created CLOUDSTACK-7560:
---

 Summary: Usage Event is not generated when VM state is transited 
from RUNNING to STOPPED directly
 Key: CLOUDSTACK-7560
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7560
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Usage
Affects Versions: 4.5.0
Reporter: Damodar Reddy T
 Fix For: Future


When the Management server is not able to detect the VM state, after a specific 
time it will transition the VM from RUNNING to STOPPED directly, without the 
intermediate STOPPING state. This causes the usage event not to be generated, 
and if you then DESTROY the VM it will still generate usage records for a 
period of 24 hrs every day.
Logs are attached below...

2014-05-15 16:13:25,197 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
(DirectAgent-117:ctx-775a3f34) Run missing VM report. current time: 
1400163205197
2014-05-15 16:13:25,197 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
(DirectAgent-117:ctx-775a3f34) Detected missing VM. host: 1, vm id: 106, power 
state: PowerReportMissing, last state update: 1400162537000
2014-05-15 16:13:25,197 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
(DirectAgent-117:ctx-775a3f34) vm id: 106 - time since last state 
update(668197ms) has passed graceful period
2014-05-15 16:13:25,202 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
(DirectAgent-117:ctx-775a3f34) VM state report is updated. host: 1, vm id: 106, 
power state: PowerReportMissing
2014-05-15 16:13:25,207 INFO  [c.c.v.VirtualMachineManagerImpl] 
(DirectAgent-117:ctx-775a3f34) VM i-5-106-VM is at Running and we received a 
power-off report while there is no pending jobs on it
2014-05-15 16:13:25,210 DEBUG [c.c.a.t.Request] (DirectAgent-117:ctx-775a3f34) 
Seq 1-1971781673: Sending  { Cmd , MgmtId: 20750978301280, via: 1(XSMASTER-5), 
Ver: v1, Flags: 100111, 
[{com.cloud.agent.api.StopCommand:{isProxy:false,executeInSequence:true,checkBeforeCleanup:true,vmName:i-5-106-VM,wait:0}}]
 }
2014-05-15 16:13:25,210 DEBUG [c.c.a.t.Request] (DirectAgent-117:ctx-775a3f34) 
Seq 1-1971781673: Executing:  { Cmd , MgmtId: 20750978301280, via: 
1(XSMASTER-5), Ver: v1, Flags: 100111, 
[{com.cloud.agent.api.StopCommand:{isProxy:false,executeInSequence:true,checkBeforeCleanup:true,vmName:i-5-106-VM,wait:0}}]
 }
2014-05-15 16:13:25,210 DEBUG [c.c.a.m.DirectAgentAttache] 
(DirectAgent-18:ctx-7180bb9c) Seq 1-1971781673: Executing request
2014-05-15 16:13:25,249 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-18:ctx-7180bb9c) 9. The VM i-5-106-VM is in Stopping state
2014-05-15 16:13:25,293 INFO  [c.c.h.x.r.XenServer56Resource] 
(DirectAgent-18:ctx-7180bb9c) Catch com.xensource.xenapi.Types$VifInUse: failed 
to destory VLAN eth0 on host c4080dac-f034-452f-9203-8e69d5512315 due to 
Network has active VIFs
2014-05-15 16:13:25,294 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-18:ctx-7180bb9c) 10. The VM i-5-106-VM is in Stopped state
2014-05-15 16:13:25,294 DEBUG [c.c.a.m.DirectAgentAttache] 
(DirectAgent-18:ctx-7180bb9c) Seq 1-1971781673: Response Received:
2014-05-15 16:13:25,294 DEBUG [c.c.a.t.Request] (DirectAgent-18:ctx-7180bb9c) 
Seq 1-1971781673: Processing:  { Ans: , MgmtId: 20750978301280, via: 1, Ver: 
v1, Flags: 110, 
[{com.cloud.agent.api.StopAnswer:{platform:viridian:true;acpi:1;apic:true;pae:true;nx:true,result:true,details:Stop
 VM i-5-106-VM Succeed,wait:0}}] }
2014-05-15 16:13:25,294 DEBUG [c.c.a.t.Request] (DirectAgent-117:ctx-775a3f34) 
Seq 1-1971781673: Received:  { Ans: , MgmtId: 20750978301280, via: 1, Ver: v1, 
Flags: 110, { StopAnswer } }
2014-05-15 16:13:25,301 DEBUG [c.c.a.m.AgentAttache] 
(DirectAgent-18:ctx-7180bb9c) Seq 1-1971781673: No more commands found
2014-05-15 16:13:25,365 DEBUG [c.c.c.CapacityManagerImpl] 
(DirectAgent-117:ctx-775a3f34) VM state transitted from :Running to Stopped 
with event: FollowAgentPowerOffReportvm's original host id: 1 new host id: null 
host id before state transition: 1
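The log above ends with a direct Running -> Stopped transition (event 
FollowAgentPowerOffReport); if usage events are emitted only on the 
Stopping -> Stopped edge, that transition produces none. A hedged sketch of 
the fix, emitting on any edge that ends in Stopped (the function and event 
name are illustrative, not the actual usage code):

```python
def usage_events_for_transition(old_state, new_state):
    """Emit a VM.STOP usage event whenever a VM ends up Stopped,
    regardless of whether it passed through the Stopping state."""
    events = []
    if new_state == "Stopped" and old_state != "Stopped":
        events.append("VM.STOP")
    return events

# Direct Running -> Stopped now produces the event, so a later DESTROY
# does not keep accruing usage for a VM that already stopped.
print(usage_events_for_transition("Running", "Stopped"))
# -> ['VM.STOP']
```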




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7527) XenServer heartbeat-script: make it reboot faster (when fencing)

2014-09-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135584#comment-14135584
 ] 

ASF subversion and git services commented on CLOUDSTACK-7527:
-

Commit d04f59a30d130dbb83f162af6e67334fe2c9cef0 in cloudstack's branch 
refs/heads/hotfix/4.4/CLOUDSTACK-7184 from [~dahn]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=d04f59a ]

CLOUDSTACK-7527 reboot faster by writing to /proc/sysrq-trigger


 XenServer heartbeat-script: make it reboot faster (when fencing)
 

 Key: CLOUDSTACK-7527
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7527
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: XenServer
Affects Versions: 4.3.0, 4.4.0
Reporter: Remi Bergsma
Assignee: Daan Hoogland
Priority: Minor

 xenheartbeat.sh:
 I've seen the 'reboot' command hang, even though it has the force option 
 specified (last line of the script). Wouldn't it be better to invoke it like 
 this:
 echo b > /proc/sysrq-trigger
 Tested it, starts boot sequence immediately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7184) HA should wait for at least 'xen.heartbeat.interval' sec before starting HA on vm's when host is marked down

2014-09-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135589#comment-14135589
 ] 

ASF subversion and git services commented on CLOUDSTACK-7184:
-

Commit 4d065b9a3a336d59902c266202c1094509c007d2 in cloudstack's branch 
refs/heads/4.4 from [~dahn]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=4d065b9 ]

CLOUDSTACK-7184: xenheartbeat gets passed timeout and interval

 HA should wait for at least 'xen.heartbeat.interval' sec before starting HA 
 on vm's when host is marked down
 

 Key: CLOUDSTACK-7184
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7184
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Hypervisor Controller, Management Server, XenServer
Affects Versions: 4.3.0, 4.4.0, 4.5.0
 Environment: CloudStack 4.3 with XenServer 6.2 hypervisors
Reporter: Remi Bergsma
Assignee: Daan Hoogland
Priority: Blocker

 Hypervisor got isolated for 30 seconds due to a network issue. CloudStack did 
 discover this and marked the host as down, and immediately started HA. Just 
 18 seconds later the hypervisor returned and we ended up with 5 vm's that 
 were running on two hypervisors at the same time. 
 This, of course, resulted in file system corruption and the loss of the vm's. 
 One side of the story is why XenServer allowed this to happen (will not 
 bother you with this one). The CloudStack side of the story: HA should only 
 start after at least xen.heartbeat.interval seconds. If the host is down long 
 enough, the Xen heartbeat script will fence the hypervisor and prevent 
 corruption. If it is not down long enough, nothing should happen.
 Logs (short):
 2014-07-25 05:03:28,596 WARN  [c.c.a.m.DirectAgentAttache] 
 (DirectAgent-122:ctx-690badc5) Unable to get current status on 505(mccpvmXX)
 .
 2014-07-25 05:03:31,920 ERROR [c.c.a.m.AgentManagerImpl] 
 (AgentTaskPool-10:ctx-11b9af3e) Host is down: 505-mccpvmXX.  Starting HA on 
 the VMs
 .
 2014-07-25 05:03:49,655 DEBUG [c.c.h.Status] (ClusteredAgentManager 
 Timer:ctx-0e00979c) Transition:[Resource state = Enabled, Agent event = 
 AgentDisconnected, Host id = 505, name = mccpvmXX]
 cs marks host down: 2014-07-25  05:03:31,920
 cs marks host up: 2014-07-25  05:03:49,655



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7527) XenServer heartbeat-script: make it reboot faster (when fencing)

2014-09-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135590#comment-14135590
 ] 

ASF subversion and git services commented on CLOUDSTACK-7527:
-

Commit d04f59a30d130dbb83f162af6e67334fe2c9cef0 in cloudstack's branch 
refs/heads/4.4 from [~dahn]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=d04f59a ]

CLOUDSTACK-7527 reboot faster by writing to /proc/sysrq-trigger


 XenServer heartbeat-script: make it reboot faster (when fencing)
 

 Key: CLOUDSTACK-7527
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7527
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: XenServer
Affects Versions: 4.3.0, 4.4.0
Reporter: Remi Bergsma
Assignee: Daan Hoogland
Priority: Minor

 xenheartbeat.sh:
 I've seen the 'reboot' command hang, even though it has the force option 
 specified (last line of the script). Wouldn't it be better to invoke it like 
 this:
 echo b > /proc/sysrq-trigger
 Tested it, starts boot sequence immediately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-7354) [Automation] test_scale_vm fails with VMWare

2014-09-16 Thread Alex Brett (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Brett resolved CLOUDSTACK-7354.

Resolution: Fixed

Resolving this one as the change is in and the test passing :)

 [Automation] test_scale_vm fails with VMWare
 

 Key: CLOUDSTACK-7354
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7354
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation, VMware
Affects Versions: Future, 4.5.0
Reporter: John Dilley
Assignee: John Dilley
 Fix For: 4.5.0


 test_scale_vm fails on VMWare, complaining about the lack of tools.
 The VM property isdynamicallyscalable needs to be set to true, which the 
 test case does, but unfortunately only after the Scale VM command has been 
 attempted.





[jira] [Commented] (CLOUDSTACK-7546) [LXC] agent addition to MS is failing if we stop service NetworkManager

2014-09-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135603#comment-14135603
 ] 

ASF subversion and git services commented on CLOUDSTACK-7546:
-

Commit 75d01971e8d5f27f65ba0b98fc6d9aba057757e7 in cloudstack's branch 
refs/heads/master from [~kishan]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=75d0197 ]

CLOUDSTACK-7546: cloudstack-setup-agent considers distro as RHEL5 if no 
conditions match. Add check to identify RHEL7 distro and consider it as RHEL6. 
If there is anything specific required for RHEL7, it can be added later
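
The distro check the commit describes can be sketched roughly as follows; this is a minimal sketch under stated assumptions (`identify_distro` is a hypothetical helper parsing an /etc/redhat-release style string, not the actual cloudstack-setup-agent code):

```python
import re


def identify_distro(release_text):
    """Map an /etc/redhat-release style string to a known distro family.

    Hypothetical mapping: RHEL7 is treated as RHEL6 until something
    RHEL7-specific is required, mirroring the behaviour the commit
    describes; older releases fall back to RHEL5.
    """
    m = re.search(r"release (\d+)", release_text)
    if not m:
        return "Unknown"
    major = int(m.group(1))
    if major >= 7:
        return "RHEL6"  # consider RHEL7 as RHEL6 for now
    if major == 6:
        return "RHEL6"
    return "RHEL5"
```

For example, `identify_distro("CentOS Linux release 7.0.1406 (Core)")` returns "RHEL6".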


 [LXC] agent addition to MS is failing if we stop service NetworkManager 
 

 Key: CLOUDSTACK-7546
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7546
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: KVM
Affects Versions: 4.5.0
Reporter: shweta agarwal
Assignee: Kishan Kavala
Priority: Blocker
 Fix For: 4.5.0


 Repro Steps:
 1. Install MS and agent on two different hosts
 2. Stop the NetworkManager service on the host and also chkconfig NetworkManager off
 3. Create an Advanced zone with LXC
 Bug:
 Agent addition to CS will fail
 agent log shows:
 2014-09-15 16:38:26,027 ERROR [cloud.agent.AgentShell] (main:null) Unable to 
 start agent:
 com.cloud.utils.exception.CloudRuntimeException: Failed to connect socket to 
 '/var/run/libvirt/libvirt-sock': No such file or directory
 at 
 com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:830)
 at com.cloud.agent.Agent.init(Agent.java:163)
 at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:401)
 at 
 com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:371)
 at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:355)
 at com.cloud.agent.AgentShell.start(AgentShell.java:465)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
 2014-09-15 16:39:37,406 INFO  [cloud.agent.AgentShell] (main:null) Agent 
 started
 Additional info:
 Checked libvirtd status once the agent failed to start; the service was stopped 
 in the above scenario.
 But if we don't stop the NetworkManager service before adding the agent to CS, 
 agent addition succeeds.





[jira] [Resolved] (CLOUDSTACK-7546) [LXC] agent addition to MS is failing if we stop service NetworkManager

2014-09-16 Thread Kishan Kavala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishan Kavala resolved CLOUDSTACK-7546.
---
Resolution: Fixed

 [LXC] agent addition to MS is failing if we stop service NetworkManager 
 

 Key: CLOUDSTACK-7546
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7546
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: KVM
Affects Versions: 4.5.0
Reporter: shweta agarwal
Assignee: Kishan Kavala
Priority: Blocker
 Fix For: 4.5.0


 Repro Steps:
 1. Install MS and agent on two different hosts
 2. Stop the NetworkManager service on the host and also chkconfig NetworkManager off
 3. Create an Advanced zone with LXC
 Bug:
 Agent addition to CS will fail
 agent log shows:
 2014-09-15 16:38:26,027 ERROR [cloud.agent.AgentShell] (main:null) Unable to 
 start agent:
 com.cloud.utils.exception.CloudRuntimeException: Failed to connect socket to 
 '/var/run/libvirt/libvirt-sock': No such file or directory
 at 
 com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:830)
 at com.cloud.agent.Agent.init(Agent.java:163)
 at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:401)
 at 
 com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:371)
 at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:355)
 at com.cloud.agent.AgentShell.start(AgentShell.java:465)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
 2014-09-15 16:39:37,406 INFO  [cloud.agent.AgentShell] (main:null) Agent 
 started
 Additional info:
 Checked libvirtd status once the agent failed to start; the service was stopped 
 in the above scenario.
 But if we don't stop the NetworkManager service before adding the agent to CS, 
 agent addition succeeds.





[jira] [Created] (CLOUDSTACK-7561) UI: After creating a new account, the Add Account dialog remains open

2014-09-16 Thread Gabor Apati-Nagy (JIRA)
Gabor Apati-Nagy created CLOUDSTACK-7561:


 Summary: UI: After creating a new account, the Add Account 
dialog remains open
 Key: CLOUDSTACK-7561
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7561
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: UI
Affects Versions: 4.5.0
Reporter: Gabor Apati-Nagy
Assignee: Gabor Apati-Nagy


Go to Home > Accounts. 
Click Add Account to add a new account. 
Fill out the required fields. 
Then click Add to create a new Account.

Result: The account is created, but the Add Account dialog remains open and 
the Accounts view is not refreshed.

Running in Chrome, the following error is observed:
Uncaught TypeError: undefined is not a function, accountsWizard.js:260 (see 
attached screenshot).

Note: Clicking the Add button again will cause a "User already exists" error.





[jira] [Updated] (CLOUDSTACK-7561) UI: After creating a new account, the Add Account dialog remains open

2014-09-16 Thread Gabor Apati-Nagy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Apati-Nagy updated CLOUDSTACK-7561:
-
Description: 
Go to Home > Accounts. 
Click Add Account to add a new account. 
Fill out the required fields. 
Then click Add to create a new Account.

Result: The account is created, but the Add Account dialog remains open and 
the Accounts view is not refreshed.

Running in Chrome, the following error is observed:
Uncaught TypeError: undefined is not a function, accountsWizard.js:260

Note: Clicking the Add button again will cause a "User already exists" error.

  was:
Go to Home > Accounts. 
Click Add Account to add a new account. 
Fill out the required fields. 
Then click Add to create a new Account.

Result: The account is created, but the Add Account dialog remains open and 
the Accounts view is not refreshed.

Running in Chrome, the following error is observed:
Uncaught TypeError: undefined is not a function, accountsWizard.js:260 (see 
attached screenshot).

Note: Clicking the Add button again will cause a "User already exists" error.


 UI: After creating a new account, the Add Account dialog remains open
 ---

 Key: CLOUDSTACK-7561
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7561
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.5.0
Reporter: Gabor Apati-Nagy
Assignee: Gabor Apati-Nagy

 Go to Home > Accounts. 
 Click Add Account to add a new account. 
 Fill out the required fields. 
 Then click Add to create a new Account.
 Result: The account is created, but the Add Account dialog remains open and 
 the Accounts view is not refreshed.
 Running in Chrome, the following error is observed:
 Uncaught TypeError: undefined is not a function, accountsWizard.js:260
 Note: Clicking the Add button again will cause a "User already exists" 
 error.





[jira] [Created] (CLOUDSTACK-7562) Details page for disk offerings only show details for write performance

2014-09-16 Thread Gabor Apati-Nagy (JIRA)
Gabor Apati-Nagy created CLOUDSTACK-7562:


 Summary: Details page for disk offerings only show details for 
write performance
 Key: CLOUDSTACK-7562
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7562
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: UI
Affects Versions: 4.2.1
Reporter: Gabor Apati-Nagy
Assignee: Gabor Apati-Nagy
 Fix For: 4.5.0


Details for disk offerings only show write performance:

Disk Write Rate(BPS)

Disk Write Rate(IOPS)

are each shown twice; one of each pair should be the read rate (see screenshot).

REPRO
Home - Service Offerings - Disk Offerings - select any storage offering

Expected:

Disk Write Rate (BPS)   
Disk Read Rate (BPS)
Disk Write Rate (IOPS)  
Disk Read Rate (IOPS) 

Actual result:

Disk Write Rate (BPS)   
Disk Write Rate (BPS)   
Disk Write Rate (IOPS)  
Disk Write Rate (IOPS)







[jira] [Created] (CLOUDSTACK-7563) ClassCastException in VirtualMachineManagerImpl in handling various Agent command answer.

2014-09-16 Thread Min Chen (JIRA)
Min Chen created CLOUDSTACK-7563:


 Summary: ClassCastException in VirtualMachineManagerImpl in 
handling various Agent command answer.
 Key: CLOUDSTACK-7563
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7563
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.3.0
Reporter: Min Chen
Assignee: Min Chen
Priority: Critical
 Fix For: 4.5.0


VirtualMachineManagerImpl has many methods that directly cast the Answer received 
from AgentManager.send or AgentManager.easySend to the expected Answer class for 
the normal case. This throws an unhandled ClassCastException in unexpected cases 
where the host is down and the agent is disconnected, which can lead to cloud 
instability.
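
The defensive pattern behind the fix, checking the answer's type before using it instead of casting unconditionally, can be sketched in Python (the actual fix is in the Java VirtualMachineManagerImpl; the class and function names below are stand-ins, not CloudStack code):

```python
class Answer:
    """Generic agent reply; result=False typically means the command failed."""
    def __init__(self, result=True, details=""):
        self.result = result
        self.details = details


class StopAnswer(Answer):
    """Specific reply expected for a successful StopCommand."""
    pass


def handle_stop_answer(answer):
    """Return True only for a successful StopAnswer.

    An unconditional cast (in Java: `(StopAnswer) answer`) blows up with
    a ClassCastException when the agent is disconnected and a generic
    failure Answer comes back; checking the type first degrades that
    situation to a normal failure path instead.
    """
    if not isinstance(answer, StopAnswer):
        # host down / agent disconnected: a plain Answer came back
        return False
    return answer.result
```

A disconnected-agent reply then yields a clean failure rather than a crash.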





[jira] [Created] (CLOUDSTACK-7564) [Automation][XenServer] Unable to Stop a VM - callHostPlugin failed for cmd: destroy_network_rules_for_vm with args vmName: i-20-27-VM

2014-09-16 Thread Chandan Purushothama (JIRA)
Chandan Purushothama created CLOUDSTACK-7564:


 Summary: [Automation][XenServer] Unable to Stop a VM - 
callHostPlugin failed for cmd: destroy_network_rules_for_vm with args vmName: 
i-20-27-VM
 Key: CLOUDSTACK-7564
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7564
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Automation, XenServer
Affects Versions: 4.5.0
Reporter: Chandan Purushothama
Assignee: Anthony Xu
Priority: Blocker
 Fix For: 4.5.0


I see that the VM Stop Job failed due to the following reason:

*2014-09-16 15:51:21,914 WARN  [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-76:ctx-abca3786) callHostPlugin failed for cmd: 
destroy_network_rules_for_vm with args vmName: i-20-27-VM,  due to There was a 
failure communicating with the plugin.
2014-09-16 15:51:21,915 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-76:ctx-abca3786) Catch exception 
com.cloud.utils.exception.CloudRuntimeException when stop VM:i-20-27-VM due to 
com.cloud.utils.exception.CloudRuntimeException: callHostPlugin failed for cmd: 
destroy_network_rules_for_vm with args vmName: i-20-27-VM,  due to There was a 
failure communicating with the plugin.
*

VM Stop Job Logs Information:


{noformat}
2014-09-16 15:51:20,594 DEBUG [c.c.a.ApiServlet] 
(catalina-exec-25:ctx-fedac54a) ===START===  10.220.135.29 -- GET  
jobid=3dc3e848-cf6f-4cb1-b05f-7d220bdf396aapiKey=V9qdDxm-ufkQ7NG7IUBKZGbCo9gzC4d5pjKLwFqNDaLUDC3ELlMIGvqq6RjfF2EQ8qTC0GwfxbhswOFP-Hg-Cgcommand=queryAsyncJobResultresponse=jsonsignature=HFVB81DxD27cwGUnFn%2B2D3AQuRs%3D
2014-09-16 15:51:20,597 DEBUG [c.c.a.ApiServlet] 
(catalina-exec-22:ctx-8c7ba1c4) ===START===  10.220.135.29 -- GET  
jobid=a12f92f6-8efc-4518-b7b4-112cd1f40754apiKey=V9qdDxm-ufkQ7NG7IUBKZGbCo9gzC4d5pjKLwFqNDaLUDC3ELlMIGvqq6RjfF2EQ8qTC0GwfxbhswOFP-Hg-Cgcommand=queryAsyncJobResultresponse=jsonsignature=%2F%2BLLeKMfokMZJ6pOg50PxPZSOjU%3D
2014-09-16 15:51:20,629 DEBUG [c.c.u.AccountManagerImpl] 
(API-Job-Executor-75:ctx-52918b50 job-208 ctx-22aa778f) Removed account 8
2014-09-16 15:51:20,633 DEBUG [c.c.a.ApiServlet] (catalina-exec-22:ctx-8c7ba1c4 
ctx-941f9ccb ctx-1af51dd9) ===END===  10.220.135.29 -- GET  
jobid=a12f92f6-8efc-4518-b7b4-112cd1f40754apiKey=V9qdDxm-ufkQ7NG7IUBKZGbCo9gzC4d5pjKLwFqNDaLUDC3ELlMIGvqq6RjfF2EQ8qTC0GwfxbhswOFP-Hg-Cgcommand=queryAsyncJobResultresponse=jsonsignature=%2F%2BLLeKMfokMZJ6pOg50PxPZSOjU%3D
2014-09-16 15:51:20,648 DEBUG [c.c.a.ApiServlet] (catalina-exec-25:ctx-fedac54a 
ctx-4056e031 ctx-35e5) ===END===  10.220.135.29 -- GET  
jobid=3dc3e848-cf6f-4cb1-b05f-7d220bdf396aapiKey=V9qdDxm-ufkQ7NG7IUBKZGbCo9gzC4d5pjKLwFqNDaLUDC3ELlMIGvqq6RjfF2EQ8qTC0GwfxbhswOFP-Hg-Cgcommand=queryAsyncJobResultresponse=jsonsignature=HFVB81DxD27cwGUnFn%2B2D3AQuRs%3D
2014-09-16 15:51:20,651 DEBUG [c.c.u.AccountManagerImpl] 
(API-Job-Executor-75:ctx-52918b50 job-208 ctx-22aa778f) Successfully deleted 
snapshots directories for all volumes under account 8 across all zones
2014-09-16 15:51:20,655 DEBUG [c.c.u.AccountManagerImpl] 
(API-Job-Executor-75:ctx-52918b50 job-208 ctx-22aa778f) Expunging # of vms 
(accountId=8): 1
2014-09-16 15:51:20,655 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
(API-Job-Executor-74:ctx-cbfc3d27 job-207 ctx-ea6e190d) Sync job-209 execution 
on object VmWorkJobQueue.27
2014-09-16 15:51:20,658 WARN  [c.c.u.d.Merovingian2] 
(API-Job-Executor-74:ctx-cbfc3d27 job-207 ctx-ea6e190d) Was unable to find lock 
for the key vm_instance27 and thread id 2057618920
2014-09-16 15:51:20,664 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
(API-Job-Executor-75:ctx-52918b50 job-208 ctx-22aa778f) Sync job-210 execution 
on object VmWorkJobQueue.7
2014-09-16 15:51:20,666 WARN  [c.c.u.d.Merovingian2] 
(API-Job-Executor-75:ctx-52918b50 job-208 ctx-22aa778f) Was unable to find lock 
for the key vm_instance7 and thread id 1487507158
2014-09-16 15:51:20,928 DEBUG [c.c.a.m.AgentManagerImpl] 
(AgentManager-Handler-15:null) SeqA 3-136: Processing Seq 3-136:  { Cmd , 
MgmtId: -1, via: 3, Ver: v1, Flags: 11, 
[{com.cloud.agent.api.ConsoleProxyLoadReportCommand:{_proxyVmId:1,_loadInfo:{\n
  \connections\: []\n},wait:0}}] }
2014-09-16 15:51:20,932 DEBUG [c.c.a.m.AgentManagerImpl] 
(AgentManager-Handler-15:null) SeqA 3-136: Sending Seq 3-136:  { Ans: , MgmtId: 
125944753790399, via: 3, Ver: v1, Flags: 100010, 
[{com.cloud.agent.api.AgentControlAnswer:{result:true,wait:0}}] }
2014-09-16 15:51:21,586 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
(AsyncJobMgr-Heartbeat-1:ctx-ee5979bc) Execute sync-queue item: SyncQueueItemVO 
{id:62, queueId: 61, contentType: AsyncJob, contentId: 209, lastProcessMsid: 
null, lastprocessNumber: null, lastProcessTime: null, created: Tue Sep 16 
15:51:20 UTC 2014}
2014-09-16 15:51:21,587 

[jira] [Resolved] (CLOUDSTACK-7564) [Automation][XenServer] Unable to Stop a VM - callHostPlugin failed for cmd: destroy_network_rules_for_vm with args vmName: i-20-27-VM

2014-09-16 Thread Anthony Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Xu resolved CLOUDSTACK-7564.

Resolution: Incomplete

Please attach the management server log and SMlog.
What's the zone (basic/advanced)?
What's the network (with or without SG)?

 [Automation][XenServer] Unable to Stop a VM - callHostPlugin failed for cmd: 
 destroy_network_rules_for_vm with args vmName: i-20-27-VM
 --

 Key: CLOUDSTACK-7564
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7564
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation, XenServer
Affects Versions: 4.5.0
Reporter: Chandan Purushothama
Assignee: Anthony Xu
Priority: Blocker
 Fix For: 4.5.0


 I see that the VM Stop Job failed due to the following reason:
 *2014-09-16 15:51:21,914 WARN  [c.c.h.x.r.CitrixResourceBase] 
 (DirectAgent-76:ctx-abca3786) callHostPlugin failed for cmd: 
 destroy_network_rules_for_vm with args vmName: i-20-27-VM,  due to There was 
 a failure communicating with the plugin.
 2014-09-16 15:51:21,915 DEBUG [c.c.h.x.r.CitrixResourceBase] 
 (DirectAgent-76:ctx-abca3786) Catch exception 
 com.cloud.utils.exception.CloudRuntimeException when stop VM:i-20-27-VM due 
 to com.cloud.utils.exception.CloudRuntimeException: callHostPlugin failed for 
 cmd: destroy_network_rules_for_vm with args vmName: i-20-27-VM,  due to There 
 was a failure communicating with the plugin.
 *
 
 VM Stop Job Logs Information:
 
 {noformat}
 2014-09-16 15:51:20,594 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-25:ctx-fedac54a) ===START===  10.220.135.29 -- GET  
 jobid=3dc3e848-cf6f-4cb1-b05f-7d220bdf396aapiKey=V9qdDxm-ufkQ7NG7IUBKZGbCo9gzC4d5pjKLwFqNDaLUDC3ELlMIGvqq6RjfF2EQ8qTC0GwfxbhswOFP-Hg-Cgcommand=queryAsyncJobResultresponse=jsonsignature=HFVB81DxD27cwGUnFn%2B2D3AQuRs%3D
 2014-09-16 15:51:20,597 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-22:ctx-8c7ba1c4) ===START===  10.220.135.29 -- GET  
 jobid=a12f92f6-8efc-4518-b7b4-112cd1f40754apiKey=V9qdDxm-ufkQ7NG7IUBKZGbCo9gzC4d5pjKLwFqNDaLUDC3ELlMIGvqq6RjfF2EQ8qTC0GwfxbhswOFP-Hg-Cgcommand=queryAsyncJobResultresponse=jsonsignature=%2F%2BLLeKMfokMZJ6pOg50PxPZSOjU%3D
 2014-09-16 15:51:20,629 DEBUG [c.c.u.AccountManagerImpl] 
 (API-Job-Executor-75:ctx-52918b50 job-208 ctx-22aa778f) Removed account 8
 2014-09-16 15:51:20,633 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-22:ctx-8c7ba1c4 ctx-941f9ccb ctx-1af51dd9) ===END===  
 10.220.135.29 -- GET  
 jobid=a12f92f6-8efc-4518-b7b4-112cd1f40754apiKey=V9qdDxm-ufkQ7NG7IUBKZGbCo9gzC4d5pjKLwFqNDaLUDC3ELlMIGvqq6RjfF2EQ8qTC0GwfxbhswOFP-Hg-Cgcommand=queryAsyncJobResultresponse=jsonsignature=%2F%2BLLeKMfokMZJ6pOg50PxPZSOjU%3D
 2014-09-16 15:51:20,648 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-25:ctx-fedac54a ctx-4056e031 ctx-35e5) ===END===  
 10.220.135.29 -- GET  
 jobid=3dc3e848-cf6f-4cb1-b05f-7d220bdf396aapiKey=V9qdDxm-ufkQ7NG7IUBKZGbCo9gzC4d5pjKLwFqNDaLUDC3ELlMIGvqq6RjfF2EQ8qTC0GwfxbhswOFP-Hg-Cgcommand=queryAsyncJobResultresponse=jsonsignature=HFVB81DxD27cwGUnFn%2B2D3AQuRs%3D
 2014-09-16 15:51:20,651 DEBUG [c.c.u.AccountManagerImpl] 
 (API-Job-Executor-75:ctx-52918b50 job-208 ctx-22aa778f) Successfully deleted 
 snapshots directories for all volumes under account 8 across all zones
 2014-09-16 15:51:20,655 DEBUG [c.c.u.AccountManagerImpl] 
 (API-Job-Executor-75:ctx-52918b50 job-208 ctx-22aa778f) Expunging # of vms 
 (accountId=8): 1
 2014-09-16 15:51:20,655 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
 (API-Job-Executor-74:ctx-cbfc3d27 job-207 ctx-ea6e190d) Sync job-209 
 execution on object VmWorkJobQueue.27
 2014-09-16 15:51:20,658 WARN  [c.c.u.d.Merovingian2] 
 (API-Job-Executor-74:ctx-cbfc3d27 job-207 ctx-ea6e190d) Was unable to find 
 lock for the key vm_instance27 and thread id 2057618920
 2014-09-16 15:51:20,664 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
 (API-Job-Executor-75:ctx-52918b50 job-208 ctx-22aa778f) Sync job-210 
 execution on object VmWorkJobQueue.7
 2014-09-16 15:51:20,666 WARN  [c.c.u.d.Merovingian2] 
 (API-Job-Executor-75:ctx-52918b50 job-208 ctx-22aa778f) Was unable to find 
 lock for the key vm_instance7 and thread id 1487507158
 2014-09-16 15:51:20,928 DEBUG [c.c.a.m.AgentManagerImpl] 
 (AgentManager-Handler-15:null) SeqA 3-136: Processing Seq 3-136:  { Cmd , 
 MgmtId: -1, via: 3, Ver: v1, Flags: 11, 
 [{com.cloud.agent.api.ConsoleProxyLoadReportCommand:{_proxyVmId:1,_loadInfo:{\n
   \connections\: []\n},wait:0}}] }
 2014-09-16 15:51:20,932 DEBUG [c.c.a.m.AgentManagerImpl] 
 (AgentManager-Handler-15:null) SeqA 3-136: Sending Seq 3-136:  { Ans: , 
 MgmtId: 125944753790399, via: 3, Ver: v1, Flags: 100010, 
 

[jira] [Commented] (CLOUDSTACK-7563) ClassCastException in VirtualMachineManagerImpl in handling various Agent command answer.

2014-09-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136053#comment-14136053
 ] 

ASF subversion and git services commented on CLOUDSTACK-7563:
-

Commit 1b15efb5f018f473a680165fff0d5574b8e771e5 in cloudstack's branch 
refs/heads/master from [~minchen07]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=1b15efb ]

CLOUDSTACK-7563: ClassCastException in VirtualMachineManagerImpl in
handling various Agent command answer.


 ClassCastException in VirtualMachineManagerImpl in handling various Agent 
 command answer.
 -

 Key: CLOUDSTACK-7563
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7563
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
Reporter: Min Chen
Assignee: Min Chen
Priority: Critical
 Fix For: 4.5.0


 VirtualMachineManagerImpl has many methods that directly cast the Answer received 
 from AgentManager.send or AgentManager.easySend to the expected Answer class 
 for the normal case. This throws an unhandled ClassCastException in unexpected 
 cases where the host is down and the agent is disconnected, which can 
 lead to cloud instability.





[jira] [Commented] (CLOUDSTACK-7497) [UI] deploy VM failing as hypervisor type passed is KVM instead of LXC

2014-09-16 Thread Jessica Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136057#comment-14136057
 ] 

Jessica Wang commented on CLOUDSTACK-7497:
--

shweta,

 3. Create a KVM VM first. Very important step.

After adding this step to my test, I'm able to reproduce this bug now.

thank you.

Jessica


 [UI] deploy VM  failing as hypervisor type passed is KVM instead of LXC
 ---

 Key: CLOUDSTACK-7497
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7497
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.5.0
Reporter: shweta agarwal
Assignee: Jessica Wang
Priority: Blocker
 Fix For: 4.5.0

 Attachments: MS.tar.gz, cloud.dmp, 
 jessica_web_site_loaded_with_Shweta_databaseDump_1A.PNG, 
 jessica_web_site_loaded_with_Shweta_databaseDump_1B.PNG, shweta_website.PNG


 Repro steps:
 Create an LXC zone with 2 clusters, one LXC and one KVM
 Create a VM with an LXC template
 Bug:
 Deploy VM fails as the hypervisor type is passed as KVM
 2014-09-05 14:06:09,520 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-8:ctx-7b5dcb24) ===START===  10.146.0.131 -- GET  
 command=deployVirtualMachineresponse=jsonsessionkey=NZyAJuyUuohCrPbjervw1hkdBlM%3Dzoneid=7d97ec72-39c7-4fbc-b6f3-7710ada1a82atemplateid=2175190d-212a-4078-aedb-d25e4ddcdeb3hypervisor=KVMserviceofferingid=82ab5842-2a12-4160-bba4-4b2ff81e810baffinitygroupids=345511d9-fa5f-46e8-b101-1d4ea0a1ba47iptonetworklist%5B0%5D.networkid=04bf986a-ac13-4eaf-84e9-e99dcf508c48_=1409906169711
 2014-09-05 14:06:09,581 INFO  [c.c.a.ApiServer] (catalina-exec-8:ctx-7b5dcb24 
 ctx-2a5b99cd) Hypervisor passed to the deployVm call, is different from the 
 hypervisor type of the template
 2014-09-05 14:06:09,582 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-8:ctx-7b5dcb24 ctx-2a5b99cd) ===END===  10.146.0.131 -- GET  
 command=deployVirtualMachineresponse=jsonsessionkey=NZyAJuyUuohCrPbjervw1hkdBlM%3Dzoneid=7d97ec72-39c7-4fbc-b6f3-7710ada1a82atemplateid=2175190d-212a-4078-aedb-d25e4ddcdeb3hypervisor=KVMserviceofferingid=82ab5842-2a12-4160-bba4-4b2ff81e810baffinitygroupids=345511d9-fa5f-46e8-b101-1d4ea0a1ba47iptonetworklist%5B0%5D.networkid=04bf986a-ac13-4eaf-84e9-e99dcf508c48_=1409906169711
 2014-09-05 14:06:12,573 DEBUG [o.a.c.s.SecondaryStorageManagerImpl] 
 (secstorage-1:ctx-6de2885c) Zone 2 is ready to launch secondary storage VM
 2014-09-05 14:06:12,689 DEBUG [c.c.c.ConsoleProxyManagerImpl] 
 (consoleproxy-1:ctx-a134b2b7) Zone 2 is ready to launch console proxy





[jira] [Commented] (CLOUDSTACK-7497) [UI] deploy VM failing as hypervisor type passed is KVM instead of LXC

2014-09-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136131#comment-14136131
 ] 

ASF subversion and git services commented on CLOUDSTACK-7497:
-

Commit d0da107b7ff20512b2ea18bbd4bb7fa09e533eb2 in cloudstack's branch 
refs/heads/master from [~jessicawang]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=d0da107 ]

CLOUDSTACK-7497: UI > VM Wizard > select template > reset local variable before 
retrieving selected template object.


 [UI] deploy VM  failing as hypervisor type passed is KVM instead of LXC
 ---

 Key: CLOUDSTACK-7497
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7497
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.5.0
Reporter: shweta agarwal
Assignee: Jessica Wang
Priority: Blocker
 Fix For: 4.5.0

 Attachments: MS.tar.gz, cloud.dmp, 
 jessica_web_site_loaded_with_Shweta_databaseDump_1A.PNG, 
 jessica_web_site_loaded_with_Shweta_databaseDump_1B.PNG, shweta_website.PNG


 Repro steps:
 Create an LXC zone with 2 clusters, one LXC and one KVM
 Create a VM with an LXC template
 Bug:
 Deploy VM fails as the hypervisor type is passed as KVM
 2014-09-05 14:06:09,520 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-8:ctx-7b5dcb24) ===START===  10.146.0.131 -- GET  
 command=deployVirtualMachineresponse=jsonsessionkey=NZyAJuyUuohCrPbjervw1hkdBlM%3Dzoneid=7d97ec72-39c7-4fbc-b6f3-7710ada1a82atemplateid=2175190d-212a-4078-aedb-d25e4ddcdeb3hypervisor=KVMserviceofferingid=82ab5842-2a12-4160-bba4-4b2ff81e810baffinitygroupids=345511d9-fa5f-46e8-b101-1d4ea0a1ba47iptonetworklist%5B0%5D.networkid=04bf986a-ac13-4eaf-84e9-e99dcf508c48_=1409906169711
 2014-09-05 14:06:09,581 INFO  [c.c.a.ApiServer] (catalina-exec-8:ctx-7b5dcb24 
 ctx-2a5b99cd) Hypervisor passed to the deployVm call, is different from the 
 hypervisor type of the template
 2014-09-05 14:06:09,582 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-8:ctx-7b5dcb24 ctx-2a5b99cd) ===END===  10.146.0.131 -- GET  
 command=deployVirtualMachineresponse=jsonsessionkey=NZyAJuyUuohCrPbjervw1hkdBlM%3Dzoneid=7d97ec72-39c7-4fbc-b6f3-7710ada1a82atemplateid=2175190d-212a-4078-aedb-d25e4ddcdeb3hypervisor=KVMserviceofferingid=82ab5842-2a12-4160-bba4-4b2ff81e810baffinitygroupids=345511d9-fa5f-46e8-b101-1d4ea0a1ba47iptonetworklist%5B0%5D.networkid=04bf986a-ac13-4eaf-84e9-e99dcf508c48_=1409906169711
 2014-09-05 14:06:12,573 DEBUG [o.a.c.s.SecondaryStorageManagerImpl] 
 (secstorage-1:ctx-6de2885c) Zone 2 is ready to launch secondary storage VM
 2014-09-05 14:06:12,689 DEBUG [c.c.c.ConsoleProxyManagerImpl] 
 (consoleproxy-1:ctx-a134b2b7) Zone 2 is ready to launch console proxy





[jira] [Updated] (CLOUDSTACK-7497) [UI] deploy VM failing as hypervisor type passed is KVM instead of LXC

2014-09-16 Thread Jessica Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jessica Wang updated CLOUDSTACK-7497:
-
Attachment: after_checked_in_UI_fix.PNG

 [UI] deploy VM  failing as hypervisor type passed is KVM instead of LXC
 ---

 Key: CLOUDSTACK-7497
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7497
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.5.0
Reporter: shweta agarwal
Assignee: Jessica Wang
Priority: Blocker
 Fix For: 4.5.0

 Attachments: MS.tar.gz, after_checked_in_UI_fix.PNG, cloud.dmp, 
 jessica_web_site_loaded_with_Shweta_databaseDump_1A.PNG, 
 jessica_web_site_loaded_with_Shweta_databaseDump_1B.PNG, shweta_website.PNG


 Repro steps:
 Create an LXC zone with 2 clusters, one LXC and one KVM
 Create a VM with an LXC template
 Bug:
 Deploy VM fails as the hypervisor type is passed as KVM
 2014-09-05 14:06:09,520 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-8:ctx-7b5dcb24) ===START===  10.146.0.131 -- GET  
 command=deployVirtualMachineresponse=jsonsessionkey=NZyAJuyUuohCrPbjervw1hkdBlM%3Dzoneid=7d97ec72-39c7-4fbc-b6f3-7710ada1a82atemplateid=2175190d-212a-4078-aedb-d25e4ddcdeb3hypervisor=KVMserviceofferingid=82ab5842-2a12-4160-bba4-4b2ff81e810baffinitygroupids=345511d9-fa5f-46e8-b101-1d4ea0a1ba47iptonetworklist%5B0%5D.networkid=04bf986a-ac13-4eaf-84e9-e99dcf508c48_=1409906169711
 2014-09-05 14:06:09,581 INFO  [c.c.a.ApiServer] (catalina-exec-8:ctx-7b5dcb24 
 ctx-2a5b99cd) Hypervisor passed to the deployVm call, is different from the 
 hypervisor type of the template
 2014-09-05 14:06:09,582 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-8:ctx-7b5dcb24 ctx-2a5b99cd) ===END===  10.146.0.131 -- GET  
 command=deployVirtualMachineresponse=jsonsessionkey=NZyAJuyUuohCrPbjervw1hkdBlM%3Dzoneid=7d97ec72-39c7-4fbc-b6f3-7710ada1a82atemplateid=2175190d-212a-4078-aedb-d25e4ddcdeb3hypervisor=KVMserviceofferingid=82ab5842-2a12-4160-bba4-4b2ff81e810baffinitygroupids=345511d9-fa5f-46e8-b101-1d4ea0a1ba47iptonetworklist%5B0%5D.networkid=04bf986a-ac13-4eaf-84e9-e99dcf508c48_=1409906169711
 2014-09-05 14:06:12,573 DEBUG [o.a.c.s.SecondaryStorageManagerImpl] 
 (secstorage-1:ctx-6de2885c) Zone 2 is ready to launch secondary storage VM
 2014-09-05 14:06:12,689 DEBUG [c.c.c.ConsoleProxyManagerImpl] 
 (consoleproxy-1:ctx-a134b2b7) Zone 2 is ready to launch console proxy





[jira] [Resolved] (CLOUDSTACK-7497) [UI] deploy VM failing as hypervisor type passed is KVM instead of LXC

2014-09-16 Thread Jessica Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jessica Wang resolved CLOUDSTACK-7497.
--
Resolution: Fixed

 [UI] deploy VM  failing as hypervisor type passed is KVM instead of LXC
 ---

 Key: CLOUDSTACK-7497
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7497
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.5.0
Reporter: shweta agarwal
Assignee: Jessica Wang
Priority: Blocker
 Fix For: 4.5.0

 Attachments: MS.tar.gz, after_checked_in_UI_fix.PNG, cloud.dmp, 
 jessica_web_site_loaded_with_Shweta_databaseDump_1A.PNG, 
 jessica_web_site_loaded_with_Shweta_databaseDump_1B.PNG, shweta_website.PNG


 Repro steps:
 Create an LXC zone with 2 clusters, one LXC and one KVM
 Create a VM with an LXC template
 Bug:
 Deploy VM fails as the hypervisor type is passed as KVM
 2014-09-05 14:06:09,520 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-8:ctx-7b5dcb24) ===START===  10.146.0.131 -- GET  
 command=deployVirtualMachine&response=json&sessionkey=NZyAJuyUuohCrPbjervw1hkdBlM%3D&zoneid=7d97ec72-39c7-4fbc-b6f3-7710ada1a82a&templateid=2175190d-212a-4078-aedb-d25e4ddcdeb3&hypervisor=KVM&serviceofferingid=82ab5842-2a12-4160-bba4-4b2ff81e810b&affinitygroupids=345511d9-fa5f-46e8-b101-1d4ea0a1ba47&iptonetworklist%5B0%5D.networkid=04bf986a-ac13-4eaf-84e9-e99dcf508c48&_=1409906169711
 2014-09-05 14:06:09,581 INFO  [c.c.a.ApiServer] (catalina-exec-8:ctx-7b5dcb24 
 ctx-2a5b99cd) Hypervisor passed to the deployVm call, is different from the 
 hypervisor type of the template
 2014-09-05 14:06:09,582 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-8:ctx-7b5dcb24 ctx-2a5b99cd) ===END===  10.146.0.131 -- GET  
 command=deployVirtualMachine&response=json&sessionkey=NZyAJuyUuohCrPbjervw1hkdBlM%3D&zoneid=7d97ec72-39c7-4fbc-b6f3-7710ada1a82a&templateid=2175190d-212a-4078-aedb-d25e4ddcdeb3&hypervisor=KVM&serviceofferingid=82ab5842-2a12-4160-bba4-4b2ff81e810b&affinitygroupids=345511d9-fa5f-46e8-b101-1d4ea0a1ba47&iptonetworklist%5B0%5D.networkid=04bf986a-ac13-4eaf-84e9-e99dcf508c48&_=1409906169711
 2014-09-05 14:06:12,573 DEBUG [o.a.c.s.SecondaryStorageManagerImpl] 
 (secstorage-1:ctx-6de2885c) Zone 2 is ready to launch secondary storage VM
 2014-09-05 14:06:12,689 DEBUG [c.c.c.ConsoleProxyManagerImpl] 
 (consoleproxy-1:ctx-a134b2b7) Zone 2 is ready to launch console proxy



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7497) [UI] deploy VM failing as hypervisor type passed is KVM instead of LXC

2014-09-16 Thread Jessica Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jessica Wang updated CLOUDSTACK-7497:
-
Labels: DEVREV  (was: )

 [UI] deploy VM  failing as hypervisor type passed is KVM instead of LXC
 ---

 Key: CLOUDSTACK-7497
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7497
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.5.0
Reporter: shweta agarwal
Assignee: Jessica Wang
Priority: Blocker
  Labels: DEVREV
 Fix For: 4.5.0

 Attachments: MS.tar.gz, after_checked_in_UI_fix.PNG, cloud.dmp, 
 jessica_web_site_loaded_with_Shweta_databaseDump_1A.PNG, 
 jessica_web_site_loaded_with_Shweta_databaseDump_1B.PNG, shweta_website.PNG


 Repro steps:
 Create an LXC zone with 2 clusters, one LXC and one KVM
 Create a VM with an LXC template
 Bug:
 Deploy VM fails as the hypervisor type is passed as KVM
 2014-09-05 14:06:09,520 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-8:ctx-7b5dcb24) ===START===  10.146.0.131 -- GET  
 command=deployVirtualMachine&response=json&sessionkey=NZyAJuyUuohCrPbjervw1hkdBlM%3D&zoneid=7d97ec72-39c7-4fbc-b6f3-7710ada1a82a&templateid=2175190d-212a-4078-aedb-d25e4ddcdeb3&hypervisor=KVM&serviceofferingid=82ab5842-2a12-4160-bba4-4b2ff81e810b&affinitygroupids=345511d9-fa5f-46e8-b101-1d4ea0a1ba47&iptonetworklist%5B0%5D.networkid=04bf986a-ac13-4eaf-84e9-e99dcf508c48&_=1409906169711
 2014-09-05 14:06:09,581 INFO  [c.c.a.ApiServer] (catalina-exec-8:ctx-7b5dcb24 
 ctx-2a5b99cd) Hypervisor passed to the deployVm call, is different from the 
 hypervisor type of the template
 2014-09-05 14:06:09,582 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-8:ctx-7b5dcb24 ctx-2a5b99cd) ===END===  10.146.0.131 -- GET  
 command=deployVirtualMachine&response=json&sessionkey=NZyAJuyUuohCrPbjervw1hkdBlM%3D&zoneid=7d97ec72-39c7-4fbc-b6f3-7710ada1a82a&templateid=2175190d-212a-4078-aedb-d25e4ddcdeb3&hypervisor=KVM&serviceofferingid=82ab5842-2a12-4160-bba4-4b2ff81e810b&affinitygroupids=345511d9-fa5f-46e8-b101-1d4ea0a1ba47&iptonetworklist%5B0%5D.networkid=04bf986a-ac13-4eaf-84e9-e99dcf508c48&_=1409906169711
 2014-09-05 14:06:12,573 DEBUG [o.a.c.s.SecondaryStorageManagerImpl] 
 (secstorage-1:ctx-6de2885c) Zone 2 is ready to launch secondary storage VM
 2014-09-05 14:06:12,689 DEBUG [c.c.c.ConsoleProxyManagerImpl] 
 (consoleproxy-1:ctx-a134b2b7) Zone 2 is ready to launch console proxy



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7497) [UI] deploy VM failing as hypervisor type passed is KVM instead of LXC

2014-09-16 Thread Jessica Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136142#comment-14136142
 ] 

Jessica Wang commented on CLOUDSTACK-7497:
--

Problem:
--- 
As described by the reporter


Root Cause Analysis:
-- 
A local variable holding the selected template was not reset.


Proposed Solution:
---
Reset the local variable for the selected template.


QA notes:
-
Steps to reproduce:
i) as described by the reporter
 

Expected result:
-
correct hypervisor should be passed (as my attached screenshot: 
after_checked_in_UI_fix.PNG)
 

 [UI] deploy VM  failing as hypervisor type passed is KVM instead of LXC
 ---

 Key: CLOUDSTACK-7497
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7497
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.5.0
Reporter: shweta agarwal
Assignee: Jessica Wang
Priority: Blocker
  Labels: DEVREV
 Fix For: 4.5.0

 Attachments: MS.tar.gz, after_checked_in_UI_fix.PNG, cloud.dmp, 
 jessica_web_site_loaded_with_Shweta_databaseDump_1A.PNG, 
 jessica_web_site_loaded_with_Shweta_databaseDump_1B.PNG, shweta_website.PNG


 Repro steps:
 Create an LXC zone with 2 clusters, one LXC and one KVM
 Create a VM with an LXC template
 Bug:
 Deploy VM fails as the hypervisor type is passed as KVM
 2014-09-05 14:06:09,520 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-8:ctx-7b5dcb24) ===START===  10.146.0.131 -- GET  
 command=deployVirtualMachine&response=json&sessionkey=NZyAJuyUuohCrPbjervw1hkdBlM%3D&zoneid=7d97ec72-39c7-4fbc-b6f3-7710ada1a82a&templateid=2175190d-212a-4078-aedb-d25e4ddcdeb3&hypervisor=KVM&serviceofferingid=82ab5842-2a12-4160-bba4-4b2ff81e810b&affinitygroupids=345511d9-fa5f-46e8-b101-1d4ea0a1ba47&iptonetworklist%5B0%5D.networkid=04bf986a-ac13-4eaf-84e9-e99dcf508c48&_=1409906169711
 2014-09-05 14:06:09,581 INFO  [c.c.a.ApiServer] (catalina-exec-8:ctx-7b5dcb24 
 ctx-2a5b99cd) Hypervisor passed to the deployVm call, is different from the 
 hypervisor type of the template
 2014-09-05 14:06:09,582 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-8:ctx-7b5dcb24 ctx-2a5b99cd) ===END===  10.146.0.131 -- GET  
 command=deployVirtualMachine&response=json&sessionkey=NZyAJuyUuohCrPbjervw1hkdBlM%3D&zoneid=7d97ec72-39c7-4fbc-b6f3-7710ada1a82a&templateid=2175190d-212a-4078-aedb-d25e4ddcdeb3&hypervisor=KVM&serviceofferingid=82ab5842-2a12-4160-bba4-4b2ff81e810b&affinitygroupids=345511d9-fa5f-46e8-b101-1d4ea0a1ba47&iptonetworklist%5B0%5D.networkid=04bf986a-ac13-4eaf-84e9-e99dcf508c48&_=1409906169711
 2014-09-05 14:06:12,573 DEBUG [o.a.c.s.SecondaryStorageManagerImpl] 
 (secstorage-1:ctx-6de2885c) Zone 2 is ready to launch secondary storage VM
 2014-09-05 14:06:12,689 DEBUG [c.c.c.ConsoleProxyManagerImpl] 
 (consoleproxy-1:ctx-a134b2b7) Zone 2 is ready to launch console proxy



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-6460) Migration of CLVM volumes to another primary storage fail

2014-09-16 Thread Simon Weller (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136210#comment-14136210
 ] 

Simon Weller commented on CLOUDSTACK-6460:
--

I've been digging into this a lot today, and I think I've narrowed the problem 
down to the attachVolume functionality.

It appears that during the attachVolume operation, the volume format is being 
set to QCOW2 in the database.

Steps to reproduce this:

1. Create a new volume either via GUI or API.
Database shows format field to be NULL.
2. Attach volume to VM.
At this point the format field is set to QCOW2, even though it's really RAW.

Here's the database table data:

After Volume Creation:

mysql> select * from volumes where id='17451'\G
*************************** 1. row ***************************
                        id: 17451
                account_id: 6
                 domain_id: 1
                   pool_id: NULL
              last_pool_id: NULL
               instance_id: NULL
                 device_id: NULL
                      name: testnew
                      uuid: 9fc165e4-d383-4367-ab0a-9a55e5566110
                      size: 5368709120
                    folder: NULL
                      path: NULL
                    pod_id: NULL
            data_center_id: 2
                iscsi_name: NULL
                   host_ip: NULL
               volume_type: DATADISK
                 pool_type: NULL
          disk_offering_id: 3
               template_id: NULL
first_snapshot_backup_uuid: NULL
               recreatable: 0
                   created: 2014-09-16 20:14:25
                  attached: NULL
                   updated: 2014-09-16 20:14:25
                   removed: NULL
                     state: Allocated
                chain_info: NULL
              update_count: 0
                 disk_type: NULL
    vm_snapshot_chain_size: NULL
                    iso_id: NULL
            display_volume: 1
                    format: NULL
                  min_iops: NULL
                  max_iops: NULL
             hv_ss_reserve: NULL
1 row in set (0.00 sec)


mysql> select * from storage_pool\G
*************************** 1. row ***************************
                   id: 201
                 name: clvm1
                 uuid: 076d1cd7-9c80-4302-8e13-ea8187a9a96b
            pool_type: CLVM
                 port: 0
       data_center_id: 2
               pod_id: 2
           cluster_id: 2
           used_bytes: 1022336434176
       capacity_bytes: 1099507433472
         host_address: localhost
            user_info: NULL
                 path: /csstore01
              created: 2014-01-15 22:02:18
              removed: NULL
          update_time: NULL
               status: Up
storage_provider_name: DefaultPrimary
                scope: CLUSTER
           hypervisor: NULL
              managed: 0
        capacity_iops: NULL
*************************** 2. row ***************************
                   id: 202
                 name: csstore02
                 uuid: 8f624e46-33fe-473c-9d05-62aebc151d1b
            pool_type: CLVM
                 port: 0
       data_center_id: 2
               pod_id: 2
           cluster_id: 2
           used_bytes: 1019077459968
       capacity_bytes: 1099507433472
         host_address: localhost
            user_info: NULL

[jira] [Resolved] (CLOUDSTACK-7563) ClassCastException in VirtualMachineManagerImpl in handling various Agent command answer.

2014-09-16 Thread Min Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Min Chen resolved CLOUDSTACK-7563.
--
Resolution: Fixed

Changed VirtualMachineManagerImpl to use checked Answer casts to avoid potential 
unhandled exceptions.
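
The checked-cast pattern behind that fix can be sketched as follows. This is a minimal, hedged illustration: the Answer and StopAnswer classes and the handleStop method here are simplified stand-ins, not CloudStack's actual types.

```java
// Minimal sketch of the checked-cast pattern described in the fix.
// Answer, StopAnswer, and handleStop are illustrative stand-ins, not
// CloudStack's real classes.
class Answer {
    private final boolean success;
    Answer(boolean success) { this.success = success; }
    boolean getResult() { return success; }
}

class StopAnswer extends Answer {
    StopAnswer(boolean success) { super(success); }
}

public class CheckedCastDemo {
    // Verify the runtime type before casting: when the host is down, the agent
    // layer may hand back a plain failure Answer instead of a StopAnswer, and a
    // blind (StopAnswer) cast would throw an unhandled ClassCastException.
    static boolean handleStop(Answer answer) {
        if (!(answer instanceof StopAnswer)) {
            return false; // treat the unexpected answer type as a failed operation
        }
        StopAnswer stopAnswer = (StopAnswer) answer;
        return stopAnswer.getResult();
    }

    public static void main(String[] args) {
        System.out.println(handleStop(new StopAnswer(true)));  // true
        System.out.println(handleStop(new Answer(false)));     // false, no exception
    }
}
```

The second call is the interesting one: with a direct cast it would have thrown ClassCastException; with the instanceof check it degrades to a normal operation failure.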

 ClassCastException in VirtualMachineManagerImpl in handling various Agent 
 command answer.
 -

 Key: CLOUDSTACK-7563
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7563
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
Reporter: Min Chen
Assignee: Min Chen
Priority: Critical
 Fix For: 4.5.0


 VirtualMachineManagerImpl has many methods that directly cast the Answer 
 received from AgentManager.send or AgentManager.easySend to the expected Answer 
 class in the normal case. This throws an unhandled ClassCastException in some 
 unexpected cases where the host is down and the agent is disconnected, which 
 leads to cloud instability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-7565) [Automation] test_escalations_volumes test cases failing while attaching

2014-09-16 Thread Rayees Namathponnan (JIRA)
Rayees Namathponnan created CLOUDSTACK-7565:
---

 Summary: [Automation] test_escalations_volumes test cases failing 
while attaching 
 Key: CLOUDSTACK-7565
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7565
 Project: CloudStack
  Issue Type: Test
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Automation
Affects Versions: 4.5.0
 Environment: KVM 
Reporter: Rayees Namathponnan
Priority: Critical
 Fix For: 4.5.0


Test case failing with QEMU error 

Job failed: {jobprocstatus : 0, created : u'2014-09-13T22:11:54-0700', 
jobresult : {errorcode : 530, errortext : u'Unexpected exception'}, cmd : 
u'org.apache.cloudstack.api.command.user.volume.AttachVolumeCmd', userid : 
u'40ce6901-3037-4a9b-9720-c3525af1971e', jobstatus : 2, jobid : 
u'e6e007e0-4026-4476-ba21-86ba6da2fd46', jobresultcode : 530, jobinstanceid : 
u'0f2ab3fb-977f-40ea-aa53-b95be017e38c', jobresulttype : u'object', 
jobinstancetype : u'Volume', accountid : 
u'db0c6f40-940e-43d4-bfb5-6afecbbe331d'}
  begin captured stdout  -
=== TestName: test_06_volume_snapshot_policy_hourly | Status : EXCEPTION ===







--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7565) [Automation] test_escalations_volumes test cases failing while attaching

2014-09-16 Thread Rayees Namathponnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rayees Namathponnan updated CLOUDSTACK-7565:

Description: 
Test case failing with QEMU error 

Job failed: {jobprocstatus : 0, created : u'2014-09-13T22:11:54-0700', 
jobresult : {errorcode : 530, errortext : u'Unexpected exception'}, cmd : 
u'org.apache.cloudstack.api.command.user.volume.AttachVolumeCmd', userid : 
u'40ce6901-3037-4a9b-9720-c3525af1971e', jobstatus : 2, jobid : 
u'e6e007e0-4026-4476-ba21-86ba6da2fd46', jobresultcode : 530, jobinstanceid : 
u'0f2ab3fb-977f-40ea-aa53-b95be017e38c', jobresulttype : u'object', 
jobinstancetype : u'Volume', accountid : 
u'db0c6f40-940e-43d4-bfb5-6afecbbe331d'}
  begin captured stdout  -
=== TestName: test_06_volume_snapshot_policy_hourly | Status : EXCEPTION ===


Test cases are failing while attaching a volume; the error is from QEMU about a 
duplicate volume id, and we cannot fix this issue in QEMU.

 I tested attach volume manually, and it works
 there is a volume attach test case in BVT, and it passes

We need to know why attach passes in BVT but not in regression.

We may need to update the test case to work with KVM.





  was:
Test case failing with QEMU error 

Job failed: {jobprocstatus : 0, created : u'2014-09-13T22:11:54-0700', 
jobresult : {errorcode : 530, errortext : u'Unexpected exception'}, cmd : 
u'org.apache.cloudstack.api.command.user.volume.AttachVolumeCmd', userid : 
u'40ce6901-3037-4a9b-9720-c3525af1971e', jobstatus : 2, jobid : 
u'e6e007e0-4026-4476-ba21-86ba6da2fd46', jobresultcode : 530, jobinstanceid : 
u'0f2ab3fb-977f-40ea-aa53-b95be017e38c', jobresulttype : u'object', 
jobinstancetype : u'Volume', accountid : 
u'db0c6f40-940e-43d4-bfb5-6afecbbe331d'}
  begin captured stdout  -
=== TestName: test_06_volume_snapshot_policy_hourly | Status : EXCEPTION ===






 [Automation] test_escalations_volumes test cases failing while attaching 
 -

 Key: CLOUDSTACK-7565
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7565
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation
Affects Versions: 4.5.0
 Environment: KVM 
Reporter: Rayees Namathponnan
Priority: Critical
 Fix For: 4.5.0


 Test case failing with QEMU error 
 Job failed: {jobprocstatus : 0, created : u'2014-09-13T22:11:54-0700', 
 jobresult : {errorcode : 530, errortext : u'Unexpected exception'}, cmd : 
 u'org.apache.cloudstack.api.command.user.volume.AttachVolumeCmd', userid : 
 u'40ce6901-3037-4a9b-9720-c3525af1971e', jobstatus : 2, jobid : 
 u'e6e007e0-4026-4476-ba21-86ba6da2fd46', jobresultcode : 530, jobinstanceid : 
 u'0f2ab3fb-977f-40ea-aa53-b95be017e38c', jobresulttype : u'object', 
 jobinstancetype : u'Volume', accountid : 
 u'db0c6f40-940e-43d4-bfb5-6afecbbe331d'}
   begin captured stdout  -
 === TestName: test_06_volume_snapshot_policy_hourly | Status : EXCEPTION ===
 Test cases are failing while attaching a volume; the error is from QEMU about a 
 duplicate volume id, and we cannot fix this issue in QEMU. 
  I tested attach volume manually, and it works
  there is a volume attach test case in BVT, and it passes 
 We need to know why attach passes in BVT but not in regression. 
 We may need to update the test case to work with KVM. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7565) [Automation] test_escalations_volumes test cases failing while attaching

2014-09-16 Thread Rayees Namathponnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rayees Namathponnan updated CLOUDSTACK-7565:

Assignee: Gaurav Aradhye

 [Automation] test_escalations_volumes test cases failing while attaching 
 -

 Key: CLOUDSTACK-7565
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7565
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation
Affects Versions: 4.5.0
 Environment: KVM 
Reporter: Rayees Namathponnan
Assignee: Gaurav Aradhye
Priority: Critical
 Fix For: 4.5.0


 Test case failing with QEMU error 
 Job failed: {jobprocstatus : 0, created : u'2014-09-13T22:11:54-0700', 
 jobresult : {errorcode : 530, errortext : u'Unexpected exception'}, cmd : 
 u'org.apache.cloudstack.api.command.user.volume.AttachVolumeCmd', userid : 
 u'40ce6901-3037-4a9b-9720-c3525af1971e', jobstatus : 2, jobid : 
 u'e6e007e0-4026-4476-ba21-86ba6da2fd46', jobresultcode : 530, jobinstanceid : 
 u'0f2ab3fb-977f-40ea-aa53-b95be017e38c', jobresulttype : u'object', 
 jobinstancetype : u'Volume', accountid : 
 u'db0c6f40-940e-43d4-bfb5-6afecbbe331d'}
   begin captured stdout  -
 === TestName: test_06_volume_snapshot_policy_hourly | Status : EXCEPTION ===
 Test cases are failing while attaching a volume; the error is from QEMU about a 
 duplicate volume id, and we cannot fix this issue in QEMU. 
  I tested attach volume manually, and it works
  there is a volume attach test case in BVT, and it passes 
 We need to know why attach passes in BVT but not in regression. 
 We may need to update the test case to work with KVM. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-7566) Many jobs getting stuck in pending state and cloud is unusable

2014-09-16 Thread Min Chen (JIRA)
Min Chen created CLOUDSTACK-7566:


 Summary: Many jobs getting stuck in pending state and cloud is 
unusable
 Key: CLOUDSTACK-7566
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7566
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.3.0
Reporter: Min Chen
Priority: Blocker
 Fix For: 4.5.0


Many jobs are getting stuck with errors like:

2014-09-09 18:55:41,964 WARN [jobs.impl.AsyncJobMonitor] (Timer-1:ctx-1e7a8a7e) 
Task (job-355415) has been pending for 690 seconds

Even jobs that apparently succeed are getting the same error. The async job 
table is not updated as complete even though the job has completed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7566) Many jobs getting stuck in pending state and cloud is unusable

2014-09-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136374#comment-14136374
 ] 

ASF subversion and git services commented on CLOUDSTACK-7566:
-

Commit a2d85c8cae5f603bbcfcd3659c1207f0bfe461a7 in cloudstack's branch 
refs/heads/master from [~minchen07]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=a2d85c8 ]

CLOUDSTACK-7566:Many jobs getting stuck in pending state and cloud is
unusable.

 Many jobs getting stuck in pending state and cloud is unusable
 --

 Key: CLOUDSTACK-7566
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7566
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
Reporter: Min Chen
Priority: Blocker
 Fix For: 4.5.0


 Many jobs are getting stuck with errors like:
 2014-09-09 18:55:41,964 WARN [jobs.impl.AsyncJobMonitor] 
 (Timer-1:ctx-1e7a8a7e) Task (job-355415) has been pending for 690 seconds
 Even jobs that apparently succeed are getting the same error. The async job 
 table is not updated as complete even though the job has completed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CLOUDSTACK-7566) Many jobs getting stuck in pending state and cloud is unusable

2014-09-16 Thread Min Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Min Chen reassigned CLOUDSTACK-7566:


Assignee: Min Chen

 Many jobs getting stuck in pending state and cloud is unusable
 --

 Key: CLOUDSTACK-7566
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7566
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
Reporter: Min Chen
Assignee: Min Chen
Priority: Blocker
 Fix For: 4.5.0


 Many jobs are getting stuck with errors like:
 2014-09-09 18:55:41,964 WARN [jobs.impl.AsyncJobMonitor] 
 (Timer-1:ctx-1e7a8a7e) Task (job-355415) has been pending for 690 seconds
 Even jobs that apparently succeed are getting the same error. The async job 
 table is not updated as complete even though the job has completed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-7566) Many jobs getting stuck in pending state and cloud is unusable

2014-09-16 Thread Min Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Min Chen resolved CLOUDSTACK-7566.
--
Resolution: Fixed

 Many jobs getting stuck in pending state and cloud is unusable
 --

 Key: CLOUDSTACK-7566
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7566
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
Reporter: Min Chen
Assignee: Min Chen
Priority: Blocker
 Fix For: 4.5.0


 Many jobs are getting stuck with errors like:
 2014-09-09 18:55:41,964 WARN [jobs.impl.AsyncJobMonitor] 
 (Timer-1:ctx-1e7a8a7e) Task (job-355415) has been pending for 690 seconds
 Even jobs that apparently succeed are getting the same error. The async job 
 table is not updated as complete even though the job has completed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7566) Many jobs getting stuck in pending state and cloud is unusable

2014-09-16 Thread Min Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136380#comment-14136380
 ] 

Min Chen commented on CLOUDSTACK-7566:
--

Guard all potentially unhandled exceptions in the MessageBus gate.enter and 
gate.leave routines to avoid a lock holdup. Since each API invokes the 
MessageBus to publish events, any lock holdup will leave jobs pending in the 
system and render the cloud unusable.
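
A minimal sketch of that guarding idea follows. It is hedged and illustrative: MessageBusGate, Subscriber, and publish are hypothetical names, not CloudStack's actual MessageBus API, but the shape (guard each subscriber callback, and release the gate in a finally block) is the pattern the comment describes.

```java
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of guarding a shared publish gate; MessageBusGate and
// Subscriber are illustrative names, not CloudStack's real MessageBus classes.
public class MessageBusGate {
    interface Subscriber { void onMessage(String subject, Object payload); }

    private final ReentrantLock gate = new ReentrantLock();

    boolean isGateHeld() { return gate.isLocked(); }

    // Publish under the gate, guarding both each subscriber callback and the
    // unlock path so a misbehaving subscriber can never leave the gate held.
    void publish(String subject, Object payload, List<Subscriber> subscribers) {
        gate.lock();
        try {
            for (Subscriber s : subscribers) {
                try {
                    s.onMessage(subject, payload);
                } catch (RuntimeException e) {
                    // log and continue: one bad subscriber must not wedge every job
                }
            }
        } finally {
            gate.unlock(); // always release, even on unexpected errors
        }
    }

    public static void main(String[] args) {
        MessageBusGate bus = new MessageBusGate();
        bus.publish("vm.start", null, List.of(
                (subject, payload) -> { throw new RuntimeException("bad subscriber"); }));
        System.out.println(bus.isGateHeld()); // false: gate released despite the throw
    }
}
```

Without the try/finally, the throwing subscriber would leave the lock held, and every later publish (i.e. every API call) would block behind it, which matches the "jobs stuck in pending" symptom.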

 Many jobs getting stuck in pending state and cloud is unusable
 --

 Key: CLOUDSTACK-7566
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7566
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
Reporter: Min Chen
Priority: Blocker
 Fix For: 4.5.0


 Many jobs are getting stuck with errors like:
 2014-09-09 18:55:41,964 WARN [jobs.impl.AsyncJobMonitor] 
 (Timer-1:ctx-1e7a8a7e) Task (job-355415) has been pending for 690 seconds
 Even jobs that apparently succeed are getting the same error. The async job 
 table is not updated as complete even though the job has completed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7551) Automate ACL test cases relating to impersonation when deploying VM in shared network.

2014-09-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136448#comment-14136448
 ] 

ASF subversion and git services commented on CLOUDSTACK-7551:
-

Commit 65608e99495007183bb8e4043b0f1efe527a7e85 in cloudstack's branch 
refs/heads/master from [~sangeethah]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=65608e9 ]

CLOUDSTACK-7551 - Automate ACL test cases relating to impersonation when 
depoying VM in shared network


 Automate ACL test cases relating to impersonation when deploying VM in shared 
 network.
 --

 Key: CLOUDSTACK-7551
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7551
 Project: CloudStack
  Issue Type: Task
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: marvin
Affects Versions: 4.4.0
Reporter: Sangeetha Hariharan

 Automate ACL test cases relating to impersonation when deploying VM in shared 
 network.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-2412) [UI]Disable CiscoVnmc provider for PF/SourceNat/StaticNAT/Firewall dropdown list with Shared guest type and VPC Network Offering

2014-09-16 Thread Jessica Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jessica Wang updated CLOUDSTACK-2412:
-
Attachment: jessica_2014_09_16.jpg

 [UI]Disable CiscoVnmc provider for PF/SourceNat/StaticNAT/Firewall dropdown 
 list with Shared guest type and VPC Network Offering
 

 Key: CLOUDSTACK-2412
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2412
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.2.0
Reporter: Sailaja Mada
Assignee: Jessica Wang
 Fix For: 4.4.0

 Attachments: VNNMCinShared.png, jessica_2014_09_16.jpg


 Setup: Advanced Networking Zone
 Steps:
 1. Configure VMWARE Cluster with Nexus 1000v 
 2. Tried to create Shared Network offering with Guest type as shared Or 
 Enable VPC
 Observation.
 CiscoVnmc provider is listed in dropdown for selection with 
 PF/SourceNat/StaticNAT/Firewall  services.
 This is supported only with isolated network. So Disable CiscoVnmc provider 
 for PF/SourceNat/StaticNAT/Firewall dropdown list with Shared guest type and 
 VPC Network Offering



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CLOUDSTACK-2412) [UI]Disable CiscoVnmc provider for PF/SourceNat/StaticNAT/Firewall dropdown list with Shared guest type and VPC Network Offering

2014-09-16 Thread Jessica Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jessica Wang reassigned CLOUDSTACK-2412:


Assignee: Sailaja Mada  (was: Jessica Wang)

Sailaja,

I'm unable to reproduce this bug in 4.5 release (as my attached screenshot 
jessica_2014_09_16.jpg).

If you are still able to reproduce this bug, please provide:

(1) your database dump (against 4.5 release)

(2) a new screenshot of Create Network Offering dialog (like my attached 
screenshot jessica_2014_09_16.jpg = Guest Type is Shared, Firewall is 
selected, options in Firewall Provider dropdown are showing).

thank you.

Jessica 

 [UI]Disable CiscoVnmc provider for PF/SourceNat/StaticNAT/Firewall dropdown 
 list with Shared guest type and VPC Network Offering
 

 Key: CLOUDSTACK-2412
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2412
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.2.0
Reporter: Sailaja Mada
Assignee: Sailaja Mada
 Fix For: 4.4.0

 Attachments: VNNMCinShared.png, jessica_2014_09_16.jpg


 Setup: Advanced Networking Zone
 Steps:
 1. Configure VMWARE Cluster with Nexus 1000v 
 2. Tried to create Shared Network offering with Guest type as shared Or 
 Enable VPC
 Observation.
 CiscoVnmc provider is listed in dropdown for selection with 
 PF/SourceNat/StaticNAT/Firewall  services.
 This is supported only with isolated network. So Disable CiscoVnmc provider 
 for PF/SourceNat/StaticNAT/Firewall dropdown list with Shared guest type and 
 VPC Network Offering



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-7567) Automate ACL test cases relating to deploying VM in shared network with different scopes - All/Domain/Domain with subdomain/Account for Admin, domain admin and regula

2014-09-16 Thread Sangeetha Hariharan (JIRA)
Sangeetha Hariharan created CLOUDSTACK-7567:
---

 Summary: Automate ACL test cases relating to deploying VM in shared 
network with different scopes - All/Domain/Domain with subdomain/Account for 
Admin, domain admin and regular users.
 Key: CLOUDSTACK-7567
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7567
 Project: CloudStack
  Issue Type: Task
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: marvin
Affects Versions: 4.4.0
Reporter: Sangeetha Hariharan


Automate ACL test cases relating to impersonation when deploying VM in shared 
network.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7567) Automate ACL test cases relating to deploying VM in shared network with different scopes - All/Domain/Domain with subdomain/Account for Admin, domain admin and regula

2014-09-16 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-7567:

Description: Automate ACL test cases relating to deploying VM in shared 
network with different scopes - All/Domain/Domain with subdomain/Account for 
Admin, domain admin and regular users.

 Automate ACL test cases relating to deploying VM in shared network with 
 different scopes - All/Domain/Domain with subdomain/Account for Admin, domain 
 admin and regular users.
 -

 Key: CLOUDSTACK-7567
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7567
 Project: CloudStack
  Issue Type: Task
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: marvin
Affects Versions: 4.4.0
Reporter: Sangeetha Hariharan

 Automate ACL test cases relating to deploying VM in shared network with 
 different scopes - All/Domain/Domain with subdomain/Account for Admin, domain 
 admin and regular users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7567) Automate ACL test cases relating to deploying VM in shared network with different scopes - All/Domain/Domain with subdomain/Account for Admin, domain admin and regula

2014-09-16 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-7567:

Description: (was: Automate ACL test cases relating to impersonation 
when depoying VM in shared network.)

 Automate ACL test cases relating to deploying VM in shared network with 
 different scopes - All/Domain/Domain with subdomain/Account for Admin, domain 
 admin and regular users.
 -

 Key: CLOUDSTACK-7567
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7567
 Project: CloudStack
  Issue Type: Task
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: marvin
Affects Versions: 4.4.0
Reporter: Sangeetha Hariharan





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)