[jira] [Commented] (CLOUDSTACK-8580) Users should be able to expunge VMs

2015-08-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693564#comment-14693564
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8580:


Github user DaanHoogland commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/680#discussion_r36865162
  
--- Diff: ui/scripts/instances.js ---
@@ -618,11 +618,11 @@
 createForm: {
 title: 'label.action.destroy.instance',
 desc: 'label.action.destroy.instance',
-isWarning: true,
+isWarning: true,
 preFilter: function(args) {
-if (isAdmin() || isDomainAdmin()) {
-args.$form.find('.form-item[rel=expunge]').css('display', 'inline-block');
-} else {
+// Hide the expunge checkbox when the authenticated user
+// can't expunge VMs. Related to CLOUDSTACK-8580.
--- End diff --

I think the var name is self-explanatory. The comment could go with the 
allocation and not the use of the variable.


 Users should be able to expunge VMs
 ---

 Key: CLOUDSTACK-8580
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8580
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Lennert den Teuling
Priority: Minor

 When automating deployments of CloudStack (with, for example, Terraform) there 
 are situations where VMs get recreated with the same name (and hostname). 
 When VMs are destroyed by a user, the name remains reserved on the network 
 until the VM is truly expunged (depending on expunge.delay). Because of 
 this, some automation tools cannot work, because a new deployment with the 
 same name gives an error.  
 Users do not have the ability to directly expunge VMs (only admins and 
 domain-admins can), but they can destroy them, and the admin can configure the 
 expunge.delay after which VMs truly get removed (expunged). 
 Working with the expunge delay is very safe in case users accidentally remove 
 a VM, but in some cases (when users know what they are doing) there should 
 also be an option to completely remove the VM when destroying it (expunge). 
 Ideally the admin should be able to configure this behavior through the global 
 settings, because I believe the admin deliberately needs to turn it on (off by 
 default).
 We have looked into making our clients domain-admins by default, but that 
 gives them abilities we do not want to give, so we see no other way than just 
 enabling expunge for the user. 
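 For automation, once such a capability exists the workaround is to pass 
 expunge=true on the destroy call itself. A minimal sketch of a signed 
 destroyVirtualMachine request follows; the endpoint, API/secret keys, and VM 
 id are placeholders, and it assumes the deployment permits user-initiated 
 expunge (e.g. via a global setting like the g_allowUserExpungeRecoverVm flag 
 referenced in the UI diff above):

```python
import base64
import hashlib
import hmac
import urllib.parse


def build_signed_query(params, secret_key):
    # CloudStack signing: sort parameters by key, URL-encode the values,
    # lowercase the whole string, HMAC-SHA1 it with the secret key, and
    # append the base64 signature as one more parameter.
    query = "&".join(
        "%s=%s" % (k, urllib.parse.quote(str(v), safe=""))
        for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    return query + "&signature=" + signature


# Destroy and immediately expunge a VM; id and keys are hypothetical.
params = {
    "command": "destroyVirtualMachine",
    "id": "f81dc34a-0000-0000-0000-example",
    "expunge": "true",
    "response": "json",
    "apiKey": "YOUR_API_KEY",
}
print(build_signed_query(params, "YOUR_SECRET_KEY"))
```

 The resulting query string would be appended to the management server's 
 /client/api endpoint.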



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8580) Users should be able to expunge VMs

2015-08-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693567#comment-14693567
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8580:


Github user DaanHoogland commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/680#discussion_r36865186
  
--- Diff: ui/scripts/instances.js ---
@@ -2425,11 +2425,15 @@
 var allowedActions = [];
 
 if (jsonObj.state == 'Destroyed') {
-if (isAdmin() || isDomainAdmin()) {
+// Display expunge and recover action when authenticated user
+// is allowed to expunge or recover VMs. Related to CLOUDSTACK-8580.
--- End diff --

see comment at line 624




[jira] [Commented] (CLOUDSTACK-8580) Users should be able to expunge VMs

2015-08-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693573#comment-14693573
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8580:


Github user DaanHoogland commented on the pull request:

https://github.com/apache/cloudstack/pull/680#issuecomment-130321431
  
Please comment on testing, for instance reference unit or integration tests 
that cover the code in the PR description/comments, or alternatively add 
unit tests.




[jira] [Commented] (CLOUDSTACK-8580) Users should be able to expunge VMs

2015-08-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693568#comment-14693568
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8580:


Github user DaanHoogland commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/680#discussion_r36865220
  
--- Diff: ui/scripts/instances.js ---
@@ -2498,8 +2502,11 @@
 } else if (jsonObj.state == 'Error') {
 allowedActions.push(destroy);
 } else if (jsonObj.state == 'Expunging') {
-if (isAdmin() || isDomainAdmin())
+// Display expunge action when authenticated user
+// is allowed to expunge VMs. Related to CLOUDSTACK-8580.
+if (g_allowUserExpungeRecoverVm) {
--- End diff --

see comment at line 624




[jira] [Assigned] (CLOUDSTACK-8710) site2site vpn iptables rules are not configured on VR

2015-08-12 Thread Remi Bergsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remi Bergsma reassigned CLOUDSTACK-8710:


Assignee: Remi Bergsma

 site2site vpn iptables rules are not configured on VR
 -

 Key: CLOUDSTACK-8710
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8710
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Network Devices
Affects Versions: 4.6.0
Reporter: Jayapal Reddy
Assignee: Remi Bergsma
Priority: Critical

 1. Configure a VPC. 
 2. Configure site-to-site VPN. 
 3. After configuration, go to the VR and check its iptables rules.
 Observed that there are no rules configured for ports 500 and 4500.
 In configure.py there is a method 'configure_iptables' which contains the 
 rules, but these are not applied on the VR during site-to-site VPN configuration.
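 For reference, rules of the kind configure_iptables would need to apply for 
 the tunnel to come up (a sketch only; the public interface name eth1 is an 
 assumption for illustration, not taken from the VR scripts):

```shell
# IKE key exchange and NAT traversal use UDP 500/4500; the tunnel
# payload itself arrives as ESP (and optionally AH) packets.
iptables -A INPUT -i eth1 -p udp --dport 500 -j ACCEPT
iptables -A INPUT -i eth1 -p udp --dport 4500 -j ACCEPT
iptables -A INPUT -i eth1 -p esp -j ACCEPT
iptables -A INPUT -i eth1 -p ah -j ACCEPT
```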





[jira] [Commented] (CLOUDSTACK-8726) Automation for Quickly attaching multiple data disks to a new VM

2015-08-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14694681#comment-14694681
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8726:


Github user nitt10prashant commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/683#discussion_r36940905
  
--- Diff: test/integration/component/test_simultaneous_volume_attach.py ---
@@ -0,0 +1,256 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+#Import Local Modules
+from marvin.cloudstackAPI import *
+from marvin.cloudstackTestCase import cloudstackTestCase, unittest
+from marvin.lib.utils import (cleanup_resources,
+  validateList)
+from marvin.lib.base import (ServiceOffering,
+ VirtualMachine,
+ Account,
+ Volume,
+ DiskOffering,
+ )
+from marvin.lib.common import (get_domain,
+get_zone,
+get_template,
+find_storage_pool_type)
+from marvin.codes import (
+PASS,
+FAILED,
+JOB_FAILED,
+JOB_CANCELLED,
+JOB_SUCCEEDED
+)
+from nose.plugins.attrib import attr
+import time
+
+
+class TestMultipleVolumeAttach(cloudstackTestCase):
+
+@classmethod
+def setUpClass(cls):
+testClient = super(TestMultipleVolumeAttach, 
cls).getClsTestClient()
+cls.apiclient = testClient.getApiClient()
+cls.services = testClient.getParsedTestDataConfig()
+cls._cleanup = []
+# Get Zone, Domain and templates
+cls.domain = get_domain(cls.apiclient)
+cls.zone = get_zone(cls.apiclient, testClient.getZoneForTests())
+cls.services['mode'] = cls.zone.networktype
+cls.hypervisor = testClient.getHypervisorInfo()
+cls.invalidStoragePoolType = False
+# for LXC, if a storage pool of type 'rbd' (e.g. ceph) is not available, skip the test
+if cls.hypervisor.lower() == 'lxc':
+if not find_storage_pool_type(cls.apiclient, storagetype='rbd'):
+# RBD storage type is required for data volumes for LXC
+cls.invalidStoragePoolType = True
+return
+cls.disk_offering = DiskOffering.create(
+cls.apiclient,
+cls.services["disk_offering"]
+)
+
+template = get_template(
+cls.apiclient,
+cls.zone.id,
+cls.services["ostype"]
+)
+if template == FAILED:
+assert False, "get_template() failed to return template with description %s" % cls.services["ostype"]
+
+cls.services["domainid"] = cls.domain.id
+cls.services["zoneid"] = cls.zone.id
+cls.services["template"] = template.id
+cls.services["diskofferingid"] = cls.disk_offering.id
+
+# Create account, service offering, VM etc.
+cls.account = Account.create(
+cls.apiclient,
+cls.services["account"],
+domainid=cls.domain.id
+)
+cls.service_offering = ServiceOffering.create(
+cls.apiclient,
+cls.services["service_offering"]
+)
+cls.virtual_machine = VirtualMachine.create(
+cls.apiclient,
+cls.services,
+accountid=cls.account.name,
+   

[jira] [Commented] (CLOUDSTACK-8727) API call listVirtualMachines returns same keypair

2015-08-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14694710#comment-14694710
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8727:


Github user kansal commented on the pull request:

https://github.com/apache/cloudstack/pull/685#issuecomment-130539282
  
@kishankavala The current implementation of createSSHKeyPair checks for the 
keypair name in the ssh_keypairs table, and if that name exists it returns 
"A keypair with the name exists". But at the time of inserting the keypairs 
it will not insert the same key once again, because of the UNIQUE constraint 
added in the above commit. So, in a way, the duplication problem is 
automatically checked. 
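The effect of such a UNIQUE constraint can be sketched with an in-memory 
stand-in for the ssh_keypairs table (the schema here is simplified for 
illustration, not the real CloudStack DDL):

```python
import sqlite3

# Simplified stand-in for cloud.ssh_keypairs: a UNIQUE constraint on
# public_key makes a duplicate registration fail at insert time.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE ssh_keypairs (
        id INTEGER PRIMARY KEY,
        keypair_name TEXT NOT NULL,
        public_key TEXT NOT NULL UNIQUE
    )
""")
db.execute(
    "INSERT INTO ssh_keypairs (keypair_name, public_key) VALUES (?, ?)",
    ("key-a", "ssh-rsa AAAA... user@host"),
)
try:
    # The same public key under a different name is rejected outright,
    # so listVirtualMachines can no longer match two rows for one key.
    db.execute(
        "INSERT INTO ssh_keypairs (keypair_name, public_key) VALUES (?, ?)",
        ("key-b", "ssh-rsa AAAA... user@host"),
    )
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)
```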


 API call listVirtualMachines returns same keypair
 -

 Key: CLOUDSTACK-8727
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8727
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Kshitij Kansal

 If you register 2 SSH keypairs with the same public key, then the 
 listVirtualMachines API call will only return the first keypair. Although it is 
 a very rare case, and it generally doesn't make much sense to register the same 
 key under different names, it can still be fixed. 





[jira] [Commented] (CLOUDSTACK-8727) API call listVirtualMachines returns same keypair

2015-08-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14694680#comment-14694680
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8727:


Github user kansal commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/685#discussion_r36940851
  
--- Diff: setup/db/db/schema-452to460.sql ---
@@ -353,6 +353,8 @@ CREATE VIEW `cloud`.`user_vm_view` AS
 `cloud`.`user_vm_details` `custom_speed`  ON 
(((`custom_speed`.`vm_id` = `cloud`.`vm_instance`.`id`) and 
(`custom_speed`.`name` = 'CpuSpeed')))
left join
 `cloud`.`user_vm_details` `custom_ram_size`  ON 
(((`custom_ram_size`.`vm_id` = `cloud`.`vm_instance`.`id`) and 
(`custom_ram_size`.`name` = 'memory')));
+delete s1 from ssh_keypairs s1, ssh_keypairs s2 where s1.id > s2.id and s1.public_key = s2.public_key;
--- End diff --

@sedukull there are no constraints that refer to ssh_keypairs via a foreign 
key. Yes, there are foreign key constraints on the columns of ssh_keypairs, 
but deleting these rows won't create any problem, as they are not referenced 
further. 




[jira] [Commented] (CLOUDSTACK-8711) public_ip type resource count for an account is not decremented upon IP range deletion

2015-08-12 Thread Rajani Karuturi (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692972#comment-14692972
 ] 

Rajani Karuturi commented on CLOUDSTACK-8711:
-

[~ManeeshaP], please assign the issue when you start working and mark it 
resolved when the pull request is merged.

 public_ip type resource count for an account is not decremented upon IP range 
 deletion
 --

 Key: CLOUDSTACK-8711
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8711
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.1
Reporter: Maneesha
Assignee: Maneesha
 Fix For: 4.6.0


 When deleting an IP range which is associated with an account, the resource 
 count for public_ip is not decremented accordingly, which prevents adding any 
 new ranges to that account once the max limit is reached.
 Repro steps:
 1. Add an IP range and associate it with a particular account. This will 
 increment the account's public_ip resource count by the range count.
 2. Now delete this range and check the account's public_ip resource count: 
 it will not be decreased.





[jira] [Resolved] (CLOUDSTACK-8548) Message translations in Japanese and Chinese

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi resolved CLOUDSTACK-8548.
-
Resolution: Fixed

 Message translations in Japanese and Chinese
 

 Key: CLOUDSTACK-8548
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8548
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.6.0
Reporter: Ramamurti Subramanian
Assignee: Ramamurti Subramanian
 Fix For: 4.6.0


 The message keys are sorted to match the English messages file. None of 
 the messages are removed.





[jira] [Commented] (CLOUDSTACK-8548) Message translations in Japanese and Chinese

2015-08-12 Thread Rajani Karuturi (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692975#comment-14692975
 ] 

Rajani Karuturi commented on CLOUDSTACK-8548:
-

[~ramamurtis], please assign the issue when you start working and mark it 
resolved when the PR is merged.



[jira] [Updated] (CLOUDSTACK-7410) OVS distributed routing + KVM / NameError: name 'configure_ovs_bridge_for_routing_policies' is not defined

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-7410:

Fix Version/s: (was: 4.5.2)
   (was: 4.6.0)
   (was: Future)

 OVS distributed routing + KVM / NameError: name 
 'configure_ovs_bridge_for_routing_policies' is not defined
 --

 Key: CLOUDSTACK-7410
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7410
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.4.0
 Environment: Environment:
  CloudStack 4.4.0
  KVM(CentOS6.4)
  Open vSwitch 1.11.0
  VPC Offering
:
   Connectivity:Ovs
   DistributedRouter:On
  Network Offering
:
   Virtual Networking:Ovs
Reporter: satoru nakaya
 Attachments: computenode_agent.zip, computenode_messages.zip, 
 management-server.zip


 The following error is output when you deploy VM instances in a VPC.
 2014-08-03 14:52:25,608 DEBUG [c.c.a.t.Request] (AgentManager-Handler-5:null) 
 Seq 8-7588002422165864587: Processing: { Ans: , MgmtId: 52236007434, via: 8, 
 Ver: v1, Flags: 10, 
 [{com.cloud.agent.api.Answer:{result:false,details:Traceback (most 
 recent call last): File 
 \/usr/share/cloudstack-common/scripts/vm/network/vnet/ovstunnel.py\, line 
 302, in configure_ovs_bridge_for_routing_policies(bridge, config)NameError: 
 name 'configure_ovs_bridge_for_routing_policies' is not defined,wait:0}}] }
 2014-08-03 14:52:25,608 DEBUG [c.c.a.t.Request] 
 (Work-Job-Executor-57:ctx-f0d66eef job-299/job-300 ctx-3845c1db) Seq 
 8-7588002422165864587: Received: { Ans: , MgmtId: 52236007434, via: 8, Ver: 
 v1, Flags: 10, { Answer } }
 2014-08-03 14:52:25,608 DEBUG [c.c.n.o.OvsTunnelManagerImpl] 
 (Work-Job-Executor-57:ctx-f0d66eef job-299/job-300 ctx-3845c1db) Failed to 
 update the host 8 with latest routing policies.
 2014-08-03 14:52:25,609 DEBUG [c.c.n.o.OvsTunnelManagerImpl] 
 (Work-Job-Executor-57:ctx-f0d66eef job-299/job-300 ctx-3845c1db) Failed to 
 send VPC routing policy change update to host : 8. But moving on with sending 
 the updates to the rest of the hosts.
 :
 2014-08-03 14:52:25,622 ERROR [c.c.v.VirtualMachineManagerImpl] 
 (Work-Job-Executor-57:ctx-f0d66eef job-299/job-300 ctx-3845c1db) Failed to 
 start instance VM[User|i-2-40-VM]
 com.cloud.utils.exception.CloudRuntimeException: Failed to add VPC router 
 VM[DomainRouter|r-38-VM] to guest network 
 Ntwk[95583273-9dc0-4569-be29-706b28543932|Guest|15]
 at 
 com.cloud.network.element.VpcVirtualRouterElement.implement(VpcVirtualRouterElement.java:188)





[jira] [Updated] (CLOUDSTACK-7369) assignVirtualMachine API name not intuitive

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-7369:

Fix Version/s: (was: 4.6.0)

 assignVirtualMachine API name not intuitive
 ---

 Key: CLOUDSTACK-7369
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7369
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: API
Affects Versions: 4.5.0
Reporter: Nitin Mehta

 I just went to a meetup and was talking to [~jlkinsel], who was moving VMs 
 from one account to another manually through a huge set of DB queries. We 
 have had this API in place since 3.0.x, but folks are not aware of it due to 
 its unintuitive name. I can think of 2 solutions at the moment.
 1. Create another API, say changeVmOwnership, and internally point it to the 
 same logic as assignVirtualMachine. Mark assignVirtualMachine as a deprecated 
 API name for 5.0.
 2. OR in the API documentation name it changeVmOwnership, so that folks 
 would know such functionality exists and is available through the APIs.





[jira] [Closed] (CLOUDSTACK-6835) db abstraction layer in upgrade path

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi closed CLOUDSTACK-6835.
---
Resolution: Fixed

 db abstraction layer in upgrade path
 

 Key: CLOUDSTACK-6835
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6835
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Rajani Karuturi
Priority: Critical

 About 198 of the issues reported by the Coverity scan [1] on 26 May, 2014 are 
 in the Upgrade###to###.java code, and many of them relate to resource leaks 
 from not closing the prepared statements. 
 I think we should have a DB abstraction layer in the upgrade path so that a 
 developer who needs to select/insert/update data in the upgrade path need 
 not write native SQL and worry about these recurring issues.
 [1] https://scan.coverity.com/projects/943
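 As an illustration of the idea only (not CloudStack's actual DAO layer), a 
 tiny helper that owns statement cleanup, so upgrade code cannot leak it; 
 sqlite stands in for MySQL here and all names are hypothetical:

```python
import sqlite3
from contextlib import closing


def execute_upgrade_sql(conn, sql, params=()):
    # closing() guarantees the cursor is released even when the
    # statement raises, which is the leak Coverity kept flagging
    # in hand-written upgrade code.
    with closing(conn.cursor()) as cur:
        cur.execute(sql, params)
        return cur.fetchall()


conn = sqlite3.connect(":memory:")
execute_upgrade_sql(conn, "CREATE TABLE configuration (name TEXT, value TEXT)")
execute_upgrade_sql(conn, "INSERT INTO configuration VALUES (?, ?)",
                    ("expunge.delay", "86400"))
rows = execute_upgrade_sql(conn, "SELECT value FROM configuration WHERE name = ?",
                           ("expunge.delay",))
print(rows)
```

 With such a helper, upgrade steps become one-liners and the cleanup logic 
 lives in exactly one place.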





[jira] [Updated] (CLOUDSTACK-6698) listResourceDetals - normal user able to list details not belonging to it

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-6698:

Fix Version/s: (was: 4.6.0)

 listResourceDetals - normal user able to list details not belonging to it
 -

 Key: CLOUDSTACK-6698
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6698
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.4.0
Reporter: Nitin Mehta
Priority: Critical







[jira] [Commented] (CLOUDSTACK-8701) Allow SAML users to switch accounts

2015-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693144#comment-14693144
 ] 

ASF subversion and git services commented on CLOUDSTACK-8701:
-

Commit 1527ad6964f240e0c98606510fb1c8806eeb4e04 in cloudstack's branch 
refs/heads/4.5-samlfixes from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=1527ad6 ]

CLOUDSTACK-8701: Add listandswitchsamlaccount API test and add boundary checks

- Adds unit test for ListAndSwitchSAMLAccountCmd
- Checks and logs in user only if they are enabled
- If saml user switches to a locked account, send appropriate error message

Signed-off-by: Rohit Yadav rohit.ya...@shapeblue.com


 Allow SAML users to switch accounts
 ---

 Key: CLOUDSTACK-8701
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8701
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.6.0, 4.5.2


 SAML-authenticated users may have multiple accounts across domains, as there 
 may be user accounts with the same username. The current way in 4.5/master is 
 to grab the domain information beforehand, which provides a bad UX, as users 
 would need to remember their domain path names (more difficult than 
 remembering the domain name).





[jira] [Updated] (CLOUDSTACK-8437) Automation: test_04_create_multiple_networks_with_lb_1_network_offering - Fails

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8437:

Fix Version/s: (was: 4.6.0)

 Automation: test_04_create_multiple_networks_with_lb_1_network_offering - 
 Fails
 ---

 Key: CLOUDSTACK-8437
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8437
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation
Affects Versions: 4.5.0, 4.6.0
Reporter: Abhinandan Prateek
Priority: Critical

 test/integration/component/test_vpc_network.py 
 If a network with the LB service exists in a VPC, creating a second network 
 with LB should fail. 
 This is a rough description; more investigation is required to check whether 
 the test is fine and whether it is a product defect.





[jira] [Updated] (CLOUDSTACK-8442) [VMWARE] VM Cannot be powered on after restoreVirtualMachine

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8442:

Fix Version/s: (was: 4.6.0)

 [VMWARE] VM Cannot be powered on after restoreVirtualMachine 
 -

 Key: CLOUDSTACK-8442
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8442
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.5.1
 Environment: ACS 4.5.1, CentOS 6.6
 vSphere 5.5 with NFS for Primary Storage
Reporter: ilya musayev
  Labels: vmware

 While the restoreVirtualMachine call is successful, when you try to power on 
 the VM, vSphere fails to find and use the proper ROOT volume. 
 To recreate this issue, create a project, then deploy a VM with template X in 
 the same project, then use the restoreVirtualMachine API call to alter the 
 ROOT disk and attempt to power on.
 The same errors are seen in vCenter:
 2015-05-05 06:38:43,962 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077) Add job-8077 into job monitoring
 2015-05-05 06:38:43,969 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
 (catalina-exec-7:ctx-6e032e40 ctx-8bb374e0) submit async job-8077, details: 
 AsyncJobVO {id:8077, userId: 2, accountId: 2, instanceType: VirtualMachine, 
 instanceId: 1350, cmd: 
 org.apache.cloudstack.api.command.admin.vm.StartVMCmdByAdmin, cmdInfo: 
 {id:bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2,response:json,sessionkey:EfTBAqeGH5ivA9E7W1q7gcYXWgI\u003d,ctxDetails:{\com.cloud.vm.VirtualMachine\:\bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2\},cmdEventType:VM.START,ctxUserId:2,httpmethod:GET,projectid:98b2e16f-1e4f-4b19-866b-154ef5aad53d,_:1430807923839,uuid:bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2,ctxAccountId:2,ctxStartEventId:17421},
  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
 null, initMsid: 345049223690, completeMsid: null, lastUpdated: null, 
 lastPolled: null, created: null}
 2015-05-05 06:38:43,978 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077) Executing AsyncJobVO {id:8077, 
 userId: 2, accountId: 2, instanceType: VirtualMachine, instanceId: 1350, cmd: 
 org.apache.cloudstack.api.command.admin.vm.StartVMCmdByAdmin, cmdInfo: 
 {id:bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2,response:json,sessionkey:EfTBAqeGH5ivA9E7W1q7gcYXWgI\u003d,ctxDetails:{\com.cloud.vm.VirtualMachine\:\bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2\},cmdEventType:VM.START,ctxUserId:2,httpmethod:GET,projectid:98b2e16f-1e4f-4b19-866b-154ef5aad53d,_:1430807923839,uuid:bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2,ctxAccountId:2,ctxStartEventId:17421},
  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
 null, initMsid: 345049223690, completeMsid: null, lastUpdated: null, 
 lastPolled: null, created: null}
 2015-05-05 06:38:43,990 WARN  [c.c.a.d.ParamGenericValidationWorker] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Received unknown 
 parameters for command startVirtualMachine. Unknown parameters : projectid
 2015-05-05 06:38:44,020 DEBUG [c.c.n.NetworkModelImpl] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Service 
 SecurityGroup is not supported in the network id=224
 2015-05-05 06:38:44,025 DEBUG [c.c.n.NetworkModelImpl] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Service 
 SecurityGroup is not supported in the network id=224
 2015-05-05 06:38:44,045 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Deploy avoids pods: 
 [], clusters: [], hosts: []
 2015-05-05 06:38:44,046 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) DeploymentPlanner 
 allocation algorithm: com.cloud.deploy.FirstFitPlanner@49361de4
 2015-05-05 06:38:44,046 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Trying to allocate a 
 host and storage pools from dc:1, pod:1,cluster:null, requested cpu: 100, 
 requested ram: 2147483648
 2015-05-05 06:38:44,047 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Is ROOT volume READY 
 (pool already allocated)?: No
 2015-05-05 06:38:44,047 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) This VM has last 
 host_id specified, trying to choose the same host: 5
 2015-05-05 06:38:44,055 DEBUG [c.c.c.CapacityManagerImpl] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Checking if host: 5 
 has enough capacity for requested CPU: 100 and requested RAM: 2147483648 , 
 cpuOverprovisioningFactor: 1.0
 2015-05-05 06:38:44,058 DEBUG [c.c.c.CapacityManagerImpl] 

[jira] [Updated] (CLOUDSTACK-8499) UI reload performance is poor in index.jsp

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8499:

Assignee: Rafael Santos Antunes da Fonseca

 UI reload performance is poor in index.jsp
 -

 Key: CLOUDSTACK-8499
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8499
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server, UI
Affects Versions: 4.6.0, 4.4.4, 4.5.2
Reporter: Rafael Santos Antunes da Fonseca
Assignee: Rafael Santos Antunes da Fonseca
 Fix For: Future, 4.6.0, 4.5.2


 A timestamp is being added to the URLs of some of the static files in 
 ui/index.jsp, which prevents Tomcat from responding with 304 to the client, 
 so cached client files are resent on every load.
 This hurts page reload speed badly.
 The problem affects all versions since 4.0.
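 The mechanism is easy to sketch outside of JSP (a minimal Python sketch; the 
 helper names are hypothetical, not CloudStack code): a per-render timestamp in 
 the query string makes every asset URL unique, so the browser can never 
 revalidate and receive a 304, while a content-derived fingerprint keeps 
 unchanged assets on a stable, cacheable URL.

```python
import hashlib
import time

def timestamped_url(path: str) -> str:
    # A fresh timestamp per render: the URL changes on every page load,
    # so the browser can never send a conditional request and get a 304.
    return "%s?t=%d" % (path, time.time_ns())

def fingerprinted_url(path: str, content: bytes) -> str:
    # A digest of the file content: the URL changes only when the file
    # changes, so unchanged assets stay cacheable across reloads.
    return "%s?v=%s" % (path, hashlib.md5(content).hexdigest()[:8])

asset = b"var cloudStack = {};"
urls = {fingerprinted_url("/scripts/instances.js", asset) for _ in range(3)}
assert len(urls) == 1  # stable across renders, unlike timestamped_url
```

 The fix described above amounts to dropping or stabilizing the token so that 
 Tomcat's conditional-request handling can do its job.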



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-8499) UI reload performance is poor in index.jsp

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi resolved CLOUDSTACK-8499.
-
Resolution: Fixed

 UI reload performance is poor in index.jsp
 -

 Key: CLOUDSTACK-8499
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8499
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server, UI
Affects Versions: 4.6.0, 4.4.4, 4.5.2
Reporter: Rafael Santos Antunes da Fonseca
Assignee: Rafael Santos Antunes da Fonseca
 Fix For: Future, 4.6.0, 4.5.2


 A timestamp is being added to the URLs of some of the static files in 
 ui/index.jsp, which prevents Tomcat from responding with 304 to the client, 
 so cached client files are resent on every load.
 This hurts page reload speed badly.
 The problem affects all versions since 4.0.





[jira] [Updated] (CLOUDSTACK-7759) [VMWare]javax.xml.ws.soap.SOAPFaultException during system vms start

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-7759:

Assignee: (was: Likitha Shetty)

 [VMWare]javax.xml.ws.soap.SOAPFaultException during system vms start
 

 Key: CLOUDSTACK-7759
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7759
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.5.0
 Environment: Latest build from 4.5
Reporter: Sanjeev N
 Attachments: management-server.rar


 [VMWare]javax.xml.ws.soap.SOAPFaultException during system vms start
 Steps to Reproduce:
 ===
 1. Create a CS 4.5 setup using a VMware cluster
 2. Wait for the system VMs to come up
 Observations:
 ==
 During system VM start-up, many javax.xml.ws.soap.SOAPFaultExceptions appear 
 in the MS log file
 2014-10-21 21:17:28,104 WARN  [c.c.h.v.m.HttpNfcLeaseMO] (Thread-29:null) 
 Unexpected exception
 javax.xml.ws.soap.SOAPFaultException: A specified parameter was not correct.
 percent
 at 
 com.sun.xml.internal.ws.fault.SOAP11Fault.getProtocolException(SOAP11Fault.java:178)
 at 
 com.sun.xml.internal.ws.fault.SOAPFaultBuilder.createException(SOAPFaultBuilder.java:119)
 at 
 com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:108)
 at 
 com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:78)
 at com.sun.xml.internal.ws.client.sei.SEIStub.invoke(SEIStub.java:129)
 at $Proxy413.httpNfcLeaseProgress(Unknown Source)
 at 
 com.cloud.hypervisor.vmware.mo.HttpNfcLeaseMO.updateLeaseProgress(HttpNfcLeaseMO.java:104)
 at 
 com.cloud.hypervisor.vmware.mo.HttpNfcLeaseMO$ProgressReporter.run(HttpNfcLeaseMO.java:163)
 But these exceptions disappear once the system VMs are up and running.





[jira] [Updated] (CLOUDSTACK-7759) [VMWare]javax.xml.ws.soap.SOAPFaultException during system vms start

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-7759:

Fix Version/s: (was: 4.6.0)

 [VMWare]javax.xml.ws.soap.SOAPFaultException during system vms start
 

 Key: CLOUDSTACK-7759
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7759
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.5.0
 Environment: Latest build from 4.5
Reporter: Sanjeev N
 Attachments: management-server.rar


 [VMWare]javax.xml.ws.soap.SOAPFaultException during system vms start
 Steps to Reproduce:
 ===
 1. Create a CS 4.5 setup using a VMware cluster
 2. Wait for the system VMs to come up
 Observations:
 ==
 During system VM start-up, many javax.xml.ws.soap.SOAPFaultExceptions appear 
 in the MS log file
 2014-10-21 21:17:28,104 WARN  [c.c.h.v.m.HttpNfcLeaseMO] (Thread-29:null) 
 Unexpected exception
 javax.xml.ws.soap.SOAPFaultException: A specified parameter was not correct.
 percent
 at 
 com.sun.xml.internal.ws.fault.SOAP11Fault.getProtocolException(SOAP11Fault.java:178)
 at 
 com.sun.xml.internal.ws.fault.SOAPFaultBuilder.createException(SOAPFaultBuilder.java:119)
 at 
 com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:108)
 at 
 com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:78)
 at com.sun.xml.internal.ws.client.sei.SEIStub.invoke(SEIStub.java:129)
 at $Proxy413.httpNfcLeaseProgress(Unknown Source)
 at 
 com.cloud.hypervisor.vmware.mo.HttpNfcLeaseMO.updateLeaseProgress(HttpNfcLeaseMO.java:104)
 at 
 com.cloud.hypervisor.vmware.mo.HttpNfcLeaseMO$ProgressReporter.run(HttpNfcLeaseMO.java:163)
 But these exceptions disappear once the system VMs are up and running.





[jira] [Updated] (CLOUDSTACK-8415) [VMware] SSVM shutdown during snapshot operation results in disks to be left behind

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8415:

Assignee: Suresh Kumar Anaparti  (was: Rajani Karuturi)

 [VMware] SSVM shutdown during snapshot operation results in disks to be left 
 behind 
 

 Key: CLOUDSTACK-8415
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8415
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Likitha Shetty
Assignee: Suresh Kumar Anaparti
 Fix For: 4.6.0


 Partial disks are the residue of a snapshot operation that failed because the 
 SSVM was rebooted or shut down. These disks are not cleaned up on secondary 
 storage and must be removed manually to release space.
 +Steps to reproduce+
 1. Initiate a volume snapshot operation.
 2. Destroy SSVM while the operation is in progress.
 3. Check the snapshot folder in secondary storage - Files including disks are 
 present in the folder and are never cleaned up. 
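 A defensive sketch of the missing cleanup (plain Python with hypothetical 
 names, not CloudStack's actual VMware snapshot path): if the copy step is 
 interrupted, the partial file is removed rather than left behind. A hard kill 
 of the SSVM would skip even this handler, so a periodic sweep of orphaned 
 files would still be needed in a real fix.

```python
import os
import tempfile

def copy_snapshot(src: bytes, dest_dir: str, name: str) -> str:
    """Write a snapshot file, removing the partial file if the write fails.

    Hypothetical stand-in for the SSVM copy step described above.
    """
    dest = os.path.join(dest_dir, name)
    try:
        with open(dest, "wb") as f:
            f.write(src[: len(src) // 2])
            raise IOError("SSVM shut down mid-copy")  # simulated interruption
    except IOError:
        if os.path.exists(dest):
            os.remove(dest)  # clean up the partial disk instead of leaving it
        raise
    return dest
```

 Running it against a temporary directory leaves no residue: the partial 
 `name` file is deleted before the error is re-raised to the caller.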





[jira] [Assigned] (CLOUDSTACK-8415) [VMware] SSVM shutdown during snapshot operation results in disks to be left behind

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi reassigned CLOUDSTACK-8415:
---

Assignee: Rajani Karuturi  (was: Suresh Kumar Anaparti)

 [VMware] SSVM shutdown during snapshot operation results in disks to be left 
 behind 
 

 Key: CLOUDSTACK-8415
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8415
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Likitha Shetty
Assignee: Rajani Karuturi
 Fix For: 4.6.0


 Partial disks are the residue of a snapshot operation that failed because the 
 SSVM was rebooted or shut down. These disks are not cleaned up on secondary 
 storage and must be removed manually to release space.
 +Steps to reproduce+
 1. Initiate a volume snapshot operation.
 2. Destroy SSVM while the operation is in progress.
 3. Check the snapshot folder in secondary storage - Files including disks are 
 present in the folder and are never cleaned up. 





[jira] [Updated] (CLOUDSTACK-8711) public_ip type resource count for an account is not decremented upon IP range deletion

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8711:

Assignee: Maneesha

 public_ip type resource count for an account is not decremented upon IP range 
 deletion
 --

 Key: CLOUDSTACK-8711
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8711
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.1
Reporter: Maneesha
Assignee: Maneesha
 Fix For: 4.6.0


 When an IP range associated with an account is deleted, the account's 
 public_ip resource count is not decremented accordingly. Once the account 
 reaches its maximum limit, no new ranges can be added to it.
 Repro steps:
 -
 1. Add an IP range and associate it with a particular account. The account's 
 public_ip resource count increases by the size of the range.
 2. Delete the range and check the account's public_ip resource count: it is 
 not decreased.
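 The expected accounting is easy to state as a toy model (Python, not 
 CloudStack code; class and field names are illustrative): dedicating a range 
 increments the account's public_ip count, and deleting it must decrement by 
 the same amount, otherwise the account eventually hits its limit while 
 holding no IPs at all.

```python
class AccountResources:
    """Toy model of per-account public_ip accounting."""

    def __init__(self, max_public_ips: int):
        self.max_public_ips = max_public_ips
        self.public_ip_count = 0

    def add_range(self, size: int) -> None:
        if self.public_ip_count + size > self.max_public_ips:
            raise RuntimeError("public_ip limit exceeded")
        self.public_ip_count += size  # incremented when the range is dedicated

    def delete_range(self, size: int) -> None:
        # Per the report, this decrement was effectively missing, so deleted
        # ranges kept counting against the account's limit forever.
        self.public_ip_count -= size

acct = AccountResources(max_public_ips=10)
acct.add_range(10)
acct.delete_range(10)
acct.add_range(10)  # only possible because delete_range decremented the count
```

 With the decrement in place, step 2 of the repro would leave the count at 
 zero and a new range of the same size could be added again.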





[jira] [Commented] (CLOUDSTACK-8713) KVM Power state report not properly parsed (Exception) resulting in HA is not working on CentOS 7

2015-08-12 Thread Rajani Karuturi (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692971#comment-14692971
 ] 

Rajani Karuturi commented on CLOUDSTACK-8713:
-

Removing fixVersion from bugs with an empty assignee.
(For an open issue, fixVersion implies the fix will be available in that 
release; for a resolved issue, it means the fix has been available since that 
release. Please set the fixVersion only when you plan to work on the issue or 
know that someone is working on it, and in that case update the assignee as 
well.)


 KVM Power state report not properly parsed (Exception) resulting in HA is not 
 working on CentOS 7
 -

 Key: CLOUDSTACK-8713
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8713
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.6.0
 Environment: KVM on CentOS 7, management server running latest master 
 aka 4.6.0
Reporter: Remi Bergsma
Priority: Critical

 While testing a PR, I found that HA on KVM does not work properly. 
 Steps to reproduce:
 - Spin up some VMs on KVM using an HA offering
 - Go to the KVM hypervisor and kill one of them to simulate a crash:
 virsh destroy 6 (change the number)
 - Watch how CloudStack handles the missing VM
 Result:
 - The VM stays down and is not started
 Expected result:
 - The VM should be started somewhere
 Cause:
 The power state report from the hypervisor is not parsed properly, so the VM 
 is never marked Stopped. HA never kicks in, and the VM stays down.
 The database reports PowerReportMissing. Starting the VM manually works fine.
 select name,power_state,instance_name,state from vm_instance where 
 name='test003';
 +---------+--------------------+---------------+---------+
 | name    | power_state        | instance_name | state   |
 +---------+--------------------+---------------+---------+
 | test003 | PowerReportMissing | i-2-6-VM      | Running |
 +---------+--------------------+---------------+---------+
 1 row in set (0.00 sec)
 I also tried crashing a KVM hypervisor, and the same thing happens.
 I haven't tested other hypervisors; could anyone verify this?
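 The stack trace below shows Gson's EnumTypeAdapter refusing the token 
 Stopped when it expects a VirtualMachine$PowerState. A minimal Python sketch 
 of the same strict mapping (the enum member names here are assumptions for 
 illustration, not copied from the CloudStack source): a value drawn from the 
 wrong enum fails the lookup, which is exactly the shape of this failure.

```python
import json
from enum import Enum

class PowerState(Enum):
    # Stand-in for com.cloud.vm.VirtualMachine$PowerState; member
    # names are assumed for this sketch.
    PowerOn = "PowerOn"
    PowerOff = "PowerOff"
    PowerReportMissing = "PowerReportMissing"

def parse_answer_state(payload: str) -> PowerState:
    state = json.loads(payload)["state"]
    try:
        return PowerState[state]  # strict lookup by member name
    except KeyError:
        # "Stopped" is a VM lifecycle state, not a power state, so a
        # strict enum adapter rejects it instead of coercing it.
        raise ValueError("cannot deserialize %r as PowerState" % state)
```

 Sending a lifecycle-state token over a channel deserialized into the power 
 state enum therefore fails the whole answer, and the VM is never marked 
 Stopped.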
 Logs:
 2015-08-06 15:40:46,809 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
 (AgentManager-Handler-16:null) VM state report is updated. host: 1, vm id: 6, 
 power state: PowerReportMissing 
 2015-08-06 15:40:46,815 INFO  [c.c.v.VirtualMachineManagerImpl] 
 (AgentManager-Handler-16:null) VM i-2-6-VM is at Running and we received a 
 power-off report while there is no pending jobs on it
 2015-08-06 15:40:46,815 INFO  [c.c.v.VirtualMachineManagerImpl] 
 (AgentManager-Handler-16:null) Detected out-of-band stop of a HA enabled VM 
 i-2-6-VM, will schedule restart
 2015-08-06 15:40:46,824 INFO  [c.c.h.HighAvailabilityManagerImpl] 
 (AgentManager-Handler-16:null) Schedule vm for HA:  VM[User|i-2-6-VM]
 2015-08-06 15:40:46,824 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
 (AgentManager-Handler-16:null) Done with process of VM state report. host: 1
 2015-08-06 15:40:46,851 INFO  [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-3:ctx-4e073b92 work-37) Processing 
 HAWork[37-HA-6-Running-Investigating]
 2015-08-06 15:40:46,871 INFO  [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-3:ctx-4e073b92 work-37) HA on VM[User|i-2-6-VM]
 2015-08-06 15:40:46,880 DEBUG [c.c.a.t.Request] (HA-Worker-3:ctx-4e073b92 
 work-37) Seq 1-6463228415230083145: Sending  { Cmd , MgmtId: 3232241215, via: 
 1(kvm2), Ver: v1, Flags: 100011, 
 [{com.cloud.agent.api.CheckVirtualMachineCommand:{vmName:i-2-6-VM,wait:20}}]
  }
 2015-08-06 15:40:46,908 ERROR [c.c.a.t.Request] 
 (AgentManager-Handler-17:null) Unable to convert to json: 
 [{com.cloud.agent.api.CheckVirtualMachineAnswer:{state:Stopped,result:true,contextMap:{},wait:0}}]
 2015-08-06 15:40:46,909 WARN  [c.c.u.n.Task] (AgentManager-Handler-17:null) 
 Caught the following exception but pushing on
 com.google.gson.JsonParseException: The JsonDeserializer EnumTypeAdapter 
 failed to deserialize json object Stopped given the type class 
 com.cloud.vm.VirtualMachine$PowerState
 at 
 com.google.gson.JsonDeserializerExceptionWrapper.deserialize(JsonDeserializerExceptionWrapper.java:64)
 at 
 com.google.gson.JsonDeserializationVisitor.invokeCustomDeserializer(JsonDeserializationVisitor.java:92)
 at 
 com.google.gson.JsonObjectDeserializationVisitor.visitFieldUsingCustomHandler(JsonObjectDeserializationVisitor.java:117)
 at 
 com.google.gson.ReflectingFieldNavigator.visitFieldsReflectively(ReflectingFieldNavigator.java:63)
 at com.google.gson.ObjectNavigator.accept(ObjectNavigator.java:120)
 at 
 

[jira] [Updated] (CLOUDSTACK-8389) Volume to Template Conversion Broken

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8389:

Fix Version/s: (was: 4.6.0)

 Volume to Template Conversion Broken
 

 Key: CLOUDSTACK-8389
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8389
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server, Template
Affects Versions: 4.5.0, 4.6.0, 4.5.1
 Environment: ACS 4.5.1, vSphere 5.5
Reporter: ilya musayev

 While testing ACS 4.5.1 on VMware vSphere 5.5, converting a volume to a 
 template fails with the error below. From the commit history I suspect 
 another Coverity fix introduced the issue; would someone please have a 
 look?
 https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=history;f=server/src/com/cloud/template/TemplateManagerImpl.java;h=8bd7b21602a0cc5410af54f41cac510f1751b183;hb=refs/heads/4.5
 2015-04-16 18:37:17,138 DEBUG [c.c.a.t.Request] (AgentManager-Handler-4:null) 
 Seq 12-1900237567773648695: Processing:  { Ans: , MgmtId: 345049223690, via: 
 12, Ver: v1, Flags: 10, 
 [{org.apache.cloudstack.storage.command.CopyCmdAnswer:{result:false,details:create
  template from volume exception: Exception: 
 java.lang.NullPointerException\nMessage: null\n,wait:0}}] }
 2015-04-16 18:37:17,138 DEBUG [c.c.a.t.Request] 
 (API-Job-Executor-11:ctx-0e29dec8 job-7636 ctx-f9f56d7e) Seq 
 12-1900237567773648695: Received:  { Ans: , MgmtId: 345049223690, via: 12, 
 Ver: v1, Flags: 10, { CopyCmdAnswer } }
 2015-04-16 18:37:17,153 DEBUG [c.c.t.TemplateManagerImpl] 
 (API-Job-Executor-11:ctx-0e29dec8 job-7636 ctx-f9f56d7e) Failed to create 
 templatecreate template from volume exception: Exception: 
 java.lang.NullPointerException
 Message: null
 2015-04-16 18:37:17,188 ERROR [c.c.a.ApiAsyncJobDispatcher] 
 (API-Job-Executor-11:ctx-0e29dec8 job-7636) Unexpected exception while 
 executing 
 org.apache.cloudstack.api.command.admin.template.CreateTemplateCmdByAdmin
 com.cloud.utils.exception.CloudRuntimeException: Failed to create 
 templatecreate template from volume exception: Exception: 
 java.lang.NullPointerException
 Message: null
 at 
 com.cloud.template.TemplateManagerImpl.createPrivateTemplate(TemplateManagerImpl.java:1397)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at 
 org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
 at 
 com.cloud.event.ActionEventInterceptor.invoke(ActionEventInterceptor.java:51)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
 at 
 org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
 at 
 org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
 at $Proxy174.createPrivateTemplate(Unknown Source)
 at 
 org.apache.cloudstack.api.command.admin.template.CreateTemplateCmdByAdmin.execute(CreateTemplateCmdByAdmin.java:43)
 at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:141)
 at 
 com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:108)
 at 
 org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
 at 
 org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at 

[jira] [Commented] (CLOUDSTACK-8383) [Master Install] Unable to start VM due to error in finalizeStart

2015-08-12 Thread Rajani Karuturi (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692987#comment-14692987
 ] 

Rajani Karuturi commented on CLOUDSTACK-8383:
-

Removing fixVersion from bugs with an empty assignee.
(For an open issue, fixVersion implies the fix will be available in that 
release; for a resolved issue, it means the fix has been available since that 
release. Please set the fixVersion only when you plan to work on the issue or 
know that someone is working on it, and in that case update the assignee as 
well.)

 [Master Install] Unable to start VM due to error in finalizeStart
 -

 Key: CLOUDSTACK-8383
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8383
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Install and Setup
Affects Versions: 4.6.0
Reporter: Raja Pullela
Priority: Critical

 The following exception is seen in the Management server log - 
 Error finalize start
 2015-04-13 08:12:11,018 ERROR [c.c.v.VirtualMachineManagerImpl] 
 (Work-Job-Executor-3:ctx-a3d21009 job-19/job-23 ctx-31b66d8c) Failed to start 
 instance VM[DomainRouter|r-7-VM]
 com.cloud.utils.exception.ExecutionException: Unable to start 
 VM[DomainRouter|r-7-VM] due to error in finalizeStart, not retrying
   at 
 com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:1076)
   at 
 com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:4503)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
   at 
 com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4659)
   at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
   at 
 org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
   at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
   at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
   at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
   at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
   at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
   at 
 org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:722)





[jira] [Updated] (CLOUDSTACK-8382) [Master Install] DB exception unknown column is seen in Management server log

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8382:

Fix Version/s: (was: 4.6.0)

 [Master Install] DB exception unknown column is seen in Management server log
 -

 Key: CLOUDSTACK-8382
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8382
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Install and Setup
Affects Versions: 4.6.0
Reporter: Raja Pullela
Priority: Critical

 The following DB exception is seen in the Management server log:
 2015-04-13 07:49:25,128 ERROR [c.c.u.d.Upgrade410to420] (main:null) 
 migrateDatafromIsoIdInVolumesTable:Exception:Unknown column 'iso_id1' in 
 'field list'
 com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 
 'iso_id1' in 'field list'
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
   at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
   at com.mysql.jdbc.Util.getInstance(Util.java:386)
   at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1052)
   at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3597)
   at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3529)
   at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1990)
   at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2151)
   at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2625)
   at 
 com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2119)
   at 
 com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2283)
   at 
 org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
   at 
 org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
   at 
 com.cloud.upgrade.dao.Upgrade410to420.migrateDatafromIsoIdInVolumesTable(Upgrade410to420.java:2555)
   at 
 com.cloud.upgrade.dao.Upgrade410to420.performDataMigration(Upgrade410to420.java:111)
   at 
 com.cloud.upgrade.DatabaseUpgradeChecker.upgrade(DatabaseUpgradeChecker.java:338)
   at 
 com.cloud.upgrade.DatabaseUpgradeChecker.check(DatabaseUpgradeChecker.java:461)
   at 
 org.apache.cloudstack.spring.lifecycle.CloudStackExtendedLifeCycle.checkIntegrity(CloudStackExtendedLifeCycle.java:65)
   at 
 org.apache.cloudstack.spring.lifecycle.CloudStackExtendedLifeCycle.start(CloudStackExtendedLifeCycle.java:55)
   at 
 org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:167)
   at 
 org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:51)
   at 
 org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:339)
   at 
 org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:143)
   at 
 org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:108)
   at 
 org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:947)
   at 
 org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:482)
   at 
 org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.loadContext(DefaultModuleDefinitionSet.java:145)
   at 
 org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet$2.with(DefaultModuleDefinitionSet.java:122)
   at 
 org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:245)
   at 
 org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:250)
   at 
 org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:233)
   at 
 org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.loadContexts(DefaultModuleDefinitionSet.java:117)
   at 
 org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.load(DefaultModuleDefinitionSet.java:79)
   at 
 

[jira] [Updated] (CLOUDSTACK-8381) [Master Install] SQL exception are seen in Management server log

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8381:

Fix Version/s: (was: 4.6.0)

 [Master Install] SQL exception are seen in Management server log
 

 Key: CLOUDSTACK-8381
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8381
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Install and Setup
Affects Versions: 4.6.0
Reporter: Raja Pullela
Priority: Critical

 The following SQL exceptions are seen in the Management server log:
 2015-04-13 07:49:24,872 DEBUG [c.c.u.d.DatabaseAccessObject] (main:null) 
 Ignored SQL Exception when trying to drop key last_sent on table alert 
 exception: Can't DROP 'last_sent'; check that column/key exists
 2015-04-13 07:49:24,872 DEBUG [c.c.u.d.DatabaseAccessObject] (main:null) 
 Ignored SQL Exception when trying to drop key i_alert__last_sent on table 
 alert exception: Can't DROP 'i_alert__last_sent'; check that column/key exists
 2015-04-13 07:49:24,881 DEBUG [c.c.u.d.Upgrade410to420] (main:null) Added 
 index i_alert__last_sent for table alert
 2015-04-13 07:49:24,888 DEBUG [c.c.u.d.DatabaseAccessObject] (main:null) 
 Ignored SQL Exception when trying to drop foreign key 
 fk_external_dhcp_devices_nsp_id on table baremetal_dhcp_devices exception: 
 Error on rename of './cloud/baremetal_dhcp_devices' to './cloud/#sql2-a94-15' 
 (errno: 152)
 2015-04-13 07:49:24,896 DEBUG [c.c.u.d.DatabaseAccessObject] (main:null) 
 Ignored SQL Exception when trying to drop foreign key 
 fk_external_dhcp_devices_host_id on table baremetal_dhcp_devices exception: 
 Error on rename of './cloud/baremetal_dhcp_devices' to './cloud/#sql2-a94-15' 
 (errno: 152)
 2015-04-13 07:49:24,902 DEBUG [c.c.u.d.DatabaseAccessObject] (main:null) 
 Ignored SQL Exception when trying to drop foreign key 
 fk_external_dhcp_devices_pod_id on table baremetal_dhcp_devices exception: 
 Error on rename of './cloud/baremetal_dhcp_devices' to './cloud/#sql2-a94-15' 
 (errno: 152)
 2015-04-13 07:49:24,908 DEBUG [c.c.u.d.DatabaseAccessObject] (main:null) 
 Ignored SQL Exception when trying to drop foreign key 
 fk_external_dhcp_devices_physical_network_id on table baremetal_dhcp_devices 
 exception: Error on rename of './cloud/baremetal_dhcp_devices' to 
 './cloud/#sql2-a94-15' (errno: 152)
 2015-04-13 07:49:24,915 DEBUG [c.c.u.d.DatabaseAccessObject] (main:null) 
 Ignored SQL Exception when trying to drop foreign key 
 fk_external_dhcp_devices_nsp_id on table baremetal_pxe_devices exception: 
 Error on rename of './cloud/baremetal_pxe_devices' to './cloud/#sql2-a94-15' 
 (errno: 152)
 2015-04-13 07:49:24,921 DEBUG [c.c.u.d.DatabaseAccessObject] (main:null) 
 Ignored SQL Exception when trying to drop foreign key 
 fk_external_dhcp_devices_host_id on table baremetal_pxe_devices exception: 
 Error on rename of './cloud/baremetal_pxe_devices' to './cloud/#sql2-a94-15' 
 (errno: 152)
 2015-04-13 07:49:24,927 DEBUG [c.c.u.d.DatabaseAccessObject] (main:null) 
 Ignored SQL Exception when trying to drop foreign key 
 fk_external_dhcp_devices_pod_id on table baremetal_pxe_devices exception: 
 Error on rename of './cloud/baremetal_pxe_devices' to './cloud/#sql2-a94-15' 
 (errno: 152)
 2015-04-13 07:49:24,932 DEBUG [c.c.u.d.DatabaseAccessObject] (main:null) 
 Ignored SQL Exception when trying to drop foreign key 
 fk_external_dhcp_devices_physical_network_id on table baremetal_pxe_devices 
 exception: Error on rename of './cloud/baremetal_pxe_devices' to 
 './cloud/#sql2-a94-15' (errno: 152)
 2015-04-13 07:49:24,943 DEBUG [c.c.u.d.DatabaseAccessObject] (main:null) 
 Ignored SQL Exception when trying to drop foreign key 
 fk_external_pxe_devices_nsp_id on table baremetal_pxe_devices exception: 
 Error on rename of './cloud/baremetal_pxe_devices' to './cloud/#sql2-a94-15' 
 (errno: 152)
 2015-04-13 07:49:24,949 DEBUG [c.c.u.d.DatabaseAccessObject] (main:null) 
 Ignored SQL Exception when trying to drop foreign key 
 fk_external_pxe_devices_host_id on table baremetal_pxe_devices exception: 
 Error on rename of './cloud/baremetal_pxe_devices' to './cloud/#sql2-a94-15' 
 (errno: 152)
 2015-04-13 07:49:24,955 DEBUG [c.c.u.d.DatabaseAccessObject] (main:null) 
 Ignored SQL Exception when trying to drop foreign key 
 fk_external_pxe_devices_physical_network_id on table baremetal_pxe_devices 
 exception: Error on rename of './cloud/baremetal_pxe_devices' to 
 './cloud/#sql2-a94-15' (errno: 152)
  
 2015-04-13 07:50:08,590 DEBUG [c.c.s.ConfigurationServerImpl] (main:null) 
 Caught (SQL?)Exception: no network_group  Table 'cloud.network_group' doesn't 
 exist
 
 2015-04-13 07:55:40,419 DEBUG [c.c.s.ConfigurationServerImpl] 

[jira] [Updated] (CLOUDSTACK-8327) Add Oracle linux guest OS for KVM

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8327:

Fix Version/s: (was: 4.6.0)

 Add Oracle linux guest OS for KVM 
 --

 Key: CLOUDSTACK-8327
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8327
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Hypervisor Controller
Affects Versions: 4.5.0
Reporter: Abhinandan Prateek
Priority: Critical





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8245) Scrolling down the network service providers list from the UI never ends

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8245:

Assignee: Ramamurti Subramanian

 Scrolling down the network service providers list from the UI never ends
 

 Key: CLOUDSTACK-8245
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8245
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Reporter: Ramamurti Subramanian
Assignee: Ramamurti Subramanian
Priority: Minor







[jira] [Commented] (CLOUDSTACK-8245) Scrolling down the network service providers list from the UI never ends

2015-08-12 Thread Rajani Karuturi (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692996#comment-14692996
 ] 

Rajani Karuturi commented on CLOUDSTACK-8245:
-

[~ramamurtis], please assign issues to yourself when you start working on them 
and mark them resolved when the PR is merged.

 Scrolling down the network service providers list from the UI never ends
 

 Key: CLOUDSTACK-8245
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8245
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Reporter: Ramamurti Subramanian
Priority: Minor







[jira] [Updated] (CLOUDSTACK-8245) Scrolling down the network service providers list from the UI never ends

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8245:

Fix Version/s: (was: 4.6.0)

 Scrolling down the network service providers list from the UI never ends
 

 Key: CLOUDSTACK-8245
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8245
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Reporter: Ramamurti Subramanian
Assignee: Ramamurti Subramanian
Priority: Minor







[jira] [Resolved] (CLOUDSTACK-8245) Scrolling down the network service providers list from the UI never ends

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi resolved CLOUDSTACK-8245.
-
Resolution: Fixed

 Scrolling down the network service providers list from the UI never ends
 

 Key: CLOUDSTACK-8245
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8245
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Reporter: Ramamurti Subramanian
Assignee: Ramamurti Subramanian
Priority: Minor







[jira] [Resolved] (CLOUDSTACK-8295) max data volume limits to be updated with new values for all hypervisors

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi resolved CLOUDSTACK-8295.
-
Resolution: Fixed

 max data volume limits to be updated with new values for all hypervisors
 

 Key: CLOUDSTACK-8295
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8295
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Affects Versions: 4.3.0, 4.4.0
Reporter: Harikrishna Patnala
Assignee: satoru nakaya
 Fix For: 4.6.0


 There is a discrepancy between the documentation and the values CloudStack actually supports:
 http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.4/storage.html
 http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/storage.html
 
 CloudStack supports attaching up to 13 data disks to a VM on XenServer
 hypervisor versions 6.0 and above.
 For the VMs on other hypervisor types, the data disk limit is 6.
 
 The manual is wrong. CloudStack supports attaching up to:
  a) 13 data disks on XenServer hypervisor versions 6.0 and above, and on all 
 versions of VMware
  b) 64 data disks on Hyper-V
  c) 6 data disks on other hypervisor types
 mysql> select hypervisor_type,hypervisor_version,max_data_volumes_limit from 
 cloud.hypervisor_capabilities order by hypervisor_type;
 +-----------------+--------------------+------------------------+
 | hypervisor_type | hypervisor_version | max_data_volumes_limit |
 +-----------------+--------------------+------------------------+
 | Hyperv          | 6.2                |                     64 |
 | KVM             | default            |                      6 |
 | LXC             | default            |                      6 |
 | Ovm             | default            |                      6 |
 | Ovm             | 2.3                |                      6 |
 | VMware          | default            |                     13 |
 | VMware          | 4.0                |                     13 |
 | VMware          | 4.1                |                     13 |
 | VMware          | 5.5                |                     13 |
 | VMware          | 5.1                |                     13 |
 | VMware          | 5.0                |                     13 |
 | XenServer       | 6.1.0              |                     13 |
 | XenServer       | 6.2.0              |                     13 |
 | XenServer       | default            |                      6 |
 | XenServer       | 6.0.2              |                     13 |
 | XenServer       | 6.0                |                     13 |
 | XenServer       | 5.6 SP2            |                      6 |
 | XenServer       | 5.6 FP1            |                      6 |
 | XenServer       | 5.6                |                      6 |
 | XenServer       | XCP 1.0            |                      6 |
 | XenServer       | 6.5.0              |                     13 |
 +-----------------+--------------------+------------------------+





[jira] [Updated] (CLOUDSTACK-8246) Add Cluster - Guest traffic label displayed Incorrectly

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8246:

Assignee: Ramamurti Subramanian

 Add Cluster - Guest traffic label displayed Incorrectly
 ---

 Key: CLOUDSTACK-8246
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8246
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.6.0
Reporter: Ramamurti Subramanian
Assignee: Ramamurti Subramanian
Priority: Minor
 Fix For: 4.6.0








[jira] [Updated] (CLOUDSTACK-6403) ListApi Responses does not have count parameter and response arrays defined as part of API docs.

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-6403:

Fix Version/s: (was: 4.6.0)

 ListApi Responses does not have count parameter and response arrays defined 
 as part of API docs.
 --

 Key: CLOUDSTACK-6403
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6403
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.4.0
Reporter: Santhosh Kumar Edukulla

 1. Verifying a few cases of CS responses, we have observed that the 
 commands.xml generated for the apidocs does not seem to contain the count 
 argument as part of the response structure. EX: for listVirtualMachines, the 
 response below from CS has a count parameter, but the apidocs make no 
 reference to it. 
 { count:2 ,virtualmachine : [  
 {id:c82ddb67-261c-4794-9b00-ee081f999466,name:basicVM2,displayname:basicVM2,account:ccp11,domainid:a282fb70-33cf-487c-9f24-14e5a565ff04,domain:AA04,created:2014-04-11T09:28:41+0530,state:Running,haenable:false,groupid:0f495cdd-143c-42e9-ba03-446fa3eb3288,group:basicVM2,zoneid:00f9edb1-5484-47bc-ba6b-d28aa03a5ed5,zonename:Basic2,templateid:acf13ade-bd9b-11e3-b856-0e6bf445a756,templatename:CentOS
  5.6(64-bit) no GUI (XenServer),templatedisplaytext:CentOS 5.6(64-bit) no 
 GUI 
 (XenServer),passwordenabled:false,serviceofferingid:41f933ea-6137-438b-9c9e-3b83c946f9f1,serviceofferingname:Medium
  
 Instance,cpunumber:1,cpuspeed:128,memory:128,cpuused:0.01%,networkkbsread:57077,networkkbswrite:365,diskkbsread:0,diskkbswrite:0,diskioread:0,diskiowrite:0,guestosid:ad7cd0a8-bd9b-11e3-b856-0e6bf445a756,rootdeviceid:0,rootdevicetype:ROOT,securitygroup:[{id:0a22f405-be1a-4e19-a07b-a5123433e6d9,name:default,description:Default
  Security 
 Group,account:ccp11,ingressrule:[],egressrule:[],tags:[]}],nic:[{id:84f8f52c-68a6-4b36-a5ee-993139caa227,networkid:b2f7c802-566b-4136-87ca-fcf361065051,networkname:defaultGuestNetwork,netmask:255.255.240.0,gateway:10.105.112.1,ipaddress:10.105.115.238,broadcasturi:vlan://untagged,traffictype:Guest,type:Shared,isdefault:true,macaddress:06:c5:40:00:00:27}],hypervisor:XenServer,tags:[],details:{hypervisortoolsversion:xenserver56},affinitygroup:[],displayvm:true,isdynamicallyscalable:true},
  
 {id:35312098-e20a-4a3c-9311-cc10d431f5b1,name:fvm1,displayname:fvm2,account:ccp11,domainid:a282fb70-33cf-487c-9f24-14e5a565ff04,domain:AA04,created:2014-04-10T16:22:11+0530,state:Stopped,haenable:false,groupid:c5acf387-da00-4f84-a081-80b47be0bdf6,group:fvm2,zoneid:ded7f6db-0b71-4de6-b7e2-c8a89c09b0fb,zonename:Advance-1,templateid:acf13ade-bd9b-11e3-b856-0e6bf445a756,templatename:CentOS
  5.6(64-bit) no GUI (XenServer),templatedisplaytext:CentOS 5.6(64-bit) no 
 GUI 
 (XenServer),passwordenabled:false,serviceofferingid:e8b7381d-979e-45fc-869e-34137a2449e8,serviceofferingname:Tiny
  
 Instance,cpunumber:1,cpuspeed:64,memory:64,guestosid:ad7cd0a8-bd9b-11e3-b856-0e6bf445a756,rootdeviceid:0,rootdevicetype:ROOT,securitygroup:[],nic:[{id:abb1426d-a151-4a86-8ab2-ff23ae8c93fb,networkid:1393fc5d-be74-4e86-9f9f-ce6c207ea6eb,networkname:ccp11-default
  
 Network,netmask:255.255.255.0,gateway:10.1.1.1,ipaddress:10.1.1.10,traffictype:Guest,type:Isolated,isdefault:true,macaddress:02:00:31:2c:00:01}],hypervisor:XenServer,tags:[],details:{hypervisortoolsversion:xenserver56},keypair:testkey2,affinitygroup:[],displayvm:true,isdynamicallyscalable:true}
  ] }
 2. Adding an annotation to ListResponse for the count parameter has no 
 effect on the apidocs.
 3. Also, getCount seems to return null; we could return zero instead:
 public Integer getCount() {
 if (count != null) {
 return count;
 }
 if (responses != null) {
 return responses.size();
 }
 return null;
 }
 4. The docs currently do not reflect that the response is an array of sub 
 elements. Even if we add count, there is no way to know that the responses 
 are actually a list of subelements. EX: ListRegions should have a response 
 structure defined in the docs stating that it is a list of Region responses. 
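 A minimal sketch of the change suggested in point 3 above: return zero 
 instead of null when neither an explicit count nor a response list is set. 
 The class and field names here are illustrative, not the actual CloudStack 
 ListResponse source.

 ```java
 import java.util.List;

 // Illustrative stand-in for a list-API response wrapper.
 public class ListResponseSketch {
     private Integer count;          // explicit count, if the command set one
     private List<Object> responses; // the wrapped response objects

     public Integer getCount() {
         if (count != null) {
             return count;
         }
         if (responses != null) {
             return responses.size();
         }
         // Previously returned null, forcing every caller to guard against it.
         return 0;
     }

     public static void main(String[] args) {
         ListResponseSketch r = new ListResponseSketch();
         System.out.println(r.getCount()); // prints 0 rather than null
     }
 }
 ```

 With this change, callers can treat the count as a plain integer without a 
 null check; an empty list and an unset list both report zero.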





[jira] [Commented] (CLOUDSTACK-8725) RVR functionality is broken in case of isolated networks, conntrackd fails to start.

2015-08-12 Thread Rajani Karuturi (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693001#comment-14693001
 ] 

Rajani Karuturi commented on CLOUDSTACK-8725:
-

Removing fixVersion from bugs with an empty assignee.
(fixVersion implies the fix will be available in this release while the issue 
is open, and has been available since this release once it is resolved. 
Please set the fixVersion only when you plan to work on the issue or you know 
that someone is working on it. In that case, please update the assignee as well.)

 RVR functionality is broken in case of isolated networks, conntrackd fails to 
 start.
 

 Key: CLOUDSTACK-8725
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8725
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Virtual Router
Affects Versions: 4.6.0
Reporter: Bharat Kumar
Priority: Critical

 I tried setting up an RVR-enabled isolated network. In the startup logs of the 
 router I can see that conntrackd is failing to start. Below are the startup 
 logs:
 [info] Setting console screen modes.
 setterm: cannot (un)set powersave mode: Invalid argument
 [info] Skipping font and keymap setup (handled by console-setup).
 [] Loading IPsec SA/SP database: 
 [ ok etc/ipsec-tools.conf.
 INIT: Entering runlevel: 2
 [info] Using makefile-style concurrent boot in runlevel 2.
 [info] ipvsadm is not configured to run. Please edit /etc/default/ipvsadm.
 [ ok ] Loading iptables rules... IPv4... IPv6...done.
 [ ok ] Starting rpcbind daemon...[] Already running..
 sed: can't read /ramdisk/rrouter/enable_pubip.sh: No such file or directory
 open-vm-tools: not starting as this is not a VMware VM
 [ ok ] Starting enhanced syslogd: rsyslogd.
 [] Starting ACPI services...RTNETLINK1 answers: No such file or directory
 acpid: error talking to the kernel via netlink
 . ok 
 [] Starting conntrackdERROR: parsing config file in line (102), symbol 
 'Multicast': syntax error
  failed!
 [ ok ] Starting DNS forwarder and DHCP server: dnsmasq.
 [] Starting web server: apache2apache2: Could not reliably determine the 
 server's fully qualified domain name, using 10.1.1.247 for ServerName
 . ok 
 [ ok ] Starting periodic command scheduler: cron.
 [] Starting haproxy: haproxy[WARNING] 223/051439 (3480) : config : 
 'stats' statement ignored for proxy 'cloud-default' as it requires HTTP mode.
 [WARNING] 223/051439 (3480) : config : 'option forwardfor' ignored for proxy 
 'cloud-default' as it requires HTTP mode.
 [WARNING] 223/051439 (3484) : config : 'stats' statement ignored for proxy 
 'cloud-default' as it requires HTTP mode.
 [WARNING] 223/051439 (3484) : config : 'option forwardfor' ignored for proxy 
 'cloud-default' as it requires HTTP mode.
 . ok 
 [FAIL] Starting keepalived: keepalived failed!
 [ ok ] Starting NTP server: ntpd.
 [ ok ] Starting OpenBSD Secure Shell server: sshd.
 [ ok ] Starting the system activity data collector: sadc.
 Detecting Linux distribution version: OK
 Starting xe daemon:  OK
 [ ok ] Starting OpenBSD Secure Shell server: sshd.
 [] Starting haproxy: haproxy[WARNING] 223/051440 (3709) : config : 
 'stats' statement ignored for proxy 'cloud-default' as it requires HTTP mode.
 [WARNING] 223/051440 (3709) : config : 'option forwardfor' ignored for proxy 
 'cloud-default' as it requires HTTP mode.
 . ok 
 [] Starting web server: apache2apache2: Could not reliably determine the 
 server's fully qualified domain name, using 10.1.1.247 for ServerName
 httpd (pid 3351) already running
 . ok 
 [FAIL] Starting keepalived: keepalived failed!
 [] Starting conntrackdERROR: parsing config file in line (102), symbol 
 'Multicast': syntax error
  failed!
 sed: can't read /ramdisk/rrouter/enable_pubip.sh: No such file or directory
 Failed
 [ ok ] Stopping NFS common utilities: idmapd statd.
 -On router.
 root@r-93-VM:~# service conntrackd restart
 [] Stopping conntrackdERROR: parsing config file in line (102), symbol 
 'Multicast': syntax error
  failed!
 [] Starting conntrackdERROR: parsing config file in line (102), symbol 
 'Multicast': syntax error
  failed!





[jira] [Updated] (CLOUDSTACK-7853) Hosts that are temporary Disconnected and get behind on ping (PingTimeout) turn up in permanent state Alert

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-7853:

Affects Version/s: (was: Future)

 Hosts that are temporary Disconnected and get behind on ping (PingTimeout) 
 turn up in permanent state Alert
 ---

 Key: CLOUDSTACK-7853
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7853
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.3.0, 4.4.0, 4.5.0, 4.3.1, 4.4.1, 4.6.0
Reporter: Joris van Lieshout
Priority: Critical

 If for some reason (I've been unable to determine why, but my suspicion is 
 that the management server is busy processing other agent requests and/or 
 xapi is temporarily unavailable) a host that is Disconnected gets behind on 
 ping (PingTimeout), it is transitioned to a permanent state of Alert.
 INFO  [c.c.a.m.AgentManagerImpl] (AgentMonitor-1:ctx-9551e174) Found the 
 following agents behind on ping: [421, 427, 425]
 DEBUG [c.c.h.Status] (AgentMonitor-1:ctx-9551e174) Ping timeout for host 421, 
 do invstigation
 DEBUG [c.c.h.Status] (AgentMonitor-1:ctx-9551e174) Transition:[Resource state 
 = Enabled, Agent event = PingTimeout, Host id = 421, name = xx1]
 DEBUG [c.c.h.Status] (AgentMonitor-1:ctx-9551e174) Agent status update: [id = 
 421; name = xx1; old status = Disconnected; event = PingTimeout; new 
 status = Alert; old update count = 111; new update count = 112]
 / next cycle / -
 INFO  [c.c.a.m.AgentManagerImpl] (AgentMonitor-1:ctx-2a81b9f7) Found the 
 following agents behind on ping: [421, 427, 425]
 DEBUG [c.c.h.Status] (AgentMonitor-1:ctx-2a81b9f7) Ping timeout for host 421, 
 do invstigation
 DEBUG [c.c.h.Status] (AgentMonitor-1:ctx-2a81b9f7) Transition:[Resource state 
 = Enabled, Agent event = PingTimeout, Host id = 421, name = xx1]
 DEBUG [c.c.h.Status] (AgentMonitor-1:ctx-2a81b9f7) Cannot transit agent 
 status with event PingTimeout for host 421, name=xx1, mangement server id 
 is 345052370017
 ERROR [c.c.a.m.AgentManagerImpl] (AgentMonitor-1:ctx-2a81b9f7) Caught the 
 following exception: 
 com.cloud.utils.exception.CloudRuntimeException: Cannot transit agent status 
 with event PingTimeout for host 421, mangement server id is 
 345052370017,Unable to transition to a new state from Alert via PingTimeout
 at 
 com.cloud.agent.manager.AgentManagerImpl.agentStatusTransitTo(AgentManagerImpl.java:1334)
 at 
 com.cloud.agent.manager.AgentManagerImpl.disconnectAgent(AgentManagerImpl.java:1349)
 at 
 com.cloud.agent.manager.AgentManagerImpl.disconnectInternal(AgentManagerImpl.java:1378)
 at 
 com.cloud.agent.manager.AgentManagerImpl.disconnectWithInvestigation(AgentManagerImpl.java:1384)
 at 
 com.cloud.agent.manager.AgentManagerImpl$MonitorTask.runInContext(AgentManagerImpl.java:1466)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at 
 java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:701)
 I think the bug occurs because there is no valid state transition from Alert 
 via PingTimeout to something recoverable.
 Status.java
 s_fsm.addTransition(Status.Alert, Event.AgentConnected, Status.Connecting);
 s_fsm.addTransition(Status.Alert, Event.Ping, Status.Up);
 s_fsm.addTransition(Status.Alert, Event.Remove, Status.Removed);
 s_fsm.addTransition(Status.Alert, Event.ManagementServerDown, Status.Alert);
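 A minimal sketch (not the actual com.cloud.host.Status FSM) of why Alert is a 
 dead end for PingTimeout: no (Alert, PingTimeout) entry exists, so the lookup 
 fails with the exception seen above. Adding a self-transition such as 
 (Alert, PingTimeout) -> Alert is one hypothetical way to make the event legal 
 without changing state; the FsmSketch class and its method names are mine, 
 not CloudStack's.

 ```java
 import java.util.HashMap;
 import java.util.Map;

 public class FsmSketch {
     enum Status { Up, Alert, Connecting, Removed }
     enum Event { AgentConnected, Ping, Remove, ManagementServerDown, PingTimeout }

     // (state, event) -> next state, keyed by a simple composite string.
     static final Map<String, Status> FSM = new HashMap<>();

     static void addTransition(Status from, Event ev, Status to) {
         FSM.put(from + ":" + ev, to);
     }

     static Status transit(Status from, Event ev) {
         Status to = FSM.get(from + ":" + ev);
         if (to == null)
             throw new IllegalStateException(
                 "Unable to transition from " + from + " via " + ev);
         return to;
     }

     public static void main(String[] args) {
         // The Alert transitions quoted in the report:
         addTransition(Status.Alert, Event.AgentConnected, Status.Connecting);
         addTransition(Status.Alert, Event.Ping, Status.Up);
         addTransition(Status.Alert, Event.Remove, Status.Removed);
         addTransition(Status.Alert, Event.ManagementServerDown, Status.Alert);

         try {
             transit(Status.Alert, Event.PingTimeout); // reproduces the dead end
         } catch (IllegalStateException e) {
             System.out.println("no transition: " + e.getMessage());
         }

         // Hypothetical fix: make PingTimeout a no-op self-transition in Alert.
         addTransition(Status.Alert, Event.PingTimeout, Status.Alert);
         System.out.println(transit(Status.Alert, Event.PingTimeout));
     }
 }
 ```

 With the self-transition in place the AgentMonitor can keep pinging an Alert 
 host without throwing, leaving AgentConnected/Ping as the recovery paths.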
 

[jira] [Updated] (CLOUDSTACK-7857) CitrixResourceBase wrongly calculates total memory on hosts with a lot of memory and large Dom0

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-7857:

Affects Version/s: (was: Future)

 CitrixResourceBase wrongly calculates total memory on hosts with a lot of 
 memory and large Dom0
 ---

 Key: CLOUDSTACK-7857
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7857
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.3.0, 4.4.0, 4.5.0, 4.3.1, 4.4.1, 4.6.0
Reporter: Joris van Lieshout
Priority: Critical

 We have hosts with 256GB memory and 4GB dom0. During startup ACS calculates 
 available memory using this formula:
 CitrixResourceBase.java
   protected void fillHostInfo
   ram = (long) ((ram - dom0Ram - _xs_memory_used) * _xs_virtualization_factor);
  In our situation:
    ram = 274841497600
    dom0Ram = 4269801472
    _xs_memory_used = 128 * 1024 * 1024L = 134217728
    _xs_virtualization_factor = 63.0/64.0 = 0.984375
    (274841497600 - 4269801472 - 134217728) * 0.984375 = 266211892800
 This is in fact not the actual amount of memory available for instances. In 
 our situation the difference is a little less than 1GB; on this particular 
 hypervisor Dom0+Xen uses about 9GB.
 As the comment above the definition of XsMemoryUsed already stated, it's time 
 to review this logic: 
 //Hypervisor specific params with generic value, may need to be overridden 
 for specific versions
 The effect of this bug is that when you put a hypervisor in maintenance, it 
 might try to move instances (usually small instances, 1GB) to a host that 
 in fact does not have enough free memory.
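 The arithmetic above can be reproduced with the reporter's numbers. The 
 variable names mirror the fields quoted from CitrixResourceBase; the formula 
 is the one under discussion, not a proposed fix.

 ```java
 public class MemoryCalcSketch {
     public static void main(String[] args) {
         long ram = 274841497600L;               // total host memory reported by XenServer
         long dom0Ram = 4269801472L;             // dom0 memory
         long xsMemoryUsed = 128 * 1024 * 1024L; // _xs_memory_used
         double xsVirtualizationFactor = 63.0 / 64.0; // _xs_virtualization_factor

         // The fillHostInfo formula as quoted in the report.
         long available = (long) ((ram - dom0Ram - xsMemoryUsed) * xsVirtualizationFactor);
         System.out.println(available); // 266211892800
     }
 }
 ```

 The result, 266211892800, overstates the memory actually free for instances 
 by a little under 1GB on this host, which is what lets the HA migration pick 
 a destination that XenServer then rejects with HOST_NOT_ENOUGH_FREE_MEMORY.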
 This exception is thrown:
 ERROR [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-3:ctx-09aca6e9 
 work-8981) Terminating HAWork[8981-Migration-4482-Running-Migrating]
 com.cloud.utils.exception.CloudRuntimeException: Unable to migrate due to 
 Catch Exception com.cloud.utils.exception.CloudRuntimeException: Migration 
 failed due to com.cloud.utils.exception.CloudRuntim
 eException: Unable to migrate VM(r-4482-VM) from 
 host(6805d06c-4d5b-4438-a245-7915e93041d9) due to Task failed! Task record:   
   uuid: 645b63c8-1426-b412-7b6a-13d61ee7ab2e
nameLabel: Async.VM.pool_migrate
  nameDescription: 
allowedOperations: []
currentOperations: {}
  created: Thu Nov 06 13:44:14 CET 2014
 finished: Thu Nov 06 13:44:14 CET 2014
   status: failure
   residentOn: com.xensource.xenapi.Host@b42882c6
 progress: 1.0
 type: none/
   result: 
errorInfo: [HOST_NOT_ENOUGH_FREE_MEMORY, 272629760, 263131136]
  otherConfig: {}
subtaskOf: com.xensource.xenapi.Task@aaf13f6f
 subtasks: []
 at 
 com.cloud.vm.VirtualMachineManagerImpl.migrate(VirtualMachineManagerImpl.java:1840)
 at 
 com.cloud.vm.VirtualMachineManagerImpl.migrateAway(VirtualMachineManagerImpl.java:2214)
 at 
 com.cloud.ha.HighAvailabilityManagerImpl.migrate(HighAvailabilityManagerImpl.java:610)
 at 
 com.cloud.ha.HighAvailabilityManagerImpl$WorkerThread.runWithContext(HighAvailabilityManagerImpl.java:865)
 at 
 com.cloud.ha.HighAvailabilityManagerImpl$WorkerThread.access$000(HighAvailabilityManagerImpl.java:822)
 at 
 com.cloud.ha.HighAvailabilityManagerImpl$WorkerThread$1.run(HighAvailabilityManagerImpl.java:834)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
 at 
 com.cloud.ha.HighAvailabilityManagerImpl$WorkerThread.run(HighAvailabilityManagerImpl.java:831)





[jira] [Updated] (CLOUDSTACK-5504) Vmware-Primary store unavailable for 10 mts - All snapshot tasks reported failure because of timing out after 20 minutes.But the snapshot process continues to succee

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-5504:

Fix Version/s: (was: 4.6.0)

 Vmware-Primary store unavailable for 10 mts - All snapshot tasks reported 
 failure because of timing out after 20 minutes.But the snapshot process 
 continues to succeed in Vmcenter after NFS was brought up.
 

 Key: CLOUDSTACK-5504
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5504
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Build from 4.3
Reporter: Sangeetha Hariharan
 Attachments: primarydown.rar


 Setup:
 Advanced zone set up with two ESXi 5.1 hosts.
 1. Deploy a few VMs on each of the hosts, so we start with 11 VMs.
 2. Create snapshots for the ROOT volumes.
 3. While the snapshots are still in progress, make the primary storage 
 unavailable for 10 minutes.
 4. Bring up the primary store after more than 10 minutes.
 When the primary store was brought back up, I see the snapshots that were in 
 progress actually continue to download to secondary storage and succeed. 
 One of the snapshots that succeeded and is fully available in the secondary store:
 [root@Rack3Host8 1c545037-1d1c-4927-918a-2f3975e1076b]# ls -ltr
 total 446808
 -rw-r--r--. 1 root root  6454 Dec 13 21:09 
 1c545037-1d1c-4927-918a-2f3975e1076b.ovf
 -rw-r--r--. 1 root root 457069056 Dec 13 21:09 
 1c545037-1d1c-4927-918a-2f3975e1076b-disk0.vmdk
 [root@Rack3Host8 1c545037-1d1c-4927-918a-2f3975e1076b]#
 But all 11 snapshot tasks on the CloudStack side report failure after 
 about 20 minutes, and the snapshots are then put in the CreatedOnPrimary state.
 The next scheduled hourly snapshot is attempted and succeeds.
 |22 | CreatedOnPrimary | 2013-12-13 21:52:15 | NULL|
 |21 | CreatedOnPrimary | 2013-12-13 21:52:15 | NULL|
 |20 | CreatedOnPrimary | 2013-12-13 21:52:15 | NULL|
 |19 | CreatedOnPrimary | 2013-12-13 21:52:15 | NULL|
 |18 | CreatedOnPrimary | 2013-12-13 21:52:16 | NULL|
 |17 | CreatedOnPrimary | 2013-12-13 21:52:16 | NULL|
 |16 | CreatedOnPrimary | 2013-12-13 21:52:16 | NULL|
 |14 | CreatedOnPrimary | 2013-12-13 21:52:17 | NULL|
 |25 | CreatedOnPrimary | 2013-12-13 21:52:17 | NULL|
 |24 | CreatedOnPrimary | 2013-12-13 21:52:17 | NULL|
 |23 | CreatedOnPrimary | 2013-12-13 21:52:18 | NULL|
 |22 | BackedUp | 2013-12-13 22:42:15 | NULL|
 |21 | BackedUp | 2013-12-13 22:42:15 | NULL|
 |20 | BackedUp | 2013-12-13 22:42:15 | NULL|
 |19 | BackedUp | 2013-12-13 22:42:15 | NULL|
 |18 | BackedUp | 2013-12-13 22:42:15 | NULL|
 |17 | BackedUp | 2013-12-13 22:42:16 | NULL|
 |16 | BackedUp | 2013-12-13 22:42:16 | NULL|
 |14 | BackedUp | 2013-12-13 22:42:16 | NULL|
 |25 | BackedUp | 2013-12-13 22:42:17 | NULL|
 |24 | BackedUp | 2013-12-13 22:42:17 | NULL|
 |23 | BackedUp | 2013-12-13 22:42:17 | NULL|
 +---+--+-+-+
 86 rows in set (0.00 sec)
 2013-12-13 16:52:17,720 DEBUG [c.c.a.t.Request] (Job-Executor-5:ctx-1d3bd6cc 
 ctx-837a59a5) Seq 5-422576170: Sending  { Cmd , MgmtId: 95307354844397, via: 
 5(s-10-VM), Ver: v1, Flags: 100011, 
 [{org.apache.cloudstack.storage.command.CopyCommand:{srcTO:{org.apache.cloudstack.storage.to.SnapshotObjectTO:{path:ffa0b125-d1d9-4524-bd9e-03178914845b,volume:{uuid:15189035-3592-41ac-b2bc-a39d247e7d2f,volumeType:ROOT,dataStore:{org.apache.cloudstack.storage.to.PrimaryDataStoreTO:{uuid:a0c555cc-695c-3343-bfa0-3413a91dbfed,id:1,poolType:NetworkFilesystem,host:10.223.57.195,path:/export/home/vmware/primary,port:2049,url:NetworkFilesystem://10.223.57.195//export/home/vmware/primary/?ROLE=PrimarySTOREUUID=a0c555cc-695c-3343-bfa0-3413a91dbfed}},name:ROOT-18,size:2147483648,path:ROOT-18,volumeId:18,vmName:i-4-18-VM,accountId:4,chainInfo:{\diskDeviceBusName\:\ide0:1\,\diskChain\:[\[a0c555cc695c3343bfa03413a91dbfed]
  
 

[jira] [Commented] (CLOUDSTACK-8701) Allow SAML users to switch accounts

2015-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693151#comment-14693151
 ] 

ASF subversion and git services commented on CLOUDSTACK-8701:
-

Commit b63ddcd160c51f002e9e85b6f44469fd48f0df94 in cloudstack's branch 
refs/heads/4.5-samlfixes from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=b63ddcd ]

CLOUDSTACK-8701: Add listandswitchsamlaccount API test and add boundary checks

- Adds unit test for ListAndSwitchSAMLAccountCmd
- Checks and logs in user only if they are enabled
- If saml user switches to a locked account, send appropriate error message

Signed-off-by: Rohit Yadav rohit.ya...@shapeblue.com


 Allow SAML users to switch accounts
 ---

 Key: CLOUDSTACK-8701
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8701
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.6.0, 4.5.2


 SAML-authenticated users may have multiple accounts across domains, since 
 user accounts can share the same username. The current approach in 
 4.5/master is to grab the domain information beforehand, which is bad UX: 
 users must remember their domain path names (harder than remembering the 
 domain name).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8598) CS reports volume migration as successful but the volume is not migrated in vCenter.

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8598:

Fix Version/s: (was: 4.6.0)

 CS reports volume migration as successful but the volume is not migrated in 
 vCenter.
 

 Key: CLOUDSTACK-8598
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8598
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Likitha Shetty

 +Steps to reproduce+
 1. Deploy a VMware setup with 1 cluster, 2 hosts H1,H2 and 2 primary storages 
 P1,P2
 2. Deploy a VM V1 on H1, such that ROOT is on P1 and data vol is on P2
 3. Migrate V1 to H2 ,ROOT to P2 and data vol to P1 using 
 migrateVirtualMachineWithVolume API.
 4. Attach another data disk to V1
 5. Now migrate V1 to H1, data1 to P2, data2 to P2 and ROOT to P1.
 6. Again migrate V1 to H2, data1 to P1, data2 to P1 and ROOT to P2.
 Observed that the data volume doesn't actually get migrated in the final 
 step, but the DB is updated and the migration is reported as successful.
 The same issue is observed with other operations that involve disk lookup, 
 e.g. creating a volume snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8600) Clean up VM folders in storage

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8600:

Assignee: (was: Likitha Shetty)

 Clean up VM folders in storage
 --

 Key: CLOUDSTACK-8600
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8600
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Likitha Shetty
Priority: Minor

 In the case of VMware, when all volumes of a VM are migrated to a 
 different primary storage, the empty VM folder is left behind on the old 
 primary storage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8600) Clean up VM folders in storage

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8600:

Fix Version/s: (was: 4.6.0)

 Clean up VM folders in storage
 --

 Key: CLOUDSTACK-8600
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8600
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Likitha Shetty
Priority: Minor

 In the case of VMware, when all volumes of a VM are migrated to a 
 different primary storage, the empty VM folder is left behind on the old 
 primary storage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8713) KVM power state report not properly parsed (Exception), resulting in HA not working on CentOS 7

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8713:

Fix Version/s: (was: 4.6.0)

 KVM power state report not properly parsed (Exception), resulting in HA not 
 working on CentOS 7
 -

 Key: CLOUDSTACK-8713
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8713
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.6.0
 Environment: KVM on CentOS 7, management server running latest master 
 aka 4.6.0
Reporter: Remi Bergsma
Priority: Critical

 While testing a PR, I found that HA on KVM does not work properly. 
 Steps to reproduce:
 - Spin up some VMs on KVM using a HA offering
 - go to KVM hypervisor and kill one of them to simulate a crash
 virsh destroy 6 (change number)
 - look how cloudstack handles this missing VM
 Result:
 - VM stays down and is not started
 Expected result:
 - VM should be started somewhere
 Cause:
 It doesn't properly parse the power report it gets from the hypervisor, so 
 it never marks the VM Stopped. HA never kicks in and the VM stays down.
 The database reports PowerReportMissing. Starting the VM manually works fine.
 select name,power_state,instance_name,state from vm_instance where 
 name='test003';
 +-++---+-+
 | name| power_state| instance_name | state   |
 +-++---+-+
 | test003 | PowerReportMissing | i-2-6-VM  | Running |
 +-++---+-+
 1 row in set (0.00 sec)
 I also tried to crash a KVM hypervisor and then the same thing happens.
 Haven’t tested it on other hypervisors. Could anyone verify this?
 Logs:
 2015-08-06 15:40:46,809 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
 (AgentManager-Handler-16:null) VM state report is updated. host: 1, vm id: 6, 
 power state: PowerReportMissing 
 2015-08-06 15:40:46,815 INFO  [c.c.v.VirtualMachineManagerImpl] 
 (AgentManager-Handler-16:null) VM i-2-6-VM is at Running and we received a 
 power-off report while there is no pending jobs on it
 2015-08-06 15:40:46,815 INFO  [c.c.v.VirtualMachineManagerImpl] 
 (AgentManager-Handler-16:null) Detected out-of-band stop of a HA enabled VM 
 i-2-6-VM, will schedule restart
 2015-08-06 15:40:46,824 INFO  [c.c.h.HighAvailabilityManagerImpl] 
 (AgentManager-Handler-16:null) Schedule vm for HA:  VM[User|i-2-6-VM]
 2015-08-06 15:40:46,824 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
 (AgentManager-Handler-16:null) Done with process of VM state report. host: 1
 2015-08-06 15:40:46,851 INFO  [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-3:ctx-4e073b92 work-37) Processing 
 HAWork[37-HA-6-Running-Investigating]
 2015-08-06 15:40:46,871 INFO  [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-3:ctx-4e073b92 work-37) HA on VM[User|i-2-6-VM]
 2015-08-06 15:40:46,880 DEBUG [c.c.a.t.Request] (HA-Worker-3:ctx-4e073b92 
 work-37) Seq 1-6463228415230083145: Sending  { Cmd , MgmtId: 3232241215, via: 
 1(kvm2), Ver: v1, Flags: 100011, 
 [{com.cloud.agent.api.CheckVirtualMachineCommand:{vmName:i-2-6-VM,wait:20}}]
  }
 2015-08-06 15:40:46,908 ERROR [c.c.a.t.Request] 
 (AgentManager-Handler-17:null) Unable to convert to json: 
 [{com.cloud.agent.api.CheckVirtualMachineAnswer:{state:Stopped,result:true,contextMap:{},wait:0}}]
 2015-08-06 15:40:46,909 WARN  [c.c.u.n.Task] (AgentManager-Handler-17:null) 
 Caught the following exception but pushing on
 com.google.gson.JsonParseException: The JsonDeserializer EnumTypeAdapter 
 failed to deserialize json object Stopped given the type class 
 com.cloud.vm.VirtualMachine$PowerState
 at 
 com.google.gson.JsonDeserializerExceptionWrapper.deserialize(JsonDeserializerExceptionWrapper.java:64)
 at 
 com.google.gson.JsonDeserializationVisitor.invokeCustomDeserializer(JsonDeserializationVisitor.java:92)
 at 
 com.google.gson.JsonObjectDeserializationVisitor.visitFieldUsingCustomHandler(JsonObjectDeserializationVisitor.java:117)
 at 
 com.google.gson.ReflectingFieldNavigator.visitFieldsReflectively(ReflectingFieldNavigator.java:63)
 at com.google.gson.ObjectNavigator.accept(ObjectNavigator.java:120)
 at 
 com.google.gson.JsonDeserializationContextDefault.fromJsonObject(JsonDeserializationContextDefault.java:76)
 at 
 com.google.gson.JsonDeserializationContextDefault.deserialize(JsonDeserializationContextDefault.java:54)
 at com.google.gson.Gson.fromJson(Gson.java:551)
 at com.google.gson.Gson.fromJson(Gson.java:521)
 at 
 com.cloud.agent.transport.ArrayTypeAdaptor.deserialize(ArrayTypeAdaptor.java:80)
 at 

[jira] [Updated] (CLOUDSTACK-6974) IAM-Root Admin - When listNetwork is used with listall=false (or no listall passed), all isolated networks belonging to other users are listed.

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-6974:

Fix Version/s: (was: 4.6.0)

 IAM-Root Admin - When listNetwork is used with listall=false (or no listall 
 passed), all isolated networks belonging to other users are listed.
 --

 Key: CLOUDSTACK-6974
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6974
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.4.0
 Environment: Build from 4.4-forward
Reporter: Sangeetha Hariharan

 Root Admin - When listNetwork is used with listall=false (or no listall 
 passed) and isrecursive=true, all networks in the system are returned.
 Steps to reproduce the problem:
 Create multiple domains with a few user and domain accounts in them.
 Create isolated networks as each of these accounts.
 Create an admin user under ROOT.
 As this admin user, deploy a VM.
 Use listNetwork with listall=false (or no listall passed) and 
 isrecursive=true to retrieve all the networks owned by this admin.
 This results in all the networks in the system being returned.
 Following is the API call that was made; it returned 15 networks when it 
 should have fetched only 1 isolated network and 1 shared network.
 http://10.223.49.6:8080/client/api?apiKey=PB2CyeaqN0vfTodPzXV52OdE9YZLC8K-BrdLiEijWmq85nuAEfXVoAPxbzW0J5BgFAT-f5lnwDEgeOfp_boJAgisrecursive=trueresponse=jsonlistall=falsecommand=listNetworkssignature=l%2FNR4aBSnk7aAEDHhlsAvEXe7Cg%3D
  Response: { listnetworksresponse : { count:15 ,network : [ 
 {id:fb3b563c-5ba2-4f9a-aa65-82996f78f20e,name:SharedNetwork-Account,displaytext:SharedNetwork-Account,broadcastdomaintype:Vlan,traffictype:Guest,gateway:10.223.1.1,netmask:255.255.255.0,cidr:10.223.1.0/24,zoneid:b690dddf-5755-49ab-8a4d-0aff04fa39f7,zonename:BLR1,networkofferingid:1bec2c7f-d35d-4d33-a655-d3159be4a6ff,networkofferingname:DefaultSharedNetworkOfferingWithSGService,networkofferingdisplaytext:Offering
  for Shared Security group enabled 
 networks,networkofferingconservemode:true,networkofferingavailability:Optional,issystem:false,state:Setup,related:fb3b563c-5ba2-4f9a-aa65-82996f78f20e,broadcasturi:vlan://153,dns1:4.2.2.2,type:Shared,vlan:153,acltype:Account,account:testD111A-TestNetworkList-RPNQIQ,domainid:b706ea33-fbf7-4167-a857-16f79f332cf3,domain:D111-A243U3,service:[
 {name:UserData}
 ,{name:Dhcp,capability:[
 {name:DhcpAccrossMultipleSubnets,value:true,canchooseservicecapability:false}
 ]},{ ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7289) Bugs seen when declaring a class variable as native type (long) and have its getter method returning the corresponding object (Long) and vice versa

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-7289:

Fix Version/s: (was: 4.6.0)

 Bugs seen when declaring a class variable as native type (long) and have its 
 getter method returning the corresponding object (Long) and vice versa
 ---

 Key: CLOUDSTACK-7289
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7289
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Nitin Mehta
Priority: Critical

 Declaring a variable as a native type (long) while its getter returns the 
 corresponding object (Long) causes autoboxing bugs: NPEs, or values that 
 get silently defaulted. This is what CLOUDSTACK-7272 fixed in one place; 
 the same pattern should be fixed across the entire code base. The vice 
 versa (declaring hostId as Long but returning the native type long) should 
 be fixed as well.
 Buggy implementation (hostId declared as native long):
 long hostId;
 Long getHostId() {
 return hostId;
 }
 Right implementation (hostId declared as Long):
 Long hostId;
 Long getHostId() {
 return hostId;
 }
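A minimal, self-contained sketch of the pitfall described above (not CloudStack code; the class and field names are hypothetical): a nullable Long field exposed through a primitive getter throws NullPointerException the moment the value is null, while the boxed getter preserves the null.

```java
public class HostRef {
    private Long hostId; // nullable, e.g. VM not yet assigned to a host

    // Buggy: auto-unboxes hostId; throws NullPointerException when it is null.
    public long getHostIdAsPrimitive() {
        return hostId;
    }

    // Correct: field and getter use the same boxed type, so null survives.
    public Long getHostId() {
        return hostId;
    }

    public static void main(String[] args) {
        HostRef ref = new HostRef();
        System.out.println(ref.getHostId()); // prints "null"
        try {
            ref.getHostIdAsPrimitive();
        } catch (NullPointerException e) {
            System.out.println("NPE from unboxing a null Long");
        }
    }
}
```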



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-6835) db abstraction layer in upgrade path

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-6835:

Fix Version/s: (was: 4.6.0)

 db abstraction layer in upgrade path
 

 Key: CLOUDSTACK-6835
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6835
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Rajani Karuturi
Priority: Critical

 About 198 of the issues reported by the Coverity scan [1] on 26 May 2014 
 are in the Upgrade###to###.java code, and many of them relate to resource 
 leaks and prepared statements that are never closed. 
 I think we should have a DB abstraction layer in the upgrade path, so that 
 a developer who needs to select/insert/update data in the upgrade path need 
 not write native SQL and worry about these recurring issues.
 [1] https://scan.coverity.com/projects/943
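An illustrative sketch of the leak class Coverity flags (not CloudStack code; FakeStatement is a hypothetical stand-in for a JDBC PreparedStatement): try-with-resources guarantees close() runs even when the statement throws, which hand-written JDBC with no finally block misses.

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOnErrorSketch {
    static final List<String> events = new ArrayList<>();

    // Stand-in for a PreparedStatement; only the close() behavior matters here.
    static class FakeStatement implements AutoCloseable {
        void execute(boolean fail) {
            events.add("execute");
            if (fail) throw new RuntimeException("simulated SQL error");
        }
        @Override
        public void close() {
            events.add("close");
        }
    }

    public static void main(String[] args) {
        try (FakeStatement stmt = new FakeStatement()) {
            stmt.execute(true); // throws, but close() still runs
        } catch (RuntimeException ignored) {
            // the resource was already closed before this catch ran
        }
        System.out.println(events); // [execute, close]
    }
}
```

An abstraction layer built on this idiom would let upgrade code pass in SQL and a row mapper, keeping statement lifecycle out of each Upgrade###to### class.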



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7290) VO classes shouldn't have any class variables declared as native type

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-7290:

Fix Version/s: (was: 4.6.0)

 VO classes shouldn't have any class variables declared as native type
 -

 Key: CLOUDSTACK-7290
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7290
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Nitin Mehta
Priority: Critical

 VO classes, which map the schema to Java objects, shouldn't have any class 
 variables declared as native types: native types have default values, 
 whereas schema columns can be NULL, and declaring them as native types 
 masks that.
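A tiny sketch of the masking effect (not a real CloudStack VO; the class and field names are hypothetical): a primitive field silently turns an unset/NULL column into 0, while the boxed field keeps the missing value visible.

```java
public class VolumeVOSketch {
    long sizePrimitive; // NULL in the DB becomes 0 here — masked
    Long sizeBoxed;     // NULL in the DB stays null here — visible

    public static void main(String[] args) {
        VolumeVOSketch vo = new VolumeVOSketch(); // nothing loaded yet
        System.out.println(vo.sizePrimitive); // 0: indistinguishable from a real 0
        System.out.println(vo.sizeBoxed);     // null: clearly "not set"
    }
}
```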



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7350) Introduce the state 'Expunged' for vms when the vms are expunged.

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-7350:

Fix Version/s: (was: 4.6.0)

 Introduce the state 'Expunged' for vms when the vms are expunged. 
 --

 Key: CLOUDSTACK-7350
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7350
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Nitin Mehta

 Introduce the state 'Expunged' for VMs when they are expunged. 
 Currently a VM stays in the 'Expunging' state, which makes it confusing for 
 customers to understand whether it is gone or still expunging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8701) Allow SAML users to switch accounts

2015-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693154#comment-14693154
 ] 

ASF subversion and git services commented on CLOUDSTACK-8701:
-

Commit ce02ab9f0dfa4baf6790f492c350f49e3141fe46 in cloudstack's branch 
refs/heads/4.5-samlfixes from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=ce02ab9 ]

CLOUDSTACK-8701: Add listandswitchsamlaccount API test and add boundary checks

- Adds unit test for ListAndSwitchSAMLAccountCmd
- Checks and logs in user only if they are enabled
- If saml user switches to a locked account, send appropriate error message

Signed-off-by: Rohit Yadav rohit.ya...@shapeblue.com


 Allow SAML users to switch accounts
 ---

 Key: CLOUDSTACK-8701
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8701
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.6.0, 4.5.2


 SAML authenticated users may have multiple accounts across domains as there 
 may be user accounts with same usernames, the current way in 4.5/master is to 
 grab the domain information beforehand which provides a bad UX as users would 
 need to remember their domain path names (difficult than remembering the 
 domain name).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-5504) VMware - Primary store unavailable for 10 minutes - All snapshot tasks reported failure because of timing out after 20 minutes, but the snapshot process continues to succeed

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-5504:

Assignee: (was: Likitha Shetty)

 VMware - Primary store unavailable for 10 minutes - All snapshot tasks 
 reported failure because of timing out after 20 minutes, but the snapshot 
 process continues to succeed in vCenter after NFS was brought up.
 

 Key: CLOUDSTACK-5504
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5504
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Build from 4.3
Reporter: Sangeetha Hariharan
 Attachments: primarydown.rar


 Setup:
 Advanced zone set up with two ESXi 5.1 hosts.
 1. Deploy a few VMs on each of the hosts, so we start with 11 VMs.
 2. Create snapshots for the ROOT volumes.
 3. While the snapshots are still in progress, make the primary storage 
 unavailable for 10 minutes.
 4. Bring the primary storage back up after more than 10 minutes.
 When the primary store was brought up, I see the snapshots that were in 
 progress actually continue to download to secondary storage and succeed. 
 One of the snapshots that succeeded and is fully available in the secondary 
 store:
 root@Rack3Host8 1c545037-1d1c-4927-918a-2f3975e1076b]# ls -ltr
 total 446808
 -rw-r--r--. 1 root root  6454 Dec 13 21:09 
 1c545037-1d1c-4927-918a-2f3975e1076b.ovf
 -rw-r--r--. 1 root root 457069056 Dec 13 21:09 
 1c545037-1d1c-4927-918a-2f3975e1076b-disk0.vmdk
 [root@Rack3Host8 1c545037-1d1c-4927-918a-2f3975e1076b]#
 But all 11 snapshot tasks on the CloudStack side report failure after 
 about 20 minutes, and the snapshots are then put in the CreatedOnPrimary 
 state.
 The next scheduled hourly snapshot is attempted and succeeds.
 |22 | CreatedOnPrimary | 2013-12-13 21:52:15 | NULL|
 |21 | CreatedOnPrimary | 2013-12-13 21:52:15 | NULL|
 |20 | CreatedOnPrimary | 2013-12-13 21:52:15 | NULL|
 |19 | CreatedOnPrimary | 2013-12-13 21:52:15 | NULL|
 |18 | CreatedOnPrimary | 2013-12-13 21:52:16 | NULL|
 |17 | CreatedOnPrimary | 2013-12-13 21:52:16 | NULL|
 |16 | CreatedOnPrimary | 2013-12-13 21:52:16 | NULL|
 |14 | CreatedOnPrimary | 2013-12-13 21:52:17 | NULL|
 |25 | CreatedOnPrimary | 2013-12-13 21:52:17 | NULL|
 |24 | CreatedOnPrimary | 2013-12-13 21:52:17 | NULL|
 |23 | CreatedOnPrimary | 2013-12-13 21:52:18 | NULL|
 |22 | BackedUp | 2013-12-13 22:42:15 | NULL|
 |21 | BackedUp | 2013-12-13 22:42:15 | NULL|
 |20 | BackedUp | 2013-12-13 22:42:15 | NULL|
 |19 | BackedUp | 2013-12-13 22:42:15 | NULL|
 |18 | BackedUp | 2013-12-13 22:42:15 | NULL|
 |17 | BackedUp | 2013-12-13 22:42:16 | NULL|
 |16 | BackedUp | 2013-12-13 22:42:16 | NULL|
 |14 | BackedUp | 2013-12-13 22:42:16 | NULL|
 |25 | BackedUp | 2013-12-13 22:42:17 | NULL|
 |24 | BackedUp | 2013-12-13 22:42:17 | NULL|
 |23 | BackedUp | 2013-12-13 22:42:17 | NULL|
 +---+--+-+-+
 86 rows in set (0.00 sec)
 2013-12-13 16:52:17,720 DEBUG [c.c.a.t.Request] (Job-Executor-5:ctx-1d3bd6cc 
 ctx-837a59a5) Seq 5-422576170: Sending  { Cmd , MgmtId: 95307354844397, via: 
 5(s-10-VM), Ver: v1, Flags: 100011, 
 [{org.apache.cloudstack.storage.command.CopyCommand:{srcTO:{org.apache.cloudstack.storage.to.SnapshotObjectTO:{path:ffa0b125-d1d9-4524-bd9e-03178914845b,volume:{uuid:15189035-3592-41ac-b2bc-a39d247e7d2f,volumeType:ROOT,dataStore:{org.apache.cloudstack.storage.to.PrimaryDataStoreTO:{uuid:a0c555cc-695c-3343-bfa0-3413a91dbfed,id:1,poolType:NetworkFilesystem,host:10.223.57.195,path:/export/home/vmware/primary,port:2049,url:NetworkFilesystem://10.223.57.195//export/home/vmware/primary/?ROLE=PrimarySTOREUUID=a0c555cc-695c-3343-bfa0-3413a91dbfed}},name:ROOT-18,size:2147483648,path:ROOT-18,volumeId:18,vmName:i-4-18-VM,accountId:4,chainInfo:{\diskDeviceBusName\:\ide0:1\,\diskChain\:[\[a0c555cc695c3343bfa03413a91dbfed]
  
 

[jira] [Resolved] (CLOUDSTACK-8711) public_ip type resource count for an account is not decremented upon IP range deletion

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi resolved CLOUDSTACK-8711.
-
Resolution: Fixed

 public_ip type resource count for an account is not decremented upon IP range 
 deletion
 --

 Key: CLOUDSTACK-8711
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8711
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.1
Reporter: Maneesha
Assignee: Maneesha
 Fix For: 4.6.0


 When deleting an IP range that is associated with an account, the resource 
 count for public_ip is not decremented accordingly, which prevents adding 
 any new ranges to that account once the max limit is reached.
 Repro steps:
 -
 1. Add an IP range and associate it with a particular account. This 
 increments the account's public_ip resource count by the range size.
 2. Now delete this range and check the account's public_ip resource count. 
 It will not be decremented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8699) Extra acquired public ip is assigned to wrong eth device

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8699:

Fix Version/s: (was: 4.6.0)

 Extra acquired public ip is assigned to wrong eth device
 

 Key: CLOUDSTACK-8699
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8699
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Virtual Router
Affects Versions: 4.4.4
 Environment: KVM on CentOS
Reporter: Remi Bergsma

 When the public network of a zone is untagged, an extra public ip address is 
 bound to the wrong interface (eth2 instead of eth1).
 Example:
 root@r-44137-VM:~# ip a
 1: lo: LOOPBACK,UP,LOWER_UP mtu 16436 qdisc noqueue state UNKNOWN 
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast state UP 
 qlen 1000
 link/ether 0e:00:a9:fe:01:eb brd ff:ff:ff:ff:ff:ff
 inet 169.254.1.235/16 brd 169.254.255.255 scope global eth0
 3: eth1: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast state UP 
 qlen 1000
 link/ether 06:46:32:00:00:8d brd ff:ff:ff:ff:ff:ff
 inet xx.22.37.143/25 brd 85.222.237.255 scope global eth1
 4: eth2: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast state UP 
 qlen 1000
 link/ether 02:00:51:27:00:02 brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.1/24 brd 10.0.0.255 scope global eth2
 inet xx.22.37.145/25 brd 85.222.237.255 scope global eth2
 Obviously, this does not work.
 # MGT server
 2015-07-31 13:08:12,330 DEBUG [agent.manager.ClusteredAgentAttache] 
 (API-Job-Executor-98:ctx-cbf1e352 job-799791 ctx-58cb236f) Seq 
 437-2425751349292433542: Forwarding Seq 437-2425751349292433542:  { Cmd , 
 MgmtId: 345052433506, via: 
 437(mccxpod01-hv03.mccxpod01.mccx-shared-2.mccx.mcinfra.net), Ver: v1, Flags: 
 11, 
 [{com.cloud.agent.api.routing.SetPortForwardingRulesVpcCommand:{rules:[{dstIp:10.0.0.61,dstPortRange:[22,22],id:54859,srcIp:xx.22.37.145,protocol:tcp,srcPortRange:[22,22],revoked:false,alreadyAdded:false,purpose:PortForwarding,defaultEgressPolicy:false}],accessDetails:{zone.network.type:Advanced,router.name:r-44137-VM,router.ip:169.254.1.235,router.guest.ip:10.0.0.1},wait:0}}]
  } to 345052433504
 2015-07-31 13:08:12,457 DEBUG [agent.transport.Request] 
 (AgentManager-Handler-58:null) Seq 437-2425751349292433542: Processing:  { 
 Ans: , MgmtId: 345052433506, via: 437, Ver: v1, Flags: 0, 
 [{com.cloud.agent.api.Answer:{result:true,details:,wait:0}}] }
 2015-07-31 13:08:12,457 DEBUG [agent.transport.Request] 
 (API-Job-Executor-98:ctx-cbf1e352 job-799791 ctx-58cb236f) Seq 
 437-2425751349292433542: Received:  { Ans: , MgmtId: 345052433506, via: 437, 
 Ver: v1, Flags: 0, { Answer } }
 # AGENT
 2015-07-31 13:08:12,203 DEBUG [cloud.agent.Agent] 
 (agentRequest-Handler-4:null) Request:Seq 437-2425751349292433541:  { Cmd , 
 MgmtId: 345052433506, via: 437, Ver: v1, Flags: 11, 
 [{com.cloud.agent.api.rout
 ing.IpAssocVpcCommand:{ipAddresses:[{accountId:625,publicIp:xx.22.37.145,sourceNat:false,add:true,oneToOneNat:false,firstIP:false,broadcastUri:vlan://untagged,vlanGateway:xx.22.37.1
 29,vlanNetmask:255.255.255.128,vifMacAddress:06:46:32:00:00:8d,networkRate:-1,trafficType:Public,networkName:pubbr0,newNic:false}],accessDetails:{router.guest.ip:xx.22.37.143,zone
 .network.type:Advanced,router.ip:169.254.1.235,router.name:r-44137-VM},wait:0}}]
  }
 2015-07-31 13:08:12,204 DEBUG [cloud.agent.Agent] 
 (agentRequest-Handler-4:null) Processing command: 
 com.cloud.agent.api.routing.IpAssocVpcCommand
 2015-07-31 13:08:12,206 DEBUG [kvm.resource.LibvirtComputingResource] 
 (agentRequest-Handler-4:null) Executing: 
 /usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh 
 vpc_ipassoc.sh 169.254.1.235  -A
   -l xx.22.37.145 -c eth2 -g xx.22.37.129 -m 25 -n xx.22.37.128
 As you see, the vpc_ipassoc.sh script is instructed to use the wrong eth 
 interface.
 See also:
 core/src/com/cloud/agent/resource/virtualnetwork/VirtualRoutingResource.java
 api/src/com/cloud/agent/api/to/IpAddressTO.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8442) [VMWARE] VM Cannot be powered on after restoreVirtualMachine

2015-08-12 Thread Rajani Karuturi (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692982#comment-14692982
 ] 

Rajani Karuturi commented on CLOUDSTACK-8442:
-

removing fixVersion from the bugs with empty assignee.
(fixVersion implies the fix will be available in that release while the issue 
is open, and has been available since that release once it is resolved.
Please set the fixVersion only when you plan to work on the issue or you know 
that someone is working on it. In that case, please update the assignee as 
well.)
 [VMWARE] VM Cannot be powered on after restoreVirtualMachine 
 -

 Key: CLOUDSTACK-8442
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8442
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.5.1
 Environment: ACS 4.5.1, CentOS 6.6
 vSphere 5.5 with NFS for Primary Storage
Reporter: ilya musayev
  Labels: vmware

 While the restoreVirtualMachine call is successful, when you try to power 
 on the VM, vSphere fails to find and use the proper ROOT volume. 
 To recreate this issue, create a project, deploy a VM with template X in 
 the same project, then use the restoreVirtualMachine API call to alter the 
 ROOT disk and attempt to power on.
 The same errors are seen in vCenter...
 2015-05-05 06:38:43,962 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077) Add job-8077 into job monitoring
 2015-05-05 06:38:43,969 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
 (catalina-exec-7:ctx-6e032e40 ctx-8bb374e0) submit async job-8077, details: 
 AsyncJobVO {id:8077, userId: 2, accountId: 2, instanceType: VirtualMachine, 
 instanceId: 1350, cmd: 
 org.apache.cloudstack.api.command.admin.vm.StartVMCmdByAdmin, cmdInfo: 
 {id:bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2,response:json,sessionkey:EfTBAqeGH5ivA9E7W1q7gcYXWgI\u003d,ctxDetails:{\com.cloud.vm.VirtualMachine\:\bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2\},cmdEventType:VM.START,ctxUserId:2,httpmethod:GET,projectid:98b2e16f-1e4f-4b19-866b-154ef5aad53d,_:1430807923839,uuid:bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2,ctxAccountId:2,ctxStartEventId:17421},
  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
 null, initMsid: 345049223690, completeMsid: null, lastUpdated: null, 
 lastPolled: null, created: null}
 2015-05-05 06:38:43,978 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077) Executing AsyncJobVO {id:8077, 
 userId: 2, accountId: 2, instanceType: VirtualMachine, instanceId: 1350, cmd: 
 org.apache.cloudstack.api.command.admin.vm.StartVMCmdByAdmin, cmdInfo: 
 {id:bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2,response:json,sessionkey:EfTBAqeGH5ivA9E7W1q7gcYXWgI\u003d,ctxDetails:{\com.cloud.vm.VirtualMachine\:\bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2\},cmdEventType:VM.START,ctxUserId:2,httpmethod:GET,projectid:98b2e16f-1e4f-4b19-866b-154ef5aad53d,_:1430807923839,uuid:bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2,ctxAccountId:2,ctxStartEventId:17421},
  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
 null, initMsid: 345049223690, completeMsid: null, lastUpdated: null, 
 lastPolled: null, created: null}
 2015-05-05 06:38:43,990 WARN  [c.c.a.d.ParamGenericValidationWorker] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Received unknown 
 parameters for command startVirtualMachine. Unknown parameters : projectid
 2015-05-05 06:38:44,020 DEBUG [c.c.n.NetworkModelImpl] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Service 
 SecurityGroup is not supported in the network id=224
 2015-05-05 06:38:44,025 DEBUG [c.c.n.NetworkModelImpl] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Service 
 SecurityGroup is not supported in the network id=224
 2015-05-05 06:38:44,045 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Deploy avoids pods: 
 [], clusters: [], hosts: []
 2015-05-05 06:38:44,046 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) DeploymentPlanner 
 allocation algorithm: com.cloud.deploy.FirstFitPlanner@49361de4
 2015-05-05 06:38:44,046 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Trying to allocate a 
 host and storage pools from dc:1, pod:1,cluster:null, requested cpu: 100, 
 requested ram: 2147483648
 2015-05-05 06:38:44,047 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Is ROOT volume READY 
 (pool already allocated)?: No
 2015-05-05 06:38:44,047 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
 (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) This VM has last 
 

[jira] [Updated] (CLOUDSTACK-8383) [Master Install] Unable to start VM due to error in finalizeStart

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8383:

Fix Version/s: (was: 4.6.0)

 [Master Install] Unable to start VM due to error in finalizeStart
 -

 Key: CLOUDSTACK-8383
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8383
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Install and Setup
Affects Versions: 4.6.0
Reporter: Raja Pullela
Priority: Critical

 The following exception is seen in the management server log - 
 Error finalize start
 2015-04-13 08:12:11,018 ERROR [c.c.v.VirtualMachineManagerImpl] 
 (Work-Job-Executor-3:ctx-a3d21009 job-19/job-23 ctx-31b66d8c) Failed to start 
 instance VM[DomainRouter|r-7-VM]
 com.cloud.utils.exception.ExecutionException: Unable to start 
 VM[DomainRouter|r-7-VM] due to error in finalizeStart, not retrying
   at 
 com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:1076)
   at 
 com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:4503)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
   at 
 com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4659)
   at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
   at 
 org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
   at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
   at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
   at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
   at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
   at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
   at 
 org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:722)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8389) Volume to Template Conversion Broken

2015-08-12 Thread Rajani Karuturi (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14692983#comment-14692983
 ] 

Rajani Karuturi commented on CLOUDSTACK-8389:
-

removing fixVersion from the bugs with an empty assignee.
(fixVersion implies the fix will be available in this release while the bug is 
open, and was available since this release once it is resolved.
please update the fixVersion only when you plan to work on it or you know that 
someone is working on it; in that case, please update the assignee as well)

 Volume to Template Conversion Broken
 

 Key: CLOUDSTACK-8389
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8389
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server, Template
Affects Versions: 4.5.0, 4.6.0, 4.5.1
 Environment: ACS 4.5.1, vSphere 5.5
Reporter: ilya musayev

 During testing of ACS 4.5.1 on VMware vSphere 5.5, converting a volume to a 
 template fails with the error below. Checking the commit history, I suspect 
 another Coverity fix introduced the issue; would someone please have a look?
 https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=history;f=server/src/com/cloud/template/TemplateManagerImpl.java;h=8bd7b21602a0cc5410af54f41cac510f1751b183;hb=refs/heads/4.5
 2015-04-16 18:37:17,138 DEBUG [c.c.a.t.Request] (AgentManager-Handler-4:null) 
 Seq 12-1900237567773648695: Processing:  { Ans: , MgmtId: 345049223690, via: 
 12, Ver: v1, Flags: 10, 
 [{org.apache.cloudstack.storage.command.CopyCmdAnswer:{result:false,details:create
  template from volume exception: Exception: 
 java.lang.NullPointerException\nMessage: null\n,wait:0}}] }
 2015-04-16 18:37:17,138 DEBUG [c.c.a.t.Request] 
 (API-Job-Executor-11:ctx-0e29dec8 job-7636 ctx-f9f56d7e) Seq 
 12-1900237567773648695: Received:  { Ans: , MgmtId: 345049223690, via: 12, 
 Ver: v1, Flags: 10, { CopyCmdAnswer } }
 2015-04-16 18:37:17,153 DEBUG [c.c.t.TemplateManagerImpl] 
 (API-Job-Executor-11:ctx-0e29dec8 job-7636 ctx-f9f56d7e) Failed to create 
 templatecreate template from volume exception: Exception: 
 java.lang.NullPointerException
 Message: null
 2015-04-16 18:37:17,188 ERROR [c.c.a.ApiAsyncJobDispatcher] 
 (API-Job-Executor-11:ctx-0e29dec8 job-7636) Unexpected exception while 
 executing 
 org.apache.cloudstack.api.command.admin.template.CreateTemplateCmdByAdmin
 com.cloud.utils.exception.CloudRuntimeException: Failed to create 
 templatecreate template from volume exception: Exception: 
 java.lang.NullPointerException
 Message: null
 at 
 com.cloud.template.TemplateManagerImpl.createPrivateTemplate(TemplateManagerImpl.java:1397)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at 
 org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
 at 
 com.cloud.event.ActionEventInterceptor.invoke(ActionEventInterceptor.java:51)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
 at 
 org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
 at 
 org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
 at $Proxy174.createPrivateTemplate(Unknown Source)
 at 
 org.apache.cloudstack.api.command.admin.template.CreateTemplateCmdByAdmin.execute(CreateTemplateCmdByAdmin.java:43)
 at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:141)
 at 
 com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:108)
 at 
 org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
 at 
 

[jira] [Updated] (CLOUDSTACK-8499) UI reload performance is poor in index.jsp

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8499:

Fix Version/s: (was: Future)

 UI reload performance is poor in index.jsp
 -

 Key: CLOUDSTACK-8499
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8499
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server, UI
Affects Versions: 4.6.0, 4.4.4, 4.5.2
Reporter: Rafael Santos Antunes da Fonseca
Assignee: Rafael Santos Antunes da Fonseca
 Fix For: 4.6.0, 4.5.2


 A timestamp is placed in front of some of the static file URLs in 
 ui/index.jsp, which prevents Tomcat from responding with 304 to the 
 client, so cached client files are re-sent on every reload.
 This hurts page reload speed badly.
 This problem affects all versions since 4.0.
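 As a hypothetical illustration (the actual markup and file names in 
 ui/index.jsp may differ), the cache-busting pattern being described, and a 
 cache-friendly alternative, look like this:

 ```jsp
 <%-- Problem: a per-request timestamp makes every URL unique, so the
      browser can never send a conditional request and receive a 304. --%>
 <script src="lib/jquery.js?t=<%= System.currentTimeMillis() %>"></script>

 <%-- Fix sketch: a stable URL lets Tomcat answer If-Modified-Since /
      If-None-Match revalidation with 304 Not Modified. --%>
 <script src="lib/jquery.js"></script>
 ```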



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8437) Automation: test_04_create_multiple_networks_with_lb_1_network_offering - Fails

2015-08-12 Thread Rajani Karuturi (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14692981#comment-14692981
 ] 

Rajani Karuturi commented on CLOUDSTACK-8437:
-

removing fixVersion from the bugs with an empty assignee.
(fixVersion implies the fix will be available in this release while the bug is 
open, and was available since this release once it is resolved.
please update the fixVersion only when you plan to work on it or you know that 
someone is working on it; in that case, please update the assignee as well)

 Automation: test_04_create_multiple_networks_with_lb_1_network_offering - 
 Fails
 ---

 Key: CLOUDSTACK-8437
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8437
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation
Affects Versions: 4.5.0, 4.6.0
Reporter: Abhinandan Prateek
Priority: Critical

 test/integration/component/test_vpc_network.py 
 If a network with the LB service already exists in a VPC, creating a second 
 network with LB should fail. 
 This is a rough description; more investigation is required to check whether 
 the test is fine and this is a product defect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8710) site2site vpn iptables rules are not configured on VR

2015-08-12 Thread Rajani Karuturi (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693002#comment-14693002
 ] 

Rajani Karuturi commented on CLOUDSTACK-8710:
-

removing fixVersion from the bugs with an empty assignee.
(fixVersion implies the fix will be available in this release while the bug is 
open, and was available since this release once it is resolved.
please update the fixVersion only when you plan to work on it or you know that 
someone is working on it; in that case, please update the assignee as well)

 site2site vpn iptables rules are not configured on VR
 -

 Key: CLOUDSTACK-8710
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8710
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Network Devices
Affects Versions: 4.6.0
Reporter: Jayapal Reddy
Priority: Critical

 1. Configure vpc 
 2. Configure site2site vpn 
 3. After configuration, go to the VR and check its iptables rules.
 Observed that no rules are configured for ports 500 and 4500.
 configure.py has a method 'configure_iptables' that contains the rules, 
 but they are not applied on the VR during site-to-site VPN configuration.
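 For illustration only, a minimal sketch of the IPsec-related rules one would 
 expect 'configure_iptables' to emit for site-to-site VPN. The interface name 
 "eth1" and the INPUT chain are assumptions for the sketch, not taken from 
 configure.py:

 ```shell
 # Print the iptables rules a site-to-site VPN typically needs:
 # IKE (udp/500), NAT-T (udp/4500) and the ESP protocol itself.
 # "eth1" is a placeholder public interface, not CloudStack's actual choice.
 expected_vpn_rules() {
     for port in 500 4500; do
         echo "iptables -A INPUT -i eth1 -p udp --dport ${port} -j ACCEPT"
     done
     echo "iptables -A INPUT -i eth1 -p esp -j ACCEPT"
 }
 expected_vpn_rules
 ```

 The symptom reported here is precisely that rules like these never show up in 
 the VR's iptables listing after the VPN is configured.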



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8710) site2site vpn iptables rules are not configured on VR

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8710:

Fix Version/s: (was: 4.6.0)

 site2site vpn iptables rules are not configured on VR
 -

 Key: CLOUDSTACK-8710
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8710
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Network Devices
Affects Versions: 4.6.0
Reporter: Jayapal Reddy
Priority: Critical

 1. Configure vpc 
 2. Configure site2site vpn 
 3. After configuration, go to the VR and check its iptables rules.
 Observed that no rules are configured for ports 500 and 4500.
 configure.py has a method 'configure_iptables' that contains the rules, 
 but they are not applied on the VR during site-to-site VPN configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8548) Message translations in Japanese and Chinese

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8548:

Assignee: Ramamurti Subramanian

 Message translations in Japanese and Chinese
 

 Key: CLOUDSTACK-8548
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8548
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.6.0
Reporter: Ramamurti Subramanian
Assignee: Ramamurti Subramanian
 Fix For: 4.6.0


 The message keys are sorted to match the English messages file. No messages 
 were removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8295) max data volume limits to be updated with new values for all hypervisors

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8295:

Assignee: satoru nakaya

 max data volume limits to be updated with new values for all hypervisors
 

 Key: CLOUDSTACK-8295
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8295
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Affects Versions: 4.3.0, 4.4.0
Reporter: Harikrishna Patnala
Assignee: satoru nakaya
 Fix For: 4.6.0


 There is a discrepancy between the docs and the values supported in CloudStack
 http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.4/storage.html
 http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/storage.html
 
 CloudStack supports attaching up to 13 data disks to a VM on XenServer
 hypervisor versions 6.0 and above.
 For the VMs on other hypervisor types, the data disk limit is 6.
 
 The Manual is wrong.
 CloudStack supports attaching up to 
  a) 13 data disks on XenServer hypervisor versions 6.0 and above and all 
 versions of VMware 
 b) 64 data disks on HyperV
 c) 6 data disks on other hypervisor types
 mysql> select hypervisor_type,hypervisor_version,max_data_volumes_limit from 
 cloud.hypervisor_capabilities order by hypervisor_type;
 +-----------------+--------------------+------------------------+
 | hypervisor_type | hypervisor_version | max_data_volumes_limit |
 +-----------------+--------------------+------------------------+
 | Hyperv          | 6.2                |                     64 |
 | KVM             | default            |                      6 |
 | LXC             | default            |                      6 |
 | Ovm             | default            |                      6 |
 | Ovm             | 2.3                |                      6 |
 | VMware          | default            |                     13 |
 | VMware          | 4.0                |                     13 |
 | VMware          | 4.1                |                     13 |
 | VMware          | 5.5                |                     13 |
 | VMware          | 5.1                |                     13 |
 | VMware          | 5.0                |                     13 |
 | XenServer       | 6.1.0              |                     13 |
 | XenServer       | 6.2.0              |                     13 |
 | XenServer       | default            |                      6 |
 | XenServer       | 6.0.2              |                     13 |
 | XenServer       | 6.0                |                     13 |
 | XenServer       | 5.6 SP2            |                      6 |
 | XenServer       | 5.6 FP1            |                      6 |
 | XenServer       | 5.6                |                      6 |
 | XenServer       | XCP 1.0            |                      6 |
 | XenServer       | 6.5.0              |                     13 |
 +-----------------+--------------------+------------------------+



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8295) max data volume limits to be updated with new values for all hypervisors

2015-08-12 Thread Rajani Karuturi (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14692994#comment-14692994
 ] 

Rajani Karuturi commented on CLOUDSTACK-8295:
-

[~giraffeforestg], gave you the required permissions

 max data volume limits to be updated with new values for all hypervisors
 

 Key: CLOUDSTACK-8295
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8295
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Affects Versions: 4.3.0, 4.4.0
Reporter: Harikrishna Patnala
Assignee: satoru nakaya
 Fix For: 4.6.0


 There is a discrepancy between the docs and the values supported in CloudStack
 http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.4/storage.html
 http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/storage.html
 
 CloudStack supports attaching up to 13 data disks to a VM on XenServer
 hypervisor versions 6.0 and above.
 For the VMs on other hypervisor types, the data disk limit is 6.
 
 The Manual is wrong.
 CloudStack supports attaching up to 
  a) 13 data disks on XenServer hypervisor versions 6.0 and above and all 
 versions of VMware 
 b) 64 data disks on HyperV
 c) 6 data disks on other hypervisor types
 mysql> select hypervisor_type,hypervisor_version,max_data_volumes_limit from 
 cloud.hypervisor_capabilities order by hypervisor_type;
 +-----------------+--------------------+------------------------+
 | hypervisor_type | hypervisor_version | max_data_volumes_limit |
 +-----------------+--------------------+------------------------+
 | Hyperv          | 6.2                |                     64 |
 | KVM             | default            |                      6 |
 | LXC             | default            |                      6 |
 | Ovm             | default            |                      6 |
 | Ovm             | 2.3                |                      6 |
 | VMware          | default            |                     13 |
 | VMware          | 4.0                |                     13 |
 | VMware          | 4.1                |                     13 |
 | VMware          | 5.5                |                     13 |
 | VMware          | 5.1                |                     13 |
 | VMware          | 5.0                |                     13 |
 | XenServer       | 6.1.0              |                     13 |
 | XenServer       | 6.2.0              |                     13 |
 | XenServer       | default            |                      6 |
 | XenServer       | 6.0.2              |                     13 |
 | XenServer       | 6.0                |                     13 |
 | XenServer       | 5.6 SP2            |                      6 |
 | XenServer       | 5.6 FP1            |                      6 |
 | XenServer       | 5.6                |                      6 |
 | XenServer       | XCP 1.0            |                      6 |
 | XenServer       | 6.5.0              |                     13 |
 +-----------------+--------------------+------------------------+



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-6697) update BigSwitch network plugin

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi resolved CLOUDSTACK-6697.
-
Resolution: Fixed

 update BigSwitch network plugin
 ---

 Key: CLOUDSTACK-6697
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6697
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Network Controller
Affects Versions: 4.6.0
Reporter: Kuang-Ching Wang
Assignee: Kuang-Ching Wang
 Fix For: 4.6.0


 In CLOUDSTACK-733 a network plugin was created for the BVS application of the 
 Big Switch SDN controller.  Since then, the application has evolved and its 
 implementation has changed in various ways.  This issue was created to update 
 the plugin to be compatible with the latest controller.
 This issue is not expected to affect any CloudStack workflow.  The fix will 
 ensure correct operation of all previously supported functions.
 See CLOUDSTACK-733 for reference.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-6623) Register template does not work as expected, when deploying simulator and xen zones simultaneously on a single management server.

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-6623:

Fix Version/s: (was: 4.6.0)

 Register template does not work as expected, when deploying simulator and xen 
 zones simultaneously on a single management server.
 -

 Key: CLOUDSTACK-6623
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6623
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation
Affects Versions: 4.4.0
Reporter: Bharat Kumar
Priority: Critical

 when we set up simulator and XenServer zones separately on a single 
 management server, register template always behaves as if it is executing 
 against the simulator, i.e. register template always succeeds and does not 
 initiate the actual download when the register template API is called against 
 the xen zone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-3788) [KVM] Weekly Snapshot got stuck in Allocated State

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-3788:

Fix Version/s: (was: 4.6.0)

 [KVM] Weekly Snapshot got stuck in Allocated State
 

 Key: CLOUDSTACK-3788
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3788
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Snapshot
Affects Versions: 4.2.0
Reporter: Chandan Purushothama
 Attachments: management-server.log.2013-07-23.gz, 
 mysql_cloudstack_dump.zip


 Weekly Snapshot stuck in Allocated State:
 mysql> select * from snapshots where name like 
 'Atoms-VM-1_ROOT-6_20130723235146'\G
 *************************** 1. row ***************************
               id: 24
   data_center_id: 1
       account_id: 3
        domain_id: 1
        volume_id: 6
 disk_offering_id: 1
           status: Destroyed
             path: NULL
             name: Atoms-VM-1_ROOT-6_20130723235146
             uuid: 08a0d2aa-9635-41cd-ba54-5367303bceac
    snapshot_type: 3
 type_description: HOURLY
             size: 147456
          created: 2013-07-23 23:51:46
          removed: NULL
   backup_snap_id: NULL
         swift_id: NULL
       sechost_id: NULL
     prev_snap_id: NULL
  hypervisor_type: KVM
          version: 2.2
            s3_id: NULL
 *************************** 2. row ***************************
               id: 25
   data_center_id: 1
       account_id: 3
        domain_id: 1
        volume_id: 6
 disk_offering_id: 1
           status: Allocated
             path: NULL
             name: Atoms-VM-1_ROOT-6_20130723235146
             uuid: 1e24a056-be38-4b55-845b-a5672b9fa93c
    snapshot_type: 5
 type_description: WEEKLY
             size: 147456
          created: 2013-07-23 23:51:46
          removed: NULL
   backup_snap_id: NULL
         swift_id: NULL
       sechost_id: NULL
     prev_snap_id: NULL
  hypervisor_type: KVM
          version: 2.2
            s3_id: NULL
 2 rows in set (0.04 sec)
 Attached Management Server logs and cloud database dump



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-6697) update BigSwitch network plugin

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-6697:

Assignee: Kuang-Ching Wang

 update BigSwitch network plugin
 ---

 Key: CLOUDSTACK-6697
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6697
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Network Controller
Affects Versions: 4.6.0
Reporter: Kuang-Ching Wang
Assignee: Kuang-Ching Wang
 Fix For: 4.6.0


 In CLOUDSTACK-733 a network plugin was created for the BVS application of the 
 Big Switch SDN controller.  Since then, the application has evolved and have 
 various changes in its implementation.  This issue is created to update the 
 plugin so as to be compatible with the latest controller.
 This issue is not expected to affect any Cloudstack workflow.  The fix will 
 make sure correct operation of all the previously supported functions.
 See CLOUDSTACK-733 for reference.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8597) [VMware] Failed to migrate a volume from zone-wide to cluster-wide storage.

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8597:

Fix Version/s: (was: 4.6.0)

 [VMware] Failed to migrate a volume from zone-wide to cluster-wide storage.
 ---

 Key: CLOUDSTACK-8597
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8597
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Likitha Shetty

 +Steps to reproduce+
 1. Have a VMware setup with 2 clusters, each with an ESXi host and 
 cluster-wide storage.
 2. Have a zone-wide storage spanning both clusters.
 3. Deploy a VM with a data disk.
 4. Ensure the data disk is on the zone-wide storage. Attempt to migrate it to 
 the cluster-wide storage (of the cluster that contains the disk's VM).
 5. Repeat the above operation until a failure is seen.
 Migration may fail with the error below -
 {noformat}
 2015-06-08 14:37:00,079 ERROR [c.c.h.v.r.VmwareResource] 
 (DirectAgent-86:ctx-b374c26e 10.102.192.12, job-192/job-193, cmd: 
 MigrateVolumeCommand) (logid:ea70ca83) Unable to find the mounted datastore 
 with name 23b5a868-b6af-3692-85f5-f1d987b7f3e2 to execute MigrateVolumeCommand
 2015-06-08 14:37:00,084 ERROR [c.c.h.v.r.VmwareResource] 
 (DirectAgent-86:ctx-b374c26e 10.102.192.12, job-192/job-193, cmd: 
 MigrateVolumeCommand) (logid:ea70ca83) Catch Exception java.lang.Exception 
 due to java.lang.Exception: Unable to find the mounted datastore with name 
 23b5a868-b6af-3692-85f5-f1d987b7f3e2 to execute MigrateVolumeCommand
 java.lang.Exception: Unable to find the mounted datastore with name 
 23b5a868-b6af-3692-85f5-f1d987b7f3e2 to execute MigrateVolumeCommand
 at 
 com.cloud.hypervisor.vmware.resource.VmwareResource.execute(VmwareResource.java:3561)
 at 
 com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:414)
 at 
 com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:317)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
 at java.util.concurrent.FutureTask.run(FutureTask.java:166)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:722)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8598) CS reports volume migration as successful but the volume is not migrated in vCenter.

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8598:

Assignee: (was: Likitha Shetty)

 CS reports volume migration as successful but the volume is not migrated in 
 vCenter.
 

 Key: CLOUDSTACK-8598
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8598
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Likitha Shetty
 Fix For: 4.6.0


 +Steps to reproduce+
 1. Deploy a VMware setup with 1 cluster, 2 hosts H1 and H2, and 2 primary 
 storages P1 and P2.
 2. Deploy a VM V1 on H1, such that ROOT is on P1 and the data volume is on P2.
 3. Migrate V1 to H2, ROOT to P2 and the data volume to P1, using the 
 migrateVirtualMachineWithVolume API.
 4. Attach another data disk to V1.
 5. Now migrate V1 to H1: data1 to P2, data2 to P2 and ROOT to P1.
 6. Again migrate V1 to H2: data1 to P1, data2 to P1 and ROOT to P2.
 Observed that a data volume doesn't get migrated in the last step, but the DB 
 is updated and the migration operation is reported as successful.
 The same issue is observed with other operations that involve disk lookup, 
 e.g. creating a volume snapshot.
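 For context, a sketch of the CloudMonkey invocation for the migration steps 
 above. All UUIDs are placeholders, and the parameter names follow the 
 migrateVirtualMachineWithVolume API; verify them against the API reference 
 for your CloudStack version before use:

 ```
 cloudmonkey migrate virtualmachinewithvolume \
     virtualmachineid=<V1-uuid> hostid=<H2-uuid> \
     migrateto[0].volume=<root-vol-uuid>  migrateto[0].pool=<P2-uuid> \
     migrateto[1].volume=<data1-vol-uuid> migrateto[1].pool=<P1-uuid>
 ```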



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8597) [VMware] Failed to migrate a volume from zone-wide to cluster-wide storage.

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8597:

Assignee: (was: Likitha Shetty)

 [VMware] Failed to migrate a volume from zone-wide to cluster-wide storage.
 ---

 Key: CLOUDSTACK-8597
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8597
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Likitha Shetty

 +Steps to reproduce+
 1. Have a VMware setup with 2 clusters, each with an ESXi host and 
 cluster-wide storage.
 2. Have a zone-wide storage spanning both clusters.
 3. Deploy a VM with a data disk.
 4. Ensure the data disk is on the zone-wide storage. Attempt to migrate it to 
 the cluster-wide storage (of the cluster that contains the disk's VM).
 5. Repeat the above operation until a failure is seen.
 Migration may fail with the error below -
 {noformat}
 2015-06-08 14:37:00,079 ERROR [c.c.h.v.r.VmwareResource] 
 (DirectAgent-86:ctx-b374c26e 10.102.192.12, job-192/job-193, cmd: 
 MigrateVolumeCommand) (logid:ea70ca83) Unable to find the mounted datastore 
 with name 23b5a868-b6af-3692-85f5-f1d987b7f3e2 to execute MigrateVolumeCommand
 2015-06-08 14:37:00,084 ERROR [c.c.h.v.r.VmwareResource] 
 (DirectAgent-86:ctx-b374c26e 10.102.192.12, job-192/job-193, cmd: 
 MigrateVolumeCommand) (logid:ea70ca83) Catch Exception java.lang.Exception 
 due to java.lang.Exception: Unable to find the mounted datastore with name 
 23b5a868-b6af-3692-85f5-f1d987b7f3e2 to execute MigrateVolumeCommand
 java.lang.Exception: Unable to find the mounted datastore with name 
 23b5a868-b6af-3692-85f5-f1d987b7f3e2 to execute MigrateVolumeCommand
 at 
 com.cloud.hypervisor.vmware.resource.VmwareResource.execute(VmwareResource.java:3561)
 at 
 com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:414)
 at 
 com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:317)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
 at java.util.concurrent.FutureTask.run(FutureTask.java:166)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:722)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8487) Add VMware vMotion Tests

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-8487:

Issue Type: Test  (was: Bug)

 Add VMware vMotion Tests
 

 Key: CLOUDSTACK-8487
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8487
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation
 Environment: Hypervisor :VMware
 Storage type : VMFS, NFS 
 Storage scope : clusterwide, local, zonewide
 VM : Linux and Windows
Reporter: Abhinav Roy
Assignee: Abhinav Roy
 Fix For: 4.6.0


 Adding a new test script testpath_vMotion_vmware.py in the 
 test/integration/testpath folder. 
 This script has vMotion related test cases for VMware.
 Tests include :
 
 1. Migrate VM with volume within/across the cluster both for vmfs and nfs 
 datastores, windows and linux vms.
 2. Migrate VM with volume within/across cluster for local storage.
 3. Migrate across cwps and zwps.
 4. Migrate across nfs and vmfs.
 5. Negative scenarios
 6. Migration tests when host is put in maintenance.
 7. Migration tests when storage is put in maintenance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7088) Snapshot manager should search for guest OS including deleted

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-7088:

Fix Version/s: (was: 4.6.0)

 Snapshot manager should search for guest OS including deleted
 -

 Key: CLOUDSTACK-7088
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7088
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.4.0
Reporter: Amogh Vasekar
Assignee: Amogh Vasekar
   Original Estimate: 24h
  Remaining Estimate: 24h





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-4536) [object_store_refactor] Inconsistency in volume store location on secondary storage for uploaded and extracted volume

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-4536:

Assignee: (was: edison su)

 [object_store_refactor] Inconsistency in volume store location on secondary 
 storage for uploaded and extracted volume
 -

 Key: CLOUDSTACK-4536
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4536
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller, VMware, Volumes
Affects Versions: 4.2.1
 Environment: git rev-parse HEAD~5
 1f46bc3fb09aead2cf1744d358fea7adba7df6e1
 Hypervisor: VMWare
Reporter: Sanjeev N
 Fix For: 4.6.0


 Inconsistency in volume store location on secondary storage for uploaded and 
 extracted volumes in the case of VMware.
 Volumes are stored on the secondary storage under the following two 
 conditions:
 1. Uploaded volume
 2. Extract (download) volume
 In case 1, the volume is stored at 
 NFS_secondary_storage_path/volumes/account-id/volume-id/uuid.ova
 In case 2, the volume is stored at 
 NFS_secondary_storage_path/volumes/account-id/volume-id/uuid/uuid.ova
 Did not see any issues with this approach, but the store location is 
 inconsistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7088) Snapshot manager should search for guest OS including deleted

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-7088:

Assignee: (was: Amogh Vasekar)

 Snapshot manager should search for guest OS including deleted
 -

 Key: CLOUDSTACK-7088
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7088
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.4.0
Reporter: Amogh Vasekar
   Original Estimate: 24h
  Remaining Estimate: 24h





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-3111) [UI] Storage tab is not showing the Hypervisor column as 'KVM' if the (root/data)disk is attached to instance running in KVM Hypervisor.

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-3111:

Assignee: (was: Animesh Chaturvedi)

 [UI] Storage tab is not showing the Hypervisor column as 'KVM' if the 
 (root/data)disk is attached to instance running in KVM Hypervisor.
 

 Key: CLOUDSTACK-3111
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3111
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: API
Affects Versions: 4.1.1, 4.2.0
Reporter: Rajesh Battala
 Attachments: screen1.png


 Attaching the screenshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CLOUDSTACK-7650) with wrong checksum volume got uploaded

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi reassigned CLOUDSTACK-7650:
---

Assignee: Rajani Karuturi

 with wrong checksum volume got uploaded 
 

 Key: CLOUDSTACK-7650
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7650
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Volumes
Affects Versions: 4.5.0
Reporter: prashant kumar mishra
Assignee: Rajani Karuturi
 Fix For: 4.6.0

 Attachments: Logs_DB.rar


 steps to reproduce
 
 1-upload a volume with wrong checksum
 2-try to attach to a vm
 Expected
 --
 upload volume should fail
 Actual
 ---
 volume got  uploaded and attached successfully 
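
The expected behavior amounts to comparing a digest computed over the uploaded file with the user-supplied checksum and rejecting the volume on mismatch. A minimal sketch of such a check (hypothetical helper name; MD5 is assumed here as the digest the upload APIs accept, which is not confirmed by this report):

```python
import hashlib

def verify_checksum(path, expected_md5):
    """Return True when the file's MD5 digest matches the user-supplied checksum."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        # Hash in chunks so large volume files need not fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_md5.strip().lower()
```

On a mismatch the upload would be marked failed instead of the volume being registered and attached.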



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-7650) with wrong checksum volume got uploaded

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi resolved CLOUDSTACK-7650.
-
Resolution: Fixed

 with wrong checksum volume got uploaded 
 

 Key: CLOUDSTACK-7650
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7650
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Volumes
Affects Versions: 4.5.0
Reporter: prashant kumar mishra
Assignee: Rajani Karuturi
 Fix For: 4.6.0

 Attachments: Logs_DB.rar


 steps to reproduce
 
 1-upload a volume with wrong checksum
 2-try to attach to a vm
 Expected
 --
 upload volume should fail
 Actual
 ---
 volume got  uploaded and attached successfully 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-7715) Triage and fix Coverity defects

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi closed CLOUDSTACK-7715.
---
Resolution: Fixed

 Triage and fix Coverity defects
 ---

 Key: CLOUDSTACK-7715
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7715
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Santhosh Kumar Edukulla
Assignee: Sanjay Tripathi
 Fix For: 4.6.0


 1. We have Coverity setup available, running as scheduled and individual 
 owners are assigned with analyzed bugs.
 2. As part of this bug, please triage and fix the relevant Coverity defects 
 assigned to you. The count could be as small as 25 defects.
 3. Start with high-impact defects first and address the others later.
 4. Each defect can be triaged as "fix required", "false positive", or "not a 
 bug"; triage and fix wherever relevant and applicable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-8569) The latter snapshot export for the same volume will fail if 2 snapshot exports are queued

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi resolved CLOUDSTACK-8569.
-
Resolution: Fixed

 The latter snapshot export for the same volume will fail if 2 snapshot 
 exports are queued
 -

 Key: CLOUDSTACK-8569
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8569
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: XenServer
Reporter: Sanjay Tripathi
Assignee: Sanjay Tripathi
Priority: Critical
 Fix For: 4.6.0


 Issue: a scheduled snapshot export failed because another snapshot export 
 task deleted a required VDI.
 If 2 or more snapshots are queued up for backup, the first backed-up 
 snapshot deletes the other snapshots of the same volume on primary storage, 
 causing the remaining snapshot backups to fail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-8352) [marvin] Integrate vcenter communication through marvin

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi resolved CLOUDSTACK-8352.
-
Resolution: Fixed

 [marvin] Integrate  vcenter communication through  marvin
 -

 Key: CLOUDSTACK-8352
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8352
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: marvin
Affects Versions: 4.6.0
Reporter: Srikanteswararao Talluri
Assignee: Srikanteswararao Talluri
 Fix For: 4.6.0


 Marvin should be able to get details of host, vm, dvswitch etc., from vcenter.
 This is going to be implemented using the VMware SDK for Python, pyvmomi.
 Usage:
 vc_object = Vcenter("x.x.x.x", username, password)
 print '###get one dc'
 print(vc_object.get_datacenters(name='testDC'))
 print '###get multiple dcs'
 for i in vc_object.get_datacenters():
 print(i)
 print '###get one dv'
 print vc_object.get_dvswitches(name='dvSwitch')
 print '###get multiple dvs'
 for i in vc_object.get_dvswitches():
 print(i)
 print '###get one dvportgroup'
 print(vc_object.get_dvportgroups(name='cloud.guest.207.200.1-dvSwitch'))
 print '###get one dvportgroup and the vms associated with it'
 for vm in 
 vc_object.get_dvportgroups(name='cloud.guest.207.200.1-dvSwitch')[0]['dvportgroup']['vmlist']:
 print(vm.name)
 print(vm.network)
 print '###get multiple dvportgroups'
 for i in vc_object.get_dvportgroups():
 print(i)
 print vc_object.get_vms(name='VM1')



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8726) Automation for Quickly attaching multiple data disks to a new VM

2015-08-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693290#comment-14693290
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8726:


Github user nitt10prashant commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/683#discussion_r36845123
  
--- Diff: test/integration/component/test_simultaneous_volume_attach.py ---
@@ -0,0 +1,264 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+#Import Local Modules
+from marvin.cloudstackAPI import *
+from marvin.cloudstackTestCase import cloudstackTestCase, unittest
+from marvin.lib.utils import (cleanup_resources,
+  validateList)
+from marvin.lib.base import (ServiceOffering,
+ VirtualMachine,
+ Account,
+ Volume,
+ DiskOffering,
+ )
+from marvin.lib.common import (get_domain,
+get_zone,
+get_template,
+find_storage_pool_type)
+from marvin.codes import (
+PASS,
+FAILED,
+JOB_FAILED,
+JOB_CANCELLED,
+JOB_SUCCEEDED
+)
+from nose.plugins.attrib import attr
+import time
+
+
+class TestMultipleVolumeAttach(cloudstackTestCase):
+
+@classmethod
+def setUpClass(cls):
+testClient = super(TestMultipleVolumeAttach, 
cls).getClsTestClient()
+cls.apiclient = testClient.getApiClient()
+cls.services = testClient.getParsedTestDataConfig()
+cls._cleanup = []
+# Get Zone, Domain and templates
+cls.domain = get_domain(cls.apiclient)
+cls.zone = get_zone(cls.apiclient, testClient.getZoneForTests())
+cls.services['mode'] = cls.zone.networktype
+cls.hypervisor = testClient.getHypervisorInfo()
+cls.invalidStoragePoolType = False
+#for LXC if the storage pool of type 'rbd' ex: ceph is not 
available, skip the test
+if cls.hypervisor.lower() == 'lxc':
+if not find_storage_pool_type(cls.apiclient, 
storagetype='rbd'):
+# RBD storage type is required for data volumes for LXC
+cls.invalidStoragePoolType = True
+return
+cls.disk_offering = DiskOffering.create(
+cls.apiclient,
+cls.services["disk_offering"]
+)
+
+template = get_template(
+cls.apiclient,
+cls.zone.id,
+cls.services["ostype"]
+)
+if template == FAILED:
+assert False, "get_template() failed to return template with 
description %s" % cls.services["ostype"]
+
+cls.services["domainid"] = cls.domain.id
+cls.services["zoneid"] = cls.zone.id
+cls.services["template"] = template.id
+cls.services["diskofferingid"] = cls.disk_offering.id
+
+# Create VMs, VMs etc
+cls.account = Account.create(
+cls.apiclient,
+cls.services["account"],
+domainid=cls.domain.id
+)
+cls.service_offering = ServiceOffering.create(
+cls.apiclient,
+
cls.services["service_offering"]
+)
+cls.virtual_machine = VirtualMachine.create(
+cls.apiclient,
+cls.services,
+accountid=cls.account.name,
+   

[jira] [Commented] (CLOUDSTACK-8726) Automation for Quickly attaching multiple data disks to a new VM

2015-08-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693291#comment-14693291
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8726:


Github user nitt10prashant commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/683#discussion_r36845173
  

[jira] [Commented] (CLOUDSTACK-8487) Add VMware vMotion Tests

2015-08-12 Thread Rajani Karuturi (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693178#comment-14693178
 ] 

Rajani Karuturi commented on CLOUDSTACK-8487:
-

[~abhinavr], please resolve the issue once the PR is accepted

 Add VMware vMotion Tests
 

 Key: CLOUDSTACK-8487
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8487
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation
 Environment: Hypervisor :VMware
 Storage type : VMFS, NFS 
 Storage scope : clusterwide, local, zonewide
 VM : Linux and Windows
Reporter: Abhinav Roy
Assignee: Abhinav Roy
 Fix For: 4.6.0


 Adding a new test script testpath_vMotion_vmware.py in the 
 test/integration/testpath folder. 
 This script has vMotion related test cases for VMware.
 Tests include :
 
 1. Migrate VM with volume within/across the cluster both for vmfs and nfs 
 datastores, windows and linux vms.
 2. Migrate VM with volume within/across cluster for local storage.
 3. Migrate across cwps and zwps.
 4. Migrate across nfs and vmfs.
 5. Negative scenarios
 6. Migration tests when host is put in maintenance.
 7. Migration tests when storage is put in maintenance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-8487) Add VMware vMotion Tests

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi resolved CLOUDSTACK-8487.
-
Resolution: Fixed

 Add VMware vMotion Tests
 

 Key: CLOUDSTACK-8487
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8487
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation
 Environment: Hypervisor :VMware
 Storage type : VMFS, NFS 
 Storage scope : clusterwide, local, zonewide
 VM : Linux and Windows
Reporter: Abhinav Roy
Assignee: Abhinav Roy
 Fix For: 4.6.0


 Adding a new test script testpath_vMotion_vmware.py in the 
 test/integration/testpath folder. 
 This script has vMotion related test cases for VMware.
 Tests include :
 
 1. Migrate VM with volume within/across the cluster both for vmfs and nfs 
 datastores, windows and linux vms.
 2. Migrate VM with volume within/across cluster for local storage.
 3. Migrate across cwps and zwps.
 4. Migrate across nfs and vmfs.
 5. Negative scenarios
 6. Migration tests when host is put in maintenance.
 7. Migration tests when storage is put in maintenance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7619) Baremetal - Have an out of the box Isolated network offering with PXE DHCP services provided by VR along with all other services from default isolated network offe

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-7619:

Fix Version/s: (was: 4.6.0)

 Baremetal - Have an out of the box Isolated network offering with PXE  DHCP 
 services provided by VR along with all other services from default isolated 
 network offering for baremetal instances.
 --

 Key: CLOUDSTACK-7619
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7619
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.6.0
Reporter: Sangeetha Hariharan
Assignee: frank zhang

 Baremetal - Have an out of the box Isolated network offering with PXE & DHCP 
 services provided by VR along with all other services from the default 
 isolated network offering for baremetal instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7619) Baremetal - Have an out of the box Isolated network offering with PXE DHCP services provided by VR along with all other services from default isolated network offe

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-7619:

Assignee: (was: frank zhang)

 Baremetal - Have an out of the box Isolated network offering with PXE  DHCP 
 services provided by VR along with all other services from default isolated 
 network offering for baremetal instances.
 --

 Key: CLOUDSTACK-7619
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7619
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.6.0
Reporter: Sangeetha Hariharan

 Baremetal - Have an out of the box Isolated network offering with PXE & DHCP 
 services provided by VR along with all other services from the default 
 isolated network offering for baremetal instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7619) Baremetal - Have an out of the box Isolated network offering with PXE DHCP services provided by VR along with all other services from default isolated network offe

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-7619:

Issue Type: Improvement  (was: Bug)

 Baremetal - Have an out of the box Isolated network offering with PXE  DHCP 
 services provided by VR along with all other services from default isolated 
 network offering for baremetal instances.
 --

 Key: CLOUDSTACK-7619
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7619
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.6.0
Reporter: Sangeetha Hariharan
Assignee: frank zhang

 Baremetal - Have an out of the box Isolated network offering with PXE & DHCP 
 services provided by VR along with all other services from the default 
 isolated network offering for baremetal instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7318) [UI] processing wheel continues to spin even after error message during VM snapshot creation

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-7318:

Fix Version/s: (was: 4.6.0)

 [UI] processing wheel continues to spin even after error message during VM 
 snapshot creation
 

 Key: CLOUDSTACK-7318
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7318
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.5.0
Reporter: shweta agarwal
Assignee: Mihaela Stoica
 Attachments: processingwheel.png


 Repro steps:
 Create an LXC VM.
 When the VM is running, try to create a VM snapshot.
 Bug:
 Notice you get the message "VM snapshot is not enabled for hypervisor type: 
 LXC" but the spinning wheel continues to spin. Attaching a screenshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8726) Automation for Quickly attaching multiple data disks to a new VM

2015-08-12 Thread Pavan Kumar Bandarupally (JIRA)
Pavan Kumar Bandarupally created CLOUDSTACK-8726:


 Summary: Automation for Quickly attaching multiple data disks to a 
new VM
 Key: CLOUDSTACK-8726
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8726
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.6.0
Reporter: Pavan Kumar Bandarupally
Assignee: Pavan Kumar Bandarupally
 Fix For: 4.6.0


When trying to attach multiple data disks to a VM in quick succession, 
CloudStack synchronizes the tasks of disk preparation and reconfiguration of 
the VM with the disk.

This script automates the attach operation and verifies that the attach 
operation is successfully completed without any issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-3111) [UI] Storage tab is not showing the Hypervisor column as 'KVM' if the (root/data)disk is attached to instance running in KVM Hypervisor.

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-3111:

Fix Version/s: (was: 4.6.0)

 [UI] Storage tab is not showing the Hypervisor column as 'KVM' if the 
 (root/data)disk is attached to instance running in KVM Hypervisor.
 

 Key: CLOUDSTACK-3111
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3111
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: API
Affects Versions: 4.1.1, 4.2.0
Reporter: Rajesh Battala
Assignee: Animesh Chaturvedi
 Attachments: screen1.png


 Attaching the screenshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-8379) add support to marvin to enable deployed zone based on the value provided in config file

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi resolved CLOUDSTACK-8379.
-
Resolution: Invalid

 add support to marvin to enable deployed zone based on the value provided in 
 config file
 

 Key: CLOUDSTACK-8379
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8379
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: marvin
Affects Versions: 4.6.0
Reporter: Srikanteswararao Talluri
Assignee: Srikanteswararao Talluri
 Fix For: 4.6.0


 Add support to marvin to enable the deployed zone based on the value 
 provided in the config file.
 If the 'enabled' element is not mentioned under the zone section, the zone 
 will be enabled; otherwise the zone will be enabled/disabled based on the 
 value ('true' or 'false') provided for 'enabled'.
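 The described rule can be sketched as follows (hypothetical helper, not the 
 actual marvin code):

```python
def zone_enabled(zone_section):
    """Decide whether a deployed zone should be enabled.

    'enabled' defaults to enabled when the element is absent from the
    zone section of the config file; otherwise the value is honored.
    """
    value = zone_section.get("enabled")
    if value is None:
        return True  # element not mentioned -> zone stays enabled
    return str(value).strip().lower() == "true"
```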



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-3658) [DB Upgrade] - Deprecate several old object storage tables and columns as a part of 41-42 db upgrade

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-3658:

Fix Version/s: (was: 4.6.0)

 [DB Upgrade] - Deprecate several old object storage tables and columns as a 
 part of 41-42 db upgrade
 

 Key: CLOUDSTACK-3658
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3658
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Install and Setup, Storage Controller
Affects Versions: 4.2.0
Reporter: Nitin Mehta
Assignee: Nitin Mehta
 Attachments: cloud-after-upgrade.dmp


 We should deprecate the following db tables and table columns as a part of 
 the 4.1-4.2 db upgrade due to recent object storage refactoring:
 -Upload
 -s3
 -swift
 -template_host_ref
 -template_s3_ref
 -template_swift_ref
 -volume_host_ref
 -columns (s3_id, swift_id, sechost_id) from the snapshots table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7650) with wrong checksum volume got uploaded

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-7650:

Assignee: (was: Nitin Mehta)

 with wrong checksum volume got uploaded 
 

 Key: CLOUDSTACK-7650
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7650
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Volumes
Affects Versions: 4.5.0
Reporter: prashant kumar mishra
 Fix For: 4.6.0

 Attachments: Logs_DB.rar


 Steps to reproduce
 
 1. Upload a volume with a wrong checksum.
 2. Try to attach it to a VM.
 Expected
 --
 The volume upload should fail.
 Actual
 ---
 The volume got uploaded and attached successfully.
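The expected behavior is that the upload is rejected as soon as the declared checksum does not match the uploaded bytes. A minimal sketch of that check, using a hypothetical helper (not CloudStack's actual upload code):

```python
import hashlib

def verify_upload(data: bytes, expected_md5: str) -> None:
    # Reject the uploaded volume when the declared checksum does not
    # match the actual content. Hypothetical helper for illustration.
    actual = hashlib.md5(data).hexdigest()
    if actual != expected_md5.lower():
        raise ValueError(
            f"checksum mismatch: expected {expected_md5}, got {actual}")

payload = b"volume-image-bytes"
good = hashlib.md5(payload).hexdigest()
verify_upload(payload, good)          # matching checksum: no exception
try:
    verify_upload(payload, "0" * 32)  # wrong checksum: upload must fail
except ValueError as e:
    print("rejected:", e)
```

The bug report says this rejection never happens: the volume with the bad checksum is accepted and can even be attached.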



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-5536) Restarting cloudstack service with template download in progress creates redundant entries in DB for systemVM template

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-5536:

Assignee: (was: Min Chen)

 Restarting cloudstack service with template download in progress creates 
 redundant entries in DB for systemVM template 
 ---

 Key: CLOUDSTACK-5536
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5536
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Template
Affects Versions: 4.3.0
 Environment: Latest 4.3 MS build
 VmWare Host
Reporter: Pavan Kumar Bandarupally
 Attachments: MS Log.rar


 My NFS secondary store has been migrated to object storage and an S3 
 secondary store has been added. At this point the systemVM template gets 
 downloaded to the S3 store.
 If we restart the cloudstack-management service while the download is in 
 progress, the current entry in template_store_ref (status "Creating") 
 persists and a new entry is created. If we restart the service once again, 
 yet another entry is created, keeping the two older entries as they are.
 "Removing leftover template routing-8 entry from template store table" is 
 shown in the traces, but this has no effect.
 mysql> select template_id, store_id, store_role, state, install_path from template_store_ref;
 +-------------+----------+------------+-----------+---------------------------------------------------------------------------+
 | template_id | store_id | store_role | state     | install_path                                                              |
 +-------------+----------+------------+-----------+---------------------------------------------------------------------------+
 |           8 |        1 | ImageCache | Ready     | template/tmpl/1/8/2ad21358-644d-450c-99a1-6c156afa3206.ova                |
 |           7 |        1 | ImageCache | Ready     | template/tmpl/1/7/a970a6d7-b1ed-3d5a-a8ed-661e059d9f30.ova                |
 |           7 |        2 | Image      | Ready     | NULL                                                                      |
 |           8 |        2 | Image      | Creating  | template/tmpl/1/8/routing-8                                               |
 |         202 |        2 | Image      | Ready     | template/tmpl/2/202/202-2-480dd062-9b5c-3f3d-8bd5-934b160883dc/Win832.ova |
 |           8 |        2 | Image      | Ready     | template/tmpl/1/8/routing-8/systemvmtemplate-4.2-vh7.ova                  |
 |           8 |        2 | Image      | Allocated | template/tmpl/1/8/routing-8/systemvmtemplate-4.2-vh7.ova                  |
 |           8 |        2 | Image      | Allocated | template/tmpl/1/8/routing-8                                               |
 +-------------+----------+------------+-----------+---------------------------------------------------------------------------+
 Expected:
 ---
 Upon service restart, template sync should reset the download of the 
 template and create only one entry for the systemVM template.
 Actual:
 -
 The older entries persist and a new entry is created with status 
 Allocated or Creating.
 Note:
 =
 This happens only with the SystemVM template. General template downloads 
 are properly reset and only one entry exists for them.
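The expected cleanup can be sketched as a pruning pass over template_store_ref rows: on restart, drop stale in-progress rows and keep a single entry per (template, store). The row shape and function name below are hypothetical, not CloudStack's actual template-sync code.

```python
# Sketch of the expected template-sync behavior described above: on
# service restart, leftover in-progress rows ("Creating"/"Allocated")
# for a template on a given store are pruned, and at most one entry is
# kept per (template_id, store_id). Hypothetical model, for illustration.

STALE_STATES = {"Creating", "Allocated"}

def reset_stale_entries(rows):
    """rows: list of dicts with template_id, store_id, state."""
    cleaned, seen = [], set()
    for row in rows:
        if row["state"] in STALE_STATES:
            continue            # drop leftover in-progress entries
        key = (row["template_id"], row["store_id"])
        if key in seen:
            continue            # keep one entry per template/store pair
        seen.add(key)
        cleaned.append(row)
    return cleaned

rows = [
    {"template_id": 8, "store_id": 2, "state": "Creating"},
    {"template_id": 8, "store_id": 2, "state": "Ready"},
    {"template_id": 8, "store_id": 2, "state": "Allocated"},
]
print(reset_stale_entries(rows))
```

If only stale rows exist for a template, nothing survives the prune and sync would simply restart the download with one fresh entry.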



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7318) [UI] processing wheel continues to spin even after error message during VM snapshot creation

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-7318:

Assignee: (was: Mihaela Stoica)

 [UI] processing wheel continues to spin even after error message during VM 
 snapshot creation
 

 Key: CLOUDSTACK-7318
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7318
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.5.0
Reporter: shweta agarwal
 Attachments: processingwheel.png


 Repro steps:
 Create an LXC VM.
 While the VM is running, try to create a VM snapshot.
 Bug:
 Notice you get the message "VM snapshot is not enabled for hypervisor type: 
 LXC", but the spinning wheel continues to spin. Attaching a screenshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-5536) Restarting cloudstack service with template download in progress creates redundant entries in DB for systemVM template

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-5536:

Fix Version/s: (was: 4.6.0)
   (was: Future)

 Restarting cloudstack service with template download in progress creates 
 redundant entries in DB for systemVM template 
 ---

 Key: CLOUDSTACK-5536
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5536
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Template
Affects Versions: 4.3.0
 Environment: Latest 4.3 MS build
 VmWare Host
Reporter: Pavan Kumar Bandarupally
Assignee: Min Chen
 Attachments: MS Log.rar


 My NFS secondary store has been migrated to object storage and an S3 
 secondary store has been added. At this point the systemVM template gets 
 downloaded to the S3 store.
 If we restart the cloudstack-management service while the download is in 
 progress, the current entry in template_store_ref (status "Creating") 
 persists and a new entry is created. If we restart the service once again, 
 yet another entry is created, keeping the two older entries as they are.
 "Removing leftover template routing-8 entry from template store table" is 
 shown in the traces, but this has no effect.
 mysql> select template_id, store_id, store_role, state, install_path from template_store_ref;
 +-------------+----------+------------+-----------+---------------------------------------------------------------------------+
 | template_id | store_id | store_role | state     | install_path                                                              |
 +-------------+----------+------------+-----------+---------------------------------------------------------------------------+
 |           8 |        1 | ImageCache | Ready     | template/tmpl/1/8/2ad21358-644d-450c-99a1-6c156afa3206.ova                |
 |           7 |        1 | ImageCache | Ready     | template/tmpl/1/7/a970a6d7-b1ed-3d5a-a8ed-661e059d9f30.ova                |
 |           7 |        2 | Image      | Ready     | NULL                                                                      |
 |           8 |        2 | Image      | Creating  | template/tmpl/1/8/routing-8                                               |
 |         202 |        2 | Image      | Ready     | template/tmpl/2/202/202-2-480dd062-9b5c-3f3d-8bd5-934b160883dc/Win832.ova |
 |           8 |        2 | Image      | Ready     | template/tmpl/1/8/routing-8/systemvmtemplate-4.2-vh7.ova                  |
 |           8 |        2 | Image      | Allocated | template/tmpl/1/8/routing-8/systemvmtemplate-4.2-vh7.ova                  |
 |           8 |        2 | Image      | Allocated | template/tmpl/1/8/routing-8                                               |
 +-------------+----------+------------+-----------+---------------------------------------------------------------------------+
 Expected:
 ---
 Upon service restart, template sync should reset the download of the 
 template and create only one entry for the systemVM template.
 Actual:
 -
 The older entries persist and a new entry is created with status 
 Allocated or Creating.
 Note:
 =
 This happens only with the SystemVM template. General template downloads 
 are properly reset and only one entry exists for them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-3658) [DB Upgrade] - Deprecate several old object storage tables and columns as a part of 41-42 db upgrade

2015-08-12 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-3658:

Assignee: (was: Nitin Mehta)

 [DB Upgrade] - Deprecate several old object storage tables and columns as a 
 part of 41-42 db upgrade
 

 Key: CLOUDSTACK-3658
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3658
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Install and Setup, Storage Controller
Affects Versions: 4.2.0
Reporter: Nitin Mehta
 Attachments: cloud-after-upgrade.dmp


 We should deprecate the following db tables and table columns as part of 
 the 4.1-to-4.2 db upgrade, due to the recent object storage refactoring:
 - upload
 - s3
 - swift
 - template_host_ref
 - template_s3_ref
 - template_swift_ref
 - volume_host_ref
 - columns (s3_id, swift_id, sechost_id) from the snapshots table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

