[jira] [Updated] (CLOUDSTACK-7498) [UI] Register ISO option is failing to invoke ISO registration page with ReferenceError: osTypeObjs is not defined

2014-11-17 Thread Jayashree Ramamoorthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayashree Ramamoorthy updated CLOUDSTACK-7498:
--
Attachment: RegisterISOFailing.png

 [UI] Register ISO option is failing to invoke ISO registration page with 
 ReferenceError: osTypeObjs is not defined
 --

 Key: CLOUDSTACK-7498
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7498
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.5.0
Reporter: Sailaja Mada
Assignee: Brian Federle
Priority: Critical
 Attachments: RegisterISOFailing.png, registerisoUI.png


 Steps:
 1. Install CloudStack 4.5
 2. Access the Management Server UI and try to register an ISO
 Observations:
 1. The Register ISO page is not invoked; it fails with "ReferenceError: 
 osTypeObjs is not defined"
 Screenshot attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CLOUDSTACK-7498) [UI] Register ISO option is failing to invoke ISO registration page with ReferenceError: osTypeObjs is not defined

2014-11-17 Thread Jayashree Ramamoorthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayashree Ramamoorthy reopened CLOUDSTACK-7498:
---

I am seeing the same issue with the latest build.

Please see the screenshot attached.






[jira] [Commented] (CLOUDSTACK-7364) NetScaler won't create the Public VLAN and Bind the IP to it

2014-11-17 Thread Rajesh Battala (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214441#comment-14214441
 ] 

Rajesh Battala commented on CLOUDSTACK-7364:


From the NS config and the code, I can see we are binding the public SNIP to 
interface 1/1; the VLAN and IP are bound to the public interface (1/1).

 show ip
        Ipaddress        Traffic Domain  Type          Mode    Arp      Icmp     Vserver  State
        ---------        --------------  ----          ----    ---      ----     -------  -----
1)      10.102.246.250   0               NetScaler IP  Active  Enabled  Enabled  NA       Enabled
2)      10.102.242.193   0               SNIP          Active  Enabled  Enabled  NA       Enabled
3)      10.1.0.89        0               SNIP          Active  Enabled  Enabled  NA       Enabled
4)      10.102.242.194   0               VIP           Active  Enabled  Enabled  Enabled  Enabled
 Done
 show vlan

1)  VLAN ID: 1
    Link-local IPv6 addr: fe80::1c9a:87ff:fe55:8c01/64
    Interfaces : 1/1 1/2 0/1 0/2 LO/1

2)  VLAN ID: 100    VLAN Alias Name:
    Interfaces : 1/1(T)
    IPs :
         10.102.242.193  Mask: 255.255.254.0

3)  VLAN ID: 456    VLAN Alias Name:
    Interfaces : 1/2(T)
    IPs :
         10.1.0.89  Mask: 255.255.240.0
 Done

 show version
NetScaler NS10.5: Build 52.11.nc, Date: Sep 30 2014, 01:20:43
 Done
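For comparison, a tagged public VLAN would normally be created and bound with standard NetScaler CLI commands along these lines. The VLAN ID 54 here is hypothetical; the interface, IP, and mask are taken from the output above, and this is a sketch of the expected configuration, not output from the device:

```
add vlan 54
bind vlan 54 -ifnum 1/1 -tagged
bind vlan 54 -IPAddress 10.102.242.194 255.255.254.0
```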


 NetScaler won't create the Public VLAN and Bind the IP to it
 

 Key: CLOUDSTACK-7364
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7364
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.3.0, 4.4.0, 4.4.1
Reporter: Francois Gaudreault
Assignee: Rajesh Battala
Priority: Critical
 Attachments: management-server.log.debug.gz, screenshot-1.png, 
 screenshot-2.png


 When adding a Load Balancing rule with the NetScaler, the provider will tag 
 and bind the private IP to the appropriate interface. However, the behaviour 
 for the Public Interface is different. It simply adds the IP untagged on all 
 interfaces. This is wrong.
 The public VLAN should be tagged, and the VIP bound to the right VLAN tag to 
 avoid unnecessary ARP on other VLANs.
 NS versions tested: 123.11, 127.10, 128.8





[jira] [Updated] (CLOUDSTACK-6870) getDomainId implementation returns invalid value at places

2014-11-17 Thread Santhosh Kumar Edukulla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Santhosh Kumar Edukulla updated CLOUDSTACK-6870:

Fix Version/s: (was: 4.5.0)
   Future

 getDomainId implementation returns invalid value at places
 --

 Key: CLOUDSTACK-6870
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6870
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.4.0
Reporter: Santhosh Kumar Edukulla
Assignee: Santhosh Kumar Edukulla
 Fix For: Future


 A few classes implementing getDomainId, derived from the interface below, 
 return an invalid value for domainId. For example, VMTemplateVO's 
 implementation of this method returns -1. This behavior is creating issues 
 in several places in the code.
 The underlying tables (e.g. vm_template) don't have a domainId column, so 
 -1 is returned.
 Although the domainId information is available through the account and domain 
 tables, it cannot currently be retrieved because of some API semantics. This 
 bug is logged to track and fix the issue. We can discuss whether adding a 
 column is the right approach, or whether it is better to refactor a few APIs 
 to obtain this information without adding extra columns.
 public interface PartOf {
     /**
      * @return domain id that the object belongs to.
      */
     long getDomainId();
 }
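The refactoring option mentioned above (resolving the domain through the owning account instead of adding a column) can be sketched as follows. This is an illustrative Python sketch, not the actual Java DAO code; the dicts stand in for the vm_template and account tables, and the helper name is hypothetical:

```python
# Illustrative sketch: resolve a template's domain via its owning
# account, instead of a (non-existent) domain_id column on vm_template.
# The dict arguments stand in for database tables.

def get_template_domain_id(template, accounts):
    """Return the domain id of the template's owner, or -1 if unknown."""
    account = accounts.get(template.get("account_id"))
    if account is None:
        # Mirrors the current behavior described in the ticket:
        # no way to resolve the domain, so -1 is returned.
        return -1
    return account["domain_id"]
```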





[jira] [Commented] (CLOUDSTACK-7620) Put SNMP MIB file for snmp-alerts plugin in git repo

2014-11-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214456#comment-14214456
 ] 

ASF subversion and git services commented on CLOUDSTACK-7620:
-

Commit 3996b402aa371219b60ce7f7605ff3682c1b3267 in cloudstack's branch 
refs/heads/master from [~anshulg]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=3996b40 ]

CLOUDSTACK-7620: Added SNMP MIB file for snmp-alerts plugin


 Put SNMP MIB file for snmp-alerts plugin in git repo
 

 Key: CLOUDSTACK-7620
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7620
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar

 Currently it is available for download at 
 https://cwiki.apache.org/confluence/download/attachments/30747160/CS-ROOT-MIB.mib?version=1&modificationDate=1362442825000&api=v2





[jira] [Commented] (CLOUDSTACK-7620) Put SNMP MIB file for snmp-alerts plugin in git repo

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214457#comment-14214457
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7620:


Github user rajesh-battala commented on the pull request:

https://github.com/apache/cloudstack/pull/31#issuecomment-63280334
  
Merging the commit to master as it had passed CI build







[jira] [Commented] (CLOUDSTACK-7912) Move hard coded test data for Netscaler device out of the test cases. Read it from config file.

2014-11-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214460#comment-14214460
 ] 

ASF subversion and git services commented on CLOUDSTACK-7912:
-

Commit 5f99917991a59f8ecd6d8b0e17b497fe210e636e in cloudstack's branch 
refs/heads/4.5 from [~gauravaradhye]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=5f99917 ]

CLOUDSTACK-7912: Remove hardcoded netscaler info and read it from config file

Signed-off-by: SrikanteswaraRao Talluri tall...@apache.org


 Move hard coded test data for Netscaler device out of the test cases. Read it 
 from config file.
 ---

 Key: CLOUDSTACK-7912
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7912
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation
Affects Versions: 4.5.0
Reporter: Gaurav Aradhye
Assignee: Gaurav Aradhye
  Labels: automation
 Fix For: 4.5.0


 Many test cases have NetScaler info hard coded in the test case. Move this 
 data out of the test cases and read it from the config file. Make this change 
 across all applicable files.
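A minimal sketch of the direction described above. The "netscaler" key and the file layout here are assumptions about a marvin-style JSON config, not its actual schema:

```python
# Hypothetical sketch: load NetScaler device details from a JSON config
# file instead of hardcoding them in each test case.
import json

def load_netscaler_config(path):
    """Return the NetScaler section of a marvin-style JSON config file."""
    with open(path) as f:
        cfg = json.load(f)
    # A KeyError here makes a missing section fail fast during test setup.
    return cfg["netscaler"]
```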





[jira] [Commented] (CLOUDSTACK-7913) [Automation] Add reconnect functionality to Host class in base.py

2014-11-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214459#comment-14214459
 ] 

ASF subversion and git services commented on CLOUDSTACK-7913:
-

Commit 19781e094b987cf65d05d890cd3cd86fc22cb873 in cloudstack's branch 
refs/heads/4.5 from [~chandanp]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=19781e0 ]

CLOUDSTACK-7913 : Added reconnect functionality to Host class in base.py

Signed-off-by: SrikanteswaraRao Talluri tall...@apache.org


 [Automation] Add reconnect functionality to Host class in base.py
 -

 Key: CLOUDSTACK-7913
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7913
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation, Test
Affects Versions: 4.5.0
Reporter: Chandan Purushothama
Assignee: Chandan Purushothama
 Fix For: 4.5.0


 The reconnect method in the Host class can be used to reconnect a host in a 
 CloudStack setup.
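As a rough illustration of what such a helper does: the real method in marvin's base.py issues the CloudStack reconnectHost API via a generated command class, while in this self-contained sketch the apiclient is stubbed, so the class and call shapes are assumptions:

```python
# Minimal sketch of a Host.reconnect() helper as described above.
# 'apiclient' here is any object exposing a reconnectHost(params) call;
# marvin's real implementation sends a generated reconnectHost command.

class Host:
    """Trimmed stand-in for marvin's base.Host wrapper."""

    def __init__(self, host_id):
        self.id = host_id

    def reconnect(self, apiclient):
        # Ask the management server to drop and re-establish the
        # agent connection for this host, returning the API response.
        return apiclient.reconnectHost({"id": self.id})
```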





[jira] [Commented] (CLOUDSTACK-7703) Cloudstack server endless loop when trying to create a volume while storage pool is full

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214464#comment-14214464
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7703:


Github user rajesh-battala commented on the pull request:

https://github.com/apache/cloudstack/pull/30#issuecomment-63281107
  
merging this patch to Master as it had passed CI as well


 Cloudstack server endless loop when trying to create a volume while storage 
 pool is full
 

 Key: CLOUDSTACK-7703
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7703
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Centos 6.5
Reporter: JF Vincent
Assignee: Anshul Gangwar
Priority: Critical
 Fix For: 4.5.0


 When trying to create a VM, and thus a volume for it, while the primary 
 storage is full (over 90%), the management server enters an endless loop 
 (extract below) and has to be restarted to exit the loop.
 2014-10-14 11:39:20,701 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) No suitable pools found for 
 volume: Vol[5436|vm=5855|DATADISK] under cluster: 2
 2014-10-14 11:39:20,702 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) No suitable pools found
 2014-10-14 11:39:20,702 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) No suitable storagePools found 
 under this Cluster: 2
 2014-10-14 11:39:20,705 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Could not find suitable 
 Deployment Destination for this VM under any clusters, returning.
 2014-10-14 11:39:20,705 DEBUG [cloud.deploy.FirstFitPlanner] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Searching all possible resources 
 under this Zone: 2
 2014-10-14 11:39:20,705 DEBUG [cloud.deploy.FirstFitPlanner] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Listing clusters in order of 
 aggregate capacity, that have (atleast one host with) enough CPU and RAM 
 capacity under this Zone: 2
 2014-10-14 11:39:20,707 DEBUG [cloud.deploy.FirstFitPlanner] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Removing from the clusterId list 
 these clusters from avoid set: []
 2014-10-14 11:39:20,714 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Checking resources in Cluster: 2 
 under Pod: 2
 2014-10-14 11:39:20,714 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Looking 
 for hosts in dc: 2  pod:2  cluster:2
 2014-10-14 11:39:20,716 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) 
 FirstFitAllocator has 3 hosts to check for allocation: [Host[-79-Routing], 
 Host[-89-Routing], Host[-77-Routing]]
 2014-10-14 11:39:20,717 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Found 3 
 hosts for allocation after prioritization: [Host[-79-Routing], 
 Host[-89-Routing], Host[-77-Routing]]
 2014-10-14 11:39:20,717 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Looking 
 for speed=500Mhz, Ram=500
 2014-10-14 11:39:20,720 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Host: 79 
 has cpu capability (cpu:8, speed:2399) to support requested CPU: 1 and 
 requested speed: 500
 2014-10-14 11:39:20,720 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Checking 
 if host: 79 has enough capacity for requested CPU: 500 and requested RAM: 
 524288000 , cpuOverprovisioningFactor: 4.0
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Hosts's 
 actual total CPU: 19192 and CPU after applying overprovisioning: 76768
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Free 
 CPU: 57268 , Requested CPU: 500
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Free 
 RAM: 93916725248 , Requested RAM: 524288000
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Host has 
 enough CPU and RAM available
 2014-10-14 

[jira] [Commented] (CLOUDSTACK-7912) Move hard coded test data for Netscaler device out of the test cases. Read it from config file.

2014-11-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214475#comment-14214475
 ] 

ASF subversion and git services commented on CLOUDSTACK-7912:
-

Commit 91ffaaa5a2163b2b62868097dea587bd712061d8 in cloudstack's branch 
refs/heads/master from [~gauravaradhye]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=91ffaaa ]

CLOUDSTACK-7912: Remove hardcoded netscaler info and read it from config file

Signed-off-by: SrikanteswaraRao Talluri tall...@apache.org







[jira] [Commented] (CLOUDSTACK-7913) [Automation] Add reconnect functionality to Host class in base.py

2014-11-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214474#comment-14214474
 ] 

ASF subversion and git services commented on CLOUDSTACK-7913:
-

Commit d8d60f017250cc1acb98c8d5b3064c55b348ae56 in cloudstack's branch 
refs/heads/master from [~chandanp]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=d8d60f0 ]

CLOUDSTACK-7913 : Added reconnect functionality to Host class in base.py

Signed-off-by: SrikanteswaraRao Talluri tall...@apache.org







[jira] [Commented] (CLOUDSTACK-7703) Cloudstack server endless loop when trying to create a volume while storage pool is full

2014-11-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214489#comment-14214489
 ] 

ASF subversion and git services commented on CLOUDSTACK-7703:
-

Commit 4705933a6459c1b25e71da891a6f736c0a736836 in cloudstack's branch 
refs/heads/master from [~anshulg]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=4705933 ]

CLOUDSTACK-7703, CLOUDSTACK-7752: Fixed deployment planner stuck in infinite 
loop.
If we create VM with shared service offering and attach disk with local disk 
offering,
and one of storage pool is full(cannot be allocated) and other is not full then
we are not putting the cluster in avoid list which is causing this infinite 
loop.

Fixed by putting the cluster in avoid list even if one of the storage pool is 
full(cannot be allocated)



[jira] [Commented] (CLOUDSTACK-7752) Management Server goes in infinite loop while creating a vm with tagged local data disk when the pool is not tagged

2014-11-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214490#comment-14214490
 ] 

ASF subversion and git services commented on CLOUDSTACK-7752:
-

Commit 4705933a6459c1b25e71da891a6f736c0a736836 in cloudstack's branch 
refs/heads/master from [~anshulg]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=4705933 ]

CLOUDSTACK-7703, CLOUDSTACK-7752: Fixed deployment planner stuck in infinite 
loop.
If we create VM with shared service offering and attach disk with local disk 
offering,
and one of storage pool is full(cannot be allocated) and other is not full then
we are not putting the cluster in avoid list which is causing this infinite 
loop.

Fixed by putting the cluster in avoid list even if one of the storage pool is 
full(cannot be allocated)


 Management Server goes in infinite loop while creating a vm with tagged local 
 data disk when the pool is not tagged
 ---

 Key: CLOUDSTACK-7752
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7752
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
Priority: Critical

 Steps to reproduce (the setup must have a single cluster with both local and 
 shared storage):
 1) Create a local disk offering and tag it T1
 2) Deploy a VM with a shared root disk and a local data disk
 The management server goes into an infinite loop; the VM is never 
 started/expunged. This also causes the vmops.log file to grow very large.





[jira] [Commented] (CLOUDSTACK-7703) Cloudstack server endless loop when trying to create a volume while storage pool is full

2014-11-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214491#comment-14214491
 ] 

ASF subversion and git services commented on CLOUDSTACK-7703:
-

Commit d5b6fc4f04450bb2ff733f2eaf1ae800578be567 in cloudstack's branch 
refs/heads/master from [~rajesh_battala]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=d5b6fc4 ]

Merge branch 'CLOUDSTACK-7703' of https://github.com/anshul1886/cloudstack-1
This closes #30



[jira] [Commented] (CLOUDSTACK-7703) Cloudstack server endless loop when trying to create a volume while storage pool is full

2014-11-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214496#comment-14214496
 ] 

ASF subversion and git services commented on CLOUDSTACK-7703:
-

Commit 3a4d70e69601a6441a6a237517c92e616672a32c in cloudstack's branch 
refs/heads/master from [~rajesh_battala]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=3a4d70e ]

Revert "Merge branch 'CLOUDSTACK-7703' of 
https://github.com/anshul1886/cloudstack-1"

This reverts commit d5b6fc4f04450bb2ff733f2eaf1ae800578be567, reversing
changes made to 91ffaaa5a2163b2b62868097dea587bd712061d8.


 Cloudstack server endless loop when trying to create a volume while storage 
 pool is full
 

 Key: CLOUDSTACK-7703
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7703
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Centos 6.5
Reporter: JF Vincent
Assignee: Anshul Gangwar
Priority: Critical
 Fix For: 4.5.0


 When trying to create a VM, and thus a volume for it, while the primary storage 
 is full (over 90%), the management server enters an endless loop (extract 
 below) and we have to restart it to exit the loop.
 2014-10-14 11:39:20,701 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) No suitable pools found for 
 volume: Vol[5436|vm=5855|DATADISK] under cluster: 2
 2014-10-14 11:39:20,702 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) No suitable pools found
 2014-10-14 11:39:20,702 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) No suitable storagePools found 
 under this Cluster: 2
 2014-10-14 11:39:20,705 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Could not find suitable 
 Deployment Destination for this VM under any clusters, returning.
 2014-10-14 11:39:20,705 DEBUG [cloud.deploy.FirstFitPlanner] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Searching all possible resources 
 under this Zone: 2
 2014-10-14 11:39:20,705 DEBUG [cloud.deploy.FirstFitPlanner] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Listing clusters in order of 
 aggregate capacity, that have (atleast one host with) enough CPU and RAM 
 capacity under this Zone: 2
 2014-10-14 11:39:20,707 DEBUG [cloud.deploy.FirstFitPlanner] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Removing from the clusterId list 
 these clusters from avoid set: []
 2014-10-14 11:39:20,714 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Checking resources in Cluster: 2 
 under Pod: 2
 2014-10-14 11:39:20,714 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Looking 
 for hosts in dc: 2  pod:2  cluster:2
 2014-10-14 11:39:20,716 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) 
 FirstFitAllocator has 3 hosts to check for allocation: [Host[-79-Routing], 
 Host[-89-Routing], Host[-77-Routing]]
 2014-10-14 11:39:20,717 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Found 3 
 hosts for allocation after prioritization: [Host[-79-Routing], 
 Host[-89-Routing], Host[-77-Routing]]
 2014-10-14 11:39:20,717 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Looking 
 for speed=500Mhz, Ram=500
 2014-10-14 11:39:20,720 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Host: 79 
 has cpu capability (cpu:8, speed:2399) to support requested CPU: 1 and 
 requested speed: 500
 2014-10-14 11:39:20,720 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Checking 
 if host: 79 has enough capacity for requested CPU: 500 and requested RAM: 
 524288000 , cpuOverprovisioningFactor: 4.0
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Hosts's 
 actual total CPU: 19192 and CPU after applying overprovisioning: 76768
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Free 
 CPU: 57268 , Requested CPU: 500
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Free 

[jira] [Commented] (CLOUDSTACK-7857) CitrixResourceBase wrongly calculates total memory on hosts with a lot of memory and large Dom0

2014-11-17 Thread Joris van Lieshout (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214498#comment-14214498
 ] 

Joris van Lieshout commented on CLOUDSTACK-7857:


Hi Anthony,

I agree that there is no reliable way to do this beforehand, so isn't it 
better to do it whenever an instance is started on/migrated to a host, or to 
recalculate the free memory metric every couple of minutes (for instance as part 
of the stats collection cycle)? The formula that is used by XenCenter for this 
seems pretty easy and spot on.

This would also reduce the number of times a retry mechanism has to kick in for 
other actions as well. On that note, the retry mechanism you are referring to 
does not seem to apply to HA-workers created by the process that puts a host in 
maintenance. It also feels to me that this is more of a workaround than a clean 
solution, mostly because host_free_mem can be recalculated quickly and easily 
when needed.

And concerning the allocation threshold: if I'm not mistaken, this does not 
apply to HA-workers, which are used whenever you put a host into 
maintenance. Additionally, the instance being migrated is already in the cluster, 
so this threshold is not hit during PrepareForMaintenance. 

 CitrixResourceBase wrongly calculates total memory on hosts with a lot of 
 memory and large Dom0
 ---

 Key: CLOUDSTACK-7857
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7857
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: Future, 4.3.0, 4.4.0, 4.5.0, 4.3.1, 4.4.1, 4.6.0
Reporter: Joris van Lieshout
Priority: Blocker

 We have hosts with 256GB memory and 4GB dom0. During startup ACS calculates 
 available memory using this formula:
 CitrixResourceBase.java
   protected void fillHostInfo
   ram = (long) ((ram - dom0Ram - _xs_memory_used) * 
 _xs_virtualization_factor);
 In our situation:
   ram = 274841497600
   dom0Ram = 4269801472
   _xs_memory_used = 128 * 1024 * 1024L = 134217728
   _xs_virtualization_factor = 63.0/64.0 = 0.984375
   (274841497600 - 4269801472 - 134217728) * 0.984375 = 266211892800
 This is in fact not the actual amount of memory available for instances. The 
 difference in our situation is a little less than 1GB. On this particular 
 hypervisor Dom0+Xen uses about 9GB.
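The arithmetic quoted above can be reproduced with a short sketch (a minimal
standalone calculation using the values from this report; the variable names
mirror the fields of CitrixResourceBase but this is not the actual CloudStack
code):

```python
# Reproduces the capacity formula quoted from CitrixResourceBase.fillHostInfo,
# using the values reported above. Illustrative sketch only.
ram = 274841497600                      # total host memory reported by XenServer
dom0_ram = 4269801472                   # dom0 memory (~4GB)
xs_memory_used = 128 * 1024 * 1024      # static _xs_memory_used reservation
xs_virtualization_factor = 63.0 / 64.0  # static _xs_virtualization_factor

usable = int((ram - dom0_ram - xs_memory_used) * xs_virtualization_factor)
print(usable)  # 266211892800

# The formula reserves ram - usable = 8629604800 bytes (~8GB) for Dom0+Xen,
# while the report says ~9GB is actually used: roughly a 1GB overestimate
# of capacity, which is what lets an undersized host be picked for migration.
print(ram - usable)  # 8629604800
```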
 As the comment above the definition of XsMemoryUsed already stated, it's time 
 to review this logic. 
 //Hypervisor specific params with generic value, may need to be overridden 
 for specific versions
 The effect of this bug is that when you put a hypervisor in maintenance it 
 might try to move instances (usually small instances (1GB)) to a host that 
 in fact does not have enough free memory.
 This exception is thrown:
 ERROR [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-3:ctx-09aca6e9 
 work-8981) Terminating HAWork[8981-Migration-4482-Running-Migrating]
 com.cloud.utils.exception.CloudRuntimeException: Unable to migrate due to 
 Catch Exception com.cloud.utils.exception.CloudRuntimeException: Migration 
 failed due to com.cloud.utils.exception.CloudRuntim
 eException: Unable to migrate VM(r-4482-VM) from 
 host(6805d06c-4d5b-4438-a245-7915e93041d9) due to Task failed! Task record:   
   uuid: 645b63c8-1426-b412-7b6a-13d61ee7ab2e
nameLabel: Async.VM.pool_migrate
  nameDescription: 
allowedOperations: []
currentOperations: {}
  created: Thu Nov 06 13:44:14 CET 2014
 finished: Thu Nov 06 13:44:14 CET 2014
   status: failure
   residentOn: com.xensource.xenapi.Host@b42882c6
 progress: 1.0
 type: none/
   result: 
errorInfo: [HOST_NOT_ENOUGH_FREE_MEMORY, 272629760, 263131136]
  otherConfig: {}
subtaskOf: com.xensource.xenapi.Task@aaf13f6f
 subtasks: []
 at 
 com.cloud.vm.VirtualMachineManagerImpl.migrate(VirtualMachineManagerImpl.java:1840)
 at 
 com.cloud.vm.VirtualMachineManagerImpl.migrateAway(VirtualMachineManagerImpl.java:2214)
 at 
 com.cloud.ha.HighAvailabilityManagerImpl.migrate(HighAvailabilityManagerImpl.java:610)
 at 
 com.cloud.ha.HighAvailabilityManagerImpl$WorkerThread.runWithContext(HighAvailabilityManagerImpl.java:865)
 at 
 com.cloud.ha.HighAvailabilityManagerImpl$WorkerThread.access$000(HighAvailabilityManagerImpl.java:822)
 at 
 com.cloud.ha.HighAvailabilityManagerImpl$WorkerThread$1.run(HighAvailabilityManagerImpl.java:834)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
 at 
 

[jira] [Commented] (CLOUDSTACK-7703) Cloudstack server endless loop when trying to create a volume while storage pool is full

2014-11-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214503#comment-14214503
 ] 

ASF subversion and git services commented on CLOUDSTACK-7703:
-

Commit 2898f7d8d6a09312d58f682ab727e0a80ed7d0dd in cloudstack's branch 
refs/heads/master from [~anshulg]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=2898f7d ]

CLOUDSTACK-7703, CLOUDSTACK-7752: Fixed deployment planner stuck in infinite 
loop. If we create VM with shared service offering and attach disk with local 
disk offering, and one of storage pool is full(cannot be allocated) and other 
is not full then we are not putting the cluster in avoid list which is causing 
this infinite loop.

Fixed by putting the cluster in avoid list even if one of the storage pool is 
full(cannot be allocated)
This closes #30
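The failure mode the commit describes can be illustrated with a minimal model
(hypothetical names, not the actual CloudStack planner code): if a cluster is
only added to the avoid set when every pool fails, a mixed shared/local request
where just one pool type is full leaves the avoid set empty and the outer retry
loop revisits the same cluster forever.

```python
# Minimal model of the planner behavior described above (hypothetical names,
# not the actual CloudStack planner). Each volume needs a pool of matching
# scope; a full pool cannot be allocated.
def pick_cluster(clusters, volumes, avoid):
    for cluster in clusters:
        if cluster["id"] in avoid:
            continue
        placed = all(
            any(p["scope"] == v["scope"] and not p["full"] for p in cluster["pools"])
            for v in volumes
        )
        if placed:
            return cluster["id"]
        # The fix: avoid the cluster if even one required pool type failed.
        avoid.add(cluster["id"])
    return None

clusters = [{"id": 2, "pools": [
    {"scope": "shared", "full": False},  # shared root disk would fit
    {"scope": "local",  "full": True},   # local data disk cannot be allocated
]}]
volumes = [{"scope": "shared"}, {"scope": "local"}]

avoid = set()
assert pick_cluster(clusters, volumes, avoid) is None
assert 2 in avoid  # without this line, the outer retry loop would spin forever
```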


 Cloudstack server endless loop when trying to create a volume while storage 
 pool is full
 

 Key: CLOUDSTACK-7703
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7703
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Centos 6.5
Reporter: JF Vincent
Assignee: Anshul Gangwar
Priority: Critical
 Fix For: 4.5.0


 When trying to create a VM, and thus a volume for it, while the primary storage 
 is full (over 90%), the management server enters an endless loop (extract 
 below) and we have to restart it to exit the loop.
 2014-10-14 11:39:20,701 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) No suitable pools found for 
 volume: Vol[5436|vm=5855|DATADISK] under cluster: 2
 2014-10-14 11:39:20,702 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) No suitable pools found
 2014-10-14 11:39:20,702 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) No suitable storagePools found 
 under this Cluster: 2
 2014-10-14 11:39:20,705 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Could not find suitable 
 Deployment Destination for this VM under any clusters, returning.
 2014-10-14 11:39:20,705 DEBUG [cloud.deploy.FirstFitPlanner] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Searching all possible resources 
 under this Zone: 2
 2014-10-14 11:39:20,705 DEBUG [cloud.deploy.FirstFitPlanner] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Listing clusters in order of 
 aggregate capacity, that have (atleast one host with) enough CPU and RAM 
 capacity under this Zone: 2
 2014-10-14 11:39:20,707 DEBUG [cloud.deploy.FirstFitPlanner] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Removing from the clusterId list 
 these clusters from avoid set: []
 2014-10-14 11:39:20,714 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Checking resources in Cluster: 2 
 under Pod: 2
 2014-10-14 11:39:20,714 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Looking 
 for hosts in dc: 2  pod:2  cluster:2
 2014-10-14 11:39:20,716 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) 
 FirstFitAllocator has 3 hosts to check for allocation: [Host[-79-Routing], 
 Host[-89-Routing], Host[-77-Routing]]
 2014-10-14 11:39:20,717 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Found 3 
 hosts for allocation after prioritization: [Host[-79-Routing], 
 Host[-89-Routing], Host[-77-Routing]]
 2014-10-14 11:39:20,717 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Looking 
 for speed=500Mhz, Ram=500
 2014-10-14 11:39:20,720 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Host: 79 
 has cpu capability (cpu:8, speed:2399) to support requested CPU: 1 and 
 requested speed: 500
 2014-10-14 11:39:20,720 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Checking 
 if host: 79 has enough capacity for requested CPU: 500 and requested RAM: 
 524288000 , cpuOverprovisioningFactor: 4.0
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Hosts's 
 actual total CPU: 19192 and CPU after applying overprovisioning: 76768
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 

[jira] [Commented] (CLOUDSTACK-7752) Management Server goes in infinite loop while creating a vm with tagged local data disk when the pool is not tagged

2014-11-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214504#comment-14214504
 ] 

ASF subversion and git services commented on CLOUDSTACK-7752:
-

Commit 2898f7d8d6a09312d58f682ab727e0a80ed7d0dd in cloudstack's branch 
refs/heads/master from [~anshulg]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=2898f7d ]

CLOUDSTACK-7703, CLOUDSTACK-7752: Fixed deployment planner stuck in infinite 
loop. If we create VM with shared service offering and attach disk with local 
disk offering, and one of storage pool is full(cannot be allocated) and other 
is not full then we are not putting the cluster in avoid list which is causing 
this infinite loop.

Fixed by putting the cluster in avoid list even if one of the storage pool is 
full(cannot be allocated)
This closes #30


 Management Server goes in infinite loop while creating a vm with tagged local 
 data disk when the pool is not tagged
 ---

 Key: CLOUDSTACK-7752
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7752
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
Priority: Critical

 Steps to reproduce:
 Setup must have a single cluster with both local and shared storage.
 1) Create a local disk offering and tag it T1
 2) Deploy vm with shared root disk and local data disk
 The management server goes into an infinite loop. The VM is never started/expunged.
 This also causes the vmops.log size to grow very large.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7620) Put SNMP MIB file for snmp-alerts plugin in git repo

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214518#comment-14214518
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7620:


Github user anshul1886 closed the pull request at:

https://github.com/apache/cloudstack/pull/31


 Put SNMP MIB file for snmp-alerts plugin in git repo
 

 Key: CLOUDSTACK-7620
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7620
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar

 Currently it is available at  
 https://cwiki.apache.org/confluence/download/attachments/30747160/CS-ROOT-MIB.mib?version=1&modificationDate=1362442825000&api=v2
  for download. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7620) Put SNMP MIB file for snmp-alerts plugin in git repo

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214519#comment-14214519
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7620:


Github user anshul1886 commented on the pull request:

https://github.com/apache/cloudstack/pull/31#issuecomment-63287776
  
closing pull request as this got merged to master


 Put SNMP MIB file for snmp-alerts plugin in git repo
 

 Key: CLOUDSTACK-7620
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7620
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar

 Currently it is available at  
 https://cwiki.apache.org/confluence/download/attachments/30747160/CS-ROOT-MIB.mib?version=1&modificationDate=1362442825000&api=v2
  for download. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7703) Cloudstack server endless loop when trying to create a volume while storage pool is full

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214520#comment-14214520
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7703:


Github user anshul1886 commented on the pull request:

https://github.com/apache/cloudstack/pull/30#issuecomment-63287919
  
closing this PR as patch got committed


 Cloudstack server endless loop when trying to create a volume while storage 
 pool is full
 

 Key: CLOUDSTACK-7703
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7703
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Centos 6.5
Reporter: JF Vincent
Assignee: Anshul Gangwar
Priority: Critical
 Fix For: 4.5.0


 When trying to create a VM, and thus a volume for it, while the primary storage 
 is full (over 90%), the management server enters an endless loop (extract 
 below) and we have to restart it to exit the loop.
 2014-10-14 11:39:20,701 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) No suitable pools found for 
 volume: Vol[5436|vm=5855|DATADISK] under cluster: 2
 2014-10-14 11:39:20,702 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) No suitable pools found
 2014-10-14 11:39:20,702 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) No suitable storagePools found 
 under this Cluster: 2
 2014-10-14 11:39:20,705 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Could not find suitable 
 Deployment Destination for this VM under any clusters, returning.
 2014-10-14 11:39:20,705 DEBUG [cloud.deploy.FirstFitPlanner] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Searching all possible resources 
 under this Zone: 2
 2014-10-14 11:39:20,705 DEBUG [cloud.deploy.FirstFitPlanner] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Listing clusters in order of 
 aggregate capacity, that have (atleast one host with) enough CPU and RAM 
 capacity under this Zone: 2
 2014-10-14 11:39:20,707 DEBUG [cloud.deploy.FirstFitPlanner] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Removing from the clusterId list 
 these clusters from avoid set: []
 2014-10-14 11:39:20,714 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Checking resources in Cluster: 2 
 under Pod: 2
 2014-10-14 11:39:20,714 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Looking 
 for hosts in dc: 2  pod:2  cluster:2
 2014-10-14 11:39:20,716 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) 
 FirstFitAllocator has 3 hosts to check for allocation: [Host[-79-Routing], 
 Host[-89-Routing], Host[-77-Routing]]
 2014-10-14 11:39:20,717 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Found 3 
 hosts for allocation after prioritization: [Host[-79-Routing], 
 Host[-89-Routing], Host[-77-Routing]]
 2014-10-14 11:39:20,717 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Looking 
 for speed=500Mhz, Ram=500
 2014-10-14 11:39:20,720 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Host: 79 
 has cpu capability (cpu:8, speed:2399) to support requested CPU: 1 and 
 requested speed: 500
 2014-10-14 11:39:20,720 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Checking 
 if host: 79 has enough capacity for requested CPU: 500 and requested RAM: 
 524288000 , cpuOverprovisioningFactor: 4.0
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Hosts's 
 actual total CPU: 19192 and CPU after applying overprovisioning: 76768
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Free 
 CPU: 57268 , Requested CPU: 500
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Free 
 RAM: 93916725248 , Requested RAM: 524288000
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Host has 
 enough CPU and RAM available
 2014-10-14 11:39:20,721 DEBUG 

[jira] [Commented] (CLOUDSTACK-7703) Cloudstack server endless loop when trying to create a volume while storage pool is full

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214521#comment-14214521
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7703:


Github user anshul1886 closed the pull request at:

https://github.com/apache/cloudstack/pull/30


 Cloudstack server endless loop when trying to create a volume while storage 
 pool is full
 

 Key: CLOUDSTACK-7703
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7703
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Centos 6.5
Reporter: JF Vincent
Assignee: Anshul Gangwar
Priority: Critical
 Fix For: 4.5.0


 When trying to create a VM, and thus a volume for it, while the primary storage 
 is full (over 90%), the management server enters an endless loop (extract 
 below) and we have to restart it to exit the loop.
 2014-10-14 11:39:20,701 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) No suitable pools found for 
 volume: Vol[5436|vm=5855|DATADISK] under cluster: 2
 2014-10-14 11:39:20,702 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) No suitable pools found
 2014-10-14 11:39:20,702 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) No suitable storagePools found 
 under this Cluster: 2
 2014-10-14 11:39:20,705 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Could not find suitable 
 Deployment Destination for this VM under any clusters, returning.
 2014-10-14 11:39:20,705 DEBUG [cloud.deploy.FirstFitPlanner] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Searching all possible resources 
 under this Zone: 2
 2014-10-14 11:39:20,705 DEBUG [cloud.deploy.FirstFitPlanner] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Listing clusters in order of 
 aggregate capacity, that have (atleast one host with) enough CPU and RAM 
 capacity under this Zone: 2
 2014-10-14 11:39:20,707 DEBUG [cloud.deploy.FirstFitPlanner] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Removing from the clusterId list 
 these clusters from avoid set: []
 2014-10-14 11:39:20,714 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c) Checking resources in Cluster: 2 
 under Pod: 2
 2014-10-14 11:39:20,714 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Looking 
 for hosts in dc: 2  pod:2  cluster:2
 2014-10-14 11:39:20,716 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) 
 FirstFitAllocator has 3 hosts to check for allocation: [Host[-79-Routing], 
 Host[-89-Routing], Host[-77-Routing]]
 2014-10-14 11:39:20,717 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Found 3 
 hosts for allocation after prioritization: [Host[-79-Routing], 
 Host[-89-Routing], Host[-77-Routing]]
 2014-10-14 11:39:20,717 DEBUG [allocator.impl.FirstFitAllocator] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Looking 
 for speed=500Mhz, Ram=500
 2014-10-14 11:39:20,720 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Host: 79 
 has cpu capability (cpu:8, speed:2399) to support requested CPU: 1 and 
 requested speed: 500
 2014-10-14 11:39:20,720 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Checking 
 if host: 79 has enough capacity for requested CPU: 500 and requested RAM: 
 524288000 , cpuOverprovisioningFactor: 4.0
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Hosts's 
 actual total CPU: 19192 and CPU after applying overprovisioning: 76768
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Free 
 CPU: 57268 , Requested CPU: 500
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Free 
 RAM: 93916725248 , Requested RAM: 524288000
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f ctx-e581af2c FirstFitRoutingAllocator) Host has 
 enough CPU and RAM available
 2014-10-14 11:39:20,721 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-10:ctx-02d42f8f 

[jira] [Commented] (CLOUDSTACK-7758) Although API calls are failing, events tab shows them as successful

2014-11-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214524#comment-14214524
 ] 

ASF subversion and git services commented on CLOUDSTACK-7758:
-

Commit e8a47594da08a04065c644380c34e4830ab30df3 in cloudstack's branch 
refs/heads/master from [~anshulg]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=e8a4759 ]

CLOUDSTACK-7758: Fixed although api calls are failing, event tab shows them as 
successful

This closes #29


 Although API calls are failing, events tab shows them as successful
 ---

 Key: CLOUDSTACK-7758
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7758
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
Priority: Critical
 Fix For: 4.5.0


 For example, deployment of a VM fails, but the Events tab shows it as Successfully 
 completed starting VM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7541) Volume gets created with the size mentioned in the custom disk offering

2014-11-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214529#comment-14214529
 ] 

ASF subversion and git services commented on CLOUDSTACK-7541:
-

Commit efe47b07044a863e5a34f48ef6e2468265925604 in cloudstack's branch 
refs/heads/master from [~anshulg]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=efe47b0 ]

CLOUDSTACK-7541: Added restriction to not allow custom disk offering with 
disksize. The UI doesn't allow this, but via the API we were able to create a 
custom disk offering with a disk size, which was causing this issue.
This closes #28
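A minimal sketch of the kind of validation the commit describes (hypothetical
names; the real check lives in the createDiskOffering API handler, not in this
code): a customized offering must not carry a fixed disksize, because that fixed
size would silently override the size given at volume-creation time.

```python
# Sketch of the restriction described above (hypothetical names, not the
# actual CloudStack handler): reject a fixed disksize on a "custom"
# (customized) disk offering.
def create_disk_offering(name, customized, disksize=None):
    if customized and disksize is not None:
        raise ValueError("custom disk offering cannot specify disksize")
    return {"name": name, "customized": customized, "disksize": disksize}

# The UI already blocked this combination; the API previously did not.
try:
    create_disk_offering("custom", customized=True, disksize=2)
except ValueError as e:
    print(e)  # custom disk offering cannot specify disksize
```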


 Volume gets created with the size mentioned in the custom disk offering 
 

 Key: CLOUDSTACK-7541
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7541
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Volumes
Affects Versions: 4.5.0
 Environment: latest build from 4.5 with commit  
 932ea253eb8c65821503ab9db301073cdb2a413e
Reporter: Sanjeev N
Assignee: Anshul Gangwar
Priority: Critical
 Fix For: 4.5.0

 Attachments: cloud.dmp, management-server.rar


 Volume gets created with the size mentioned in the custom disk offering 
 Steps to reproduce:
 ==
 1. Bring up CS with the latest build
 2. Create a custom disk offering with a disk size of, say, 2
 3. Create a data disk using this offering and provide a disk size of 1 while 
 creating the data disk
 Expected Result:
 ==
 Since the disk offering is of type custom, the volume should be created with the 
 size given during volume creation instead of taking it from the disk offering.
 If the disk offering is custom, then volume creation must ignore the size 
 given in the offering and should create the volume with the size provided while 
 creating the volume.
 Actual Result:
 ===
 The disk got created with the size mentioned in the disk offering rather than the 
 size given at volume creation time. 
 Observations:
 ===
 http://10.147.38.153:8096/client/api?command=createDiskOffering&isMirrored=false&name=custom&displaytext=custom&storageType=shared&cacheMode=none&provisioningType=thin&customized=true&disksize=2
 { creatediskofferingresponse :  { diskoffering : 
 {id:2ddb8b79-9592-4b8c-8bd9-3d32c582873b,name:custom,displaytext:custom,disksize:2,created:2014-09-12T23:33:24+0530,iscustomized:true,storagetype:shared,provisioningtype:thin,displayoffering:true}
  }  }
 http://10.147.38.153:8080/client/api?command=createVolume&response=json&sessionkey=USf4e%2BpnzNiyWyq1PCeDFswjB%2BU%3D&name=custom&zoneId=2c67c83e-b8c3-42d0-a37b-b37287ac84dd&diskOfferingId=2ddb8b79-9592-4b8c-8bd9-3d32c582873b&size=1&_=1410525928417
 { queryasyncjobresultresponse : 
 {accountid:638d4e82-341f-11e4-a4c9-06097e23,userid:638d5ddc-341f-11e4-a4c9-06097e23,cmd:org.apache.cloudstack.api.command.admin.volume.CreateVolumeCmdByAdmin,jobstatus:1,jobprocstatus:0,jobresultcode:0,jobresulttype:object,jobresult:{volume:{id:42f24df4-b4ae-4b4c-80ce-ea1b5daf12bd,name:custom,zoneid:2c67c83e-b8c3-42d0-a37b-b37287ac84dd,zonename:zone1,type:DATADISK,provisioningtype:thin,size:2147483648,created:2014-09-12T23:34:32+0530,state:Allocated,account:admin,domainid:2caca782-341f-11e4-a4c9-06097e23,domain:ROOT,storagetype:shared,hypervisor:None,diskofferingid:2ddb8b79-9592-4b8c-8bd9-3d32c582873b,diskofferingname:custom,diskofferingdisplaytext:custom,destroyed:false,isextractable:true,tags:[],displayvolume:true,quiescevm:false,jobid:edf1a066-63b0-400b-bf15-4f77bf659206,jobstatus:0}},jobinstancetype:Volume,jobinstanceid:42f24df4-b4ae-4b4c-80ce-ea1b5daf12bd,created:2014-09-12T23:34:32+0530,jobid:edf1a066-63b0-400b-bf15-4f77bf659206}
  }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7758) Although API calls are failing, events tab shows them as successful

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214538#comment-14214538
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7758:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/29


 Although API calls are failing, events tab shows them as successful
 ---

 Key: CLOUDSTACK-7758
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7758
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
Priority: Critical
 Fix For: 4.5.0


 For example, deployment of a VM is failing, but the events tab shows it as 
 "Successfully completed starting VM".
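The mismatch above suggests the event state is keyed off job completion rather than the job result. A minimal sketch of the intended mapping (field names follow the queryAsyncJobResult response seen elsewhere in this digest; the status-code convention is an assumption, not taken from this thread):

```python
def event_state(job):
    """Map an async-job result to an event state.

    Assumed convention: jobstatus 2 = failed; a non-zero jobresultcode
    also indicates failure even when the job itself completed.
    """
    if job.get("jobstatus") == 2 or job.get("jobresultcode", 0) != 0:
        return "Error"
    return "Completed"
```

Under this rule a failed VM deployment would surface as "Error" in the events tab instead of a success message.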





[jira] [Commented] (CLOUDSTACK-7541) Volume gets created with the size mentioned in the custom disk offering

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214539#comment-14214539
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7541:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/28


 Volume gets created with the size mentioned in the custom disk offering 
 

 Key: CLOUDSTACK-7541
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7541
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Volumes
Affects Versions: 4.5.0
 Environment: latest build from 4.5 with commit  
 932ea253eb8c65821503ab9db301073cdb2a413e
Reporter: Sanjeev N
Assignee: Anshul Gangwar
Priority: Critical
 Fix For: 4.5.0

 Attachments: cloud.dmp, management-server.rar


 Volume gets created with the size mentioned in the custom disk offering 
 Steps to reproduce:
 ==
 1. Bring up CS with the latest build
 2. Create a custom disk offering with a disk size of, say, 2
 3. Create a data disk using this offering and provide a disk size of 1 while 
 creating the data disk
 Expected Result:
 ==
 Since the disk offering is of type custom, the volume should be created with 
 the size given during volume creation instead of taking it from the disk 
 offering. If the disk offering is custom, volume creation must ignore the 
 size given in the offering and create the volume with the size provided at 
 creation time.
 Actual Result:
 ===
 The disk got created with the size mentioned in the disk offering rather than 
 the size given at volume creation time. 
 Observations:
 ===
 http://10.147.38.153:8096/client/api?command=createDiskOffering&isMirrored=false&name=custom&displaytext=custom&storageType=shared&cacheMode=none&provisioningType=thin&customized=true&disksize=2
 { "creatediskofferingresponse" : { "diskoffering" :
 {"id":"2ddb8b79-9592-4b8c-8bd9-3d32c582873b","name":"custom","displaytext":"custom","disksize":2,"created":"2014-09-12T23:33:24+0530","iscustomized":true,"storagetype":"shared","provisioningtype":"thin","displayoffering":true}
  } }
 http://10.147.38.153:8080/client/api?command=createVolume&response=json&sessionkey=USf4e%2BpnzNiyWyq1PCeDFswjB%2BU%3D&name=custom&zoneId=2c67c83e-b8c3-42d0-a37b-b37287ac84dd&diskOfferingId=2ddb8b79-9592-4b8c-8bd9-3d32c582873b&size=1&_=1410525928417
 { "queryasyncjobresultresponse" :
 {"accountid":"638d4e82-341f-11e4-a4c9-06097e23","userid":"638d5ddc-341f-11e4-a4c9-06097e23","cmd":"org.apache.cloudstack.api.command.admin.volume.CreateVolumeCmdByAdmin","jobstatus":1,"jobprocstatus":0,"jobresultcode":0,"jobresulttype":"object","jobresult":{"volume":{"id":"42f24df4-b4ae-4b4c-80ce-ea1b5daf12bd","name":"custom","zoneid":"2c67c83e-b8c3-42d0-a37b-b37287ac84dd","zonename":"zone1","type":"DATADISK","provisioningtype":"thin","size":2147483648,"created":"2014-09-12T23:34:32+0530","state":"Allocated","account":"admin","domainid":"2caca782-341f-11e4-a4c9-06097e23","domain":"ROOT","storagetype":"shared","hypervisor":"None","diskofferingid":"2ddb8b79-9592-4b8c-8bd9-3d32c582873b","diskofferingname":"custom","diskofferingdisplaytext":"custom","destroyed":false,"isextractable":true,"tags":[],"displayvolume":true,"quiescevm":false,"jobid":"edf1a066-63b0-400b-bf15-4f77bf659206","jobstatus":0}},"jobinstancetype":"Volume","jobinstanceid":"42f24df4-b4ae-4b4c-80ce-ea1b5daf12bd","created":"2014-09-12T23:34:32+0530","jobid":"edf1a066-63b0-400b-bf15-4f77bf659206"}
  }
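The expected precedence between the offering size and the caller-supplied size can be sketched as follows (a hypothetical helper for illustration, not CloudStack code; sizes in GB):

```python
def effective_volume_size(offering_customized, offering_size, requested_size=None):
    # For a custom offering the caller-supplied size must win; for a fixed
    # offering the size in the offering is authoritative.
    if offering_customized:
        if requested_size is None:
            raise ValueError("size is required when the disk offering is custom")
        return requested_size
    return offering_size
```

With the requests above (custom offering with disksize=2, createVolume with size=1) this returns 1, whereas the reported behaviour corresponds to returning the offering's 2.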





[jira] [Created] (CLOUDSTACK-7926) don't really delete volumes - have the purge do it

2014-11-17 Thread Andrija Panic (JIRA)
Andrija Panic created CLOUDSTACK-7926:
-

 Summary: don't really delete volumes - have the purge do it
 Key: CLOUDSTACK-7926
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7926
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Storage Controller
Affects Versions: 4.3.0, 4.4.1
 Environment: NA
Reporter: Andrija Panic


Currently I have hit a bug: when I click on some instance, then on View 
Volumes, I get listed volumes that belong to some other VM - it has already 
happened to me that I deleted the volumes because of this ACS bug in the GUI!

So, I suggest considering implementing purging the same way it is implemented 
for VMs - the volume is not really deleted, and the purge thread in ACS will 
actually delete it when it runs...

This way, if the wrong volume is deleted, we can recover it quickly...
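The suggestion amounts to a soft delete plus a grace period, roughly as below (a sketch under assumed names and states, not the ACS implementation):

```python
from datetime import datetime, timedelta

def mark_destroyed(volume, now=None):
    # Soft delete: flag the volume instead of removing its backing store
    volume["state"] = "Destroyed"
    volume["removed_at"] = now or datetime.utcnow()

def purgeable(volumes, grace=timedelta(hours=24), now=None):
    # The purge thread really deletes only volumes past the grace period,
    # so an accidental delete can still be undone in the meantime
    now = now or datetime.utcnow()
    return [v for v in volumes
            if v.get("state") == "Destroyed" and now - v["removed_at"] >= grace]
```

A volume deleted by mistake stays recoverable until the purge thread's next run after the grace period expires.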





[jira] [Updated] (CLOUDSTACK-7926) Don't immediately delete volumes - have the purge thread do it

2014-11-17 Thread Andrija Panic (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrija Panic updated CLOUDSTACK-7926:
--
Summary: Don't immediately delete volumes - have the purge thread do it  
(was: don't really delete volumes - have the purge do it)

 Don't immediately delete volumes - have the purge thread do it
 --

 Key: CLOUDSTACK-7926
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7926
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller
Affects Versions: 4.3.0, 4.4.1
 Environment: NA
Reporter: Andrija Panic
  Labels: storage

 Currently I have hit a bug: when I click on some instance, then on View 
 Volumes, I get listed volumes that belong to some other VM - it has already 
 happened to me that I deleted the volumes because of this ACS bug in the GUI!
 So, I suggest considering implementing purging the same way it is implemented 
 for VMs - the volume is not really deleted, and the purge thread in ACS will 
 actually delete it when it runs...
 This way, if the wrong volume is deleted, we can recover it quickly...





[jira] [Commented] (CLOUDSTACK-7364) NetScaler won't create the Public VLAN and Bind the IP to it

2014-11-17 Thread Francois Gaudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214578#comment-14214578
 ] 

Francois Gaudreault commented on CLOUDSTACK-7364:
-

Interesting. 
It differs from what I see on my end. What version of ACS are you using? I see 
you use NS 10.5, is it working on NS 10.1?

 NetScaler won't create the Public VLAN and Bind the IP to it
 

 Key: CLOUDSTACK-7364
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7364
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.3.0, 4.4.0, 4.4.1
Reporter: Francois Gaudreault
Assignee: Rajesh Battala
Priority: Critical
 Attachments: management-server.log.debug.gz, screenshot-1.png, 
 screenshot-2.png


 When adding a Load Balancing rule with the NetScaler, the provider will tag 
 and bind the private IP to the appropriate interface. However, the behaviour 
 for the Public Interface is different. It simply adds the IP untagged on all 
 interfaces. This is wrong.
 The public VLAN should be tagged, and the VIP bound to the right VLAN tag to 
 avoid unnecessary ARP on other VLANs.
 NS Versions tested: 123.11, 127.10, 128.8
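The behaviour being asked for - tag the public VLAN on the configured public interface and bind the VIP to it - can be illustrated with a stand-in device object (the method and command names here are illustrative, not the real NITRO API):

```python
class FakeNetScaler:
    # Records issued commands instead of talking to a real appliance
    def __init__(self):
        self.calls = []

    def run(self, *cmd):
        self.calls.append(cmd)

def configure_public_vip(ns, vlan_id, vip, netmask, public_ifnum="1/2"):
    # Tag the public VLAN on the public interface chosen when the device was
    # added (1/2 in this report), rather than adding the VIP untagged on all
    # interfaces, which causes unnecessary ARP on other VLANs
    ns.run("add", "vlan", vlan_id)
    ns.run("bind", "vlan", vlan_id, "-ifnum", public_ifnum, "-tagged")
    ns.run("bind", "vlan", vlan_id, "-IPAddress", vip, netmask)
```

The key point of the report is the second call: the VLAN must be tagged on a specific interface instead of left untagged everywhere.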





[jira] [Commented] (CLOUDSTACK-7364) NetScaler won't create the Public VLAN and Bind the IP to it

2014-11-17 Thread Francois Gaudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214580#comment-14214580
 ] 

Francois Gaudreault commented on CLOUDSTACK-7364:
-

Oh and another thing, we use 1/2 for our public interface, and 1/3 for the 
private. Is the code expecting 1/1 systematically?

 NetScaler won't create the Public VLAN and Bind the IP to it
 

 Key: CLOUDSTACK-7364
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7364
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.3.0, 4.4.0, 4.4.1
Reporter: Francois Gaudreault
Assignee: Rajesh Battala
Priority: Critical
 Attachments: management-server.log.debug.gz, screenshot-1.png, 
 screenshot-2.png


 When adding a Load Balancing rule with the NetScaler, the provider will tag 
 and bind the private IP to the appropriate interface. However, the behaviour 
 for the Public Interface is different. It simply adds the IP untagged on all 
 interfaces. This is wrong.
 The public VLAN should be tagged, and the VIP bound to the right VLAN tag to 
 avoid unnecessary ARP on other VLANs.
 NS Versions tested: 123.11, 127.10, 128.8





[jira] [Commented] (CLOUDSTACK-7364) NetScaler won't create the Public VLAN and Bind the IP to it

2014-11-17 Thread Rajesh Battala (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214645#comment-14214645
 ] 

Rajesh Battala commented on CLOUDSTACK-7364:


That's not an issue. 
We take the public interface value specified while adding the device and use 
it to bind the VLAN and IP address.



 NetScaler won't create the Public VLAN and Bind the IP to it
 

 Key: CLOUDSTACK-7364
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7364
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.3.0, 4.4.0, 4.4.1
Reporter: Francois Gaudreault
Assignee: Rajesh Battala
Priority: Critical
 Attachments: management-server.log.debug.gz, screenshot-1.png, 
 screenshot-2.png


 When adding a Load Balancing rule with the NetScaler, the provider will tag 
 and bind the private IP to the appropriate interface. However, the behaviour 
 for the Public Interface is different. It simply adds the IP untagged on all 
 interfaces. This is wrong.
 The public VLAN should be tagged, and the VIP bound to the right VLAN tag to 
 avoid unnecessary ARP on other VLANs.
 NS Versions tested: 123.11, 127.10, 128.8





[jira] [Commented] (CLOUDSTACK-7775) Xen S3 backed secondary storage - local volume snapshots fail

2014-11-17 Thread Justyn Shull (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214691#comment-14214691
 ] 

Justyn Shull commented on CLOUDSTACK-7775:
--

This is also an issue for us.   The errors/logs we get are almost the same as 
in this ticket, but please let me know if there is any more information needed. 
 

In the meantime, I was able to workaround this error by editing 
/etc/xapi.d/plugins/s3xen on the hypervisor, and adding this line to the s3 
function:
{code}
filename = "%s.vhd" % filename.replace('/dev/VG_XenStorage-', 
'/var/run/sr-mount/').replace('VHD-', '')
{code}

It just changes the filename to what it would be set to if isISCSI returned 
false in 
*plugins/hypervisors/xenserver/src/com/cloud/hypervisor/xenserver/resource/XenServerStorageProcessor.java*
 around line 1070.   The IsISCSI function returns true for SRType.LVM which is 
what I’d be using with local storage - is that intended?
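The workaround is just a path rewrite from the LVM device name to the file path under the SR mount point. Isolated as a helper, it behaves like this (hypothetical function name; SR and VHD identifiers shortened for illustration):

```python
def lvm_vhd_to_file_path(device_path):
    # Mirror of the s3xen workaround: map /dev/VG_XenStorage-<SR>/VHD-<uuid>
    # to /var/run/sr-mount/<SR>/<uuid>.vhd, i.e. the non-iSCSI layout that
    # XenServerStorageProcessor would use when isISCSI returns false
    return "%s.vhd" % device_path.replace(
        "/dev/VG_XenStorage-", "/var/run/sr-mount/").replace("VHD-", "")
```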

 Xen S3 backed secondary storage - local volume snapshots fail
 -

 Key: CLOUDSTACK-7775
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7775
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.4.0, 4.3.1
Reporter: CS User

 Xenserver, 6.01 latest hotfixes, cloudstack 4.3 and 4.4. Snapshot of volume 
 on local disk, to be transferred to S3. When the xenhost attemps the PUT 
 request, the snapshot is no longer present and so the request fails. 
 Snapshots of volumes stored on Primary storage work fine and are uploaded to 
 the S3 backed secondary storage as expected. 
 Also tried upgrading the hosts to Xenserver 6.2, however it still has the 
 same issue. On another environment with cloudstack 4.3, with Xenserver 6.01 
 (no recent xen hotfixes and no S3 backed secondary storage), local snapshots 
 work fine. 
 We see this in the management server logs:
 (I have removed references to any hosts from the logs and amended IP's)
 {noformat}
 2014-10-21 12:36:55,332 WARN  [c.c.h.x.r.CitrixResourceBase] 
 (DirectAgent-235:ctx-55338794) Task failed! Task record: 
 uuid: c520bf5a-1bc1-57ca-f9d0-960179118118
nameLabel: Async.host.call_plugin
  nameDescription:
allowedOperations: []
currentOperations: {}
  created: Tue Oct 21 12:36:56 BST 2014
 finished: Tue Oct 21 12:36:56 BST 2014
   status: failure
   residentOn: com.xensource.xenapi.Host@1dd85be5
 progress: 1.0
 type: none/
   result:
errorInfo: [XENAPI_PLUGIN_FAILURE, getSnapshotSize, 
 CommandException, 2]
  otherConfig: {}
subtaskOf: com.xensource.xenapi.Task@aaf13f6f
 subtasks: []
 2014-10-21 12:36:55,386 WARN  [c.c.h.x.r.CitrixResourceBase] 
 (DirectAgent-235:ctx-55338794) callHostPlugin failed for cmd: getSnapshotSize 
 with args snapshotUuid: 0cef2f39-03e8-458d-bf19-7a2294c40ac7, isISCSI: true, 
 primaryStorageSRUuid: b1802ff2-5a63-d1e7-04de-dea7ba7eab27,  due to Task 
 failed! Task record: uuid: 
 c520bf5a-1bc1-57ca-f9d0-960179118118
nameLabel: Async.host.call_plugin
  nameDescription:
allowedOperations: []
currentOperations: {}
  created: Tue Oct 21 12:36:56 BST 2014
 finished: Tue Oct 21 12:36:56 BST 2014
   status: failure
   residentOn: com.xensource.xenapi.Host@1dd85be5
 progress: 1.0
 type: none/
   result:
errorInfo: [XENAPI_PLUGIN_FAILURE, getSnapshotSize, 
 CommandException, 2]
  otherConfig: {}
subtaskOf: com.xensource.xenapi.Task@aaf13f6f
 subtasks: []
 Task failed! Task record: uuid: 
 c520bf5a-1bc1-57ca-f9d0-960179118118
nameLabel: Async.host.call_plugin
  nameDescription:
allowedOperations: []
currentOperations: {}
  created: Tue Oct 21 12:36:56 BST 2014
 finished: Tue Oct 21 12:36:56 BST 2014
   status: failure
   residentOn: com.xensource.xenapi.Host@1dd85be5
 progress: 1.0
 type: none/
   result:
errorInfo: [XENAPI_PLUGIN_FAILURE, getSnapshotSize, 
 CommandException, 2]
  otherConfig: {}
subtaskOf: com.xensource.xenapi.Task@aaf13f6f
 subtasks: []
 at 
 com.cloud.hypervisor.xen.resource.CitrixResourceBase.checkForSuccess(CitrixResourceBase.java:3293)
 at 
 com.cloud.hypervisor.xen.resource.CitrixResourceBase.callHostPluginAsync(CitrixResourceBase.java:3507)
 at 
 com.cloud.hypervisor.xen.resource.XenServerStorageProcessor.getSnapshotSize(XenServerStorageProcessor.java:1211)
 at 
 

[jira] [Commented] (CLOUDSTACK-7364) NetScaler won't create the Public VLAN and Bind the IP to it

2014-11-17 Thread Francois Gaudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214713#comment-14214713
 ] 

Francois Gaudreault commented on CLOUDSTACK-7364:
-

Ok. So then back to square one: why do I get a different behaviour than your 
tests? :S 

Is it because we use VPCs?

 NetScaler won't create the Public VLAN and Bind the IP to it
 

 Key: CLOUDSTACK-7364
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7364
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.3.0, 4.4.0, 4.4.1
Reporter: Francois Gaudreault
Assignee: Rajesh Battala
Priority: Critical
 Attachments: management-server.log.debug.gz, screenshot-1.png, 
 screenshot-2.png


 When adding a Load Balancing rule with the NetScaler, the provider will tag 
 and bind the private IP to the appropriate interface. However, the behaviour 
 for the Public Interface is different. It simply adds the IP untagged on all 
 interfaces. This is wrong.
 The public VLAN should be tagged, and the VIP bound to the right VLAN tag to 
 avoid unnecessary ARP on other VLANs.
 NS Versions tested: 123.11, 127.10, 128.8





[jira] [Commented] (CLOUDSTACK-7857) CitrixResourceBase wrongly calculates total memory on hosts with a lot of memory and large Dom0

2014-11-17 Thread Anthony Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214983#comment-14214983
 ] 

Anthony Xu commented on CLOUDSTACK-7857:


 The formula that is used by XenCenter for this seems pretty easy and spot on.
This is too hypervisor-specific; we don't want to couple CloudStack too 
tightly to the hypervisor. But if the hypervisor provides the memory overhead 
through an API, we can use it.

 recalculate the free memory metric every couple minutes (for instance as part 
 of the stats collection cycle)? 
We have been discussing this for a while.
I like this idea, but it is a big change:
1. Right now, memory capacity is based on the memory size in the service 
offering, not real memory. If we use a real memory metric, then we need to 
surface it in some places; the UI needs to show both allocated memory and 
real used memory.
2. The VM deployment planner needs to consider both.
3. We need to decide how to handle memory thin provisioning.
4. Other hypervisors may not be able to provide an accurate memory metric. On 
KVM, for example, the memory (cache) being used by the host OS can be 
reclaimed for VM deployment, but the free memory reported by the host OS 
doesn't include memory used by cache.


I think we can start with XS. Since it is a big change, it is better to 
consider it as a new feature: use both allocated and real memory in host 
capacity.

Anthony 







 CitrixResourceBase wrongly calculates total memory on hosts with a lot of 
 memory and large Dom0
 ---

 Key: CLOUDSTACK-7857
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7857
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: Future, 4.3.0, 4.4.0, 4.5.0, 4.3.1, 4.4.1, 4.6.0
Reporter: Joris van Lieshout
Priority: Blocker

 We have hosts with 256GB memory and 4GB dom0. During startup ACS calculates 
 available memory using this formula:
 CitrixResourceBase.java
   protected void fillHostInfo
   ram = (long) ((ram - dom0Ram - _xs_memory_used) * 
 _xs_virtualization_factor);
 In our situation:
   ram = 274841497600
   dom0Ram = 4269801472
   _xs_memory_used = 128 * 1024 * 1024L = 134217728
   _xs_virtualization_factor = 63.0/64.0 = 0,984375
   (274841497600 - 4269801472 - 134217728) * 0,984375 = 266211892800
 This is in fact not the actual amount of memory available for instances. The 
 difference in our situation is a little less then 1GB. On this particular 
 hypervisor Dom0+Xen uses about 9GB.
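Plugging the numbers from this report into the quoted formula reproduces the figure above (a direct transcription of the one-line calculation, not the full fillHostInfo logic):

```python
def usable_host_memory(ram, dom0_ram,
                       xs_memory_used=128 * 1024 * 1024,
                       xs_virtualization_factor=63.0 / 64.0):
    # Formula quoted from CitrixResourceBase.fillHostInfo; all values in bytes
    return int((ram - dom0_ram - xs_memory_used) * xs_virtualization_factor)
```

With ram=274841497600 and dom0Ram=4269801472 this yields 266211892800 bytes, which overstates the memory actually available when Dom0 plus Xen consume about 9 GB.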
 As the comment above the definition of XsMemoryUsed already stated, it's time 
 to review this logic. 
 //Hypervisor specific params with generic value, may need to be overridden 
 for specific versions
 The effect of this bug is that when you put a hypervisor in maintenance it 
 might try to move instances (usually small instances (1GB)) to a host that 
 in fact does not have enough free memory.
 This exception is thrown:
 ERROR [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-3:ctx-09aca6e9 
 work-8981) Terminating HAWork[8981-Migration-4482-Running-Migrating]
 com.cloud.utils.exception.CloudRuntimeException: Unable to migrate due to 
 Catch Exception com.cloud.utils.exception.CloudRuntimeException: Migration 
 failed due to com.cloud.utils.exception.CloudRuntim
 eException: Unable to migrate VM(r-4482-VM) from 
 host(6805d06c-4d5b-4438-a245-7915e93041d9) due to Task failed! Task record:   
   uuid: 645b63c8-1426-b412-7b6a-13d61ee7ab2e
nameLabel: Async.VM.pool_migrate
  nameDescription: 
allowedOperations: []
currentOperations: {}
  created: Thu Nov 06 13:44:14 CET 2014
 finished: Thu Nov 06 13:44:14 CET 2014
   status: failure
   residentOn: com.xensource.xenapi.Host@b42882c6
 progress: 1.0
 type: none/
   result: 
errorInfo: [HOST_NOT_ENOUGH_FREE_MEMORY, 272629760, 263131136]
  otherConfig: {}
subtaskOf: com.xensource.xenapi.Task@aaf13f6f
 subtasks: []
 at 
 com.cloud.vm.VirtualMachineManagerImpl.migrate(VirtualMachineManagerImpl.java:1840)
 at 
 com.cloud.vm.VirtualMachineManagerImpl.migrateAway(VirtualMachineManagerImpl.java:2214)
 at 
 com.cloud.ha.HighAvailabilityManagerImpl.migrate(HighAvailabilityManagerImpl.java:610)
 at 
 com.cloud.ha.HighAvailabilityManagerImpl$WorkerThread.runWithContext(HighAvailabilityManagerImpl.java:865)
 at 
 com.cloud.ha.HighAvailabilityManagerImpl$WorkerThread.access$000(HighAvailabilityManagerImpl.java:822)
 at 
 com.cloud.ha.HighAvailabilityManagerImpl$WorkerThread$1.run(HighAvailabilityManagerImpl.java:834)
 at 
 

[jira] [Created] (CLOUDSTACK-7927) UI > Infrastructure > Primary Storage > detailView > add View Volumes link that will list all volumes under this primary storage when being clicked.

2014-11-17 Thread Jessica Wang (JIRA)
Jessica Wang created CLOUDSTACK-7927:


 Summary: UI > Infrastructure > Primary Storage > detailView > add 
View Volumes link that will list all volumes under this primary storage when 
being clicked.
 Key: CLOUDSTACK-7927
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7927
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: UI
Reporter: Jessica Wang
Assignee: Jessica Wang
Priority: Critical








[jira] [Commented] (CLOUDSTACK-7927) UI > Infrastructure > Primary Storage > detailView > add View Volumes link that will list all volumes under this primary storage when being clicked.

2014-11-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215175#comment-14215175
 ] 

ASF subversion and git services commented on CLOUDSTACK-7927:
-

Commit 635abaf2e9ca4f0399085f441ea6d5eeaab9f3ab in cloudstack's branch 
refs/heads/4.5 from [~jessicawang]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=635abaf ]

CLOUDSTACK-7927: UI > Infrastructure > Primary Storage > detailView > add View 
Volumes link that will list all volumes under this primary storage when being 
clicked.


 UI > Infrastructure > Primary Storage > detailView > add View Volumes link 
 that will list all volumes under this primary storage when being clicked.
 --

 Key: CLOUDSTACK-7927
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7927
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Reporter: Jessica Wang
Assignee: Jessica Wang
Priority: Critical







[jira] [Closed] (CLOUDSTACK-7927) UI > Infrastructure > Primary Storage > detailView > add View Volumes link that will list all volumes under this primary storage when being clicked.

2014-11-17 Thread Jessica Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jessica Wang closed CLOUDSTACK-7927.


 UI > Infrastructure > Primary Storage > detailView > add View Volumes link 
 that will list all volumes under this primary storage when being clicked.
 --

 Key: CLOUDSTACK-7927
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7927
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Reporter: Jessica Wang
Assignee: Jessica Wang
Priority: Critical
 Fix For: 4.5.0








[jira] [Updated] (CLOUDSTACK-7927) UI > Infrastructure > Primary Storage > detailView > add View Volumes link that will list all volumes under this primary storage when being clicked.

2014-11-17 Thread Jessica Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jessica Wang updated CLOUDSTACK-7927:
-
Fix Version/s: 4.5.0

 UI > Infrastructure > Primary Storage > detailView > add View Volumes link 
 that will list all volumes under this primary storage when being clicked.
 --

 Key: CLOUDSTACK-7927
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7927
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Reporter: Jessica Wang
Assignee: Jessica Wang
Priority: Critical
 Fix For: 4.5.0








[jira] [Commented] (CLOUDSTACK-7927) UI > Infrastructure > Primary Storage > detailView > add View Volumes link that will list all volumes under this primary storage when being clicked.

2014-11-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215176#comment-14215176
 ] 

ASF subversion and git services commented on CLOUDSTACK-7927:
-

Commit eba7cc78da751cc25994a5a736a87ce2e83c394e in cloudstack's branch 
refs/heads/master from [~jessicawang]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=eba7cc7 ]

CLOUDSTACK-7927: UI > Infrastructure > Primary Storage > detailView > add View 
Volumes link that will list all volumes under this primary storage when being 
clicked.


 UI > Infrastructure > Primary Storage > detailView > add View Volumes link 
 that will list all volumes under this primary storage when being clicked.
 --

 Key: CLOUDSTACK-7927
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7927
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Reporter: Jessica Wang
Assignee: Jessica Wang
Priority: Critical
 Fix For: 4.5.0








[jira] [Resolved] (CLOUDSTACK-7927) UI > Infrastructure > Primary Storage > detailView > add View Volumes link that will list all volumes under this primary storage when being clicked.

2014-11-17 Thread Jessica Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jessica Wang resolved CLOUDSTACK-7927.
--
Resolution: Fixed

 UI > Infrastructure > Primary Storage > detailView > add View Volumes link 
 that will list all volumes under this primary storage when being clicked.
 --

 Key: CLOUDSTACK-7927
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7927
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Reporter: Jessica Wang
Assignee: Jessica Wang
Priority: Critical
 Fix For: 4.5.0








[jira] [Commented] (CLOUDSTACK-6624) Unable to create new Network Offerings via UI with Specify VLAN option set

2014-11-17 Thread Jessica Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215224#comment-14215224
 ] 

Jessica Wang commented on CLOUDSTACK-6624:
--

Rohit, Geoff,

For network offering whose GuestIpType is Isolated, specifyIpRanges should be 
set to false.

Jessica

 Unable to create new Network Offerings via UI with Specify VLAN option set
 --

 Key: CLOUDSTACK-6624
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6624
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.3.0
Reporter: Geoff Higgibottom
Assignee: Rohit Yadav
Priority: Critical
  Labels: UI
 Fix For: 4.3.1


 When creating a new network offering with the Specify VLAN option set, the 
 Specify IP Option should also be set automatically.  The UI is no longer 
 sending this parameter in the API string so the Network Offering has an 
 invalid configuration.





[jira] [Commented] (CLOUDSTACK-6624) Unable to create new Network Offerings via UI with Specify VLAN option set

2014-11-17 Thread Stephen Turner (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215232#comment-14215232
 ] 

Stephen Turner commented on CLOUDSTACK-6624:


Thank you for your email. I am at the CloudStack Collaboration Conference in 
Budapest until Monday 24th November, so responses to email are likely to be 
delayed. For urgent questions, please contact my manager, 
andrew.hal...@citrix.com.

Thank you very much,

--
Stephen Turner
Sr Manager, Citrix



 Unable to create new Network Offerings via UI with Specify VLAN option set
 --

 Key: CLOUDSTACK-6624
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6624
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.3.0
Reporter: Geoff Higgibottom
Assignee: Rohit Yadav
Priority: Critical
  Labels: UI
 Fix For: 4.3.1


 When creating a new network offering with the Specify VLAN option set, the 
 Specify IP Option should also be set automatically.  The UI is no longer 
 sending this parameter in the API string so the Network Offering has an 
 invalid configuration.





[jira] [Updated] (CLOUDSTACK-3607) guest_os_hypervisor table has values that are not registered in guest_os table

2014-11-17 Thread Chandan Purushothama (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandan Purushothama updated CLOUDSTACK-3607:
-
Assignee: (was: Chandan Purushothama)

 guest_os_hypervisor table has values that are not registered in guest_os 
 table
 --

 Key: CLOUDSTACK-3607
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3607
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.2.0
Reporter: Chandan Purushothama
 Fix For: 4.4.0, 4.5.0


 mysql> select * from guest_os_hypervisor where guest_os_id not in (select id 
 from guest_os);
 +-----+-----------------+------------------------------------+-------------+
 | id  | hypervisor_type | guest_os_name                      | guest_os_id |
 +-----+-----------------+------------------------------------+-------------+
 | 128 | VmWare          | Red Hat Enterprise Linux 6(32-bit) | 204         |
 | 129 | VmWare          | Red Hat Enterprise Linux 6(64-bit) | 205         |
 +-----+-----------------+------------------------------------+-------------+
 2 rows in set (0.07 sec)





[jira] [Commented] (CLOUDSTACK-6624) Unable to create new Network Offerings via UI with Specify VLAN option set

2014-11-17 Thread Jessica Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215271#comment-14215271
 ] 

Jessica Wang commented on CLOUDSTACK-6624:
--

Geoff, Rohit,

 [Geoff] The UI is no longer sending this parameter in the API string so the 
 Network Offering has an invalid configuration.

The UI used to send specifyIpRanges=false to the createNetworkOffering API when 
guestIpType is Isolated.
(before 2014-06-11)

The UI was changed to stop sending it because the server-side default value of 
specifyIpRanges is already false, so there is no need to send the same value.
(at 2014-06-11) (CLOUDSTACK-6889)

The UI change that sends specifyIpRanges=true to the createNetworkOffering API 
when guestIpType is Isolated is wrong.
(at 2014-09-08) (CLOUDSTACK-6624)

An Isolated network does NOT support specifyIpRanges=true.

Jessica
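As a sketch of the parameter behavior described above, the snippet below builds a createNetworkOffering request for an Isolated offering that simply omits specifyipranges, letting the server-side default (false) apply. The signing helper follows the documented CloudStack scheme (sort parameters, lowercase the query string, HMAC-SHA1, base64); all offering names, services, and keys here are hypothetical placeholders, not values from this issue.

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, secret_key):
    # CloudStack API signing: sort parameters by key, URL-encode the
    # values, lowercase the serialized query string, HMAC-SHA1 it with
    # the secret key, and base64-encode the digest.
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='*')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    return base64.b64encode(digest).decode()

# Hypothetical offering values; the point is what is ABSENT: for an
# Isolated offering the UI should omit specifyipranges entirely and
# let the server-side default (false) apply.
params = {
    "command": "createNetworkOffering",
    "name": "test-offering",
    "displaytext": "test-offering",
    "guestiptype": "Isolated",
    "traffictype": "Guest",
    "supportedservices": "Dhcp,Dns,SourceNat,Firewall",
    "response": "json",
    "apikey": "APIKEY",
}
assert "specifyipranges" not in params
print(sign_request(params, "SECRETKEY"))
```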

 Unable to create new Network Offerings via UI with Specify VLAN option set
 --

 Key: CLOUDSTACK-6624
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6624
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.3.0
Reporter: Geoff Higgibottom
Assignee: Rohit Yadav
Priority: Critical
  Labels: UI
 Fix For: 4.3.1


 When creating a new network offering with the Specify VLAN option set, the 
 Specify IP option should also be set automatically. The UI is no longer 
 sending this parameter in the API string, so the Network Offering has an 
 invalid configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7735) Admin is not allowed to deploy a VM on a disabled host if the hostId parameter is not passed.

2014-11-17 Thread Prachi Damle (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215275#comment-14215275
 ] 

Prachi Damle commented on CLOUDSTACK-7735:
--

A disabled host is mainly used for the admin's testing purposes. While this bug 
states that CCP is not following the general rule that disabled resources remain 
usable by the admin, the admin still has a workaround here.

The admin can set the hostId directly in the deployVm API to test the host.

So this change is not strictly necessary from a functionality perspective.
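A minimal sketch of that workaround: the admin passes hostid explicitly to the deployVirtualMachine API so the deployment planner targets the disabled host directly. The UUIDs below are placeholders, not real identifiers.

```python
from urllib.parse import urlencode

# Hypothetical placeholder IDs; deployVirtualMachine accepts an
# optional hostid that lets an admin target a specific host directly.
params = {
    "command": "deployVirtualMachine",
    "zoneid": "ZONE-UUID",
    "serviceofferingid": "OFFERING-UUID",
    "templateid": "TEMPLATE-UUID",
    "hostid": "DISABLED-HOST-UUID",  # explicit host selection
    "response": "json",
}
query = urlencode(params)
print(query)
```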

 Admin is not allowed to deploy a VM on a disabled host if the hostId 
 parameter is not passed.
 ---

 Key: CLOUDSTACK-7735
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7735
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
 Environment: build from master
Reporter: Sangeetha Hariharan
Assignee: Prachi Damle
 Fix For: 4.5.0


 Admin is not allowed to deploy a VM on a disabled host if the hostId 
 parameter is not passed.
 Steps to reproduce the problem:
 Disable host h1.
 As admin, try to deploy a VM on h1 using a service offering whose host tags 
 match h1.
 The admin is not allowed to deploy a VM on this host. This behavior differs 
 from disabled zones, disabled pods, and disabled clusters, where the admin 
 is allowed to deploy VMs.
 But when I try to deploy a VM by passing the hostId parameter, I am allowed 
 to deploy the VM on this host.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-7928) [Automation] Fix the script test_vpc_vm_life_cycle.py - Network Rules Validation fails when VPC VR is Stopped as per design

2014-11-17 Thread Chandan Purushothama (JIRA)
Chandan Purushothama created CLOUDSTACK-7928:


 Summary: [Automation] Fix the script test_vpc_vm_life_cycle.py - 
Network Rules Validation fails when VPC VR is Stopped as per design
 Key: CLOUDSTACK-7928
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7928
 Project: CloudStack
  Issue Type: Test
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Automation, Test
Affects Versions: 4.5.0
Reporter: Chandan Purushothama
Assignee: Chandan Purushothama
 Fix For: 4.5.0


Following Test Cases currently fail in TestVMLifeCycleStoppedVPCVR Test Suite:

test_07_migrate_instance_in_network
test_08_user_data
test_09_meta_data
test_10_expunge_instance_in_network

The test cases fail for the obvious reason that the VPC VR is stopped. A 
stopped VPC VR doesn't allow traffic to or from the guest VMs. Hence these test 
cases are not valid in such a scenario and should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-5429) KVM - Primary store down/Network Failure - Host reboot attempts hang because the primary store is down.

2014-11-17 Thread edison su (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

edison su resolved CLOUDSTACK-5429.
---
Resolution: Duplicate

 KVM - Primary store down/Network Failure - Host reboot attempts hang because 
 the primary store is down.
 -

 Key: CLOUDSTACK-5429
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5429
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Build from 4.3
Reporter: Sangeetha Hariharan
Assignee: edison su
Priority: Critical
 Fix For: 4.4.0

 Attachments: kvm-networkshutdown.png, kvmhostreboot.png, psdown.rar


 KVM - Primary store down - Host reboot attempts hang because the primary 
 store is down.
 Set up:
 Advanced zone with KVM (RHEL 6.3) hosts.
 Steps to reproduce the problem:
 1. Deploy a few VMs on each of the hosts with a 10 GB ROOT volume size, so 
 we start with 10 VMs.
 2. Create snapshots for the ROOT volumes.
 3. While a snapshot is still in progress, make the primary storage 
 unavailable for 10 minutes.
 This causes the KVM hosts to reboot.
 But the reboot of the KVM host is not successful.
 It is stuck trying to unmount the NFS mount points.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CLOUDSTACK-5429) KVM - Primary store down/Network Failure - Host reboot attempts hang because the primary store is down.

2014-11-17 Thread edison su (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

edison su reopened CLOUDSTACK-5429:
---

 KVM - Primary store down/Network Failure - Host reboot attempts hang because 
 the primary store is down.
 -

 Key: CLOUDSTACK-5429
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5429
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Build from 4.3
Reporter: Sangeetha Hariharan
Assignee: edison su
Priority: Critical
 Fix For: 4.4.0

 Attachments: kvm-networkshutdown.png, kvmhostreboot.png, psdown.rar


 KVM - Primary store down - Host reboot attempts hang because the primary 
 store is down.
 Set up:
 Advanced zone with KVM (RHEL 6.3) hosts.
 Steps to reproduce the problem:
 1. Deploy a few VMs on each of the hosts with a 10 GB ROOT volume size, so 
 we start with 10 VMs.
 2. Create snapshots for the ROOT volumes.
 3. While a snapshot is still in progress, make the primary storage 
 unavailable for 10 minutes.
 This causes the KVM hosts to reboot.
 But the reboot of the KVM host is not successful.
 It is stuck trying to unmount the NFS mount points.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-5578) KVM - Network down - When the host loses network connectivity, reboot gets stuck while unmounting primary

2014-11-17 Thread edison su (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215390#comment-14215390
 ] 

edison su commented on CLOUDSTACK-5578:
---

It's due to the behavior of the KVM host:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/sect-Managing_guest_virtual_machines_with_virsh-Shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine.html

Normally, the KVM host will try to suspend the VMs on the host during reboot, 
which may get stuck when primary storage is unavailable.
As you said, we could try reboot -f, or change the KVM host behavior to 
forcefully shut down the VMs.
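A rough sketch, not a tested procedure, of the "forcefully shut down VMs" option: destroy each guest without waiting for an ACPI shutdown, then hard-reboot with `reboot -f` so shutdown scripts cannot hang on the dead NFS mounts. The domain names are hypothetical, and the snippet deliberately only prints the commands as a dry run.

```python
def force_reboot_commands(domains):
    # 'virsh destroy' kills the guest immediately (no ACPI wait);
    # 'reboot -f' reboots without running init scripts or unmounts.
    cmds = [["virsh", "destroy", d] for d in domains]
    cmds.append(["reboot", "-f"])
    return cmds

# Dry run: print what would be executed instead of running it.
for cmd in force_reboot_commands(["i-2-10-VM", "i-2-11-VM"]):
    print(" ".join(cmd))
```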

 KVM - Network down - When the host loses network connectivity, reboot gets 
 stuck while unmounting primary
 --

 Key: CLOUDSTACK-5578
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5578
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.2.0
 Environment: Build from 4.3
Reporter: Sangeetha Hariharan
Assignee: Kishan Kavala
Priority: Critical
 Fix For: 4.5.0

 Attachments: DisconnectedHost.png, kvm-hostdisconnect.rar, 
 nfsUmount.jpg


 KVM - Network down - When the host loses network connectivity, it is not 
 able to fence itself.
 Steps to reproduce the problem:
 Set up - Advanced zone with 2 RHEL 6.3 hosts in a cluster.
 Deploy ~10 VMs.
 Simulate a network disconnect on the host (ifdown em1).
 The host gets marked as Down and all the VMs get HA-ed to the other host.
 On the KVM host which lost connectivity, the attempt to shut itself down 
 fails: it was not able to umount the primary store.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-5482) Vmware - When NFS was down for about 1 hour while snapshots were in progress, the snapshot job failed when NFS was brought up, leaving behind snapshots in CreatedOnPri

2014-11-17 Thread edison su (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

edison su updated CLOUDSTACK-5482:
--
Assignee: Sateesh Chodapuneedi  (was: edison su)

 Vmware - When NFS was down for about 1 hour while snapshots were in 
 progress, the snapshot job failed when NFS was brought up, leaving behind 
 snapshots in CreatedOnPrimary state.
 -

 Key: CLOUDSTACK-5482
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5482
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Build from 4.3
Reporter: Sangeetha Hariharan
Assignee: Sateesh Chodapuneedi
 Fix For: 4.4.0, 4.5.0

 Attachments: nfs12down.rar, vmware.rar, vmware.rar


 Set up:
 Advanced Zone with 2 ESXi 5.1 hosts.
 Steps to reproduce the problem:
 1. Deploy 5 VMs on each of the hosts, so we start with 11 VMs.
 2. Start concurrent snapshots for the ROOT volumes of all the VMs.
 3. Shut down the Secondary storage server while the snapshots are in 
 progress.
 4. Bring the Secondary storage server up after 1 hour.
 When the secondary storage was down, 2 of the snapshots were already 
 completed, 5 of them were in progress, and the other 4 had not started yet.
 Once the secondary store was brought up, the snapshots that were in 
 progress actually continued to download to secondary storage and succeeded. 
 But the other 4 snapshots errored out.
 mysql> select volume_id,status,created from snapshots;
 +-----------+------------------+---------------------+
 | volume_id | status           | created             |
 +-----------+------------------+---------------------+
 |        22 | BackedUp         | 2013-12-12 23:24:13 |
 |        21 | Destroyed        | 2013-12-12 23:24:13 |
 |        20 | BackedUp         | 2013-12-12 23:24:14 |
 |        19 | Destroyed        | 2013-12-12 23:24:14 |
 |        18 | BackedUp         | 2013-12-12 23:24:14 |
 |        17 | BackedUp         | 2013-12-12 23:24:14 |
 |        16 | BackedUp         | 2013-12-12 23:24:14 |
 |        14 | BackedUp         | 2013-12-12 23:24:15 |
 |        25 | BackedUp         | 2013-12-12 23:24:15 |
 |        24 | BackedUp         | 2013-12-12 23:24:15 |
 |        23 | BackedUp         | 2013-12-12 23:24:15 |
 |        22 | CreatedOnPrimary | 2013-12-12 23:53:38 |
 |        21 | BackedUp         | 2013-12-12 23:53:38 |
 |        20 | BackedUp         | 2013-12-12 23:53:38 |
 |        19 | BackedUp         | 2013-12-12 23:53:39 |
 |        18 | CreatedOnPrimary | 2013-12-12 23:53:39 |
 |        17 | CreatedOnPrimary | 2013-12-12 23:53:40 |
 |        16 | CreatedOnPrimary | 2013-12-12 23:53:40 |
 |        14 | BackedUp         | 2013-12-12 23:53:40 |
 |        25 | BackedUp         | 2013-12-12 23:53:41 |
 |        24 | BackedUp         | 2013-12-12 23:53:41 |
 |        23 | BackedUp         | 2013-12-12 23:53:42 |
 |        21 | BackedUp         | 2013-12-13 00:53:37 |
 |        19 | BackedUp         | 2013-12-13 00:53:38 |
 +-----------+------------------+---------------------+
 24 rows in set (0.00 sec)
 This leaves behind incomplete snapshots. The directory does not have an ovf 
 file and has an incomplete vmdk file.
 [root@Rack3Host8 18]# ls -ltR
 .:
 total 12
 drwxr-xr-x. 2 root root 4096 Dec 12 22:56 36d7964c-e545-41d7-b303-96359a88dcef
 drwxr-xr-x. 2 root root 4096 Dec 12 22:30 68802f5f-84b1-42ad-8dca-4de7e83324e2
 ./36d7964c-e545-41d7-b303-96359a88dcef:
 total 403256
 -rw-r--r--. 1 root root 412524288 Dec 13 00:20 
 36d7964c-e545-41d7-b303-96359a88dcef-disk0.vmdk
 ./68802f5f-84b1-42ad-8dca-4de7e83324e2:
 total 448860
 -rw-r--r--. 1 root root 459168256 Dec 12 22:30 
 68802f5f-84b1-42ad-8dca-4de7e83324e2-disk0.vmdk
 -rw-r--r--. 1 root root  6454 Dec 12 22:30 
 68802f5f-84b1-42ad-8dca-4de7e83324e2.ovf
 [root@Rack3Host8 18]#
 Following exception seen in the management server logs:
 2013-12-12 20:23:13,021 DEBUG [c.c.a.t.Request] (AgentManager-Handler-2:null) 
 Seq 5-813367309: Processing:  { Ans: , MgmtId: 95307354844397, via: 5, Ver: 
 v1, Flags: 10, 
 [{org.apache.cloudstack.storage.command.CopyCmdAnswer:{result:false,details:backup
  snapshot exception: Exception: java.lang.Exception\nMessage: Unable to 
 finish the whole process to package as a OVA file\n,wait:0}}] }
 2013-12-12 20:23:13,022 DEBUG [c.c.a.t.Request] (Job-Executor-1:ctx-83fb69a5 
 ctx-51e56052) Seq 5-813367309: Received:  { Ans: , MgmtId: 95307354844397, 
 via: 5, Ver: v1, Flags: 10, { CopyCmdAnswer } }
 2013-12-12 20:23:13,041 DEBUG [c.c.s.s.SnapshotManagerImpl] 
 (Job-Executor-1:ctx-83fb69a5 ctx-51e56052) 

[jira] [Assigned] (CLOUDSTACK-5482) Vmware - When NFS was down for about 1 hour while snapshots were in progress, the snapshot job failed when NFS was brought up, leaving behind snapshots in CreatedOnPr

2014-11-17 Thread edison su (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

edison su reassigned CLOUDSTACK-5482:
-

Assignee: edison su  (was: Sateesh Chodapuneedi)

 Vmware - When NFS was down for about 1 hour while snapshots were in 
 progress, the snapshot job failed when NFS was brought up, leaving behind 
 snapshots in CreatedOnPrimary state.
 -

 Key: CLOUDSTACK-5482
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5482
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Build from 4.3
Reporter: Sangeetha Hariharan
Assignee: edison su
 Fix For: 4.4.0, 4.5.0

 Attachments: nfs12down.rar, vmware.rar, vmware.rar


 Set up:
 Advanced Zone with 2 ESXi 5.1 hosts.
 Steps to reproduce the problem:
 1. Deploy 5 VMs on each of the hosts, so we start with 11 VMs.
 2. Start concurrent snapshots for the ROOT volumes of all the VMs.
 3. Shut down the Secondary storage server while the snapshots are in 
 progress.
 4. Bring the Secondary storage server up after 1 hour.
 When the secondary storage was down, 2 of the snapshots were already 
 completed, 5 of them were in progress, and the other 4 had not started yet.
 Once the secondary store was brought up, the snapshots that were in 
 progress actually continued to download to secondary storage and succeeded. 
 But the other 4 snapshots errored out.
 mysql> select volume_id,status,created from snapshots;
 +-----------+------------------+---------------------+
 | volume_id | status           | created             |
 +-----------+------------------+---------------------+
 |        22 | BackedUp         | 2013-12-12 23:24:13 |
 |        21 | Destroyed        | 2013-12-12 23:24:13 |
 |        20 | BackedUp         | 2013-12-12 23:24:14 |
 |        19 | Destroyed        | 2013-12-12 23:24:14 |
 |        18 | BackedUp         | 2013-12-12 23:24:14 |
 |        17 | BackedUp         | 2013-12-12 23:24:14 |
 |        16 | BackedUp         | 2013-12-12 23:24:14 |
 |        14 | BackedUp         | 2013-12-12 23:24:15 |
 |        25 | BackedUp         | 2013-12-12 23:24:15 |
 |        24 | BackedUp         | 2013-12-12 23:24:15 |
 |        23 | BackedUp         | 2013-12-12 23:24:15 |
 |        22 | CreatedOnPrimary | 2013-12-12 23:53:38 |
 |        21 | BackedUp         | 2013-12-12 23:53:38 |
 |        20 | BackedUp         | 2013-12-12 23:53:38 |
 |        19 | BackedUp         | 2013-12-12 23:53:39 |
 |        18 | CreatedOnPrimary | 2013-12-12 23:53:39 |
 |        17 | CreatedOnPrimary | 2013-12-12 23:53:40 |
 |        16 | CreatedOnPrimary | 2013-12-12 23:53:40 |
 |        14 | BackedUp         | 2013-12-12 23:53:40 |
 |        25 | BackedUp         | 2013-12-12 23:53:41 |
 |        24 | BackedUp         | 2013-12-12 23:53:41 |
 |        23 | BackedUp         | 2013-12-12 23:53:42 |
 |        21 | BackedUp         | 2013-12-13 00:53:37 |
 |        19 | BackedUp         | 2013-12-13 00:53:38 |
 +-----------+------------------+---------------------+
 24 rows in set (0.00 sec)
 This leaves behind incomplete snapshots. The directory does not have an ovf 
 file and has an incomplete vmdk file.
 [root@Rack3Host8 18]# ls -ltR
 .:
 total 12
 drwxr-xr-x. 2 root root 4096 Dec 12 22:56 36d7964c-e545-41d7-b303-96359a88dcef
 drwxr-xr-x. 2 root root 4096 Dec 12 22:30 68802f5f-84b1-42ad-8dca-4de7e83324e2
 ./36d7964c-e545-41d7-b303-96359a88dcef:
 total 403256
 -rw-r--r--. 1 root root 412524288 Dec 13 00:20 
 36d7964c-e545-41d7-b303-96359a88dcef-disk0.vmdk
 ./68802f5f-84b1-42ad-8dca-4de7e83324e2:
 total 448860
 -rw-r--r--. 1 root root 459168256 Dec 12 22:30 
 68802f5f-84b1-42ad-8dca-4de7e83324e2-disk0.vmdk
 -rw-r--r--. 1 root root  6454 Dec 12 22:30 
 68802f5f-84b1-42ad-8dca-4de7e83324e2.ovf
 [root@Rack3Host8 18]#
 Following exception seen in the management server logs:
 2013-12-12 20:23:13,021 DEBUG [c.c.a.t.Request] (AgentManager-Handler-2:null) 
 Seq 5-813367309: Processing:  { Ans: , MgmtId: 95307354844397, via: 5, Ver: 
 v1, Flags: 10, 
 [{org.apache.cloudstack.storage.command.CopyCmdAnswer:{result:false,details:backup
  snapshot exception: Exception: java.lang.Exception\nMessage: Unable to 
 finish the whole process to package as a OVA file\n,wait:0}}] }
 2013-12-12 20:23:13,022 DEBUG [c.c.a.t.Request] (Job-Executor-1:ctx-83fb69a5 
 ctx-51e56052) Seq 5-813367309: Received:  { Ans: , MgmtId: 95307354844397, 
 via: 5, Ver: v1, Flags: 10, { CopyCmdAnswer } }
 2013-12-12 20:23:13,041 DEBUG [c.c.s.s.SnapshotManagerImpl] 
 (Job-Executor-1:ctx-83fb69a5 ctx-51e56052) 

[jira] [Resolved] (CLOUDSTACK-5482) Vmware - When NFS was down for about 1 hour while snapshots were in progress, the snapshot job failed when NFS was brought up, leaving behind snapshots in CreatedOnPr

2014-11-17 Thread edison su (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

edison su resolved CLOUDSTACK-5482.
---
Resolution: Duplicate

 Vmware - When NFS was down for about 1 hour while snapshots were in 
 progress, the snapshot job failed when NFS was brought up, leaving behind 
 snapshots in CreatedOnPrimary state.
 -

 Key: CLOUDSTACK-5482
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5482
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Build from 4.3
Reporter: Sangeetha Hariharan
Assignee: edison su
 Fix For: 4.4.0, 4.5.0

 Attachments: nfs12down.rar, vmware.rar, vmware.rar


 Set up:
 Advanced Zone with 2 ESXi 5.1 hosts.
 Steps to reproduce the problem:
 1. Deploy 5 VMs on each of the hosts, so we start with 11 VMs.
 2. Start concurrent snapshots for the ROOT volumes of all the VMs.
 3. Shut down the Secondary storage server while the snapshots are in 
 progress.
 4. Bring the Secondary storage server up after 1 hour.
 When the secondary storage was down, 2 of the snapshots were already 
 completed, 5 of them were in progress, and the other 4 had not started yet.
 Once the secondary store was brought up, the snapshots that were in 
 progress actually continued to download to secondary storage and succeeded. 
 But the other 4 snapshots errored out.
 mysql> select volume_id,status,created from snapshots;
 +-----------+------------------+---------------------+
 | volume_id | status           | created             |
 +-----------+------------------+---------------------+
 |        22 | BackedUp         | 2013-12-12 23:24:13 |
 |        21 | Destroyed        | 2013-12-12 23:24:13 |
 |        20 | BackedUp         | 2013-12-12 23:24:14 |
 |        19 | Destroyed        | 2013-12-12 23:24:14 |
 |        18 | BackedUp         | 2013-12-12 23:24:14 |
 |        17 | BackedUp         | 2013-12-12 23:24:14 |
 |        16 | BackedUp         | 2013-12-12 23:24:14 |
 |        14 | BackedUp         | 2013-12-12 23:24:15 |
 |        25 | BackedUp         | 2013-12-12 23:24:15 |
 |        24 | BackedUp         | 2013-12-12 23:24:15 |
 |        23 | BackedUp         | 2013-12-12 23:24:15 |
 |        22 | CreatedOnPrimary | 2013-12-12 23:53:38 |
 |        21 | BackedUp         | 2013-12-12 23:53:38 |
 |        20 | BackedUp         | 2013-12-12 23:53:38 |
 |        19 | BackedUp         | 2013-12-12 23:53:39 |
 |        18 | CreatedOnPrimary | 2013-12-12 23:53:39 |
 |        17 | CreatedOnPrimary | 2013-12-12 23:53:40 |
 |        16 | CreatedOnPrimary | 2013-12-12 23:53:40 |
 |        14 | BackedUp         | 2013-12-12 23:53:40 |
 |        25 | BackedUp         | 2013-12-12 23:53:41 |
 |        24 | BackedUp         | 2013-12-12 23:53:41 |
 |        23 | BackedUp         | 2013-12-12 23:53:42 |
 |        21 | BackedUp         | 2013-12-13 00:53:37 |
 |        19 | BackedUp         | 2013-12-13 00:53:38 |
 +-----------+------------------+---------------------+
 24 rows in set (0.00 sec)
 This leaves behind incomplete snapshots. The directory does not have an ovf 
 file and has an incomplete vmdk file.
 [root@Rack3Host8 18]# ls -ltR
 .:
 total 12
 drwxr-xr-x. 2 root root 4096 Dec 12 22:56 36d7964c-e545-41d7-b303-96359a88dcef
 drwxr-xr-x. 2 root root 4096 Dec 12 22:30 68802f5f-84b1-42ad-8dca-4de7e83324e2
 ./36d7964c-e545-41d7-b303-96359a88dcef:
 total 403256
 -rw-r--r--. 1 root root 412524288 Dec 13 00:20 
 36d7964c-e545-41d7-b303-96359a88dcef-disk0.vmdk
 ./68802f5f-84b1-42ad-8dca-4de7e83324e2:
 total 448860
 -rw-r--r--. 1 root root 459168256 Dec 12 22:30 
 68802f5f-84b1-42ad-8dca-4de7e83324e2-disk0.vmdk
 -rw-r--r--. 1 root root  6454 Dec 12 22:30 
 68802f5f-84b1-42ad-8dca-4de7e83324e2.ovf
 [root@Rack3Host8 18]#
 Following exception seen in the management server logs:
 2013-12-12 20:23:13,021 DEBUG [c.c.a.t.Request] (AgentManager-Handler-2:null) 
 Seq 5-813367309: Processing:  { Ans: , MgmtId: 95307354844397, via: 5, Ver: 
 v1, Flags: 10, 
 [{org.apache.cloudstack.storage.command.CopyCmdAnswer:{result:false,details:backup
  snapshot exception: Exception: java.lang.Exception\nMessage: Unable to 
 finish the whole process to package as a OVA file\n,wait:0}}] }
 2013-12-12 20:23:13,022 DEBUG [c.c.a.t.Request] (Job-Executor-1:ctx-83fb69a5 
 ctx-51e56052) Seq 5-813367309: Received:  { Ans: , MgmtId: 95307354844397, 
 via: 5, Ver: v1, Flags: 10, { CopyCmdAnswer } }
 2013-12-12 20:23:13,041 DEBUG [c.c.s.s.SnapshotManagerImpl] 
 (Job-Executor-1:ctx-83fb69a5 ctx-51e56052) Failed to create snapshot
 

[jira] [Resolved] (CLOUDSTACK-3815) SNAPSHOT.CREATE event's states are not registered on the events table

2014-11-17 Thread edison su (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

edison su resolved CLOUDSTACK-3815.
---
Resolution: Cannot Reproduce

Tried on 4.5, can't reproduce it any more.

 SNAPSHOT.CREATE event's states are not registered on the events table 
 

 Key: CLOUDSTACK-3815
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3815
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Snapshot
Affects Versions: 4.2.0
Reporter: Chandan Purushothama
Assignee: edison su
Priority: Minor
 Fix For: 4.5.0


 I see only the Scheduled state of the event registered.
 The Created, Started, and Completed states of the event are missing.
 mysql> select 
 id,type,state,description,user_id,account_id,domain_id,created,level,start_id,parameters,archived
 from event where type like 'SNAPSHOT.CREATE';
 +----+-----------------+-----------+---------------------------------+---------+------------+-----------+---------------------+-------+----------+------------+----------+
 | id | type            | state     | description                     | user_id | account_id | domain_id | created             | level | start_id | parameters | archived |
 +----+-----------------+-----------+---------------------------------+---------+------------+-----------+---------------------+-------+----------+------------+----------+
 | 76 | SNAPSHOT.CREATE | Scheduled | creating snapshot for volume: 3 |       3 |          3 |         1 | 2013-07-24 21:32:15 | INFO  |        0 | NULL       |        0 |
 +----+-----------------+-----------+---------------------------------+---------+------------+-----------+---------------------+-------+----------+------------+----------+
 1 row in set (0.01 sec)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-5429) KVM - Primary store down/Network Failure - Host reboot attempts hang because the primary store is down.

2014-11-17 Thread Marcus Sorensen (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215462#comment-14215462
 ] 

Marcus Sorensen commented on CLOUDSTACK-5429:
-

No, that does not work. VMs cannot be cleanly shut down (or even forced off) if 
their storage is hanging: the qemu processes will be in D state and 
unresponsive. A forced reboot of the host via IPMI or a sysrq trigger, or 
something like that, would be necessary, and the mgmt server would need to 
recognize that this has happened so the VMs can start elsewhere safely.
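A sketch of the sysrq escape hatch mentioned above, under the assumption that an immediate sysrq reboot ('b') bypasses unmounting entirely and therefore cannot hang on a dead NFS mount. The writes require root on a real host, so the snippet only prints the plan by default.

```python
# Writes needed for an immediate kernel-level reboot via sysrq:
# enable sysrq, then trigger 'b' (reboot without sync or unmount).
SYSRQ_WRITES = [
    ("/proc/sys/kernel/sysrq", "1"),  # enable all sysrq functions
    ("/proc/sysrq-trigger", "b"),     # 'b': reboot immediately
]

def apply(writes, dry_run=True):
    for path, value in writes:
        if dry_run:
            print(f"echo {value} > {path}")
        else:
            with open(path, "w") as f:  # requires root
                f.write(value)

apply(SYSRQ_WRITES)  # dry run: prints the equivalent shell commands
```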

 KVM - Primary store down/Network Failure - Host reboot attempts hang because 
 the primary store is down.
 -

 Key: CLOUDSTACK-5429
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5429
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Build from 4.3
Reporter: Sangeetha Hariharan
Assignee: edison su
Priority: Critical
 Fix For: 4.4.0

 Attachments: kvm-networkshutdown.png, kvmhostreboot.png, psdown.rar


 KVM - Primary store down - Host reboot attempts hang because the primary 
 store is down.
 Set up:
 Advanced zone with KVM (RHEL 6.3) hosts.
 Steps to reproduce the problem:
 1. Deploy a few VMs on each of the hosts with a 10 GB ROOT volume size, so 
 we start with 10 VMs.
 2. Create snapshots for the ROOT volumes.
 3. While a snapshot is still in progress, make the primary storage 
 unavailable for 10 minutes.
 This causes the KVM hosts to reboot.
 But the reboot of the KVM host is not successful.
 It is stuck trying to unmount the NFS mount points.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-7929) Unhandled exception when setting negative value for throttling rate while creating network offering

2014-11-17 Thread Anshul Gangwar (JIRA)
Anshul Gangwar created CLOUDSTACK-7929:
--

 Summary: Unhandled exception when setting negative value for 
throttling rate while creating network offering
 Key: CLOUDSTACK-7929
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7929
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar




Steps


Create a network offering and specify -1 for network throttling rate.

Result
=
The exception is not handled properly: a DB exception is thrown, exposing the 
DB column names in the logs and the UI.

Expected Result
=

-1 is generally an acceptable input for signifying an infinite or 
not-applicable value. So we should allow -1 and translate it appropriately as 
"no throttling applied". Or, if we don't, we should handle the input correctly 
and return a suitable error message to the user.
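A hypothetical validation sketch of that expected behavior: accept -1 and translate it to "no throttling", and reject other negatives with a clear user-facing message instead of letting the INSERT fail with a raw DB exception. The function name and return convention are illustrative, not CloudStack's actual code.

```python
def normalize_network_rate(raw):
    # Hypothetical sketch: -1 means "unthrottled"; any other negative
    # value is rejected up front with a user-facing error, so the bad
    # value never reaches the network_offerings INSERT.
    rate = int(raw)  # a non-numeric string raises ValueError here too
    if rate == -1:
        return None  # None = throttling not applied
    if rate < 0:
        raise ValueError("network rate must be -1 (unlimited) or >= 0")
    return rate

print(normalize_network_rate("-1"))   # None: no throttling
print(normalize_network_rate("200"))  # 200
```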

Following is the exception seen presently in the logs (or through UI):

[{com.cloud.agent.api.AgentControlAnswer:{result:true,wait:0}}] }
2014-11-13 13:58:31,414 DEBUG [c.c.a.ApiServlet] (catalina-exec-8:ctx-ac230e40) 
===START=== 10.144.7.5 – POST 
command=createNetworkOfferingresponse=jsonsessionkey=vL5F3A1A1pr98OOTv7eei
G2jvBI%3D
2014-11-13 13:58:31,428 DEBUG [c.c.c.ConfigurationManagerImpl] 
(catalina-exec-8:ctx-ac230e40 ctx-60a6474c) Adding Firewall service with 
provider VirtualRouter
2014-11-13 13:58:31,432 DEBUG [c.c.c.ConfigurationManagerImpl] 
(catalina-exec-8:ctx-ac230e40 ctx-60a6474c) Adding network offering [Network 
Offering [0-Guest-test]
2014-11-13 13:58:31,435 DEBUG [c.c.u.d.T.Transaction] 
(catalina-exec-8:ctx-ac230e40 ctx-60a6474c) Rolling back the transaction: Time 
= 3 Name = catalina-exec-8; called by -TransactionLegac
y.rollback:902-TransactionLegacy.removeUpTo:845-TransactionLegacy.close:669-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke
:91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy79.persist:-1-ConfigurationManagerImpl$11.doInTransaction:4218-ConfigurationManagerImpl$11.doInTransaction:420
9-Transaction$2.doInTransaction:57
2014-11-13 13:58:31,442 ERROR [c.c.a.ApiServer] (catalina-exec-8:ctx-ac230e40 
ctx-60a6474c) unhandled exception executing api command: 
[Ljava.lang.String;@61bc5278
com.cloud.utils.exception.CloudRuntimeException: DB Exception on: 
com.mysql.jdbc.JDBC4PreparedStatement@27a103c3: INSERT INTO network_offerings 
(network_offerings.name, network_offerings.un
ique_name, network_offerings.display_text, network_offerings.nw_rate, 
network_offerings.mc_rate, network_offerings.traffic_type, 
network_offerings.specify_vlan, network_offerings.system_onl
y, network_offerings.service_offering_id, network_offerings.tags, 
network_offerings.default, network_offerings.availability, 
network_offerings.state, network_offerings.created, network_offe
rings.guest_type, network_offerings.dedicated_lb_service, 
network_offerings.shared_source_nat_service, 
network_offerings.specify_ip_ranges, network_offerings.sort_key, 
network_offerings.uui
d, network_offerings.redundant_router_service, network_offerings.conserve_mode, 
network_offerings.elastic_ip_service, 
network_offerings.eip_associate_public_ip, network_offerings.elastic_lb
_service, network_offerings.inline, network_offerings.is_persistent, 
network_offerings.egress_default_policy, 
network_offerings.concurrent_connections, network_offerings.keep_alive_enabled,
network_offerings.supports_streched_l2, network_offerings.internal_lb, 
network_offerings.public_lb) VALUES (_binary'test', _binary'test', 
_binary'test', -1, 10, 'Guest', 0, 0, null, null,
0, 'Optional', 'Disabled', '2014-11-13 08:28:31', 'Isolated', 0, 0, 0, 0, 
_binary'f8bf35f5-dd77-4fa8-83ff-af1f1e85ece3', 0, 1, 0, 0, 0, 0, 0, 1, null, 0, 
0, 0, 0)
at com.cloud.utils.db.GenericDaoBase.persist(GenericDaoBase.java:1400)
at 
com.cloud.offerings.dao.NetworkOfferingDaoImpl.persist(NetworkOfferingDaoImpl.java:181)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at 
com.cloud.utils.db.TransactionContextInterceptor.invoke(TransactionContextInterceptor.java:34)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
at 

[jira] [Created] (CLOUDSTACK-7930) Do not allow setting invalid values for global settings of type Integer or Float

2014-11-17 Thread Anshul Gangwar (JIRA)
Anshul Gangwar created CLOUDSTACK-7930:
--

 Summary: Do not allow setting invalid values for global settings 
of type Integer or Float
 Key: CLOUDSTACK-7930
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7930
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
Priority: Critical


Setting an Integer/Float/Boolean global setting to an invalid value results in 
a NullPointerException or NumberFormatException later in the code.

For example, setting the network.throttling.rate parameter to null results in 
a deploy-VM failure with a message of "null" and no other exception.
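A hypothetical fail-fast sketch of the requested behavior: validate a global setting against its declared type (and reject null) at update time, so a bad value never surfaces later as a NumberFormatException or NullPointerException during deploy. The function and type names are illustrative, not CloudStack's actual API.

```python
# Map of declared setting types to parsers; an unknown type or an
# unparseable value is converted into a clear user-facing error.
CASTS = {
    "Integer": int,
    "Float": float,
    "Boolean": lambda s: {"true": True, "false": False}[s.lower()],
}

def validate_global_setting(name, value, type_name):
    if value is None:
        raise ValueError(f"{name}: value may not be null")
    try:
        return CASTS[type_name](value)
    except (KeyError, ValueError):
        raise ValueError(f"{name}: '{value}' is not a valid {type_name}")

print(validate_global_setting("network.throttling.rate", "200", "Integer"))
```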




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-5578) KVM - Network down - When the host loses network connectivity, reboot gets stuck while unmounting primary

2014-11-17 Thread Kishan Kavala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishan Kavala resolved CLOUDSTACK-5578.
---
Resolution: Won't Fix

Cannot be fixed based on discussion in 
https://issues.apache.org/jira/browse/CLOUDSTACK-5429

 KVM - Network down - When the host loses network connectivity, reboot stuck 
 while unmounting primary
 --

 Key: CLOUDSTACK-5578
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5578
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.2.0
 Environment: Build from 4.3
Reporter: Sangeetha Hariharan
Assignee: Kishan Kavala
Priority: Critical
 Fix For: 4.5.0

 Attachments: DisconnectedHost.png, kvm-hostdisconnect.rar, 
 nfsUmount.jpg


 KVM - Network down - When the host loses network connectivity, it is not 
 able to fence itself.
 Steps to reproduce the problem:
 Set up - Advanced zone with 2 RHEL 6.3 hosts in a cluster.
 Deploy ~10 VMs.
 Simulate a network disconnect on the host (ifdown em1).
 The host gets marked as Down and all the VMs get HA-ed to the other host.
 On the KVM host which lost connectivity, the attempt to shut itself down fails:
 it was not able to umount the primary store.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7929) Unhandled exception when setting negative value for throttling rate while creating network offering

2014-11-17 Thread Anshul Gangwar (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshul Gangwar updated CLOUDSTACK-7929:
---
Fix Version/s: 4.5.0

 Unhandled exception when setting negative value for throttling rate while 
 creating network offering
 ---

 Key: CLOUDSTACK-7929
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7929
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
 Fix For: 4.5.0


 Steps
 
 Create a network offering and specify -1 for network throttling rate.
 Result
 =
 The exception is not handled properly: a DB exception is thrown, exposing the 
 DB column names in the logs and the UI.
 Expected Result
 =
 -1 is generally an acceptable input for signifying infinite or not-applicable 
 values, so we should allow -1 and translate it appropriately as "no throttling 
 applied". Otherwise, we should handle the input correctly and throw a suitable 
 error message for the user.
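The expected behaviour above can be sketched as a small normalization step. This is an invented illustration, not the actual CloudStack code; it assumes the convention that 0 means "no throttling applied":

```java
// Hypothetical sketch: normalize a user-supplied network throttling rate.
// -1 is translated to "no throttling" (modeled here as 0 = unlimited); any
// other negative value is rejected with a clear error instead of being
// passed through to the DB layer, where it currently causes a DB exception.
public final class ThrottlingRate {
    public static int normalize(int requestedRate) {
        if (requestedRate == -1) {
            return 0; // assumed convention: 0 means no throttling applied
        }
        if (requestedRate < 0) {
            throw new IllegalArgumentException(
                "networkrate must be -1 (unlimited) or non-negative, got: " + requestedRate);
        }
        return requestedRate;
    }
}
```

Either option in the expected result (translate -1, or reject it cleanly) keeps the raw negative value out of the INSERT statement shown below.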
 Following is the exception seen presently in the logs (or through UI):
 [{com.cloud.agent.api.AgentControlAnswer:{result:true,wait:0}}] }
 2014-11-13 13:58:31,414 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-8:ctx-ac230e40) ===START=== 10.144.7.5 – POST 
 command=createNetworkOffering&response=json&sessionkey=vL5F3A1A1pr98OOTv7eeiG2jvBI%3D
 2014-11-13 13:58:31,428 DEBUG [c.c.c.ConfigurationManagerImpl] 
 (catalina-exec-8:ctx-ac230e40 ctx-60a6474c) Adding Firewall service with 
 provider VirtualRouter
 2014-11-13 13:58:31,432 DEBUG [c.c.c.ConfigurationManagerImpl] 
 (catalina-exec-8:ctx-ac230e40 ctx-60a6474c) Adding network offering [Network 
 Offering [0-Guest-test]
 2014-11-13 13:58:31,435 DEBUG [c.c.u.d.T.Transaction] 
 (catalina-exec-8:ctx-ac230e40 ctx-60a6474c) Rolling back the transaction: 
 Time = 3 Name = catalina-exec-8; called by -TransactionLegac
 y.rollback:902-TransactionLegacy.removeUpTo:845-TransactionLegacy.close:669-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke
 :91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy79.persist:-1-ConfigurationManagerImpl$11.doInTransaction:4218-ConfigurationManagerImpl$11.doInTransaction:420
 9-Transaction$2.doInTransaction:57
 2014-11-13 13:58:31,442 ERROR [c.c.a.ApiServer] (catalina-exec-8:ctx-ac230e40 
 ctx-60a6474c) unhandled exception executing api command: 
 [Ljava.lang.String;@61bc5278
 com.cloud.utils.exception.CloudRuntimeException: DB Exception on: 
 com.mysql.jdbc.JDBC4PreparedStatement@27a103c3: INSERT INTO network_offerings 
 (network_offerings.name, network_offerings.un
 ique_name, network_offerings.display_text, network_offerings.nw_rate, 
 network_offerings.mc_rate, network_offerings.traffic_type, 
 network_offerings.specify_vlan, network_offerings.system_onl
 y, network_offerings.service_offering_id, network_offerings.tags, 
 network_offerings.default, network_offerings.availability, 
 network_offerings.state, network_offerings.created, network_offe
 rings.guest_type, network_offerings.dedicated_lb_service, 
 network_offerings.shared_source_nat_service, 
 network_offerings.specify_ip_ranges, network_offerings.sort_key, 
 network_offerings.uui
 d, network_offerings.redundant_router_service, 
 network_offerings.conserve_mode, network_offerings.elastic_ip_service, 
 network_offerings.eip_associate_public_ip, network_offerings.elastic_lb
 _service, network_offerings.inline, network_offerings.is_persistent, 
 network_offerings.egress_default_policy, 
 network_offerings.concurrent_connections, 
 network_offerings.keep_alive_enabled,
 network_offerings.supports_streched_l2, network_offerings.internal_lb, 
 network_offerings.public_lb) VALUES (_binary'test', _binary'test', 
 _binary'test', -1, 10, 'Guest', 0, 0, null, null,
 0, 'Optional', 'Disabled', '2014-11-13 08:28:31', 'Isolated', 0, 0, 0, 0, 
 _binary'f8bf35f5-dd77-4fa8-83ff-af1f1e85ece3', 0, 1, 0, 0, 0, 0, 0, 1, null, 
 0, 0, 0, 0)
 at com.cloud.utils.db.GenericDaoBase.persist(GenericDaoBase.java:1400)
 at 
 com.cloud.offerings.dao.NetworkOfferingDaoImpl.persist(NetworkOfferingDaoImpl.java:181)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
 at 
 

[jira] [Updated] (CLOUDSTACK-7930) Do not allow to set invalid values for global settings which are of type Integer, Float

2014-11-17 Thread Anshul Gangwar (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshul Gangwar updated CLOUDSTACK-7930:
---
Fix Version/s: 4.5.0

 Do not allow to set invalid values for global settings which are of type 
 Integer, Float
 ---

 Key: CLOUDSTACK-7930
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7930
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
Priority: Critical
 Fix For: 4.5.0


 Setting Integer/Float/Boolean global settings to invalid values results in a 
 NullPointerException or NumberFormatException later in the code.
 Setting the network.throttling.rate parameter to null results in a deploy-VM 
 failure whose only message is "null", with no other exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-7931) Setting Null for global network throttling params doesn't trigger suitable error, fails silently

2014-11-17 Thread Anshul Gangwar (JIRA)
Anshul Gangwar created CLOUDSTACK-7931:
--

 Summary: Setting Null for global network throttling params doesn't 
trigger suitable error, fails silently
 Key: CLOUDSTACK-7931
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7931
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
Priority: Critical




Set the global configs network.throttling.rate and vm.network.throttling.rate 
to a NULL value.
Then launch a VM in a new network.

Result
=
VM fails to launch, but it fails without any ERROR logs or suitable exceptions.
A corresponding INFO log seems to have nothing but "null".

Generally, NULL is an acceptable value for a few global configs. Where it is 
not, we should not allow such a value to be set for the config; the API should 
error out suitably. This is one issue.

Further, an appropriate error should be thrown when deploy VM fails to design 
the network. The error in this case is not handled suitably, and there is 
nothing in the ERROR logs either.

Looking at the below logs, it's impossible to figure out the reason for the 
failure of deploy VM. So at some point, if a user inadvertently sets it to 
NULL, neither does the updateConfiguration API result in error nor does the 
deployVirtualMachine throw a suitable error.
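A minimal sketch of the fail-fast behaviour requested here (invented names, not the actual CloudStack config subsystem): a required config that resolves to NULL raises a descriptive error immediately, instead of surfacing as a bare "null" later in the deploy-VM flow.

```java
// Hypothetical sketch: look up a global config that must not be NULL and
// fail with a descriptive message if it is, so the failure is attributable
// in the logs rather than silent.
import java.util.HashMap;
import java.util.Map;

public final class RequiredConfig {
    private static final Map<String, String> CONFIGS = new HashMap<>();

    public static void set(String key, String value) {
        CONFIGS.put(key, value);
    }

    public static String require(String key) {
        String value = CONFIGS.get(key);
        if (value == null) {
            throw new IllegalStateException(
                "Global config '" + key + "' is NULL; set a valid value before deploying VMs");
        }
        return value;
    }
}
```

The same guard could be applied at updateConfiguration time, so a non-nullable config can never be set to NULL in the first place.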

Here's the log:

2014-11-13 13:29:15,584 DEBUG [c.c.a.ApiServlet] 
(catalina-exec-18:ctx-285ce7d9) ===START=== 10.144.7.5 – GET 
command=createNetwork&response=json&sessionkey=6ZKk3l0f4pdKU1yfDZxwF31YgCM%3D&networkOfferingId=e8746c6b-e945-4084-9290-37cea253e262&name=newtest1&displayText=newtest1&zoneId=b642a92a-3480-4818-99bf-6546a28df624&_=1415866216789
2014-11-13 13:29:15,617 DEBUG [o.a.c.n.c.m.ContrailGuru] 
(catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) Refusing to design this network
2014-11-13 13:29:15,617 DEBUG [c.c.n.g.MidoNetGuestNetworkGuru] 
(catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) design called
2014-11-13 13:29:15,618 DEBUG [c.c.n.g.MidoNetGuestNetworkGuru] 
(catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) Refusing to design this network, 
the physical isolation type is not MIDO
2014-11-13 13:29:15,619 DEBUG [c.c.n.g.NiciraNvpGuestNetworkGuru] 
(catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) Refusing to design this network
2014-11-13 13:29:15,620 DEBUG [o.a.c.n.o.OpendaylightGuestNetworkGuru] 
(catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) Refusing to design this network
2014-11-13 13:29:15,621 DEBUG [c.c.n.g.OvsGuestNetworkGuru] 
(catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) Refusing to design this network
2014-11-13 13:29:15,644 DEBUG [o.a.c.n.g.SspGuestNetworkGuru] 
(catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) SSP not configured to be active
2014-11-13 13:29:15,645 DEBUG [c.c.n.g.BrocadeVcsGuestNetworkGuru] 
(catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) Refusing to design this network
2014-11-13 13:29:15,646 DEBUG [c.c.n.g.NuageVspGuestNetworkGuru] 
(catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) Refusing to design this network
2014-11-13 13:29:15,648 DEBUG [o.a.c.e.o.NetworkOrchestrator] 
(catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) Releasing lock for 
Acct[467a4f66-698f-11e4-be18-42407779c24b-admin]
2014-11-13 13:29:15,688 DEBUG [c.c.a.ApiServlet] (catalina-exec-18:ctx-285ce7d9 
ctx-5245ccb7) ===END=== 10.144.7.5 – GET 
command=createNetwork&response=json&sessionkey=6ZKk3l0f4pdKU1yfDZxwF31YgCM%3D&networkOfferingId=e8746c6b-e945-4084-9290-37cea253e262&name=newtest1&displayText=newtest1&zoneId=b642a92a-3480-4818-99bf-6546a28df624&_=1415866216789
2014-11-13 13:29:15,727 DEBUG [c.c.a.ApiServlet] (catalina-exec-9:ctx-54781545) 
===START=== 10.144.7.5 – GET 
command=deployVirtualMachine&response=json&sessionkey=6ZKk3l0f4pdKU1yfDZxwF31YgCM%3D&zoneid=b642a92a-3480-4818-99bf-6546a28df624&templateid=f7df5ef0-698e-11e4-be18-42407779c24b&hypervisor=XenServer&serviceofferingid=04840780-04d0-4b41-847a-dda08ad460f4&iptonetworklist%5B0%5D.networkid=c0e24f7a-fe03-4a3b-a11e-ab29150b803b&displayname=throttlingvm1&name=throttlingvm1&_=1415866216945
2014-11-13 13:29:15,753 DEBUG [c.c.n.NetworkModelImpl] 
(catalina-exec-9:ctx-54781545 ctx-e87f4810) Service SecurityGroup is not 
supported in the network id=209
2014-11-13 13:29:15,777 DEBUG [c.c.v.UserVmManagerImpl] 
(catalina-exec-9:ctx-54781545 ctx-e87f4810) Allocating in the DB for vm
2014-11-13 13:29:15,793 DEBUG [c.c.v.VirtualMachineManagerImpl] 
(catalina-exec-9:ctx-54781545 ctx-e87f4810) Allocating entries for VM: 
VM[User|i-2-22-VM]
2014-11-13 13:29:15,794 DEBUG [c.c.v.VirtualMachineManagerImpl] 
(catalina-exec-9:ctx-54781545 ctx-e87f4810) Allocating nics for 
VM[User|i-2-22-VM]
2014-11-13 13:29:15,794 DEBUG [o.a.c.e.o.NetworkOrchestrator] 
(catalina-exec-9:ctx-54781545 ctx-e87f4810) Allocating nic for vm 
VM[User|i-2-22-VM] in network Ntwk[209|Guest|8] with requested profile 

[jira] [Updated] (CLOUDSTACK-7932) [Hyper-V] Wrong semantics for isVmAlive() method in HypervInvestigator

2014-11-17 Thread Anshul Gangwar (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshul Gangwar updated CLOUDSTACK-7932:
---
Fix Version/s: 4.5.0

 [Hyper-V] Wrong semantics for isVmAlive() method in HypervInvestigator
 --

 Key: CLOUDSTACK-7932
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7932
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
 Fix For: 4.5.0


 The isVmAlive() method should return null when it is unable to conclusively 
 determine whether the VM is alive.
 I ran some tests using the Simulator and found that HypervInvestigator 
 determined that the VM is not alive. How can HypervInvestigator determine the 
 status of a VM running on the Simulator or any other hypervisor?
 2014-11-15 13:35:21,692 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) HypervInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? false
 Full logs for the HA worker thread
 2014-11-15 13:35:21,642 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) Processing 
 HAWork[1-HA-1-Running-Investigating]
 2014-11-15 13:35:21,648 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) HA on VM[SecondaryStorageVm|s-1-VM]
 2014-11-15 13:35:21,658 DEBUG [c.c.h.CheckOnAgentInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Unable to reach the agent for 
 VM[SecondaryStorageVm|s-1-VM]: Resource [Host:1] is unreachable: Host 1: Host 
 with specified id is not in the right state: Down
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) SimpleInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) XenServerInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 DEBUG [c.c.h.UserVmDomRInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Not a User Vm, unable to determine state of 
 VM[SecondaryStorageVm|s-1-VM] returning null
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) PingInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 DEBUG [c.c.h.ManagementIPSystemVMInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Testing if VM[SecondaryStorageVm|s-1-VM] is 
 alive
 2014-11-15 13:35:21,670 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Sending { Cmd , MgmtId: 1, via: 
 2(SimulatedAgent.08984ca6-967c-49b0-84c1-968076cd6992), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,670 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Executing: { Cmd , MgmtId: 1, via: 
 2(SimulatedAgent.08984ca6-967c-49b0-84c1-968076cd6992), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,675 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Received: { Ans: , MgmtId: 1, via: 2, Ver: 
 v1, Flags: 10,
 { Answer } }
 2014-11-15 13:35:21,675 DEBUG [c.c.h.AbstractInvestigatorImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) host (172.16.15.74) cannot be pinged, 
 returning null ('I don't know')
 2014-11-15 13:35:21,678 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 3-248260929458798725: Sending { Cmd , MgmtId: 1, via: 
 3(SimulatedAgent.9bcff565-4ae7-492a-8e39-30d11f1cbbd7), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,679 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 3-248260929458798725: Executing: { Cmd , MgmtId: 1, via: 
 3(SimulatedAgent.9bcff565-4ae7-492a-8e39-30d11f1cbbd7), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,691 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 3-248260929458798725: Received: { Ans: , MgmtId: 1, via: 3, Ver: 
 v1, Flags: 10, { Answer }
 }
 2014-11-15 13:35:21,691 DEBUG [c.c.h.AbstractInvestigatorImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) host (172.16.15.74) cannot be pinged, 
 returning null ('I don't know')
 2014-11-15 13:35:21,691 DEBUG [c.c.h.ManagementIPSystemVMInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) unable to determine state of 
 VM[SecondaryStorageVm|s-1-VM] returning null
 2014-11-15 13:35:21,691 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) ManagementIPSysVMInvestigator found 
 

[jira] [Created] (CLOUDSTACK-7932) [Hyper-V] Wrong semantics for isVmAlive() method in HypervInvestigator

2014-11-17 Thread Anshul Gangwar (JIRA)
Anshul Gangwar created CLOUDSTACK-7932:
--

 Summary: [Hyper-V] Wrong semantics for isVmAlive() method in 
HypervInvestigator
 Key: CLOUDSTACK-7932
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7932
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar




The isVmAlive() method should return null when it is unable to conclusively 
determine whether the VM is alive.

I ran some tests using the Simulator and found that HypervInvestigator 
determined that the VM is not alive. How can HypervInvestigator determine the 
status of a VM running on the Simulator or any other hypervisor?
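The intended tri-state semantics can be sketched like this. The class, enum, and parameters below are invented stand-ins for the real investigator and its agent calls; the point is that anything inconclusive maps to null (as the other investigators in the log below do) rather than to false:

```java
// Hypothetical sketch of correct investigator semantics: Boolean.TRUE/FALSE
// only for a conclusive answer, null ("I don't know") otherwise -- e.g. when
// the host is not a Hyper-V host at all, or cannot be reached.
public final class HypervStyleInvestigator {
    public enum HostType { HYPERV, SIMULATOR, XENSERVER }

    // hostReachable and vmRunning stand in for the real agent round-trips.
    public static Boolean isVmAlive(HostType host, boolean hostReachable, boolean vmRunning) {
        if (host != HostType.HYPERV) {
            return null; // not our hypervisor: inconclusive, defer to other investigators
        }
        if (!hostReachable) {
            return null; // cannot ask the host: inconclusive
        }
        return vmRunning ? Boolean.TRUE : Boolean.FALSE;
    }
}
```

Returning false for a VM on a Simulator host, as in the log below, wrongly tells the HA manager the VM is conclusively dead.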

2014-11-15 13:35:21,692 INFO [c.c.h.HighAvailabilityManagerImpl] 
(HA-Worker-1:ctx-e0b5183c work-1) HypervInvestigator found 
VM[SecondaryStorageVm|s-1-VM]to be alive? false

Full logs for the HA worker thread

2014-11-15 13:35:21,642 INFO [c.c.h.HighAvailabilityManagerImpl] 
(HA-Worker-1:ctx-e0b5183c work-1) Processing 
HAWork[1-HA-1-Running-Investigating]
2014-11-15 13:35:21,648 INFO [c.c.h.HighAvailabilityManagerImpl] 
(HA-Worker-1:ctx-e0b5183c work-1) HA on VM[SecondaryStorageVm|s-1-VM]
2014-11-15 13:35:21,658 DEBUG [c.c.h.CheckOnAgentInvestigator] 
(HA-Worker-1:ctx-e0b5183c work-1) Unable to reach the agent for 
VM[SecondaryStorageVm|s-1-VM]: Resource [Host:1] is unreachable: Host 1: Host 
with specified id is not in the right state: Down
2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
(HA-Worker-1:ctx-e0b5183c work-1) SimpleInvestigator found 
VM[SecondaryStorageVm|s-1-VM]to be alive? null
2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
(HA-Worker-1:ctx-e0b5183c work-1) XenServerInvestigator found 
VM[SecondaryStorageVm|s-1-VM]to be alive? null
2014-11-15 13:35:21,659 DEBUG [c.c.h.UserVmDomRInvestigator] 
(HA-Worker-1:ctx-e0b5183c work-1) Not a User Vm, unable to determine state of 
VM[SecondaryStorageVm|s-1-VM] returning null
2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
(HA-Worker-1:ctx-e0b5183c work-1) PingInvestigator found 
VM[SecondaryStorageVm|s-1-VM]to be alive? null
2014-11-15 13:35:21,659 DEBUG [c.c.h.ManagementIPSystemVMInvestigator] 
(HA-Worker-1:ctx-e0b5183c work-1) Testing if VM[SecondaryStorageVm|s-1-VM] is 
alive
2014-11-15 13:35:21,670 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
work-1) Seq 2-5786281096240955453: Sending { Cmd , MgmtId: 1, via: 
2(SimulatedAgent.08984ca6-967c-49b0-84c1-968076cd6992), Ver: v1, Flags: 100011, 
[{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
 }
2014-11-15 13:35:21,670 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
work-1) Seq 2-5786281096240955453: Executing: { Cmd , MgmtId: 1, via: 
2(SimulatedAgent.08984ca6-967c-49b0-84c1-968076cd6992), Ver: v1, Flags: 100011, 
[{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
 }
2014-11-15 13:35:21,675 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
work-1) Seq 2-5786281096240955453: Received: { Ans: , MgmtId: 1, via: 2, Ver: 
v1, Flags: 10,
{ Answer } }
2014-11-15 13:35:21,675 DEBUG [c.c.h.AbstractInvestigatorImpl] 
(HA-Worker-1:ctx-e0b5183c work-1) host (172.16.15.74) cannot be pinged, 
returning null ('I don't know')
2014-11-15 13:35:21,678 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
work-1) Seq 3-248260929458798725: Sending { Cmd , MgmtId: 1, via: 
3(SimulatedAgent.9bcff565-4ae7-492a-8e39-30d11f1cbbd7), Ver: v1, Flags: 100011, 
[{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
 }
2014-11-15 13:35:21,679 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
work-1) Seq 3-248260929458798725: Executing: { Cmd , MgmtId: 1, via: 
3(SimulatedAgent.9bcff565-4ae7-492a-8e39-30d11f1cbbd7), Ver: v1, Flags: 100011, 
[{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
 }
2014-11-15 13:35:21,691 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
work-1) Seq 3-248260929458798725: Received: { Ans: , MgmtId: 1, via: 3, Ver: 
v1, Flags: 10, { Answer }

}
2014-11-15 13:35:21,691 DEBUG [c.c.h.AbstractInvestigatorImpl] 
(HA-Worker-1:ctx-e0b5183c work-1) host (172.16.15.74) cannot be pinged, 
returning null ('I don't know')
2014-11-15 13:35:21,691 DEBUG [c.c.h.ManagementIPSystemVMInvestigator] 
(HA-Worker-1:ctx-e0b5183c work-1) unable to determine state of 
VM[SecondaryStorageVm|s-1-VM] returning null
2014-11-15 13:35:21,691 INFO [c.c.h.HighAvailabilityManagerImpl] 
(HA-Worker-1:ctx-e0b5183c work-1) ManagementIPSysVMInvestigator found 
VM[SecondaryStorageVm|s-1-VM]to be alive? null
2014-11-15 13:35:21,692 INFO [c.c.h.HighAvailabilityManagerImpl] 
(HA-Worker-1:ctx-e0b5183c work-1) KVMInvestigator found 
VM[SecondaryStorageVm|s-1-VM]to be alive? null
2014-11-15 13:35:21,692 INFO [c.c.h.HighAvailabilityManagerImpl] 
(HA-Worker-1:ctx-e0b5183c 

[jira] [Updated] (CLOUDSTACK-7931) Setting Null for global network throttling params doesn't trigger suitable error, fails silently

2014-11-17 Thread Anshul Gangwar (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshul Gangwar updated CLOUDSTACK-7931:
---
Fix Version/s: 4.5.0

 Setting Null for global network throttling params doesn't trigger suitable 
 error, fails silently
 

 Key: CLOUDSTACK-7931
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7931
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
Priority: Critical
 Fix For: 4.5.0


 Set the global configs network.throttling.rate and vm.network.throttling.rate 
 to a NULL value.
 Then launch a VM in a new network.
 Result
 =
 VM fails to launch, but it fails without any ERROR logs or suitable exceptions.
 A corresponding INFO log seems to have nothing but "null".
 Generally, NULL is an acceptable value for a few global configs. Where it is 
 not, we should not allow such a value to be set for the config; the API should 
 error out suitably. This is one issue.
 Further, an appropriate error should be thrown when deploy VM fails to design 
 the network. The error in this case is not handled suitably, and there is 
 nothing in the ERROR logs either.
 Looking at the below logs, it's impossible to figure out the reason for the 
 failure of deploy VM. So at some point, if a user inadvertently sets it to 
 NULL, neither does the updateConfiguration API result in error nor does the 
 deployVirtualMachine throw a suitable error.
 Here's the log:
 2014-11-13 13:29:15,584 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-18:ctx-285ce7d9) ===START=== 10.144.7.5 – GET 
 command=createNetwork&response=json&sessionkey=6ZKk3l0f4pdKU1yfDZxwF31YgCM%3D&networkOfferingId=e8746c6b-e945-4084-9290-37cea253e262&name=newtest1&displayText=newtest1&zoneId=b642a92a-3480-4818-99bf-6546a28df624&_=1415866216789
 2014-11-13 13:29:15,617 DEBUG [o.a.c.n.c.m.ContrailGuru] 
 (catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) Refusing to design this network
 2014-11-13 13:29:15,617 DEBUG [c.c.n.g.MidoNetGuestNetworkGuru] 
 (catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) design called
 2014-11-13 13:29:15,618 DEBUG [c.c.n.g.MidoNetGuestNetworkGuru] 
 (catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) Refusing to design this network, 
 the physical isolation type is not MIDO
 2014-11-13 13:29:15,619 DEBUG [c.c.n.g.NiciraNvpGuestNetworkGuru] 
 (catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) Refusing to design this network
 2014-11-13 13:29:15,620 DEBUG [o.a.c.n.o.OpendaylightGuestNetworkGuru] 
 (catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) Refusing to design this network
 2014-11-13 13:29:15,621 DEBUG [c.c.n.g.OvsGuestNetworkGuru] 
 (catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) Refusing to design this network
 2014-11-13 13:29:15,644 DEBUG [o.a.c.n.g.SspGuestNetworkGuru] 
 (catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) SSP not configured to be active
 2014-11-13 13:29:15,645 DEBUG [c.c.n.g.BrocadeVcsGuestNetworkGuru] 
 (catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) Refusing to design this network
 2014-11-13 13:29:15,646 DEBUG [c.c.n.g.NuageVspGuestNetworkGuru] 
 (catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) Refusing to design this network
 2014-11-13 13:29:15,648 DEBUG [o.a.c.e.o.NetworkOrchestrator] 
 (catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) Releasing lock for 
 Acct[467a4f66-698f-11e4-be18-42407779c24b-admin]
 2014-11-13 13:29:15,688 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-18:ctx-285ce7d9 ctx-5245ccb7) ===END=== 10.144.7.5 – GET 
 command=createNetwork&response=json&sessionkey=6ZKk3l0f4pdKU1yfDZxwF31YgCM%3D&networkOfferingId=e8746c6b-e945-4084-9290-37cea253e262&name=newtest1&displayText=newtest1&zoneId=b642a92a-3480-4818-99bf-6546a28df624&_=1415866216789
 2014-11-13 13:29:15,727 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-9:ctx-54781545) ===START=== 10.144.7.5 – GET 
 command=deployVirtualMachine&response=json&sessionkey=6ZKk3l0f4pdKU1yfDZxwF31YgCM%3D&zoneid=b642a92a-3480-4818-99bf-6546a28df624&templateid=f7df5ef0-698e-11e4-be18-42407779c24b&hypervisor=XenServer&serviceofferingid=04840780-04d0-4b41-847a-dda08ad460f4&iptonetworklist%5B0%5D.networkid=c0e24f7a-fe03-4a3b-a11e-ab29150b803b&displayname=throttlingvm1&name=throttlingvm1&_=1415866216945
 2014-11-13 13:29:15,753 DEBUG [c.c.n.NetworkModelImpl] 
 (catalina-exec-9:ctx-54781545 ctx-e87f4810) Service SecurityGroup is not 
 supported in the network id=209
 2014-11-13 13:29:15,777 DEBUG [c.c.v.UserVmManagerImpl] 
 (catalina-exec-9:ctx-54781545 ctx-e87f4810) Allocating in the DB for vm
 2014-11-13 13:29:15,793 DEBUG [c.c.v.VirtualMachineManagerImpl] 
 (catalina-exec-9:ctx-54781545 ctx-e87f4810) Allocating entries for VM: 
 VM[User|i-2-22-VM]
 2014-11-13 13:29:15,794 DEBUG 

[jira] [Updated] (CLOUDSTACK-6703) [Windows] Try to install as a normal java service (Spawn a java thread)

2014-11-17 Thread Kishan Kavala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishan Kavala updated CLOUDSTACK-6703:
--
Affects Version/s: (was: 4.5.0)

 [Windows] Try to install as a normal java service (Spawn a java thread)
 ---

 Key: CLOUDSTACK-6703
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6703
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Install and Setup
Reporter: Damodar Reddy T
Assignee: Damodar Reddy T
 Fix For: Future


 1. Currently it is started as a Tomcat service. Instead, try to spawn it as a 
 Java thread service.
 2. Try to add the CloudStack version somewhere in the service name or in the 
 registry keys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-6704) [Windows] Localization of the windows installer

2014-11-17 Thread Kishan Kavala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishan Kavala updated CLOUDSTACK-6704:
--
Affects Version/s: (was: 4.5.0)

 [Windows] Localization of the windows installer
 ---

 Key: CLOUDSTACK-6704
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6704
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Damodar Reddy T
Assignee: Damodar Reddy T
 Fix For: Future






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-6978) CLONE - [Windows] Can not create Template from ROOT snapshot For S3 Storage Server

2014-11-17 Thread Kishan Kavala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishan Kavala updated CLOUDSTACK-6978:
--
Affects Version/s: (was: 4.5.0)

 CLONE - [Windows] Can not create Template from ROOT snapshot For S3 Storage 
 Server
 --

 Key: CLOUDSTACK-6978
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6978
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: Future
Reporter: Damodar Reddy T
Assignee: Damodar Reddy T

 Steps to reproduce the issue
 1. Create a snapshot from the ROOT disk.
 2. Go to the above-created snapshot.
 3. Try to create a template out of it.
 It fails with the following trace.
 2014-05-12 04:02:01,813 WARN  [c.c.h.x.r.XenServerStorageProcessor] 
 (DirectAgent-428:ctx-2ece4baa) null due to Illegal character in path at index 
 47: 
 nfs://10.147.28.7/export/home/damoder/secondary\snapshots/2/3\dc74fe1c-0891-4cb3-aaf2-1c03a25b6d58.vhd
 java.net.URISyntaxException: Illegal character in path at index 47: 
 nfs://10.147.28.7/export/home/damoder/secondary\snapshots/2/3\dc74fe1c-0891-4cb3-aaf2-1c03a25b6d58.vhd
   at java.net.URI$Parser.fail(Unknown Source)
   at java.net.URI$Parser.checkChars(Unknown Source)
   at java.net.URI$Parser.parseHierarchical(Unknown Source)
   at java.net.URI$Parser.parse(Unknown Source)
   at java.net.URI.init(Unknown Source)
   at 
 com.cloud.hypervisor.xen.resource.XenServerStorageProcessor.createVolumeFromSnapshot(XenServerStorageProcessor.java:1617)
   at 
 com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:96)
   at 
 com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:52)
   at 
 com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:542)
   at 
 com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:60)
   at 
 com.cloud.hypervisor.xen.resource.XenServer610Resource.executeRequest(XenServer610Resource.java:93)
   at 
 com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:216)
   at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
   at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
   at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
   at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
   at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
   at java.util.concurrent.FutureTask.run(Unknown Source)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(Unknown
  Source)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
  Source)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
   at java.lang.Thread.run(Unknown Source)
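The trace shows java.net.URI rejecting the snapshot path because a Windows-style backslash (secondary\snapshots) appears inside an NFS URI. A minimal sketch of the failure mode and the usual normalization; toNfsUri is a hypothetical helper for illustration, not the actual CloudStack fix:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class UriPathSketch {
    // Hypothetical helper: RFC 3986 forbids '\' in a URI path, so normalize
    // Windows-style separators to '/' before handing the string to java.net.URI.
    static URI toNfsUri(String rawPath) {
        try {
            return new URI(rawPath.replace('\\', '/'));
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException("still not a valid URI: " + rawPath, e);
        }
    }

    public static void main(String[] args) {
        String bad = "nfs://10.147.28.7/export/secondary\\snapshots/2/3.vhd";
        try {
            new URI(bad); // reproduces the "Illegal character in path" from the trace
            System.out.println("parsed (unexpected)");
        } catch (URISyntaxException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        System.out.println("normalized: " + toNfsUri(bad));
    }
}
```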



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7460) [LXC][RHEL7] Agent installation fails if Management server is already installed on the same machine

2014-11-17 Thread Kishan Kavala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishan Kavala updated CLOUDSTACK-7460:
--
Fix Version/s: Future

 [LXC][RHEL7] Agent installation fails if Management server is already 
 installed on the same machine
 --

 Key: CLOUDSTACK-7460
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7460
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Install and Setup, KVM
Affects Versions: 4.5.0
Reporter: shweta agarwal
Assignee: Damodar Reddy T
 Fix For: Future


 Repro steps:
 On a RHEL 7 machine, first install the Management Server, then try installing 
 the Agent on the same machine.
 Bug:
 Agent installation will fail with following error :
 Transaction check error:
   file /var/log/cloudstack/agent from install of 
 cloudstack-agent-4.5.0-SNAPSHOT.el7.x86_64 conflicts with file from package 
 cloudstack-management-4.5.0-SNAPSHOT.el7.x86_64
 Error Summary
 -
 Done





[jira] [Commented] (CLOUDSTACK-7932) [Hyper-V] Wrong semantics for isVmAlive() method in HypervInvestigator

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215727#comment-14215727
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7932:


GitHub user anshul1886 opened a pull request:

https://github.com/apache/cloudstack/pull/39

CLOUDSTACK-7932: Fixed wrong semantics for isVmAlive() method in 
HypervInvestigator

Fixed wrong semantics for isVmAlive() method in HypervInvestigator

FindBugs will report an error on this, as it expects true/false for a 
Boolean value.
But null has a different meaning here, so it is a false positive from 
FindBugs.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/anshul1886/cloudstack-1 CLOUDSTACK-7932

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/39.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #39


commit fe07c08c0093bbe89732bf6509f12ec8a8a0a8f2
Author: Anshul Gangwar anshul.gang...@citrix.com
Date:   2014-11-17T07:30:42Z

CLOUDSTACK-7932: Fixed wrong semantics for isVmAlive() method in 
HypervInvestigator

FindBugs will report an error on this, as it expects true/false for a 
Boolean value.
But null has a different meaning here, so it is a false positive from 
FindBugs.




 [Hyper-V] Wrong semantics for isVmAlive() method in HypervInvestigator
 --

 Key: CLOUDSTACK-7932
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7932
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
 Fix For: 4.5.0


 The isVmAlive() method should return null when it is unable to conclusively 
 determine whether the VM is alive.
 I ran some tests using the Simulator and found that HypervInvestigator 
 concluded that the VM is not alive. How can HypervInvestigator determine the 
 status of a VM running on the Simulator or any other hypervisor?
 2014-11-15 13:35:21,692 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) HypervInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? false
 Full logs for the HA worker thread
 2014-11-15 13:35:21,642 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) Processing 
 HAWork[1-HA-1-Running-Investigating]
 2014-11-15 13:35:21,648 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) HA on VM[SecondaryStorageVm|s-1-VM]
 2014-11-15 13:35:21,658 DEBUG [c.c.h.CheckOnAgentInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Unable to reach the agent for 
 VM[SecondaryStorageVm|s-1-VM]: Resource [Host:1] is unreachable: Host 1: Host 
 with specified id is not in the right state: Down
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) SimpleInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) XenServerInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 DEBUG [c.c.h.UserVmDomRInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Not a User Vm, unable to determine state of 
 VM[SecondaryStorageVm|s-1-VM] returning null
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) PingInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 DEBUG [c.c.h.ManagementIPSystemVMInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Testing if VM[SecondaryStorageVm|s-1-VM] is 
 alive
 2014-11-15 13:35:21,670 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Sending { Cmd , MgmtId: 1, via: 
 2(SimulatedAgent.08984ca6-967c-49b0-84c1-968076cd6992), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,670 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Executing: { Cmd , MgmtId: 1, via: 
 2(SimulatedAgent.08984ca6-967c-49b0-84c1-968076cd6992), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,675 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Received: { Ans: , MgmtId: 1, via: 2, Ver: 
 v1, Flags: 10,
 { Answer } }
 2014-11-15 13:35:21,675 DEBUG [c.c.h.AbstractInvestigatorImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) host (172.16.15.74) cannot be pinged, 
 returning null ('I don't know')
 2014-11-15 

[jira] [Commented] (CLOUDSTACK-7929) Unhandled exception when setting negative value for throttling rate while creating network offering

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215736#comment-14215736
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7929:


GitHub user anshul1886 opened a pull request:

https://github.com/apache/cloudstack/pull/40

CLOUDSTACK-7929: Unhandled exception when setting negative value for 
throttling rate while creating network offering

Fix: while creating a network offering, if one specifies a negative value 
for the network rate,

then we convert that value to 0, i.e. unlimited
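The conversion the PR describes can be sketched as follows (illustrative helper name; the real change lives in ConfigurationManagerImpl.createNetworkOffering):

```java
public class NetworkRateSketch {
    // Sketch of the described behavior: a negative throttling rate is
    // treated as 0, which CloudStack interprets as "unlimited".
    static Integer normalizeNetworkRate(Integer networkRate) {
        if (networkRate != null && networkRate < 0) {
            return 0; // unlimited
        }
        return networkRate; // null means "fall back to the global default"
    }

    public static void main(String[] args) {
        System.out.println(normalizeNetworkRate(-1));   // 0
        System.out.println(normalizeNetworkRate(200));  // 200
        System.out.println(normalizeNetworkRate(null)); // null
    }
}
```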

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/anshul1886/cloudstack-1 CLOUDSTACK-7929

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/40.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #40


commit 2386a33e0df00920332892c00e0802be72cf63db
Author: Anshul Gangwar anshul.gang...@citrix.com
Date:   2014-11-14T05:26:05Z

CLOUDSTACK-7929: While creating a network offering, if one specifies a 
negative value for the network rate,
then we convert that value to 0, i.e. unlimited




 Unhandled exception when setting negative value for throttling rate while 
 creating network offering
 ---

 Key: CLOUDSTACK-7929
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7929
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
 Fix For: 4.5.0


 Steps
 
 Create a network offering and specify -1 for network throttling rate.
 Result
 =
 The exception is not handled properly: a raw DB exception is thrown, exposing 
 the DB column names in the logs and UI.
 Expected Result
 =
 -1 is generally an acceptable input for signifying an infinite or 
 not-applicable value, so we should allow -1 and translate it appropriately as 
 "no throttling applied". Otherwise, we should handle the input correctly and 
 throw a suitable error message for the user.
 Following is the exception seen presently in the logs (or through UI):
 [{com.cloud.agent.api.AgentControlAnswer:{result:true,wait:0}}] }
 2014-11-13 13:58:31,414 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-8:ctx-ac230e40) ===START=== 10.144.7.5 – POST 
 command=createNetworkOffering&response=json&sessionkey=vL5F3A1A1pr98OOTv7eei
 G2jvBI%3D
 2014-11-13 13:58:31,428 DEBUG [c.c.c.ConfigurationManagerImpl] 
 (catalina-exec-8:ctx-ac230e40 ctx-60a6474c) Adding Firewall service with 
 provider VirtualRouter
 2014-11-13 13:58:31,432 DEBUG [c.c.c.ConfigurationManagerImpl] 
 (catalina-exec-8:ctx-ac230e40 ctx-60a6474c) Adding network offering [Network 
 Offering [0-Guest-test]
 2014-11-13 13:58:31,435 DEBUG [c.c.u.d.T.Transaction] 
 (catalina-exec-8:ctx-ac230e40 ctx-60a6474c) Rolling back the transaction: 
 Time = 3 Name = catalina-exec-8; called by -TransactionLegac
 y.rollback:902-TransactionLegacy.removeUpTo:845-TransactionLegacy.close:669-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke
 :91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy79.persist:-1-ConfigurationManagerImpl$11.doInTransaction:4218-ConfigurationManagerImpl$11.doInTransaction:420
 9-Transaction$2.doInTransaction:57
 2014-11-13 13:58:31,442 ERROR [c.c.a.ApiServer] (catalina-exec-8:ctx-ac230e40 
 ctx-60a6474c) unhandled exception executing api command: 
 [Ljava.lang.String;@61bc5278
 com.cloud.utils.exception.CloudRuntimeException: DB Exception on: 
 com.mysql.jdbc.JDBC4PreparedStatement@27a103c3: INSERT INTO network_offerings 
 (network_offerings.name, network_offerings.un
 ique_name, network_offerings.display_text, network_offerings.nw_rate, 
 network_offerings.mc_rate, network_offerings.traffic_type, 
 network_offerings.specify_vlan, network_offerings.system_onl
 y, network_offerings.service_offering_id, network_offerings.tags, 
 network_offerings.default, network_offerings.availability, 
 network_offerings.state, network_offerings.created, network_offe
 rings.guest_type, network_offerings.dedicated_lb_service, 
 network_offerings.shared_source_nat_service, 
 network_offerings.specify_ip_ranges, network_offerings.sort_key, 
 network_offerings.uui
 d, network_offerings.redundant_router_service, 
 network_offerings.conserve_mode, network_offerings.elastic_ip_service, 
 network_offerings.eip_associate_public_ip, network_offerings.elastic_lb
 _service, network_offerings.inline, network_offerings.is_persistent, 
 network_offerings.egress_default_policy, 

[jira] [Created] (CLOUDSTACK-7933) test_escalations_instances.py - test_13_vm_nics - Skip remove_nic step for vmware if vmware-tools are not installed

2014-11-17 Thread Gaurav Aradhye (JIRA)
Gaurav Aradhye created CLOUDSTACK-7933:
--

 Summary: test_escalations_instances.py - test_13_vm_nics - Skip 
remove_nic step for vmware if vmware-tools are not installed
 Key: CLOUDSTACK-7933
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7933
 Project: CloudStack
  Issue Type: Test
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Automation
Affects Versions: 4.5.0
Reporter: Gaurav Aradhye
Assignee: Gaurav Aradhye
 Fix For: 4.5.0


The remove NIC step should be skipped for VMware when VMware Tools are not 
installed, because the operation is not supported without VMware Tools.





[jira] [Commented] (CLOUDSTACK-7930) Do not allow to set invalid values for global settings which are of type Integer, Float

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215737#comment-14215737
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7930:


GitHub user anshul1886 opened a pull request:

https://github.com/apache/cloudstack/pull/41

CLOUDSTACK-7930, CLOUDSTACK-7931: Do not allow to set invalid values for 
global settings which are of type integer and float

 Do not allow to set invalid values for global settings which are of type 
integer and float

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/anshul1886/cloudstack-1 CLOUDSTACK-7930

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/41.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #41


commit b7de1ecbe470365c30bc6afcea5f5c560bc9c8c2
Author: Anshul Gangwar anshul.gang...@citrix.com
Date:   2014-11-17T09:57:22Z

CLOUDSTACK-7930, CLOUDSTACK-7931: Do not allow to set invalid values for 
global settings which are of type integer and float




 Do not allow to set invalid values for global settings which are of type 
 Integer, Float
 ---

 Key: CLOUDSTACK-7930
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7930
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
Priority: Critical
 Fix For: 4.5.0


 Setting Integer/Float/Boolean globals to invalid values results in a 
 NullPointerException or NumberFormatException later in the code.
 For example, setting the network.throttling.rate parameter to null results in 
 a deploy-VM failure with a message of "null" and no other exception.
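A sketch of the kind of up-front type validation the ticket asks for; the method and its signature are illustrative, not the actual CloudStack configuration code:

```java
public class ConfigValueSketch {
    // Validate a candidate value against the declared type of a global
    // setting before persisting it; returns an error message, or null
    // when the value is acceptable.
    static String validate(String type, String value) {
        try {
            if ("Integer".equals(type)) {
                Integer.parseInt(value);
            } else if ("Float".equals(type)) {
                Float.parseFloat(value);
            } else if ("Boolean".equals(type)
                    && !"true".equalsIgnoreCase(value)
                    && !"false".equalsIgnoreCase(value)) {
                return "Invalid Boolean value: " + value;
            }
        } catch (NumberFormatException e) {
            return "Invalid " + type + " value: " + value;
        }
        return null; // value is acceptable
    }

    public static void main(String[] args) {
        System.out.println(validate("Integer", "abc")); // rejected with a message
        System.out.println(validate("Float", "1.5"));   // null, i.e. accepted
    }
}
```

Rejecting the value at the API layer keeps the later NullPointerException / NumberFormatException from ever being reachable.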





[jira] [Commented] (CLOUDSTACK-7929) Unhandled exception when setting negative value for throttling rate while creating network offering

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215774#comment-14215774
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7929:


Github user karuturi commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/40#discussion_r20487864
  
--- Diff: server/src/com/cloud/configuration/ConfigurationManagerImpl.java 
---
@@ -3723,6 +3723,10 @@ public NetworkOffering 
createNetworkOffering(CreateNetworkOfferingCmd cmd) {
 throw new InvalidParameterValueException("Invalid value for 
Availability. Supported types: " + Availability.Required + ", " + 
Availability.Optional);
 }
 
+if (networkRate != null && networkRate < 0) {
--- End diff --

is networkRate == null a valid value?


 Unhandled exception when setting negative value for throttling rate while 
 creating network offering
 ---

 Key: CLOUDSTACK-7929
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7929
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
 Fix For: 4.5.0


 Steps
 
 Create a network offering and specify -1 for network throttling rate.
 Result
 =
 The exception is not handled properly: a raw DB exception is thrown, exposing 
 the DB column names in the logs and UI.
 Expected Result
 =
 -1 is generally an acceptable input for signifying an infinite or 
 not-applicable value, so we should allow -1 and translate it appropriately as 
 "no throttling applied". Otherwise, we should handle the input correctly and 
 throw a suitable error message for the user.
 Following is the exception seen presently in the logs (or through UI):
 [{com.cloud.agent.api.AgentControlAnswer:{result:true,wait:0}}] }
 2014-11-13 13:58:31,414 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-8:ctx-ac230e40) ===START=== 10.144.7.5 – POST 
 command=createNetworkOffering&response=json&sessionkey=vL5F3A1A1pr98OOTv7eei
 G2jvBI%3D
 2014-11-13 13:58:31,428 DEBUG [c.c.c.ConfigurationManagerImpl] 
 (catalina-exec-8:ctx-ac230e40 ctx-60a6474c) Adding Firewall service with 
 provider VirtualRouter
 2014-11-13 13:58:31,432 DEBUG [c.c.c.ConfigurationManagerImpl] 
 (catalina-exec-8:ctx-ac230e40 ctx-60a6474c) Adding network offering [Network 
 Offering [0-Guest-test]
 2014-11-13 13:58:31,435 DEBUG [c.c.u.d.T.Transaction] 
 (catalina-exec-8:ctx-ac230e40 ctx-60a6474c) Rolling back the transaction: 
 Time = 3 Name = catalina-exec-8; called by -TransactionLegac
 y.rollback:902-TransactionLegacy.removeUpTo:845-TransactionLegacy.close:669-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke
 :91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy79.persist:-1-ConfigurationManagerImpl$11.doInTransaction:4218-ConfigurationManagerImpl$11.doInTransaction:420
 9-Transaction$2.doInTransaction:57
 2014-11-13 13:58:31,442 ERROR [c.c.a.ApiServer] (catalina-exec-8:ctx-ac230e40 
 ctx-60a6474c) unhandled exception executing api command: 
 [Ljava.lang.String;@61bc5278
 com.cloud.utils.exception.CloudRuntimeException: DB Exception on: 
 com.mysql.jdbc.JDBC4PreparedStatement@27a103c3: INSERT INTO network_offerings 
 (network_offerings.name, network_offerings.un
 ique_name, network_offerings.display_text, network_offerings.nw_rate, 
 network_offerings.mc_rate, network_offerings.traffic_type, 
 network_offerings.specify_vlan, network_offerings.system_onl
 y, network_offerings.service_offering_id, network_offerings.tags, 
 network_offerings.default, network_offerings.availability, 
 network_offerings.state, network_offerings.created, network_offe
 rings.guest_type, network_offerings.dedicated_lb_service, 
 network_offerings.shared_source_nat_service, 
 network_offerings.specify_ip_ranges, network_offerings.sort_key, 
 network_offerings.uui
 d, network_offerings.redundant_router_service, 
 network_offerings.conserve_mode, network_offerings.elastic_ip_service, 
 network_offerings.eip_associate_public_ip, network_offerings.elastic_lb
 _service, network_offerings.inline, network_offerings.is_persistent, 
 network_offerings.egress_default_policy, 
 network_offerings.concurrent_connections, 
 network_offerings.keep_alive_enabled,
 network_offerings.supports_streched_l2, network_offerings.internal_lb, 
 network_offerings.public_lb) VALUES (_binary'test', _binary'test', 
 _binary'test', -1, 10, 'Guest', 0, 0, null, null,
 0, 'Optional', 'Disabled', '2014-11-13 08:28:31', 'Isolated', 0, 0, 0, 0, 
 _binary'f8bf35f5-dd77-4fa8-83ff-af1f1e85ece3', 0, 1, 0, 0, 0, 0, 0, 1, null, 
 0, 0, 0, 0)
 at 

[jira] [Commented] (CLOUDSTACK-7932) [Hyper-V] Wrong semantics for isVmAlive() method in HypervInvestigator

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215771#comment-14215771
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7932:


Github user karuturi commented on the pull request:

https://github.com/apache/cloudstack/pull/39#issuecomment-63428603
  
From the method definition (isVmAlive), it looks like what it is doing is 
right. 
Ideally, we should change the return type from Boolean to boolean in the 
interface. 

Maybe in the caller (HighAvailabilityManagerImpl), you could check for both 
null and false for now. 

Also, can you add some unit tests? 
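The caller-side check suggested above could look like this sketch (illustrative only; not the actual HighAvailabilityManagerImpl code):

```java
public class HaCallerSketch {
    // Tri-state Boolean from an investigator: true = alive,
    // false = definitely not alive, null = cannot determine.
    static boolean shouldRestart(Boolean alive) {
        // Treat only a definite "not alive" as grounds for restart;
        // null (unknown) falls through to the next investigator.
        return Boolean.FALSE.equals(alive);
    }

    public static void main(String[] args) {
        System.out.println(shouldRestart(Boolean.FALSE)); // true
        System.out.println(shouldRestart(Boolean.TRUE));  // false
        System.out.println(shouldRestart(null));          // false
    }
}
```

Boolean.FALSE.equals(alive) is null-safe, so the caller never unboxes a null Boolean.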


 [Hyper-V] Wrong semantics for isVmAlive() method in HypervInvestigator
 --

 Key: CLOUDSTACK-7932
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7932
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
 Fix For: 4.5.0


 The isVmAlive() method should return null when it is unable to conclusively 
 determine whether the VM is alive.
 I ran some tests using the Simulator and found that HypervInvestigator 
 concluded that the VM is not alive. How can HypervInvestigator determine the 
 status of a VM running on the Simulator or any other hypervisor?
 2014-11-15 13:35:21,692 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) HypervInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? false
 Full logs for the HA worker thread
 2014-11-15 13:35:21,642 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) Processing 
 HAWork[1-HA-1-Running-Investigating]
 2014-11-15 13:35:21,648 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) HA on VM[SecondaryStorageVm|s-1-VM]
 2014-11-15 13:35:21,658 DEBUG [c.c.h.CheckOnAgentInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Unable to reach the agent for 
 VM[SecondaryStorageVm|s-1-VM]: Resource [Host:1] is unreachable: Host 1: Host 
 with specified id is not in the right state: Down
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) SimpleInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) XenServerInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 DEBUG [c.c.h.UserVmDomRInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Not a User Vm, unable to determine state of 
 VM[SecondaryStorageVm|s-1-VM] returning null
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) PingInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 DEBUG [c.c.h.ManagementIPSystemVMInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Testing if VM[SecondaryStorageVm|s-1-VM] is 
 alive
 2014-11-15 13:35:21,670 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Sending { Cmd , MgmtId: 1, via: 
 2(SimulatedAgent.08984ca6-967c-49b0-84c1-968076cd6992), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,670 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Executing: { Cmd , MgmtId: 1, via: 
 2(SimulatedAgent.08984ca6-967c-49b0-84c1-968076cd6992), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,675 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Received: { Ans: , MgmtId: 1, via: 2, Ver: 
 v1, Flags: 10,
 { Answer } }
 2014-11-15 13:35:21,675 DEBUG [c.c.h.AbstractInvestigatorImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) host (172.16.15.74) cannot be pinged, 
 returning null ('I don't know')
 2014-11-15 13:35:21,678 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 3-248260929458798725: Sending { Cmd , MgmtId: 1, via: 
 3(SimulatedAgent.9bcff565-4ae7-492a-8e39-30d11f1cbbd7), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,679 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 3-248260929458798725: Executing: { Cmd , MgmtId: 1, via: 
 3(SimulatedAgent.9bcff565-4ae7-492a-8e39-30d11f1cbbd7), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,691 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 3-248260929458798725: Received: { Ans: , MgmtId: 1, via: 3, Ver: 
 v1, Flags: 10, { Answer }
 }
 

[jira] [Commented] (CLOUDSTACK-7932) [Hyper-V] Wrong semantics for isVmAlive() method in HypervInvestigator

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215785#comment-14215785
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7932:


Github user anshul1886 commented on the pull request:

https://github.com/apache/cloudstack/pull/39#issuecomment-63429460
  
Actually, null has a different meaning than false: false means the VM is not 
alive, while null means it cannot be determined whether it is alive or not. 

This is existing behavior for all hypervisor investigators, so it doesn't make 
sense to write unit tests just for this. The behavior changed while fixing 
FindBugs warnings. A better approach would be an enum representing the various 
states, but that would require many changes and can be taken up in the next 
release.
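The enum alternative mentioned above might be sketched like this (purely illustrative; the comment proposes it for a later release):

```java
public class VmAliveSketch {
    // Replacing the Boolean tri-state (true/false/null) with an explicit
    // enum makes "unknown" a first-class answer instead of a null.
    enum AliveState { ALIVE, NOT_ALIVE, UNKNOWN }

    // Bridge from the current Boolean convention to the enum.
    static AliveState fromBoolean(Boolean alive) {
        if (alive == null) {
            return AliveState.UNKNOWN;
        }
        return alive ? AliveState.ALIVE : AliveState.NOT_ALIVE;
    }

    public static void main(String[] args) {
        System.out.println(fromBoolean(null));  // UNKNOWN
        System.out.println(fromBoolean(true));  // ALIVE
        System.out.println(fromBoolean(false)); // NOT_ALIVE
    }
}
```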


 [Hyper-V] Wrong semantics for isVmAlive() method in HypervInvestigator
 --

 Key: CLOUDSTACK-7932
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7932
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
 Fix For: 4.5.0


 The isVmAlive() method should return null when it is unable to conclusively 
 determine whether the VM is alive.
 I ran some tests using the Simulator and found that HypervInvestigator 
 concluded that the VM is not alive. How can HypervInvestigator determine the 
 status of a VM running on the Simulator or any other hypervisor?
 2014-11-15 13:35:21,692 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) HypervInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? false
 Full logs for the HA worker thread
 2014-11-15 13:35:21,642 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) Processing 
 HAWork[1-HA-1-Running-Investigating]
 2014-11-15 13:35:21,648 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) HA on VM[SecondaryStorageVm|s-1-VM]
 2014-11-15 13:35:21,658 DEBUG [c.c.h.CheckOnAgentInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Unable to reach the agent for 
 VM[SecondaryStorageVm|s-1-VM]: Resource [Host:1] is unreachable: Host 1: Host 
 with specified id is not in the right state: Down
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) SimpleInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) XenServerInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 DEBUG [c.c.h.UserVmDomRInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Not a User Vm, unable to determine state of 
 VM[SecondaryStorageVm|s-1-VM] returning null
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) PingInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 DEBUG [c.c.h.ManagementIPSystemVMInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Testing if VM[SecondaryStorageVm|s-1-VM] is 
 alive
 2014-11-15 13:35:21,670 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Sending { Cmd , MgmtId: 1, via: 
 2(SimulatedAgent.08984ca6-967c-49b0-84c1-968076cd6992), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,670 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Executing: { Cmd , MgmtId: 1, via: 
 2(SimulatedAgent.08984ca6-967c-49b0-84c1-968076cd6992), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,675 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Received: { Ans: , MgmtId: 1, via: 2, Ver: 
 v1, Flags: 10,
 { Answer } }
 2014-11-15 13:35:21,675 DEBUG [c.c.h.AbstractInvestigatorImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) host (172.16.15.74) cannot be pinged, 
 returning null ('I don't know')
 2014-11-15 13:35:21,678 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 3-248260929458798725: Sending { Cmd , MgmtId: 1, via: 
 3(SimulatedAgent.9bcff565-4ae7-492a-8e39-30d11f1cbbd7), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,679 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 3-248260929458798725: Executing: { Cmd , MgmtId: 1, via: 
 3(SimulatedAgent.9bcff565-4ae7-492a-8e39-30d11f1cbbd7), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,691 DEBUG 

[jira] [Commented] (CLOUDSTACK-7929) Unhandled exception when setting negative value for throttling rate while creating network offering

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215788#comment-14215788
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7929:


Github user anshul1886 commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/40#discussion_r20488120
  
--- Diff: server/src/com/cloud/configuration/ConfigurationManagerImpl.java 
---
@@ -3723,6 +3723,10 @@ public NetworkOffering 
createNetworkOffering(CreateNetworkOfferingCmd cmd) {
 throw new InvalidParameterValueException("Invalid value for 
Availability. Supported types: " + Availability.Required + ", " + 
Availability.Optional);
 }
 
+if (networkRate != null && networkRate < 0) {
--- End diff --

Yes. It means the networkRate parameter was not passed, and the networkRate 
for this offering will be taken from the global settings.


 Unhandled exception when setting negative value for throttling rate while 
 creating network offering
 ---

 Key: CLOUDSTACK-7929
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7929
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
 Fix For: 4.5.0


 Steps
 
 Create a network offering and specify -1 for network throttling rate.
 Result
 =
 The exception is not handled properly: a raw DB exception is thrown, exposing 
 the DB column names in the logs and UI.
 Expected Result
 =
 -1 is generally an acceptable input for signifying an infinite or 
 not-applicable value, so we should allow -1 and translate it appropriately as 
 "no throttling applied". Otherwise, we should handle the input correctly and 
 throw a suitable error message for the user.
 Following is the exception seen presently in the logs (or through UI):
 [{com.cloud.agent.api.AgentControlAnswer:{result:true,wait:0}}] }
 2014-11-13 13:58:31,414 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-8:ctx-ac230e40) ===START=== 10.144.7.5 – POST 
 command=createNetworkOffering&response=json&sessionkey=vL5F3A1A1pr98OOTv7eei
 G2jvBI%3D
 2014-11-13 13:58:31,428 DEBUG [c.c.c.ConfigurationManagerImpl] 
 (catalina-exec-8:ctx-ac230e40 ctx-60a6474c) Adding Firewall service with 
 provider VirtualRouter
 2014-11-13 13:58:31,432 DEBUG [c.c.c.ConfigurationManagerImpl] 
 (catalina-exec-8:ctx-ac230e40 ctx-60a6474c) Adding network offering [Network 
 Offering [0-Guest-test]
 2014-11-13 13:58:31,435 DEBUG [c.c.u.d.T.Transaction] 
 (catalina-exec-8:ctx-ac230e40 ctx-60a6474c) Rolling back the transaction: 
 Time = 3 Name = catalina-exec-8; called by -TransactionLegac
 y.rollback:902-TransactionLegacy.removeUpTo:845-TransactionLegacy.close:669-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:161-ExposeInvocationInterceptor.invoke
 :91-ReflectiveMethodInvocation.proceed:172-JdkDynamicAopProxy.invoke:204-$Proxy79.persist:-1-ConfigurationManagerImpl$11.doInTransaction:4218-ConfigurationManagerImpl$11.doInTransaction:420
 9-Transaction$2.doInTransaction:57
 2014-11-13 13:58:31,442 ERROR [c.c.a.ApiServer] (catalina-exec-8:ctx-ac230e40 
 ctx-60a6474c) unhandled exception executing api command: 
 [Ljava.lang.String;@61bc5278
 com.cloud.utils.exception.CloudRuntimeException: DB Exception on: 
 com.mysql.jdbc.JDBC4PreparedStatement@27a103c3: INSERT INTO network_offerings 
 (network_offerings.name, network_offerings.un
 ique_name, network_offerings.display_text, network_offerings.nw_rate, 
 network_offerings.mc_rate, network_offerings.traffic_type, 
 network_offerings.specify_vlan, network_offerings.system_onl
 y, network_offerings.service_offering_id, network_offerings.tags, 
 network_offerings.default, network_offerings.availability, 
 network_offerings.state, network_offerings.created, network_offe
 rings.guest_type, network_offerings.dedicated_lb_service, 
 network_offerings.shared_source_nat_service, 
 network_offerings.specify_ip_ranges, network_offerings.sort_key, 
 network_offerings.uui
 d, network_offerings.redundant_router_service, 
 network_offerings.conserve_mode, network_offerings.elastic_ip_service, 
 network_offerings.eip_associate_public_ip, network_offerings.elastic_lb
 _service, network_offerings.inline, network_offerings.is_persistent, 
 network_offerings.egress_default_policy, 
 network_offerings.concurrent_connections, 
 network_offerings.keep_alive_enabled,
 network_offerings.supports_streched_l2, network_offerings.internal_lb, 
 network_offerings.public_lb) VALUES (_binary'test', _binary'test', 
 _binary'test', -1, 10, 'Guest', 0, 0, null, null,
 0, 'Optional', 'Disabled', '2014-11-13 08:28:31', 'Isolated', 0, 0, 0, 0, 
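The translation suggested above (-1 meaning "no throttling") can be sketched as a small validation helper. This is a hypothetical sketch, not CloudStack's actual API; the class and method names are illustrative:

```java
public class NetworkRateValidator {

    // Normalizes a user-supplied network throttling rate before it is persisted.
    // -1 and 0 are treated as "no throttling" and stored as null; other negative
    // values are rejected with a user-friendly error instead of a raw DB exception.
    public static Integer validateNetworkRate(Integer rate) {
        if (rate == null || rate == -1 || rate == 0) {
            return null; // no throttling applied
        }
        if (rate < 0) {
            throw new IllegalArgumentException(
                "Network rate must be -1 (unlimited) or a non-negative value, got: " + rate);
        }
        return rate;
    }
}
```

With this normalization, -1 never reaches the `network_offerings.nw_rate` insert, so the DB exception above would not occur.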
 

[jira] [Commented] (CLOUDSTACK-7930) Do not allow to set invalid values for global settings which are of type Integer, Float

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215799#comment-14215799
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7930:


Github user karuturi commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/41#discussion_r20488370
  
--- Diff: server/src/com/cloud/configuration/ConfigurationManagerImpl.java 
---
@@ -725,6 +725,21 @@ private String validateConfigurationValue(String name, 
String value, String scop
 type = c.getType();
 }
 
+        String errMsg = null;
+        try {
+            if (type.equals(Integer.class)) {
+                errMsg = "There was error in trying to parse value: " + value + ". Please enter a valid integer value for parameter " + name;
+                Integer.parseInt(value);
+            } else if (type.equals(Float.class)) {
+                errMsg = "There was error in trying to parse value: " + value + ". Please enter a valid float value for parameter " + name;
+                Float.parseFloat(value);
--- End diff --

Should we allow values like 3,400.5?


 Do not allow to set invalid values for global settings which are of type 
 Integer, Float
 ---

 Key: CLOUDSTACK-7930
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7930
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
Priority: Critical
 Fix For: 4.5.0


 Setting Integer/Float/Boolean to invalid values results in 
 NullPointerException, NumberFormatException later in code.
 In case of network.throttling.rate parameter set to null results in deploy VM 
 failure with message of null and no other exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7930) Do not allow to set invalid values for global settings which are of type Integer, Float

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215801#comment-14215801
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7930:


Github user karuturi commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/41#discussion_r20488404
  
--- Diff: server/src/com/cloud/configuration/ConfigurationManagerImpl.java 
---
@@ -725,6 +725,21 @@ private String validateConfigurationValue(String name, 
String value, String scop
 type = c.getType();
 }
 
+        String errMsg = null;
+        try {
+            if (type.equals(Integer.class)) {
+                errMsg = "There was error in trying to parse value: " + value + ". Please enter a valid integer value for parameter " + name;
+                Integer.parseInt(value);
+            } else if (type.equals(Float.class)) {
+                errMsg = "There was error in trying to parse value: " + value + ". Please enter a valid float value for parameter " + name;
+                Float.parseFloat(value);
+            }
+        } catch (Exception e) {
--- End diff --

Can you catch specific expected exceptions NPE and NFE?


 Do not allow to set invalid values for global settings which are of type 
 Integer, Float
 ---

 Key: CLOUDSTACK-7930
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7930
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
Priority: Critical
 Fix For: 4.5.0


 Setting Integer/Float/Boolean to invalid values results in 
 NullPointerException, NumberFormatException later in code.
 In case of network.throttling.rate parameter set to null results in deploy VM 
 failure with message of null and no other exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7930) Do not allow to set invalid values for global settings which are of type Integer, Float

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215804#comment-14215804
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7930:


Github user karuturi commented on the pull request:

https://github.com/apache/cloudstack/pull/41#issuecomment-63430386
  
Can you add unittests for this?


 Do not allow to set invalid values for global settings which are of type 
 Integer, Float
 ---

 Key: CLOUDSTACK-7930
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7930
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
Priority: Critical
 Fix For: 4.5.0


 Setting Integer/Float/Boolean to invalid values results in 
 NullPointerException, NumberFormatException later in code.
 In case of network.throttling.rate parameter set to null results in deploy VM 
 failure with message of null and no other exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7932) [Hyper-V] Wrong semantics for isVmAlive() method in HypervInvestigator

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215811#comment-14215811
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7932:


Github user karuturi commented on the pull request:

https://github.com/apache/cloudstack/pull/39#issuecomment-63430873
  
In the second return statement, should it return false since the agent is 
alive and returned some status?

I didn't understand the argument for not adding unit tests. I see that as the 
only way to avoid the same changes in future, as findbugs would report it again 
and someone would attempt to fix it again. 
Adding some documentation might help.



 [Hyper-V] Wrong semantics for isVmAlive() method in HypervInvestigator
 --

 Key: CLOUDSTACK-7932
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7932
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
 Fix For: 4.5.0


 The isVmAlive() method should return null when it is unable to conclusively 
 determine if the VM is alive or not.
 I ran some tests using Simulator and found that HypervInvestigator determined 
 that VM is not alive. How can HypervInvestigator determine status of a VM 
 running on Simulator or any other HV?
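 The tri-state contract described above can be sketched as follows. This is an illustrative sketch, not the actual HypervInvestigator code; the parameters are simplified assumptions:

```java
public class VmAliveSketch {

    // An investigator should answer with three states:
    //   Boolean.TRUE  - the VM is definitely alive
    //   Boolean.FALSE - the VM is definitely not alive
    //   null          - inconclusive; let the next investigator try
    public static Boolean isVmAlive(String hypervisorType, Boolean agentAnswer) {
        if (!"Hyperv".equals(hypervisorType)) {
            return null; // not a Hyper-V host: this investigator cannot conclude anything
        }
        if (agentAnswer == null) {
            return null; // agent unreachable: inconclusive, not "dead"
        }
        return agentAnswer; // a definite answer only when the agent actually responded
    }
}
```

 Using the boxed `Boolean` rather than primitive `boolean` is what allows the "I don't know" answer seen in the SimpleInvestigator and PingInvestigator log lines below.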
 2014-11-15 13:35:21,692 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) HypervInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? false
 Full logs for the HA worker thread
 2014-11-15 13:35:21,642 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) Processing 
 HAWork[1-HA-1-Running-Investigating]
 2014-11-15 13:35:21,648 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) HA on VM[SecondaryStorageVm|s-1-VM]
 2014-11-15 13:35:21,658 DEBUG [c.c.h.CheckOnAgentInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Unable to reach the agent for 
 VM[SecondaryStorageVm|s-1-VM]: Resource [Host:1] is unreachable: Host 1: Host 
 with specified id is not in the right state: Down
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) SimpleInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) XenServerInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 DEBUG [c.c.h.UserVmDomRInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Not a User Vm, unable to determine state of 
 VM[SecondaryStorageVm|s-1-VM] returning null
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) PingInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 DEBUG [c.c.h.ManagementIPSystemVMInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Testing if VM[SecondaryStorageVm|s-1-VM] is 
 alive
 2014-11-15 13:35:21,670 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Sending { Cmd , MgmtId: 1, via: 
 2(SimulatedAgent.08984ca6-967c-49b0-84c1-968076cd6992), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,670 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Executing: { Cmd , MgmtId: 1, via: 
 2(SimulatedAgent.08984ca6-967c-49b0-84c1-968076cd6992), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,675 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Received: { Ans: , MgmtId: 1, via: 2, Ver: 
 v1, Flags: 10,
 { Answer } }
 2014-11-15 13:35:21,675 DEBUG [c.c.h.AbstractInvestigatorImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) host (172.16.15.74) cannot be pinged, 
 returning null ('I don't know')
 2014-11-15 13:35:21,678 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 3-248260929458798725: Sending { Cmd , MgmtId: 1, via: 
 3(SimulatedAgent.9bcff565-4ae7-492a-8e39-30d11f1cbbd7), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,679 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 3-248260929458798725: Executing: { Cmd , MgmtId: 1, via: 
 3(SimulatedAgent.9bcff565-4ae7-492a-8e39-30d11f1cbbd7), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,691 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 3-248260929458798725: Received: { Ans: , MgmtId: 1, via: 3, Ver: 
 v1, 

[jira] [Closed] (CLOUDSTACK-7486) [LXC] Fedora VM fails to start on LXc host

2014-11-17 Thread shweta agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shweta agarwal closed CLOUDSTACK-7486.
--

 [LXC] Fedora VM  fails to start on LXc host
 ---

 Key: CLOUDSTACK-7486
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7486
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: KVM
Affects Versions: 4.5.0
Reporter: shweta agarwal
Assignee: Kishan Kavala
 Fix For: 4.5.0

 Attachments: agent.tar.gz


 Repro steps:
 Create a LXC setup
 Register debian Template
 Create Debian VM
 Bug:
 Debian VMs fail to start
 Agent log shows:
 2014-09-04 17:29:08,581 DEBUG [kvm.resource.LibvirtComputingResource] 
 (agentRequest-Handler-2:null) starting i-2-3-VM: <domain type='lxc'>
 <name>i-2-3-VM</name>
 <uuid>a2dafe32-9d8c-47a8-abe5-9af100987155</uuid>
 <description>Fedora 9</description>
 <clock offset='utc'>
 <timer name='kvmclock' present='no' /></clock>
 <features>
 <pae/>
 <apic/>
 <acpi/>
 </features>
 <devices>
 <emulator></emulator>
 <interface type='bridge'>
 <source bridge='brem1-533'/>
 <mac address='02:00:1a:7c:00:01'/>
 <model type='virtio'/>
 <bandwidth>
 <inbound average='25600' peak='25600'/>
 <outbound average='25600' peak='25600'/>
 </bandwidth>
 </interface>
 <filesystem type='mount'>
   <source dir='/mnt/dfa2ec3c-d133-3284-8583-0a0845aa4424/36d7a92b-a78b-4f30-83bb-c65ea7768627'/>
   <target dir='/'/>
 </filesystem>
 <serial type='pty'>
 <target port='0'/>
 </serial>
 <console type='pty'>
 <target port='0'/>
 </console>
 </devices>
 <memory>524288</memory>
 <devices>
 <memballoon model='none'/>
 </devices>
 <vcpu>1</vcpu>
 <os>
 <type>exe</type>
 <init>/sbin/init</init>
 </os>
 <cputune>
 <shares>500</shares>
 </cputune>
 <cpu></cpu><on_reboot>restart</on_reboot>
 <on_poweroff>destroy</on_poweroff>
 <on_crash>destroy</on_crash>
 </domain>
 2014-09-04 17:29:08,944 WARN  [kvm.resource.LibvirtComputingResource] 
 (agentRequest-Handler-2:null) LibvirtException
 org.libvirt.LibvirtException: internal error: guest failed to start: cannot 
 find init path '/sbin/init' relative to container root: No such file or 
 directory
 at org.libvirt.ErrorHandler.processError(Unknown Source)
 at org.libvirt.Connect.processError(Unknown Source)
 at org.libvirt.Connect.processError(Unknown Source)
 at org.libvirt.Connect.domainCreateXML(Unknown Source)
 at 
 com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.startVM(LibvirtComputingResource.java:1238)
 at 
 com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:3766)
 at 
 com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1332)
 at com.cloud.agent.Agent.processRequest(Agent.java:503)
 at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:810)
 at com.cloud.utils.nio.Task.run(Task.java:84)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 2014-09-04 17:29:08,945 DEBUG [kvm.storage.KVMStoragePoolManager] 
 (agentRequest-Handler-2:null) Disconnecting disk 
 36d7a92b-a78b-4f30-83bb-c65ea7768627
 2014-09-04 17:29:08,945 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
 (agentRequest-Handler-2:null) Trying to fetch storage pool 
 dfa2ec3c-d133-3284-8583-0a0845aa4424 from libvirt
 2014-09-04 17:29:08,971 DEBUG [cloud.agent.Agent] 
 (agentRequest-Handler-2:null) Seq 1-8326029811101205165:  { Ans: , MgmtId: 
 233845178472723, via: 1, Ver: v1, Flags: 10, 
 [{com.cloud.agent.api.StartAnswer:{vm:{id:3,name:i-2-3-VM,type:User,cpus:1,minSpeed:500,maxSpeed:500,minRam:536870912,maxRam:536870912,arch:x86_64,os:Fedora
  9,platformEmulator:Fedora 
 

[jira] [Closed] (CLOUDSTACK-7485) [LXC] Debian VMs fails to start in LXC host

2014-11-17 Thread shweta agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shweta agarwal closed CLOUDSTACK-7485.
--

 [LXC] Debian VMs fails to start in LXC host
 ---

 Key: CLOUDSTACK-7485
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7485
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: KVM
Affects Versions: 4.5.0
Reporter: shweta agarwal
Assignee: Kishan Kavala
 Fix For: 4.5.0

 Attachments: agent.tar.gz


 Repro steps:
 Create a LXC setup
 Register debian Template
 Create Debian VM 
 Bug:
 Debian VMs fail to start
 Agent log shows:
 2014-09-04 17:34:01,902 DEBUG [kvm.resource.LibvirtComputingResource] 
 (agentRequest-Handler-5:null) starting i-2-5-VM: <domain type='lxc'>
 <name>i-2-5-VM</name>
 <uuid>6c3618c2-0189-4678-8242-f53bc1099cee</uuid>
 <description>Debian GNU/Linux 6(64-bit)</description>
 <clock offset='utc'>
 <timer name='kvmclock' present='no' /></clock>
 <features>
 <pae/>
 <apic/>
 <acpi/>
 </features>
 <devices>
 <emulator></emulator>
 <interface type='bridge'>
 <source bridge='brem1-533'/>
 <mac address='02:00:22:4f:00:03'/>
 <model type='virtio'/>
 <bandwidth>
 <inbound average='25600' peak='25600'/>
 <outbound average='25600' peak='25600'/>
 </bandwidth>
 </interface>
 <filesystem type='mount'>
   <source dir='/mnt/dfa2ec3c-d133-3284-8583-0a0845aa4424/3ddee90e-6037-489c-b4ac-1df9867bf378'/>
   <target dir='/'/>
 </filesystem>
 <serial type='pty'>
 <target port='0'/>
 </serial>
 <console type='pty'>
 <target port='0'/>
 </console>
 </devices>
 <memory>524288</memory>
 <devices>
 <memballoon model='none'/>
 </devices>
 <vcpu>1</vcpu>
 <os>
 <type>exe</type>
 <init>/sbin/init</init>
 </os>
 <cputune>
 <shares>500</shares>
 </cputune>
 <cpu></cpu><on_reboot>restart</on_reboot>
 <on_poweroff>destroy</on_poweroff>
 <on_crash>destroy</on_crash>
 </domain>
 2014-09-04 17:34:02,207 WARN  [kvm.resource.LibvirtComputingResource] 
 (agentRequest-Handler-5:null) LibvirtException
 org.libvirt.LibvirtException: internal error: guest failed to start: cannot 
 find init path '/sbin/init' relative to container root: No such file or 
 directory
 at org.libvirt.ErrorHandler.processError(Unknown Source)
 at org.libvirt.Connect.processError(Unknown Source)
 at org.libvirt.Connect.processError(Unknown Source)
 at org.libvirt.Connect.domainCreateXML(Unknown Source)
 at 
 com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.startVM(LibvirtComputingResource.java:1238)
 at 
 com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:3766)
 at 
 com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1332)
 at com.cloud.agent.Agent.processRequest(Agent.java:503)
 at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:810)
 at com.cloud.utils.nio.Task.run(Task.java:84)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 2014-09-04 17:34:02,207 DEBUG [kvm.storage.KVMStoragePoolManager] 
 (agentRequest-Handler-5:null) Disconnecting disk 
 3ddee90e-6037-489c-b4ac-1df9867bf378
 2014-09-04 17:34:02,208 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
 (agentRequest-Handler-5:null) Trying to fetch storage pool 
 dfa2ec3c-d133-3284-8583-0a0845aa4424 from libvirt
 2014-09-04 17:34:02,231 DEBUG [cloud.agent.Agent] 
 (agentRequest-Handler-5:null) Seq 1-8326029811101205191:  { Ans: , MgmtId: 
 233845178472723, via: 1, Ver: v1, Flags: 10, 
 [{com.cloud.agent.api.StartAnswer:{vm:{id:5,name:i-2-5-VM,type:User,cpus:1,minSpeed:500,maxSpeed:500,minRam:536870912,maxRam:536870912,arch:x86_64,os:Debian
  GNU/Linux 6(64-bit),platfo



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-7430) [UI] no option to remove /delete host from cluster from UI

2014-11-17 Thread shweta agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shweta agarwal closed CLOUDSTACK-7430.
--

 [UI] no option to remove /delete host from cluster from UI
 --

 Key: CLOUDSTACK-7430
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7430
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.5.0
Reporter: shweta agarwal
Assignee: Jessica Wang
 Fix For: 4.5.0

 Attachments: 2014-10-17-jessica.PNG, delete-host.png


 Repro steps:
 1. Create a Zone
 2. Put the host into maintenance mode
 3. When the host is in maintenance, try to remove/delete the host from the cluster via the UI
 Bug:
 No option to delete host 
 attaching snapshot



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-7804) [CEPH] Adding RBD primary storage is failing

2014-11-17 Thread shweta agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shweta agarwal closed CLOUDSTACK-7804.
--

 [CEPH] Adding  RBD primary storage is failing 
 --

 Key: CLOUDSTACK-7804
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7804
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller
Affects Versions: 4.5.0
Reporter: shweta agarwal
Assignee: Kishan Kavala
Priority: Blocker
 Fix For: 4.5.0

 Attachments: MS.tar.gz, agent.tar.gz


 Repro steps :
 1. Create a LXC zone with NFS primary storage
 2. Once System VMs are up add Ceph storage as another primary storage
 Bug:
 Storage addition is failing. It goes to the Initialized state, but after some 
 time a "storage addition failed" message appears.
 attached MS log and Agents log for the same



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-7402) [UI] no option to delete or copy template across zone for registered template

2014-11-17 Thread shweta agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shweta agarwal closed CLOUDSTACK-7402.
--

 [UI] no option to delete or copy template across zone for registered template
 -

 Key: CLOUDSTACK-7402
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7402
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.5.0
Reporter: shweta agarwal
Assignee: shweta agarwal
 Fix For: Future

 Attachments: temp.png


 Repro steps:
 Register a template.
 Once it's ready, notice there is no command button to delete this template.
 There is also no command option to copy the template across zones.
 Attaching screenshot 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CLOUDSTACK-7318) [UI] processing wheel continue to spin even after error messaage during VM snapshot creation

2014-11-17 Thread shweta agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shweta agarwal reopened CLOUDSTACK-7318:


Issue is still reproducible, but now when taking a volume snapshot instead of a 
VM snapshot.

 [UI] processing wheel continue to spin even after error messaage during VM 
 snapshot creation
 

 Key: CLOUDSTACK-7318
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7318
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.5.0
Reporter: shweta agarwal
Assignee: Mihaela Stoica
 Fix For: 4.6.0

 Attachments: processingwheel.png


 Repro steps:
 Create a LXC VM
 When the VM is running, try to create a VM snapshot
 Bug:
 Notice you get the message "VM snapshot is not enabled for hypervisor type: LXC",
 but the spinning wheel continues to spin. Attaching screenshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-7384) [LXC][UI] show change service offering command option only when VM is in stop state

2014-11-17 Thread shweta agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shweta agarwal closed CLOUDSTACK-7384.
--

Verified. Fixed.

 [LXC][UI] show change service offering command option only when VM is in stop 
 state
 ---

 Key: CLOUDSTACK-7384
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7384
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.5.0
Reporter: shweta agarwal
Assignee: Jessica Wang
 Fix For: 4.5.0

 Attachments: change.png


 We should show the change service offering option for an LXC VM in the stopped 
 state in the Instance detail tab, the way we do it for KVM VMs. 
 Currently we show the change service offering option for LXC VMs when the VM 
 is running as well.
 Attaching screen shot 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7932) [Hyper-V] Wrong semantics for isVmAlive() method in HypervInvestigator

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215839#comment-14215839
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7932:


Github user anshul1886 commented on the pull request:

https://github.com/apache/cloudstack/pull/39#issuecomment-63432886
  
No, because the agent is reporting the status of the host. We cannot determine 
the status of VMs from that.


 [Hyper-V] Wrong semantics for isVmAlive() method in HypervInvestigator
 --

 Key: CLOUDSTACK-7932
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7932
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
 Fix For: 4.5.0


 The isVmAlive() method should return null when it is unable to conclusively 
 determine if the VM is alive or not.
 I ran some tests using Simulator and found that HypervInvestigator determined 
 that VM is not alive. How can HypervInvestigator determine status of a VM 
 running on Simulator or any other HV?
 2014-11-15 13:35:21,692 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) HypervInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? false
 Full logs for the HA worker thread
 2014-11-15 13:35:21,642 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) Processing 
 HAWork[1-HA-1-Running-Investigating]
 2014-11-15 13:35:21,648 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) HA on VM[SecondaryStorageVm|s-1-VM]
 2014-11-15 13:35:21,658 DEBUG [c.c.h.CheckOnAgentInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Unable to reach the agent for 
 VM[SecondaryStorageVm|s-1-VM]: Resource [Host:1] is unreachable: Host 1: Host 
 with specified id is not in the right state: Down
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) SimpleInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) XenServerInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 DEBUG [c.c.h.UserVmDomRInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Not a User Vm, unable to determine state of 
 VM[SecondaryStorageVm|s-1-VM] returning null
 2014-11-15 13:35:21,659 INFO [c.c.h.HighAvailabilityManagerImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) PingInvestigator found 
 VM[SecondaryStorageVm|s-1-VM]to be alive? null
 2014-11-15 13:35:21,659 DEBUG [c.c.h.ManagementIPSystemVMInvestigator] 
 (HA-Worker-1:ctx-e0b5183c work-1) Testing if VM[SecondaryStorageVm|s-1-VM] is 
 alive
 2014-11-15 13:35:21,670 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Sending { Cmd , MgmtId: 1, via: 
 2(SimulatedAgent.08984ca6-967c-49b0-84c1-968076cd6992), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,670 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Executing: { Cmd , MgmtId: 1, via: 
 2(SimulatedAgent.08984ca6-967c-49b0-84c1-968076cd6992), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,675 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 2-5786281096240955453: Received: { Ans: , MgmtId: 1, via: 2, Ver: 
 v1, Flags: 10,
 { Answer } }
 2014-11-15 13:35:21,675 DEBUG [c.c.h.AbstractInvestigatorImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) host (172.16.15.74) cannot be pinged, 
 returning null ('I don't know')
 2014-11-15 13:35:21,678 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 3-248260929458798725: Sending { Cmd , MgmtId: 1, via: 
 3(SimulatedAgent.9bcff565-4ae7-492a-8e39-30d11f1cbbd7), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,679 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 3-248260929458798725: Executing: { Cmd , MgmtId: 1, via: 
 3(SimulatedAgent.9bcff565-4ae7-492a-8e39-30d11f1cbbd7), Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.PingTestCommand:{_computingHostIp:172.16.15.74,wait:20}}]
  }
 2014-11-15 13:35:21,691 DEBUG [c.c.a.t.Request] (HA-Worker-1:ctx-e0b5183c 
 work-1) Seq 3-248260929458798725: Received: { Ans: , MgmtId: 1, via: 3, Ver: 
 v1, Flags: 10, { Answer }
 }
 2014-11-15 13:35:21,691 DEBUG [c.c.h.AbstractInvestigatorImpl] 
 (HA-Worker-1:ctx-e0b5183c work-1) host (172.16.15.74) cannot be pinged, 
 returning null ('I don't know')
 2014-11-15 13:35:21,691 DEBUG 

[jira] [Commented] (CLOUDSTACK-7930) Do not allow to set invalid values for global settings which are of type Integer, Float

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215848#comment-14215848
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7930:


Github user anshul1886 commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/41#discussion_r20489652
  
--- Diff: server/src/com/cloud/configuration/ConfigurationManagerImpl.java 
---
@@ -725,6 +725,21 @@ private String validateConfigurationValue(String name, 
String value, String scop
 type = c.getType();
 }
 
+        String errMsg = null;
+        try {
+            if (type.equals(Integer.class)) {
+                errMsg = "There was error in trying to parse value: " + value + ". Please enter a valid integer value for parameter " + name;
+                Integer.parseInt(value);
+            } else if (type.equals(Float.class)) {
+                errMsg = "There was error in trying to parse value: " + value + ". Please enter a valid float value for parameter " + name;
+                Float.parseFloat(value);
--- End diff --

No, because they will fail later in parsing.


 Do not allow to set invalid values for global settings which are of type 
 Integer, Float
 ---

 Key: CLOUDSTACK-7930
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7930
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
Priority: Critical
 Fix For: 4.5.0


 Setting Integer/Float/Boolean to invalid values results in 
 NullPointerException, NumberFormatException later in code.
 In case of network.throttling.rate parameter set to null results in deploy VM 
 failure with message of null and no other exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7930) Do not allow to set invalid values for global settings which are of type Integer, Float

2014-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215856#comment-14215856
 ] 

ASF GitHub Bot commented on CLOUDSTACK-7930:


Github user anshul1886 commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/41#discussion_r20489860
  
--- Diff: server/src/com/cloud/configuration/ConfigurationManagerImpl.java 
---
@@ -725,6 +725,21 @@ private String validateConfigurationValue(String name, 
String value, String scop
 type = c.getType();
 }
 
+        String errMsg = null;
+        try {
+            if (type.equals(Integer.class)) {
+                errMsg = "There was error in trying to parse value: " + value + ". Please enter a valid integer value for parameter " + name;
+                Integer.parseInt(value);
+            } else if (type.equals(Float.class)) {
+                errMsg = "There was error in trying to parse value: " + value + ". Please enter a valid float value for parameter " + name;
+                Float.parseFloat(value);
+            }
+        } catch (Exception e) {
--- End diff --

What value will be added by catching specific exceptions?
In the end, the user only wants to know whether the value is valid or not; 
he/she is not concerned with what type of exception is thrown.
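
For reference, catching only the exceptions the parse methods are documented to throw (`NumberFormatException` from both, plus `NullPointerException` from `Float.parseFloat(null)`) would look roughly like this. It is a sketch of the validation under discussion, not the actual patch:

```java
public class ConfigValueValidator {

    // Returns null when the value parses cleanly for the given type,
    // or a user-facing error message when it does not.
    public static String validate(Class<?> type, String name, String value) {
        try {
            if (type.equals(Integer.class)) {
                Integer.parseInt(value);
            } else if (type.equals(Float.class)) {
                Float.parseFloat(value);
            }
        } catch (NullPointerException | NumberFormatException e) {
            // Covers null input and malformed numbers such as "3,400.5"
            return "Invalid value '" + value + "' for parameter " + name
                    + "; please enter a valid " + type.getSimpleName() + " value";
        }
        return null;
    }
}
```

Note that `Float.parseFloat` rejects locale-formatted numbers like "3,400.5", which also answers the earlier review question about such inputs.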



 Do not allow to set invalid values for global settings which are of type 
 Integer, Float
 ---

 Key: CLOUDSTACK-7930
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7930
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar
Priority: Critical
 Fix For: 4.5.0


 Setting Integer/Float/Boolean to invalid values results in 
 NullPointerException, NumberFormatException later in code.
 In case of network.throttling.rate parameter set to null results in deploy VM 
 failure with message of null and no other exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)