[jira] [Commented] (CLOUDSTACK-4621) Changing the management server's ethernet interface / mac address leaves the system in unstable state.

2013-09-05 Thread venkata swamybabu budumuru (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13760003#comment-13760003
 ] 

venkata swamybabu budumuru commented on CLOUDSTACK-4621:


Here is the thread where the issue was brought to the community's notice some 
time back, on 4.0.2:

http://markmail.org/message/6sdxpgcleb6zgctl#query:+page:1+mid:u6nvybyrcfw25opv+state:results

> Changing the management server's ethernet interface / mac address leaves the 
> system in unstable state.
> --
>
> Key: CLOUDSTACK-4621
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4621
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
>Reporter: venkata swamybabu budumuru
>Priority: Critical
> Fix For: 4.2.1
>
>
> Steps to reproduce:
> 1. Have the latest CloudStack setup with a 4.2 build.
> 2. Have at least one advanced zone using a Xen cluster.
> 3. Deploy VMs and make sure everything works fine.
> Note: In my case, the management server is deployed on VMware.
> Before MAC changes on mgmt server :
> [root@Rhel63-Sanjeev ~]# cat ifconfig.output 
> eth1  Link encap:Ethernet  HWaddr 06:04:5A:00:00:66  
>   inet addr:10.147.59.126  Bcast:10.147.59.255  Mask:255.255.255.0
>   inet6 addr: fe80::404:5aff:fe00:66/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:35035311 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:31941744 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:1000 
>   RX bytes:31951991629 (29.7 GiB)  TX bytes:17754778160 (16.5 GiB)
> mysql> select * from mshost;
> +----+---------------+---------------+----------------+-------+---------+---------------+--------------+---------------------+---------+-------------+
> | id | msid          | runid         | name           | state | version | service_ip    | service_port | last_update         | removed | alert_count |
> +----+---------------+---------------+----------------+-------+---------+---------------+--------------+---------------------+---------+-------------+
> |  1 | 6615759585382 | 1378110990284 | Rhel63-Sanjeev | Up    | 4.2.0   | 10.147.59.126 |         9090 | 2013-09-06 04:44:45 | NULL    |           0 |
> +----+---------------+---------------+----------------+-------+---------+---------------+--------------+---------------------+---------+-------------+
> 4. I manually logged into my VMware host and changed the above MAC address to
> "06:04:5A:00:00:68", which resulted in a new interface.
> [root@Rhel63-Sanjeev ~]# ifconfig
> eth2  Link encap:Ethernet  HWaddr 06:04:5A:00:00:68  
>   inet addr:10.147.59.126  Bcast:10.147.59.255  Mask:255.255.255.0
>   inet6 addr: fe80::404:5aff:fe00:68/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:294927 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:475806 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:1000 
>   RX bytes:22230231 (21.2 MiB)  TX bytes:392491450 (374.3 MiB)
> 5. Restart the management server and verify the cloud.mshost table
> mysql> select * from mshost;
> +----+---------------+---------------+----------------+-------+---------+---------------+--------------+---------------------+---------+-------------+
> | id | msid          | runid         | name           | state | version | service_ip    | service_port | last_update         | removed | alert_count |
> +----+---------------+---------------+----------------+-------+---------+---------------+--------------+---------------------+---------+-------------+
> |  1 | 6615759585382 | 1378110990284 | Rhel63-Sanjeev | Up    | 4.2.0   | 10.147.59.126 |         9090 | 2013-09-06 04:44:45 | NULL    |           0 |
> |  2 | 6615759585384 | 1378462772622 | Rhel63-Sanjeev | Up    | 4.2.0   | 10.147.59.126 |         9090 | 2013-09-06 10:20:35 | NULL    |           0 |
> +----+---------------+---------------+----------------+-------+---------+---------------+--------------+---------------------+---------+-------------+
> Observations:
> i. A new mshost entry was created because of the MAC / interface change.
> ii. Both of the above entries show the state as Up, so CloudStack assumes there
> are two management servers.
> iii. All the system VMs still have mgmt_server_id set to the old entry, which
> is in fact no longer up.
> mysql> select * from host where name like '%-VM%';
> +++--++++-+-++-+-+--+---+---+

[jira] [Created] (CLOUDSTACK-4621) Changing the management server's ethernet interface / mac address leaves the system in unstable state.

2013-09-05 Thread venkata swamybabu budumuru (JIRA)
venkata swamybabu budumuru created CLOUDSTACK-4621:
--

 Summary: Changing the management server's ethernet interface / mac 
address leaves the system in unstable state.
 Key: CLOUDSTACK-4621
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4621
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.2.0
Reporter: venkata swamybabu budumuru
Priority: Critical
 Fix For: 4.2.1


Steps to reproduce:

1. Have the latest CloudStack setup with a 4.2 build.
2. Have at least one advanced zone using a Xen cluster.
3. Deploy VMs and make sure everything works fine.

Note: In my case, the management server is deployed on VMware.

Before MAC changes on mgmt server :

[root@Rhel63-Sanjeev ~]# cat ifconfig.output 
eth1  Link encap:Ethernet  HWaddr 06:04:5A:00:00:66  
  inet addr:10.147.59.126  Bcast:10.147.59.255  Mask:255.255.255.0
  inet6 addr: fe80::404:5aff:fe00:66/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:35035311 errors:0 dropped:0 overruns:0 frame:0
  TX packets:31941744 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:31951991629 (29.7 GiB)  TX bytes:17754778160 (16.5 GiB)

mysql> select * from mshost;
+----+---------------+---------------+----------------+-------+---------+---------------+--------------+---------------------+---------+-------------+
| id | msid          | runid         | name           | state | version | service_ip    | service_port | last_update         | removed | alert_count |
+----+---------------+---------------+----------------+-------+---------+---------------+--------------+---------------------+---------+-------------+
|  1 | 6615759585382 | 1378110990284 | Rhel63-Sanjeev | Up    | 4.2.0   | 10.147.59.126 |         9090 | 2013-09-06 04:44:45 | NULL    |           0 |
+----+---------------+---------------+----------------+-------+---------+---------------+--------------+---------------------+---------+-------------+

4. I manually logged into my VMware host and changed the above MAC address to
"06:04:5A:00:00:68", which resulted in a new interface.


[root@Rhel63-Sanjeev ~]# ifconfig
eth2  Link encap:Ethernet  HWaddr 06:04:5A:00:00:68  
  inet addr:10.147.59.126  Bcast:10.147.59.255  Mask:255.255.255.0
  inet6 addr: fe80::404:5aff:fe00:68/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:294927 errors:0 dropped:0 overruns:0 frame:0
  TX packets:475806 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:22230231 (21.2 MiB)  TX bytes:392491450 (374.3 MiB)

5. Restart the management server and verify the cloud.mshost table

mysql> select * from mshost;
+----+---------------+---------------+----------------+-------+---------+---------------+--------------+---------------------+---------+-------------+
| id | msid          | runid         | name           | state | version | service_ip    | service_port | last_update         | removed | alert_count |
+----+---------------+---------------+----------------+-------+---------+---------------+--------------+---------------------+---------+-------------+
|  1 | 6615759585382 | 1378110990284 | Rhel63-Sanjeev | Up    | 4.2.0   | 10.147.59.126 |         9090 | 2013-09-06 04:44:45 | NULL    |           0 |
|  2 | 6615759585384 | 1378462772622 | Rhel63-Sanjeev | Up    | 4.2.0   | 10.147.59.126 |         9090 | 2013-09-06 10:20:35 | NULL    |           0 |
+----+---------------+---------------+----------------+-------+---------+---------------+--------------+---------------------+---------+-------------+

Observations:

i. A new mshost entry was created because of the MAC / interface change.
ii. Both of the above entries show the state as Up, so CloudStack assumes there
are two management servers.
iii. All the system VMs still have mgmt_server_id set to the old entry, which is
in fact no longer up.

mysql> select * from host where name like '%-VM%';
+++--++++-+-++-+-+--+---+---++---++++++--+---+---+-+-++--+--+-+++--++---+---+-+++--+-+-+--++---+-+--+
| id | name   | uuid | status | type
   | private_ip_address | private_netmask | 
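
A minimal diagnostic sketch for step 5, using the table and column names shown
in the queries above on the standard "cloud" database (the MySQL user "cloud" is
an assumption; substitute whatever account you use):

# more than one non-removed "Up" row in mshost on a single-node setup points at
# the stale entry left behind by the old MAC / interface
mysql -u cloud -p cloud -e \
  "SELECT id, msid, runid, name, state, last_update FROM mshost WHERE removed IS NULL;"

# system VMs whose mgmt_server_id references an mshost row that is not currently Up
mysql -u cloud -p cloud -e \
  "SELECT h.id, h.name, h.mgmt_server_id
     FROM host h
     LEFT JOIN mshost m ON m.msid = h.mgmt_server_id
                       AND m.state = 'Up' AND m.removed IS NULL
    WHERE h.name LIKE '%-VM%' AND h.mgmt_server_id IS NOT NULL AND m.id IS NULL;"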

[jira] [Commented] (CLOUDSTACK-4533) permission issue in usage server and it failed to start after upgrade from 3.0.4 to 4.2

2013-09-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13759972#comment-13759972
 ] 

ASF subversion and git services commented on CLOUDSTACK-4533:
-

Commit 5101f7dce9f3368eefb4fc3fb8be6b5525119120 in branch refs/heads/4.1 from 
[~weizhou]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=5101f7d ]

CLOUDSTACK-4533: fix two usage issues (db.properties and log4j-cloud.xml)

(1) Replacing db.properties with management server db.properties
(2) Rename log4j-cloud_usage.xml to log4j-cloud.xml
(cherry picked from commit fb97e8e617393ac86924304f2765e933cfa30a6a)
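
For the "Permission denied" failure quoted below, a minimal operator-side
workaround sketch, assuming the usage daemon runs as the "cloud" user as in the
stock packaging (the packaging-level fix is the commit above):

# give the usage server write access to its log directory, then restart it
chown -R cloud:cloud /var/log/cloudstack/usage
service cloudstack-usage restart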


> permission issue in usage server and it failed to start after upgrade from 
> 3.0.4 to 4.2
> ---
>
> Key: CLOUDSTACK-4533
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4533
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Packaging, Upgrade, Usage
>Affects Versions: 4.2.1
> Environment: 
>Reporter: shweta agarwal
>Assignee: frank zhang
>Priority: Critical
>  Labels: ReleaseNote
> Fix For: 4.2.1
>
> Attachments: cloudstack-usage.err, cloudstack-usage.err, 
> cloudstack-usage.out, cloudstack-usage.out, usage.log
>
>
> Did an upgrade from 3.0.4 to 4.2 and then started the usage server. The usage
> server failed to start, giving the following exception:
> log4j:ERROR setFile(null,true) call failed.
> java.io.FileNotFoundException: /var/log/cloudstack/usage/usage.log 
> (Permission denied)
> at java.io.FileOutputStream.openAppend(Native Method)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:207)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
> at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
> at 
> org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
> at 
> org.apache.log4j.rolling.RollingFileAppender.activateOptions(RollingFileAppender.java:179)
> at 
> org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
> at 
> org.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:295)
> at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:176)
> at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(DOMConfigurator.java:191)
> at 
> org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(DOMConfigurator.java:523)
> at 
> org.apache.log4j.xml.DOMConfigurator.parseRoot(DOMConfigurator.java:492)
> at 
> org.apache.log4j.xml.DOMConfigurator.parse(DOMConfigurator.java:1001)
> at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:867)
> at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:773)
> at 
> org.apache.log4j.xml.DOMConfigurator.configure(DOMConfigurator.java:901)
> at 
> org.springframework.util.Log4jConfigurer.initLogging(Log4jConfigurer.java:69)
> at com.cloud.usage.UsageServer.initLog4j(UsageServer.java:89)
> at com.cloud.usage.UsageServer.init(UsageServer.java:52)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:616)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> log4j:ERROR setFile(null,true) call failed.
> java.io.FileNotFoundException: /var/log/cloudstack/usage/usage.log 
> (Permission denied)
> at java.io.FileOutputStream.openAppend(Native Method)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:207)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
> at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
> at 
> org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
> at 
> org.apache.log4j.rolling.RollingFileAppender.activateOptions(RollingFileAppender.java:179)
> at 
> org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
> at 
> org.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:295)
> at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:176)
> at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(DOMConfigurator.java:191)
> at 
> org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(DOMConfigurator.java

[jira] [Commented] (CLOUDSTACK-4533) permission issue in usage server and it failed to start after upgrade from 3.0.4 to 4.2

2013-09-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13759970#comment-13759970
 ] 

ASF subversion and git services commented on CLOUDSTACK-4533:
-

Commit ff5ac2676e8135ac31d6320f3d9e1360f5850ab7 in branch refs/heads/master 
from [~weizhou]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=ff5ac26 ]

CLOUDSTACK-4533: fix two usage issues (db.properties and log4j-cloud.xml)

(1) Replacing db.properties with management server db.properties
(2) Rename log4j-cloud_usage.xml to log4j-cloud.xml
(cherry picked from commit fb97e8e617393ac86924304f2765e933cfa30a6a)
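
A hedged manual equivalent of the two changes this commit describes, for a
usage server that has already been upgraded; the /etc/cloudstack/... paths are
assumptions based on the stock 4.x layout:

# (1) reuse the management server's db.properties for the usage server
cp /etc/cloudstack/management/db.properties /etc/cloudstack/usage/db.properties
# (2) rename log4j-cloud_usage.xml to the name the usage server looks for
mv /etc/cloudstack/usage/log4j-cloud_usage.xml /etc/cloudstack/usage/log4j-cloud.xml
service cloudstack-usage restart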


> permission issue in usage server and it failed to start after upgrade from 
> 3.0.4 to 4.2
> ---
>
> Key: CLOUDSTACK-4533
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4533
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Packaging, Upgrade, Usage
>Affects Versions: 4.2.1
> Environment: 
>Reporter: shweta agarwal
>Assignee: frank zhang
>Priority: Critical
>  Labels: ReleaseNote
> Fix For: 4.2.1
>
> Attachments: cloudstack-usage.err, cloudstack-usage.err, 
> cloudstack-usage.out, cloudstack-usage.out, usage.log
>
>
> Did an upgrade from 3.0.4 to 4.2 and then started the usage server. The usage
> server failed to start, giving the following exception:
> log4j:ERROR setFile(null,true) call failed.
> java.io.FileNotFoundException: /var/log/cloudstack/usage/usage.log 
> (Permission denied)
> at java.io.FileOutputStream.openAppend(Native Method)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:207)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
> at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
> at 
> org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
> at 
> org.apache.log4j.rolling.RollingFileAppender.activateOptions(RollingFileAppender.java:179)
> at 
> org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
> at 
> org.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:295)
> at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:176)
> at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(DOMConfigurator.java:191)
> at 
> org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(DOMConfigurator.java:523)
> at 
> org.apache.log4j.xml.DOMConfigurator.parseRoot(DOMConfigurator.java:492)
> at 
> org.apache.log4j.xml.DOMConfigurator.parse(DOMConfigurator.java:1001)
> at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:867)
> at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:773)
> at 
> org.apache.log4j.xml.DOMConfigurator.configure(DOMConfigurator.java:901)
> at 
> org.springframework.util.Log4jConfigurer.initLogging(Log4jConfigurer.java:69)
> at com.cloud.usage.UsageServer.initLog4j(UsageServer.java:89)
> at com.cloud.usage.UsageServer.init(UsageServer.java:52)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:616)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> log4j:ERROR setFile(null,true) call failed.
> java.io.FileNotFoundException: /var/log/cloudstack/usage/usage.log 
> (Permission denied)
> at java.io.FileOutputStream.openAppend(Native Method)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:207)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
> at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
> at 
> org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
> at 
> org.apache.log4j.rolling.RollingFileAppender.activateOptions(RollingFileAppender.java:179)
> at 
> org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
> at 
> org.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:295)
> at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:176)
> at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(DOMConfigurator.java:191)
> at 
> org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(DOMConfigurator.j

[jira] [Resolved] (CLOUDSTACK-4617) Xenserver 6.1 - Add host succeeds but gets into Alert state due to "Unable to create local link network". It remains in "Alert" state for a while and then gets to "

2013-09-05 Thread Prasanna Santhanam (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanna Santhanam resolved CLOUDSTACK-4617.


Resolution: Duplicate

Duplicate of CLOUDSTACK-4499 and CLOUDSTACK-3839.

> Xenserver 6.1 - Add host succeeds but gets into Alert state due to "Unable to 
> create local link network". It remains in "Alert" state for a while and then 
> gets to "UP" state.
> --
>
> Key: CLOUDSTACK-4617
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4617
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.1
> Environment: Build from 4.2-forward
>Reporter: Sangeetha Hariharan
>Priority: Critical
> Fix For: 4.2.1
>
> Attachments: hostdown.rar
>
>
> Xenserver 6.1 - Add host succeeds but gets into Alert state. It remains in 
> "Alert" state for a while and then gets to "UP" state.
> Steps to reproduce the problem:
> In my case I already had 1 zone.
> Create another advanced zone with 1 Xenserver host.
> As part of zone creation wizard , provide all the values for zone creation 
> including primary and secondary storage details.
> Primary storage creation fails with error - "Failed to delete storage pool on 
> host"
>  Following exception seen in management server logs:
> 2013-09-05 11:51:25,894 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
> ===START===  10.215.3.9 -- GET  comman
> d=createStoragePool&zoneid=b34c3dc3-27c3-4f9a-9173-c74b04a4acab&podId=fde823c6-f763-480e-a5bf-c96f4e6e43fa&clusteri
> d=3658d76d-fa9a-41c1-95a9-0c1688c2707c&name=ps1&scope=cluster&url=nfs%3A%2F%2F10.223.110.232%2Fexport%2Fhome%2Fsang
> eetha%2F307%2Fzone2-primary&response=json&sessionkey=e4LjGKV4k%2FmbX%2BczFre8RM8zhXI%3D&_=1378408027682
> 2013-09-05 11:51:25,975 DEBUG 
> [datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl] 
> (catalina-exec-20:null)
>  createPool Params @ scheme - nfs storageHost - 10.223.110.232 hostPath - 
> /export/home/sangeetha/307/zone2-primary
> port - -1
> 2013-09-05 11:51:26,125 DEBUG [cloud.storage.StorageManagerImpl] 
> (catalina-exec-20:null) Failed to add data store
> com.cloud.utils.exception.CloudRuntimeException: No host up to associate a 
> storage pool with in cluster 2
> at 
> org.apache.cloudstack.storage.datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl.attachCluster(CloudStackPrimaryDataStoreLifeCycleImpl.java:371)
> at 
> com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:749)
> at 
> com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:177)
> at 
> org.apache.cloudstack.api.command.admin.storage.CreateStoragePoolCmd.execute(CreateStoragePoolCmd.java:168)
> at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:158)
> at com.cloud.api.ApiServer.queueCommand(ApiServer.java:514)
> at com.cloud.api.ApiServer.handleRequest(ApiServer.java:372)
> at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:305)
> at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExe

[jira] [Closed] (CLOUDSTACK-2982) listTemplates with templatefilter as “All” is not showing proper listing of templates when there are multiple zones.

2013-09-05 Thread manasaveloori (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

manasaveloori closed CLOUDSTACK-2982.
-

Resolution: Fixed

Issue not seen in the latest builds; hence closing the issue.

> listTemplates with templatefilter as “All” is not showing  proper listing of 
> templates when there are multiple zones.
> -
>
> Key: CLOUDSTACK-2982
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2982
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Template
>Affects Versions: 4.2.0
>Reporter: manasaveloori
> Fix For: 4.2.0
>
> Attachments: listTemplate.jpg
>
>
> Steps:
> 1. Have a CS with multiple zones. I have 2 advanced zones (one using Xen and
> the other VMware).
> 2. Go to the templates page. Filter by “All”.
> Observation:
> The listing is not correct when there are multiple zones. Some templates are
> not downloaded, and the zone / hypervisor mapping for the templates is not
> correct.
> Attached is the screenshot.
> mysql> select * from template_host_ref\G;
> *** 1. row ***
> id: 1
>host_id: 2
>template_id: 10
>created: 2013-06-12 11:52:07
>   last_updated: 2013-06-12 11:52:07
> job_id: NULL
>   download_pct: 100
>   size: 0
>  physical_size: 0
> download_state: DOWNLOADED
>  error_str: NULL
> local_path: NULL
>   install_path: template/tmpl/1/10/
>url: 
> http://download.cloud.com/templates/acton/acton-systemvm-02062012.qcow2.bz2
>  destroyed: 0
>is_copy: 0
>  state: NULL
>   update_count: 0
>updated: NULL
> *** 2. row ***
> id: 2
>host_id: 2
>template_id: 9
>created: 2013-06-12 11:52:07
>   last_updated: 2013-06-12 11:52:07
> job_id: NULL
>   download_pct: 100
>   size: 0
>  physical_size: 0
> download_state: DOWNLOADED
>  error_str: NULL
> local_path: NULL
>   install_path: template/tmpl/1/9/
>url: 
> http://download.cloud.com/templates/acton/acton-systemvm-02062012.vhd.bz2
>  destroyed: 0
>is_copy: 0
>  state: NULL
>   update_count: 0
>updated: NULL
> *** 3. row ***
> id: 3
>host_id: 2
>template_id: 8
>created: 2013-06-12 11:52:07
>   last_updated: 2013-06-13 10:35:24
> job_id: NULL
>   download_pct: 100
>   size: 2097152000
>  physical_size: 372702720
> download_state: DOWNLOADED
>  error_str: NULL
> local_path: NULL
>   install_path: template/tmpl/1/8//74e8ee19-8ea4-495a-899a-446970388717.ova
>url: 
> http://download.cloud.com/templates/burbank/burbank-systemvm-08012012.ova
>  destroyed: 0
>is_copy: 0
>  state: NULL
>   update_count: 0
>updated: NULL
> *** 4. row ***
> id: 4
>host_id: 2
>template_id: 3
>created: 2013-06-12 11:52:07
>   last_updated: 2013-06-12 11:52:07
> job_id: NULL
>   download_pct: 100
>   size: 0
>  physical_size: 0
> download_state: DOWNLOADED
>  error_str: NULL
> local_path: NULL
>   install_path: template/tmpl/1/3/
>url: 
> http://download.cloud.com/templates/acton/acton-systemvm-02062012.qcow2.bz2
>  destroyed: 0
>is_copy: 0
>  state: NULL
>   update_count: 0
>updated: NULL
> *** 5. row ***
> id: 5
>host_id: 2
>template_id: 1
>created: 2013-06-12 11:52:07
>   last_updated: 2013-06-12 11:52:07
> job_id: NULL
>   download_pct: 100
>   size: 0
>  physical_size: 0
> download_state: DOWNLOADED
>  error_str: NULL
> local_path: NULL
>   install_path: template/tmpl/1/1/
>url: 
> http://download.cloud.com/templates/acton/acton-systemvm-02062012.vhd.bz2
>  destroyed: 0
>is_copy: 0
>  state: NULL
>   update_count: 0
>updated: NULL
> *** 6. row ***
> id: 6
>host_id: 5
>template_id: 10
>created: 2013-06-12 11:58:04
>   last_updated: 2013-06-12 11:58:04
> job_id: NULL
>   download_pct: 100
>   size: 0
>  physical_size: 0
> download_state: DOWNLOADED
>  error_str: NULL
> local_path: NULL
>   install_path: template/tmpl/1/10/
>url: 
> http://download.cloud.com/templates/acton/acton-systemvm-02062012.qcow2.bz2
>  destroyed: 0
>is_copy: 0
>  
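
A compact way to read the dump above: one line per (template, host) pair with
its download state, which makes it easier to spot templates that never reached
a given secondary storage host. Column names are taken from the \G output
above; the MySQL user "cloud" is an assumption:

mysql -u cloud -p cloud -e \
  "SELECT template_id, host_id, download_state, download_pct, error_str
     FROM template_host_ref
    WHERE destroyed = 0
    ORDER BY template_id, host_id;"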

[jira] [Updated] (CLOUDSTACK-4072) [DOC][upgrade][2.2.14 to 4.2][CentOS5 RPM builds] mysql-connector-java rpm dependency while upgrading from 2.2.14 to 4.2

2013-09-05 Thread Animesh Chaturvedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Animesh Chaturvedi updated CLOUDSTACK-4072:
---

Summary: [DOC][upgrade][2.2.14 to 4.2][CentOS5 RPM builds] 
mysql-connector-java rpm dependency while upgrading from 2.2.14 to 4.2  (was: 
[upgrade][2.2.14 to 4.2][CentOS5 RPM builds] mysql-connector-java rpm 
dependency while upgrading from 2.2.14 to 4.2)

> [DOC][upgrade][2.2.14 to 4.2][CentOS5 RPM builds] mysql-connector-java rpm 
> dependency while upgrading from 2.2.14 to 4.2
> 
>
> Key: CLOUDSTACK-4072
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4072
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Doc, Packaging
>Affects Versions: 4.2.0
> Environment: MS : CentOS 5.6
> Host : ESX 4.1
> Rhel 5 builds are used to install and upgrade
>Reporter: Abhinav Roy
>Priority: Critical
> Fix For: 4.2.0
>
>
> Steps :
> ===
> 1. Install a CS advanced zone setup on a CentOS 5.6 management server with the
> RHEL 5 build of 2.2.14.
> 2. Do some operations before the upgrade.
> 3. Upgrade to 4.2, RHEL 5 build.
> Upgrade fails with this dependency error:
> [root@MS-CentOS56 CloudPlatform-4.2-4.2-55-rhel5]# ./install.sh
> Setting up the temporary repository...
> Cleaning Yum cache...
> Loaded plugins: fastestmirror
> 7 metadata files removed
> Welcome to the CloudPlatform Installer.  What would you like to do?
> NOTE:   For installing KVM agent, please setup 
> EPEL yum repo first;
> For installing CloudPlatform on RHEL6.x, please setup 
> distribution yum repo either from ISO or from your registeration account.
> 3.We detect you already have MySql server installed, you can 
> bypass mysql install chapter in CloudPlatform installation guide.
> Or you can use E) to remove current mysql then re-run install.sh 
> selecting D) to reinstall if you think existing MySql server has some trouble.
> For MySql downloaded from community, the script may not be able to 
> detect it.
> M) Install the Management Server
> A) Install the Agent
> B) Install BareMetal Agent
> S) Install the Usage Monitor
> U) Upgrade the CloudPlatform packages installed on this computer
> R) Stop any running CloudPlatform services and remove the CloudPlatform 
> packages from this computer
> E) Remove the MySQL server (will not remove the MySQL databases)
> Q) Quit
>  > u
> Updating the CloudPlatform and its dependencies...
> Loaded plugins: fastestmirror
> Loading mirror speeds from cached hostfile
>  * base: centos.mirror.net.in
>  * extras: centos.mirror.net.in
>  * updates: centos.mirror.net.in
> base
> cloud-temp
> cloud-temp/primary
> cloud-temp
> extras
> updates
> Setting up Update Process
> Resolving Dependencies
> --> Running transaction check
> ---> Package cloudstack-common.x86_64 0:4.2.0-SNAPSHOT.el5 set to be updated
> ---> Package cloudstack-management.x86_64 0:4.2.0-SNAPSHOT.el5 set to be 
> updated
> --> Processing Dependency: cloudstack-awsapi = 4.2.0 for package: 
> cloudstack-management
> --> Processing Dependency: mysql-connector-java for package: 
> cloudstack-management
> --> Running transaction check
> ---> Package cloudstack-awsapi.x86_64 0:4.2.0-SNAPSHOT.el5 set to be updated
> ---> Package cloudstack-management.x86_64 0:4.2.0-SNAPSHOT.el5 set to be 
> updated
> --> Processing Dependency: mysql-connector-java for package: 
> cloudstack-management
> --> Finished Dependency Resolution
> cloudstack-management-4.2.0-SNAPSHOT.el5.x86_64 from cloud-temp has 
> depsolving problems
>   --> Missing Dependency: mysql-connector-java is needed by package 
> cloudstack-management-4.2.0-SNAPSHOT.el5.x86_64 (cloud-temp)
> Error: Missing Dependency: mysql-connector-java is needed by package 
> cloudstack-management-4.2.0-SNAPSHOT.el5.x86_64 (cloud-temp)
>  You could try using --skip-broken to work around the problem
>  You could try running: package-cleanup --problems
> package-cleanup --dupes
> rpm -Va --nofiles --nodigest
> The program package-cleanup is found in the yum-utils package.
> Workaround: download that package from
> http://dl.fedoraproject.org/pub/epel/5/x86_64/ and follow the instructions at
> http://pkgs.org/centos-5-rhel-5/epel-i386/mysql-connector-java-5.1.12-2.el5.i386.rpm.html
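
A hedged sketch of that workaround on the management server before re-running
install.sh and choosing U) again (epel-release-5-4 is the standard EPEL 5 repo
RPM; adjust the URL if the mirror layout has changed):

# pull mysql-connector-java from EPEL 5, as the workaround above suggests
rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
yum install -y mysql-connector-java
./install.sh   # then choose U) Upgrade again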



[jira] [Commented] (CLOUDSTACK-4088) cloudstack will stop and delete vm which not belongs to cs

2013-09-05 Thread jianmoto (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13759750#comment-13759750
 ] 

jianmoto commented on CLOUDSTACK-4088:
--

Hi, Abhinandan Prateek.
You said "There is a way to subvert this by naming a VM using cloudstack naming
convention, such VM will not be stopped by CS."
Is that right? I did this on my XenServer, but CloudStack still stopped and
removed the VMs. Are there any details I am missing?

> cloudstack will stop and delete vm which not belongs to cs
> --
>
> Key: CLOUDSTACK-4088
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4088
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: XenServer
>Affects Versions: 4.0.2, 4.1.0, 4.2.0, Future
> Environment: xenserver 6.0.2
>Reporter: Hongtu Zang
>  Labels: removed, vm, xenserver
> Fix For: 4.2.0, Future
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> 1. Create a VM on the XenServer host using XenCenter.
> 2. Add the XenServer host to CloudStack.
> The VM will be stopped and removed.
> 3. Create another VM in XenCenter.
> It is also removed.
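
A minimal sketch of the naming-convention workaround the comment above asks
about, run on the XenServer host. The i-<account>-<id>-VM pattern and the VM
name are assumptions for illustration; match whatever instance names CloudStack
actually generates in your deployment:

# rename an existing XenCenter-created VM to a CloudStack-style instance name
VM_UUID=$(xe vm-list name-label="my-xencenter-vm" --minimal)   # hypothetical VM name
xe vm-param-set uuid="$VM_UUID" name-label="i-2-100-VM"        # hypothetical instance name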



[jira] [Updated] (CLOUDSTACK-4617) Xenserver 6.1 - Add host succeeds but gets into Alert state due to "Unable to create local link network". It remains in "Alert" state for a while and then gets to "U

2013-09-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-4617:


Summary: Xenserver 6.1 - Add host succeeds but gets into Alert state due to 
"Unable to create local link network". It remains in "Alert" state for a while 
and then gets to "UP" state.  (was: Xenserver 6.1 - Add host succeeds but gets 
into Alert state due to "". It remains in "Alert" state for a while and then 
gets to "UP" state.)

> Xenserver 6.1 - Add host succeeds but gets into Alert state due to "Unable to 
> create local link network". It remains in "Alert" state for a while and then 
> gets to "UP" state.
> --
>
> Key: CLOUDSTACK-4617
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4617
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.1
> Environment: Build from 4.2-forward
>Reporter: Sangeetha Hariharan
>Priority: Critical
> Fix For: 4.2.1
>
> Attachments: hostdown.rar
>
>
> Xenserver 6.1 - Add host succeeds but gets into Alert state. It remains in 
> "Alert" state for a while and then gets to "UP" state.
> Steps to reproduce the problem:
> In my case I already had 1 zone.
> Create another advanced zone with 1 Xenserver host.
> As part of zone creation wizard , provide all the values for zone creation 
> including primary and secondary storage details.
> Primary storage creation fails with error - "Failed to delete storage pool on 
> host"
>  Following exception seen in management server logs:
> 2013-09-05 11:51:25,894 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
> ===START===  10.215.3.9 -- GET  comman
> d=createStoragePool&zoneid=b34c3dc3-27c3-4f9a-9173-c74b04a4acab&podId=fde823c6-f763-480e-a5bf-c96f4e6e43fa&clusteri
> d=3658d76d-fa9a-41c1-95a9-0c1688c2707c&name=ps1&scope=cluster&url=nfs%3A%2F%2F10.223.110.232%2Fexport%2Fhome%2Fsang
> eetha%2F307%2Fzone2-primary&response=json&sessionkey=e4LjGKV4k%2FmbX%2BczFre8RM8zhXI%3D&_=1378408027682
> 2013-09-05 11:51:25,975 DEBUG 
> [datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl] 
> (catalina-exec-20:null)
>  createPool Params @ scheme - nfs storageHost - 10.223.110.232 hostPath - 
> /export/home/sangeetha/307/zone2-primary
> port - -1
> 2013-09-05 11:51:26,125 DEBUG [cloud.storage.StorageManagerImpl] 
> (catalina-exec-20:null) Failed to add data store
> com.cloud.utils.exception.CloudRuntimeException: No host up to associate a 
> storage pool with in cluster 2
> at 
> org.apache.cloudstack.storage.datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl.attachCluster(CloudStackPrimaryDataStoreLifeCycleImpl.java:371)
> at 
> com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:749)
> at 
> com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:177)
> at 
> org.apache.cloudstack.api.command.admin.storage.CreateStoragePoolCmd.execute(CreateStoragePoolCmd.java:168)
> at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:158)
> at com.cloud.api.ApiServer.queueCommand(ApiServer.java:514)
> at com.cloud.api.ApiServer.handleRequest(ApiServer.java:372)
> at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:305)
> at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11N

[jira] [Updated] (CLOUDSTACK-4617) Xenserver 6.1 - Add host succeeds but gets into Alert state due to "". It remains in "Alert" state for a while and then gets to "UP" state.

2013-09-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-4617:


Summary: Xenserver 6.1 - Add host succeeds but gets into Alert state due to 
"". It remains in "Alert" state for a while and then gets to "UP" state.  (was: 
Xenserver 6.1 - Add host succeeds but gets into Alert state. It remains in 
"Alert" state for a while and then gets to "UP" state.)

> Xenserver 6.1 - Add host succeeds but gets into Alert state due to "". It 
> remains in "Alert" state for a while and then gets to "UP" state.
> ---
>
> Key: CLOUDSTACK-4617
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4617
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.1
> Environment: Build from 4.2-forward
>Reporter: Sangeetha Hariharan
>Priority: Critical
> Fix For: 4.2.1
>
> Attachments: hostdown.rar
>
>
> Xenserver 6.1 - Add host succeeds but gets into Alert state. It remains in 
> "Alert" state for a while and then gets to "UP" state.
> Steps to reproduce the problem:
> In my case I already had 1 zone.
> Create another advanced zone with 1 Xenserver host.
> As part of zone creation wizard , provide all the values for zone creation 
> including primary and secondary storage details.
> Primary storage creation fails with error - "Failed to delete storage pool on 
> host"
>  Following exception seen in management server logs:
> 2013-09-05 11:51:25,894 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
> ===START===  10.215.3.9 -- GET  comman
> d=createStoragePool&zoneid=b34c3dc3-27c3-4f9a-9173-c74b04a4acab&podId=fde823c6-f763-480e-a5bf-c96f4e6e43fa&clusteri
> d=3658d76d-fa9a-41c1-95a9-0c1688c2707c&name=ps1&scope=cluster&url=nfs%3A%2F%2F10.223.110.232%2Fexport%2Fhome%2Fsang
> eetha%2F307%2Fzone2-primary&response=json&sessionkey=e4LjGKV4k%2FmbX%2BczFre8RM8zhXI%3D&_=1378408027682
> 2013-09-05 11:51:25,975 DEBUG 
> [datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl] 
> (catalina-exec-20:null)
>  createPool Params @ scheme - nfs storageHost - 10.223.110.232 hostPath - 
> /export/home/sangeetha/307/zone2-primary
> port - -1
> 2013-09-05 11:51:26,125 DEBUG [cloud.storage.StorageManagerImpl] 
> (catalina-exec-20:null) Failed to add data store
> com.cloud.utils.exception.CloudRuntimeException: No host up to associate a 
> storage pool with in cluster 2
> at 
> org.apache.cloudstack.storage.datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl.attachCluster(CloudStackPrimaryDataStoreLifeCycleImpl.java:371)
> at 
> com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:749)
> at 
> com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:177)
> at 
> org.apache.cloudstack.api.command.admin.storage.CreateStoragePoolCmd.execute(CreateStoragePoolCmd.java:168)
> at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:158)
> at com.cloud.api.ApiServer.queueCommand(ApiServer.java:514)
> at com.cloud.api.ApiServer.handleRequest(ApiServer.java:372)
> at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:305)
> at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)

[jira] [Updated] (CLOUDSTACK-4617) Xenserver 6.1 - Add host succeeds but gets into Alert state. It remains in "Alert" state for a while and then gets to "UP" state.

2013-09-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-4617:


Description: 
Xenserver 6.1 - Add host succeeds but gets into Alert state. It remains in 
"Alert" state for a while and then gets to "UP" state.

Steps to reproduce the problem:
In my case I already had 1 zone.

Create another advanced zone with 1 Xenserver host.

As part of the zone creation wizard, provide all the values for zone creation,
including primary and secondary storage details.

Primary storage creation fails with the error "Failed to delete storage pool on
host".

The following exception is seen in the management server logs:
2013-09-05 11:51:25,894 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
===START===  10.215.3.9 -- GET  comman
d=createStoragePool&zoneid=b34c3dc3-27c3-4f9a-9173-c74b04a4acab&podId=fde823c6-f763-480e-a5bf-c96f4e6e43fa&clusteri
d=3658d76d-fa9a-41c1-95a9-0c1688c2707c&name=ps1&scope=cluster&url=nfs%3A%2F%2F10.223.110.232%2Fexport%2Fhome%2Fsang
eetha%2F307%2Fzone2-primary&response=json&sessionkey=e4LjGKV4k%2FmbX%2BczFre8RM8zhXI%3D&_=1378408027682
2013-09-05 11:51:25,975 DEBUG 
[datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl] 
(catalina-exec-20:null)
 createPool Params @ scheme - nfs storageHost - 10.223.110.232 hostPath - 
/export/home/sangeetha/307/zone2-primary
port - -1
2013-09-05 11:51:26,125 DEBUG [cloud.storage.StorageManagerImpl] 
(catalina-exec-20:null) Failed to add data store
com.cloud.utils.exception.CloudRuntimeException: No host up to associate a 
storage pool with in cluster 2
at 
org.apache.cloudstack.storage.datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl.attachCluster(CloudStackPrimaryDataStoreLifeCycleImpl.java:371)
at 
com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:749)
at 
com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:177)
at 
org.apache.cloudstack.api.command.admin.storage.CreateStoragePoolCmd.execute(CreateStoragePoolCmd.java:168)
at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:158)
at com.cloud.api.ApiServer.queueCommand(ApiServer.java:514)
at com.cloud.api.ApiServer.handleRequest(ApiServer.java:372)
at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:305)
at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at 
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at 
org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
at 
org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
at 
org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:679)
2013-09-05 11:51:26,173 INFO  [cloud.api.ApiServer] (catalina-exec-20:null) 
Failed to delete storage pool on host


The reason for this failure was that the host was in the "Alert" state when the
primary storage was being added.

The following exception is seen in the management server log:

2013-09-05 11:51:25,038 DEBUG [xen.resource.CitrixResourceBase] 
(DirectAgent-100:null) Lowest available Vif device number: 0 for VM: Control 
domain on host: Rack3Host3.lab.vmops.com
2013-09-05 11:51:25,462 WARN  [xen.resource.CitrixResourceBase] 
(DirectAgent-100:null) Unable to create local link network
The server failed to handle your request, due to an internal error.  The given 
message may give details useful for debugging the problem.
at com.xensource.xenapi.Types.checkResponse(Types.java:1694)
at com.xensource.xenapi.Connection.dispatch(Connection.java:368)
at 
com.cloud.hypervisor.xen.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConne
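
A hedged retry sketch for the failure above, matching the note elsewhere in this
thread that retrying after some delay succeeds: wait until the newly added host
leaves the Alert state, then re-issue the createStoragePool call with the
parameters from the logged request. cloudmonkey is assumed to be installed and
configured with API keys, and the grep on its output is approximate:

# wait until the XenServer host reports Up
until cloudmonkey list hosts type=Routing | grep -q "Up"; do
    sleep 30
done
# retry the same createStoragePool request that failed during zone creation
cloudmonkey create storagepool zoneid=b34c3dc3-27c3-4f9a-9173-c74b04a4acab \
    podid=fde823c6-f763-480e-a5bf-c96f4e6e43fa \
    clusterid=3658d76d-fa9a-41c1-95a9-0c1688c2707c \
    name=ps1 scope=cluster \
    url=nfs://10.223.110.232/export/home/sangeetha/307/zone2-primary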

[jira] [Updated] (CLOUDSTACK-4617) Xenserver 6.1 - Add host succeeds but gets into Alert state. It remains in "Alert" state for a while and then gets to "UP" state.

2013-09-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-4617:


Summary: Xenserver 6.1 - Add host succeeds but gets into Alert state. It 
remains in "Alert" state for a while and then gets to "UP" state.  (was: 
Xenserver 6.1 - Add host succceeds but gets into Alert state. It remains in 
"Alert)

> Xenserver 6.1 - Add host succeeds but gets into Alert state. It remains in 
> "Alert" state for a while and then gets to "UP" state.
> -
>
> Key: CLOUDSTACK-4617
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4617
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.1
> Environment: Build from 4.2-forward
>Reporter: Sangeetha Hariharan
>Priority: Critical
> Fix For: 4.2.1
>
> Attachments: hostdown.rar
>
>
> Adding primary storage pool as part of zone creation fails stating that there 
> is no host in the cluster. Retrying to add the primary storage pool again 
> after some delay succeeds.
> Steps to reproduce the problem:
> In my case I already had 1 zone.
> Create another advanced zone with 1 Xenserver host.
> As part of zone creation wizard , provide all the values for zone creation 
> including primary and secondary storage details.
> Primary storage creation fails with error - "Failed to delete storage pool on 
> host"
>  Following exception seen in management server logs:
> 2013-09-05 11:51:25,894 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
> ===START===  10.215.3.9 -- GET  comman
> d=createStoragePool&zoneid=b34c3dc3-27c3-4f9a-9173-c74b04a4acab&podId=fde823c6-f763-480e-a5bf-c96f4e6e43fa&clusteri
> d=3658d76d-fa9a-41c1-95a9-0c1688c2707c&name=ps1&scope=cluster&url=nfs%3A%2F%2F10.223.110.232%2Fexport%2Fhome%2Fsang
> eetha%2F307%2Fzone2-primary&response=json&sessionkey=e4LjGKV4k%2FmbX%2BczFre8RM8zhXI%3D&_=1378408027682
> 2013-09-05 11:51:25,975 DEBUG 
> [datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl] 
> (catalina-exec-20:null)
>  createPool Params @ scheme - nfs storageHost - 10.223.110.232 hostPath - 
> /export/home/sangeetha/307/zone2-primary
> port - -1
> 2013-09-05 11:51:26,125 DEBUG [cloud.storage.StorageManagerImpl] 
> (catalina-exec-20:null) Failed to add data store
> com.cloud.utils.exception.CloudRuntimeException: No host up to associate a 
> storage pool with in cluster 2
> at 
> org.apache.cloudstack.storage.datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl.attachCluster(CloudStackPrimaryDataStoreLifeCycleImpl.java:371)
> at 
> com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:749)
> at 
> com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:177)
> at 
> org.apache.cloudstack.api.command.admin.storage.CreateStoragePoolCmd.execute(CreateStoragePoolCmd.java:168)
> at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:158)
> at com.cloud.api.ApiServer.queueCommand(ApiServer.java:514)
> at com.cloud.api.ApiServer.handleRequest(ApiServer.java:372)
> at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:305)
> at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
> at 
> java.util

[jira] [Updated] (CLOUDSTACK-4617) Xenserver 6.1 - Add host succceeds but gets into Alert state. It remains in "Alert

2013-09-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-4617:


Priority: Critical  (was: Major)
 Summary: Xenserver 6.1 - Add host succceeds but gets into Alert state. It 
remains in "Alert  (was: Adding primary storage pool as part of zone creation 
fails stating that there is no host in the cluster. Retrying to add the primary 
storage pool again after some delay succeeds.)

> Xenserver 6.1 - Add host succceeds but gets into Alert state. It remains in 
> "Alert
> --
>
> Key: CLOUDSTACK-4617
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4617
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.1
> Environment: Build from 4.2-forward
>Reporter: Sangeetha Hariharan
>Priority: Critical
> Fix For: 4.2.1
>
> Attachments: hostdown.rar
>
>
> Adding primary storage pool as part of zone creation fails stating that there 
> is no host in the cluster. Retrying to add the primary storage pool again 
> after some delay succeeds.
> Steps to reproduce the problem:
> In my case I already had 1 zone.
> Create another advanced zone with 1 Xenserver host.
> As part of zone creation wizard , provide all the values for zone creation 
> including primary and secondary storage details.
> Primary storage creation fails with error - "Failed to delete storage pool on 
> host"
>  Following exception seen in management server logs:
> 2013-09-05 11:51:25,894 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
> ===START===  10.215.3.9 -- GET  comman
> d=createStoragePool&zoneid=b34c3dc3-27c3-4f9a-9173-c74b04a4acab&podId=fde823c6-f763-480e-a5bf-c96f4e6e43fa&clusteri
> d=3658d76d-fa9a-41c1-95a9-0c1688c2707c&name=ps1&scope=cluster&url=nfs%3A%2F%2F10.223.110.232%2Fexport%2Fhome%2Fsang
> eetha%2F307%2Fzone2-primary&response=json&sessionkey=e4LjGKV4k%2FmbX%2BczFre8RM8zhXI%3D&_=1378408027682
> 2013-09-05 11:51:25,975 DEBUG 
> [datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl] 
> (catalina-exec-20:null)
>  createPool Params @ scheme - nfs storageHost - 10.223.110.232 hostPath - 
> /export/home/sangeetha/307/zone2-primary
> port - -1
> 2013-09-05 11:51:26,125 DEBUG [cloud.storage.StorageManagerImpl] 
> (catalina-exec-20:null) Failed to add data store
> com.cloud.utils.exception.CloudRuntimeException: No host up to associate a 
> storage pool with in cluster 2
> at 
> org.apache.cloudstack.storage.datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl.attachCluster(CloudStackPrimaryDataStoreLifeCycleImpl.java:371)
> at 
> com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:749)
> at 
> com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:177)
> at 
> org.apache.cloudstack.api.command.admin.storage.CreateStoragePoolCmd.execute(CreateStoragePoolCmd.java:168)
> at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:158)
> at com.cloud.api.ApiServer.queueCommand(ApiServer.java:514)
> at com.cloud.api.ApiServer.handleRequest(ApiServer.java:372)
> at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:305)
> at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
> at 
> java.util.concur

[jira] [Created] (CLOUDSTACK-4615) [Baremetal] Baremetal agent missing from campo GA release packaging

2013-09-05 Thread angeline shen (JIRA)
angeline shen created CLOUDSTACK-4615:
-

 Summary: [Baremetal] Baremetal agent missing from campo GA release 
packaging
 Key: CLOUDSTACK-4615
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4615
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Baremetal
Affects Versions: 4.2.0
 Environment: MS   RHEL 6.3
host  baremetal 
Reporter: angeline shen
Priority: Blocker
 Fix For: 4.2.0


From cheolsoo.p...@citrix.com:


[root@campo-bm-pxe CloudPlatform-4.2.0-1-rhel6.3]# ./install.sh
Setting up the temporary repository...
Cleaning Yum cache...
Loaded plugins: fastestmirror, security
Cleaning repos: base cloud-temp extras updates
7 metadata files removed
Welcome to the CloudPlatform Installer.  What would you like to do?

NOTE:   For installing KVM agent, please setup 
EPEL yum repo first;
For installing CloudPlatform on RHEL6.x, please setup 
distribution yum repo either from ISO or from your registeration account.


M) Install the Management Server
A) Install the Agent
B) Install BareMetal Agent
S) Install the Usage Monitor
D) Install the database server (from distribution's repo)
L) Install the MySQL 5.1.58 (only for CentOS5.x, Rhel6.x naturally has 
higher version MySql)
Q) Quit
 > B
Installing the BareMetal Agent...
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
 * base: centos.tt.co.kr
 * extras: centos.tt.co.kr
 * updates: centos.tt.co.kr
base
  | 3.7 kB 00:00
cloud-temp  
  | 1.3 kB 00:00 ...
extras  
  | 3.4 kB 00:00
updates 
  | 3.4 kB 00:00
Setting up Install Process
No package cloudstack-baremetal-agent available.
Error: Nothing to do


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4615) [Baremetal] Baremetal agent missing from campo GA release packaging

2013-09-05 Thread angeline shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angeline shen updated CLOUDSTACK-4615:
--

Assignee: frank zhang

> [Baremetal] Baremetal agent missing from campo GA release packaging
> ---
>
> Key: CLOUDSTACK-4615
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4615
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Baremetal
>Affects Versions: 4.2.0
> Environment: MS   RHEL 6.3
> host  baremetal 
>Reporter: angeline shen
>Assignee: frank zhang
>Priority: Blocker
> Fix For: 4.2.0
>
>
> From   cheolsoo.p...@citrix.com  :
> [root@campo-bm-pxe CloudPlatform-4.2.0-1-rhel6.3]# ./install.sh
> Setting up the temporary repository...
> Cleaning Yum cache...
> Loaded plugins: fastestmirror, security
> Cleaning repos: base cloud-temp extras updates
> 7 metadata files removed
> Welcome to the CloudPlatform Installer.  What would you like to do?
> NOTE:   For installing KVM agent, please setup 
> EPEL yum repo first;
> For installing CloudPlatform on RHEL6.x, please setup 
> distribution yum repo either from ISO or from your registeration account.
> M) Install the Management Server
> A) Install the Agent
> B) Install BareMetal Agent
> S) Install the Usage Monitor
> D) Install the database server (from distribution's repo)
> L) Install the MySQL 5.1.58 (only for CentOS5.x, Rhel6.x naturally has 
> higher version MySql)
> Q) Quit
>  > B
> Installing the BareMetal Agent...
> Loaded plugins: fastestmirror, security
> Loading mirror speeds from cached hostfile
>  * base: centos.tt.co.kr
>  * extras: centos.tt.co.kr
>  * updates: centos.tt.co.kr
> base  
> | 3.7 kB 00:00
> cloud-temp
> | 1.3 kB 00:00 ...
> extras
> | 3.4 kB 00:00
> updates   
> | 3.4 kB 00:00
> Setting up Install Process
> No package cloudstack-baremetal-agent available.
> Error: Nothing to do

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4600) Registered Cross-zone template does not populate template_zone_ref for later added zones using S3

2013-09-05 Thread Min Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Min Chen updated CLOUDSTACK-4600:
-

Description: 
1. Set up CS with one zone using S3 as secondary storage.
2. Register a template with cross-zone set to true.
3. Now add another zone using the same region-wide S3 secondary storage as zone 
one.
4. template_zone_ref only has one entry for the cross-zone template, no entry 
for later added zone.

  was:
1. Set up CS with one zone.
2. Register a template with cross-zone set to true.
3. Now add another zone.
4. template_zone_ref only has one entry for the cross-zone template, no entry 
for later added zone.

Summary: Registered Cross-zone template does not populate 
template_zone_ref for later added zones using S3  (was: Registered Cross-zone 
template does not populate template_zone_ref for later added zones)

> Registered Cross-zone template does not populate template_zone_ref for later 
> added zones using S3
> -
>
> Key: CLOUDSTACK-4600
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4600
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Affects Versions: 4.2.0
>Reporter: Sanjeev N
>Assignee: Min Chen
>Priority: Critical
> Fix For: 4.2.1
>
>
> 1. Set up CS with one zone using S3 as secondary storage.
> 2. Register a template with cross-zone set to true.
> 3. Now add another zone using the same region-wide S3 secondary storage as 
> zone one.
> 4. template_zone_ref only has one entry for the cross-zone template, no entry 
> for later added zone.
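A rough sketch of the fix direction these steps imply: when a new zone starts using the shared region-wide image store, seed template_zone_ref for the existing cross-zone templates. The DAO and record types below are illustrative stand-ins, not the real CloudStack classes.

import java.util.List;

public class CrossZoneTemplateSeedSketch {
    static class TemplateRecord { long id; boolean crossZones; }

    interface TemplateZoneDao {
        List<TemplateRecord> listCrossZoneTemplates();
        boolean refExists(long templateId, long zoneId);
        void addRef(long templateId, long zoneId);
    }

    // Called when a new zone is registered against the shared S3 secondary storage.
    static void seedNewZone(TemplateZoneDao dao, long newZoneId) {
        for (TemplateRecord t : dao.listCrossZoneTemplates()) {
            if (t.crossZones && !dao.refExists(t.id, newZoneId)) {
                dao.addRef(t.id, newZoneId); // the missing step, per steps 3 and 4 above
            }
        }
    }
}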

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4616) When system Vms fail to start when host is down , link local Ip addresses do not get released resulting in all the link local Ip addresses being consumed eventually

2013-09-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-4616:


Attachment: hostdown.rar

> When system Vms fail to start when host is down ,  link local Ip addresses do 
> not get released resulting in all the link local Ip addresses being consumed 
> eventually.
> --
>
> Key: CLOUDSTACK-4616
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4616
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.1
> Environment: Build from 4.2-forward
>Reporter: Sangeetha Hariharan
>Priority: Critical
> Fix For: 4.2.1
>
> Attachments: hostdown.rar
>
>
> When system Vms fail to start when host is down ,  link local Ip addresses do 
> not get released resulting in all the link local Ip addresses being consumed 
> eventually.
> Steps to reproduce the problem:
> Advanced zone with 1 cluster having 1 host (Xenserver).
> Had SSVM,CCPVM, 2 routers and few user Vms running in the host.
> power down the host.
> When the host was powered down, it was still marked as being in the "Up" state 
> (bug tracked in CLOUDSTACK-2140).
> Attempts to restart all the system VMs on the down host are made continuously 
> and they fail.
> These failed attempts do not release the link local IPs, resulting in all link 
> local IPs being consumed.
> When the host is actually powered on, attempts to start the system VMs fail 
> because of the following exception seen in the management-server logs:
> 013-09-05 12:00:09,551 INFO  [cloud.vm.VirtualMachineManagerImpl] 
> (secstorage-1:null) Insufficient capacity
> com.cloud.exception.InsufficientAddressCapacityException: Insufficient link 
> local address capacityScope=interface com.cloud.dc.DataCenter; id=1
> at 
> com.cloud.network.guru.ControlNetworkGuru.reserve(ControlNetworkGuru.java:156)
> at 
> com.cloud.network.NetworkManagerImpl.prepareNic(NetworkManagerImpl.java:2157)
> at 
> com.cloud.network.NetworkManagerImpl.prepare(NetworkManagerImpl.java:2127)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.advanceStart(VirtualMachineManagerImpl.java:886)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.start(VirtualMachineManagerImpl.java:578)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.start(VirtualMachineManagerImpl.java:571)
> at 
> com.cloud.storage.secondary.SecondaryStorageManagerImpl.startSecStorageVm(SecondaryStorageManagerImpl.java:267)
> at 
> com.cloud.storage.secondary.SecondaryStorageManagerImpl.allocCapacity(SecondaryStorageManagerImpl.java:696)
> at 
> com.cloud.storage.secondary.SecondaryStorageManagerImpl.expandPool(SecondaryStorageManagerImpl.java:1300)
> at 
> com.cloud.secstorage.PremiumSecondaryStorageManagerImpl.scanPool(PremiumSecondaryStorageManagerImpl.java:123)
> at 
> com.cloud.secstorage.PremiumSecondaryStorageManagerImpl.scanPool(PremiumSecondaryStorageManagerImpl.java:50)
> at 
> com.cloud.vm.SystemVmLoadScanner.loadScan(SystemVmLoadScanner.java:104)
> at 
> com.cloud.vm.SystemVmLoadScanner.access$100(SystemVmLoadScanner.java:33)
> at 
> com.cloud.vm.SystemVmLoadScanner$1.reallyRun(SystemVmLoadScanner.java:81)
> at com.cloud.vm.SystemVmLoadScanner$1.run(SystemVmLoadScanner.java:72)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:679)
> mysql> select * from op_dc_link_local_ip_address_alloc where data_center_id=1 
> and taken is null;
> Empty set (0.00 sec)
>   
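A minimal sketch of the behaviour the report is asking for: every link-local reservation taken for a start attempt should be released when that attempt fails. The dao interface and method names are assumptions for illustration, not the actual ControlNetworkGuru API.

public class LinkLocalReservationSketch {
    interface LinkLocalIpDao {
        String take(long dataCenterId);             // mark one link-local IP as taken
        void release(long dataCenterId, String ip); // clear the 'taken' mark again
    }

    private final LinkLocalIpDao dao;

    LinkLocalReservationSketch(LinkLocalIpDao dao) {
        this.dao = dao;
    }

    // Reserve an IP, run the start attempt, and always give the IP back on failure.
    public boolean startWithReservation(long dcId, Runnable startVm) {
        String ip = dao.take(dcId);
        boolean started = false;
        try {
            startVm.run();
            started = true;
        } finally {
            if (!started) {
                dao.release(dcId, ip); // without this, repeated failed starts exhaust the range
            }
        }
        return started;
    }
}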

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4620) Vm failed to start on the host on which it was running due to not having enough reservedMem when the host was powered on after being shutdown.

2013-09-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-4620:


Summary: Vm failed to start on the host on which it was running due to not 
having enough reservedMem when the host was powered on after being shutdown.  
(was: Vm failed to start on the host on which it was running due to no having 
enough reservedMem when the host was powered on after being shutdown.)

> Vm failed to start on the host on which it was running due to not having 
> enough reservedMem when the host was powered on after being shutdown.
> --
>
> Key: CLOUDSTACK-4620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4620
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.1
> Environment: Build from 4.2-forward
>Reporter: Sangeetha Hariharan
> Fix For: 4.2.1
>
> Attachments: hostdown.rar
>
>
> Vm failed to start on the host on which it was running due to not having 
> enough reservedMem when the host was powered on after being shutdown
> Steps to reproduce the problem:
> Advanced zone with 1 cluster having 1 host (Xenserver).
> Had SSVM,CCPVM, 2 routers and few user Vms running in the host.
> Power down the host. 
> After few hours, powered on the host.
> All the Vms running on this host were marked "Stopped".
> Tried to start all the user Vms running in this host.
> 1 of the user Vms fails to start because of not having enough "Reserved RAM"
> 2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
> 186b3cbed ]) Reserved RAM: 0 , Requested RAM: 536870912
> When I tried to start the same VM again after a few minutes, it started 
> successfully on the same host.
> There seems to be some issue with releasing the capacity when all the VMs 
> get marked as "Stopped" by the VM sync process.
> The VM that failed to start because of capacity, and then eventually started 
> successfully a few minutes later, is "temfromsnap".
> Management server logs from the failed attempt to start the VM on its 
> last_host_id:
> 2013-09-05 12:52:44,934 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-
> a441-533186b3cbed ]) DeploymentPlanner allocation algorithm: 
> com.cloud.deploy.FirstFitPlanner_EnhancerByCloudStack_b297c61
> b@7e43d432
> 2013-09-05 12:52:44,934 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-
> a441-533186b3cbed ]) Trying to allocate a host and storage pools from dc:1, 
> pod:1,cluster:1, requested cpu: 500, requested
>  ram: 536870912
> 2013-09-05 12:52:44,934 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-
> a441-533186b3cbed ]) Is ROOT volume READY (pool already allocated)?: Yes
> 2013-09-05 12:52:44,934 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-
> a441-533186b3cbed ]) This VM has last host_id specified, trying to choose the 
> same host: 1
> 2013-09-05 12:52:44,938 DEBUG [cloud.capacity.CapacityManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
> 186b3cbed ]) Checking if host: 1 has enough capacity for requested CPU: 500 
> and requested RAM: 536870912 , cpuOverprovisio
> ningFactor: 1.0
> 2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
> 186b3cbed ]) Hosts's actual total CPU: 9040 and CPU after applying 
> overprovisioning: 9040
> 2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
> 186b3cbed ]) We need to allocate to the last host again, so checking if there 
> is enough reserved capacity
> 2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
> 186b3cbed ]) Reserved CPU: 1500 , Requested CPU: 500
> 2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
> 186b3cbed ]) Reserved RAM: 0 , Requested RAM: 536870912
> 2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
> 186b3cbed ]) STATS: Failed to alloc resource from host: 1 reservedCpu: 1500, 
> requested cpu: 500, reservedMem: 0, requested
>  mem: 536870912
> 2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
> 

[jira] [Updated] (CLOUDSTACK-4617) Adding primary storage pool as part of zone creation fails stating that there is no host in the cluster. Retrying to add the primary storage pool again after some de

2013-09-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-4617:


Attachment: hostdown.rar

> Adding primary storage pool as part of zone creation fails stating that there 
> is no host in the cluster. Retrying to add the primary storage pool again 
> after some delay succeeds.
> --
>
> Key: CLOUDSTACK-4617
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4617
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.1
> Environment: Build from 4.2-forward
>Reporter: Sangeetha Hariharan
> Fix For: 4.2.1
>
> Attachments: hostdown.rar
>
>
> Adding primary storage pool as part of zone creation fails stating that there 
> is no host in the cluster. Retrying to add the primary storage pool again 
> after some delay succeeds.
> Steps to reproduce the problem:
> In my case I already had 1 zone.
> Create another advanced zone with 1 Xenserver host.
> As part of zone creation wizard , provide all the values for zone creation 
> including primary and secondary storage details.
> Primary storage creation fails with error - "Failed to delete storage pool on 
> host"
>  Following exception seen in management server logs:
> 2013-09-05 11:51:25,894 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
> ===START===  10.215.3.9 -- GET  comman
> d=createStoragePool&zoneid=b34c3dc3-27c3-4f9a-9173-c74b04a4acab&podId=fde823c6-f763-480e-a5bf-c96f4e6e43fa&clusteri
> d=3658d76d-fa9a-41c1-95a9-0c1688c2707c&name=ps1&scope=cluster&url=nfs%3A%2F%2F10.223.110.232%2Fexport%2Fhome%2Fsang
> eetha%2F307%2Fzone2-primary&response=json&sessionkey=e4LjGKV4k%2FmbX%2BczFre8RM8zhXI%3D&_=1378408027682
> 2013-09-05 11:51:25,975 DEBUG 
> [datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl] 
> (catalina-exec-20:null)
>  createPool Params @ scheme - nfs storageHost - 10.223.110.232 hostPath - 
> /export/home/sangeetha/307/zone2-primary
> port - -1
> 2013-09-05 11:51:26,125 DEBUG [cloud.storage.StorageManagerImpl] 
> (catalina-exec-20:null) Failed to add data store
> com.cloud.utils.exception.CloudRuntimeException: No host up to associate a 
> storage pool with in cluster 2
> at 
> org.apache.cloudstack.storage.datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl.attachCluster(CloudStackPrimaryDataStoreLifeCycleImpl.java:371)
> at 
> com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:749)
> at 
> com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:177)
> at 
> org.apache.cloudstack.api.command.admin.storage.CreateStoragePoolCmd.execute(CreateStoragePoolCmd.java:168)
> at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:158)
> at com.cloud.api.ApiServer.queueCommand(ApiServer.java:514)
> at com.cloud.api.ApiServer.handleRequest(ApiServer.java:372)
> at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:305)
> at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(T

[jira] [Updated] (CLOUDSTACK-4620) Vm failed to start on the host on which it was running due to not having enough reservedMem when the host was powered on after being shutdown.

2013-09-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-4620:


Attachment: hostdown.rar

> Vm failed to start on the host on which it was running due to not having 
> enough reservedMem when the host was powered on after being shutdown.
> -
>
> Key: CLOUDSTACK-4620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4620
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.1
> Environment: Build from 4.2-forward
>Reporter: Sangeetha Hariharan
> Fix For: 4.2.1
>
> Attachments: hostdown.rar
>
>
> Vm failed to start on the host on which it was running due to not having 
> enough reservedMem when the host was powered on after being shutdown
> Steps to reproduce the problem:
> Advanced zone with 1 cluster having 1 host (Xenserver).
> Had SSVM,CCPVM, 2 routers and few user Vms running in the host.
> Power down the host. 
> After few hours, powered on the host.
> All the Vms running on this host were marked "Stopped".
> Tried to start all the user Vms running in this host.
> 1 of the user Vms fails to start because of not having enough "Reserved RAM"
> 2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
> 186b3cbed ]) Reserved RAM: 0 , Requested RAM: 536870912
> When I tried to start the same VM again after a few minutes, it started 
> successfully on the same host.
> There seems to be some issue with releasing the capacity when all the VMs 
> get marked as "Stopped" by the VM sync process.
> The VM that failed to start because of capacity, and then eventually started 
> successfully a few minutes later, is "temfromsnap".
> Management server logs from the failed attempt to start the VM on its 
> last_host_id:
> 2013-09-05 12:52:44,934 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-
> a441-533186b3cbed ]) DeploymentPlanner allocation algorithm: 
> com.cloud.deploy.FirstFitPlanner_EnhancerByCloudStack_b297c61
> b@7e43d432
> 2013-09-05 12:52:44,934 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-
> a441-533186b3cbed ]) Trying to allocate a host and storage pools from dc:1, 
> pod:1,cluster:1, requested cpu: 500, requested
>  ram: 536870912
> 2013-09-05 12:52:44,934 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-
> a441-533186b3cbed ]) Is ROOT volume READY (pool already allocated)?: Yes
> 2013-09-05 12:52:44,934 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-
> a441-533186b3cbed ]) This VM has last host_id specified, trying to choose the 
> same host: 1
> 2013-09-05 12:52:44,938 DEBUG [cloud.capacity.CapacityManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
> 186b3cbed ]) Checking if host: 1 has enough capacity for requested CPU: 500 
> and requested RAM: 536870912 , cpuOverprovisio
> ningFactor: 1.0
> 2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
> 186b3cbed ]) Hosts's actual total CPU: 9040 and CPU after applying 
> overprovisioning: 9040
> 2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
> 186b3cbed ]) We need to allocate to the last host again, so checking if there 
> is enough reserved capacity
> 2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
> 186b3cbed ]) Reserved CPU: 1500 , Requested CPU: 500
> 2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
> 186b3cbed ]) Reserved RAM: 0 , Requested RAM: 536870912
> 2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
> 186b3cbed ]) STATS: Failed to alloc resource from host: 1 reservedCpu: 1500, 
> requested cpu: 500, reservedMem: 0, requested
>  mem: 536870912
> 2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
> 186b3cbed ]) Host does not have enough reserved RAM available, cannot 
> allocate to this host.
> 2013-09-05 12:52:44,940 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-26:job-84 = [ ac245729-bfda-4e77-
> a441-533186b3cbed ]) The last host of this VM does not h

[jira] [Commented] (CLOUDSTACK-4533) permission issue in usage server and it failed to start after upgrade from 3.0.4 to 4.2

2013-09-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13759506#comment-13759506
 ] 

ASF subversion and git services commented on CLOUDSTACK-4533:
-

Commit fb97e8e617393ac86924304f2765e933cfa30a6a in branch 
refs/heads/4.2-forward from [~weizhou]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=fb97e8e ]

CLOUDSTACK-4533: fix two usage issues (db.properties and log4j-cloud.xml)

(1) Replacing db.properties with management server db.properties
(2) Rename log4j-cloud_usage.xml to log4j-cloud.xml


> permission issue in usage server and it failed to start after upgrade from 
> 3.0.4 to 4.2
> ---
>
> Key: CLOUDSTACK-4533
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4533
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Packaging, Upgrade, Usage
>Affects Versions: 4.2.1
> Environment: 
>Reporter: shweta agarwal
>Assignee: frank zhang
>Priority: Critical
>  Labels: ReleaseNote
> Fix For: 4.2.1
>
> Attachments: cloudstack-usage.err, cloudstack-usage.err, 
> cloudstack-usage.out, cloudstack-usage.out, usage.log
>
>
> Did an upgrade from 3.0.4 to 4.2 and then started the usage server. 
> The usage server failed to start,
> giving the following exception:
> log4j:ERROR setFile(null,true) call failed.
> java.io.FileNotFoundException: /var/log/cloudstack/usage/usage.log 
> (Permission denied)
> at java.io.FileOutputStream.openAppend(Native Method)
> at java.io.FileOutputStream.(FileOutputStream.java:207)
> at java.io.FileOutputStream.(FileOutputStream.java:131)
> at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
> at 
> org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
> at 
> org.apache.log4j.rolling.RollingFileAppender.activateOptions(RollingFileAppender.java:179)
> at 
> org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
> at 
> org.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:295)
> at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:176)
> at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(DOMConfigurator.java:191)
> at 
> org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(DOMConfigurator.java:523)
> at 
> org.apache.log4j.xml.DOMConfigurator.parseRoot(DOMConfigurator.java:492)
> at 
> org.apache.log4j.xml.DOMConfigurator.parse(DOMConfigurator.java:1001)
> at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:867)
> at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:773)
> at 
> org.apache.log4j.xml.DOMConfigurator.configure(DOMConfigurator.java:901)
> at 
> org.springframework.util.Log4jConfigurer.initLogging(Log4jConfigurer.java:69)
> at com.cloud.usage.UsageServer.initLog4j(UsageServer.java:89)
> at com.cloud.usage.UsageServer.init(UsageServer.java:52)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:616)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> log4j:ERROR setFile(null,true) call failed.
> java.io.FileNotFoundException: /var/log/cloudstack/usage/usage.log 
> (Permission denied)
> at java.io.FileOutputStream.openAppend(Native Method)
> at java.io.FileOutputStream.(FileOutputStream.java:207)
> at java.io.FileOutputStream.(FileOutputStream.java:131)
> at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
> at 
> org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
> at 
> org.apache.log4j.rolling.RollingFileAppender.activateOptions(RollingFileAppender.java:179)
> at 
> org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
> at 
> org.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:295)
> at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:176)
> at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(DOMConfigurator.java:191)
> at 
> org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(DOMConfigurator.java:523)
> at 
> org.apache.log4j.xml.DOMConfigurator.pa

[jira] [Created] (CLOUDSTACK-4620) Vm failed to start on the host on which it was running due to not having enough reservedMem when the host was powered on after being shutdown.

2013-09-05 Thread Sangeetha Hariharan (JIRA)
Sangeetha Hariharan created CLOUDSTACK-4620:
---

 Summary: Vm failed to start on the host on which it was running 
due to not having enough reservedMem when the host was powered on after being 
shutdown.
 Key: CLOUDSTACK-4620
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4620
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.2.1
 Environment: Build from 4.2-forward
Reporter: Sangeetha Hariharan
 Fix For: 4.2.1


Vm failed to start on the host on which it was running due to not having enough 
reservedMem when the host was powered on after being shutdown

Steps to reproduce the problem:

Advanced zone with 1 cluster having 1 host (Xenserver).

Had SSVM,CCPVM, 2 routers and few user Vms running in the host.

Power down the host. 

After few hours, powered on the host.

All the Vms running on this host were marked "Stopped".

Tried to start all the user Vms running in this host.

1 of the user Vms fails to start because of not having enough "Reserved RAM"

2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
(Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
186b3cbed ]) Reserved RAM: 0 , Requested RAM: 536870912

When I tried to start the same VM again after a few minutes, it started 
successfully on the same host.

There seems to be some issue with releasing the capacity when all the VMs get 
marked as "Stopped" by the VM sync process.

The VM that failed to start because of capacity, and then eventually started 
successfully a few minutes later, is "temfromsnap".

Management server logs from the failed attempt to start the VM on its last_host_id:

2013-09-05 12:52:44,934 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
(Job-Executor-26:job-84 = [ ac245729-bfda-4e77-
a441-533186b3cbed ]) DeploymentPlanner allocation algorithm: 
com.cloud.deploy.FirstFitPlanner_EnhancerByCloudStack_b297c61
b@7e43d432
2013-09-05 12:52:44,934 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
(Job-Executor-26:job-84 = [ ac245729-bfda-4e77-
a441-533186b3cbed ]) Trying to allocate a host and storage pools from dc:1, 
pod:1,cluster:1, requested cpu: 500, requested
 ram: 536870912
2013-09-05 12:52:44,934 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
(Job-Executor-26:job-84 = [ ac245729-bfda-4e77-
a441-533186b3cbed ]) Is ROOT volume READY (pool already allocated)?: Yes
2013-09-05 12:52:44,934 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
(Job-Executor-26:job-84 = [ ac245729-bfda-4e77-
a441-533186b3cbed ]) This VM has last host_id specified, trying to choose the 
same host: 1
2013-09-05 12:52:44,938 DEBUG [cloud.capacity.CapacityManagerImpl] 
(Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
186b3cbed ]) Checking if host: 1 has enough capacity for requested CPU: 500 and 
requested RAM: 536870912 , cpuOverprovisio
ningFactor: 1.0
2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
(Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
186b3cbed ]) Hosts's actual total CPU: 9040 and CPU after applying 
overprovisioning: 9040
2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
(Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
186b3cbed ]) We need to allocate to the last host again, so checking if there 
is enough reserved capacity
2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
(Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
186b3cbed ]) Reserved CPU: 1500 , Requested CPU: 500
2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
(Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
186b3cbed ]) Reserved RAM: 0 , Requested RAM: 536870912
2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
(Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
186b3cbed ]) STATS: Failed to alloc resource from host: 1 reservedCpu: 1500, 
requested cpu: 500, reservedMem: 0, requested
 mem: 536870912
2013-09-05 12:52:44,940 DEBUG [cloud.capacity.CapacityManagerImpl] 
(Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533
186b3cbed ]) Host does not have enough reserved RAM available, cannot allocate 
to this host.
2013-09-05 12:52:44,940 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
(Job-Executor-26:job-84 = [ ac245729-bfda-4e77-
a441-533186b3cbed ]) The last host of this VM does not have enough capacity
2013-09-05 12:52:44,940 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
(Job-Executor-26:job-84 = [ ac245729-bfda-4e77-
a441-533186b3cbed ]) Cannot choose the last host to deploy this VM
2013-09-05 12:52:44,940 DEBUG [cloud.deploy.FirstFitPlanner] 
(Job-Executor-26:job-84 = [ ac245729-bfda-4e77-a441-533186b3c
bed ]) Searching resources only under specified Cluster: 1
2013-09-05 12:52:44,943 DEBUG [cloud.deploy.FirstFitPlanner] 
(Job-Executor
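For context, the check that fails in the log excerpt above compares the request against the host's reserved (not free) capacity when re-deploying to the last host. A simplified sketch of that comparison, with illustrative names rather than the real CapacityManagerImpl code:

public class LastHostCapacityCheckSketch {
    // Returns true only if the capacity still reserved for this VM covers the new request.
    static boolean hasEnoughReservedCapacity(long reservedCpu, long reservedRamBytes,
                                             long requestedCpu, long requestedRamBytes) {
        // When the host goes down and VM sync marks the VMs Stopped without keeping their
        // reservations, reservedRamBytes drops to 0 and this check fails even though the
        // host itself has plenty of free RAM.
        return reservedCpu >= requestedCpu && reservedRamBytes >= requestedRamBytes;
    }

    public static void main(String[] args) {
        // Values from the log excerpt: reserved CPU 1500 / reserved RAM 0,
        // requested CPU 500 / requested RAM 536870912.
        System.out.println(hasEnoughReservedCapacity(1500, 0, 500, 536870912L)); // prints false
    }
}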

[jira] [Created] (CLOUDSTACK-4619) UI > zone detail page > check if the zone has any cluster whose hypervisor is VMware. If not, skip calling listVmwareDcs API.

2013-09-05 Thread Jessica Wang (JIRA)
Jessica Wang created CLOUDSTACK-4619:


 Summary: UI > zone detail page > check if the zone has any cluster 
whose hypervisor is VMware. If not, skip calling listVmwareDcs API.
 Key: CLOUDSTACK-4619
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4619
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: UI
Reporter: Jessica Wang
Assignee: Jessica Wang
 Fix For: 4.2.1




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4618) storage refactor has broken CLVM

2013-09-05 Thread Marcus Sorensen (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13759622#comment-13759622
 ] 

Marcus Sorensen commented on CLOUDSTACK-4618:
-

I spent a little time looking into it, but traced it back to the storage handler 
and decided I was in too deep, since I don't have time at the moment.

> storage refactor has broken CLVM
> 
>
> Key: CLOUDSTACK-4618
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4618
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Affects Versions: 4.2.0
>Reporter: Marcus Sorensen
>Assignee: edison su
>Priority: Blocker
> Fix For: 4.2.0
>
>
>  I see the storage refactor has broken CLVM. It looks like the process is 
> doing something like:
> copy template from secondary to primary storage, then create copy of primary 
> storage template as new volume
> This breaks CLVM, because it used to just do:
> copy template from secondary to primary storage as new volume
> Since we can't efficiently clone in CLVM, it expects to always copy the 
> template from secondary storage, rather than copying to primary first and 
> then copying the whole template from the primary back to the same disks: 
> 1) it thrashes the disks, and 2) copying the template is usually much 
> faster because the template is sparse and the logical volume is not, so 
> copying a 10G template with a real size of 500M is much faster than copying a 
> 10G logical volume to another 10G logical volume.
> in KVMStorageProcessor.java cloneVolumeFromBaseTemplate:
> if (primaryPool.getType() == StoragePoolType.CLVM) {
>     vol = templateToPrimaryDownload(templatePath, primaryPool);
> }
> This will never work, because templateToPrimaryDownload expects secondary 
> storage, and we have copied the template to primary storage and are passing 
> that. e.g.:
> {
> "org.apache.cloudstack.storage.command.CopyCommand": {
> "destTO": {
> "org.apache.cloudstack.storage.to.VolumeObjectTO": {
> "accountId": 2,
> "dataStore": {
> "org.apache.cloudstack.storage.to.PrimaryDataStoreTO": {
> "host": "localhost",
> "id": 2,
> "path": "/vg0",
> "poolType": "CLVM",
> "port": 0,
> "uuid": "4e00fe65-c47e-4b85-afe8-4f97fb8689d0"
> }
> },
> "format": "QCOW2",
> "hypervisorType": "KVM",
> "id": 9,
> "name": "ROOT-9",
> "size": 1073741824,
> "uuid": "d73f3a2b-9e63-4faf-a45b-d6fcf7633793",
> "vmName": "i-2-9-VM",
> "volumeId": 9,
> "volumeType": "ROOT"
> }
> },
> "executeInSequence": true,
> "srcTO": {
> "org.apache.cloudstack.storage.to.TemplateObjectTO": {
> "accountId": 2,
> "checksum": "44cd0e6330a59f031460bc18a40c95a2",
> "displayText": "tiny",
> "format": "QCOW2",
> "hvm": true,
> "hypervisorType": "KVM",
> "id": 201,
> "imageDataStore": {
> "org.apache.cloudstack.storage.to.PrimaryDataStoreTO": {
> "host": "localhost",
> "id": 2,
> "path": "/vg0",
> "poolType": "CLVM",
> "port": 0,
> "uuid": "4e00fe65-c47e-4b85-afe8-4f97fb8689d0"
> }
> },
> "name": "201-2-a04f958e-0aed-3642-960f-a675a2ee1c44",
> "origUrl": 
> "http://mirrors.betterservers.com/template/tiny-centos-63.qcow2";,
> "path": "c8da0364-6f94-4c71-9c1d-74078e55bbb8",
> "uuid": "7dcdb1fb-e7e3-4de0-bf93-13d3e6c4ade5"
> }
> },
> "wait": 0
> }
> }
> Also, format should be 'RAW', I believe, not 'QCOW2'.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CLOUDSTACK-4618) storage refactor has broken CLVM

2013-09-05 Thread Marcus Sorensen (JIRA)
Marcus Sorensen created CLOUDSTACK-4618:
---

 Summary: storage refactor has broken CLVM
 Key: CLOUDSTACK-4618
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4618
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Storage Controller
Affects Versions: 4.2.0
Reporter: Marcus Sorensen
Assignee: edison su
Priority: Blocker
 Fix For: 4.2.0


 I see the storage refactor has broken CLVM. It looks like the process is doing 
something like:

copy template from secondary to primary storage, then create copy of primary 
storage template as new volume

This breaks CLVM, because it used to just do:

copy template from secondary to primary storage as new volume

Since we can't efficiently clone in CLVM, it expects to always copy the 
template from secondary storage, rather than copying to primary first and then 
copying the whole template from the primary back to the same disks: 1) it 
thrashes the disks, and 2) copying the template is usually much faster 
because the template is sparse and the logical volume is not, so copying a 10G 
template with a real size of 500M is much faster than copying a 10G logical 
volume to another 10G logical volume.

in KVMStorageProcessor.java cloneVolumeFromBaseTemplate:

if (primaryPool.getType() == StoragePoolType.CLVM) {
    vol = templateToPrimaryDownload(templatePath, primaryPool);
}

This will never work, because templateToPrimaryDownload expects secondary 
storage, and we have copied the template to primary storage and are passing 
that. e.g.:

{
"org.apache.cloudstack.storage.command.CopyCommand": {
"destTO": {
"org.apache.cloudstack.storage.to.VolumeObjectTO": {
"accountId": 2,
"dataStore": {
"org.apache.cloudstack.storage.to.PrimaryDataStoreTO": {
"host": "localhost",
"id": 2,
"path": "/vg0",
"poolType": "CLVM",
"port": 0,
"uuid": "4e00fe65-c47e-4b85-afe8-4f97fb8689d0"
}
},
"format": "QCOW2",
"hypervisorType": "KVM",
"id": 9,
"name": "ROOT-9",
"size": 1073741824,
"uuid": "d73f3a2b-9e63-4faf-a45b-d6fcf7633793",
"vmName": "i-2-9-VM",
"volumeId": 9,
"volumeType": "ROOT"
}
},
"executeInSequence": true,
"srcTO": {
"org.apache.cloudstack.storage.to.TemplateObjectTO": {
"accountId": 2,
"checksum": "44cd0e6330a59f031460bc18a40c95a2",
"displayText": "tiny",
"format": "QCOW2",
"hvm": true,
"hypervisorType": "KVM",
"id": 201,
"imageDataStore": {
"org.apache.cloudstack.storage.to.PrimaryDataStoreTO": {
"host": "localhost",
"id": 2,
"path": "/vg0",
"poolType": "CLVM",
"port": 0,
"uuid": "4e00fe65-c47e-4b85-afe8-4f97fb8689d0"
}
},
"name": "201-2-a04f958e-0aed-3642-960f-a675a2ee1c44",
"origUrl": 
"http://mirrors.betterservers.com/template/tiny-centos-63.qcow2";,
"path": "c8da0364-6f94-4c71-9c1d-74078e55bbb8",
"uuid": "7dcdb1fb-e7e3-4de0-bf93-13d3e6c4ade5"
}
},
"wait": 0
}
}

Also, format should be 'RAW', I believe, not 'QCOW2'.
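One possible shape of a guard for the snippet quoted above: only take the secondary-download path when the source really is a secondary (NFS) image store, and fall back to a primary-to-primary copy otherwise. The enum and copy helpers below are stand-ins for whatever the storage layer actually provides; they are assumptions, not the existing API.

public class ClvmCloneGuardSketch {
    enum DataStoreKind { SECONDARY_NFS, PRIMARY_CLVM }

    interface CopyOps {
        void templateToPrimaryDownload(String templatePath); // stream the sparse template from secondary
        void lvToLvCopy(String templatePath);                 // full logical-volume copy on primary
    }

    static void cloneVolumeFromBaseTemplate(DataStoreKind srcKind, String templatePath, CopyOps ops) {
        if (srcKind == DataStoreKind.SECONDARY_NFS) {
            // Old behaviour: copy straight from secondary storage into the new logical volume.
            ops.templateToPrimaryDownload(templatePath);
        } else {
            // The refactored flow hands over a template that is already on primary (CLVM)
            // storage, so a download "from secondary" can never work for it.
            ops.lvToLvCopy(templatePath);
        }
    }
}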

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4089) Provide a drop down to specify VLAN,Switch type, Traffic label name while configuring Zone(VMWARE)

2013-09-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13759571#comment-13759571
 ] 

ASF subversion and git services commented on CLOUDSTACK-4089:
-

Commit 356a39077e6dcd7afc78ec19e82a1761ec153444 in branch refs/heads/master 
from [~jessicawang]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=356a390 ]

CLOUDSTACK-4089: zone wizard > hypervisor VMware > if zoneType is Basic, not 
show vSwitchType dropdown in Edit Traffic Type for Guest.


> Provide a drop down to specify VLAN,Switch type, Traffic label name while 
> configuring Zone(VMWARE)
> --
>
> Key: CLOUDSTACK-4089
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4089
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.2.0
>Reporter: Sailaja Mada
>Assignee: Jessica Wang
> Fix For: 4.2.0
>
> Attachments: 2013-08-06-A.jpg, 
> addClusterCmd_wrongDuplicateParameterNames.jpg, dropPN.png, 
> edit-traffic-type-vmware.jpg
>
>
> Observation:
> Setup: VMWARE 
> 1. While configuring a Zone, during physical network creation, there is currently 
> a text field to specify the VLAN Id for the traffic, the Traffic label 
> name & Switch type (vmwaresvs, vmwaredvs, nexusdvs). 
> 2. It is a free-form text field, so there is a possibility of missing some of the 
> parameters. 
> 3. While adding a cluster we have an option to specify the traffic label name 
> and a drop down to select the Switch type.  
> This is a request to provide a drop down to specify the VLAN, Switch type and 
> Traffic label name while configuring the Zone (VMWARE). This will avoid a lot of 
> confusion between Zone vs Cluster level configuration.
> It also simplifies the configuration process. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CLOUDSTACK-4617) Adding primary storage pool as part of zone creation fails stating that there is no host in the cluster. Retrying to add the primary storage pool again after some de

2013-09-05 Thread Sangeetha Hariharan (JIRA)
Sangeetha Hariharan created CLOUDSTACK-4617:
---

 Summary: Adding primary storage pool as part of zone creation 
fails stating that there is no host in the cluster. Retrying to add the primary 
storage pool again after some delay succeeds.
 Key: CLOUDSTACK-4617
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4617
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.2.1
 Environment: Build from 4.2-forward
Reporter: Sangeetha Hariharan
 Fix For: 4.2.1


Adding primary storage pool as part of zone creation fails stating that there 
is no host in the cluster. Retrying to add the primary storage pool again after 
some delay succeeds.

Steps to reproduce the problem:

Create an advanced zone with 1 Xenserver host.

As part of zone creation wizard , provide all the values for zone creation 
including primary and secondary storage details.

Primary storage creation fails with error - "Failed to delete storage pool on 
host"

 Following exception seen in management server logs:
2013-09-05 11:51:25,894 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
===START===  10.215.3.9 -- GET  comman
d=createStoragePool&zoneid=b34c3dc3-27c3-4f9a-9173-c74b04a4acab&podId=fde823c6-f763-480e-a5bf-c96f4e6e43fa&clusteri
d=3658d76d-fa9a-41c1-95a9-0c1688c2707c&name=ps1&scope=cluster&url=nfs%3A%2F%2F10.223.110.232%2Fexport%2Fhome%2Fsang
eetha%2F307%2Fzone2-primary&response=json&sessionkey=e4LjGKV4k%2FmbX%2BczFre8RM8zhXI%3D&_=1378408027682
2013-09-05 11:51:25,975 DEBUG 
[datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl] 
(catalina-exec-20:null)
 createPool Params @ scheme - nfs storageHost - 10.223.110.232 hostPath - 
/export/home/sangeetha/307/zone2-primary
port - -1
2013-09-05 11:51:26,125 DEBUG [cloud.storage.StorageManagerImpl] 
(catalina-exec-20:null) Failed to add data store
com.cloud.utils.exception.CloudRuntimeException: No host up to associate a 
storage pool with in cluster 2
at 
org.apache.cloudstack.storage.datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl.attachCluster(CloudStackPrimaryDataStoreLifeCycleImpl.java:371)
at 
com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:749)
at 
com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:177)
at 
org.apache.cloudstack.api.command.admin.storage.CreateStoragePoolCmd.execute(CreateStoragePoolCmd.java:168)
at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:158)
at com.cloud.api.ApiServer.queueCommand(ApiServer.java:514)
at com.cloud.api.ApiServer.handleRequest(ApiServer.java:372)
at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:305)
at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at 
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at 
org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
at 
org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
at 
org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:679)
2013-09-05 11:51:26,173 INFO  [cloud.api.ApiServer] (catalina-exec-20:null) 
Failed to delete storage pool on host
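A rough illustration of the retry the reporter performed manually: wait until the cluster reports at least one host in the Up state before attaching the pool, instead of failing immediately. The check and attach callbacks are placeholders, not the actual StorageManager API.

public class AttachPoolRetrySketch {
    interface UpHostCheck { boolean hasUpHost(long clusterId); } // placeholder for the host DAO query
    interface PoolAttach  { void attach(long clusterId); }       // placeholder for attachCluster()

    // Poll until the cluster has an Up host, then attach the pool; give up after maxAttempts.
    static boolean attachWhenHostUp(long clusterId, UpHostCheck check, PoolAttach attach,
                                    int maxAttempts, long sleepMillis) throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (check.hasUpHost(clusterId)) {
                attach.attach(clusterId); // the real flow would surface failures from here
                return true;
            }
            Thread.sleep(sleepMillis); // the zone wizard raced ahead of host registration
        }
        return false; // only now report "no host up to associate a storage pool with"
    }
}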


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4604) add cancel/restart/pause async jobs to help recovery from failures

2013-09-05 Thread Murali Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Murali Reddy updated CLOUDSTACK-4604:
-

Description: 
There are two complementary functionalities we want to address with this improvement.

- Checkpoint the async jobs, perhaps using the current journal to record all 
entity manipulations. For example, the async job for a VM create can have a journal like
 Volume, create, ID   
 Network, implement, id
 Nic, prepare, id etc.

- The asyncjob management API currently exposes the ability to query and list 
async jobs. There is no API support to cancel or restart a job. The async job 
manager can be enhanced to add pause, cancel and restart of jobs.

A job pause (pause at the nearest checkpoint), job restart (restart from the 
last checkpoint) and cancel (rollback from the last checkpoint) can be used to help 
recover from failures. 


  was:
Two complementary functionalities want to address with the bug.

- checkpoint the async jobs. perhaps use current journal to record the all 
entity manipulation, For e.g  asyncJob for VM create can have journal like
 Volume, create, ID   
 Network, implement, id
 Nic, prepare, id etc

- asyncjob management api currently exposes ability to perform query and list 
async jobs. There is no api support to cancel a job or restart a job. async job 
manager can be enhanced to add pause, cancel, restart the jobs

A Job pause (pause to nearest checkpoint) and job restart (restart from the 
last checkpoint), cancel (rollback from last checkpoint) can be used to help 
recovering from failures. 



> add cancel/restart/pause async jobs to help recovery from failures
> --
>
> Key: CLOUDSTACK-4604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Murali Reddy
> Fix For: Future
>
>
> There are two complementary functionalities we want to address with this improvement.
> - Checkpoint the async jobs, perhaps using the current journal to record all 
> entity manipulations. For example, the async job for a VM create can have a journal like
>  Volume, create, ID   
>  Network, implement, id
>  Nic, prepare, id etc.
> - The asyncjob management API currently exposes the ability to query and list 
> async jobs. There is no API support to cancel or restart a job. The async job 
> manager can be enhanced to add pause, cancel and restart of jobs.
> A job pause (pause at the nearest checkpoint), job restart (restart from the 
> last checkpoint) and cancel (rollback from the last checkpoint) can be used to help 
> recover from failures. 
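To make the proposal concrete, a small sketch of what a checkpoint journal entry and the cancel/restart controls might look like; every name here is hypothetical, nothing below exists in the current async job manager.

import java.util.ArrayDeque;
import java.util.Deque;

public class AsyncJobJournalSketch {
    enum Action { CREATE, IMPLEMENT, PREPARE }

    // One journal line, e.g. (Volume, CREATE, 42) for the VM-create example above.
    static final class Entry {
        final String entity; final Action action; final long entityId;
        Entry(String entity, Action action, long entityId) {
            this.entity = entity; this.action = action; this.entityId = entityId;
        }
    }

    private final Deque<Entry> journal = new ArrayDeque<Entry>();

    // Record a checkpoint after each entity manipulation performed by the job.
    void checkpoint(String entity, Action action, long entityId) {
        journal.push(new Entry(entity, action, entityId));
    }

    // cancel: roll back everything recorded since the job started, newest first.
    void cancel() {
        while (!journal.isEmpty()) {
            Entry e = journal.pop();
            System.out.println("rolling back " + e.action + " on " + e.entity + " id=" + e.entityId);
        }
    }

    // restart: resume from the most recent checkpoint instead of from scratch.
    Entry lastCheckpoint() {
        return journal.peek();
    }
}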

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4089) Provide a drop down to specify VLAN,Switch type, Traffic label name while configuring Zone(VMWARE)

2013-09-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13759558#comment-13759558
 ] 

ASF subversion and git services commented on CLOUDSTACK-4089:
-

Commit eb9ab676f6463df50e22460af2a3376f3206b0ee in branch 
refs/heads/4.2-forward from [~jessicawang]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=eb9ab67 ]

CLOUDSTACK-4089: zone wizard > hypervisor VMware > if zoneType is Basic, not 
show vSwitchType dropdown in Edit Traffic Type for Guest.


> Provide a drop down to specify VLAN,Switch type, Traffic label name while 
> configuring Zone(VMWARE)
> --
>
> Key: CLOUDSTACK-4089
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4089
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.2.0
>Reporter: Sailaja Mada
>Assignee: Jessica Wang
> Fix For: 4.2.0
>
> Attachments: 2013-08-06-A.jpg, 
> addClusterCmd_wrongDuplicateParameterNames.jpg, dropPN.png, 
> edit-traffic-type-vmware.jpg
>
>
> Observation:
> Setup: VMWARE 
> 1. While configuring a Zone, during physical network creation, there is currently 
> a text field to specify the VLAN Id for the traffic, the Traffic label 
> name & Switch type (vmwaresvs, vmwaredvs, nexusdvs). 
> 2. It is a free-form text field, so there is a possibility of missing some of the 
> parameters. 
> 3. While adding a cluster we have an option to specify the traffic label name 
> and a drop down to select the Switch type.  
> This is a request to provide a drop down to specify the VLAN, Switch type and 
> Traffic label name while configuring the Zone (VMWARE). This will avoid a lot of 
> confusion between Zone vs Cluster level configuration.
> It also simplifies the configuration process. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CLOUDSTACK-4089) Provide a drop down to specify VLAN,Switch type, Traffic label name while configuring Zone(VMWARE)

2013-09-05 Thread Jessica Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13759555#comment-13759555
 ] 

Jessica Wang edited comment on CLOUDSTACK-4089 at 9/5/13 10:28 PM:
---

Sailaja's email at 2013-09-05:  
"We support DVS/Nexus only in Adv Zone,  But in case of Basic Zone also, We 
have the drop down given for Guest Traffic"

  was (Author: jessicawang):
Sailaja's email at 2013-09-05:

"We support DVS/Nexus only in Adv Zone,  But in case of Basic Zone also, We 
have the drop down given for Guest Traffic"
  
> Provide a drop down to specify VLAN,Switch type, Traffic label name while 
> configuring Zone(VMWARE)
> --
>
> Key: CLOUDSTACK-4089
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4089
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.2.0
>Reporter: Sailaja Mada
>Assignee: Jessica Wang
> Fix For: 4.2.0
>
> Attachments: 2013-08-06-A.jpg, 
> addClusterCmd_wrongDuplicateParameterNames.jpg, dropPN.png, 
> edit-traffic-type-vmware.jpg
>
>
> Observation:
> Setup: VMWARE 
> 1. While configuring a Zone, during physical network creation, there is currently 
> a text field to specify the VLAN Id for the traffic, the Traffic label 
> name & Switch type (vmwaresvs, vmwaredvs, nexusdvs). 
> 2. It is a free-form text field, so there is a possibility of missing some of the 
> parameters. 
> 3. While adding a cluster we have an option to specify the traffic label name 
> and a drop down to select the Switch type.  
> This is a request to provide a drop down to specify the VLAN, Switch type and 
> Traffic label name while configuring the Zone (VMWARE). This will avoid a lot of 
> confusion between Zone vs Cluster level configuration.
> It also simplifies the configuration process. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4089) Provide a drop down to specify VLAN,Switch type, Traffic label name while configuring Zone(VMWARE)

2013-09-05 Thread Jessica Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13759555#comment-13759555
 ] 

Jessica Wang commented on CLOUDSTACK-4089:
--

Sailaja's email at 2013-09-05:

"We support DVS/Nexus only in Adv Zone,  But in case of Basic Zone also, We 
have the drop down given for Guest Traffic"

> Provide a drop down to specify VLAN,Switch type, Traffic label name while 
> configuring Zone(VMWARE)
> --
>
> Key: CLOUDSTACK-4089
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4089
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.2.0
>Reporter: Sailaja Mada
>Assignee: Jessica Wang
> Fix For: 4.2.0
>
> Attachments: 2013-08-06-A.jpg, 
> addClusterCmd_wrongDuplicateParameterNames.jpg, dropPN.png, 
> edit-traffic-type-vmware.jpg
>
>
> Observation:
> Setup: VMWARE 
> 1. While configuring a Zone, during physical network creation, there is 
> currently a free-text field to specify the VLAN ID for the traffic, the 
> traffic label name and the switch type (vmwaresvs, vmwaredvs, nexusdvs).
> 2. Because it is a free-text field, it is easy to miss some of these 
> parameters.
> 3. While adding a cluster, we already have an option to specify the traffic 
> label name and a drop-down to select the switch type.
> This is a request to provide a drop-down to specify the VLAN, switch type and 
> traffic label name while configuring a Zone (VMware). This will avoid a lot of 
> confusion between Zone-level and Cluster-level configuration.
> It also simplifies the configuration process.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4617) Adding primary storage pool as part of zone creation fails stating that there is no host in the cluster. Retrying to add the primary storage pool again after some de

2013-09-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-4617:


Description: 
Adding primary storage pool as part of zone creation fails stating that there 
is no host in the cluster. Retrying to add the primary storage pool again after 
some delay succeeds.

Steps to reproduce the problem:
In my case I already had 1 zone.

Create another advanced zone with 1 Xenserver host.

As part of the zone creation wizard, provide all the values for zone creation, 
including primary and secondary storage details.

Primary storage creation fails with the error "Failed to delete storage pool on 
host".

The following exception is seen in the management server logs:
2013-09-05 11:51:25,894 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
===START===  10.215.3.9 -- GET  comman
d=createStoragePool&zoneid=b34c3dc3-27c3-4f9a-9173-c74b04a4acab&podId=fde823c6-f763-480e-a5bf-c96f4e6e43fa&clusteri
d=3658d76d-fa9a-41c1-95a9-0c1688c2707c&name=ps1&scope=cluster&url=nfs%3A%2F%2F10.223.110.232%2Fexport%2Fhome%2Fsang
eetha%2F307%2Fzone2-primary&response=json&sessionkey=e4LjGKV4k%2FmbX%2BczFre8RM8zhXI%3D&_=1378408027682
2013-09-05 11:51:25,975 DEBUG 
[datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl] 
(catalina-exec-20:null)
 createPool Params @ scheme - nfs storageHost - 10.223.110.232 hostPath - 
/export/home/sangeetha/307/zone2-primary
port - -1
2013-09-05 11:51:26,125 DEBUG [cloud.storage.StorageManagerImpl] 
(catalina-exec-20:null) Failed to add data store
com.cloud.utils.exception.CloudRuntimeException: No host up to associate a 
storage pool with in cluster 2
at 
org.apache.cloudstack.storage.datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl.attachCluster(CloudStackPrimaryDataStoreLifeCycleImpl.java:371)
at 
com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:749)
at 
com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:177)
at 
org.apache.cloudstack.api.command.admin.storage.CreateStoragePoolCmd.execute(CreateStoragePoolCmd.java:168)
at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:158)
at com.cloud.api.ApiServer.queueCommand(ApiServer.java:514)
at com.cloud.api.ApiServer.handleRequest(ApiServer.java:372)
at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:305)
at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at 
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at 
org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
at 
org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
at 
org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:679)
2013-09-05 11:51:26,173 INFO  [cloud.api.ApiServer] (catalina-exec-20:null) 
Failed to delete storage pool on host
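
A client-side workaround sketch, not part of the original report: since the same 
call succeeds once the first host in the new cluster has finished connecting, 
the createStoragePool request can simply be retried after a delay. A minimal 
Python/requests sketch reusing the parameters from the log above; the management 
server address and sessionkey are placeholders, and depending on the setup a 
session cookie from the login call may also be required.

    import time
    import requests

    API_URL = "http://<management-server>:8080/client/api"  # placeholder

    def create_storage_pool_with_retry(params, attempts=5, delay=30):
        # Retry createStoragePool until it succeeds or the attempts run out.
        for _ in range(attempts):
            resp = requests.get(API_URL, params=params)
            if resp.ok:
                return resp.json()
            time.sleep(delay)  # give the first host time to reach the Up state
        resp.raise_for_status()

    params = {
        "command": "createStoragePool",
        "zoneid": "b34c3dc3-27c3-4f9a-9173-c74b04a4acab",
        "podId": "fde823c6-f763-480e-a5bf-c96f4e6e43fa",
        "clusterid": "3658d76d-fa9a-41c1-95a9-0c1688c2707c",
        "name": "ps1",
        "scope": "cluster",
        "url": "nfs://10.223.110.232/export/home/sangeetha/307/zone2-primary",
        "response": "json",
        "sessionkey": "<sessionkey>",  # obtained from a prior login API call
    }
    # create_storage_pool_with_retry(params)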


  was:
Adding primary storage pool as part of zone creation fails stating that there 
is no host in the cluster. Retrying to add the primary storage pool again after 
some delay succeeds.

Steps to reproduce the problem:

Create an advanced zone with 1 Xenserver host.

As part of zone creation wizard , provide all the values for zone creation 
including primary and secondary storage details.

Primary storage creation fails with error - "Failed to delete storage pool on 
host"

 Following exception seen in management server logs:
2013-09-05 11:51:25,894 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
===START===  10.215.3.9 -- GET  comman
d=createStoragePool&zoneid=b34c3dc3-27c3-4f9a-9173-c74b04a4acab&podId=fde823c6-f763-480e-a5bf-c96f4e6e43fa&clusteri
d=3658d76d-fa9a-41c1-95a9-0c1688c2707c&name=ps1&s

[jira] [Commented] (CLOUDSTACK-2792) Redundant router: Password is reset again after fail-over happened

2013-09-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13759512#comment-13759512
 ] 

ASF subversion and git services commented on CLOUDSTACK-2792:
-

Commit e1e6f93306959dc8799eca00df11587237f1b38d in branch 
refs/heads/4.2-forward from [~yasker]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=e1e6f93 ]

Revert "CLOUDSTACK-2792: Send "saved_password" to BACKUP router when reset 
password for user VM"

This reverts commit 5a8a2a259ea6e049b3e5810ff3a432d6ca7767e1.

We would fix it in another way, since mgmt server may get state updated in
time.

Conflicts:

server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java


> Redundant router: Password is reset again after fail-over happened
> --
>
> Key: CLOUDSTACK-2792
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2792
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.0.0
>Reporter: Sheng Yang
>Assignee: Sheng Yang
> Fix For: 4.2.1
>
>
> Consider this scenario with RVR and "Password protected" VM:
> 
> 1. Both Master and Backup are running.
> 2. We reset the password on VM
> 3. Both Master and Backup have password; for example say; "password1"
> 4. VM boots up and requests for password; receives it from Master VR
> 5. Master VR sets the password to Saved_Password and Backup VR continues to
> keep "password1"
> 6. Backup VR goes down; it had password as "password1"
> 7. Master VR is running
> 8. We reset the password; so the password is only changed to Master VR (as
> Backup VR is down); for example "password2"
> 9. VM boots up and requests the password; gets it as "password2"
> 10. Master VR sets the password to be Saved_Password
> 11. Now Master VR goes down
> 12. Backup VR was brought online (it still has "password1")
> 13. Now we reboot the VM
> 14. It sends a password request
> 15. Backup VR (which is the only VR available now, so it acts as Master) sends 
> the password as
> "password1"
> The user tries to log in with "password2" and cannot, unless we reset the 
> password again.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CLOUDSTACK-4616) When system Vms fail to start when host is down , link local Ip addresses do not get released resulting in all the link local Ip addresses being consumed eventually

2013-09-05 Thread Sangeetha Hariharan (JIRA)
Sangeetha Hariharan created CLOUDSTACK-4616:
---

 Summary: When system Vms fail to start when host is down ,  link 
local Ip addresses do not get released resulting in all the link local Ip 
addresses being consumed eventually.
 Key: CLOUDSTACK-4616
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4616
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.2.1
 Environment: Build from 4.2-forward
Reporter: Sangeetha Hariharan
Priority: Critical
 Fix For: 4.2.1


When system VMs fail to start because the host is down, the link-local IP 
addresses do not get released, eventually resulting in all the link-local IP 
addresses being consumed.

Steps to reproduce the problem:

Advanced zone with 1 cluster having 1 host (XenServer).

Had the SSVM, CCPVM, 2 routers and a few user VMs running on the host.

Power down the host.

When the host was powered down, it was still marked as being in the "Up" state. 
This bug is tracked in CLOUDSTACK-2140.

Attempts to restart all the system VMs on the host that is down are made 
continuously, and they fail.
These failed attempts do not release the link-local IPs, eventually resulting 
in all link-local IPs being consumed.

When the host is actually powered on, attempts to start the system VMs fail 
because of the following exception seen in the management-server logs:

2013-09-05 12:00:09,551 INFO  [cloud.vm.VirtualMachineManagerImpl] 
(secstorage-1:null) Insufficient capacity
com.cloud.exception.InsufficientAddressCapacityException: Insufficient link 
local address capacityScope=interface com.cloud.dc.DataCenter; id=1
at 
com.cloud.network.guru.ControlNetworkGuru.reserve(ControlNetworkGuru.java:156)
at 
com.cloud.network.NetworkManagerImpl.prepareNic(NetworkManagerImpl.java:2157)
at 
com.cloud.network.NetworkManagerImpl.prepare(NetworkManagerImpl.java:2127)
at 
com.cloud.vm.VirtualMachineManagerImpl.advanceStart(VirtualMachineManagerImpl.java:886)
at 
com.cloud.vm.VirtualMachineManagerImpl.start(VirtualMachineManagerImpl.java:578)
at 
com.cloud.vm.VirtualMachineManagerImpl.start(VirtualMachineManagerImpl.java:571)
at 
com.cloud.storage.secondary.SecondaryStorageManagerImpl.startSecStorageVm(SecondaryStorageManagerImpl.java:267)
at 
com.cloud.storage.secondary.SecondaryStorageManagerImpl.allocCapacity(SecondaryStorageManagerImpl.java:696)
at 
com.cloud.storage.secondary.SecondaryStorageManagerImpl.expandPool(SecondaryStorageManagerImpl.java:1300)
at 
com.cloud.secstorage.PremiumSecondaryStorageManagerImpl.scanPool(PremiumSecondaryStorageManagerImpl.java:123)
at 
com.cloud.secstorage.PremiumSecondaryStorageManagerImpl.scanPool(PremiumSecondaryStorageManagerImpl.java:50)
at 
com.cloud.vm.SystemVmLoadScanner.loadScan(SystemVmLoadScanner.java:104)
at 
com.cloud.vm.SystemVmLoadScanner.access$100(SystemVmLoadScanner.java:33)
at 
com.cloud.vm.SystemVmLoadScanner$1.reallyRun(SystemVmLoadScanner.java:81)
at com.cloud.vm.SystemVmLoadScanner$1.run(SystemVmLoadScanner.java:72)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at 
java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:679)


mysql> select * from op_dc_link_local_ip_address_alloc where data_center_id=1 
and taken is null;
Empty set (0.00 sec)
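
A small diagnostic sketch, assuming direct read access to the cloud database via 
the mysql-connector-python driver (the table and column names come from the 
query above), that reports how many link-local addresses are still free versus 
taken for a zone:

    import mysql.connector  # assumes mysql-connector-python is installed

    def link_local_usage(dc_id, **db_kwargs):
        # Count free vs. taken rows in op_dc_link_local_ip_address_alloc.
        conn = mysql.connector.connect(database="cloud", **db_kwargs)
        try:
            cur = conn.cursor()
            cur.execute(
                "SELECT taken IS NULL AS free, COUNT(*) "
                "FROM op_dc_link_local_ip_address_alloc "
                "WHERE data_center_id = %s GROUP BY free",
                (dc_id,),
            )
            # Returns {1: <free count>, 0: <taken count>}; a missing key 1
            # means the pool is exhausted, as in the query output above.
            return dict(cur.fetchall())
        finally:
            conn.close()

    # Example: link_local_usage(1, host="127.0.0.1", user="cloud", password="...")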




  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4430) [Automation][vmware] Failed to deploy vm, if one host is down in a cluster

2013-09-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13759495#comment-13759495
 ] 

ASF subversion and git services commented on CLOUDSTACK-4430:
-

Commit 25281ae7a7209ad71781b29802e97a25535e7be6 in branch refs/heads/master 
from [~minchen07]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=25281ae ]

CLOUDSTACK-4430: Add retry logic back in case of template reload needed
for vmware.


> [Automation][vmware] Failed to deploy vm, if one host is down in a cluster
> --
>
> Key: CLOUDSTACK-4430
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4430
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation, Management Server, VMware
>Affects Versions: 4.2.0
> Environment: Automation
> vmware
>Reporter: Rayees Namathponnan
>Assignee: Min Chen
>Priority: Critical
> Fix For: 4.2.1
>
> Attachments: management-server.log.2013-08-20.gz, 
> management-server.rar
>
>
> This issue observed during automation run
> 1) Automation running with 2 zones (advanced); both zones contain one 
> cluster; the first cluster has 2 hosts (10.223.250.130 and 10.223.250.131) 
> and the other cluster has 1 host.
> 2) Create a VM in the first zone.
> 3) Bring one host (10.223.250.130) in the first zone down.
> 4) Deploy another VM after that host is down.
> Expected Behavior
> ---
> A new VM should be deployed in the first zone, on the second host 
> (10.223.250.131).
> Actual Result 
> 
> VM deployment failed with the error below
> 2013-08-21 15:49:49,125 DEBUG [cloud.storage.VolumeManagerImpl] 
> (Job-Executor-29:job-1549 = [ cf900a2a-6c83-4ddd-b898-e19c90cef0a6 ]) 
> Checking if we need to prepare 1 volumes for VM[DomainRouter|r-524-TestVM]
> 2013-08-21 15:49:49,131 DEBUG [storage.image.TemplateDataFactoryImpl] 
> (Job-Executor-29:job-1549 = [ cf900a2a-6c83-4ddd-b898-e19c90cef0a6 ]) 
> template 8 is already in store:2, type:Image
> 2013-08-21 15:49:49,155 DEBUG [storage.image.TemplateDataFactoryImpl] 
> (Job-Executor-29:job-1549 = [ cf900a2a-6c83-4ddd-b898-e19c90cef0a6 ]) 
> template 8 is already in store:1, type:Primary
> 2013-08-21 15:49:49,224 DEBUG [storage.motion.AncientDataMotionStrategy] 
> (Job-Executor-29:job-1549 = [ cf900a2a-6c83-4ddd-b898-e19c90cef0a6 ]) 
> copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type VOLUME
> 2013-08-21 15:49:49,232 DEBUG [agent.transport.Request] 
> (Job-Executor-29:job-1549 = [ cf900a2a-6c83-4ddd-b898-e19c90cef0a6 ]) Seq 
> 3-1188243408: Sending  { Cmd , MgmtId: 90928106758026, via: 3, Ver: v1, 
> Flags: 100011, [{"org.apac
> he.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"d559e79dc94f3ceea16e841774053435","origUrl":"http://download.cloud.com/templates/4.2/systemvmtemplate-4.2-vh7.ova";
> ,"uuid":"426a759e-09ec-11e3-a733-52b2d980df8a","id":8,"format":"OVA","accountId":1,"checksum":"8fde62b1089e5844a9cd3b9b953f9596","hvm":false,"displayText":"SystemVM
>  Template (vSphere)","imageDataStore":{"org.apache.cloudstack.st
> orage.to.PrimaryDataStoreTO":{"uuid":"4faf04c2-6dd8-3025-b43f-65d32cc49d02","id":1,"poolType":"NetworkFilesystem","host":"10.223.110.232","path":"/export/home/automation/SC-CLOUD-QA03/primary1","port":2049}},"name":"routing-8","
> hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"3c7b36c7-a034-485b-b808-501e6b8a067a","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid"
> :"4faf04c2-6dd8-3025-b43f-65d32cc49d02","id":1,"poolType":"NetworkFilesystem","host":"10.223.110.232","path":"/export/home/automation/SC-CLOUD-QA03/primary1","port":2049}},"name":"ROOT-524","size":2097152000,"volumeId":575,"vmNa
> me":"r-524-TestVM","accountId":2,"format":"OVA","id":575,"hypervisorType":"None"}},"executeInSequence":false,"wait":0}}]
>  }
> 2013-08-21 15:49:49,232 DEBUG [agent.transport.Request] 
> (Job-Executor-29:job-1549 = [ cf900a2a-6c83-4ddd-b898-e19c90cef0a6 ]) Seq 
> 3-1188243408: Executing:  { Cmd , MgmtId: 90928106758026, via: 3, Ver: v1, 
> Flags: 100011, [{"org.a
> pache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"d559e79dc94f3ceea16e841774053435","origUrl":"http://download.cloud.com/templates/4.2/systemvmtemplate-4.2-vh7.o
> va","uuid":"426a759e-09ec-11e3-a733-52b2d980df8a","id":8,"format":"OVA","accountId":1,"checksum":"8fde62b1089e5844a9cd3b9b953f9596","hvm":false,"displayText":"SystemVM
>  Template (vSphere)","imageDataStore":{"org.apache.clo

[jira] [Commented] (CLOUDSTACK-2792) Redundant router: Password is reset again after fail-over happened

2013-09-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13759487#comment-13759487
 ] 

ASF subversion and git services commented on CLOUDSTACK-2792:
-

Commit ebb9a0c6192de77f7803ea666cfc2a7974308dcd in branch refs/heads/master 
from [~yasker]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=ebb9a0c ]

Revert "CLOUDSTACK-2792: Send "saved_password" to BACKUP router when reset 
password for user VM"

This reverts commit 5a8a2a259ea6e049b3e5810ff3a432d6ca7767e1.

We would fix it in another way, since mgmt server may get state updated in
time.

Conflicts:

server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java


> Redundant router: Password is reset again after fail-over happened
> --
>
> Key: CLOUDSTACK-2792
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2792
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.0.0
>Reporter: Sheng Yang
>Assignee: Sheng Yang
> Fix For: 4.2.1
>
>
> Consider this scenario with RVR and "Password protected" VM:
> 
> 1. Both Master and Backup are running.
> 2. We reset the password on VM
> 3. Both Master and Backup have password; for example say; "password1"
> 4. VM boots up and requests for password; receives it from Master VR
> 5. Master VR sets the password to Saved_Password and Backup VR continues to
> keep "password1"
> 6. Backup VR goes down; it had password as "password1"
> 7. Master VR is running
> 8. We reset the password; so the password is only changed to Master VR (as
> Backup VR is down); for example "password2"
> 9. VM boots up and requests the password; gets it as "password2"
> 10. Master VR sets the password to be Saved_Password
> 11. Now Master VR goes down
> 12. Backup VR was brought online (it still has "password1")
> 13. Now we reboot the VM
> 14. It sends a password request
> 15. Backup VR (which is the only VR available now, so it acts as Master) sends 
> the password as
> "password1"
> The user tries to log in with "password2" and cannot, unless we reset the 
> password again.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (CLOUDSTACK-4614) listVirtualMachines does not return anything when using the name parameter

2013-09-05 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou closed CLOUDSTACK-4614.


Resolution: Fixed

I think this issue duplicates CLOUDSTACK-4296, which has been fixed in the 4.1 
branch:

https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=commit;h=52aaa3311eff3c75ac680f6cd87eacbb86cf85ed

> listVirtualMachines does not return anything when using the name parameter
> --
>
> Key: CLOUDSTACK-4614
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4614
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Affects Versions: 4.1.1
>Reporter: Patrick McGahee
>
> When using the listVirtualMachines method of the API, including the name 
> parameter causes no results to be returned.
> When I do not use this parameter I get a list of VMs back.  I then take the 
> name of a VM from that list and put it into a new query.  No results are 
> returned.  I have double-checked the spelling of the name.
> Other parameters seem to work fine (hostid, for example).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-2792) Redundant router: Password is reset again after fail-over happened

2013-09-05 Thread Sheng Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sheng Yang updated CLOUDSTACK-2792:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> Redundant router: Password is reset again after fail-over happened
> --
>
> Key: CLOUDSTACK-2792
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2792
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.0.0
>Reporter: Sheng Yang
>Assignee: Sheng Yang
> Fix For: 4.2.1
>
>
> Consider this scenario with RVR and "Password protected" VM:
> 
> 1. Both Master and Backup is running.
> 2. We reset the password on VM
> 3. Both Master and Backup have password; for example say; "password1"
> 4. VM boots up and requests for password; receives it from Master VR
> 5. Master VR sets the password to Saved_Password and Backup VR continues to
> keep "password1"
> 6. Backup VR goes down; it had password as "password1"
> 7. Maste VR is running
> 8. We reset the password; so the password is only changed to Master VR (as
> Backup VR is down); for example "password2"
> 9. VM boots up and requests the password; gets it as "password2"
> 10. Master VR sets the password to be Saved_Password
> 11. Now Master VR goes down
> 12. Backup VR was brought online (it still has "password1")
> 13. Now we reboot the VM
> 14. It sends a password request
> 15. Backup VR (which is only available now; so is Master) sends the password 
> as
> "password1"
> User tries to login as "password2" and he cannot; unless we reset the password
> again.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4615) [Baremetal] Baremetal agent is missing in installer

2013-09-05 Thread Sudha Ponnaganti (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudha Ponnaganti updated CLOUDSTACK-4615:
-

Description: 
M) Install the Management Server
A) Install the Agent
B) Install BareMetal Agent
S) Install the Usage Monitor
D) Install the database server (from distribution's repo)
L) Install the MySQL 5.1.58 (only for CentOS5.x, Rhel6.x naturally has 
higher version MySql)
Q) Quit
 > B
Installing the BareMetal Agent...
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
 * base: centos.tt.co.kr
 * extras: centos.tt.co.kr
 * updates: centos.tt.co.kr
base
  | 3.7 kB 00:00
cloud-temp  
  | 1.3 kB 00:00 ...
extras  
  | 3.4 kB 00:00
updates 
  | 3.4 kB 00:00
Setting up Install Process
No package cloudstack-baremetal-agent available.
Error: Nothing to do


  was:
Setting up the temporary repository...
Cleaning Yum cache...
Loaded plugins: fastestmirror, security
Cleaning repos: base cloud-temp extras updates
7 metadata files removed
Welcome to the CloudPlatform Installer.  What would you like to do?

NOTE:   For installing KVM agent, please setup 
EPEL yum repo first;
For installing CloudPlatform on RHEL6.x, please setup 
distribution yum repo either from ISO or from your registeration account.


M) Install the Management Server
A) Install the Agent
B) Install BareMetal Agent
S) Install the Usage Monitor
D) Install the database server (from distribution's repo)
L) Install the MySQL 5.1.58 (only for CentOS5.x, Rhel6.x naturally has 
higher version MySql)
Q) Quit
 > B
Installing the BareMetal Agent...
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
 * base: centos.tt.co.kr
 * extras: centos.tt.co.kr
 * updates: centos.tt.co.kr
base
  | 3.7 kB 00:00
cloud-temp  
  | 1.3 kB 00:00 ...
extras  
  | 3.4 kB 00:00
updates 
  | 3.4 kB 00:00
Setting up Install Process
No package cloudstack-baremetal-agent available.
Error: Nothing to do



> [Baremetal] Baremetal agent is missing in installer
> ---
>
> Key: CLOUDSTACK-4615
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4615
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Baremetal
>Affects Versions: 4.2.0, 4.2.1
> Environment: MS   RHEL 6.3
> host  baremetal 
>Reporter: angeline shen
>Assignee: frank zhang
>Priority: Blocker
> Fix For: 4.2.0, 4.2.1
>
>
> M) Install the Management Server
> A) Install the Agent
> B) Install BareMetal Agent
> S) Install the Usage Monitor
> D) Install the database server (from distribution's repo)
> L) Install the MySQL 5.1.58 (only for CentOS5.x, Rhel6.x naturally has 
> higher version MySql)
> Q) Quit
>  > B
> Installing the BareMetal Agent...
> Loaded plugins: fastestmirror, security
> Loading mirror speeds from cached hostfile
>  * base: centos.tt.co.kr
>  * extras: centos.tt.co.kr
>  * updates: centos.tt.co.kr
> base  
> | 3.7 kB 00:00
> cloud-temp
> | 1.3 kB 00:00 ...
> extras
> | 3.4 kB 00:00
> updates   
> | 3.4 kB 00:00
> Setting up Install Process
> No package cloudstack-baremetal-agent available.
> Error: Nothing to do

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4615) [Baremetal] Baremetal agent is missing in installer

2013-09-05 Thread Sudha Ponnaganti (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudha Ponnaganti updated CLOUDSTACK-4615:
-

Fix Version/s: 4.2.1

> [Baremetal] Baremetal agent is missing in installer
> ---
>
> Key: CLOUDSTACK-4615
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4615
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Baremetal
>Affects Versions: 4.2.0, 4.2.1
> Environment: MS   RHEL 6.3
> host  baremetal 
>Reporter: angeline shen
>Assignee: frank zhang
>Priority: Blocker
> Fix For: 4.2.0, 4.2.1
>
>
> Setting up the temporary repository...
> Cleaning Yum cache...
> Loaded plugins: fastestmirror, security
> Cleaning repos: base cloud-temp extras updates
> 7 metadata files removed
> Welcome to the CloudPlatform Installer.  What would you like to do?
> NOTE:   For installing KVM agent, please setup 
> EPEL yum repo first;
> For installing CloudPlatform on RHEL6.x, please setup 
> distribution yum repo either from ISO or from your registeration account.
> M) Install the Management Server
> A) Install the Agent
> B) Install BareMetal Agent
> S) Install the Usage Monitor
> D) Install the database server (from distribution's repo)
> L) Install the MySQL 5.1.58 (only for CentOS5.x, Rhel6.x naturally has 
> higher version MySql)
> Q) Quit
>  > B
> Installing the BareMetal Agent...
> Loaded plugins: fastestmirror, security
> Loading mirror speeds from cached hostfile
>  * base: centos.tt.co.kr
>  * extras: centos.tt.co.kr
>  * updates: centos.tt.co.kr
> base  
> | 3.7 kB 00:00
> cloud-temp
> | 1.3 kB 00:00 ...
> extras
> | 3.4 kB 00:00
> updates   
> | 3.4 kB 00:00
> Setting up Install Process
> No package cloudstack-baremetal-agent available.
> Error: Nothing to do

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4615) [Baremetal] Baremetal agent is missing in installer

2013-09-05 Thread Sudha Ponnaganti (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudha Ponnaganti updated CLOUDSTACK-4615:
-

Description: 
Setting up the temporary repository...
Cleaning Yum cache...
Loaded plugins: fastestmirror, security
Cleaning repos: base cloud-temp extras updates
7 metadata files removed
Welcome to the CloudPlatform Installer.  What would you like to do?

NOTE:   For installing KVM agent, please setup 
EPEL yum repo first;
For installing CloudPlatform on RHEL6.x, please setup 
distribution yum repo either from ISO or from your registeration account.


M) Install the Management Server
A) Install the Agent
B) Install BareMetal Agent
S) Install the Usage Monitor
D) Install the database server (from distribution's repo)
L) Install the MySQL 5.1.58 (only for CentOS5.x, Rhel6.x naturally has 
higher version MySql)
Q) Quit
 > B
Installing the BareMetal Agent...
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
 * base: centos.tt.co.kr
 * extras: centos.tt.co.kr
 * updates: centos.tt.co.kr
base
  | 3.7 kB 00:00
cloud-temp  
  | 1.3 kB 00:00 ...
extras  
  | 3.4 kB 00:00
updates 
  | 3.4 kB 00:00
Setting up Install Process
No package cloudstack-baremetal-agent available.
Error: Nothing to do


  was:
From   cheolsoo.p...@citrix.com  :


[root@campo-bm-pxe CloudPlatform-4.2.0-1-rhel6.3]# ./install.sh
Setting up the temporary repository...
Cleaning Yum cache...
Loaded plugins: fastestmirror, security
Cleaning repos: base cloud-temp extras updates
7 metadata files removed
Welcome to the CloudPlatform Installer.  What would you like to do?

NOTE:   For installing KVM agent, please setup 
EPEL yum repo first;
For installing CloudPlatform on RHEL6.x, please setup 
distribution yum repo either from ISO or from your registeration account.


M) Install the Management Server
A) Install the Agent
B) Install BareMetal Agent
S) Install the Usage Monitor
D) Install the database server (from distribution's repo)
L) Install the MySQL 5.1.58 (only for CentOS5.x, Rhel6.x naturally has 
higher version MySql)
Q) Quit
 > B
Installing the BareMetal Agent...
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
 * base: centos.tt.co.kr
 * extras: centos.tt.co.kr
 * updates: centos.tt.co.kr
base
  | 3.7 kB 00:00
cloud-temp  
  | 1.3 kB 00:00 ...
extras  
  | 3.4 kB 00:00
updates 
  | 3.4 kB 00:00
Setting up Install Process
No package cloudstack-baremetal-agent available.
Error: Nothing to do



> [Baremetal] Baremetal agent is missing in installer
> ---
>
> Key: CLOUDSTACK-4615
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4615
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Baremetal
>Affects Versions: 4.2.0
> Environment: MS   RHEL 6.3
> host  baremetal 
>Reporter: angeline shen
>Assignee: frank zhang
>Priority: Blocker
> Fix For: 4.2.0
>
>
> Setting up the temporary repository...
> Cleaning Yum cache...
> Loaded plugins: fastestmirror, security
> Cleaning repos: base cloud-temp extras updates
> 7 metadata files removed
> Welcome to the CloudPlatform Installer.  What would you like to do?
> NOTE:   For installing KVM agent, please setup 
> EPEL yum repo first;
> For installing CloudPlatform on RHEL6.x, please setup 
> distribution yum repo either from ISO or from your registeration account.
> M) Install the Management Server
> A) Install the Agent
> B) Install BareMetal Agent
> S) Install the Usage Monitor
> D) Install the database server (from distribution's repo)
> L) Install the MySQL 5.1.58 (only for CentOS5.x, Rhel6.x naturally has 
> higher version MySql)
> Q) Quit
>  > B
> Installing the BareMetal Agent...
> Loaded plugins: fastestmirror, security
> Loading mirror speeds from cached hostfile
>  * base: centos.tt.co.kr
>  * ex

[jira] [Updated] (CLOUDSTACK-4615) [Baremetal] Baremetal agent is missing in installer

2013-09-05 Thread Sudha Ponnaganti (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudha Ponnaganti updated CLOUDSTACK-4615:
-

Affects Version/s: 4.2.1

> [Baremetal] Baremetal agent is missing in installer
> ---
>
> Key: CLOUDSTACK-4615
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4615
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Baremetal
>Affects Versions: 4.2.0, 4.2.1
> Environment: MS   RHEL 6.3
> host  baremetal 
>Reporter: angeline shen
>Assignee: frank zhang
>Priority: Blocker
> Fix For: 4.2.0
>
>
> Setting up the temporary repository...
> Cleaning Yum cache...
> Loaded plugins: fastestmirror, security
> Cleaning repos: base cloud-temp extras updates
> 7 metadata files removed
> Welcome to the CloudPlatform Installer.  What would you like to do?
> NOTE:   For installing KVM agent, please setup 
> EPEL yum repo first;
> For installing CloudPlatform on RHEL6.x, please setup 
> distribution yum repo either from ISO or from your registeration account.
> M) Install the Management Server
> A) Install the Agent
> B) Install BareMetal Agent
> S) Install the Usage Monitor
> D) Install the database server (from distribution's repo)
> L) Install the MySQL 5.1.58 (only for CentOS5.x, Rhel6.x naturally has 
> higher version MySql)
> Q) Quit
>  > B
> Installing the BareMetal Agent...
> Loaded plugins: fastestmirror, security
> Loading mirror speeds from cached hostfile
>  * base: centos.tt.co.kr
>  * extras: centos.tt.co.kr
>  * updates: centos.tt.co.kr
> base  
> | 3.7 kB 00:00
> cloud-temp
> | 1.3 kB 00:00 ...
> extras
> | 3.4 kB 00:00
> updates   
> | 3.4 kB 00:00
> Setting up Install Process
> No package cloudstack-baremetal-agent available.
> Error: Nothing to do

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4615) [Baremetal] Baremetal agent is missing in installer

2013-09-05 Thread Sudha Ponnaganti (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudha Ponnaganti updated CLOUDSTACK-4615:
-

Summary: [Baremetal] Baremetal agent is missing in installer  (was: 
[Baremetal] Baremetal agent missing from campo GA release packaging)

> [Baremetal] Baremetal agent is missing in installer
> ---
>
> Key: CLOUDSTACK-4615
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4615
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Baremetal
>Affects Versions: 4.2.0
> Environment: MS   RHEL 6.3
> host  baremetal 
>Reporter: angeline shen
>Assignee: frank zhang
>Priority: Blocker
> Fix For: 4.2.0
>
>
> From   cheolsoo.p...@citrix.com  :
> [root@campo-bm-pxe CloudPlatform-4.2.0-1-rhel6.3]# ./install.sh
> Setting up the temporary repository...
> Cleaning Yum cache...
> Loaded plugins: fastestmirror, security
> Cleaning repos: base cloud-temp extras updates
> 7 metadata files removed
> Welcome to the CloudPlatform Installer.  What would you like to do?
> NOTE:   For installing KVM agent, please setup 
> EPEL yum repo first;
> For installing CloudPlatform on RHEL6.x, please setup 
> distribution yum repo either from ISO or from your registeration account.
> M) Install the Management Server
> A) Install the Agent
> B) Install BareMetal Agent
> S) Install the Usage Monitor
> D) Install the database server (from distribution's repo)
> L) Install the MySQL 5.1.58 (only for CentOS5.x, Rhel6.x naturally has 
> higher version MySql)
> Q) Quit
>  > B
> Installing the BareMetal Agent...
> Loaded plugins: fastestmirror, security
> Loading mirror speeds from cached hostfile
>  * base: centos.tt.co.kr
>  * extras: centos.tt.co.kr
>  * updates: centos.tt.co.kr
> base  
> | 3.7 kB 00:00
> cloud-temp
> | 1.3 kB 00:00 ...
> extras
> | 3.4 kB 00:00
> updates   
> | 3.4 kB 00:00
> Setting up Install Process
> No package cloudstack-baremetal-agent available.
> Error: Nothing to do

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CLOUDSTACK-4614) listVirtualMachines does not return anything when using the name parameter

2013-09-05 Thread Patrick McGahee (JIRA)
Patrick McGahee created CLOUDSTACK-4614:
---

 Summary: listVirtualMachines does not return anything when using 
the name parameter
 Key: CLOUDSTACK-4614
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4614
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: API
Affects Versions: 4.1.1
Reporter: Patrick McGahee


When using the listVirtualMachines method of the API, including the name 
parameter causes no results to be returned.

When I do not use this parameter I get a list of VMs back.  I then take the 
name of a VM from that list and put it into a new query.  No results are 
returned.  I have double-checked the spelling of the name.

Other parameters seem to work fine (hostid, for example).
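
A minimal reproduction sketch of the calls described above, using Python and 
requests against the /client/api endpoint; the management server address, 
sessionkey and UUIDs are placeholders, and a session cookie from the login call 
may also be required:

    import requests

    API_URL = "http://<management-server>:8080/client/api"  # placeholder
    SESSIONKEY = "<sessionkey>"  # from a prior login API call

    def list_vms(**extra):
        params = {"command": "listVirtualMachines", "response": "json",
                  "sessionkey": SESSIONKEY}
        params.update(extra)
        return requests.get(API_URL, params=params).json()

    all_vms = list_vms()                      # works: full VM list
    by_host = list_vms(hostid="<host-uuid>")  # works: filter by host
    # Reported as broken in 4.1.1: filtering by an existing VM name copied
    # from the list above comes back empty.
    by_name = list_vms(name="<exact-vm-name>")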

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-2140) Host is still marked as being in "Up" state when the host is shutdown (when there are no more hosts in the cluster)

2013-09-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-2140:


Affects Version/s: 4.2.1

> Host is still marked as being in "Up" state when the host is shutdown (when 
> there are no more hosts in the cluster)
> ---
>
> Key: CLOUDSTACK-2140
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2140
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0, 4.2.1
> Environment: build from master
>Reporter: Sangeetha Hariharan
>Assignee: Koushik Das
> Fix For: 4.2.0
>
> Attachments: management-server.rar
>
>
> Host is still marked as being in "Up" state when the host is shut down (when 
> there are no more hosts in the cluster).
> Set up:
> Advanced zone.
> 3 hosts in a cluster (in my case host ids 7, 8 and 9).
> I did not have any problems when host 8 and host 9 were shut down.
> When I tried to shut down host 7, I saw the host still being in "Up" state, 
> even after the management server detected that it is not able to connect with 
> this host.
> Following exception seen in management server logs:
> 2013-04-22 14:48:18,350 DEBUG [xen.resource.XenServerConnectionPool] 
> (DirectAgent-350:null) localLogout has problem Failed to read server's 
> response: connect timed out
> 2013-04-22 14:48:18,350 WARN  [xen.resource.CitrixResourceBase] 
> (DirectAgent-350:null) Unable to stop i-3-45-VM due to
> com.cloud.utils.exception.CloudRuntimeException: Unable to reset master of 
> slave 10.223.59.4 to 10.223.59.2 due to org.apache.xmlrpc.XmlRpcException: 
> Failed to read server's response: connect timed out
> at 
> com.cloud.hypervisor.xen.resource.XenServerConnectionPool.PoolEmergencyResetMaster(XenServerConnectionPool.java:443)
> at 
> com.cloud.hypervisor.xen.resource.XenServerConnectionPool.connect(XenServerConnectionPool.java:661)
> at 
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.getConnection(CitrixResourceBase.java:5583)
> at 
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.execute(CitrixResourceBase.java:3728)
> at 
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:474)
> at 
> com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:73)
> at 
> com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.java:186)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:679)
> 2013-04-22 14:48:18,364 DEBUG [agent.manager.DirectAgentAttache] 
> (DirectAgent-350:null) Seq 9-72160431: Response Received:
> 2013-04-22 14:48:18,370 DEBUG [agent.transport.Request] 
> (DirectAgent-350:null) Seq 9-72160431: Processing:  { Ans: , MgmtId: 
> 7508777239729, via: 9, Ver: v1, Flags: 110, 
> [{"StopAnswer":{"result":false,"details":"Exception: 
> com.cloud.utils.exception.CloudRuntimeException\nMessage: Unable to reset 
> master of slave 10.223.59.4 to 10.223.59.2 due to 
> org.apache.xmlrpc.XmlRpcException: Failed to read server's response: connect 
> timed out\nStack: com.cloud.utils.exception.CloudRuntimeException: Unable to 
> reset master of slave 10.223.59.4 to 10.223.59.2 due to 
> org.apache.xmlrpc.XmlRpcException: Failed to read server's response: connect 
> timed out\n\tat 
> com.cloud.hypervisor.xen.resource.XenServerConnectionPool.PoolEmergencyResetMaster(XenServerConnectionPool.java:443)\n\tat
>  
> com.cloud.hypervisor.xen.resource.XenServerConnectionPool.connect(XenServerConnectionPool.java:661)\n\tat
>  
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.getConnection(CitrixResourceBase.java:5583)\n\tat
>  
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.execute(CitrixResourceBase.java:3728)\n\tat
>  
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:474)\n\tat
>  
> com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(XenServer56Re

[jira] [Reopened] (CLOUDSTACK-2140) Host is still marked as being in "Up" state when the host is shutdown (when there are no more hosts in the cluster)

2013-09-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan reopened CLOUDSTACK-2140:
-


Reopening this issue.

If this is the last host in the XS cluster and none of the investigators return 
it as UP, then CS should put it in Alert status.
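
A simplified illustration of the expected behaviour described above (this is not 
actual CloudStack code; the function, argument and state names are invented for 
the example):

    def expected_host_state(investigators_report_up, is_last_host_in_cluster):
        # investigators_report_up: list of booleans, one per investigator.
        if any(investigators_report_up):
            return "Up"     # at least one investigator confirms the host
        if is_last_host_in_cluster:
            return "Alert"  # nothing left in the cluster to cross-check with
        return "Down"       # other hosts exist, so the usual handling applies

    # Example: expected_host_state([False, False], True) -> "Alert"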



> Host is still marked as being in "Up" state when the host is shutdown (when 
> there are no more hosts in the cluster)
> ---
>
> Key: CLOUDSTACK-2140
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2140
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0
> Environment: build from master
>Reporter: Sangeetha Hariharan
>Assignee: Koushik Das
> Fix For: 4.2.0
>
> Attachments: management-server.rar
>
>
> Host is still marked as being in "Up" state when the host is shut down (when 
> there are no more hosts in the cluster).
> Set up:
> Advanced zone.
> 3 hosts in a cluster (in my case host ids 7, 8 and 9).
> I did not have any problems when host 8 and host 9 were shut down.
> When I tried to shut down host 7, I saw the host still being in "Up" state, 
> even after the management server detected that it is not able to connect with 
> this host.
> Following exception seen in management server logs:
> 2013-04-22 14:48:18,350 DEBUG [xen.resource.XenServerConnectionPool] 
> (DirectAgent-350:null) localLogout has problem Failed to read server's 
> response: connect timed out
> 2013-04-22 14:48:18,350 WARN  [xen.resource.CitrixResourceBase] 
> (DirectAgent-350:null) Unable to stop i-3-45-VM due to
> com.cloud.utils.exception.CloudRuntimeException: Unable to reset master of 
> slave 10.223.59.4 to 10.223.59.2 due to org.apache.xmlrpc.XmlRpcException: 
> Failed to read server's response: connect timed out
> at 
> com.cloud.hypervisor.xen.resource.XenServerConnectionPool.PoolEmergencyResetMaster(XenServerConnectionPool.java:443)
> at 
> com.cloud.hypervisor.xen.resource.XenServerConnectionPool.connect(XenServerConnectionPool.java:661)
> at 
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.getConnection(CitrixResourceBase.java:5583)
> at 
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.execute(CitrixResourceBase.java:3728)
> at 
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:474)
> at 
> com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:73)
> at 
> com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.java:186)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:679)
> 2013-04-22 14:48:18,364 DEBUG [agent.manager.DirectAgentAttache] 
> (DirectAgent-350:null) Seq 9-72160431: Response Received:
> 2013-04-22 14:48:18,370 DEBUG [agent.transport.Request] 
> (DirectAgent-350:null) Seq 9-72160431: Processing:  { Ans: , MgmtId: 
> 7508777239729, via: 9, Ver: v1, Flags: 110, 
> [{"StopAnswer":{"result":false,"details":"Exception: 
> com.cloud.utils.exception.CloudRuntimeException\nMessage: Unable to reset 
> master of slave 10.223.59.4 to 10.223.59.2 due to 
> org.apache.xmlrpc.XmlRpcException: Failed to read server's response: connect 
> timed out\nStack: com.cloud.utils.exception.CloudRuntimeException: Unable to 
> reset master of slave 10.223.59.4 to 10.223.59.2 due to 
> org.apache.xmlrpc.XmlRpcException: Failed to read server's response: connect 
> timed out\n\tat 
> com.cloud.hypervisor.xen.resource.XenServerConnectionPool.PoolEmergencyResetMaster(XenServerConnectionPool.java:443)\n\tat
>  
> com.cloud.hypervisor.xen.resource.XenServerConnectionPool.connect(XenServerConnectionPool.java:661)\n\tat
>  
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.getConnection(CitrixResourceBase.java:5583)\n\tat
>  
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.execute(CitrixResourceBase.java:3728)\n\tat
>  
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeReq

[jira] [Updated] (CLOUDSTACK-3658) [DB Upgrade] - Deprecate several old object storage tables and columns as a part of 41-42 db upgrade

2013-09-05 Thread Min Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Min Chen updated CLOUDSTACK-3658:
-

Description: 
We should deprecate the following db tables and table columns as part of the 
41-42 db upgrade due to the recent object storage refactoring:
-Upload
-s3
-swift
-template_host_ref
-template_s3_ref
-template_swift_ref
-volume_host_ref
-columns (s3_id, swift_id, sechost_id) from the snapshots table.
Summary: [DB Upgrade] - Deprecate several old object storage tables and 
columns as a part of 41-42 db upgrade  (was: [DB Upgrade] - Deprecate the 
upload table as a part of 41-42 db upgrade)

> [DB Upgrade] - Deprecate several old object storage tables and columns as a 
> part of 41-42 db upgrade
> 
>
> Key: CLOUDSTACK-3658
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3658
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Install and Setup, Storage Controller
>Affects Versions: 4.2.0
>Reporter: Nitin Mehta
>Assignee: Nitin Mehta
>Priority: Critical
> Fix For: 4.2.1
>
> Attachments: cloud-after-upgrade.dmp
>
>
> We should deprecate the following db tables and table columns as part of the 
> 41-42 db upgrade due to the recent object storage refactoring:
> -Upload
> -s3
> -swift
> -template_host_ref
> -template_s3_ref
> -template_swift_ref
> -volume_host_ref
> -columns (s3_id, swift_id, sechost_id) from the snapshots table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-2982) listTemplates with templatefilter as “All” is not showing proper listing of templates when there are multiple zones.

2013-09-05 Thread Min Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13759224#comment-13759224
 ] 

Min Chen commented on CLOUDSTACK-2982:
--

Why are you still looking at template_host_ref? That table is already 
deprecated in 4.2. We now use template_store_ref to track where the template 
is downloaded.
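
A short sketch of the corresponding check against the 4.2 schema, assuming 
direct read access to the cloud database via mysql-connector-python; only the 
table name template_store_ref comes from the comment above, and the template_id 
column is assumed to exist as it does in template_host_ref:

    import mysql.connector  # assumes mysql-connector-python is installed

    def template_store_rows(template_id, **db_kwargs):
        # Fetch the raw template_store_ref rows for one template (4.2+).
        conn = mysql.connector.connect(database="cloud", **db_kwargs)
        try:
            cur = conn.cursor(dictionary=True)
            cur.execute(
                "SELECT * FROM template_store_ref WHERE template_id = %s",
                (template_id,),
            )
            return cur.fetchall()
        finally:
            conn.close()

    # Example: template_store_rows(8, host="127.0.0.1", user="cloud", password="...")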

> listTemplates with templatefilter as “All” is not showing  proper listing of 
> templates when there are multiple zones.
> -
>
> Key: CLOUDSTACK-2982
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2982
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Template
>Affects Versions: 4.2.0
>Reporter: manasaveloori
> Fix For: 4.2.0
>
> Attachments: listTemplate.jpg
>
>
> Steps:
> 1.Have a CS with multiple zones. I have 2 advanced zones (1 using xen and 
> other VMware).
> 2.Go to templates page. Filter by “All”.
> Observation:
> The listing is not correct when there are multiple zones. Some templates show 
> as not downloaded, and the zone and hypervisor mapping for the templates is 
> not correct.
> Attached is the screenshot.
> mysql> select * from template_host_ref\G;
> *** 1. row ***
> id: 1
>host_id: 2
>template_id: 10
>created: 2013-06-12 11:52:07
>   last_updated: 2013-06-12 11:52:07
> job_id: NULL
>   download_pct: 100
>   size: 0
>  physical_size: 0
> download_state: DOWNLOADED
>  error_str: NULL
> local_path: NULL
>   install_path: template/tmpl/1/10/
>url: 
> http://download.cloud.com/templates/acton/acton-systemvm-02062012.qcow2.bz2
>  destroyed: 0
>is_copy: 0
>  state: NULL
>   update_count: 0
>updated: NULL
> *** 2. row ***
> id: 2
>host_id: 2
>template_id: 9
>created: 2013-06-12 11:52:07
>   last_updated: 2013-06-12 11:52:07
> job_id: NULL
>   download_pct: 100
>   size: 0
>  physical_size: 0
> download_state: DOWNLOADED
>  error_str: NULL
> local_path: NULL
>   install_path: template/tmpl/1/9/
>url: 
> http://download.cloud.com/templates/acton/acton-systemvm-02062012.vhd.bz2
>  destroyed: 0
>is_copy: 0
>  state: NULL
>   update_count: 0
>updated: NULL
> *** 3. row ***
> id: 3
>host_id: 2
>template_id: 8
>created: 2013-06-12 11:52:07
>   last_updated: 2013-06-13 10:35:24
> job_id: NULL
>   download_pct: 100
>   size: 2097152000
>  physical_size: 372702720
> download_state: DOWNLOADED
>  error_str: NULL
> local_path: NULL
>   install_path: template/tmpl/1/8//74e8ee19-8ea4-495a-899a-446970388717.ova
>url: 
> http://download.cloud.com/templates/burbank/burbank-systemvm-08012012.ova
>  destroyed: 0
>is_copy: 0
>  state: NULL
>   update_count: 0
>updated: NULL
> *** 4. row ***
> id: 4
>host_id: 2
>template_id: 3
>created: 2013-06-12 11:52:07
>   last_updated: 2013-06-12 11:52:07
> job_id: NULL
>   download_pct: 100
>   size: 0
>  physical_size: 0
> download_state: DOWNLOADED
>  error_str: NULL
> local_path: NULL
>   install_path: template/tmpl/1/3/
>url: 
> http://download.cloud.com/templates/acton/acton-systemvm-02062012.qcow2.bz2
>  destroyed: 0
>is_copy: 0
>  state: NULL
>   update_count: 0
>updated: NULL
> *** 5. row ***
> id: 5
>host_id: 2
>template_id: 1
>created: 2013-06-12 11:52:07
>   last_updated: 2013-06-12 11:52:07
> job_id: NULL
>   download_pct: 100
>   size: 0
>  physical_size: 0
> download_state: DOWNLOADED
>  error_str: NULL
> local_path: NULL
>   install_path: template/tmpl/1/1/
>url: 
> http://download.cloud.com/templates/acton/acton-systemvm-02062012.vhd.bz2
>  destroyed: 0
>is_copy: 0
>  state: NULL
>   update_count: 0
>updated: NULL
> *** 6. row ***
> id: 6
>host_id: 5
>template_id: 10
>created: 2013-06-12 11:58:04
>   last_updated: 2013-06-12 11:58:04
> job_id: NULL
>   download_pct: 100
>   size: 0
>  physical_size: 0
> download_state: DOWNLOADED
>  error_str: NULL
> local_path: NULL
>   install_path: template/tmpl/1/10/
>

[jira] [Created] (CLOUDSTACK-4613) security group rules issue in host

2013-09-05 Thread Jayapal Reddy (JIRA)
Jayapal Reddy created CLOUDSTACK-4613:
-

 Summary: security group rules issue in host
 Key: CLOUDSTACK-4613
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4613
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Network Controller
Affects Versions: 4.1.0
Reporter: Jayapal Reddy
Assignee: Jayapal Reddy
 Fix For: 4.2.1


Observed the following security group iptables rules issues:

1. The iptables anti-spoofing DROP rules fail to be added on VM reboot.
inscmd = "iptables-save | grep '\-A " +  vmchain_default + "' | grep  
physdev-in | grep vif | sed -r 's/vif[0-9]+.0/" + vif + "/' | sed 's/-A/-I/'"
Here the rules produced by inscmd are missing the space in "! --set", which 
causes the rule execution to fail (see the illustrative sketch below).

2. The order of the iptables rules is incorrect on VM reboot.
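
For illustration only (this is not the actual patch): iptables rejects a match 
written as "!--set" without the space, so the regenerated rule strings need the 
negation and the option separated. A minimal sketch of that normalization:

    def normalize_rule(rule):
        # iptables expects "! --set"; "!--set" makes the rule fail to apply.
        return rule.replace("!--set", "! --set")

    # Hypothetical fragment, only to show the transformation:
    print(normalize_rule("-m set !--set somechain src"))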



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (CLOUDSTACK-4203) Include a test for migrating volumes of stopped vms to test_stopped_vm.py

2013-09-05 Thread venkata swamybabu budumuru (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata swamybabu budumuru closed CLOUDSTACK-4203.
--

Resolution: Fixed

> Include a test for migrating volumes of stopped vms to test_stopped_vm.py
> -
>
> Key: CLOUDSTACK-4203
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4203
> Project: CloudStack
>  Issue Type: Test
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0
>Reporter: Prasanna Santhanam
>Assignee: Sanjeev N
>Priority: Critical
> Fix For: Future
>
>
> This is from CLOUDSTACK-2670.
> We don't have any tests for migrating volumes of stopped VMs in the module 
> test_stopped_vm.py

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4203) Include a test for migrating volumes of stopped vms to test_stopped_vm.py

2013-09-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13759054#comment-13759054
 ] 

ASF subversion and git services commented on CLOUDSTACK-4203:
-

Commit 883b9802d4a58633ae9bc75023e063751282783a in branch refs/heads/master 
from [~sanjeevn]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=883b980 ]

CLOUDSTACK-4203: Adding a test for migrating volumes of stopped vms to 
test_stopped_vm.py

Signed-off-by: sanjeevneelarapu 
Signed-off-by: venkataswamybabu budumuru 

(cherry picked from commit dfee47e3b6c35d08cba7f28433b9e1bce12b4078)


> Include a test for migrating volumes of stopped vms to test_stopped_vm.py
> -
>
> Key: CLOUDSTACK-4203
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4203
> Project: CloudStack
>  Issue Type: Test
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0
>Reporter: Prasanna Santhanam
>Assignee: Sanjeev N
>Priority: Critical
> Fix For: Future
>
>
> This is from CLOUDSTACK-2670.
> We don't have any tests for migrating volumes of stopped VMs in the module 
> test_stopped_vm.py

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4203) Include a test for migrating volumes of stopped vms to test_stopped_vm.py

2013-09-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13759052#comment-13759052
 ] 

ASF subversion and git services commented on CLOUDSTACK-4203:
-

Commit dfee47e3b6c35d08cba7f28433b9e1bce12b4078 in branch 
refs/heads/4.2-forward from [~sanjeevn]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=dfee47e ]

CLOUDSTACK-4203: Adding a test for migrating volumes of stopped vms to 
test_stopped_vm.py

Signed-off-by: sanjeevneelarapu 
Signed-off-by: venkataswamybabu budumuru 



> Include a test for migrating volumes of stopped vms to test_stopped_vm.py
> -
>
> Key: CLOUDSTACK-4203
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4203
> Project: CloudStack
>  Issue Type: Test
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0
>Reporter: Prasanna Santhanam
>Assignee: Sanjeev N
>Priority: Critical
> Fix For: Future
>
>
> This is from CLOUDSTACK-2670.
> We don't have any tests for migrating volumes of stopped VMs in the module 
> test_stopped_vm.py

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4612) Specified keyboard language is not showing as default in consoleView passed during deployVM

2013-09-05 Thread Sanjay Tripathi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Tripathi updated CLOUDSTACK-4612:


Description: 
While deploying a VM, user passes the "keyboard" parameter to specify the 
default language for that VM but in the consoleView, the default language 
selected is en-us irrespective of the default language of the VM. 

To change the language, user has to navigate through the dropdown menu provided 
in consoleView.

  was:
While deploying a VM, user passes the "keyboard" parameter to specify the 
default language for that VM but in the consoleView, the default language 
selected is en-us irrespective of the default language of the VM. 

To change the language, user has to navigate through the dropdown menu.


> Specified keyboard language is not showing as default in consoleView passed 
> during deployVM
> ---
>
> Key: CLOUDSTACK-4612
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4612
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VNC Proxy
>Affects Versions: 4.1.0, 4.2.0
>Reporter: Sanjay Tripathi
>Assignee: Sanjay Tripathi
> Fix For: 4.2.1
>
>
> While deploying a VM, user passes the "keyboard" parameter to specify the 
> default language for that VM but in the consoleView, the default language 
> selected is en-us irrespective of the default language of the VM. 
> To change the language, user has to navigate through the dropdown menu 
> provided in consoleView.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CLOUDSTACK-4612) Specified keyboard language is not showing as default in consoleView passed during deployVM

2013-09-05 Thread Sanjay Tripathi (JIRA)
Sanjay Tripathi created CLOUDSTACK-4612:
---

 Summary: Specified keyboard language is not showing as default in 
consoleView passed during deployVM
 Key: CLOUDSTACK-4612
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4612
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: VNC Proxy
Affects Versions: 4.1.0, 4.2.0
Reporter: Sanjay Tripathi
Assignee: Sanjay Tripathi
 Fix For: 4.2.1


While deploying a VM, the user passes the "keyboard" parameter to specify the 
default keyboard language for that VM, but in the consoleView the default 
language selected is en-us irrespective of the language specified for the VM.

To change the language, the user has to navigate through the dropdown menu.
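For reference, the parameter in question is supplied at deploy time. A minimal
sketch of such a call (placeholder endpoint and UUIDs, not values from this
report), assuming the standard deployVirtualMachine parameters:

    from urllib.parse import urlencode

    api = "http://<management-server>:8096/client/api"
    params = {
        "command": "deployVirtualMachine",
        "zoneid": "<zone-uuid>",
        "templateid": "<template-uuid>",
        "serviceofferingid": "<service-offering-uuid>",
        "keyboard": "jp",  # the language the consoleView is expected to default to
    }
    print(api + "?" + urlencode(params))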

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (CLOUDSTACK-4405) (Upgrade) Migrate failed between existing hosts and new hosts

2013-09-05 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou reopened CLOUDSTACK-4405:
--


There are still two issues:

(1) Migration from the new hosts to the old hosts fails,
because the bridge name expected on the old hosts is cloudVirBr* when
network.bridge.name.schema is set to 3.0 in
/etc/cloudstack/agent/agent.properties, while the actual bridge name is breth*-*
after running cloudstack-agent-upgrade (see the naming sketch below).

(2) All ports of VMs (Basic zone, or Advanced zone with security groups) on the
old hosts are open, because the iptables rules are bound to the device (bridge)
name, which is changed by cloudstack-agent-upgrade.

I posted a patch for these two issues; it tested OK in my environment.
https://reviews.apache.org/r/13992/

With this patch, the KVM upgrade steps are:
a. Install the 4.2 cloudstack agent on each KVM host.
b. Run "cloudstack-agent-upgrade". This script upgrades all the existing bridge
names to the new bridge names and updates the related firewall rules.
c. Install a libvirt hook:
c1. mkdir /etc/libvirt/hooks
c2. cp /usr/share/cloudstack-agent/lib/libvirtqemuhook /etc/libvirt/hooks/qemu
c3. chmod +x /etc/libvirt/hooks/qemu
c4. service libvirtd restart
c5. service cloudstack-agent restart
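A minimal sketch of the naming difference behind both issues (the helper
functions are an illustration derived from the names quoted in this report, not
agent code):

    def old_bridge_name(vlan):
        # 2.2.x-style name that agent.properties (network.bridge.name.schema=3.0) still expects
        return "cloudVirBr%s" % vlan

    def new_bridge_name(phys_dev, vlan):
        # name written by cloudstack-agent-upgrade: br<device>-<vlan>
        return "br%s-%s" % (phys_dev, vlan)

    print(old_bridge_name(110))           # cloudVirBr110 - what old rules and migrations still reference
    print(new_bridge_name("em1", 110))    # brem1-110     - what actually exists after the upgrade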

> (Upgrade) Migrate failed between existing hosts and new hosts
> -
>
> Key: CLOUDSTACK-4405
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4405
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.1.0, 4.2.0
> Environment: CS 4.1
>Reporter: Wei Zhou
>Assignee: edison su
>Priority: Blocker
>  Labels: ReleaseNote
> Fix For: 4.1.1, 4.2.0, 4.2.1
>
>
> There are two hosts (cs-kvm001, cs-kvm002) in old 2.2.14 environment .
> After upgrade from 2.2.14 to 4.1, I added two new hosts (cs-kvm003, 
> cs-kvm004).
> The migration between cs-kvm001 and cs-kvm002, or cs-kvm003 and cs-kvm004 
> succeed.
> However, the migration from cs-kvm001/002 to the new hosts (cs-kvm003, 
> cs-kvm004) failed.
> 2013-08-19 16:57:31,051 DEBUG [kvm.resource.BridgeVifDriver] 
> (agentRequest-Handler-1:null) nic=[Nic:Guest-10.11.110.231-vlan://110]
> 2013-08-19 16:57:31,051 DEBUG [kvm.resource.BridgeVifDriver] 
> (agentRequest-Handler-1:null) creating a vlan dev and bridge for guest 
> traffic per traffic label cloudbr0
> 2013-08-19 16:57:31,051 DEBUG [utils.script.Script] 
> (agentRequest-Handler-1:null) Executing: /bin/bash -c brctl show | grep 
> cloudVirBr110
> 2013-08-19 16:57:31,063 DEBUG [utils.script.Script] 
> (agentRequest-Handler-1:null) Exit value is 1
> 2013-08-19 16:57:31,063 DEBUG [utils.script.Script] 
> (agentRequest-Handler-1:null)
> 2013-08-19 16:57:31,063 DEBUG [kvm.resource.BridgeVifDriver] 
> (agentRequest-Handler-1:null) Executing: 
> /usr/share/cloudstack-common/scripts/vm/network/vnet/modifyvlan.sh -v 110 -p 
> em1 -b brem1-110 -o add
> 2013-08-19 16:57:31,121 DEBUG [kvm.resource.BridgeVifDriver] 
> (agentRequest-Handler-1:null) Execution is successful.
> 2013-08-19 16:57:31,122 DEBUG [kvm.resource.BridgeVifDriver] 
> (agentRequest-Handler-1:null) Set name-type for VLAN subsystem. Should be 
> visible in /proc/net/vlan/config
> This is because the bridge name on old hosts are cloudVirBr110, and brem1-110 
> on new hosts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (CLOUDSTACK-4570) Doc: service cloud-management wrongly named

2013-09-05 Thread Pavan Kumar Bandarupally (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavan Kumar Bandarupally closed CLOUDSTACK-4570.


Resolution: Fixed

> Doc: service cloud-management wrongly named
> ---
>
> Key: CLOUDSTACK-4570
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4570
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Doc
>Affects Versions: 4.2.0
>Reporter: Pavan Kumar Bandarupally
>Priority: Critical
> Fix For: 4.2.0
>
>
> 4.2.6 LDAP User Authentication: Limitation
> "service cloud-management restart"  should be changed to "service 
> cloudstack-management restart"
> Apart from that, there is a minor spelling mistake in section "3.7. About 
> Secondary Storage": in the second-to-last paragraph, Swift is misspelled as 
> "swoft".
> The NFS storage in each zone acts as a staging area
> through which all templates and other secondary storage data pass before 
> being forwarded to Swoft
> or S3

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3583) Management server stop is not removing the PID

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3583:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> Management server stop is not removing the PID 
> ---
>
> Key: CLOUDSTACK-3583
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3583
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
>Reporter: Sailaja Mada
> Fix For: 4.2.1
>
>
> Steps:
> 1. Configure Adv Zone with VMWARE
> 2. Deploy VMs and update the expunge interval / GC value
> 3. Stop the Management server
> [root@ec2 management]# service cloudstack-management stop
> Stopping cloudstack-management:[  OK  ]
> 4. Check the status 
> cloudstack-management dead but pid file exists
> The pid file locates at /var/run/cloudstack-management.pid and lock file at 
> /var/lock/subsys/cloudstack-management.
> Starting cloudstack-management will take care of them or you can 
> manually clean up.
> [root@ec2 management]#
> 5. Start the management server
> [root@ec2 management]# service cloudstack-management start
> Starting cloudstack-management:[  OK  ]
> 6. Check the status 
> [root@ec2 management]# service cloudstack-management status
> cloudstack-management (pid  350) is running...
> [root@ec2 management]#
> But when the Management server is stopped, the status shows that it was not a clean stop: 
> [root@ec2 management]# service cloudstack-management stop
> Stopping cloudstack-management:[  OK  ]
> [root@ec2 management]# service cloudstack-management status
> cloudstack-management dead but pid file exists
> The pid file locates at /var/run/cloudstack-management.pid and lock file at 
> /var/lock/subsys/cloudstack-management.
> Starting cloudstack-management will take care of them or you can 
> manually clean up.
> [root@ec2 management]# service cloudstack-management status
> cloudstack-management dead but pid file exists
> The pid file locates at /var/run/cloudstack-management.pid and lock file at 
> /var/lock/subsys/cloudstack-management.
> Starting cloudstack-management will take care of them or you can 
> manually clean up.
> [root@ec2 management]# service cloudstack-management start
> Starting cloudstack-management:[  OK  ]
> [root@ec2 management]# service cloudstack-management status
> cloudstack-management (pid  350) is running...
> [root@ec2 management]#
> [root@ec2 management]# cat /var/run/cloudstack-management.pid
> 350
> [root@ec2 management]# cat /var/lock/subsys/cloudstack-management
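The "dead but pid file exists" status means the init script left a stale
pid/lock file behind after the stop. A minimal sketch of the manual cleanup the
status message points at (paths taken from the output above; the liveness check
is a generic illustration, not the init script's own logic):

    import os

    PID_FILE = "/var/run/cloudstack-management.pid"
    LOCK_FILE = "/var/lock/subsys/cloudstack-management"

    def cleanup_stale_pid():
        """Remove the pid/lock files only when no process with that pid is alive."""
        try:
            pid = int(open(PID_FILE).read().strip())
        except (OSError, ValueError):
            return                   # no readable pid file: nothing to clean up
        try:
            os.kill(pid, 0)          # signal 0 only checks that the process exists
            return                   # still running: leave the files alone
        except ProcessLookupError:
            pass                     # no such process: the pid file is stale
        for path in (PID_FILE, LOCK_FILE):
            if os.path.exists(path):
                os.remove(path)

    if __name__ == "__main__":
        cleanup_stale_pid()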

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3583) Management server stop is not removing the PID

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3583:
---

Assignee: Saksham Srivastava

> Management server stop is not removing the PID 
> ---
>
> Key: CLOUDSTACK-3583
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3583
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
>Reporter: Sailaja Mada
>Assignee: Saksham Srivastava
> Fix For: 4.2.1
>
>
> Steps:
> 1. Configure Adv Zone with VMWARE
> 2. Deploy VMs and update the expunge interval / GC value
> 3. Stop the Management server
> [root@ec2 management]# service cloudstack-management stop
> Stopping cloudstack-management:[  OK  ]
> 4. Check the status 
> cloudstack-management dead but pid file exists
> The pid file locates at /var/run/cloudstack-management.pid and lock file at 
> /var/lock/subsys/cloudstack-management.
> Starting cloudstack-management will take care of them or you can 
> manually clean up.
> [root@ec2 management]#
> 5. Start the management server
> [root@ec2 management]# service cloudstack-management start
> Starting cloudstack-management:[  OK  ]
> 6. Check the status 
> [root@ec2 management]# service cloudstack-management status
> cloudstack-management (pid  350) is running...
> [root@ec2 management]#
> But when the Management server is stopped, the status shows that it was not a clean stop: 
> [root@ec2 management]# service cloudstack-management stop
> Stopping cloudstack-management:[  OK  ]
> [root@ec2 management]# service cloudstack-management status
> cloudstack-management dead but pid file exists
> The pid file locates at /var/run/cloudstack-management.pid and lock file at 
> /var/lock/subsys/cloudstack-management.
> Starting cloudstack-management will take care of them or you can 
> manually clean up.
> [root@ec2 management]# service cloudstack-management status
> cloudstack-management dead but pid file exists
> The pid file locates at /var/run/cloudstack-management.pid and lock file at 
> /var/lock/subsys/cloudstack-management.
> Starting cloudstack-management will take care of them or you can 
> manually clean up.
> [root@ec2 management]# service cloudstack-management start
> Starting cloudstack-management:[  OK  ]
> [root@ec2 management]# service cloudstack-management status
> cloudstack-management (pid  350) is running...
> [root@ec2 management]#
> [root@ec2 management]# cat /var/run/cloudstack-management.pid
> 350
> [root@ec2 management]# cat /var/lock/subsys/cloudstack-management

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3583) Management server stop is not removing the PID

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3583:
---

Priority: Critical  (was: Major)

> Management server stop is not removing the PID 
> ---
>
> Key: CLOUDSTACK-3583
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3583
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
>Reporter: Sailaja Mada
>Assignee: Saksham Srivastava
>Priority: Critical
> Fix For: 4.2.1
>
>
> Steps:
> 1. Configure Adv Zone with VMWARE
> 2. Deploy VMs and update the expunge interval / GC value
> 3. Stop the Management server
> [root@ec2 management]# service cloudstack-management stop
> Stopping cloudstack-management:[  OK  ]
> 4. Check the status 
> cloudstack-management dead but pid file exists
> The pid file locates at /var/run/cloudstack-management.pid and lock file at 
> /var/lock/subsys/cloudstack-management.
> Starting cloudstack-management will take care of them or you can 
> manually clean up.
> [root@ec2 management]#
> 5. Start the management server
> [root@ec2 management]# service cloudstack-management start
> Starting cloudstack-management:[  OK  ]
> 6. Check the status 
> [root@ec2 management]# service cloudstack-management status
> cloudstack-management (pid  350) is running...
> [root@ec2 management]#
> But when the Management server is stopped, the status shows that it was not a clean stop: 
> [root@ec2 management]# service cloudstack-management stop
> Stopping cloudstack-management:[  OK  ]
> [root@ec2 management]# service cloudstack-management status
> cloudstack-management dead but pid file exists
> The pid file locates at /var/run/cloudstack-management.pid and lock file at 
> /var/lock/subsys/cloudstack-management.
> Starting cloudstack-management will take care of them or you can 
> manually clean up.
> [root@ec2 management]# service cloudstack-management status
> cloudstack-management dead but pid file exists
> The pid file locates at /var/run/cloudstack-management.pid and lock file at 
> /var/lock/subsys/cloudstack-management.
> Starting cloudstack-management will take care of them or you can 
> manually clean up.
> [root@ec2 management]# service cloudstack-management start
> Starting cloudstack-management:[  OK  ]
> [root@ec2 management]# service cloudstack-management status
> cloudstack-management (pid  350) is running...
> [root@ec2 management]#
> [root@ec2 management]# cat /var/run/cloudstack-management.pid
> 350
> [root@ec2 management]# cat /var/lock/subsys/cloudstack-management

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3608) "guest_os_hypervisor" table has repeated mappings of hypervisor and guest OS

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3608:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> "guest_os_hypervisor" table has repeated mappings of hypervisor and guest OS
> 
>
> Key: CLOUDSTACK-3608
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3608
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
>Reporter: Chandan Purushothama
>Assignee: Sanjay Tripathi
> Fix For: 4.2.1
>
>
> mysql> select hypervisor_type,guest_os_name,guest_os_id from 
> guest_os_hypervisor where guest_os_id in (165,166,167,168);
> +-+--+-+
> | hypervisor_type | guest_os_name| guest_os_id |
> +-+--+-+
> | XenServer   | Windows 8 (32-bit)   | 165 |
> | XenServer   | Windows 8 (64-bit)   | 166 |
> | XenServer   | Windows Server 2012 (64-bit) | 167 |
> | XenServer   | Windows Server 8 (64-bit)| 168 |
> | VmWare  | Windows 8 (32-bit)   | 165 |
> | VmWare  | Windows 8 (64-bit)   | 166 |
> | VmWare  | Windows Server 2012 (64-bit) | 167 |
> | VmWare  | Windows Server 8 (64-bit)| 168 |
> | VmWare  | Windows 8 (32-bit)   | 165 |
> | VmWare  | Windows 8 (64-bit)   | 166 |
> | VmWare  | Windows Server 2012 (64-bit) | 167 |
> | VmWare  | Windows Server 8 (64-bit)| 168 |
> | XenServer   | Windows 8 (32-bit)   | 165 |
> | XenServer   | Windows 8 (64-bit)   | 166 |
> | XenServer   | Windows Server 2012 (64-bit) | 167 |
> | XenServer   | Windows Server 8 (64-bit)| 168 |
> +-+--+-+
> 16 rows in set (0.00 sec)
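A minimal sketch of how the duplicate mappings can be listed directly from the
cloud database (the pymysql driver and the credentials are assumptions, and the
query is only a diagnostic illustration, not part of any fix):

    import pymysql  # assumption: any DB-API MySQL driver works the same way

    conn = pymysql.connect(host="localhost", user="cloud", password="<password>", db="cloud")
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT hypervisor_type, guest_os_name, guest_os_id, COUNT(*) AS copies "
                "FROM guest_os_hypervisor "
                "GROUP BY hypervisor_type, guest_os_name, guest_os_id "
                "HAVING COUNT(*) > 1"
            )
            for hypervisor, os_name, os_id, copies in cur.fetchall():
                print(hypervisor, os_name, os_id, copies)
    finally:
        conn.close()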

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CLOUDSTACK-3607) "guest_os_hypervisor" table has values that are not registered in "guest_os" table

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek resolved CLOUDSTACK-3607.


Resolution: Invalid
  Assignee: Abhinandan Prateek

"guest_os_hypervisor" is not used by cloudstack

> "guest_os_hypervisor" table has values that are not registered in "guest_os" 
> table
> --
>
> Key: CLOUDSTACK-3607
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3607
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
>Reporter: Chandan Purushothama
>Assignee: Abhinandan Prateek
>Priority: Critical
> Fix For: 4.2.1
>
>
> mysql> select * from guest_os_hypervisor where guest_os_id not in (select id 
> from guest_os);
> +-+-++-+
> | id  | hypervisor_type | guest_os_name  | guest_os_id |
> +-+-++-+
> | 128 | VmWare  | Red Hat Enterprise Linux 6(32-bit) | 204 |
> | 129 | VmWare  | Red Hat Enterprise Linux 6(64-bit) | 205 |
> +-+-++-+
> 2 rows in set (0.07 sec)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3603) "template_store_ref" table has Invalid URL References

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3603:
---

Assignee: Harikrishna Patnala

> "template_store_ref" table has Invalid URL References
> -
>
> Key: CLOUDSTACK-3603
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3603
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
>Reporter: Chandan Purushothama
>Assignee: Harikrishna Patnala
> Fix For: 4.2.0
>
>
> mysql> select id,template_id,download_state,install_path,url,state from 
> template_store_ref;
> ++-++--+-+---+
> | id | template_id | download_state | install_path
>  | url
>  | state |
> ++-++--+-+---+
> |  4 |   3 | DOWNLOADED | 
> template/tmpl/1/3/4f102c62-686d-42d9-8a07-11d2058c7d31.qcow2 | 
> http://download.cloud.com/templates/acton/acton-systemvm-02062012.qcow2.bz2   
>   | Ready |
> |  5 |   1 | DOWNLOADED | 
> template/tmpl/1/1/4ab43c65-76d5-4196-8d3f-08a5a803b195.vhd   | 
> http://download.cloud.com/templates/acton/acton-systemvm-02062012.vhd.bz2 
>   | Ready |
> |  6 |   4 | DOWNLOADED | 
> template/tmpl/1/4/4694cfe1-1f7d-3ce8-bda2-914b5e1ed979.qcow2 | 
> http://download.cloud.com/releases/2.2.0/eec2209b-9875-3c8d-92be-c001bd8a0faf.qcow2.bz2
>  | Ready |
> ++-++--+-+---+
> 3 rows in set (0.00 sec)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3603) "template_store_ref" table has Invalid URL References

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3603:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> "template_store_ref" table has Invalid URL References
> -
>
> Key: CLOUDSTACK-3603
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3603
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
>Reporter: Chandan Purushothama
>Assignee: Harikrishna Patnala
> Fix For: 4.2.1
>
>
> mysql> select id,template_id,download_state,install_path,url,state from 
> template_store_ref;
> ++-++--+-+---+
> | id | template_id | download_state | install_path
>  | url
>  | state |
> ++-++--+-+---+
> |  4 |   3 | DOWNLOADED | 
> template/tmpl/1/3/4f102c62-686d-42d9-8a07-11d2058c7d31.qcow2 | 
> http://download.cloud.com/templates/acton/acton-systemvm-02062012.qcow2.bz2   
>   | Ready |
> |  5 |   1 | DOWNLOADED | 
> template/tmpl/1/1/4ab43c65-76d5-4196-8d3f-08a5a803b195.vhd   | 
> http://download.cloud.com/templates/acton/acton-systemvm-02062012.vhd.bz2 
>   | Ready |
> |  6 |   4 | DOWNLOADED | 
> template/tmpl/1/4/4694cfe1-1f7d-3ce8-bda2-914b5e1ed979.qcow2 | 
> http://download.cloud.com/releases/2.2.0/eec2209b-9875-3c8d-92be-c001bd8a0faf.qcow2.bz2
>  | Ready |
> ++-++--+-+---+
> 3 rows in set (0.00 sec)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3607) "guest_os_hypervisor" table has values that are not registered in "guest_os" table

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3607:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> "guest_os_hypervisor" table has values that are not registered in "guest_os" 
> table
> --
>
> Key: CLOUDSTACK-3607
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3607
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
>Reporter: Chandan Purushothama
>Priority: Critical
> Fix For: 4.2.1
>
>
> mysql> select * from guest_os_hypervisor where guest_os_id not in (select id 
> from guest_os);
> +-+-++-+
> | id  | hypervisor_type | guest_os_name  | guest_os_id |
> +-+-++-+
> | 128 | VmWare  | Red Hat Enterprise Linux 6(32-bit) | 204 |
> | 129 | VmWare  | Red Hat Enterprise Linux 6(64-bit) | 205 |
> +-+-++-+
> 2 rows in set (0.07 sec)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3608) "guest_os_hypervisor" table has repeated mappings of hypervisor and guest OS

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3608:
---

Assignee: Sanjay Tripathi

> "guest_os_hypervisor" table has repeated mappings of hypervisor and guest OS
> 
>
> Key: CLOUDSTACK-3608
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3608
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
>Reporter: Chandan Purushothama
>Assignee: Sanjay Tripathi
> Fix For: 4.2.0
>
>
> mysql> select hypervisor_type,guest_os_name,guest_os_id from 
> guest_os_hypervisor where guest_os_id in (165,166,167,168);
> +-+--+-+
> | hypervisor_type | guest_os_name| guest_os_id |
> +-+--+-+
> | XenServer   | Windows 8 (32-bit)   | 165 |
> | XenServer   | Windows 8 (64-bit)   | 166 |
> | XenServer   | Windows Server 2012 (64-bit) | 167 |
> | XenServer   | Windows Server 8 (64-bit)| 168 |
> | VmWare  | Windows 8 (32-bit)   | 165 |
> | VmWare  | Windows 8 (64-bit)   | 166 |
> | VmWare  | Windows Server 2012 (64-bit) | 167 |
> | VmWare  | Windows Server 8 (64-bit)| 168 |
> | VmWare  | Windows 8 (32-bit)   | 165 |
> | VmWare  | Windows 8 (64-bit)   | 166 |
> | VmWare  | Windows Server 2012 (64-bit) | 167 |
> | VmWare  | Windows Server 8 (64-bit)| 168 |
> | XenServer   | Windows 8 (32-bit)   | 165 |
> | XenServer   | Windows 8 (64-bit)   | 166 |
> | XenServer   | Windows Server 2012 (64-bit) | 167 |
> | XenServer   | Windows Server 8 (64-bit)| 168 |
> +-+--+-+
> 16 rows in set (0.00 sec)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3607) "guest_os_hypervisor" table has values that are not registered in "guest_os" table

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3607:
---

Priority: Critical  (was: Major)

> "guest_os_hypervisor" table has values that are not registered in "guest_os" 
> table
> --
>
> Key: CLOUDSTACK-3607
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3607
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
>Reporter: Chandan Purushothama
>Priority: Critical
> Fix For: 4.2.0
>
>
> mysql> select * from guest_os_hypervisor where guest_os_id not in (select id 
> from guest_os);
> +-+-++-+
> | id  | hypervisor_type | guest_os_name  | guest_os_id |
> +-+-++-+
> | 128 | VmWare  | Red Hat Enterprise Linux 6(32-bit) | 204 |
> | 129 | VmWare  | Red Hat Enterprise Linux 6(64-bit) | 205 |
> +-+-++-+
> 2 rows in set (0.07 sec)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3608) "guest_os_hypervisor" table has repeated mappings of hypervisor and guest OS

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3608:
---

Priority: Critical  (was: Major)

> "guest_os_hypervisor" table has repeated mappings of hypervisor and guest OS
> 
>
> Key: CLOUDSTACK-3608
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3608
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
>Reporter: Chandan Purushothama
>Assignee: Sanjay Tripathi
>Priority: Critical
> Fix For: 4.2.1
>
>
> mysql> select hypervisor_type,guest_os_name,guest_os_id from 
> guest_os_hypervisor where guest_os_id in (165,166,167,168);
> +-+--+-+
> | hypervisor_type | guest_os_name| guest_os_id |
> +-+--+-+
> | XenServer   | Windows 8 (32-bit)   | 165 |
> | XenServer   | Windows 8 (64-bit)   | 166 |
> | XenServer   | Windows Server 2012 (64-bit) | 167 |
> | XenServer   | Windows Server 8 (64-bit)| 168 |
> | VmWare  | Windows 8 (32-bit)   | 165 |
> | VmWare  | Windows 8 (64-bit)   | 166 |
> | VmWare  | Windows Server 2012 (64-bit) | 167 |
> | VmWare  | Windows Server 8 (64-bit)| 168 |
> | VmWare  | Windows 8 (32-bit)   | 165 |
> | VmWare  | Windows 8 (64-bit)   | 166 |
> | VmWare  | Windows Server 2012 (64-bit) | 167 |
> | VmWare  | Windows Server 8 (64-bit)| 168 |
> | XenServer   | Windows 8 (32-bit)   | 165 |
> | XenServer   | Windows 8 (64-bit)   | 166 |
> | XenServer   | Windows Server 2012 (64-bit) | 167 |
> | XenServer   | Windows Server 8 (64-bit)| 168 |
> +-+--+-+
> 16 rows in set (0.00 sec)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3656) lots of cloud-management should be changed to cloudstack-management

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3656:
---

Fix Version/s: (was: 4.1.1)
   (was: 4.2.0)
   4.2.1

> lots of cloud-management should be changed to cloudstack-management
> ---
>
> Key: CLOUDSTACK-3656
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3656
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.1.1, 4.2.0, Future
> Environment: CentOS & Ubuntu
>Reporter: gavin lee
>  Labels: cloud-management, cloudstack-management
> Fix For: 4.2.1
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Starting from 4.1, the prefix of the service name and paths changed from 
> cloud- to cloudstack-.
> There are still many occurrences of cloud-management that should be 
> cloudstack-management.
> The folders containing this issue are: 
> client/test/packaging/tools and some translations under docs.
> This also causes another issue: when the OS on which the management server is 
> installed is stopped/restarted, the pid of the service "cloud-management" or 
> "K20cloudstack-management" cannot be found, so stopping cloud-management fails.
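A minimal sketch of locating the remaining occurrences in a source checkout
(the script is only an illustration; run it from the repository root):

    import os

    ROOT = "."  # assumption: current directory is a CloudStack source checkout

    for dirpath, _dirs, files in os.walk(ROOT):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as fh:
                    for lineno, line in enumerate(fh, 1):
                        if "cloud-management" in line:
                            print("%s:%d: %s" % (path, lineno, line.rstrip()))
            except OSError:
                continue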

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3714) 4.2 KVM agent sends wrong StartupRoutingCommand to 4.1 management server

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3714:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> 4.2 KVM agent sends wrong StartupRoutingCommand to 4.1 management server
> 
>
> Key: CLOUDSTACK-3714
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3714
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.2.0
> Environment: 4.2 agent with 4.1 management server
>Reporter: Wido den Hollander
> Fix For: 4.2.1
>
>
> When the Agent starts it sends a StartupRoutingCommand to the management 
> server, but this has changed it seems:
> In 4.1 the Agent sends this JSON:
> Sending Startup: Seq 4-0:  { Cmd , MgmtId: -1, via: 4, Ver: v1, Flags: 1, 
> [{"StartupRoutingCommand":{
> In 4.2 however the JSON data starts with:
> Sending Startup: Seq 1-6:  { Cmd , MgmtId: -1, via: 1, Ver: v1, Flags: 1, 
> [{"com.cloud.agent.api.StartupRoutingCommand":{
> So the Agent sends the fully qualified class name, which confuses the 
> Management server and makes it throw an Exception:
> Caused by: com.cloud.utils.exception.CloudRuntimeException: can't find 
> com.cloud.agent.api.com.cloud.agent.api.StartupRoutingCommand
>   at 
> com.cloud.agent.transport.ArrayTypeAdaptor.deserialize(ArrayTypeAdaptor.java:79)
>   at 
> com.cloud.agent.transport.ArrayTypeAdaptor.deserialize(ArrayTypeAdaptor.java:37)
>   at 
> com.google.gson.JsonDeserializerExceptionWrapper.deserialize(JsonDeserializerExceptionWrapper.java:51)
>   ... 15 more
> So it's searching for 
> "com.cloud.agent.api.com.cloud.agent.api.StartupRoutingCommand" which 
> obviously fails.
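A minimal sketch of the mismatch (the prefix handling below illustrates the
reported behaviour; it is not the actual ArrayTypeAdaptor code):

    PKG = "com.cloud.agent.api."

    def resolve_unconditionally(name):
        # what the 4.1 server effectively does: always prepend its package prefix
        return PKG + name

    def resolve_guarded(name):
        # assumed fix: only prepend when the agent sent a simple class name
        return name if name.startswith(PKG) else PKG + name

    print(resolve_unconditionally("com.cloud.agent.api.StartupRoutingCommand"))
    # -> com.cloud.agent.api.com.cloud.agent.api.StartupRoutingCommand (the lookup that fails)
    print(resolve_guarded("com.cloud.agent.api.StartupRoutingCommand"))
    # -> com.cloud.agent.api.StartupRoutingCommand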

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3737) Uploaded volume is not getting deleted from secondary storage after attaching it to guest vm

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3737:
---

Priority: Critical  (was: Major)

> Uploaded volume is not getting deleted from secondary storage after attaching 
> it to guest vm
> 
>
> Key: CLOUDSTACK-3737
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3737
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller, Volumes
>Affects Versions: 4.2.0
> Environment: Latest build from ACS 4.2 branch
> Storage: NFS for both primary and secondary
>Reporter: Sanjeev N
>Assignee: Nitin Mehta
>Priority: Critical
> Fix For: 4.2.1
>
> Attachments: management-server.rar
>
>
> Uploaded volume is not getting deleted from secondary storage after attaching 
> it to guest vm
> Steps to Reproduce:
> 
> 1. Bring up CS with the Xen hypervisor and use NFS as the storage for both 
> primary and secondary storage.
> 2. Deploy a guest vm from the default CentOS template, with just a root disk.
> 3. Upload a volume to CS using the following API:
> http://10.147.59.126:8096/client/api?command=uploadVolume&format=VHD&name=cent62&url=http://10.147.28.7/templates/CentOS62-64bit/280c2a70-e37f-4863-bff8-d318122fd61b.vhd&zoneid=5c5c0b8a-9d5a-4b95-8f13-b31058ffdb37&account=admin&domainid=1
> 4. After the volume download to secondary storage is complete, attach it to 
> the vm using the API:
> http://10.147.59.126:8096/client/api?command=attachVolume&id=0d627eec-3824-4d35-8997-853472502454&virtualmachineid=6ef83f81-2577-4be2-9720-5fdcf3912e5f
> Observations:
> ===
> After step 4 the volume should be moved from secondary storage to primary. 
> Instead it is only copied to primary and is still present on secondary. It 
> should be deleted from secondary storage after it has been successfully 
> copied to primary storage.
> Volume state in DB after attaching to VM:
> mysql> select * from volumes where 
> uuid='0d627eec-3824-4d35-8997-853472502454'\G;
> *** 1. row ***
> id: 10
> account_id: 2
>  domain_id: 1
>pool_id: 1
>   last_pool_id: NULL
>instance_id: 3
>  device_id: 1
>   name: cent62
>   uuid: 0d627eec-3824-4d35-8997-853472502454
>   size: 10737418240
> folder: NULL
>   path: b2b96197-665a-40cd-a52b-de0506c45a8e
> pod_id: NULL
> data_center_id: 1
> iscsi_name: NULL
>host_ip: NULL
>volume_type: DATADISK
>  pool_type: NULL
>   disk_offering_id: 6
>template_id: NULL
> first_snapshot_backup_uuid: NULL
>recreatable: 0
>created: 2013-07-23 12:36:22
>   attached: 2013-07-23 12:52:40
>updated: 2013-07-23 13:09:08
>removed: NULL
>  state: Ready
> chain_info: NULL
>   update_count: 6
>  disk_type: NULL
> display_volume: 0
> format: VHD
>   min_iops: NULL
>   max_iops: NULL
> 1 row in set (0.00 sec)
> ERROR:
> No query specified
> In volume_store_ref the volume remains in the "Creating" state, yet attaching 
> the volume still succeeded.
> Here is the volume state in volume_store_ref from cloud db:
> mysql> select * from volume_store_ref where id=4\G;
> *** 1. row ***
> id: 4
>   store_id: 1
>  volume_id: 10
>zone_id: 0
>created: 2013-07-23 13:09:07
>   last_updated: NULL
> job_id: NULL
>   download_pct: 0
>   size: 0
>  physical_size: 0
> download_state: NULL
>   checksum: NULL
>  error_str: NULL
> local_path: NULL
>   install_path: volumes/2/10
>url: NULL
>   download_url: NULL
>  state: Creating
>  destroyed: 0
>   update_count: 1
>ref_cnt: 0
>updated: 2013-07-23 13:09:07
> 1 row in set (0.00 sec)
> ERROR:
> No query specified
> Log snippet for upload volume:
> 2013-07-23 08:36:22,827 DEBUG [cloud.async.AsyncJobManagerImpl] 
> (ApiServer-1:null) submit async job-22 = [ 
> 63bf80a6-5142-489e-ab5d-7a5ef061140d ], details: AsyncJobVO {id:22, userId: 
> 1, accountId: 1, sessionKey: null, instanceType: None, instanceId: null, cmd: 
> org.apache.cloudstack.api.command.user.volume.UploadVolumeCmd, cmdO

[jira] [Updated] (CLOUDSTACK-3737) Uploaded volume is not getting deleted from secondary storage after attaching it to guest vm

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3737:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> Uploaded volume is not getting deleted from secondary storage after attaching 
> it to guest vm
> 
>
> Key: CLOUDSTACK-3737
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3737
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller, Volumes
>Affects Versions: 4.2.0
> Environment: Latest build from ACS 4.2 branch
> Storage: NFS for both primary and secondary
>Reporter: Sanjeev N
>Assignee: Nitin Mehta
> Fix For: 4.2.1
>
> Attachments: management-server.rar
>
>
> Uploaded volume is not getting deleted from secondary storage after attaching 
> it to guest vm
> Steps to Reproduce:
> 
> 1. Bring up CS with the Xen hypervisor and use NFS as the storage for both 
> primary and secondary storage.
> 2. Deploy a guest vm from the default CentOS template, with just a root disk.
> 3. Upload a volume to CS using the following API:
> http://10.147.59.126:8096/client/api?command=uploadVolume&format=VHD&name=cent62&url=http://10.147.28.7/templates/CentOS62-64bit/280c2a70-e37f-4863-bff8-d318122fd61b.vhd&zoneid=5c5c0b8a-9d5a-4b95-8f13-b31058ffdb37&account=admin&domainid=1
> 4. After the volume download to secondary storage is complete, attach it to 
> the vm using the API:
> http://10.147.59.126:8096/client/api?command=attachVolume&id=0d627eec-3824-4d35-8997-853472502454&virtualmachineid=6ef83f81-2577-4be2-9720-5fdcf3912e5f
> Observations:
> ===
> After step 4 the volume should be moved from secondary storage to primary. 
> Instead it is only copied to primary and is still present on secondary. It 
> should be deleted from secondary storage after it has been successfully 
> copied to primary storage.
> Volume state in DB after attaching to VM:
> mysql> select * from volumes where 
> uuid='0d627eec-3824-4d35-8997-853472502454'\G;
> *** 1. row ***
> id: 10
> account_id: 2
>  domain_id: 1
>pool_id: 1
>   last_pool_id: NULL
>instance_id: 3
>  device_id: 1
>   name: cent62
>   uuid: 0d627eec-3824-4d35-8997-853472502454
>   size: 10737418240
> folder: NULL
>   path: b2b96197-665a-40cd-a52b-de0506c45a8e
> pod_id: NULL
> data_center_id: 1
> iscsi_name: NULL
>host_ip: NULL
>volume_type: DATADISK
>  pool_type: NULL
>   disk_offering_id: 6
>template_id: NULL
> first_snapshot_backup_uuid: NULL
>recreatable: 0
>created: 2013-07-23 12:36:22
>   attached: 2013-07-23 12:52:40
>updated: 2013-07-23 13:09:08
>removed: NULL
>  state: Ready
> chain_info: NULL
>   update_count: 6
>  disk_type: NULL
> display_volume: 0
> format: VHD
>   min_iops: NULL
>   max_iops: NULL
> 1 row in set (0.00 sec)
> ERROR:
> No query specified
> In volume_store_ref the volume remains in the "Creating" state, yet attaching 
> the volume still succeeded.
> Here is the volume state in volume_store_ref from cloud db:
> mysql> select * from volume_store_ref where id=4\G;
> *** 1. row ***
> id: 4
>   store_id: 1
>  volume_id: 10
>zone_id: 0
>created: 2013-07-23 13:09:07
>   last_updated: NULL
> job_id: NULL
>   download_pct: 0
>   size: 0
>  physical_size: 0
> download_state: NULL
>   checksum: NULL
>  error_str: NULL
> local_path: NULL
>   install_path: volumes/2/10
>url: NULL
>   download_url: NULL
>  state: Creating
>  destroyed: 0
>   update_count: 1
>ref_cnt: 0
>updated: 2013-07-23 13:09:07
> 1 row in set (0.00 sec)
> ERROR:
> No query specified
> Log snippet for upload volume:
> 2013-07-23 08:36:22,827 DEBUG [cloud.async.AsyncJobManagerImpl] 
> (ApiServer-1:null) submit async job-22 = [ 
> 63bf80a6-5142-489e-ab5d-7a5ef061140d ], details: AsyncJobVO {id:22, userId: 
> 1, accountId: 1, sessionKey: null, instanceType: None, instanceId: null, cmd: 
> org.apache.cloudstack.api.command.user.volume.UploadVolumeCmd, cmdOriginato

[jira] [Updated] (CLOUDSTACK-3692) Test case test_netscaler_lb.TestVmWithLb.test_06_destroy_user_vm failed

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3692:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> Test case test_netscaler_lb.TestVmWithLb.test_06_destroy_user_vm failed 
> 
>
> Key: CLOUDSTACK-3692
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3692
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation
>Affects Versions: 4.2.0
> Environment: KVM and VMware, advanced zone
> 4.2
>Reporter: Rayees Namathponnan
>Assignee: Sowmya Krishnan
> Fix For: 4.2.1
>
> Attachments: MSlog.rar
>
>
> Failed with below stack trace 
> SSH Access failed for 10.223.52.5: The server should not be present in 
> netscaler after destroy
>  >> begin captured logging << 
> testclient.testcase.TestVmWithLb: DEBUG: Destroying VM instance: 
> ec812279-bceb-4d6b-9b8c-54320346bc25
> testclient.testcase.TestVmWithLb: DEBUG: Destroying VM: 
> ec812279-bceb-4d6b-9b8c-54320346bc25
> testclient.testcase.TestVmWithLb: DEBUG: Verifying request served by only 
> running instances
> testclient.testcase.TestVmWithLb: DEBUG: Command: hostname
> paramiko.transport: DEBUG: [chan 2] Max packet in: 34816 bytes
> paramiko.transport: DEBUG: [chan 2] Max packet out: 32768 bytes
> paramiko.transport: INFO: Secsh channel 2 opened.
> paramiko.transport: DEBUG: [chan 2] Sesch channel 2 request ok
> paramiko.transport: DEBUG: [chan 2] EOF received (2)
> paramiko.transport: DEBUG: [chan 2] EOF sent (2)
> sshClient: DEBUG: {Cmd: hostname via Host: 10.223.122.96} {returns: 
> ['9793f67d-bac4-4dd3-bf7c-e2d837b66e6e']}
> testclient.testcase.TestVmWithLb: DEBUG: Output: 
> ['9793f67d-bac4-4dd3-bf7c-e2d837b66e6e']
> paramiko.transport: DEBUG: starting thread (client mode): 0x11405350L
> paramiko.transport: INFO: Connected (version 2.0, client OpenSSH_4.3)
> paramiko.transport: DEBUG: kex algos:['diffie-hellman-group-exchange-sha1', 
> 'diffie-hellman-group14-sha1', 'diffie-hellman-group1-sha1'] server 
> key:['ssh-rsa', 'ssh-dss'] client encrypt:['aes128-cbc', '3des-cbc', 
> 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 
> 'aes192-cbc', 'aes256-cbc', 'rijndael-...@lysator.liu.se', 'aes128-ctr', 
> 'aes192-ctr', 'aes256-ctr'] server encrypt:['aes128-cbc', '3des-cbc', 
> 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 
> 'aes192-cbc', 'aes256-cbc', 'rijndael-...@lysator.liu.se', 'aes128-ctr', 
> 'aes192-ctr', 'aes256-ctr'] client mac:['hmac-md5', 'hmac-sha1', 
> 'hmac-ripemd160', 'hmac-ripemd...@openssh.com', 'hmac-sha1-96', 
> 'hmac-md5-96'] server mac:['hmac-md5', 'hmac-sha1', 'hmac-ripemd160', 
> 'hmac-ripemd...@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] client 
> compress:['none', 'z...@openssh.com'] server compress:['none', 
> 'z...@openssh.com'] client lang:[''] server lang:[''] kex follows?False
> paramiko.transport: DEBUG: Ciphers agreed: local=aes128-ctr, remote=aes128-ctr
> paramiko.transport: DEBUG: using kex diffie-hellman-group1-sha1; server key 
> type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local 
> hmac-sha1, remote hmac-sha1; compression: local none, remote none
> paramiko.transport: DEBUG: Switch to new keys ...
> paramiko.transport: DEBUG: Adding ssh-rsa host key for 10.223.122.96: 
> 0f8b3ff9dc4dce10340227dab3cac032
> paramiko.transport: DEBUG: Trying discovered key 
> 76be480fa6b8ad3b78082d6d19e4ee44 in /root/.ssh/id_rsa
> paramiko.transport: DEBUG: userauth is OK
> paramiko.transport: INFO: Authentication (publickey) failed.
> paramiko.transport: DEBUG: userauth is OK
> paramiko.transport: INFO: Authentication (password) successful!
> sshClient: DEBUG: SSH connect: root@10.223.122.96 with passwd password
> paramiko.transport: DEBUG: EOF in transport thread
> testclient.testcase.TestVmWithLb: DEBUG: Command: hostname
> paramiko.transport: DEBUG: [chan 1] Max packet in: 34816 bytes
> paramiko.transport: DEBUG: [chan 1] Max packet out: 32768 bytes
> paramiko.transport: INFO: Secsh channel 1 opened.
> paramiko.transport: DEBUG: [chan 1] Sesch channel 1 request ok
> paramiko.transport: DEBUG: [chan 1] EOF received (1)
> paramiko.transport: DEBUG: [chan 1] EOF sent (1)
> sshClient: DEBUG: {Cmd: hostname via Host: 10.223.122.96} {returns: 
> ['9793f67d-bac4-4dd3-bf7c-e2d837b66e6e']}
> testclient.testcase.TestVmWithLb: DEBUG: Output: 
> ['9793f67d-bac4-4dd3-bf7c-e2d837b66e6e']
> testclient.testcase.TestVmWithLb: DEBUG: Hostnames: 
> [['9793f67d-bac4-4dd3-bf7c-e2d837b66e6e'], 
> ['9793f67d-bac4-4dd3-bf7c-e2d837b66e6e']]
> testclient.testcase.TestVmWithLb: DEBUG: SSH into Ne

[jira] [Updated] (CLOUDSTACK-3737) Uploaded volume is not getting deleted from secondary storage after attaching it to guest vm

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3737:
---

Assignee: Nitin Mehta

> Uploaded volume is not getting deleted from secondary storage after attaching 
> it to guest vm
> 
>
> Key: CLOUDSTACK-3737
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3737
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller, Volumes
>Affects Versions: 4.2.0
> Environment: Latest build from ACS 4.2 branch
> Storage: NFS for both primary and secondary
>Reporter: Sanjeev N
>Assignee: Nitin Mehta
> Fix For: 4.2.0
>
> Attachments: management-server.rar
>
>
> Uploaded volume is not getting deleted from secondary storage after attaching 
> it to guest vm
> Steps to Reproduce:
> 
> 1. Bring up CS with the Xen hypervisor and use NFS as the storage for both 
> primary and secondary storage.
> 2. Deploy a guest vm from the default CentOS template, with just a root disk.
> 3. Upload a volume to CS using the following API:
> http://10.147.59.126:8096/client/api?command=uploadVolume&format=VHD&name=cent62&url=http://10.147.28.7/templates/CentOS62-64bit/280c2a70-e37f-4863-bff8-d318122fd61b.vhd&zoneid=5c5c0b8a-9d5a-4b95-8f13-b31058ffdb37&account=admin&domainid=1
> 4. After the volume download to secondary storage is complete, attach it to 
> the vm using the API:
> http://10.147.59.126:8096/client/api?command=attachVolume&id=0d627eec-3824-4d35-8997-853472502454&virtualmachineid=6ef83f81-2577-4be2-9720-5fdcf3912e5f
> Observations:
> ===
> After step 4 the volume should be moved from secondary storage to primary. 
> Instead it is only copied to primary and is still present on secondary. It 
> should be deleted from secondary storage after it has been successfully 
> copied to primary storage.
> Volume state in DB after attaching to VM:
> mysql> select * from volumes where 
> uuid='0d627eec-3824-4d35-8997-853472502454'\G;
> *** 1. row ***
> id: 10
> account_id: 2
>  domain_id: 1
>pool_id: 1
>   last_pool_id: NULL
>instance_id: 3
>  device_id: 1
>   name: cent62
>   uuid: 0d627eec-3824-4d35-8997-853472502454
>   size: 10737418240
> folder: NULL
>   path: b2b96197-665a-40cd-a52b-de0506c45a8e
> pod_id: NULL
> data_center_id: 1
> iscsi_name: NULL
>host_ip: NULL
>volume_type: DATADISK
>  pool_type: NULL
>   disk_offering_id: 6
>template_id: NULL
> first_snapshot_backup_uuid: NULL
>recreatable: 0
>created: 2013-07-23 12:36:22
>   attached: 2013-07-23 12:52:40
>updated: 2013-07-23 13:09:08
>removed: NULL
>  state: Ready
> chain_info: NULL
>   update_count: 6
>  disk_type: NULL
> display_volume: 0
> format: VHD
>   min_iops: NULL
>   max_iops: NULL
> 1 row in set (0.00 sec)
> ERROR:
> No query specified
> In volume_store_ref the volume remains in the "Creating" state, yet attaching 
> the volume still succeeded.
> Here is the volume state in volume_store_ref from cloud db:
> mysql> select * from volume_store_ref where id=4\G;
> *** 1. row ***
> id: 4
>   store_id: 1
>  volume_id: 10
>zone_id: 0
>created: 2013-07-23 13:09:07
>   last_updated: NULL
> job_id: NULL
>   download_pct: 0
>   size: 0
>  physical_size: 0
> download_state: NULL
>   checksum: NULL
>  error_str: NULL
> local_path: NULL
>   install_path: volumes/2/10
>url: NULL
>   download_url: NULL
>  state: Creating
>  destroyed: 0
>   update_count: 1
>ref_cnt: 0
>updated: 2013-07-23 13:09:07
> 1 row in set (0.00 sec)
> ERROR:
> No query specified
> Log snippet for upload volume:
> 2013-07-23 08:36:22,827 DEBUG [cloud.async.AsyncJobManagerImpl] 
> (ApiServer-1:null) submit async job-22 = [ 
> 63bf80a6-5142-489e-ab5d-7a5ef061140d ], details: AsyncJobVO {id:22, userId: 
> 1, accountId: 1, sessionKey: null, instanceType: None, instanceId: null, cmd: 
> org.apache.cloudstack.api.command.user.volume.UploadVolumeCmd, cmdOriginator: 
> null, cmdInfo: 
> {"cmdEventT

[jira] [Updated] (CLOUDSTACK-3788) [KVM] Weekly Snapshot got stuck in "Allocated State"

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3788:
---

Priority: Critical  (was: Major)

> [KVM] Weekly Snapshot got stuck in "Allocated State"
> 
>
> Key: CLOUDSTACK-3788
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3788
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Snapshot
>Affects Versions: 4.2.0
>Reporter: Chandan Purushothama
>Assignee: Fang Wang
>Priority: Critical
> Fix For: 4.2.1
>
> Attachments: management-server.log.2013-07-23.gz, 
> mysql_cloudstack_dump.zip
>
>
> Weekly Snapshot stuck in Allocated State:
> mysql> select * from snapshots where name like 
> "Atoms-VM-1_ROOT-6_20130723235146";
> ++++---+---+--+---+--+--+--+---+--++-+-++--++--+-+-+---+
> | id | data_center_id | account_id | domain_id | volume_id | disk_offering_id 
> | status| path | name | uuid  
>| snapshot_type | type_description | size   | created  
>| removed | backup_snap_id | swift_id | sechost_id | prev_snap_id | 
> hypervisor_type | version | s3_id |
> ++++---+---+--+---+--+--+--+---+--++-+-++--++--+-+-+---+
> | 24 |  1 |  3 | 1 | 6 |1 
> | Destroyed | NULL | Atoms-VM-1_ROOT-6_20130723235146 | 
> 08a0d2aa-9635-41cd-ba54-5367303bceac | 3 | HOURLY   | 
> 147456 | 2013-07-23 23:51:46 | NULL| NULL   | NULL |   
> NULL | NULL | KVM | 2.2 |  NULL |
> | 25 |  1 |  3 | 1 | 6 |1 
> | Allocated | NULL | Atoms-VM-1_ROOT-6_20130723235146 | 
> 1e24a056-be38-4b55-845b-a5672b9fa93c | 5 | WEEKLY   | 
> 147456 | 2013-07-23 23:51:46 | NULL| NULL   | NULL |   
> NULL | NULL | KVM | 2.2 |  NULL |
> ++++---+---+--+---+--+--+--+---+--++-+-++--++--+-+-+---+
> 2 rows in set (0.04 sec)
> Attached Management Server logs and cloud database dump

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3803) Unable to complete Add zone wizard.

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3803:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> Unable to complete Add zone wizard.
> ---
>
> Key: CLOUDSTACK-3803
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3803
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.2.0
> Environment: Build: CloudPlatform-4.2-263-rhel6.3
> Management Server is running on CentOS 6.3.
>Reporter: Pradeep S
> Fix For: 4.2.1
>
> Attachments: add-zone1.swf
>
>
> Steps to Reproduce:
> ---
> 1. Login to Management Server UI. http://:8080/client
> 2. Navigate to Infrastructure->Zones->Add Zone wizard.
> 3. Continue with Basic Zone type.
> 4. Enter the details for zone, pod, cluster, host, primary storage.
> 5. Before entering the details of secondary storage, click next, so that it 
> shows the mandatory fields as required.
> 6. After that, enter the details in all the fields of secondary storage 
> configuration and click next.
> Observation:
> 
> On the secondary storage screen, if we first click Next and only then enter 
> all the required values, the fields marked with " * " are still shown as 
> required and the wizard cannot proceed to the next step to complete adding a 
> new zone.
> In that case, we have to cancel the add zone wizard and start from the 
> beginning by entering all the details of zone, pod, cluster, host, primary 
> and secondary storage once again.
> Expected behavior:
> -
> Even if we initially miss some mandatory secondary storage fields and enter 
> those details only after the system flags them as required, the wizard should 
> be able to proceed and complete adding the zone.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3788) [KVM] Weekly Snapshot got stuck in "Allocated State"

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3788:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> [KVM] Weekly Snapshot got stuck in "Allocated State"
> 
>
> Key: CLOUDSTACK-3788
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3788
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Snapshot
>Affects Versions: 4.2.0
>Reporter: Chandan Purushothama
>Assignee: Fang Wang
> Fix For: 4.2.1
>
> Attachments: management-server.log.2013-07-23.gz, 
> mysql_cloudstack_dump.zip
>
>
> Weekly Snapshot stuck in Allocated State:
> mysql> select * from snapshots where name like 
> "Atoms-VM-1_ROOT-6_20130723235146";
> *** 1. row ***
> id: 24
> data_center_id: 1
> account_id: 3
> domain_id: 1
> volume_id: 6
> disk_offering_id: 1
> status: Destroyed
> path: NULL
> name: Atoms-VM-1_ROOT-6_20130723235146
> uuid: 08a0d2aa-9635-41cd-ba54-5367303bceac
> snapshot_type: 3
> type_description: HOURLY
> size: 147456
> created: 2013-07-23 23:51:46
> removed: NULL
> backup_snap_id: NULL
> swift_id: NULL
> sechost_id: NULL
> prev_snap_id: NULL
> hypervisor_type: KVM
> version: 2.2
> s3_id: NULL
> *** 2. row ***
> id: 25
> data_center_id: 1
> account_id: 3
> domain_id: 1
> volume_id: 6
> disk_offering_id: 1
> status: Allocated
> path: NULL
> name: Atoms-VM-1_ROOT-6_20130723235146
> uuid: 1e24a056-be38-4b55-845b-a5672b9fa93c
> snapshot_type: 5
> type_description: WEEKLY
> size: 147456
> created: 2013-07-23 23:51:46
> removed: NULL
> backup_snap_id: NULL
> swift_id: NULL
> sechost_id: NULL
> prev_snap_id: NULL
> hypervisor_type: KVM
> version: 2.2
> s3_id: NULL
> 2 rows in set (0.04 sec)
> Attached Management Server logs and cloud database dump

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3808) Attach volume should work when root is at zone level primary store and data at cluster level or host level store

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3808:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> Attach volume should work when root is at zone level primary store and data 
> at cluster level or host level store
> 
>
> Key: CLOUDSTACK-3808
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3808
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Affects Versions: 4.2.0
>Reporter: Koushik Das
> Fix For: 4.2.1
>
>
> There are new scenarios that need to be handled with the introduction of the 
> zone-wide primary store. 
> - root (zone scope) + data (cluster scope)
> - root (zone scope) + data (host scope)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3823) UI: NTier: Adding Network ACL Rule by specifying Protocol Number doesnt allow the User to Specify the ICMP Type and Code

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3823:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> UI: NTier: Adding Network ACL Rule by specifying Protocol Number doesnt allow 
> the User to Specify the ICMP Type and Code
> 
>
> Key: CLOUDSTACK-3823
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3823
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.2.0
>Reporter: Chandan Purushothama
>Assignee: Brian Federle
> Fix For: 4.2.1
>
> Attachments: UICapture28.png
>
>
> Corresponding UI Screenshot is attached to the bug report.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3808) Attach volume should work when root is at zone level primary store and data at cluster level or host level store

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3808:
---

Assignee: Devdeep Singh

> Attach volume should work when root is at zone level primary store and data 
> at cluster level or host level store
> 
>
> Key: CLOUDSTACK-3808
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3808
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Affects Versions: 4.2.0
>Reporter: Koushik Das
>Assignee: Devdeep Singh
> Fix For: 4.2.1
>
>
> There are new scenarios that need to be handled with the introduction of the 
> zone-wide primary store. 
> - root (zone scope) + data (cluster scope)
> - root (zone scope) + data (host scope)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3823) UI: NTier: Adding Network ACL Rule by specifying Protocol Number doesnt allow the User to Specify the ICMP Type and Code

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3823:
---

Assignee: Brian Federle

> UI: NTier: Adding Network ACL Rule by specifying Protocol Number doesnt allow 
> the User to Specify the ICMP Type and Code
> 
>
> Key: CLOUDSTACK-3823
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3823
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.2.0
>Reporter: Chandan Purushothama
>Assignee: Brian Federle
> Fix For: 4.2.0
>
> Attachments: UICapture28.png
>
>
> Corresponding UI Screenshot is attached to the bug report.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3813) "Service.provider.create" event doesnt mention about the Service Provider in the event's description

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3813:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> "Service.provider.create" event doesnt mention about the Service Provider in 
> the event's description
> 
>
> Key: CLOUDSTACK-3813
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3813
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
>Reporter: Chandan Purushothama
> Fix For: 4.2.1
>
>
> mysql> select 
> id,type,state,description,user_id,account_id,domain_id,created,level,start_id,parameters,archived
>  from event where type like "SERVICE.PROVIDER.CREATE";
> | id | type                    | state   | description                                                                | user_id | account_id | domain_id | created             | level | start_id | parameters | archived |
> |  8 | SERVICE.PROVIDER.CREATE | Created | Successfully created entity for Creating Physical Network ServiceProvider |       2 |          1 |         1 | 2013-07-24 20:48:39 | INFO  |        0 | NULL       |        0 |
> |  9 | SERVICE.PROVIDER.CREATE | Created | Successfully created entity for Creating Physical Network ServiceProvider |       2 |          1 |         1 | 2013-07-24 20:48:39 | INFO  |        0 | NULL       |        0 |
> | 10 | SERVICE.PROVIDER.CREATE | Created | Successfully created entity for Creating Physical Network ServiceProvider |       2 |          1 |         1 | 2013-07-24 20:48:39 | INFO  |        0 | NULL       |        0 |
> | 11 | SERVICE.PROVIDER.CREATE | Created | Successfully created entity for Creating Physical Network ServiceProvider |       2 |          1 |         1 | 2013-07-24 20:48:39 | INFO  |        0 | NULL       |        0 |
> 4 rows in set (0.04 sec)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3808) Attach volume should work when root is at zone level primary store and data at cluster level or host level store

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3808:
---

Priority: Critical  (was: Major)

> Attach volume should work when root is at zone level primary store and data 
> at cluster level or host level store
> 
>
> Key: CLOUDSTACK-3808
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3808
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Affects Versions: 4.2.0
>Reporter: Koushik Das
>Assignee: Devdeep Singh
>Priority: Critical
> Fix For: 4.2.1
>
>
> There are new scenarios that need to be handled with the introduction of the 
> zone-wide primary store. 
> - root (zone scope) + data (cluster scope)
> - root (zone scope) + data (host scope)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3873) No error notification is generated when Primary storage (Cluster level) is added with wrong path

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3873:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> No error notification is generated when Primary storage (Cluster level) is 
> added with wrong path 
> -
>
> Key: CLOUDSTACK-3873
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3873
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.2.0
>Reporter: Sailaja Mada
> Fix For: 4.2.1
>
> Attachments: apilog.log, management-server.log
>
>
> Steps:
> 1. Configure an Adv zone with a VMWARE host.
> 2. Try to add cluster-level primary storage with a wrong path.
> 3. It is not added, but there is no notification about the failure.
> Only from the MS logs can the admin figure out that the add failed:
> 2013-07-27 12:15:04,954 DEBUG [agent.transport.Request] 
> (StatsCollector-3:null) Seq 1-1364136443: Received:  { Ans: , MgmtId: 
> 55638679939377, via: 1, Ver: v1, Flags: 10, { GetVmStatsAnswer } }
> 2013-07-27 12:15:06,193 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentManager-Handler-13:null) SeqA 2-10614: Processing Seq 2-10614:  { Cmd , 
> MgmtId: -1, via: 2, Ver: v1, Flags: 11, 
> [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":2,"_loadInfo":"{\n
>   \"connections\": []\n}","wait":0}}] }
> 2013-07-27 12:15:06,294 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentManager-Handler-13:null) SeqA 2-10614: Sending Seq 2-10614:  { Ans: , 
> MgmtId: 55638679939377, via: 2, Ver: v1, Flags: 100010, 
> [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
> 2013-07-27 12:15:06,831 INFO  [vmware.mo.HostMO] 
> (DirectAgent-341:10.102.192.19) Creation of NFS datastore on vCenter failed.  
> Details: vCenter API trace - mountDatastore(). target MOR: host-10, vmfs: 
> false, poolHost: 10.102.192.100, poolHostPort: 2049, poolPath: 
> /cpg_vol/sailaja/wrongps1, poolUuid: acae88b9f9213bb1bba37360ba60038b. 
> Exception mesg: An error occurred during host configuration.
> 2013-07-27 12:15:06,855 ERROR [vmware.resource.VmwareResource] 
> (DirectAgent-341:10.102.192.19) ModifyStoragePoolCommand failed due to 
> Exception: java.lang.Exception
> Message: Creation of NFS datastore on vCenter failed.
> java.lang.Exception: Creation of NFS datastore on vCenter failed.
> at 
> com.cloud.hypervisor.vmware.mo.HostMO.mountDatastore(HostMO.java:772)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.execute(VmwareResource.java:4104)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:472)
> at 
> com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.java:186)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:679)
> 2013-07-27 12:15:06,857 DEBUG [agent.manager.DirectAgentAttache] 
> (DirectAgent-341:null) Seq 1-1364136442: Response Received:
> 2013-07-27 12:15:06,857 DEBUG [agent.transport.Request] 
> (DirectAgent-341:null) Seq 1-1364136442: Processing:  { Ans: , MgmtId: 
> 55638679939377, via: 1, Ver: v1, Flags: 10, 
> [{"com.cloud.agent.api.Answer":{"result":false,"details":"ModifyStoragePoolCommand
>  failed due to Exception: java.lang.Exception\nMessage: Creation of NFS 
> datastore on vCenter failed.\n","wait":0}}] }
> 2013-07-27 12:15:06,858 DEBUG [agent.transport.Request] 
> (catalina-exec-22:null) Seq 1-1364136442: Received:  { Ans: , MgmtId: 
> 55638679939377, via: 1, Ver: v1, Flags: 10, { Answer } }
> 2013-07-27 12:15:06,858 DEBUG [agent.manager.AgentManagerImpl] 
> (catalina-exec-22:null) Details from executing class 
> com.cloud.agent.api.ModifyStoragePoolCommand: ModifyStoragePoolCommand failed 
> due to Exception: java.lang.Exception
> Message: Creation of NFS datastore on vCenter failed.
> 2013-07-27 12:15:06,858 WARN  [apache.cloudstack.alerts] 
> (catalina-exec-22:null)  alertType:: 7 // dataCenterId:: 1 // podId:: 1 // 
> clusterId:: null 
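Until the UI surfaces this failure, the quickest manual check is to confirm the NFS export before adding the pool. A minimal sketch using the pool host and path from the log above; showmount is standard NFS tooling and is not something the report itself suggests.

    # Verify the export actually exists on the NFS server before adding it as primary storage.
    showmount -e 10.102.192.100 | grep /cpg_vol/sailaja/wrongps1 \
      || echo "path not exported - adding it as primary storage will fail"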

[jira] [Updated] (CLOUDSTACK-3873) No error notification is generated when Primary storage (Cluster level) is added with wrong path

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3873:
---

Assignee: Venkata Siva Vijayendra Bhamidipati

> No error notification is generated when Primary storage (Cluster level) is 
> added with wrong path 
> -
>
> Key: CLOUDSTACK-3873
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3873
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.2.0
>Reporter: Sailaja Mada
>Assignee: Venkata Siva Vijayendra Bhamidipati
> Fix For: 4.2.1
>
> Attachments: apilog.log, management-server.log
>
>
> Steps:
> 1. Configure an Adv zone with a VMWARE host.
> 2. Try to add cluster-level primary storage with a wrong path.
> 3. It is not added, but there is no notification about the failure.
> Only from the MS logs can the admin figure out that the add failed:
> 2013-07-27 12:15:04,954 DEBUG [agent.transport.Request] 
> (StatsCollector-3:null) Seq 1-1364136443: Received:  { Ans: , MgmtId: 
> 55638679939377, via: 1, Ver: v1, Flags: 10, { GetVmStatsAnswer } }
> 2013-07-27 12:15:06,193 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentManager-Handler-13:null) SeqA 2-10614: Processing Seq 2-10614:  { Cmd , 
> MgmtId: -1, via: 2, Ver: v1, Flags: 11, 
> [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":2,"_loadInfo":"{\n
>   \"connections\": []\n}","wait":0}}] }
> 2013-07-27 12:15:06,294 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentManager-Handler-13:null) SeqA 2-10614: Sending Seq 2-10614:  { Ans: , 
> MgmtId: 55638679939377, via: 2, Ver: v1, Flags: 100010, 
> [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
> 2013-07-27 12:15:06,831 INFO  [vmware.mo.HostMO] 
> (DirectAgent-341:10.102.192.19) Creation of NFS datastore on vCenter failed.  
> Details: vCenter API trace - mountDatastore(). target MOR: host-10, vmfs: 
> false, poolHost: 10.102.192.100, poolHostPort: 2049, poolPath: 
> /cpg_vol/sailaja/wrongps1, poolUuid: acae88b9f9213bb1bba37360ba60038b. 
> Exception mesg: An error occurred during host configuration.
> 2013-07-27 12:15:06,855 ERROR [vmware.resource.VmwareResource] 
> (DirectAgent-341:10.102.192.19) ModifyStoragePoolCommand failed due to 
> Exception: java.lang.Exception
> Message: Creation of NFS datastore on vCenter failed.
> java.lang.Exception: Creation of NFS datastore on vCenter failed.
> at 
> com.cloud.hypervisor.vmware.mo.HostMO.mountDatastore(HostMO.java:772)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.execute(VmwareResource.java:4104)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:472)
> at 
> com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.java:186)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:679)
> 2013-07-27 12:15:06,857 DEBUG [agent.manager.DirectAgentAttache] 
> (DirectAgent-341:null) Seq 1-1364136442: Response Received:
> 2013-07-27 12:15:06,857 DEBUG [agent.transport.Request] 
> (DirectAgent-341:null) Seq 1-1364136442: Processing:  { Ans: , MgmtId: 
> 55638679939377, via: 1, Ver: v1, Flags: 10, 
> [{"com.cloud.agent.api.Answer":{"result":false,"details":"ModifyStoragePoolCommand
>  failed due to Exception: java.lang.Exception\nMessage: Creation of NFS 
> datastore on vCenter failed.\n","wait":0}}] }
> 2013-07-27 12:15:06,858 DEBUG [agent.transport.Request] 
> (catalina-exec-22:null) Seq 1-1364136442: Received:  { Ans: , MgmtId: 
> 55638679939377, via: 1, Ver: v1, Flags: 10, { Answer } }
> 2013-07-27 12:15:06,858 DEBUG [agent.manager.AgentManagerImpl] 
> (catalina-exec-22:null) Details from executing class 
> com.cloud.agent.api.ModifyStoragePoolCommand: ModifyStoragePoolCommand failed 
> due to Exception: java.lang.Exception
> Message: Creation of NFS datastore on vCenter failed.
> 2013-07-27 12:15:06,858 WARN  [apache.cloudstack.alerts] 
> (catalina-exec-22:null)  alertType:: 7 // dataC

[jira] [Updated] (CLOUDSTACK-3880) /sbin/poweroff et al or ACPID initiated shutdown does not stop cloudstack-[usage|management]

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3880:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> /sbin/poweroff et al or ACPID initiated shutdown does not stop 
> cloudstack-[usage|management]
> 
>
> Key: CLOUDSTACK-3880
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3880
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Usage
>Affects Versions: 4.1.1
> Environment: manager running on RHEL64 (variant, OEL64)
>Reporter: Ove Ewerlid
>  Labels: patch
> Fix For: 4.2.1
>
>
> cloudstack-management has a custom stop() that cannot find the PID file when 
> the script is invoked via S|K runlevel links. Interactive use works.
> cloudstack-usage start() does not create the /var/run/subsys lockfile, so the 
> stop() function is never invoked. Interactive use works.
> cloudstack-usage lacks runlevel # settings, so the default of 50 is selected, 
> which causes stop() to run after mysqld has already been shut down.
> The effect of this can be database corruption.
> Here is a possible fix:
> [root@slc00xfr rc.d]# diff -c -r init.d.ref init.d
> diff -c -r init.d.ref/cloudstack-management init.d/cloudstack-management
> *** init.d.ref/cloudstack-management2013-07-26 17:33:05.0 +0100
> --- init.d/cloudstack-management2013-07-27 18:36:43.0 +0100
> ***
> *** 44,49 
> --- 44,52 
>   NAME="$(basename $0)"
>   stop() {
> + if [ "${NAME:0:1}" = "S" -o "${NAME:0:1}" = "K" ]; then
> +   NAME="${NAME:3}"
> + fi
> SHUTDOWN_WAIT="30"
> count="0"
> if [ -f /var/run/${NAME}.pid ]; then
> diff -c -r init.d.ref/cloudstack-usage init.d/cloudstack-usage
> *** init.d.ref/cloudstack-usage 2013-07-26 17:33:05.0 +0100
> --- init.d/cloudstack-usage 2013-07-27 18:36:53.0 +0100
> ***
> *** 2,7 
> --- 2,8 
>   ### BEGIN INIT INFO
>   # Provides:  cloud usage
> + # chkconfig: - 79 21
>   # Required-Start:$network $local_fs
>   # Required-Stop: $network $local_fs
>   # Default-Start: 3 4 5
> ***
> *** 96,101 
> --- 97,103 
>   if [ $rc -eq 0 ]; then
>   success
> + touch $LOCKFILE
>   else
>   failure
>   rm -f "$PIDFILE"
> [root@slc00xfr rc.d]#
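For readers skimming the patch, the cloudstack-management hunk boils down to normalizing $0 when the script is entered through an rc.d symlink. A minimal standalone sketch of that idea, with placeholder output; it is not the shipped init script.

    #!/bin/bash
    # When invoked via /etc/rc3.d/S99cloudstack-management, basename yields
    # "S99cloudstack-management", so /var/run/${NAME}.pid is never found.
    NAME="$(basename "$0")"
    if [ "${NAME:0:1}" = "S" -o "${NAME:0:1}" = "K" ]; then
        NAME="${NAME:3}"   # strip the S##/K## runlevel prefix
    fi
    echo "stop() would look for /var/run/${NAME}.pid"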

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3888) UI should also return the mode(Strict/Preferred) when listing the ServiceOffering that uses ImplicitDedicationPlanner

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3888:
---

Assignee: Jessica Wang

> UI should also return the mode(Strict/Preferred) when listing the 
> ServiceOffering that uses ImplicitDedicationPlanner 
> --
>
> Key: CLOUDSTACK-3888
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3888
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.2.0
>Reporter: Prachi Damle
>Assignee: Jessica Wang
> Fix For: 4.2.1
>
>
> This is the UI portion for the issue 
> https://issues.apache.org/jira/browse/CLOUDSTACK-3343

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3888) UI should also return the mode(Strict/Preferred) when listing the ServiceOffering that uses ImplicitDedicationPlanner

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3888:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> UI should also return the mode(Strict/Preferred) when listing the 
> ServiceOffering that uses ImplicitDedicationPlanner 
> --
>
> Key: CLOUDSTACK-3888
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3888
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.2.0
>Reporter: Prachi Damle
> Fix For: 4.2.1
>
>
> This is the UI portion for the issue 
> https://issues.apache.org/jira/browse/CLOUDSTACK-3343

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3911) [PortableIP] there is no check while adding a zone public range to see whether the same VLAN exists in portable IP range.

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3911:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> [PortableIP] there is no check while adding a zone  public range to see 
> whether the same VLAN exists in portable IP range.
> --
>
> Key: CLOUDSTACK-3911
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3911
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
> Environment: commit # ca474d0e09f772cb22abf2802a308a2da5351592
>Reporter: venkata swamybabu budumuru
>Assignee: Murali Reddy
> Fix For: 4.2.1
>
>
> Steps to reproduce:
> 1. Have at least one portable IP range added
> for ex: VLAN 54, start ip : 10.147.54.100, end ip : 10.147.54.150, netmask : 
> 255.255.255.0
> 2. Create an advanced zone. During the creation of the advanced zone, specify 
> the above VLAN with either the same or a different IP range.
> Observations:
> (i) The zone gets created successfully without any issues.
> (ii) Ideally it should throw an error saying that the same VLAN already 
> exists in a portable IP range.
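Until that validation exists, a manual pre-check is possible through the API. A minimal sketch, assuming the unauthenticated integration port (8096) is open as in the sample API calls elsewhere in this digest and that the listPortableIpRanges call introduced with portable IPs is available; the management-server address is a placeholder and the grep only illustrates the check the server should perform itself.

    # Before adding a zone public range on a VLAN, see whether any portable IP range
    # already uses it (manual pre-check sketch).
    MS=10.147.59.194        # placeholder management server address
    curl -s "http://${MS}:8096/api?command=listPortableIpRanges" | grep -i "vlan"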

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3911) [PortableIP] there is no check while adding a zone public range to see whether the same VLAN exists in portable IP range.

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3911:
---

Priority: Critical  (was: Major)

> [PortableIP] there is no check while adding a zone  public range to see 
> whether the same VLAN exists in portable IP range.
> --
>
> Key: CLOUDSTACK-3911
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3911
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
> Environment: commit # ca474d0e09f772cb22abf2802a308a2da5351592
>Reporter: venkata swamybabu budumuru
>Assignee: Murali Reddy
>Priority: Critical
> Fix For: 4.2.1
>
>
> Steps to reproduce:
> 1. Have at least one portable IP range added
> for ex: VLAN 54, start ip : 10.147.54.100, end ip : 10.147.54.150, netmask : 
> 255.255.255.0
> 2. Create an advanced zone. During the creation of the advanced zone, specify 
> the above VLAN with either the same or a different IP range.
> Observations:
> (i) The zone gets created successfully without any issues.
> (ii) Ideally it should throw an error saying that the same VLAN already 
> exists in a portable IP range.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3888) UI should also return the mode(Strict/Preferred) when listing the ServiceOffering that uses ImplicitDedicationPlanner

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3888:
---

Priority: Critical  (was: Major)

> UI should also return the mode(Strict/Preferred) when listing the 
> ServiceOffering that uses ImplicitDedicationPlanner 
> --
>
> Key: CLOUDSTACK-3888
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3888
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.2.0
>Reporter: Prachi Damle
>Assignee: Jessica Wang
>Priority: Critical
> Fix For: 4.2.1
>
>
> This is the UI portion for the issue 
> https://issues.apache.org/jira/browse/CLOUDSTACK-3343

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3968) [VMWAREDVS] Distributed Portgroups are not deleted when guest networks are removed/User Account of this network is removed from cloudstack

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3968:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> [VMWAREDVS] Distributed Portgroups are not deleted when guest networks are 
> removed/User Account of this network is removed from cloudstack
> --
>
> Key: CLOUDSTACK-3968
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3968
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller
>Affects Versions: 4.2.0
>Reporter: Sailaja Mada
> Fix For: 4.2.1
>
> Attachments: dvswitchsnap.png
>
>
> Setup: VMWARE with DVSwitch 
> 1. Configure Adv Zone with DVSwitch enabled VMWARE cluster
> 2. Create an account and deploy a VM using the default-offering guest network.
> 3. Delete this account. With this, all the resources of this account get 
> removed from cloudstack.
> Observation:
> But on the vCenter dvSwitch, the distributed port groups configured for this 
> account's guest networks are not removed.
> Expected results:
> All the configurations created by cloudstack should get cleaned up as part of 
> the account removal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3968) [VMWAREDVS] Distributed Portgroups are not deleted when guest networks are removed/User Account of this network is removed from cloudstack

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3968:
---

Priority: Critical  (was: Major)

> [VMWAREDVS] Distributed Portgroups are not deleted when guest networks are 
> removed/User Account of this network is removed from cloudstack
> --
>
> Key: CLOUDSTACK-3968
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3968
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller
>Affects Versions: 4.2.0
>Reporter: Sailaja Mada
>Assignee: Sateesh Chodapuneedi
>Priority: Critical
> Fix For: 4.2.1
>
> Attachments: dvswitchsnap.png
>
>
> Setup: VMWARE with DVSwitch 
> 1. Configure Adv Zone with DVSwitch enabled VMWARE cluster
> 2. Create an account and deploy a VM using the default-offering guest network.
> 3. Delete this account. With this, all the resources of this account get 
> removed from cloudstack.
> Observation:
> But on the vCenter dvSwitch, the distributed port groups configured for this 
> account's guest networks are not removed.
> Expected results:
> All the configurations created by cloudstack should get cleaned up as part of 
> the account removal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3968) [VMWAREDVS] Distributed Portgroups are not deleted when guest networks are removed/User Account of this network is removed from cloudstack

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3968:
---

Assignee: Sateesh Chodapuneedi

> [VMWAREDVS] Distributed Portgroups are not deleted when guest networks are 
> removed/User Account of this network is removed from cloudstack
> --
>
> Key: CLOUDSTACK-3968
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3968
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller
>Affects Versions: 4.2.0
>Reporter: Sailaja Mada
>Assignee: Sateesh Chodapuneedi
> Fix For: 4.2.1
>
> Attachments: dvswitchsnap.png
>
>
> Setup: VMWARE with DVSwitch 
> 1. Configure Adv Zone with DVSwitch enabled VMWARE cluster
> 2. Create an account and deploy a VM using the default-offering guest network.
> 3. Delete this account. With this, all the resources of this account get 
> removed from cloudstack.
> Observation:
> But on the vCenter dvSwitch, the distributed port groups configured for this 
> account's guest networks are not removed.
> Expected results:
> All the configurations created by cloudstack should get cleaned up as part of 
> the account removal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3973) [GSLB] [LOGS Message] Improving logs messages for GSLB rule configuration

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3973:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> [GSLB] [LOGS Message] Improving logs messages for GSLB rule configuration
> -
>
> Key: CLOUDSTACK-3973
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3973
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller
>Affects Versions: 4.2.0
> Environment: commit # 6275d697e340be5b520f37b15d72343fa47c00a9
>Reporter: venkata swamybabu budumuru
>Assignee: Murali Reddy
> Fix For: 4.2.1
>
> Attachments: logs.tgz
>
>
> Steps to reproduce:
> 1. Have the latest cloudstack with at least 2 advanced zones, where each zone 
> is configured with a GSLB provider.
> 2. Create LB rule in each zone using VR (LB rule 1 in zone1 and LB rule 2 in 
> zone2)
> 3. As a non-ROOT domain user, create a GSLB rule and map LBrule1.
> Observations:
> (i) This will create GSLB site, vserver, service in zone1 netscaler
> 4. Now map LBRule2 to the above GSLB rule.
> (ii) This will create GSLB site2 on the netscaler devices in both zones. 
> During this process, it ends up configuring site1, the vserver, and the 
> service on the Zone1 NS device again and logs messages like the ones below. 
> This is a little confusing and gives the impression that something went 
> wrong; it would be good to make these messages a little more informative.
> 2013-07-31 14:28:23,869 DEBUG [agent.manager.DirectAgentAttache] 
> (DirectAgent-93:null) Seq 9-712310787: Executing request
> 2013-07-31 14:28:23,973 DEBUG [network.resource.NetscalerResource] 
> (DirectAgent-93:null) Failed to add GSLB virtual server: 
> cloud-gslb-vserver-GSLB1.xyztelco.com due to Resource already exists
> 2013-07-31 14:28:24,085 WARN  [network.resource.NetscalerResource] 
> (DirectAgent-93:null) Retrying GlobalLoadBalancerConfigCommand. Number of 
> retries remaining: 1
> 2013-07-31 14:28:24,158 DEBUG [network.resource.NetscalerResource] 
> (DirectAgent-93:null) Successfully added GSLB virtual server: 
> cloud-gslb-vserver-GSLB1.xyztelco.com
> 2013-07-31 14:28:24,258 DEBUG [network.resource.NetscalerResource] 
> (DirectAgent-93:null) Successfully updated GSLB site: cloudsite1
> 2013-07-31 14:28:24,356 DEBUG [network.resource.NetscalerResource] 
> (DirectAgent-93:null) Successfully created service: 
> cloud-gslb-service-cloudsite1-10.147.44.64-22 at site: cloudsite1
> 2013-07-31 14:28:24,474 WARN  [network.resource.NetscalerResource] 
> (DirectAgent-93:null) Failed to bind monitor to GSLB service due to The 
> monitor is already bound to the service
> 2013-07-31 14:28:24,562 DEBUG [network.resource.NetscalerResource] 
> (DirectAgent-93:null) Successfully created GSLB site: cloudsite2
> 2013-07-31 14:28:24,692 DEBUG [network.resource.NetscalerResource] 
> (DirectAgent-93:null) Successfully created service: 
> cloud-gslb-service-cloudsite2-10.147.54.63-22 at site: cloudsite2
> 2013-07-31 14:28:24,730 DEBUG [network.resource.NetscalerResource] 
> (DirectAgent-93:null) Successfully created service: 
> cloud-gslb-service-cloudsite2-10.147.54.63-22 and virtual server: 
> cloud-gslb-vserver-GSLB1.xyztelco.com binding

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3980) [GSLB][UI] Add UI support for updateGlobalLoadBalancerRule

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3980:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> [GSLB][UI] Add UI support for updateGlobalLoadBalancerRule
> --
>
> Key: CLOUDSTACK-3980
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3980
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.2.0
> Environment: commit # 6275d697e340be5b520f37b15d72343fa47c00a9
>Reporter: venkata swamybabu budumuru
> Fix For: 4.2.1
>
>
> Steps to reproduce:
> 1. Have latest CloudStack setup with at least 2 advanced zones. Make sure 
> each zone is configured and enabled with GSLB device.
> A new API, "updateGlobalLoadBalancerRule", has been introduced for GSLB; 
> hence, for existing GSLB rules, we need to provide an edit option for 
> updating some of the parameters.
> Here is the sample API :
> http://10.147.59.194:8096/api?command=updateGlobalLoadBalancerRule&id=0626f066-5a79-4021-b9bc-1fd21845a856&description=hello&gslblbmethod=ROUNDROBIN&gslbstickysessionmethodname=SOURCEIP&sessionkey=c6uplMagL0jdTFU7pSvRsKiOsCI=

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3980) [GSLB][UI] Add UI support for updateGlobalLoadBalancerRule

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-3980:
---

Assignee: Brian Federle

> [GSLB][UI] Add UI support for updateGlobalLoadBalancerRule
> --
>
> Key: CLOUDSTACK-3980
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3980
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.2.0
> Environment: commit # 6275d697e340be5b520f37b15d72343fa47c00a9
>Reporter: venkata swamybabu budumuru
>Assignee: Brian Federle
> Fix For: 4.2.1
>
>
> Steps to reproduce:
> 1. Have latest CloudStack setup with at least 2 advanced zones. Make sure 
> each zone is configured and enabled with GSLB device.
> A new API, "updateGlobalLoadBalancerRule", has been introduced for GSLB; 
> hence, for existing GSLB rules, we need to provide an edit option for 
> updating some of the parameters.
> Here is the sample API :
> http://10.147.59.194:8096/api?command=updateGlobalLoadBalancerRule&id=0626f066-5a79-4021-b9bc-1fd21845a856&description=hello&gslblbmethod=ROUNDROBIN&gslbstickysessionmethodname=SOURCEIP&sessionkey=c6uplMagL0jdTFU7pSvRsKiOsCI=

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4058) Under host details we show ACS version and not host version

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-4058:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> Under host details we show ACS version and not host version
> ---
>
> Key: CLOUDSTACK-4058
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4058
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, UI, VMware
>Affects Versions: 4.2.0
>Reporter: Ahmad Emneina
>Assignee: Brian Federle
> Fix For: 4.2.1
>
>
> On the host details pane, the version field shows the CloudStack version and 
> not the ESXi version.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4036) UI remains in procesing state and queryAsyncJobResult geting called repeatedly for scaleSystemVm

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-4036:
---

Priority: Critical  (was: Major)

> UI remains in procesing state and queryAsyncJobResult geting called 
> repeatedly for scaleSystemVm
> 
>
> Key: CLOUDSTACK-4036
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4036
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0
>Reporter: prashant kumar mishra
>Assignee: Nitin Mehta
>Priority: Critical
> Fix For: 4.2.1
>
> Attachments: Logs_DB.rar, screenshot-1.jpg
>
>
> Steps to reproduce
> ==
> 1 - Prepare a CS setup with ESXi 5.1
> 2 - Create a service offering for the SSVM
> 3 - Try to scale up the SSVM
> Expected
> --
> The UI should not stay in the processing state forever 
> Actual
> -
> UI remains in processing state and queryAsyncJobResult is getting called 
> multiple times 
> API
> ===
> http://10.147.38.253:8080/client/api?command=scaleSystemVm&id=f367a159-9a44-4e52-b0ca-3f42927b94bb&serviceofferingid=c879cb15-9891-49e6-8c62-3adedbff9f81&response=json&sessionkey=W7FeVMt%2FzpGgr2yIMdyMSmmkuaY%3D&_=1375431841965
> http://10.147.38.253:8080/client/api?command=queryAsyncJobResult&jobId=93104d35-1437-4112-afc2-e0904ed913e8&response=json&sessionkey=W7FeVMt%2FzpGgr2yIMdyMSmmkuaY%3D&_=1375431845734
> { "queryasyncjobresultresponse" : 
> {"accountid":"c1428ce0-fb63-11e2-8004-0618b087","userid":"c1435a12-fb63-11e2-8004-0618b087","cmd":"org.apache.cloudstack.api.command.admin.systemvm.ScaleSystemVMCmd","jobstatus":1,"jobprocstatus":0,"jobresultcode":0,"jobresulttype":"object","jobresult":{"systemvm":{"id":"f367a159-9a44-4e52-b0ca-3f42927b94bb","systemvmtype":"consoleproxy","zoneid":"8ec4457d-0189-4786-8388-aa53df790ae8","zonename":"zn2","dns1":"10.103.128.16","gateway":"10.147.55.1","name":"v-2-VM","podid":"18ee774e-efa2-44d0-a689-f056d11c0d7a","linklocalmacaddress":"02:00:6d:e0:00:02","publicip":"10.147.55.41","publicmacaddress":"06:42:0a:00:00:0c","publicnetmask":"255.255.255.0","templateid":"876e972a-fb63-11e2-8004-0618b087","created":"2013-08-02T17:07:06+0530","state":"Stopped","activeviewersessions":0}},"created":"2013-08-02T19:18:37+0530","jobid":"93104d35-1437-4112-afc2-e0904ed913e8"}
>  }
> --
> http://10.147.38.253:8080/client/api?command=queryAsyncJobResult&jobId=93104d35-1437-4112-afc2-e0904ed913e8&response=json&sessionkey=W7FeVMt%2FzpGgr2yIMdyMSmmkuaY%3D&_=1375431849164
> { "queryasyncjobresultresponse" : 
> {"accountid":"c1428ce0-fb63-11e2-8004-0618b087","userid":"c1435a12-fb63-11e2-8004-0618b087","cmd":"org.apache.cloudstack.api.command.admin.systemvm.ScaleSystemVMCmd","jobstatus":1,"jobprocstatus":0,"jobresultcode":0,"jobresulttype":"object","jobresult":{"systemvm":{"id":"f367a159-9a44-4e52-b0ca-3f42927b94bb","systemvmtype":"consoleproxy","zoneid":"8ec4457d-0189-4786-8388-aa53df790ae8","zonename":"zn2","dns1":"10.103.128.16","gateway":"10.147.55.1","name":"v-2-VM","podid":"18ee774e-efa2-44d0-a689-f056d11c0d7a","linklocalmacaddress":"02:00:6d:e0:00:02","publicip":"10.147.55.41","publicmacaddress":"06:42:0a:00:00:0c","publicnetmask":"255.255.255.0","templateid":"876e972a-fb63-11e2-8004-0618b087","created":"2013-08-02T17:07:06+0530","state":"Stopped","activeviewersessions":0}},"created":"2013-08-02T19:18:37+0530","jobid":"93104d35-1437-4112-afc2-e0904ed913e8"}
>  }
> my obs
> ==
> The status of the job does not change even though the operation succeeded: 
> ->jobprocstatus":0,"jobresultcode":0
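The expected client behaviour is easy to sketch: poll queryAsyncJobResult until jobstatus becomes non-zero and then stop, which is what the UI fails to do here. Below is a minimal sketch using the job id and server from the report; it goes through the unauthenticated integration port (8096) for brevity, whereas the report's own calls use port 8080 with a session key, so the port and the JSON field extraction are assumptions.

    # Poll an async job until it leaves the in-progress state (sketch only).
    MS=10.147.38.253                             # management server from the report
    JOB=93104d35-1437-4112-afc2-e0904ed913e8     # job id from the report
    while :; do
        STATUS=$(curl -s "http://${MS}:8096/api?command=queryAsyncJobResult&jobId=${JOB}&response=json" \
                 | sed -n 's/.*"jobstatus":\([0-9]*\).*/\1/p')
        [ "${STATUS:-0}" -ne 0 ] && break        # 0 = still running; 1 = succeeded, 2 = failed
        sleep 3
    done
    echo "job ${JOB} finished with jobstatus=${STATUS}"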

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4036) UI remains in procesing state and queryAsyncJobResult geting called repeatedly for scaleSystemVm

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-4036:
---

Assignee: Nitin Mehta

> UI remains in procesing state and queryAsyncJobResult geting called 
> repeatedly for scaleSystemVm
> 
>
> Key: CLOUDSTACK-4036
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4036
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0
>Reporter: prashant kumar mishra
>Assignee: Nitin Mehta
> Fix For: 4.2.0
>
> Attachments: Logs_DB.rar, screenshot-1.jpg
>
>
> Steps to reproduce
> ==
> 1 - Prepare a CS setup with ESXi 5.1
> 2 - Create a service offering for the SSVM
> 3 - Try to scale up the SSVM
> Expected
> --
> The UI should not stay in the processing state forever 
> Actual
> -
> UI remains in processing state and queryAsyncJobResult is getting called 
> multiple times 
> API
> ===
> http://10.147.38.253:8080/client/api?command=scaleSystemVm&id=f367a159-9a44-4e52-b0ca-3f42927b94bb&serviceofferingid=c879cb15-9891-49e6-8c62-3adedbff9f81&response=json&sessionkey=W7FeVMt%2FzpGgr2yIMdyMSmmkuaY%3D&_=1375431841965
> http://10.147.38.253:8080/client/api?command=queryAsyncJobResult&jobId=93104d35-1437-4112-afc2-e0904ed913e8&response=json&sessionkey=W7FeVMt%2FzpGgr2yIMdyMSmmkuaY%3D&_=1375431845734
> { "queryasyncjobresultresponse" : 
> {"accountid":"c1428ce0-fb63-11e2-8004-0618b087","userid":"c1435a12-fb63-11e2-8004-0618b087","cmd":"org.apache.cloudstack.api.command.admin.systemvm.ScaleSystemVMCmd","jobstatus":1,"jobprocstatus":0,"jobresultcode":0,"jobresulttype":"object","jobresult":{"systemvm":{"id":"f367a159-9a44-4e52-b0ca-3f42927b94bb","systemvmtype":"consoleproxy","zoneid":"8ec4457d-0189-4786-8388-aa53df790ae8","zonename":"zn2","dns1":"10.103.128.16","gateway":"10.147.55.1","name":"v-2-VM","podid":"18ee774e-efa2-44d0-a689-f056d11c0d7a","linklocalmacaddress":"02:00:6d:e0:00:02","publicip":"10.147.55.41","publicmacaddress":"06:42:0a:00:00:0c","publicnetmask":"255.255.255.0","templateid":"876e972a-fb63-11e2-8004-0618b087","created":"2013-08-02T17:07:06+0530","state":"Stopped","activeviewersessions":0}},"created":"2013-08-02T19:18:37+0530","jobid":"93104d35-1437-4112-afc2-e0904ed913e8"}
>  }
> --
> http://10.147.38.253:8080/client/api?command=queryAsyncJobResult&jobId=93104d35-1437-4112-afc2-e0904ed913e8&response=json&sessionkey=W7FeVMt%2FzpGgr2yIMdyMSmmkuaY%3D&_=1375431849164
> { "queryasyncjobresultresponse" : 
> {"accountid":"c1428ce0-fb63-11e2-8004-0618b087","userid":"c1435a12-fb63-11e2-8004-0618b087","cmd":"org.apache.cloudstack.api.command.admin.systemvm.ScaleSystemVMCmd","jobstatus":1,"jobprocstatus":0,"jobresultcode":0,"jobresulttype":"object","jobresult":{"systemvm":{"id":"f367a159-9a44-4e52-b0ca-3f42927b94bb","systemvmtype":"consoleproxy","zoneid":"8ec4457d-0189-4786-8388-aa53df790ae8","zonename":"zn2","dns1":"10.103.128.16","gateway":"10.147.55.1","name":"v-2-VM","podid":"18ee774e-efa2-44d0-a689-f056d11c0d7a","linklocalmacaddress":"02:00:6d:e0:00:02","publicip":"10.147.55.41","publicmacaddress":"06:42:0a:00:00:0c","publicnetmask":"255.255.255.0","templateid":"876e972a-fb63-11e2-8004-0618b087","created":"2013-08-02T17:07:06+0530","state":"Stopped","activeviewersessions":0}},"created":"2013-08-02T19:18:37+0530","jobid":"93104d35-1437-4112-afc2-e0904ed913e8"}
>  }
> my obs
> ==
> The status of the job does not change even though the operation succeeded: 
> ->jobprocstatus":0,"jobresultcode":0

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4036) UI remains in procesing state and queryAsyncJobResult geting called repeatedly for scaleSystemVm

2013-09-05 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-4036:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

> UI remains in procesing state and queryAsyncJobResult geting called 
> repeatedly for scaleSystemVm
> 
>
> Key: CLOUDSTACK-4036
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4036
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0
>Reporter: prashant kumar mishra
>Assignee: Nitin Mehta
> Fix For: 4.2.1
>
> Attachments: Logs_DB.rar, screenshot-1.jpg
>
>
> Steps to reproduce
> ==
> 1 - Prepare a CS setup with ESXi 5.1
> 2 - Create a service offering for the SSVM
> 3 - Try to scale up the SSVM
> Expected
> --
> The UI should not stay in the processing state forever 
> Actual
> -
> UI remains in processing state and queryAsyncJobResult is getting called 
> multiple times 
> API
> ===
> http://10.147.38.253:8080/client/api?command=scaleSystemVm&id=f367a159-9a44-4e52-b0ca-3f42927b94bb&serviceofferingid=c879cb15-9891-49e6-8c62-3adedbff9f81&response=json&sessionkey=W7FeVMt%2FzpGgr2yIMdyMSmmkuaY%3D&_=1375431841965
> http://10.147.38.253:8080/client/api?command=queryAsyncJobResult&jobId=93104d35-1437-4112-afc2-e0904ed913e8&response=json&sessionkey=W7FeVMt%2FzpGgr2yIMdyMSmmkuaY%3D&_=1375431845734
> { "queryasyncjobresultresponse" : 
> {"accountid":"c1428ce0-fb63-11e2-8004-0618b087","userid":"c1435a12-fb63-11e2-8004-0618b087","cmd":"org.apache.cloudstack.api.command.admin.systemvm.ScaleSystemVMCmd","jobstatus":1,"jobprocstatus":0,"jobresultcode":0,"jobresulttype":"object","jobresult":{"systemvm":{"id":"f367a159-9a44-4e52-b0ca-3f42927b94bb","systemvmtype":"consoleproxy","zoneid":"8ec4457d-0189-4786-8388-aa53df790ae8","zonename":"zn2","dns1":"10.103.128.16","gateway":"10.147.55.1","name":"v-2-VM","podid":"18ee774e-efa2-44d0-a689-f056d11c0d7a","linklocalmacaddress":"02:00:6d:e0:00:02","publicip":"10.147.55.41","publicmacaddress":"06:42:0a:00:00:0c","publicnetmask":"255.255.255.0","templateid":"876e972a-fb63-11e2-8004-0618b087","created":"2013-08-02T17:07:06+0530","state":"Stopped","activeviewersessions":0}},"created":"2013-08-02T19:18:37+0530","jobid":"93104d35-1437-4112-afc2-e0904ed913e8"}
>  }
> --
> http://10.147.38.253:8080/client/api?command=queryAsyncJobResult&jobId=93104d35-1437-4112-afc2-e0904ed913e8&response=json&sessionkey=W7FeVMt%2FzpGgr2yIMdyMSmmkuaY%3D&_=1375431849164
> { "queryasyncjobresultresponse" : 
> {"accountid":"c1428ce0-fb63-11e2-8004-0618b087","userid":"c1435a12-fb63-11e2-8004-0618b087","cmd":"org.apache.cloudstack.api.command.admin.systemvm.ScaleSystemVMCmd","jobstatus":1,"jobprocstatus":0,"jobresultcode":0,"jobresulttype":"object","jobresult":{"systemvm":{"id":"f367a159-9a44-4e52-b0ca-3f42927b94bb","systemvmtype":"consoleproxy","zoneid":"8ec4457d-0189-4786-8388-aa53df790ae8","zonename":"zn2","dns1":"10.103.128.16","gateway":"10.147.55.1","name":"v-2-VM","podid":"18ee774e-efa2-44d0-a689-f056d11c0d7a","linklocalmacaddress":"02:00:6d:e0:00:02","publicip":"10.147.55.41","publicmacaddress":"06:42:0a:00:00:0c","publicnetmask":"255.255.255.0","templateid":"876e972a-fb63-11e2-8004-0618b087","created":"2013-08-02T17:07:06+0530","state":"Stopped","activeviewersessions":0}},"created":"2013-08-02T19:18:37+0530","jobid":"93104d35-1437-4112-afc2-e0904ed913e8"}
>  }
> my obs
> ==
> The status of the job does not change even though the operation succeeded: 
> ->jobprocstatus":0,"jobresultcode":0

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

