[ https://issues.apache.org/jira/browse/CLOUDSTACK-4617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Devdeep Singh resolved CLOUDSTACK-4617.
---------------------------------------
Resolution: Fixed
The fix is complete and no additional changes are required for this issue. The
original bug was an exception while creating the link local network, which
caused the host to go into the Alert state. That has been fixed and has not
been seen since. With this change, no UI change is needed in the wizard.
Additionally, when primary storage is added, it is added to a cluster/pool and
not to a single host, so a UI check that adds primary storage only after all
hosts are Up isn't needed. If there are any failures, the API will report them.
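For reference, here is a minimal sketch of how a caller can rely on the createStoragePool API response to surface such failures instead of a UI-side pre-check. The management server address, session key and the errorcode/errortext field names are assumptions based on the request captured in this report, not something introduced by the fix.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch only: issue createStoragePool against the cluster and let the API
// response report any failure. Host, session key and error field names below
// are placeholders/assumptions, not part of the actual fix.
public class CreateStoragePoolCheck {
    public static void main(String[] args) throws Exception {
        String url = "http://mgmt-server:8080/client/api"
                + "?command=createStoragePool"
                + "&zoneid=b34c3dc3-27c3-4f9a-9173-c74b04a4acab"
                + "&podId=fde823c6-f763-480e-a5bf-c96f4e6e43fa"
                + "&clusterid=3658d76d-fa9a-41c1-95a9-0c1688c2707c"
                + "&name=ps1&scope=cluster"
                + "&url=nfs%3A%2F%2F10.223.110.232%2Fexport%2Fhome%2Fsangeetha%2F307%2Fzone2-primary"
                + "&response=json&sessionkey=PLACEHOLDER";

        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // On failure the JSON body is expected to carry errorcode/errortext
        // (assumed field names); the caller can act on that directly instead
        // of pre-checking host states in the UI.
        if (resp.statusCode() != 200 || resp.body().contains("errortext")) {
            System.err.println("createStoragePool failed: " + resp.body());
        } else {
            System.out.println("createStoragePool succeeded: " + resp.body());
        }
    }
}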
Sangeetha reported that when a second host was added to the cluster/pool, it
went from the Alert state to Connecting and then to Up. No management server
logs were shared, so I cannot tell what the actual issue was; even if there is
an issue, it looks like a different one. I'll close this bug and we can open a
new one for it (with management server logs).
> [UI] Xenserver 6.1/ 6.2 - Add host succeeds but gets into Alert state due to
> "Unable to create local link network". It remains in "Alert" state for a
> while and then gets to "UP" state.
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: CLOUDSTACK-4617
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4617
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public(Anyone can view this level - this is the
> default.)
> Components: Management Server
> Affects Versions: 4.2.0, 4.2.1
> Environment: Build from 4.2-forward
> Reporter: Sangeetha Hariharan
> Assignee: Devdeep Singh
> Priority: Critical
> Fix For: 4.2.1
>
> Attachments: hostdown.rar
>
>
> Xenserver 6.1 - Add host succeeds but gets into Alert state. It remains in
> "Alert" state for a while and then gets to "UP" state.
> Steps to reproduce the problem:
> In my case I already had 1 zone.
> Create another advanced zone with 1 Xenserver host.
> As part of the zone creation wizard, provide all the values for zone creation,
> including primary and secondary storage details.
> Primary storage creation fails with the error - "Failed to delete storage pool on host".
> The following exception is seen in the management server logs:
> 2013-09-05 11:51:25,894 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) ===START=== 10.215.3.9 -- GET command=createStoragePool&zoneid=b34c3dc3-27c3-4f9a-9173-c74b04a4acab&podId=fde823c6-f763-480e-a5bf-c96f4e6e43fa&clusterid=3658d76d-fa9a-41c1-95a9-0c1688c2707c&name=ps1&scope=cluster&url=nfs%3A%2F%2F10.223.110.232%2Fexport%2Fhome%2Fsangeetha%2F307%2Fzone2-primary&response=json&sessionkey=e4LjGKV4k%2FmbX%2BczFre8RM8zhXI%3D&_=1378408027682
> 2013-09-05 11:51:25,975 DEBUG [datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl] (catalina-exec-20:null) createPool Params @ scheme - nfs storageHost - 10.223.110.232 hostPath - /export/home/sangeetha/307/zone2-primary port - -1
> 2013-09-05 11:51:26,125 DEBUG [cloud.storage.StorageManagerImpl] (catalina-exec-20:null) Failed to add data store
> com.cloud.utils.exception.CloudRuntimeException: No host up to associate a storage pool with in cluster 2
> at org.apache.cloudstack.storage.datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl.attachCluster(CloudStackPrimaryDataStoreLifeCycleImpl.java:371)
> at com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:749)
> at com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:177)
> at org.apache.cloudstack.api.command.admin.storage.CreateStoragePoolCmd.execute(CreateStoragePoolCmd.java:168)
> at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:158)
> at com.cloud.api.ApiServer.queueCommand(ApiServer.java:514)
> at com.cloud.api.ApiServer.handleRequest(ApiServer.java:372)
> at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:305)
> at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
> at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
> at org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
> at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:679)
> 2013-09-05 11:51:26,173 INFO [cloud.api.ApiServer] (catalina-exec-20:null) Failed to delete storage pool on host
> The reason for this failure was that the host was in the "Alert" state when the
> primary storage was being added.
> The following exception is seen in the management server log:
> 2013-09-05 11:51:25,038 DEBUG [xen.resource.CitrixResourceBase] (DirectAgent-100:null) Lowest available Vif device number: 0 for VM: Control domain on host: Rack3Host3.lab.vmops.com
> 2013-09-05 11:51:25,462 WARN [xen.resource.CitrixResourceBase] (DirectAgent-100:null) Unable to create local link network
> The server failed to handle your request, due to an internal error. The given message may give details useful for debugging the problem.
> at com.xensource.xenapi.Types.checkResponse(Types.java:1694)
> at com.xensource.xenapi.Connection.dispatch(Connection.java:368)
> at com.cloud.hypervisor.xen.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:909)
> at com.xensource.xenapi.VIF.plug(VIF.java:846)
> at com.cloud.hypervisor.xen.resource.CitrixResourceBase.setupLinkLocalNetwork(CitrixResourceBase.java:4871)
> at com.cloud.hypervisor.xen.resource.CitrixResourceBase.execute(CitrixResourceBase.java:3391)
> at com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:489)
> at com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:73)
> at com.cloud.hypervisor.xen.resource.XenServer610Resource.executeRequest(XenServer610Resource.java:104)
> at com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.java:186)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:679)
> 2013-09-05 11:51:25,463 WARN [agent.manager.DirectAgentAttache] (DirectAgent-100:null) Seq 6-975634439: Exception Caught while executing command
> com.cloud.utils.exception.CloudRuntimeException: Unable to create local link network due to The server failed to handle your request, due to an internal error. The given message may give details useful for debugging the problem.
> at com.cloud.hypervisor.xen.resource.CitrixResourceBase.setupLinkLocalNetwork(CitrixResourceBase.java:4885)
> at com.cloud.hypervisor.xen.resource.CitrixResourceBase.execute(CitrixResourceBase.java:3391)
> at com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:489)
> at com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:73)
> The addHost command in this case does not error out. Instead, it returns the host
> details with "state" indicating that the host is in the "Alert" state, as shown below.
> 2013-09-05 16:57:03,090 INFO [cloud.api.ApiServer] (catalina-exec-15:null) (userId=2 accountId=2 sessionId=72F8EDD7B6A93CF324AD1991DC69D405) 10.215.3.9 -- POST command=addHost&response=json&sessionkey=cZmPlLscwjfca5vffTZMd7gFYwA%3D 200
> { "addhostresponse" : { "count":1 ,"host" : [
> {"id":"2f3a8c1b-cffd-48dd-8c9e-bd2c564784de","name":"Rack3Host4.lab.vmops.com","state":"Alert","type":"Routing",
> "ipaddress":"10.223.57.4","zoneid":"b34c3dc3-27c3-4f9a-9173-c74b04a4acab","zonename":"zone2",
> "podid":"fde823c6-f763-480e-a5bf-c96f4e6e43fa","podname":"pod1","version":"4.2.0","hypervisor":"XenServer",
> "cpunumber":4,"cpuspeed":2261,"cpuallocated":"0%","cpuwithoverprovisioning":"9044.0",
> "memorytotal":16189931520,"memoryallocated":0,
> "capabilities":"xen-3.0-x86_64 , xen-3.0-x86_32p , hvm-3.0-x86_32 , hvm-3.0-x86_32p , hvm-3.0-x86_64",
> "lastpinged":"1969-12-31T16:00:00-0800","managementserverid":6612941014073,
> "clusterid":"3658d76d-fa9a-41c1-95a9-0c1688c2707c","clustername":"cluster2","clustertype":"CloudManaged",
> "islocalstorageactive":false,"created":"2013-09-05T16:57:03-0700",
> "events":"ManagementServerDown; Remove; ShutdownRequested; AgentConnected; AgentDisconnected; Ping",
> "resourcestate":"Enabled","hypervisorversion":"6.1.0","hahost":false} ] } }
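> For illustration only, a minimal sketch of how a caller of addHost could detect this condition from the response above rather than assuming success. The abbreviated response body and the naive substring check are stand-ins; a real client would parse the JSON properly.
>
> // Sketch only: addHost returns HTTP 200 even when the host lands in Alert,
> // so the "state" field inside addhostresponse has to be inspected.
> public class AddHostStateCheck {
>     public static void main(String[] args) {
>         // Abbreviated copy of the addhostresponse captured above.
>         String body = "{ \"addhostresponse\" : { \"count\":1 ,\"host\" : ["
>                 + "{\"id\":\"2f3a8c1b-cffd-48dd-8c9e-bd2c564784de\","
>                 + "\"name\":\"Rack3Host4.lab.vmops.com\","
>                 + "\"state\":\"Alert\",\"type\":\"Routing\"} ] } }";
>
>         // Naive check: flag any host entry whose state is Alert.
>         if (body.contains("\"state\":\"Alert\"")) {
>             System.out.println("Host added but in Alert state");
>         } else {
>             System.out.println("Host added and reported Up");
>         }
>     }
> }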
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira