This looks different, but I'll take a peek nonetheless.

On Jan 6, 2014 1:10 PM, "Rayees Namathponnan (JIRA)" <j...@apache.org> wrote:
> [ https://issues.apache.org/jira/browse/CLOUDSTACK-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
>
> Rayees Namathponnan reopened CLOUDSTACK-5432:
> ---------------------------------------------
>
> I am still hitting this issue; please see the agent log. I am also
> attaching the libvirtd and agent logs.
>
> 2014-01-06 02:59:18,953 DEBUG [cloud.agent.Agent] (agentRequest-Handler-4:null) Request:Seq 2-812254431: { Cmd , MgmtId: 29066118877352, via: 2, Ver: v1, Flags: 100011, [{"org.apache.cloudstack.storage.command.DeleteCommand":{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"98229b00-ad9e-4b90-a911-78a73f90548a","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"41b632b5-40b3-3024-a38b-ea259c72579f","id":2,"poolType":"NetworkFilesystem","host":"10.223.110.232","path":"/export/home/rayees/SC_QA_AUTO4/primary2","port":2049,"url":"NetworkFilesystem://10.223.110.232//export/home/rayees/SC_QA_AUTO4/primary2/?ROLE=Primary&STOREUUID=41b632b5-40b3-3024-a38b-ea259c72579f"}},"name":"ROOT-266","size":8589934592,"path":"98229b00-ad9e-4b90-a911-78a73f90548a","volumeId":280,"vmName":"i-212-266-QA","accountId":212,"format":"QCOW2","id":280,"deviceId":0,"hypervisorType":"KVM"}},"wait":0}}] }
> 2014-01-06 02:59:18,953 DEBUG [cloud.agent.Agent] (agentRequest-Handler-4:null) Processing command: org.apache.cloudstack.storage.command.DeleteCommand
> 2014-01-06 02:59:25,054 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) Request:Seq 2-812254432: { Cmd , MgmtId: 29066118877352, via: 2, Ver: v1, Flags: 100111, [{"com.cloud.agent.api.storage.DestroyCommand":{"volume":{"id":126,"mountPoint":"/export/home/rayees/SC_QA_AUTO4/primary","path":"7c5859c4-792b-4594-81d7-1e149e8a6aef","size":0,"storagePoolType":"NetworkFilesystem","storagePoolUuid":"fff90cb5-06dd-33b3-8815-d78c08ca01d9","deviceId":0},"wait":0}}] }
> 2014-01-06 02:59:25,054 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) Processing command: com.cloud.agent.api.storage.DestroyCommand
> 2014-01-06 03:03:05,781 DEBUG [utils.nio.NioConnection] (Agent-Selector:null) Location 1: Socket Socket[addr=/10.223.49.195,port=8250,localport=44856] closed on read. Probably -1 returned: Connection closed with -1 on reading size.
> 2014-01-06 03:03:05,781 DEBUG [utils.nio.NioConnection] (Agent-Selector:null) Closing socket Socket[addr=/10.223.49.195,port=8250,localport=44856]
> 2014-01-06 03:03:05,781 DEBUG [cloud.agent.Agent] (Agent-Handler-5:null) Clearing watch list: 2
> 2014-01-06 03:03:10,782 INFO [cloud.agent.Agent] (Agent-Handler-5:null) Lost connection to the server. Dealing with the remaining commands...
> 2014-01-06 03:03:10,782 INFO [cloud.agent.Agent] (Agent-Handler-5:null) Cannot connect because we still have 5 commands in progress.
> 2014-01-06 03:03:15,782 INFO [cloud.agent.Agent] (Agent-Handler-5:null) Lost connection to the server. Dealing with the remaining commands...
> 2014-01-06 03:03:15,783 INFO [cloud.agent.Agent] (Agent-Handler-5:null) Cannot connect because we still have 5 commands in progress.
> 2014-01-06 03:03:20,783 INFO [cloud.agent.Agent] (Agent-Handler-5:null) Lost connection to the server. Dealing with the remaining commands...
>
> > [Automation] libvirtd crashing and agent going into Alert state
> > ---------------------------------------------------------------
> >
> > Key: CLOUDSTACK-5432
> > URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5432
> > Project: CloudStack
> > Issue Type: Bug
> > Security Level: Public (Anyone can view this level - this is the default.)
> > Components: KVM
> > Affects Versions: 4.3.0
> > Environment: KVM (RHEL 6.3)
> > Branch: 4.3
> > Reporter: Rayees Namathponnan
> > Assignee: Marcus Sorensen
> > Priority: Blocker
> > Fix For: 4.3.0
> >
> > Attachments: KVM_Automation_Dec_11.rar, agent1.rar, agent2.rar, management-server.rar
> >
> > This issue was observed in the 4.3 automation environment; libvirtd crashed and the CloudStack agent went into Alert state.
> > Please see the agent log; the connection between the agent and the management server was lost with the error "Connection closed with -1 on reading size." at 2013-12-09 19:47:06,969.
> >
> > 2013-12-09 19:43:41,495 DEBUG [cloud.agent.Agent] (agentRequest-Handler-2:null) Processing command: com.cloud.agent.api.GetStorageStatsCommand
> > 2013-12-09 19:47:06,969 DEBUG [utils.nio.NioConnection] (Agent-Selector:null) Location 1: Socket Socket[addr=/10.223.49.195,port=8250,localport=40801] closed on read. Probably -1 returned: Connection closed with -1 on reading size.
> > 2013-12-09 19:47:06,969 DEBUG [utils.nio.NioConnection] (Agent-Selector:null) Closing socket Socket[addr=/10.223.49.195,port=8250,localport=40801]
> > 2013-12-09 19:47:06,969 DEBUG [cloud.agent.Agent] (Agent-Handler-3:null) Clearing watch list: 2
> > 2013-12-09 19:47:11,969 INFO [cloud.agent.Agent] (Agent-Handler-3:null) Lost connection to the server. Dealing with the remaining commands...
> > 2013-12-09 19:47:11,970 INFO [cloud.agent.Agent] (Agent-Handler-3:null) Cannot connect because we still have 5 commands in progress.
> > 2013-12-09 19:47:16,970 INFO [cloud.agent.Agent] (Agent-Handler-3:null) Lost connection to the server. Dealing with the remaining commands...
> > 2013-12-09 19:47:16,990 INFO [cloud.agent.Agent] (Agent-Handler-3:null) Cannot connect because we still have 5 commands in progress.
> > 2013-12-09 19:47:21,990 INFO [cloud.agent.Agent] (Agent-Handler-3:null) Lost connection to the server. Dealing with the remaining commands...
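The alternating "Lost connection" / "Cannot connect because we still have 5 commands in progress" lines show the agent declining to reconnect while commands are still in flight. A quick way to gauge how long the agent stayed wedged is to count those blocked attempts in the agent log. A minimal sketch; the here-doc stands in for the real agent log (the path /var/log/cloudstack/agent/agent.log is an assumption about the install layout, and the sample lines are copied from the excerpt above):

```shell
# Build a small sample log from the lines quoted in this report; on a real
# host you would point grep at the actual agent log instead.
cat > /tmp/agent-sample.log <<'EOF'
2013-12-09 19:47:11,969 INFO  [cloud.agent.Agent] (Agent-Handler-3:null) Lost connection to the server. Dealing with the remaining commands...
2013-12-09 19:47:11,970 INFO  [cloud.agent.Agent] (Agent-Handler-3:null) Cannot connect because we still have 5 commands in progress.
2013-12-09 19:47:16,970 INFO  [cloud.agent.Agent] (Agent-Handler-3:null) Lost connection to the server. Dealing with the remaining commands...
2013-12-09 19:47:16,990 INFO  [cloud.agent.Agent] (Agent-Handler-3:null) Cannot connect because we still have 5 commands in progress.
EOF

# Count reconnect attempts that were blocked by in-flight commands.
grep -c 'Cannot connect because we still have' /tmp/agent-sample.log
# prints: 2
```

Each blocked attempt is logged roughly every 5 seconds, so the count times five approximates how many seconds the agent sat in this state.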
> > Please see the libvirtd log at the same time (see the attached complete log; note there is a 5-hour offset between the agent log and the libvirtd log):
> >
> > 2013-12-10 02:45:45.563+0000: 5938: error : qemuMonitorIO:574 : internal error End of file from monitor
> > 2013-12-10 02:45:47.663+0000: 5942: error : virCommandWait:2308 : internal error Child process (/bin/umount /mnt/41b632b5-40b3-3024-a38b-ea259c72579f) status unexpected: exit status 16
> > 2013-12-10 02:45:53.925+0000: 5943: error : virCommandWait:2308 : internal error Child process (/sbin/tc qdisc del dev vnet14 root) status unexpected: exit status 2
> > 2013-12-10 02:45:53.929+0000: 5943: error : virCommandWait:2308 : internal error Child process (/sbin/tc qdisc del dev vnet14 ingress) status unexpected: exit status 2
> > 2013-12-10 02:45:54.011+0000: 5943: warning : qemuDomainObjTaint:1297 : Domain id=71 name='i-45-97-QA' uuid=7717ba08-be84-4b63-a674-1534f9dc7bef is tainted: high-privileges
> > 2013-12-10 02:46:33.070+0000: 5940: error : virCommandWait:2308 : internal error Child process (/sbin/tc qdisc del dev vnet12 root) status unexpected: exit status 2
> > 2013-12-10 02:46:33.081+0000: 5940: error : virCommandWait:2308 : internal error Child process (/sbin/tc qdisc del dev vnet12 ingress) status unexpected: exit status 2
> > 2013-12-10 02:46:33.197+0000: 5940: warning : qemuDomainObjTaint:1297 : Domain id=72 name='i-47-111-QA' uuid=7fcce58a-96dc-4207-9998-b8fb72b446ac is tainted: high-privileges
> > 2013-12-10 02:46:36.394+0000: 5938: error : qemuMonitorIO:574 : internal error End of file from monitor
> > 2013-12-10 02:46:37.685+0000: 5940: error : virCommandWait:2308 : internal error Child process (/bin/umount /mnt/41b632b5-40b3-3024-a38b-ea259c72579f) status unexpected: exit status 16
> > 2013-12-10 02:46:57.869+0000: 5940: error : virCommandWait:2308 : internal error Child process (/sbin/tc qdisc del dev vnet15 root) status unexpected: exit status 2
> > 2013-12-10 02:46:57.873+0000: 5940: error : virCommandWait:2308 : internal error Child process (/sbin/tc qdisc del dev vnet15 ingress) status unexpected: exit status 2
> > 2013-12-10 02:46:57.925+0000: 5940: error : virCommandWait:2308 : internal error Child process (/sbin/tc qdisc del dev vnet17 root) status unexpected: exit status 2
> > 2013-12-10 02:46:57.933+0000: 5940: error : virCommandWait:2308 : internal error Child process (/sbin/tc qdisc del dev vnet17 ingress) status unexpected: exit status 2
> > 2013-12-10 02:46:58.034+0000: 5940: warning : qemuDomainObjTaint:1297 : Domain id=73 name='r-114-QA' uuid=8ded6f1b-69e7-419d-8396-5795372d0ae2 is tainted: high-privileges
> > 2013-12-10 02:47:22.762+0000: 5938: error : qemuMonitorIO:574 : internal error End of file from monitor
> > 2013-12-10 02:47:23.273+0000: 5939: error : virCommandWait:2308 : internal error Child process (/bin/umount /mnt/41b632b5-40b3-3024-a38b-ea259c72579f) status unexpected: exit status 16
> >
> > The virsh command does not return anything and hangs:
> >
> > [root@Rack2Host11 libvirt]# virsh list
> >
> > Workaround:
> > If I restart libvirtd, the agent can reconnect to the management server.
>
> --
> This message was sent by Atlassian JIRA
> (v6.1.5#6160)
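The libvirtd log shows `/bin/umount /mnt/41b632b5-40b3-3024-a38b-ea259c72579f` repeatedly exiting nonzero, i.e. libvirt kept failing to unmount the NFS primary-storage pool. When an unmount keeps failing, `fuser -vm` (from psmisc) lists the processes holding the mount point open. A minimal diagnostic sketch, demonstrated on `/` only because the pool path from the log exists solely on the affected host:

```shell
# Show which processes are using a mount point; substitute the primary
# storage path from the log when running on the affected KVM host:
#   fuser -vm /mnt/41b632b5-40b3-3024-a38b-ea259c72579f
# '/' is used here only as an always-present mount point for illustration.
fuser -vm / 2>&1 | head -n 20
```

Any lingering qemu process or NFS client state shown here would explain why libvirt's unmount attempts kept failing.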
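The workaround above can be wrapped in a short recovery script. A sketch assuming RHEL 6 SysV init and the stock service names `libvirtd` and `cloudstack-agent` (both assumptions; adjust to your packaging):

```shell
# Hypothetical recovery sketch for the affected RHEL 6.3 host.
service libvirtd restart

# virsh was hanging before the restart; bound the check so a still-broken
# libvirtd cannot wedge this script as well.
timeout 30 virsh list --all

# Restart the agent so it drops its stuck in-progress commands and
# reconnects to the management server.
service cloudstack-agent restart
```

This only recovers the host; it does not address why libvirtd crashed in the first place.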