Hello:
    Open source is great, but with no vendor support when something breaks, it is a dilemma.
    I bought a CloudStack book; the concepts are explained clearly, but when the hands-on steps fail I do not know how to troubleshoot.
    I must have missed something somewhere. The logs contain no useful hints either. I suspect the ESXi network configuration or iptables is blocking traffic, but checking as best I can, I found nothing wrong.
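One quick way to rule iptables in or out would be to probe the ports CloudStack needs from each host. This is only a sketch: 192.168.1.10 is the csm management/NFS address used earlier in this thread, and the port list assumes CloudStack defaults.

```shell
# Probe the ports a default CloudStack setup needs (sketch; the target
# 192.168.1.10 is the csm address from this thread, adjust as needed).
MGMT=${MGMT:-192.168.1.10}
PORTS="8250 8080 111 2049"   # agent channel, UI, rpcbind, NFS
open=""; blocked=""
for p in $PORTS; do
    # -z: connect only, send nothing; -w 2: two-second timeout
    if nc -z -w 2 "$MGMT" "$p" 2>/dev/null; then
        open="$open $p"
    else
        blocked="$blocked $p"
    fi
done
echo "open:$open"
echo "blocked:$blocked"   # anything here points at iptables or the vSwitch
```

If 8250 shows up as blocked, the SSVM can never report in, which would match the symptoms in this thread.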

----------------
Name: 王啸雨
Tel: 15910794340
Company: 北京中彩在线
Email: wangxia...@clo.com.cn



roarain...@126.com
 
From: jsliug...@lvmama.com
Date: 2016-09-29 10:00
To: users-cn
Subject: Re: Re: Secondary storage not readable; ssvm-check.sh script not found in the SSVM
Hello!
I tested this early in the year, and the problem was indeed solved at the time, but I really cannot remember how; you will just have to test a few more times and see. A few days ago my manager told me we are to build a disaster-recovery data center in Beijing using CloudStack, which I think is far too risky.
 
 
 
jsliug...@lvmama.com
From: roarain...@126.com
Date: 2016-09-29 09:54
To: users-cn
Subject: Re: Re: Secondary storage not readable; ssvm-check.sh script not found in the SSVM
Hello:
    Right, every error is in this secondary storage area; it is exasperating.
    I have been wanting to go to production, but it is hard to do that while the tests keep failing.
    CloudStack has too many pitfalls.
    I am preparing CS 4.8 and will run through it again today.
roarain...@126.com
From: jsliug...@lvmama.com
Date: 2016-09-29 09:47
To: users-cn
Subject: Re: Re: Secondary storage not readable; ssvm-check.sh script not found in the SSVM
Hello!
I also ran many rounds of testing back then, and what I remember most is exactly this secondary storage issue: as long as the main page can read it and it does not drop offline, the SSVM stays in the running state, and everything after that can proceed.
jsliug...@lvmama.com
From: roarain...@126.com
Date: 2016-09-29 09:43
To: users-cn
Subject: Re: Re: Secondary storage not readable; ssvm-check.sh script not found in the SSVM
Hello:
    Not finished. This is already my second test with CS 4.6, and it failed again.
    There may be a blind spot in how I set it up; I fell into the same hole both times.
    Does the ssvm-check.sh script inside the SSVM depend on secondary storage? Why can it not be found?
    I am at a loss.
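For what it is worth, ssvm-check.sh normally lives at /usr/local/cloud/systemvm/ssvm-check.sh and, as far as I know, is delivered by the systemvm.iso patching step rather than by secondary storage, so if patching fails the script is simply absent. Its usual tests can be approximated by hand; a rough sketch (the IPs are assumptions taken from this thread's setup):

```shell
# Manual stand-ins for the checks ssvm-check.sh usually performs
# (sketch; meant to be run inside the SSVM; 192.168.1.10 is assumed to
# be both the management server and the NFS host in this setup).
MGMT=${MGMT:-192.168.1.10}
NFS_HOST=${NFS_HOST:-192.168.1.10}

# 1. Is the NFS secondary storage mounted? (count of matching mounts)
nfs_mounts=$(mount 2>/dev/null | grep -c "$NFS_HOST")
# 2. Can we reach the management server's agent port (8250)?
if nc -z -w 2 "$MGMT" 8250 2>/dev/null; then mgmt_ok=yes; else mgmt_ok=no; fi
# 3. Does DNS resolve at all?
if nslookup apache.org >/dev/null 2>&1; then dns_ok=yes; else dns_ok=no; fi

echo "nfs mounts from $NFS_HOST: $nfs_mounts"
echo "management port 8250 reachable: $mgmt_ok"
echo "dns resolution: $dns_ok"
```

If port 8250 is unreachable from inside the SSVM, that alone would explain both the missing script and the stuck state.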
roarain...@126.com
From: jsliug...@lvmama.com
Date: 2016-09-29 09:15
To: users-cn
Subject: Re: Re: Secondary storage not readable; ssvm-check.sh script not found in the SSVM
Hello!
      Is it finished now? Have you determined what caused the secondary storage to go offline? I previously tested on a Dell R720.
jsliug...@lvmama.com
From: roarain...@126.com
Date: 2016-09-29 08:49
To: users-cn
Subject: Re: Re: Secondary storage not readable; ssvm-check.sh script not found in the SSVM
Hello:
    I plan to switch versions from 4.6 to 4.8, with rhel6.5 (csm) + rhel7.2 (cs1).
    Previously, under VMware Workstation on my laptop, CS 4.8 could upload an ISO, but because of resource limits the newly created instances kept rebooting.
    I will go back to 4.8 and try again.
Many thanks!
roarain...@126.com
From: jsliug...@lvmama.com
Date: 2016-09-28 18:14
To: users-cn
Subject: Re: Re: Secondary storage not readable; ssvm-check.sh script not found in the SSVM
That probably will not work. What I did before was wipe the database, uninstall the packages, and start over completely from scratch.
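The "wipe and start over" step can be collected into a reviewable script. This sketch only prints the commands rather than running them; 'cloud' and 'cloud_usage' are CloudStack's standard database names, and the cleanup paths are typical for an RPM install and may differ on yours.

```shell
# Print (not run) a full management-server teardown so it can be reviewed
# first. DB names 'cloud'/'cloud_usage' are CloudStack's standard ones;
# the paths below are assumptions for a typical RPM install.
teardown=$(cat <<'EOF'
service cloudstack-management stop
yum remove -y 'cloudstack-*'
mysql -u root -p -e "DROP DATABASE IF EXISTS cloud; DROP DATABASE IF EXISTS cloud_usage;"
rm -rf /usr/share/cloudstack-management /var/log/cloudstack
EOF
)
echo "$teardown"
```

Running the printed commands is destructive, so a database dump beforehand is cheap insurance.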
jsliug...@lvmama.com
From: roarain...@126.com
Date: 2016-09-28 17:44
To: users-cn
Subject: Re: Re: Secondary storage not readable; ssvm-check.sh script not found in the SSVM
After rebuilding the SSVM, redefining the secondary storage, and rebuilding the advanced zone, the page still cannot read the secondary storage, and the ssvm-check.sh script is not present in the SSVM.
The ESXi network is shown below; please help check whether anything is wrong with it:
http://d2.freep.cn/3tb_1609281743017c1t574784.png
Thanks!
roarain...@126.com
From: jsliug...@lvmama.com
Date: 2016-09-28 15:50
To: users-cn
Subject: Re: Secondary storage not readable; ssvm-check.sh script not found in the SSVM
Then it is most likely a secondary storage problem. In my case the NFS mount also checked out fine inside the system, but the main page could not read it.
jsliug...@lvmama.com
From: roarain...@126.com
Date: 2016-09-28 15:45
To: users-cn
Subject: Secondary storage not readable; ssvm-check.sh script not found in the SSVM
The secondary storage cannot be read, and the ssvm-check.sh script is not found when entering the SSVM.
The OS on csm and cs1 is Red Hat 6.5 x64.
According to the documentation
https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSVM,+templates,+Secondary+storage+troubleshooting
https://cwiki.apache.org/confluence/display/CLOUDSTACK/SystemVm.iso#SystemVm.iso-KVM
this should be caused by a version mismatch between the SSVM and the agent.
Checking the agent version:
[root@cs1 ~]# rpm -qa | grep cloud
cloudstack-agent-4.6.0-1.el6.x86_64
cloudstack-common-4.6.0-1.el6.x86_64
The command used on csm to seed the system VM template was:
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m /export/secondary -f /var/www/html/repo/cs46rhel65/systemvm64template-4.6.0-kvm.qcow2.bz2 -h kvm -s -F
The MD5 checksum checks out:
[root@csm ~]#  md5sum 
/var/www/html/repo/cs46rhel65/systemvm64template-4.6.0-kvm.qcow2.bz2
c059b0d051e0cd6fbe9d5d4fc40c7e5d  
/var/www/html/repo/cs46rhel65/systemvm64template-4.6.0-kvm.qcow2.bz2
It matches the value in the documentation:
http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/4.6/management-server/index.html
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
-m /mnt/secondary \
-u http://cloudstack.apt-get.eu/systemvm/4.6/systemvm64template-4.6.0-kvm.qcow2.bz2 \
-h kvm \
-s <optional-management-server-secret-key> \
-F
This problem is very strange.
roarain...@126.com
From: Hongtu Zang
Date: 2016-09-28 15:30
To: users-cn
Subject: Re: CS4.6 system vms is stuck in starting state. But KVM display running!
This looks fine to me. I suggest you first force-stop the system VMs in CloudStack; if you cannot find a force-stop option, you can change their state in the DB. Then delete these two system VMs and wait for CloudStack to rebuild them. That may solve the problem.
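Resetting the state in the DB would look roughly like this. A sketch only: 'cloud' is CloudStack's standard database, vm_instance its usual VM table, and ids 7 and 8 are taken from the v-7-VM/s-8-VM names in the log below; back up the database before applying anything.

```shell
# Build the SQL to force the stuck system VMs (ids 7 and 8 from this
# thread's 'virsh list' output) back to Stopped so CloudStack can
# delete and recreate them. Printed only; apply after a DB backup.
SQL="UPDATE vm_instance SET state='Stopped' WHERE id IN (7,8) AND state='Starting';"
echo "$SQL"
# mysql -u cloud -p cloud -e "$SQL"   # uncomment to actually apply
```

The `AND state='Starting'` guard keeps the statement from touching VMs that have already moved on.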
2016-09-28 11:14 GMT+08:00 roarain...@126.com <roarain...@126.com>:
> Hello:
> CS4.6 system vms is stuck in starting state. But KVM display running!
>
>  Role                   Host  IP            OS             Version
>  Management Server/NFS  csm   192.168.1.10  RedHat6.4_X64  CS4.6
>  KVM Host               cs1   192.168.1.11  RedHat6.4_X64  CS4.6
>     The system VM template is systemvm64template-4.6.0-kvm.qcow2.bz2; its MD5 has been verified.
>     csm and cs1 are virtual machines built on ESXi, and each has 3 NICs:
> Host  NIC name    VLAN ID  IP address      Label     Purpose
> csm   vmnetwork   none     172.28.201.191            management access only
>       vnetmgrt    10       192.168.1.10    cloudbr0  management & guest
>       vnetpublic  11       172.16.1.0/24   cloudbr1  public
> cs1   vmnetwork   none     172.28.201.192            management access only
>       vnetmgrt    10       192.168.1.11    cloudbr0  management & guest
>       vnetpublic  11       172.16.1.0/24   cloudbr1  public
>     The global setting secstorage.allowed.intern has been changed to 192.168.1.0/24,192.168.1.10.
>     After the advanced zone was created, the system VMs stayed in Starting state, but virsh list --all
> on cs1 showed them as running; after restarting the management server, agent, and libvirtd, the UI also showed running.
>  [root@cs1 ~]# virsh list --all
>  Id    Name                           State
> ----------------------------------------------------
>  3     s-8-VM                         running
>  4     v-7-VM                         running
> The agent log contains the following:
> 2016-09-28 10:55:49,939 WARN  [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-2:null) Timed out: /usr/share/cloudstack-common/
> scripts/vm/hypervisor/kvm/patchviasocket.pl -n s-8-VM -p
> %template=domP%type=secstorage%host=192.168.1.10%
> port=8250%name=s-8-VM%zone=1%pod=1%guid=s-8-VM%workers=5%
> resource=com.cloud.storage.resource.PremiumSecondaryStorageResourc
> e%instance=SecStorage%sslcopy=false%role=templateProcessor%
> mtu=1500%eth2ip=172.16.1.32%eth2mask=255.255.255.0%
> gateway=172.16.1.254%public.network.device=eth2%eth0ip=
> 169.254.2.237%eth0mask=255.255.0.0%eth1ip=192.168.1.72%
> eth1mask=255.255.255.0%mgmtcidr=192.168.1.0/24%
> localgw=192.168.1.254%private.network.device=eth1%
> internaldns1=192.168.1.254%dns1=8.8.8.8 .  Output is:
> 2016-09-28 10:55:55,135 INFO  [kvm.storage.LibvirtStorageAdaptor]
> (agentRequest-Handler-4:null) Trying to fetch storage pool
> fd7d94b4-1672-3337-89f2-7dbc82e716f2 from libvirt
> 2016-09-28 10:55:55,150 INFO  [kvm.storage.LibvirtStorageAdaptor]
> (agentRequest-Handler-4:null) Asking libvirt to refresh storage pool
> fd7d94b4-1672-3337-89f2-7dbc82e716f2
> 2016-09-28 10:56:10,945 WARN  [kvm.resource.LibvirtComputingResource]
> (Script-5:null) Interrupting script.
> 2016-09-28 10:56:10,946 WARN  [kvm.resource.LibvirtComputingResource]
> (Script-5:null) Interrupting script.
> 2016-09-28 10:56:10,946 WARN  [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-1:null) Timed out: /usr/share/cloudstack-common/
> scripts/vm/hypervisor/kvm/patchviasocket.pl -n v-7-VM -p
> %template=domP%type=consoleproxy%host=192.168.1.
> 10%port=8250%name=v-7-VM%zone=1%pod=1%guid=Proxy.7%proxy_vm=
> 7%disable_rp_filter=true%eth2ip=172.16.1.31%eth2mask=
> 255.255.255.0%gateway=172.16.1.254%eth0ip=169.254.1.225%
> eth0mask=255.255.0.0%eth1ip=192.168.1.87%eth1mask=255.255.255.0%mgmtcidr=
> 192.168.1.0/24%localgw=192.168.1.254%internaldns1=192.168.1.254%
> dns1=8.8.8.8 .  Output is:
> 2016-09-28 10:56:10,946 WARN  [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-2:null) Timed out: /usr/share/cloudstack-common/
> scripts/vm/hypervisor/kvm/patchviasocket.pl -n s-8-VM -p
> %template=domP%type=secstorage%host=192.168.1.10%
> port=8250%name=s-8-VM%zone=1%pod=1%guid=s-8-VM%workers=5%
> resource=com.cloud.storage.resource.PremiumSecondaryStorageResourc
> e%instance=SecStorage%sslcopy=false%role=templateProcessor%
> mtu=1500%eth2ip=172.16.1.32%eth2mask=255.255.255.0%
> gateway=172.16.1.254%public.network.device=eth2%eth0ip=
> 169.254.2.237%eth0mask=255.255.0.0%eth1ip=192.168.1.72%
> eth1mask=255.255.255.0%mgmtcidr=192.168.1.0/24%
> localgw=192.168.1.254%private.network.device=eth1%
> internaldns1=192.168.1.254%dns1=8.8.8.8 .  Output is:
>
> The management server log contains the following messages. Which part of the network has the problem?
> 2016-09-28 11:02:53,235 DEBUG [c.c.s.StatsCollector] 
> (StatsCollector-1:ctx-f47d9415)
> AutoScaling Monitor is running...
> 2016-09-28 11:02:53,281 DEBUG [c.c.s.StatsCollector] 
> (StatsCollector-2:ctx-a7efaf1d)
> VmStatsCollector is running...
> 2016-09-28 11:02:53,695 DEBUG [c.c.s.StatsCollector] 
> (StatsCollector-4:ctx-5034a236)
> HostStatsCollector is running...
> 2016-09-28 11:02:53,718 DEBUG [c.c.a.t.Request] 
> (StatsCollector-4:ctx-5034a236)
> Seq 1-808959083066425394: Received:  { Ans: , MgmtId: 345048851725, via:
> 1(cs1), Ver: v1, Flags: 10, { GetHostStatsAnswer } }
> 2016-09-28 11:02:54,883 DEBUG [c.c.s.StatsCollector] 
> (StatsCollector-3:ctx-2f74a5a7)
> StorageCollector is running...
> 2016-09-28 11:02:54,888 DEBUG [c.c.s.StatsCollector] 
> (StatsCollector-3:ctx-2f74a5a7)
> There is no secondary storage VM for secondary storage host SStorage
> 2016-09-28 11:02:54,956 DEBUG [c.c.a.t.Request] 
> (StatsCollector-3:ctx-2f74a5a7)
> Seq 1-808959083066425395: Received:  { Ans: , MgmtId: 345048851725, via:
> 1(cs1), Ver: v1, Flags: 10, { GetStorageStatsAnswer } }
> 2016-09-28 11:02:58,070 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-215db595) Begin cleanup expired async-jobs
> 2016-09-28 11:02:58,075 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-215db595) End cleanup expired async-jobs
> 2016-09-28 11:03:01,602 DEBUG [c.c.a.m.AgentManagerImpl]
> (AgentManager-Handler-11:null) Ping from 1
> 2016-09-28 11:03:01,603 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
> (AgentManager-Handler-11:null) Process host VM state report from ping
> process. host: 1
> 2016-09-28 11:03:01,610 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
> (AgentManager-Handler-11:null) Process VM state report. host: 1, number of
> records in report: 2
> 2016-09-28 11:03:01,610 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
> (AgentManager-Handler-11:null) VM state report. host: 1, vm id: 7, power
> state: PowerOn
> 2016-09-28 11:03:01,628 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
> (AgentManager-Handler-11:null) VM state report is updated. host: 1, vm id:
> 7, power state: PowerOn
> 2016-09-28 11:03:01,630 INFO  [c.c.v.VirtualMachineManagerImpl]
> (AgentManager-Handler-11:null) There is pending job or HA tasks working on
> the VM. vm id: 7, postpone power-change report by resetting power-change
> counters
> 2016-09-28 11:03:01,653 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
> (AgentManager-Handler-11:null) VM state report. host: 1, vm id: 8, power
> state: PowerOn
> 2016-09-28 11:03:01,672 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
> (AgentManager-Handler-11:null) VM state report is updated. host: 1, vm id:
> 8, power state: PowerOn
> 2016-09-28 11:03:01,674 INFO  [c.c.v.VirtualMachineManagerImpl]
> (AgentManager-Handler-11:null) There is pending job or HA tasks working on
> the VM. vm id: 8, postpone power-change report by resetting power-change
> counters
> 2016-09-28 11:03:01,712 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
> (AgentManager-Handler-11:null) Done with process of VM state report. host: 1
> 2016-09-28 11:03:08,069 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-476e9849) Begin cleanup expired async-jobs
> 2016-09-28 11:03:08,075 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-476e9849) End cleanup expired async-jobs
> 2016-09-28 11:03:08,202 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-9696b0ad) Found 0 routers to update status.
> 2016-09-28 11:03:08,204 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-9696b0ad) Found 0 VPC networks to update
> Redundant State.
> 2016-09-28 11:03:08,205 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-9696b0ad) Found 0 networks to update RvR
> status.
> 2016-09-28 11:03:08,242 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-76fc0b61) Found 0 routers to update status.
> 2016-09-28 11:03:08,243 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-76fc0b61) Found 0 VPC networks to update
> Redundant State.
> 2016-09-28 11:03:08,244 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-76fc0b61) Found 0 networks to update RvR
> status.
> 2016-09-28 11:03:18,069 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-fbfabe80) Begin cleanup expired async-jobs
> 2016-09-28 11:03:18,075 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-fbfabe80) End cleanup expired async-jobs
> 2016-09-28 11:03:28,069 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-5601a321) Begin cleanup expired async-jobs
> 2016-09-28 11:03:28,075 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-5601a321) End cleanup expired async-jobs
> 2016-09-28 11:03:29,252 WARN  [o.a.c.f.j.i.AsyncJobMonitor]
> (Timer-1:ctx-0be61564) Task (job-66) has been pending for 1019 seconds
> 2016-09-28 11:03:29,253 WARN  [o.a.c.f.j.i.AsyncJobMonitor]
> (Timer-1:ctx-0be61564) Task (job-67) has been pending for 1019 seconds
> 2016-09-28 11:03:38,069 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-153039ea) Begin cleanup expired async-jobs
> 2016-09-28 11:03:38,077 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-153039ea) End cleanup expired async-jobs
> 2016-09-28 11:03:38,202 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-b5675452) Found 0 routers to update status.
> 2016-09-28 11:03:38,204 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-b5675452) Found 0 VPC networks to update
> Redundant State.
> 2016-09-28 11:03:38,205 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-b5675452) Found 0 networks to update RvR
> status.
> 2016-09-28 11:03:38,241 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-18c9b227) Found 0 routers to update status.
> 2016-09-28 11:03:38,243 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-18c9b227) Found 0 VPC networks to update
> Redundant State.
> 2016-09-28 11:03:38,244 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-18c9b227) Found 0 networks to update RvR
> status.
> 2016-09-28 11:03:43,163 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager
> Timer:ctx-080a7551) Resetting hosts suitable for reconnect
> 2016-09-28 11:03:43,165 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager
> Timer:ctx-080a7551) Completed resetting hosts suitable for reconnect
> 2016-09-28 11:03:43,165 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager
> Timer:ctx-080a7551) Acquiring hosts for clusters already owned by this
> management server
> 2016-09-28 11:03:43,166 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager
> Timer:ctx-080a7551) Completed acquiring hosts for clusters already owned by
> this management server
> 2016-09-28 11:03:43,166 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager
> Timer:ctx-080a7551) Acquiring hosts for clusters not owned by any
> management server
> 2016-09-28 11:03:43,167 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager
> Timer:ctx-080a7551) Completed acquiring hosts for clusters not owned by any
> management server
> 2016-09-28 11:03:48,070 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-871ba457) Begin cleanup expired async-jobs
> 2016-09-28 11:03:48,075 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-871ba457) End cleanup expired async-jobs
> 2016-09-28 11:03:53,236 DEBUG [c.c.s.StatsCollector] 
> (StatsCollector-1:ctx-f9198145)
> AutoScaling Monitor is running...
> 2016-09-28 11:03:53,285 DEBUG [c.c.s.StatsCollector] 
> (StatsCollector-2:ctx-ece47c17)
> VmStatsCollector is running...
> 2016-09-28 11:03:53,719 DEBUG [c.c.s.StatsCollector] 
> (StatsCollector-4:ctx-67496858)
> HostStatsCollector is running...
> 2016-09-28 11:03:53,739 DEBUG [c.c.a.t.Request] 
> (StatsCollector-4:ctx-67496858)
> Seq 1-808959083066425396: Received:  { Ans: , MgmtId: 345048851725, via:
> 1(cs1), Ver: v1, Flags: 10, { GetHostStatsAnswer } }
> 2016-09-28 11:03:54,957 DEBUG [c.c.s.StatsCollector] 
> (StatsCollector-3:ctx-53e6999b)
> StorageCollector is running...
> 2016-09-28 11:03:54,961 DEBUG [c.c.s.StatsCollector] 
> (StatsCollector-3:ctx-53e6999b)
> There is no secondary storage VM for secondary storage host SStorage
> 2016-09-28 11:03:55,037 DEBUG [c.c.a.t.Request] 
> (StatsCollector-3:ctx-53e6999b)
> Seq 1-808959083066425397: Received:  { Ans: , MgmtId: 345048851725, via:
> 1(cs1), Ver: v1, Flags: 10, { GetStorageStatsAnswer } }
> 2016-09-28 11:03:58,069 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-70730610) Begin cleanup expired async-jobs
> 2016-09-28 11:03:58,075 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-70730610) End cleanup expired async-jobs
> 2016-09-28 11:04:01,590 DEBUG [c.c.a.m.AgentManagerImpl]
> (AgentManager-Handler-14:null) Ping from 1
> 2016-09-28 11:04:01,591 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
> (AgentManager-Handler-14:null) Process host VM state report from ping
> process. host: 1
> 2016-09-28 11:04:01,600 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
> (AgentManager-Handler-14:null) Process VM state report. host: 1, number of
> records in report: 2
> 2016-09-28 11:04:01,600 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
> (AgentManager-Handler-14:null) VM state report. host: 1, vm id: 7, power
> state: PowerOn
> 2016-09-28 11:04:01,625 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
> (AgentManager-Handler-14:null) VM state report is updated. host: 1, vm id:
> 7, power state: PowerOn
> 2016-09-28 11:04:01,627 INFO  [c.c.v.VirtualMachineManagerImpl]
> (AgentManager-Handler-14:null) There is pending job or HA tasks working on
> the VM. vm id: 7, postpone power-change report by resetting power-change
> counters
> 2016-09-28 11:04:01,664 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
> (AgentManager-Handler-14:null) VM state report. host: 1, vm id: 8, power
> state: PowerOn
> 2016-09-28 11:04:01,693 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
> (AgentManager-Handler-14:null) VM state report is updated. host: 1, vm id:
> 8, power state: PowerOn
> 2016-09-28 11:04:01,695 INFO  [c.c.v.VirtualMachineManagerImpl]
> (AgentManager-Handler-14:null) There is pending job or HA tasks working on
> the VM. vm id: 8, postpone power-change report by resetting power-change
> counters
> 2016-09-28 11:04:01,726 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
> (AgentManager-Handler-14:null) Done with process of VM state report. host: 1
> 2016-09-28 11:04:08,069 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-cc6322c3) Begin cleanup expired async-jobs
> 2016-09-28 11:04:08,075 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-cc6322c3) End cleanup expired async-jobs
> 2016-09-28 11:04:08,202 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-23ebb3e7) Found 0 routers to update status.
> 2016-09-28 11:04:08,203 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-23ebb3e7) Found 0 VPC networks to update
> Redundant State.
> 2016-09-28 11:04:08,205 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-23ebb3e7) Found 0 networks to update RvR
> status.
> 2016-09-28 11:04:08,242 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-199cd318) Found 0 routers to update status.
> 2016-09-28 11:04:08,243 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-199cd318) Found 0 VPC networks to update
> Redundant State.
> 2016-09-28 11:04:08,245 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> (RouterStatusMonitor-1:ctx-199cd318) Found 0 networks to update RvR
> status.
> 2016-09-28 11:04:18,069 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-764c45e1) Begin cleanup expired async-jobs
> 2016-09-28 11:04:18,075 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-764c45e1) End cleanup expired async-jobs
> 2016-09-28 11:04:28,069 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-e81105c6) Begin cleanup expired async-jobs
> 2016-09-28 11:04:28,075 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-e81105c6) End cleanup expired async-jobs
> 2016-09-28 11:04:29,252 WARN  [o.a.c.f.j.i.AsyncJobMonitor]
> (Timer-1:ctx-c76d7a66) Task (job-66) has been pending for 1079 seconds
> 2016-09-28 11:04:29,253 WARN  [o.a.c.f.j.i.AsyncJobMonitor]
> (Timer-1:ctx-c76d7a66) Task (job-67) has been pending for 1079 seconds
>
>
>
>
>
>
>
> roarain...@126.com
>
