Thank you for your reply.

I have checked both the VM host and the physical host. Free RAM is sufficient (100 GB left), and the load average is 0.5 (5 min, 32 cores).

In short, CPU and memory are sufficient. IOPS (see ceph -s) is only about 2000 op/s (16 SSD cache disks, 92 HDD disks).
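
For reference, a sketch of the checks I ran on the hypervisor and on a Ceph node (the numbers above come from these):

    # free memory on the KVM / physical host
    free -g
    # 1/5/15-minute load averages
    uptime
    # cluster health and client IOPS
    ceph -s
    # per-pool IO breakdown
    ceph osd pool stats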

The physical hosts run CentOS 7 with kernel 3.10.0. OpenStack version: Pike. Ceph version: Luminous, with BlueStore.


I have 5 pools: volumes, images, volume_cache, vms, and vms_cache. volume_cache is the SSD cache tier of volumes, and vms_cache is the SSD cache tier of vms.
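
For context, the tiers were created with the standard cache-tiering commands, roughly as below (writeback mode is my assumption; pool names as above):

    # put volume_cache in front of volumes
    ceph osd tier add volumes volume_cache
    ceph osd tier cache-mode volume_cache writeback
    ceph osd tier set-overlay volumes volume_cache
    # same pattern for vms / vms_cache
    ceph osd tier add vms vms_cache
    ceph osd tier cache-mode vms_cache writeback
    ceph osd tier set-overlay vms vms_cache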


I noticed that you said Ceph cache. Do you mean the Ceph RBD cache?
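
If so, one way to see what a running librbd client actually uses is its admin socket, assuming "admin socket" is configured in the [client] section (the socket path below is only an example):

    # on the Nova compute host; the .asok path depends on your admin socket setting
    ceph --admin-daemon /var/run/ceph/ceph-client.cinder.123456.asok config show | grep rbd_cache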


Original Message
From: EDH - Manuel Rios Fernandez <mrios...@easydatahost.com>
To: '刘亮' <liang...@linkdoc.com>; 'ceph-users' <ceph-users@lists.ceph.com>
Sent: Friday, May 17, 2019, 15:23
Subject: RE: [ceph-users] openstack with ceph rbd vms IO/erros

Did you check your KVM host RAM usage?

We have seen this on heavily loaded hosts: RAM overcommit caused random VM crashes.

As you said, to fix it the volume must be mounted externally and repaired with fsck. You can prevent it by disabling the Ceph cache on the OpenStack Nova host, but your VMs will get less performance.
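
A minimal sketch of what I mean, assuming the usual Nova + librbd setup (file paths may differ on your distro; nova-compute and the VMs must be restarted for the change to take effect):

    # /etc/nova/nova.conf on the compute host
    [libvirt]
    disk_cachemodes = "network=none"

    # or disable librbd caching for all clients in /etc/ceph/ceph.conf
    [client]
    rbd cache = false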

What are your Ceph & OpenStack versions?

Regards


From: ceph-users <ceph-users-boun...@lists.ceph.com> On behalf of 刘亮
Sent: Friday, May 17, 2019, 9:01
To: ceph-users <ceph-users@lists.ceph.com>
Subject: [ceph-users] openstack with ceph rbd vms IO/erros

Hi:
I have an OpenStack cluster backed by a Ceph cluster via RBD; the Ceph cluster uses an SSD pool as a cache tier.

Some VMs on OpenStack sometimes crash in two ways:

1. The filesystem becomes read-only. After a reboot, it works fine again.
2. IO errors. I must repair the filesystem with fsck and then reboot; it works fine again (see the recovery sketch below).
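
For case 2, the repair looks roughly like this (the image name is an example; the VM is shut off first, and with kernel 3.10 some image features may need to be disabled before krbd can map it):

    # map the VM's image from a host with Ceph client access
    rbd map volumes/volume-xxxxxxxx
    # repair the filesystem on the mapped device, then unmap it
    fsck -y /dev/rbd0
    rbd unmap /dev/rbd0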

I do not know whether this is a Ceph bug or a KVM bug.

I need some ideas to resolve this. Can anyone help me?
Looking forward to your reply.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
