On Wed, Dec 7, 2016 at 11:16 PM, Steve Taylor
<steve.tay...@storagecraft.com> wrote:
>
> I'm seeing the same behavior with very similar perf top output. One server 
> with 32 OSDs has a load average approaching 800. No excessive memory usage 
> and no iowait at all.


Also seeing AVCs, so I guess I'll switch to permissive mode for now:

type=AVC msg=audit(1481149036.203:819): avc:  denied  { read } for
pid=3401 comm="ceph-osd" name="nvme0n1p13" dev="devtmpfs" ino=18529
scontext=system_u:system_r:ceph_t:s0
tcontext=system_u:object_r:nvme_device_t:s0 tclass=blk_file
type=AVC msg=audit(1481149036.203:820): avc:  denied  { open } for
pid=3401 comm="ceph-osd" path="/dev/nvme0n1p13" dev="devtmpfs"
ino=18529 scontext=system_u:system_r:ceph_t:s0
tcontext=system_u:object_r:nvme_device_t:s0 tclass=blk_file
type=AVC msg=audit(1481149036.203:821): avc:  denied  { getattr } for
pid=3401 comm="ceph-osd" path="/dev/nvme0n1p13" dev="devtmpfs"
ino=18529 scontext=system_u:system_r:ceph_t:s0
tcontext=system_u:object_r:nvme_device_t:s0 tclass=blk_file
type=AVC msg=audit(1481149036.203:822): avc:  denied  { ioctl } for
pid=3401 comm="ceph-osd" path="/dev/nvme0n1p13" dev="devtmpfs"
ino=18529 scontext=system_u:system_r:ceph_t:s0
tcontext=system_u:object_r:nvme_device_t:s0 tclass=blk_file
type=AVC msg=audit(1481149036.204:823): avc:  denied  { write } for
pid=3401 comm="ceph-osd" name="nvme0n1p13" dev="devtmpfs" ino=18529
scontext=system_u:system_r:ceph_t:s0
tcontext=system_u:object_r:nvme_device_t:s0 tclass=blk_file
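
For anyone else hitting this, something like the following should work
as a temporary workaround (the "ceph-nvme" module name below is just a
placeholder I picked, not anything official):

  # drop to permissive until the ceph-selinux policy is fixed
  setenforce 0
  # to persist across reboots, set SELINUX=permissive in /etc/selinux/config

  # or, instead of going fully permissive, build a local policy module
  # from the logged denials and load it
  ausearch -m avc -ts recent | audit2allow -M ceph-nvme
  semodule -i ceph-nvme.pp

The audit2allow route keeps enforcing mode for everything else, at the
cost of carrying a local module until the shipped policy covers
nvme_device_t.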

I must say I'm a bit surprised that these issues weren't caught in QA.