Hi Mark,

Sorry, I somehow missed this email.

I'm currently running Debian 11 with the following kernel:
Linux ceph01 5.10.0-12-amd64 #1 SMP Debian 5.10.103-1 (2022-03-07) x86_64 GNU/Linux
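
For what it's worth, a quick way to confirm the kernel was built with
io_uring support (the config path below is the standard Debian location,
shown only as a sketch):

grep CONFIG_IO_URING /boot/config-$(uname -r)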

I've tried upgrading to 17.2.0 and the issue still exists.

Regards,
Gene Kuo
Co-organizer, Cloud Native Taiwan User Group


On Tue, Jan 11, 2022 at 3:15 AM, Mark Nelson <mnel...@redhat.com> wrote:

> Hi Gene,
>
>
> Unfortunately, when the io_uring code was first implemented there were no
> stable CentOS kernels in our test lab that included io_uring support, so
> it hasn't gotten a ton of testing.  I agree that your issue looks
> similar to what was reported in issue #47661, but it looks like you are
> running Pacific, so you should already have the patch that was included in
> Octopus to fix that issue?
>
> What OS/Kernel is this?  FWIW our initial testing was on CentOS 8 with a
> custom EPEL kernel build.
>
> Mark
>
>
> On 1/7/22 7:27 AM, Kuo Gene wrote:
> > Hi,
> >
> > I've recently been trying to enable io_uring for the OSDs in our cephadm
> > deployment with the commands below.
> >
> > ceph config set osd bdev_ioring true
> > ceph config set osd bdev_ioring_hipri true
> > ceph config set osd bdev_ioring_sqthread_poll true
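> >
> > To double-check that the values actually took effect, they can be read
> > back the same way (just a sketch using the standard ceph config get
> > command):
> >
> > ceph config get osd bdev_ioring
> > ceph config get osd bdev_ioring_hipri
> > ceph config get osd bdev_ioring_sqthread_poll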
> >
> > However, I've run into an issue similar to this bug:
> > Bug #47661: Cannot allocate memory appears when using io_uring osd -
> > bluestore - Ceph <https://tracker.ceph.com/issues/47661>
> >
> > I've tried adding "--ulimit memlock=-1:-1" to the docker run line in the
> > unit.run file that cephadm created for the OSD service.
> > Running ulimit -a inside the container confirms that "max locked memory"
> > is set to unlimited, but the OSD still fails to start when io_uring is
> > enabled.
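> >
> > For reference, the change to unit.run was roughly of this shape
> > (everything other than the added --ulimit flag is elided here and is
> > illustrative, not the exact line cephadm generates):
> >
> > docker run --ulimit memlock=-1:-1 ... quay.io/ceph/ceph@sha256:... ...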
> >
> > Any suggestions?
> >
> > OSD logs:
> > Using recent ceph image quay.io/ceph/ceph@sha256:bb6a71f7f481985f6d3b358e3b9ef64c6755b3db5aa53198e0aac38be5c8ae54
> >
> > debug 2022-01-05T18:34:38.878+0000 7f06ffaee080  0 set uid:gid to 167:167 (ceph:ceph)
> > debug 2022-01-05T18:34:38.878+0000 7f06ffaee080  0 ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable), process ceph-osd, pid 7
> > debug 2022-01-05T18:34:38.878+0000 7f06ffaee080  0 pidfile_write: ignore empty --pid-file
> > debug 2022-01-05T18:34:38.878+0000 7f06ffaee080  1 bdev(0x55f113f5c800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
> > debug 2022-01-05T18:34:38.882+0000 7f06ffaee080 -1 bdev(0x55f113f5c800 /var/lib/ceph/osd/ceph-2/block) _aio_start io_setup(2) failed: (12) Cannot allocate memory
> > debug 2022-01-05T18:34:38.882+0000 7f06ffaee080  0 starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
> >
> > ulimit -a output (container started with io_uring disabled):
> > core file size          (blocks, -c) unlimited
> > data seg size           (kbytes, -d) unlimited
> > scheduling priority             (-e) 0
> > file size               (blocks, -f) unlimited
> > pending signals                 (-i) 1030203
> > max locked memory       (kbytes, -l) unlimited
> > max memory size         (kbytes, -m) unlimited
> > open files                      (-n) 1048576
> > pipe size            (512 bytes, -p) 8
> > POSIX message queues     (bytes, -q) 819200
> > real-time priority              (-r) 0
> > stack size              (kbytes, -s) 8192
> > cpu time               (seconds, -t) unlimited
> > max user processes              (-u) unlimited
> > virtual memory          (kbytes, -v) unlimited
> > file locks                      (-x) unlimited
> >
> >
> > Regards,
> > Gene Kuo
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
