I'm using CentOS 7.7.1908 with kernel 3.10.0-1062.1.2.el7.x86_64. The
workload was a VMware Storage vMotion from a local SSD-backed datastore to
the ceph-backed datastore. Performance was measured with dstat on the iscsi
gateway (network traffic) and with ceph status, since this cluster is
otherwise basically idle. I changed max_data_area_mb to 256 and cmdsn_depth
to 128, which appears to have given a slight improvement of maybe 10 MB/s.
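
For reference, the measurement and tuning looked roughly like this (the
gwcli disk and target paths depend on the setup, so cd to your own disk and
target as Mike describes below):

  # on the iscsi gateway: network throughput, 5 second samples
  dstat -nt 5

  # cluster-wide client io (the cluster is otherwise idle)
  ceph -s

  # in gwcli, after cd'ing to the disk:
  reconfigure max_data_area_mb 256
  # and after cd'ing to the target:
  reconfigure cmdsn_depth 128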

Moving the VM to the ceph-backed datastore:
  io:
    client:   124 KiB/s rd, 76 MiB/s wr, 95 op/s rd, 1.26k op/s wr

Moving the VM off the ceph-backed datastore:
  io:
    client:   344 MiB/s rd, 625 KiB/s wr, 5.54k op/s rd, 62 op/s wr
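
(Back of the envelope from those numbers: 76 MiB/s divided by 1.26k write
op/s is about 62 KiB per write, and 344 MiB/s divided by 5.54k read op/s is
about 64 KiB per read, so the average IO size looks similar in both
directions.)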

I'm going to run bonnie++ against an rbd volume mapped and mounted directly
on the iscsi gateway, and I'll also run bonnie++ inside a VM on a
ceph-backed datastore.
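
Roughly what I have in mind for the direct test (pool/image names, size,
and mountpoint are just placeholders):

  rbd create rbd/bench-test --size 100G
  # if map complains about unsupported features on the el7 krbd client:
  #   rbd feature disable rbd/bench-test object-map fast-diff deep-flatten
  rbd map rbd/bench-test          # e.g. shows up as /dev/rbd0
  mkfs.xfs /dev/rbd0
  mount /dev/rbd0 /mnt/bench
  bonnie++ -d /mnt/bench -u root

For the in-VM run, bonnie++ will just point at a filesystem on a virtual
disk that lives on the ceph-backed datastore.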

On Thu, Oct 24, 2019 at 7:15 PM Mike Christie <mchri...@redhat.com> wrote:

> On 10/24/2019 12:22 PM, Ryan wrote:
> > I'm in the process of testing the iscsi target feature of ceph. The
> > cluster is running ceph 14.2.4 and ceph-iscsi 3.3. It consists of 5
>
> What kernel are you using?
>
> > hosts with 12 SSD OSDs per host. Some basic testing moving VMs to a
> > ceph-backed datastore is only showing 60 MB/s transfers. However, moving
> > these back off the datastore is fast at 200-300 MB/s.
>
> What is the workload and what are you using to measure the throughput?
>
> If you are using fio, what arguments are you using? And, could you
> change the ioengine to rbd and re-run the test from the target system so
> we can check if rbd is slow or iscsi?
>
> For small IOs, 60 is about right.
>
> For 128-512K IOs you should be able to get around 300 MB/s for writes
> and 600 for reads.
>
> 1. Increase max_data_area_mb. This is a kernel buffer lio/tcmu uses to
> pass data between the kernel and tcmu-runner. The default is only 8MB.
>
> In gwcli cd to your disk and do:
>
> # reconfigure max_data_area_mb %N
>
> where N is between 8 and 2048 MBs.
>
> 2. The Linux kernel target only allows 64 commands per iscsi session by
> default. We increase that to 128, but you can increase this to 512.
>
> In gwcli cd to the target dir and do
>
> reconfigure cmdsn_depth 512
>
> 3. I think ceph-iscsi and lio work better with higher queue depths so if
> you are using fio you want higher numjobs and/or iodepths.
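>
> Something along these lines, for example (pool/image names are
> placeholders, and bs/iodepth/numjobs are just a starting point):
>
> fio --name=rbd-write --ioengine=rbd --clientname=admin --pool=rbd \
>     --rbdname=test-img --rw=write --bs=128k --iodepth=32 --numjobs=4 \
>     --time_based --runtime=60 --group_reporting
>
> That writes 128K IOs straight to the rbd image from the gateway node, so
> it takes iscsi out of the picture.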
>
> >
> > What should I be looking at to track down the write performance issue?
> > In comparison with the Nimble Storage arrays I can see 200-300MB/s in
> > both directions.
> >
> > Thanks,
> > Ryan
> >
> >