To clarify what has just been stated: with the ZIL disabled I got 4MB/sec.
With the ZIL enabled I get 1.25MB/sec.
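
For reference, and assuming an OpenSolaris/Nevada build of that vintage where
the zil_disable tunable still exists, the usual ways to flip it are a line in
/etc/system plus a reboot, or live via mdb (generic recipe only, not
necessarily the exact commands used here):

    # persistent: add to /etc/system, then reboot
    set zfs:zil_disable = 1

    # live: flip the kernel variable with mdb (generally said to apply
    # only to filesystems (re)mounted after the change)
    echo zil_disable/W0t1 | mdb -kw    # disable the ZIL
    echo zil_disable/W0t0 | mdb -kw    # re-enable the ZIL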

On 6/23/06, Tao Chen <[EMAIL PROTECTED]> wrote:


On 6/23/06, Roch <[EMAIL PROTECTED]> wrote:
>
> > > On Thu, Jun 22, 2006 at 04:22:22PM -0700, Joe Little wrote:
> > > > On 6/22/06, Jeff Bonwick <[EMAIL PROTECTED]> wrote:
> > > > >> a test against the same iscsi targets using linux and XFS and the
> > > > >> NFS server implementation there gave me 1.25MB/sec writes. I was about
> > > > >> to throw in the towel and deem ZFS/NFS as unusable until B41 came
> > > > >> along and at least gave me 1.25MB/sec.
> > > > >
> > > > >That's still super slow -- is this over a 10Mb link or something?
> > > > >
> > > > >Jeff
>
> I think the performance is in line with expectation for a
> small-file, single-threaded, open/write/close NFS
> workload (NFS must commit on close). Therefore I expect:
>
>         (avg file size) / (I/O latency).
>
> Joe, does this formula approach the 1.25 MB/s?
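
A quick back-of-the-envelope check of that formula, with purely illustrative
numbers since the thread does not give the actual average file size: at about
64KB per file and about 50ms per synchronous commit,

        64 KB / 50 ms  =  1280 KB/s  ~  1.25 MB/s

which is right around the figure Joe reported; smaller files or higher commit
latency push it down proportionally.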


Joe sent me another set of DTrace output (biorpt.sh.rec.gz), covering a
105-second run with zil_disable=1.

I generated a graph using Grace (rec.gif).

The interesting parts for me:
1) How the I/O response time (at the bdev level) changes in a pattern.
2) Both the iSCSI (sd2) and local (sd1) storage follow the same pattern and
have almost identical latency on average.
3) The latency is very high, both on average and at the peaks.

Although low throughput is expected given the large number of small files, I
don't expect such high latency, and of course 1.25MB/s is too low; even after
turning on zil_disable, I see only 4MB/s in this data set.
The I/O sizes at the bdev level are actually pretty decent: mostly (75%) 128KB.
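
(Sanity check against the totals below: roughly 409 MB written to sd1 over the
~105-second trace works out to about 3.9 MB/s, consistent with the ~4MB/s
figure above.)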

Here's a summary:

# biorpt -i biorpt.sh.rec

Generating report from biorpt.sh.rec ...

   === Top 5 I/O types ===

   DEVICE    T  BLKs     COUNT
   --------  -  ----  --------
   sd1       W   256      3122
   sd2       W   256      3118
   sd1       W     2       164
   sd2       W     2       151
   sd2       W     3       123



          === Top 5 worst I/O response time ===

   DEVICE    T  BLKs      OFFSET    TIMESTAMP  TIME.ms
   --------  -  ----  ----------  -----------  -------
   sd1       W   256   529562656   104.322170  3316.90
   sd1       W   256   529563424   104.322185  3281.97
   sd2       W   256   521152480   104.262081  3262.49
   sd2       W   256   521152736   104.262102  3258.56
   sd1       W   256   529562912   104.262091  3249.85



          === Top 5 Devices with largest number of I/Os ===

   DEVICE      READ AVG.ms     MB    WRITE AVG.ms     MB      IOs SEEK
   -------  ------- ------ ------  ------- ------ ------  ------- ----
   sd1            7   2.70      0     4169 440.62    409     4176   0%
   sd2            6   0.25      0     4131 444.79    407     4137   0%
   cmdk0          5  21.50      0      138   0.82      0      143  11%


          === Top 5 Devices with largest amount of data transfer ===

   DEVICE      READ AVG.ms     MB    WRITE AVG.ms     MB   Tot.MB MB/s
   -------  ------- ------ ------  ------- ------ ------  ------- ----
   sd1            7   2.70      0     4169 440.62    409      409    4
   sd2            6   0.25      0     4131 444.79    407      407    4
   cmdk0          5  21.50      0      138   0.82      0        0    0
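
For readers who don't have biorpt.sh handy, here is a minimal sketch in the
same spirit, using only the stock DTrace io provider to report average
block-device response time per device. It is not Tao's script, just a generic
starting point:

    # average bdev response time (ms) and I/O count per device; Ctrl-C to print
    dtrace -n '
        io:::start
        {
            /* remember when each buf was issued, keyed by device + block */
            ts[args[0]->b_edev, args[0]->b_blkno] = timestamp;
        }
        io:::done
        /ts[args[0]->b_edev, args[0]->b_blkno]/
        {
            @avg_ms[args[1]->dev_statname] =
                avg((timestamp - ts[args[0]->b_edev, args[0]->b_blkno]) / 1000000);
            @ios[args[1]->dev_statname] = count();
            ts[args[0]->b_edev, args[0]->b_blkno] = 0;
        }'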

 Tao
