Is the speed of a 'zfs send' dependent on file size / number of files?

        We have a system with some large datasets (3.3 TB and about 35
million files), and conventional backups take a long time: with
NetBackup 6.5, a full backup takes between two and three days, and
differential incrementals take between 15 and 20 hours even when very
few files have changed. We already use snapshots for day-to-day
restores, but we need the 'real' backups for DR.

        I have been testing zfs send throughput and have not been
getting promising results. Note that this is NOT OpenSolaris, but
Solaris 10U6 (10/08) with the IDR for the 'snapshot interrupts
resilver' bug.
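
One way to watch what the pool is doing while a send runs (just a
sketch, not part of the timed test below; 5 is the sampling interval
in seconds):

# Per-vdev read/write throughput and IOPS, refreshed every 5 seconds:
zpool iostat -v IDR-test 5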

Server: V480, 4 CPU, 16 GB RAM (test server, production is an M4000)
Storage: two SE-3511, each with one 512 GB LUN presented

Simple mirror layout:

pkr...@nyc-sted1:/IDR-test/ppk> zpool status
  pool: IDR-test
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Jul  1 16:54:58 2009
config:

        NAME                                       STATE     READ WRITE CKSUM
        IDR-test                                   ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c6t600C0FF0000000000927852FB91AD308d0  ONLINE       0     0     0
            c6t600C0FF0000000000922614781B19008d0  ONLINE       0     0     0

errors: No known data errors
pkr...@nyc-sted1:/IDR-test/ppk>

pkr...@nyc-sted1:/IDR-test/ppk> zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
IDR-test                      101G   399G  24.3M  /IDR-test
idr-t...@1250597527          96.8M      -   101M  -
idr-t...@1250604834          20.1M      -  24.3M  -
idr-t...@1250605236            16K      -  24.3M  -
idr-t...@1250605400            20K      -  24.3M  -
idr-t...@1250606582            20K      -  24.3M  -
idr-t...@1250612553            20K      -  24.3M  -
idr-t...@1250616026            20K      -  24.3M  -
IDR-test/dataset              101G   399G   100G  /IDR-test/dataset
IDR-test/data...@1250597527   313K      -  87.1G  -
IDR-test/data...@1250604834   266K      -  87.1G  -
IDR-test/data...@1250605236   187M      -  88.2G  -
IDR-test/data...@1250605400   192M      -  89.3G  -
IDR-test/data...@1250606582   246K      -  95.4G  -
IDR-test/data...@1250612553   233K      -  95.4G  -
IDR-test/data...@1250616026   230K      -   100G  -
pkr...@nyc-sted1:/IDR-test/ppk>

There are about 3.3 million files / directories in the 'dataset';
files range in size from 1 KB to 100 KB.
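
For anyone who wants to reproduce the count, a plain walk of the tree
does it (slow over 3.3 million entries, but accurate):

# Count every file and directory under the dataset's mountpoint:
find /IDR-test/dataset | wc -l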

pkr...@nyc-sted1:/IDR-test/ppk> time sudo zfs send IDR-test/data...@1250616026 >/dev/null

real    91m19.024s
user    0m0.022s
sys     11m51.422s
pkr...@nyc-sted1:/IDR-test/ppk>

That translates to a little over 18 MB/sec and about 600 files/sec,
which would mean almost 16 hours per TB. Better, but not much better
than NBU.
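
The arithmetic, in case anyone wants to check it (91m19s is 5,479
seconds; the snapshot refers to 100 GB and ~3.3 million files; note
that bc truncates rather than rounds):

echo "scale=1; 100 * 1024 / 5479" | bc            # 18.6 MB/sec
echo "3300000 / 5479" | bc                        # 602 files/sec
echo "scale=1; (1024 * 1024 / 18.6) / 3600" | bc  # 15.6 hours per TB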

I do not think the SE-3511s are limiting us, as I have seen much
higher throughput on them when resilvering one or more mirrors.
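
A crude way to confirm that would be a raw sequential read off one
side of the mirror, bypassing ZFS entirely (read-only, so safe on a
live pool; the s0 slice name is an assumption on my part, based on
ZFS putting an EFI label on the whole disk):

# Read ~10 GB straight off one LUN to see what the array can sustain:
time dd if=/dev/rdsk/c6t600C0FF0000000000927852FB91AD308d0s0 \
    of=/dev/null bs=1024k count=10000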

Any thoughts as to why I am not getting better throughput?

Thanks.

-- 
{--------1---------2---------3---------4---------5---------6---------7---------}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Designer, "The Pajama Game" @ Schenectady Light Opera Company
( http://www.sloctheater.org/ )
-> Technical Advisor, Lunacon 2010 (http://www.lunacon.org/)
-> Technical Advisor, RPI Players