Re: [zfs-discuss] ZFS issue on read performance

2011-10-12 Thread degger
Hi,

Thanks for your help.
I won't be able to install pv on our backup server, as it is in production (and
we don't have a test environment).
How do you use the zfs send command? What is a snapshot from a ZFS point of
view?
We use the ZFS volume for storing data only, as a UFS or VxFS volume would, and
we don't create snapshots on it:
df -h:
Filesystem             size   used  avail capacity  Mounted on
zvol01/vls             7.3T   3.4T   4.0T    46%    /vls

ll
total 245946205
drwxr-x--- 3 root other 9 Oct 11 14:39 .
drwxr-xr-x 4 root root 4 Jul 27 18:52 ..
-rw-r- 1 root other 20971406848 Sep 29 08:58 TLVLS2C04_32640
-rw-r- 1 root other 20971406848 Jul 12 17:47 TLVLS2C06_7405
-rw-r- 1 root other 20971406848 Jul 13 07:36 TLVLS2C06_7406
-rw-r- 1 root other 20971406848 Jul 13 08:31 TLVLS2C06_7407
-rw-r- 1 root other 20971406848 Jul 13 09:06 TLVLS2C06_7408
-rw-r- 1 root other 20971406848 Jul 13 09:26 TLVLS2C06_7409

-> The TLVLS2C06_xxx files are virtual cartridges created by our backup software
(Time Navigator), containing data from servers backed up over the network.
Reading /vls to write those files onto LTO-3 tapes (or copying them to another
volume, a UFS one, for testing) is slow.
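
For reference, a crude way to time the raw read path on one of these cartridge
files, leaving the tape drive out of it, would be something like this (the file
name is taken from the listing above; the 1 MB block size is arbitrary):

time dd if=/vls/TLVLS2C06_7405 of=/dev/null bs=1024k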

Here is the result of the zpool status command:
  pool: zvol01
 state: ONLINE
 scrub: none requested
config:

        NAME                              STATE     READ WRITE CKSUM
        zvol01                            ONLINE       0     0     0
          c10t600508B40001350A0001804Dd0  ONLINE       0     0     0
          c10t600508B40001350A0001805Bd0  ONLINE       0     0     0
          c10t600508B40001350A00018050d0  ONLINE       0     0     0
          c10t600508B40001350A00018053d0  ONLINE       0     0     0

errors: No known data errors

At this time, nothing is being written to the /vls volume, since no data is
being backed up by our software; it is only reading from disk to write to LTO-3
tapes.
Here are some stats:

root@tlbkup02:/etc# zpool iostat 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zvol01 3.35T 4.09T 103 120 12.9M 13.3M
zvol01 3.35T 4.09T 213 0 26.6M 0
zvol01 3.35T 4.09T 181 0 22.6M 0
zvol01 3.35T 4.09T 135 0 16.9M 0
zvol01 3.35T 4.09T 183 0 22.8M 0
zvol01 3.35T 4.09T 204 0 25.5M 0
zvol01 3.35T 4.09T 158 39 19.5M 89.3K
zvol01 3.35T 4.09T 227 0 28.4M 0
zvol01 3.35T 4.09T 264 0 29.5M 0
zvol01 3.35T 4.09T 292 436 33.7M 2.26M
zvol01 3.35T 4.09T 200 0 25.0M 0
zvol01 3.35T 4.09T 193 0 24.0M 0
zvol01 3.35T 4.09T 187 0 23.4M 0
zvol01 3.35T 4.09T 249 0 31.0M 0
zvol01 3.35T 4.09T 240 0 29.9M 0
zvol01 3.35T 4.09T 222 0 27.8M 0
zvol01 3.35T 4.09T 194 0 24.3M 0
zvol01 3.35T 4.09T 236 0 29.4M 0
zvol01 3.35T 4.09T 230 0 28.7M 0
zvol01 3.35T 4.09T 188 0 23.3M 0
zvol01 3.35T 4.09T 249 0 31.1M 0
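
(For comparison with the GB/h figures quoted elsewhere in the thread: a
sustained read rate of about 25 MB/s works out to roughly 25 MB/s x 3600 s, or
about 90,000 MB per hour, i.e. on the order of 85-90 GB/h.)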


Re: [zfs-discuss] ZFS issue on read performance

2011-10-11 Thread Paul Kraus
On Tue, Oct 11, 2011 at 6:25 AM,   wrote:

> I'm not familiar with ZFS stuff, so I'll try to give you as much info about
> our environment as I can.
> We are using a ZFS pool as a VLS for a backup server (Sun V445, Solaris 10),
> and we are faced with very low read performance (whilst write performance is
> much better, i.e. up to 40 GB/h to migrate data onto LTO-3 tape from disk,
> and up to 100 GB/h to unstage data from LTO-3 tape to disk, either with Time
> Navigator 4.2 software or directly using dd commands).
> We have tuned ZFS parameters for the ARC and disabled prefetch, but
> performance is poor. If we dd from disk to RAM or tape, it's very slow, but
> if we dd from tape or RAM to disk, it's faster. I can't figure out why. I've
> read other posts related to this, but I'm not sure what kind of tuning can
> be made.
> As far as the disks are concerned, I have no idea how our System team
> created the ZFS volume.
> Can you help?

If you can, please post the output from `zpool status` so we know
what your configuration is. There are many ways to configure a zpool,
some of which have horrible read performance. We are using zfs as
backend storage for NetBackup and we do not see the disk storage as
the bottleneck except when copying from disk to tape (LTO-3) and that
depends on the backup images. We regularly see 75-100 MB/sec
throughput disk to tape for large backup images. I rarely see LTO-3
drives writing any faster than 100 MB/sec.

100 MB/sec. is about 350 GB/hr.
75 MB/sec. is about 260 GB/hr.

Our disk stage zpool is configured for capacity and reliability and
not performance.

  pool: nbu-ds0
 state: ONLINE
 scrub: scrub completed after 7h9m with 0 errors on Thu Sep 29 16:25:56 2011
config:

NAME   STATE READ WRITE CKSUM
nbu-ds0  ONLINE   0 0 0
  raidz2-0 ONLINE   0 0 0
c3t5000C5001A67AB63d0  ONLINE   0 0 0
c3t5000C5001A671685d0  ONLINE   0 0 0
c3t5000C5001A670DE6d0  ONLINE   0 0 0
c3t5000C5001A66CDA4d0  ONLINE   0 0 0
c3t5000C5001A66A43Bd0  ONLINE   0 0 0
c3t5000C5001A66994Dd0  ONLINE   0 0 0
c3t5000C5001A663062d0  ONLINE   0 0 0
c3t5000C5001A659F79d0  ONLINE   0 0 0
c3t5000C5001A6591B2d0  ONLINE   0 0 0
c3t5000C5001A658481d0  ONLINE   0 0 0
c3t5000C5001A4C47C8d0  ONLINE   0 0 0
  raidz2-1 ONLINE   0 0 0
c3t5000C5001A6548A2d0  ONLINE   0 0 0
c3t5000C5001A6546AAd0  ONLINE   0 0 0
c3t5000C5001A65400Ed0  ONLINE   0 0 0
c3t5000C5001A653B70d0  ONLINE   0 0 0
c3t5000C5001A6531F5d0  ONLINE   0 0 0
c3t5000C5001A64332Ed0  ONLINE   0 0 0
c3t5000C500112A5AF8d0  ONLINE   0 0 0
c3t5000C5001A5D61A8d0  ONLINE   0 0 0
c3t5000C5001A5C5EA9d0  ONLINE   0 0 0
c3t5000C5001A55F7A6d0  ONLINE   0 0 0  114K repaired
c3t5000C5001A5347FEd0  ONLINE   0 0 0
spares
  c3t5000C5001A485C88d0    AVAIL
  c3t5000C50026A0EC78d0    AVAIL

errors: No known data errors

-- 
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players


Re: [zfs-discuss] ZFS issue on read performance

2011-10-11 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of deg...@free.fr
> 
> I'm not familiar with ZFS stuff, so I'll try to give you as much info about
> our environment as I can.
> We are using a ZFS pool as a VLS for a backup server (Sun V445, Solaris 10),
> and we are faced with very low read performance (whilst write performance is
> much better, i.e. up to 40 GB/h to migrate data onto LTO-3 tape from disk,
> and up to 100 GB/h to unstage data from LTO-3 tape to disk, either with Time
> Navigator 4.2 software or directly using dd commands).
> We have tuned ZFS parameters for the ARC and disabled prefetch, but
> performance is poor. If we dd from disk to RAM or tape, it's very slow, but
> if we dd from tape or RAM to disk, it's faster. I can't figure out why. I've
> read other posts related to this, but I'm not sure what kind of tuning can
> be made.
> As far as the disks are concerned, I have no idea how our System team
> created the ZFS volume.
> Can you help?

Normally, even a single cheap disk in the dumbest configuration should vastly
outperform an LTO-3 tape device.  And 100 GB/h is nowhere near what you should
expect, unless you're dealing with highly fragmented or scattered small files.
In the optimal configuration, you'll read/write something like 1 Gbit/sec per
disk until you saturate your controller; let's just pick rough numbers and say
6 Gbit/sec = 2.7 TB per hour.  So there's a ballpark to think about.
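(Spelling out that rough arithmetic: 6 Gbit/sec is about 0.75 GB/sec, and 0.75
GB/sec x 3600 sec/hour = 2700 GB/hour, i.e. roughly 2.7 TB per hour.)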

Next things next.  I am highly skeptical of dd.  I constantly get weird
performance problems when using dd, especially if you're reading/writing
tapes.  Instead, this is a good benchmark for how fast your disks can actually
go in the present configuration:
zfs send somefilesystem@somesnap | pv -i 30 > /dev/null
(You might have to install pv, for example from OpenCSW or Blastwave.  If you
don't have pv and don't want to install it, you can instead time
zfs send somefilesystem@somesnap | wc -c
so you get both the total size and the total time.)  Expect the performance to
go up and down, so watch it for a while, or wait for it to complete and then
you'll have the average.
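
Spelled out end to end, the test is roughly the following (the snapshot name is
just a placeholder; a snapshot is essentially free to create and the throwaway
one can be destroyed afterwards):

# create a throwaway snapshot (instantaneous, consumes no extra space up front)
zfs snapshot somefilesystem@somesnap
# stream it to /dev/null, with pv printing throughput every 30 seconds
zfs send somefilesystem@somesnap | pv -i 30 > /dev/null
# clean up
zfs destroy somefilesystem@somesnap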

Also...  In what way are you using dd?  dd is not really an appropriate tool
for backing up a ZFS filesystem.  Well, there are some corner cases where it
might be OK, but generally speaking, no.  So the *very* first question you
should be asking is probably not about the bad performance you're seeing, but
about verifying the validity of your backup technique.



[zfs-discuss] ZFS issue on read performance

2011-10-11 Thread degger

Hi,

I'm not familiar with ZFS stuff, so I'll try to give you as much info about our
environment as I can.
We are using a ZFS pool as a VLS for a backup server (Sun V445, Solaris 10), and
we are faced with very low read performance (whilst write performance is much
better, i.e. up to 40 GB/h to migrate data onto LTO-3 tape from disk, and up to
100 GB/h to unstage data from LTO-3 tape to disk, either with Time Navigator 4.2
software or directly using dd commands).
We have tuned ZFS parameters for the ARC and disabled prefetch, but performance
is poor. If we dd from disk to RAM or tape, it's very slow, but if we dd from
tape or RAM to disk, it's faster. I can't figure out why. I've read other posts
related to this, but I'm not sure what kind of tuning can be made.
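
(For context, this kind of ARC and prefetch tuning on Solaris 10 is normally
done through /etc/system; the entries below are illustrative examples only, not
necessarily the exact values in use here:)

* illustrative /etc/system entries -- actual values depend on installed RAM
set zfs:zfs_prefetch_disable = 1
* cap the ARC at 4 GB (0x100000000 bytes); example value only
set zfs:zfs_arc_max = 0x100000000
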
As far as the disks are concerned, I have no idea how our System team created
the ZFS volume.
Can you help?

Thank you

David