Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-20 Thread Marko Milisavljevic

It is definitely defined in b63... not sure when it got introduced.

http://src.opensolaris.org/source/xref/onnv/aside/usr/src/cmd/mdb/common/modules/zfs/zfs.c

shows tunable parameters for ZFS, under "zfs_params(...)"
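
If your build's zfs mdb module has that dcmd, something like this should
confirm whether the variables exist on a live system before you touch
/etc/system (a quick check, assuming ::zfs_params is present in your build):

echo ::zfs_params | mdb -k
echo zfs_vdev_max_pending/D | mdb -k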

On 5/20/07, Trygve Laugstøl <[EMAIL PROTECTED]> wrote:

Marko Milisavljevic wrote:
> Given how common the Sil3114 chipset is in the
> my-old-computer-became-home-server segment, I am sure this workaround
> will be appreciated by many who google their way here. And just in
> case it is not clear, what j means below is to add these two lines to
> /etc/system:
>
> set zfs:zfs_vdev_min_pending=1
> set zfs:zfs_vdev_max_pending=1

I just tried the same myself but got these warnings when booting:

May 20 01:22:29 deservio genunix: [ID 492708 kern.notice] sorry,
variable 'zfs_vdev_min_pending' is not defined in the 'zfs'
May 20 01:22:29 deservio genunix: [ID 966847 kern.notice] module
May 20 01:22:29 deservio genunix: [ID 10 kern.notice]
May 20 01:22:29 deservio genunix: [ID 492708 kern.notice] sorry,
variable 'zfs_vdev_max_pending' is not defined in the 'zfs'
May 20 01:22:29 deservio genunix: [ID 966847 kern.notice] module
May 20 01:22:29 deservio genunix: [ID 10 kern.notice]

I'm running b60.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-20 Thread Trygve Laugstøl

Marko Milisavljevic wrote:

Thank you, following your suggestion improves things - reading a ZFS
file from a RAID-0 pair now gives me 95MB/sec - about the same as from
/dev/dsk. What I find surprising is that reading from a RAID-1 2-drive
zpool gives me only 56MB/s - I imagined it would be roughly like
reading from RAID-0. I can see that it can't be identical - when
reading mirrored drives simultaneously, some data will need to be
skipped if the file is laid out sequentially, but it doesn't seem
intuitively obvious how my broken drivers/card would affect it to that
degree, especially since reading a file from a one-disk zpool gives
me 70MB/s. My plan was to make a 4-disk RAID-Z - we'll see how it works
out when all drives arrive.

Given how common the Sil3114 chipset is in the
my-old-computer-became-home-server segment, I am sure this workaround
will be appreciated by many who google their way here. And just in
case it is not clear, what j means below is to add these two lines to
/etc/system:

set zfs:zfs_vdev_min_pending=1
set zfs:zfs_vdev_max_pending=1


I just tried the same myself but got these warnings when booting:

May 20 01:22:29 deservio genunix: [ID 492708 kern.notice] sorry, 
variable 'zfs_vdev_min_pending' is not defined in the 'zfs'

May 20 01:22:29 deservio genunix: [ID 966847 kern.notice] module
May 20 01:22:29 deservio genunix: [ID 10 kern.notice]
May 20 01:22:29 deservio genunix: [ID 492708 kern.notice] sorry, 
variable 'zfs_vdev_max_pending' is not defined in the 'zfs'

May 20 01:22:29 deservio genunix: [ID 966847 kern.notice] module
May 20 01:22:29 deservio genunix: [ID 10 kern.notice]

I'm running b60.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-16 Thread Marko Milisavljevic

Thank you, following your suggestion improves things - reading a ZFS
file from a RAID-0 pair now gives me 95MB/sec - about the same as from
/dev/dsk. What I find surprising is that reading from a RAID-1 2-drive
zpool gives me only 56MB/s - I imagined it would be roughly like
reading from RAID-0. I can see that it can't be identical - when
reading mirrored drives simultaneously, some data will need to be
skipped if the file is laid out sequentially, but it doesn't seem
intuitively obvious how my broken drivers/card would affect it to that
degree, especially since reading a file from a one-disk zpool gives
me 70MB/s. My plan was to make a 4-disk RAID-Z - we'll see how it works
out when all drives arrive.

Given how common the Sil3114 chipset is in the
my-old-computer-became-home-server segment, I am sure this workaround
will be appreciated by many who google their way here. And just in
case it is not clear, what j means below is to add these two lines to
/etc/system:

set zfs:zfs_vdev_min_pending=1
set zfs:zfs_vdev_max_pending=1
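
If you'd rather experiment without rebooting, the same values can
apparently be poked into the running kernel with mdb (a rough sketch,
assuming the symbols exist on your build; the change only affects new I/O
and does not persist across a reboot):

echo zfs_vdev_min_pending/W0t1 | mdb -kw
echo zfs_vdev_max_pending/W0t1 | mdb -kw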

I've been doing a lot of reading, and it seems unlikely that any effort
will be made to address the driver performance with either the ATA driver
or the Sil311x chipset specifically - by the time more pressing
enhancements are made to the various SATA drivers, this hardware will be
too obsolete to matter.

With your workaround, things are working well enough for my purposes
that I am able to choose Solaris over Linux - thanks again.

Marko

On 5/16/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

Marko,
Matt and I discussed this offline some more and he had a couple of ideas
about double-checking your hardware.

It looks like your controller (or disks, maybe?) is having trouble with
multiple simultaneous I/Os to the same disk.  It looks like prefetch
aggravates this problem.

When I asked Matt what we could do to verify that it's the number of
concurrent I/Os that is causing performance to be poor, he had the
following suggestions:

set zfs_vdev_{min,max}_pending=1 and run with prefetch on, then
iostat should show 1 outstanding io and perf should be good.

or turn prefetch off, and have multiple threads reading
concurrently, then iostat should show multiple outstanding ios
and perf should be bad.

Let me know if you have any additional questions.

-j

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-16 Thread johansen-osdev
Marko,
Matt and I discussed this offline some more and he had a couple of ideas
about double-checking your hardware.

It looks like your controller (or disks, maybe?) is having trouble with
multiple simultaneous I/Os to the same disk.  It looks like prefetch
aggravates this problem.

When I asked Matt what we could do to verify that it's the number of
concurrent I/Os that is causing performance to be poor, he had the
following suggestions:

set zfs_vdev_{min,max}_pending=1 and run with prefetch on, then
iostat should show 1 outstanding io and perf should be good.

or turn prefetch off, and have multiple threads reading
concurrently, then iostat should show multiple outstanding ios
and perf should be bad.
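
For instance, the second case might be driven with something like this
(file names are placeholders for any large files in the pool; watch the
actv column for the number of outstanding I/Os):

dd if=/testpool/file1 of=/dev/null bs=128k &
dd if=/testpool/file2 of=/dev/null bs=128k &
iostat -xn 5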

Let me know if you have any additional questions.

-j

On Wed, May 16, 2007 at 11:38:24AM -0700, [EMAIL PROTECTED] wrote:
> At Matt's request, I did some further experiments and have found that
> this appears to be particular to your hardware.  This is not a general
> 32-bit problem.  I re-ran this experiment on a 1-disk pool using a 32
> and 64-bit kernel.  I got identical results:
> 
> 64-bit
> ==
> 
> $ /usr/bin/time dd if=/testpool1/filebench/testfile of=/dev/null bs=128k
> count=1
> 1+0 records in
> 1+0 records out
> 
> real   20.1
> user    0.0
> sys     1.2
> 
> 62 MB/s
> 
> # /usr/bin/time dd if=/dev/dsk/c1t3d0 of=/dev/null bs=128k count=1
> 1+0 records in
> 1+0 records out
> 
> real   19.0
> user    0.0
> sys     2.6
> 
> 65 MB/s
> 
> 32-bit
> ==
> 
> /usr/bin/time dd if=/testpool1/filebench/testfile of=/dev/null bs=128k
> count=1
> 1+0 records in
> 1+0 records out
> 
> real   20.1
> user    0.0
> sys     1.7
> 
> 62 MB/s
> 
> # /usr/bin/time dd if=/dev/dsk/c1t3d0 of=/dev/null bs=128k count=1
> 1+0 records in
> 1+0 records out
> 
> real   19.1
> user    0.0
> sys     4.3
> 
> 65 MB/s
> 
> -j
> 
> On Wed, May 16, 2007 at 09:32:35AM -0700, Matthew Ahrens wrote:
> > Marko Milisavljevic wrote:
> > >now lets try:
> > >set zfs:zfs_prefetch_disable=1
> > >
> > >bingo!
> > >
> > >    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
> > >  609.0    0.0 77910.0    0.0  0.0  0.8    0.0    1.4   0  83 c0d0
> > >
> > >only 1-2 % slower than dd from /dev/dsk. Do you think this is a general
> > >32-bit problem, or specific to this combination of hardware?
> > 
> > I suspect that it's fairly generic, but more analysis will be necessary.
> > 
> > >Finally, should I file a bug somewhere regarding prefetch, or is this
> > >a known issue?
> > 
> > It may be related to 6469558, but yes please do file another bug report. 
> >  I'll have someone on the ZFS team take a look at it.
> > 
> > --matt
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-16 Thread johansen-osdev
At Matt's request, I did some further experiments and have found that
this appears to be particular to your hardware.  This is not a general
32-bit problem.  I re-ran this experiment on a 1-disk pool using a 32
and 64-bit kernel.  I got identical results:

64-bit
==

$ /usr/bin/time dd if=/testpool1/filebench/testfile of=/dev/null bs=128k
count=1
1+0 records in
1+0 records out

real   20.1
user    0.0
sys     1.2

62 MB/s

# /usr/bin/time dd if=/dev/dsk/c1t3d0 of=/dev/null bs=128k count=1
1+0 records in
1+0 records out

real   19.0
user    0.0
sys     2.6

65 MB/s

32-bit
==

/usr/bin/time dd if=/testpool1/filebench/testfile of=/dev/null bs=128k
count=1
1+0 records in
1+0 records out

real   20.1
user    0.0
sys     1.7

62 MB/s

# /usr/bin/time dd if=/dev/dsk/c1t3d0 of=/dev/null bs=128k count=1
1+0 records in
1+0 records out

real   19.1
user    0.0
sys     4.3

65 MB/s

-j

On Wed, May 16, 2007 at 09:32:35AM -0700, Matthew Ahrens wrote:
> Marko Milisavljevic wrote:
> >now lets try:
> >set zfs:zfs_prefetch_disable=1
> >
> >bingo!
> >
> >    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
> >  609.0    0.0 77910.0    0.0  0.0  0.8    0.0    1.4   0  83 c0d0
> >
> >only 1-2 % slower than dd from /dev/dsk. Do you think this is a general
> >32-bit problem, or specific to this combination of hardware?
> 
> I suspect that it's fairly generic, but more analysis will be necessary.
> 
> >Finally, should I file a bug somewhere regarding prefetch, or is this
> >a known issue?
> 
> It may be related to 6469558, but yes please do file another bug report. 
>  I'll have someone on the ZFS team take a look at it.
> 
> --matt
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-16 Thread Marko Milisavljevic

I will do that, but I'll do a couple of things first, to try to isolate the
problem more precisely:

- Use ZFS on a plain PATA drive on the onboard IDE connector to see if it works
with prefetch on this 32-bit machine.
- Use this PCI-SATA card in a 64-bit machine with 2 GB of RAM and see how it
performs there, and also compare it to that machine's onboard ICH7 SATA
interface (I assume I can force it to use the AHCI driver or not by changing
the mode of operation for the ICH7 in the BIOS).

Marko

On 5/16/07, Matthew Ahrens <[EMAIL PROTECTED]> wrote:



> Finally, should I file a bug somewhere regarding prefetch, or is this
> a known issue?

It may be related to 6469558, but yes please do file another bug report.
  I'll have someone on the ZFS team take a look at it.

--matt

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-16 Thread Matthew Ahrens

Marko Milisavljevic wrote:

now lets try:
set zfs:zfs_prefetch_disable=1

bingo!

    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  609.0    0.0 77910.0    0.0  0.0  0.8    0.0    1.4   0  83 c0d0

only 1-2 % slower than dd from /dev/dsk. Do you think this is a general
32-bit problem, or specific to this combination of hardware?


I suspect that it's fairly generic, but more analysis will be necessary.


Finally, should I file a bug somewhere regarding prefetch, or is this
a known issue?


It may be related to 6469558, but yes please do file another bug report. 
 I'll have someone on the ZFS team take a look at it.


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-16 Thread Matthew Ahrens

Marko Milisavljevic wrote:
Got excited too quickly on one thing... reading a single ZFS file does
give me almost the same speed as dd from /dev/dsk... around 78MB/s... however,
a 2-drive stripe still doesn't perform as well as it ought to:


Yes, that makes sense.  Because prefetch is disabled, ZFS will only 
issue one read i/o at a time (for that stream).  This is one of the 
reasons prefetch is important :-)


Eg, in your output below you can see that each disk is only busy 40% of 
the time when using ZFS with no prefetch:



    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  294.3    0.0 37675.6    0.0  0.0  0.4    0.0    1.4   0  40 c3d0
  293.0    0.0 37504.9    0.0  0.0  0.4    0.0    1.4   0  40 c3d1
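
(294 reads/s at 128 KB per read works out to roughly 37.6 MB/s per disk,
which matches the kr/s column above.)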

Simultaneous dd on those 2 drives from /dev/dsk runs at 46MB/s per drive.
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  800.4    0.0 44824.6    0.0  0.0  1.8    0.0    2.2   0  99 c3d0
  792.1    0.0 44357.9    0.0  0.0  1.8    0.0    2.2   0  98 c3d1


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-16 Thread Marko Milisavljevic

Got excited too quickly on one thing... reading a single ZFS file does give me
almost the same speed as dd from /dev/dsk... around 78MB/s... however, a
2-drive stripe still doesn't perform as well as it ought to:

    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  294.3    0.0 37675.6    0.0  0.0  0.4    0.0    1.4   0  40 c3d0
  293.0    0.0 37504.9    0.0  0.0  0.4    0.0    1.4   0  40 c3d1

Simultaneous dd on those 2 drives from /dev/dsk runs at 46MB/s per drive.
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  800.4    0.0 44824.6    0.0  0.0  1.8    0.0    2.2   0  99 c3d0
  792.1    0.0 44357.9    0.0  0.0  1.8    0.0    2.2   0  98 c3d1

(and in Linux it saturates PCI bus at 60MB/s per drive)

On 5/15/07, Marko Milisavljevic <[EMAIL PROTECTED]> wrote:


set zfs:zfs_prefetch_disable=1

bingo!

    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  609.0    0.0 77910.0    0.0  0.0  0.8    0.0    1.4   0  83 c0d0

only 1-2 % slower than dd from /dev/dsk. Do you think this is a general
32-bit problem, or specific to this combination of hardware? I am
using a PCI/SATA Sil3114 card, and other than ZFS, performance of this
interface has some limitations in Solaris. That is, a single drive gives
80MB/s, but doing dd from /dev/dsk/xyz simultaneously on 2 drives attached
to the card gives only 46MB/s each. On Linux, however, that gives
60MB/s each, close to saturating the theoretical throughput of the PCI bus.
Having both drives in a zpool stripe gives, with prefetch disabled,
close to 45MB/s each through dd from a zfs file.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-15 Thread Marko Milisavljevic

Hello Matthew,

Yes, my machine is 32-bit, with 1.5G of RAM.

-bash-3.00# echo ::memstat | mdb -k
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                     123249               481   32%
Anon                        33704               131    9%
Exec and libs                7637                29    2%
Page cache                   1116                 4    0%
Free (cachelist)           222661               869   57%
Free (freelist)              2685                10    1%

Total                      391052              1527
Physical                   391051              1527

-bash-3.00# echo ::arc | mdb -k
{
   anon = -759566176
   mru = -759566136
   mru_ghost = -759566096
   mfu = -759566056
   mfu_ghost = -759566016
   size = 0x17f20c00
   p = 0x160ef900
   c = 0x17f16ae0
   c_min = 0x400
   c_max = 0x1da0
   hits = 0x353b
   misses = 0x264b
   deleted = 0x13bc
   recycle_miss = 0x31
   mutex_miss = 0
   evict_skip = 0
   hash_elements = 0x127b
   hash_elements_max = 0x1a19
   hash_collisions = 0x61
   hash_chains = 0x4c
   hash_chain_max = 0x1
   no_grow = 1
}

now lets try:
set zfs:zfs_prefetch_disable=1

bingo!

    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  609.0    0.0 77910.0    0.0  0.0  0.8    0.0    1.4   0  83 c0d0

only 1-2 % slower than dd from /dev/dsk. Do you think this is a general
32-bit problem, or specific to this combination of hardware? I am
using a PCI/SATA Sil3114 card, and other than ZFS, performance of this
interface has some limitations in Solaris. That is, a single drive gives
80MB/s, but doing dd from /dev/dsk/xyz simultaneously on 2 drives attached
to the card gives only 46MB/s each. On Linux, however, that gives
60MB/s each, close to saturating the theoretical throughput of the PCI bus.
Having both drives in a zpool stripe gives, with prefetch disabled,
close to 45MB/s each through dd from a zfs file. I think that under
Solaris, this card is accessed through the ATA driver.
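
(Assuming a plain 32-bit/33 MHz PCI slot, the theoretical ceiling is about
133 MB/s, so 2 x 60 MB/s under Linux is indeed close to saturating it, while
2 x 46 MB/s under Solaris leaves a fair amount of headroom.)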

There shouldn't be any issues with inside vs. outside of the disk. All the
reading is done on the first gig or two of the drive, as there is nothing
else on them except one 2 gig file. (Well, I'm assuming a simple copy onto a
newly formatted ZFS drive puts it at the start of the drive.) The drives are
completely owned by ZFS, using zpool create c0d0 c0d1

Finally, should I file a bug somewhere regarding prefetch, or is this
a known issue?

Many thanks.

On 5/15/07, Matthew Ahrens <[EMAIL PROTECTED]> wrote:

Marko Milisavljevic wrote:
> I was trying to simply test bandwidth that Solaris/ZFS (Nevada b63) can
> deliver from a drive, and doing this: dd if=(raw disk) of=/dev/null gives
> me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me only
> 35MB/s!?.

Our experience is that ZFS gets very close to raw performance for streaming
reads (assuming that there is adequate CPU and memory available).

When doing reads, prefetching (and thus caching) is a critical component of
performance.  It may be that ZFS's prefetching or caching is misbehaving 
somehow.

Your machine is 32-bit, right?  This could be causing some caching pain...
How much memory do you have?  While you're running the test on ZFS, can you
send the output of:

echo ::memstat | mdb -k
echo ::arc | mdb -k

Next, try running your test with prefetch disabled, by putting
set zfs:zfs_prefetch_disable=1
in /etc/system and rebooting before running your test.  Send the 'iostat
-xnpcz' output while this test is running.

Finally, on modern drives the streaming performance can vary by up to 2x when
reading the outside vs. the inside of the disk.  If your pool had been used
before you created your test file, it could be laid out on the inside part of
the disk.  Then you would be comparing raw reads of the outside of the disk
vs. zfs reads of the inside of the disk.  When the pool is empty, ZFS will
start allocating from the outside, so you can try destroying and recreating
your pool and creating the file on the fresh pool.  Alternatively, create a
small partition (say, 10% of the disk size) and do your tests on that to
ensure that the file is not far from where your raw reads are going.

Let us know how that goes.

--matt


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-15 Thread Matthew Ahrens

Marko Milisavljevic wrote:

I was trying to simply test bandwidth that Solaris/ZFS (Nevada b63) can
deliver from a drive, and doing this: dd if=(raw disk) of=/dev/null gives
me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me only
35MB/s!?.


Our experience is that ZFS gets very close to raw performance for streaming 
reads (assuming that there is adequate CPU and memory available).


When doing reads, prefetching (and thus caching) is a critical component of 
performance.  It may be that ZFS's prefetching or caching is misbehaving somehow.


Your machine is 32-bit, right?  This could be causing some caching pain... 
How much memory do you have?  While you're running the test on ZFS, can you 
send the output of:


echo ::memstat | mdb -k
echo ::arc | mdb -k

Next, try running your test with prefetch disabled, by putting
set zfs:zfs_prefetch_disable=1
in /etc/system and rebooting before running your test.  Send the 'iostat 
-xnpcz' output while this test is running.
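
For example, the whole run might look roughly like this (pool and file names
are placeholders; pick any large file on the pool, and choose whatever iostat
interval you like):

echo "set zfs:zfs_prefetch_disable=1" >> /etc/system
reboot
dd if=/testpool/testfile of=/dev/null bs=128k &
iostat -xnpcz 5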


Finally, on modern drives the streaming performance can vary by up to 2x when 
reading the outside vs. the inside of the disk.  If your pool had been used 
before you created your test file, it could be laid out on the inside part of 
the disk.  Then you would be comparing raw reads of the outside of the disk 
vs. zfs reads of the inside of the disk.  When the pool is empty, ZFS will 
start allocating from the outside, so you can try destroying and recreating 
your pool and creating the file on the fresh pool.  Alternatively, create a 
small partition (say, 10% of the disk size) and do your tests on that to 
ensure that the file is not far from where your raw reads are going.
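
Roughly, that would be (pool and device names are placeholders, destroying
the pool loses everything on it, and the small slice would be created with
format first):

zpool destroy testpool
zpool create testpool c1t3d0
# or, for the small-partition variant, on a ~10% slice:
zpool create testpool c1t3d0s0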


Let us know how that goes.

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-14 Thread Richard Elling

Marko Milisavljevic wrote:

I was trying to simply test bandwidth that Solaris/ZFS (Nevada b63) can deliver 
from a drive, and doing this:
dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS) 
of=/dev/null gives me only 35MB/s!?. I am getting basically the same result 
whether it is single zfs drive, mirror or a stripe (I am testing with two 
Seagate 7200.10 320G drives hanging off the same interface card).


Checksum is a contributor.  AthlonXPs are long in the tooth.  Disable checksum 
and experiment.
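
For example (dataset name is a placeholder; checksums only change for data
written afterwards, so re-create the test file after flipping the property):

zfs set checksum=off testpool/test
zfs set checksum=on testpool/test
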
 -- richard


On the test machine I also have an old disk with UFS on PATA interface (Seagate 
7200.7 120G). dd from raw disk gives 58MB/s and dd from file on UFS gives 
45MB/s - far less relative slowdown compared to raw disk.

This is just an AthlonXP 2500+ with a 32-bit PCI SATA Sil3114 card, but 
nonetheless, the hardware has the bandwidth to fully saturate the hard drive, 
as seen by dd from the raw disk device. What is going on? Am I doing something 
wrong or is ZFS just not designed to be used on humble hardware?

My goal is to have it go fast enough to saturate gigabit ethernet - around 
75MB/s. I don't plan on replacing hardware - after all, Linux with RAID10 gives 
me this already. I was hoping to switch to Solaris/ZFS to get checksums (which 
wouldn't seem to account for slowness, because CPU stays under 25% during all 
this).

I can temporarily scrape together an x64 machine with ICH7 SATA interface - 
I'll try the same test with the same drives on that to eliminate 32-bitness and 
PCI slowness from the equation. And while someone will say dd has little to do 
with real-life file server performance - it actually has a lot to do with it, 
because most of the use of this server is to copy multi-gigabyte files to and fro a 
few times per day. Hardly any random access involved (fragmentation aside).
 
 
This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-14 Thread Al Hopper
On Mon, 14 May 2007, Marko Milisavljevic wrote:

[ ... reformatted ]
> I was trying to simply test bandwidth that Solaris/ZFS (Nevada b63) can
> deliver from a drive, and doing this: dd if=(raw disk) of=/dev/null
> gives me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me
> only 35MB/s!?. I am getting basically the same result whether it is
> single zfs drive, mirror or a stripe (I am testing with two Seagate
> 7200.10 320G drives hanging off the same interface card).

Which interface card?

... snip 

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-14 Thread Marko Milisavljevic
I was trying to simply test bandwidth that Solaris/ZFS (Nevada b63) can deliver 
from a drive, and doing this:
dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS) 
of=/dev/null gives me only 35MB/s!?. I am getting basically the same result 
whether it is single zfs drive, mirror or a stripe (I am testing with two 
Seagate 7200.10 320G drives hanging off the same interface card).
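
Concretely, the two commands were along these lines (device, pool and file
names here are placeholders):

dd if=/dev/dsk/c0d0 of=/dev/null bs=128k
dd if=/tank/testfile of=/dev/null bs=128k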

On the test machine I also have an old disk with UFS on PATA interface (Seagate 
7200.7 120G). dd from raw disk gives 58MB/s and dd from file on UFS gives 
45MB/s - far less relative slowdown compared to raw disk.

This is just an AthlonXP 2500+ with a 32-bit PCI SATA Sil3114 card, but 
nonetheless, the hardware has the bandwidth to fully saturate the hard drive, 
as seen by dd from the raw disk device. What is going on? Am I doing something 
wrong or is ZFS just not designed to be used on humble hardware?

My goal is to have it go fast enough to saturate gigabit ethernet - around 
75MB/s. I don't plan on replacing hardware - after all, Linux with RAID10 gives 
me this already. I was hoping to switch to Solaris/ZFS to get checksums (which 
wouldn't seem to account for slowness, because CPU stays under 25% during all 
this).

I can temporarily scrape together an x64 machine with ICH7 SATA interface - 
I'll try the same test with the same drives on that to eliminate 32-bitness and 
PCI slowness from the equation. And while someone will say dd has little to do 
with real-life file server performance - it actually has a lot to do with it, 
because most of the use of this server is to copy multi-gigabyte files to and fro a 
few times per day. Hardly any random access involved (fragmentation aside).
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss