Re: ZFS vs EXT4 vs XFS

2007-08-31 Thread Tom Buskey
There's been discussion on the ZFS blog:
http://www.opensolaris.org/jive/thread.jspa?messageID=150818&tstart=0#150818

Someone else did a benchmark comparing ZFS on hardware RAID vs. software
RAID.  Software RAID was faster on that system, plus you get ZFS's end-to-end
checksumming (the ECC-type stuff).
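
A toy sketch of what that checksumming buys you -- this is only an
illustration of the idea (the block size and hash choice are arbitrary), not
ZFS's actual on-disk format or algorithm:

# Toy model of end-to-end block checksumming, the "ECC type stuff":
# every block is stored with a checksum that is re-verified on read,
# so silent corruption is detected instead of silently returned.
# NOT ZFS's real format; block size and hash are arbitrary choices.
import hashlib

BLOCK_SIZE = 4096

def write_block(store, addr, data):
    store[addr] = (data, hashlib.sha256(data).digest())

def read_block(store, addr):
    data, cksum = store[addr]
    if hashlib.sha256(data).digest() != cksum:
        raise IOError("checksum mismatch in block %d (silent corruption)" % addr)
    return data

disk = {}
write_block(disk, 0, b"x" * BLOCK_SIZE)
data, cksum = disk[0]
disk[0] = (b"y" + data[1:], cksum)   # simulate a bit flip on the platter
try:
    read_block(disk, 0)              # caught instead of returned as good data
except IOError as err:
    print("caught:", err)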


On 8/30/07, Dan Miller [EMAIL PROTECTED] wrote:

 Interesting benchmarks:

 http://tastic.brillig.org/%7Ejwb/zfs-xfs-ext4.html

 I would also like to see Reiserfs4 in those results to see how it
 compares, but it is not included.

 Dan



Re: ZFS vs EXT4 vs XFS

2007-08-31 Thread Ben Scott
On 8/31/07, Tom Buskey [EMAIL PROTECTED] wrote:
 Someone else did a benchmark comparing ZFS on hardware RAID vs. software
 RAID.  Software RAID was faster on that system, plus you get ZFS's
 end-to-end checksumming (the ECC-type stuff).

  I regard most such storage-related benchmarks with a great deal of
suspicion.  They always seem to assume the computer won't be doing
anything else when the filesystem is being used.

-- Ben


Re: ZFS vs EXT4 vs XFS

2007-08-31 Thread Tom Buskey
On 8/31/07, Ben Scott [EMAIL PROTECTED] wrote:

 On 8/31/07, Tom Buskey [EMAIL PROTECTED] wrote:
  Someone else did a benchmark comparing ZFS on hardware RAID vs. software
  RAID.  Software RAID was faster on that system, plus you get ZFS's
  end-to-end checksumming (the ECC-type stuff).

   I regard most such storage-related benchmarks with a great deal of
 suspicion.  They always seem to assume the computer won't be doing
 anything else when the filesystem is being used.



The test was on a Sun Fire X4500 (aka Thumper): multiple dual-core
Opterons.

IMHO we're going to see more and more cores in a system by default, and most
software will not keep up.  I think the OS will be able to take advantage of
the multiple cores while most apps will not.  Unless you're running lots of
apps, cores will sit idle.  I'd rather have one of those cores consumed by
software RAID than have it sit idle.  Maybe that won't be quite as fast as
dedicated hardware RAID, but the core is in my system already, so I can buy a
$20 multi-port SATA card instead of a $xxx multi-port SATA RAID card.
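
The work that core would be absorbing is mostly parity math.  A rough Python
sketch of RAID-5 style parity (just XOR across the data stripes; real md or
raidz code does far more, and the stripe contents here are made up):

# RAID-5 style parity in miniature: parity is the XOR of the data
# stripes, so any single lost stripe can be rebuilt by XORing the
# survivors.  Roughly the work a spare core does under software RAID;
# real implementations are far more involved.
from functools import reduce

def xor_stripes(stripes):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*stripes))

data = [b"\x11" * 4096, b"\x22" * 4096, b"\x33" * 4096]   # made-up stripes
parity = xor_stripes(data)

# Pretend the disk holding stripe 1 died; rebuild it from the rest:
rebuilt = xor_stripes([data[0], data[2], parity])
assert rebuilt == data[1]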

The bottleneck will be, IMO, I/O.  Disk data has to go through that path
whether you have hardware or software RAID.


Re: ZFS vs EXT4 vs XFS

2007-08-31 Thread Ben Scott
On 8/31/07, Tom Buskey [EMAIL PROTECTED] wrote:
 IMHO we're going to see more and more cores in a system by default ...

  Sure, if you've actually got a surplus of cores.  Going forward, for
most small systems, that's going to be true.  But it's not a given for
everything today.  That's all I'm saying.

-- Ben


Re: ZFS vs EXT4 vs XFS

2007-08-31 Thread Bill Ricker
   I regard most such storage-related benchmarks with a great deal of
 suspicion.  They always seem to assume the computer won't be doing
 anything else when the filesystem is being used.

Well said.

Amplifying ...

ALL benchmarks are at best hints of reality, since they're ALL
over-simplifications.

It takes an actual workload simulation to properly benchmark a balanced
system design (like IBM's System p) against systems where each subsystem was
tuned to a simplistic benchmark.

-- 
Bill
[EMAIL PROTECTED] [EMAIL PROTECTED]


Re: ZFS vs EXT4 vs XFS

2007-08-31 Thread Tom Buskey
On 8/31/07, Ben Scott [EMAIL PROTECTED] wrote:

 On 8/31/07, Tom Buskey [EMAIL PROTECTED] wrote:
  IMHO we're going to see more and more cores in a system by default ...

   Sure, if you've actually got a surplus of cores.  Going forward, for
 most small systems, that's going to be true.  But it's not a given for
 everything today.  That's all I'm saying.



Hence my saying "we're going to see" above.  Most new computers and CPUs I
see advertised today are dual-core or more.


Re: ZFS vs EXT4 vs XFS

2007-08-31 Thread Ben Scott
On 8/31/07, Tom Buskey [EMAIL PROTECTED] wrote:
  ... Going forward, for most small systems, that's going to be true.  ...

 ... Hence my saying "we're going to see" above.  ...

  Hence my saying "that's going to be true" above.  ;-)

-- Ben


Re: ZFS vs EXT4 vs XFS

2007-08-31 Thread Dan Miller

 The bottleneck will be, IMO, I/O.  Disk data has to go through that path
 whether you have hardware or software RAID.

The bottleneck always has been, and always will be, I/O.  I doubt the day
will ever come when a fetch out to disk takes the same time as a fetch out to
memory.

Some of the latest flash drives make it faster, but still nowhere near the
time it takes to go out to memory.
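
Some rough numbers make the gap concrete.  The latencies below are ballpark
assumptions, not measurements of any particular hardware:

# Back-of-envelope latency comparison; all figures are rough, assumed
# ballpark values, not benchmarks of any specific device.
latencies_ns = {
    "DRAM access":        100,        # ~100 ns
    "flash random read":  100000,     # ~100 microseconds
    "7200 rpm disk seek": 10000000,   # ~10 ms
}

dram = latencies_ns["DRAM access"]
for name, ns in latencies_ns.items():
    print("%-20s ~%10d ns  (%8.0fx DRAM)" % (name, ns, ns / dram))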

Dan


Re: ZFS vs EXT4 vs XFS

2007-08-31 Thread Ben Scott
On 8/31/07, Dan Miller [EMAIL PROTECTED] wrote:
 The bottleneck always has been, and always will be, I/O.  I doubt the day
 will ever come when a fetch out to disk takes the same time as a fetch out
 to memory.

  That assumes the constraints are not cumulative, and that workload
is a fungible thing, equally affected by both I/O and CPU wait.

  A process which is I/O bound and light on the CPU will likely behave
in the manner you describe.  Say, copying a file.  The CPU isn't doing
anything hard anyway.  The system is just pushing bytes through
buffers.  If the CPU is put to work doing storage management, so much
the better.

  On the other hand, something which keeps the CPU busy while also doing
some I/O (say, processing a dataset) may well find that latency stacks up, as
throughput is delayed first by an I/O wait and then by a processor wait.
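
A toy model of that stacking, with the I/O and CPU phases simulated by
sleeps (the durations and the prefetch-one-chunk scheme are purely
illustrative):

# Toy model of latency stacking: when a job alternates I/O waits and
# CPU work, the delays add up unless the two can be overlapped.
import time
from concurrent.futures import ThreadPoolExecutor

IO_WAIT = 0.05    # pretend each chunk takes 50 ms to read
CPU_WORK = 0.05   # and 50 ms to process
CHUNKS = 10

def read_chunk(i):
    time.sleep(IO_WAIT)    # simulated disk wait
    return i

def process(chunk):
    time.sleep(CPU_WORK)   # simulated CPU-bound work

def serial():
    # I/O wait then CPU wait, back to back: ~CHUNKS * (IO_WAIT + CPU_WORK)
    for i in range(CHUNKS):
        process(read_chunk(i))

def overlapped():
    # Prefetch the next chunk while crunching the current one:
    # ~IO_WAIT + CHUNKS * max(IO_WAIT, CPU_WORK)
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(read_chunk, 0)
        for i in range(CHUNKS):
            chunk = pending.result()
            if i + 1 < CHUNKS:
                pending = pool.submit(read_chunk, i + 1)
            process(chunk)

for fn in (serial, overlapped):
    start = time.time()
    fn()
    print("%-10s %.2f s" % (fn.__name__, time.time() - start))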

-- Ben


Re: ZFS vs EXT4 vs XFS

2007-08-31 Thread Ben Scott
On 8/31/07, Ben Scott [EMAIL PROTECTED] wrote:
   On the other hand, something which keeps the CPU busy while also doing
 some I/O (say, processing a dataset) may well find that latency stacks up,
 as throughput is delayed first by an I/O wait and then by a processor wait.

  It may be worth pointing out that I don't necessarily discount the
software approach to storage management.  If I've got a workload
that is CPU-bound, and it is being dragged down by I/O wait due to
storage management being done in software, I've got two options: Buy
dedicated storage controllers, or buy more general-purpose cores.  The
GP cores may well be the more effective route.  In addition to often
being cheaper, scaling better, and distributing better, I can use GP
cores for other things when I/O isn't an issue.

-- Ben


ZFS vs EXT4 vs XFS

2007-08-30 Thread Dan Miller
Interesting benchmarks:

http://tastic.brillig.org/%7Ejwb/zfs-xfs-ext4.html

I would also like to see Reiserfs4 in those results to see how it
compares, but it is not included.

Dan