There's been discussion on the ZFS blog:
http://www.opensolaris.org/jive/thread.jspa?messageID=150818&tstart=0#150818
Someone else did a benchmark comparing ZFS on hardware raid vs software
raid. Software was faster on this system, plus you get the ECC type stuff
with ZFS.
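(The "ECC type stuff" here refers to ZFS's end-to-end block checksumming, which
catches silent corruption that a hardware RAID controller won't report. A minimal
Python sketch of the *idea* — this is an illustration, not ZFS's actual mechanism
or code:)

```python
import hashlib

def write_block(storage, addr, data):
    # Store the data together with a checksum of it, roughly as ZFS
    # keeps a checksum for every block it writes.
    storage[addr] = (data, hashlib.sha256(data).hexdigest())

def read_block(storage, addr):
    # Re-verify the checksum on every read, so corruption that happened
    # anywhere below (disk, cable, controller) is detected.
    data, checksum = storage[addr]
    if hashlib.sha256(data).hexdigest() != checksum:
        raise IOError("checksum mismatch: silent corruption detected")
    return data

storage = {}
write_block(storage, 0, b"hello")
assert read_block(storage, 0) == b"hello"

# Simulate a bit flip the disk controller never reports:
storage[0] = (b"hellp", storage[0][1])
try:
    read_block(storage, 0)
except IOError as e:
    print(e)  # prints "checksum mismatch: silent corruption detected"
```

A plain hardware RAID volume has no analogue of the read-side check: it trusts
whatever the disk returns.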
On 8/30/07, Dan
I didn't get in until late last night so I didn't have a lot of time
to play around with movie rotating. But I did happen to try the
mencoder hack that VirginSnow recommended, and that worked well
enough for my needs.
I want to thank everybody for their helpful responses. I intend on
checking
On 8/31/07, Tom Buskey [EMAIL PROTECTED] wrote:
Someone else did a benchmark comparing ZFS on hardware raid vs software
raid. Software was faster on this system, plus you get the ECC type stuff
with ZFS.
I regard most such storage-related benchmarks with a great deal of
suspicion. They
On 8/31/07, Ben Scott [EMAIL PROTECTED] wrote:
On 8/31/07, Tom Buskey [EMAIL PROTECTED] wrote:
Someone else did a benchmark comparing ZFS on hardware raid vs software
raid. Software was faster on this system, plus you get the ECC type
stuff
with ZFS.
I regard most such
On 8/31/07, Tom Buskey [EMAIL PROTECTED] wrote:
IMHO we're going to see more and more cores in a system by default ...
Sure, if you've actually got a surplus of cores. Going forward, for
most small systems, that's going to be true. But it's not a given for
everything today. That's all I'm
I regard most such storage-related benchmarks with a great deal of
suspicion. They always seem to assume the computer won't be doing
anything else when the filesystem is being used.
Well said.
Amplifying ...
ALL benchmarks are at best hints of reality, since they're ALL
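(The point about benchmarks assuming an otherwise idle machine is easy to
demonstrate with a toy harness — this is a sketch of the methodology, not any
benchmark discussed in this thread; the workload and iteration counts are
arbitrary assumptions:)

```python
import hashlib
import multiprocessing
import time

def busy(stop):
    # Simulated background load: spin until told to stop.
    while not stop.is_set():
        pass

def benchmark():
    # Stand-in "storage" workload: hash a 1 MB buffer repeatedly.
    buf = b"x" * 1_000_000
    t0 = time.perf_counter()
    for _ in range(50):
        hashlib.sha256(buf).digest()
    return time.perf_counter() - t0

if __name__ == "__main__":
    idle_time = benchmark()

    # Re-run the identical benchmark while every core is busy with other work.
    stop = multiprocessing.Event()
    workers = [multiprocessing.Process(target=busy, args=(stop,))
               for _ in range(multiprocessing.cpu_count())]
    for w in workers:
        w.start()
    loaded_time = benchmark()
    stop.set()
    for w in workers:
        w.join()

    print(f"idle: {idle_time:.3f}s  loaded: {loaded_time:.3f}s")
```

On a loaded machine the same "benchmark" reports a noticeably worse number,
which is exactly why a software-RAID result measured on an idle box may not
transfer to a busy server.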
On 8/31/07, Ben Scott [EMAIL PROTECTED] wrote:
On 8/31/07, Tom Buskey [EMAIL PROTECTED] wrote:
IMHO we're going to see more and more cores in a system by default ...
Sure, if you've actually got a surplus of cores. Going forward, for
most small systems, that's going to be true. But it's
On 8/31/07, Tom Buskey [EMAIL PROTECTED] wrote:
... Going forward, for most small systems, that's going to be true. ...
... Hence my saying *we're going to see* above. ...
Hence my saying that's going to be true above. ;-)
-- Ben
The annual InfoeXchange conference, run by the Software Association of
New Hampshire (SWaNH), will be on October 11th in Bedford, NH. A
discounted rate of $89 for SWaNH members and $109 for the public expires
September 1; after that, you'll pay $10 more.
Conference Announcement:
The bottleneck will be, IMO, I/O. Disk data will go through that
whether you have hardware or software RAID.
The bottleneck always has been, and always will be, I/O. I doubt the
day will ever come when a fetch from disk takes the same time as a
fetch from memory.
Some of the latest
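(The gap is several orders of magnitude. The figures below are assumed
ballpark numbers for 2007-era hardware, not measurements from this thread —
roughly 100 ns for a DRAM fetch versus roughly 10 ms for a spinning-disk seek:)

```python
# Rough order-of-magnitude comparison, using assumed ballpark latencies:
dram_ns = 100               # ~100 ns per memory fetch (assumption)
disk_seek_ns = 10_000_000   # ~10 ms average seek + rotation (assumption)

ratio = disk_seek_ns / dram_ns
print(f"a disk fetch is ~{ratio:,.0f}x slower than a memory fetch")
# prints "a disk fetch is ~100,000x slower than a memory fetch"
```

With a five-orders-of-magnitude gap, shaving CPU cycles off the RAID path
(hardware vs. software) rarely moves the needle compared to avoiding the seek.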
On 8/31/07, Dan Miller [EMAIL PROTECTED] wrote:
The bottleneck always has been, and always will be, I/O. I doubt the
day will ever come when a fetch from disk takes the same time as a
fetch from memory.
That assumes the constraints are not cumulative, and that workload
is a fungible thing,
On 8/31/07, Ben Scott [EMAIL PROTECTED] wrote:
On the other hand, something which is keeping CPU busy while also
doing some I/O (say, processing of a dataset), may well find that
latency stacks up, as the throughput is delayed first by an I/O wait,
and then a processor wait.
It may be
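(The "latency stacks up" point can be put in numbers with a toy two-stage
model — an I/O wait followed by a CPU phase per item. The 10 ms / 6 ms stage
costs are made-up values for illustration. If the stages serialize, every item
pays both costs; if the next item's I/O can overlap the current item's compute,
the slower stage dominates instead:)

```python
# Toy model: each of `items` work units needs an I/O wait then a CPU phase.
io_ms, cpu_ms, items = 10, 6, 100

# Fully serialized: every item pays both latencies back to back.
serialized = items * (io_ms + cpu_ms)                     # 1600 ms

# Overlapped (pipelined): the bottleneck stage sets the pace, plus one
# pass through the faster stage for the last item.
overlapped = items * max(io_ms, cpu_ms) + min(io_ms, cpu_ms)  # 1006 ms

print(f"serialized: {serialized} ms, overlapped: {overlapped} ms")
```

When the workload *can't* overlap (the compute depends on the data just read),
you get the serialized figure: the I/O wait and the processor wait add up, just
as described above.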