On Tue, Aug 08, 2006 at 04:47:51PM +0200, Robert Milkowski wrote:
Hello przemolicc,
Tuesday, August 8, 2006, 3:54:26 PM, you wrote:
ppf Hello,
ppf Solaris 10 GA + latest recommended patches:
ppf while running dtrace:
ppf bash-3.00# dtrace -n 'io:::start [EMAIL PROTECTED],
ppf
On Tue, Aug 08, 2006 at 11:33:28AM -0500, Tao Chen wrote:
On 8/8/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Hello,
Solaris 10 GA + latest recommended patches:
while running dtrace:
bash-3.00# dtrace -n 'io:::start [EMAIL PROTECTED], args[2]->fi_pathname] =
count();}'
...
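For reference, a complete io:::start one-liner of this shape aggregates I/O counts by device and file path; this is a sketch of what the redacted command likely looked like, not the poster's exact text:

bash-3.00# dtrace -n 'io:::start { @[args[1]->dev_statname, args[2]->fi_pathname] = count(); }'

Here args[1] is the devinfo_t for the device and args[2] the fileinfo_t for the file involved in the I/O.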
John Danielson wrote:
Patrick Petit wrote:
David Edmondson wrote:
On 4 Aug 2006, at 1:22pm, Patrick Petit wrote:
When you're talking to Xen (using three control-A's) you should
hit 'q', which
Hello Torrey,
Wednesday, August 9, 2006, 5:39:54 AM, you wrote:
TM I read through the entire thread, I think, and have some comments.
TM * There are still some Granny Smith to Macintosh comparisons
TM going on. Different OS revs, it looks like different server types,
TM and I
So while I'm feeling optimistic :-) we really ought to be
able to do this in two I/O operations. If we have, say, 500K
of data to write (including all of the metadata), we should
be able to allocate a contiguous 500K block on disk and
write that with a single operation.
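A quick way to check whether the writes are in fact leaving as a few large operations is to look at the distribution of physical I/O sizes while the workload runs; a minimal DTrace sketch (my choice of aggregation, not something from the thread):

bash-3.00# dtrace -n 'io:::start { @["I/O size (bytes)"] = quantize(args[0]->b_bcount); }'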
Eric Schrock wrote:
Well the fact that it's a level 2 indirect block indicates why it can't
simply be removed. We don't know what data it refers to, so we can't
free the associated blocks. The panic on move is quite interesting -
after BFU give it another shot and file a bug if it still
mario heimel writes:
Hi.
I am very interested in ZFS compression on vs. off tests; maybe you can run
another one with the 3510.
I have seen a slight benefit with compression on in the following test
(also with high system load):
S10U2
V880, 8 cores, 16 GB RAM
(only six
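For anyone who wants to repeat the comparison, compression is toggled per dataset and the achieved ratio can be read back afterwards; a sketch, with the pool/dataset name made up:

# zfs set compression=on tank/test
# zfs get compression,compressratio tank/test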
Hello Roch,
Wednesday, August 9, 2006, 5:36:39 PM, you wrote:
R mario heimel writes:
Hi.
I am very interested in ZFS compression on vs. off tests; maybe you can run
another one with the 3510.
I have seen a slight benefit with compression on in the following test
(also with high
Eric Lowe wrote:
Eric Schrock wrote:
Well the fact that it's a level 2 indirect block indicates why it can't
simply be removed. We don't know what data it refers to, so we can't
free the associated blocks. The panic on move is quite interesting -
after BFU give it another shot and file a bug
Jesus Cea wrote:
Anton B. Rang wrote:
I have a two-vdev pool, just plain disk slices
If the vdevs are from the same disk, you are doomed.
ZFS tries to spread the load among the vdevs, so if the vdevs are from
the same disk, you will have seek hell.
It is not clear to me that this is a
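If it is not obvious whether the two vdevs really sit on the same physical disk, the pool layout shows the backing devices (pool name assumed):

# zpool status tank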
I'd like to get a consensus of how to describe ZFS RAID configs in a
short-hand method. For example,
single-level
no RAID (1 disk)
RAID-0 (dynamic stripe, >1 disk)
RAID-1
RAID-Z
RAID-Z2
multiple
...
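For reference, the single-level shapes above map onto zpool commands roughly as follows (pool and device names made up):

# zpool create tank c0t0d0 c0t1d0                        (RAID-0, dynamic stripe)
# zpool create tank mirror c0t0d0 c0t1d0                 (RAID-1)
# zpool create tank raidz c0t0d0 c0t1d0 c0t2d0           (RAID-Z)
# zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0   (RAID-Z2)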
So having 4 pools isn't a recommended config - I would destroy those 4
pools and just create 1 RAID-0 pool:
#zpool create sfsrocks c4t00173801014Bd0 c4t00173801014Cd0
c4t001738010140001Cd0 c4t0017380101400012d0
each of those devices is a 64 GB LUN, right?
I did it - created one
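A quick sanity check after creating it, using the pool name from the command above:

# zpool status sfsrocks
# zpool list sfsrocks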
I'm with ya on that one. I'd even go so far as to change single parity
RAID to single parity block. The talk of RAID throws people off
pretty easily, especially when you start layering ZFS on top of things
other than a JBOD.
Eric Schrock wrote:
I don't see why you would distinguish between
Hi,
Note that these are page cache rates, and that if the application pushes harder
and exposes the supporting device rates, there is another world of performance
to be observed. This is where ZFS gets to be a challenge, as the relationship
between the application-level I/O and the pool level is
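To expose those device-side rates underneath the page cache numbers, one can watch the pool and its disks directly while the load runs; a sketch:

# zpool iostat -v 1
# iostat -xn 1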
Hi Eric,
Thanks for the information.
I am aware of the recsize option and its intended use. However, when I
was exploring it to confirm the expected behavior, what I found was the
opposite!
The test case was build 38, Solaris 11, a 2 GB file, initially
created with 1 MB SW, and a recsize
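For reference, the recsize being discussed is the per-dataset recordsize property; a minimal sketch, with dataset name and value made up (the setting only affects files written after the change):

# zfs set recordsize=8K tank/testfs
# zfs get recordsize tank/testfs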