Hi Robert,
Out of curiosity would it be possible to see the same test but hitting
the disk with write operations instead of read?
Best Regards,
Jason
On 11/2/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello zfs-discuss,
Server: x4500, 2x Opteron 285 (dual-core), 16GB RAM, 48x500GB
file
Checksum the individual RAID-5 blocks, rather than the entire stripe?
Depending on the number of drives in your RAID-Z, this will
increase your metadata size by N-1 * 32 bytes. Would this not be an
undesirable cost increase on the metadata size?
In more detail: Allow the pointer to t
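A quick worked example of that cost, assuming one 32-byte checksum per data
device in the group:

  # 5-wide raid-z group => N-1 = 4 data devices per block
  echo $(( (5 - 1) * 32 ))   # 128 extra checksum bytes per block pointer,
                             # versus the single 32-byte checksum kept today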
Matt,
> Matthew Flanagan wrote:
> > mkfile 100m /data
> > zpool create tank /data
> ...
> > rm /data
> ...
> > panic[cpu0]/thread=2a1011d3cc0: ZFS: I/O failure
> (write on off 0: zio 60007432bc0 [L0
> unallocated] 4000L/400P DVA[0]=<0:b000:400>
> DVA[1]=<0:120a000:400> fletcher4 lzjb BE contiguou
> I don't see how you can get both end-to-end data integrity and
> read avoidance.
Checksum the individual RAID-5 blocks, rather than the entire stripe?
In more detail: Allow the pointer to the block to contain one checksum per
device used (the count will vary if you're using a RAID-Z style algo
Robert Milkowski wrote:
REP> P.S. did you upgrade the OS? I'd consider the need for 'zpool upgrade' to be
REP> a bug.
On one Thumper I reinstalled the OS to S10U3 beta and imported the default
pool. On another I put snv_49 and imported the pool. Then I destroyed the
pools and I'm experimenting with different
Hello Richard,
Thursday, November 2, 2006, 7:08:17 PM, you wrote:
REP> Robert Milkowski wrote:
>> Thumpers come with Solaris pre-installed and one pool already configured.
>> It's a collection of raid-z1 groups, but some groups are smaller than the
>> others.
>> I'll reconfigure it anyway but
Matthew Flanagan wrote:
mkfile 100m /data
zpool create tank /data
...
rm /data
...
panic[cpu0]/thread=2a1011d3cc0: ZFS: I/O failure (write on off 0: zio 60007432bc0
[L0 unallocated] 4000L/400P DVA[0]=<0:b000:400> DVA[1]=<0:120a000:400> fletcher4 lzjb
BE contiguous birth=6 fill=0 cksum=6721
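For anyone trying this on a scratch box, a minimal sketch of the sequence,
plus the safe variant that releases the vdev before the backing file goes
away (path is only illustrative):

  mkfile 100m /var/tmp/vdev0           # file-backed vdev
  zpool create tank /var/tmp/vdev0
  # ... use the pool ...
  zpool destroy tank                    # or: zpool export tank
  rm /var/tmp/vdev0
  # removing the file while the pool is still active is what triggers
  # the panic quoted above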
Cyril Plisko wrote:
On 11/2/06, Rick McNeal <[EMAIL PROTECTED]> wrote:
>> That's how the shareiscsi property works today.
>
> So, why is manipulating a LUN impossible via zfs?
>
A ZVOL is a single LU, so there's nothing to manipulate. Could you give
me an example of what you think should
Wow. Thanks for the data. This is somewhat consistent with what I
predict in RAIDoptimizer.
Robert Milkowski wrote:
Hello zfs-discuss,
Server: x4500, 2x Opteron 285 (dual-core), 16GB RAM, 48x500GB
filebench/randomread script, filesize=256GB
Your performance numbers are better than I predi
On 11/2/06, Rick McNeal <[EMAIL PROTECTED]> wrote:
>> That's how the shareiscsi property works today.
>
> So, why is manipulating a LUN impossible via zfs?
>
A ZVOL is a single LU, so there's nothing to manipulate. Could you give
me an example of what you think should/could be changed?
I
Cyril Plisko wrote:
On 11/2/06, Rick McNeal <[EMAIL PROTECTED]> wrote:
>
The administration of FC devices for the target mode needs some serious
thinking so that we don't end up with a real nightmare on our hands.
As you point out the FC world doesn't separate the port address from the
targe
Robert Milkowski wrote:
Thumpers come with Solaris pre-installed and one pool already configured.
It's a collection of raid-z1 groups, but some groups are smaller than the others.
I'll reconfigure it anyway, but I'm just curious what side-effects there can be
with such a config?
Any performanc
comment below...
Robert Milkowski wrote:
Hello Richard,
Wednesday, November 1, 2006, 11:36:14 PM, you wrote:
REP> Adam Leventhal wrote:
On Wed, Nov 01, 2006 at 04:00:43PM -0500, Torrey McMahon wrote:
Let's say server A has the pool with NFS-shared or iSCSI-shared
volumes. Server A exports t
On 11/2/06, Rick McNeal <[EMAIL PROTECTED]> wrote:
>
The administration of FC devices for the target mode needs some serious
thinking so that we don't end up with a real nightmare on our hands.
As you point out the FC world doesn't separate the port address from the
target name. Therefore each
Adam Leventhal wrote:
On Thu, Nov 02, 2006 at 12:10:06AM -0800, eric kustarz wrote:
Like the 'sharenfs' property, 'shareiscsi' indicates if a ZVOL should
be exported as an iSCSI target. The acceptable values for this
property
are 'on', 'off', and 'direct'. In the future, we
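Assuming it goes in as proposed, usage would presumably mirror sharenfs; a
hypothetical session (ZVOL name and size made up):

  zfs create -V 10g tank/iscsivol        # create the ZVOL
  zfs set shareiscsi=on tank/iscsivol    # export it as an iSCSI target
  zfs set shareiscsi=off tank/iscsivol   # stop exporting it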
Dick Davies wrote:
On 01/11/06, Rick McNeal <[EMAIL PROTECTED]> wrote:
I too must be missing something. I can't imagine why it would take 5
minutes to online a target. A ZVOL should automatically be brought
online since now initialization is required.
s/now/no/ ?
Correct. That should have
Cyril Plisko wrote:
On 11/1/06, Adam Leventhal <[EMAIL PROTECTED]> wrote:
> >What properties are you specifically interested in modifying?
>
> LUN for example. How would I configure LUN via zfs command ?
You can't. Forgive my ignorance about how iSCSI is deployed, but why
would
you want/nee
Hello zfs-discuss,
Server: x4500, 2x Opteron 285 (dual-core), 16GB RAM, 48x500GB
filebench/randomread script, filesize=256GB
2 disks for system, 2 disks as hot-spares, atime set to off for a
pool, cache_bshift set to 8K (2^13), recordsize untouched (default).
pool: 4x raid-z (5 disks) + 4x rai
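For reference, a layout like that is just several raidz vdevs in one pool;
something along these lines, with placeholder disk names and only the first
two of the 5-disk groups shown:

  zpool create tank \
      raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 \
      raidz c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0
  # ... the remaining groups follow the same pattern, one 'raidz' keyword
  # per group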
Torrey McMahon wrote:
This thread has diverged a bit but I'm still a little paranoid that a
sysadmin is going to move a pool from one host to another and all of a
sudden the new system is serving NFS shares and iSCSI LUNs, when really
they just wanted to copy some data or fix a
This thread has diverged a bit but I'm still a little paranoid that a
sysadmin is going to move a pool from one host to another and all of a
sudden the new system is serving NFS shares and iSCSI LUNs, when really
they just wanted to copy some data or fix a problem.
In a lot of
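One partial workaround today is to flip the property off right after the
import on the new host (pool name illustrative); it doesn't stop the shares
from coming up during the import itself, which is exactly the worry here:

  zpool import tank
  zfs set sharenfs=off tank    # descendants inherit, so one set at the top
                               # is usually enough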
Roch,
On 11/2/06 12:51 AM, "Roch - PAE" <[EMAIL PROTECTED]> wrote:
> This one is not yet fixed :
> 6415647 Sequential writing is jumping
Yep - I mistook this one for another problem with drive firmware on
pre-revenue units. Since Robert has a customer release X4500 it doesn't
have the firmware
Roch - PAE wrote:
> Chris Gerhard writes:
>
> > One question that keeps coming up in my discussions about ZFS is the lack
> > of user quotas.
> >
> > Typically this comes from people who have many tens of thousands
> > (30,000 - 100,000) of users where they feel that having a file system
> >
On Thu, Darren J Moffat wrote:
> Ceri Davies wrote:
> >For NFS, it's possible (but likely suboptimal) for clients to be
> >configured to mount the filesystem from server A and fail over to
> >server B, assuming that the pool import can happen quickly enough for
> >them not to receive ENOENT.
>
> I
The V120 has 4GB of RAM. On the HDS side we are in a RAID 5 on the LUN and not
sharing any ports on the McDATA, but with so much cache we aren't close to
taxing the disk. You mentioned the 50MB on the throughput, and that's something
we've been wondering about around here: what the average is fo
Ceri Davies wrote:
For NFS, it's possible (but likely suboptimal) for clients to be
configured to mount the filesystem from server A and fail over to
server B, assuming that the pool import can happen quickly enough for
them not to receive ENOENT.
IIRC NFS client side failover is really only in
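For reference, the client-side failover in question is the Solaris
replicated read-only mount, roughly (server names made up):

  # client fails over between the listed replicas; read-only is required
  mount -F nfs -o ro serverA:/export/home,serverB:/export/home /mnt/home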
On Wed, Nov 01, 2006 at 04:00:43PM -0500, Torrey McMahon wrote:
> Spencer Shepler wrote:
> >On Wed, Adam Leventhal wrote:
> >
> >>On Wed, Nov 01, 2006 at 01:17:02PM -0500, Torrey McMahon wrote:
> >>
> >>>Is there going to be a method to override that on the import? I can see
> >>>a situation
How much memory in the V210?
UFS will recycle its own pages while creating files that
are big. ZFS working against a large heap of free memory will
cache the data (why not?). The problem is that ZFS does not
know when to stop. During the subsequent memory/cache
reclaim, ZFS is potentially not
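On builds that expose the ARC size tunable, one blunt way to keep ZFS from
eating the whole heap is to cap it in /etc/system (the 1GB value is only an
example, and availability of the tunable varies by build):

  echo 'set zfs:zfs_arc_max = 0x40000000' >> /etc/system   # then reboot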
Chris Gerhard writes:
> One question that keeps coming up in my discussions about ZFS is the lack of
> user quotas.
>
> Typically this comes from people who have many tens of thousands
> (30,000 - 100,000) of users where they feel that having a file system
> per user will not be manageab
Chris Gerhard wrote:
One question that keeps coming up in my discussions about ZFS is the lack of
user quotas.
Typically this comes from people who have many tens of thousands (30,000 -
100,000) of users where they feel that having a file system per user will not
be manageable. I would agree
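The filesystem-per-user alternative the thread keeps coming back to looks
roughly like this (names made up), which is precisely what becomes unwieldy
at 30,000+ users:

  zfs create tank/home
  zfs create tank/home/alice
  zfs set quota=1g tank/home/alice     # per-user cap via the quota property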
On Thu, Nov 02, 2006 at 12:10:06AM -0800, eric kustarz wrote:
> > Like the 'sharenfs' property, 'shareiscsi' indicates if a ZVOL should
> > be exported as an iSCSI target. The acceptable values for this
> > property
> > are 'on', 'off', and 'direct'. In the future, we may support o
Luke Lonergan writes:
> Robert,
>
> > I believe it's not solved yet, but you may want to try with
> > latest nevada and see if there's a difference.
>
> It's fixed in the upcoming Solaris 10 U3 and also in Solaris Express
> post build 47 I think.
>
> - Luke
>
This one is not yet fi
Hi.
Thumpers come with Solaris pre-installed and one pool already configured.
It's a collection of raid-z1 groups, but some groups are smaller than the others.
I'll reconfigure it anyway, but I'm just curious what side-effects there can be
with such a config?
Any performance hit? All space will
Hello Richard,
Wednesday, November 1, 2006, 11:36:14 PM, you wrote:
REP> Adam Leventhal wrote:
>> On Wed, Nov 01, 2006 at 04:00:43PM -0500, Torrey McMahon wrote:
>>> Let's say server A has the pool with NFS-shared or iSCSI-shared
>>> volumes. Server A exports the pool or goes down. Server B imp
Adam Leventhal wrote:
Rick McNeal and I have been working on building support for sharing ZVOLs
as iSCSI targets directly into ZFS. Below is the proposal I'll be
submitting to PSARC. Comments and suggestions are welcome.
Adam
---8<---
iSCSI/ZFS Integration
A. Overview
The goal of this projec