[zfs-discuss] Default zpool on Thumpers

2006-11-02 Thread Robert Milkowski
Hi. Thumpers come with Solaris pre-installed and one pool already configured. It's a collection of raid-z1 groups, but some groups are smaller than the others. I'll reconfigure it anyway, but I'm just curious what side effects there can be with such a config. Any performance hit? All space
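For illustration, a pool built from uneven raid-z1 groups is just a zpool create with raidz vdevs of different widths; the device names and group sizes below are hypothetical, not the factory Thumper layout:

    # hypothetical sketch: one pool built from raid-z1 groups of unequal width
    zpool create tank \
        raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
    zpool status tank    # shows each raidz1 group and its member disks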

RE: [zfs-discuss] ZFS Performance Question

2006-11-02 Thread Roch - PAE
Luke Lonergan writes: Robert, I believe it's not solved yet but you may want to try with the latest Nevada and see if there's a difference. It's fixed in the upcoming Solaris 10 U3 and also in Solaris Express post build 47, I think. - Luke This one is not yet fixed :

Re: [zfs-discuss] User quotas. A recurring question

2006-11-02 Thread Darren J Moffat
Chris Gerhard wrote: One question that keeps coming up in my discussions about ZFS is the lack of user quotas. Typically this comes from people who have many tens of thousands (30,000 - 100,000) of users where they feel that having a file system per user will not be manageable. I would agree
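For context, the filesystem-per-user approach being debated looks roughly like the sketch below; the pool and user names are illustrative only:

    # one ZFS filesystem per user, each with its own quota
    zfs create tank/home
    zfs set mountpoint=/export/home tank/home
    zfs create tank/home/alice
    zfs set quota=1g tank/home/alice

With tens of thousands of users this means tens of thousands of filesystems, which is the manageability concern being raised.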

Re: [zfs-discuss] User quotas. A recurring question

2006-11-02 Thread Roch - PAE
Chris Gerhard writes: One question that keeps coming up in my discussions about ZFS is the lack of user quotas. Typically this comes from people who have many tens of thousands (30,000 - 100,000) of users where they feel that having a file system per user will not be manageable. I

[zfs-discuss] Re: [storage-discuss] ZFS/iSCSI target integration

2006-11-02 Thread eric kustarz
Adam Leventhal wrote: Rick McNeal and I have been working on building support for sharing ZVOLs as iSCSI targets directly into ZFS. Below is the proposal I'll be submitting to PSARC. Comments and suggestions are welcome. Adam ---8<--- iSCSI/ZFS Integration A. Overview The goal of this

Re: [storage-discuss] Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-02 Thread Robert Milkowski
Hello Richard, Wednesday, November 1, 2006, 11:36:14 PM, you wrote: REP Adam Leventhal wrote: On Wed, Nov 01, 2006 at 04:00:43PM -0500, Torrey McMahon wrote: Let's say server A has the pool with NFS-shared, or iSCSI-shared, volumes. Server A exports the pool or goes down. Server B imports the
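The handoff being discussed is the usual export/import sequence, sketched here; the pool name is hypothetical:

    # on server A (planned handoff); skip this step if A has already gone down
    zpool export tank
    # on server B
    zpool import tank      # or 'zpool import -f tank' after an unclean takeover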

[zfs-discuss] Re: [storage-discuss] ZFS/iSCSI target integration

2006-11-02 Thread Adam Leventhal
On Thu, Nov 02, 2006 at 12:10:06AM -0800, eric kustarz wrote: Like the 'sharenfs' property, 'shareiscsi' indicates if a ZVOL should be exported as an iSCSI target. The acceptable values for this property are 'on', 'off', and 'direct'. In the future, we may support other
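Going by the proposal text quoted above, usage would presumably look like the sketch below; this is the proposed interface, not syntax that shipped at the time:

    # create a 10 GB zvol and export it as an iSCSI target (proposed 'shareiscsi')
    zfs create -V 10g tank/vol0
    zfs set shareiscsi=on tank/vol0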

Re: [zfs-discuss] ZFS Performance Question

2006-11-02 Thread Roch - PAE
How much memory is in the V210? UFS will recycle its own pages while creating big files. ZFS, working against a large heap of free memory, will cache the data (why not?). The problem is that ZFS does not know when to stop. During the subsequent memory/cache reclaim, ZFS is potentially not
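On builds that expose the tunable, the runaway caching described here can be bounded by capping the ARC from /etc/system; whether zfs_arc_max is available depends on the build in use, so treat this as an assumption rather than a guaranteed fix:

    * /etc/system entry: cap the ZFS ARC at 1 GB (tunable availability varies by build)
    set zfs:zfs_arc_max = 0x40000000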

Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-02 Thread Ceri Davies
On Wed, Nov 01, 2006 at 04:00:43PM -0500, Torrey McMahon wrote: Spencer Shepler wrote: On Wed, Adam Leventhal wrote: On Wed, Nov 01, 2006 at 01:17:02PM -0500, Torrey McMahon wrote: Is there going to be a method to override that on the import? I can see a situation where you want to

Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-02 Thread Darren J Moffat
Ceri Davies wrote: For NFS, it's possible (but likely suboptimal) for clients to be configured to mount the filesystem from server A and fail over to server B, assuming that the pool import can happen quickly enough for them not to receive ENOENT. IIRC NFS client side failover is really only
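The client-side failover Ceri describes is the replicated mount form, which on Solaris is read-only; the server names and paths below are hypothetical:

    # read-only replicated NFS mount; the client fails over between the listed servers
    mount -F nfs -o ro serverA:/export/data,serverB:/export/data /mnt/data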

[zfs-discuss] Re: ZFS Performance Question

2006-11-02 Thread Jay Grogan
The V120 has 4GB of RAM; on the HDS side we are in a RAID 5 on the LUN and not sharing any ports on the McDATA, but with so much cache we aren't close to taxing the disk. You mentioned the 50MB figure on throughput, and that's something we've been wondering about around here, as to what the average is

Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-02 Thread Spencer Shepler
On Thu, Darren J Moffat wrote: Ceri Davies wrote: For NFS, it's possible (but likely suboptimal) for clients to be configured to mount the filesystem from server A and fail over to server B, assuming that the pool import can happen quickly enough for them not to receive ENOENT. IIRC NFS

Re: [zfs-discuss] User quotas. A recurring question

2006-11-02 Thread Robert Petkus
Roch - PAE wrote: Chris Gerhard writes: One question that keeps coming up in my discussions about ZFS is the lack of user quotas. Typically this comes from people who have many tens of thousands (30,000 - 100,000) of users where they feel that having a file system per user

Re: [zfs-discuss] ZFS Performance Question

2006-11-02 Thread Luke Lonergan
Roch, On 11/2/06 12:51 AM, Roch - PAE [EMAIL PROTECTED] wrote: This one is not yet fixed : 6415647 Sequential writing is jumping Yep - I mistook this one for another problem with drive firmware on pre-revenue units. Since Robert has a customer-release X4500, it doesn't have the firmware

Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-02 Thread Rick McNeal
Cyril Plisko wrote: On 11/1/06, Adam Leventhal [EMAIL PROTECTED] wrote: What properties are you specifically interested in modifying? LUN for example. How would I configure a LUN via the zfs command? You can't. Forgive my ignorance about how iSCSI is deployed, but why would you want/need to

Re: [zfs-discuss] Re: [storage-discuss] ZFS/iSCSI target integration

2006-11-02 Thread Rick McNeal
Adam Leventhal wrote: On Thu, Nov 02, 2006 at 12:10:06AM -0800, eric kustarz wrote: Like the 'sharenfs' property, 'shareiscsi' indicates if a ZVOL should be exported as an iSCSI target. The acceptable values for this property are 'on', 'off', and 'direct'. In the future, we

Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-02 Thread Cyril Plisko
On 11/2/06, Rick McNeal [EMAIL PROTECTED] wrote: The administration of FC devices for the target mode needs some serious thinking so that we don't end up with a real nightmare on our hands. As you point out the FC world doesn't separate the port address from the target name. Therefore each FC

Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-02 Thread Cyril Plisko
On 11/2/06, Rick McNeal [EMAIL PROTECTED] wrote: That's how the shareiscsi property works today. So why is manipulating the LUN impossible via zfs? A ZVOL is a single LU, so there's nothing to manipulate. Could you give me an example of what you think should/could be changed? I was

Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-02 Thread Rick McNeal
Cyril Plisko wrote: On 11/2/06, Rick McNeal [EMAIL PROTECTED] wrote: That's how the shareiscsi property works today. So why is manipulating the LUN impossible via zfs? A ZVOL is a single LU, so there's nothing to manipulate. Could you give me an example of what you think should/could
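For target-side details that the zfs command does not expose, the Solaris iSCSI target administration tool can at least show what shareiscsi created; a minimal sketch:

    # list the iSCSI targets (and their backing LUs) known to the local target daemon
    iscsitadm list target -v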

Re[2]: [zfs-discuss] Default zpool on Thumpers

2006-11-02 Thread Robert Milkowski
Hello Richard, Thursday, November 2, 2006, 7:08:17 PM, you wrote: REP Robert Milkowski wrote: Thumpers come with Solaris pre-installed and one pool already configured. It's a collection of raid-z1 groups, but some groups are smaller than the others. I'll reconfigure it anyway, but I'm

Re: [zfs-discuss] Default zpool on Thumpers

2006-11-02 Thread Richard Elling - PAE
Robert Milkowski wrote: REP P.S. did you upgrade the OS? I'd consider the need for 'zpool upgrade' to be REP a bug. On one Thumper I reinstalled the OS to S10U3 beta and imported the default pool. On another I put snv_49 and imported the pool. Then I destroyed the pools and I'm experimenting with different
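For reference, the 'zpool upgrade' step under discussion is simply:

    zpool upgrade -v     # list the on-disk versions this build supports
    zpool upgrade tank   # move the pool to the current on-disk version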

[zfs-discuss] Re: raid-z random read performance

2006-11-02 Thread Anton B. Rang
I don't see how you can get both end-to-end data integrity and read avoidance. Checksum the individual RAID-5 blocks, rather than the entire stripe? In more detail: Allow the pointer to the block to contain one checksum per device used (the count will vary if you're using a RAID-Z style

[zfs-discuss] Re: reproducible zfs panic on Solaris 10 06/06

2006-11-02 Thread Matthew Flanagan
Matt, Matthew Flanagan wrote: mkfile 100m /data zpool create tank /data ... rm /data ... panic[cpu0]/thread=2a1011d3cc0: ZFS: I/O failure (write on unknown off 0: zio 60007432bc0 [L0 unallocated] 4000L/400P DVA[0]=0:b000:400 DVA[1]=0:120a000:400 fletcher4 lzjb BE contiguous birth=6
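The reproduction quoted above, written out as it would be run (the steps are taken directly from the report; the panic reported follows once the pool's backing file is gone):

    # back a pool with a plain file, then remove the file out from under it
    mkfile 100m /data
    zpool create tank /data
    rm /data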

Re: [zfs-discuss] Re: raid-z random read performance

2006-11-02 Thread James Blackburn
Checksum the individual RAID-5 blocks, rather than the entire stripe? Depending on the number of drives in your RAID-Z, this will increase your metadata size by N-1 * 32 bytes. Would this not be an undesirable increase in metadata size? In more detail: Allow the pointer to
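To make that cost concrete (a worked example, assuming 32-byte checksums as in the current block pointer):

    N = 5 drives in the raid-z1 group:
    (5 - 1) x 32 bytes = 128 bytes of per-device checksums per block pointer,
    on top of whatever the block pointer already carries.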