Re: [zfs-discuss] zpool iostat

2006-09-18 Thread przemolicc
On Mon, Sep 18, 2006 at 11:05:07AM -0400, Krzys wrote: > Hello folks, is there any way to get timestamps when doing "zpool iostat 1" > for example? > > Well I did run zpool iostat 60 starting last night and I got some loads > indicating along the way but without a time stamp I can't figure out a
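
A workaround sketch, not from the thread itself: zpool iostat prints no timestamps of its own, so pipe its output through a loop that prefixes each line with the current time. Untested; the interval and date format are arbitrary.

  # prefix each zpool iostat line with a timestamp
  zpool iostat 60 | while read line; do
      echo "`date '+%Y-%m-%d %H:%M:%S'` $line"
  done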

Re: [zfs-discuss] Re: zfs clones

2006-09-18 Thread Matthew Ahrens
Mike Gerdts wrote: A couple scenarios from environments that I work in, using "legacy" file systems and volume managers: 1) Various test copies need to be on different spindles to remove any perceived or real performance impact imposed by one or the other. Arguably by having the IO activity spre

Re: [zfs-discuss] Re: Re: zfs clones

2006-09-18 Thread Matthew Ahrens
Jan Hendrik Mangold wrote: I didn't ask the original question, but I have a scenario where I want to use clone as well and encounter a (designed?) behaviour I am trying to understand. I create a filesystem A with ZFS and modify it to a point where I create a snapshot [EMAIL PROTECTED] Then I clo

Re: [zfs-discuss] slow zpool create ( and format )

2006-09-18 Thread Eric Schrock
Is there anything in /var/adm/messages? This sounds like some flaky hardware causing I/O retries. After you create the pool, is it usable in any sense of the word? Does 'zpool status' show any errors after running a scrub? - Eric On Tue, Sep 19, 2006 at 02:40:43PM +1000, Chun, Peter non Unisy
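
For reference, the checks Eric suggests correspond roughly to the following commands; the pool name is hypothetical.

  # look for driver retries or transport errors
  tail /var/adm/messages
  # start a consistency check, then inspect the result
  zpool scrub mypool
  zpool status -v mypool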

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Eric Schrock
On Mon, Sep 18, 2006 at 11:55:27PM -0400, Jonathan Edwards wrote: > > 1) If the zpool was imported when the split was done, can the > secondary pool be imported by another host if the /dev/dsk entries > are different? I'm assuming that you could simply use the -f > option .. would the guid
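
A sketch of the forced import being discussed, run on the second host; the pool name is hypothetical.

  # list pools visible on this host's devices, then force the import
  zpool import
  zpool import -f mypool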

[zfs-discuss] slow zpool create ( and format )

2006-09-18 Thread Chun, Peter non Unisys
Hi, I am running Solaris 10 6/06. The system was upgraded from Solaris 10 via Live Upgrade. zpool create mirror c0t10d0 c0t11d0 takes about 30 minutes to complete, or even just to produce the error message "invalid vdev specification, use -f to override..". Strangely enough, the format c

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Jonathan Edwards
On Sep 18, 2006, at 23:16, Eric Schrock wrote: Here's an example: I've three LUNs in a ZFS pool offered from my HW raid array. I take a snapshot onto three other LUNs. A day later I turn the host off. I go to the array and offer all six LUNs, the pool that was in use as well as the snapsh

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-18 Thread David Dyer-Bennet
On 9/18/06, Richard Elling - PAE <[EMAIL PROTECTED]> wrote: Interestingly, the operation may succeed and yet we will get an error which recommends replacing the drive. For example, if the failure prediction threshold is exceeded. You might also want to replace the drive when there are no spare
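
For context, the proposal under discussion would expose per-dataset redundancy as a filesystem property, roughly as below. Syntax as proposed, not in shipping bits at the time; the dataset name is made up.

  # ask ZFS to store two copies of each user data block in this fs
  zfs set copies=2 tank/home
  zfs get copies tank/home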

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-18 Thread Richard Elling - PAE
more below... David Dyer-Bennet wrote: On 9/18/06, Richard Elling - PAE <[EMAIL PROTECTED]> wrote: [apologies for being away from my data last week] David Dyer-Bennet wrote: > The more I look at it the more I think that a second copy on the same > disk doesn't protect against very much real-w

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Eric Schrock
On Mon, Sep 18, 2006 at 06:03:47PM -0400, Torrey McMahon wrote: > > It's not the transport layer. It works fine as the LUN IDs are different > and the devices will come up with different /dev/dsk entries. (And if > not then you can fix that on the array in most cases.) The problem is > that devi

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-18 Thread David Dyer-Bennet
On 9/18/06, Richard Elling - PAE <[EMAIL PROTECTED]> wrote: [apologies for being away from my data last week] David Dyer-Bennet wrote: > The more I look at it the more I think that a second copy on the same > disk doesn't protect against very much real-world risk. Am I wrong > here? Are parti

Re: [zfs-discuss] Re: zfs clones

2006-09-18 Thread Mike Gerdts
On 9/1/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote: Marlanne DeLaSource wrote: > Thanks for all your answers. > > The initial idea was to make a dataset/snapshot and clone (fast) and then separate the clone from its snapshot. The clone could then be used as a new independent dataset. > > The s

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-18 Thread Richard Elling - PAE
[apologies for being away from my data last week] David Dyer-Bennet wrote: The more I look at it the more I think that a second copy on the same disk doesn't protect against very much real-world risk. Am I wrong here? Are partial (small) disk corruptions more common than I think? I don't have

[zfs-discuss] Re: Re: zfs clones

2006-09-18 Thread Jan Hendrik Mangold
> > The initial idea was to make a dataset/snapshot and > clone (fast) and then separate the clone from its > snapshot. The clone could then be used as a new > independent dataset. > > > > The send/receive subcommands are probably the only > way to duplicate a dataset. > > I'm still not sure I u
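
A sketch of the send/receive duplication referred to above, with made-up dataset names. Unlike a clone, the received copy has no dependency on the source snapshot.

  zfs snapshot tank/A@copy
  zfs send tank/A@copy | zfs receive tank/B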

Re: [zfs-discuss] drbd using zfs send/receive?

2006-09-18 Thread Frank Cusack
On September 18, 2006 5:45:08 PM +0200 Jakob Praher <[EMAIL PROTECTED]> wrote: hi everyone, I am planning on creating a local SAN via NFS(v4) and several redundant nodes. huh. How do you create a SAN with NFS? I have been using DRBD on Linux before and now am asking whether some of you h

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Darren Dunham
> In my experience, we would not normally try to mount two different > copies of the same data at the same time on a single host. To avoid > confusion, we would especially not want to do this if the data represents > two different points of time. I would encourage you to stick with more > traditi

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Richard Elling - PAE
Joerg Haederli wrote: I'm really not an expert on ZFS, but at least from my point of view, to handle such cases ZFS has to handle at least the following points - GUID: a new/different GUID has to be assigned - LUNs: ZFS has to be aware that device trees are different, if these are part of some k

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Torrey McMahon
Torrey McMahon wrote: A day later I turn the host off. I go to the array and offer all six LUNs, the pool that was in use as well as the snapshot that I took a day previously, and offer all three LUNs to the host. Errr ... that should be A day later I turn the host off. I go to the arr

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Torrey McMahon
Eric Schrock wrote: On Mon, Sep 18, 2006 at 10:06:21PM +0200, Joerg Haederli wrote: It looks as if this has not been implemented yet, nor even tested. What hasn't been implemented? As far as I can tell, this is a request for the previously mentioned RFE (ability to change GUIDs on import)

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Eric Schrock
On Mon, Sep 18, 2006 at 10:06:21PM +0200, Joerg Haederli wrote: > I'm really not an expert on ZFS, but at least from my point of view, to > handle such cases ZFS has to handle at least the following points > > - GUID: a new/different GUID has to be assigned As I mentioned previously, ZFS handles this gr

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Eric Schrock
On Mon, Sep 18, 2006 at 03:29:49PM -0400, Jonathan Edwards wrote: > > err .. I believe the point is that you will have multiple disks > claiming to be the same disk which can wreak havoc on a system (e.g.: > I've got a 4 disk pool with a unique GUID and 8 disks claiming to be > part of that sa

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Joerg Haederli
I'm really not an expert on ZFS, but at least from my point of view, to handle such cases ZFS has to handle at least the following points - GUID: a new/different GUID has to be assigned - LUNs: ZFS has to be aware that device trees are different, if these are part of some kind of metadata stored

[zfs-discuss] drbd using zfs send/receive?

2006-09-18 Thread Jakob Praher
Hi everyone, I am planning on creating a local SAN via NFS(v4) and several redundant nodes. I have been using DRBD on Linux before and now am asking whether some of you have experience with on-demand network filesystem mirrors. I still have little Solaris sysadmin know-how, but I am interestin
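
One way to approximate a network mirror with plain ZFS primitives, sketched here with made-up host and dataset names; incremental sends keep the later transfers small.

  # initial full copy
  zfs snapshot tank/data@t1
  zfs send tank/data@t1 | ssh mirrorhost zfs receive backup/data
  # later: ship only the changes since t1
  # (the destination must stay unmodified between receives, or use zfs receive -F)
  zfs snapshot tank/data@t2
  zfs send -i t1 tank/data@t2 | ssh mirrorhost zfs receive backup/data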

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Jonathan Edwards
On Sep 18, 2006, at 14:41, Eric Schrock wrote: 2 - If you import LUNs with the same label or ID as a currently mounted pool then ZFS will ... no one seems to know. For example: I have a pool on two LUNS X and Y called mypool. I take a snapshot of LUN X & Y, ignoring issue #1 above for no

RE: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Ellis, Mike
It's a valid use case in the high-end enterprise space. While it probably makes good sense to use ZFS for snapshot creation, there are still cases where array-based snapshots/clones/BCVs make sense. (DR/Array-based replication, data-verification, separate spindle-pool, legacy/migration reasons, an

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Eric Schrock
On Mon, Sep 18, 2006 at 02:20:24PM -0400, Torrey McMahon wrote: > > 1 - ZFS is self-consistent but if you take a LUN snapshot then any > transactions in flight might not be completed and the pool - which you > need to snap in its entirety - might not be consistent. The more LUNs > you have in t

Re: [zfs-discuss] Comments on a ZFS multiple use of a pool, RFE.

2006-09-18 Thread Richard Elling - PAE
Robert Milkowski wrote: Hello James, I believe that storing hostid, etc. in a label and checking if it matches on auto-import is the right solution. Before it's implemented you can use -R right now with home-clusters and don't worry about auto-import. hostid isn't sufficient (got a scar), so pe
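
The -R workaround Robert mentions, as I understand the behavior at the time: a pool imported with an alternate root is not recorded in /etc/zfs/zpool.cache, so it will not be auto-imported at the next boot. Pool name hypothetical.

  zpool import -R /a mypool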

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Torrey McMahon
Hans-Joerg Haederli - Sun Switzerland Zurich - Sun Support Services wrote: Hi colleagues, IHAC who wants to use ZFS with his HDS box. He now asks how he can do the following: - Create ZFS pool/fs on HDS LUNs - Create Copy with ShadowImage inside HDS - Disconnect ShadowImage - Import ShadowIm

Re: [zfs-discuss] ZFS layout on hardware RAID-5?

2006-09-18 Thread Bill Sommerfeld
I would go with: > (3) Three 4D+1P h/w RAID-5 groups, no hot spare, mapped to one LUN each. > Setup a ZFS pool of one RAID-Z group consisting of those three LUN's. > Only ~3200GB available space, but what looks like very good resiliency > in face of multiple disk failures. IMHO buildi
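
Option (3) in zpool terms, with made-up LUN device names:

  # one RAID-Z group across the three hardware RAID-5 LUNs
  zpool create tank raidz c2t0d0 c3t0d0 c4t0d0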

[zfs-discuss] Re: Re: Bizzare problem with ZFS filesystem

2006-09-18 Thread Anantha N. Srirama
I don't see a patch for this on the SunSolve website. I've opened a service request to get this patch for Sol10 06/06. Stay tuned.

Re: [zfs-discuss] zpool df mypool

2006-09-18 Thread Cindy Swearingen
Hi Chris, The man page is out-of-date. However, I'm looking at the s10u2 zpool man page and I see no such reference to df. I'm not sure how this happened. Even in an upgrade, the man pages should have been updated (?). Cindy Krzys wrote: in man page it does say "zpool df home" does work, when
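
The shipped equivalents of the stale 'zpool df' reference:

  zpool list    # per-pool size, used and available space
  zfs list      # per-dataset used/available space and mountpoints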

[zfs-discuss] zpool iostat

2006-09-18 Thread Krzys
Hello folks, is there any way to get timestamps when doing "zpool iostat 1" for example? Well I did run zpool iostat 60 starting last night and I got some loads indicating along the way but without a time stamp I can't figure out at around what time they happened. Thanks. Chris

[zfs-discuss] Re: Re: low disk performance

2006-09-18 Thread Gino Ruopolo
> We use FSS, but CPU load was really load under the > tests. errata: We use FSS, but CPU load was really LOW under the tests.

[zfs-discuss] Re: Re: low disk performance

2006-09-18 Thread Gino Ruopolo
> > Hi Gino, > > Can you post the 'zpool status' for each pool and > 'zfs get all' > for each fs; Any interesting data in the dmesg output > ? sure. 1) nothing on dmesg (are you thinking about shared IRQ?) 2) Only using one pool for tests: # zpool status pool: zpool1 state: ONLINE scrub:

[zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Hans-Joerg Haederli - Sun Switzerland Zurich - Sun Support Services
Hi colleagues, IHAC who wants to use ZFS with his HDS box. He now asks how he can do the following: - Create ZFS pool/fs on HDS LUNs - Create Copy with ShadowImage inside HDS - Disconnect ShadowImage - Import ShadowImage with ZFS in addition to the existing ZFS pool/fs I wonder how ZFS is hand

[zfs-discuss] Re: Re: low disk performance

2006-09-18 Thread Gino Ruopolo
Hi Chris, both servers, same setup. OS on local hw raid mirror, other filesystems on a SAN. We found really bad performance, but also that under that heavy I/O the zfs pool was effectively frozen. I mean, a zone living on the same zpool was completely unusable because of the I/O load. We use FSS, but C

[zfs-discuss] Physical Clone of zpool

2006-09-18 Thread Mika Borner
Hi, we have the following scenario/problem: Our zpool resides on a single LUN on a Hitachi Storage Array. We are thinking about making a physical clone of the zpool with the ShadowImage functionality. ShadowImage takes a snapshot of the LUN, and copies all the blocks to a new LUN (physical copy). In

Re: [zfs-discuss] Re: low disk performance

2006-09-18 Thread Roch - PAE
Hi Gino, Can you post the 'zpool status' for each pool and 'zfs get all' for each fs; Any interesting data in the dmesg output? -r Gino Ruopolo writes: > Other test, same setup. > > > SOLARIS10: > > zpool/a filesystem containing over 10 million subdirs each containing > 10 fil