Re: Re: [zfs-discuss] Re: Re: Re[2]: Re: Dead drives and ZFS

2006-11-14 Thread Chris Csanady

On 11/14/06, Robert Milkowski [EMAIL PROTECTED] wrote:

Hello Rainer,

Tuesday, November 14, 2006, 4:43:32 AM, you wrote:

RH Sorry for the delay...

RH No, it doesn't. The format command shows the drive, but zpool
RH import does not find any pools. I've also used the detached bad
RH SATA drive for testing; no go. Once a drive is detached, there
RH seems to be no (not enough?) information about the pool that allows import.

Aha, you did zpool detach - sorry I missed it. Then zpool import won't
show you any pools to import from such a disk. I agree with you that it
would be useful if it did.
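
For readers following along, the import attempt being described would look
roughly like this (a sketch using the default device directory; nothing here
is specific to Rainer's setup):

  # Scan for importable pools; a cleanly detached disk will not show up,
  # because detach wipes its labels.
  zpool import

  # Point the scan at an explicit directory of device nodes.
  zpool import -d /dev/dsk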


After examining the source, I see that it clearly wipes the vdev label during
a detach.  I suppose it does this so that the machine can't get confused at a
later date.  It would be nice if the detach simply renamed something, rather
than destroying the pool though.  At the very least, the manual page ought to
reflect the destructive nature of the detach command.

That said, it looks as if the code only zeros the first uberblock, so the
data may yet be recoverable.  In order to reconstruct the pool, I think
you would need to replace the vdev labels with ones from another of
your mirrors, and possibly the EFI label so that the GUID matched.
Then, corrupt the first uberblock, and pray that it imports.  (It may be
necessary to modify the txg in the labels as well, though I have
already speculated enough...)
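
Purely as an illustration of the surgery described above -- untested,
speculative, and easy to get catastrophically wrong -- a sketch might look
like the following.  It assumes the documented ZFS on-disk layout (four
256 KB vdev labels: L0/L1 in the first 512 KB of the device, L2/L3 in the
last 512 KB) and hypothetical device names; work only on dd images of the
disks, never the originals.

  # SPECULATIVE SKETCH ONLY -- operate on copies (dd images), not live disks.
  SRC=/dev/rdsk/c1t0d0s0   # surviving mirror half (hypothetical name)
  DST=/dev/rdsk/c1t1d0s0   # detached disk being reconstructed (hypothetical)

  # Copy the two front vdev labels (L0 and L1, 256 KB each) from the
  # surviving half onto the detached disk.
  dd if=$SRC of=$DST bs=256k count=2 conv=notrunc

  # The tail labels (L2 and L3) occupy the last 512 KB of the device and
  # would need the same treatment at (device size - 512 KB).  Note that the
  # copied labels carry the source disk's identity, which is exactly the
  # kind of GUID mismatch mentioned above that would need fixing afterwards.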

Can anyone say for certain?

Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dead drives and ZFS

2006-11-14 Thread Jeremy Teo

On 11/14/06, Bill Sommerfeld [EMAIL PROTECTED] wrote:

On Tue, 2006-11-14 at 03:50 -0600, Chris Csanady wrote:
 After examining the source, it clearly wipes the vdev label during a detach.
 I suppose it does this so that the machine can't get confused at a later date.
 It would be nice if the detach simply renamed something, rather than
 destroying the pool though.  At the very least, the manual page ought
 to reflect the destructive nature of the detach command.

rather than patch it up after the detach, why not have the filesystem do
it?

seems like the problem would be solved by something looking vaguely
like:

   zpool fork -p poolname -n newpoolname [devname ...]

   Create the new exported pool newpoolname from poolname by detaching
   one side from each mirrored vdev, starting with the
   device names listed on the command line.  Fails if the pool does not
   consist exclusively of mirror vdevs, if any device listed on the
   command line is not part of the pool, or if there is a scrub or resilver
   necessary or in progress.   Use on bootable pools not recommended.
   For best results, snapshot filesystems you care about before the fork.


I'm more inclined to split instead of fork. ;)


--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Supermicro X7DAE and AHCI

2006-11-14 Thread Sanjay G. Nadkarni
 BOOTING AND ACCESSING 6 SATA DRIVES USING AHCI
 
 I have installed b48 running 64-bit successfully on
 this machine using dual core Intel Woodcrest
 processors. The hardware supports up to 6 SATA II
 drives. I have installed 6 Western Digital Raptor
 drives. Using Parallel ATA mode I can only see 4
 drives. Using ZFS the sustained throughput is a
 disappointing 40MB/s per drive. I was expecting
 between 60 and 80.

 If I configure the BIOS to use AHCI mode, the AHCI
 BIOS can see all six drives. GRUB appears and
 displays the Nevada boot menu. However, the machine
 resets when I try to boot Solaris.

 Can I simply tell Solaris to use an AHCI SATA device
 for the root filesystem?

 If so, what command line should I use?

The ZFS discussion group would be a better place for this question, so I am
cc'ing the discussion group.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thoughts on patching + zfs root

2006-11-14 Thread Wee Yeh Tan

On 11/11/06, Bart Smaalders [EMAIL PROTECTED] wrote:

It would seem useful to separate the user's data from the system's data
to prevent problems with losing mail, log file data, etc, when either
changing boot environments or pivoting root boot environments.


I'd be more concerned about the confusion caused by losing changes when
booting off different datasets, but that problem exists with or without
ZFS.  I see a clear advantage in keeping more bootable images than I can
have partitions for, especially when monkeying around with the kernel
code.

--
Just me,
Wire ...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dead drives and ZFS

2006-11-14 Thread Wee Yeh Tan

On 11/14/06, Jeremy Teo [EMAIL PROTECTED] wrote:

I'm more inclined to split instead of fork. ;)


I prefer split too, since that's what most of the storage guys use for
mirrors.  Still, we are not making any progress on helping Rainer out of
his predicament.


--
Just me,
Wire ...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dead drives and ZFS

2006-11-14 Thread Bill Sommerfeld
On Tue, 2006-11-14 at 03:50 -0600, Chris Csanady wrote:
 After examining the source, it clearly wipes the vdev label during a detach.
 I suppose it does this so that the machine can't get confused at a later date.
 It would be nice if the detach simply renamed something, rather than
 destroying the pool though.  At the very least, the manual page ought
 to reflect the destructive nature of the detach command.

rather than patch it up after the detach, why not have the filesystem do
it?

seems like the problem would be solved by something looking vaguely
like:

   zpool fork -p poolname -n newpoolname [devname ...]

   Create the new exported pool newpoolname from poolname by detaching
   one side from each mirrored vdev, starting with the
   device names listed on the command line.  Fails if the pool does not
   consist exclusively of mirror vdevs, if any device listed on the
   command line is not part of the pool, or if there is a scrub or resilver
   necessary or in progress.   Use on bootable pools not recommended.
   For best results, snapshot filesystems you care about before the fork.

(just a concept...  )
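
For illustration only, a hypothetical invocation of the proposed subcommand
(no such command exists today; pool and device names are made up):

  # split one half off each mirror of 'tank' into a new exported pool,
  # preferring the listed devices as the detached halves
  zpool fork -p tank -n tankcopy c1t2d0 c1t3d0

  # the new pool could then be imported here or on another host
  zpool import tankcopy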

- Bill







___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dead drives and ZFS

2006-11-14 Thread Casper . Dik

zpool fork -p poolname -n newpoolname [devname ...]

Create the new exported pool newpoolname from poolname by detaching
one side from each mirrored vdev, starting with the
device names listed on the command line.  Fails if the pool does not
consist exclusively of mirror vdevs, if any device listed on the
command line is not part of the pool, or if there is a scrub or resilver
necessary or in progress.   Use on bootable pools not recommended.
For best results, snapshot filesystems you care about before the fork.

I'm more inclined to split instead of fork. ;)

Seems that break is a more obvious thing to do with mirrors; does this
allow me to peel off one bit of a three-way mirror?

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Re: Re[2]: Re: Dead drives and ZFS

2006-11-14 Thread Rainer Heilke
Neither clear nor scrub cleans up the errors on the pool. I've done this about a
dozen times in the past several days, without success.
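
For context, the cycle being referred to is presumably along these lines
(pool name hypothetical):

  zpool clear mypool       # reset the error counters
  zpool scrub mypool       # re-verify all data against its checksums
  zpool status -v mypool   # errors show up here again afterwards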

Rainer
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thoughts on patching + zfs root

2006-11-14 Thread Casper . Dik

On 11/11/06, Bart Smaalders [EMAIL PROTECTED] wrote:
 It would seem useful to separate the user's data from the system's data
 to prevent problems with losing mail, log file data, etc, when either
 changing boot environments or pivoting root boot environments.

I'd be more concerned about the confusion caused by losing changes when
booting off different datasets, but that problem exists with or without
ZFS.  I see a clear advantage in keeping more bootable images than I can
have partitions for, especially when monkeying around with the kernel
code.

And I think this may also open the door for a fallback boot; if booting
one root fs fails, we might be able to restart with another (but this
does seem to require modifying the pool in some way so it is not obvious
how this is done with errors really early in boot)

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re[2]: [zfs-discuss] Dead drives and ZFS

2006-11-14 Thread Robert Milkowski
Hello Bill,

Tuesday, November 14, 2006, 2:31:11 PM, you wrote:

BS On Tue, 2006-11-14 at 03:50 -0600, Chris Csanady wrote:
 After examining the source, it clearly wipes the vdev label during a detach.
 I suppose it does this so that the machine can't get confused at a later 
 date.
 It would be nice if the detach simply renamed something, rather than
 destroying the pool though.  At the very least, the manual page ought
 to reflect the destructive nature of the detach command.

BS rather than patch it up after the detach, why not have the filesystem do
BS it?

BS seems like the problem would be solved by something looking vaguely
BS like:

BSzpool fork -p poolname -n newpoolname [devname ...]

BSCreate the new exported pool newpoolname from poolname by detaching
BSone side from each mirrored vdev, starting with the
BSdevice names listed on the command line.  Fails if the pool does not
BSconsist exclusively of mirror vdevs, if any device listed on the
BScommand line is not part of the pool, or if there is a scrub or resilver
BSnecessary or in progress.   Use on bootable pools not recommended.
BSFor best results, snapshot filesystems you care about before the fork.

BS (just a concept...  )

Could you please create an RFE for it and give us the ID?
I would immediately add a call record to it :)


-- 
Best regards,
 Robert                        mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Dead drives and ZFS

2006-11-14 Thread Rainer Heilke
This makes sense for the most part (and yes, I think it should be done by the
file system, not by manual grovelling through vdev labels).

The one difference I would make is that it should not fail if the pool
_requires_ a scrub (but yes, if a scrub is in progress...). I worry about this
requirement, as my pool has had errors since the second SATA drive was attached
(admittedly, it was clean when I detached the EIDE drive). If a scrub cannot clean
up the errors on the (bad) disk, the inability to cleanly detach the good disk
in a mirror and import the pool from that disk on another system leaves you in
the same limbo that I am in now. Thus, fail on the normal attempt, but allow a
force if the scrub or resilver is finished but you still have errors on what
would be the last (disk) mirror on the system.

This would also provide a way to take a fixed file system state offline. That is,
detach the mirror, power down, and pull the disk. Some time later, put the
disk into a system and pull in the pool(s), creating a data pool of a known,
clean state, with known files. One could think of this as a special case of the
export function, where one only exports one side of a mirrored pool.

Rainer
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Some performance questions with ZFS/NFS/DNLC at snv_48

2006-11-14 Thread Tomas Ögren
On 13 November, 2006 - Eric Kustarz sent me these 2,4K bytes:

 Tomas Ögren wrote:
 On 13 November, 2006 - Sanjeev Bagewadi sent me these 7,1K bytes:
 Regarding the huge number of reads, I am sure you have already tried 
 disabling the VDEV prefetch.
 If not, it is worth a try.
 That was part of my original question, how? :)
 
 On recent bits, you can set 'zfs_vdev_cache_max' to 1 to disable the 
 vdev cache.
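
For the archive, a sketch of that tunable and how it can be set; treat the
exact name as build-specific and check the source for your bits:

  # persistent, via /etc/system (takes effect after a reboot)
  set zfs:zfs_vdev_cache_max = 1

  # or live, on a running kernel
  echo 'zfs_vdev_cache_max/W 0t1' | mdb -kw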

On earlier versions (snv_48), I did something similar with ztune.sh[0], adding
a cache_size entry which I set to 0 (instead of 10M).

This helped quite a lot, but there seems to be one more level of
prefetching.

Example:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
ftp         1.67T  2.15T  1.26K     23  40.9M   890K
  raidz2    1.37T   551G    674     10  22.3M   399K
    c4t0d0      -      -    210      3  3.19M  80.4K
    c4t1d0      -      -    211      3  3.19M  80.4K
    c4t2d0      -      -    211      3  3.19M  80.4K
    c5t0d0      -      -    210      3  3.19M  80.4K
    c5t1d0      -      -    242      4  3.19M  80.4K
    c5t2d0      -      -    211      3  3.19M  80.4K
    c5t3d0      -      -    211      3  3.19M  80.4K
  raidz2     305G  1.61T    614     12  18.6M   491K
    c4t3d0      -      -    222      5  2.66M  99.1K
    c4t4d0      -      -    223      5  2.66M  99.1K
    c4t5d0      -      -    224      5  2.66M  99.1K
    c4t8d0      -      -    190      5  2.66M  99.1K
    c5t4d0      -      -    190      5  2.66M  99.1K
    c5t5d0      -      -    226      5  2.66M  99.1K
    c5t8d0      -      -    225      5  2.66M  99.1K
----------  -----  -----  -----  -----  -----  -----

Before this fix, the 'read bandwidth' for the disks in the first raidz2
added up to far more than the raidz2 total itself.  Now it adds up correctly,
but some other readahead still causes a 1-10x factor of excess reads, mostly
hovering around 2-3x; before, it hovered around 8-10x.
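
(Spot check from the output above: 7 x 3.19M ≈ 22.3M and 7 x 2.66M ≈ 18.6M,
which now match the two raidz2 totals.)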

[0]:
http://blogs.sun.com/roch/resource/ztune.sh

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thoughts on patching + zfs root

2006-11-14 Thread Lori Alt

[EMAIL PROTECTED] wrote:

On 11/11/06, Bart Smaalders [EMAIL PROTECTED] wrote:


It would seem useful to separate the user's data from the system's data
to prevent problems with losing mail, log file data, etc, when either
changing boot environments or pivoting root boot environments.
  

I'd be more concerned about the confusion caused by losing changes when
booting off different datasets, but that problem exists with or without
ZFS.  I see a clear advantage in keeping more bootable images than I can
have partitions for, especially when monkeying around with the kernel
code.



And I think this may also open the door for a fallback boot; if booting
one root fs fails, we might be able to restart with another (but this
does seem to require modifying the pool in some way so it is not obvious
how this is done with errors really early in boot)
  

Actually, we have considered this.  On both SPARC and x86, there will be
a way to specify the root file system (i.e., the bootable dataset) to be
booted, at either the GRUB prompt (for x86) or the OBP prompt (for SPARC).
If no root file system is specified, the default 'bootfs' currently set
in the root pool's metadata will be booted.  But it will be possible to
override the default, which will provide that fallback boot capability.
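
As a purely illustrative sketch of what such an override could look like
(hypothetical dataset name, and syntax of the sort the eventual ZFS boot
support uses, not anything promised in this thread):

  # SPARC, at the OBP ok prompt: boot an explicitly named root dataset
  ok boot -Z rpool/ROOT/alt_be

  # x86: a GRUB menu entry naming the dataset explicitly
  bootfs rpool/ROOT/alt_be
  kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS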

Lori

Casper

  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Mirror Configuration??

2006-11-14 Thread oab
Hi All,
How would I do the following in ZFS?  I have four arrays connected to an
E6900.  Each array is connected to a separate IB board on the back of the
server.  Each array presents 4 disks.

c2t40d0 c3t40d0 c4t40d0 c5t40d0
c2t40d1 c3t40d1 c4t40d1 c5t40d1
c2t40d2 c3t40d2 c4t40d2 c5t40d2
c2t40d3 c3t40d3 c4t40d3 c5t40d3

Controllers c2/c3 are in one power grid and c4/c5 are in another power grid.

I want to create a RAID 1+0 mirror configuration:

{c2t40d0 c2t40d1 c2t40d2 c2t40d3 c3t40d0 c3t40d1 c3t40d2 c3t40d3}  stripe
                                |
                             MIRROR
                                |
{c4t40d0 c4t40d1 c4t40d2 c4t40d3 c5t40d0 c5t40d1 c5t40d2 c5t40d3}  stripe

It has to be mirrored like this as I need it to survive a power outage.

Regards  Thanks

OAB
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Dead drives and ZFS

2006-11-14 Thread Rainer Heilke
Well, I haven't overwritten the disk, in the hopes that I can get the data 
back. So, how do I go about copying or otherwise repairing the vdevs?

Rainer
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Dead drives and ZFS

2006-11-14 Thread Rainer Heilke
 Seems that break is a more obvious thing to do with
 mirrors; does this
 allow me to peel off one bit of a three-way mirror?
 
 Casper

I would think that this makes sense, and splitting off one side of a two-way 
mirror is more the edge case (though emphatically required/desired).

Rainer
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Mirror Configuration??

2006-11-14 Thread Tomas Ögren
On 14 November, 2006 - oab sent me these 1,0K bytes:

 Hi All,
 How would I do the following in ZFS?  I have four arrays connected to
 an E6900.  Each array is connected to a separate IB board on the back
 of the server.  Each array presents 4 disks.
 
 c2t40d0 c3t40d0 c4t40d0 c5t40d0
 c2t40d1 c3t40d1 c4t40d1 c5t40d1
 c2t40d2 c3t40d2 c4t40d2 c5t40d2
 c2t40d3 c3t40d3 c4t40d3 c5t40d3
 
 Controllers c2/c3 are in one power grid and c4/c5 are in another power grid.
 
 I want to create a RAID 1+0 mirror configuration:
 
 {c2t40d0 c2t40d1 c2t40d2 c2t40d3 c3t40d0 c3t40d1 c3t40d2 c3t40d3}  stripe
                                 |
                              MIRROR
                                 |
 {c4t40d0 c4t40d1 c4t40d2 c4t40d3 c5t40d0 c5t40d1 c5t40d2 c5t40d3}  stripe
 
 It has to be mirrored like this as I need it to survive a power outage.

If I interpret your ASCII graphics right, you want two big stripes and then
mirror them.  The problem is that if one disk dies on each side of the
mirror, you lose the pool.  A safer way is to create eight two-way mirrors
and stripe them:

# pair each c2/c3 disk (one power grid) with a c4/c5 disk (the other grid)
zpool create blah \
mirror c2t40d0 c4t40d0 \
mirror c2t40d1 c4t40d1 \
mirror c2t40d2 c4t40d2 \
mirror c2t40d3 c4t40d3 \
mirror c3t40d0 c5t40d0 \
mirror c3t40d1 c5t40d1 \
mirror c3t40d2 c5t40d2 \
mirror c3t40d3 c5t40d3
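
If it helps, a quick way to sanity-check the resulting layout (pool name as
in the example above):

  zpool status blah       # should show eight two-way mirror vdevs
  zpool iostat -v blah    # per-vdev layout and activity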

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] # devices in raidz.

2006-11-14 Thread Torrey McMahon

Richard Elling - PAE wrote:

Torrey McMahon wrote:

Robert Milkowski wrote:

Hello Torrey,

Friday, November 10, 2006, 11:31:31 PM, you wrote:

[SNIP]

Tunable in a form of pool property, with default 100%.

On the other hand, maybe the simple algorithm Veritas has used is good
enough - a simple delay between scrubbing/resilvering some data.


I think a not-too-convoluted algorithm, as people have suggested, would
be ideal, and then let people override it as necessary. I would think
a 100% default might be a call generator, but I'm up for debate. (Hey,
my array just went crazy. All the lights are blinking but my
application isn't doing any I/O. What gives?)


I'll argue that *any* random % is bogus.  What you really want to
do is prioritize activity where resources are constrained.  From a RAS
perspective, idle systems are the devil's playground :-).  ZFS already
does prioritize I/O that it knows about.  Prioritizing on CPU might have
some merit, but integrating into Solaris' resource management system
might bring added system administration complexity, which is unwanted.



I agree, but the problem as I see it is that nothing has an overview of
the entire environment. ZFS knows what I/O is coming in and what it's
sending out, but that's it. Even if we had an easy-to-use resource
management framework across all the Sun applications and devices, we'd
still run into non-Sun bits that place demands on shared components like
networking, SANs, arrays, etc. Anything that can be auto-tuned is great,
but I'm afraid we're still going to need manual tuning in some cases.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] snv_51 hangs

2006-11-14 Thread Chris Csanady

I have experienced two hangs so far with snv_51.  I was running snv_46
until recently, and it was rock solid, as were earlier builds.

Is there a way for me to force a panic?  It is an x86 machine, with
only a serial console.

Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv_51 hangs

2006-11-14 Thread Sean Ye
Hi, Chris,

You may force a panic with 'reboot -d'.

Thanks,
Sean
On Tue, Nov 14, 2006 at 09:11:58PM -0600, Chris Csanady wrote:
 I have experienced two hangs so far with snv_51.  I was running snv_46
 until recently, and it was rock solid, as were earlier builds.
 
 Is there a way for me to force a panic?  It is an x86 machine, with
 only a serial console.
 
 Chris
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv_51 hangs

2006-11-14 Thread Nathan Kroenert
Hm.

If the system is hung, it's unlikely that a reboot -d will help.

You want to boot into kmdb, then use the F1-A interrupt sequence,
then dump using $<systemdump at the kmdb prompt.

See the following documents:
Index of lots of useful stuff:
http://docs.sun.com/app/docs/doc/817-1985/6mhm8o5p3?a=view

Forcing a crashdump on x86 boxes:
http://docs.sun.com/app/docs/doc/817-1985/6mhm8o5q5?a=view

And booting from grub into kmdb:
http://docs.sun.com/app/docs/doc/817-1985/6mhm8o5q2?a=view

I'm not sure how the serial console is going to impact you. I'm
expecting it'll still be F1-A to drop to the debugger...

That's assuming it's not a hard hang. :)

Cheers.

Nathan.





On Wed, 2006-11-15 at 14:16, Sean Ye wrote:
 Hi, Chris,
 
 You may force a panic by reboot -d.
 
 Thanks,
 Sean
 On Tue, Nov 14, 2006 at 09:11:58PM -0600, Chris Csanady wrote:
  I have experienced two hangs so far with snv_51.  I was running snv_46
  until recently, and it was rock solid, as were earlier builds.
  
  Is there a way for me to force a panic?  It is an x86 machine, with
  only a serial console.
  
  Chris
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
-- 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv_51 hangs

2006-11-14 Thread John Cecere

Chris,

To force a panic on an x86 system using GRUB, you'll first need to boot kmdb. This can be accomplished by adding the 'kmdb' option
to the multiboot line in menu.lst. Rather than permanently hacking your menu.lst, you can do it at boot time:


- power your machine on
- arrow to the OS you want to boot in GRUB
- type 'e'
- arrow to the line that says 'kernel /platform/i86pc/multiboot'
- type 'e' again
- type a space, then the string kmdb. It should read:
  kernel /platform/i86pc/multiboot kmdb
- Hit return
- Type 'b' (for boot)

After the system boots, you should be able to drop to kmdb via the console key sequence F1-A (pressed simultaneously, like L1-A on SPARC machines).


Once you drop to kmdb, type:

$<systemdump

This should dump core and reboot.

This is all contingent on what caused the system to hang. You may or may not be 
able to get to kmdb.

hth,
John



Chris Csanady wrote:

I have experienced two hangs so far with snv_51.  I was running snv_46
until recently, and it was rock solid, as were earlier builds.

Is there a way for me to force a panic?  It is an x86 machine, with
only a serial console.

Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
John Cecere
Sun Microsystems
732-302-3922 / [EMAIL PROTECTED]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] performance question

2006-11-14 Thread listman
hi all, i'm considering using ZFS for a Perforce server where the repository
might have the following characteristics:

Number of branches                         68
Number of changes                          85,987
Total number of files (at head revision)   2,675,545
Total number of users                      36
Total number of clients                    3,219
Perforce depot size                        15 GB

I'm being told that raid 0/1 XFS on linux would be the most efficient way to
manage this repository. I was wondering if the list thought that ZFS would be
a good choice?

Thx!
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv_51 hangs

2006-11-14 Thread Chris Csanady

Thank you all for the very quick and informative responses.  If it
happens again, I will try to get a core out of it.

Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] performance question

2006-11-14 Thread Frank Cusack

On November 14, 2006 7:57:52 PM -0800 listman [EMAIL PROTECTED] wrote:


hi all, i'm considering using ZFS for a Perforce server where the
repository might have the following characteristics

Number of branches  68
Number of changes   85,987
Total number of files
(at head revision)  2,675,545
Total number of users   36
Total number of clients 3,219
Perforce depot size 15 GB


I'm being told that raid 0/1 XFS on linux would be the most efficient way
to manage this repository, I was wondering if the list thought
that ZFS would be a good choice?


I was thinking of getting a fast car.  I wonder if blue is a good choice?

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] bogus zfs error message on boot

2006-11-14 Thread Frank Cusack

After swapping some hardware and rebooting:

SUNW-MSG-ID: ZFS-8000-CS, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Tue Nov 14 21:37:55 PST 2006
PLATFORM: SUNW,Sun-Fire-T1000, CSN: -, HOSTNAME:
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 60b31acc-0de8-c1f3-84ec-935574615804
DESC: A ZFS pool failed to open.  Refer to http://sun.com/msg/ZFS-8000-CS 
for more information.

AUTO-RESPONSE: No automated response will occur.
IMPACT: The pool data is unavailable
REC-ACTION: Run 'zpool status -x' and either attach the missing device or
   restore from backup.

# zpool status -x
all pools are healthy

And in fact they are.  What gives?  This message occurs on every boot now.
It didn't occur before I changed the hardware.

I had replaced the FC card with a fw800 card, then I changed it back.
(the fw800 card didn't work)
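
Not an answer, but the standard FMA tooling for digging into (and eventually
retiring) a fault like this looks roughly as follows; the UUID is the
EVENT-ID from the console message above:

  fmdump -v -u 60b31acc-0de8-c1f3-84ec-935574615804   # details of the event
  fmadm faulty                                        # anything still marked faulty?
  fmadm repair 60b31acc-0de8-c1f3-84ec-935574615804   # clear it once resolved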

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS moving from one zone to another

2006-11-14 Thread Marlanne DeLaSource
This question is both for the ZFS forum and the Zones forum.

I have a global zone with a pool (mapool). 
I have 2 zones, z1 and z2.
I want to pass a dataset (mapool/fs1) from z1 to z2.

Solution 1:
mapool/fs1 is mounted under /thing in the global zone (legacy mount) and I
configure an lofs mount on z1 and z2.
zonecfg:z1> add fs
zonecfg:z1:fs> set dir=/thing
zonecfg:z1:fs> set special=/fs1
zonecfg:z1:fs> set type=lofs
zonecfg:z1:fs> end
- Advantage: the fs is seen in both zones.
- Disadvantage: both zones can use it, and I only want the fs to be visible
in one zone at a time.

Solution 2:
zonecfg:z1> add dataset
zonecfg:z1:dataset> set name=mapool/fs2
zonecfg:z1:dataset> end

Disadvantage: I can't get rid of it while z1 is booted.

Is there a smarter solution?

Thanks

 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss