[zfs-discuss] Re: Re: Re: Metadata corrupted

2006-10-10 Thread Siegfried Nikolaivich
> Yeah, good catch.  So this means that it seems to be able to read the
> label off of each device OK, and the labels look good.  I'm not sure
> what else would cause us to be unable to open the pool...  Can you try
> running 'zpool status -v'?

The command seems to return the same thing:

% zpool status -v
  pool: tank
 state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-CS
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        FAULTED      0     0     6  corrupted data
          raidz     ONLINE       0     0     6
            c0t0d0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
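
For reference, here is the per-device label check you described, in case
you want to rerun it over SSH -- a rough sketch only, assuming the pool
sits on slice 0 of each disk:

    for d in c0t0d0 c0t1d0 c0t2d0 c0t4d0; do
        echo "=== $d ==="
        zdb -l /dev/dsk/${d}s0    # dumps all four vdev labels on the device
    done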


I can provide you with SSH access if you want.

Thanks,
Siegfried
 
 


Re: [zfs-discuss] Re: A versioning FS

2006-10-10 Thread Bill Sommerfeld
On Fri, 2006-10-06 at 00:07 -0700, Richard L. Hamilton wrote:
> Some people are making money on the concept, so I
> suppose there are those who perceive benefits:
>
> http://en.wikipedia.org/wiki/Rational_ClearCase
>
> (I dimly remember DSEE on the Apollos; ...)

I used both fairly extensively.  Much of the Apollo DSEE team left HP to
write ClearCase.  Neither is a versioning filesystem; instead, both are
software configuration management systems that export a limited virtual
filesystem interface.  With such systems, versioning is not transparent
but instead involves interaction with a CLI or GUI around
checkout/checkin.
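
To make the contrast concrete, a from-memory sketch of the ClearCase side
(file name hypothetical); nothing gets versioned until you ask:

    cleartool checkout -nc foo.c    # make the element writable in this view
    vi foo.c                        # edit as usual
    cleartool checkin -nc foo.c     # only now does a new version exist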





[zfs-discuss] ZFS file system create/export question

2006-10-10 Thread Luke Schwab
Hi,

I am wondering how ZFS ensures that a storage pool isn't imported by two
machines at the same time. Does it stamp the disks with the hostID or
hostName? Below is a snippet from the ZFS Admin Guide. It appears that this
check can be overridden with 'zpool import -f'.

"Importing a pool that is currently in use by another system over a storage
network can result in data corruption and panics as both systems attempt to
write to the same storage."
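
In case it helps frame the question, here is where I would expect such a
stamp to live, and the override I mean -- a rough sketch only, with a
placeholder device path and pool name:

    zdb -l /dev/dsk/c0t0d0s0    # dumps the vdev label; the last importing
                                # host's hostid/hostname are recorded there
    zpool import -f tank        # forces the import, bypassing the in-use check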

ljs
 
 


Re: [zfs-discuss] zpool import fails

2006-10-10 Thread Marion Hakanson
[EMAIL PROTECTED] said:
> While troubleshooting a full-disk scenario I booted from DVD after adding
> two new disks. Still under DVD boot I created a pool from those two disks
> and moved iso images I had downloaded to the zfs filesystem. Next I fixed
> my grub, exported the zpool and rebooted.
>
> Now zpool import comes up empty. Have I lost all my data on that ZFS? How
> can I check?

Sorry if I'm stating something too basic here -- no insult intended.

It sounds like the on-disk Solaris isn't aware of the new drives.  Did you
do a reconfigure boot after adding them?  Do they show up as "configured"
in a "cfgadm -al" list?
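
If not, roughly what I mean (a sketch; standard Solaris commands):

    touch /reconfigure && reboot    # request a reconfiguration boot so the
                                    # device nodes for the new disks get built
    cfgadm -al                      # the disks should now show as configured
    zpool import                    # rescan for importable pools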

Regards,

Marion





Re: [zfs-discuss] zpool import fails

2006-10-10 Thread Jan Hendrik Mangold
On Oct 10, 2006, at 11:13 AM, Marion Hakanson wrote:

> [EMAIL PROTECTED] said:
>> While troubleshooting a full-disk scenario I booted from DVD after adding
>> two new disks. Still under DVD boot I created a pool from those two disks
>> and moved iso images I had downloaded to the zfs filesystem. Next I fixed
>> my grub, exported the zpool and rebooted.
>>
>> Now zpool import comes up empty. Have I lost all my data on that ZFS? How
>> can I check?
>
> Sorry if I'm stating something too basic here -- no insult intended.

no problem :)

> It sounds like the on-disk Solaris isn't aware of the new drives.  Did you
> do a reconfigure boot after adding them?  Do they show up as "configured"
> in a "cfgadm -al" list?

I can see and access the disks with prtvtoc. cfgadm returns:

    cfgadm: Configuration administration not supported

--
Jan Hendrik Mangold
Sun Microsystems
650-585-5484 (x81371)
"idle hands are the developers workshop"


[zfs-discuss] ZFS Inexpensive SATA Whitebox

2006-10-10 Thread clockwork
All,

So I have started working with Solaris 10 at work a bit (I'm a Linux guy by
trade), and I have a dying NFS box at home. The long and short of it is as
follows: I would like to set up a SATA II whitebox that uses ZFS as its
filesystem. The box will probably be very lightly used; streaming media to
my laptop and workstation would be the bulk of the work. However, I do have
quite a lot of data, roughly 400G. So what I would like to know is: what
hardware works best for this? I don't need to have 2TB of storage on day
one, but I might need it sometime down the road. I would prefer to keep the
price low ($400-600), but I don't buy house-brand motherboards or
controllers either. So who makes a natively supported board, controller
(PCI-E?), gigE card, and so on? I have a DVD+-RW made by Samsung which I
would imagine would work. Any assistance is welcomed and appreciated.
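
For context, the ZFS end of what I'm after is the easy part -- a minimal
sketch, with hypothetical device names and pool layout:

    zpool create tank raidz c1d0 c2d0 c3d0 c4d0   # four SATA disks, one raidz
    zfs create tank/media
    zfs set sharenfs=on tank/media                # to replace the dying NFS export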
Regards.