Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-05-14 Thread Frank Cusack

On 4/21/10 3:48 PM +0100 Bayard Bell wrote:

Oracle has a number of technologies that they've acquired that have
remained dual-licensed, and that includes acquiring Innobase (InnoDB), which they
carried forward despite being able to use it as nearly an existential
threat to MySQL. In the case of their acquisition of Sleepycat, I'm aware
of open-source licensing terms becoming more generous after the Oracle
acquisition, where Oracle added a clear stipulation that redistribution
requiring commercial licensing had to involve third parties, where prior
to the acquisition Sleepycat had taken a more expansive
interpretation that covered just about any form of software distribution.


I'm no supporter of Oracle's business practices, but I am 90% sure that
Sleepycat changed their license before the Oracle acquisition.  Yes,
it was particularly onerous before they went to standard GPL.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-14 Thread Jan Hellevik
j...@opensolaris:~$ pfexec zpool import -D
no pools available to import

Any other ideas?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-14 Thread Jan Hellevik
j...@opensolaris:~$ zpool clear vault
cannot open 'vault': no such pool
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-14 Thread Jan Hellevik
Yes, I turned the system off before I connected the disks to the other 
controller. And I turned the system off before moving them back to the original 
controller.

Now it seems like the system does not see the pool at all. 

The disks are there, and they have not been used so I do not understand why I 
cannot see the pool anymore.

Short version of what I did (actual output is in the original post):
zpool status - pool is there but unavailable
zpool import - pool already created
zpool export - I/O error
format
cfgadm
zpool status - pool is gone..

It seems like the pool vanished after cfgadm?

Any pointers? I am really getting worried now that the pool is gone for good.

What I do not understand is why it is gone - the disks are still there, so it 
should be possible to import the pool?

What am I missing here? Any ideas?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-14 Thread Ragnar Sundblad

On 12 May 2010, at 22.39, Miles Nordin wrote:

 bh == Brandon High bh...@freaks.com writes:
 
bh If you boot from usb and move your rpool from one port to
bh another, you can't boot. If you plug your boot sata drive into
bh a different port on the motherboard, you can't
bh boot. Apparently if you are missing a device from your rpool
bh mirror, you can't boot.
 
 yeah, this is retarded and should get fixed.
 
bh zpool.cache saves the device path to make importing pools
bh faster. It would be nice if there was a boot flag you could
bh give it to ignore the file...
 
 I've no doubt this is true but ISTR it's not related to the booting
 problem above because I do not think zpool.cache is used to find the
 root pool.  It's only used for finding other pools.  ISTR the root
 pool is found through devids that grub reads from the label on the
 BIOS device it picks, and then passes to the kernel.  note that
 zpool.cache is ON THE POOL, so it can't be used to find the pool (ok,
 it can---on x86 it can be sync'ed into the boot archive, and on SPARC
 it can be read through the PROM---but although I could be wrong ISTR
 this is not what's actually done).
 
 I think you'll find you CAN move drives among sata ports, just not
 among controller types, because the devid is a blob generated by the
 disk driver, and pci-ide and AHCI will yield up different devids for
 the same disk.  Grub never calculates a devid, just reads one from the
 label (reads a devid that some earlier kernel got from pci-ide or ahci
 and wrote into the label).  so when ports and device names change,
 rewriting labels is helpful but not urgent.  When disk drivers change,
 rewriting labels is urgent.

Are you talking about booting here?

Because with the OS booted, the devid should only be a hint, and if it
is found to not be correct, then the disks should be found with their
guids by searching all /dev/dsk/* devices, shouldn't they?

So, if everything worked as expected, I don't see any reason at all to
ever have to remove/ignore the zpool.cache. (Well - except for the case
when you don't want the OS to import pools that were imported before
shutdown/crash - but that hasn't been discussed here yet.)

Finding disks at boot is a very different animal and has its
limitations, though GRUB tries to work around some of them.

Am I missing something?
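
For reference, the devid and guid fields being discussed can be inspected
directly by dumping a vdev label with zdb. A minimal check - the device path
below is only a placeholder, substitute one of your own disks:

  # Dump the on-disk vdev label; the nvlist includes the pool and vdev
  # guids and, where the disk driver supplied one, a devid string.
  zdb -l /dev/dsk/c9t0d0s0

  # Compare against the device paths the running kernel currently uses:
  zpool status -v rpool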

/ragge

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mirror resilver @500k/s

2010-05-14 Thread Marc Bevand
Oliver Seidel osol at os1.net writes:
 
 Hello,
 
 I'm a grown-up and willing to read, but I can't find where to read.
 Please point me to the place that explains how I can diagnose this
 situation: adding a mirror to a disk fills the mirror with an
 apparent rate of 500k per second.

I don't know where to point you, but I know that iostat -nx 1
(not to be confused with zpool iostat) can often give you enough
information. Send us its output over a period of at least 10 sec.
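
For example, something along these lines while the resilver is running (the
interval and sample count are arbitrary):

  # Per-device statistics, 1-second samples; ignore the very first
  # report, which is the average since boot rather than current load.
  iostat -xn 1 15

  # Columns worth watching: kr/s and kw/s (throughput), actv and
  # asvc_t (queue depth and service time), and %b (device busy).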

-mrb

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mirror resilver @500k/s

2010-05-14 Thread Will Murnane
On Thu, May 13, 2010 at 16:51, Oliver Seidel o...@os1.net wrote:
 Hello,

 I'm a grown-up and willing to read, but I can't find where to read.  Please 
 point me to the place that explains how I can diagnose this situation: adding 
 a mirror to a disk fills the mirror with an apparent rate of 500k per second.
iostat (and zpool iostat) report an average-since-boot for the first
set of values, which is all you see in your example output.  Try
zpool iostat -v data 10, and wait 10 seconds.  Then the second set
of data represents the past 10 seconds, and will likely be larger than
500 KB/s.

Also, iostat -x 10 output for the same time period might be informative.

            c9t0d0  ONLINE       0     0     0  20.4G resilvered
Note this number: 20GB would take ~10 hours to resilver at 500 KB/s.
Did it take that long?
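
A minimal illustration, with the pool name 'data' taken from the original post:

  # The first report is the average since boot; each later report
  # covers only the preceding 10-second interval.
  zpool iostat -v data 10

  # Resilver progress (amount resilvered, percent done and, on most
  # builds, an estimated time to go) is also shown by:
  zpool status data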

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mirror resilver @500k/s

2010-05-14 Thread Oliver Seidel
Hello Will,

thank you for the explanation of zpool iostat -v data without any further 
arguments!

I will run the two suggested commands when I get back from work.

Yes, the 20gb have taken about 12h to resilver.  Now there's just 204gb left to 
do ...

Thanks to everyone for your replies,

Oliver

(now back to my temporarily disabled user ID)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] useradd(1M) and ZFS dataset homedirs

2010-05-14 Thread David Magda
I have a suggestion on modifying useradd(1M) and am not sure where to
input it.

Since individual ZFS file systems often make it easy to manage things,
would it be possible to modify useradd(1M) so that if the 'base_dir' is in
a zpool, a new dataset is created for the user's homedir?

So if you specify -m, a regular directory is created, but if you specify
(say) -z, a new dataset is created. Usermod(1M) would also probably have
this option.
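
Until something like that exists, the two steps such a flag would wrap are easy
enough to do by hand. A rough sketch - the dataset layout (rpool/export/home)
and the username are placeholders:

  # Create a dedicated dataset for the new user's home directory.
  pfexec zfs create rpool/export/home/alice

  # Create the account pointing at the dataset's mountpoint (no -m,
  # since the directory already exists), then hand it to the user.
  pfexec useradd -d /export/home/alice alice
  pfexec chown alice /export/home/alice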


GNU / Linux already has a -Z (capital-zed) option AFAICT:

   -Z, --selinux-user SEUSER
    The SELinux user for the user's login. The default is
   to leave this field blank, which causes the system to
   select the default SELinux user.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] useradd(1M) and ZFS dataset homedirs

2010-05-14 Thread Darren J Moffat

On 14/05/2010 14:15, David Magda wrote:

I have a suggestion on modifying useradd(1M) and am not sure where to
input it.

Since individual ZFS file systems often make it easy to manage things,
would it be possible to modify useradd(1M) so that if the 'base_dir' is in
a zpool, a new dataset is created for the user's homedir?

So if you specify -m, a regular directory is created, but if you specify
(say) -z, a new dataset is created. Usermod(1M) would also probably have
this option.


A CR already exists for this: 6675187

--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] useradd(1M) and ZFS dataset homedirs

2010-05-14 Thread Jason King
In the meantime, you can use autofs to do something close to this if
you like (sort of like the pam_mkhomedir module) -- you can have it
execute a script that returns the appropriate auto_user entry (given a
username as input).  I wrote one a long time ago that would do a zfs
create if the dataset didn't already exist (and assuming the homedir
for the user was /home/USERNAME from getent).
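
Something along these lines gives the general idea of such an executable
auto_home map (an untested sketch - the dataset layout, paths and policy are
all assumptions; the script must be made executable and referenced from
auto_master):

  #!/bin/ksh
  # Executable autofs map: automountd calls it with the map key (the
  # username) as $1 and expects a map entry for that key on stdout.
  user="$1"
  ds="rpool/export/home/${user}"

  # Only handle accounts that actually exist.
  getent passwd "${user}" > /dev/null || exit 0

  # Create the home dataset on first reference if it is not there yet.
  if ! zfs list -H -o name "${ds}" > /dev/null 2>&1; then
      zfs create "${ds}" || exit 0
      chown "${user}" "/export/home/${user}"
  fi

  # Emit the entry: loopback-mount the user's home from /export/home.
  echo "-fstype=lofs :/export/home/${user}"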

On Fri, May 14, 2010 at 8:15 AM, David Magda dma...@ee.ryerson.ca wrote:
 I have a suggestion on modifying useradd(1M) and am not sure where to
 input it.

 Since individual ZFS file systems often make it easy to manage things,
 would it be possible to modify useradd(1M) so that if the 'base_dir' is in
 a zpool, a new dataset is created for the user's homedir?

 So if you specify -m, a regular directory is created, but if you specify
 (say) -z, a new dataset is created. Usermod(1M) would also probably have
 this option.


 GNU / Linux already has a -Z (capital-zed) option AFAICT:

       -Z, --selinux-user SEUSER
            The SELinux user for the user's login. The default is
           to leave this field blank, which causes the system to
           select the default SELinux user.


 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun Storage J4400 SATA Interposer Card

2010-05-14 Thread David Koch
The J4400 (and J4200) are not Sun's own hardware products.  They are 
manufactured by Quanta as the 1400 and 1200 SAS JBODs.  Sun do their own 
firmware to make it a Sun product.  You will be able to get both the drive 
brackets and SATA interposer cards through Quanta distribution.

The other way to get the bracket and SATA interposer card is to buy the 
cheapest Sun SATA drives - see if you can find someone with stock of the old 
250GB or 500GB SATA HDDs, remove that HDD and fit your new 2TB SATA HDD.  We 
bought a lot of the Sun 250GB SATA HDDs for this very purpose, they cost us 
about US $140 each.

Also, the J4200/J4400 air management sleds (MN: XTA-4400-6AMS) are a fully 
functional drive bracket with a screwed in metal tray that substitutes for the 
actual HDD.  It doesn't have the SATA interposer card, but if you use SAS HDDs 
rather than SATA HDDs you are now home free.

You can use the Seagate 1TB ST31000640SS SAS drive with a drive bracket, or you 
can use the Seagate 1TB ST31000340NS with a SATA interposer card.  The drive 
brackets have two sets of drive mount holes to deal with the differing position 
of the HDD depending on whether it needs the SATA interposer card, or the SAS 
HDD directly plugging into the backplane.  We have used both of these as 
methods of adding HDDs to J4400 systems, and we have these in constant use 
without issue (Sun CAM complains but that's OK).

So for 2TB HDDs, you would use the Seagate ST32000644SS for SAS - though it's a 
6Gbps SAS HDD, it will have to run at 3Gbps in the J4x00 JBOD - so check 
this before buying any quantity. The Seagate ST32000644NS is the 3Gbps 2TB SATA 
unit that should work with the SATA interposer. Once again, check before 
committing to any quantity. Or send us either or both and we'll run it up and 
check it for you!

Our preference is to use the SAS drives rather than the SATA drives with the 
SATA interposer. It's simpler, true dual-path, provides clean MPX-IO, and the 
SAS HDD may be less than $100 more expensive than the SATA HDD.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] permanent error in 'metadata'

2010-05-14 Thread Germano Caronni
Quite funny, I recently resurrected my storage array, again hit the error 
described in the post above, and came to the forums to look for anybody posting 
about this. Pretty much the only person seems to be me, 9 months ago ;-)

Anybody know what '(metadata):(0x0)' is? I take it it is part of the Metadata 
Object Set (MOS) but don't know what that means, what information has actually 
been lost, and if I can selectively 'remove' the error by deleting something.

I saw related post 
http://opensolaris.org/jive/thread.jspa?messageID=422024#422024 , clear and 
scrub don't help.

Any advice?

Germano
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] permanent error in 'metadata'

2010-05-14 Thread Germano Caronni
Actually, this may apply 
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6727872
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS home server

2010-05-14 Thread Brandon High
I've been thinking about building a small NAS box for my father in law
to back his home systems up to. He mistakenly bought a Drobo, but his
Macs refuse to use it as a Time Machine target, even with the afp
protocol.

I came across a review of the ASUS TS Mini, which comes with an Atom
N280, 1GB RAM, 2 drive bays (one with a Seagate 7200.12 500GB drive),
and lots of external ports. Photos in some review show an RGB port
inside. Since it was cheap, I ordered one to play with.

It's turned out to be a great small NAS and case. It's 9.5" high,
3.75" wide, and 8" deep. Power is from an external brick. The top is
held on with thumb screws, which once removed let you pull out the
drive cage. The bottom cover is held on by some Phillips screws. This
also gives you access to the single DDR2 SO-DIMM slot. There are also
solder pads for a second memory slot and for a PCIe 1x slot. If you're
handy with a soldering iron, you could double your memory.

Taking the back cover off lets you get at the VGA. You need to use a
Torx-9 driver and remove the 8 or so screws, then loosen the
motherboard to take it out. Once out, you can see that the RGB port
can be trimmed out of the back plate with a razor or dremel.

The two internal drives are connected to the ICH7-M southbridge. It
looks like a sideways PCIe 1x slot on the motherboard, but it's the
sata and power connectors for the internal drives, so don't think you
can plug a different card in.

The two external eSATA ports are provided via a Marvell 88SE6121 PCIe
SATA controller, which supports PMP. There are also 6 USB ports on the
back. All of this is supported by OpenSolaris.

When booting with a monitor and keyboard attached, you can hit DEL to
get into the BIOS and change any settings. There's nothing that
prevents you from replacing the provided Windows Home Server.

I've currently got the system running NexentaStor Community, booting
off of a 4GB USB drive. Large writes (eg: DVD iso) go at about 20MB/s
over GigE, and reads are about 40MB/s.

It's not the fanciest or fastest system, but I think it'll work fine
as an iSCSI target for Time Machine. And my FIL can even use the Drobo
as external USB drives if he wants.

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-14 Thread Haudy Kazemi
Now that you've re-imported, it seems like zpool clear may be the 
command you need, based on discussion in these links about missing and 
broken zfs logs:


http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg37554.html
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg30469.html
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6707530
http://www.sun.com/msg/ZFS-8000-6X
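
If the import does go through but the pool keeps complaining about the log
device, the rough sequence those threads point at looks like this (a sketch
only - the pool and device names are taken from this thread, and removing a
slog requires a pool version that supports log device removal):

  # Clear the error counters / faulted state on the pool.
  pfexec zpool clear vault

  # See what ZFS now thinks of the log device.
  pfexec zpool status -v vault

  # On pool version 19 or later, a dead log device can be removed and
  # a fresh one added back afterwards.
  pfexec zpool remove vault c10d0p1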


Jan Hellevik wrote:
Hey! It is there! :-) Cannot believe I did not try the import command 
again. :-)


But I still have problems - I had added a slice of an SSD as log and 
another slice as cache to the pool. The SSD is there - c10d1 but ...
Ideas? The log part showed under the pool when I initially tried the 
import, but now it is gone. I am afraid of doing something stupid at 
this point in time. Any help is really appreciated!


j...@opensolaris:~$ pfexec zpool import
  pool: vault
id: 8738898173956136656
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

        vault        UNAVAIL  missing device
          raidz1-0   ONLINE
            c11d0    ONLINE
            c12d0    ONLINE
            c12d1    ONLINE
            c10d1    ONLINE

Additional devices are known to be part of this pool, though their
exact configuration cannot be determined.
j...@opensolaris:~$ pfexec format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c8d0 DEFAULT cyl 6394 alt 2 hd 255 sec 63
  /p...@0,0/pci-...@14,1/i...@0/c...@0,0
   1. c10d0 DEFAULT cyl 465 alt 2 hd 255 sec 63
  /p...@0,0/pci-...@11/i...@0/c...@0,0
   2. c10d1 SAMSUNG-S0MUJFWQ38208-0001-465.76GB
  /p...@0,0/pci-...@11/i...@0/c...@1,0
   3. c11d0 SAMSUNG-S0MUJFWQ38207-0001-465.76GB
  /p...@0,0/pci-...@11/i...@1/c...@0,0
   4. c12d0 SAMSUNG-S0MUJ1DPC0399-0001-465.76GB
  /p...@0,0/pci-...@14,1/i...@1/c...@0,0
   5. c12d1 SAMSUNG-S0MUJ1EPB1834-0001-465.76GB
  /p...@0,0/pci-...@14,1/i...@1/c...@1,0
   6. c13t0d0 ATA-SAMSUNG HD501LJ-0-11-465.76GB
  /p...@0,0/pci1022,9...@2/pci1000,3...@0/s...@0,0
   7. c13t1d0 ATA-SAMSUNG HD501LJ-0-11-465.76GB
  /p...@0,0/pci1022,9...@2/pci1000,3...@0/s...@1,0
   8. c13t2d0 ATA-SAMSUNG HD501LJ-0-11-465.76GB
  /p...@0,0/pci1022,9...@2/pci1000,3...@0/s...@2,0
   9. c13t3d0 ATA-SAMSUNG HD501LJ-0-11-465.76GB
  /p...@0,0/pci1022,9...@2/pci1000,3...@0/s...@3,0
Specify disk (enter its number): ^C
j...@opensolaris:~$


On Thu, May 13, 2010 at 7:15 PM, Richard Elling 
richard.ell...@gmail.com wrote:


now try zpool import to see what it thinks the drives are
 -- richard

On May 13, 2010, at 2:46 AM, Jan Hellevik wrote:

 Short version: I moved the disks of a pool to a new controller
without exporting it first. Then I moved them back to the original
controller, but I still cannot import the pool.


 j...@opensolaris:~$ zpool status

  pool: vault
 state: UNAVAIL
 status: One or more devices could not be opened.  There are
insufficient
replicas for the pool to continue functioning.
 action: Attach the missing device and online it using 'zpool
online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
 config:

        NAME        STATE     READ WRITE CKSUM
        vault       UNAVAIL      0     0     0  insufficient replicas
          raidz1-0  UNAVAIL      0     0     0  insufficient replicas
            c12d1   UNAVAIL      0     0     0  cannot open
            c12d0   UNAVAIL      0     0     0  cannot open
            c10d1   UNAVAIL      0     0     0  cannot open
            c11d0   UNAVAIL      0     0     0  cannot open
        logs
          c10d0p1   ONLINE       0     0     0

 j...@opensolaris:~$ zpool status
  cannot see the pool

 j...@opensolaris:~$ pfexec zpool import vault
 cannot import 'vault': one or more devices is currently unavailable
Destroy and re-create the pool from
a backup source.
 j...@opensolaris:~$ pfexec poweroff

  moved the disks back to the original controller

 j...@opensolaris:~$ pfexec zpool import vault
 cannot import 'vault': one or more devices is currently unavailable
Destroy and re-create the pool from
a backup source.
 j...@opensolaris:~$ pfexec format
 Searching for disks...done


 j...@opensolaris:~$ uname -a
 SunOS opensolaris 5.11 snv_133 i86pc i386 i86pc Solaris

 j...@opensolaris:~$ pfexec zpool history vault
 cannot open 'vault': no such pool


 ... and this is where I am now.

 The zpool contains my digital images and videos and I would be
really 

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-14 Thread Haudy Kazemi
Is there any chance that the second controller wrote something onto the 
disks when it saw the disks attached to it, thus corrupting the ZFS 
drive signatures or more?


I've heard that some controllers require drives to be initialized by 
them and/or signatures written to drives by them.  Maybe your second 
controller wrote to the drives without you knowing about it.  If you 
have a pair of (small) spare drives, make a ZFS mirror out of them and 
try to recreate the problem by repeating your steps on them.
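
A throwaway test pool for that experiment could be as simple as the following
(device names are placeholders, and only force the import if a clean import
refuses):

  # Build a small mirror from the two spare disks.
  pfexec zpool create testpool mirror c5t0d0 c5t1d0

  # Simulate the original sequence: shut down WITHOUT exporting, move
  # the disks to the other controller, boot again, then try:
  pfexec zpool import testpool
  pfexec zpool import -f testpool    # only if the clean import fails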


If you can recreate the problem, try to narrow it down to whether the 
problem is caused by the second controller changing things, or if the 
skipped zpool export is playing a role.  I think the skipped zpool export 
might have led to zpool import needing to be forced (-f), but as long as 
you weren't trying to access the disks from two systems at the same time 
it shouldn't have been catastrophic.  Forcing shouldn't be necessary if 
things are being handled cleanly and correctly.


My hunch is the second controller did something when it saw the drives 
connected to it, particularly if the second controller was configured in 
RAID mode rather than JBOD or passthrough.  Or maybe you changed some 
settings on the second controller's BIOS that caused it to write to the 
drives while you were trying to get things to work?



I've seen something similar done by the BIOS on a Gigabyte X38 chipset 
motherboard that has Quad BIOS.  This is partly documented by Gigabyte at

http://www.gigabyte.com.tw/FileList/NewTech/2006_motherboard_newtech/how_does_quad_bios_work_dq6.htm

From my testing, the BIOS on this board writes a copy of itself using 
an HPA (Host Protected Area) to a hard drive for BIOS recovery purposes 
in case of a bad flashing/BIOS upgrade.  There is no prompting for the 
writing, it appears to simply happen to whichever drive was the first 
one connected to the PC, which is usually the current boot drive.  On a 
new clean disk, this would be harmless, but it risks data loss when 
reusing drives or transferring drives between systems.  This behavior is 
able to cause data loss and has affected people using Windows Dynamic 
Disks and UnRaid as can be seen by searching Google for Gigabyte HPA.


More details:
As long as that drive is connected to the PC, the BIOS recognizes it as 
being the 'recovery' drive and doesn't write to another drive.  If that 
drive is removed, then another drive will have an HPA created on it.  
The easiest way to control this is to initially have just one drive 
connected...the one you don't mind the HPA being placed on.  Then you 
can add the other drives without them being modified.


The HPA is created on 2113 sectors at the end of the drive.  HDAT (a low 
level drive diag/repair/config utility) cannot remove this HPA while the 
drive is still the first drive (the BIOS must be enforcing protection of 
that area).  Making this drive a secondary drive by forcing the BIOS to 
create another HPA on another drive allows HDAT to remove the HPA.  
Manually examining the last 2114 (one more for good measure) sectors 
will now show that it contains a BIOS backup image.
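
On the Linux side, the presence and size of an HPA can at least be checked
(and, carefully, removed) with hdparm. A sketch - the device name is a
placeholder, and resizing the visible area of a disk that holds data is
obviously risky:

  # Prints "max sectors = visible/native"; if the two numbers differ,
  # an HPA is hiding the difference at the end of the disk.
  hdparm -N /dev/sdb

  # Remove the HPA by setting the visible size back to the native size;
  # the 'p' prefix makes the change permanent across resets. Replace
  # NNNNNNNN with the native sector count reported above.
  hdparm -N pNNNNNNNN /dev/sdb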


Other observations:
Device order in Linux (e.g. /dev/sda /dev/sdb) made no difference to 
where the HPA ended up.




Jan Hellevik wrote:

Yes, I turned the system off before I connected the disks to the other 
controller. And I turned the system off before moving them back to the original 
controller.

Now it seems like the system does not see the pool at all. 


The disks are there, and they have not been used so I do not understand why I 
cannot see the pool anymore.

Short version of what I did (actual output is in the original post):
zpool status - pool is there but unavailable
zpool import - pool already created
zpool export - I/O error
format
cfgadm
zpool status - pool is gone..

It seems like the pool vanished after cfgadm?

Any pointers? I am really getting worried now that the pool is gone for good.

What I do not understand is why it is gone - the disks are still there, so it 
should be possible to import the pool?

What am I missing here? Any ideas?
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-14 Thread Jim Horng
You may or may not need to add the log device back.
zpool clear should bring the pool online.
Either way, it shouldn't affect the data.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss