[zfs-discuss] zfs import fails even though all disks are online

2010-02-11 Thread Marc Friesacher
Ok,

I'm trying to import a raidz zpool on my server but keep getting:

cannot import 'zedpool': one or more devices is currently unavailable

even though all disks are showing as ONLINE, as below:

fr...@vault:~# zpool import
  pool: zedpool
id: 10232199590840258590
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        zedpool        ONLINE
          raidz1       ONLINE
            c4d0       ONLINE
            c5d0       ONLINE
            c6d0       ONLINE
            c7d0       ONLINE
        logs
        zedpool        ONLINE
          mirror       ONLINE
            c12t0d0p0  ONLINE
            c10t0d0p0  ONLINE

fr...@vault:~# zpool import zedpool
cannot import 'zedpool': one or more devices is currently unavailable

Forcing it has the same result.

fr...@vault:~# zpool import -f zedpool
cannot import 'zedpool': one or more devices is currently unavailable

All the disks are definitely there:
fr...@vault:~# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c3d0 
  /p...@0,0/pci-...@d/i...@0/c...@0,0
   1. c4d0 
  /p...@0,0/pci-...@e/i...@0/c...@0,0
   2. c5d0 
  /p...@0,0/pci-...@e/i...@1/c...@0,0
   3. c6d0 
  /p...@0,0/pci-...@f/i...@0/c...@0,0
   4. c7d0 
  /p...@0,0/pci-...@f/i...@1/c...@0,0

fr...@vault:~# rmformat
Looking for devices...
 1. Logical Node: /dev/rdsk/c10t0d0p0
Physical Node: 
/p...@0,0/pci10de,2...@10/pci1106,3...@9,2/stor...@2/d...@0,0
Connected Device: SHINTARO EXTREME  0.00
Device Type: Removable
Bus: USB
Size: 4.0 GB
Label: 
Access permissions: Medium is not write protected.
 2. Logical Node: /dev/rdsk/c12t0d0p0
Physical Node: 
/p...@0,0/pci10de,2...@10/pci1106,3...@9,2/stor...@1/d...@0,0
Connected Device: Generic  USB Flash Disk   0.00
Device Type: Removable
Bus: USB
Size: 4.0 GB
Label: 
Access permissions: Medium is not write protected.

Can someone please help me work out what is going on?
I have searched the forums but can't find any posts where a pool won't import 
even though all disks are online. I've seen mention of the zdb command; running 
zdb -l /dev/dsk/ or zdb -l /dev/rdsk/ returns the same labels 
on all disks, along with the pool details, etc.
But other than that I am lost.

Thanks in advance,

Frizz.


Re: [zfs-discuss] What Happend to my OpenSolaris X86 Install?

2010-02-11 Thread Gary Gendel
My guess is that the grub bootloader wasn't upgraded on the actual boot disk.  
Search for directions on how to mirror ZFS boot drives and you'll see how to 
copy the correct grub loader onto the boot disk.
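
For reference, on x86 that step usually boils down to something like the
following (a sketch only; the slice name is a placeholder for your actual
boot disk):

  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0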

If you want a simpler approach, swap the disks.  I did this when I was moving 
from SXCE to OSOL so I could make sure that things worked before making one of 
the drives a mirror.


[zfs-discuss] Reading ZFS config for an extended period

2010-02-11 Thread taemun
Can anyone comment on whether the on-boot "Reading ZFS config" stage is
any slower/better/whatever than deleting zpool.cache, rebooting and
manually importing?

I've been waiting more than 30 hours for this system to come up. There
is a pool with 13TB of data attached. The system locked up whilst
destroying a 934GB dedup'd dataset, and I was forced to reboot it. I
can hear hard drive activity presently - i.e. it's doing
something - but I am really hoping there is a better way :)

Thanks


[zfs-discuss] Whoops, accidentally created a new slog instead of mirroring

2010-02-11 Thread Ray Van Dolson
Had a two-device SSD (mirrored) slog/zil and needed to pull each disk
to read some numbers off of them.  Unfortunately, when I placed the
devices back in the system, I used "zpool add" and created a separate
log device instead of reattaching the device to the existing log
device, which would have resulted in a mirror.

Fortunately, there are two other SSDs in the system acting as L2ARC,
which I was able to borrow from to set up temporary mirrors so I could
pull out the other drive, but in the end I'm left with:

logs
  c0t29d0ONLINE   0 0 0
  c0t31d0ONLINE   0 0 0


I'm running Solaris 10 U8.

Am I hosed?  Obviously my zpool is fine, but I'm thinking there's no way
to rid myself of this extra log device now... what if I physically yank
the drive?  Is there any way to get the above two merged as a mirror?

Just for fun, I tried:

  # zpool attach storage c0t31d0 c0t29d0
  invalid vdev specification
  use '-f' to override the following errors:
  /dev/dsk/c0t29d0s0 is part of active ZFS pool storage. Please see zpool(1M).

What will happen if I do a -f? :)

It looks like support for removing log devices is pending or available
already in OpenSolaris, but, like I said, this is Solaris 10.
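
For reference, on a build that does support log-device removal the cleanup
would presumably look something like this - a sketch only, and not possible
on Solaris 10 U8 as it stands:

  # zpool remove storage c0t29d0            # drop the accidental second log vdev
  # zpool attach storage c0t31d0 c0t29d0    # reattach it as a mirror of the remaining log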

I'll throw it to support as well, but thought I'd check here.

Thanks,
Ray


Re: [zfs-discuss] What Happend to my OpenSolaris X86 Install?

2010-02-11 Thread Lori Alt
This sounds more like an install issue than a zfs issue.  I suggest you 
take this to caiman-disc...@opensolaris.org .


Lori

On 02/10/10 23:44, Jeff Rogers wrote:

Thanks for the tip, but it was not that. The two hard drives were running under 
RAID 1 on my Linux install, so the two drives had identical information on them 
when I installed OpenSolaris. I disabled the hardware RAID support in my BIOS to 
install OpenSolaris. Looking at the disk from the still-functional Linux OS on 
the disk, it appears that OpenSolaris only formatted some of the partitions to 
ZFS. As luck would have it, the partitions that were not formatted to ZFS (which 
are still EXT3) hold the MBR and the rest of the file system, so Linux runs just 
fine. Looking at the partition table under Linux, I can see the partitions that 
are formatted as ZFS; they are shown but not supported by Linux.

Bummer. Looks like I will have to reformat the drives with GParted and 
reinstall OpenSolaris. I cannot see how a LiveCD of GParted could gain access 
to my filesystem before I say it is okay to rewrite the MBR. I suppose this is 
a GParted issue and not an OpenSolaris one. I'll go bark up that tree now.

Does anyone have any good pointers to info on OpenSolaris, Postgres with WAL 
enabled, and ZFS?
I would like to put the WAL files on a different disk than the one with the OS 
and Postgres.

Thanks

Jeff
  




Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-11 Thread Lori Alt

On 02/11/10 08:15, taemun wrote:

Can anyone comment about whether the on-boot "Reading ZFS confi" is
any slower/better/whatever than deleting zpool.cache, rebooting and
manually importing?

I've been waiting more than 30 hours for this system to come up. There
is a pool with 13TB of data attached. The system locked up whilst
destroying a 934GB dedup'd dataset, and I was forced to reboot it. I
can hear hard drive activity presently - ie its doing
something, but am really hoping there is a better way :)

Thanks
  
I think that this is a consequence of 6924390 ("ZFS destroy on de-duped 
dataset locks all I/O").


This bug is closed as a dup of another bug which is not readable from 
the opensolaris site (I'm not clear what makes some bugs readable and 
some not).


While trying to reproduce 6924390 (or its equivalent) yesterday, my 
system hung as yours did, and when I rebooted, it hung at "Reading ZFS 
config".


Someone who knows more about the root cause of this situation (i.e., the 
bug named above) might be able to tell you what's going on and how to 
recover (it might be that the destroy has resumed and you have to wait 
for it to complete, which I think it will, but it might take a long time).


Lori



Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-11 Thread Bill Sommerfeld

On 02/11/10 10:33, Lori Alt wrote:

This bug is closed as a dup of another bug which is not readable from
the opensolaris site, (I'm not clear what makes some bugs readable and
some not).


the other bug in question was opened yesterday and probably hasn't had 
time to propagate.


- Bill




Re: [zfs-discuss] /usr/bin/chgrp destroys ACL's?

2010-02-11 Thread Paul B. Henson
On Wed, 10 Feb 2010, David Dyer-Bennet wrote:

> My experience with ACLs is that they suck dead diseased rats through a
> straw and I wish I could turn them off.

That seems overly harsh ;).

> What I would dearly love is an option to disable all ACL suppport.

If you never explicitly use ACL's on zfs, and only ever manipulate
permissions with legacy chmod mode bits, I believe zfs will behave as if
ACL's don't exist. For what scenario is the existence of ACL's resulting in
failure for use cases that don't apply them?
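
If the goal is simply to keep ACL's from accumulating, the closest ZFS gets
to "off" is probably the discard settings - a sketch, with a hypothetical
dataset name:

  # zfs set aclmode=discard tank/home      # chmod throws away any existing ACL
  # zfs set aclinherit=discard tank/home   # new files don't inherit ACEs from parents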


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768


[zfs-discuss] Detach ZFS Mirror

2010-02-11 Thread Tony MacDoodle
I have a 2-disk/2-way mirror and was wondering if I can remove 1/2 the
mirror and plunk it in another system?

Thanks


Re: [zfs-discuss] zfs promote

2010-02-11 Thread Cindy Swearingen

Hi Tester,

It is difficult for me to see all that is going on here. Can you provide 
the steps and the complete output?


I tried to reproduce this on latest Nevada bits and I can't. The 
snapshot sizing looks correct to me after a snapshot/clone promotion.


Thanks,

Cindy

# zfs create tank/fs1
# mkfile 100m /tank/fs1/f1
# zfs list tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   100M   134G    23K  /tank

# zfs snapshot tank/f...@now
# zfs clone tank/f...@now tank/fsnew
# zpool set listsnapshots=on tank
# zfs  list -r tank
NAME           USED  AVAIL  REFER  MOUNTPOINT
tank           100M   134G    24K  /tank
tank/fs1       100M   134G   100M  /tank/fs1
tank/f...@now     0      -   100M  -
tank/fsnew        0   134G   100M  /tank/fsnew

The newly promoted file system is charged the
USED space.

# zfs promote tank/fsnew
# zfs  list -r tank
NAME            USED  AVAIL  REFER  MOUNTPOINT
tank            100M   134G    24K  /tank
tank/fs1           0   134G   100M  /tank/fs1
tank/fsnew      100M   134G   100M  /tank/fsnew
tank/fs...@now     0      -   100M  -

On 02/10/10 14:06, tester wrote:

Hello,

Immediately after a promote,  the snapshot of the promoted clone has 1.25G used.
NAME   USED  AVAIL  REFER
q2/fs1  4.01G  9.86G  8.54G 
q2/f...@test1  1.25G  -  5.78G  -



Prior to the promote, the snapshot of the origin file system looked like this:
NAME   USED  AVAIL  REFER
q2/fs1-o1  5.81G  9.86G  5.78G  
q2/fs1...@test1  33.2M  -  5.78G  -


Where is q2/f...@test1 picking 1.25G from?

Thanks



Re: [zfs-discuss] Detach ZFS Mirror

2010-02-11 Thread Dennis Clarke

> I have a 2-disk/2-way mirror and was wondering if I can remove 1/2 the
> mirror and plunk it in another system?

You can remove it fine. You can plunk it in another system fine.
I think you will end up with the same zpool name and id number.
Also, I do not know if that disk would be bootable. You probably have to
go through the installboot procedure for that.
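
For reference, that step would be roughly the following (a sketch - pick the
variant for your platform and substitute your own device name):

  SPARC: # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
  x86:   # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0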

-- 
Dennis Clarke
dcla...@opensolaris.ca  <- Email related to the open source Solaris
dcla...@blastwave.org   <- Email related to open source for Solaris




Re: [zfs-discuss] Detach ZFS Mirror

2010-02-11 Thread Daniel Carosone
On Thu, Feb 11, 2010 at 02:50:06PM -0500, Tony MacDoodle wrote:
> I have a 2-disk/2-way mirror and was wondering if I can remove 1/2 the
> mirror and plunk it in another system?

Yes.  If you have a recent opensolaris, there is "zpool split"
specifically to help this use case.  

Otherwise, you can unplug the drive (use zpool offline, or shut down
the host), connect it to the other host, and zpool import it there.
Then zpool detach the missing half from each host at the end (not
before).  The resulting pools have to stay separate from here on;
they still have the same id and can't be used on the same host at the
same time again.

If you want to keep redundancy on the original host, try attaching a 
third submirror and letting it resilver first.

It's worth asking: why do you want to do it this way, rather than
using a send|recv to copy?  That's usually (though not always) the
best way.
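
Roughly, something like this (a sketch; pool and host names are placeholders):

  source# zfs snapshot -r tank@move
  source# zfs send -R tank@move | ssh otherhost zfs recv -d -F newpool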

--
Dan.



Re: [zfs-discuss] Detach ZFS Mirror

2010-02-11 Thread Mark J Musante

On Thu, 11 Feb 2010, Tony MacDoodle wrote:

I have a 2-disk/2-way mirror and was wondering if I can remove 1/2 the 
mirror and plunk it in another system?


Intact?  Or as a new disk in the other system?

If you want to break the mirror, and create a new pool on the disk, you 
can just do 'zpool detach'.  This will remove the disk from the pool and 
put it in a state where you can create a brand new pool.


If you want to use the mirror, keeping it intact, you can upgrade to build 
132 and use 'zpool split'.
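
A rough sketch of that (names are placeholders):

  box0# zpool split pool newpool
  (move the split-off disk to box1)
  box1# zpool import newpool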


If you don't want to upgrade, then you can use this procedure, given the 
fact that you have only one pair of mirrors:


(assuming your pool is made of disk0 and disk1, on hostname box0)

box0# zpool offline pool disk1
box0# zpool detach pool disk1

(remove disk1 from box0 and plug into box1)

box1# zpool import -f pool
box1# zpool detach pool disk0

It's important that you offline disk1 first before detaching it, or you 
will not be able to import it into box1.  And the -f is necessary on 
import because the pool will complain about having been imported on 
another system.


Note that this procedure works only for very simple configs.  If you have 
a config with a stripe of mirror sets, or logs, spares, or cache devices, 
this will not work.  Instead, you'll have to upgrade & use zpool split.



Regards,
markm


Re: [zfs-discuss] ZFS ARC Hits By App

2010-02-11 Thread Abdullah Al-Dahlawi
Hi Sanjeev

linking the application to ARCSTAT_BUMP(arcstat_hits) is not
straightforward and is time-consuming, especially if I am running many
experiments.

Brendan has commented on the post, providing an alternative DTrace
script; here is his post:
--
dtrace -n 'sdt:::arc-hit { @[execname] = count(); }'

Or to filter just on firefox:

 dtrace -n 'sdt:::arc-hit /execname == "firefox-bin"/ { @ = count(); }'

If neither of them works, it just means that that particular sdt provider
probe wasn't available, and we can get this information by using the fbt
provider instead.

Note: ZFS prefetch is pretty effective, so streaming reads from disk will
often show up as arc hits, since ZFS has put the data in cache before the
application reads it.  Which is the intent.

I ran Brendan's script while watching arcstat (from kstat) and noticed a great
deal of discrepancy in the cache hit & miss numbers between kstat and
Brendan's script. I am not sure exactly where the 'sdt' probe collects the
cache-hit information.
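
For comparison, the kstat counters can be sampled directly while the DTrace
script is running - a sketch, sampling every 5 seconds:

  # kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses 5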

Any feedback?




On Wed, Feb 10, 2010 at 11:43 PM, Sanjeev  wrote:

> Abdullah,
>
> On Tue, Feb 09, 2010 at 02:12:24PM -0500, Abdullah Al-Dahlawi wrote:
> > Greeting ALL
> >
> > I am wondering if it is possible to monitor the ZFS ARC cache hits using
> > DTRACE. In orher words, would be possible to know how many ARC cache hits
> > have been resulted by a particular application such as firefox ??
>
> ZFS has kstat for a whole bunch of these. You could run "kstat -m zfs" and
> see
> if which ones help you.
>
> Correlating these stats to a particular application would be a little
> difficult.
>
> Using Dtrace, you could track down the read/write operations and see if
> they
> lead to a hit/miss in ARC. Take a look at the routine arc_read here :
>
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c
>
> "ARCSTAT_BUMP(arcstat_hits);" is the routine which increments the hit
> count.
>
> You could track arc_read()s called from the context of the application and
> check
> if it causes a hit or a miss.
>
> Thanks and regards,
> Sanjeev
> >
> > Your response is highly appreciated.
> >
> >
> > Thanks
> >
> >
> > --
> > Abdullah
> > dahl...@ieee.org
> > (IM) ieee2...@hotmail.com
> > 
> > Check The Fastest 500 Super Computers Worldwide
> > http://www.top500.org/list/2009/11/100
>
>
> --
> 
> Sanjeev Bagewadi
> Solaris RPE
> Bangalore, India
>



-- 
Abdullah Al-Dahlawi
http://www.google.com/profiles/aldahlawi

Check The Fastest 500 Super Computers Worldwide
http://www.top500.org/list/2009/11/100


Re: [zfs-discuss] zfs import fails even though all disks are online

2010-02-11 Thread Cindy Swearingen

Hi Marc,

I've not seen an unimportable pool when all the devices are reported as
ONLINE.

You might see if the fmdump -eV output reports any issues that happened
prior to this failure.

You could also attempt to rename the /etc/zfs/zpool.cache file and then
try to re-import the pool so that the device paths for this pool are
regenerated.
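
Roughly, something like this (a sketch; the cache file path is the default):

  # fmdump -eV | more                                 # look for prior device/IO errors
  # mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.old  # force device rediscovery
  # zpool import                                      # rescan all devices
  # zpool import zedpool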

Thanks,

Cindy

On 02/11/10 04:01, Marc Friesacher wrote:

Ok,

I'm trying to import a raidz zpool on my server but keep getting:

cannot import 'zedpool': one or more devices is currently unavailable

even though all disks are showing as ONLINE as below

fr...@vault:~# zpool import
  pool: zedpool
id: 10232199590840258590
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        zedpool        ONLINE
          raidz1       ONLINE
            c4d0       ONLINE
            c5d0       ONLINE
            c6d0       ONLINE
            c7d0       ONLINE
        logs
        zedpool        ONLINE
          mirror       ONLINE
            c12t0d0p0  ONLINE
            c10t0d0p0  ONLINE

fr...@vault:~# zpool import zedpool
cannot import 'zedpool': one or more devices is currently unavailable

Forcing it has the same result.

fr...@vault:~# zpool import -f zedpool
cannot import 'zedpool': one or more devices is currently unavailable

All the disks are definitely there:
fr...@vault:~# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c3d0 
  /p...@0,0/pci-...@d/i...@0/c...@0,0
   1. c4d0 
  /p...@0,0/pci-...@e/i...@0/c...@0,0
   2. c5d0 
  /p...@0,0/pci-...@e/i...@1/c...@0,0
   3. c6d0 
  /p...@0,0/pci-...@f/i...@0/c...@0,0
   4. c7d0 
  /p...@0,0/pci-...@f/i...@1/c...@0,0

fr...@vault:~# rmformat
Looking for devices...
 1. Logical Node: /dev/rdsk/c10t0d0p0
Physical Node: 
/p...@0,0/pci10de,2...@10/pci1106,3...@9,2/stor...@2/d...@0,0
Connected Device: SHINTARO EXTREME  0.00
Device Type: Removable
Bus: USB
Size: 4.0 GB
Label: 
Access permissions: Medium is not write protected.
 2. Logical Node: /dev/rdsk/c12t0d0p0
Physical Node: 
/p...@0,0/pci10de,2...@10/pci1106,3...@9,2/stor...@1/d...@0,0
Connected Device: Generic  USB Flash Disk   0.00
Device Type: Removable
Bus: USB
Size: 4.0 GB
Label: 
Access permissions: Medium is not write protected.

Can someone please help me work out what is going on?
I have searched the forums but can't find any posts where a pool won't import even though 
all disks are online. I've seen mention of the zdb command and using zdb -l 
/dev/dsk/ or zdb -l /dev/rdsk/ returns the same labels on all disks 
as well as pool details etc.
But other than that I am lost.

Thanks in advance,

Frizz.



Re: [zfs-discuss] zfs import fails even though all disks are online

2010-02-11 Thread Mark J Musante

On Thu, 11 Feb 2010, Cindy Swearingen wrote:


On 02/11/10 04:01, Marc Friesacher wrote:


fr...@vault:~# zpool import
  pool: zedpool
id: 10232199590840258590
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        zedpool        ONLINE
          raidz1       ONLINE
            c4d0       ONLINE
            c5d0       ONLINE
            c6d0       ONLINE
            c7d0       ONLINE
        logs
        zedpool        ONLINE
          mirror       ONLINE
            c12t0d0p0  ONLINE
            c10t0d0p0  ONLINE


Is this the actual unedited config output?  I've never seen the name of 
the pool show up after "logs".


One thing you can try is to use dtrace to look at any ldi_open_by_name(), 
ldi_open_by_devid(), or ldi_open_by_dev() calls that zfs makes.  That may 
give a clue as to what's going wrong.
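
For example, something along these lines while the import runs in another
shell (a sketch, assuming the first argument to ldi_open_by_name() is the
device path):

  # dtrace -n '
      fbt::ldi_open_by_name:entry  { @[probefunc, stringof((char *)arg0)] = count(); }
      fbt::ldi_open_by_devid:entry,
      fbt::ldi_open_by_dev:entry   { @[probefunc, "-"] = count(); }'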



Regards,
markm


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced

2010-02-11 Thread Dan Pritts
On Tue, Feb 09, 2010 at 03:44:02PM -0600, Wes Felter wrote:
> Have you considered Promise JBODs? They officially support 
> bring-your-own-drives.

Have you used these yourself, Wes?

I've been considering it, but I talked to a colleague at another
institution who had some really awful tales to tell about Promise
FC arrays.  They were clearly not ready for prime time.

OTOH a SAS jbod is a lot less complicated.

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224

Internet2 Spring Member Meeting
April 26-28, 2010 - Arlington, Virginia
http://events.internet2.edu/2010/spring-mm/


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced

2010-02-11 Thread Daniel Bakken
On Thu, Feb 11, 2010 at 2:46 PM, Dan Pritts  wrote:
> I've been considering it, but I talked to a colleague at another
> institution who had some really awful tales to tell about promise
> FC arrays.  They were clearly not ready for prime time.
>
> OTOH a SAS jbod is a lot less complicated.

We have used two Promise Vtrak m500f fibre channel arrays in heavy I/O
applications for several years. They don't always handle failing disks
gracefully; sometimes they require a hard reboot to recover. This is
partly due to crappy disks with weird failure modes. But a RAID system
should never require a reboot to recover from a single disk failure.
That defeats the whole purpose of RAID, which is supposed to survive
disk failures through redundancy.

However, Promise iSCSI and JBOD systems (we own one of each) are more
stable. We use them with Linux (XFS) and OpenSolaris (ZFS), and
haven't experienced any problems to date. Their JBOD systems, when
filled with Western Digital RE3 disks, are an extremely reliable,
low-cost, high performance ZFS storage solution.

Daniel Bakken
Systems Administrator
Economic Modeling Specialists Inc
1187 Alturas Drive
Moscow, Idaho 83843


[zfs-discuss] Victor L could you *please* help me with ZFS bug #6924390 - dedup hosed it

2010-02-11 Thread Eff Norwood
OpenSolaris snv_131 (problem also in snv_132 still) on an X4500, bug #6924390

Victor,

In researching this issue, I see that you know ZFS really well. I would 
greatly appreciate your help, and this problem seems interesting.

I created a large zpool named xpool and then created 3 filesystems on that pool 
called vms, bkp and alt. Of course I enabled dedup for the entire zpool - why 
not. Then today we decided to delete bkp, which was a 13TB filesystem with 
around 900GB of data in it, and now I am very familiar with bug #6924390.

When I try to import, of course it seems to hang, but it's really just going 
very slowly. Someone from OpenSolaris calculated it might take 2 weeks to 
import that volume, so now I know why it's a bug.

My main objective is to be able to rescue the data in vms - none of the rest 
matters. I am currently booted into snv_132 via the ILOM and can boot it to the 
network for ssh as well. Thank you very much in advance!


[zfs-discuss] Removing Cloned Snapshot

2010-02-11 Thread Tony MacDoodle
I am getting the following message when I try and remove a snapshot from a
clone:

bash-3.00# zfs destroy data/webser...@sys_unconfigd
cannot destroy 'data/webser...@sys_unconfigd': snapshot has dependent clones
use '-R' to destroy the following datasets:


The datasets are being used, so why can't I delete the snapshot?

Thanks


Re: [zfs-discuss] Removing Cloned Snapshot

2010-02-11 Thread Fajar A. Nugraha
On Fri, Feb 12, 2010 at 10:55 AM, Tony MacDoodle  wrote:
> I am getting the following message when I try and remove a snapshot from a
> clone:
>
> bash-3.00# zfs destroy data/webser...@sys_unconfigd
> cannot destroy 'data/webser...@sys_unconfigd': snapshot has dependent clones
> use '-R' to destroy the following datasets:

Is there something else below that line? Like the name of the clones?

> The datasets are being used, so why can't I delete the snapshot?

because it's used as the base for the clones.

-- 
Fajar


Re: [zfs-discuss] Removing Cloned Snapshot

2010-02-11 Thread Daniel Carosone
On Thu, Feb 11, 2010 at 10:55:20PM -0500, Tony MacDoodle wrote:
> I am getting the following message when I try and remove a snapshot from a
> clone:
> 
> bash-3.00# zfs destroy data/webser...@sys_unconfigd
> cannot destroy 'data/webser...@sys_unconfigd': snapshot has dependent clones
> use '-R' to destroy the following datasets:
> 
> The datasets are being used, so why can't I delete the snapshot?

Clones are writable copies of snapshots, and share space with the
snapshot that is their basis (initially, all the space).  That space
belongs to the snapshot, which in turn belongs to another dataset
(from which it was originally taken).  For clones, you will see that
"referenced" is often much more than "usedbydataset", as a result.

You can use zfs promote to change which dataset owns the base
snapshot and which is the dependent clone with a parent, so you can
delete the other - but if you want both datasets you will need to keep
the snapshot they share.
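
For example (a sketch; "data/clone1" is a hypothetical clone name, since the
clone names weren't shown in your output):

  # zfs get -r origin data        # see which datasets are clones, and of which snapshot
  # zfs promote data/clone1       # the shared snapshot now belongs to the promoted clone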

--
Dan.





Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-11 Thread taemun
Do you think that more RAM would help this progress any faster? We've just
hit 48 hours. No visible progress (although that doesn't really mean
much).

It is presently in a system with 8GB of RAM; I could try to move the
pool across to a system with 20GB of RAM, if that is likely to
expedite the process. Of course, if it isn't going to make any
difference, I'd rather not restart this process.

Thanks

On 12 February 2010 06:08, Bill Sommerfeld  wrote:
> On 02/11/10 10:33, Lori Alt wrote:
>>
>> This bug is closed as a dup of another bug which is not readable from
>> the opensolaris site, (I'm not clear what makes some bugs readable and
>> some not).
>
> the other bug in question was opened yesterday and probably hasn't had time
> to propagate.
>
>                                        - Bill
>
>
>