Re: [zfs-discuss] What are .$EXTEND directories?

2011-01-03 Thread Alan Wright

On 1/3/11 10:51 AM, Chris Ridd wrote:


On 3 Jan 2011, at 17:08, Volker A. Brandt wrote:


On our build 147 server (pool version 22) I've noticed that some directories called 
".$EXTEND" (no quotes) are appearing underneath some shared NFS filesystems, containing 
an empty file called "$QUOTA". We aren't using quotas.

What are these? Googling for the names doesn't really work too well :-(

I don't think they're doing any harm, but I'm curious. Someone's bound to
notice and ask me as well :-)


Well, googling for '.$EXTEND' and '$QUOTA' does give some results,
especially when combined with 'NTFS'. :-)


Aha! Foolishly I'd used zfs in my search string :-)


Check out the table on "Metafiles" here:

  http://en.wikipedia.org/wiki/NTFS


OK, so they're probably an artefact of having set sharesmb=on, even though I've 
not joined the box to a domain yet.


Those objects are created automatically when you share a dataset
over SMB to support remote ZFS user/group quota management from
the Windows desktop.  The dot in .$EXTEND is to make the directory
less intrusive on Solaris.

There is no Solaris or ZFS functionality associated with those
objects and you can safely delete them on ZFS: they will be
recreated as required whenever the dataset is shared over SMB.
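
For example, a minimal sketch of cleaning them up (the dataset name
tank/share is a placeholder; the objects simply reappear the next time
the dataset is shared over SMB):

# rm -rf '/tank/share/.$EXTEND'
# zfs set sharesmb=off tank/share
# zfs set sharesmb=on tank/share    # .$EXTEND and $QUOTA come back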

For more information on those files, look for Quota Tracking in
http://msdn.microsoft.com/en-us/library/ms995846.aspx

Alan


[zfs-discuss] Single VDEV pool permanent and checksum errors after replace

2011-01-03 Thread Chris Murray
Hi,

I have some strange goings-on with my VM of Solaris 11 Express, and I
hope someone can help.

It shares out files for other virtual machines for use in ESXi 4.0 (the
Solaris VM itself also runs on that host).

I had two disks inside the VM - one for rpool and one for 'vmpool'.
All was fine.
vmpool has some deduped data. That was also fine.
I added a Samsung SSD to the ESXi host, created a 512MB VMDK and a
20GB VMDK, and added as log and cache, respectively. This also worked
fine.

At this point, the pool is made of c8t1d0 (data), c8t2d0 (logs),
c8t3d0 (cache). I decide that to add some redundancy, I'll add a
mirrored virtual disk. At this point, it happens that the VMDK for
this disk (c8t4d0) actually resides on the same physical disk as
c8t1d0. The idea was to perform the logical split in Solaris Express
first, deal with the IO penalty of writing everything twice to the
same physical disk (even though Solaris thinks they're two separate
ones), then move that VMDK onto a separate physical disk shortly. This
should in the short term protect against bit-flips and small errors on
the single physical disk that ESXi has, until a second one is
installed. I have a think about capacity, though, and decide I'd
prefer the mirror to be of c8t4d0 and c8t5d0 instead. So, it seems I
want to go from one single disk (c8t1d0), to a mirror of c8t4d0 and
c8t5d0. In my mind, that's a 'zpool replace' onto c8t4d0 and a 'zpool
attach' of c8t5d0. I kick off the replace, and all goes fine. Part way
through I try to do the attach as well, but am politely told I can't.
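
(For clarity, the replace-then-attach I had in mind was, roughly:

# zpool replace vmpool c8t1d0 c8t4d0
# zpool attach vmpool c8t4d0 c8t5d0
)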

The replace itself completed without complaint, however on completion,
virtual machines whose disks are inside 'vmpool' start hanging,
checksum errors rapidly start counting up, and since there's no
redundancy, nothing can be done to repair them.


 pool: vmpool
state: DEGRADED
status: One or more devices has experienced an error resulting in data
       corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
       entire pool from backup.
  see: http://www.sun.com/msg/ZFS-8000-8A
scan: resilvered 48.2G in 2h53m with 0 errors on Mon Jan  3 20:45:49 2011
config:

       NAME        STATE     READ WRITE CKSUM
       vmpool      DEGRADED     0     0 25.6K
         c8t4d0    DEGRADED     0     0 25.6K  too many errors
       logs
         c8t2d0    ONLINE       0     0     0
       cache
         c8t3d0    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

       /vmpool/nfs/duck/duck_1-flat.vmdk
       /vmpool/nfs/panda/template.xppro-flat.vmdk


At this point, I remove disk c8t1d0, and snapshot the entire VM in
case I do any further damage. This leads to my first two questions:

#1 - are there any suspicions as to what's happened here? How come
the resilver completed fine but now there are checksum errors on the
replacement disk? It does reside on the same physical disk, after all.
Could this be something to do with me attempting the attach during the
replace?
#2 - in my mind, c8t1d0 contains the state of the pool just prior
to the cutover to c8t4d0. Is there any way I can get this back and
scrap the contents of c8t4d0? A 'zpool import -D' is fruitless, but I
imagine there's some way of tricking Solaris into seeing c8t1d0 as a
single-disk pool again?
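
(To be concrete: the first command below is what I ran; the second is a
guess at pointing import directly at the device directory:

# zpool import -D
# zpool import -d /dev/dsk -D
)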

Now that I've snapshotted the VM and have a sort of safety net, I run
a scrub, which unsurprisingly unearths checksum errors and lists all
of the files which have problems:


 pool: vmpool
state: ONLINE
status: One or more devices has experienced an error resulting in data
       corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
       entire pool from backup.
  see: http://www.sun.com/msg/ZFS-8000-8A
scan: scrub repaired 0 in 0h30m with 95 errors on Mon Jan  3 21:47:25 2011
config:

       NAME        STATE     READ WRITE CKSUM
       vmpool      ONLINE       0     0   190
         c8t4d0    ONLINE       0     0   190
       logs
         c8t2d0    ONLINE       0     0     0
       cache
         c8t3d0    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

       /vmpool/nfs/duck/duck-flat.vmdk
       /vmpool/nfs/duck/Windows Server 2003 Standard Edition.nvram
       /vmpool/nfs/duck/duck_1-flat.vmdk
       /vmpool/nfs/eagle/eagle-flat.vmdk
       /vmpool/nfs/eagle/eagle_1-flat.vmdk
       /vmpool/nfs/eagle/eagle_2-flat.vmdk
       /vmpool/nfs/eagle/eagle_3-flat.vmdk
       /vmpool/nfs/eagle/eagle_5-flat.vmdk
       /vmpool/nfs/panda/Windows XP Professional.nvram
       /vmpool/nfs/panda/panda-flat.vmdk
       /vmpool/nfs/panda/template.xppro-flat.vmdk


I 'zpool clear vmpool', power on one of the VMs, and the checksum
count quickly reaches 970.

#3 - why would this be the case? I thought the purpose of a scrub
was to traverse all blocks, read them, and unearth problems. I'm
wondering why these 970 errors weren't unearthed by the scrub.

Re: [zfs-discuss] zpool status keeps telling "resilvered"

2011-01-03 Thread pieterjan
We "zpool clear" ed about a hundred times: didn't work. After the kernel patch 
from december 20th and a reboot or two, it finally disappeared, though.


[zfs-discuss] zfs recv failing - "invalid backup stream"

2011-01-03 Thread Brandon High
On an snv_151a system I'm trying to do a send of rpool; it works when
using -n, but when I actually try to receive, it fails.

Scrubs pass without issue; it's just the recv that fails.

# zfs send -R rp...@copy | zfs recv -n -vduF radar/foo
would receive full stream of rp...@copy into radar/f...@copy
would receive full stream of rpool/r...@copy into radar/foo/r...@copy
cannot create 'radar/foo/ROOT/snv_1...@copy': parent does not exist
# zfs send -R rp...@copy | zfs recv -vduF radar/foo
receiving full stream of rp...@copy into radar/f...@copy
cannot receive new filesystem stream: invalid backup stream
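
(One untested idea for narrowing this down: send each dataset
non-recursively - the per-dataset properties that -R carries won't be
preserved, but it should show which stream recv considers invalid.
"rpool@copy" is the unmunged snapshot name, per the zstreamdump below:

# zfs send rpool@copy | zfs recv -vuF radar/foo
# zfs send rpool/ROOT@copy | zfs recv -vu radar/foo/ROOT
# zfs send rpool/ROOT/snv_151a@copy | zfs recv -vu radar/foo/ROOT/snv_151a
)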

zstreamdump shows this:
BEGIN record
hdrtype = 2
features = 4
magic = 2f5bacbac
creation_time = 0
type = 0
flags = 0x0
toguid = 0
fromguid = 0
toname = rp...@copy
nvlist version: 0
tosnap = copy
fss = (embedded nvlist)
nvlist version: 0
0x4f65ffc5611d9a16 = (embedded nvlist)
nvlist version: 0
name = rpool
parentfromsnap = 0x0
props = (embedded nvlist)
nvlist version: 0
copies = 0x1
compression = 0x2
dedup = 0x2
sync = 0x2
com.sun:auto-snapshot = false
org.opensolaris.caiman:install = ready
atime = 0x0
(end props)

snaps = (embedded nvlist)
nvlist version: 0
copy = 0xf7b725c8fb4909cc
(end snaps)

snapprops = (embedded nvlist)
nvlist version: 0
copy = (embedded nvlist)
nvlist version: 0
(end copy)

(end snapprops)

(end 0x4f65ffc5611d9a16)

0x26c5ffeede47502 = (embedded nvlist)
nvlist version: 0
name = rpool/ROOT
parentfromsnap = 0xf7b725c8fb4909cc
props = (embedded nvlist)
nvlist version: 0
canmount = 0x0
mountpoint = legacy
(end props)

snaps = (embedded nvlist)
nvlist version: 0
copy = 0x5ac3999f0be01307
(end snaps)

snapprops = (embedded nvlist)
nvlist version: 0
copy = (embedded nvlist)
nvlist version: 0
(end copy)

(end snapprops)

(end 0x26c5ffeede47502)

0x90cb12b83fc2546a = (embedded nvlist)
nvlist version: 0
name = rpool/ROOT/snv_151a
parentfromsnap = 0x5ac3999f0be01307
props = (embedded nvlist)
nvlist version: 0
org.opensolaris.libbe:uuid =
ac29b2b5-fe1f-6c55-ab3b-ed3e9e9d53db
mountpoint = /
canmount = 0x2
org.opensolaris.libbe:policy = static
(end props)

snaps = (embedded nvlist)
nvlist version: 0
copy = 0xec9bfc4eddeadb9c
(end snaps)

snapprops = (embedded nvlist)
nvlist version: 0
copy = (embedded nvlist)
nvlist version: 0
(end copy)

(end snapprops)

(end 0x90cb12b83fc2546a)

(end fss)

END checksum = 428d2b3e38/3a24e0eff5bb/21ff89e9b44e75/f3ec1d43d884647
BEGIN record
hdrtype = 1
features = 4
magic = 2f5bacbac
creation_time = 4d218647
type = 2
flags = 0x0
toguid = f7b725c8fb4909cc
fromguid = 0
toname = rp...@copy
END checksum = a4a5178c744c/a8332b7147dc247c/6d134a0269a1a1dd/f88d9b05376123dd
BEGIN record
hdrtype = 1
features = 4
magic = 2f5bacbac
creation_time = 4d218647
type = 2
flags = 0x0
toguid = 5ac3999f0be01307
fromguid = 0
toname = rpool/r...@copy
END checksum = 30116b3946/10d44de627105/3aa95a2a944e4ff/6f4e3100a0b41f08
BEGIN record
hdrtype = 1
features = 4
magic = 2f5bacbac
creation_time = 4d218647
t

Re: [zfs-discuss] A few questions

2011-01-03 Thread Richard Elling
On Jan 3, 2011, at 2:10 PM, Erik Trimble wrote
> On 1/3/2011 8:28 AM, Richard Elling wrote:
>> 
>> On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote:
>>> On 12/26/10 05:40 AM, Tim Cook wrote:
 On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling  wrote:
 
 There are more people outside of Oracle developing for ZFS than inside 
 Oracle.
 This has been true for some time now.
 
 
 Pardon my skepticism, but where is the proof of this claim (I'm quite 
 certain you know I mean no disrespect)?  Solaris 11 Express was a massive 
 leap in functionality and bugfixes to ZFS.  I've seen exactly nothing out 
 of "outside of Oracle" in the time since it went closed.  We used to see 
 updates bi-weekly out of Sun.  Nexenta spending hundreds of man-hours on a 
 GUI and userland apps isn't work on ZFS.
 
 
>>> 
>>> Exactly my observation as well. I haven't seen any ZFS related development 
>>> happening at Illumos or Nexenta, at least not yet.
>> 
>> I am quite sure you understand how pipelines work :-)
>>  -- richard
> 
> I'm getting pretty close to my pain threshold on the BP_rewrite stuff, since 
> not having that feature's holding up a big chunk of work I'd like to push.
> 
> If anyone outside of Oracle is working on some sort of change to ZFS that 
> will allow arbitrary movement/placement of pre-written slabs, can they please 
> contact me?  I'm pretty much at the point where I'm going to start diving 
> into that chunk of the source to see if there's something little old me can 
> do, and I'd far rather help on someone else's implementation than have to do 
> it myself from scratch.
> 
> I'd prefer a private contact, as I realize that such work may not be ready 
> for public discussion yet.
> 
> Thanks, folks!
> 
> Oh, and this is completely just me, not Oracle talking in any way.

Oracle doesn't seem to say much at all :-(

But for those interested, Nexenta is actively hiring people to work in this 
area.
 -- richard



Re: [zfs-discuss] A few questions

2011-01-03 Thread Erik Trimble

On 1/3/2011 8:28 AM, Richard Elling wrote:

On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote:

On 12/26/10 05:40 AM, Tim Cook wrote:
On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling 
<richard.ell...@gmail.com> wrote:



There are more people outside of Oracle developing for ZFS than
inside Oracle.
This has been true for some time now.


Pardon my skepticism, but where is the proof of this claim (I'm 
quite certain you know I mean no disrespect)?  Solaris 11 Express was 
a massive leap in functionality and bugfixes to ZFS.  I've seen 
exactly nothing out of "outside of Oracle" in the time since it went 
closed.  We used to see updates bi-weekly out of Sun.  Nexenta 
spending hundreds of man-hours on a GUI and userland apps isn't work 
on ZFS.





Exactly my observation as well. I haven't seen any ZFS related 
development happening at Illumos or Nexenta, at least not yet.


I am quite sure you understand how pipelines work :-)
 -- richard




I'm getting pretty close to my pain threshold on the BP_rewrite stuff, 
since not having that feature's holding up a big chunk of work I'd like 
to push.


If anyone outside of Oracle is working on some sort of change to ZFS 
that will allow arbitrary movement/placement of pre-written slabs, can 
they please contact me?  I'm pretty much at the point where I'm going to 
start diving into that chunk of the source to see if there's something 
little old me can do, and I'd far rather help on someone else's 
implementation than have to do it myself from scratch.


I'd prefer a private contact, as I realize that such work may not be 
ready for public discussion yet.


Thanks, folks!


Oh, and this is completely just me, not Oracle talking in any way.

--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] What are .$EXTEND directories?

2011-01-03 Thread Chris Ridd

On 3 Jan 2011, at 17:08, Volker A. Brandt wrote:

>> On our build 147 server (pool version 22) I've noticed that some directories 
>> called ".$EXTEND" (no quotes) are appearing underneath some shared NFS 
>> filesystems, containing an empty file called "$QUOTA". We aren't using 
>> quotas.
>> 
>> What are these? Googling for the names doesn't really work too well :-(
>> 
>> I don't think they're doing any harm, but I'm curious. Someone's bound to
>> notice and ask me as well :-) 
> 
> Well, googling for '.$EXTEND' and '$QUOTA' does give some results,
> especially when combined with 'NTFS'. :-)

Aha! Foolishly I'd used zfs in my search string :-)

> Check out the table on "Metafiles" here:
> 
>  http://en.wikipedia.org/wiki/NTFS

OK, so they're probably an artefact of having set sharesmb=on, even though I've 
not joined the box to a domain yet.
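
(Quick way to check which datasets have it set - a one-liner sketch:

# zfs get -r -o name,value sharesmb rpool
)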

Cheers,

Chris


Re: [zfs-discuss] Running on Dell hardware?

2011-01-03 Thread Stephan Budach

On 03.01.11 19:41, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Stephan Budach

Well a couple of weeks before Christmas, I enabled the onboard bcom nics
on my R610 again, to use them as IPMI ports - I didn't even use them in

You don't have to enable the broadcom nic in order for them to do IPMI.  In
my R710, I went into BIOS, and disabled all the bcom nics.  The primary NIC
doesn't allow you to *fully* disable it.  It says something like "Disabled
(OS)"...  This means the OS can't see it, but it's still doing IPMI assuming
you configured IPMI in the BIOS interface (Ctrl-E)

It seems to work fine in this configuration.


That's worth a try. I will check that tomorrow.

Thanks,
budy


Re: [zfs-discuss] zpool status keeps telling "resilvered"

2011-01-03 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of pieterjan
> 
> so I
> suppose the message is just informational.
> 
> I don't want that message there, 

I'm pretty sure the answer is "zpool clear"



Re: [zfs-discuss] Running on Dell hardware?

2011-01-03 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Stephan Budach
> 
> Well a couple of weeks before Christmas, I enabled the onboard bcom nics
> on my R610 again, to use them as IPMI ports - I didn't even use them in

You don't have to enable the broadcom nic in order for them to do IPMI.  In
my R710, I went into BIOS, and disabled all the bcom nics.  The primary NIC
doesn't allow you to *fully* disable it.  It says something like "Disabled
(OS)"...  This means the OS can't see it, but it's still doing IPMI assuming
you configured IPMI in the BIOS interface (Ctrl-E)

It seems to work fine in this configuration.



[zfs-discuss] ZFS advice for laptop

2011-01-03 Thread Jana Mann
Hi All,

I have a laptop with twin 500 GB hard disks and I'm looking at how best to 
protect my data.  I would run OpenSolaris as my primary system; however, I 
have specific hardware requirements which mean I am forced to use Windows 7 
as my primary OS.  However, I would like to utilise the power of ZFS to 
manage my data.

Currently I am experimenting with the following set-up:

1) Create two 350 GB raw partitions in Windows 7, one on each physical disk 
(remember, one disk will have the Windows 7 NTFS partitions, and I might 
install a Linux host on the other disk, meaning a few EXT4 partitions)
2) Create a VirtualBox OpenSolaris guest on a 10 GB VDI disk
3) Give the guest VM raw disk access to the 2 partitions created in step 1 and 
attach them using the VirtualBox SATA controller; a VMDK container file is 
created to represent each partition (see the sketch after this list)
4) Make VirtualBox honour flushing by issuing the following command for the two 
partitions:

VBoxManage setextradata "VM name" 
"VBoxInternal/Devices/ahci/0/LUN#[x]/Config/IgnoreFlush" 0

5) In OpenSolaris, create a mirrored pool using the two partitions and make it 
an SMB share
6) Mount the share in the Windows 7 host
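
For reference, here is roughly what steps 3 and 5 look like in my set-up (a 
sketch: file names, drive numbers and partition numbers are illustrative, and 
the Solaris device names depend on how the guest enumerates the SATA ports).

On the Windows host:

VBoxManage internalcommands createrawvmdk -filename C:\vm\zfsdisk0.vmdk -rawdisk \\.\PhysicalDrive0 -partitions 3
VBoxManage internalcommands createrawvmdk -filename C:\vm\zfsdisk1.vmdk -rawdisk \\.\PhysicalDrive1 -partitions 2

Inside the OpenSolaris guest:

# zpool create tank mirror c7t1d0 c7t2d0
# zfs create -o sharesmb=on tank/data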

Now, the above has worked a treat so far, though I have yet to test it by 
pulling the plug on the VM during read and write operations etc.  The reason 
for posting is that the official documentation recommends NOT using partitions 
on disks; using whole physical disks is encouraged.

What I would like to know is what risks I carry by using partitions.  I 
attempted to search for the rationale behind this advice but I can't seem to 
find it...

If the implications are serious and likely to occur, then I have thought of 
using one of the internal 500 GB disks coupled with an external USB 500 GB 
disk that I have.  In that case the pool would at times operate in a degraded 
state, as the USB drive will not always be connected, and resilvering would 
take place to sync the mirror whenever it is reattached.  Again, are there any 
serious implications of doing this?

Many Thanks,

Jana


Re: [zfs-discuss] What are .$EXTEND directories?

2011-01-03 Thread Volker A. Brandt
> On our build 147 server (pool version 22) I've noticed that some directories 
> called ".$EXTEND" (no quotes) are appearing underneath some shared NFS 
> filesystems, containing an empty file called "$QUOTA". We aren't using quotas.
> 
> What are these? Googling for the names doesn't really work too well :-(
> 
> I don't think they're doing any harm, but I'm curious. Someone's bound to
> notice and ask me as well :-) 

Well, googling for '.$EXTEND' and '$QUOTA' does give some results,
especially when combined with 'NTFS'. :-)

Check out the table on "Metafiles" here:

  http://en.wikipedia.org/wiki/NTFS


Regards -- Volker
-- 

Volker A. Brandt   Consulting and Support for Oracle Solaris
Brandt & Brandt Computer GmbH   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de
Commercial register: Amtsgericht Bonn, HRB 10513  Shoe size: 46
Managing directors: Rainer J. H. Brandt and Volker A. Brandt


[zfs-discuss] What are .$EXTEND directories?

2011-01-03 Thread Chris Ridd
On our build 147 server (pool version 22) I've noticed that some directories 
called ".$EXTEND" (no quotes) are appearing underneath some shared NFS 
filesystems, containing an empty file called "$QUOTA". We aren't using quotas.

What are these? Googling for the names doesn't really work too well :-(

I don't think they're doing any harm, but I'm curious. Someone's bound to 
notice and ask me as well :-)

Cheers,

Chris


Re: [zfs-discuss] Disks are unavailable

2011-01-03 Thread Richard Elling
On Dec 30, 2010, at 9:06 PM, Jeff Ruetten wrote:

> I am using VirtualBox and accessing three entire 2 TB raw disks in a Windows 
> 7 Ultimate host.  One day, the guest (Nexenta) was stopped, and when I 
> restarted it all three disks are showing as unavailable.  Is there any way to 
> recover from this?  I would really like not to lose all my family's pictures 
> from the last 7 years.  I am wondering if I should use the VirtualBox command 
> to recreate the drives.  Would that mess up the data?  Any help would be 
> greatly appreciated.  There is an irritating lack of help on the Nexenta and 
> VirtualBox forums.  Lots of views and not even a "you're probably screwed" 
> message.  Anyway, I would appreciate any help.

From my perspective, I have no idea how you have configured the thing.
In general, if a virtual environment does not provide the LUNs to a guest
OS, then that is the place to start.  So far, I see nothing in this post
that relates to ZFS (or Nexenta).  Perhaps a picture will help?
 -- richard



Re: [zfs-discuss] Stuck "Reading ZFS config" on boot after destroy of dedupe volume

2011-01-03 Thread Richard Elling
On Dec 25, 2010, at 8:47 AM, Clint Priest wrote:

> I have a zfs system (Nexenta 3.03) that froze up after I tried to destroy a 
> zfs volume which was set to dedupe data.  
> 
> After rebooting the console is stopped on the line "Reading ZFS config: -"
> 
> Any suggestions on what I can I do to get this system up and running?

You need to wait for it to finish removing the references to the dedup'ed data.
 -- richard



Re: [zfs-discuss] A few questions

2011-01-03 Thread Richard Elling
On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote:

> On 12/26/10 05:40 AM, Tim Cook wrote:
>> 
>> 
>> 
>> On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling  wrote:
>> 
>> There are more people outside of Oracle developing for ZFS than inside 
>> Oracle.
>> This has been true for some time now.
>> 
>>> 
>> 
>> 
>> 
>> Pardon my skepticism, but where is the proof of this claim (I'm quite 
>> certain you know I mean no disrespect)?  Solaris 11 Express was a massive 
>> leap in functionality and bugfixes to ZFS.  I've seen exactly nothing out of 
>> "outside of Oracle" in the time since it went closed.  We used to see 
>> updates bi-weekly out of Sun.  Nexenta spending hundreds of man-hours on a 
>> GUI and userland apps isn't work on ZFS.
>> 
>> 
> 
> Exactly my observation as well. I haven't seen any ZFS related development 
> happening at Illumos or Nexenta, at least not yet.

I am quite sure you understand how pipelines work :-)
 -- richard



Re: [zfs-discuss] Disks are unavailable

2011-01-03 Thread Stephan Budach

On 31.12.10 06:06, Jeff Ruetten wrote:

I am using VirtualBox and accessing three entire 2 TB raw disks in a Windows 7 Ultimate 
host.  One day, the guest (Nexenta) was stopped, and when I restarted it all three disks 
are showing as unavailable.  Is there any way to recover from this?  I would really like 
not to lose all my family's pictures from the last 7 years.  I am wondering if I should 
use the VirtualBox command to recreate the drives.  Would that mess up the data?  Any 
help would be greatly appreciated.  There is an irritating lack of help on the Nexenta 
and VirtualBox forums.  Lots of views and not even a "you're probably screwed" 
message.  Anyway, I would appreciate any help.



Jeff, could you please be more specific about how you used the "raw 
disks"? Did you connect them via iSCSI? Or do you run Nexenta inside VB 
and share them out via iSCSI from there?

In any case, I'd stand back from using any disk-related command inside 
VB until you know exactly what is going on!


Cheers,
budy


Re: [zfs-discuss] Hard Errors on HDDs

2011-01-03 Thread Benji
Thanks for the input!

I am using an iPass-to-iPass cable that connects my HBA to my backplane. It was 
firmly locked into both connectors.

I offlined 2 supposedly faulty SAMSUNG drives, scanned their whole surface 
using estools and it did not report any errors. 

I'm starting to think that it may be an issue with the mpt driver and the HBA 
card. Anyone else using an LSI 1068E based HBA card and having issues?

Thanks


[zfs-discuss] Host based zfs config with Oracle's Unified Storage 7000 series

2011-01-03 Thread Shawn Joy
My question regarding the 7000 series storage is more from the perspective of 
the host-side ZFS config. It is my understanding that the 7000 storage presents 
an FC LUN to the host. Yes, this LUN is a ZFS LUN within the 7000 storage; 
however, the host still sees it as only one LUN. If I configure a host-based 
ZFS storage device on top of this LUN, I have no host-based ZFS redundancy. So 
do we still need to create a host-based ZFS mirror or a host-based ZFS raidz 
device when using a 7000 series storage array?
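
(To make the question concrete: with two LUNs exported from the array, is a 
host-side layout like the following still necessary? Device names are 
hypothetical:

# zpool create apppool mirror c5t0d0 c5t1d0
)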

Thanks,
Shawn


Re: [zfs-discuss] Hard Errors on HDDs

2011-01-03 Thread Orvar Korvar
Maybe a cable is loose? Reinsert all the cables into all drives? And the 
controller card? 

Yes, ZFS detects such problems.


[zfs-discuss] ZFS snapshot of zone that contains UFS SAN-attached file systems

2011-01-03 Thread Shawn Joy
Hi All, 

If a zone root is on ZFS, but that zone also contains SAN-attached UFS devices, 
what is recorded in a ZFS snapshot of the zone?

Does the snapshot only contain the ZFS root info? 

How would one recover this complete zone?

Thanks,
Shawn


[zfs-discuss] Hard Errors on HDDs

2011-01-03 Thread Benji
Hi,

I recently noticed that there are a lot of hard errors on multiple drives 
being reported by iostat. Also, dmesg reports various messages from the 
mpt driver.

My config is:
MB: SUPERMICRO X8SIL-F
HBA: AOC-USAS-L8i (LSI 1068)
RAM: 4GB ECC
SunOS SAN 5.11 snv_134 i86pc i386 i86pc Solaris

My configuration is a stripe of mirrored vdevs across 13 drives (one mirror had 
an error on a drive, which I cleared, but just to be safe I added another drive 
to that mirror; the attach command is sketched after the status output):

 NAME STATE READ WRITE CKSUM
zpoolONLINE   0 0 0
  mirror-0   ONLINE   0 0 0
c4t13d0  ONLINE   0 0 0
c4t19d0  ONLINE   0 0 0
  mirror-1   ONLINE   0 0 0
c4t25d0  ONLINE   0 0 0
c4t31d0  ONLINE   0 0 0
  mirror-2   ONLINE   0 0 0
c4t12d0  ONLINE   0 0 0
c4t18d0  ONLINE   0 0 0
  mirror-3   ONLINE   0 0 0
c4t24d0  ONLINE   0 0 0
c4t30d0  ONLINE   0 0 0
  mirror-4   ONLINE   0 0 0
c4t11d0  ONLINE   0 0 0
c4t17d0  ONLINE   0 0 0
c4t10d0  ONLINE   0 0 0
  mirror-5   ONLINE   0 0 0
c4t23d0  ONLINE   0 0 0
c4t29d0  ONLINE   0 0 0
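
(For reference, the extra side of mirror-4 was attached with something along 
these lines - a sketch, taking c4t17d0 as the existing member and "zpool" as 
the pool name from the output above:

# zpool attach zpool c4t17d0 c4t10d0
)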


Here's the output from iostat -En:

c6d1 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Model: WDC WD3200BEKT- Revision:  Serial No:  WD-WXR1A30 Size: 320.07GB 
<320070352896 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0
c7d1 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Model: WDC WD3200BEKT- Revision:  Serial No:  WD-WXR1A30 Size: 320.07GB 
<320070352896 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0
c4t12d0  Soft Errors: 0 Hard Errors: 252 Transport Errors: 0
Vendor: ATA  Product: SAMSUNG HD203WI  Revision: 0003 Serial No:
Size: 2000.40GB <2000398934016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c4t13d0  Soft Errors: 0 Hard Errors: 252 Transport Errors: 0
Vendor: ATA  Product: SAMSUNG HD203WI  Revision: 0002 Serial No:
Size: 2000.40GB <2000398934016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c4t18d0  Soft Errors: 0 Hard Errors: 252 Transport Errors: 0
Vendor: ATA  Product: SAMSUNG HD203WI  Revision: 0003 Serial No:
Size: 2000.40GB <2000398934016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c4t19d0  Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: SAMSUNG HD203WI  Revision: 0002 Serial No:
Size: 2000.40GB <2000398934016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c4t24d0  Soft Errors: 0 Hard Errors: 252 Transport Errors: 0
Vendor: ATA  Product: SAMSUNG HD203WI  Revision: 0003 Serial No:
Size: 2000.40GB <2000398934016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c4t25d0  Soft Errors: 0 Hard Errors: 252 Transport Errors: 0
Vendor: ATA  Product: SAMSUNG HD203WI  Revision: 0002 Serial No:
Size: 2000.40GB <2000398934016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c4t30d0  Soft Errors: 0 Hard Errors: 252 Transport Errors: 0
Vendor: ATA  Product: SAMSUNG HD203WI  Revision: 0003 Serial No:
Size: 2000.40GB <2000398934016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c4t31d0  Soft Errors: 0 Hard Errors: 252 Transport Errors: 0
Vendor: ATA  Product: SAMSUNG HD203WI  Revision: 0002 Serial No:
Size: 2000.40GB <2000398934016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c4t17d0  Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: WDC WD20EADS-32S Revision: 0A01 Serial No:
Size: 2000.40GB <2000398934016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c4t11d0  Soft Errors: 0 Hard Errors: 17 Transport Errors: 116
Vendor: ATA  Product: WDC WD20EADS-32S Revision: 5G04 Serial No:
Size: 2000.40GB <2000398934016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 17 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c4t23d0  Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: ST31500341AS

[zfs-discuss] Disks are unavailable

2011-01-03 Thread Jeff Ruetten
I am using VirtualBox and accessing three entire 2 TB raw disks in a Windows 7 
Ultimate host.  One day, the guest (Nexenta) was stopped, and when I restarted 
it all three disks are showing as unavailable.  Is there any way to recover from 
this?  I would really like not to lose all my family's pictures from the last 7 
years.  I am wondering if I should use the VirtualBox command to recreate the 
drives.  Would that mess up the data?  Any help would be greatly appreciated.  
There is an irritating lack of help on the Nexenta and VirtualBox forums.  Lots 
of views and not even a "you're probably screwed" message.  Anyway, I would 
appreciate any help.


[zfs-discuss] Stuck "Reading ZFS config" on boot after destroy of dedupe volume

2011-01-03 Thread Clint Priest
I have a zfs system (Nexenta 3.03) that froze up after I tried to destroy a zfs 
volume which was set to dedupe data.  

After rebooting the console is stopped on the line "Reading ZFS config: -"

Any suggestions on what I can I do to get this system up and running?

Thanks!


Re: [zfs-discuss] Intermittent ZFS hang

2011-01-03 Thread Paul Armstrong
I'm not sure if they still apply to B134, but it seems similar to problems 
caused by transaction group issues in the past.

Have you looked at the threads involving setting zfs:zfs_write_limit_override, 
zfs:zfs_vdev_max_pending or zfs:zfs_txg_timeout in /etc/system?
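
For reference, the /etc/system entries in question look like this (the values 
are purely illustrative, not recommendations, and a reboot is needed for them 
to take effect):

set zfs:zfs_write_limit_override = 0x20000000
set zfs:zfs_vdev_max_pending = 10
set zfs:zfs_txg_timeout = 5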

Paul


Re: [zfs-discuss] /export/home/username not mounting after legacy mounts on 11 Express

2011-01-03 Thread alan pae
The issue was that somehow a file got created in /export, and ZFS won't mount 
over a non-empty mount point.

rm * and the issue was resolved.
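
In case anyone else hits this, roughly the check-and-fix (a sketch; adjust the 
dataset and path to your layout):

# zfs umount rpool/export    # get ZFS out of the way first
# ls -lA /export             # any stray entries here block the mount
# rm /export/stray-file      # hypothetical file name
# zfs mount -a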

alan


[zfs-discuss] zpool status keeps telling "resilvered"

2011-01-03 Thread pieterjan
Hi!

We have a raidz2 pool with 1 spare. Recently, one of the drives generated a lot 
of checksum errors, so it was automatically replaced with the spare. Since the 
errors stopped at some point, we figured that the drive itself was not at 
fault. We offlined it, zeroed it and onlined it again, started resilvering, and 
manually detached the spare drive. The zpool status is ONLINE and mentions 
"resilver completed with 0 errors", but the resilvered drive still has "624G 
resilvered" next to it. There's no performance impact for the time being, so I 
suppose the message is just informational.
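
(For concreteness, the sequence was roughly the following; device names are 
illustrative:

# zpool offline tank c2t5d0
# dd if=/dev/zero of=/dev/rdsk/c2t5d0s0 bs=1024k   # zero the drive
# zpool online tank c2t5d0                         # kicks off the resilver
# zpool detach tank c2t9d0                         # manually drop the spare
)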

I don't want that message there, however; I know what went on. How do I get rid 
of it? I didn't scrub the pool yet, because there are 20 drives of 1TB in that 
array: I can only imagine the time that would take.

Any ideas, besides scrubbing?

Thanks in advance,
Miejas


[zfs-discuss] /export/home/username not mounting after legacy mounts on 11 Express

2011-01-03 Thread alan pae
Hi,

On OpenSolaris build 134 you could:

zfs set mountpoint=legacy rpool/some/zfs

and then

mount -F zfs rpool/some/zfs /mnt_point

and then do the work, reboot and the system didn't care.

This seems to have changed with Solaris 11 Express.

The above commands still work, but whereas before I could do nothing, just 
reboot, and not care, this no longer applies.

After the two commands shown above I rebooted as usual, got the GNOME logon 
prompt, logged on, and was then informed that files in /home/username could no 
longer be set; the desktop just sat there.

So I rebooted and hit Escape so I could watch the messages; /home/username 
complained, and then it stopped.

So I did a zfs list and everything looked normal, except that 
/export/home/username was listed as legacy while everything else was listed 
with a regular ZFS mountpoint.

So then I did:

zfs set mountpoint=/export/home/username rpool/export/home/username

and then checked zfs list; everything looked good, so I rebooted.

Then I got an error message stating that /export was not empty so it couldn't 
be mounted, and that /export/home wasn't empty so it couldn't be mounted, and 
that was that.

I searched the docs on docs.sun.com; they mention going from ZFS to legacy, 
but the only thing they say about going back is that ZFS will take care of 
it.  Which is what it is not doing.

The console logon states there is no home_dir, so it is using /home.

The only way to get a graphical logon back was to set /export and /export/home 
to legacy; then it would mount /export/home/username again.  I looked over 
zfs get * rpool/export and nothing looks exciting.

The current zfs list is:

NAME   USED  AVAIL  REFER  MOUNTPOINT
rpool 27.9G  9.23G  93.5K  /rpool
rpool/ROOT24.6G  9.23G31K  legacy
rpool/ROOT/Oracle_Solaris_11_Express  24.6G  9.23G  18.5G  /
rpool/ROOT/solaris  37M  9.23G  3.44G  /
rpool/dump1018M  9.23G  1018M  -
rpool/export   276M  9.23G32K  legacy
rpool/export/home  276M  9.23G32K  legacy
rpool/export/home/alan 276M  9.23G   276M  /export/home/alan
rpool/swap2.04G  9.23G  2.04G  -

Any ideas?

thanks in advance, 
alan


Re: [zfs-discuss] /export/home/username not mounting after legacy mounts on 11 Express

2011-01-03 Thread alan pae
I tried to recreate this scenario using VirtualBox under Windows XP so I could 
capture the actual messages.

Could not duplicate.

Oh well.

alan


Re: [zfs-discuss] A few questions

2011-01-03 Thread Bob Friesenhahn

On Mon, 3 Jan 2011, Robert Milkowski wrote:


Exactly my observation as well. I haven't seen any ZFS related 
development happening at Illumos or Nexenta, at least not yet.


There seems to be plenty of zfs work on the FreeBSD project, but 
primarily with porting the latest available sources to FreeBSD (going 
very well!) rather than with developing zfs itself.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] A few questions

2011-01-03 Thread Robert Milkowski

 On 12/26/10 05:40 AM, Tim Cook wrote:



On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling 
<richard.ell...@gmail.com> wrote:



There are more people outside of Oracle developing for ZFS than
inside Oracle.
This has been true for some time now.




Pardon my skepticism, but where is the proof of this claim (I'm quite 
certain you know I mean no disrespect)?  Solaris 11 Express was a 
massive leap in functionality and bugfixes to ZFS.  I've seen exactly 
nothing out of "outside of Oracle" in the time since it went closed. 
 We used to see updates bi-weekly out of Sun.  Nexenta spending 
hundreds of man-hours on a GUI and userland apps isn't work on ZFS.





Exactly my observation as well. I haven't seen any ZFS related 
development happening at Illumos or Nexenta, at least not yet.


--
Robert Milkowski
http://milek.blogspot.com



Re: [zfs-discuss] Running on Dell hardware?

2011-01-03 Thread Stephan Budach

Am 22.12.10 18:47, schrieb Lasse Osterild:

I've just noticed that Dell has a 6.0.1 firmware upgrade available, at least for my 
R610's (they are about 3 months old).  Oddly enough, it doesn't show up on 
support.dell.com when I search using my service code, but if I check through "System 
Services / Lifecycle Controller" it does find them.

Two of the same servers are running Ubuntu 10.04.1 and RHEL 5.4; several TBs 
of data have gone through the interfaces on those two boxes without a single 
glitch.

So has anyone tried 6.0.1 yet, or is it simply v4.x repackaged with a new 
version number?

  - Lasse

On 12/12/2010, at 14.39, Markus Kovero wrote:


Oh well.  Already, the weird crash has happened again.  So we're concluding
two things:
-1-  The broadcom nic is definitely the cause of the crash.
and
-2-  Even with the new "upgrade" downgrade, the problem is not solved.
So the solution is add-on intel nic, and disable broadcom integrated nic.

And if I may summarize my own findings:

"Random crashes" and Broadcom issues are separate, unrelated problems AFAIK; 
we have some R710's with Broadcom nics that seem to be stable over several months and 
other R710's that cannot keep it together for even a week or so. Both have identical 
fw/BIOS versions.

1) There is/was a problem with Broadcom nics losing network connectivity under 
every OS, including Solaris; this was fixed by software patches in sol10 and 
sol11 express, and a non-official driver update was made for snv_134. The 
workaround for this issue was to disable C-states in the BIOS under processor 
configuration.
2) There is some kind of instability issue, not related to nics, with the latest batch 
of R710 series servers; crashes occur randomly, but it seemed to get fixed in Solaris 11 
Express - no idea about sol10 though. We have yet to test whether this is somehow related 
to the processor/memory configuration being used; mind you, software and firmware 
versions are identical on "stable" R710's and the crashing ones.
3) There were also problems with the system disk going missing suddenly (when using 
SAS 6/iR); I think it's somewhat related to problem 2), though it happens rarely.
4) Solaris 11 Express and the latest R710's introduced a new Broadcom problem: random 
network hiccups. Disabling C-states does not help. Planning to open an SR for this; it 
seems very much different from the original problem (the OS is not aware of what 
happens at all).

My solution would be not to use the R710 for anything more serious; it is 
definitely a platform that has more problems than I'm interested in debugging 
(:

Yours
Markus Kovero
Well, a couple of weeks before Christmas I enabled the onboard bcom nics 
on my R610 again, to use them as IPMI ports - I didn't even use them in 
Sol11 - but as of this morning the system has again entered the state in 
which a successful login to the system is no longer possible.

Logging in at the local console wasn't possible either - the system didn't 
even prompt for the password. So just having the bcom nics present on the 
host seems to cause these troubles, even though Sol11 doesn't have to deal 
with the nics for anything.


Cheers,
budy