Re: [zfs-discuss] [storage-discuss] Supermicro AOC-SASLP-MV8

2009-04-22 Thread James Andrewartha
myxi...@googlemail.com wrote:
 Bouncing a thread from the device drivers list:
 http://opensolaris.org/jive/thread.jspa?messageID=357176
 
 Does anybody know if OpenSolaris will support this new Supermicro card,
 based on the Marvell 88SE6480 chipset? It's a true PCI Express 8 port
 JBOD SAS/SATA controller with pricing apparently around $125.
 
 If it works with OpenSolaris it sounds pretty much perfect.

The Linux support for the 6480 builds on the 6440 mvsas support, so I don't
think marvell88sx would work, and there doesn't seem to be a Marvell SAS
driver for Solaris at all, so I'd say it's not supported.
http://www.hardforum.com/showthread.php?t=1397855 has a fair few people
testing it out, but mostly under Windows.

-- 
James Andrewartha
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Raidz1 faulted with single bad disk. Requesting assistance.

2009-04-22 Thread Haudy Kazemi

Brad Hill wrote:

I've seen reports of a recent Seagate firmware update
bricking drives again.

What's the output of 'zpool import' from the LiveCD?  It sounds like
more than 1 drive is dropping off.




r...@opensolaris:~# zpool import
  pool: tank
    id: 16342816386332636568
 state: FAULTED
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        tank        FAULTED  corrupted data
          raidz1    DEGRADED
            c6t0d0  ONLINE
            c6t1d0  ONLINE
            c6t2d0  ONLINE
            c6t3d0  UNAVAIL  cannot open
            c6t4d0  ONLINE

  pool: rpool
    id: 9891756864015178061
 state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        rpool       ONLINE
          c3d0s0    ONLINE
  
1.) Here's a similar report from last summer from someone running ZFS on 
FreeBSD.  No resolution there either:

raidz vdev marked faulted with only one faulted disk
http://kerneltrap.org/index.php?q=mailarchive/freebsd-fs/2008/6/15/2132754

2.) This old thread from Dec 2007 for a different raidz1 problem, titled 
'Faulted raidz1 shows the same device twice' suggests trying these 
commands (see the link for the context they were run under):

http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg13214.html

# zdb -l /dev/dsk/c18t0d0      (print the ZFS labels on the device)

# zpool export external
# zpool import external        (re-read the pool configuration)

# zpool clear external         (clear the pool's error counts)
# zpool scrub external         (verify all data, then clear again)
# zpool clear external
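
Applied to the faulted pool above, that sequence would look roughly like
this (a sketch only: '-f' is needed because the pool was last accessed by
another system, and with c6t3d0 unavailable the import may still fail):

# zdb -l /dev/dsk/c6t3d0
# zpool import -f tank
# zpool clear tank
# zpool scrub tank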

3.) Do you have ECC RAM?  Have you verified that your memory, CPU, and 
motherboard are reliable?


4.) 'Bad exchange descriptor' is mentioned very sparingly across the 
net, mostly in system error tables.  Also here: 
http://opensolaris.org/jive/thread.jspa?threadID=88486&tstart=165


5.) More raidz setup caveats, at least on MacOS: 
http://lists.macosforge.org/pipermail/zfs-discuss/2008-March/000346.html


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [storage-discuss] Supermicro AOC-SASLP-MV8

2009-04-22 Thread Blake
I'm quite happy so far with my LSI cards, which replaced a couple of
the Supermicro Marvell cards:

# scanpci
...
pci bus 0x0007 cardnum 0x00 function 0x00: vendor 0x1000 device 0x0058
 LSI Logic / Symbios Logic SAS1068E PCI-Express Fusion-MPT SAS


On Wed, Apr 22, 2009 at 2:45 AM, James Andrewartha jam...@daa.com.au wrote:
 myxi...@googlemail.com wrote:
 Bouncing a thread from the device drivers list:
 http://opensolaris.org/jive/thread.jspa?messageID=357176

 Does anybody know if OpenSolaris will support this new Supermicro card,
 based on the Marvell 88SE6480 chipset? It's a true PCI Express 8 port
 JBOD SAS/SATA controller with pricing apparently around $125.

 If it works with OpenSolaris it sounds pretty much perfect.

 The Linux support for the 6480 builds on the 6440 mvsas support, so I don't
 think marvell88sx would work, and there doesn't seem to be a Marvell SAS
 driver for Solaris at all, so I'd say it's not supported.
 http://www.hardforum.com/showthread.php?t=1397855 has a fair few people
 testing it out, but mostly under Windows.

 --
 James Andrewartha
 ___
 storage-discuss mailing list
 storage-disc...@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/storage-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [storage-discuss] Supermicro AOC-SASLP-MV8

2009-04-22 Thread Will Murnane
On Wed, Apr 22, 2009 at 02:45, James Andrewartha jam...@daa.com.au wrote:
 myxi...@googlemail.com wrote:
 Bouncing a thread from the device drivers list:
 http://opensolaris.org/jive/thread.jspa?messageID=357176

 Does anybody know if OpenSolaris will support this new Supermicro card,
 based on the Marvell 88SE6480 chipset? It's a true PCI Express 8 port
 JBOD SAS/SATA controller with pricing apparently around $125.

 If it works with OpenSolaris it sounds pretty much perfect.

 The Linux support for the 6480 builds on the 6440 mvsas support, so I don't
 think marvell88sx would work, and there doesn't seem to be a Marvell SAS
 driver for Solaris at all, so I'd say it's not supported.
 http://www.hardforum.com/showthread.php?t=1397855 has a fair few people
 testing it out, but mostly under Windows.
... and one testing it under OpenSolaris:
http://www.hardforum.com/showpost.php?p=1034002793&postcount=167 .  It
doesn't appear to work.

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] posix_fadvise on ZFS

2009-04-22 Thread Jignesh K. Shah



This is with regard to Postgres 8.4 beta1, which has a new 
effective_io_concurrency tunable that uses posix_fadvise:
http://www.postgresql.org/docs/8.4/static/runtime-config-resource.html  
(go to the bottom)


Quote:
Asynchronous I/O depends on an effective posix_fadvise function, which 
some operating systems lack. If the function is not present then setting 
this parameter to anything but zero will result in an error. On some 
operating systems the function is present but does not actually do 
anything (e.g. Solaris).


I am trying to understand whether posix_fadvise is useful on ZFS or not. 
Currently postgres does not do any fadvise on OpenSolaris, since ZFS 
ignores the advice (after all, who takes advice?) without letting the app 
know that it is being ignored. On Linux it uses the POSIX_FADV_WILLNEED 
flag (example shown below):

http://doxygen.postgresql.org/fd_8c-source.html#l01054
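
For reference, a minimal sketch of that Linux-style usage (the file path
is hypothetical; note that posix_fadvise returns its error code directly
rather than setting errno, and that on ZFS it returns 0 while silently
ignoring the advice, which is exactly the problem described above):

#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    int fd = open("/tank/pgdata/somefile", O_RDONLY);  /* hypothetical path */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Advise the kernel that the first 1 MB will be read soon.
     * On Linux this kicks off readahead; on OpenSolaris/ZFS the call
     * succeeds but the advice is a no-op. */
    int rc = posix_fadvise(fd, 0, 1 << 20, POSIX_FADV_WILLNEED);
    if (rc != 0)  /* error number is returned directly */
        fprintf(stderr, "posix_fadvise: error %d\n", rc);

    /* ... read the file as usual ... */
    return 0;
}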

But in general I am trying to figure out a few things:
1. Is it even worth doing posix_fadvise on ZFS?
2. How does it impact UFS?
3. What are the equivalent workarounds for ZFS and/or UFS?

Thanks in advance

regards
Jignesh



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What causes slow performance under load?

2009-04-22 Thread David Dyer-Bennet

On Tue, April 21, 2009 14:20, Joerg Schilling wrote:
 Miles Nordin car...@ivy.net wrote:

 First, there is plain-GPLv2, Linux-modified-GPLv2 with the ``or any
 later version'' clause deleted and the suspect ``interpretation'' of
 kernel modules, and plain-GPLv3: there are three GPL licenses to
 worry about.

 You just verified that you don't understand what you are talking about -
 sorry. The clause ``or any later version'' is _not_ part of the GPL. The
 Linux kernel of course uses a plain vanilla GPLv2.

 The clause ``or any later version'' is even illegal in many jurisdictions,
 as these jurisdictions forbid signing a contract that you don't know at
 the time you sign.

So are you saying you've never previously noticed section 14 of the GPL as
displayed at http://www.gnu.org/copyleft/gpl.html?

It contains:

If the Program specifies that a certain numbered version of the GNU
General Public License “or any later version” applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software Foundation.
If the Program does not specify a version number of the GNU General Public
License, you may choose any version ever published by the Free Software
Foundation.

So you're just plain wrong.  The GPL contains the exact clause you say it
doesn't contain.  Furthermore, it does in fact say (unless otherwise
restricted by the license grant) that one may use any version ever
published by the Free Software Foundation.  That's not limited to
versions published after the license grant.
-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What causes slow performance under load?

2009-04-22 Thread Gary Mills
On Tue, Apr 21, 2009 at 04:09:03PM -0400, Oscar del Rio wrote:
 There's a similar thread on hied-emailad...@listserv.nd.edu
 that might help or at least can get you in touch with other University 
 admins in a similar situation.
 
 https://listserv.nd.edu/cgi-bin/wa?A1=ind0904&L=HIED-EMAILADMIN
 Thread: mail systems using ZFS filesystems?

Thanks.  Those problems do sound similar.  I also see positive
experiences with T2000 servers, ZFS, and Cyrus IMAP from UC Davis.

None of the people involved seem to be active on either the ZFS
mailing list or the Cyrus list.

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs promote/destroy enhancements?

2009-04-22 Thread Edward Pilatowicz
hey all,

in both nevada and opensolaris, the zones infrastructure tries to
leverage zfs wherever possible. we take advantage of snapshotting and
cloning for things like zone cloning and zone BE (boot environment)
management.  because of this, we've recently run into multiple scenarios
where a zoneadm uninstall fails.

6787557 zoneadm uninstall fails when zone has zfs clones
http://bugs.opensolaris.org/view_bug.do?bug_id=6787557

7491 problems destroying zones with cloned dependents
http://defect.opensolaris.org/bz/show_bug.cgi?id=7491

these failures occur when we try to destroy the zfs filesystem
associated with a zone, but that filesystem has been snapshotted and
cloned.  the way we're fixing these problems is by doing a promotion
before the destroy.  jerry has fixed 6787557 for nevada in zoneadm, but
now i'm looking at having to re-implement a similar fix for opensolaris
in the ipkg brand for 7491.

hence, i'm wondering if it would make more sense just to add this
functionality directly into zfs(1m)/libzfs.  this would involve
enhancements to the zfs promote and destroy subcommands.  here's what
i'm thinking.

the first component would be a new '-t template' option to zfs
promote.  this option would instruct zfs promote to check for snapshot
naming collisions between the origin and promotion-target filesystems,
and to rename any origin-filesystem snapshots with conflicting names
before attempting the promotion.  the conflicting snapshots would be
renamed to templateXXX, where XXX is an integer used to make the
snapshot name unique.  today users have to do this renaming manually if
they want the promotion to succeed.
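
as an illustration, the manual workaround today (using the filesystems
from the example below) is roughly:

# zfs rename tank/zones/zone1@SUNWzone1 tank/zones/zone1@SUNWzone2
# zfs promote tank/zones/zone2

the rename has to happen first, otherwise the promote fails on the
snapshot-name collision.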

to illustrate how this new functionality would work, say i have the
following filesystems/snapshots:

tank/zones/zone1
tank/zones/zone1@SUNWzone1
tank/zones/zone1@user1
tank/zones/zone2    (clone of tank/zones/zone1@SUNWzone1)
tank/zones/zone2@SUNWzone1

if i do a 'zfs promote -t SUNWzone tank/zones/zone2', then this would
involve a rename of zone1@SUNWzone1 to zone1@SUNWzone2, and a promotion
of tank/zones/zone2.  the @user1 snapshot would not be renamed because
there was no naming conflict with the filesystem being promoted.  hence
i would end up with:

tank/zones/zone2
tank/zones/zone2@SUNWzone1
tank/zones/zone2@SUNWzone2
tank/zones/zone2@user1
tank/zones/zone1    (clone of tank/zones/zone2@SUNWzone2)

if i did a 'zfs promote -t user tank/zones/zone2', then this would
involve a rename of zone1@SUNWzone1 to zone1@user2, and then a
promotion of tank/zones/zone2.  hence i would end up with:

tank/zones/zone2
tank/zones/zone2@SUNWzone1
tank/zones/zone2@user1
tank/zones/zone2@user2
tank/zones/zone1    (clone of tank/zones/zone2@user2)


the second component would be two new flags to zfs destroy:
zfs destroy [-p [-t template]]

the -p flag would instruct zfs destroy to try to promote the oldest
clone of the youngest snapshot of the filesystem being destroyed before
doing the destroy.  if the youngest snapshot doesn't have a clone, the
command will fail unless -r was specified.  if -r was specified, we will
continue to look through the snapshots from youngest to oldest, looking
for the first one with a clone.  if a snapshot with a clone is found,
its oldest clone will be promoted before the destroy.  if a template was
specified via -t, it will be passed through to the promote operation.
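
for example, with the filesystems above (a sketch of the proposed
behavior, not existing syntax; -p and -t are the new flags):

# zfs destroy -r -p -t SUNWzone tank/zones/zone1

this would search zone1's snapshots from youngest to oldest for one with
a clone (@SUNWzone1), promote its oldest clone (tank/zones/zone2,
renaming the conflicting origin snapshot using the SUNWzone template),
and then destroy zone1 and its remaining snapshots.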

thoughts?
ed
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] prstat -Z and load average values in different zones give same numeric results

2009-04-22 Thread Nobel Shelby

Folks,
A perplexing question about the load average display with prstat -Z,
on Solaris 10 OS U4 (08/07).
We have 4 zones with very different processes and workloads.
The prstat -Z command issued within each of the zones correctly displays
the number of processes and lwps, but the load average values look
exactly the same on all non-global zones: all 3 values (the 1-, 5-, and
15-minute load averages) are identical, which is next to impossible
given the different workloads.
Is there a bug here?
Thanks,
-Nobel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss