Re: [zfs-discuss] Need ZFS master!

2010-07-13 Thread Beau J. Bechdol
I do apologize, but I am completely lost here. Maybe I am just not
understanding. Are you saying that a slice has to be created on the second
drive before it can be added to the pool?

Thanks

On Mon, Jul 12, 2010 at 4:22 PM, Cindy Swearingen 
cindy.swearin...@oracle.com wrote:

 Hi John,

 Follow the steps in this section:

 http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

 Replacing/Relabeling the Root Pool Disk

 If the disk is correctly labeled with an SMI label, then you can skip
 down to steps 5-8 of this procedure.

 Thanks,

 Cindy


 On 07/12/10 16:06, john wrote:

 Hello all. I am new...very new to OpenSolaris and I am having an issue and
 have no idea what is going wrong. So I have 5 drives in my machine, all
 500GB. I installed OpenSolaris on the first drive and rebooted. Now what
 I want to do is add a second drive so they are mirrored. How does one do
 this? I am getting nowhere and need some help.




Re: [zfs-discuss] Encryption?

2010-07-13 Thread Linder, Doug
While we're on the topic, has anyone used ZFS much with Vormetric's encryption 
product?  Any feedback? 


Doug Linder




Re: [zfs-discuss] Need ZFS master!

2010-07-13 Thread Ross Walker

The whole disk layout should be copied from disk 1 to disk 2; then the slice
on disk 2 that corresponds to the slice on disk 1 should be attached to the
rpool, which forms an rpool mirror (attached, not added).

Then you need to add the grub bootloader to disk 2.

When it finishes resilvering, you have an rpool mirror.
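
For a typical x86 setup the sequence looks roughly like this (a sketch; the
c0t0d0/c0t1d0 device names are placeholders for your actual disks):

  # Copy the SMI disk label (slice layout) from disk 1 to disk 2
  prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

  # Attach (not add!) the matching slice, turning rpool into a mirror
  zpool attach rpool c0t0d0s0 c0t1d0s0

  # Put the GRUB bootloader on the second disk so it can boot on its own
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

  # Watch the resilver complete
  zpool status rpool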

-Ross



On Jul 12, 2010, at 6:30 PM, Beau J. Bechdol bbech...@gmail.com wrote:

 I do apologize, but I am completely lost here. Maybe I am just not
 understanding. Are you saying that a slice has to be created on the second
 drive before it can be added to the pool?
 
 Thanks
 
 On Mon, Jul 12, 2010 at 4:22 PM, Cindy Swearingen 
 cindy.swearin...@oracle.com wrote:
 Hi John,
 
 Follow the steps in this section:
 
 http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
 
 Replacing/Relabeling the Root Pool Disk
 
 If the disk is correctly labeled with an SMI label, then you can skip
 down to steps 5-8 of this procedure.
 
 Thanks,
 
 Cindy
 
 
 On 07/12/10 16:06, john wrote:
 Hello all. I am new...very new to OpenSolaris and I am having an issue and
 have no idea what is going wrong. So I have 5 drives in my machine, all
 500GB. I installed OpenSolaris on the first drive and rebooted. Now what I
 want to do is add a second drive so they are mirrored. How does one do this?
 I am getting nowhere and need some help.
 


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-13 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Linder, Doug
 
 Out of sheer curiosity - and I'm not disagreeing with you, just
 wondering - how does ZFS make money for Oracle when they don't charge
 for it?  Do you think it's such an important feature that it's a big
 factor in customers picking Solaris over other platforms?

ZFS was the sole factor in my decision to buy a Sun server with Solaris this
year, to replace my NetApp.  In addition, I bought some Dell machines and
paid for Solaris on those, to keep around as backup destinations for the
production Sun file server.

I absolutely do believe ZFS is a huge selling point for Sun hardware and
Solaris.  Especially for file servers.



Re: [zfs-discuss] Legality and the future of zfs...

2010-07-13 Thread Edward Ned Harvey
 From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]

  A private license, with support and indemnification from Sun, would
  shield Apple from any lawsuit from Netapp.
 
 The patent holder is not compelled
 in any way to offer a license for use of the patent.  Without a patent
 license, shipping products can be stopped dead in their tracks.

It may be true that NetApp could have stopped Apple from shipping OS X, if
Apple had ZFS in OS X and NetApp won the lawsuit.  But there was a time when
it was absolutely possible for Sun and Apple to reach an agreement which
would limit Apple's liability in the event of a lawsuit waged against them.

CDDL contains an explicit disclaimer of warranty, which means that if Apple
were to download the CDDL ZFS source code and compile and distribute it
themselves, they would be fully liable for any lawsuit waged against them.
But the CDDL also allows Sun to distribute ZFS binaries under a different
license, in which Sun could have assumed responsibility for losses in the
event Apple were sued.



Re: [zfs-discuss] Legality and the future of zfs...

2010-07-13 Thread Garrett D'Amore
On Tue, 2010-07-13 at 10:51 -0400, Edward Ned Harvey wrote:
  From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
 
   A private license, with support and indemnification from Sun, would
   shield Apple from any lawsuit from Netapp.
  
  The patent holder is not compelled
  in any way to offer a license for use of the patent.  Without a patent
  license, shipping products can be stopped dead in their tracks.
 
 It may be true that NetApp could have stopped Apple from shipping OS X, if
 Apple had ZFS in OS X and NetApp won the lawsuit.  But there was a time when
 it was absolutely possible for Sun and Apple to reach an agreement which
 would limit Apple's liability in the event of a lawsuit waged against them.

 CDDL contains an explicit disclaimer of warranty, which means that if Apple
 were to download the CDDL ZFS source code and compile and distribute it
 themselves, they would be fully liable for any lawsuit waged against them.
 But the CDDL also allows Sun to distribute ZFS binaries under a different
 license, in which Sun could have assumed responsibility for losses in the
 event Apple were sued.

That would not, IMO, have prevented a potential stop-ship order from
halting MacOS X shipments.  I just think it would have created a
situation where Apple could have insisted that Oracle (well, Sun)
reimburse it for lost revenue.

The lawyers at Sun were typically defensive in that they frowned (very
much) upon any legal agreement which left Sun in a position of
unlimited legal liability.  This actually nearly prevented the
development of certain software, since that software required an NDA
clause which provided for unlimited liability due to lost revenue were
Sun to leak the NDA content.  (We developed the software using openly
obtainable materials rather than NDA content, to prevent this
possibility.)

- Garrett




Re: [zfs-discuss] Legality and the future of zfs...

2010-07-13 Thread Dave Pooser
On 7/12/10 Jul 12, 10:49 AM, Linder, Doug doug.lin...@merchantlink.com
wrote:

 Out of sheer curiosity - and I'm not disagreeing with you, just wondering -
 how does ZFS make money for Oracle when they don't charge for it?  Do you
 think it's such an important feature that it's a big factor in customers
 picking Solaris over other platforms?

I'm looking at a new web server for the company, and am considering Solaris
specifically because of ZFS. (Oracle's lousy sales model-- specifically the
unwillingness to give a price for a Solaris support contract without my
having to send multiple emails to multiple addresses-- may yet push me back
to my default CentOS platform, but to the extent that Oracle is even in the
running it's because of ZFS.)
-- 
Dave Pooser, ACSA
Manager of Information Services
Alford Media  http://www.alfordmedia.com




Re: [zfs-discuss] Legality and the future of zfs...

2010-07-13 Thread Joerg Schilling
Edward Ned Harvey solar...@nedharvey.com wrote:

 CDDL contains an explicit disclaimer of warranty, which means, if Apple were
 to download CDDL ZFS source code and compile and distribute it themselves,
 they would be fully liable for any lawsuit waged against them.  But CDDL
 also allows for Sun to distribute ZFS binaries under a different license, in
 which Sun could have assumed responsibility for losses, in the event Apple
 were to be sued.

And in terms of market and commerce, you will not find a partner that will
accept full liability toward you for software you got for free.

Apple could probably have had a chance to be indemnified by Sun if they had
paid royalties for ZFS.

Jörg

-- 
 EMail: jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
        j...@cs.tu-berlin.de                 (uni)
        joerg.schill...@fokus.fraunhofer.de  (work)
 Blog:  http://schily.blogspot.com/
 URL:   http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] zfs send/recv hanging in 2009.06

2010-07-13 Thread David Dyer-Bennet

On Fri, July 9, 2010 16:49, BJ Quinn wrote:
 I have a couple of systems running 2009.06 that hang on relatively large
 zfs send/recv jobs.  With the -v option, I see the snapshots coming
 across, and at some point the process just pauses, IO and CPU usage go to
 zero, and it takes a hard reboot to get back to normal.  The same script
 running against the same data doesn't hang on 2008.05.

 There are maybe 100 snapshots, 200GB of data total.  Just trying to send
 to a blank external USB drive in one case, and in the other, I'm restoring
 from a USB drive to a local drive, but the behavior is the same.

 I see that others have had a similar problem, but there don't seem to be
 any answers -

 https://opensolaris.org/jive/thread.jspa?messageID=384540
 http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg34493.html
 http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg37158.html

 I'd like to stick with a released version of OpenSolaris, so I'm hoping
 that the answer isn't to switch to the dev repository and pull down b134.

I still have this problem (I was msg34493 there).

My original plan was to wait for the Spring release, to get me to a stable
release on more recent code.  I'm still following that plan, i.e. haven't
done anything else yet.  At the time the March release was expected to
actually appear by April.

Other than trying more recent code, I don't recall any useful ideas coming
through the list.

It seems like the thing people recommend as the backup scheme for ZFS
simply doesn't work yet.
-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] zfs send/recv hanging in 2009.06

2010-07-13 Thread David Dyer-Bennet

On Fri, July 9, 2010 18:42, Giovanni Tirloni wrote:
 On Fri, Jul 9, 2010 at 6:49 PM, BJ Quinn bjqu...@seidal.com wrote:
 I have a couple of systems running 2009.06 that hang on relatively large
 zfs send/recv jobs.  With the -v option, I see the snapshots coming
 across, and at some point the process just pauses, IO and CPU usage go
 to zero, and it takes a hard reboot to get back to normal.  The same
 script running against the same data doesn't hang on 2008.05.

 There are issues running concurrent zfs receive operations in 2009.06. Try
 running just one at a time.

He's doing the same thing I'm doing -- one send, one receive.  (But
incremental replication.)

 Switching to a development build (b134) is probably the answer until
 we have a new release.

Given that the spring stable release was my planned solution, I'm
starting to think about doing something else myself.

Does anybody have any idea what's up with the stable release, though?  Has
anything been said about the plans that I've maybe missed?

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Recovering from an apparent ZFS Hang

2010-07-13 Thread Brian Leonard
Hi Cindy,

I'm trying to demonstrate how ZFS behaves when a disk fails. The drive 
enclosure I'm using (http://www.icydock.com/product/mb561us-4s-1.html) says it 
supports hot swap, but that's not what I'm experiencing. When I plug the disk 
back in, all 4 disks are no longer recognizable until I restart the enclosure.

This same demo works fine when using USB sticks, and maybe that's because each 
USB stick has its own controller.
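
For what it's worth, on controllers whose drivers support it, Solaris SATA
hot-plug is usually driven through cfgadm rather than a bare unplug/replug
(a sketch; the sata1/3 attachment point is a placeholder for whatever
'cfgadm -al' shows on your box):

  # List attachment points; SATA ports show up as sata0/0, sata1/3, etc.
  cfgadm -al

  # Before pulling the disk, unconfigure its port
  cfgadm -c unconfigure sata1/3

  # After inserting the replacement, configure it again
  cfgadm -c configure sata1/3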

Thanks for your help,
Brian


[zfs-discuss] ZFS list snapshots incurs large delay

2010-07-13 Thread Brent Jones
I have been running a pair of X4540s for almost 2 years now, the
usual spec (quad-core, 64GB RAM, 48x 1TB).
I have a pair of mirrored drives for rpool, and raidz vdevs of 5-6
disks each for the rest of the disks.
I am running snv_132 on both systems.

I noticed an oddity on one particular system: when running a scrub or
a 'zfs list -t snapshot', the results take forever.
Mind you, these are identical systems in hardware and software. The
primary system replicates all data sets to the secondary nightly, so
there isn't much of a discrepancy in space used.

Primary system:
# time zfs list -t snapshot | wc -l
979

real    1m23.995s
user    0m0.360s
sys     0m4.911s

Secondary system:
# time zfs list -t snapshot | wc -l
979

real    0m1.534s
user    0m0.223s
sys     0m0.663s


At the time of running both of those, no other activity was happening:
load average of 0.05 or so. Subsequent runs also take just as long on
the primary; no matter how many times I run it, it will take about 1
minute and 25 seconds each time, with very little drift (+/- 1 second
if that).

Both systems are at about 77% used space on the storage pool, no other
distinguishing factors that I can discern.
Upon a reboot, performance is respectable for a little while, but
within days, it will sink back to those levels. I suspect a memory
leak, but both systems run the same software versions and packages, so
I can't envision that.

Would anyone have any ideas what may cause this?
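
(Two quick checks that can help localize this kind of slowdown -- a sketch,
nothing here is specific to the X4540:)

  # See where kernel memory is going -- a shrunken ARC or a huge Kernel
  # bucket after a few days of uptime would point at a leak
  echo ::memstat | mdb -k

  # Total the time zfs(1M) spends in ioctls while listing snapshots
  dtrace -n 'syscall::ioctl:entry /execname == "zfs"/ { self->t = timestamp; }
    syscall::ioctl:return /self->t/ { @["ioctl ns"] = sum(timestamp - self->t);
    self->t = 0; }'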

-- 
Brent Jones
br...@servuhome.net


[zfs-discuss] zfs root flash archive issue

2010-07-13 Thread Ketan
I have created a flash archive from a Ldom on T5220 with zfs root solaris 
10_u8. But after creation of flar the info shows the 
content_architectures=sun4c,sun4d,sun4m,sun4u,sun4us,sun4s but not sun4v due to 
which i 'm unable to install this flash archive on another Ldom on the same 
host. Is there any way to modify the content_architectures ? and whats the 
reason for missing sun4v architecture form the content_architecture ? 

flar -i zfsflar_ldom
archive_id=4506fd9b45fba5b2c5e042715da50f0a
files_archived_method=cpio
creation_date=20100713054458
creation_master=e-u013
content_name=zfsBE
creation_node=ezzz-u013
creation_hardware_class=sun4v
creation_platform=SUNW,SPARC-Enterprise-T5220
creation_processor=sparc
creation_release=5.10
creation_os_name=SunOS
creation_os_version=Generic_141444-09
rootpool=rootpool
bootfs=rootpool/ROOT/zfsBE
snapname=zflash.100713.00.07
files_compressed_method=none
files_archived_size=3869213825
files_unarchived_size=3869213825
content_architectures=sun4c,sun4d,sun4m,sun4u,sun4us,sun4s
type=FULL
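
In case it helps: the identification section of a flash archive is plain
text, and flar(1M) can take an archive apart and reassemble it, so the field
can at least be edited by hand (a sketch; whether the install then succeeds
on sun4v is untested here):

  # Split the archive into its sections (produces files such as
  # 'identification' in the current directory)
  flar split zfsflar_ldom

  # Edit content_architectures in the 'identification' file to add sun4v,
  # then reassemble the archive
  flar combine -d . zfsflar_ldom_fixed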


Re: [zfs-discuss] [osol-help] ZFS list snapshots incurs large delay

2010-07-13 Thread Giovanni Tirloni
On Tue, Jul 13, 2010 at 2:44 PM, Brent Jones br...@servuhome.net wrote:
 I have been running a pair of X4540s for almost 2 years now, the
 usual spec (quad-core, 64GB RAM, 48x 1TB).
 I have a pair of mirrored drives for rpool, and raidz vdevs of 5-6
 disks each for the rest of the disks.
 I am running snv_132 on both systems.

 I noticed an oddity on one particular system: when running a scrub or
 a 'zfs list -t snapshot', the results take forever.
 Mind you, these are identical systems in hardware and software. The
 primary system replicates all data sets to the secondary nightly, so
 there isn't much of a discrepancy in space used.

 Primary system:
 # time zfs list -t snapshot | wc -l
 979

 real    1m23.995s
 user    0m0.360s
 sys     0m4.911s

 Secondary system:
 # time zfs list -t snapshot | wc -l
 979

 real    0m1.534s
 user    0m0.223s
 sys     0m0.663s


 At the time of running both of those, no other activity was happening:
 load average of 0.05 or so. Subsequent runs also take just as long on
 the primary; no matter how many times I run it, it will take about 1
 minute and 25 seconds each time, with very little drift (+/- 1 second
 if that).

 Both systems are at about 77% used space on the storage pool, no other
 distinguishing factors that I can discern.
 Upon a reboot, performance is respectable for a little while, but
 within days, it will sink back to those levels. I suspect a memory
 leak, but both systems run the same software versions and packages, so
 I can't envision that.

 Would anyone have any ideas what may cause this?

It could be a disk failing and dragging I/O down with it.

Try to check for high asvc_t with `iostat -XCn 1` and errors in `iostat -En`

Any timeouts or retries in /var/adm/messages ?

-- 
Giovanni Tirloni
gtirl...@sysdroid.com


Re: [zfs-discuss] zfs root flash archive issue

2010-07-13 Thread Lori Alt


The setting of the content_architectures field is likely to be
independent of the file system type, so at least at first glance, I
don't think this is a ZFS issue.  You might try this question at
install-disc...@opensolaris.org.


Lori


On 07/13/10 11:45 AM, Ketan wrote:

I have created a flash archive from an LDom on a T5220 with ZFS root, Solaris
10 u8. But after creating the flar, the info shows
content_architectures=sun4c,sun4d,sun4m,sun4u,sun4us,sun4s but not sun4v, due
to which I'm unable to install this flash archive on another LDom on the same
host. Is there any way to modify the content_architectures? And what's the
reason for the missing sun4v architecture in content_architectures?

flar -i zfsflar_ldom
archive_id=4506fd9b45fba5b2c5e042715da50f0a
files_archived_method=cpio
creation_date=20100713054458
creation_master=e-u013
content_name=zfsBE
creation_node=ezzz-u013
creation_hardware_class=sun4v
creation_platform=SUNW,SPARC-Enterprise-T5220
creation_processor=sparc
creation_release=5.10
creation_os_name=SunOS
creation_os_version=Generic_141444-09
rootpool=rootpool
bootfs=rootpool/ROOT/zfsBE
snapname=zflash.100713.00.07
files_compressed_method=none
files_archived_size=3869213825
files_unarchived_size=3869213825
content_architectures=sun4c,sun4d,sun4m,sun4u,sun4us,sun4s
type=FULL
   




Re: [zfs-discuss] [osol-help] ZFS list snapshots incurs large delay

2010-07-13 Thread Brent Jones

 It could be a disk failing and dragging I/O down with it.

 Try to check for high asvc_t with `iostat -XCn 1` and errors in `iostat -En`

 Any timeouts or retries in /var/adm/messages ?

 --
 Giovanni Tirloni
 gtirl...@sysdroid.com


I checked for high service times during a scrub, and all disks are
pretty equal. During a scrub, each disk peaks at about 350 reads/sec,
with an asvc time of up to 30 during those read spikes (I assume that
means 30ms, which isn't terrible for a highly loaded SATA disk).
No errors are reported by smartctl, iostat, or adm/messages.

I opened a case on Sunsolve, but I fear that since I am running a dev
build I will be out of luck. I cannot run 2009.06 due to CIFS
segfaults and problems with zfs send/recv hanging pools
(well-documented issues).
I'd run Solaris proper, but not having in-kernel CIFS or COMSTAR would
be a major setback for me.



-- 
Brent Jones
br...@servuhome.net


Re: [zfs-discuss] zfs send/recv hanging in 2009.06

2010-07-13 Thread BJ Quinn
I was going with the spring release myself, and finally got tired of waiting.  
Got to build some new servers.

I don't believe you've missed anything.  As I'm sure you know, it was
originally officially 2010.02, then it was officially 2010.03, then it was
rumored to be .04, sort of leaked as .05, semi-officially .06/.1H, and when
that last one passed, even the rumor mill went pretty well dead.  The best I
can find now is someone rumoring Q4 (although there was some discussion as to
whether that meant calendar Q4 or Oracle's fiscal Q4, which would make it a
year away).  At any rate, I'm done waiting on the new release, and out of
principle I'm not going to use a development release in a real-world
environment.  I don't care what the condition of the code is; if Oracle won't
declare it a release, then I can't either to my clients.

FYI, 2008.11 doesn't appear to have this problem.  I've done some testing
that seemed to break 2009.06 every time, and so far 2008.11 has passed.
That's important to me since I need the zfs_write_limit_override setting,
which isn't available in 2008.05.
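
For anyone else following along, that tunable is normally set in /etc/system
(a sketch; the 384MB value is only an example, not a recommendation):

  * /etc/system -- cap the amount of data per ZFS transaction group
  set zfs:zfs_write_limit_override=0x18000000

  * or poke it on a live system (immediate, but not persistent):
  *   echo zfs_write_limit_override/W0t402653184 | mdb -kw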

So for me it looks like 2008.11 until 2010.Unicorn comes out or BTRFS gets 
deduplication (or maybe even if not).


Re: [zfs-discuss] ZIL SSD failed

2010-07-13 Thread Miles Nordin
 ds == Dmitry Sorokin dmitry.soro...@bmcorp.ca writes:

ds The SSD drive has failed and zpool is unavailable anymore.

AIUI,

 6733267 Allow a pool to be imported with a missing slog

is only fixed for the case where the pool is still imported.  If you
export it without removing the slog first, the pool is lost.

Instructions here:

 http://opensolaris.org/jive/thread.jspa?messageID=377018
 http://github.com/pjjw/logfix/tree/master

show how to ``fake out'' the lazy assertions, but you have to prepare
to use the workaround before your slog fails by noting its GUID.
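
In other words, prepare now, while the pool is healthy (a sketch; c4t8d0
stands in for your actual slog device):

  # Record the slog's GUID somewhere safe while its label is still readable
  zdb -l /dev/rdsk/c4t8d0s0 | grep guid

  # Better yet, remove the slog before any planned export (log device
  # removal needs a recent pool version -- 19 or later)
  zpool remove tank c4t8d0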

If you don't know the GUID, then it is as Richard Elling says, ``a
rather long trial-and-error process.''  Decoded from Fanboi-ese into
English, the ``rather long'' process is ``finding a sha1 hash
collision.''

so either UTFS or ``restore from backup.'' :(




Re: [zfs-discuss] ZFS snapshot zvols/iscsi send backup

2010-07-13 Thread Gary Leong
Thanks for the quick response. I appreciate it very much.


Re: [zfs-discuss] zfs send/recv hanging in 2009.06

2010-07-13 Thread Ian Collins

On 07/14/10 03:55 AM, David Dyer-Bennet wrote:

On Fri, July 9, 2010 16:49, BJ Quinn wrote:
   

I have a couple of systems running 2009.06 that hang on relatively large
zfs send/recv jobs.  With the -v option, I see the snapshots coming
across, and at some point the process just pauses, IO and CPU usage go to
zero, and it takes a hard reboot to get back to normal.  The same script
running against the same data doesn't hang on 2008.05.

There are maybe 100 snapshots, 200GB of data total.  Just trying to send
to a blank external USB drive in one case, and in the other, I'm restoring
from a USB drive to a local drive, but the behavior is the same.

 I see that others have had a similar problem, but there don't seem to be
 any answers -

https://opensolaris.org/jive/thread.jspa?messageID=384540
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg34493.html
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg37158.html

I'd like to stick with a released version of OpenSolaris, so I'm hoping
that the answer isn't to switch to the dev repository and pull down b134.
 

I still have this problem (I was msg34493 there).

My original plan was to wait for the Spring release, to get me to a stable
release on more recent code.  I'm still following that plan, i.e. haven't
done anything else yet.  At the time the March release was expected to
actually appear by April.

Other than trying more recent code, I don't recall any useful ideas coming
through the list.

It seems like the thing people recommend as the backup scheme for ZFS
simply doesn't work yet.
   


It has been working for a long time.

All of the lock-up issues I had were fixed in Solaris 10 update 8.

--
Ian.



Re: [zfs-discuss] ZIL SSD failed

2010-07-13 Thread Dmitry Sorokin
Hi Richard,

What happened is this: the SSD gave some IO errors and the pool became
degraded, so after the machine was rebooted, I found that the pool had
become unavailable.
The SSD drive itself is toast: the BIOS now reports it as 8 GB in size with
the name Inuem SS E Cootmoader!, so all the partitions are gone and it is
completely unusable at the moment.
I was able to find the GUID of the slog in a zpool.cache file that I backed
up in January this year. However, I was unable to compile the logfix binary,
and the one provided by the author (compiled on snv_111) dumps core on
snv_129 and snv_134.
Does anyone have a logfix binary compiled for snv_129?
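
(For anyone in the same spot: the GUIDs can be read straight out of a saved
cache file with zdb -- a sketch, with the backup path being illustrative:)

  # Dump the cached pool configuration, including each vdev's guid
  zdb -C -U /backup/zpool.cache.jan2010 neosys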

Thanks,
Dmitry


-----Original Message-----
From: Richard Elling [mailto:richard.ell...@gmail.com] 
Sent: Tuesday, July 13, 2010 5:12 PM
To: Dmitry Sorokin
Subject: Re: [zfs-discuss] ZIL SSD failed

Is the drive still there? If so, then try removing it or temporarily zeroing 
out the s0 label. If that allows zpool import -F to work, then please let me 
know and I'll add your post to a new bug report. I have seen something like 
this recently and have been trying to reproduce. 

 -- richard

On Jul 12, 2010, at 6:22 PM, Dmitry Sorokin dmitry.soro...@bmcorp.ca wrote:

 I have/had an Intel X25-E 32GB SSD drive as a ZIL/cache device (a 2 GB ZIL
 on slice 0 and the rest as cache on slice 1).

 The SSD drive has failed and the zpool is no longer available.

 Is there any way to import the pool and recover the data, even with some of
 the latest transactions lost?

 I've tried 'zdb -e -bcsvL poolname' but it didn't work.
 
 Below are the details:

 [r...@storage ~]# uname -a
 SunOS storage 5.11 snv_129 i86pc i386 i86pc

 [r...@storage ~]# zpool import
   pool: neosys
     id: 1346464136813319526
  state: UNAVAIL
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://www.sun.com/msg/ZFS-8000-EY
 config:

         neosys       UNAVAIL  missing device
           raidz2-0   ONLINE
             c4t0d0   ONLINE
             c4t1d0   ONLINE
             c4t2d0   ONLINE
             c4t3d0   ONLINE
             c4t4d0   ONLINE
             c4t5d0   ONLINE
             c4t6d0   ONLINE
             c4t7d0   ONLINE
 
 [r...@storage ~]# zdb -e neosys

 Configuration for import:
         vdev_children: 2
         version: 22
         pool_guid: 1346464136813319526
         name: 'neosys'
         state: 0
         hostid: 577477
         hostname: 'storage'
         vdev_tree:
             type: 'root'
             id: 0
             guid: 1346464136813319526
             children[0]:
                 type: 'raidz'
                 id: 0
                 guid: 12671265726510370964
                 nparity: 2
                 metaslab_array: 25
                 metaslab_shift: 35
                 ashift: 9
                 asize: 4000755744768
                 is_log: 0
                 children[0]:
                     type: 'disk'
                     id: 0
                     guid: 10831801542309994254
                     phys_path: '/p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@0,0:a'
                     whole_disk: 1
                     DTL: 3489
                     path: '/dev/dsk/c4t0d0s0'
                     devid: 'id1,s...@n5000cca32cc21642/a'
                 children[1]:
                     type: 'disk'
                     id: 1
                     guid: 39402223705908332
                     phys_path: '/p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@1,0:a'
                     whole_disk: 1
                     DTL: 3488
                     path: '/dev/dsk/c4t1d0s0'
                     devid: 'id1,s...@n5000cca32cc1f061/a'
                 children[2]:
                     type: 'disk'
                     id: 2
                     guid: 5642566785254158202
                     phys_path: '/p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@2,0:a'
                     whole_disk: 1
                     DTL: 3487
                     path: '/dev/dsk/c4t2d0s0'
                     devid: 'id1,s...@n5000cca32cc20121/a'
                 children[3]:
                     type: 'disk'
                     id: 3
                     guid: 5006664765902732873
                     phys_path: '/p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@3,0:a'
                     whole_disk: 1
                     DTL: 3486
                     path: '/dev/dsk/c4t3d0s0'
                     devid: 'id1,s...@n5000cca32cf43053/a'
                 children[4]:
                     type: 'disk'
                     id: 4
                     guid: 106648579627377843
                     phys_path: '/p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@4,0:a'
                     whole_disk: 1
                     DTL: 3485
                     path:

[zfs-discuss] question re how data is stored in L2ARC cache

2010-07-13 Thread Frederic Vander Elst

Hello,

I'm using Nexenta and am impressed by all of the OpenSolaris, ZFS and
Nexenta parts. Thank you for freeing us from the RAID hole.


Having created zvols in a pool (6 mirror sets, 2 SSDs for L2ARC), with
compression turned on, I am now realising that the data stored on the
L2ARC SSD is the 'raw', not the 'compressed' form (just finished reading
http://blogs.sun.com/dap/entry/zfs_compression).


Does anyone know if it would be technically feasible (one day) to cache
the compressed data there?


For data (like databases) that compresses quite well (e.g. 3.5x), if the
compressed form were stored, a 100GB L2ARC cache would essentially hold
350GB, which would be a big win.
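
(As a side note, the ratio to plug into that calculation is easy to read per
dataset -- a sketch, with tank/db as a placeholder name:)

  # Actual compression achieved on an existing dataset or zvol
  zfs get compressratio tank/db

  # Effective L2ARC capacity if compressed blocks were cached:
  # 100GB x 3.5 = 350GB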


If this is potentially in the pipeline, great... Otherwise, I'll go back
to the SSD shop...


Many thanks again

Frederic

--
Frederic Vander Elst
Group Head of IT
The pH Group
Direct Line: 020 7598 0320
Office Line: 020 7598 0310
Mobile: 07817 179 593
Fax: 020 7598 0311
www.phgroup.com




[zfs-discuss] ZFS Status Graph?

2010-07-13 Thread Beau J. Bechdol
One question I did have, and it might be a stupid one. So in this video
http://www.youtube.com/watch?v=CN6iDzesEs0&feature=related or this one
http://www.youtube.com/watch?v=QGIwg6ye1gE

What are the green bars on the right of the projector screen?

Thanks

-Beau


Re: [zfs-discuss] Recovering from an apparent ZFS Hang

2010-07-13 Thread Brian Leonard
Actually, there's still the primary issue of this post - the apparent hang. At 
the moment, I have 3 zpool commands running, all apparently hung and doing 
nothing:

bleon...@opensolaris:~$ ps -ef | grep zpool
root 20465 20411   0 18:10:44 pts/4   0:00 zpool clear r5pool
root 20408 20403   0 18:08:19 pts/3   0:00 zpool status r5pool
root 20396 17612   0 18:08:04 pts/2   0:00 zpool scrub r5pool

You can see all of them are not very busy, and seem to be waiting on something:

bleon...@opensolaris:~# ptime -p 20465
real    12:25.188031517
user     0.004037420
sys      0.008682963

bleon...@opensolaris:~# ptime -p 20408
real    15:03.977246851
user     0.002700817
sys      0.005662413

bleon...@opensolaris:~# ptime -p 20396
real    15:24.793176743
user     0.002954137
sys      0.014851215

And as I said earlier, I can't Ctrl+Break or kill any of these processes.
Time for a hard reboot.
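
(Before the hard reboot, the kernel stacks of the stuck commands are worth
grabbing -- a sketch using the PIDs from the ps output above:)

  # Print the kernel stack of the hung 'zpool clear' (PID 20465);
  # repeat for 20408 and 20396
  echo "0t20465::pid2proc | ::walk thread | ::findstack -v" | mdb -k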

/Brian


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-13 Thread BM
On Tue, Jul 13, 2010 at 11:40 PM, Edward Ned Harvey
solar...@nedharvey.com wrote:
 ZFS was the sole factor in my decision to buy a Sun server with Solaris this
 year, to replace my NetApp.  In addition, I bought some Dell machines and
 paid for Solaris on those, to keep around as backup destinations for the
 production Sun file server.

 I absolutely do believe ZFS is a huge selling point for Sun hardware and
 Solaris.  Especially for file servers.

Yes, as long as you're buying that OS from Oracle. :-)

But don't forget that Oracle looks like it is killing OpenSolaris and the
entire community after all: there are no new builds at genunix.org (the
latest is 134, and it seems like that's it), Oracle stopped building OSOL
after build 135 (I have no idea where this build is), and Oracle is building
Solaris Next or something like that -- I have no idea where to get that
thing either.

So there is no more free Solaris that you can use in a business while
supporting it yourself, and no more chance to build a reliable free storage
box or anything like that (Nexenta is building their stuff on top of the
*outdated* build 134). The latest checkout won't build the OS either (I
tried, and it fails). So the repository might be intentionally broken, so
that you can't build the stuff yourself but instead go and buy the Oracle
product.

Also, no more free security updates and no more hardware-only support.
That means the community will soon shrink to zero. Oracle basically lied
about the Fedora/RHEL model analogy (which would have been great if it had
happened).

I wish I were wrong, but it looks pretty much like game over, folks:
Oracle appears to be complete idiots towards the community. The same will
probably happen to Java.

:-(

-- 
Kind regards, BM

Things that are stupid at the beginning rarely end up wisely.


Re: [zfs-discuss] zfs root flash archive issue

2010-07-13 Thread Ketan
OK, thanks, will do that.


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-13 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Dave Pooser
 
 I'm looking at a new web server for the company, and am considering
 Solaris
 specifically because of ZFS. (Oracle's lousy sales model-- specifically
 the
 unwillingness to give a price for a Solaris support contract without my
 having to send multiple emails to multiple addresses-- may yet push me
 back
 to my default CentOS platform, but to the extent that Oracle is even in
 the
 running it's because of ZFS.)

Here's a really simple way to get some pricing information:

Go to Dell.com.  Servers.  Servers.  Rack.  Enhanced.  PowerEdge R710
(Customize.)
You could pick any server that supports solaris.  I just chose the R710
because I know it does.

Operating system:
  None          $1299   ($0)
  1yr basic     $1768   ($469)
  1yr standard  $2149   ($850)
  1yr premium   $2569   ($1270)
  3yr basic     $2533   ($1234)
  3yr standard  $3619   ($2320)
  3yr premium   $4753   ($3454)

If you're new to Solaris etc., I might not recommend the Dell, because
installation isn't straightforward.  Hardware support exists, but it's less
"enterprise" than what you might expect.  The Sun hardware is the
"recommended" way to go, but it's also more expensive.

Then again, if you're considering CentOS, you're probably not running on
enterprise-grade hardware.  I know I can't get CentOS to reliably run
things like OpenManage for RAID controller configuration, which is
necessary if you want to replace hot-spare drives without rebooting the
server.  You don't need OpenManage if you have no hot spares ... for example,
all disks in a raid6 would be OK and auto-resilver without intervention.

I do run Solaris on an R710.  There is no OpenManage, but there is MegaCLI,
which was ridiculously hard to find, ridiculously confusing to use, and
poorly documented and poorly supported if you're confused.

On the Dell, assuming you use PERC and assuming you have a hot spare, the
recommended solution for Linux would be RHEL or SLES, not CentOS.  You'd
pay $350/yr as compared to $470/yr for Solaris.



Re: [zfs-discuss] Legality and the future of zfs...

2010-07-13 Thread Ian Collins

On 07/14/10 04:20 PM, Edward Ned Harvey wrote:

Here's a really simple way to get some pricing information:

Go to Dell.com.  Servers.  Servers.  Rack.  Enhanced.  PowerEdge R710
(Customize.)
You could pick any server that supports solaris.  I just chose the R710
because I know it does.

 Operating system:
   None          $1299   ($0)
   1yr basic     $1768   ($469)
   1yr standard  $2149   ($850)
   1yr premium   $2569   ($1270)
   3yr basic     $2533   ($1234)
   3yr standard  $3619   ($2320)
   3yr premium   $4753   ($3454)

 If you're new to Solaris etc., I might not recommend the Dell, because
 installation isn't straightforward.  Hardware support exists, but it's less
 "enterprise" than what you might expect.  The Sun hardware is the
 "recommended" way to go, but it's also more expensive.
   


Not in my neck of the woods, Sun have always been most competitive.

--
Ian.



Re: [zfs-discuss] Legality and the future of zfs...

2010-07-13 Thread Edward Ned Harvey
 From: BM [mailto:bogdan.maryn...@gmail.com]
 
 But don't forget that Oracle looks like it is killing OpenSolaris and the
 entire community after all: there are no new builds at genunix.org (the
 latest is 134, and it seems like that's it), Oracle stopped building OSOL
 after build 135 (I have no idea where this build is)

It is true there's no new build published in the last 3 months.  But you can't 
use that to assume they're killing the community.

I have said many times: consider what the possibilities are.  Solaris 10 is
lacking features and bugfixes which are present in the free OpenSolaris.
Very marketable features, such as dedupe and log device removal.

Oracle has stated that their focus is on commercialization of Solaris.
Solaris 10 is long overdue for a new release.  It's entirely possible (and
would make total sense) that they're shifting development effort away from
the OpenSolaris community to push out the next version of Solaris...
probably called Oracle Solaris 11.

They said they would release the next OpenSolaris in H1 this year, but
they're overdue.  They also said they would release the next Solaris this
year.  It's entirely possible they just need all the developers they have to
deliver that goal.

IMHO, I think this possibility is more likely than the "we are killing
OpenSolaris" possibility.  The latter wouldn't make any sense.


 and Oracle is building Solaris Next or something like that -- I have no
 idea where to get that thing either.

There's no known name for that.  The next version of Solaris, I suspect,
will be called Oracle Solaris 11, but until that is announced, nobody knows.
It's colloquially and unofficially called "Solaris Next" or "Solaris 11".

Whenever it becomes available, it will be available via the usual commercial
channels.



Re: [zfs-discuss] Legality and the future of zfs...

2010-07-13 Thread Edward Ned Harvey
 From: Ian Collins [mailto:i...@ianshome.com]
  From: Edward Ned Harvey
  The sun hardware is the
  recommended way to go, but it's also more expensive.
 
 Not in my neck of the woods, Sun have always been most competitive.

Interesting.  I wonder what's different between you and me?  I most often
buy relatively low-end servers, say, one CPU, 4 cores, 16GB RAM, 6TB of SATA
disk.  I might expect to pay $3k or $4k.  Last October, I didn't see any Sun
offering below $6k...

I know Sun has better high-end servers.  Maybe that's what you're buying,
and maybe that's the area where Sun's prices are more competitive?
