Re: [zfs-discuss] ZFS snapshot zvols/iscsi send backup

2010-07-12 Thread Ian Collins

On 07/13/10 12:26 PM, Gary Leong wrote:

I'm looking to use ZFS to export iSCSI volumes to a Windows/Linux client.  
Essentially, I'm looking to create two ZFS storage machines that I will export 
iSCSI targets from.  Then, from the client side, I will enable mirroring.  The 
two ZFS machines will be independent of each other.  I had a question about 
snapshotting of iSCSI zvols.

If I take a snapshot of an iSCSI volume, it snapshots the blocks.  I know that 
sending the blocks will allow for some form of replication.  However, if I 
send the snapshot to a file, will I be able to recover the iSCSI volume from 
the file(s)?

e.g.

zfs send tank/t...@1 | gzip -c > zfs.tank.test.gz

Can I recover this ISCSI volume from zfs.tank.test.gz by sending it directly to 
another ZFS machine?


Yes.  The send data stream is just that, a stream of data.  If you want 
to archive the file, do a test receive first to make sure there isn't 
any data corruption.
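A minimal sketch of that archive-and-verify workflow (the dataset, snapshot, and file names here are placeholders; a receive into a scratch dataset is the thorough check, since it forces the whole stream to be read and checksummed):

```shell
# Archive the send stream to a compressed file.
zfs send tank/test@1 | gzip -c > /backup/zfs.tank.test.gz

# Test receive into a throwaway dataset to confirm the stream is intact.
gzcat /backup/zfs.tank.test.gz | zfs receive tank/verify
zfs destroy -r tank/verify

# Later, restore the volume on another machine.
gzcat /backup/zfs.tank.test.gz | zfs receive tank/test_restored
```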



Will I then be able to mount the ZFS volume created from this file and have my 
filesystem be the way it was?
   


Yes.

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZIL SSD failed

2010-07-12 Thread Dmitry Sorokin
I have/had an Intel M25-E 32GB SSD drive as a ZIL/cache device (2 GB ZIL
on slice 0 and the rest as cache on slice 1).

The SSD drive has failed and the zpool is no longer available.

Is there any way to import the pool/recover data, even with some latest
transactions lost?

I've tried zdb -e -bcsvL  but it didn't work.

 

Below are the details:

 

[r...@storage ~]# uname -a

SunOS storage 5.11 snv_129 i86pc i386 i86pc

 

[r...@storage ~]# zpool import

  pool: neosys
    id: 1346464136813319526
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        neosys       UNAVAIL  missing device
          raidz2-0   ONLINE
            c4t0d0   ONLINE
            c4t1d0   ONLINE
            c4t2d0   ONLINE
            c4t3d0   ONLINE
            c4t4d0   ONLINE
            c4t5d0   ONLINE
            c4t6d0   ONLINE
            c4t7d0   ONLINE

 

[r...@storage ~]# zdb -e neosys

Configuration for import:
        vdev_children: 2
        version: 22
        pool_guid: 1346464136813319526
        name: 'neosys'
        state: 0
        hostid: 577477
        hostname: 'storage'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 1346464136813319526
            children[0]:
                type: 'raidz'
                id: 0
                guid: 12671265726510370964
                nparity: 2
                metaslab_array: 25
                metaslab_shift: 35
                ashift: 9
                asize: 4000755744768
                is_log: 0
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 10831801542309994254
                    phys_path: '/p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@0,0:a'
                    whole_disk: 1
                    DTL: 3489
                    path: '/dev/dsk/c4t0d0s0'
                    devid: 'id1,s...@n5000cca32cc21642/a'
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 39402223705908332
                    phys_path: '/p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@1,0:a'
                    whole_disk: 1
                    DTL: 3488
                    path: '/dev/dsk/c4t1d0s0'
                    devid: 'id1,s...@n5000cca32cc1f061/a'
                children[2]:
                    type: 'disk'
                    id: 2
                    guid: 5642566785254158202
                    phys_path: '/p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@2,0:a'
                    whole_disk: 1
                    DTL: 3487
                    path: '/dev/dsk/c4t2d0s0'
                    devid: 'id1,s...@n5000cca32cc20121/a'
                children[3]:
                    type: 'disk'
                    id: 3
                    guid: 5006664765902732873
                    phys_path: '/p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@3,0:a'
                    whole_disk: 1
                    DTL: 3486
                    path: '/dev/dsk/c4t3d0s0'
                    devid: 'id1,s...@n5000cca32cf43053/a'
                children[4]:
                    type: 'disk'
                    id: 4
                    guid: 106648579627377843
                    phys_path: '/p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@4,0:a'
                    whole_disk: 1
                    DTL: 3485
                    path: '/dev/dsk/c4t4d0s0'
                    devid: 'id1,s...@n5000cca34ddf64e4/a'
                children[5]:
                    type: 'disk'
                    id: 5
                    guid: 16829737373647293224
                    phys_path: '/p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@5,0:a'
                    whole_disk: 1
                    DTL: 3484
                    path: '/dev/dsk/c4t5d0s0'
                    devid: 'id1,s...@n5000cca34ddf6489/a'
                children[6]:
                    type: 'disk'
                    id: 6
                    guid: 8848352534289847923
                    phys_path: '/p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@6,0:a'
                    whole_disk: 1
                    DTL: 3503
                    path: '/dev/dsk/c4t6d0s0'
                    devid: 'id1,s...@n5000cca357ec765c/a'
                children[7]:
                    type: 'disk'
                    id: 7
                    guid: 6940643930453962294
                    phys_path: '/p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@7,0:a'
                    whole_disk: 1
                    DTL: 3469
                    path: '/dev/dsk/c4t7d0s0'
                    devid: 'id1,s...@n5000cca357ec9b07/a'
            children[1]:
                type: 'missing'
                id: 1
                guid: 0

zdb: can't open 'neosys': No such device or address

 

[r...@storage ~]#
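For what it's worth, builds newer than snv_129 added support for importing a pool whose separate log device has gone missing. The sketch below assumes booting such a build; the -m flag and log-device removal are both features of later zpool versions, so verify them against your release before relying on this:

```shell
# On a build whose zpool(1M) supports it, import while accepting the
# loss of any uncommitted ZIL transactions on the missing log device:
zpool import -m neosys

# Then remove the failed log device from the pool configuration
# (the device name is a placeholder for the dead SSD slice):
zpool remove neosys c9t9d9s0
```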

Re: [zfs-discuss] Legality and the future of zfs...

2010-07-12 Thread Jens Elkner
On Mon, Jul 12, 2010 at 05:05:41PM +0100, Andrew Gabriel wrote:
> Linder, Doug wrote:
> >Out of sheer curiosity - and I'm not disagreeing with you, just wondering 
> >- how does ZFS make money for Oracle when they don't charge for it?  Do 
> >you think it's such an important feature that it's a big factor in 
> >customers picking Solaris over other platforms?
> >  
> 
> Yes, it is one of many significant factors in customers choosing Solaris 
> over other OS's.
> Having chosen Solaris, customers then tend to buy Sun/Oracle systems to 
> run it on.

Both hit the nail on the head. But only if one doesn't have to sell
one's kingdom to get recommended/security patches. Otherwise the windooze
nerds take over ...

Regards,
jel.
-- 
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768


[zfs-discuss] ZFS snapshot zvols/iscsi send backup

2010-07-12 Thread Gary Leong
I'm looking to use ZFS to export iSCSI volumes to a Windows/Linux client.  
Essentially, I'm looking to create two ZFS storage machines that I will export 
iSCSI targets from.  Then, from the client side, I will enable mirroring.  The 
two ZFS machines will be independent of each other.  I had a question about 
snapshotting of iSCSI zvols.

If I take a snapshot of an iSCSI volume, it snapshots the blocks.  I know that 
sending the blocks will allow for some form of replication.  However, if I 
send the snapshot to a file, will I be able to recover the iSCSI volume from 
the file(s)?

e.g.

zfs send tank/t...@1 | gzip -c > zfs.tank.test.gz

Can I recover this iSCSI volume from zfs.tank.test.gz by sending it directly to 
another ZFS machine?  Will I then be able to mount the ZFS volume created from 
this file and have my filesystem be the way it was?  If the blocks are 
reassembled as they were before, I assume everything comes back the way it was, 
including the filesystem and such.

Or am I incorrect about this?

Gary
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] How do I clean up corrupted files from zpool status -v?

2010-07-12 Thread Kris Kasner


Thanks for the reply..

I got derailed by a DBA while writing the email, so I should have been clearer: 
I realize that the 'DEGRADED' states should resolve after I replace the 
disk, but what about the section that states:

" errors: Permanent errors have been detected in the following files: "


Will those resolve too? Or will it still think that there are corrupt files 
lying around? They all had valid paths at the start of the process; when I 
unlinked them and replaced them with good copies, they changed to the

 zroot/packages:<0x2531d>
 <0x6e>:<0xc0f2>

format.

I'm mostly concerned because I want zpool status to show up clean and error 
free so our monitoring can catch it correctly.


Thanks again.

--Kris

Today at 16:15, Garrett D'Amore  wrote:


Hey Kris (glad to see someone from my QCOM days!):

It should automatically clear itself when you replace the disk.  Right
now you're still degraded since you don't have full redundancy.

- Garrett


On Mon, 2010-07-12 at 16:10 -0700, Kris Kasner wrote:

Hi Folks..

I have a system that was inadvertently left unmirrored for root. We were able
to add a mirror disk, resilver, and fix the corrupted files (nothing very
interesting was corrupt, whew), but zpool status -v still shows errors..

Will this self correct when we replace the degraded disk and resilver? Or is
there something else that I'm not finding that I need to do to clean up?

This is Solaris 10 u8, zpool v15
15:52:50 catalina(34)> sudo zpool status -v
   pool: zroot
  state: DEGRADED
status: One or more devices has experienced an error resulting in data
 corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
 entire pool from backup.
see: http://www.sun.com/msg/ZFS-8000-8A
  scrub: resilver completed after 0h48m with 15 errors on Mon Jul 12 15:41:50
2010
config:

 NAME          STATE     READ WRITE CKSUM
 zroot         DEGRADED    18     0     0
   mirror      DEGRADED    44     0    23
     c1t1d0s2  DEGRADED    74     0    23  too many errors
     c1t0d0s2  ONLINE       0     0    67  29.8G resilvered

errors: Permanent errors have been detected in the following files:

 zroot/packages:<0xad58>
 zroot/packages:<0x11477>
 zroot/packages:<0x2531d>
 <0x6e>:<0xc0f2>
 <0x6e>:<0xce68>
 <0x6e>:<0x28d9f>
 <0x6e>:<0x2b5c1>
 <0x76>:<0x17369>
 <0x86>:<0x11fda>
 <0x86>:<0x13253>
 <0x86>:<0x13346>
 <0x86>:<0x33ed3>
 <0x86>:<0x38fcd>
 <0x86>:<0x39007>
15:53:04 catalina(35)>


Thanks for any suggestions. The system is in another city, so I can't quickly
test replacing the disk and see what happens..

Kris






--

Thomas Kris Kasner
Qualcomm Inc.
5775 Morehouse Drive
San Diego, CA 92121
(858)658-4932


"Do not meddle in the affairs of cats,
for they are subtle and will
pee on your computer." --Bruce Graham


Re: [zfs-discuss] How do I clean up corrupted files from zpool status -v?

2010-07-12 Thread Ian Collins

On 07/13/10 11:10 AM, Kris Kasner wrote:


Hi Folks..

I have a system that was inadvertently left unmirrored for root. We 
were able to add a mirror disk, resilver, and fix the corrupted files 
(nothing very interesting was corrupt, whew), but zpool status -v 
still shows errors..


Will this self correct when we replace the degraded disk and resilver? 
Or is there something else that I'm not finding that I need to do to 
clean up?


This is Solaris 10 u8, zpool v15
15:52:50 catalina(34)> sudo zpool status -v
  pool: zroot
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver completed after 0h48m with 15 errors on Mon Jul 12 
15:41:50 2010

config:

NAME          STATE     READ WRITE CKSUM
zroot         DEGRADED    18     0     0
  mirror      DEGRADED    44     0    23
    c1t1d0s2  DEGRADED    74     0    23  too many errors
    c1t0d0s2  ONLINE       0     0    67  29.8G resilvered


What happens if you zpool detach the degraded drive?
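With the device names from the status output above, that experiment would look roughly like this (hedged: exactly when the permanent-error list clears varies by release, so check status after each step):

```shell
zpool detach zroot c1t1d0s2   # drop the degraded half of the mirror
zpool scrub zroot             # re-verify all remaining data
zpool clear zroot             # reset the pool's error counters
zpool status -v zroot         # see whether the permanent-error list is gone
```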

--
Ian.



Re: [zfs-discuss] How do I clean up corrupted files from zpool status -v?

2010-07-12 Thread Garrett D'Amore
Hey Kris (glad to see someone from my QCOM days!):

It should automatically clear itself when you replace the disk.  Right
now you're still degraded since you don't have full redundancy.

- Garrett


On Mon, 2010-07-12 at 16:10 -0700, Kris Kasner wrote:
> Hi Folks..
> 
> I have a system that was inadvertently left unmirrored for root. We were able 
> to add a mirror disk, resilver, and fix the corrupted files (nothing very 
> interesting was corrupt, whew), but zpool status -v still shows errors..
> 
> Will this self correct when we replace the degraded disk and resilver? Or is 
> there something else that I'm not finding that I need to do to clean up?
> 
> This is Solaris 10 u8, zpool v15
> 15:52:50 catalina(34)> sudo zpool status -v
>pool: zroot
>   state: DEGRADED
> status: One or more devices has experienced an error resulting in data
>  corruption.  Applications may be affected.
> action: Restore the file in question if possible.  Otherwise restore the
>  entire pool from backup.
> see: http://www.sun.com/msg/ZFS-8000-8A
>   scrub: resilver completed after 0h48m with 15 errors on Mon Jul 12 15:41:50 
> 2010
> config:
> 
>  NAME          STATE     READ WRITE CKSUM
>  zroot         DEGRADED    18     0     0
>    mirror      DEGRADED    44     0    23
>      c1t1d0s2  DEGRADED    74     0    23  too many errors
>      c1t0d0s2  ONLINE       0     0    67  29.8G resilvered
> 
> errors: Permanent errors have been detected in the following files:
> 
>  zroot/packages:<0xad58>
>  zroot/packages:<0x11477>
>  zroot/packages:<0x2531d>
>  <0x6e>:<0xc0f2>
>  <0x6e>:<0xce68>
>  <0x6e>:<0x28d9f>
>  <0x6e>:<0x2b5c1>
>  <0x76>:<0x17369>
>  <0x86>:<0x11fda>
>  <0x86>:<0x13253>
>  <0x86>:<0x13346>
>  <0x86>:<0x33ed3>
>  <0x86>:<0x38fcd>
>  <0x86>:<0x39007>
> 15:53:04 catalina(35)>
> 
> 
> Thanks for any suggestions. The system is in another city, so I can't quickly 
> test replacing the disk and see what happens..
> 
> Kris
> 




[zfs-discuss] How do I clean up corrupted files from zpool status -v?

2010-07-12 Thread Kris Kasner


Hi Folks..

I have a system that was inadvertently left unmirrored for root. We were able 
to add a mirror disk, resilver, and fix the corrupted files (nothing very 
interesting was corrupt, whew), but zpool status -v still shows errors..


Will this self correct when we replace the degraded disk and resilver? Or is 
there something else that I'm not finding that I need to do to clean up?


This is Solaris 10 u8, zpool v15
15:52:50 catalina(34)> sudo zpool status -v
  pool: zroot
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver completed after 0h48m with 15 errors on Mon Jul 12 15:41:50 
2010

config:

NAME          STATE     READ WRITE CKSUM
zroot         DEGRADED    18     0     0
  mirror      DEGRADED    44     0    23
    c1t1d0s2  DEGRADED    74     0    23  too many errors
    c1t0d0s2  ONLINE       0     0    67  29.8G resilvered

errors: Permanent errors have been detected in the following files:

zroot/packages:<0xad58>
zroot/packages:<0x11477>
zroot/packages:<0x2531d>
<0x6e>:<0xc0f2>
<0x6e>:<0xce68>
<0x6e>:<0x28d9f>
<0x6e>:<0x2b5c1>
<0x76>:<0x17369>
<0x86>:<0x11fda>
<0x86>:<0x13253>
<0x86>:<0x13346>
<0x86>:<0x33ed3>
<0x86>:<0x38fcd>
<0x86>:<0x39007>
15:53:04 catalina(35)>


Thanks for any suggestions. The system is in another city, so I can't quickly 
test replacing the disk and see what happens..


Kris

--

Kris Kasner
Qualcomm Inc.






Re: [zfs-discuss] Need ZFS master!

2010-07-12 Thread Cindy Swearingen

Hi John,

Follow the steps in this section:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Replacing/Relabeling the Root Pool Disk

If the disk is correctly labeled with an SMI label, then you can skip
down to steps 5-8 of this procedure.
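For reference, once the disk carries an SMI label with a suitably sized s0 slice, the attach-and-make-bootable part of that procedure looks roughly like the following on x86 (the pool and device names are examples, not taken from John's system):

```shell
# Attach the second disk's slice 0 as a mirror of the root slice:
zpool attach rpool c7t0d0s0 c7t1d0s0

# Wait for the resilver to complete before trusting the mirror:
zpool status rpool

# Install GRUB on the new half so the system can boot from either disk:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7t1d0s0
```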

Thanks,

Cindy


On 07/12/10 16:06, john wrote:

Hello all. I am new... very new to OpenSolaris, and I am having an issue and have 
no idea what is going wrong. I have 5 drives in my machine, all 500 GB. I 
installed OpenSolaris on the first drive and rebooted. Now what I want to do 
is add a second drive so they are mirrored. How does one do this? I am getting 
nowhere and need some help.



Re: [zfs-discuss] zfs send/recv hanging in 2009.06

2010-07-12 Thread BJ Quinn
Actually, my current servers are 2008.05, and I noticed the problems I was 
having with 2009.06 BEFORE I put those up as the new servers, so my pools are 
not too new to revert to 2008.11; I'd actually be upgrading from 2008.05.

I do not have paid support, but it's just not going to go over well with the 
client to use a development build (especially if something goes wrong).

I'd really like to use 2008.11 if someone can confirm that the zfs send/recv 
hangs were introduced AFTER 2008.11.  I'm in the process of trying it myself, 
but since it's intermittent, I'd feel better if someone knew when the problems 
were introduced.


[zfs-discuss] Need ZFS master!

2010-07-12 Thread john
Hello all. I am new... very new to OpenSolaris, and I am having an issue and have 
no idea what is going wrong. I have 5 drives in my machine, all 500 GB. I 
installed OpenSolaris on the first drive and rebooted. Now what I want to do 
is add a second drive so they are mirrored. How does one do this? I am getting 
nowhere and need some help.


Re: [zfs-discuss] zfs send/recv hanging in 2009.06

2010-07-12 Thread Ian Collins

On 07/13/10 06:48 AM, BJ Quinn wrote:

Yeah, it's just that I don't think I'll be allowed to put up a dev version, but 
I would probably get away with putting up 2008.11 if it doesn't have the same 
problems with zfs send/recv.  Does anyone know?
   


That would be a silly thing to do.  Your pools and filesystems would be 
too new to revert back.  You would also get back all the bugs that were 
fixed in your current release.


Unless you have paid support, there is no sensible reason not to use the 
latest build.


--
Ian.



Re: [zfs-discuss] How to verify ecc for ram is active and enabled?

2010-07-12 Thread Alex Krasnov
> From this output it appears that Solaris, presumably via
> the BIOS, thinks my machine doesn't
> have ECC RAM, even though all the memory
> modules are indeed ECC modules.
> 
> Might be time to check (1) my current BIOS settings,
> even though I felt sure ECC was enabled in the BIOS
> already, and (2) check for a newer BIOS update. A
> pity, as the machine has been rock-solid so far, and
> I don't like changing stable BIOSes...

My apologies for resurrecting this thread, but I am curious whether you have 
had any success enabling ECC on your M2N-SLI machine, using either the BIOS or 
the setpci scripts. I am experiencing a similar issue with my M2N32-SLI 
machine. The BIOS reports that ECC is turned on, but smbios reports that it is 
turned off:

ID    SIZE TYPE
0     106  SMB_TYPE_BIOS (BIOS information)

  Vendor: Phoenix Technologies, LTD
  Version String: ASUS M2N32-SLI DELUXE ACPI BIOS Revision 2001
  Release Date: 05/19/2008
  Address Segment: 0xe000
  ROM Size: 1048576 bytes
  Image Size: 131072 bytes
  Characteristics: 0x7fcb9e80

ID    SIZE TYPE
63    15   SMB_TYPE_MEMARRAY (physical memory array)

  Location: 3 (system board or motherboard)
  Use: 3 (system memory)
  ECC: 3 (none)
  Number of Slots/Sockets: 4
  Memory Error Data: Not Supported
  Max Capacity: 17179869184 bytes
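For anyone reproducing this check, the physical-memory-array record above can be pulled out directly with smbios(1M); the symbolic type name is assumed to be accepted on this release, with the numeric type as a fallback:

```shell
# Show only the physical memory array record, whose "ECC:" field
# reports the error-correction mode the BIOS advertises:
smbios -t SMB_TYPE_MEMARRAY

# Numeric equivalent (SMBIOS structure type 16):
smbios -t 16
```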


Re: [zfs-discuss] Encryption?

2010-07-12 Thread Michael Johnson
Garrett wrote:
>I don't know about ramifications (though I suspect that a broadening
>error scope would decrease ZFS' ability to isolate and work around
>problematic regions on the media), but one thing I do know.  If you use
>FreeBSD disk encryption below ZFS, then you won't be able to import
>your pools to another implementation -- you will be stuck with FreeBSD.


This is an excellent point.  Geli isn't a good option for me, then, though 
using encryption outside of the VM would still work.

>Btw, if you want a commercially supported and maintained product, have
>you looked at NexentaStor?  Regardless of what happens with OpenSolaris,
>we aren't going anywhere. (Full disclosure: I'm a Nexenta Systems
>employee. :-)


I probably ought to consider other OpenSolaris alternatives, like NexentaStor. 
(Though I'd be looking at the free version, not the commercial one: this is 
just for personal use, despite how careful I'm being with it. :) )  However 
(and please correct me if I'm wrong), isn't your future still tied to the 
future of OpenSolaris?  The code is open, of course, but my understanding is 
that there isn't the same kind of developer community supporting OpenSolaris 
itself that you see with Linux (or even the BSDs).

In other words, if Oracle stops development of OpenSolaris, there wouldn't be 
enough developers still working on it to keep it from stagnating.  Or are you 
saying that you employ enough kernel hackers to keep up even without Oracle?  
(I 
am admittedly ignorant about the OpenSolaris developer community; this is all 
based on others' statements and opinions that I've read.)

Michael


  


Re: [zfs-discuss] Encryption?

2010-07-12 Thread Garrett D'Amore
On Mon, 2010-07-12 at 12:55 -0700, Brandon High wrote:
> On Mon, Jul 12, 2010 at 10:00 AM, Garrett D'Amore wrote:
> > Btw, if you want a commercially supported and maintained product, have
> > you looked at NexentaStor?  Regardless of what happens with OpenSolaris,
> > we aren't going anywhere. (Full disclosure: I'm a Nexenta Systems
> > employee. :-)
> 
> I'm trying to decide for myself when I'll give up on Oracle releasing
> another dev or release build and moving to something like Nexenta Core.
> 
> I actually *like* the Solaris user space, so GNU/Debian userspace
> isn't that compelling for me.

The distinction is quickly shrinking.  I think the trend has been to
value compatibility with Linux over compatibility with legacy Solaris,
at least for OpenSolaris and probably also for whatever next release of
Solaris might be forthcoming.  (At least at the shell/command line
level.  The *library* level -i.e. C API - is a totally different story,
of course.)

> I see it enough at work that using something different at home is
> novel and helps keep me honest. I also don't see a roadmap for the
> upcoming releases or what release of Debian or Ubuntu they'll be based
> on.

We have plans centered around 3.0.x and 3.1, and our plans for 4.0 are
still forming.  For 3.0. and 3.1, we will remain based on the same
release of Ubuntu.  For 4.0, there will be a major change, but the
ultimate base of this is still under debate.

I don't know if marketing has released any timelines yet, so I won't do
so here.  But you should contact our sales group if you want to find out
more -- they can probably say more than I can.

- Garrett





Re: [zfs-discuss] Encryption?

2010-07-12 Thread Brandon High
On Mon, Jul 12, 2010 at 10:00 AM, Garrett D'Amore wrote:

> Btw, if you want a commercially supported and maintained product, have
> you looked at NexentaStor?  Regardless of what happens with OpenSolaris,
> we aren't going anywhere. (Full disclosure: I'm a Nexenta Systems
> employee. :-)
>

I'm trying to decide for myself when I'll give up on Oracle releasing
another dev or release build and moving to something like Nexenta Core.

I actually *like* the Solaris user space, so GNU/Debian userspace isn't that
compelling for me. I see it enough at work that using something different at
home is novel and helps keep me honest. I also don't see a roadmap for the
upcoming releases or what release of Debian or Ubuntu they'll be based on.

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Cache flush (or the lack of such) and corruption

2010-07-12 Thread James Van Artsdalen
ZFS is a copy-on-write filesystem.  The important point is that if a single 
byte in a file is changed then the containing block is rewritten elsewhere, 
requiring that the file block pointers be rewritten - and when these are 
rewritten they are likewise written elsewhere and pointers to *them* need to be 
rewritten, "recursively" all the way to the root of the filesystem, or 
überblock.

In other words, a write anywhere necessitates changes in high-level filesystem 
metadata.  In an archaic filesystem such as NTFS, changes are local: writing a 
file requires that little metadata be changed, and rarely metadata for the 
entire filesystem.

As a result ZFS *is* more prone to pool loss *when the hardware screws up* 
since filesystem metadata is written more often.

At one time there was talk of ZFS implementing a "deferred reallocation" scheme 
for überblock updates.  That would greatly improve ZFS's ability to withstand 
poorly designed hardware.

PS RAIDZ2 is a good thing but is mostly irrelevant to pool corruption.  You 
need backups.  I set up all client installations with a hot backup via zfs 
send/recv.
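A hot backup of the kind mentioned can be sketched as an incremental send/recv cycle (pool, snapshot, and host names are illustrative assumptions, as is the ssh transport):

```shell
# Take a new recursive snapshot and replicate the delta since the last one:
zfs snapshot -r tank@backup-new
zfs send -R -i tank@backup-prev tank@backup-new | \
    ssh backuphost zfs receive -F -d backup/tank

# Roll the snapshot names forward for the next cycle:
zfs destroy -r tank@backup-prev
zfs rename -r tank@backup-new tank@backup-prev
```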


Re: [zfs-discuss] zfs send/recv hanging in 2009.06

2010-07-12 Thread BJ Quinn
Yeah, it's just that I don't think I'll be allowed to put up a dev version, but 
I would probably get away with putting up 2008.11 if it doesn't have the same 
problems with zfs send/recv.  Does anyone know?


Re: [zfs-discuss] zfs send/recv hanging in 2009.06

2010-07-12 Thread Brent Jones
On Mon, Jul 12, 2010 at 10:04 AM, BJ Quinn  wrote:
> I'm actually only running one at a time.  It is recursive / incremental (and 
> hundreds of GB), but it's only one at a time.  Was there still problems in 
> 2009.06 in that scenario?
>
> Does 2008.11 have these problems?  2008.05 didn't, and I'm considering moving 
> back to that rather than using a development build.
>

I would guess you would have fewer problems on build 132 or 134 than you
would on 2009.06  :)
Just from my experience.


-- 
Brent Jones
br...@servuhome.net


Re: [zfs-discuss] Recovering from an apparent ZFS Hang

2010-07-12 Thread Cindy Swearingen

Hi Brian,

What are you trying to determine? How the pool behaves when a drive is
yanked out?

It's hard to tell how a pool will react with external USB drives. I think
it will also depend on how the system handles a device removal.

I created a similar raidz pool with non-USB devices, offlined a disk,
and ran a scrub. It works as expected. See the output below. Could
you retry your test with an offline rather than a yank and see if
the system hangs?

In addition, we don't support pools that are created on p* devices.
Use the c1t0d* names instead.

Thanks,

Cindy

# zpool create rzpool raidz1 c2t6d0 c2t7d0 c2t8d0
# zpool offline rzpool c2t8d0
# zpool status rzpool
  pool: rzpool
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
 scan: none requested
config:

NAMESTATE READ WRITE CKSUM
rzpool  DEGRADED 0 0 0
  raidz1-0  DEGRADED 0 0 0
c2t6d0  ONLINE   0 0 0
c2t7d0  ONLINE   0 0 0
c2t8d0  OFFLINE  0 0 0

errors: No known data errors
# zpool scrub rzpool
# zpool status rzpool
  pool: rzpool
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
 scan: scrub repaired 0 in 0h0m with 0 errors on Mon Jul 12 09:56:36 2010
config:

NAMESTATE READ WRITE CKSUM
rzpool  DEGRADED 0 0 0
  raidz1-0  DEGRADED 0 0 0
c2t6d0  ONLINE   0 0 0
c2t7d0  ONLINE   0 0 0
c2t8d0  OFFLINE  0 0 0

errors: No known data errors
# zpool status rzpool
  pool: rzpool
 state: ONLINE
 scan: resilvered 14K in 0h0m with 0 errors on Mon Jul 12 10:12:55 2010
config:

NAMESTATE READ WRITE CKSUM
rzpool  ONLINE   0 0 0
  raidz1-0  ONLINE   0 0 0
c2t6d0  ONLINE   0 0 0
c2t7d0  ONLINE   0 0 0
c2t8d0  ONLINE   0 0 0

errors: No known data errors


On 07/12/10 10:45, Brian Leonard wrote:

Hi,

I'm currently trying to work with a quad-bay USB drive enclosure. I've created 
a raidz pool as follows:

bleon...@opensolaris:~# zpool status r5pool
  pool: r5pool
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
r5poolONLINE   0 0 0
  raidz1  ONLINE   0 0 0
c1t0d0p0  ONLINE   0 0 0
c1t0d1p0  ONLINE   0 0 0
c1t0d2p0  ONLINE   0 0 0
c1t0d3p0  ONLINE   0 0 0

errors: No known data errors

If I pop a disk and run a zpool scrub, the fault is noted:

bleon...@opensolaris:~# zpool scrub r5pool
bleon...@opensolaris:~# zpool status r5pool
  pool: r5pool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid.  Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: scrub completed after 0h0m with 0 errors on Mon Jul 12 12:35:46 2010
config:

NAME  STATE READ WRITE CKSUM
r5poolDEGRADED 0 0 0
  raidz1  DEGRADED 0 0 0
c1t0d0p0  ONLINE   0 0 0
c1t0d1p0  ONLINE   0 0 0
c1t0d2p0  FAULTED  0 0 0  corrupted data
c1t0d3p0  ONLINE   0 0 0

errors: No known data errors

However, it's when I pop the disk back in that everything goes south. If I run 
a zpool scrub at this point, the command appears to just hang.

Running zpool status again shows the scrub will finish in 2 minutes, but it 
never does. You can see it's been running for 33 minutes already, and there's 
no data in the pool.

bleon...@opensolaris:/r5pool# zpool status r5pool
  pool: r5pool
 state: ONLINE
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-HC
 scrub: scrub in progress for 0h33m, 92.41% done, 0h2m to go
config:

NAME  STATE READ WRITE CKSUM
r5poolONLINE   0 0 0
  raidz1  ONLINE   0 0 0
c1t0d0p0  ONLINE   0 0 0
c1t0d1p0  ONLINE   0 0 0
c1t0d

Re: [zfs-discuss] zfs send/recv hanging in 2009.06

2010-07-12 Thread BJ Quinn
I'm actually only running one at a time.  It is recursive / incremental (and 
hundreds of GB), but it's only one at a time.  Was there still problems in 
2009.06 in that scenario?

Does 2008.11 have these problems?  2008.05 didn't, and I'm considering moving 
back to that rather than using a development build.

Message was edited by: bjquinn


Re: [zfs-discuss] Encryption?

2010-07-12 Thread Garrett D'Amore
On Mon, 2010-07-12 at 09:41 -0700, Michael Johnson wrote:
> Nikola M wrote:
> >Freddie Cash wrote:
> >> You definitely want to do the ZFS bits from within FreeBSD.
> >Why not using ZFS in OpenSolaris? At least it has most stable/tested
> >implementation and also the newest one if needed?
> 
> 
> I'd love to use OpenSolaris for exactly those reasons, but I'm wary of using 
> an 
> operating system that may not continue to be updated/maintained.  If 
> OpenSolaris 
> had continued to be regularly released after Oracle bought Sun I'd be 
> choosing 
> it.  As it is, I don't want to be pessimistic, but the doubt about 
> OpenSolaris's 
> future is enough to make me choose FreeBSD instead.  (I'm sure that such 
> sentiments won't make me popular here, but so far Oracle has been 
> frustratingly 
> silent on their plans for OpenSolaris.)  At the very least, if FreeBSD 
> doesn't 
> do what I want I can switch the system disk to OpenSolaris and keep using the 
> same pool.  (Right?)
> 
> Going back to my original question: does anyone know of any problems that 
> could 
> be caused by using raidz on top of encrypted drives?  If there were a 
> physical 
> read error, which would get amplified by the encryption layer (if I'm 
> understanding full-disk encryption correctly, which I may not be), would ZFS 
> still be able to recover?
> 

I don't know about ramifications (though I suspect that a broadening
error scope would decrease ZFS' ability to isolate and work around
problematic regions on the media), but one thing I do know.  If you use
FreeBSD disk encryption below ZFS, then you won't be able to import
your pools to another implementation -- you will be stuck with FreeBSD.

Btw, if you want a commercially supported and maintained product, have
you looked at NexentaStor?  Regardless of what happens with OpenSolaris,
we aren't going anywhere. (Full disclosure: I'm a Nexenta Systems
employee. :-)

-- Garrett
> 
>   
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Recovering from an apparent ZFS Hang

2010-07-12 Thread Brian Leonard
Hi,

I'm currently trying to work with a quad-bay USB drive enclosure. I've created 
a raidz pool as follows:

bleon...@opensolaris:~# zpool status r5pool
  pool: r5pool
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
r5poolONLINE   0 0 0
  raidz1  ONLINE   0 0 0
c1t0d0p0  ONLINE   0 0 0
c1t0d1p0  ONLINE   0 0 0
c1t0d2p0  ONLINE   0 0 0
c1t0d3p0  ONLINE   0 0 0

errors: No known data errors

If I pop a disk and run a zpool scrub, the fault is noted:

bleon...@opensolaris:~# zpool scrub r5pool
bleon...@opensolaris:~# zpool status r5pool
  pool: r5pool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid.  Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: scrub completed after 0h0m with 0 errors on Mon Jul 12 12:35:46 2010
config:

NAME  STATE READ WRITE CKSUM
r5poolDEGRADED 0 0 0
  raidz1  DEGRADED 0 0 0
c1t0d0p0  ONLINE   0 0 0
c1t0d1p0  ONLINE   0 0 0
c1t0d2p0  FAULTED  0 0 0  corrupted data
c1t0d3p0  ONLINE   0 0 0

errors: No known data errors

However, it's when I pop the disk back in that everything goes south. If I run 
a zpool scrub at this point, the command appears to just hang.

Running zpool status again shows the scrub will finish in 2 minutes, but it 
never does. You can see it's been running for 33 minutes already, and there's 
no data in the pool.

bleon...@opensolaris:/r5pool# zpool status r5pool
  pool: r5pool
 state: ONLINE
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-HC
 scrub: scrub in progress for 0h33m, 92.41% done, 0h2m to go
config:

NAME  STATE READ WRITE CKSUM
r5poolONLINE   0 0 0
  raidz1  ONLINE   0 0 0
c1t0d0p0  ONLINE   0 0 0
c1t0d1p0  ONLINE   0 0 0
c1t0d2p0  ONLINE   0 0 0
c1t0d3p0  ONLINE   0 0 0

errors: 24 data errors, use '-v' for a list

zpool scrub -s r5pool doesn't have any effect.

I can't even kill the scrub process. Even a reboot command at this point will 
hang the machine, so I have to hard power-cycle the machine to get everything 
back to normal. There must be a more elegant solution, right?
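
For anyone hitting a similar hang, here is a rough sequence worth trying before
a hard power cycle. The pool name matches the post above; none of this is
guaranteed to work once the pool's I/O is truly wedged:

```shell
# Attempt a graceful recovery of a pool stuck after a device was re-inserted.
# "r5pool" is the pool from the post above; adjust names for your system.
zpool clear r5pool        # clear persistent error counters once devices are back
zpool scrub -s r5pool     # ask the kernel to cancel the in-flight scrub
fmadm faulty              # check FMA for device faults that may need clearing
cfgadm -al                # verify the USB devices were actually re-enumerated
# Last resort before a reboot: drop and re-import the pool
zpool export r5pool && zpool import r5pool
```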
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Encryption?

2010-07-12 Thread Michael Johnson
Nikola M wrote:
>Freddie Cash wrote:
>> You definitely want to do the ZFS bits from within FreeBSD.
>Why not using ZFS in OpenSolaris? At least it has most stable/tested
>implementation and also the newest one if needed?


I'd love to use OpenSolaris for exactly those reasons, but I'm wary of using an 
operating system that may not continue to be updated/maintained.  If 
OpenSolaris 
had continued to be regularly released after Oracle bought Sun I'd be choosing 
it.  As it is, I don't want to be pessimistic, but the doubt about 
OpenSolaris's 
future is enough to make me choose FreeBSD instead.  (I'm sure that such 
sentiments won't make me popular here, but so far Oracle has been frustratingly 
silent on their plans for OpenSolaris.)  At the very least, if FreeBSD doesn't 
do what I want I can switch the system disk to OpenSolaris and keep using the 
same pool.  (Right?)

Going back to my original question: does anyone know of any problems that could 
be caused by using raidz on top of encrypted drives?  If there were a physical 
read error, which would get amplified by the encryption layer (if I'm 
understanding full-disk encryption correctly, which I may not be), would ZFS 
still be able to recover?


  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-12 Thread Bob Friesenhahn

On Mon, 12 Jul 2010, Edward Ned Harvey wrote:


Precisely.

A private license, with support and indemnification from Sun, would 
shield Apple from any lawsuit from Netapp.


This sort of statement illustrates a lack of knowledge of how 
indemnification and patents work.  The patent holder is not compelled 
in any way to offer a license for use of the patent.  Without a patent 
license, shipping products can be stopped dead in their tracks.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SATA 6G controller for OSOL

2010-07-12 Thread Matt Urbanowski
Well, it is good to hear that there likely isn't a patent problem with the
SATA functionality of the card.  Hopefully the filed bug will be addressed,
and support added to the AHCI driver.

On Mon, Jul 12, 2010 at 2:43 AM, Vladimir Kotal wrote:

> Brandon High wrote:
>
>  On Fri, Jul 9, 2010 at 2:40 AM, Vladimir Kotal <vladimir.ko...@sun.com> wrote:
>>
>>Could you be more specific about the problems with 88SE9123,
>>especially with SATA ? I am in the process of setting up a system
>>with AD2SA6GPX1 HBA based on this chipset (at least according to the
>>product pages [*]).
>>
>>
>>  http://lmgtfy.com/?q=marvell+9123+problems
>>
>> The problems seem to be mostly with the PATA controller that's built in.
>> Regardless, Marvell no longer offers the 9123. Any vendor offering cards
>> based on it is probably using chips bought as surplus or for recycling.
>>
>
> I spent some time going through the past news about various motherboard
> vendors delaying new products because of the PATA issue. According to the
> Marvell PR there seems to be no issue with the SATA side of the chip. Other
> than that (and the possible PCIe x1 scaling issue with SATA III) there
> should be no problem with the card itself.
>
> However, the card does not work in OpenSolaris yet because of:
>  http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6967746
>
>
> v.
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 
Matt Urbanowski
Graduate Student
5-51 Medical Sciences Building
Dept. Of Cell Biology
University of Alberta
Edmonton, Alberta, Canada
T6G 2H7
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-12 Thread Erik Trimble

On 7/12/2010 9:09 AM, Garrett D'Amore wrote:

And, the next release of Solaris (whenever it comes out) is supposed to
make far more use of zfs for things like its packaging system (upgrades
using snapshots, etc.) and zones.  Indeed, it's possible (I've not
checked in a long time) that S10 makes use of snapshots for live upgrade if
root is zfs.

   
Solaris 10 LiveUpgrade does indeed currently use ZFS snapshots. It has 
for at least the last couple of Update releases (I want to say it 
appeared in Update 6, but I can't remember exactly). I'd have to look, 
but I don't think ZFS is *currently* used for the zone scripts, though 
there's no barrier for it to be used with (or inside) zones.



ZFS is a key strategic component of Solaris going forward.  Having to
abandon it would be a heavy blow -- quite possibly (IMO) fatal -- at
least to its future with Oracle.

- Garrett
   
Losing ZFS would indeed be disastrous, as it would leave Solaris with 
only the Veritas File System (VxFS) as a semi-modern filesystem, and a 
non-native FS at that (i.e. VxFS is a 3rd-party for-pay FS, which 
severely inhibits its uptake). UFS is just way too old to be competitive 
these days.



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-12 Thread Jason King
On Mon, Jul 12, 2010 at 11:09 AM, Garrett D'Amore  wrote:
> On Mon, 2010-07-12 at 17:05 +0100, Andrew Gabriel wrote:
>> Linder, Doug wrote:
>> > Out of sheer curiosity - and I'm not disagreeing with you, just wondering 
>> > - how does ZFS make money for Oracle when they don't charge for it?  Do 
>> > you think it's such an important feature that it's a big factor in 
>> > customers picking Solaris over other platforms?
>> >
>>
>> Yes, it is one of many significant factors in customers choosing Solaris
>> over other OS's.
>> Having chosen Solaris, customers then tend to buy Sun/Oracle systems to
>> run it on.
>>
>> Of course, there are the 7000 series products too, which are heavily
>> based on the capabilities of ZFS, amongst other Solaris features.
>>
>
> And, the next release of Solaris (whenever it comes out) is supposed to
> make far more use of zfs for things like its packaging system (upgrades
> using snapshots, etc.) and zones.  Indeed, its possible (I've not
> checked in a long time) that S10 makes of snapshots for live upgrade if
> root is zfs.

It does.

>
> ZFS is a key strategic component of Solaris going forward.  Having to
> abandon it would be a heavy blow -- quite possibly (IMO) fatal -- at
> least to its future with Oracle.
>
>        - Garrett
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-12 Thread Garrett D'Amore
On Mon, 2010-07-12 at 17:05 +0100, Andrew Gabriel wrote:
> Linder, Doug wrote:
> > Out of sheer curiosity - and I'm not disagreeing with you, just wondering - 
> > how does ZFS make money for Oracle when they don't charge for it?  Do you 
> > think it's such an important feature that it's a big factor in customers 
> > picking Solaris over other platforms?
> >   
> 
> Yes, it is one of many significant factors in customers choosing Solaris 
> over other OS's.
> Having chosen Solaris, customers then tend to buy Sun/Oracle systems to 
> run it on.
> 
> Of course, there are the 7000 series products too, which are heavily 
> based on the capabilities of ZFS, amongst other Solaris features.
> 

And, the next release of Solaris (whenever it comes out) is supposed to
make far more use of zfs for things like its packaging system (upgrades
using snapshots, etc.) and zones.  Indeed, it's possible (I've not
checked in a long time) that S10 makes use of snapshots for live upgrade if
root is zfs.

ZFS is a key strategic component of Solaris going forward.  Having to
abandon it would be a heavy blow -- quite possibly (IMO) fatal -- at
least to its future with Oracle.

- Garrett


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-12 Thread Andrew Gabriel

Linder, Doug wrote:

Out of sheer curiosity - and I'm not disagreeing with you, just wondering - how 
does ZFS make money for Oracle when they don't charge for it?  Do you think 
it's such an important feature that it's a big factor in customers picking 
Solaris over other platforms?
  


Yes, it is one of many significant factors in customers choosing Solaris 
over other OS's.
Having chosen Solaris, customers then tend to buy Sun/Oracle systems to 
run it on.


Of course, there are the 7000 series products too, which are heavily 
based on the capabilities of ZFS, amongst other Solaris features.


--
Andrew Gabriel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-12 Thread Erik Trimble

On 7/12/2010 8:49 AM, Linder, Doug wrote:

Erik Trimble wrote:

   

it does look like they'll win, I would bet huge chunks of money that
Oracle cross-licenses the patents or pays for a license, rather than
kill ZFS (it simply makes too much money for Oracle to abandon).
 

Out of sheer curiosity - and I'm not disagreeing with you, just wondering - how 
does ZFS make money for Oracle when they don't charge for it?  Do you think 
it's such an important feature that it's a big factor in customers picking 
Solaris over other platforms?
   
It's a core part of the Storage 7000-series appliances.  They would be 
significantly less appealing without ZFS.


And, yes, it *is* a huge selling point for Solaris.  Solaris/ZFS is a 
decisive factor in much of Oracle's storage server sales.


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-12 Thread Linder, Doug
Erik Trimble wrote:

> it does look like they'll win, I would bet huge chunks of money that
> Oracle cross-licenses the patents or pays for a license, rather than
> kill ZFS (it simply makes too much money for Oracle to abandon).

Out of sheer curiosity - and I'm not disagreeing with you, just wondering - how 
does ZFS make money for Oracle when they don't charge for it?  Do you think 
it's such an important feature that it's a big factor in customers picking 
Solaris over other platforms?
--
Learn more about Merchant Link at www.merchantlink.com.

THIS MESSAGE IS CONFIDENTIAL.  This e-mail message and any attachments are 
proprietary and confidential information intended only for the use of the 
recipient(s) named above.  If you are not the intended recipient, you may not 
print, distribute, or copy this message or any attachments.  If you have 
received this communication in error, please notify the sender by return e-mail 
and delete this message and any attachments from your computer.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-12 Thread Erik Trimble

On 7/12/2010 8:13 AM, David Magda wrote:

On Mon, July 12, 2010 10:03, Tim Cook wrote:

   

Everyone's SNAPSHOTS are copy on write BESIDES ZFS and WAFL's.   The
filesystem itself is copy-on-write for NetApp/Oracle, which is why there
is no performance degradation when you take them.

Per Microsoft:
When a change to the original volume occurs, but before it is written to
disk, the block about to be modified is read and then written to a
“differences area”, which preserves a copy of the data block before it is
overwritten with the change.

That is exactly how pretty much everyone else takes snapshots in the
industry, and exactly why nobody can keep more than a handful on disk at
any one time, and sometimes not even that for data that has heavy change
rates.
 

The nice thing about VSS is that they can be requested by applications.
Though ZFS is ACID, and you can design an application to have ACID writes
to disk, linking the two can be tricky. And not all applications are ACID
(image editors, word processors, etc.).
   
ZFS is NOT automatically ACID. There is no guarantee of commits for async 
write operations. You would have to use synchronous writes to guarantee 
commits. And, furthermore, I think that there is a strong probability 
that ZFS won't pass other aspects of ACID. Despite what certain folks 
have been saying for a while (*cough* Oracle *cough* Microsoft *cough*), 
the filesystem is NOT a relational database. They have very distinctly 
different design criteria.


You can also easily have applications request a ZFS snapshot, though not 
specifically through an API right now.




It'd be handy to have a mechanism where applications could register for
snapshot notifications. When one is about to happen, they could be told
about it and do what they need to do. Once all the applications have
acknowledged the snapshot alert--and/or after a pre-set timeout--the file
system would create the snapshot, and then notify the applications that
it's done.
   
Why would an application need to be notified? I think you're under the 
misconception that something happens when a ZFS snapshot is taken. 
NOTHING happens when a snapshot is taken (OK, well, there is the 
snapshot reference name created). Blocks aren't moved around, we don't 
copy anything, etc. Applications have no need to "do anything" before a 
snapshot is taken.
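
As a concrete sketch of that point (the dataset name is illustrative):

```shell
# A ZFS snapshot is a constant-time metadata operation: no blocks are
# copied and running applications are not involved. "tank/data" is a
# placeholder dataset name.
zfs snapshot tank/data@before-change      # returns almost immediately
zfs list -t snapshot -r tank/data         # the new snapshot's USED starts near zero
# Space gets attributed to the snapshot only later, as live data diverges
# from the snapshotted blocks (copy-on-write at the pool level).
```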



Given that snapshots will probably be more popular in the future (WAFL
NFS/LUNs, ZFS, Btrfs, VMware disk image snapshots, etc.), an agreed upon
consensus would be handy (D-Bus? POSIX?).

   



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-12 Thread David Magda
On Mon, July 12, 2010 10:03, Tim Cook wrote:

> Everyone's SNAPSHOTS are copy on write BESIDES ZFS and WAFL's.   The
> filesystem itself is copy-on-write for NetApp/Oracle, which is why there
> is no performance degradation when you take them.
>
> Per Microsoft:
> When a change to the original volume occurs, but before it is written to
> disk, the block about to be modified is read and then written to a
> “differences area”, which preserves a copy of the data block before it is
> overwritten with the change.
>
> That is exactly how pretty much everyone else takes snapshots in the
> industry, and exactly why nobody can keep more than a handful on disk at
> any one time, and sometimes not even that for data that has heavy change
> rates.

The nice thing about VSS is that they can be requested by applications.
Though ZFS is ACID, and you can design an application to have ACID writes
to disk, linking the two can be tricky. And not all applications are ACID
(image editors, word processors, etc.).

It'd be handy to have a mechanism where applications could register for
snapshot notifications. When one is about to happen, they could be told
about it and do what they need to do. Once all the applications have
acknowledged the snapshot alert--and/or after a pre-set timeout--the file
system would create the snapshot, and then notify the applications that
it's done.

Given that snapshots will probably be more popular in the future (WAFL
NFS/LUNs, ZFS, Btrfs, VMware disk image snapshots, etc.), an agreed upon
consensus would be handy (D-Bus? POSIX?).


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-12 Thread Tim Cook
On Mon, Jul 12, 2010 at 8:32 AM, Edward Ned Harvey
wrote:

> > From: Tim Cook [mailto:t...@cook.ms]
> >
> > Because VSS isn't doing anything remotely close to what WAFL is doing
> > when it takes snapshots.
>
> It may not do what you want it to do, but it's still copy on write, as
> evidenced by the fact that it takes instantaneous snapshots, and snapshots
> don't get overwritten when new data is written.
>
> I wouldn't call that "not even remotely close."  It's different, but
> definitely the same ballpark.
>
>

Everyone's SNAPSHOTS are copy on write BESIDES ZFS and WAFL's.   The
filesystem itself is copy-on-write for NetApp/Oracle, which is why there is
no performance degradation when you take them.

Per Microsoft:
When a change to the original volume occurs, but before it is written to
disk, the block about to be modified is read and then written to a
“differences area”, which preserves a copy of the data block before it is
overwritten with the change.

That is exactly how pretty much everyone else takes snapshots in the
industry, and exactly why nobody can keep more than a handful on disk at any
one time, and sometimes not even that for data that has heavy change rates.

It's not in the same ballpark, it's a completely different implementation.
 It's about as similar as a gas and diesel engine.  They might both go in
cars, they might both move the car.  They aren't remotely close to each
other from a design perspective.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-12 Thread Edward Ned Harvey
> From: Tim Cook [mailto:t...@cook.ms]
> 
> Because VSS isn't doing anything remotely close to what WAFL is doing
> when it takes snapshots.

It may not do what you want it to do, but it's still copy on write, as
evidenced by the fact that it takes instantaneous snapshots, and snapshots
don't get overwritten when new data is written.  

I wouldn't call that "not even remotely close."  It's different, but
definitely the same ballpark.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-12 Thread Edward Ned Harvey
> From: David Magda [mailto:dma...@ee.ryerson.ca]
> 
> On Jul 10, 2010, at 14:20, Edward Ned Harvey wrote:
> 
> >> A few companies have already backed out of zfs
> >> as they cannot afford to go through a lawsuit.
> >
> > Or, in the case of Apple, who could definitely afford a lawsuit, but
> > choose
> > to avoid it anyway.
> 
> This was covered already:
> 
> http://mail.opensolaris.org/pipermail/zfs-discuss/2009-
> October/033125.html

Precisely.

A private license, with support and indemnification from Sun, would shield
Apple from any lawsuit from Netapp.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Encryption?

2010-07-12 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Freddie Cash
> 
> ZFS-FUSE is horribly unstable, 

That may be true.  I couldn't say.


> although that's more an indication of
> the stability of the storage stack on Linux.  

But this, I take issue with.  Ext3 isn't unstable.  It may not be as awesome
as ZFS, but it isn't unstable.  And the same goes for the rest of the storage
stack and filesystems in Linux.

Years ago, when I first started using Gimp, it was stable in Linux and not
stable in Windows.  I went to gimp.org, and looked at the FAQ's, and FAQ #1
said "Gimp is unstable on windows."  And the reply was "Everything is
unstable on Windows."  Which was simply false.  It was a developer (or group
of developers) expressing a biased point of view, and blaming a platform
that they were not fond of.

In later versions of gimp, even on the same version of Windows, gimp evolved
into something that was stable.

To simply blame gimp, or simply blame windows, both positions are not
accurate.  The truth is, gimp was stable on linux, and other apps were
stable on windows.  The truth is, there was something unstable in the
interaction between the two.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Encryption?

2010-07-12 Thread Andriy Gapon
on 11/07/2010 14:21 Roy Sigurd Karlsbakk said the following:
> 
> 
> 
> I'm planning on running FreeBSD in VirtualBox (with a Linux host)
> and giving it raw disk access to four drives, which I plan to
> configure as a raidz2 volume.
> 
> Wouldn't it be better or just as good to use fuse-zfs for such a
> configuration? I/O from VirtualBox isn't really very good, but then, I
> haven't tested the linux/fbsd configuration...

Hmm, an unexpected question IMHO - wouldn't it be better to just install FreeBSD on
the hardware? :-)
If the original poster is using Linux as a host OS, then he probably has some
very good reason to do that.  But performance-wise etc., directly using
FreeBSD should, of course, win over fuse-zfs.  Right?

[Installing and maintaining one OS instead of two is the first thing that comes
to mind]

-- 
Andriy Gapon
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Zfs pool / iscsi lun with windows initiator.

2010-07-12 Thread unbounde
Hi friends,

I have a problem. I have a file server which connects to large volumes with an 
iSCSI initiator. The problem is that on the ZFS side the pool shows no available 
space, but I am 100% sure there is at least 5 TB of space. Because the ZFS pool 
shows 0 available, all iSCSI connections get lost, all the sharing setup is gone, 
and a restart is needed to fix it. Until today I kept deleting snapshots to bring 
it back alive and working, but that is not working anymore. Why does the ZFS pool 
show no available space, even when there is space?

please help
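
Not a fix, but a hedged sketch of how to see where the space actually went.
"tank" and "tank/iscsivol" are placeholder names for the real pool and zvol:

```shell
# Break down used vs. available space per dataset, including snapshot usage
zfs list -o space -r tank
# List snapshots with the space each one uniquely holds
zfs list -t snapshot -r tank -o name,used,referenced
# Pool-level capacity view
zpool list tank
# zvols commonly pin space via (ref)reservations even when "empty":
zfs get volsize,reservation,refreservation,usedbysnapshots tank/iscsivol
```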
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SATA 6G controller for OSOL

2010-07-12 Thread Vladimir Kotal

Brandon High wrote:
On Fri, Jul 9, 2010 at 2:40 AM, Vladimir Kotal wrote:


Could you be more specific about the problems with 88SE9123,
especially with SATA ? I am in the process of setting up a system
with AD2SA6GPX1 HBA based on this chipset (at least according to the
product pages [*]).


 
http://lmgtfy.com/?q=marvell+9123+problems


The problems seem to be mostly with the PATA controller that's built in. 
Regardless, Marvell no longer offers the 9123. Any vendor offering cards 
based on it is probably using chips bought as surplus or for recycling.


I spent some time going through the past news about various motherboard 
vendors delaying new products because of the PATA issue. According to 
the Marvell PR there seems to be no issue with the SATA side of the 
chip. Other than that (and the possible PCIe x1 scaling issue with SATA 
III) there should be no problem with the card itself.


However, the card does not work in OpenSolaris yet because of:
  http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6967746


v.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Encryption?

2010-07-12 Thread Nikola M
Freddie Cash wrote:
> You definitely want to do the ZFS bits from within FreeBSD.
Why not use ZFS on OpenSolaris? At least it has the most stable/tested
implementation, and also the newest one if needed?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss