[zfs-discuss] Is It a SATA problem or there is something else !!!!!!

2008-10-06 Thread Anas Ayad
Hi there,
I posted this problem to the Xen discussion list before, under a different title, because I thought it had something to do with memory. Please read that thread first:
http://www.opensolaris.org/jive/thread.jspa?threadID=76870&tstart=0

I tried something yesterday: I borrowed my friend's PC (Intel dual core 1.6 GHz, 4 GB RAM, IDE hard disk), installed the same release I use on my machine, and repeated the steps that cause my computer to hang. It didn't hang; everything went swimmingly. So I'm starting to suspect my hard disk. I'm using a 500 GB SATA drive from Seagate, while my friend also uses a Seagate drive, but IDE. Can anyone help pinpoint the real cause of this problem?

I appreciate your help, and thanks in advance.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS with Fusion-IO?

2008-10-06 Thread Ross
Just a thought, will we be able to split the ioDrive into slices and use it 
simultaneously as a ZIL and slog device?  5GB of write cache and 75GB of read 
cache sounds to me like a nice way to use the 80GB model.
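If that ends up meaning one slice as a slog and the rest as an L2ARC cache device, I'd guess the commands would look roughly like this (pool name and device paths are made up, and it assumes the card has already been carved into two slices with format):

# ~5GB slice as a separate intent log, the rest as a read cache (L2ARC)
zpool add tank log /dev/dsk/c3t0d0s0
zpool add tank cache /dev/dsk/c3t0d0s1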
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-06 Thread Ross
Very interesting idea, thanks for sharing it.

InfiniBand would definitely be worth looking at for performance, although I think you'd need iSER to get the benefits, and that might still be a little new: http://www.opensolaris.org/os/project/iser/Release-notes/.

It's also worth bearing in mind that you can have multiple mirrors.  I don't 
know what effect that will have on the performance, but it's an easy way to 
boost the reliability even further.  I think this idea configured on a set of 
2-3 servers, with separate UPS' for each, and a script that can export the pool 
and save the ramdrive when the power fails, is potentially a very neat little 
system.
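
Roughly what I'm picturing for that script (pool name, ramdisk name and save path are all invented; the dd step would run on whichever box actually hosts the ramdisk):

#!/bin/sh
# Rough sketch of a UPS power-fail hook.
zpool export tank                                       # quiesce the pool so the slog contents are consistent
dd if=/dev/rramdisk/slog0 of=/save/slog0.img bs=1024k   # copy the ramdisk out before the batteries run flat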
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is It a SATA problem or there is something else !!!!!!

2008-10-06 Thread Sanjeev
Anas,

Are both (IDE and SATA) disks plugged in?
I had similar problems where the machine would just drop into GRUB and never
boot up despite being given the right GRUB commands.
I finally disconnected the IDE disk and things are fine now.

Thanks and regards,
Sanjeev.

On Mon, Oct 06, 2008 at 12:03:08AM -0700, Anas Ayad wrote:
 Hi there,
 I posted this problem to the Xen discussion list before, under a different title, because I thought it had something to do with memory. Please read that thread first:
 http://www.opensolaris.org/jive/thread.jspa?threadID=76870&tstart=0
 
 I tried something yesterday: I borrowed my friend's PC (Intel dual core 1.6 GHz, 4 GB RAM, IDE hard disk), installed the same release I use on my machine, and repeated the steps that cause my computer to hang. It didn't hang; everything went swimmingly. So I'm starting to suspect my hard disk. I'm using a 500 GB SATA drive from Seagate, while my friend also uses a Seagate drive, but IDE. Can anyone help pinpoint the real cause of this problem?
 
 I appreciate your help, and thanks in advance.
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-06 Thread Darren J Moffat
Fajar A. Nugraha wrote:
 On Fri, Oct 3, 2008 at 10:37 PM, Vasile Dumitrescu
 [EMAIL PROTECTED] wrote:
 
 VMWare 6.0.4 running on Debian unstable,
 Linux bigsrv 2.6.26-1-amd64 #1 SMP Wed Sep 24 13:59:41 UTC 2008 x86_64 
 GNU/Linux

 Solaris is vanilla snv_90 installed with no GUI.
 
 
 in summary: physical disks, assigned 100% to the VM
 
 That's weird. I thought one of the points of using physical disks
 instead of files was to avoid problems caused by caching on the host/dom0?

The data still flows through the host/dom0 device drivers and is thus at 
the mercy of the commands they issue to the physical devices.

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [Fwd: Re: ZSF Solaris]

2008-10-06 Thread Pramod Batni



 Original Message 
Subject:Re: [zfs-discuss] ZSF Solaris
Date:   Wed, 01 Oct 2008 07:21:56 +0200
From:   Jens Elkner [EMAIL PROTECTED]
To: zfs-discuss@opensolaris.org
References: 	[EMAIL PROTECTED] 
[EMAIL PROTECTED] 
[EMAIL PROTECTED] 
[EMAIL PROTECTED]




On Tue, Sep 30, 2008 at 09:44:21PM -0500, Al Hopper wrote:
 
 This behavior is common to tmpfs, UFS and I tested it on early ZFS

 releases.  I have no idea why - I have not made the time to figure it
 out.  What I have observed is that all operations on your (victim)
 test directory will max out (100% utilization) one CPU or one CPU core
 - and all directory operations become single-threaded and limited by
 the performance of one CPU (or core).

And sometimes it's just a little bug, e.g. with a recent version of Solaris
(i.e. >= snv_95 || >= S10U5) on UFS:

SunOS graf 5.10 Generic_137112-07 i86pc i386 i86pc (X4600, S10U5)
=
admin.graf /var/tmp   time sh -c 'mkfile 2g xx ; sync'
0.05u 9.78s 0:29.42 33.4%
admin.graf /var/tmp  time sh -c 'mkfile 2g xx ; sync'
0.05u 293.37s 5:13.67 93.5%
admin.graf /var/tmp  rm xx
admin.graf /var/tmp  time sh -c 'mkfile 2g xx ; sync'
0.05u 9.92s 0:31.75 31.4%
admin.graf /var/tmp  time sh -c 'mkfile 2g xx ; sync'
0.05u 305.15s 5:28.67 92.8%
admin.graf /var/tmp  time dd if=/dev/zero of=xx bs=1k count=2048
2048+0 records in
2048+0 records out
0.00u 298.40s 4:58.46 99.9%
admin.graf /var/tmp  time sh -c 'mkfile 2g xx ; sync'
0.05u 394.06s 6:52.79 95.4%

SunOS kaiser 5.10 Generic_137111-07 sun4u sparc SUNW,Sun-Fire-V440 (S10, U5)
=
admin.kaiser /var/tmp  time mkfile 1g xx
0.14u 5.24s 0:26.72 20.1%
admin.kaiser /var/tmp  time mkfile 1g xx
0.13u 64.23s 1:25.67 75.1%
admin.kaiser /var/tmp  time mkfile 1g xx
0.13u 68.36s 1:30.12 75.9%
admin.kaiser /var/tmp  rm xx
admin.kaiser /var/tmp  time mkfile 1g xx
0.14u 5.79s 0:29.93 19.8%
admin.kaiser /var/tmp  time mkfile 1g xx
0.13u 66.37s 1:28.06 75.5%

SunOS q 5.11 snv_98 i86pc i386 i86pc (U40, S11b98)
=
elkner.q /var/tmp  time mkfile 2g xx
0.05u 3.63s 0:42.91 8.5%
elkner.q /var/tmp  time mkfile 2g xx
0.04u 315.15s 5:54.12 89.0%

SunOS dax 5.11 snv_79a i86pc i386 i86pc (U40, S11b79)
=
elkner.dax /var/tmp  time mkfile 2g xx
0.05u 3.09s 0:43.09 7.2%
elkner.dax /var/tmp  time mkfile 2g xx
0.05u 4.95s 0:43.62 11.4%

  
The reason why the (implicit) truncation could be taking so long might be:

    6723423 UFS slow following large file deletion with fix for 6513858 installed
    http://monaco.sfbay/detail.jsf?cr=6723423

To overcome this problem for S10, the offending patch 127866-03 can be removed.

Pramod




Regards,
jel.
--
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-06 Thread Moore, Joe
Nicolas Williams wrote
 There have been threads about adding a feature to support slow mirror
 devices that don't stay synced synchronously.  At least IIRC.  That
 would help.  But then, if the pool is busy writing then your slow ZIL
 mirrors would generally be out of sync, thus being of no help in the
 event of a power failure given fast slog devices that don't
 survive power
 failure.

I wonder if an AVS-replicated storage device on the backends would be 
appropriate?

write -> ZFS-mirrored slog -> ramdisk -AVS-> physical disk
                           \
                            +-iscsi-> ramdisk -AVS-> physical disk

You'd get the continuous replication of the ramdisk to the physical drive (and 
perhaps automagic recovery on reboot) but not pay the synchronous-write-to-
remote-physical-disk penalty.


 Also, using remote devices for a ZIL may defeat the purpose of fast
 ZILs, even if the actual devices are fast, because what really matters
 here is latency, and the farther the device, the higher the latency.

A 0.5 ms RTT on an Ethernet link to the iSCSI disk may be faster than a 9 ms 
latency on physical media.

There was a time when it was better to place workstations' swap files on the 
far side of a 100Mbps ethernet link rather than using the local spinning rust.  
Ah, the good old days...

--Joe
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SATA/SAS (Re: Quantifying ZFS reliability)

2008-10-06 Thread Richard Elling
Anton B. Rang wrote:
 Erik:
   
 (2)  a SAS drive has better throughput and IOPs than a SATA drive
   

 Richard:
   
 Disagree.  We proved that the transport layer protocol has no bearing
 on throughput or iops.  Several vendors offer drives which are
 identical in all respects except for transport layer protocol: SAS or
 SATA.  You can choose either transport layer protocol and the
 performance remains the same.
 

 Reference, please?

 I draw your attention to Seagate's SPC-2 benchmark of the Barracuda ES.2 with 
 SATA and SAS.

   
 http://www.seagate.com/docs/pdf/whitepaper/tp_sas_benefits_to_tier_2_storage.pdf

 Certainly there are a wide variety of workloads and you won't see a benefit 
 everywhere, but there are cases where the SAS protocol provides a significant 
 improvement over SATA.
   

This really is a moot point.  Comparing a single channel SATA disk to a
dual channel SAS disk is disingenuous -- perhaps one could argue that
Seagate should sell a dual-port SATA disk.  Also, SATA drives which are
faster than 15k rpm SAS disks are available or imminent from Intel,
Samsung, and others (Super Talent, STEC, Crucial, et al.).

I think we can all agree that most of the vendors, to date, have been trying
to differentiate their high-margin products to maintain the high margins
(FC is a better example of this than SAS).  But it is not a good habit to
claim one transport is always superior to another when the real comparison
must occur at the device.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import of bootable root pool renders it unbootable

2008-10-06 Thread andrew
 I've upgraded to b98, checked that zpool.cache is not being added to the
 boot archive, and tried to boot from VB by presenting a partition to it.
 It didn't boot.

I got it working by installing a new build of OpenSolaris 2008.11 from scratch 
rather than upgrading, but deleting zpool.cache, deleting both boot archives, 
then doing a bootadm update-archive should work.
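From memory, the steps amount to something like this when the new BE is mounted at /a (paths assume the stock x86 layout):

rm /a/etc/zfs/zpool.cache
rm /a/platform/i86pc/boot_archive /a/platform/i86pc/amd64/boot_archive
bootadm update-archive -R /a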

Cheers

Andrew.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import of bootable root pool renders it unbootable

2008-10-06 Thread Jürgen Keil
 Cannot mount root on /[EMAIL PROTECTED],0/pci103c,[EMAIL PROTECTED],2/[EMAIL 
 PROTECTED],0:a fstype zfs

Is that physical device path correct for your new  system?

Or is this the physical device path (stored on-disk in the zpool label)
from some other system?   In this case you may be able to work around
the problem by passing a -B bootpath=...  option to the kernel

e.g. something like this:

kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,bootpath=/[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],1/[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a


You can find out the correct physical device path string
for the zfs root disk by booting the system from the optical 
installation media, and running the format utility.

OTOH, if you already have booted from the optical installation
media, it's easiest to just import the root zpool from the
installation system, because that'll update the physical device
path in the zpool's label on disk (and it clears the hostid stored in
the zpool label - another problem that could prevent mounting
the zfs root).
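
Roughly, from the install miniroot (the altroot and pool name are whatever applies on your system):

zpool import -f -R /a rpool
zpool export rpool    # export again before rebooting so the pool isn't left marked as in use by the miniroot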
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool imports are slow when importing multiple storage pools

2008-10-06 Thread Luke Schwab
Hi,
I am having a problem running zpool imports when we import multiple storage 
pools at one time. Below are the details of the setup:

- We are using a SAN with Sun 6140 storage arrays. 
- Dual port HBA on each server is Qlogic running the QLC driver with Sun 
mpxio(SFCSM) running. 
- We have 400+ luns on the SAN. We can't split them up because any of the luns 
may need to fail over between the different servers. We are running a rather 
large farm where all servers need to see all the luns in case of failover.
- Running the Solaris 10 U5 kernel.

Below are the times I see when importing pools from the SAN storage:
1 pool - 30 seconds
2 pools - 1 minute for both to finish
3 pools - 1 min 30 sec for all 3 to finish
4 pools - 2 minutes for all 4 to finish

When I run more than one zpool import in parallel, I can see that all the zpool 
imports are queued up. The more pools I try to import, the longer the pools 
take to import. For example, when importing 2 pools, the first pool takes 1 
minute but then just a few seconds later the 2nd pool finishes its import. When 
I import 3 pools, the first pool takes 1.5 minutes and then the other two pools 
finish just after that.

The problem we are seeing is that we need to fail over up to 32 pools at one 
time on a server, and the imports end up timing out after 5-10 minutes because 
we are trying to import too many pools at once.

Is this a design choice in the ZFS code or a bug? Is there anything I can do to 
improve my import times? We do have the same setup on one of our SANs with 
only 10-20 luns instead of 400+, and there the imports take only 1-3 seconds. 
My guess is that the large number of luns is affecting imports. But our virtual 
farm design is broken if we can't import at least 30-40 pools in under a minute 
on a given server.

Any thoughts or questions would be great.

Thanks,
Luke
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool imports are slow when importing multiple storage pools

2008-10-06 Thread Tomas Ögren
On 06 October, 2008 - Luke Schwab sent me these 2,0K bytes:

 Is this a design choice with ZFS coding or a bug? Is there anything I
 can do to increase my import times? We do have the same setup on one
 of our SANs with only 10-20 luns instead of 400+ and the imports take
 only 1-3 seconds. My guess here is that the large number of luns is
 affecting imports. But our virtual farm design is broken if we can't
 import at least 30-40 pools in under a minute on a given server.

I believe it will scan all available devices (LUNs) for pool
information. If you only want to scan a subset, you can for instance
make a new directory somewhere, put symlinks there to the real
devices, and then run 'zpool import -d /that/directory' to only search
there for devices to consider.
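
A minimal sketch (directory, pool and device names are placeholders):

mkdir /var/tmp/pool1-devs
ln -s /dev/dsk/c4t0d0s0 /var/tmp/pool1-devs/    # repeat for each LUN belonging to the pool
zpool import -d /var/tmp/pool1-devs pool1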

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool imports are slow when importing multiple storage pools

2008-10-06 Thread Richard Elling
Do you have a lot of snapshots?  If so, CR 6612830 could be contributing.
Alas, many such fixes are not yet available in S10.
 -- richard

Luke Schwab wrote:
 Hi,
 I am having a problem running zpool imports when we import multiple storage 
 pools at one time. Below are the details of the setup:

 - We are using a SAN with Sun 6140 storage arrays. 
 - Dual port HBA on each server is Qlogic running the QLC driver with Sun 
 mpxio(SFCSM) running. 
 - We have 400+ luns on the SAN. We can't split them up because any of the luns 
 may need to fail over between the different servers. We are running a rather 
 large farm where all servers need to see all the luns in case of failover.
 - Running the Solaris 10 U5 kernel.

 Below are the times I see when importing pools from the SAN storage:
 1 pool - 30 seconds
 2 pools - 1 minute for both to finish
 3 pools - 1 min 30 sec for all 3 to finish
 4 pools - 2 minutes for all 4 to finish

 When I run more than one zpool import in parallel, I can see that all zpool 
 imports are queued up. The more pools I try to import, the longer the pools 
 take to import. For example, when importing 2 pools, the first pool takes 1 
 minute but then just a few seconds later the 2nd pool finishes its import. 
 When I import 3 pools, the first pool takes 1.5 minutes and then the other 
 two pools finish just after that.

 The problem we are seeing is that we need to fail over up to 32 pools at one 
 time on a server, and the imports end up timing out after 5-10 minutes because 
 we are trying to import too many pools at once.

 Is this a design choice with ZFS coding or a bug? Is there anything I can do 
 to increase my import times? We do have the same setup on one of our SANs 
 with only 10-20 luns instead of 400+ and the imports take only 1-3 seconds. 
 My guess here is that the large number of luns is affecting imports. But our 
 virtual farm design is broken if we can't import at least 30-40 pools in under 
 a minute on a given server.

 Any thoughts or questions would be great.

 Thanks,
 Luke
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS with Fusion-IO?

2008-10-06 Thread Ross
D'oh, meant ZIL / slog and L2ARC device.  Must have posted that before my early 
morning cuppa!
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Comments on green-bytes

2008-10-06 Thread C. Bergström
Hi all

In another thread a short while ago, a cool little movie with some 
gumballs was all we got to learn about green-bytes.  The product has 
launched, and maybe some of the people who follow this list have had a 
chance to take a look at the code/product more closely?  Wstuart asked 
how they were going to handle section 3.1 of the CDDL, but nobody from 
green-bytes even made an effort to clarify this.  I called, since I'm 
consulting with companies who are potential customers, but are any of 
the developers even subscribed to this list?

After a call and exchanging a couple of emails, I'm left with the impression 
that the source will *not* be released publicly or to customers.  I'm not the 
copyright holder, a legal expert, or even a customer, but can someone 
from Sun or green-bytes comment?  I apologize for being a bit off 
topic, but is this really acceptable to the community/Sun in general?  
Maybe the companies using Solaris and NetApp don't care about source 
code, but then the whole point of opening Solaris is just reduced to 
marketing hype.

In defense of green-bytes: I think they've truly spent some time 
developing an interesting product and want to protect their ideas and 
investment.  I said this on the phone, but in my very humble opinion 
nobody is going to steal their patches.  In a way I'm curious what 
others think before a good company gets a lot of bad PR over an honest 
and small oversight.

Cheers,

Christopher Bergström

+1.206.279.5000


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Root pool mirror wasn't automatically configured during install

2008-10-06 Thread Eric Boutilier
On Fri, 3 Oct 2008, [EMAIL PROTECTED] wrote:
 Eric Boutilier wrote:
 Is the following issue related to (will probably get fixed by) bug 6748133? 
 ...
 
 During a net-install of b96, I modified the name of the root pool,
 overriding the default name, rpool. After the install, the pool was on
 a single device instead of mirrored on two devices, and doing a manual
 zpool attach was required.
 

 Hi Eric,

 Are you saying that you selected two-disks for a mirrored root pool
 during the initial install and because you changed the default
 rpool name, the pool was created with just one disk?

I netinstalled build 96, selected two disks for the root pool mirror,
backspaced over rpool and typed mypool, and still ended up with a 2-disk
mirrored root pool named mypool.

 Are you sure that you selected a two-disk mirror during the initial
 install of build 96?

 See the output below.

 Cindy

 Preparing system for Solaris install

 Configuring disk (c0t0d0)
- Creating Solaris disk label (VTOC)

 Configuring disk (c0t1d0)
- Creating Solaris disk label (VTOC)
- Creating pool mypool
- Creating swap zvol for pool mypool
- Creating dump zvol for pool mypool
 .
 .
 .

 Cleaning devices

 Customizing system devices
- Physical devices (/devices)
- Logical devices (/dev)

 Installing boot information
- Installing boot blocks (c0t1d0s0)
- Installing boot blocks (/dev/rdsk/c0t0d0s0)
- Installing boot blocks (/dev/rdsk/c0t1d0s0)

 # zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
 config:

NAME  STATE READ WRITE CKSUM
mypoolONLINE   0 0 0
  mirror  ONLINE   0 0 0
c0t0d0s0  ONLINE   0 0 0
c0t1d0s0  ONLINE   0 0 0

 errors: No known data errors


Cindy,

Thanks for the reply -- I retraced my steps, and I'm now convinced it
was indeed pilot error. I went back through the install screens (I'm
still getting used to the new screens -- also, we're doing remote
console based installs), and apparently on the one that says to select
multiple disks if you want mirroring, I simply F2'd right past it.  :-)

Then when I went to rectify the situation (make it mirrored by doing a
zpool attach manually), it told me to use -f because the slice belonged
to rpool. I now realize that was probably a vestige of a previous
install, not something that happened behind the scenes during this install.
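
For the archives, the manual fix boils down to something like this (device names taken from the example output above; -f is only needed when the slice still carries a stale label):

zpool attach -f mypool c0t0d0s0 c0t1d0s0
# ...and then put boot blocks on the newly attached half as well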

Thank you!
Eric
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comments on green-bytes

2008-10-06 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 10/06/2008 01:57:10 PM:

 Hi all

 In another thread a short while ago.. A cool little movie with some
 gumballs was all we got to learn about green-bytes.  The product
 launched and maybe some of the people that follow this list have had a
 chance to take a look at the code/product more closely?  Wstuart asked
 how they were going to handle section 3.1 of the CDDL, but nobody from
 green-bytes even made an effort to clarify this.  I called since I'm
 consulting with companies who are potential customers, but are any of
 developers even subscribed to this list?

 After a call and exchanging a couple emails I'm left with the impression
 the source will *not* be released publicly or to customers.  I'm not the
 copyright holder, a legal expert, or even a customer, but can someone
 from Sun or green-bytes make a comment.  I apologize for being a bit off
 topic, but is this really acceptable to the community/Sun in general?
 Maybe the companies using Solaris and NetApp don't care about source
 code, but then the whole point of opening Solaris is just reduced to
 marketing hype.


Yes, this would be interesting.  The CDDL requires them to release code for
any executable version they ship.  Considering they claim to ...start
with ZFS and makes it better, it sounds like they have modified CDDL-covered
code.  Since Sun owns that code, Sun would be the one to rattle the
cage.  Sun? Has anyone had any talks with these guys yet?



 In the defense of green-bytes.. I think they've truly spent some time
 developing an interesting product and want to protect their ideas and
 investment.  I said this on the phone, but in my very humble opinion
 nobody is going to steal their patches.  In a way I'm curious what
 others think before a good company gets a lot of bad PR over an honest
 and small oversight.

If they take open-source code and modify it, and that code requires release
of the derivative code, then there is no stealing involved.  It is kind of
like walking into a car dealership that had a sign reading Free winter tires
with purchase of car and feeling slighted when you can't walk out with just
the tires for free.


-Wade

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comments on green-bytes

2008-10-06 Thread C. Bergström
Matt Aitkenhead wrote:
 I see that you have wasted no time. I'm still determining if you have a 
 sincere interest in working with us or alternatively have an axe to grind. 
 The latter is shining through.

 Regards,
 Matt
   
Hi Matt,

I'd like to make our correspondence in public if you don't mind so my 
intention isn't mistaken.  My point wasn't at all to grind an axe.

1) That's no way to encourage a company which is already scared of open 
source to even think about releasing patches. (Sun's marketing isn't 
stupid.. they did this because it's good for them)
2) I am sincerely interested in your product (as others seem to be as well)

Code review, increased testing and viral marketing are all typically 
good things.  Anyway, hope this clears things up.

Cheers,

./C
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Looking for some hardware answers, maybe someone on this list could help

2008-10-06 Thread mike
I posted a thread here...
http://forums.opensolaris.com/thread.jspa?threadID=596

I am trying to finish building a system, and I kind of need to pick a
working NIC and onboard SATA chipset (video is not a big deal - I can
get a silent PCIe card for that; I already know one which works great).

I need 8 onboard SATA ports. I would prefer an Intel CPU. At least one gigabit
port. That's about it.

I built a list in that thread of all the options I found from the
major manufacturers that Newegg carries, as the pool of possible
chipsets, etc. Any help is appreciated (is anyone actually using any of
these?) - and remember, I'm trying to use Nevada out of the box, not
download specific drivers and tweak all this myself...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comments on green-bytes

2008-10-06 Thread Tim
On Mon, Oct 6, 2008 at 3:00 PM, C. Bergström [EMAIL PROTECTED] wrote:

 Matt Aitkenhead wrote:
  I see that you have wasted no time. I'm still determining if you have a
 sincere interest in working with us or alternatively have an axe to grind.
 The latter is shining through.
 
  Regards,
  Matt
 
 Hi Matt,

 I'd like to make our correspondence in public if you don't mind so my
 intention isn't mistaken.  My point wasn't at all to grind an axe.

 1) That's no way to encourage a company which is already scared of open
 source to even think about releasing patches. (Sun's marketing isn't
 stupid.. they did this because it's good for them)
 2) I am sincerely interested in your product (as others seem to be as well)

 Code review, increased testing and viral marketing are all typically
 good things.  Anyway, hope this clears things up.

 Cheers,

 ./C


ZFS is licensed under the CDDL, and as far as I know does not require
derivative works to be open source.  It's truly free like the BSD license in
that companies can take CDDL code, modify it, and keep the content closed.
They are not forced to share their code.  That's why there are closed
patches that go into mainline Solaris, but are not part of OpenSolaris.

While you may not like it, this isn't the GPL.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool imports are slow when importing multiple storage pools

2008-10-06 Thread Scott Williamson
Speaking of this, is there a list anywhere that details what we can expect
to see for (zfs) updates in S10U6?

On Mon, Oct 6, 2008 at 2:44 PM, Richard Elling [EMAIL PROTECTED] wrote:

 Do you have a lot of snapshots?  If so, CR 6612830 could be contributing.
 Alas, many such fixes are not yet available in S10.
  -- richard


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comments on green-bytes

2008-10-06 Thread Rich Teer
On Mon, 6 Oct 2008, Tim wrote:

 ZFS is licensed under the CDDL, and as far as I know does not require
 derivative works to be open source.  It's truly free like the BSD license in

It doesn't, but changes made to CDDL-licensed files must be released
(under the CDDL).

 that companies can take CDDL code, modify it, and keep the content closed.

Nope.  Others can take CDDL code, develop new code (in other source
files), and keep the new code secret, but they must publish the
source code to any changes they make to CDDL-ed files.

 They are not forced to share their code.  That's why there are closed
 patches that go into mainline Solaris, but are not part of OpenSolaris.

The closed code will be in separate files to those covered by the CDDL.

-- 
Rich Teer, SCSA, SCNA, SCSECA

CEO,
My Online Home Inventory

URLs: http://www.rite-group.com/rich
  http://www.linkedin.com/in/richteer
  http://www.myonlinehomeinventory.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comments on green-bytes

2008-10-06 Thread Wade . Stuart


 On Mon, Oct 6, 2008 at 3:00 PM, C. Bergström [EMAIL PROTECTED]
  wrote:
 Matt Aitkenhead wrote:
  I see that you have wasted no time. I'm still determining if you
 have a sincere interest in working with us or alternatively have an
 axe to grind. The latter is shining through.
 
  Regards,
  Matt
 
 Hi Matt,

 I'd like to make our correspondence in public if you don't mind so my
 intention isn't mistaken.  My point wasn't at all to grind an axe.

 1) That's no way to encourage a company which is already scared of open
 source to even think about releasing patches. (Sun's marketing isn't
 stupid.. they did this because it's good for them)
 2) I am sincerely interested in your product (as others seem to be as
well)

 Code review, increased testing and viral marketing are all typically
 good things.  Anyway, hope this clears things up.

 Cheers,

 ./C

 ZFS is licensed under the CDDL, and as far as I know does not
 require derivative works to be open source.  It's truly free like
 the BSD license in that companies can take CDDL code, modify it, and
 keep the content closed.  They are not forced to share their code.
 That's why there are closed patches that go into mainline Solaris,
 but are not part of OpenSolaris.

 While you may not like it, this isn't the GPL.


Tim,

  I am not a lawyer, yet that is not how I understand it.  Sun is not
required to publish source patches because they own the code and grant the
license.  They are free to sell, ship or do whatever they want with the
code.  On the other hand, if company X uses the source and delivers
executables, the license (which gives them access to the code in the first
place) requires them to provide source code. See below for the relevant
section of the CDDL.

-Wade


3.1. Availability of Source Code.

Any Covered Software that You distribute or otherwise make available in
Executable form must also be made available in Source Code form and that
Source Code form must be distributed only under the terms of this License.
You must include a copy of this License with every copy of the Source Code
form of the Covered Software You distribute or otherwise make available.
You must inform recipients of any such Covered Software in Executable form
as to how they can obtain such Covered Software in Source Code form in a
reasonable manner on or through a medium customarily used for software
exchange.

3.5. Distribution of Executable Versions.

You may distribute the Executable form of the Covered Software under the
terms of this License or under the terms of a license of Your choice, which
may contain terms different from this License, provided that You are in
compliance with the terms of this License and that the license for the
Executable form does not attempt to limit or alter the recipient's rights
in the Source Code form from the rights set forth in this License. If You
distribute the Covered Software in Executable form under a different
license, You must make it absolutely clear that any terms which differ from
this License are offered by You alone, not by the Initial Developer or
Contributor. You hereby agree to indemnify the Initial Developer and every
Contributor for any liability incurred by the Initial Developer or such
Contributor as a result of any such terms You offer.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comments on green-bytes

2008-10-06 Thread Joerg Schilling
Tim [EMAIL PROTECTED] wrote:

 ZFS is licensed under the CDDL, and as far as I know does not require
 derivative works to be open source.  It's truly free like the BSD license in
 that companies can take CDDL code, modify it, and keep the content closed.
 They are not forced to share their code.  That's why there are closed
 patches that go into mainline Solaris, but are not part of OpenSolaris.

The CDDL requires you to make modifications public.



 While you may not like it, this isn't the GPL.

The GPL is more free than many people may believe now ;-)

The GPL is unfortunately misunderstood by most people.

The GPL allows you to link GPLd projects against other code
of _any_ other license that does not forbid some basic things.
This is because the GPL ends at the boundary of the work. The binary in this
case is just a container for more than one work, and the license of
the binary is the aggregation of the requirements of the licenses
in use by the sources.


The influence of the CDDL ends at file level. All changes are covered by
the copyleft from the CDDL.


The influence of the BSD license ends at line level. The original
code remains under the BSD license, but you may add new code under
a different license. Note that all GPL-enhanced BSD code I am
aware of violates the GPL, as GPL section 2 a) requires that every
change be logged by author and date _inline_ in the changed
file. Do you know of any such code where it is possible to track down
which part of the code comes from the GPLd enhancements?

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-06 Thread Brian Hechinger
On Sun, Oct 05, 2008 at 11:30:54PM -0500, Nicolas Williams wrote:
 
 There have been threads about adding a feature to support slow mirror
 devices that don't stay synced synchronously.  At least IIRC.  That
 would help.  But then, if the pool is busy writing then your slow ZIL

That would definitely be a great help.

 mirrors would generally be out of sync, thus being of no help in the
 event of a power failure given fast slog devices that don't survive power
 failure.

Maybe not, but it would at least save *something* as opposed to not saving
anything at all.  Still, with enough UPS power, there should be at least
enough run time left to get the rest of the ZIL to the disk mirror.

 Also, using remote devices for a ZIL may defeat the purpose of fast
 ZILs, even if the actual devices are fast, because what really matters
 here is latency, and the farther the device, the higher the latency.

4Gb FC is slow and low latency?  Tell that to all my local fast disks that
are attached via FC. :)

  Yes, it's pretty smart.  Add UPS and it's sort of like battery-backed
 RAM.  You can probably get a good enough reliability rate out of this
 for your purposes, though actual slog devices would be better if you can
 afford them.

Or would they?  A box dedicated to being a RAM based slog is going to be
faster than any SSD would be.  Especially if you make the expensive jump
to 8Gb FC.

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-06 Thread Brian Hechinger
On Mon, Oct 06, 2008 at 10:47:04AM -0400, Moore, Joe wrote:
 
 I wonder if an AVS-replicated storage device on the backends would be 
 appropriate?
 
 write -> ZFS-mirrored slog -> ramdisk -AVS-> physical disk
                            \
                             +-iscsi-> ramdisk -AVS-> physical disk
 
 You'd get the continuous replication of the ramdisk to the physical drive (and 
 perhaps automagic recovery on reboot) but not pay the synchronous-write-to-
 remote-physical-disk penalty.

Hmmm, AVS *might* just be the ticket here.  Will have to look at that.

 A 0.5 ms RTT on an Ethernet link to the iSCSI disk may be faster than a 9 ms 
 latency on physical media.

Or, if you're looking into what I'm thinking with 4Gb/8Gb FC, it gets even 
better.

 There was a time when it was better to place workstations' swap files on the 
 far side of a 100Mbps ethernet link rather than using the local spinning 
 rust.  Ah, the good old days...

I remember those days.  My SPARCstation LX ran that way.  Not due to speed,
however, due to lack of disk space in the LX. ;)

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-06 Thread Nicolas Williams
On Mon, Oct 06, 2008 at 05:38:33PM -0400, Brian Hechinger wrote:
 On Sun, Oct 05, 2008 at 11:30:54PM -0500, Nicolas Williams wrote:
  There have been threads about adding a feature to support slow mirror
  devices that don't stay synced synchronously.  At least IIRC.  That
  would help.  But then, if the pool is busy writing then your slow ZIL
 
 That would definitely be a great help.
 
  mirrors would generally be out of sync, thus being of no help in the
  event of a power failure given fast slog devices that don't survive power
  failure.
 
 Maybe not, but it would at least save *something* as opposed to not saving
 anything at all.  Still, with enough UPS power, there should be at least
 enough run time left to get the rest of the ZIL to the disk mirror.

Yes.  But again, you get somewhat more protection from writing to a
write-biased SSD in that once the ZIL bits are committed then you get
protection from panics in the OS too, not just power failure.

  Also, using remote devices for a ZIL may defeat the purpose of fast
  ZILs, even if the actual devices are fast, because what really matters
  here is latency, and the farther the device, the higher the latency.
 
 4Gb FC is slow and low latency?  Tell that to all my local fast disks that
 are attached via FC. :)

The comparison was to RAM, not local fast disks.

I'm pretty sure that local RAM beats remote anything, no matter what the
anything is (as long as it isn't RAM) and what the protocol to get to it is
(as long as it isn't a normal backplane).  (You could claim that with NUMA
memory can be remote, so let's say that for a reasonable value of
remote.)

  Yes, it's pretty smart.  Add UPS and it's sort of like battery-backed
  RAM.  You can probably get a good enough reliability rate out of this
  for your purposes, though actual slog devices would be better if you can
  afford them.
 
 Or would they?  A box dedicated to being a RAM based slog is going to be
 faster than any SSD would be.  Especially if you make the expensive jump
 to 8Gb FC.

Unless the SSD had a battery-backed RAM cache, or was based entirely on
battery-backed RAM (but then you have to worry about battery upkeep).

To me this is a performance/reliability trade-off.  RAM slogs mirrored
in cluster + UPS - very fast, works as well as the UPS.  Write-biased
flash slogs - fast, no UPS to worry about.

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comments on green-bytes

2008-10-06 Thread Bob Friesenhahn
On Mon, 6 Oct 2008, Joerg Schilling wrote:

 While you may not like it, this isn't the GPL.

 The GPL is more free than many people may believe now ;-)

 The GPL is unfortunately missunderstood by most people.

The GPL is misunderstood due to the profusion of confusing technobabble 
such as you provided in your explanation.

Regardless, the Green Bytes CDDL issue is between the copyright holder 
(Sun) and Green Bytes and is no concern of ours.  The actual changes 
to existing source modules may be on the order of a few lines of code, 
or potentially no change at all.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-06 Thread Brian Hechinger
On Mon, Oct 06, 2008 at 10:47:04AM -0400, Moore, Joe wrote:
 
 I wonder if an AVS-replicated storage device on the backends would be 
 appropriate?
 
 write -> ZFS-mirrored slog -> ramdisk -AVS-> physical disk
                            \
                             +-iscsi-> ramdisk -AVS-> physical disk
 
 You'd get the continuous replication of the ramdisk to the physical drive (and 
 perhaps automagic recovery on reboot) but not pay the synchronous-write-to-
 remote-physical-disk penalty.

It looks like the answer is no.

[EMAIL PROTECTED] sudo sndradm -e localhost /dev/rramdisk/avstest1 /dev/zvol/rdsk/SYS0/bitmap1 \
    wintermute /dev/zvol/dsk/SYS0/avstest2 /dev/zvol/rdsk/SYS0/bitmap2 ip async
Enable Remote Mirror? (Y/N) [N]: y
sndradm: Error: both localhost and wintermute are local

In order to use AVS, it looks like you'd have to replicate between two (or more)
ZIL Boxes.  Not the worst thing in the world to have to do, but it certainly
complicates things.  Also, you don't get that super fast RAM-Disk sync anymore
as you now have to traverse an IP network to get there.  Still might be an
acceptable way to achieve the goals we are looking at here.

I guess at this point falling back to 'zfs send' run in a continuous loop might
be an alternative.
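
Something like this crude loop is what I'm thinking of (pool/dataset names and the target host are invented, and it assumes an initial snapshot @seed plus a full send/receive have already been done):

#!/bin/sh
prev=seed
while true; do
    now=`date +%s`
    zfs snapshot slogpool/zil@$now
    zfs send -i @$prev slogpool/zil@$now | ssh backupbox zfs receive -F backup/zil
    zfs destroy slogpool/zil@$prev
    prev=$now
    sleep 5
done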

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comments on green-bytes

2008-10-06 Thread Joerg Schilling
Bob Friesenhahn [EMAIL PROTECTED] wrote:

  The GPL is unfortunately misunderstood by most people.

 The GPL is misunderstood due to the profusion of confusing technobabble 
 such as you provided in your explanation.

If you don't understand it, just don't comment on it ;-)

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-06 Thread Brian Hechinger
On Mon, Oct 06, 2008 at 01:13:40AM -0700, Ross wrote:
 
 It's also worth bearing in mind that you can have multiple mirrors.  I don't 
 know what effect that will have on the performance, but it's an easy way to 
 boost the reliability even further.  I think this idea configured on a set of 
 2-3 servers, with separate UPS' for each, and a script that can export the 
 pool and save the ramdrive when the power fails, is potentially a very neat 
 little system.

The more slog devices, the better. :)

If the host using the slogs could trigger the shutdown, that would be even
better I think.  Once we know the zpool is exported, the slogs have just
entered a nicely consistent state at which point the copies could be made.

Also, it would be nice if the host using these slogs were able to
wait until enough of them are online before attempting to mount its pool.  That
shouldn't be too hard, nothing more than some startup script modifications.
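
The start method could be as dumb as this (pool name is a placeholder): just keep retrying until the pool -- and therefore its slog devices -- can actually be imported:

#!/bin/sh
until zpool import tank 2>/dev/null; do
    sleep 10
done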

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool imports are slow when importing multiple storage pools

2008-10-06 Thread Richard Elling
Scott Williamson wrote:
 Speaking of this, is there a list anywhere that details what we can 
 expect to see for (zfs) updates in S10U6?

The official release name is Solaris 10 10/08.
http://www.sun.com/software/solaris/10
has links to the what's-new videos.
When the release is downloadable, the full doc set will
be ready.
 -- richard


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [Fwd: Re: ZSF Solaris]

2008-10-06 Thread Jens Elkner
On Mon, Oct 06, 2008 at 08:01:39PM +0530, Pramod Batni wrote:
 
 On Tue, Sep 30, 2008 at 09:44:21PM -0500, Al Hopper wrote:
 
  This behavior is common to tmpfs, UFS and I tested it on early ZFS
  releases.  I have no idea why - I have not made the time to figure it
  out.  What I have observed is that all operations on your (victim)
  test directory will max out (100% utilization) one CPU or one CPU core
  - and all directory operations become single-threaded and limited by
  the performance of one CPU (or core).
 
 And sometimes it's just a little bug, e.g. with a recent version of Solaris
 (i.e. >= snv_95 || >= S10U5) on UFS:
 
 SunOS graf 5.10 Generic_137112-07 i86pc i386 i86pc (X4600, S10U5)
 =
 admin.graf /var/tmp   time sh -c 'mkfile 2g xx ; sync'
 0.05u 9.78s 0:29.42 33.4%
 admin.graf /var/tmp  time sh -c 'mkfile 2g xx ; sync'
 0.05u 293.37s 5:13.67 93.5%
 
 SunOS q 5.11 snv_98 i86pc i386 i86pc (U40, S11b98)
 =
 elkner.q /var/tmp  time mkfile 2g xx
 0.05u 3.63s 0:42.91 8.5%
 elkner.q /var/tmp  time mkfile 2g xx
 0.04u 315.15s 5:54.12 89.0%
 
The reason why the (implicit) truncation could be taking so long might be due to:

    6723423 UFS slow following large file deletion with fix for 6513858 installed

To overcome this problem for S10, the offending patch 127866-03 can be removed.

Yes - removing 127867-05 (x86, i.e. going back to 127867-02) resolved
the problem. On sparc, removing 127866-05 brought me back to 127866-01,
which didn't seem to solve the problem (maybe because I didn't init 6
beforehand). However, installing 127866-02 and doing an init 6 fixed it on sparc as well.
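
For reference, the sequence on sparc was essentially (the path to the downloaded patch is illustrative):

patchrm 127866-05
patchadd /var/tmp/127866-02
init 6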

Any hints as to which snv release it is fixed in?

Thanx a lot,
jel.
-- 
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss