Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-15 Thread Peter Jeremy
On 2010-Aug-16 08:17:10 +0800, Garrett D'Amore wrote:
>For either ZFS or BTRFS (or any other filesystem) to survive, there have
>to be sufficiently skilled developers with an interest in developing and
>maintaining it (whether the interest is commercial or recreational).

Agreed.  And this applies to OpenSolaris (or Illumos or any other fork)
as well.

>Honestly, I think both ZFS and btrfs will continue to be invested in by
>Oracle.

Given that both provide similar features, it's difficult to see why
Oracle would continue to invest in both.  Given that ZFS is the more
mature product, it would seem more logical to transfer all the effort
to ZFS and leave btrfs to die.

Irrespective of the above, there is nothing requiring Oracle to release
any future btrfs or ZFS improvements (or even bugfixes).  They can't
retrospectively change the license on already released code but they
can put a different (non-OSS) license on any new code.

-- 
Peter Jeremy




Re: [zfs-discuss] Is the error threshold for a degraded device configurable?

2010-08-15 Thread Ian Collins

On 08/16/10 12:37 PM, Richard Elling wrote:
> On Aug 15, 2010, at 4:59 PM, Ian Collins wrote:
>> I look after an x4500 for a client and we keep getting drives marked as
>> degraded with just over 20 checksum errors.
>>
>> Most of these errors appear to be driver or hardware related and their
>> frequency increases during a resilver, which can lead to a death spiral.
>> The increase in errors within a vdev during a resilver (I recently had
>> three drives in an 8 drive raidz vdev "degraded") points to high read
>> activity triggering the bug.
>>
>> I would like to raise the threshold for marking a drive degraded to give
>> me more time to spot and clear the checksum errors.  Is this possible?
>
> There is no documented system-admin-visible interface to this.
> The settings in question can be set as properties in the zfs-diagnosis.conf
> file, similar to props set in other FMA modules.
>
> The source is also currently available:
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/fm/modules/common/zfs-diagnosis/zfs_de.c#957
>
> Examples of setting FMA module properties are in
> /usr/lib/fm/fmd/plugins/cpumem-retire.conf
> and other .conf files.

Thanks for the links, Richard.

Looking through the code, the only configurable property read from the file
is remove_timeout.  Anything else will require code changes.  Maybe it's 
time to upgrade the box to something newer than Solaris 10!
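
For reference, a sketch of what that single tunable would look like (the
value and units here are hypothetical; check the zfs_de.c source linked
above for what fmd actually parses):

   # /usr/lib/fm/fmd/plugins/zfs-diagnosis.conf
   setprop remove_timeout 30sec

   # restart fmd so the module re-reads its configuration
   svcadm restart svc:/system/fmd:default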


--
Ian.



Re: [zfs-discuss] File system ownership details of ZFS file system.

2010-08-15 Thread Richard Elling
On Aug 11, 2010, at 11:41 PM, Ramesh Babu wrote:
> Hi,
> 
> I am looking for the file system ownership information of ZFS file system. I 
> would like to know the amount of space used and number of files owned by each 
> user in ZFS file system. I could get the user space using 'ZFS userspace' 
> command. However i didn't find any switch to get Number of files owned by 
> each user. quot command will display this information of ufs but not for zfs. 
> Please let me know how to get number of files owned by each user for ZFS file 
> system.

Sometimes the questions asked don't apply to the environment...

ZFS does not have a fixed number of inodes, so there is no way to
calculate a limit, per se.  ZFS uses space for metadata, so as long 
as you have available space, you can use it for metadata.
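
There is no quot(1M) equivalent, but file counts per user can be
brute-forced (a sketch; the dataset and user names are hypothetical, and a
full tree walk is slow on a large file system):

   # number of files owned by one user in one file system
   find /tank/home -type f -user alice | wc -l

   # space consumed per user, on releases that support it
   zfs userspace -o name,used tank/home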

> Also I would like to know how to get the hard and soft limit quota of disk 
> space and inodes for each user.

There is no "hard and soft" limits.  There are quotas at the file system
and, for later releases, per-user and per-group quotas. Since this can
be a deep topic, I suggest you read Cindy's excellent descriptions in 
the ZFS Admin Guide.
http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/zfsadmin.pdf
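
For example, on a build with user quota support (snv_114 and later, per the
version list elsewhere on this list; the dataset and user names here are
hypothetical):

   zfs set userquota@alice=10G tank/home
   zfs get userquota@alice tank/home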

 -- richard

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com





Re: [zfs-discuss] ZFS and VMware

2010-08-15 Thread Richard Elling
On Aug 11, 2010, at 12:52 PM, Paul Kraus wrote:

>   I am looking for references of folks using ZFS with either NFS
> or iSCSI as the backing store for VMware (4.x) virtual machines.
> We asked the local VMware folks and they had not
> even heard of ZFS. Part of what we are looking for is a recommendation
> for NFS or iSCSI, and all VMware would say is "we support both". We
> are currently using Sun SE-6920, 6140, and 2540 hardware arrays via
> FC. We have started playing with ZFS/NFS, but have no experience with
> iSCSI. The ZFS backing store in some cases will be the hardware arrays
> (the 6920 has fallen off of VMware's supported list and if we front
> end it with either NFS or iSCSI it'll be supported, and VMware
> suggested that) and some of it will be backed by J4400 SATA disk.

At Nexenta, we have many customers using ZFS as backing store for
VMware and Citrix XenServer.  Nexenta also has a plugin to help you
integrate your VMware, XenServer, and Hyper-V virtual hosts with the
storage appliance. For more info, see
http://www.nexenta.com/corp/applications/vmdc
and the latest Nexenta docs, including the VMDC User's Guide are at:
http://www.nexenta.com/corp/documentation/product-documentation

Please share and enjoy: the joint EMC+NetApp storage best practices 
for configuring ESX apply to all NFS and block storage (*SCSI) 
environments. Google "TR-3428" and point me in the direction of any
later versions you find :-)
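
For the ZFS side, a minimal sketch of both flavours (pool, dataset, and
network names are hypothetical; the iSCSI path assumes COMSTAR):

   # NFS datastore
   zfs create tank/vmstore
   zfs set sharenfs=rw=@192.168.10.0/24,root=@192.168.10.0/24 tank/vmstore

   # iSCSI datastore: a zvol exported as a COMSTAR logical unit
   zfs create -V 500G tank/vmvol
   sbdadm create-lu /dev/zvol/rdsk/tank/vmvol
   stmfadm add-view <GUID printed by sbdadm>
   itadm create-target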

 -- richard

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com





Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-15 Thread Frank Cusack

On 8/14/10 10:18 PM -0700 Richard Elling wrote:
> On Aug 13, 2010, at 7:06 PM, Frank Cusack wrote:
>> Interesting POV, and I agree.  Most of the many "distributions" of
>> OpenSolaris had very little value-add.  Nexenta was the most interesting
>> and why should Oracle enable them to build a business at their expense?
>
> Markets dictate behaviour. Oracle has clearly stated their goal of
> focusing the Sun-acquired assets at the Fortune-500 market.  Nexenta has
> a different market -- the rest of the world. There is plenty of room for
> both to be successful.
>  -- richard

Great point.



Re: [zfs-discuss] Is the error threshold for a degraded device configurable?

2010-08-15 Thread Richard Elling
On Aug 15, 2010, at 4:59 PM, Ian Collins wrote:

> I look after an x4500 for a client and we keep getting drives marked as 
> degraded with just over 20 checksum errors.
> 
> Most of these errors appear to be driver or hardware related and their 
> frequency increases during a resilver, which can lead to a death spiral.  The 
> increase in errors within a vdev during a resilver (I recently had three 
> drives in an 8 drive raidz vdev "degraded") points to high read activity 
> triggering the bug.
> 
> I would like to raise the threshold for marking a drive degraded to give me 
> more time to spot and clear the checksum errors.  Is this possible?

There is no documented system-admin-visible interface to this.
The settings in question can be set as properties in the zfs-diagnosis.conf
file, similar to props set in other FMA modules.

The source is also currently available.
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/fm/modules/common/zfs-diagnosis/zfs_de.c#957

Examples of setting FMA module properties are in 
/usr/lib/fm/fmd/plugins/cpumem-retire.conf
and other .conf files.

If you get this to work, please publicly document your changes and 
why you felt the new settings were better.
 -- richard

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com





Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-15 Thread Garrett D'Amore
Any code can become abandonware, where it effectively bitrots into
oblivion.

For either ZFS or BTRFS (or any other filesystem) to survive, there have
to be sufficiently skilled developers with an interest in developing and
maintaining it (whether the interest is commercial or recreational).

Honestly, I think both ZFS and btrfs will continue to be invested in by
Oracle.

(The only way I could see this changing would be if there was a sudden
license change which would permit either ZFS to overtake btrfs in the
Linux kernel, or permit btrfs to overtake zfs in the Solaris kernel.  I
think from a technical perspective, the latter of those two is
exceedingly unlikely -- if I understand correctly btrfs has a lot of
ground to make up to catch zfs, and zfs continues to receive
improvements and innovation.  The only way I could see zfs being
abandoned would be if there were some legal reason why Oracle couldn't
continue to develop it.  I don't think that is in the cards, honestly.)

- Garrett

On Sun, 2010-08-15 at 19:33 -0400, Edward Ned Harvey wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Jerome Warnier
> > 
> > Do not forget Btrfs is mainly developed by ... Oracle. Will it survive
> > better than Free Solaris/ZFS?
> 
> It's gpl.  Just as zfs is cddl.  They cannot undo, or revoke the free
> license they've granted to use and develop upon whatever they've released.
> 
> ZFS is not dead, although it is yet to be seen if future development will be
> closed source.
> 
> BTRFS is not dead, and cannot be any more dead than zfs.
> 
> So honestly ... your comment above ... really has no bearing on reality.
> 




Re: [zfs-discuss] ZFS diaspora (was Opensolaris is apparently dead)

2010-08-15 Thread Garrett D'Amore
I'd recommend typical end-users not interested in purchasing equipment
from Oracle consider Nexenta's product line for storage serving.

I can tell you that we offer real support, and we have the latest code
base with the most tightly integrated kernel other than Oracle's
product.  (And in many cases, we bring features into a shipping product
sooner than Oracle does.)

I've not tried the FreeBSD ZFS implementation, but I've heard that it
suffers from a performance standpoint -- it's also a bit behind the
Solaris-derived platforms.  The Linux effort is far too immature to
trust any real data to it, and may wind up never getting any real legs
underneath it due to license-related conflicts.

- Garrett


On Sun, 2010-08-15 at 18:31 -0500, Haudy Kazemi wrote:
> For the ZFS diaspora:
> 
> 1.) For the immediate and near term future (say 1 year), what makes a
> better choice for a new install of a ZFS-class filesystem? Would it be
> FreeBSD 8 with its older ZFS version (pool version 14), or
> NexentaCore with newer ZFS (pool version 25(?) ), NexentaStor, or
> something else?  OpenSolaris 2009.06, Solaris 10 10/09, FreeBSD
> 8-STABLE and 8.1-RELEASE all use pool version 14.  Linux ZFS-FUSE
> 0.6.9 is at pool version 23, and Linux zfs-0.5.0 is at pool version 26.
> 
> Are there any other ZFS or ZFS-class filesystems on a supported
> distribution that are worthy of consideration for this timeframe?
> 
> 
> 2.) IllumOS appears to be the likely heir to what was known as
> OpenSolaris.  They have their own mailing lists at
> http://lists.illumos.org/m/listinfo .  Interested community members
> might like to sign up there in case there is a sudden unavailability
> of opensolaris.org and its forums and lists.  Nexenta is sponsoring
> IllumOS.  Nexenta also appears somewhat insulated from the demise of
> OpenSolaris, and is a refuge for several former Sun engineers who were
> active on OpenSolaris.  Genunix.org and the Phoronix.com forums are
> other places to watch.
> 
> 
> Other comments inline:
> 
> 
> Russ Price wrote: 
> > My guess is that the theoretical Solaris Express 11 will be crippled
> > by any or all of: missing features, artificial limits on
> > functionality, or a restrictive license. I consider the latter most
> > likely, much like the OTN downloads of Oracle DB, where you can
> > download and run it for development purposes, but don't even THINK
> > of using it as a production server for your home or small business.
> > Of course, an Oracle DB is overkill for such a purpose anyway, but
> > that's a different kettle of fish. 
> > 
> > For me, Solaris had zero mindshare since its beginning, on account
> > of being prohibitively expensive. When OpenSolaris came out, I
> > basically ignored it once I found out that it was not completely
> > open source, since I figured that there was too great a risk of a
> > train wreck like we have now. Then, I decided this winter to give
> > ZFS a spin, decided I liked it, and built a home server around it -
> > and within weeks Oracle took over, tore up the tracks without
> > telling anybody, and made the train wreck I feared into a reality. I
> > should have listened to my own advice. 
> > 
> > As much as I'd like to be proven wrong, I don't expect SX11 to be
> > useful for my purposes, so my home file server options are: 
> > 
> > 1. Nexenta Core. It's maintained, and (somewhat) more up-to-date
> > than the late OpenSolaris. As I've been running Linux since the days
> > when a 486 was a cutting-edge system, I don't mind having a GNU
> > userland. Of course, now that Oracle has slammed the door, it'll be
> > difficult for it to move forward - which leads to: 
> 1a. NexentaStor Community Edition may also be suitable for home file
> server class uses, depending on your actual storage needs.  It
> currently has a 12 TB limit, measured in actual used capacity.
> http://support.nexenta.com/index.php?_m=knowledgebase&_a=viewarticle&kbarticleid=69&nav=0,15
> 
> 
> > 2. IllumOS. In 20/20 hindsight, a project like this should have
> > begun as soon as OpenSolaris first came out the door, but better
> > late than never. In the short term, it's not yet an option, but in
> > the long term, it may be the best (or only) hope. At the very least,
> > I won't be able to use it until an open mpt driver is in place. 
> > 
> > 3. Just stick with b134. Actually, I've managed to compile my way up
> > to b142, but I'm having trouble getting beyond it - my attempts to
> > install later versions just result in new boot environments with the
> > old kernel, even with the latest pkg-gate code in place. Still, even
> > if I get the latest code to install, it's not viable for the long
> > term unless I'm willing to live with stasis. 
> > 
> > 4. FreeBSD. I could live with it if I had to, but I'm not fond of
> > its packaging system; the last time I tried it I couldn't get the
> > package tools to pull a quick binary update. Even IPS works better.
> > I could go to the ports tree instead, but i

Re: [zfs-discuss] zpool 'stuck' after failed zvol destory and reboot

2010-08-15 Thread Richard Elling
On Aug 11, 2010, at 9:46 PM, Ville Ojamo wrote:

> I am having a similar issue at the moment.. 3 GB RAM under ESXi, but dedup 
> for this zvol (1.2 T) was turned off and only 300 G was used. The pool does 
> contain other datasets with dedup turned on but are small enough so I'm not 
> hitting the memory limits (been there, tried that, never again without maxing 
> out the RAM + SSD).
> 
> Tried to destroy the zvol, waited for a long time, and due to some unexpected 
> environmental problems needed to pull the plug on the box quickly to save it. 
> Now the boot is at "Reading ZFS config: *" since a few days, but I have time 
> to wait. ESXi monitoring confirms CPU activity but very little I/O.
> 
> My point, this particular zvol did not have deduplication turned on but it 
> seems I am still hitting the same problem.
> 
> snv_134
> 
> BTW this is a PoC box with nothing too important on it and I have some spare 
> time, so if I can help somehow, for example with kernel debugging let me know.

There are several fixes since b134, such as CR 6948890, "snapshot deletion
can induce pathologically long spa_sync() times":
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6948890

You might be hitting this, also. I know we've integrated this fix into the later
Nexenta releases.
 -- richard

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com





[zfs-discuss] Is the error threshold for a degraded device configurable?

2010-08-15 Thread Ian Collins
I look after an x4500 for a client and we keep getting drives marked as 
degraded with just over 20 checksum errors.


Most of these errors appear to be driver or hardware related and their 
frequency increases during a resilver, which can lead to a death 
spiral.  The increase in errors within a vdev during a resilver (I 
recently had three drives in an 8 drive raidz vdev "degraded") points to 
high read activity triggering the bug.


I would like to raise the threshold for marking a drive degraded to give me 
more time to spot and clear the checksum errors.  Is this possible?


--
Ian.



Re: [zfs-discuss] ZFS pool and filesystem version list, OpenSolaris builds list

2010-08-15 Thread Richard Elling
On Aug 15, 2010, at 4:30 PM, Haudy Kazemi wrote:
> Hello,
> 
> This is a consolidated list of ZFS pool and filesystem versions, along with 
> the builds and systems they are found in. It is based on multiple online 
> sources. Some of you may find it useful in figuring out where things are at 
> across the spectrum of systems supporting ZFS including FreeBSD and FUSE. At 
> the end of this message there is a list of the builds OpenSolaris releases 
> and some OpenSolaris derivatives are based on. The list is sort-of but not 
> strictly comma delimited, and of course may contain errata.

It is nice to have the distros section, thanks.
For the versions and what they do, use the links 

zpool: http://hub.opensolaris.org/bin/view/Community+Group+zfs/N
zfs: http://hub.opensolaris.org/bin/view/Community+Group+zfs/N

But these links are dead, or at least haven't been updated since the CIC :-(

zpool:
 23  Slim ZIL
 24  System attributes
 25  Improved scrub stats
 26  Improved snapshot deletion performance

zfs:
 4   userquota, groupquota properties
 5   System attributes

 -- richard

> 
> -hk
> 
> 
> Solaris Nevada xx = snv_xx = onnv_xx ~= testing builds for Solaris 11
> SXCE = Solaris Express Community Edition
> 
> ZFS Pool Version, Where found (multiple), Notes about this version
> 1, Nevada/SXCE 36, Solaris 10 6/06, Initial ZFS on-disk format integrated on 
> 10/31/05. During the next six months of internal use, there were a few 
> on-disk format changes that did not result in a version number change, but 
> resulted in a flag day since earlier versions could not read the newer 
> changes. For '6389368 fat zap should use 16k blocks (with backwards 
> compatibility)' and '6390677 version number checking makes upgrades 
> challenging'
> 2, Nevada/SXCE 38, Solaris 10 10/06 (build 9), Ditto blocks (replicated 
> metadata) for '6410698 ZFS metadata needs to be more highly replicated (ditto 
> blocks)'
> 3, Nevada/SXCE 42, Solaris 10 11/06 (build 3), Hot spares and double parity 
> RAID-Z for '6405966 Hot Spare support in ZFS' and '6417978 double parity 
> RAID-Z a.k.a. RAID6' and '6288488 du reports misleading size on RAID-Z'
> 4, Nevada/SXCE 62, Solaris 10 8/07, zpool history for '6529406 zpool history 
> needs to bump the on-disk version' and '6343741 want to store a command 
> history on disk'
> 5, Nevada/SXCE 62, Solaris 10 10/08, gzip compression algorithm for '6536606 
> gzip compression for ZFS'
> 6, Nevada/SXCE 62, Solaris 10 10/08, FreeBSD 7.0, 7.1, 7.2, bootfs pool 
> property for '4929890 ZFS boot support for the x86 platform' and '6479807 
> pools need properties'
> 7, Nevada/SXCE 68, Solaris 10 10/08, Separate intent log devices for '6339640 
> Make ZIL use NVRAM when available'
> 8, Nevada/SXCE 69, Solaris 10 10/08, Delegated administration for '6349470 
> investigate non-root restore/backup'
> 9, Nevada/SXCE 77, Solaris 10 10/08, refquota and refreservation properties 
> for '6431277 want filesystem-only quotas' and '6483677 need immediate 
> reservation' and '6617183 CIFS Service - PSARC 2006/715'
> 10, Nevada/SXCE 78, OpenSolaris 2008.05, Solaris 10 5/09 (Solaris 10 10/08 
> supports ZFS version 10 except for cache devices), Cache devices for '6536054 
> second tier ("external") ARC'
> 11, Nevada/SXCE 94, OpenSolaris 2008.11, Solaris 10 10/09, Improved 
> scrub/resilver performance for '6343667 scrub/resilver has to start over when 
> a snapshot is taken'
> 12, Nevada/SXCE 96, OpenSolaris 2008.11, Solaris 10 10/09, added Snapshot 
> properties for '6701797 want user properties on snapshot'
> 13, Nevada/SXCE 98, OpenSolaris 2008.11, Solaris 10 10/09, FreeBSD 7.3+, 
> FreeBSD 8.0-RELEASE, Linux ZFS-FUSE 0.5.0, added usedby properties for 
> '6730799 want user properties on snapshots' and 'PSARC/2008/518 ZFS space 
> accounting enhancements'
> 14, Nevada/SXCE 103, OpenSolaris 2009.06, Solaris 10 10/09, FreeBSD 8-STABLE, 
> 8.1-RELEASE, 9-CURRENT, added passthrough-x aclinherit property support for 
> '6765166 Need to provide mechanism to optionally inherit ACE_EXECUTE' and 
> 'PSARC 2008/659 New ZFS "passthrough-x" ACL inheritance rules'
> 15, Nevada/SXCE 114, added quota property support for '6501037 want 
> user/group quotas on ZFS' and 'PSARC 2009/204 ZFS user/group quotas & space 
> accounting'
> 16, Nevada/SXCE 116, Linux ZFS-FUSE 0.6.0, added stmf property support for 
> '6736004 zvols need an additional property for comstar support'
> 17, Nevada/SXCE 120, added triple-parity RAID-Z for '6854612 triple-parity 
> RAID-Z'
> 18, Nevada/SXCE 121, Linux zfs-0.4.9, added ZFS snapshot holds for '6803121 
> want user-settable refcounts on snapshots'
> 19, Nevada/SXCE 125, added ZFS log device removal option for '6574286 
> removing a slog doesn't work'
> 20, Nevada/SXCE 128, added zle compression to support dedupe in version 21 
> for 'PSARC/2009/571 ZFS Deduplication Properties'
> 21, Nevada/SXCE 128, added deduplication properties for 'PSARC/2009/571 ZFS 
> Deduplication Properties'
> 22, Nevada/SXCE

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-15 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tim Cook
>
> The cost discussion is ridiculous, period.  $400 is a steal for
> support.  You'll pay 3x or more for the same thing from Redhat or
> Novell.

Actually, as a comparison with the message I sent 1 minute ago... in order
to compare apples to apples ...


> [Solaris is] $450 for 1yr, or $1200 for 3yrs to buy solaris 10 with basic
> support on
> a dell server.  It costs more with a higher level of support, and it
> costs
> less if you have a good relationship with Dell with a strong corporate
> discount, or if you buy it at the end of Dell's quarter, when they have
> the
> best sales going on.

If you buy RHEL ES support with the same dell servers, the cost would be
$350/yr for basic support.  Plus or minus, based on AS and level of support
and your relationship with Dell.

Solaris costs more, but the ballpark is certainly the same.



Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-15 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
> 
> The $400 number is bogus since the amount that Oracle quotes now
> depends on the value of the hardware that the OS will run on.  For my

Using the same logic, if I said MS Office costs $140, that's a bogus number,
because different vendors sell it at different prices.

It's $450 for 1yr, or $1200 for 3yrs to buy solaris 10 with basic support on
a dell server.  It costs more with a higher level of support, and it costs
less if you have a good relationship with Dell with a strong corporate
discount, or if you buy it at the end of Dell's quarter, when they have the
best sales going on.

I don't know how much it costs at other vendors.




Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-15 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jerome Warnier
> 
> Do not forget Btrfs is mainly developed by ... Oracle. Will it survive
> better than Free Solaris/ZFS?

It's gpl.  Just as zfs is cddl.  They cannot undo, or revoke the free
license they've granted to use and develop upon whatever they've released.

ZFS is not dead, although it is yet to be seen if future development will be
closed source.

BTRFS is not dead, and cannot be any more dead than zfs.

So honestly ... your comment above ... really has no bearing on reality.



Re: [zfs-discuss] ZFS diaspora (was Opensolaris is apparently dead)

2010-08-15 Thread Haudy Kazemi

For the ZFS diaspora:

1.) For the immediate and near term future (say 1 year), what makes a 
better choice for a new install of a ZFS-class filesystem? Would it be 
FreeBSD 8 with its older ZFS version (pool version 14), or NexentaCore 
with newer ZFS (pool version 25(?) ), NexentaStor, or something else?  
OpenSolaris 2009.06, Solaris 10 10/09, FreeBSD 8-STABLE and 8.1-RELEASE 
all use pool version 14.  Linux ZFS-FUSE 0.6.9 is at pool version 23, 
and Linux zfs-0.5.0 is at pool version 26.


Are there any other ZFS or ZFS-class filesystems on a supported 
distribution that are worthy of consideration for this timeframe?



2.) IllumOS appears to be the likely heir to what was known as 
OpenSolaris.  They have their own mailing lists at 
http://lists.illumos.org/m/listinfo .  Interested community members 
might like to sign up there in case there is a sudden unavailability of 
opensolaris.org and its forums and lists.  Nexenta is sponsoring 
IllumOS.  Nexenta also appears somewhat insulated from the demise of 
OpenSolaris, and is a refuge for several former Sun engineers who were 
active on OpenSolaris.  Genunix.org and the Phoronix.com forums are 
other places to watch.



Other comments inline:


Russ Price wrote:
> My guess is that the theoretical Solaris Express 11 will be crippled
> by any or all of: missing features, artificial limits on
> functionality, or a restrictive license. I consider the latter most
> likely, much like the OTN downloads of Oracle DB, where you can
> download and run it for development purposes, but don't even THINK of
> using it as a production server for your home or small business. Of
> course, an Oracle DB is overkill for such a purpose anyway, but that's
> a different kettle of fish.
>
> For me, Solaris had zero mindshare since its beginning, on account of
> being prohibitively expensive. When OpenSolaris came out, I basically
> ignored it once I found out that it was not completely open source,
> since I figured that there was too great a risk of a train wreck like
> we have now. Then, I decided this winter to give ZFS a spin, decided I
> liked it, and built a home server around it - and within weeks Oracle
> took over, tore up the tracks without telling anybody, and made the
> train wreck I feared into a reality. I should have listened to my own
> advice.
>
> As much as I'd like to be proven wrong, I don't expect SX11 to be
> useful for my purposes, so my home file server options are:
>
> 1. Nexenta Core. It's maintained, and (somewhat) more up-to-date than
> the late OpenSolaris. As I've been running Linux since the days when a
> 486 was a cutting-edge system, I don't mind having a GNU userland. Of
> course, now that Oracle has slammed the door, it'll be difficult for
> it to move forward - which leads to:

1a. NexentaStor Community Edition may also be suitable for home file 
server class uses, depending on your actual storage needs.  It currently 
has a 12 TB limit, measured in actual used capacity.

http://support.nexenta.com/index.php?_m=knowledgebase&_a=viewarticle&kbarticleid=69&nav=0,15

> 2. IllumOS. In 20/20 hindsight, a project like this should have begun
> as soon as OpenSolaris first came out the door, but better late than
> never. In the short term, it's not yet an option, but in the long
> term, it may be the best (or only) hope. At the very least, I won't be
> able to use it until an open mpt driver is in place.
>
> 3. Just stick with b134. Actually, I've managed to compile my way up
> to b142, but I'm having trouble getting beyond it - my attempts to
> install later versions just result in new boot environments with the
> old kernel, even with the latest pkg-gate code in place. Still, even
> if I get the latest code to install, it's not viable for the long term
> unless I'm willing to live with stasis.
>
> 4. FreeBSD. I could live with it if I had to, but I'm not fond of its
> packaging system; the last time I tried it I couldn't get the package
> tools to pull a quick binary update. Even IPS works better. I could go
> to the ports tree instead, but if I wanted to spend my time
> recompiling everything, I'd run Gentoo instead.
>
> 5. Linux/FUSE. It works, but it's slow.
> 5a. Compile-it-yourself ZFS kernel module for Linux. This would be a
> hassle (though DKMS would make it less of an issue), but usable -
> except that the current module only supports zvols, so it's not ready
> yet, unless I wanted to run ext3-on-zvol. Neither of these solutions
> are practical for booting from ZFS.
>
> 6. Abandon ZFS completely and go back to LVM/MD-RAID. I ran it for
> years before switching to ZFS, and it works - but it's a bitter pill
> to swallow after drinking the ZFS Kool-Aid.

7.) Linux/BTRFS.  Still green, but moving quickly.  It will have crossed 
a minimum usability and stability threshold when Ubuntu or Fedora is 
willing to support it as default.  Might happen with Ubuntu 11.04, 
although in mid-May there was talk that 10.10 had a slight chance as 
well (but that seems unlikely now).

8.) EON NAS or other OpenS

[zfs-discuss] ZFS pool and filesystem version list, OpenSolaris builds list

2010-08-15 Thread Haudy Kazemi

Hello,

This is a consolidated list of ZFS pool and filesystem versions, along 
with the builds and systems they are found in. It is based on multiple 
online sources. Some of you may find it useful in figuring out where 
things are at across the spectrum of systems supporting ZFS including 
FreeBSD and FUSE. At the end of this message there is a list of the 
builds OpenSolaris releases and some OpenSolaris derivatives are based 
on. The list is sort-of but not strictly comma delimited, and of course 
may contain errata.


-hk


Solaris Nevada xx = snv_xx = onnv_xx ~= testing builds for Solaris 11
SXCE = Solaris Express Community Edition

ZFS Pool Version, Where found (multiple), Notes about this version
1, Nevada/SXCE 36, Solaris 10 6/06, Initial ZFS on-disk format 
integrated on 10/31/05. During the next six months of internal use, 
there were a few on-disk format changes that did not result in a version 
number change, but resulted in a flag day since earlier versions could 
not read the newer changes. For '6389368 fat zap should use 16k blocks 
(with backwards compatibility)' and '6390677 version number checking 
makes upgrades challenging'
2, Nevada/SXCE 38, Solaris 10 10/06 (build 9), Ditto blocks (replicated 
metadata) for '6410698 ZFS metadata needs to be more highly replicated 
(ditto blocks)'
3, Nevada/SXCE 42, Solaris 10 11/06 (build 3), Hot spares and double 
parity RAID-Z for '6405966 Hot Spare support in ZFS' and '6417978 double 
parity RAID-Z a.k.a. RAID6' and '6288488 du reports misleading size on 
RAID-Z'
4, Nevada/SXCE 62, Solaris 10 8/07, zpool history for '6529406 zpool 
history needs to bump the on-disk version' and '6343741 want to store a 
command history on disk'
5, Nevada/SXCE 62, Solaris 10 10/08, gzip compression algorithm for 
'6536606 gzip compression for ZFS'
6, Nevada/SXCE 62, Solaris 10 10/08, FreeBSD 7.0, 7.1, 7.2, bootfs pool 
property for '4929890 ZFS boot support for the x86 platform' and 
'6479807 pools need properties'
7, Nevada/SXCE 68, Solaris 10 10/08, Separate intent log devices for 
'6339640 Make ZIL use NVRAM when available'
8, Nevada/SXCE 69, Solaris 10 10/08, Delegated administration for 
'6349470 investigate non-root restore/backup'
9, Nevada/SXCE 77, Solaris 10 10/08, refquota and refreservation 
properties for '6431277 want filesystem-only quotas' and '6483677 need 
immediate reservation' and '6617183 CIFS Service - PSARC 2006/715'
10, Nevada/SXCE 78, OpenSolaris 2008.05, Solaris 10 5/09 (Solaris 10 
10/08 supports ZFS version 10 except for cache devices), Cache devices 
for '6536054 second tier ("external") ARC'
11, Nevada/SXCE 94, OpenSolaris 2008.11, Solaris 10 10/09, Improved 
scrub/resilver performance for '6343667 scrub/resilver has to start over 
when a snapshot is taken'
12, Nevada/SXCE 96, OpenSolaris 2008.11, Solaris 10 10/09, added 
Snapshot properties for '6701797 want user properties on snapshot'
13, Nevada/SXCE 98, OpenSolaris 2008.11, Solaris 10 10/09, FreeBSD 7.3+, 
FreeBSD 8.0-RELEASE, Linux ZFS-FUSE 0.5.0, added usedby properties for 
'6730799 want user properties on snapshots' and 'PSARC/2008/518 ZFS 
space accounting enhancements'
14, Nevada/SXCE 103, OpenSolaris 2009.06, Solaris 10 10/09, FreeBSD 
8-STABLE, 8.1-RELEASE, 9-CURRENT, added passthrough-x aclinherit 
property support for '6765166 Need to provide mechanism to optionally 
inherit ACE_EXECUTE' and 'PSARC 2008/659 New ZFS "passthrough-x" ACL 
inheritance rules'
15, Nevada/SXCE 114, added quota property support for '6501037 want 
user/group quotas on ZFS' and 'PSARC 2009/204 ZFS user/group quotas & 
space accounting'
16, Nevada/SXCE 116, Linux ZFS-FUSE 0.6.0, added stmf property support 
for '6736004 zvols need an additional property for comstar support'
17, Nevada/SXCE 120, added triple-parity RAID-Z for '6854612 
triple-parity RAID-Z'
18, Nevada/SXCE 121, Linux zfs-0.4.9, added ZFS snapshot holds for 
'6803121 want user-settable refcounts on snapshots'
19, Nevada/SXCE 125, added ZFS log device removal option for '6574286 
removing a slog doesn't work'
20, Nevada/SXCE 128, added zle compression to support dedupe in version 
21 for 'PSARC/2009/571 ZFS Deduplication Properties'
21, Nevada/SXCE 128, added deduplication properties for 'PSARC/2009/571 
ZFS Deduplication Properties'
22, Nevada/SXCE 128a, Nexenta Core Platform Beta 2, Beta 3, added zfs 
receive properties for 'PSARC/2009/510 ZFS Received Properties'
23, Nevada 135, Linux ZFS-FUSE 0.6.9, added slim ZIL support for 
'6595532 ZIL is too talkative'
24, Nevada 137, added support for system attributes for '6716117 ZFS 
needs native system attribute infrastructure' and '6516171 zpl symlinks 
should have their own object type'

25, Nevada ??, Nexenta Core Platform RC1
26, Nevada 141, Linux zfs-0.5.0


ZFS Pool Version, OpenSolaris, Solaris 10, Description
1 snv_36 Solaris 10 6/06 Initial ZFS version
2 snv_38 Solaris 10 11/06 Ditto blocks (replicated metadata)
3 snv_42 Solaris 10 11/06 Hot spares and double parity RAID-Z
4 snv_

Re: [zfs-discuss] Help! Dedup delete FS advice needed!!

2010-08-15 Thread Victor Latushkin

On Aug 15, 2010, at 11:30 PM, Marc Emmerson wrote:

> Hi all,
> I have a 10TB array (zpool = 2x 5 disk raidz1), I had dedup enabled on a 
> couple of filesystems which I decided to delete last week, the first 
> contained about 6GB of data and was deleted in about 30 minutes, the second 
> (about 100GB of VMs) is still being deleted (I think) 4.5 days later!

Could you please post output of

echo "::arc" | mdb -k

victor

> 
> Now, I've seen delete "dedup enabled fs" operations take a while before (2 
> days) but 4.5 days is a surprise.
> 
> I am wondering what (if anything) I can do to speed this up, my server only 
> has 4GB RAM, would it be beneficial/safe for me to switch off, upgrade to 
> 8GB?  I am assuming this may help the delete operation as more memory should 
> mean that more of the dedup table is stored in RAM?
> 
> Or is there anything else I can do to speed things up or indeed determine how 
> much longer left?
> 
> I'd appreciate any advice, cheers



Re: [zfs-discuss] Need to convert or remove some "un-removable" drives

2010-08-15 Thread Richard Elling
On Aug 15, 2010, at 2:05 PM, TheJay wrote:

> Is there anybody that can help me, or come up with a suggestion other than 
> "take a backup to another pool/tape?"

Attach mirrors to c6t5d0, c6t7d0, and c6t8d0.
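Something like this, with the new disk names being hypothetical:

   zpool attach rzpool2 c6t5d0 c6t20d0
   zpool attach rzpool2 c6t7d0 c6t21d0
   zpool attach rzpool2 c6t8d0 c6t22d0

That turns each stray single-disk top-level vdev into a two-way mirror.
Removing a top-level vdev is not supported, so adding redundancy to them
is the practical fix.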
 -- richard

> On Aug 12, 2010, at 5:37 PM, TheJay wrote:
> 
>> Guys,
>> 
>> Need your help. My DEV134 OSOL build with my 30TB disk system got really 
>> screwed due to my fat fingers :-(
>> 
>> I added 3 drives to my pool with the intent to add them to my RAIDz2...
>> 
>> This is what my zpool status looks like:
>> 
>>   pool: rzpool2
>>  state: ONLINE
>>  scrub: none requested
>> config:
>> 
>> NAME STATE READ WRITE CKSUM
>> rzpool2  ONLINE   0 0 0
>>   raidz2-0   ONLINE   0 0 0
>> c6t16d0  ONLINE   0 0 0
>> c6t18d0  ONLINE   0 0 0
>> c6t19d0  ONLINE   0 0 0
>> c6t3d0   ONLINE   0 0 0
>> c6t2d0   ONLINE   0 0 0
>> c6t1d0   ONLINE   0 0 0
>> c6t4d0   ONLINE   0 0 0
>> c6t6d0   ONLINE   0 0 0
>> c6t9d0   ONLINE   0 0 0
>>   raidz2-1   ONLINE   0 0 0
>> c6t0d0   ONLINE   0 0 0
>> c6t17d0  ONLINE   0 0 0
>> c6t10d0  ONLINE   0 0 0
>> c6t11d0  ONLINE   0 0 0
>> c6t12d0  ONLINE   0 0 0
>> c6t13d0  ONLINE   0 0 0
>> c6t14d0  ONLINE   0 0 0
>> c6t15d0  ONLINE   0 0 0
>>   c6t5d0 ONLINE   0 0 0
>>   c6t7d0 ONLINE   0 0 0
>>   c6t8d0 ONLINE   0 0 0
>> 
>> 
>> 
>> Check drives c6t5d0, c6t7d0 and c6t8d0 -> So dumb of me!
>> 
>> How do I either convert or *remove* the drives from pool (with edit/hexedit 
>> or anything else) 
>> 
>> Please HELP me!
>> 
>> 
>> 
>> zpool iostat -v
>> capacity operationsbandwidth
>> pool alloc   free   read  write   read  write
>> ---  -  -  -  -  -  -
>> rpool34.0G   198G  0  0  6.28K  4.34K
>>   c4t0d0s0   34.0G   198G  0  0  6.28K  4.34K
>> ---  -  -  -  -  -  -
>> rzpool2  22.6T  4.58T  2 14  92.4K  1.21M
>>   raidz2 11.8T   441G  1 12  89.3K  1.16M
>> c6t16d0  -  -  0  3  14.9K   170K
>> c6t18d0  -  -  0  3  14.1K   170K
>> c6t19d0  -  -  0  3  13.3K   170K
>> c6t3d0   -  -  0  3  14.9K   170K
>> c6t2d0   -  -  0  3  14.2K   170K
>> c6t1d0   -  -  0  3  13.3K   170K
>> c6t4d0   -  -  0  3  14.9K   170K
>> c6t6d0   -  -  0  3  14.1K   170K
>> c6t9d0   -  -  0  3  14.5K   170K
>>   raidz2 10.8T  70.8G  0  1  3.02K  55.4K
>> c6t0d0   -  -  0  1  5.06K  9.37K
>> c6t17d0  -  -  0  1  5.11K  9.38K
>> c6t10d0  -  -  0  1  5.09K  9.38K
>> c6t11d0  -  -  0  1  5.05K  9.38K
>> c6t12d0  -  -  0  1  5.11K  9.38K
>> c6t13d0  -  -  0  1  5.14K  9.38K
>> c6t14d0  -  -  0  1  5.07K  9.37K
>> c6t15d0  -  -  0  1  5.14K  9.37K
>>   c6t5d0  685K  1.36T  0  0 84  2.86K
>>   c6t7d0  848K  1.36T  0  0686  5.28K
>>   c6t8d0  848K  1.36T  0  0686  5.28K
>> ---  -  -  -  -  -  -
>> 

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com





Re: [zfs-discuss] Need to convert or remove some "un-removable" drives

2010-08-15 Thread TheJay
Is there anybody that can help me, or come up with a suggestion other than 
"take a backup to another pool/tape?"
On Aug 12, 2010, at 5:37 PM, TheJay wrote:

> Guys,
> 
> Need your help. My DEV134 OSOL build with my 30TB disk system got really 
> screwed due to my fat fingers :-(
> 
> I added 3 drives to my pool with the intent to add them to my RAIDz2...
> 
> This is what my zpool status looks like:
> 
>   pool: rzpool2
>  state: ONLINE
>  scrub: none requested
> config:
>  
> NAME STATE READ WRITE CKSUM
> rzpool2  ONLINE   0 0 0
>   raidz2-0   ONLINE   0 0 0
> c6t16d0  ONLINE   0 0 0
> c6t18d0  ONLINE   0 0 0
> c6t19d0  ONLINE   0 0 0
> c6t3d0   ONLINE   0 0 0
> c6t2d0   ONLINE   0 0 0
> c6t1d0   ONLINE   0 0 0
> c6t4d0   ONLINE   0 0 0
> c6t6d0   ONLINE   0 0 0
> c6t9d0   ONLINE   0 0 0
>   raidz2-1   ONLINE   0 0 0
> c6t0d0   ONLINE   0 0 0
> c6t17d0  ONLINE   0 0 0
> c6t10d0  ONLINE   0 0 0
> c6t11d0  ONLINE   0 0 0
> c6t12d0  ONLINE   0 0 0
> c6t13d0  ONLINE   0 0 0
> c6t14d0  ONLINE   0 0 0
> c6t15d0  ONLINE   0 0 0
>   c6t5d0 ONLINE   0 0 0
>   c6t7d0 ONLINE   0 0 0
>   c6t8d0 ONLINE   0 0 0
> 
> 
> 
> Check drives c6t5d0, c6t7d0 and c6t8d0 -> So dumb of me!
> 
> How do I either convert or *remove* the drives from pool (with edit/hexedit 
> or anything else) 
> 
> Please HELP me!
> 
> 
> 
> zpool iostat -v
> capacity operationsbandwidth
> pool alloc   free   read  write   read  write
> ---  -  -  -  -  -  -
> rpool34.0G   198G  0  0  6.28K  4.34K
>   c4t0d0s0   34.0G   198G  0  0  6.28K  4.34K
> ---  -  -  -  -  -  -
> rzpool2  22.6T  4.58T  2 14  92.4K  1.21M
>   raidz2 11.8T   441G  1 12  89.3K  1.16M
> c6t16d0  -  -  0  3  14.9K   170K
> c6t18d0  -  -  0  3  14.1K   170K
> c6t19d0  -  -  0  3  13.3K   170K
> c6t3d0   -  -  0  3  14.9K   170K
> c6t2d0   -  -  0  3  14.2K   170K
> c6t1d0   -  -  0  3  13.3K   170K
> c6t4d0   -  -  0  3  14.9K   170K
> c6t6d0   -  -  0  3  14.1K   170K
> c6t9d0   -  -  0  3  14.5K   170K
>   raidz2 10.8T  70.8G  0  1  3.02K  55.4K
> c6t0d0   -  -  0  1  5.06K  9.37K
> c6t17d0  -  -  0  1  5.11K  9.38K
> c6t10d0  -  -  0  1  5.09K  9.38K
> c6t11d0  -  -  0  1  5.05K  9.38K
> c6t12d0  -  -  0  1  5.11K  9.38K
> c6t13d0  -  -  0  1  5.14K  9.38K
> c6t14d0  -  -  0  1  5.07K  9.37K
> c6t15d0  -  -  0  1  5.14K  9.37K
>   c6t5d0  685K  1.36T  0  0 84  2.86K
>   c6t7d0  848K  1.36T  0  0686  5.28K
>   c6t8d0  848K  1.36T  0  0686  5.28K
> ---  -  -  -  -  -  -
> 


Re: [zfs-discuss] Help! Dedup delete FS advice needed!!

2010-08-15 Thread Marc Emmerson
thanks Tim, I have just chucked in another 4GB, hopefully I'll have my server 
back come the morning!

cheers,

Marc


Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-15 Thread Kevin Walker
To be fair, he did talk some sense about how everyone was claiming to have a
product that was cloud computing, but I still don't like Oracle. With their
current Java patent war with Google, and now this with OpenSolaris, it leaves
a very bad taste in my mouth.

Will this affect ZFS being used in FreeBSD?

On 15 August 2010 15:13, David Magda wrote:

> On Aug 14, 2010, at 19:39, Kevin Walker wrote:
>
>> I once watched a video interview with Larry from Oracle, this ass rambled on
>> about how he hates cloud computing and that everyone was getting into cloud
>> computing and in his opinion no one understood cloud computing, apart from
>> him... :-|
>
> If this is the video you're talking about, I think you misinterpreted what
> he meant:
>
>> Cloud computing is not only the future of computing, but it is the
>> present, and the entire past of computing is all cloud. [...] All it is is a
>> computer connected to a network. What do you think Google runs on? Do you
>> think they run on water vapour? It's databases, and operating systems, and
>> memory, and microprocessors, and the Internet. And all of a sudden it's none
>> of that, it's "the cloud". [...] All "the cloud" is, is computers on a
>> network, in terms of technology. In terms of business model, you can say
>> it's rental. All SalesForce.com was, before they were cloud computing, was
>> software-as-a-service, and then they became cloud computing. [...] Our
>> industry is so bizarre: they change a term and think they invented
>> technology.
>
> http://www.youtube.com/watch?v=rmrxN3GWHpM#t=45m
>
> I don't see anything inaccurate in what he said.


Re: [zfs-discuss] Help! Dedup delete FS advice needed!!

2010-08-15 Thread Tim Cook
On Sun, Aug 15, 2010 at 2:30 PM, Marc Emmerson wrote:

> Hi all,
> I have a 10TB array (zpool = 2x 5 disk raidz1), I had dedup enabled on a
> couple of filesystems which I decided to delete last week, the first
> contained about 6GB of data and was deleted in about 30 minutes, the second
> (about 100GB of VMs) is still being deleted (I think) 4.5 days later!
>
> Now, I've seen delete "dedup enabled fs" operations take a while before (2
> days) but 4.5 days is a surprise.
>
> I am wondering what (if anything) I can do to speed this up, my server only
> has 4GB RAM, would it be beneficial/safe for me to switch off, upgrade to
> 8GB?  I am assuming this may help the delete operation as more memory should
> mean that more of the dedup table is stored in RAM?
>
> Or is there anything else I can do to speed things up or indeed determine
> how much longer left?
>
> I'd appreciate any advice, cheers
>
>
It would be extremely beneficial for you to switch off and upgrade to 8GB.
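
You can also get a rough sense of how big the dedup table is (a sketch; zdb
output varies by build and is best-effort against a busy pool):

   zdb -DD <poolname>

which prints the dedup table histogram; progress can be watched indirectly
with "zpool iostat <poolname> 5".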

--Tim


[zfs-discuss] Help! Dedup delete FS advice needed!!

2010-08-15 Thread Marc Emmerson
Hi all,
I have a 10TB array (zpool = 2x 5 disk raidz1), I had dedup enabled on a couple 
of filesystems which I decided to delete last week, the first contained about 
6GB of data and was deleted in about 30 minutes, the second (about 100GB of 
VMs) is still being deleted (I think) 4.5 days later!

Now, I've seen delete "dedup enabled fs" operations take a while before (2 
days) but 4.5 days is a surprise.

I am wondering what (if anything) I can do to speed this up, my server only has 
4GB RAM, would it be beneficial/safe for me to switch off, upgrade to 8GB?  I 
am assuming this may help the delete operation as more memory should mean that 
more of the dedup table is stored in RAM?

Or is there anything else I can do to speed things up or indeed determine how 
much longer left?

I'd appreciate any advice, cheers


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-15 Thread Tim Cook
On Sun, Aug 15, 2010 at 9:48 AM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:

> On Sun, 15 Aug 2010, David Magda wrote:
>
>>
>> But that US$ 400 was only if you wanted support. For the last little while
>> you could run Solaris 10 legally without a support contract without issues.
>>
>
> The $400 number is bogus since the amount that Oracle quotes now depends on
> the value of the hardware that the OS will run on.  For my old SPARC Blade
> 2500 (which will probably not go beyond Solaris 10), the OS support cost was
> only in the $60-70 range.  On a brand-new high-end system, the cost is
> higher.  The OS support cost on a million dollar system would surely be
> quite high but owners of such systems will surely pay for system support
> rather than just OS support and care very much that their system continues
> running.
>
> The previous Sun software support pricing model was completely bogus. The
> Oracle model is also bogus, but at least it provides a means for an
> entry-level user to be able to afford support.
>
> Bob
>
>

The cost discussion is ridiculous, period.  $400 is a steal for support.
 You'll pay 3x or more for the same thing from Redhat or Novell.

--Tim


Re: [zfs-discuss] send/recv reads a lot from destination zpool

2010-08-15 Thread Phil Harman
I saw this the other day when doing an initial "auto sync" from one Nexenta 
3.0.3 node to another (using the ZFS/SSH method). I later tried it again with a 
fresh destination pool and the read traffic was minimal. Sadly I didn't have an 
opportunity to do an investigation, but it doesn't fit my current model of how 
things do or should work (which is troubling).
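
For anyone trying to reproduce it, the pattern in question is simply (pool
and dataset names hypothetical):

   zfs snapshot tank/data@sync1
   zfs send tank/data@sync1 | zfs receive destpool/data

   # in another terminal, watch per-pool traffic
   zpool iostat -v tank destpool 5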

On 15 Aug 2010, at 15:50, Jerome Warnier wrote:

> I just copied a snapshot from one zpool (let's call it "source") to
> another one ("destination") on the same machine using zfs send/recv.
> I'm wondering why this process is taking so much bandwidth reading from
> "destination", while writing to it, or reading from "source" did not?
> At least this is what "zpool iostat 5" says.
> 
> Any idea, anyone?
> 


Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-15 Thread Garrett D'Amore
On Sun, 2010-08-15 at 07:38 -0700, Richard Jahnel wrote:
> FWIW I'm making a significant bet that Nexenta plus Illumos will be the 
> future for the space in which I operate.
> 
> I had already begun the process of migrating my 134 boxes over to Nexenta 
> before Oracle's cunning plans became known. This just reaffirms my decision.


It warms my heart to hear you say that. :-)  After all, I made a similar
bet with my career. :-)

- Garrett



[zfs-discuss] send/recv reads a lot from destination zpool

2010-08-15 Thread Jerome Warnier
I just copied a snapshot from one zpool (let's call it "source") to
another one ("destination") on the same machine using zfs send/recv.
I'm wondering why this process is taking so much bandwidth reading from
"destination", while writing to it, or reading from "source" did not?
At least this is what "zpool iostat 5" says.

Any idea, anyone?



Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-15 Thread Bob Friesenhahn

On Sun, 15 Aug 2010, David Magda wrote:

> But that US$ 400 was only if you wanted support. For the last little while
> you could run Solaris 10 legally without a support contract without issues.

The $400 number is bogus since the amount that Oracle quotes now 
depends on the value of the hardware that the OS will run on.  For my 
old SPARC Blade 2500 (which will probably not go beyond Solaris 10), 
the OS support cost was only in the $60-70 range.  On a brand-new 
high-end system, the cost is higher.  The OS support cost on a million 
dollar system would surely be quite high but owners of such systems 
will surely pay for system support rather than just OS support and 
care very much that their system continues running.

The previous Sun software support pricing model was completely bogus. 
The Oracle model is also bogus, but at least it provides a means for 
an entry-level user to be able to afford support.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-15 Thread Jerome Warnier
On Sat, 2010-08-14 at 17:35 +0200, Andrej Podzimek wrote:
> > 3. Just stick with b134. Actually, I've managed to compile my way up to 
> > b142, but I'm having trouble getting beyond it - my attempts to install 
> > later versions just result in new boot environments with the old kernel, 
> > even with the latest pkg-gate code in place. Still, even if I get the 
> > latest code to install, it's not viable for the long term unless I'm 
> > willing to live with stasis.
> 
> I run build 146. There have been some heads-up messages on the topic. You 
> need b137 or later in order to build b143 or later. Plus the latest packaging 
> bits and other stuff. 
> http://mail.opensolaris.org/pipermail/on-discuss/2010-June/001932.html
> 
> When compiling b146, it's good to read this first: 
> http://mail.opensolaris.org/pipermail/on-discuss/2010-August/002110.html 
> Instead of using the tagged onnv_146 code, you have to apply all the 
> changesets up to 13011:dc5824d1233f.
>   
> > 6. Abandon ZFS completely and go back to LVM/MD-RAID. I ran it for years 
> > before switching to ZFS, and it works - but it's a bitter pill to swallow 
> > after drinking the ZFS Kool-Aid.
> 
> Or Btrfs. It may not be ready for production now, but it could become a 
> serious alternative to ZFS in one year's time or so. (I have been using it 
> for some time with absolutely no issues, but some people (Edward Shishkin) 
> say it has obvious bugs related to fragmentation.)

Do not forget Btrfs is mainly developed by ... Oracle. Will it survive
better than Free Solaris/ZFS?

> Andrej



Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-15 Thread Richard Jahnel
FWIW I'm making a significant bet that Nexenta plus Illumos will be the future 
for the space in which I operate.

I had already begun the process of migrating my 134 boxes over to Nexenta 
before Oracle's cunning plans became known. This just reaffirms my decision.


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-15 Thread David Magda

On Aug 14, 2010, at 14:54, Edward Ned Harvey wrote:

>> From: Russ Price
>>
>> For me, Solaris had zero mindshare since its beginning, on account of
>> being prohibitively expensive.
>
> I hear that a lot, and I don't get it.  $400/yr does move it out of peoples'
> basements generally, and keeps sol10 out of enormous clustering facilities
> that don't have special purposes or free alternatives.  But I wouldn't call
> it prohibitively expensive, for a whole lot of purposes.

But that US$ 400 was only if you wanted support. For the last little
while you could run Solaris 10 legally without a support contract
without issues.



Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-15 Thread David Magda

On Aug 14, 2010, at 19:39, Kevin Walker wrote:

> I once watched a video interview with Larry from Oracle, this ass rambled on
> about how he hates cloud computing and that everyone was getting into cloud
> computing and in his opinion no one understood cloud computing, apart from
> him... :-|

If this is the video you're talking about, I think you misinterpreted
what he meant:

> Cloud computing is not only the future of computing, but it is the
> present, and the entire past of computing is all cloud. [...] All it
> is is a computer connected to a network. What do you think Google
> runs on? Do you think they run on water vapour? It's databases, and
> operating systems, and memory, and microprocessors, and the
> Internet. And all of a sudden it's none of that, it's "the cloud".
> [...] All "the cloud" is, is computers on a network, in terms of
> technology. In terms of business model, you can say it's rental. All
> SalesForce.com was, before they were cloud computing, was
> software-as-a-service, and then they became cloud computing. [...] Our
> industry is so bizarre: they change a term and think they invented
> technology.

http://www.youtube.com/watch?v=rmrxN3GWHpM#t=45m

I don't see anything inaccurate in what he said.
