Re: [zfs-discuss] stupid ZFS question - floating point operations

2010-12-24 Thread Garrett D'Amore

Thanks for the clarification.  I need to go back and figure out how ZFS 
crypto keying is performed.  Most likely the key is generated from some sort 
of one-way hash of a passphrase?

  - Garrett

-Original Message-
From: Darren J Moffat [mailto:darren.mof...@oracle.com]
Sent: Thu 12/23/2010 1:32 AM
To: Garrett D'Amore
Cc: Erik Trimble; Jerry Kemp; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] stupid ZFS question - floating point operations
 
On 22/12/2010 20:27, Garrett D'Amore wrote:
 That said, some operations -- and cryptographic ones in particular --
 may use floating point registers and operations because for some
 architectures (sun4u rings a bell) this can make certain expensive

Well remembered!  There are sun4u optimisations that use the floating 
point unit, but those only apply to the bignum code, which in the kernel is 
only used by RSA.

 operations go faster. I don't think this is the case for secure
 hash/message digest algorithms, but if you use ZFS encryption as found
 in Solaris 11 Express you might find that on certain systems these
 registers are used for performance reasons, either on the bulk crypto or
 on the keying operations. (More likely the latter, but my memory of
 these optimizations is still hazy.)

RSA isn't used at all by ZFS encryption, everything is AES (including 
key wrapping) and SHA256.

So those optimisations for floating point don't come into play for ZFS 
encryption.
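
As a rough sketch of the passphrase keying Garrett guesses at above, one
common pattern is a PBKDF2-style derivation of an AES wrapping key
(illustrative only; the salt and iteration count here are placeholders,
not the values ZFS uses):

# Illustrative sketch: derive a 256-bit wrapping key from a passphrase with
# PBKDF2-HMAC-SHA256.  The data-encryption keys would then be wrapped
# (encrypted) with AES under this key.  Salt and iteration count are
# placeholders, not the values ZFS uses.
import hashlib
import os

passphrase = b"correct horse battery staple"
salt = os.urandom(16)        # per-dataset random salt (assumed)
iterations = 10000           # placeholder iteration count

wrapping_key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, iterations,
                                   dklen=32)
print(wrapping_key.hex())    # hex of the 32-byte AES wrapping key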

-- 
Darren J Moffat



Re: [zfs-discuss] SAS/short stroking vs. SSDs for ZIL

2010-12-24 Thread Edward Ned Harvey
 From: Frank Lahm [mailto:frankl...@googlemail.com]
 
 With Netatalk for AFP he _is_ running a database: any AFP server needs
 to maintain a consistent mapping between _not reused_ catalog node ids
 (CNIDs) and filesystem objects. Luckily for Apple, HFS[+] and their
 Cocoa/Carbon APIs provide such a mapping by making direct use of HFS+
 CNIDs. Unfortunately most UNIX filesystems reuse inodes and have no API
 for mapping inodes to filesystem objects. Therefore all AFP servers
 running on non-Apple OSen maintain a database providing this mapping;
 in the case of Netatalk it's `cnid_dbd` using a BerkeleyDB database.
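
For a concrete picture of the kind of state cnid_dbd maintains, here is a
minimal sketch of such a mapping: stable integer IDs assigned once and never
reused, persisted in a small key/value database (hypothetical names and
layout, not Netatalk's actual schema):

# Minimal CNID-style mapping sketch: IDs are handed out from a forward-only
# counter and persisted, so a filesystem object keeps the same ID for life
# and deleted IDs are never reused.  Purely illustrative.
import dbm

def assign_cnid(db_path, fs_path):
    with dbm.open(db_path, "c") as db:
        key = b"path:" + fs_path.encode()
        if key in db:
            return int(db[key])          # existing object keeps its CNID
        if b"next" in db:
            next_id = int(db[b"next"])
        else:
            next_id = 17                 # first free CNID, placeholder value
        db[key] = str(next_id).encode()
        db[b"cnid:%d" % next_id] = fs_path.encode()
        db[b"next"] = str(next_id + 1).encode()  # counter only moves forward
        return next_id

print(assign_cnid("/tmp/cnids", "/export/share/report.doc"))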

Don't all of those concerns disappear in the event of a reboot?

If you stop AFP, you could completely obliterate the BDB database, and restart 
AFP, and functionally continue from where you left off.  Right?



Re: [zfs-discuss] SAS/short stroking vs. SSDs for ZIL

2010-12-24 Thread Richard Elling
On Dec 23, 2010, at 2:25 AM, Stephan Budach wrote:
 Following the discussion about which SSD to use as ZIL drives, I stumbled
 across this article that discusses short stroking for increasing IOPS on
 SAS and SATA drives:
 
 http://www.tomshardware.com/reviews/short-stroking-hdd,2157.html
 
 Now, I am wondering if using a mirror of such 15k SAS drives would be a 
 good-enough fit for a ZIL on a zpool that is mainly used for file services 
 via AFP and SMB.

SMB does not create much of a synchronous load.  I haven't explored AFP
directly, but if they do use Berkeley DB, then we do have a lot of experience
tuning ZFS for Berkeley DB performance.

 I'd particularly like to know if someone has already used such a solution and
 how it has worked out.

Latency is what matters most.  While there is a loose relationship between IOPS
and latency, you really want low latency.  For 15krpm drives, the average latency
is 2ms for zero seeks.  A decent SSD will beat that by an order of magnitude.
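
The 2ms figure is just the rotational component; a quick back-of-the-envelope
check (numbers only, nothing ZFS-specific):

# Average rotational latency is half a revolution, so for a 15,000 rpm drive:
rpm = 15000
ms_per_rev = 60.0 / rpm * 1000           # 4 ms per full revolution
avg_rotational_latency_ms = ms_per_rev / 2
print(avg_rotational_latency_ms)         # -> 2.0 ms at zero seek
# A decent SSD services a small synchronous write in the low hundreds of
# microseconds, i.e. roughly an order of magnitude lower latency.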
 -- richard



Re: [zfs-discuss] SAS/short stroking vs. SSDs for ZIL

2010-12-24 Thread Phil Harman

On 24/12/2010 18:21, Richard Elling wrote:
Latency is what matters most.  While there is a loose relationship between
IOPS and latency, you really want low latency.  For 15krpm drives, the average
latency is 2ms for zero seeks.  A decent SSD will beat that by an order of
magnitude.


And the closer you get to the CPU, the lower the latency. For example, the
DDRdrive X1 is yet another order of magnitude faster because it sits directly
on the PCI bus, without the overhead of the SAS protocol.


Yet the humble old 15K drive with 2ms sequential latency is still an
order of magnitude faster than a busy drive delivering 20ms latencies
under a random workload.
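
A rough sketch of where those two numbers come from (the seek time and queue
depth below are assumed, typical-looking figures, not measurements):

# Sequential ZIL writes to a dedicated 15K drive pay roughly rotational
# latency only; a busy drive under a random workload also pays seek time,
# and queued requests multiply the wait.
rotational_ms = 2.0                        # from the previous post
seek_ms = 3.5                              # typical 15K average seek (assumed)
queue_depth = 3                            # requests already waiting (assumed)
random_service_ms = seek_ms + rotational_ms
busy_latency_ms = (queue_depth + 1) * random_service_ms
print(random_service_ms, busy_latency_ms)  # -> 5.5 22.0 (ms)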



Re: [zfs-discuss] Looking for 3.5 SSD for ZIL

2010-12-24 Thread Richard Elling
On Dec 22, 2010, at 8:57 PM, Bill Werner wer...@cubbyhole.com wrote:

 got it attached to a UPS with very conservative shut-down timing. Or are
 there other host failures aside from power that a ZIL would be vulnerable
 to (system hard-locks?)?
 
 Correct, a system hard-lock is another example...
 
 How about comparing a non-battery-backed ZIL to running a ZFS dataset with
 sync=disabled. Which is more risky?

Disabling the ZIL is always more risky. But more importantly, disabling
the ZIL is a policy decision. If the user is happy with that policy, then
they should be happy with the consequence.
  -- richard

 


Re: [zfs-discuss] ZFS ... open source moving forward?

2010-12-24 Thread Martin Matuska
Tim Cook tim at cook.ms writes:

 You are not a court of law, and that statement has not been tested.  It is
your opinion and nothing more.  I'd appreciate it if every time you repeated that
statement, you'd preface it with "in my opinion" so you don't have people
running around believing what they're doing is safe.  I'd hope they'd be smart
enough to consult with a lawyer, but it's probably better to just not spread
unsubstantiated rumor in the first place.
 
 --Tim

Hi guys, I am one of the folks porting ZFS to FreeBSD.

You might want to look at this site: http://www.sun.com/lawsuit/zfs/

There are three main threatening Netapp patents mentioned:
5,819,292 - copy on write
7,174,352 - filesystem snapshot
6,857,001 - writable snapshots

You can examine the documents at: http://www.sun.com/lawsuit/zfs/documents.jsp

5,819,292:
This one has a final action by the U.S. Patent Office dated 16.06.2009. In this
action almost all claims subject to reexamination were rejected by the Office
(due to anticipation); only claims 1, 21 and 22 were confirmed as patentable.
These claims are not significant for copy-on-write, so you can consider the
copy-on-write patent by Netapp rejected. With this document in your hands they
cannot expect to win a lawsuit against you on copy-on-write anymore, as there
is not much of the patent left.

7,174,352:
This patent has a non-final action rejecting all the claims due to
anticipation. There may exist a final action that confirms this, but it's not
among the documents. If there is a final action, you can use any filesystem
that does snapshots without risking a lawsuit from Netapp. The non-final
action document is a very strong asset in your hands, anyway :-)

6,857,001:
No documents for this patent at the site.

So you can use copy-on-write - according to the documents, all relevant parts
of the patent are rejected.
Snapshots - the non-final action document is a good asset, but I don't know if
there is a final action document. This patent can be considered almost
rejected.
Clones - no idea.

But remember, this goes for ANY filesystem, not only ZFS.
So every filesystem doing snapshots or clones (btrfs?) would actually
need permission from Netapp, as they involve these patents ;-)

