Re: [zfs-discuss] zfs + charset

2011-01-15 Thread Brandon High
The different normalization settings should be chosen based on the types of
clients you expect. Mac and Linux work better with one, and Windows works
best with another.

Searching for "formkc", etc. should yield some information on which is best
for your purposes.

-B

Sent from my Nexus One.
On Jan 15, 2011 6:17 AM, "Chris Ridd"  wrote:
>
> On 15 Jan 2011, at 13:57, Achim Wolpers wrote:
>
>> Am 15.01.11 14:52, schrieb Chris Ridd:
>>> What are the normalization properties of these filesystems? The zfs man
page says they're used when comparing filenames:
>> The normalization properties are set to none. Is this the key to my
>> solution?
>
> Judging from some discussion of normalization on this list in 2009 <
http://opensolaris.org/jive/thread.jspa?threadID=110207> I would say so.
>
> But I am not really certain of the practical implications of the other
settings. It feels like anything apart from "none" would be OK, but maybe
one is better than the others?
>
> Cheers,
>
> Chris


Re: [zfs-discuss] HP ProLiant N36L

2011-01-15 Thread Jan Sommer
I was able to resolve this issue:

I was testing FreeNAS with a raidz1 setup before I decided to check out 
Nexentastore, and it seems Nexentastore has some kind of problem if the 
hard disk array already contains raidz data. After wiping the disks 
with a tool from the "Ultimate Boot CD" I could perform all wizard steps and 
am using Nexenta now.
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-15 Thread Bob Friesenhahn

On Fri, 14 Jan 2011, Peter Taps wrote:

> Thank you for sharing the calculations. In lay terms, for Sha256, 
> how many blocks of data would be needed to have one collision?


Two.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [zfs-discuss] zpool scalability and performance

2011-01-15 Thread Roy Sigurd Karlsbakk
> the ZFS_Best_Practises_Guide states this:
> 
> "Keep vdevs belonging to one zpool of similar sizes; Otherwise, as the
> pool fills up, new allocations will be forced to favor larger vdevs
> over smaller ones and this will cause subsequent reads to come from a
> subset of underlying devices leading to lower performance."
> 
> I am setting up a zpool comprised of mirrored LUNs, each one being
> exported as a JBOD from my FC RAIDs. Now, as the zpool will fill up, I
> intend to attach more mirrors to it and I am wondering, if I
> understood that correctly.
> 
> Let's assume I am creating the initial zpool like this: zpool create
> tank mirror disk1a disk1b mirror disk2a disk2b mirror disk3a disk3b
> mirror disk4a disk4b. After some time the zpool has filled up and I
> attach another mirror to it: zpool attach tank mirror disk5a disk5b,
> this would mean that all new data would take a performance hit, since
> it can only be stored on the new mirror, instead of being
> distributed across all vdevs, right?
> 
> So, to circumvent this, it would be mandatory to add at least as many
> vdevs at once to satisfy the desired performance?

If you make a pool and then fill it to > 80% or so, it will slow down. Then, if 
you add more drives to that pool (mirrors, raidz, or whatever), new writes will 
basically go to the new vdevs, since the rest of them are full. To fix 
this, block rewrite needs to be implemented to provide a way of rebalancing an 
existing pool, and I don't think the ZFS developers are there yet. To avoid 
this, replacing the existing drives with larger drives (with autoexpand=on set on 
the pool) might be a solution.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is 
an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms 
exist in Norwegian.


Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-15 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Peter Taps
> 
> Thank you for sharing the calculations. In lay terms, for Sha256, how many
> blocks of data would be needed to have one collision?

There is no point in making a generalization or a recommended "best
practice," because it's all just a calculation of probabilities and
everyone will draw the line differently.  My personal recommendation is to
enable verification at work, just so you can never be challenged and never have
to convince your boss of something that is obvious to you.  The main
value of verification is that you don't need to explain anything to anyone.


Two blocks would be needed to have one collision, at a probability of 2^-256,
which is about 8.6 x 10^-78.  That is coincidentally very near the probability of
randomly selecting the same atom in the observable universe twice
consecutively.  The more blocks, the higher the probability, and that's all
there is to it (unless someone is intentionally trying to cause collisions
with data generated specifically and knowledgeably for that express
purpose).  Only when you start reaching 2^128 (roughly 3.4 x 10^38) blocks does a
collision become likely.

Every data pool in the world sits somewhere between 2 and 2^128 blocks:
squarely in the region where a collision is distinctly more likely than
randomly selecting the same atom of the universe twice, and less likely
than an armageddon caused by an asteroid hitting Earth.
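
For a rough feel of the numbers, here is a small Python sketch of the
birthday-bound approximation (the 128 TiB pool of 4 KiB blocks is just an
illustrative example, not a figure from this thread):

---
from fractions import Fraction

def sha256_collision_probability(num_blocks):
    # Birthday-bound approximation: p ~= n*(n-1)/2 / 2**256
    pairs = num_blocks * (num_blocks - 1) // 2
    return float(Fraction(pairs, 2 ** 256))

print(sha256_collision_probability(2))        # two blocks: ~8.6e-78
print(sha256_collision_probability(2 ** 35))  # 128 TiB of 4 KiB blocks: ~5e-57
---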



Re: [zfs-discuss] zfs + charset

2011-01-15 Thread Chris Ridd

On 15 Jan 2011, at 13:57, Achim Wolpers wrote:

> Am 15.01.11 14:52, schrieb Chris Ridd:
>> What are the normalization properties of these filesystems? The zfs man page 
>> says they're used when comparing filenames:
> The normalization properties are set to none. Is this the key to my
> solution?

Judging from some discussion of normalization on this list in 2009 
<http://opensolaris.org/jive/thread.jspa?threadID=110207> I would say so.

But I am not really certain of the practical implications of the other 
settings. It feels like anything apart from "none" would be OK, but maybe one 
is better than the others?

Cheers,

Chris


Re: [zfs-discuss] zfs + charset

2011-01-15 Thread Achim Wolpers
Am 15.01.11 14:52, schrieb Chris Ridd:
> What are the normalization properties of these filesystems? The zfs man page 
> says they're used when comparing filenames:
The normalization properties are set to none. Is this the key to my
solution?

Achim





Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-15 Thread Pawel Jakub Dawidek
On Fri, Jan 14, 2011 at 11:32:58AM -0800, Peter Taps wrote:
> Ed,
> 
> Thank you for sharing the calculations. In lay terms, for Sha256, how many 
> blocks of data would be needed to have one collision?
> 
> Assuming each block is 4K in size, we probably can calculate the final data 
> size beyond which the collision may occur. This would enable us to make the 
> following statement:
> 
> "With Sha256, you need verification to be turned on only if you are dealing 
> with more than xxxT of data."

Except that this is the wrong question to ask. The question you can ask is
"How many blocks of data do I need before the collision probability reaches a given level?".

> Also, another related question. Why 256 bits was chosen and not 128 bits or 
> 512 bits? I guess Sha512 may be an overkill. In your formula, how many blocks 
> of data would be needed to have one collision using Sha128?

There is no SHA128, and SHA512's hash is too long. Currently the maximum
hash ZFS can handle is 32 bytes (256 bits). Wasting another 32 bytes
without improving anything in practice probably wasn't worth it.

BTW, as for SHA512 being slower: it looks like it depends on the
implementation, or perhaps SHA512 is simply faster to compute on a 64-bit CPU.
On my laptop OpenSSL computes SHA256 55% _slower_ than SHA512.
If this is a general rule, maybe it would be worth considering using
SHA512 truncated to 256 bits to get more speed.
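
For what it's worth, a quick way to check this on any given machine is
Python's hashlib (which wraps OpenSSL); the 128 KiB buffer and round count
below are arbitrary test parameters:

---
import hashlib
import timeit

BUF = b"\0" * (128 * 1024)

def bench(name, rounds=2000):
    algo = getattr(hashlib, name)
    return timeit.timeit(lambda: algo(BUF).digest(), number=rounds)

print("sha256: %.3fs  sha512: %.3fs" % (bench("sha256"), bench("sha512")))

# A 256-bit value obtained by simply truncating SHA-512 (note: not the same
# as the standardized SHA-512/256, which uses different initial hash values).
truncated = hashlib.sha512(BUF).digest()[:32]
---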

-- 
Pawel Jakub Dawidek   http://www.wheelsystems.com
p...@freebsd.org   http://www.FreeBSD.org
FreeBSD committer Am I Evil? Yes, I Am!




Re: [zfs-discuss] zfs + charset

2011-01-15 Thread Chris Ridd

On 15 Jan 2011, at 13:44, Achim Wolpers wrote:

> Hi!
> 
> I have a problem with the charset in the following scenario:
> 
> - OSOL Server with zfs pool and NFS/CIFS shares enabled
> - OSX Client with CIFS mounts
> - OSX Client with NFSv3 mounts
> 
> If one of the clients saves a file with a special character in the
> filename, like 'äüöß', the other client cannot access that file. The
> characters are displayed correctly on both clients as well as on the
> server. What is the reason for this incompatibility between filenames
> created via CIFS and via NFS, and how can I get around it?

What are the normalization properties of these filesystems? The zfs man page 
says they're used when comparing filenames:

---
 normalization = none | formC | formD | formKC | formKD

 Indicates whether the file system should perform a  uni-
 code normalization of file names whenever two file names
 are compared, and which normalization  algorithm  should
 be  used. File names are always stored unmodified, names
 are normalized as part of  any  comparison  process.  If
 this  property  is set to a legal value other than none,
 and the utf8only  property  was  left  unspecified,  the
 utf8only  property  is  automatically  set  to  on.  The
 default value of the  normalization  property  is  none.
 This property cannot be changed after the file system is
 created.
---
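
For illustration, a minimal Python sketch of what that comparison-time
normalization amounts to (the assumption here being that the Mac client
hands over decomposed names, as OS X typically does):

---
import unicodedata

composed   = "\u00e4"    # 'ä' as one precomposed code point (form C)
decomposed = "a\u0308"   # 'a' plus combining diaeresis (form D)

print(composed == decomposed)                                # False: different code point sequences
print(unicodedata.normalize("NFC", decomposed) == composed)  # True once normalized
---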

Cheers,

Chris


[zfs-discuss] zfs + charset

2011-01-15 Thread Achim Wolpers
Hi!

I have a problem with the charset in the following scenario:

- OSOL Server with zfs pool and NFS/CIFS shares enabled
- OSX Client with CIFS mounts
- OSX Client with NFSv3 mounts

If one of the clients saves a file with a special character in the
filename, like 'äüöß', the other client cannot access that file. The
characters are displayed correctly on both clients as well as on the
server. What is the reason for this incompatibility between filenames
created via CIFS and via NFS, and how can I get around it?

Achim


