Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)
On Jan 14, 2011, at 14:32, Peter Taps wrote:

> Also, another related question. Why 256 bits was chosen and not 128 bits or
> 512 bits? I guess Sha512 may be an overkill. In your formula, how many blocks
> of data would be needed to have one collision using Sha128?

There are two ways to get 128 bits: use a 128-bit function (e.g., MD5), or use a longer function and truncate its output. In the case of MD5, it has been deprecated for a while now because of collisions. [1] Similarly, 160-bit hash functions (SHA-1) are showing collision weaknesses as well. [2] So the next step up is generally 256 bits (though there are a few 224-bit-output hashes out there).

However, if you're going to use a 256-bit hash function, why throw away half of the security when you've already done all the work to compute those 256 bits? You might as well use all the bits and get the extra security. Using a 512-bit hash function was probably deemed "too expensive" for CPUs at this time.

There's also the fact that things are a bit in flux currently, as there's a competition underway to find the official (US) 'next generation' hash function. [3] And while its official-ness is mostly a US (government) thing, it will probably become a de facto standard in many other countries and industries.

For the proverbial "SHA-128", halving the output size doesn't halve the number of blocks needed for a collision; it takes the square root. You'd expect a collision after roughly 2^64 blocks instead of roughly 2^128 for SHA-256. The precise math is left as an exercise for the reader.

[1] http://en.wikipedia.org/wiki/MD5#Security
[2] http://en.wikipedia.org/wiki/SHA-1#SHA-1
[3] http://en.wikipedia.org/wiki/SHA-3
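(Editorial aside: the standard birthday-bound approximation behind those numbers, in my own notation rather than Ed's formula from earlier in the thread:

    P(\text{collision}) \approx 1 - e^{-n^2 / 2^{b+1}},
    \qquad
    n_{1/2} \approx \sqrt{2^{b+1} \ln 2} \approx 1.18 \cdot 2^{b/2}

where b is the digest width in bits and n is the number of distinct blocks hashed. For b = 256 the 50% collision point sits near 2^128 blocks; for b = 128, near 2^64.)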
Re: [zfs-discuss] zfs send & tape autoloaders?
On Thu, Jan 13, 2011 at 8:09 AM, Stephan Budach wrote:

> Actually mbuffer does a great job for that, too. Whenever I am using mbuffer
> I am achieving much higher throughput than using ssh.

Agreed, mbuffer seems to be required to get decent throughput. Using it on both ends of an SSH pipe (or at least at the sending side) helps smooth out performance more than you'd expect.

-B

--
Brandon High : bh...@freaks.com
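(Editorial aside: for anyone searching the archives later, a sketch of the kind of pipeline being discussed. Hostnames, dataset/snapshot names, and buffer sizes are made up; check mbuffer(1) for the options your build supports.)

    # Sender: buffer the stream before it hits ssh, and again on the far
    # side before zfs receive, so neither end stalls the other.
    zfs send -i tank/fs@snap1 tank/fs@snap2 \
      | mbuffer -s 128k -m 1G \
      | ssh backuphost 'mbuffer -s 128k -m 1G | zfs receive -F tank/fs'

On a trusted network, mbuffer can also carry the stream itself (-O host:port on the sender, -I port on the receiver), which avoids ssh's cipher overhead entirely.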
Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)
I am posting this once again as my previous post went into the middle of the thread and may go unnoticed.

Ed,

Thank you for sharing the calculations. In lay terms, for Sha256, how many blocks of data would be needed to have one collision? Assuming each block is 4K in size, we can probably calculate the final data size beyond which a collision may occur. This would enable us to make the following statement:

"With Sha256, you need verification to be turned on only if you are dealing with more than xxxT of data."

Also, another related question. Why was 256 bits chosen and not 128 bits or 512 bits? I guess Sha512 may be overkill. In your formula, how many blocks of data would be needed to have one collision using Sha128?

Appreciate your help.

Regards,
Peter
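(Editorial aside: a rough version of the calculation Peter asks for, using the birthday bound sketched above; the numbers are illustrative, not Ed's.

    2^{128} \text{ blocks} \times 4\,\text{KiB/block}
    = 2^{140} \text{ bytes}
    \approx 1.4 \times 10^{42} \text{ bytes}
    \approx 10^{30} \text{ TB}

So the "xxxT" in that statement would be on the order of 10^30 TB for a 50% collision chance; even demanding a collision probability as low as one in a billion still leaves the threshold around 10^26 TB, vastly beyond any real pool.)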
Re: [zfs-discuss] Hard Errors on HDDs
Richard Elling writes:

> In my experience, this looks like a set of devices sitting behind an
> expander. I have seen one bad disk take out all disks sitting behind
> an expander. I have also seen bad disk firmware take out all disks
> behind an expander. I once saw a bad cable take out everything.
> -- richard

In my experience I've seen the same problems: a lot of SATA disks (Seagate Barracuda ES.2 and others), all behind expanders (Supermicro SC847 chassis). The issue was solved after we removed all SATA disks from behind our expander and replaced them with enterprise SAS disks. After that we only saw these problems when a connected SATA SSD died, so we also moved our SATA SSDs away from that backplane and connected them directly to the 1068-based controller.

The problem arose after we moved an identical server to an expander backplane (to get more drives connected). Before that, the disks had been running for months without any problems *directly* attached.

Regards,
Daniel
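(Editorial aside: not from the thread, but on an (Open)Solaris box the usual first step in telling a single bad disk from an expander-wide problem is something like the following; device names and output vary by system.)

    # Per-device error counters: if every disk behind the expander shows
    # transport errors while direct-attached disks are clean, suspect the
    # expander, a cable, or one misbehaving SATA device on the shared link.
    iostat -En | egrep 'Soft|Hard|Transport|Vendor'

    # FMA error telemetry with timestamps, to see whether errors on many
    # disks cluster around the same moments:
    fmdump -eV | less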
Re: [zfs-discuss] Size of incremental stream
Well, in this case, the data rsync sends is about the size of the "USED" column in "zfs list -t snapshot", while the zfs stream is four times bigger. Also, with rsync, if it fails in the middle, I don't have to start over.
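(Editorial aside: a quick way to put numbers on the two things being compared here; dataset and snapshot names are placeholders.)

    # Space the snapshot says it uniquely references:
    zfs list -t snapshot -o name,used tank/fs@snap2

    # Actual size of the incremental stream, measured by discarding it:
    zfs send -i tank/fs@snap1 tank/fs@snap2 | wc -c

The USED column only counts blocks unique to that snapshot, so some gap between the two numbers is expected; a large one is worth investigating.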