On 02/27/2013 12:32 PM, Ahmed Kamal wrote:
> How is the quality of the ZFS Linux port today? Is it comparable to Illumos
> or at least FreeBSD ? Can I trust production data to it ?
Can't speak from personal experience, but a colleague of mine has been
running PPA builds on Ubuntu and has had, well, less t
On 02/26/2013 05:57 PM, Eugen Leitl wrote:
> On Tue, Feb 26, 2013 at 06:51:08AM -0800, Gary Driggs wrote:
>> On Feb 26, 2013, at 12:44 AM, "Sašo Kiselkov" wrote:
>>
>> I'd also recommend that you go and subscribe to z...@lists.illumos.org, since
>
> I ca
On 02/26/2013 03:51 PM, Gary Driggs wrote:
> On Feb 26, 2013, at 12:44 AM, "Sašo Kiselkov" wrote:
>
> I'd also recommend that you go and subscribe to z...@lists.illumos.org, since
> this list is going to get shut down by Oracle next month.
>
> Whose descrip
On 02/26/2013 09:33 AM, Tiernan OToole wrote:
> As a follow up question: Data Deduplication: The machine, to start, will
> have about 5Gb RAM. I read somewhere that 20TB storage would require about
> 8GB RAM, depending on block size...
The typical wisdom is that 1TB of dedup'ed data = 1GB of RAM.
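That said, the rule of thumb is only a rough guide; the real requirement scales
with the number of unique blocks times the in-core size of a DDT entry, so a
small average block size drives it up quickly. If the data already sits on a
pool, zdb gives a much better estimate than any rule of thumb (a minimal
sketch; "tank" is a placeholder pool name):

    # Simulate dedup on an existing (non-deduped) pool and print a DDT histogram
    zdb -S tank

    # On a pool that already runs with dedup, print DDT statistics,
    # including per-entry size on disk and in core
    zdb -DD tank

Multiplying the total entry count by the in-core entry size gives a ballpark
figure for how much RAM the DDT wants in order to stay resident.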
On 02/21/2013 04:02 PM, Markus Grundmann wrote:
> On 02/21/2013 03:34 PM, Jan Owoc wrote:
>> Does this do what you want? (zpool destroy is already undo-able) Jan
>
> Jan, that's not what I want.
> I want to set a property that enables/disables all modifications with zpool
> commands (e.g. "zfs destroy
On 02/21/2013 12:27 AM, Peter Wood wrote:
> Will adding another vdev hurt the performance?
In general, the answer is: no. ZFS will try to balance writes to
top-level vdevs in a fashion that assures even data distribution. If
your data is equally likely to be hit in all places, then you will not
in
On 02/17/2013 06:40 AM, Ian Collins wrote:
> Toby Thain wrote:
>> Signed up, thanks.
>>
>> The ZFS list has been very high value and I thank everyone whose wisdom
>> I have enjoyed, especially people like you Sašo, Mr Elling, Mr
>> Friesenhahn, Mr Harvey, the distinguished Sun and Oracle engineers
On 02/16/2013 10:47 PM, James C. McPherson wrote:
> On 17/02/13 06:54 AM, Sašo Kiselkov wrote:
>> On 02/16/2013 09:49 PM, John D Groenveld wrote:
>>> Boot with kernel debugger so you can see the panic.
>>
>> Sadly, though, without access to the source code, all he do
On 02/16/2013 09:49 PM, John D Groenveld wrote:
> Boot with kernel debugger so you can see the panic.
Sadly, though, without access to the source code, all he can do at that
point is log a support ticket with Oracle (assuming he has paid his
support fees) and hope it will get picked up by somebody
On 02/16/2013 06:44 PM, Tim Cook wrote:
> We've got Oracle employees on the mailing list, who, while helpful, in no
> way have the authority to speak for company policy. They've made that
> clear on numerous occasions. And that doesn't change the fact that we
> literally have heard NOTHING from O
On 02/15/2013 03:39 PM, Tyler Walter wrote:
> As someone who has zero insider information and feels that there isn't
> much push at oracle to develop or release new zfs features, I have to
> assume it's not coming. The only way I see it becoming a reality is if
> someone in the illumos community de
On 02/13/2013 04:30 PM, Kiley, Heather L (IS) wrote:
> I am trying to replace a failed disk on my zfs system.
> I replaced the disk and while the physical drive status is now OK, my logical
> drive is still failed.
> When I do a zpool status, the new disk comes up as unavailable:
> spa
On 02/10/2013 01:01 PM, Koopmann, Jan-Peter wrote:
> Why should it?
>
> I believe currently only Nexenta but correct me if I am wrong
The code was mainlined a while ago, see:
https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/io/comstar/lu/stmf_sbd/sbd.c#L3702-L3730
http
On 02/11/2013 04:53 PM, Borja Marcos wrote:
>
> Hello,
>
> I'm updating Devilator, the performance data collector for Orca and FreeBSD
> to include ZFS monitoring. So far I am graphing the ARC and L2ARC size, L2ARC
> writes and reads, and several hit/misses data pairs.
>
> Any suggestions to i
On 02/05/2013 05:04 PM, Sašo Kiselkov wrote:
> On 01/31/2013 11:16 PM, Albert Shih wrote:
>> Hi all,
>>
>> I'm not sure if the problem is with FreeBSD or ZFS or both so I cross-post
>> (I know it's bad).
>>
>> Well I've server running FreeB
On 01/31/2013 11:16 PM, Albert Shih wrote:
> Hi all,
>
> I'm not sure if the problem is with FreeBSD or ZFS or both so I cross-post
> (I know it's bad).
>
> Well I've a server running FreeBSD 9.0 with (don't count / on different
> disks) a zfs pool with 36 disks.
>
> The performance is very very g
On 01/29/2013 03:08 PM, Robert Milkowski wrote:
>> From: Richard Elling
>> Sent: 21 January 2013 03:51
>> VAAI has 4 features, 3 of which have been in illumos for a long time. The
>> remaining feature (SCSI UNMAP) was done by Nexenta and exists in their
>> NexentaStor product, but the CEO made
On 01/29/2013 02:59 PM, Robert Milkowski wrote:
>>> It also has a lot of performance improvements and general bug fixes in
>>> the Solaris 11.1 release.
>>
>> Performance improvements such as?
>
>
> Dedup'ed ARC for one.
> Zero blocks automatically "dedup'ed" in-memory.
> Improvements to ZIL perfo
On 01/22/2013 11:22 PM, Jim Klimov wrote:
> On 2013-01-22 23:03, Sašo Kiselkov wrote:
>> On 01/22/2013 10:45 PM, Jim Klimov wrote:
>>> On 2013-01-22 14:29, Darren J Moffat wrote:
>>>> Preallocated ZVOLs - for swap/dump.
>>>
>>> Or is it also sup
On 01/22/2013 10:45 PM, Jim Klimov wrote:
> On 2013-01-22 14:29, Darren J Moffat wrote:
>> Preallocated ZVOLs - for swap/dump.
>
> Or is it also supported to disable COW for such datasets, so that
> the preallocated swap/dump zvols might remain contiguous on the
> faster tracks of the drive (i.e.
On 01/22/2013 05:34 PM, Darren J Moffat wrote:
>
>
> On 01/22/13 16:02, Sašo Kiselkov wrote:
>> On 01/22/2013 05:00 PM, casper@oracle.com wrote:
>>>> Some vendors call this (and things like it) "Thin Provisioning", I'd say
>>>> it is more
On 01/22/2013 05:00 PM, casper@oracle.com wrote:
>> Some vendors call this (and things like it) "Thin Provisioning", I'd say
>> it is more "accurate communication between 'disk' and filesystem" about
>> in use blocks.
>
> In some cases, users of disks are charged by bytes in use; when not usi
On 01/22/2013 04:32 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
>> From: Darren J Moffat [mailto:darr...@opensolaris.org]
>>
>> Support for SCSI UNMAP - both issuing it and honoring it when it is the
>> backing store of an iSCSI target.
>
> When I search for scsi unmap, I c
On 01/22/2013 02:39 PM, Darren J Moffat wrote:
>
> On 01/22/13 13:29, Darren J Moffat wrote:
>> Since I'm replying here are a few others that have been introduced in
>> Solaris 11 or 11.1.
>
> and another one I can't believe I missed since I was one of the people
> that helped design it and I did
On 01/22/2013 02:20 PM, Michel Jansens wrote:
>
> Maybe 'shadow migration' ? (eg: zfs create -o shadow=nfs://server/dir
> pool/newfs)
Hm, interesting, so it works as a sort of replication system, except
that the data needs to be read-only and you can start accessing it on
the target before the i
On 01/22/2013 12:30 PM, Darren J Moffat wrote:
> On 01/21/13 17:03, Sašo Kiselkov wrote:
>> Again, what significant features did they add besides encryption? I'm
>> not saying they didn't, I'm just not aware of that many.
>
> Just a few examples:
>
> Sol
On 01/22/2013 03:56 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
>> From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
>>
>> as far as incompatibility among products, I've yet to come
>> across it
>
> I was talking about ... install solar
On 01/21/2013 02:28 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>>
>> I disagree the ZFS is developmentally challenged.
>
> As an IT consultant, 8 years ago before I heard of ZFS, it was always easy
> to sell Ontap,
On 01/08/2013 04:27 PM, mark wrote:
>> On Jul 2, 2012, at 7:57 PM, Richard Elling wrote:
>>
>> FYI, HP also sells an 8-port IT-style HBA (SC-08Ge), but it is hard to
>> locate
>> with their configurators. There might be a more modern equivalent cleverly
>> hidden somewhere difficult to find.
>>
On 01/07/2013 09:32 PM, Tim Fletcher wrote:
> On 07/01/13 14:01, Andrzej Sochon wrote:
>> Hello *Sašo*!
>>
>> I found you here:
>> http://mail.opensolaris.org/pipermail/zfs-discuss/2012-May/051546.html
>>
>> “How about reflashing LSI firmware to the card? I read on Dell's spec
>>
>> sheets that the
On 11/14/2012 11:14 AM, Michel Jansens wrote:
> Hi,
>
> I've ordered a new server with:
> - 4x600GB Toshiba 10K SAS2 Disks
> - 2x100GB OCZ DENEVA 2R SYNC eMLC SATA (no expander so I hope no
> SAS/SATA problems). Specs:
> http://www.oczenterprise.com/ssd-products/deneva-2-r-sata-6g-2.5-emlc.html
>
We've got an SC847E26-RJBOD1. It takes a bit of getting used to that you
have to wire it yourself (plus you need to buy a pair of internal
SFF-8087 cables to connect the back and front backplanes - incredible
that SuperMicro doesn't provide those out of the box), but other than that,
never had a problem wit
On 11/07/2012 01:16 PM, Eugen Leitl wrote:
> I'm very interested, as I'm currently working on an all-in-one with
> ESXi (using N40L for prototype and zfs send target, and a Supermicro
> ESXi box for production with guests, all booted from USB internally
> and zfs snapshot/send source).
Well, seein
On 11/07/2012 12:39 PM, Tiernan OToole wrote:
> Morning all...
>
> I have a Dedicated server in a data center in Germany, and it has 2 3TB
> drives, but only software RAID. I have got them to install VMWare ESXi and
> so far everything is going ok... I have the 2 drives as standard data
> stores..
On 10/25/2012 05:40 PM, Bob Friesenhahn wrote:
> On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
>
>> On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
>>> On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
>>>>
>>>> Look for Dell's "6Gbps SAS HBA" cards.
On 10/25/2012 04:28 PM, Patrick Hahn wrote:
> On Thu, Oct 25, 2012 at 10:13 AM, Sašo Kiselkov wrote:
>
>> On 10/25/2012 04:11 PM, Sašo Kiselkov wrote:
>>> On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
>>>> On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
On 10/25/2012 04:11 PM, Sašo Kiselkov wrote:
> On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
>> On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
>>>
>>> Look for Dell's "6Gbps SAS HBA" cards. They can be had new for <$100 and
>>> are essentially r
On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
> On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
>>
>> Look for Dell's "6Gbps SAS HBA" cards. They can be had new for <$100 and
>> are essentially rebranded LSI 9200-8e cards. Always try to look for OEM
>> card
On 10/25/2012 05:59 AM, Jerry Kemp wrote:
> I have just acquired a new JBOD box that will be used as a media
> center/storage for home use only on my x86/x64 box running OpenIndiana
> b151a7 currently.
>
> It's strictly a JBOD, no hw raid options, with an eSATA port to each drive.
>
> I am looking
On 09/26/2012 05:18 PM, Matt Van Mater wrote:
>>
>> If the added device is slower, you will experience a slight drop in
>> per-op performance, however, if your working set needs another SSD,
>> overall it might improve your throughput (as the cache hit ratio will
>> increase).
>>
>
> Thanks for yo
On 09/26/2012 05:08 PM, Matt Van Mater wrote:
> I've looked on the mailing list (the evil tuning wikis are down) and
> haven't seen a reference to this seemingly simple question...
>
> I have two OCZ Vertex 4 SSDs acting as L2ARC. I have a spare Crucial SSD
> (about 1.5 years old) that isn't gett
On 09/26/2012 01:14 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Jim Klimov
>>
>> Got me wondering: how many reads of a block from spinning rust
>> suffice for it to ult
On 09/25/2012 09:38 PM, Jim Klimov wrote:
> 2012-09-11 16:29, Edward Ned Harvey
> (opensolarisisdeadlongliveopensolaris) wrote:
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Dan Swartzendruber
>>>
>>> My first thought was everything is
On 09/21/2012 01:34 AM, Jason Usher wrote:
> Hi,
>
> I have a ZFS filesystem with compression turned on. Does the "used" property
> show me the actual data size, or the compressed data size ? If it shows me
> the compressed size, where can I see the actual data size ?
It shows the allocated n
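One way to see both numbers side by side (a sketch; "tank/fs" is a placeholder
dataset):

    zfs get used,referenced,compressratio tank/fs
    # used/referenced are the allocated, post-compression sizes;
    # multiplying referenced by compressratio approximates the logical size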
Have you tried a zpool clear and subsequent scrub to see if the error
pops up again?
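Roughly this sequence (a minimal sketch; substitute your pool name for "tank"):

    zpool clear tank        # clear the error counters and fault state
    zpool scrub tank        # re-read and checksum every allocated block
    zpool status -v tank    # watch scrub progress and any new errors

If the scrub comes back clean, the errors were most likely transient fallout
from the switch restart.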
Cheers,
--
Saso
On 09/20/2012 09:45 AM, Stephan Budach wrote:
> Hi,
>
> a couple of days we had an issue with one of our FC switches which led
> to a switch restart. Due to this issue the zpool vdevs had been
>
On 09/18/2012 04:31 PM, Eugen Leitl wrote:
>
> I'm currently thinking about rolling a variant of
>
> http://www.napp-it.org/napp-it/all-in-one/index_en.html
>
> with remote backup (via snapshot and send) to 2-3
> other (HP N40L-based) zfs boxes for production in
> our organisation. The systems t
On 09/11/2012 04:06 PM, Dan Swartzendruber wrote:
> Thanks a lot for clarifying how this works.
You're very welcome.
> Since I'm quite happy
> having an SSD in my workstation, I will need to purchase another SSD :) I'm
> wondering if it makes more sense to buy two SSDs of half the size (e.g.
>
On 09/11/2012 03:41 PM, Dan Swartzendruber wrote:
> LOL, I actually was unclear, not you. I understood what you were saying,
> sorry for being unclear. I have 4 disks in raid10, so my max random read
> throughput is theoretically somewhat faster than the L2ARC device, but I
> never really do that
On 09/11/2012 03:32 PM, Dan Swartzendruber wrote:
> I think you may have a point. I'm also inclined to enable prefetch caching
> per Saso's comment, since I don't have massive throughput - latency is more
> important to me.
I meant to say the exact opposite: enable prefetch caching only if your
l
On 09/05/2012 05:06 AM, Yaverot wrote:
> "What is the smallest sized drive I may use to replace this dead drive?"
>
> That information has to be someplace because ZFS will say that drive Q is too
> small. Is there an easy way to query that information?
I use fdisk to find this out. For instance
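A couple of hedged examples of the kind of check meant here (Solaris-style
device paths assumed; prtvtoc and iostat shown as alternatives to fdisk):

    prtvtoc /dev/rdsk/c0t0d0s2    # sector count and geometry of the disk
    iostat -En                    # per-device summary, including a "Size:" line

Compare the replacement drive's size against the smallest existing member of
the vdev.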
On 08/30/2012 04:22 PM, Anonymous wrote:
>> On 08/30/2012 12:07 PM, Anonymous wrote:
>>> Hi. I have a spare off the shelf consumer PC and was thinking about loading
>>> Solaris on it for a development box since I use Studio @work and like it
>>> better than gcc. I was thinking maybe it isn't so sma
On 08/30/2012 04:08 PM, Nomen Nescio wrote:
>>> Hi. I have a spare off the shelf consumer PC and was thinking about loading
>>> Solaris on it for a development box since I use Studio @work and like it
>>> better than gcc. I was thinking maybe it isn't so smart to use ZFS since it
>>> has only one d
On 08/30/2012 12:07 PM, Anonymous wrote:
> Hi. I have a spare off the shelf consumer PC and was thinking about loading
> Solaris on it for a development box since I use Studio @work and like it
> better than gcc. I was thinking maybe it isn't so smart to use ZFS since it
> has only one drive. If ZF
On 08/27/2012 09:02 PM, Mark Wolek wrote:
> RAIDz set, lost a disk, replaced it... lost another disk during resilver.
> Replaced it, ran another resilver, and now it shows all disks with too many
> errors.
>
> Safe to say this is getting rebuilt and restored, or is there hope to recover
> some
On 08/27/2012 12:58 PM, Yuri Vorobyev wrote:
> 27.08.2012 14:43, Sašo Kiselkov wrote:
>
>>> Is there any way to disable ARC for testing and leave prefetch enabled?
>>
>> No. The reason is quite simply because prefetch is a mechanism separate
>> from your d
On 08/27/2012 10:37 AM, Yuri Vorobyev wrote:
> Is there any way to disable ARC for testing and leave prefetch enabled?
No. The reason is quite simply because prefetch is a mechanism separate
from your direct application's read requests. Prefetch runs on ahead of
your anticipated read requests and
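For reference, the knobs that do exist work in the other direction (a sketch;
"tank/test" is a placeholder dataset):

    # Per-dataset: cache only metadata in the ARC; this effectively defeats
    # prefetch too, since prefetched data has nowhere to land but the ARC
    zfs set primarycache=metadata tank/test

    # Global (illumos/Solaris): disable file-level prefetch entirely,
    # by adding this line to /etc/system and rebooting:
    #   set zfs:zfs_prefetch_disable = 1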
On 08/26/2012 07:40 AM, Yuri Vorobyev wrote:
> Can someone with Supermicro JBOD equipped with SAS drives and LSI
> HBA do this sequential read test?
Did that on a SC847 with 45 drives, read speeds around 2GB/s aren't a
problem.
> Don't forget to set primarycache=none on testing dataset.
There's
On 08/25/2012 11:53 AM, Jim Klimov wrote:
>> No they're not, here's l2arc_buf_hdr_t a per-buffer structure
>> held for
>> buffers which were moved to l2arc:
>>
>> typedef struct l2arc_buf_hdr {
>> l2arc_dev_t *b_dev;
>> uint64_t b_daddr;
>> } l2arc_buf_hdr_t;
>>
>> That's about 16-bytes overhead p
On 08/25/2012 12:22 AM, Jim Klimov wrote:
> 2012-08-25 0:42, Sašo Kiselkov wrote:
>> Oh man, that's a million-billion points you made. I'll try to run
>> through each quickly.
>
> Thanks...
> I still do not have the feeling that you've fully got my
Oh man, that's a million-billion points you made. I'll try to run
through each quickly.
On 08/24/2012 05:43 PM, Jim Klimov wrote:
> First of all, thanks for reading and discussing! :)
No problem at all ;)
> 2012-08-24 17:50, Sašo Kiselkov wrote:
>> This is something I
On 08/24/2012 05:13 PM, Scott Aitken wrote:
> Hi all,
>
> I know the easiest answer to this question is "don't do it in the first
> place, and if you do, you should have a backup", however I'll ask it
> regardless.
>
> Is there a way to backup the ZFS metadata on each member device of a pool
> to
This is something I've been looking into in the code and my take on your
proposed points is this:
1) This requires many and deep changes across much of ZFS's architecture
(especially the ability to sustain tlvdev failures).
2) Most of this can be achieved (except for cache persistency) by
implementi
On 08/20/2012 10:15 PM, Jim Klimov wrote:
> 2012-08-20 23:39, Sašo Kiselkov wrote:
>>> We then tried to recreate the pool, which was successful - but
>>> without data…
>>
>> A zpool create overwrites all labels on a device (that's why you had to
>> ad
On 08/20/2012 08:55 PM, Ernest Dipko wrote:
> Is there any way to recover the data within a zpool after a zpool create -f
> was issued on the disks?
>
> We had a pool that contained two internal disks (mirrored) and we added a
> zvol to it out of an existing pool for some temporary space. After
On 08/13/2012 02:01 PM, Ray Arachelian wrote:
> On 08/13/2012 06:50 AM, Sašo Kiselkov wrote:
>> See the -d option to zpool import. -- Saso
>
> Many thanks for this, it worked very nicely, though the first time
> I ran it, it failed. So what -d does is to substitute /dev. In
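More precisely, -d tells zpool import which directory to scan for devices
instead of the default /dev/dsk (a sketch; paths and pool name are
illustrative):

    # Search an alternate directory, e.g. where file vdevs or symlinks live
    zpool import -d /vdevs tank

    # -d can be given more than once to search several directories
    zpool import -d /dev/dsk -d /vdevs tank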
On 08/13/2012 12:48 PM, Ray Arachelian wrote:
> While attempting to fix the last of my damaged zpools, there's one that
> consists of 4 drives + one 60G file. The file happened by accident - I
> attempted to add a partition off an SSD drive but missed the cache
> keyword. Of course, once this is
On 08/13/2012 10:45 AM, Scott wrote:
> Hi Saso,
>
> thanks for your reply.
>
> If all disks are the same, is the root pointer the same?
No.
> Also, is there a "signature" or something unique to the root block that I can
> search for on the disk? I'm going through the On-disk specification at t
On 08/13/2012 10:00 AM, Sašo Kiselkov wrote:
> On 08/13/2012 03:02 AM, Scott wrote:
>> Hi all,
>>
>> I have a 5 disk raidz array in a state of disrepair. Suffice to say three
>> disks are ok, while two are missing all their labels. (Both ends of the
>> disks were
On 08/13/2012 03:02 AM, Scott wrote:
> Hi all,
>
> I have a 5 disk raidz array in a state of disrepair. Suffice to say three
> disks are ok, while two are missing all their labels. (Both ends of the
> disks were overwritten). The data is still intact.
There are 4 labels on a zfs-labeled disk,
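The quickest way to see which of them survived is to dump them directly (a
sketch; the device path is a placeholder):

    # Print all four vdev labels: L0/L1 at the start of the device,
    # L2/L3 at the end
    zdb -l /dev/rdsk/c0t0d0s0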
On 08/09/2012 01:11 PM, Joerg Schilling wrote:
> Sašo Kiselkov wrote:
>
>> On 08/09/2012 01:05 PM, Joerg Schilling wrote:
>>> Sašo Kiselkov wrote:
>>>
> To me it seems that the "open-sourced ZFS community" is not open, or
> could you
> point me to their mailing list archives?
>
On 08/09/2012 01:05 PM, Joerg Schilling wrote:
> Sašo Kiselkov wrote:
>
>>> To me it seems that the "open-sourced ZFS community" is not open, or could
>>> you
>>> point me to their mailing list archives?
>>>
>>> Jörg
>>>
>>
>> z...@lists.illumos.org
>
> Well, why then has there been a discussi
On 08/09/2012 12:52 PM, Joerg Schilling wrote:
> Jim Klimov wrote:
>
>> In the end, the open-sourced ZFS community got no public replies
>> from Oracle regarding collaboration or lack thereof, and decided
>> to part ways and implement things independently from Oracle.
>> AFAIK main ZFS developmen
On 08/07/2012 04:08 PM, Bob Friesenhahn wrote:
> On Tue, 7 Aug 2012, Sašo Kiselkov wrote:
>>
>> MLC is so much cheaper that you can simply slap on twice as much and use
>> the rest for ECC, mirroring or simply overprovisioning sectors. The
>> common practice to extending
On 08/07/2012 02:18 AM, Christopher George wrote:
>> I mean this as constructive criticism, not as angry bickering. I totally
>> respect you guys doing your own thing.
>
> Thanks, I'll try my best to address your comments...
Thanks for your kind reply, though there are some points I'd like to
add
On 08/07/2012 12:12 AM, Christopher George wrote:
>> Is your DDRdrive product still supported and moving?
>
> Yes, we now exclusively target ZIL acceleration.
>
> We will be at the upcoming OpenStorage Summit 2012,
> and encourage those attending to stop by our booth and
> say hello :-)
>
> http
On 08/03/2012 03:18 PM, Justin Stringfellow wrote:
> While this isn't causing me any problems, I'm curious as to why this is
> happening...:
>
>
>
> $ dd if=/dev/random of=ob bs=128k count=1 && while true
Can you check whether this happens from /dev/urandom as well?
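I.e. the same dd invocation with only the input device changed (a minimal
sketch; the rest of the original loop is not reproduced here):

    dd if=/dev/urandom of=ob bs=128k count=1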
--
Saso
On 08/01/2012 04:14 PM, Jim Klimov wrote:
> 2012-08-01 17:55, Sašo Kiselkov wrote:
>> On 08/01/2012 03:35 PM, opensolarisisdeadlongliveopensolaris wrote:
>>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>>> boun...@opensolaris.org] On Behalf Of
On 08/01/2012 03:35 PM, opensolarisisdeadlongliveopensolaris wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Jim Klimov
>>
>> Availability of the DDT is IMHO crucial to a deduped pool, so
>> I won't be surprised to see it forced to
On 08/01/2012 12:04 PM, Jim Klimov wrote:
> Probably DDT is also stored with 2 or 3 copies of each block,
> since it is metadata. It was not in the last ZFS on-disk spec
> from 2006 that I found, for some apparent reason ;)
That's probably because it's extremely big (dozens, hundreds or even
thous
On 07/29/2012 06:01 PM, Jim Klimov wrote:
> 2012-07-29 19:50, Sašo Kiselkov wrote:
>> On 07/29/2012 04:07 PM, Jim Klimov wrote:
>>>For several times now I've seen statements on this list implying
>>> that a dedicated ZIL/SLOG device catching sync writes for t
On 07/29/2012 04:07 PM, Jim Klimov wrote:
> Hello, list
Hi Jim,
> For several times now I've seen statements on this list implying
> that a dedicated ZIL/SLOG device catching sync writes for the log,
> also allows for more streamlined writes to the pool during normal
> healthy TXG syncs, than i
On 07/25/2012 05:49 PM, Habony, Zsolt wrote:
> Hello,
> There is a feature of zfs (autoexpand, or zpool online -e) by which it can
> consume the increased LUN immediately and increase the zpool size.
> That would be a very useful (vital) feature in an enterprise environment.
>
> Though when I t
Hi,
Have you had a look at iostat -E (error counters) to make sure you don't
have faulty cabling? I've had bad cables trip me up once in a manner similar
to your situation here.
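Something along these lines (a minimal sketch):

    # Per-device error counters since boot; climbing Soft/Hard/Transport
    # error counts usually point at cabling, connectors or expanders
    iostat -En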
Cheers,
--
Saso
On 07/23/2012 07:18 AM, Yuri Vorobyev wrote:
> Hello.
>
> I faced with a strange performance problem with ne
On 07/12/2012 09:52 PM, Sašo Kiselkov wrote:
> I have far too much time to explain
P.S. that should have read "I have taken far too much time explaining".
Men are crap at multitasking...
Cheers,
--
Saso
On 07/12/2012 07:16 PM, Tim Cook wrote:
> Sasso: yes, it's absolutely worth implementing a higher performing hashing
> algorithm. I'd suggest simply ignoring the people that aren't willing to
> acknowledge basic mathematics rather than lashing out. No point in feeding
> the trolls. The PETABYTES
On 07/11/2012 10:06 PM, Bill Sommerfeld wrote:
> On 07/11/12 02:10, Sašo Kiselkov wrote:
>> Oh jeez, I can't remember how many times this flame war has been going
>> on on this list. Here's the gist: SHA-256 (or any good hash) produces a
>> near uniform random di
On 07/11/2012 06:23 PM, Gregg Wonderly wrote:
> What I'm saying is that I am getting conflicting information from your
> rebuttals here.
Well, let's address that then:
> I (and others) say there will be collisions that will cause data loss if
> verify is off.
Saying that "there will be" withou
On 07/11/2012 05:58 PM, Gregg Wonderly wrote:
> You're entirely sure that there could never be two different blocks that can
> hash to the same value and have different content?
>
> Wow, can you just send me the cash now and we'll call it even?
You're the one making the positive claim and I'm ca
On 07/11/2012 05:33 PM, Bob Friesenhahn wrote:
> On Wed, 11 Jul 2012, Sašo Kiselkov wrote:
>>
>> The reason why I don't think this can be used to implement a practical
>> attack is that in order to generate a collision, you first have to know
>> the disk block tha
On 07/11/2012 05:10 PM, David Magda wrote:
> On Wed, July 11, 2012 09:45, Sašo Kiselkov wrote:
>
>> I'm not convinced waiting makes much sense. The SHA-3 standardization
>> process' goals are different from "ours". SHA-3 can choose to go with
>> someth
On 07/11/2012 04:56 PM, Gregg Wonderly wrote:
> So, if I had a block collision on my ZFS pool that used dedup, and it had my
> bank balance of $3,212.20 on it, and you tried to write your bank balance of
> $3,292,218.84 and got the same hash, no verify, and thus you got my
> block/balance and no
On 07/11/2012 04:54 PM, Ferenc-Levente Juhos wrote:
> You don't have to store all hash values:
> a. Just memorize the first one SHA256(0)
> b. start counting
> c. bang: by the time you get to 2^256 you get at least a collision.
Just one question: how long do you expect this is going to take on averag
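For a sense of scale, a rough back-of-the-envelope, assuming a (very generous)
rate of 10^12 hash computations per second:

    # 2^256 hashes at 10^12 hashes/second, expressed in years
    echo '2^256 / 10^12 / (3600*24*365)' | bc
    # => on the order of 3.7 * 10^57 years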
On 07/11/2012 04:39 PM, Ferenc-Levente Juhos wrote:
> As I said several times before, to produce hash collisions, or to calculate
> rainbow tables (as a previous user theorized), you only need the
> following.
>
> You don't need to reproduce all possible blocks.
> 1. SHA256 produces a 256 bit ha
On 07/11/2012 04:36 PM, Justin Stringfellow wrote:
>
>
>> Since there is a finite number of bit patterns per block, have you tried to
>> just calculate the SHA-256 or SHA-512 for every possible bit pattern to see
>> if there is ever a collision? If you found an algorithm that produced no
>> c
On 07/11/2012 04:30 PM, Gregg Wonderly wrote:
> This is exactly the issue for me. It's vital to always have verify on. If
> you don't have the data to prove that every possible block combination
> possible, hashes uniquely for the "small" bit space we are talking about,
> then how in the world
On 07/11/2012 04:27 PM, Gregg Wonderly wrote:
> Unfortunately, the government imagines that people are using their home
> computers to compute hashes and try and decrypt stuff. Look at what is
> happening with GPUs these days. People are hooking up 4 GPUs in their
> computers and getting huge
On 07/11/2012 04:23 PM, casper@oracle.com wrote:
>
>> On Tue, 10 Jul 2012, Edward Ned Harvey wrote:
>>>
>>> CPU's are not getting much faster. But IO is definitely getting faster.
>>> It's best to keep ahead of that curve.
>>
>> It seems that per-socket CPU performance is doubling every
On 07/11/2012 04:22 PM, Bob Friesenhahn wrote:
> On Wed, 11 Jul 2012, Sašo Kiselkov wrote:
>> the hash isn't used for security purposes. We only need something that's
>> fast and has a good pseudo-random output distribution. That's why I
>> looked toward Edon-R
On 07/11/2012 04:19 PM, Gregg Wonderly wrote:
> But this is precisely the kind of "observation" that some people seem to miss
> out on the importance of. As Tomas suggested in his post, if this was true,
> then we could have a huge compression ratio as well. And even if there was
> 10% of the
On 07/11/2012 03:58 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Sašo Kiselkov
>>
>> I really mean no disrespect, but this comment is so dumb I could swear
>> my IQ dropped by a