Reducing the record size would negatively impact performance. For the rationale, see the section titled "Match Average I/O Block Sizes" in my blog post on filesystem caching: http://www.thezonemanager.com/2009/03/filesystem-cache-optimization.html

Brad

Brad Diggs | Principal Sales Consultant
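To make the record-size point concrete, here is a toy model (illustrative only — the sizes below are assumptions, not measurements or ZFS internals): when the dataset recordsize is larger than the application's average read, every cache miss drags the whole record off disk.

```python
# Hypothetical model of read amplification: on a miss, the whole record is
# read from disk even if the application only wanted a small piece of it.
def read_amplification(recordsize, avg_io):
    """Bytes read from disk per byte the application asked for."""
    assert recordsize >= avg_io > 0
    return recordsize / avg_io

# A directory server doing ~8 KiB reads against a 128 KiB recordsize pulls
# in 16x more data per miss than one whose recordsize matches its I/O size.
print(read_amplification(128 * 1024, 8 * 1024))  # 16.0
print(read_amplification(8 * 1024, 8 * 1024))    # 1.0
```

This is why shrinking recordsize below the average I/O size (or inflating it far above) can hurt: the model only captures the amplification side, but it shows the mismatch cost directly.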
Jim,

You are spot on. I was hoping that the writes would be close enough to identical that there would be a high ratio of duplicate data since I use the same record size, page size, compression algorithm, etc. However, that was not the case. The main thing that I wanted to prove though was that if
Subject: Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup
Reducing the record size would negatively impact performance. For the rationale, see the section titled "Match Average I/O Block Sizes" in my blog post on filesystem caching:
http://www.thezonemanager.com
Thanks for running and publishing the tests :)
A comment on your testing technique follows, though.
2011-12-29 1:14, Brad Diggs wrote:
As promised, here are the findings from my testing. I created 6
directory server instances ...
However, once I started modifying the data of the replicated
S11 FCS

Brad

Brad Diggs | Principal Sales Consultant | 972.814.3698
eMail: brad.di...@oracle.com
Tech Blog: http://TheZoneManager.com
LinkedIn: http://www.linkedin.com/in/braddiggs
On Dec 29, 2011, at 8:11 AM, Robert Milkowski wrote:

And these results are from S11 FCS I assume. On older builds or Illumos
On Thu, Dec 29, 2011 at 9:53 AM, Brad Diggs brad.di...@oracle.com wrote:
Jim,
You are spot on. I was hoping that the writes would be close enough to identical that there would be a high ratio of duplicate data since I use the same record size, page size, compression algorithm, etc.
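A small sketch of why identical settings were not enough (this is illustrative Python, not ZFS code): matching recordsize and compression is necessary for duplicate blocks, but the on-disk bytes must also line up block-for-block, and even a one-byte shift defeats that.

```python
import hashlib
import os

# Estimate a block-level dedup ratio the way a DDT would count it:
# total blocks divided by unique blocks.
def dedup_ratio(datasets, blocksize=128 * 1024):
    hashes = []
    for data in datasets:
        for off in range(0, len(data), blocksize):
            hashes.append(hashlib.sha256(data[off:off + blocksize]).digest())
    return len(hashes) / len(set(hashes)) if hashes else 1.0

a = os.urandom(256 * 1024)              # two 128 KiB blocks of random data
print(dedup_ratio([a, a]))              # 2.0 -- byte-identical copies dedup
print(dedup_ratio([a, b"x" + a[:-1]]))  # 1.0 -- a 1-byte shift breaks alignment
```

The second case mirrors what happens when two writers produce logically similar but not byte-aligned data: the dedup ratio collapses to 1.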
On Mon, Dec 12, 2011 at 11:04 PM, Erik Trimble tr...@netdemons.com wrote:
On 12/12/2011 12:23 PM, Richard Elling wrote:
On Dec 11, 2011, at 2:59 PM, Mertol Ozyoney wrote:
Not exactly. What is dedup'ed is the stream only, which is in fact not very
efficient. Real dedup-aware replication is
On Thu, Dec 29, 2011 at 6:44 PM, Matthew Ahrens mahr...@delphix.com wrote:
On Mon, Dec 12, 2011 at 11:04 PM, Erik Trimble tr...@netdemons.com wrote:
(1) when constructing the stream, every time a block is read from a fileset
(or volume), its checksum is sent to the receiving machine. The
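The scheme sketched in (1) can be mocked up in a few lines (a hypothetical illustration of the idea, not the actual zfs send implementation): the sender offers each block's checksum, and only ships the payload when the receiver does not already hold it.

```python
import hashlib

# Toy model of checksum-first replication: the receiver keeps a
# checksum -> block store, and the sender skips blocks it already has.
class Receiver:
    def __init__(self):
        self.store = {}

    def has(self, checksum):
        return checksum in self.store

    def put(self, checksum, block):
        self.store[checksum] = block

def send_stream(blocks, rx):
    """Send blocks to rx; return the number of payload bytes actually sent."""
    sent = 0
    for block in blocks:
        c = hashlib.sha256(block).digest()
        if not rx.has(c):          # the per-block "do you have this?" round trip
            rx.put(c, block)
            sent += len(block)
    return sent

rx = Receiver()
blocks = [b"a" * 4096, b"b" * 4096, b"a" * 4096]  # one duplicate block
print(send_stream(blocks, rx))  # 8192 -- the duplicate is never sent
```

The cost this hides is exactly the objection raised in the thread: one checksum exchange per block, which adds a network round trip to every read on the sending side.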
-Original Message-
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Pawel Jakub Dawidek
Sent: 10 December 2011 14:05
To: Mertol Ozyoney
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Improving L1ARC cache efficiency
On Mon, Dec 12, 2011 at 08:30:56PM +0400, Jim Klimov wrote:
2011-12-12 19:03, Pawel Jakub Dawidek wrote:
As I said, the ZFS reading path involves no dedup code. None at all.
I am not sure if we contradicted each other ;)
What I meant was that the ZFS reading path involves reading
logical data
On Dec 11, 2011 5:12 AM, Nathan Kroenert nat...@tuneunix.com wrote:
On 12/11/11 01:05 AM, Pawel Jakub Dawidek wrote:
On Wed, Dec 07, 2011 at 10:48:43PM +0200, Mertol Ozyoney wrote:
Unfortunately the answer is no. Neither the L1 nor the L2 cache is dedup aware.
The only vendor I know that can do
On Sun, Dec 11, 2011 at 04:04:37PM +0400, Jim Klimov wrote:
I would not be surprised to see that there is some disk IO
adding delays for the second case (read of a deduped file
clone), because you still have to determine references
to this second file's blocks, and another path of on-disk
Not exactly. What is dedup'ed is the stream only, which is in fact not very
efficient. Real dedup-aware replication takes the necessary steps to
avoid sending a block that exists on the other storage system.
http://www.oracle.com/
Mertol Özyöney | Storage Sales
Mobile: +90 533 931 0752
I am almost sure that in cache things are still hydrated. There is an
outstanding RFE for this; while I am not sure, I think this feature will
be implemented sooner or later. And in theory there will be little
benefit, as most dedup'ed shares are used for archive purposes...
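"Hydrated" here means the cache holds fully expanded data. A simplified illustration (zlib stands in for the on-disk compression; the sizes are assumptions for the example): the in-memory footprint tracks the logical size, not the compressed on-disk size.

```python
import zlib

# One 128 KiB record of highly redundant data: tiny on disk once
# compressed, but full logical size again once hydrated in cache.
logical = b"A" * 128 * 1024
on_disk = zlib.compress(logical)    # what would land on disk
cached = zlib.decompress(on_disk)   # what a hydrated cache would hold

print(len(on_disk))                 # far smaller than 131072
print(len(cached))                  # 131072 -- full logical size in memory
```

The same logic applies to dedup: if cached copies are rehydrated per reference rather than shared, the cache sees the logical data set, not the deduplicated one.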
PS: NetApp's do have
Thanks everyone for your input on this thread. It sounds like there is sufficient weight behind the affirmative that I will include this methodology in my performance analysis test plan. If the performance goes well, I will share some of the results when we conclude in January/February
2011-12-12 19:03, Pawel Jakub Dawidek wrote:
On Sun, Dec 11, 2011 at 04:04:37PM +0400, Jim Klimov wrote:
I would not be surprised to see that there is some disk IO
adding delays for the second case (read of a deduped file
clone), because you still have to determine references
to this second
On Dec 11, 2011, at 2:59 PM, Mertol Ozyoney wrote:
Not exactly. What is dedup'ed is the stream only, which is in fact not very
efficient. Real dedup-aware replication takes the necessary steps to
avoid sending a block that exists on the other storage system.
These exist outside of ZFS (eg
On 12/12/2011 12:23 PM, Richard Elling wrote:
On Dec 11, 2011, at 2:59 PM, Mertol Ozyoney wrote:
Not exactly. What is dedup'ed is the stream only, which is in fact not very
efficient. Real dedup-aware replication takes the necessary steps to
avoid sending a block that exists on the other
On 12/11/11 01:05 AM, Pawel Jakub Dawidek wrote:
On Wed, Dec 07, 2011 at 10:48:43PM +0200, Mertol Ozyoney wrote:
Unfortunately the answer is no. Neither the L1 nor the L2 cache is dedup aware.
The only vendor I know that can do this is NetApp
And you really work at Oracle? :)
The answer is
2011-12-11 15:10, Nathan Kroenert wrote:
Hey all,
That reminds me of something I have been wondering about... Why only 12x
faster? If we are effectively reading from memory - as compared to a
disk reading at approximately 100MB/s (which is about an average PC HDD
reading sequentially), I'd
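A back-of-the-envelope for the "why only 12x?" question (the bandwidth figures are illustrative assumptions, not measurements): if cached reads really came straight from RAM, the speedup over a ~100 MB/s disk should be much larger than 12x, which suggests some other bottleneck.

```python
# Rough expected speedup of memory-resident reads over a sequential HDD.
disk_mb_s = 100      # ~average PC HDD reading sequentially, as stated above
ram_mb_s = 10_000    # ~10 GB/s, a conservative memory-bandwidth assumption

print(ram_mb_s // disk_mb_s)  # 100 -- so an observed 12x points elsewhere
```

Candidate bottlenecks would be per-block CPU overhead (checksumming, copying) or the benchmark itself, rather than the storage medium.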
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Nathan Kroenert
That reminds me of something I have been wondering about... Why only 12x
faster? If we are effectively reading from memory - as compared to a
disk reading at approximately
What kind of drives are we talking about? Even SATA drives are
available according to application type (desktop, enterprise server,
home PVR, surveillance PVR, etc). Then there are drives with SAS or
Fibre Channel interfaces. Then you've got Winchester platters vs SSD
vs hybrids. But even before
On Wed, Dec 07, 2011 at 10:48:43PM +0200, Mertol Ozyoney wrote:
Unfortunately the answer is no. Neither the L1 nor the L2 cache is dedup aware.
The only vendor I know that can do this is NetApp
And you really work at Oracle? :)
The answer is definitely yes. ARC caches on-disk blocks and dedup just
On 12/07/11 20:48, Mertol Ozyoney wrote:
Unfortunately the answer is no. Neither the L1 nor the L2 cache is dedup aware.
The only vendor I know that can do this is NetApp
In fact, most of our functions, like replication, are not dedup aware.
For example, technically it's possible to optimize our
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Mertol Ozyoney
Sent: Wednesday, December 07, 2011 3:49 PM
To: Brad Diggs
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup
Unfortunately
On 12/ 9/11 12:39 AM, Darren J Moffat wrote:
On 12/07/11 20:48, Mertol Ozyoney wrote:
Unfortunately the answer is no. Neither the L1 nor the L2 cache is dedup aware.
The only vendor I know that can do this is NetApp
In fact, most of our functions, like replication, are not dedup aware.
For example,
You can see the original ARC case here:
http://arc.opensolaris.org/caselog/PSARC/2009/557/20091013_lori.alt
On 8 Dec 2011, at 16:41, Ian Collins wrote:
On 12/ 9/11 12:39 AM, Darren J Moffat wrote:
On 12/07/11 20:48, Mertol Ozyoney wrote:
Unfortunately the answer is no. Neither the L1 nor the L2
Hello,

I have a hypothetical question regarding ZFS deduplication. Does the L1ARC cache benefit from deduplication in the sense that the L1ARC will only need to cache one copy of the deduplicated data versus many copies? Here is an example: Imagine that I have a server with 2TB of RAM and a PB of disk
Unfortunately the answer is no. Neither the L1 nor the L2 cache is dedup aware.
The only vendor I know that can do this is NetApp
In fact, most of our functions, like replication, are not dedup aware.
However, we have the significant advantage that ZFS keeps checksums regardless of
the dedup being on and
It was my understanding that both dedup and caching work at the block
level. So if you have identical on-disk blocks (the same original data
past the same compression and encryption), they turn into one(*) on-disk
block with several references from the DDT. And that one block is only
cached once, saving ARC
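The block-level model described above can be sketched in a few lines (a hypothetical simplification, not ARC internals): if the cache is keyed by the unique block rather than per file copy, many identical files cost one cache entry instead of many.

```python
import hashlib

# Toy cache-footprint model: dedup-aware keys by block checksum (one entry
# per unique block, as a DDT would see it); non-dedup-aware keys per copy.
def cache_footprint(files, dedup_aware):
    seen = set()
    size = 0
    for name, blocks in files.items():
        for i, block in enumerate(blocks):
            key = hashlib.sha256(block).digest() if dedup_aware else (name, i)
            if key not in seen:
                seen.add(key)
                size += len(block)
    return size

block = b"z" * 8192
files = {f"copy{i}": [block] * 4 for i in range(10)}  # 10 identical files
print(cache_footprint(files, dedup_aware=False))  # 327680 -- one copy per file
print(cache_footprint(files, dedup_aware=True))   # 8192   -- one copy in total
```

Whether the real ARC behaves like the second case is exactly what this thread disputes; the sketch only quantifies what is at stake.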