Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-25, Roy Sigurd Karlsbakk:
> (2) L2ARC is not simply a slower extension of L1ARC as you seem to be
> thinking. Every entry in the L2ARC requires an entry in the L1ARC. I
> don't know what the multiplier ratio is, but I hear something between
> 10x and 20x. So if you have, for example, 20G of L2ARC, that would
> consume something like 1-2G of RAM.
>
> Oddly enough, just google for this: "L2ARC memory requirements"
> And what you see is a conversation in which you, Roy, and I both
> participated. And it was perfectly clear that you require RAM
> consumption in order to support your L2ARC. So, I really don't know
> how any confusion came about here... You should be solidly aware by
> now that enabling L2ARC comes with a RAM cost.

Sorry - had forgotten that thread, but you're right. Still, it seems ZFS does 
not use all its memory for the DDT, only (RAM-1GB)/4 (by default) for metadata, 
according to Toomas Soome (tsome @ #openindiana), meaning I hit the barrier at 
about 1.2TB of unique dedup data on disk.

Does anyone know where I can find exact numbers for the RAM cost? I remember 
reading something about this the last time I did some testing, but I don't 
recall the RAM cost being as high as 10-20% of the L2ARC size. With these 
numbers in place, we could create a spreadsheet or even a webapp to allow for 
easy calculation, given a (guessed or known) average block size etc...
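
For what it's worth, here is a minimal sketch of what such a calculator could
compute, in Python. It assumes the figures quoted elsewhere in this thread
(320 bytes per DDT entry, per Darren Moffat via LOSUG, and the default ARC
metadata limit of (RAM - 1GB)/4); the function names are made up for
illustration:

#!/usr/bin/env python
# Sketch of the proposed DDT RAM calculator (assumed figures: 320 B per
# DDT entry, default ARC metadata limit = (RAM - 1 GiB) / 4).

GiB = 1 << 30

def ddt_ram_bytes(unique_data_bytes, avg_block_size, bytes_per_entry=320):
    """RAM needed to keep the whole DDT resident."""
    entries = unique_data_bytes // avg_block_size
    return entries * bytes_per_entry

def max_unique_data_bytes(ram_bytes, avg_block_size, bytes_per_entry=320):
    """Unique data the default metadata limit covers before the DDT spills."""
    meta_limit = (ram_bytes - 1 * GiB) // 4
    return (meta_limit // bytes_per_entry) * avg_block_size

if __name__ == "__main__":
    # An 8 GiB box with 128K average records hits the wall well under 1 TB:
    print("%.2f TB" % (max_unique_data_bytes(8 * GiB, 128 * 1024) / 1e12))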

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.



Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-25, Gary Driggs:
On Apr 25, 2011, at 6:11 AM, Roy Sigurd Karlsbakk wrote:

> Does anyone know where I can find exact numbers for the RAM cost?

Doesn't it also vary by RAID type in use? A calculator would be useful, but it 
may have to account for each version of zpool in the wild, given that the 
source is available...

-Gary


Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-25, Roy Sigurd Karlsbakk:
>> Does anyone know where I can find exact numbers for the RAM cost?
>
> Doesn't it also vary by RAID type in use? A calculator would be useful,
> but it may have to account for each version of zpool in the wild, given
> that the source is available...

If we can get the numbers for a given zpool version, we can start off there. No 
reason to try to cover all possible versions and configurations all at once.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.



Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-24, Roy Sigurd Karlsbakk:
> In theory, dedup should accelerate performance when data is
> duplicated.
>
> In the specs you quoted above, you don't have nearly enough RAM. I would
> say you should consider 4G or 8G to be baseline if you have no dedup
> and no L2ARC. But when you add the dedup and the L2ARC, your RAM
> requirements increase.
>
> With the 8T storage... Add 8-16G RAM to your server on top of your
> baseline.

I never got around to filling it. Already at about 1.2TB of fill it was dead 
slow, and for that, the 8GB RAM should suffice.

> I'm not sure how to calculate extra RAM requirements for L2ARC.

AFAIK, L2ARC will just add more, slower RAM...

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.



Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-24, Edward Ned Harvey:
> From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
>
>> With the 8T storage... Add 8-16G RAM to your server on top of your
>> baseline.
>
> I never got around to filling it. Already at about 1.2TB of fill it was
> dead slow, and for that, the 8GB RAM should suffice.
>
>> I'm not sure how to calculate extra RAM requirements for L2ARC.
>
> AFAIK, L2ARC will just add more, slower RAM...

Sorry, both of these statements are inaccurate.

(1) Well, I'll drop the subject of 8G being enough RAM. If I say any more 
about that, I'll just be repeating myself. So I'll only repeat one part: 8G is 
not enough if you have dedup and L2ARC enabled and expect it to perform 
reasonably. I don't even run a laptop with less than 8G of RAM anymore.

(2) L2ARC is not simply a slower extension of L1ARC as you seem to be 
thinking. Every entry in the L2ARC requires an entry in the L1ARC. I don't 
know what the multiplier ratio is, but I hear something between 10x and 20x. 
So if you have, for example, 20G of L2ARC, that would consume something like 
1-2G of RAM.

Oddly enough, just google for this: "L2ARC memory requirements"
And what you see is a conversation in which you, Roy, and I both participated. 
And it was perfectly clear that you require RAM consumption in order to 
support your L2ARC. So, I really don't know how any confusion came about 
here... You should be solidly aware by now that enabling L2ARC comes with a 
RAM cost.
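
To make the block-size dependence concrete, a rough sketch in Python. The
~200 bytes of ARC header per L2ARC'd buffer is an assumed ballpark for builds
of this era, not a number from this thread, so treat the output as an
estimate only:

# L2ARC header cost in RAM (assumption: ~200 B of ARC header per cached
# buffer; the real size varies by build).
def l2arc_header_ram_bytes(l2arc_bytes, avg_block_size, header_bytes=200):
    return (l2arc_bytes // avg_block_size) * header_bytes

GB = 10**9
for bs in (2048, 8192, 131072):
    print("%6d-byte blocks: %.2f GB of RAM for 20 GB of L2ARC"
          % (bs, l2arc_header_ram_bytes(20 * GB, bs) / float(GB)))
# ~2K records land near the 1-2G figure above; 128K records cost only a
# few tens of MB.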




Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-23, Edward Ned Harvey:
> From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
>
> That's theory; in practice, even with sufficient RAM/L2ARC and some amount
> of SLOG, dedup slows down writes to a minimum. My test was done with 8TB
> net storage, 8GB RAM, and two 80GB X25-M SSDs divided into 2x4GB SLOG
> (mirrored) and the rest for L2ARC.

In theory, dedup should accelerate performance when data is duplicated.

In the specs you quoted above, you don't have nearly enough RAM. I would say 
you should consider 4G or 8G to be baseline if you have no dedup and no L2ARC. 
But when you add the dedup and the L2ARC, your RAM requirements increase.

With the 8T storage... Add 8-16G RAM to your server on top of your baseline.

I'm not sure how to calculate extra RAM requirements for L2ARC.




Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-23, Edward Ned Harvey:
> From: Toomas Soome [mailto:toomas.so...@mls.ee]
>
> well, do a bit of math. if I'm correct, with 320B DDT entries, the 1.75GB
> of RAM can fit 5.8M entries; 1TB of data, assuming 128k recordsize, would
> produce 8M entries. That's with the default metadata limit. Unless I did
> my calculations wrong, that will explain the slowdown.

Not sure where you're getting those numbers, but the rule of thumb is to add
1-3G of RAM for every 1T of unique dedup data.

http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup
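
That ballpark is easy to sanity-check against the 320-byte-per-entry figure
mentioned elsewhere in this thread; a quick sketch (the block sizes are
illustrative):

# Back-of-envelope: DDT RAM per TB of unique data at 320 B/entry.
KB, TB, GB = 1024, 10**12, 10**9
for bs in (128 * KB, 64 * KB):
    print("%3dK records: %.1f GB per TB" % (bs // KB, TB // bs * 320.0 / GB))
# 128K -> ~2.4 GB/TB, 64K -> ~4.9 GB/TB; smaller average records blow
# past the 1-3G band quickly.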




Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-23, Tomas Bodzar:
On Sat, Apr 23, 2011 at 3:06 PM, Edward Ned Harvey
openindi...@nedharvey.com wrote:
>> From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
>>
>> That's theory; in practice, even with sufficient RAM/L2ARC and some amount
>> of SLOG, dedup slows down writes to a minimum. My test was done with 8TB
>> net storage, 8GB RAM, and two 80GB X25-M SSDs divided into 2x4GB SLOG
>> (mirrored) and the rest for L2ARC.
>
> In theory, dedup should accelerate performance when data is duplicated.
>
> In the specs you quoted above, you don't have nearly enough RAM. I would
> say you should consider 4G or 8G to be baseline if you have no dedup and
> no L2ARC. But when you add the dedup and the L2ARC, your RAM requirements
> increase.
>
> With the 8T storage... Add 8-16G RAM to your server on top of your baseline.
>
> I'm not sure how to calculate extra RAM requirements for L2ARC.

Isn't it too much? I asked elsewhere, and it looks like they have lower RAM
needs: http://www.shiningsilence.com/dbsdlog/2011/04/22/7647.html







Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-23, Edward Ned Harvey:
> From: Tomas Bodzar [mailto:tomas.bod...@gmail.com]
>
> Isn't it too much?

Too much RAM is an oxymoron.

Always add more RAM. And then double the RAM. Or else don't complain about 
performance. ;-)




Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-23, Gregory Youngblood:
Unless you buy the Fishworks-based storage products, which I believe include 
the dedup feature and are sold for production environments. I just don't 
remember if the dedup feature is labelled experimental or similar.

Sent from my Droid Incredible. 

Edward Ned Harvey openindi...@nedharvey.com wrote:

> From: Alan Coopersmith [mailto:alan.coopersm...@oracle.com]
>
> While I'm fairly sure Oracle disagrees with Mr. Harvey's claim that it's
> not considered production worthy,

Here's what I meant when I said that:
The current production release is still Solaris 10, which does not include
dedup yet. If you want dedup from Oracle, it's only available in Solaris 11
Express. The subjective label "production worthy" is a relative and personal
value assessment for any given individual in any given specific situation.
But the name "Express" implies that it's not fully baked.

So I'm sure you'll hear conflicting reports, from different Oracle employees 
and different users and admins, assessing the value of the term "production 
worthy". My personal assessment is: not yet.

There is no consensus yet. Some will say yes, some will say no. I say no.




Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-23, Gary Driggs:
On Apr 23, 2011, at 8:21 AM, Gregory Youngblood greg...@youngblood.me wrote:

> Unless you buy the Fishworks-based storage products, which I believe include
> the dedup feature and are sold for production environments. I just don't
> remember if the dedup feature is labelled experimental or similar.

The unified storage series of appliances offers inline dedup, compression, and 
thin provisioning as production features:
http://www.oracle.com/us/products/servers-storage/storage/unified-storage

-Gary


Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-23, Alan Coopersmith:
On 04/23/11 06:01 AM, Edward Ned Harvey wrote:
>> From: Alan Coopersmith [mailto:alan.coopersm...@oracle.com]
>>
>> While I'm fairly sure Oracle disagrees with Mr. Harvey's claim that it's
>> not considered production worthy,
>
> Here's what I meant when I said that:
> The current production release is still Solaris 10, which does not include
> dedup yet.

Solaris 10 is a current production release, but it's not the only one.
Exadata & Exalogic machines ship with Solaris 11 Express, which seems to count
as production to me. Of course, the price of those includes support to get
fixes that have been developed since the original Solaris 11 Express release
in November.

> If you want dedup from Oracle, it's only available in Solaris 11
> Express.

Or in the Storage Appliance 2010.Q1 and later releases, but those are only
available for those appliances, not any other computers.

[This is getting way off-topic for the openindiana list though.]

-- 
-Alan Coopersmith    alan.coopersm...@oracle.com
 Oracle Solaris Platform Engineering: X Window System




Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-22, Chris Ridd:

On 21 Apr 2011, at 23:07, Toomas Soome wrote:

 
> the basic math behind the scenes is the following (and not entirely
> determined):
>
> 1. DDT data is kept in the metadata part of the ARC;
> 2. the metadata default max is arc_c_max / 4.
>
> note that you can raise that limit.
>
> 3. arc max is RAM - 1GB.
>
> so, if you have 8GB of RAM, your arc max is 7GB and max metadata is 1.75GB.
> so, with a server with 8GB of RAM, your server will store at most 1.75GB of
> DDT in the ARC. a DDT entry is said to take 250B.

In this week's ZFS presentation at LOSUG, Darren Moffat suggested each DDT 
entry was 320 bytes. To get the number of entries needed, use zdb -DD poolname.

There's some good stuff in the ZFS Dedup FAQ:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup
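
Putting those two numbers together gives a workable recipe: read the entry
count out of zdb and multiply. A sketch, assuming the 320-byte figure above:

# Convert a DDT entry count (e.g. from `zdb -DD poolname`) into the RAM
# needed to keep the table resident, at an assumed 320 B/entry.
def ddt_core_gib(entries, bytes_per_entry=320):
    return entries * bytes_per_entry / float(1 << 30)

print("%.2f GiB" % ddt_core_gib(5800000))  # ~1.73 GiB for 5.8M entries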

Chris



Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-22, Edward Ned Harvey:
> From: James Kohout [mailto:jkoh...@yahoo.com]
>
> So, looking to upgrade to oi148 to be able to enable deduplication. Does
> anyone have experience running a ZFS RAIDZ2 pool with deduplication in a
> production environment? Is ZFS deduplication in oi148 considered
> stable/production-ready? I would hate to break a working setup chasing a
> feature that is not ready.

OI only goes up to zpool version 28 (the current, and likely final, 
open-source release).

Even in Solaris 11 Express, which has a significantly newer version, dedup
isn't considered production worthy. So I would advise you to live without it
for now.




Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-22, Ben Taylor:
On Fri, Apr 22, 2011 at 8:22 AM, Edward Ned Harvey
openindi...@nedharvey.com wrote:
>> From: James Kohout [mailto:jkoh...@yahoo.com]
>>
>> So, looking to upgrade to oi148 to be able to enable deduplication. Does
>> anyone have experience running a ZFS RAIDZ2 pool with deduplication in a
>> production environment? Is ZFS deduplication in oi148 considered
>> stable/production-ready? I would hate to break a working setup chasing a
>> feature that is not ready.
>
> OI only goes up to zpool version 28 (the current, and likely final,
> open-source release).

I thought Oracle was going to continue to release source snapshots after
a binary release had been made.



Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-22, Jerry Kemp:
That was my understanding also. I thought that only the binary/distro
rollouts were stopping.

Can someone in the know comment further on this?

Jerry


On 04/22/11 08:23, Ben Taylor wrote:
> I thought Oracle was going to continue to release source snapshots after
> a binary release had been made.



Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-22, Sriram Narayanan:
On Fri, Apr 22, 2011 at 8:33 PM, Jerry Kemp sun.mail.lis...@oryx.cc wrote:
> That was my understanding also. I thought that only the binary/distro
> rollouts were stopping.
>
> Can someone in the know comment further on this?


This has been discussed a lot last year and this year.

There was a leaked Oracle memo which stated that Oracle would stop
commits, and would make source code available only when the final
binary release of Solaris 11 was made available.

While there has been no official confirmation of the above, or responses to
any questions, the source code repository at opensolaris.org has not been
updated since then for the ON code base.

At least one other project at opensolaris.org continues to receive updates:
the IPS project, which I track closely. There may be others too - I've not
checked.

> Jerry


-- Sriram

> On 04/22/11 08:23, Ben Taylor wrote:
>
>> I thought Oracle was going to continue to release source snapshots after
>> a binary release had been made.






-- 
Belenix: www.belenix.org



Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-22, Ignacio Marambio Catán:
The information about releasing the source was part of a leaked memo. Oracle
has made no official commitment to such a thing, and the author of the memo
is no longer at Oracle. No Oracle employee can reveal that information, even
if they had it, without risking their job and a lawsuit.

On Fri, Apr 22, 2011 at 12:03 PM, Jerry Kemp sun.mail.lis...@oryx.cc wrote:
> That was my understanding also. I thought that only the binary/distro
> rollouts were stopping.
>
> Can someone in the know comment further on this?
>
> Jerry
>
> On 04/22/11 08:23, Ben Taylor wrote:
>
>> I thought Oracle was going to continue to release source snapshots after
>> a binary release had been made.






Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-22, Jerry Kemp:
It appears that I misunderstood the memo.

Thanks for setting me straight.

Jerry


On 04/22/11 10:09, Sriram Narayanan wrote:

> There was a leaked Oracle memo which stated that Oracle would stop
> commits, and would make source code available only when the final
> binary release of Solaris 11 was made available.



Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-22, Alan Coopersmith:
On 04/22/11 08:03 AM, Jerry Kemp wrote:
> Can someone in the know comment further on this?

Sorry, but no, we can't comment on that.

-- 
-Alan Coopersmith    alan.coopersm...@oracle.com
 Oracle Solaris Platform Engineering: X Window System




Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-22, Alan Coopersmith:
On 04/22/11 08:09 AM, Sriram Narayanan wrote:
> At least one other project at opensolaris.org continues to receive
> updates: the IPS project, which I track closely. There may be others
> too - I've not checked.

The Caiman installers do as well, as do the projects to build & package
mostly-externally-created open source (the gates for the X, JDS, SFW &
Userland consolidations).

OpenIndiana actively tracks & mirrors all of those gates.

-- 
-Alan Coopersmith    alan.coopersm...@oracle.com
 Oracle Solaris Platform Engineering: X Window System




Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-22, Roy Sigurd Karlsbakk:
That's theory; in practice, even with sufficient RAM/L2ARC and some amount of 
SLOG, dedup slows down writes to a minimum. My test was done with 8TB net 
storage, 8GB RAM, and two 80GB X25-M SSDs divided into 2x4GB SLOG (mirrored) 
and the rest for L2ARC. The application tested was Bacula, with the OI box as 
a storage agent (bacula-sd). Performance was OK until about 1TB was used; 
dedup numbers were low, since it was during the initial backup, but write 
speed was down to the tens of MB/s.

roy

- Original Message -
> the basic math behind the scenes is the following (and not entirely
> determined):
>
> 1. DDT data is kept in the metadata part of the ARC;
> 2. the metadata default max is arc_c_max / 4.
>
> note that you can raise that limit.
>
> 3. arc max is RAM - 1GB.
>
> so, if you have 8GB of RAM, your arc max is 7GB and max metadata is
> 1.75GB. so, with a server with 8GB of RAM, your server will store at most
> 1.75GB of DDT in the ARC. a DDT entry is said to take 250B.
>
> now the tricky part - those numbers are max values; but you also need
> some space to store normal metadata, not just the DDT. also, you can't
> really distinguish DDT from other metadata - that leaves space for some
> guessing, unfortunately.
>
> as for performance considerations, even if you have enough RAM and L2ARC,
> the ARC warmup time is more critical, as currently the L2ARC contents
> will be lost on reboot (and ARC contents as well, obviously) - that's the
> downside of having dedup integrated into the filesystem.
>
> On 22.04.2011, at 0:48, Eric D. Mudama wrote:
>
>> On Thu, Apr 21 at 14:12, James Kohout wrote:
>>> All,
>>> Been running OpenSolaris 134 with a 9T RAIDZ2 array as a backup server
>>> in a production environment. Whenever I tried to turn on ZFS
>>> deduplication, I always had crashes and other issues, which I most
>>> likely attributed to the known ZFS dedup bugs in 134. Once I rebuilt
>>> the pool without dedup, things have been running great for several
>>> months without a glitch. As a result, I am highly confident it was not
>>> a hardware issue.
>>>
>>> So, looking to upgrade to oi148 to be able to enable deduplication.
>>> Does anyone have experience running a ZFS RAIDZ2 pool with
>>> deduplication in a production environment? Is ZFS deduplication in
>>> oi148 considered stable/production-ready? I would hate to break a
>>> working setup chasing a feature that is not ready.
>>>
>>> Any feedback or experience would be appreciated.
>>
>> The summary of list postings over the last 6 months is that dedup
>> requires way more RAM and/or L2ARC than most people budgeted in order
>> to work as smoothly as a non-dedup installation, and that when
>> under-budgeted in RAM/L2ARC, the performance of scrubs and snapshot
>> deletion is atrocious.
>>
>> I don't have the math handy on the memory requirements; maybe someone
>> can post that part of the summary.
>>
>> --
>> Eric D. Mudama
>> edmud...@bounceswoosh.org

-- 
Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.



Re: [OpenIndiana-discuss] ZFS with Deduplication for NFS server

2011-04-22, Toomas Soome:


well, do a bit of math. if I'm correct, with 320B DDT entries, the 1.75GB of 
RAM can fit 5.8M entries; 1TB of data, assuming 128k recordsize, would produce 
8M entries. That's with the default metadata limit. Unless I did my 
calculations wrong, that will explain the slowdown.
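
Spelling that arithmetic out (a sketch; 320 B per entry, 128K records, and
1 TB read as a binary TiB, which is where the 8M figure comes from):

# What fits in the default metadata limit vs. what 1 TiB of data needs.
GiB, KiB, TiB = 1 << 30, 1 << 10, 1 << 40
fits = int(1.75 * GiB) // 320    # ~5.87M DDT entries fit in 1.75 GiB
needed = TiB // (128 * KiB)      # ~8.39M entries for 1 TiB at 128K records
print(fits, needed)              # the DDT overflows the metadata limit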


On 22.04.2011, at 21:19, Roy Sigurd Karlsbakk wrote:

> That's theory; in practice, even with sufficient RAM/L2ARC and some amount
> of SLOG, dedup slows down writes to a minimum. My test was done with 8TB
> net storage, 8GB RAM, and two 80GB X25-M SSDs divided into 2x4GB SLOG
> (mirrored) and the rest for L2ARC. The application tested was Bacula, with
> the OI box as a storage agent (bacula-sd). Performance was OK until about
> 1TB was used; dedup numbers were low, since it was during the initial
> backup, but write speed was down to the tens of MB/s.
>
> roy
> [...]

