Re: [zfs-discuss] gaining speed with l2arc

2011-05-09 Thread Chris Forgeron
I've got a system with 24 Gig of RAM, and I'm running into some interesting 
issues playing with the ARC, L2ARC, and the DDT. I'll post a separate thread 
here shortly.  I think even if you add more RAM, you'll run into what I'm 
noticing (and posting about).

-Original Message-
From: zfs-discuss-boun...@opensolaris.org 
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Frank Van Damme
Sent: Tuesday, May 03, 2011 4:33 AM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] gaining speed with l2arc

Hi, hello,

another dedup question: I just installed an SSD disk as L2ARC. This is a
backup server with 6 GB RAM (i.e. I don't often read the same data again);
basically it holds a large number of old backups that need to be deleted.
Deletion speed seems to have improved, although the majority of reads are
still coming from disk.

               capacity     operations    bandwidth
pool          alloc   free   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
backups       5.49T  1.58T  1.03K      6  3.13M  91.1K
  raidz1      5.49T  1.58T  1.03K      6  3.13M  91.1K
    c0t0d0s1      -      -    200      2  4.35M  20.8K
    c0t1d0s1      -      -    202      1  4.28M  24.7K
    c0t2d0s1      -      -    202      1  4.28M  24.9K
    c0t3d0s1      -      -    197      1  4.27M  13.1K
cache             -      -      -      -      -      -
  c1t5d0       112G  7.96M     63      2   337K  66.6K

The above output was taken while the machine is only deleting files (so I
guess the goal is to have *all* metadata reads served from the cache). So
the first riddle: how to explain the low number of writes to the L2ARC
compared to the number of reads from disk?

Because reading bits of the DDT is supposed to be the biggest bottleneck, I
reckoned it would be a good idea to try not to expire any part of my DDT
from the L2ARC. The L2ARC is indexed from RAM, so they say, so perhaps there
is also a way to reserve as much memory as possible for that index.
Could one achieve this by setting zfs_arc_meta_limit to a higher value?
I don't need much process memory on this machine (I use rsync and not much
else).
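
For reference, the usual way to raise that limit on builds of this era seems
to be via /etc/system (or a live poke with mdb); a rough sketch, with the
value only an example (~3 GB out of the 6 GB here):

   # persistent -- add to /etc/system and reboot:
   set zfs:zfs_arc_meta_limit = 0xC0000000

   # or adjust the running kernel (the runtime variable is arc_meta_limit):
   echo "arc_meta_limit/Z 0xC0000000" | mdb -kw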

I was also wondering whether setting secondarycache=metadata for that zpool
would be a good idea (to make sure the L2ARC stays reserved for metadata,
since the DDT is considered metadata).
Bad idea? Or would it even help to set primarycache=metadata too, so that
RAM doesn't fill up with file data?
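
Concretely, the settings I have in mind would be something like this
(sketch only; the pool here is called "backups"):

   zfs set secondarycache=metadata backups
   # and possibly, though it may hurt read performance for rsync:
   zfs set primarycache=metadata backups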

P.S. the system is: NexentaOS_134f (I'm looking into newer OpenSolaris variants 
with bugs fixed/better performance, too).

--
Frank Van Damme
No part of this copyright message may be reproduced, read or seen, dead or 
alive or by any means, including but not limited to telepathy without the 
benevolence of the author.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] gaining speed with l2arc

2011-05-04 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Frank Van Damme
> 
> another dedup question. I just installed an ssd disk as l2arc.  This
> is a backup server with 6 GB RAM (ie I don't often read the same data
> again), basically it has a large number of old backups on it and they
> need to be deleted. Deletion speed seems to have improved although the
> majority of reads are still coming from disk.
> 
>                capacity     operations    bandwidth
> pool          alloc   free   read  write   read  write
> ------------  -----  -----  -----  -----  -----  -----
> backups       5.49T  1.58T  1.03K      6  3.13M  91.1K
>   raidz1      5.49T  1.58T  1.03K      6  3.13M  91.1K
>     c0t0d0s1      -      -    200      2  4.35M  20.8K
>     c0t1d0s1      -      -    202      1  4.28M  24.7K
>     c0t2d0s1      -      -    202      1  4.28M  24.9K
>     c0t3d0s1      -      -    197      1  4.27M  13.1K
> cache             -      -      -      -      -      -
>   c1t5d0       112G  7.96M     63      2   337K  66.6K

You have a server with roughly 7T of storage (5.5T allocated), a 112G L2ARC,
dedup enabled, and 6G of RAM.
Ouch.  That is not nearly enough RAM.  I intend to summarize the thread
"Dedup and L2ARC memory requirements (again)", but until then I suggest
reading that thread.  A more reasonable amount of RAM for your system is
likely in the 20G-30G range.
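
The usual back-of-envelope estimate goes something like this (the figures
are the rough numbers commonly quoted on this list, not exact):

   # count the unique blocks in the dedup table:
   zdb -DD backups
   # ~5.5T allocated at an average 128K block size is on the order of
   # 40M unique blocks; at roughly 320 bytes of core per DDT entry:
   #   40,000,000 * 320 bytes  ~=  13 GB for the DDT alone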


> first riddle: how to explain the low number of writes to l2arc
> compared to the reads from disk.

As you read things from disk, they go into the ARC.  As things are about to
be evicted from the ARC, they may or may not be copied into the L2ARC; if
they expire from the ARC too quickly, the L2ARC feed thread never gets to
them.  I'm sure your problem is lack of RAM.
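
You can watch this happening in the arcstats kstats; something like the
following (field names as on OpenSolaris-era builds) shows how much is being
evicted versus how much is being fed into the L2ARC:

   kstat -p zfs:0:arcstats | \
       egrep 'l2_feeds|l2_write_bytes|l2_size|l2_hdr_size|evict'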


> I don't need much process memory on this machine (I use rsync and not
> much else).

For rough numbers:  I have a machine that does absolutely nothing, with 2G
of RAM.  If I start a runaway process that consumes RAM without bound, the
system starts pushing things into swap when the process gets up to around
1300M, which means the baseline process & kernel memory consumption is
around 700M.

If you want reasonable performance, you will need roughly 1G + whatever the
L2ARC headers require + whatever the DDT requires + the ARC actually used to
cache file data.  So, a few G on top of the L2ARC + DDT requirements.
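
As a very rough sketch of the L2ARC part (the ~180 bytes of ARC overhead per
L2ARC record is the figure commonly quoted for builds of this era, so treat
it as an approximation):

   #   112 GB L2ARC / 128 KB average block  ~=  900K records
   #   900,000 records * ~180 bytes of ARC  ~=  160 MB of RAM
   # small blocks (e.g. the DDT itself) drive the record count, and thus
   # the RAM overhead, much higher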

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] gaining speed with l2arc

2011-05-03 Thread Frank Van Damme
Hi, hello,

another dedup question: I just installed an SSD disk as L2ARC. This is a
backup server with 6 GB RAM (i.e. I don't often read the same data again);
basically it holds a large number of old backups that need to be deleted.
Deletion speed seems to have improved, although the majority of reads are
still coming from disk.

               capacity     operations    bandwidth
pool          alloc   free   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
backups       5.49T  1.58T  1.03K      6  3.13M  91.1K
  raidz1      5.49T  1.58T  1.03K      6  3.13M  91.1K
    c0t0d0s1      -      -    200      2  4.35M  20.8K
    c0t1d0s1      -      -    202      1  4.28M  24.7K
    c0t2d0s1      -      -    202      1  4.28M  24.9K
    c0t3d0s1      -      -    197      1  4.27M  13.1K
cache             -      -      -      -      -      -
  c1t5d0       112G  7.96M     63      2   337K  66.6K

The above output was taken while the machine is only deleting files (so I
guess the goal is to have *all* metadata reads served from the cache). So
the first riddle: how to explain the low number of writes to the L2ARC
compared to the number of reads from disk?

Because reading bits of the DDT is supposed to be the biggest bottleneck, I
reckoned it would be a good idea to try not to expire any part of my DDT
from the L2ARC. The L2ARC is indexed from RAM, so they say, so perhaps there
is also a way to reserve as much memory as possible for that index.
Could one achieve this by setting zfs_arc_meta_limit to a higher value?
I don't need much process memory on this machine (I use rsync and not much
else).

I was also wondering whether setting secondarycache=metadata for that zpool
would be a good idea (to make sure the L2ARC stays reserved for metadata,
since the DDT is considered metadata).
Bad idea? Or would it even help to set primarycache=metadata too, so that
RAM doesn't fill up with file data?

P.S. the system is: NexentaOS_134f (I'm looking into newer OpenSolaris
variants with bugs fixed/better performance, too).

-- 
Frank Van Damme
No part of this copyright message may be reproduced, read or seen,
dead or alive or by any means, including but not limited to telepathy
without the benevolence of the author.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss