You can't really do that.

Adding an SSD for L2ARC will help a bit, but the L2ARC also consumes RAM
to maintain a table of headers for everything cached on it.  With only 2GB
of RAM, an SSD-based L2ARC (even without Dedup) likely won't gain you much
over not having the SSD at all.
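
If you want a rough feel for that overhead, here's a quick back-of-envelope
sketch (the ~180 bytes of ARC per cached buffer and the 64KB average block
size are just ballpark assumptions, not exact figures for any particular ZFS
release):

# Back-of-envelope only: ~180 bytes of ARC per L2ARC'd buffer and a 64KB
# average block size are assumptions; real numbers depend on ZFS version
# and workload.
def l2arc_header_ram(ssd_bytes, avg_block=64 * 1024, header_bytes=180):
    """Approximate RAM (bytes) the ARC spends indexing an L2ARC of ssd_bytes."""
    return (ssd_bytes / avg_block) * header_bytes

# Example: a 64GB SSD dedicated to L2ARC
print("~%.0f MB of ARC just to index the L2ARC"
      % (l2arc_header_ram(64 * 1024**3) / 1024.0**2))

On a 2GB box, that ~180MB comes straight out of the ARC the SSD was supposed
to be supplementing.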

If you're going to turn on Dedup, you need at least 8GB of RAM to go
with the SSD.
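
The dedup table (DDT) is the bigger problem. A quick sketch, assuming the
commonly quoted ~320 bytes of RAM per unique block and 64KB average blocks
(again, rules of thumb, not exact figures):

# Back-of-envelope only: ~320 bytes per DDT entry and a 64KB average block
# size are rough rules of thumb, not exact figures.
def ddt_ram(unique_data_bytes, avg_block=64 * 1024, entry_bytes=320):
    """Approximate RAM (bytes) needed to hold the entire DDT in core."""
    return (unique_data_bytes / avg_block) * entry_bytes

# Example: 4 x 2TB drives in raidz1 -> roughly 6TB of usable space
print("~%.1f GB of RAM to keep the full DDT in memory"
      % (ddt_ram(6 * 1000**4) / 1024.0**3))

Whatever part of the DDT doesn't fit in ARC ends up on the L2ARC device (or,
worse, on the pool disks), which is why the SSD plus 8GB is about the floor
for a config like yours.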

-Erik


On Tue, 2011-01-18 at 18:35 +0000, Michael Armstrong wrote:
> Thanks everyone, I think over time I'm going to update the system to include an 
> SSD for sure. Memory may come later though. Thanks for everyone's responses.
> 
> Erik Trimble <erik.trim...@oracle.com> wrote:
> 
> >On Tue, 2011-01-18 at 15:11 +0000, Michael Armstrong wrote:
> >> I've since turned off dedup and added another three drives, and results 
> >> have improved to around 148388K/sec on average. Would turning on 
> >> compression make things more CPU-bound and improve performance further?
> >> 
> >> On 18 Jan 2011, at 15:07, Richard Elling wrote:
> >> 
> >> > On Jan 15, 2011, at 4:21 PM, Michael Armstrong wrote:
> >> > 
> >> >> Hi guys, sorry in advance if this is a somewhat basic question. I've 
> >> >> recently built a ZFS test box based on NexentaStor with 4x Samsung 2TB 
> >> >> drives connected via SATA-II in a raidz1 configuration, with dedup 
> >> >> enabled, compression off, and pool version 23. From running bonnie++ I 
> >> >> get the following results:
> >> >> 
> >> >> Version 1.03b       ------Sequential Output------ --Sequential Input- --Random-
> >> >>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> >> >> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> >> >> nexentastor      4G 60582  54 20502   4 12385   3 53901  57 105290  10 429.8   1
> >> >>                     ------Sequential Create------ --------Random Create--------
> >> >>                     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
> >> >>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> >> >>                  16  7181  29 +++++ +++ +++++ +++ 21477  97 +++++ +++ +++++ +++
> >> >> nexentastor,4G,60582,54,20502,4,12385,3,53901,57,105290,10,429.8,1,16,7181,29,+++++,+++,+++++,+++,21477,97,+++++,+++,+++++,+++
> >> >> 
> >> >> 
> >> >> I'd expect more than 105290K/s on a sequential read as a peak for a 
> >> >> single drive, let alone a striped set. The system has a relatively 
> >> >> decent CPU but only 2GB of memory; do you think increasing this to 4GB 
> >> >> would noticeably affect the performance of my zpool? The memory is only 
> >> >> DDR1.
> >> > 
> >> > 2GB or 4GB of RAM + dedup is a recipe for pain. Do yourself a favor: 
> >> > turn off dedup and enable compression.
> >> > -- richard
> >> > 
> >> 
> >
> >
> >Compression will help speed things up (I/O, that is), presuming that
> >you're not already CPU-bound, which you don't seem to be.
> >
> >If you want Dedup, you pretty much are required to buy an SSD for L2ARC,
> >*and* get more RAM.
> >
> >
> >These days, I really don't recommend running ZFS as a fileserver without
> >a bare minimum of 4GB of RAM (8GB for anything other than light use),
> >even with Dedup turned off. 
> >
> >
> >-- 
> >Erik Trimble
> >Java System Support
> >Mailstop:  usca22-317
> >Phone:  x67195
> >Santa Clara, CA
> >Timezone: US/Pacific (GMT-0800)
> >
-- 
Erik Trimble
Java System Support
Mailstop:  usca22-317
Phone:  x67195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
