On Sat, May 24, 2008 at 11:45 PM, Neil Perrin <[EMAIL PROTECTED]> wrote:
>
>
> Hugh Saunders wrote:
>> On Sat, May 24, 2008 at 4:00 PM,  <[EMAIL PROTECTED]> wrote:
>>>  > does the cache improve write performance, or only reads?
>>>
>>> The L2ARC cache device is for reads... for writes you want an
>>>   Intent Log device.
>>
>> Thanks for answering my question; I had seen mention of intent log
>> devices but wasn't sure of their purpose.
>>
>> If only one significantly faster disk is available, would it make
>> sense to slice it, using one slice for L2ARC and another for the
>> ZIL?  Or would that cause horrible thrashing?
>
> I wouldn't recommend this configuration.
> As you say, it would thrash the head. Log devices mainly need to
> write fast, as they are only ever read once, at reboot, if there are
> uncommitted transactions. Cache devices, by contrast, need to read
> fast, since their writes can be done slowly and asynchronously. So a
> common device sliced for both purposes wouldn't work well unless it
> was fast at both reads and writes and had minimal seek times (NVRAM,
> solid-state disk).
>
> Neil.
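(For the archives: if you did want to try the sliced layout Hugh asked
about, the commands would look roughly like this sketch. The pool name
"tank" and the slices c4t2d0s0/s1 are made-up names, and per Neil's
comments you'd only want this on a device with negligible seek time:)

  # zpool add tank log c4t2d0s0
  # zpool add tank cache c4t2d0s1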

An interesting thread.  I sit in front of a heavily used SXDE (latest)
desktop for many hours, and my home dir (where most of the work takes
place) lives on a mirrored pair of 500 GB SATA drives.  The real
issue with this desktop is that it's built around a 939-pin AMD CPU
(x4400, IIRC), so I can't go beyond 4 GB of main memory.  So, after
reading this thread, I pushed in a couple of 15k RPM SAS drives on an
LSI Logic 1068 SAS controller and assigned one as a log device and one
as a cache device.[1]  What a difference!  :)
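
For anyone wanting to reproduce this, the setup amounts to two zpool
add commands (a sketch; tanku and the c4t devices are from my config
below, so substitute your own pool and device names):

  # zpool add tanku log c4t2d0        (dedicated intent log: fast writes)
  # zpool add tanku cache c4t1d0      (L2ARC cache device: fast reads)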

Now I've got the response time I wanted from the system to begin
with[0], along with the capacity and low cost per GB afforded by
commodity SATA drives.  This is just an FYI encouraging others on the
list to "experiment" with this setup.  You may be very pleasantly
surprised by the results.  I certainly am!  :)

Excellent work, Team ZFS.

[0] and which I had until I got to around 46 ZFS filesystems... and a
lofiadm-mounted home dir that is shared between different (work) zones
and just keeps on growing...
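
(By "lofiadm-mounted" I mean roughly the following, with made-up paths
rather than my exact setup: associate a block device with an image
file and mount it, so the same filesystem image can be made visible to
several zones:)

  # lofiadm -a /tank/images/home.img        (prints e.g. /dev/lofi/1)
  # mount -F ufs /dev/lofi/1 /export/home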

[1] # psrinfo -v
Status of virtual processor 0 as of: 05/25/2008 21:57:37
  on-line since 05/24/2008 15:44:51.
  The i386 processor operates at 2420 MHz,
        and has an i387 compatible floating point processor.
Status of virtual processor 1 as of: 05/25/2008 21:57:37
  on-line since 05/24/2008 15:44:53.
  The i386 processor operates at 2420 MHz,
        and has an i387 compatible floating point processor.

# zpool status
  pool: tanku
 state: ONLINE
 scrub: scrub completed with 0 errors on Sat May 24 19:35:32 2008
config:

        NAME        STATE     READ WRITE CKSUM
        tanku       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
        logs        ONLINE       0     0     0
          c4t2d0    ONLINE       0     0     0
        cache
          c4t1d0    ONLINE       0     0     0

errors: No known data errors
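
If you want to watch the new devices earn their keep, zpool iostat
with -v breaks the traffic out per vdev, including the log and cache
devices:

  # zpool iostat -v tanku 5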


Regards,

-- 
Al Hopper  Logical Approach Inc., Plano, TX  [EMAIL PROTECTED]
 Voice: 972.379.2133  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/