Tomas Ögren wrote:
> On 16 October, 2008 - Darren J Moffat sent me these 1,7K bytes:
>
>> Tomas Ögren wrote:
>>     
>>> On 15 October, 2008 - Richard Elling sent me these 4,3K bytes:
>>>
>>>       
>>>> Tomas Ögren wrote:
>>>>         
>>>>> Hello.
>>>>>
>>>>> Executive summary: I want an arc_data_limit (like
>>>>> arc_meta_limit, but for data) so I can set it to 0.5G or so. Is
>>>>> there any way to "simulate" it?
>>>>>
>>>> We describe how to limit the size of the ARC cache in the Evil Tuning 
>>>> Guide.
>>>> http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
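>>>>
>>>> In short, the guide's approach is to cap the ARC by setting
>>>> zfs_arc_max in /etc/system and rebooting. For example, to cap it
>>>> at 512 MB (the value here is just illustrative):
>>>>
>>>>     set zfs:zfs_arc_max = 0x20000000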
>>>>         
>>> Will that limit the _data_ portion only, or the metadata as well?
>>>       
>> Recent builds of OpenSolaris have the ability to control, on a
>> per-dataset basis, what is put into the ARC and L2ARC using the
>> primarycache and secondarycache dataset properties:
>>
>>       primarycache=all | none | metadata
>>
>>           Controls what is cached in the primary cache (ARC). If
>>           this property is set to "all", then both user data and
>>           metadata is cached. If this property is set to "none",
>>           then neither user data nor metadata is cached. If this
>>           property is set to "metadata", then only metadata is
>>           cached. The default value is "all".
>>
>>       secondarycache=all | none | metadata
>>
>>           Controls what is cached in the secondary cache (L2ARC).
>>           If this property is set to "all", then both user data
>>           and metadata is cached. If this property is set to
>>           "none", then neither user data nor metadata is cached.
>>           If this property is set to "metadata", then only
>>           metadata is cached. The default value is "all".
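>>
>> For example, to keep only metadata in the ARC for a given dataset
>> (the pool/dataset name is just illustrative):
>>
>>       # zfs set primarycache=metadata tank/export
>>       # zfs get primarycache,secondarycache tank/export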
>>     
>
> Yeah, the problem (as I wrote in the first post) is that if I set
> primarycache=metadata, ZFS prefetch goes into a horribly
> inefficient mode: it still does lots of prefetching, but the
> prefetched data is discarded immediately.
>
> A 128k prefetch for a 32k read throws away the other 96k
> immediately, followed by another 128k prefetch for the next 32k
> read, which again discards 96k. In other words, 75% of the
> prefetched bytes are read from disk and thrown away.
>   

Are you sure this is prefetch, or is it just the recordsize?
The checksum is based on the record, so to validate the checksum
the entire record must be read.  If you have a fixed record-size
workload where the record size is < 128 kBytes, then you might
adjust the recordsize property.
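
For example, if the application does fixed 32k I/O, something like
this would match the record size to the workload (the dataset name
is just illustrative, and recordsize only affects files written
after the change):

    # zfs set recordsize=32k tank/data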
 -- richard

> So ZFS needs to have _some_ data cache, but I want to limit it to
> "short term data" only. Setting the data cache limit to 512M or so
> should work fine, but I want to leave the rest for metadata, as
> that's where it can help the most.
>
> Unless I can do some trickery with a ram disk: add it as a
> secondarycache (L2ARC) device so that data gets cached there as
> well.
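>
> Roughly what I have in mind (untested sketch; the device name and
> sizes are just examples):
>
>   # ramdiskadm -a zfscache 512m
>   # zpool add tank cache /dev/ramdisk/zfscache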
>
> /Tomas
>   
