Hello Ian,

Thursday, May 3, 2007, 10:20:20 PM, you wrote:

IC> Roch Bourbonnais wrote:
>>
>> With recent bits, ZFS compression is now handled concurrently, with many
>> CPUs working on different records.
>> So this load will burn more CPUs and achieve its results
>> (compression) faster.
>>
IC> Would changing (selecting a smaller) filesystem record size have any effect?

>> So the observed pauses should be consistent with that of a load
>> generating high system time.
>> The assumption is that compression now goes faster than when it was
>> single threaded.
>>
>> Is this undesirable? We might seek a way to slow down compression in
>> order to limit the system load.
>>
IC> I think you should, otherwise we have a performance throttle that scales
IC> with the number of cores!

For file servers you actually want all CPUs to be used for
compression, otherwise you get bad performance with compression while
plenty of CPU sits idle doing nothing...

So maybe a special pool/dataset property (compression_parallelism=N?) would help.
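
Just to sketch how that might look to an admin (compression_parallelism
is only the hypothetical property proposed above, and tank/data is a
placeholder dataset name):

  # enable compression as today (an existing property)
  zfs set compression=on tank/data

  # limit how many CPUs may work on compression at once
  # (hypothetical property, does not exist in current bits)
  zfs set compression_parallelism=4 tank/data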

-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
