Mike Gerdts wrote:
> On Mon, Jun 30, 2008 at 9:19 AM, jan damborsky <[EMAIL PROTECTED]> wrote:
>> Hi Mike,
>>
>>
>> Mike Gerdts wrote:
>>> On Wed, Jun 25, 2008 at 11:09 PM, Jan Damborsky <[EMAIL PROTECTED]>
>>> wrote:
>>>> Thank you very much all for this valuable input.
>>>>
>>>> Based on the collected information, I would take
>>>> following approach as far as calculating size of
>>>> swap and dump devices on ZFS volumes in Caiman
>>>> installer is concerned.
>>>>
>>>> [1] Following formula would be used for calculating
>>>>   swap and dump sizes:
>>>>
>>>> size_of_swap = size_of_dump = MAX(512 MiB, MIN(physical_memory/2, 32 GiB))
>>> dump should scale with memory size, but the size given is completely
>>> overkill.  On very active (heavy kernel activity) servers with 300+ GB
>>> of RAM, I have never seen a (compressed) dump that needed more than 8
>>> GB.  Even uncompressed the maximum size I've seen has been in the 18
>>> GB range.  This has been without zfs in the mix.  It is my
>>> understanding that at one time the arc was dumped as part of kernel
>>> memory but that was regarded as a bug and has since been fixed.  If
>>> the arc is dumped, a value of dump much closer to physical memory is
>>> likely to be appropriate.
>> I would agree that, given that the user can customize this any time
>> after installation, the smaller the upper bound, the better. Would
>> it be fine then to use 16 GiB, or would an even smaller value be
>> more appropriate ?
>
> By default, only kernel memory is dumped to the dump device.  Further,
> this is compressed.  I have heard that 3x compression is common and
> the samples that I have range from 3.51x - 6.97x.
>
> If you refer to InfoDoc 228921 (contract only - can that be opened or
> can a Sun employee get permission to post same info to an open wiki?)
> you will see a method for approximating the size of a crash dump.  On
> my snv_91 virtualbox instance (712 MB RAM configured), that method
> gave me an estimated (uncompressed) crash dump size of about 450 MB.
> I induced a panic to test the approximation.  In reality it was 323 MB
> and compress(1) takes it down to 106 MB.  My understanding is that the
> algorithm used in the kernel is a bit less aggressive than the
> algorithm used by compress(1) so maybe figure 120 - 150 MB in this
> case.  My guess is that this did not compress as well as my other
> samples because on this smaller system a higher percentage of my
> kernel pages were not full of zeros.
>
> Perhaps the right size for the dump device is more like:
>
> MAX(256 MiB, MIN(physical_memory/4, 16 GiB))

Thanks a lot for investigating this and collecting such
valuable data - I will modify the proposed formula according
to your suggestion.
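For illustration, the revised sizing rule works out as follows. (This is just a
sketch in Python to show the arithmetic; the function and constant names are
mine, not from the Caiman installer code.)

```python
# Sketch of the proposed dump/swap device sizing:
#   size = MAX(256 MiB, MIN(physical_memory / 4, 16 GiB))
# Names here are illustrative only.

MIB = 1024 * 1024
GIB = 1024 * MIB

def dump_size(physical_memory_bytes):
    """Return the dump device size in bytes for a given physical memory size."""
    return max(256 * MIB, min(physical_memory_bytes // 4, 16 * GIB))

# For example:
#   512 MiB RAM -> 256 MiB (the floor applies)
#   8 GiB RAM   -> 2 GiB
#   256 GiB RAM -> 16 GiB  (the cap applies)
```

So small machines still get a usable minimum, while even very large memory
configurations never reserve more than 16 GiB.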

>
> Further, dumpadm(1M) could be enhanced to resize the dump volume on
> demand.  The size that it would choose would likely be based upon what
> is being dumped (kernel, kernel+user, etc.), memory size, current
> estimate using InfoDoc 228921 logic, etc.
>
>>> As an aside, does the dedicated dump on all machines make it so that
>>> savecore no longer runs by default?  It just creates a lot of extra
>>> I/O during boot (thereby slowing down boot after a crash) and uses a
>>> lot of extra disk space for those that will never look at a crash
>>> dump.  Those that actually use it (not the majority target audience
>>> for OpenSolaris, I would guess) will be able to figure out how to
>>> enable (the yet non-existent) svc:/system/savecore:default.
>>>
>> Looking at the savecore(1M) man page, it seems that it is managed
>> by svc:/system/dumpadm:default. Looking at the installed system,
>> this service is online. If I understand correctly, you are recommending
>> that it be disabled by default ?
>
> "dumpadm -n" is really the right way to do this.

I see - thanks for clarifying it.

Jan

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
