On Sat, Apr 6, 2013 at 12:30 PM, Bill Ricker <bill.n1...@gmail.com> wrote:

>
> On Sat, Apr 6, 2013 at 12:20 AM, Conor Walsh <c...@adverb.ly> wrote:
>
>>> Ah! OK, so maybe I was confused about this. Even if I set the last
>>> reference to an object to undef, perl will keep the memory until exit?
>>> The high-water mark for memory usage never goes down? Well, that is
>>> fine, I suppose; it isn't as if this process will be all that
>>> long-lived. It also means that the iterative form of this algorithm
>>> will use that much less RAM, I think.
>>>
>>
>> Yeah, this is how the perl executable works.  Undef'd (well,
>> garbage-collected) memory becomes free for reuse within that
>> particular instance of perl, but it is never returned to the OS.
>> Uri's point about rarely-used pages getting swapped to disk by the OS
>> still stands, though.
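>>
>> A quick way to see that behavior (a minimal sketch; it assumes Linux,
>> since it reads VmRSS out of /proc/self/status):
>>
>>     #!/usr/bin/perl
>>     use strict;
>>     use warnings;
>>
>>     # Resident set size as the OS sees it (Linux-specific).
>>     sub rss_kb {
>>         open my $fh, '<', '/proc/self/status' or die $!;
>>         while (<$fh>) { return $1 if /^VmRSS:\s+(\d+)/ }
>>         return 0;
>>     }
>>
>>     printf "start:       %d kB\n", rss_kb();
>>     my @big = (1 .. 5_000_000);                # grow the heap
>>     printf "allocated:   %d kB\n", rss_kb();
>>     undef @big;                                # freed within perl ...
>>     printf "after undef: %d kB\n", rss_kb();   # ... but RSS stays high
>>     my @again = (1 .. 5_000_000);              # reuses the freed memory
>>     printf "reallocated: %d kB\n", rss_kb();   # little or no new growth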
>
>
> As Uri said, this is not peculiar to perl. Once the C heap grows, it
> never shrinks. C programs can release non-heap, non-stack special
> segments they may have mapped previously, but those are specialized
> techniques. The only difference with Java (whose interpreter, like
> perl's, is built on C) is that there's a startup flag to limit maximum
> heap growth; it still doesn't give memory back.
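>
> For example (these are the standard HotSpot startup flags; the class
> name is made up):
>
>     java -Xms64m -Xmx512m MyApp   # start the heap at 64 MB, cap it at 512 MB
>
> The cap keeps the heap from growing without bound, but memory below the
> cap is managed the same way: once grown, it stays with the process.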
>
> As long as the heap doesn't get too fragmented (Java's GC moves things
> around; perl's refcounting doesn't) and there's no significant leakage,
> this isn't a problem. But it *can* be a problem. To keep fragmentation
> from driving perpetual growth, perl sometimes needs a C-like
> pre-allocation pool pattern (or a dedicated heap) for the big objects
> and buffers; a sketch of that pattern is below.
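>
> A minimal sketch of that pool pattern (the package and method names are
> illustrative, not from any module): allocate the big buffers once, up
> front, and check them in and out rather than letting perl free and
> reallocate them:
>
>     package BufferPool;
>     use strict;
>     use warnings;
>
>     # Pre-allocate $count scalar buffers of $size bytes, once.
>     sub new {
>         my ($class, $count, $size) = @_;
>         my @pool;
>         for (1 .. $count) {
>             my $buf = "\0" x $size;   # force the allocation now
>             push @pool, \$buf;
>         }
>         return bless { free => \@pool }, $class;
>     }
>
>     # Hand out a pre-allocated buffer (as a scalar ref).
>     sub checkout {
>         my ($self) = @_;
>         my $bufref = pop @{ $self->{free} }
>             or die "pool exhausted";
>         return $bufref;
>     }
>
>     # Return a buffer for reuse instead of letting it be freed.
>     sub checkin {
>         my ($self, $bufref) = @_;
>         push @{ $self->{free} }, $bufref;
>     }
>
>     package main;
>     my $pool = BufferPool->new(10, 1_048_576);   # ten 1 MB buffers
>     my $buf  = $pool->checkout;                  # borrow one
>     substr($$buf, 0, 5) = "hello";               # use it in place
>     $pool->checkin($buf);                        # recycle, don't free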
>
>
> --
> Bill
> @n1vux bill.n1...@gmail.com
>



-- 
Bill
@n1vux bill.n1...@gmail.com

_______________________________________________
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm
