chromatic wrote:
On Sunday 24 February 2008 18:41:23 Bob Rogers wrote:
Granted, and it's tough to make a PMC truly read-only until after it's
completely initialized . . .
There's a similar problem for accessors and setters. Again, that's
solvable with more code or more cleverness.
Some of our memory problems seem to be strange interactions between PObjs
allocated out of constant pools, garbage collection, and freezing/thawing PBC
(not to mention the interaction of HLLs).
PObjs allocated out of constant pools persist in memory. They get marked
(sometimes, but not always
On Sunday, 24 February 2008 at 10:55, chromatic wrote:
PMCs that *do* need a special mark() are troublesome; they may contain
pointers to non-constant PObjs that *do* need live marking, lest they get
swept away during the second half of GC. If these constant PObjs don't get
marked, there's a
On Sunday 24 February 2008 07:33:30 Leopold Toetsch wrote:
On Sunday, 24 February 2008 at 10:55, chromatic wrote:
PMCs that *do* need a special mark() are troublesome; they may contain
pointers to non-constant PObjs that *do* need live marking, lest they get
swept away during the second
From: chromatic [EMAIL PROTECTED]
Date: Sun, 24 Feb 2008 01:55:20 -0800
Some of our memory problems seem to be strange interactions between
PObjs allocated out of constant pools, garbage collection, and
freezing/thawing PBC (not to mention the interaction of HLLs).
Amen
On Sunday 24 February 2008 16:55:48 Bob Rogers wrote:
Some of our memory problems seem to be strange interactions between
PObjs allocated out of constant pools, garbage collection, and
freezing/thawing PBC (not to mention the interaction of HLLs).
Amen! -- particularly the strange
From: chromatic [EMAIL PROTECTED]
Date: Sun, 24 Feb 2008 17:22:19 -0800
On Sunday 24 February 2008 16:55:48 Bob Rogers wrote:
Why do constant PMCs ever need to point to non-constant ones? In other
words, why are those pointed-to PObjs not also constant?
The reason is
On Sunday 24 February 2008 18:41:23 Bob Rogers wrote:
Granted, and it's tough to make a PMC truly read-only until after it's
completely initialized . . .
There's a similar problem for accessors and setters. Again, that's
solvable with more code or more cleverness.
So, you're saying
, and then mutable again (but some
objects may not allow this, particularly singletons). Immutable objects
are not treated specially by garbage collection. (I don't think this is
useful to Parrot per se, but may be useful to HLL implementors.)
pure -- immune from garbage collection. As a practical matter
# New Ticket Created by François PERRAD
# Please include the string: [perl #49328]
# in the subject line of all future correspondence about this issue.
# URL: http://rt.perl.org/rt3/Ticket/Display.html?id=49328
I've isolated one problem between PBC loading and garbage collection.
(remember
Hi all,
A Segmentation fault occurs in the languages/lua/t/tables_3.pir.
This test is a simple table creation (with 1000 items):
a = {}
for i=1,1000 do a[i] = i*2 end
print(a[9])
This problem started with revision 11586.
In the previous Lua PMC implementation (r11478),
Sam Vilain wrote:
Hi all,
While I must start this post out by saying that I've never implemented
either STM or a garbage collector, during a discussion on #parrot (is
that channel logged?), a similarity between the two processes occurred
to me.
Not really. STM is a scheme to handle access
Hi all,
While I must start this post out by saying that I've never implemented
either STM or a garbage collector, during a discussion on #parrot (is
that channel logged?), a similarity between the two processes occurred
to me.
Would this be an adequate expression of a generational Garbage
Hi,
I've been wondering how lazy lists will work.
The answer "Correctly, don't worry about it," is entirely acceptable...
The intent of this example in S06 seems clear: make @oddsquares
a lazily filled array of squares of odd @nums:
S06/Pipe operators
It [==] binds the (potentially lazy)
With cons based lists, past stream values are no longer referred to
so can be reclaimed, but we have random access arrays.
That's about where my wondering stopped.
It started again. @primesquares.shift would do it
Brad
Nick Glencross [EMAIL PROTECTED] wrote:
Ok, now I understand. This is inspecting the C runtime stack (I think).
The Interpreter remembers the bottom address, and then when the time
comes, a routine walks the depth of the stack.
Yes. Exactly.
The values on the stack are then checked whether
Cory Spencer [EMAIL PROTECTED] wrote:
I've come across another garbage collection/DOD issue that I'm trying to
solve (without much success) and would appreciate some tips/advice on how
to locate the source of the problem.
When I'm investigating GC bugs, the usual procedure is like this:
- run
Leopold Toetsch wrote:
Cory Spencer [EMAIL PROTECTED] wrote:
I've come across another garbage collection/DOD issue that I'm trying to
solve (without much success) and would appreciate some tips/advice on how
to locate the source of the problem.
Running valgrind (on supported platforms
Nick Glencross wrote:
The DOD certainly has a few things flagged up, which I'm going to
quickly investigate to see if they are serious or not...
I've learned a lot about DOD since earlier (and watched telly). Not as
straightforward as I thought it would be to find if these traces should
be
Nick Glencross wrote:
I've learned a lot about DOD since earlier (and watched telly). Not as
straightforward as I thought it would be to find if these traces should
be considered serious or not (I would say any logic based on uninitialised
values will bite one day!).
Ok, now I understand. This is
Cory Spencer [EMAIL PROTECTED] wrote:
I've been writing a Lisp implementation on top of Parrot for the last
several months (and I'm just about at the point where I'm ready to unleash
it upon
a segmentation fault and on a Mac OS X 10.3 machine
produces a bus error. Running with garbage collection disabled (i.e.
with the -G flag) does not produce these errors.
If anyone could point me towards what is going wrong, I would MOST
appreciate it - it's been driving me nuts. (I've been
Try again, please.
In addition to fixing the incomplete commit below,
I've committed the build runtime library with parrot ticket as well, so
you'll need to do a re-configure build.
On Tue, Sep 28, 2004 at 02:49:45PM +0200, Leopold Toetsch wrote:
Will Coleda [EMAIL PROTECTED] wrote:
Will Coleda [EMAIL PROTECTED] wrote:
oolong:~/research/parrot coke$ ./parrot languages/tcl/tcl.pbc
[EMAIL PROTECTED]:~/src/parrot-leo/languages/tcl]
$ make
make: *** No rule to make target `lib/commands/unset.imc', \
needed by `lib/tcllib.pbc'. Stop.
leo
I don't see my followup that I sent from a different account earlier today.
Try this again - you'll need a re-configure as there's a change to the root Makefile
that tcl now requires.
Thanks for checking into this.
Leopold Toetsch wrote:
Will Coleda [EMAIL PROTECTED] wrote:
André Pang [EMAIL PROTECTED] wrote:
On 21/08/2004, at 5:48 AM, Leopold Toetsch wrote:
3) The copying collector isn't integrated yet. But that should be easy.
After finishing sweep and if there is some possible wastage in the
memory pools, these get compacted.
I thought Parrot wasn't using
objects. That's exactly a mark and sweep garbage
collector. DOD is the first phase of that garbage collection. It would
be rather useless, if we only detect garbage and not collect it.
It's misleading.
... The GC, on
the other hand, collects up garbage memory, making the assumption
that all
Matt Fowles [EMAIL PROTECTED] wrote:
Leo~
Nice summary of the issues, but I have a few nits to pick
Thanks. I'll only look at DFS. It's more cache-friendly.
Thus I don't think that BFS works. Now let's consider DFS of both sets.
A refs C,B; B refs D, E; C refs E; D refs G; E refs A
Ah,
Leo~
On Fri, 20 Aug 2004 16:26:33 +0200, Leopold Toetsch [EMAIL PROTECTED] wrote:
And yes, I'm really thinking of inserting these A* nodes. Freezing an
object does need it. DOD of course not really.
How is space going to be made for these? DOD probably does not want
to allocate the dummy
[ Oops that one got the wrong address, resent ]
Original Message
To: perl6-internals-subscribe ...
Some remarks
0) Parrot's nomenclature DOD vs GC is a bit misleading. The DOD
subsystem is the stop-the-world mark sweep collector that recycles
object headers. The GC is the
references in
the volatile root set are ignored for garbage collection. For freezing
the whole interpreter it would need consideration though.
Summary:
- the old generation graph is always up to date
- on massive changes to such an old object it's taken out of old
- the young generation's graph is created last
At 9:48 PM +0200 8/20/04, Leopold Toetsch wrote:
0) Parrot's nomenclature DOD vs GC is a bit misleading. The DOD
subsystem is the stop-the-world mark sweep collector that recycles
object headers. The GC is the copying collector for variable sized
string and other buffer memory.
The incremental
On 21/08/2004, at 5:48 AM, Leopold Toetsch wrote:
3) The copying collector isn't integrated yet. But that should be easy.
After finishing sweep and if there is some possible wastage in the
memory pools, these get compacted.
I thought Parrot wasn't using copying collectors, since you're exposing
André Pang writes:
On 21/08/2004, at 5:48 AM, Leopold Toetsch wrote:
3) The copying collector isn't integrated yet. But that should be easy.
After finishing sweep and if there is some possible wastage in the
memory pools, these get compacted.
I thought Parrot wasn't using copying
Richard Jones is the author of Garbage Collection: Algorithms for Automatic
Dynamic Memory Management
http://www.amazon.com/exec/obidos/tg/detail/-/0471941484. A book that I
believe has been mentioned on the list before.
I do not believe anyone has mentioned his website:
http://www.cs.kent.ac.uk
, and the GC_IS_MALLOC code is already handling the
COW cases.
#2 is a bit more interesting and, if we do it right, means that
we'll end up with a pluggable garbage collection system and may
(possibly) be able to have type 3 threads share object arenas and
memory pools, which'd be rather nice. Even
issues when sharing data between interpreters. So, time for some work.
The task is twofold:
1) The garbage collection and memory allocation APIs need to be
formalized and abstracted a bit. (We're likely most of the way there,
but it's been a while since I've looked and, honestly, the GC/DOD
with a pluggable garbage collection system and may (possibly)
be able to have type 3 threads share object arenas and memory pools,
which'd be rather nice. Even if not, it leaves a good window for
experimentation in allocation and collection, which isn't bad either.
This, combined with the mention
chunk of memory, so
freeing up one's not a sufficient reason to return the memory to the
free pool.
#2 is a bit more interesting and, if we do it right, means that
we'll end up with a pluggable garbage collection system and may
(possibly) be able to have type 3 threads share object arenas
"TB" == Tim Bunce [EMAIL PROTECTED] writes:
TB On Thu, Feb 15, 2001 at 02:26:10PM -0500, Uri Guttman wrote:
"TB" == Tim Bunce [EMAIL PROTECTED] writes:
TB As a part of that the weak reference concept, bolted recently into
TB perl5, could be made more central in perl6.
TB
Damien Neil wrote:
Using object lifetime to control state is almost never a good idea,
even if you have deterministic finalization. A much better approach
is to have methods which allow holders of the object to control it,
and a finalizer (DESTROY method) which cleans up only if necessary.
Hong Zhang
A deterministic finalization means we shouldn't need to force
programmers
to have good ideas. Make it easy, remember? :)
I don't believe such an algorithm exists, unless you stick with reference
count.
Either doesn't exist, or is more expensive than refcounting. I guess we
On Thu, Feb 15, 2001 at 08:21:03AM -0300, Branden wrote:
Hong Zhang
A deterministic finalization means we shouldn't need to force
programmers
to have good ideas. Make it easy, remember? :)
I don't believe such an algorithm exists, unless you stick with reference
count.
Either
Tim Bunce wrote:
On Thu, Feb 15, 2001 at 08:21:03AM -0300, Branden wrote:
And don't forget that if we stick with refcounting, we should try to
find a
way to break circular references, too.
As a part of that the weak reference concept, bolted recently into perl5,
could be made more central
On Thu, Feb 15, 2001 at 08:07:39AM -0300, Branden wrote:
I think you just said all about why we shouldn't bother giving objects
deterministic finalization, and I agree with you. If we explicitly want to
free resources (files, database connections), then we explicitly call close.
Otherwise, it
Damien Neil wrote:
On Thu, Feb 15, 2001 at 08:07:39AM -0300, Branden wrote:
I think you just said all about why we shouldn't bother giving objects
deterministic finalization, and I agree with you. If we explicitly want
to
free resources (files, database connections), then we explicitly
Branden wrote:
Just set autoflush, if you're lazy...
And say goodbye to performance...
The problem is
that you not only can't count on $fh's DESTROY being called at the end of
the block, you often can't count on it ever happening.
Anyway, the file would be flushed and closed...
That's
Hong Zhang wrote:
This code should NEVER work, period. People will just ask for trouble
with this kind of code.
Actually I meant to have specified ">>" as the mode, i.e. append, then
what I originally said holds true. This behaviour is predictable and
dependable in the current perl implementation. Without the ">>" the file
will contain just "bar\n".
That was not what I meant. Your code already assumes the existence of
reference counting. It does not work well with any other kind of garbage
collection. If you translate the same code into C without putting in
t
Hong Zhang wrote:
That was not what I meant. Your code already assumes the existence of
reference counting. It does not work well with any other kind of garbage
collection. If you translate the same code into C without putting in
the close(), the code will not work at all.
Wrong, it does
Alan Burlison wrote:
I think you'll find that both GC *and* reference counting schemes will
require the heavy use of mutexes in a MT program.
There are several concurrent GC algorithms that don't use
mutexes -- but they usually depend on read or write barriers
which may be really hard for us to
There are several concurrent GC algorithms that don't use
mutexes -- but they usually depend on read or write barriers
which may be really hard for us to implement. Making them run
well always requires help from the OS memory manager and that
would hurt portability. (If we don't have OS
Hong Zhang wrote:
The memory barriers are always needed on SMP, whatever algorithm
we are using.
I was just pointing out that barriers are an alternative to mutexes.
Ref count certainly would use mutexes instead of barriers.
The memory barrier can be easily coded in assembly, or intrinsic
behaviour is predictable and
dependable in the current perl implementation. Without the ">>" the file
will contain just "bar\n".
That was not what I meant. Your code already assumes the existence of
reference counting. It does not work well with any other kind of garbage
collection. If yo
At 09:13 PM 2/15/2001 -0500, Ken Fox wrote:
Hong Zhang wrote:
The memory barriers are always needed on SMP, whatever algorithm
we are using.
I was just pointing out that barriers are an alternative to mutexes.
Ref count certainly would use mutexes instead of barriers.
Not really they
At 07:44 PM 2/14/2001 +, Simon Cozens wrote:
On Wed, Feb 14, 2001 at 08:32:41PM +0100, [EMAIL PROTECTED] wrote:
DESTROY would get called twice, which is VERY BAD.
*blink*
It is? Why?
I grant you it isn't the clearest way of programming, but "VERY BAD"?
package
bout to go away.
DESTROY might be called around the same time its memory is being reclaimed,
but from a language perspective, all this memory dealing is non-existent.
DESTROY is a language thing, garbage collection an implementation detail
of the run-time, purely necessary because of the limited phys
On Wed, Feb 14, 2001 at 08:32:41PM +0100, [EMAIL PROTECTED] wrote:
DESTROY would get called twice, which is VERY BAD.
*blink*
It is? Why?
I grant you it isn't the clearest way of programming, but "VERY BAD"?
package NuclearReactor::CoolingRod;
sub new {
[[ reply goes to -internals ]]
OK. Let's clear it up all at once from the start. Below is the lifecycle of an
object (in Perl). A reference is blessed, and an object is the result of
this blessing. During the object's life, several methods of it are called,
but independent of which are called, it
[trimming distribution to -internals only]
On Wed, Feb 14, 2001 at 07:44:53PM +, Simon Cozens wrote:
package NuclearReactor::CoolingRod;
sub new {
Reactor->decrease_core_temperature();
bless {}, shift
}
sub DESTROY {
Reactor->increase_core_temperature();
}
A better
On Wed, Feb 14, 2001 at 01:24:34PM -0800, Damien Neil wrote:
Using object lifetime to control state is almost never a good idea,
even if you have deterministic finalization.
A deterministic finalization means we shouldn't need to force programmers
to have good ideas. Make it easy, remember?
On Thu, Feb 15, 2001 at 12:11:27AM +, Simon Cozens wrote:
Using object lifetime to control state is almost never a good idea,
even if you have deterministic finalization.
A deterministic finalization means we shouldn't need to force programmers
to have good ideas. Make it easy,
ject's refcount is 1, or it is found
in some sort of sweep, then DESTROY and garbage collection can be
considered.
Meanwhile, I agree that try/finally (or any similar such explicit
exception handling mechanism) is not an appropriate way to talk
about GC, more strongly, I think the two mechanisms
On Sunday 11 February 2001 22:48, Jan Dubois wrote:
Doing full GC in this fashion after failed API calls will probably wipe
out any performance gain mark-and-sweep has over reference counting.
Well, after select failed API calls, not every call. And mark-and-sweep,
if that's the GC scheme
to think that using mark-and-sweep for garbage collection will be a
performance boost. This may not be the case if objects still need to be
reference counted.
I do wish people would get garbage collection and finalization split in
their minds. They are two separate things which can
Sam Tregar wrote:
On Mon, 12 Feb 2001, Dan Sugalski wrote:
Also, the vast majority of perl variables have no finalization
attached to them.
That's true, but without static typing don't you have to treat them as if
they did? At the very least you need to do a "is it an object with a
to think that using mark-and-sweep for garbage collection will be a
performance boost. This may not be the case if objects still need to be
reference counted.
I do wish people would get garbage collection and finalization split in
their minds. They are two separate things which can
At 01:45 PM 02-12-2001 -0300, Branden wrote:
I think having both copying-GC and refcounting-GC is a good idea. I may be
saying a stupid thing, since I'm not a GC expert, but I think objects that
rely on having their destructors called the soonest possible for resource
cleanup could use a
Buddha Buck wrote:
At 01:45 PM 02-12-2001 -0300, Branden wrote:
Am I too wrong here?
It's... complicated...
Agreed.
Here's an example of where things could go wrong:
sub foo {
my $destroyme1 = new SomeClass;
my $destroyme2 = new SomeClass;
my @processme1;
do wish people would get garbage collection and finalization split in
their minds. They are two separate things which can, and will, be dealt
with separately.
2x the penalty, right? Instead of a speed increase we carry the burden of
ref-counting in addition to the overhead of an alternate s
a bad space/time tradeoff. But many people seem
to think that using mark-and-sweep for garbage collection will be a
performance boost. This may not be the case if objects still need to be
reference counted.
Most perl doesn't use that many objects that live on past what's obvious
lexically, at least
At 09:49 AM 2/12/2001 -0800, Jan Dubois wrote:
On Mon, 12 Feb 2001 14:50:44 -0300, "Branden" [EMAIL PROTECTED]
wrote:
Actually I was thinking something like PMCs ($@%) being copy-GCed and
referred objects (new SomeClass) being refcounted. In this case above, every
operation would use
On Mon, 12 Feb 2001 14:50:44 -0300, "Branden" [EMAIL PROTECTED]
wrote:
Actually I was thinking something like PMCs ($@%) being copy-GCed and
referred objects (new SomeClass) being refcounted. In this case above, every
operation would use refcount's, since they're storing objects in PMCs. What
On Mon, 12 Feb 2001 13:33:52 -0500 (EST), Sam Tregar [EMAIL PROTECTED]
wrote:
It's reasonably obvious (which is to say "cheap") which variables aren't
involved with anything finalizable.
Probably a simple bit check and branch. Is that cheap? I guess it must
be.
Yes, but incrementing the
On Mon, 12 Feb 2001, Dan Sugalski wrote:
I think I've heard you state that before. Can you be more specific? What
alternate system do you have in mind? Is this just wishful thinking?
This isn't just wishful thinking, no.
You picked the easy one. Maybe you can get back to the other two
On Mon, 12 Feb 2001 13:29:21 -0500, Dan Sugalski [EMAIL PROTECTED] wrote:
At 10:38 AM 2/12/2001 -0500, Sam Tregar wrote:
On Mon, 12 Feb 2001, Dan Sugalski wrote:
Perl needs some level of tracking for objects with finalization attached to
them. Full refcounting isn't required, however.
I
On Mon, Feb 12, 2001 at 01:33:52PM -0500, Sam Tregar wrote:
Perhaps. It's not rare in OO Perl which is coincidentally one area in
serious need of a speedup. I suppose I'm warped by my own experience -
all the code I see every day is filled with references and objects.
That's probably not
At 01:33 PM 2/12/2001 -0500, Sam Tregar wrote:
On Mon, 12 Feb 2001, Dan Sugalski wrote:
I think I've heard you state that before. Can you be more specific? What
alternate system do you have in mind? Is this just wishful thinking?
This isn't just wishful thinking, no.
You picked the
At 10:46 AM 2/12/2001 -0800, Jan Dubois wrote:
On Mon, 12 Feb 2001 13:29:21 -0500, Dan Sugalski [EMAIL PROTECTED] wrote:
At 10:38 AM 2/12/2001 -0500, Sam Tregar wrote:
On Mon, 12 Feb 2001, Dan Sugalski wrote:
Perl needs some level of tracking for objects with finalization
attached to
Dan Sugalski [EMAIL PROTECTED] writes:
At 10:38 AM 2/12/2001 -0500, Sam Tregar wrote:
On Mon, 12 Feb 2001, Dan Sugalski wrote:
Perl needs some level of tracking for objects with finalization attached to
them. Full refcounting isn't required, however.
I think I've heard you state
At 01:59 PM 2/12/2001 -0700, Tony Olekshy wrote:
Dan Sugalski wrote:
I do wish people would get garbage collection and finalization split in
their minds. They are two separate things which can, and will, be dealt
with separately.
For the record:
THE GARBAGE COLLECTOR WILL HAVE
consideration of the matter of end-of-scope
includes not just (1) garbage collection, and (2) DESTROY, but also
(3) the matter of end-of-scope operations explicitly requested by
the developer, in an explicit order which may or may not be related
to GC or DESTROY order, and (4) the matter
On Mon, Feb 12, 2001 at 05:33:05PM -0500, Dan Sugalski wrote:
package foo;
use attrs qw(cleanup_sub);
would be nice, but I don't know that he'll go for it. (Though it's the only
way I can think of to avoid AUTOLOAD being considered a potential destructor)
Fiat?
It's pretty hard (for
At 11:28 PM 2/12/2001 +0100, Robin Berjon wrote:
At 15:37 12/02/2001 -0500, Dan Sugalski wrote:
It *is* rare in OO perl, though. How many of the variables you use are
really, truly in need of finalization? .1 percent? .01 percent? Less? Don't
forget that you need to count every scalar in every
e external resources (e.g. file handles or database
connections) that are only freed during DESTROY. Postponing DESTROY until
an indeterminate time in the future can lead to program failures due to
resource exhaustion.
But doesn't resource exhaustion usually trigger garbage collection and
resource exhaustion.
But doesn't resource exhaustion usually trigger garbage collection and
resource reallocation? (Not that this addresses the remainder of your
post.)
Not necessarily; you would have to implement it that way: When you try to
open a file and you don't succeed, you run the garbage col
, vice the XS code anyway?
This scheme would only work if *all* resources including memory and
garbage collection are handled by the OS (or at least by a virtual machine
like JVM or .NET runtime). But this still doesn't solve the destruction
order problem.
Well, no. My thought would
On Sun, 11 Feb 2001, Jan Dubois wrote:
However, I couldn't solve the problem of "deterministic destruction
behavior": Currently Perl will call DESTROY on any object as soon as the
last reference to it goes out of scope. This becomes important if the
object owns scarce external resources
e anywhere.
Doing full GC in this fashion after failed API calls will probably wipe
out any performance gain mark-and-sweep has over reference counting.
This scheme would only work if *all* resources including memory and
garbage collection are handled by the OS (or at least by a virtual machine
I do wish people would get garbage collection and finalization split in
their minds. They are two separate things which can, and will, be dealt
with separately.
For the record:
THE GARBAGE COLLECTOR WILL HAVE NOTHING TO DO WITH FINALIZATION, AND NO
PERL OBJECT CODE WILL BE CALLED FOR VARIABLES
Dan Sugalski wrote:
At 12:06 PM 2/9/2001 -0500, Ken Fox wrote:
2. Work proportional to live data, not total data. This is hard to
believe for a C programmer, but good garbage collectors don't have
to "free" every allocation -- they just have to preserve the live,
or
On Fri, Feb 09, 2001 at 01:19:36PM -0500, Dan Sugalski wrote:
The less memory you chew through the faster your code will probably be (or
at least you'll have less overhead). Reuse is generally faster and less
resource-intensive than recycling. What's true for tin cans is true for memory.
At 06:30 PM 2/9/2001 +, Nicholas Clark wrote:
On Fri, Feb 09, 2001 at 01:19:36PM -0500, Dan Sugalski wrote:
The less memory you chew through the faster your code will probably be (or
at least you'll have less overhead). Reuse is generally faster and less
resource-intensive than
up/down by 100% might not even be noticeable.
I was thinking of things that may process a lot of data but in small
pieces, like the command-line greps and suchlike things. They can take a
while on 100M files, but that shouldn't be because they've eaten 200M of
RAM in the process...
Going to a mo
Something for the reference shelf:
Garbage Collection
Algorithms for Automatic Dynamic Memory Management
Richard Jones & Rafael Lins
John Wiley and Sons, 1996
ISBN 0-471-94148-4
Still haven't had time to delve into it but from a quick browse it looks
gies)
JH Generational Garbage Collection (you know the drill)
JH Incremental and Concurrent Garbage Collection (...)
JH Garbage Collection for C
JH Garbage Collection for C++
JH Cache-Conscious Garbage Collection
JH Distributed Garbage Collection
sounds like a bunch of garbage to me.
du
On Wed, Aug 30, 2000 at 01:15:39PM -0400, [EMAIL PROTECTED] wrote:
I didn't realize until I read through parts of this exactly how much time a
refcounting GC scheme took. Between that and perl 5's penchant for
flattening arrays and hashes (which creates a lot of garbage itself for
biggish
On Wed, 30 Aug 2000, Joshua N Pritikin wrote:
On Wed, Aug 30, 2000 at 01:15:39PM -0400, [EMAIL PROTECTED] wrote:
I didn't realize until I read through parts of this exactly how much time a
refcounting GC scheme took. Between that and perl 5's penchant for
flattening arrays and hashes