On 16/08/2011 09:43, Florian Klämpfl wrote:
On 15.08.2011 20:50, Mattias Gaertner wrote:
On Mon, 15 Aug 2011 11:29:27 +0100
Martin <laza...@mfriebe.de> wrote:
[...]
It's a dream of mine to optimize that, e.g. to have SynEdit allocate a
bigger chunk and use its knowledge about the lifetime of the data to
organize it better. But there is just too much other important work...
Indeed.
But small chunks have a high chance of reuse. That's why
most memory managers have special optimizations for them, and that's why
many small chunks can actually be a good thing.
I also think it's a better idea to leave pooling to the heap
manager. Allocating bigger chunks and splitting them is only useful if this
requires no additional bookkeeping about used/free memory.


Ok, to make my point clearer: there are at least 2 scenarios that came to mind when I wrote the above.

The first is very concrete, but actually a bigger change than just looking after memory.

1)
The pascal highlighter keeps nodes in a tree. They represent the parsing state per line (each line refers to the node representing it). In most files there is only a limited number of states that repeat themselves. Currently no ref count is kept, so the highlighter does not know when a node becomes unused; nodes are never freed. I would like to see how it works if I refcount them. In that case, a certain amount of unused nodes should be kept around: nodes are requested and released again whenever an already known line is accessed, so there would be a permanent alloc/dealloc of the same data. That is why I would be interested in whether this would benefit from keeping a pool of nodes (a sketch follows below).
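
Roughly what I have in mind, as a sketch. All names here are invented; this is not the actual highlighter code, just refcounted nodes plus a pool that keeps a limited number of unused ones:

uses
  Classes;

type
  TStateNode = class               // stands in for the real highlighter node
  public
    RefCount: Integer;
  end;

  TNodePool = class
  private
    FFree: TList;                  // unused nodes kept for reuse
    FMaxKept: Integer;             // how many unused nodes to keep at most
  public
    constructor Create(AMaxKept: Integer);
    destructor Destroy; override;
    function Acquire: TStateNode;
    procedure Release(ANode: TStateNode);
  end;

constructor TNodePool.Create(AMaxKept: Integer);
begin
  inherited Create;
  FFree := TList.Create;
  FMaxKept := AMaxKept;
end;

destructor TNodePool.Destroy;
var
  i: Integer;
begin
  for i := 0 to FFree.Count - 1 do
    TStateNode(FFree[i]).Free;
  FFree.Free;
  inherited Destroy;
end;

function TNodePool.Acquire: TStateNode;
begin
  if FFree.Count > 0 then begin
    Result := TStateNode(FFree[FFree.Count - 1]);  // reuse a kept node
    FFree.Delete(FFree.Count - 1);
  end
  else
    Result := TStateNode.Create;                   // pool empty: allocate
  Result.RefCount := 1;
end;

procedure TNodePool.Release(ANode: TStateNode);
begin
  Dec(ANode.RefCount);
  if ANode.RefCount > 0 then
    Exit;
  if FFree.Count < FMaxKept then
    FFree.Add(ANode)   // keep it: the same states get re-requested a lot
  else
    ANode.Free;        // pool already holds enough unused nodes
end;

That way the permanent request/release of the same data mostly stops hitting the memory manager.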

So far all of this is actually not about allocating big blocks and dividing them, but rather about keeping a pool of normally allocated small blocks. Also, all of this is very low priority; I have too much other stuff that I want to do first.

For anyone interested: opening Lazarus with about 10 units from lazarus.lpi results in 1150 nodes; opening the 450 univint units on top adds another 80 nodes.

On top of this (assuming ref counting, and freeing whatever isn't kept in the pool), it would be interesting to see what happens if the memory for those nodes were allocated in blocks of 50 or 100. But that would be more to satisfy curiosity; I don't think there would be much of a benefit, and it is even likely to make things worse. It would reduce fragmentation just a little, but since a block can only be returned as a whole, a single node still in use can prevent the entire block from being returned.
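
Just to illustrate that last point (invented names again, and plain records instead of whatever the real node type would be):

type
  TNodeRec = record
    RefCount: Integer;
    // ... the actual per-line parser state would go here
  end;

  TNodeBlock = record
    Nodes: array[0..49] of TNodeRec; // 50 nodes allocated in one piece
    LiveCount: Integer;              // the block can only be given back
                                     // when this reaches 0, so a single
                                     // live node pins all 50 slots
  end;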

2)
For this one it is not even confirmed yet that any issue like it exists.

Search for memory (like temporary buffers) that is frequently allocated and deallocated, and keep a permanent buffer instead. This is already done in the paint routines, where individual words are re-assembled into the line (as long as they match in style and color): instead of using a string that grows and re-allocates with each grow, a permanent PChar buffer is used. A sketch of the pattern is below.
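
A sketch of the pattern (invented names; the actual paint code has more to it than this):

var
  FLineBuf: PChar = nil;     // kept for the lifetime of the editor
  FLineBufLen: Integer = 0;

procedure EnsureLineBuf(ANeeded: Integer);
begin
  if ANeeded <= FLineBufLen then
    Exit;                    // already big enough: no allocation at all
  if FLineBuf <> nil then
    FreeMem(FLineBuf);       // content is rebuilt on each paint anyway
  GetMem(FLineBuf, ANeeded); // grow once; later paints reuse the buffer
  FLineBufLen := ANeeded;
end;

With repeated paints of similar lines, EnsureLineBuf quickly stops allocating at all, while a growing string would re-allocate on every grow.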

So the idea is to go looking for other code that works on text or other data and causes frequent small memory changes, which could instead be kept in an existing place.

