On 07/15/2010 04:43 PM, Hans-Peter Diettrich wrote:
Just an idea: When the lists contain many entries, they could be split
into buckets. Then the currently searched bucket(s) could be locked
against use by other threads, which can skip them and inspect the next
bucket.
I suppose their idea i
On 07/15/2010 05:17 PM, Adem wrote:
I was curious about the differences between FastMM (which I use under
Delphi) and TopMM, and asked about them in the Delphi ThirdPartyTools NG.
Thus TopMM seems excellently suited here.
I don't understand why in their Graphics, they imply that the Delphi 7
projec
Adem schrieb:
> FastMM4 uses a LOCKed asm instruction for every memory allocation
> or deallocation. This LOCK ensures that memory is modified by only one
> thread at a time. This is the same LOCKed asm instruction which is used
> internally by Windows with its Critical Sections. Windows itself
On 2010-07-15 17:46, Michael Schnell wrote:
On 07/15/2010 03:36 PM, Sergei Gorelkin wrote:
- FastMM is somewhat slower than FPC's memory manager, but the
difference is small.
Good to know !
- Given the amount of source code in FPC and FastMM, FPC is clearly a
winner :)
Yep. FastMM uses a lot of ASM, so a plus for FPC RTL.
Michael Schnell schrieb:
With small blocks there is no concurrency (as in case of conflict, the
second thread will use medium).
Just an idea: When the lists contain many entries, they could be split
into buckets. Then the currently searched bucket(s) could be locked
against use by other threads, which can skip them and inspect the next
bucket.
On 07/15/2010 03:36 PM, Sergei Gorelkin wrote:
- FastMM is somewhat slower than FPC's memory manager, but the
difference is small.
Good to know !
- Given the amount of source code in FPC and FastMM, FPC is clearly a
winner :)
Yep. FastMM uses a lot of ASM, so a plus for FPC RTL.
-Michael
On 07/15/2010 01:28 PM, Marco van de Voort wrote:
How is this conflict detected? If this is a kind of lock, (that needs to be
SMP safe I guess) the FPC manager can probably skip that in most small
allocations, and only has to do this if it really touches global structures?
This is quite a lot
On 07/15/2010 01:14 PM, Florian Klaempfl wrote:
And the third and fourth thread?
Should not make much difference. The time span during which a conflict
is possible is very short, so having more than two threads at the same
time that can't do the critical action in the normal way is extremely unlikely.
Michael Schnell wrote:
I did not take a look at FPC's memory manager. Maybe someone might want
to do some profiling
I did extensive profiling when working on the fcl-xml package. For a single-threaded application,
the following is true:
- FastMM is somewhat slower than FPC's memory manager, but the difference is small.
In our previous episode, Michael Schnell said:
>
> With small blocks there is no concurrency (as in case of conflict, the
> second thread will use medium).
I'm no memory manager expert, but reading this raises some question:
How is this conflict detected? If this is a kind of lock, (that needs
to be SMP safe I guess) the FPC manager can probably skip that in most
small allocations, and only has to do this if it really touches global
structures?
Michael Schnell schrieb:
> On 07/15/2010 12:05 PM, Jonas Maebe wrote:
>> And you will get overhead in the FASTMM scheme if you have two threads
>> that are concurrently allocating and/or freeing a lot of small or
>> medium blocks.
> Only in extremely rare cases (which of course will be handled but
On 07/15/2010 12:05 PM, Jonas Maebe wrote:
And you will get overhead in the FASTMM scheme if you have two threads
that are concurrently allocating and/or freeing a lot of small or
medium blocks.
Only in extremely rare cases (which of course will be handled but seem
not to be relevant regarding
Michael Schnell wrote on Thu, 15 Jul 2010:
On 07/15/2010 11:43 AM, Florian Klaempfl wrote:
No, because the worker thread looks into the global structure when it
runs out of "local" space.
This kind of garbage control might add some overhead at unforeseen times.
And you will get overhead in the FASTMM scheme if you have two threads
that are concurrently allocating and/or freeing a lot of small or
medium blocks.
Michael Schnell schrieb:
> On 07/15/2010 11:43 AM, Florian Klaempfl wrote:
>> No, because the worker thread looks into the global structure when it
>> runs out of "local" space.
> This kind of garbage control might add some overhead at unforeseen times.
You don't turn off caches in SMP systems either.
On 07/15/2010 11:43 AM, Florian Klaempfl wrote:
No, because the worker thread looks into the global structure when it
runs out of "local" space.
This kind of garbage control might add some overhead at unforeseen times.
I seem to like the FASTMM paradigm better.
-Michael
Mattias Gärtner schrieb:
> Zitat von Jonas Maebe :
>
>> Michael Schnell wrote on Thu, 15 Jul 2010:
>>
>>> Did somebody take a look at FastMM for Delphi ? (
>>> http://sourceforge.net/projects/fastmm/ )
>>>
>>> Same seems to use a nice paradigm doing the Memory management for
>>> threaded applications.
Zitat von Jonas Maebe :
Michael Schnell wrote on Thu, 15 Jul 2010:
Did somebody take a look at FastMM for Delphi ? (
http://sourceforge.net/projects/fastmm/ )
Same seems to use a nice paradigm doing the Memory management for
threaded applications.
Then please explain that paradigm, since apparently you already looked
at it.
On 07/15/2010 11:14 AM, Jonas Maebe wrote:
Then please explain that paradigm, since apparently you already looked
at it.
AFAIU, they use three areas for small, midrange, and large chunks.
Small chunks are allocated in several lists, each of which hosts equally
sized chunks, thus finding the chunk
Michael Schnell wrote on Thu, 15 Jul 2010:
Did somebody take a look at FastMM for Delphi ? (
http://sourceforge.net/projects/fastmm/ )
Same seems to use a nice paradigm doing the Memory management for
threaded applications.
Then please explain that paradigm, since apparently you already looked
at it.
On 07/14/2010 05:21 PM, Jonas Maebe wrote:
a) the memory management overhead primarily comes from allocating and
freeing machine instruction (and to a lesser extent node tree) instances
Did somebody take a look at FastMM for Delphi ? (
http://sourceforge.net/projects/fastmm/ )
Same seems to use a nice paradigm doing the Memory management for
threaded applications.
Hans-Peter Diettrich wrote on Thu, 15 Jul 2010:
Jonas Maebe schrieb:
[snip]
When the file resides in the OS file cache, no page faults will
occur unless the file was removed from the cache. If not, every
access request has to do disk I/O, resulting in process switching
etc., so that the p
Jonas Maebe schrieb:
Apart from specific scenarios, memory mapping can easily be slower than
direct reads. The main reason is that you get round trips to the OS via
hardware interrupts whenever you trigger a page fault, instead of doing
one or more (relatively cheap compared to interrupts) syscalls.
On 2010-07-14 18:21, Jonas Maebe wrote:
Apart from specific scenarios, memory mapping can easily be slower
than direct reads. The main reason is that you get round trips to the
OS via hardware interrupts whenever you trigger a page fault, instead
of doing one or more (relatively cheap compared to interrupts) syscalls.
Hans-Peter Diettrich wrote on Wed, 14 Jul 2010:
Marco van de Voort schrieb:
Mapping does not change that picture (the head still has to move if you
access a previously unread block). Mapping mainly is more about
- zero-copy access to file content
- and uses the VM system to cache _already a