Hi!

To compile a statement, a lot of Nodes are allocated, and they are rarely
freed during compilation.

Most of them live until the statement is destroyed.

Each node carries the memory manager overhead (16 bytes in a release
build), which exists to make individual node deallocation possible.

That is wasted space in this case, as the nodes are deallocated as a
whole with the pool.

I did some tests to see how many nodes were created and how much memory
a statement pool uses:

-- 39 nodes, 31728 bytes allocated
select * from rdb$database;

-- 109 nodes, 91904 bytes allocated
select * from rdb$database
union all
select * from rdb$database
union all
select * from rdb$database
union all
select * from rdb$database
union all
select * from rdb$database;

-- 163 nodes, 120496 bytes allocated
execute block
as
    declare n integer;
begin
    select rdb$relation_id from rdb$database into n;
    -- select repeated 20 more times
end!

So, multiplying the number of nodes by 16, we get around 2% of waste in
these tests.

I'm thinking of using something like a memory arena: a large block is
allocated (from the pool), and node allocation uses the arena, so no
per-node overhead is needed.

It's not necessary to have a single large block. Many could be created,
and we could start with a not-very-large one to avoid wasting space on
small statements.

We could also estimate initial block size from the BLR size.
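To make the idea concrete, here is a minimal sketch of such an arena
(hypothetical code, not Firebird's actual MemoryPool API): allocation is
just a pointer bump inside the current block, so nodes have no per-node
header; new blocks are chained as needed and grow geometrically, and the
initial block size could be derived from the BLR length.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of a chained-block arena with no per-allocation overhead.
// Everything is released at once in the destructor, matching the
// "nodes die with the statement" lifetime described above.
class Arena
{
public:
	// initial size could be estimated from the BLR size of the statement
	explicit Arena(std::size_t initialBlockSize = 1024)
		: nextBlockSize(initialBlockSize)
	{
	}

	~Arena()
	{
		for (char* block : blocks)
			delete[] block;	// the whole arena is freed as one unit
	}

	Arena(const Arena&) = delete;
	Arena& operator=(const Arena&) = delete;

	void* allocate(std::size_t size,
		std::size_t align = alignof(std::max_align_t))
	{
		// round the bump cursor up to the required alignment
		std::size_t offset = (used + align - 1) & ~(align - 1);

		if (blocks.empty() || offset + size > currentBlockSize)
		{
			// start a new block, growing so that large statements
			// need only a few blocks
			currentBlockSize = std::max(nextBlockSize, size);
			blocks.push_back(new char[currentBlockSize]);
			nextBlockSize *= 2;
			offset = 0;
		}

		used = offset + size;
		return blocks.back() + offset;
	}

private:
	std::vector<char*> blocks;
	std::size_t currentBlockSize = 0;
	std::size_t used = 0;
	std::size_t nextBlockSize;
};
```

Nodes would then be constructed with placement new into memory obtained
from `allocate()`, and simply never deleted individually.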

And by declaring a protected delete operator in Node, we prevent it from
being called explicitly (which would crash).

What do you think?


Adriano



Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel
