On Thursday, 26 June 2025 at 16:40:10 UTC, Steven Schveighoffer
wrote:
> On Tuesday, 24 June 2025 at 08:48:16 UTC, realhet wrote:
> Finalizers are run with the world resumed.
I'll remember this. It sounds less mystical than I thought.
Currently I have an 'infrastructure' that works like this:
I manage all the parent/child object relations in the destructors automatically.
Every parent's destructor traverses and destroys this map: `bool[Object] _childMap;`.
Child objects also notify their parents when their destructors are called manually via `object.destroy()`.
I use it to manage the Vulkan object hierarchy, and I destroy the whole thing with a single `root.destroy()` call. There is no multithreading and no GC-initiated destroy either. Maybe that's why it is stable.
(So as a last resort I can still organize my classes in a tree and replace many destroy calls with a single one.)
With this automatic mechanism it was much easier to represent the many Vulkan classes:
```d
class VulkanPhysicalDevice
{
    mixin SmartParent!q{
        @PARENT VulkanInstance instance,
        VkPhysicalDevice handle,
        int index
    };
    // From here the lifecycle management is automatic: 'instance' holds a
    // pointer to this object. The mixin template generates the constructor
    // and the destructor.
}
```
The mixin template is inside this module:
github/realhet/hetlib/blob/master/het/Package.d
There are destructor calls and aa.remove() operations in the destructors.
It works very well, but it shouldn't, because I'm heavily accessing GC-managed pointers from destructors ;) Maybe it's because I do it in a hierarchical pattern that I avoid the crashes/freezes/exceptions.
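To illustrate the pattern, here is a minimal sketch of the bookkeeping idea with hypothetical names (the real code is generated by the SmartParent mixin):

```d
class Node
{
    Node parent;
    bool[Object] _childMap;   // set of live children

    this(Node parent_)
    {
        parent = parent_;
        if (parent !is null)
            parent._childMap[this] = true;   // register at the parent
    }

    ~this()
    {
        // Destroy the children first (copy the keys, because each child
        // unregisters itself), then detach from our own parent. Only safe
        // when destroy() is called manually, not from a GC finalizer,
        // because it touches GC-managed objects.
        foreach (child; _childMap.keys)
            child.destroy();
        _childMap = null;
        if (parent !is null)
            parent._childMap.remove(this);   // the aa.remove() mentioned above
    }
}
```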
With the texture handle class I tried a different approach: no object hierarchy is stored, and I wanted the GC to initiate the destroy calls whenever it wants. The hierarchy is maintained by the GC, and I don't even have to write a single .destroy() call; it's fully automatic. But if I don't pin the allocated pointer, it freezes. (I will try to debug that later.)
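(By 'pin' I mean something along these lines, a minimal sketch using core.memory.GC.addRoot:)

```d
import core.memory : GC;

// Pinning: tell the GC the object is still referenced from outside,
// so it won't be collected and finalized behind our back.
void pin(Object obj)   { GC.addRoot(cast(void*) obj); }
void unpin(Object obj) { GC.removeRoot(cast(void*) obj); }
```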
> But there is another problem with your synchronization.
Thanks for pointing this out, but in this particular case the put and fetch calls are synchronized (serialized) by the x86 processor's cache coherency: it's a tricky queue whose state is updated by single 8-byte memory writes, and that's where the atomicity comes from.
The Queue can be configured to be single- or multithreaded on both the input and the output side.
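The core idea is that every state change is one pointer-sized atomic write. A rough sketch of that idea (not my actual Queue code), with a multi-producer put() and a main-thread-only fetch:

```d
import core.atomic : atomicLoad, atomicExchange, cas;

struct Node(T)
{
    T value;
    Node!T* next;
}

struct MPSCList(T)   // multi-producer put(), single consumer on the main thread
{
    private Node!T* head;

    // put() may run on any thread: a Treiber-style push, retried until one
    // compare-and-swap of the head pointer (a single 8-byte write) succeeds.
    void put(T value)
    {
        auto n = new Node!T(value);          // note: still a GC allocation here
        do
            n.next = atomicLoad(head);
        while (!cas(&head, n.next, n));
    }

    // Fetch side, called only from the main thread: one atomic exchange
    // detaches the whole list, which is then walked without any locking.
    Node!T* fetchAll()
    {
        return atomicExchange(&head, cast(Node!T*) null);
    }
}
```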
Current case: fetch is called only from the main thread, regularly, to find out which TextureHandles need to be deallocated (those live in GPU-accessible memory, so there are no GC conflicts, but I can't do this deallocation from another thread because the main thread accesses/modifies that memory all the time).
put() is multithreaded: there will be multiple cameras, each requiring a texture buffer to put data into. When a camera object is no longer used, the GC will notice it, and it will notice that the texture buffer object is no longer needed either. I need to send this no-longer-needed handle to the main thread from a destructor.
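Roughly this flow, reusing the MPSCList sketch from above (names are hypothetical, not my real classes):

```d
__gshared MPSCList!ulong releasedHandles;   // filled by finalizers, drained by the main thread

class TextureBuffer
{
    ulong handle;   // plain GPU-side value, not a GC reference

    ~this()
    {
        // May be run by the GC on any thread; it must not touch GC-managed
        // memory or the texture manager, so it only enqueues the raw value.
        // (With GC-allocated queue nodes this put() is exactly the risky part,
        // hence the malloc idea further down.)
        releasedHandles.put(handle);
    }
}

// Main thread, e.g. once per frame:
void recycleReleasedHandles()
{
    for (auto n = releasedHandles.fetchAll(); n !is null; n = n.next)
        freeTextureSlot(n.value);   // hypothetical main-thread-only cleanup
}

void freeTextureSlot(ulong handle)
{
    // return the slot to the texture manager; main thread only
}
```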
> Can you post the usage of the item? And what is
> `TB.createHandleAndSetData(fmt, data);`? Is this a C library
> function? What does it do?
createHandleAndSetData — I have a hard time naming things :D But it just searches for a free texture handle, and if there is none, it reallocates the buffer on the GPU. It's a lot of operations, but it's not in any destructor and runs only on the main thread.
> In a GC finalizer, you should be able to access and clean up
> resources allocated outside the GC. You cannot access any
> resources allocated with the GC (unless they are pinned).
Again an important sentence to remember!
Actually, there are allocations in that queue.put(): internally it is a linked list, and every item is a new allocation.
> Is there a reason you don't want to immediately clean up the
> resource in the destructor?
The texture handle is indeed not a GC-managed resource, BUT! I'm in a camera's thread (or whatever thread), so I have no access to the manager of the texture handles.
I also don't want to synchronize all the texture operations -> they are 95% the main thread's responsibility and 5% the responsibility of the external input sources (camera streams from different threads).
> What about using C malloc to allocate your link nodes? Surely
> the queue can be manually managed for memory.
A good idea, thanks. I can avoid GC allocations when I rewrite the Queue object to use malloc.
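Probably something like this, a minimal sketch reusing the Node type from above (assuming core.stdc.stdlib and core.lifetime):

```d
import core.stdc.stdlib : malloc, free;
import core.lifetime : emplace;

// Allocate queue nodes outside the GC, so put() stays legal even inside a
// finalizer that runs during a collection. Fine for plain values such as
// texture handles; if T contained GC pointers, GC.addRange/removeRange
// would also be needed.
Node!T* allocNode(T)(T value)
{
    auto n = cast(Node!T*) malloc(Node!T.sizeof);
    assert(n !is null, "malloc failed");
    return emplace(n, value, cast(Node!T*) null);
}

void freeNode(T)(Node!T* n)
{
    free(n);
}
```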
Thank You!