[Joshua Haberman]
> 1. signal graph is constructed in main thread
> 2. once audio starts, the audio thread runs the graph once for every callback
> 3. if the main thread wants to change the graph while it's running, it does any heavy lifting (memory allocation, etc), then sends a message to the audio thread with a lock-free queue (ex: "add this node and connect it to this other node")
> 4. buffers are passed between the disk thread and the audio thread using lock-free queues
> 5. in the real-time thread, buffers are allocated from and returned to a lock-free freelist. For example, if you have a node that generates a sine wave, it needs to allocate a buffer when it runs to put its data in. Instead of calling malloc, it asks for a free buffer from the freelist.
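
To make points 3 and 5 concrete, here is a rough C++ sketch of the kind of command queue and buffer freelist described above. The names (SpscQueue, GraphCommand, BufferFreelist), the fixed capacities, and the ring-buffer layout are illustrative assumptions, not code from either implementation:

// Minimal single-producer/single-consumer ring buffer: the only primitive
// needed between the main thread (producer) and the audio thread (consumer).
#include <atomic>
#include <cstddef>
#include <vector>

template <typename T, std::size_t Capacity>
class SpscQueue {
public:
    bool push(const T& item) {            // producer thread only
        std::size_t head = head_.load(std::memory_order_relaxed);
        std::size_t next = (head + 1) % Capacity;
        if (next == tail_.load(std::memory_order_acquire))
            return false;                 // full; caller retries later
        slots_[head] = item;
        head_.store(next, std::memory_order_release);
        return true;
    }
    bool pop(T& item) {                   // consumer thread only
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;                 // empty
        item = slots_[tail];
        tail_.store((tail + 1) % Capacity, std::memory_order_release);
        return true;
    }
private:
    T slots_[Capacity];
    std::atomic<std::size_t> head_{0};
    std::atomic<std::size_t> tail_{0};
};

// Point 3: the main thread does the allocation up front and ships a pointer.
struct Node;                              // opaque DSP node, built in the main thread
struct GraphCommand {
    enum { AddNode, Connect } op;
    Node* node;                           // already allocated; the audio thread
    Node* target;                         // never calls new/delete
};

// Point 5: buffers cycle through a pre-allocated freelist instead of malloc/free.
struct Buffer { float samples[512]; };

class BufferFreelist {
public:
    explicit BufferFreelist(std::size_t count) {   // filled in the main thread
        storage_.resize(count);                    // before audio starts
        for (Buffer& b : storage_) free_.push(&b);
    }
    Buffer* acquire() {                   // RT-safe: no allocation, no locks
        Buffer* b = nullptr;
        free_.pop(b);
        return b;                         // nullptr if the pool is exhausted
    }
    void release(Buffer* b) { free_.push(b); }
private:
    std::vector<Buffer> storage_;
    SpscQueue<Buffer*, 1024> free_;       // capacity must exceed 'count'
};

// Audio callback (point 2): drain pending graph edits, then run the graph once.
void audio_callback(SpscQueue<GraphCommand, 256>& commands, BufferFreelist& pool)
{
    GraphCommand cmd;
    while (commands.pop(cmd)) {
        // apply cmd to the graph: pointer juggling only, no heap work
    }
    Buffer* out = pool.acquire();         // e.g. a sine node grabs a scratch buffer
    if (!out) return;                     // pool exhausted: never fall back to malloc here
    // ... run the graph, fill 'out', hand it downstream ...
    pool.release(out);
}

The point of the sketch is only that the callback never allocates or takes a lock: everything it touches was allocated in the main thread and handed over through the queues.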
Works just like the scheme I am using, with the exception that the graph management code always runs in the audio thread. That means lock-free FIFOs for everything and their cousin, but glitch-free performance with absolutely no locks in the real-time code path is worth the effort.

tim

PS: 'always runs in the audio thread' is not exactly true. In my implementation, the Graph object runs a non-RT thread of its own that takes over graph management while no AudioClock object has been added yet, so 'never runs in the main thread' is more like it. The benefit is that graph-management invocation is uniform; the downside is that setting up the initial graph takes a little longer because of the communication overhead.
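
In rough terms, that arrangement could look something like the following (reusing the SpscQueue from the sketch above; Graph, AudioClock and the method names are placeholders, not the actual objects):

#include <atomic>
#include <chrono>
#include <thread>

struct GraphEdit { /* add node, connect, remove, ... */ };

class Graph {
public:
    Graph() : worker_([this] { management_loop(); }) {}
    ~Graph() { stop_worker(); }

    // Main thread: only ever enqueues; it never touches graph state directly.
    void request_edit(const GraphEdit& e) { edits_.push(e); }

    // Called when an AudioClock is added, before its callback starts running:
    // the helper thread is retired first, so the edit queue always has exactly
    // one consumer, and graph management moves into the audio thread.
    void attach_audio_clock() { stop_worker(); }

    // Audio callback path: apply pending edits, then process one block.
    void audio_tick() {
        drain_edits();
        // ... run the signal graph for this callback ...
    }

private:
    void drain_edits() {
        GraphEdit e;
        while (edits_.pop(e)) {
            // apply the edit; this code path is identical whether it runs
            // in the helper thread or in the audio thread
        }
    }

    // Non-RT helper thread: owns graph management while no AudioClock has
    // been added yet, so even the initial graph is built off the main thread
    // (at the cost of a little messaging overhead, as noted above).
    void management_loop() {
        while (running_.load()) {
            drain_edits();
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
    }

    void stop_worker() {
        running_.store(false);
        if (worker_.joinable()) worker_.join();
    }

    SpscQueue<GraphEdit, 256> edits_;     // one lock-free FIFO per direction in practice
    std::atomic<bool> running_{true};
    std::thread worker_;                  // declared last: starts after the members above
};

The handover shown here simply retires the helper thread before audio starts, so the FIFO never has two consumers at once; the real implementation may coordinate that step differently.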