> The runtime also is designed to minimize L1 cache misses, more on that
> if there is interest.

I would be interested in some of the details.

> Regarding the Nile C runtime, inter-thread communication is currently
> not a bottleneck. 

what are the current Nile bottlenecks?

>  Queuing is a relatively infrequent
> occurrence (compared to computation and stream data read/writing),

How is that so? I would have assumed that joins in the stream processor 
effectively become a reader-writer problem.

>  plus the queues are kept on a per-process basis (there are many more
> Nile processes than OS-level threads running) which scales well.


Could this arbitrary distinction between process scheduling, OS threads, and 
app-level threads (greenlets, Nile processes, etc.) be removed entirely with a 
pluggable hierarchical scheduler? For example, at the top level you might have 
a completely fair scheduler (4 processes = 1/4 of the time for your process, 
assuming you can make use of it). Within that, it's up to you to divvy up your 
share. I'm picturing this along the lines of how the Mach kernel had external 
memory pagers that you could plug in if you ever had a better page-eviction 
model for your domain.
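
To make the idea concrete, here is a minimal sketch in C of what such a 
pluggable hierarchy might look like. All of the names (sched_t, fair_pick, 
etc.) are hypothetical, not part of Nile or Mach; the top level round-robins 
over child schedulers so each gets an equal share of pick()s, and each child 
is free to plug in its own policy (a plain FIFO here) for its share.

```c
/* Hypothetical pluggable hierarchical scheduler sketch.
   Names are illustrative only, not any real Nile or Mach API. */
#include <stddef.h>

typedef struct sched sched_t;
typedef struct task  task_t;

struct task {
    void   (*run)(task_t *self);  /* task body */
    task_t  *next;                /* intrusive queue link */
};

/* A scheduler node: pick() returns the next runnable task (or NULL).
   Non-leaf nodes delegate to children; leaves hold a task queue. */
struct sched {
    task_t *(*pick)(sched_t *self);
    void    (*enqueue)(sched_t *self, task_t *t);
    sched_t **children;
    size_t    nchildren;
    size_t    rr_cursor;   /* used by the fair top-level policy */
    task_t   *queue;       /* used by leaf policies */
};

/* Top-level "completely fair" policy: round-robin over children,
   so with 4 children each sees 1/4 of the pick()s it can use. */
static task_t *fair_pick(sched_t *s)
{
    for (size_t i = 0; i < s->nchildren; i++) {
        sched_t *child = s->children[(s->rr_cursor + i) % s->nchildren];
        task_t *t = child->pick(child);
        if (t) {
            s->rr_cursor = (s->rr_cursor + i + 1) % s->nchildren;
            return t;
        }
    }
    return NULL;  /* nothing runnable anywhere in the subtree */
}

/* Leaf FIFO policy: one way a child could divvy up its own share. */
static task_t *fifo_pick(sched_t *s)
{
    task_t *t = s->queue;
    if (t) s->queue = t->next;
    return t;
}

static void fifo_enqueue(sched_t *s, task_t *t)
{
    t->next = NULL;
    task_t **p = &s->queue;
    while (*p) p = &(*p)->next;  /* append at tail */
    *p = t;
}
```

Swapping in a different child policy (priorities, deadlines, a greenlet 
run-loop) is then just a matter of providing different pick/enqueue function 
pointers, which is the "external pager" flavor of pluggability.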

Then there's obviously the question of how that interacts with the HW model 
for thread/process switching and memory barriers, but that seems like a 
separate problem.

shawn

_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc