My original reply had a mistake in it; please disregard it. The following completely replaces it:
Hi Dale,

You are right, that's a mistake in the paper. You should switch start-0 and start-1 on the top branch. Believe it or not, I did not write this section :) It's not really about Nile, but about a particular approach to dataflow in Nothing.

In fact, beware of these sentences: "The way the Nile runtime works was generalized. Instead of expecting each kernel to run only when all of its input is ready, then run to completion and die, the Nothing version keeps running any kernel that has anything to do until they have all stopped."

The above makes it sound like Nile runtimes in general wait for all the input to be ready before running a process. This is only true of the Squeak and JavaScript versions of the Nile runtime. The C-based multithreaded one does not. Sorry I didn't catch this before publication.

Regardless, the Nothing work strayed a bit from the Nile model of computation, and not in directions I would take it, so don't take too much about Nile from that section.

Also, I wouldn't advocate writing Fibonacci like this in Nile. Nile was designed for coarse-grained dataflow, not fine-grained dataflow. The main reasons for this were my opinions that 1) mathematical statements are often more readable than their visual, fine-grained dataflow equivalents* and 2) coarse-grained dataflow can be quite readable due to fewer communication paths, and thus easier composition; in many cases it contains only a simple left-to-right flow. On top of that, it is easier to efficiently parallelize coarse-grained dataflow because there is much less communication between components, allowing parallel hardware to operate more independently.

* For very simple statements, this may not be so true, but when scaling up to more practical examples, I think fine-grained dataflow gets messy fast.

Regards,
Dan

_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
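[Editor's note] The coarse-grained, left-to-right flow Dan describes can be sketched outside Nile. This is not Nile syntax, just a minimal Python analogy using generators: each "kernel" (the names `scale`, `clamp`, and `pipeline` are invented for this illustration) transforms a stream, and a pipeline composes kernels left to right with a single communication path between stages.

```python
def scale(stream, factor):
    # Kernel: multiply each item of the input stream by a factor.
    for x in stream:
        yield x * factor

def clamp(stream, lo, hi):
    # Kernel: clamp each item of the input stream into [lo, hi].
    for x in stream:
        yield max(lo, min(hi, x))

def pipeline(source, *kernels):
    # Compose kernels left to right: each stage's output stream
    # feeds the next stage's input stream.
    stream = iter(source)
    for kernel in kernels:
        stream = kernel(stream)
    return list(stream)

result = pipeline([1, 5, 10],
                  lambda s: scale(s, 3),
                  lambda s: clamp(s, 0, 20))
# result == [3, 15, 20]
```

Because generators are lazy, each stage pulls items as they become available rather than waiting for its whole input, loosely mirroring the pull-style scheduling discussed above; the point here is only the single left-to-right communication path that makes coarse-grained composition easy to read.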