On Sun, 08 Nov 2009 17:17:38 -0500, Aubrey Jaffer <a...@alum.mit.edu> wrote:

> Implicit-concurrency doesn't mandate that any concurrency be used.
> And cautious application of concurrency is entirely compatible with
> implicit-concurrency.  For instance, one might only fork threads when
> the subexpressions call user-defined functions, or user-defined
> functions whose definitions are over 40 lines long.

If I understand you correctly, you are proposing that we place a new  
semantic restriction on the properties of unspecified orders of  
evaluation in order to eliminate the need for potentially impossible or  
intractable code analysis to determine whether an expression may be  
evaluated in parallel. At the same time, naive spawning of concurrency  
isn't helping anyone. The main worry that your proposal seems to address  
is that when you naively execute in parallel two expressions which  
mutate the same state, and that mutation is not thread-safe (as in your  
example of random), you may get a result that doesn't make sense.
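
To make the failure mode concrete, here is the kind of expression at  
issue. The random below is a toy linear-congruential sketch of my own,  
not any particular implementation's:

    (define *seed* 42)

    (define (random n)
      ;; Read-modify-write on *seed*: not thread-safe.
      (set! *seed* (modulo (+ (* *seed* 1103515245) 12345)
                           2147483648))
      (modulo *seed* n))

    ;; Forking the operands naively can interleave the two updates
    ;; of *seed*, yielding a pair of values that no serial order of
    ;; evaluation could have produced.
    (+ (random 10) (random 10))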

I don't see how this really benefits us much, because the requirement  
for serialized execution already makes such concurrent applications  
illegal, and it guarantees that the behavior of the system is the same  
in threaded and non-threaded environments. Put another way, you  
shouldn't be able to have an optimization that changes the way the code  
executes in such a way that you can't duplicate it on a non-concurrent  
system. You haven't eliminated the need for code analysis: we still  
don't know when to parallelize and when not to. Many Schemes already  
track mutation through their systems to know whether a function is  
"dirty". You are proposing that the system apply its optimizations  
conservatively anyway, so it doesn't seem to be a problem to be  
conservative here, either.
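
To sketch what I mean by tracking dirt: imagine every known procedure  
carries the set of global locations it may touch, and anything unknown  
is assumed to touch everything. The names below (dirt-table, dirt,  
may-parallelize?) are illustrative only, not from any existing system:

    ;; Toy effect table: the locations each procedure may read or
    ;; write.  Procedures absent from the table have unknown effects.
    (define dirt-table
      '((random  . (seed))      ; mutates the PRNG seed
        (sqrt    . ())          ; pure
        (display . (port))))    ; writes to a port

    (define (dirt proc)
      (cond ((assq proc dirt-table) => cdr)
            (else #f)))         ; unknown: assume the worst

    (define (disjoint? a b)
      (cond ((or (not a) (not b)) #f)   ; unknown dirt blocks it
            ((null? a) #t)
            ((memq (car a) b) #f)
            (else (disjoint? (cdr a) b))))

    ;; Two calls may run concurrently only when both dirt sets are
    ;; known and do not overlap.
    (define (may-parallelize? f g)
      (disjoint? (dirt f) (dirt g)))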

In the case of your random example, both calls would dirty the same  
state, and as such, executing them concurrently could lead to an  
unserializable result. So you can't optimize it. Done. If this random  
function came from another module, you may know nothing about it. Fine,  
you can't optimize it.
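
Run through the toy analysis above, both refusals fall out directly,  
and the genuinely independent case is still admitted:

    (may-parallelize? 'random 'random)   ; => #f, same dirt: seed
    (may-parallelize? 'random 'mystery)  ; => #f, mystery's dirt unknown
    (may-parallelize? 'sqrt   'display)  ; => #t, disjoint dirt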

However, other cases can still be optimized easily. Referentially  
transparent functions, for example, which don't mutate state, can be  
parallelized easily within the serialized model. Functions which dirty  
disjoint state spaces can be parallelized as well. This analysis can be  
done, and fast algorithms exist that work in similar domains: function  
inlining, for example, requires that you know whether a function  
binding may be mutated, in which case it cannot be inlined. Still, the  
problem remains of knowing when to make execution concurrent. This  
isn't changed in an implicitly concurrent Scheme (ICS), and the way in  
which ICS makes such analysis easier doesn't appear to me to be enough  
to warrant its adoption in the Scheme standard.
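
For the cases that do pass the analysis, the transformation itself is  
simple. Here is a minimal sketch, assuming an SRFI 18 thread API, of  
what a compiler could emit for (+ (f) (g)) once both operands are  
proven pure:

    ;; Evaluate g on a fresh thread while f runs on this one; safe
    ;; only because the analysis proved both operands pure.
    (define (par+ f g)
      (let ((tg (thread-start! (make-thread g))))
        (+ (f) (thread-join! tg))))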

You can't get around the problem of procedures existing in other  
modules, and you can't reasonably do anything with those in either  
normal Scheme or IC Scheme. You can't easily determine the best metrics  
for automatic parallelization in either one, so you have to be  
conservative either way, and you won't get a tight result. ICS doesn't  
change any of this, unless I totally miss something. The only thing ICS  
seems to do is shift the responsibility for one problem (determining  
whether concurrent execution is possible) away from the compiler, and  
that problem is, I think, possibly the easiest of the problems here. Do  
we really want this? Can compilers do this for us? I think so, but I'm  
prepared to be wrong, so please, lay it on me.

        Aaron W. Hsu

-- 
Of all tyrannies, a tyranny sincerely exercised for the good of its  
victims may be the most oppressive. -- C. S. Lewis
