I find it absolutely hilarious that you mention linked lists in the same 
breath as the need to work with multicore processors. You realize that 
linked lists, by their very nature, are practically impervious to 
parallelization? At least with array lists you can split 'em up into 
blocks and hand each block off. Scala and/or functional programming aren't 
some sort of magic faerie dust that makes everything work just peachy in 
the multicore world. Splitting jobs at the highest of levels: there's your 
magic faerie dust. Fortunately it's something your web servlet dispatcher 
is probably already doing for you. Yay for frameworks.
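For concreteness, here's roughly what I mean by splitting an array up into blocks and handing each block off — a sketch in plain Java, all names mine:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: compute the max of an int[] by giving each of N blocks to its
// own task. This works because an array splits into index ranges for free;
// a linked list gives you no such cheap handle on "the middle".
public class BlockMax {
    static int parallelMax(int[] data, int blocks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(blocks);
        try {
            int blockSize = (data.length + blocks - 1) / blocks;
            List<Future<Integer>> results = new ArrayList<>();
            for (int b = 0; b < blocks; b++) {
                final int from = b * blockSize;
                final int to = Math.min(from + blockSize, data.length);
                results.add(pool.submit(() -> {
                    int max = Integer.MIN_VALUE;
                    for (int i = from; i < to; i++) max = Math.max(max, data[i]);
                    return max;
                }));
            }
            int max = Integer.MIN_VALUE;
            for (Future<Integer> f : results) max = Math.max(max, f.get());
            return max;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        int[] data = {3, 141, 59, 26, 535, 89, 79};
        System.out.println(parallelMax(data, 2)); // prints 535
    }
}
```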

On Monday, July 23, 2012 12:31:44 AM UTC+2, KWright wrote:
>
> It's not just the methods, it's the data structures.  For example, 
> recursion makes a LOT of sense for iterating over an immutable linked list, 
> and is often one of the first techniques taught within LISP.  The need 
> seems less apparent in an imperative/mutable paradigm, but I think it's 
> fair to predict that we'll see less and less of this style as core counts 
> follow the inevitable curve of Moore's law and as technologies like OpenCL 
> become integrated into the Java ecosystem (NVidia's flagship GTX 690 has 
> 3072 cores, and we're seeing this tech creeping back into CPUs...)
>  
>

> Only if tools are not adapted to show the presence of TCO in a stack 
> trace.  Which seems highly unlikely if this becomes a core JVM optimisation.
>
>
This kind of complication is exactly why it's not worth it. It's not a 
simple matter of finding tail calls into another method and replacing the 
subroutine call with a 'ditch my frame, then goto' approach.
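To make that concrete, a sketch (Java, names mine) of the difference between a call an optimizer could rewrite and one it can't:

```java
// Sketch: why a try/finally block defeats tail-call elimination.
public class TailCalls {
    static int cleanupCount = 0;

    // A genuine tail call: once countdown(n - 1) begins, this frame holds
    // nothing anyone needs, so "ditch my frame, then goto" would be legal.
    static int countdown(int n) {
        if (n == 0) return 0;
        return countdown(n - 1);
    }

    // NOT eliminable: this frame must survive the recursive call, because
    // the finally block still has to run after that call returns.
    static int countdownGuarded(int n) {
        try {
            if (n == 0) return 0;
            return countdownGuarded(n - 1);
        } finally {
            cleanupCount++; // pins the frame: runs once per level, on the way out
        }
    }

    public static void main(String[] args) {
        System.out.println(countdown(1000));  // prints 0
        countdownGuarded(1000);
        System.out.println(cleanupCount);     // prints 1001: one per frame
    }
}
```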
 

>  
>
>> * Resource Management (i.e. try/finally blocks) instantly kill TCO. The 
>> requirement that some method remains TCOable is a gigantic pain in the 
>> behind for code maintainers.
>>
>
> Try/finally blocks mess with almost any declarative/parallel technique 
> available.  This is why Scala has `Either`, Haskell has `Maybe` and Akka's 
> futures will trap and return an Exception instead of throwing it.
>

Yet more distracting bullpucky on how parallel programming is somehow 
forcing us into doing entirely different things. try/finally isn't going 
away: resources need managing, period. There's no link between TCO and 
parallelism, no matter how hard you try to make one. Languages that have 
traditionally favoured the functional approach have also tended to like 
TCO, but that's correlation, not causation. Similarly, recursion is still 
completely irrelevant here. If I want to, say, find the maximum value in a 
list using multiple cores, I'd write something like:

int maxVal = myList.par().reduce(Aggregates::max);

Two observations:

(A) What in the flying blazes does this have to do with recursion and TCO?

(B) I sure hope that myList is NOT a linked list, because if it is, 
performance is going to suck.
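For what it's worth, the hypothetical `par()` / `Aggregates::max` spelling above maps onto the java.util.stream API (as it eventually shipped) roughly like this — my translation, not the original snippet:

```java
import java.util.Arrays;
import java.util.List;

public class ParallelMaxStream {
    public static void main(String[] args) {
        // Backed by an array, so the parallel stream can split it into
        // balanced index ranges for the common fork/join pool.
        List<Integer> myList = Arrays.asList(7, 42, 3, 19);
        int maxVal = myList.parallelStream()
                           .reduce(Integer.MIN_VALUE, Integer::max);
        System.out.println(maxVal); // prints 42
    }
}
```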

Do you have a clue as to why a linked list is just the end of the line, 
performance-wise? How crazy you sound here? The node objects of a linked 
list are just _HORRIBLE_ from a modern CPU architecture point of view 
(pointer chasing means cache misses), and any data structure which can 
only be traversed accumulation style is the WORST possible thing you could 
ever have from a parallelism point of view. It needs to be trivially easy 
to take any given data structure and split it into X roughly equal pieces. 
Assuming each piece is itself such a data structure, you can hardcode X to 
any value you please so long as it is larger than 1. For example, a conc 
list is trivially splittable into 2 roughly equally sized conc lists. To 
parallelize a job on a conc list, you check if you are the singleton or 
empty list and return the appropriate number; if not, you ask your left 
child to go figure out the answer for itself in a separate thread, then 
you ask the right side to figure out the answer in this thread; when that 
returns you wait for the left, and then you can trivially combine the two 
into an answer. Cons lists (i.e. LinkedLists) are not trivially splittable 
into 2 batches. They are therefore stupid, from a parallelism perspective.
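That left-in-another-thread, right-in-this-thread recipe is exactly the fork/join shape. A sketch using Java 7's ForkJoinPool, with index ranges over an array standing in as the trivially splittable structure (a real conc list would split on its two children instead):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sketch: parallel max via recursive splitting into 2 roughly equal pieces.
public class SplitMax extends RecursiveTask<Integer> {
    private static final int CUTOFF = 1000;
    private final int[] data;
    private final int from, to;

    SplitMax(int[] data, int from, int to) {
        this.data = data; this.from = from; this.to = to;
    }

    @Override
    protected Integer compute() {
        if (to - from <= CUTOFF) {             // small enough: answer directly
            int max = Integer.MIN_VALUE;
            for (int i = from; i < to; i++) max = Math.max(max, data[i]);
            return max;
        }
        int mid = (from + to) >>> 1;
        SplitMax left = new SplitMax(data, from, mid);
        left.fork();                                        // left: another thread
        int right = new SplitMax(data, mid, to).compute();  // right: this thread
        return Math.max(left.join(), right);                // wait, then combine
    }

    public static void main(String[] args) {
        int[] data = new int[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i * 31 % 9973;
        System.out.println(new ForkJoinPool().invoke(
                new SplitMax(data, 0, data.length))); // prints 9972
    }
}
```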

 

>
> Not least, the `return` statement itself.  In almost all functional 
> languages the final expression evaluated in a function becomes the return value. 
>  More than anything else, I've found this makes it far easier to reason 
> about what can (and what can't) be optimised using tail-calls.
>
>
Yes, you are a Scala junkie. No, this STILL doesn't mean that the rest of 
the Java community thinks like you. Which part of "the current crop of 
Java programmers do not think like you, and do not understand what is and 
is not TCO-able" is hard to understand?

-- 
You received this message because you are subscribed to the Google Groups "Java 
Posse" group.
