On Saturday, August 23, 2014 12:04:51 PM UTC-5, Cecil Westerhof wrote:
>
> 2014-08-23 7:06 GMT+02:00 Mars0i <mars...@logical.net>:
>
>> (1) Others may disagree, but ... although I love Lisps,  think that 
>> purely functional programming is cool, and have come to see that there are 
>> situations in which FP is *great*, I am not someone who thinks that FP 
>> is always clearer (easier to understand) or as clear as imperative 
>> programming.  For some jobs, one can be made clearer than the other.  
>> Maybe, in addition to providing examples in which FP is clearer, you would 
>> want to provide an example in which it's less clear.  Since so many people 
>> go around promoting the latest, greatest thing that will solve all of your 
>> programming problems, it might build trust to show that you understand that 
>> there are tradeoffs.
>>
>
> Can you share which things you find less clear?
>

FP textbooks sometimes claim that recursion is clearer than looping with 
side effects.  I think it depends--some cases are clearer in one form than 
the other.  The examples that the textbooks give often don't convince me.  
(This view isn't just based on the fact that loops are more familiar to most 
programmers.  My feeling is that, other things being equal with respect to 
a programmer's experience with loops and recursion, loops are sometimes 
clearer.)
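
To make the kind of comparison I mean concrete, here's a toy pair of 
definitions (my own, not taken from any textbook) that compute the same 
sum.  Clojure's loop/recur isn't literally a side-effecting loop, but it 
plays the structural role of one:

    ;; Loop-style: thread an accumulator through explicit iteration.
    (defn sum-loop [xs]
      (loop [total 0
             xs    (seq xs)]
        (if xs
          (recur (+ total (first xs)) (next xs))
          total)))

    ;; Recursion in the textbook style (not stack-safe for long lists;
    ;; just for comparison).
    (defn sum-rec [xs]
      (if (seq xs)
        (+ (first xs) (sum-rec (rest xs)))
        0))

Neither strikes me as dramatically clearer here; which one reads better 
seems to depend on the reader and the problem.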

Also, I think that simple lazy sequences aren't always better than a loop.  
In many cases, the difference between (take n (map my-fn my-sequence)) and 
a loop that terminates when an index reaches n is trivial.  As I wrote 
earlier, though, I do think there are situations in which being able to map 
or doseq different functions over a sequence facilitates modularity, and 
then laziness gives you the option of keeping no more in memory than you 
need.
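
To be concrete about that first point, here's roughly the comparison I 
have in mind (a sketch; it assumes my-sequence is a vector and that n is 
no larger than its count):

    ;; Lazy version: only the first n mapped values are ever computed.
    (take n (map my-fn my-sequence))

    ;; Loop that terminates when the index reaches n.
    (loop [i 0, acc []]
      (if (< i n)
        (recur (inc i) (conj acc (my-fn (my-sequence i))))
        acc))

In a case like this, I don't see a big clarity difference either way.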

Here's an example that's immediately available to me, since it involves 
code I wrote this past week.  

First, here's something from a Java routine (actually Processing) that's 
part of Ben Fry's force-directed graph layout code, available at 
http://benfry.com/writing/archives/3 (in Node.pde, in the first directory of 
the chapter 8 zip file):

    float ddx = 0;
    float ddy = 0;

    for (int j = 0; j < nodeCount; j++) {
      Node n = nodes[j];
      if (n != this) {
        float vx = x - n.x;
        float vy = y - n.y;
        float lensq = vx * vx + vy * vy;
        if (lensq == 0) {
          ddx += random(1);
          ddy += random(1);
        } else if (lensq < 10000) {
          ddx += vx / lensq;
          ddy += vy / lensq;
        }
      }
    }

Without fully understanding the rationale for every part of this code, I 
translated it into Clojure:

  (let [nodes (remove #(= this-node %) all-nodes)
        {x :x y :y dx :dx dy :dy} this-node
        sum-normalized-diffs (fn [[ddx ddy]
                                  {n-x :x n-y :y}] ; the next node to examine
                               (let [vx (- x n-x)
                                     vy (- y n-y)
                                     lensq (+ (* vx vx) (* vy vy))]
                                 (cond (== lensq 0)    [(+ ddx (rand)) (+ ddy (rand))]
                                       (< lensq 10000) [(+ ddx (/ vx lensq))
                                                        (+ ddy (/ vy lensq))]
                                       :else           [ddx ddy])))
        [ddx ddy] (reduce sum-normalized-diffs [0 0] nodes)]
    ;; rest of the let body elided; it goes on to use ddx and ddy
    ;; (together with dx and dy) to update the node
    [ddx ddy])

(Maybe this is not a good example.  There is probably a clearer way to 
write this in Clojure--it's just what I came up with pretty quickly.  It 
might also help if I understood the purpose of the original code better.  
And that's assuming there are no bugs ....)

I don't think Fry's Java version is a *lot* easier to understand than my 
Clojure code, but I do think it's a little easier.  For example, I think 
that this part of the Java

        if (lensq == 0) {
          ddx += random(1);
          ddy += random(1);
        } else if (lensq < 10000) {
          ddx += vx / lensq;
          ddy += vy / lensq;
        }

is pretty clear: 
If P, add a quantity into two variables; if Q, add a different quantity; 
and if neither P nor Q, leave the variables at the values they had.  
Keep adjusting the values of those two variables as you loop through all 
nodes except this one.  

I think my Clojure version of this code is a little less easy to read, and 
certainly no clearer.  Among other things, I think that passing ddx and 
ddy around together in 2-element vectors is a little less clear than keeping 
them completely separate on their own lines, as in the Java code.  And, 
somehow, wrapping all of this into a function passed to reduce seems a 
little harder to understand, to me.
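
For what it's worth, one restructuring that occurred to me (just a 
sketch--untested against the real layout code, and not obviously 
clearer) is to compute each node's contribution as its own pair and sum 
the pairs at the end, so that the reducing step stays trivial:

    (defn contribution
      "Pair [ddx-part ddy-part] contributed by the node at [n-x n-y]
      toward the node at [x y]."
      [[x y] {n-x :x n-y :y}]
      (let [vx    (- x n-x)
            vy    (- y n-y)
            lensq (+ (* vx vx) (* vy vy))]
        (cond (== lensq 0)    [(rand) (rand)]
              (< lensq 10000) [(/ vx lensq) (/ vy lensq)]
              :else           [0 0])))

    ;; with x, y, and nodes bound as in my version above:
    (let [[ddx ddy] (reduce (partial mapv +)
                            [0 0]
                            (map (partial contribution [x y]) nodes))]
      [ddx ddy])

I'm not sure that's actually easier to read, though; it mostly just 
moves the pair-threading out of the accumulating function.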

Others may disagree with me, perhaps because their experience is 
different.  (Also, note that I am *not* complaining about Clojure in this 
case.  I prefer the Clojure code for various reasons--that's why I wrote 
it.  I just think the Java is a little easier to read.  In this case.  It's 
not something I say very often!)
