> Both work but I am not satisfied. Is there not a method that could do the job 
> of '+' operator (namely concatenate in a copy)?
>

No; unfortunately Python doesn't make functional approaches as
convenient as I would personally like.


Copying a list is itself expensive (it takes time proportional to the
list's length), which I think is why the list API encourages a
mutational approach.  If we were to define our own data structures, we
could make the functional approach more natural and efficient.
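
For a sense of contrast, here's a minimal sketch (mine, not from your
assignment) of a merge that never mutates its inputs and instead
builds a fresh result list; the name 'merged' is just illustrative:

######################################
# A non-mutating merge: builds and returns a new list, leaving
# list1 and list2 untouched.
def merged(list1, list2):
  result = []
  i, j = 0, 0
  while i < len(list1) and j < len(list2):
    if list1[i] <= list2[j]:
      result.append(list1[i])
      i += 1
    else:
      result.append(list2[j])
      j += 1
  # At most one of these slices is non-empty; it holds the leftovers.
  result.extend(list1[i:])
  result.extend(list2[j:])
  return result
######################################

It does pay for building the extra list, but the inputs come back out
exactly as they went in.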



> As for my first code :
>
> def linear_merge1(list1, list2):
>
>   if list1==[]: return list2
>   elif list2==[]: return list1
>   elif list1[-1]>list2[-1]:
>     a=list1.pop()
>     linear_merge(list1,list2).append(a)
>     return linear_merge(list1,list2)
>   else:
>     a=list2.pop()
>     linear_merge(list1,list2).append(a)
>     return linear_merge(list1,list2)
>
>
> I don't get why it works to be fair. It feels like return 
> linear_merge(list1,list2) does not call the function again.


That's because the behavior of this version is weird.  :)


Its contract could be stated as follows:

----------------------------------------

Given two sorted lists, linear_merge1 does the following three things:

    A. Mutates one of the lists to be empty,

    B. Mutates the other list to have the merge-sorted results, and

    C. Returns the reference to the merge-sorted list.

----------------------------------------
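
To make the contract concrete, here's a small check we could run
against your quoted code (assuming 'linear_merge' is just another name
for linear_merge1, since that's what its recursive calls refer to):

######################################
linear_merge = linear_merge1   # the recursive calls use this name

xs = [1, 4, 6]
ys = [2, 3, 5]
result = linear_merge1(xs, ys)

print(result)        # [1, 2, 3, 4, 5, 6]   -- condition C
print(ys)            # []                   -- condition A: one list empties out
print(xs)            # [1, 2, 3, 4, 5, 6]   -- condition B: the other has the merge
print(result is xs)  # True: the reference we get back is the mutated list
######################################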

How can we be sure?  We can prove this to ourselves by induction.


We'll do the induction on the combined number of elements in list1
and list2.  Call this combined size 'n'.

Base case: if n = 0, then both list1 and list2 are empty, and all
three conditions A, B, and C hold trivially.  That's easy enough.

Ok, time for the analysis for the n > 0 case.  As our induction
hypothesis, assume that all three conditions A, B, and C are true when
we call linear_merge1 for input with a combined size < n.  We want to
show that, if we're assuming that the induction hypothesis is true for
those smaller input sizes, then those three conditions will continue
to hold for the case where the input is of size n.


For the case where the input size n > 0, there are two situations to
consider, and together they cover all the possibilities:

Case 1: either list1 or list2 is empty.  If so, we hit one of the two
initial branches in the code that check for emptiness.  Since one of
the lists is already empty, nothing needs to be mutated, so condition
A holds.  The other list is already sorted, so it already *is* the
merge-sorted result, and condition B holds.  Finally, condition C
holds because that list is exactly what gets returned.


Case 2: neither list1 nor list2 is empty.

Let's first consider the case where list1[-1] > list2[-1].  Then when
we perform:

    a = list1.pop()

list1 still is a sorted list, though it is smaller.  When we call:

    linear_merge(list1,list2).append(a)

then, since the combined size of list1 and list2 is now less than n,
we can apply our induction hypothesis and assume that the following
things happen:

    * either list1 or list2 empties out, due to condition A being
applied to the smaller input.

    * we add the element 'a' to whichever merge-sorted list was
returned by the recursive call.

After that call, either list1 or list2 contains the merge-sorted
results, and the other list is empty.

Finally, the funky second call to:

    return linear_merge(list1, list2)

will return either list1 or list2: we know from that first recursive
call that one of the two lists is now empty, so this second call
immediately hits one of the emptiness checks and returns the other
list.  So what gets returned here is the reference to the list that
holds the merge-sorted results.

Ok, that closes that chain of reasoning for the case where list1[-1] >
list2[-1].  And since we can make a similar chain of reasoning for the
other case, we can stop here.
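
To see the two recursive calls play their separate roles, here's a
hand trace on the smallest interesting input (my own illustration):

######################################
# Hand trace of linear_merge1([1], [2]):
#
#   list1 = [1], list2 = [2], and 1 > 2 is false, so we take the else branch:
#     a = list2.pop()                    # a = 2, list2 is now []
#     linear_merge(list1, list2)         # hits "list2 == []", returns list1, i.e. [1]
#                 .append(a)             # list1 is now [1, 2]   <- accumulation
#     return linear_merge(list1, list2)  # list2 is still [], so this just
#                                        # returns list1, i.e. [1, 2]   <- selection
#
# The first recursive call does the merging; the second only picks out
# which list ended up holding the results.
######################################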

---


That's a rough math proof sketch of why linear_merge1 is working for you.


As we can see, we have to do a lot more reasoning about what state
our values are in, due to all the mutation happening.  It also shows
that the second recursive call to linear_merge() is not really there
to merge: it's really there to select the list that was used to
accumulate the results.  We'd get the same results with:

######################################
def linear_merge1(list1, list2):
  if list1 == []: return list2
  elif list2 == []: return list1
  elif list1[-1] > list2[-1]:
    a = list1.pop()
    # the first recursive call does the actual merging/accumulating
    linear_merge1(list1, list2).append(a)
    # one of the lists is now empty; 'or' picks the non-empty one
    return list1 or list2
  else:
    a = list2.pop()
    linear_merge1(list1, list2).append(a)
    return list1 or list2
######################################

and depend on Python to pick whichever value is truthy, which in the
case of lists means the non-empty one.  That is, the second recursive
calls are red herrings.  That's what initially made me do a
double-take when I saw it, and I had to reason to myself why in the
world it was working: it's all due to the mutational aspects of that
approach.
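
To spell out the truthiness trick:

######################################
# 'or' returns its first truthy operand, and an empty list is falsy:
print([] or [1, 2, 3])       # [1, 2, 3]
print([1, 2, 3] or [])       # [1, 2, 3]
######################################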



So, yeah, the code is *much* too clever for its own good.  :P