Re: on writing a number as 2^s * q, where q is odd

2023-12-03 Thread Julieta Shem via Python-list
jak  writes:

[...]

>> --8<---cut here---start->8---
>> def powers_of_2_in(n):
>>   if remainder(n, 2) != 0:
>>     return 0, n
>>   else:
>>     s, r = powers_of_2_in(n // 2)
>>     return 1 + s, r
>> --8<---cut here---end--->8---
>
> for n = 0 your function gets a stack overflow

That's right.  Let's say ``assert n > 0'' before we say ``if''.
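
Something like this minimal sketch, with the remainder helper inlined
as %:

--8<---cut here---start->8---
def powers_of_2_in(n):
  assert n > 0, "n must be a positive integer"
  if n % 2 != 0:
    return 0, n
  else:
    s, r = powers_of_2_in(n // 2)
    return 1 + s, r
--8<---cut here---end--->8---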
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: on writing a number as 2^s * q, where q is odd

2023-12-03 Thread Julieta Shem via Python-list
Alan Bawden  writes:

> jak  writes:
>
>Alan Bawden wrote:
>> Julieta Shem  writes:
>>
>> How would you write this procedure?
>> def powers_of_2_in(n):
>> ...
>>
>> def powers_of_2_in(n):
>>     return (n ^ (n - 1)).bit_count() - 1
>>
>
>Great solution; unfortunately the return value is not a tuple as in the
>OP's version.  Maybe this way?
>
>def powers_of_2_inB(n):
>    bc = (n ^ (n - 1)).bit_count() - 1
>    return bc, int(n / (1 << bc))
>
> Good point.  I overlooked that.  I should have written:
>
> def powers_of_2_in(n):
>     bc = (n ^ (n - 1)).bit_count() - 1
>     return bc, n >> bc

That's pretty fancy and likely the fastest.  
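
If I read the trick right: subtracting 1 flips the lowest set bit of n
and every zero below it, so n ^ (n - 1) is a mask whose popcount is
exactly one more than the number of trailing zero bits, that is, one
more than the exponent of 2.  A quick check of that reading at the REPL
(my own; note that int.bit_count() needs Python 3.10+):

--8<---cut here---start->8---
>>> n = 40                      # 40 == 2**3 * 5
>>> bin(n), bin(n - 1), bin(n ^ (n - 1))
('0b101000', '0b100111', '0b1111')
>>> (n ^ (n - 1)).bit_count() - 1
3
--8<---cut here---end--->8---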

I was pretty happy with a recursive version.  If I'm not mistaken,
nobody has offered one so far.  It's actually my favorite because it
seems to be the clearest.

--8<---cut here---start->8---
def powers_of_2_in(n):
  if remainder(n, 2) != 0:
return 0, n
  else:
s, r = powers_of_2_in(n // 2)
return 1 + s, r
--8<---cut here---end--->8---
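
And a quick cross-check of the two definitions over small inputs (my
own sketch; both are renamed here so they can coexist, and remainder is
inlined as % so it runs standalone):

--8<---cut here---start->8---
def powers_of_2_in_rec(n):        # the recursive version above
  if n % 2 != 0:
    return 0, n
  else:
    s, r = powers_of_2_in_rec(n // 2)
    return 1 + s, r

def powers_of_2_in_bits(n):       # Alan's bit-counting version
  bc = (n ^ (n - 1)).bit_count() - 1
  return bc, n >> bc

assert all(powers_of_2_in_rec(n) == powers_of_2_in_bits(n)
           for n in range(1, 10000))
--8<---cut here---end--->8---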
-- 
https://mail.python.org/mailman/listinfo/python-list


on writing a number as 2^s * q, where q is odd

2023-11-29 Thread Julieta Shem via Python-list
How would you write this procedure?

--8<---cut here---start->8---
def powers_of_2_in(n):
  s = 0
  while "I still find factors of 2 in n...":
q, r = divmod(n, 2)
if r == 0:
  s = s + 1
  n = n // 2
else:
  return s, n
--8<---cut here---end--->8---
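
For concreteness, the kind of call I have in mind (my own examples):

--8<---cut here---start->8---
>>> powers_of_2_in(40)      # 40 == 2**3 * 5
(3, 5)
>>> powers_of_2_in(7)
(0, 7)
>>> powers_of_2_in(64)
(6, 1)
--8<---cut here---end--->8---

(And n = 0 is not handled: here the loop would never terminate, just as
the recursive version never bottoms out.)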
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: on a tail-recursive square-and-multiply

2023-11-09 Thread Julieta Shem via Python-list
Julieta Shem  writes:

[...]

> I agree.  By the way, I once read or watched an interview with Guido
> van Rossum in which he was asked why Python does not optimize tail
> calls, and the answer he gave --- IIRC --- was that tail-call
> optimization makes it harder for a beginner to understand a stack
> trace.  I'm now interested in discovering whether that was his sole
> reason.  Do you (or does anyone) know of any reference that I could
> look into?  Thank you.

It seems everyone refers to Guido van Rossum's post at

  https://neopythonic.blogspot.com/2009/04/final-words-on-tail-calls.html

From the page, confusing users is /not/ the only reason.  After mentioning
it first, he begins a second paragraph saying ``[t]he main issue here
[...]'' and I wonder if by ``main issue'' he means something like ---
the main /problem/ with tail-call optimization (for me) is [...].

--8<---cut here---start->8---
  Personally, I think it is a fine feature for some languages, but I
  don't think it fits Python: The elimination of stack traces for some
  calls but not others would certainly confuse many users, who have not
  been raised with tail call religion but might have learned about call
  semantics by tracing through a few calls in a debugger.

  The main issue here is that I expect that in many cases tail calls are
  not of a recursive nature (neither direct nor indirect), so the
  elimination of stack frames doesn't do anything for the algorithmic
  complexity of the code, but it does make debugging harder. For
  example, if you have a function ending in something like this:

  if x > y:
      return some_call(z)
  else:
      return 42

  and you end up in the debugger inside some_call() whereas you expected
  to have taken the other branch, with TCO as a feature your debugger
  can't tell you the value of x and y, because the stack frame has been
  eliminated.
--8<---cut here---end--->8---
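
To see that point concretely, here is a small sketch of my own (the
branchy() wrapper is made up; some_call() is from Guido's example)
showing that CPython, which does not eliminate tail calls, keeps the
caller's frame in the traceback, so x and y are still inspectable:

--8<---cut here---start->8---
import traceback

def some_call(z):
  raise ValueError("pretend the debugger stops here")

def branchy(x, y, z):
  if x > y:
    return some_call(z)   # a tail call, but CPython keeps this frame
  else:
    return 42

try:
  branchy(3, 1, "whatever")
except ValueError:
  traceback.print_exc()   # the trace shows branchy() and some_call()
--8<---cut here---end--->8---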
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: on a tail-recursive square-and-multiply

2023-11-08 Thread Julieta Shem via Python-list
Greg Ewing  writes:

> On 8/11/23 2:26 pm, Julieta Shem wrote:
>> For the first time I'm trying to write a tail-recursive
>> square-and-multiply and, even though it /seems/ to work, I'm not happy
>> with what I wrote and I don't seem to understand it so well.
>
> Stepping back a bit, why do you feel the need to write this
> tail-recursively? Is it just an exercise?

Yes --- an exercise.  (It would be relevant if I were in a language that
gives me tail-call optimization.  I'm preparing myself for when that
happens.)

> Note that Python doesn't optimise tail calls, so anything that
> can be done tail-recursively is probably better done iteratively.

I agree.  By the way, I once read or watched an interview with Guido
van Rossum in which he was asked why Python does not optimize tail
calls, and the answer he gave --- IIRC --- was that tail-call
optimization makes it harder for a beginner to understand a stack
trace.  I'm now interested in discovering whether that was his sole
reason.  Do you (or does anyone) know of any reference that I could
look into?  Thank you.

>> --8<---cut here---start->8---
>> def sam(b, e, m, acc = 1):
>>   if e == 0:
>>     return acc
>>   if is_even(e):
>>     return sam(remainder(b * b, m), e//2, m, acc)
>>   else:
>>     return sam(b, e - 1, m, remainder(b * acc, m))
>> --8<---cut here---end--->8---
>> You see, I tried to use an accumulator, but I'm only accumulating when
>> the exponent is odd.  When it's even, I feel I'm forced to change the
>> base into b * b mod m and leave the accumulator alone.  This feels so
>> unnatural to me.  I feel I broke some symmetry there.  I'm having to
>> think of two cases --- when I change the accumulator and when I change
>> the base.  That seems too much for my small head.  Can you help?
>
> Well, there are inherently two cases, and they're different, so
> I don't think you're doing anything wrong here. It was asymmetrical
> to begin with. If you were doing it iteratively you would also be
> leaving the accumulator alone when the exponent is even.

I think you're quite right and that was not clear until now.  Here's my
iterative version of the procedure:

--8<---cut here---start->8---
def isam(b, e, m):
  r = 1
  while e > 0:
if is_even(e):
  b = remainder(b * b, m)
  e = e // 2
else:
  r = remainder(r * b, m)
  e = e - 1
  return remainder(r, m)
--8<---cut here---end--->8---

So, indeed, it is asymmetric.  When it's even, I change the base.  When
it's odd, I change the /r/eturning value.  Thank you so much.

(*) The remainder and is_even helpers

--8<---cut here---start->8---
def is_even(n):
  return remainder(n, 2) == 0

def remainder(a, b):
  return a % b
--8<---cut here---end--->8---
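
With those helpers in place, a quick sanity check (mine) of isam
against the built-in three-argument pow:

--8<---cut here---start->8---
>>> all(isam(b, e, m) == pow(b, e, m)
...     for b in range(50) for e in range(50) for m in range(2, 30))
True
--8<---cut here---end--->8---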
-- 
https://mail.python.org/mailman/listinfo/python-list


on a tail-recursive square-and-multiply

2023-11-07 Thread Julieta Shem via Python-list
For the first time I'm trying to write a tail-recursive
square-and-multiply and, even though it /seems/ to work, I'm not happy
with what I wrote and I don't seem to understand it so well.

--8<---cut here---start->8---
def sam(b, e, m, acc = 1):
  if e == 0:
return acc
  if is_even(e):
return sam(remainder(b * b, m), e//2, m, acc)
  else:
return sam(b, e - 1, m, remainder(b * acc, m))
--8<---cut here---end--->8---
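
To make the two cases concrete, here is how the calls unfold for a
small input (my own trace, not runnable code):

--8<---cut here---start->8---
sam(2, 10, 1000)                 # e even: square the base
  = sam(4, 5, 1000, acc=1)       # e odd:  fold the base into acc
  = sam(4, 4, 1000, acc=4)       # e even: square the base
  = sam(16, 2, 1000, acc=4)      # e even: square the base
  = sam(256, 1, 1000, acc=4)     # e odd:  fold the base into acc
  = sam(256, 0, 1000, acc=24)    # e == 0: return acc
  = 24                           # and indeed pow(2, 10, 1000) == 24
--8<---cut here---end--->8---

Every step is a tail call, which is why I wanted to write it this way
in the first place.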

You see, I tried to use an accumulator, but I'm only accumulating when
the exponent is odd.  When it's even, I feel I'm forced to change the
base into b * b mod m and leave the accumulator alone.  This feels so
unnatural to me.  I feel I broke some symmetry there.  I'm having to
think of two cases --- when I change the accumulator and when I change
the base.  That seems too much for my small head.  Can you help?
-- 
https://mail.python.org/mailman/listinfo/python-list