I am having some difficulty producing correct code for a simple
binary-to-decimal conversion program. The arithmetic I want to use is the
doubling method: to get the decimal equivalent of 1001, I would start by
multiplying 0 by 2 and adding the leftmost digit. I would then take that
sum of 1, multiply it by 2, and add the next digit to the product, and so
on until I arrive at a final sum of 9.
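Just to sketch the arithmetic I mean (the function name is made up, and
this is only my understanding of the doubling method, not tested against
anything official):

```python
def bin_to_dec(bits):
    # Doubling method: start at 0; for each digit left to right,
    # double the running total and add the digit.
    total = 0
    for bit in bits:
        total = total * 2 + int(bit)
    return total

result = bin_to_dec("1001")  # should be 9
```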

I seem to come up with something like this:

binum = raw_input("Please enter a binary number:  ")


binsum = 0
for i in range(0, len(binum), 1):
    item = "0"
    if i < len(binum) - 1:
        item = binum[i + 1]

    binsum = binsum * int(item) * 2 + binsum + int(binum[i])


print "\nThe binary number", binum, "you entered converts to", binsum, "in decimal."


I can't figure out what is going wrong here. Please help me - this has
been driving me insane.
_______________________________________________
Tutor maillist  -  Tutor@python.org
http://mail.python.org/mailman/listinfo/tutor