Re: Floating point "g" format not stripping trailing zeros
Ian Kelly writes:

> When you specify a precision of 15 in your format string, you're
> telling it to take the first 15 of those. It doesn't care that the
> last couple of those are zeros, because as far as it's concerned,
> those digits are significant.

OK, it's a bit surprising, but also consistent with the rest of the
decimal module. Thanks for clearing it up.

My concrete use case is printing an arbitrary fraction as a
user-readable decimal, rounded to the specified number of digits, and
using exponential notation where appropriate:

    import decimal

    _dec_fmt_context = decimal.Context(prec=15,
                                       rounding=decimal.ROUND_HALF_UP)

    def _format(frac):
        with decimal.localcontext(_dec_fmt_context):
            dec = (decimal.Decimal(frac.numerator)
                   / decimal.Decimal(frac.denominator))
            return '{:g}'.format(dec)

The decimal obtained by dividing the numerator by the denominator
includes trailing zeros. Calling normalize() to get rid of them has
the unfortunate side effect of turning 9806650 into 9.80665e+6, and
the method recommended in the documentation:

    def remove_exponent(d):
        return (d.quantize(decimal.Decimal(1))
                if d == d.to_integral() else d.normalize())

...raises "decimal.InvalidOperation: quantize result has too many
digits for current context" when the number is too large.

For now I'm emulating the behavior of '%g' on floats, using
rstrip('0') to get rid of the trailing zeros:

        ...
        s = '{:g}'.format(dec)
        if '.' in s and 'e' not in s:
            s = s.rstrip('0')
            s = s.rstrip('.')
        return s

-- 
https://mail.python.org/mailman/listinfo/python-list
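The two failure modes described in this post can be reproduced directly. A minimal sketch: only the value 9806650 comes from the post itself; the 1E+20 value is an assumed stand-in for a number that is "too large" for the context precision.

```python
import decimal

ctx = decimal.Context(prec=15, rounding=decimal.ROUND_HALF_UP)

with decimal.localcontext(ctx):
    # normalize() strips trailing zeros, but switches whole numbers
    # that end in zeros to exponential notation.
    print(decimal.Decimal('9806650').normalize())  # 9.80665E+6

    # The remove_exponent() recipe from the decimal docs quantizes
    # integral values back to exponent 0, which needs more digits
    # than the context precision allows for large enough numbers.
    try:
        decimal.Decimal('1E+20').quantize(decimal.Decimal(1))
    except decimal.InvalidOperation as exc:
        print('InvalidOperation raised:', exc)
```

(The exact exception message differs between the pure-Python and C implementations of decimal; the quoted wording is from the pure-Python version.)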
Re: Floating point "g" format not stripping trailing zeros
> >> from decimal import Decimal as D
> >> x = D(1)/D(999)
> >> '{:.15g}'.format(x)
> >>
> >> '0.00100100100100100'

[...]

> > I'd say it's a bug. P is 15, you've got 17 digits after the decimal
> > place and two of those are insignificant trailing zeros.
>
> Actually it's the float version that doesn't match the documentation.
> In the decimal version, sure there are 17 digits after the decimal
> place there, but the first two -- which are leading zeroes -- would
> not normally be considered significant.

{:.15g} is supposed to give 15 digits of precision, but with trailing
zeros removed. For example, '{:.15g}'.format(Decimal('0.5')) should
yield '0.5', not '0.500' -- and indeed it does. It is only for some
numbers that the trailing zeros are not removed, which looks like a
bug. The behavior of floats matches both the documentation and other
languages that use the 'g' format, such as C.

> The float version OTOH is only giving you 13 significant digits when
> 15 were requested.

It is giving 15 significant digits if you count the trailing zeros
that have been removed. If those two digits had not been zeros, they
would have been included. This is again analogous to
'{:.15g}'.format(0.5) returning '0.5'.
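The digit accounting in this exchange can be checked mechanically. A sketch: `significant_digits` is a hypothetical helper, not from the thread, that counts printed digits after dropping the sign, the decimal point, and leading zeros.

```python
from decimal import Decimal

def significant_digits(s):
    """Hypothetical helper: count significant digits in a fixed-point
    numeric string. Leading zeros are dropped; trailing zeros still
    count, since they were actually printed."""
    return len(s.lstrip('-').replace('.', '').lstrip('0'))

# Decimal keeps the two trailing zeros: 15 printed significant digits.
dec_s = '{:.15g}'.format(Decimal(1) / Decimal(999))
print(dec_s, significant_digits(dec_s))
# '0.00100100100100100' / 15 per the 2.7.8 and 3.4.1 behavior reported above

# float strips them, printing 13 digits (of the 15 requested).
flt_s = '{:.15g}'.format(1.0 / 999)
print(flt_s, significant_digits(flt_s))  # 0.001001001001001 13
```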
Floating point "g" format not stripping trailing zeros
According to the documentation of the "g" floating-point format,
trailing zeros should be stripped from the resulting string:

"""
General format. For a given precision p >= 1, this rounds the number
to p significant digits and then formats the result in either
fixed-point format or in scientific notation, depending on its
magnitude. [...] In both cases insignificant trailing zeros are
removed from the significand, and the decimal point is also removed
if there are no remaining digits following it.
"""

However, in some cases the trailing zeros apparently remain:

>>> from decimal import Decimal as D
>>> x = D(1)/D(999)
>>> '{:.15g}'.format(x)
'0.00100100100100100'

For floats, the trailing zeros are removed:

>>> '{:.15g}'.format(1. / 999)
'0.001001001001001'

This behavior is present in both 2.7.8 and 3.4.1. Is this a bug in
the formatting of Decimals?
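For comparison, the documented 'g' algorithm does work as described on floats: round to p significant digits, pick fixed-point or scientific notation by magnitude, then strip insignificant trailing zeros. A quick sketch with illustrative values (not from the post):

```python
print('{:.4g}'.format(0.5))         # '0.5'    (trailing zeros stripped)
print('{:.6g}'.format(100.0))       # '100'    (decimal point removed too)
print('{:.4g}'.format(0.0001))      # '0.0001' (exponent -4: still fixed-point)
print('{:.4g}'.format(0.00001))     # '1e-05'  (smaller: scientific notation)
print('{:.4g}'.format(1234567.0))   # '1.235e+06'
```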
Py_DECREF after an exception
I'm wondering what happens to the exception info during object
cleanup immediately after an exception is raised. Consider this code:

    PyObject *args = Py_BuildValue("(O(O){})", name, parent);
    if (!args)
        return NULL;
    PyObject *val = some_python_func(x, args, NULL);
    Py_DECREF(args);
    if (!val)
        return NULL;

The idea is to propagate the exception possibly raised by
some_python_func while at the same time avoiding a memory leak. But
Py_DECREF can cause arbitrary Python code to be executed, including
code that eventually ends up calling PyErr_Clear when it wants to
ignore some unrelated exception. This could cause the exception
information to be forgotten. Is there a way around this, or is there
a reason why this is not a problem in practice?