Bug in Decimal??
> I'll follow up directly with the author of mpdecimal, as this is
> somewhat serious on a language that's so widely used as python.
> But please test it and confirm, am I seeing ghost digits?

This has already been settled on libmpdec-devel, but for the list:

As Mark Dickinson has already explained, you need to increase the
intermediate precision. For example:

>>> from decimal import *
>>> getcontext().prec = 4000
>>> one = Decimal(1)
>>> number = Decimal('1e-1007')
>>> partial = (one+number)/(one-number)
>>> getcontext().prec = 2016
>>> partial.ln()
Decimal('2.0000000000000000000000000000007E-1007')

Otherwise 'partial' has an error that is too large when you pass it to
the ln() function. Since decimal mostly follows IEEE 754 with arbitrary
precision extensions, it cannot behave differently.

Stefan Krah

--
https://mail.python.org/mailman/listinfo/python-list
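The recipe above generalizes: do the intermediate arithmetic at a raised precision, then round once at the end. A minimal sketch (the guard-digit factor below is an illustrative choice, not a tight error bound):

```python
from decimal import Decimal, localcontext

def ln_ratio(x, prec):
    """Compute ln((1+x)/(1-x)) to `prec` digits, carrying extra
    intermediate precision so the argument passed to ln() is accurate."""
    one = Decimal(1)
    with localcontext() as ctx:
        ctx.prec = 2 * prec + 10   # guard digits; an illustrative choice
        result = ((one + x) / (one - x)).ln()
        ctx.prec = prec
        return +result             # unary plus rounds to `prec` digits

print(ln_ratio(Decimal("0.5"), 28))  # ln(3) to 28 digits
```

The unary plus at the end is the decimal module's idiom for "round this value in the current context".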
Re: python decimal library dmath.py v0.3 released
[I found this via the python-ideas thread]

Wolfgang Maier biologie.uni-freiburg.de> writes:
> math.factorial is accurate and faster than your pure-Python function,
> especially for large numbers.

It is slower for huge numbers than decimal if you use this Python
function:

http://www.bytereef.org/mpdecimal/quickstart.html#factorial-in-pure-python

Be sure to set MAX_EMAX and MIN_EMIN, that's missing in the example.

If you want to *see* all digits of a very large number, then decimal is
probably even faster than gmpy. See:

http://www.bytereef.org/mpdecimal/benchmarks.html#arbitrary-precision-libraries

Stefan Krah

--
https://mail.python.org/mailman/listinfo/python-list
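A sketch of the advice about the exponent limits (this is not the linked code, just an illustration): a fixed-precision decimal factorial must raise Emax/Emin to the module maxima, or large n overflows the default exponent range.

```python
from decimal import Context, Decimal, MAX_EMAX, MIN_EMIN

def dec_factorial(n, prec=28):
    """Factorial at fixed decimal precision (inexact for large n).
    Setting Emax/Emin to the module limits - the post's advice -
    avoids spurious Overflow once n! exceeds the default Emax."""
    c = Context(prec=prec, Emax=MAX_EMAX, Emin=MIN_EMIN)
    result = Decimal(1)
    for i in range(2, n + 1):
        result = c.multiply(result, i)
    return result
```

For small n the result is still exact, since it fits in the working precision.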
Re: Decimal 0**0
Steven D'Aprano wrote:
> Does anyone have an explanation why Decimal 0**0 behaves so differently
> from float 0**0?
>
> Tested in both Python 2.7 and 3.3, float 0**0 returns 1, as I would expect:

The behavior follows the specification:

http://speleotrove.com/decimal/daops.html#refpower

Why exactly the decision was made I cannot say.

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
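The difference is easy to demonstrate (a Python 3 sketch): the spec defines 0**0 as an invalid operation, which Python's decimal traps by default; with the trap disabled, the quiet result is NaN.

```python
from decimal import Decimal, InvalidOperation, localcontext

assert 0.0 ** 0 == 1.0  # float follows the common convention 0**0 == 1

try:
    Decimal(0) ** Decimal(0)
except InvalidOperation:
    print("Decimal raises InvalidOperation, per the spec")

# With the trap disabled, the spec's quiet result is NaN:
with localcontext() as ctx:
    ctx.traps[InvalidOperation] = False
    assert (Decimal(0) ** Decimal(0)).is_nan()
```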
Re: Thorough Python 2.7.3 Windows Build Documentation?
Leonard, Arah wrote:
> By the way, do you happen to know how tricky it is to get Python 2.7.3
> to build with VS 2010? Or have any tips there? It doesn't seem to be
> officially supported, but it sure would be nice to get out of the dark
> ages of MS compilers and be only one version behind the latest.

I wouldn't attempt it. For Python 3.3 the conversion has been done and
the diff was enormous.

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: Thorough Python 2.7.3 Windows Build Documentation?
Leonard, Arah wrote:
> But after benchmarking a PGO build made by running the build_pgo.bat it
> turns out that it made no difference whatsoever to my performance loss.
> Within an acceptable statistical variation in the benchmark tool itself
> my PGO build performed identically to my regular build. Which means that
> I'm still running 30% slower than the precompiled binaries somehow even
> though I'm theoretically using the right compiler. And which also means
> that I can neither confirm nor deny if the released precompiled binaries
> are PGO builds or not.

I remember that some versions of Visual Studio silently completed the
PGO build without actually having PGO capabilities. :)

I think for VS 2008 at least "Professional" is needed, for VS 2010
"Ultimate".

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: Thorough Python 2.7.3 Windows Build Documentation?
Leonard, Arah wrote:
> I'm building a 32-bit CPython 2.7.3 distro for Windows using the MS
> Visual Studio Professional 2008 SP1 (and all hotfixes) MSVC 9 compiler.
> My build works, technically, but it also happens to benchmark over 30%
> slower than the precompiled binaries in the distributed Python 2.7.3
> MSI. Can anyone point me in the direction of some thoroughly detailed
> build documentation so that I can figure out how to get that 30% back
> with my build?

I think the official binaries use the PGO build. Be sure to run all
tests thoroughly, we've had problems with the PGO build in 3.3.

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: Py 3.3, unicode / upper()
wxjmfa...@gmail.com wrote:
> But, this is not the problem.
> I was suprised to discover this:
>
> >>> 'Straße'.upper()
> 'STRASSE'
>
> I really, really do not know what I should think about that.
> (It is a complex subject.) And the real question is why?

http://de.wikipedia.org/wiki/Gro%C3%9Fes_%C3%9F#Versalsatz_ohne_gro.C3.9Fes_.C3.9F

"The current official rules [6] for the new German orthography have no
capital letter for ß: every letter exists as a lowercase and as an
uppercase letter (exception: ß). For all-caps text the rules recommend
replacing ß with SS: when writing in capital letters, one writes SS,
for example: Straße -- STRASSE."

According to the new official spelling rules the uppercase ß does not
exist. The recommendation is to use "SS" when writing in all-caps.

As to why: It has always been acceptable to replace ß with "ss" when ß
wasn't part of a character set. In the new spelling rules, ß has been
officially replaced with "ss" in some cases:

http://en.wiktionary.org/wiki/da%C3%9F

The uppercase ß isn't really needed, since ß does not occur at the
beginning of a word. As far as I know, most Germans wouldn't even know
that it has existed at some point or how to write it.

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
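The behaviour, and its Unicode context, can be checked directly (a Python 3 sketch; note that Unicode does define a capital ẞ at U+1E9E, even though the official orthography of the time did not use it):

```python
s = 'Straße'
assert s.upper() == 'STRASSE'      # ß uppercases to SS, per the rules quoted above
assert s.casefold() == 'strasse'   # casefolding likewise maps ß -> ss
assert '\u1e9e'.lower() == 'ß'     # LATIN CAPITAL LETTER SHARP S lowercases to ß
```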
Re: On-topic: alternate Python implementations
Steven D'Aprano wrote:
> Who would want to deal with C's idiosyncrasies, low-powered explicit
> type system, difficult syntax, and core-dumps, when you could use
> something better?

In the free software world, apparently many people like C.

C is also quite popular in the zero-fault software world: Several
verification tools do exist and Leroy et al. are writing a certified
compiler for C to plug the hole between the verified source code and
the generated assembly.

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: Why is python source code not available on github?
Peter Otten <__pete...@web.de> wrote:
> gmspro wrote:
> > Why is python source code not available on github?

Why should every free software project be available on a single
proprietary platform?

Also, see:

http://arstechnica.com/business/2012/03/hacker-commandeers-github-to-prove-vuln-in-ruby/

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: [ANN] cdecimal-2.3 released
Paul Rubin wrote:
> > Both cdecimal and libmpdec have an extremely conservative release policy.
> > When new features are added, the complete test suite is run both with and
> > without Valgrind on many different platforms. With the added tests against
> > decNumber, this takes around 8 months on four cores.
>
> Wow. I wonder whether it's worth looking into some formal verification
> if the required level of confidence is that high.

Currently four of the main algorithms (newton-div, inv-sqrt, sqrt, log)
and a couple of auxiliary functions have proofs in ACL2. The results are
mechanically verified Lisp forms that are guaranteed to produce results
*within correct error bounds* in a conforming Lisp implementation.

Proving full conformance to the specification including all rounding
modes, Overflow etc. would be quite a bit of additional work.

For C, I think the why3 tool should be a good approach:

http://why3.lri.fr/

The verification of the L4 kernel allegedly took 30 man-years, so it
might take a while...

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
[ANN] cdecimal-2.3 released
Hi,

I'm pleased to announce the release of cdecimal-2.3. cdecimal is a fast
drop-in replacement for the decimal module in Python's standard library.


Blurb
=====

cdecimal is a complete implementation of IBM's General Decimal Arithmetic
Specification. With the appropriate context parameters, cdecimal will also
conform to the IEEE 754-2008 Standard for Floating-Point Arithmetic.

Typical performance gains over decimal.py are between 30x for I/O heavy
benchmarks and 80x for numerical programs. In a PostgreSQL database
benchmark, the speedup is 12x.

    +---------+---------+----------+---------+
    |         | decimal | cdecimal | speedup |
    +=========+=========+==========+=========+
    |   pi    |  42.75s |   0.58s  |   74x   |
    +---------+---------+----------+---------+
    |  telco  | 172.19s |   5.68s  |   30x   |
    +---------+---------+----------+---------+
    | psycopg |   3.57s |   0.29s  |   12x   |
    +---------+---------+----------+---------+

In the pi benchmark, cdecimal often performs better than Java's
BigDecimal running on Java HotSpot(TM) 64-Bit Server VM.


What's New
==========

o The underlying library - libmpdec - now has a very comprehensive test
  suite against decNumber.

o libmpdec now has full support for compilers without uint64_t.

o Code coverage of cdecimal has been increased to 100%. Now both
  libmpdec and cdecimal have 100% coverage.

o Improved code for conversion of small Python integers to Decimals
  leads to a performance gain of around 15%.

o Added real(), imag(), conjugate(), __complex__() methods.

o Add Fraction and complex comparisons (enabled for Python 3.2).

o Support for DecimalTuple output.


Stability
=========

Both cdecimal and libmpdec have an extremely conservative release
policy. When new features are added, the complete test suite is run both
with and without Valgrind on many different platforms. With the added
tests against decNumber, this takes around 8 months on four cores.
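The IEEE 754-2008 conformance mentioned in the blurb is a matter of context parameters. A sketch using the stdlib decimal API (which cdecimal mirrors as a drop-in replacement); the name IEEEContext128 is just illustrative, and the parameters are the standard decimal128 interchange-format values:

```python
from decimal import Context, Decimal, ROUND_HALF_EVEN

# IEEE 754-2008 decimal128: 34 digits, Emax = 6144, Emin = -6143
IEEEContext128 = Context(prec=34, Emax=6144, Emin=-6143,
                         rounding=ROUND_HALF_EVEN)

third = IEEEContext128.divide(Decimal(1), Decimal(3))
print(third)  # rounded to 34 significant digits
```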
Install
=======

Since cdecimal is listed on PyPI, it can be installed using pip:

    pip install cdecimal

Windows installers are available at:

    http://www.bytereef.org/mpdecimal/download.html


Links
=====

    http://www.bytereef.org/mpdecimal/index.html
    http://www.bytereef.org/mpdecimal/changelog.html
    http://www.bytereef.org/mpdecimal/download.html


Checksums of the released packages
==================================

03f76f4acbb6e7f648c6efc6e424bbc1b4afb5632dac5196f840e71f603a2b4a  mpdecimal-2.3.tar.gz
b0fd5bec2cc6a6035bc406339d020d2f4200a7dce8e8136a2850612a06508ed1  mpdecimal-2.3.zip
d737cbe43ed1f6ad9874fb86c3db1e9bbe20c0c750868fde5be3f379ade83d8b  cdecimal-2.3.tar.gz

84afd94126549a3c67c3bab7437d085347f9d05c  cdecimal-2.3.win-amd64-py2.6.msi
ba0fbb1f9314dcef29481414a5c3496ec159df2e  cdecimal-2.3.win-amd64-py2.7.msi
d11bbd560e9cb9d34b0e7a068ac1c1eac5371428  cdecimal-2.3.win-amd64-py3.1.msi
d024148ea603dc8e82f8371ebdfaa0e65f5a9945  cdecimal-2.3.win-amd64-py3.2.msi
d196a9e0b44dcb75bbf4eda44078b766e6113f72  cdecimal-2.3.win32-py2.6.msi
e2b044da6c241df0911059216821c9865cb9e4f0  cdecimal-2.3.win32-py2.7.msi
7e8b47eb3a2f50191e76f981fbe55050f13495e8  cdecimal-2.3.win32-py3.1.msi
61be767b91aab0ba0d602fb2b23f6d882cafec05  cdecimal-2.3.win32-py3.2.msi
a2278910a5b447af963e1d427dbeb48f49e377be  cdecimal-2.3-no-thread.win-amd64-py2.6.msi
8da96d2f1ab1a98062cd43cb4f381b47309d8c22  cdecimal-2.3-no-thread.win-amd64-py2.7.msi
85cd3ff4496aa7e0d0979d1695eef27cc7735c28  cdecimal-2.3-no-thread.win-amd64-py3.1.msi
6c179a1284aceb3a7bfc481daae1d7d60359d487  cdecimal-2.3-no-thread.win-amd64-py3.2.msi
40f245e907512c5d3602ba5993755a0b4b67ca80  cdecimal-2.3-no-thread.win32-py2.6.msi
960eb9bfd9fcf0faee6493506c1917d46536193a  cdecimal-2.3-no-thread.win32-py2.7.msi
42b651ee1bf4c94611c43522d69f1965515949b8  cdecimal-2.3-no-thread.win32-py3.1.msi
ec26f14c35502d1d5488d440d7bc22ad41e9ac65  cdecimal-2.3-no-thread.win32-py3.2.msi

--
http://mail.python.org/mailman/listinfo/python-list
Re: .format vs. %
Neil Cerutti wrote:
> On 2012-01-03, Stefan Krah wrote:
> > $ ./python -m timeit -n 100 '"%s" % 7.928137192'
> > 100 loops, best of 3: 0.0164 usec per loop
> >
> > % is faster, but not by an order of magnitude.
>
> On my machine:
>
> C:\WINDOWS>python -m timeit -n 100 -s "n=7.92" "'%s' % n"
> 100 loops, best of 3: 0.965 usec per loop
>
> C:\WINDOWS>python -m timeit -n 100 -s "n=7.92" "'{}'.format(n)"
> 100 loops, best of 3: 1.17 usec per loop

Indeed, I was a bit surprised by the magnitude of the difference. Your
timings seem to be in line with the difference seen in the real-world
benchmark.

It isn't a big deal, considering that the numeric formatting functions
have so many options, e.g.:

>>> "{:020,}".format(712312312.2)
'00,000,712,312,312.2'

Still, it's nice to have a faster choice.

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: .format vs. %
Neil Cerutti wrote:
> > In the real-world telco benchmark for _decimal, replacing the
> > single line
> >
> >     outfil.write("%s\n" % t)
> >
> > with
> >
> >     outfil.write("{}\n".format(t))
> >
> > adds 23% to the runtime. I think %-style formatting should not
> > be deprecated at all.
>
> When it becomes necessary, it's possible to optimize it by
> hoisting out the name lookups.
>
>    ...
>    outfil_write = outfil.write
>    append_newline = "{}\n".format
>    ...
>        outfil_write(append_newline(t))
>    ...

Did you profile this? I did, and it still adds 22% to the runtime.

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: .format vs. %
Andrew Berg wrote:
> To add my opinion on it, I find format() much more readable and easier
> to understand (with the exception of the {} {} {} {} syntax), and would
> love to see %-style formatting phased out.

For me the %-style is much more readable. Also, it is significantly
faster:

$ ./python -m timeit -n 100 '"%s" % 7.928137192'
100 loops, best of 3: 0.0164 usec per loop

$ ./python -m timeit -n 100 '"{}".format(7.928137192)'
100 loops, best of 3: 1.01 usec per loop

In the real-world telco benchmark for _decimal, replacing the single line

    outfil.write("%s\n" % t)

with

    outfil.write("{}\n".format(t))

adds 23% to the runtime. I think %-style formatting should not be
deprecated at all.

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
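The comparison can be reproduced with the timeit module. Note one caveat (an editorial observation, not from the thread): `'%s' % 7.928137192` with a literal operand can be constant-folded by CPython's peephole optimizer, which makes the in-line version look implausibly fast; binding the float to a name in the setup avoids that and gives a fairer comparison. Absolute numbers vary by machine.

```python
import timeit

setup = "n = 7.928137192"
t_percent = timeit.timeit("'%s' % n", setup=setup, number=100000)
t_format = timeit.timeit("'{}'.format(n)", setup=setup, number=100000)
print("%%-formatting: %.4fs   str.format: %.4fs" % (t_percent, t_format))
```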
Re: How to build 64-bit Python on Solaris with GCC?
Skip Montanaro wrote:
> Thanks. I have several different versions in my local sandbox. None
> are 64-bit ELFs. Just to make sure I hadn't missed some new development
> in this area, I cloned the hg repository and build the trunk version
> from scratch. I get a 32-bit executable on Solaris:
>
> % file ./python
> ./python: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
> dynamically linked (uses shared libs), not stripped

./configure CFLAGS=-m64 LDFLAGS=-m64

should work with a reasonably recent revision.

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: memory management
Juan Declet-Barreto wrote:
> Well, I am using Python 2.5 (and the IDLE shell) in Windows XP, which
> ships with ESRI's ArcGIS. In addition, I am using some functions in the
> arcgisscripting Python geoprocessing module for geographic information
> systems (GIS) applications, which can complicate things. I am currently
> isolating standard library Python code (e.g., os.walk()) from the
> arcgisscripting module to evaluate in which module the environment
> crash is occurring.

It might be a good idea to check if the problem also occurs with Python
2.7, since Python 2.5 is no longer maintained.

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: Generating equally-spaced floats with least rounding error
Arnaud Delobelle wrote:
> >>>> start, stop, n = -1, 1.1, 7
> >>>> [float(F(start) + i*(F(stop)-F(start))/n) for i in range(n+1)]
> > [-1.0, -0.7, -0.39997, -0.09996,
> > 0.20004, 0.5001, 0.8, 1.1]
>
> >>> start, stop, n = -1, 1.1, 7
> >>> [((n-i)*start + i*stop)/n for i in range(n+1)]
> [-1.0, -0.7001, -0.39997,
> -0.09996, 0.20004, 0.5, 0.8, 1.1]
>
> On these examples, using fractions is no better than what I suggested
> in my previous post.

Why not use Decimal if one needs exact endpoints? Something like:

from decimal import Decimal, localcontext, Inexact

def drange(start, stop, step=1):
    if step == 0:
        raise ValueError("step must be != 0")
    with localcontext() as ctx:
        ctx.traps[Inexact] = True
        x = start
        cmp = -1 if step > 0 else 1
        while x.compare(stop) == cmp:
            yield float(x)
            x += step

>>> list(drange(Decimal(1), Decimal("3.1"), Decimal("0.3")))
[1.0, 1.3, 1.6, 1.9, 2.2, 2.5, 2.8]
>>> list(drange(Decimal(-1), Decimal("1.1"), Decimal("0.3")))
[-1.0, -0.7, -0.4, -0.1, 0.2, 0.5, 0.8]
>>> list(drange(Decimal(-1), Decimal("1.1"), Decimal("0.1823612873")))
[-1.0, -0.8176387127, -0.6352774254, -0.4529161381, -0.2705548508,
-0.0881935635, 0.0941677238, 0.2765290111, 0.4588902984, 0.6412515857,
0.823612873, 1.0059741603]
>>> list(drange(Decimal(-1), Decimal("1.1"), Decimal(1)/3))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 10, in drange
  File "/usr/lib/python3.2/decimal.py", line 1178, in __add__
    ans = ans._fix(context)
  File "/usr/lib/python3.2/decimal.py", line 1652, in _fix
    context._raise_error(Inexact)
  File "/usr/lib/python3.2/decimal.py", line 3836, in _raise_error
    raise error(explanation)
decimal.Inexact: None

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: way to calculate 2**1000 without expanding it?
Gary Herron wrote:
> > > i am writing a program to sum up the digits of a number 2**1000?
> > > Is there a way/formula to do it without expanding it?
>
> Here's another one-liner using a generator instead of map:
>
> sum(int(c) for c in str(2**1000))

The OP did not specify the base:

>>> bin(2**1000).count('1')
1

Or just:

>>> print(1)
1

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
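Both one-liners can be verified directly; the base-10 digit sum of 2**1000 is 1366, and in base 2 a power of two has exactly one set bit:

```python
assert sum(int(c) for c in str(2**1000)) == 1366  # base-10 digit sum
assert sum(map(int, str(2**1000))) == 1366        # the map() variant
assert bin(2**1000).count('1') == 1               # base-2: a single 1 bit
```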
Re: try... except with unknown error types
Chris Torek wrote:
> (I have also never been sure whether something is going to raise
> an IOError or an OSError for various OS-related read or write
> operation failures -- such as exceeding a resource limit, for
> instance -- so most places that do I/O operations on OS files, I
> catch both. Still, it sure would be nice to have a static analysis
> tool that could answer questions about potential exceptions. :-) )

There is an effort to fix this:

http://www.python.org/dev/peps/pep-3151/

And an implementation ...

http://hg.python.org/features/pep-3151/

... together with a feature request:

http://bugs.python.org/issue12555

I think the whole concept makes a lot of sense and is really worth
taking a look at.

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
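PEP 3151 was in fact accepted for Python 3.3; a sketch of what it changed (the demo path below is just an illustrative nonexistent file):

```python
import errno

# Since Python 3.3, IOError is a mere alias of OSError ...
assert IOError is OSError

# ... and OSError subclasses encode the error condition, so most
# manual errno checks (and "catch both" handlers) become unnecessary:
try:
    open("/nonexistent-path-for-demo")
except FileNotFoundError as e:
    assert isinstance(e, OSError)
    assert e.errno == errno.ENOENT
```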
Re: LDFLAGS problem
Philip Semanchuk wrote:
> On Feb 21, 2011, at 12:56 PM, Robin Becker wrote:
> > After installing python 2.7.1 on a Freebsd 8.0 system with the normal
> > configure make dance
> >
> > ./configure --prefix=$HOME/PYTHON --enable-unicode=ucs2
> > make
> > make install
> >
> > I find that when I build extensions PIL, MySQLdb I'm getting errors
> > related to a dangling ${LDFLAGS}
> >
> > eg MySQLdb
> >
> >> running build_ext
> >> building '_mysql' extension
> >> creating build/temp.freebsd-7.0-RELEASE-i386-2.7
> >> gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall
> >> -Wstrict-prototypes -fPIC -Dversion_info=(1,2,2,'final',0)
> >> -D__version__=1.2.2 -I/usr/local/include/mysql
> >> -I/home/rptlab/PYTHON/include/python2.7 -c _mysql.c -o
> >> build/temp.freebsd-7.0-RELEASE-i386-2.7/_mysql.o -fno-strict-aliasing -pipe
> >> gcc -pthread -shared ${LDFLAGS}
> >> build/temp.freebsd-7.0-RELEASE-i386-2.7/_mysql.o -L/usr/local/lib/mysql
> >> -lmysqlclient_r -lz -lcrypt -lm -o
> >> build/lib.freebsd-7.0-RELEASE-i386-2.7/_mysql.so
> >> gcc: ${LDFLAGS}: No such file or directory
> >> error: command 'gcc' failed with exit status 1
> >
> > where should I be looking to fix this problem?

Try the patch from http://bugs.python.org/issue10547 or use an svn
checkout. The patch didn't make it into 2.7.1.

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: ctypes from_param() truncating 64 bit pointers to 32 bit
Joakim Hove wrote:
> > and I actually don't know how this specific case works.
> > However, you are not providing a function signature for the "print_addr" C
> > function in the BugTest case, so my guess is that it assumes the standard
> > signature for C functions, which is "int func(int)". That would explain
> > the 32bit truncation.
>
> Well; I think you are wrong. First of all because what I showed in the
> Stackoverflow post was just a contrived example - the real code where
> the problem initially arose does indeed have the proper function
> signature.

I suggest that you reconsider, since this appears to work:

    def from_param(self):
        return ctypes.c_void_p(self.c_ptr)

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
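The underlying rule: ctypes assumes C int for undeclared argument and return types, so 64-bit values are truncated unless a full-width type is supplied via argtypes/restype (or a from_param/_as_parameter_ hook, as above). A POSIX-only sketch with a real libc function, not the thread's original code:

```python
import ctypes

libc = ctypes.CDLL(None)  # POSIX assumption: load the current process

# Without these declarations, ctypes would pass and return C int,
# truncating values that do not fit in 32 bits.
libc.labs.argtypes = [ctypes.c_long]
libc.labs.restype = ctypes.c_long

assert libc.labs(-(2 ** 40)) == 2 ** 40
```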
[ANN] cdecimal-2.2 released
Hi,

I'm pleased to announce the release of cdecimal-2.2. cdecimal is a fast
drop-in replacement for the decimal module in Python's standard library.


Blurb
=====

cdecimal is a complete implementation of IBM's General Decimal Arithmetic
Specification. With the appropriate context parameters, cdecimal will also
conform to the IEEE 754-2008 Standard for Floating-Point Arithmetic.

Typical performance gains over decimal.py are between 30x for I/O heavy
benchmarks and 80x for numerical programs. In a PostgreSQL database
benchmark, the speedup is 12x.

    +---------+---------+----------+---------+
    |         | decimal | cdecimal | speedup |
    +=========+=========+==========+=========+
    |   pi    |  42.75s |   0.58s  |   74x   |
    +---------+---------+----------+---------+
    |  telco  | 172.19s |   5.68s  |   30x   |
    +---------+---------+----------+---------+
    | psycopg |   3.57s |   0.29s  |   12x   |
    +---------+---------+----------+---------+

In the pi benchmark, cdecimal often performs better than Java's
BigDecimal running on Java HotSpot(TM) 64-Bit Server VM.

Both cdecimal and the underlying library - libmpdec - have very large
test suites. libmpdec has 100% code coverage, cdecimal 85%. The test
suites have been running continuously for over a year without any major
issues.
Install
=======

Since cdecimal is now listed on PyPI, it can be installed using pip:

    pip install cdecimal

Windows installers are available at:

    http://www.bytereef.org/mpdecimal/download.html


Links
=====

    http://www.bytereef.org/mpdecimal/index.html
    http://www.bytereef.org/mpdecimal/changelog.html
    http://www.bytereef.org/mpdecimal/download.html


Checksums of the released packages
==================================

sha256sum
---------

3d92429fab74ddb17d12feec9cd949cd8a0be4bc0ba9afc5ed9b3af884e5d406  mpdecimal-2.2.tar.gz
e8f02731d4089d7c2b79513d01493c36ef41574423ea3e49b245b86640212bdc  mpdecimal-2.2.zip
515625c5c5830b109c57af93d49ae2c57ec3f230d46a3e0583840ff73d7963be  cdecimal-2.2.tar.gz

sha1sum
-------

24695b2c9254e1b870eb663e3d966eb4f0abd5ab  cdecimal-2.2.win32-py2.6.msi
e74cb7e722f30265b408b322d2c50d9a18f78587  cdecimal-2.2.win32-py2.7.msi
7c39243b2fc8b1923ad6a6066536982844a7617f  cdecimal-2.2.win32-py3.1.msi
5711fd69a8e1e2e7be0ad0e6b93ecc10aa584c68  cdecimal-2.2.win-amd64-py2.6.msi
b1cd7b6a373c212bf2f6aa288cd767171bfefd41  cdecimal-2.2.win-amd64-py2.7.msi
f08a803a1a42a2d8507da1dc49f3bf7eed37c332  cdecimal-2.2.win-amd64-py3.1.msi
cb29fa8f67befaf2d1a05f4675f840d7cd35cf6c  cdecimal-2.2-no-thread.win32-py2.6.msi
012a44488f2ce2912f903ae9faf995efc7c9324b  cdecimal-2.2-no-thread.win32-py2.7.msi
1c08c73643fc45d7b0feb62c33bebd76537f9d02  cdecimal-2.2-no-thread.win32-py3.1.msi
b6dbd92e86ced38506ea1a6ab46f2e41f1444eae  cdecimal-2.2-no-thread.win-amd64-py2.6.msi
b239b41e6958d9e71e91b122183dc0eaefa00fef  cdecimal-2.2-no-thread.win-amd64-py2.7.msi
413724ff20ede7b648f57dd9a78a12e72e064583  cdecimal-2.2-no-thread.win-amd64-py3.1.msi

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: AIX 5.3 - Enabling Shared Library Support Vs Extensions
Anurag Chourasia wrote:
> When I configure python to enable shared libraries, none of the
> extensions are getting built during the make step due to this error.
>
> building 'cStringIO' extension
> gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall
> -Wstrict-prototypes -I. -I/u01/home/apli/wm/GDD/Python-2.6.6/./Include -I.
> -IInclude -I./Include -I/opt/freeware/include
> -I/opt/freeware/include/readline -I/opt/freeware/include/ncurses
> -I/usr/local/include -I/u01/home/apli/wm/GDD/Python-2.6.6/Include
> -I/u01/home/apli/wm/GDD/Python-2.6.6 -c
> /u01/home/apli/wm/GDD/Python-2.6.6/Modules/cStringIO.c -o
> build/temp.aix-5.3-2.6/u01/home/apli/wm/GDD/Python-2.6.6/Modules/cStringIO.o
> ./Modules/ld_so_aix gcc -pthread -bI:Modules/python.exp
> build/temp.aix-5.3-2.6/u01/home/apli/wm/GDD/Python-2.6.6/Modules/cStringIO.o
> -L/usr/local/lib -lpython2.6 -o build/lib.aix-5.3-2.6/cStringIO.so

Try these flags:

    -L. -L/usr/local/lib

If this solves the problem and the issue is also present in Python 2.7,
you should report a bug at http://bugs.python.org/ .

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: Intel C Compiler
Drake wrote:
> I'm an engineer who has access to the Intel C/C++ compiler (icc), and
> for the heck of it I compiled Python2.7 with it.
>
> Unsurprisingly, it compiled fine and functions correctly as far as I
> know. However, I was interested to discover that the icc compile
> printed literally thousands of various warnings and remarks.
>
> Examples:
> Parser/node.c(13): remark #2259: non-pointer conversion from "int" to
> "short" may lose significant bits
> n->n_type = type;

I sometimes use icc. This is one of the most annoying warnings of the
Intel compiler. See:

http://software.intel.com/en-us/forums/showthread.php?t=62308

The problem is that the compiler issues this warning even when there is
no way that significant bits could be lost.

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: decimal.Decimal formatting
pyt...@lists.fastmail.net wrote:
> I have various decimals which eventually are written to an XML file.
> Requirements indicate a precision of 11. I am currently having some
> 'issues' with Decimal("0"). When using
> quantize(decimal.Decimal("1e-11")) the result is not 0.00000000000,
> but 1e-11.
>
> Personally I agree 1e-11 is a better notation than 0.00000000000, but
> I currently need the long one. Is there a way to overwrite the switch
> to the scientific notation? Apparently there is a switch in notation
> 'between' 1e-6 and 1e-7.

Try:

>>> format(Decimal("0"), ".11f")
'0.00000000000'

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
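A condensed sketch of the point: quantize() controls the exponent of the stored value, while fixed-point output is a formatting decision made at print time.

```python
from decimal import Decimal

d = Decimal("0").quantize(Decimal("1e-11"))
assert str(d) == "0E-11"                       # str() switches to scientific notation
assert format(d, ".11f") == "0.00000000000"    # fixed-point, 11 places
assert format(Decimal("0"), ".11f") == "0.00000000000"
```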
Re: Python -- floating point arithmetic
Mark Dickinson wrote:
> On Jul 8, 2:59 pm, Stefan Krah wrote:
> > pow() is trickier. Exact results have to be weeded out before
> > attempting the correction loop for correct rounding, and this is
> > complicated.
> >
> > For example, in decimal this expression takes a long time (in cdecimal
> > the power function is not correctly rounded):
> >
> > Decimal('100.0') ** Decimal('-557.71e-74288')
>
> Hmm. So it does. Luckily, this particular problem is easy to deal
> with. Though I dare say that you have more up your sleeve. :)?

Not at the moment, but I'll keep trying. :)

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: Python -- floating point arithmetic
Adam Skutt wrote:
> > I actually agree with much of what you've said. It was just the
> > "impossible" claim that went over the top (IMO). The MPFR library
> > amply demonstrates that computing many transcendental functions to
> > arbitrary precision, with correctly rounded results, is indeed
> > possible.
>
> That's because you're latching onto that word instead of the whole
> sentence in context and making a much bigger deal out of than is
> appropriate. The fact that I may not be able to complete a given
> calculation for an arbitrary precision is not something that can be
> ignored. It's the same notional problem with arbitrary-precision
> integers: is it better to run out of memory or overflow the
> calculation? The answer, of course, is a trick question.

In the paper describing his strategy for correct rounding Ziv gives an
estimate for abnormal cases, which is very low.

This whole argument is a misunderstanding. Mark and I argue that correct
rounding is quite feasible in practice, you argue that you want
guaranteed execution times and memory usage. This is clear now, but was
not so apparent in the "impossible" paragraph that Mark responded to.

I think asking for strictly bounded resource usage is reasonable. In
cdecimal there is a switch to turn off correct rounding for exp() and
log().

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: Python -- floating point arithmetic
Adam Skutt wrote:
> On Jul 8, 7:23 am, Mark Dickinson wrote:
> > On Jul 8, 11:58 am, Adam Skutt wrote:
> > >
> > > accurately. Moreover, in general, it's impossible to even round
> > > operations involving transcendental functions to an arbitrary fixed-
> > > precision, you may need effectively infinite precision in order to
> > > the computation.
> >
> > Impossible? Can you explain what you mean by this? Doesn't the
> > decimal module do exactly that, giving correctly-rounded exp() and
> > log() results to arbitrary precision?
>
> You run into the table-maker's dilemma: there's no way to know in
> advance how many digits you need in order to have n bits of precision
> in the result. For some computations, the number of bits required to
> get the desired precision can quickly overwhelm the finite limitations
> of your machine (e.g., you run out of RAM first or the time to compute
> the answer is simply unacceptable).

Yes, this appears to be unsolved yet, see also:

http://www.cs.berkeley.edu/~wkahan/LOG10HAF.TXT

"Is it time to quit yet? That's the Table-Maker's Dilemma. No general
way exists to predict how many extra digits will have to be carried to
compute a transcendental expression and round it _correctly_ to some
preassigned number of digits. Even the fact (if true) that a finite
number of extra digits will ultimately suffice may be a deep theorem."

However, in practice, mpfr rounds correctly and seems to be doing fine.
In addition to this, I've been running at least 6 months of continuous
tests comparing cdecimal and decimal, and neither log() nor exp() poses
a problem.

pow() is trickier. Exact results have to be weeded out before attempting
the correction loop for correct rounding, and this is complicated. For
example, in decimal this expression takes a long time (in cdecimal the
power function is not correctly rounded):

Decimal('100.0') ** Decimal('-557.71e-74288')

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: Engineering numerical format PEP discussion
Keith wrote:
> Even though this uses the to_eng_string() function, and even though I
> am using the decimal.Context class:
>
> >>> c = decimal.Context(prec=5)
> >>> decimal.Decimal(1234567).to_eng_string(c)
> '1234567'
>
> That is not an engineering notation string.

To clarify further: The spec says that the printing functions are not
context sensitive, so to_eng_string does not *apply* the context. The
context is only passed in for the 'capitals' value, which determines
whether the exponent letter is printed in lower or upper case.

This is one of the unfortunate situations where passing a context can
create great confusion for the user. Another one is:

>>> c = Context(prec=5)
>>> Decimal(12345678, c)
Decimal('12345678')

Here the context is passed only for the 'flags' and 'traps' members:

>>> Decimal("wrong", c)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.2/decimal.py", line 548, in __new__
    "Invalid literal for Decimal: %r" % value)
  File "/usr/lib/python3.2/decimal.py", line 3836, in _raise_error
    raise error(explanation)
decimal.InvalidOperation: Invalid literal for Decimal: 'wrong'

>>> c.traps[InvalidOperation] = False
>>> Decimal("wrong", c)
Decimal('NaN')

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
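A sketch condensing the point: the printing functions ignore the context's precision, so rounding must be requested through an arithmetic operation in the context, e.g. plus() (context unary plus).

```python
from decimal import Context, Decimal

c = Context(prec=5)

# to_eng_string() does not apply the context's precision:
assert Decimal(1234567).to_eng_string(c) == "1234567"

# Rounding happens only through an arithmetic operation in the context:
assert str(c.plus(Decimal(1234567))) == "1.2346E+6"
```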
Re: Engineering numerical format PEP discussion
Chris Rebert wrote:
> >>>> c = decimal.Context(prec=5)
> >>>> decimal.Decimal(1234567).to_eng_string(c)
> > '1234567'
> >
> > That is not an engineering notation string.
>
> Apparently either you and the General Decimal Arithmetic spec differ
> on what constitutes engineering notation, there's a bug in the Python
> decimal library, or you're hitting some obscure part of the spec's
> definition. I don't have the expertise to know which is the case.
>
> The spec: http://speleotrove.com/decimal/decarith.pdf
> (to-engineering-string is on page 20 if you're interested)

The module is correct. Printing without exponent follows the same rules
as to-scientific-string:

"If the exponent is less than or equal to zero and the adjusted exponent
is greater than or equal to -6, the number will be converted to a
character form without using exponential notation."

Stefan Krah

--
http://mail.python.org/mailman/listinfo/python-list
Re: problem with floats and calculations
Karsten Goen wrote:
> also this doesn't help, there are still errors in the accuracy. Isn't
> there a perfect way to do such calculations?

The hint I gave you removes the most egregious error in your program.
You still have to quantize the result of ((c*var*d) / b) if you want it
to match a. If I adapt your program, I don't find any non-matching
numbers:

for i in range(10):
    # set random numbers
    a = Decimal(str(random.uniform(0.1, 123))).quantize(Decimal('0.01'))
    b = Decimal(str(random.uniform(20.1, 3000))).quantize(Decimal('0.01'))
    c = Decimal(str(random.uniform(100, 5000))).quantize(Decimal('0.01'))
    var = Decimal('8.314').quantize(Decimal('0.01'))

    # calc d
    d = (a * b) / (c * var)

    if ((c*var*d) / b).quantize(Decimal('0.01')) != a:
        print a, (c*var*d) / b

Note that for perfect accuracy you should use the fractions module.

Stefan Krah

-- 
http://mail.python.org/mailman/listinfo/python-list
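A minimal sketch of the fractions-based version: with exact rational
arithmetic the round trip needs no quantizing at all (the random ranges
are chosen to mimic the ones above):

```python
from fractions import Fraction
import random

for i in range(10):
    # Random values with two decimal places, as exact fractions.
    a = Fraction(random.randrange(10, 12300), 100)
    b = Fraction(random.randrange(2010, 300000), 100)
    c = Fraction(random.randrange(10000, 500000), 100)
    var = Fraction('8.314')

    d = (a * b) / (c * var)

    # Exact arithmetic: the identity holds without any rounding.
    assert (c * var * d) / b == a
```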
Re: problem with floats and calculations
Karsten Goen wrote:
> hey all,
> I got a problem with floats and calculations. I made an
> mini-application where you get random questions with some science
> calculations in it. So the user can type in his result with the
> values given by random creation. And the user value is compared
> against the computer value... the problem is that the user input is
> only 2 numbers behind the '.' so like 1.42, 1.75
>
> here is the example:
> http://dpaste.com/hold/158698/
>
> without decimal it would be very inaccurate. decimal is very accurate
> when I have to compare d with users calculations from a,b,c,var. But
> when I ask the user what is "a" the result gets inaccurate when
> calculating with the same values given before (b,c,d,var).
>
> Maybe anyone can help me with this problem, I don't want to generate
> for every possible user input a single formula. And also it should be
> possible for a computer, my calculator at home does the same and is
> much smaller and slower.

d = (a * b) / (c * var)
d = Decimal(d).quantize(Decimal('0.01'))

By quantizing d, the above equality does not hold any longer. You've
got to drop that line (your calculator doesn't quantize either).

Stefan Krah

-- 
http://mail.python.org/mailman/listinfo/python-list
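The effect of that extra quantize can be seen with fixed inputs (a
sketch; the concrete values are made up for illustration):

```python
from decimal import Decimal

a = Decimal('1.23')
b = Decimal('45.67')
c = Decimal('890.12')
var = Decimal('8.314')

d = (a * b) / (c * var)

# Without quantizing d, the round trip recovers a:
assert ((c * var * d) / b).quantize(Decimal('0.01')) == a

# Quantizing d first throws away the digits the round trip needs:
d2 = d.quantize(Decimal('0.01'))
assert ((c * var * d2) / b).quantize(Decimal('0.01')) != a
```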
Re: how to register with pypi - no such setup.py
Phlip wrote:
> On Dec 26, 6:01 am, "Martin v. Loewis" wrote:
> > > Now my next problem - how to get pypi.python.org to stop burning
> > > up version numbers each time I test this?
> >
> > I don't speak English well enough to understand what "to burn up"
> > means - to my knowledge, PyPI does no such thing.
>
> I don't know how to test pypi on my own notebook.
>
> Each time I change setup.py or something, if I want to test my change,
                                                         ^
> This means I am using up perfectly good version numbers. If I don't
> change the number, I get a condescending message "Upload failed (400):
> A file named "Morelia-0.0.10.tar.gz" already exists for
> Morelia-0.0.10. To fix problems with that file you should create a new
> release."

It is quite reasonable that changed archives with the same version
number are not accepted. Very helpful, not condescending.

Stefan Krah

-- 
http://mail.python.org/mailman/listinfo/python-list
Re: Fast decimal arithmetic module released
Mark Dickinson wrote:
> On Oct 2, 8:53 pm, Stefan Krah wrote:
> > Hi,
> >
> > today I have released the following packages for fast arbitrary
> > precision decimal arithmetic:
> [...]
>
> Nice! I'd been wondering how you'd been finding all those decimal.py
> bugs. Now I know. :)

Thanks! Yes, actually deccheck.py deserves the credit. ;)

Stefan Krah

-- 
http://mail.python.org/mailman/listinfo/python-list
Fast decimal arithmetic module released
Hi,

today I have released the following packages for fast arbitrary
precision decimal arithmetic:


1. libmpdec
===========

Libmpdec is a C (C++ ready) library for arbitrary precision decimal
arithmetic. It is a complete implementation of Mike Cowlishaw's
General Decimal Arithmetic specification.


2. fastdec.so
=============

Fastdec.so is a Python C module with the same functionality as
decimal.py. With some restrictions, code written for decimal.py
should work identically.


3. deccheck.py
==============

Deccheck.py performs redundant calculations using both decimal.py and
fastdec.so. For each calculation the results of both modules are
compared and an exception is raised if they differ. This module was
mainly developed for testing, but could in principle be used for
redundant calculations.


Correctness
===========

Libmpdec passes IBM's official test suite and a multitude of
additional tests. Fastdec.so passes (with minor modifications) all
Python unit tests. When run directly, deccheck.py performs very
exhaustive tests that compare fastdec.so with decimal.py. All tests
complete successfully under Valgrind.


Speed
=====

In a couple of initial benchmarks, libmpdec compares very well against
decNumber and the Intel decimal library. For very large numbers, the
speed is roughly the same as the speed of the apfloat library.
Fastdec.so compares quite well against gmpy and even native Python
floats. In the benchmarks, it is significantly faster than Java's
BigDecimal class.


Portability
===========

All tests have been completed on Linux 64/32-bit, Windows 64/32-bit,
OpenSolaris 32-bit, OpenBSD 32-bit and Debian Mips 32-bit. For 32-bit
platforms there is a pure ANSI C version; 64-bit platforms require a
couple of asm lines.

Further information and downloads at:

http://www.bytereef.org/libmpdec.html

Stefan Krah

-- 
http://mail.python.org/mailman/listinfo/python-list
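Since fastdec.so mirrors the decimal.py API, code like the following
sketch should run unchanged under either module (shown here with the
stdlib decimal; only the context's precision needs to change for
higher precision):

```python
from decimal import Decimal, getcontext

# 50 significant digits of sqrt(2).
getcontext().prec = 50
root = Decimal(2).sqrt()
print(root)
```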
Re: Solution for XML-RPC over a proxy
* Andrew R <[EMAIL PROTECTED]> wrote:
> All,
>
> I couldn't get my xml-rpc script to work via a corporate proxy.
>
> I noticed a few posts asking about this, and a very good helper
> script by jjk on starship. That script didn't work for me, and I
> think its a little old -- but it was very helpful to figure it out.
>
> The below script is a replacement/update to the earlier work. It is
> runnable as a test or usable as a module. Tests pass from behind and
> away from a proxy, on win32 and Linux i686, with Python 2.4.1.
>
> Comments welcome.

No real comment, just to point out an alternative that works well
for me:

http://docs.python.org/dev/lib/xmlrpc-client-example.html

Stefan Krah

-- 
http://mail.python.org/mailman/listinfo/python-list
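For reference, the linked example boils down to something like this
sketch (the endpoint URL is hypothetical; in current Python the module
is xmlrpc.client rather than the xmlrpclib of that era):

```python
import xmlrpc.client

# Creating the proxy does not contact the server yet.
proxy = xmlrpc.client.ServerProxy("http://localhost:8000/RPC2")

# Attribute access turns into remote method calls, e.g.:
# proxy.pow(2, 9)  # would return 512 if the server exposes pow()
```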
Python script running as Windows service: Clean shutdown
Hello,

I'm trying to run a Python script as a Windows service with a defined
shutdown. The script (enigma-client.py) handles the communications
with the server in a distributed computing effort and calls a C
program (enigma.exe) to do the computations.

enigma.exe should save its current state when receiving SIGINT or
SIGTERM. This (obviously) works under Unix and also when running the
script from the Windows command line and terminating it with Ctrl-C.

I understand that for a clean shutdown of a Windows service one would
have to use the win32 extensions and have the working thread check for
the shutdown event in short intervals (<[EMAIL PROTECTED]>). This
would leave me with these options:

  a) Write enigma-client.py as a service. Write enigma.exe as a
     service and have it poll regularly for shutdown events.

  b) Have enigma.exe save its state regularly, use srvany.exe and
     forget about a graceful shutdown.

I'm new to Windows services, so I'd be grateful for corrections or
better solutions. Here is the relevant part of the code (the whole
thing is at http://www.bytereef.org/enigma-client.txt):

""" main """
cmdline = 'enigma.exe -R 00trigr.naval 00bigr.naval 00ciphertext > NUL'

eclient = Eclient()

if len(sys.argv) != 3:
    eclient.usage()

if os.path.isfile(LOCKFILE):
    print "enigma-client: error: found lockfile %s. \n" \
          "Check that no other enigma-client process is using this directory." \
          % LOCKFILE
    sys.exit(1)

atexit.register(eclient.rm_lockfile)
eclient.touch_lockfile()

win32process.SetPriorityClass(
    win32process.GetCurrentProcess(),
    win32process.IDLE_PRIORITY_CLASS
)

while 1:
    retval = os.system(cmdline) >> 8
    if retval == 0:
        eclient.submit_chunk(sys.argv[1], int(sys.argv[2]))
        eclient.get_chunk(sys.argv[1], int(sys.argv[2]))
    elif retval == 1:
        eclient.get_chunk(sys.argv[1], int(sys.argv[2]))
        time.sleep(10)
    else:
        """./enigma has caught a signal"""
        sys.exit(retval)

Stefan Krah

-- 
Break original Enigma messages: http://www.bytereef.org/m4_project.html
-- 
http://mail.python.org/mailman/listinfo/python-list
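On the Unix side, the clean-shutdown behaviour described above reduces
to installing signal handlers; a minimal Python sketch (save_state()
is hypothetical, standing in for whatever enigma.exe persists in C):

```python
import signal
import sys

def handle_term(signum, frame):
    # save_state()  # hypothetical: persist the current search state
    sys.exit(0)

# enigma.exe does the equivalent in C for SIGINT and SIGTERM, which is
# why Ctrl-C from the command line already shuts it down cleanly.
signal.signal(signal.SIGTERM, handle_term)
signal.signal(signal.SIGINT, handle_term)
```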