Re: [Python-Dev] 2.4.4: backport classobject.c HAVE_WEAKREFS?
Fredrik Lundh wrote:
> a dynamic registration approach would be even better, with a single entry
> point used to register all methods and hooks your C extension has
> implemented, and code on the other side that builds a properly initialized
> type descriptor from that set, using fallback functions and error stubs
> where needed.

I knocked out a prototype of this last week, emailed Mr. Lundh about it, then forgot about it. Would anyone be interested in taking a peek at it?

I only changed one file to use this new-style initialization, sha256module.c. The resulting init_sha256() looks like this:

PyMODINIT_FUNC
init_sha256(void)
{
    PyObject *m;

    SHA224type = PyType_New("_sha256.sha224", sizeof(SHAobject), NULL);
    if (SHA224type == NULL)
        return;
    PyType_SetPointer(SHA224type, pte_dealloc, &SHA_dealloc);
    PyType_SetPointer(SHA224type, pte_methods, &SHA_methods);
    PyType_SetPointer(SHA224type, pte_members, &SHA_members);
    PyType_SetPointer(SHA224type, pte_getset, &SHA_getseters);
    if (PyType_Ready(SHA224type) < 0)
        return;

    SHA256type = PyType_New("_sha256.sha256", sizeof(SHAobject), NULL);
    if (SHA256type == NULL)
        return;
    PyType_SetPointer(SHA256type, pte_dealloc, &SHA_dealloc);
    PyType_SetPointer(SHA256type, pte_methods, &SHA_methods);
    PyType_SetPointer(SHA256type, pte_members, &SHA_members);
    PyType_SetPointer(SHA256type, pte_getset, &SHA_getseters);
    if (PyType_Ready(SHA256type) < 0)
        return;

    m = Py_InitModule("_sha256", SHA_functions);
    if (m == NULL)
        return;
}

In a way this wasn't really a good showpiece for my code. The "methods", "members", and "getseters" structs still need to be passed in. However, I did change all four "as_" structures so you can set those directly. For instance, the "concat" as_sequence method for a PyString object would be set using:

    PyType_SetPointer(PyString_Type, pte_sequence_concat, string_concat);

(I actually converted the PyString object to my new code, but had chicken-and-egg initialization problems as a result and backed out of it.
The code is still in the branch, just commented out.)

Patch available for interested parties,

larry

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
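The registration-with-fallbacks idea above is easy to sketch outside of C. Here is a minimal Python analogue: the author registers only the hooks they implement, and the framework fills every remaining slot with a safe default or an error stub. All names here (DEFAULT_SLOTS, make_type, the slot names) are illustrative, not the actual patch's API.

```python
# Hypothetical sketch of "register what you have, get stubs for the rest".
DEFAULT_SLOTS = {
    "dealloc": lambda self: None,  # harmless no-op default
    "concat": None,                # no sane default: gets an error stub
}

def make_type(name, **registered):
    slots = {}
    for slot, default in DEFAULT_SLOTS.items():
        if slot in registered:
            slots[slot] = registered[slot]
        elif default is not None:
            slots[slot] = default
        else:
            # Error stub: only complains if the missing hook is used.
            def stub(self, *args, _slot=slot):
                raise TypeError("%s does not implement %r" % (name, _slot))
            slots[slot] = stub
    return type(name, (), slots)

Sha224 = make_type("sha224", dealloc=lambda self: None)
obj = Sha224()
obj.dealloc()   # registered hook: fine
# obj.concat("x") would raise TypeError: sha224 does not implement 'concat'
```

The point of the error stubs is that a type is always fully populated after construction, so the core never has to NULL-check a slot before calling it.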
Re: [Python-Dev] Python unit tests failing on Pybots farm
Grig Gheorghiu schrieb:
> OK, I deleted the checkout directory on one of my buildslaves and
> re-ran the build steps. The tests passed. So my conclusion is that a
> full rebuild is needed for the tests to pass after the last checkins
> (which included files such as configure and configure.in).

Indeed, you had to re-run configure. There was a bug where -Werror was added to the build flags, causing several configure tests to fail (most notably, it would determine that there's no memmove on Linux).

> Maybe the makefiles should be modified so that a full rebuild is
> triggered when the configure and configure.in files are changed?

The makefiles already do that: if configure changes, a plain "make" will first re-run configure.

> At this point, I'll have to tell all the Pybots owners to delete their
> checkout directories and start a new build.

Not necessarily. You can also ask, at the buildbot GUI, for a non-existing branch to be built. This should cause the checkouts to be deleted (and then the build to fail); the next regular build will check out from scratch.

Regards,
Martin
Re: [Python-Dev] Promoting PCbuild8
Brian Warner schrieb:
> To be precise, you can have as many build procedures per slave as you
> like, but if the procedure depends upon running on a particular platform,
> then it is unlikely that a single slave can accommodate multiple
> platforms.

Ah, right, I can have multiple builders per slave. That's good. For the case of x86 and AMD64, a single slave can indeed accommodate both platforms.

> If the x86 and the x64 builds can be run on the same machine, how do you
> control which kind of build you're doing? The decision about whether to
> run them in the same buildslave or in two separate buildslaves depends
> upon how you express this control. One possibility is that you just pass
> some different CFLAGS to the configure or compile step.. in that case,
> putting them both in the same slave is easy, and the CFLAGS settings
> will appear in your BuildFactories.

Most likely, there would be different batch files to run, although using environment variables might also work. So I guess I could use the same slave for both builders.

> You could create a MasterLock that is shared by just the two Builders
> which use slaves which share the same machine. That would prohibit the
> two Builders from running at the same time. (SlaveLocks wouldn't help
> here, because as you pointed out there is no way to tell the buildmaster
> that two slaves share a host).

Ah, ok.

Regards,
Martin
Re: [Python-Dev] Python unit tests failing on Pybots farm
On 10/19/06, Grig Gheorghiu <[EMAIL PROTECTED]> wrote:
> OK, I deleted the checkout directory on one of my buildslaves and
> re-ran the build steps. The tests passed. So my conclusion is that a
> full rebuild is needed for the tests to pass after the last checkins
> (which included files such as configure and configure.in). The Python
> buildbots are doing full rebuilds every time, that's why they're green
> and happy, but the Pybots are just doing incremental builds.
>
> Maybe the makefiles should be modified so that a full rebuild is
> triggered when the configure and configure.in files are changed?

Maybe, but I don't know how to do that.

-Brett
Re: [Python-Dev] Promoting PCbuild8
"Martin v. Löwis" <[EMAIL PROTECTED]> writes: >> But I agree that >> getting regular builds running would be a good thing. An x64 box >> would be ideal to build both the x86 and x64 versions on. A single >> bot can manage many platforms, right? > > A single machine, and a single buildbot installation, yes. But not > a single build slave, since there can be only one build procedure > per slave. To be precise, you have have as many build procedures per slave as you like, but if the procedure depends upon running on a particular platform, then it is unlikely that a single slave can accomodate multiple platforms. Each Builder object in the buildbot config file is created with a BuildFactory (which defines the sequence of steps it will execute), and a list of buildslaves that it can run on. There is a many-to-many mapping from Builders to buildslaves. For example, you might have an "all-tests" Builder that does a compile and runs the unit-test suite, and a second "build-API-docs" Builder that just runs epydoc or something. Both of these Builders could easily run on the same slave. But if you have an x86 Builder and a PPC Builder, you'd be hard pressed to find a single buildslave that could usefully serve for both. If the x86 and the x64 builds can be run on the same machine, how do you control which kind of build you're doing? The decision about whether to run them in the same buildslave or in two separate buildslaves depends upon how you express this control. One possibility is that you just pass some different CFLAGS to the configure or compile step.. in that case, putting them both in the same slave is easy, and the CFLAGS settings will appear in your BuildFactories. If instead you have to use a separate chroot environment (or whatever the equivalent is for this issue) for each, then it may be easiest to run two separate buildslaves (and your BuildFactories might be identical). 
> It's possible to tell the master not to build different branches on a
> single slave (i.e. 2.5 has to wait if trunk is building), but it's not
> possible to tell it that two slaves reside on the same machine (it might
> be possible, but I don't know how to do it).

You could create a MasterLock that is shared by just the two Builders which use slaves which share the same machine. That would prohibit the two Builders from running at the same time. (SlaveLocks wouldn't help here, because as you pointed out there is no way to tell the buildmaster that two slaves share a host).

cheers,
-Brian
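The MasterLock arrangement Brian describes is just mutual exclusion enforced by the master rather than by the slaves. A toy model of the idea, with a plain `threading.Lock` standing in for the MasterLock and two functions standing in for the two builders (all names here are illustrative, not buildbot's API):

```python
import threading
import time

# One shared lock per physical machine: the two "builders" can be
# scheduled freely, but can never run their build steps concurrently.
machine_lock = threading.Lock()
active = []        # builders currently inside a build
overlap_seen = []  # records any moment when two builds overlapped

def builder(name):
    with machine_lock:
        active.append(name)
        if len(active) > 1:            # would mean the lock failed
            overlap_seen.append(tuple(active))
        time.sleep(0.01)               # pretend to compile
        active.remove(name)

threads = [threading.Thread(target=builder, args=(n,)) for n in ("x86", "x64")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("overlaps:", overlap_seen)  # -> overlaps: []
```

Because the lock lives in one process (the "master"), it works even though the two builders know nothing about each other, which is exactly why SlaveLocks (one per slave) cannot express "these two slaves share a host".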
Re: [Python-Dev] Python unit tests failing on Pybots farm
On 10/19/06, Brett Cannon <[EMAIL PROTECTED]> wrote:
> On 10/19/06, Grig Gheorghiu <[EMAIL PROTECTED]> wrote:
> > On 10/19/06, Brett Cannon <[EMAIL PROTECTED]> wrote:
> > >
> > > Possibly. If you look at the reason those tests failed it is because
> > > time.strftime is missing for some odd reason. But none of recent
> > > checkins seem to have anything to do with the 'time' module, let
> > > alone with how methods are added to modules (Martin's recent
> > > checkins have been for PyArg_ParseTuple).
> > >
> > > -Brett
> >
> > Could there possibly be a side effect of the PyArg_ParseTuple changes?
>
> I doubt that, especially since I just updated my pristine checkout and
> test_time passed fine.
>
> -Brett

OK, I deleted the checkout directory on one of my buildslaves and re-ran the build steps. The tests passed. So my conclusion is that a full rebuild is needed for the tests to pass after the last checkins (which included files such as configure and configure.in). The Python buildbots are doing full rebuilds every time, that's why they're green and happy, but the Pybots are just doing incremental builds.

Maybe the makefiles should be modified so that a full rebuild is triggered when the configure and configure.in files are changed?

At this point, I'll have to tell all the Pybots owners to delete their checkout directories and start a new build.

Grig
Re: [Python-Dev] Python unit tests failing on Pybots farm
On 10/19/06, Grig Gheorghiu <[EMAIL PROTECTED]> wrote:
> On 10/19/06, Brett Cannon <[EMAIL PROTECTED]> wrote:
> >
> > Possibly. If you look at the reason those tests failed it is because
> > time.strftime is missing for some odd reason. But none of recent
> > checkins seem to have anything to do with the 'time' module, let alone
> > with how methods are added to modules (Martin's recent checkins have
> > been for PyArg_ParseTuple).
> >
> > -Brett
>
> Could there possibly be a side effect of the PyArg_ParseTuple changes?

I doubt that, especially since I just updated my pristine checkout and test_time passed fine.

-Brett
Re: [Python-Dev] Python unit tests failing on Pybots farm
On 10/19/06, Brett Cannon <[EMAIL PROTECTED]> wrote:
>
> Possibly. If you look at the reason those tests failed it is because
> time.strftime is missing for some odd reason. But none of recent checkins
> seem to have anything to do with the 'time' module, let alone with how
> methods are added to modules (Martin's recent checkins have been for
> PyArg_ParseTuple).
>
> -Brett

Could there possibly be a side effect of the PyArg_ParseTuple changes?

Grig
Re: [Python-Dev] Python unit tests failing on Pybots farm
On 10/19/06, Grig Gheorghiu <[EMAIL PROTECTED]> wrote:
> The latest trunk checkin caused almost all Pybots to fail when running
> the Python unit tests.
>
> 273 tests OK.
> 12 tests failed:
>     test___all__ test_calendar test_capi test_datetime test_email
>     test_email_renamed test_imaplib test_mailbox test_strftime
>     test_strptime test_time test_xmlrpc
>
> Here's the status page:
> http://www.python.org/dev/buildbot/community/trunk/
>
> Not sure why the official Python buildbot farm is all green and happy...
> maybe a difference in how the steps are running?

Possibly. If you look at the reason those tests failed it is because time.strftime is missing for some odd reason. But none of recent checkins seem to have anything to do with the 'time' module, let alone with how methods are added to modules (Martin's recent checkins have been for PyArg_ParseTuple).

-Brett
Re: [Python-Dev] Python-Dev Digest, Vol 39, Issue 55
Larry Hastings wrote:
> Chetan Pandya wrote:
> > I don't have a patch build, since I didn't download the revision used
> > by the patch. However, I did look at values in the debugger and it
> > looked like x in your example above had a reference count of 2 or
> > more within string_concat even when there were no other assignments
> > that would account for it.
>
> It could be the optimizer. If you concatenate hard-coded strings, the
> peephole optimizer does constant folding. It says "hey, look, this
> binary operator is performed on two constant objects". So it evaluates
> the expression itself and substitutes the result, in this case swapping
> (pseudotokens here) [PUSH "a" PUSH "b" PLUS] for [PUSH "ab"].
>
> Oddly, it didn't seem to optimize away the whole expression. If you
> say "a" + "b" + "c" + "d" + "e", I would have expected the peephole
> optimizer to turn that whole shebang into [PUSH "abcde"]. But when I
> gave it a cursory glance it seemed to skip every-other; it
> constant-folded "a" + "b", then + "c" and optimized ("a" + "b" + "c") +
> "d", resulting ultimately I believe in [PUSH "ab" PUSH "cd" PLUS PUSH
> "e" PLUS]. But I suspect I missed something; it bears further
> investigation.

I looked at the optimizer, but couldn't find any place where it does constant folding for strings. However, I am unable to set breakpoints for some mysterious reason, so investigation is somewhat hard. But I am not bothered about it anymore, since it does not behave the way I originally thought it did.

> But this is all academic, as real-world performance of my patch is not
> contingent on what the peephole optimizer does to short runs of
> hard-coded strings in simple test cases.
>
> > The recursion limit seems to be optimistic, given the default stack
> > limit, but of course, I haven't tried it.
>
> I've tried it, on exactly one computer (running Windows XP). The depth
> limit was arrived at experimentally. But it is probably too optimistic
> and should be winched down.
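The folding Larry describes can be checked from Python itself by looking at a code object's constants. Note this sketch reflects current CPython, where the AST optimizer folds the whole constant chain; the 2.4-era peephole optimizer Larry was inspecting behaved differently, as he notes:

```python
# Compile a constant string expression and inspect the constants pool.
# If folding happened at compile time, the joined string appears in
# co_consts and no runtime concatenation is needed.
code = compile('"a" + "b" + "c" + "d" + "e"', "<example>", "eval")
print(code.co_consts)
```

Disassembling with `dis.dis(code)` is the more thorough way to see what the optimizer emitted, which is essentially what Larry did by hand.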
> On the other hand, right now when you do x = "a" + x ten zillion times
> there are always two references to the concatenation object stored in
> x: the interpreter holds one, and x itself holds the other. That means
> I have to build a new concatenation object each time, so it becomes a
> degenerate tree (one leaf and one subtree) recursing down the
> right-hand side.

This is the case I was thinking of (but not what I wrote).

> I plan to fix that in my next patch. There's already code that says "if
> the next instruction is a store, and the location we're storing to
> holds a reference to the left-hand side of the concatenation, make the
> location drop its reference". That was an optimization for the
> old-style concat code; when the left side only had one reference it
> would simply resize it and memcpy() in the right side. I plan to add
> support for dropping the reference when it's the *right*-hand side of
> the concatenation, as that would help prepending immensely. Once that's
> done, I believe it'll prepend ((depth limit) * (number of items in
> ob_sstrings - 1)) + 1 strings before needing to render.

I am confused as to whether you are referring to the LHS of the concatenation operation or the assignment operation. But I haven't looked at how the reference counting optimizations are done yet. In general, there are caveats about removing references, but I plan to look at that later.

There is another, possibly complementary way of reducing the recursion depth. While creating a new concatenation object, instead of inserting the two string references, the strings they reference can be inserted in the new object. This can be done if the number of strings they contain is small. In the x = "a" + x case, for example, this will reduce the recursion depth of the string tree (but not reduce the allocations).

-Chetan
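The lazy-concatenation scheme being discussed can be sketched in a few lines of Python: `+` builds a small tree node instead of copying bytes, and the flat string is rendered only when the value is actually needed. This is a toy illustration of the idea, not the patch's real C data structure (which packs several strings per node in ob_sstrings):

```python
class LazyConcat:
    """Toy lazy-concatenation node: holds two children, renders on demand."""

    def __init__(self, left, right):
        self.left, self.right = left, right
        self._rendered = None

    def render(self):
        if self._rendered is None:
            # Walk the tree iteratively so a degenerate, deeply leaning
            # tree (the x = "a" + x case) cannot overflow the call stack.
            parts, stack = [], [self]
            while stack:
                node = stack.pop()
                if isinstance(node, LazyConcat):
                    stack.append(node.right)  # right pushed first...
                    stack.append(node.left)   # ...so left pops first
                else:
                    parts.append(node)
            self._rendered = "".join(parts)
        return self._rendered

s = "x"
for ch in "abc":
    s = LazyConcat(ch, s)   # repeated prepend, as in  x = "a" + x
print(s.render())           # -> cbax
```

Each prepend here is O(1) and allocates one node, which is exactly why the degenerate right-leaning tree (and hence the recursion-depth limit during rendering) is the interesting case.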
Re: [Python-Dev] Nondeterministic long-to-float coercion
2006/10/19, Raymond Hettinger <[EMAIL PROTECTED]>:
> My colleague got an odd result today that is reproducible on his build
> of Python (RedHat's distribution of Py2.4.2) but not any other builds
> ...
> >>> set(-194 * (1/100.0) for i in range(1))
> set([-19400.0, -193995904.0, -193994880.0])

I can't reproduce it in my Ubuntu either, but analyzing the problem... what about this?:

d = {}
for i in range(1):
    val = -194 * (1/100.0)
    d[val] = d.get(val, 0) + 1

or

d = {}
for i in range(1):
    val = -194 * (1/100.0)
    d.setdefault(val, []).append(i)

I think it would be interesting to know...

- if in these structures the problem still happens...
- how many values go for each key, and which values.

Regards,

--
.Facundo

Blog: http://www.taniquetil.com.ar/plog/
PyAr: http://www.python.org/ar/
Re: [Python-Dev] Nondeterministic long-to-float coercion
> I noticed that you used both "nondeterministic" and
> "reproducible" though.

LOL. The nondeterministic part is that the same calculation will give different answers and there doesn't appear to be a pattern to which of the several answers will occur. The reproducible part is that it happens from session to session.

> Are the specific values significant (e.g., do
> you really need range(1) to demonstrate the problem)?

No, you just need to run the calculation several times at the command line:

>>> -194 * (1/100.0)
-193994880.0
>>> -194 * (1/100.0)
-19400.0
>>> -194 * (1/100.0)
-19400.0

Raymond
Re: [Python-Dev] Nondeterministic long-to-float coercion
[Raymond Hettinger]
> My colleague got an odd result today that is reproducible on his build
> of Python (RedHat's distribution of Py2.4.2) but not any other builds
> I've checked (including an Ubuntu Py2.4.2 built with a later version of
> GCC). I hypothesized that this was a bug in the underlying GCC
> libraries, but the magnitude of the error is so large that that seems
> implausible.
>
> Does anyone have a clue what is going-on?
>
> Python 2.4.2 (#1, Mar 29 2006, 11:22:09) [GCC 4.0.2 20051125 (Red Hat
> 4.0.2-8)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> set(-194 * (1/100.0) for i in range(1))
> set([-19400.0, -193995904.0, -193994880.0])

Note that the Hamming distance between -19400.0 and -193995904.0 is 1, and ditto between -193995904.0 and -193994880.0, when viewed as IEEE-754 doubles. That is, -193995904.0 is "missing a bit" from -19400.0, and -193994880.0 is missing the same bit plus an additional bit. Maybe clearer, writing a function to show the hex little-endian representation:

>>> def ashex(d):
...     return binascii.hexlify(struct.pack("<d", d))
...
>>> ashex(-19400)
'6920a7c1'
>>> ashex(-193995904)  # "the 2 bit" from "6" is missing, leaving 4
'4920a7c1'
>>> ashex(-193994880)  # and "the 8 bit" from "9" is missing, leaving 1
'4120a7c1'

More than anything else that suggests flaky memory, or "weak bits" in a HW register or CPU<->FPU path. IOW, it looks like a hardware problem to me. Note that the missing bits here don't coincide with a "natural" software boundary -- screwing up a bit "in the middle of" a byte isn't something software is prone to do.

You could try different inputs and see whether the same bits "go missing", e.g. starting with a double with a lot of 1 bits lit. Might also try using these as keys to a counting dict to see how often they go missing.
Re: [Python-Dev] Nondeterministic long-to-float coercion
Raymond Hettinger wrote:
> My colleague got an odd result today that is reproducible on his build
> of Python (RedHat's distribution of Py2.4.2) but not any other builds
> I've checked (including an Ubuntu Py2.4.2 built with a later version of
> GCC). I hypothesized that this was a bug in the underlying GCC
> libraries, but the magnitude of the error is so large that that seems
> implausible.

These errors are due to a bit or two being flipped in either the long or double representation of the number. They could be due to a compiler bug, but other potential culprits include bad memory, a bum power supply introducing noise, or cooling problems. Has your colleague run memtest86 or other load tests for a day on their box?
Re: [Python-Dev] Nondeterministic long-to-float coercion
Raymond Hettinger schrieb:
> My colleague got an odd result today that is reproducible on his build
> of Python (RedHat's distribution of Py2.4.2) but not any other builds
> I've checked (including an Ubuntu Py2.4.2 built with a later version of
> GCC). I hypothesized that this was a bug in the underlying GCC
> libraries, but the magnitude of the error is so large that that seems
> implausible. Does anyone have a clue what is going-on?

I'd say it's memory corruption. Look:

r = array.array("d", [-19400.0, -193995904.0, -193994880.0]).tostring()
print map(ord, r[0:8])
print map(ord, r[8:16])
print map(ord, r[16:24])

gives

[0, 0, 0, 0, 105, 32, 167, 193]
[0, 0, 0, 0, 73, 32, 167, 193]
[0, 0, 0, 0, 65, 32, 167, 193]

It's only one byte that changes, and then that in only two bits (2**3 and 2**5). Could be faulty hardware, too.

Regards,
Martin
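Martin's byte dump can be checked mechanically. A small modern-Python sketch, using the byte values from his message, confirms that each adjacent pair of the observed doubles differs by exactly one flipped bit (the Hamming-distance-of-1 pattern Tim points out earlier in the thread):

```python
import struct

# The three 8-byte IEEE-754 doubles, exactly as Martin printed them.
raw = [
    bytes([0, 0, 0, 0, 105, 32, 167, 193]),
    bytes([0, 0, 0, 0, 73, 32, 167, 193]),
    bytes([0, 0, 0, 0, 65, 32, 167, 193]),
]
patterns = [struct.unpack("<Q", b)[0] for b in raw]  # 64-bit patterns

# XOR adjacent patterns: the population count of the XOR is the Hamming
# distance, i.e. the number of bits flipped between the two doubles.
for a, b in zip(patterns, patterns[1:]):
    flipped = a ^ b
    print(hex(flipped), bin(flipped).count("1"))
```

A single-bit XOR between consecutive wrong answers is exactly the signature one expects from a stuck or weak hardware bit rather than a software bug.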
[Python-Dev] Python unit tests failing on Pybots farm
The latest trunk checkin caused almost all Pybots to fail when running the Python unit tests.

273 tests OK.
12 tests failed:
    test___all__ test_calendar test_capi test_datetime test_email
    test_email_renamed test_imaplib test_mailbox test_strftime
    test_strptime test_time test_xmlrpc

Here's the status page:
http://www.python.org/dev/buildbot/community/trunk/

Not sure why the official Python buildbot farm is all green and happy... maybe a difference in how the steps are running?

Grig

--
http://agiletesting.blogspot.com
Re: [Python-Dev] Nondeterministic long-to-float coercion
Raymond> My colleague got an odd result today that is reproducible on
Raymond> his build of Python (RedHat's distribution of Py2.4.2) but not
Raymond> any other builds I've checked (including an Ubuntu Py2.4.2
Raymond> built with a later version of GCC). I hypothesized that this
Raymond> was a bug in the underlying GCC libraries, but the magnitude of
Raymond> the error is so large that that seems implausible. Does anyone
Raymond> have a clue what is going-on?

Not off the top of my head (but then I'm not a guts-of-the-implementation or gcc whiz). I noticed that you used both "nondeterministic" and "reproducible" though. Does your colleague always get the same result? If you remove the set constructor do the oddball values always wind up in the same spots on repeated calls? Are the specific values significant (e.g., do you really need range(1) to demonstrate the problem)?

Also, I can never remember exactly, but are even-numbered minor numbers in GCC releases supposed to be development releases (or is that for the Linux kernel)?

Just a few questions that come to mind.

Skip
[Python-Dev] Nondeterministic long-to-float coercion
My colleague got an odd result today that is reproducible on his build of Python (RedHat's distribution of Py2.4.2) but not any other builds I've checked (including an Ubuntu Py2.4.2 built with a later version of GCC). I hypothesized that this was a bug in the underlying GCC libraries, but the magnitude of the error is so large that that seems implausible.

Does anyone have a clue what is going-on?

Raymond

Python 2.4.2 (#1, Mar 29 2006, 11:22:09)
[GCC 4.0.2 20051125 (Red Hat 4.0.2-8)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> set(-194 * (1/100.0) for i in range(1))
set([-19400.0, -193995904.0, -193994880.0])
Re: [Python-Dev] state of the maintenance branches
On 10/19/06, Paul Moore <[EMAIL PROTECTED]> wrote:
> On 10/19/06, Anthony Baxter <[EMAIL PROTECTED]> wrote:
> > Anyway, all of the above is open to disagreement or other opinions -
> > if you have them, let me know.
>
> My only thought is that you've done a fantastic job pushing through
> all the recent releases. Thanks!

Thanks from me as well! You showed great patience putting up with all of us during releases.

-Brett
Re: [Python-Dev] state of the maintenance branches
On 10/19/06, Anthony Baxter <[EMAIL PROTECTED]> wrote:
> Anyway, all of the above is open to disagreement or other opinions - if
> you have them, let me know.

My only thought is that you've done a fantastic job pushing through all the recent releases. Thanks!

Paul.
[Python-Dev] state of the maintenance branches
OK - 2.4.4 is done. With that, the release24-maint branch moves into dignified old age, where we get to mostly ignore it, yay! Unless you really feel like it, I don't think there's much point in making the effort to backport fixes to this branch. Any future releases from that branch will be of the serious-security-flaw-only variety, and are almost certainly only going to have those critical patches applied.

Either this weekend or next week I'll cut a 2.3.6 off the release23-maint branch. As previously discussed, this will be a source-only release - I don't envisage making documentation packages or binaries for it. Although should we maybe have new doc packages with the newer version number, just to prevent confusion? Fred? What do you think? I don't think there's any need to do this for 2.3.6c1, but maybe for 2.3.6 final? For 2.3.6, it's just 2.3.5 plus the email fix and the PSF-2006-001 fix. As I feared, I've had a couple of people asking for a 2.3.6. Oh well. Only one person has (jokingly) suggested a new 2.2 release. That ain't going to happen :-)

I don't even want to _think_ about 2.5.1 right now. I can't see us doing this before December at the earliest, and preferably early in 2007. As far as I can see so far, the generator+threads nasty that's popped up isn't going to affect so many people that it needs a rushed-out 2.5.1 to cover it - although this may change as the problem and solution become better understood.

Anyway, all of the above is open to disagreement or other opinions - if you have them, let me know.

--
Anthony Baxter <[EMAIL PROTECTED]>
It's never too late to have a happy childhood.
Re: [Python-Dev] Segfault in python 2.5
Mike Klaas wrote:
> On 10/18/06, Tim Peters <[EMAIL PROTECTED]> wrote:
[...]
> Shouldn't the thread state generally be the same anyway? (I seem to
> recall some gloomy warning against resuming generators in separate
> threads).

Is this an indication that generators aren't thread-safe?

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://holdenweb.blogspot.com
Recent Ramblings http://del.icio.us/steve.holden
[Python-Dev] RELEASED Python 2.4.4, Final.
On behalf of the Python development team and the Python community, I'm happy to announce the release of Python 2.4.4 (FINAL).

Python 2.4.4 is a bug-fix release. While Python 2.5 is the latest version of Python, we're making this release for people who are still running Python 2.4. This is the final planned release from the Python 2.4 series. Future maintenance releases will be in the 2.5 series, beginning with 2.5.1.

See the release notes at the website (also available as Misc/NEWS in the source distribution) for details of the more than 80 bugs squished in this release, including a number found by the Coverity and Klocwork static analysis tools. We'd like to offer our thanks to both these firms for making this available for open source projects.

* Python 2.4.4 contains a fix for PSF-2006-001, a buffer overrun  *
* in repr() of unicode strings in wide unicode (UCS-4) builds.    *
* See http://www.python.org/news/security/PSF-2006-001/ for more. *

There's only been one small change since the release candidate - a fix to "configure" to repair cross-compiling of Python under Unix.

For more information on Python 2.4.4, including download links for various platforms, release notes, and known issues, please see:

    http://www.python.org/2.4.4

Highlights of this new release include:

- Bug fixes. According to the release notes, at least 80 have been fixed. This includes a fix for PSF-2006-001, a bug in repr() for unicode strings on UCS-4 (wide unicode) builds.

Enjoy this release,
Anthony

Anthony Baxter
[EMAIL PROTECTED]
Python Release Manager
(on behalf of the entire python-dev team)