Please ignore my last email. The idea for combining trunc, ceil,
floor, etc was probably just a distractor.
[GvR]
> One thing I'm beginning to feel more and more strongly about
> is that round, trunc, ceil and floor all belong in the same
> category, and either should all be builtins or should all
> be in math.
>
> I should also admit that the 2-arg version of round() was
> borrowed from ABC, but the use ca
[MvL]
> I wouldn't want to propose removal of len(), no. However,
> I do think that adding more builtins (trunc in particular)
> is bad, especially when they make perfect methods.
+1
Raymond
___
What is the current thinking on this? Is the target still April 2008 as
mentioned in PEP 361? Are we going to have an alpha sometime soonish?
Raymond
___
[GvR]
>> > - trunc(), round(), floor() and ceil() should all be built-ins,
>> > corresponding to __trunc__, __round__, __floor__ and __ceil__.
In Py2.6, what will the signatures be?
Last I heard, ceil() and floor() were still going to be float-->float.
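For reference, the float-->float behavior I mean (my own quick check, not from
the thread):

    import math
    assert math.floor(-3.7) == -4.0   # float in, float out in 2.x
    assert math.ceil(-3.7) == -3.0
    assert int(-3.7) == -3            # int() truncates toward zero and returns an int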
Raymond
___
> Then int() can be defined by deferring to trunc()
> -- as opposed to round().
That part is new and represents some progress. If I understand it
correctly, it means that we won't have both __int__ and __trunc__
magic methods. That's a good thing.
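A rough sketch of what deferring to __trunc__ looks like for a user-defined
Real type (my illustration, not code from the patch):

    import math

    class Celsius(object):
        def __init__(self, degrees):
            self.degrees = degrees
        def __trunc__(self):
            # round toward zero, the semantics trunc() promises
            return int(self.degrees)

    assert math.trunc(Celsius(21.9)) == 21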
Raymond
__
> [Guido]
>> There is actually quite an important signal to the reader that is
>> present when you see trunc(x) but absent when you see int(x): with
>> trunc(x), the implication is that x is a (Real) number. With int(x),
>> you can make no such assumption -- x could be a string, or it could be
>> a
[Raymond Hettinger]
>> The idea that programmers are confused by int(3.7)-->3 may not be nuts, but
>> it doesn't match any experience I've had with any
>> programmer, ever.
[Christian Heimes]
> You haven't been doing newbie support in #python lately. Sta
> ... You may disagree, but that doesn't make it nuts.
Too many thoughts compressed into one adjective ;-)
Deprecating int(float)-->int may not be nuts, but it is disruptive.
Having both trunc() and int() in Py2.6 may not be nuts, but it is duplicative
and confusing.
The original impetus for faci
> I've been
> advocating trunc() under the assumption that int(float) would be
> deprecated and eliminated as soon as practical
And how much code would break for basically zero benefit?
This position is totally nuts.
There is *nothing* wrong with int() as it stands now. Nobody has
problems wit
[Christian Heimes]
> In my opinion float(complex) does do the most sensible thing. It fails
> and points the user to abs().
Right.
Raymond
___
> 'bytearray' is a separate issue. It's a brand new type: a *mutable*
> byte array. Its status as a built-in and completely different API
> makes it much more convenient than using the old array module.
That's reasonable.
Raymond
___
[christian.heimes]
>> Backport of several functions from Python 3.0 to 2.6 including
>> PyUnicode_FromString, PyUnicode_Format and PyLong_From/AsSsize_t.
>> The functions are partly required for the backport of the bytearray type and
>> _fileio module. They should also make it easier to
>> port
> trunc() has well-defined semantics -- it takes a Real instance and
> converts it to an Integer instance using round-towards-zero semantics.
>
> int() has undefined semantics -- it takes any object and converts it
> to an int (a concrete type!)
So, the problem is basically this:
Since i
> but if Guido
> likes the idea of a standard naming convention (such as the ABC suffix)
> for classes that use the ABCMeta metaclass, I'd certainly be happy to go
> through and update the affected classes and the code which refers to them.
A prefix would be better.
Raymond
__
> If you want ABCs to be more easily recognizable
> as such, perhaps we could use a naming convention,
Essentially, that's all I was asking for. It doesn't
really matter to me whether numbers.py gets called
abc_numbers or abc.numbers. Either one would be an
improvement.
Raymond
__
All of the abstract base classes should be collected in one place. I propose
that the modules be collected into a package so that we can write:
import abc.numbers
import abc.collections
. . .
Besides collecting all the relevant code in one place, it has a nice additional
benefit of c
[GvR]
> Does no-one think it means round(f) either?
Heck no. The int() and round() functions have been in Lotus and Excel for an
eternity and nobody has a problem learning what those functions do.
Also, the extra argument for round(f, n) makes it clear that it can return
other floats rounded
> ... some pocket calculators have an INT function, defined
> as floor(x) when x is positive and ceil(x) when x is negative
That's the mathematical definition. The way they explain
it is dirt simple: return the integer portion of a number.
Some of the calculators that have int() also have frac()
w
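For what it's worth, the int/frac pairing maps onto math.modf() (my own quick
illustration, not part of the original post):

    import math
    frac_part, int_part = math.modf(-3.25)
    assert (int_part, frac_part) == (-3.0, -0.25)   # int(x) + frac(x) == x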
> If the decision comes to be that int(float) should be blessed
> as a correct way to truncate a float, I'd agree with Raymond
> that trunc() is just duplication and should be eliminated.
Yay, we've made progress!
> I'd,of course, rather have a spelling that says what it means. :)
I wouldn't f
rational.py contains code for turning a float into an
exact integer ratio. I've needed something like this in
other situations as well. The output is more convenient
than the mantissa/exponent pair returned by math.frexp().
I propose C-coding this function and either putting it in
the math modul
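A rough pure-Python sketch of the idea (mine, not the rational.py code itself):

    import math

    def float_to_ratio(x):
        """Return (numerator, denominator) exactly equal to the float x."""
        mantissa, exponent = math.frexp(x)     # x == mantissa * 2**exponent
        while mantissa != int(mantissa):       # at most ~53 doublings for a C double
            mantissa *= 2.0
            exponent -= 1
        numerator = int(mantissa)
        if exponent >= 0:
            return numerator * 2 ** exponent, 1
        return numerator, 2 ** -exponent

    assert float_to_ratio(2.5) == (5, 2)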
[Raymond Hettinger]
> Since something similar is happening to math.ceil and math.floor,
> I'm curious why trunc() ended-up in builtins instead of the math
> module. Doesn't it make sense to collect similar functions
> with similar signatures in the same place?
[Christian H
>> Can anyone explain to me why we need both trunc() and int()?
> trunc() has well-defined semantics -- it takes a Real instance
> and converts it to an Integer instance using round-towards-zero
> semantics.
Since something similar is happening to math.ceil and math.floor,
I'm curious why trunc()
Can anyone explain to me why we need both trunc() and int()?
We used to be very resistant to adding new built-ins and
magic method protocols. In days not long past, this would
have encountered fierce opposition.
ISTM that numbers.py has taken on a life of its own and
is causing non-essential off
>> And, would we lose the nice relationship expressed by:
>>
>> for elem in container:
>>     assert elem in container
[Steven Bethard]
> We've already lost this if anyone really wants to break it::
>
> >>> class C(object):
> ...     def __iter__(self):
> ...         return iter(x
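A complete version of that kind of counterexample (my reconstruction, not
Steven's exact code):

    class C(object):
        def __iter__(self):
            return iter([1, 2, 3])
        def __contains__(self, item):
            return False

    c = C()
    for elem in c:
        assert elem not in c     # the container invariant is already breakable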
[Daniel Stutzbach]
> There are many places in the C implementation where a slot
> returns an int rather than a PyObject. There other replies
> in this thread seem to support altering the signature of the
> slot to return a PyObject. Is this setting a precedent that
> _all_ slots should return
> having b"" and bytes as aliases for "" and
> str in 2.6 would mean that we could write 2.6 code that correctly
> expresses the use of binary data -- and we could use u"" and unicode
> for code using text, and 2to3 would translate those to "" and str and
> the code would be correct 3.0 text proces
> *If* we provide some kind of "backport" of
> bytes (even if it's just an alias for or trivial
> subclass of str), it should be part of a strategy
> that makes it easier to write code that
> runs under 2.6 and can be automatically translated
> to run under 3.0 with the same semantics.
If it's
[GvR]
> I believe the issue of whether and how to backport bytes
> (and bytearray?) from 3.0 to 2.6 has come up before, but
> I don't think we've come to any kind of conclusion.
My recommendation is to leave it out of 2.6.
Not every 3.0 concept has to be backported. This particular one doesn't h
>> 1. Have structseq subclass from PyTupleObject so that isinstance(s, tuple)
>> returns True. This makes the object usable whenever
>> tuples are needed.
>
> Hmm, is that really necessary? structseq has been in use for quite a
> while and this need hasn't come up -- it's been designed to be qui
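For anyone following along, item 1 is about making things like this hold
(my own example of the usage in question):

    import time
    t = time.gmtime(0)
    # struct_time already unpacks and indexes like a tuple ...
    year, month, day = t[0], t[1], t[2]
    assert (year, month, day) == (1970, 1, 1)
    # ... the question is whether isinstance(t, tuple) should also be True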
[Christian Heimes]
> Log:
> Added a new and better structseq representation. E.g. repr(time.gmtime(0)) now
> returns 'time.struct_time(tm_year=1970, tm_mon=1,
> tm_mday=1, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=3, tm_yday=1, tm_isdst=0)'
> instead of '(1970, 1, 1, 0, 0, 0, 3, 1, 0)'. The
> feature
[Raymond]
>>> When does it come-up that you want a third summed dict
>>> while keeping the two originals around unchanged? Does
>>> it matter that the addition is non-commutative? Would
>>> a + b + c produce an intermediate a/b combo and then
>>> another new object for a/b/c so that the entries i
> I'd like to take an "all or nothing" approach to this: either we
> implement the same 4 operations as for sets (|, &, ^ and -) or we
> implement none of them.
. . .
> I'm not sure where I stand on this proposal -- I guess a +0, if
> someone else does the work. (The abc.py module needs to be up
> I wasn't suggesting that the result of concatenation would
> be a chained table, rather that it would perform the
> equivalent of an update and return the new dict
>(the same way extend works for lists)
When does it come-up that you want a third summed dict
while keeping the two originals
[Jared]
> I think it would be convenient and pythonic if dict
> objects implemented the PySequence_Concat method.
IMO, the chainmap() recipe on ASPN is a much better solution since it doesn't
create a third dictionary with all the attendant allocation and copying
effort. It isn't a common u
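A minimal sketch of the chained-lookup idea (my condensation, not the ASPN
recipe verbatim):

    class ChainMap(object):
        def __init__(self, *maps):
            self.maps = maps                  # searched left to right
        def __getitem__(self, key):
            for mapping in self.maps:
                if key in mapping:
                    return mapping[key]
            raise KeyError(key)

    combined = ChainMap({'a': 1}, {'a': 99, 'b': 2})
    assert combined['a'] == 1 and combined['b'] == 2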
This seems like something that could reasonably be added to Py2.6.
Raymond
___
>> And then extend this to any other
>> package that we consider creating? Otherwise leave it out? How would
>> that follow for sqlite since that is not going to get any shorter
>> thanks to a package? Should it still go into the package for
>> organizational purposes?
> If you're asking me, the
[Aahz]
> I have always thought that "quantize()" makes Decimal
> confusing in the context of the other mechanisms that Python makes
> available for other kinds of numbers.
No doubt, the spec made a number of choices that are obvious only if you work
at IBM. And, there is no doubt, the module has
[Jeffrey Yasskin]
> I always like to have a patch around because abstract discussions,
> even (especially?) on simple topics, have a tendency to run off into
> the weeds. A patch keeps things focused and moving forward.
Please recognize that our little system of patches and newsgroup
discussions i
[Jeffrey Yasskin]
> Given Guido's agreement, expect another version of this patch with
> only __trunc__.
Why is __trunc__ being backported? Is a trunc() builtin being backported?
What is the point of a synonym for int() and __int__ in Py2.6?
Unless I'm missing something, this doesn't improve
[Jeffrey Yasskin]
>> > I'm not
>> > sure exactly what you're objecting to. Could you be more precise?
>>
>> Your note said: "I'll implement Context.round() in a separate patch. Comment
>> away."
>
> Oh, sorry for not being clear then. I don't intend to write or discuss
> that separate patch until
[Raymond]
>> There should probably be a PEP that sets clearer guidelines about what should be
>> backported from Py3.0.
>>
>> Perhaps something like this:
>> * If there is a new feature that can be implemented in both and will make
>> both more attractive, then it should be in both.
>> * If something
[Jeffrey Yasskin]
> The other 3 methods
> specified by PEP 3141 aren't strictly necessary for 2.6, but they will
> be needed for 3.0. I'd rather not make the two versions of Decimal
> gratuitously different, so this patch puts them in the 2.6 version
> too.
If I understand you correctly, then the
more harm than good.
Raymond
- Original Message -
From: "Jeffrey Yasskin" <[EMAIL PROTECTED]>
To: "Raymond Hettinger" <[EMAIL PROTECTED]>
Cc: "Mark Dickinson" <[EMAIL PROTECTED]>; "Python 3000" <[EMAIL PROTECTED]>;
Sent
[Tim]
> I agree it's far from obvious to most how
> to accomplish rounding using the decimal facilities.
FWIW, there is an entry for this in the Decimal FAQ:
http://docs.python.org/lib/decimal-faq.html
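The short version of that entry, roughly (my paraphrase in code, not the FAQ's
exact wording):

    from decimal import Decimal, ROUND_HALF_EVEN

    x = Decimal("7.325")
    rounded = x.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
    assert str(rounded) == "7.32"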
Raymond
___
> I think pep 3141's round(x, ndigits) does (1). The only thing it
> doesn't support yet is specifying the rounding mode. Perhaps the pep
> should say that round() passes any extra named arguments on to the
> __round__() method so that users can specify a rounding mode for types
> that support it?
>> ConcurrentHashMap scales better in the face of threading
. . .
>> So, do Python implementations need to guarantee that list(dict_var) ==
>> a later result from list(dict_var)?
> What code would break if we loosened this restriction?
I can imagine someone has code like this:
for k in d:
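... something along these lines, where two separate passes over the same
unmodified dict are assumed to line up (my guess at the shape of it):

    d = {'a': 1, 'b': 2, 'c': 3}
    keys = []
    values = []
    for k in d:
        keys.append(k)
    for k in d:                       # second pass assumed to visit keys in the same order
        values.append(d[k])
    pairs = list(zip(keys, values))   # silently wrong if the order could change between passes
    assert dict(pairs) == d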
[GvR to Tim]
> Do you have an opinion as to whether we should
> adopt round-to-even at all (as a default)?
For the sake of other implementations (Jython, etc) and for ease of reproducing
the results with other tools (Excel, etc), the simplest choice is int(x+0.5).
That works everywhere, it is
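A quick illustration of that choice versus round-to-even (my example):

    assert int(2.5 + 0.5) == 3    # int(x+0.5): halves always round up
    assert int(3.5 + 0.5) == 4
    # round-half-even would give 2 and 4 respectively for 2.5 and 3.5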
[Paul Moore]
> 1 .__str__()
> This one is a number "1" followed by
> the operator "." followed by "__str__".
FWIW, I would avoid the odd spacing and write this as:
(1).__str__()
Raymond
___
[GvR]
> We're thin on contributors as it is (have you noticed
> how few people are submitting anything at all lately?).
The people who are contributing are doing a nice job. Also, it was nice that
the change was discussed on the list.
> 2.6 should be extremely compatible with 2.5 by default.
G
> Consistency and compatibility with
> 3.0 suggest that they should return long for every new type we add
> them to. What does the list think?
I think Py2.6 and Py2.5 should be treated with more respect. Will backporting
this change cause relief or create headaches? By definition, t
[Jeroen Ruigrok van der Werven]
> On the Trac project using your grep gives me 203 lines; if we take ~2 lines
> before and after into consideration, it still means 203/5 ~= 40 occurrences.
Thanks. I'm more curious about the content of those lines. Does the proposed
syntax help, does the need go away
[GvR]
> I wonder if your perceived need for this isn't skewed by your
> working within the core?
The need was perceived by a colleague who does not work on the core. My own
skew was in the opposite direction -- I've seen the pattern so often that I'm
oblivious to it.
Before posting, I ran some
The standard library, my personal code, third-party packages, and my employer's
code base are filled with examples of the following pattern:
try:
    import threading
except ImportError:
    import dummy_threading as threading
try:
    import xml.etree.cElementTree as ET
except ImportError:
    tr
The bots are kicking off so many false alarms that it is becoming difficult to
tell whether a check-in genuinely broke a build.
At the root of the problem are a number of tests in the test suite that randomly
blow up. I now tend to automatically dismiss failures in test_logging and
test_threadi
> 2007/12/8, Raymond Hettinger <[EMAIL PROTECTED]>:
>>...the proposal adds new syntax without adding functionality.
>
> That is indeed the definition of syntactic sugar [1]. Python is full
> of that, for example the import statement.
. . .
> The real problem is that
>
In your example, why do you "raise StopIteration" instead of just writing "return"?
- Original Message -
From: "Manuel Alejandro Cerón Estrada" <[EMAIL PROTECTED]>
Take a look at this example:
def lines():
    for line in my_file:
        if some_error():
            raise StopIteration()
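For comparison, the same shape of generator written with a bare return (my
self-contained rewrite; my_file and some_error are stand-ins for the names in
the quoted example, and the trailing yield is assumed):

    my_file = ["one\n", "two\n", "STOP\n", "three\n"]

    def some_error(line):                # stand-in for the real check
        return line.startswith("STOP")

    def lines():
        for line in my_file:
            if some_error(line):
                return                   # a bare return ends the generator just as well
            yield line

    assert list(lines()) == ["one\n", "two\n"]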
> Note that 'yield break' resembles the 'break' statement used in loops,
> while 'StopIteration' doesn't. 'yield break' is more orthogonal to the
> rest of the language.
>
> I am looking forward to seeing your opinions.
-1
I do not find the meaning to be transparent and the proposal adds new synt
> Hm... [EMAIL PROTECTED] bounced. I wonder what's going on there..
I'm now in an EWT spin-off company. The new email address is [EMAIL PROTECTED]
Also, I frequently check the [EMAIL PROTECTED] account too.
Raymond
___
> I never even saw that one. I'm hoping Raymond will have another look.
Great. Will review it this week.
Raymond
___
> It looks like we're in agreement to drop unbound methods
+1 It is a bit cleaner to simply return the unmodified function.
Raymond
___
From: "Daniel Stutzbach" <[EMAIL PROTECTED]>
> Many years ago I implemented a deque type in C (for use in C programs)
> using a single dynamically sized array as the underling type,
That approach was not used so that we could avoid the data movement
associated with resizing.
> I see the
> advantages as f
> In my opinion __replace__ should be able to replace multiple fields.
This suggestion is accepted and checked-in. See revision 58975.
Surprisingly, the required signature change results in improved
clarity. This was an all around win.
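For the curious, the multi-field form works like this (using the method's
eventual public spelling, _replace):

    from collections import namedtuple

    Point = namedtuple('Point', 'x y z')
    p = Point(1, 2, 3)
    assert p._replace(x=10, z=30) == Point(10, 2, 30)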
Raymond
___
> crucial stuff like __fields__ ... fully
> read-write.
On further thought, I think this is a good idea. Nothing good can come from
writing to this class variable.
The suggestion is checked in as rev 58971. Curiously, it was already documented as
read-only (I took the time machine out for a spin
> for efficiency I would prefer to avoid using * to break
> up the sequences generated directly by the database interface.
There are some use cases that would be better served by an alternate design and
others that are better served by the current design. For example, it would
really suck to ha
> As you may have seen, I have recently been granted developer
> privileges on python svn.
Hello Amaury. Welcome aboard.
Raymond
___
> I'm not sure about the name "propset" ...
>Maybe something like "setproperty" would be better.
I think not. Saying "setproperty" has too many ambiguous mental parsings.
When does "set" take place -- assigning a value to a property is different
from defining the property itself. Is "set" a verb
> and have a matching propdel decorator?
-1. That would be a complete waste of builtin space.
Put stuff in when it is really needed. Ideas are
not required to automatically propagate from the
commonly used cases to the rarely used cases.
Raymond
___
> I'd like to make this [propset] a standard built-in,
+1 -- I find this to be an attractive syntax
> I'd also like to change property so that the doc
> string defaults to the doc string of the getter.
+1 -- This should also be done for classmethod,
staticmethod, and anything else that wraps f
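A sketch of the pattern under discussion, in the spelling that eventually
shipped as property.setter ("propset" was the working name here):

    class Temperature(object):
        def __init__(self):
            self._celsius = 0.0

        @property
        def celsius(self):
            "Temperature in degrees Celsius."
            return self._celsius

        @celsius.setter
        def celsius(self, value):
            self._celsius = float(value)

    t = Temperature()
    t.celsius = 21
    assert t.celsius == 21.0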
For Py2.6, I fixed a long-standing irritant where the count raised an
OverflowError when it reached LONG_MAX.
If you guys agree that is a bug fix, I'll backport it to Py2.5.
Raymond
___
>> Yes! We have guaranteed that spec updates are to be treated as bug fixes
>> and backported. This is especially important in this
>> case
>> because other errors have been fixed and the test cases have grown.
>
> Perfect! I'll backport it to 2.5... what about 2.4?
If there are any plans for
> Decimal is a pretty stand alone module, and I'm absolutely sure that
> just backporting the whole module and its testcases will fix a lot of
> problems, and Py2.5 users will have new functionality, but is this ok?
Yes! We have guaranteed that spec updates are to be treated as bug fixes and
bac
If the differences are few, I prefer that you insert some conditionals
that attach different functions based on the version number. That way
we can keep a single version of the source that works on all of the
pythons.
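The kind of conditional I have in mind (my illustration, not from the actual
patch):

    import sys

    if sys.version_info >= (3, 0):
        def text(s):
            return s                     # already unicode text in 3.0
    else:
        def text(s):
            return s.decode('ascii')     # 2.x str needs decoding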
Raymond
On Sep 29, 2007, at 8:26 AM, "Thomas Wouters" <[EMAIL PROTECTED]
[Bruce Frederiksen]
I've added a new function to itertools called 'concat'. This function is
much like chain, but takes all of the iterables as a single argument.
[Raymond]
>> Any practical use cases or is this just a theoretical improvement?
>>
>> For Py2.x, I'm not wil
[Bruce Frederiksen]
>> I've added a new function to itertools called 'concat'. This function is
>> much like chain, but takes all of the iterables as a single argument.
Any practical use cases or is this just a theoretical improvement?
For Py2.x, I'm not willing to unnecessarily expand the modu
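For concreteness, the difference being proposed (this use case was later
covered by chain.from_iterable in 2.6):

    from itertools import chain

    data = [[1, 2], [3], [4, 5]]
    assert list(chain(*data)) == [1, 2, 3, 4, 5]               # chain: separate arguments
    assert list(chain.from_iterable(data)) == [1, 2, 3, 4, 5]  # 'concat' flavor: one argument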
[Bill Janssen]
>
> How about this one, though:
>
>PyDict_NEW(int) => PySetObject *
>PyDict_ADD(s, value)
>
> ADD would just stick value in the next
> empty slot (and steal its reference).
Dicts, sets and frozenset are implemented as hash tables, not as arrays, so the
above suggestion do
You can create a frozenset from any iterable using PyFrozenSet_New().
If you don't have an iterable and want to build-up the frozenset one element at
a time, the approach is to create a regular set (or some other mutable
container), add to it, then convert it to a frozenset when you're done:
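In pure-Python terms the pattern is simply the following (my sketch; the C
version does the same dance with PySet_New(), PySet_Add() and PyFrozenSet_New()):

    def produce_items():                 # stand-in for whatever produces the elements
        return iter([1, 2, 2, 3])

    s = set()
    for item in produce_items():
        s.add(item)
    result = frozenset(s)
    assert result == frozenset([1, 2, 3])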
The docs do make a distinction and generally follow the definitions given in
the glossary for the tutorial.
In the case of iter(collection), I prefer the current wording because the
target object need not support __iter__; it is sufficient
to supply a sequential __getitem__ method.
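For example, this class defines no __iter__ at all, yet iter() works on it
(my own minimal illustration):

    class Squares(object):
        def __getitem__(self, index):
            if index >= 5:
                raise IndexError(index)
            return index * index

    assert list(iter(Squares())) == [0, 1, 4, 9, 16]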
Raymond
[Stephen J. Turnbull]
> Shouldn't the "until" in the doc be "while"? Alternatively, "true"
> could be changed to "false".
Yes. I'll make the change.
Raymond
___
[Matthieu on itertools.dropwhile() docs]
> Make an iterator that drops elements from the iterable as long as the
> predicate is true; afterwards, returns every element. Note,
> the iterator does not produce any output until the predicate is true, so it
> may have a lengthy start-up time.
>
> It
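For reference, the behavior those two sentences are describing (my example,
not from the docs):

    from itertools import dropwhile

    assert list(dropwhile(lambda x: x < 3, [1, 2, 4, 1])) == [4, 1]
    # nothing comes out until the predicate first turns false (here, at 4)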
From: "Guido van Rossum" <[EMAIL PROTECTED]>
> But doesn't the very same argument also apply against islice(), which
> you just offered as an alternative?
Not really. The use cases for islice() typically do not involve
repeated slices of an iterator unless it is slicing off the front
few elements
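That is, the common shape is a single slice off the front (my example):

    from itertools import islice

    it = iter(range(10))
    head = list(islice(it, 3))       # take the first few elements, once
    assert head == [0, 1, 2]
    assert next(it) == 3             # the iterator resumes where the slice stopped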
[Walter Dörwald]
> I'd like to propose the following addition to itertools: A function
> itertools.getitem() which is basically equivalent to the following
> python code:
>
> _default = object()
>
> def getitem(iterable, index, default=_default):
>     try:
>         return list(iterable)[index]
>
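A complete version of the idea for reference (my reconstruction; Walter's
actual reference implementation may differ in details):

    _default = object()

    def getitem(iterable, index, default=_default):
        try:
            return list(iterable)[index]
        except IndexError:
            if default is _default:
                raise
            return default

    assert getitem(iter(range(10)), 3) == 3
    assert getitem(iter(range(3)), 20, 'missing') == 'missing'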
From: "Greg Ewing"
> If the aforementioned iterable can yield *anything*,
> then it might yield this 'nodef' value as well.
>
> For this reason, there *can't* exist any *standard*
> guaranteed-unambiguous sentinel value. Each use
> case needs its own, to ensure it's truly unambiguous
> in the con
> - If you make a mistake in LaTeX, you will get a cryptic error which
> is usually a little difficult to figure out (if you're not used to
> it). You can get an error, though.
FWIW, the pure Python program in Tools/scripts/texchecker.py does a
pretty good job of catching typical LaTeX mistakes and
>> * New method (proposed by Shane Holloway): s1.isdisjoint(s2).
>> Logically equivalent to "not s1.intersection(s2)" but has an
>> early-out if a common member is found.
[MvL]
> I'd rather see iterator versions of the set operations.
Interesting idea. I'm not sure I see how to make it work.
Here some ideas that have been proposed for sets:
* New method (proposed by Shane Holloway): s1.isdisjoint(s2). Logically
equivalent to "not s1.intersection(s2)" but has an early-out if a common member
is found. The speed-up is potentially large given two big sets that may
largely overlap or
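A sketch of the proposed semantics (my pseudo-implementation, not the eventual
C code):

    def isdisjoint(s1, s2):
        smaller, larger = (s1, s2) if len(s1) <= len(s2) else (s2, s1)
        for element in smaller:
            if element in larger:
                return False          # early out on the first common member
        return True

    assert isdisjoint(set([1, 2]), set([3, 4]))
    assert not isdisjoint(set([1, 2]), set([2, 3]))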
> The only rationale I can think of for such a thing is
> that maybe they're trying to accommodate the possibility
> of a machine built entirely around a hardware implementation
> of the spec, that doesn't have any other way of doing
> bitwise logical operations.
Nonsense. The logical operations
> The only rationale I can think of for such a thing is
> that maybe they're trying to accommodate the possibility
> of a machine built entirely around a hardware implementation
> of the spec, that doesn't have any other way of doing
> bitwise logical operations. If that's the case, then Python
>
I question the sanity of the spec writers in this case, but I do trust that
overall they have provided an extremely well thought-out spec, have gone
through extensive discussion/feedback cycles, and have provided a thorough
test suite. It is as good as it gets.
Ray
> I'd like to suggest that we remove all (or nearly all) uses of
> xrange from the stdlib. A quick scan shows that most of the usage
> of it is unnecessary. With it going away in 3.0, and it being
> informally deprecated anyway, it seems like a good thing to go away
> where possible.
>
>Any obj
>Raymond> I find that style hard to maintain. What is the advantage over
>Raymond> multi-line strings?
>
>Raymond>     rows = self.executesql('''
>Raymond>         select cities.city, state, country
>Raymond>         from cities, venues, events, addresses
>Raymond>         where cities.
[Skip]
> I use it all the time. For example, to build up (what I consider to be)
> readable SQL queries:
>
>     rows = self.executesql("select cities.city, state, country"
>                            "from cities, venues, events, addresses"
>                            "where cities.city like %s"
>
[Collin Winter]
> This should be fixed in r54844. The problem was that the availability
> of the urlfetch resource wasn't being checked early enough and so
> test_support.run_suite() was converting the ResourceDenied exception
> into a TestError instance. This wasn't showing up on other machines
>
Ralf, your issue is arising because of revision 53655 which fixes SF 1615701.
Subclasses of builtins are pickled using obj.__reduce_ex__() which returns
a tuple with a _reconstructor function and a tuple of arguments to that
function.
That tuple of arguments includes the subclass name, the base cl
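A rough picture of what that reduce tuple looks like (my sketch; the exact
contents vary with the pickle protocol):

    try:
        import copy_reg                  # 2.x spelling
    except ImportError:
        import copyreg as copy_reg       # 3.x spelling

    class MyList(list):
        pass

    reduced = MyList([1, 2]).__reduce_ex__(1)
    func, args = reduced[0], reduced[1]
    # roughly: func is copy_reg._reconstructor and args is
    # (MyList, list, [1, 2]) -- the subclass, the base class, the base-class state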
The pickle issue may be related to revision 53655 fixing a pseudo-bug (it was
arguable whether the current or prior behavior was more desirable). Will look at
this more and will report back.
Raymond
___
[Facundo Batista]
> The names of the new functions will be discussed here in the second
> step. For example, I'm not absolutely sure that something like...
>
> Decimal("1100").xor(Decimal("0110"))
> Decimal("1010")
>
> ...is actually needed.
>
It doesn't matter. We promise to offer a full impleme