FYI, itertools.compress() is very useful in conjunction with
itertools.cycle() to pick out elements following a periodic pattern of
indices. For example,
# Elements at even indices.
>>> list(compress(range(20), cycle([1, 0])))
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
# Or at odd ones.
>>> list(compress(range(20), cycle([0, 1])))
[1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
[Chris Angelico ]
> ...
> For it to be accepted, you have to convince people - particularly,
> core devs - that it's of value. At the moment, I'm unconvinced, but on
> the other hand, all you're proposing is a default value for a
> currently-mandatory argument, so the bar isn't TOO high (it's not l
"""
Some people, when confronted with a problem, think “I know, I'll use
regular expressions.” Now they have two problems.
- Jamie Zawinski
"""
Even more true of regexps than of floating point, and even of daemon threads ;-)
regex is a terrific module, incorporating many features that newer
reg
[Tim]
>>> That leaves the happy 5% who write "[^X]*X", which
>>> finally says what they intended from the start.
[Steven]
>> Doesn't that only work if X is literally a single character?
Right. It was an example, not a meta-example. Even for a _single
character_, "match up to the next, but never m
[Tim]
>> In SNOBOL, as I recall, it could be spelled
>>
>> ARB "spam" FENCE
[Chris]
> Ah, so that's a bit more complicated than the "no-backtracking"
> parsing style of REXX and scanf.
Oh, a lot more complex. In SNOBOL, arbitrary computation can be
performed at any point in pattern matching.
[Steven D'Aprano ]
> I've been interested in the existence of SNOBOL string scanning for
> a long time, but I know very little about it.
>
> How does it differ from regexes, and why have programming languages
> pretty much standardised on regexes rather than other forms of string
> matching?
What
[Tim, on trying to match only the next instance of "spam"]
> Assertions aren't needed, but it is nightmarish to get right.
Followed by a nightmare that got it wrong. My apologies - that's what
I get for trying to show off ;-)
It's actually far easier if assertions are used, and I'm too old to
bot
do! He was never fond of doctest either.
> I know I don't. I love them. Tim Peters loves them (he should, he wrote
> the module) and if it's good enough for Uncle Timmy then anyone who
> disparages them better have a damn good reason.
>
Nope! They don't need any reason at all.
Since this started, I've been using variants of the following in my own
code, wrapping `accumulate`:
from itertools import accumulate, chain, cycle, islice
def accum(iterable, func=None, *, initial=None, skipfirst=False):
if initial is not None:
iterable = chain([initi
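The snippet cuts off mid-line; a minimal sketch of how such a wrapper might be finished (the exact handling of `func` and `skipfirst` here is my assumption, not necessarily Tim's code):

```python
from itertools import accumulate, chain, islice

def accum(iterable, func=None, *, initial=None, skipfirst=False):
    # Prepend an initial value if given (like accumulate's own
    # `initial` argument in Python 3.8+).
    if initial is not None:
        iterable = chain([initial], iterable)
    if func is None:
        result = accumulate(iterable)
    else:
        result = accumulate(iterable, func)
    # Optionally drop the first accumulated value (often the seed).
    if skipfirst:
        result = islice(result, 1, None)
    return result
```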
[James Lu]
> > Currently, is = = = always equivalent
> > to = ; = ; = ?
[Stephen J. Turnbull]
> No. It's equivalent to
>
> =
> =
> =
>
> and the order matters because the s may have side effects.
This is tricky stuff. In fact the rightmost expression is evaluated once,
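The semantics under discussion (the right-hand side is evaluated once, then stored into the targets left to right) can be observed directly:

```python
log = []

def rhs():
    # The right-hand side runs exactly once.
    log.append("rhs")
    return 1

class LoudDict(dict):
    # Record the order in which targets receive the value.
    def __setitem__(self, key, value):
        log.append(f"store {key}")
        super().__setitem__(key, value)

d1, d2 = LoudDict(), LoudDict()
d1["a"] = d2["b"] = rhs()
# log is ["rhs", "store a", "store b"]: one evaluation, left-to-right stores.
```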
[Jonathan Fine ]
> The first line from "import this" is
>
> The Zen of Python, by Tim Peters
>
> I suggest we put this discussion on hold, until Tim Peters (copied)
> has had a chance to respond.
>
>
Don't look at me - I was merely channeling Guido
[Tim Delaney ]
> "Elegant" is the *only* word I think it would be appropriate to replace
> "beautiful" with.
> And I can't think of an elegant replacement for "ugly" to pair with
> "elegant". "Awkward" would probably be the best I can think of, and
> "Elegant is better than awkward" just feels k
[Tim]
>
> > I already made clear that I'm opposed to changing it.
>
[Terry Reedy ]
> To me, this settles the issues. As author, you own the copyright on
> your work. The CLA allows revision of contributions, but I don't think
> that contributed poetry should be treated the same as code and docs
[Eric V. Smith ]
> Here’s the idea: for f-strings, we add a !d conversion operator, which
> is superficially similar to !s, !r, and !a. The meaning of !d is:
> produce the text of the expression (not its value!), followed by an
> equal sign, followed by the repr of the value of the expression.
...
[Tim]
>> But what if
>>
>> {EXPR!d:FMT}
>>
>> acted like the current
>>
>> EXPR={EXPR:FMT}
>>
>> ? I'd find _that_ useful often. For example, when displaying floats,
>> where the repr is almost never what I want to see.
>> ...
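For reference, this is essentially where Python 3.8's f-string `=` specifier landed: the repr by default, but a format spec formats the value, which is the behavior Tim asked for here:

```python
x = 1 / 3
plain = f"{x=}"        # expression text, '=', then the repr of the value
formatted = f"{x=:.3f}"  # a format spec formats the value instead
```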
[Eric V. Smith ]
> After giv
[Guido]
> Gah! You are overthinking it. This idea is only worth it if it's dead
> simple, and the version that Eric posted to start this thread, where !d
> uses the repr() of the expression, is the only one simple enough to bother
> implementing.
>
In that case, I have no real use for it, for rea
[Tim]
>
> > Note that transforming
> >
> > {EXPR!d:FMT}
> >
> > into
> >
> > EXPR={repr(EXPR):FMT}
> >
> > is actually slightly more involved than transforming it into
> >
> > EXPR={EXPR:FMT}
> >
> > so I don't buy the argument that the original idea is simpler. More
> > magical and le
[David Mertz ]
> I have to say though that the existing behavior of
> `statistics.median[_low|_high|]`
> is SURPRISING if not outright wrong. It is the behavior in existing Python,
> but it is very strange.
>
> The implementation simply does whatever `sorted()` does, which is an
> implementation
[David Mertz ]
> Thanks Tim for clarifying. Is it even the case that sorts are STABLE in
> the face of non-total orderings under __lt__? A couple quick examples
> don't refute that, but what I tried was not very thorough, nor did I
> think much about TimSort itself.
I'm not clear on what "stable
[David Mertz ]
> OK, let me be more precise. Obviously if the implementation in a class is:
>
> class Foo:
>     def __lt__(self, other):
>         return random.random() < 0.5
>
> Then we aren't going to rely on much.
>
> * If comparison of any two items in a list (under __lt__) is deterministic,
I'd like to see internal consistency across the central-tendency
statistics in the presence of NaNs. What happens now:
mean: the code appears to guarantee that a NaN will be returned if a
NaN is in the input.
median: as recently detailed, just about anything can happen,
depending on how undefi
[David Mertz ]
> I think consistent NaN-poisoning would be excellent behavior. It will
> always make sense for median (and its variants).
>
>> >>> statistics.mode([2, 2, nan, nan, nan])
>> nan
>> >>> statistics.mode([2, 2, inf - inf, inf - inf, inf - inf])
>> 2
>
>
> But in the mode case, I'm not
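The asymmetry in that quoted pair comes from object identity: container lookups short-circuit on `is` before trying `==`, so repeated references to one NaN object pool together, while freshly computed NaNs stay distinct:

```python
import math
import statistics

nan = float("nan")
inf = float("inf")

# The *same* NaN object three times: identity short-circuit pools them
# into a single key with count 3, so the mode is NaN.
assert math.isnan(statistics.mode([2, 2, nan, nan, nan]))

# Three *distinct* NaN objects never compare equal, so each counts once,
# and 2 (count 2) wins.
assert statistics.mode([2, 2, inf - inf, inf - inf, inf - inf]) == 2
```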
[Guido]
> ...
> I learned something in this thread -- I had no idea that the deque datatype
> even has an option to limit its size (and silently drop older values as new
> ones are added), let alone that the case of setting the size to zero is
> optimized in the C code. But more importantly, I don'
to do
;-)
On Sun, Sep 8, 2019 at 4:02 PM Guido van Rossum wrote:
>
> That's a question for Tim Peters.
>
> On Sun, Sep 8, 2019 at 9:48 PM Andrew Barnert via Python-ideas
> wrote:
>>
>> On Sep 7, 2019, at 18:07, [email protected] wrote:
>> >
>
[Richard Higginbotham ]
> If you start including the time it takes to convert lists to sets its even
> more
> pronounced. hash values can collide and the bigger the data set the
> more likely it is to happen.
That's not so. Python's hash tables dynamically resize so that the
load factor never ex
BTW, this thread seems a good place to mention the under-appreciated
SortedContainers package from Grant Jenks:
http://www.grantjenks.com/docs/sortedcontainers/
This supplies sorted containers (lists, dicts, sets) coded in pure
Python, which generally run at least as fast as C extensions
impl
[Tim]
>> Something I haven't seen mentioned here, but I may have missed it:
>> when timing algorithms with sets and dicts, people often end up merely
>> measuring the bad effects of hardware data cache misses when the
>> containers get large, and especially in contrived benchmarks.
>>
>> In those t
Note that CPython's sort is in the business of merging sorted
(sub)lists. Any mergesort is. But CPython's adapts to take
advantage, when possible, of "lumpy" distributions. For example, if
you sort
list(range(100, 200)) + list(range(0, 100))
it goes very fast (O(N)). Because
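A rough way to see the effect (timing numbers vary by machine, so treat the comparison as indicative only):

```python
import random
import timeit

# Two already-sorted runs glued together: the sort detects the runs and
# merges them, so this is close to O(N).
lumpy = list(range(100, 200)) + list(range(0, 100))
shuffled = lumpy[:]
random.shuffle(shuffled)

t_lumpy = timeit.timeit("sorted(data)", globals={"data": lumpy}, number=1000)
t_shuffled = timeit.timeit("sorted(data)", globals={"data": shuffled}, number=1000)
# Typically t_lumpy comes out noticeably smaller than t_shuffled.
```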
[Guido]
> I suspect Tim Peters had a good reason to include copysign() rather than
> sign().
Tim wasn't involved in this one :-)
But copysign is a "recommended" function in IEEE-754, and was
eventually added to the C standard (for libm). So supplying it in
`math` was re
[Guido]
> We're not changing next(). It's too fundamental to change even subtly.
Note that `next()` already accepts two arguments (the second is an
optional default in case its iterable argument is exhausted). Like:
>>> next(iter([]), 42)
42
> We might add itertools.first(), but not builtins.fi
[Guido]
> ...
> I do have to admit that I'm probably biased because I didn't recall 2-arg
> next()
> myself until it was mentioned in this thread.
I knew about it once, but had forgotten all about it too until this
thread :-) Which does indeed make the case stronger for adding
itertools.first, e
[Andrew Barnert ]
> Didn’t PyPy already make the fix years ago of rewriting all of itertools
> (for both 2.7 and 3.3 or whenever) as “Python builtins” in the underlying
> namespace?
I don't know.
> Also, even if I’m remembering wrong, just writing a Python module in front
> of the C module, with
[Steven D'Aprano ]
> What do you think of my suggestion that we promote the itertools recipe
> "take" into a function?
>
> https://mail.python.org/archives/list/[email protected]/message/O5RYM6ZDXEB3OAQT75IADT4YLXE25HTT/
That it's independent of whether `first()` should be added.
I would _m
[Guido]
>> def first(it, /, default=None):
>>     it = iter(it)
>>     try:
>>         return next(it)
>>     except StopIteration:
>>         return default
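For concreteness, here is that function with a few calls showing the intended behavior:

```python
def first(it, /, default=None):
    it = iter(it)
    try:
        return next(it)
    except StopIteration:
        return default

# Works uniformly on anything iterable; never raises on empty input.
assert first([10, 20]) == 10
assert first([], default="empty") == "empty"
assert first({"k": "v"}) == "k"   # iterates the dict's keys
```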
[Greg Ewing ]
> Can you provide any insight into why you think it's better for
> it never to raise an exception, as opposed to raising
[Steven D'Aprano]
>>> values = take(count, items, default=None)
[MRAB]
>> Why is the count first? Why not have the (definitely required) items
>> first and let the count have a default of 1?
[Steven]
> I lifted the bulk of the function, including the signature, from the
> recipe in th
[Guido]
> ...
> Similarly, there is already a spelling of first() (or close enough) that
> raises: 1-arg next().
> If 1-arg first() would also raise, it would fail the rule "similar things
> should be spelled
> similarly, and different things should be spelled differently".
The "... unless you'r
[Guido]
> The argument that first(it) and next(it) "look the same" doesn't convince
> me;
I'm sorry - I guess then I have absolutely no idea what you were
trying to say, and read it completely wrong.
> if these look the same then all function applications look the same, and
> that can certainly n
[Brett Cannon ]
> Thinking out loud here...
>
> What idiom are we trying to replace with one that's more obvious and whose
> semantics are easy to grasp?
For me, most of the time, it's to have an obvious, uniform way to
spell "non-destructively pick an object from a container (set, dict,
list, d
[Andrew Barnert ]
> ...
> I think you’re arguing that the philosophy of itertools is just wrong, at
> least for
> the kind of code you usually write with it and the kind of people who usually
> write that code. Is that fair, or am I misrepresenting you here?
It's fair enough, although rather than
[Tim]
>> For me, most of the time, it's to have an obvious, uniform way to
>> spell "non-destructively pick an object from a container (set, dict,
>> list, deque, heap, tuple, custom tree class, ...)". I don't even have
>> iterators in mind then, except as an implementation detail.
[Steven]
> You
[Steven D'Aprano ]
> It wasn't :-) but we're talking about adding a function to **itertools**
> not "container tools", one which will behave subtly different with
> containers and iterators. Your use-case ("first item in a container") is
> not the same as the semantics "next element of an iterator"
[Brett Cannon]
> ...
> Sorry, "needed" was too strong of a word. It's more about justification for
> including in the stdlib and deciding to support it for a decade or more
> versus the answer we give for simple one-liners of "put in your personal
> toolbox if you don't want to type it out every
>> Does that mean that first() and next() are undefined for sets?
[Stephen J. Turnbull ]
> first() is undefined. next() is defined by reference to iterating
> over the set (that's why I don't have a problem with iterating over a
> set).
Every suggestion here so far has satisfied that, if S is a
[Tim]
>> Every suggestion here so far has satisfied that, if S is a non-empty set,
>>
>> assert next(iter(S)) is first(S)
>>
>> succeeds. That is, `first()` is _defined_ by reference to iteration
>> order. It's "the first" in that order (hence the name).
[Stephen J. Turnbull ]
> The problem
[Brett Cannon]
> I'm also sure the docs will say "Returns the first item yielded by the
> iterable." That last word is a dead give-away on how the choice will be
> made on any collection, Sequence or not. (Doubly true if this goes into
> itertools.)
>
> There's also the contingency of users who will
[Tim]
>> BTW, you can go a long way in Python without knowing anything about `iter()`
>> or `next()`. But not without mastering `for` loops. That's why I prefer to
>> say that, for ordinary cases,
>>
>> a = first(it)
>>
>> has the same effect as:
>>
>> for a in it:
>>     break
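Checked concretely:

```python
it = [3, 1, 4]
for a in it:
    break
# `a` is bound to the first element, exactly what first(it) would return
# for a non-empty iterable.
assert a == 3
assert next(iter(it)) == 3   # the same value, spelled with iter()/next()
```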
[Andr
[Steven D'Aprano]
...
> I don't recall who wants a version of first that raises,
I do, for one. But I want both (raising and non-raising) behaviors
at times, and the more-itertools version supplies both.
> or when that would be useful.
There is no practical way to assert than an iterable isn't
[Andrew Barnert ]
> Sure, but you can also explain first just fine by saying it returns
> the first thing in its argument, and that will stick as well.
We have different meanings for "fine" in this context. "The first
thing in its argument" is English prose, and prone to
misunderstanding. I'm ab
[Christopher Barker]
>> I think we all agree that it does not belong in __builtins__,
[Greg Ewing]
> Do we?
Nobody yet has argued in favor of it - or even suggested it.
> I'm not convinced.
And that remains true even now ;-) The new part here is that yours is
the first message to mention it th
[David Mertz ]
> Here's a discussion of both a conceptually simple and a convoluted but
> fast version (the latter, naturally, by Tim Peters).
Not really convoluted - it's the natural thing you'd do "by hand" to
move from one partition to the next: look at the curr
[Christopher Barker ]
> ...
> BTW: this has been a REALLY LONG thread -- I think it's time for a
> concrete proposal to be written up, sonce it appears we're not all
> clear on what we're talking about. For my part I think a first() function
> would be nice, an am open to a couple variations, so so
[Christopher Barker ]
> ...
> But the biggest barrier is that it would be a fair bit of churn on the sort()
> functions
> (and the float class), and would only help for floats anyway. If someone want
> to propose this, please do -- but I don't think we should wait for that to do
> something
> wit
[Richard Damon ]
> IEEE total_order puts NaN as bigger than infinity, and -NaN as less than
> -inf.
>
> One simple way to implement it is to convert the representation to a 64
> bit signed integer (not its value, but its representation) and if the
> sign bit is set, complement the bottom 63 bits (be
[David]
> How is that fancy bitmask version different from my 3-line version?
Where he's referring to my:
https://bugs.python.org/msg336487
and, I presume, to his:
def total_order(x):
    if math.isnan(x):
        return (math.copysign(1, x), x)
    return (0, x)
Richard s
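A slight variant of that key (renamed, and with the NaN payload dropped so tie comparisons stay well-defined) used as a sort key:

```python
import math

def total_order_key(x):
    # NaNs sort after every ordinary float; a negative-signed NaN would
    # sort before (copysign extracts the sign bit even from a NaN).
    if math.isnan(x):
        return (math.copysign(1, x), 0.0)
    return (0, x)

nan = float("nan")
result = sorted([nan, 2.0, float("-inf"), -1.0], key=total_order_key)
# -inf, -1.0, 2.0 in order, with the NaN pushed to the end.
```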
[David]
> Has anyone actually ever used those available bits for the zillions of NaNs
> for
> anything good?
Yes: in Python, many sample programs I've posted cleverly use NaN
bits to hide ASCII encodings of delightful puns ;-)
Seriously? Not that I've seen. The _intent_ was that, e.g., quiet
[David]
> What is it, 2**48-2 signaling NaNs and 2**48 quiet NaNs? Is my quick count
> correct (in 64-bit)?
Any bit pattern where the exponent is all ones (there are 11 exponent
bits, giving 2**(64-11) = 2**53 bit patterns with an all-ones
exponent), _and_ the significand isn't all 0 (it's an infi
[David Mertz]
>> As me and Uncle Timmy have pointed out, it IS FIXED in sorted(). You just
>> need to call:
>>
>> sorted_stuff = sorted(stuff, key=nan_aware_transform)
[Christopher Barker]
> But what would that be? floats have inf and -inf -- so how could you force
> the NaNs to be at the end
[M Siddharth Prajosh ]
> This is more of a doubt than a new idea. Python has always worked
> intuitively but this was a bummer.
>
> A list has an append method. So I can do list.append(value).
> I tried doing list(range(10)).append(10) and it returns None.
Yes. Most methods that mutate an object
[David Mertz ]
> ...
> What we get instead is a clear divide between mutating methods
> on collections that (almost) always return None, and functions
> like sorted() and reversed() that return copies of the underlying
> collection/iterable. Of course, there are many methods that don't
> have func
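The divide is easy to demonstrate:

```python
nums = [3, 1]
ret = nums.append(2)      # mutating method: returns None...
# ...while sorted() returns a new list and leaves the original alone.
copy = sorted(nums)
```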
[Guido]
> Sounds like a hallucination or fabrication.
Nope! Turns out my memory was right :-)
> The behavior of `for i in range(10): i` in the REPL exists
> to this day, and list.append() never returned a value.
Sure, but those weren't the claims. The claim was that the result of
an expression
[Guido, on Pythons before 1.0.2 always printing non-None expression
statement results]
> Heh. That was such a misfeature that I had thoroughly suppressed any
> memory of its existence. -k indeed. :-)
I prefer to think of it as a bit of genius :-)
The natural desire to avoid mounds of useless out
[Paul Moore ]
> ...
> Can you share a bit more about why you need to do this? In the
> abstract, having the ability to split numbers over lines seems
> harmless and occasionally useful, but conversely it's not at all
> obvious why anyone would be doing this in real life.
It's not all that uncommon
[Guido]
> But beware, IIRC there are pathological cases involving floats, (long) ints
> and rounding where transitivity may be violated in Python (though I believe
> only Tim Peters can produce an example :-).
Not anymore ;-) That is, while comparisons mixing bigints and floats
may have
[Steven D'Aprano ]
> Sorting doesn't require a total order. Sorting only requires a weak
> order where the only operator required is the "comes before" operator,
> or less than. That's precisely how sorting in Python is implemented.
Let's be careful here. Python's list.sort() guarantees that _if_
[Steven D'Aprano ]
> ...
> All I intended was to say that sort would "work" in the sense you state:
> it will give *some* permutation of the data, where (I think, correct me
> if I'm wrong) each element compares less than the following element.
>
> Wait, no, that's not right either... each element
[Tim]
>> If the result of Python's sort is
>>
>> [x, y]
>>
>> then I happen to know (because I know everything about its
>> implementation) that one of two things is true:
>>
>> - The original list was also [x, y], and y < x was False.
[Steven]
> That's my reasoning, based on your earlie
[Guido]
> Look at the following code.
>
> def foo(a, b):
>     x = a + b
>     if not x:
>         return None
>     sleep(1)  # A calculation that does not use x
>     return a*b
>
> This code DECREFs x when the frame is exited (at the return statement).
> But (assuming) we can clearly see that x
[Guido]
>> I’ve never been able to remember whether (f@g)(x) means f(g(x)) or g(f(x)).
>> That pretty much kills the idea for me.
[David Mertz]
> Well, it means whichever one the designers decide it should mean. But
> obviously it's a thing to remember,
> and one that could sensibly go the other
[Steven D'Aprano ]
>>
>> The other simple solution is `next(iter(mydict.items()))`.
[Guido]
> That one always makes me uncomfortable, because the StopIteration it
> raises when the dict is empty might be misinterpreted. Basically I never
> want to call next() unless there's a try...except Sto
[Cade Brown , suggests ordered sets]
FYI, last time this went around was December, with over 100 msgs here:
https://mail.python.org/archives/list/[email protected]/thread/AEKCBGCKX23S4PMA5YNDHFJRHDL4JMKY/#AEKCBGCKX23S4PMA5YNDHFJRHDL4JMKY
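One commonly cited near-substitute, for what it's worth: since dicts preserve insertion order (guaranteed from Python 3.7), `dict.fromkeys` gives an order-preserving dedup:

```python
items = ["b", "a", "b", "c", "a"]
# Keys keep first-seen order; duplicates collapse.
ordered_dedup = list(dict.fromkeys(items))
```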
[Random832 ]
> I was making a "convert Fraction to Decimal, exactly if possible" function and
> ran into a wall: it's not possible to do some of the necessary operations with
> exact precision in decimal:
>
> - multiplication
> - division where the result can be represented exactly [the divisor is
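The wall Random832 hit can be reproduced in a couple of lines:

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 3
    # The exact product 1.23 * 4.56 = 5.6088 needs 5 digits, so it is
    # rounded to the context's 3 significant digits.
    product = Decimal("1.23") * Decimal("4.56")
```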
[Random832]
> I don't understand what's unclear -
Obviously ;-) But without carefully constructed concrete use cases,
what you say continues to remain unclear to me.
> I was suggesting there should be an easy way to have the exact
> result for all operations on Decimal operands that have exact r
[Random832 ]
> My suggestion was for a way to make it so that if an exact result is
> exactly representable at any precision you get that result, with
> rounding only applied for results that cannot be represented exactly
> regardless of precision.
That may have been the suggestion in your head ;-
[Tim]
>> But the decimal spec takes a different approach, which Python's docs
>> don't explain at all: the otherwise-mysterious ROUND_05UP rounding
>> mode. Quoting from the spec:
>>
>> http://speleotrove.com/decimal/damodel.html
>> ...
>> The rounding mode round-05up permits arithmet
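A small illustration of that rounding mode (my reading of the spec: truncate toward zero, except round *away* from zero when the last retained digit would be 0 or 5, flagging results that can safely be re-rounded later):

```python
from decimal import Decimal, localcontext, ROUND_05UP

with localcontext() as ctx:
    ctx.prec = 2
    ctx.rounding = ROUND_05UP
    a = +Decimal("1.234")  # truncates to 1.2 (last digit 2: plain truncation)
    b = +Decimal("1.049")  # truncation would end in 0, so round up to 1.1
    c = +Decimal("2.51")   # truncation would end in 5, so round up to 2.6
```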
[Tim]
>> Try to spell out what you mean - precisely! - by "this". I can't do
>> that for you. For any plausible way of fleshing it out I've thought
>> of, the answer is "no".
[Marco Sulla ]
> Well, please, don't be so harsh. I'm trying to discuss to someone that
> co-created Python itself, it's no
[Random832 ]
[various bits of pushback, which all basically boil down to this one:]
> ...
> That is nonsense. "exactly representable" is a plain english phrase and
> has a clear meaning that only involves the actual data format, not the
> context.
The `decimal` module implements a very exacting s
[Steven D'Aprano ]
> ...
> (To be honest, I was surprised to learn that the context precision is
> ignored when creating a Decimal from a string. I still find it a bit odd
> that if I set the precision to 3, I can still create a Decimal with
> twelve digits.)
You're not alone ;-) In the original
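The behavior Steven describes:

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 3
    d = Decimal("3.14159265")  # constructor ignores context precision
    rounded = +d               # arithmetic (even unary +) applies it
```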
[Tim]
>> I suggested following the standards' rules (the constructor works the
>> same way as everything else - it rounds) for Python's module too, but
>> Mike Cowlishaw (the decimal spec's primary driver) overruled me on
>> that.
[Greg Ewing ]
> Did he offer a rationale for that?
Possibly, but I
[Sven R. Kunze ]
> ...
> He already convinced some people. Just not some venerable Python devs, which
> doesn't necessarily mean something at all.
>
> Their response: "Oh, I don't need it, let's close it."
> Arek: "But I need it."
>
> So, who's right now?
By default, the venerable Python devs ;-)
[Sven R. Kunze ]
>>>
>>> ...
>>> He already convinced some people. Just not some venerable Python devs,
>>> which doesn't necessarily mean something at all.
>>>
>>> Their response: "Oh, I don't need it, let's close it."
>>> Arek: "But I need it."
>>>
>>> So, who's right now?
[Tim]
>> By default,
On Tue, Sep 6, 2016 at 2:48 PM, David Mertz wrote:
> On Tue, Sep 6, 2016 at 1:15 PM, Sven R. Kunze wrote:
>>
>> Their response: "Oh, I don't need it, let's close it."
>> Arek: "But I need it."
>
>
> This definitely feels like a case of "put it on PyPI." Actually, maybe
> contribute to `boltons`,
[David Mertz ]
> This definitely feels like a case of "put it on PyPI." Actually, maybe
> contribute to `boltons`, it feels like it might fit as a utility function
> there.
It's trivial to write such a function if it's truly needed - it would
be easier to write it from scratch than to remember wh
[Sven R. Kunze ]
> I am not questioning experience which everyone in a team can benefit from.
>
>
> BUT experienced devs also need to recognize and respect the fact that
> younger/unexperienced developers are just better in detecting
> inconsistencies and bloody work-arounds. They simply haven't ha
[Arek Bulski ]
> I don't randomize test order. People should stop assuming that they know
> better.
They should, but, to be fair, you've left them little choice because
you've been so telegraphic about why you think this is a compelling
idea. Others are trying to fill in the blanks, and can only g
[redirected from python-dev, to python-ideas;
please send followups only to python-ideas]
[Elliot Gorokhovsky ]
> ...
> TL;DR: Should I spend time making list.sort() detect if it is sorting
> lexicographically and, if so, use a much faster algorithm?
It will be fun to find out ;-)
As Mark, and
[Elliot Gorokhovsky ]
> Wow, Tim himself!
And Elliot himself! It's a party :-)
> Regarding performance on semi-ordered data: we'll have to
> benchmark to see, but intuitively I imagine radix would meet Timsort
> because verifying that a list of strings is sorted takes Omega(nw)
> (which gives a
[please restrict follow-ups to python-ideas]
Let's not get hung up on meta-discussion here - I always thought "massive
clusterf**k" was a precise technical term anyway ;-)
While timing certainly needs to be done more carefully, it's obvious to me
that this approach _should_ pay off significantly
[Elliot Gorokhovsky]
> Warning: the contents of this message may be dangerous for readers with
> heart conditions.
It appears some people are offended by your exuberance. Ignore them ;-)
> ...
> And it turns out that if you do that, PyFloat_RichCompare becomes ONE LINE
> (after we set op=Py_LT)
[Paul Moore]
> My understanding is that the code does a per-check that all the
> elements of the list are the same type (float, for example). This is a
> relatively quick test (O(n) pointer comparisons). If everything *is* a
> float, then an optimised comparison routine that skips all the type
> ch
[Nick Coghlan]
> It's probably more relevant that cmp() went away, since that
> simplified the comparison logic to just PyObject_RichCompareBool,
> without the custom comparison function path.
Well, the current sort is old by now, and was written for Python 2.
But it did anticipate that rich compa
[Alireza Rafiei ]
> I have a list called count_list which contains tuples like below:
>
> > [('bridge', 2), ('fair', 1), ('lady', 1), ('is', 2), ('down', 4),
> > ('london', 2), ('falling', 4), ('my', 1)]
>
>
> I want to sort it based on the second parameter in descending order and the
> tuples with
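The snippet is cut off, but the usual answer to that shape of question looks like this (the alphabetical tie-break is my guess at the elided requirement):

```python
count_list = [('bridge', 2), ('fair', 1), ('lady', 1), ('is', 2),
              ('down', 4), ('london', 2), ('falling', 4), ('my', 1)]

# Negate the count for descending order; the word breaks ties ascending.
result = sorted(count_list, key=lambda t: (-t[1], t[0]))
```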
[Sven R. Kunze ]
> Indeed. I also didn't know about that detail of reversing. :) Amazing. (Also
> welcome to the list, Alireza.)
It follows from what the docs say, although I'd agree it may be
helpful if the docs explicitly spelled out this consequence (that
reverse=True also preserves the origina
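The consequence in question:

```python
pairs = [("a", 2), ("b", 1), ("c", 2)]
# reverse=True reverses the comparison, not the output: items with equal
# keys keep their original left-to-right order.
by_count_desc = sorted(pairs, key=lambda p: p[1], reverse=True)
```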
[Elliot Gorokhovsky ]
> I'm working on a special-case compare function for bounded integers for the
> sort stuff. By looking at the implementation, I figured out that Py_SIZE of
> a long is the sign times the number of digits (...right?).
> ...
Please ignore the other reply you got - they clearly
[Elliot Gorokhovsky ]
> (Summary of results: my patch at https://bugs.python.org/issue28685 makes
> list.sort() 30-50% faster in common cases, and at most 1.5% slower in the
> uncommon worst case.)
> ...
Would someone please move the patch along? I expect it's my fault it's
languished so long, sinc
[Elliot Gorokhovsky ]
> Another solution: check if there is more than one thread; if there is, then
> disable optimization. Is sorting in multithreaded programs common enough
> to warrant adding the complexity to deal with it?
Not a solution. Not even close ;-) Even if it made good sense,
there'
[Chris Angelico ]
> Arbitrary comparison functions let you do anything but whoa, I
> cannot imagine any way that this would ever happen outside of "hey
> look, here's how you can trigger a SystemError"!
CPython is full of defensive code protecting against malicious crap.
That's why it rarely c
[Tim Delaney ]
> Isn't there already always a scan of the iterable to build the keys array
> for sorting (even if no key keyword param is specified)?
No - `.sort()` is a list method, and as such has nothing to do with
arbitrary iterables, just lists (perhaps you're thinking of the
`sorted()` funct
In the other direction, e.g.,
def expand_rle(rle):
    from itertools import repeat, chain
    return list(chain.from_iterable(repeat(x, n) for x, n in rle))
Then
>>> expand_rle([('a', 5), ('bc', 3)])
['a', 'a', 'a', 'a', 'a', 'bc', 'bc', 'bc']
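The elided other direction was presumably run-length *encoding*; one standard way to write it, via `itertools.groupby` (an assumption about what the earlier message showed):

```python
from itertools import groupby

def make_rle(seq):
    # Collapse runs of equal adjacent elements into (element, count) pairs.
    return [(x, sum(1 for _ in g)) for x, g in groupby(seq)]
```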
As to why zip is in the distribution,
Short course: the average number of probes needed when searching
small dicts/sets can be reduced, in both successful ("found") and
failing ("not found") cases.
But I'm not going to pursue this. This is a brain dump for someone
who's willing to endure the interminable pain of arguing about
benchm
Two bits of new info: first, it's possible to get the performance of
"double" without division, at least via this way:
"""
# Double hashing using Fibonacci multiplication for the increment. This
# does about as well as `double`, but doesn't require division.
#
# The multiplicative constant depen
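A toy sketch of the idea (not CPython's code): derive the probe increment from the hash with a golden-ratio multiply, forcing it odd so that, modulo a power-of-two table size, every slot is eventually visited:

```python
MASK = (1 << 64) - 1
PHI = 0x9E3779B97F4A7C15  # ~2**64 / golden ratio, a common mixing constant

def probe_sequence(h, table_size_bits, n):
    size = 1 << table_size_bits
    i = h & (size - 1)
    # Take the top bits of the multiplied hash as the increment; forcing
    # it odd makes it coprime to the power-of-two table size, so the
    # probe sequence is a full cycle -- no division needed.
    inc = (((h * PHI) & MASK) >> (64 - table_size_bits)) | 1
    out = []
    for _ in range(n):
        out.append(i)
        i = (i + inc) & (size - 1)
    return out
```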