On 26 December 2016 at 21:23, Zahari Dim wrote:
> > There are a lot of entirely valid properties that look something like
> > this:
> >
> > @property
> > def attr(self):
> >     try:
> >         return data_store[lookup_key]
> >     except KeyError:
> >         raise AttributeError
On Sat, Dec 31, 2016 at 1:55 AM, Nick Coghlan wrote:
> Rather than changing the descriptor protocol in general, I'd personally be
> more amenable to the idea of *property* catching AttributeError from the
> functions it calls and turning it into RuntimeError (after a suitable
> deprecation period)
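A minimal sketch of what Nick describes, as a property subclass that converts a leaked AttributeError into RuntimeError (strict_property is a hypothetical name, not an existing API):

```python
class strict_property(property):
    """Sketch: a property whose getter errors are not silently
    swallowed. An AttributeError escaping the getter would normally
    make getattr/hasattr report 'attribute missing'; here it is
    re-raised as RuntimeError instead."""

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        try:
            return super().__get__(obj, objtype)
        except AttributeError as exc:
            raise RuntimeError(
                f"AttributeError escaped property getter: {exc!r}"
            ) from exc


class Demo:
    @strict_property
    def attr(self):
        # Buggy getter: raises AttributeError by accident.
        return self.no_such_attribute
```

With plain `property`, `hasattr(Demo(), "attr")` would quietly return False here; with this variant the bug surfaces as a RuntimeError chained to the original AttributeError.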
On 29 December 2016 at 08:13, Nathaniel Smith wrote:
> On Dec 28, 2016 12:44, "Brett Cannon" wrote:
>
> My quick on-vacation response is that attaching more objects to exceptions
> is typically viewed as dangerous as it can lead to those objects being kept
> alive longer than expected (see the d
On 29 December 2016 at 22:12, Erik Bray wrote:
> 1) CPython's TLS: Defines -1 as an uninitialized key (by fact of the
> implementation--that the keys are integers starting from zero)
> 2) pthreads: Does not define an uninitialized default value for
> keys, for reasons described at [1] under "No
On 29 December 2016 at 18:35, Chris Angelico wrote:
> On Thu, Dec 29, 2016 at 7:20 PM, Steven D'Aprano
> wrote:
> > I'd rather add a generator to the itertools
> > module:
> >
> > itertools.iterhash(iterable) # yield incremental hashes
> >
> > or, copying the API of itertools.chain, add a m
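The proposed iterhash does not exist in itertools; a rough sketch of the suggested semantics, with an assumed (illustrative) tuple-hash combination step:

```python
def iterhash(iterable):
    """Sketch of the proposed itertools.iterhash: yield a running
    hash after each element, so the consumer can stop early or keep
    feeding. The seed and combination step are assumptions, not a
    specified algorithm."""
    h = hash(())  # arbitrary seed
    for item in iterable:
        # Hashing the (running_hash, item) pair keeps the result
        # order-sensitive, unlike XOR or multiplication.
        h = hash((h, item))
        yield h
```

The final yielded value plays the role of `hash(tuple(iterable))`, without materializing the tuple.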
Updating the docs sounds like the more important change for now, given
3.7+. But before the docs make an official recommendation for that recipe,
were the analyses that Steve and I did sufficient to confirm that its hash
distribution and performance are good enough at scale, or is more rigorous
analysis needed?
On Fri, Dec 30, 2016 at 5:05 PM, Nick Coghlan wrote:
> On 29 December 2016 at 22:12, Erik Bray wrote:
>>
>> 1) CPython's TLS: Defines -1 as an uninitialized key (by fact of the
>> implementation--that the keys are integers starting from zero)
>> 2) pthreads: Does not define an uninitialized default
[email protected] writes:
> But as you showed, it's certainly possible to do some exploration in the
> meantime. Prompted by your helpful comparison, I just put together
> https://gist.github.com/jab/fd78b3acd25b3530e0e21f5aaee3c674 to further
> compare hash_tuple vs. hash_incremental.
It's
On 12/30/2016 06:55 AM, Nick Coghlan wrote:
Rather than changing the descriptor protocol in general, I'd personally be
more amenable to the idea of *property* catching AttributeError from the
functions it calls and turning it into RuntimeError (after a suitable
deprecation period). That way f
On 12/30/2016 07:10 AM, Chris Angelico wrote:
Actually, that makes a lot of sense. And since "property" isn't magic
syntax, you could take it sooner:
from somewhere import property
and toy with it that way.
What module would be appropriate, though?
Well, DynamicClassAttribute is kept in the types module.
On Fri, Dec 30, 2016 at 5:24 PM, Nick Coghlan wrote:
>
> I understood the "iterhash" suggestion to be akin to itertools.accumulate:
>
> >>> for value, tally in enumerate(accumulate(range(10))): print(value, ...
It reminds me of hmac[1]/hashlib[2], with the API: h.update(...)
before a .digest()
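hashlib's real incremental API, for comparison:

```python
import hashlib

# Feed data in chunks with .update(), then ask for the digest once.
h = hashlib.sha256()
for chunk in (b"spam", b"eggs"):
    h.update(chunk)
incremental = h.hexdigest()

# Equivalent to hashing the concatenated bytes in one shot.
assert incremental == hashlib.sha256(b"spameggs").hexdigest()
```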
On 2016-12-30 20:59, Matthias Bussonnier wrote:
> On Fri, Dec 30, 2016 at 5:24 PM, Nick Coghlan wrote:
>>
>> I understood the "iterhash" suggestion to be akin to itertools.accumulate:
>>
>> >>> for value, tally in enumerate(accumulate(range(10))): print(value,
>> ...
>
> It reminds me of hmac
I have read the discussion, and I agree that a structure such as Py_tss_t
should be used instead of a platform-specific data type. Just as Steve said
that Py_tss_t should be genuinely treated as an opaque type, the key state
check should be provided as a macro or inline function with a name like
PyThread_tss_is_created. Wel
On Fri, Dec 30, 2016 at 3:54 PM, Christian Heimes
wrote:
> Hi,
>
> I'm the author of PEP 456 (SipHash24) and one of the maintainers of the
> hashlib module.
>
> Before we come up with a new API or recipe, I would like to understand
> the problem first. Why does the initial op consider hash(large_
On 12/30/2016 03:36 PM, [email protected] wrote:
In the use cases I described, the objects' members are ordered. So in the
unlikely event that two objects hash to the same value but are unequal, the
__eq__ call should be cheap, because they probably differ in length or on their
first member
On Fri, Dec 30, 2016 at 7:20 PM, Ethan Furman wrote:
> On 12/30/2016 03:36 PM, [email protected] wrote:
>
> In the use cases I described, the objects' members are ordered. So in the
>> unlikely event that two objects hash to the same value but are unequal, the
>> __eq__ call should be cheap, be
On 12/30/2016 04:31 PM, [email protected] wrote:
On Fri, Dec 30, 2016 at 7:20 PM, Ethan Furman wrote:
If you are relying on an identity check for equality then no two
FrozenOrderedCollection instances can be equal. Was that your
intention? If it was, then just hash the instance's id() an
On Fri, Dec 30, 2016 at 8:04 PM, Ethan Furman wrote:
> No. It is possible to have two keys be equal but different -- an easy
> example is 1 and 1.0; they both hash the same, equal the same, but are not
> identical. dict has to check equality when two different objects hash the
> same but have n
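Ethan's 1 vs 1.0 example is easy to verify directly:

```python
# 1 and 1.0: equal and hash-equal, but not the same object -- so a
# dict must fall back on __eq__ to decide they are the same key.
a, b = 1, 1.0
assert a == b and hash(a) == hash(b)
assert a is not b

d = {1: "int"}
d[1.0] = "float"          # __eq__ decides this is the same key...
assert d == {1: "float"}  # ...so the value is replaced,
assert type(next(iter(d))) is int  # ...and the original key object is kept
```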
On Fri, Dec 30, 2016 at 8:10 PM, wrote:
> On Fri, Dec 30, 2016 at 8:04 PM, Ethan Furman wrote:
>
>> No. It is possible to have two keys be equal but different -- an easy
>> example is 1 and 1.0; they both hash the same, equal the same, but are not
>> identical. dict has to check equality when
On 12/30/2016 06:12 PM, [email protected] wrote:
... your point stands that Python had to call __eq__ in these cases.
But with instances of large, immutable, ordered collections, an
application could either:
1. accept that it might create a duplicate, equivalent instance of
an existing ins
On Fri, Dec 30, 2016 at 9:21 PM, Ethan Furman wrote:
> I don't think so. As someone else said, a hash can be calculated once and
> then cached, but __eq__ has to be called every time. Depending on the
> clustering of your data that could be quick... or not.
>
__eq__ only has to be called when
On 12/30/2016 06:47 PM, [email protected] wrote:
__eq__ only has to be called when a hash bucket is non-empty. In that case,
it may be O(n) in pathological cases, but it could also be O(1) every time.
On the other hand, __hash__ has to be called on every lookup, is O(n) on
the first call, a
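The compute-once-and-cache pattern under discussion can be sketched like this (the class and attribute names are illustrative, not from an actual library):

```python
class FrozenOrderedCollection:
    """Sketch: immutable ordered collection whose O(n) hash is
    computed lazily on first use and then reused."""

    def __init__(self, items):
        self._items = tuple(items)
        self._hash = None  # filled in by the first __hash__ call

    def __hash__(self):
        if self._hash is None:
            self._hash = hash(self._items)  # O(n), but only once
        return self._hash

    def __eq__(self, other):
        if not isinstance(other, FrozenOrderedCollection):
            return NotImplemented
        return self._items == other._items
```

Subsequent `__hash__` calls are O(1), which is what makes dict lookups on large immutable keys tolerable even though `__eq__` may still run on a bucket collision.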
On Fri, Dec 30, 2016 at 10:08 PM, Ethan Furman wrote:
> So maybe this will work?
>
> def __hash__(self):
> return hash(self.name) * hash(self.nick) * hash(self.color)
>
> In other words, don't create a new tuple, just use the ones you already
> have and toss in a couple maths operations.
On Fri, Dec 30, 2016 at 9:29 AM, wrote:
> Updating the docs sounds like the more important change for now, given 3.7+.
> But before the docs make an official recommendation for that recipe, were
> the analyses that Steve and I did sufficient to confirm that its hash
> distribution and performance
On Sat, Dec 31, 2016 at 2:24 PM, wrote:
> See the "Simply XORing such results together would not be order-sensitive,
> and so wouldn't work" from my original post. (Like XOR, multiplication is
> also commutative.)
>
> e.g. Since FrozenOrderedCollection([1, 2]) != FrozenOrderedCollection([2,
> 1])
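The commutativity objection can be demonstrated directly (product_hash is an illustrative stand-in for the multiplicative scheme, not code from the thread):

```python
def product_hash(items):
    # Commutative combination: the order of the members is lost.
    h = 1
    for item in items:
        h *= hash(item)
    return h

a, b = (1, 2), (2, 1)
assert product_hash(a) == product_hash(b)  # collides on reordering
assert hash(a) != hash(b)                  # tuple hashing is order-sensitive
```

The same collision happens with XOR, which is why an order-sensitive combiner such as the tuple hash is needed for ordered collections.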
On Fri, Dec 30, 2016 at 10:30 PM, Nathaniel Smith wrote:
> ...
> "Most hash schemes depend on having a "good" hash function, in the sense of
> simulating randomness. Python doesn't ..."
https://github.com/python/cpython/blob/d0a2f68a/Objects/dictobject.c#L133
...
Thanks for that link, fascinating.
On Fri, Dec 30, 2016 at 09:47:54PM -0500, [email protected] wrote:
> __eq__ only has to be called when a hash bucket is non-empty. In that case,
> it may be O(n) in pathological cases, but it could also be O(1) every time.
> On the other hand, __hash__ has to be called on every lookup, is O(n) o
On Fri, Dec 30, 2016 at 07:08:27PM -0800, Ethan Furman wrote:
> So maybe this will work?
>
> def __hash__(self):
> return hash(self.name) * hash(self.nick) * hash(self.color)
I don't like the multiplications. If any of the three hashes return
zero, the overall hash will be zero. I t
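Steven's concern is concrete: in CPython, hash(0) == 0, so a single zero-hash member collapses the whole product (product_hash is an illustrative stand-in for the multiplicative scheme):

```python
# Small ints hash to themselves, so hash(0) is 0.
assert hash(0) == 0

def product_hash(items):
    # The multiplicative scheme being discussed, for illustration.
    h = 1
    for item in items:
        h *= hash(item)
    return h

# Any collection containing a zero-hash member hashes to 0 ...
assert product_hash(("spam", 0, "eggs")) == 0
# ... so all such collections collide with each other.
assert product_hash(("ham", 0)) == 0
```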
On 31 December 2016 at 05:53, Ethan Furman wrote:
> On 12/30/2016 07:10 AM, Chris Angelico wrote:
>
> Actually, that makes a lot of sense. And since "property" isn't magic
>> syntax, you could take it sooner:
>>
>> from somewhere import property
>>
>> and toy with it that way.
>>
>> What module w
On 31 December 2016 at 08:24, Masayuki YAMAMOTO
wrote:
> I have read the discussion and I'm sure that use structure as Py_tss_t
> instead of platform-specific data type. Just as Steve said that Py_tss_t
> should be genuinely treated as an opaque type, the key state checking
> should provide macro