New submission from Jeroen Demeyer :
On Linux with an old kernel:
0:03:59 load avg: 5.97 [300/420/1] test_posix failed -- running: test_tools (1
min 11 sec), test_concurrent_futures (2 min 42 sec)
test test_posix failed -- Traceback (most recent call last):
File "/usr/local/src/sage-c
Jeroen Demeyer added the comment:
> Unpacking the int would mean having one sig_atomic_t for 'invalid', using
> that instead of INVALID_FD, plus an array of sig_atomic_t for the fd itself.
> Every time you want to change the fd you first set the 'invalid' flag, then
> the indi
Jeroen Demeyer added the comment:
> unpack the int into an array of sig_atomic_t.
What do you mean by this? You can't write a complete array atomically, so I
don't see how this would help.
--
___
Python tracker
<https://bugs.pyth
Jeroen Demeyer added the comment:
I'm not sure what you disagree with. At least, you have to admit that using
sig_atomic_t is buggy for different reasons than signal safety, namely that
there is no guarantee that one can safely convert back and forth to an "int".
Jeroen Demeyer added the comment:
> Back in 2007 the only POSIX-compliant type allowed for that was sig_atomic_t,
> anything else was undefined.
Fair enough, but having a non-atomic type is still much better than a
completely wrong type. In other words, the requirement of fitting
Jeroen Demeyer added the comment:
Just curious... how is PEP 384 relevant to modules inside CPython itself? I
thought that this only mattered for external packages. Do you expect people to
use a 3.7-compiled posixmodule.c on Python 3.8?
--
nosy: +jdemeyer
Jeroen Demeyer added the comment:
Why is this using type "sig_atomic_t" for a file descriptor instead of "int"
(which is the type of file descriptors)? See
https://github.com/python/cpython/pull/12670
--
nosy: +jdemeyer
Change by Jeroen Demeyer :
--
pull_requests: +12653
Python tracker <https://bugs.python.org/issue35983>
Change by Jeroen Demeyer :
--
keywords: +patch
pull_requests: +12648
stage: -> patch review
Python tracker <https://bugs.python.org/issue36556>
New submission from Jeroen Demeyer :
NOTE: because of PEP 442, this issue is specific to Python 2. This bug was
discovered while adding testcases for bpo-35983 to the Python 2.7 backport.
There is a nasty interaction between the trashcan and __del__: if you're very
close to the trashcan
Jeroen Demeyer added the comment:
In Python 3, the resurrection issue probably appears too. But it's not so much
a problem since __del__ (mapped to tp_finalize) is only called once anyway. So
there are no bad consequences if the object is resurrected incorrectly
Jeroen Demeyer added the comment:
I realized that there is a nasty interaction between the trashcan and __del__:
if you're very close to the trashcan limit and you're calling __del__, then
objects that should have been deallocated in __del__ (in particular, an object
involving self) might
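The mechanism under discussion can be exercised from pure Python: deallocating a deeply nested list recurses once per level in C, and the trashcan exists precisely to bound that recursion. A minimal sketch (the depth of 100,000 is an arbitrary illustrative value, not a figure from the thread):

```python
# Build a list nested 100,000 levels deep.  Freeing it naively would
# recurse once per level in list_dealloc(); the trashcan mechanism
# converts that recursion into iteration so the C stack cannot overflow.
L = []
for _ in range(100_000):
    L = [L]
del L  # completes without a stack overflow thanks to the trashcan
print("deep list deallocated")
```

The bug being reported here is about what happens when __del__ runs while the nesting counter is near the trashcan limit, which this sketch does not trigger by itself.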
Change by Jeroen Demeyer :
--
pull_requests: +12624
Python tracker <https://bugs.python.org/issue35983>
Jeroen Demeyer added the comment:
This might be solvable using PEP 580 by using METH_VARARGS instead of
METH_FASTCALL for such functions. This would still require a temporary tuple
for the positional args but no additional dict would need to be allocated.
--
nosy: +jdemeyer
Jeroen Demeyer added the comment:
This should be closed.
--
nosy: +jdemeyer
Python tracker <https://bugs.python.org/issue29209>
Jeroen Demeyer added the comment:
OK, makes sense. Also super() calls I guess: you can write
super().__getitem__(x) but not super()[x] (although the latter *could* be
implemented if we wanted to).
I see two ways of fixing this:
1. Make wrapper descriptors faster, removing the need
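The asymmetry mentioned above is easy to demonstrate (Base, Child and BadChild are hypothetical names for illustration):

```python
class Base:
    def __getitem__(self, key):
        return key * 2

class Child(Base):
    def __getitem__(self, key):
        # the explicit special-method lookup works fine
        return super().__getitem__(key) + 1

class BadChild(Base):
    def __getitem__(self, key):
        # but subscripting the super object itself is not supported
        return super()[key]

assert Child()[3] == 7
try:
    BadChild()[3]
except TypeError:
    print("super() objects are not subscriptable")
```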
Jeroen Demeyer added the comment:
> Amusingly, this is because of an old hack to make directly calling
> somedict.__getitem__ fast:
> https://github.com/python/cpython/commit/8f5cdaa784f555149adf5e94fd2e989f99d6b1db
But what's the use case of making somedict.__getitem__(x) fast? O
Change by Jeroen Demeyer :
--
keywords: +patch
pull_requests: +12613
stage: -> patch review
Python tracker <https://bugs.python.org/issue36525>
Change by Jeroen Demeyer :
--
title: Deprecate instance method -> Deprecate instancemethod
Python tracker <https://bugs.python.org/issue36525>
Jeroen Demeyer added the comment:
> I'm tempted to call YAGNI on this.
Indeed. See https://bugs.python.org/issue36525
--
nosy: +jdemeyer
New submission from Jeroen Demeyer :
The "instance method" class is not used anywhere and there are no obvious use
cases. We should just deprecate it to simplify Python.
See discussion at
https://mail.python.org/pipermail/python-dev/2019-April/156975.html
--
messages: 3
Change by Jeroen Demeyer :
--
pull_requests: +12612
Python tracker <https://bugs.python.org/issue36347>
Jeroen Demeyer added the comment:
See also PEP 590, which has very similar ideas. Also PEP 580 is related to this.
Jeroen Demeyer added the comment:
Is anybody willing to review PR 11636?
Python tracker <https://bugs.python.org/issue35707>
Jeroen Demeyer added the comment:
> I'd propose adding "%0D%0A%0D%0AIf you are developing on another platform,
> try make regen-all and commit the updated files"
I updated the PR with wording similar to that. I don't want to bikeshed too
much about the precise wording.
Jeroen Demeyer added the comment:
I created an additional PR 12607 with some more changes, in particular to make
the old backwards-compatibility trashcan macros safer. This should be
seen as part of the same bugfix but I decided to make a new PR because PR 11841
had several positive
Change by Jeroen Demeyer :
--
pull_requests: +12546
Python tracker <https://bugs.python.org/issue35983>
Jeroen Demeyer added the comment:
> It disables the trashcan mechanism
Yes, it disables the trashcan in some cases. But only when using the trashcan
mechanism would probably crash CPython anyway due to a double deallocation. So
at the very least, PR 11841 improves things from "
Jeroen Demeyer added the comment:
Changing types like that looks like an ugly hack and a recipe for breakage. For
example, in list_dealloc(), the following needs the type to be correct:
if (numfree < PyList_MAXFREELIST && PyList_CheckExact(op))
    free_list[numfree++] = op;
Jeroen Demeyer added the comment:
> As an aside, I thought we had a merge hook to check this on Travis?
For some reason, the Travis CI build on
https://github.com/python/cpython/pull/12582 isn't actually starting. It says
"Waiting for status to be reported" but I pushed
Jeroen Demeyer added the comment:
To clarify: the purpose of MyList is specifically to check that no double
deallocations occur. For this test to make sense, MyList should not use the
trashcan itself.
Jeroen Demeyer added the comment:
Yes of course. When not using the trashcan, stuff crashes. I don't get your
point...
Change by Jeroen Demeyer :
--
keywords: +patch
pull_requests: +12527
stage: -> patch review
Python tracker <https://bugs.python.org/issue36448>
New submission from Jeroen Demeyer :
On Windows builds, one may get the message
C:\projects\cpython\PCbuild\_freeze_importlib.vcxproj(130,5): error :
importlib.h, importlib_external.h, importlib_zipimport.h updated. You will need
to rebuild pythoncore to see the changes.
See for example
Jeroen Demeyer added the comment:
I am curious, how did you find out about this bug? Do you have a concrete use
case for directly calling an instance of classmethod_descriptor? Typically, one
would write dict.fromkeys(...) instead of dict.__dict__['fromkeys'](dict, ...).
--
nosy
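For context, the two spellings being compared look like this; the direct call goes through an instance of classmethod_descriptor and must pass the class explicitly:

```python
# the usual spelling
a = dict.fromkeys(["x", "y"])

# calling the raw descriptor from the type's __dict__ directly
desc = dict.__dict__["fromkeys"]
b = desc(dict, ["x", "y"])

assert type(desc).__name__ == "classmethod_descriptor"
assert a == b == {"x": None, "y": None}
```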
Jeroen Demeyer added the comment:
The consensus is clearly to return NotImplemented in this case, also because
that's what most builtins do, like the object() example that you mentioned.
However, I would rather keep that note and change it to say return
NotImplemented. It's an important
Jeroen Demeyer added the comment:
See also PEP 579 (issue 11) and the thread
https://mail.python.org/pipermail/python-ideas/2018-June/051572.html
--
nosy: +jdemeyer
Jeroen Demeyer added the comment:
> I tried using compiler.compiler.remove('-Wstrict-prototypes') to no avail.
The -Wstrict-prototypes issue is a separate bug. It is fixed in Python >= 3.6
and there is an open backport PR for 2.7:
https://github.com/python/cpython/pul
Jeroen Demeyer added the comment:
> Jeroen, could you share your example? I am learning the C-API of Python and
> this example could be interesting.
Either use the Cython code I posted above or run the testcase I added in PR
11841.
Jeroen Demeyer added the comment:
See also https://bugs.python.org/issue35983 for another trashcan-related issue.
--
nosy: +jdemeyer
Jeroen Demeyer added the comment:
NOTE: also OrderedDict currently uses trashcan hacking to work around this
problem:
/* Call the base tp_dealloc(). Since it too uses the trashcan mechanism,
* temporarily decrement trash_delete_nesting to prevent triggering it
* and putting
Change by Jeroen Demeyer :
--
keywords: +patch
pull_requests: +11871
stage: test needed -> patch review
Jeroen Demeyer added the comment:
The problem is easily reproduced with Cython:
cdef class List(list):
    cdef int deallocated
    def __dealloc__(self):
        if self.deallocated:
            print("Deallocated twice!")
        self.deallocated = 1
L = None
for i i
New submission from Jeroen Demeyer :
When designing an extension type subclassing an existing type, it makes sense
to call the tp_dealloc of the base class from the tp_dealloc of the subclass.
Now suppose that I'm subclassing "list" which uses the trashcan mechanism. Then
it
Jeroen Demeyer added the comment:
(note typo in the above: /tmp/prefix/pip should be /tmp/prefix/bin/pip)
Jeroen Demeyer added the comment:
> Could you still give it a quick check?
I did just that. For reference, these are the steps:
- Checkout the "2.7" branch of the cpython git repo
- ./configure --prefix=/tmp/prefix --exec-prefix=/tmp/eprefix && make && make
Jeroen Demeyer added the comment:
> Fixing this on 2.7 would require additional investigation (distutils might
> have diverged)
Let's be honest, we are talking about distutils here. So it's way more likely
that it didn't diverge and that the behavior is exactly the same on 2.7 and
3
Jeroen Demeyer added the comment:
> it seems that Jeroen's analysis is right.
So would you be willing to merge the PR then?
Jeroen Demeyer added the comment:
> You've got a reference leak in your __index__ based paths.
Thanks for pointing that out. I fixed that now.
Jeroen Demeyer added the comment:
> I'm also mildly concerned by how duplicative the code becomes post-patch.
I know, that's why I added that comment on GitHub.
> perhaps just implement _PyTime_ObjectToTime_t as a wrapper for
> _PyTime_ObjectToDenominator
Sure, but will that
Change by Jeroen Demeyer :
--
components: +Interpreter Core
Python tracker <https://bugs.python.org/issue35707>
Jeroen Demeyer added the comment:
There is again some discussion about this at
https://discuss.python.org/t/why-are-some-expressions-syntax-errors/420
Jeroen Demeyer added the comment:
> Test with os.posix_spawn() is fine:
Indeed, the difference between posix_spawn() and posix_spawnp() is that only
the latter uses $PATH to look up the executable.
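A quick way to check the non-$PATH variant is to spawn an executable by its full path; sys.executable is used here only as a convenient binary that is guaranteed to exist (a POSIX-only sketch, not a test from the thread):

```python
import os
import sys

# posix_spawn() needs a concrete path; posix_spawnp() would additionally
# search $PATH when the given name contains no slash.
pid = os.posix_spawn(sys.executable, [sys.executable, "-c", "pass"], os.environ)
exitcode = os.waitstatus_to_exitcode(os.waitpid(pid, 0)[1])
print(exitcode)  # 0
```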
Change by Jeroen Demeyer :
--
pull_requests: +11407, 11408, 11409
Python tracker <https://bugs.python.org/issue35707>
Change by Jeroen Demeyer :
--
pull_requests: +11407, 11408
Python tracker <https://bugs.python.org/issue35707>
Change by Jeroen Demeyer :
--
pull_requests: +11407
Python tracker <https://bugs.python.org/issue35707>
Jeroen Demeyer added the comment:
> If we want to support other numerical types with loss in double rounding
Looking at the existing code, I can already see several double-rounding "bugs"
in the code, so I wouldn't be too much c
Jeroen Demeyer added the comment:
In other words: if we can only use __float__ and __int__, how do we know which
one to use?
Jeroen Demeyer added the comment:
If __index__ doesn't "feel" right, what do you propose then to fix this issue,
keeping in mind the concerns of https://bugs.python.org/issue35707#msg333401
Jeroen Demeyer added the comment:
It's a relatively old Gentoo GNU/Linux system:
Linux tamiyo 3.17.7-gentoo #2 SMP PREEMPT Fri Dec 23 18:13:49 CET 2016 x86_64
Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz GenuineIntel GNU/Linux
The problem occurs when there are directories on $PATH which
New submission from Jeroen Demeyer :
This test was recently added (PR 6332):
def test_no_such_executable(self):
    no_such_executable = 'no_such_executable'
    try:
        pid = posix.posix_spawn(no_such_executable,
                                [no_such_executable
Jeroen Demeyer added the comment:
The motivation for PEP 357 was certainly using an object as the index for a
sequence, but that's not the only use case.
In fact PEP 357 states "For example, the slot can be used any time Python
requires an integer internally"
So despite the name
Jeroen Demeyer added the comment:
To avoid code duplication, it's tempting to merge _PyTime_FromObject and
_PyTime_ObjectToDenominator
These two functions almost do the same, but not quite.
Jeroen Demeyer added the comment:
I guess I should wait until PR 11507 is merged, to avoid merge conflicts.
Jeroen Demeyer added the comment:
My proposal vastly improves the situation for Decimal. I will write a PR for
this and I hope that it won't be rejected just because it's not perfect.
Jeroen Demeyer added the comment:
For reference, the sources for my implementation:
https://github.com/sagemath/cysignals/blob/master/src/cysignals/pysignals.pyx
Jeroen Demeyer added the comment:
> In Jeroen's API, I can see what the Python-level signal handler is, but
> there's no way to find out whether that signal handler is actually in use or
> not.
I added support for that in the latest cysignals release. Now you can do
>>
Jeroen Demeyer added the comment:
> The correct code works for float and int (and maybe decimal.Decimal, I don't
> recall!)
Not for Decimal! In fact sleep(Decimal("0.99")) is interpreted as sleep(0)
because __int__ is
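The misbehaviour follows directly from how the two dunders round: int() truncates toward zero while float() preserves the value:

```python
from decimal import Decimal

d = Decimal("0.99")
assert int(d) == 0       # __int__ truncates, so sleep() saw 0 seconds
assert float(d) == 0.99  # __float__ keeps the fractional part
```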
Jeroen Demeyer added the comment:
> the most reliable way is to represent them as fractions (x.as_integer_ratio()
> or (x.numerator, x.denominator))
I don't think that we can rely on non-dunder names like that. They are not
reserved names, so classes can give them any sem
Jeroen Demeyer added the comment:
> I'm not sure in which order the conversion should be tried to avoid/reduce
> precision loss during the conversion.
I would suggest the order
1. __index__ to ensure exact conversion of exact integers
2. __float__ to ensure correct conversion of fl
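The proposed order can be sketched as a small helper; object_to_ns and the nanosecond denominator are illustrative names for this sketch, not the actual _PyTime API:

```python
import operator
from decimal import Decimal

NS = 10**9  # nanoseconds per second

def object_to_ns(obj):
    # 1. try __index__ first: exact for true integers, no rounding
    try:
        return operator.index(obj) * NS
    except TypeError:
        pass
    # 2. fall back to __float__ (one double rounding, as discussed)
    return round(float(obj) * NS)

assert object_to_ns(2) == 2_000_000_000
assert object_to_ns(0.5) == 500_000_000
assert object_to_ns(Decimal("0.99")) == 990_000_000
```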
New submission from Jeroen Demeyer :
This used to work correctly in Python 2:
class Half(object):
    def __float__(self):
        return 0.5

import time
time.sleep(Half())
With Python 3.6, one gets instead
Traceback (most recent call last):
File "test.py", line 6, in
time.
Jeroen Demeyer added the comment:
Many thanks!
Python tracker <https://bugs.python.org/issue34751>
Jeroen Demeyer added the comment:
> Is it necessary to use METH_FASTCALL?
In Python 3, the bug only occurs with METH_FASTCALL. The issue is a reference
counting bug and the temporary tuple used for a METH_VARARGS method avoids the
Jeroen Demeyer added the comment:
> it will typically change only the last two bits of the final result
Which is great if all that you care about is avoiding collisions.
Jeroen Demeyer added the comment:
> Did you try with a minimal project containing a C extension?
> Did you install in a system where sys.prefix != sys.exec_prefix?
Yes to both questions.
Jeroen Demeyer added the comment:
Well, I did try it on a minimal Python project. I also read the distutils
sources and understood why it installs data_files in sys.prefix by default. So
what more do you need to be convinced?
Jeroen Demeyer added the comment:
There is also
commit fa2f4b6d8e297eda09d8ee52dc4a3600b7d458e7
Author: Greg Ward
Date: Sat Jun 24 17:22:39 2000 +
Changed the default installation directory for data files (used by
the "install_data" command to the installation b
Jeroen Demeyer added the comment:
Just for fun, let's look at the history. That piece of documentation goes back
to
commit 632bda3aa06879396561dde5ed3d93ee8fb8900c
Author: Fred Drake
Date: Fri Mar 8 22:02:06 2002 +
Add more explanation of how data_files is used (esp. where
Jeroen Demeyer added the comment:
> If you’re not sure about the reason for that sentence, I think you should not
> remove it from the docs
If the docs are wrong, their history doesn't matter that much: the docs should
be fixed regardless.
> test the conditions (package with
Jeroen Demeyer added the comment:
Can somebody please review PR 6448?
Python tracker <https://bugs.python.org/issue33261>
Change by Jeroen Demeyer :
Removed file: https://bugs.python.org/file40993/data_files_doc.patch
Python tracker <https://bugs.python.org/issue25592>
Change by Jeroen Demeyer :
--
pull_requests: +9154
Python tracker <https://bugs.python.org/issue25592>
Jeroen Demeyer added the comment:
I pushed an update at PR 9471. I think I took into account all your comments,
except for moving the length addition from the end to the beginning of the function.
Jeroen Demeyer added the comment:
> Changes initialization to add in the length:
What's the rationale for that change? You always asked me to stay as close as
possible to the "official" hash function which adds in the length at the end.
Is there an actual benef
Jeroen Demeyer added the comment:
I pushed a documentation-only patch on PR 9540 to better document the status quo.
Can somebody please review either PR 6653 or PR 9540?
Jeroen Demeyer added the comment:
I updated PR 9471 with a tuple hash function based on xxHash. The only change
w.r.t. the official xxHash specification is that I'm not using parallelism and
just using 1 accumulator. Please have a look
Jeroen Demeyer added the comment:
> Taking an algorithm in wide use that's already known to get a top score on
> SMHasher and fiddling it to make a "slight" improvement in one tiny Python
> test doesn't make sense to me.
OK, I won't do that. The difference is n
Jeroen Demeyer added the comment:
> In the 64-bit build there are no collisions across my tests except for 11 in
> the new tuple test.
That's pretty bad actually. With 64 bits, you statistically expect something in
the order of 10**-8 collisions. So what you're seeing is 9
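The estimate comes from the birthday bound: among n uniformly random 64-bit hashes, the expected number of colliding pairs is about n(n-1)/2^65. The n used below is an illustrative guess at the test's size, not a figure from the thread:

```python
def expected_collisions(n, bits=64):
    # birthday bound: n*(n-1)/2 pairs, each colliding with prob. 2**-bits
    return n * (n - 1) / 2 ** (bits + 1)

# with a few hundred thousand items you expect on the order of 1e-8
# collisions, so observing 11 real collisions is a huge anomaly
print(expected_collisions(600_000))
```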
Jeroen Demeyer added the comment:
> Taking an algorithm in wide use that's already known to get a top score on
> SMHasher and fiddling it to make a "slight" improvement in one tiny Python
> test doesn't make sense to me.
What I'm doing is the most innocent change: ju
Jeroen Demeyer added the comment:
> people already wrote substantial test suites dedicated to that sole purpose,
> and we should aim to be "mere consumers" of functions that pass _those_ tests.
There are hash functions that pass those tests which are still bad in practice
wh
Jeroen Demeyer added the comment:
> Note: I'm assuming that by "PRIME32_2" you mean 2246822519U
Yes indeed.
> and that "MULTIPLIER" means 2654435761U.
No, I mean a randomly chosen multiplier which is 3 mod 8.
Jeroen Demeyer added the comment:
> I've posted several SeaHash cores that suffer no collisions at all in any of
> our tests (including across every "bad example" in these 100+ messages),
> except for "the new" tuple test. Which it also passed, most
Jeroen Demeyer added the comment:
A (simplified and slightly modified version of) xxHash seems to work very well,
much better than SeaHash. Just like SeaHash, xxHash also works in parallel. But
I'm not doing that and just using this for the loop:
for y in t:
y ^= y * (PRIME32_2
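A single-accumulator xxHash-style loop of the kind described can be sketched in pure Python; the 64-bit primes are the published xxHash constants, but the exact round structure here is illustrative rather than a copy of the patch:

```python
# published xxHash 64-bit primes
PRIME64_1 = 11400714785074694791
PRIME64_2 = 14029467366897019727
PRIME64_5 = 2870177450012600261
MASK = 2**64 - 1  # keep everything in 64-bit arithmetic

def rotl(x, r):
    # 64-bit rotate left
    return ((x << r) | (x >> (64 - r))) & MASK

def tuple_hash_sketch(t):
    # one accumulator, one add-rotate-multiply round per item
    acc = PRIME64_5
    for item in t:
        acc = (acc + (hash(item) & MASK) * PRIME64_2) & MASK
        acc = (rotl(acc, 31) * PRIME64_1) & MASK
    return (acc + len(t)) & MASK  # mix in the length at the end

assert tuple_hash_sketch(()) == PRIME64_5
assert tuple_hash_sketch((1, 2)) != tuple_hash_sketch((2, 1))
```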
Jeroen Demeyer added the comment:
I'm having a look at xxHash, the second-fastest hash mentioned on
https://docs.rs/seahash/3.0.5/seahash/
Jeroen Demeyer added the comment:
> I know of no such hash functions short of crypto-strength ones.
Being crypto-strength and having few collisions statistically are different
properties.
For non-crypto hash functions it's typically very easy to generate collisions
once you k
Jeroen Demeyer added the comment:
> For that reason, I've only been looking at those that scored 10 (best
> possible) on Appleby's SMHasher[1] test suite
Do you have a list of such hash functions?
Jeroen Demeyer added the comment:
>>> from itertools import product
>>> len(set(map(hash, product([0.5, 0.25], repeat=20))))
32
Good catch! Would you like me to add this to the testsuite?
Jeroen Demeyer added the comment:
> For that reason, I've only been looking at those that scored 10 (best
> possible) on Appleby's SMHasher[1] test suite, which is used by everyone who
> does recognized work in this field.
So it seems that this SMHasher test suite doesn't catch th
Jeroen Demeyer added the comment:
> I've noted before, e.g., that sticking to a prime eliminates a world of
> regular bit patterns in the multiplier.
Why do you think this? 0x1fff is prime :-)
Having regular bit patterns and being prime are independent properties.
To be
Jeroen Demeyer added the comment:
> 100% pure SeaHash does x ^= t at the start first, instead of `t ^ (t << 1)`
> on the RHS.
Indeed. Some initial testing shows that this kind of "input mangling" (applying
such a permutation on the inputs) actually plays a much more im