[issue46896] add support for watching writes to selected dictionaries

2022-03-15 Thread Carl Meyer


Carl Meyer  added the comment:

Thanks for the extended example.

I think in order for this example to answer the question I asked, a few more 
assumptions should be made explicit:

1) `spam_var` and/or `eggs_var` are frequently re-bound to new values in 
a hot code path somewhere. (Given the observations above about module-level 
code, we should assume for a relevant example this takes place in a function 
that uses `global spam_var` or `global eggs_var` to allow such rebinding.)

2) But `spam_var` and `eggs_var` are not _read_ in any hot code path anywhere, 
because if they were, then the adaptive interpreter would be just as likely to 
decide to watch them as it is to watch `EGGS_CONST`, in which case any benefit 
of per-key watching in this example disappears. (Keep in mind that with 
possibly multiple watchers around, "unwatching" anything on the dispatch side 
is never an option, so we can't say that the adaptive interpreter would decide 
to unwatch the frequently-re-bound keys after it observes them being re-bound. 
It can always "unwatch" them in the sense of no longer being interested in 
them in its callback, though.)

It is certainly possible that this case could occur, where some module contains 
both a frequently-read-but-not-written global and also a global that is 
re-bound using `global` keyword in a hot path, but rarely read. But it doesn't 
seem warranted to pre-emptively add a lot of complexity to the API in order to 
marginally improve the performance of this quite specific case, unsupported by 
any benchmark or sample workload demonstrating it.

> This might not be necessary for us right now

I think it's worth keeping in mind that `PyDict_WatchKey` API can always be 
added later without disturbing or changing semantics of the `PyDict_Watch` API 
added here.

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46896] add support for watching writes to selected dictionaries

2022-03-15 Thread Carl Meyer


Carl Meyer  added the comment:

> There should not be much of a slowdown for this code when watching `CONST`:

How and when (and based on what data?) would the adaptive interpreter make the 
decision that for this code sample the key `CONST`, but not the key `var`, 
should be watched in the module globals dict? It's easy to contrive an example 
in which it's beneficial to watch one key but not another, but this is 
practically irrelevant unless it's also feasible for an optimizer to 
consistently make the right decision about which key(s) to watch.

The code sample also suggests that the module globals dict for a module is 
being watched while that module's own code object is being executed. In module 
body execution, writing to globals (vs reading them) is relatively much more 
common, compared to any other Python code execution context, and it's much less 
common for the same global to be read many times. Given this, how frequently 
would watching module globals dictionaries during module body execution be a 
net win at all? Certainly cases can be contrived in which it would be, but it 
seems unlikely that it would be a net win overall. And again, unless the 
optimizer can reliably (and in advance, since module bodies are executed only 
once) distinguish the cases where it's a win, it seems the example is not 
practically relevant.

> Another use of this is to add watch points in debuggers.
> To that end, it would better if the callback were a Python object.

It is easy to create a C callback that delegates to a Python callable if 
someone wants to implement this use case, so the vectorcall overhead is paid 
only when needed. The core API doesn't need to be made more complex for this, 
and there's no reason to impose any overhead at all on low-level 
interpreter-optimization use cases.

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___



[issue46896] add support for watching writes to selected dictionaries

2022-03-10 Thread Carl Meyer


Carl Meyer  added the comment:

I've updated the PR to split `PyDict_EVENT_MODIFIED` into separate 
`PyDict_EVENT_ADDED`, `PyDict_EVENT_MODIFIED`, and `PyDict_EVENT_DELETED` event 
types. This allows callbacks only interested in e.g. added keys (case #2) to 
more easily and cheaply skip uninteresting events.

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___



[issue46896] add support for watching writes to selected dictionaries

2022-03-10 Thread Carl Meyer


Carl Meyer  added the comment:

Thanks for outlining the use cases. They make sense.

The current PR provides a flexible generic API that fully supports all three of 
those use cases (use cases 2 and 3 are strict subsets of use case 1.) Since the 
callback is called before the dict is modified, all the necessary information 
is available to the callback to decide whether the event is interesting to it 
or not.

The question is how much of the bookkeeping to classify events as "interesting" 
or "uninteresting" should be embedded in the core dispatch vs being handled by 
the callback.

One reason to prefer keeping this logic in the callback is that with 
potentially multiple chained callbacks in play, the filtering logic must always 
exist in the callback, regardless. E.g. if callback A wants to watch only 
keys-version changes to dict X, but callback B wants to watch all changes to 
it, events will fire for all changes, and callback A must still disregard 
"uninteresting" events that it may receive (just like it may receive events for 
dicts it never asked to watch at all.) So providing API for different "levels" 
of watching means that the "is this event interesting to me" predicate must 
effectively be duplicated both in the callback and in the watch level chosen.

The proposed rationale for this complexity and duplication is the idea that 
filtering out uninteresting events at dispatch will provide better performance. 
But this is hypothetical: it assumes the existence of perf-bottleneck code 
paths that repeatedly rebind globals. The only benchmark workload with this 
characteristic that I know of is pystone, and it is not even part of the 
pyperformance suite, I think precisely because it is not representative of 
real-world code patterns. And even assuming that we do need to optimize for 
such code, it's also not obvious that it will be noticeably cheaper in practice 
to filter on the dispatch side.

It may be more useful to focus on API. If we get the API right, internal 
implementation details can always be adjusted in future if a different 
implementation can be shown to be noticeably faster for relevant use cases. And 
if we get the existing API right, we can always add new API if we have to. I don't 
think anything about the proposed simple API precludes adding 
`PyDict_WatchKeys` as an additional feature, if it turns out to be necessary.

One modification to the simple proposed API that should improve the performance 
(and ease of implementation) of use case #2 would be to split the current 
`PyDict_EVENT_MODIFIED` into two separate event types: `PyDict_EVENT_MODIFIED` 
and `PyDict_EVENT_NEW_KEY`. Then the callback-side event filtering for use case 
#2 would just be `event == PyDict_EVENT_NEW_KEY` instead of requiring a lookup 
into the dict to see whether the key was previously set or not.
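
A Python-level sketch of that filter (the constants here just mirror the 
proposed C enum, and the dispatch side is modelled by hand): with the split 
event types, the use case #2 check becomes a single integer comparison rather 
than a dict lookup.

```python
# Hypothetical Python mirrors of the proposed event types.
PyDict_EVENT_MODIFIED, PyDict_EVENT_NEW_KEY = range(2)

new_keys = []

def callback(event, d, key, new_value):
    # Use case #2 cares only about added keys: with split event types this
    # is one comparison, with no lookup into the dict required.
    if event != PyDict_EVENT_NEW_KEY:
        return
    new_keys.append(key)

# The dispatcher (modelled by hand here) classifies the event for us:
d = {"present": 1}
callback(PyDict_EVENT_MODIFIED, d, "present", 2)   # ignored by the filter
callback(PyDict_EVENT_NEW_KEY, d, "fresh", 3)      # recorded
```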

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___



[issue46896] add support for watching writes to selected dictionaries

2022-03-09 Thread Carl Meyer


Carl Meyer  added the comment:

> have you considered watching dict keys rather than whole dicts?

Just realized that I misunderstood this suggestion; you don't mean per-key 
watching necessarily, you just mean _not_ notifying on dict values changes. Now 
I understand better how that connects to the second part of your comment! But 
yeah, I don't want this limitation on dict watching use cases.

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___



[issue46896] add support for watching writes to selected dictionaries

2022-03-09 Thread Carl Meyer


Carl Meyer  added the comment:

Hi Dennis, thanks for the questions!

> A curiosity: have you considered watching dict keys rather than whole dicts?

There's a bit of discussion of this above. A core requirement is to avoid any 
memory overhead and minimize CPU overhead on unwatched dicts. Additional memory 
overhead seems like a nonstarter, given the sheer number of dict objects that 
can exist in a large Python system. The CPU overhead for unwatched dicts in the 
current PR consists of a single added `testb` and `jne` (for checking if the 
dict is watched), in the write path only; I think that's effectively the 
minimum possible.

It's not clear to me how to implement per-key watching under this constraint. 
One option Brandt mentioned above is to steal the low bit of a `PyObject` 
pointer; in theory we could do this on `me_key` to implement per-key watching 
with no memory overhead. But then we are adding bit-masking overhead on every 
dict read and write. I think we really want the implementation here to be 
zero-overhead in the dict read path.

Open to suggestions if I've missed a good option here!

> That way, changing global values would not have to de-optimize, only adding 
> new global keys would.

> Indexing into dict values array wouldn't be as efficient as embedding direct 
> jump targets in JIT-generated machine code, but as long as we're not doing 
> that, maybe watching the keys is a happy medium?

But we are doing that, in the Cinder JIT. Dict watching here is intentionally 
exposed for use by extensions, including, hopefully, the Cinder JIT itself as 
an installable extension in the future. We burn exact pointer values for module 
globals into generated JIT code and deopt if they change (we are close to 
landing a change to code-patch instead of deopting). This is quite a bit more 
efficient in the hot path than having to go through a layer of indirection.

I don't want to assume too much about how dict watching will be used in future, 
or go for an implementation that limits its future usefulness. The current PR 
is quite flexible and can be used to implement a variety of caching strategies. 
The main downside of dict-level watching is that a lot of notifications will be 
fired if code does a lot of globals-rebinding in modules where globals are 
watched, but this doesn't appear to be a problem in practice, either in our 
workloads or in pyperformance. It seems likely that a workable strategy if this 
ever was observed to be a problem would be to notice at runtime that globals 
are being re-bound frequently in a particular module and just stop watching 
that module's globals.

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___



[issue46896] add support for watching writes to selected dictionaries

2022-03-09 Thread Carl Meyer


Carl Meyer  added the comment:

Draft PR is up for consideration. Perf data in 
https://gist.github.com/carljm/987a7032ed851a5fe145524128bdb67a

Overall it seems like the base implementation is perf neutral -- maybe a slight 
impact on the pickle benchmarks? With all module global dicts (uselessly) 
watched, there are a few more benchmarks with small regressions, but also some 
with small improvements (just noise I guess?) -- overall still pretty close to 
neutral.

Comments welcome!

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___



[issue46896] add support for watching writes to selected dictionaries

2022-03-09 Thread Carl Meyer


Change by Carl Meyer :


--
keywords: +patch
pull_requests: +29891
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/31787

___
Python tracker 
<https://bugs.python.org/issue46896>
___



[issue46896] add support for watching writes to selected dictionaries

2022-03-04 Thread Carl Meyer

Carl Meyer  added the comment:

Thanks for the feedback!

> Why so coarse?

Simplicity of implementation is a strong advantage, all else equal :) And the 
coarse version is a) at least somewhat proven as useful and usable already by 
Cinder / Cinder JIT, and b) clearly doable without introducing memory or 
noticeable CPU overhead to unwatched dicts. Do you have thoughts about how 
you'd do a more granular version without overhead?

> Getting a notification for every change of a global in a module is likely to 
> make the use of global variables extremely expensive.

It's possible. We haven't ever observed this as an issue in practice, but we 
may have just not observed enough workloads with heavy writes to globals. I'd 
like to verify this problem with a real representative benchmark before making 
design decisions based on it, though. Calling a callback that is uninterested 
in a particular key doesn't need to be super-expensive if the callback is 
reasonably written, and this expense would occur only on the write path, for 
cases where the `global` keyword is used to rebind a global. I don't think it's 
common for idiomatic Python code to write to globals in perf-sensitive paths. 
Let's see how this shows up in pyperformance, if we try running it with all 
module globals dicts watched.

> For example, we could just tag the low bit of any pointer in a dictionary’s 
> values that we want to be notified of changes to

Would you want to tag the value, or the key? If value, does that mean if the 
value is changed it would revert to unwatched unless you explicitly watched the 
new value?

I'm a bit concerned about the performance overhead this would create for use of 
dicts outside the write path, e.g. the need to mask off the watch bit of 
returned value pointers on lookup.

> What happens if a watched dictionary is modified in a callback?

It may be best to document that this isn't supported; it shouldn't be necessary 
or advisable for the intended uses of dict watching. That said, I think it 
should work fine if the callback can handle re-entrancy and doesn't create 
infinite recursion. Otherwise, I think it's a case of "you broke it, you get to 
keep all the pieces."

> How do you plan to implement this? Steal a bit from `ma_version_tag`

We currently steal the low bit from the version tag in Cinder; my plan was to 
keep that approach.

> You'd probably need a PEP to replace PEP 509, but I think this may need a PEP 
> anyway.

I'd prefer to avoid coupling this to removal of the version tag. Then we get 
into issues of backward compatibility that this proposal otherwise avoids.

I don't think the current proposal is of a scope or level of user impact that 
should require a PEP, but I'm happy to write one if needed.

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___



[issue46896] add support for watching writes to selected dictionaries

2022-03-03 Thread Carl Meyer


Carl Meyer  added the comment:

> Could we (or others) end up with unguarded stale caches if some buggy 
> extension forgets to chain the calls correctly?

Yes. I can really go either way on this. I initially opted for simplicity in 
the core support at the cost of asking a bit more of clients, on the theory 
that a) there are lots of ways for a buggy C extension to cause crashes with 
bad use of the C API, and b) I don't expect there to be very many extensions 
using this API. But it's also true that the consequences of a mistake here 
could be hard to debug (and easily blamed to the wrong place), and there might 
turn out to be more clients for dict-watching than I expect! If the consensus 
is to prefer CPython tracking an array of callbacks instead, we can try that.

> when you say "only one global callback": does that mean per-interpreter, or 
> per-process?

Good question! The currently proposed API suggests per-process, but it's not a 
question I've given a lot of thought to yet; open to suggestions. It seems like 
in general the preference is to avoid global state and instead tie things to an 
interpreter instance? I'll need to do a bit of research to understand exactly 
how that would affect the implementation. Doesn't seem like it should be a 
problem, though it might make the lookup at write time to see if we have a 
callback a bit slower.

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___



[issue46896] add support for watching writes to selected dictionaries

2022-03-03 Thread Carl Meyer


Carl Meyer  added the comment:

Thanks gps! Working on a PR and will collect pyperformance data as well.

We haven't observed any issues in Cinder with the callback still being called 
at shutdown either, but if that turns out to cause problems it should be 
possible to have CPython clear the callback at shutdown time.

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___



[issue46896] add support for watching writes to selected dictionaries

2022-03-01 Thread Carl Meyer


Change by Carl Meyer :


--
title: add support for watching writes to selecting dictionaries -> add support 
for watching writes to selected dictionaries

___
Python tracker 
<https://bugs.python.org/issue46896>
___



[issue46896] add support for watching writes to selecting dictionaries

2022-03-01 Thread Carl Meyer

New submission from Carl Meyer :

CPython extensions providing optimized execution of Python bytecode (e.g. the 
Cinder JIT), or even CPython itself (e.g. the faster-cpython project) may wish 
to inline-cache access to frequently-read and rarely-changed namespaces, e.g. 
module globals. Rather than requiring a dict version guard on every cached 
read, the best-performing way to do this is to mark the dictionary as 
“watched” and set a callback on writes to watched dictionaries. This optimizes 
the cached-read fast-path at a small cost to the (relatively infrequent and 
usually less perf sensitive) write path.

We have an implementation of this in Cinder ( 
https://docs.google.com/document/d/1l8I-FDE1xrIShm9eSNJqsGmY_VanMDX5-aK_gujhYBI/edit#heading=h.n2fcxgq6ypwl
 ), used already by the Cinder JIT and its specializing interpreter. We would 
like to make the Cinder JIT available as a third-party extension to CPython ( 
https://docs.google.com/document/d/1l8I-FDE1xrIShm9eSNJqsGmY_VanMDX5-aK_gujhYBI/
 ), and so we are interested in adding dict watchers to core CPython.

The intention in this issue is not to add any specific optimization or cache 
(yet); just the ability to mark a dictionary as “watched” and set a write 
callback.

The callback will be global, not per-dictionary (no extra function pointer 
stored in every dict). CPython will track only one global callback; it is a 
well-behaved client’s responsibility to check if a callback is already set when 
setting a new one, and daisy-chain to the previous callback if so. Given that 
multiple clients may mark dictionaries as watched, a dict watcher callback may 
receive events for dictionaries that were marked as watched by other clients, 
and should handle this gracefully.
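
The daisy-chaining contract can be sketched in Python (the real API is the 
C-level `PyDict_SetWatchCallback`; this model only illustrates the 
well-behaved-client protocol, with a hand-written dispatcher standing in for 
the interpreter):

```python
# Model of a single global callback slot plus two chaining clients.
_callback = None

def set_watch_callback(cb):
    """Install cb, returning the previous callback so the caller can chain."""
    global _callback
    prev, _callback = _callback, cb
    return prev

def dispatch(event, d, key, new_value):
    # Stands in for the interpreter's write-path dispatch.
    if _callback is not None:
        _callback(event, d, key, new_value)

seen = []

def client_a(event, d, key, new_value):
    seen.append("A")

prev_for_a = set_watch_callback(client_a)   # no previous callback here

def client_b(event, d, key, new_value):
    seen.append("B")
    if prev_for_b is not None:              # well-behaved: chain onward
        prev_for_b(event, d, key, new_value)

prev_for_b = set_watch_callback(client_b)   # prev_for_b is client_a

dispatch("MODIFIED", {}, "k", 1)            # B runs, then chains to A
```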

There is no provision in the API for “un-watching” a watched dictionary; such 
an API could not be used safely in the face of potentially multiple 
dict-watching clients.

The Cinder implementation marks dictionaries as watched using the least bit of 
the dictionary version (so version increments by 2); this also avoids any 
additional memory usage for marking a dict as watched.
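
A sketch of that version-tag trick, with plain integers standing in for the C 
`ma_version_tag` field:

```python
# The least bit marks "watched"; real version changes step by 2, so marking
# a dict as watched costs no extra memory.
def mark_watched(version):
    return version | 1

def is_watched(version):
    return bool(version & 1)

def next_version(version):
    return version + 2   # preserves the watched bit across modifications

v = 4                # unwatched dict at version 4
w = mark_watched(v)  # -> 5, now watched
w = next_version(w)  # -> 7, still watched after a modification
```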

Initial proposed API, comments welcome:

// Mark the given dictionary as "watched" (the global callback will be called
// if it is modified).
void PyDict_Watch(PyObject* dict);

// Check whether the given dictionary is already watched.
int PyDict_IsWatched(PyObject* dict);

typedef enum {
  PYDICT_EVENT_CLEARED,
  PYDICT_EVENT_DEALLOCED,
  PYDICT_EVENT_MODIFIED
} PyDict_WatchEvent;

// Callback to be invoked when a watched dict is cleared, dealloced, or
// modified. In the clear/dealloc cases, key and new_value will be NULL.
// Otherwise, new_value will be the new value for key, or NULL if key is
// being deleted.
typedef void (*PyDict_WatchCallback)(PyDict_WatchEvent event, PyObject* dict,
                                     PyObject* key, PyObject* new_value);

// Set a new global watch callback; supply NULL to clear the callback.
void PyDict_SetWatchCallback(PyDict_WatchCallback callback);

// Get the existing global watch callback.
PyDict_WatchCallback PyDict_GetWatchCallback(void);

The callback will be called immediately before the modification to the dict 
takes effect, thus the callback will also have access to the prior state of the 
dict.
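
As an illustration only (real dicts cannot be watched from Python, and the 
proposed API stores no per-dict callback), a `dict` subclass models the 
intended timing: the callback observes the dict before the write lands.

```python
class WatchedDict(dict):
    """Python-level model of the callback timing, not the real mechanism."""

    def __init__(self, callback):
        super().__init__()
        self._callback = callback

    def __setitem__(self, key, new_value):
        # Fired *before* the mutation: the callback can still read the
        # prior value of key (or see that it is absent).
        self._callback("PYDICT_EVENT_MODIFIED", self, key, new_value)
        super().__setitem__(key, new_value)

events = []
wd = WatchedDict(lambda ev, d, k, v: events.append((k, d.get(k), v)))
wd["x"] = 1   # callback sees prior value None
wd["x"] = 2   # callback sees prior value 1
```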

--
components: C API
messages: 414307
nosy: carljm, dino.viehland, itamaro
priority: normal
severity: normal
status: open
title: add support for watching writes to selecting dictionaries
versions: Python 3.11

___
Python tracker 
<https://bugs.python.org/issue46896>
___



[issue46558] Quadratic time internal base conversions

2022-01-30 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

Somebody pointed me to V8's implementation of str(bigint) today:

https://github.com/v8/v8/blob/main/src/bigint/tostring.cc

They say that they can compute str(factorial(1_000_000)) (which is 5.5 million 
decimal digits) in 1.5s:

https://twitter.com/JakobKummerow/status/1487872478076620800

As far as I understand the code (I suck at C++) they recursively split the 
bigint into halves using % 10^n at each recursion step, but pre-compute and 
cache the divisors' inverses.
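
The same divide-and-conquer idea can be sketched in Python. This only caches 
the power-of-ten divisors (it does not replicate V8's precomputed inverses), 
and the split-point estimate and base-case threshold are arbitrary choices:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def _pow10(k):
    # Cached divisors; V8 additionally caches their precomputed inverses.
    return 10 ** k

def to_decimal(n):
    if n < 0:
        return "-" + to_decimal(-n)
    if n < 10 ** 18:
        return str(n)   # small enough that plain conversion is fine
    # Split roughly in half by decimal length, estimated from the bit
    # length (digits ~= bit_length * log10(2) ~= bit_length * 0.301).
    m = (n.bit_length() * 3) // 20
    hi, lo = divmod(n, _pow10(m))
    # The low half must be zero-padded back to exactly m digits.
    return to_decimal(hi) + to_decimal(lo).zfill(m)
```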

--
nosy: +Carl.Friedrich.Bolz

___
Python tracker 
<https://bugs.python.org/issue46558>
___



[issue46555] Unicode-mangled names refer inconsistently to constants

2022-01-30 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

Ok, I can definitely agree with Serhiy's point of view: "True" is a keyword 
that always evaluates to the object that you get when you call bool(1). There 
is usually no name "True", and directly assigning to it is forbidden. But there 
are various other ways to bind the name "True". One is e.g. 
globals()["True"] = 5; another one (discussed in this issue) is using 
identifiers that NFKC-normalize to the string "True".

--

___
Python tracker 
<https://bugs.python.org/issue46555>
___



[issue46555] Unicode-mangled names refer inconsistently to constants

2022-01-29 Thread Carl Friedrich Bolz-Tereick

Carl Friedrich Bolz-Tereick  added the comment:

hah, this is "great":

>>> 𝕋𝕣𝕦𝕖 = 1
>>> globals()
{'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': 
<class '_frozen_importlib.BuiltinImporter'>, '__spec__': None, 
'__annotations__': {}, '__builtins__': <module 'builtins' (built-in)>, 
'True': 1}

The problem is that the lexer assumes that anything that is not ASCII cannot be 
a keyword and lexes 𝕋𝕣𝕦𝕖 as an identifier.
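
Identifiers are NFKC-normalized (as documented in the language reference), 
which is how the double-struck letters end up binding the name "True". A quick 
check, spelling the identifier with escape sequences:

```python
import unicodedata

# U+1D54B etc.: MATHEMATICAL DOUBLE-STRUCK letters spelling "True"
fancy = "\U0001d54b\U0001d563\U0001d566\U0001d556"

# NFKC normalization, which CPython applies to identifiers, folds these
# compatibility characters down to plain ASCII.
normalized = unicodedata.normalize("NFKC", fancy)
print(normalized)   # -> True (the four-character ASCII string)
```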

--
nosy: +Carl.Friedrich.Bolz

___
Python tracker 
<https://bugs.python.org/issue46555>
___



[issue46201] PEP 495 misnames PyDateTime_DATE_GET_FOLD

2021-12-30 Thread Carl Drougge


New submission from Carl Drougge :

PEP 495 names one of the accessor macros PyDateTime_GET_FOLD but the code names 
it PyDateTime_DATE_GET_FOLD.

The FOLD macros are also missing from 
https://docs.python.org/3/c-api/datetime.html (and versions).

--
assignee: docs@python
components: Documentation
messages: 409354
nosy: docs@python, drougge
priority: normal
severity: normal
status: open
title: PEP 495 misnames PyDateTime_DATE_GET_FOLD
type: behavior
versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 
3.9

___
Python tracker 
<https://bugs.python.org/issue46201>
___



[issue38085] Interrupting class creation in __init_subclass__ may lead to incorrect isinstance() and issubclass() results

2021-12-23 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

Or, in other words, in my opinion this is the root cause of the bug:

class Base:
    def __init_subclass__(cls):
        global broken_class
        broken_class = cls
        assert 0

try:
    class Broken(Base): pass
except: pass

assert broken_class not in Base.__subclasses__()

The assert fails, which imo it shouldn't.

--

___
Python tracker 
<https://bugs.python.org/issue38085>
___



[issue38085] Interrupting class creation in __init_subclass__ may lead to incorrect isinstance() and issubclass() results

2021-12-23 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

hm, I think I figured it out. The root cause is that even though the creation 
of the class Triffid fails, it can still be found via Animal.__subclasses__(), 
which the special subclass logic for ABCs is looking at. Triffid fills its 
_abc_impl data with some content, but since Triffid._abc_impl was never 
successfully initialized, it actually mutates the _abc_impl of its first base 
class, Animal.

My conclusion would be that if a class is not successfully created, it 
shouldn't appear in the .__subclasses__() list of its bases.

See attached script for some illuminating prints.

--
nosy: +Carl.Friedrich.Bolz
Added file: https://bugs.python.org/file50515/x.py

___
Python tracker 
<https://bugs.python.org/issue38085>
___



[issue46042] Error range of "duplicate argument" SyntaxErrors is too big

2021-12-11 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

Oh, don't worry, it's all good! It got fixed and I learned something.

--

___
Python tracker 
<https://bugs.python.org/issue46042>
___



[issue46042] Error range of "duplicate argument" SyntaxErrors is too big

2021-12-11 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

ah, confused, seems you fixed them both too. will take a closer look!

--

___
Python tracker 
<https://bugs.python.org/issue46042>
___



[issue46042] Error range of "duplicate argument" SyntaxErrors is too big

2021-12-11 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

Oh no, I was about to open mine ;-)

https://github.com/python/cpython/compare/main...cfbolz:bpo-46042-syntax-error-range-duplicate-argument?expand=1

Basically equivalent, but I fixed the second bug too (would be very easy to add 
to yours)

--

___
Python tracker 
<https://bugs.python.org/issue46042>
___



[issue46042] Error range of "duplicate argument" SyntaxErrors is too big

2021-12-11 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

let's see whether I promised too much, I don't know CPython's symtable.c too 
well yet ;-). Will shout for help when I get stuck.

Anyway, here is a related bug, coming from the same symtable function 
symtable_add_def_helper, also with an imprecise error location:

$ cat x.py 
{i for i in range(5)
if (j := 0)
for j in range(5)}

$ ./python x.py 
  File "/home/cfbolz/projects/cpython/x.py", line 1
{i for i in range(5)

SyntaxError: comprehension inner loop cannot rebind assignment expression 
target 'j'

--

___
Python tracker 
<https://bugs.python.org/issue46042>
___



[issue46042] Error range of "duplicate argument" SyntaxErrors is too big

2021-12-10 Thread Carl Friedrich Bolz-Tereick


New submission from Carl Friedrich Bolz-Tereick :

The error range for the "duplicate argument in function definition" SyntaxError 
is too large:

$ cat x.py 
def f(a, b, c, d, e, f, g, a): pass
$ python x.py
  File "/home/cfbolz/projects/cpython/x.py", line 1
def f(a, b, c, d, e, f, g, a): pass
^^^
SyntaxError: duplicate argument 'a' in function definition

I would expect only the second 'a' to be underlined.

I can try to fix this.

--
messages: 408248
nosy: Carl.Friedrich.Bolz, pablogsal
priority: normal
severity: normal
status: open
title: Error range of "duplicate argument" SyntaxErrors is too big

___
Python tracker 
<https://bugs.python.org/issue46042>
___



[issue37971] Wrong trace with multiple decorators (linenumber wrong in frame)

2021-12-10 Thread Carl Friedrich Bolz-Tereick


Change by Carl Friedrich Bolz-Tereick :


--
pull_requests: +28252
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/30027

___
Python tracker 
<https://bugs.python.org/issue37971>
___



[issue37971] Wrong trace with multiple decorators (linenumber wrong in frame)

2021-12-10 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

I ran into this problem in PyPy today, preparing a patch for CPython too 
(without looking at the old one).

--
nosy: +Carl.Friedrich.Bolz

___
Python tracker 
<https://bugs.python.org/issue37971>
___



[issue45859] test_collections has a wrong test in case _itemgetter is not available

2021-11-21 Thread Carl Friedrich Bolz-Tereick


Change by Carl Friedrich Bolz-Tereick :


--
keywords: +patch
pull_requests: +27929
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/29691

___
Python tracker 
<https://bugs.python.org/issue45859>



[issue45859] test_collections has a wrong test in case _itemgetter is not available

2021-11-21 Thread Carl Friedrich Bolz-Tereick


New submission from Carl Friedrich Bolz-Tereick :

test_field_descriptor in test_collections tries to pickle the descriptors of a 
namedtuple's fields, which is _collections._itemgetter on CPython. However, on 
PyPy that class doesn't exist. The code in collections deals fine with that 
fact, but the above-mentioned test does not make sense in that situation, since 
you can't pickle properties.

To test this behaviour, you can replace
"from _collections import _tuplegetter"
in collections/__init__.py with raise ImportError and see the test fail on 
CPython too.
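A minimal sketch of why the test cannot pass in that configuration, assuming the pure-Python fallback exposes plain `property` descriptors as described above:

```python
import pickle

# Stand-in for a namedtuple field descriptor in the pure-Python
# fallback: a plain property. Unlike _collections._tuplegetter,
# property instances cannot be pickled.
class C:
    prop = property(lambda self: 1)

try:
    pickle.dumps(C.prop)
    picklable = True
except TypeError:
    picklable = False

print(picklable)  # False
```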

--
messages: 406738
nosy: Carl.Friedrich.Bolz
priority: normal
severity: normal
status: open
title: test_collections has a wrong test in case _itemgetter is not available
versions: Python 3.11

___
Python tracker 
<https://bugs.python.org/issue45859>



[issue45781] Deleting __debug__ should be a SyntaxError

2021-11-13 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

ouch, apologies for not checking that!

--

___
Python tracker 
<https://bugs.python.org/issue45781>



[issue45781] Deleting __debug__ should be a SyntaxError

2021-11-11 Thread Carl Friedrich Bolz-Tereick


New submission from Carl Friedrich Bolz-Tereick :

Right now, deleting __debug__ is not prevented:

>>> def f():
... del __debug__
... 

Of course actually executing it doesn't work:

>>> del __debug__
Traceback (most recent call last):
  File "", line 1, in 
NameError: name '__debug__' is not defined


Compare with assigning to __debug__:

>>> def f():
... __debug__ = 1
... 
  File "", line 2
SyntaxError: cannot assign to __debug__
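Both forms can be probed through compile(); note that a later CPython may well reject the deletion too, so only the assignment case is shown with its expected message:

```python
# Compile both function bodies and record whether the compiler
# accepts them or raises, without executing anything.
results = {}
for label, src in [("del", "def f():\n    del __debug__\n"),
                   ("assign", "def f():\n    __debug__ = 1\n")]:
    try:
        compile(src, "<test>", "exec")
        results[label] = "accepted"
    except SyntaxError as e:
        results[label] = e.msg

print(results["assign"])  # cannot assign to __debug__
```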

--
messages: 406149
nosy: Carl.Friedrich.Bolz
priority: normal
severity: normal
status: open
title: Deleting __debug__ should be a SyntaxError

___
Python tracker 
<https://bugs.python.org/issue45781>



[issue45764] Parse error improvement forgetting ( after def

2021-11-09 Thread Carl Friedrich Bolz-Tereick


Change by Carl Friedrich Bolz-Tereick :


--
keywords: +patch
pull_requests: +27735
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/29484

___
Python tracker 
<https://bugs.python.org/issue45764>



[issue45764] Parse error improvement forgetting ( after def

2021-11-09 Thread Carl Friedrich Bolz-Tereick


New submission from Carl Friedrich Bolz-Tereick :

A mistake I see beginners make occasionally when defining functions without 
arguments is this:

def f:
...

Right now it just gives "invalid syntax"; it would be nice to get an "expected 
'('" instead.

I will try to give this a go! Should be a matter of making the '(' token an 
expected one.

--
messages: 406010
nosy: Carl.Friedrich.Bolz
priority: normal
severity: normal
status: open
title: Parse error improvement forgetting ( after def

___
Python tracker 
<https://bugs.python.org/issue45764>



[issue45727] Parse error when missing commas is inconsistent

2021-11-05 Thread Carl Friedrich Bolz-Tereick


New submission from Carl Friedrich Bolz-Tereick :

I found the following inconsistency in the error message when there's a missing 
comma (it behaves this way both on main and on 3.10).

Here's what happens with numbers, as expected:

Python 3.11.0a1+ (heads/main:32f55d1a5d, Nov  5 2021, 13:18:52) [GCC 11.2.0] on 
linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 1 2 3 4
  File "", line 1
1 2 3 4
^^^
SyntaxError: invalid syntax. Perhaps you forgot a comma?

But with names the error is further to the right in the line:

>>> a b c d
  File "", line 1
a b c d
  ^^^
SyntaxError: invalid syntax. Perhaps you forgot a comma?
>>> a b c d e f g
  File "", line 1
a b c d e f g
  ^^^
SyntaxError: invalid syntax. Perhaps you forgot a comma?

That looks potentially quite confusing to me?

(I don't know if these nit-picky parsing issues are too annoying; if they are, 
please tell me to stop filing them.)

--
messages: 405792
nosy: Carl.Friedrich.Bolz, pablogsal
priority: normal
severity: normal
status: open
title: Parse error when missing commas is inconsistent
versions: Python 3.11

___
Python tracker 
<https://bugs.python.org/issue45727>



[issue45716] Confusing parsing error message when trying to use True as keyword argument

2021-11-04 Thread Carl Friedrich Bolz-Tereick


New submission from Carl Friedrich Bolz-Tereick :

A bit of a nitpick, but the following SyntaxError message is confusing:

>>> f(True=1)
  File "", line 1
f(True=1)
  ^
SyntaxError: expression cannot contain assignment, perhaps you meant "=="?

The problem with that line is not that it contains an assignment, it's almost a 
valid keyword argument after all. The problem is that the name of the keyword 
is True, which is no longer a name you can assign to. It would be better to 
produce the same error as with __debug__:

>>> f(__debug__=1)
  File "", line 1
SyntaxError: cannot assign to __debug__

The latter error message is however produced by the compiler, not the parser I 
think?

--
messages: 405741
nosy: Carl.Friedrich.Bolz
priority: normal
severity: normal
status: open
title: Confusing parsing error message when trying to use True as keyword 
argument

___
Python tracker 
<https://bugs.python.org/issue45716>



[issue45624] test_graphlib.py depends on iteration order of sets

2021-10-27 Thread Carl Friedrich Bolz-Tereick


Change by Carl Friedrich Bolz-Tereick :


--
keywords: +patch
pull_requests: +27496
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/29233

___
Python tracker 
<https://bugs.python.org/issue45624>



[issue45624] test_graphlib.py depends on iteration order of sets

2021-10-27 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

here's the traceback running on pypy3.9-alpha:

==
FAIL: test_simple_cases (test.test_graphlib.TestTopologicalSort)
--
Traceback (most recent call last):
  File 
"/home/cfbolz/projects/small-commits-pypy/lib-python/3/test/test_graphlib.py", 
line 39, in test_simple_cases
[(3, 5, 7), (11, 8), (2, 10, 9)],
  File 
"/home/cfbolz/projects/small-commits-pypy/lib-python/3/test/test_graphlib.py", 
line 19, in _test_graph
self.assertEqual(list(static_order_with_groups(ts)), list(expected))
AssertionError: Lists differ: [(3, 7, 5), (8, 11), (2, 9, 10)] != [(3, 5, 7), 
(11, 8), (2, 10, 9)]

First differing element 0:
(3, 7, 5)
(3, 5, 7)

- [(3, 7, 5), (8, 11), (2, 9, 10)]
+ [(3, 5, 7), (11, 8), (2, 10, 9)]

--

___
Python tracker 
<https://bugs.python.org/issue45624>



[issue45624] test_graphlib.py depends on iteration order of sets

2021-10-27 Thread Carl Friedrich Bolz-Tereick


New submission from Carl Friedrich Bolz-Tereick :

test_graphlib fails on PyPy because it depends on the iteration order of sets. 
Will open a PR soon.
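A sketch of the order-insensitive comparison such a fix needs: collect each level of the topological sort as a set, since graphlib makes no ordering promise *within* a group (graph and expected groups borrowed from the failing test):

```python
from graphlib import TopologicalSorter

# Mapping of node -> predecessors, as TopologicalSorter expects.
ts = TopologicalSorter({2: {11}, 9: {11, 8}, 10: {11, 3},
                        11: {7, 5}, 8: {7, 3}})
ts.prepare()

groups = []
while ts.is_active():
    ready = ts.get_ready()
    groups.append(set(ready))  # compare as a set, not a tuple
    ts.done(*ready)

# Three groups: {3, 5, 7}, {8, 11}, {2, 9, 10} -- inner order may vary.
print(groups)
```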

--
messages: 405084
nosy: Carl.Friedrich.Bolz
priority: normal
severity: normal
status: open
title: test_graphlib.py depends on iteration order of sets

___
Python tracker 
<https://bugs.python.org/issue45624>



[issue30570] issubclass segfaults on objects with weird __getattr__

2021-10-18 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

PyPy raises a RecursionError here, which sounds like an ok outcome. So simply 
checking for the recursion would also be a way of fixing this...

--
nosy: +Carl.Friedrich.Bolz

___
Python tracker 
<https://bugs.python.org/issue30570>



[issue45417] Enum creation non-linear in the number of values

2021-10-12 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

I fixed the reliance on sets being insertion-ordered in PyPy and opened a pull 
request.

--

___
Python tracker 
<https://bugs.python.org/issue45417>



[issue45417] Enum creation non-linear in the number of values

2021-10-12 Thread Carl Friedrich Bolz-Tereick


Change by Carl Friedrich Bolz-Tereick :


--
nosy: +Carl.Friedrich.Bolz
nosy_count: 6.0 -> 7.0
pull_requests: +27198
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/28907

___
Python tracker 
<https://bugs.python.org/issue45417>



[issue45384] Accept Final as indicating ClassVar for dataclass

2021-10-10 Thread Carl Meyer

Carl Meyer  added the comment:

Good idea to check with the PEP authors. 

I don’t think allowing both ClassVar and Final in dataclasses requires general 
intersection types. Neither ClassVar nor Final are real types; they aren’t part 
of the type of the value.  They are more like special annotations on a name, 
which are wrapped around a type as syntactic convenience. You’re right that it 
would require more than just amendment to the PEP text, though; it might 
require changes to type checkers, and it would also require changes to the 
runtime behavior of the `typing` module to special-case allowing 
`ClassVar[Final[…]]`. And the downside of this change is that it couldn’t be 
context sensitive to only be allowed in dataclasses. But I think this isn’t a 
big problem; type checkers could still error on that wrapping in non dataclass 
contexts if they want to. 

But even if that change can’t be made, I think backwards compatibility still 
precludes changing the interpretation of `x: Final[int] = 3` on a dataclass, 
and it is more valuable to be able to specify Final instance attributes 
(fields) than final class attributes on dataclasses.

--

___
Python tracker 
<https://bugs.python.org/issue45384>



[issue45384] Accept Final as indicating ClassVar for dataclass

2021-10-10 Thread Carl Meyer


Carl Meyer  added the comment:

> Are Final default_factory fields real fields or pseudo-fields? (i.e. are they 
> returned by dataclasses.fields()?)

They are real fields, returned by `dataclasses.fields()`.

In my opinion, the behavior change proposed in this bug is a bad idea all 
around, and should not be made, and the inconsistency with PEP 591 should 
rather be resolved by explicitly specifying the interaction with dataclasses in 
a modification to the PEP.

Currently the meaning of:

```
@dataclass
class C:
x: Final[int] = 3
```

is well-defined, intuitive, and implemented consistently both in the runtime 
and in type checkers. It specifies a dataclass field of type `int`, with a 
default value of `3` for new instances, which can be overridden with an init 
arg, but cannot be modified (per type checker; runtime doesn't enforce Final) 
after the instance is initialized.
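That current runtime behavior can be checked directly (a sketch assuming a CPython where dataclasses special-cases only ClassVar and InitVar, not Final):

```python
from dataclasses import dataclass, fields
from typing import Final

@dataclass
class C:
    x: Final[int] = 3

# Final[int] still produces a real field with a default, overridable
# through __init__; the runtime does not enforce the immutability.
print([f.name for f in fields(C)])  # ['x']
print(C().x, C(x=5).x)              # 3 5
```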

Changing the meaning of the above code to be "a dataclass with no fields, but 
one final class attribute of value 3" is a backwards-incompatible change to a 
less useful and less intuitive behavior.

I argue the current behavior is intuitive because in general the type 
annotation on a dataclass attribute applies to the eventual instance attribute, 
not to the immediate RHS -- this is made very clear by the fact that 
typecheckers happily accept `x: int = dataclasses.field(...)` which in a 
non-dataclass context would be a type error. Therefore the Final should 
similarly be taken to apply to the eventual instance attribute, not to the 
immediate assignment. And therefore it should not (in the case of dataclasses) 
imply ClassVar.

I realize that this means that if we want to allow final class attributes on 
dataclasses, it would require wrapping an explicit ClassVar around Final, which 
violates the current text of PEP 591. I would suggest this is simply because 
that PEP did not consider the specific case of dataclasses, and the PEP should 
be amended to carve out dataclasses specifically.

--
nosy: +carljm

___
Python tracker 
<https://bugs.python.org/issue45384>



[issue34561] Replace list sorting merge_collapse()?

2021-09-06 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

Thanks for your work Tim, just adapted the changes to PyPy's Timsort, using 
bits of runstack.py!

--
nosy: +Carl.Friedrich.Bolz

___
Python tracker 
<https://bugs.python.org/issue34561>



[issue25130] Make tests more PyPy compatible

2021-08-28 Thread Carl Friedrich Bolz-Tereick


Change by Carl Friedrich Bolz-Tereick :


--
nosy: +Carl.Friedrich.Bolz
nosy_count: 6.0 -> 7.0
pull_requests: +26460
pull_request: https://github.com/python/cpython/pull/28002

___
Python tracker 
<https://bugs.python.org/issue25130>



[issue17088] ElementTree incorrectly refuses to write attributes without namespaces when default_namespace is used

2021-06-20 Thread Carl Schaefer


Change by Carl Schaefer :


--
nosy: +carlschaefer

___
Python tracker 
<https://bugs.python.org/issue17088>



[issue43907] pickle.py bytearray memoization bug with protocol 5

2021-04-21 Thread Carl Friedrich Bolz-Tereick


Change by Carl Friedrich Bolz-Tereick :


--
keywords: +patch
pull_requests: +24221
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/25501

___
Python tracker 
<https://bugs.python.org/issue43907>



[issue43907] pickle.py bytearray memoization bug with protocol 5

2021-04-21 Thread Carl Friedrich Bolz-Tereick


New submission from Carl Friedrich Bolz-Tereick :

The new codepath for the BYTEARRAY8 bytecode is missing memoization:

>>> import pickletools, pickle
>>> b = (bytearray(b"abc"), ) * 2
>>> b1, b2 = pickle.loads(pickle.dumps(b, 5)) # C version
>>> b1 is b2
True
(bytearray(b'abc'), bytearray(b'abc'))
>>> b1, b2 = pickle.loads(pickle._dumps(b, 5)) # python version
>>> b1 is b2 # :-(
False

Found it because PyPy is using pickle.py with no C implementation. I'm 
preparing a patch.
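The invariant at stake, shown against the default (C) implementation; the patch makes the pure-Python pickler preserve it too:

```python
import pickle

# Round-trip a tuple holding the same bytearray twice; with correct
# BYTEARRAY8 memoization both slots must come back as one object.
ba = bytearray(b"abc")
b1, b2 = pickle.loads(pickle.dumps((ba, ba), protocol=5))

print(b1 is b2)  # True
```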

--
messages: 391537
nosy: Carl.Friedrich.Bolz
priority: normal
severity: normal
status: open
title: pickle.py bytearray memoization bug with protocol 5

___
Python tracker 
<https://bugs.python.org/issue43907>



[issue43833] Unexpected Parsing of Numeric Literals Concatenated with Boolean Operators

2021-04-14 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

@shreyanavigyan This is a bit off-topic, but it's called "short-circuiting", 
described here: 
https://docs.python.org/3/library/stdtypes.html#boolean-operations-and-or-not
(or/and aren't really "operators" like +/- etc.; they cannot be overridden, 
they evaluate their operands lazily, and are therefore almost control flow)
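A tiny illustration of that lazy evaluation (`boom` is just a throwaway helper):

```python
def boom():
    raise RuntimeError("never reached")

# The right-hand side is only evaluated when the result still depends
# on it, so boom() is never called here:
print(False and boom())  # False
print(True or boom())    # True
```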

--

___
Python tracker 
<https://bugs.python.org/issue43833>



[issue43833] Unexpected Parsing of Numeric Literals Concatenated with Boolean Operators

2021-04-13 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

It's not just about keywords. E.g. '1x' tokenizes too but then produces a syntax 
error in the parser. Keywords are only special in that they can be used to 
write syntactically meaningful things with these concatenated numbers.

--
nosy: +Carl.Friedrich.Bolz

___
Python tracker 
<https://bugs.python.org/issue43833>



[issue3451] Asymptotically faster divmod and str(long)

2021-04-13 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

yes, that sounds fair. In PyPy we improve things occasionally if somebody feels 
like working on it, but in general competing against GMP is a fool's errand.

--

___
Python tracker 
<https://bugs.python.org/issue3451>



[issue3451] Asymptotically faster divmod and str(long)

2021-04-13 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

FWIW, we have implemented a faster algorithm for divmod for big numbers using 
Mark's fast_div.py in PyPy. In particular, this speeds up str(long) for large 
numbers significantly (eg calling str on the result of math.factorial(2**17) is 
now 15x faster than CPython ;-)).
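For illustration, the divide-and-conquer shape of such a conversion can be sketched in a few lines (`to_dec` is a toy name; the real speedup additionally requires a subquadratic divmod like the one in fast_div.py, which this sketch does not include):

```python
def to_dec(n, width):
    """Render 0 <= n < 10**width as exactly `width` decimal digits."""
    if width <= 9:
        return str(n).zfill(width)
    half = width // 2
    # Split n at a power of ten near the middle and recurse on the
    # halves, instead of peeling off one digit at a time.
    hi, lo = divmod(n, 10 ** half)
    return to_dec(hi, width - half) + to_dec(lo, half)

n = 7 ** 1000
assert to_dec(n, len(str(n))) == str(n)
```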

--
nosy: +Carl.Friedrich.Bolz

___
Python tracker 
<https://bugs.python.org/issue3451>



[issue43753] [C API] Add Py_Is(x, y) and Py_IsNone(x) functions

2021-04-07 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

Just chiming in to say that for PyPy this API would be extremely useful, 
because PyPy's "is" is not implementable with a pointer comparison on the C 
level (due to unboxing we need to compare integers, floats, etc by value). 
Right now, C extension code that compares pointers is subtly broken and cannot 
be fixed by us.

--
nosy: +Carl.Friedrich.Bolz

___
Python tracker 
<https://bugs.python.org/issue43753>



[issue43520] Make Fraction(string) handle non-ascii slashes

2021-03-23 Thread Carl Anderson

Carl Anderson  added the comment:

>The proposal I like is for a unicode numeric normalization functions that 
>return the ascii equivalent to exist.

@Gregory P. Smith 
this makes sense to me. That does feel like the cleanest solution. 
I'm currently doing s = s.replace("⁄", "/"), but a well-maintained 
normalization method that contains all the relevant mappings, used as an 
independent preprocessing step before Fraction, would work well.

--

___
Python tracker 
<https://bugs.python.org/issue43520>



[issue43520] Make Fraction(string) handle non-ascii slashes

2021-03-22 Thread Carl Anderson


Carl Anderson  added the comment:

>Carl: can you say more about the problem that motivated this issue?

@mark.dickinson

I was parsing a large corpus of ingredient strings from web-scraped recipes. 
My code to interpret strings such as "1/2 cup sugar" would fall over every so 
often due to this issue, as they used the fraction slash and other visually 
similar characters.

--

___
Python tracker 
<https://bugs.python.org/issue43520>



[issue43564] ftp tests in test_urllib2net fail instead of skipping on unreachable network

2021-03-19 Thread Carl Meyer


Carl Meyer  added the comment:

Created a PR that fixes this by being more consistent in how urllib wraps 
network errors. If there are backward-compatibility concerns with this change, 
another option could be some really ugly regex-matching code in 
`test.support.transient_internet`.

--

___
Python tracker 
<https://bugs.python.org/issue43564>



[issue43564] ftp tests in test_urllib2net fail instead of skipping on unreachable network

2021-03-19 Thread Carl Meyer


Change by Carl Meyer :


--
keywords: +patch
pull_requests: +23699
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/24938

___
Python tracker 
<https://bugs.python.org/issue43564>



[issue43564] some tests in test_urllib2net fail instead of skipping on unreachable network

2021-03-19 Thread Carl Meyer


New submission from Carl Meyer :

In general it seems the CPython test suite takes care to skip instead of 
failing networked tests when the network is unavailable (c.f. 
`support.transient_internet` test helper).

In the case of the 5 FTP tests in `test_urllib2net` (that is, `test_ftp`, 
`test_ftp_basic`, `test_ftp_default_timeout`, `test_ftp_no_timeout`, and 
`test_ftp_timeout`), even though they use `support_transient_internet`, they 
still fail if the network is unavailable.

The reason is that they make calls which end up raising an exception in the 
form `URLError("ftp error: OSError(101, 'Network is unreachable')"` -- the 
original OSError is flattened into the exception string message, but is 
otherwise not in the exception args. This means that `transient_network` does 
not detect it as a suppressable exception.

It seems like many uses of `URLError` in urllib pass the original `OSError` 
directly to `URLError.__init__()`, which means it ends up in `args` and the 
unwrapping code in `transient_internet` is able to find the original `OSError`. 
But the ftp code instead directly interpolates the `OSError` into a new message 
string.
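The difference can be demonstrated directly (a sketch; `good`/`bad` mimic the two wrapping styles described above):

```python
from urllib.error import URLError

oserr = OSError(101, "Network is unreachable")

# How most of urllib wraps errors: the OSError survives in args,
# so a test helper can unwrap and recognize it.
good = URLError(oserr)
print(isinstance(good.args[0], OSError))  # True

# How the ftp path wraps it: only a flattened string remains.
bad = URLError("ftp error: %r" % oserr)
print(isinstance(bad.args[0], OSError))  # False
```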

--
components: Tests
messages: 389115
nosy: carljm
priority: normal
severity: normal
status: open
title: some tests in test_urllib2net fail instead of skipping on unreachable 
network
type: behavior
versions: Python 3.10

___
Python tracker 
<https://bugs.python.org/issue43564>



[issue43564] ftp tests in test_urllib2net fail instead of skipping on unreachable network

2021-03-19 Thread Carl Meyer


Change by Carl Meyer :


--
title: some tests in test_urllib2net fail instead of skipping on unreachable 
network -> ftp tests in test_urllib2net fail instead of skipping on unreachable 
network

___
Python tracker 
<https://bugs.python.org/issue43564>



[issue43562] test_ssl.NetworkedTests.test_timeout_connect_ex fails if network is unreachable

2021-03-19 Thread Carl Meyer


Change by Carl Meyer :


--
keywords: +patch
pull_requests: +23697
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/24937

___
Python tracker 
<https://bugs.python.org/issue43562>



[issue43562] test_ssl.NetworkedTests.test_timeout_connect_ex fails if network is unreachable

2021-03-19 Thread Carl Meyer


New submission from Carl Meyer :

In general it seems the CPython test suite takes care to not fail if the 
network is unreachable, but `test_timeout_connect_ex` fails because the result 
code of the connection is checked without any exception being raised that would 
reach `support.transient_internet`.
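A sketch of the connect_ex() behavior involved, using an address from the reserved TEST-NET-1 block so no real host is contacted:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(1)
# connect_ex() folds connection failures into an integer errno return
# value, so support.transient_internet never sees an exception to
# suppress -- the test just observes an unexpected result code.
rc = s.connect_ex(("192.0.2.1", 80))  # 192.0.2.0/24 is reserved
s.close()
print(type(rc))  # <class 'int'>
```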

--
components: Tests
messages: 389113
nosy: carljm
priority: normal
severity: normal
status: open
title: test_ssl.NetworkedTests.test_timeout_connect_ex fails if network is 
unreachable
type: behavior
versions: Python 3.10, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue43562>



[issue43520] Fraction only handles regular slashes ("/") and fails with other similar slashes

2021-03-16 Thread Carl Anderson

Carl Anderson  added the comment:

I guess if we are doing slashes, then the division sign ÷ (U+00F7) should be 
included too. 

There are at least 2 minus signs too (U+002D, U+02D7).

--

___
Python tracker 
<https://bugs.python.org/issue43520>



[issue43520] Fraction only handles regular slashes ("/") and fails with other similar slashes

2021-03-16 Thread Carl Anderson

Carl Anderson  added the comment:

from https://en.wikipedia.org/wiki/Slash_(punctuation) there is

U+002F / SOLIDUS
U+2044 ⁄ FRACTION SLASH
U+2215 ∕ DIVISION SLASH
U+29F8 ⧸ BIG SOLIDUS
U+FF0F / FULLWIDTH SOLIDUS (fullwidth version of solidus)
U+1F67C  VERY HEAVY SOLIDUS

In XML and HTML, the slash can also be represented with the character entities 
&sol; or &#47; or &#x2F;.[42]

there are a couple more listed here:

https://unicode-search.net/unicode-namesearch.pl?term=SLASH

--

___
Python tracker 
<https://bugs.python.org/issue43520>



[issue43520] Fraction only handles regular slashes ("/") and fails with other similar slashes

2021-03-16 Thread Carl Anderson

New submission from Carl Anderson :

Fraction works with a regular slash:

>>> from fractions import Fraction
>>> Fraction("1/2")
Fraction(1, 2)

but there are other similar slashes such as (0x2044) in which it throws an 
error:

>>> Fraction("0⁄2")
Traceback (most recent call last):
  File "", line 1, in 
  File "/opt/anaconda3/lib/python3.7/fractions.py", line 138, in __new__
numerator)
ValueError: Invalid literal for Fraction: '0⁄2'


This seems to come from the (?:/(?P<denom>\d+))? section of the regex 
_RATIONAL_FORMAT in fractions.py

--
components: Library (Lib)
messages: 388865
nosy: weightwatchers-carlanderson
priority: normal
severity: normal
status: open
title: Fraction only handles regular slashes ("/") and fails with other similar 
slashes
type: enhancement
versions: Python 3.7

___
Python tracker 
<https://bugs.python.org/issue43520>



[issue41972] bytes.find consistently hangs in a particular scenario

2021-02-26 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

> BTW, this initialization in the FASTSEARCH code appears to me to be a  
> mistake:

>skip = mlast - 1;

Thanks for pointing that out Tim! Turns out PyPy had copied that mindlessly and 
I just fixed it.

(I'm also generally following along with this issue, I plan to implement the 
two-way algorithm for PyPy as well, once you all have decided on a heuristic. 
We are occasionally in a slightly easier situation, because for constant-enough 
needles we can have the JIT do the pre-work on the  needle during code 
generation.)

--
nosy: +Carl.Friedrich.Bolz

___
Python tracker 
<https://bugs.python.org/issue41972>



[issue43154] code.InteractiveConsole can crash if sys.excepthook is broken

2021-02-07 Thread Carl Friedrich Bolz-Tereick


New submission from Carl Friedrich Bolz-Tereick :

When using code.InteractiveConsole to implement a Python shell (like PyPy is 
doing), having a broken sys.excepthook set can crash the console (see attached 
terminal log). Instead, it should catch such errors, report them, and then 
ignore them (using sys.unraisablehook, I would think, but right now it's not 
that simple to call unraisablehook from Python code).

Related to https://bugs.python.org/issue43148
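A sketch reproducing the setup (on versions with the fix the console survives and reports the hook failure instead; `broken_hook` is just an illustrative stand-in):

```python
import code
import sys

def broken_hook(*exc_info):
    raise RuntimeError("excepthook itself is broken")

sys.excepthook = broken_hook
console = code.InteractiveConsole()
try:
    # A ZeroDivisionError triggers showtraceback, which defers to the
    # user-installed (and here deliberately broken) sys.excepthook.
    console.push("1/0")
    outcome = "console survived"
except Exception as exc:
    outcome = "console crashed: " + type(exc).__name__
finally:
    sys.excepthook = sys.__excepthook__

print(outcome)
```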

--
files: crash.log
messages: 386593
nosy: Carl.Friedrich.Bolz
priority: normal
severity: normal
status: open
title: code.InteractiveConsole can crash if sys.excepthook is broken
Added file: https://bugs.python.org/file49797/crash.log

___
Python tracker 
<https://bugs.python.org/issue43154>



[issue43148] Call sys.unraisablehook in the REPL when sys.excepthook is broken

2021-02-06 Thread Carl Friedrich Bolz-Tereick


Change by Carl Friedrich Bolz-Tereick :


--
nosy: +Carl.Friedrich.Bolz

___
Python tracker 
<https://bugs.python.org/issue43148>



[issue38197] Meaning of tracebacklimit differs between sys.tracebacklimit and traceback module

2020-11-04 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

It's still inconsistent between the two ways to get a traceback, and the 
inconsistency is not documented.

--

___
Python tracker 
<https://bugs.python.org/issue38197>



[issue25292] ssl socket gets into broken state when client exits during handshake

2020-10-04 Thread Carl Bordum Hansen


Carl Bordum Hansen  added the comment:

I have submitted a proposed solution

Just found this similar issue https://bugs.python.org/issue35840

--

___
Python tracker 
<https://bugs.python.org/issue25292>



[issue25292] ssl socket gets into broken state when client exits during handshake

2020-10-04 Thread Carl Bordum Hansen


Change by Carl Bordum Hansen :


--
keywords: +patch
nosy: +carlbordum
nosy_count: 5.0 -> 6.0
pull_requests: +21544
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/22541

___
Python tracker 
<https://bugs.python.org/issue25292>
___



[issue29394] Cannot tunnel TLS connection through TLS connection

2020-10-04 Thread Carl Bordum Hansen


Change by Carl Bordum Hansen :


--
keywords: +patch
nosy: +carlbordum
nosy_count: 4.0 -> 5.0
pull_requests: +21542
stage: needs patch -> patch review
pull_request: https://github.com/python/cpython/pull/22539

___
Python tracker 
<https://bugs.python.org/issue29394>
___



[issue30578] Misleading example in sys.set_coroutine_wrapper docs

2020-10-04 Thread Carl Bordum Hansen


Carl Bordum Hansen  added the comment:

I think this can be closed as `sys.set_coroutine_wrapper` was removed in 
https://bugs.python.org/issue36933

--
nosy: +carlbordum

___
Python tracker 
<https://bugs.python.org/issue30578>
___



[issue29127] Incorrect reference names in asyncio.subprocess documentation

2020-10-01 Thread Carl Bordum Hansen


Carl Bordum Hansen  added the comment:

I do not think this is the case any longer.

--
nosy: +carlbordum

___
Python tracker 
<https://bugs.python.org/issue29127>
___



[issue29893] create_subprocess_exec doc doesn't match software

2020-10-01 Thread Carl Bordum Hansen


Carl Bordum Hansen  added the comment:

This was fixed in https://github.com/python/cpython/pull/12598

--
nosy: +carlbordum

___
Python tracker 
<https://bugs.python.org/issue29893>
___



[issue41567] multiprocessing.Pool from concurrent threads failure on 3.9.0rc1

2020-08-17 Thread Carl Drougge


New submission from Carl Drougge :

If several threads try to start a multiprocessing.Pool at the same time when no 
pool has been started before this often fails with an exception like this (the 
exact import varies):

Exception in thread Thread-2:
Traceback (most recent call last):
  File "/tmp/py3.9.0rc1/lib/python3.9/threading.py", line 950, in 
_bootstrap_inner
self.run()
  File "/tmp/py3.9.0rc1/lib/python3.9/threading.py", line 888, in run
self._target(*self._args, **self._kwargs)
  File "/tmp/py3.9.0rc1/lib/python3.9/multiprocessing/context.py", line 118, in 
Pool
from .pool import Pool
ImportError: cannot import name 'Pool' from partially initialized module 
'multiprocessing.pool' (most likely due to a circular import) 
(/tmp/py3.9.0rc1/lib/python3.9/multiprocessing/pool.py)

This happens even if Pool was imported before starting the threads and is new 
in 3.9. It's easy to work around by starting a pool in the main thread before 
starting the other threads.

I have attached a minimal example that triggers it. Tested on Debian stable and 
FreeBSD 11.3.

--
components: Library (Lib)
files: pool_error_on_3.9.py
messages: 375542
nosy: drougge
priority: normal
severity: normal
status: open
title: multiprocessing.Pool from concurrent threads failure on 3.9.0rc1
type: behavior
versions: Python 3.9
Added file: https://bugs.python.org/file49401/pool_error_on_3.9.py

___
Python tracker 
<https://bugs.python.org/issue41567>
___



[issue40360] Deprecate lib2to3 (and 2to3) for future removal

2020-04-30 Thread Carl Meyer


Carl Meyer  added the comment:

> Coul you please add a what's new entry for this change?

The committed change already included an entry in NEWS. Is a "What's New" entry 
something different?

> I don't understand why there is a PendingDeprecationWarning and not a 
> DeprecationWarning.

Purely because I was following gps' recommendation in the first comment on this 
issue. Getting rid of PendingDeprecationWarning seems like an orthogonal 
decision; if it happens, this can trivially be upgraded to DeprecationWarning 
as part of a removal sweep.
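For context, the practical difference between the two warning classes is only in the default filters: PendingDeprecationWarning is ignored by default, while DeprecationWarning is shown in `__main__` since 3.7. A small sketch (the relevant filters are re-created by hand, since `catch_warnings(record=True)` replaces them with "always"):

```python
import warnings


def caught_categories():
    with warnings.catch_warnings(record=True) as caught:
        # Reproduce the relevant default filters by hand.
        warnings.simplefilter("ignore", PendingDeprecationWarning)
        warnings.simplefilter("always", DeprecationWarning)
        warnings.warn("going away eventually", PendingDeprecationWarning)
        warnings.warn("going away soon", DeprecationWarning)
    return [w.category for w in caught]


# Only the DeprecationWarning is actually surfaced.
print(caught_categories())
```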

--

___
Python tracker 
<https://bugs.python.org/issue40360>
___



[issue40360] Deprecate lib2to3 (and 2to3) for future removal

2020-04-29 Thread Carl Meyer


Carl Meyer  added the comment:

Right, although I think it still makes sense to link both LibCST and parso 
since they provide different levels of abstraction that would be suitable for 
different types of tools (e.g. I would rather write an auto-formatter on top of 
parso, because LibCST's careful parsing and assignment of whitespace would 
mostly just get in the way, but I'd rather write any kind of refactoring 
tooling on top of LibCST.)

Another tool that escaped my mind when writing the PR that should probably be 
linked also is Baron/RedBaron (https://github.com/PyCQA/redbaron); 457 stars 
makes it slightly more popular than LibCST (but it's also been around a lot 
longer.)

--

___
Python tracker 
<https://bugs.python.org/issue40360>
___



[issue40360] Deprecate lib2to3 (and 2to3) for future removal

2020-04-24 Thread Carl Meyer


Carl Meyer  added the comment:

@gregory.p.smith

What do you think about the question I raised above about how to make this 
deprecation visible to users of the 2to3 CLI tool, assuming the plan is to 
remove both?

--

___
Python tracker 
<https://bugs.python.org/issue40360>
___



[issue40360] Deprecate lib2to3 (and 2to3) for future removal

2020-04-22 Thread Carl Meyer


Carl Meyer  added the comment:

I opened a PR. It deprecates the lib2to3 library to discourage future use of it 
for Python3, but not the 2to3 tool. This of course means that the lib2to3 
module will in practice stick around in the stdlib as long as 2to3 is still 
bundled with Python.

It seems like the idea in this issue is to deprecate and remove both. I'm not 
sure what we typically do to deprecate a command-line utility bundled with 
Python. Given warnings are silent by default, the deprecation warning for 
lib2to3 won't be visible to users of 2to3. Should I add something to its 
`--help` output? Or something more aggressive; an unconditionally-printed 
warning?

--

___
Python tracker 
<https://bugs.python.org/issue40360>
___



[issue40360] Deprecate lib2to3 (and 2to3) for future removal

2020-04-22 Thread Carl Meyer


Change by Carl Meyer :


--
pull_requests: +18987
pull_request: https://github.com/python/cpython/pull/19663

___
Python tracker 
<https://bugs.python.org/issue40360>
___



[issue40360] Deprecate lib2to3 (and 2to3) for future removal

2020-04-22 Thread Carl Meyer


Carl Meyer  added the comment:

I volunteered in the python-dev thread to write a patch to the docs clarifying 
future status of lib2to3; happy to include the PendingDeprecationWarning as 
well.

Re linking to alternatives, we want to make sure we link to alternatives that 
are committed to updating to support newer Python versions' syntax. This 
definitely includes LibCST; I can inquire with the parso maintainer about 
whether it also includes parso. In future it could also include a 
third-party-maintained copy of lib2to3, if someone picks that up.

--
nosy: +carljm

___
Python tracker 
<https://bugs.python.org/issue40360>
___



[issue40078] asyncio subprocesses allow pids to be reaped, different behavior than regular subprocesses

2020-04-18 Thread Carl Lewin


Carl Lewin  added the comment:

Very first time engaging in such a forum. Apologies in advance if I am doing it 
wrong!

Observation: ps -ef shows "Defunct" process until calling script terminates

Scenario: equivalent test scripts in BASH, Python 2.7 and 3.6 that:
1. Start a ping
2. SIGTERM (kill -15) the associated PID 
3. wait for a user input (hence stopping the script terminating)

I tried Popen and threading but the behaviour is the same.

BASH script does not show any "defunct" process.

Is this "Child Reaping" the cause of this observed behaviour?

Problem comes when the Parent script is required to run constantly (server type 
scenario) as the "defunct" processes will presumably eventually consume all 
system resources?

--
nosy: +c-lewin

___
Python tracker 
<https://bugs.python.org/issue40078>
___



[issue40255] Fixing Copy on Writes from reference counting

2020-04-15 Thread Carl Meyer

Carl Meyer  added the comment:

Makes sense. Yes, caution is required about what code runs before fork, but 
forkserver’s solution for that would be a non-starter for us, since it would 
ensure that we can share basically no memory at all between worker processes.

--

___
Python tracker 
<https://bugs.python.org/issue40255>
___



[issue40255] Fixing Copy on Writes from reference counting

2020-04-15 Thread Carl Meyer


Carl Meyer  added the comment:

> I would be interested to hear the answer to Antoine's question which is 
> basically: why not using the multiprocessing fork server?

Concretely, because for a long time we have used the uWSGI application server 
and it manages forking worker processes (among other things), and AFAIK nobody 
has yet proposed trying to replace that with something built around the 
multiprocessing module. I'm actually not aware of any popular Python WSGI 
application server built on top of the multiprocessing module (but some may 
exist).

What problem do you have in mind that the fork server would solve? How is it 
related to this issue? I looked at the docs and don't see that it does anything 
to help sharing Python objects' memory between forked processes without CoW.

--

___
Python tracker 
<https://bugs.python.org/issue40255>
___



[issue40255] Fixing Copy on Writes from reference counting

2020-04-15 Thread Carl Meyer


Carl Meyer  added the comment:

> Is it a common use case to load big data and then fork to use preloaded data?

A lot of the "big data" in question here is simply lots of Python 
module/class/code objects resulting from importing lots of Python modules.

And yes, this "pre-fork" model is extremely common for serving Python web 
applications; it is the way most Python web application servers work. We 
already have an example in this thread of another large Python web application 
(YouTube) that had similar needs and considered a similar approach.

--

___
Python tracker 
<https://bugs.python.org/issue40255>
___



[issue40255] Fixing Copy on Writes from reference counting

2020-04-14 Thread Carl Meyer


Carl Meyer  added the comment:

I think the concerns about "perfect" behavior in corner cases are in general 
irrelevant here.

In the scenarios where this optimization matters, there is no quantitative 
change that occurs at 100% coverage. Preventing 99% of CoW is 99% as good as 
preventing 100% :) So the fact that a few objects here and there in special 
cases could still trigger CoW just doesn't matter; it's still a massive 
improvement over the status quo. (That said, I wouldn't _mind_ improving the 
coverage, e.g. if you can suggest a better way to find all heap objects instead 
of using the GC.)

And similarly, gps is right that the concern that immortal objects can keep 
other objects alive (even via references added after immortalization) is a 
non-issue in practice. There really is no other behavior one could prefer or 
expect instead.

> if said objects (isolated and untracked before and now tracked) acquire 
> strong references to immortal objects, those objects will be visited when the 
> gc starts calculating the isolated cycles and that requires a balanced 
> reference count to work.

I'm not sure what you mean here by "balanced ref count" or by "work" :) What 
will happen anytime an immortal object gets into the GC, for any reason, is 
that the GC will "subtract" cyclic references and see that the immortal object 
still has a large refcount even after that adjustment, and so it will keep the 
immortal object and any cycle it is part of alive. This behavior is correct and 
should be fully expected; nothing breaks. It doesn't matter at all to the GC 
that this large refcount is "fictional," and it doesn't break the GC algorithm, 
it results only in the desired behavior of maintaining immortality of immortal 
objects.

It is perhaps slightly weird that this behavior falls out of the immortal bit 
being a high bit rather than being more explicit. I did do some experimentation 
with trying to explicitly prevent immortal instances from ever entering GC, but 
it turned out to be hard to do that in an efficient way. And motivation to do 
it is low, because there's nothing wrong with the behavior in the existing PR.

--

___
Python tracker 
<https://bugs.python.org/issue40255>
___



[issue40255] Fixing Copy on Writes from reference counting

2020-04-14 Thread Carl Meyer


Carl Meyer  added the comment:

> This may break the garbage collector algorithm that relies on the balance 
> between strong references between objects and its reference count to do the 
> calculation of the isolated cycles.

I don't think it really breaks anything. What happens is that the immortal 
object appears to the GC to have a very large reference count, even after 
adjusting for within-cycle references. So cycles including an immortal object 
are always kept alive, which is exactly the behavior one should expect from an 
immortal object.

--

___
Python tracker 
<https://bugs.python.org/issue40255>
___



[issue40255] Fixing Copy on Writes from reference counting

2020-04-14 Thread Carl Meyer


Carl Meyer  added the comment:

> An immortalized object will never start participating in reference counting 
> again after it is immortalized.

Well, "passed to an extension compiled with no-immortal headers" is an 
exception to this.

But for the "not GC tracked but later becomes GC tracked" case, it will not 
re-enter reference counting, only the GC.

--

___
Python tracker 
<https://bugs.python.org/issue40255>
___



[issue40255] Fixing Copy on Writes from reference counting

2020-04-14 Thread Carl Meyer


Carl Meyer  added the comment:

> Anything that is touched by the immortal object will be leaked. This can also 
> happen in obscure ways if reference cycles are created.

I think this is simply expected behavior if you choose to create immortal 
objects, and not really an issue. How could you have an immortal object that 
doesn't keep its strong references alive?

> this does not fully cover all cases as objects that become tracked by the GC 
> after they are modified (for instance, dicts and tuples that only contain 
> immutable objects). Those objects will still participate in reference 
> counting after they start to be tracked.

I think the last sentence here is not quite right. An immortalized object will 
never start participating in reference counting again after it is immortalized.

There are two cases. If at the time of calling `immortalize_heap()` you have a 
non-GC-tracked object that is also not reachable from any GC-tracked container, 
then it will not be immortalized at all, so will be unaffected. This is a side 
effect of the PR using the GC to find objects to immortalize.

If the non-GC-tracked object is reachable from a GC-tracked object (I believe 
this is by far the more common case), then it will be immortalized. If it later 
becomes GC-tracked, it will start participating in GC (but the immortal bit 
causes it to appear to the GC to have a very high reference count, so GC will 
never collect it or any cycle it is part of), but that will not cause it to 
start participating in reference counting again.
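The "not GC-tracked until modified" behavior mentioned here can be observed directly with `gc.is_tracked` (a sketch of current CPython behavior):

```python
import gc

# A dict holding only atomic keys/values is not tracked by the cyclic GC.
d = {"a": 1}
assert not gc.is_tracked(d)

# Storing a container value makes the dict GC-tracked from then on.
d["b"] = []
assert gc.is_tracked(d)
```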

> if immortal objects are handed to extension modules compiled with the other 
> version of the macros, the reference count can be corrupted

I think the word "corrupted" makes this sound worse than it is in practice. 
What happens is just that the object is still effectively immortal (because the 
immortal bit is a very high bit), but the copy-on-write benefit is lost for the 
objects touched by old extensions.

> 1.17x slower on logging_silent or unpickle_pure_python is a very expensive 
> price

Agreed. It seems the only way this makes sense is under an ifdef and off by 
default. CPython does a lot of that for debug features; this might be the first 
case of doing it for a performance feature?

> I would be more interested by an experiment to move ob_refcnt outside 
> PyObject to solve the Copy-on-Write issue

It would certainly be interesting to see results of such an experiment. We 
haven't tried that for refcounts, but in the work that led to `gc.freeze()` we 
did try relocating the GC header to a side location. We abandoned that because 
the memory overhead of adding a single indirection pointer to every PyObject 
was too large to even consider the option further. I suspect that this memory 
overhead issue and/or likely cache locality problems will make moving refcounts 
outside PyObject look much worse for performance than this immortal-instances 
patch does.
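For reference, `gc.freeze()` (the outcome of that earlier work) has been available since Python 3.7; a minimal sketch of the pre-fork usage pattern:

```python
import gc

# Stand-in for the state a server builds up before forking workers.
startup_state = [{"n": i} for i in range(1000)]

# Move every currently tracked object into a permanent generation so the
# collector never examines it (and never dirties its pages) post-fork.
gc.collect()
gc.freeze()
assert gc.get_freeze_count() > 0

# Workers would fork here; unfreeze restores normal collection.
gc.unfreeze()
assert gc.get_freeze_count() == 0
```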

--
nosy: +carljm

___
Python tracker 
<https://bugs.python.org/issue40255>
___



[issue3950] turtle.py: bug in TurtleScreenBase._drawimage

2020-02-10 Thread Carl Tyndall


Change by Carl Tyndall :


--
pull_requests: +17809
pull_request: https://github.com/python/cpython/pull/18435

___
Python tracker 
<https://bugs.python.org/issue3950>
___



[issue35799] fix or remove smtpd.PureProxy

2020-02-05 Thread Carl Harris


Change by Carl Harris :


--
nosy: +hitbox

___
Python tracker 
<https://bugs.python.org/issue35799>
___



[issue39486] bug in %-formatting in Python, related to escaped %-characters

2020-01-29 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

Ok, that means it's intentional. I still think it's missing a documentation 
change and consistent error messages.

--

___
Python tracker 
<https://bugs.python.org/issue39486>
___



[issue39486] bug in %-formatting in Python, related to escaped %-characters

2020-01-29 Thread Carl Friedrich Bolz-Tereick


New submission from Carl Friedrich Bolz-Tereick :

The following behaviour of %-formatting changed between Python3.6 and 
Python3.7, and is in my opinion a bug that was introduced.

So far, it has been possible to add conversion flags to a conversion specifier 
in %-formatting, even if the conversion is '%' (meaning a literal % is emitted 
and no argument consumed).

Eg this works in Python3.6:

>>> "%+%abc% %" % ()
'%abc%'

The conversion flags '+' and ' ' are ignored.

Was it discussed and documented anywhere that this is now an error? Because 
Python3.7 has the following strange behaviour instead:

>>> "%+%abc% %" % ()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: not enough arguments for format string

That error message is just confusing, because the amount of arguments is not 
the problem here. If I pass a dict (thus making the number of arguments 
irrelevant) I get instead:

>>> "%+%abc% %" % {}
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: unsupported format character '%' (0x25) at index 2

(also a confusing message, because '%' is a perfectly fine format character)

In my opinion this behaviour should either be reverted to how Python3.6 worked, 
or the new restrictions should be documented and the error messages improved.
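A quick way to see both of the 3.7+ error messages side by side (a sketch; on 3.6 neither call raises):

```python
def failure_mode(arg):
    # Returns (exception type name, message) for the 3.7+ behavior
    # described above, or None where no exception is raised (3.6).
    try:
        "%+%abc% %" % arg
    except (TypeError, ValueError) as exc:
        return type(exc).__name__, str(exc)
    return None

tuple_result = failure_mode(())   # complains about argument count
dict_result = failure_mode({})    # complains about the format character
```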

--
messages: 360965
nosy: Carl.Friedrich.Bolz
priority: normal
severity: normal
status: open
title: bug in %-formatting in Python, related to escaped %-characters
versions: Python 3.7

___
Python tracker 
<https://bugs.python.org/issue39486>
___



[issue39485] Bug in mock running on PyPy3

2020-01-29 Thread Carl Friedrich Bolz-Tereick


Change by Carl Friedrich Bolz-Tereick :


--
keywords: +patch
pull_requests: +17629
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/18252

___
Python tracker 
<https://bugs.python.org/issue39485>
___



[issue39485] Bug in mock running on PyPy3

2020-01-29 Thread Carl Friedrich Bolz-Tereick


New submission from Carl Friedrich Bolz-Tereick :

One of the new-in-3.8 tests for unittest.mock, 
test_spec_has_descriptor_returning_function, is failing on PyPy. This exposes a 
bug in unittest.mock. The bug is most noticeable on PyPy, where it can be 
triggered by simply writing a slightly weird descriptor (CrazyDescriptor in the 
test). Getting it to trigger on CPython would be possible too, by implementing 
the same descriptor in C, but I did not actually do that.

The relevant part of the test looks like this:

from unittest.mock import create_autospec

class CrazyDescriptor(object):
def __get__(self, obj, type_):
if obj is None:
return lambda x: None

class MyClass(object):

some_attr = CrazyDescriptor()

mock = create_autospec(MyClass)
mock.some_attr(1)

On CPython this just works, on PyPy it fails with:

Traceback (most recent call last):
  File "x.py", line 13, in <module>
mock.some_attr(1)
  File 
"/home/cfbolz/bin/.pyenv/versions/pypy3.6-7.2.0/lib-python/3/unittest/mock.py", 
line 938, in __call__
_mock_self._mock_check_sig(*args, **kwargs)
  File 
"/home/cfbolz/bin/.pyenv/versions/pypy3.6-7.2.0/lib-python/3/unittest/mock.py", 
line 101, in checksig
sig.bind(*args, **kwargs)
  File 
"/home/cfbolz/bin/.pyenv/versions/pypy3.6-7.2.0/lib-python/3/inspect.py", line 
3034, in bind
return args[0]._bind(args[1:], kwargs)
  File 
"/home/cfbolz/bin/.pyenv/versions/pypy3.6-7.2.0/lib-python/3/inspect.py", line 
2955, in _bind
raise TypeError('too many positional arguments') from None
TypeError: too many positional arguments

The reason for this problem is that mock deduced that MyClass.some_attr is a 
method on PyPy. Since mock thinks the lambda returned by the descriptor is a 
method, it adds self as an argument, which leads to the TypeError.

Checking whether something is a method is done by _must_skip in mock.py. The 
relevant condition is this one:

elif isinstance(getattr(result, '__get__', None), MethodWrapperTypes):
# Normal method => skip if looked up on type
# (if looked up on instance, self is already skipped)
return is_type
else:
return False

MethodWrapperTypes is defined as:

MethodWrapperTypes = (
type(ANY.__eq__.__get__),
)

which is just types.MethodType on PyPy, because there is no such thing as a 
method wrapper (the builtin types look pretty much like python-defined types in 
PyPy).

On PyPy the condition isinstance(getattr...) is thus True for all descriptors! 
So as soon as result has a __get__, it counts as a method, even in the above 
case where it's a custom descriptor.


Now even on CPython the condition makes no sense to me. It would be True for a 
C-defined version of CrazyDescriptor, it's just not a good way to check whether 
result is a method.

I would propose to replace the condition with the much more straightforward 
check:

elif isinstance(result, FunctionTypes):
...

something is a method if it's a function on the class. Doing that change makes 
the test pass on PyPy, and doesn't introduce any test failures on CPython 
either.

Will open a pull request.
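A simplified illustration of the proposed check (here `FunctionTypes` is a stand-in tuple, and the attribute is read from the raw class `__dict__`, similar to how `_must_skip` looks entries up without triggering descriptors):

```python
import types

# Hypothetical stand-in for mock's FunctionTypes tuple.
FunctionTypes = (types.FunctionType, types.BuiltinFunctionType)


class CrazyDescriptor:
    def __get__(self, obj, type_):
        if obj is None:
            return lambda x: None


class MyClass:
    some_attr = CrazyDescriptor()

    def method(self):
        pass


# getattr would trigger __get__ and hand back a plain function, hiding the
# descriptor; the raw __dict__ entry shows what is actually on the class.
assert isinstance(MyClass.__dict__["method"], FunctionTypes)         # real method
assert not isinstance(MyClass.__dict__["some_attr"], FunctionTypes)  # descriptor
assert isinstance(getattr(MyClass, "some_attr"), types.FunctionType)
```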

--
messages: 360961
nosy: Carl.Friedrich.Bolz, cjw296
priority: normal
severity: normal
status: open
title: Bug in mock running on PyPy3

___
Python tracker 
<https://bugs.python.org/issue39485>
___



[issue39428] allow creation of "symtable entry" objects from Python

2020-01-22 Thread Carl Meyer


New submission from Carl Meyer :

Currently the "symtable entry" extension type (PySTEntry_Type) defined in 
`Python/symtable.c` defines no `tp_new` or `tp_init`, making it impossible to 
create instances of this type from Python code.

I have a use case for pickling symbol tables (as part of a cache subsystem for 
a static analyzer), but the inability to create instances of symtable entries 
from attributes makes this impossible, even with custom pickle support via 
dispatch_table or copyreg.

If the idea of making instances of this type creatable from Python is accepted 
in principle, I can submit a PR for it.

Thanks!
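A quick demonstration of the limitation (`_table` is the attribute in Lib/symtable.py holding CPython's internal symtable entry object):

```python
import pickle
import symtable

table = symtable.symtable("def f(x):\n    return x", "<example>", "exec")
entry = table._table  # the underlying PySTEntry object

# With no tp_new/tp_init and no pickle support, this fails.
try:
    pickle.dumps(entry)
    picklable = True
except (TypeError, pickle.PicklingError):
    picklable = False
```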

--
messages: 360522
nosy: carljm
priority: normal
severity: normal
status: open
title: allow creation of "symtable entry" objects from Python
type: enhancement

___
Python tracker 
<https://bugs.python.org/issue39428>
___



[issue39318] NamedTemporaryFile could cause double-close on an fd if _TemporaryFileWrapper throws

2020-01-13 Thread Carl Harris


Change by Carl Harris :


--
nosy: +hitbox

___
Python tracker 
<https://bugs.python.org/issue39318>
___



[issue39278] add docstrings to functions in pdb module

2020-01-09 Thread Carl Bordum Hansen


Change by Carl Bordum Hansen :


--
keywords: +patch
pull_requests: +17331
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/17924

___
Python tracker 
<https://bugs.python.org/issue39278>
___



[issue39278] add docstrings to functions in pdb module

2020-01-09 Thread Carl Bordum Hansen


New submission from Carl Bordum Hansen :

The functions are documented, but not in doc strings which means you cannot 
call help() on them.

From this twitter thread: 
https://twitter.com/raymondh/status/1211414561468952577

--
assignee: docs@python
components: Documentation
messages: 359689
nosy: carlbordum, docs@python
priority: normal
severity: normal
status: open
title: add docstrings to functions in pdb module
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39278>
___



[issue39220] constant folding affects annotations despite 'from __future__ import annotations'

2020-01-06 Thread Carl Friedrich Bolz-Tereick


Carl Friedrich Bolz-Tereick  added the comment:

I don't have a particularly deep opinion on what should be done, just a bit of 
weirdness I hit upon while implementing the PEP in PyPy. fwiw, we implement it 
as an AST transformer that the compiler runs before running the optimizer to 
make sure that the AST optimizations don't get applied to annotions. The 
transformer replaces all annotations with a Constant ast node, containing the 
unparsed string.

--

___
Python tracker 
<https://bugs.python.org/issue39220>
___



[issue39220] constant folding affects annotations despite 'from __future__ import annotations'

2020-01-05 Thread Carl Friedrich Bolz-Tereick


New submission from Carl Friedrich Bolz-Tereick :

PEP 563 interacts in weird ways with constant folding. running the following 
code:

```
from __future__ import annotations

def f(a: 5 + 7) -> a ** 39:
return 12

print(f.__annotations__)
```

I would expect this output:

```
{'a': '5 + 7', 'return': 'a ** 39'}
```

But I get:


```
{'a': '12', 'return': 'a ** 39'}
```
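For contrast, without the future import annotations are evaluated eagerly, so the folded and unfolded forms are indistinguishable by construction (a sketch):

```python
# No `from __future__ import annotations` here: annotation expressions
# are evaluated at function definition time, folded or not.
def f(a: 5 + 7) -> 3 * 13:
    return 12

assert f.__annotations__ == {"a": 12, "return": 39}
```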

--
components: Interpreter Core
files: x.py
messages: 359341
nosy: Carl.Friedrich.Bolz
priority: normal
severity: normal
status: open
title: constant folding affects annotations despite 'from __future__ import 
annotations'
versions: Python 3.7
Added file: https://bugs.python.org/file48827/x.py

___
Python tracker 
<https://bugs.python.org/issue39220>
___


