Brian Carlson added the comment:
I don't think passing `lineno` and `column` is preferable. It makes code
generation harder, because `lineno` and `column` are hard to know before the
code is unparsed.
--
___
Python tracker
Brian Carlson added the comment:
The second solution seems better, in my opinion. I monkey-patched the
function like this in my own code:
```
def get_type_comment(self, node):
    comment = (self._type_ignores.get(node.lineno)
               if hasattr(node, "lineno") else node.type_comment)
```
New submission from Brian Carlson :
Test file linked. When unparsing the output from ast.parse on a simple class,
unparse throws an error: 'FunctionDef' object has no attribute 'lineno' for a
valid class and valid AST. It fails when programmatically building the module
AS
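A sketch of a workaround for builds like this (the node setup below is hypothetical, not the reporter's test file; `ast.fix_missing_locations` supplies the `lineno`/`col_offset` attributes the unparser expects):

```python
import ast

# Hypothetical programmatic build: none of these nodes carry lineno/col_offset.
fn = ast.FunctionDef(
    name="f",
    args=ast.arguments(posonlyargs=[], args=[], vararg=None,
                       kwonlyargs=[], kw_defaults=[], kwarg=None, defaults=[]),
    body=[ast.Pass()],
    decorator_list=[],
    returns=None,
    type_comment=None,
    type_params=[],  # only meaningful on 3.12+, harmlessly ignored before
)
mod = ast.Module(body=[fn], type_ignores=[])

ast.fix_missing_locations(mod)  # fills in the missing location attributes
source = ast.unparse(mod)
print(source)
```

Without the `fix_missing_locations` call, `ast.unparse` on 3.9 hits the missing-`lineno` error described above.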
Brian Skinn added the comment:
Indeed, I hadn't been thinking about the testing/maintenance burden to CPython
or other implementations when I made the suggestion.
I no longer have a strong opinion about this change, so I am happy to
reject/close.
--
resolution: -> reject
New submission from Brian McCutchon :
Consider the following code:
# Copyright 2021 Google LLC.
# SPDX-License-Identifier: Apache-2.0
import contextlib
import os
import tempfile

@contextlib.contextmanager
def my_tmp_file():
    with tempfile.NamedTemporaryFile('w') as f:
        yield f

os.stat(m
Brian Skinn added the comment:
Searching for identifiers starting with two uppercase letters returns a HUGE
list.
>>> pat2 = re.compile(r"([.][A-Z][A-Z])[^.]*$")
Filtering down to only those whose lowercased name contains "type":
>>> pprint([obj.name for obj in inv.objec
Brian Skinn added the comment:
If I understand the problem correctly, these mis-attributions of roles (to
'data' instead of 'class') come about when something that is technically a
class is defined in source using simple assignment, as with UnionType.
Problematic entries
Brian added the comment:
txt = ' test'
txt = re.sub(r'^\s*', '^', txt)
substitutes once because the * is greedy.
txt = ' test'
txt = re.sub(r'^\s*?', '^', txt)
substitutes twice, consistent with the \Z behavior.
--
Brian added the comment:
I just ran into this change in behavior myself.
It's worth noting that the new behavior appears to match perl's behavior:
# perl -e 'print(("he" =~ s/e*\Z/ah/rg), "\n")'
hahah
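The same result can be checked directly in Python, where the 3.7+ semantics replace both the non-empty match and the adjacent empty match at the end of the string:

```python
import re

# On Python >= 3.7, the empty match adjacent to the final non-empty
# 'e' match is also replaced, mirroring the quoted Perl output.
print(re.sub(r'e*\Z', 'ah', 'he'))  # -> hahah
```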
--
nosy: +bsammon
New submission from Brian Hunt :
Version: Python 3.9.3
Package: Logger + Windows 10 Task Scheduler
Error Msg: None
Behavior:
I built a logging process to use with a python script that will be scheduled
with Windows 10 task scheduler (this could be a bug in task scheduler too, but
starting
Brian Lee added the comment:
Thanks for clarifying - I see now that the docs specifically call out the lack
of guarantees here with "usually but not always regard them as equivalent".
I did want to specifically explain the context of my bug:
1. NumPy's strings have some unex
New submission from Brian Lee :
This seems like unexpected behavior: Two keys that are equal and have equal
hashes should yield cache hits, but they do not.
Python 3.9.6 (default, Aug 18 2021, 19:38:01)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", &
Brian added the comment:
I've attached an example of what I want. It contains a class, a function to be
tested, and a test class which tests the function.
What TestCase.addTypeEqualityFunc feels like it offers is a chance to compare
objects however I feel like is needed for each
New submission from Brian :
Like the title says, TestCase.assertSequenceEqual does not behave like
TestCase.assertEqual, which uses TestCase._getAssertEqualityFunc. Instead,
TestCase.assertSequenceEqual compares items with `item1 != item2`. If it used
the equality functions, I could do something like this:
```
def test_stuff(self
Brian Curtin added the comment:
I think there was either something stale that got linked wrong or some other
kind of build failure, as I just built v3.10.0b3 tag again from a properly
cleaned environment and this is no longer occurring. Sorry for the noise.
--
stage: -> resol
Brian Curtin added the comment:
Hmm. I asked around about this and someone installed 3.10.0-beta3 via pyenv and
this worked fine.
For whatever it's worth, this was built from source on OS X 10.14.6 via a
pretty normal setup of `./configure` with no extra flags and then `make in
New submission from Brian Curtin :
I'm currently trying to run my `tox` testing environment—all of which runs
under `coverage`—against 3.10b3 and am running into some trouble. I initially
thought it was something about tox or coverage, but it looks lower level than
that as the venv scri
Brian Hulette added the comment:
Hey there, I just came across this bug when looking into a problem with
corrupted pyc files. Was the patch ever applied? I'm still seeing the original
behavior in Python 3.7.
Thanks!
--
nosy: +hulettbh
Brian Romanowski added the comment:
I took a look at Parser/tokenizer.c. From what I can tell, the tokenizer does
fake a newline character when the input buffer does not end with an actual
newline, and the returned NEWLINE token has an effective length of 1
because of this
Brian Romanowski added the comment:
Shoot, I just realized that consistency isn't the important thing here; the
most important thing is that the tokenize module exactly matches the output of
the Python tokenizer. It's possible that my changes violate that constraint, I'll
Change by Brian Romanowski :
--
keywords: +patch
pull_requests: +23085
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/24260
New submission from Brian Romanowski :
The tokenize module's tokenizer functions output incorrect (or at least
misleading) information when the content being tokenized does not end in a line
ending character. This is related to the fix for issue 33899, which added the
NEWLINE
Brian Costlow added the comment:
There are actually two different issues here.
dtrace -q will not work on Fedora-based Linux (I haven't tried elsewhere), and
that should probably be corrected, but it is NOT what causes the test failure.
The tests' setup checks whether dtrace is us
Brian Kohan added the comment:
I concur with Gregory. It seems the action here is just to make the very real
possibility of false positives apparent in the docs.
In my experience processing data from the wild, I see a pretty high rate of
about 1/1000. I'm sure the probability
Brian Kohan added the comment:
Hi all,
I'm experiencing the same issue. I took a look at the is_zipfile code - it
seems it's not checking the start of the file for the magic numbers, but
looking deeper in. I presume that's because the magic numbers at the start are
considered
unreliable for
Brian Rutledge added the comment:
In addition to Ctrl+V, Shift+Insert also doesn't work. The behavior is the
same in Command Prompt and PowerShell on Windows 10.
Workarounds include:
- Clicking `Edit > Paste` from the window menu
- Enabling `Properties > Options > Use Ctrl+Shi
Brian Vandenberg added the comment:
I accidentally hit submit too early.
I tried changing the code in posixmodule.c to use lseek(), something like the
following:
offset = lseek(in, 0, SEEK_CUR);
do {
    ret = sendfile(...);
} while (...);
lseek(in, offset, SEEK_SET);
... however, in
Brian Vandenberg added the comment:
Christian, you did exactly what I needed. Thank you.
I don't have the means to do a git bisect to find where it broke. It wasn't a
problem around 3.3 timeframe and I'm not sure when this sendfile stuff was
implemented.
The man page fo
Brian Vandenberg added the comment:
Solaris will be around for at least another 10-15 years.
The least you could do is look into it and offer some speculations.
--
New submission from Brian Faherty :
The ConfigParser in Lib has a parameter called `interpolation`, that expects an
instance of a subclass of Interpolation. However, when ConfigParser is given an
argument of an uninstantiated subclass of Interpolation, the __init__ function
of ConfigParser
Brian May added the comment:
Consensus seems to be that this is a bug in sshuttle, not a bug in python.
Thanks for the feedback.
I think this bug can be closed now...
--
Brian Curtin added the comment:
graingert: Do you have a workaround for this? I'm doing roughly the same thing
with an asyncpg connection pool nested with a transaction and am getting
nowhere.
async with pg_pool.acquire() as conn:
    async with conn.transa
Brian O'Sullivan added the comment:
matplotlib yes
Will do.
--
resolution: third party ->
status: closed -> open
New submission from Brian O'Sullivan :
The 3D plotting library doesn't occlude objects by depth/perspective.
When the figure is regenerated during the animation, each additional line and
point is drawn on top of the prior lines and points.
Bug resolution:
- incorporate occlusions
Brian Quinlan added the comment:
I'll try to take a look at this before the end of the week, but I'm currently
swamped with other life stuff :-(
--
New submission from Brian May :
After upgrading to Python 3.8, users of sshuttle report seeing this error:
Traceback (most recent call last):
File "", line 1, in
File "assembler.py", line 38, in
File "sshuttle.server", line 298, in main
File "/usr
Brian Quinlan added the comment:
New changeset 884eb89d4a5cc8e023deaa65001dfa74a436694c by Brian Quinlan in
branch 'master':
bpo-39205: Tests that highlight a hang on ProcessPoolExecutor shutdown (#18221)
https://github.com/python/cpython/commit/884eb89d4a5cc8e023deaa65001dfa
Change by Brian Quinlan :
--
keywords: +patch
pull_requests: +17601
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/18221
New submission from Brian McKim :
When I uninstalled the Windows Store version of 3.8, it appears to have placed
two links in my \AppData\Local\Microsoft\WindowsApps folder (though they may
have always been there): python.exe and python3.exe. When I run these in
PowerShell, both send me to the
New submission from Brian Quinlan :
```
from concurrent.futures import ProcessPoolExecutor
import time
t = ProcessPoolExecutor(max_workers=3)
t.map(time.sleep, [1,2,3])
t.shutdown(wait=False)
```
Results in this exception and then a hang (i.e. Python doesn't terminate):
```
Excepti
Brian Kardon added the comment:
Ah, thank you so much! That makes sense. I'll have to switch to 64-bit python.
I've marked this as closed - hope that's the right thing to do here.
--
resolution: -> not a bug
stage: -> resolved
s
New submission from Brian Kardon :
When I try to create a series of multiprocessing.RawArray objects, I get an
"OSError: Not enough memory resources are available to process this command".
However, by my calculations, the total amount of memory I'm trying to allocate
is just
New submission from Brian Shaginaw :
>>> import inspect
>>> def foo(bar, /, **kwargs):
... print(bar, kwargs)
...
>>> foo(1, bar=2)
1 {'bar': 2}
>>> inspect.signature(foo).bind(1, bar=2)
Traceback (most recent call last):
File "", l
New submission from Brian Skinn :
If I read the docs correctly, io.TextIOWrapper is meant to provide a str-typed
interface to an underlying bytes stream.
If a TextIOWrapper is instantiated with the underlying buffer=io.StringIO(), it
breaks:
>>> import io
>>> tw
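For contrast, a minimal sketch of the supported pairing, where the underlying buffer yields bytes (BytesIO rather than StringIO):

```python
import io

# TextIOWrapper decodes *bytes* from the underlying buffer into str;
# a BytesIO works where a StringIO does not.
tw = io.TextIOWrapper(io.BytesIO("héllo\n".encode("utf-8")), encoding="utf-8")
text = tw.read()
print(text)
```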
Brian Skinn added the comment:
On reflection, it would probably be better to limit the ELLIPSIS to 3 or 4
periods ('[.]{3,4}'); otherwise, it would be impossible to express an ellipsis
followed by a period in a 'want'.
--
Brian Skinn added the comment:
I suppose one alternative solution might be to tweak the ELLIPSIS feature of
doctest, such that it would interpret a run of >=3 periods in a row (matching
regex pattern of "[.]{3,}") as 'ellipsis'.
The regex for PS2 could then have a n
Brian Skinn added the comment:
Mm, agreed - that regex wouldn't be hard to write.
The problem is, AFAICT there's an irresolvable syntactic ambiguity in a line
starting with exactly three periods, if the doctest PS2 specification is not
constrained to be exactly "... "
New submission from Brian Skinn :
Once the underlying buffer/stream is .detach()ed from an instance of a subclass
of TextIOBase or BufferedIOBase, accessing most attributes defined on
TextIOBase/BufferedIOBase or the IOBase parent, as well as calling most
methods defined on TextIOBase
Change by Brian Skinn :
--
type: enhancement -> behavior
Brian Quinlan added the comment:
Can I add "needs backport to 3.8" and "needs backport to 3.7" labels now or
do I have to use cherry_picker at this point?
On Mon, Jul 1, 2019 at 3:55 PM Ned Deily wrote:
>
> Ned Deily added the comment:
>
> > I don't
Brian Quinlan added the comment:
I don't know what the backport policy is. The bug is only theoretical AFAIK,
i.e. someone noticed it through code observation, but it has not appeared in
the wild.
On Mon, Jul 1, 2019 at 3:25 PM Ned Deily wrote:
>
> Ned Deily added the comment
Change by Brian Quinlan :
--
resolution: -> fixed
stage: patch review -> resolved
status: open -> closed
Brian Quinlan added the comment:
New changeset 242c26f53edb965e9808dd918089e664c0223407 by Brian Quinlan in
branch 'master':
bpo-31783: Fix a race condition creating workers during shutdown (#13171)
https://github.com/python/cpython/commit/242c26f53edb965e9808dd918089e6
Change by Brian Quinlan :
--
resolution: -> duplicate
stage: -> resolved
status: open -> closed
superseder: -> function changed when pickle bound method object
Change by Brian Quinlan :
--
stage: -> resolved
status: open -> closed
Change by Brian Quinlan :
--
resolution: -> duplicate
superseder: -> picke cannot dump Exception subclasses with different super()
args
Change by Brian Quinlan :
--
title: picke cannot dump exceptions subclasses with different super() args ->
picke cannot dump Exception subclasses with different super() args
New submission from Brian Quinlan :
$ ./python.exe nopickle.py
TypeError: __init__() missing 1 required positional argument: 'num'
The issue is that the arguments passed to Exception.__init__ (via `super()`)
are collected into `args` and then serialized by pickle, e.g.
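A minimal reproduction of that failure, using the `PoolBreaker` class from this thread. `BaseException.__reduce__` is what pickle consults, and it records only `self.args`, which stays empty when `super().__init__()` is called without the argument:

```python
class PoolBreaker(Exception):
    def __init__(self, num):
        super().__init__()  # num is NOT forwarded, so self.args == ()
        self.num = num

# pickle reconstructs the exception as cls(*args); with args == (),
# the required 'num' argument is missing.
cls, args = PoolBreaker(42).__reduce__()[:2]
print(args)  # ()
try:
    cls(*args)
except TypeError as exc:
    print("round-trip fails:", exc)
```

Forwarding the argument (`super().__init__(num)`) makes the round trip succeed.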
Brian Quinlan added the comment:
That's a super interesting bug! It looks like the issue is that your exception
can't be round-tripped using pickle, i.e.
>>> class PoolBreaker(Exception):
...     def __init__(self, num):
...         super().__init__()
...
Brian Quinlan added the comment:
Joshua, I'm closing this since I haven't heard from you in a month. Please
re-open if your use case isn't handled by `initializer` and `initargs`.
--
assignee: -> bquinlan
resolution: -> out of date
stage: -> resolve
Brian Skinn added the comment:
:thumbsup:
Glad I happened to be in the right place at the right time to put it together.
I'll leave the tabslash repo up for future reference.
--
Brian Skinn added the comment:
Brett, to be clear, this sounds like the tabbed solution is not going to be
used at this point? If so, I'll pull down the tabbed docs I'm hosting.
--
Brian Skinn added the comment:
First, for anyone interested, there are screenshots and links to docs versions
at the SC GH issue
(https://github.com/python/steering-council/issues/12#issuecomment-498856524,
and following) where we're exploring what the tabbed approach to the PEP570
Change by Brian Skinn :
--
nosy: +bskinn
New submission from Brian Spratke :
I am trying to cross compile Python 3.7 for Android. I have Python building,
but I keep getting an error that _ctypes failed to build, but I see nothing
that jumps out as a reason.
_ctypes_test builds, before that I see this INFO message
INFO: Can
Brian Quinlan added the comment:
We can bike shed over the name in the PR ;-)
--
Change by Brian Quinlan :
--
resolution: -> fixed
stage: patch review -> resolved
status: open -> closed
Brian Quinlan added the comment:
My understanding is that tracebacks have a pretty large memory profile so I'd
rather not keep them alive. Correct me if I'm wrong about that.
--
Brian Quinlan added the comment:
After playing with it for a while, https://github.com/python/cpython/pull/6375
seems reasonable to me.
It needs tests and some documentation.
Antoine, are you still -1 because of the complexity increase
Brian Quinlan added the comment:
When I first wrote and started using ThreadPoolExecutor, I had a lot of code
like this:
with ThreadPoolExecutor(max_workers=500) as e:
    e.map(download, images)
I didn't expect that `images` would be a large list but, if it was, I wanted
all o
Brian Quinlan added the comment:
Was this fixed by https://github.com/python/cpython/pull/3895 ?
--
Brian Quinlan added the comment:
Brian, I was looking for an example where the current executor isn't sufficient
for testing i.e. a useful test that would be difficult to write with a real
executor but would be easier with a fake.
Maybe you have such an example from your
Change by Brian Quinlan :
--
pull_requests: +13087
stage: -> patch review
Brian Quinlan added the comment:
I think that ProcessPoolExecutor might have a similar race condition - but not
in exactly this code path since it would only be with the queue management
thread (which is only started once).
--
Brian McCutchon added the comment:
No, I do not have such an example, as most of my tests try to fake the
executors.
--
Brian Quinlan added the comment:
Great report Steven!
I was able to reproduce this with the attached patch (just adds some sleeps and
prints) and this script:
from threading import current_thread
from concurrent.futures import ThreadPoolExecutor
from time import sleep
pool
Change by Brian Quinlan :
--
assignee: -> bquinlan
Brian Quinlan added the comment:
How did the experiment go? Are people still interested in this?
--
Brian Quinlan added the comment:
Ben, do you still think that your patch is relevant or shall we close this bug?
--
Brian Quinlan added the comment:
Do the `initializer` and `initargs` parameters deal with this use case for you?
https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor
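A small sketch of what those parameters do, using a thread-local as an illustrative stand-in for per-worker setup (the names here are invented for the example):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

_local = threading.local()

def _init(token):
    # Runs once in each worker thread, before it executes any task.
    _local.token = token

with ThreadPoolExecutor(max_workers=2, initializer=_init,
                        initargs=("configured",)) as ex:
    # The task runs in a worker thread where _init has already run.
    result = ex.submit(lambda: _local.token).result()
print(result)  # -> configured
```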
--
Brian Quinlan added the comment:
If we supported this, aren't we promising that we will always materialize the
iterator passed to map?
I think that we'd need a really strong use case for this to be worthwhile.
--
Brian Quinlan added the comment:
Hey Brian,
I understand the non-determinism. I was wondering if you had a non-theoretical
example i.e. some case where the non-determinism had impacted a real test that
you wrote?
--
Brian Quinlan added the comment:
Hey Hrvoje,
I agree that #1 is the correct approach. `disown` might not be the best name -
maybe `allow_shutdown` or something. But we can bike shed about that later.
Are you interested in writing a patch
Brian McCutchon added the comment:
I understand your hesitation to add a fake. Would it be better to make it
possible to subclass Executor so that a third party implementation of this can
be developed?
As for an example, here is an example of nondeterminism when using a
ThreadPoolExecutor
Brian Quinlan added the comment:
Hey Ethan, I'm really sorry about dropping the ball on this. I've been burnt
out on Python stuff for the last couple of years.
When we left this, it looked like the -1s were in the majority and no one new
has jumped on to support `filter`.
If you
Brian Quinlan added the comment:
Using a default executor could be dangerous because it could lead to deadlocks.
For example:
# mylib.py
def my_func():
    tsubmit(...)
    tsubmit(...)
    tsubmit(somelib.some_func, ...)

# somelib.py
def some_func():
    tsubmit(...)  # Potential
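A runnable sketch of the starvation this warns about, using an explicit single-worker pool as a stand-in for a shared default executor, and a timeout so the demo terminates instead of deadlocking (all names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as PoolTimeout

shared = ThreadPoolExecutor(max_workers=1)  # stand-in for a "default" executor

def inner():
    return "done"

def outer():
    # The pool's only worker is busy running outer(), so inner() can never
    # start; without the timeout, this wait would deadlock forever.
    return shared.submit(inner).result(timeout=0.5)

try:
    outcome = shared.submit(outer).result()
except PoolTimeout:
    outcome = "starved"
shared.shutdown()
print(outcome)  # -> starved
```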
Brian Quinlan added the comment:
So you actually use the result of ex.submit, i.e. use the resulting future?
If you don't, then it might be easier to just create your own thread.
--
Brian McCutchon added the comment:
Mostly nondeterminism. It seems like creating a ThreadPoolExecutor with one
worker could still be nondeterministic, as there are two threads: the main
thread and the worker thread. It gets worse if multiple executors are needed.
Another option would be to
Brian Quinlan added the comment:
Do you have an example that you could share?
I can't think of any other fakes in the standard library and I'm hesitant to be
the person who adds the first one ;-)
--
Change by Brian Quinlan :
--
stage: -> resolved
status: open -> closed
Brian Quinlan added the comment:
Can we close this bug then?
--
Brian Quinlan added the comment:
Hey Brian, why can't you use threads in your unit tests? Are you worried about
non-determinism or resource usage? Could you make a ThreadPoolExecutor with a
single worker?
--
Change by Brian Quinlan :
--
keywords: +patch
pull_requests: +13045
stage: needs patch -> patch review
Brian Quinlan added the comment:
OK, I completely disagree with my statement:
"""If you added this as an argument to shutdown() then you'd probably also have
to add it as an option to the constructors (for people using Executors as
context managers). But, if you h
Brian Quinlan added the comment:
BTW, the 61 process limit comes from: 63 - (the result queue reader) - (the
thread wakeup pipe).
--
Change by Brian Quinlan :
--
assignee: -> bquinlan
Brian Quinlan added the comment:
If no one has short-term plans to improve multiprocessing.connection.wait, then
I'll update the docs to list this limitation, ensure that ProcessPoolExecutor
never defaults to >60 processes on windows and raises a ValueError if the user
explicitly
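A hypothetical guard along those lines, capping a pool's worker count below the Windows wait-handle limit so the same code runs everywhere (the constant name is invented for the sketch):

```python
import os
from concurrent.futures import ProcessPoolExecutor

# Stay under the WaitForMultipleObjects-derived cap that applies on Windows.
WINDOWS_MAX_WORKERS = 61
workers = min(os.cpu_count() or 1, WINDOWS_MAX_WORKERS)
pool = ProcessPoolExecutor(max_workers=workers)
pool.shutdown()
print(workers)
```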
Brian Quinlan added the comment:
>> The current behavior is explicitly documented, so presumably
>> it can't be (easily) changed
And it isn't clear that it would be desirable to change this even if it were
possible - doing structured resource clean-up seems consi
Brian Skinn added the comment:
Ahh, this *will* break some doctests: any with blank PS2 lines in the 'source'
portion without the explicit trailing space:
1] >>> def foo():
2] ...print("bar")
3] ...
4] ...print("baz")
5] >>>
Brian Skinn added the comment:
LOL. No special thanks necessary, that last post only turned into something
coherent (and possibly correct, it seems...) after a LOT of diving into the
source, fiddling with the code, and (((re-)re-)re-)writing! Believe me, it
reads as a lot more knowledgeable