[issue44219] Opening a file holds the GIL when it calls "isatty()"

2021-09-09 Thread Vincent Michel

Vincent Michel  added the comment:

There are a couple of reasons why I did not make changes to the stdstream 
related functions. 

The first one is that a PR with many changes is less likely to get reviewed and 
merged than a PR with fewer changes. The second one is that it's hard for me to 
make sure that those functions are always called with the GIL already held. 
Maybe some of them never hold the GIL in the first place, and I'm not familiar 
enough with the code base to tell.

So in the end, it will probably be up to the core dev reviewing the PR, but it's better to start small.

> Besides, nothing prevents somebody from starting a FUSE file system and then 
> redirecting stdout to it …

I ran some checks and `python -c 'input()' < my-fuse-mountpoint/data_in` does 
indeed trigger an ioctl call to the corresponding fuse file system. But how 
would `my-fuse-mountpoint/data_in` be the stdin for the process that itself 
starts the fuse filesystem? I don't see how to generate this deadlock, apart 
from setting up interprocess communication between the fuse process and the 
`my-fuse-mountpoint/data_in`-as-stdin process.

--




[issue44129] zipfile: Add descriptive global variables for general purpose bit flags

2021-09-09 Thread Vincent Michel


Change by Vincent Michel :


--
pull_requests:  -26670




[issue44219] Opening a file holds the GIL when it calls "isatty()"

2021-09-09 Thread Vincent Michel


Change by Vincent Michel :


--
pull_requests: +26671
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/28250




[issue44129] zipfile: Add descriptive global variables for general purpose bit flags

2021-09-09 Thread Vincent Michel


Change by Vincent Michel :


--
nosy: +vxgmichel
nosy_count: 2.0 -> 3.0
pull_requests: +26670
pull_request: https://github.com/python/cpython/pull/28250




[issue44219] Opening a file holds the GIL when it calls "isatty()"

2021-09-08 Thread Vincent Michel


Vincent Michel  added the comment:

Here's a possible patch that fixes the 3 unprotected calls to `isatty` 
mentioned above. It successfully passes the test suite. I can submit a PR with 
this patch if necessary.

--
keywords: +patch
Added file: https://bugs.python.org/file50270/bpo-44219.patch




[issue44219] Opening a file holds the GIL when it calls "isatty()"

2021-09-07 Thread Vincent Michel


Vincent Michel  added the comment:

My team ran into this issue while developing a fuse application too.

In an effort to help this issue move forward, I tried to list all occurrences 
of the `isatty` C function in the cpython code base. I found 14 of them.

9 of them are directly related to stdin/stdout/stderr, so it's probably not 
crucial to release the GIL for those occurrences:
- `main.c:stdin_is_interactive`
- `main.c:pymain_import_readline`
- `readline.c:setup_readline`
- `bltinmodule.c:builtin_input_impl` (x2)
- `frozenmain.c:Py_FrozenMain`
- `pylifecycle.c:Py_FdIsInteractive` (x2)
- `fileobject.c:stdprinter_isatty` (GIL is actually released for this one)

Out of the remaining 4, only 1 releases the GIL:
- `fileio.c:_io_FileIO_isatty_impl`: used for `FileIO.isatty`

Which gives 3 occurrences of non-stdstream specific usage of `isatty` that do 
not release the GIL:
- `posixmodule.c:os_isatty_impl`: used by `os.isatty`
- `fileutils.c:_Py_device_encoding`: used by `TextIOWrapper.__init__`
- `fileutils.c:_Py_write_impl`: Windows-specific, see issue #11395

The first one is used by `os.isatty` which means this call can also deadlock. I 
did manage to reproduce it with a simple fuse loopback file system: 
https://github.com/fusepy/fusepy/blob/master/examples/loopback.py
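
For reference, here is roughly how that reproduction can be set up. This is a sketch under several assumptions: fusepy is installed, the `Loopback` class from the linked example is importable, and the source and mount directories exist; all paths are illustrative.

```python
import os
import threading
import time

from fuse import FUSE          # fusepy
from loopback import Loopback  # class from the linked loopback example

# Serve the FUSE file system from a thread of this very process. Each
# FUSE request is handled by a Python callback, which must acquire the
# GIL before it can run.
threading.Thread(
    target=lambda: FUSE(Loopback("/tmp/src"), "/tmp/mnt", foreground=True),
    daemon=True,
).start()
time.sleep(1)  # crude: give the mount a moment to come up

fd = os.open("/tmp/mnt/some_file", os.O_RDONLY)  # fine: os.open releases the GIL
os.isatty(fd)  # isatty -> ioctl served by our own FUSE thread, GIL held: deadlock
```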

The second one is the one found by @smurfix and gets triggered when `io.open()` 
is used in text mode.

The third one only triggers on Windows when writing more than 32767 bytes to a 
file descriptor. A comment points to issue #11395 
(https://bugs.python.org/issue11395). Also, it seems from the function 
signature that this function might be called with or without the GIL held, 
which might cause the fix to be a bit more complicated than the first two use 
cases.

I hope this helps.

--
nosy: +vxgmichel




[issue39484] time_ns() and time() cannot be compared on windows

2020-02-03 Thread Vincent Michel


Change by Vincent Michel :


Added file: https://bugs.python.org/file48883/comparing_conversions.py




[issue39484] time_ns() and time() cannot be compared on windows

2020-02-03 Thread Vincent Michel


Vincent Michel  added the comment:

@mark.dickinson
> To be clear: the following is flawed as an accuracy test, because the 
> *multiplication* by 1e9 introduces additional error.

Interesting, I completely missed that! 

But did you notice that the full conversion might still perform better when 
using only floats?

```
>>> from fractions import Fraction as F
>>> r = 1580301619906185300
>>> abs(int(r / 1e9 * 1e9) - r)
84
>>> abs(round(F(r / 10**9) * 10**9) - r)
89
```

I wanted to figure out how often that happens, so I updated my plotting; you can 
find the code and plot attached.

Notice how both methods seem to perform equally well (the difference of the 
absolute errors seems to average to zero). I have no idea why that happens, 
though.
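
For the record, the kind of exact comparison used here can be sketched as follows (a sketch, not the attached script; `Fraction` arithmetic is exact, so only the division under test introduces rounding error):

```python
from fractions import Fraction

def err_float(ns):
    # Absolute error (in seconds) of the C-style double division ns / 1e9
    return abs(Fraction(ns / 1e9) - Fraction(ns, 10**9))

def err_int(ns):
    # Absolute error (in seconds) of Python's true division ns / 10**9
    return abs(Fraction(ns / 10**9) - Fraction(ns, 10**9))

base = 1580301619906185300
diffs = [err_float(base + i) - err_int(base + i) for i in range(5000)]
print(float(sum(diffs) / len(diffs)))  # averages out close to zero
```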

--
Added file: 
https://bugs.python.org/file48882/Comparing_conversions_over_5_us.png




[issue39484] time_ns() and time() cannot be compared on windows

2020-02-03 Thread Vincent Michel


Vincent Michel  added the comment:

@serhiy.storchaka
> 1580301619906185300/10**9 is more accurate than 1580301619906185300/1e9.

I don't know exactly what `F` represents in your example, but here is what I get:

>>> r = 1580301619906185300
>>> int(r / 10**9 * 10**9) - r
172
>>> int(r / 1e9 * 10**9) - r
-84

@vstinner
> I suggest to only document that time.time() is less accurate than 
> time.time_ns().

Sounds good!

--




[issue39484] time_ns() and time() cannot be compared on windows

2020-02-03 Thread Vincent Michel


Vincent Michel  added the comment:

> The problem is that there is a double rounding in [...]

Actually, `float(x) / 1e9` and `x / 1e9` seem to produce the same results:

```
import time
import itertools

now = time.time_ns()

for x in itertools.count(now):
    assert float(x) / 1e9 == x / 1e9
```

> The formula `time = time_ns / 10**9` may be more accurate.

Well, that seems not to be the case; see the plots and the corresponding code. I 
might have made a mistake though, so please let me know if I got something wrong :)

--




[issue39484] time_ns() and time() cannot be compared on windows

2020-02-03 Thread Vincent Michel


Vincent Michel  added the comment:

Thanks for your answers, that was very informative!

> >>> a/10**9
> 1580301619.9061854
> >>> a/1e9
> 1580301619.9061852
>
> I'm not sure which one is "correct".


Originally, I thought `a/10**9` was more precise because I ran into the 
following case while working with hundreds of nanoseconds (because of Windows):

```
r = 1580301619906185900
print("Ref   :", r)
print("10**7 :", int(r // 100 / 10**7 * 10 ** 7) * 100)
print("1e7   :", int(r // 100 / 1e7 * 10 ** 7) * 100)
print("10**9 :", int(r / 10**9 * 10**9))
print("1e9   :", int(r / 1e9 * 10**9))
[...]
Ref   : 1580301619906185900
10**7 : 1580301619906185800
1e7   : 1580301619906186200
10**9 : 1580301619906185984
1e9   : 1580301619906185984
```

I decided to plot the conversion errors for different division methods over a 
short period of time. It turns out that:
- `/1e9` is equally or more precise than `/10**9` when working with nanoseconds
- `/10**7` is equally or more precise than `/1e7` when working with hundreds of 
nanoseconds

This result really surprised me; I have no idea what the reason behind it is.

See the plots and code attached for more information.

In any case, this means there is no reason to change the division in 
`_PyTime_AsSecondsDouble`, so closing this issue as wontfix sounds fine :)

---

As a side note, the only place I could find something similar mentioned in the 
docs is in the `os.stat_result.st_ctime_ns` documentation: 

https://docs.python.org/3.8/library/os.html#os.stat_result.st_ctime_ns

> Similarly, although st_atime_ns, st_mtime_ns, and st_ctime_ns are 
> always expressed in nanoseconds, many systems do not provide 
> nanosecond precision. On systems that do provide nanosecond precision,
> the floating-point object used to store st_atime, st_mtime, and 
> st_ctime cannot preserve all of it, and as such will be slightly 
> inexact. If you need the exact timestamps you should always use 
> st_atime_ns, st_mtime_ns, and st_ctime_ns.
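
For instance, a quick way to see the magnitude of the loss (my own sketch, not from the docs; `math.ulp` requires Python 3.9+):

```python
import math
import time

ns = time.time_ns()
print(ns.bit_length())  # ~61 significant bits, vs the 53 a double can hold

# Gap between consecutive representable floats around time.time(),
# expressed in nanoseconds: a couple hundred ns nowadays.
print(math.ulp(ns / 10**9) * 10**9)
```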

Maybe this kind of limitation should also be mentioned in the documentation of 
`time.time_ns()`?

--
Added file: 
https://bugs.python.org/file48880/Comparing_division_errors_over_10_us.png




[issue39484] time_ns() and time() cannot be compared on windows

2020-02-03 Thread Vincent Michel


Change by Vincent Michel :


Added file: https://bugs.python.org/file48881/comparing_errors.py




[issue39484] time_ns() and time() cannot be compared on windows

2020-01-29 Thread Vincent Michel


Vincent Michel  added the comment:

I thought about it a bit more and I realized there is no way to recover the 
time in hundreds of nanoseconds from the float produced by `time.time()` (since 
the Windows time currently takes 54 bits and will take 55 bits in 2028). 

That means `time()` and `time_ns()` cannot be compared by converting `time()` to 
nanoseconds, but it might still make sense to compare them by converting 
`time_ns()` to seconds (which is apparently broken at the moment).

If that makes sense, a possible roadmap to tackle this problem would be:
- fix `_PyTime_AsSecondsDouble` so that `time.time_ns() / 10**9 == time.time()`
- add a warning in the documentation that one should be careful when comparing 
the timestamps produced by `time()` and `time_ns()` (in particular, `time()` 
should not be converted to nanoseconds)
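
A quick sanity check of those bit counts, as a sketch (it assumes the Windows clock value is the Unix time expressed in 100 ns units):

```python
import datetime

ns = 1580301619906185300          # time_ns() value from this report
print((ns // 100).bit_length())   # 54 bits today

# First moment whose timestamp needs 55 bits (around 2027/2028):
epoch = datetime.datetime(1970, 1, 1)
print(epoch + datetime.timedelta(microseconds=2**54 / 10))
```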

--




[issue39484] time_ns() and time() cannot be compared on windows

2020-01-29 Thread Vincent Michel


New submission from Vincent Michel :

On windows, the timestamps produced by time.time() often end up being equal 
because of the 15 ms resolution:

>>> time.time(), time.time()
(1580301469.6875124, 1580301469.6875124)

The problem I noticed is that a value produced by time_ns() might end up being 
higher than a value produced by time(), even though time_ns() was called before:

>>> a, b = time.time_ns(), time.time()
>>> a, b
(1580301619906185300, 1580301619.9061852)
>>> a / 10**9 <= b
False

This break in causality can lead to very obscure bugs since timestamps are 
often compared to one another. Note that those timestamps can also come from 
non-Python sources, e.g. a C program using `GetSystemTimeAsFileTime`.

This problem seems to be related to the conversion `_PyTime_AsSecondsDouble`:
https://github.com/python/cpython/blob/f1c19031fd5f4cf6faad539e30796b42954527db/Python/pytime.c#L460-L461

# Float produced by `time.time()`
>>> b.hex()
'0x1.78c5f4cf9fef0p+30'
# Basically what `_PyTime_AsSecondsDouble` does:
>>> (float(a) / 10**9).hex()
'0x1.78c5f4cf9fef0p+30'
# What I would expect from `time.time()`
>>> (a / 10**9).hex()
'0x1.78c5f4cf9fef1p+30'

However, I don't know if this would be enough to fix all causality issues since, 
as Tim Peters noted in another thread:

> Just noting for the record that a C double (time.time() result) isn't quite 
> enough to hold a full-precision Windows time regardless

(https://bugs.python.org/issue19738#msg204112)

--
components: Library (Lib)
messages: 360958
nosy: vxgmichel
priority: normal
severity: normal
status: open
title: time_ns() and time() cannot be compared on windows
type: behavior
versions: Python 3.7, Python 3.8




[issue35409] Async generator might re-throw GeneratorExit on aclose()

2019-07-13 Thread Vincent Michel


Change by Vincent Michel :


--
pull_requests: +14550
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/14755




[issue31062] socket.makefile does not handle line buffering

2019-03-16 Thread Vincent Michel


Vincent Michel  added the comment:

I ran into this issue too so I went ahead and created a pull request 
(https://github.com/python/cpython/pull/12370).

--
nosy: +vxgmichel
versions: +Python 3.7, Python 3.8




[issue31062] socket.makefile does not handle line buffering

2019-03-16 Thread Vincent Michel


Change by Vincent Michel :


--
keywords: +patch
pull_requests: +12333
stage:  -> patch review




[issue35409] Async generator might re-throw GeneratorExit on aclose()

2018-12-04 Thread Vincent Michel


Change by Vincent Michel :


--
keywords: +patch
Added file: https://bugs.python.org/file47974/patch.diff




[issue35409] Async generator might re-throw GeneratorExit on aclose()

2018-12-04 Thread Vincent Michel


Change by Vincent Michel :


Added file: https://bugs.python.org/file47973/test.py




[issue35409] Async generator might re-throw GeneratorExit on aclose()

2018-12-04 Thread Vincent Michel


New submission from Vincent Michel :

As far as I can tell, this issue is different than: 
https://bugs.python.org/issue34730

I noticed `async_gen.aclose()` raises a GeneratorExit exception if the async 
generator finalization awaits and silences a failing unfinished future (see 
example.py).

This seems to be related to a bug in `async_gen_athrow_throw`. In fact, 
`async_gen.aclose().throw(exc)` does not silence GeneratorExit exceptions. This 
behavior can be reproduced without asyncio (see test.py).

Attached is a possible patch, although I'm not too comfortable messing with the 
Python C internals. I can make a PR if necessary.

--
components: Interpreter Core
files: example.py
messages: 331043
nosy: vxgmichel
priority: normal
severity: normal
status: open
title: Async generator might re-throw GeneratorExit on aclose()
type: behavior
versions: Python 3.6, Python 3.7, Python 3.8
Added file: https://bugs.python.org/file47972/example.py




[issue35065] Reading received data from a closed TCP stream using `StreamReader.read` might hang forever

2018-10-29 Thread Vincent Michel


Change by Vincent Michel :


--
pull_requests: +9528
stage:  -> patch review




[issue35065] Reading received data from a closed TCP stream using `StreamReader.read` might hang forever

2018-10-26 Thread Vincent Michel


Vincent Michel  added the comment:

I found the culprit:
https://github.com/python/cpython/blob/a05bef4f5be1bcd0df63ec0eb88b64fdde593a86/Lib/asyncio/streams.py#L350

The call to `_untrack_reader` is performed too soon. Closing the transport 
causes `protocol.connection_lost()` to be "called soon" but by the time it is 
actually executed, the stream reader has been "untracked".  Since the protocol 
doesn't know the stream reader anymore, it has no way to feed it the EOF.

The fix attached removes the `_untrack_reader` call and definition altogether. 
I don't really see the point of this method since one has to wait for 
`connection_lost` to be executed before untracking the reader, but 
`connection_lost` already gets rid of the reader reference.

With this fix, calling `writer.close` then awaiting `writer.wait_closed` (or 
awaiting `writer.aclose`) should:
- close the transport
- schedule `protocol.connection_lost`
- wait for the protocol to be closed
- run `protocol.connection_lost`
- feed the EOF to the reader
- set the protocol as closed
- get rid of the reader reference in the protocol
- return (making aclose causal and safe)
- the reader can then be safely garbage collected

But maybe I'm missing something about `_untrack_reader`?
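
In other words, the fix is meant to make the following pattern causal and safe (a sketch):

    import asyncio

    async def shutdown(reader, writer):
        writer.close()              # closes the transport
        await writer.wait_closed()  # returns only once connection_lost has run
        return await reader.read()  # reader was fed EOF: returns instead of hanging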

--
keywords: +patch
Added file: https://bugs.python.org/file47893/patch-bpo-35065.diff




[issue35065] Reading received data from a closed TCP stream using `StreamReader.read` might hang forever

2018-10-26 Thread Vincent Michel


Vincent Michel  added the comment:

Hi Andrew!

I reverted the commit associated with the following PR, and the hanging issue 
disappeared:
https://github.com/python/cpython/pull/9201

I'll look into it.

--
type:  -> behavior




[issue35065] Reading received data from a closed TCP stream using `StreamReader.read` might hang forever

2018-10-25 Thread Vincent Michel


New submission from Vincent Michel :

I'm not sure whether it is intended or not, but I noticed a change in the 
behavior of `StreamReader` between versions 3.7 and 3.8.

Basically, reading some received data from a closed TCP stream using 
`StreamReader.read` might hang forever, under certain conditions.

I'm not sure what those conditions are but I managed to reproduce the issue 
consistently with the following workflow:
 - server writes some data
 - client reads a part of the data
 - client closes the writer
 - server closes the writer
 - client tries to read the remaining data

The test attached implements the behavior. It fails on 3.8 but passes on 3.7.
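
For reference, the workflow translates to something like this with plain asyncio streams (my reconstruction, not the attached stuck_on_py38.py; the exact ordering of the closes may matter):

    import asyncio

    async def main():
        async def handle(reader, writer):
            writer.write(b"hello world")  # server writes some data
            await writer.drain()
            writer.close()                # server closes the writer

        server = await asyncio.start_server(handle, "127.0.0.1", 0)
        host, port = server.sockets[0].getsockname()

        reader, writer = await asyncio.open_connection(host, port)
        await reader.read(5)        # client reads a part of the data
        writer.close()              # client closes the writer
        print(await reader.read())  # hangs forever on 3.8, per this report

    asyncio.run(main())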

--
components: asyncio
files: stuck_on_py38.py
messages: 328430
nosy: asvetlov, vxgmichel, yselivanov
priority: normal
severity: normal
status: open
title: Reading received data from a closed TCP stream using `StreamReader.read` 
might hang forever
versions: Python 3.8
Added file: https://bugs.python.org/file47891/stuck_on_py38.py




[issue31922] Can't receive replies from multicast UDP with asyncio

2017-11-02 Thread Vincent Michel

New submission from Vincent Michel <vxgmic...@gmail.com>:

It's currently not possible to receive replies from multicast UDP with asyncio, 
as reported in the following issue:

https://github.com/python/asyncio/issues/480

That's because asyncio connects the UDP socket to the broadcast address, 
causing all traffic from the receivers to be dropped, as explained in this 
comment:
https://github.com/python/asyncio/issues/480#issuecomment-278703828

I already submitted a PR on the cpython repository:
https://github.com/python/cpython/pull/423

I figured it was better to report the issue here for better tracking.
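
The mechanism is easy to see with plain sockets (a sketch; the group address and port are made up):

    import socket

    GROUP, PORT = "239.255.0.1", 9999  # hypothetical multicast group

    # What asyncio did: connect() the UDP socket to the group address.
    # A connected UDP socket only accepts datagrams whose source matches
    # the connected peer, so unicast replies from receivers are dropped.
    connected = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    connected.connect((GROUP, PORT))
    connected.send(b"ping")
    # connected.recv(1024)  # would never see the replies

    # An unconnected socket can receive replies from any source.
    unconnected = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    unconnected.sendto(b"ping", (GROUP, PORT))
    # data, addr = unconnected.recvfrom(1024)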

--
components: asyncio
messages: 305415
nosy: vxgmichel, yselivanov
priority: normal
pull_requests: 4196
severity: normal
status: open
title: Can't receive replies from multicast UDP with asyncio
type: behavior
versions: Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8




[issue29627] configparser.ConfigParser.read() has undocumented/unexpected behaviour when given a bytestring path.

2017-09-07 Thread Vincent Michel

Changes by Vincent Michel <vxgmic...@gmail.com>:


--
pull_requests: +3418




[issue31307] ConfigParser.read silently fails if filenames argument is a byte string

2017-09-07 Thread Vincent Michel

Changes by Vincent Michel <vxgmic...@gmail.com>:


--
pull_requests: +3417




[issue31307] ConfigParser.read silently fails if filenames argument is a byte string

2017-08-30 Thread Vincent Michel

New submission from Vincent Michel:

Calling `config_parser.read` with `'test'` is equivalent to:

config_parser.read(['test'])

while calling `config_parser.read` with `b'test'` is treated as:

config_parser.read([116, 101, 115, 116])

which means Python will try to open the file descriptors 101, 115 and 116.
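
The mechanism is easy to demonstrate in isolation: iterating a str yields one-character filenames, while iterating a bytes object yields integers, and `open()` treats an integer argument as an already-open file descriptor (a sketch):

    list('test')   # ['t', 'e', 's', 't']
    list(b'test')  # [116, 101, 115, 116]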

I don't know if byte paths should be supported, but this is probably not the 
expected behavior.

The method code: 
https://github.com/python/cpython/blob/master/Lib/configparser.py#L678-L702

--
components: Library (Lib)
messages: 301026
nosy: vxgmichel
priority: normal
severity: normal
status: open
title: ConfigParser.read silently fails if filenames argument is a byte string
type: behavior
versions: Python 3.7




[issue26969] ascynio should provide a policy to address pass-loop-everywhere problem

2016-05-23 Thread Vincent Michel

Vincent Michel added the comment:

I agree with Yury's ideas about the implementation of this feature. However, it 
is a bit confusing to have `asyncio.get_event_loop` defined as:

    def get_event_loop():
        policy = get_event_loop_policy()
        return policy.get_running_loop() or policy.get_event_loop()

One would expect `asyncio.get_event_loop` to simply work as a shortcut for:

get_event_loop_policy().get_event_loop()

The root of the problem is that we're trying to define 3 concepts with only 2 
wordings. I think it is possible to solve this issue quite easily by renaming 
`AbstractLoopPolicy.get_event_loop` to `AbstractLoopPolicy.get_default_loop`. 
We'd end up with the following definitions:

- default_loop: current default loop as defined in the policy
- running_loop: current running loop (thread-wise) if any 
- event_loop: running loop if any, default_loop otherwise
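
Under that renaming, the shortcut above would read as one would expect (a sketch):

    def get_event_loop():
        policy = get_event_loop_policy()
        return policy.get_running_loop() or policy.get_default_loop()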

Changing the API is always annoying, but in this case it would only affect the 
event loop policies. This is a pretty specific use case, and they'll have to be 
updated anyway in order to implement `set_running_loop`. `asyncio.set_event_loop` 
might be affected too, although it could be kept or deprecated.

Do you think it's worth the trouble?

--
nosy: +vxgmichel




[issue25304] Add run_coroutine_threadsafe() to asyncio

2015-10-05 Thread Vincent Michel

Vincent Michel added the comment:

I attached a patch that should sum up all the points we discussed.

I replaced the `call_soon_threadsafe` example with:
  loop.call_soon_threadsafe(callback, *args)
because I couldn't find a simple specific usage. Let me know if you think of a 
better example.

--
Added file: http://bugs.python.org/file40691/run_coroutine_threadsafe_2.patch




[issue25304] Add run_coroutine_threadsafe() to asyncio

2015-10-05 Thread Vincent Michel

Vincent Michel added the comment:

> The docs look good.

Should I add a note to explain why the loop argument has to be explicitly 
passed? (there is a note at the beginning of the `task functions` section 
stating "In the functions below, the optional loop argument ...")

> What do you need to add to the concurrency and multithreading section?

This section provides an example to schedule a coroutine from a different 
thread using `ensure_future` and `call_soon_threadsafe`. This example should be 
replaced with another usage of `call_soon_threadsafe` and another paragraph 
about `run_coroutine_threadsafe` should be added.

> I agree on the try/except

Do you think the exception should be re-raised for the logger?

> can you add that to the same diff? 

All right, should I make another PR on the asyncio github repo as well?

--




[issue25304] Add run_coroutine_threadsafe() to asyncio

2015-10-05 Thread Vincent Michel

Vincent Michel added the comment:

I attached the first version of the documentation for 
`run_coroutine_threadsafe`. The `Concurrency and multithreading` section also 
needs to be updated but I could already use some feedback.

Also, I think we should add a `try-except` in the callback function, especially 
since users can set their own task factory. For instance:
  
loop.set_task_factory(lambda loop, coro: i_raise_an_exception)

will cause the future returned by `run_coroutine_threadsafe` to wait forever. 
Instead, we could have:

    except Exception as exc:
        if future.set_running_or_notify_cancel():
            future.set_exception(exc)

inside the callback to notify the future.
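
Put together, the guarded callback might look like this (a sketch of my suggestion, not the committed code; `_chain_future` is a private asyncio helper):

    import concurrent.futures
    from asyncio import ensure_future
    from asyncio.futures import _chain_future

    def run_coroutine_threadsafe(coro, loop):
        future = concurrent.futures.Future()

        def callback():
            try:
                _chain_future(ensure_future(coro, loop=loop), future)
            except Exception as exc:
                if future.set_running_or_notify_cancel():
                    future.set_exception(exc)
                raise

        loop.call_soon_threadsafe(callback)
        return future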

--
keywords: +patch
Added file: http://bugs.python.org/file40689/run_coroutine_threadsafe_doc.patch




[issue25304] Add run_coroutine_threadsafe() to asyncio

2015-10-04 Thread Vincent Michel

Vincent Michel added the comment:

While I was working on the documentation update, I realized that what we called 
`run_coroutine_threadsafe` is actually a thread-safe version of 
`ensure_future`. What about renaming it to `ensure_future_threadsafe`? It might 
be a bit late since `run_coroutine_threadsafe` has been committed, but I think 
it is worth considering. I can see two benefits:

- it is less confusing, because it mirrors the name and prototype of 
`ensure_future`
- it accepts futures and awaitables

The docstring would be "Wrap a coroutine, an awaitable or a future in a 
concurrent.futures.Future.". The documentation would explain that it works like 
`ensure_future` except:
1. its execution is threadsafe
2. it provides a thread-safe future instead of a regular future

I attached an implementation of it. Also, note that I added a `try-except` in 
the callback, which is not mandatory but probably a good thing to have.

In any case, I'll keep working on the documentation update.

--
Added file: http://bugs.python.org/file40672/ensure_future_threadsafe.py




[issue18209] Bytearray type not supported as a mutable object in the fcntl.ioctl function

2013-06-14 Thread Vincent Michel

New submission from Vincent Michel:

The bytearray type is a mutable object that supports the read-write buffer 
interface. The fcntl.ioctl() function is supposed to handle mutable objects 
(such as array.array) for system calls, in order to pass objects that are 
more than 1024 bytes long.

The problem is that in Python 2.7, the bytearray type is not supported as a 
mutable object by the fcntl.ioctl function. In Python 3.2, it works perfectly.

In the specific case where a large C structure is needed (more than 1024 
bytes), the bytearray type is extremely useful compared to the array.array 
type, which is adapted to C arrays.

Example :

 from fcntl import ioctl
 from struct import pack

 file_handle = open('/dev/my_device')
 arg = bytearray()
 arg += pack('IL', 1, 2)
 command = 0
 ioctl(file_handle, command, arg)

Traceback (most recent call last):
  File "<pyshell#22>", line 1, in <module>
    ioctl(file_handle, command, arg)
TypeError: an integer is required
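
For completeness, the 2.7 workaround is to use array.array, which is accepted as a mutable buffer (a sketch reusing the placeholders above; the mutate flag is the documented fourth argument of fcntl.ioctl):

 import array
 from fcntl import ioctl
 from struct import pack

 file_handle = open('/dev/my_device')
 command = 0
 arg = array.array('B', pack('IL', 1, 2))
 ioctl(file_handle, command, arg, True)  # mutate_flag=True: arg is updated in place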

--
components: IO
messages: 191110
nosy: vxgmichel
priority: normal
severity: normal
status: open
title: Bytearray type not supported as a mutable object in the fcntl.ioctl 
function
type: behavior
versions: Python 2.7
