[issue34852] Counter-intuitive behavior of Server.close() / wait_closed()
Aymeric Augustin added the comment: Would it make sense to add `await asyncio.sleep(0)` in `Server.wait_closed()` to ensure that all connections reach `connection_made()` before `wait_closed()` returns? This would be fragile but it would be an improvement over the current behavior, wouldn't it? -- ___ Python tracker <https://bugs.python.org/issue34852> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue41597] Fatal Error on SSL Transport - sslv3 alert bad record mac
Aymeric Augustin added the comment: websockets doesn't use threads (except where asyncio uses them under the hood e.g. for resolving addresses). Perhaps the OP's code passes asyncio connections (wrapped in websocket connections) unsafely across threads. -- nosy: +aymeric.augustin ___ Python tracker <https://bugs.python.org/issue41597> ___
[issue37639] What happened to StreamReaderProtocol?
Change by Aymeric Augustin : -- stage: -> resolved status: open -> closed ___ Python tracker <https://bugs.python.org/issue37639> ___
[issue37639] What happened to StreamReaderProtocol?
New submission from Aymeric Augustin :

`StreamReaderProtocol` was a public API from Python 3.4 to 3.6:

- https://docs.python.org/3.4/library/asyncio-stream.html?highlight=streamreaderprotocol#streamreaderprotocol
- https://docs.python.org/3.5/library/asyncio-stream.html?highlight=streamreaderprotocol#streamreaderprotocol
- https://docs.python.org/3.6/library/asyncio-stream.html?highlight=streamreaderprotocol#streamreaderprotocol

In Python 3.7, it vanished from the documentation. I couldn't find which commit made that change and why.

- https://docs.python.org/3.7/search.html?q=streamreaderprotocol&check_keywords=yes&area=default

Apparently it's deprecated in Python 3.8 as a side effect of https://bugs.python.org/issue36889. Unfortunately, that ticket says nothing about `StreamReaderProtocol` and how to replace it. The documentation added when fixing that ticket doesn't say anything about `StreamReaderProtocol` either.

It would be less difficult to swallow the rate of API churn in asyncio if there were at least a few words about why it's happening and what concrete benefits it brings to users. I can understand the need to change APIs, even public APIs. However, when there's zero information in tickets, commit messages and documentation, I don't know why the change is happening and how I'm supposed to adapt my code. That's stressful. If what I'm doing is deprecated, it's probably a bad idea, but I can't know why.

I'm sure you aren't trying to convince me to avoid stream utilities in asyncio and write my own buffering. However, that's the result you're getting :-/ Providing a quick explanation of deprecations / removals of public APIs would help.

(Somewhat related: https://bugs.python.org/issue24885)

-- components: asyncio messages: 348226 nosy: asvetlov, aymeric.augustin, yselivanov priority: normal severity: normal status: open title: What happened to StreamReaderProtocol?
versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue37639> ___
[issue34506] Traceback logged when SSL handshake fails
Aymeric Augustin added the comment: Yes, that seems reasonable. -- ___ Python tracker <https://bugs.python.org/issue34506> ___
[issue34506] Traceback logged when SSL handshake fails
Aymeric Augustin added the comment: The same issue was reported in the bug tracker for websockets: https://github.com/aaugustin/websockets/issues/614 -- nosy: +aymeric.augustin ___ Python tracker <https://bugs.python.org/issue34506> ___
[issue29980] OSError: multiple exceptions should preserve the exception type if it is common
Aymeric Augustin added the comment: A very similar issue came up here: https://github.com/aaugustin/websockets/issues/593 When raising an exception that gathers multiple sub-exceptions, it would be nice to be able to iterate the exception to access the sub-exceptions. -- nosy: +aymeric.augustin ___ Python tracker <https://bugs.python.org/issue29980> ___
[issue34852] Counter-intuitive behavior of Server.close() / wait_closed()
Aymeric Augustin added the comment: For now I will use the following hack:

    server.close()
    await server.wait_closed()
    # Workaround for wait_closed() not quite doing what I want.
    await asyncio.sleep(0)
    # I believe that all accepted connections have reached
    # connection_made() at this point.

-- ___ Python tracker <https://bugs.python.org/issue34852> ___
[issue34852] Counter-intuitive behavior of Server.close() / wait_closed()
New submission from Aymeric Augustin :

**Summary**

1. Is it correct for `Server.wait_closed()` (as implemented in asyncio) to be a no-op after `Server.close()`?
2. How can I tell that all incoming connections have been received by `connection_made()` after `Server.close()`?

**Details**

After calling `Server.close()`, `_sockets is None`, which makes `Server.wait_closed()` a no-op: it returns immediately without doing anything (as mentioned in https://bugs.python.org/issue33727).

I'm not sure why the docs suggest calling `wait_closed()` after `close()` if it's a no-op. My best guess is: "this design supports third-party event loops that require an asynchronous API for closing servers, but the built-in event loops don't need that". Does someone know?

I wrote a very simple server that merely accepts connections. I ran experiments where I saturate the server with incoming client connections and close it. I checked what happens around `close()` (and `wait_closed()` -- but since it doesn't do anything after `close()`, I'll just say `close()` from now on).

The current implementation appears to work as documented, assuming a rather low-level interpretation of the docs of `Server.close()`.

> Stop serving: close listening sockets and set the sockets attribute to None.

Correct -- I'm not seeing any `accept` calls in `BaseSelectorEventLoop._accept_connection` after `close()`.

> The sockets that represent existing incoming client connections are left open.

Correct -- if "existing incoming client connections" is interpreted as "client connections that have gone through `accept`".

> The server is closed asynchronously, use the wait_closed() coroutine to wait until the server is closed.

I'm seeing calls to `connection_made()` _after_ `close()` because `BaseSelectorEventLoop._accept_connection2` triggers `connection_made()` asynchronously with `call_soon()`. This is surprising for someone approaching asyncio from the public API rather than the internal implementation.
`connection_made()` is the first contact with new connections. The concept of "an existing incoming client connection for which `connection_made()` wasn't called yet" is unexpected.

This has practical consequences. Consider a server that keeps track of established connections via `connection_made` and `connection_lost`. If this server calls `Server.close()`, awaits `Server.wait_closed()`, makes a list of established connections and terminates them, there's no guarantee that all connections will be closed. Indeed, new connections may appear and call `connection_made()` after `close()` and `wait_closed()` have returned! `wait_closed()` seems ineffective for this use case.

-- components: asyncio messages: 326725 nosy: asvetlov, aymeric.augustin, yselivanov priority: normal severity: normal status: open title: Counter-intuitive behavior of Server.close() / wait_closed() type: behavior versions: Python 3.7 ___ Python tracker <https://bugs.python.org/issue34852> ___
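A minimal sketch of the pattern described above (the class and function names are mine, not from any real codebase; the extra `sleep(0)` is the workaround discussed in this thread):

```python
import asyncio

class TrackingProtocol(asyncio.Protocol):
    """Registers live transports in connection_made / connection_lost."""

    connections = set()  # class-level registry of live transports

    def connection_made(self, transport):
        self.transport = transport
        TrackingProtocol.connections.add(transport)

    def connection_lost(self, exc):
        TrackingProtocol.connections.discard(self.transport)

async def shutdown(server):
    server.close()
    await server.wait_closed()
    # Workaround: yield to the event loop once so that pending
    # call_soon(connection_made) callbacks run before we snapshot
    # the registry; without this, late connections may be missed.
    await asyncio.sleep(0)
    for transport in list(TrackingProtocol.connections):
        transport.close()
```

Without the `sleep(0)`, a connection accepted just before `close()` can still reach `connection_made()` after the snapshot, and it would never be terminated.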
[issue33727] Server.wait_closed() doesn't always wait for its transports to finish
Aymeric Augustin added the comment: I believe this is by design; the documentation says:

> The sockets that represent existing incoming client connections are left open.

`Server` doesn't keep track of active transports serving requests. (That said, I haven't figured out what _waiters is here, so I could be wrong.)

-- nosy: +aymeric.augustin ___ Python tracker <https://bugs.python.org/issue33727> ___
[issue31491] Add is_closing() to asyncio.StreamWriter.
New submission from Aymeric Augustin: asyncio.StreamWriter wraps a transport. The first three documented methods of asyncio.BaseTransport are close(), is_closing() and get_extra_info(). It is somewhat surprising that StreamWriter provides close() and get_extra_info() but not is_closing(). I'm proposing that StreamWriter implement is_closing() as well. It's as simple as:

    def is_closing(self):
        return self._transport.is_closing()

Perhaps it was simply missed in https://github.com/python/asyncio/pull/291. It's trivial to work around this omission with stream_writer.transport.is_closing(). I'm only suggesting to add it for consistency.

-- components: asyncio messages: 302335 nosy: aymeric.augustin, yselivanov priority: normal severity: normal status: open title: Add is_closing() to asyncio.StreamWriter. type: enhancement versions: Python 3.7 ___ Python tracker <http://bugs.python.org/issue31491> ___
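A quick loopback demonstration of the workaround mentioned above, reaching through `transport`; the helper server here is only scaffolding for the example:

```python
import asyncio

async def main():
    async def handle(reader, writer):
        await reader.read()  # hold the connection open until the peer closes

    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)

    # Today the check has to go through the wrapped transport:
    before = writer.transport.is_closing()   # False while the writer is open
    writer.close()
    after = writer.transport.is_closing()    # True once close() was called

    server.close()
    await server.wait_closed()
    return before, after

print(asyncio.run(main()))  # (False, True)
```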
[issue29930] Waiting for asyncio.StreamWriter.drain() twice in parallel raises an AssertionError when the transport stopped writing
Aymeric Augustin added the comment: I worked around this bug in websockets by serializing access to `drain()` with a lock: https://github.com/aaugustin/websockets/commit/198b71537917adb44002573b14cbe23dbd4c21a2 I suspect this is inefficient but it beats crashing. -- ___ Python tracker <https://bugs.python.org/issue29930> ___
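The workaround boils down to something like this (a sketch; `LockedWriter` is an illustrative name, not the actual websockets code):

```python
import asyncio

class LockedWriter:
    """Serializes drain() calls on a StreamWriter-like object with a lock."""

    def __init__(self, writer):
        self._writer = writer
        self._drain_lock = asyncio.Lock()

    def write(self, data):
        self._writer.write(data)

    async def drain(self):
        # Only one coroutine at a time awaits the underlying drain(),
        # avoiding the AssertionError raised when several waiters pile
        # up while the transport has paused writing.
        async with self._drain_lock:
            await self._writer.drain()
```

As noted above, this serializes waiters that could in principle proceed concurrently, so it trades some efficiency for correctness.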
[issue29930] Waiting for asyncio.StreamWriter.drain() twice in parallel raises an AssertionError when the transport stopped writing
Aymeric Augustin added the comment: drain() returns when the write buffer reaches the low water mark, not when it's empty, so you don't have a guarantee that your bytes were written to the socket. https://github.com/python/cpython/blob/6f0eb93183519024cb360162bdd81b9faec97ba6/Lib/asyncio/protocols.py#L36-L40

The low water mark defaults to 64kB and the high water mark to 256kB. https://github.com/python/cpython/blob/6f0eb93183519024cb360162bdd81b9faec97ba6/Lib/asyncio/transports.py#L290

With websockets, the recommended way to ensure your message was received is:

    yield from ws.send(...)
    yield from ws.ping()

Given that TCP guarantees ordering, the ping response can only be received after the previous message was fully sent and received. Of course the ping can fail even though the message was received; that's the classical at-most-once vs. at-least-once question.

The technique you suggest requires setting the low and high water marks to 0. I'm not sure this is the best way to achieve your goals, since you still don't control the OS buffers.

-- ___ Python tracker <http://bugs.python.org/issue29930> ___
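For completeness, here is what "set both water marks to 0" looks like in practice; the loopback server is scaffolding for the example, and, as noted above, this still says nothing about the OS socket buffers:

```python
import asyncio

async def main():
    async def handle(reader, writer):
        await reader.read()  # consume everything the client sends

    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)

    # With high=0 and low=0, drain() only returns once asyncio's own
    # write buffer is empty -- the OS socket buffers remain opaque.
    writer.transport.set_write_buffer_limits(high=0, low=0)
    writer.write(b"x" * 1024)
    await writer.drain()

    writer.close()
    server.close()
    await server.wait_closed()
    return True

asyncio.run(main())
```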
[issue29930] Waiting for asyncio.StreamWriter.drain() twice in parallel raises an AssertionError when the transport stopped writing
Aymeric Augustin added the comment: For context, websockets calls `yield from self.writer.drain()` after each write in order to provide backpressure. If the output buffer fills up, calling API coroutines that write to the websocket connection becomes slow, and hopefully the backpressure will propagate (in a correctly implemented application). This is a straightforward application of the only use case described in the documentation.

I would find it annoying to have to serialize calls to drain() myself. It doesn't feel like something the "application" should care about. (websockets is the application from asyncio's perspective.)

I'm wondering if it could be a problem if a bunch of coroutines were waiting on drain() and got released simultaneously. I don't think it would be a problem for websockets. Since my use case seems typical, there's a good chance this also applies to other apps. So I'm in favor of simply allowing an arbitrary number of coroutines to wait on drain() in parallel, if that's feasible.

-- nosy: +aymeric.augustin ___ Python tracker <http://bugs.python.org/issue29930> ___
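The pattern described -- write, then await drain() to propagate backpressure -- in isolation, spelled with modern `async`/`await` rather than `yield from`:

```python
import asyncio

async def send(writer, data):
    writer.write(data)
    # If the transport's output buffer is above the high water mark,
    # drain() suspends the caller until it falls below the low water
    # mark, propagating backpressure up into the application.
    await writer.drain()
```

A slow reader on the other end of the connection makes `send()` slow, which is exactly the behavior the comment above relies on.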
[issue10740] sqlite3 module breaks transactions and potentially corrupts data
Aymeric Augustin added the comment: The latest patch removes the current statement parsing and unexpected implicit commits. It looks good to me.

Unfortunately it also introduces a new kind of statement parsing that detects DDL statements and doesn't open a transaction in that case, while it should. See https://github.com/ghaering/pysqlite/issues/105 for details.

As a consequence, this change will allow Django to remove one of the two special cases for transactions in SQLite: https://github.com/django/django/blob/master/django/db/transaction.py#L159-L166 but not the other: https://github.com/django/django/blob/271581df606b307d89c141e8b1a50ace763bea81/django/db/backends/base/base.py#L375-L404

It's still a step forward so I support merging it.

-- ___ Python tracker <http://bugs.python.org/issue10740> ___
[issue15873] datetime: add ability to parse RFC 3339 dates and times
Aymeric Augustin added the comment: martin.panter: of course, I'm fine with integrating that code into Python. deronnax: could you create a ticket on https://code.djangoproject.com/ highlighting the differences between Django's original implementation and the improved version that you worked on? I'd like to use the stdlib implementation when it's available and align Django's current implementation to whatever's getting into the stdlib (to prepare the transition, even though we aren't going to drop support for Python 3.5 soon). -- ___ Python tracker <http://bugs.python.org/issue15873> ___
[issue24539] StreamReaderProtocol.eof_received() should return True to keep the transport open
Aymeric Augustin added the comment: This change appears to have caused a non-obvious regression in `websockets`: https://github.com/aaugustin/websockets/issues/76 Perhaps I was relying on an implementation detail but that's annoying nonetheless. -- nosy: +aymeric.augustin ___ Python tracker <http://bugs.python.org/issue24539> ___
[issue10740] sqlite3 module breaks transactions and potentially corrupts data
Aymeric Augustin added the comment: Since a better solution seems to be around the corner, I'm withdrawing my proposal. I'm attaching the current state of my patch. It isn't functional. Mostly it proves that my API proposal is very inconvenient to implement in C. That's why I kept it around for over a year without completing it. -- Added file: http://bugs.python.org/file40252/issue10740-aaugustin.patch ___ Python tracker <http://bugs.python.org/issue10740> ___
[issue10740] sqlite3 module breaks transactions and potentially corrupts data
Aymeric Augustin added the comment: This bug appears to be fixed upstream: https://github.com/ghaering/pysqlite/commit/f254c534948c41c0ceb8cbabf0d4a2f547754739 -- ___ Python tracker <http://bugs.python.org/issue10740> ___
[issue24885] StreamReaderProtocol docs recommend using private API
New submission from Aymeric Augustin: https://docs.python.org/3/library/asyncio-stream.html?highlight=streamreaderprotocol#stream-functions says:

> (If you want to customize the StreamReader and/or StreamReaderProtocol classes, just copy the code – there’s really nothing special here except some convenience.)

StreamReaderProtocol inherits from streams.FlowControlMixin, which isn't documented. Applying this advice means, instead of inheriting from StreamReaderProtocol, inheriting from streams.FlowControlMixin -- in order to obtain the _drain_helper method, which StreamWriter.drain must wait for. At this point, inheriting StreamReaderProtocol appears to be the lesser evil.

I suggest removing this paragraph from the documentation.

-- components: asyncio messages: 248767 nosy: aymeric.augustin, gvanrossum, haypo, yselivanov priority: normal severity: normal status: open title: StreamReaderProtocol docs recommend using private API versions: Python 3.4 ___ Python tracker <http://bugs.python.org/issue24885> ___
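The inheritance chain in question can be checked directly; this assumes FlowControlMixin still lives (undocumented) in asyncio.streams, as it has for current CPython versions:

```python
import asyncio.streams as streams

# StreamReaderProtocol obtains _drain_helper() -- the coroutine that
# StreamWriter.drain() waits for -- from the undocumented
# FlowControlMixin base class in asyncio.streams.
assert issubclass(streams.StreamReaderProtocol, streams.FlowControlMixin)
assert hasattr(streams.FlowControlMixin, "_drain_helper")
```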
[issue10740] sqlite3 module breaks transactions and potentially corrupts data
Aymeric Augustin added the comment: Jim, I believe this API decision doesn't affect the patch in a major way. I'll write the rest of the patch and the committer who reviews it will decide. -- ___ Python tracker <http://bugs.python.org/issue10740> ___
[issue10740] sqlite3 module breaks transactions and potentially corrupts data
Aymeric Augustin added the comment:

> If this statement is accurate, then what you are proposing is just a different (presumably clearer) spelling for 'isolation_level = None'?

This statement is accurate but it doesn't cover the whole scope of what I'm attempting to fix. I'm also trying to address the serialization semantics.

SQLite always operates at the serializable isolation level. (I'm talking about the SQL concept of transaction isolation levels, not the unrelated isolation_level parameter in sqlite3.) Currently, since sqlite3 doesn't send a BEGIN before SELECT queries, the following scenario is possible:

- Process A reads data -- and the developer who carefully read PEP 249 expects to start a transaction
- Process B writes data
- Process A writes a modified version of what it read and commits

In this scenario A overwrites B, breaking the serialization guarantees expected in that case. A's initial read should have started a transaction (essentially acquiring a shared lock) and B should have been prevented from writing. There are two problems here:

- A's code looks like it's safe, but it isn't, because sqlite3 does some magic at odds with PEP 249.
- If you're aware of this and try to enforce the proper guarantees, sqlite3 still gets in the way.

> I also don't understand why we don't just fix the screwy behavior with regards to savepoint. It's hard to see how fixing that could be a backward compatibility problem (in a feature release), since you can't use savepoints without isolation_level=None as it stands now.

Of course, that would be a good step. But the problem doesn't stop there. What about other SQLite commands? What about "PRAGMA"? I suspect you'll have a hard time defining and maintaining a list of which commands should or shouldn't begin or commit a transaction. That's why I didn't choose this path.
Furthermore, sqlite3 would still enforce a weird mode where it sacrifices serializable isolation under the assumption that you're having a transactional and concurrent workload but you don't care about transactional isolation. There are other use cases that would be better served:

- by favoring serializability over concurrency (eg. the "application file format" case) -- that's what my "standards" mode does
- by favoring concurrency over transactions (eg. the "web app" and the "I don't care just save this data" cases) -- that's what "autocommit" is for

-- ___ Python tracker <http://bugs.python.org/issue10740> ___
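With today's module, process A can protect itself by taking control of transactions explicitly in autocommit mode; a sketch of A's side of the scenario above (the table and values are invented for illustration):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode:
# sqlite3 no longer issues BEGIN / COMMIT behind your back.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE counters (value INTEGER)")
conn.execute("INSERT INTO counters VALUES (1)")

# Start the transaction *before* reading, so the read and the
# subsequent write happen under the same serializable transaction
# and a concurrent writer (process B) cannot sneak in between.
conn.execute("BEGIN")
(value,) = conn.execute("SELECT value FROM counters").fetchone()
conn.execute("UPDATE counters SET value = ?", (value + 1,))
conn.execute("COMMIT")
```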
[issue10740] sqlite3 module breaks transactions and potentially corrupts data
Aymeric Augustin added the comment:

> The idea was to not take away from what's there already: The sqlite3 module already has a feature to inspect a command and begin or commit automatically. Just stripping that away *removes* a feature that has been available for a long time. I'd rather give the client more control instead of less and let him fine tune this behaviour.

For the sake of clarity, I haven't proposed to remove anything. I'm focusing on preserving the same behavior by default (with its advantages and drawbacks) and providing more control for users who need a different behavior, for instance people who use SQLite as an application file format or as a web application storage backend.

> When starting with Python I always thought that code like this is harmless:
> >>> conn = sqlite3.connect("test.db")
> >>> data = list(conn.execute("select * from mytable"))
> Currently it is, but only because sqlite3 module parses the select as DQL and does not lock the database.

It will absolutely remain harmless with my proposal, if you don't change your code. However, for that use case I would like to encourage the use of autocommit mode. That's really the semantics you want here.

In fact, you've written several sentences along the lines of "currently we have $ADVANTAGE". It's easy to (mis)interpret that as implying that I'm trying to remove these advantages. But that's simply not true.

To sum up, if I understand correctly, in_transaction gives you the ability to fight the behavior mandated by the DB API in client code, because you understand it well. My take is to avoid the problem entirely, and not inflict it on new users, by providing an option to start in autocommit mode and then create transactions only when you want them. The DB API doesn't forbid this option. It just prevents it from being the default (even though it's what the average developer expects). It solves the problem of using SQLite as an application file format.
Use autocommit as long as you're just reading; start a transaction before writing; commit when you're done writing. Of course, that only applies to new software. Existing software can happily keep using the current behavior, which will be preserved at least for the lifetime of Python 3. -- ___ Python tracker <http://bugs.python.org/issue10740> ___
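The recommended workflow, sketched with today's isolation_level=None spelling of autocommit mode (the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit
conn.execute("CREATE TABLE doc (body TEXT)")

# Reads happen in autocommit: no transaction is opened, no lock is
# held longer than the statement itself.
(count,) = conn.execute("SELECT count(*) FROM doc").fetchone()

# Writes get an explicit transaction: BEGIN ... COMMIT.
conn.execute("BEGIN")
conn.execute("INSERT INTO doc VALUES (?)", ("hello",))
conn.execute("COMMIT")
```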
[issue10740] sqlite3 module breaks transactions and potentially corrupts data
Aymeric Augustin added the comment: The problem is super simple:

    connection = ...
    cursor = connection.cursor()
    cursor.execute("SAVEPOINT foo")
    cursor.execute("ROLLBACK TO SAVEPOINT foo")  # will report that the savepoint doesn't exist.

Please don't make it look more complicated than it is. -- ___ Python tracker <http://bugs.python.org/issue10740> ___
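For comparison, the same sequence goes through once the module's implicit transaction handling is switched off with isolation_level=None:

```python
import sqlite3

connection = sqlite3.connect(":memory:", isolation_level=None)
cursor = connection.cursor()
cursor.execute("SAVEPOINT foo")
cursor.execute("ROLLBACK TO SAVEPOINT foo")  # no error in autocommit mode
cursor.execute("RELEASE SAVEPOINT foo")
```

In autocommit mode the SAVEPOINT statement reaches SQLite untouched, so the savepoint actually exists when ROLLBACK TO refers to it.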
[issue10740] sqlite3 module breaks transactions and potentially corrupts data
Aymeric Augustin added the comment: Thanks for your feedback! Indeed this is a documentation patch describing what I could implement if a core dev gave the slightest hint that it stands a small chance of getting included. There's no point in writing code that Python core doesn't want. I designed this API to maximize the chances it would be accepted, based on what I know of Python's development. That's probably why it was merely ignored rather than rejected ;-) More seriously, since I'm interested in improving what the stdlib ships, I have to take backwards compatibility into account. If I just wanted to implement a different API, I'd do it outside of CPython. Yes, I could add PEP 249 compliance considerations to the docs. I'll do it if my general approach is accepted. -- ___ Python tracker <http://bugs.python.org/issue10740> ___
[issue10740] sqlite3 module breaks transactions and potentially corrupts data
Aymeric Augustin added the comment: I'm attaching a documentation patch describing improvements to the transaction management APIs that would address this issue as well as three others. It preserves the current transaction handling by default for backwards compatibility, and it introduces two new, simpler and less surprising transaction modes. Could someone have a look and tell me if such improvements seem reasonable? If so, I'll try to write a patch targeting Python 3.5. -- Added file: http://bugs.python.org/file35498/transaction-api-proposal-issues-8145-9924-10740-16958.patch ___ Python tracker <http://bugs.python.org/issue10740> ___