Re: Intermittent bug with asyncio and MS Edge
The RST from Python is probably caused by the HTTP/1.1 server closing the TCP connection without signalling "Connection: close" in the response headers: a fast HTTP client will send another HTTP request before its TCP stack detects that the connection is being closed, and the packets carrying this new request are answered with RST. When you delay your response (as you mentioned), the Edge browser probably opens more HTTP connections and does not send further HTTP requests on a single connection as described above. You should look for the reason Python closes the TCP connection instead of waiting for another HTTP request.

Kouli

On Sun, Mar 22, 2020 at 12:04 PM Chris Angelico wrote:
> On Sun, Mar 22, 2020 at 12:45 AM Frank Millman wrote:
> >
> > Hi all
> >
> > I have a strange intermittent bug.
> >
> > The role-players -
> > asyncio on Python 3.8 running on Windows 10
> > Microsoft Edge running as a browser on the same machine
> >
> > The bug does not occur with Python 3.7.
> > It does not occur with Chrome or Firefox.
> > It does not occur when MS Edge connects to another host on the network,
> > running the same Python program (Python 3.8 on Fedora 31).
>
> What exact version of Python 3.7 did you test? I'm looking through the
> changes to asyncio and came across this one, which may have some
> impact.
>
> https://bugs.python.org/issue36801
>
> Also this one made a change that introduced a regression that was
> subsequently fixed. Could be interesting.
>
> https://bugs.python.org/issue36802
>
> What happens if you try awaiting your writes? I think it probably
> won't make any difference, though - from my reading of the source, I
> believe that "await writer.write(...)" is the same as
> "writer.write(...); await writer.drain()", so it's going to be exactly
> the same as you're already doing.
>
> ChrisA
> --
> https://mail.python.org/mailman/listinfo/python-list
--
https://mail.python.org/mailman/listinfo/python-list
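For reference, a minimal asyncio sketch of the well-behaved server side - announce "Connection: close" before closing, and drain the full request first, so a pipelining client never fires a second request into a dying connection. This is an illustration with made-up handler names and addresses, not Frank's actual code:

```python
import asyncio

async def handle(reader, writer):
    # Consume the request line and headers up to the blank line,
    # so nothing is left unread when we close (leftover unread
    # data can itself provoke an RST on close).
    while True:
        line = await reader.readline()
        if line in (b"\r\n", b"\n", b""):
            break
    # Announce that we intend to close; a keep-alive/pipelining
    # client will then not reuse this connection.
    writer.write(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Length: 2\r\n"
        b"Connection: close\r\n"
        b"\r\n"
        b"ok"
    )
    await writer.drain()       # flush before closing
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8080)
    async with server:
        await server.serve_forever()
```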
Re: Yet another Python SNMP lib
> Unsure why you mention C. Python has a solid and efficient
> implementation of SNMP with PySNMP, and except a misunderstanding about
> how SNMPv3 should work, it seems better to use it than relying on
> Net-SNMP which has many bugs and nobody really willing to correct them.

Wow. I'd like to apologize to the authors of PySNMP: from what I had seen mentioned on the internet, I thought it differed from Net-SNMP mainly in being written in pure Python and having lower performance. I wish I had skimmed through the PySNMP documentation before, as I did a few minutes ago :-) PySNMP solves many of the issues mentioned in my original message. I can just add a better (more suitable to me) "OSM" (object-SMI mapper).

You are right, Vincent: sorry for even mentioning the Net-SNMP bindings.

Milan
--
https://mail.python.org/mailman/listinfo/python-list
Yet another Python SNMP lib
Python list users, this is a rather long message primarily addressed to Vincent, but I am sending a copy to this list for those interested in SNMP and/or "object mapping frameworks"...

Hello Vincent,

I have been using your snimpy in a few small scripts. I like it for replacing long dotted-number OID strings with human-readable identifiers. Thanks for your work!

However, I think you introduced too much abstraction over the SNMP protocol itself. That is fine for smaller projects, but when I tried to talk to a more sophisticated SNMP manager, snimpy did not use the protocol in an optimal way. You may remember - I already discussed some minor changes with you two years ago...

People usually use SNMP to read some status or statistical data. When they talk about performance, they usually mean reading only a few variables from hundreds to thousands of devices. In such a case, it is not a challenge to write a few numerical OID strings in their scripts. The "identifier abstraction" is useful when you have to get/set tens to hundreds of variables on a single large SNMP manager (an Ethernet switch fully manageable by SNMP - i.e. to the same extent as by command line - or, in my case, a GPON OLT serving thousands of ONTs).

The Easy SNMP package you mention in the snimpy docs is optimized for the first usage scenario - a few variables @ thousands of managers. For the second - hundreds of variables @ a single manager - what matters is (a) full control over varbind sets (to lower the number of round trips needed) and (b) easy mapping of SNMP OIDs to constructs of your programming language. With snimpy, you concentrated on (b), but I am not able to:

- aggregate GET messages,
- (similarly) walk tables reading more than one column efficiently.

Well, why am I writing this? I would like to discuss with you, Vincent, "yet another Python SNMP package" - one I would like to create.
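To illustrate point (a) - why aggregating varbinds matters - here is a trivial, purely hypothetical batching helper (the function name and batch size are made up): packing many OIDs into one request payload is what cuts the number of round trips.

```python
# Illustrative only: grouping many OIDs into fewer request payloads.
# A real SNMP library would place each batch into a single GET PDU.
def batch_oids(oids, max_per_request=25):
    """Yield the OID list in chunks of at most max_per_request."""
    for i in range(0, len(oids), max_per_request):
        yield oids[i:i + max_per_request]

# 100 instances of IF-MIB::ifDescr-style OIDs ...
oids = [f"1.3.6.1.2.1.2.2.1.2.{i}" for i in range(100)]
batches = list(batch_oids(oids))
# ... become 4 requests instead of 100.
```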
I believe you must have solved many issues while creating snimpy, so I am asking you to evaluate my thoughts and warn me about any potential problems. The new package should:

1. represent atomic SNMP variables using some lightweight Python classes: instances would serve as variable values, classes would know their OID (and original name, comments/textual conventions from the MIB, etc.),

2. (not sure) represent SNMP tables using Python classes to make some table operations easier, especially those with "conceptual rows" (SMIv2 RowStatus),

3. be able to "import" a whole (or part of a) MIB file as if it were a Python module: this import would define the above-mentioned classes; the MIBs would not be stored in a system-wide central directory, but rather at the place where application-specific packages are; even Python's package import machinery could be misused (read: overridden) to achieve this,

4. (not sure) make the above import available to autocompletion in text editors,

5. (far future) be able to hint/change the imported classes: for MIBs which are not fully standards-compliant, or to *shorten* variable names (get rid of long prefixes in names, which exist only because of the need for global [at least MIB-wide] uniqueness - now the names will be scoped [columns scoped to a table class do not need the table's identification in their name prefixes], so they can be shorter),

6. replace snimpy's `manager.varname = varvalue` with `manager.set(varinst, ...)` (varinst being an instance of a class described above), add `manager.get(varclass, ...)`, add more such methods for bulk walks, get-next, methods raising exceptions vs. methods returning varbinds, etc. - allow for full control over the content of the SNMP message payload (including bulk repetition counts etc.),

7. (not sure whether it can lift performance for any manager; to be proven in C first) support asynchronous SNMP transactions (asyncio?), i.e. send the next request to the same manager before we receive a response to the previous one (some managed Ethernet switches use multi-core CPUs, so they - potentially - might be able to process parallel requests; example: a very large table with many columns - I can bulk-walk the columns in parallel, or walk all index values [only] first and, knowing them, issue requests for "complete rows" in parallel),

8. (not sure, future) make the performance-critical part, the SNMP protocol library (ASN.1, v3 encryption), "pluggable" - pure PySNMP / the faster Net-SNMP with binary dependencies (any recommendation with respect to 7.?),

9. (probably) support only recent Python 3, dropping support for Python 2 completely.

Thank you for any review/idea!

Milan
--
https://mail.python.org/mailman/listinfo/python-list
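A hypothetical sketch of point 1, just to make the idea concrete - none of these names are a real API, and the OID/class pair is only an example of what a MIB "import" might generate:

```python
# Hypothetical sketch: classes know their OID (as read from a MIB),
# instances carry variable values.
class SnmpVar:
    oid = None    # filled in by MIB-derived subclasses
    name = None

    def __init__(self, value):
        self.value = value

    def varbind(self):
        # An (OID, value) pair ready to be placed into a PDU.
        return (self.oid, self.value)


class IfDescr(SnmpVar):
    name = "ifDescr"
    oid = "1.3.6.1.2.1.2.2.1.2"   # IF-MIB::ifDescr (column OID)


v = IfDescr("eth0")
v.varbind()  # ("1.3.6.1.2.1.2.2.1.2", "eth0")
```

A `manager.set(v)` call from point 6 could then extract the varbind from the instance without the caller ever spelling out a numeric OID.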
Re: SNMP
Have a look at https://snimpy.readthedocs.io/en/latest/

On Fri, Mar 24, 2017 at 12:07 AM, Matt wrote:
> What is easiest way to read and write SNMP values with Python?
> --
> https://mail.python.org/mailman/listinfo/python-list
--
https://mail.python.org/mailman/listinfo/python-list
Re: How to convert 'ö' to 'oe' or 'o' (or other similar things) in a string?
Hello, try the Unidecode module - https://pypi.python.org/pypi/Unidecode.

Kouli

On Sat, Sep 17, 2016 at 6:12 PM, Peng Yu <pengyu...@gmail.com> wrote:
> Hi, I want to convert strings in which the characters with accents
> should be converted to the ones without accents. Here is my current
> code.
>
> $ cat main.sh
> #!/usr/bin/env bash
> # vim: set noexpandtab tabstop=2:
>
> set -v
> ./main.py Förstemann
> ./main.py Frédér8ic@
>
> $ cat main.py
> #!/usr/bin/env python
> # vim: set noexpandtab tabstop=2 shiftwidth=2 softtabstop=-1 fileencoding=utf-8:
>
> import sys
> import unicodedata
> print unicodedata.normalize('NFKD',
>     sys.argv[1].decode('utf-8')).encode('ascii', 'ignore')
>
> The complication is that some characters have more than one way of
> conversion. E.g., 'ö' can be converted to either 'oe' or 'o'. I want
> to get all the possible conversions of a string. Does anybody know a
> good way to do so? Thanks.
>
> --
> Regards,
> Peng
> --
> https://mail.python.org/mailman/listinfo/python-list
--
https://mail.python.org/mailman/listinfo/python-list
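For reference, the NFKD approach from the quoted script ported to Python 3 (no .decode/.encode dance needed on str). Note it yields plain 'o' for 'ö', not the German-style 'oe'; Unidecode applies its own transliteration tables instead.

```python
import unicodedata

def strip_accents(s):
    # NFKD decomposes 'ö' into 'o' plus a combining diaeresis;
    # dropping the combining marks keeps only the base letters.
    return "".join(
        c for c in unicodedata.normalize("NFKD", s)
        if not unicodedata.combining(c)
    )

strip_accents("Förstemann")   # → "Forstemann"
strip_accents("Frédér8ic@")   # → "Freder8ic@"
```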
Re: PEP 492: isn't the "await" redundant?
Thank you for all your answers. After all, I am now more confident in the current syntax.

The most important reason for 'await', to me, is the fact that you quite _often_ need to prepare the 'awaitable' object first and wait for it later (like ChrisA's example with print()), i.e. split the expression into more lines:

    fut = coro(x)
    await fut

I supposed this to be only a minor use case (compared to 'await coro(x)'), but I have learned it isn't. Every time you need to "wait for more than one thing" (more than one 'future'), you also need the split. Not only for parallel branching, but even for simple async operations combined with a timeout - asyncio.wait_for() etc.

And I prefer the explicit 'await' for simple waiting over a special syntax for splitting (i.e. doing simple waiting without 'await', as proposed at the top of this thread, and introducing a more complicated syntax for the split - something like functools.partial(coro, x)).

Kouli
--
https://mail.python.org/mailman/listinfo/python-list
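A minimal runnable sketch of that split combined with a timeout (the coroutine and its names are illustrative):

```python
import asyncio

async def work(x):
    await asyncio.sleep(x)
    return x + 1

async def main():
    # Prepare the awaitable first, await it later -- the split that
    # the explicit 'await' keyword makes natural. Combined here with
    # a timeout via asyncio.wait_for().
    fut = work(0)
    return await asyncio.wait_for(fut, timeout=1.0)

asyncio.run(main())  # → 1
```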
PEP 492: isn't the "await" redundant?
Hello, recently I discovered Python's coroutines and have enjoyed the whole asyncio system a lot. But I ask you to help me understand one thing in the design of Python's coroutines: why do we have to use "await" (or "yield from") in coroutines? Why can coroutines etc. not be used _from_coroutines_ (designated by 'async def') with a simple call-like syntax (i.e. without the 'await' keyword)? The same for "async with" and "async for".

At most places where a coroutine is referenced from another coroutine, it is referenced using "await". Couldn't it be avoided at these places? That way, one would not have to differentiate between a function and a coroutine "call" from within a coroutine...

Current syntax:

    async def work(x):
        await asyncio.sleep(x)

    def main(x):
        loop.run_until_complete(work(x))

Proposed syntax:

    async def work(x):
        asyncio.sleep(x)  # compiler "adds" 'await' automatically when in 'async def'

    def main(x):
        loop.run_until_complete(work(x))  # compiler leaves the call as is when in 'def'

Historically, generators were defined by the use of the keyword 'yield' inside their definition. We now have explicit syntax with the keyword 'async', so why should we still need the additional keyword 'await'?

I tried to come up with a (minor) example which would need the "leave as is" behaviour inside 'async def', but I haven't found such a coroutine reference in the examples. Should it be needed (please, tell me), it would require a special syntax (at least for arguments - without arguments one can leave out the parentheses).

Kouli
--
https://mail.python.org/mailman/listinfo/python-list
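One such "leave as is" place does exist in asyncio itself: asyncio.gather() needs the un-awaited coroutine objects so it can run them concurrently, so an implicit 'await' at every call site inside 'async def' would get in the way. A minimal sketch (names illustrative):

```python
import asyncio

async def fetch(n):
    await asyncio.sleep(0)
    return n

async def main():
    # fetch(1) and fetch(2) must NOT be awaited at the call site:
    # gather() receives the coroutine objects themselves and
    # schedules them to run concurrently.
    return await asyncio.gather(fetch(1), fetch(2))

asyncio.run(main())  # → [1, 2]
```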