[issue41594] Intermittent failures of loop.subprocess_exec() to capture output

2020-08-25 Thread Yaroslav Halchenko


Yaroslav Halchenko  added the comment:

Might (although unlikely) be related to https://bugs.python.org/issue40634, 
which is about BlockingIOError being raised (and ignored) when a 
SelectorEventLoop is reused (not the case here), also in the context of 
short-lived processes.
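For context, a minimal self-contained sketch of capturing a child process's output with asyncio (using the higher-level create_subprocess_exec wrapper around the loop's subprocess machinery; the command and helper name are illustrative, not the reporter's code):

```python
import asyncio


async def run_capture(cmd):
    # Spawn the child with piped stdout/stderr and collect both streams.
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    out, err = await proc.communicate()
    return proc.returncode, out, err


rc, out, err = asyncio.run(run_capture(["echo", "hi"]))
assert rc == 0 and out.strip() == b"hi"
```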

--
nosy: +Yaroslav.Halchenko

___
Python tracker 
<https://bugs.python.org/issue41594>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue40634] Ignored "BlockingIOError: [Errno 11] Resource temporarily unavailable" are still haunting us

2020-06-15 Thread Yaroslav Halchenko


Yaroslav Halchenko  added the comment:

Any feedback/ideas/fixes would still be highly appreciated.  Thank you!

--

___
Python tracker 
<https://bugs.python.org/issue40634>
___



[issue40634] Ignored "BlockingIOError: [Errno 11] Resource temporarily unavailable" are still haunting us

2020-05-15 Thread Yaroslav Halchenko


New submission from Yaroslav Halchenko :

This is a reincarnation of previous issues such as:

- the older https://bugs.python.org/issue21595, which partially (with an ack on 
that) addressed the issue a while back
- the more recent https://bugs.python.org/issue38104, which was closed as 
"won't fix" since "the provided example finishes without any warning" on 3.8 
(verified -- true for me with 3.8.3rc1), and with the explanation that "You 
spawn too many subprocesses that finish virtually at the same time. It leads 
to wakeup_fd overrun."
- additional similar reports that can be found online, e.g. 
https://stackoverflow.com/a/52391791/1265472 .

In our project we are slowly introducing the use of asyncio and have a mix of 
execution via asyncio and via regular subprocess.Popen.  We do run lots of 
short-lived processes serially, and I think that should be OK, i.e. it should 
not cause underlying libraries to spit out output to ignore, unless we are 
indeed just using them incorrectly somehow.

If we recreate the SelectorEventLoop for every separate execution via asyncio, 
no ignored-exception messages are displayed.  But if we start to reuse the 
same loop, they eventually emerge.  If I enable asyncio debug and log it 
along with our own debug messages, the strange thing is that they appear 
around the points where we run using regular subprocess.Popen, not asyncio. 
See https://github.com/datalad/datalad/pull/4527#issuecomment-629289819 for 
more information.

Unfortunately I do not have (yet) any short script to reproduce it, but I 
would appreciate possible hints on how to figure out what is actually causing 
them in our particular case.  Maybe additional logging within asyncio could 
assist?
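The reported pattern -- one SelectorEventLoop reused for many short-lived children in a row -- can be sketched as follows (a hypothetical minimal scenario, not the reporter's actual code, and it does not by itself reproduce the ignored-exception messages; recreating the loop per execution avoided them for the reporter):

```python
import asyncio


async def run_true():
    # One short-lived child; it exits almost immediately.
    proc = await asyncio.create_subprocess_exec("true")
    return await proc.wait()


# Reuse a single event loop for many serial, short-lived subprocesses.
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
    rcs = [loop.run_until_complete(run_true()) for _ in range(20)]
finally:
    asyncio.set_event_loop(None)
    loop.close()

assert rcs == [0] * 20
```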

--
components: asyncio
messages: 368953
nosy: Yaroslav.Halchenko, asvetlov, yselivanov
priority: normal
severity: normal
status: open
title: Ignored "BlockingIOError: [Errno 11] Resource temporarily unavailable" 
are still haunting us
versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8

___
Python tracker 
<https://bugs.python.org/issue40634>
___



[issue38449] regression - mimetypes guess_type is confused by ; in the filename

2019-10-11 Thread Yaroslav Halchenko


Yaroslav Halchenko  added the comment:

FWIW, our more complete test filename is

# python3 -c 'import patoolib.util as ut; print(ut.guess_mime(r" \"\`;b |.tar.gz"))'
(None, None)

which works fine with older versions
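For reference, on current CPython (the regression reported here was reverted in later 3.7/3.8 point releases), a ';' in a plain filename no longer confuses guess_type; a minimal check:

```python
import mimetypes

# Non-strict database, as in the original report.
db = mimetypes.MimeTypes(strict=False)

# Both the plain and the ';'-prefixed filename resolve identically now.
assert db.guess_type("1.tar.gz") == ("application/x-tar", "gzip")
assert db.guess_type(";1.tar.gz") == ("application/x-tar", "gzip")
```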

--

___
Python tracker 
<https://bugs.python.org/issue38449>
___



[issue38449] regression - mimetypes guess_type is confused by ; in the filename

2019-10-11 Thread Yaroslav Halchenko


New submission from Yaroslav Halchenko :

Our tests in DataLad started to fail while building on Debian with Python 
3.7.5rc1, whereas they passed just fine previously with 3.7.3rc1.  Analysis 
boiled down to mimetypes:

$> ./python3.9 -c 'import mimetypes; mimedb = mimetypes.MimeTypes(strict=False); print(mimedb.guess_type(";1.tar.gz"))'
(None, None)

$> ./python3.9 -c 'import mimetypes; mimedb = mimetypes.MimeTypes(strict=False); print(mimedb.guess_type("1.tar.gz"))'
('application/x-tar', 'gzip')

$> git describe
v3.8.0b1-1174-g2b7dc40b2af


Ref: 

- original issue in DataLad: https://github.com/datalad/datalad/issues/3769

--
components: Library (Lib)
messages: 354455
nosy: Yaroslav.Halchenko
priority: normal
severity: normal
status: open
title: regression - mimetypes guess_type is confused by ; in the filename
type: behavior
versions: Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue38449>
___



[issue32276] there is no way to make tempfile reproducible (i.e. seed the used RNG)

2017-12-12 Thread Yaroslav Halchenko

Yaroslav Halchenko <yarikop...@gmail.com> added the comment:

I have spent too much time in Python to be able to compare to other languages 
;)  but anywhere I saw an RNG being used, there was a way to seed it or to 
provide a state.  tempfile provides no such API.

My use case: comparison of logs from two runs where I need to troubleshoot 
the point of divergence in execution.  Logs in our case (datalad) contain 
temporary directory filenames, so they always "diff" and I need to sift 
through them or come up with some obscure sed regex to unify them.  In other 
projects of ours I found it really handy to be able to seed the RNG globally 
so that two runs result in an identical execution path -- it allows for easier 
reproducibility/comparison.  But when it got to those temporary filenames, 
apparently I could not make it happen and would need to resort to some heavy 
monkey patching.

--

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue32276>
___



[issue32276] there is no way to make tempfile reproducible (i.e. seed the used RNG)

2017-12-11 Thread Yaroslav Halchenko

New submission from Yaroslav Halchenko <yarikop...@gmail.com>:

It is quite often desired to reproduce the same failure identically. In many 
cases it is sufficient to seed the shared random._inst (via random.seed). 
tempfile creates new instance(s) for its own operation and provides no API to 
seed them.  I do not think it would be easy (unless I miss some pattern) to 
make it deterministic/reproducible for multi-process apps, but I wondered: 
why doesn't the tempfile module initially (for the main process) just reuse 
random._inst, only creating a new _Random in child processes?
Another alternative would be to allow specifying a seed for all those 
mkstemp/mkdtemp/... calls and pass it all the way down to 
_RandomNameSequence, which would initialize its _Random with it.  This way, 
developers who need to seed it could do so.
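A sketch of the kind of heavy monkey patching alluded to here: seeding the private RNG behind tempfile's name sequence. Note that _RandomNameSequence and its rng attribute are CPython internals, not a public API, so this may break between versions:

```python
import tempfile


def seeded_names(seed, n=3):
    # Create a fresh name sequence and seed its per-process RNG.
    # Both _RandomNameSequence and .rng are private CPython details.
    seq = tempfile._RandomNameSequence()
    seq.rng.seed(seed)
    return [next(seq) for _ in range(n)]


# Same seed -> same candidate temp-file names; different seed -> different.
assert seeded_names(42) == seeded_names(42)
assert seeded_names(42) != seeded_names(43)
```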

--
messages: 308043
nosy: Yaroslav.Halchenko
priority: normal
severity: normal
status: open
title: there is no way to make tempfile reproducible (i.e. seed the used RNG)

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue32276>
___



[issue31651] io.FileIO cannot write more than 2GB (-4096) bytes??? must be documented (if not fixed)

2017-09-30 Thread Yaroslav Halchenko

Yaroslav Halchenko <yarikop...@gmail.com> added the comment:

Thank you for the follow-ups!

Wouldn't it be better if the Python documentation said exactly that:

On Linux, write() (and similar system calls) will transfer at most 
0x7ffff000 (2,147,479,552) bytes, returning the number of bytes 
actually transferred.  (This is true on both 32-bit and 64-bit 
systems.)

Also, it might be nice to add a note at the top that this module provides a 
'low level' IO interface, and that it is recommended to use the regular file 
type (not io.FileIO) for typical file operations, to avoid the necessity of 
dealing with limitations such as the one mentioned.
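On the caller's side, the documented-behavior fix is a write loop that honors the return value; write_all below is a hypothetical helper, not an io API:

```python
import io
import os
import tempfile


def write_all(raw, data):
    # A raw write() may transfer fewer bytes than requested (on Linux a
    # single write() tops out at 0x7ffff000 bytes), so loop until done.
    view = memoryview(data)
    total = 0
    while total < len(view):
        n = raw.write(view[total:])
        if n:
            total += n
    return total


# Small demo (not 2GB, just exercising the loop on a real FileIO object).
fd, path = tempfile.mkstemp()
os.close(fd)
payload = b"x" * 100_000
with io.FileIO(path, "w") as f:
    assert write_all(f, payload) == len(payload)
with open(path, "rb") as f:
    assert f.read() == payload
os.unlink(path)
```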

--

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue31651>
___



[issue31651] io.FileIO cannot write more than 2GB (-4096) bytes??? must be documented (if not fixed)

2017-09-30 Thread Yaroslav Halchenko

New submission from Yaroslav Halchenko <yarikop...@gmail.com>:

originally detected on Python 2.7, but replicated with Python 3.5.3 -- 
apparently io.FileIO, if given a bytestring of 2GB or more, cannot write it 
all at once -- it saves (and returns) only 2GB - 4096 bytes.

I found no indication of such behavior anywhere in the documentation. And it 
is surprising to me, especially since regular file.write does it just fine!  
Attached is the code snippet which I list below and which demonstrates it:

$> python3 --version; python3 longwrite.py
Python 3.5.3
Written 2147479552 out of 2147483648
4096 bytes were not written
Traceback (most recent call last):
  File "longwrite.py", line 28, in <module>
    assert in_digest == out_digest, "Digests do not match"
AssertionError: Digests do not match
python3 longwrite.py  7.03s user 5.80s system 99% cpu 12.848 total

$> cat longwrite.py
# -*- coding: utf-8 -*-
import io
import os
import hashlib

s = u' '*(256**4//2)  #+ u"перфекто"
s = s.encode('utf-8')
#s = ' '*(10)

in_digest = hashlib.md5(s).hexdigest()
fname = 'outlong.dat'

if os.path.exists(fname):
    os.unlink(fname)

with io.FileIO(fname, 'wb') as f:
#with open(fname, 'wb') as f:
    n = f.write(s)

#n = os.stat(fname).st_size
print("Written %d out of %d" % (n, len(s)))
if n != len(s):
    print("%d bytes were not written" % (len(s) - n))

# checksum
with open(fname, 'rb') as f:
    out_digest = hashlib.md5(f.read()).hexdigest()
assert in_digest == out_digest, "Digests do not match"
print("all ok")

--
components: IO
files: longwrite.py
messages: 303429
nosy: Yaroslav.Halchenko
priority: normal
severity: normal
status: open
title: io.FileIO cannot write more than 2GB (-4096) bytes??? must be documented 
(if not fixed)
type: behavior
versions: Python 2.7, Python 3.5
Added file: https://bugs.python.org/file47182/longwrite.py

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue31651>
___



[issue30438] tarfile would fail to extract tarballs with files under R/O directories (twice)

2017-05-24 Thread Yaroslav Halchenko

Changes by Yaroslav Halchenko <yarikop...@gmail.com>:


--
title: tarfile would fail to extract tarballs with files under R/O directories 
-> tarfile would fail to extract tarballs with files under R/O directories 
(twice)

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30438>
___



[issue30438] tarfile would fail to extract tarballs with files under R/O directories

2017-05-24 Thread Yaroslav Halchenko

Yaroslav Halchenko added the comment:

Dear Catherine,

Thank you very much for looking into it!! And sorry that I have missed the fact 
of recursive addition when pointing to a directory.  Indeed though, tar handles 
that case a bit more gracefully.

BUT I feel somewhat dumb, since I am afraid that maybe the actual original 
issue I observed was simply because I already had that archive extracted and 
tried to extract it twice, overwriting existing files.  That leads to the 
failure I think I was trying to chase down (example with a sample tiny real 
annex repo):

$> wget -q http://onerussian.com/tmp/sample.tar ; python -c 'import tarfile; 
tarfile.open("sample.tar").extractall()'
$> python -c 'import tarfile; tarfile.open("sample.tar").extractall()'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.7/tarfile.py", line 2081, in extractall
self.extract(tarinfo, path)
  File "/usr/lib/python2.7/tarfile.py", line 2118, in extract
self._extract_member(tarinfo, os.path.join(path, tarinfo.name))
  File "/usr/lib/python2.7/tarfile.py", line 2194, in _extract_member
self.makefile(tarinfo, targetpath)
  File "/usr/lib/python2.7/tarfile.py", line 2234, in makefile
with bltn_open(targetpath, "wb") as target:
IOError: [Errno 13] Permission denied: 
'./sample/.git/annex/objects/G6/qW/SHA256E-s4--181210f8f9c779c26da1d9b2075bde0127302ee0e3fca38c9a83f5b1dd8e5d3b/SHA256E-s4--181210f8f9c779c26da1d9b2075bde0127302ee0e3fca38c9a83f5b1dd8e5d3b'

$> tar -xf sample.tar && echo "extracted ok"
extracted ok


But I wouldn't even consider it a failure -- I would take it as a feature in 
my case (stuff is read-only for a reason!).

Altogether, I do not have an earth-shaking problem now, so if you feel that 
the issue needs a retitle or closing, feel free to do so
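A hedged sketch of working around the re-extraction failure shown above; force_extractall is a hypothetical helper, not a tarfile API. It chmods pre-existing read-only target files writable before overwriting them:

```python
import os
import stat
import tarfile
import tempfile


def force_extractall(tar_path, dest):
    # Extract members one by one; if a regular-file target already
    # exists (possibly read-only, as with git-annex objects), make it
    # writable first so the overwrite does not fail with EACCES.
    with tarfile.open(tar_path) as tf:
        for member in tf:
            target = os.path.join(dest, member.name)
            if member.isfile() and os.path.exists(target):
                os.chmod(target, os.stat(target).st_mode | stat.S_IWUSR)
            tf.extract(member, dest)


# Demo: extract twice, with the file made read-only in between.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "d")
    os.makedirs(src)
    with open(os.path.join(src, "f.txt"), "w") as f:
        f.write("content")
    tar_path = os.path.join(tmp, "sample.tar")
    with tarfile.open(tar_path, "w") as tf:
        tf.add(src, arcname="d")

    out = os.path.join(tmp, "out")
    force_extractall(tar_path, out)
    # Simulate an annexed, read-only file before the second extraction.
    os.chmod(os.path.join(out, "d", "f.txt"), 0o444)
    force_extractall(tar_path, out)
    with open(os.path.join(out, "d", "f.txt")) as f:
        ok = f.read() == "content"

assert ok
```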

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30438>
___



[issue30438] tarfile would fail to extract tarballs with files under R/O directories

2017-05-22 Thread Yaroslav Halchenko

New submission from Yaroslav Halchenko:

If tarfile contains a file under a directory which has no write permission, 
extractall would fail since chmod'ing of the directory is done right when it is 
"extracted".

Please find attached a quick script demonstrating the problem using Python 
code.  The issue is not just of academic interest -- git-annex uses read-only 
permissions to safeguard against manual deletion of content.  So a tarball of 
any git-annex repository carrying content for at least a single file would 
not be extractable using Python's tarfile module (works fine with pure tar; 
verified that it still fails to extract with Python v3.6.1-228-g1398b1bc7d 
from http://github.com/python/cpython).

--
components: IO
files: tarfilero.py
messages: 294217
nosy: Yaroslav.Halchenko
priority: normal
severity: normal
status: open
title: tarfile would fail to extract tarballs with files under R/O directories
versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6
Added file: http://bugs.python.org/file46888/tarfilero.py

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30438>
___



[issue13248] deprecated in 3.2/3.3, should be removed in 3.5 or ???

2015-07-30 Thread Yaroslav Halchenko

Yaroslav Halchenko added the comment:

the function getargspec was removed but references to it within docstrings 
remained:

$ git grep getargspec
Doc/library/inspect.rst:   The first four items in the tuple correspond to :func:`getargspec`.
Doc/library/inspect.rst:   :func:`getargspec` or :func:`getfullargspec`.
Doc/whatsnew/3.4.rst::func:`~inspect.getfullargspec` and :func:`~inspect.getargspec`
Doc/whatsnew/3.5.rst:* :func:`inspect.getargspec` is deprecated and scheduled to be removed in
Doc/whatsnew/3.6.rst:* ``inspect.getargspec()`` was removed (was deprecated since CPython 3.0).
Lib/inspect.py:getargspec(), getargvalues(), getcallargs() - get info about function arguments
Lib/inspect.py:The first four items in the tuple correspond to getargspec().
Lib/inspect.py:Format an argument spec from the values returned by getargspec
Lib/test/test_inspect.py:# getclasstree, getargspec, getargvalues, formatargspec, formatargvalues,
Misc/NEWS:- Issue #13248: Remove deprecated inspect.getargspec and inspect.getmoduleinfo
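For reference, the modern replacement for the removed getargspec() is inspect.signature() (or inspect.getfullargspec()); a minimal example:

```python
import inspect


def f(a, b=1, *args, **kwargs):
    pass


# signature() covers everything getargspec() reported, and more.
sig = inspect.signature(f)
assert list(sig.parameters) == ["a", "b", "args", "kwargs"]
assert sig.parameters["b"].default == 1
```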

--
nosy: +Yaroslav.Halchenko

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue13248
___



[issue16997] subtests

2013-02-11 Thread Yaroslav Halchenko

Changes by Yaroslav Halchenko yarikop...@gmail.com:


--
nosy:  -Yaroslav.Halchenko

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16997
___



[issue10006] non-Pythonic fate of __abstractmethods__

2010-10-01 Thread Yaroslav Halchenko

New submission from Yaroslav Halchenko yarikop...@gmail.com:

We ran into this while generating documentation for our project (PyMVPA) with 
recent sphinx and python2.6 (fine with 2.5, failed for 2.6, 2.7, 3.1), which 
relies on traversing all attributes given by dir(obj). BUT apparently 
__abstractmethods__ becomes very special -- it is given by dir(obj) since it 
is present in obj.__class__, but getattr(obj, '__abstractmethods__') fails 
for classes derived from type.  E.g. the following sample demonstrates it:

print("in type's dir", '__abstractmethods__' in dir(type))
print(type.__abstractmethods__)

class type3(type):
    pass

print("in type3's dir", '__abstractmethods__' in dir(type3))
print(type3.__abstractmethods__)


results in the output:

$ python2.6 trash/type_subclass.py
("in type's dir", True)
<attribute '__abstractmethods__' of 'type' objects>
("in type3's dir", True)
Traceback (most recent call last):
  File "trash/type_subclass.py", line 9, in <module>
    print(type3.__abstractmethods__)
AttributeError: __abstractmethods__


$ python3.1 trash/type_subclass.py 
in type's dir True
<attribute '__abstractmethods__' of 'type' objects>
in type3's dir True
Traceback (most recent call last):
  File "trash/type_subclass.py", line 9, in <module>
    print(type3.__abstractmethods__)
AttributeError: __abstractmethods__


And that seems to be the only attribute behaving like that (others are fine 
and accessible).  Some people even seem to provide workarounds already, e.g.:
http://bitbucket.org/DasIch/bpython-colorful/src/19bb4cb0a65d/bpython/repl.py
where __abstractmethods__ is accessed only for subclasses of ABCMeta ...

So, is it a bug or a feature (meaning we have to take care of it in all 
traversals of attributes given by dir())? ;)
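A defensive traversal pattern for the dir()/getattr() mismatch described above (names listed by dir() are not guaranteed to be retrievable); safe_attrs is a hypothetical helper:

```python
def safe_attrs(obj):
    # Collect only the attributes from dir() that getattr() can
    # actually retrieve, skipping ghosts like __abstractmethods__.
    out = {}
    for name in dir(obj):
        try:
            out[name] = getattr(obj, name)
        except AttributeError:
            pass
    return out


class Meta(type):
    pass


# Listed by dir() via the `type` base, yet not retrievable:
assert '__abstractmethods__' in dir(Meta)
assert '__abstractmethods__' not in safe_attrs(Meta)
```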

--
messages: 117798
nosy: Yaroslav.Halchenko
priority: normal
severity: normal
status: open
title: non-Pythonic fate of __abstractmethods__
versions: Python 2.6

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10006
___



[issue10006] non-Pythonic fate of __abstractmethods__

2010-10-01 Thread Yaroslav Halchenko

Yaroslav Halchenko yarikop...@gmail.com added the comment:

yikes... surprising resolution -- I expected that a fix would either make 
__abstractmethods__ accessible in derived types or make it absent from the 
output of dir() -- but neither of those has happened.  Now we ended up with a 
consistent non-Pythonic fate of __abstractmethods__: listed in the output of 
dir() but not accessible.  Is that a feature?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10006
___



[issue9235] missing import sys in Tools/gdb/libpython.py

2010-07-12 Thread Yaroslav Halchenko

New submission from Yaroslav Halchenko yarikop...@gmail.com:

as you can see from below, sys is used but never imported (besides in a 
docstring)


$ git describe
upstream/0.5.0.dev-875-gf06319e

$ grep -5 'sys' /home/yoh/proj/misc/python/Tools/gdb/libpython.py


During development, I've been manually invoking the code in this way:
(gdb) python

import sys
sys.path.append('/home/david/coding/python-gdb')
import libpython
end

then reloading it after each edit like this:
(gdb) python reload(libpython)
--

    def print_summary(self):
        if self.is_evalframeex():
            pyop = self.get_pyop()
            if pyop:
                sys.stdout.write('#%i %s\n' % (self.get_index(), pyop.get_truncated_repr(MAX_OUTPUT_LEN)))
                sys.stdout.write(pyop.current_line())
            else:
                sys.stdout.write('#%i (unable to read python frame information)\n' % self.get_index())
        else:
            sys.stdout.write('#%i\n' % self.get_index())

class PyList(gdb.Command):
    '''List the current Python source code, if any

Use
--
        for i, line in enumerate(all_lines[start-1:end]):
            linestr = str(i+start)
            # Highlight current line:
            if i + start == lineno:
                linestr = '>' + linestr
            sys.stdout.write('%4s%s' % (linestr, line))


# ...and register the command:
PyList()

--
components: Demos and Tools
messages: 110134
nosy: Yaroslav.Halchenko
priority: normal
severity: normal
status: open
title: missing import sys in Tools/gdb/libpython.py
versions: Python 2.7

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9235
___



[issue9235] missing import sys in Tools/gdb/libpython.py

2010-07-12 Thread Yaroslav Halchenko

Yaroslav Halchenko yarikop...@gmail.com added the comment:

sorry -- git describe was by mistake in there... report is based on SVN 
revision 82502

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9235
___



[issue7897] Support parametrized tests in unittest

2010-05-11 Thread Yaroslav Halchenko

Yaroslav Halchenko yarikop...@gmail.com added the comment:

Hi Nick,

Am I reading you right? Are you suggesting to implement this manual 
looping/collecting/reporting separately in every unittest that needs it?

On Tue, 11 May 2010, Nick Coghlan wrote:
> Nick Coghlan ncogh...@gmail.com added the comment:
>
> I agree with Michael - one test that covers multiple settings can easily be
> done by collecting results within the test itself and then checking at the
> end that no failures were detected (e.g. I've done this myself with a test
> that needed to be run against multiple input files - the test knew the
> expected results and maintained lists of filenames where the result was
> incorrect. At the end of the test, if any of those lists contained entries,
> the test was failed, with the error message giving details of which files had
> failed and why).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7897
___



[issue7897] Support parametrized tests in unittest

2010-04-09 Thread Yaroslav Halchenko

Yaroslav Halchenko yarikop...@gmail.com added the comment:

Fernando, I agree... somewhat ;-)

At some point (whenever everything works fine and no unittests fail) I wanted 
to marry sweepargs to nose and make it spit out a dot (or animate a spinning 
wheel ;)) for every passed unittest, so instead of 300 dots I got a 
picturesque field of thousands of dots and Ss, and also saw how many were 
skipped for some parametrizations.  But I became unsure about such a feature, 
since the field became quite large and hard to grasp visually, although it 
did give me a better idea of the total number of testings done and skipped. 
So maybe it would be helpful to separate the notions of tests and testings 
and give the user the ability to control the level of verbosity (1 -- tests, 
2 -- testings, 3 -- verbose listing of testings (test(parametrization))).

But I blessed sweepargs every time something went nuts and a test started 
failing for (nearly) all parametrizations at the same point.  That is where I 
really enjoy the concise summary.
Also I observe that often an ERROR bug reveals itself through multiple tests. 
So maybe it would be worth developing a generic 'summary' output which would 
collect all tracebacks and then group them by the location of the actual 
failure and the tests/testings which hit it?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7897
___



[issue7897] Support parametrized tests in unittest

2010-04-08 Thread Yaroslav Halchenko

Yaroslav Halchenko yarikop...@gmail.com added the comment:

In PyMVPA we have our own little decorator as an alternative to Fernando's 
generators, which is closer, I think, to what Michael was wishing for:
@sweepargs

http://github.com/yarikoptic/PyMVPA/blob/master/mvpa/testing/sweepargs.py

NB it has some minor PyMVPA specificity which could easily be wiped out, and 
since at most 4 eyes have looked at it and it bears evolutionary changes, it 
is far from being the cleanest/best piece of code, BUT:

* it is very easy to use: just decorate a test method/function and give an 
argument to vary within the function call, e.g. something like

@sweepargs(arg=range(5))
def test_sweepargs_demo(arg):
    ok_(arg < 5)
    ok_(arg < 3)
    ok_(arg < 2)

For nose/unittest it would still look like a single test

* if failures occur, sweepargs groups them by the type/location of the 
failures and spits out a backtrace for one of the failures + a summary 
(instead of detailed backtraces for each failure) specifying which arguments 
lead to what error... here is the output for the example above:

$ nosetests -s test_sweepargs_demo.py
F
==
FAIL: mvpa.tests.test_sweepargs_demo.test_sweepargs_demo
--
Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.5/nose/case.py", line 183, in runTest
    self.test(*self.arg)
  File "/usr/lib/pymodules/python2.5/nose/util.py", line 630, in newfunc
    return func(*arg, **kw)
  File "/home/yoh/proj/pymvpa/pymvpa/mvpa/tests/test_sweepargs_demo.py", line 11, in test_sweepargs_demo
    ok_(arg < 2)
  File "/usr/lib/pymodules/python2.5/nose/tools.py", line 25, in ok_
    assert expr, msg
AssertionError: 
 Different scenarios lead to failures of unittest test_sweepargs_demo (specific tracebacks are below):
  File "/home/yoh/proj/pymvpa/pymvpa/mvpa/tests/test_sweepargs_demo.py", line 10, in test_sweepargs_demo
    ok_(arg < 3)
  File "/usr/lib/pymodules/python2.5/nose/tools.py", line 25, in ok_
    assert expr, msg
  on
arg=3 
arg=4 

  File "/home/yoh/proj/pymvpa/pymvpa/mvpa/tests/test_sweepargs_demo.py", line 11, in test_sweepargs_demo
    ok_(arg < 2)
  File "/usr/lib/pymodules/python2.5/nose/tools.py", line 25, in ok_
    assert expr, msg
  on
arg=2 

--
Ran 1 test in 0.003s

FAILED (failures=1)

* obviously multiple decorators could be attached to the same test, to test 
all combinations of more than one argument, but the output atm is a bit 
cryptic ;-)
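For what it's worth, modern unittest covers the same idea with subTest() (the subtests issue16997 appears elsewhere in this archive): one test method, with each failing parametrization reported separately. A minimal sketch:

```python
import unittest


class TestSweep(unittest.TestCase):
    def test_demo(self):
        # Analogue of @sweepargs(arg=range(5)): each parametrization
        # becomes a subTest, so failures are reported per-arg while
        # the whole thing remains a single test method.
        for arg in range(5):
            with self.subTest(arg=arg):
                self.assertLess(arg, 5)


result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(TestSweep).run(result)
assert result.wasSuccessful()
```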

--
nosy: +Yaroslav.Halchenko

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7897
___