Re: [py-dev] [TIP] yield-tests and fixtures: should they have a future?

2012-12-22 Thread holger krekel
Hi Ronny,

On Fri, Dec 21, 2012 at 11:37 +0100, Ronny Pfannschmidt wrote:
 Hi all,
 
 My opinion on yield tests is
 that they should become part of the reporting extensions
 instead of their current place in running/collection.
 
 i would like to have more than one report for functional/acceptance
 tests anyway, preferably in a way that allows parts to fail while still
 running the complete test.

 with that in place a yield test would just be a loop running the check
 on all items without propagating single-item failures to the outside.

Well, one of the main reasons people used yield is to get separate progress
dots, and to allow some callables to fail without disrupting the whole sequence.
This means that the running of single callables needs to be reported separately.
Let's move detailed discussion of this to #pylib; it's going to get
pytest-specific.  I am not sure yet, also taking Jason's confirmation
into account, that supporting yield is worth much, especially since pytest
already has extensive means for parametrization.
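
For illustration, the parametrized equivalent of such a yield-based check
could look roughly like this (a sketch only; ``check`` just stands in for
whatever a generator used to yield)::

    import pytest

    def check(x):              # stand-in for the callable a generator would yield
        assert x < 10

    @pytest.mark.parametrize("x", range(10))
    def test_check(x):
        check(x)               # each parameter is collected and reported separately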

 also there is another upcoming task:
 yield tests based on the new async api

that hasn't materialized yet i think.

holger

 
 best,
 Ronny
 
 On 12/21/2012 10:51 AM, holger krekel wrote:
  Hi testing folks, hi Jason,
 
  i am looking at some recent pytest issues and would like to simplify
  pytest's internal fixture handling.  One obstacle/complication is
  yield-tests, i.e. the style of producing tests with a generator::
 
      def test_gen(self):
          for x in range(10):
              yield check, x
 
  This currently produces 10 test items, calling the check function with the
  respective parameter.  This by itself is not a big deal to support.
  However, some people expect fixtures/setup_function/method functions to
  execute before the generator does, and this mixes the collection with the
  runtest phase.  Unfortunately, nose also supports this notion although i am
  wondering how nose2 is going to deal with it as Jason also plans to separate
  collection from running.
 
  So i am thinking about dropping fixture/setup support for yield-tests in
  pytest but OTOH i'd like to keep backward and nose compatibility.  As far as
  pytest is concerned, it has many other means of parametrization independent
  of yield, see e.g.
  http://pytest.org/latest/fixture.html#fixture-parametrize
  and http://pytest.org/latest/parametrize.html and more and more people
  are starting to use those (pytest has not documented or recommended yield
  for 1-2 years now).
 
  If anyone has any input/thoughts on this, please shoot.
 
  best,
  holger
 
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] this list moving to pytest-...@python.org

2012-12-21 Thread holger krekel
Hi folks,

this list is going to move to python.org, probably even today.
The new list address is:

pytest-...@python.org

the commit list will be:

pytest-com...@python.org

The old addresses (py-dev@codespeak.net, py-...@codespeak.net) will
continue to function so the move shouldn't be disruptive.

I am going to send another mail once the move is complete.

best,
holger

___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] reversing fixture/xunit setup call order?

2012-12-20 Thread holger krekel
On Wed, Dec 19, 2012 at 16:27 -0700, lahwran wrote:
 that looks good to me. I'm not sure I understand the reasoning behind
 having `clsarg(...) # @pytest.fixture(class, autouse=False)` come after
 setupClass, though.

The idea is that implicit fixtures come before explicit ones.
All autouse=True fixtures are implicit - they are not requested explicitly
through a funcarg or a @pytest.mark.usefixtures(...) decoration.

best,
holger

 On Wed, Dec 19, 2012 at 1:42 AM, holger krekel hol...@merlinux.eu wrote:
 
  On Tue, Dec 18, 2012 at 21:11 +0100, Floris Bruynooghe wrote:
   On 16 December 2012 12:23, holger krekel hol...@merlinux.eu wrote:
 Currently, if you define e.g. an autouse fixture function it is going to
 be called _after_ the xUnit setup functions.  This is especially
 surprising when you do a session-scoped autouse fixture.  I am wondering
 if we could reverse the order, i.e. call fixture functions (including
 autouse-fixtures of course) ahead of xUnit setup methods.
 
 any thoughts?
  
   Agreed, I think it would be a good idea to have at least autouse
   fixtures before the xUnit setup.
 
  However, i realize we also have scopes.  And this is where any attempt
  to decide ordering between pytest versus xUnit fixtures seems to break
  down:
 
  - We don't want setup_class to execute after a function-scoped pytest fixture.
 
  - We don't want a class-scoped pytest fixture to execute after setup_method.
 
  Maybe, we could internally add autouse-fixtures at module/class/function
  scope which would look for setupX/teardownX and act accordingly.  This
  way xUnit setup/teardown methods would appear as pytest fixtures.
  Here is an example of how it could look for a mix of
  xUnit/pytest class/function scoped autouse- and non-autouse fixtures:
 
  user-function                     found by
  ------------------------------------------------------------------
  ...
  autoclass(...)    # @pytest.fixture(scope="class", autouse=True)
  setup_class(cls)  # internal @pytest.fixture("class", autouse=True)
  clsarg(...)       # @pytest.fixture("class", autouse=False)
  funcfixture(...)  # @pytest.fixture(scope="function", autouse=True)
  setup_method()    # internal @pytest.fixture("function", autouse=True)
  arg1              # @pytest.fixture(scope="function", autouse=False)
  ...
  test_function(arg1, clsarg)
  # teardowns execute in LIFO registration order
 
  Makes sense?
 
  Ideally, we could produce something like the above output with
  some command line option to help debugging.
 
  best,
  holger
 
   Regards,
   Floris
  
  
   --
   Debian GNU/Linux -- The Power of Freedom
   www.debian.org | www.gnu.org | www.kernel.org
  

___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] reversing fixture/xunit setup call order?

2012-12-19 Thread holger krekel
On Tue, Dec 18, 2012 at 21:11 +0100, Floris Bruynooghe wrote:
 On 16 December 2012 12:23, holger krekel hol...@merlinux.eu wrote:
  Currently, if you define e.g. an autouse fixture function it is going to
  be called _after_ the xUnit setup functions.  This is especially
  surprising when you do a session-scoped autouse fixture.  I am wondering
  if we could reverse the order, i.e. call fixture functions (including
  autouse-fixtures of course) ahead of xUnit setup methods.
 
  any thoughts?
 
 Agreed, I think it would be a good idea to have at least autouse
 fixtures before the xUnit setup.

However, i realize we also have scopes.  And this is where any attempt
to decide ordering between pytest versus xUnit fixtures seems to break down:

- We don't want setup_class to execute after a function-scoped pytest fixture.

- We don't want a class-scoped pytest fixture to execute after setup_method.

Maybe, we could internally add autouse-fixtures at module/class/function
scope which would look for setupX/teardownX and act accordingly.  This
way xUnit setup/teardown methods would appear as pytest fixtures.
Here is an example of how it could look for a mix of
xUnit/pytest class/function scoped autouse- and non-autouse fixtures:

user-function                     found by
------------------------------------------------------------------
...
autoclass(...)    # @pytest.fixture(scope="class", autouse=True)
setup_class(cls)  # internal @pytest.fixture("class", autouse=True)
clsarg(...)       # @pytest.fixture("class", autouse=False)
funcfixture(...)  # @pytest.fixture(scope="function", autouse=True)
setup_method()    # internal @pytest.fixture("function", autouse=True)
arg1              # @pytest.fixture(scope="function", autouse=False)
...
test_function(arg1, clsarg)
# teardowns execute in LIFO registration order

Makes sense?

Ideally, we could produce something like the above output with 
some command line option to help debugging.
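
As a very rough sketch of that idea (hypothetical names, not actual pytest
internals), such an internal bridge fixture for setup_method/teardown_method
might look like::

    import pytest

    @pytest.fixture(autouse=True)        # function scope is the default
    def _xunit_setup_method(request):
        # hypothetical internal fixture: bridge xUnit-style methods into the
        # fixture ordering so they run after class-scoped pytest fixtures
        instance = request.instance
        if instance is None:
            return
        setup = getattr(instance, "setup_method", None)
        teardown = getattr(instance, "teardown_method", None)
        if setup is not None:
            setup(request.function)
        if teardown is not None:
            request.addfinalizer(lambda: teardown(request.function))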

best,
holger

 Regards,
 Floris
 
 
 --
 Debian GNU/Linux -- The Power of Freedom
 www.debian.org | www.gnu.org | www.kernel.org
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] getting syspath handling right (was Re: making py.test ignore an __init__.py)

2012-12-17 Thread holger krekel
Hi lahwran, all,

On Sun, Dec 16, 2012 at 23:34 -0700, lahwran wrote:
 Hi, I've got a bit of a problem related to how pytest determines the fully
 qualified name for a module. I have a django 1.3 layout project which has
 an __init__.py at its root, due to oddities in how django functions. so I
 have something like this layout:
 
 ~/project_venv/ # containing the virtualenv for the project
 ~/project_venv/project/ # the directory I cd to when I work
 ~/project_venv/project/.git
 ~/project_venv/project/__init__.py # because django's stuff doesn't work
 without it
 ~/project_venv/project/applications/
 ~/project_venv/project/applications/__init__.py
 ~/project_venv/project/applications/projectapp/
 ~/project_venv/project/applications/projectapp/__init__.py
 ~/project_venv/project/applications/projectapp/some_file.py
 ~/project_venv/project/applications/projectapp/some_other_file.py
 ~/project_venv/project/applications/projectapp/tests/
 ~/project_venv/project/applications/projectapp/tests/__init__.py
 ~/project_venv/project/applications/projectapp/tests/test_some_file.py
 ~/project_venv/project/applications/projectapp/tests/test_some_other_file.py
 
 in test_some_file.py, I have something like:
 
 from applications.projectapp.some_file import Herp, Derp, doop
 
 and in test_some_other_file.py, I have something similar:
 
 from applications.projectapp.some_other_file import Herk, Derk, foo
 from applications.projectapp.tests.test_some_file import
 SomeTestUtilityThingy
 
 however, because project/ has an __init__.py, when pytest does its
 collection, rather than importing
 applications.projectapp.tests.test_some_file, it imports,
 project.applications.projectapp.tests.test_some_file! worse, when
 test_some_other_file.py imports test_some_file, it creates a *duplicate* -
 the import creates an applications.projectapp.tests.test_some_file when
 project.applications.projectapp.tests.test_some_file already existed.

evil.  If it were some arbitrary project promoting this strange
__init__.py practice i'd be inclined to say fix it.  If a project
like Django really promotes this, then i guess we have to deal with it :/

 so, my question is: how can I tell pytest to please just pretend that
 __init__.py doesn't exist? right now I'm using conftest.py to shove
 dirname(__file__) onto sys.path, but that was written before I thought to
 check if there was an __init__.py in the root of the project.

It's tricky business to get sys.path manipulations workable in all
the different kinds of real-life situations.  I think nose just adds
the dir of the test module to sys.path and imports it, and if another
test module with the same basename exists in a different directory it
unloads the first module.  Or maybe it even always performs reloading, not sure.
In nose2 this mechanism is to be dropped, anyway.

In python3 unittest discovery requires a toplevel directory setting.
If you don't set it from the command line the current working dir is
used.  Unless i am missing something, that makes importing between
test modules rather fragile.

pytest always tries to import under a fully qualified name by walking up
the directories that contain an __init__.py.  This avoids the reloading
business (which i think is not a good idea; nose2/Jason seem to agree)
and, unlike unittest, it gives test modules a reliable cross-importing
behaviour.  Given the above problem, I think we should refine the pytest
algorithm to also take existing sys.path settings into account and stop
going further up if we find one.  In your situation it would stop at
project/ even though it contains an __init__.py file which would
otherwise lead it to go further up.
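
In rough pseudocode the refined name computation would be something like
(a simplified sketch, not the actual implementation)::

    import os
    import sys

    def dotted_name_for(path):
        # walk up while __init__.py files exist, but stop early once a
        # directory is already on sys.path (the proposed refinement)
        parts = [os.path.splitext(os.path.basename(path))[0]]
        d = os.path.dirname(os.path.abspath(path))
        while os.path.exists(os.path.join(d, "__init__.py")):
            if d in sys.path:
                break
            parts.append(os.path.basename(d))
            d = os.path.dirname(d)
        return ".".join(reversed(parts))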

With python3.3 we need to also allow no __init__.py files at all (the
new namespace import stuff) and just check directly for a sys.path that fits.

Sounds like a plan?  Note that i was also considering inifile settings
or new hooks to influence the behaviour.  But this is not easy to get
right in all situations - at least i didn't manage.  Eventually i came
up with the above refinement.  It's better anyway if things work by
default and you don't have to read lengthy explanations like this
mail here :)

best,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] Tricky parametrization problem

2012-12-17 Thread holger krekel
On Mon, Dec 17, 2012 at 12:24 +0200, Tomi Pieviläinen wrote:
 I have a a simulation that has different kinds of forces to simulate,
 each a different function. I have implemented those functions in
 several modules: baseline python (Numpy), C (accessed via ctypes) and
 PyCUDA. I need to make sure I get the same results from the optimized
 versions compared to the slow but correct python version. Later I will
 be adding modules/implementations for cython, numba etc. too. Not all
 modules are always available. For example cuda depends on the current
 system hardware, so I need to skip some tests.
 
 For each optimized module I want to take the func1, func2 and func3,
 and compare them to the baseline func1, func2 and func3. So I was
 hoping I could do test functions with
 
 @parametrize(('basefunc', 'fastfunc'),
  [(func1, mod.func1),
   (func2, mod.func2),
   (func3, mod.func3)])
 @parametrize(('x', 'y'),
  [(numpy.randn(8), numpy.randn(8)),
   (numpy.randn(16), numpy.randn(16)),
   ... ])
 def test_random_arrays(x, y, basefunc, fastfunc):
     assert_arrays_almost_equal(basefunc(x, y), fastfunc(x, y))
 
 
 and have the mod parametrized. But I can't use @parametrize for
 that, since the modules can't be always imported.
 
 So I was hoping then to have a fixture for the module or the fastfunc,
 that would call pytest.skip() if the import fails, but I would need
 the fixture return a bunch of parameters for just one call which
 I've understood is not possible.
 
 I'm not really sure if what I'm aiming at makes any sense, but hopefully
 someone has an idea on how to do this in a clean way.

I came up with a rather simple approach to handling optional imports.
See here:


http://pytest.org/latest/example/parametrize.html#indirect-parametrization-of-optional-implementations-imports

Works for you?
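
The core of that documented example is roughly the following (a sketch;
the module names are placeholders)::

    import pytest

    @pytest.fixture
    def basemod():
        return pytest.importorskip("base_impl")      # placeholder module name

    @pytest.fixture
    def optmod(request):
        return pytest.importorskip(request.param)    # skips if e.g. cuda is absent

    @pytest.mark.parametrize("optmod", ["impl_c", "impl_cuda"], indirect=True)
    def test_func1(basemod, optmod):
        assert abs(basemod.func1(3.0) - optmod.func1(3.0)) < 1e-6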

Btw, you might also want to check out the pytest-quickcheck plugin:

http://pypi.python.org/pypi/pytest-quickcheck/

best,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] reversing fixture/xunit setup call order?

2012-12-16 Thread holger krekel
Hi all,

Currently, if you define e.g. an autouse fixture function it is going to
be called _after_ the xUnit setup functions.  This is especially
surprising when you do a session-scoped autouse fixture.  I am wondering
if we could reverse the order, i.e. call fixture functions (including
autouse-fixtures of course) ahead of xUnit setup methods.

any thoughts?

holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] reversing fixture/xunit setup call order?

2012-12-16 Thread holger krekel
On Sun, Dec 16, 2012 at 13:21 +0100, Ronny Pfannschmidt wrote:
 sounds like the correct curse of action else legacy tests cant be
 integrated with fixtures propperly

curse of action ... like that one :)

 i wonder if we should go as far as allowing fixtures to be arguments
 to pytest xunit test functions

that'd be tricky at least for setup_module and setup_method/function which
support a positional argument.  Note that you can easily turn your setup
function into one that accepts fixtures:

    @pytest.fixture
    def setup_method(self, request, tmpdir, ...):
        ...

In this case there is no positional argument but you can get the
current function under test via ``request.function``.
I think it's clearer to add that extra line.

holger

 On 12/16/2012 12:23 PM, holger krekel wrote:
 Hi all,
 
 Currently, if you define e.g. an autouse fixture function it is going to
 be called _after_ the xUnit setup functions.  This is especially
 surprising when you do a session-scoped autouse fixture.  I am wondering
 if we could reverse the order, i.e. call fixture functions (including
 autouse-fixtures of course) ahead of xUnit setup methods.
 
 any thoughts?
 
 holger
 

___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] reversing fixture/xunit setup call order?

2012-12-16 Thread holger krekel
On Sun, Dec 16, 2012 at 13:49 -0700, lahwran wrote:
 I think it's best to think of the xunit-style setup and teardown as part of
 the actual test - as in, if you're blindly translating this:
 
 class SomethingTest(TestCase):
     def setUp(self):
         self.doop = 1
 
     def tearDown(self):
         del self.doop
 
     def test_something(self):
         assert self.doop == 1
 
     def test_something_else(self):
         assert self.doop != 2
 
 you'd get this:
 
 
 class TestSomething(object):
     def setUp(self):
         self.doop = 1
 
     def tearDown(self):
         del self.doop
 
     def test_something(self):
         self.setUp()
         try:
             assert self.doop == 1
         finally:
             self.tearDown()
 
     def test_something_else(self):
         self.setUp()
         try:
             assert self.doop != 2
         finally:
             self.tearDown()

 I'm not sure how to demonstrate an equivalent to setUpClass, but my point
 is that fixtures shouldn't even be able to tell that setup and teardown
 aren't part of the actual test method unless they look for it.

That speaks for executing fixtures ahead of xunit methods, right?

 As for allowing funcargs to the setup functions, I think marking them as
 @pytest.fixture(autouse=True) would be great. I do think that it'd be more
 intuitive for people who are used to xunit style if the @pytest.setup thing
 being discussed on the other thread was made available.

Did you know that you can already use @pytest.fixture(autouse=True) on xUnit
setup methods with the current release?  We could think about
introducing a ``@pytest.setup`` shortcut, but i don't think it's worth it.
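
For example (a sketch; the fixture and attribute names are made up)::

    import pytest

    class TestSomething:
        @pytest.fixture(autouse=True)
        def setup_fixture(self, request, tmpdir):
            # runs before each test method, like setup_method, but can
            # request arbitrary fixtures (tmpdir is just an example)
            self.doop = 1
            self.workdir = tmpdir
            request.addfinalizer(lambda: delattr(self, "doop"))

        def test_something(self):
            assert self.doop == 1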

best,
holger

 On Sun, Dec 16, 2012 at 12:24 PM, holger krekel hol...@merlinux.eu wrote:
 
  On Sun, Dec 16, 2012 at 13:21 +0100, Ronny Pfannschmidt wrote:
   sounds like the correct curse of action else legacy tests cant be
   integrated with fixtures propperly
 
  curse of action ... like that one :)
 
   i wonder if we should go as far as allowing fixtures to be arguments
   to pytest xunit test functions
 
  that'd be tricky at least for setup_module and setup_method/function which
  support a positional argument.  Note that you can easily turn your setup
  function into one that accepts fixtures:
 
   @pytest.fixture
   def setup_method(self, request, tmpdir, ...):
       ...
 
  In this case there is no positional argument but you can get the
  current function under test via ``request.function``.
  I think it's clearer to add that extra line.
 
  holger
 
   On 12/16/2012 12:23 PM, holger krekel wrote:
   Hi all,
   
   Currently, if you define e.g. an autouse fixture function it is going to
   be called _after_ the xUnit setup functions.  This is especially
   surprising when you do a session-scoped autouse fixture.  I am wondering
   if we could reverse the order, i.e. call fixture functions (including
   autouse-fixtures of course) ahead of xUnit setup methods.
   
   any thoughts?
   
   holger
  
 
 


___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] steps to include a new plugin

2012-11-27 Thread holger krekel
Hi Adam,

On Mon, Nov 26, 2012 at 11:25 -0500, Adam Goucher wrote:
 If I wanted to try and add https://github.com/adamgoucher/pytest-marks
 to the main pytest distribution, is there a process for consideration,
 code style rules, etc.?

posting here is just fine.  After discussion and agreement a pull
request with tests and docs would be the next step.

 The idea of this plugin is to allow script creators to not have to do
 
  @pytest.mark.red
  @pytest.mark.green
  @pytest.mark.blue
  @pytest.mark.black
  @pytest.mark.orange
  @pytest.mark.pink
  def some_test_method(self):
      # some check-y stuff
 
 but rather
 
  @pytest.marks('red', 'green', 'blue', 'black', 'orange', 'pink')
  def some_test_method(self):
      # some check-y stuff

I can see how each mark consuming a line can be cumbersome.  I wonder if
there would be a way to have less line noise, however.  For example::

    @pytest.mark.red.green.blue.black.orange.pink
    def test_method(...):
        ...
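
For what it's worth, a plain helper could already do something similar today
(a sketch, not part of pytest)::

    import pytest

    def marks(*names):
        # apply several pytest.mark decorators in a single line
        def decorate(func):
            for name in reversed(names):
                func = getattr(pytest.mark, name)(func)
            return func
        return decorate

    @marks("red", "green", "blue", "black", "orange", "pink")
    def test_something():
        assert True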

best,
holger

 -adam
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] using tmpdir/monkeypatch/... from non-function scopes

2012-11-20 Thread holger krekel
On Tue, Nov 20, 2012 at 09:52 +, Floris Bruynooghe wrote:
 On 19 November 2012 22:04, holger krekel hol...@merlinux.eu wrote:
  A tmpdir requested in function-scope and a tmpdir requested with session
  scope would be two different directories.  I don't see a problem with this,
  do you?
 
 When they both have a side-effect, like e.g. chdir, this could be an
 issue I thought.
 
 Also, which value does the test function see when it requests tmpdir
 in this case?  I'm guessing it would get the tmpdir instance closest
 to itself, i.e. function-scope over module- or session-scope.  But
 maybe it would be useful if it could also retrieve the value of other
 scopes?  E.g. tmpdir.session_scope is the other tmpdir instance?

if you need differentiation you could do::

    @pytest.fixture(scope="module")
    def tmpdir_module(tmpdir):
        return tmpdir

    def test_function(tmpdir_module):
        ...

Alternatively, we could think about::

    @pytest.mark.usefixtures("tmpdir:module")
    def test_function(tmpdir):
        ...

best,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] pytest-2.3.4: bugfixes and extended selection with -k expr

2012-11-20 Thread holger krekel
pytest-2.3.4: stabilization, more flexible selection via -k expr
=================================================================

pytest-2.3.4 is a small stabilization release of the py.test tool
which offers uebersimple assertions, scalable fixture mechanisms
and deep customization for testing with Python.  This release
comes with the following fixes and features:

- make the -k option accept an expression the same as with -m, so that one
  can write: -k "name1 or name2" etc.  This is a slight usage incompatibility
  if you used special syntax like "TestClass.test_method", which you now
  need to write as -k "TestClass and test_method" to match a certain
  method in a certain test class.
- allow to dynamically define markers via
  item.keywords[...]=assignment integrating with -m option
- yielded test functions will now have autouse-fixtures active but 
  cannot accept fixtures as funcargs - it's anyway recommended to
  rather use the post-2.0 parametrize features instead of yield, see:
  http://pytest.org/latest/example/parametrize.html
- fix autouse-issue where autouse-fixtures would not be discovered
  if defined in an a/conftest.py file and tests in a/tests/test_some.py
- fix issue226 - LIFO ordering for fixture teardowns
- fix issue224 - invocations with 256 char arguments now work
- fix issue91 - add/discuss package/directory level setups in example
- fixes related to autouse discovery and calling

Thanks in particular to Thomas Waldmann for spotting and reporting issues.

See

 http://pytest.org/

for general information.  To install or upgrade pytest:

pip install -U pytest # or
easy_install -U pytest

best,
holger krekel
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] Contributing to py

2012-11-19 Thread holger krekel
Hello Philipp,

On Mon, Nov 19, 2012 at 12:41 +0100, Philipp Konrad wrote:
 Hello,
 
 my name is Philipp Konrad, I am a computer science student, a young Python
 programmer and researcher from Vienna, Austria.

welcome!

 My developer experience started around two years ago in Java, but half a year
 ago I was introduced to the Python world.
 I want to contribute to the py or py.test project and can devote one
 working day per week. I have never contributed
 to an open source project before, so I would need some help with my first steps.

sure.  pytest fits better than py to contribute to, i think.

- 1. Where is a good point to start? Is there a good site with first
steps, a manual or something similar?

This depends on your prior experience.  To begin with, i assume
you have walked through http://pytest.org including some of the examples.
A few answers would help to better understand where you are starting from::

- Do you have experience in some form of automated testing?  Have you
  played with nose, unittest?  Played with pytest itself?
- Are you familiar with mercurial or git?  Bitbucket.org?
- Are you familiar with Python2 versus Python3 differences?
- Have you written docutils/reStructuredText?
- Have you ever written a parser for configuration files?
- Have you written a distributed application?

- 2. Do you have special coding / testing guidelines / a 'code of conduct'
in addition to PEP8?

Apart from PEP8 not much beyond general good practice, e.g.
not using any global state, and writing a test for each feature added/bug fixed
along with the actual change.  Usually changes are developed in bitbucket
clones and then you open a pull request.

- 3. In which domain do you need new people?
  - 3.1 Code new features
  - 3.2 Documentation
  - 3.3 Write unit and integration tests
  - 3.4 Translation
  - 3.5 Community work

All of these domains make some sense.  You should probably try to tackle
an issue listed in http://bitbucket.org/hpk42/pytest/issues - this will
require reading up on and understanding how pytest works internally.
 
One bigger area would be to

a) develop a pytest plugin for testing command line applications
b) rewrite pytest's own tests to use that plugin

for a) i have a starting point including some specs and ideas.

Other areas include for example writing an http server that allows one to
search/manage the many examples currently in sections of the
ReST documents in doc/en/example/*.

- 4. Is there an organizational structure or hierarchy that I should
bear in mind?

Rather flat.  It's probably best if you establish an IRC presence at
irc.freenode.net.  Apart from me (hpk42) there usually are ronny and
flub who have contributed a lot of code already.  Others have helped
in various ways and may also be able to answer questions.

best,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] Contributing to py

2012-11-19 Thread holger krekel
On Mon, Nov 19, 2012 at 14:54 +0100, Philipp Konrad wrote:
 Hello Holger,
 
 - Do you have experience in some form of automated testing? Have you
   played with nose, unittest?  Played with pytest itself?
  Regularly I use unittest and basic applications of pytest. So far, I
 never have used nose.
 
 - are you familiar with mercurial or git?  Bitbucket.org?
  No, I only used subversion.

For contributing you will need to learn the basics of mercurial
and bitbucket.

 - Are you familiar with Python2 versus Python3 differences?
  No, I have only used Python 2.
 
 - have written docutils/RestructuredText?
  Yes, I used RestructuredText and create some documentation with Sphinx.
 
 - ever written a parser for configuration files?
  Yes.
 
 - written a distributed application?
  No.
 
 Great, so I will try to solve an issue from the bitbucket list.
 Can you recommend me one or should I just choose by myself?

Try to choose one.  I feel a bit bad sending you to the pytest source code
without much guidance, though.  If you can't make sense of it I can try to
write up a bit of docs, but that might take a few days.  Let me just say
that pytest's functionality is implemented almost entirely in plugins.
The core and the plugins themselves usually call each other through hooks,
defined in _pytest/hookspec.py.  Whenever you see something like
``hook.pytest_*(...)`` it is a call to such a hook - basically a 1:N
relation, because there might be multiple hook functions involved coming
from multiple plugins.
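
For example, a conftest.py can take part in such a hook call like this
(the hook name is real, the reordering is just a toy illustration)::

    # conftest.py -- implementing one of the hooks declared in _pytest/hookspec.py
    def pytest_collection_modifyitems(config, items):
        # every plugin/conftest defining this hook gets called (the 1:N relation);
        # here we simply reverse the collected test order
        items.reverse()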

best,
holger

 
 2012/11/19 holger krekel hol...@merlinux.eu
 
  Hello Philipp,
 
  On Mon, Nov 19, 2012 at 12:41 +0100, Philipp Konrad wrote:
   Hello,
  
   my name is Philipp Konrad, I am a computer science student, a young
  Python
   programmer and researcher from Vienna, Austria.
 
  welcome!
 
   My developer experience started around two years ago in Java, but half
  year
   ago I was introduced to the Python world.
   I want to contribute to the py or py.test project and can assign one
   working day per week. Generally, I never have contributed
   to an open source project, so I would need some help for my first steps.
 
  sure.  pytest fits better than py to contribute to, i think.
 
   - 1. Where is a good point to start? Is there a good site with first
   steps, a manual or something similar?
 
   This depends on your prior experience.  To begin with, i assume
   you have walked through http://pytest.org including some of the examples.
  A few answers would help to better understand where you are starting from::
 
  - Do you have experience in some form of automated testing? Have you
played with nose, unittest?  Played with pytest itself?
  - are you familiar with mercurial or git?  Bitbucket.org?
  - Are you familiar with Python2 versus Python3 differences?
  - have written docutils/RestructuredText?
  - ever written a parser for configuration files?
  - written a distributed application?
 
   - 2. Do you have special coding / testing guidelines / a 'code of conduct'
   in addition to PEP8?
 
   Apart from PEP8 not much beyond general good practice, e.g.
   not using any global state, and writing a test for each feature added/bug fixed
   along with the actual change.  Usually changes are developed in bitbucket
   clones and then you open a pull request.
 
   - 3. In which domain do you need new people?
     - 3.1 Code new features
     - 3.2 Documentation
     - 3.3 Write unit and integration tests
     - 3.4 Translation
     - 3.5 Community work
 
   All of these domains make some sense.  You should probably try to tackle
   an issue listed in http://bitbucket.org/hpk42/pytest/issues - this will
   require reading up on and understanding how pytest works internally.
 
  One bigger area would be to
 
   a) develop a pytest plugin for testing command line applications
  b) rewrite pytest's own tests to use the plugin
 
  for a) i have a starting point including some specs and ideas.
 
   Other areas include for example writing an http server that allows one to
   search/manage the many examples currently in sections of the
   ReST documents in doc/en/example/*.
 
   - 4. Is there an organizational structure or hierarchy that I should
  bear in mind?
 
  Rather flat.  It's probably best if you establish an IRC presence at
  irc.freenode.net .  Apart from me (hpk42) there usually are ronny and
  flub who have contributed a lot of code already.  Others have helped
  in various ways and may also be able to answer questions.
 
  best,
  holger
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] using tmpdir/monkeypatch/... from non-function scopes

2012-11-19 Thread holger krekel
Hi folks,

while writing tests on a new project using pytest-2.3 i noticed an
inconvenience again: fixtures such as tmpdir or monkeypatch could,
implementation-wise, easily support being called from non-function-scoped
fixtures.  But currently if you do::

    @pytest.fixture(scope="module")
    def something(monkeypatch):
        ...

you get a ScopeMismatchError because the function-scoped monkeypatch
fixture cannot be called from a module-scoped fixture.  I am considering
introducing an "any" scope for a fixture declaration that would avoid
this error.  The monkeypatch and something fixtures would then look
like this::

    @pytest.fixture(scope="any")
    def monkeypatch(...):
        # unmodified builtin monkeypatch implementation

    @pytest.fixture(scope="module")
    def something(monkeypatch):
        ...

This would not raise a ScopeMismatchError but just work:
monkeypatch-finalizers would be called when the last test in a module
using the "something" fixture has run.

However, if we additionally have a function-scoped fixture::

    @pytest.fixture(scope="function")
    def other(monkeypatch):
        ...

The monkeypatch instance could obviously not be the same object as
the one in ``something(monkeypatch)`` above.  monkeypatch-finalizers
would rather be called after a test function using the ``other``
fixture has finished.  I am not sure if there is potential for
confusion about this.

If there are any questions or comments, please shoot.

best,
holger

___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] using tmpdir/monkeypatch/... from non-function scopes

2012-11-19 Thread holger krekel
On Mon, Nov 19, 2012 at 21:53 +, Floris Bruynooghe wrote:

  @pytest.fixture(scope=any)
  def monkeypatch(...):
  # unmodified builtin monkeypatch implementation
 
  @pytest.fixture(scope=module)
  def something(monkeypatch):
  ...
 
  This would not raise a ScopeMismatchError but just work:
  monkeypatch-finalizers would be called when the last test in a module
  using the something fixture has run.
 
  However, if we additionally have a function-scoped fixture::
 
  @pytest.fixture(scope=function)
  def other(monkeypatch):
  ...
 
  The monkeypatch instance could obviously not be the same object as
  the one in ``something(monkeypatch)`` above.  monkeypatch-finalizers
  would raher be called after a test function using the other
  fixture has finalized.  I am not sure if there is confusion potential
  about this.
 
 For monkeypatch this would not be too bad as you can have two
 instances which don't cause harm to each other.  But what happens with
 e.g. tmpdir?  How can you avoid a temporary directory being created at
 the session-level and then later one at the function level?

A tmpdir requested in function-scope and a tmpdir requested with session
scope would be two different directories.  I don't see a problem with this,
do you?

 It's almost as if the any-scoped fixture needs to be able to specify how
 things should be handled when it's requested from different scopes.

An any-scoped fixture function can look at request.scope and act accordingly.

An interesting question maybe is: how can a function-scoped resource
_use_ itself but higher scoped? :)  (actually not too hard, i guess:
you could just invent a fixture name, have its fixture function use a
higher scope and pass through the resource, e.g. tmpdir.)
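
A workable sketch of that pass-through idea today (the fixture name is
invented; it builds the directory directly because the builtin tmpdir
fixture is function-scoped)::

    import shutil
    import tempfile

    import pytest

    @pytest.fixture(scope="module")
    def tmpdir_module(request):
        # invented name with a higher scope, handing out one directory
        # shared by all tests in a module
        d = tempfile.mkdtemp()
        request.addfinalizer(lambda: shutil.rmtree(d))
        return d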

best and good night,
holger

___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] using tmpdir/monkeypatch/... from non-function scopes

2012-11-19 Thread holger krekel
On Mon, Nov 19, 2012 at 22:04 +, holger krekel wrote:
   the one in ``something(monkeypatch)`` above.  monkeypatch-finalizers
   would raher be called after a test function using the other
   fixture has finalized.  I am not sure if there is confusion potential
   about this.
  
  For monkeypatch this would not be too bad as you can have two
  instances which don't cause harm to each other.  But what happens with
  e.g. tmpdir?  How can you avoid a temporary directory being created at
  the session-level and then later one at the function level?
 
 A tmpdir requested in function-scope and a tmpdir requested with session
 scope would be two different directories.  I don't see a problem with this,
 do you?

It's btw probably better to name it "each", as in scope="each", which
makes it clearer that something happens for each scope separately.

best,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] @pytest.setup as shortcut for @pytest.fixture(autouse=True)?

2012-10-28 Thread holger krekel
On Sat, Oct 27, 2012 at 23:53 +0100, Floris Bruynooghe wrote:
 On 26 October 2012 21:19, Ronny Pfannschmidt ronny.pfannschm...@gmx.de 
 wrote:
  i think just having the name setup will make people
  wonder about the teardown again
 
  if i recall correctly, the name setup did
  cause people to misunderstand already
  (expecting a teardown of some kind)
 
  Unfortunately i can't think of a fitting short name.
 
 @pytest.autofixture, but I'd be -1 on that.
 
 If the @pytest.setup shortcut is deemed required then a
 @pytest.teardown shortcut could also be made for:
 
 @pytest.setup
 def generated_teardown_func(request):
     request.addfinalizer(original_teardown_func)
 
 I have no opinion on whether such shortcuts are useful; they are more
 than one way to do things, which makes me think they should not exist.
 But if, when talking to users, many examples show such usage as
 common, then maybe they should be considered.  Personally I haven't
 wanted a plain setup/teardown since funcargs, so I tend to think they
 are just people's resistance to change.

probably true and personally i have the same experience in my projects.

However, due to its unittest/nose/trial support, py.test has a bit of a
multi-paradigm approach. So i think it's sometimes ok to offer more than
one way to do things because people are really coming from different
backgrounds.  However, i think Ronny also has a point reminding us how it
came to be pytest.fixture instead of pytest.setup. So let's keep things as
they are for now and, if anything, provide examples and help to people
beginning to use them.

best,
holger

 Regards,
 Floris
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] pytest-twisted 1.0

2012-10-22 Thread holger krekel
Hi Ralf,

On Mon, Oct 22, 2012 at 00:53 +0200, Ralf Schmitt wrote:
 Hi,
 
 I've uploaded pytest-twisted to pypi [1]. It's a plugin which allows testing
 twisted code with pytest. The code is also available on github [2].
 
 
 [1] http://pypi.python.org/pypi/pytest-twisted
 [2] https://github.com/schmir/pytest-twisted

interesting little plugin.  On a general note, using pytest_configure
is not the best way to set up global state.  It's better to do this::

    @pytest.fixture(scope="session", autouse=True)
    def setup_twisted_reactor(request):
        ...
        request.addfinalizer(...)

This autouse-fixture (i.e. an automatically active fixture without the
need to use it as a funcarg or declare it via usefixtures) will be executed
only in processes which execute tests, so it works more cleanly with
distributed testing.  pytest_configure is also called in the xdist master
process, which does not execute or collect tests at all.

If there were multiple reactors / global states you could also use
params to run the whole test suite multiple times with different
reactors - only one reactor / global state instance will be active at
any time. That's not possible when using pytest_configure.
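
A generic sketch of that params idea (the parameter names are made up; the
body only stands in for real reactor/global-state setup)::

    import pytest

    @pytest.fixture(scope="session", autouse=True, params=["reactor-a", "reactor-b"])
    def global_state(request):
        # the whole test session runs once per param, with only one
        # "global state" active at a time
        state = {"name": request.param}        # stand-in for reactor setup
        request.addfinalizer(state.clear)      # teardown before the next param
        return state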

best,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] 2.3.1: regression and freebsd fixes

2012-10-20 Thread holger krekel

Did a quick 2.3.1 with some fixes:

- fix issue202 - regression with fixture functions/funcarg factories:  
  using self with class-level fixtures is now safe again and works as
  in 2.2.4.  Thanks to Eduard Schettino for the quick bug report.

- disable pexpect pytest self tests on Freebsd - thanks Koob for the 
  quick reporting

- fix/improve interactive docs with --markers

I am planning to do further quick regression-fixing minor 2.3.* releases
if necessary - not going to announce each of them, though.

So make sure you have the newest version with e.g. "pip install -U pytest"
and keep the bug reports flowing :)

best,
holger



Changes between 2.3.0 and 2.3.1
-------------------------------

- fix issue202 - fix regression: using self from fixture functions now
  works as expected (it's the same self instance that a test method
  which uses the fixture sees)

- skip pexpect using tests (test_pdb.py mostly) on freebsd* systems
  due to pexpect not supporting it properly (hanging)

- link to web pages from --markers output which provides help for
  pytest.mark.* usage.
On Fri, Oct 19, 2012 at 09:44 +, holger krekel wrote:
 pytest-2.3: improved fixtures / better unittest integration
 ============================================================
 
 pytest-2.3 comes with many major improvements for fixture/funcarg management
 and parametrized testing in Python.  It is now easier, more efficient and
 more predictable to re-run the same tests with different fixture
 instances.  Also, you can directly declare the caching scope of
 fixtures so that dependent tests throughout your whole test suite can
 re-use database or other expensive fixture objects with ease.  Lastly,
 it's possible for fixture functions (formerly known as funcarg
 factories) to use other fixtures, allowing for a completely modular and
 re-useable fixture design.
 
 For detailed info and tutorial-style examples, see:
 
 http://pytest.org/latest/fixture.html
 
 Moreover, there is now support for using pytest fixtures/funcargs with
 unittest-style suites, see here for examples:
 
 http://pytest.org/latest/unittest.html
 
 Besides, more unittest-test suites are now expected to simply work
 with pytest.
 
 All changes are backward compatible and you should be able to continue
 to run your test suites and 3rd party plugins that worked with
 pytest-2.2.4.
 
 If you are interested in the precise reasoning (including examples) of
 the pytest-2.3 fixture evolution, please consult
 http://pytest.org/latest/funcarg_compare.html
 
 For general info on installation and getting started:
 
 http://pytest.org/latest/getting-started.html
 
 Docs and PDF access as usual at:
 
 http://pytest.org
 
 and more details for those already in the knowing of pytest can be found
 in the CHANGELOG below.
 
 Particular thanks for this release go to Floris Bruynooghe, Alex Okrushko
 Carl Meyer, Ronny Pfannschmidt, Benjamin Peterson and Alex Gaynor for helping 
 to get the new features right and well integrated.  Ronny and Floris
 also helped to fix a number of bugs and yet more people helped by
 providing bug reports.
 
 have fun,
 holger krekel
 
 
 Changes between 2.2.4 and 2.3.0
 -------------------------------
 
 - fix issue202 - better automatic names for parametrized test functions
 - fix issue139 - introduce @pytest.fixture which allows direct scoping
   and parametrization of funcarg factories.  Introduce new @pytest.setup
   marker to allow the writing of setup functions which accept funcargs.
 - fix issue198 - conftest fixtures were not found on windows32 in some
   circumstances with nested directory structures due to path manipulation 
 issues
 - fix issue193: skip test functions which were parametrized with empty
   parameter sets
 - fix python3.3 compat, mostly reporting bits that previously depended
   on dict ordering
 - introduce re-ordering of tests by resource and parametrization setup
   which takes precedence to the usual file-ordering
 - fix issue185 monkeypatching time.time does not cause pytest to fail
 - fix issue172: duplicate call of pytest.setup-decorated setup_module
   functions
 - fix --junitxml=path construction so that if tests change the
   current working directory and the path is a relative path
   it is constructed correctly from the original current working dir.
 - fix python setup.py test example to cause a proper errno return
 - fix issue165 - fix broken doc links and mention stackoverflow for FAQ
 - catch unicode-issues when writing failure representations
   to terminal to prevent the whole session from crashing
 - fix xfail/skip confusion: a skip-mark or an imperative pytest.skip
   will now take precedence before xfail-markers because we
   can't determine xfail/xpass status in case of a skip. see also:
   
 http://stackoverflow.com/questions/11105828/in-py-test-when-i-explicitly-skip-a-test-that-is-marked-as-xfail-how-can-i-get
 
 - always report installed 3rd party plugins

[py-dev] pytest-2.3: improved fixtures/funcargs and unittest support

2012-10-19 Thread holger krekel
pytest-2.3: improved fixtures / better unittest integration
============================================================

pytest-2.3 comes with many major improvements for fixture/funcarg management
and parametrized testing in Python.  It is now easier, more efficient and
more predictable to re-run the same tests with different fixture
instances.  Also, you can directly declare the caching scope of
fixtures so that dependent tests throughout your whole test suite can
re-use database or other expensive fixture objects with ease.  Lastly,
it's possible for fixture functions (formerly known as funcarg
factories) to use other fixtures, allowing for a completely modular and
re-useable fixture design.

For detailed info and tutorial-style examples, see:

http://pytest.org/latest/fixture.html

Moreover, there is now support for using pytest fixtures/funcargs with
unittest-style suites, see here for examples:

http://pytest.org/latest/unittest.html

Besides, more unittest-test suites are now expected to simply work
with pytest.

All changes are backward compatible and you should be able to continue
to run your test suites and 3rd party plugins that worked with
pytest-2.2.4.

If you are interested in the precise reasoning (including examples) of
the pytest-2.3 fixture evolution, please consult
http://pytest.org/latest/funcarg_compare.html

For general info on installation and getting started:

http://pytest.org/latest/getting-started.html

Docs and PDF access as usual at:

http://pytest.org

and more details for those already in the knowing of pytest can be found
in the CHANGELOG below.

Particular thanks for this release go to Floris Bruynooghe, Alex Okrushko
Carl Meyer, Ronny Pfannschmidt, Benjamin Peterson and Alex Gaynor for helping 
to get the new features right and well integrated.  Ronny and Floris
also helped to fix a number of bugs and yet more people helped by
providing bug reports.

have fun,
holger krekel


Changes between 2.2.4 and 2.3.0
-------------------------------

- fix issue202 - better automatic names for parametrized test functions
- fix issue139 - introduce @pytest.fixture which allows direct scoping
  and parametrization of funcarg factories.  Introduce new @pytest.setup
  marker to allow the writing of setup functions which accept funcargs.
- fix issue198 - conftest fixtures were not found on windows32 in some
  circumstances with nested directory structures due to path manipulation issues
- fix issue193: skip test functions which were parametrized with empty
  parameter sets
- fix python3.3 compat, mostly reporting bits that previously depended
  on dict ordering
- introduce re-ordering of tests by resource and parametrization setup
  which takes precedence to the usual file-ordering
- fix issue185 monkeypatching time.time does not cause pytest to fail
- fix issue172: duplicate call of pytest.setup-decorated setup_module
  functions
- fix --junitxml=path construction so that if tests change the
  current working directory and the path is a relative path
  it is constructed correctly from the original current working dir.
- fix python setup.py test example to cause a proper errno return
- fix issue165 - fix broken doc links and mention stackoverflow for FAQ
- catch unicode-issues when writing failure representations
  to terminal to prevent the whole session from crashing
- fix xfail/skip confusion: a skip-mark or an imperative pytest.skip
  will now take precedence before xfail-markers because we
  can't determine xfail/xpass status in case of a skip. see also:
  
http://stackoverflow.com/questions/11105828/in-py-test-when-i-explicitly-skip-a-test-that-is-marked-as-xfail-how-can-i-get

- always report installed 3rd party plugins in the header of a test run

- fix issue160: a failing setup of an xfail-marked tests should
  be reported as xfail (not xpass)

- fix issue128: show captured output when capsys/capfd are used

- fix issue179: properly show the dependency chain of factories

- pluginmanager.register(...) now raises ValueError if the
  plugin has been already registered or the name is taken

- fix issue159: improve http://pytest.org/latest/faq.html 
  especially with respect to the magic history, also mention
  pytest-django, trial and unittest integration.

- make request.keywords and node.keywords writable.  All descendant
  collection nodes will see keyword values.  Keywords are dictionaries
  containing markers and other info.

- fix issue 178: xml binary escapes are now wrapped in py.xml.raw

- fix issue 176: correctly catch the builtin AssertionError
  even when we replaced AssertionError with a subclass on the
  python level

- factory discovery no longer fails with magic global callables
  that provide no sane __code__ object (mock.call for example)

- fix issue 182: testdir.inprocess_run now considers passed plugins

- fix issue 188: ensure sys.exc_info is clear on python2
  before calling into a test

- fix issue 191: add

Re: [py-dev] Using funcargs with decorators

2012-10-11 Thread holger krekel
Hi Sebastian,

On Thu, Oct 11, 2012 at 11:47 +0200, Sebastian Rahlf wrote:
 Hi!
 
 At work we use a decorator @rollback on selected test functions which
 will rollback any db changes made during that test.
 
 I've recently started using pytest's dependency injection for a few
 use cases, both with @pytest.mark.parametrize(...) and the
 pytest_funcarg__XXX hook.
 Unfortunately, this clashes with our decorated test functions.
 
 How can I make this work?
 
 My first idea was using a custom marker, say @pytest.mark.rollback, and
 doing something like:
 
 def rollback(meth):
     """Original rollback function"""
     ...
 
 def pytest_runtest_setup(item):
     if not isinstance(item, pytest.Function):
         return
     if hasattr(item.obj, 'rollback'):
         item = rollback(item)
 
 Would an approach like this actually work?

I think so - probably you need to call rollback(item.obj) though.

 Sebastian
 
 P.S. I've posted this to stackoverflow before I remembered that there
 is a mailing list
 http://stackoverflow.com/questions/12836134/pytest-using-dependency-injection-with-decorators

I answered there as well.

best,
holger


 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] Using funcargs with decorators

2012-10-11 Thread holger krekel
Hi Sebastian,

On Thu, Oct 11, 2012 at 14:44 +0200, Sebastian Rahlf wrote:
 Hi Holger!
 
  At work we use a decorator @rollback on selected test functions which
  will rollback any db changes made during that test.
 
  I've recently started using pytest's dependency injection for a few
  use cases, both with @pytest.mark.parametrize(...) and the
  pytest_funcarg__XXX hook.
  Unfortunately, this clashes with our decorated test functions.
 
  How can I make this work?
 
  My first idea was using a custom marker, say @pytest.mark.rollback and
  do something like:
 
  def rollback(meth):
  Original rollback function
  ...
 
  def pytest_runtest_setup(item):
  if not isinstance(item, pytest.Function):
  return
  if hasattr(item.obj, 'rollback'):
  item = rollback(item)
 
  Would an approach like this actually work?
 
  I think so - probably you need to call rollback(item.obj) though.
 
 Thanks for your feedback. I've tried it again with the following code:
 
 # conftest.py
 import pytest
 from unittests import rollback
 
 def pytest_configure(config):
     # register an additional marker
     config.addinivalue_line("markers",
         "rollback: rollback any db changes after test")
 
 def pytest_runtest_setup(item):
     if not isinstance(item, pytest.Function):
         return
     if hasattr(item.obj, 'rollback'):
         item.obj = rollback(item.obj)
 
 # test_my_tests.py
 
 import pytest
 
 @pytest.mark.rollback
 def test_rollback(monkeypatch):
     # ...
     assert True
 
 What I get is a TypeError: test_rollback() takes exactly 1 argument (0 given).
 How can I make this work?

ah, now i get it.  You want to assign the function back.
That is indeed not going to work, as pytest then sees the rollback
function (i assume you return another function from this decorator).
What is the decorator-returned function doing?

Did you check out the "transact" example in
http://pytest.org/dev/fixture.html that i referenced in the
stackoverflow answer?
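
For reference, the gist of that example is an autouse fixture which wraps
each test in a transaction and rolls it back afterwards (a sketch; the
``db`` fixture is assumed to exist)::

    import pytest

    class TestDatabaseThings:
        @pytest.fixture(autouse=True)
        def transact(self, request, db):        # "db" is an assumed fixture
            db.begin()
            request.addfinalizer(db.rollback)   # undo all changes after each test

        def test_insert(self, db):
            db.execute("insert ...")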

best,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] Using funcargs with decorators

2012-10-11 Thread holger krekel
Hi Anto,

On Thu, Oct 11, 2012 at 22:10 +0200, Antonio Cuni wrote:
 Hi Holger, Sebastian,
 
 On 10/11/2012 03:16 PM, holger krekel wrote:
  ah, now i get it.  You want to assign the function back.
  That is indeed not going to work as pytest then sees the rollback
  function (i assume you return another function from this decorator).
  What is the decorator-returned function doing?  
 
 I admit I did not follow the discussion deeply. However, if the problem is
 that py.test sees the decorated function (which presumably uses *args and
 **kwargs) instead of the original one, it can be solved by using the same
 technique I used for enforceargs in pypy:
 
 https://bitbucket.org/pypy/pypy/src/7f6d5c878b90/pypy/rlib/objectmodel.py#cl-170
 
 in practice, the trick is to exec() a function def with the correct argument
 list instead of just relying on *args, **kwargs. This way, py.test should be
 able to find the correct signature.

I agree that is one way to solve it.  However, if hiding the
function can be avoided altogether, then it's even better.

On a side note, i am not sure Python's decorator design was such
a great idea.  Maybe it should have been restricted to setting attributes
(like C# and also Java, IIRC), with a separate way to get at those attributed
functions on a per-class, per-module or even global basis.

best,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] [hpk42/pytest] disable the creation of the __pycache__ directory (issue #200)

2012-10-09 Thread holger krekel
On Tue, Oct 09, 2012 at 00:33 -, astrofrog wrote:
 --- you can reply above this line ---
 
 New issue 200: disable the creation of the __pycache__ directory
 https://bitbucket.org/hpk42/pytest/issue/200/disable-the-creation-of-the-__pycache__
 
 astrofrog:
 
 Is there a way to disable the creation of the __pycache__ directories, or
 clean up these directories after testing, without installing any plugin or
 other packages?

Try setting the environment variable PYTHONDONTWRITEBYTECODE to some value.
This is a generic Python Interpreter setting.

best,
holger

___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] Autoactive fixtures

2012-10-06 Thread holger krekel
On Sat, Oct 06, 2012 at 01:08 +0100, Floris Bruynooghe wrote:
 On 6 October 2012 00:02, holger krekel hol...@merlinux.eu wrote:
  Hi Floris,
 
  On Fri, Oct 05, 2012 at 23:42 +0100, Floris Bruynooghe wrote:
  Hi Holger,
 
  One nice feature of the funcarg/setup merge into fixture is that you
  can now return a value from an autoactive fixture and request it
  anywhere else.  I didn't think of this before but this is surprisingly
  useful as it provides a great alternative to storing things on e.g.
  session or item objects.
 
  Yes, i think so too.
 
   However one issue I discovered is that it is possible for a user to
   accidentally override an autoactive fixture.  If you create a fixture
   with an identical name to the autoactive one it will be lost.  I think
   this can be problematic for plugins.
 
   Hum, indeed, we have a global namespace for fixtures and unintentionally
   shadowing plugin fixtures with project-specific ones is easy.
 
  What do you think the correct behaviour should be?  I realise changing
  this could be hard/ugly.
 
  Not sure it's hard. first we need an idea what to do :)
 
   Hum. We could error out on the definition of fixtures if they already exist -
   unless a flag aka "i know what i am doing" (e.g. override=True) is supplied.
 
 That would not be as nice in the normal case.  I do tend to extend
 funcargs quite a lot in my test suites.  Although I guess it's not a
 horrible thing, it would have to default to override=True when using
 the backwards compatible pytest_funcarg__ syntax which then kind of
 defeats the point.
 
 Or are you proposing to only need this for overriding autoactive fixtures?

Not proposing anything yet, just thinking.  For extending, do you usually refer
to the shadowed fixture by listing it as a function argument?

Maybe time to continue discussing on IRC :)

holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] Autoactive fixtures

2012-10-05 Thread holger krekel
Hi Floris,

On Fri, Oct 05, 2012 at 23:42 +0100, Floris Bruynooghe wrote:
 Hi Holger,
 
 One nice feature of the funcarg/setup merge into fixture is that you
 can now return a value from an autoactive fixture and request it
 anywhere else.  I didn't think of this before but this is surprisingly
 useful as it provides a great alternative to storing things on e.g.
 session or item objects.

Yes, i think so too.

 However one issue I discovered is that is possible for a user to
 accidentally override an autoactive fixture.  If you create a fixture
 with an identical name to the autoactive one it will be lost.  I think
 this can be problematical for plugins.

Hum, indeed, we have a global namespace for fixtures and unintentionally
shadowing plugin fixtures with project-specific fixtures is easy.
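
A minimal sketch of the shadowing problem, written with the autouse=True
spelling that later pytest releases use (this thread calls such fixtures
autoactive); the fixture name is made up:

    import pytest

    # provided by a plugin; meant to be active for every test
    @pytest.fixture(autouse=True)
    def audit_log():
        return []

    # a project conftest.py that happens to reuse the name silently replaces
    # the plugin fixture for that project's tests
    @pytest.fixture
    def audit_log():
        return {"overridden": True}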

 What do you think the correct behaviour should be?  I realise changing
 this could be hard/ugly.

Not sure it's hard. first we need an idea what to do :)

Hum. We could error out on the definition of fixtures if they already exist -
unless a flag aka "i know what i am doing" (e.g. override=True) is supplied.

cheers,
holger

 Regards,
 Floris
 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] using logging in py.test

2012-09-30 Thread holger krekel
Hi Anton,

On Sun, Sep 30, 2012 at 19:16 +0300, Anton P wrote:
 Hi All,
 
 I'd like to use standard logging module with py.test. Messages
 generated from test functions are visible on stdout if -s option is
 set. But log messages generated from conftest.py or other custom
 modules used inside test function are not visible on stdout but
 visible in captured stdout in case -s option isn't set.

 I want to see all the messages from all modules on stdout, not only
 from test function.

An initial conftest.py file is loaded very early and pytest uses
unconditional capturing at that point.  This goes back to a request from the
PyPy folks, who start GCC at this stage and would otherwise see lots of its
output.  If this proves to be a problem we have to see how to make it
conditional.  The problem is that conftest.py files often add command line
options and thus need to be loaded before options are parsed (and thus before
-s is known).  If logging is set up during conftest.py import time it will
see the captured stream and probably keep writing to it from there on.  As
the captured stream is not analyzed/used further in the case of -s, it is
kind of lost, losing the logging messages.

Potential solutions:

- try to avoid doing calls/imports of logging during conftest.py import-time
  (see the sketch below)
- try out the pytest-logging plugin to see if it helps
- add a PYTEST_NOCAPTURE environment variable to pytest to allow switching
  off capturing completely
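
A sketch of the first point, assuming a hypothetical conftest.py: defer the
logging configuration to pytest_configure, which runs after command line
parsing (whether the messages finally reach the terminal still depends on
capturing and -s):

    import logging
    import sys

    def pytest_configure(config):
        root = logging.getLogger()
        if not root.handlers:  # avoid stacking handlers on repeated configure calls
            handler = logging.StreamHandler(sys.stdout)
            handler.setFormatter(logging.Formatter("%(name)s %(levelname)s: %(message)s"))
            root.addHandler(handler)
            root.setLevel(logging.DEBUG)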

best,
holger

 Thank you in advance!
 -Anton
 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] Ordering of setup and funcargs

2012-09-24 Thread holger krekel
Hi Floris,

On Sun, Sep 23, 2012 at 21:23 +0100, Floris Bruynooghe wrote:
 Hello,
 
 As I understand it the order of setup functions being called is:
 
 pytest_runtest_setup
 funcarg resources
 @setup marked functions
 
 And if you mark pytest_runtest_setup with trylast it will be moved
 down to the bottom.  But it is impossible to get a @setup marked
 function to be called before the funcarg resources are called.  This
 means that in order to be able to influence what is possible inside
 funcargs [0] you still need to use pytest_runtest_setup.  I have no
 particular opinion on this (yet) but was surprised about this so
 thought I'd point it out since it still forces plugin writers to use
 pytest_runtest_setup in some cases.

I just made sure setups are called ahead of the funcarg
factories of the main function.  I think it makes more
sense this way.  For example, my setup functions sometimes
decide to skip a test and in this case there is no need to
further setup the main function and its funcargs.
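
A small sketch of that ordering argument, using the @pytest.setup name
discussed in these threads (the RUN_SLOW switch is made up): if the setup
skips, the funcarg factories of the test never need to run:

    import os
    import pytest

    @pytest.setup()
    def maybe_skip():
        # deciding to skip here means no funcarg factory work is wasted
        if not os.environ.get("RUN_SLOW"):
            pytest.skip("set RUN_SLOW=1 to run these tests")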

best,
holger


 
 [0] In my case monkeypatch django so you can't get access to it's
 database using all of django's global state.
 
 Regards,
 Floris
 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] RFC for (hopefully) final commments on pytest-2.3 API

2012-09-18 Thread holger krekel
Hi Brianna, Floris, Carl, Ronny, all,

first, let me thank for all the useful feedback you already provided.
I am back trying to finalize the next pytest release -- FYI trunk is actually
fully functional and implements all previously discussed features.

However, I am still pondering Brianna's feedback (that she found it
surprising that there is a @pytest.setup but no teardown) and a few
other little naming and protocol issues.  If you find some time to state
your opinion on one or another issue i'd very much appreciate it:

- rename @pytest.setup to @pytest.fixture and the docs would talk 
  about fixture instead of setup functions. (See 
  http://pytest.org/dev/setup.html for the current docs)
  This would induce less of a question about @pytest.teardown?!

- rename @pytest.factory to @pytest.funcarg - because with pytest-trunk
  it's already the case that the former pytest_funcarg__NAME is just
  a shortcut - there is no internal distinction anymore if you use 
  the decorator or the prefix or both.  Removing the pytest_funcarg__ prefix
  and using a @pytest.funcarg(...) declaration looks naturally related?!

- introduce a resource-centric convenience protocol for fixtures and 
  funcarg factories.  It would be used if you decorate a class instead of 
  a function and would lead to the calling of (optional) setup/teardown
  methods. Here is an example:

# content of test_module.py

import pytest

@pytest.funcarg
class db:
    def setup(self, funcarg1, funcarg2):
        # called during the setup phase of each test using the db
        # funcarg.  this setup function will be called multiple
        # times in case db.params is defined
        ...

    def teardown(self):
        # called during the teardown phase of a test which
        # previously saw a successful db.setup() call
        ...

- moreover, there could optionally be a classmethod ``configure`` which is
  called ahead of any setup() and would allow dynamically computing the scope
  and params attributes, influencing them e.g. from command line options::

@pytest.funcarg
class db:
    @classmethod
    def configure(cls, config):
        scope = config.option.scope
        params = ...

    ...

  Dynamic scoping/parametrization is used today if you want to have
  broadly-scoped, less-parametrized resources during development but
  function-scoped, more heavily parametrized resources during CI runs.


- the same class convenience protocol would work for fixtures:

import pytest

@pytest.fixture
class transact:
    def setup(self, db):
        db.begin()
        self.db = db

    def teardown(self):
        self.db.commit()

best,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] new resource API documentation comments

2012-08-28 Thread holger krekel
Hi Brianna,

On Thu, Aug 16, 2012 at 18:58 +1000, Brianna Laugher wrote:
 Hi,
 
 I just spent some time reading the dev docs so these comments are just
 based on the docs and not actually using the new API. In general it
 looks pretty sensible.

thanks for your time and your feedback, I appreciate it!

 - being able to have funcargs like funcargs directly is really nice, a
 lot more obvious than calling request.getfuncargvalue('foo')

agreed :)

 - why addfinalizer and not teardown?

The old-style pytest_funcarg__NAME(request) api offered a 
request.addfinalizer() function already so i wanted to carry
on with this name.  I also considered addcleanup() similar to
the python unittest package and am open to other naming suggestions 
before the september release.

 -although I don't really know what cached_setup did, the trinity of
 defining the scope, setup and teardown methods made sense to me. Now
 the scope is in a decorator, the setup is implicitly the entire thing
 that is happening and the teardown seems somewhat awkwardly tacked on.

I see that point.  In principle we could have the factory decorator 
take a teardown parameter and it would receive the factory-created value.
I'd find it a bit awkward to advertise first naming a teardown before even
stating the setup code, though.  And it wouldn't help the asymmetry you are
observing.

 - none of the addfinalizer examples take an argument, how would you
 convert an old-style teardown method to that? e.g. we have a lot of
 funcargs which do things like
 return request.cached_setup(setup=setup,
 teardown=lambda obj: obj.close(),
 scope='function')

In a factory-function you will create the value to return at some point
and then you can use it from the finalizer, like so:

    ...
    val = createval()
    def fin():
        uncreate(val)
    testcontext.addfinalizer(fin)

 - Sometimes things are referred to as funcargs, sometimes they are
 referred to as injected resources. Is there any difference here? The
 funcarg is the actual function and the injected resource is the
 instance in a specific test function? I suggest to use the term
 funcarg as much as possible as it is specific and a necessary
 concept for using pytest with any depth.

I agree.  An early draft considered a radically different approach which used
the term "resources".  Due to feedback similar to yours
I eventually went with using funcargs again.  A funcarg is what appears
in a test or setup function as an argument; it also _is_ a resource.  I am
open to reducing usages of "resource" if that is confusing.

 Some of the following comments are fairly picky so feel free to ignore them.
 
 funcargs.txt
 line 118 - I think in this first incarnation of the smtp funcarg
 (factory? what to call it now?), it doesn't actually need to take a
 testcontext, right?

fixed.


 line 527 Parametrizing test functions - may be worth having a simple
 example showing a combination using both (test data) parametrization
 and funcarg parametrization, to emphasise how they are differently
 useful. Using a database as an example of funcarg parametrization is
 good, maybe better than values like 1/2. I feel like parametrizing
 tests (test data) is probably the more common use case and it is a
 little buried amongst the heavy duty parametrized funcargs.

I think with test function parametrization the parametrized funcargs
usually relate to the particular test and the parameters are simple
objects.  Objects such as Databases are more complex resources and
parametrization is then best defined at factory level.  Adding a mixed
example makes sense.
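
A hedged sketch of such a mixed example, combining a parametrized funcarg
factory (two made-up database backends, using the @pytest.factory decorator
as on trunk at the time) with per-test data parametrization; connect() and
store_and_square() are assumed helpers:

    import pytest

    @pytest.factory(params=["sqlite", "postgres"])
    def db(request):
        return connect(request.param)          # one db per backend parameter

    @pytest.mark.parametrize("value,expected", [(2, 4), (3, 9)])
    def test_square_stored(db, value, expected):
        assert store_and_square(db, value) == expected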

 line 598 Basic ``pytest_generate_tests`` example - I think this is
 not a very basic example! I think it is copied from parametrize.txt
 page, where it might make more sense. Here is what I would consider a
 basic example.
 
 # code
 def isSquare(n):
     n = n ** 0.5
     return int(n) == n

 # test file
 def pytest_generate_tests(metafunc):
     squares = [1, 4, 9, 16, 25, 36, 49]
     for n in range(1, 50):
         expected = n in squares
         if metafunc.function.__name__ == 'test_isSquare':
             metafunc.addcall(id=n, funcargs=dict(n=n, expected=expected))


 def test_isSquare(n, expected):
     assert isSquare(n) == expected
 
 Well, this is so trivial you might bundle it all into a single test,
 but it can be useful to have each case genuinely be a separate test.
 You could also shoe-horn this data into a parametrize decorator I
 suppose but it is nicer to have more space if your test data is more
 complicated, to explain it.

OK, I'll consider adding this when i tackle another doc refactoring.  Thanks
for providing it.

 I am starting to have a feeling that the way my project has been using
 generate_tests is not the way everyone else uses it. In our
 conftest.py one of the enterprising developers on the project (who got
 us all onto py.test initially) put this:
 
 def 

Re: [py-dev] new resource API documentation comments

2012-08-28 Thread holger krekel
On Fri, Aug 17, 2012 at 17:32 +1000, Brianna Laugher wrote:
 Also,
 
 Would this be a roughly equivalent old-style to the smtp examples in
 http://pytest.org/dev/funcargs.html ?
 
 def pytest_funcarg__smtpMerLinux(request):
     smtp = smtplib.SMTP("merlinux.eu")
     def teardown(smtp):
         print ("finalizing %s" % smtp)
         smtp.close()
     return request.cached_setup(setup=lambda: smtp, teardown=teardown,
                                 scope='session')


 def pytest_funcarg__smtpMailPython(request):
     smtp = smtplib.SMTP("mail.python.org")
     def teardown(smtp):
         print ("finalizing %s" % smtp)
         smtp.close()
     return request.cached_setup(setup=lambda: smtp, teardown=teardown,
                                 scope='session')

 # test file
 def pytest_generate_tests(metafunc):
     merlinux = request.getfuncargvalue('smtpMerLinux')
     mailpython = request.getfuncargvalue('smtpMailPython')

     if 'smtp' in metafunc.funcargnames:
         metafunc.addcall(id='merlinux.eu', param=merlinux)  # ? would this work? seems magic
         metafunc.addcall(id='mail.python.org', param=mailpython)
 
 I feel like this makes it clear how much more powerful parametrizing
 funcargs themselves is.

The old-style code would even be a bit more involved.  I guess it could
be added to http://pytest.org/dev/funcarg_compare.html

I wouldn't put it to the entry-level main docs because for newcomers
it's not immediately neccessary to know the old ways :)

best,
holger

 cheers
 Brianna
 
 
 
 
 On 16 August 2012 18:58, Brianna Laugher brianna.laug...@gmail.com wrote:
  Hi,
 
  I just spent some time reading the dev docs so these comments are just
  based on the docs and not actually using the new API. In general it
  looks pretty sensible.
 
  - being able to have funcargs like funcargs directly is really nice, a
  lot more obvious than calling request.getfuncargvalue('foo')
  - why addfinalizer and not teardown?
  -although I don't really know what cached_setup did, the trinity of
  defining the scope, setup and teardown methods made sense to me. Now
  the scope is in a decorator, the setup is implicitly the entire thing
  that is happening and the teardown seems somewhat awkwardly tacked on.
  - none of the addfinalizer examples take an argument, how would you
  convert an old-style teardown method to that? e.g. we have a lot of
  funcargs which do things like
  return request.cached_setup(setup=setup,
  teardown=lambda obj: obj.close(),
  scope='function')
  - Sometimes things are referred to as funcargs, sometimes they are
  referred to as injected resources. Is there any difference here? The
  funcarg is the actual function and the injected resource is the
  instance in a specific test function? I suggest to use the term
  funcarg as much as possible as it is specific and a necessary
  concept for using pytest with any depth.
 
  Some of the following comments are fairly picky so feel free to ignore them.
 
  funcargs.txt
  line 118 - I think in this first incarnation of the smtp funcarg
  (factory? what to call it now?), it doesn't actually need to take a
  testcontext, right?
 
  line 527 Parametrizing test functions - may be worth having a simple
  example showing a combination using both (test data) parametrization
  and funcarg parametrization, to emphasise how they are differently
  useful. Using a database as an example of funcarg parametrization is
  good, maybe better than values like 1/2. I feel like parametrizing
  tests (test data) is probably the more common use case and it is a
  little buried amongst the heavy duty parametrized funcargs.
 
  line 598 Basic ``pytest_generate_tests`` example - I think this is
  not a very basic example! I think it is copied from parametrize.txt
  page, where it might make more sense. Here is what I would consider a
  basic example.
 
  # code
  def isSquare(n):
  n = n ** 0.5
  return int(n) == n
 
  # test file
  def pytest_generate_tests(metafunc):
  squares = [1, 4, 9, 16, 25, 36, 49]
  for n in range(1, 50):
  expected = n in squares
  if metafunc.function.__name__ == 'test_isSquare':
  metafunc.addcall(id=n, funcargs=dict(n=n, expected=expected))
 
 
  def test_isSquare(n, expected):
  assert isSquare(n) == expected
 
  Well, this is so trivial you might bundle it all into a single test,
  but it can be useful to have each case genuinely be a separate test.
  You could also shoe-horn this data into a parametrize decorator I
  suppose but it is nicer to have more space if your test data is more
  complicated, to explain it.
 
  I am starting to have a feeling that the way my project has been using
  generate_tests is not the way everyone else uses it. In our
  conftest.py one of the enterprising developers on the project (who got
  us all onto py.test initially) put this:
 
  def pytest_generate_tests(__multicall__, metafunc):
  Supports parametrised tests using generate_ fns.
  

Re: [py-dev] New resource API feedback

2012-08-15 Thread holger krekel
Hi Floris,

On Tue, Aug 14, 2012 at 23:41 +0100, Floris Bruynooghe wrote:
 Hello Holger,
 
 I've started experimenting a bit more with the new resource api in
 pytest-django, I haven't got very far yet but do have already some
 feedback and questions.
 
 Firstly my main issue, I don't know how to inspect the marks on the
 function/item in a function-scoped setup.  Looking at the code the
 only thing I could find was TestContext._resource.keywords and
 TestContext._resource.applymarker().  The latter which has an explicit
 comment saying it is unavailable on purpose.  The former almost
 exposed as TestContext.keywords but commented out.  So how do you use
 markers?  This should probably be documented as well.

Accessing and working with markers is missing - i only briefly touched
it during the implementation of the new resource API.  I intend to
have testcontext grow a markers dictionary, mapping mark names to 
lists of MarkInfo objects.

You can currently work-around/hack using testcontext._request._pyfuncitem.obj
as a reference to the underlying test function.
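
A sketch of that work-around (private attributes, so fragile; the needsdb
marker name is only an example):

    import pytest

    @pytest.setup()
    def honor_needsdb(testcontext):
        func = testcontext._request._pyfuncitem.obj     # the underlying test function
        if getattr(func, "needsdb", None) is not None:  # attribute left by @pytest.mark.needsdb
            ...                                         # perform the marker-driven setup here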

 Secondly the docs should probably show how to do teardown in an @setup
 function.  I think it would be nice to show an example of scope and
 teardown before going into the global resource example.  Related to
 this TestContext.addfinalizer() is not documented in the TestContext
 API docs.  Probably because autodoc doesn't pick it up.  Maybe simply
 merging TestContextRequest into TestContext is enough?
 TestContextSetup would not need any changes to keep it's behaviour in
 that case.

Makes all sense i think.

 Next something I have mentioned before, marking a pytest_funcarg__foo
 function with @factory seems to sill give an incomprehensible error.
 Personally I think it should be possible and consume the funcarg
 just like @setup consumes e.g. setup_module(), but if I'm alone in
 that a clearer error would be good improvement.

Agreed, i'll look into it.

 Another thing which surprised me was that @pytest.setup() needs to be
 called in order to have any effect.  Not calling the decorator will
 simply ignore the setup function, I expected it to treat it as a
 function-scoped setup.

I wasn't quite sure whether to mimic the current pytest.mark behaviour
of allowing usage both with and without ().  While trying to write
a docstring for it i thought it's maybe better to allow just one way.
But there definitely should be a clear error in that case.

best,
holger

 
 Regards,
 Floris
 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] xdist and pytest.main

2012-08-09 Thread holger krekel
On Sun, Aug 05, 2012 at 17:31 -0400, Adam Goucher wrote:
 Whoops. Didn't look at the list reply-to settings so pulling the list 
 back in. This does seem to be the cause. I commented out the 
 sys.path.append in the wrapper and added
 
 def pytest_configure(config):
     sys.path.append(os.path.join(os.getcwd(), "modules"))
 
 to the conftest.py in the root and it 'seems' to be working.
 
 My initial thoughts around error detection still stands though.

Your passing of --debug is also what i would have tried.
One can also enable EXECNET debugging, see
http://codespeak.net/execnet/basics.html#debugging-execnet

This gives a lot of low-level network messages but is sometimes
helpful.

best,
holger

 Thanks for the nudge to finding the solution.
 
 -adam
  A little more debugging (including the --debug flag) has led me to 
  something I believe...
 
  For context my wrapper runs from a prescribed directory structure 
  shown at 
  https://github.com/Element-34/Py.Saunter-Examples/tree/master/ebay 
  (for example). And part of that wrapper is a modification to the 
  system path as such
 
   sys.path.append(os.path.join(cwd, "modules"))
 
  What I am now guessing is that the environment does not get forked 
  into the slave processes as alluded to in this message.
 
  [slave-gw1] sending collectreport {'data': {'longrepr': 
  'scripts/DressShirts.py:15: in module\n   from 
  saunter.testcase.webdriver import 
  SaunterTestCase\n/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pytest-2.2.4-py2.7.egg/_pytest/assertion/rewrite.py:156:
   
  in load_module\n   py.builtin.exec_(co, 
  mod.__dict__)\n/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/py.saunter-0.48-py2.7.egg/saunter/testcase/webdriver.py:34:
   
  in module\n   from tailored.webdriver import WebDriver\nE   
  ImportError: No module named tailored.webdriver', 'outcome': 'failed', 
  'sections': [], 'result': None, 'nodeid': 'scripts/DressShirts.py'}}
 
  If that is the case [and I have to go fix my sister's email else I 
  would debug it further]
 
  a) how do I pass that environment change to my forked processes 
  (guessing just move that to conftest.py?)
  b) if there is an exception thrown in the collection process on a 
  slave, it likely should bubble up to the user
 
  -adam
  Hum, this looks like no tests are collected at all.  If you leave
  away the -n option, tests do run?  Can you show the -v output of
  that?  I assume you are running things in the correct directory
  and have no change-directory code in your tests/plugin?
 
  best,
  holger
 
 
 
 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] optionally considering setup (needsdb usecase)

2012-08-07 Thread holger krekel
On Mon, Aug 06, 2012 at 21:55 +0100, Floris Bruynooghe wrote:
 On 6 August 2012 08:50, holger krekel hol...@merlinux.eu wrote:
  On Sun, Aug 05, 2012 at 17:14 +0200, Floris Bruynooghe wrote:
  On 4 August 2012 14:13, holger krekel hol...@merlinux.eu wrote:
   On Sat, Jun 30, 2012 at 12:26 +0100, Floris Bruynooghe wrote:
   As an aside however, one of my usecases for merged request/item
   objects was so I could put setup in a session-wide scoped funcarg but
   also automatically request this funcarg based on a mark::
  
  def pytest_runtest_setup(item):
  if 'needsdb' in item.keywords:  # or a more explicit API
 item.getresource('db')
  
   I understand that this will still be possible via::
  
  def pytest_runtest_setup(item):
  if 'needsdb' in item.keywords:
  item.session.getresource('db')
  
   Or something similar to that.
  
   With the current @setup API this is not possible but it should be.  I'd 
   like
   to understand the exact use case a bit.  What do you do with the db
   object here?  I guess you cause side effects because you would otherwise
   just request a funcarg in the tests, right?
 
  Actually there is no side effect here.  This was born out of Andreas
  Pelme's desire to be able to mark tests with a marker while trying to
  re-use the session-wide caching that funcargs gave us.  But the new
  @setup already covers this case completely since it can handle the
  caching just fine.  But I still think this is a nice use-case, since
  it would allow being able to use the same setup and request it with
  either a funcargs or a mark.
 
  But if you request it with a mark, we are talking about side effects, 
  aren't we?
  (I'd define side-effects as something that doesn't inject a test
  dependency directly to a test).
 
 Yes, I guess so.
 
   If so, then i can imagine the following solution:
  
   @pytest.setup(enabled=myhelper)
   def perform_side_effect_with(db):
   ...
  
   The enabled helper would be called during collection so that
   pytest gets to know which tests will actually execute the setup
   function and its (potentially parametrized) required resources.
   It could look like this::
  
    def myhelper(collectioncontext):
        return "needsdb" in collectioncontext.markers
  
   and collectioncontext also carries module/class/function (depending on
   the scope specified with the setup).  If the helper returns True then
   the setup is considered and thus receives the DB object.  Do you
   think this would solve your use case? (Collectioncontext would not
   have a addfinalizer() and might in the future offer more 
   collection-specific
   things).
 
  This does sound like a very neat solution indeed, I think this would
  be a good addition.
 
  OK, i'll see to implement it but i guess it wouldn't be too bad to do
  it after a pytest-2.3 release, unless there is a concrete need already.
 
 This would still make the dual setup triggered by mark/resource easier
 by e.g. (I'm just using some pseudo-api here):
 
 @pytest.funcarg
 def db():
  print 'setting up db'
 
 def helper(collectioncontext):
      if 'needsdb' in collectioncontext.markers
 
 @pytest.setup(enabled=helper)
 def dbsetup(db):
 pass
 
 Here the funcarg can do all the work.  But this is not a deal-breaker,
 since you can also do this:
 
 def setupdb():
 print 'idempotent db setup'
 
 @pytest.funcarg():
 def db():
 setupdb()
 
 @pytest.setup
 def marksetup(testcontext):
 if 'needsdb' in testcontext.markers:
 setupdb()
 
 So leaving this till later is fine.

Right, there is one difference between the two solutions however.
The first works even if db is parametrized: The dbsetup function (and
all of the tests using it) will be executed multiple times.   With the
second example dbsetup may only execute once for tests using
db implicitly (through the setup).  

The basic rule is: if resources appear as funcargs in a function signature
(test, setup or factory functions) then pytest can make parametrization work 
without further ado.  Any getfuncargvalue()-like dynamic resource access 
can/would break it.  This is the reason for the new 
testcontext/collectioncontext object and its more minimal API.

  Btw,  if you can find some time sometime to look at a) test
  pytest-django with pytest-trunk  b) port pytest-django to pytest-trunk
  features, that would be super-helpful.  My personal target for a release
  is end august but not before some more real-world beta usages have happened.
 
 I'll try to have a go at this in the next week or two as I think it
 would be very good exercise as well.

Great, thanks.

best,
holger

___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] pytest FAQs on stackoverflow

2012-08-04 Thread holger krekel
Hi everybody,

i am subscribed to testing questions regarding pytest on

http://stackoverflow.com/questions/tagged/py.test

and i increasingly consider it the main FAQ system.  Maybe some of you
also want to subscribe (just hoover over the tag and hit subscribe if
you have an SO account) to see latest questions and answers (or give answers).

For the future, i'd like to see a selection of SO-FAQs on py.test
integrated on the pytest web page itself but have no clue yet how
that can be done.  Does anybody here know?

best,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] greppability of factories (was: Re: new resource API nearing completion including impl)

2012-08-03 Thread holger krekel
On Thu, Aug 02, 2012 at 20:03 +, holger krekel wrote:
 On Thu, Aug 02, 2012 at 19:47 +0100, Floris Bruynooghe wrote:
  On 2 August 2012 18:24, holger krekel hol...@merlinux.eu wrote:
   On Thu, Aug 02, 2012 at 13:50 +0100, Floris Bruynooghe wrote:
   Would it not make sense to allow this (or at least provide a clearer
   error)?  I still like that form because of the grep-ability (doing a
   2-line grep is much harder and would still not cover ppl doing from
   pytest import factory etc).
  
   Grepability is an argument.  Would adding a name=... parameter for
   the factory-decorator help enough?  Allowing and advertising
   pytest_funcarg__foo feels somehow strange, taking a fresh look at it, i think.
  
  I would argue the opposite, allowing the @factory decroator on
  pytest_funcarg__* seems like a more gentle progression giving more the
  impression that it is simply funcargs evolved.  To a newcomer it might
  otherwise look like funcargs where not thought out fully yet and make
  them think py.test just isn't stable enough yet.
 
 But when using the factory decorator on pytest_funcarg__ named functions,
 they shall at least not be able to receive request anymore, right?
 (The current implementation probably allows it but i feel uneasy about it).

One more thought: what will actually happen if you grep for "def
FUNCARGNAME" - does this not usually yield the location of your factory
and very few or no false positives?

holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] new resource API nearing completion including impl

2012-08-02 Thread holger krekel
Hi Again,

On Thu, Aug 02, 2012 at 07:36 +, holger krekel wrote:
 Hi Floris,
 
 On Wed, Aug 01, 2012 at 23:21 +0100, Floris Bruynooghe wrote:
  Hello Holger,
  
  Apologies for not responding earlier, but I've been on holiday.
 
 You are just-in-time right for this.  It anyway took me a while
 to sort out impl issues.
 
  In general this looks like it is shaping up rather nicely.
 
  My first part of feedback is somewhat bikeshedding-like: while using
  the pytest.mark implementation makes a lot of sense I do wonder
  whether it does not make more sense to keep the pytest.mark api for
  marking test functions and not to start using it for setup/factory
  functions.  The setup functions would be equally clear when written as
  @pytest.factory, @pytest.setup etc and I think keeping the way of
  marking test functions with setup/factory functions different is
  worthwhile.
 
 Good point!  Apart from confusingness, The features of applying marks to
 classes or modules are of no use here.  Making them pytest.X functions
 also allows a more concise and better documented signature. Very good
 bikeshedding! :)

I've now implemented the move to @pytest.factory and @pytest.setup and 
also updated and refined the documentation a bit, see:

http://pytest.org/dev/resources.html

and the new

http://pytest.org/dev/setup.html

Hope the latter begins to make more sense.

I still intend to refine docs and add more examples but now is lunch and
then child-care summer party time :)

I also uploaded a new package pytest-2.3.0.dev8 to be installed
via:

pip install -i http://pypi.testrun.org -U pytest

best,
holger


  Secondly, and this could be a bad idea, while I do like the new
  decorators I did grow attached to the old pytest_funcarg__* syntax
  even if it could be argued they are a bit more magical.  Since there
  already is a precedent for using __tracebackhide__ I was wondering if
  the scoping could be added to the old-style funcargs using e.g.
  __scope__ and maybe even __parameterise__ in the function body?
 
 Not really possible i think.  Traceback-printing has the frames at
 hand to look at locals().  When the namespace of a module is scanned
 the bodies of functions are opaque. 
 
  Old-style funcargs could also be made to directly accept other
  funcargs/resources I think and so really bridge the gap.
 
 Didn't document it but old-style funcargs do accept other funcargs actually.
 
  I do realise however that this would probably seem pretty weird to the
  general python public and decorators are probably a better api, but I
  still wanted to mention this.
 
 My goal when designing the last incarnation of the API was to make it
 easy for newcomers.
 
  A better idea is probably marking pytest_funcarg__* functions with
  @factory but I'm failing to use the new-style resources code for now
  so not sure if that works.
 
 Why does it not work for you? (Ping me on IRC maybe).
 
  
  The setting and parameterisation of a global in the introduction of
  @pytest.mark.setup seems very advanced and not very suitable to
  introduce the @setup decorator.
 
 Good point again.  I was focused on the difficult cases first -
 the @setup documentation is utterly lacking, sorry about that.  
 Probably it should go to its own page and fully document all xunit
 functionality.
 
  I'm actually rather dubious to it's
  use, it seems very difficult to notice that the test_1 and test_2
  functions will be invoked twice.  While it is very nice for xUnit
  setup functions to have access to funcargs/resources I'm not actually
  convinced the decorator version adds much value, they already have an
  explicit and well-known scope associated with them via their location.
 
 Maybe i am wrong but I guess you got this impression because i 
 didn't document the wide range of uses of the setup functions.  It's
 actually a feature that you can define per-function/per-class/per-module
 setup code in conftest.py files or plugins.  It is to replace almost all
 uses of implementing pytest_runtest_init/pytest_sessionstart.
 
  If there really is a need for modifying the scope or adding
  parametrisation (which I'm not sure about, I think funcargs/resouces
  could achieve the same in a more obvious way) then just re-using
  @factory on the existing xUnit seems like less confusing approach.
 
 The main difference between setup and factory/funcargs is that setup
 performs side-effects so test functions do not need to list funcargs in
 their signature.
 
  I hope this feedback makes sense so far, I apologise if not, I'm
  pretty tired right now.  I'd really like to have a go at making a
  prototype of pytest-django using this in order to give more feedback,
  but that's not for tonight.  There are a few interesting cases I
  encountered there which I should try out and I'm intrigued to see if
  the parametrisation would allow it to test multiple db backends in one
  process (probably not, but that will be Django's fault, not
  py.test's

Re: [py-dev] new resource API nearing completion including impl

2012-08-02 Thread holger krekel
Hi Ronny,

On Thu, Aug 02, 2012 at 13:32 +0200, Ronny Pfannschmidt wrote:
 Hi Holger,
 
 i don't see a way to determine parametrization  of a global resource
 at pytest_configure time (or later)
 
 it would be nice to be able to determine them after configure for
 considering cli args

I was thinking about allowing a function for params:

@pytest.factory(params=func)

That function would receive the config object and maybe some
marker information.  (It would be called at collection time.)
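
A hedged sketch of how that could look (proposed API, not released; the
all_backends option and connect() helper are made up):

    import pytest

    def db_params(config):
        # computed at collection time, so command line options are available
        if getattr(config.option, "all_backends", False):
            return ["sqlite", "postgres", "mysql"]
        return ["sqlite"]

    @pytest.factory(params=db_params)
    def db(request):
        return connect(request.param)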

best,
holger


 -- Ronny
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] new resource API nearing completion including impl

2012-08-02 Thread holger krekel
On Thu, Aug 02, 2012 at 14:54 +0200, Ronny Pfannschmidt wrote:
 Hi Holger,
 
 
 I was thinking about allowing a function for params:
 
  @pytest.factory(params=func)
 
 That function would receive the config object and maybe some
 marker information.  (It would be called at collection time.)
 
 sounds good,
 
 btw, what about other parameters that could/should be global
 
 for example in anyvc i got sets of xspecs
 that get feed into a different ressource
 but they dont really have other code around them,
 should i just pass on testressource.param as result?

I guess so. Not sure i understand parameters that should be global.

best,
holger

 -- Ronny
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] new resource API nearing completion including impl

2012-08-02 Thread holger krekel
On Thu, Aug 02, 2012 at 13:50 +0100, Floris Bruynooghe wrote:
 On 2 August 2012 11:44, holger krekel hol...@merlinux.eu wrote:
  http://pytest.org/dev/setup.html
 
  Hope the latter begins to make more sense.
 
 Yes, it does.  I now see the power @setup.  One thing you might want
 to add is compare the module-global setting to simply using the
 global statement inside the setup function.

Do you mean that in the case where the global-setting happens in the
conftest.py using global does not work?

 Btw, is it a bug in the assertion that when using a global variable
 the assert-printing does not seem to show the value of that global
 variable?

I noticed this as well and consider it a bug, yes.

  I still intend to refine docs and add more examples but now is lunch and
  then child-care summer party time :)
 
 I hope you're having a better summer to party in then the torrential
 rain we seem to be getting this afternoon ;-)

Yes, almost too hot here actually ...

  I also uploaded a new package pytest-2.3.0.dev8 to be installed
  via:
 
  pip install -i http://pypi.testrun.org -U pytest
 
 I was playing with this over lunch and discovered this doesn't work:
 
 @pytest.factory(scope='session')
 def pytest_funcarg__foo():
 return 42
 
 Would it not make sense to allow this (or at least provide a clearer
 error)?  I still like that form because of the grep-ability (doing a
 2-line grep is much harder and would still not cover ppl doing from
 pytest import factory etc).

Grepability is an argument.  Would adding a name=... parameter for
the factory-decorator help enough?  Allowing and advertising
pytest_funcarg__foo feels somehow strange, taking a fresh look at it, i think.

 Also doing this results in setup_module being called twice:
 
 @pytest.setup(scope='module')
 def setup_module():
 print 'setting up module'
 
 I'm not sure what the correct behaviour should be here.

Hum, I think the decorator consumes the function and it should not
be considered for anything else. Do you agree?

best,
holger

 
 Regards,
 Floris
 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] xdist and pytest.main

2012-08-02 Thread holger krekel
Hi Adam,

On Thu, Aug 02, 2012 at 14:49 -0400, Adam Goucher wrote:
 I have a WebDriver framework that wraps Py.Test and after a bunch of 
 setup stuff calls into things with
 
 run_status = pytest.main(args=arguments, plugins=[marks.MarksDecorator()])
 
 which works fine for a single execution. But now I've got a couple 
 clients that /really/ want parallel execution. My thinking for this 
 would be to add in the necessary arguments to pass in the way all the 
 other arguments have been...
 
 if 'n' in results.__dict__ and results.__dict__['n'] != None:
  arguments.append(--dist=load)
  arguments.append(--tx=%s*popen % results.__dict__['n'])
 
 But my scripts are not actually executing. (Which shouldnt be too too 
 surprising since I likely wouldnt be writing an email if they were...)
 
 Adam-Gouchers-MacBook:ebay adam$ pysaunter.py -m shirts -n 2
  
 test session starts 
 =
 platform darwin -- Python 2.7.2 -- pytest-2.2.4
 gw0 [0] / gw1 [0]
 scheduling tests via LoadScheduling
 
 - generated xml file: 
 /Users/adam/work/saunter/py.saunter-examples/ebay/logs/2012-08-02-14-21-46.xml
  
 -
 == 
 in 1.21 seconds 
 ==
 Adam-Gouchers-MacBook:ebay adam$
 
 Any suggestions on how to debug why the workers are being created but 
 the scripts not executed? I'd like to not write my own xdist style 
 plugin so would like to make things behave with the existing one.

Hum, this looks like no tests are collected at all.  If you leave
out the -n option, do tests run?  Can you show the -v output of
that?  I assume you are running things in the correct directory 
and have no change-directory code in your tests/plugin?

best,
holger



 -adam
 
 
 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] [PY-DEV] Accessing Item property from a test case

2012-07-26 Thread holger krekel
Hi Laurent,

On Thu, Jul 26, 2012 at 12:39 -0700, Brack, Laurent P. wrote:
 I was wondering if there was any way for a test case to retrieve a
 property containing arbitrary data that could be set by a hook?

Funcargs look like the natural place for this,
http://pytest.org/latest/funcargs.html

 An example could be a logger that has been opened specially for that
 test or conversely, the test could attach data which can be processed
 by a hook (example: test notes containing measurements, etc.) One could
 argue that the logger could be created by a factory but in this case,
 the test is correlated to a specific test case ID on our test management
 system (testlink) and this information is not available from a factory. 

Almost all information is available within funcarg factories and can thus
be made available to the test through injection.  For example, if you
have a decorator with a testcase-id then the factory can read it and
pass it along to the test etc.
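
A sketch of that suggestion (the marker name and the helper values are made up):

    import pytest

    def pytest_funcarg__testcase_id(request):
        # read a marker off the requesting test function and inject its value
        marker = getattr(request.function, "testcase_id", None)
        return marker.args[0] if marker is not None else None

    @pytest.mark.testcase_id("TL-1234")
    def test_login(testcase_id):
        assert testcase_id == "TL-1234"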

 In one of our plugin, I am currently attaching metadata to items to
 convey information between hooks and I wished there was a way for the
 test itself to access this. Maybe there is, I just don't know about it. 

I need a simple example to understand why funcargs would not work,
if they indeed won't :)

best,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] direct funcarg scoping/parametrization implementation (and resource-v3 draft)

2012-07-20 Thread holger krekel
Hi Floris, Ronny, Carl, all,

i've managed to do a first round of implementation of the recently
discussed resources API.  For an example on what is now possible see:

http://pytest.org/dev/example/newexamples.html

As far as i see the new features did not break backward-compatibility
with existing plugin or test code.

I've also written a V3-work-in-progress of the resources document 
and marked the status of features in each section:

http://pytest.org/dev/resources.html

If you'd like to try things out you can either grab it via::

pip install -i http://pypi.testrun.org -U pytest

which should give a py.test --version = 2.3.0.dev3
or you can go to the bitbucket repository.

For the next step, i am considering whether to try to extend setup_X methods
or to introduce a new @pytest.mark.setup marker, see the resources.html
doc.  I lean towards the latter and believe that its introduction would
make a lot of pytest_runtest_setup() and pytest_configure() implementations unnecessary.

grateful for any feedback or comments,
holger

___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] direct funcarg scoping/parametrization implementation (and resource-v3 draft)

2012-07-20 Thread holger krekel
Hi Carl,

On Fri, Jul 20, 2012 at 12:08 -0600, Carl Meyer wrote:
 Hi Holger,
 
 I love the pytest.mark.funcarg decorator.

Good to hear!  I also like it :) And i think it makes sense to just
extend the funcargs system rather than to invent a parallel resources one.

 I think pytest.mark.setup is likely a good idea, too, but there are some
 questions I'm not clear on:
 
 1. How do I handle teardown for these setup functions? I would expect
 they'd take a request and I'd do request.addfinalizer(...), but in some
 of your examples they don't seem to take request, and in the one where
 it does, it says In addition to normal funcargs you can also receive
 the “request” funcarg which represents a takes on each of the values in
 the params=[1,2,3] decorator argument - which I'm having trouble
 parsing, and it isn't clear to me this request object would have
 addfinalizer().

Let me fix the paragraph, it should read something like:

This would execute the ``modes`` function once for each parameter
which will be put at ``request.param``.  This request object offers
the ``addfinalizer(func)`` helper which allows to register a function
which will be executed when test functions within the specified scope 
finished execution.

The ``request`` is a funcarg and thus setup functions can choose to
receive it or not by stating it in their signature. It will always
be available.  Depending on the scope, ``request.node`` will be the
corresponding node i guess.
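
Put together, a sketch of the corrected paragraph in the @pytest.mark.setup
style of this thread (configure_mode and teardown_mode are hypothetical
helpers):

    import pytest

    @pytest.mark.setup(scope="module", params=[1, 2, 3])
    def modes(request):
        mode = request.param                 # one run of this setup per parameter
        configure_mode(mode)
        request.addfinalizer(lambda: teardown_mode(mode))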

 2. It's not entirely clear how the two types of scope Floris referred to
 earlier (scope based on location of the decorated function, and the
 scope keyword passed to the decorator) interact with each other. I
 presume that if I have a setup-decorated function in a conftest.py, it
 only applies to that directory and subdirectories.

right.

 If it's located in a
 module, I guess it only applies to tests in that module? 

Yes, this has been the rule of pytest_funcarg__ and pytest_generate_tests
definitions and thus it makes sense to use the same logic i think.

 What if it's
 located in a module and I give it scope=session - what does that mean?
 Would that be functionally equivalent to scope=module in that case,
 since it still only applies to that module?

Yes, exactly.

 Similarly, if I decorate a
 method of a class with the setup marker, does it only apply to test
 methods on that class?

Yes. In this case, session, module and class would all have the same
meaning or at least almost the same. Currently the effectively tighter
scope is not detected so a session-scoped finalizer will be executed
at the end of a session and not at the end of the class.

 3. Is there a class value for the scope kwarg, in addition to
 session, module, and function? It would be nice to see a full list
 of the accepted values for that kwarg.

so far: session, module, class, function 
i am thinking about directory - it's actually slightly tricky to implement
so i may postpone until someone really wants it :)

However, what i am thinking about is allowing to specify a function
as scope.  In that case, something like function(config) would be called 
so that you could define scope according to command line options.
This is useful when you want to have slow but better isolated 
per-function scopes for resources in CI runs.
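
A sketch of the function-as-scope idea (proposed API, not released; the ci
option is made up):

    import pytest

    def db_scope(config):
        # tight scope for well-isolated CI runs, broad scope otherwise
        return "function" if getattr(config.option, "ci", False) else "session"

    @pytest.mark.funcarg(scope=db_scope)
    def db(request):
        ...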
 
 That's all that comes to mind at the moment! Thanks for all your work on
 this.

Thanks for all your feedback, it's really helpful.  My next goal is to implement
the setup-functions and do some more examples.  I believe some pretty
cool things can be done with it, one example is doing per-function 
transactions on a session-scoped database::

# content of conftest.py
import pytest

@pytest.mark.funcarg(scope="session", ...)
def db(request):
    ...

@pytest.mark.setup(scope="function")
def dbtransact(request, db):
    if should_transact(request.node.obj ...):
        db.begin()
        request.addfinalizer(db.end)

Here, test functions themselves do not need to require db because
the setup function requires it.  This way you can manage global resources
without test functions having to explicitly request them.

best,
holger

P.S: sometimes i wonder if a web framework couldn't use similar
 ideas as funcargs/scoping/declarations/etc. for implementing
 interactions between components ... But don't worry, i am not
 going to implement my own any time soon :)
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] KeyboardInterrupt during setup() and teardown()

2012-07-12 Thread holger krekel
Hi Pärham, Ronny, 

On Thu, Jul 12, 2012 at 15:26 +0200, Ronny Pfannschmidt wrote:
 Hi Pärham,
 
 as i already explained in irc before,
 a cached setup only calls tear-down if it was successful,
 
 and if you want it to work property, you should split it up
 
 an example of doing that would be something like
 
 def pytest_funcarg__app(request):
     def setup():
         return create_app()  # FAST
     def teardown(app):
         app.stop()
     app = request.cached_setup(setup, teardown, scope='session')
     app.start()  # can wait
     return app

I don't think this is correct as it would start() the app 
in each test function that uses the app funcarg.

It's indeed unusual that despite a failing setup you
want to do some teardown.  Maybe in this case controlling
it yourself is best:

def setup():
    try:
        app.start()
    except Exception:  # or KeyboardInterrupt etc.
        app.stop()
        raise

hth,
holger

 -- Ronny
 
 On 07/12/2012 02:12 PM, Pärham Fazelzadeh H wrote:
 Hi Holger,
 
 
 Here below is an example:
 
 def pytest_funcarg__app(request):

     def setup():
         # Blocking call to start(), returns when the application has
         # finished booting up
         app.start()
         return app

     def teardown(val):
         app.stop()

     return request.cached_setup(
         setup=setup,
         teardown=teardown,
         scope='session',
         extrakey='app'
     )
 
 
 Now, during setup(), when the process is waiting for the app to boot, if
 a keyboardinterrupt is raised (say by the user during from the test
 execution summary screen) then it will not call the teardown() of this
 funcarg 'app'. I assume this is because py.test assumes that the setup()
 of 'app' has not been finished and therefor it is not in a proper state
 for teardown() of 'app'.
 
 Regards,
 Parham
 
 On 12 July 2012 12:05, holger krekel hol...@merlinux.eu
 mailto:hol...@merlinux.eu wrote:
  
   Hi Pärham,
  
   On Thu, Jul 12, 2012 at 11:46 +0200, Pärham Fazelzadeh H wrote:
Hi all,
   
I am using py.test to perform integration and functional testing of an
application and had some issues with interrupts and was advised to
 submit
my use case.
   
Basically the problem is related to funcargs and how setup() and
 teardown()
are affected by interrupts. The issue we are having is that some of our
funcargs take longer to setup than what is maybe recommended. One
 of the
funcargs instantiate and start the application that is to be tested and
this can take some time; in the setup() you basically wait for the
application to boot up to verify that it has started. If a
KeyboardInterrupt is raised during setup() it will leave the
 application in
a dirty state since no teardown will be run, i.e the application
 will be
left running. Similarly this can happen during configuration stages in
funcargs.
   
I learned that this is default behaviour (and also reasonable),
 seeing as
the idea is that funcargs should be small and be fast.
  
   funcargs with a longer setup are fine.  I am not sure i understand
   how you are missing teardowns. How do you perform teardown, with
   request.addfinalizer()? Can you provide a little examples that reproduces
   the problem?
  
   best,
  
   holger
  
Still, this is our use case so here you go! :)
   
Regards,
Parham
  
___
py-dev mailing list
py-dev@codespeak.net mailto:py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev
  
 
 
 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] [TIP] (RFC) multi-dimensional/variant tox configuration (V1)

2012-07-10 Thread holger krekel
Hi Stefan,

On Tue, Jul 10, 2012 at 10:20 +0200, Stefan Scherfke wrote:
 Hi Holger,
 
 I really like the idea. However, I found one bug and have one note:
 
  Generating and selecting variants
  --
  
  …
  
  Without much further introduction, here is an example ``tox.ini``::
  
envlist = 
py[26,27,32,py]-mypkg[13,14]
  
  …
  
[testenv-mypkg13]
+deps = mypkg1.4
  
[testenv-django14]
+deps = mypkg1.5
 
 I think it should be “testenv-mypkg14” instead of “testenv-django14”?

right.

  If you don't want to run django-mypkg with pypy the envlist would look like
  this::
  
envlist = 
py[26,27,32]-mypkg[13,14]
pypy-mypkg14
  
  
  Generator expressions in the envlist setting
  --
  
  Generator expressions in the ``envlist`` work like this:
  
  * ``[...]`` parts contain a comma-separated list of names. Each name
  will generate a new environment reference. 
  * repeat the process until there are no more generator expressions
 
 I think you should make it more clear that:
 
 * you split the envlist entries by “-” -- ['py[26,27,32]', 'mypkg[13,14]']
 * you then expand the generator expressions  -- [['py26', 'py27', 'py32'], 
 ['mypkg13', 'mypkg14']]
 * and finally compute the cartesian product of that nested list.
 * you can create a section for each item in the resulting list (i.e., “py26” 
 or “mypkg13”)
 * some of these entries are predefined in tox (i.e. py26, pypy, …) (you state 
 this later, but it would be more helpful to remind the reader a bit earlier)

It's a bit underspecified, i agree.  The algorithm i had in mind
works slightly differently.  Consider we have a list of environment
names, some of which may contain [CSV]-generator expressions where
the CSV part is a comma-separated list of variants.  We then enter
a loop for as long as there are such expressions:

- expand: for each CSV-expression in an environment name in the list,
  produce an additional environment name for each value in the CSV
- repeat: as long as there are CSV-expressions, continue the process
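
A small sketch of that loop (not tox's actual implementation):

    import re

    _csv = re.compile(r"\[([^\]]+)\]")

    def expand_envlist(names):
        result = list(names)
        while True:
            out, changed = [], False
            for name in result:
                m = _csv.search(name)
                if m is None:
                    out.append(name)
                    continue
                changed = True
                for value in m.group(1).split(","):
                    # replace the first [CSV] group with each of its values
                    out.append(name[:m.start()] + value.strip() + name[m.end():])
            result = out
            if not changed:
                return result

    # expand_envlist(["py[26,27,32,py]-mypkg[13,14]"]) yields the eight
    # combinations py26-mypkg13 ... pypy-mypkg14 from the example above.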

Moreover, variants are defined by respective [testenv-VARIANT] sections.
If necessary, one can still override/special-case a certain 
[testenv:VAR1-VAR2...] section by defining it.

I believe it's all effectively very similar to what you describe except
that there does not need to be special treatment of the - character.

best,
holger

 
 Cheers,
 Stefan
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] (RFC) multi-dimensional/variant tox configuration (V1)

2012-07-07 Thread holger krekel
Hi tox users,

I'd like to find a good way to introduce multi-dimensional configuration
to tox.ini files.  I have written up a draft idea on how to do it and
would appreciate feedback.  I provide two examples of transformed tox.ini
files.  If you have suggestions (or example tox.ini which you would like
to transform) i'd be grateful to hear about them.

best  thanks,
holger

P.S.: Ronny Pfannschmidt just posted a different way to handle
variants with tox, i'll check it up together with other feedback.


V1 draft: multi-dimensional configuration with tox
---

Problems:

- there is no way to define dependency or other variants in tox.ini files;
  instead you have to explicitly spell out all combinations in
  separate testenvs. Examples:

http://code.larlet.fr/django-rest-framework/src/eed0f39a7e45/tox.ini
https://bitbucket.org/tabo/django-treebeard/src/93b579395a9c/tox.ini

- tox always uses pip currently.  So there is no check that installing
  your packages with easy_install will work. Moreover, some packages,
  like greenlet on Win32, require easy_install if you have no suitable
  C compiler on the machine.  Tox cannot currently be used in that case.

Goals: 

- allow to more easily define and run dependency/interpreter variants 
  with testenvs
- allow to run variants of installing via easy_install or pip. 


Generating and selecting variants
--

Suppose you want to test your package against mypkg-1.3 and mypkg-1.4
framework versions and against python2.6,2.7,pypy-1.9 and 3.2 interpreters. 
Today you would have to create 2*4 = 8 ``[testenv*]`` sections to instruct
tox to test against all of them.  With tox-1.X you can use a new mechanism
which is based on two ideas:

* allow to specify partial testenv values in [testenv-VARIANT] sections
* introduce generator expressions to the envlist setting to ease
  enumerating all variant combinations.

Without much further introduction, here is an example ``tox.ini``::

[tox]
envlist = 
    py[26,27,32,py]-mypkg[13,14]

[testenv]
deps = nose
commands = nosetests

[testenv-mypkg13]
+deps = mypkg==1.3

[testenv-mypkg14]
+deps = mypkg==1.4

If you don't want to run mypkg with pypy, the envlist would look like
this::

[tox]
envlist = 
    py[26,27,32]-mypkg[13,14]
    pypy-mypkg14


Generator expressions in the envlist setting
--------------------------------------------

Generator expressions in the ``envlist`` work like this:

* ``[...]`` parts contain a comma-separated list of names. Each name
  will generate a new environment reference. 
* repeat the process until there are no more generator expressions
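
Applied to the tox.ini above, ``py[26,27,32,py]-mypkg[13,14]`` would thus
expand to these eight environments (spelled out here for illustration)::

    py26-mypkg13, py26-mypkg14
    py27-mypkg13, py27-mypkg14
    py32-mypkg13, py32-mypkg14
    pypy-mypkg13, pypy-mypkg14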

Variant specification with [testenv-VARNAME]
---------------------------------------------

The ``[testenv-mypkg13]`` and ``[testenv-mypkg14]`` sections define
settings for the respective variant.  Their specification
can use a new ``+deps`` setting to append dependencies rather than 
replace them.
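
For example, under this scheme the generated ``py26-mypkg13`` environment
would effectively behave like the following hand-written section (my own
illustration of the intended merging, not actual tox output)::

    [testenv:py26-mypkg13]
    deps = 
        nose
        mypkg==1.3
    commands = nosetests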

Showing all expanded sections
-----------------------------

To help with understanding how the variants will produce section values,
you can ask tox to show their expansion with a new option:

$ tox -l
[XXX output omitted for now]


Making sure your package installs with easy_install
----------------------------------------------------

The new ``installer`` testenv setting allows specifying the tool used for installation:

[testenv]
installer = easy_install

If you want to have your package installed with both easy_install
and pip, you can use variants again:

[testenv-easy]
installer = easy_install

[testenv-pip]
installer = pip

[tox]
envlist = py[26,27,32]-django[13,14]-[easy,pip]

Note that tox comes with some predefined variants, namely:

- [easy,pip] use easy_install or pip
- [py24,py25,py26,py27,py31,py32,pypy,jy] use the respective pythonNN
  or PyPy or Jython interpreter 

You can use those in your envlist specification without the need to
define them yourself.
 
Transforming the examples: django-rest


The original `tox.ini 
<http://code.larlet.fr/django-rest-framework/src/eed0f39a7e45/tox.ini>`_ file 
has 159 lines and a lot of repetition; the new one would 
have 26 lines and almost no repetition::

[tox]
envlist = py[25,26,27]-django[12,13]-[example]

[testenv]
commands = python setup.py test

deps=
coverage==3.4
unittest-xml-reporting==1.2
Pyyaml==3.10

[testenv-django12]
+deps= django==1.2.4

[testenv-django13]
+deps= django==1.3

[testenv-example]
deps = 
wsgiref==0.1.2
Pygments==1.4
httplib2==0.6.0
Markdown==2.0.3

commands = python examples/runtests.py

Apart from the much more concise specification, it is now also easy to add 
further variants, like testing installation with easy_install. 

Re: [py-dev] INTERNAL ERROR doesn't give a good exit code?

2012-07-06 Thread holger krekel
Hi John,

thanks for fixing it - i committed it to the trunk docs.

best,
holger

On Fri, Jul 06, 2012 at 22:30 -0500, John Anderson wrote:
 Turns out it was the documented setup.py PyTestCommand I found:
 
 class PyTest(TestCommand):
     def finalize_options(self):
         TestCommand.finalize_options(self)
         self.test_args = []
         self.test_suite = True
     def run_tests(self):
         # import here, cause outside the eggs aren't loaded
         import pytest
         pytest.main(self.test_args)
 
 Changed the last line to
 result = pytest.main(self.test_args)
 sys.exit(result)
 
 and everything is working.
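
For reference, a complete corrected command class along the lines described
above might look like this (a sketch that simply adds the imports the quoted
snippet assumes and propagates the exit code)::

    import sys
    from setuptools.command.test import test as TestCommand

    class PyTest(TestCommand):
        def finalize_options(self):
            TestCommand.finalize_options(self)
            self.test_args = []
            self.test_suite = True

        def run_tests(self):
            # import here, because outside the eggs aren't loaded yet
            import pytest
            errno = pytest.main(self.test_args)
            sys.exit(errno)   # non-zero on failures or internal errors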
 
 
 On Fri, Jul 6, 2012 at 9:31 PM, John Anderson son...@gmail.com wrote:
  I'm running my builds on travis-ci and I had some bad imports and it
  threw an INTERNAL ERROR on py.test, but gave an exit code 0 so the
  build doesn't fail.. Any way to fix this?
 
 
   ===== test session starts =====
   platform linux2 -- Python 2.7.2 -- pytest-2.2.4
   INTERNALERROR Traceback (most recent call last):
   INTERNALERROR   File "/home/vagrant/virtualenv/python2.7/local/lib/python2.7/site-packages/_pytest/main.py", line 72, in wrap_session
   INTERNALERROR     config.hook.pytest_sessionstart(session=session)
   INTERNALERROR   File "/home/vagrant/virtualenv/python2.7/local/lib/python2.7/site-packages/_pytest/core.py", line 421, in __call__
   INTERNALERROR     return self._docall(methods, kwargs)
   INTERNALERROR   File "/home/vagrant/virtualenv/python2.7/local/lib/python2.7/site-packages/_pytest/core.py", line 432, in _docall
   INTERNALERROR     res = mc.execute()
   INTERNALERROR   File "/home/vagrant/virtualenv/python2.7/local/lib/python2.7/site-packages/_pytest/core.py", line 350, in execute
   INTERNALERROR     res = method(**kwargs)
   INTERNALERROR   File "/home/vagrant/builds/sontek/hiero/conftest.py", line 10, in pytest_sessionstart
   INTERNALERROR     from hiero.tests.models import Base
   INTERNALERROR   File "/home/vagrant/builds/sontek/hiero/hiero/__init__.py", line 2, in <module>
   INTERNALERROR     from hem.config import get_class_from_config
   INTERNALERROR ImportError: No module named hem.config

   Done. Build script exited with: 0
 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] RFC: V2 of the new resource setup/parametrization facilities

2012-07-02 Thread holger krekel
On Sat, Jun 30, 2012 at 12:26 +0100, Floris Bruynooghe wrote:
  On Sat, Jun 30, 2012 at 01:23 +0100, Floris Bruynooghe wrote:
   On Fri, Jun 29, 2012 at 10:55:23AM +, holger krekel wrote:

def setup_directory(db):
# called when the first test in the directory tree is about
to execute
   
   I think the naming of these functions break the py.test convention,
   normally the only functions picked up from conftest.py start with
   pytest_.  I can certainly imagine a conftest.py or plugin already
   having a setup_session function.  These are new functions and do not
   provide a compatibility API with other testing frameworks, so I think
   they would be better named pytest_setup_session and
   pytest_setup_directory.
  
  I think using pytest_* hooks also has consistency problems:
  
  * hooks cannot usually receive arbitrary funcargs
 
 This is why a signature with a request/node for these might be better::
 
def pytest_setup_session(session):
session.getresource('db')  # or .getfuncargvalue()?
...
 
  * xUnit-style consistency: consider explaining the new functions
to someone only knowing setup_module/ class etc.
 
 As I tried to say before, they do not come for xUnit so I don't think
 this is too important.  I think the consistency inside conftest.py is
 more important.

Well, pytest introduced setup_module/class and nose/unittest ported it.
I consider setup_directory (or setup_session) to be xUnit consistent
from a user perspective and maybe nose/unittest will also add it - i guess
there are extensions there implementing something like this already.

In general, if we go the route of making setup_X more powerful there
probably is less need for pytest_runtest_setup calls.  The only difference
would be that runtest_setup is called in plugins/conftest.py whereas 
setup_function/method need to be defined around the actual test code.
(TBH i wonder if setup_module/class/function/method could be allowed in
conftest.py files as well - in many cases they are easier to handle
than pytest_runtest_setup(item), which does not even guarantee that
the item is a python function - it could be a PEP8 checker Item).

  I am wondering, however, do we even need a setup_session? setup_directory
  should usually be enough, i guess, and it's more unlikely people used
  that name already (and we could warn about setup_session in 2.X to
  reserve introducing it in 2.X+1).
 
 Maybe not, but if you don't provide setup_session (or
 pytest_setup_session) then pytest_sessionstart will be used again
 when someone thinks of a reason to use it.  And that's what you wanted
 to avoid.

We definitely need to provide prominent examples for whole-session setup
to avoid further usage of sessionstart or configure.

 [...]
  If a setup-function has no body, then tests could just require it themselves
  and that'd be enough.  If there is a need, we could introduce a marker for 
  requiring funcarg-resources such that tests do not need to require it 
  in their signature.
 
 I'm not sure what that would save, either the test function must
 request the resource or must be marked to need the resource.  If
 anything the second takes more work.

 
 As an aside however, one of my usecases for merged request/item
 objects was so I could put setup in a session-wide scoped funcarg but
 also automatically request this funcarg based on a mark::
 
def pytest_runtest_setup(item):
if 'needsdb' in item.keywords:  # or a more explicit API
   item.getresource('db')
 
 I understand that this will still be possible via::
 
def pytest_runtest_setup(item):
if 'needsdb' in item.keywords:
item.session.getresource('db')
 
 Or something similar to that.

It'd probably be best if this requirement is known at collection time so
--collectonly can present the complete picture.  This could look like this::

    def pytest_itemcollected(item):
        if 'needsdb' in item.keywords:
            item.applymarker(pytest.mark.needsresource("db"))

Also, the above issue of requiring a global resource could be expressed 
like this::

    def pytest_collection_finish(session):
        session.applymarker(pytest.mark.needsresource("db"))

This should also in principle work well in case of a parametrized db
so that all tests requiring db can be run multiple times.
(I am not sure if the above already existing hooks fit well for that, however.)

best,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] RFC: V2 of the new resource setup/parametrization facilities

2012-06-30 Thread holger krekel
Hi Floris,

some preliminary notes, i'll probably think some more about your feedback ...

On Sat, Jun 30, 2012 at 01:23 +0100, Floris Bruynooghe wrote:
 Hello Holger,
 
 On Fri, Jun 29, 2012 at 10:55:23AM +, holger krekel wrote:
 [...]
  Direct scoping of funcarg factories
 [...]
  Direct parametrization of funcarg factories 
 
 These two seem fine, but personally I would prefer them to use the
 same marker with keyword-only arguments::
 
@pytest.mark.factory(scope='session', parametrize=['mysql', 'pg'])
def pytest_funcarg__db(request):
...
 
 This seems like a more natural API which collects the different
 functions, certainly when using both for one funcarg.

I'll consider it, probably under the name of factoryattr or so. 

 However it bothers me that funcargs now have two types of scope: an
 implied scope derived from where it is defined and which defines their
 visibility (e.g. only inside a class, module, directory).  And then
 this new scope which is essentially a caching/teardown scope.  The
 fact that the ScopeMismatch exception is needed is a result of this I
 think.

previously, the scope-mismatch could happen as well and go unnoticed::

    def pytest_funcarg__Y(request):
        return request.function.__name__

    def pytest_funcarg__X(request):
        def setup():
            return request.getfuncargvalue("Y")
        return request.cached_setup(setup, scope="session")

The result will depend on which test function is first requested.
In the future, we might want to try to raise a ScopeMismatchError here
as well.

 The previous resource/funcarg split avoided this confusion.

a) What about just naming it cachescope?
b) i moved register_factory/getresource to implementation details
   not the least because Carl Meyer as a relatively recent pytest user
   expressed his expectation of a consistent pytest_funcarg__ factory
   story - and if we are going to have to support the existing ones anyway, 
   i'd now like to focus on extending them and only go for a usage-level visible
   paradigm change if it's really needed. Does this make general
   sense to you?
 
 Lastly, when do scoped funcarg resources get invoked?  Only at the
 time a test function requests it or always at the time when the scope
 is entered?

factories are invoked when a test function or one of its involved setup 
methods needs them.  A scope is only entered if there is a test to be executed
within it. Does this clarify things?
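
To illustrate the laziness, a small sketch using the proposed factory_scope
decorator from the draft (the Database class stands in for any expensive
resource; the print is only for demonstration)::

    import pytest

    class Database:
        pass

    @pytest.mark.factory_scope("session")
    def pytest_funcarg__db(request):
        print ("db created")      # happens once, at the first request
        return Database()

    def test_one(db):             # first use triggers the factory
        pass

    def test_two(db):             # re-uses the session-scoped instance
        pass

    def test_unrelated():         # never requests db, factory not called
        pass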

  support for setup_session and setup_directory
  --
 [...]
  # content of conftest.py
  def setup_session(db):
  ... use db resource or do some initial global init stuff
  ... before any test is run.
  
  def setup_directory(db):
  # called when the first test in the directory tree is about
  to execute
 
 I think the naming of these functions break the py.test convention,
 normally the only functions picked up from conftest.py start with
 pytest_.  I can certainly imagine a conftest.py or plugin already
 having a setup_session function.  These are new functions and do not
 provide a compatibility API with other testing frameworks, so I think
 they would be better named pytest_setup_session and
 pytest_setup_directory.

I think using pytest_* hooks also has consistency problems:

* hooks cannot usually receive arbitrary funcargs
* xUnit-style consistency: consider explaining the new functions
  to someone only knowing setup_module/ class etc.

I am wondering, however, do we even need a setup_session? setup_directory
should usually be enough, i guess, and it's more unlikely people used
that name already (and we could warn about setup_session in 2.X to
reserve introducing it in 2.X+1).

And what what about putting setup_directory into an __init__.py file?
I don't really like requiring __init__ files, but am fine to go with it if
you and others prefer that.  I would guess, that using the already 
directory-scoped conftest.py file feels fine to someone coming new to pytest.

 It also feels slightly weird that they do not get their respective
 Node passed in.  This is a little inconsistent with the current
 setup_X method which all take a module, class or method argument.  I
 can't think of an immediate use for it as you can push out pretty much
 everything you need to do to a properly scoped funcarg resource.  

We can certainly add modulenode, classnode etc. to the respective
setup-methods because they participate in the funcarg-protocol 
(which allows accepting less parameters than are available).

 And following that reasoning the setup function would end up having no
 body at all, which also seems weird.

If a setup-function has no body, then tests could just require it themselves
and that'd be enough.  If there is a need, we could introduce a marker for 
requiring funcarg-resources such that tests do not need to require it 
in their signature.
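
Such a marker could look roughly like this (hypothetical syntax, in the
spirit of the uses_resource idea discussed elsewhere in this thread)::

    @pytest.mark.uses_resource("db")
    def test_user_table():
        # 'db' is set up for this test although it does not
        # appear in the signature
        ...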

  Implementation level

[py-dev] RFC: V2 of the new resource setup/parametrization facilities

2012-06-29 Thread holger krekel
Hi all, particularly Floris and Carl,

i have finally arrived at the V2 resource-API draft based on the very valuable
feedback you gave to the first version.  The document implements a largely
changed approach, see the Changes from V1 to V2 at the beginning, and
focuses on usage-level documentation instead of internal details.

I have also uploaded this doc as HTML which makes it a bit more colorful
to read, and also contains some cross-references:

http://pytest.org/dev/resources.html

Please find the source txt-file also attached for your
inline-commenting usage.  Before i target the actual (substantial)
refactoring, i'd actually be very grateful for some more of your time
and comments on this new version.

I believe that the new resource parametrization facilities are a major
step forward - they should allow test writers to resort to pytest_* hooks
much more seldomly, and make accessing and working with parametrized 
resources straightforward, irrespective of previous xUnit/pytest background.
Plugin writers, of course, may still use the hooks for good value.

best  thanks,
holger


V2: Creating and working with parametrized test resources
==========================================================

Abstract: pytest-2.X provides generalized scoping and parametrization
of resource setup.  It does so by introducing new scoping and parametrization
capabilities directly to funcarg factories and by enhancing
xUnit-style setup_X methods to directly accept funcarg resources.
Moreover, new xUnit setup_directory() and setup_session() methods allow
fixture code (and resource usage) at previously unavailable scopes.
Pre-existing test suites and plugins written to work for previous pytest
versions shall run unmodified.

This V2 draft is based on incorporating feedback provided by Floris Bruynooghe, 
Carl Meyer and Ronny Pfannschmidt. It remains as draft documentation, pending 
further refinements and changes according to implementation or backward 
compatibility issues. The main changes to V1 are:

* changed approach: now based on improving ``pytest_funcarg__``
  factories and extending setup_X methods to directly accept
  funcarg resources, also including a new per-directory
  setup_directory() and setup_session() function for respectively
  scoped setup.
* the funcarg versus resource naming issue is disregarded in favour
  of keeping with funcargs and talking about funcarg resources 
  to ease a later possible renaming (whose value is questionable)
* The register_factory/getresource methods are moved to an
  implementation section for now, drawing a clear boundary between
  usage-level docs and impl-level ones.
* use 2.X as the version for introduction (might be 2.3, else 2.4)

.. currentmodule:: _pytest


Shortcomings of the previous pytest_funcarg__ mechanism
--------------------------------------------------------

The previous funcarg mechanism calls a factory each time a
funcarg for a test function is requested.  If a factory wants
to re-use a resource across different scopes, it often uses 
the ``request.cached_setup()`` helper to manage caching of 
resources.  Here is a basic example of how we could implement 
a per-session Database object::

    # content of conftest.py 
    class Database:
        def __init__(self):
            print ("database instance created")
        def destroy(self):
            print ("database instance destroyed")

    def pytest_funcarg__db(request):
        return request.cached_setup(setup=Database,
                                    teardown=lambda db: db.destroy(),
                                    scope="session")

There are some problems with this approach:

1. Scoping resource creation is not straightforward; instead one must
   understand the intricate cached_setup() method mechanics.

2. parametrizing the db resource is not straightforward: 
   you need to apply a parametrize decorator or implement a
   :py:func:`~hookspec.pytest_generate_tests` hook 
   calling :py:func:`~python.Metafunc.parametrize` which
   performs parametrization at the places where the resource 
   is used.  Moreover, you need to modify the factory to pass an 
   ``extrakey`` parameter containing ``request.param`` to the 
   :py:func:`~python.Request.cached_setup` call.

3. the current implementation is inefficient: it performs factory discovery
   each time a db argument is required.  This discovery wrongly happens at 
   setup-time.

4. there is no way to use funcarg factories, let alone 
   parametrization, when your tests use the xUnit setup_X approach.

5. there is no way to specify a per-directory scope for caching.

In the following sections, API extensions are presented to solve 
each of these problems. 


Direct scoping of funcarg factories


Instead of calling cached_setup(), you can decorate your factory
to state its scope::

    @pytest.mark.factory_scope("session")
    def pytest_funcarg__db(request):

Re: [py-dev] RFC: draft new resource management API (v1)

2012-06-28 Thread holger krekel
On Thu, Jun 28, 2012 at 08:47 +0100, Floris Bruynooghe wrote:
 On 27 June 2012 19:36, holger krekel hol...@merlinux.eu wrote:
  On Wed, Jun 27, 2012 at 16:59 +0100, Floris Bruynooghe wrote:
  On 27 June 2012 13:57, holger krekel hol...@merlinux.eu wrote:
   Setting resources as class attributes
   ---
  
   If you want to make an attribute available on a test class, you can
   use the resource_attr marker::
  
      @pytest.mark.resource_attr(db)
      class TestClass:
          def test_something(self):
              #use self.db
 
  I'm not convinced of creating a special purpose mark for this.
  Firstly it seems like an anti-pattern in py.test to me, more like
  xUnit style.
 
  unittest/xUnit-compat is the main idea for this new marker. It would
  work on pytest and unittest.TestCase classes alike.  It's also reminiscent
  of Rob Collin's testscenario unittest-extension.
 
  easily done with::
 
     class TestClas(object):
         @classmethod
         def setup_class(cls, item):
             cls.db = item.getresource('db')
 
  Not really.  Here we would need to check if the setup_class()
  accepts an item parameter and setup_class methods do not follow
  the hook-keyword-arg-calling convention.  Also passing an
  item would be slightly arbitrary as the setup_class would
  only be called once for all of its test items (functions).
 
 Oh, that will teach me to talk about an API I haven't used in a long
 time without looking it up.  I thought a node (not item) was already
 passed in.  Still, I think it would look nicer if it was possible to
 get to the resources API from within .setup_module(module),
 .setup_class(cls) and .setup_method(self, method) rather then needing
 a new marker for this.  The first of these should not be a problem I
 guess, since it already has a node passed in.  For .setup_method() the
 method argument could have an item attribute.  But I guess
 .setup_class(cls) is the hardest.  Would it be tricky to inspect the
 arguments as done for hooks?

setup_module/setup_class/setup_method all receive native python objects,
not collection nodes.  We could stick node attributes somewhere (not sure if 
on functions - they can be invoked multiple times in case of parametrization).

If we stick attributes e.g. on a pytest.current.item/classnode/... we
are doing side-effect programming - some internals will set those
attributes (and should take care to remove it to avoid
misusage/confusion) and some other places will read it.  These days, i
prefer to design APIs that communicate neccesarry state directly and
to use higher-level declarations to state intents rather than do everything
through imperative programming.

/me does import this and sees: Although practicality beats purity ...

I am still fine to consider e. g. the introduction of a pytest.current
namespace.  It could lead to make setup_X methods more powerful::

    import pytest
    def setup_module():  # pytest accepts it to keep nose compat
        db = pytest.current.modulenode.getresource("db")

The current namespace could be set by the respective node setup
methods.  For classes it's the same idea::

    class TestClass:
        def setup_class(cls):
            cls.db = pytest.current.classnode.getresource("db")

Due to the non-declarative nature of this approach, however, i don't
see a way to rerun the testclass with multiple db instances.

On a side note, many Java programmers have gone from the old JUnit
approach to TestNG, see the wikipedia entries.  py.test rather
goes for similar ideas as TestNG.

best,
holger

 
  Also, I realised this API provides for what is probably most of the
  cases of where I want dynamic resources:
 
  def pytest_setup_init(session):
      for item in my_item_generator():
          session.register_resource_factory(item.name, item)
 
  Not sure i understand this idea.  Is it intended as a mixture of
  collection (my_item_generator) and setup (as the hook name suggests)?
 
 My bad for writing a bad example, I shouldn't have used the word
 item in there.  Anyway the main point is that thanks to
 .register_resource_factory() taking the name of the resource as an
 argument I believe most, if not all, the cases where I wanted to
 create funcargs/resources without knowing what they where beforehand
 are solved.
 
 
 Regards,
 Floris
 
 
 -- 
 Debian GNU/Linux -- The Power of Freedom
 www.debian.org | www.gnu.org | www.kernel.org
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] RFC: draft new resource management API (v1)

2012-06-28 Thread holger krekel
On Thu, Jun 28, 2012 at 08:15 +, holger krekel wrote:
 I am still fine to consider e. g. the introduction of a pytest.current
 namespace.  It could lead to make setup_X methods more powerful::
 
 import pytest
 def setup_module():  # pytest accepts it to keep nose compat
 db = pytest.current.modulenode.getresource(db)
 
 The current namespace could be set by the respective node setup
 methods.  For classes it's the same idea::
 
 class TestClass:
 def setup_class(cls):
 cls.db = pytest.current.classnode.getresource(db)
 
 Due to the non-declarative nature of this approach, however, i don't
 see a way to rerun the testclass with multiple db instances.

Actually i see a way :)

@pytest.mark.uses_resource("db")
class TestClass:
    ...

This would signal pytest that this class is (somehow) using
the parameter db and thus collect multiple variations if the
resource is parametrized at register_factory time.  Of course,
plugins such as pytest-django would be able to declare this resource
usage automatically, reducing boilerplate for test writers.

What do you think?

holger



 On a side note, many Java programmers have gone from the old JUnit
 approach to TestNG, see the wikipedia entries.  py.test rather
 goes for similar ideas as TestNG.
 
 best,
 holger
 
  
   Also, I realised this API provides for what is probably most of the
   cases of where I want dynamic resources:
  
   def pytest_setup_init(session):
       for item in my_item_generator():
           session.register_resource_factory(item.name, item)
  
   Not sure i understand this idea.  Is it intended as a mixture of
   collection (my_item_generator) and setup (as the hook name suggests)?
  
  My bad for writing a bad example, I shouldn't have used the word
  item in there.  Anyway the main point is that thanks to
  .register_resource_factory() taking the name of the resource as an
  argument I believe most, if not all, the cases where I wanted to
  create funcargs/resources without knowing what they where beforehand
  are solved.
  
  
  Regards,
  Floris
  
  
  -- 
  Debian GNU/Linux -- The Power of Freedom
  www.debian.org | www.gnu.org | www.kernel.org
  
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] RFC: draft new resource management API (v1)

2012-06-28 Thread holger krekel
On Thu, Jun 28, 2012 at 13:08 +0100, Floris Bruynooghe wrote:
 On 28 June 2012 09:15, holger krekel hol...@merlinux.eu wrote:
  /me does import this and sees: Although practicality beats purity ...
 
  I am still fine to consider e. g. the introduction of a pytest.current
  namespace.  It could lead to make setup_X methods more powerful::
 
 I think it would be nice to make setup_X methods more powerful by
 giving them access to resources, but it's not a deal breaker.  And I'm
 not a fan of pytest.current either for the same reasons you don't like
 it.
 
 But you didn't explain why inspecting the arguments like is done for
 the hooks is not viable?  To me that would seem like a neat solution.
 And I'm tempted to say not to bother if the only alternative is to use
 someting pytest.current-like.  It's certainly no regression.

It is in some sense logical to extend the funcarg-idea to setup-methods.
I used to think that the scoping is a problem, but given the new
node.register_factory/getresource() API it could be done somewhat
sanely.  It will remain a bit of a heuristic approach, though, because
setup_module/class/method have traditionally not required exact names -
for example, some people wrongly use:

    def setup_class(self):
        self.xyz = ...

Of course this works.  And I guess we could start funcarg/resource-requesting
based on all previously not possible arguments.  So

    def setup_class(xyz, tmpdir):
        xyz.tmpdir = tmpdir

would work because the first argument does not take part in discovery.
The tmpdir argument would lead to a classnode.getresource(tmpdir) call.
It wouldn't matter if tmpdir is created through a pytest_funcarg__tmpdir or a
register_factory() function.  Do you like this?

     import pytest
     def setup_module():  # pytest accepts it to keep nose compat
         db = pytest.current.modulenode.getresource(db)
 
  The current namespace could be set by the respective node setup
  methods.  For classes it's the same idea::
 
     class TestClass:
         def setup_class(cls):
             cls.db = pytest.current.classnode.getresource(db)
 
  Due to the non-declarative nature of this approach, however, i don't
  see a way to rerun the testclass with multiple db instances.
 
 I don't see how all other uses don't have these issues:
 
 def pytest_funcarg__foo(item):
 item.getresource('db')
 
 or
 
 def factory_foo(name, node):
 pass
 def facotry_bar(name, node):
 node.getresource('foo')
 .register_resource_factory('foo', factory_foo)
 .register_resource_factory('bar', factory_bar)
 
 Don't these suffer the same problem?  Or am I missing someting.

The latter would work::

    node.register_factory("foo", [fac1, fac2])

this makes it clear that there are two foo parameter values.

best,
holger

 Regards,
 Floris
 
 
 -- 
 Debian GNU/Linux -- The Power of Freedom
 www.debian.org | www.gnu.org | www.kernel.org
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] RFC: draft new resource management API (v1)

2012-06-27 Thread holger krekel
Hi all,

based on initial discussions with Ronny and Floris i have now written
a usage-level document for a new test resource management API.  It aims
to better support plugin and test writers in managing cross-test-suite
resources such as databases, temporary directories, etc.  It generalizes
the existing funcarg factory mechanism - currently some knowledge of
such pytest usages is required to understand the document. Also, it does
not fully spell out all details yet - i hope it nevertheless conveys
the main ideas.

Happy about any feedback,

holger

V1: Creating and working with test resources
=============================================

pytest-2.3 provides generalized resource management, allowing you
to flexibly manage caching and parametrization across your test suite.

This is draft documentation, pending refinements and changes according
to feedback and to implementation or backward compatibility issues
(the new mechanism is supposed to allow fully backward compatible
operations for uses of the funcarg mechanism).

the new global pytest_runtest_init hook
----------------------------------------

Prior to 2.3, pytest offered a pytest_configure and a pytest_sessionstart
hook which were often used to set up global resources.  This suffers from
several problems. First of all, in distributed testing the master would
also set up test resources that are never needed because it only co-ordinates
the test run activities of the slave processes.  Secondly, in large test
suites resources may be set up that are not needed for the concrete test
run.  The first issue is solved through the introduction of a specific
hook::

    def pytest_runtest_init(session):
        # called ahead of pytest_runtestloop() test execution

This hook will only be called in processes that actually run tests.

The second issue is solved through a new register/getresource API which
will only ever set up resources if they are needed. See the following
examples and sections on how this works.


managing a global database resource
-----------------------------------

If you have one database object which you want to use in tests
you can write the following into a conftest.py file::

    class Database:
        def __init__(self):
            print ("database instance created")
        def destroy(self):
            print ("database instance destroyed")

    def factory_db(name, node):
        db = Database()
        node.addfinalizer(db.destroy)
        return db

    def pytest_runtest_init(session):
        session.register_resource("db", factory_db, atnode=session)

You can then access the constructed resource in a test like this::

    def test_something(db):
        ...

The db function argument will lead to a lookup of the respective
factory value and be passed to the function body.  According to the
registration, the db object will be instantiated on a per-session basis
and thus reused across all test functions that require it.

instantiating a database resource per-module
---------------------------------------------

If you want one database instance per test module you can restrict
caching by modifying the atnode parameter of the registration 
call above::

    def pytest_runtest_init(session):
        session.register_resource("db", factory_db, atnode=pytest.Module)

Neither the tests nor the factory function will need to change.
This also means that you can decide the scoping of resources
at runtime - e.g. based on a command line option: for developer
settings you might want per-session and for Continuous Integration
runs you might prefer per-module or even per-function scope like this::

    def pytest_runtest_init(session):
        session.register_resource_factory("db", factory_db, 
                                          atnode=pytest.Function)
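
For completeness, wiring that to a command line option could look roughly
like this (a sketch on top of the draft API, reusing factory_db from above;
the ``--ci`` option and the hook placement are my assumptions)::

    import pytest

    def pytest_addoption(parser):
        parser.addoption("--ci", action="store_true",
                         help="use per-function resource scoping")

    def pytest_runtest_init(session):
        atnode = pytest.Function if session.config.option.ci else session
        session.register_resource("db", factory_db, atnode=atnode)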

parametrized resources
----------------------

If you want to rerun tests with different resource values you can specify
a list of factories instead of just one::

    def pytest_runtest_init(session):
        session.register_factory("db", [factory1, factory2], atnode=session)

In this case all tests that depend on the db resource will be run twice
using the respective values obtained from the two factory functions.
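
For instance, the two factories could construct differently configured
database objects (an illustrative sketch reusing the Database class and the
draft API from above; the configuration detail in the comments is made up)::

    def factory1(name, node):
        db = Database()            # e.g. configured against sqlite
        node.addfinalizer(db.destroy)
        return db

    def factory2(name, node):
        db = Database()            # e.g. configured against postgres
        node.addfinalizer(db.destroy)
        return db

    def pytest_runtest_init(session):
        session.register_factory("db", [factory1, factory2], atnode=session)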


Using a resource from another resource factory
-----------------------------------------------

You can use the database resource from another resource factory through
the ``node.getresource()`` method.  Let's add a resource factory for
a db_users table at module-level, extending the previous db-example::

    def pytest_runtest_init(session):
        ...
        session.register_factory("db_users", createusers, atnode=pytest.Module)

    def createusers(name, node):
        db = node.getresource("db")
        table = db.create_table("users", ...)
        node.addfinalizer(lambda: db.destroy_table("users"))

    def test_user_creation(db_users):
        ...

The create-users will be called for each module.  After the 

Re: [py-dev] RFC: draft new resource management API (v1)

2012-06-27 Thread holger krekel
On Wed, Jun 27, 2012 at 08:43 -0600, Carl Meyer wrote:
 I like it! In particular the parametrization support by passing a list
 is a quite intuitive extension of the API.
 
 atnode seems like an opaque arg name - what's wrong with scope? The
 latter name seems more intuitive to me. Would this arg have a default value?

scope makes sense - it's just that in the current API scope is a
class, module, ... string.  Existing users might easily get a bit of
type clash - especially if you have a mixed funcarg/resource scenario.
Maybe scopenode?

The default scopenode would be the one on which you are calling
register_factory.  So in the first documented example call:

    session.register_factory("db", createdb, scopenode=session)

the scopenode call would actually be superfluous.  (Sidenote: the session 
object is also a node - the root node from which all collection and item 
nodes are descendants.  Each node has a .session reference back to this
root node).

 In the long run, if funcarg-style is considered a useful shortcut and
 will not be deprecated, it would be nice if there were a bit more naming
 and API consistency between funcarg-style and new-style resource
 handling -- it would make them feel more aspects of one system rather
 than two different systems. I think this would really just require
 switching from pytest_funcarg__foo to pytest_resource__foo, renaming
 cached_setup to register_factory (and having it use the same API), and
 renaming getfuncargvalue to getresource. Of course I don't know whether
 this consistency is really worth the backwards-compatibility/deprecation
 hassles.

* getresource/getfuncargvalue: makes sense to me to go for advertising and
  documenting getresource() instead of getfuncargvalue() and keeping
  the latter as an alias with or without deprecation.

* addfinalizer would remain unmodified - it's just that the request
  object passed to funcarg-factories adds finalizers with test function 
  invocation scope, whereas node.addfinalizer() does it for the respective
  node scope (so e.g. called from a Class node it would register a per-class 
  finalizer)

* cached_setup: i hope that we do not need to offer this method anymore
  other than for compatibility.  Its internal caching-key is not easy 
  to explain and more than once users have stumbled over understanding it.
  cached_setup is required as long as pytest_funcarg__ factories are called
  _each_ time a resource is requested. (By contrast the new getresource()
  only triggers a factory call once for the registered scope - thus
  the factory implementation itself does not need to care about caching).

  Note that register_factory is a different beast than cached_setup: 
  it does not create a value, just registers a factory. So i don't see 
  how we can unify this.

As to a possible resource-factory auto-discovery, i can imagine it to
work with introducing a marker::

# example content in a test module or in a conftest.py file

    @pytest.mark.resourcefactory("db", scope=pytest.Class)
    def myfactory(name, node):
        # factory called once per each requesting class (methods
        # on this class will share the returned value)

this declaration would trigger a register_factory(db, myfactory) call. 
If we want to extend this to parametrization (multiple db factories)
we probably need something like this::

    @pytest.mark.resourcefactory("db", scope=pytest.Class, multi=True)
    def make_db_factories(name, node):
        factoryfuncs = [compute list of factory funcs]
        return factoryfuncs

This would be called at collection time and the scope and the number
of to-be-created values would be known in advance.  It's basically
equivalent to a classnode.register_factory([list of factory funcs]) call.
(we could auto-magically recognize yield-generating functions but i'd
like to avoid it).

To go the full circle, the signature of factory functions could rather
accept a request object instead of (name, node). Actually today, a 
request object has this internal state anyway. pytest_funcarg__ would thus 
only look slightly special in that it skips the marker and has a fixed scope
of pytest.Function.
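
A request-accepting factory would then read much like today's funcarg
factories (a sketch of the idea only, reusing the Database class from the
examples above; the marker is the proposal from this mail, not an existing
API)::

    @pytest.mark.resourcefactory("db", scope=pytest.Class)
    def myfactory(request):
        db = Database()
        request.addfinalizer(db.destroy)
        return db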

Hope this thought train makes some sense :)
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] RFC: draft new resource management API (v1)

2012-06-27 Thread holger krekel
Hi Floris,

On Wed, Jun 27, 2012 at 16:59 +0100, Floris Bruynooghe wrote:
 Hello Holger,
 
 Thanks for the detailed document.  As I understand it the vast
 majority of the functionality is already possible using funcargs.
 Maybe a summary of the benefits over plain funcargs could be helpful?

makes sense.  Maybe some X versus Y examples would help.

 And some guidance for plugin writers on which of the two to choose
 would also be helpful.

Yip.  I think the examples should make it clear that probably the
new API is best fitting in most cases.

 Also, as a general observation I think it will become harder to find
 where a resource factory lives.  Before you could just grep for
 pytest_funcarg__foo which was actually quite nice, certainly when
 you're extending a resource at different levels.  This is still
 possible but not as easy.

Right, the grep-ability was intended behaviour.  It would remain
possible with the new resourcefactory markers i pondered in my
reply to Carl.  Otherwise you would need to invoke py.test --funcargs
to get the locations.

 Other then that I've got only one comment really:
 
 On 27 June 2012 13:57, holger krekel hol...@merlinux.eu wrote:
  Setting resources as class attributes
  ---
 
  If you want to make an attribute available on a test class, you can
  use the resource_attr marker::
 
     @pytest.mark.resource_attr(db)
     class TestClass:
         def test_something(self):
             #use self.db
 
 I'm not convinced of creating a special purpose mark for this.
 Firstly it seems like an anti-pattern in py.test to me, more like
 xUnit style.

unittest/xUnit-compat is the main idea for this new marker. It would
work on pytest and unittest.TestCase classes alike.  It's also reminiscent
of Rob Collins' testscenarios unittest-extension.

 easily done with::
 
class TestClas(object):
@classmethod
def setup_class(cls, item):
cls.db = item.getresource('db')

Not really.  Here we would need to check if the setup_class() 
accepts an item parameter and setup_class methods do not follow
the hook-keyword-arg-calling convention.  Also passing an
item would be slightly arbitrary as the setup_class would
only be called once for all of its test items (functions).

 Also, I realised this API provides for what is probably most of the
 cases of where I want dynamic resources:
 
 def pytest_setup_init(session):
 for item in my_item_generator():
 session.register_resource_factory(item.name, item)

Not sure i understand this idea.  Is it intended as a mixture of 
collection (my_item_generator) and setup (as the hook name suggests)? 

Note that the doc is currently wrong a bit as well - the register_factory
would need to happen at collection-time, not setup-time as the Module.setup()
wrongly suggested.

best,
holger

 (presuming atnode=session is the default)
 
 
 Regards,
 Floris
 
 
 -- 
 Debian GNU/Linux -- The Power of Freedom
 www.debian.org | www.gnu.org | www.kernel.org
 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] RFC: draft new resource management API (v1)

2012-06-27 Thread holger krekel
On Wed, Jun 27, 2012 at 18:32 +0200, Ronny Pfannschmidt wrote:
 i noticed that there is no way to name items in those lists

could you insert this comment in context?

If you refer to the resourcefactory(multi=True) example, then
the name is known at factory-list construction time.

As a sidenote, i now notice that it's unclear how this whole api
discussion relates to the recently introduced @parametrize decorator and
the pytest_generate_tests hook and metafunc.parametrize() call.

hum'ly yours,
holger



 On 06/27/2012 06:15 PM, holger krekel wrote:
  On Wed, Jun 27, 2012 at 08:43 -0600, Carl Meyer wrote:
  I like it! In particular the parametrization support by passing a list
  is a quite intuitive extension of the API.
 
  atnode seems like an opaque arg name - what's wrong with scope? The
  latter name seems more intuitive to me. Would this arg have a default 
  value?
 
  scope makes sense - it's just that in the current API scope is a
  class, module, ... string.  Existing users might easily get a bit of
  type clash - especially if you have a mixed funcarg/resource scenario.
  Maybe scopenode?
 
  The default scopenode would be the one on which you are calling
  register_factory.  So in the first documented example call:
 
   session.register_factory(db, createdb, scopenode=session)
 
  the scopenode call would actually be superfluous.  (Sidenote: the session
  object is also a node - the root node from which all collection and item
  nodes are descendants.  Each node has a .session reference back to this
  root node).
 
  In the long run, if funcarg-style is considered a useful shortcut and
  will not be deprecated, it would be nice if there were a bit more naming
  and API consistency between funcarg-style and new-style resource
  handling -- it would make them feel more aspects of one system rather
  than two different systems. I think this would really just require
  switching from pytest_funcarg__foo to pytest_resource__foo, renaming
  cached_setup to register_factory (and having it use the same API), and
  renaming getfuncargvalue to getresource. Of course I don't know whether
  this consistency is really worth the backwards-compatibility/deprecation
  hassles.
 
  * getresource/getfuncargvalue: makes sense to me to go for advertising and
 documenting getresource() instead of getfuncargvalue() and keeping
 the latter as an alias with or without deprecation.
 
  * addfinalizer would remain unmodified - it's just that the request
 object passed to funcarg-factories adds finalizers with test function
 invocation scope, whereas node.addfinalizer() does it for the respective
 node scope (so e.g. called from a Class node it would register a 
  per-class
 finalizer)
 
  * cached_setup: i hope that we do not need to offer this method anymore
 other than for compatibility.  It's internal caching-key is not easy
 to explain and more than once users have stumbled about understanding it.
 cached_setup is required as long as pytest_funcarg__ factories are called
 _each_ time a resource is requested. (By contrast the new getresource()
 only triggers a factory call once for the registered scope - thus
 the factory implementation itself does not need to care for caching).
 
 Note that register_factory is a different beast than cached_setup:
 it does not create a value, just registers a factory. So i don't see
 how we can unify this.
 
  As to a possible resource-factory auto-discovery, i can imagine it to
  work with introducing a marker::
 
   # example content in a test module or in a conftest.py file
 
   @pytest.mark.resourcefactory(db, scope=pytest.Class)
   def myfactory(name, node):
   # factory called once per each requesting class (methods
   # on this class will share the returned value)
 
  this declaration would trigger a register_factory(db, myfactory) call.
  If we want to extend this to parametrization (multiple db factories)
  we probably need something like this::
 
   @pytest.mark.resourcefactory(db, scope=pytest.Class, multi=True)
   def make_db_factories(name, node):
   factoryfuncs = [compute list of factory funcs]
   return factoryfuncs
 
  This would be called at collection time and the scope and the number
  of to-be-created values would be known in advance.  It's basically
  equivalent to a classnode.register_factory([list of factory funcs]) call.
  (we could auto-magically recognize yield-generating functions but i'd
  like to avoid it).
 
  To go the full circle, the signature of factory functions could rather
  accept a request object instead of (name, node). Actually today, a
  request object has this internal state anyway. pytest_funcarg__ would thus
  only look slighly special in that it skips the marker and has a fixed scope
  of pytest.Function.
 
  Hope this thought train makes some sense :)
  holger
  ___
  py-dev

Re: [py-dev] Storing terminal width in py.test config object

2012-06-26 Thread holger krekel
On Mon, Jun 25, 2012 at 18:11 +0100, Floris Bruynooghe wrote:
 On 25 June 2012 15:36, holger krekel hol...@merlinux.eu wrote:
  On Mon, Jun 25, 2012 at 16:09 +0200, Ronny Pfannschmidt wrote:
  given the nature of the problem,
  i think its wrong to go for terminal width there,
  instead we should serialize the exponations,
  and render them on the master.
 
  that way we could also have other ways of display more nicely.
 
  It's indeed true that a frontend-independent format that can be
  rendered on the master would be nice ... um, html? (not sure it's a joke).
 
 Not sure that would cover all situations, py.io.saferepr() would
 somehow have to serialise the full thing (which might be simply too
 much data) and the other side would have to then notice this and
 somehow figure out the right argument for the maxsize argument of
 saferepr.  I'm sure there's a way of encoding all that but that but
 it's a lot of complexity and still won't solve the too much serialised
 data problem.

What about a reportsettings object with all necessary settings
(including maxsize, tb-representation defaults, ...) which slaves use to
parametrize their reporting?  We should then also allow reading
reportsettings from the ini-file and let cmdline options override them.
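
Such a reportsettings object could start out as something very simple (a
hypothetical sketch of the idea, not existing pytest/xdist code; the field
names are assumptions)::

    class ReportSettings:
        """Settings computed on the master and shipped to the slaves."""
        def __init__(self, terminal_width=80, maxsize=240,
                     tbstyle="long", color=True):
            self.terminal_width = terminal_width
            self.maxsize = maxsize      # truncation limit for saferepr output
            self.tbstyle = tbstyle      # e.g. "long", "short", "native", "no"
            self.color = color          # whether to emit coloring/bold markup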

Particularly with respect to coloring/bold effects, i'd still want a
more abstract representation of output than plain (ansi-colored)
strings, i think.  It should make it easy to produce html output or to adapt
to windows/unix terminal coloring styles.  This is some effort to get right
but it can at least be tested separately.

In general, i'd target the following goals with such a refactoring:

* a new --htmlreport option to produce html-output (maybe with a little
  javascript to fold/unfold tracebacks etc.)

* colored,linewidth adapted terminal output for xdist on all platforms

* much less network/cpu usage for long failure reps if --tb=native/no
  is supplied

Additionally, if we manage to get most of this functionality into pylib,
detox might also make use of it as it aims to support cross-host distributed
testing on a different level.

best,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] Resource providers

2012-06-26 Thread holger krekel
On Tue, Jun 26, 2012 at 15:07 +0100, Floris Bruynooghe wrote:
 On 25 June 2012 16:23, holger krekel hol...@merlinux.eu wrote:
  On Mon, Jun 25, 2012 at 15:21 +0100, Floris Bruynooghe wrote:
  On 25 June 2012 14:29, holger krekel hol...@merlinux.eu wrote:
   On Mon, Jun 25, 2012 at 10:55 +0100, Floris Bruynooghe wrote:
   The concrete example I have now is that it could be nice in
   pytest-django to be able to request e.g. Users which is a model
   class used to access the User table in the database.  Currently this
   is only possible by someone explicitly defining pytest_funcarg__Users,
   but Django allows you to dynamically look up all the models in the
   database so there is no reason this can't be build automatically.
  
   I think this is what the API you proposed was for, but as I said I
   can't remember the details.  And in this case I might be less
   enthusiastic in postponing it's implementation to a later release ;-)
  
   It's probably true that we could invent an register-factory API for this.
  
   However, what about a single models object (done traditionally
   with a pytest_funcarg__models definition) which itself provides
   an API to give Users or others data?
 
  Yes of course, that is what I currently have in my conftest.py.  But
  it would still be a nice thing to be able to do and a nice example of
  functionality I have wished I had before.  Hence I was wondering if
  the API you talked about yesterday would support it.
 
  I guess it could, for example, look like this::
 
     def pytest_configure(config):  # [1]
         def createmodel(name, node):
              return django model object. 
             # node can be None, Directory, Module, Class, Item, etc.
             # (code to compute model)
             return model
 
         for name in modelnames:
             config.register_factory(name, createmodel)
 
 I think for addressing this specific usecase I was more imagining a
 standard pytest hook:
 
 def pytest_resource_factory(name, item):
     """The docstring which can show up in --funcargs"""
     if name == 'User':
         return models.User

The docstring cannot show up in --funcargs because it would
be unclear to which name it actually belongs without executing it.
And just showing that there is a generic factory hook would not
be very informative.

 That way it can be scoped per-directory in the conftest.py files and
 used in plugins, which would be scoped globally.
 
 I appreciate that this does not provide the other benefits nor can it
 be used to implement funcargs themself.  So maybe there is a need for
 an API you just describe which would allow one to implement what I
 just described in a plugin as well as write the funcargs as a plugin.
 But I'm much less comfortable suggesting what that API should be like,
 I do not fully know the innards of py.test like you do.

thanks for the feedback.  I am going to see if i get a day
to play around with an implementation for the suggested api.

best,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] Resource providers

2012-06-26 Thread holger krekel
On Tue, Jun 26, 2012 at 15:13 +0100, Floris Bruynooghe wrote:
 Another remark about this.  If my use-case of dynamically creating
 funcargs/resources is making this too complicated then feel free to
 ignore it for now.  As you rightly point out it is not that cumbersome
 to achieve the same with the existing funcargs.  See it more as a
 nice-to-have but I don't want it to become an ill-thought-out legacy
 you have to provide compatibility for later.

no worries.  I appreciate the use-case perspective a lot.  If i get
into too many details about implementation, feel free to stop me :)

holger

 Regards,
 Floris
 
 
 On 25 June 2012 16:23, holger krekel hol...@merlinux.eu wrote:
  On Mon, Jun 25, 2012 at 15:21 +0100, Floris Bruynooghe wrote:
  On 25 June 2012 14:29, holger krekel hol...@merlinux.eu wrote:
   On Mon, Jun 25, 2012 at 10:55 +0100, Floris Bruynooghe wrote:
   The concrete example I have now is that it could be nice in
   pytest-django to be able to request e.g. Users which is a model
   class used to access the User table in the database.  Currently this
   is only possible by someone explicitly defining pytest_funcarg__Users,
   but Django allows you to dynamically look up all the models in the
   database so there is no reason this can't be build automatically.
  
   I think this is what the API you proposed was for, but as I said I
   can't remember the details.  And in this case I might be less
   enthusiastic in postponing it's implementation to a later release ;-)
  
   It's probably true that we could invent an register-factory API for this.
  
   However, what about a single models object (done traditionally
   with a pytest_funcarg__models definition) which itself provides
   an API to give Users or others data?
 
  Yes of course, that is what I currently have in my conftest.py.  But
  it would still be a nice thing to be able to do and a nice example of
  functionality I have wished I had before.  Hence I was wondering if
  the API you talked about yesterday would support it.
 
  I guess it could, for example, look like this::
 
     def pytest_configure(config):  # [1]
         def createmodel(name, node):
              return django model object. 
             # node can be None, Directory, Module, Class, Item, etc.
             # (code to compute model)
             return model
 
         for name in modelnames:
             config.register_factory(name, createmodel)
 
  Getting a resource would work like this::
 
     config.getresource(name)
 
  The --funcargs option would (remain) able to show the docstring
  and location of the createmodel function.
 
  Another interesting bit is how to use register_factory
  to connect the existing pytest_funcarg__... factories
  which have a certain scope.  I guess something like this::
 
     config.register_factory(name, factoryfunc, node)
 
  would suffice - it would restrict the scope of the factory
  function to the specified node and all of its descendents.
  It could be called from Directory, Module, Class's setup methods
  to register the respective pytest_funcarg__ functions scoped as
  per-directory (conftest.py), per-module or per-class factories.
 
  Note that the node passed to the createmodel factory function
  above is probably neccessary for this case because existing
  funcarg-factories operate on Items (or in the future Nodes).
 
  getfuncargvalue() would then be implemented in terms of a
  call to config.getresource(name, node).
 
  In general, register_factory needs to be callable multiple
  times with the same name.  accept multiple factories for
  the same resource will One little issues is that we want to
 
  This new resource registration/lookup could work much
  more efficiently than the current scheme which - upon every 
  getfuncargvalue() -
  iterates over all plugins, modules and classes to discover matching
  pytest_funcarg__ factories.
 
  hope this all makes some sense.
 
  best,
  holger
 
 
  [1] We really need a new hook like pytest_runtest_init() which
     is called once before the runtest loop actually starts it work.
     pytest_configure() usually works but it is also called on the
     xdist-master process for which setting up resources makes no sense.
 
  Floris
 
  --
  Debian GNU/Linux -- The Power of Freedom
  www.debian.org | www.gnu.org | www.kernel.org
 
 
 
 
 -- 
 Debian GNU/Linux -- The Power of Freedom
 www.debian.org | www.gnu.org | www.kernel.org
 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] Resource providers

2012-06-25 Thread holger krekel
Hi Floris,

On Mon, Jun 25, 2012 at 10:55 +0100, Floris Bruynooghe wrote:
 Hi Holger, everyone,
 
 Yesterday a resource provider API was considered on IRC, unfortunately
 I have no logs and forgot the details already.  But today I remembered
 a, possibly invalid, use case which might want to benefit form this:
 occasionally I wish it was possible to dynamically create funcarg
 objects.
 
 The concrete example I have now is that it could be nice in
 pytest-django to be able to request e.g. Users which is a model
 class used to access the User table in the database.  Currently this
 is only possible by someone explicitly defining pytest_funcarg__Users,
 but Django allows you to dynamically look up all the models in the
 database so there is no reason this can't be build automatically.
 
 I think this is what the API you proposed was for, but as I said I
 can't remember the details.  And in this case I might be less
  enthusiastic in postponing its implementation to a later release ;-)

It's probably true that we could invent an register-factory API for this.

However, what about a single models object (done traditionally
with a pytest_funcarg__models definition) which itself provides
an API to give Users or others data?

best,
holger

 Regards,
 Floris
 
 -- 
 Debian GNU/Linux -- The Power of Freedom
 www.debian.org | www.gnu.org | www.kernel.org
 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] Storing terminal width in py.test config object

2012-06-25 Thread holger krekel
On Mon, Jun 25, 2012 at 11:06 +0100, Floris Bruynooghe wrote:
 On 21 June 2012 07:16, holger krekel hol...@merlinux.eu wrote:
  On Thu, Jun 21, 2012 at 00:16 +0100, Floris Bruynooghe wrote:
  An annoyance of the pytest_assertrepr_compare hook is that it can not
  normally access the terminal width since usually it is called while
  stdout and stderr are being captured which breaks
  py.io.get_terminal_width().  Since I think it is fairly rare to get a
  changing terminal size during a py.test run I would propose py.test
   stores the terminal width on its config object which would solve that
  annoyance.
 
  Would this be a reasonable thing to do?
 
  I think so.  However, I'd like to have this working with xdist as well
  and slaves generally do not have access to the master terminal.
   I guess xdist could take care to explicitly transfer terminal_width
  once it is on the config object. You can leave the latter to me if you 
  prefer.
 
 Ok, as I'm in no hurry on this I've created an issue for this so I
  don't completely lose track.  I don't mind attempting to look at
 xdist when I get round to it, I probably need to get to know it for my
 pytest-timeout plugin anyway.

There is a semi-official way to pass data between master and slaves,
see this test:


https://bitbucket.org/hpk42/pytest-xdist/src/6d23d5c1326f/testing/acceptance_test.py#cl-159

With this, you could define the appropriate code in the xdist-plugin
(or in any other plugin) to make config.terminal_width available 
on slaves.  I guess that pytest itself should grow a config.terminal_width
and xdist would just take care to transfer it.
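A minimal sketch of that idea (treating config.terminal_width as the
proposed attribute, not an existing API) could look like this::

    import py

    def pytest_configure(config):
        # compute the width once, before output capturing kicks in; xdist
        # would then transfer this value to slaves instead of recomputing it
        if not hasattr(config, "terminal_width"):
            config.terminal_width = py.io.get_terminal_width()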

Actually, it would be interesting to make a virtual py.io.TerminalWriter()
available which uses the settings from the master terminalwriter (as
used in the terminal plugin). It could be used to produce colored
output on the slaves to be shown on the master terminal.  I am hesitant
to point you to py.io.TerminalWriter, however, because its code is in
need of a cleanup and a unicode-review ... but maybe this is unrelated.

best,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] Resource providers

2012-06-25 Thread holger krekel
On Mon, Jun 25, 2012 at 15:21 +0100, Floris Bruynooghe wrote:
 On 25 June 2012 14:29, holger krekel hol...@merlinux.eu wrote:
  On Mon, Jun 25, 2012 at 10:55 +0100, Floris Bruynooghe wrote:
  The concrete example I have now is that it could be nice in
  pytest-django to be able to request e.g. Users which is a model
  class used to access the User table in the database.  Currently this
  is only possible by someone explicitly defining pytest_funcarg__Users,
  but Django allows you to dynamically look up all the models in the
  database so there is no reason this can't be build automatically.
 
  I think this is what the API you proposed was for, but as I said I
  can't remember the details.  And in this case I might be less
   enthusiastic in postponing its implementation to a later release ;-)
 
  It's probably true that we could invent an register-factory API for this.
 
  However, what about a single models object (done traditionally
  with a pytest_funcarg__models definition) which itself provides
  an API to give Users or others data?
 
 Yes of course, that is what I currently have in my conftest.py.  But
 it would still be a nice thing to be able to do and a nice example of
 functionality I have wished I had before.  Hence I was wondering if
 the API you talked about yesterday would support it.

I guess it could, for example, look like this::

    def pytest_configure(config):  # [1]
        def createmodel(name, node):
            """return django model object."""
            # node can be None, Directory, Module, Class, Item, etc.
            # (code to compute model)
            return model

        for name in modelnames:
            config.register_factory(name, createmodel)

Getting a resource would work like this::

    config.getresource(name)

The --funcargs option would (remain) able to show the docstring
and location of the createmodel function.

Another interesting bit is how to use register_factory
to connect the existing pytest_funcarg__... factories
which have a certain scope.  I guess something like this::

    config.register_factory(name, factoryfunc, node)

would suffice - it would restrict the scope of the factory
function to the specified node and all of its descendents.
It could be called from Directory, Module, Class's setup methods
to register the respective pytest_funcarg__ functions scoped as
per-directory (conftest.py), per-module or per-class factories.
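
As a sketch only (register_factory is the API proposed here, nothing that
exists today), hooking up a module's factories might look roughly like this::

    def register_module_factories(config, module_node):
        # scan the collected module object for old-style factories and
        # register each one, restricted to this module node and its children
        prefix = "pytest_funcarg__"
        for name in dir(module_node.obj):
            if name.startswith(prefix):
                factory = getattr(module_node.obj, name)
                config.register_factory(name[len(prefix):], factory, module_node)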

Note that the node passed to the createmodel factory function
above is probably necessary for this case because existing
funcarg-factories operate on Items (or in the future Nodes).

getfuncargvalue() would then be implemented in terms of a
call to config.getresource(name, node).

In general, register_factory needs to be callable multiple
times with the same name.  One little issue is that we want to
accept multiple factories for the same resource.

This new resource registration/lookup could work much
more efficiently than the current scheme which - upon every getfuncargvalue() -
iterates over all plugins, modules and classes to discover matching 
pytest_funcarg__ factories.

hope this all makes some sense.

best,
holger


[1] We really need a new hook like pytest_runtest_init() which
is called once before the runtest loop actually starts its work.
pytest_configure() usually works but it is also called on the
xdist-master process for which setting up resources makes no sense.

 Floris
 
 -- 
 Debian GNU/Linux -- The Power of Freedom
 www.debian.org | www.gnu.org | www.kernel.org
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] new cache and pep8 pytest plugin releases

2012-06-21 Thread holger krekel

I quickly released a pytest-pep8-1.0.1 which includes an explicit dependency on
pytest-cache.  Thanks to Hynek Schlawack for reporting.

On Wed, Jun 20, 2012 at 20:52 +, holger krekel wrote:
 i just released two new plugins:
 
 * pytest-cache-0.9 (initial) for easy caching of values across test runs 
   and a new --lf option to rerun the failing tests of a previous run. Install,
   basic example and API (for use by other plugins) is here:
 
   http://packages.python.org/pytest-cache/readme.html
 
 * pytest-pep8-1.0 a flexible pep8 checker which allows to keep your project
   PEP8 compliant with your choice of ignore-options on a per-file basis.
   It avoids checking files that haven't changed.  Examples and docs at:
   
   http://pypi.python.org/pypi/pytest-pep8
 
 have fun,
 holger
 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] new cache and pep8 pytest plugin releases

2012-06-20 Thread holger krekel

i just released two new plugins:

* pytest-cache-0.9 (initial) for easy caching of values across test runs 
  and a new --lf option to rerun the failing tests of a previous run. Install,
  basic example and API (for use by other plugins) is here:

  http://packages.python.org/pytest-cache/readme.html

* pytest-pep8-1.0 a flexible pep8 checker which allows to keep your project
  PEP8 compliant with your choice of ignore-options on a per-file basis.
  It avoids checking files that haven't changed.  Examples and docs at:
  
  http://pypi.python.org/pypi/pytest-pep8

have fun,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] intermittent bugs in pytest/python on osx lion ?

2012-06-17 Thread holger krekel

Thanks for your persistence.  I recommend also compiling Python-3.3 (not
released yet - you need to grab the repository i think)  and throwing
that in the mix. FYI you could use tox (http://tox.testrun.org) to
automate the various test runs.

Your guess of something corrupting bytecode makes sense.
You might want to use export PYTHONDONTWRITEBYTECODE=1 to inhibit
pyc file writing (this also disables it for pytest's test module
assertion rewriting, so you may return to using the default 
assert mode).

good luck,
holger


On Sat, Jun 16, 2012 at 21:39 -0400, Ian Miers wrote:
 Well, I just got the __import__ error  but with a different file and module
  using assert=plain and not running the doctest module.
 
 The subsequent runs could not find a certain c extension function. I
 removed all the __pycache__ and .pyc files with git clean -fxd -e **.so
 and it claimed it can't import charm.core despite the fact that it exists
 and can be imported via python3.2 -c "import charm.core" (which is
 imported from the expected location even ).  The next test pass worked.
 
 No I am not using any plugins.
 
 Interestingly, I just got the error on a run with the unittest module.
 Though it maybe because something was corrupted since clearing the pyc
 files fixed it.
 I am now completely baffled.
 It looks like something is, for lack of a better term, corrupting the
 running copy of the interpreted code.  Do you know anything that could
 possibly do that? Poorly written c extensions perhaps ?
 
 I think I am going to set up some script and just see if I can collect
 errors with pytest and unittest
 
 Ian
 
 
 
 
 
 python3.2 -m pytest --assert=plain
 === test session starts
 
 platform darwin -- Python 3.2.3 -- pytest-2.2.4
 collected 98 items / 1 errors
 
 schemes/test/chamhash_test.py ..
 schemes/test/commit_test.py ..
 schemes/test/dabenc_test.py ..
 schemes/test/encap_bchk05_test.py .
 schemes/test/grpsig_test.py ..
 schemes/test/hibenc_test.py .
 schemes/test/ibenc_test.py .
 schemes/test/pk_vrf_test.py .
 schemes/test/pkenc_test.py ...
 schemes/test/pksig_test.py .
 schemes/test/rsa_alg_test.py 
 charm/test/toolbox/conversion_test.py ...
 charm/test/toolbox/paddingschemes_test.py s..
 charm/test/toolbox/secretshare_test.py .
 charm/test/toolbox/symcrypto_test.py ..
 charm/toolbox/paddingschemes_test.py s..
 charm/toolbox/symcrypto_test.py ..
 
 == ERRORS
 ==
 ___ ERROR collecting
 schemes/test/abenc_test.py 
 schemes/test/abenc_test.py:1: in <module>
     from schemes.abenc.abenc_adapt_hybrid import HybridABEnc as HybridABEnc
 schemes/abenc/abenc_adapt_hybrid.py:6: in <module>
     from charm.toolbox.symcrypto import AuthenticatedCryptoAbstraction
 charm/toolbox/symcrypto.py:9: in <module>
     import hmac
 E   TypeError: __import__() argument 1 must be str without null bytes, not
 str
 
 On Sat, Jun 16, 2012 at 12:30 PM, holger krekel hol...@merlinux.eu wrote:
 
  On Sat, Jun 16, 2012 at 11:59 -0400, Ian Miers wrote:
   So these issues only happen on OSX and as I said,they come and go and
   change,which is why I am skeptical it's our code. Actually if I had to
   guess I'd say its something with python3 on OSX Lion, but so far we've
  only
   seen the issues in test runs, not when attempting to manually recreate
  the
   issue.
  
   Regarding import os.path error, I got it again. Trying to import the
  module
   containing  that import after that error caused a bus error. However all
   subsequent imports worked fine. The strange thing is that the pyc file
   didn't change ( or at least its md5 sum didn't) and the modification date
   for all of the pyc files for that package remain unchanged.
  
  
python3.2 -m pytest --assert=plain seemed to work for a little bit, but
   then we started getting the same intermittent changing  errors.
 
  With --assert=plain pytest should not interfere with anything related
  to importing.  Did you get the strange error with import os.path
  in this configuration?   Are you using any plugins (see output of
  py.test --version)?
 
  The below error may in principle be a doctest/pytest bug with
  python3.2 although it would be strange if it only happened sometimes.
 
  Holger
 
   Allmost all runs with assert=reinterp  , including after removing all
   __pychache__ and recompiling the c extensions  produce the INTERNALERROR
below.
  
  
  
   Thanks again,
   Ian
  
   python3.2 -m pytest --assert=reinterp
   = test session starts
   ==
   platform darwin -- Python 3.2.3 -- pytest-2.2.4
   collected 232 items
  
   schemes/__init__.py .
   schemes/chamhash_adm05.py .
   schemes/chamhash_rsa_hw09.py

Re: [py-dev] intermittent bugs in pytest/python on osx lion ?

2012-06-16 Thread holger krekel
Hi Ian,

On Fri, Jun 15, 2012 at 20:32 -0400, Ian Miers wrote:
 Hi, I just started using pytest. It's lovely.
 The TLDR on this is we are getting intermittent non reproducible errors
  that change and sometimes disappear between test runs like the following :
import os.path
 E   TypeError: __import__() argument 1 must be str without null bytes, not
 str

Looks odd.  pytest does not override __import__ but it does by default
use a PEP302 compliant module loader.  If you use

py.test --assert=reinterpret  # or --assert=plain

do the errors go away?  If so that points to a problem in pytest's module
loader or some interaction problem with python3.2.

If changing the assertion mode still leads to errors then i strongly
suspect it's other parts of the code you are running.  I'd then suggest
to check if something in your environment modifies __import__
e.g. by writing a test that checks/prints out __import__?
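Such a check can be as small as this (python3 spelling; on python2 the
module is called __builtin__)::

    import builtins

    def test_show_import():
        # print which __import__ is active while the suite runs; anything
        # other than the builtin one points to whoever patched it
        print(builtins.__import__)
        assert callable(builtins.__import__)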

best,
holger

 Longer version:
 We've been getting buggy results out of test runs on OSX Lion with python
 3.2.3  and  py.test version 2.2.4. Specifically we've been getting what
 appear to be false-positive  test failures that change from run to run  and
 cannot be reproduced by running some code ourselves or in the case of
 doctest manually running  python3.2 -m doctest file.
 
 Some of these errors will stay around for multiple test runs even after
 make clean, etc. Some will change from run to run. All of them
 eventually disappeared  temporarily and we got a clean test pass, though
 how I don't know. Morever, the problems cropped up again. All of this was
 with no code changes and code known to work on ubuntu and partially
 manually tested on OSX.
 
 Ordinarly I'd say there was something wrong with our code. However, some of
 the errors are vanishingly unlikely. Claims that modules don't exist when
 they do and are importable via python3.2 -c "from foo.bar.baz import narf"
 and such.
 
 The most glaring, however, is this gem:
 charm/toolbox/pairinggroup.py:3: in <module>
import os.path
 E   TypeError: __import__() argument 1 must be str without null bytes, not
 str
 
 We also got a lovely bug where it appeared __pycache__  was corrupted
 during test runs.  On an initial run, we could import a function from a
 python c extension. On subsequent runs, it didn't exist. The function was
 still in the .so file, as shown by nm, however help(module) returned
 function_name#$@^%#$% Function description. Deleting __pychache__ folders
  resolved it for the next test run but then it came back. It too
 disappeared after a couple of test runs never to be seen since.
 
 Has anyone seen anything like this? Are there known issues on OSX with
 python? With pytest? Does anyone have any idea how I might get a better
 idea what's going on?
 
 As an addendum, the latest error I am getting is now :
    test = [hashFn(struct.pack("%dsI" % (len(seed)), seed, i)) for i in
 ran]
 E   UnicodeEncodeError: 'ascii' codec can't encode character '\x9e' in
 position 0: ordinal not in range(128)
 This actually might be in our code, though again it works on ubuntu and at
 one point on OSX.  Given that it's a pattern that points to python or pytest
 doing something to binaries, I'm including it anyway.
 
 The project is charm, you can see the code on this branch here via
 https://github.com/JHUISI/charm/tree/dev. Python was installed via fenc (I
 think, it's not my box). The errors happened both with python3.2 -m pytest
 and with python3.2 setup.py test.  Though it appears more so with the
 latter.
 
 Thanks,
 
 Ian

 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev

___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] pytest-pep8-0.9.1: compatibility to pep8-1.3

2012-06-16 Thread holger krekel

I just did a quick release of pytest-pep8, version 0.9.1 which fixes
compatibility issues with the recent pep8 package (1.3).

See http://pypi.python.org/pypi/pytest-pep8 for more info.

best,
holger

___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] pytest documents with Japanese translation

2012-06-06 Thread holger krekel
Hi Tetsuya,

accepted it, many thanks!

I created an issue for some further necessary work:

https://bitbucket.org/hpk42/pytest/issue/155/bring-japanese-translation-online-reorg

which i hope to tackle soon.   

On a sidenote, I am also thinking about packaging docs separately now that
they grow larger (and maybe even tests at some point). But that can come
later.

best,
holger

On Wed, Jun 06, 2012 at 09:07 +0900, Tetsuya Morimoto wrote:
 Hi Holger,
 
 I sent a pull request. Confirm it!
 
 https://bitbucket.org/hpk42/pytest/pull-request/14/added-japanese-translation-documentation
 
 thanks,
 Tetsuya
 
 On Sat, Jun 2, 2012 at 8:35 PM, holger krekel hol...@merlinux.eu wrote:
  Hi Tetsuya,
 
  On Sat, Jun 02, 2012 at 20:15 +0900, Tetsuya Morimoto wrote:
  Hi Holger,
 
  I merged the changes of 2.2.4 into my translation repository. I mean
  it's available to send a pull request.
  https://bitbucket.org/t2y/pytest-ja
 
  I think it's better to send a pull request from me after you
  reorganize the docs directory in your repository. Maybe, I re-fork
  pytest repository to drop the named branch for translation, anyway, I
  will follow your way.
 
  Can you imagine to do the doc reorganisation yourself?  This way there
  is no need for a named branch - you could just do the final sub
  directory layout as discussed in the last mail.  I would care for the
  push-to-website bit and the apache side reconfiguration.
 
  best,
  holger
 
 
  thanks,
  Tetsuya
 
  On Wed, May 30, 2012 at 12:07 AM, Tetsuya Morimoto
  tetsuya.morim...@gmail.com wrote:
   Hi Holger,
  
   Thanks for remembering the translation! :)
  
   Does this also make sense to you?  If so, it'd be great if you could
   submit a pull request that reorganises pytest's docs accordingly,
   preferably starting from the current trunk.
    Yes, it makes sense for me. I have done similar work for
    virtualenvwrapper. That might be informative.
  
   http://www.doughellmann.com/docs/virtualenvwrapper/index.html
   http://www.doughellmann.com/docs/virtualenvwrapper/ja/index.html
   https://bitbucket.org/dhellmann/virtualenvwrapper/src/e1a05e751a56/docs
  
   There weren't much changes
   between 2.2.3 and 2.2.4 in the doc directory, anyway.
   OK! I will update the Japanese translation and submit a pull request!
  
   I'll take care to push the docs to pytest.org - maybe using a scheme 
   like this:
  
      pytest.org/VER/... # for english documentation
      pytest.org/en/VER  # for english documentation
      pytest.org/ja/VER  # for japanese documentation
  
   where VER is a version number and latest could point to the latest 
   number.
   It looks nice. I agree with you.
  
   that there is another error i am not sure about at the moment.
   Maybe Ronny can take a look.
   I see. I will talk to Ronny later.
  
   thanks,
   Tetsuya
  
   On Tue, May 29, 2012 at 11:13 PM, holger krekel hol...@merlinux.eu 
   wrote:
   Hi Tetsuya,
  
   On Mon, Apr 23, 2012 at 03:43 +0900, Tetsuya Morimoto wrote:
   Hi Holger,
  
   Thanks for thinking about Japanese translation.
  
   sorry for taking a while.
  
* how do the eventual URLs look like? Maybe pytest.org/latest-jp/...?
 instead of pytest.org/latest/ ?
 I'd definitely like to keep existing URLs working.
    It's a good idea and I suggest latest-ja is more proper than latest-jp
    since jp is the country code.
  
   I think the eventual URL is not a big problem.  We just need to take
   care that it is compatible with the existing scheme, i.e. allows 
   existing
   URLs to remain valid.
  
* how to organise the repository so that both EN and JP are included
 (without the need to have a branch)
   I forked original branch for Japanese translation and made named
   branch to represent the version, but do you want to manage the
   translation in original repository? If so, I know Sphinx has i18n
   feature using gettext since 1.1. Is this useful for your purpose?
   http://sphinx.pocoo.org/latest/intl.html
  
   I think what we need is something along those lines:
  
   http://mark-story.com/posts/view/creating-multi-language-documentation-with-sphinx
  
   Does this also make sense to you?  If so, it'd be great if you could
   submit a pull request that reorganises pytest's docs accordingly,
   preferably starting from the current trunk.  There weren't much changes
   between 2.2.3 and 2.2.4 in the doc directory, anyway.
  
   I think we'd end up with something like:
  
      pytest/
          doc/
              en/
                  # current english files and dirs
              ja/
                  # current japanese files and dirs
  
   I'll take care to push the docs to pytest.org - maybe using a scheme 
   like this:
  
      pytest.org/VER/... # for english documentation
      pytest.org/en/VER  # for english documentation
      pytest.org/ja/VER  # for japanese documentation
  
   where VER is a version number and latest could point to the latest 
   number.
  
* do we

Re: [py-dev] pytest documents with Japanese translation

2012-06-06 Thread holger krekel
Hello Ronny,

On Wed, Jun 06, 2012 at 13:59 +0200, Ronny Pfannschmidt wrote:
 Hi Holger, Tetsuya
 
 while taking a look i noticed that sphinx has an integrated
 translation system based on .po files
 
 see http://sphinx.pocoo.org/latest/intl.html
 
 i'm under the impression that this way is more maintainable than the
 current way due to using standard tools to manage translated strings
 
 i'd like to suggest some investigation into its usefulness

I am happy to consider using it - not sure i get to try it myself
any time soon and wonder specifically if a two-language situation with
not so often changing docs warrants using the above system.
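
For reference, the sphinx side of it boils down to a few conf.py settings
(a sketch; option names as of sphinx 1.1, actual values would depend on the
layout we pick)::

    # conf.py
    locale_dirs = ['locale/']    # where the .po/.mo catalogs would live
    gettext_compact = False      # one catalog per source document
    language = 'ja'              # select which translation to build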

Tetsuya, what do you think? Can you imagine investigating a bit?

best,
holger

 -- ronny
 
 
 On 06/06/2012 02:07 AM, Tetsuya Morimoto wrote:
 Hi Holger,
 
 I sent a pull request. Confirm it!
 
 https://bitbucket.org/hpk42/pytest/pull-request/14/added-japanese-translation-documentation
 
 thanks,
 Tetsuya
 
 On Sat, Jun 2, 2012 at 8:35 PM, holger krekelhol...@merlinux.eu  wrote:
 Hi Tetsuya,
 
 On Sat, Jun 02, 2012 at 20:15 +0900, Tetsuya Morimoto wrote:
 Hi Holger,
 
 I merged the changes of 2.2.4 into my translation repository. I mean
 it's available to send a pull request.
 https://bitbucket.org/t2y/pytest-ja
 
 I think it's better to send a pull request from me after you
 reorganize the docs directory in your repository. Maybe, I re-fork
 pytest repository to drop the named branch for translation, anyway, I
 will follow your way.
 
 Can you imagine to do the doc reorganisation yourself?  This way there
 is no need for a named branch - you could just do the final sub
 directory layout as discussed in the last mail.  I would care for the
 push-to-website bit and the apache side reconfiguration.
 
 best,
 holger
 
 
 thanks,
 Tetsuya
 
 On Wed, May 30, 2012 at 12:07 AM, Tetsuya Morimoto
 tetsuya.morim...@gmail.com  wrote:
 Hi Holger,
 
 Thanks for remembering the translation! :)
 
 Does this also make sense to you?  If so, it'd be great if you could
 submit a pull request that reorganises pytest's docs accordingly,
 preferably starting from the current trunk.
  Yes, it makes sense for me. I have done similar work for
  virtualenvwrapper. That might be informative.
 
 http://www.doughellmann.com/docs/virtualenvwrapper/index.html
 http://www.doughellmann.com/docs/virtualenvwrapper/ja/index.html
 https://bitbucket.org/dhellmann/virtualenvwrapper/src/e1a05e751a56/docs
 
 There weren't much changes
 between 2.2.3 and 2.2.4 in the doc directory, anyway.
 OK! I will update the Japanese translation and submit a pull request!
 
 I'll take care to push the docs to pytest.org - maybe using a scheme 
 like this:
 
 pytest.org/VER/... # for english documentation
 pytest.org/en/VER  # for english documentation
 pytest.org/ja/VER  # for japanese documentation
 
 where VER is a version number and latest could point to the latest 
 number.
 It looks nice. I agree with you.
 
 that there is another error i am not sure about at the moment.
 Maybe Ronny can take a look.
 I see. I will talk to Ronny later.
 
 thanks,
 Tetsuya
 
 On Tue, May 29, 2012 at 11:13 PM, holger krekelhol...@merlinux.eu  
 wrote:
 Hi Tetsuya,
 
 On Mon, Apr 23, 2012 at 03:43 +0900, Tetsuya Morimoto wrote:
 Hi Holger,
 
 Thanks for thinking about Japanese translation.
 
 sorry for taking a while.
 
 * how do the eventual URLs look like? Maybe pytest.org/latest-jp/...?
   instead of pytest.org/latest/ ?
   I'd definitely like to keep existing URLs working.
  It's a good idea and I suggest latest-ja is more proper than latest-jp
  since jp is the country code.
 
 I think the eventual URL is not a big problem.  We just need to take
 care that it is compatible with the existing scheme, i.e. allows existing
 URLs to remain valid.
 
 * how to organise the repository so that both EN and JP are included
   (without the need to have a branch)
 I forked original branch for Japanese translation and made named
 branch to represent the version, but do you want to manage the
 translation in original repository? If so, I know Sphinx has i18n
 feature using gettext since 1.1. Is this useful for your purpose?
 http://sphinx.pocoo.org/latest/intl.html
 
 I think what we need is something along those lines:
 
 http://mark-story.com/posts/view/creating-multi-language-documentation-with-sphinx
 
 Does this also make sense to you?  If so, it'd be great if you could
 submit a pull request that reorganises pytest's docs accordingly,
 preferably starting from the current trunk.  There weren't much changes
 between 2.2.3 and 2.2.4 in the doc directory, anyway.
 
 I think we'd end up with something like:
 
 pytest/
 doc/
 en/
 # current english files and dirs
 ja/
 # current japanese files and dirs
 
 I'll take care to push the docs to pytest.org - maybe using a scheme 
 like this:
 
 pytest.org/VER/... # for english documentation
 pytest.org/en/VER  # for 

Re: [py-dev] pytest tmpdir / test directories

2012-06-06 Thread holger krekel
Hi Ronny, CCing py-dev again, was lost in between,

On Wed, Jun 06, 2012 at 14:44 +0200, Ronny Pfannschmidt wrote:
 On 06/06/2012 02:35 PM, holger krekel wrote:
 On Tue, Jun 05, 2012 at 08:18 +0200, Ronny Pfannschmidt wrote:
 Hi Holger,
 
 i was thinking of just naming the current tmpdir funcarg testdatadir,
 and implementing tmpdir in terms of testdatadir.ensure('tmpdir', dir=1)
 
 sure.  no immediate need to have this as a pytest core plugin, i guess.
 
 
 this as meant as a patch to pytests tmpdir plugin

Ah, i slowly get it.  If you get into the habit of providing a full example
of things at the beginning i might be able to understand things quicker.

I am hesitant because it introduces another generic core funcarg which
needs explanation and examples for relatively little benefit.  I'd
rather suggest to think about extending reporting such that the paths
to interesting files for a (failing) test are presented, like in your
example the one to couchdb.dump
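
For reference, the directory scheme itself can already be approximated in a
plain conftest.py without any core change; a sketch with made-up names,
using the old-style funcarg API::

    def pytest_funcarg__testdatadir(request):
        # per-test data directory: simply reuse the built-in tmpdir factory
        return request.getfuncargvalue("tmpdir")

    def pytest_funcarg__datatmpdir(request):
        # the directory handed to test code, one level below the data dir;
        # "datatmpdir" is a made-up name to avoid clashing with core tmpdir
        return request.getfuncargvalue("testdatadir").ensure("tmpdir", dir=1)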

best,
holger


 -- Ronny
 
 holger
 
 -- Ronny
 
 On 06/04/2012 04:49 PM, holger krekel wrote:
 Hi Ronny,
 
 i think you could implement your scheme by writing maybe a testdir
 funcarg (a py.path.local object) which has a .tmpdir attribute.
 
 The existing testdatadir is only for internal purposes and a
 bit convoluted anyway - i don't want to promote its usage but i
 also don't currently feel like refactoring it on a large scale.
 
 best,
 holger
 
 On Tue, May 15, 2012 at 19:17 +0200, Ronny Pfannschmidt wrote:
 Hi Holger,
 
 currently, the tmpdir funcarg creates a directory like::
 
/tmp/pytest-0/test_python0
 
 i currently abuse that in a plugin to store db dump, that currently
 looks like::
 
$ tree /tmp/pytest-0/test_python0
/tmp/pytest-0/test_python0
|-- couchdb.dump
`-- proc
 
 where couchdb.dump is a database dump, that's actually related to a
 test, but shouldn't really be in its tmpdir,
 
 while proc is a directory actually created by the test
 
 i would rather have the following tree::
 
   /tmp/pytest-0/ (test session root as before)
     test_python0 (test data directory for that test)
       couchdb.dump (the db dump)
       tmpdir (this path will be in the tmpdir funcarg)
         proc (the directory the test actually made)
 
 
 so testdatadir will be a funcarg *and* literally a directory that
 will be used by other functionalities as a place to put data,
 
 tmpdir would just be a directory within that
 
 if this structure is a given,
 we can also easily add per test coverage data and some extra dumps
 (like for example a logging filehandler)
 
 without disturbing expectations about having a clean tmpdir
 
 also i would like to grab screenshots there as well
 (acceptance tests with a headless webkit)
 
 -- Ronny
 
 On 05/15/2012 06:56 PM, holger krekel wrote:
 Hi Ronny,
 
 On Sun, May 13, 2012 at 10:26 +0200, Ronny Pfannschmidt wrote:
 Hi Holger,
 
 for one of my pytest plugins i drop out files that don't exactly fit
 tmpdir, so i'd like to propose to flip it around a bit,
 
 so we get a testdatadir funcarg/directory and tmpdir is a directory 
 below it
 
 confused a bit - do you literally mean funcarg/directory?
 could you give a more concrete example?
 
 from pytest plugins i'd use it to drop things like screenshots
 (pytest-ghost i.e. headless webkit),
 db dumps (pytest_couchdbkit)
 and later maybe for something like per-test coverage reports
 
 for later it would also be nice to be able to transfer that data
 with a report and store it in some kind of test repository (for my
 thesis i'll probably prototype a couchdb one)
 
 I'd definitely like to see a test result repository and give test code
 the possibility to create some payload and make this easily accessible
 from outside a test run for visualization or other purposes.  I'd like
 to list a few use cases for such a repository and then design the API.
 But this is rather something post-may.
 
 holger
 
 
 -- Ronny
 
 
 
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] Performance tests with py.test

2012-06-04 Thread holger krekel
Hi Bogdan,

sorry for taking a while ...

On Thu, May 03, 2012 at 11:18 +1000, Bogdan Opanchuk wrote:
 For some reason this message has not shown up in the list archives, so
 I'll resend it again.
 
 On Sun, Apr 1, 2012 at 12:21 PM, holger krekel hol...@merlinux.eu wrote:
 
  do you want to consider performance regressions as a failure?
 
 Not really. I just need some table with performance results that I
 could get for different systems/versions and compare them. Besides,
 performance regressions can be implemented using existing
 functionality, because they do not have some continuous result
 associated with them — only pass/fail.
 
  Could you maybe provide a simple example test file and make up
  some example output that you'd like to see?
 
 Sure. Consider the following test file:
 
 -
 import pytest
 
 def test_matrixmul():
     pass
 
 @pytest.mark.perf('seconds')
 def test_reduce():
     # some lengthy preparation that I do not want to time
     # actual work
     return 1.0
 
 @pytest.mark.perf('GFLOPS')
 def test_fft():
     # again, some lengthy preparation that I do not want to time
     # actual work
     return 1.0, 1e10
 -
 
 Here test_matrixmul() is a normal pass/fail test, test_reduce() is
 marked as performance test that returns execution time, and
 test_fft() is marked as a test that returns execution time + the
 number of operations (thus allowing us to calculate GFLOPS value).
 
 I have put together a clunky solution (see the end of this letter)
 using existing hooks that gives me more or less what I want to see:
 
 $ py.test -v
 ...
 test_test.py:3: test_matrixmul PASSED
 test_test.py:6: test_reduce 1.0 s
 test_test.py:10: test_fft 10.0 GFLOPS
 ...
 
 The only problem here is that I have to explicitly increase verbosity
 level. I'd prefer 'perf' marked tests show their result even for
 default verbosity, but I haven't found a way to do it yet.

not sure it helps but are you aware that you can put 

[pytest]
addopts = -v

in a pytest.ini file?

  Meanwhile, if you haven't already you might want to look at the output
  of py.test --durations=10 and see about its implementation (mostly
  contained in _pytest/runner.py, grep for 'duration').
 
 Yes, I know about it, but it is not quite what I need:
 - it measures the time of the whole testcase, while I usually need to
 time only specific part

right.  It differentiates between setup and runtest phases for a test, though.

 - it does not allow me to measure anything more complicated (e.g.
 GFLOPS, as another variant I may want to see the error value)
 - it prints its report after all the tests are finished, while it is
 much more convenient to see testcase result as soon as it is finished
 (my performance tests may run for quite a long time)

right.

 So, the solution I have now is shown below. pytest_pyfunc_call()
 implementation annoys me the most, because I had to copy-paste it from
 python.py, so it exposes some py.test internals and can easily break
 when something (seemingly hidden) inside the library is changed.

That's true.  If you want to open an issue about this (pytest_pyfunc_call
to return test function return value), i can see to care about it.

 -
 def pytest_configure(config):
     config.pluginmanager.register(PerfPlugin(config), '_perf')
 
 class PerfPlugin(object):
 
     def __init__(self, config):
         pass
 
     def pytest_pyfunc_call(self, __multicall__, pyfuncitem):
         # collect testcase return result
         testfunction = pyfuncitem.obj
         if pyfuncitem._isyieldedfunction():
             res = testfunction(*pyfuncitem._args)
         else:
             funcargs = pyfuncitem.funcargs
             res = testfunction(**funcargs)
         pyfuncitem.result = res
 
     def pytest_report_teststatus(self, __multicall__, report):
         outcome, letter, msg = __multicall__.execute()
 
         # if we have some result attached to the testcase,
         # print it instead of 'PASSED'
         if hasattr(report, 'result'):
             msg = report.result
 
         return outcome, letter, msg
 
     def pytest_runtest_makereport(self, __multicall__, item, call):
         report = __multicall__.execute()
 
         # if the testcase has passed, and has 'perf' marker,
         # process its results
         if call.when == 'call' and report.passed and hasattr(item.function, 'perf'):
             perf = item.function.perf
             perftype = perf.args[0]
             if perftype == 'seconds':
                 report.result = str(item.result) + " s"
             else:
                 seconds, operations = item.result
                 report.result = str(operations / seconds / 1e9) + " GFLOPS"
 
         return report
 -

We could also think about a convention that would allow setting
the short/longform (letter,msg) on the report object so that one could
also get rid of the report_teststatus hook.  I am slightly less
inclined to go for this because the above teststatus bit seems
nice enough.

best,
holger

[py-dev] pytest-2.2.4 - bugfixes and better junitxml/unittest/python3 compat

2012-05-22 Thread holger krekel
pytest-2.2.4: bug fixes, better junitxml/unittest/python3 compat
===

pytest-2.2.4 is a minor backward-compatible release of the versatile
py.test testing tool.   It contains bug fixes and a few refinements
to junitxml reporting, better unittest- and python3 compatibility.

For general information see here:

 http://pytest.org/

To install or upgrade pytest:

pip install -U pytest # or
easy_install -U pytest

Special thanks for helping on this release to Ronny Pfannschmidt
and Benjamin Peterson and the contributors of issues.

best,
holger krekel

Changes between 2.2.3 and 2.2.4
---

- fix error message for rewritten assertions involving the % operator
- fix issue 126: correctly match all invalid xml characters for junitxml
  binary escape
- fix issue with unittest: now @unittest.expectedFailure markers should
  be processed correctly (you can also use @pytest.mark markers)
- document integration with the extended distribute/setuptools test commands
- fix issue 140: properly get the real functions
  of bound classmethods for setup/teardown_class
- fix issue #141: switch from the deceased paste.pocoo.org to bpaste.net
- fix issue #143: call unconfigure/sessionfinish always when
  configure/sessionstart where called
- fix issue #144: better mangle test ids to junitxml classnames
- upgrade distribute_setup.py to 0.6.27

___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] [TIP] ANN: pytest-couchdbkit 0.4

2012-04-15 Thread holger krekel
Hey Ronny,

Not using couchdb myself yet but that might change soon - do you have a
complete usage example somewhere of how to use it or did i miss that?

best,
holger


On Sun, Apr 15, 2012 at 20:26 +0200, Ronny Pfannschmidt wrote:
 Hello,
 
 I'm happy to announce the release of pytest-couchdbkit 0.4,
 which is the first release of the py.test couchdbkit integration i
 consider reasonably complete for others to use.
 
 It is available on http://pypi.python.org/pypi/pytest-couchdbkit/0.4
 
 -- Ronny
 
 pytest-couchdbkit
 =
 
 pytest-couchdbkit is a simple pytest extension that manages test databases
 for your couchdbkit using apps.
 
 In order to use it, you only need to set the ini option
 `couchdbkit_suffix` to something fitting your app.
 Additionally you may use `couchdbkit_backend` to select
 the gevent/eventlet backends.
 
 
 To setup couchapps before running the tests,
 there is the `pytest_couchdbkit_push_app(server, dbname)` hook
 
 It can be used to create a pristine database,
 which is replicated into each test database.
 
 
 
 The provided funcarg `couchdb` will be a freshly flushed database
 named `pytest_` + couchdbkit_suffix,
 additionally, after each test item,
 the database will be dumped to tmpdir.join(couchdb.dump)
 
 which is a simple file having entries in the format::
 
 number(\d+) + \r\n + number bytes + \r\n
 
 entries are:
 
 * the db info
 * documents
 * raw attachment data following the document
 
 Attachments are ordered by name,
 so they can be reassigned to their metadata on loading.
 
 The dump format is meant to be human-readable.
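 
 Given that description, a reader for the dump is only a few lines (an
 illustration, not the plugin's actual code)::
 
     def iter_chunks(path):
         # the format above: a decimal length, \r\n, that many bytes, \r\n
         with open(path, "rb") as f:
             while True:
                 header = f.readline()
                 if not header:
                     break
                 size = int(header.strip())
                 data = f.read(size)
                 f.readline()          # skip the trailing \r\n
                 yield data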
 
 
 
 Future
 --
 
 * fs fixtures (like couchapp)
 * code fixtures
 * dump fixtures
 * comaring a db to sets of defined fixtures
 
 CHANGELOG
 =
 
 from 0.3 to 0.4
 ---
 
 - add pytest_couchdbkit_push_app hook
 
 
 ___
 testing-in-python mailing list
 testing-in-pyt...@lists.idyll.org
 http://lists.idyll.org/listinfo/testing-in-python
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] OpenStack

2012-03-31 Thread holger krekel
Hi Laurent, (sorry for the previous misnomer!)

I had a bit of a hard time reading your mail because you didn't use
the typical reply formatting. Actually i only now discovered the content,
at first i completely skipped the content because i thought it was just
a forwarded mail ... others may have done the same.  If it's not too much
effort maybe repost as a real reply?

best,
holger

On Tue, Mar 27, 2012 at 09:23 -0700, Brack, Laurent P. wrote:
 Hi Holger,
 
 No problem for the delayed response. I know you were in the bay area a
 while back and may have felt everyone is always in a rush but this is
 not our case :)
 
 -Original Message-
 From: holger krekel [mailto:hol...@merlinux.eu] 
 Sent: Saturday, March 24, 2012 6:42 AM
 To: Brack, Laurent P.
 Cc: py-dev@codespeak.net
 Subject: Re: [py-dev] OpenStack
 
 Hi Brack,
 [Laurent]  This is my last name :)
 
 sorry for taking a while.  Am still on travel, currently in the Sonoran
 deserts.  My answers up until first half May are thus likely to lag.
 (They do have surprisingly good internet connection ... better than in
 the three-times more expensive hotel in the San Francisco valley last
 week).
 
 On Tue, Mar 20, 2012 at 10:27 -0700, Brack, Laurent P. wrote:
  Forking from the [py-dev] pytest-timeout 0.2 e-mail thread.
  
   I am interested in OpenStack but you can you detail a bit more of 
   what you want to achieve?
  
  I have attached a rough diagram of what we are building internally
 (hopefully it will not be filtered out). 
 
 thanks for sharing.
 
  About a year ago, we attended a presentation on OpenStack (when it was
 still driven by Anso Labs before they got acquired by Rackspace). 
  We were (and are) currently using a private cloud (VMWare) but 
  contemplated the idea of scaling up to public clouds. Problem we had 
  was that we had to develop custom code for each cloud vendor whether
 Amazon EC2, Penguin, VMWare, etc. so OpenStack was really a viable path
 to not lock ourselves in with a given vendor.
 
 sounds sensible.
 
  While a bulk of our tests require embedded devices (in which case 
  virtualization makes little sense) a good chunk can be run on standard
 workstations (all OS flavors) and in this case it only makes sense to
 move to a virtual environment.
  
  PyTest combined with xdist was the dream come true as we could focus 
  on developing tests meant to run on a single machine and later
 seamlessly parallelize their execution. There is nothing new here.
  
  We were thinking of writing a plugin to xdist (cxdist - c for cloud) 
  that would interface with OpenStack, using the pytest_xdist_setupnodes
 and pytest_configure_node hooks.
 
 ok.
 
  In those hooks, the plugin would provision the machine (via OpenStack)
 
  and then make xdist believe that it is dealing with physical slaves.
 
  Finally at the end of the run, we would teardown the slaves, leaving
 resources for other tests to run. 
  
  We have not started yet on this as scalability is not an issue but 
  this is our plan. As the diagram shows, the red boxes are plugins we
 intend to release to the open source community.
 
 Is your general idea to use the open stack integration to get
 parallelizable test runs with totally controllable environments? 
 
 [Laurent]  It is. We have a fairly large VMWare infrastructure but I
 feel we are locked in. Also there is a point where scaling up this
 private cloud will simply make no sense from an economic standpoint.
 Finally, infrastructure like VMWare are of little use for the open
 source community. 
 
  Our teams write a lot of data driven tests (testing various AV 
  codecs with different configuration parameters using very similar
 verification procedure) and as a result we make heavy use of funcargs.
 
 Curious, are you using 2.2.3 for this?  There are some plans to further
 improve parametrization where i would be interested in your feedback.
 
 [Laurent]  Yes. As a matter of fact we had to make changes to our
 plugin to work properly with the new metafunc.parametrize method. Maybe
 here is the time to open a small parenthesis. We have simplified the
 generation process for our common users, although this doesn't prevent
 them from using
 custom hook implementations or factories. One thing that the plugin does
 is attach meta data to items (which is carried over from hook to
 hook). This meta data contains information about test cases on the
 testlink Server. One of the challenges with metafunc was to find a way to
 attach this information in a way it would not be lost during the test
 generation done by pytest. In 2.1.3, with addcall we had done this by
 hacking the param formal parameter's intended use (while preserving the
 functionality). In 2.2.3 (the addcall hack was broken - meta data got
 lost), we found another way. Again a hack and therefore no guarantee it
 will work with future versions. It would be nice to have a supported way
 to attach data as part of the parametrize process which is then
 carried

Re: [py-dev] pytest-timeout 0.2

2012-03-17 Thread holger krekel
Hi Floris,

On Sat, Mar 17, 2012 at 18:31 +0100, Floris Bruynooghe wrote:
 I've made a second release of the pytest-timeout plugin for py.test
 which can time out long running tests.  This release includes a number
 of suggestions made on this list, major changes include:
 
 * Fixed the activation problem
 * Set timeout using configuration file
 * Add a timeout marker to modify timeout of one item
 * The marker can also choose the method (signal/thread)
 * Renamed --nosigalrm to --timeout_method to future proof adding of
 eventlet and gevent timeout methods
 * Works on python 3, tested on 2.6, 2.7 and 3.2
 
 Not yet done:
 
 * Automatic enabling of the plugin, you still need to enable it on the
 command line or configuration file before you can use the marker.
 This was probably a bad idea but I felt bad about stealing a marker by
 default.

I went ahead and created a test function with

@pytest.mark.timeout(1)
def test_hello():
...

but the timeout was not honoured.  Then i skimmed the docs :)
added timeout_method = signal to my ini-file and ran, still not honoured.
Then i figured i need to set some dummy timeout = 10 in the ini - and now 
i get the proper timeout of 1 second.
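
So the combination that ended up working looks roughly like this in the
ini-file, with the @pytest.mark.timeout(1) marker then overriding the value
per test::

    [pytest]
    timeout = 10
    timeout_method = signal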

I understand the hesitance to grab a general name like timeout but then
again installing pytest-timeout is a deliberate act and it grabbing the
timeout marker is not surprising IMO.  So i'd kindly encourage you to
go for it. I wonder btw. if the output of --markers should be merged with
--help.  The latter would get yet longer but then again it's nice to 
have all the info at a fingertip.

Another feedback item: @pytest.mark.timeout(5, 'signal') ought to work.
It's slightly awkward because of the marker args/kwargs API but it's expected
from a pure user perspective i think.

Moreover i'd eventually like to include the timeout plugin
in pytest core.  It's an important feature for functional testing.

 * eventlet and gevent timeouts

Here is what i did for eventlet (only accessing the decorator here):

https://bitbucket.org/hpk42/detox/src/f9f8c0107cc1/tests/conftest.py#cl-108
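
In spirit it is just a decorator wrapping the test in eventlet.Timeout,
roughly like this (a sketch, not the code behind that link)::

    import eventlet

    def eventlet_timeout(seconds):
        # raise eventlet.Timeout if the wrapped test runs longer than `seconds`
        def decorate(func):
            def wrapper(*args, **kwargs):
                with eventlet.Timeout(seconds):
                    return func(*args, **kwargs)
            return wrapper
        return decorate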

cheers  thanks,
holger

 
 As before the release is on pypi:
 http://pypi.python.org/pypi/pytest-timeout and the development
 repository and issue tracker on bitbucket:
 https://bitbucket.org/flub/pytest-timeout/
 
 I'd be pleased to receive any further feedback you may have.
 
 Floris
 
 
 -- 
 Debian GNU/Linux -- The Power of Freedom
 www.debian.org | www.gnu.org | www.kernel.org
 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] New plugin: pytest-timeout

2012-03-08 Thread holger krekel
On Thu, Mar 08, 2012 at 23:08 +, Floris Bruynooghe wrote:
 On 7 March 2012 21:55, holger krekel hol...@merlinux.eu wrote:
  or with a marker.  Not using some implicit magic number is anyway a good
  idea i think.
 
 Not sure what you mean.  Do you mean using 0 as saying no timeout is
 a magic number but e.g. None is fine?  Essentially anything else other
 than a positive number is disable timeout to me.

just meant 5 seconds as a default.  0 or None is fine to mean no
timeout IMHO.

best,
holger

 Regards,
 Floris
 
 -- 
 Debian GNU/Linux -- The Power of Freedom
 www.debian.org | www.gnu.org | www.kernel.org
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] pytest-2.2.2: bug fixes, collectonly improvements

2012-02-05 Thread holger krekel

pytest-2.2.2 is a minor backward-compatible release of the py.test
testing tool.   It contains bug fixes and a few refinements particularly
to reporting with --collectonly, see below for betails.  

For general information see here:

 http://pytest.org/

To install or upgrade pytest:

pip install -U pytest # or
easy_install -U pytest

Special thanks for helping on this release to Ronny Pfannschmidt
and Ralf Schmitt and the contributors of issues.

best,
holger krekel


Changes between 2.2.1 and 2.2.2


- fix issue101: wrong args to unittest.TestCase test function now
  produce better output
- fix issue102: report more useful errors and hints for when a 
  test directory was renamed and some pyc/__pycache__ remain
- fix issue106: allow parametrize to be applied multiple times
  e.g. from module, class and at function level.
- fix issue107: actually perform session scope finalization
- don't check in parametrize if indirect parameters are funcarg names
- add chdir method to monkeypatch funcarg
- fix crash resulting from calling monkeypatch undo a second time
- fix issue115: make --collectonly robust against early failure
  (missing files/directories)
- -qq --collectonly now shows only files and the number of tests in them
- -q --collectonly now shows test ids 
- allow adding of attributes to test reports such that it also works
  with distributed testing (no upgrade of pytest-xdist needed)
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] xdist and thread-safe resource counting

2012-01-20 Thread holger krekel
Hi Eli,

interesting problem.

On Wed, Jan 18, 2012 at 20:55 -0800, Ateljevich, Eli wrote:
 I have a question about managing resources in a threadsafe way across xdist 
 -n.
 
 My group is using py.test as a high-level driver for testing an mpi-based 
 numerical code. Many of our system-level tests wrap a system call to mpirun 
 then postprocess results. I have a decorator for the tests that hints at the 
 number of processors needed (usually something like 1,2,8).
 
 I would like to launch as much as I can at once given the available 
 processors. For instance, if 16 processors are available there is no reason I 
 couldn't be doing a 12 and a 4 processor test. I was thinking of using xdist 
 with some modest number of processors representing the maximum number of 
 concurrent tests. The xdist test processors would launch mpi jobs when enough 
 processors become available to satisfy the np hint for that test. This would 
 be managed by having the tests check out cores and sleep if they aren't 
 available yet.
 
 This design requires a threadsafe method to query, acquire and lock the count 
 of available mpi cores. I could use some sort of lock or semaphore from 
 threading, but I thought it would be good to run this by the xdist 
 cognoscenti and find out if there might be a preferred way of doing this 
 given how xdist itself distributes its work or manages threads.

pytest-xdist itself does not provide or use a method to query the number
of available processors.  Quick background of xdist: Master process starts 
a number of processes which collect tests (see output of py.test --collectonly) 
and the master sees the test ids of all those collections.  It then decides 
the scheduling (Each or Load at the moment, -n5 implies load-balancing) and 
sends test ids to the nodes to execute.  It pre-loads the nodes with test ids
and then waits for completions before sending more test ids to each node.
There is no node-to-node communication for co-ordination.

It might be easiest to not try to extend the xdist-mechanisms
but to implement an independent method which co-ordinates the number of running
MPI tests / used processors via a file or so.  For example, on posix you
can read/write a file with some meta-information and use the
atomic os.rename operation.  Not sure about the exact semantics but
this should be doable and testable without any xdist involvement. 
If you have such a method which helps to restrict the number
of MPI-processes you can then use it from a pytest_runtest_setup which
can read your decorator-attributes/markers and then decide
whether to wait or run the test.  This method also makes you rather independent
from the number of worker processes started with -nNUM.
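
A very rough sketch of that idea (the counter file location, the total of 16
processors and the "procs" attribute set by your decorator are all made up
here; real code would also need to release the claim in a teardown and guard
against races)::

    import os, tempfile, time

    COUNTER = "/tmp/mpi-procs-in-use"   # shared, assumed location
    TOTAL = 16                          # total processors on the box

    def _claim(needed):
        # read the current usage and atomically rename a new counter file
        # over the old one; a real version needs proper locking around this
        used = int(open(COUNTER).read()) if os.path.exists(COUNTER) else 0
        if used + needed > TOTAL:
            return False
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(COUNTER))
        os.write(fd, str(used + needed).encode())
        os.close(fd)
        os.rename(tmp, COUNTER)
        return True

    def pytest_runtest_setup(item):
        needed = getattr(item.function, "procs", 1)   # set by your decorator
        while not _claim(needed):
            time.sleep(1)     # wait until enough processors are free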

HTH,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] xdist and thread-safe resource counting

2012-01-20 Thread holger krekel
On Fri, Jan 20, 2012 at 12:56 -0800, Ateljevich, Eli wrote:
 Thanks, Holger. I appreciate the hint about where to do the testing and 
 waiting in pytest_runtest_setup and I think the atomic file rename idea is an 
 interesting way to set up a signal.
 
 I may not fully understand about xdist. I certainly agree it is efficient use 
 of mpirun that is the crux of doing the job right ... so probably any load 
 balancing offered by xdist is going to be wasted on me. 
 
 The main service I was looking for out of xdist was the ability to run tests 
 concurrently. As I think you realize, if I have a pool of 16 processors and 
 the first four tests collected require 8, 4, 8, 4 processors, I would want 
 this behavior:
 1.  the first test to start immediately
 2.  the second test to start immediately without the first finishing
 3.  the third test to either wait or start in a python sense but sleep 
 before launching mpi
 4.  the fourth test to start immediately
 
 Is vanilla py.test able to do this kind of concurrent testing? Or would I 
 need to tweak it to launch tests in threads according to my criterion for 
 readiness? 

A run with pytest-xdist, notably, py.test -nNUM allows to implement this
behaviour, i think.

 I think we have settled how I would allocate resources, but your idea implies 
 I might have all the test hints in one place. If I have full control of all the
 test launches this might allow me to do some sort of knapsack problem-ish
 kind of reorganization to keep everything fully utilized rather than taking 
 the test in the order they were collected. For instance, if I had 16 
 processors and the first four tests take 12-12-4-4 I could do this in the 
 order (12+4 concurrently) (12+4 concurrently). Do I have this level of 
 control?

I think so yes.  IIRC pytest-xdist distributed the first four tests
such that they each land at different nodes.  So, given the algorithm
i hinted at, and running with py.test -n3 the first sub process would 
start and run on 12 processors.  The second process would see that 
there are 12 used and wait until 12 become available. The 
third process would only need 4 and immediately continue, utilizing
all 16 processors at that time.  When the first one finishes the
second sub process would see that there now are enough and proceed
with its testing.  This is all fully compatible with pytest-xdist
semantics and only needs code at pytest_runtest_setup time i think.

best,
holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] tox-1.3: virtualenv-based test run automizer

2011-12-21 Thread holger krekel
tox 1.3: the virtualenv-based test run automatizer
===

I am happy to announce tox 1.3, containing a few improvements
over 1.2.  TOX automates tedious test activities driven from a
simple ``tox.ini`` file, including:

* creation and management of different virtualenv environments
  with different Python interpreters
* packaging and installing your package into each of them
* running your test tool of choice, be it nose, py.test or unittest2 or
  other tools such as sphinx doc checks
* testing dev packages against each other without needing to upload to PyPI

Docs and examples are at:

http://tox.testrun.org/

Installation:

pip install -U tox

code hosting and issue tracking on bitbucket:

http://bitbucket.org/hpk42/tox

best,
Holger Krekel

1.3
-

- fix: allow to specify wildcard filesystem paths when
  specifying dependencies such that tox searches for
  the highest version

- fix issue21: clear PIP_REQUIRES_VIRTUALENV which avoids
  pip installing to the wrong environment, thanks to bb's streeter

- make the install step honour a testenv's setenv setting
  (thanks Ralf Schmitt)
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] pytest-2.2.0: test marking++, parametrization++ and duration profiling

2011-11-18 Thread holger krekel
py.test 2.2.0: test marking++, parametrization++ and duration profiling
========================================================================

pytest-2.2.0 is a test-suite compatible release of the popular
py.test testing tool.  Plugins might need upgrades. It comes 
with these improvements:

* easier and more powerful parametrization of tests:

  - new @pytest.mark.parametrize decorator to run tests with different arguments
  - new metafunc.parametrize() API for parametrizing arguments independently
  - see examples at http://pytest.org/latest/example/parametrize.html (a small sketch also follows below)
  - NOTE that parametrize() related APIs are still a bit experimental
and might change in future releases.

* improved handling of test markers and refined marking mechanism:

  - -m markexpr option for selecting tests according to their mark
  - a new markers ini-variable for registering test markers for your project
  - the new --strict bails out with an error if using unregistered markers.
  - see examples at http://pytest.org/latest/example/markers.html

* duration profiling: new --durations=N option showing the N slowest test 
  executions or setup/teardown calls. This is most useful if you want to
  find out where your slowest test code is.

* also 2.2.0 performs more eager calling of teardown/finalizers functions 
  resulting in better and more accurate reporting when they fail
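
For a first impression, here is a tiny sketch combining the new features
(illustrative only; the linked example pages have the authoritative versions)::

    # content of test_sketch.py
    import pytest

    @pytest.mark.parametrize(("input", "expected"), [
        ("3+5", 8),
        ("2+4", 6),
    ])
    def test_eval(input, expected):
        assert eval(input) == expected

    @pytest.mark.webtest   # register "webtest" via the markers ini-variable
    def test_send_http():
        pass

Selecting by marker and showing the slowest calls then works like::

    py.test -m webtest
    py.test --durations=10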

Besides there is the usual set of bug fixes along with a cleanup of
pytest's own test suite allowing it to run on a wider range of environments.

For general information, see extensive docs with examples here:

 http://pytest.org/

If you want to install or upgrade pytest you might just type::

pip install -U pytest # or
easy_install -U pytest

Thanks to Ronny Pfannschmidt, David Burns, Jeff Donner, Daniel Nouri,
Alfredo Doza and all who gave feedback or sent bug reports.

best,
holger krekel


notes on incompatibility
------------------------

While test suites should work unchanged you might need to upgrade plugins:

* You need a new version of the pytest-xdist plugin (1.7) for distributing 
  test runs.  

* Other plugins might need an upgrade if they implement
  the ``pytest_runtest_logreport`` hook which now is called unconditionally
  for the setup/teardown fixture phases of a test. You may choose to
  ignore setup/teardown failures by inserting if rep.when != 'call': return
  or something similar. Note that most code probably just works because 
  the hook was already called for failing setup/teardown phases of a test
  so a plugin should have been ready to grok such reports already.
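
For illustration, a plugin that only cares about the actual test call can
guard its hook roughly like this (sketch)::

    def pytest_runtest_logreport(report):
        if report.when != "call":
            return                      # ignore the setup/teardown reports
        if report.failed:
            print("test failed: %s" % report.nodeid)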


Changes between 2.1.3 and 2.2.0
-------------------------------

- fix issue90: introduce eager tearing down of test items so that
  teardown function are called earlier.
- add an all-powerful metafunc.parametrize function which allows to 
  parametrize test function arguments in multiple steps and therefore
  from independent plugins and places.
- add a @pytest.mark.parametrize helper which allows to easily
  call a test function with different argument values
- Add examples to the parametrize example page, including a quick port 
  of Test scenarios and the new parametrize function and decorator.
- introduce registration for pytest.mark.* helpers via ini-files
  or through plugin hooks.  Also introduce a --strict option which 
  will treat unregistered markers as errors
  allowing to avoid typos and maintain a well described set of markers
  for your test suite.  See examples at http://pytest.org/latest/mark.html
  and its links.
- issue50: introduce -m marker option to select tests based on markers
  (this is a stricter and more predictable version of '-k' in that -m
  only matches complete markers and has more obvious rules for and/or
  semantics).
- new feature to help optimizing the speed of your tests: 
  --durations=N option for displaying N slowest test calls 
  and setup/teardown methods.
- fix issue87: --pastebin now works with python3
- fix issue89: --pdb with unexpected exceptions in doctest work more sensibly
- fix and cleanup pytest's own test suite to not leak FDs 
- fix issue83: link to generated funcarg list
- fix issue74: pyarg module names are now checked against imp.find_module false 
positives
- fix compatibility with twisted/trial-11.1.0 use cases
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] tox-1.2: some bug fixes for the virtualenv-based generic test runner

2011-11-10 Thread holger krekel
tox 1.2: the virtualenv-based test run automatizer
==================================================

I am happy to announce tox 1.2, now using and depending on the latest
virtualenv code and containing some bug fixes.  TOX automates tedious
test activities driven from a simple ``tox.ini`` file, including:

* creation and management of different virtualenv environments with
  different Python interpreters
* packaging and installing your package into each of them
* running your test tool of choice, be it nose, py.test or unittest2 or
  other tools such as sphinx doc checks
* testing dev packages against each other without needing to upload to PyPI

It works well on virtually all Python interpreters that support virtualenv.

Docs and examples are at:

http://tox.testrun.org

Installation:

pip install -U tox

code hosting and issue tracking on bitbucket:

http://bitbucket.org/hpk42/tox

best,
Holger Krekel

1.2 compared to 1.1
-------------------

- remove the virtualenv.py that was distributed with tox and depend
  on virtualenv-1.6.4 (possible now since the latter fixes a few bugs
  that the inlining tried to work around)
- fix issue10: work around UnicodeDecodeError when invoking pip (thanks
  Marc Abramowitz)
- fix a problem with parsing {posargs} in tox commands (spotted by goodwill)
- fix the warning check for commands to be installed in the test environment
  (thanks Michael Foord for reporting)
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] Status of py.test's unit tests now ?

2011-11-08 Thread holger krekel
Hi Pere,

i think i fixed (most of) the issues behind the failures on OSX.
could you do a pip install -i http://pypi.testrun.org -U pytest
and rerun the tests and send along the full output if there remain
issues?

thanks,
holger

On Fri, Oct 28, 2011 at 21:08 +, holger krekel wrote:
 On Fri, Oct 28, 2011 at 01:32 +0200, Pere Martir wrote:
  On Thu, Oct 27, 2011 at 2:15 PM, Ronny Pfannschmidt
  ronny.pfannschm...@gmx.de wrote:
   On 10/27/2011 01:54 PM, Pere Martir wrote:
   By the way, where is the CI server please ?
   http://hudson.testrun.org/job/pytest/
  
  It seems to be a problem specific to Mac OS X. Since there is no Mac
  OS X slave, I suppose that the unit tests on this platform have not
  received much attention?
 
 right, is the reason i asked for a build host in my last mail.
 
  It's strange that if I only executed a subset of unit tests with -k,
  they don't fail. For example:
  
python pytest.py -k TestGenerator
  
  But as you can see in the attachment of my previous post, they failed.
  It's also true for many other test suites, any clue ?
 
 
 on my OSX 10.8 all tests pass FWIW.
 
  By the way, I fixed a problem. The failure of
  TestSession.test_parsearg is because on Mac OS X /var is actually a
  symbolic link of /private/var, and many other path under /. Patching
  TmpDir to return realpath fixes the problem. This doesn't fix many
  failures. tox -e py26 still blocks at test_pdb forever, for example.
  
  I feel like that the other failures are due to the other bugs. Is it
  worthwhile spending time looking at the code ? Or it's probably the
  problem of my configuration/environment (not the source code) ?
 
 I don't know.  could you send a new py.test testing log with the
 x.realpath() patch applied?  
 
 Is there a way to log into your OSX machine by chance?
 
 thanks,
 holger
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] pytest-2.1.2: bug fixes and fixes for jython

2011-09-24 Thread holger krekel

pytest-2.1.2 is a minor backward compatible maintenance release of the
popular py.test testing tool.  pytest is commonly used for unit,
functional- and integration testing.  See extensive docs with examples
here:

 http://pytest.org/

Most bug fixes address remaining issues with the perfected assertions
introduced in the 2.1 series - many thanks to the bug reporters and to Benjamin
Peterson for helping to fix them.  pytest should also work better with
Jython-2.5.1 (and Jython trunk, but not Jython-2.5.2).

If you want to install or upgrade pytest, just type one of::

pip install -U pytest # or
easy_install -U pytest

best,
holger krekel / http://merlinux.eu

Changes between 2.1.1 and 2.1.2
-------------------------------

- fix assertion rewriting on files with windows newlines on some Python versions
- refine test discovery by package/module name (--pyargs), thanks Florian Mayer
- fix issue69 / assertion rewriting fixed on some boolean operations
- fix issue68 / packages now work with assertion rewriting
- fix issue66: use different assertion rewriting caches when the -O option is 
passed
- don't try assertion rewriting on Jython, use reinterp

___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] pytest-2.1.1: assertion fixes and improved junitxml output

2011-08-20 Thread holger krekel

pytest-2.1.1 is a backward compatible maintenance release of the
popular py.test testing tool.  See extensive docs with examples here:

 http://pytest.org/

Most bug fixes address remaining issues with the perfected assertions
introduced with 2.1.0 - many thanks to the bug reporters and to Benjamin
Peterson for helping to fix them.  Also, junitxml output now produces
system-out/err tags which lead to better displays of tracebacks with Jenkins.

Also a quick note to package maintainers and others interested: there now
is a pytest man page which can be generated with make man in doc/.

If you want to install or upgrade pytest, just type one of::

pip install -U pytest # or
easy_install -U pytest

best,
holger krekel / http://merlinux.eu

Changes between 2.1.0 and 2.1.1
-------------------------------

- fix issue64 / pytest.set_trace now works within pytest_generate_tests hooks
- fix issue60 / fix error conditions involving the creation of __pycache__
- fix issue63 / assertion rewriting on inserts involving strings containing '%'
- fix assertion rewriting on calls with a ** arg
- don't cache rewritten modules if bytecode generation is disabled
- fix assertion rewriting in read-only directories
- fix issue59: provide system-out/err tags for junitxml output
- fix issue61: assertion rewriting on boolean operations with 3 or more operands
- you can now build a man page with cd doc ; make man

___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] pytest-2.1.0: perfected assertions and bug fixes

2011-07-09 Thread holger krekel

Welcome to the release of pytest-2.1, a mature testing tool for Python,
supporting CPython 2.4-3.2, Jython and latest PyPy interpreters.  See
the improved extensive docs (now also as PDF!) with tested examples here:

 http://pytest.org/

The single biggest news about this release are **perfected assertions**
courtesy of Benjamin Peterson.  You can now safely use ``assert``
statements in test modules without having to worry about side effects
or python optimization (-OO) options.  This is achieved by rewriting
assert statements in test modules upon import, using a PEP302 hook.
See http://pytest.org/assert.html#advanced-assertion-introspection for
detailed information.  The work has been partly sponsored by my company,
merlinux GmbH.
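
For example, a plain assert over a comparison::

    def test_answer():
        result = 3 + 1
        assert result == 5

now fails with a report that shows the values on both sides of the
comparison, with no need for special helper methods.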
  
For further details on bug fixes and smaller enhancements see below.

If you want to install or upgrade pytest, just type one of::

pip install -U pytest # or
easy_install -U pytest

best,
holger krekel / http://merlinux.eu

Changes between 2.0.3 and 2.1.0
-------------------------------

- fix issue53: call nose-style setup functions with correct ordering
- fix issue58 and issue59: new assertion code fixes
- merge Benjamin's assertionrewrite branch: now assertions
  for test modules on python 2.6 and above are done by rewriting
  the AST and saving the pyc file before the test module is imported.
  see doc/assert.txt for more info.
- fix issue43: improve doctests with better traceback reporting on
  unexpected exceptions
- fix issue47: timing output in junitxml for test cases is now correct
- fix issue48: typo in MarkInfo repr leading to exception
- fix issue49: avoid confusing error when initialization partially fails
- fix issue44: env/username expansion for junitxml file path
- show releaselevel information in test runs for pypy
- reworked doc pages for better navigation and PDF generation
- report KeyboardInterrupt even if interrupted during session startup
- fix issue 35 - provide PDF doc version and download link from index page

___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] tox-1.1: bug fixes and improved workflow

2011-07-09 Thread holger krekel
Hey all,

i just released tox-1.1, the virtualenv/test/CI automation tool.
See here for general information and install info:

http://codespeak.net/~hpk/tox

or

http://tox.readthedocs.org
(which is missing some navigation links at time of sending email)

The release incorporates a number of bug fixes and an enhanced work
flow: repeatedly calling tox without increasing version numbers now
works (by calling pip install -U --no-deps).

With this release i consider tox pretty stable and fit for general use.

best & thanks to all contributors,
holger krekel

1.1
---

- fix issue5 - don't require argparse for python versions that have it
- fix issue6 - recreate virtualenv if installing dependencies failed
- fix issue3 - fix example on frontpage
- fix issue2 - warn if a test command does not come from the test
  environment
- fixed/enhanced: except for initial install always call -U
  --no-deps for installing the sdist package to ensure that a package
  gets upgraded even if its version number did not change. (reported on
  TIP mailing list and IRC)
- inline virtualenv.py (1.6.1) script to avoid a number of issues, 
  particularly failing to install python3 environments from a python2 
  virtualenv installation.
- rework and enhance docs for display on readthedocs.org

___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] conftest setting option_tbstyle doesn't work?

2011-07-06 Thread holger krekel
Hi Brianna,

On Wed, Jul 06, 2011 at 17:16 +1000, Brianna Laugher wrote:
 Hi,
 
 I'm not sure if I'm using it wrong, but it seems like setting
 option_tbstyle to a value in conftest.py is not having the desired
 effect.
 In the project I use at work we had
 option_tbstyle = 'short'
 and it was working fine, but lately I noticed it doesn't work and
 tracebacks are long. Manually doing 'py.test --tb=short' still works.

Hum, can you try adding/using an .ini file with an addopts value
like --tb=short as described here:

http://doc.pytest.org/en/latest/customize.html?highlight=addopts#adding-default-options

this should work with pytest-2.0 and above.  In fact, the tbstyle is no longer
configurable from conftest.py files as of version 2.0.
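
for example, a minimal ini file next to your tests (pytest.ini, or a [pytest]
section in setup.cfg/tox.ini should work the same way)::

    [pytest]
    addopts = --tb=short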

HTH,
holger


 At work we are using 1.3.4 (planning to upgrade soon!) but I just did
 a little test (see below) and it seems to be the case with 2.0.3, as
 well. I put  pytest_runtest_setup in the conftest to validate that it
 was being used at all. (This example is pretty trivial but with our
 system tests, the long tracebacks take many pages to scroll through.)
 
 So am I specifying this option wrong, or what else might I be doing wrong?
 
 Also, was the  --help-config option removed? v2.0.3 doesn't seem to
 know about it.
 
 thanks,
 Brianna
 
 
 (testpytest)blaugher@gfedev21 ~/software/testpytest]$ls
 conftest.py  test_pytest.py
 (testpytest)blaugher@gfedev21 ~/software/testpytest]$cat conftest.py
 
 option_tbstyle = 'line'
 
 
 def pytest_runtest_setup(item):
      print ("setting up", item)
 (testpytest)blaugher@gfedev21 ~/software/testpytest]$cat test_pytest.py
 
 def func(x):
 return x + 1
 
 def test_answer():
 result = func(3)
 assert result == 5
 
 (testpytest)blaugher@gfedev21 ~/software/testpytest]$py.test
  test session starts
 =
 platform linux2 -- Python 2.7.0 -- pytest-2.0.3
 collected 1 items
 
 test_pytest.py F
 
 == FAILURES
 ==
  test_answer
 _
 
 def test_answer():
 result = func(3)
assert result == 5
 E   assert 4 == 5
 
 test_pytest.py:7: AssertionError
 -- Captured stdout
 ---
 ('setting up', <Function 'test_answer'>)
 == 1 failed in 0.04 seconds
 ==
 
 
 (testpytest)blaugher@gfedev21 ~/software/testpytest]$py.test --tb=short
  test session starts
 =
 platform linux2 -- Python 2.7.0 -- pytest-2.0.3
 collected 1 items
 
 test_pytest.py F
 
 == FAILURES
 ==
  test_answer
 _
 test_pytest.py:7: in test_answer
assert result == 5
 E   assert 4 == 5
 -- Captured stdout
 ---
 ('setting up', <Function 'test_answer'>)
 == 1 failed in 0.04 seconds
 ==
 (testpytest)blaugher@gfedev21 ~/software/testpytest]$
 
 
 
 
 
 
 -- 
 They've just been waiting in a mountain for the right moment:
 http://modernthings.org/
 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] Decorators and funcargs in py.test

2011-06-01 Thread holger krekel
Hi Vyacheslav, hi Ronny,

On Mon, May 30, 2011 at 22:38 +0200, Ronny Pfannschmidt wrote:
 On Mon, 2011-05-30 at 16:23 -0400, Vyacheslav Rafalskiy wrote:
  No problem. Here is my (real life) example.
  
  My functional test functions may or may not return for different
  reasons (like a faulty web application or middleware). I want to
  declare a fail if it takes more than so many seconds to complete. So I
  write a decorator
  run_with_timeout(), which will start the function in a new thread and
  abandon it after timeout.
 
 This can easily be solved by combining something like
  pytest.mark('timeout') and an override to pytest_pyfunc_call using it

I agree.

Vyacheslav, if you can't make sense of the pytest.mark() and
provide-a-hook suggestion, one of us will certainly be happy to
provide a more complete example and add it to the docs.
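
Roughly along these lines, as an untested sketch (the marker-access details
may differ between pytest versions, and the traceback of a failing wrapped
test is dropped here for brevity)::

    # content of conftest.py
    import sys
    import threading

    import pytest

    def pytest_pyfunc_call(pyfuncitem):
        # only intervene for tests marked with @pytest.mark.timeout(seconds)
        marker = pyfuncitem.keywords.get("timeout")
        if marker is None:
            return None                  # let pytest call the test as usual
        seconds = marker.args[0]
        failures = []

        def run():
            try:
                pyfuncitem.obj(**pyfuncitem.funcargs)
            except Exception:
                failures.append(sys.exc_info()[1])

        t = threading.Thread(target=run)
        t.daemon = True                  # don't block interpreter shutdown
        t.start()
        t.join(seconds)
        if t.is_alive():
            pytest.fail("test did not finish within %s seconds" % seconds)
        if failures:
            raise failures[0]            # re-raise in the main thread
        return True                      # signal that we handled the call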

 i suppose this could also take a look at extending the --boxed mode
 (which forks for each test and uses subprocesses but doesn’t handle
  timeouts atm).

this is part of the pytest-xdist plugin, however.  Question is,
if we shouldn't eventually grow timeout-support in core pytest
through alarm or so.

best,
holger

  
   As I stated in the OP, it is not that I cannot do it. I can and I do. My
  point is that it makes sense to allow decorators and should not be
  very difficult (see the example in OP).
 
 personally i am opposed to creating new data conventions for problems
 that can be solved with plain marks + a hook
 
 usually there are 2 reasons people use to decorate tests
 
 my opinions for implementing those are 
 a) add arguments + their cleanups - funcargs please, they are made for
 that
 b) use more sophisticated call's - hooks please, maybe add a bug ticket
 for empowering one to return a exception-info, so stuff like
 thread-wrappers can pass that more nicely for failures
 
 *i* really think it is wrong to decorate test functions that way and
 expect stuff to work
 
 there are already plenty of mechanisms to change the behavior of pytest
 test function calling in the desired way, none of those require hacks to
 pass around argspecs
 
 -- Ronny
  
  Thanks,
  Vyacheslav
  
  On Mon, May 30, 2011 at 3:55 PM, Ronny Pfannschmidt
  ronny.pfannschm...@gmx.de wrote:
   Hi,
  
   can you try to explain the usecase those decorators are fulfilling,
  
   there may be a better integrated way using pytest.mark + setup/teardown
   hooks
  
   -- Ronny
  
   On Mon, 2011-05-30 at 15:17 -0400, Vyacheslav Rafalskiy wrote:
   Hi Holger,
  
   I am trying to make decorators work with test functions, which depend
   on funcargs. As it stands, they don't.
   Decorated functions lose funcargs. A workaround would be to decorate
   an internal function like this:
  
   def test_it(funcarg_it):
   @decorate_it
   def _test_it():
   # test it
  
   _test_it()
  
   This works, but it is not nice. I'd rather wrote a decorator like
  
   def decorate_it(f):
   def _wrap_it(*args, **kwargs):
   # wrap f() here
  
   _wrap_it._varnames = _pytest.core.varnames(f)
   return _wrap_it
  
   and apply it straight to the test function.
  
   After examining the source code I even expected it to just work
   (magically of course) but it didn't.
   Do you think it is worthwhile? If so I can enter a feature request.
  
   Thanks,
   Vyacheslav
   ___
   py-dev mailing list
   py-dev@codespeak.net
   http://codespeak.net/mailman/listinfo/py-dev
  
  
 



 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev

___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] migration from 1.3 to 2.0

2011-05-29 Thread holger krekel
Hey Vyacheslav,

On Wed, May 04, 2011 at 10:53 -0400, Vyacheslav Rafalskiy wrote:
 On Tue, May 3, 2011 at 5:48 AM, holger krekel hol...@merlinux.eu wrote:
  On Mon, May 02, 2011 at 17:40 -0400, Vyacheslav Rafalskiy wrote:
  On Sat, Apr 30, 2011 at 10:22 AM, holger krekel hol...@merlinux.eu wrote:
   On Thu, Apr 07, 2011 at 12:29 -0400, Vyacheslav Rafalskiy wrote:
 
  
   (sidenote: configure and even sessionstart hooks are both a bit
not 100% right because they happen even on the master side of a 
distributed
test run and the master side does not collect or run tests at all)
  
   I see. Perhaps something like setup_package() in the top-level 
   __init__.py
   could be a solution?
  
   I guess you mean an __init__.py file of a test directory.
   With a layout of test dirs within an application this might mean
   one has to put this setup into the main package __init__.py
   and mixing test code and application code is often not a good idea.
 
  Yes, exactly. In my case of functional testing I don't even have
  application code here.
  When I start the tests I tell the runner where in the network the
  system under test is.
 
  
   So i'd rather put it into a conftest.py file as a normal hook.
   Maybe pytest_pyfunc_setup(request) would be good where request
   is the same object as for the funcarg factories.
  
   You could then write:
  
      # content of conftest.py
      def pytest_pyfunc_setup(request):
          val = request.cached_setup(setup=makeval, scope=session)
          # use val for some global setting of the package
  
   Alternatively we could see to call something like:
  
      def pytest_setup_testloop(config):
          val = makeval()
          # use val for some global setting of the package
  
      def pytest_teardown_testloop(config):
          ...
  
   which would be called once for a test process.
 
  The reason why I suggested setup_package() is that you already have
  setup_function, setup_method, setup_class and setup_module so
  the former would just complete the line-up. And the natural place
  for it would be __init__.py of that package.
 
  On the other hand, you can put conftest.py in every folder, which
  can do precisely the same thing as well as many others. The
  question is which way is more intuitive and results in cleaner code.
  The answer is perhaps It depends.
 
  I like setup_module(module) because it lets me dump the configuration
  straight into the namespace where I use it and setup_package(package)
  could do the same.
 
  good line of reasoning.  It's mostly my intuition making me hesitant
  to add setup_package like you suggest.  And i wonder what it is about :)
  Maybe it's that the root namespace of a test directory is often called
  testing or tests  (the test one is taken by Python stdlib already).
  And therefore you would end up having to do import testing and
  then use global state with something like testing.STATE.
  But i guess this doesn't look so bad to you, does it? :)
  (The plugin/conftest system is designed such that it hardly
  requires any imports to manage test state.)
 
  Any more opinions on setup_package(package)? If others find it useful
  as well, i will consider introducing it with pytest-2.1.
 
 I guess I will have to withdraw the idea. Having to explicitly import
 the test package does not look nice at all.
 conftest.py rules!
 
 As to the two alternatives above I'd rather use
 pytest_setup_testloop(config) with direct access to config.

I am now pondering following your original intention and introducing a
setup_directory to be put into conftest.py files.  You could then
access the config object via pytest.config. Would that make sense
to you as well?

best,
holger

 Regards,
 Vyacheslav
 
 
  best,
  holger
 
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] setup_directory / xUnit extension? Re: migration from 1.3 to 2.0

2011-05-29 Thread holger krekel
Hey again,

On Sun, May 29, 2011 at 07:43 +, holger krekel wrote:
   good line of reasoning.  It's mostly my intuition making me hesitant
   to add setup_package like you suggest.  And i wonder what it is about :)
   Maybe it's that the root namespace of a test directory is often called
   testing or tests  (the test one is taken by Python stdlib already).
   And therefore you would end up having to do import testing and
   then use global state with something like testing.STATE.
   But i guess this doesn't look so bad to you, does it? :)
   (The plugin/conftest system is designed such that it hardly
   requires any imports to manage test state.)
  
   Any more opinions on setup_package(package)? If others find it useful
   as well, i will consider introducing it with pytest-2.1.
  
  I guess I will have to withdraw the idea. Having to explicitly import
  the test package does not look nice at all.
  conftest.py rules!
  
  As to the two alternatives above I'd rather use
  pytest_setup_testloop(config) with direct access to config.
 
 I am now pondering following your original intention and introducing a
 setup_directory to be put into conftest.py files.  You could then
 access the config object via pytest.config. Would that make sense
 to you as well?

To elaborate a wee bit:

* setup_directory would be guaranteed to be called for any test
  (both python, doctest or other test) within the directory hierarchy
  of the conftest.py dir and before any setup_module/class etc. is called.

* teardown_directory would be guaranteed to be called when a test
  is run that is not in the directory hierarchy.

* if necessary one can have setup_directory push test-related 
  state to some global module (from which tests could import for example)

i am not yet sure about the idea but i guess it would be somewhat natural
and complete the setup_*/teardown_* xUnit style fixture methods.

holger

  
 best,
 holger
 
  Regards,
  Vyacheslav
  
  
   best,
   holger
  
  
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


[py-dev] tox 1.0 - rapid multi-python test automation

2011-05-28 Thread holger krekel
tox 1.0: the rapid multi-python test automation
===============================================

I am happy to announce tox 1.0, a stabilization and maintenance release
with some small improvements.  tox automates tedious test activities
driven from a simple ``tox.ini`` file, including:

* creation and management of different virtualenv environments with
  different Python interpreters
* packaging and installing your package into each of them
* running your test tool of choice, be it nose, py.test or unittest2 or
  other tools such as sphinx doc checks
* testing dev packages against each other without needing to upload to PyPI

Docs and examples are now hosted at:

http://tox.readthedocs.org

Installation or upgrade with:

pip install -U tox

Note that code hosting and issue tracking has moved from Google to Bitbucket:

http://bitbucket.org/hpk42/tox

The 1.0 release includes contributions and is based on feedback and
work from Chris Rose, Ronny Pfannschmidt, Jannis Leidel, Jakob Kaplan-Moss,
Sridhar Ratnakumar, Carl Meyer and others.  Many thanks!

best,
Holger Krekel

CHANGES
-------

- fix issue24: introduce a way to set environment variables for
  test commands (thanks Chris Rose)
- fix issue22: require virtualenv-1.6.1, obsoleting virtualenv5 (thanks Jannis 
Leidel)
  and making things work with pypy-1.5 and python3 more seamlessly
- toxbootstrap.py (used by jenkins build slaves) now follows the latest release 
of virtualenv
- fix issue20: document format of URLs for specifying dependencies
- fix issue19: replace Hudson with Jenkins everywhere following the renaming
  of the project.  NOTE: if you used the special [tox:hudson]
  section it will now need to be named [tox:jenkins].
- fix issue 23 / apply some ReST fixes
- change the positional argument specifier to use {posargs:} syntax and
  fix issues #15 and #10 by refining the argument parsing method (Chris Rose)
- remove use of inipkg lazy importing logic -
  the namespace/imports are anyway very small with tox.
- fix an fspath-related assertion to work with debian installs which use
  symlinks
- show path of the underlying virtualenv invocation and bootstrap
  virtualenv.py into a working subdir
- added a CONTRIBUTORS file
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] What is the recommended way to run test with a coverage report?

2011-05-18 Thread holger krekel
Hey Baptiste,

On Wed, May 18, 2011 at 17:42 +0200, Baptiste Lepilleur wrote:
 The pytest documentation page indicates that it is supported, but provides
  no pointer on how to do this...

 Doing a search seems to reveal multiple plug-ins to do that. What is the
 recommended one? I'm working on Windows XP / Python 2.6 & 3.2.

sorry about that.  The plugin is named pytest-cov, see here

http://pypi.python.org/pypi/pytest-cov
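
typical usage is then something like (just a sketch, with mypkg standing in
for your own package; see the plugin docs for the exact options)::

    pip install pytest-cov
    py.test --cov=mypkg --cov-report=term-missing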

If it doesn't work for you i am sure Meme (also here on the list)
can answer questions or look into issues.

 By the way, is there a centralized list of useful plug-ins for pytest ?

not really.  However, i recommend to install pip and then type:

pip search pytest

to get a good list.

best,
holger



 
 Baptiste.

 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev

___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] How to run a given test by name?

2011-05-01 Thread holger krekel
On Sun, May 01, 2011 at 12:08 +0200, Baptiste Lepilleur wrote:
 typically when troubleshooting multiple test failures, I want to be able to
 run a single parametrized test case.
 
 How can you tell py.test to run only tests matching a specific name.
 
 For example, I'd like to be able to run:
 
 py.test src --filter-by-name test_mandatory_property[ZText]
 
 This would run all tests with a name
 matching test_mandatory_property[ZText].
 
 Is there a way to do that?

two possibilities.  You can do a run with py.test -rf which will report test IDs
for all failures.  You can then pass one or more of those IDs to a py.test
invocation.  Secondly you can use the keyword option: try something like
-k ZText, see the option help.
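
for example, with a recent pytest (paths and names made up)::

    py.test -rf src                  # reports the IDs of all failing tests
    py.test "src/test_props.py::test_mandatory_property[ZText]"
    py.test -k ZText src             # keyword-based selection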

holger
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] [ANN] plugin: pytest-incremental

2011-04-29 Thread holger krekel
Hi Eduardo,

i like the idea.  A few notes:

* it's not compatible with pytest-xdist, is it?

* i got BSDDB database corruption (i CTRL-Ced the run before)

* can you add an example of a project layout and what one would
  call wrt to watch_pkg?

I guess things don't work for me on pytest itself because
it has a plugin-based dynamic namespace construction/imports
so your AST scanning method does not really see the deps.
A different method would be to try to record imports
during the import and running of a test.  Myself, i also
experimented with specifying dependencies manually at
some point which also solves the issue when invoking 
shell commands provided by a project - i guess those
would not naturally be found by your scanner.
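
Such runtime recording could be as crude as diffing sys.modules around each
test, e.g. in a conftest.py (a sketch, not what the plugin currently does)::

    import sys

    DEPS = {}    # nodeid -> set of modules imported while the test ran

    def pytest_runtest_setup(item):
        item._modules_before = set(sys.modules)

    def pytest_runtest_teardown(item):
        # everything newly imported while the test ran is a candidate dependency
        DEPS[item.nodeid] = set(sys.modules) - item._modules_before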

best,
holger

On Mon, Apr 25, 2011 at 23:40 +0800, Eduardo Schettino wrote:
 Hi all,
 
 I have just pushed a new pytest plugin to pypi
 (http://pypi.python.org/pypi/pytest-incremental).
 
 The idea is to execute your tests faster by executing not all of them
 but only the required ones.
 
 This is very new (alpha), feedback is welcome :)
 
 Regards,
   Eduardo
 
 +++
 
 pytest-incremental
 
 
 an incremental test runner (pytest plugin)
 
 
 What is an incremental test runner ?
 ====================================
 
 When talking about build-tools it is common to refer to the terms:
 
  * initial (full) build - all files are compiled
  * incremental build (or partial rebuild) - just changed files are compiled
  * no-op build - no files are compiled (none changed since last execution)
 
 So an incremental test runner will only re-execute tests that were affected
 by changes in the source code since last test execution.
 
 
 How it works ?
 
 
 `pytest-incremental` is a `pytest http://pytest.org/`_ plugin. So if
 you can run your test suite with pytest you can use
 `pytest-incremental`.
 
 The plugin will analyse your python source files and through its
 imports define the dependencies of the modules. `doit
 http://python-doit.sourceforge.net`_ is used to keep track of the
 dependencies and save results. The plugin will modify how pytest
 collects your tests. pytest does the rest of the job of actually running
 the tests and reporting the results.
 
 
 Install
 =======
 
 pytest-incremental is tested on python 2.6, 2.7.
 
 ``pip install pytest-incremental``
 
 ``python setup.py install``
 
 local installation
 
 
 You can also just grab the plugin `module
 https://bitbucket.org/schettino72/pytest-incremental/src/tip/pytest_incremental.py`_
 file and put in your project path. Then enable it (check `pytest docs
 http://pytest.org/plugins.html#requiring-loading-plugins-in-a-test-module-or-conftest-file`_).
 
 
 Usage
 =====
 
 Just pass the parameter ``--incremental`` when calling from the command line::
 
   $ py.test --incremental
 
 
 You can also enable it by default adding the following line to your
 ``pytest.ini``::
 
   [pytest]
   addopts = --incremental
 
 
 watched packages
 ----------------
 
 By default all modules collected by pytest will be used as dependencies
 if imported. In order to limit or extend the watched folders you must
 use the parameter ``--watch-pkg``
 
 
 Limitations
 ===========
 
 ``pytest-incremental`` looks for imports recursively to find
 dependencies (using AST). But given the very dynamic nature of python
 there are still some cases where a module can be affected by another
 module in ways that are not detected.
 
  * `from package import *` modules imported from __all__ in a package
 are not counted as a dependency
  * modules imported not using the *import* statement
  * modules not explicitly imported but used at runtime (i.e.
 conftest.py when running your tests with pytest)
  * monkey-patching. (i.e. A imports X.  B monkey-patches X. In this
 case A might depend on B)
  * others ?
 
 
 Project Details
 ===============
 
  - Project code + issue track on `bitbucket
 https://bitbucket.org/schettino72/pytest-incremental`_
  - `Discussion group http://groups.google.co.in/group/python-doit`_
 ___
 py-dev mailing list
 py-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/py-dev
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] [ANN] plugin: pytest-incremental

2011-04-29 Thread holger krekel
On Fri, Apr 29, 2011 at 21:54 +0800, Eduardo Schettino wrote:
 On Fri, Apr 29, 2011 at 8:00 PM, holger krekel hol...@merlinux.eu wrote:
  * it's not compatible with pytest-xdist, is it?
 I actually had never tried pytest-xdist... is there anything that I
 could do to make them compatible?

Many people use it for distributing tests to multiple CPUs (or hosts).
If you just consider the multi-CPU case the main issue is to make
sure the slave processes don't step onto each other when writing
or determining your state information.

  * i got BSDDB database corruption (i CTRL-Ced the run before)
 I tried hitting CTRL-C at several points and never got a corruption.
 At which point you hit Ctrl-C? before the test execution starts?
 Although I got a bug that it does not detect that not all tests were
 executed and mark them as successful.

Not sure i can help with reproducing it.
  * can you add an example of a project layout
 The plugin is supposed to work with any project layout...

ok ...

  and what one would call wrt to watch_pkg?
 By default it will look for changes in all python modules that pass
 through py.test collection. This way doesn't work well when you try to
 run tests from a single file like:
 $ py.test  tests/test_foo.py
 
 If you try to use the plugin like this it will give an error message
 saying that you must specify watch_pkg. lets say you have the folders:
  /tests
  /my_lib
 
 you should call
 $ py.test --incremental --watch-pkg my_lib tests/test_foo.py
 (no need to pass the package of the test file itself)
 
 It can also be used in case you want to watch for changes in modules
 that are in another project. for example if you are testing pytest and
 want to check for changes in dependencies from your py package.
 $ py.test --incremental --watch-pkg my_lib --watch-pkg ../py-trunk/py

ok, now it's clear that one needs to specify file system paths.
Throughout the python world --XYZ-pkg might mean a module
import path like os.path or a filesystem path.  Maybe good
to mention this in the help string for the option - i'd probably
rather call it --watch-path to disambiguate.

  I guess things don't work for me on pytest itself because
  it has a plugin-based dynamic namespace construction/imports
  so your AST scanning method does not really see the deps.
  A different method would be to try to record imports
  during the import and running of a test.  Myself, i also
  experimented with specifying dependencies manually at
  some point which also solves the issue when invoking
  shell commands provided a project - i guess those
  would not naturally be found by your scanner.
 
 Yes. There is also the problem of dependencies on text files (or any
 other non-python files).
 I think dependencies should really be defined by the user, this AST
 scanner should be just one way of doing it that works out of the box
 for most projects.

Right.  Do you plan to implement a manual way to specify deps
for your plugin?

sidenote: you may want to announce the next release of the plugin
also to the TIP (testing in python) list - 
http://lists.idyll.org/listinfo/testing-in-python -
a number of people are following there rather than here.

best,
holger

 Regards,
   Eduardo
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


Re: [py-dev] Creating unicode strings in pytest testsuit

2011-04-17 Thread holger krekel
Hi Floris,

thanks, looks good.  testrun.org is not too reliable these days, indeed.
(I'd like to have a much simpler and more robust service than
Hudson/Jenkins some day).

best,
holger

On Sat, Apr 16, 2011 at 00:51 +0100, Floris Bruynooghe wrote:
 Hello
 
 On 13 April 2011 12:53, holger krekel hol...@merlinux.eu wrote:
  Btw, i'd like to do a py/pytest release around the coming weekend -
  would be cool to have your fix in, even if the test isn't written
  in the optimal way yet.
 
 I've finally managed to stop confusing myself and seem to have
 succeeded in a more comprehensive fix.  Though I did take some
  shortcuts in the tests as you suggested, there were some more issues
 with testing unicode characters so the tests just skip things outside
 of the ascii range.
 
 testrun.org seems happy, though two of the builds (pypy, py31) seem to
 be hanging on unrelated issues.
 
 Regards
 Floris
 
 -- 
 Debian GNU/Linux -- The Power of Freedom
 www.debian.org | www.gnu.org | www.kernel.org
 
___
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


  1   2   3   >