Re: Unittest

2016-07-25 Thread Terry Reedy

On 7/25/2016 12:45 PM, Joaquin Alzola wrote:

Hi Guys

I have a question related to unittest.

I suppose software that is going to go live will not have any trace of
the unittest module in its code.


In order to test idlelib, I had to add a _utest=False (unittest = False) 
parameter to some functions.  They are there when you run IDLE.


I like to put an

if __name__ == '__main__':

block that runs the module's tests at the bottom 
of non-script files.  Some people don't like this, but it makes running 
the tests trivial while editing a file -- whether to make a test pass or 
to avoid regressions when making 'neutral' changes.
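
A minimal sketch of that pattern (the module and test names here are 
invented for illustration):

    # mymodule.py
    def add(a, b):
        return a + b

    if __name__ == '__main__':
        import unittest

        class AddTest(unittest.TestCase):
            def test_add(self):
                self.assertEqual(add(2, 3), 5)

        unittest.main()

Running "python mymodule.py" then runs the module's tests directly.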



So is the right way to do it to put all unit tests in a preproduction
environment and then remove all lines related to unittest once the SW
is released into production?


How would you know that you have not introduced bugs when you change code 
after testing?


When you install Python on Windows, installing the test/ directory is a 
user option.


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Unittest

2016-07-25 Thread Ben Finney
Joaquin Alzola  writes:

> I suppose software that is going to go live will not have any trace of
> the unittest module in its code.

Many packages are deployed with their unit test suite. The files don't
occupy much space, don't interfere with the running of the program, and
can be helpful to run the tests in the deployed environment.

> So is the right way to do it to put all unit tests in a preproduction
> environment and then remove all lines related to unittest once the SW
> is released into production?

I would advise not to bother. Prepare the release of the entire source
needed to build the distribution, and don't worry about somehow
excluding the test suite.

> This email is confidential and may be subject to privilege. If you are not 
> the intended recipient, please do not copy or disclose its content but 
> contact the sender immediately upon receipt.

Please do not use an email system which appends these obnoxious messages
in a public forum.

Either convince the people who impose that false disclaimer onto your
message to stop doing that; or, stop using that system for writing to a
public forum.

-- 
 \   “Are you pondering what I'm pondering?” “Umm, I think so, Don |
  `\  Cerebro, but, umm, why would Sophia Loren do a musical?” |
_o__)   —_Pinky and The Brain_ |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unittest

2016-07-25 Thread Brendan Abel
Generally, all your unittests will be inside a "tests" directory that lives
outside your package directory.  That directory will be excluded when you
build or install your project using your setup.py script.  Take a look at
some popular 3rd party python packages to see how they structure their
projects and set up their setup.py.
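
For instance, a setuptools-based setup.py might keep the tests out of the
built distribution like this (the package name is a placeholder):

    from setuptools import setup, find_packages

    setup(
        name='mypackage',
        version='0.1',
        packages=find_packages(exclude=['tests', 'tests.*']),
    )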

On Mon, Jul 25, 2016 at 9:45 AM, Joaquin Alzola 
wrote:

> Hi Guys
>
> I have a question related to unittest.
>
> I suppose software that is going to go live will not have any trace of the
> unittest module in its code.
>
> So is the right way to do it to put all unit tests in a preproduction
> environment and then remove all lines related to unittest once the SW is
> released into production?
>
> What is the best way of working with unittest?
>
> BR
>
> Joaquin
>
> This email is confidential and may be subject to privilege. If you are not
> the intended recipient, please do not copy or disclose its content but
> contact the sender immediately upon receipt.
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-04-30 Thread Ethan Furman

On 03/11/2014 01:58 PM, Ethan Furman wrote:


So I finally got enough data and enough of an understanding to write some unit 
tests for my code.



The weird behavior I'm getting:

   - when a test fails, I get the E or F, but no summary at the end
 (if the failure occurs in setUpClass before my tested routines
 are actually called, I get the summary; if I run a test method
 individually I get the summary)

   - I have two classes, but only one is being exercised

   - occasionally, one of my gvim windows is unceremoniously killed
(survived only by its swap file)

I'm running the tests under sudo as the routines expect to be run that way.

Anybody have any ideas?


For posterity's sake:

I added a .close() method to the class being tested which destroys its big data structures; then I added a tearDownClass 
method to the unittest.  That seems to have done the trick with getting the tests to /all/ run, and my apps don't 
suddenly disappear.  :)
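
Roughly, the shape of that fix (names invented for illustration; the real
class is the PaymentProcessor discussed earlier in the thread):

    import unittest

    class Test_wfbrp(unittest.TestCase):

        @classmethod
        def setUpClass(cls):
            # hypothetical factory that builds the object holding big data
            cls.pp = make_payment_processor()

        @classmethod
        def tearDownClass(cls):
            # close() drops the large data structures so they can be collected
            cls.pp.close()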


--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Terry Reedy

On 3/12/2014 11:32 AM, Ethan Furman wrote:


I strongly suspect it's memory.  When I originally wrote the code I
tried to include six months worth of EoM data, but had to back it down
to three as my process kept mysteriously dying at four or more months.
There must be waaay too much stuff being kept alive by the stack
traces of the failed tests.


There is an issue or two about unittest not releasing memory. Also, 
modules are not cleared from sys.modules, so anything accessible from 
global scope is kept around.


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Ethan Furman

On 03/12/2014 04:38 PM, Steven D'Aprano wrote:


[snip lots of good advice for unit testing]


I was just removing the Personally Identifiable Information.  Each test is pulling a payment from a batch of payments, 
so the first couple asserts are simply making sure I have the payment I think I have, then I run the routine that is 
supposed to match that payment with a bunch of invoices, and then I test to make sure I got back the invoices that I 
have manually verified are the correct ones to be returned.


There are many different tests because there are many different paths through the code, depending on exactly which 
combination of insanities the bank, the customer, and the company choose to inflict at that moment.  ;)


--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Ethan Furman

On 03/12/2014 04:47 PM, Steven D'Aprano wrote:


top -Mm -d 0.5


Cool, thanks!

--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Steven D'Aprano
On Wed, 12 Mar 2014 08:32:29 -0700, Ethan Furman wrote:

>> Some systems have an oom (Out Of Memory) process killer, which nukes
>> (semi-random) process when the system exhausts memory.  Is it possible
>> this is happening?  If so, you should see some log message in one of
>> your system logs.
> 
> That would explain why my editor windows were being killed.


Try opening a second console tab and running top in it. It will show the 
amount of memory being used. Then run the tests in the first, jump back 
to top, and watch to see if memory use goes through the roof:

top -Mm -d 0.5

will sort by memory use, display memory in more sensible human-readable 
units instead of bytes, and update the display every 0.5 second. You can 
then hit the "i" key to toggle display of idle processes and only show 
those that are actually doing something (which presumably will include 
Python running the tests).

This at least will allow you to see whether or not memory is the concern.




-- 
Steven D'Aprano
http://import-that.dreamwidth.org/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Steven D'Aprano
On Tue, 11 Mar 2014 13:58:17 -0700, Ethan Furman wrote:

> class Test_wfbrp_20140225(TestCase):
>
>     @classmethod
>     def setUpClass(cls):
>         cls.pp = wfbrp.PaymentProcessor(
>             '.../lockbox_file',
>             '.../aging_file',
>             [
>                 Path('month_end_1'),
>                 Path('month_end_2'),
>                 Path('month_end_3'),
>             ],
>         )

This has nothing to do with your actual problem, which appears to be the 
Linux(?) OOM killer reaping your applications, just some general 
observations on your test.


>  def test_xxx_1(self):

Having trouble thinking up descriptive names for the test? That's a sign 
that the test might be doing too much. Each test should check one self-
contained thing. That may or may not be a single call to a unittest 
assert* method, but it should be something you can describe in a few 
words:

"it's a regression test for bug 23"
"test that the database isn't on fire"
"invoices should have a valid debtor"
"the foo report ought to report all the foos"
"...and nothing but the foos."

This hints -- it's just a hint, mind you, since I lack all specific 
knowledge of your application -- that the following "affirm" tests should 
be pulled out into separate tests.

>     p = self.pp.lockbox_payments[0]
>     # affirm we have what we're expecting
>     self.assertEqual(
>         (p.payer, p.ck_num, p.credit),
>         ('a customer', '010101', 1),
>     )
>     self.assertEqual(p.invoices.keys(), ['XXX'])
>     self.assertEqual(p.invoices.values()[0].amount, 1)

which would then leave this to be the Actual Thing Being Tested for this 
test, which then becomes test_no_missing_invoices rather than test_xxx_1.

>     # now make sure we get back what we're expecting
>     np, b = self.pp._match_invoices(p)
>     missing = []
>     for inv_num in ('123456', '789012', '345678'):
>         if inv_num not in b:
>             missing.append(inv_num)
>     if missing:
>         raise ValueError('invoices %r missing from batch' %
>             missing)

Raising an exception directly inside the test function should only occur 
if the test function is buggy. As Terry has already suggested, this 
probably communicates your intention much better:

self.assertEqual(missing, [])
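
Putting the pieces together, the reworked test might read (invoice numbers
as in the original post):

    def test_no_missing_invoices(self):
        p = self.pp.lockbox_payments[0]
        np, b = self.pp._match_invoices(p)
        missing = [inv for inv in ('123456', '789012', '345678')
                   if inv not in b]
        self.assertEqual(missing, [])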



-- 
Steven D'Aprano
http://import-that.dreamwidth.org/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Steven D'Aprano
On Wed, 12 Mar 2014 08:32:29 -0700, Ethan Furman wrote:

> There must
> be waaay too much stuff being kept alive by the stack traces of the
> failed tests.


I believe that unittest does keep stack traces alive until the process 
ends. I thought that there was a recent bug report for it, but the only 
one I can find was apparently fixed more than a decade ago:

http://bugs.python.org/issue451309





-- 
Steven D'Aprano
http://import-that.dreamwidth.org/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Roy Smith
In article ,
 Ethan Furman  wrote:

> > Alternatively, maybe something inside your process is just calling
> > sys.exit(), or even os._exit().  You'll see the exit() system call in
> > the strace output.
> 
> My bare try/except would have caught that.

A bare except would catch sys.exit(), but not os._exit().  Well, no 
that's not actually true.  Calling os._exit() will raise:

TypeError: _exit() takes exactly 1 argument (0 given)

but it won't catch os._exit(0) :-)
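
A tiny demonstration of the difference (don't run this interactively --
os._exit(0) terminates the interpreter on the spot):

    import os, sys

    try:
        sys.exit(0)
    except:                  # a bare except catches the SystemExit
        print 'caught sys.exit()'

    os._exit(0)              # bypasses exception handling entirely
    print 'never reached'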

> > what happens if you reduce that to:
> >
> >   def test_xxx_1(self):
> >       self.fail()
> 
> I only get the strange behavior if more than two (or maybe three) of my test 
> cases fail.  Less than that magic number, 
> and everything works just fine.  It doesn't matter which two or three, 
> either.

OK, well, assuming this is a memory problem, what if you do:

  def test_xxx_1(self):
      l = []
      while True:
          l.append(0)

That should eventually run out of memory.  Does that get you the same 
behavior in a single test case?  If so, that at least would be evidence 
supporting the memory exhaustion theory.

> I strongly suspect it's memory.  When I originally wrote the code I tried to 
> include six months worth of EoM data, but 
> had to back it down to three as my process kept mysteriously dying at four or 
> more months.  There must be waaay too 
> much stuff being kept alive by the stack traces of the failed tests.

One thing you might try is running your tests under nose 
(http://nose.readthedocs.org/).  Nose knows how to run unittest tests, 
and one of the gazillion options it has is to run each test case in an 
isolated process:

  --process-restartworker
If set, will restart each worker process once their
tests are done, this helps control memory leaks from
killing the system. [NOSE_PROCESS_RESTARTWORKER]

that might be what you need.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Ethan Furman

On 03/12/2014 06:44 AM, Roy Smith wrote:

In article ,
  Ethan Furman  wrote:


I've tried it both ways, and both ways my process is being killed, presumably
by the O/S.


What evidence do you have the OS is killing the process?


I put a bare try/except around the call to unittest.main, with a print 
statement in the except, and nothing ever prints.



Some systems have an oom (Out Of Memory) process killer, which nukes
(semi-random) process when the system exhausts memory.  Is it possible
this is happening?  If so, you should see some log message in one of
your system logs.


That would explain why my editor windows were being killed.



You didn't mention (or maybe I missed it) which OS you're using.


Ubuntu 13 something or other.


I'm
assuming you've got some kind of system call tracer (strace, truss,
dtrace, etc).


Sadly, I have no experience with those programs yet, and until now didn't even 
know they existed.


Try running your tests under that.  If something is
sending your process a kill signal, you'll see it:

[gazillions of lines elided]
write(1, ">>> ", 4>>> ) = 4
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
select(1, [0], NULL, NULL, NULL)= ? ERESTARTNOHAND (To be
restarted)
--- SIGTERM (Terminated) @ 0 (0) ---
+++ killed by SIGTERM +++

Alternatively, maybe something inside your process is just calling
sys.exit(), or even os._exit().  You'll see the exit() system call in
the strace output.


My bare try/except would have caught that.



And, of course, the standard suggestion to reduce this down to the
minimum test case.  You posted:

  def test_xxx_1(self):
      p = self.pp.lockbox_payments[0]
      # affirm we have what we're expecting
      self.assertEqual(
          (p.payer, p.ck_num, p.credit),
          ('a customer', '010101', 1),
      )
      self.assertEqual(p.invoices.keys(), ['XXX'])
      self.assertEqual(p.invoices.values()[0].amount, 1)
      # now make sure we get back what we're expecting
      np, b = self.pp._match_invoices(p)
      missing = []
      for inv_num in ('123456', '789012', '345678'):
          if inv_num not in b:
              missing.append(inv_num)
      if missing:
          raise ValueError('invoices %r missing from batch' % missing)

what happens if you reduce that to:

  def test_xxx_1(self):
      self.fail()


I only get the strange behavior if more than two (or maybe three) of my test cases fail.  Less than that magic number, 
and everything works just fine.  It doesn't matter which two or three, either.




do you still get this strange behavior?  What if you get rid of your
setUpClass()?  Keep hacking away at the test suite until you get down to
a single line of code which, if run, exhibits the behavior, and if
commented out, does not.  At that point, you'll have a clue what's
causing this.  If you're lucky :-)


I strongly suspect it's memory.  When I originally wrote the code I tried to include six months worth of EoM data, but 
had to back it down to three as my process kept mysteriously dying at four or more months.  There must be waaay too 
much stuff being kept alive by the stack traces of the failed tests.


Thanks for your help!

--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Roy Smith
In article ,
 Ethan Furman  wrote:

> I've tried it both ways, and both ways my process is being killed, presumably 
> by the O/S.

What evidence do you have the OS is killing the process?

Some systems have an oom (Out Of Memory) process killer, which nukes 
(semi-random) process when the system exhausts memory.  Is it possible 
this is happening?  If so, you should see some log message in one of 
your system logs.

You didn't mention (or maybe I missed it) which OS you're using.  I'm 
assuming you've got some kind of system call tracer (strace, truss, 
dtrace, etc).  Try running your tests under that.  If something is 
sending your process a kill signal, you'll see it:

[gazillions of lines elided]
write(1, ">>> ", 4>>> ) = 4
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
select(1, [0], NULL, NULL, NULL)= ? ERESTARTNOHAND (To be 
restarted)
--- SIGTERM (Terminated) @ 0 (0) ---
+++ killed by SIGTERM +++

Alternatively, maybe something inside your process is just calling 
sys.exit(), or even os._exit().  You'll see the exit() system call in 
the strace output.

And, of course, the standard suggestion to reduce this down to the 
minimum test case.  You posted:

 def test_xxx_1(self):
     p = self.pp.lockbox_payments[0]
     # affirm we have what we're expecting
     self.assertEqual(
         (p.payer, p.ck_num, p.credit),
         ('a customer', '010101', 1),
     )
     self.assertEqual(p.invoices.keys(), ['XXX'])
     self.assertEqual(p.invoices.values()[0].amount, 1)
     # now make sure we get back what we're expecting
     np, b = self.pp._match_invoices(p)
     missing = []
     for inv_num in ('123456', '789012', '345678'):
         if inv_num not in b:
             missing.append(inv_num)
     if missing:
         raise ValueError('invoices %r missing from batch' % missing)

what happens if you reduce that to:

 def test_xxx_1(self):
     self.fail()

do you still get this strange behavior?  What if you get rid of your 
setUpClass()?  Keep hacking away at the test suite until you get down to 
a single line of code which, if run, exhibits the behavior, and if 
commented out, does not.  At that point, you'll have a clue what's 
causing this.  If you're lucky :-)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Ethan Furman

On 03/11/2014 08:36 PM, Terry Reedy wrote:

On 3/11/2014 6:13 PM, John Gordon wrote:

In  Ethan Furman 
 writes:


  if missing:
      raise ValueError('invoices %r missing from batch' % missing)


It's been a while since I wrote test cases, but I recall using the assert*
methods (assertEqual, assertTrue, etc.) instead of raising exceptions.
Perhaps that's the issue?


Yes. I believe the methods all raise AssertionError on failure, and the test 
methods are wrapped with try:.. except
AssertionError as err:

if missing:
    raise ValueError('invoices %r missing from batch' % missing)

should be "assertEqual(missing, [], 'invoices missing from batch')" and if that 
fails, the non-empty list is printed
along with the message.


I've tried it both ways, and both ways my process is being killed, presumably 
by the O/S.

I will say it's an extra motivating factor to have few failing tests -- if more than two of my tests fail, all I see are 
'.'s, 'E's, and 'F's, with no clues as to which test failed nor why.  Thank goodness for '-v' and being able to specify 
which method of which class to run!


--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-11 Thread Terry Reedy

On 3/11/2014 6:13 PM, John Gordon wrote:

In  Ethan Furman 
 writes:


  if missing:
      raise ValueError('invoices %r missing from batch' % missing)


It's been a while since I wrote test cases, but I recall using the assert*
methods (assertEqual, assertTrue, etc.) instead of raising exceptions.
Perhaps that's the issue?


Yes. I believe the methods all raise AssertionError on failure, and the 
test methods are wrapped with try:.. except AssertionError as err:


   if missing:
       raise ValueError('invoices %r missing from batch' % missing)

should be "assertEqual(missing, [], 'invoices missing from batch')" and 
if that fails, the non-empty list is printed along with the message.


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-11 Thread Ethan Furman

On 03/11/2014 03:13 PM, John Gordon wrote:

Ethan Furman writes:


  if missing:
      raise ValueError('invoices %r missing from batch' % missing)


It's been a while since I wrote test cases, but I recall using the assert*
methods (assertEqual, assertTrue, etc.) instead of raising exceptions.
Perhaps that's the issue?


Drat.  Tried it, same issue.  O/S kills it.  :(

--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-11 Thread Ethan Furman

On 03/11/2014 01:58 PM, Ethan Furman wrote:


Anybody have any ideas?


I suspect the O/S is killing the process.  If I manually select the other class to run (which has all successful tests, 
so no traceback baggage), it runs normally.


--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-11 Thread John Gordon
In  Ethan Furman 
 writes:

>  if missing:
>      raise ValueError('invoices %r missing from batch' % missing)

It's been a while since I wrote test cases, but I recall using the assert*
methods (assertEqual, assertTrue, etc.) instead of raising exceptions.
Perhaps that's the issue?

-- 
John Gordon Imagine what it must be like for a real medical doctor to
gor...@panix.comwatch 'House', or a real serial killer to watch 'Dexter'.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unittest fails to import module

2013-06-29 Thread Martin Schöön
On 2013-06-29, Steven D'Aprano  wrote:
> On Sat, 29 Jun 2013 19:13:47 +, Martin Schöön wrote:
>
>> $PYTHONPATH points at both the code and the test directories.
>> 
>> When I run blablabla_test.py it fails to import blablabla.py
>
> What error message do you get?
>
>  
>> I have messed around for oven an hour and get nowhere. I have done
>> unittesting like this with success in the past and I have revisited one
>> of those projects and it still works there.
> [...]
>> Any leads?
>
> The first step is to confirm that your path is setup correctly. At the 
> very top of blablabla_test, put this code:
>
> import os, sys
> print(os.getenv('PYTHONPATH'))
> print(sys.path)
>
Yes, right, I had not managed to make my change to PYTHONPATH stick.
I said the explanation would be trivial, didn't I?

Thanks for the quick replies. I am back in business now.

No, neither English nor Python are native languages of mine but I
enjoy (ab)using both :-)

/Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest fails to import module

2013-06-29 Thread Steven D'Aprano
On Sat, 29 Jun 2013 19:13:47 +, Martin Schöön wrote:

> $PYTHONPATH points at both the code and the test directories.
> 
> When I run blablabla_test.py it fails to import blablabla.py

What error message do you get?

 
> I have messed around for oven an hour and get nowhere. I have done
> unittesting like this with success in the past and I have revisited one
> of those projects and it still works there.
[...]
> Any leads?

The first step is to confirm that your path is setup correctly. At the 
very top of blablabla_test, put this code:

import os, sys
print(os.getenv('PYTHONPATH'))
print(sys.path)


What do they say? What should they say?


The second step is to confirm that you can import the blablabla.py 
module. From the command line, cd into the code directory and start up a 
Python interactive session, then run "import blablabla" and see what it 
does.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest fails to import module

2013-06-29 Thread Roy Smith
In article ,
 Martin Schöön  wrote:

> I know the answer to this must be trivial but I am stuck...
> 
> I am starting on a not too complex Python project. Right now the
> project file structure contains three subdirectories and two
> files with Python code:
> 
> code
>blablabla.py
> test
>blablabla_test.py
> doc
>(empty for now)
> 
> blablabla_test.py contains "import unittest" and "import blablabla"
> 
> $PYTHONPATH points at both the code and the test directories.

A couple of generic debugging suggestions.  First, are you SURE the path 
is set to what you think?  In your unit test, do:

import sys
print sys.path

and make sure it's what you expect it to be.

> When I run blablabla_test.py it fails to import blablabla.py

Get unittest out of the picture.  Run an interactive python and type 
"import blablabla" at it.  What happens?

One trick I like is to strace (aka truss, dtrace, etc on various 
operating systems) the python process and watch all the open() system 
calls.  See what paths it attempts to open when searching for blablabla.  
Sometimes that gives you insight into what's going wrong.

> I have messed around for oven an hour and get nowhere.

What temperature was the oven set at?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest for system testing

2012-10-18 Thread Steven D'Aprano
Sorry for breaking threading, but the original post has not come through 
to me.

> On 18/10/2012 01:22, Rita wrote:
> Hi,
>
> Currently, I use a shell script to test how my system behaves before I
> deploy an application. For instance, I check if fileA, fileB, and fileC
> exist and if they do I go and start up my application.

Do you run the shell script once, before installing the application, or 
every time the application launches?

Do you realise that this is vulnerable to race conditions? E.g:

Time = 10am exactly: shell script runs, fileA etc exist;

Time = 10am and 1 millisecond: another process deletes fileA etc;

Time = 10am and 2 milliseconds: application launches, cannot find 
   fileA etc and crashes.


Depending on what your application does, this could be a security hole.

Regardless of what the shell script reports, to be robust your Python 
application needs to protect against the case that fileA etc are missing. 
Even if all it does is report an error, save the user's work and exit.


> This works great BUT
>
> I would like to use python and in particular unittest module to test my
> system and then deploy my app. I understand unittest is for functional
> testing but I think this too would be a case for it. Any thoughts? I am
> not looking for code in particular but just some ideas on how to use
> python better in situations like this.

Well, you *could* use unittest, but frankly I think that's a case of 
using a hammer to nail in screws. Unittest is awesome for what it does. 
It's not so well suited for this.

Compare these two pieces of code (untested, so they probably won't work 
exactly as given):

# sample 1
import os
import sys
for name in ['fileA', 'fileB', 'fileC']:
    if not os.path.exists(name):
        print('missing essential file %s' % name)
        sys.exit(1)

run_application()



# sample 2
import os
import sys
import unittest

class PreRunTest(unittest.TestCase):
    list_of_files = ['fileA', 'fileB', 'fileC']

    def testFilesExist(self):
        for name in self.list_of_files:
            self.assertTrue(os.path.exists(name))

result = unittest.main(exit=False).result  # I think...

if not result.wasSuccessful():
    sys.exit(1)

run_application()



I think the first sample is much to be preferred, and not just because it 
is a couple of lines shorter. There's less magic involved.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest for system testing

2012-10-17 Thread Dave Angel
On 10/17/2012 08:22 PM, Rita wrote:
> Hi,
>
> Currently, I use a shell script to test how my system behaves before I
> deploy an application. For instance, I check if fileA, fileB, and fileC
> exist and if they do I go and start up my application.
>
> This works great BUT
>
> I would like to use python and in particular unittest module to test my
> system and then deploy my app. I understand unittest is for functional
> testing but I think this too would be a case for it. Any thoughts? I am not
> looking for code in particular but just some ideas on how to use python
> better in situations like this.
>
>

You have perhaps a different meaning for deploy than I do.  An app is
deployed at installation time.  After that, it is simply run.

I think you're saying that you have some extra sanity checks that you
test before you RUN the application.  If so, why aren't they just part
of the main script?

If you're not comfortable putting it there, you could put it in a simple
module that gets imported and called by the main script.
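
A minimal sketch of that idea (file and function names invented):

    # sanity.py
    import os
    import sys

    REQUIRED_FILES = ['fileA', 'fileB', 'fileC']

    def check_environment():
        missing = [f for f in REQUIRED_FILES if not os.path.exists(f)]
        if missing:
            sys.exit('missing required files: %s' % ', '.join(missing))

and then at the top of the main script:

    import sanity
    sanity.check_environment()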

I can't see ANY connection between this and unit testing, with or
without frameworks.  Are you planning to run your test suite each time
the user runs your application?  Can he really wait an hour or three for
it to get started?



-- 

DaveA

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest for system testing

2012-10-17 Thread Rita
thanks.

I suppose I would need a simple example from one of these libraries. (I
typed too soon when I said "no code needed".)



On Wed, Oct 17, 2012 at 8:49 PM, Mark Lawrence wrote:

> On 18/10/2012 01:22, Rita wrote:
>
>> Hi,
>>
>> Currently, I use a shell script to test how my system behaves before I
>> deploy an application. For instance, I check if fileA, fileB, and fileC
>> exist and if they do I go and start up my application.
>>
>> This works great BUT
>>
>> I would like to use python and in particular unittest module to test my
>> system and then deploy my app. I understand unittest is for functional
>> testing but I think this too would be a case for it. Any thoughts? I am
>> not
>> looking for code in particular but just some ideas on how to use python
>> better in situations like this.
>>
>>
> Plenty of options here
> http://wiki.python.org/moin/PythonTestingToolsTaxonomy
> and an active mailing list that I read via gmane.comp.python.testing.general
>
> --
> Cheers.
>
> Mark Lawrence.
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>



-- 
--- Get your facts first, then you can distort them as you please.--
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest for system testing

2012-10-17 Thread Mark Lawrence

On 18/10/2012 01:22, Rita wrote:

Hi,

Currently, I use a shell script to test how my system behaves before I
deploy an application. For instance, I check if fileA, fileB, and fileC
exist and if they do I go and start up my application.

This works great BUT

I would like to use python and in particular unittest module to test my
system and then deploy my app. I understand unittest is for functional
testing but I think this too would be a case for it. Any thoughts? I am not
looking for code in particular but just some ideas on how to use python
better in situations like this.



Plenty of options here 
http://wiki.python.org/moin/PythonTestingToolsTaxonomy and an active 
mailing list that I read via gmane.comp.python.testing.general


--
Cheers.

Mark Lawrence.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest - testing for filenames and filesize

2012-08-31 Thread 88888 Dihedral
On Saturday, September 1, 2012 12:19:10 AM UTC+8, Chris Withers wrote:
> On 23/08/2012 12:25, Tigerstyle wrote:
>
> > class FileTest(unittest.TestCase):
> >
> >  def setUp(self):
> >  self.origdir = os.getcwd()
> >  self.dirname = tempfile.mkdtemp("testdir")
> >  os.chdir(self.dirname)
>
> I wouldn't change directories like this, it's pretty fragile, just use
> absolute paths.
>
> >  def test_1(self):
> >  "Verify creation of files is possible"
> >  for filename in ("this.txt", "that.txt", "the_other.txt"):
> >  f = open(filename, "w")
> >  f.write("Some text\n")
> >  f.close()
> >  self.assertTrue(f.closed)
> >
> >  def test_2(self):
> >  "Verify that current directory is empty"
> >  self.assertEqual(glob.glob("*"), [], "Directory not empty")
> >
> >  def tearDown(self):
> >  os.chdir(self.origdir)
> >  shutil.rmtree(self.dirname)
>
> Seeing this, you might find the following tools useful:
>
> http://packages.python.org/testfixtures/files.html
>
> cheers,
>
> Chris
>
> --
> Simplistix - Content Management, Batch Processing & Python Consulting
>  - http://www.simplistix.co.uk

Well, I am thinking that directory tree listing services or daemons
supported by the OS via some iterators could be better than the
stack-based model.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest - testing for filenames and filesize

2012-08-31 Thread Chris Withers

On 23/08/2012 12:25, Tigerstyle wrote:

class FileTest(unittest.TestCase):

    def setUp(self):
        self.origdir = os.getcwd()
        self.dirname = tempfile.mkdtemp("testdir")
        os.chdir(self.dirname)


I wouldn't change directories like this, it's pretty fragile, just use 
absolute paths.


    def test_1(self):
        "Verify creation of files is possible"
        for filename in ("this.txt", "that.txt", "the_other.txt"):
            f = open(filename, "w")
            f.write("Some text\n")
            f.close()
            self.assertTrue(f.closed)

    def test_2(self):
        "Verify that current directory is empty"
        self.assertEqual(glob.glob("*"), [], "Directory not empty")

    def tearDown(self):
        os.chdir(self.origdir)
        shutil.rmtree(self.dirname)


Seeing this, you might find the following tools useful:

http://packages.python.org/testfixtures/files.html
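
For instance, something along these lines (a sketch from memory of the
testfixtures API, so check the docs above for the exact spelling):

    import unittest
    from testfixtures import TempDirectory

    class FileTest(unittest.TestCase):

        def setUp(self):
            self.d = TempDirectory()
            self.addCleanup(self.d.cleanup)

        def test_write_and_read(self):
            self.d.write('test.txt', b'Some text\n')
            self.assertEqual(self.d.read('test.txt'), b'Some text\n')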

cheers,

Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting
- http://www.simplistix.co.uk
--
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest - testing for filenames and filesize

2012-08-26 Thread Tigerstyle
Ahh,

thank you very much Rob.

Fixed now.

Have a great day.

T

On Sunday, 26 August 2012 19:51:54 UTC+2, Rob Day wrote:
> On Sun, 2012-08-26 at 10:36 -0700, Tigerstyle wrote:
> > self.assertEqual(statinfo.st_size, filesize)
> >
> > I'm still getting AssertionError and the error says: 100 !=b'
>
> filesize is the character 'b' repeated one million times (the contents
> of the file, in other words). statinfo.st_size is the number of bytes in
> the file, i.e. 1,000,000. So when your assertEqual code checks if those
> two values are equal, what do you think happens?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest - testing for filenames and filesize

2012-08-26 Thread Rob Day
On Sun, 2012-08-26 at 10:36 -0700, Tigerstyle wrote:
> self.assertEqual(statinfo.st_size, filesize)
> 
> I'm still getting AssertionError and the error says: 100 !=b'
> 
> 

filesize is the character 'b' repeated one million times (the contents
of the file, in other words). statinfo.st_size is the number of bytes in
the file, i.e. 1,000,000. So when your assertEqual code checks if those
two values are equal, what do you think happens?
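
In other words, assuming the intent is a one-million-byte file, the fixed
test compares the size to a number, not to the data:

    def test_3(self):
        data = b'b' * 1000000
        f = open("test.dat", "wb")
        f.write(data)
        f.close()
        statinfo = os.stat("test.dat")
        self.assertEqual(statinfo.st_size, len(data))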

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest - testing for filenames and filesize

2012-08-26 Thread Tigerstyle
Thanks Rob,

I'v modified the test_3 like this:


def test_3(self):
    f = open("test.dat", "wb")
    filesize = (b'b'*100)
    f.write(filesize)
    f.close()
    statinfo = os.stat("test.dat")
    self.assertEqual(statinfo.st_size, filesize)

I'm still getting AssertionError and the error says: 100 !=b'

Help appreciated.

T 

On Friday, 24 August 2012 21:04:54 UTC+2, Robert Day wrote:
> On Fri, 2012-08-24 at 09:20 -0700, Tigerstyle wrote:
>
> > def test_3(self):
> > f = open("test.dat", "wb")
> > filesize = b"0"*100
> > f.write(filesize)
> > f.close()
> > self.assertEqual(os.stat, filesize)
>
> > The test_3 is to test if the created binary file has the size of 1 million
> > bytes. Somehow it is not working. Any suggestions?
>
> rob@rivertam:~$ python
> Python 2.7.3 (default, Jul 24 2012, 10:05:38)
> [GCC 4.7.0 20120507 (Red Hat 4.7.0-5)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import os
> >>> os.stat
> <built-in function stat>
> >>>
>
> So that's what 'os.stat' is. Why are you testing whether that's equal to
> b"0"*100?
>
> (You may find the documentation on os.stat at
> http://docs.python.org/library/os.html#os.stat helpful; it's a function
> which takes a path as its argument, and returns an object with some
> relevant attributes.)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest - testing for filenames and filesize

2012-08-24 Thread Robert Day
On Fri, 2012-08-24 at 09:20 -0700, Tigerstyle wrote:

> def test_3(self):
> f = open("test.dat", "wb")
> filesize = b"0"*100
> f.write(filesize)
> f.close()
> self.assertEqual(os.stat, filesize)

> The test_3 is to test if the created binary file has the size of 1 million 
> bytes. Somehow it is not working. Any suggestions?
>  

rob@rivertam:~$ python
Python 2.7.3 (default, Jul 24 2012, 10:05:38) 
[GCC 4.7.0 20120507 (Red Hat 4.7.0-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.stat
<built-in function stat>
>>>

So that's what 'os.stat' is. Why are you testing whether that's equal to
b"0"*100?

(You may find the documentation on os.stat at
http://docs.python.org/library/os.html#os.stat helpful; it's a function
which takes a path as its argument, and returns an object with some
relevant attributes.)

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest - testing for filenames and filesize

2012-08-24 Thread Tigerstyle
Thank you guys, Roy and Terry.

I has been great help.

I still need some help. Here is the updated code:


"""
Demonstration of setUp and tearDown.
The tests do not actually test anything - this is a demo.
"""
import unittest
import tempfile
import shutil
import glob
import os

class FileTest(unittest.TestCase):

    def setUp(self):
        self.origdir = os.getcwd()
        self.dirname = tempfile.mkdtemp("testdir")
        os.chdir(self.dirname)

    def test_1(self):
        "Verify creation of files is possible"
        filenames = {"this.txt", "that.txt", "the_other.txt"}
        for filename in filenames:
            f = open(filename, "w")
            f.write("Some text\n")
            f.close()
            self.assertTrue(f.closed)
        dir_names = set(os.listdir('.'))
        self.assertEqual(set(dir_names), set(filenames))

    def test_2(self):
        "Verify that current directory is empty"
        self.assertEqual(glob.glob("*"), [], "Directory not empty")

    def test_3(self):
        f = open("test.dat", "wb")
        filesize = b"0"*100
        f.write(filesize)
        f.close()
        self.assertEqual(os.stat, filesize)

    def tearDown(self):
        os.chdir(self.origdir)
        shutil.rmtree(self.dirname)

The test_3 is to test if the created binary file has the size of 1 million 
bytes. Somehow it is not working. Any suggestions?
 
Thanks

T

On Thursday, 23 August 2012 21:06:29 UTC+2, Roy Smith wrote:
> On Thursday, August 23, 2012 1:29:19 PM UTC-4, Terry Reedy wrote:
>
> > One can start with a set rather than tuple of file names.
> >  filenames = {"this.txt", "that.txt", "the_other.txt"}
>
> Yeah, that's even cleaner.  Just be aware, the set notation above is only
> available in (IIRC) 2.7 or above.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest - testing for filenames and filesize

2012-08-23 Thread Roy Smith
On Thursday, August 23, 2012 1:29:19 PM UTC-4, Terry Reedy wrote:

> One can start with a set rather than tuple of file names.
>  filenames = {"this.txt", "that.txt", "the_other.txt"}

Yeah, that's even cleaner.  Just be aware, the set notation above is only 
available in (IIRC) 2.7 or above.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest - testing for filenames and filesize

2012-08-23 Thread Terry Reedy

On 8/23/2012 8:28 AM, Roy Smith wrote:


I think you want to end up with something like:

 def test_1(self):
     "Verify creation of files is possible"
     filenames = ("this.txt", "that.txt", "the_other.txt")
     for filename in filenames:
         f = open(filename, "w")
         f.write("Some text\n")
         f.close()
         self.assertTrue(f.closed)
     dir_names = os.listdir()
     self.assertEqual(set(dir_names), set(filenames))

The above code isn't tested, but it should give you the gist of what you
need to do.


One can start with a set rather than tuple of file names.

def test_1(self):
    "Verify creation of files is possible"
    filenames = {"this.txt", "that.txt", "the_other.txt"}
    for filename in filenames:
        f = open(filename, "w")
        f.write("Some text\n")
        f.close()
        self.assertTrue(f.closed)
    dir_names = set(os.listdir())
    self.assertEqual(dir_names, filenames)

--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest - testing for filenames and filesize

2012-08-23 Thread Roy Smith
In article <6b0299df-bc24-406b-8d69-489e990d8...@googlegroups.com>,
 Tigerstyle  wrote:

> Hi.
> 
> I need help with an assignment and I hope you guys can guide me in the right 
> direction.
> [code elided]
> 1. The test_1() method includes code to verify that the test directory 
> contains only the files created by the for loop. Hint: You might create a set 
> containing the list of three filenames, and then create a set from the 
> os.listdir() method.

I'm not sure what your question is.  The hint you give above pretty much 
tells you what to do.  The basic issue here is that you started out with 
a list (well, tuple) of filenames.  You can use os.listdir() to get a 
list of filenames that exist in the current directory.  The problem is 
that you can't compare these two lists directly, because lists are 
ordered.  Converting both lists to sets eliminates the ordering and lets 
you compare them.
 
> I'm new to Python programming so I don't know where to put the set in point 
> 1. Before the test or under test1.

I think you want to end up with something like:

def test_1(self):
"Verify creation of files is possible"
filenames = ("this.txt", "that.txt", "the_other.txt")
for filename in filenames:
f = open(filename, "w")
f.write("Some text\n")
f.close()
self.assertTrue(f.closed)
dir_names = os.listdir()
self.assertEqual(set(dir_names), set(filenames))

The above code isn't tested, but it should give you the gist of what you 
need to do.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest - sort cases to be run

2012-08-21 Thread Peter Otten
Kevin Zhang wrote:

> I want to sort the order of the unittest cases to be run, but found such
> statement in Python doc,
> "Note that the order in which the various test cases will be run is
> determined by sorting the test function names with respect to the built-in
> ordering for strings."
> 
> s.addTest(BTest())
> s.addTest(ATest())
> TextTestRunner().run(s)
> 
> I need BTest() to be run prior to ATest(), is there any natural/beautiful
> way to achieve this? Thanks,

Did you try the above? I think BTest *will* run before ATest. The sorting is 
performed by the TestLoader if there is one, i. e. if you don't build a test 
suite manually. If you *do* use a TestLoader you can still influence the 
sort order by defining a sortTestMethodsUsing static method. Here's a fairly 
complex example:

[Ordering tests in a testsuite]
http://mail.python.org/pipermail/python-list/2010-October/589058.html
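
A bare-bones sketch of the loader route (test names invented; the lambda is
a cmp-style function, here reversing the usual ascending order):

    import unittest

    class MyTests(unittest.TestCase):
        def test_a(self): pass
        def test_b(self): pass

    loader = unittest.TestLoader()
    loader.sortTestMethodsUsing = lambda a, b: (a < b) - (a > b)  # descending
    suite = loader.loadTestsFromTestCase(MyTests)
    unittest.TextTestRunner().run(suite)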

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest - sort cases to be run

2012-08-21 Thread goon12
On Tuesday, August 21, 2012 5:34:33 AM UTC-4, Terry Reedy wrote:
> On 8/21/2012 5:09 AM, Kevin Zhang wrote:
> > Hi all,
> >
> > I want to sort the order of the unittest cases to be run, but found such
> > statement in Python doc,
> > "Note that the order in which the various test cases will be run is
> > determined by sorting the test function names with respect to the
> > built-in ordering for strings."
> >
> >  s.addTest(BTest())
> >  s.addTest(ATest())
> >  TextTestRunner().run(s)
> >
> > I need BTest() to be run prior to ATest(), is there any
> > natural/beautiful way to achieve this? Thanks,

If BTest *has* to run prior to ATest, it could be a code smell.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest - sort cases to be run

2012-08-21 Thread Terry Reedy

On 8/21/2012 5:09 AM, Kevin Zhang wrote:

Hi all,

I want to sort the order of the unittest cases to be run, but found such
statement in Python doc,
"Note that the order in which the various test cases will be run is
determined by sorting the test function names with respect to the
built-in ordering for strings."

 s.addTest(BTest())
 s.addTest(ATest())
 TextTestRunner().run(s)

I need BTest() to be run prior to ATest(), is there any
natural/beautiful way to achieve this? Thanks,


Rename it @BTest.

--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest: Improve discoverability of discover (Was: Initial nose experience)

2012-07-16 Thread Philipp Hagemeister
On 07/16/2012 02:37 PM, Philipp Hagemeister wrote:
> Can we improve the discoverability of the discover
> option, for example by making it the default action, or including a
> message "use discover to find test files automatically" if there are no
> arguments?
Oops, already implemented as of Python 3.2. Sorry, should've checked before.

- Philipp



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest: assertRaises() with an instance instead of a type

2012-04-02 Thread Steve Howell
On Mar 28, 6:55 pm, Ben Finney  wrote:
> Steven D'Aprano  writes:
> > (By the way, I have to question the design of an exception with error
> > codes. That seems pretty poor design to me. Normally the exception *type*
> > acts as equivalent to an error code.)
>
> Have a look at Python's built-in OSError. The various errors from the
> operating system can only be distinguished by the numeric code the OS
> returns, so that's what to test on in one's unit tests.
>

To the extent that numeric error codes are poor design (see Steven's
comment) but part of the language (see Ben's comment), it may be
worthwhile to consider a pattern like below.

Let's say you have a function like save_config, where you know that
permissions might be an issue on some systems, but you don't want to
take any action (leave that to the callers).  In those cases, it might
be worthwhile to test for the specific error code (13), but then
translate it to a more domain-specific exception.  This way, all your
callers can trap for a much more specific exception than OSError.
Writing the test code for save_config still presents some of the
issues that the OP alluded to, but then other parts of the system can
be tested with simple assertRaises().


  import os

  class ConfigPermissionError(Exception):
    pass

  def save_config(config):
    try:
      os.mkdir('/config')
    except OSError, e:
      if e[0] == 13:  # errno 13 is EACCES, permission denied
        raise ConfigPermissionError()
      else:
        raise
    fn = os.path.join('/config', 'config.txt')
    f = open(fn, 'w')
    # and so on...

  try:
save_config({'port': 500})
  except ConfigPermissionError:
# do some workaround here
print 'Config not saved due to permissions'

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest: assertRaises() with an instance instead of a type

2012-03-30 Thread Ethan Furman

Steven D'Aprano wrote:
To the degree that the decision of how finely to slice tests is a matter 
of personal judgement and/or taste, I was wrong to say "that is not the 
right way". I should have said "that is not how I would do that test".


I believe that a single test is too coarse, and three or more tests is 
too fine, but two tests is just right. Let me explain how I come to that 
judgement.


If you take a test-driven development approach, the right way to test 
this is to write testFooWillFail once you decide that foo() should raise 
MyException but before foo() actually does so. You would write the test, 
the test would fail, and you would fix foo() to ensure it raises the 
exception. Then you leave the now passing test in place to detect 
regressions.


Then you do the same for the errorcode. Hence two tests.


[snip]

So: never remove tests just because they are redundant. Only remove them 
when they are obsolete due to changes in the code being tested.


Very persuasive argument -- I now find myself disposed to writing two 
tests (not three, nor five ;).


~Ethan~
--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest: assertRaises() with an instance instead of a type

2012-03-29 Thread Steven D'Aprano
On Thu, 29 Mar 2012 08:35:16 -0700, Ethan Furman wrote:

> Steven D'Aprano wrote:
>> On Wed, 28 Mar 2012 14:28:08 +0200, Ulrich Eckhardt wrote:
>> 
>>> Hi!
>>>
>>> I'm currently writing some tests for the error handling of some code.
>>> In this scenario, I must make sure that both the correct exception is
>>> raised and that the contained error code is correct:
>>>
>>>
>>>    try:
>>>        foo()
>>>        self.fail('exception not raised')
>>>    except MyException as e:
>>>        self.assertEqual(e.errorcode, SOME_FOO_ERROR)
>>>    except Exception:
>>>        self.fail('unexpected exception raised')
>> 
>> Secondly, that is not the right way to do this unit test. You are
>> testing two distinct things, so you should write it as two separate
>> tests:
> 
> I have to disagree -- I do not see the advantage of writing a second
> test that *will* fail if the first test fails as opposed to bundling
> both tests together, and having one failure.

Using that reasoning, your test suite should contain *one* ginormous test 
containing everything:

def testDoesMyApplicationWorkPerfectly(self):
    # TEST ALL THE THINGS!!!
    ...


since *any* failure in any part will cause cascading failures in every 
other part of the software which relies on that part. If you have a tree 
of dependencies, a failure in the root of the tree will cause everything 
to fail, and so by your reasoning, everything should be in a single test.

I do not agree with that reasoning, even when the tree consists of two 
items: an exception and an exception attribute.

The problem of cascading test failures is a real one. But I don't believe 
that the solution is to combine multiple conceptual tests into a single 
test. In this case, the code being tested covers two different concepts:

1. foo() will raise MyException. Hence one test for this.

2. When foo() raises MyException, the exception instance will include an
   errorcode attribute with a certain value. This is conceptually 
   separate from #1 above, even though it depends on it. 

Why is it conceptually separate? Because there may be cases where the 
caller cares about foo() raising MyException, but doesn't care about the 
errorcode. Hence errorcode is dependent but separate, and hence a 
separate test.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest: assertRaises() with an instance instead of a type

2012-03-29 Thread Steven D'Aprano
On Thu, 29 Mar 2012 09:08:30 +0200, Ulrich Eckhardt wrote:
> On 28.03.2012 20:07, Steven D'Aprano wrote:

>> Secondly, that is not the right way to do this unit test. You are
>> testing two distinct things, so you should write it as two separate
>> tests:
> [..code..]
>> If foo does *not* raise an exception, the unittest framework will
>> handle the failure for you. If it raises a different exception, the
>> framework will also handle that too.
>>
>> Then write a second test to check the exception code:
> [...]
>> Again, let the framework handle any unexpected cases.
> 
> Sorry, you got it wrong, it should be three tests: 1. Make sure foo()
> raises an exception. 2. Make sure foo() raises the right exception. 3.
> Make sure the errorcode in the exception is right.
> 
> Or maybe you should in between verify that the exception raised actually
> contains an errorcode? And that the errorcode can be equality-compared
> to the expected value? :>

Of course you are free to slice it even finer if you like:

testFooWillRaiseSomethingButIDontKnowWhat
testFooWillRaiseMyException
testFooWillRaiseMyExceptionWithErrorcode
testFooWillRaiseMyExceptionWithErrorcodeWhichSupportsEquality
testFooWillRaiseMyExceptionWithErrorcodeEqualToFooError

Five tests :)

To the degree that the decision of how finely to slice tests is a matter 
of personal judgement and/or taste, I was wrong to say "that is not the 
right way". I should have said "that is not how I would do that test".

I believe that a single test is too coarse, and three or more tests is 
too fine, but two tests is just right. Let me explain how I come to that 
judgement.

If you take a test-driven development approach, the right way to test 
this is to write testFooWillFail once you decide that foo() should raise 
MyException but before foo() actually does so. You would write the test, 
the test would fail, and you would fix foo() to ensure it raises the 
exception. Then you leave the now passing test in place to detect 
regressions.

Then you do the same for the errorcode. Hence two tests.

Since running tests is (usually) cheap, you never bother going back to 
remove tests which are made redundant by later tests. You only remove 
them if they are made redundant by chances to the code. So even though 
the first test is made redundant by the second (if the first fails, so 
will the second), you don't remove it.

Why not? Because it guards against regressions. Suppose I decide that 
errorcode is no longer needed, so I remove the test for errorcode. If I 
had earlier also removed the independent test for MyException being 
raised, I've now lost my only check against regressions in foo().

So: never remove tests just because they are redundant. Only remove them 
when they are obsolete due to changes in the code being tested.

Even when I don't actually write the tests in advance of the code, I 
still write them as if I were. That usually makes it easy for me to 
decide how fine grained the tests should be: since there was never a 
moment when I thought MyException should have an errorcode attribute, but 
not know what that attribute would be, I don't need a *separate* test for 
the existence of errorcode.

(I would only add such a separate test if there was a bug that sometimes 
the errorcode does not exist. That would be a regression test.)

The question of the exception type is a little more subtle. There *is* a 
moment when I knew that foo() should raise an exception, but before I 
decided what that exception would be. ValueError? TypeError? Something 
else? I can write the test before making that decision:

def testFooRaises(self):
    try:
        foo()
    except:  # catch anything
        pass
    else:
        self.fail("foo didn't raise")


However, the next step is broken: I have to modify foo() to raise an 
exception, and there is no "raise" equivalent to the bare "except", no 
way to raise an exception without specifying an exception type.

I can use a bare raise, but only in response to an existing exception. So 
to raise an exception at all, I need to decide what exception that will 
be. Even if I start with a placeholder "raise BaseException", and test 
for that, when I go back and change the code to "raise MyException" I 
should change the test, not create a new test.

Hence there is no point is testing for "any exception, I don't care what" 
since I can't write code corresponding to that test case. Hence, I end up 
with two tests, not three and certainly not five.




-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest: assertRaises() with an instance instead of a type

2012-03-29 Thread Ethan Furman

Steven D'Aprano wrote:

On Wed, 28 Mar 2012 14:28:08 +0200, Ulrich Eckhardt wrote:


Hi!

I'm currently writing some tests for the error handling of some code. In
this scenario, I must make sure that both the correct exception is
raised and that the contained error code is correct:


   try:
       foo()
       self.fail('exception not raised')
   except MyException as e:
       self.assertEqual(e.errorcode, SOME_FOO_ERROR)
   except Exception:
       self.fail('unexpected exception raised')


Secondly, that is not the right way to do this unit test. You are testing 
two distinct things, so you should write it as two separate tests:


I have to disagree -- I do not see the advantage of writing a second 
test that *will* fail if the first test fails as opposed to bundling 
both tests together, and having one failure.


~Ethan~
--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest: assertRaises() with an instance instead of a type

2012-03-29 Thread Terry Reedy

On 3/29/2012 3:28 AM, Ulrich Eckhardt wrote:


Equality comparison is by id. So this code will not do what you want.


 >>> Exception('foo') == Exception('foo')
False

Yikes! That was unexpected and completely changes my idea. Any clue
whether this is intentional? Is identity the fallback when no equality
is defined for two objects?


Yes. The Library Reference 4.3. Comparisons (for built-in classes) puts 
it this way.
"Objects of different types, except different numeric types, never 
compare equal. Furthermore, some types (for example, function objects) 
support only a degenerate notion of comparison where any two objects of 
that type are unequal." In other words, 'a==b' is the same as 'a is b'. 
That is also the default for user-defined classes, but I am not sure 
where that is documented, if at all.
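
One consequence: to get the comparison Ulrich wanted, the exception class
itself would have to define equality, e.g. (a sketch, not from the thread):

    class MyException(Exception):
        def __init__(self, errorcode):
            Exception.__init__(self, errorcode)
            self.errorcode = errorcode

        def __eq__(self, other):
            return (type(self) is type(other)
                    and self.errorcode == other.errorcode)

        # defining __eq__ would otherwise unset __hash__ in Python 3
        __hash__ = Exception.__hash__

    assert MyException(13) == MyException(13)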


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: tabs/spaces (was: Re: unittest: assertRaises() with an instance instead of a type)

2012-03-29 Thread Roy Smith
In article <0ved49-hie@satorlaser.homedns.org>,
 Ulrich Eckhardt  wrote:

> I didn't consciously use tabs, actually I would rather avoid them. That 
> said, my posting looks correctly indented in my "sent" folder and also 
> in the copy received from my newsserver. What could also have an 
> influence is line endings. I'm using Thunderbird on win32 here, acting 
> as news client to comp.lang.python. Or maybe it's your software (or 
> maybe some software in between) that fails to preserve formatting.
> 
> *shrug*

Oh noes!  The line eater bug is back!
-- 
http://mail.python.org/mailman/listinfo/python-list


tabs/spaces (was: Re: unittest: assertRaises() with an instance instead of a type)

2012-03-29 Thread Ulrich Eckhardt

On 28.03.2012 20:26, Terry Reedy wrote:

On 3/28/2012 8:28 AM, Ulrich Eckhardt wrote:

[...]

# call testee and verify results
try:
    ...call function here...
except exception_type as e:
    if exception is not None:
        self.assertEqual(e, exception)


Did you use tabs? They do not get preserved indefinitely, so they are
bad for posting.


I didn't consciously use tabs, actually I would rather avoid them. That 
said, my posting looks correctly indented in my "sent" folder and also 
in the copy received from my newsserver. What could also have an 
influence is line endings. I'm using Thunderbird on win32 here, acting 
as news client to comp.lang.python. Or maybe it's your software (or 
maybe some software in between) that fails to preserve formatting.


*shrug*

Uli
--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest: assertRaises() with an instance instead of a type

2012-03-29 Thread Ulrich Eckhardt

On 28.03.2012 20:26, Terry Reedy wrote:

On 3/28/2012 8:28 AM, Ulrich Eckhardt wrote:

with self.assertRaises(MyException(SOME_FOO_ERROR)):
    foo()


I presume that if this worked the way you want, all attributes would
have to match. The message part of builtin exceptions is allowed to
change, so hard-coding an exact expected message makes tests fragile.
This is a problem with doctest.


I would have assumed that comparing two exceptions leaves out messages 
that are intended for the user, not as part of the API. However, my 
expectations aren't met anyway, because ...



This of course requires the exception to be equality-comparable.


Equality comparison is by id. So this code will not do what you want.


 >>> Exception('foo') == Exception('foo')
 False

Yikes! That was unexpected and completely changes my idea. Any clue 
whether this is intentional? Is identity the fallback when no equality 
is defined for two objects?


Thanks for your feedback!

Uli
--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest: assertRaises() with an instance instead of a type

2012-03-29 Thread Peter Otten
Ulrich Eckhardt wrote:

> True. Normally. I'm adapting to a legacy system though, similar to
> OSError, and that system simply emits error codes; the easiest way
> to handle them is by wrapping them.

If you have

err = some_func()
if err:
    raise MyException(err)

the effort to convert it to

exc = lookup_exception(some_func())
if exc:
raise exc

is small. A fancy way is to use a decorator:

#untested
def code_to_exception(table):
    def deco(f):
        def g(*args, **kw):
            err = f(*args, **kw)
            exc = table[err]
            if exc is not None:
                raise exc
        return g
    return deco

class MyError(Exception): pass
class HyperspaceBypassError(MyError): pass

@code_to_exception({42: HyperspaceBypassError, 0: None})
def some_func(...):
# ...
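
A hypothetical usage sketch of the decorator above (legacy_call and its
error code are invented here):

@code_to_exception({42: HyperspaceBypassError, 0: None})
def legacy_call():
    return 42  # pretend the legacy system reported error code 42

try:
    legacy_call()
except HyperspaceBypassError:
    print("error code 42 surfaced as an exception")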


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest: assertRaises() with an instance instead of a type

2012-03-29 Thread Ulrich Eckhardt

On 28.03.2012 20:07, Steven D'Aprano wrote:

First off, that is not Python code. "catch Exception" gives a syntax
error.


Old C++ habits... :|



Secondly, that is not the right way to do this unit test. You are testing
two distinct things, so you should write it as two separate tests:

[..code..]

If foo does *not* raise an exception, the unittest framework will handle
the failure for you. If it raises a different exception, the framework
will also handle that too.

Then write a second test to check the exception code:

[...]

Again, let the framework handle any unexpected cases.


Sorry, you got it wrong, it should be three tests:
1. Make sure foo() raises an exception.
2. Make sure foo() raises the right exception.
3. Make sure the errorcode in the exception is right.

Or maybe you should in between verify that the exception raised actually 
contains an errorcode? And that the errorcode can be equality-compared 
to the expected value? :>


Sorry, I disagree that these steps should be separated. It would blow up 
the code required for testing, increasing the maintenance burden, which 
leads back to a solution that uses a utility function, like the one you 
suggested or the one I was looking for initially.




(By the way, I have to question the design of an exception with error
codes. That seems pretty poor design to me. Normally the exception *type*
acts as equivalent to an error code.)


True. Normally. I'm adapting to a legacy system though, similar to 
OSError, and that system simply emits error codes; the easiest way 
to handle them is by wrapping them.



Cheers!

Uli
--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest: assertRaises() with an instance instead of a type

2012-03-29 Thread Peter Otten
Ben Finney wrote:

> Steven D'Aprano  writes:
> 
>> (By the way, I have to question the design of an exception with error
>> codes. That seems pretty poor design to me. Normally the exception *type*
>> acts as equivalent to an error code.)
> 
> Have a look at Python's built-in OSError. The various errors from the
> operating system can only be distinguished by the numeric code the OS
> returns, so that's what to test on in one's unit tests.
 
The core devs are working to fix that:

$ python3.2 -c'open("does-not-exist")'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
IOError: [Errno 2] No such file or directory: 'does-not-exist'
$ python3.3 -c'open("does-not-exist")'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
FileNotFoundError: [Errno 2] No such file or directory: 'does-not-exist'

$ python3.2 -c'open("unwritable", "w")'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
IOError: [Errno 13] Permission denied: 'unwritable'
$ python3.3 -c'open("unwritable", "w")'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
PermissionError: [Errno 13] Permission denied: 'unwritable'

http://www.python.org/dev/peps/pep-3151/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest: assertRaises() with an instance instead of a type

2012-03-28 Thread Steven D'Aprano
On Thu, 29 Mar 2012 12:55:13 +1100, Ben Finney wrote:

> Steven D'Aprano  writes:
> 
>> (By the way, I have to question the design of an exception with error
>> codes. That seems pretty poor design to me. Normally the exception
>> *type* acts as equivalent to an error code.)
> 
> Have a look at Python's built-in OSError. The various errors from the
> operating system can only be distinguished by the numeric code the OS
> returns, so that's what to test on in one's unit tests.

I'm familiar with OSError. It is necessary because OSError is a high-
level interface to low-level C errors. I wouldn't call it a good design 
though, I certainly wouldn't choose it if we were developing an error 
system from scratch and weren't constrained by compatibility with a more 
primitive error model (error codes instead of exceptions).

The new, revamped exception hierarchy in Python 3.3 will rationalise much 
(but not all) of this, unifying IOError and OSError and making error 
codes much less relevant:


http://www.python.org/dev/peps/pep-3151/



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest: assertRaises() with an instance instead of a type

2012-03-28 Thread Ben Finney
Steven D'Aprano  writes:

> (By the way, I have to question the design of an exception with error 
> codes. That seems pretty poor design to me. Normally the exception *type* 
> acts as equivalent to an error code.)

Have a look at Python's built-in OSError. The various errors from the
operating system can only be distinguished by the numeric code the OS
returns, so that's what to test on in one's unit tests.

-- 
 \  “In the long run, the utility of all non-Free software |
  `\  approaches zero. All non-Free software is a dead end.” —Mark |
_o__)Pilgrim, 2006 |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest: assertRaises() with an instance instead of a type

2012-03-28 Thread Terry Reedy

On 3/28/2012 8:28 AM, Ulrich Eckhardt wrote:

Hi!

I'm currently writing some tests for the error handling of some code. In
this scenario, I must make sure that both the correct exception is
raised and that the contained error code is correct:


try:
    foo()
    self.fail('exception not raised')
catch MyException as e:
    self.assertEqual(e.errorcode, SOME_FOO_ERROR)
catch Exception:
    self.fail('unexpected exception raised')


This is tedious to write and read. The docs mention this alternative:


with self.assertRaises(MyException) as cm:
    foo()
self.assertEqual(cm.the_exception.errorcode, SOME_FOO_ERROR)


Exceptions can have multiple attributes. This allows the tester to 
exactly specify what attributes to test.



This is shorter, but I think there's an alternative syntax possible that
would be even better:

with self.assertRaises(MyException(SOME_FOO_ERROR)):
    foo()


I presume that if this worked the way you want, all attributes would 
have to match. The message part of builtin exceptions is allowed to 
change, so hard-coding an exact expected message makes tests fragile. 
This is a problem with doctest.



Here, assertRaises() is not called with an exception type but with an
exception instance. I'd implement it something like this:

def assertRaises(self, exception, ...):
    # divide input parameter into type and instance
    if isinstance(exception, Exception):
        exception_type = type(exception)
    else:
        exception_type = exception
        exception = None
    # call testee and verify results
    try:
        ...call function here...
    except exception_type as e:
        if exception is not None:
            self.assertEqual(e, exception)


Did you use tabs? They do not get preserved indefinitely, so they are 
bad for posting.



This of course requires the exception to be equality-comparable.


Equality comparison is by id. So this code will not do what you want.

You can, of course, write a custom AssertX subclass that at least works 
for your custom exception class.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest: assertRaises() with an instance instead of a type

2012-03-28 Thread Steven D'Aprano
On Wed, 28 Mar 2012 14:28:08 +0200, Ulrich Eckhardt wrote:

> Hi!
> 
> I'm currently writing some tests for the error handling of some code. In
> this scenario, I must make sure that both the correct exception is
> raised and that the contained error code is correct:
> 
> 
>    try:
>        foo()
>        self.fail('exception not raised')
>    catch MyException as e:
>        self.assertEqual(e.errorcode, SOME_FOO_ERROR)
>    catch Exception:
>        self.fail('unexpected exception raised')

First off, that is not Python code. "catch Exception" gives a syntax 
error.

Secondly, that is not the right way to do this unit test. You are testing 
two distinct things, so you should write it as two separate tests:


def testFooRaisesException(self):
    # Test that foo() raises an exception.
    self.assertRaises(MyException, foo)


If foo does *not* raise an exception, the unittest framework will handle 
the failure for you. If it raises a different exception, the framework 
will also handle that too.

Then write a second test to check the exception code:

def testFooExceptionCode(self):
    # Test that foo()'s exception has the right error code.
    try:
        foo()
    except MyException as err:
        self.assertEquals(err.errorcode, SOME_FOO_ERROR)


Again, let the framework handle any unexpected cases.

If you have lots of functions to test, write a helper function:

def catch(exception, func, *args, **kwargs):
    try:
        func(*args, **kwargs)
    except exception as err:
        return err
    raise RuntimeError('no error raised')


and then the test becomes:

def testFooExceptionCode(self):
    # Test that foo()'s exception has the right error code.
    self.assertEquals(
        catch(MyException, foo).errorcode, SOME_FOO_ERROR
        )



(By the way, I have to question the design of an exception with error 
codes. That seems pretty poor design to me. Normally the exception *type* 
acts as equivalent to an error code.)


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest and threading

2012-01-25 Thread Mark Hammond

Let me have a guess :)

On 25/01/2012 7:42 PM, Ross Boylan wrote:

On Tue, 2012-01-24 at 13:54 -0800, Ross Boylan wrote:

...

The code I want to test uses threads, but that is not entirely internal
from the standpoint of the unit test framework.  The unit test will be
executing in one thread, but some of the assertions may occur in other
threads.  The question is whether that will work, in particular whether
assertion failures will be properly captured and logged by the test
framework.


I think it will - so long as your "test" on the main thread hasn't 
returned yet.



Concretely, a test may exercise some code that triggers a callback; the
callback might come in a different thread, and the code that is
triggered might make various assertions.

There are two issues: whether assertions and their failures that happen
in other threads will be correctly received by the test framework, and
whether the framework is robust against several assertions being raised
"simultaneously" in different threads.  The latter seems a bit much to
hope for.


I suspect both will be fine.



I assume that, at a minimum, my test code will need to use locks or
other coordination mechanisms so the test doesn't end before all code
under test executes.


Yep - that's the only caveat I'd expect.
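
One common shape for that coordination (a minimal sketch; the worker
thread here stands in for whatever triggers the callback) is to funnel
failures back to the main thread and re-raise them there, so the
framework records them reliably:

import threading
import unittest

class CallbackTest(unittest.TestCase):
    def test_callback_assertion(self):
        done = threading.Event()
        failures = []

        def callback(value):
            try:
                self.assertEqual(value, 42)
            except Exception as e:
                failures.append(e)   # don't let the failure die in the worker
            finally:
                done.set()

        threading.Thread(target=callback, args=(42,)).start()
        self.assertTrue(done.wait(5.0), "callback never fired")
        if failures:
            raise failures[0]        # surface it in the main thread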


Finally, I'll mention two senses of threads in tests that my question
does not concern, although they are also interesting.

I am not concerned with testing the performance of my code, in the sense
of asserting  that an operation must complete before x seconds or after
y seconds.  Some potential implementations of such tests might use
threads even if the code under test was single-threaded.

The question also does not concern running lots of unit tests in
parallel.


nose is still worth having a look at - personally I just use it as a 
runner and where possible ignore its api...


Mark
--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest and threading

2012-01-25 Thread Ross Boylan
On Tue, 2012-01-24 at 13:54 -0800, Ross Boylan wrote:
> Is it safe to use unittest with threads?
> 
> In particular, if a unit test fails in some thread other than the one
> that launched the test, will that information be captured properly?
> 
> A search of the net shows a suggestion that all failures must be
> reported in the main thread, but I couldn't find anything definitive.
> 
> If it matters, I'm using CPython 2.7.
> 
> Thanks.  If you're using email, I'd appreciate a cc.
> Ross Boylan
> 
Steven D'Aprano wrote

> I think you need to explain what you mean here in a little more detail.
> 
> If you mean, "I have a library that uses threads internally, and I want 
> to test it with unittest", then the answer is almost certainly yes it is 
> safe.
> 
> If you mean, "I want to write unit tests which use threads as part of the 
> test", then the answer again remains almost certainly yes it is safe.
Thanks for your responses (only partially excerpted above).

The code I want to test uses threads, but that is not entirely internal
from the standpoint of the unit test framework.  The unit test will be
executing in one thread, but some of the assertions may occur in other
threads.  The question is whether that will work, in particular whether
assertion failures will be properly captured and logged by the test
framework.

Concretely, a test may exercise some code that triggers a callback; the
callback might come in a different thread, and the code that is
triggered might make various assertions.

There are two issues: whether assertions and their failures that happen
in other threads will be correctly received by the test framework, and
whether the framework is robust against several assertions being raised
"simultaneously" in different threads.  The latter seems a bit much to
hope for.

I assume that, at a minimum, my test code will need to use locks or
other coordination mechanisms so the test doesn't end before all code
under test executes.

Finally, I'll mention two senses of threads in tests that my question
does not concern, although they are also interesting.

I am not concerned with testing the performance of my code, in the sense
of asserting  that an operation must complete before x seconds or after
y seconds.  Some potential implementations of such tests might use
threads even if the code under test was single-threaded.

The question also does not concern running lots of unit tests in
parallel.

Ross

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest and threading

2012-01-24 Thread Steven D'Aprano
On Tue, 24 Jan 2012 13:54:23 -0800, Ross Boylan wrote:

> Is it safe to use unittest with threads?

I see nobody else has answered, so I'll have a go.

I think you need to explain what you mean here in a little more detail.

If you mean, "I have a library that uses threads internally, and I want 
to test it with unittest", then the answer is almost certainly yes it is 
safe.

If you mean, "I want to write unit tests which use threads as part of the 
test", then the answer again remains almost certainly yes it is safe.

Provided, of course, that your test code is not buggy. Tests, being code, 
are not immune to bugs, and the more complex your tests, the more likely 
they contain bugs.

Lastly, if you mean, "I want to execute each unit test in a separate 
thread, so that all my tests run in parallel instead of sequentially", 
then the answer is that as far as I know the unittest framework does not 
support this.

You would have to write your own framework. You might be able to inherit 
some of the behaviour from the unittest module, but all the threading 
would be up to you. So only you will know whether it will be safe or not.

Alternatively, you could try the nose or py.test frameworks, which I 
understand already support running tests in parallel.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest. customizing tstloaders / discover()

2011-12-12 Thread Nathan Rice
Nose is absolutely the way to go for your testing needs.  You can put
"__test__ = False" in modules or classes to stop test collection.

On Mon, Dec 12, 2011 at 5:44 AM, Thomas Bach  wrote:
> Gelonida N  writes:
>
>> Do I lose anything by using nose? For example, can all unit tests / doc
>> tests still be run from nose?
>
> AFAIK you don't lose anything by using nose – the unittests should all
> be found and doctests can be run via `--with-doctest'; I never used
> doctests though.
>
> regards
> --
> http://mail.python.org/mailman/listinfo/python-list
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest. customizing tstloaders / discover()

2011-12-12 Thread Thomas Bach
Gelonida N  writes:

> Do I lose anything by using nose? For example, can all unit tests / doc
> tests still be run from nose?

AFAIK you don't lose anything by using nose – the unittests should all
be found and doctests can be run via `--with-doctest'; I never used
doctests though.

regards
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest. customizing tstloaders / discover()

2011-12-11 Thread Gelonida N
On 12/12/2011 12:27 AM, Thomas Bach wrote:
> Gelonida N  writes:
> 
>> I'd like to use regular expressions as include / exclude rules
>> and I would like to have another filter function, which would check for
>> the existence of certain meta-variables in test suite files
> 
> Did you have a look at nose? I'm using it and it supports
> include/exclude rules via RE and lets you select directories to run
> tests from.
> 
> I'm not sure about the meta-variable thing, but it supports plug ins
> that could do the trick…
> 

I looked at nose very briefly, too briefly to form a real opinion
or to understand whether nose would have any negative impact on
existing unit tests.

Do I lose anything by using nose? For example, can all unit tests / doc
tests still be run from nose?



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest. customizing tstloaders / discover()

2011-12-11 Thread Thomas Bach
Gelonida N  writes:

> I'd like to use regular expressions as include / exclude rules
> and I would like to have another filter function, which would check for
> the existence of certain meta-variables in test suite files

Did you have a look at nose? I'm using it and it supports
include/exclude rules via RE and lets you select directories to run
tests from.

I'm not sure about the meta-variable thing, but it supports plug ins
that could do the trick…

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest testing assert*() calls rather than methods?

2011-09-28 Thread Roy Smith
In article ,
 Tim Chase  wrote:

> On 09/28/11 19:52, Roy Smith wrote:
> > In many cases, there's only two states of interest:
> >
> > 1) All tests pass
> >
> > 2) Anything else
> 
> Whether for better or worse, at some places (such as a previous 
> employer) the number (and accretion) of test-points is a 
> marketing bullet-point for upgrades & new releases.

Never attribute to malice that which is adequately explained by the 
stupidity of the marketing department.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest testing assert*() calls rather than methods?

2011-09-28 Thread Tim Chase

On 09/28/11 19:52, Roy Smith wrote:

In many cases, there's only two states of interest:

1) All tests pass

2) Anything else


Whether for better or worse, at some places (such as a previous 
employer) the number (and accretion) of test-points is a 
marketing bullet-point for upgrades & new releases.


-tkc



--
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest testing assert*() calls rather than methods?

2011-09-28 Thread Roy Smith
In article <874nzw3wxc@benfinney.id.au>,
 Ben Finney  wrote:

> Roy Smith  writes:
> 
> > In article <87k48szqo1@benfinney.id.au>,
> >  Ben Finney  wrote:
> >
> > > Worse, if one of the scenarios causes the test to fail, the loop will
> > > end and you won't get the results for the remaining scenarios.
> >
> > Which, depending on what you're doing, may or may not be important.  In 
> > many cases, there's only two states of interest:
> >
> > 1) All tests pass
> >
> > 2) Anything else
> 
> For the purpose of debugging, it's always useful to more specifically
> narrow down the factors leading to failure.

Well, sure, but "need to debug" is just a consequence of being in state 
2.  If a test fails and I can't figure out why, I can always go back and 
add additional code to the test case to extract additional information.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest testing assert*() calls rather than methods?

2011-09-28 Thread Ben Finney
Steven D'Aprano  writes:

> I used to ask the same question, but then I decided that if I wanted
> each data point to get its own tick, I should bite the bullet and
> write an individual test for each.

Hence my advocating the ‘testscenarios’ library. Have you tried that?

It allows the best of both worlds: within the class, you write a
collection of data scenarios, and write one test case for each separate
behaviour, and that's all that appears in the code; but the test run
shows every test case for every scenario.

E.g. with a ParrotTestCase class having three test cases and four
scenarios, the test run will run the full combination of all of them and
report twelve distinct test cases and the result for each.

So the “bite the bullet” you describe isn't necessary to get what you
want.
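
A minimal sketch of the pattern, following the testscenarios
documentation (the scenario names and attributes here are invented):

import testscenarios

class ParrotTestCase(testscenarios.TestWithScenarios):
    scenarios = [
        ('norwegian-blue', {'plumage': 'blue'}),
        ('macaw', {'plumage': 'red'}),
    ]

    def test_has_plumage(self):
        # runs once per scenario; self.plumage comes from the scenario dict
        self.assertTrue(self.plumage)

Two scenarios times one test method gives two reported cases; three test
methods and four scenarios would give the twelve described above.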

> If you really care, you could subclass unittest.TestCase, and then
> cause each assert* method to count how often it gets called. But
> really, how much detailed info about *passed* tests do you need?

The ‘testscenarios’ library does subclass the standard library
‘unittest’ module.

But as explained elsewhere, it's not the counting which is the issue.
The issue is to ensure (and report) that every test case is actually
tested in isolation against every relevant scenario.

-- 
 \“Human reason is snatching everything to itself, leaving |
  `\  nothing for faith.” —Bernard of Clairvaux, 1090–1153 |
_o__)  |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest testing assert*() calls rather than methods?

2011-09-28 Thread Ben Finney
Roy Smith  writes:

> In article <87k48szqo1@benfinney.id.au>,
>  Ben Finney  wrote:
>
> > Worse, if one of the scenarios causes the test to fail, the loop will
> > end and you won't get the results for the remaining scenarios.
>
> Which, depending on what you're doing, may or may not be important.  In 
> many cases, there's only two states of interest:
>
> 1) All tests pass
>
> 2) Anything else

For the purpose of debugging, it's always useful to more specifically
narrow down the factors leading to failure.

-- 
 \  “A lie can be told in a few words. Debunking that lie can take |
  `\   pages. That is why my book… is five hundred pages long.” —Chris |
_o__)Rodda, 2011-05-05 |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest testing assert*() calls rather than methods?

2011-09-28 Thread Eric Snow
On Wed, Sep 28, 2011 at 6:50 PM, Devin Jeanpierre
 wrote:
>> I used to ask the same question, but then I decided that if I wanted each
>> data point to get its own tick, I should bite the bullet and write an
>> individual test for each.
>
> Nearly the entire re module test suite is a list of tuples. If it was
> instead a bunch of TestCase classes, there'd be a lot more boilerplate
> to write. (At a bare minimum, there'd be two times as many lines, and
> all the extra lines would be identical...)
>
> Why is writing boilerplate for a new test a good thing? It discourages
> the authorship of tests. Make it as easy as possible by e.g. adding a
> new thing to whatever you're iterating over. This is, for example, why
> the nose test library has a decorator for generating a test suite from
> a generator.

+1

>
> Devin
>
> On Wed, Sep 28, 2011 at 8:16 PM, Steven D'Aprano
>  wrote:
>> Tim Chase wrote:
>>
>>> While I asked this on the Django list as it happened to be with
>>> some Django testing code, this might be a more generic Python
>>> question so I'll ask here too.
>>>
>>> When performing unittest tests, I have a number of methods of the
>>> form
>>>
>>>    def test_foo(self):
>>>      data = (
>>>        (item1, result1),
>>>        ... #bunch of tests for fence-post errors
>>>        )
>>>      for test, result in data:
>>>        self.assertEqual(process(test), result)
>>>
>> When I run my tests, I only get a tick for running the one
>>> test (test_foo), not the len(data) tests that were actually
>>> performed.  Is there a way for unittesting to report the number
>>> of passed-assertions rather than the number of test-methods run?
>>
>> I used to ask the same question, but then I decided that if I wanted each
>> data point to get its own tick, I should bite the bullet and write an
>> individual test for each.
>>
>> If you really care, you could subclass unittest.TestCase, and then cause
>> each assert* method to count how often it gets called. But really, how much
>> detailed info about *passed* tests do you need?
>>
>> If you are writing loops inside tests, you might find this anecdote useful:
>>
>> http://mail.python.org/pipermail/python-list/2011-April/1270640.html
>>
>>
>>
>> --
>> Steven
>>
>> --
>> http://mail.python.org/mailman/listinfo/python-list
>>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest testing assert*() calls rather than methods?

2011-09-28 Thread Roy Smith
In article <87k48szqo1@benfinney.id.au>,
 Ben Finney  wrote:

> Worse, if one of the scenarios causes the test to fail, the loop will
> end and you won't get the results for the remaining scenarios.

Which, depending on what you're doing, may or may not be important.  In 
many cases, there's only two states of interest:

1) All tests pass

2) Anything else
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest testing assert*() calls rather than methods?

2011-09-28 Thread Devin Jeanpierre
> I used to ask the same question, but then I decided that if I wanted each
> data point to get its own tick, I should bite the bullet and write an
> individual test for each.

Nearly the entire re module test suite is a list of tuples. If it was
instead a bunch of TestCase classes, there'd be a lot more boilerplate
to write. (At a bare minimum, there'd be two times as many lines, and
all the extra lines would be identical...)

Why is writing boilerplate for a new test a good thing? It discourages
the authorship of tests. Make it as easy as possible by e.g. adding a
new thing to whatever you're iterating over. This is, for example, why
the nose test library has a decorator for generating a test suite from
a generator.

Devin

On Wed, Sep 28, 2011 at 8:16 PM, Steven D'Aprano
 wrote:
> Tim Chase wrote:
>
>> While I asked this on the Django list as it happened to be with
>> some Django testing code, this might be a more generic Python
>> question so I'll ask here too.
>>
>> When performing unittest tests, I have a number of methods of the
>> form
>>
>>    def test_foo(self):
>>      data = (
>>        (item1, result1),
>>        ... #bunch of tests for fence-post errors
>>        )
>>      for test, result in data:
>>        self.assertEqual(process(test), result)
>>
>> When I run my tests, I only get a tick for running the one
>> test (test_foo), not the len(data) tests that were actually
>> performed.  Is there a way for unittesting to report the number
>> of passed-assertions rather than the number of test-methods run?
>
> I used to ask the same question, but then I decided that if I wanted each
> data point to get its own tick, I should bite the bullet and write an
> individual test for each.
>
> If you really care, you could subclass unittest.TestCase, and then cause
> each assert* method to count how often it gets called. But really, how much
> detailed info about *passed* tests do you need?
>
> If you are writing loops inside tests, you might find this anecdote useful:
>
> http://mail.python.org/pipermail/python-list/2011-April/1270640.html
>
>
>
> --
> Steven
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest testing assert*() calls rather than methods?

2011-09-28 Thread Roy Smith
In article <4e83b8e0$0$29972$c3e8da3$54964...@news.astraweb.com>,
 Steven D'Aprano  wrote:

> If you are writing loops inside tests, you might find this anecdote useful:
> 
> http://mail.python.org/pipermail/python-list/2011-April/1270640.html

On the other hand, the best test is one that gets written.  I will often 
write tests that I know do not meet the usual standards of purity and 
wholesomeness.  Here's a real-life example:

for artist in artists:
    name = artist['name']
    self.assertIsInstance(name, unicode)
    name = name.lower()
    # Due to fuzzy matching, it's not strictly guaranteed that the
    # following assertion is true, but it works in this case.
    self.assertTrue(name.startswith(term), (name, term))

Could I have written the test without the loop?  Probably.  Would it 
have been a better test?  I guess, at some level, probably.  And, of 
course, the idea of a "not strictly guaranteed" assertion is probably 
enough to make me lose my Unit Tester's Guild Secret Decoder Ring 
forever :-)

But, the test was quick and easy to write, and provides value.  I could 
have spent twice as long writing a better test, and it would have 
provided a little more value, but certainly not double.  More 
importantly, had I spent the extra time writing the better test, I might 
have not had enough time to write all the other tests I wrote that day.

Sometimes good enough is good enough.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest testing assert*() calls rather than methods?

2011-09-28 Thread Steven D'Aprano
Tim Chase wrote:

> While I asked this on the Django list as it happened to be with
> some Django testing code, this might be a more generic Python
> question so I'll ask here too.
> 
> When performing unittest tests, I have a number of methods of the
> form
> 
>    def test_foo(self):
>      data = (
>        (item1, result1),
>        ... #bunch of tests for fence-post errors
>        )
>      for test, result in data:
>        self.assertEqual(process(test), result)
> 
> When I run my tests, I only get a tick for running the one
> test (test_foo), not the len(data) tests that were actually
> performed.  Is there a way for unittesting to report the number
> of passed-assertions rather than the number of test-methods run?

I used to ask the same question, but then I decided that if I wanted each
data point to get its own tick, I should bite the bullet and write an
individual test for each.

If you really care, you could subclass unittest.TestCase, and then cause
each assert* method to count how often it gets called. But really, how much
detailed info about *passed* tests do you need? 

If you are writing loops inside tests, you might find this anecdote useful:

http://mail.python.org/pipermail/python-list/2011-April/1270640.html



-- 
Steven

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest testing assert*() calls rather than methods?

2011-09-28 Thread Terry Reedy

On 9/28/2011 10:16 AM, Tim Chase wrote:


When performing unittest tests, I have a number of methods of the form

def test_foo(self):
    data = (
        (item1, result1),
        ... #bunch of tests for fence-post errors
        )
    for test, result in data:
        self.assertEqual(process(test), result)

When I run my tests, I only get a tick for running the one test
(test_foo), not the len(data) tests that were actually performed. Is
there a way for unittesting to report the number of passed-assertions
rather than the number of test-methods run?


In my view, unittest, based on JUnit from Java, is both overkill and 
inadequate for simple function testing of multiple input-output pairs. 
So I wrote my own short function test function that does just what I 
want, and which I can change if I change what I want.


Ben has described the combinatorial explosion solution. But if I were 
using unittest, I might do something like the following:


  def test_foo(self):
      data = (
          (item1, result1),
          ... #bunch of tests for fence-post errors
          )
      errors = []
      for input, expected in data:
          try:
              actual = process(input)
              if actual != expected:
                  errors.append((input, expected, actual))
          except Exception as e:
              errors.append((input, expected, e))  # record the exception itself
      self.assertEqual((0, []), (len(errors), errors))

except that I would write a functest(func, iopairs) that returned the 
error pair. (This is essentially what I have done for myself.)  I am 
presuming that one can run unittest so that it prints the unequal items.
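
Such a helper might look like this (a sketch along the lines described,
not Terry's actual code):

def functest(func, iopairs):
    # return (error_count, errors) so one assertEqual reports every failure
    errors = []
    for arg, expected in iopairs:
        try:
            actual = func(arg)
            if actual != expected:
                errors.append((arg, expected, actual))
        except Exception as e:
            errors.append((arg, expected, e))
    return len(errors), errors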


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest testing assert*() calls rather than methods?

2011-09-28 Thread Ben Finney
Ben Finney  writes:

> You can use the third-party ‘testscenarios’ library

The URL got a bit mangled. The proper PyPI URL for that library is
<http://pypi.python.org/pypi/testscenarios>.

> to generate test cases at run time, one for each combination of
> scenarios with test cases on the class. They will all be run and
> reported as distinct test cases.
>
> There is even another library integrating this with Django
> <http://pypi.python.org/pypi/django-testscenarios>.

-- 
 \“Don't worry about people stealing your ideas. If your ideas |
  `\ are any good, you'll have to ram them down people's throats.” |
_o__)—Howard Aiken |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest testing assert*() calls rather than methods?

2011-09-28 Thread Ben Finney
Tim Chase  writes:

>   def test_foo(self):
> data = (
>   (item1, result1),
>   ... #bunch of tests for fence-post errors
>   )
> for test, result in data:
>   self.assertEqual(process(test), result)

The sets of data for running the same test we might call “scenarios”.

> When I run my tests, I only get a tick for running the one test
> (test_foo), not the len(data) tests that were actually performed.

Worse, if one of the scenarios causes the test to fail, the loop will
end and you won't get the results for the remaining scenarios.

> Is there a way for unittesting to report the number of
> passed-assertions rather than the number of test-methods run?

You can use the third-party ‘testscenarios’ library
<http://pypi.python.org/pypi/test-scenarios> to generate test cases
at run time, one for each combination of scenarios with test cases on
the class. They will all be run and reported as distinct test cases.

There is even another library integrating this with Django
<http://pypi.python.org/pypi/django-testscenarios>.

-- 
 \ “Books and opinions, no matter from whom they came, if they are |
  `\ in opposition to human rights, are nothing but dead letters.” |
_o__)  —Ernestine Rose |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest: how to pass information to TestCase classes?

2010-10-26 Thread Steve Holden
On 10/26/2010 2:46 PM, AK wrote:
> Hi, I have a question about unittest: let's say I create a temp dir for
> my tests, then use loadTestsFromNames() to load my tests from packages
> and modules they're in, then use TextTestRunner.run() to run the tests,
> how can I pass information to TestCase instances, e.g. the location of
> the temp dir I created?
> 
> The dir has to be created just once, before any tests run, and then
> multiple packages and multiple modules in them are imported and run.
> 
In which case a class variable would seem to be the appropriate mechanism.

regards
 Steve
-- 
Steve Holden   +1 571 484 6266   +1 800 494 3119
PyCon 2011 Atlanta March 9-17   http://us.pycon.org/
See Python Video!   http://python.mirocommunity.org/
Holden Web LLC http://www.holdenweb.com/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest: how to pass information to TestCase classes?

2010-10-26 Thread Ben Finney
AK  writes:

> Hi, I have a question about unittest: let's say I create a temp dir
> for my tests, then use loadTestsFromNames() to load my tests from
> packages and modules they're in, then use TextTestRunner.run() to run
> the tests, how can I pass information to TestCase instances, e.g. the
> location of the temp dir I created?

Have it available from outside the TestCase child classes. Either as a
module-level global, or imported from some other module.
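
A minimal sketch of the module-level-global approach (module and
attribute names are illustrative):

# testconfig.py -- a tiny shared module; the runner fills it in once
TEMP_DIR = None

# runner script, before loadTestsFromNames():
#     import tempfile, testconfig
#     testconfig.TEMP_DIR = tempfile.mkdtemp()

# any test module:
import unittest
import testconfig

class MyTest(unittest.TestCase):
    def test_uses_temp_dir(self):
        self.assertTrue(testconfig.TEMP_DIR)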

-- 
 \   Moriarty: “Forty thousand million billion dollars? That money |
  `\must be worth a fortune!” —The Goon Show, _The Sale of |
_o__)   Manhattan_ |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest basics

2010-05-11 Thread Chris Withers

import unittest

class MyTestCase(unittest.TestCase):

    def test_my_import(self):
        import blah

cheers,

Chris

John Maclean wrote:
is there a way to test that a certain library or module is or can be 
loaded successfully?


self.assert('import blah')



--
Simplistix - Content Management, Batch Processing & Python Consulting
- http://www.simplistix.co.uk
--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest basics

2010-05-11 Thread Giampaolo Rodolà
There's no reason for such a thing.
You can just make "import module" in your test and if something goes
wrong that will be treated as any other test failure.

--- Giampaolo
http://code.google.com/p/pyftpdlib
http://code.google.com/p/psutil


2010/5/11 John Maclean :
> is there a way to test that a certain library or module is or can be loaded
> successfully?
>
> self.assert('import blah')
>
> --
> John Maclean
> MSc. (DIC) BSc. (Hons)
> Linux Systems and Applications
> 07739 171 531
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest not being run

2010-05-10 Thread Joe Riopel
On Mon, May 10, 2010 at 5:17 PM, cjw  wrote:
> PyScripter and PythonWin permit the user to choose the equivalence of tabs
> and spaces.  I like two spaces = one tab, it's a matter of taste.  I feel
> that eight spaces is too much.

While it is a matter of taste,  PEP 8 recommends 4 spaces per indentation level.

http://www.python.org/dev/peps/pep-0008/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest not being run

2010-05-10 Thread cjw

On 10-May-10 10:21 AM, John Maclean wrote:

On 10/05/2010 14:38, J. Cliff Dyer wrote:

My guess is you mixed tabs and spaces. One tab is always treated by the
python interpreter as being equal to eight spaces, which is two
indentation levels in your code.

Though if it were exactly as you show it, you'd be getting a syntax
error, because even there, it looks like the indentation of your `def
test_T1(self):` line is off by one column, relative to pass, and by
three columns relative to the other methods.

Cheers,
Cliff


'twas a spaces/indent issue. thanks!



PyScripter and PythonWin permit the user to choose the equivalence of 
tabs and spaces.  I like two spaces = one tab, it's a matter of taste.  I 
feel that eight spaces is too much.


Colin W.
--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest not being run

2010-05-10 Thread John Maclean

On 10/05/2010 14:38, J. Cliff Dyer wrote:

My guess is you mixed tabs and spaces.  One tab is always treated by the
python interpreter as being equal to eight spaces, which is two
indentation levels in your code.

Though if it were exactly as you show it, you'd be getting a syntax
error, because even there, it looks like the indentation of your `def
test_T1(self):` line is off by one column, relative to pass, and by
three columns relative to the other methods.

Cheers,
Cliff


'twas a spaces/indent issue. thanks!



--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest not being run

2010-05-10 Thread J. Cliff Dyer
My guess is you mixed tabs and spaces.  One tab is always treated by the
python interpreter as being equal to eight spaces, which is two
indentation levels in your code.

Though if it were exactly as you show it, you'd be getting a syntax
error, because even there, it looks like the indentation of your `def
test_T1(self):` line is off by one column, relative to pass, and by
three columns relative to the other methods.

Cheers,
Cliff
 

On Mon, 2010-05-10 at 13:38 +0100, John Maclean wrote:
> hi,
> 
> can some one explain why the __first__ test is not being run?
> 
> #!/usr/bin/env python
> import unittest # {{{
> class T1TestCase(unittest.TestCase):
> 
>  def setUp(self):
>  pass  # can we use global variables here?
> 
>  def tearDown(self):
>  pass  # garbage collection
> 
>   def test_T1(self):
>   '''this test aint loading'''
>   self.assertEquals(1, 0)
> 
>  def test_T2(self):  ## test method names begin 'test*'
>  self.assertEquals((1 + 2), 3)
>  self.assertEquals(0 + 1, 1)
> 
>  def test_T3(self):
>  self.assertEquals((0 * 10), 0)
>  self.assertEquals((5 * 8), 40)
> 
> # the output is better. prints each test and ok or fail
> suite = unittest.TestLoader().loadTestsFromTestCase(T1TestCase)
> unittest.TextTestRunner(verbosity=2).run(suite) # }}}
> 
> 
> ''' halp!
> 
> the first test ain't loading...
> 
> python blaht.py
> test_T2 (__main__.T1TestCase) ... ok
> test_T3 (__main__.T1TestCase) ... ok
> 
> --
> Ran 2 tests in 0.000s
> 
> OK
> 
> '''



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest not being run

2010-05-10 Thread Joe Riopel
On Mon, May 10, 2010 at 8:38 AM, John Maclean  wrote:
> hi,
>
> can some one explain why the __first__ test is not being run?

It looks like you defined test_T1 inside of  the tearDown method.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest help needed!

2010-01-14 Thread Phlip

Oltmans wrote:


def test_first(self):
    print 'first test'
    process(123)


All test cases use the pattern "Assemble Activate Assert".

You are assembling a 123, and activating process(), but where is your assert? If 
it is inside process() (if process is a test-side method), then that should be 
called assert_process().



As you can see, every test method is almost same. Only difference is
that every test method is calling process() with a different value.
Also, I've around 50 test methods coded that way.


We wonder if your pattern is truly exploiting the full power of testing. If you 
have ~15 different features, you should have ~50 tests (for a spread of low, 
middle, and high input values, to stress the targeted production code).


But this implies your 15 different features should have as many different 
interfaces - not the same interface over and over again. That suggests coupled 
features.


Anyway, the short-term answer is to temporarily abandon "AAA", and roll up your 
input values into a little table:


    for x in [123, 327, 328, ... ]:
        process(x)

(Also, you don't need the print - tests should run silent and unattended, unless 
they fail.)


This refactor is the standard answer to the question "I have an unrolled loop". 
You roll it back up into a table iteration.


However, you lose the test case features, such as restartability and test 
isolation, that AAA gives you.
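
One middle ground that keeps per-test isolation and reporting without
hand-writing fifty methods (a sketch; process() stands in for the code
under test):

import unittest

def process(value):
    pass   # stand-in for the real code under test

def make_test(value):
    def test(self):
        process(value)   # plus whatever assertion belongs here
    return test

class Tee(unittest.TestCase):
    pass

# one real test method per input value, each reported and failing independently
for i, value in enumerate([123, 564, 127863]):
    setattr(Tee, 'test_process_%d' % i, make_test(value))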


Long term, you should use a literate test runner, such as (>cough<) my Morelia 
project:


   http://c2.com/cgi/wiki?MoreliaViridis

Down at the bottom, that shows how to create a table of inputs and outputs, and 
Morelia does the unrolling for you.


--
  Phlip
  http://zeekland.zeroplayer.com/Uncle_Wiggilys_Travels/1
--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest help needed!

2010-01-14 Thread Oltmans


On Jan 14, 11:46 pm, exar...@twistedmatrix.com wrote:
> When you run test.py, it gets to the loadTestsFromName line.  There, it
> imports the module named "test" in order to load tests from it.  To import
> that module, it runs test.py again.  By the time it finishes running the
> contents of test.py there, it has run all of your tests once, since part
> of test.py is "suite.run(r)".  Having finished that, the import of the "test"

Many thanks, really appreciate your insight. Very helpful. I need a
program design advice. I just want to know what's the gurus way of
doing it? I've a unittest.TestCase derived class that have around 50
test methods. Design is almost similar to following
---
import unittest

class result(unittest.TestResult):
    pass

class tee(unittest.TestCase):
    def test_first(self):
        print 'first test'
        process(123)
    def test_second(self):
        print 'second test'
        process(564)
    def test_third(self):
        print 'final method'
        process(127863)



if __name__=="__main__":
    r = result()
    suite = unittest.defaultTestLoader.loadTestsFromName('test.tee')
    suite.run(r)
---

As you can see, every test method is almost same. Only difference is
that every test method is calling process() with a different value.
Also, I've around 50 test methods coded that way. I just want to know:
is there a way I can make things smaller/smarter/pythonic given the
current design? If you think any information is missing please let me
know. I will really really appreciate any insights. Many thanks in
advance.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest help needed!

2010-01-14 Thread exarkun

On 06:33 pm, rolf.oltm...@gmail.com wrote:

Hi Python gurus,

I'm quite new to Python and have a problem. Following code resides in
a file named test.py
---
import unittest


class result(unittest.TestResult):
    pass



class tee(unittest.TestCase):
    def test_first(self):
        print 'first test'
        print '-'
    def test_second(self):
        print 'second test'
        print '-'
    def test_final(self):
        print 'final method'
        print '-'

r = result()
suite = unittest.defaultTestLoader.loadTestsFromName('test.tee')

suite.run(r)

---

Following is the output when I run it
---
final method
-
first test
-
second test
-
final method
-
first test
-
second test
-

---

Looks like it's running every test twice, and I cannot figure out why.


When you run test.py, it gets to the loadTestsFromName line.  There, it
imports the module named "test" in order to load tests from it.  To import
that module, it runs test.py again.  By the time it finishes running the
contents of test.py there, it has run all of your tests once, since part
of test.py is "suite.run(r)".  Having finished that, the import of the "test"
module is complete and the "loadTestsFromName" call completes.  Then, the
"suite.run(r)" line runs again, and all your tests run again.

You want to protect the suite stuff in test.py like this:

   if __name__ == '__main__':
   ...
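
Filled in with the names from the original script, that guard would look
like this (a sketch):

if __name__ == '__main__':
    r = result()
    suite = unittest.defaultTestLoader.loadTestsFromName('test.tee')
    suite.run(r)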

Or you want to get rid of it entirely and use `python -m unittest test.py`
(with a sufficiently recent version of Python), or another runner, like
Twisted Trial or one of the many others available.

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest inconsistent

2010-01-12 Thread Chris Withers

Phlip wrote:

The reason the 'Tester' object has no attribute 'arg1' is because
"self" still refers to the object made for testA.


I hope someone else can spot the low-level reason...

...but why aren't you using http://pypi.python.org/pypi/mock/ ? Look
up its patch_object facility...


Indeed, I love mock, although I prefer the testfixtures replace decorator 
and/or context manager for installing and removing them:


http://packages.python.org/testfixtures/mocking.html

cheers,

Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting
- http://www.simplistix.co.uk

--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest inconsistent

2010-01-06 Thread Peter Otten
André wrote:

> On Jan 5, 8:14 pm, Matt Haggard  wrote:
>> Can anyone tell me why this test fails?
>>
>> http://pastebin.com/f20039b17
>>
>> This is a minimal example of a much more complex thing I'm trying to
>> do.  I'm trying to hijack a function and inspect the args passed to it
>> by another function.
>>
>> The reason the 'Tester' object has no attribute 'arg1' is because
>> "self" still refers to the object made for testA.
> 
> Quick answer: change faketest.py as follows:
> 
> #--
> # faketest.py
> #--
> 
> #from importme import render
> import importme
> 
> def run(somearg):
> return importme.render(somearg)
> 
> =
> A long answer, with explanation, will cost you twice as much ;-)
> (but will have to wait)
> 
> André

Or you figure it out yourself staring at

>>> import os
>>> from os import rename
>>> os.rename = 42
>>> rename
<built-in function rename>
>>> os.rename
42

from module import name

binds the object referred to by module.name to the name variable in the 
current module. You can think of it as a shortcut for

import module
name = module.name
del module

When you later rebind

import module
module.name = something_else

the reference in the current module isn't magically updated to point to 
something_else.

Peter

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest inconsistent

2010-01-05 Thread André
On Jan 5, 8:14 pm, Matt Haggard  wrote:
> Can anyone tell me why this test fails?
>
> http://pastebin.com/f20039b17
>
> This is a minimal example of a much more complex thing I'm trying to
> do.  I'm trying to hijack a function and inspect the args passed to it
> by another function.
>
> The reason the 'Tester' object has no attribute 'arg1' is because
> "self" still refers to the object made for testA.

Quick answer: change faketest.py as follows:

#--
# faketest.py
#--

#from importme import render
import importme

def run(somearg):
return importme.render(somearg)

=
A long answer, with explanation, will cost you twice as much ;-)
(but will have to wait)

André
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest inconsistent

2010-01-05 Thread Phlip
On Jan 5, 4:14 pm, Matt Haggard  wrote:
> Can anyone tell me why this test fails?
>
> http://pastebin.com/f20039b17
>
> This is a minimal example of a much more complex thing I'm trying to
> do.  I'm trying to hijack a function and inspect the args passed to it
> by another function.
>
> The reason the 'Tester' object has no attribute 'arg1' is because
> "self" still refers to the object made for testA.

I hope someone else can spot the low-level reason...

...but why aren't you using http://pypi.python.org/pypi/mock/ ? Look
up its patch_object facility...
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest buffing output on windows?

2009-12-07 Thread Dave Angel

Roy Smith wrote:

I'm running 2.5.1.  I've got a test suite that takes about 15 minutes
to complete.  On my unix boxes, as each test case executes, it prints
out a line (I'm using unittest.TextTestRunner(verbosity=2)) of status,
but on my windows box (running under cygwin), it buffers everything
until the entire test suite is completed.

I can stick sys.stdout.flush() and sys.stderr.flush() in my tearDown()
method, which gets me output, but that doesn't seem like the right
solution.  Is there a better way to get the test runner to flush
output  after every test case?

  
You could try starting Python with the -u switch, which says don't 
buffer stdout or stderr (at least on Python 2.6).  Try running

   python -?

to see the commandline options.  On the other hand, if it's specifically 
a cygwin problem, I have no idea.


DaveA

--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest & setup

2009-11-04 Thread Joe Riopel
On Tue, Nov 3, 2009 at 11:02 PM, Jonathan Haddad  wrote:
> I've got a class, in the constructor it loads a CSV file from disc.  I'd
> like only 1 instance of the class to be instantiated.  However, when running
> multiple unit tests, multiple instances of the class are created.  What's
> the best way for me to avoid this?  It takes about a few seconds to load the
> CSV file.

This post that might be worth reading, as it relates to testing with
singletons.

http://misko.hevery.com/2008/08/17/singletons-are-pathological-liars/

As is this

http://misko.hevery.com/code-reviewers-guide/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest & setup

2009-11-03 Thread Gabriel Genellina
On Wed, 04 Nov 2009 01:02:24 -0300, Jonathan Haddad wrote:



I've got a class, in the constructor it loads a CSV file from disc.  I'd
like only 1 instance of the class to be instantiated.  However, when running
multiple unit tests, multiple instances of the class are created.  What's
the best way for me to avoid this?  It takes about a few seconds to load
the CSV file.


Use a factory function:

_instance = None
def createFoo(parameters):
  global _instance  # required: createFoo rebinds the module-level name
  if _instance is None:
    _instance = Foo(parameters)
  return _instance

and replace all occurrences of Foo(parameters) with createFoo(parameters).  
For new-style classes, you may override the __new__ method instead.
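
A sketch of that __new__ variant (note that __init__ still runs on every
call, so the expensive CSV load should be guarded or done once in __new__):

class Foo(object):
    _instance = None

    def __new__(cls, *args, **kw):
        if cls._instance is None:
            cls._instance = super(Foo, cls).__new__(cls)
        return cls._instance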


Perhaps I didn't understand your problem correctly because this is  
unrelated to unit testing...


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest wart/bug for assertNotEqual

2009-10-22 Thread Ethan Furman

Gabriel Genellina wrote:
On Tue, 20 Oct 2009 19:57:19 -0300, Ethan Furman wrote:

Steven D'Aprano wrote:

On Tue, 20 Oct 2009 14:45:49 -0700, Zac Burns wrote:



My preference would be that failIfEqual checks both != and ==. This is
practical, and would benefit almost all use cases. If "!=" isn't "not
==" (IEEE NaNs I hear is the only known use case)



  numpy uses == and != as element-wise operators:


Two issues:  1) Sounds like we should have two more Asserts --  
failIfNotEqual, and assertNotNotEqual to handle the dichotomy in 
Python;  and 2) Does this mean (looking at Mark Dickinson's post) that 
2.7 and  3.1 are now broken?


1) assertEqual and assertNotEqual test for == and != respectively. The  
failXXX methods are being deprecated. Why do you think we need more  
asserts?


Ignorance, of course.  :)  I didn't know those were there.  Hopefully 
the OP will also now realize those are there.


2) Not exactly, but there are still inconsistencies (e.g. 
assertDictEqual  and assertMultiLineEqual use != instead of ==, and some 
assertion messages  use the wrong terminology)


--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest wart/bug for assertNotEqual

2009-10-22 Thread Gabriel Genellina
On Tue, 20 Oct 2009 19:57:19 -0300, Ethan Furman wrote:

Steven D'Aprano wrote:

On Tue, 20 Oct 2009 14:45:49 -0700, Zac Burns wrote:



My preference would be that failIfEqual checks both != and ==. This is
practical, and would benefit almost all use cases. If "!=" isn't "not
==" (IEEE NaNs I hear is the only known use case)



  numpy uses == and != as element-wise operators:


Two issues:  1) Sounds like we should have two more Asserts --  
failIfNotEqual, and assertNotNotEqual to handle the dichotomy in Python;  
and 2) Does this mean (looking at Mark Dickinson's post) that 2.7 and  
3.1 are now broken?


1) assertEqual and assertNotEqual test for == and != respectively. The  
failXXX methods are being deprecated. Why do you think we need more  
asserts?
2) Not exactly, but there are still inconsistencies (e.g. assertDictEqual  
and assertMultiLineEqual use != instead of ==, and some assertion messages  
use the wrong terminology)


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest wart/bug for assertNotEqual

2009-10-20 Thread Ethan Furman

Steven D'Aprano wrote:

On Tue, 20 Oct 2009 14:45:49 -0700, Zac Burns wrote:



My preference would be that failIfEqual checks both != and ==. This is
practical, and would benefit almost all use cases. If "!=" isn't "not
==" (IEEE NaNs I hear is the only known use case)



numpy uses == and != as element-wise operators:




>>> import numpy
>>> a = numpy.array([10, 20, 30, 40])
>>> b = numpy.array([10, 20, 31, 40])
>>> a==b
array([ True,  True, False,  True], dtype=bool)
>>> a!=b
array([False, False,  True, False], dtype=bool)
>>> not a!=b
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: The truth value of an array with more than one element is
ambiguous. Use a.any() or a.all()






then those could simply not use this method.



I'm not so sure this is a good idea. Python specifically treats == and != 
as independent. There's no reason to think that a class must have both, 
or that it's an error if it defines == without !=, or even that they are 
reflections of each other. numpy doesn't, and that's a pretty huge 
counter-example.


 


It would not surprise me if changing this would bring to light many
existing bugs.



It would surprise me.




Two issues:  1) Sounds like we should have two more Asserts -- 
failIfNotEqual, and assertNotNotEqual to handle the dichotomy in Python; 
and 2) Does this mean (looking at Mark Dickinson's post) that 2.7 and 
3.1 are now broken?


~Ethan~
--
http://mail.python.org/mailman/listinfo/python-list


Re: unittest wart/bug for assertNotEqual

2009-10-20 Thread Steven D'Aprano
On Tue, 20 Oct 2009 14:45:49 -0700, Zac Burns wrote:

> My preference would be that failIfEqual checks both != and ==. This is
> practical, and would benefit almost all use cases. If "!=" isn't "not
> ==" (IEEE NaNs I hear is the only known use case)

numpy uses == and != as element-wise operators:


>>> import numpy
>>> a = numpy.array([10, 20, 30, 40])
>>> b = numpy.array([10, 20, 31, 40])
>>> a==b
array([ True,  True, False,  True], dtype=bool)
>>> a!=b
array([False, False,  True, False], dtype=bool)
>>> not a!=b
Traceback (most recent call last):
  File "", line 1, in 
ValueError: The truth value of an array with more than one element is 
ambiguous. Use a.any() or a.all()



> then those could simply not use this method.

I'm not so sure this is a good idea. Python specifically treats == and != 
as independent. There's no reason to think that a class must have both, 
or that it's an error if it defines == without !=, or even that they are 
reflections of each other. numpy doesn't, and that's a pretty huge 
counter-example.
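
A two-line class makes the independence concrete under Python 2, where 
a missing __ne__ does *not* default to the negation of __eq__ (Python 3 
changed this):

class Weird(object):
    def __eq__(self, other):
        return True     # every instance claims equality
    # no __ne__: Python 2 falls back to the default comparison
    # instead of negating __eq__

a, b = Weird(), Weird()
print(a == b)   # True
print(a != b)   # also True -- == and != disagree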

 
> It would not surprise me if changing this would bring to light many
> existing bugs.

It would surprise me.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest wart/bug for assertNotEqual

2009-10-20 Thread Zac Burns
> I was with you right up to the last six words.
>
> Whether it's worth changing assertNotEqual to be something other than an
> alias of failIfEqual is an interesting question. Currently all the
> assert* and fail* variants are aliases of each other, which is easy to
> learn. This would introduce a broken symmetry, where assertNotEqual tests
> something different from failIfEqual, and would mean users have to learn
> which assert* methods are aliases of fail* methods, and which are not.
> I'm not sure that's a good idea.
>
> After all, the documentation is clear on what it does:
>
>     |  assertNotEqual = failIfEqual(self, first, second, msg=None)
>     |      Fail if the two objects are equal as determined by the '=='
>     |      operator.
>     |
>
>
> (Taken from help(unittest).)
>
>
>
> --
> Steven
> --
> http://mail.python.org/mailman/listinfo/python-list
>

My preference would be that failIfEqual checks both != and ==. This is
practical, and would benefit almost all use cases. If "!=" isn't "not
==" (IEEE NaNs I hear is the only known use case) then those could
simply not use this method.
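
Concretely, such a check could live in a small mixin -- a sketch of 
the idea, not anything unittest actually provides:

import unittest

class BothOperatorsMixin(object):
    def assertBothNotEqual(self, first, second, msg=None):
        # fail unless __eq__ and __ne__ both say the operands differ
        if first == second:
            self.fail(msg or '%r == %r' % (first, second))
        if not (first != second):
            self.fail(msg or 'not %r != %r' % (first, second))

class TestPoint(BothOperatorsMixin, unittest.TestCase):
    def test_differs(self):
        self.assertBothNotEqual(1, 2)   # exercises both operators

if __name__ == '__main__':
    unittest.main()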

It would not surprise me if changing this would bring to light many
existing bugs.

--
Zachary Burns
(407)590-4814
Aim - Zac256FL
Production Engineer (Digital Overlord)
Zindagi Games
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest wart/bug for assertNotEqual

2009-10-20 Thread Steven D'Aprano
On Tue, 20 Oct 2009 10:20:54 -0700, Zac Burns wrote:

> Using the assertNotEqual method of UnitTest (synonym for failIfEqual)
> only checks if first == second, but does not include not (first !=
> second)
> 
> According to the docs:
> http://docs.python.org/reference/datamodel.html#specialnames There are
> no implied relationships among the comparison operators. The truth of
> x==y does not imply that x!=y is false
> 
> The name assertNotEqual to me implies a check using !=. This misleading
> title can cause a programmer to think a test suite is complete, even if
> __ne__ is not define - a common mistake worth testing for.


I was with you right up to the last six words.

Whether it's worth changing assertNotEqual to be something other than an 
alias of failIfEqual is an interesting question. Currently all the 
assert* and fail* variants are aliases of each other, which is easy to 
learn. This would introduce a broken symmetry, where assertNotEqual tests 
something different from failIfEqual, and would mean users have to learn 
which assert* methods are aliases of fail* methods, and which are not. 
I'm not sure that's a good idea.

After all, the documentation is clear on what it does:

 |  assertNotEqual = failIfEqual(self, first, second, msg=None)
 |  Fail if the two objects are equal as determined by the '=='
 |  operator.
 |


(Taken from help(unittest).)



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list

