Hello,
to my last question I got many helpful and enlightening answers.
So I'll try again with a completely different topic.
https://docs.python.org/3/library/unittest.html#test-discovery
says about test discovery:
"Unittest supports simple test discovery. In order to be compatible with
test discovery, all of the test files must be modules or packages importable
from the top-level directory of the project."
Finally I found why my setup.py doesn't work. I didn't put an `__init__.py` in
my `test` folder, so Python doesn't treat it as a package. That's why it found
the wrong package.
Thank you.
On Mon, Sep 02, 2019 at 04:28:50PM +0200, dieter wrote:
> YuXuan Dong writes:
> > I have uninstalled `six`
YuXuan Dong writes:
> I have uninstalled `six` using `pip uninstall six` but the problem is still
> there.
Your traceback shows that `six` does not cause your problem.
It is quite obvious that a `test_winreg` will want to load the
`winreg` module.
> As you suggested, I have checked the traceba
Thank you. It helps.
I have uninstalled `six` using `pip uninstall six` but the problem is still
there.
As you suggested, I have checked the traceback and found the exception is
caused by
`/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/test/test_winreg.py
PI. How does `winreg` come here?
>
> In my `setup.py`:
>
> test_suite="test"
>
> In my `test/test.py`:
>
> import unittest
>
> class TestAll(unittest.TestCase):
> def testall(self):
YuXuan Dong writes:
> I met a problem while I ran `python setup.py test`:
>
> unittest.case.SkipTest: No module named 'winreg'
> ... no windows modules should be necessary ...
I know of apparently unexplainable "no module named ..." messages as
a side effect of the use of "six".
"six" is use
or only UNIX-like
> systems. I don't use any Windows-specific API. How does `winreg` come here?
>
> In my `setup.py`:
>
> test_suite="test"
>
> In my `test/test.py`:
>
> import unittest
>
> class TestAll(unittest.TestCase):
In my `setup.py`:
test_suite="test"
In my `test/test.py`:
import unittest
class TestAll(unittest.TestCase):
def testall(self):
return None
It works if I run `python -m unittest test.py` alone but raises the
In my `setup.py`:
test_suite="test"
In my `test/test.py`:
import unittest
class TestAll(unittest.TestCase):
def testall(self):
return None
I've been working on this for the whole day and searched for every keyword I
On 2018-02-07 at 04:39, Etienne Robillard wrote:
Hi,
is it possible to benchmark a django application with the unittest module in
order to compare and measure the speed/latency of the django orm with
sqlite3 against ZODB databases?
I'm interested in comparing raw sqlite3 performance versus ZODB (schevo).
i wou
specific asyncio coroutines. :-)
>
> Etienne
>
>
>
> On 2018-02-07 at 04:39, Etienne Robillard wrote:
>>
>> Hi,
>>
>> is it possible to benchmark a django application with unittest module in
>> order to compare and measure the speed/latency of the django
Also, i need to isolate and measure the speed of gevent loop engine
(gevent.monkey), epoll, and python-specific asyncio coroutines. :-)
Etienne
On 2018-02-07 at 04:39, Etienne Robillard wrote:
Hi,
is it possible to benchmark a django application with unittest module
in order to compare
Hi,
is it possible to benchmark a django application with the unittest module
in order to compare and measure the speed/latency of the django orm with
sqlite3 against ZODB databases?
I'm interested in comparing raw sqlite3 performance versus ZODB
(schevo). I would like to make specific test
bin = Assemble(instText, modeSize)
#print map(lambda x: hex(ord(x)), bin)
Notes:
(1) This is a hack; the original code was probably written against an older
version of unittest and is abusing the framework to some extent. The dummy
tests I introduced ar
I am trying to get distorm3's unittests working but to no avail.
I am not really a Python programmer so was hoping someone in the know maybe
able to fix this for me.
Here's a GitHub issue I have created for the bug:
https://github.com/gdabah/distorm/issues/118
--
Aaron Gray
Independent
fle...@gmail.com writes:
> I have a really large and mature codebase in py2, but with no test or
> documentation.
>
> To resolve this I just had a simple idea to automatically generate tests and
> this is how:
>
> 1. Have a decorator that logs all arguments and return values
>
> 2. Put them in a
one stone. Write doctests.
For anything too complicated for a doctest, or for extensive and detailed
functional tests that exercise all the corners of your function, write
unit tests. The `unittest` module can run your doctests too,
automatically turning your doctests into a test case.
If you
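For example, `doctest.DocTestSuite` turns a module's doctests into unittest cases; the throwaway module below is only for illustration:

```python
import doctest
import io
import types
import unittest

# A stand-in module whose docstring carries one doctest.
mod = types.ModuleType("example")
mod.__doc__ = """
>>> 2 + 2
4
"""

# Each doctest becomes a unittest test case in the suite.
suite = doctest.DocTestSuite(mod)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```

In a real project you would pass your own module (or use the `load_tests` hook) instead of building one with `types.ModuleType`.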
I have a really large and mature codebase in py2, but with no test or
documentation.
To resolve this I just had a simple idea to automatically generate tests and
this is how:
1. Have a decorator that logs all arguments and return values
2. Put them in a test case
and have it run in production
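A minimal sketch of step 1; the decorator name and log format here are made up, not an existing tool:

```python
import functools

CALL_LOG = []  # in production this would go to a file or logging handler

def record_calls(func):
    """Record arguments and return value of each call for later test generation."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        CALL_LOG.append((func.__name__, args, kwargs, result))
        return result
    return wrapper

@record_calls
def add(a, b):
    return a + b

add(2, 3)
```

Each `(name, args, kwargs, result)` entry can then be replayed as an `assertEqual` in a generated TestCase.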
On 7/25/2016 12:45 PM, Joaquin Alzola wrote:
Hi Guys
I have a question related to unittest.
I suppose a SW that is going to live will not have any trace of
unittest module along their code.
In order to test idlelib, I had to add a _utest=False (unittest = False)
parameter to some functions
Joaquin Alzola writes:
> I suppose a SW that is going to live will not have any trace of
> unittest module along their code.
Many packages are deployed with their unit test suite. The files don't
occupy much space, don't interfere with the running of the program, and
can be he
ture their
projects and set up their setup.py.
On Mon, Jul 25, 2016 at 9:45 AM, Joaquin Alzola
wrote:
> Hi Guys
>
> I have a question related to unittest.
>
> I suppose a SW that is going to live will not have any trace of unittest
> module along their code.
>
> So is it the w
Hi Guys
I have a question related to unittest.
I suppose software that is going live will not have any trace of the unittest
module in its code.
So is the way to do it to put all unit tests in a preproduction environment
and then remove all lines related to unittest once the software is released
"class setup failed")
> AssertionError: class setup failed
>
> --
> Ran 0 tests in 0.000s
>
> FAILED (errors=1)
>
> 2. I find assert and raise RuntimeError also fitting my program, please
> sugge
name "Ian", which
> is very distracting.
>
>
I am a lazy fellow and you are a smart guy. Just a sentence with a few words.
Take care :)
> > I am skipping the unittest from setUpClass in the following way: # raise
> > unittest.SkipTest(message)
> >
> > The
I read it as the name "Ian", which
is very distracting.
> I am skipping the unittest from setUpClass in the following way: # raise
> unittest.SkipTest(message)
>
> The tests are getting skipped but I have two problems.
>
> (1) This script is in turn read by other scripts
Hi Team,
I am on Python 2.7 and Linux. I need input on the program below.
I am skipping the unittest from setUpClass in the following way: # raise
unittest.SkipTest(message)
The tests are getting skipped but I have two problems.
(1) This script is in turn read by other scripts which
ith the same short string, say, "x". Now
run the two versions, repeatedly, and time how long they take.
On Linux, I would do something like this (untested):
time python -m unittest test_mymodule > /dev/null 2>&1
the intent being to ignore the overhead of actually printing any errors
On 04/06/2016 03:58 PM, John Pote wrote:
I have been writing a very useful test script using the standard Python
'unittest' module. This works fine and has been a huge help in keeping
the system I've been writing fully working even when I make changes that
could break many feature
I have been writing a very useful test script using the standard Python
'unittest' module. This works fine and has been a huge help in keeping
the system I've been writing fully working even when I make changes that
could break many features of the system. eg major rewrite of
Vincent Davis wrote:
> On Tue, Dec 8, 2015 at 2:06 AM, Peter Otten <__pete...@web.de> wrote:
>
>> >>> import doctest
>> >>> example = doctest.Example(
>> ... "print('hello world')\n",
>> ... want="hello world\n")
>> >>> test = doctest.DocTest([example], {}, None, None, None, None)
>> >>>
On Tue, Dec 8, 2015 at 7:30 AM, Laura Creighton wrote:
> >--
> >https://mail.python.org/mailman/listinfo/python-list
>
> Check out this:
> https://pypi.python.org/pypi/pytest-ipynb
>
Thanks Laura, I think I read the description as saying I could run unittests
on source code from a jupyter notebook
On Tue, Dec 8, 2015 at 2:06 AM, Peter Otten <__pete...@web.de> wrote:
> >>> import doctest
> >>> example = doctest.Example(
> ... "print('hello world')\n",
> ... want="hello world\n")
> >>> test = doctest.DocTest([example], {}, None, None, None, None)
> >>> runner = doctest.DocTestRunner(v
In a message of Tue, 08 Dec 2015 07:04:39 -0700, Vincent Davis writes:
>On Tue, Dec 8, 2015 at 2:06 AM, Peter Otten <__pete...@web.de> wrote:
>
>> But why would you want to do that?
>
>
>Thanks Peter, I want to do that because I want to test jupyter notebooks.
>The notebook is in JSON and I can ge
On Wed, Dec 9, 2015 at 1:04 AM, Vincent Davis wrote:
> I also tried something like:
> assert exec("""print('hello word')""") == 'hello word'
I'm pretty sure exec() always returns None. If you want this to work,
you would need to capture sys.stdout into a string:
import io
import contextlib
output = io.StringIO()
with contextlib.redirect_stdout(output):
    exec("print('hello word')")
assert output.getvalue() == 'hello word\n'
On Tue, Dec 8, 2015 at 2:06 AM, Peter Otten <__pete...@web.de> wrote:
> But why would you want to do that?
Thanks Peter, I want to do that because I want to test jupyter notebooks.
The notebook is in JSON and I can get the source and result out but it was
unclear to me how to stick this into a
Vincent Davis wrote:
> If I have a string that is python code, for example
> mycode = "print('hello world')"
> myresult = "hello world"
> How can I "manually" build a unittest (doctest) and test I get myresult
>
> I have attempted to buil
On Tuesday 08 December 2015 14:30, Vincent Davis wrote:
> If I have a string that is python code, for example
> mycode = "print('hello world')"
> myresult = "hello world"
> How can I "manually" build a unittest (doctest) and test I get myr
If I have a string that is python code, for example
mycode = "print('hello world')"
myresult = "hello world"
How can I "manually" build a unittest (doctest) and test I get myresult
I have attempted to build a doctest but that is not working.
e = doctes
I've been working with unittest for a while and just started using logging, and
my question is: is it possible to use logging to display information about the
tests you're running, but still have it be compatible with the --buffer option
so that you only see it if a test fails? It see
On 9/24/2014 3:33 PM, Milson Munakami wrote:
I am learning to use unittest with python
[way too long example]
File "TestTest.py", line 44
def cleanup(self, success):
^
SyntaxError: invalid syntax
A common recommendation is to find the *minimal* example that exh
On 24/09/2014 21:06, Milson Munakami wrote:
[snipped all the usual garbage]
Would you please access this list via
https://mail.python.org/mailman/listinfo/python-list or read and action
this https://wiki.python.org/moin/GoogleGroupsPython to prevent us
seeing double line spacing and single li
On Wednesday, September 24, 2014 1:33:35 PM UTC-6, Milson Munakami wrote:
> Hi,
>
>
>
> I am learning to use unittest with python and walkthrough with this example
>
> http://agiletesting.blogspot.com/2005/01/python-unit-testing-part-1-unittest.html
>
>
> Than
Milson Munakami wrote:
> if self.reportStatus_:
> self.log.info("=== Test %s completed normally (%d
> sec)", self.name_, duration
The info() call is missing the closing parenthesis
> def cleanup(self, success):
> sys.excepthook =
On 9/24/2014 12:33 PM, Milson Munakami wrote:
def tearDown(self):
if self.failed:
return
duration = time.time() - self.startTime_
self.cleanup(True)
if self.reportStatus_:
self.
Hi,
I am learning to use unittest with python and walkthrough with this example
http://agiletesting.blogspot.com/2005/01/python-unit-testing-part-1-unittest.html
so my test script is like this:
import json
import urllib
#import time
#from util import *
import httplib
#import sys
#from scapy.all
I'm running the tests under sudo as the routines expect to be run that way.
Anybody have any ideas?
For posterity's sake:
I added a .close() method to the class being tested which destroys its big data structures; then I added a tearDownClass
method to the unittest. That seems to have
stuff being kept alive by the stack
traces of the failed tests.
There is an issue or two about unittest not releasing memory. Also,
modules are not cleared from sys.modules, so anything accessible from
global scope is kept around.
--
Terry Jan Reedy
On 03/12/2014 04:38 PM, Steven D'Aprano wrote:
[snip lots of good advice for unit testing]
I was just removing the Personally Identifiable Information. Each test is pulling a payment from a batch of payments,
so the first couple asserts are simply making sure I have the payment I think I hav
On 03/12/2014 04:47 PM, Steven D'Aprano wrote:
top -Mm -d 0.5
Cool, thanks!
--
~Ethan~
On Wed, 12 Mar 2014 08:32:29 -0700, Ethan Furman wrote:
>> Some systems have an oom (Out Of Memory) process killer, which nukes
>> (semi-random) process when the system exhausts memory. Is it possible
>> this is happening? If so, you should see some log message in one of
>> your system logs.
>
e
Linux(?) OOM killer reaping your applications, just some general
observations on your test.
> def test_xxx_1(self):
Having trouble thinking up descriptive names for the test? That's a sign
that the test might be doing too much. Each test should check one self-
contained thing. T
On Wed, 12 Mar 2014 08:32:29 -0700, Ethan Furman wrote:
> There must
> be waaay too much stuff being kept alive by the stack traces of the
> failed tests.
I believe that unittest does keep stack traces alive until the process
ends. I thought that there was a recent bug report fo
tuff being kept alive by the stack traces of the failed tests.
One thing you might try is running your tests under nose
(http://nose.readthedocs.org/). Nose knows how to run unittest tests,
and one of the gazillion options it has is to run each test case in an
isolated process:
--process-rest
On 03/12/2014 06:44 AM, Roy Smith wrote:
In article ,
Ethan Furman wrote:
I've tried it both ways, and both ways my process is being killed, presumably
by the O/S.
What evidence do you have the OS is killing the process?
I put a bare try/except around the call to unittest.main, with a pr
In article ,
Ethan Furman wrote:
> I've tried it both ways, and both ways my process is being killed, presumably
> by the O/S.
What evidence do you have the OS is killing the process?
Some systems have an oom (Out Of Memory) process killer, which nukes
(semi-random) process when the system e
On 03/11/2014 08:36 PM, Terry Reedy wrote:
On 3/11/2014 6:13 PM, John Gordon wrote:
In Ethan Furman
writes:
if missing:
raise ValueError('invoices %r missing from batch' % missing)
It's been a while since I wrote test cases, but I recall using the assert*
methods (
On 3/11/2014 6:13 PM, John Gordon wrote:
In Ethan Furman
writes:
if missing:
raise ValueError('invoices %r missing from batch' % missing)
It's been a while since I wrote test cases, but I recall using the assert*
methods (assertEqual, assertTrue, etc.) instead of ra
On 03/11/2014 03:13 PM, John Gordon wrote:
Ethan Furman writes:
if missing:
raise ValueError('invoices %r missing from batch' % missing)
It's been a while since I wrote test cases, but I recall using the assert*
methods (assertEqual, assertTrue, etc.) instead of raisin
On 03/11/2014 01:58 PM, Ethan Furman wrote:
Anybody have any ideas?
I suspect the O/S is killing the process. If I manually select the other class to run (which has all successful tests,
so no traceback baggage), it runs normally.
--
~Ethan~
In Ethan Furman
writes:
> if missing:
> raise ValueError('invoices %r missing from batch' % missing)
It's been a while since I wrote test cases, but I recall using the assert*
methods (assertEqual, assertTrue, etc.) instead of raising exceptions.
Perhaps that's the issue?
Anybody have any ideas?
--
~Ethan~
--snippet of code--
from VSS.path import Path
from unittest import TestCase, main as Run
import wfbrp
class Test_wfbrp_20140225(TestCase):
@classmethod
def setUpClass(cls):
cls.pp = wfbrp.PaymentProcessor(
'.../lo
On Saturday, January 11, 2014 11:34:30 PM UTC-5, Roy Smith wrote:
> In article ,
>
> "W. Trevor King" wrote:
>
>
>
> > On Sat, Jan 11, 2014 at 08:00:05PM -0800, CraftyTech wrote:
>
> > > I'm finding it hard to use uni
In article ,
"W. Trevor King" wrote:
> On Sat, Jan 11, 2014 at 08:00:05PM -0800, CraftyTech wrote:
> > I'm finding it hard to use unittest in a for loop. Perhaps something like:
> >
> > for val in range(25):
> > self.assertEqual(val,5,"not equa
On Sat, Jan 11, 2014 at 08:00:05PM -0800, CraftyTech wrote:
> I'm finding it hard to use unittest in a for loop. Perhaps something like:
>
> for val in range(25):
> self.assertEqual(val,5,"not equal)
>
> The loop will break after the first failure. Anyone hav
CraftyTech writes:
> I'm trying parametize my unittest so that I can re-use over and over,
> perhaps in a for loop.
The ‘testscenarios’ <https://pypi.python.org/pypi/testscenarios>
library allows you to define a set of data scenarios on your
FooBarTestCase and have all the test cas
hello all,
I'm trying parametize my unittest so that I can re-use over and over,
perhaps in a for loop. Consider the following:
'''
import unittest
class TestCalc(unittest.TestCase):
def testAdd(self):
self.assertEqual(7, 7, "Didn't
On Friday, 20 December 2013 17:41:40 UTC, Serhiy Storchaka wrote:
> On 20.12.13 16:47, Paul Moore wrote:
>
> > 1. I can run all the tests easily on demand.
> > 2. I can run just the functional or unit tests when needed.
>
> python -m unittest discover -s tests/functio
ly on demand.
I believe that if you copy Lib/idlelib/idle_test/__init__.py to
tests/__main__.py and add
import unittest; unittest.main()
then
python -m tests
would run all your tests. Lib/idlelib/idle_test/README.py may help explain.
2. I can run just the functional or unit tests when n
On 20.12.13 16:47, Paul Moore wrote:
What's the best way of structuring my projects so that:
1. I can run all the tests easily on demand.
2. I can run just the functional or unit tests when needed.
python -m unittest discover -s tests/functional
python -m unittest discover tests/funct
te a "tests" directory with an __init__.py and "unit" and "functional"
subdirectories, each with __init__.py and test_XXX.py files in them, then
"python -m unittest" works (as in, it discovers my tests fine). But if I just
want to run my unit tests, or just
On Friday, August 9, 2013 1:31:43 AM UTC-5, Peter Otten wrote:
> I see I have to fix it myself then...
Sorry man, I think in my excitement of seeing the first of your examples to
work, that I missed the second example, only seeing your comments about it at
the end of the post. I didn't expect s
On Sun, Aug 11, 2013 at 1:52 AM, Josh English
wrote:
> I'm using logging for debugging, because it is pretty straightforward and can
> be activated for a small section of the module. My modules run long (3,000
> lines or so) and finding all those dastardly print statements is a pain, and
> litt
On Saturday, August 10, 2013 4:21:35 PM UTC-7, Chris Angelico wrote:
> On Sun, Aug 11, 2013 at 12:14 AM, Roy Smith <> wrote:
>
> > Maybe you've got two different handlers which are both getting the same
> > logging events and somehow they both end up in your stderr stream.
> > Likely? Maybe not, bu
On Saturday, August 10, 2013 4:14:09 PM UTC-7, Roy Smith wrote:
>
>
> I don't understand the whole SimpleChecker class. You've created a
> class, and defined your own __call__(), just so you can check if a
> string is in a list? Couldn't this be done much simpler with a plain
> old function
On 8/10/13 4:40 PM, Roy Smith wrote:
In article ,
Josh English wrote:
I am working on a library, and adding one feature broke a seemingly unrelated
feature. As I already had Test Cases written, I decided to try to incorporate
the logging module into my class, and turn on debugging at the log
On Sun, Aug 11, 2013 at 12:14 AM, Roy Smith wrote:
> Maybe you've got two different handlers which are both getting the same
> logging events and somehow they both end up in your stderr stream.
> Likely? Maybe not, but if you don't have any logging code in the test
> at all, it becomes impossible
On Saturday, August 10, 2013 1:40:43 PM UTC-7, Roy Smith wrote:
> > For example, you drag in the logging module, and do some semi-complex
> > configuration. Are you SURE your tests are getting run multiple times,
> > or maybe it's just that they're getting LOGGED multiple times. Tear out
> > a
Aha. Thanks, Ned. This is the answer I was looking for.
I use logging in the real classes, and thought that turning setting
the level to logging.DEBUG once was easier than hunting down four
score of print statements.
Josh
On Sat, Aug 10, 2013 at 3:52 PM, Ned Batchelder wrote:
> On 8/10/13 4:40
r("%(name)s - %(levelname)s - %(message)s")
h.setFormatter(f)
self.logger.addHandler(h)
def __call__(self, thing):
self.logger.debug('calling %s' % thing)
return thing in ['a','b','c']
import unittest
class LoaderTC(unittest.Te
In article ,
Josh English wrote:
> I am working on a library, and adding one feature broke a seemingly unrelated
> feature. As I already had Test Cases written, I decided to try to incorporate
> the logging module into my class, and turn on debugging at the logger before
> the newly-broken te
ging.StreamHandler()
f = logging.Formatter("%(name)s - %(levelname)s - %(message)s")
h.setFormatter(f)
self.logger.addHandler(h)
def __call__(self, thing):
self.logger.debug('calling %s' % thing)
vals = self.callback()
ret
adam.pre...@gmail.com wrote:
> On Thursday, August 8, 2013 3:50:47 AM UTC-5, Peter Otten wrote:
>> Peter Otten wrote:
>> Oops, that's an odd class name. Fixing the name clash in Types.__new__()
>> is
>>
>> left as an exercise...
>
> Interesting, I got __main__.T, even though I pretty much just t
On Thursday, August 8, 2013 3:50:47 AM UTC-5, Peter Otten wrote:
> Peter Otten wrote:
> Oops, that's an odd class name. Fixing the name clash in Types.__new__() is
>
> left as an exercise...
Interesting, I got __main__.T, even though I pretty much just tried your code
wholesale. For what it's
On 8/8/2013 12:20 PM, adam.pre...@gmail.com wrote:
On Thursday, August 8, 2013 3:04:30 AM UTC-5, Terry Reedy wrote:
def test(f):
f.__class__.__dict__['test_'+f.__name__]
Sorry, f.__class__ is 'function', not the enclosing class. A decorator
for a method could not get the enclosing cl
ly
assign that attribute on the test methods.
You'd still write your tests using the unittest base classes, but run
them with nose.
--Ned.
I'm writing code that, well,
runs experiments. So the word "test" is already all over the place. I would
even prefer if I could do away with assuming everything starting with "test" is
a unittest, but I didn't think I could; it looks like Peter Otten got me in the
right
On Thursday, August 8, 2013 3:50:47 AM UTC-5, Peter Otten wrote:
> Peter Otten wrote:
> Oops, that's an odd class name. Fixing the name clash in Types.__new__() is
>
> left as an exercise...
I will do some experiments with a custom test loader since I wasn't aware of
that as a viable alternativ
Peter Otten wrote:
> $ python3 mytestcase_demo.py -v
> test_one (__main__.test_two) ... ok
> test_two (__main__.test_two) ... ok
>
> --
> Ran 2 tests in 0.000s
Oops, that's an odd class name. Fixing the name clash in Types.__new
adam.pre...@gmail.com wrote:
> We were coming into Python's unittest module from backgrounds in nunit,
> where they use a decorate to identify tests. So I was hoping to avoid the
> convention of prepending "test" to the TestClass methods that are to be
> actually run.
On 8/8/2013 2:32 AM, adam.pre...@gmail.com wrote:
We were coming into Python's unittest module from backgrounds in nunit, where they use a
decorate to identify tests. So I was hoping to avoid the convention of prepending
"test" to the TestClass methods that are to be actually
We were coming into Python's unittest module from backgrounds in nunit, where
they use a decorate to identify tests. So I was hoping to avoid the convention
of prepending "test" to the TestClass methods that are to be actually run. I'm
sure this comes up all the time, bu
On 2013-06-29, Steven D'Aprano wrote:
> On Sat, 29 Jun 2013 19:13:47 +, Martin Schöön wrote:
>
>> $PYTHONPATH points at both the code and the test directories.
>>
>> When I run blablabla_test.py it fails to import blablabla.py
>
> What error message do you get?
>
>
>> I have messed around f
On Sat, 29 Jun 2013 19:13:47 +, Martin Schöön wrote:
> $PYTHONPATH points at both the code and the test directories.
>
> When I run blablabla_test.py it fails to import blablabla.py
What error message do you get?
> I have messed around for over an hour and get nowhere. I have done
> unitt
>blablabla.py
> test
>blablabla_test.py
> doc
>(empty for now)
>
> blablabla_test.py contains "import unittest" and "import blablabla"
>
> $PYTHONPATH points at both the code and the test directories.
A couple of generic debugging suggestions. F
)
blablabla_test.py contains "import unittest" and "import blablabla"
$PYTHONPATH points at both the code and the test directories.
When I run blablabla_test.py it fails to import blablabla.py
I have messed around for over an hour and get nowhere. I have
done unittesting like this with succe
On 5/23/2013 2:58 AM, Ulrich Eckhardt wrote:
Well, per PEP 8, classes use CamelCaps, so your naming might break
automatic test discovery. Then, there might be another thing that could
cause this, and that is that if you have an intermediate class derived
from unittest.TestCase, that class on its
In article ,
Ulrich Eckhardt wrote:
> if you have an intermediate class derived
> from unittest.TestCase, that class on its own will be considered as test
> case! If this is not what you want but you still want common
> functionality in a baseclass, create a mixin and then derive from both
>
Am 22.05.2013 17:32, schrieb Charles Smith:
I'd like to subclass from unittest.TestCase. I observed something
interesting and wonder if anyone can explain what's going on... some
subclasses create null tests.
I can perhaps guess what's going on, though Terry is right: Your
question isn't ver
s
capital):
class AaaTestCase (StdTestCase):
differentblahblah
the test completes immediately without any work being done.
What does this mean? I see no difference with the following
import unittest
class StdTestCase (unittest.TestCase): pass
class lowerSub(StdTestCase): pass
On 22 Mai, 17:32, Charles Smith wrote:
> Hi,
>
> I'd like to subclass from unittest.TestCase. I observed something
> interesting and wonder if anyone can explain what's going on... some
> subclasses create null tests.
>
> I can create this subclass and the test works:
>
> class StdTestCase (un