Steven D'Aprano added the comment:

On 15/08/13 22:58, ezio.melo...@gmail.com wrote:
> http://bugs.python.org/review/18606/diff/8927/Lib/statistics.py#newcode277
> Lib/statistics.py:277: assert isinstance(x, float) and
> isinstance(partials, list)
> Is this a good idea?

I think so. add_partials is internal/private, so I don't have to worry about the 
caller providing wrong arguments, say a non-float. But I still want some testing 
to detect coding errors on my part. Using assert for this sort of internal 
pre-condition check is exactly what assert is designed for.
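
To illustrate the sort of thing I mean, here is a minimal sketch of a private 
helper guarded by an assert. The body below is the well-known partial-sums 
technique, simplified for illustration; it is not the actual Lib/statistics.py 
code:

    def add_partials(x, partials):
        """Add float x into the list of partial sums, in place.

        Internal/private helper: callers are trusted to pass a float and
        a list.  The bare assert documents that pre-condition and catches
        my own coding errors during development; it vanishes entirely
        under ``python -O``.
        """
        assert isinstance(x, float) and isinstance(partials, list)
        i = 0
        for y in partials:
            if abs(x) < abs(y):
                x, y = y, x
            hi = x + y
            lo = y - (hi - x)
            if lo:
                partials[i] = lo
                i += 1
            x = hi
        partials[i:] = [x]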


> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics.py#newcode144
> Lib/test/test_statistics.py:144: assert data != sorted(data)
> Why not assertNotEqual?

I use bare asserts for testing code logic, even if the code is test code. So if 
I use self.assertSpam(...) then I'm performing a unit test of the module being 
tested. If I use a bare assert, I'm asserting something about the test logic 
itself.
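
A sketch of the distinction (the test and data below are made up for 
illustration, not taken from test_statistics.py):

    import statistics
    import unittest

    class MedianTest(unittest.TestCase):
        def test_median_on_unsorted_data(self):
            data = [3, 1, 4, 1, 5, 9, 2]
            # Bare assert: a sanity check on the test's own logic.  The
            # test is only meaningful if the data really is unsorted, and
            # a failure here means the *test* is broken.
            assert data != sorted(data)
            # assert* method: the actual unit test of the module.
            self.assertEqual(statistics.median(data), 3)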


> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics_approx.py
> File Lib/test/test_statistics_approx.py (right):
>
> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics_approx.py#newcode1
> Lib/test/test_statistics_approx.py:1: """Numeric approximated equal
> comparisons and unit testing.
> Do I understand correctly that this is just an helper module used in
> test_statistics and that it doesn't actually test anything from the
> statistics module?

Correct.


> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics_approx.py#newcode137
> Lib/test/test_statistics_approx.py:137: # and avoid using
> TestCase.almost_equal, because it sucks
> Could you elaborate on this?

Ah, I misspelled that; it should be "TestCase.assertAlmostEqual". As for why I 
avoid it:

- Using round() to test for equal-to-some-tolerance is IMO quite an 
idiosyncratic way of doing approx-equality tests. I've never seen anyone do it 
that way before. It surprises me.

- It's easy to think that ``places`` means significant figures, not decimal 
places.

- There's now a delta argument that is the same as my absolute error tolerance 
``tol``, but no relative error argument.

- You can't set a per-instance error tolerance (see the sketch below for the 
tol/rel approach I use instead).
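
A minimal sketch of that tol/rel approach; the names mirror my approx_equal and 
assertApproxEqual, but the bodies here are simplified for illustration (the real 
code also has to cope with infinities, NANs and so on):

    import unittest

    def approx_equal(x, y, tol=1e-12, rel=1e-7):
        # Pass if x and y agree to within an absolute tolerance ``tol``
        # OR a relative tolerance ``rel``; set either to 0 to disable it.
        if x == y:
            return True
        diff = abs(x - y)
        return diff <= tol or diff <= rel * max(abs(x), abs(y))

    class NumericTestCase(unittest.TestCase):
        # Class defaults, which subclasses or instances can override.
        tol = 1e-12
        rel = 1e-7

        def assertApproxEqual(self, first, second, tol=None, rel=None, msg=None):
            tol = self.tol if tol is None else tol
            rel = self.rel if rel is None else rel
            if not approx_equal(first, second, tol=tol, rel=rel):
                standard_msg = '%r != %r within tol=%r, rel=%r' % (
                    first, second, tol, rel)
                raise self.failureException(msg or standard_msg)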


> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics_approx.py#newcode241
> Lib/test/test_statistics_approx.py:241: assert len(args1) == len(args2)
> Why not assertEqual?

As above, I use bare asserts to test the test logic, and assertSpam methods to 
perform the test. In this case, I'm confirming that I haven't created dodgy 
test data.


> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics_approx.py#newcode255
> Lib/test/test_statistics_approx.py:255: self.assertTrue(approx_equal(b,
> a, tol=0, rel=rel))
> Why not assertApproxEqual?

Because I'm testing the approx_equal function. I can't use assertApproxEqual to 
test its own internals.
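
For instance, a test along these lines (an illustrative fragment, assumed to 
live inside a NumericTestCase subclass with approx_equal in scope):

    def test_relative_symmetry(self):
        # approx_equal itself is under test, so use plain assertTrue
        # rather than assertApproxEqual, which is built on top of it.
        for a, b, rel in [(1.0, 1.0 + 1e-9, 1e-8), (100.0, 100.001, 1e-4)]:
            self.assertTrue(approx_equal(a, b, tol=0, rel=rel))
            self.assertTrue(approx_equal(b, a, tol=0, rel=rel))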

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue18606>
_______________________________________