On 03/13/2012 11:04 AM, Ary Manzana wrote:
> On 3/13/12 2:21 PM, Ali Çehreli wrote:
>> On 03/09/2012 06:20 AM, Andrej Mitrovic wrote:
>>
>> > The same story goes for unittests, which can't be independently
>> > run to get a list of all failing unittests
>>
>> D unittest blocks are for code correctness (as opposed to other meanings
>> of the unfortunately overused term "unit testing", e.g. the functional
>> testing of the end product). From that point of view, there should not
>> be even a single test failing.
>>
>> >, and so people are coming
>> > up with their own custom unittest framework (e.g. the Orange library).
>>
>> Yes, some unit test features are missing. From my day-to-day use I would
>> like to have the following:
>>
>> - Ensure that a specific exception is thrown
>>
>> - Test fixtures
>>
>> That obviously reflects my use of unit tests but I really don't care how
>> many tests are failing. The reason is, I start with zero failures and I
>> finish with zero failures. Any code that breaks an existing test is
>> either buggy or exposes an issue with the test, which must be dealt with
>> right then.
>>
>> Ali
>>
>
> How can you re-run just a failing test? (without having to run all the
> previous tests that will succeed?)

I know that there are test suites too, and UnitTest++, the framework that we use, supports them, but we don't use suites.

For us, there has never been a need to run only a subset of the tests. The tests must, almost by definition, run very fast anyway. We don't even notice that they run automatically as part of the build process.
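
To make the two wish-list items above concrete, here is a minimal D sketch. The function parsePositive is made up purely for illustration; std.exception's assertThrown helps with the exception check, and scope(exit) is just one way to fake a fixture by hand:

import std.exception : assertThrown;

// Hypothetical function under test: rejects negative input.
int parsePositive(int value)
{
    if (value < 0)
        throw new Exception("negative value");
    return value;
}

unittest
{
    // "Ensure that a specific exception is thrown":
    // assertThrown checks the exception type for us.
    assertThrown!Exception(parsePositive(-1));

    // Hand-rolled equivalent, if one prefers to avoid the helper:
    bool thrown = false;
    try
        parsePositive(-1);
    catch (Exception)
        thrown = true;
    assert(thrown);
}

unittest
{
    // A poor man's fixture: shared setup at the top of the block,
    // teardown with scope(exit), since D has no built-in fixtures.
    int[] data = [1, 2, 3];      // set up
    scope (exit) data = null;    // tear down

    assert(parsePositive(data[0]) == 1);
}

void main() {}  // empty; with -unittest the blocks above run before main()

Built with dmd -unittest, both blocks run automatically, which is exactly the "part of the build process" behavior I mean.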

We are getting a little off topic here, but I've been following the recent unit test thread about writing to files. Unit tests should not have external interactions like that either. For example, no test should connect to an actual server; developers wouldn't want that to happen every time a .d file is compiled. :) (Solutions like mocks, fakes, and stubs do exist. And yes, I know they are sometimes non-trivial.)
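
As a rough sketch of what a fake can look like in D (the Server interface, FakeServer, and greeting below are hypothetical names; the real implementation would open an actual connection, the fake never leaves the process):

// Hypothetical dependency of the code under test.
interface Server
{
    string get(string key);
}

// Fake used only by the tests: serves canned responses from memory.
class FakeServer : Server
{
    string[string] data;

    string get(string key)
    {
        auto p = key in data;
        return p ? *p : "";
    }
}

// Code under test depends only on the interface, not on a real server.
string greeting(Server s)
{
    return "Hello, " ~ s.get("user") ~ "!";
}

unittest
{
    auto fake = new FakeServer;
    fake.data["user"] = "Ali";
    assert(greeting(fake) == "Hello, Ali!");  // no network involved
}

void main() {}

A test like that exercises the logic around the server without ever opening a socket, so it stays fast enough to run on every build.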

Ali
