Hi,

On Saturday, 15 February 2014 06:42:49 UTC+5:30, Russell Keith-Magee wrote:
>
> Hi Akshay,
>
>>
> Great to hear! Here's some feedback:
>  
>
>> One of the improvements I see is classification of test cases. 
>> Classifying them into categories (possibly multiple categories) would 
>> make it easier for users, developers, and maintainers to run them. The 
>> basis of classification is something I am still thinking about, but 
>> classification will certainly help in deciding which test cases to run: 
>> for example, running only third-party app tests, only my own tests, only 
>> those that check part ABC of my project, or only those marked as high 
>> priority.
>>
>> How should tests be run? Here we have a few choices.
>> -> Allowing users to select and run test cases from the admin interface. 
>> (Will people like it? Choosing which tests to run would certainly become 
>> easier, but this requires the server to be up. Would that be a problem?)
>>
>
> This doesn't sound like a very viable idea to me. This is something that 
> needs to be persisted in code; a web server interface for managing it 
> doesn't strike me as a good idea. Unless you've got a particularly 
> inspired way of handling this, I don't think it will work.  
>
>> -> Sticking with and improving the current way: specify which tests to 
>> run. (Do I want to add to this every time I add new apps or test cases, 
>> or delete from it?)
>> Specify default settings per app? The app developer could decide whether 
>> his tests are included or excluded by default. This would be helpful if, 
>> say, an app (or certain components of it) does not depend on anything 
>> else (e.g. the database) and has been tested many times before; then 
>> there is no need to run those tests by default.
>> -> Having both of the above. (How would the two remain coherent?)
>>
>
> I would envisage that this would be a declarative process - in code, 
> marking a specific test as a "system" test or an "integration" test (or 
> whatever other categories we develop).  
>

This is exactly what I had in mind.
 

>> I'm still brainstorming other possible improvements we could make, and I 
>> am going through tickets to see the current problems with testing.
>>
>> I would be glad to hear your opinions and comments. 
>>
>
> My other comment: have a look into how other systems handle this. In 
> particular, look at other test running tools, like nose and py.test. Where 
> at all possible, I would prefer to avoid reinventing the wheel; if we can 
> leverage the tools of an existing testing framework, then we should do that 
> in preference to building our own.
>
>
Py.test looks good to me. It is widely used, and it can also help with 
speeding up the test run, reporting line coverage, and so on. There has 
already been work on integrating it with Django: 
http://pytest-django.readthedocs.org/en/latest/
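For reference, a minimal pytest-django setup looks roughly like the sketch below; the project and app names (`myproject`, `myapp`) are placeholders, and the coverage flag assumes the separate pytest-cov plugin is installed:

```
# pytest.ini at the project root -- the settings path is a placeholder
[pytest]
DJANGO_SETTINGS_MODULE = myproject.settings

# then, from the shell:
#   py.test                   # run the whole suite
#   py.test -m integration    # only tests marked @pytest.mark.integration
#   py.test --cov=myapp       # line coverage via the pytest-cov plugin
```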

Summarising, I have three things in mind (in decreasing order of 
priority):

1) Py.test
2) Classification
3) Refactoring existing test-cases
 

I would be glad to hear your comments.

Thanks. :)

--
Akshay Jaggi

-- 
You received this message because you are subscribed to the Google Groups 
"Django developers" group.
