Re: Adding an option to re-test only failed tests

2009-10-02 Thread Rob Madole

> - if there's no 'failure record' run all
> - if there's some record, first test those that have failed the last time
>   - if they still fail, stop there
>   - if there's no further failures, rerun the whole set

That's a pretty cool idea.  I haven't seen this kind of behavior
before, but it makes total sense.  It's the way everyone does it
manually anyway; this would just take the manual command-line
fiddling out of the process.  I wonder what you would call such an
option.  ./bin/test --smartfail?
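
Something like that could be sketched out pretty simply.  Here's a rough,
hypothetical illustration of the decision logic (none of this exists in
Django; it assumes failed test labels get written to a small JSON record
file between runs, and that some run_tests() callable returns the labels
that failed):

import json
import os

RECORD = ".failedtests"  # hypothetical record file in the project root

def load_failed():
    if os.path.exists(RECORD):
        with open(RECORD) as fh:
            return json.load(fh)
    return []

def save_failed(labels):
    with open(RECORD, "w") as fh:
        json.dump(labels, fh)

def smart_run(run_tests):
    """run_tests(labels) runs the named tests (everything if labels is
    empty) and returns the list of labels that failed."""
    previous = load_failed()
    if previous:
        # Re-run only what failed last time.
        still_failing = run_tests(previous)
        if still_failing:
            save_failed(still_failing)
            return still_failing
    # No record, or the previous failures now pass: run the whole suite.
    failed = run_tests([])
    save_failed(failed)
    return failed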

Rob



Re: Looking for a project

2009-10-01 Thread Rob Madole

I'd really love to see Selenium or Windmill integrated into the
Django testing framework.  That would be really fun to demonstrate in
class too when you get done. :D  This idea was listed on
http://code.djangoproject.com/wiki/SummerOfCode2009.



Re: Adding an option to re-test only failed tests

2009-09-30 Thread Rob Madole

> From the point of view of encouraging the usage of nose, either would
> work fine. I think this fits into the conversation at DjangoCon
> about how we should go about encouraging Django users to explore the
> wider Python ecosystem. The important thing is that we can have some
> official (or semi-official) documentation somewhere saying "to access
> enhanced test running commands, install nose and drop this string
> into your TEST_RUNNER setting." The string itself could point at
> django.something or nose.something; the important thing from my point
> of view is that they don't have to grab a test runner script from
> another third party, then figure out where to put it within their
> project. If nose doesn't want to ship the test runner, I'd be fine with
> putting it in django.contrib somewhere.

It's hard to get people to write tests.  It's hard to get them to
document their work.  I wouldn't care whether it's Nose or Django that
ships the test runner, just as long as running "easy_install nose" and
changing TEST_RUNNER is all a user has to do to make the switch.
I just can't imagine the Nose guys including a Django test runner in
their base install.  Maybe I'm wrong.
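
To make the switch concrete, what I'm picturing in settings.py is roughly
this (the nose-backed dotted path is the one floated earlier in the thread,
not something that exists today; the commented-out line is the current
default runner):

# settings.py

# Current default: Django's built-in simple test runner.
# TEST_RUNNER = 'django.test.simple.run_tests'

# Proposed: after easy_install nose, point Django at a nose-backed runner.
TEST_RUNNER = 'django.contrib.test.nose.run_tests'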

Simon's comment about documentation makes me think there needs to be
some detailed docs about how you would use this (beyond the "change
your TEST_RUNNER").  The Nose docs are good, but they don't have the
same flavor as the Django docs; the audience is different, I think.
If this is really going to be easy, I'd like to see docs about
how to use --pdb and --failed and --coverage built right into the
Django testing docs.

This may sunset or compete with two other tickets that are slotted for
1.2 though:
http://code.djangoproject.com/ticket/4501 (via Nose's coverage
support)
http://code.djangoproject.com/ticket/11613 (you can use --pdb and
--pdb-failures to get a similar behavior)

Rob



Re: Adding an option to re-test only failed tests

2009-09-29 Thread Rob Madole

I'll see if I can talk Jeff into adding what he's got as a start to
this.  It looks solid to me.

Ticket and patches forthcoming...

On Sep 29, 2:47 pm, Simon Willison <si...@simonwillison.net> wrote:
> On Sep 29, 7:34 pm, Rob Madole <robmad...@gmail.com> wrote:
>
> > TEST_RUNNER = 'django.contrib.test.nose.run_tests'
>
> > There might be some futzy bits to make that actually work, but I think
> it'd be doable.
>
> I'd love to see this working. Obviously this would work just as well
> implemented as an external project - but if it's as useful as it
> sounds I'd personally be up for including it in core, if only to
> promote the usage of nose amongst Django developers (and hence help
> weaken the impression that Django doesn't integrate well enough with
> the wider Python community).
>
> Cheers,
>
> Simon



Re: Adding an option to re-test only failed tests

2009-09-29 Thread Rob Madole

http://blog.jeffbalogh.org/post/57653515/nose-test-runner-for-django

It's certainly been done and doesn't require changes to Django.

On Sep 29, 1:34 pm, Rob Madole <robmad...@gmail.com> wrote:
> Ok, --failfast would be nice too :D, I think I remember seeing a
> ticket on that.  So make that 4 features from nose...
>
> Which would be great if the test is third or fourth in the stack.  If
> it's the last test in 50, it would lose its effectiveness.
>
> I know, I know.  If you are running 50 tests you can reduce that down
> to the module that is causing the problem.
>
> Maybe time would be better spent making the use of nose really super
> easy.
>
> In settings.py:
>
> TEST_RUNNER = 'django.contrib.test.nose.run_tests'
>
> There might be some futzy bits to make that actually work, but I think
> it'd be doable.
>
> Eh?
>
> Rob
>
> On Sep 29, 1:23 pm, Rob Madole <robmad...@gmail.com> wrote:
>
>
>
> > Yep, I use the pdb stuff too.  That would be handy.
>
> > The way this works in nose is through the testid plugin. Typically you
> > do this:
>
> > nosetests --with-id --failed
>
> > This will create a file called .noseids in the current working
> > directory.
>
> > You can make it use something else by saying:
>
> > nosetests --with-id --id-file=/somewhere/else/.noseids --failed
>
> > As far as storing the data of which test failed for Django, I'm not
> > sure what the *best* approach would be.  Ned Batchelder's coverage
> > module does a similar thing.  It keeps a .coverage file in the root I
> > think.  Maybe just call ours .failedtests.  Kinda gross, and not my
> > first choice, but it would work.
>
> > Or, perhaps use Python's tempfile module.  But I'm not sure how to
> > grab a hold of the temp file again for the second pass through (maybe
> > tempfile.NamedTemporaryFile but this has problems on some platforms
> > according to the docs).
>
> > On one hand, I can see this argument: If you are adding 3 features
> > from nose, why not just use nose.  But setting up nose and Django to
> > use it as the test runner isn't trivial the last time I checked.
> > We're using buildout to ease the pain.
>
> > Thanks for the input.
>
> > Rob
>
> > On Sep 29, 12:58 pm, Simon Willison <si...@simonwillison.net> wrote:
>
> > > On Sep 29, 5:03 pm, Rob Madole <robmad...@gmail.com> wrote:
>
> > > > I've been using nose for our tests, and one of the features that I
> > > > really like is the ability to run the tests again but filter only the
> > > > ones that caused a problem.
>
> > > > I'm thinking it would look something like this
>
> > > > ./manage.py test --failed
>
> > > > Does this sound worthwhile to anybody?
>
> > > I don't understand how this works - does it persist some indication of
> > > which tests failed somewhere? If so, where?
>
> > > If we're talking about features from nose, the two I'd really like in
> > > Django's test runner are --pdb and --pdb-failures:
>
> > > --pdb = when an error occurs, drop straight in to the interactive
> > > debugger
> > > --pdb-failures = when a test assertion fails, drop in to the debugger
>
> > > Cheers,
>
> > > Simon



Re: Adding an option to re-test only failed tests

2009-09-29 Thread Rob Madole

Ok, --failfast would be nice too :D, I think I remember seeing a
ticket on that.  So make that 4 features from nose...

Which would be great if the test is third or fourth in the stack.  If
it's the last test in 50, it would lose its effectiveness.

I know, I know.  If you are running 50 tests you can reduce that down
to the module that is causing the problem.

Maybe time would be better spent making the use of nose really super
easy.

In settings.py:

TEST_RUNNER = 'django.contrib.test.nose.run_tests'

There might be some futzy bits to make that actually work, but I think
it'd be doable.

Eh?

Rob

On Sep 29, 1:23 pm, Rob Madole <robmad...@gmail.com> wrote:
> Yep, I use the pdb stuff too.  That would be handy.
>
> The way this works in nose is through the testid plugin. Typically you
> do this:
>
> nosetests --with-id --failed
>
> This will create a file called .noseids in the current working
> directory.
>
> You can make it use something else by saying:
>
> nosetests --with-id --id-file=/somewhere/else/.noseids --failed
>
> As far as storing the data of which test failed for Django, I'm not
> sure what the *best* approach would be.  Ned Batchelder's coverage
> module does a similar thing.  It keeps a .coverage file in the root I
> think.  Maybe just call ours .failedtests.  Kinda gross, and not my
> first choice, but it would work.
>
> Or, perhaps use Python's tempfile module.  But I'm not sure how to
> grab a hold of the temp file again for the second pass through (maybe
> tempfile.NamedTemporaryFile but this has problems on some platforms
> according to the docs).
>
> On one hand, I can see this argument: If you are adding 3 features
> from nose, why not just use nose.  But setting up nose and Django to
> use it as the test runner isn't trivial the last time I checked.
> We're using buildout to ease the pain.
>
> Thanks for the input.
>
> Rob
>
> On Sep 29, 12:58 pm, Simon Willison <si...@simonwillison.net> wrote:
>
>
>
> > On Sep 29, 5:03 pm, Rob Madole <robmad...@gmail.com> wrote:
>
> > > I've been using nose for our tests, and one of the features that I
> > > really like is the ability to run the tests again but filter only the
> > > ones that caused a problem.
>
> > > I'm thinking it would look something like this
>
> > > ./manage.py test --failed
>
> > > Does this sound worthwhile to anybody?
>
> > I don't understand how this works - does it persist some indication of
> > which tests failed somewhere? If so, where?
>
> > If we're talking about features from nose, the two I'd really like in
> > Django's test runner are --pdb and --pdb-failures:
>
> > --pdb = when an error occurs, drop straight in to the interactive
> > debugger
> > --pdb-failures = when a test assertion fails, drop in to the debugger
>
> > Cheers,
>
> > Simon



Re: Adding an option to re-test only failed tests

2009-09-29 Thread Rob Madole

Yep, I use the pdb stuff too.  That would be handy.
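
Roughly, the pdb options boil down to post-mortem debugging from the test
result.  A bare-bones, plain-unittest sketch of the idea (invented class
name, not nose's actual plugin code):

import pdb
import unittest

class PdbOnErrorResult(unittest.TestResult):
    """Drop into the debugger when a test errors, and optionally when an
    assertion fails -- roughly what --pdb / --pdb-failures do."""

    def __init__(self, debug_failures=False):
        unittest.TestResult.__init__(self)
        self.debug_failures = debug_failures

    def addError(self, test, err):
        unittest.TestResult.addError(self, test, err)
        pdb.post_mortem(err[2])  # err is (exc_type, exc_value, traceback)

    def addFailure(self, test, err):
        unittest.TestResult.addFailure(self, test, err)
        if self.debug_failures:
            pdb.post_mortem(err[2])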

The way the --failed option works in nose is through the testid
plugin.  Typically you do this:

nosetests --with-id --failed

This will create a file called .noseids in the current working
directory.

You can make it use something else by saying:

nosetests --with-id --id-file=/somewhere/else/.noseids --failed

As far as storing the data of which test failed for Django, I'm not
sure what the *best* approach would be.  Ned Batchelder's coverage
module does a similar thing.  It keeps a .coverage file in the root I
think.  Maybe just call ours .failedtests.  Kinda gross, and not my
first choice, but it would work.

Or, perhaps use Python's tempfile module.  But I'm not sure how to
grab a hold of the temp file again for the second pass through (maybe
tempfile.NamedTemporaryFile but this has problems on some platforms
according to the docs).
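
Whatever file we settle on, the bookkeeping itself is small.  A sketch of
collecting the dotted test identifiers in plain unittest (invented class
name, nothing Django- or nose-specific):

import unittest

class FailureRecordingResult(unittest.TestResult):
    """Collect dotted identifiers of failed and errored tests so a later
    run can be restricted to just those tests."""

    def __init__(self):
        unittest.TestResult.__init__(self)
        self.failed_ids = []

    def addFailure(self, test, err):
        unittest.TestResult.addFailure(self, test, err)
        self.failed_ids.append(test.id())  # e.g. 'myapp.tests.FooTest.test_bar'

    def addError(self, test, err):
        unittest.TestResult.addError(self, test, err)
        self.failed_ids.append(test.id())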

On one hand, I can see this argument: If you are adding 3 features
from nose, why not just use nose.  But setting up nose and Django to
use it as the test runner isn't trivial the last time I checked.
We're using buildout to ease the pain.

Thanks for the input.

Rob

On Sep 29, 12:58 pm, Simon Willison <si...@simonwillison.net> wrote:
> On Sep 29, 5:03 pm, Rob Madole <robmad...@gmail.com> wrote:
>
> > I've been using nose for our tests, and one of the features that I
> > really like is the ability to run the tests again but filter only the
> > ones that caused a problem.
>
> > I'm thinking it would look something like this
>
> > ./manage.py test --failed
>
> > Does this sound worthwhile to anybody?
>
> I don't understand how this works - does it persist some indication of
> which tests failed somewhere? If so, where?
>
> If we're talking about features from nose, the two I'd really like in
> Django's test runner are --pdb and --pdb-failures:
>
> --pdb = when an error occurs, drop straight in to the interactive
> debugger
> --pdb-failures = when a test assertion fails, drop in to the debugger
>
> Cheers,
>
> Simon



Adding an option to re-test only failed tests

2009-09-29 Thread Rob Madole

I've been using nose for our tests, and one of the features that I
really like is the ability to run the tests again but filter only the
ones that caused a problem.

I'm thinking it would look something like this

./manage.py test --failed

Does this sound worthwhile to anybody?

Rob



Re: Final Multi-DB status Update

2009-09-29 Thread Rob Madole

Hmm.  I just spent some time looking at #11828, and I don't think the
"sync one db at a time" approach will work.  The first problem it causes
is with anything that subscribes to the post-sync signal.  The content
types app does this so it can create permissions.  If we sync one db at
a time, I don't see how content types can do its job.

My response in the ticket was to just make sure that the default db
gets synced first, guaranteed.  But based on what JL said about resetdb
and running the SQL manually, it sounds like the roots go down a little
deeper.  I disagree with myself now.

What if syncdb did this in two phases?  First it goes through every db
connection and creates the appropriate tables, then it does everything
else.
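
As a sketch of that two-phase idea (all names invented, nothing here is
real Django internals):

def two_phase_syncdb(connections, create_tables, run_post_sync):
    """Sketch only: `connections` is an iterable of database connections,
    `create_tables(conn)` creates the schema on one connection, and
    `run_post_sync(conn)` runs the follow-up work (content types,
    permissions, custom SQL) for that connection."""
    # Phase 1: every database gets its tables before any post-sync
    # handler runs.
    for conn in connections:
        create_tables(conn)
    # Phase 2: handlers like content types can now assume every database
    # they care about already has its schema in place.
    for conn in connections:
        run_post_sync(conn)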

Oh, and I certainly think that tables should only exist in the dbs
they apply to.  I had a yuck moment the first time I opened up the db
and saw those empty tables.  This one is going to be a hard sell on
our dev team.  I'm anticipating the argument I'm going to have with
the DBA-ish fella on the team.

Rob

On Sep 14, 2:57 pm, Alex Gaynor  wrote:
> On Mon, Sep 14, 2009 at 1:54 PM, Joseph Kocherhans
>
>
>
>
>
>  wrote:
>
> > On Mon, Sep 14, 2009 at 12:16 PM, Alex Gaynor  wrote:
>
> >> FWIW, Russ, Joseph Kocherhans, and I discussed this at the DjangoCon
> >> sprints and our conclusion was to have syncdb only sync a single table
> >> at a time, and to take a --exclude flag (or was it --include?) to
> >> specify what models should be syncd.
>
> > Did you mean to say "sync a single db" instead of "sync a single table"?
> > Russ was talking about an --exclude flag at the sprints, but it doesn't
> > really matter either way to me.
> > Joseph
>
> Yes :)
>
> Alex
>
> --
> "I disapprove of what you say, but I will defend to the death your
> right to say it." -- Voltaire
> "The people's good is the highest law." -- Cicero
> "Code can always be simpler than you think, but never as simple as you
> want" -- Me
