Martin Unsal <[EMAIL PROTECTED]> wrote:
> On Mar 5, 10:06 pm, [EMAIL PROTECTED] (Alex Martelli) wrote:
> > My favorite way of working: add a test (or a limited set of tests) for
> > the new or changed feature, run it, check that it fails, change the
> > code, rerun the test, check that the test now runs, rerun all tests to
> > see that nothing broke, add and run more tests to make sure the new code
> > is excellently covered, rinse, repeat. Occasionally, to ensure the code
> > stays clean, stop to refactor, rerunning tests as I go.
>
> From the way you describe your workflow, it sounds like you spend very
> little time working interactively in the interpreter. Is that the case
> or have I misunderstood?
I often do have an interpreter open in its own window, to help me find out something or other, but you're correct that it isn't where I "work"; I want all tests to be automated and repeatable, after all, so they're better written as their own scripts and run in the test framework.

I used to use a lot of doctests (often produced by copy and paste from an interactive interpreter session), but these days I lean more and more towards unittest and derivatives thereof.

Sometimes, when I don't immediately understand why a test is failing (or, at times, why it's unexpectedly succeeding _before_ I have implemented the feature it's supposed to test!-), I stick a pdb.set_trace() call at the right spot to "look around" (and find out how to fix the test and/or the code) -- I used to use "print" a lot for such exploration, but the interactive interpreter started by pdb is often handier (I can look at as many pieces of data as I need to find out about the problem).

I still prefer to run the test[s] within the test framework, getting interactive only at the point where I want to be, rather than running the tests from within pdb to "set breakpoints" manually -- not a big deal either way, I guess.

Alex
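To make the test-first loop quoted at the top concrete, here is a minimal sketch using the standard unittest module; the module name mymath and the function average() are invented for the example and simply stand in for whatever feature is being added.

# test_mymath.py -- written *before* average() exists, so the first run fails
import unittest
from mymath import average   # hypothetical module under development

class AverageTest(unittest.TestCase):
    def test_average_of_two_numbers(self):
        self.assertEqual(average([2, 4]), 3)

    def test_empty_sequence_raises(self):
        self.assertRaises(ValueError, average, [])

if __name__ == '__main__':
    unittest.main()

# mymath.py -- the smallest implementation that makes both tests pass
def average(numbers):
    if not numbers:
        raise ValueError("average() of an empty sequence")
    return sum(numbers) / len(numbers)

The first run fails (an ImportError, or a failing assertion once a stub exists); writing average() and rerunning turns the tests green, and rerunning the whole suite confirms nothing else broke.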
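And a small illustration of the pdb.set_trace() trick described above; the test body and the parse_subject() function are made up purely to show where the call goes.

# test_headers.py -- dropping into the debugger from inside a puzzling test
import pdb
import unittest
from headers import parse_subject   # hypothetical code under test

class ParseSubjectTest(unittest.TestCase):
    def test_subject_is_extracted(self):
        result = parse_subject("Subject: hello world")
        pdb.set_trace()   # pauses here: inspect `result`, step into the code, etc.
        self.assertEqual(result, "hello world")

if __name__ == '__main__':
    unittest.main()

Running the file through the test framework as usual drops you at a (Pdb) prompt just before the assertion, where you can print result, step into parse_subject, and so on; removing the set_trace() line restores the fully automated run.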