I’ve done a bunch of work with TDD and QML now, and built a very nice test
framework in CoffeeScript with “before each”/“after each” hooks, nice
spec-style ‘it "has a feature"’ syntax, signal handling, color-coded output,
the whole bit.

There are, however, two remaining big frustrations that make continuous
testing during development very challenging. One is the test runner’s
tendency to pull focus, which for longer-running tests makes it nearly
impossible to leave the test system running while coding. Every time you
save a watched file, focus is pulled away from the text editor and may or
may not ever come back. If one is working in full-screen mode on a Mac, the
entire screen is shifted to a different desktop even if no actual GUI pops
up. Is there any way to keep this from happening?

The second issue is the hard stop on test failure. In fact, there is a hard
stop even if a SignalSpy.wait() doesn’t receive its signal. It is not
necessarily an error for a signal not to fire, so I don’t agree with the
behavior that SignalSpy.wait() fails hard. More to the point, even failed
test conditions shouldn’t stop code execution: it makes it very hard to
reset the environment predictably for the next test. I can’t do things like
dump the LocalStorage after each test, or prior to each test ask my web
service whether a test object still exists (waiting for a signal from an
XMLHttpRequest), call the server’s delete API if it does, and move on to
the test either way.
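For illustration, here is roughly the per-test cleanup I have in mind,
written against the stock QtTest API; the endpoint and object id are
hypothetical stand-ins for my web service:

    import QtQuick 2.0
    import QtTest 1.0

    TestCase {
        name: "CleanupSketch"

        // Runs after every test function -- the "after each" hook.
        function cleanup() {
            var status = 0
            var xhr = new XMLHttpRequest()
            xhr.onreadystatechange = function() {
                if (xhr.readyState === XMLHttpRequest.DONE)
                    status = xhr.status
            }
            // Hypothetical endpoint: delete the test object if it is
            // still there; a 404 just means it is already gone.
            xhr.open("DELETE", "http://localhost:8080/api/test-objects/1")
            xhr.send()
            // Spin the event loop until the request completes. A plain
            // timed wait() keeps executing either way, unlike
            // SignalSpy.wait(), which fails hard when no signal arrives.
            for (var i = 0; status === 0 && i < 50; ++i)
                wait(100)
        }

        function test_placeholder() {
            verify(true)
        }
    }

As it stands, though, a failure anywhere along the way stops execution
outright instead of letting the run move on to the next test either way.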
I’d also love to be able to wait for either the success or the fail signal
at the same time, so I can jump right to debugging the issue instead of
getting an uninformative failure just because a success signal did not
fire; as it is, I can only include SignalSpy.wait() if that signal is
exactly the only thing I expect to happen. I can call a plain wait() and
then look at each spy’s count, but that makes the test very difficult to
read, and it spins off into absurd complexity in every individual test just
to handle a few conditional signals.
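As a concrete sketch of that spy-count workaround (the service object below
is a hypothetical stand-in for my real asynchronous web-service wrapper):

    import QtQuick 2.0
    import QtTest 1.0

    Item {
        QtObject {
            id: service                  // hypothetical async service wrapper
            signal succeeded()
            signal failed(string reason)
            function start() { /* kick off the request */ }
        }

        SignalSpy { id: successSpy; target: service; signalName: "succeeded" }
        SignalSpy { id: failSpy;    target: service; signalName: "failed" }

        TestCase {
            name: "EitherSignal"

            function test_request() {
                service.start()
                // A timed wait instead of successSpy.wait(): execution
                // continues no matter which signal (if any) fired.
                wait(500)
                if (failSpy.count > 0)
                    fail("request failed -- debug from here")
                compare(successSpy.count, 1)
            }
        }
    }

Readable enough for a single test, but multiply that boilerplate by every
test with a couple of conditional signals and it gets absurd fast.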

So I’d like to know if there is any way to alter these two behaviors as
things stand now. I’m hard pressed to understand a reason to fail hard when
a test condition fails; is this deliberate behavior? I’d also like to ask
whoever is working on the QML test case machinery whether these are
features that could be included in the near future (perhaps even as the
default behavior).

Thanks,
j

P.S. I’ll probably open source the test framework in the near future if anyone 
is interested.