On Tue, Sep 8, 2009 at 12:34 PM, Andrew Straw <straw...@astraw.com> wrote:
> Michael Droettboom wrote:
>> More information after another build iteration.
>>
>> The two tests that failed after updating to the unhinted images were
>> subtests of tests that were failing earlier. If a single test
>> function outputs multiple images, image comparison stops after the
>> first mismatched image. So there's nothing peculiar about these
>> tests; the system just wasn't reporting them as failing before,
>> since they were short-circuited by earlier failures. I wonder if
>> it's possible to run through all the images and batch up all the
>> failures together, so we don't have these "hidden" failures -- that
>> might mean fewer iterations with the buildbots down the road.
>
> Ahh, good point. I can collect the failures in the image_comparison()
> decorator and raise one failure that describes all the failed images.
> Right now the loop that iterates over the images raises an exception on
> the first failure, which clearly breaks out of the loop. I've added it to
> the nascent TODO list, which I'll check into the repo next to
> _buildbot_test.py.
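For reference, a minimal sketch of the batching approach Andrew describes: collect every mismatch in the loop, then raise a single exception that lists them all. Here `compare_fn` is a hypothetical stand-in for whatever per-image comparison the decorator calls (returning None on a match, an error message on a mismatch); the real image_comparison() internals will differ.

```python
class ImageComparisonFailure(AssertionError):
    """Raised once at the end, describing every failed image."""


def compare_all(image_pairs, compare_fn):
    """Compare every (expected, actual) pair instead of stopping at the first failure."""
    failures = []
    for expected, actual in image_pairs:
        message = compare_fn(expected, actual)
        if message is not None:
            failures.append(message)  # record the failure, but keep looping
    if failures:
        # One exception summarizing all mismatched images,
        # so no failure is hidden behind an earlier one.
        raise ImageComparisonFailure(
            "%d image(s) failed comparison:\n%s"
            % (len(failures), "\n".join(failures))
        )
```

The only change from the current behavior is moving the `raise` out of the loop; everything the loop already computes per image is just accumulated first.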
Should I hold off on committing the other formatter baselines until you
have made these changes so you can test, or do you want me to go ahead
and commit the rest of these now?

_______________________________________________
Matplotlib-devel mailing list
Matplotlib-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/matplotlib-devel