Cheeky way to get me more involved in contributing, but okay, I'll bite. ;) Switching discussion to the dev list.
So how would you want the feature to work? I'd suggest an initial set of
requirements something like the following:

- Need to support the ability to define multiple setup and/or tear down
  tasks.
- It should be possible to specify dependencies between setup tasks and
  between tear down tasks.
- Individual tests need to be able to indicate which setup and/or tear
  down tasks they require, similar to the way DEPENDS is used to specify
  dependencies between test cases.
- When using ctest --rerun-failed, ctest should automatically invoke any
  setup or tear down tasks required by the test cases that will be re-run.
- Setup or tear down tasks which reference executable targets should
  substitute the actual built executable, just like add_custom_command()
  does.

Some open questions:

- Should setup and tear down tasks be defined in pairs, or should they be
  completely independent (the latter would still require the ability to
  specify a dependency of a tear down task on a setup task)?
- Should the setup and tear down tasks be defined by a new CTest/CMake
  command, or should an existing mechanism (e.g. add_custom_command()) be
  extended?
- If no test case has a dependency on a setup or tear down task, should
  that task be skipped? Perhaps tasks need a flag indicating whether they
  always run or only run when a test case depends on them.
- What terminology should we use? Frameworks like GoogleTest use the term
  test *fixtures* for this sort of thing. The terms setup and tear down
  are a bit imprecise and cumbersome, so we would probably want something
  better than those.
- Would it make sense for the ctest command line to support disabling
  setup and/or tear down steps? I can see some potential scenarios where
  this may be desirable, but maybe this is getting too ambitious for a
  starting set of requirements.
- What should happen if a setup or tear down task fails? How would failure
  be detected? How would such failures impact things like a CDash test
  report, etc.?

I think that's probably enough to kick off discussions for now. A couple
of rough sketches follow below to make the ideas a bit more concrete.
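First, a sketch of how the new feature might look in a CMakeLists.txt.
To be clear, none of this exists today: the property names
(FIXTURES_SETUP, FIXTURES_CLEANUP, FIXTURES_REQUIRED) and the test and
command names are placeholders I've made up for illustration, not a
settled design.

# Hypothetical syntax only -- the FIXTURES_* property names below are
# placeholders, not an existing CTest API.

# The expensive service is started/stopped by pseudo-tests so that ctest
# can schedule and report on them like any other test. "server_control"
# is a made-up command standing in for whatever manages the real service.
add_test(NAME startServer COMMAND server_control --start)
add_test(NAME stopServer  COMMAND server_control --stop)
set_tests_properties(startServer PROPERTIES FIXTURES_SETUP   expensiveServer)
set_tests_properties(stopServer  PROPERTIES FIXTURES_CLEANUP expensiveServer)

# Real test cases declare which setup/tear down tasks they require. ctest
# would then run startServer before the first of them and stopServer after
# the last, even when only a subset is selected via -R or --rerun-failed,
# and regardless of whether the tests themselves pass or fail.
add_test(NAME clientTest1 COMMAND client_test_1)
add_test(NAME clientTest2 COMMAND client_test_2)
set_tests_properties(clientTest1 clientTest2
                     PROPERTIES FIXTURES_REQUIRED expensiveServer)

Modelling the setup and tear down tasks as tests registered via
add_test() is only one option here; whether to reuse an existing
mechanism or add a new command is one of the open questions above.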
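For comparison, the DEPENDS-based workaround discussed in the quoted
thread below looks roughly like this (test and command names again
invented for illustration). DEPENDS only constrains ordering among the
tests that ctest actually schedules, so when a subset is selected with
-R or --rerun-failed, neither pseudo-test is run at all.

# Current workaround: setup/tear down "tests" ordered with the DEPENDS
# test property.
add_test(NAME startServer COMMAND server_control --start)
add_test(NAME clientTest1 COMMAND client_test_1)
add_test(NAME clientTest2 COMMAND client_test_2)
add_test(NAME stopServer  COMMAND server_control --stop)

# DEPENDS gives ordering only: when all four tests are scheduled, the
# client tests run after startServer and stopServer runs after them.
# It says nothing about startServer/stopServer being required, so a
# partial run simply leaves them out.
set_tests_properties(clientTest1 clientTest2 PROPERTIES DEPENDS startServer)
set_tests_properties(stopServer PROPERTIES DEPENDS "clientTest1;clientTest2")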
On Sun, Aug 21, 2016 at 11:41 PM, David Cole <dlrd...@aol.com> wrote:
> The best thing to do would be to add the feature to ctest, and
> contribute to the CMake community.
>
> I, too, use the "run this test first" and "that test last" technique,
> and set up DEPENDS property values to ensure ordering when all tests
> are run in parallel. However, as you noted, this does not work to run
> subsets of tests reliably. For me, I am able to live with the partial
> solution because the vast majority of my tests can be run
> independently anyhow and we usually do run all the tests, but a setup
> / teardown step for the whole suite would be a welcome addition to
> ctest.
>
> Looking forward to your patch... :-)
>
> David C.
>
> On Sat, Aug 20, 2016 at 8:32 PM, Craig Scott <craig.sc...@crascit.com> wrote:
> > Let's say a project defines a bunch of tests which require setup and
> > tear down steps before/after all the tests are run (not each individual
> > test, I'm talking here about one setup before all tests are run and one
> > tear down after all tests have finished). While this could be done by a
> > script driving CTest itself, it is less desirable since different
> > platforms may need different driver scripts and this seems like
> > something CTest should be able to handle itself (if the setup/tear down
> > steps use parts of the build, that only strengthens the case to have
> > them handled by CMake/CTest directly).
> >
> > It is possible to abuse the DEPENDS test property and define setup and
> > tear down "tests" which are not really tests but which perform the
> > necessary steps. While this mostly works, it is not ideal and in
> > particular it doesn't work with CTest's --rerun-failed option. I'm
> > wondering if there's currently a better way of telling CMake/CTest
> > about a setup step which must be run before some particular set of test
> > cases and a tear down step after they are all done. The tear down step
> > needs to be performed regardless of whether any of the real test cases
> > pass or fail.
> >
> > The motivating case is to start up and shut down a service which a
> > (subset of) test cases need running. That service is expensive to set
> > up and hence it isn't feasible to start it up and shut it down for
> > every test case individually.
> >
> > Any ideas?
> >
> > --
> > Craig Scott
> > Melbourne, Australia
> > http://crascit.com

--
Craig Scott
Melbourne, Australia
http://crascit.com