Hello,

on Tuesday, October 15, 2013, Steve Beattie wrote:
> On Fri, Oct 11, 2013 at 10:08:51PM +0200, Christian Boltz wrote:
> > We'll see if you still like this in some months...
>
> While I reserve the right to flake out^W^W change my mind, I help

;-)

> maintain and improve other codebases that don't get reviews before
> commits... and they sure could use it. ;-)

> > Maybe it would be even easier with "break_file" instead of
> > "add_monkey" as function name. OTOH, I'm quite sure people _will_
> > check what add_monkey() does, but nobody will read the code of
> > break_file() ;-)
>
> The thing is, I often need the full path for the subsequent
> verification check as well, so pushing the
> os.path.join(self.cache_dir, ...) call into the helper function both
> limits the generality of the helper function (in some cases, that's
> okay) and doesn't save me very much because I need to do it again
> later. So in this case, I created a write_file() function that takes
> a path and a string and writes that string to the path. It's more
> general but means the path join occurs in the test function.

What about this?

    def write_file(directory, filename, contents):
        '''write contents to directory/filename, return the full path'''
        path = os.path.join(directory, filename)
        with open(path, 'w+') as f:
            f.write(contents)
        return path

This makes the tests a bit more readable, and if you need the full
path later, you can use
    path = write_file(...)

> > > Though, what I'd really like is to somehow set self.do_cleanup to
> > > False when any test fails, so that for test cases that fail, the
> > > temporary directory is left behind, to make diagnosing why it
> > > failed easier to do. I'll think about whether there's a
> > > reasonable way to do that.
> >
> > Looks like it isn't really nice or easy, but at least possible:
> >
> > http://stackoverflow.com/questions/4414234/getting-pythons-unittest-results-in-a-teardown-method
> > http://www.piware.de/2012/10/python-unittest-show-log-on-test-failure/
> > (the comments are also interesting)
>
> Thanks. A lot of those solutions are specific to adding information
> to the logged output, which is fine, but not what I'm after; I want
> to be able to potentially re-run the test manually to see why it's
> failing with a minimum of effort.
>
> That said, I was able to make the decorator function approach work.
>
> Patch history:
> v1: - initial version
> v2: - create template base class
>     - add keep_on_fail() decorator to keep temporary test files
>       around after a test fails

I'd prefer if you could do this at a global level instead of having it
on every test. Google says it's possible, see
http://stackoverflow.com/questions/6695854/writing-a-class-decorator-that-applies-a-decorator-to-all-methods
and
http://stackoverflow.com/questions/3467526/attaching-a-decorator-to-all-functions-within-a-class
and some more, see
https://www.google.com/search?q=python+decorator+all+functions&ie=UTF-8
(see the PS below for a rough sketch of what I mean)

>     - create run_cmd_check wrapper to run_cmd that adds an assertion
>       check based on whether the return code matches the expected rc
>     - similarly, add a check to run_cmd_check for verifying that the
>       output contains a specific string, also simplifying many of
>       the caching tests.

Looks good :-) (the PPS below has a sketch of how I read this
description, just to make sure we mean the same thing)

I'd say commit the patch now and then do a follow-up patch for the
things listed above. That's easier to review ;-)

Assuming the patch consists of the previous patch + the changes listed
in your next mail,
    Acked-by: Christian Boltz <appar...@cboltz.de>

Regards,

Christian Boltz
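
PS: Here is a rough, untested sketch of the class decorator idea.
keep_on_fail() below is only my guess at what your decorator does, and
self.tmpdir and the class names are made up for the example - only
self.do_cleanup is taken from your description:

    import functools
    import shutil
    import tempfile
    import unittest

    def keep_on_fail(func):
        '''guessed decorator: on test failure, skip the cleanup in tearDown()'''
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            try:
                func(self, *args, **kwargs)
            except Exception:
                # leave the temporary directory behind for diagnosis
                self.do_cleanup = False
                raise
        return wrapper

    def keep_files_on_fail(cls):
        '''class decorator: wrap all test* methods with keep_on_fail()'''
        for name in dir(cls):
            if name.startswith('test'):
                method = getattr(cls, name)
                if callable(method):
                    setattr(cls, name, keep_on_fail(method))
        return cls

    @keep_files_on_fail
    class ExampleTest(unittest.TestCase):
        def setUp(self):
            self.do_cleanup = True
            self.tmpdir = tempfile.mkdtemp(prefix='aa-test-')

        def tearDown(self):
            if self.do_cleanup:
                shutil.rmtree(self.tmpdir)
            else:
                print('test files kept in %s' % self.tmpdir)

        def test_example(self):
            pass

That way you need the @keep_files_on_fail line only once per class
instead of decorating every single test.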
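
PPS: And this is roughly how I understand the run_cmd_check() part of
your description - the class name, the run_cmd() implementation and
the parameter names are only guesses, your patch may well differ:

    import subprocess
    import unittest

    class AATestTemplate(unittest.TestCase):
        '''guessed template base class'''

        def run_cmd(self, command):
            '''run command (a list), return (exit code, stdout+stderr)'''
            proc = subprocess.Popen(command, stdout=subprocess.PIPE,
                                    stderr=subprocess.STDOUT)
            output, _ = proc.communicate()
            return proc.returncode, output.decode('utf-8')

        def run_cmd_check(self, command, expected_rc=0, expected_string=None):
            '''run command, assert on the exit code and (optionally) the output'''
            rc, output = self.run_cmd(command)
            self.assertEqual(rc, expected_rc,
                             'expected exit code %d, got %d, output:\n%s'
                             % (expected_rc, rc, output))
            if expected_string is not None:
                self.assertIn(expected_string, output)

If that matches what the patch does, the caching tests should indeed
become much shorter.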
--
> What is that, "night"? That is the period during which you can
> administer effectively. Because apparently the users are all
> totally lazy and have logged out.
[Wilfried Kramer]