On Feb 18, 2014, at 1:16 PM, Steve Molloy <[email protected]> wrote:
> Maybe the solution would be to keep running them, but have the failure
> message specify the Jira entry associated with it.

Yeah, it's not a bad idea for long-running issues. You're not necessarily running into long-running issues, though. For example, we recently enabled SSL on a ton of tests. In some cases, running a bunch of embedded Jetties with SSL can be ridiculously slow compared to not using SSL. A change like that will take time to stabilize across all envs; it makes sense to just address each issue rather than add output to the fail. A lot of the timeouts we used, for example, need to be adjusted. If you followed my Solr trail, you would know I have been working on raising timeouts and adjusting things for the new issues around randomly using SSL. Things are in constant flux.

The point of randomizing tests and constantly running them is to get fails. Unless you are paying attention, how do you know they are not new fails from new code? Because that is expected - we randomize, we run in different envs, and it's expected that we will find issues with a complicated test in some new random situation on some different env. Then we have to fix them in our copious fix-test time.

There are issues that are a little longer running, perhaps because something looks like a test issue and just hasn't had the priority, or for other reasons - like the Overseer test fail. I'm +1 on adding the JIRA to the fail output for tests like that. Those fails should be fairly rare, though - if a fail occurs frequently for everyone, it needs to be addressed. But to my knowledge, we do that. And we keep developing and finding new things - often new things in the same tests, because they are useful tests. You have to be paying attention to Solr to know what's going on, though.

- Mark

http://about.me/markrmiller

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
