At the time of writing I am dealing with a similar
problem: stressing the system to its limits and seeing how it behaves under
pressure by randomly running all the stories it supports, millions of times,
during
the night. At the end of the n-hour run it triggers the execution of a
final “step” that produces a detailed report about the load and the results,
extracts error entries from the logs, and calculates some statistics. This
report is used the next day by humans to assess the system's health.


That's just an idea of how we did it, but of course there is
always room for improvement.


Cheers,
Julian

--- On Wed, 19/12/12, Mary Walshe <[email protected]> wrote:

From: Mary Walshe <[email protected]>
Subject: Re: [jbehave-user] Re: Repeating scenarios
To: [email protected]
Received: Wednesday, 19 December, 2012, 12:41 AM

Thanks for your replies. I agree that it does not need to be something JBehave
handles; I was just wondering if it did.
It is a difficult case, and it looks like we are going to get requirements with
the same criteria in the future. Taking everything into account, the test may
end up taking a long time to run, which wouldn't fit with our CI and the idea
of fast feedback.

We are going to look at taking the statistics out of the story and seeing if we
can use our CI to handle tests that allow a certain failure percentage.
There just does not seem to be a silver bullet for this type of requirement.

Thank you for all your suggestions. I will pass them on to the developers. If
we come to a nice solution for this case, I'll follow up if you are interested.


On Tue, Dec 18, 2012 at 11:53 AM, Alexander Lehmann <[email protected]> wrote:

I think it would be possible to write a step, similar to a composite step, that
runs the necessary steps repeatedly and evaluates the result. However, this has
the disadvantage that the steps are hidden in the step implementation rather
than in the story file (and I think the repeated steps are not reported
individually), though GivenStories may be a possibility.
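To make the idea concrete, here is a minimal plain-Java sketch of the repeat-and-evaluate logic such a custom step might wrap. The class and method names are hypothetical; in a real JBehave Steps class the two public methods would be bound to step text with `@When`/`@Then` annotations (shown only as comments here so the sketch stays self-contained):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a "composite-like" step: run an inner action N times,
// record outcomes instead of failing fast, then assert on the aggregate.
public class RepeatedRunSteps {

    private final AtomicInteger runs = new AtomicInteger();
    private final AtomicInteger failures = new AtomicInteger();

    // e.g. @When("the check is executed $n times") in a real Steps class
    public void runRepeatedly(int n, Runnable check) {
        for (int i = 0; i < n; i++) {
            runs.incrementAndGet();
            try {
                check.run();                // the "hidden" inner steps
            } catch (AssertionError e) {
                failures.incrementAndGet(); // record instead of failing the run
            }
        }
    }

    // e.g. @Then("the failure rate is at most $pct percent")
    public void assertFailureRateAtMost(double pct) {
        double rate = 100.0 * failures.get() / runs.get();
        if (rate > pct) {
            throw new AssertionError(
                "failure rate " + rate + "% exceeds allowed " + pct + "%");
        }
    }

    public int runs()     { return runs.get(); }
    public int failures() { return failures.get(); }
}
```

As Alexander notes, the trade-off is that the 20 inner runs show up as one step in the report, not as individually reported steps.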




When running individual steps with a statistical outcome, it would be necessary
to keep state somewhere so the probability can be evaluated at the end,
possibly in the page object. This is difficult, though, if results from
different tests have to be accounted for separately when running multi-threaded.
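One way to keep that state safely under multi-threading is a shared registry keyed by a story or scenario id. This is an illustrative sketch (the registry and its key are assumptions, not a JBehave API); `ConcurrentHashMap` plus `AtomicInteger` keeps each story's tallies separate without explicit locking:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: per-story outcome counters that are safe to update
// from concurrently running scenarios.
public class OutcomeRegistry {

    private final Map<String, AtomicInteger> runs = new ConcurrentHashMap<>();
    private final Map<String, AtomicInteger> failures = new ConcurrentHashMap<>();

    // Record one run of the given story, noting whether it failed.
    public void record(String storyId, boolean failed) {
        runs.computeIfAbsent(storyId, k -> new AtomicInteger()).incrementAndGet();
        if (failed) {
            failures.computeIfAbsent(storyId, k -> new AtomicInteger()).incrementAndGet();
        }
    }

    // Fraction of recorded runs of this story that failed (0.0 if none recorded).
    public double failureRate(String storyId) {
        int total  = runs.getOrDefault(storyId, new AtomicInteger()).get();
        int failed = failures.getOrDefault(storyId, new AtomicInteger()).get();
        return total == 0 ? 0.0 : (double) failed / total;
    }
}
```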




One last thing I'd like to point out is that the evaluation of such a test is
not deterministic unless you have a mock service that actually returns a
failure every n calls. When running the test 20 times, it would sometimes be
valid to have 0 or 2+ failures, and it would be misleading for the test to fail
in those cases. It might be feasible to run the 20 tests more than once and
calculate the average, but a test that fails whenever the expected value is not
reached is still flaky: either you need an estimator function and a confidence
interval, or you can just report the result as a warning with the calculated
value.
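The confidence-interval option can be sketched with a standard Wilson score interval for the failure probability: only fail (or warn) when the expected rate falls outside the interval around the observed count. The class and method names below are illustrative, not from any library:

```java
// Hypothetical sketch: decide whether k observed failures in n runs are
// statistically consistent with an expected failure rate, instead of failing
// the test on any deviation from the expected count.
public class BinomialCheck {

    // Wilson score interval [lo, hi] for k failures in n trials,
    // at critical value z (e.g. 1.96 for ~95% confidence).
    public static double[] wilsonInterval(int k, int n, double z) {
        double p  = (double) k / n;
        double z2 = z * z;
        double denom  = 1 + z2 / n;
        double center = (p + z2 / (2 * n)) / denom;
        double margin = (z / denom)
                * Math.sqrt(p * (1 - p) / n + z2 / (4.0 * n * n));
        return new double[] { Math.max(0, center - margin),
                              Math.min(1, center + margin) };
    }

    // true if expectedRate lies inside the interval, i.e. the observation
    // does not contradict the expectation at this confidence level.
    public static boolean consistent(int k, int n, double expectedRate, double z) {
        double[] ci = wilsonInterval(k, n, z);
        return expectedRate >= ci[0] && expectedRate <= ci[1];
    }
}
```

With an expected rate of 1 in 20, observing 0, 1, or 2 failures in 20 runs stays inside the 95% interval, so those runs would pass (or merely warn) instead of being reported as broken, while a gross deviation such as 10 failures would still be flagged.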

---------------------------------------------------------------------

To unsubscribe from this list, please visit:



   http://xircles.codehaus.org/manage_email
