On Mon, Jan 18, 2010 at 2:00 PM, Drew Farris <drew.far...@gmail.com> wrote:
> In what cases would you want to reset them all remotely, at the
> beginning of each test?

You pretty much said it -- tests should start from a known, fixed
state, so that the result is the same each time and we can assert
about the output. That means putting the entire library and
test-fixture state into a known configuration -- which is why we need
not just control over the initial seed, but the ability to reset it.
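To make the idea concrete, here's a minimal sketch of what a RandomUtil-style helper could look like. The class and method names mirror the ones quoted below but are assumptions, not the actual Mahout API; the seed constant is arbitrary:

```java
import java.util.Random;

// Hypothetical sketch: tests get a fixed-seed RNG so the sequence is
// identical on every run; production code gets a randomly-seeded one.
public final class RandomUtil {

  private static final long TEST_SEED = 0xDEADBEEFL; // arbitrary fixed seed

  private RandomUtil() {
  }

  // Production: randomly seeded, not repeatable.
  public static Random getRandom() {
    return new Random();
  }

  // Tests: fixed seed, so every call starts the same known sequence.
  // Calling this at the start of each test is the "reset".
  public static Random getTestRandom() {
    return new Random(TEST_SEED);
  }
}
```

A test would call getTestRandom() in its setUp() so each test method begins from the same point in the same sequence.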

(Separately, you could argue we're going about this all wrong by
depending on the exact output of the RNG, and that we should instead
write tests that assert only what must be true regardless of the
outcome, or else assert things that hold in 99.9999% of all RNG
sequences. But let's save that argument for later.)


> In tests you call
>
> Random r = RandomUtil.getTestRandom()
> ev = new GenericRecommenderIRStatsEvaluator(r);
>
> In production code you call:
>
> Random r = RandomUtil.getRandom();
> ev = new GenericRecommenderIRStatsEvaluator(r);

And you're suggesting getRandom() returns a randomly-seeded RNG? Then
this just brings back the original problem: the test is not
repeatable. You've moved the injection around, but changed nothing
else, I think. Am I misunderstanding? That seems to be why I'm not
following getTestRandom().

(Taking it as a constructor param is the conventional way to set up
for injecting, but from an API perspective I don't quite like it. I
understand why an evaluator necessarily needs a Recommender to exist,
but why do I need to give it a Random, conceptually?)
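One conventional answer to the API concern is an overloaded constructor: the Random-taking constructor exists for tests, while ordinary callers never see it. A sketch, using a stand-in Evaluator class (the real GenericRecommenderIRStatsEvaluator's constructor signature may differ):

```java
import java.util.Random;

// Sketch of constructor injection: the Random parameter is the point,
// the rest of the class is a placeholder.
public class Evaluator {

  private final Random random;

  // Production callers use this; the Random never appears in their code.
  public Evaluator() {
    this(new Random()); // randomly seeded
  }

  // Tests inject a fixed-seed Random for repeatable results.
  public Evaluator(Random random) {
    this.random = random;
  }

  public double sample() {
    return random.nextDouble();
  }
}
```

Note this only makes the test repeatable if what's injected is itself fixed-seed, e.g. new Evaluator(new Random(42L)) -- which is the point raised above about getRandom() vs. getTestRandom().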
