Currently I use a random-size windowing algorithm to try to reduce the number of tests that cause the problem to a minimum, but I realize that it may not work in some cases since it's random. I need your help coming up with a few algorithms to apply to the long sequence of tests to quickly and effectively reduce it to a minimum.

After the first run, it's possible that the sequence will include 500 tests. Currently I use an algorithm that picks a slice from this sequence using random edges. In preliminary testing it sometimes reduces 500 tests to 2 in just a few runs, but in other cases it cannot reduce the sequence at all. It's random, after all.
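
The random-edges approach above can be sketched roughly like this. This is only an illustration, not the actual utility's code; the `still_fails` callback (which re-runs a candidate subset and reports whether the failure reproduces) and `max_tries` are names I've made up for the sketch:

```python
import random

def shrink_random(tests, still_fails, max_tries=200):
    """Randomly pick sub-slices of `tests`; keep any slice that still
    reproduces the failure.  `still_fails(subset)` re-runs the subset
    and returns True when the failure is reproduced."""
    current = list(tests)
    for _ in range(max_tries):
        if len(current) <= 1:
            break
        # Pick two random edges and try the slice between them.
        lo = random.randrange(len(current))
        hi = random.randrange(lo + 1, len(current) + 1)
        candidate = current[lo:hi]
        if len(candidate) < len(current) and still_fails(candidate):
            current = candidate
    return current
```

Since each accepted slice must still reproduce the failure, the result is always a failing subsequence; the weakness is exactly what you describe: there is no guarantee any random slice will be accepted, so it can stall at the full sequence.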

I was thinking of trying something like binary search first, but the chances that it'll work are not great, since it's possible that two tests together cause the failure of a third, and those two may be located in different halves of the sequence. I guess I should try it anyway.
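
A minimal bisection sketch, using the same hypothetical `still_fails` callback as above, makes the limitation concrete: when the interacting tests straddle the midpoint, neither half fails on its own and the search stalls immediately:

```python
def shrink_halves(tests, still_fails):
    """Repeatedly try to drop half of the sequence.  Keeps whichever
    half still reproduces the failure; stops as soon as neither half
    fails on its own (e.g. the interacting tests span the split)."""
    current = list(tests)
    while len(current) > 1:
        mid = len(current) // 2
        first, second = current[:mid], current[mid:]
        if still_fails(first):
            current = first
        elif still_fails(second):
            current = second
        else:
            break  # interaction spans the split; plain bisection stalls
    return current
```

When a single test is responsible it converges in log2(N) rounds, which is why it's still worth trying before anything fancier.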

I was thinking about password crackers that apply a set of different algorithms in turn, and I was considering taking the same route to improve the efficiency of this utility (after all, the tests run very slowly). But I'm not sure which other algorithms I could try.
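
A driver that chains several reduction strategies, in the password-cracker spirit, could look like the sketch below. Everything here is illustrative: `strategies` is a list of functions with the same `(tests, still_fails)` shape as the earlier sketches, and the loop simply keeps cycling through them until none of them shrinks the sequence any further:

```python
def shrink_pipeline(tests, still_fails, strategies):
    """Apply each reduction strategy in turn, repeating the whole
    cycle until no strategy shrinks the failing sequence further."""
    current = list(tests)
    shrunk = True
    while shrunk:
        shrunk = False
        for strategy in strategies:
            candidate = strategy(current, still_fails)
            # Only accept a strictly smaller, still-failing result.
            if len(candidate) < len(current) and still_fails(candidate):
                current = candidate
                shrunk = True
    return current
```

This way a cheap strategy (bisection) can do the bulk of the cutting and a slower, more thorough one (e.g. trying to drop tests one at a time) can finish the job.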

So if you have any insights please let me know. Thanks a lot!


_____________________________________________________________________
Stas Bekman            JAm_pH -- Just Another mod_perl Hacker
http://stason.org/     mod_perl Guide  http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com  http://apacheweek.com
http://singlesheaven.com  http://perl.apache.org  http://perlmonth.com/


