Thanks for the idea, Dave.  I don't really believe the link type is the 
problem, though it may be related.  I deliberately don't test the links I 
filter out, because they either don't take me to a new page, or they take me 
to the same page multiple times (they are always in the same position on the 
page, like menu links, for example).  Our website rules state that if you 
reach exactly the same page multiple times, it is treated as though you 
visited it only once.  That's fine with me; I just won't test links that 
behave that way.  The spider's purpose is also to see as many unique pages on 
our website as we possibly can, and with each recursion it becomes more and 
more likely that we will accomplish this.
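The "same page counts only once" rule above can be sketched in plain Ruby.  This is just a hypothetical illustration, not our actual spider code: it keeps a set of URLs already seen and filters out any link that would revisit one of them.

```ruby
require 'set'

# Track every URL we have already reached; a page seen before is skipped,
# so visiting it "multiple times" counts as a single visit.
visited = Set.new

def want_to_visit?(url, visited)
  return false if visited.include?(url)  # same page seen before: skip it
  visited.add(url)
  true
end

# Example link list with a duplicate menu-style link (illustrative URLs).
links = [
  'http://example.com/a',
  'http://example.com/b',
  'http://example.com/a'   # duplicate: filtered out
]

unique = links.select { |url| want_to_visit?(url, visited) }
puts unique.length  # => 2
```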

The purpose of a web crawler, which this test is, is to crawl the entire web 
site by itself as well as possible.  To do this, a recursive method is 
implemented.  In a way it is incidental that we are using Watir, though it 
isn't accidental - we chose this tool for a good reason.  It works!  So 
dividing the task up would defeat our original purpose.  This is why we don't 
follow the test case format; in fact, we don't use any of the test suite/test 
case machinery included with Watir, for this reason.  We wanted to be as 
native as possible without reinventing the wheel.  As far as JavaScript and 
the other links go, they don't do anything that a normal web crawler would 
want to do anyway, so we just throw them away. :)
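The recursive shape described above can be sketched like this.  The sketch is hypothetical: SITE is a stand-in link graph (in the real spider, Watir fetches each page), and the javascript:/mailto: filtering stands in for whatever link types get thrown away.

```ruby
require 'set'

# Stand-in for the real site: each URL maps to the links found on that page.
SITE = {
  '/'        => ['/about', '/contact', 'javascript:void(0)', '/'],
  '/about'   => ['/', 'mailto:[email protected]'],
  '/contact' => []
}

# Recursively follow every link, skipping pages already seen (a page reached
# multiple times counts once) and discarding links a crawler can't follow.
def crawl(url, visited = Set.new)
  return visited if visited.include?(url)                       # seen: counts once
  return visited if url.start_with?('javascript:', 'mailto:')   # throw these away
  visited.add(url)
  (SITE[url] || []).each { |link| crawl(link, visited) }
  visited
end

pages = crawl('/')
puts pages.sort.inspect  # => ["/", "/about", "/contact"]
```

Each recursion reaches any page linked from a page already reached, which is why deeper recursion makes it more and more likely the spider sees every unique page.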

I hope that helps you understand a little about why we did what we did.

Nathan
---------------------------------------------------------------------
Posted via Jive Forums
http://forums.openqa.org/thread.jspa?threadID=5183&messageID=14409#14409
_______________________________________________
Wtr-general mailing list
[email protected]
http://rubyforge.org/mailman/listinfo/wtr-general