If that is your literal test script, then the reason it doesn't work is that the HTTP HTML Link Parser looks for a previous response from which to parse the path info. Since you only have one sampler, there is no previous response the first time through, so the path value (.*) is never replaced and you get a 404. You then keep getting 404s, because the link parser doesn't find any links in the 404 response either.
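To make that concrete, a structure along these lines gives the parser a previous response to work from (the entry path "/" and the domain regex are only placeholders; start from whatever real page you like):

TEST PLAN
  THREAD GROUP
    HTTP REQUEST (domain: www.mydomain.com; path: /)    <- concrete entry page, no parser
    HTTP REQUEST (domain: .*mydomain.*; path: .*)       <- regex fields, filled in from the previous response
      HTTP HTML LINK PARSER                             <- added as a child of this sampler
    VIEW RESULTS TREE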
The point is that you need at least one sampler hitting a real page first, so the link parser has something to work with.

Beyond that, I think you'll be disappointed in your attempt to make a spider. The link parser does not store up the links it finds in a queue, a stack, or any other data structure, so it follows a single path through your app until it hits a dead end or a broken link, and from then on all you get are errors. If someone really wanted JMeter to spider effectively, some coding work would need to be done on these components (a rough sketch of the missing bookkeeping is below the quoted message). And lastly, the HTML Link Parser is such old and decrepit code that I have little confidence in it, for what that's worth. I think the Access Log Sampler would serve you better.

-Mike

On Tue, 2005-07-19 at 11:28 +0200, Christian Baumgartner wrote:
> Hello,
>
> I've searched the list about the HTTP HTML LINK PARSER, but cannot find an
> answer to my problem.
>
> The usual structure should be (as I saw in other posts):
>
> TEST PLAN
>   THREAD GROUP
>     HTTP REQUEST (domain: www.mydomain.com; path: .*)
>       HTTP HTML LINK PARSER
>
>     VIEW RESULTS TREE
>
> As a result I always get a 404 "page not found" error, because the URL
> becomes www.mydomain.com/.*
> So I suspect that the regex doesn't work in the path field (we had that
> already in another post).
>
> Now I just want to ask if someone could give a really simple explanation of
> how to use the HTTP HTML LINK PARSER correctly.
>
> Besides, my actual task:
> I should test stress test tools by creating a SIMPLE spider that crawls
> through a web site and records the times for each URL it finds.
> Further, it should only recognize links for a specific domain and URL
> pattern, but that's another problem.
>
> Hope someone can help me out here.
>
> Thanks, Chris
>
> For answers please write to: [EMAIL PROTECTED]
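P.S. Since the task is a simple spider anyway: the bookkeeping the link parser is missing is essentially a queue of discovered links plus a set of already-visited URLs. Below is a rough sketch of that idea in plain Java (java.net plus a crude regex, not JMeter's own classes); the class name SimpleSpider, the href pattern, the default start URL and the timing output are all made up for the illustration, so treat it as a starting point rather than a finished tool.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Minimal breadth-first spider: keeps every link it discovers in a queue
 * instead of only chasing a single "next" link, which is exactly the
 * bookkeeping the HTML Link Parser lacks for this kind of job.
 */
public class SimpleSpider {

    // Crude href extractor -- fine for a sketch, not for real-world HTML.
    private static final Pattern HREF =
            Pattern.compile("href\\s*=\\s*[\"']([^\"'#]+)[\"']", Pattern.CASE_INSENSITIVE);

    public static void main(String[] args) throws Exception {
        String start = args.length > 0 ? args[0] : "http://www.mydomain.com/";
        String domain = new URL(start).getHost();

        Queue<String> toVisit = new ArrayDeque<String>();
        Set<String> seen = new HashSet<String>();
        toVisit.add(start);
        seen.add(start);

        while (!toVisit.isEmpty()) {
            String page = toVisit.poll();
            long begin = System.currentTimeMillis();
            String body = "";
            int status = -1;
            try {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(page).openConnection();
                status = conn.getResponseCode();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()));
                StringBuilder sb = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    sb.append(line).append('\n');
                }
                in.close();
                body = sb.toString();
            } catch (Exception e) {
                // broken links and error pages simply yield no further links
            }
            long elapsed = System.currentTimeMillis() - begin;
            System.out.println(status + "  " + elapsed + " ms  " + page);

            // Queue every same-domain link we have not seen before.
            Matcher m = HREF.matcher(body);
            while (m.find()) {
                try {
                    URL link = new URL(new URL(page), m.group(1)); // resolves relative hrefs
                    if (domain.equalsIgnoreCase(link.getHost())
                            && seen.add(link.toString())) {
                        toVisit.add(link.toString());
                    }
                } catch (Exception ignored) {
                    // mailto:, javascript: and other oddities are skipped
                }
            }
        }
    }
}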