Think of a web crawler as a cfhttp get call.  It doesn't have to be a person doing the requesting.  As long as the crawler gets the url right, it gets the page exactly as you intended it.
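Something like this minimal sketch (the url here is made up):

  <!--- Fetch a page the way a crawler would.  The CFHTTP scope then
        holds exactly what a crawler sees: raw HTML, no browser involved. --->
  <cfhttp method="get" url="http://example.com/products.cfm?id=42">
  <cfoutput>#CFHTTP.StatusCode#</cfoutput>
  <!--- CFHTTP.FileContent is the page source --->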

That assumes, of course, that the url alone will bring up the page.  You'll get all sorts of errors if a page counts on, for example, a person visiting your home page first to instantiate some variable that is then used on subsequent page views.  The crawler will very likely hit your pages in an order only it understands, and it will blow up repeatedly when it tries to hit all of them at once on its next go-round the following month.
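The usual fix is to guard every such page against the missing variable.  A quick sketch, assuming a hypothetical session.visitorID normally set on the home page (and sessionmanagement turned on in your cfapplication tag):

  <!--- If a crawler (or a deep-linking visitor) lands here first, the
        variable was never instantiated.  Default it instead of erroring. --->
  <cfparam name="session.visitorID" default="0">
  <!--- ...or cflocation back to the home page if a default makes no sense --->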

Then there are more issues involving folder depth, parameter-laden links, SES URLs, and making a dynamic page look like a completely static one, normal filename and all.  Search the archives for those sagas :-)
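As a teaser, here's the SES flavor in sketch form: hang the parameters off the filename as path segments, then read them back out.  This assumes made-up links like /products.cfm/id/42, and note that CGI.PATH_INFO behavior varies by web server (on some it includes the script name, so adjust the list positions):

  <!--- Pull id out of the extra path so the "static-looking" url still
        drives the dynamic page.  Assumes PATH_INFO comes in as "/id/42". --->
  <cfif ListLen(CGI.PATH_INFO, "/") GTE 2>
    <cfset url.id = ListGetAt(CGI.PATH_INFO, 2, "/")>
  <cfelse>
    <cfset url.id = 0>  <!--- hypothetical default --->
  </cfif>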

--
-------------------------------------------
Matt Robertson,     [EMAIL PROTECTED]
MSB Designs, Inc. http://mysecretbase.com
-------------------------------------------
