Dave,

I'd need something like this in robots.txt:

Disallow: *?*file=*

Everything's dynamic, so I don't know the URLs in advance. And I don't
think wildcards like that would work, somehow. But your post gave me an
idea: maybe I could try to push a don't-follow / don't-index header up
with the other cfcontent tags and see what happens. I don't know the
syntax off-hand, but it might work. I'll have to check it out. Thanks.
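The header idea might look something like this — a rough sketch, not tested, and the variable names (url.file) and content type are assumptions. The X-Robots-Tag response header is the HTTP equivalent of the robots meta tag, though crawler support for it isn't universal, so it's worth verifying against the engines you care about:

```cfml
<!--- Hypothetical sketch: ask crawlers not to index or follow this response.
      Support for X-Robots-Tag varies by search engine. --->
<cfheader name="X-Robots-Tag" value="noindex, nofollow">

<!--- Serve the dynamic file as a download; url.file is an assumed parameter --->
<cfheader name="Content-Disposition" value="attachment; filename=#url.file#">
<cfcontent type="application/octet-stream" file="#ExpandPath(url.file)#" deletefile="no">
```

The cfheader calls have to come before cfcontent sends the body, since headers can't be changed once output starts.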

On Fri, 8 Oct 2004 08:22:01 -0400, Dave Watts <[EMAIL PROTECTED]> wrote:
> > Anyone have a take on this? An intelligent solution? Should i
> > be filtering the download process to exclude the search engines?
>
> You should use robots.txt to specify which URLs to exclude.
>
> Dave Watts, CTO, Fig Leaf Software
> http://www.figleaf.com/
> phone: 202-797-5496
> fax: 202-797-5444
>
>