We use anonymous cookies to identify unique visitors to our sites and 
determine metrics like requests per visitor.  It's not perfect, but it 
works pretty well, as long as your time-scale is short enough that the 
probability of many users having lost or deleted their cookies is 
insignificant.  It's much better than any other way we know of to count 
visits and visitors.

I'm increasingly worried about how robots/crawlers handle cookies.  I 
imagine most of them reject cookies, which is fine.  Lots of them probably 
accept cookies in order to index registration-required sites.  Those are 
fine, too (since they make so many requests, they're easy to exclude).  What 
we're worried about is whether some of them first accept and then delete 
their cookies, so that they appear as a different user with each request 
or visit.
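One way to check for that behavior in your own logs, sketched below under the 
assumption that you can parse out (IP, cookie) pairs per request: a client 
whose distinct-cookie count is nearly equal to its request count almost never 
reuses a cookie, which is exactly the accept-then-delete pattern.  The 
thresholds here are arbitrary guesses, not established values:

```python
from collections import defaultdict

def flag_cookie_churners(records, min_requests=5, churn_ratio=0.9):
    """Return IPs whose distinct-cookie count is suspiciously close to
    their request count, i.e. clients that almost never reuse a cookie.
    `records` is an iterable of (ip, cookie_id) pairs, one per request."""
    reqs = defaultdict(int)
    cookies = defaultdict(set)
    for ip, cookie in records:
        reqs[ip] += 1
        cookies[ip].add(cookie)
    return [
        ip for ip in reqs
        if reqs[ip] >= min_requests
        and len(cookies[ip]) / reqs[ip] >= churn_ratio
    ]

records = (
    [("10.0.0.1", f"c{i}") for i in range(10)]  # fresh cookie every request
    + [("10.0.0.2", "steady")] * 10             # well-behaved repeat visitor
)
print(flag_cookie_churners(records))  # -> ['10.0.0.1']
```

Keying on IP isn't perfect either (proxies, dial-up pools), but it's a cheap 
first pass for spotting candidates worth excluding.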

That sounds dumb, and I can't think of any reason they would do that, but I 
also can't find anything that says they don't.  The thing is, our traffic 
has increased a lot lately, and we can't attribute it to advertising or 
anything sensible.  With all the new shopping-bots out there, and probably 
hundreds of data-mining companies running their own secretive web-bots, I 
wonder if we're just getting lots of them pretending to be humans using 
Navigator or IE.

Anybody have any suggestions for an easy way to sort this out?  Or pointers 
to any info to check out?

Thanks,
Matt Morgan

------------------------------------------------------------------------
This is the analog-help mailing list. To unsubscribe from this
mailing list, send mail to [EMAIL PROTECTED]
with "unsubscribe analog-help" in the main BODY OF THE MESSAGE.
List archived at http://www.mail-archive.com/analog-help@lists.isite.net/
------------------------------------------------------------------------
