Dear all,
Recently I've been wondering how to conceptualise and illustrate some of the routine information gathering that goes on across the web, specifically that undertaken by Facebook via its 'Like' buttons and by Google via Google Analytics. As I understand it, this kind of information gathering/surveillance is used for behavioural advertising. I imagine that Opera and similar services (Amazon's Kindle, for example) undertake comparable information gathering too.
One way to illustrate this information gathering might be a browser plug-in that simply looks for the relevant embed code in web pages (the Facebook 'Like' button and the Google Analytics snippet) and then displays or highlights an icon (possibly linking to further information) together with a running tally for the session. A rough sketch of what the detection part might look like is below.
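Purely as a sketch of what I have in mind (a WebExtension content script written in TypeScript; the embed URLs it looks for are my own assumptions and would need checking against the current Facebook and Google embed code):

// content-script.ts -- a rough sketch, not a finished extension.
// Assumes the extension's manifest registers this file as a content
// script on all pages; the URL substrings below are heuristics only.

declare const chrome: any; // provided by the browser at runtime

type Tracker = "facebook-like" | "google-analytics";

// Substrings that suggest a page embeds the Facebook Like button/SDK
// or Google Analytics. These are assumptions and may go stale as the
// embed code changes.
const SIGNATURES: Record<Tracker, string[]> = {
  "facebook-like": [
    "facebook.com/plugins/like",
    "connect.facebook.net",
  ],
  "google-analytics": [
    "google-analytics.com/analytics.js",
    "googletagmanager.com/gtag/js",
  ],
};

function detectTrackers(): Tracker[] {
  const found: Tracker[] = [];
  // Collect the src attributes of every script and iframe on the page.
  const sources = Array.from(
    document.querySelectorAll<HTMLElement>("script[src], iframe[src]"),
  ).map((el) => el.getAttribute("src") ?? "");

  for (const [tracker, patterns] of Object.entries(SIGNATURES)) {
    if (sources.some((src) => patterns.some((p) => src.includes(p)))) {
      found.push(tracker as Tracker);
    }
  }
  return found;
}

// Report any hits to the background page, which would keep the
// per-session tally and drive the toolbar icon/badge.
const hits = detectTrackers();
if (hits.length > 0) {
  chrome.runtime.sendMessage({ type: "tracker-hits", trackers: hits });
}

The background page would then simply accumulate these messages into the session tally that the icon displays.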
However, I also think this approach doesn't really show the extent of what's happening. Given the number of page views occurring every second of every day that make a request back to Facebook or Google, the total must be very large indeed, huge in fact. Does anyone know of any estimates for this number? Also, how would you actually go about counting requests on that scale? I imagine it would require some kind of distributed architecture (cloud computing?), since a single dedicated server would simply be overwhelmed, in effect a self-inflicted denial of service.
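To make the counting question concrete, here is a toy sketch (TypeScript, with invented class names, nothing like a real measurement system) of the kind of sharded counting I'm imagining: each collection node keeps a purely local tally and only pushes batches to a central aggregator, so no single server ever handles every individual request.

// counter-sketch.ts -- a toy illustration of sharded counting.

class Aggregator {
  private total = 0;
  // Nodes push their local counts in batches rather than per request.
  add(batch: number): void {
    this.total += batch;
  }
  get count(): number {
    return this.total;
  }
}

class CollectorNode {
  private local = 0;
  constructor(private readonly aggregator: Aggregator) {}

  // Called once per observed tracking request; purely local, O(1).
  record(): void {
    this.local += 1;
  }

  // Called on a timer (e.g. every few seconds) to push the batch upstream.
  flush(): void {
    if (this.local > 0) {
      this.aggregator.add(this.local);
      this.local = 0;
    }
  }
}

// Simulate 10 nodes each seeing 100,000 requests, then flushing once.
const aggregator = new Aggregator();
const nodes = Array.from({ length: 10 }, () => new CollectorNode(aggregator));
for (const node of nodes) {
  for (let i = 0; i < 100_000; i++) node.record();
  node.flush();
}
console.log(`Total counted: ${aggregator.count}`); // 1,000,000

In a real deployment the flushes would run on timers and the aggregator would itself need to be replicated, but the point is simply that the counting work is spread across many machines rather than funnelled through one.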
Best wishes,
Andrew