Regarding the how-it's-done aspect:

Wired had an article last year about how the NSA built its own version of
the BigTable database design used at Google.
Apparently the software is named Accumulo: http://accumulo.apache.org/
http://www.wired.com/wiredenterprise/2012/07/nsa-accumulo-google-bigtable/
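For a sense of what that buys them: Accumulo is a sorted key/value store in
the BigTable mold, and the distinguishing feature is cell-level security --
every cell carries a visibility label that is checked against the reader's
authorizations at scan time. A minimal sketch using the public Accumulo Java
client (the instance, table, and labels below are made up for illustration):

import java.util.Map.Entry;

import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.accumulo.core.security.ColumnVisibility;

public class CellLevelDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical instance, credentials, and table names.
        Connector conn = new ZooKeeperInstance("demo", "zk1:2181")
                .getConnector("analyst", new PasswordToken("secret"));

        // Write one cell; its ColumnVisibility expression gates who may read it.
        BatchWriter bw = conn.createBatchWriter("records", new BatchWriterConfig());
        Mutation m = new Mutation("person#12345");
        m.put("meta", "phone", new ColumnVisibility("SECRET&NOFORN"),
              new Value("555-0100".getBytes()));
        bw.addMutation(m);
        bw.close();

        // Scan it back; only cells whose labels match these authorizations appear.
        Scanner scan = conn.createScanner("records",
                new Authorizations("SECRET", "NOFORN"));
        for (Entry<Key, Value> e : scan) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}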

Todd Hoff had an article at HighScalability recently that explains how cheap
it would be to build something like PRISM if you have the right data:
http://highscalability.com/blog/2013/7/1/prism-the-amazingly-low-cost-of-using-bigdata-to-know-more-a.html
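To make the "low cost" point concrete with a rough back-of-envelope (my own
numbers, not Hoff's): if a call-metadata record is on the order of 100 bytes
and the US produces roughly 3 billion calls a day, that is about 300 GB/day,
or roughly 100 TB/year before compression and replication -- a modest cluster
by 2013 standards.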

-J.


On 07/15/2013 05:47 PM, Owen Densmore wrote:
I've started following the Snowden/PRISM thing a bit more, and came across this via Twitter:
http://www.guardian.co.uk/commentisfree/2013/jul/15/crux-nsa-collect-it-all

Regardless of opinions on the ethics/legal side, the "collect it all" approach seems just impossible for me to grok. Let's suppose you *did* have all the data generated on the internet every day for the last 20 years. What could you do with it?

I presume they are using specialized hardware, possibly OpenCL-style processing on GPU farms. Fine. How would you turn this into a usable tool?

Color me naive, but isn't this a self-inflicted DoS attack?

   -- Owen


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
