Re: Algorithmic / Biometric Governmentality

2017-11-03 Thread charlie derr
On 11/02/2017 02:00 PM, Vincent Van Uffelen wrote:
> Hmm, their team is a prime example of white, male, and non-diverse
> "singularity".
>
> Are blockchain ICOs really spreading control and wealth to the many?
> It's difficult to know, but considering the hurdles that have to be
> crossed to gain access to the blockchain (internet access, a credit
> card or bank account, the knowledge and desire, and the money to
> invest), the vast majority of the wealth generated most likely went
> into the pockets of the global top 2%. I have my doubts that much of
> this will start trickling down.
>
> If tackling problems for the greater good is not in the COIN's genes,
> pardon me, contracts, it will most likely not happen. Of course the
> platforms being created could be very helpful (as Facebook is for many
> NGOs), but I don't have hope that the free coin markets will steer
> things into better places than the free financial markets did.
>
> \\vincent

Vincent,

Your first point is well taken. I try to keep my eyes open to these
things whenever I can, but apparently the fact that I am myself a white
male helped to blind me to the reality you pointed out in this respect.
While I'm aware of women and PoC playing roles in the project, it is
definitely a fact that the founders all appear to be white men (albeit
from a diverse collection of geographical locations). Thank you very
much for bringing this to my attention (and shame on me for not
realizing it on my own).

I don't disagree with your contention about existing ICO blockchains
having a limited effect so far (and most of the benefit being directed
to those who already have the most agency in our societies). But as I
understand the goals of the singularityNET project, I don't see it
operating in that same space. Their aim appears to be to build a
structure that will support individuals who would otherwise be without
the resources to compete with the larger players in the AI universe
(which is the original point you were making that I specifically
responded to). Yes, it's true that internet connectivity will be
necessary in order to participate, but I have hope that the project's
expressed goals of providing access and opportunity to folks with
limited means around the world are based on the core values of the
founders rather than being window-dressing cynically used for marketing
purposes. They are certainly seeking investors with deep pockets to help
facilitate the effort, but if it succeeds, I think it will provide a
great opportunity for individuals (and groups) with great ideas (in
terms of AI algorithms of potential use to us all) but minimal financial
resources.

What excites me about blockchain technology is the promise it holds in
terms of verifiability and transparency, as well as its natural fit with
operating in a decentralized way. The fact that it originated in the
realm of cryptocurrency doesn't (in my mind) condemn it to only ever
being used in that arena. While the singularityNET folks are
incorporating a token into their platform, I don't see it primarily as a
cryptocurrency effort. Their stated intent to open-source all their code
and their goal of providing an avenue for AI researchers to gain access
to a global market put them in another (new) realm (in my opinion).

    be well,
    ~c




Re: Algorithmic / Biometric Governmentality

2017-11-02 Thread Vincent Van Uffelen
Hmm, their team is a prime example of white, male, and non-diverse 
"singularity".


Are blockchain ICOs really spreading control and wealth to the many?
It's difficult to know, but considering the hurdles that have to be
crossed to gain access to the blockchain (internet access, a credit card
or bank account, the knowledge and desire, and the money to invest), the
vast majority of the wealth generated most likely went into the pockets
of the global top 2%. I have my doubts that much of this will start
trickling down.


If tackling problems for the greater good is not in the COIN's genes,
pardon me, contracts, it will most likely not happen. Of course the
platforms being created could be very helpful (as Facebook is for many
NGOs), but I don't have hope that the free coin markets will steer
things into better places than the free financial markets did.


\\vincent

On 02/11/2017 13:53, charlie derr wrote:

On 11/02/2017 05:29 AM, Vincent Van Uffelen wrote:

Nevertheless, it [AI] remains a very powerful tool, and it is in the
hands of a very few (and their software engineer/programmer management
layer).

While it's still in the embryonic stages, I just wanted to mention a
rather ambitious effort to change this reality using blockchain
technology, implemented via open-source code:

https://singularitynet.io

https://medium.com/ben-goertzel-on-singularitynet

Their whitepaper is due out any day now.

  best,
   ~c




Re: Algorithmic / Biometric Governmentality

2017-11-02 Thread Ian Alan Paul
While this has turned into an interesting discussion on the (non?)future of
AI, it's important to remind ourselves that what Zeynep is describing (and
control societies in general) doesn't rely on the existence of advanced AI
to work.

The power of control societies emerges from the historical conjuncture of
biopower (the institutional collection/management of data concerning
populations) and computation (the automation of data/information processing
that can result in the dynamic modulation of control / the exercise of
power).

On Nov 2, 2017 8:54 AM, "charlie derr"  wrote:

> On 11/02/2017 05:29 AM, Vincent Van Uffelen wrote:
> > Nevertheless, it [AI] remains a very powerful tool, and it is in the
> > hands of a very few (and their software engineer/programmer management
> > layer).
>
> While it's still in the embryonic stages, I just wanted to mention a
> rather ambitious effort to change this reality using blockchain
> technology, implemented via open-source code:
>
> https://singularitynet.io
>
> https://medium.com/ben-goertzel-on-singularitynet
>
> Their whitepaper is due out any day now.
>
>  best,
>   ~c
>
>

Re: Algorithmic / Biometric Governmentality

2017-11-02 Thread charlie derr
On 11/02/2017 05:29 AM, Vincent Van Uffelen wrote:
> Nevertheless, it [AI] remains a very powerful tool, and it is in the
> hands of a very few (and their software engineer/programmer management
> layer).

While it's still in the embryonic stages, I just wanted to mention a
rather ambitious effort to change this reality using blockchain
technology, implemented via open-source code:

https://singularitynet.io

https://medium.com/ben-goertzel-on-singularitynet

Their whitepaper is due out any day now.

 best,
  ~c




Re: Algorithmic / Biometric Governmentality

2017-11-02 Thread Vincent Van Uffelen
I also wonder if just one skillfully performed twitch of the left leg
could trip a gait-detection algorithm? There are many holes to poke.
Having access to the interpreting system, as those researchers did,
obviously makes it much easier to find the right "markers" to tweak. But
considering that economies of scale will most likely give rise to a few
default classification networks, accessible for $£€ over an API, some of
their inner workings might be discovered over time. Isn't prying open a
black box people's favorite pastime?
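
As a rough illustration of what that prying might look like against a
pay-per-call classifier, here is a hypothetical sketch; the endpoint,
request format, and response schema are invented for illustration, and
every probe is of course a billable API call:

# Hypothetical sketch: probe a hosted image classifier by occluding one
# patch at a time and watching how much the top score drops. Large drops
# hint at the regions ("markers") the black box actually relies on.
import io
import requests
from PIL import Image

API_URL = "https://classifier.example.com/v1/label"   # invented endpoint

def query(img: Image.Image) -> dict:
    """Send an image to the (hypothetical) API; assume it returns {label: confidence}."""
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return requests.post(API_URL, files={"image": buf.getvalue()}).json()

def occlusion_probe(path: str, patch: int = 32):
    """Grey out one patch at a time and rank patches by confidence drop."""
    base = Image.open(path).convert("RGB")
    base_scores = query(base)
    top_label = max(base_scores, key=base_scores.get)
    heat = []
    for y in range(0, base.height, patch):
        for x in range(0, base.width, patch):
            probe = base.copy()
            box = (x, y, min(x + patch, base.width), min(y + patch, base.height))
            probe.paste((128, 128, 128), box)
            drop = base_scores[top_label] - query(probe).get(top_label, 0.0)
            heat.append(((x, y), drop))
    return sorted(heat, key=lambda t: -t[1])   # most influential patches first

for (x, y), drop in occlusion_probe("gait_frame.png")[:5]:
    print(f"patch at ({x},{y}) drops confidence by {drop:.2f}")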


Regarding the rise of "AI": I totally agree, it has "become" something
like climate change, an inevitable wicked problem in which the right
hand of those involved demands careful consideration of the consequences
while the rest of the body pushes for implementation at full speed. I
would very much like to stress that at the moment it is just machine
intelligence, not sentience; or, as Zuckerberg said: it's just math.
Nevertheless, it remains a very powerful tool, and it is in the hands of
a very few (and their software engineer/programmer management layer).



On 01/11/2017 21:33, Morlock Elloi wrote:

And this just in:

https://arxiv.org/pdf/1707.07397

We introduce the first method for constructing real-world 3D objects
that consistently fool a neural network across a wide distribution of
angles and viewpoints. We present a general-purpose algorithm for
generating adversarial examples that are robust across any chosen
distribution of transformations.


Video of a rather impressive demo (turtle gets classified as a rifle) at:

https://www.labsix.org/media/2017/10/31/video.mp4
https://www.labsix.org/physical-objects-that-fool-neural-nets/


The point of all these attacks appears to be that "AI" is just plain 
old primitive classifiers, rebranded by marketing, all extremely
brittle, working under naive assumptions (but good enough for demos 
and PR.) "AI" sounds more scary and induces defeatism, resignation, 
and deference to technology, which is its sole purpose.




..





--
DE: +49 (0)160 9549 5269
UK: +44 (0)75 0655 0520
 
http://vincentvanuffelen.com

http://transmit-interfere.com
http://deepmediaresearch.org


Re: Algorithmic / Biometric Governmentality

2017-11-01 Thread Morlock Elloi

And this just in:

https://arxiv.org/pdf/1707.07397


We introduce the first method for constructing real-world 3D objects
that consistently fool a neural network across a wide distribution of
angles and viewpoints. We present a general-purpose algorithm for
generating adversarial examples that are robust across any chosen
distribution of transformations.


Video of a rather impressive demo (turtle gets classified as a rifle) at:

https://www.labsix.org/media/2017/10/31/video.mp4
https://www.labsix.org/physical-objects-that-fool-neural-nets/


The point of all these attacks appears to be that "AI" is just plain old 
primitive classifiers, rebranded by marketing, all extremely
brittle, working under naive assumptions (but good enough for demos and 
PR.) "AI" sounds more scary and induces defeatism, resignation, and 
deference to technology, which is its sole purpose.
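
To make the trick concrete: the core idea is to optimise a single
perturbation over a whole distribution of transformations rather than a
single view. Below is a minimal PyTorch sketch of that idea, not the
paper's code; viewpoint changes are crudely stood in for by rescaling
and brightness jitter, and ImageNet normalisation is omitted:

# Sketch of "expectation over transformations": make the classifier
# wrong *on average* across random transforms, not just for one image.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

def random_transform(x):
    """Cheap stand-in for viewpoint changes: random rescale + brightness jitter."""
    scale = float(torch.empty(1).uniform_(0.8, 1.2))
    x = F.interpolate(x, scale_factor=scale, mode="bilinear", align_corners=False)
    x = F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)
    return torch.clamp(x * float(torch.empty(1).uniform_(0.9, 1.1)), 0, 1)

def eot_attack(image, target_class, steps=200, eps=0.05, lr=0.01):
    """image: (1,3,224,224) tensor in [0,1]; push it toward target_class
    while keeping the perturbation inside an eps-ball."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for _ in range(8):   # Monte Carlo estimate of the expected loss
            logits = model(random_transform(torch.clamp(image + delta, 0, 1)))
            loss = loss + F.cross_entropy(logits, torch.tensor([target_class]))
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)   # keep the change visually small
    return torch.clamp(image + delta.detach(), 0, 1)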




..



Re: Algorithmic / Biometric Governmentality

2017-11-01 Thread Morlock Elloi
The described pixel attack works in the digital domain (i.e., modifying
pre-captured images); in other words, the attacker must have access to
the digital pipeline. For real-time applications such access is rarely
available, certainly not for regular people on the street.


However, there are analog countermeasures:

https://arxiv.org/pdf/1602.04504.pdf
https://cvdazzle.com/
https://io9.gizmodo.com/how-fashion-can-be-used-to-thwart-facial-recognition-te-1495648863

I wonder if this will become the hoodie of the 21st century.



"CV Dazzle explores how fashion can be used as camouflage from 
face-detection technology, the first step in automated face recognition.


The name is derived from a type of World War I naval camouflage called 
Dazzle, which used cubist-inspired designs to break apart the visual 
continuity of a battleship and conceal its orientation and size. 
Likewise, CV Dazzle uses avant-garde hairstyling and makeup designs to 
break apart the continuity of a face. Since facial-recognition 
algorithms rely on the identification and spatial relationship of key 
facial features, like symmetry and tonal contours, one can block 
detection by creating an “anti-face”. "
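
For anyone curious to test such an "anti-face" on themselves, the
detection step CV Dazzle targets is easy to reproduce. Here is a minimal
sketch using OpenCV's stock Haar-cascade frontal-face detector; the
image file names are placeholders, and this says nothing about any
particular deployed system:

# Count detected faces before and after applying dazzle-style styling.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def count_faces(path: str) -> int:
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:                       # draw boxes for inspection
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite(path.replace(".jpg", "_detected.jpg"), img)
    return len(faces)

print(count_faces("plain_portrait.jpg"))    # expect 1
print(count_faces("dazzle_portrait.jpg"))   # ideally 0 if the styling works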





https://arxiv.org/abs/1710.08864

One pixel attack for fooling deep neural networks

Jiawei Su, Danilo Vasconcellos Vargas, Sakurai Kouichi (Submitted on 24
Oct 2017)




Re: Algorithmic / Biometric Governmentality

2017-11-01 Thread Vincent Van Uffelen
What a bleak topic to engage with... Today's uplifting news is that all
the machine-learned intelligence needed to roll all this out at large
scale relies on very complex algorithms which have severe issues with
being too dependent on initial conditions. While the paper linked below
describes its findings as a means to improve the learning algorithms, it
also points to a vector for hacking the AI. At the moment a small
yellow, pink, or green pixel wins :)


https://arxiv.org/abs/1710.08864

One pixel attack for fooling deep neural networks

Jiawei Su, Danilo Vasconcellos Vargas, Sakurai Kouichi (Submitted on 24 
Oct 2017)


/Recent research has revealed that the output of Deep neural 
networks(DNN) is not continuous and very sensitive to tiny perturbation 
on the input vectors and accordingly several methods have been proposed 
for crafting effective perturbation against the networks. In this paper, 
we propose a novel method for optically calculating extremely small 
adversarial perturbation (few-pixels attack), based on differential 
evolution. It requires much less adversarial information and works with 
a broader classes of DNN models. The results show that 73.8% of the test 
images can be crafted to adversarial images with modification just on 
one pixel with 98.7% confidence on average. In addition, it is known 
that investigating the robustness problem of DNN can bring critical 
clues for understanding the geometrical features of the DNN decision map 
in high dimensional input space. The results of conducting few-pixels 
attack contribute quantitative measurements and analysis to the 
geometrical understanding from a different perspective compared to 
previous works./
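
To make the mechanics concrete, here is a toy sketch (not the authors'
code) of the search the paper describes: differential evolution over a
single (x, y, r, g, b) candidate, trying to lower the classifier's
confidence in the true class. The `model` callable, assumed to return
class probabilities for an (H, W, 3) uint8 image, is a placeholder:

from scipy.optimize import differential_evolution

def perturb(image, candidate):
    """Apply one candidate pixel change (x, y, r, g, b) to a copy of the image."""
    x, y, r, g, b = candidate
    out = image.copy()
    out[int(y), int(x)] = [int(r), int(g), int(b)]
    return out

def one_pixel_attack(model, image, true_class, maxiter=75, popsize=40):
    h, w, _ = image.shape
    bounds = [(0, w - 1), (0, h - 1), (0, 255), (0, 255), (0, 255)]

    def objective(candidate):
        # Lower is "better" for the attacker: confidence in the true class.
        return model(perturb(image, candidate))[true_class]

    result = differential_evolution(objective, bounds,
                                    maxiter=maxiter, popsize=popsize, seed=0)
    adv = perturb(image, result.x)
    return adv, model(adv).argmax() != true_class   # image, did the label flip?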




On 30/10/2017 21:47, Morlock Elloi wrote:
To throw in two items, one presently real and one somewhat speculative.
Both are contingent on a high-speed network-to-brain (N2B) interface,
namely the handset, which has the victim's attention for many hours
every day.


1. Social networks (i.e., FB) likely know your IQ with a margin of error
of 5 points or less.


IQ is hard to mask, and unnoticeable tests can be easily implemented,
probably focusing on the speed of actions, i.e., figuring out where the
button is in a slightly changed interface, etc., which can be done over
a long time, not in one sitting. This information did not exist before
(a national IQ dataset), has nothing to do with your habits, and is
highly valuable: once FB separates sharpies from dims (exactly half of
us are below average), it can use different strategies to influence
each. More importantly, this data is valuable to law enforcement: if
you are looking to frame someone, you go for dims. If you are looking
for leaders, you narrow your attention to sharpies.


2. Ubiquitous N2B interfaces may enable effective brain hacking.

We are not talking about advertising and nudging here, but
straightforward hacking that bypasses the voluntary/consciousness
layers. After all, the brain is just a computer, and it's a matter of
time before buffer-overflow zero days are figured out (note that they
will stay zero days, as there is no one to send you the patch). To
illustrate the principle, this could be similar to the way that flashing
patterns induce epileptic attacks in those prone to them. I don't expect
a good brain-overflow hack to use crude flashing patterns; it may use
something far more discreet, a combination of outputs and feedbacks
(something comes up on the screen, you click on X, then something else
comes up, you ... etc.) that causes ... something. I'm pretty sure that
self-respecting TLAs are already investing billions in the research
(they did spend $90M on LSD research in the 1950s). The presence of the
N2B interface is just too important to ignore.






But if the people in power are using these algorithms to quietly watch
us, to judge us and to nudge us, to predict and identify the
troublemakers and the rebels, to deploy persuasion architectures at
scale and to manipulate individuals one by one using their personal,
individual weaknesses and vulnerabilities, and if they're doing it at
scale through our private screens so that we don't even know what our
fellow citizens and neighbors are seeing, that authoritarianism will
envelop us like a spider's web and we may not even know we're in it.








--
DE: +49 (0)160 9549 5269
UK: +44 (0)75 0655 0520
 
http://vincentvanuffelen.com

http://transmit-interfere.com
http://deepmediaresearch.org


Re: Algorithmic / Biometric Governmentality

2017-10-30 Thread Morlock Elloi
To throw in two items, one presently real and one somewhat speculative.
Both are contingent on a high-speed network-to-brain (N2B) interface,
namely the handset, which has the victim's attention for many hours
every day.


1. Social networks (i.e., FB) likely know your IQ with a margin of error
of 5 points or less.


IQ is hard to mask, and unnoticeable tests can be easily implemented,
probably focusing on the speed of actions, i.e., figuring out where the
button is in a slightly changed interface, etc., which can be done over
a long time, not in one sitting. This information did not exist before
(a national IQ dataset), has nothing to do with your habits, and is
highly valuable: once FB separates sharpies from dims (exactly half of
us are below average), it can use different strategies to influence
each. More importantly, this data is valuable to law enforcement: if
you are looking to frame someone, you go for dims. If you are looking
for leaders, you narrow your attention to sharpies. (A toy sketch of
this kind of passive scoring follows item 2 below.)


2. Ubiquitous N2B interfaces may enable effective brain hacking.

We are not talking about advertising and nudging here, but
straightforward hacking that bypasses the voluntary/consciousness
layers. After all, the brain is just a computer, and it's a matter of
time before buffer-overflow zero days are figured out (note that they
will stay zero days, as there is no one to send you the patch). To
illustrate the principle, this could be similar to the way that flashing
patterns induce epileptic attacks in those prone to them. I don't expect
a good brain-overflow hack to use crude flashing patterns; it may use
something far more discreet, a combination of outputs and feedbacks
(something comes up on the screen, you click on X, then something else
comes up, you ... etc.) that causes ... something. I'm pretty sure that
self-respecting TLAs are already investing billions in the research
(they did spend $90M on LSD research in the 1950s). The presence of the
N2B interface is just too important to ignore.
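
To put item 1 in concrete terms, here is a purely illustrative toy of
the kind of passive measurement described there: log how quickly users
complete micro-tasks (finding a relocated button, parsing a changed
menu) and score each user against the population. Nothing here is any
platform's actual code; the point is only how little machinery the idea
needs:

# Toy aggregation of task-completion times into a population-relative score.
from collections import defaultdict
from statistics import mean, pstdev

# event log: (user_id, task_id, seconds_to_complete); invented sample data
events = [
    ("alice", "find_new_button", 1.2),
    ("alice", "parse_changed_menu", 2.0),
    ("bob",   "find_new_button", 3.4),
    ("bob",   "parse_changed_menu", 5.1),
    ("carol", "find_new_button", 0.9),
    ("carol", "parse_changed_menu", 1.7),
]

per_user = defaultdict(list)
for user, _task, seconds in events:
    per_user[user].append(seconds)

avg_times = {u: mean(ts) for u, ts in per_user.items()}
mu, sigma = mean(avg_times.values()), pstdev(avg_times.values())

# Faster than the population mean -> positive score; slower -> negative.
scores = {u: (mu - t) / sigma for u, t in avg_times.items()}
for user, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{user:6s} speed score: {score:+.2f}")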






But if the people in power are using these algorithms to quietly watch
us, to judge us and to nudge us, to predict and identify the
troublemakers and the rebels, to deploy persuasion architectures at
scale and to manipulate individuals one by one using their personal,
individual weaknesses and vulnerabilities, and if they're doing it at
scale through our private screens so that we don't even know what our
fellow citizens and neighbors are seeing, that authoritarianism will
envelop us like a spider's web and we may not even know we're in it.






Re: Algorithmic / Biometric Governmentality

2017-10-29 Thread jeremy bentham
On Sun, Oct 29, 2017 at 05:21:18PM -0400, Ian Alan Paul wrote:

> This very digestible short talk (22:00) on the emerging threat of
> algorithmic/biometric governmentality from Zeynep Tufekci may be of
> interest to those who research control societies, etc..:
> https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads

> The transcript is below:

...

> (Laughter)

...

> (Applause)

> So to go back to that Hollywood paraphrase, we do want the prodigious
> potential of artificial intelligence and digital technology to blossom, but
> for that, we must face this prodigious menace, open-eyed and now.

The panopticon:  what a quaint, crude concept!

-- 
 Dave Williams   d...@eskimo.com


Re: Algorithmic / Biometric Governmentality

2017-10-29 Thread lincoln dahlberg
Thanks Ian,

With respect to the theme of the TED talk and your post's subject, this Wired
article on China's new citizen rating system is worth looking at, if you
haven't seen it:

http://www.wired.co.uk/article/chinese-government-social-credit-score-privacy-invasion

(your 'brave' posting of a TED talk has opened the way for me to post a Wired
article!)

best

Lincoln

> On 30 October 2017 at 10:21 Ian Alan Paul <ianalanp...@gmail.com> wrote:
> 
> This very digestible short talk (22:00) on the emerging threat of 
> algorithmic/biometric governmentality from Zeynep Tufekci may be of interest 
> to those who research control societies, etc..: 
> https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads
> 
> The transcript is below:
> 
> [... transcript snipped; see the original post below ...]

Algorithmic / Biometric Governmentality

2017-10-29 Thread Ian Alan Paul
This very digestible short talk (22:00) on the emerging threat of
algorithmic/biometric governmentality from Zeynep Tufekci may be of
interest to those who research control societies, etc..:
https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads

The transcript is below:

So when people voice fears of artificial intelligence, very often, they
invoke images of humanoid robots run amok. You know? Terminator? You know,
that might be something to consider, but that's a distant threat. Or, we
fret about digital surveillance with metaphors from the past. "1984,"
George Orwell's "1984," it's hitting the bestseller lists again. It's a
great book, but it's not the correct dystopia for the 21st century. What we
need to fear most is not what artificial intelligence will do to us on its
own, but how the people in power will use artificial intelligence to
control us and to manipulate us in novel, sometimes hidden, subtle and
unexpected ways. Much of the technology that threatens our freedom and our
dignity in the near-term future is being developed by companies in the
business of capturing and selling our data and our attention to advertisers
and others: Facebook, Google, Amazon, Alibaba, Tencent.

Now, artificial intelligence has started bolstering their business as well. And
it may seem like artificial intelligence is just the next thing after
online ads. It's not. It's a jump in category. It's a whole different
world, and
it has great potential. It could accelerate our understanding of many areas
of study and research. But to paraphrase a famous Hollywood philosopher, "With
prodigious potential comes prodigious risk."

Now let's look at a basic fact of our digital lives, online ads. Right? We
kind of dismiss them. They seem crude, ineffective. We've all had the
experience of being followed on the web by an ad based on something we
searched or read. You know, you look up a pair of boots and for a week,
those boots are following you around everywhere you go. Even after you
succumb and buy them, they're still following you around. We're kind of
inured to that kind of basic, cheap manipulation. We roll our eyes and we
think, "You know what? These things don't work." Except, online, the
digital technologies are not just ads. Now, to understand that, let's think
of a physical world example. You know how, at the checkout counters at
supermarkets, near the cashier, there's candy and gum at the eye level of
kids? That's designed to make them whine at their parents just as the
parents are about to sort of check out. Now, that's a persuasion
architecture. It's not nice, but it kind of works. That's why you see it in
every supermarket. Now, in the physical world, such persuasion
architectures are kind of limited, because you can only put so many things
by the cashier. Right? And the candy and gum, it's the same for everyone, even
though it mostly works only for people who have whiny little humans beside
them. In the physical world, we live with those limitations.

In the digital world, though, persuasion architectures can be built at the
scale of billions and they can target, infer, understand and be deployed at
individuals one by one by figuring out your weaknesses, and they can be
sent to everyone's phone private screen, so it's not visible to us. And
that's different. And that's just one of the basic things that artificial
intelligence can do.

Now, let's take an example. Let's say you want to sell plane tickets to
Vegas. Right? So in the old world, you could think of some demographics to
target based on experience and what you can guess. You might try to
advertise to, oh, men between the ages of 25 and 35, or people who have a
high limit on their credit card, or retired couples. Right? That's what you
would do in the past.

With big data and machine learning, that's not how it works anymore. So to
imagine that, think of all the data that Facebook has on you: every status
update you ever typed, every Messenger conversation, every place you logged
in from, all your photographs that you uploaded there. If you start typing
something and change your mind and delete it, Facebook keeps those and
analyzes them, too. Increasingly, it tries to match you with your offline
data. It also purchases a lot of data from data brokers. It could be
everything from your financial records to a good chunk of your browsing
history. Right? In the US, such data is routinely collected, collated and
sold. In Europe, they have tougher rules.

So what happens then is, by churning through all that data, these
machine-learning algorithms -- that's why they're called learning
algorithms -- they learn to understand the characteristics of people who
purchased tickets to Vegas before. When they learn this from existing
data, they
also learn how to apply this to new people. So if they're presented with a
new person, they can classify whether that person is likely to bu