On 29 Apr 2002 at 12:29, Tim May wrote:

> The deep error which has been with us for a long time is the assumption 
> that we can create legal systems or surveillance systems which go after 
> "bad guys" but not "good guys." That is, that we can separate "bad guys" 
> like Mohammed Atta from "good guys," all in advance of actual criminal 
> or terrorist acts.
> 

...
 
> What people want to know is "Will Person X commit a crime in the 
> future?" (And hence we should deny him access to strong crypto _now_,  
> for example, which is the whole point of attempting to surveil, 
> restrict, and use data mining to ferret out bad trends.)
> 
> Even the strongest believer in the law of the excluded middle would not 
> argue that the question "Will Person X commit a crime in the future?" has a "Yes" 
> or "No" answer at the _present_ time. (Well, actually, I suppose some 
> folks _would_. They would say "I personally don't know if he will, but 
> in 50 years he either will have committed a crime or he will not have 
> committed a crime.")
> 

I think, though, that it wouldn't be too hard to find a bunch of people
who agree that Person X is a hell of a lot more likely to
commit a crime than Person Y.  The point of this data mining, I
gather, is not to actually predict individual crimes (which
is probably impossible even in principle and definitely impossible
in practice) but rather to divide the populace into
"sheeple," who need only occasional monitoring to ensure that
they continue to fit the sheeple profile, and "potential future
criminals," who would be subject to more extensive monitoring.

The problem with selling a system like this to the public is
convincing them that it won't brand as "future criminals" people
who have not committed a crime, and quite likely never will, based
on such things as what restaurants they eat at or what books they
read, when in fact that is precisely what the system is designed
to do.
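The arithmetic makes the mislabeling unavoidable.  Suppose (and these
numbers are invented) one person in ten thousand is an actual bad guy,
and the profile is astonishingly good: it flags 99% of real bad guys
and only 1% of everyone else.  Bayes' rule still says nearly every
flagged person is innocent:

    # Base-rate arithmetic for profiling; all numbers are invented.
    base_rate = 1 / 10_000   # fraction of the populace who are actual bad guys
    hit_rate = 0.99          # P(flagged | bad guy)
    false_alarm = 0.01       # P(flagged | innocent)

    # P(bad guy | flagged), by Bayes' rule
    p_flagged = hit_rate * base_rate + false_alarm * (1 - base_rate)
    print(f"{hit_rate * base_rate / p_flagged:.1%}")
    # -> 1.0%, i.e. roughly 99 of every 100 people the profile brands
    #    "future criminals" are nothing of the kind.

And that is with a fantastically accurate profile; one built from
restaurant and bookstore records would do far worse.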


> Can we Identify the Bad Guys?
> 
> Getting back to law enforcement attempting to predict the future, the 
> lack of any meaningful way to predict who will be a future Mohammed Atta 
> or Charles Manson, and who thus "should" be restricted in his civil 
> liberties, is the important point.
> 
> Could any amount of data mining have identified Mohammed Atta and his 
> two dozen or so co-conspirators? Sure, *now* we know that an "indicator" 
> is "Unemployed Arab taking flying lessons," but we surely did not know 
> this prior to 9/11.
> 
> Finding correlations ("took flying lessons," "showed interest in 
> chemical engineering," "partied at a strip club") is not hard. But not 
> very useful.
>
I think the LEOs and sheeple would be willing to accept the
general rule that anyone who has lots of money to spend yet has
no declared legitimate source of income is probably some kind
of criminal.  With a sufficiently broad definition of "criminal,"
the reasoning is actually pretty good.
 
> To the law enforcement world, this means _everyone_ must be tracked and 
> surveilled, dossiers compiled.
>
No doubt.
 
> All of the talk about "safeguards" in the data mining is just talk. Any 
> safeguard sufficient to give John Q. Public protection will give 
> Mohammed Atta protection...because operationally they are identical 
> persons: there is no subobject classifier which can distinguish them! By 
> saying Mohammed Atta is indistinguishable from other Arab men who 
> generally fit the same criteria...assuming we don't know in *advance* 
> that "Unemployed Arab taking flying lessons" is an important subobject 
> classifier.
> 

I think the kind of "abuses" they're trying to "safeguard"
against are things like an IRS agent triggering an audit on a
neighbor in retaliation for playing the stereo too loud, as
opposed to auditing someone because playing music too loud is
part of the tax-evader profile, which would be completely
proper.  I hope the distinction is clear.

> Indeed, the major "changes in ground truth" (what is actually seen on 
> the "ground," as in a battle) have come from technology. It was the 
> invention and sale of the Xerox machine and VCR that altered legal ideas 
> about copyright and "fair use," not a bunch of lawyers pontificating. In 
> both cases, the ground truth had already shifted, in a kind of 
> knowledgequake, and the Supremes had only two choices: accept the new 
> reality by arguing about "fair use" and "time-shifting," or declare such 
> machines contraband and authorize the use of storm troopers to collect 
> the millions of copiers and VCRs already sold. They chose the first 
> option.
> 

It might be amusing to speculate as to what the result
would have been had they attempted to choose the second option.
Or maybe not.


> Precisely! This is why the talk of how the Cypherpunks list (and similar 
> lists) should not be political is so wrong-headed: without a political 
> compass, where would we head?
> 

I think this comes from different meanings of the word "political."
To most people, it means lobbying legislators or fighting
court cases, maybe even carrying big signs at rallies.  I'm
pretty confident that isn't what you have in mind.

> (That the dominant political philosophy is closely-attuned to what is 
> now called "libertarianism" (but which used to be called "liberalism," 
> or "classical liberalism") is more because that's the only political 
> philosophy attuned to distributed, non-hierarchical systems. One might 
> imagine a list oriented toward using strong crypto to help with fascism, 
> or with Maoism, but there would be some deep conflicts. The absence of 
> such groups, or even of "strong crypto for social welfare" milder forms, 
> tells us a lot.)

Well, obviously a Maoist would feel that any strong crypto would
have to have its keys escrowed with party officials, and that
knowledge of crypto should be restricted to those with the
appropriate clearances.  Basically, the stuff that Dorothy Denning
says.
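The scheme is simple enough to sketch.  In its crudest form, escrow
just means every session key is deposited with the officials before it
is used.  A toy Python illustration, where the XOR "cipher" and the
escrow database are stand-ins, not real cryptography:

    import os

    # Toy key escrow, NOT real cryptography. XOR with a random
    # one-time session key stands in for a cipher; the dict stands
    # in for the party officials' copy of everyone's keys.
    escrow_database = {}  # message_id -> escrowed session key

    def xor(data: bytes, key: bytes) -> bytes:
        return bytes(b ^ k for b, k in zip(data, key))

    def send_escrowed(message_id: str, plaintext: bytes) -> bytes:
        session_key = os.urandom(len(plaintext))
        escrow_database[message_id] = session_key  # mandatory deposit
        return xor(plaintext, session_key)

    ciphertext = send_escrowed("msg-001", b"meet at the usual place")
    # The officials' "lawful access": pull the key and decrypt at will.
    print(xor(ciphertext, escrow_database["msg-001"]))

The point being that the cipher itself can be perfectly strong; the
mandatory deposit is what makes it worthless against the state.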


> 
> >
> > One encouraging thing was that a lot of the "data mining" speakers 
> > seemed
> > very interested in exploring methods for limiting use of their 
> > techniques.
> > i.e. making sure you can't get "too much" out of the database.
> > Unfortunately the discussion of "how much is too much" wasn't in scope
> > here, but at least some mechanisms might end up being in place...
> 
> And, as I argued above, I doubt that any such "limits" will either be 
> very _useful_ or will be very _long-lasting_.
> 

Can't possibly be.  The idea that the details of one's private
life must be available for scrutiny by officials is already
an abomination; the only reason people tolerate
it is that they don't care who knows what toothpaste they buy.
But it is precisely those details of one's life which one would
very much like to keep secret that the state would feel it
has a compelling need to know.

> The core point is the familiar one: we are coming to, or have reached, a 
> fork in the road. Down one path lies the Surveillance State, the 
> Panopticon, with ubiquitous cameras, intrusive questions, restrictions 
> on untraceable spending, and other detritus of the police state. Down 
> the other path lies a universe of strong crypto with a web of "opaque 
> pipes" linking "opaque objects."
> 
Right.  I think we're nearing the end of the time when it's
possible to believe that a middle path exists.

> Technologists can make the second path the reality. Lawyers and 
> lawmakers will try to take us down the first path.
> 
> --Tim May
>
George
