The effectiveness of any publicly deployed system depends on
collaborative development of the whole system, including both servers
and clients (in client-server systems).  When any developer or
administrator who uses the system is able to fix it, the system can
progress past the threshold of being effective enough to depend on
for its stated goals, and then maintain that status through further
refinement and reprogramming.  Without constant, intelligent progress
drawn from broad experience and broad sets of requirements across all
of its parts, Razor will prove not good enough for anybody, and will
fail.

The difference between one and two developers working on the same
item can be all the difference in the world.  I'm not promising a
flood of developers once the servers become GPLd; on the contrary,
only a handful of new developers would show up naturally.  But they
would fill the gaps the system had before, and it would then meet the
threshold described above.  At that point its utility would become so
great that more people would depend on it, and it would continue to
be upgraded to meet those demands, with better attention to the
details of the jobs it must do.

The brash, Microsoft-style commercialization of a system designed to
stop commercial abuse is very likely to fail, measured by actual
achievement rather than by fraudulent sales.  Razor has become the
ultimate spamware by claiming on the one hand to stop most bad mail,
while on the other hand causing most of it not to be stopped; any
user who comes to depend on this system soon finds it so bad that
they must find a way to deal with the losses it causes.

We need to identify ways to create GPL versions of the programs that
perform the functions the Razor servers provide, are supposed to
provide, or were once supposed to provide.  Hacking around to make up
for the inadequacies of a head-scratching proprietary server won't
cut it.

I'm certain they're scratching their heads right now, trying not to
produce too many false positives, and in the process giving higher
scores to spammers' fraudulent reports (claiming that messages which
really are spam are not) than to the people who report real spam, all
the while missing the basic point that they don't have to put
everyone into the same assessment pool.  That we must develop trust
on the basis of a secret system is absurd.

What we need are ideas that were developed long, long ago (eighteen
years ago I heard others describe these very ideas, after I had
thought of them myself): sign assessments of messages, then send the
message IDs and the signatures into a distributed system that can
look up those assessments (and, quite possibly, the messages
themselves).  The Razor system as it stands already identifies
messages by hashes and the like.  We could GPG-sign assessments and
submit them, and we could choose whose assessments we want to use.
The assessments would be XML-based, written in a machine-parsable
form and crafted to express precisely the meaning intended by the
signer.  Each signer could describe in XML what they mean by their
assessment (I think this is what XML is for; correct me if I'm
wrong), and then others could peruse those assessments.
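
To make this concrete, here is a rough sketch of what producing such
a signed assessment could look like, using Python and the stock gpg
command-line tool.  The XML tag names and the "assessment" vocabulary
are my own invention for illustration, not any existing format:

    #!/usr/bin/env python3
    # Sketch only: build an XML assessment of a message and clearsign
    # it with the stock gpg command-line tool.  The tag names and the
    # "assessment" vocabulary are hypothetical, not a real standard.

    import hashlib
    import subprocess
    import sys

    def message_digest(raw_message):
        # Identify the message by a cryptographic hash of its raw bytes.
        return hashlib.sha1(raw_message).hexdigest()

    def build_assessment(digest, verdict, meaning):
        # <meaning> is where the signer defines, in their own words,
        # exactly what their verdict asserts.
        return ("<assessment>\n"
                "  <message-sha1>%s</message-sha1>\n"
                "  <verdict>%s</verdict>\n"
                "  <meaning>%s</meaning>\n"
                "</assessment>\n" % (digest, verdict, meaning))

    def clearsign(text):
        # Clearsign with the user's default GPG key; the result
        # carries both the assessment and proof of who made it.
        proc = subprocess.run(["gpg", "--clearsign"],
                              input=text.encode(),
                              capture_output=True, check=True)
        return proc.stdout.decode()

    if __name__ == "__main__":
        raw = sys.stdin.buffer.read()
        xml = build_assessment(message_digest(raw), "spam",
                               "I read this message myself and judged "
                               "it unsolicited bulk mail.")
        print(clearsign(xml))

Anyone receiving such a blob can check it with gpg --verify and
decide for themselves whether to honor that signer's verdicts.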

Distribution could work essentially the way I long ago perceived the
solution to USENET's problems: multiple redundant indices.  A peer
storage network would also be nice for holding this data, so that
there is no central administration acting as a single point of
failure.
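
A minimal sketch of the lookup side, assuming some simple line-based
query protocol (the server names and the protocol are hypothetical):
the client asks several independently run indices for assessments of
a given message hash and takes the union of whatever comes back, so
no single administration can censor or lose the data.

    # Sketch only: look up assessments of a message hash against
    # several independently run, redundant index servers.  The server
    # list and the one-line "LOOKUP" protocol are invented here.

    import socket

    INDEX_SERVERS = [
        ("index1.example.org", 7777),   # hypothetical hosts; any
        ("index2.example.net", 7777),   # subset of them may be down
        ("index3.example.com", 7777),   # or hostile, so never rely
    ]                                   # on just one

    def lookup_assessments(digest, timeout=5.0):
        # Ask every index and return the union of answers; the
        # combined view, not any single server, is what we trust.
        results = []
        for host, port in INDEX_SERVERS:
            try:
                with socket.create_connection((host, port),
                                              timeout) as sock:
                    sock.sendall(("LOOKUP %s\r\n" % digest).encode())
                    results.append((host, sock.makefile("rb").read()))
            except OSError:
                continue    # that index is unreachable; others suffice
        return results

The client would then verify the signature on each returned
assessment and discard those from signers it hasn't chosen to trust.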

We could easily evolve a package such as Razor, or anything like it,
if we had the software fully available; it could splinter, branch,
grow, glorp, fall over, be reprogrammed, and so on.  Instead, with
Razor (and all of its subsequent identity obfuscations), all we have
is some sort of MSN work-alike, where some secretive stranger chooses
our view of things, but without any of the benefits.



All this is based on my decision to start using Razor as one of my
exclusive mail dejunkers.  I won't fully describe how much spam I
receive through various means, but suffice it to say that it has been
at least hundreds of messages per day for almost a decade.  Over the
last few days, while funneling as much spam into the spam-sorting
system as I could, I began to realize that the scores assigned to my
assessments were basically inversely proportional to their quality.
(I could deduce the scores by looking at the cf variables my accounts
were seeing after other accounts had reported the same items.)
Worse, the most egregious spams were marked as unscorable and sailed
cleanly through every stage.

This idea of leaving it to someone else to decide, for everyone, who
can be trusted to determine what is and is not spam, and then never
revealing how that decision was made or by whom, is simply
ineffective.  At small scales it doesn't catch enough; at large
scales it attracts enough attention that fraud is introduced in
proportion to the system's honest effectiveness, so its net
effectiveness is always insufficient.  I have a horrible suspicion
that they're actually selling spamming rights to various spammers for
a high fee, so those spammers won't be marked by this despamming
software.  How the hell could they prove to us that they aren't,
given their model?

Brad Allen
<[EMAIL PROTECTED]>

Headers repeated so they fall inside the signature (date ~0-3 min. earlier than the email):
To: [EMAIL PROTECTED]
Cc: Brad Allen <[EMAIL PROTECTED]>
Subject: Razor success dependent on server collaboration 
From: Brad Allen <[EMAIL PROTECTED]>
Fcc: +backup
X-Mailer: Mew version 3.2 on XEmacs 21.4.6 (Common Lisp)
Date: Fri, 21 Mar 2003 14:28:49 -0600
