Re: DBCs now issued by DMT

2002-12-08 Thread Peter Fairbrother
OK, suppose we've got a bank that issues bearer money.

Who owns the bank? It should be owned by bearer shares, of course.

Can any clever person here devise such a protocol?

I'd guess that all the Bank's finances should be available to anyone who
asks. That should include an accounting of all the money issued. And not
be reliant on one computer to keep the records.

Or do the propounders want to make a profit, or to control the bank?


-- 
Peter Fairbrother

(who's drunk now, but will be sober tomorrow, and may regret posting this
then...)





Re: Optical Time-Domain Eavesdropping Risks of CRT Displays

2002-03-12 Thread Peter Fairbrother

 John Young wrote:

 Markus Kuhn has released this after learning of
 Joe Loughry's announcement.
 
 -
 
 Announced 5 March 2002.
 To be presented at IEEE Oakland conference, May 2002
 
 
 http://www.cl.cam.ac.uk/~mgk25/ieee02-optical.pdf
 
 Optical Time-Domain Eavesdropping Risks of CRT Displays
 
 Markus G. Kuhn
 University of Cambridge, Computer Laboratory
 JJ Thomson Avenue, Cambridge CB3 0FD, UK
 [EMAIL PROTECTED]
 
 Abstract

I've snipped the abstract, because it's dry as ditchwater. I can only
recommend you read this, or at least look at the pictures, if you haven't
already.

Wow. 

Makes Tempest look like a toy. Nice (?) one, Markus.

-- Peter Fairbrother





Re: Where's the smart money?

2002-02-11 Thread Peter Fairbrother

I don't know whether to smile or to call you an arsehole - you give few
clues. "Economic growth"?

I just hope that if your suggestion is taken up then all the bills that are
declared invalid belong to you, and thus not to me or anyone else.

:)

Even credit card invalidity is not thoroughly recognised these days - just
before Christmas I ordered a CD which was sent even though I'd inadvertently
gone over my limit and the payment authorisation was refused. They wrote a
note telling me so on the delivery invoice. A human being, not a computer
system.

People give cash notes to other people, who may or may not be retailers. If
it's not humanly recognisable without a terminal, it's likely to be refused.
I suppose they could stage an introduction of new notes, as was done with the
Euro, but I can't see it working.

And after a few exchanges on the lines of "It won't pass the box" - "Give me
my change or I'll break your head", I suspect the retailer boxes would become
as likely to be sabotaged as the notes.

What about, e.g., markets where there is no electricity, never mind
connectivity to check notes? R. Branson started there, for one.


Cash has its place, but requiring electronic confirmation is exactly where
it isn't. We have credit cards for that. Cash needs to be authenticatable by
humans alone.

-- Peter Fairbrother


 Sampo Syreeni wrote:

 On Mon, 11 Feb 2002, Trei, Peter wrote:
 
 That's the scenario which is (semi) worrying. As the tagged bills wear,
 some fraction of the RFID transponders will inevitably fail. When this
 happens, is the bill declared invalid?
 
 I see no reason why sufficiently reliable RFID notes (say an MTBF in
 average use of around 5; not technologically infeasible, yet around what
 current print-only notes can take at max) could not be handled this way.
 But if this is really such a problem, one would expect the issuer to be
 able to invest a fair amount of money per bill in circulation into
 verification methods in excess of what you'd typically see in a grocery
 store -- a reasonable MTBF and enough circulation through the issuer would
 lead to few notes getting into a bad shape to be passed this far up the
 chain. Thus, failed notes could be replaced at a cost not much higher than
 that incurred by routine check-ups, only with a greater delay.
 
 Besides, there's a point in invalidating failed bills -- if this is not
 done, where's the incentive for people to keep the stock in shape? A
 monetary economy, by itself, *can* adapt to lost bills via deflation, and
 bills going invalid is something nobody really wants to experience.
 Also, it is likely that deflationary pressures arising out of economic
 growth will completely drown out any effects lost notes might have on the
 larger economy. The implication is, wear and tear of bills can be
 accurately analyzed by treating them as a slowly devaluing physical good,
 and the usual efficiency arguments apply.





Re: PGP GPG compatibility

2002-01-21 Thread Peter Fairbrother

Brad's point about writing encryption software for Windows - you often write
email to people who use Windows, so that way you know your email is safe on
*both* ends - has merit, and if Windows were at all secure I'd agree, but...
Another problem with this type of zero-UI encryption is that you don't
actually know whether your email will be secure or just sent in the clear (if
you have a flag to tell you, it isn't zero-UI).

A better idea is to minimise the UI, not bring it to zero. This has the
disadvantage of making encrypted email less used, thus making encrypted
traffic more of a target, but false security is worse than no security.

I am writing m-o-o-t, which runs on a bootable CD and doesn't use Windows
(it's OpenBSD-based; the same CD runs on PCs and Macs). You can only email
another m-o-o-t user, though m-o-o-t does more than email.

The email package is part of the system, and it doesn't allow even the
stupidest or most intelligent user on either end to do anything insecure,
within reason. It is transparent to the user except when interaction is
needed, e.g. writing to a new correspondent (verifying public keys), storing
files (choosing the level of protection), or setting up (there are some
things a new user must know).


m-o-o-t will use something similar to Pete's message-keys-stored-on-a-server
suggestion (actually DH keyparts), with the addition that the keyparts are
signed. The 175-bit public signing key is included with every message - no
long PGP strings - and I'm trying to convert the key to ASCII art to make it
more easily recognisable. Two shared keys are automatically and transparently
set up for later communications, and the address book is updated. The shared
keys are updated with each message.
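
Roughly, the per-message key update could look something like this hash
ratchet - a simplified illustrative sketch only, with made-up names and
construction, not the actual m-o-o-t code:

    # Illustrative sketch of updating a shared key with each message.
    # Construction and labels are assumptions, not the m-o-o-t protocol.
    import hashlib

    def ratchet(shared_key: bytes, message: bytes) -> bytes:
        """Derive the next shared key from the current key and the message."""
        digest = hashlib.sha256(message).digest()
        return hashlib.sha256(b"moot-ratchet" + shared_key + digest).digest()

    # Both ends hold the same key and apply the same update after each
    # message, then erase the old key, so a key seized later cannot be used
    # to read earlier traffic.
    key = bytes(32)                        # placeholder initial shared key
    key = ratchet(key, b"first message")   # sender and recipient both do this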


On a side note, there is no choice of cypher or protocol. The multiple
cyphers and protocols used by PGP and GPG are the main cause of this thread!
If encryption software writers can't decide which cypher to use they
shouldn't be writing encryption software.

As m-o-o-t is mainly designed for GAK resistance, all persistent keys
(except some locally-used SFS keys) are used only for signatures. The use of
persistent keys for encryption in both PGP and GPG makes them unsuitable for
GAK resistance, and if you haven't got GAK yet, you might get it someday,
making all your present traffic insecure.

-- Peter Fairbrother


Pete Chown wrote:

 John Gilmore wrote:
 
 Brad Templeton has been kicking around some ideas on how to make zero-UI
 encryption work (with some small UI available for us experts who care more
 about our privacy than the average joe).
 
 http://www.templetons.com/brad/crypt.html
 
 That's an interesting article.  I wrote Whisper (http://234.cx/whisper.php) as
 a different way of making crypto more usable.  The idea is that you simply
 agree a pass phrase with the correspondent beforehand.  You then encrypt your
 message with a small and hopefully bullet-proof program.  It isn't innovative
 cryptographically, and that is the point -- hopefully it is simple enough that
 anyone with basic computer literacy can make it work.
 
 Of course the effect of Whisper is different to the zero-UI encryption.
 Whisper provides you with good security (subject to weak pass phrases and
 bugs), but you must agree a pass phrase beforehand.  Zero-UI encryption is
 more vulnerable to active attacks on the network, but works with much less
 effort.
 
 One enhancement to the zero-UI model that I think might be worthwhile is
 automated key exchange ahead of the first message.  So when Alice asks to
 email Bob, her computer first sends a message asking for Bob's key. When the
 reply is received, Alice's original message is taken out of the queue,
 encrypted and sent.  This way the first message doesn't go across the network
 in the clear.
 
 If we don't want to add another round-trip time, we could make keys available
 from a key server.  This would have the disadvantage that attackers could
 compromise the key server and replace the keys with false ones.  However, this
 would be detected almost straight away if they could not modify communications
 going directly between Alice and Bob -- Bob would receive a message that he
 couldn't decrypt.  Normally surveillance operations have to be kept secret so
 this kind of attack would be impractical.







Re: Scarfo keylogger, PGP

2001-10-16 Thread Peter Fairbrother

The keystroke capture component (which does not work when the modem is
operating) would capture email when composed offline before transmission. I
don't know whether this needs a wiretap warrant or not, but in effect it is
tapping email, during a part of its journey from brain to brain.

The PGP-key capture component only captured the PGP logon, and wouldn't
capture email in any case. It would work when the modem was working (on
something else).

The encrypted data on Scarfo's computer may or may not include email, which
the PGP key would decode, but the FBI were authorised to seize business
records, not email. Perhaps the FBI might not be allowed to decrypt or look
at any email found, though in practice it would be nearly impossible to stop
them doing so.

The affidavit is extremely complex and hard to unravel - whether to try to
preserve secrecy, in the hope that it will confuse the defence/Court, or
simply because it's legalese, I don't know.


-- Peter Fairbrother

 David Wagner wrote:

 It seems the FBI hopes the law will make a distinction between software
 that talks directly to the modem and software that doesn't.  They note
 that PGP falls into the latter category, and thus -- they argue -- they
 should be permitted to snoop on PGP without needing a wiretap warrant.
 
 However, if you're using PGP to encrypt email before sending, this
 reasoning sounds a little hard to swallow.  It's hard to see how such a
 use of PGP could be differentiated from use of a mail client; neither
 of them talk directly to the modem, but both are indirectly a part of
 the communications path.  Maybe there's something I'm missing.
 
 If you're using PGP to encrypt stored data only, though, then I can
 see how one might be able to make a case that use of PGP should be
 distinguished from use of a mail client.
 
 Does anyone know what PGP was used for in this case?  Was it used only
 for encrypting stored data, or was it also used from time to time for
 encrypting communications?







Re: Scarfo keylogger, PGP

2001-10-16 Thread Peter Fairbrother

Capturing keystrokes of email in composition would appear to me to be part
of a transfer of "...intelligence of any nature transmitted ... in part by a
wire...", and nothing to do with stored email or 2703, but I am not a
lawyer.

-- Peter Fairbrother


 Steven M. Bellovin wrote:
[snip] 
 The problem is that you're thinking like a computer scientist instead
 of like a lawyer...
 
 Definitions are important in the law.  The wiretap statute (18 USC 2510
 et seq, http://www4.law.cornell.edu/uscode/18/2510.html) defines
 an "electronic communication" as "any transfer of signs,
 signals, writing, images, sounds, data, or intelligence of any
 nature transmitted in whole or in part by a wire, radio,
 electromagnetic, photoelectronic or photooptical system that
 affects interstate or foreign commerce, but does not include -
 (A) any wire or oral communication..."  ("Wire communications"
 refers to telephone calls.)  Interception of such transmissions
 is one of the things governed by the wiretap statute; the procedure
 for getting an authorization for a tap is very cumbersome,
 and is subject to numerous restrictions in both the statute and
 DoJ regulations.
 
 Access to *stored communications* -- things that aren't actually
 traveling over a wire -- are governed by 18 USC 2701 et seq.,
 which was added to the wiretap statute in 1986.  (That's when
 electronic communications were added as well.)  The rules for
 access there are much simpler.  But that section was written on
 the assumption that email would only be stored on your service
 bureau's machine!  In this case, it would appear that we're back to
 the ordinary search and seizure statutes governing any computer records
 owned by an individual.  *But* -- if they're *in the process of being
 sent* -- 2511 would apply, it would be a wiretap, and it would be
 hard to do.  The FBI agents who wrote that keystroke logger are
 well aware of this distinction, and apparently tried to finesse
 the point by ensuring that no communications (within the meaning
 of the statute) were taking place when their package was operating.
 
 I suppose that someone could make an argument to a judge that
 email being composed is intended for transmission, and that it
 should therefore be covered by 2511.  The government's counter will
 be to cite 2703, which provides for simpler access to some email, as
 evidence that Congress did not intend the same protections for
 email not actually in transit.  I'd have to reread the ruling
 in the Steve Jackson Games case to carry my analysis any further,
 but I'll leave that to the real lawyers.
 
 
 







Re: chip-level randomness?

2001-09-19 Thread Peter Fairbrother

 Bram Cohen wrote:

 On Tue, 18 Sep 2001, Pawel Krawczyk wrote:
[..]
 It's not that stupid, as feeding the PRNG from i810_rng at the kernel
 level would be resource intensive,
 
 You only have to do it once at startup to get enough entropy in there.

If your machine is left on for months or years the seed entropy would become
a big target. If your PRNG state is compromised then all future uses of the
PRNG output are compromised, which means pretty much everything cryptographic.
Other attacks on the PRNG also become possible.

 and would require to invent some defaults without any reasonable
 arguments to rely on. Like how often to feed the PRNG, with how much
 data etc.

The Intel RNG outputs about 8 kB/s (I have heard of higher rates). Using all
this entropy to reseed a PRNG on a reasonably modern machine would not take
up _that_ many resources. And it would pretty much defeat any likely attacks
on the PRNG.
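
As a rough sketch of what such periodic reseeding might look like (the device
path, rates and construction here are my assumptions, purely illustrative,
not any particular kernel's design):

    # Illustrative sketch: fold hardware-RNG output into a hash-based pool
    # at intervals, and derive PRNG output from the pool rather than
    # exposing it directly. Not a production CSPRNG.
    import hashlib, os

    pool = hashlib.sha256(os.urandom(32))     # running pool state
    counter = 0

    def reseed_from_hwrng(nbytes=256, dev="/dev/hwrng"):
        """Mix hardware-RNG bytes into the pool; fall back to os.urandom."""
        try:
            with open(dev, "rb") as f:
                raw = f.read(nbytes)
        except OSError:
            raw = os.urandom(nbytes)
        pool.update(raw)

    def prng_output(n=32):
        """Derive output from the pool state and a counter, never the raw pool."""
        global counter
        counter += 1
        return hashlib.sha256(pool.digest() + counter.to_bytes(8, "big")).digest()[:n]

    # Calling reseed_from_hwrng() every second or so consumes only a small
    # fraction of an 8 kB/s hardware RNG, yet limits how long a compromised
    # PRNG state stays useful to an attacker.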

 At startup and with 200 bits of data would be fine.

So you need a cryptographically-secure PRNG that takes a 200-bit seed. As
the output is used by programs that may use strange and not-yet-invented
algorithms which may interact with and weaken the PRNG, how are you going to
design it? And what happens if your PRNG is broken? Everything is lost, the
attacker has got root so to speak.

 Of course, there's the religion of people who say that /dev/random output
 'needs' to contain 'all real' entropy, despite the absolute zero increase
 in security this results in and the disastrous effect it can have on
 performance.

Sometimes it may have no effect on security, but it can affect it badly.
Brute-force attacks on the PRNG could be more efficient than on the cipher
if 256-bit or longer keys were used. With the possible introduction of
quantum computing looming, it might well be advisable to use such key
lengths for data that requires long-term security.

I agree that performance hits arise if an all-real-random approach is used,
but personally I am in favour of using all the entropy that can easily be
collected without taking those hits. The Intel RNG can do this nicely
(although I would use other sources of entropy as well).


-- Peter Fairbrother







Re: chip-level randomness?

2001-09-19 Thread Peter Fairbrother

Bram,

I need _lots_ of random-looking bits to use as cover traffic, so I'm using
continuous reseeding (of a BBS PRNG) with i810_rng output on the i386
platform, as well as other sources (the usual suspects, plus CD latency, plus
an optional USB feed-through RNG device a bit like a dongle). I don't use a
hardware RNG on Apple, 'cos it doesn't have one. Others would perhaps not
need so many bits.

I do hash them, but I don't really trust any hash, algorithm, or RNG, so I
use all the entropy I can get from anywhere and mix it up. I try to arrange
things so that each source is sufficient by itself to provide decent
protection.
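
Something like the following is what I mean by mixing - a toy sketch with
made-up source names, where any single unpredictable source is enough to make
the final seed unpredictable:

    # Toy sketch: hash each entropy source separately, then hash the
    # concatenation. Source names are placeholders, not real device code.
    import hashlib, os, time

    def read_source(name: str) -> bytes:
        # Stand-ins for i810_rng output, CD latency timings, a USB RNG, etc.
        if name == "os":
            return os.urandom(64)
        if name == "timing":
            return str(time.perf_counter_ns()).encode()
        return b""

    def mixed_seed(sources=("os", "timing")) -> bytes:
        parts = [hashlib.sha256(read_source(s)).digest() for s in sources]
        # If even one source is genuinely unpredictable, so is the result.
        return hashlib.sha512(b"".join(parts)).digest()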

It might be a better idea to schedule reseeding of the PRNG depending on
usage rather than time for more everyday use. Actually I don't disagree with
you much, except I'd like to see reseeding more often than once a minute.

There is another reason to use a PRNG rather than a real-rng, which is to
deliberately repeat random output for debugging, replaying games, etc. Not
very relevant to crypto, except perhaps as part of an attack strategy.

-- Peter


 On Wed, 19 Sep 2001, Peter Fairbrother wrote:
 
 Bram Cohen wrote:
 
 You only have to do it once at startup to get enough entropy in there.
 
 If your machine is left on for months or years the seed entropy would become
 a big target. If your PRNG status is compromised then all future uses of
 PRNG output are compromised, which means pretty much everything crypto.
 Other attacks on the PRNG become possible.
 
 Such attacks can be stopped by reseeding once a minute or so, at much less
 computational cost than doing it 'continuously'. I think periodic
 reseedings are worth doing, even though I've never actually heard of an
 attack on the internal state of a PRNG which was launched *after* it had
 been seeded properly once already.







How to ban crypto?

2001-09-16 Thread Peter Fairbrother

Banning cryptography to deter terrorism, or controlling it to give GAK, is
much in the news these days. I wonder if it could be done?

Bin-Laden was at one time said to use stego in posted images for comms. I
doubt this was true, but it would be very hard to stop. Good stego can be
undetectable (and deniable) for short messages of the type needed by
terrorists. Without depth it can be very hard to detect even ordinary
stego, and stego is advancing fast.
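
For concreteness, here is a toy least-significant-bit embedding - deliberately
naive, statistically detectable, and nothing like the good stego mentioned
above, but enough to show why a ban is unenforceable:

    # Toy LSB steganography over raw sample bytes; illustrative only.
    def embed(cover: bytearray, message: bytes) -> bytearray:
        bits = [(byte >> i) & 1 for byte in message for i in range(8)]
        assert len(bits) <= len(cover), "cover too small"
        for i, bit in enumerate(bits):
            cover[i] = (cover[i] & 0xFE) | bit    # overwrite the low bit
        return cover

    def extract(stego: bytes, nbytes: int) -> bytes:
        out = bytearray()
        for b in range(nbytes):
            byte = 0
            for i in range(8):
                byte |= (stego[b * 8 + i] & 1) << i
            out.append(byte)
        return bytes(out)

    samples = bytearray(range(256))     # stands in for image sample data
    assert extract(embed(samples, b"hi"), 2) == b"hi"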

To prevent traffic analysis, public fora such as newspaper private ads or
chalk marks on walls have been used by spies and terrorists for a long time,
and modern ones like Usenet newsgroups aren't very different. Requiring
posters to prove their identity would be difficult if not impossible, and
wouldn't work against undetectable stego anyway. Even a popular privately run
site could be used to provide cover traffic. That's not counting the CIA's
SafeWeb anonymiser, remailers, and the like.

Subliminal channels in Government-approved crypto could also be used. Word
or phrase selections can carry messages. Pre-arranged codes can be as secure
as OTP, and impossible to detect or prove. The list is long if not endless.

Perhaps Governments can ban (non-approved?) encryption software, and punish
those who have it on their computers? I'm no expert, but it seems likely
that a macro worm could be written to do hard crypto without great
difficulty, and people can reasonably say they didn't know it was there. It
might even be possible to embed this functionality in a virus.

Certainly it could be included in freeware available on the 'net. I've also
been looking at the possibility of steganographically hiding
functionality, and while I can't do it yet, I'm convinced it could be done.

Any other suggestions for how to ban crypto? I can't think of anything that
would actually work against terrorists.

-- Peter Fairbrother







Re: moving Crypto?

2001-08-03 Thread Peter Fairbrother

 Ray Dillinger at [EMAIL PROTECTED] wrote:
 
 It is time to move the conference because it is no longer safe for
 cryptography researchers to enter the USA.
 
 Bear


I'm worried about the long-term national security implications. If the DMCA
stands and US cryptography researchers are imprisoned en masse, or forbidden
to work, then the best US crypto people will migrate to - Russia? - where
they would be free to do their research.

Non-government cryptography researchers are a strategic asset, vital for
economic reasons, needed to provide a pool of talent/knowledge for NSA/the
Armed Forces, available to be called upon in time of war, etc.etc.

How about moving Crypto to Moscow, or perhaps St Petersburg?


-- Peter







SFS for anonymity

2001-07-18 Thread Peter Fairbrother

Given: an online Steganographic Filing System database based on the second
construction of Anderson, Needham and Shamir*, with many users. Users write
email to the database, with random cover writes. They read from the database
to collect their mail; reads are covered by random cover reads, plus random
reads/writes when they have no mail.

Assumptions include: messages are encrypted. Users would rather lose their
mail than have it compromised. All communications and alterations to the
database are intercepted, and the database itself is compromised. Shared
secret keys between users are allowed. Stored hashes of the database state
are allowed, to ensure that it has changed enough. The database/userbase can
be split into groupwrite/anyread and anywrite/groupread segments (group
membership is random and not secret).

The point is to foil traffic analysis without a distributed network or
trusted third party. Any ideas/insuperable objections?
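
The cover-traffic discipline I have in mind is roughly the following - a
minimal sketch with assumed parameters, not the Anderson-Needham-Shamir
construction itself:

    # Illustrative sketch: every user does a fixed number of reads and
    # writes per round, padding real operations with random dummies, so an
    # observer of all traffic learns nothing from the access pattern.
    import os, random

    READS_PER_ROUND, WRITES_PER_ROUND, BLOCK = 4, 4, 1024

    def round_ops(real_reads, real_writes, db_size):
        """Pad and shuffle so rate and pattern are constant every round."""
        reads = real_reads + [random.randrange(db_size)
                              for _ in range(READS_PER_ROUND - len(real_reads))]
        writes = real_writes + [(random.randrange(db_size), os.urandom(BLOCK))
                                for _ in range(WRITES_PER_ROUND - len(real_writes))]
        random.shuffle(reads)
        random.shuffle(writes)
        # Real writes are already encrypted, so dummy blocks of random bytes
        # are indistinguishable from them on the wire and in the database.
        return reads, writes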

(Could datarates be optimised to implement untraceable internet telephony as
well as email on a DSL/cable-type connection?)

Comments? 


-- Peter

* http://www.ftp.cl.cam.ac.uk/ftp/users/rja14/sfs3.ps.gz







Re: Starium (was Re: article: german secure phone)

2001-06-11 Thread Peter Fairbrother

 Bram Cohen at [EMAIL PROTECTED] wrote:
[..]
 I can't emphasize enough that it's very important that the form factor be
 a double-female phone jack and work when plugged in with *either*
 orientation - is this an easy thing to detect?

Surely a male-to-female jack. Plug it (male) into the wall socket and plug
the phone into it (female). You could put the electronics in a separate box
if the jack is too small. Or a box with a tail. If you have to have a double
female, like an external modem has, does it matter if, like a modem, it
doesn't work if you plug it in the wrong way round? That is easy to detect.

I don't see why you can't sell a handset though. People buy new ones when
they redecorate. And they're more resistant to Tempest-style attacks, as
unencrypted speech isn't transmitted along the cord.

A handset also allows a protocol to avoid some MITM without authentication:
Alice calls Bob, they exchange DH, and Bob replies with a few spoken digits
hashed from their shared secret, which he sees on a display on the handset.
Alice keys these digits into her phone; if they match, she can speak (until
then her voice doesn't get transmitted). Not perfect, but it helps.
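
Deriving the spoken digits could be as simple as the following sketch (the
hash, label and digit count are my assumptions, just to illustrate the idea):

    # Illustrative short-authentication-string check from a DH shared secret.
    import hashlib

    def check_digits(shared_secret: bytes, n_digits: int = 4) -> str:
        h = hashlib.sha256(b"voice-auth" + shared_secret).digest()
        return str(int.from_bytes(h[:8], "big") % 10**n_digits).zfill(n_digits)

    # A man-in-the-middle holds two different DH secrets (one with Alice, one
    # with Bob), so the digits Bob speaks won't match what Alice's phone
    # expects, and her voice is never transmitted.
    assert check_digits(b"same secret") == check_digits(b"same secret")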

Too much user interaction? Users _need_ interaction to make them aware of
the security status of the line they are using. They've all seen the spy
films where people use STUs and say "go secure" and press a button and wait
for the bleeps; if it doesn't do that they won't believe it. If you have no
user interaction then what happens if, e.g., the kids unplug the box and you
don't notice?

A handset should, of course, be compatible with a computer/headset and
encrypted-voice-over-internet software. Perhaps a PDA/mobile combo too.


-- Peter







Re: NSA tapping undersea fibers?

2001-06-01 Thread Peter Fairbrother

 John Denker at [EMAIL PROTECTED] wrote:

 I was talking with some colleagues who had read the WSJ article
 
 http://interactive.wsj.com/articles/SB990563785151302644.htm
 
 http://www.zdnet.com/zdnn/stories/news/0,4586,2764372,00.html for
 
  and who were wondering as follows:  Given that They know how to tap a
 fiber in the lab, how hard it would be for a submarine such as the USS
 Jimmy Carter to apply a tap while working 3000 meters down in the ocean.
 
 Well, it ain't gonna happen by sending any such sub down to 3000 meters.
 
 It is highly unusual for a full-sized sub to go below 500 meters.  US subs,
 which are believed to be not as tough as the Russian ones, are limited to
 something more like 300 meters.
 http://www.fas.org/nuke/guide/russia/slbm/941.htm
 http://www.fas.org/nuke/guide/usa/slbm/ssbn-726.htm
 
 One might have guessed that subs could zoom down 1000 meters about as
 easily as airplanes can zoom up 1000 meters -- but that's not the case.
 

Jane's suggests a working depth of at least 2000 ft (600 m) for Seawolf-class
submarines such as the Jimmy Carter, but it might be more, as their
pressure hulls are made of HY-100 high-strength steel, the material of choice
for working depths of 10,000 ft (3000 m) or more in small commercial subs. I
agree that they aren't going to do it at much more than about 3000 ft
(1000 m), but they could probably zoom down that far without needing major
inspection/overhaul afterwards.

 According to the WSJ article, near shore (at depths of less than 1000
 feet), the cable is buried in a trench, which would add an extra layer of
 nuisance to someone trying to tap it.  Out in the deep ocean, it just lies
 on the seafloor -- but the pressure becomes a big issue.
 
 Here are some hypothetical scenarios to consider:
 
 First, it should be obvious that They don't need a submarine to tap cables
 that already make landfall in the US, which is the vast majority:

We are talking about the NSA, right? Of course they have reasons to want to
tap cables that make landfall in the US. Apart from the legal aspects - e.g.
operations conducted outside the US, inadvertent monitoring of US citizens,
and the frankly illegal things they might want to do - the NSA aren't going
to trust cable operators to keep wholesale monitoring secret.

 Anyway, here's scenario #1:  To tap a deep-sea cable, They keep the
 submarine at a modest depth, perhaps 150 meters or so.  They send down a
 small Remotely Operated Vehicle to grab the cable and lift it up to the
 sub.  They do the work there, and then return the cable to the seafloor.
 
 The cable is certainly strong enough and flexible enough to permit
 this.  (Otherwise, how could it ever have been laid?)

Cable companies do this (from the surface) when they repair cables, but they
usually cut the cable before separately raising the cut ends and splicing in
a new section. I doubt that the cable would be strong or extensible enough to
lift uncut, unless there was a lot of slack from, e.g., a previous repair.

 But could the tappers do this without leaving telltale signs?  I don't
 know.  It depends on how closely the tappees are watching.

If they don't have to lift it, almost certainly yes - unexplained performance
spikes and even outages are not unknown, and if the cable still works the
company isn't going to get too het up. I doubt anyone would even notice; a
1% change in signal strength, in one link, isn't much of a worry.

[..]
 Another scenario:  Overall it might be easier to tap the cable while it is
 still on the continental shelf, at 100-meter depth or so.  That would
 require Them to dig it out of its trench and re-bury it, but thereafter it
 would be hard for others to notice Their handiwork.

This also solves the problem of backhaul: just lay a short cable to shore.
However, cable companies do inspect/overfly their cables close to shore, to
prevent and detect damage from trawlers etc., so operating a bit further out,
at depths of between 300 and 600 m, might be better. Filtering equipment can
be hidden more effectively here, unless they want all the take.


 ===
 
 Related issue:
 
 Suppose They install a tap.  What then?  What are They going to use for
 backhaul?!!
 
 1) One option would be to use some other channel on the same cable to do
 the backhaul.
 1.1) This would be relatively straightforward if They could lease a
 suitable backhaul channel from the cable operator.  I don't know how this
 could be done without the cable operator knowing exactly what They were up
 to.  And if the cable operator is giving Them that level of consent, there
 are ways of getting the data without bothering with a submarine.

I speculate that the NSA could (and probably do) lease fibres from cable
companies. I also expect that if they wanted to install their own monitoring
equipment and security at the cable termination site, no-one would raise an
eyebrow. 

 1.2) Without a leased backhaul channel, I suppose it is possible that
 They could just insert packets 

Re: forwarded message from tylera19@hotmail.com

2001-05-14 Thread Peter Fairbrother

 Amir Herzberg at [EMAIL PROTECTED] wrote:
[..] 
 This takes care reasonably well of peer to peer e-mail (I think), and can be
 easily deployed (any volunteers? I'll be very glad to provide our system for
 this !). As to mailing lists like this one... Here one solution is manual
 moderating, of course. But for a fully automated list... maybe a charge per
 posting which is proportional to the number of subscribers (well, like an
 ad, I guess). 

Does the recipient get paid? The recipients of spam/ads? I've got unlimited
email addresses ...

--Peter

[..]



