Computerized outdoors idea serves users virtual baloney

2004-11-29 Thread R.A. Hettinga
http://www.adn.com/outdoors/story/5849296p-5765085c.html

 Computerized outdoors idea serves users virtual baloney


(Published: November 28, 2004)
 A Texas businessman wants to rig a robotic, high-power rifle to a Webcam
in a game park so people can punch buttons and "hunt" from the comfort of
their handiest Internet connection.

 The People for the Ethical Treatment of Animals wants everyone to stop
eating fish because the slippery critters are, in their own way, as cute
and cuddly as cats and dogs.

 Has the world gone nuts?

 The proponents of what has been labeled "remote-control hunting" are,
predictably, arguing that a sanitized, virtual slaughter would be a boon
for the disabled.

 The leaders of the Fish Empathy Project are, with equal predictability,
trying to convince everyone to spare the fish because they are sensitive,
thinking creatures that travel in schools.

 One group of loonies thinks anyone should be able to kill anything the
easiest way possible -- simply because we can.

 The other group thinks nobody should kill anything because we're all
brother fauna. The flora are apparently exempt from the discussion because
they're rooted in place. Were they able to move around and wag their
leaves, PETA would likely argue we shouldn't eat them either.

 Whatever happened to the natural order of things?

 Instead, we have people who think it would be sporting to hunt and kill
animals by remote-control with their computer. That sort of thinking is
just plain sick.

 Where exactly is the "sport"? More importantly, where is the hunt?

 Webster's New World Dictionary defines "hunt" this way: "1. to go out to
kill or catch (game) for food or sport; 2. to search eagerly or carefully
for; try to find; 3. a) to pursue; chase; drive; b) to hound; harry;
persecute; 4. a) to go through (a woods, fields, etc.) in pursuit of game"
and on and on in that vein.

 Nowhere is there any mention of sitting in a home or office, watching a
computer-display screen and punching buttons. If that qualifies as hunting,
no one really need ever hunt again because we've then reduced the killing
of animals to the shooting of pictures.

 After all, a hunter who chose to engage in this sort of computer sport
wouldn't really be shooting an animal. He'd be shooting a picture of an
animal on his computer screen, thereby telling a piece of machinery in the
middle of a field somewhere to do the actual execution.

 And if all you're really doing is shooting a picture, what difference
does it make if the picture represents a real animal or a virtual one? For
that matter, how would you even know for certain what you shot?

 Think how easy it would be to scam this sort of "hunting."

 Put up a Web site. Run a film of animals walking around in a field. Let
the people who sign onto the Web site and pay their fee shoot the animals.
Run some film of an animal dying.

 Then you ship the hunter 50 pounds of beef from the supermarket and tell
her that's the animal she killed.

 Someone really creative might even be able to convince PETA to endorse an
Internet hunting site that kills virtual animals. Look, PETA wants to save
real animals from being killed. If shooting a virtual deer spares a real
deer while satisfying someone's instinctive urge to hunt, isn't that a good
thing?

 And if we can do this with hunting, why not fishing?

 Someone could rig a Webcam to a robotic fishing rod along the Russian
River. You could sit at home and watch on your computer as the red salmon
swarm up that stream, then maneuver a joy stick to make the rod cast a fly
in front of them.

 Let it drift. Maybe even hear the computer going tappa-tappa-tappa to give
you the feel of a lead weight bouncing along the river bottom. Feel the
joystick jerk against your hand as a fish hits and then battle it across
the table as the fight is on.

 Oh, the thrill, the excitement, the virtual adrenaline rush, until at last
you bring that flapping salmon into view of the robotic net that scoops it
up.

 A week later, salmon filets would arrive in the mail.

 Does it matter if any of this is real? Isn't the experience exactly the
same if all you are seeing on your computer is virtual? Does a prerecorded
film of salmon coming up the Russian really look any different than a live
camera feed of salmon coming up the stream?

 Of course not.

 The only problem might come in producing a soy product that really tastes
like salmon. But science can certainly solve that.

 Wouldn't that be perfect for just about everybody, except the poor, dead
soybean plants? I hear they're quite sensitive, too.

-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'



Re: Idea: Offshore gambling as gateway between real and electronic money

2004-04-17 Thread Bill Stewart
At 11:35 AM 4/17/2004, Thomas Shaddack wrote:
Adoption of anonymous e-money is to a great degree hindered by the lack of
infrastructure to convert this currency to/from meatspace money.
However, a possible method exists, using offshore gambling companies.
You're trying too hard.
Gambling has always been a convenient money-laundering technique,
as long as the casinos accept the kinds of money you're trying to launder.
That's also why spook agencies get anti-money-laundering laws passed.
If the casino will take your ecash and give you chips,
and you want to make a pretense of gambling rather than
just turning the chips back in for conventional euros,
go bet ~half the chips on red, ~half on black,
some insurance money on green, and tip the croupier,
and the casino collects their 1/37 or 2/38 cut.
... "Your winnings, sir."
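
A quick back-of-the-envelope check in Python of what that wash cycle costs
(my own sketch, not from the original post; the insurance bet on green is
ignored for simplicity):

# expected loss per spin when the stake is split evenly on red and black:
# you keep the stake unless a zero comes up, in which case you lose it all
def expected_loss_fraction(pockets, zeros):
    return zeros / pockets

for name, pockets, zeros in [("European wheel", 37, 1), ("American wheel", 38, 2)]:
    frac = expected_loss_fraction(pockets, zeros)
    print(f"{name}: ~{frac:.1%} of the chips lost per spin, on average")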







Idea: Offshore gambling as gateway between real and electronic money

2004-04-17 Thread Thomas Shaddack

Adoption of anonymous e-money is to a great degree hindered by the lack of
infrastructure to convert this currency to/from meatspace money.
However, a possible method exists, using offshore gambling companies.

There may be a special kind of gamble that looks from the outside
like regular betting, but in which the participants know the betting
results to a certain degree. Their e-money buys insight into the game:
meatspace money is placed as the bet, and the e-money buys the knowledge of
the cards/numbers/whatever that guarantees a win equal to the e-money's
value.

In other words: without the e-money, the game is a normal game with the
appropriate probability of winning. With the e-money, the player can buy a
100%-certain win of a given value.

Conversely, a rigged game with a 0% probability of winning could be used
for depositing real money and converting it to e-money.
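
A toy accounting model of the two rigged games (my own illustration, with
made-up class and method names, not part of the proposal):

# a 0%-win game turns meatspace cash into an e-money balance;
# a bought, 100%-certain win pays the balance back out as "winnings"
class GatewayCasino:
    def __init__(self):
        self.emoney = {}                      # e-money credited per player

    def deposit_via_sure_loss(self, player, cash):
        # the player "loses" a bet of `cash`; the house credits e-money
        self.emoney[player] = self.emoney.get(player, 0) + cash

    def withdraw_via_sure_win(self, player, amount):
        # e-money buys the knowledge that guarantees a win of `amount`
        assert self.emoney.get(player, 0) >= amount
        self.emoney[player] -= amount
        return amount                         # paid out as gambling winnings

casino = GatewayCasino()
casino.deposit_via_sure_loss("alice", 100)
print(casino.withdraw_via_sure_win("alice", 100))   # 100, now in meatspace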

Is this approach possible?
Is this approach feasible?
Where are the hidden problems there?



openssl/gpg and IDEA

2004-01-20 Thread J.A. Terranson

IDEA seems to be completely missing from everything everywhere :-(  Does
anybody know how to enable openssl for IDEA (no, I don't require the
commercial license for this)?

Thanks!

-- 
Yours,
J.A. Terranson
[EMAIL PROTECTED]

Unbridled nationalism, as distinguished from a sane and legitimate
patriotism, must give way to a wider loyalty, to the love of humanity as a
whole. Bahá'u'lláh's statement is: "The earth is but one country, and
mankind its citizens."

The Promise of World Peace
http://www.us.bahai.org/interactive/pdaFiles/pwp.htm




Re: Idea: Simplified TEMPEST-shielded unit (speculative proposal)

2003-12-15 Thread Tim May
On Dec 14, 2003, at 8:33 PM, Thomas Shaddack wrote:

TEMPEST shielding is a fairly esoteric field (at least for non-EM
specialists). But it could potentially be made easier by simplifying the
problem. If we don't want to shield the user interface (e.g. we want just a
cryptographic processor), we may put the device into a solid metal case
without holes, battery-powered, with the seams in the case covered with
e.g. adhesive copper tape. Input and output can be mediated by optical
fibers, whose ports, a fraction of a millimeter in diameter and carefully
shielded, can be the only holes in the otherwise seamless, well-grounded
box. There are potential cooling problems, as there are no ventilation
holes in the enclosure; this can be alleviated by using one side of the box
as a large passive cooler, possibly with an externally mounted fan with a
separate power supply. If magnetic shielding is required as well, the box
could be made of permalloy or another material with similar magnetic
properties.

I am not sure how to shield a display. Maybe take an LCD, bolt it onto
the shielded box, and cover it with a fine wire mesh and possibly
metalized glass? Using an LCD with a long response time for the individual
pixels also dramatically reduces the value of any optical emissions.

I worked inside a Faraday cage in a physics lab for several months. And,
later, I did experiments in and around Faraday cages. Shielding is
fairly easy to measure. (Using portable radios and televisions, or even
using the Software-Defined Radio as a low-cost spectrum analyzer.)

My advice? Skip all of the nonsense about building special laptops or 
computers and special displays with mesh grids over the displays. Those 
who are _casually_ interested will not replace their existing Mac 
Powerbooks or Dell laptops with this metal box monster.

Instead, devise a metal mesh bag that one climbs into to use whichever 
laptop is of interest. To reduce costs, most of the bag can be 
metallized fabric that is not mesh, with only part of it being mesh, 
for breathability. (Perhaps the head region, to minimize claustrophobia 
and to allow audio and visual communication with others nearby.)

I would imagine a durable-enough metallized fabric bag could be 
constructed for under a few hundred dollars, which is surely cheaper 
for most to use than designing a custom laptop or desktop.

Or consider heads-up LCD glasses. These have been available for PCs and 
gamers for a few years (longer in more experimental forms, of course, 
dating back to the VR days of the late 80s). Sony has had a couple of 
models, and so have others. Some have video resolutions (PAL, NTSC), 
some have VGA resolutions. Perfectly adequate for displaying crypto 
results and requesting input.

These very probably radiate little. But of course a lightweight hood, a 
la the above mesh bag, would drop the emissions by some other goodly 
amount of dB. Experiments necessary, of course.

Interface to a laptop or PC could be as you described it, with shielded 
cables. Or just use a small PC (Poqet, etc.) and move the keyboard and 
CPU under the draped hood. Leakage out the bottom, hence the earlier 
proposal for a full bag, like a sleeping bag.

--Tim May



Idea: Simplified TEMPEST-shielded unit (speculative proposal)

2003-12-15 Thread Thomas Shaddack
TEMPEST shielding is a fairly esoteric field (at least for non-EM
specialists). But it could potentially be made easier by simplifying the
problem.

If we don't want to shield the user interface (e.g. we want just a
cryptographic processor), we may put the device into a solid metal case
without holes, battery-powered, with the seams in the case covered with
e.g. adhesive copper tape. Input and output can be mediated by optical
fibers, whose ports, a fraction of a millimeter in diameter and carefully
shielded, can be the only holes in the otherwise seamless, well-grounded
box. There are potential cooling problems, as there are no ventilation
holes in the enclosure; this can be alleviated by using one side of the box
as a large passive cooler, possibly with an externally mounted fan with a
separate power supply. If magnetic shielding is required as well, the box
could be made of permalloy or another material with similar magnetic
properties.

I am not sure how to shield a display. Maybe take an LCD, bolt it onto
the shielded box, and cover it with a fine wire mesh and possibly
metalized glass? Using an LCD with a long response time for the individual
pixels also dramatically reduces the value of any optical emissions.

I also have doubts about the keyboard. Several ideas could help: We
may use optical scanning of the key matrix, with the light fed into and
read from the matrix by optical fibers coming out of a well-shielded
enclosure, similar to the I/O lines of the first example. Or we may use a
normal keyboard, modified to use a reliably random scanning pattern;
that won't reduce the EM emissions of the keyboard, but it effectively
encrypts them, dramatically reducing their intelligence value. It is then
necessary to take precautions with the data cable between the keyboard
itself and the computer, where the data pass through in plaintext; it's
possible to encrypt that link, or to use a fiber.
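
A sketch of the randomized scan order (illustration only, with made-up
names; a real implementation would of course live in the keyboard
controller's firmware rather than in Python):

import secrets

ROWS = 8

def scan_frame(read_row):
    # sweep the rows in a freshly shuffled order each frame, so the timing
    # of the row strobes no longer maps directly onto key positions
    order = list(range(ROWS))
    for i in range(ROWS - 1, 0, -1):          # Fisher-Yates shuffle
        j = secrets.randbelow(i + 1)
        order[i], order[j] = order[j], order[i]
    return {row: read_row(row) for row in order}

# dummy matrix: only row 3 has a key held down
print(scan_frame(lambda row: 0b00010000 if row == 3 else 0))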

As really good shielding of complicated cases is difficult to achieve, the
primary objective of this approach is to put everything into simple
metallic boxes with as few and as small ports as possible, which should be
comparatively easy to manufacture: replacing the special contact
arrangements of removable panels with disposable adhesive copper tape (the
only reason to go inside is to replace batteries, and the tape together
with other measures may serve as tamperproofing), and replacing all
potentially radiating external data connections with fiber optics.

I should disclaim that I have nothing that could even vaguely resemble deep
knowledge of high frequencies; therefore I lay out the idea here and
wonder whether anyone can see holes in it (and where they are).



Re: Idea: Simplified TEMPEST-shielded unit (speculative proposal)

2003-12-15 Thread John Young
There's a good possibility that Saddam was traced by Tempest
sensing, airborne or mundane. The technology is far more sensitive
than a decade ago. And with a lot of snooping technology kept obscure 
by tales of HUMINT, finks, lost laptops and black bag jobs.

For less sensitive compromising emanations, BETA, among others, 
makes portable Tempest units, desktop and room-sized, the devices 
export-restricted as if munitions.

There's a patent on a booth-like Tempest device into which the
user climbs, with protection provided for connections, but whether
it was ever built is unknown.

A slew of firms make Tempest products which can be examined
for what shielding works sufficiently well to be placed on NSA's
more or less trustworthy Tempest products list:

Beyond commercial-grade, NSA is reportedly able to read faint 
emanations from all known Tempest protection, thanks in part to 
reviewing products and international sharing among spooks.

Those leaked from fiber are now a piece of cake, and not by 
tapping the glass a la the RU submarine cable escapade and 
the derring-do of USS Jimmy Carter custom-rigged to hack 
transoceanic fiber.

Tempest snooping at the atomic level is feasible, thanks to
physicists who walk among the electrons with supercomputers.

As ever, what you don't know is what kills you, and if you are not
currently doing research or working on NDA stuff, you're toast.

Protecting against the known is what keeps the orchestrated 
leak industry thriving.

Be sure to submit bright inventions to the authorities to get contracts
for funding dark ones that work against the grain, then you'll get
really swell contracts or offed.

Ex-NSA staff are rolling in clover selling commercialized versions
of security technology that NSA freely accesses. Reminds of the Brits
selling to gullible govs impregnable Enigma machines after WW2.




Re: Idea: Simplified TEMPEST-shielded unit (speculative proposal)

2003-12-15 Thread Anonymous Sender
While I agree with much of what you say, I don't think it's likely that any
kind of advanced SIGINT operation was what brought him down. The most important thing
to have is intelligence from humans. From insiders. This is partly the problem with
the intelligence agencies today. They think too much of the technology and its
possible uses. Good old-fashioned spies will always be the most powerful way to get
information, if you can get someone to cooperate. This is also why it is a bit harder
in countries with a lot of people willing to kill or be killed for the sake of ideas.
Even so, it seems that someone sold him out for the money in this case. It was bound to
happen sooner or later, since it's not possible to be on the run without trusting at
least one or a few individuals from time to time.



Idea: Using GPG signatures for SSL certificates

2003-12-12 Thread Thomas Shaddack
The problem that makes me feel uneasy about SSL is the vulnerability
of the certification authorities; when they get compromised, everything
they signed gets compromised too.

However, for some applications the system could potentially be hardened to
a certain degree, using the web-of-trust approach.

The server presents its certificate to the client. The client can then
optionally request the GPG signature of the certificate from the server,
either by always checking whether it is there or only when its presence is
indicated in the certificate's data fields, and verify it against the
specified GPG public key (which can in turn be firmly embedded in the web
of trust).

The server's key may be stored on the server itself together with the
certificate signature file, or the signature file may indicate the
keyserver it should be fetched from. Being signed by several trusted keys
is crucial for this purpose, as otherwise it would be trivial to
compromise the GPG pubkey together with the signature and the SSL
certificate, if the adversary gets access to the server and manages to
compromise the CA (risk especially with in-house CAs, or when Agencies get
involved).

The clients should cache the server's authentication information, and
report any changes, like SSH does.

The location of the signature may vary; it can be stored in a default
place on the server (https://secure.server.com/cert-gpgsignature.asc), or
the location can be specified in an X.509 field.
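
A minimal sketch of the client-side check (my own code, not part of the
proposal; it assumes the server's certificate has already been saved as
cert.pem, that the detached signature sits at the default location named
above, and that the signer's public key is already in the local GPG
keyring):

import subprocess, urllib.request

SIG_URL = "https://secure.server.com/cert-gpgsignature.asc"

# fetch the detached GPG signature published alongside the certificate
with urllib.request.urlopen(SIG_URL) as f:
    open("cert-gpgsignature.asc", "wb").write(f.read())

# let gpg verify the detached signature over the certificate file;
# exit status 0 means the signature checked out against a trusted key
result = subprocess.run(["gpg", "--verify", "cert-gpgsignature.asc", "cert.pem"])
print("certificate signature OK" if result.returncode == 0 else "verification FAILED")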

Is it a good idea? Could it fly? If not, why?



Re: Idea: Using GPG signatures for SSL certificates

2003-12-12 Thread Anonymous
Thomas Shadduck writes:
 The problem that makes me feel uneasy about SSL is the vulnerability of
 the certification authorities; when they get compromised, everything
 they signed gets compromised too.

Technically this is true, but the only thing that the CA signs is
other keys.  So it merely means that the CA can create certificates on
behalf of anyone the compromisers choose.  It doesn't compromise any
existing key or previously issued certificate or even any newly created
key.

In any case, you don't need a CA to use SSL.  (Or more accurately, you
don't need anyone else's CA to use SSL -- just create your own CA and
issue yourself a certificate. This can be done without a lot of effort
using openssl, for example.)
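
For what it's worth, a sketch of that do-it-yourself route, driving the
openssl command line from Python (the file names and the CN are arbitrary
placeholders of mine):

import subprocess

# one self-signed certificate, valid for a year, no external CA involved
subprocess.run(
    ["openssl", "req", "-x509", "-newkey", "rsa:2048",
     "-keyout", "key.pem", "-out", "cert.pem",
     "-days", "365", "-nodes", "-subj", "/CN=www.example.com"],
    check=True)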

 However, the system could be for some applications potentially get
 hardened to certain degree, using the web-of-trust approach.

What exactly does this buy you?  The SSL certification authority system
has as its only (but useful) redeeming value that one can connect to
www.somecompany.com and have some level of confidence that the SSL
certificate presented by that site was actually issued to
www.somecompany.com and was issued by a reputable certification
authority -- one that presumably will not hand out a certificate stamped
www.somecompany.com to [EMAIL PROTECTED]

If the certificate presented is not from one of the recognized
reputable CAs built into your web browser, SSL itself will still work
but your web browser will pop up a box saying that the CA is not in its
list of reputable CAs (and BTW would you like to connect anyway?
yes/no).

I don't understand the mindless worship of the web of trust.  PGP
(/GPG) is a useful tool, but the web of trust is simply a way of
certifying a key in a non-centralized, non-hierarchical way.

-- Frondeur



Re: Idea: Using GPG signatures for SSL certificates

2003-12-12 Thread Thomas Shaddack

 Thomas Shadduck writes:
  - cute :) Though I am more often called Shaddup.

  The problem that makes me feel uneasy about SSL is the vulnerability of
  the certification authorities; when they get compromised, everything
  they signed gets compromised too.

 Technically this is true, but the only thing that the CA signs is
 other keys.  So it merely means that the CA can create certificates on
 behalf of anyone the compromisers choose.  It doesn't compromise any
 existing key or previously issued certificate or even any newly created
 key.

By "compromised" I meant that the signature confirming the authenticity of
the certificate can't be trusted anymore. Sorry if it wasn't obvious.

 In any case, you don't need a CA to use SSL.  (Or more accurately, you
 don't need anyone else's CA to use SSL -- just create your own CA and
 issue yourself a certificate. This can be done without a lot of effort
 using openssl, for example.)

I am aware of this.

Using the GPG/SSL approach, you can have your own in-house CA for SSL
purposes, and at the same time be able to prove to external users that the
certificate is really yours. One more factor for establishing trust, one
more obstacle for the Adversary to pass.

  However, for some applications the system could potentially be hardened
  to a certain degree, using the web-of-trust approach.

 What exactly does this buy you?  The SSL certification authority system
 has as its only (but useful) redeeming value that one can connect to
 www.somecompany.com and have some level of confidence that the SSL
 certificate presented by that site was actually issued to
 www.somecompany.com and was issued by a reputable certification
 authority -- one that presumably will not hand out a certificate stamped
 www.somecompany.com to [EMAIL PROTECTED]

It won't buy me anything new. It only strengthens the confidence level
by providing a CA-independent, alternative method of verifying the
certificate.

 If the certificate presented is not from one of the recognized
 reputable CAs built into your web browser, SSL itself will still work
 but your web browser will pop up a box saying that the CA is not in its
 list of reputable CAs (and BTW would you like to connect anyway?
 yes/no).

What I'd like is one more button, "Attempt to verify by GPG". Though that
can easily be done by an external application; browser integration is
nothing more than a convenience.

 I don't understand the mindless worship of the web of trust.  PGP
 (/GPG) is a useful tool, but the web of trust is simply a way of
 certifying a key in a non-centralized, non-hierarchical way.

YES! Which is what I want to achieve.



Re: Idea: GPG signatures within HTML - problem with inline objects

2003-11-22 Thread Thomas Shaddack
There is a problem with images and other inline objects. There is a
solution, too.

The objects included in the document can have their hash calculated and
included in their tag; e.g.,
<IMG SRC=image.jpg HASH=SHA1:4e1243bd22c66e76c2ba9eddc1f91394e57f9f83>
The tag has to be in the signed part of the document, so the hash can't be
tampered with.

Full digital signatures should be possible as well, e.g.

<IMG SRC=image.jpg SIGNATURE=http://where.is.the/signature.asc>

or

<IMG SRC=image.jpg SIGNATURE=identifier>
some HTML code here
<SIGNATURE TYPE=gpg NAME=identifier><!--
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v0.9.11 (GNU/Linux)
Comment: For info see http://www.gnupg.org

iD8DBQA31UOQaLeriVdUjc0RAjhBAJ4u1k5ex8+ZAtYi737GFXPOiBc51gCfU5+8
is2rD6L/6fIOWttfh5CYUW0=
=WOv2
-----END PGP SIGNATURE-----
--></SIGNATURE>

This way does not depend on the tag being in the signed part of the
document, as the signature can't be effectively tampered with undetected
anyway.


The same scheme could be used in <A HREF> tags, allowing automated checking
of signatures or hashes of downloaded binary files.
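
A small helper along those lines (my sketch; the HASH attribute is of
course this proposal's invention, not an existing HTML feature):

import hashlib

def img_tag_with_hash(path):
    # emit an IMG tag carrying the SHA-1 of the referenced file
    digest = hashlib.sha1(open(path, "rb").read()).hexdigest()
    return f'<IMG SRC={path} HASH=SHA1:{digest}>'

def hash_matches(path, tag):
    # re-hash the fetched object and compare against the signed tag
    expected = tag.split("HASH=SHA1:")[1].rstrip(">")
    return hashlib.sha1(open(path, "rb").read()).hexdigest() == expected

tag = img_tag_with_hash("image.jpg")
print(tag)
print("object matches tag:", hash_matches("image.jpg", tag))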



Re: Idea: GPG signatures within HTML

2003-11-22 Thread Henryk Plötz
Hi,

On Sat, 22 Nov 2003 14:54:39 +0100 (CET), Thomas Shaddack wrote:

 A trick with an HTML (or SGML in general) tag and a comment, a browser
 plugin (or manual operation on the saved source), and a GPG signature
 over part of the HTML file should do the job, while maintaining full
 backward compatibility and causing no problems for users not using this
 scheme.

 Opinions, comments?

This is already done, although I'm not aware of any browser supporting
an automated verification. For an example look at the HTML source of
http://www.bundesverfassungsgericht.de/entscheidungen/frames/rk20030827_2bvr091103

-- 
Henryk Plötz
Greetings from Berlin
 Un-CDs, nein danke! http://www.heise.de/ct/cd-register/ ~~~
~~ Help Microsoft fight software piracy: Give Linux to a friend today! ~



Idea: GPG signatures within HTML

2003-11-22 Thread Thomas Shaddack
Sometimes a problem appears with publishing information on the Web, when
the authenticity of a document, especially a widely distributed one, has to
be checked. I am not aware of any mechanism available at present.

A trick with an HTML (or SGML in general) tag and a comment, a browser
plugin (or manual operation on the saved source), and a GPG signature over
part of the HTML file should do the job, while maintaining full backward
compatibility and causing no problems for users not using this scheme.

It should be possible to make this HTML construction:


<HTML>
<BODY>
blah blah blah blah blah unsigned irrelevant part of the document, eg.
headers and sidebars which change with the site design
<SIGNED SCHEME=GPG><!--
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

-->
This is the PGP-signed part
of the HTML document.
<!--

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.1.91 (MingW32) - GPGrelay v0.893

ihas7Ds9fXLR9ksWRdwNZXNA8SdshwAJ9zwXFDgvdg5G2mqXp5BD4Sx2ZmjwCfSs70
Kj8sQor6i+MUZBmp5pdM1vU=
=hIsR
-----END PGP SIGNATURE-----
--></SIGNED>
the unsigned rest of the HTML document
</BODY></HTML>


The <SIGNED>...</SIGNED> tags are ignored by browsers that don't know
them, and provide hooks for eventual browser plugins.

The <!-- --> comments are used to hide the signature from the user in
standard browsers.

The scheme is designed to allow signing only parts of documents, so they
could be published in fast-changing environments like blogs or on
dynamically generated pages, and to have many different signed parts on
one page. It should also allow manual checking of the signature, eg. by
curl http://url | gpg --verify
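
A rough sketch of such a manual check in Python (my own code, with a
placeholder URL): cut the clearsigned block out of the page and hand it to
gpg on stdin.

import subprocess, urllib.request

html = urllib.request.urlopen("http://www.example.com/page.html").read().decode("utf-8", "replace")
start = html.index("-----BEGIN PGP SIGNED MESSAGE-----")
end = html.index("-----END PGP SIGNATURE-----") + len("-----END PGP SIGNATURE-----")
block = html[start:end] + "\n"

# gpg reads the clearsigned block from stdin and reports the signature status
subprocess.run(["gpg", "--verify"], input=block.encode(), check=True)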

Feel free to use the idea if it is good.

Opinions, comments?



EDRI-gram: RFID-blocker wins German idea-contest

2003-11-19 Thread Thomas Shaddack
-- Forwarded message --
Date: Wed, 19 Nov 2003 16:26:40 +0100 (CET)
Subject: EDRI-gram newsletter - Number 22, 19 November 2003
From: EDRI-gram newsletter [EMAIL PROTECTED]
To: [EMAIL PROTECTED]

snip

==
6. RFID-DETECTOR WINS GERMAN IDEA-CONTEST
==

The German civil rights and privacy organisation FoeBuD is the winner of
an idea contest for a national awareness campaign about the infringement
of civil liberties through new technologies. With the prize money of 15,000
euros, FoeBuD wants to develop a 'Dataprivatizer', a tool to detect RFIDs,
minuscule spy chips that are increasingly built into consumer goods.

RFIDs (Radio Frequency Identification tags) are tiny computer chips with an
antenna that can be read without touching or even seeing them. These
transponders can be built into every yoghurt cup or piece of clothing. The
chips can secretly divulge information about the buyer. With these data,
firms can set up profiles of the shopping behaviour and leisure
activities of their customers.

This is not a remote future. The German chain of supermarkets and
DIY-stores Metro AG already won a Big Brother Award last month for
implementing this technology.

Idea contest (winner announced 06.11.2003)
http://www.bridge-ideas.de

snip



Idea: Small-volume concealed data storage

2003-10-11 Thread Thomas Shaddack
I have mentioned the AT24RF08 chip here a couple of times already. I got an
idea for another application of this nice toy.

For encrypted data storage, the storage of the key is crucial. If the key
is recovered, everything is lost. Remembering 256 (or even 128) bits is a
hassle, and a storage medium is subject to potential seizure. The key has
to be protected by a passphrase, which is subject to brute-forcing. The key
has to be destroyed in the event of a dangerous situation detected by the
environmental sensors, or if the passphrase is tried one time too many
(which opens the possibility of a DoS attack, rendering the data
protection scheme unusable through repeated intentional destruction of the
key).

However, the higher the security we want, the lower the alarm thresholds we
have to set and the higher the probability of a misfire. For convenience,
in most common scenarios where absolute security is not necessary and some
risk is affordable, we need backup key storage.

The aforementioned chip can operate passively, powered from the coil used
for data transfer, in principle the same as an RFID tag. Its independence
from any kind of power supply makes it suitable for being built into some
object, including the building itself; the chip and coil may be located
inside a wall, serving as potential storage for up to 8 kilobits of data
as necessary. It may be put in place when the building is built, or during
some construction work. Routine police raids are quite unlikely to discover
this kind of data storage (though the discovery of a reader device may be a
giveaway). XORing the key with an MD5 hash of a memorized keyphrase can
further increase security.
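
A sketch of that XOR step (my code, not from the post): stretch the
memorized keyphrase with MD5 in counter mode so the pad covers the whole
key, and XOR it with the blob kept in the hidden chip. Running the same
function again recovers the key.

import hashlib

def xor_with_keyphrase(blob: bytes, keyphrase: str) -> bytes:
    pad = b""
    counter = 0
    while len(pad) < len(blob):               # MD5 yields 16 bytes per block
        pad += hashlib.md5(keyphrase.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(blob, pad))

key = bytes(32)                               # stand-in for a real 256-bit key
stored = xor_with_keyphrase(key, "correct horse")
assert xor_with_keyphrase(stored, "correct horse") == key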

This method may also be used for covert exchange of short messages. The
device may be hidden under the carpet or inside a poured concrete floor,
and the reader/writer in the shoes of the conspirators. The simplicity and
robustness of the storage part of the technology could permit long-term
installations, just in case.

Or maybe I am too tired to think in a coherent way.

Maybe it's a good idea. Maybe not (if not, then why?). It may be handy at
least for a spy novel writer.



Re: Idea: Small-volume concealed data storage

2003-10-11 Thread Morlock Elloi
And what is the purpose of connecting the key and data storage in the first
place ?

Data storage is data storage, concealed or not. You feed encrypted data to/from
it.

The key is required at the human interface and has absolutely nothing to do
with the storage.

If you want better security than a passphrase, then you need a mechanical
key carrier. Indeed, that is where the word "key" comes from. You can store
any number of bits on it, and you'll hand it over before they beat the shit
out of you -- or you may want to be brave and destroy it instead (trivial
with flash-on-chip and a small battery cell) -- but, again, it has nothing
to do with storage of data.






Re: Viral DNS Attack, DDos Idea

2003-08-19 Thread Major Variola (ret)
At 10:11 AM 8/17/03 -0700, Tim May wrote:
Many evolved diseases _DO_ kill their hosts. Look around.

It is true that there are tradeoffs in lethality, time to death, and
virulence, and that a disease which kills too quickly and too many
won't spread adequately, but quite clearly all of the diseases of the
past were evolved (until recently, none were created) and yet they
often killed their hosts.

This objection jammed in my memegrinder so I had to examine it.

I'll argue that the nastiness of many human diseases is a *temporary*
exception to the "evolved pathogens don't kill" observation, because humans
are not in equilibrium:

* The human population is growing.  This means you can kill your host; two
new ones are born every minute (except in a few places, eg W. Europe).  If
your host population is growing like that, you can be extra lethal,
temporarily.  If the host numbers are stable, you could wipe them all out
if you're too lethal.

* Humans are expanding their range.  This means new diseases are introduced
from existing reservoirs, so they have not adapted to humans --especially
the conditions of modern humans-- yet.  Ebola, HIV, etc.

* Humans have only *recently* lived in dense (and stationary) groups.  This
means that pathogens have not adapted yet.  Cities are incubators.
Bubonic plague and TB are good examples here.

* Rapid travel is an even more recent invention.  Populations who have
never seen a pathogen (West Nile, etc.) are getting exposed for the first
time.  No equilibrium there.

The Cortez effect, amplified by Whittle's jet engine.
Globalization means everyone gets exposed to everyone
else's pathogens.  A sick Chinese chicken can ruin your day
in America.  Guns, germs, and steel.

BTW globalization also means that everyone gets exposed to everyone's
plants, insects, etc.  A lot of isolated species (e.g., Hawaii) that can't
deal with competition will be toast just as much as the Amerinds who
met Mr. Cortez.  Guns, germs, and steel.  Meet Mr. Kudzu.



Obviously, the scale of "temporary" should be taken in the
larger context, not that of one's own lifespan.

Of course a coadapted pathogen (eg flu) can spontaneously become newly
virulent simply because of mutation or recombination.  If the hosts aren't
all connected, then merely one particular host-group dies, along with the
newly virulent strain.
Losing some village is not a big deal (until someone gets on a plane).

...

Interesting to extend the analogy to, say, virii that zap cellphones or PCs
permanently vs. merely being annoyances.  A PC-zapping virus would give
Macs the kind of ripe open field not seen since the days of the Bering
Strait.  Also interesting to view the RIAA vs. Networked-Computer struggle
in a biological (evo/eco) light.
Ms. Dodo, meet Mr. Kudzu.

And of course fascinating to watch how the new dense mobile humans (or
their lawyers :-) adapt behaviorally.



Re: Viral DNS Attack, DDos Idea

2003-08-17 Thread Major Variola (ret)
At 05:46 PM 8/15/03 -0700, Bill Stewart wrote:
At 01:19 PM 08/15/2003 -0700, Major Variola (ret.) wrote:
Suppose malware appends a bogus entry to an infected machine's
/etc/hosts (or more likely, MSwindows' \windows\blahblah\hosts file).
(This constitutes a DNS attack on the appended domain name, exploiting
the local hosts' name-resolution prioritization.)
If the appended IP address points to the
same victim (66.66.66.66) on all the virus-infected machines,
and the appended (redirected) domain name is popular (google.com

Cute, but sounds like a lot of work compared to other obvious attacks
you could do if you're spreading a virus anyway.

Yes, if you have virally owned a machine you can do much nastier things.
But this attack has the advantage that its effects would not be
immediately recognized, nor could they be fixed in one spot
once detected.

Evolved diseases don't kill their hosts.  Google is too useful
to redirect.  On the other hand, you can redirect an entire
TLD (eg .mil), albeit on one machine at a time. Try doing that
to one of The DNS Roots (pbut).

The more popular version of this attack is to try to hack DNS servers,
or poison DNS requests, so that DNS requests for google report the
wrong thing.

Yes I've followed discussions about SecDNS etc before.

The cute part of the local hosts-file attack is that local machines
are *not* administered competently, whereas DNS servers
(and even ISP caches) are more likely to be tended well.

One problem with hacking the hosts files is that
different versions of Windows tend to put them in different places,
though perhaps if you target XP and 2000 and ME and 98
it's consistent enough to work.

OS detection is trivial once in... as is file/path detection.  I bet a
JavaScript program could do it, if the client security settings (ACLs) were
poor.

The real question is whether the bad guys would redirect to a victim,
or to a fake web server run by them, so they could hand out
bogus responses, such as redirects to various places around the web,
potentially along with some advertising banners.

That's the virus author's choice, of course.  In fact, I first thought of
the attack as a DNS redirect on domain names -- intending random
(or even localhost) misdirection.  Upon thinking about it, the
utility of all those 9AM Monday clicks became apparent.

Diagnosing the situation would be a bushel of fun in the first hours
either way.

If it's a virtual server machine, though, you can't do that
without disrupting all the clients on it, which is too bad;

Hadn't thought of virtual servers... all your eggs in one basket :-)

If it's a router, that's a more interesting problem,

You're right: routers merely drop incoming port 80; any router DoS depends
on sheer bandwidth -- say, routing the NYTimes.com clicks to
Podunk-BackwaterTimes.com.

because many routers have wimpy CPUs and do the routine work in ASICs -

ASICs are great except for exception handling, which is a vulnerability.

I was working on Intel's network processors earlier this year.  Amazing
chips -- they have hardware support for everything you do in an IP stack,
buttloads of memory controllers, I/O up the kazoo, and a dozen
hardware-supported thread contexts (hyperthreading) on each of a dozen
high-clockrate RISC engines.
But they all defer exception packet processing to the onboard ARM, which
might alert the host system or at least log the exception by incrementing a
counter.  But the ARM is not as fast as the threads and could perhaps be
overwhelmed.  Perhaps the subject of a future Gedanken Design Idea.

-

When the rotary telephone first came out, people
said, 'You mean I have to dial seven numbers?' 



Re: Viral DNS Attack, DDos Idea

2003-08-17 Thread Tim May
On Sunday, August 17, 2003, at 08:19  AM, Major Variola (ret) wrote:
Evolved diseases don't kill their hosts.  Google is too useful
to redirect.  On the other hand, you can redirect an entire
TLD (eg .mil), albeit on one machine at a time. Try doing that
to one of The DNS Roots (pbut).
Many evolved diseases _DO_ kill their hosts. Look around.

It is true that there are tradeoffs in lethality, time to death, and 
virulence, and that a disease which kills too quickly and too many 
won't spread adequately, but quite clearly all of the diseases of the 
past were evolved (until recently, none were created) and yet they 
often killed their hosts.

--Tim May
In the beginning of a change the patriot is a scarce man, and brave, 
and hated and scorned. When his cause succeeds, the timid join him, for 
then it costs nothing to be a patriot. -- Mark Twain



Re: Viral DNS Attack, DDos Idea

2003-08-16 Thread Bill Stewart
At 01:19 PM 08/15/2003 -0700, Major Variola (ret.) wrote:
Suppose malware appends a bogus entry to an infected machine's
/etc/hosts (or more likely, MSwindows' \windows\blahblah\hosts file).
(This constitutes a DNS attack on the appended domain name, exploiting
the local hosts' name-resolution prioritization.)
If the appended IP address points to the
same victim (66.66.66.66) on all the virus-infected machines, 
and the appended (redirected) domain name is popular (google.com 

Cute, but sounds like a lot of work compared to other obvious attacks
you could do if you're spreading a virus anyway.
The more popular version of this attack is to try to hack DNS servers,
or poison DNS requests, so that DNS requests for google report the wrong thing.
One problem with hacking the hosts files is that
different versions of Windows tend to put them in different places,
though perhaps if you target XP and 2000 and ME and 98 
it's consistent enough to work.

The real question is whether the bad guys would redirect to a victim,
or to a fake web server run by them, so they could hand out
bogus responses, such as redirects to various places around the web,
potentially along with some advertising banners.

Besides making google.com harder to reach,
another effect is that lots of people send TCP SYN requests
to 66.66.66.66 port 80 instead of google.com port 80,
and if there's a web server on that port,
they send it HTTP requests for URLs on google.com,
which it presumably will reject.

If 66.66.66.66 is an arbitrary victim computer with no web server,
the main impact is a bunch of extra SYN requests,
so the obvious defense is to filter them out from the router.
If it's got a single web server, moving the server to a new IP address
and using DNS to update it can help, at the cost of disrupting
clients until its DNS update propagates, and getting its router
to drop requests for port 80 (passing other ports is fine.)

If it's a virtual server machine, though, you can't do that
without disrupting all the clients on it, which is too bad;
either hope you've got enough horsepower to handle rejecting the
google.com requests, or front-end it with a squid proxy and
kill it off there, which cuts down the CPU impact,
though it doesn't cut down the bandwidth much.
You could get fancy and have the squid redirect all the real requests
to another IP or DNS name, e.g. example1.net/stuff to example2.net/stuff,
where the new address is on a different machine and 
if necessary on a different access line.

If it's a router, that's a more interesting problem,
because many routers have wimpy CPUs and do the routine work in ASICs -
so if the router has an HTTP interface for admin use,
and it's not protected by ACLs for some reason,
you might blow it away with the work required to reject google hits.
But if you don't need the web interface, it's much easier to protect.
Also, if the router is in an ISP, rather than at the customer premises,
access to it may be blocked anyway, as a general security mechanism,
and even if it's not, it's usually easy to add that kind of blocking,
by null-routing the traffic rather than by ACLs.

If the victim IP address were a router just upstream
of the victim domain name, it's extra fun for the victim domain
--not only are they unavailable on infected machines,
but clients pound their upstream when they try to connect.

That's actually much less of a risk, except for CPU consumption,
because if the router has enough capacity to handle google.com's traffic,
it can handle the bunch of unsuccessful SYN packets that it gets instead.

Thoughts?  Has this ever been suggested or implemented?

Never seen it.  Another variation on this attack is to use
random redirect addresses instead of a single target victim -
that loses the focus for detection and correction that
having a single victim can provide.
If you scatter it around, people will still have trouble reaching Google,
but almost no web servers will get enough rejected hits
to swamp them, so admins are less likely to notice.



Viral DNS Attack, DDos Idea

2003-08-15 Thread Major Variola (ret.)
Suppose malware appends a bogus entry to an infected machine's
/etc/hosts (or more likely, MSwindows' \windows\blahblah\hosts file).
(This constitutes a DNS attack on the appended domain name, exploiting
the local hosts' name-resolution prioritization.)
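
(For readers who have never opened one: a hosts-file entry is just an
address followed by the names it should answer for, along these lines --
illustrative values only:

66.66.66.66    google.com www.google.com

Any name listed there is resolved locally, before DNS is ever consulted.)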

If the appended IP address points to the same victim (66.66.66.66) on all
the virus-infected machines, and the appended (redirected) domain name is
popular (google.com for instance), then you get a DDoS attack on the
appended IP host 66.66.66.66 that grows as the viral infection spreads in
the population.  You also get a DDoS on the popular domain name
(google.com) you've redirected.

If the victim IP address were a router just upstream of the victim domain
name, it's extra fun for the victim domain -- not only are they unavailable
on infected machines, but clients pound their upstream when they try to
connect.

Thoughts?  Has this ever been suggested or implemented?

---
In "The Wild One", bikers mount a DoS attack on a router: her name is
Dorothy and she works at a plugboard.  ca. 1954



Re: Idea: Homemade Passive Radar System (GNU/Radar)

2003-08-14 Thread Morlock Elloi
 As an active twist, we can also use a separate unit, Illuminating
 Transceiver (IT), periodically broadcasting a pulse of known
 characteristics, easy to recognize by the LPs when it bounces from an
 aerial target. This unit has to be cheap and expendable - it's easy to
 locate and to destroy by a HARM missile. As a bonus, forcing the adversary
 to waste a $250,000+ AGM-88 missile on a sub-$100 transmitter may be quite
 demoralizing. There can be a whole hierarchy of ITs; when one of them

Microwave oven.

This has been done in recent years in various theatres.

 Even other sources can serve as involuntary ITs. The landscape is littered
 with cellular base stations and civilian TV and radio transmitters. Just
 pick the suitable frequency and listen on.

There is enough wideband power in the ether above inhabited areas to make
passive detection from reflected EM possible in theory (without any EM
emanating from the target). The space is illuminated, but the eyes are not
good enough, yet. Signal levels are extremely low, but it's likely that a
flying jet reflects back enough from hundreds of cellphone/celltower
transmissions to be a few dB above the background noise. However, without
knowing where to look, the receiver cannot use typical narrow-beam
high-gain antennas. What is needed is an array, like an insect's eye, and
that will be a sizeable contraption - passive, but not small. In other
words, the size of a passive eye is proportional to the wavelength. To get
human-eye resolution in the 10 cm band, the size gets to 2 km across. Big
eye.






Idea: Homemade Passive Radar System (GNU/Radar)

2003-08-14 Thread Thomas Shaddack
The current developments in international politics, mainly the advent of
rogue states attacking sovereign countries from the air, create a need for
the proliferation of cheap air-defense solutions. A key part of air defense
is awareness, usually maintained by a network of ground radar stations.

At the end of the 1950s, Czechoslovakia developed a passive radar system
called PRP-1/Kopac (Korelacni Patrac, Correlation Seeker), which was later
replaced by the more advanced Ramona system and the even more advanced
Tamara. Then the Revolution came, bringing the inevitable international
pressures that led to the bankruptcy of the Tamara developer company,
following false indictments of its top management which led to the
revocation of the company's arms-sale licence. Shortly after this, articles
appeared in the world press about a groundbreaking passive radar system
being developed by - guess who? Lockheed. (After 15 years of research, a
good part of which consisted of reverse-engineering a seized shipment of, I
am not sure, Ramonas or Tamaras.)

See also http://www.techtydenik.cz/tt1998/tt10/panoram5.htm

The system allows the location and identification of aerial, ground, and
(when installed on the shore) sea-based EM sources.

The passive radar system consists of four main parts. Three are wideband
receivers, listening for any characteristic transmitting activity. They
talk to the fourth, which houses the correlator - an electronic system
calculating the position of the signal sources from the differences in the
times at which the listening posts received their signals.
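
A toy version of what the correlator must do for one pair of listening
posts (my own sketch, with synthetic data and made-up numbers): estimate
the difference in arrival time by cross-correlating the two received
signals.

import numpy as np

fs = 1_000_000                        # 1 MHz sample rate, arbitrary
rng = np.random.default_rng(0)
pulse = rng.standard_normal(2000)     # the "characteristic" waveform

delay = 137                           # true offset, in samples
a = np.concatenate([pulse, np.zeros(500)]) + 0.1 * rng.standard_normal(2500)
b = np.concatenate([np.zeros(delay), pulse,
                    np.zeros(500 - delay)]) + 0.1 * rng.standard_normal(2500)

# the peak of the cross-correlation gives the relative time of arrival
xcorr = np.correlate(b, a, mode="full")
estimated = np.argmax(xcorr) - (len(a) - 1)
print("estimated delay:", estimated, "samples =", estimated / fs * 1e6, "microseconds")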

Civilian-sector electronics is developing fast; component prices fall,
computing power goes up, and anybody can buy a machine that just a few
decades ago would have made everyone in the Pentagon salivate. Naturally,
this opens interesting possibilities.

The threat that rogue states with overwhelming air forces pose to other
countries makes it necessary to develop a cheap, open passive radar
system, effectively bringing a key part of air defense down to easy
affordability at the municipality level. Let's call it GNU/Radar.

We need the four stations: three listening ones, and the correlating one.

The correlating station (CS) may be built as a MOSIX or Beowulf cluster.
Its job is to handle the signals from the LPs, identify the targets, and
track their positions.

The listening posts (LPs) need a receiver - a suitably wideband one, a
digitizer (a fast ADC card), optionally a DSP board to take some
calculations off the shoulders of the CPU, and a source of a precise
timebase for synchronization (this may be a GPS, which also provides the
location of the listening post, which the CS needs to know, or a receiver
for a time-synchronization signal broadcast from somewhere, if we want a
backup in case GPS is shut down). The receiver could possibly be adapted
from the GNU Radio project. The timing pulses can also be delivered
optically, eg. by a modification of the Ronja unit mentioned later.

The LPs crunch the received signals, isolate the interesting-looking ones,
mark the precise moments of their reception, and send their arrival times
and key characteristics to the CS. The transmission channel may be anything
with sufficient bandwidth - from an Internet leased line to Ronja-based
10 Mbps optical links where there is direct visibility between the LPs and
the CS.

As an active twist, we can also use a separate unit, an Illuminating
Transceiver (IT), periodically broadcasting a pulse of known
characteristics, easy for the LPs to recognize when it bounces off an
aerial target. This unit has to be cheap and expendable - it's easy to
locate and destroy with a HARM missile. As a bonus, forcing the adversary
to waste a $250,000+ AGM-88 missile on a sub-$100 transmitter may be quite
demoralizing. There can be a whole hierarchy of ITs; when one of them
transmits, the other ones sleep - when the transmitting one is destroyed,
one of the sleeping units wakes up and continues illuminating the
airspace. This is within the capabilities of a simple microcontroller.

Even other sources can serve as involuntary ITs. The landscape is littered
with cellular base stations and civilian TV and radio transmitters. Just
pick the suitable frequency and listen on.

Remember that Kopac was built about 50 years ago, on vacuum tubes. It
should be far from impossible to replicate it with contemporary COTS
electronics.

Using lower frequencies than the gigahertz band usual for modern military
radars reduces accuracy, but it also dramatically reduces the effectiveness
of aircraft stealth features.

There are already prototype results in this field:
http://www.wired.com/news/print/0,1294,16762,00.html

Some other sources:
http://ronja.twibright.com/
http://slashdot.org/articles/01/06/11/1617239.shtml

Opinions, comments, ideas?



Re: Idea: Homemade Passive Radar System (GNU/Radar)

2003-08-14 Thread Major Variola (ret)
At 05:04 PM 8/11/03 +0200, Thomas Shaddack wrote:
 This unit has to be cheap and expendable - it's easy to
locate and to destroy by a HARM missile. As a bonus, forcing the adversary
to waste a $250,000+ AGM-88 missile on a sub-$100 transmitter may be quite
demoralizing.

Microwave ovens were used in the Yugo war for this.

The invading air power can't ignore the ISM band because then you could
use it for real missile trackers.

Someone who can do vacuum and welding work could change the output
freq of an oven magnetron, by changing the shorting-strap connections.



Idea: Snort/Tripwire for RF spectrum?

2003-04-05 Thread Thomas Shaddack
Messing around on TSCM.com, musing over detection of bugs, I got an
immediate idea I'd like to get peer-reviewed.

There is a problem with bug sweeps in some countries. The legal TSCM
providers can be legally required not to inform the client about a
police-authorized bug, and/or legally forbidden to tamper with it. So a
customer-operated solution should exist.

The GNU Radio project seems to me to be flexible enough to be suitable as a
bug detector. With a proper tuner (or a selectable set of tuners, to be
wideband enough), the device could act as a 24/7 frequency analyzer,
checking the electromagnetic spectrum and alerting the operators to
suspicious changes - suddenly appearing signals, suspicious pulses,
anything that looks like a spread-spectrum transmission.

(Because of the equipment limitations, we can't see all of the spectrum at
once; this approach is more similar to a guard walking around the
facility, listening and looking wherever he is at the moment, occasionally
backtracking and looking closer if he hears a suspicious sound. It will
have some probability of missing pulsed signals, if they happen to be off
while their part of the spectrum is being scanned, but it will also have a
chance proportional to their duty cycle of catching them, and with proper
software it could be instructed to check the frequencies where a signal is
sometimes present and sometimes absent for pulsed signals, listening on the
suspected frequencies for a longer time.)

With proper software, the system could write alert reports including
characteristics of the suspicious signals, or even recorded samples of
the signals for further evaluation.
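
A bare-bones illustration of that "Tripwire for the spectrum" logic (my own
sketch with synthetic data; a real system would average over many more
frames and frequency ranges):

import numpy as np

def power_spectrum_db(samples):
    return 10 * np.log10(np.abs(np.fft.rfft(samples)) ** 2 + 1e-12)

def suspicious_bins(baseline_db, current_db, threshold_db=15.0):
    # frequency bins that jumped well above the long-term baseline
    return np.nonzero(current_db - baseline_db > threshold_db)[0]

rng = np.random.default_rng(1)
# baseline: average spectrum over 100 "quiet" frames of background noise
baseline = np.mean([power_spectrum_db(rng.standard_normal(4096))
                    for _ in range(100)], axis=0)

t = np.arange(4096)
bugged = rng.standard_normal(4096) + 3 * np.sin(2 * np.pi * 0.125 * t)  # new carrier
print("alert on bins:", suspicious_bins(baseline, power_spectrum_db(bugged)))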

Could serve as a 24/7 TSCM spectrum sweep, limited by the positions of the
antennas. (Though there could be several antennas, switched periodically,
in order to detect even more directional signals.) The advantage of the
24/7 approach is easy time correlation of a suspicious signal with any
suspicious physical events (a visitor, a facility without anyone
present...). Usage of several antennas could allow triangulation of the
signal source within (or outside of) the supervised facility.
Correlation of signals that should be the same from several antennas could
even reveal transmitters trying to hide in stronger nearby transmitters
(so-called snuggling).

The interesting part will of course be the software, either automatically
correlating present signals with past ones and sending reports of
suspicions, or some advanced visualisation system showing the 3D (4D) data
(time, frequency, intensity(, source antenna)).

Could cover the cases of bugs implanted into protected objects during
black-bag jobs or by the insiders, wireless microphones carried by hostile
visitors, and even increased rate of communication on the related
frequencies when a raid or a blackbag job is being prepared, if the
adversary doesn't keep radio silence. Could deny the adversaries
undetected usage of RF transmitters, at least in sane frequency ranges,
significantly limiting their technological options.

Could it work? Why not? If I were to fill this idea with water, where would
it leak? Do I watch way too many spy movies?

Feel free to comment, feel free to forward anywhere where it could spur
some interest or further comments.

Shaddack, the Mad Scientist



IDEA

2003-03-22 Thread mindfuq
When compiling the Mixmaster remailer, I get an error that OpenSSL was not
compiled with IDEA support.  However, OpenSSL was supposed to have been
compiled with IDEA out of the box, with only an option to disable it.
What am I missing?



RE: IDEA

2003-03-22 Thread Lucky Green
Mindfuq wrote:
 When compiling the Mixmaster remailer, I get an error that
 OpenSSL was not compiled with IDEA support.  However, OpenSSL
 was supposed to have been compiled with IDEA out of the box, with
 only an option to disable it. What am I missing?

You in all likelihood fell victim to some misguided nonsense that seems
to be spreading through the Open Source community at present. Some
distributions have disabled IDEA and other patented algorithms to
cleanse the code of non-free math and maintain the patent-purity of
the software. Cypherpunks of course reject such nonsense, just as they
rejected RSA DSI's and David Sternlight's claims that PGP must not be
used because it supposedly infringed on some patents.

Do a Google search for IDEA and the name of your OS or distribution to
find out how to recompile with IDEA support enabled.

--Lucky



Re: IDEA

2003-03-22 Thread mindfuq
* Lucky Green [EMAIL PROTECTED] [2003-03-22 09:13]:
 
 Do a Google search for IDEA and the name of your OS or distribution to
 find out how to recompile with IDEA support enabled.

I might need my hand held on this one.  I did an exhaustive search
before posting.  Part of the problem is that 'idea' is an English
word, which makes it difficult to search for.  It's a shame there aren't
any good web search engines that allow Lexis/Nexis-type
expressions.  Anyway -

I'm using a 3-year-old version of Mandrake.

The OpenSSL documentation claims IDEA is enabled by default, and there
are only switches for disabling it.  To verify that IDEA is enabled in
OpenSSL, I ran 'openssl ciphers':

   DHE-RSA-AES256-SHA: DHE-DSS-AES256-SHA: AES256-SHA:
   EDH-RSA-DES-CBC3-SHA: EDH-DSS-DES-CBC3-SHA: DES-CBC3-SHA:
   DES-CBC3-MD5: DHE-RSA-AES128-SHA: DHE-DSS-AES128-SHA: AES128-SHA:
   IDEA-CBC-SHA: IDEA-CBC-MD5: RC2-CBC-MD5: DHE-DSS-RC4-SHA: RC4-SHA:
   RC4-MD5: RC4-MD5: RC4-64-MD5: EXP1024-DHE-DSS-DES-CBC-SHA:
   EXP1024-DES-CBC-SHA: EXP1024-RC2-CBC-MD5: EDH-RSA-DES-CBC-SHA:
   EDH-DSS-DES-CBC-SHA: DES-CBC-SHA: DES-CBC-MD5:
   EXP1024-DHE-DSS-RC4-SHA: EXP1024-RC4-SHA: EXP1024-RC4-MD5:
   EXP-EDH-RSA-DES-CBC-SHA: EXP-EDH-DSS-DES-CBC-SHA: EXP-DES-CBC-SHA:
   EXP-RC2-CBC-MD5: EXP-RC2-CBC-MD5: EXP-RC4-MD5: EXP-RC4-MD5

IDEA is listed on the fourth line, so it seems IDEA was installed with
OpenSSL, but MixMaster's install may be improperly detecting that IDEA
is absent.  It's when I run the Mixmaster install that I get the
error:

   ...
   Looking for libz.a...
   Found at /usr/lib/libz.so.
   Found source directory zlib-1.1.4.
   Use the source if the pre-installed library causes compilation problems.
   Use source? [n]
   Looking for libpcre.a...
   Found source directory pcre-2.08.
   Looking for libcrypto.a...
   Found at /usr/local/ssl/lib/libcrypto.a.
   ./Install: [: 90701f: integer expression expected
   ./Install: [: 90701f: integer expression expected
   ./Install: [: 90701f: integer expression expected
   Looking for libncurses.a...
   Found at /lib/libncurses.so.
   ./Install: tmptst.c: Permission denied
   gcc: tmptst.c: No such file or directory

 WARNING: Your version of OpenSSL has been configured without IDEA support.
 If you continue, Mixmaster will be installed with reduced functionality.
 This means (among other things) that Mixmaster will not create an RSA
 OpenPGP key (to avoid mail loss in the Type I system). You may want to
 re-install OpenSSL before proceeding.

 This will not concern you if you only plan to run a type II remailer or
 simply want a type II client.  
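
One thing worth checking (an assumption on my part, not something
established above) is whether the openssl binary in $PATH is even the same
build as the libcrypto.a the Install script found under /usr/local/ssl;
a distribution-packaged binary and a locally built library can easily
differ.  A small sketch that repeats the 'openssl ciphers' check and
reports which binary answered:

    # Repeat the manual check above: ask the openssl binary in PATH for its
    # version and cipher list, and see whether any IDEA suites are present.
    # Note this only tests that binary, not necessarily the libcrypto.a that
    # Mixmaster's Install script picked up under /usr/local/ssl.
    import shutil
    import subprocess

    def openssl_has_idea():
        exe = shutil.which("openssl")
        if exe is None:
            return False
        version = subprocess.run([exe, "version"], capture_output=True, text=True).stdout.strip()
        ciphers = subprocess.run([exe, "ciphers"], capture_output=True, text=True).stdout
        print("checked", exe, "-", version)
        return "IDEA" in ciphers

    if __name__ == "__main__":
        print("IDEA cipher suites present:", openssl_has_idea())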

If anyone has any clues for me, please post them.

Thanks!



Re: IDEA

2003-03-22 Thread Eric Murray
On Sat, Mar 22, 2003 at 09:40:50AM +, [EMAIL PROTECTED] wrote:

 
 IDEA is listed on the fourth line, so it seems IDEA was installed with
 OpenSSL, but MixMaster's install may be improperly detecting that IDEA
 is absent.  It's when I run the Mixmaster install that I get the
 error:
 
...
Looking for libz.a...
Found at /usr/lib/libz.so.
Found source directory zlib-1.1.4.
Use the source if the pre-installed library causes compilation problems.
Use source? [n]
Looking for libpcre.a...
Found source directory pcre-2.08.
Looking for libcrypto.a...
Found at /usr/local/ssl/lib/libcrypto.a.
./Install: [: 90701f: integer expression expected

I think that line means that Mixmaster's install script isn't
properly identifying the version of OpenSSL.  If it were
me, I'd fix the Mixmaster install script.
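
The "90701f" in the error looks like the hex OPENSSL_VERSION_NUMBER
(0x0090701f, i.e. 0.9.7a) being handed to the shell's integer test, which
chokes on the trailing status nibble.  A sketch, not the actual Install
script logic, of doing that comparison numerically:

    # Presumed failure mode: the script extracts the hex OpenSSL version
    # string ("90701f") and feeds it to "[ ... -ge ... ]", which only accepts
    # plain integers.  Parsing it as hex and comparing as a number works.

    def openssl_at_least(version_hex: str, minimum: int) -> bool:
        """Compare an OPENSSL_VERSION_NUMBER given as a hex string to a minimum."""
        return int(version_hex, 16) >= minimum

    # 0x0090600f is 0.9.6 (release); 0.9.7a is newer, so this prints True.
    print(openssl_at_least("90701f", 0x0090600f))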


./Install: tmptst.c: Permission denied
gcc: tmptst.c: No such file or directory

Yep, the install script needs help.


BTW, if you will be posting Mixmaster messages to the cpunks
list, could you fix it so it uses an informative Subject: line
instead of Mixmaster Type III Message?  

Eric



Re: IDEA

2003-03-22 Thread Peter Palfrader
On Sat, 22 Mar 2003, Eric Murray wrote:

 I think that line means that mixmaster's install script isn't
 properly identifying the version of Openssl.  If it were
 me, I'd fix the Mixmaster install script.

The install script needs to die.  I think nobody argues that point.

 BTW, if you will be posting Mixmaster messages to the cpunks
 list, could you fix it so it uses an informative Subject: line
 instead of Mixmaster Type III Message?  

That's mixminion, not mixmaster.  And mixminion is not operational at
the moment - this will take at least a few more months.  Whoever relies
on it for anonymity cannot be serious.

Peter
-- 
 PGP signed and encrypted  |  .''`.  ** Debian GNU/Linux **
messages preferred.| : :' :  The  universal
   | `. `'  Operating System
 http://www.palfrader.org/ |   `-http://www.debian.org/




Re: IDEA

2003-03-22 Thread mindfuq
* Len Sassaman [EMAIL PROTECTED] [2003-03-22 18:52]:
 On Sat, 22 Mar 2003, Eric Murray wrote:
 
 It's been a while since I really worked on the Install script -- Mixmaster
 3.0 doesn't use it -- but this looks to me to be a bug that existed and
 was fixed sometime around a year ago. What version of Mixmaster are you
 using?
 
 Please use the release version -- 2.9.0.

I'm using version 2.9.0.  I intentionally dodged the betas to get
something stable going, but it seems there is still some bleeding
edgeness to it.

Maybe I'll troubleshoot the problem more, now that we've narrowed it
down a bit.  I certainly was at a point where I was going to give up,
because I had no idea (get it?  No IDEA) whether the problem was in
OpenSSL or Mixmaster.  It seems people are sure this is the Mixmaster
Install script.  Maybe I'll grab the absolute latest Install script
and compare it.



Re: IDEA

2003-03-22 Thread Peter Palfrader
On Sat, 22 Mar 2003, [EMAIL PROTECTED] wrote:

 IDEA is listed on the fourth line, so it seems IDEA was installed with
 OpenSSL, but MixMaster's install may be improperly detecting that IDEA
 is absent.  It's when I run the Mixmaster install that I get the
 error:
 
...
Looking for libz.a...
Found at /usr/lib/libz.so.
Found source directory zlib-1.1.4.
Use the source if the pre-installed library causes compilation problems.
Use source? [n]
Looking for libpcre.a...
Found source directory pcre-2.08.
Looking for libcrypto.a...
Found at /usr/local/ssl/lib/libcrypto.a.
./Install: [: 90701f: integer expression expected
./Install: [: 90701f: integer expression expected
./Install: [: 90701f: integer expression expected
Looking for libncurses.a...
Found at /lib/libncurses.so.
./Install: tmptst.c: Permission denied
 ^^^  
gcc: tmptst.c: No such file or directory

Do you have write permissions to the directory?

Peter
-- 
 PGP signed and encrypted  |  .''`.  ** Debian GNU/Linux **
messages preferred.| : :' :  The  universal
   | `. `'  Operating System
 http://www.palfrader.org/ |   `-http://www.debian.org/




Re: IDEA

2003-03-22 Thread Len Sassaman
On Sat, 22 Mar 2003, Eric Murray wrote:

 Looking for libcrypto.a...
 Found at /usr/local/ssl/lib/libcrypto.a.
 ./Install: [: 90701f: integer expression expected

 I think that line means that mixmaster's install script isn't
 properly identifying the version of Openssl.  If it were
 me, I'd fix the Mixmaster install script.

It's been a while since I really worked on the Install script -- Mixmaster
3.0 doesn't use it -- but this looks to me to be a bug that existed and
was fixed sometime around a year ago. What version of Mixmaster are you
using?

Please use the release version -- 2.9.0.

 BTW, if you will be posting Mixmaster messages to the cpunks
 list, could you fix it so it uses an informative Subject: line
 instead of Mixmaster Type III Message?

Those messages are from people testing the Mixminion software. Mixminion
isn't ready for actual use yet. It is my understanding that the user has
no control over the subject line in the current Mixminion system though --
the servers remove it.

I think this will be changed before the final release. Mixmaster 4.0
(which will interoperate with Mixminion) will place no restrictions on
users' Subject lines.


--Len.



Re: Idea: Sidestepping low-power broadcast regulations with infrared

2003-03-18 Thread adg
On Mon, Mar 17, 2003 at 03:13:46PM +0100, Thomas Shaddack wrote:

 Using a powerful high-frequency modulated infrared source (eg, a bank of
 LEDs) located on a highly visible place, it could be possible to facilitate
 local community broadcasts, effectively sidestepping all FCC regulations.

Hi,

I don't know if this may be of interest to you, since it's 
related more to electromagnetic emanation monitoring, but anyway...

In September 1999, I used the modulation of a keyboard's LED
as a means to broadcast data from a personal computer to the outside
world.

I built a very simple protocol by encoding data (in the basic version)
through different LED states. The bit rate was very low and
obviously depended on the specific encoding chosen. In fact, I tried
several encodings (e.g., some inspired by Baudot).

However, the monitoring of the electromagnetic emanations was
based on the associated radio frequencies rather than on the visible spectrum.

The monitoring equipment was very simple: a common radio receiver
with an audio cassette recorder.

It was possible to tap the data by tuning the radio receiver,
especially to the following frequencies: LW 209 kHz, LW 201 kHz, MW 892 kHz
(the receiver was very simple and the maximum distance from the emission
source was ten metres).
Decoding the recorded signals was straightforward using an FFT.
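
For concreteness, a toy sketch of the kind of trivial LED-state encoding
described above (my own framing choice, not the actual protocol): each byte
is framed like async serial, with a start bit, eight data bits, and a stop
bit, giving a schedule of LED on/off states to clock out at some slow bit
rate.

    # Toy encoder for an LED covert channel: frame each byte like async serial
    # (start bit, 8 data bits LSB-first, stop bit) and emit the LED on/off
    # schedule.  Purely illustrative; the original protocol and its
    # Baudot-inspired variants are not described in enough detail to reproduce.

    def encode_byte(value):
        """Return the LED states (1 = on, 0 = off) for one framed byte."""
        bits = [1]                                    # start bit: LED on
        bits += [(value >> i) & 1 for i in range(8)]  # data bits, LSB first
        bits.append(0)                                # stop bit: LED off
        return bits

    def encode_message(data):
        """Flatten a byte string into a single LED on/off schedule."""
        schedule = []
        for byte in data:
            schedule.extend(encode_byte(byte))
        return schedule

    if __name__ == "__main__":
        print(encode_message(b"hi"))   # clock these out at, say, a few bits per second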

ciao,
alfonso



Re: Idea: Sidestepping low-power broadcast regulations with infrared

2003-03-18 Thread Tyler Durden
I think you're on to something here.

One quick thought that occurs to me is that forward error correction could
be used within the IP payload, at least for a few dB of gain (has this been
tried?). Of course, the FEC probably won't help the header information very
much, but doesn't IP broadcast use a small set of broadcast IP addresses?
Thus, it might be possible for payload-based FEC to know a priori what will
be in the header and basically correct for it. Then there's simply the
matter of the reduced bandwidth due to the FEC, but it might be possible for
that to look just like good old Ethernet shared-bandwidth congestion (but
I'm no IP guy so I could be talkin' out my arse here).
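
To make the payload-FEC idea concrete, here is a minimal sketch using a
classic Hamming(7,4) code, which corrects a single flipped bit per 7-bit
block (an illustration of the principle, not a proposal for an actual
packet format):

    # Minimal Hamming(7,4) encoder/decoder: 4 data bits become 7 code bits and
    # any single bit flip in a block can be corrected.  Sketch only.

    def hamming74_encode(d):
        """d: list of 4 data bits -> 7 code bits [p1, p2, d1, p3, d2, d3, d4]."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_decode(c):
        """Correct up to one flipped bit and return the 4 data bits."""
        c = list(c)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # parity over positions 1,3,5,7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # parity over positions 2,3,6,7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]    # parity over positions 4,5,6,7
        error_pos = s1 + 2 * s2 + 4 * s3  # 0 means no single-bit error detected
        if error_pos:
            c[error_pos - 1] ^= 1
        return [c[2], c[4], c[5], c[6]]

    if __name__ == "__main__":
        data = [1, 0, 1, 1]
        word = hamming74_encode(data)
        word[5] ^= 1                       # flip one bit "in transit"
        assert hamming74_decode(word) == data
        print("single bit error corrected")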

-TD





From: Steve Schear [EMAIL PROTECTED]
To: Tyler Durden [EMAIL PROTECTED]
CC: [EMAIL PROTECTED]
Subject: Re: Idea: Sidestepping low-power broadcast regulations with  
infrared
Date: Mon, 17 Mar 2003 14:45:15 -0800

Another possibility occurred to me.  It might be possible to use the 
802.11-like devices for this purpose.  The problem for this application 
with Wi-Fi is its focus on high data rate and therefore low process gain.  
But there is no inherent reason why almost the same circuits (perhaps even 
the off-the-shelf PC cards themselves) couldn't be re-purposed for used at 
lower effective data rates and higher process gain for much greater range 
and interference immunity while still operating within the FCC Part 15 
guidelines.

As I recall most of the notebook cards have a max output of about 80 mW.  
Each of the 5 channels in the 2.4 GHz band can support up to 11 Mbps.  If 
you assume that you will use this for stereo broadcasting, then only 128 
kbps is needed for a pretty good quality .mp3.  This is a data rate ratio of 
about 85:1, or roughly 18 dB.  For every 6 dB of link margin improvement a 
signal's range is doubled, so 18 dB should, all other things being equal, 
extend the device range by about 8 times.  (If data rates were lowered to 
those now common for PCS and used for that sort of purpose, link margins 
would expand by another 9-12 dB.)
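
A quick back-of-envelope check of those numbers (my arithmetic; the exact
ratio comes out a shade above 18 dB, but the conclusion is the same
ballpark):

    # Trade data rate for link margin: process gain in dB from the rate ratio,
    # with free-space range roughly doubling per 6 dB of extra margin.
    import math

    wifi_rate_bps = 11_000_000     # nominal 802.11b channel rate
    audio_rate_bps = 128_000       # "pretty good quality" .mp3 stream

    gain_db = 10 * math.log10(wifi_rate_bps / audio_rate_bps)
    range_multiplier = 2 ** (gain_db / 6)    # +6 dB of margin ~ double the range

    print(f"rate ratio   : {wifi_rate_bps / audio_rate_bps:.0f}:1")   # ~86:1
    print(f"process gain : {gain_db:.1f} dB")                         # ~19.3 dB
    print(f"range factor : ~{range_multiplier:.0f}x")                 # ~9x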

steve





Re: Idea: Sidestepping low-power broadcast regulations with infrared

2003-03-17 Thread Steve Schear
At 12:08 PM 3/17/2003 -0500, you wrote:
Steve Schear wrote...

A detector that is only sensitive to this spectral region has the 
capability to operate in the daylight, even while pointing at the sun, and 
pick up little background radiation

How much are UV receivers (note, not the same thing as a mere UV 
detector)? Gotta be kinda expensive, I would think (ie, in the 4-digit 
range), but I could be wrong.
I haven't checked but assume they should be relatively cheap.  For example, 
I'm assuming this device isn't too expensive and the sensor itself should 
be available for a few $10s.  http://www.ame-corp.com/UVB.htm

And preferably, it would be nice if it could run up to 11Meg/sec or so.
I don't think you will be able to get anywhere near multi-megabit data 
rates with inexpensive, omni-directional, optical systems.  But that's 
what's needed for broadcast of entertainment .mp3 streams.


Seems to me if one wanted broadcast, operating in the 1550-nm range and 
then using good old EDFAs might work, if one had the right kind of 
omnidirectional IR 'antenna' (or whatever such a thing would be called). 
Then of course, the broadcast cost would be kind of expensive (say $5000), 
but the detectors could be cheap ($100 or less). The only drawback here is 
fog (1550nm doesn't go too good through fog, but rain and snow are 
apparently fine).
Fabrication of efficient, high-power, visible-wavelength emitters and sensors 
using nano-imprinting technologies should be feasible today.  The advantage 
of this approach is that it need not rely on material bandgaps 
but simply on resonant structures similar to RF circuits.

steve



Re: Idea: Sidestepping low-power broadcast regulations with infrared

2003-03-17 Thread Steve Schear
At 03:13 PM 3/17/2003 +0100, Thomas Shaddack wrote:
Using a powerful high-frequency modulated infrared source (eg, a bank of
LEDs) located on a highly visible place, it could be possible to facilitate
local community broadcasts, effectively sidestepping all FCC regulations.
Better to ignore the low-power regs and challenge the FCC to demonstrate, for 
each and every such station, that its signal measurably interferes with 
another station at receivers in another state.  Interference at receivers 
within the same state as the low-power transmitter is not a valid 
constitutional basis for FCC regulation.

Regarding LED broadcasts, you should consider RF-modulated mid-UV 
lamps.  There is a wide swath of spectrum from 230 to 280 nanometers 
created by the ozone layer.  Little sunlight, the only significant natural 
illumination source at these wavelengths, reaches most parts of the 
earth.  A detector that is only sensitive to this spectral region has the 
capability to operate in the daylight, even while pointing at the sun, and 
pick up little background radiation. A detector operating in this 
wavelength region need not be directional and will have an increased 
performance by orders of magnitude because of the reduction of the 
background noise. Furthermore, precise alignment of the transmitter and 
receiver is dispensed with since a detector does not have to operate in the 
line-of-sight but can function in a wide field-of-view mode to sense 
radiation scattered by the modulated UV signal.

Multi-watt transmitters can be constructed from inexpensive, commercially 
available Ar-Hg discharge lamps.  Data rates can easily exceed hundreds of 
kbps (megabit data rates have been reported).  By selecting different Hg 
isotopes for the lamps, multiple-channel operation is possible.  Reception 
using inexpensive, solid-state sensors is assumed.

See U.S. Patent 4,493,114.

steve



Re: Idea: Sidestepping low-power broadcast regulations with infrared

2003-03-17 Thread Tyler Durden
Steve Schear wrote...

I haven't checked but assume they should be relatively cheap.  For example, 
I'm assuming this device isn't too expensive and the sensor itself should 
be available for a few $10s.  http://www.ame-corp.com/UVB.htm
Perhaps I misunderstand what you would want to use this device for. Remember 
we need to detect bits, not just the presence of UV/IR or whatever. It's got 
to be able to react quickly, and hopefully quickly enough that the 
electronics behind it can be off-the-shelf, and probably Ethernet- or 
SONET-capable. (Think 10/11 Meg, or 155 Meg and beyond...)

And because I've never heard of UV-based communications, I would assume that 
such a receiver would be quite expensive, even at lower bit rates. However, 
if you go with the standard tele/datacom wavelength bands (850nm, 1310nm, 
1550nm...), prices get VERY cheap, even at bandwidths up to OC-48 (2.5 gig). 
With both the 1550nm and the 1310nm bands, you have the added possibility 
of optical amplifiers (Raman at 1310nm, erbium-doped fiber amplifiers at 
1550nm), and pretty much unlimited power (cladding-pumped fiber amplifiers 
can output in the 2 to 5 watt range and beyond).

Oh, and it should be mentioned that several companies have already 
commercialized free-space point-to-point line-of-sight optical 
communications at these bandwidths and these wavelengths, so the only thing 
you really need is the weird antenna, and I'd bet there's something out 
there already you could use.

-TD








And preferably, it would be nice if it could run up to 11Meg/sec or so.
I don't think you will be able to get anywhere near multi-megabit data 
rates with inexpensive, omni-directional, optical systems.  But that's 
what's needed for broadcast of entertainment .mp3 streams.


Seems to me if one wanted broadcast, operating in the 1550-nm range and 
then using good old EDFAs might work, if one had the right kind of 
omnidirectional IR 'antenna' (or whatever such a thing would be called). 
Then of course, the broadcast cost would be kind of expensive (say $5000), 
but the detectors could be cheap ($100 or less). The only drawback here is 
fog (1550nm doesn't go too good through fog, but rain and snow are 
apparently fine).
Fabrication of efficient, high-power, visible-wavelength emitters and sensors 
using nano-imprinting technologies should be feasible today.  The advantage 
of this approach is that it need not rely on material bandgaps 
but simply on resonant structures similar to RF circuits.

steve






Re: Idea: Sidestepping low-power broadcast regulations with infrared

2003-03-17 Thread Tyler Durden
Steve Schear wrote...

A detector that is only sensitive to this spectral region has the 
capability to operate in the daylight, even while pointing at the sun, and 
pick up little background radiation

How much are UV receivers (note, not the same thing as a mere UV detector)? 
Gotta be kinda expensive, I would think (ie, in the 4-digit range), but I 
could be wrong. And preferably, it would be nice if it could run up to 
11Meg/sec or so.

Seems to me if one wanted broadcast, operating in the 1550-nm range and then 
using good old EDFAs might work, if one had the right kind of 
omnidirectional IR 'antenna' (or whatever such a thing would be called). 
Then of course, the broadcast cost would be kind of expensive (say $5000), 
but the detectors could be cheap ($100 or less). The only drawback here is 
fog (1550nm doesn't go too good through fog, but rain and snow are 
apparently fine).

-TD






From: Steve Schear [EMAIL PROTECTED]
To: Thomas Shaddack [EMAIL PROTECTED],   cypherpunks  
[EMAIL PROTECTED]
Subject: Re: Idea: Sidestepping low-power broadcast regulations with  
infrared
Date: Mon, 17 Mar 2003 08:40:05 -0800

At 03:13 PM 3/17/2003 +0100, Thomas Shaddack wrote:
Using a powerful high-frequency modulated infrared source (eg, a bank of
LEDs) located on a highly visible place, it could be possible to facilitate
local community broadcasts, effectively sidestepping all FCC regulations.
Better to ignore low power regs and challenge the FCC to demonstrate for 
each and every such station that their signal measurably interferes at 
receivers in another state with another station.  Interference at receivers 
within the same state as the low power transmitters is not a valid 
constitutional basis for FCC regulation.

Regarding LED broadcasts, you should consider RF modulated mid-UV lamps.  
There is a wide swath of spectrum from 230 to 280 nanometers
created by the ozone layer.  Little sun light in this frequency range, the 
only significant natural illumination source, reaches most parts of the 
earth.  A detector that is only sensitive to this spectral region has the 
capability to operate in the daylight, even while pointing at the sun, and 
pick up little background radiation. A detector operating in this 
wavelength region need not be directional and will have an increased 
performance by orders of magnitude because of the reduction of the 
background noise. Furthermore, precise alignment of the transmitter and 
receiver is dispensed with since a detector does not have to operate in the 
line-of-sight but can function in a wide field-of-view mode to sense 
radiation scattered by the modulated UV signal.

Multi-watt transmitters can be constructed from inexpensive, commercially 
available, Ar-Hg discharge lamps.  Data rates can easily exceed 100s kbps 
(megabit data rates have been reported).  By selection of different Hg 
isotopes in the lamps multiple channel operation is possible.  Reception 
using inexpensive, solid-state, sensors is assumed.

See U.S. Patent 4,493,114.

steve





Re: Jonathan Zittrain on data retention, an awful idea

2002-07-06 Thread Bill Stewart

Sigh.  Back when the US Feds were still trying to push Key Escrow on the
National Information Infrastructure, I started research for an April 1 RFC
for the National Information Infrastructure Data Entry Escrow Protocol, 
NIIDEEP,
and proposed NIIDEEP Information Network Protocol Implementation Government
Standard, NIIDEEPINPIGS.  (Didn't get it finished by April 1 :-)

Because after all, there's no sense escrowing our crypto keys if you
don't also escrow the cyphertext - some of Eric Hughes's talks on
Message Escrow and Data Retention Policies were excellent explanations
of why those were the critical issues.

If the US Feds and Eurocrats would like us to provide them with all of our 
data,
the existing internet would need to double in size to transport all of it
to the appropriate government data storage facilities, or more than double
if separate copies need to be provided to multiple national governments,
or much more if local governments such as city trade commissions need copies.
Since this is clearly unrealistic, even with the demise of the E-Bone,
transmission will require the use of off-line data transmission technology.
Waiting for approval of new government standards would take too long and
would lose access to valuable data because of the resulting delay of several
years, but there are several existing standards for data storage that can be applied.

My long-lost research had the FIPS (US Federal Information Processing 
Standards)
references for several of them, but the current environment requires
the corresponding European standards as well, and I'll need assistance.
But there's also been progress!  RFC 1149 (Avian Carriers) has been 
implemented,
though scalable implementation and dual-source requirements may require
genetic reconstruction of the Passenger Pigeon to supplement current 
carrier species.
Modular methods use standard data storage formats and separate transmission.
Standards widely supported in the US include Hollerith cards,
1600 bpi 9-track tapes with EBCDIC character sets and fixed block sizes and 
LRECLs,
and ASCII-format punch tape (with country-specific standards for recycled 
paper content.)
8-inch floppy disks have also been widely used, and support both
CP/M and RT11 file system formats.
Are there corresponding European standards for data storage?
Transmission methods for data storage media include International
Postal Union standards for link layer and addressing formats and pricing,
though I'm not directly familiar with standards for shipping containers
where data encapsulation is required.

 Thanks;  Bill Stewart, [EMAIL PROTECTED]


From: Jon Zittrain [EMAIL PROTECTED]
Subject: Re: FC: Data retention scheme marches forward in European 
Parliament

I've written something opposing this at 
http://www.forbes.com/forbes/2002/0708/062.html.

Consider the range of proposals for unobtrusive but sweeping Internet 
monitoring. Most of them are doable as a technical matter, and all of them 
would be unnoticeable to us as we surf. Forbes columnist Peter Huber's 
idea is perhaps the most distilled version. Call it the return of the lock 
box. He asks for massive government data vaults, routinely receiving 
copies of all Internet traffic--e-mails, Web pages, chats, mouse clicks, 
shopping, pirated music--for later retrieval should the government decide 
it needs more information to solve a heinous crime. (See the Nov. 12 
column at forbes.com/huber.)

The idea might sound innocuous because the data collected would remain 
unseen by prying eyes until a later search, commenced only after legal 
process, is thought to require it. Make no mistake, however: The idealized 
digital lock box and many sibling proposals are fundamentally terrible 
ideas. Why?


Jonathan Zittrain, Harvard law professor; codirector, Berkman Center for 
Internet & Society.




Random number generator-Idea 1

2002-04-24 Thread gfgs pedo


hi,

With reference to the following URL:

http://www.ietf.org/rfc/rfc1750.txt

As in idea 1, what about choosing 2 independent bit file streams?
Then, as in RFC 1750 section 6.1.1, A Trivial Mixing Function
(page 14), make a 3rd bit file stream by XORing them:

   for i = 0 to n:
       bit(i) of file3 = bit(i) of file1 XOR bit(i) of file2

and follow idea 1?  Is the problem solved, and is idea 1 worthy?
Regards Data.
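
A minimal sketch of the mixing step described above (the file names are
placeholders for whatever the two independent streams actually are):

    # Trivial mixing function in the spirit of RFC 1750 section 6.1.1, applied
    # to whole files: XOR two supposedly independent random streams byte by
    # byte.  The file names here are placeholders.

    def xor_mix(path_a, path_b, path_out):
        with open(path_a, "rb") as fa, open(path_b, "rb") as fb, open(path_out, "wb") as out:
            a = fa.read()
            b = fb.read()
            n = min(len(a), len(b))               # mix only the overlapping length
            out.write(bytes(x ^ y for x, y in zip(a[:n], b[:n])))

    if __name__ == "__main__":
        xor_mix("file1.bin", "file2.bin", "file3.bin")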


__
Do You Yahoo!?
Yahoo! Games - play chess, backgammon, pool and more
http://games.yahoo.com/