Re: Crypto Craft Knowledge

2009-02-23 Thread Peter Gutmann
Ben Laurie b...@links.org writes:

I totally agree, and this is the thinking behind the Keyczar project (
http://www.keyczar.org/):

If we're allowed to do self-promotion I'll have to mention cryptlib, which had
as one of its principal design goals what was later stated by Ian Grigg as
"there should only be one mode, and that is secure".  With cryptlib you have to
work very, very hard to do things insecurely (generally by resorting to
calling very low-level functions that the docs surround with all sorts of dire
warnings), and some things just can't be done at all, plaintext key export
being one really major sticking point that I get no end of complaints
about (if you really want the gory details you can get them at either
http://researchspace.auckland.ac.nz/handle/2292/2310 or at
http://www.springer.com/computer/security+and+cryptology/book/978-0-387-95387-8 
for a newer, cleaned-up version).

This points out an awkward problem though: if you're a commercial vendor and
you have a customer who wants to do something stupid, you can't afford not to
allow it.  While my usual response to requests to do things insecurely is "If
you want to shoot yourself in the foot then use CryptoAPI", I can only do this
because I care more about security than money.  For any commercial vendor who
has to put the money first, this isn't an option.

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: The password-reset paradox

2009-02-23 Thread Ian G

On 19/2/09 14:36, Peter Gutmann wrote:

There are a variety of password cost-estimation surveys floating around that
put the cost of password resets at $100-200 per user per year, depending on
which survey you use (Gartner says so, it must be true).

You can get OTP tokens for as little as $5.  Barely anyone uses them.



The two numbers are not comparable.  One is the business cost to a 
company, including all the internal, absorbed costs (see Steve's email), 
while the other is the supplier's list price, without the internal 
user-company costs.


If we compared each method using the other's methodology, passwords 
would list at $0 per reset, and token recoveries would come out at 
$105 to $205, plus shipping.




Can anyone explain why, if the cost of password resets is so high, banks and
the like don't want to spend $5 (plus one-off background infrastructure costs
and whatnot) on a token like this?



It is a typical claim of the smart card & token industry that the 
bulk unit cost of their product is an important number.  This is 
possibly because the sellers of such products cannot offer the real 
project work, being too product-oriented and/or too small.  So 
they have to sell on something, and they push that number.  It is for 
this reason that IBM once ruled the world: they bypassed the whole 
list-price/commodity issue.


As a humorous aside, here's another deceptive sales approach available 
to the token world: the end of security as we know it :)


http://www.technologyreview.com/computing/22201/?a=f



iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: stripping https from pages

2009-02-23 Thread Peter Gutmann
Steven M. Bellovin s...@cs.columbia.edu writes:

http://www.theregister.co.uk/2009/02/19/ssl_busting_demo/ -- we've talked
about this attack for quite a while; someone has now implemented it.

My analysis of this (part of a much longer writeup):

-- Snip --

[...] it's now advantageous for attackers to spoof non-SSL rather than their
previous practice of trying to spoof SSL.  The reason for this is that the
Hamming distance between the eye-level SSL indicators and the no-SSL
indicators (even without using the trick of putting a blue border around the
favicon) is now so small that, as shown in the magnified view in [Reference to
graphic snipped], it's barely noticeable (imagine this crammed up into the
corner of a 1280 x 1024 display, at which point the difference is practically
invisible).  What makes this apparently counterintuitive spoof worthwhile is
the destructive interaction between the near-invisible indicators and the
change in the way that certificate errors are handled.  In Firefox 3 any form
of certificate error (including minor bookkeeping ones like forgetting to pay
your annual CA tax) results in a huge scary warning that requires a great many
clicks to bypass.  In contrast not having a certificate at all produces almost
no effect.  Since triggering negative feedback from the browser is something
that attackers generally want to avoid while failing to trigger positive
feedback has little to no effect, the unfortunate interaction of these two
changes in Firefox is that it's now of benefit to attackers to spoof non-SSL
rather than spoofing SSL.

-- Snip --

It's the law of unintended consequences in effect: HCI people pointed out some 
time ago that the change in the security indicators in FF3 was a bad idea, but 
AFAIK 'Moxie Marlinspike' is the first person to show that it's even worse 
than that because of the destructive interaction between the 
security-indicator change and the cert-warning change.

The first step in fixing this would be to undo several of the UI changes that 
led to the easily-spoofed security indicators in FF3 and bring back the FF2 
versions, which would at least partially disrupt the nasty interaction that 
makes this attack effective.

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Solving password problems one at a time, Re: The password-reset paradox

2009-02-23 Thread Ed Gerck

List,

In a business, one must write down the passwords and one must have a 
duplicate copy, with further backup, where management can access 
it. This is SOP.


This is done not just in case the proverbial truck hits the employee, or 
fire strikes the building, or for the disgruntled cases, but because 
people do forget, and a company cannot at the same time be responsible 
to the shareholders for its daily operations and not be responsible for 
the passwords that pretty much define how those daily operations are run.


The idea that people should not write down their passwords is thus silly 
from the security viewpoint of assuring availability, and also for 
another reason: users cannot be trusted to follow instructions. So, if 
one's security depends on users following instructions, then something 
is wrong from the start.


Solving password problems one at a time.

I submit that the most important password problem is not that someone 
may find it written somewhere. The most important password problem is 
that people forget it. So, writing it down and taking the easy 
precaution of not keeping it next to the computer solves the most 
important problem without a comparably significant downside. Automatic, 
secure, and self-managed password recovery and password reset (for when 
the password cannot be recovered) apps are also part of this solution.


I see the second most important problem with passwords to be that they 
usually have low entropy -- i.e., passwords are usually easily guessable 
or easy to find in a quick search.


The next two important problems in passwords are absence of mutual 
authentication (anti-phishing) and absence of two-factor authentication.


To solve these three problems at the same time, we have been 
experimenting since 2000 with a scheme where the Username/Password login 
is divided into two phases. In different applications in several 
countries over nine years, this has been tested with many hundreds of 
thousands of users and further improved (you can also test it if you 
want). It has just recently been applied to TLS SMTP authentication, 
where both the email address and the user's common name are also 
authenticated (as with X.509/PKI but without the certificates).


This is how it works, both for the UI and the engine behind it.

(UI in use since 2000, for web access control and authorization) After 
you enter a usercode in the first screen, you are presented with a 
second screen to enter your password. The usercode is a mnemonic 
6-character code such as HB75RC (randomly generated; you receive it from 
the server upon registration). Your password is freely chosen by you 
upon registration. That second screen also shows something that you and 
the correct server know but that you did not disclose in the first 
screen -- we can use a simple three-letter combination, ABC, for 
example. You use this to visually authenticate the server above the SSL 
layer. A rogue server would not know this combination, which allays 
spoofing concerns -- if you do not see the correct three-letter 
combination, do not enter your password.
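
As a concrete reading of that flow, here is a minimal server-side 
sketch (an illustration of my own, not Ed's code; the account table, 
the HB75RC usercode, and the ABC string are invented for the example):

  #include <stdio.h>
  #include <string.h>

  struct account {
      const char *usercode;   /* server-assigned, e.g. "HB75RC" */
      const char *antiphish;  /* shared visual secret, e.g. "ABC" */
      const char *password;   /* user-chosen (hashed in real life) */
  };

  static const struct account db[] = {
      { "HB75RC", "ABC", "my secret phrase" },
  };

  /* Screen 1: the user submits only the usercode; the server replies
   * with the anti-phishing string, so the user can check the server
   * before typing anything sensitive. */
  static const char *screen1(const char *usercode)
  {
      for (size_t i = 0; i < sizeof db / sizeof db[0]; i++)
          if (strcmp(db[i].usercode, usercode) == 0)
              return db[i].antiphish;
      return NULL;                          /* unknown usercode */
  }

  /* Screen 2: only after seeing the expected string does the user
   * send the password. */
  static int screen2(const char *usercode, const char *password)
  {
      for (size_t i = 0; i < sizeof db / sizeof db[0]; i++)
          if (strcmp(db[i].usercode, usercode) == 0)
              return strcmp(db[i].password, password) == 0;
      return 0;
  }

  int main(void)
  {
      printf("server shows: %s\n", screen1("HB75RC"));   /* ABC */
      printf("login ok: %d\n", screen2("HB75RC", "my secret phrase"));
      return 0;
  }

The design point is that screen1 answers before any secret has been 
typed, so only a holder of the real table can show the right string 
(modulo the relay objection raised later in this thread).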


(UI in use since 2008, TLS SMTP, aka SMTPS, authentication) The SMTP 
Username is your email address, while the SMTP Password is obtained by 
writing the usercode and the password in sequence. With TLS SMTP, 
encryption is on from the start (implicit SSL), so that neither the 
Username nor the Password is ever sent in the clear.


(UI 2008 version, web access control) Same as the TLS SMTP case, where a 
three-letter combination is provided for user anti-spoofing verification 
after the username (email address) is entered. In trust terms, the user 
does not trust the server with anything but the email address (which is 
public information) until the server has shown that it can be trusted 
(to that extent) by replying with the expected three-letter combination.


In all cases, because the usercode is not controlled by the user and is 
random, it adds a known and independently generated amount of entropy to 
the Password.


With a six-character usercode (to stay within the mnemonic range), 
usability considerations (no letter case, no symbols, overload "0" with 
"O" and "1" with "I", for example) will reduce the entropy that can be 
added to (say) 35 bits. Considering that the average poor, short 
password chosen by users has between 20 and 40 bits of entropy, the end 
result is expected to have from 55 to 75 bits of entropy, which is quite 
strong. This can be made larger by, for example, refusing to accept 
passwords that are less than 8 characters long, and by adding more 
characters to the usercode alphabet and/or the usercode itself (a 
7-character code can still be mnemonic and human-friendly).
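
A quick check of that arithmetic (a sketch of mine; the 34-symbol 
alphabet is an assumption -- 26 letters plus 10 digits, minus the 0/O 
and 1/I overloads):

  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      double symbols  = 34.0;            /* assumed usable alphabet */
      double per_char = log2(symbols);   /* ~5.09 bits per character */
      double usercode = 6 * per_char;    /* 6-char usercode */
      printf("usercode adds ~%.1f bits\n", usercode);   /* ~30.5 */
      printf("with a 20-40 bit password: ~%.0f to ~%.0f bits total\n",
             usercode + 20, usercode + 40);
      return 0;
  }

This lands a little under the "(say) 35 bits" figure above; the exact 
number depends on the alphabet you assume.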


The fourth problem, and the last important password problem that would 
still remain, is the vulnerability of the password lists themselves, 
which could be downloaded and cracked given enough time, outside the 
access protections of online login 

RE: The password-reset paradox

2009-02-23 Thread Charlie Kaufman
I would assume (hope?) that when you have an OTP token, you get two-factor
authentication and don't stop needing a password. You would need a password
either to unlock the OTP device or to enter alongside the OTP value. Otherwise,
someone who finds your token can impersonate you.

Assuming that's true, OTP tokens add costs by introducing new failure modes
("I lost it", "I ran it through the washing machine", etc.). I suspect a
similar study would find that the cost of the OTP token would be $500-$700/yr.
even if the device itself only cost $5. After all, passwords are free!

--Charlie

-Original Message-
From: owner-cryptogra...@metzdowd.com [mailto:owner-cryptogra...@metzdowd.com] 
On Behalf Of Peter Gutmann
Sent: Thursday, February 19, 2009 5:36 AM
To: cryptography@metzdowd.com
Subject: The password-reset paradox

There are a variety of password cost-estimation surveys floating around that
put the cost of password resets at $100-200 per user per year, depending on
which survey you use (Gartner says so, it must be true).

You can get OTP tokens for as little as $5.  Barely anyone uses them.

Can anyone explain why, if the cost of password resets is so high, banks and
the like don't want to spend $5 (plus one-off background infrastructure costs
and whatnot) on a token like this?

(My guess is that the password-reset cost estimates are coming from the same
place as software and music piracy figures, but I'd still be interested in any
information anyone can provide).

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Shamir secret sharing and information theoretic security

2009-02-23 Thread Jerry Leichter

On Feb 17, 2009, at 6:03 PM, R.A. Hettinga wrote:


Begin forwarded message:

From: Sarad AV jtrjtrjtr2...@yahoo.com
Date: February 17, 2009 9:51:09 AM EST
To: cypherpu...@al-qaeda.net
Subject: Shamir secret sharing and information theoretic security

hi,


I was going through the wikipedia example of shamir secret sharing  
which says it is information theoretically secure.


http://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing

In the example at that URL, they have the polynomial

  f(x) = 1234 + 166*x + 94*x^2

and they construct 6 points from the polynomial:

  (1,1494); (2,1942); (3,2578); (4,3402); (5,4414); (6,5615)

The secret here is S=1234. The threshold is k=3 and the number of  
participants is n=6.


If, say, the first two users collude, then

  1494 = S + c1*1 + c2*1^2
  1942 = S + c1*2 + c2*2^2

and clearly one can start making inferences about the sizes of the  
unknown coefficients c1 and c2 and S.



However, it is said in the URL above that Shamir secret sharing is  
information-theoretically secure.

It is.  Knowing some of the coefficients, or some constraints on some  
of the coefficients, is just dual to knowing some of the points.   
Neither affects the security of the system, because the coefficients  
*aren't secrets* any more than the values of f() at particular points  
are.  They are *shares* of secrets, and the security claim is that  
without enough shares, you have no information about the remaining  
shares.


The argument for information-theoretic security is straightforward:   
An n'th-degree polynomial is uniquely specified if you know its value  
at n+1 points - or, dually, if you know its n+1 coefficients.  On the  
other hand, *any* set of n+1 points (equivalently, any set of n+1  
coefficients) corresponds to a polynomial.  Taking a simple approach  
where the secret is the value of the polynomial at 0: given v_1,  
v_2, ..., v_n and *any* value v, there is a (unique) polynomial of  
degree at most n with p(0) = v and p(i) = v_i for i from 1 to n.   
Dually, the value p(0) is exactly the constant term of the  
polynomial.  Given any fixed set of values c_1, c_2, ..., c_n, and any  
other value c, there is obviously a polynomial p(x) = Sum_{i=0 to n}  
c_i x^i, where c_0 = c, and indeed p(0) = c.


Or ... in terms of your problem:  Even if I give you, not just a pair  
of linear equations in c1, c2, and S, but the actual values c1 and c2  
- the constant term (the secret) can still be anything at all.


The description above is nominally for polynomials over the reals.  It  
works equally well for polynomials over any field - like the integers mod  
some prime, for example.  For a finite field, there is an obvious  
interpretation of probability (the uniform probability distribution),  
and given that, "no information" can be interpreted as the difference  
between your a priori and a posteriori estimates of the probability  
that p(0) takes on any particular value, given the values of  
p(1), ..., p(n) (and that difference is exactly 0).  Because there  
can be no uniform probability distribution over all the reals, you  
can't state things in quite the same way, and information-theoretic  
security is a bit of a vague notion there.  Then again, no one does  
computations over the reals.  FP values - say, IEEE single precision -  
aren't a field, and there are undoubtedly big biases if you try to use  
Shamir's technique there.  (It should work over infinite-precision  
rationals.)
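
To make the duality argument concrete, here is a small stand-alone 
demonstration (mine, not part of the original posts), using the 
Wikipedia example moved into GF(p) with p = 65537: given only the 
shares (1,1494) and (2,1942) of a 3-of-6 split, every candidate secret 
s is fitted by exactly one quadratic f(x) = s + c1*x + c2*x^2 mod p, 
so two shares carry no information about s.

  #include <stdio.h>
  #include <stdint.h>

  #define P 65537ULL                       /* prime modulus */

  static uint64_t f(uint64_t x, uint64_t s, uint64_t c1, uint64_t c2)
  {
      return (s + c1 * x + c2 * x * x) % P;
  }

  int main(void)
  {
      const uint64_t y1 = 1494, y2 = 1942; /* shares at x = 1, x = 2 */
      const uint64_t inv2 = (P + 1) / 2;   /* inverse of 2 mod P */
      const uint64_t cand[] = { 0, 1234, 4321, 65000 };

      for (int i = 0; i < 4; i++) {
          uint64_t s = cand[i];
          /* Solve y1 = s + c1 + c2, y2 = s + 2*c1 + 4*c2 (mod P):
           * subtracting gives 2*c2 = y2 - 2*y1 + s, then c1 follows.
           * s = 1234 recovers the original c1 = 166, c2 = 94. */
          uint64_t c2 = (y2 + 2 * (P - y1) + s) % P * inv2 % P;
          uint64_t c1 = (y1 + 2 * P - s - c2) % P;
          printf("s=%5llu fits: c1=%5llu c2=%5llu (f(1)=%llu f(2)=%llu)\n",
                 (unsigned long long)s, (unsigned long long)c1,
                 (unsigned long long)c2,
                 (unsigned long long)f(1, s, c1, c2),
                 (unsigned long long)f(2, s, c1, c2));
      }
      return 0;
  }

The f(1) and f(2) columns print 1494 and 1942 for every candidate, 
which is exactly the "no information" claim in miniature.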


-- Jerry





In the URL below they say
http://en.wikipedia.org/wiki/Information_theoretic_security
"Secret sharing schemes such as Shamir's are information  
theoretically secure (and in fact perfectly secure) in that less  
than the requisite number of shares of the secret provide no  
information about the secret."


How can that be true? We are already able to make inferences.

Moreover, say we have 3 planes intersecting at a single point  
in Euclidean space, where each plane is a secret share (Blakley's  
scheme). With 2 plane equations, we cannot find the point of  
intersection, but we can certainly narrow it down to the line where  
the planes intersect. There is a loss of information about the secret.


From this it appears that Shamir's secret sharing scheme leaks  
information from its shares, but why is it then considered  
information-theoretically secure?


They do appear to leak information, similar to k-threshold schemes  
using the Chinese remainder theorem.


What am I missing?

Thanks,
Sarad.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Sweden's air force 'can't send secret messages'

2009-02-23 Thread Jerry Leichter
Summary:  Sweden developed its own secure encryption system for  
communicating with fighter jets.  A new jet, which is scheduled to  
replace all existing fighters by 2011, uses a NATO-standard encryption  
system - only.  There is no plan in place to upgrade the ground  
systems to the NATO standard.  So the new jets must communicate in the  
clear.


http://www.thelocal.se/17724/20090220/
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: The password-reset paradox

2009-02-23 Thread Debra L Cook



On Fri, 20 Feb 2009, Jerry Leichter wrote:


On Feb 19, 2009, at 8:36 AM, Peter Gutmann wrote:

There are a variety of password cost-estimation surveys floating around that
put the cost of password resets at $100-200 per user per year, depending on
which survey you use (Gartner says so, it must be true).

You can get OTP tokens for as little as $5.  Barely anyone uses them.

Can anyone explain why, if the cost of password resets is so high, banks and
the like don't want to spend $5 (plus one-off background infrastructure costs
and whatnot) on a token like this?

(My guess is that the password-reset cost estimates are coming from the same
place as software and music piracy figures, but I'd still be interested in any
information anyone can provide).
I suspect some very biased analysis.  For example, people who really need 
their passwords reset regularly will probably lose their tokens just as 
regularly.  The cost of replacing one of those is high - not for the token 
itself, but for the administrative costs, which *must* be higher than for a 
password reset since they include all the work in a password reset (properly 
authenticating user/identifying account probably contribute the largest 
costs), plus all the costs of physically obtaining, registering, and 
distributing a replacement token - plus any implied costs due to the delays 
needed to physically deliver the token versus the potential for an 
instantaneous reset.


I suppose the $100-$200 estimate might make sense for an organization that 
actually does password resets in a secure, carefully managed fashion. 
Frankly ... I, personally, have never seen such an organization.  Password 
resets these days are mainly automated, with authentication and 
identification based on very weak secondary security questions.  Even 
organizations you'd expect to be secure authenticate password reset 
requests based entirely on public information (e.g., if you know the name and 
badge number of an employee and the right help desk to call, you can get the 
password reset).  New passwords are typically delivered by unsecured email. 
All too many organizations reset to a fixed, known value.


It's quite true that organizations have found the costs of password resets to 
be too high.  What they've generally done is save money on the reset process 
itself, pushing the cost out into whatever budgets will get hit by the 
resulting security breaches.

  -- Jerry






There is nothing technical in the following, but I wanted to reply because 
the cost issue isn't the only reason. The format of the token and the user
interface are just as important. Think of a user with multiple accounts and
how to put the tokens on a single device with a single user interface 
when the banks don't use the same token vendor.


For large banks, cost was cited as a reason - both for deployment and for 
synchronizations/replacements. Another issue is that even with banks and
brokerage firms in the US that offer tokens to customers, the bank thinks in
terms of the customer having a token for one bank (itself) and not of the 
inconvenience that a token per bank places on the customer. Also, in a 
consumer scenario, as opposed to a work scenario, the consumer wants to be 
able to log into his/her bank account regardless of physical location and 
PC/laptop.


In a study I did last year with middle-aged, well-educated adults, the 
average number of bank and brokerage accounts accessed online was 6. 
One person had 20. No one wanted to carry around multiple hardware tokens 
in case they needed to access an account from someplace other than home. 
No one wanted a cup full of hardware tokens next to their PC at home. For 
banks that require certain customers (such as those with an account below 
some minimum balance) to pay for their token, no one wants to pay $15-$25 
every couple of years for a single token when they don't understand what 
it provides over a static password.

Without commenting on the security of software implementations, 
software-based tokens offered for browsers and cell phones get rid of the 
hardware issue for users. Browser-based tokens don't solve the problem of 
users wanting to log in from anywhere. Tokens on cell phones are more 
promising in terms of human factors if the user is not required to 
install a different application for every bank account, has a standard 
interface to all the tokens, and has a way of migrating tokens to a new 
cell phone. However, what I've seen so far are vendors charging a small 
licensing fee per token for a specific number of years. Thus the bank 
either needs to cover the cost or a user pays a fee for every token every 
couple of years. To deploy tokens, the bank will need to either install 
and start the tokens on the cell phone for the customer or expect a large 
percentage of calls to a help desk. An alternative of having an OTP 
delivered to the user's cell phone on an as-needed basis and 

SHA-3 Round 1: Buffer Overflows

2009-02-23 Thread R.A. Hettinga

http://blog.fortify.com/blog/fortify/2009/02/20/SHA-3-Round-1


Off by On
A Software Security Blog

Friday, 20 February 2009
SHA-3 Round 1: Buffer Overflows
NIST is currently holding a competition to choose a design for the  
SHA-3 algorithm (Bruce Schneier has a good description of secure  
hashing algorithms and why this is important). The reference  
implementations of a few of the contestants have bugs in them that  
could cause crashes, performance problems, or security problems if  
they are used in their current state. Based on our bug reports, some  
of those bugs have already been fixed. Here's the full story:
The main idea behind the competition is to have the cryptographic  
community weed out the less secure algorithms and choose from the  
remainder. A couple of us at Fortify (thanks to Doug Held for his  
help) decided to do our part. We're not hard-core cryptographers, so  
we decided to take a look at the reference implementations.
This competition is to pick an algorithm, but all of the submissions  
had to include a C implementation, to demonstrate how it works and  
test the speed, which will be a factor in the final choice. We used  
Fortify SCA to audit the 42 projects accepted into Round 1. We were  
impressed with the overall quality of the code, but we did find  
significant issues in a few projects, including buffer overflows in  
two of the projects. We have emailed the submission teams with our  
findings and one team has already corrected their implementation.

Confirmed issues:

  Implementation   Buffer     Out-of-bounds   Memory   Null
                   Overflow   Read            Leak     Dereference
  Blender          1          0               0        0
  Crunch           0          0               0        4
  FSB              0          0               3        11
  MD6              3          2               0        0
  Vortex           0          0               1        15

One of the projects with buffer issues was MD6, the implementation  
provided by Professor Ron Rivest and his team. All of the problems came  
back to the hashval field of the md6_state struct:


 unsigned char hashval[ (md6_c/2)*(md6_w/8) ];
The buffer size is determined by two constants:

 #define w md6_w /* # bits in a word   (64) */
 #define c md6_c /* # words in compression output  (16) */
At several points, this buffer is read or written to using a different  
bound:


 if (z==1) /* save final chaining value in st->hashval */
   { memcpy( st->hashval, C, md6_c*(w/8) );
     return MD6_SUCCESS;
   }
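
Plugging in the constants makes the mismatch concrete (my arithmetic, 
assuming md6_c = 16 and md6_w = 64 as the #defines above state):

  /* declared size: (md6_c/2)*(md6_w/8) = (16/2)*(64/8) =  64 bytes */
  /* bytes copied:   md6_c*(w/8)        =  16  *(64/8)  = 128 bytes */

So the memcpy can write twice the declared buffer, which matches the 
eventual fix of doubling its size.
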
Further analysis showed that ANSI standard layout rules would make  
incorrect behavior unlikely, but other compilers may have allowed it  
to be exploited. The MD6 team has doubled the size of the vulnerable  
buffer, which eliminated the risk. In this case, Fortify SCA found an  
issue that would have been difficult to catch otherwise.
The other buffer overflow was found in the Blender implementation,  
from Dr. Colin Bradbury. This issue was a classic typo:


 DataLength sourceDataLength2[3];   // high-order parts of data length

 ...
 if (ss.sourceDataLength < (bcount | databitlen))  // overflow
   if (++ss.sourceDataLength2[0] == 0)   // increment higher order count
     if (++ss.sourceDataLength2[1] == 0) // and the next higher order
       ++ss.sourceDataLength2[3];        // and the next one, etc.
The developer simply mistyped, using 3 instead of 2 for the array  
access. This issue was probably not caught because it would not be  
exposed without a very large input. The other issues we found were  
memory leaks and null dereferences from memory allocation.
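
The presumable one-character fix (my inference from the array bound; 
the post doesn't show the corrected line) is:

  ++ss.sourceDataLength2[2];  // last valid index of a [3] array
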
This just emphasizes what we already knew about C: even the most  
careful, security-conscious developer messes up memory management.  
Some of you are saying, "So what? These are reference implementations  
and this is only Round 1." There are a few problems with that thought.
Reference implementations don't disappear; they serve as a starting  
point for future implementations or are used directly. A bug in the  
RSA reference implementation was responsible for vulnerabilities in  
OpenSSL and two separate SSH implementations. They can also be used to  
design hardware implementations, using buffer sizes to decide how much  
silicon should be used.
The other consideration is speed, which will be a factor in the choice  
of algorithm. The fix for the MD6 buffer issues was to double the size  
of a buffer, which could degrade the performance. On the other hand,  
memory leaks could slow an implementation. A correct implementation is  
an accurate implementation.

We will put out a more detailed report on all the results soon.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Brazilian mandatory vehicle anti-theft and tracking regulation

2009-02-23 Thread Santiago Aguiar

Hello,

I have been following this list for some time, and I wanted to comment 
on one of the projects I'm working on, just to hear your comments about 
it (and because I think it is quite interesting for its security 
implications...).


Starting in August 2009, all new Brazilian vehicles will need to include 
a mandatory anti-theft device, installed at the factory, that will be 
activated on demand by the vehicle owner. The device (TCU) will connect 
to an owner-selected service operator (SO) using a standard protocol 
defined by the National Department of Traffic (DENATRAN), closely based 
on Motorola's ACP protocol, over GPRS. The main functions of the 
anti-theft device are vehicle tracking and remote blocking of the 
vehicle by the service operator at the request of the owner or of the 
police department.


As you may notice, the risks of not implementing this the right way are 
enormous. Not only because of privacy concerns, but because anyone could 
just block/unblock your car engine or doors remotely, and massively 
(think hundreds of thousands of cars in some SO). In my present opinion, 
there's no way they are going to do it correctly.


One of the issues is how the TCU will be activated. The idea is that the 
owner will be able to switch SO whenever he wants, and for that an 
activation protocol is needed. The current 'high-level' proposal by 
DENATRAN is here:


http://www.gristec.com.br/disco_virtual/SMS_Proposal_ACP_245.pdf

In few words, there's a default authkey installed on every device, and a 
'secret' key for each SO. When an SO needs to activate a device, it sends 
an SMS message to the TCU so it connects to the SO server through GPRS; 
then the SO configures the TCU with its authkey, and from that point on 
the TCU only answers messages that include that authkey. To change to 
another SO, the current SO sends a message that resets the authkey to the 
default one, and the process repeats.
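
Schematically, the activation flow described there reduces to something 
like this (my paraphrase of the proposal, not its actual wire format or 
message names):

  /* SO  --SMS-->  TCU : "connect to my server"                     */
  /* TCU --GPRS->  SO  : session opened, authenticated with the     */
  /*                     factory-default authkey                    */
  /* SO  ------->  TCU : set authkey = SO_key  (sent in the clear)  */
  /* ...the TCU now accepts only commands carrying SO_key...        */
  /* To switch SO: the current SO sets the authkey back to the      */
  /* default, and the new SO repeats the exchange.                  */

The default-key bootstrap and the cleartext key-set message are the 
points the attacks below go after.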


I can think of many ways to defeat a scheme like that (from just taking 
the SIM card out of the TCU and playing the protocol against the SO to 
get its key, to eavesdropping at some weak point, replaying SO-TCU 
commands, etc.).


The reasons they give for the proposal being OK rest on assuming: a) the 
secrecy of the SO authkey, which is sent in the clear to every activated 
device; b) the secrecy of the ICC-ID associated with each phone number 
(at least for doing something massively), which is known by at least 
every SO; and c) the security of the network 
(TCU <-> GPRS/GSM <-> Telco <-> VPN <-> SO) against 
eavesdropping/spoofing, which is compromised by any compromised SO.


My company has started to participate in the working groups that are 
trying to define all the technical and process issues of the regulation, 
and I'm personally deeply concerned. We are not security experts (though 
we build the tracking units and develop their firmware and server-side 
components), but we want to contribute as much as we can to the process.


Do you know of any similar experiences we can draw on? Do you think this 
is doomed to fail? Am I being too paranoid, and things are done this way 
normally and attacks 'just don't happen' ;)? 


Any comment is welcomed! Thanks!

--
Santiago

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: The password-reset paradox

2009-02-23 Thread Matt Crawford


On Feb 21, 2009, at 10:26 PM, Charlie Kaufman wrote:

Assuming that's true, OTP tokens add costs by introducing new failure 
modes (e.g., "I lost it", "I ran it through the washing machine", etc.)


Or even more surprising hazards.

http://home.fnal.gov/~crawdad/CryptoCard.jpg

The token on the left in that picture was issued in 2003, by postal  
mail, to a Sloan Digital Sky Survey collaborator at the US Naval  
Observatory. All incoming packages were subjected to high doses of  
electron and x-ray radiation, as it is also the residence of the Vice  
President.


On the right is the normal appearance of the token and its holder.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Shamir secret sharing and information theoretic security

2009-02-23 Thread sbg
Is it possible that the amount of information that knowledge of a
sub-threshold number of Shamir fragments leaks in a finite-precision
setting depends on the finite-precision implementation?

For example, if you know 2 shares of a 3-of-5 splitting, and you also
know that the finite-precision setting in which the fragments will be
used is IEEE 32-bit floating point or GNU bignum, can you narrow down
the search for the key relative to knowing no fragments and nothing
about the finite-precision implementation?


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


RE: Solving password problems one at a time, Re: The password-reset paradox

2009-02-23 Thread Dave Kleiman

 On February 21, 2009 14:34, Ed Gerck wrote:
 In a business, one must write down the passwords and one must have a 
 duplicate copy of it, with further backup, where management can access 
 it. This is SOP.

 This is done not just in case the proverbial truck hits the employee, or 
 fire strikes the building, or for the disgruntled cases, but because 
 people do forget and a company cannot be at the same time responsible to 
 the shareholders for its daily operations and not be responsible for the 
 passwords that pretty much define how those daily operations are run.

The idea that people should not write their passwords is thus silly from 
the security viewpoint of assuring availability and also for another 
reason. Users cannot be trusted to follow instructions. So, if one's 
security depends on their users following instructions, then something 
is wrong from the start.

Most organizations I interact with have an SOP that nobody should ever know 
another's password. The only passwords that are safe stored are those for 
encryption or the top level admin. You take on a degree of legal responsibility 
if you have the ability to logon as another user. Since the admin can easily 
change a user's password, what would be the necessity for this risk? All 
password changes should be audited.


Respectfully,

Dave Kleiman - http://www.ComputerForensicExaminer.com 
4371 Northlake Blvd #314
Palm Beach Gardens, FL 33410
561.310.8801 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Solving password problems one at a time, Re: The password-reset paradox

2009-02-23 Thread silky
On Sun, Feb 22, 2009 at 6:33 AM, Ed Gerck edge...@nma.com wrote:
 List,

 In a business, one must write down the passwords and one must have a
 duplicate copy of it, with further backup, where management can access it.
 This is SOP.

 This is done not just in case the proverbial truck hits the employee, or
 fire strikes the building, or for the disgruntled cases, but because people
 do forget and a company cannot be at the same time responsible to the
 shareholders for its daily operations and not be responsible for the
 passwords that pretty much define how those daily operations are run.

 The idea that people should not write their passwords is thus silly from the
 security viewpoint of assuring availability and also for another reason.
 Users cannot be trusted to follow instructions. So, if one's security
 depends on their users following instructions, then something is wrong from
 the start.

 Solving password problems one at a time.

 I submit that the most important password problem is not that someone may
 find it written somewhere. The most important password problem is that
 people forget it. So, writing it down and taking the easy precaution of not
 keeping next to the computer solves the most important problem with not even
 a comparably significant downside. Having automatic, secure, and
 self-managed password recovery and password reset (in case the password
 cannot be recovered) apps are also part of this solution.

 I see the second most important problem in passwords to be that they usually
 have low entropy -- ie, passwords are usually easily guessable or easy to
 find in a quick search.

 The next two important problems in passwords are absence of mutual
 authentication (anti-phishing) and absence of two-factor authentication.

 To solve these three problems, at the same time, we have been experimenting
 since 2000 with a scheme where the Username/Password login is divided in two
 phases. In different applications in several countries over nine years, this
 has been tested with many hundreds of thousands of users and further
 improved. (you can also test it if you want). It has just recently been
 applied for TLS SMTP authentication where both the email address and the
 user's common name are also authenticated (as with X.509/PKI but without the
 certificates).

 This is how it works, both for the UI and the engine behind it.

 (UI in use since 2000, for web access control and authorization) After you
 enter a usercode in the first screen, you are presented with a second screen
 to enter your password. The usercode is a mnemonic 6-character code such as
 HB75RC (randomly generated, you receive from the server upon registration).
 Your password is freely chosen by you upon registration. That second screen
 also has something that you and the correct server know but that you did not
 disclose in the first screen -- we can use a simple three-letter combination
 ABC, for example. You use this to visually authenticate the server above the
 SSL layer. A rogue server would not know this combination, which allays
 spoofing considerations -- if you do not see the correct three-letter
 combination, do not enter your password.

Well, this is an old plan, and useless, because any rogue server can
just submit the 'usercode' to the real server and get the three
letters. Common implementations of this use pictures (cats, dogs,
family, user-uploaded, whatever).

And FWIW, renaming "password" to "usercode" doesn't make it more secure.


 (UI in use since 2008, TLS SMTP, aka SMTPS, authentication). The SMTP
 Username is your email address, while the SMTP Password is obtained by the
 user writing in sequence the usercode and the password. With TLS SMTP,
 encryption is on from the start (implicit SSL), so that neither the Username
 nor the Password is ever sent in the clear.

I have no idea what you're referring to here. It doesn't seem to make
sense in the context of the rest of your email. Are you saying your
system is useless given SSL? (Aside from the fact that it's useless
anyway ...)


 (UI 2008 version, web access control) Same as the TLS SMTP case, where a
 three-letter combination is provided for user anti-spoofing verification
 after the username (email address) is entered. In trust terms, the user does
 not trust the server with anything but the email address (which is public
 information) until the server has shown that it can be trusted (to that
 extent) by replying with the expected three-letter combination.

Wrong again, see above.


 In all cases, because the usercode is not controlled by the user and is
 random, it adds a known and independently generated amount of entropy to the
 Password.

Disregarding all of the above, consider that it may not be random: given
that you can generate them on signup, there is the potential to know or
learn the RNG a given site is using.


 With a six-character (to be within the mnemonic range) usercode, usability
 considerations (no letter case, no symbols, 

Re: Shamir secret sharing and information theoretic security

2009-02-23 Thread Hal Finney
 Is it possible that the amount of information that the knowledge of a
 sub-threshold number of Shamir fragments leaks in finite precision setting
 depends on the finite precision implementation?

 For example, if you know 2 of a 3 of 5 splitting and you also know that
 the finite precision setting in which the fragments will be used is IEEE
 32-bit floating point or GNU bignum can you narrow down the search for the
 key relative to knowing no fragments and nothing about the finite
 precision implementation?

No, not really. Think of this simple example.

We will have two shares, x and y. Let's work mod 10 to make it simple. The
secret value v will be x + y mod 10. The shares are created by choosing a
random value for x, and then setting y to be v - x mod 10.

So for example if you want to share v = 5, and x is 9, then y will be 6:
9 + 6 = 15 = 5 mod 10.

Suppose that you happen to know from other information that the secret
value v is either 1 or 2. Now you learn a share value x = 5. How much
have you learned about v?

Nothing: you can deduce that y is either 6 or 7, but you have no way
of knowing which.  Whatever x had turned out to be, there would be a y
value corresponding to each possible v value. Learning a share tells you
nothing about v, and in general Shamir sharing, learning all but one of
the needed shares similarly tells you nothing about the secret.
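
Hal's example is small enough to check exhaustively (a quick sketch of
mine, not his code):

  #include <stdio.h>

  int main(void)
  {
      int x = 5;                     /* the share we learned */
      for (int v = 0; v < 10; v++)   /* every candidate secret */
          printf("v=%d is explained by y=%d\n",
                 v, ((v - x) % 10 + 10) % 10);
      return 0;
  }

Every v from 0 to 9 pairs with exactly one completing share y, so
learning x = 5 does not change which v is likelier.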

Hal Finney

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: SHA-3 Round 1: Buffer Overflows

2009-02-23 Thread Ian G

On 22/2/09 23:09, R.A. Hettinga wrote:

http://blog.fortify.com/blog/fortify/2009/02/20/SHA-3-Round-1



This just emphasizes what we already knew about C: even the most
careful, security-conscious developer messes up memory management.



No controversy there.


Some of you are saying, "So what? These are reference implementations and
this is only Round 1." There are a few problems with that thought.
Reference implementations don't disappear; they serve as a starting
point for future implementations or are used directly. A bug in the RSA
reference implementation was responsible for vulnerabilities in OpenSSL
and two separate SSH implementations. They can also be used to design
hardware implementations, using buffer sizes to decide how much silicon
should be used.



It is certainly appreciated that work is put in to improve the 
implementations during the competition (my group did something similar 
for the Java parts of AES, so I know how much work it can be).


However, I think it is not really efficient at this stage to insist on 
secure programming for the submission implementations, for the simple 
reason that there are 42 submissions, and 41 of those will be thrown 
away, more or less.  There isn't much point in making the 41 secure; 
better to save the energy until the one is found, and then concentrate 
the energy on it, no?




iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: SHA-3 Round 1: Buffer Overflows

2009-02-23 Thread Steve Furlong
 This just emphasizes what we already knew about C: even the most
 careful, security-conscious developer messes up memory management.

 However, I think it is not really efficient at this stage to insist on
 secure programming for the submission implementations, for the simple
 reason that there are 42 submissions, and 41 of those will be thrown away,
 more or less.  There isn't much point in making the 41 secure; better to
 save the energy until the one is found, and then concentrate the energy, no?

Or stop using languages which encourage little oopsies like that. At
the least, make it a standard practice to mock those who use C but
don't use memory-safe libraries and diagnostic tools.

Regards,
SRF

-- 
Neca eos omnes. Deus suos agnoscet. -- Arnaud-Amaury, 1209

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com