Re: Trojan horse attack involving many major Israeli companies, executives
Amir Herzberg wrote: Nicely put, but I think not quite fair. From friends in financial and other companies in the states and otherwise, I hear that Trojans are very common there as well. In fact, based on my biased judgement and limited exposure, my impression is that security practice is much better in Israeli companies - both providers and users of IT - than in comparable companies in most countries. For example, in my `hall of shame` (link below) you'll find many US and multinational companies which don't protect their login pages properly with SSL (PayPal, Chase, MS, ...). I've found very few Israeli companies, and of the few I've found, two actually acted quickly to fix the problem - which is rare! Most ignored my warning, and a few sent me coupons :-) [seriously] Could it be that such problems are more often covered up in other countries? Or maybe that the stronger awareness in Israel also implies more attackers? I think both conclusions are likely. I also think that this exposure will further increase awareness among Israeli IT managers and developers, and hence improve the security of their systems. there is the story of the (state side) financial institution that was outsourcing some of its y2k remediation and failed to perform due diligence on the (state side) lowest bidder ... until it was too late and they were faced with having to deploy the software anyway. one of the spoofs of SSL ... originally it was supposed to be used for the whole shopping experience, from the URL the enduser entered, thru shopping, checkout and payment. webservers found that with SSL they took an 80-90% performance hit on their throughput ... so they saved the use of SSL until checkout and payment. the SSL countermeasure to MITM-attack is that the URL the user entered is checked against the URL in the webserver certificate. However, the URLs the users were entering weren't SSL/HTTPS ... they were just standard stuff ... and so there wasn't any countermeasure to MITM-attack. 
If the user had gotten to a spoofed MITM site ... they could have done all their shopping and then clicked the checkout button ... which might provide HTTPS/SSL. however, if it was a spoofed site, it is highly probable that the HTTPS URL provided by the (spoofed site) checkout button was going to match the URL in any transmitted digital certificate. So for all intents and purposes ... most sites make very little use of https/ssl as a countermeasure for MITM-attacks ... simply encryption as a countermeasure for skimming/harvesting (eavesdropping). in general, if the naive user is clicking on something that obfuscates the real URL (in some cases they don't even have to obfuscate the real URL) ... then the crooks can still utilize https/ssl ... making sure that they have a valid digital certificate that matches the URL that they are providing. the low-hanging fruit of fraud ROI ... says that the crooks are going to go after the easiest target, with the lowest risk, and the biggest bang-for-the-buck. that has mostly been the data-at-rest transaction files. then it is other attacks on either of the end-points. attacking generalized internet channels for harvesting/skimming appears to be one of the lowest paybacks for the effort. in other domains, there have been harvesting/skimming attacks ... but again mostly on end-points ... and these are dedicated/concentrated environments where the only traffic ... is traffic of interest (any extraneous/uninteresting stuff has already been filtered out).
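The failure mode described above can be sketched in a few lines: the hostname-vs-certificate comparison is SSL's only MITM countermeasure, and it simply never runs when the URL the user entered is plain HTTP. This is a toy illustration (not a real TLS stack), with hypothetical hostnames:

```python
# Sketch of SSL's MITM countermeasure: compare the host the user entered
# against the host named in the server's certificate. With a plain HTTP
# entry URL, no certificate is ever presented, so the check never runs.
from urllib.parse import urlparse

def mitm_check(entered_url: str, cert_subject_host: str) -> str:
    """Return the outcome of the hostname-vs-certificate comparison."""
    parts = urlparse(entered_url)
    if parts.scheme != "https":
        # The user shopped over plain HTTP -- the only MITM countermeasure
        # is skipped entirely.
        return "no check performed"
    if parts.hostname == cert_subject_host:
        return "hostname matches certificate"
    return "MISMATCH: possible MITM"

# Shopping over plain HTTP: a spoofed site is never detected.
print(mitm_check("http://shop.example.com/cart", "shop.example.com"))
# HTTPS checkout on a spoofed site: the crook presents a perfectly valid
# certificate for the spoofed site's *own* hostname, so the check passes.
print(mitm_check("https://evil.example.net/checkout", "evil.example.net"))
```

Note how the second case passes: the comparison only proves you reached the host you clicked on, not the merchant you intended, which is exactly the point made above.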
Re: Dell to Add Security Chip to PCs
Peter Gutmann wrote: Neither. Currently they've typically been smart-card cores glued to the MB and accessed via I2C/SMB. and chips that typically have had eal4+ or eal5+ evaluations. hot topic in 2000, 2001 ... at the intel developer's forums and rsa conferences
Re: Dell to Add Security Chip to PCs
Erwann ABALEA wrote: I've read your objections. Maybe I wasn't clear. What's wrong with installing a cryptographic device by default on PC motherboards? I work for a PKI 'vendor', and for me, software private keys are nonsense. How will you convince Mr Smith (or Mme Michu) to buy an expensive CC EAL4+ evaluated token, install the drivers, and solve the inevitable conflicts that will occur, simply to store his private key? You first have to be good to convince him to justify the extra expense. If a standard secure hardware cryptographic device is installed by default on PCs, it's OK! You could obviously say that Mr Smith won't be able to move his certificates from machine A to machine B, but more than 98% of the time, Mr Smith doesn't need to do that. Installing a TCPA chip is not a bad idea. It is as 'trustable' as any other cryptographic device, internal or external. What is bad is accepting to buy software that you won't be able to use if you decide to claim your ownership... Palladium is bad, TCPA is not bad. Don't confuse the two. the cost of EAL evaluation has typically already been amortized across a large number of chips in the smartcard market. the manufacturing cost of such a chip is pretty proportional to the chip size ... and the thing that drives chip size tends to be the amount of eeprom memory. in the tcpa track at the intel developer's forum a couple years ago ... i gave a talk and claimed that i had designed and significantly cost-reduced such a chip by throwing out all features that weren't absolutely necessary for security. I also mentioned that two years after i had finished such a design ... tcpa was starting to converge to something similar. the head of tcpa in the audience quipped that i didn't have a committee of 200 helping me with the design.
Re: Banks Test ID Device for Online Security
Bill Stewart wrote: Yup. It's the little keychain frob that gives you a string of numbers, updated every 30 seconds or so, which stays roughly in sync with a server, so you can use them as one-time passwords instead of storing a password that's good for a long term. So if the phisher cons you into handing over your information, they've got to rip you off in nearly-real-time with a MITM game instead of getting a password they can reuse, sell, etc. That's still a serious risk for a bank, since the scammer can use it to log in to the web site and then do a bunch of transactions quickly; it's less vulnerable if the bank insists on a new SecurID hit for every dangerous transaction, but that's too annoying for most customers. in general, it is something you have authentication as opposed to the common shared-secret something you know authentication. while a window of vulnerability does exist (supposedly something that proves you are in possession of something you have), it is orders of magnitude smaller than with shared-secret something you know authentication. there are two scenarios for shared-secret something you know authentication: 1) a single shared-secret used across all security domains ... a compromise of the shared-secret has a very wide window of vulnerability plus a potentially very large scope of vulnerability 2) a unique shared-secret for each security domain ... which helps limit the scope of a shared-secret compromise. this potentially worked with one or two security domains ... but with the proliferation of the electronic world ... it is possible to have scores of security domains, resulting in scores of unique shared-secrets. scores of unique shared-secrets typically exceed human memory capacity, with the result that all shared-secrets are recorded someplace; which in turn becomes a new exploit/vulnerability point. 
various financial shared-secret exploits are attractive because with modest effort it may be possible to harvest tens of thousands of shared-secrets. One-at-a-time, real-time social engineering may take comparable effort ... but only yields a single piece of authentication material with a very narrow time-window, and the fraud ROI might be several orders of magnitude less. It may appear to still be a large risk to individuals ... but for a financial institution, it may be a relatively small risk to cover the situation ... compared to a criminal being able to compromise 50,000 accounts with comparable effort. In one presentation the comment was made that the only thing that they really needed to do is make it more attractive for the criminals to attack somebody else. It would be preferable to have a something you have authentication resulting in a unique value ... every time the device was used. Then no amount of social engineering could result in getting the victim to give up information that results in compromise. However, even with a relatively narrow window of vulnerability ... it still could reduce risk/fraud to financial institutions by several orders of magnitude (compared to existing prevalent shared-secret something you know authentication paradigms). old standby posting about security proportional to risk http://www.garlic.com/~lynn/2001h.html#61
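The 30-second keychain token described above can be sketched in a few lines. The real SecurID algorithm is proprietary; this sketch follows the later, public TOTP construction (RFC 6238) instead, which captures the same property: token and server independently derive the same short-lived code from a shared device secret and the current time window.

```python
# Minimal time-windowed one-time-password sketch (RFC 6238 style), as a
# stand-in for the proprietary keychain-token algorithm discussed above.
import hashlib
import hmac
import struct

def one_time_code(secret: bytes, unix_time: int, step: int = 30) -> str:
    """Derive a 6-digit code from a device secret and the current 30s window."""
    counter = unix_time // step            # same value on token and server
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F             # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

secret = b"device-seed"                    # hypothetical provisioned seed
# Codes within the same 30s window agree, so token and server stay in sync;
# the next window yields a fresh code, which is why a phished code is only
# useful for a near-real-time MITM attack, not for later reuse or resale.
print(one_time_code(secret, 1_000_000_000))
print(one_time_code(secret, 1_000_000_015))   # same 30s window, same code
print(one_time_code(secret, 1_000_000_030))   # next window, fresh code
```

This is the "narrow window of vulnerability" trade-off above in concrete form: the secret is still a stored shared value, but what the user can be conned into revealing is only valid for one window.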
Re: Academics locked out by tight visa controls
At 08:03 AM 9/20/2004, John Kelsey wrote: I guess I've been surprised this issue hasn't seen a lot more discussion. It takes nothing more than to look at the names of the people doing PhDs and postdocs in any technical field to figure out that a lot of them are at least of Chinese, Indian, Arab, Iranian, Russian, etc., ancestry. And only a little more time to find out that a lot of them are not citizens, and have a lot of hassles with respect to living and working here. What do you suppose happens to the US lead in high-tech, when we *stop* drawing in some large fraction of the smartest, hardest-working thousandth of a percent of mankind? in '94 there was a report (possibly sjmn?) that said at least half of all cal. univ. tech. PHDs were awarded to foreign born. during some of the tech green card discussions in the late '90s ... it was pointed out that the internet boom (bubble) was heavily dependent on all these foreign born, since there were hardly enough born in the usa to meet the demand. in the late 90s there were some reports that many of these graduates had their education paid by their gov. with directions to enter a us company in strategic high tech areas for 4-8 years and then return home as a tech transfer effort. i was told in the late 90s about one optical computing group in a high tech operation where all members of the group fell into this category (foreign born with obligation to return home after some period). another complicating factor competing for resources during the late 90s high-tech, internet boom (bubble?) period was the significant resource requirement for y2k remediation efforts. nsf had a recent study on part of this http://www.nsf.gov/sbe/srs/infbrief/ib.htm graduate enrollment in science and engineering fields reaches new peak; 1st time enrollment of foreign students drops http://www.nsf.gov/sbe/srs/infbrief/nsf04326/start.htm -- Anne Lynn Wheeler http://www.garlic.com/~lynn/
Re: TCPA not virtualizable during ownership change (Re: Overcoming the potential downside of TCPA)
I arrived at that decision over four years ago ... TCPA possibly didn't decide on it until two years ago. In the assurance session in the TCPA track at the spring 2001 intel developer's conference I claimed my chip was much more KISS, more secure, and could reasonably meet the TCPA requirements at the time w/o additional modifications. One of the TCPA guys in the audience groused that I didn't have to contend with the committees of hundreds helping me with my design. There are actually significant similarities between my chip and the TPM chips. I'm doing key gen at the very first, initial power-on/test of the wafer off the line (somewhere in the dim past it was drilled into me that every time something has to be handled it increases the cost). Also, because of extreme effort at KISS, the standard PP evaluation stuff gets much simpler and easier because most (possibly 90 percent) of the stuff is N/A or doesn't exist. early ref: http://www.garlic.com/~lynn/aadsm2.htm#staw or refs at (under subject aads chip strawman): http://www.garlic.com/~lynn/index.html#aads brand other misc. stuff: http://www.asuretee.com/ random evaluation refs: http://www.garlic.com/~lynn/aadsm12.htm#13 anybody seen (EAL5) semi-formal specification for FIPS186-2/x9.62 ecdsa? http://www.garlic.com/~lynn/2002j.html#86 formal fips186-2/x9.62 definition for eal 5/6 evaluation [EMAIL PROTECTED] on 8/15/2002 6:44 pm wrote: I think a number of the apparent conflicts go away if you carefully track endorsement key pair vs endorsement certificate (signature on endorsement key by hw manufacturer). For example where it is said that the endorsement _certificate_ could be inserted after ownership has been established (not the endorsement key), so that apparent conflict goes away. (I originally thought this particular one was a conflict also, until I noticed that.) I see anonymous found the same thing. 
But anyway this extract from the CC PP makes clear the intention and an ST based on this PP is what a given TPM will be evaluated based on: http://niap.nist.gov/cc-scheme/PPentries/CCEVS-020016-PP-TPM1_9_4.pdf p 20: | The TSF shall restrict the ability to initialize or modify the TSF | data: Endorsement Key Pair [...] to the TPM manufacturer or designee. (if only they could have managed to say that in the spec). Adam -- http://www.cypherspace.org/adam/
Re: Challenge to David Wagner on TCPA
actually it is possible to build chips that generate keys as part of manufacturing power-on/test (while still in the wafer, and the private key never, ever exists outside of the chip) ... and be at effectively the same trust level as any other part of the chip (i.e. hard instruction ROM). such a key pair, which can uniquely authenticate a chip, effectively becomes as much a part of the chip as the ROM or the chip serial number, etc. The public/private key pair, if appropriately protected (with an evaluated, certified and audited process), can then be considered somewhat more trusted than a straight serial number ... aka a straight serial number can be skimmed and replayed ... where a digital signature on unique data is harder to replay/spoof. the chips come with a unique public/private key where the private key is never known. sometimes this is a difficult concept ... the idea of a public/private key pair as a form of a difficult-to-spoof chip serial ... when all uses of public/private key, asymmetric cryptography might have always been portrayed as equivalent to x.509 identity certificates (it is possible to show that in a large percentage of the systems, public/private key digital signatures are sufficient for authentication and any possible certificates are both redundant and superfluous). misc. ref (aads chip strawman): http://www.garlic.com/~lynn/index.html#aads http://www.asuretee.com/ [EMAIL PROTECTED] on 6/13/2002 11:10 am wrote: This makes a lot of sense, especially for closed systems like business LANs and WANs where there is a reasonable centralized authority who can validate the security of the SCP keys. I suggested some time back that since most large businesses receive and configure their computers in the IT department before making them available to employees, that would be a time that they could issue private certs on the embedded SCP keys. The employees' computers could then be configured to use these private certs for their business computing. 
However the larger vision of trusted computing leverages the global internet and turns it into what is potentially a giant distributed computer. For this to work, for total strangers on the net to have trust in the integrity of applications on each others' machines, will require some kind of centralized trust infrastructure. It may possibly be multi-rooted but you will probably not be able to get away from this requirement. The main problem, it seems to me, is that validating the integrity of the SCP keys cannot be done remotely. You really need physical access to the SCP to be able to know what key is inside it. And even that is not enough, if it is possible that the private key may also exist outside, perhaps because the SCP was initialized by loading an externally generated public/private key pair. You not only need physical access, you have to be there when the SCP is initialized. In practice it seems that only the SCP manufacturer, or at best the OEM who (re) initializes the SCP before installing it on the motherboard, will be in a position to issue certificates. No other central authorities will have physical access to the chips on a near-universal scale at the time of their creation and installation, which is necessary to allow them to issue meaningful certs. At least with the PGP web of trust people could in principle validate their keys over the phone, and even then most PGP users never got anyone to sign their keys. An effective web of trust seems much more difficult to achieve with Palladium, except possibly in small groups that already trust each other anyway. If we do end up with only a few trusted root keys, most internet-scale trusted computing software is going to have those roots built in. Those keys will be extremely valuable, potentially even more so than Verisign's root keys, because trusted computing is actually a far more powerful technology than the trivial things done today with PKI. 
I hope the Palladium designers give serious thought to the issue of how those trusted root keys can be protected appropriately. It's not going to be enough to say it's not our problem. For trusted computing to reach its potential, security has to be engineered into the system from the beginning - and that security must start at the root! - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
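The replay distinction drawn earlier in this post ... a bare serial number can be skimmed and replayed, while a response over a fresh challenge cannot ... can be sketched as follows. For self-containment this toy uses an HMAC as a stand-in for the chip's public/private-key digital signature (an assumption purely for illustration; the replay property being demonstrated is the same either way):

```python
# Why a signed fresh challenge beats a static serial number: a recorded
# (skimmed) response is only valid for the one nonce it was computed over.
# HMAC here stands in for the chip's asymmetric signature operation.
import hashlib
import hmac
import secrets

CHIP_KEY = secrets.token_bytes(32)   # stays inside the chip, never exported

def chip_respond(nonce: bytes) -> bytes:
    """What the chip computes over a verifier-supplied nonce."""
    return hmac.new(CHIP_KEY, nonce, hashlib.sha256).digest()

def verify(nonce: bytes, response: bytes) -> bool:
    return hmac.compare_digest(chip_respond(nonce), response)

# A skimmer records one complete exchange...
old_nonce = secrets.token_bytes(16)
recorded = chip_respond(old_nonce)
assert verify(old_nonce, recorded)        # valid for that nonce only

# ...but the verifier issues a fresh nonce each time, so the replay fails.
fresh_nonce = bytes(b ^ 0xFF for b in old_nonce)   # guaranteed different
print(verify(fresh_nonce, recorded))      # False: replay rejected
```

A static serial number is the degenerate case where the "response" never changes, which is exactly why it can be skimmed and replayed.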
Re: maximize best case, worst case, or average case? (TCPA)
security modules are also inside the swipe pin-entry boxes that you see at check-out counters. effectively both smartcards and dongles are forms of hardware tokens. the issue would be whether a smartcard form factor might be utilized in a copy protection scheme similar to the TCPA paradigm ... a single hardware chip that you register for all your applications ... or in the dongle paradigm, where you get a different smartcard for each application (with the downside of the floppy copy protection scenario, where a user with a half dozen active copy protected applications would want all their smartcards crammed into the same smartcard reader simultaneously). many of the current chipcards i believe are used in the magnetic stripe swipe mode for authenticating specific transactions. most of the rest are used as a password substitute at login type events. Many of the chipcards following the straight payment card model result in the end-user having a large number of different institutional tokens (similar to the floppy copy protect paradigm). Following the institutional-specific and/or application-specific token paradigm starts to become difficult to manage as the number of tokens increases and the probability that multiple are required simultaneously increases. That eventually leads into some sort of person-centric or device-centric paradigm ... not so much an issue of the form factor (floppy, chipcard, dongle, etc) but an issue of whether there are potentially large numbers of institutional/application specific objects or small numbers of person/device specific objects. So a simple issue is the trade-off between the institutional/application specific objects which seem to have some amount of acceptance (payment cards, chip cards, various dongle forms, etc) but in many instances can scale poorly ... 
especially if multiple different such objects have to be available concurrently vis-a-vis switching to a person/device specific object paradigm (chipcard, dongles, etc, potentially exactly the same formfactor but a different paradigm) [EMAIL PROTECTED] on 6/30/2002 12:39 pm wrote: I think dongles (and non-copyable floppies) have been around since the early 80s at least...maybe the 70s. Tamper-resistant CPU modules have been around since the ATM network, I believe (in the form of PIN processors stored inside safes). The fundamental difference between a dongle and a full trusted module containing the critical application code is that with a dongle, you can just patch the application to skip over the checks (although the checks can be repeated, and made relatively arcane). If the whole application, or at least the non-cloneable parts of the application, exist in a sealed module, the rest of the application can't be patched to just skip over this code. Another option for this is a client server or oracle model where the really sensitive pieces (say, a magic algorithm for finding oil from GIS data, or a good natural language processor) are stored on vendor-controlled hardware centrally located, with only the UI executing on the end user's machine. What I'd really like is a design which accomplishes the good parts of TCPA, ensuring that when code claims to be executing in a certain form, it really is, and providing a way to guarantee this remotely -- without making it easy to implement restrictions on content copying. It would be nice to have the good parts of TCPA, and given the resistance to DRM, if security and TCPA have their fates bound, they'll probably both die an extended and painful death. I suppose the real difference between a crypto-specific module and a general purpose module is how much of the UI is within the trusted platform envelope. 
If the module is only used for handling cryptographic keys, as an addition to an insecure general purpose CPU, with no user I/O, it seems unlikely to be useful for DRM. If the entire machine is inside the envelope, it seems obviously useful for DRM, and DRM would likely be the dominant application. If only a limited user IO is included in the envelope, sufficient for user authentication and keying, and to allow the user to load initially-trusted code onto the general purpose CPU, but where the user can fully use whatever general purpose code on the general purpose CPU, even uncertified code, with the certified module, it's not really useful for DRM, but still useful for the non-DRM security applications which are the alleged purpose behind TCPA. (given that text piracy doesn't seem to be a serious commercial concern, simply keeping video and audio playback and network communications outside the TCPA envelope entirely is good enough, in practice...this way, both authentication and keying can be done in text mode, and document distribution control, privacy of records, etc. can be accomplished, provided there is ALSO the ability to do arbitrary text processing and computing outside the trusted envelope.) If it's the user's own data being protected, you don't
Re: PKI: Only Mostly Dead
I think there is even less I than most people suspect. I've recently taken to some manual sampling of SSL domain name server certificates ... and finding certificates that have expired ... but are being accepted by several browsers that i've tested with (no complaints or fault indications). there was a thread in another forum where I observed that back when originally working on this payment/ecommerce thing for this small client/server startup that had invented these things called SSL HTTPS ... my wife and I had to go around to various certificate manufacturers with regard to some due diligence activity. I think w/o exception they all made some comment about the PK being technical ... and the I being service ... and providing service is an extremely hard thing to do (and they hadn't anticipated how really hard it is). some past ssl domain name certificate threads: http://www.garlic.com/~lynn/subtopic.html#sslcerts As i've observed previously, there are a number of ways that the technical stuff for PK can be done w/o it having to equate to (capital) PKI ... some recent threads on this subject: http://www.garlic.com/~lynn/aepay10.htm#31 some certification authentication landscape summary from recent threads http://www.garlic.com/~lynn/aepay10.htm#32 some certification authentication landscape summary from recent threads http://www.garlic.com/~lynn/aepay10.htm#34 some certification authentication landscape summary from recent threads http://www.garlic.com/~lynn/aepay10.htm#35 some certification authentication landscape summary from recent threads http://www.garlic.com/~lynn/aadsm11.htm#18 IBM alternative to PKI? http://www.garlic.com/~lynn/aadsm11.htm#19 IBM alternative to PKI? http://www.garlic.com/~lynn/aadsm11.htm#20 IBM alternative to PKI? http://www.garlic.com/~lynn/aadsm11.htm#21 IBM alternative to PKI? http://www.garlic.com/~lynn/aadsm11.htm#22 IBM alternative to PKI? http://www.garlic.com/~lynn/aadsm11.htm#23 Proxy PKI. Was: IBM alternative to PKI? 
http://www.garlic.com/~lynn/aadsm11.htm#24 Proxy PKI. Was: IBM alternative to PKI? http://www.garlic.com/~lynn/aadsm11.htm#25 Proxy PKI. Was: IBM alternative to PKI? http://www.garlic.com/~lynn/aadsm11.htm#26 Proxy PKI http://www.garlic.com/~lynn/aadsm11.htm#27 Proxy PKI http://www.garlic.com/~lynn/aadsm11.htm#30 Proposal: A replacement for 3D Secure http://www.garlic.com/~lynn/aadsm11.htm#32 ALARMED ... Only Mostly Dead ... RIP PKI http://www.garlic.com/~lynn/aadsm11.htm#33 ALARMED ... Only Mostly Dead ... RIP PKI http://www.garlic.com/~lynn/aadsm11.htm#34 ALARMED ... Only Mostly Dead ... RIP PKI http://www.garlic.com/~lynn/aadsm11.htm#35 ALARMED ... Only Mostly Dead ... RIP PKI .. addenda http://www.garlic.com/~lynn/aadsm11.htm#36 ALARMED ... Only Mostly Dead ... RIP PKI .. addenda II http://www.garlic.com/~lynn/aadsm11.htm#37 ALARMED ... Only Mostly Dead ... RIP PKI http://www.garlic.com/~lynn/aadsm11.htm#38 ALARMED ... Only Mostly Dead ... RIP PKI ... part II http://www.garlic.com/~lynn/aadsm11.htm#39 ALARMED ... Only Mostly Dead ... RIP PKI .. addenda http://www.garlic.com/~lynn/aadsm11.htm#40 ALARMED ... Only Mostly Dead ... RIP PKI ... part II http://www.garlic.com/~lynn/aadsm11.htm#42 ALARMED ... Only Mostly Dead ... RIP PKI ... part III [EMAIL PROTECTED] at 6/1/2002 2:18am wrote: Peter Gutmann should be declared an international resource. Thank you Nobody. You should have found the e-gold in your account by now :-). Only one little thing mars this picture. PKI IS A TREMENDOUS SUCCESS WHICH IS USED EVERY DAY BY MILLIONS OF PEOPLE. Of course this is in reference to the use of public key certificates to secure ecommerce web sites. Every one of those https connections is secured by an X.509 certificate infrastructure. That's PKI. Opinion is divided on the subject -- Captain Rum, Blackadder, Potato. The use with SSL is what Anne|Lynn Wheeler refer to as certificate manufacturing (marvellous term). 
You send the CA (and let's face it, that's going to be Verisign) your name and credit card number, and get back a cert. It's just an expensive way of doing authenticated DNS lookups with a ttl of one year. Plenty of PK, precious little I. The truth is that we are surrounded by globally unique identifiers and we use them every day. URLs, email addresses, DNS host names, Freenet selection keys, ICQ numbers, MojoIDs, all of these are globally unique! [EMAIL PROTECTED] is a globally unique name; you can use that address from anywhere in the world and it will get to the same mailbox. You can play with semantics here and claim the exact opposite. All of the cases you've cited are actually examples of global distinguisher + locally unique name. For example the value 1234567890 taken in isolation could be anything from my ICQ number to my shoe size in kilo-angstroms, but if you view it as the pair { ICQ domain, locally unique number } then it makes sense (disclaimer: I have no idea whether that's either a valid ICQ number or my shoe size
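The expired-but-accepted certificates mentioned at the top of this post come down to browsers skipping a single comparison. A minimal sketch of that check, assuming the `notAfter` string format that Python's `ssl.getpeercert()` returns (the dates used are illustrative):

```python
# The one comparison a browser must make to reject an expired certificate:
# is "now" past the certificate's notAfter timestamp?
import ssl

def cert_expired(not_after: str, now_epoch: float) -> bool:
    """True if the certificate's notAfter time is in the past."""
    return now_epoch > ssl.cert_time_to_seconds(not_after)

# A certificate that expired at the end of 2003, checked in Nov 2004
# (roughly when the manual sampling described above was being done).
not_after = "Dec 31 23:59:59 2003 GMT"
print(cert_expired(not_after, 1_100_000_000.0))   # True: expired, should be refused
```

A browser that accepts such a certificate without complaint (as observed in the sampling above) is performing the PK arithmetic while skipping exactly the part of the check that carries any "I".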