Re: uTorrent overflow
Jon Ribbens <[EMAIL PROTECTED]> wrote:
> On Sat, Jun 02, 2007 at 08:15:09PM -, [EMAIL PROTECTED] wrote:
> > if [ "$X" = "y" ];then
> > telnet $victamIP $victamport
> Um, is it just me, or does this "exploit" do nothing at all?

According to the comment that is output a few lines above, you are supposed to "after you connect hold the enter key". So the claim is presumably that a large number of newlines (or rather CR/LF pairs) will do something to uTorrent. However, I have not even tried to verify it, as this "advisory" contains almost no detail (affected version, effect on uTorrent, etc.).

To the OP: If you want to be taken seriously, you should take more time to investigate the vulnerability and learn the right tools (like perl and netcat in this case) than to write silly scripts that ask for data that could just be supplied on the command line.

Kind regards,

Andreas Beck
--
Andreas Beck  http://www.bedatec.de/
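For what it's worth, the whole "hold the enter key" idea reduces to a few lines of code. A minimal sketch in Python (host and port are placeholders; whether this actually affects uTorrent is unverified, as noted above):

```python
import socket

def build_payload(count=10000):
    # "holding the enter key" in a telnet session amounts to
    # sending a stream of CR/LF pairs
    return b"\r\n" * count

def send_payload(host, port, count=10000, timeout=5):
    # connect and push the whole stream at once instead of
    # asking the user to hold down a key
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(build_payload(count))
```

The equivalent with standard tools would be something like `perl -e 'print "\r\n" x 10000' | nc host port` - no interactive script needed.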
Re: Steganos Encrypted Safe NOT so safe
[EMAIL PROTECTED] wrote:
> They boast how excellent their encryption and how uncrackable they are.

If your findings are true, it is utterly insecure - worse than what you found. Can someone confirm this vulnerability?

> Simply mount anyones .SLE file encrypted drive into the software and it
> will ask you for their password but won't let you in because it's
> encrypted.

If your findings are true, it is not encrypted, but merely access-controlled by the Steganos software. If it were encrypted - in the sense of "encrypted with the passphrase, so unusable without it" - the program would simply be unable to do something like:

> [update detects fake key and]
> after the update and it will now PUNISH you by resetting your
> encrypted drives passwords to "123" until you buy a registered copy.

This would be impossible if the passphrase played a role in the encryption.

> Stores passwords in clear text.

Yes - the key must be retrievable in some way if the password can be changed without knowledge of the prior password.

Kind regards,

Andreas Beck
--
Andreas Beck  http://www.bedatec.de/
Re: Firefox focus stealing vulnerability (possibly other browsers)
Michal Zalewski <[EMAIL PROTECTED]> wrote:
> > A proper solution would be to keep a list of files explicitly selected
> > by the user and only allow uploads of files in this list. Then even if a
> > script can manipulate the field, the browser won't upload files that
> > have not been selected by the user.
> Not necessarily that easy: notice that it is the user who enters the name
> of a target file.

Right. And in some cases, I find it annoying that you cannot preset paths for file upload boxes. I use a web-based report generator that can include pictures saved while investigating the to-be-reported issue; I work around this by displaying the right path above the file path entry box and telling users to copy it in order to quickly change to the right directory.

> Unless you want to prevent the browser from accepting any files that were
> not chosen using a visual file selector widget

Not a good idea to limit oneself to visual selectors, IMHO. It is sometimes quite convenient to just paste a known path. It may also have implications for some handicapped people.

> but in such a case, there's not much point in having a manual file path
> entry box in the first place.

Right. Thus let me suggest a new approach to the problem:

Let scripts and the form parser handle upload fields just like ordinary form fields: prefilling them with VALUE, changing them from script, and so on.

BUT: Warn the user about uploading files. Present him with a complete list of all files to be uploaded and a big warning. Make this dialog impossible to suppress via "don't show me again", and use some kind of "click reflex prevention", like greying out the "OK" button for the first few seconds.

Yes, this implies the danger that users will not verify what is to be uploaded and just click "yes", will lose track in large uploads (like photo printing services), or will simply not understand the implications. But I think it will be much simpler and more secure to implement. All you need to hook is the form-upload preparation routines.

It doesn't matter by what contrived method a form field has been filled with some filename - no need to protect it. All that needs protection then is the dialog, which must unambiguously state to the user what he is doing.

This would IMHO improve usability for some services (like allowing to preset the "My Pictures" path for photo print services, or the common path to a logfile for crash reporting) while dramatically reducing the amount of code that needs to be watched for interactions with filling out file upload forms - without reducing security.

One could even highlight files that are not commonly uploaded (i.e. not pictures, as these probably make up 90% of file upload usage) and remind the user of the platform-specific dangers of submitting specific files.

Kind regards,

Andreas Beck
--
Andreas Beck  http://www.bedatec.de/
Re: Defeating CAPTCHAs via Averaging
Lou Katz <[EMAIL PROTECTED]> wrote:
> On Wed, Jan 31, 2007 at 12:55:41AM +0100, Fred Leeflang wrote:
> > distortion isn't noise-like. So when getting the same captcha several
> > times and averaging out the noise-like distortion will not result in
> I wonder if noise averaging can be trivially defeated (or at least made
> more computationally expensive) by randomly changing the size of the
> captcha images, with or without changing the size of the 'captcha'
> characters/numbers.

No, but it can be easily defeated by changing the placement/appearance of the number(s) as well as that of the noise, or by keeping both constant over reloads.

What is exploited here is the fact that noise and payload behave differently on reload. This allows them to be separated.

Please note that averaging is a very simple technique for doing that. Depending on the type of captcha, one can use methods that converge much more quickly. The simplest would be to use the simple majority of pixel values, or the median value if slight global noise (e.g. from compression artefacts) is expected. This should yield almost perfect results with as few as 3 different images. Adding a tiny bit of spatial filtering might help as well.

Kind regards,

Andreas Beck
--
Andreas Beck  http://www.bedatec.de/
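To make the per-pixel median concrete, here is a minimal sketch in pure Python (grayscale images as nested lists of pixel values; fetching and aligning the captcha reloads is assumed to have happened already):

```python
from statistics import median

def denoise_captchas(images):
    """images: several equal-sized 2-D pixel grids of the SAME captcha,
    fetched via independent reloads. The payload is stable across
    reloads, the noise is not, so the per-pixel median votes the
    noise away - with an odd sample count, as few as 3 suffice."""
    h, w = len(images[0]), len(images[0][0])
    return [[median(img[y][x] for img in images) for x in range(w)]
            for y in range(h)]
```

With 3 samples, a noise pixel present in only one of them is always outvoted by the two clean values, which is why this converges so much faster than plain averaging.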
Re: On the Recent PGP and Truecrypt Posting
er and switching when it is ready.

> The re-encryption problem is something we take very seriously, and
> we have seriously discussed whether we should have a re-encryption
> daemon that runs in the background and works like a garbage collector,
> re-encrypting objects that need re-encrypting, based on some security
> policy describing when things will need to be re-encrypted.

This is a very nice idea, but I would rather avoid it if it can be helped, because of the inherent complexity. There isn't much to it if you are just talking about relatively small objects that can be re-encrypted in a very short timeframe. But it gets quite a bit more complicated when re-encrypting large disks or similar.

> It is a garbage collector, but one that is tied to a two-phase-commit,
> zero loss database update system. Is that cool, or is it frightening?
> Or both?

Both, of course. I tend to be frightened by cool things.

To sum it up: I think the problem boils down to determining whether you just want to change the protection scheme of a key, or whether you actually want to revoke it. (I use the term "revoke" a little loosely here. Maybe one should rather talk about "marking the key as tainted" or similar.)

This is something the user has to decide, as he has at least a bit of information that can be used to guess whether revoking is necessary. Of course, the safe option is to always revoke. However, unless the amount of data protected by the key is small, this can cause lengthy and resource-consuming operations.

I think it is a good thing to make revocation as painless as possible, to avoid the bias it might induce in users when they decide whether to revoke a key. However, unless/until there is a way to revoke a key absolutely painlessly, I am afraid we will have to leave the decision to human judgement.

Kind regards,

Andreas Beck
--
Andreas Beck  http://www.bedatec.de/
Re: Proof of concept that PGP AUTHENTICATION CAN BE BYPASSED WITHOUT PATCHING
will obscure real problems from reviewers.

> I still see this as INTEGRITY and AVAILABILITY attacks on PGP.

They are not. Moreover: if you want to talk about INTEGRITY at all, you should never ever use SFX-style .EXE files that might have been tampered with, as I explained in http://seclists.org/lists/bugtraq/2006/Apr/0519.html#start

> I do not think it is normal behavior of an encryption application to
> reveal it is passphrase location

There is no way to hide it. If you change a password on a container, the password will change, so a diff will show where it is stored.

> and I do not see bypassing the passphrase dialog-box as Feature either.

Try your "attack" on a volume that is not cloned from another volume, or on a volume that has been re-encrypted. See it fail.

I will skip commenting on the rest of the mail for obvious reasons.

Regards,

Andreas Beck
--
Andreas Beck  http://www.bedatec.de/
Re: Circumventing quarantine control in Windows 2003 and ISA 2004
Memet Anwar <[EMAIL PROTECTED]> wrote:
> [bypassing Windows 2003 Network Access Quarantine Control by manipulating
> scripts or programs used to verify the baseline]
> ==
> MSRC repeatedly stressed that according to ISA 2004 online help, the
> quarantine control 'is not a security feature'.

They are right. Client-side checks can never be a bulletproof security feature. They can be a way to remind benevolent users to adhere to the security guidelines, and a way to catch lazily written malware, but not much more. You cannot expect an infected or otherwise malicious client to answer the "are you a bad guy" question truthfully.

Your findings show that it is very easy to circumvent this particular mechanism. However, even if it were improved to check the integrity of the scripts or similar, this wouldn't help much. After all, you are only looking at network traffic from the possibly malicious client. The client must know how to generate this traffic when it is in an uncompromised state. So the only way to reliably detect that it is compromised would be if any kind of compromise destroyed something in this state that made it impossible to generate such traffic.

This is not really possible. In the worst case, put the uncompromised machine into a virtual machine and, after all checks have passed, manipulate it from the host system. Usually, however, much simpler rootkit stealth techniques will suffice. After all: who cares whether the virus definitions are current, if the scanner can't read the infected files because they are cleaned on the fly in the manipulated OS calls?

> Security feature or not, it certainly not working as many admins would
> expect.

This expectation is what needs to be corrected.

> What's the purpose of having a quarantine control, if by-design, it
> can be circumvented ? ;)

Angel on the left: To remind lazy users to adhere to the security policy.
Devil on the right: Marketing?

Kind regards,

Andreas Beck
--
Andreas Beck  http://www.bedatec.de/
Re: ADVISORY FOR IOPUS SECURE EMAIL ATTACHMENTS
[EMAIL PROTECTED] wrote:
> # ADVISORY FOR IOPUS SECURE EMAIL ATTACHMENTS
> ### Affected: iOpus Secure Email Attachments ###
> ### Link: http://www.iopus.com/freeware/secure%2Demail/ ###
> ### Type: File Encryption Tool ###
> ### Problem : Passphrase guessing, Passphrase Issue ###
> ##
> ### From iopus web site "iOpus SEA protects your data not only on its way ###
> ### across the internet, but also on the recipient's PC." THIS IS ONLY ###
> ### TRUE IF YOU DID NOT PICK SOME TYPE OF PASSWORDS. ###

It is never true.

> ### I have found a problem with the way iOpus handle the user password. ###
> ### The problem can EXPOSE your Protected encrypted file if you did not ###
> ### pay attention when you pick your password. ###

It is always exposed.

> ### 1- Create a text file with one word inside "hello" ###
> ### 2- Encrypt your text.txt file using iOpus. The out put is text.exe ###

Umm ... yeah. Great. So you send .exe files across the internet and think anything is safe, then? If you do this, you expose the data anyway. And worse: you pose a threat to any of your correspondents.

Why? Because anyone who could get a copy of the encrypted file is very possibly also able to either _replace_ it with a manipulated copy (which requires a little more than just read access), or simply send a plausible follow-up message with a "correction". In any case, he can easily coax the receiver into executing an untrusted binary - because that is what you are expecting of the receiver anyway. And you are even telling him that this is in the interest of security.

In the case where you are only interested in the file contents, you could just use an .exe infector that downloads and installs a keylogger before executing the infected binary inside. Or you can piggyback it with a screenshot grabber to view the contents as they are displayed. However, you could go even further and completely trojanize the system in question.

So basically any "self-extracting/-encrypting" scheme is unsuitable for protecting messages that are sent through insecure channels. You can use such schemes to _protect data from view_ when you can ensure message integrity by some other means - e.g. for data you carry around on a USB stick that you always keep very safe. In this case, it can protect your data if the stick is stolen. However, it cannot protect the data if somebody can _alter_ it and you cannot ensure integrity by other means.

Other than that, if somebody manages to get cryptography _this_ wrong:

> ### 3- Pick AAA as password ###
> ### 4- Encrypt the file ###
> ### 5- Double click text.exe to open it, you should see Enter Password ###
> ### 6- Now you think you need to enter AAA right ? WRONG ###
> ### Just enter A or AA and you will have access to your so called ###
> ### protected file(s). ###
> ### 7- You can try with ABCABCABCABCABC as password. To access the file ###
> ### you guessed it you DO NOT NEED To enter ALL your password :-) you ###
> ### can just enter ABC and you will have access to your protected data ###
> ### 8- Let us see if you can find what you need to enter if you have a ###
> ### password like this "ABCDEFGABCDEFGABCDEFG". I hope you got it ###
> ### You need to enter ABCDEFG. ###

I wouldn't trust him farther than I can throw his whole company building.

To be honest, I don't quite see how one can manage to make this kind of mistake and at the same time use Blowfish with "a key length of up to 448 bits". Actually, it sounds like they are doing XOR encoding with a repeating pattern (which would have exactly the properties you describe). Possibly they are doing some silly kind of key expansion, repeating the keyphrase until the key length is reached.

Kind regards,

Andreas Beck
--
Andreas Beck  http://www.bedatec.de/
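The repeating-key XOR hypothesis is easy to check against the reported symptoms. A small sketch (this is a guess at the scheme, not iOpus's actual code):

```python
from itertools import cycle

def xor_repeat(data: bytes, key: bytes) -> bytes:
    # XOR the data with the key repeated to message length -
    # the hypothesized scheme behind the reported behaviour
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

ct = xor_repeat(b"my secret data", b"ABCABCABCABCABC")

# "ABC" generates the identical keystream, so the truncated
# password opens the file - exactly steps 6 and 7 above
assert xor_repeat(ct, b"ABC") == b"my secret data"
assert xor_repeat(ct, b"AB") != b"my secret data"
```

Any password that is a repetition of a shorter string produces the same keystream as that shorter string, which is precisely the "AAA works with A" and "ABCABC... works with ABC" symptom reported.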
Re: Java script exploit
[EMAIL PROTECTED] wrote:
> Greetings and Salutations:
> I just receieved this exploit,

It is not one, as others have already mentioned.

I suppose you got it from one of the various "you received a postcard" mailings going round. It is basically a trampoline that leads through a series of compromised webservers which redirect to each other (typically 2 or 3 steps) using frames, iframes or similar javascripts (they use the same basic en-/decoder, as far as I have seen).

The last step, however (which is probably what triggered a trap on your system), is a piece of HTML that uses 3 or 4 different exploits to try to download and execute a variant of Haxdoor. The first two try to use ActiveX together with .chm bugs (not sure if I should count them as two), the next utilizes a Java applet called "SandBoxEscape.class", while the fourth tries to exploit http://www.securiteam.com/windowsntfocus/6B00L2KEKW.html

The binary that should have been downloaded was identified by virusscan.jotti.org as being
- Bitdefender: BehavesLike:Trojan.WinlogonHook (probable variant),
- NOD32: a variant of Win32/Haxdoor,
- VBA32: Trojan-Downloader.Agent.84 (probable variant).

Note that only three of the roughly a dozen scanners installed at jotti identify the malware, as it seems to have been modified. I have given a short description of what I've seen there in the German newsgroup de.comp.security.virus with MID [EMAIL PROTECTED]

> Subject: You have received a postcard! Id: 7963

Ah. Good guess.

Kind regards,

Andreas Beck
--
Andreas Beck  http://www.bedatec.de/
Re: Vulnerability in WinRAR - Phishing based
[EMAIL PROTECTED] wrote:
> Phishing through WinRAR 3.51
> Due to the build-up of WinRAR, some vital parts of the programs
> functions and url's are visible through a simple HEX editor.

This is not specific to WinRAR. It is true for almost every ordinary binary. The only exceptions are those using a compressor or obfuscation layer.

> If a user want's to buy the full version of WinRar, the user can use
> WinRAR's menu to access WinRars homepage.
> Now if the file WinRAR.exe was altered at 0009BCC0, it would be possible
> to conduct a phishing attack against the user.

If the binary is modified, you are in far worse trouble than a mere phishing attack. If you can modify a binary, you can make it do anything - like installing a keylogger that intercepts the banking data even if it is entered into the original site, or your bank's site.

The only point is that modifying some string is a bit easier than modifying functionality. However, this isn't of much value for programs that aren't run directly, but rather installed first. You simply wrap another installer around the existing installer executable (one that doesn't ask questions), install your keylogger stuff and then call the original installer. This is a generic process that only needs to be done once and can be accomplished with fairly standard tools. Even for programs that are usually executed directly without installation (like e.g. putty), a generic .EXE infector as known from viruses can be used.

> In a realistic senario, the attacker could spread the modified file(s),
> through file sharing networks or download sites.

If you run software from untrusted sources, phishing is one of your smallest problems.

> Other versions of WinRAR might be vulnerable as well.

Just about every piece of software that shows built-in external URLs to the user at some point is "vulnerable" to this. If we go to the scenario of a user running an untrusted binary, all is lost anyway.

Kind regards,

Andreas Beck
--
Andreas Beck  http://www.bedatec.de/
Obfuscating sensitive data? (was: response to tax software not encrypting tax info)
Hmmm - I originally didn't want to take part in that discussion, but we are seeing more and more "vulnerabilities" reported about sensitive information in files not being obscured by some "crypto". Let's start with the latest comments:

> What could help our users is a default simple encryption of the Tax files.

No. "Default simple encryption" means it can be broken with a "default simple algorithm". Encryption without a key is useless, as is encryption without a sufficiently well-picked key. If you can retrieve the file, brute force is always possible, so nothing but really _strong_ encryption using _strong_ keys will help. And I doubt any user careless enough to have unprotected shares, or to open C:\ to a filesharing network, would take more care when picking passwords for all those potentially sensitive files out there.

> Because not everyone using today's computers can utilized EFS or a third
> party encryption tool.

Putting some trivial encryption code into each and every application will not help. It will rather obscure the problem. Anyone with a little clue about reverse engineering will be able to break it, and it will almost always be a BORE (break once, run everywhere) scenario.

So what is the problem, and what can be done to fix it? The problem is storage of sensitive information in files that are accessible to third parties. How can we fix it?

1) Remove third-party access to the files!
===

This is IMHO the most important step, due to the weaknesses in the other methods detailed below. If this is done properly, the whole "problem" disappears. But it requires that we try to make sure software gets written in a responsible way: discouraging or even disallowing dangerous settings, warning the user in a way that really catches his attention (i.e. _NOT_ a "Press OK, if you want to do that stupid thing", but rather "Enter 'I know this is stupid.' in the textbox below.") when he is doing something dumb, and providing the software with adequate updates and patches as the need arises.

Yeah - a lot of work, but that would eliminate the problem. However, we have to be realistic: this will not happen in the near future. Especially as many users seem to have a fancy for lots of more or less useless applications of ... well ... unknown quality.

Now for the workarounds:

2) If 1) cannot be done for some reason, use _strong_ encryption to _encrypt_ the data. XORing it with "wrdlbrmft" will just make an attacker laugh, assuming he is just a bit smarter than a piece of wood. Never just obfuscate passwords using a generic key. Even if the app picks an individual key at installation time, that key has to be stored somewhere, and if you can retrieve the file, chances are you can retrieve the stored key as well.

Note that when picking an encryption algorithm, you should be aware that for such applications it should be very resistant to known-plaintext attacks, as you will often be able to retrieve parts of the plaintext (like the name of the user) from other files you got from the system. In some cases it might also be possible to mount chosen-plaintext or maybe even chosen-ciphertext attacks.

3) Strong encryption needs strong passwords. Make sure the user picks a strong one. The typical user won't do so by himself, as the success of worms guessing share passwords shows. Either get on the user's nerves by rejecting too-simple passwords, or better, have strong passwords picked for the user, e.g. by using smartcards or similar.

4) If for some reason encryption cannot be used (say, due to laws), make the user _AWARE_ that he is storing sensitive data to a file and that he should take measures to protect it.

What we IMHO should _not_ do is encourage obfuscation of data with weak algorithms, or with generic keys fed to strong algorithms. That's like just putting a nice blanket over the problem and hoping no one will look beneath it.

IMHO, obfuscating data serves only one purpose: not giving away information to someone _briefly_ _glancing_ over the file. That's o.k. to keep the sysadmin from the temptation to hit a user that picks a weak or offensive password with a wet haddock. It's also o.k. to guard a password against a coworker who happened to look over your shoulder when you opened the wrong file. But it is NOT o.k. if an attacker can retrieve the file and play around with it all day.

CU, Andy
--
= Andreas Beck | Email : <[EMAIL PROTECTED]> =
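To illustrate the known-plaintext point in 2): against a repeating-key XOR like the hypothetical "wrdlbrmft" example above, a few bytes of predictable plaintext hand the attacker the whole key. A sketch (file contents and key are made up for illustration):

```python
from itertools import cycle

def xor_repeat(data: bytes, key: bytes) -> bytes:
    # the kind of "obfuscation" discussed above: XOR with a
    # fixed, repeating key
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def recover_key(ciphertext: bytes, known_plain: bytes, keylen: int) -> bytes:
    # ciphertext[i] ^ plaintext[i] == key[i % keylen], so any
    # keylen known plaintext bytes reveal the entire key
    return bytes(c ^ p for c, p in zip(ciphertext[:keylen], known_plain))

ct = xor_repeat(b"Name: John Doe; more sensitive data", b"wrdlbrmft")
# the attacker knows the file starts with the user's name:
assert recover_key(ct, b"Name: John", 9) == b"wrdlbrmft"
```

The key length itself is easy to guess by trying small values, or by classic Kasiski-style analysis of repeats in the ciphertext - which is why this kind of scheme only stops the briefly-glancing observer.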
BDT_AV200212140001: Insecure default: Using pam_xauth for su from sh-utils package
Bedatec Security Advisory 200212140001
--

Discovered : 2002-12-08
Vendor notified : 2002-12-14 (sorry for the delay, had to check if default is still set for RH 8.0)
Author : Andreas Beck <[EMAIL PROTECTED]>
Application : su as contained in e.g. sh-utils-2.0.12-3. RedHat pam packages like e.g. pam-0.75-18.7
Severity : Insecure default could allow X session cookie stealing from root, thus gaining root privileges for a user already having unprivileged access.
Risk : Medium (root compromise, but needs interaction with root)
Vendor status: Vendor will make updated packages available shortly
Vendor statement : "Red Hat is working on updated pam_xauth packages which adds back the missing ACL functionality. These will be available shortly from http://rhn.redhat.com/errata/ and via the Red Hat Network."
Affected Versions: At least Redhat 8.0 and 7.1 are vulnerable. Supposedly all versions in between are as well. RedHat 7.0 and before are _NOT_ vulnerable.
CVE reference: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CAN-2002-1160

Overview:
-

On Redhat Linux including 8.0, PAM comes with a module pam_xauth which can forward X MIT-Magic-Cookies to newly instantiated sessions. While this is a nice feature and generally harmless in the case where an unprivileged user elevates his privileges to root using e.g. su or the various wrappers for some root-only programs, it poses a security risk for root, if root uses su in order to assume the id of a less privileged user, e.g. for troubleshooting purposes.

Details:

While checking an unrelated problem, we discovered that using su would allow the target user to connect to the running X session owned by the user that invoked su. A quick check

> becka@cupido$ su devel
> Password:
> [devel@cupido becka]$ xauth
> Using authority file /home/devel/.xauthupNGf8

revealed that su seems to forward the MIT-Magic-Cookie to the target user in a temporary .xauth file.

> [devel@cupido devel]$ ls -l /home/devel/.xauthupNGf8
> -rw-------    1 devel    devel    51 Dez  8 00:26 .xauthupNGf8

This file is owned by the target user and only readable by the target user, as it must/should be for the method to work. This behaviour causes a security risk when root uses su to become an unprivileged user for troubleshooting an account.

Possible attack scenario:
-

Write a mail to local root, stating that you have difficulties logging in - e.g. you get logged out after 5 seconds, during which you can run programs and everything, you just get logged out afterwards. This should be a strange enough description that root will probably want to verify the behaviour.

Assuming root is running an X session on the console under his normal login name, he will probably first su to root, so that he can then assume the id of the complaining user with a second su without having to supply a password. [Depending on the method of connection, a remote X server should also do.] The default entries in /etc/pam.d/su will cause the X session cookie to be forwarded first to root and then to the user whose "problem" is to be investigated.

Right after sending the mail, said user places a process in memory that waits for the .xauth file to appear. Only a very careful root would check for running processes, and even then, he is not likely to shut down something like "longrunning_calculation" that is niced up and all. The process grabs the contents of the .xauth file and can then connect to the X server, as it knows the cookie.

Though this is annoying by itself (the user can see what is on the root desktop, send fake events and thus run programs as the user who started the desktop, etc.), in this scenario it is much worse, as we know there is a terminal open that has just su'ed to the current user, very probably from _root_. Just send it "exit" and then execute whatever you like. This way you even reproduce the problem you told root about. O.K. - he might get suspicious now, but the damage is done.
Some webpages suggest that pam_xauth can be customized to only forward cookies under certain conditions. However, neither the manpage for su nor the one for pam_xauth mentions how to do that. Moreover, the su manpage does not state that X forwarding is on by default.

Proof of concept/How to reproduce:
--

Log in as an unprivileged user ("victim"). Start up X if necessary. Get root using su, then assume the ID of another unprivileged user ("attacker") using su. Log in as "attacker" remotely or from a console. Locate the .xauth file. Give
Re: RAZOR advisory: Linux util-linux chfn local root vulnerability
Andrew Pimlott <[EMAIL PROTECTED]> wrote:
> > > If he is smart, he will check whether the file is open (eg with fuser)
> > Not really. The file does not have to be open to be present in the system.
> > It is prefectly possible to leave a dangling root-owned file several
> > times,
> Correct, but: the admin should still verify that it is not open
> before deleting it (in his cron job).

As long as there is no atomic "check-if-file-is-open-and-if-not-delete-it" operation, this just makes exploitation harder by introducing another race condition.

CU, Andy
--
= Andreas Beck | Email : <[EMAIL PROTECTED]> =
Re: VNC authentication weakness
> VNC uses a DES-encrypted challenge-response system to avoid passing
> passwords over the wire in plaintext.
> However, it seems that a weakness in the way the challenge is generated by
> some servers would make this useless.

This is a generic problem common to all challenge-response systems. If the same challenge can be issued multiple times with a reasonable probability - or is even timing-based, as described below - you can just forget about the "security" it adds. O.K. - the attacker doesn't learn the plaintext passwords, but he can still log in by sniffing.

> Against tightvnc-1.2.1_unixsrc, you'll see output like
> $ python pvc.py somehost:1
> 4b24fbab355452b55729d630fcf73d43
> b3acdf3fab422b7aa49b8d786f93def3
> b3acdf3fab422b7aa49b8d786f93def3
> b3acdf3fab422b7aa49b8d786f93def3
> b3acdf3fab422b7aa49b8d786f93def3

*sigh* This looks like the challenge is timing-based. Depending on how the server works, it should incorporate either an extra counter or something like the PID. As long as you cannot go through the PID space within a second, this should be fair enough. Another possibility would be to iterate a hash function and use a hashed version of its output as the challenge. However, care should be taken when initializing it, to avoid producing the same sequence at every server restart.

> WinVNC version 3.3.3R9 will display output more like
> 91ff701f7dce8c6eebbc6062ffebcc6a
> Server declined connection
> Server declined connection

*sigh* Being too stupid to do it right ... but at least they have done _something_ about it. Possibly they are rate-limiting only the IP you were trying from, which might be a good idea.

> On systems with /dev/urandom, the following function will give challenge
> strings which should be immune to the problems discussed:

DON'T do this. It will deplete the random pool for no good reason.

A challenge does not need to be truly random, unless there are vulnerabilities in the hash function used that allow better analysis if the challenge has a specific structure. A challenge only needs to be _different_ each time. Using truly random data of sufficient length will of course yield different data with very high probability, but is IMHO overkill.

Note that at least some implementations of /dev/urandom will start out by giving away _all_ of the entropy pool and then continue with a cryptographically strong pseudo-random generator when the pool is empty. If you drain the random pool for simple stuff like this, it will not be filled enough for really important matters like key generation.

CU, Andy
--
= Andreas Beck | Email : <[EMAIL PROTECTED]> =
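The counter-plus-iterated-hash idea could look like this sketch (SHA-256 is used purely for illustration; any decent hash works, since the point is uniqueness, not secrecy or randomness):

```python
import hashlib
import os
import time

class ChallengeGenerator:
    def __init__(self):
        # one-off seed so the sequence differs between server
        # restarts; PID and start time distinguish instances
        # without touching the kernel entropy pool at all
        seed = repr((os.getpid(), time.time())).encode()
        self._state = hashlib.sha256(seed).digest()
        self._counter = 0

    def next_challenge(self) -> bytes:
        # a challenge only needs to be *different* each time:
        # chain the hash over an ever-increasing counter
        self._counter += 1
        self._state = hashlib.sha256(
            self._state + self._counter.to_bytes(8, "big")).digest()
        return self._state[:16]
```

Because the counter never repeats within a run and the seed differs across runs, two connection attempts in the same second still get distinct challenges, which defeats the replay attack described above.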
Re: remote DoS in Mozilla 1.0
Tom <[EMAIL PROTECTED]> wrote:
> > Is this really a mozilla bug?
> It's a bug in X that becomes remote-exploitable through mozilla.

Ack. If X can be crashed by an application, X is at fault. We all know that there are "legal" ways to make X unusable (xlock, e.g.), but actually crashing the X server should never happen, as a faulty application may cause data loss in correct applications this way. Not what we expect in a Unix environment.

> > (a) Fix every app to disallow font sizes bigger then
> > (b) Fix XFS to return an error code to the calling application
> > when requested font size is greater then configured
> > Personally i would go for b.
> Personally, I would go for both, with a limitation on a, namely that
> apps that accept remote data (i.e. mozilla) should definitely do some
> checking on that data before handing it to the local system (i.e. X).

Right. Applications that accept untrusted data have a special responsibility to canonicalize it in order to protect the underlying system from possible side effects - no matter whether the underlying system _should_ be able to cope with it. That does not mean, however, that the bug in the lower layers may remain there.

Also note that - as I already reported to Tom in PM - not all X servers are affected. I tested the example sites using Mozilla 1.0RC2 on an XGGI server, which is based on rather old X-consortium code IIRC, and the expected effects did not show up.

CU, Andy
--
Andreas Beck | Email : <[EMAIL PROTECTED]>