[Full-Disclosure] Re: Firespoofing [Firefox 1.0]
On Tue, 11 Jan 2005, mikx wrote:

> The bug is confirmed but currently unfixed (open for more than 3 months).
> As a partial workaround set dom.disable_window_flip to true in
> about:config.

Setting most of dom.disable_window_open_feature.* to true (and making it impossible to remove browser decorations from browser windows) is a pretty efficient (even if not 100% bullet-proof) way to thwart this kind of attack, as well as other GUI spoofing attacks.

--Pavel Kankovsky aka Peak [ Boycott Microsoft--http://www.vcnet.com/bms ]
Resistance is futile. Open your source code and prepare for assimilation.

___
Full-Disclosure - We believe in it.
Charter: http://lists.netsys.com/full-disclosure-charter.html
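For reference, a sketch of how those workarounds might look in a user.js file. The pref names below are from Mozilla/Firefox of that era and the list is illustrative, not exhaustive; verify each name against your own about:config before relying on it.

```js
// Partial workarounds against window spoofing (sketch; check pref names
// in about:config -- this is not an exhaustive list).
user_pref("dom.disable_window_flip", true);
// Keep pages from hiding browser decorations in windows they open:
user_pref("dom.disable_window_open_feature.titlebar", true);
user_pref("dom.disable_window_open_feature.toolbar", true);
user_pref("dom.disable_window_open_feature.location", true);
user_pref("dom.disable_window_open_feature.menubar", true);
user_pref("dom.disable_window_open_feature.status", true);
```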
[Full-Disclosure] Re: Linux kernel scm_send local DoS
On Tue, 14 Dec 2004, Paul Starzetz wrote:

> The Linux kernel provides a powerful socket API to user applications.
> Among other functions, sockets provide a universal way for IPC and
> user-kernel communication. The socket layer uses several logical sublayers.
> One of the layers, the so-called auxiliary message layer (or scm layer),
> augments the socket API with a universal user-kernel message passing
> capability (see recvfrom(2) for more details on auxiliary messages).

More nasties might be lurking nearby (at least in 2.4):

- additional, almost identical copies of the cmsg parsing code appear in ip_cmsg_send() (net/ipv4/ip_sockglue.c) and datagram_send_ctl() (net/ipv6/datagram.c)
- sys_sendmsg() (net/socket.c) is willing to allocate almost arbitrarily large blocks of kernel memory

--Pavel Kankovsky aka Peak
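For readers unfamiliar with the scm layer being discussed: this is the user-space face of the auxiliary-message API, sketched here by passing a file descriptor across a socketpair with SCM_RIGHTS. A minimal sketch with error handling reduced to asserts, not hardened code.

```c
/* Sketch: the "auxiliary message" (cmsg/scm) API as seen from user space:
 * pass a file descriptor across a socketpair with SCM_RIGHTS. */
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

static void send_fd(int sock, int fd)
{
    char dummy = 'x';
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    /* union guarantees the alignment the CMSG_* macros expect */
    union { struct cmsghdr align; char buf[CMSG_SPACE(sizeof(int))]; } u;
    struct msghdr msg = { 0 };
    struct cmsghdr *cmsg;

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof(u.buf);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    assert(sendmsg(sock, &msg, 0) == 1);
}

static int recv_fd(int sock)
{
    char dummy;
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    union { struct cmsghdr align; char buf[CMSG_SPACE(sizeof(int))]; } u;
    struct msghdr msg = { 0 };
    struct cmsghdr *cmsg;
    int fd = -1;

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof(u.buf);

    assert(recvmsg(sock, &msg, 0) == 1);
    cmsg = CMSG_FIRSTHDR(&msg);
    assert(cmsg && cmsg->cmsg_type == SCM_RIGHTS);
    memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
    return fd;
}
```

The kernel-side parsers mentioned in the advisory walk the same cmsghdr chain, which is exactly where the length-arithmetic bugs live.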
Re: [Full-Disclosure] I'm calling for LycosEU heads and team to resign or be sacked
On Fri, 3 Dec 2004, n3td3v wrote:

> It is not and never will be an acceptable and effective way to beat spam
> or any other misuse of the internet. [...] Spammers and hax0rs will not
> allow Lycos EU to build its bot network of screensavers, if and when the
> site comes back online again.

Why would they bother to "not allow" Lycos EU to do it if it was not an effective way to harm their so-called business? They bothered, ergo it must have had the potential to harm them. Of course it was pretty stupid to try to fight those bastards using a system with a single point of failure (in both the technical and the legal sense).

> The screensaver can't be allowed to be a socially acceptable way to solve
> any internet based problem.

Desperate situations demand desperate measures. The spammers are *already* DDoSing us (*). And it gets worse every day. Retaliation might be questionable from the ethical point of view but it is one of the last weapons left in our arsenal.

(*) For instance, one of our servers was joe-jobbed in June. The poor machine was unable to handle the extra traffic (400-500 mails/hour) and kept crashing until I blacklisted most of the zombies in a rather brutal way (the blacklist consists of several /8 and tens of /16 blocks!). It reduced the traffic to an acceptable level (tens/hour) but they still have not given up. They have been joe-jobbing one machine for five months without interruption! You have to admire such persistence!

--Pavel Kankovsky aka Peak
Re: [Full-Disclosure] Time Expiry Alogorithm??
On Mon, 22 Nov 2004, Georgi Guninski wrote:

> would prefer to keep my secrets encrypted with algorithm whose breaking
> requires *provable* average runtime x^4242 or even x^42 instead of
> *suspected runtime* 2^(x/4). (due to lameness the previous statement may
> be incorrect but hope the idea is clear). afaik crypto algorithms don't
> exist with provable average breaking time in suitable P.

Provable complexity is a rather scarce commodity in the area of cryptography. Yes, there are tons of proofs out there but most of them are based on *unproven* conjectures about the complexity of certain basic problems (the RSA problem, discrete logarithm, etc.), therefore the best thing we get is provable *relative* complexity. Most of cryptography is black magic (I wouldn't say that if I hadn't heard similar claims from true cryptologists... <g>).

Of course, you can always use the Vernam cipher when you need something provably secure. :)

--Pavel Kankovsky aka Peak
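The Vernam cipher mentioned above is simple enough to sketch in a few lines: XOR with a truly random, never-reused key of the same length as the message gives information-theoretic secrecy. The hard part in practice is key generation and distribution, which this sketch deliberately ignores.

```c
/* One-time pad (Vernam cipher) sketch.  Secrecy holds only if the key is
 * truly random, as long as the message, and never reused. */
#include <stddef.h>

static void vernam(unsigned char *out, const unsigned char *in,
                   const unsigned char *key, size_t len)
{
    size_t i;
    for (i = 0; i < len; i++)
        out[i] = in[i] ^ key[i];   /* same operation encrypts and decrypts */
}
```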
Re: [Full-Disclosure] Time Expiry Alogorithm??
On Fri, 19 Nov 2004, Anders Langworthy wrote:

> > If a certain deterministic computation (e.g. decryption) can be made in
> > time T, then it can be made in any time T' > T.
>
> This is true for breaking a cipher by brute force, but it doesn't account
> for (stop looking at me) somehow incorporating a timestamp into the
> encryption scheme to prevent 'legit' decryption after a certain time.

As you yourself pointed out, the timestamp has to be some kind of unforgeable trusted timestamp. Such a scheme is not a deterministic computation from the message recipient's point of view because the other party behaves nondeterministically (in the sense that the recipient cannot predict its exact future behaviour using known information only).

Anyway, a replay attack (record the trusted timestamp and reuse it later) is still possible. It's even worse when generic timestamps not dependent on the message are used, because the enemy can gather and record timestamps in advance. Therefore we need special timestamps for every encrypted message.

And this is the point where the timestamp part becomes superfluous: we can simply break the decryption key into two parts (neither of them sufficient to decrypt the message alone), give one part to the recipient, and the other part to a trusted third party guaranteeing 1. to give it to the recipient when it asks before the expiration time, 2. to discard it and not to give it to anyone after the expiration time.

We can use any conventional encryption because we are unable to stop the recipient from recording all the inputs (or even the output) and repeating the decryption... unless the recipient decrypts and views the message on *the sender's* TCB (rather than his/her own computer), but there is little need to invent new complex cryptographic schemes if the sender's TCB is used, because the sender's TCB can implement arbitrary access control of the sender's choice.

> I'm going to disagree as politely as possible.
> As an example, using RSA with 1024 bit keys allows for around 10^150
> possible primes. Compare this to the 10^70-some atoms in the known
> universe to see how disgustingly big that number is. Cracking this
> encryption scheme by searching the keyspace is laughable.

There are many things that can go wrong: gradual improvement of factorization algorithms (very likely, IMHO) can erode the strength of shorter keys, a major breakthrough (quantum computing?) can kill RSA with one mighty blow, the PRNG used to generate your keys can be weaker than expected...

> Mathematically, this is a very remote possibility, as factoring primes is
> probably an NP problem, and P is probably not NP. Neither of these has
> been proven, however.

According to my vague recollection of what I heard from people more skilled in complexity theory, P != NP implies the existence of an infinite scale of complexity classes between P and NP, and factorization (of composite numbers of course; factorization of primes is trivial... unless you are Bill Gates (*)) is suspected to represent one of those classes more complex than P but less complex than NP-complete.

(*) Bill Gates, The Road Ahead, p. 265: The obvious mathematical breakthrough [to break modern encryption] would be development of an easy way to factor large prime numbers.

> Using larger keys will still provide a measure of security.

Not for ciphertexts already encrypted with shorter keys.

--Pavel Kankovsky aka Peak
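The key-splitting scheme described earlier in this thread (one share to the recipient, one to the trusted third party) can be sketched in a few lines of XOR arithmetic: neither share alone reveals anything about the key, and XORing both recovers it. A real implementation must draw the random share from a cryptographic RNG; rand() below is a placeholder for illustration only.

```c
/* Two-party XOR key splitting sketch.  share1 is random, share2 = key XOR
 * share1; each share alone is statistically independent of the key. */
#include <stddef.h>
#include <stdlib.h>

static void split_key(const unsigned char *key, size_t len,
                      unsigned char *share1, unsigned char *share2)
{
    size_t i;
    for (i = 0; i < len; i++) {
        share1[i] = (unsigned char)rand();  /* placeholder, NOT crypto-safe */
        share2[i] = key[i] ^ share1[i];
    }
}

static void join_key(unsigned char *key, size_t len,
                     const unsigned char *share1, const unsigned char *share2)
{
    size_t i;
    for (i = 0; i < len; i++)
        key[i] = share1[i] ^ share2[i];
}
```

Expiration then reduces to a policy promise by the third party (discard share2 after the deadline), which is exactly why the scheme cannot stop a recipient who decrypted once and saved the plaintext.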
Re: [Full-Disclosure] Time Expiry Alogorithm??
On Fri, 19 Nov 2004, Gautam R. Singh wrote:

> I was just wondering is there any encryption algorithm which expires with
> time. For example an email message maybe decrypted within 48 hours of its
> delivery otherwise it becomes useless or can't be decrypted with the
> original key

No. If a certain deterministic computation (e.g. decryption) can be made in time T, then it can be made in any time T' > T. Even if the computation needs cooperation by your computer, which refuses to cooperate when the time limit expires (e.g. the recipient of the message needs to ask you for an extra key), you can always do the computation once and save the result (e.g. the plaintext). Well, I admit, this holds unless your computer has been possessed by Palladium (and is not *your* computer anymore).

On the other hand, the power of hardware as well as the knowledge of cryptanalysis increases as time passes, ergo any cipher is going to expire... in the sense that someone will become able to break it and recover the plaintext without the (a priori) knowledge of the encryption key.

--Pavel Kankovsky aka Peak
[Full-Disclosure] Re: Linux ELF loader vulnerabilities
On Wed, 10 Nov 2004, Paul Starzetz wrote:

> One of the Linux format loaders is the ELF (Executable and Linkable
> Format) loader. Nowadays ELF is the standard format for Linux binaries
> besides the a.out binary format, which is not used in practice anymore.

BTW: the a.out loader appears to be still full of integer overflow bugs.

> 1) The Linux man pages state that a read(2) can return less than the
> requested number of bytes, even zero. It is not clear how this can happen
> while reading a disk file (in contrast to network sockets), however here
> are some thoughts:

It can happen when the end of file is encountered. One might exploit this to create an oracle giving quizzical answers about unused kernel memory: you run your own malformed ELF binary that makes the kernel allocate N bytes, read M < N bytes, and interpret all N bytes including the uninitialized N - M bytes as ELF phdr entries. The fact that the kernel ignores mmap() errors could make it interesting.

On an NFS volume mounted with the intr option, it can happen when the process receives a signal in the middle of read(). I don't dare to say whether it can happen in the middle of a page.

> - most of the standard setuid binaries on a 'normal' i386 Linux
> installation have ELF headers stored below the 4096th byte, therefore they
> are probably not exploitable on i386 architecture.

I'd say that binaries with essential headers (phdr, interp) not fitting into the first page of the executable file are extremely rare on any platform. Afaik all standard tools put phdr right after ehdr, and interp right after phdr. The standard ehdr size is 52 bytes (64 for a 64-bit arch), one standard phdr entry is 32 bytes (56 for 64-bit), and there is only a handful of entries (<= 10) in an ordinary phdr. Interp is quite short, say 50 bytes. This makes < 1000 bytes total. Have you found any naturally occurring binary with big headers?

> 2) This bug can lead to an incorrectly mmaped binary image in the memory.
> There are various reasons why a mmap() call can fail: [...]
> Security implications in the case of a setuid binary are quite obvious: we
> may end up with a binary without the .text or .bss section or with those
> sections shifted (in the case they are not 'fixed' sections).

ET_EXEC files (ordinary binaries) have a fixed mapping. ET_DYN (ld.so or relocatable binaries; and dynamic libraries, but they are irrelevant in this context) get MAP_FIXED after the first segment has been mapped successfully. But there's a catch: ld.so is loaded by load_elf_interp(), which stops after the first mmap() failure. Its return value is wrong, but the best thing we can get with ET_EXEC binaries (both with and without a dynamic linker) is an unmapped segment. A missing segment is likely to kill the program before it can do any harm. ET_DYN binaries may, on the other hand, be more exploitable if their memory layout is messed up the right way. (Isn't it ironic that some people use ET_DYN binaries in order to be able to randomize the process address space and make their systems more resistant?)

> 3) This bug is similar to 2) however the code incorrectly returns the
> kernel_read status to the calling function on mmap failure which will
> assume that the program interpreter has been loaded. That means that the
> kernel will start the execution of the binary file itself instead of
> calling the program interpreter (linker) that has to finish the binary
> loading from user space.

As far as I can tell the kernel puts the result of kernel_read(), i.e. the interpreter's phdr size (= page size), into elf_entry and initializes the process' instruction pointer to elf_entry. The inevitable consequence is that the process jumps into the large black hole at the beginning of its address space (assuming the standard Linux memory layout) and dies before it can do anything harmful. Do I miss anything?

> 4) This bug leads to internal kernel file system functions being called
> with an argument string exceeding the maximum path size in length
> (PATH_MAX). It is not clear if this condition is exploitable.
This is funny. There used to be

    elf_interpreter[elf_ppnt->p_filesz - 1] = 0;

there, but it was optimized out between 2.2 and 2.4.

--Pavel Kankovsky aka Peak
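The header-size estimate earlier in this post (ehdr + phdr + interp well under one page) can be checked against the structure sizes that <elf.h> defines; the "10 entries, ~50-byte interp" figures are the rough assumptions from the text, not measured values.

```c
/* Back-of-the-envelope check of the header-size estimate: one ehdr, ten
 * phdr entries and a ~50-byte interpreter path, for 32- and 64-bit ELF. */
#include <elf.h>
#include <stddef.h>

static size_t elf_headers_size(int bits)
{
    if (bits == 64)
        return sizeof(Elf64_Ehdr) + 10 * sizeof(Elf64_Phdr) + 50;
    return sizeof(Elf32_Ehdr) + 10 * sizeof(Elf32_Phdr) + 50;
}
```

Both totals come out far below the 4096-byte first page, which is why binaries whose essential headers spill past the first page are so rare.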
Re: [Full-Disclosure] Senior M$ member says stop using passwords completely!
On Sat, 16 Oct 2004, Frank Knobbe wrote:

> It's a nice recommendation of MS to make (to use long passphrases instead
> of passwords). But I don't consider 14 chars a passphrase. Perhaps they
> should enable more/all password components to handle much longer
> passwords/phrases.

A passphrase consisting of 7 words with 12 bits of entropy per word is as guessable as a password with 14 characters and 6 bits of entropy per character. You get 84 bits of total entropy in both cases. The only advantage of passphrases is that lusers might find long random sequences of words easier to remember than long random sequences of characters.

(But wait: 12 bits of entropy per word--this is equivalent to a uniform choice of one word out of 4096. 4 thousand? That might exceed an average luser's vocabulary by an order of magnitude! ;)

--Pavel Kankovsky aka Peak
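The arithmetic above is just length times log2(alphabet size); for power-of-two alphabets it is exact integer arithmetic, sketched here:

```c
/* Entropy of a uniformly chosen sequence: symbols * log2(alphabet).
 * The loop computes an exact integer log2 for power-of-two alphabets. */
static int entropy_bits(int symbols, unsigned alphabet)
{
    int bits = 0;
    while (alphabet > 1) {
        alphabet >>= 1;
        bits++;
    }
    return symbols * bits;
}
```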
Re: [Full-Disclosure] RE: Disclosure policy in Re: RealPlayer vulnerabilities
On Fri, 8 Oct 2004, Martin Viktora wrote:

> I truly believe that vulnerability disclosure should follow these steps:

0. (The primordial sin) The vulnerable product is released and all information about the vulnerability is made available *by the vendor itself* to anyone with enough competence, free resources, motivation, and a copy of the product. This is conditio sine qua non. The rest of the story is nothing but deobfuscation of that information.

> Second, you say that vendors must work much harder at reducing patch
> development time and I cannot agree with you more, especially after what I
> stated above.

Vendors must work much harder to avoid releasing vulnerable code in the first place. No vulnerabilities--no 0-days, no disclosures, no incidents, no need to hurry to install security patches. Or, at least, they themselves should proactively find and fix vulnerabilities in their own products. Isn't it absurd to wait until someone else does their work (security QA) for them and even to expect the other party to follow their standards ("responsible disclosure")?

--Pavel Kankovsky aka Peak
Re: [Full-Disclosure] Scandal: IT Security firm hires the author of Sasser worm
On Mon, 20 Sep 2004, Vincent Archer wrote:

> He has also demonstrated his absolute lack of ethical restraint, [...]

This makes him a perfect employee for any modern business because he won't make trouble when his employer lies to its customers and sells crappy products and services to them. :P

--Pavel Kankovsky aka Peak
Re: [Full-Disclosure] Linux kernel file offset pointer races
On Wed, 4 Aug 2004, Andrew Farmer wrote:

> Furthermore, mtrr_read doesn't seem to exist anywhere in the Linux kernel,
> at least not by that name. The function in question would probably exist
> in linux/arch/i386/kernel/cpu/mtrr/if.c, but there's nothing of the sort
> in there. Heck, the kernel code shown isn't even VALID.

The kernel code shown is from arch/i386/kernel/mtrr.c in 2.4. 2.6 is different, but the race between read()/write() and llseek() (or even another read()/write() on the same fd (*)) is still possible. I don't know whether it is exploitable on 2.6 but afaik it violates POSIX (see my post to LKML: http://www.uwsg.iu.edu/hypermail/linux/kernel/0408.0/0925.html), ergo it should be fixed.

(*) A write()-write() race on the same inode using generic_file_write() is not possible because they are serialized by inode->i_sem.

--Pavel Kankovsky aka Peak
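A user-space illustration of why the race exists at all: read(2) goes through the file position shared via the fd (the racy f_pos), while pread(2) takes an explicit offset and leaves the shared position untouched, so threads sharing an fd do not race on the offset. This is merely the user-space analogue of the problem, not the kernel fix.

```c
/* pread(2) reads at an explicit offset and does not advance f_pos,
 * sidestepping offset races between users of the same fd. */
#include <sys/types.h>
#include <unistd.h>

static ssize_t read_at(int fd, void *buf, size_t len, off_t off)
{
    return pread(fd, buf, len, off);   /* shared file position untouched */
}
```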
Re: [Full-Disclosure] Exploits in websites due to buggy input validation where mozilla is at fault as well as the website.
On Wed, 14 Jul 2004, Seth Alan Woolley wrote:

> If the topic of exploiting browsers to gain unauthorized access to
> websites with buggy input validation is back in vogue, here's a strange
> situation for you that _only_ works in mozilla-based browsers:
> http://bugzilla.mozilla.org/show_bug.cgi?id=226495

See http://www.w3.org/TR/html401/appendix/notes.html#h-B.3.7 (and SHORTTAG ON in http://www.w3.org/TR/html401/sgml/sgmldecl.html)

    <div><script src=indexvuln.js</div>

should be interpreted as

    <div><script src=indexvuln.js></script></div>

The W3 HTML validator interprets it this way (complaining about the missing </script>).

There are two questions: 1. Should Mozilla support this bizarre esoteric feature of HTML? (in fact, this is a bizarre esoteric feature of SGML) 2. Should Mozilla mangle the source when you view it? I believe the answer is no in both cases. Ad 1. That support should be completely eliminated or at least made configurable and disabled by default. Ad 2. I really hate it. It's like MSIE turning \'s into /'s in URLs.

> If you read the comments on the reported bug, they seemed to fail to
> understand the bug and how easy it would be to fix while maintaining
> backwards compatibility. Then they resolved it duplicated on me when it
> wasn't the same bug as the other bug, essentially keeping it quiet.

Excuse me? As far as I can tell it is the same problem. The only difference is the fact that you demonstrated possible security consequences of it.

> Lots of perl and php scripts exist out there that filter for the regular
> expression '<.*>' matching only whole tags instead of '[<>]' which
> matches either end of a tag.

The mistake made by those scripts is obvious: they attempt to deny bad things and allow everything else, rather than allow known good things (i.e. well-formed documents in some harmless subset of (X)HTML) and deny everything else.

--Pavel Kankovsky aka Peak
Re: [Full-Disclosure] Erasing a hard disk easily
On Tue, 13 Jul 2004, Aditya, ALD [ Aditya Lalit Deshmukh ] wrote:

> is the addition of /dev/full sufficient ie /dev/zero alternated by
> /dev/full should do the trick ? ie write zeros and ones on the disk,

/dev/full is full of zeroes... like /dev/zero (when opened for reading).

--Pavel Kankovsky aka Peak
Re: [Full-Disclosure] Is Mozilla's patch enough?
On Mon, 12 Jul 2004, Aviv Raff wrote:

> As you may already know, Mozilla's patch for the shell protocol security
> issue is merely a global configuration change. But is it enough?

No. As someone has already pointed out, Mozilla should whitelist safe external protocols rather than blacklist unsafe external protocols.

> If an attacker has file write access to the user's default profile
> directory, or somehow manages to update/create the file user.js (or even
> worse - mozilla.cfg) he can override the patch's configuration change,
> and enable the shell protocol handler again.

The user has already lost. Game over. An attacker can exploit the ability to modify the user's configuration in many different ways, e.g. redirect the browser to a proxy under the attacker's control, make Mozilla use a trojanized chrome or a trojanized Java plugin, etc.

--Pavel Kankovsky aka Peak
Re: [Full-Disclosure] software burning cpu or mobo ?
On Fri, 2 Jul 2004, Georgi Guninski wrote:

> it is strange that mandrake damages some cdrom drives [1] and lm_sensors
> damages some thinkpads [2] without any intention of doing so.

AFAIK in both cases it is an accidental EEPROM/firmware corruption rather than real physical damage.

--Pavel Kankovsky aka Peak
RE: SUPER SPOOF DELUXE Re: [Full-Disclosure] Microsoft and Security
On Thu, 1 Jul 2004, Thor Larholm wrote:

> It has always been standard practice that you can change, but not read,
> the location of any window object to a site from the same protocol and
> security zone. A frame is a window object and all window objects are
> safely exposed because they by themselves do not reveal any information
> about the site inside the frame. You can get a handle of any window
> object to any depth because the frames collection is also safely exposed.
> This does not give you any kind of access to the document object inside,
> which would be necessary for any kind of code injection or cookie theft.

If a script from site A can replace the contents of a frame within a document from site B, then site A is able to violate the *integrity* of B's contents. This is unacceptable.

Indeed, a cuckoo's frame from A would be (should be) unable to inject code into documents from site B or steal its cookies. But it could masquerade as a genuine frame from B and fool the user. Imagine a login frame on site B being replaced by a visually indistinguishable frame from site A. You type your password (assuming you are entering it into a form from B), press enter and boom! your secret password is sent to A! Do you always check the URL of any frame you interact with? Do you expect an average user to do that?

And of course, the requirement that A and B 1. use the same protocol and 2. are in the same security zone is snake oil. Ad 1. it is trivial for an attacker to set up an HTTPS server in order to attack users of another HTTPS server. Ad 2. there are only four or so different zones in MSIE, ergo in most cases a good site B will share the same zone with a large number of potential candidates for an enemy site A.

--Pavel Kankovsky aka Peak
Re: [Full-Disclosure] Evidence of a ISC being hacked?
On Thu, 24 Jun 2004 [EMAIL PROTECTED] wrote:

> It's easier to just #define the critter than to re-re-invent the C code
> for vsnprintf() (which isn't always trivial, as your vsnprintf() has to
> play nice with the vendor's stdio - this can be .. umm... interesting if
> the innards of the vendor stdio are more bizarre than usual...

vsnprintf() does not have to play nice with stdio. It does not have to play with stdio at all. You don't need to mess with stdio in order to stuff some characters into an array.

> Go ahead - go and re-write a vsnprintf, and compare that to the time it
> takes to do the #define

It is rather easy as long as all you need are the common string and integer directives. Indeed, floats are tricky. Exotic C99 is even more tricky. But I think the set of printf features required by dhcpd and similar programs is (or should be) pretty small.

--Pavel Kankovsky aka Peak
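To make the point concrete, here is a sketch of such a stdio-free vsnprintf() work-alike covering only %s, %d and %%, roughly the subset a daemon's log messages need. It mimics the C99 contract (NUL-terminate when size > 0, return the length that would have been written), but it is an illustration, not a drop-in replacement.

```c
#include <stdarg.h>
#include <stddef.h>

/* Append one character if room remains; always count it (snprintf-style). */
static void put(char *buf, size_t size, size_t *n, char c)
{
    if (*n + 1 < size)          /* leave room for the terminating NUL */
        buf[*n] = c;
    (*n)++;
}

/* Minimal vsnprintf() work-alike: no stdio, just an array and a counter. */
static size_t mini_vsnprintf(char *buf, size_t size, const char *fmt, va_list ap)
{
    size_t n = 0;

    for (; *fmt; fmt++) {
        if (*fmt != '%') {
            put(buf, size, &n, *fmt);
            continue;
        }
        fmt++;
        if (*fmt == '\0')       /* stray '%' at end of format */
            break;
        switch (*fmt) {
        case 's': {
            const char *s = va_arg(ap, const char *);
            while (*s)
                put(buf, size, &n, *s++);
            break;
        }
        case 'd': {
            int v = va_arg(ap, int);
            unsigned u = (unsigned)v;
            char tmp[12];
            int i = 0;
            if (v < 0) {
                put(buf, size, &n, '-');
                u = -u;         /* unsigned negation: safe even for INT_MIN */
            }
            do {
                tmp[i++] = (char)('0' + u % 10);
                u /= 10;
            } while (u);
            while (i)
                put(buf, size, &n, tmp[--i]);
            break;
        }
        case '%':
            put(buf, size, &n, '%');
            break;
        default:                /* unknown directive: emit verbatim */
            put(buf, size, &n, *fmt);
            break;
        }
    }
    if (size)
        buf[n < size ? n : size - 1] = '\0';
    return n;                   /* length that would have been written */
}

static size_t mini_snprintf(char *buf, size_t size, const char *fmt, ...)
{
    va_list ap;
    size_t n;

    va_start(ap, fmt);
    n = mini_vsnprintf(buf, size, fmt, ap);
    va_end(ap);
    return n;
}
```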
Re: [Full-Disclosure] SpenderSEC Advisory #1
On Sun, 20 Jun 2004 [EMAIL PROTECTED] wrote:

> The first major problem is present in the OpenBSD patch at [1], where the
> failure of falloc() results in a continuation of the loop, which can
> update the value of the error variable, resulting in either fd 0 or fd 1
> not being correctly reopened to /dev/null while a successful falloc() for
> fd 2 sets error to a suitable value.

Old news, Mr Spender(?), see http://www.securityfocus.com/archive/1/10147/1998-07-25/1998-07-31/2 or http://seclists.org/lists/bugtraq/1998/Jul/0376.html:

"Hmm. In theory, yes. But the OpenBSD implementation seems to have a potential small hole. It should abort when it cannot fix everything but it does not. PERHAPS, a temporary resource starvation could break it."

This was sent to Bugtraq (and cc'ed to Theo de Raadt) in 1998.

--Pavel Kankovsky aka Peak
Re: [Full-Disclosure] Induce Act
On Fri, 18 Jun 2004, Eric Paynter wrote:

> "Could be used for" is pretty scary... The Internet could be used for
> copyright infringement. So can photocopiers, tape recorders, hard disk
> drives... heck, a pencil and paper could be used for copyright
> infringement if I'm transcribing music with it and then selling the
> hand-written copies. Better stop making pencils!!

It's even worse! People could use the stuff filling their skulls to memorize copyrighted data and reproduce it later. Better lobotomize everyone!

--Pavel Kankovsky aka Peak
RE: [Full-Disclosure] MS Anti Virus?
On Thu, 17 Jun 2004, joe wrote:

> Home users never should have been impacted as they should be running
> firewall software on the internet connections. The fact that they don't
> isn't MS's fault, however MS is stepping up with XP SP2 to help out. On
> top of that they should be patching when necessary.

But it is their fault that they release an OS with ~5 hard-to-deactivate plus ~5 almost-impossible-to-deactivate dangerous but mostly useless (*) network services enabled by default, an OS that is guaranteed to be owned within 10 minutes after you plug it into the network unless you 1. install extra firewalling software, or (assuming you got the version with a builtin packet filter) 2. smoke enough grass to be able to grok their own configuration dialog windows (**). Indeed, other vendors made the same stupid mistake in the past (and some of them insist on repeating it).

(*) Who needs network accessible MS RPC services on a home PC?

(**) I admit I am talking about the Czech version. Maybe the English version, not affected by the creativity of any localization team, is somewhat more understandable.

--Pavel Kankovsky aka Peak
Re: [Full-Disclosure] !! Internet Explorer !!
On Fri, 11 Jun 2004, Koen wrote:

> > http://www.mozilla.org
>
> And how exactly does this help in a corporate environment where you are
> obliged to use Internet Explorer because they are running some kind of
> bloated service/application that only runs in IE?

It helps them learn that they should think twice before they deploy any kind of bloated IE-only application.

--Pavel Kankovsky aka Peak
RE: [Full-Disclosure] Psexec on *NIX
On Thu, 6 May 2004, Chris Carlson wrote:

> I know it is possible to remotely install any solution and then use it,
> but it doesn't make sense to do so. Why would I install and run an ssh
> daemon just to use it to run another program, then delete the ssh daemon?
> Why would I do that with anything? It just doesn't make sense.

Psexec does more or less the same thing in an automated way: it uploads a program to the target machine (via the ADMIN$ share), registers it as a new service, starts the service, connects to the service and asks it to execute the given command, stops the service, unregisters the service and removes the program.

Last time I looked, smbclient and rpcclient from Samba TNG were able to accomplish all of the listed tasks but service registration and unregistration.

An alternative method to run code on a remote MS Windows machine is to start an uploaded program via the Scheduler service (assuming this service is running or at least ready to be started). This can be done using rpcclient.

--Pavel Kankovsky aka Peak
Re: [Full-Disclosure] Core Internet Vulnerable - News at 11:00
On Tue, 20 Apr 2004, Michal Zalewski wrote:

> That said, kudos to Watson: it is definitely good to see this problem
> being finally discussed in broad daylight; I think it would be good to
> see some kludges intended to mitigate it a bit.

Data injection may be thwarted by TCP timestamps (RFC 1323). Timestamps are 32 bits long and received echoed timestamps must correspond to (recently) sent timestamps. The exact implementation would probably be somewhat tricky, but I think it might be able to extend the effective sequence number by at least 16 bits.

A spoofed timestamp-less SYN or SYN-ACK packet during the initial 3-way handshake might prevent the use of TCP timestamps, but an attacker would have to guess the full 32 bits of an ISN (or of two ISNs in the SYN-ACK case).

Unfortunately timestamps won't help against spoofed RST packets because existing TCP implementations are supposed not to send timestamps in RST packets.

--Pavel Kankovsky aka Peak

___
Full-Disclosure - We believe in it.
Charter: http://lists.netsys.com/full-disclosure-charter.html
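The rough arithmetic behind the "at least 16 bits" claim: a blind attacker must land a spoofed segment inside the receive window, so the expected number of packets is about 2^(32 + extra bits) / window, where bits that a matching echoed timestamp would add count as extra. This is a crude illustration of the attack-cost scaling, not a precise model.

```c
/* Expected number of blind spoofed packets needed to land in-window,
 * assuming a uniform guess over 2^(32 + extra_bits) values. */
#include <stdint.h>

static uint64_t spoof_attempts(unsigned extra_bits, uint32_t window)
{
    return ((uint64_t)1 << (32 + extra_bits)) / window;
}
```

With a 64 KB window, bare sequence numbers cost the attacker on the order of 2^16 packets; 16 extra timestamp bits push that to the order of 2^32, i.e. back to brute-forcing a full ISN.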