Other articles describe it as "which replaced the original servers with
off-the-shelf Dell hardware running Microsoft Windows 2000 Advanced Server".
There are also other mentions of Windows servers replacing UNIX servers. I
don't think I have ever met someone who would be willing to call Win9x a
server.

> but it seems that the only valid approximation of what it could
> originally mean is an OS problem.

It is a valid approximation of lots of apps that have problems with
GetTickCount, including MS themselves. In my years of helping companies and
their vendors I have seen MANY occurrences of this problem. It is an
extremely common problem. The number one reason vendors give when I approach
them about it is that they weren't aware of any other function to get a
timer. The second reason is that they were aware, but didn't like working
with 64-bit integers, which can be a pain in some compilers/languages.

> - Why would they use such a ridiculous counter? Applications usually
>   do not have to count time on their own, and usually rely on RTC data.
>   Counting milliseconds seems futile, though I suppose it could be
>   just a matter of an obscure design.

There are valid uses for counters like this, usually for
sequencing/synchronization. I can't say anything other than that you haven't
had experience with them. Granted, many uses of handling time in an app this
way would be better served by an event-driven timer or signals, but not all
coders are comfortable doing things like that. The non-hardware-specific RTC
capability in Windows is exposed through the high-resolution timers, though
even MS doesn't call Windows (other than CE/Embedded) a real-time OS. The
next closest approximation is GetTickCount, which is there to give an easy
32-bit answer, again because many coders don't like 64-bit integers.

> - Why wouldn't the same code fail on unix previously?

Who says it is the same code? The fact that it does fail at exactly 49.7
days would seem to indicate to me that the old code wasn't the same - that
it used some different method of doing the timing, not a count of
milliseconds since start time in a 32-bit value.
> - Why would they claim again and again that this was an OS "feature"?

Because the people speaking don't code, and the vendor probably told them
so. I heard the same thing from a vendor a couple of years ago, when a
company contacted me about a timer that would start counting down after the
program had run for many days. The vendor was screwing up in how they used
clock(), whose 32-bit return value wraps. They swore up and down it was MS's
fault, until I wrote code that duplicated exactly what they were doing, side
by side with another example that worked perfectly. Just the other day I
dealt in the newsgroups with another vendor who ran into the exact same
problem with clock(). The issue is no, or an incomplete, understanding of
basic data types.

   joe

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Michal Zalewski
Sent: Friday, September 24, 2004 5:26 PM
To: joe
Cc: [EMAIL PROTECTED]
Subject: RE: [Full-Disclosure] Windoze almost managed to 200x repeat 9/11

On Fri, 24 Sep 2004, joe wrote:

> It says right in the article they were running Windows 2000 Advanced
> Server. The systems were not impacted by the Win95 hang bug. Almost
> certainly Windows was fine... period.

Ahem... the most informative piece I could find reads:

http://www.latimes.com/news/local/la-me-faa16sep16,1,3729661.story

  When the system was upgraded about a year ago, the original [unix]
  computers were replaced by Dell computers using Microsoft software.
  Baggett said the Microsoft software contained an internal clock designed
  to shut the system down after 49.7 days to prevent it from becoming
  overloaded with data.

This appears to be a fine example of meaningless gibberish, but it seems
that the only valid approximation of what it could originally mean is an OS
problem. Which is consistent with what we know about old Microsoft OSes.
Sure, the same problem could happen if the application running on that box
used a 32-bit integer to store the millisecond count since its launch - but:

- Why would they use such a ridiculous counter? Applications usually
  do not have to count time on their own, and usually rely on RTC data.
  Counting milliseconds seems futile, though I suppose it could be
  just a matter of an obscure design.

- Why wouldn't the same code fail on unix previously?

- Why would they claim again and again that this was an OS "feature"?

It seems that all the claims support the OS-flaw version, though of course
it's not a good idea to trust the press on technical issues. Until we know
more, getting into an off-topic, groundless flamewar is not needed.

--
------------------------- bash$ :(){ :|:&};: --
 Michal Zalewski * [http://lcamtuf.coredump.cx]
    Did you know that clones never use mirrors?
--------------------------- 2004-09-24 23:08 --
   http://lcamtuf.coredump.cx/photo/current/

_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.netsys.com/full-disclosure-charter.html