Linux-Advocacy Digest #470, Volume #25            Thu, 2 Mar 00 09:13:04 EST

Contents:
  Re: Fairness to Winvocates (was Re: Microsoft migrates Hotmail to W2K) (Edward Rosten)
  Re: Fairness to Winvocates (was Re: Microsoft migrates Hotmail to W2K) (Edward Rosten)
  Re: 64-Bit Linux On Intel Itanium (was: Microsoft's New Motto (Paul 'Z' Ewande©)
  Re: Absolute failure of Linux dead ahead? ([EMAIL PROTECTED])
  Re: Fairness to Winvocates (was Re: Microsoft migrates Hotmail to W2K) ("Joseph T. Adams")
  Re: Absolute failure of Linux dead ahead? (mlw)
  Re: Absolute failure of Linux dead ahead? (Christopher Browne)
  Re: Absolute failure of Linux dead ahead? ("Joseph T. Adams")
  Re: Absolute failure of Linux dead ahead? (Edward Rosten)
  Re: Microsoft migrates Hotmail to W2K (Christopher Browne)

----------------------------------------------------------------------------

From: Edward Rosten <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy
Subject: Re: Fairness to Winvocates (was Re: Microsoft migrates Hotmail to W2K)
Date: Thu, 02 Mar 2000 11:16:00 +0000

"Joseph T. Adams" wrote:
>
> Also, NT and W2K run only on a single hardware platform, one designed
> primarily for workstation rather than server use.  The
> cost-effectiveness of x86 servers tends to decline dramatically as
> the load per box grows past a certain point; improvements in the
> platform over time continue to push that point of diminishing returns
> upward, but at the present time, neither NT nor W2K, nor any other
> x86-based OS, is capable of competing on the high end against
> platforms that are optimized for better hardware.

Good point - many of you seem to be missing it. Despite many attempts to
produce very high performance i86-based platforms, the P3 is not a
suitable processor. It is power hungry and very badly designed. It and
its support hardware are semi-archaic in nature:
A single PIII box may have a user wanting to run an old DOS app;
therefore, it is backwards compatible (more or less) with an 8086. It
runs NT.
An 8-way PIII essentially runs the same NT; therefore, there are parts
that are backwards compatible with an 8086. Don't forget the whole extra
16-bit subsystem, which adds to the cost, the complexity and the general
oddness of it all. One of the main attempts to get NT to run on a
/different/ i86 platform was for SGI, who made an i86-based workstation.
Although NT ran on it, almost nothing ran on NT on it, so even the huge
i86en keep some backwards compatibility with your cherished 8086 that
you used to run Alley Cat, Digger and Word 1 on.


Like I said, the P3 system, and indeed the whole PC architecture, is
arse over apex.
There are other (better designed) processors that are much more
suitable. The StrongARM has been scaled up to 31 processors, at an
estimated cost of about $3000 (and about 40W of power) per 31-processor
computer. Let me see i86es do that.
Further to that, the P3 is only a 32-bit processor. 64-bit processors
are *much* more capable, so fewer are needed.
I would like to point out (go and read some OS and parallel theory if
you don't trust me) that the fewer processors the better: a 2-processor
P3 is NOT 2x as fast as one P3, because data busses, resources,
communication and kernel conflicts cause bottlenecks. At the moment,
Cray have the best multiprocessor bus design, and no one else knows
what it is.
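
The usual way to put a number on that is Amdahl's law. A rough sketch in
C (the 10% serial fraction below is an illustrative assumption, not a
measurement of any real P3 box):

#include <stdio.h>

/* Amdahl's law: if a fraction s of the work is inherently serial
   (bus contention, kernel locks, communication), the best possible
   speedup on n processors is 1 / (s + (1 - s) / n). */
static double amdahl(double s, int n)
{
    return 1.0 / (s + (1.0 - s) / n);
}

int main(void)
{
    /* Assume 10% of the work is serial -- illustrative only. */
    printf("2 cpus: %.2fx\n", amdahl(0.10, 2));   /* ~1.82x, not 2x */
    printf("8 cpus: %.2fx\n", amdahl(0.10, 8));   /* ~4.71x, not 8x */
    return 0;
}
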
The conclusion from this is that a proper computer will be much better
than a bunch of i86s. Given Microsoft's abject failure to successfully
port NT to better hardware (I don't know anyone who knows anyone who has
seen NT running on a DEC), the only sensible conclusion is that NT is
not suitable for seriously big work.
If NT were ported, then maybe it would prove to be as good as UN*X (not
my personal view, but I'm trying to be objective). Until that time, it
is not.


-Ed

> 
> The bigger the application, the more costly PC-based solutions become.
> PCs are not and probably cannot be competitive as high-end servers,
> because of hardware limitations that are intrinsic to the platform.
> You can only get lots of MIPS or TPS by adding more boxes, and as you
> do, the cost of communicating and coordinating work done by those
> boxes can grow almost exponentially unless the app is specifically
> designed for that kind of architecture, so even adding more boxes
> doesn't make clusters of PCs viable as high-end servers UNLESS - and
> this is they key, the reason why both Linux and NT clusters can either
> succeed or fail - unless the app is designed, from the ground up, to
> run on the specific OS, hardware, and network configuration that is to
> be used.
> 
> 
-- 
Did you know that the reason that windows steam up in cold weather is
because of all the fish in the atmosphere?
        -The Hackenthorpe Book Of Lies

------------------------------

From: Edward Rosten <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy
Subject: Re: Fairness to Winvocates (was Re: Microsoft migrates Hotmail to W2K)
Date: Thu, 02 Mar 2000 11:20:58 +0000

Drestin Black wrote:
> 
> "Mr. Rupert" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> >
> >
> > Both Chad's and Drestin's rebuttal arguments to Joseph's post are
> > absolutely moot.  MS is running UNIX at their hotmail site.
> >
> > That speaks volumes for NT and W2K.  End of story!
> 
> Actually a better reply to you "Mr. Rupert"
> 
> When hotmail.com runs on W2K later this year, I'll be expecting to hear you
> say: "MS is running W2K at their hotmail site. Solaris/BSD couldn't handle
> it and sucks - W2K is king" You'll be expected to lower your head and scurry
> away from any comparison threads because all we have to do is say: "Yea,
> but MS runs hotmail on W2K so... nah nah nah nah naaaaaaaah nah" and we'll
> win! yipee!! (I'm hoping everyone recognizes the sarcasm dripping here)

But how will *vocates know how many (if any) extra computers will be
being used? If MS migrate to Win* and need twice the number of
processors (but migrate nonetheless), does that still mean that you win?
-Ed



-- 
Did you know that the reason that windows steam up in cold weather is
because of all the fish in the atmosphere?
        -The Hackenthorpe Book Of Lies

------------------------------

From: Paul 'Z' Ewande© <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy,comp.sys.mac.advocacy,comp.unix.advocacy
Subject: Re: 64-Bit Linux On Intel Itanium (was: Microsoft's New Motto
Date: Thu, 2 Mar 2000 12:48:13 +0100


"Donn Miller" <[EMAIL PROTECTED]> a écrit dans le message news:
[EMAIL PROTECTED]

> I think Windows advocates exaggerate the kernel recompiling

IMO, it has about as much relevance as the frequent crashes of Windows.

> thing.  Really, you only need to recompile your kernel if you're
> applying bug fixes, or adding a new feature that can't be implemented
> via loadable kernel modules.

I personally have nothing against recompiling a kernel, having done so 4 or
5 times on FreeBSD, FWIW.
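
For the curious, the loadable-module route mentioned above looks roughly
like this - a minimal sketch, assuming a 2.2-era Linux kernel where
init_module/cleanup_module are the module entry points:

/* hello.c - a minimal loadable kernel module sketch. */
#define MODULE
#include <linux/module.h>
#include <linux/kernel.h>

int init_module(void)
{
        printk("<1>hello: loaded\n");   /* runs at insmod time */
        return 0;
}

void cleanup_module(void)
{
        printk("<1>hello: unloaded\n"); /* runs at rmmod time */
}

Compile it against your kernel headers and load it with insmod; the
running kernel picks up the new code with no recompile or reboot.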

> - Donn

Paul 'Z' Ewande



------------------------------

From: [EMAIL PROTECTED]
Crossposted-To: comp.os.linux.development.system
Subject: Re: Absolute failure of Linux dead ahead?
Date: Thu, 02 Mar 2000 12:47:36 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] wrote:

[snip]

> Wait, wait!  There are other scary items forthcoming:
>
> a) Resolution of the 2038 problem.  2^31-1 seconds from Jan 1, 1970
> happens to be in 2038.  Stuff Will Break Then.
>
> This is the end-of-epoch that is the UNIX equivalent of the "Year 2000
> cliff" that everyone worried about last year.
>

I've always wondered about this one.  Is there any reason we can't
just agree that the world actually began in 2000 and modify a few
system calls?  I suppose that anyone who has a file lying around
from pre-2000 will get the wrong timestamp, but will anyone really
care in 2038? Are there any programs out there that encode dates as
the number of seconds since 1970?  Is there any significance to
1970 other than reminding Thompson how old he's getting?  Inquiring
minds want to know :)
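
For what it's worth, seconds-since-1970 is exactly what time(2) hands
back, so a great many programs do. A minimal sketch of where a signed
32-bit counter tops out:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* The largest value a signed 32-bit time_t can hold. */
    time_t t = 2147483647;   /* 2^31 - 1 */

    /* On a machine set to UTC this prints
       "Tue Jan 19 03:14:07 2038"; one second later, a 32-bit
       counter wraps negative. */
    printf("%s", ctime(&t));
    return 0;
}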

Clark



Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: "Joseph T. Adams" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy
Subject: Re: Fairness to Winvocates (was Re: Microsoft migrates Hotmail to W2K)
Date: 2 Mar 2000 13:14:09 GMT


Although Drestin and Chad are in my killfile, the bits and pieces of
their "rebuttal" quoted by others lend further support to my
contention that MS did not port Hotmail because it knew it was a big
job and would not be worth the cost of doing so.

The relevance of TPC benchmarks to anything I've argued has not been
demonstrated; an application like Hotmail would seem to stress *every*
piece of architecture needed to support it: the network, the TCP/IP
stacks, disk I/O, CPU, and possibly RDBMS performance (if it uses an
RDBMS - I'm not convinced that an app like this should, given other
forms of data access that may be much more efficient).

The contention that Hotmail is less busy than microsoft.com seems
absurd: generally microsoft.com is visited mostly by software
developers, a relatively small subset of the Internet population,
whereas tens if not hundreds of millions of people worldwide - a
sizeable fraction of all people who have even part-time Internet
access, and including many who have no access other than public Web
browsers at libraries, etc. - have Hotmail accounts.

The argument that Microsoft has lots of money, and that money is
therefore no object, is similarly silly.  People and businesses with
lots of money usually didn't get that way by wasting it.

My point was not to insult Microsoft by pointing out facts, although I
realize that some of the facts are not favorable to Microsoft and
therefore could be (and obviously was, by some of the Winvocates)
construed as insulting.  Rather, my point was to defend NT/W2K against
the (IMO) unfounded accusation that it could not handle the load.  I'm
fairly convinced that a sufficiently sizeable NT/W2K cluster could,
*if* the app were rewritten to take advantage of that architecture and
*if* Microsoft saw this as a cost-effective proposition.

And my purpose for doing that was to put to rest this idea that the
migration or non-migration of Hotmail, at this or any point in time,
*proves* anything other than that Hotmail has or has not been migrated. 
Doing so does not prove the superiority of NT/W2K; failing to do so
does not prove that it's trash.

With that canard pushed aside, we could then discuss what lessons can
be legitimately inferred from the situation, and how these lessons
might be applied to prospective decisions to port from UNIX to NT.

All I can infer at present is that Microsoft deemed the migration to
be not worth pursuing at the time the decision was made.


Joe

------------------------------

From: mlw <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.system
Subject: Re: Absolute failure of Linux dead ahead?
Date: Thu, 02 Mar 2000 08:20:02 -0500

[EMAIL PROTECTED] wrote:
> 
> In article <[EMAIL PROTECTED]>,
>   [EMAIL PROTECTED] wrote:
> 
> [snip]
> 
> > Wait, wait!  There are other scary items forthcoming:
> >
> > a) Resolution of the 2038 problem.  2^31-1 seconds from Jan 1, 1970
> > happens to be in 2038.  Stuff Will Break Then.
> >
> > This is the end-of-epoch that is the UNIX equivalent of the "Year 2000
> > cliff" that everyone worried about last year.
> >
> 
> I've always wondered about this one.  Is there any reason we can't
> just agree that the world actually began in 2000 and modify a few
> system calls?  I suppose that anyone who has a file lying around
> from pre-2000 will get the wrong timestamp, but will anyone really
> care in 2038? Are there any programs out there that encode dates as
> the number of seconds since 1970?  Is there any significance to
> 1970 other than reminding Thompson how old he's getting?  Inquiring
> minds want to know :)

I am not sure that I care about this one; it is 38 years away. In 38
years, 64-bit computers will be obsolete. Right now we have the 2GB file
issue, which is far more important and is a real problem today.

-- 
Mohawk Software
Windows 95, Windows NT, UNIX, Linux. Applications, drivers, support. 
Visit http://www.mohawksoft.com

------------------------------

From: [EMAIL PROTECTED] (Christopher Browne)
Crossposted-To: comp.os.linux.development.system
Subject: Re: Absolute failure of Linux dead ahead?
Reply-To: [EMAIL PROTECTED]
Date: Thu, 02 Mar 2000 13:29:55 GMT

Centuries ago, Nostradamus foresaw a time when [EMAIL PROTECTED]
would say:
>In article <[EMAIL PROTECTED]>,
>  [EMAIL PROTECTED] wrote:
>[snip]
>> Wait, wait!  There are other scary items forthcoming:
>>
>> a) Resolution of the 2038 problem.  2^31-1 seconds from Jan 1, 1970
>> happens to be in 2038.  Stuff Will Break Then.
>>
>> This is the end-of-epoch that is the UNIX equivalent of the "Year 2000
>> cliff" that everyone worried about last year.
>
>I've always wondered about this one.  Is there any reason we can't
>just agree that the world actually began in 2000 and modify a few
>system calls?  I suppose that anyone who has a file lying around
>from pre-2000 will get the wrong timestamp, but will anyone really
>care in 2038? Are there any programs out there that encode dates as
>the number of seconds since 1970?  Is there any significance to
>1970 other than reminding Thompson how old he's getting?  Inquiring
>minds want to know :)

Changing the epoch's beginning would cause a new and different bit
of confusion, as it would invalidate all present date calculations.

It might be simple enough in some ways, but the *conversion*
process, to force everything over to the new epoch whilst using
the same data format, would Not Be Pretty.  The problem is that
you can't tell whether a particular value has been converted or
not, because both conventions use the same format.
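
A small sketch of that ambiguity (the stored value below is an
arbitrary example):

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Some 32-bit timestamp read back from disk. */
    long stamp = 946684800;

    /* Interpreted against the 1970 epoch... */
    time_t as_1970 = (time_t)stamp;

    /* ...or against a hypothetical 2000 epoch, i.e. offset by
       the 946684800 seconds between 1970-01-01 and 2000-01-01.
       The raw bits are identical either way, so nothing in the
       value says which convention it follows. */
    time_t as_2000 = (time_t)stamp + 946684800;

    printf("1970-based: %s", ctime(&as_1970));
    printf("2000-based: %s", ctime(&as_2000));
    return 0;
}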
-- 
Rules of the Evil Overlord #57. "I will not rely entirely upon "totally
reliable" spells that can be neutralized by relatively inconspicuous
talismen." 
<http://www.eviloverlord.com/lists/overlord.html>
[EMAIL PROTECTED] - - <http://www.ntlug.org/~cbbrowne/lsf.html>

------------------------------

From: "Joseph T. Adams" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.system
Subject: Re: Absolute failure of Linux dead ahead?
Date: 2 Mar 2000 13:34:03 GMT

In comp.os.linux.advocacy Christopher Browne <[EMAIL PROTECTED]> wrote:
: Wait, wait!  There are other scary items forthcoming:

: a) Resolution of the 2038 problem.  2^31-1 seconds from Jan 1, 1970
: happens to be in 2038.  Stuff Will Break Then.  

: This is the end-of-epoch that is the UNIX equivalent of the "Year 2000
: cliff" that everyone worried about last year.

I would hope that apps are not still using 32-bit hardware and 32-bit
time_t values by that time.

Indeed, I hope (and am reasonably certain) that Moore's Law will make
it possible and beneficial to start to do at least some kinds of
system work in a language much safer than C.


: b) Resolution of the 2GB file problem on 32 bit architectures.  This
: is, again, a 32-bit-ism that is starting to bite people working with
: today's Very Large Disk Drives.  The comprehensive fix to this will
: likely be synchronized with the 2038 problem, as resolutions for both
: likely involve moving from 32 bit values to 64 bit values.

: There will be some period of "inconvenience" at whatever point libraries,
: filesystems, and kernels have to be synchronized together to fix this.

I believe that will happen this year.  The necessary components are
already in place.  All that has to happen (AFAIK) is for a
distribution vendor to do the painstaking work of recompiling
everything (including glibc) and fixing stuff that isn't 64-bit clean
or otherwise breaks.  Yes, this is a pain, but it has to be done, and
the first vendor to do it successfully will gain a huge advantage over
the others.
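
The application-side half of that is glibc's large-file interface; a
minimal sketch, assuming a glibc built with _FILE_OFFSET_BITS support:

/* Must be defined before any system header is included. */
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    /* With the macro above, off_t is 64 bits even on a 32-bit
       box, so offsets past 2GB become representable. */
    printf("off_t is %d bits\n", (int)(sizeof(off_t) * 8));
    return 0;
}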


: c) One of these days, someone may actually come up with a C++ ABI that
: would be expected to interoperate widely.  No approach for this is yet
: available.

I'm not highly convinced that binary portability is even a good idea,
much less necessary, in a world in which most of the software worth
having is available in source code form.

Nor am I convinced that C++ will gain a huge market share in the *nix
world beyond what it has now.  Standard C++ is a nice language for the
developer, if used properly, but a nightmare for compiler and library
writers.  Most of what might have been done in Standard C++, had it
been available for Linux earlier, has been done instead with a
combination of C and higher-level languages such as Perl or Python.

I would be extremely unlikely to write Linux apps in C++ *unless* they
were intended to be deployed in a KDE/Qt environment, in which, for a
variety of reasons, C++ does make the most sense.  In such an
environment, the necessary libraries can generally be assumed to exist
and to be at least adequately stable.  And I would release under a
GPL- or LGPL-compatible license, so source would be available.


: d) Many of the problems go away when you've got tools that automatically
: recompile software using local tools thus maximizing compatibility, and
: possibly even performance.  The BSD Ports system provides this, and
: Debian looks to be looking towards this.  (Which has the further merit
: of diminishing C++ ABI worries...)

I think these are both excellent approaches, and think that whoever
could combine the best features of both would have a real winner. 

I've always had a love/hate relationship with binary RPMs.  I love the
fact that they *usually* work or at least yield sufficiently useful
clues as to make it possible to make them work.  But I hate the idea
of using binaries.  They introduce potential security risks; they may
have subtle bugs caused by slightly differing library versions; they
may very well break if I upgrade my kernel, and then I'm stuck going
out and getting the source anyway.  They certainly won't work reliably
if I change to a very different distribution such as Debian where the
system files tend to live in different places.

A cross between dpkg and *BSD ports would be awesome, and if it
existed, I would strongly encourage developers to use it, in addition
to or even instead of RPMs.


: Part of the problem Red Hat has had is that they didn't have anyone
: truly responsible for system testing.  Testing was generally "supposed
: to be done," but without any pointed responsibility, this doesn't 
: necessarily happen.   About six months ago, I'm told they hired 
: someone to be responsible for testing, which should lead to there
: being some automated tests that should be helpful.  The acquisition
: of Cygnus is pretty interesting in that Cygnus has been collecting
: test suites for compilers for rather a while now, hopefully providing
: them with some expertise that might rub off.


The more complex software becomes, the more necessary it is to
automate the testing process.  It's hard for me to understand how any
robust software could possibly get built without it.


Joe

------------------------------

From: Edward Rosten <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.system
Subject: Re: Absolute failure of Linux dead ahead?
Date: Thu, 02 Mar 2000 13:46:55 +0000

Linux is fully 64-bit on systems that are fully 64-bit. So, on systems
that are designed for proper performance, it gives proper performance. I
am not a fan of the i86 and think that it is faintly ridiculous that
Intel are still pursuing the 32-bit line (by jacking up the [MG]Hz),
trying to get more performance out of an inherently limited design.
If/when the Merced becomes available, and when they stop the 32-bit line
for PCs altogether, then Linux on a PC will be 64-bit.

64-bit subsystems on a 32-bit machine need either 2 sets of system
calls or serious hacks to avoid a big drop in performance. We
really all should move over to proper computers.
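
To make the two-sets-of-calls point concrete, glibc exposes a parallel
64-bit seek interface next to the ordinary one on 32-bit Linux. A
minimal sketch (the filename is made up):

#define _LARGEFILE64_SOURCE   /* expose the parallel 64-bit calls */
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/big.dat", O_RDONLY);   /* hypothetical file */
    if (fd < 0)
        return 1;

    /* lseek64 sits alongside plain lseek: the second set of
       calls needed once offsets outgrow 32 bits. */
    off64_t pos = lseek64(fd, (off64_t)3 << 30, SEEK_SET);  /* 3GB */

    close(fd);
    return pos < 0;
}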

Just don't ask my opinion when I want a 3GB file on a Pentium
;-)

-Ed

-- 
Did you know that the reason that windows steam up in cold weather is
because of all the fish in the atmosphere?
        -The Hackenthorpe Book Of Lies

------------------------------

From: [EMAIL PROTECTED] (Christopher Browne)
Crossposted-To: comp.os.ms-windows.nt.advocacy
Subject: Re: Microsoft migrates Hotmail to W2K
Reply-To: [EMAIL PROTECTED]
Date: Thu, 02 Mar 2000 14:03:52 GMT

Centuries ago, Nostradamus foresaw a time when Drestin Black would say:
>"Christopher Browne" <[EMAIL PROTECTED]> wrote in message
>news:s8lv4.55695$[EMAIL PROTECTED]...
>> >Yes, there are several TPC-C entries, but the point is that TPC-C
>> >isn't the be-all, end-all benchmark --- it's only one of several.
>>
>> This sort of thing happens fairly often; it was not so many years ago
>> that C compilers were written to detect ByteMark benchmark programs,
>> and generate code that essentially precomputed the answers at compile
>> time, so that the compilers generated spuriously high benchmark
>> numbers.
>>
>> What the Compaq/MS solution most likely displays is *not* that
>> they've got better software or hardware, but rather that they found
>> the "trick" that decomposes TPC-C such that it is no longer a useful
>> (e.g. - realistic) measure of performance on large transaction sets.
>>
>> This is why the TPC council has a sizable set of TPC benchmarks, and
>> is why nobody cares anymore about the older TPC-A or TPC-B
>> benchmarks.  Presumably once it is established that everybody can
>> "cheat" on TPC-C, it will be discontinued as well.
>
>Oh - I see how it goes. god this is really amazing. TPC stands as THE
>benchmark to beat so long as unix boxes reign supreme. For two years it's
>been; Ha! NT/SQL might be good for cheap solutions (cause they always won on
>price/performance) but when you want SERIOUS performance, it's big expensive
>iron and unix running oracle. ha!
>
>So, now MS smokes the competition and suddenly... oh, they cheated. yeah,
>that's it. Musta been cheating. Yea, we know the world is watching and
>everyone has scoured the 600 page disclosure and it's been independently
>audited and it's the same benchmark everyone else uses but suddenly: ms
>cheated, that's it that's how they did it.
>
>what sour grapes.

Why didn't MSFT report "blazing" TPC/A or TPC/B results?  Or "ByteMark"
benchmarks?  *Because nobody particularly trusts such benchmarks anymore.*

It wasn't Microsoft that did the "cheating" that resulted in those
benchmarks' deprecation, which means that it's *you* who has the "sour
grapes."

The point is that with many of these benchmarks, there is some sort of
trick that can be played that allows outrageously high numbers without
the total system actually being of such high performance.
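
A toy example of the ByteMark-style trick, where the compiler rather
than the vendor does the cheating (this is not a claim about how any
TPC-C figure was produced):

#include <stdio.h>

/* A "benchmark kernel" like this can be folded away entirely:
   with the argument visible at compile time, an optimizing
   compiler may precompute the answer, so the timed loop ends up
   measuring nothing at all. */
static int sum_to(int n)
{
    int s = 0, i;
    for (i = 1; i <= n; i++)
        s += i;
    return s;
}

int main(void)
{
    /* gcc -O2 is free to emit the constant 500500 here. */
    printf("%d\n", sum_to(1000));
    return 0;
}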

I'm reasonably happy that when I book airline travel, it's *MVS* that's 
used to locate a flight.  It may not win somebody's "price/performance"
ratio contest, but it stays running for months at a time under heavy
processing loads.
-- 
"Without  insects,  our ecosystem  would  collapse  and  we would  all
die.  In  that respect,  insects  are  far  more important  than  mere
end-users."  -- Eugene O'Neil <[EMAIL PROTECTED]>
[EMAIL PROTECTED] - - <http://www.ntlug.org/~cbbrowne/lsf.html>

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.advocacy) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Advocacy Digest
******************************
