Linux-Advocacy Digest #552, Volume #28           Tue, 22 Aug 00 01:13:04 EDT

Contents:
  Re: BASIC == Beginners language (Was: Just curious.... (T. Max Devlin)
  Re: Fragmentation of Linux Community? Yeah, right!
  Re: Windows stability: Alternate shells? ("Erik Funkenbusch")
  Re: Anonymous Wintrolls and Authentic Linvocates - Re: R.E. Ballard says Linux growth stagnating ("Christopher Smith")
  Re: Would a M$ Voluntary Split Save It? (Courageous)

----------------------------------------------------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy
Subject: Re: BASIC == Beginners language (Was: Just curious....
Date: Tue, 22 Aug 2000 00:34:48 -0400
Reply-To: [EMAIL PROTECTED]

Said Aaron R. Kulkis in comp.os.linux.advocacy; 
>"T. Max Devlin" wrote:
   [...]
>True...but the Unix shell is even MORE ACCESSIBLE to novice users.

But not the Unix shell syntax.  There's just no way.

   [...]
>What I like about the Unix shells that were specifically designed

Precisely my point.

>with programming in mind ( Bourne-shell "sh" and Korn shell "ksh")
>are:

All right!  At last someone's gotten over their shyness.  I'm willing,
no, eager, to try to believe that the Unix shell is every bit as easy or
accessible for automating simple tasks in rudimentary macro-style
fashion to people who never need to, aren't able to, and *don't have any
talent to* develop complex software programs.  Let's see what comes of
this list:

>1: totally accessible to new users

Aye.  Just start typing.  But what does that have to do with syntax?

>2: very comfortable learning curve -- programming is merely an
>       extension of the basic command line which every user uses.

Aye.  But that's kind of the problem, you see.  I'm not sure if it's a
cop-out to say that it's possible that someone who is good at programming
couldn't possibly know how comfortable the curve is for people who would
make lousy programmers.  Plus, do you really want the problem of VB
users who think they're programmers to be compounded even further by
making everyone who manages to work out a couple of macros think they're
a "real programmer"?  Perhaps we need to consider returning to that old
"3rd generation/4th generation language" bit.  That's really the
problem, you know, and why VB is such an abomination.

>3: fully structured (no reliance upon goto's)

Is this really of value when you're *only* going to use something for
truly trivial things?  Perhaps you all simply over-estimate (still,
despite my pleas) the kind of work that I figure can be done in BASIC.

>4: full documentation is available online

Well, that's *purely* implementation.  We could chalk up "entirely
integrated," but that just goes back to point 1.

>5: Numerous working shell scripts, available for casual inspection
>       are littered throughout any Unix system.  In my junior
>       year at Purdue, one day I was quite surprised to discover
>       that the command "lpr" (on BSD systems) and many others
>       are actually shell scripts.

Certainly, this makes shell scripting easier to learn.  Does it make
shell scripting syntax any easier to use without years of practice?
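To make the syntax point concrete, here's a minimal sketch.  The driver is
Python only so it can report results; the behavior under test is /bin/sh's
own word-splitting, and the filename is invented for illustration:

```python
import subprocess

def words_seen_by_shell(quoted: bool) -> int:
    # Ask /bin/sh itself: how many words does a space-containing
    # variable expand to?  ($# is the positional-parameter count
    # after `set --`.)
    var = '"$file"' if quoted else '$file'
    script = 'file="my report.txt"; set -- %s; echo $#' % var
    out = subprocess.run(["/bin/sh", "-c", script],
                         capture_output=True, text=True, check=True)
    return int(out.stdout)
```

Unquoted, `$file` silently splits into two words; quoted, it stays one.
Nothing about interactive "just start typing" use ever forces a user to
learn why the quotes matter, which is exactly the gap being argued about.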

-- 
T. Max Devlin
  -- Such is my recollection of my reconstruction
   of events at the time, as I recall.  Consider it.
       Research assistance gladly accepted.  --



------------------------------

From: <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy
Subject: Re: Fragmentation of Linux Community? Yeah, right!
Date: Mon, 21 Aug 2000 21:32:36 -0700
Reply-To: <[EMAIL PROTECTED]>


Erik Funkenbusch <[EMAIL PROTECTED]> wrote in message
news:26no5.7317$[EMAIL PROTECTED]...

> If the Linux consensus is that DnD will not be supported under multiple
> environments, then Linux has lost the war.  It will never, as a desktop
> OS, surpass even the Macintosh, which does have a common API for DnD
> across all it's apps.

"...across all it's apps"?  All its applications that are designed to run in
a single desktop environment, correct?  That is one point where your argument
fails.  Unlike other OSes that you may be familiar with, Linux has numerous
independent user environments, as do most Unix OSes.  Drag and drop will
NEVER be workable across ALL Linux applications; it just plain can never
happen.  I will leave explaining why this is true as an exercise for you.

------------------------------

From: "Erik Funkenbusch" <[EMAIL PROTECTED]>
Subject: Re: Windows stability: Alternate shells?
Date: Tue, 22 Aug 2000 00:10:48 -0500

"R.E.Ballard ( Rex Ballard )" <[EMAIL PROTECTED]> wrote in message
news:8nqin2$fj5$[EMAIL PROTECTED]...
> In article <yr0n5.6505$[EMAIL PROTECTED]>,
>   "Erik Funkenbusch" <[EMAIL PROTECTED]> wrote:
> > "R.E.Ballard ( Rex Ballard )" <[EMAIL PROTECTED]> wrote in message
> > news:8nhokh$v4f$[EMAIL PROTECTED]...
> > > > You do realize that AT&T and the bells have always used
> > > > redundancy.  Switches fail, but they've always had extensive
> > > > cutovers and the ability to re-route around failures.
> > >
> > > Yes.  I was a developer on one of the first computerized Directory
> > > Assistance systems to go nationwide.
> >
> > Then why did you insinuate that Unix stability
> > was the reason phone systems
> > generally have zero to little downtime?
>
> Because modern telephone switching systems are controlled
> by UNIX based switches such as the #5ESS and similar switches
> from Northern Telecomm.
>
> Nearly every carrier and regional uses UNIX in some flavor for its
> mission-critical systems.

You didn't answer the question.  You claimed that the telephone network was
stable because it uses Unix.  I said no, it's not.  It's stable because it
has redundancy, and redundancy on the redundancy.  The stability of the OS
that's used is virtually irrelevant to how stable the telephone networks are
as a whole.

They could be using AmigaOS and still have the same levels of overall
network stability.

Unix is used because AT&T defined the market, and AT&T owned the OS.  Now
it's just legacy network effects.  They could have used any number of other
stable OS's as well.

> > With that kind of redundancy, you could run a 24/7/365 shop
> > on Dos 4 (the most unstable version ever released).
> >  What OS you're using is irrelevant to
> > the overall reliability in this case.
>
> Not entirely.  The one critical requirement was that the entire OS
> had to be available in Source code format because problems needed
> to be FIXED in a very short period of time.  When a bug affected
> one machine, it usually hit all of them very quickly.  The bug
> had to be fixed before multiple machines were lost.

And you don't think MS provides source code licenses to strategic partners?
They do.

> In one case, after an operator had taken down the system 3 times
> (we had ways of finding these things out) we paid him $500 cash
> to show us what he did.  We fixed the bug in 7 hours.  At no time
> during this "test phase" was the entire system down for more than
> a few minutes, but in production, with money due back to the client
> if downtime exceeded 15 minutes/year, this was a bug that had to
> be fixed.  On numerous occasions, it was only because we had
> the source code to *everything* that we could quickly handle the
> problems that did come up.

Source availability is irrelevant in such high dollar projects, since you
can most certainly get it.

> > Really?  3 years ago vendors were not providing Linux drivers.  They
> > were all written by Linux enthusiasts.  Drivers being supplied by
> > vendors is a very recent thing.
>
> Actually, the KA9Q drivers and the Linux drivers were among the first
> ported to Winsock.  It was easier to plug a simple veneer between the
> working driver and the winsock API.  You could pretty easily slip NDIS,
> ODI, or KA9Q over any of the packet drivers, Winsock needed NDIS.
>
> I believe the NE2000 driver was running on Linux before it was available
> under Windows 95.

Microsoft shipped a generic NE2000 driver with Windows 95.  Sorry.

> > > In windows, interprocess communication wasn't really formalized
> > > until MSMQ.  Prior to that, NT provided DCOM, but with a 95% loss of
> > > performance (20 times slower) when switching from COM (in-process)
> > > to DCOM (out of process), it was avoided like the plague.
> >
> > More Rex-no-babble.  DCOM is a remoting architecture.
>
> DCOM can be used for either interconnecting two remote machines,
> OR for interconnecting to independently running processes.

No, it can't.  DCOM is *ONLY* for remote processes.  If you're connecting to
local processes (unless you're using a loopback) COM is used, DCOM is not.

> We
> evaluated this capability at Prudential (since this would have
> reduced the instability between 3rd party vendor libraries.
> Unfortunately performance was so slow that we decided do go with
> in-process COM instead.  The decision killed 2 projects (which had
> problems with library cross-talk).

What kills me is that you speak so matter of factly about something which
you clearly do not understand.

> >  It has nothing to do
> > with out of process IPC (which occurs on the same machine).
> > IPC has been "formalized" since NT in 1993, when it provided
> > such things as pipes, named pipes, mail slots, shared memory,
> > and message passing.
>
> Unfortunately, most of these offer poor memory protection and/or
> increase the risk of deadlocks.  Prior to NT 4.0 Svc Pack 3, which
> put semaphores everywhere, BSODs due to corrupted IPC, Video, and
> Winsock were very common.  Even under NT4/SP3 applications written
> directly to lower level MFC and OLE APIs were notorious for BSODs.

???? Lower level MFC????  Gezus Rex, stop while you've got SOME credibility.
MFC is not an API in any sense of Windows (it's a class framework) and
there's no such thing as a "low level MFC API" even if you stretch the
terminology to loosely accept MFC as some sort of API at all.

IPC cannot cause deadlocks in and of itself.  If your apps don't use any
kind of synchronization (such as a semaphore or mutex) correctly, then the
deadlock is your own fault, not the OS's.  Sorry, you just don't know what
you're talking about here.

All of this is irrelevant though, since you claimed that these were added
later.  You're backpedaling, Rex.
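The point that deadlock is an application-level discipline, not a property
of the primitives, can be sketched generically.  Python threads stand in
here for any pair of synchronized processes; the names are invented:

```python
import threading

# Two locks standing in for any pair of shared resources.
lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(x, y, name, done, n=1000):
    # The application's job: always acquire locks in one global order
    # (here, by object id), no matter which order the caller names
    # them in.  The locks themselves cannot deadlock alone.
    for _ in range(n):
        first, second = sorted((x, y), key=id)
        with first:
            with second:
                pass
    done.append(name)

def run():
    done = []
    t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1", done))
    t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "t2", done))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return done
```

Swap the `sorted()` line for a naive `with x: with y:` and the two threads
will, sooner or later, stop each other cold -- the app's bug, not the OS's.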

> Eventually, the ISVs revised all of their applications to support
> the new APIs, and usually had to distribute them freely.  Many
> vendors also opted to load the older versions of some of the DLLs.

And which specific API's were those?  Come on, name them.

> > DCOM is much slower because it's used between machines,
> > over a network.  Not between processes on the same machine.
>
> It's not practical to use DCOM between processes on the same
> machine, but it is possible.  When you have two processes which
> are independent, they can be connected via in-proccess, out-of-process,
> or networked objects.  When you wanted independence to the point of
> being able to change the components at run-time, you had to at least
> go out-of-process.  Behind the covers, you were running DCOM over
> Winsock.

You can change components at runtime with in-proc DLL's as well.  Ever
heard of CoCreateInstance?

> > > Again, Microsoft is aware of these problems in NT, and has made a
> > > number of changes to Windows 2000 to eliminate them.  In some cases,
> > > they preserved the windows APIs while cleaning up the back-end.  In
> > > other cases, they introduced new APIs like COM+, MTS, and MSMQ.
> >
> > Again, you have no idea what COM+, MTS, or MSMQ are, do you?  They're
> > transactioning technologies for use in large scale database
> > applications.  They have nothing to do with IPC in general.
>
> MSMQ can be used for IPC, similar to MQSeries (which I know a GREAT DEAL
> about).  MTS provides protection between the threads when hundreds of
> connections are made to a single server process, similar to the way
> UNIX or LINUX ip deamons fork processes and let the child handle
> the accepted connection.
>
> COM+ is an abstraction of COM for C++ that hides most of the ugliness
> of thread management, object management, and DCOM from the application
> programmer.  Essentially it makes the C++ version look more like the
> friendlier VB and J++ versions.
>
> At least that's what this here book from Alex Homer and David Sussman
> says (boiling down a 493 page book into 3 paragraphs).

COM+ provides those abstractions, but that's not what it *IS*.  COM+ is
basically the integration of COM, MTS, and MSMQ into a single API, with
extensions for C++ as well.  How else could you use COM+ from VB then if it
was only a C++ technology?

> > Incompatible with any other platform?  I guess that's why COM and
> > DCOM exist for Solaris and HP/UX.  You should know that almost nothing
> > is incompatible with another platform, it just takes someone to write
> > it.
>
> Only a very limited subset of COM and DCOM exist for these platforms.
> Microsoft first farmed it out, and later took it back, partly because
> the third party vendor was providing too many of the hooks needed for
> CORBA Client GUI objects.

CORBA Client GUI objects... CORBA didn't even have a component architecture
until CORBA 3, which was very recently.

> Many of the features of COM, such as drag-and-drop, dde, and embedding
> are not supported for X11 interfaces.

Because those are desktop features, not distributed features.  They don't
work in DCOM at all, even between windows machines.

> Gnome and KDE provide similar
> functions that can be mapped via CORBA, but not without more than
> Microsoft makes available.  Otherwise, it would be trivial to run
> Microsoft COM/DCOM applications via Linux/UNIX or X11 clients.

Oh yeah, sure.  (that's sarcasm, btw).

> > > In UNIX a much higher percentage of the executable and static memory
> > > is shared by all processes.  Furthermore, a larger percentage of
> > > the buffer memory is also shared.
> >
> > And how do you quantify this statement?
>
> Compare memory utilization, number of processes on UNIX vs
> number of processes on Windows.  Number of threads on UNIX vs
> number of threads on Windows.  Compare the memory utilization for
> UNIX vs the memory utilization for Windows for the same number
> of processes.  It's not at all unusual to see 200 to 300 processes
> on a Linux workstation, and 2000 to 3000 processes on a Linux server.
> The statistics on both machines can be easily monitored.

And are completely irrelevant.  Windows applications are much more feature
rich than their typical Unix counterparts, and strictly ported applications
use similar amounts of memory.

> Of course, we're  comparing apples and oranges.  Linux runs thousands
> of processes that all call a very small number of highly optimised
> (to reduce thrashing, swapping, and cache misses) library routines
> that are rarely paged out.  Each process has only a few kbytes of unique
> code.  Many contain only a few hundred.  The "biggies" are X (nearly
> 12 meg) and ld-linux.so (sharable version of the Motif library used by
> Netscape).

I guess you've never heard of DLL's under Windows.

> > NT had "Unix way" pipes from day one.  It didn't "eventually" do it
> > that way.  My first edition "Inside Windows NT" by Helen Custer (1992)
> > describes them in detail.  You're confusing DOS and Win32.  Windows 95
> > was not the first Win32 architecture.
>
> Correct.  NT 3.51, if you considered that a viable and successful
> product (which didn't sell very well as anything other than a
> file-server), then you *could* say that NT has had UNIX style pipes
> since 1994.  After over 10 years of making pipes unworkable.  The key
> is that it wasn't considered a strategic means of communicating
> between processes.

No.  The first version of NT was 3.1, not 3.51.  And that was 1993.
3.51 was released in 1995 btw.

I wish I had a dime for every matter of fact statement you've made, only to
back down when challenged.

> Microsoft discouraged the use of direct access to pipes and sockets.
> They discouraged the use of streams.  They preferred that programmers
> call their proprietary APIs which would grab values from the stream,
> or put values onto the stream, but which gave the programmer little
> or no control of the data being streamed.  Even ISAPI tries to hide
> the formats of HTTP and HTML.

While MS provides these features, they're to help the developer out.
Nothing stops you from doing the work yourself (as you would have to do
under Unix).

> Compare this to UNIX, where parsers and scanners like Yacc and PERL
> give the application programmer direct access and control over both
> input streams and output streams.  Compare this to CORBA where backward
> compatibility is carefully protected.

Yacc and PERL exist for Windows, and have for quite some time.  Such scripts
have always been slow, even under Unix.  In fact, this is one of the main
reasons Apache provides modules, to prevent that.

> Backward compatibility of Windows is an oxymoron, and it's contrary
> to the revenue and marketing interests of Microsoft's current revenue
> structure.  If Microsoft DOES start selling service contracts, backward
> compatibility will suddenly become a VERY strategic benefit.
> Personally, as someone who does still have to deal with NT and
> Microsoft  pretty regularly, I'd LOVE to see Microsoft adopt a
> support based revenue model.  It would make my job as an integrator
> and architect MUCH easier.

Microsoft spends an inordinate amount of time making broken apps work on new
versions of their OS's.  Check out the compatibility section of your local
Windows 9x machine.

> > > Actually, the "UNIX way" is to have two blocks of memory.  This way,
> > > the sender can fill blocks WHILE the receiver is draining them.
> > >
> > > Unfortunately, unless you have the NT resource kit, most Windows
> > > programs are still written to a paradigm based on huge monolithic
> > > objects that must be read into memory in their entirety before the
> > > methods of the object can be invoked to modify the object.  If you
> > > have a large object such as a BMP file, or a Word document, this
> > > can involve megabytes between the processes.
> >
> > What the hell are you talking about?
> > What does the NT resource kit have to do with anything?
>
> I first found out about it when investigating the possibility of
> getting an MCSE.  Everyone I talked to, online and offline said
> that the first step was to read the NT resource kit cover to cover
> (pretty good reading, some of Microsoft's best).
>
> Of course, the NT resource kit also has all of the POSIX tools, and
> a bunch of UNIX-like tools that I also found VERY Interesting.
> This is ONE Microsoft product I was GLAD I purchased.  To me,
> NT isn't complete without the standard NT resource kit installation.

That doesn't explain your statement that:
> > > Unfortunately, unless you have the NT resource kit, most Windows
> > > programs are still written to a paradigm based on huge monolithic
> > > objects that must be read into memory in their entirety before the
> > > methods of the object can be invoked to modify the object.

Please explain how you came to this conclusion.

> > Please, pray tell, let us know how Unix changes all this.
>
> UNIX used delimited streams from the very beginning and established a
> set of standards for creating, passing, and parsing delimited streams
> that rarely required more than a few hundred bytes of memory.  I've
> created stream processors that handled reconciliation, auditing, and
> summarization of multiple 30 gigabyte tables with only a few hundred
> lines of PERL code.

All of which is doable under NT.
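The memory claim is at least structurally right about the style: a
delimited-stream filter holds one record at a time, whatever the total
input size.  A minimal sketch of the shape (not Rex's actual Perl; the
colon-separated field layout is invented for illustration):

```python
def total_third_field(lines):
    # Classic Unix filter shape: iterate a stream of delimited records,
    # keeping only one record and a running total in memory at a time.
    total = 0
    for line in lines:
        fields = line.rstrip("\n").split(":")
        total += int(fields[2])
    return total
```

Feed it `sys.stdin` or a 30-gigabyte file object and memory use stays flat;
for the three sample records `alice:42:100`, `bob:17:250`, `carol:99:50`
it returns 400.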

> Even though PERL is fully available on NT and Linux (though the NT
> version is a bit crippled by lack of many UNIX functions), Microsoft
> has strongly discouraged the use of it, promoting VBScript instead.

Please provide one reference.  Just one, of Microsoft discouraging Perl's
use.

HINT:  Not advocating something, is not discouraging it.

> > > Sure, some of the files are actually compressed binary executables.
> > > More building blocks.  Most of the packages however are essentially
> > > a combination of simple components combined with some PYTHON, PERL,
> > > or TCL scripts along with some BASH scripts.
> >
> > I'm not talking about the packages.
> > I'm talking about the database files used by RPM.
>
> Notice that these are orthogonal.  You can dump the database in text
> format if you'd like.

Totally irrelevant.  I said, if text files are so much superior on Unix, why
are the Red Hat package databases stored in binary format?

------------------------------

From: "Christopher Smith" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy
Subject: Re: Anonymous Wintrolls and Authentic Linvocates - Re: R.E. Ballard says Linux growth stagnating
Date: Tue, 22 Aug 2000 15:08:58 +1000


"Bob Hauck" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> On Tue, 22 Aug 2000 11:50:51 +1000, Christopher Smith
> <[EMAIL PROTECTED]> wrote:
>
> >Nothing in the OS could have stopped those things you're talking about
> >from doing what they did.  The attachments are executed, with the user's
> >permission, at the same privilege level as the user - just like they
> >would be under *nix.
>
> Except that none of my *nix mailers will execute a shell script when I
> click on it.

Neither will outlook.  You have to specifically tell it to.

Plus, there's nothing _stopping_ your *nix mailers from running a script by
piping it to the shell - it'd catch the same dumb people that .vbs
attachments do.

> You're right, the OS can't stop this.  But then, it *is*
> a poor design from a security point of view to have applications behave
> the way Outlook does.

It's a convenience thing.  Like all convenience things, it's also a security
risk, hence the warning and request for specific permission before it
actually _does_ something.

> >Given a video card costs about $10, and the VGA driver is about as
> >solid as they get, I'd say this particular weakness is very, very
> >theoretical.
>
> It is quite a bit less theoretical for embedded systems.

Whoa.  Who said anything about embedded systems?

> MS did make
> an "embedded NT", the thought of which frankly really cracks me up, but
> they had to do lots of contortions to do it.  One particularly stupid
> example being the question of how you respond to system popup boxes if
> there is no display or keyboard?  Answer...you make an automatic
> button-pusher daemon!  Hardly an elegant solution I think, but what
> else are they going to do with everything all glued to everything else?

I find it hard to believe that embedded NT features a GUI.  I also find it
hard to believe that the GUI is particularly hard to remove.

> Then there's the reported ~12 MB GUI overhead that can't be got rid of.
> Having that sit on disk isn't a big deal, having it sit in flash ROM
> gets pretty costly pretty quickly, especially if we are talking about a
> product produced in volume, even a fairly high-end one.  I guess that's
> another reason why you don't see many $800 Internet appliances and $100
> webcams and $200 routers using NT but lots of them running Linux and
> *BSD,

Do you have any actual evidence or documentation, or is it all "reported"?

> The problem with welding the GUI on and "integrating" everything the
> way MS is doing it is that you reduce the flexibility of the system.

It's only welded and integrated from a product description point of view.

> You end up with a product that works tolerably well for its intended
> "desktop and departmental server with 512 MB of RAM and infinite disk
> capacity" markets but is hard to customize for anything outside of
> that.  It sure makes it harder to scale _down_ and the PC architecture
> gets in the way when scaling _up_, so that's quite a bind to be in.




------------------------------

From: Courageous <[EMAIL PROTECTED]>
Crossposted-To: 
comp.os.ms-windows.nt.advocacy,comp.os.os2.advocacy,comp.sys.mac.advocacy
Subject: Re: Would a M$ Voluntary Split Save It?
Date: Tue, 22 Aug 2000 05:02:56 GMT

JS/PL wrote:

> Not only has the trial been riddled with unlawful acts against Microsoft,
> even the penalty hasn't been lawfully applied. According to the law, fine
> them 10 million and be done with it :-)

It would be within the spirit of the law to make those year-2000
adjusted dollars. :)



C//

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.advocacy) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Advocacy Digest
******************************
