Linux-Advocacy Digest #683, Volume #33           Wed, 18 Apr 01 06:13:05 EDT

Contents:
  Re: Linux is for the lazy ([EMAIL PROTECTED])
  Re: What's the point (Matthew Gardiner)
  Re: To Eric FunkenBush ("Erik Funkenbusch")
  Re: What's the point (robert bronsing)
  Re: To Eric FunkenBush ("Erik Funkenbusch")
  Re: Blame it all on Microsoft ("Erik Funkenbusch")
  Re: Could Linux be used in this factory environment ? ("Erik Funkenbusch")
  Re: Could Linux be used in this factory environment ? ("Erik Funkenbusch")
  Re: Who votes for Sliverdick to be executed: AYEs:3 NAYS:0 (1 ABSTAIN) ("Marksman")
  Re: To Eric FunkenBush (Donovan Rebbechi)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Linux is for the lazy
Date: 18 Apr 2001 17:03:05 +1000
Reply-To: [EMAIL PROTECTED]

Scott Moore <[EMAIL PROTECTED]> writes:
>Brian Langenberger wrote:

>Well, you just wanted a fight,

Sheesh! I have just read through half a dozen follow-ups that took the
original article as an attack on linux, rather than the declaration of
appreciation it was. Is it really *that* hard to understand?

If there was any doubt, the last sentence should have made it clear:

   if you want to add a really successful feature to the Unix UI, make
   it a feature that facilitates laziness - because I'm really
   enjoying not having to do so much work... :)

The guy is enjoying having his laziness catered to!

Bernie "lazy as anything" Meyer
-- 
Thou wilt show my head to the people: it is worth showing
Georges Jacques Danton
French revolutionary
To his executioner, 5 April 1794

------------------------------

From: Matthew Gardiner <[EMAIL PROTECTED]>
Subject: Re: What's the point
Date: Wed, 18 Apr 2001 19:46:14 +1200

<snip>
> Remember to type in the full path to su! You wouldn't want to email
> anyone your password now would you?
How is that possible, to send your password via email?

Matthew Gardiner

-- 
I am the resident BOFH (Bastard Operator From Hell)

If you don't like it, you can go [# rm -rf /home/luser] yourself

Running SuSE Linux 7.1

The best of German engineering, now in software form

------------------------------

From: "Erik Funkenbusch" <[EMAIL PROTECTED]>
Subject: Re: To Eric FunkenBush
Date: Wed, 18 Apr 2001 04:17:27 -0500

"The Ghost In The Machine" <[EMAIL PROTECTED]> wrote in
message
> >For some, but as of today, very few people even use the STL, much less
> >their own templates.
>
> Sez you.  Personally, I think the STL is a very well-engineered idea,

Of course it is.  I made no statements about the quality or usefulness of
STL or templates, only that very few people (in comparison to the majority)
are using them.  My own experience is that less than 10% of the C++
programmers even know how to use std::string, much less containers or
algorithms.
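
For anyone following along, here is a minimal sketch of the kind of
std::string / container / algorithm code being referred to (plain standard
C++, nothing vendor-specific assumed):

    // Sort a list of names and print them: std::string, a container
    // (std::vector) and an algorithm (std::sort) working together.
    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    int main()
    {
        std::vector<std::string> names;
        names.push_back("Stepanov");
        names.push_back("Stroustrup");
        names.push_back("Koenig");

        std::sort(names.begin(), names.end());

        for (std::vector<std::string>::const_iterator it = names.begin();
             it != names.end(); ++it)
            std::cout << *it << '\n';

        return 0;
    }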

> although as usual Microsoft, by actually implementing it to spec,
> bodged it somehow. :-)  It turns out MSVC++ 6.0, or the header files
> compiled thereby, want std:: everywhere, unless one wants 'using
> namespace std;' near the top of his programs.  As far as I understand
> it, this is to perfect spec, but it is annoying.

This is required of any compliant C++ compiler.  Not doing so violates the
standard and will make your code non-portable, annoyance or not.  That's the
way it has to be.

> Then again, g++ is freeware; just because it's freeware doesn't
> mean it's perfect.  I'm not even sure g++ and gdb interact correctly
> on Linux yet -- I've had gdb think it's deep in string.h when it's
> actually somewhere else.  How I report that as a bug, I don't know.
> I can't reproduce it on tiny programs.  And of course g++ and std::
> don't get along well for some reason.

They get along fine, it's just that g++ ignores the std namespace
completely.
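
Either way, code that names things through std:: (or pulls them in with a
using-directive) builds on both compilers; for reference, the two conforming
forms look like this:

    // Both forms are accepted by a conforming compiler; relying on
    // unqualified names resolving without either form is what makes
    // code non-portable.
    #include <iostream>
    #include <string>

    void explicitly_qualified()
    {
        std::string s = "hello";
        std::cout << s << std::endl;
    }

    using namespace std;      // or, more narrowly: using std::string;

    void via_using_directive()
    {
        string s = "hello";
        cout << s << endl;
    }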





------------------------------

From: robert bronsing <[EMAIL PROTECTED]>
Subject: Re: What's the point
Date: Wed, 18 Apr 2001 11:24:28 +0200



Eric wrote:

> So my question is, for the home user, what's the point?  Has anyone learned
> Linux from the ground up just to use it at home?  What's the advantage?  I'm
> convinced Linux is great if you want to run a server or whatever, but is
> there a point in home users running Linux?
> 
> thanks - eric

I have learned Linux from the ground up and I only use it at home.
At work I must use Windows, and I am frustrated daily (at least) with
bluescreens and/or reboots. That costs a lot of time, time I could use
for better things. And at work, that is important.
Before I started with Linux I only had experience with ms-dos and
windows. I had been using computers for >15 years (started with a
ZX-81 and a TI-99/4a), but I had never used UNIX or anything like it. In
those years of hobbying with computers I grew increasingly frustrated
by a number of things at home using windows, and I made a radical
switch to linux in 1999. I first used RedHat but switched to Slackware
this year. Hey, it's free, so I can try them all if I like.
My main frustration wasn't even the instability of windows. At home I
don't run critical applications or anything, so a reboot or two isn't a
problem. The problem was (is) that with windows, you can only do the
things that microsoft programmers think you may want to do, and only if
they allow you to. For example, if you want to remove a certain inet
browser, you can only do that if you know exactly what you're doing;
regedit is not something to mess around with if you don't know exactly
what it does. Another major frustration was that there is no serious
CLI. All you have is a mouse. Sorry, but I am much faster typing what
I want than searching for a place to click and hoping it does what I
think (hope) it does. With a CLI, I tell the computer what it should do.
I don't want my computer to think for me.
With linux, there is no one appearance of your computer. You can
configure the thing to taste. You are forced to think about what you
want to do with a computer.
I don't mind that it takes me some time to learn (about) Linux. When I
first powered up my TI-99/4a I didn't know what to do either. It takes
some time. My first wrestle with ms-dos also meant I needed to learn
what the commands were, what the peculiarities were, how things worked.
And it's not exactly rocket science, it's just something you need to
learn. Learning to speak Arabic doesn't happen overnight either, and
I do think Linux is easier to learn.

I think it's too easy to say 'Oh well, linux is so hard, I'll just stop
using it'. Stick with it, and at some point you'll find that you start to
know and understand things... and that's when the fun starts.






-- 
Robert Bronsing

------------------------------

From: "Erik Funkenbusch" <[EMAIL PROTECTED]>
Subject: Re: To Eric FunkenBush
Date: Wed, 18 Apr 2001 04:40:25 -0500

"Chronos Tachyon" <[EMAIL PROTECTED]> wrote in
message
> >> OK, let me get this straight.  You're arguing that the only reason a
> >> printf-style "Hello, World!" compiled by a C++ compiler is so much
> >> larger is that it uses stdio instead of iostream?  *groan*
> >
> > Where the hell did you get that?  I said that C code compiled as C++ is
> > larger because the startup code has to do more.  I said that it doesn't
> > HAVE to be this way though, a smart enough linker could know to not
> > include exception support if you don't use anything that uses
> > exceptions, and it could know to link in the C startup code if no C++
> > features are used that require the C++ startup code.
>
> OK, sounds like we're not on the same wavelength.  I'm trying to argue
> against replacing separate C and C++ compilers with C/C++ compilers.  I'll
> confine my response to the rest of your paragraph to C++.

Your argument is that having a unified compiler would make C code more
bloated and take longer to compile.  My argument is that this isn't
necessarily true.

> It would be extremely tricky to turn off many C++ features when you
> compile a program, for one simple reason:  how the hell can the compiler
> know for certain that no libraries will be linked in that require support
> for them?

You're not reading what I wrote.  I said the *LINKER* could be smart enough
not to link in exception code when none of the object files (and libraries
are just containers of object code) use exceptions.  I'm also saying that
the linker could be smart enough not to link in C++ startup code when there
is no need to call constructors of global objects or initialize the
exception code.

No, the compiler doesn't know anything about the other object modules, but
the linker does.  The compiler doesn't generate exception code in the object
modules unless you put a try block in there.  If the linker detects that none
of the object modules have exception code, there is no reason to link in the
exception handling startup code.
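
As a concrete illustration, a translation unit like the one below uses no
try blocks, no throws, and no global objects with constructors, so nothing
in it needs the exception-handling or C++ static-initialization startup
code; whether a given linker actually omits that code is exactly the point
in question (g++, for instance, at least lets you request it explicitly
with -fno-exceptions):

    /* C-style code compiled as C++: no try/catch, no throw, and no global
       objects with constructors, so the C++-specific startup machinery is
       never needed by this module. */
    #include <stdio.h>

    static int add(int a, int b)
    {
        return a + b;
    }

    int main()
    {
        printf("%d\n", add(2, 3));
        return 0;
    }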

> If a library requires support for exceptions, any program linked to it
> must also be compiled with exception support (the converse is not
> generally true).

Of course, and the linker should detect that.

> If you use only static libraries, and the object files in that
> library have some sort of header describing what C++ features they were
> compiled with, then it becomes possible.  Dynamic compile-time linking is
> psychotic enough in C++ without this monkey business, and
> GetProcAddress/dlsym dynamic loading is right out the window.

Dynamic libraries are a slightly different issue.  If you're linking to
import library stubs, then the stubs will have exception support in them
that the linker can detect.  If you're LoadLibrary and GetProcAddress'ing
them, then you're in the same boat as you are today if you do that and
manually turn off exception handling.
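
For the GetProcAddress case, something like the sketch below is what's
meant; the module and export names are made up purely for illustration, and
the point is that neither the compiler nor the linker can see inside a
module resolved this way:

    // Runtime dynamic loading on Win32: the toolchain knows nothing about
    // "plugin.dll" or "PluginEntry" (both hypothetical names) at link time.
    #include <windows.h>
    #include <stdio.h>

    typedef int (__stdcall *PLUGIN_ENTRY)(void);

    int main()
    {
        HMODULE mod = LoadLibraryA("plugin.dll");
        if (mod == NULL) {
            printf("LoadLibrary failed: %lu\n", GetLastError());
            return 1;
        }

        PLUGIN_ENTRY entry = (PLUGIN_ENTRY)GetProcAddress(mod, "PluginEntry");
        if (entry != NULL)
            printf("PluginEntry returned %d\n", entry());

        FreeLibrary(mod);
        return 0;
    }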

> > Actually, pre-compiled headers were invented long before the first
> > commercial C++ compiler (borland), and in C-only compilers.  Remember that
> > PC's were much slower back then, as were their disk drives, and much of
> > the overhead of compiling was in disk activity.  Precompiling the headers
> > was a big win.
>
> Fair enough, I'm not well versed in compiler history prior to about 1995.

Some of the first ANSI C compilers of about 1989/90 had this feature.  For
instance, Lattice C (which MSC was based on), Zortech C, etc...

> > My point of the 3000 #includes was to show you that they *always* affect
> > compile times, even if you only include 1.
>
> Yes, compile time is affected by the number of headers you include.  I
> never said that including more headers than necessary was ALWAYS harmless,
> I said it was USUALLY harmless, where "harmless" means "making so minute a
> difference as to be imperceptible".

C and C++ have different dependency graphs.  A typical C++ module also
includes a header file containing its class definition, whereas the typical C file
will usually only include system includes and perhaps some global extern
definitions.

Where C++ begins to grow is that the C++ header files often include other
header files they depend on, and the number of files to be processed grows
exponentially.  Again, this can be curbed with proper dependency management,
but most programmers don't seem to do this very well.
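
A small example of the kind of dependency management meant here:
forward-declare anything you only use by pointer or reference, and include
the full header in the one .cpp file that actually needs it (the class
names below are hypothetical):

    // widget.h -- only a Logger* is needed, so a forward declaration
    // replaces #include "logger.h"; files that include widget.h no longer
    // pull in logger.h (or anything logger.h itself includes).
    class Logger;

    class Widget
    {
    public:
        explicit Widget(Logger* log);
        void draw();
    private:
        Logger* log_;
    };

    // widget.cpp is the only file that needs the full definition:
    //     #include "widget.h"
    //     #include "logger.h"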

> > Don't believe me?  Write a
> > standard C windows SDK application, then use the precompiled header
> > option and see the difference.
>
> Non sequitur, unless you compile your application with Cygwin or another
> pure C compiler.  Most commercial compilers in 'doze are C++, with a thin
> veneer of ANSI C compatibility.  And yes, I've written apps with the Win32
> SDK, and PCH helps.  Slightly.  It makes a much bigger difference when
> compiling MFC apps.

C and C++ have many differences.  In fact, MSC uses two separate
compilers, one for C and one for C++, invoked from the same command
line.  This is not a "thin veneer" on a C++ compiler.

> >> An interesting note:  I did some experiments on a 2.4 ramfs filesystem,
> >> and it seems that g++ is actually more efficient at parsing C headers
> >> than gcc.
> >> I would attribute this to the necessity of having an efficient parser
> >> for a complex language like C++; something tells me that two different
> >> teams wrote the C and C++ parsers in GCC.  g++ seems to be roughly O(n log n)
> >> in number of lines when parsing C code, whereas gcc approaches O(n^2).
> >> Yech, bad algorithms.
> >
> > Guess that blows your theory that C++ is always slower at compiling.
>
> Yep, defeat snatched from the jaws of victory.  g++ still produces (much)
> larger executables for C code, however, so I think I'll stick with gcc.
> In principle, however, C code is still easier to parse, and a "perfect" C
> compiler will compile code faster than a "perfect" C++ compiler.

Again, not necessarily.  You are making statements without anything to back
them up.  I've already given you examples that counter them.

> >> Yes, exception handling specifically cannot be turned off.  You can't
> >> turn off C++, however, and that's my problem with the idea of merging C
> >> and C++ compilers.
> >
> > No, but you can make a merged C/C++ compiler parse code just as
> > efficiently as a standalone C++ compiler.
>
> [Note: that was a typo, I meant "exception handling ... CAN be turned
> off".]
>
> I don't think I've ever argued that, but it doesn't really make sense to
> turn a C++ compiler into a C/C++ compiler, since a compliant C++ compiler
> can already compile the majority of ANSI C89 code without a hitch.

I agree that it doesn't make sense, primarily because of the differences in
the languages.  C and C++ treat many things differently: arrays are treated
differently, void is treated differently, string literals are treated
differently, and they produce very different code.
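
For concreteness, two of those differences; the snippet is legal C, while a
C++ compiler rejects the first statement in main and gives the second a
different answer:

    /* Legal C that a C++ compiler handles differently. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int *p = malloc(sizeof *p);   /* C: void* converts implicitly;
                                         C++: error without a cast      */

        printf("%u\n", (unsigned)sizeof('a'));  /* C: sizeof(int), usually 4;
                                                   C++: sizeof(char) == 1   */
        free(p);
        return 0;
    }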

My argument is primarily against the notion that a C++ compiler must be more
inefficient at compiling plain C code than a standalone C compiler.

> > structs without function members are identical to C structs.  The standard
> > refers to them as POD's (or Plain Old Data).  Again, you are confusing the
> > fact that C++ programs *CAN* be more complex to parse with the claim that
> > they are *ALWAYS* more complex to parse.
>
> True; however, the code that supports cross-dressing structs is obviously
> in there, making the compiler pudgier and adding more complexity to the
> decision tree that the compiler has to navigate.  These can threaten cache
> locality and branch prediction, respectively, which are critical to good
> performance (on x86, at least).

Yes, it does add more complexity to the decision tree, but as I already
said, if you weight the decision tree to favor common operations then you
can remove the complexity hit when compiling the most common code.
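
For reference, the kind of POD struct the quoted paragraph is talking
about; a C compiler and a C++ compiler parse and lay it out the same way:

    /* A POD ("plain old data") struct: no member functions, no access
       specifiers, no base classes.  Valid, and identical, in C and C++. */
    struct Point
    {
        int x;
        int y;
    };

    struct Point origin = { 0, 0 };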

> >> I think you're missing the forest for the trees here.  My point is that
> >> it's a symptom of growing language bloat.  Surely N, for instance, is
> >> unbefitting of a language that's meant to be little more than a
> >> glorified portable assembler.
> >
> > And my point is that a C++ compiler's added complexity does *NOT* have to
> > give you a hit if you're not using that complexity.  Yes, in many compilers
> > today you do.  But that doesn't mean it has to be this way.
>
> My issue is with things like _Complex that are better left to user-created
> libraries in C instead of bloating the compiler and the standard library.
> Don't people think of boot floppies anymore?  I dread the day when
> stripped and optimized glibc hits the 4 MB mark.

So don't use dynamic linking then, use static linking.





------------------------------

From: "Erik Funkenbusch" <[EMAIL PROTECTED]>
Crossposted-To: comp.theory,comp.arch,comp.object
Subject: Re: Blame it all on Microsoft
Date: Wed, 18 Apr 2001 04:44:22 -0500

"mlw" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Erik Funkenbusch wrote:
> >
> > "mlw" <[EMAIL PROTECTED]> wrote in message
> > > It is well documented how Microsoft limited Borland's access to
> > > Windows information.
> >
> > What are you talking about?  Even if true, limiting access doesn't make
> > your product buggy.  Programs crash because they do something like
> > dereference invalid memory.  The IDE simply doesn't do what it's supposed
> > to do in many circumstances, which is poor programming and nothing else.
>
> When making a development environment, one needs all the lowest level
> information available. In Windows, a debugger is not a trivial matter.
> Borland did a great job with the information available. For a couple years,
> they had the best compiler and environment.

The debugger is mostly x86-oriented, not Windows-oriented.  In fact, MS
didn't even write their own debuggers at first, but rather licensed them
from NuMega.  What does that tell you about how difficult it is for a third
party to write a debugger for Windows?




------------------------------

From: "Erik Funkenbusch" <[EMAIL PROTECTED]>
Subject: Re: Could Linux be used in this factory environment ?
Date: Wed, 18 Apr 2001 04:46:16 -0500

"GreyCloud" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Erik Funkenbusch wrote:
> >
> > "Jean-David Beyer" <[EMAIL PROTECTED]> wrote in message
> > > I thought that a few years ago, the U.S.Navy tried a computer
> > > controlled battleship, and the computers ran Windows NT (probably 3.51
> > > in those days), and it crashed so bad the ship had to be towed into
> > > port. (I may not have the facts exactly correct, but it was pretty
> > > much like this.) Maybe the computers were not exactly your
> > > bargain-basement PCs, but the software must have been. If the U.S.Navy
> > > is dumb enough to use Microsoftware in a battle-critical system, why
> > > would not some private industry be just as dumb?
> >
> > Why let the facts get in the way of a good dis, right?  Your lack of
> > knowledge on the issue doesn't seem to prevent you from jumping to
> > conclusions.
> >
> > The facts in the matter are a) that it wasn't a battleship, and b) that
> > they were running a beta version of the control software which did not
> > validate entry fields.  As such, when an operator entered a 0 into a
> > field, it was stored in the database, causing all subsystems that
> > depended on that information to fail with a divide by zero exception.
> >
> > The application could not be restarted because every time they restarted
> > it, it would re-read the data values and crash again, thus the ship was
> > dead in the water.  Further, the ship wasn't towed in; the ship had
> > alternate propulsion mechanisms onboard because it was an experimental
> > project running beta software.
> >
> > The Navy and the Canadian company that wrote the software stated that
> > the problem was not related to NT in any way.  In fact, the Canadian
> > contractor laid the blame on the Navy for not installing their validated
> > version before the incident, which would have prevented the problem from
> > ever occurring.
> >
> > The Navy, however, believed that they should shake out the vessel and
> > see where the potential failures might be, so that in real emergency
> > situations they would know how to respond.
>
> For all concerned here on this topic: If you know, don't talk... If you
> are a contractor for the Navy or the U.S. Gov., you have some form of
> clearance...  don't talk.
> You could lose your clearance status and hence your job(s).  Heads up
> fellows.

The information is available in public documents.  Here's a good place to
start:

http://www.jerrypournelle.com/reports/jerryp/Yorktown.html




------------------------------

From: "Erik Funkenbusch" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.hardware,comp.os.linux.misc
Subject: Re: Could Linux be used in this factory environment ?
Date: Wed, 18 Apr 2001 04:51:57 -0500

"Brent R" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Erik Funkenbusch wrote:
> >
> > "Jean-David Beyer" <[EMAIL PROTECTED]> wrote in message
> > > I thought that a few years ago, the U.S.Navy tried a computer
> > > controlled battleship, and the computers ran Windows NT (probably 3.51
> > > in those days), and it crashed so bad the ship had to be towed into
> > > port. (I may not have the facts exactly correct, but it was pretty
> > > much like this.) Maybe the computers were not exactly your
> > > bargain-basement PCs, but the software must have been. If the U.S.Navy
> > > is dumb enough to use Microsoftware in a battle-critical system, why
> > > would not some private industry be just as dumb?
> >
> > Why let the facts get in the way of a good dis, right?  Your lack of
> > knowledge on the issue doesn't seem to prevent you from jumping to
> > conclusions.
> >
> > The facts in the matter are a) that it wasn't a battleship, and b) that
> > they were running a beta version of the control software which did not
> > validate entry fields.  As such, when an operator entered a 0 into a
> > field, it was stored in the database, causing all subsystems that
> > depended on that information to fail with a divide by zero exception.
> >
> > The application could not be restarted because every time they restarted
> > it, it would re-read the data values and crash again, thus the ship was
> > dead in the water.  Further, the ship wasn't towed in; the ship had
> > alternate propulsion mechanisms onboard because it was an experimental
> > project running beta software.
> >
> > The Navy and the Canadian company that wrote the software stated that
> > the problem was not related to NT in any way.  In fact, the Canadian
> > contractor laid the blame on the Navy for not installing their validated
> > version before the incident, which would have prevented the problem from
> > ever occurring.
> >
> > The Navy, however, believed that they should shake out the vessel and
> > see where the potential failures might be, so that in real emergency
> > situations they would know how to respond.
>
> Still, I think their point was that a single application brought the
> entire show down... a situation that's critical when it really matters
> (which admittedly it usually doesn't).

It brought the whole show down because the application was central to the
entire system.  When the application won't run, neither does the system.
That has nothing to do with the OS.

> I've been an MS defender in here... still I would never use NT to do
> something like that... that's just not what it's made for. UNIX is more
> aptly suited in that role.

Unix is neither more nor less aptly suited.  Please explain how the same
design would somehow make the application work in Unix.
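
To be concrete about the failure described above, the bug was a missing
input check, and that check looks the same on any OS; a minimal sketch (the
field name here is hypothetical):

    // Reject an invalid entry before it reaches the shared database,
    // instead of letting every dependent subsystem divide by it later.
    #include <iostream>

    bool store_rpm_setting(int rpm)
    {
        if (rpm <= 0) {                       // validate the entry field
            std::cerr << "rejected invalid value: " << rpm << '\n';
            return false;
        }
        // ... write the validated value to the database ...
        return true;
    }

    int main()
    {
        store_rpm_setting(0);       // rejected, no divide-by-zero downstream
        store_rpm_setting(1200);    // accepted
        return 0;
    }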





------------------------------

Reply-To: "Marksman" <[EMAIL PROTECTED]>
From: "Marksman" <[EMAIL PROTECTED]>
Crossposted-To: misc.survivalism,alt.fan.rush-limbaugh,soc.singles,alt.society.liberalism,talk.politics.guns
Subject: Re: Who votes for Sliverdick to be executed: AYEs:3 NAYS:0 (1 ABSTAIN)
Date: Wed, 18 Apr 2001 06:04:41 -0400

AYE!

"David L. Moffitt" <[EMAIL PROTECTED]> wrote in message
news:9bg6jm$207a$[EMAIL PROTECTED]...
> %%%% AYE!!!!
>
> "Rob Robertson" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> > Aaron R. Kulkis wrote:
> >
> > <snip>
> >
> >  Re:
> >
> >  "Let's take a nice, Glen "Sliverdick" Yeadon style pure-democratic
> >   vote:
> >
> >   All for putting Glen "Sliverdick" Yeadon up against the wall, and
> >   filling him full of lead, say "AYE!"  All opposed, say "NAY"
> >
> >   Let's see how much Sliverdick likes democracy now."
> >
> > > AYES:3
> > > NAYS:0
> >
> >   ABSTAIN:1
> >
> >  An example of the dangers of pure democracy is all well and good,
> > but I reject pure democracy even if Glen advocates it and wouldn't
> > vote either way on the matter; there is no moral justification for
> > the action or the mass decision behind it.
> >
> > _
> > Rob Robertson
>
>



------------------------------

From: [EMAIL PROTECTED] (Donovan Rebbechi)
Subject: Re: To Eric FunkenBush
Date: 18 Apr 2001 10:08:19 GMT

On Wed, 18 Apr 2001 04:17:27 -0500, Erik Funkenbusch wrote:
> "The Ghost In The Machine" <[EMAIL PROTECTED]> wrote in
> message
>> >For some, but as of today, very few people even use the STL, much less
>> >their own templates.
>>
>> Sez you.  Personally, I think the STL is a very well-engineered idea,
> 
> Of course it is.  I made no statements about the quality or usefulness of
> STL or templates, only that very few people (in comparison to the majority)
> are using them.  My own experience is that less than 10% of the C++
> programmers even know how to use std::string, much less containers or
> algorithms.

When I think about this, it doesn't surprise me in the least. I'm teaching
C++ part-time at a university, and a lot of the *instructors* barely know the
STL, ditto for textbook writers. Ignorance trickles down from textbook
writers to instructors to students (or from textbook authors directly to
self-studiers).

I suppose a lot of this has to do with the fact that the inclusion of the
STL in the standard is still fairly recent.

-- 
Donovan Rebbechi * http://pegasus.rutgers.edu/~elflord/ * 
elflord at panix dot com

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to comp.os.linux.advocacy.

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Advocacy Digest
******************************
