RE: Advanced Linux Kernel/Enterprise Linux Kernel

2000-11-14 Thread Marty Fouts

Er, um, yes.  I stand corrected.


-Original Message-
From: Steve VanDevender [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, November 14, 2000 11:44 AM
To: Marty Fouts
Cc: '[EMAIL PROTECTED]'; Michael Rothwell; Linux kernel
Subject: RE: Advanced Linux Kernel/Enterprise Linux Kernel


Marty Fouts writes:
 > Actually, you have the sequence of events slightly out of order.  AT&T,
 > specifically Bell Labs, was one of the participants in the program that
 > would develop Multics. AT&T opted out of the program, for various reasons,
 > but it continued apace.  The PDP-8 of fame was one that, according to
 > Thompson, happened to be available and unused.

The original system on which UNIX development started was not a PDP-8,
but a PDP-7.  The earliest UNIX was also written in assembler.  Thompson
and Ritchie developed C as a higher-level implementation language during
the process of porting UNIX from the PDP-7 to the PDP-11.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Advanced Linux Kernel/Enterprise Linux Kernel

2000-11-14 Thread Marty Fouts

I would agree that Multics probably wouldn't qualify as a platform for
enterprise computing these days, but it is interesting to examine the nine
stated goals and see how they relate to enterprise computing.  They are
applicable goals, although they certainly don't qualify as the only set, and
could well be expanded given what has been learned in the 35 years since.

Marty

-Original Message-
From: Buddha Buck [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, November 14, 2000 9:52 AM
To: Michael Rothwell; [EMAIL PROTECTED]
Cc: Linux kernel
Subject: Re: Advanced Linux Kernel/Enterprise Linux Kernel


At 01:10 PM 11/14/00 -0500, Michael Rothwell wrote:
>"Richard B. Johnson" wrote:
>
> > Relating some "nine goals of 'Enterprise Computing'" to Multics is
> > the bullshit.
>
>Funny, I got those off the "Multics FAQ" page.

It may be reasonable to question them as "goals of 'Enterprise Computing'".

I found, on http://www.multicians.org/general.html, a list of those same 
nine goals, introduced by the sentence "As described in the 1965 paper 
Introduction and Overview of the Multics System by Corbató and Vyssotsky, 
there were nine major goals for Multics:"

While those were the goals of Multics, it is not at all clear that Multics 
would classify these days as a platform for "Enterprise Computing".  I'll 
note that the word "enterprise" does not appear in either the general FAQ 
page I cited or in the linked article it cites.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Advanced Linux Kernel/Enterprise Linux Kernel

2000-11-14 Thread Marty Fouts


Dick Johnson wrote:

> The original DEC was "given" to W. M. Ritchie and his staff in
> "Department 58213". He wanted to use it for games. To do so, required
> him to write some sort of OS, which became Unix.

A typo, I assume.  That's D(ennis) Ritchie.

> As I said, when Multics was designed, the only criteria as to
> get it to work on a DEC. It didn't. To use this development as
> an example of "enterprise computing" is absurd and belies its
> well documented history.

How odd, then, that Corbato's '65 paper specifically describes it as a
research effort on a GE system; both Ritchie and Thompson have written
to similar effect, and Glasser et al. wrote:

In the late spring and early summer of 1964 it became obvious that greater
facility in the computing system was required if time-sharing techniques
were to move from the state of an interesting pilot experiment into that of
a useful prototype for remote access computer systems. Investigation proved
computers that were immediately available could not be adapted readily to
meet the difficult set of requirements time-sharing places on any machine.
However, there was one system that appeared to be extendible into what was
desired. This machine was the General Electric 635.

Multics grew out of research into the design of timesharing systems at MIT,
and is from the same family of systems as ITS.  It had a long and
interesting history and was supported by Honeywell into the 90s.

There were several other interesting OSes developed in that time frame, such
as SDS's CP-V for the Sigma series, but most of them were not described in
the literature and so are long forgotten.

Marty
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Advanced Linux Kernel/Enterprise Linux Kernel

2000-11-14 Thread Marty Fouts

Actually, you have the sequence of events slightly out of order.  AT&T,
specifically Bell Labs, was one of the participants in the program that
would develop Multics. AT&T opted out of the program, for various reasons,
but it continued apace.  The PDP-8 of fame was one that, according to
Thompson, happened to be available and unused.

-Original Message-
From: Richard B. Johnson [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, November 14, 2000 10:01 AM
To: Michael Rothwell
Cc: Linux kernel
Subject: Re: Advanced Linux Kernel/Enterprise Linux Kernel


On Tue, 14 Nov 2000, Michael Rothwell wrote:

> "Richard B. Johnson" wrote:
> 
> > Relating some "nine goals of 'Enterprise Computing'" to Multics is
> > the bullshit. 
> 
> Funny, I got those off the "Multics FAQ" page.
> 
> -M
> 


History is being rewritten. When Multics was being developed by AT&T,
it was found to be unusable on the DEC. It was a PDP-8, so the
story is told.  General Electric got the first contract to make
a machine specifically designed for Multics and development
continued.

The original DEC was "given" to W. M. Ritchie and his staff in
"Department 58213". He wanted to use it for games. To do so, required
him to write some sort of OS, which became Unix.

As I said, when Multics was designed, the only criteria as to
get it to work on a DEC. It didn't. To use this development as
an example of "enterprise computing" is absurd and belies its
well documented history.


Cheers,
Dick Johnson

Penguin : Linux version 2.4.0 on an i686 machine (799.54 BogoMips).

"Memory is like gasoline. You use it up when you are running. Of
course you get it all back when you reboot..."; Actual explanation
obtained from the Micro$oft help desk.


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Advanced Linux Kernel/Enterprise Linux Kernel

2000-11-14 Thread Marty Fouts

Sorry, wrong answer, but thanks for playing.  Multics was not abandoned as
unusable, and was, in fact, widely used, sometimes in what would now be
called "mission critical" applications, for a long time. While Honeywell
finally stopped supporting Multics sometime in the 90s, I was surprised and
delighted to find that there are still Multics systems running.

There may be many people on this list who know the history of Unix, but from
this thread, I'm thinking that perhaps there is some confusion between the
history and the mythology.

Perhaps we could get AT&T, Lucent, or whoever owns the copyright these
days, to reprint the "Unix" issue of the Bell System Technical Journal.

Marty

-Original Message-
From: Richard B. Johnson [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, November 14, 2000 8:26 AM
To: Michael Rothwell
Cc: Linux kernel
Subject: Re: Advanced Linux Kernel/Enterprise Linux Kernel


On Tue, 14 Nov 2000, Michael Rothwell wrote:

> One historically significant "Enterprise" OS is Multics. It had nine
> major goals. Perhaps we should think about how Linux measures up to
> these 1965 goals for "Enterprise Computing."
>

Multics???  No way. It was abandoned as unusable and part of the
kernel code, basically the boot loader, was modified to become
part of Unix.

You have way too many persons on this list who know the history of
Unix to try this BS.

Cheers,
Dick Johnson

Penguin : Linux version 2.4.0 on an i686 machine (799.54 BogoMips).

"Memory is like gasoline. You use it up when you are running. Of
course you get it all back when you reboot..."; Actual explanation
obtained from the Micro$oft help desk.


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Advanced Linux Kernel/Enterprise Linux Kernel

2000-11-14 Thread Marty Fouts

Sorry, wrong answer, but thanks for playing.

When Multics was developed (early 60s), DEC equipment wasn't even
interesting to much of an audience.  The original equipment Multics ran on
was built by one of the "seven dwarf" computer companies (GE), which was
soon to get out of the computer business altogether.

I would suggest Organick's book, if I could recall the title.

Marty

-Original Message-
From: Richard B. Johnson [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, November 14, 2000 8:42 AM
To: Michael Rothwell
Cc: Linux kernel
Subject: Re: Advanced Linux Kernel/Enterprise Linux Kernel


On Tue, 14 Nov 2000, Michael Rothwell wrote:

> "Richard B. Johnson" wrote:
> > Multics???  [..] way too many persons on this list who know the history of
> > Unix to try this BS.
> 
> So, you're saying their nine goals were bullshit? Multics had a lot of
> problems. But it did a lot of ground-breaking. Perhaps you should reply
> to the nine goals, or the general topic of "Enterpriseness," rather than
> merely express your irrelevant hatred for Multics.
>

Relating some "nine goals of 'Enterprise Computing'" to Multics is
the bullshit. When Multics was being developed, the singular goal
was to make an operating system that worked on DEC Equipment without
having to use DEC software. The emphasis was on trying to make it
work period.


Cheers,
Dick Johnson

Penguin : Linux version 2.4.0 on an i686 machine (799.54 BogoMips).

"Memory is like gasoline. You use it up when you are running. Of
course you get it all back when you reboot..."; Actual explanation
obtained from the Micro$oft help desk.


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Installing kernel 2.4

2000-11-07 Thread Marty Fouts

There's been a bunch of related work done at the Oregon Graduate Institute
by Calton Pu and others.  See
http://www.cse.ogi.edu/DISC/projects/synthetix/publications.html for a list
of papers.



> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, November 07, 2000 3:25 PM
> To: Linux Kernel Mailing List
> Cc: [EMAIL PROTECTED]
> Subject: Re: Installing kernel 2.4
> 
> 
> 
> > There are tests for all this in the feature flags for intel and
> > non-intel CPUs like AMD -- including MTRR settings.  All of 
> this could
> > be dynamic.  Here's some code that does this, and it's similar to
> > NetWare.  It detects CPU type, feature flags, special instructions,
> > etc.  All of this on x86 could be dynamically detected.
> 
> Detecting the CPU isn't the issue (we already do all this), 
> it's what to
> do when you've figured out what the CPU is. Show me code that can
> dynamically adjust the alignment of the routines/variables/structs
> dependent upon cacheline size.
> 
> regards,
> 
> Davej.
> 
> -- 
> | Dave Jones <[EMAIL PROTECTED]>  http://www.suse.de/~davej
> | SuSE Labs
> 
> -
> To unsubscribe from this list: send the line "unsubscribe 
> linux-kernel" in
> the body of a message to [EMAIL PROTECTED]
> Please read the FAQ at http://www.tux.org/lkml/
> 
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Installing kernel 2.4

2000-11-07 Thread Marty Fouts

There is a variation of #2 that is often good enough, based on some research
work done at (among other places) the Oregon Graduate Institute.  I don't have
the references handy, but you might want to look for papers on "sandboxing"
authored there.

The basic idea is similar to the one used by many 'recompile on the fly'
systems, and involves marking the code in such a way that even inline pieces
can be replaced on the fly.  Very useful for things like system specific
memcpy implementations.
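
To make that concrete, here is a minimal user-space sketch in C.  It is
illustrative only - every name in it is hypothetical, and the technique in
those papers goes further, rewriting even inlined code in place - but it
shows the simplest form of the idea: pick a CPU-specific routine once, at
startup, and route every later call through a pointer.

/* Hypothetical sketch: choose a memcpy variant once, when the CPU is
 * identified, and route every later call through a function pointer.
 * The sandboxing papers go further and rewrite the call sites
 * themselves, so even inlined sequences can be replaced. */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static void *memcpy_generic(void *dst, const void *src, size_t n)
{
        return memcpy(dst, src, n);     /* portable fallback */
}

static void *memcpy_tuned(void *dst, const void *src, size_t n)
{
        /* stand-in for a variant tuned to the detected cache line size */
        return memcpy(dst, src, n);
}

/* callers always go through the pointer, never a concrete variant */
static void *(*do_memcpy)(void *, const void *, size_t) = memcpy_generic;

static int cpu_is_fancy(void)           /* hypothetical feature probe */
{
        return 0;
}

int main(void)
{
        char dst[8], src[8] = "example";

        if (cpu_is_fancy())             /* one-time, "boot-time" choice */
                do_memcpy = memcpy_tuned;
        do_memcpy(dst, src, sizeof src);
        puts(dst);
        return 0;
}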

Marty


> -Original Message-
> From: David Lang [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, November 07, 2000 4:11 PM
> To: Jeff V. Merkey
> Cc: [EMAIL PROTECTED]; Martin Josefsson; Tigran Aivazian; Anil kumar;
> [EMAIL PROTECTED]
> Subject: Re: Installing kernel 2.4
> 
> 
> Jeff, the problem is not detecting the CPU type at runtime, 
> the problem is
> trying to re-compile the code to take advantage of that CPU 
> at runtime.
> 
> depending on what CPU you have the kernel (and compiler) can 
> use different
> commands/optimizations/etc, if you want to do this on boot you have two
> options.
> 
> 1. re-compile the kernel
> 
> 2. change all the CPU specific places from inline code to 
> function calls
> into a table that get changed at boot to point at the correct calls.
> 
> doing #2 will cost you so much performance that you would be 
> better off
> just compiling for a 386 and not going through the autodetect 
> hassle in
> the first place.
> 
> David Lang
> 
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



Linux on IA64 (was RE: non-gcc linux?)

2000-11-05 Thread Marty Fouts

If I understand the SGI compiler's history correctly, it's more than "some
code."  (I would guess that it would be 70-80% of the volume of a compiler,
as Pro64 appears to share only the front end with gcc; the entire backend is
from scratch.) IA64 is architecturally very different from the sort of ISA
that GCC is appropriate for handling, and the problem is structural, and
possibly fundamental.  Pro64, on the other hand, as described on its web
site, and from a quick scan of the code, appears to be derived from a long
line of compilers that were aimed at ISAs similar at some level
to IA64.  It has, for example, a completely different IR (WHIRL) and code
generation strategy.

I would be very surprised if anyone could make a case acceptable to SGI that
it assign the copyright to that much technology to any third party.
Besides, the whole idea of copyright on substantially derived work and
assigning copyright to ensure defensibility of copyright is an extremely
gray area in copyright law.

Anyway, if your concern is an efficient Linux kernel on UnObtanium, there
are going to be a lot more interesting opportunities than merely converting
the base kernel so that Pro64 will compile it.  Provided that Intel left in
certain features from HP, UnObtanium is going to want a very different
underlying architecture for the most efficient kernel design.



> -Original Message-
> From: Tim Riker [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, November 05, 2000 1:18 PM
> To: Jakub Jelinek
> Cc: Alan Cox; Linux Kernel Mailing List
> Subject: Re: non-gcc linux?
> 
> 
> yes, exactly what my comments stated.
> 
> Jakub Jelinek wrote:
> > 
> > On Sun, Nov 05, 2000 at 01:52:24PM -0700, Tim Riker wrote:
> > > Alan,
> > >
> > > Perhaps I did not explain myself, or perhaps I misunderstand your
> > > comments. I was responding to a comment that we could 
> just copy some of
> > > the optimizations from Pro64 over into gcc.
> > 
> > That's hard to do, because the whole gcc has copyright 
> assigned to FSF,
> > which means that either gcc steering committee would have to make an
> > exception from this for SGI, or SGI would have to be 
> willing to assign some
> > code to FSF.
> > 
> > Jakub
> 
> -- 
> Tim Riker - http://rikers.org/ - short SIGs! 
> All I need to know I could have learned in Kindergarten
> ... if I'd just been paying attention.
> -
> To unsubscribe from this list: send the line "unsubscribe 
> linux-kernel" in
> the body of a message to [EMAIL PROTECTED]
> Please read the FAQ at http://www.tux.org/lkml/
> 
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Topic for discussion: OS Design

2000-10-22 Thread Marty Fouts

"microkernel" is an unfortunate term.  Once upon a time it had a reasonably
well understood technical meaning and then Cutler claimed that NT had a
'microkernel' design and the FUD  set in.   In the literature I'm familiar
with, (not counting marketing hype,) 'micokernel means two distinct classes
of things, although they are often confused, sometimes in the same paper:

1) An implementation technique, probably pioneered in Accent, certainly
popularized in Mach, currently championed in the GNU Hurd (I think,) and
stemming from a lot of earlier work on capabilities and such like, tracing
back to Multics. It consists, mainly, of the idea of dividing the
'services' of an O/S kernel into servers, each in a separate address space,
communicating via a message-passing RPC mechanism. In this case, 'microkernel'
refers to the nucleus of the system that manages the message-passing traffic
and implements the 'virtual machine' layer of the system.  (I oversimplify,
but it should do for this discussion.)
2) The general concept of moving service facilities to the other side of the
user/supervisor boundary and limiting the nucleus (that part that runs in
supervisor mode) to a very small set of functionality, usually the bare
minimum necessary to implement the VM and communication.

The problem with 'microkernels', like the much earlier problem with
capability-based architectures, is that there is, in most designs, a
mismatch between hardware architecture and software requirements, most
notably in the cost of making a procedure call that crosses between the
'user-space' services and the microkernel - a penalty that can be doubled if
the services have to make calls upon each other.

The problem stems from a misfeature of most computer system architectures
that have the VM system overlapping the functionality of memory
addressability and memory accessibility in such a way that changes to either
require 'heavyweight' operations on the VM hardware (TLBs, page tables, et
cetera.)  There have been three 'solutions' to this problem:

1) Do the logical separation into services, but don't use separate address
spaces.  This keeps the performance but doesn't get any hardware memory
protection advantage.  (A sketch of this approach follows the list.)  It
doesn't seem worth trying to retrofit an existing kernel into such a model
simply for the modularity gain which, if the kernel source is well
partitioned anyway, might not be very large.
2) Do the heavyweight message passing, and have people laugh at your
performance.
3) Work *very* hard to find a compromise - which may be possible, but few
people have yet accomplished.
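
To make 'solution 1' concrete, here is a minimal C sketch; the names are
illustrative, not from any real kernel.  The pager is a logically separate
server behind a narrow interface, but it shares the caller's address space,
so a crossing costs one indirect call rather than an IPC:

#include <stdio.h>

struct pager_ops {                      /* the narrow service interface */
        int  (*page_in)(unsigned long vaddr);
        void (*page_out)(unsigned long vaddr);
};

static int pager_page_in(unsigned long vaddr)
{
        printf("paging in %#lx\n", vaddr);
        return 0;
}

static void pager_page_out(unsigned long vaddr)
{
        printf("paging out %#lx\n", vaddr);
}

/* the only thing the rest of the kernel would see */
static const struct pager_ops pager = {
        .page_in  = pager_page_in,
        .page_out = pager_page_out,
};

int main(void)
{
        int rc = pager.page_in(0x1000); /* cost: one indirect call */

        pager.page_out(0x1000);
        return rc;
}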

I have had the good fortune of working with one architecture (PA-RISC) which
gets the separation of addressability and accessibility 'right' enough to be
able to partition efficiently and use ordinary procedure calls (with some
magic at server boundaries) rather than IPCs.  There are others, but PA-RISC
is the one I am aware of.  When I last looked, which is when I was still
working on it, IA-64, the newest architecture from Intel, due out "any day
now," still preserved that design, which we had worked hard to get them to
keep.

The PA-RISC architectural approach isn't perfect - there are some limited
resources that we wish we had more of - but we were able to demonstrate very
good 'microkernel' performance, *in the second sense of 'microkernel'*.  We
did this by taking the Brevix concept of "interfaces" and actually
implementing it in HP-UX, and then running some benchmarks designed to
stress server-to-server communication extensively, and were very pleased with
the results.

So, the long answer to your question is:

1) a new O/S designed from the ground up in a 'microkernel'-ish way, like
QNX, which doesn't actually do the memory partitioning, or which carefully
designs the components to minimize communication across protection domains,
can be very efficient, but runs into difficulty when its initial assumptions
are violated.  (See, for instance, the history of Chorus.)
2) Given the right hardware, it is possible to partition an O/S so that very
little of it at all is 'nucleus' and the vast majority of it is loadable
modules - and you can use simple loader directives to decide if a module
shares address space with the nucleus or lives in a separate address space.
I hope that Itanium will end up that way, but doubt that the HP work will
survive the marketing decisions that Intel has had to make.

Linux isn't really a good basis for a nucleus/server design, because it is
already pretty well partitioned from a source-code point of view, but is
based on a long history of optimization for all-in-one-address-space
'kernels.'

By the way, even highly decomposed very modular systems aren't as flexible
as the people who first developed them expected them to be, but the reason
behind that is for a whole other discussion.

Marty

-Original Message-
From: Dwayne C . Litzenberger [mailto:[EMAIL PROTECTED]]
Sent: Sunday, October 22, 2000 3:59 PM
To: Peter Waltenberg
Cc: [EM

RE: Topic for discussion: OS Design

2000-10-22 Thread Marty Fouts

FWIW, 'message passing' is the wrong answer to the question 'how do I
separate the components of a kernel into distinct modules,' but
that's because it's tied to the Accent ancestry of the Mach-style
"microkernel".

One of the few things we did get right in Brevix was the idea of an
interface transition that used the memory management architecture of PA-RISC
effectively to give the modularity and protection without the overhead of
message passing.  If you want someone to add components to hardware to
support that, get them to set up a system that effectively separates memory
addressability from memory accessibility, as PA-RISC did. (Oh wait, we did
that. If Intel didn't throw it away, Itanium *will* have such an
architecture.)
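
The contrast can be caricatured as follows.  This is a conceptual sketch
only; every function in it is a hypothetical stub, not a real primitive on
either machine:

#include <stdio.h>

enum { CLIENT_AS, SERVER_AS, CLIENT_PID, SERVER_PID };

/* stubs standing in for privileged operations */
static void switch_address_space(int as) { (void)as; /* TLB traffic */ }
static void load_protection_id(int pid)  { (void)pid; /* a register load */ }

/* VAX-ish MMU: crossing a domain means switching address spaces */
static void call_server_vaxish(void (*svc)(void))
{
        switch_address_space(SERVER_AS);        /* heavyweight */
        svc();
        switch_address_space(CLIENT_AS);
}

/* PA-RISC-style protection IDs: everything stays mapped; only the
 * access rights change */
static void call_server_pa_risc(void (*svc)(void))
{
        load_protection_id(SERVER_PID);         /* lightweight */
        svc();                                  /* same address space */
        load_protection_id(CLIENT_PID);
}

static void demo_service(void) { puts("in the server"); }

int main(void)
{
        call_server_vaxish(demo_service);
        call_server_pa_risc(demo_service);
        return 0;
}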

Crossing memory protection domains does not need to be slow.  It's not slow on
PA-RISC, although it is in the VAX-ish memory architecture of systems like
x86. It doesn't have to be in IA-64, if one is willing to abandon 'legacy.'



-Original Message-
From: Dwayne C . Litzenberger [mailto:[EMAIL PROTECTED]]
Sent: Sunday, October 22, 2000 8:57 PM
To: Erno Kuusela
Cc: [EMAIL PROTECTED]
Subject: Re: Topic for discussion: OS Design

[snip]
> crossing memory protection domains is slow, there's no way around
> it (except better hardware).

So what we really need to do is get some custom "RAM blitter" into our
hardware to do the memory copies needed for fast context switching and
message
passing.

Too bad nobody on this list works at an electronics design company... ;-P

--
Dwayne C. Litzenberger - [EMAIL PROTECTED]

- Please always Cc to me when replying to me on the lists.
- Please have the courtesy to respond to any requests or questions I may
have.
- See the mail headers for GPG/advertising/homepage information.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: [Criticism] On the discussion about C++ modules

2000-10-21 Thread Marty Fouts

I prefer Peter Salus' wording, myself: "The difference between theory and
practice is always larger in practice than in theory."


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: [ADMIN] some list related topics ..

2000-10-20 Thread Marty Fouts



> -Original Message-
> From: Matti Aarnio [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, October 19, 2000 1:26 PM
> To: [EMAIL PROTECTED]
> Subject: [ADMIN] some list related topics ..

[snip]

> 
>   3) some ISP systems yield 500 series errors with text:
>   "system is temporarily busy"
>  or something of that effect.  Now THAT is really offensive
>  stupidity by the ISP software folks...
> 

There is nothing in the SMTP RFCs that requires any system to be able to
accept all email at all times.  SMTP is *not* designed to be a reliable
delivery mechanism, let alone a first-time reliable delivery mechanism.
Refusal to accept email because the receiving system is under high load is
well understood, commonly accepted, and even codified in implementation
practice; the 4xx transient-failure replies exist for exactly this case.

In my opinion, you are doing a GoodThing(tm) by trying to weed broken
addresses from the mailing list. But please don't demand from the internet
behavior it wasn't designed to provide.

Marty
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: An excellent read on CPU caches

2000-10-17 Thread Marty Fouts

There are some details in error in this document, and the discussion of
cache-coherence might be expanded or dropped altogether, rather than hinted
at.  I've sent a long note to the author with "diffs" for a next edition.

Thanks for pointing it out; I know of several situations in which it will be
useful to point people at it.

Marty

-Original Message-
From: David Ford [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, October 17, 2000 12:53 PM
To: LKML
Subject: An excellent read on CPU caches

http://www.systemlogic.net/articles/00/10/cache/print.php3

--
"The difference between 'involvement' and 'commitment' is like an
eggs-and-ham breakfast: the chicken was 'involved' - the pig was
'committed'."

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: [Criticism] On the discussion about C++ modules

2000-10-16 Thread Marty Fouts

Do you know that there is actually a name for the logical fallacy behind
this sort of argument?

But please, enlighten me: what precisely about having once written some file
system code for Linux qualifies one as an expert on the topic of the
relative difficulty of optimizing C and C++ as used in kernel development?

What is it about this group that some of its members so quickly close ranks
around the secret handshake when they don't have an actual response to a
technical point?  And what are you going to say instead when I finally do
get around to contributing code to Linux and I still point out bogosities
when they come up?

-Original Message-
From: Tigran Aivazian [mailto:[EMAIL PROTECTED]]
Sent: Monday, October 16, 2000 1:05 AM
To: Marty Fouts
Cc: 'Jeff V. Merkey'; [EMAIL PROTECTED]
Subject: RE: [Criticism] On the discussion about C++ modules

On Mon, 16 Oct 2000, Marty Fouts wrote:

> Which part of "what you wrote doesn't make sense, (for the following
> reasons,) please explain it" are you having trouble responding to in
public?

the pragmatic and subjective part. Jeff wrote some cool nwfs code for
Linux which is publicly available and what useful kernel code for Linux
did you write and where can I download it? Therefore, for this very simple
subjective and pragmatic reason, Jeff (in my eyes) is right and you are
wrong :)

Send patches, not "clever thoughts", clever thoughts are cheap and useless
and can remain confined to the "comp. science departments of certain
universities" for all I care...

Regards,
Tigran
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: [Criticism] On the discussion about C++ modules

2000-10-16 Thread Marty Fouts

Which part of "what you wrote doesn't make sense, (for the following
reasons,) please explain it" are you having trouble responding to in public?

This has nothing to do with some imagined 'fight' and everything to do with
a public challenge to a publicly made statement that, IMHO, gives every
appearance of being nonsense.

But I do agree that discussions of C++ belong off list.

Marty


-Original Message-
From: Jeff V. Merkey [mailto:[EMAIL PROTECTED]]
Sent: Monday, October 16, 2000 12:44 AM
To: Marty Fouts
Cc: [EMAIL PROTECTED]
Subject: Re: [Criticism] On the discussion about C++ modules


Take your fights with me offline.  You have my email address.

Jeff

Marty Fouts wrote:
>
> Um? Huh?  This seems like mumbo-jumbo to me.  With the exception of those
> parts of the kernel that actually manipulate the hardware as hardware, --
> which is a surprisingly small part of the kernel, even of the parts of the
> kernel that look like what they do is manipulate the hardware as hardware
--
> code executing in a kernel behaves exactly like code executing in any
other
> part of the system. - It is, in fact, often not possible to tell outside
the
> processor control registers, whether the executing code is running in
'priv'
> mode or not, so the same code will show the same bus trace in or out of
the
> kernel.
>
> In fact, if the underlying hardware architecture has an appropriate
> separation between memory addressability and memory accessability
mechanisms
> within address translation,  and a reasonable i/o architecture, only a
very
> tiny fraction of 'the kernel' needs to execute with any different
privileges
> than any other application.  (I got it down to page table entry management
> and trap/interrupt entry and exit in one kernel, but that was on a *very*
> nice hardware architecture.)
>
> Marty (who *has* used logic analysers to debug new CPU designs and other
OS
> problems.)
>
> -Original Message-
> From: Jeff V. Merkey [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, October 15, 2000 11:20 PM
> To: [EMAIL PROTECTED]
> Cc: J . A . Magallon; [EMAIL PROTECTED]
> Subject: Re: [Criticism] On the discussion about C++ modules
>
> Not meant to offend, but it's obvious you are not grasping hardware
> optimization issues relative to kernel development and performance.  I
> would recommend getting your hands on a bus analyzer, and testing out
> some of your theories, and explore for yourself relative to these issues
> with some hard numbers.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: [Criticism] On the discussion about C++ modules

2000-10-15 Thread Marty Fouts


Um? Huh?  This seems like mumbo-jumbo to me.  With the exception of those
parts of the kernel that actually manipulate the hardware as hardware --
which is a surprisingly small part of the kernel, even of the parts of the
kernel that look like what they do is manipulate the hardware as hardware --
code executing in a kernel behaves exactly like code executing in any other
part of the system. - It is, in fact, often not possible to tell, outside the
processor control registers, whether the executing code is running in 'priv'
mode or not, so the same code will show the same bus trace in or out of the
kernel.

In fact, if the underlying hardware architecture has an appropriate
separation between memory addressability and memory accessibility mechanisms
within address translation, and a reasonable i/o architecture, only a very
tiny fraction of 'the kernel' needs to execute with any different privileges
than any other application.  (I got it down to page table entry management
and trap/interrupt entry and exit in one kernel, but that was on a *very*
nice hardware architecture.)

Marty (who *has* used logic analysers to debug new CPU designs and other OS
problems.)

-Original Message-
From: Jeff V. Merkey [mailto:[EMAIL PROTECTED]]
Sent: Sunday, October 15, 2000 11:20 PM
To: [EMAIL PROTECTED]
Cc: J . A . Magallon; [EMAIL PROTECTED]
Subject: Re: [Criticism] On the discussion about C++ modules


Not meant to offend, but it's obvious you are not grasping hardware
optimization issues relative to kernel development and performance.  I
would recommend getting your hands on a bus analyzer, and testing out
some of your theories, and explore for yourself relative to these issues
with some hard numbers.  


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



Random thoughts on language arguments and kernel development.

2000-10-15 Thread Marty Fouts

As several people are sure to remind me, the Linux Kernel mailing list is
not the right forum for a discussion on language choice and the impact on
kernel development, but as this is not the first time I've followed this
class of argument, I'd like to make a couple of general observations that I
hope *are* relevant to the list.

First, arguments of the form "it's ok for applications, but kernels need
more control, performance, et cetera than language X provides, which is why
we use language Z" are fallacious.  The end user of a computer system cares
about the *entire* performance of that system, not just the kernel.
Performance issues exist whether the hardware is executing in 'system' mode
or not (to use an archaic term) and applications are not immune from these
considerations simply because they execute in 'user' mode (to use a related
archaic term.)  The point is that performance critical sections of code need
to be treated as such, no matter where they occur, and that one person's
performance critical code may never be exercised in another person's
workload.

Second, arguments of the form "language X is slower, sloppier, harder to
control than language Z" suffer from extreme oversimplification.  The most
that can be said when comparing the relative performance of two programming
languages is that the combination of programming practice and compiler
sophistication is such that typically, programs written in language Z need
more care than programs written in language X in order to reach the same
level of performance, controllability, et cetera.  (For instance, in the mid
80s, I recoded a rather famous Fortran vector supercomputing benchmark into
C, and, for a particular combination of machine and compiler, was able to
actually outperform the Fortran version in many of its tests. The machine's
vendor spent far more effort on their Fortran than C compiler, and in a few
years even that machine returned to Fortran outperforming C.)

The Linux kernel is an old-style, process-oriented Unix clone.  As such, it
is best written in C, and should probably remain in C, since its overall
structure falls into a family of kernels written in C over a period now
approaching 30 years.  That is, in fact, the best reason for leaving Linux
in C: preserving the long legacy of C-related optimization and the C-related
programming skill set.

That is *not* an argument for not writing an operating system core in mixed
languages or even other languages. Not counting assemblers, I've worked on
operating systems in a lot of languages, including Ada, Algol, Basic, C,
C++, Forth, Fortran, Lisp, Pascal, PL/I and Smalltalk, and all of those that
were in use for any period of time reached acceptable (and often very good)
levels of performance. While C++ implementations and programming practice
are not yet anywhere near as mature as those for C, I expect that within a decade
a new breed of operating system cores, which have overthrown the idea of
'kernel' without falling into the trap of 'microkernel' will begin to
arrive, written in languages that more suitably match the programming
paradigms of the application systems built on top of them.

The last, by the way, is the key to language choice for OS implementation,
in my opinion: it is best to choose the language that best resolves the
competing constraints of fitting the prevalent paradigm while having robust
implementations and mature understanding of programming practice suitable to
the language.

Marty
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Tux 2 patents

2000-10-06 Thread Marty Fouts

I don't know a lawyer I would trust who would give free legal advice on a
mailing list without the usual disclaimers.

And I don't care what you've done elsewhere, you have, here, been misleading
about patent law. I stand by my recommendation that people who are
interested should read the Nolo Press book and then, if they have specific
issues, consult an IP lawyer on those particular issues.

In addition to the Nolo Press book, by the way, the US Patent Office now has a
web site with good general information for those people who are interested
in US patent issues.  (http://www.uspto.gov/) I suppose there is a similar
web site for people interested in EU patent specifics as well.  One of the
several ways in which you were mistaken in your assertions is that you've
neglected to clarify where US Patent Law differs from Patent Law in other
jurisdictions.  You may be in Utah, but not everyone on this mailing list
is.




> -Original Message-
> From: Jeff V. Merkey [mailto:[EMAIL PROTECTED]]
> Sent: Friday, October 06, 2000 3:40 PM
> To: Marty Fouts
> Cc: 'jesse'; [EMAIL PROTECTED]
> Subject: Re: Tux 2 patents
> 
> 
> 
> 
> Marty Fouts wrote:
> > 
> > I don't do pissing matches, Jeff, and won't compare the quality of the IP
> > experts I have access to with the quality of those you have access to.
> > 
> > I will say that you are wrong about disclosure because you have overly
> > simplified, and again recommend that people who care should discuss their
> > specific cases with real lawyers, which neither you nor I are.
> 
> Excuse me -- I was one of the attorneys on the Novell/TRG lawsuit --
> check my motions
> and filings.  In fact, check the 4th District Court in general for my
> filings in other cases.  You can check the Texas courts for 1980's as
> well.  Just because I have been a software 
> engineer for the past 20 years does not mean I did something else in a
> previous life. 
> 
> Since all my friends are lawyers and TRG runs a law firm out of here
> should say something.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Tux 2 patents

2000-10-06 Thread Marty Fouts

I don't do pissing matches, Jeff, and won't compare the quality of the IP
experts I have access to with the quality of those you have access to.

I will say that you are wrong about disclosure because you have overly
simplified, and again recommend that people who care should discuss their
specific cases with real lawyers, which neither you nor I are.

As a starting point I recommend Nolo Press's "Patent, Copyright & Trademark"
book, and that an individual see a lawyer for their specific case.

(The Nolo press book can be bought online at
http://www.nolo.com/product/pct.html?t=0023003202000)



-Original Message-
From: Jeff V. Merkey [mailto:[EMAIL PROTECTED]]
Sent: Friday, October 06, 2000 2:35 PM
To: Marty Fouts
Cc: 'jesse'; [EMAIL PROTECTED]
Subject: Re: Tux 2 patents


I've filed lots of patents in my day Marty -- this is correct.  I have
two patent lawyers on staff.  Want to try again..

 
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Tux 2 patents

2000-10-06 Thread Marty Fouts

This is not correct.  There is a lot of partially correct information being
passed around in this thread, and I strongly suggest that people who are
interested not rely on what is being said here, but read the Nolo Press book
as a starter, and talk to an IP lawyer if they need to know the details.

Marty

-Original Message-
From: Jeff V. Merkey [mailto:[EMAIL PROTECTED]]
Sent: Friday, October 06, 2000 11:52 AM
To: Marty Fouts
Cc: 'jesse'; [EMAIL PROTECTED]
Subject: Re: Tux 2 patents


And you only get the year of protection **IF** you have filed a
provisional patent application, which expires 12 months after it's
issued.  You must then file a non-provisional patent application before
the year runs out, or you cannot patent the techniques.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Tux 2 patents

2000-10-06 Thread Marty Fouts

Please be careful with attributions.  I did not write the paragraph
attributed to me below, which contains information I believe is incorrect.

-Original Message-
From: Daniel Phillips
[mailto:[EMAIL PROTECTED]]
Sent: Friday, October 06, 2000 12:24 PM
To: Marty Fouts; [EMAIL PROTECTED]
Subject: Re: Tux 2 patents

Marty Fouts wrote:
>> IANAL, but I believe that once you've implemented a method in a released
> product, you have only one year to file the patents for it.  If you don't
> file patents for it within this time period, it becomes public domain.  I
> think it would be possible to invalidate their patents, but I don't think
> it would be possible to get your own patent on it after the fact and
refuse
> to let them use it.

No, that was never under consideration (I guess I just don't have the
right mindset for this:)  I'm looking at the ways in which the phase
tree algorithm is superior to what they're doing.  And actually, I'm not
worried about NetApp, I'm worried about Sauron^H^H^H^H^H^H Bill.

--
Daniel
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Tux 2 patents

2000-10-06 Thread Marty Fouts

IANAL; this is not legal advice.

The 'one year' you are referring to is from 'disclosure', not from released
product.  "disclosure" in this case is a legal term-of-art. Further, there
is a difference between US and European Union patent law, in that, IIRC, EU
law requires patent application before _public_ disclosure.  In effect,
"disclosure" means revealing the idea to anyone, inside your organization or
out, but there are all sorts of corner cases in the law.

Nolo Press had a good book that discusses copyright and patent law, although
they may not have had the chance to update it to reflect recent changes.

In any event, if you are serious about either getting or trying to overturn
a patent, you need to see a lawyer specializing in patent law, because case
law frequently changes the nuances in this area.

-Original Message-
From: jesse [mailto:[EMAIL PROTECTED]]
Sent: Friday, October 06, 2000 10:53 AM
To: [EMAIL PROTECTED]
Subject: Re: Tux 2 patents

On Fri, Oct 06, 2000 at 09:13:25AM +0200, Daniel Phillips wrote:
> > Once you use the technique and it's documented as clear by a patent
> > lawyer, it will be safe for you to use forever, particularly if it's
> > in the public domain. This is winning
>
> This is good to know, but what I was talking about is taking it *out of
> the closed source* domain.  The idea is to take our best ideas out of
> the closed source domain.  After a few years of doing that, it's my
> guess that the evil software patent system would keel over and die.

IANAL, but I believe that once you've implemented a method in a released
product, you have only one year to file the patents for it.  If you don't
file patents for it within this time period, it becomes public domain.  I
think it would be possible to invalidate their patents, but I don't think
it would be possible to get your own patent on it after the fact and refuse
to let them use it.

-Jesse
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Tux2 - evil patents sighted

2000-10-02 Thread Marty Fouts

IANAL

That said, I would refer anyone interested in 'prior art' in patents to
http://www.ipmall.fplc.edu/ipcorner/bp98/welch.htm
especially the brief discussion on what 'prior art' is to the patent office.
Also, for those who believe that similar concepts will void patents, I would
suggest a search of the IP literature on the topic of 'narrowly defined.'

As to whether or not Network Appliance's patents would hold up in court, I
offer two contradictory opinions:

   Factoid: 90% of all patents are never challenged, while 80% of those that
are challenged are overturned.

and

   "Going into court is throwing the dice."

I will defer discussion of the 'evil' of patent law to some more appropriate
forum.

Marty
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Anyone working on multi-threaded core files for 2.4 ?

2000-09-29 Thread Marty Fouts



> -Original Message-
> From: Alan Cox [mailto:[EMAIL PROTECTED]]
> Sent: Friday, September 29, 2000 2:08 PM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED];
> [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: Anyone working on multi-threaded core files for 2.4 ?
> 
> 
> > > while the dump is taken? How about thread A coredumping, 
> half of the image
> > > being already written and thread B (nowhere near the 
> kernel mode, mind
> > > you) changing the data both in the area that is already 
> dumped and area
> > > the still isn't? After that you can look at the dump and 
> notice absolutely
> > > corrupted data structures - very effective in 
> misdirecting your attempts
> > > to figure out what went wrong.
> > 
> > Couldn't all threads be stopped before coredumping begins?
> 
> Unless I am missing something doesn't a truncate of a file in 
> parallel also
> yank the pages from under the dump too
> 

a "good enough" bit of coherence for dumping, it seems to me, can be met by
insuring that none of the threads in the "process" are scheduled against a
CPU during the dump.  On a UP this can be relatively simple to do by making
sure that each related thread is kept off the run queue while the dump
occurs, since it is known that none of the non-failing threads were running
when the dump started.  On an MP you must also make sure that none of the
threads are running on any of the other processors when the dump starts.
Once all of the threads are stopped, then an "normal" dump is enough,
augmented by (optionaly) dumping the thread-specific state (ie PCB and
stack) of all of the threads in the "process."
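
In outline - a sketch only, with hypothetical types and helpers standing
in for whatever the real kernel provides, not real interfaces - the
sequence would look something like this:

struct thread { struct thread *next; int on_cpu; };

/* hypothetical stand-ins for the real scheduler/dumper primitives */
static void dequeue(struct thread *t)      { t->on_cpu = 0; /* off run queue */ }
static void wait_off_cpu(struct thread *t) { while (t->on_cpu) ; }
static void dump_address_space(void)       { /* write the memory image */ }
static void dump_thread_state(struct thread *t) { (void)t; /* PCB + stack */ }

void coredump_process(struct thread *threads, struct thread *faulting)
{
        struct thread *t;

        /* keep every sibling thread off the run queue... */
        for (t = threads; t; t = t->next)
                if (t != faulting)
                        dequeue(t);

        /* ...and, on an MP, wait until none is still executing */
        for (t = threads; t; t = t->next)
                wait_off_cpu(t);

        /* a "normal" dump is now coherent enough */
        dump_address_space();
        for (t = threads; t; t = t->next)
                dump_thread_state(t);
}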

Open question: whether or not to allow the remaining threads to continue
once the dump is completed, to abort them, or to signal them.  Probably
should be run time configurable.

Marty
  
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Anyone working on multi-threaded core files for 2.4 ?

2000-09-29 Thread Marty Fouts



> -Original Message-
> From: Igmar Palsenberg [mailto:[EMAIL PROTECTED]]

[snip]

> 
> Maybe I'm totally stupid, but I think you need to sync the 
> threads so that
> the're in the same state. And I don't think it's that simple.
> 
> Or I'm talking totally nonsense here :)
> 

I think one needs to be careful, in this case, not to let a desire for the
perfect solution prevent deploying a useful solution while working out the
best one.

When a multithreaded application "dies" due to one of the threads failing in
an unexpected and unrecoverable way, there probably isn't, at that point, a
"same state" for the threads to be in, and a non-coherent dump, while still
difficult to use, is more useful than not dumping any state at all, and
often is coherent enough to use anyway.

Marty
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Linux kernel modules development in C++

2000-09-29 Thread Marty Fouts

I suspect that this discussion belongs off-list, because it apparently comes
up frequently.

But an observation from a Linux-Kernel "outsider":

Multilanguage development (meaning using more than one language in the
product) makes any product harder to develop.  Because Linux is in C,
originally for good reasons, it should probably remain in C unless and until
a significant reason for a rewrite arrives.

Since Linux is trying to be, in some sense, "the last best Un*x" (apologies
to various people,) it makes sense that it should be written in "the last
best procedural language."

marty 
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Linux kernel modules development in C++

2000-09-29 Thread Marty Fouts



> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> Sent: Friday, September 29, 2000 3:39 AM
> To: [EMAIL PROTECTED]
> Subject: RE: Linux kernel modules development in C++
> 
> 
> But but but.. wasn't the very first C++ compilers really just 
> a preprocessor into standard C?
> 

Yes. Actually to K&R C, since that's the language that B.S. had available to
preprocess into.

Because of the Turing-equivalence of all procedural languages, it is possible
for a "compiler" for any of them to be written as a preprocessor into another
language, a la f2c and p2c. (Thus the recurring search for the "universal
assembler" and many of the advantages of implementation sharing in compiler
mid-ware and backend components.) The further the languages drift apart, the
more difficult it is to write automatic translators between them.
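
As an illustration of the idea (hand-lowered for clarity; cfront's actual
output was considerably more involved), the C++ fragment in the comment
below can be expressed as the plain C that follows, with the implicit
'this' made explicit:

#include <stdio.h>

/* C++ source:  struct Counter { int n; int bump() { return ++n; } };  */

struct Counter { int n; };

static int Counter_bump(struct Counter *this_)  /* 'this' made explicit */
{
        return ++this_->n;
}

int main(void)
{
        struct Counter c = { 0 };

        printf("%d\n", Counter_bump(&c));       /* was: c.bump() */
        return 0;
}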

C++, of course, is a considerably different language now than it was when
B.S. introduced it.

Marty
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Linux kernel modules development in C++

2000-09-29 Thread Marty Fouts

The C++ standard, like most language standards, is available from ANSI in
the US. It is ISO/IEC standard 14882, and can be purchased online as a PDF
document from ANSI
(http://webstore.ansi.org/ansidocstore/product.asp?sku=ISO%2FIEC+14882%2D1998).
It costs $18 for an individual.

Standards bodies such as the IEEE and ANSI charge for copies of standards
because that helps them with the cost of producing and distributing the
standards. The cost for the ANSI standards is nominal.

My own opinion is that no, the nominal cost of standards documents has
little to do with why programmers don't have complete and up-to-date
definitions of the language.  Most of them, after all, are willing to pay
3-4 times that much for tutorial or text books on the language, often more
than one. My opinion is that few C or C++ programmers actually possess
complete and up-to-date definitions of the language, because many of them
are unaware of or uninterested in the existence of such standards, because
they believe that the dialect of the language they are using on their
platform of choice is, for their purposes, the language, and so they believe
they only need the vendor reference for the language. Also, standards are
written in a peculiar style and dialect, and they require developing a
certain kind of reading skill to be useful.

This list provides, I believe, an example of a class of programmers who have
little interest in the standard definition of the language, since, I'm told,
the Linux kernel isn't written in a standard programming language, but,
rather, in a dialect which is a subset of GNU C. Thus, to be a Linux kernel
hacker, it seems, one would be more interested in knowing what that dialect
is than in knowing what the standard is.

Marty

> -Original Message-
> From: Daniel Phillips [mailto:[EMAIL PROTECTED]]
> Sent: Friday, September 29, 2000 1:16 AM
> To: Marty Fouts
> Subject: Re: Linux kernel modules development in C++
> 
> 
> Marty Fouts wrote:
> > IMO, it was worse even than that.  C++ itself hadn't 
> stablized as a language
> > to the point where it would have been wise to use on a 
> kernel at that time.
> > 
> > The language only really stablized in  '99, I think.  It's 
> too soon to tell
> > whether it would be usable for kernel development, although 
> various projects
> > that have tried to use it in an OO way have floundered for 
> one reason or
> > another.
> 
> So, (trying to salvage something useful from this thread) where can I
> download this standard?  Or is it, as I suspect, another toll 
> bridge on
> the information highway?  And if so, why do we insist on doing stupid
> things like that to ourselves?   And could this have something to do
> with the fact that very few C or C++ programmers actually possess
> complete and up-to-date definitions of the languages?
> 
> --
> Daniel
> 
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Linux kernel modules development in C++

2000-09-28 Thread Marty Fouts



> -Original Message-
> From: Horst von Brand [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, September 28, 2000 1:45 PM

[snip]

> When Linux started, there was _no_ decent freeware C++ 
> compiler around.

IMO, it was worse even than that.  C++ itself hadn't stabilized as a language
to the point where it would have been wise to use on a kernel at that time.

The language only really stabilized in '99, I think.  It's too soon to tell
whether it would be usable for kernel development, although various projects
that have tried to use it in an OO way have floundered for one reason or
another.

BTW, C++ isn't really a pure OO language; it is a language that has
facilities that support a wide range of programming paradigms.  One of the
things that makes it hard to use effectively is taking that into account
when using its various facilities.

Marty
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Availability of kdb

2000-09-18 Thread Marty Fouts

Gene did the instruction set architecture along with some others. I think he
was also involved in the i/o architecture.

-Original Message-
From: Joel Jaeggli [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 18, 2000 4:59 PM
To: Marty Fouts
Cc: 'Malcolm Beattie'; [EMAIL PROTECTED]
Subject: RE: Availability of kdb

Gene Amdahl I think...


On Mon, 18 Sep 2000, Marty Fouts wrote:

> I think that more people quote Brooks than have read him and that more
> people know him from the Mythical Man Month than from the POO.
>
> He wasn't, by the way, the principal architect of OS/360; he was the manager
> of the 360 development organization.  I will email a monster cookie to the
> first person who correctly identifies the original architect of OS/360.
>
> And yes, if Linus manages to learn some new lesson from Linux and writes a
> book about it with the endurance of MMM, I'll be shown wrong in my assertion
> about his being remembered.
>
> By the way, my favorite part of the anniversary edition of MMM is Brooks'
> apology to Parnas about being wrong about information hiding.
>
> -Original Message-
> From: Malcolm Beattie [mailto:[EMAIL PROTECTED]]
> Sent: Monday, September 18, 2000 3:22 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Availability of kdb
>
> Marty Fouts writes:
> > Here's another piece of free advice, worth less than you paid for it:
> > in 25 years, only the computer history trivia geeks are going to
> > remember you, just as only a very small handful of us now remember who
> > wrote OS/360.
>
> You mean like Fred Brooks who managed the development of OS/360, had
> some innovative ideas about how large software projects should be run,
> whose ideas clashed with contemporary ones, who became a celebrity?
> You don't spot any parallels there? He whose book "Mythical Man Month"
> with "No Silver Bullet" and "The Second System Effect" are quoted
> around the industry decades later? And you think that's only a small
> handful of people?
>
> --Malcolm
>
> --
> Malcolm Beattie <[EMAIL PROTECTED]>
> Unix Systems Programmer
> Oxford University Computing Services
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [EMAIL PROTECTED]
> Please read the FAQ at http://www.tux.org/lkml/
>

--
Joel Jaeggli   [EMAIL PROTECTED]

Academic User Services   [EMAIL PROTECTED]
 PGP Key Fingerprint: 1DE9 8FCA 51FB 4195 B42A 9C32 A30D 121E
--
It is clear that the arm of criticism cannot replace the criticism of
arms.  Karl Marx -- Introduction to the critique of Hegel's Philosophy of
the right, 1843.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Availability of kdb

2000-09-18 Thread Marty Fouts

I think that more people quote Brooks than have read him and that more
people know him from the Mythical Man Month than from the POO.

He wasn't, by the way, the principal architect of OS/360; he was the manager
of the 360 development organization.  I will email a monster cookie to the
first person who correctly identifies the original architect of OS/360.

And yes, if Linus manages to learn some new lesson from Linux and writes a
book about it with the endurance of MMM, I'll be shown wrong in my assertion
about his being remembered.

By the way, my favorite part of the anniversary edition of MMM is Brooks'
apology to Gries about being wrong about information hiding.

-Original Message-
From: Malcolm Beattie [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 18, 2000 3:22 AM
To: [EMAIL PROTECTED]
Subject: Re: Availability of kdb

Marty Fouts writes:
> Here's another piece of free advice, worth less than you paid for it: in
> 25 years, only the computer history trivia geeks are going to remember
> you, just as only a very small handful of us now remember who wrote
> OS/360.

You mean like Fred Brooks who managed the development of OS/360, had
some innovative ideas about how large software projects should be run,
whose ideas clashed with contemporary ones, who became a celebrity?
You don't spot any parallels there? He whose book "Mythical Man Month"
with "No Silver Bullet" and "The Second System Effect" are quoted
around the industry decades later? And you think that's only a small
handful of people?

--Malcolm

--
Malcolm Beattie <[EMAIL PROTECTED]>
Unix Systems Programmer
Oxford University Computing Services
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Availability of kdb

2000-09-18 Thread Marty Fouts

Then I suggest you skip the one paragraph at the beginning of my comment
that wasn't appropriately diplomatic and read the portion that you snipped.
It contains a wee bit of wisdom.

-Original Message-
From: Tigran Aivazian [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 18, 2000 1:36 AM
To: Marty Fouts
Cc: 'Larry McVoy'; 'Linus Torvalds'; Oliver Xymoron; Daniel Phillips; Kernel
Mailing List
Subject: RE: Availability of kdb

On Sun, 17 Sep 2000, Marty Fouts wrote:
> I've probably debugged more operating systems under more varied
> environments than nearly anyone here, having brought up a new OS,
> compiler, and CPU

yea yea yea, if you are so good then you should be concentrating on giving
your goodness and wisdom and experience to us and not boast about it. For
to give is more blessed than receive.

Regards,
Tigran
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Availability of kdb

2000-09-17 Thread Marty Fouts

I am amused that you snipped the relevant part of the discussion in order to
take a snipe at the throwaway part.

Do you have any comment on the arguments I've made, or do you just like
sniping?



-Original Message-
From: David S. Miller [mailto:[EMAIL PROTECTED]]
Sent: Sunday, September 17, 2000 10:40 PM
To: Marty Fouts
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: Availability of kdb

   From: Marty Fouts <[EMAIL PROTECTED]>
   Date: Sun, 17 Sep 2000 22:42:22 -0700

   I've probably debugged more operating systems under more varied
   environments than nearly anyone here

Which one of them was 100% distributed where no two of the developers
were in the same building and the only method of communication was
electronic?

Besides, "My shit doesn't stink because I've done this a 1000
different times for twice as long as anyone here, blah blah" is not of
much value, perhaps you've done it wrong all this time or perhaps each
time it was under circumstances which are much different than the
Linux development model.  This is what my first paragraph was meant to
point out.

I find that I get more respect from people, especially "new ignorant"
people if I don't start any of my arguments with "my shit doesn't
stink because I did this and that... me me me...".

Later,
David S. Miller
[EMAIL PROTECTED]
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Availability of kdb

2000-09-17 Thread Marty Fouts

I agree about needing to know all of the tools in the tool chest, including
the hand ones. Nothing in what I've said about needing to include the
debugger has been an argument against *also* having a full chest of other
tools.

On the other hand, Linus is wrong, and your attempt to defend him is a
non sequitur.

I've probably debugged more operating systems under more varied environments
than nearly anyone here, having brought up a new OS, compiler, and CPU
concurrently and having debugged everything from card-batch monitors to
fairly large distributed systems.  On any number of occasions, as others
here have noted, a debugger has been essential, and utter knowledge of the
code is of no use, because the code has relied on hardware that is in
reality behaving differently than it is documented to behave.  No amount of
code reading is going to find those cases.

I've also found a lot of problems where the symptoms appear in one subsystem
but are the result of a bug in another.  No amount of reading the code from
the first subsystem is going to find the bug in the second, but those are
the bugs that are going to give you the most headaches without good
debugging tools.

You continue to conflate two related but different parts of the debugging
process.  Identifying what the system *is* doing is different than
identifying what it *should* be doing.  The social-engineering argument
against using debuggers is an argument against making the mistake of trying
to do both jobs with one tool.

In my "perfect" universe, here's how I debug kernels, when I'm not worried
about the hardware being the problem, and the problem is in code written by
someone else:

* Have access to a browsable cross-referenced source tree for the exact
kernel being debugged (in a _really_ perfect world, I'd have specifications
for what the code should be doing, but hey, we're in OS hacking here, so
we'll ignore software engineering things like requirements, specifications,
pre-conditions, "programming with contracts", and stick to
hairy-chested-he-man-debugging)
* Do my best to reduce the test case to the smallest set of actions that
will reproduce the failure  - this may involve using debuggers, logic
analysers, and various "jigs" to help isolate system behavior.
* Loop:
o Read the source code to see if I can understand the failure behavior; in
the process, questions will arise, like "shouldn't this variable be mumble
at this point?"
o Run the test case under the debugger *while* reading the source code (best
if I can use a remote debugger that interacts with my source code browser)
and use the debugger to validate my assumptions by answering those questions
* The loop is repeated until I have an "aha" moment, which is the point at
which I *think* I see what the code is doing that it shouldn't have done.
* Stop what I'm doing and have a diet coke.  Preferably while reading the
module that I think is misbehaving. (in a _really_ perfect world, while
reading it with the guy who wrote it.)
* Write up and desk check the code that should change the behavior (with the
guy who wrote the original as a code reviewer, if possible)
* Run the regression suite - if the patch works, take it to code review.  If
it passes code review, try to put the test case in the regression suite, if
it can be done.

The debugger is useful, along with visualization tools, trip wires and a
dozen other techniques in solving a very important social engineering
problem that I haven't seen mentioned in this thread: The bug got there
after my team's best effort to write correct code in the first place, an
effort that involves specs, code reviews, coding standards and a number of
other tools.  That means we have a conceptual failure to understand our own
code.  As any proofreader will tell you: mistakes like that, once they get
by, are nearly impossible to catch.
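
One concrete form of the "trip wires" mentioned above, sketched in C (all
names hypothetical, not from any particular kernel): plant a magic value in
a structure and check it on entry to the routines that use it, so
corruption trips an assert near its source instead of surfacing later in an
unrelated subsystem.

#include <assert.h>

#define BUF_MAGIC 0x5ab1e5edU

struct buf {
    unsigned int magic;   /* the trip wire */
    int len;
    char data[64];
};

#define CHECK_BUF(b) assert((b)->magic == BUF_MAGIC)

void buf_use(struct buf *b)
{
    CHECK_BUF(b);   /* fires as soon as a stray write hits this
                       structure, not hours later somewhere else */
    /* ... real work on b ... */
}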

Basically, I use a debugger when I realize that I've developed a perception
block and I want to validate my perception against reality.  Computing is,
after all, an empirical science.

Marty


-----Original Message-
From: Larry McVoy [mailto:[EMAIL PROTECTED]]
Sent: Sunday, September 17, 2000 8:41 PM
To: Marty Fouts
Cc: 'Linus Torvalds'; Oliver Xymoron; Tigran Aivazian; Daniel Phillips;
Kernel Mailing List
Subject: Re: Availability of kdb

On Sun, Sep 17, 2000 at 02:33:40PM -0700, Marty Fouts wrote:
> Um, for whatever it is worth, if you want to compare "power user"
> carpentry to "hand tools only" you can actually do it fairly easily on
> PBS in the US.  There used to be a program done by a guy who did
> everything by hand.  I loved to watch it, especially the parts where he
> cut himself badly because there are some things it is dumb to do with
> hand tools, but he was stuck with his dumb rule.  There's another show,
> still on, called "The New Yankee Workshop" [...]

RE: Availability of kdb

2000-09-17 Thread Marty Fouts

I'm not laboring under the mistaken impression that there is any capital-T
truth in operating system design, nor that there is a capital-R right way to
do things. Nor do I make the mistake of trying to cover up bad ideas about
social-engineering with poorly thought out examples about carpentry, and
then stomp my feet demanding that people who point out the failings of those
metaphors should stop having "silly" ideas and compete with me in some
imaginary game of "goodness."  I don't do drinking games, pissing matches,
or popularity contests, and I'm not impressed by them either.

I'm on this mailing list because the company that I currently work for has
decided to use Linux in its products. I'm trying to figure out how to make
commercial products that will survive in the market place while also finding
a way to give back to the open source community.  I'm going to stay here for
as long as it is in my judgement in my company's interest for me to follow
Linux development, and my belief that I can utilize my company's interest in
Linux to give back to the free software community.

I'm not particularly interested in convincing you. You are inexperienced,
headstrong, and laboring under many of the misapprehensions that come with
celebrity, as well as being intelligent and observant. Experience will teach
you or leave you by the wayside, and that's your karma to cope with. 

I won't bother to offer my critique of the Linux kernel just now, because
I'm fairly sure you aren't interested.

I will from time to time, if my experience happens to intersect with an
active discussion, raise some observations that seem appropriate.  You are
welcome to take from twenty-five years of deep experience in the computer
industry whatever lessons you may, or to ignore them completely, as is
anyone else.
 
As far as "where I'll be in 10 years," the answer is probably, as it has
been for the last 10 years, "where I was 10 years ago", which is trying to
cope with the latest wunderkind and figure out how to make money off the
process, produce software that I'm not too embarrassed to admit having been
involved with, and occasionally contribute to the free software community -
all while having fun and utterly failing to take myself or any of this
seriously.

Your opinions are worth to me exactly the quality of argument you can raise
to back them - the opinion equivalent of 'show me the code'. Frankly, you
don't argue your case at all effectively, and I'm totally unimpressed by an
approach that amounts to "if you don't like the way we play on my sandlot,
go find your own".

There is no correlation between the level of tool use and the quality of
product produced. Having tools allows you to do things that not having tools
prohibits you from doing, but the bell-shaped curve for quality has
held pretty much steady over the course of human tool use.  If you think
that that is a "silly" idea, feel free to rebut it, but don't think that
calling an idea by a demeaning name and demanding that I go off and roll an
operating system is anything more than throwing a tantrum.

Here's another piece of free advice, worth less than you paid for it: in 25
years, only the computer history trivia geeks are going to remember you,
just as only a very small handful of us now remember who wrote OS/360.  Work
hard on having fun, the rest will sort itself out.

Marty (as silly as ever)

 
-Original Message-
From: Linus Torvalds [mailto:[EMAIL PROTECTED]]
Sent: Sunday, September 17, 2000 5:49 PM
To: Marty Fouts
Cc: Oliver Xymoron; Tigran Aivazian; Daniel Phillips; Kernel Mailing List
Subject: RE: Availability of kdb


On Sun, 17 Sep 2000, Marty Fouts wrote:
>
> Craftsmanship is in the way you approach what you do, not in the tools you
> use to do it.  And, frankly, if you wish to artificially limit your use of
> tools, all you are doing is hobbling yourself.

You know what?

Start your own kernel (or split one off from Linux - that's what the GPL
is all about), and we'll see where you are in ten years. If kernel
debuggers are so much better, then you'll be quite well off, and you can
prove your silly opinions by _showing_ them right.

In the meantime, I've shown what my opinions are worth. Take a look, any
day. Available at your nearest ftp-site.

Talk is cheap. If you want to convince me, do the WORK.

Linus

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Availability of kdb

2000-09-17 Thread Marty Fouts

Um, for whatever it is worth, if you want to compare "power user" carpentry
to "hand tools only" you can actually do it fairly easily on PBS in the US.
There used to be a program done by a guy who did everything by hand.  I
loved to watch it, especially the parts where he cut himself badly because
there are some things it is dumb to do with hand tools, but he was stuck with
his dumb rule.  There's another show, still on, called "The New Yankee
Workshop". I love to watch it, just to count the number of power tools Norm
Abrams manages to use in a single project.  (I think the most I saw in one
one-hour episode was 40-something.)

Craftsmanship does *not* come from artificial rules about what tools you are
allowed to use.  There were hack carpenters when there weren't any power
tools, and the cabinet makers I know who do the best work (the sort that
gets them several thousand dollars apiece for small pieces of furniture)
use every power tool they find appropriate to their work; just as they
construct and use jigs and rely on all the other "tricks of the trade".

Craftsmanship is in the way you approach what you do, not in the tools you
use to do it.  And, frankly, if you wish to artificially limit your use of
tools, all you are doing is hobbling yourself.

Marty

-Original Message-
From: Linus Torvalds [mailto:[EMAIL PROTECTED]]
Sent: Saturday, September 09, 2000 5:35 PM
To: Oliver Xymoron
Cc: Tigran Aivazian; Daniel Phillips; Kernel Mailing List
Subject: Re: Availability of kdb




On Sat, 9 Sep 2000, Oliver Xymoron wrote:
> 
> Tools are tools. They don't make better code. They make better code easier
> if used properly.

I think you missed the point of my original reply completely.

The _technical_ side of the tool in question is completely secondary.

The social engineering side is very real, and immediate.

It's not whether you can use tools to do the work.

It's about what kind of people you get.

You were the one who brought up the power drill analogy. I'll take it, and
run with it, and maybe you can see _my_ point by me taking your analogy
and running with it.

Yes, using a power-drill and other tools makes a lot of carpentry easier.
To the point that a lot of carpenters don't even use their hands much any
more. Almost all the "carpentry" today is 99% automated, and sure, it
works wonderfuly - especially as you in carpentry cannot do it any other
way if you want to mass-produce stuff.

But take a moment to look at it the other way. 

If you want to find the true carpenters today, what do you do? Not just "a
carpenter". But THE carpenter.

I'm saying that maybe you put up a carpentry shop where everything is
lovingly hand-crafted and tools are not considered to be the most
important part - or even necessarily good. And yes, some people
(carpenters in every sense of the word) will be frustrated. They can't use
the power-lathe that they are used to. It doesn't suit them. They _know_
that they are missing something.

But in the end, maybe the rule to only use hand power makes sense. Not
because hand-power is _better_. But because it brings in the kind of
people who love to work with their hands, who love to _feel_ the wood with
their fingers, and because of that their holes are not always perfectly
aligned, not always at the same place. The kind of carpenter that looks at
the grain of the wood, and allows the grain of the wood to help form the
finished product.

The kind of carpenter who, in a word, is more than _just_ a carpenter.

  [ Insert a silent minute to contemplate the beauty of the world here. ]

Go back and read my original reply to this thread.

Really _understand_ the notion of two kinds of people. 

And think about what kind of people you'd like to work with.

Linus

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Scalability Efforts

2000-09-07 Thread Marty Fouts

I don't know. It may well be that by the time Linux is seriously in
contention for cc-NUMA, the number of architectures will be seriously
reduced, in much the same way that the number of architectures for general
purpose computers got shaken out in the '80s and '90s.  In that case, my dire
warning won't really matter, and best practice and possibly even simple
algorithms will work.

One of the things I investigated at HP Labs in the mid 90s was on-the-fly
configuration by algorithm substitution.  There was some good work on
underlying technology for the nitty-gritty of substituting algorithms done
at the Oregon Graduate Center.  This is substitution, rather than run-time
choice, to avoid the overhead of making the algorithm-choice branch
frequently at runtime, and is an attempt to generalize techniques like
back-patching the memory copy routines at boot time.
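
A minimal C sketch of the simpler, function-pointer flavor of that idea
(all names hypothetical; true back-patching would rewrite the call sites
instead of leaving an indirect call):

#include <stddef.h>
#include <string.h>

/* Two interchangeable implementations; the bodies here just defer to
   memcpy() as stand-ins for the real variants. */
static void *copy_simple(void *dst, const void *src, size_t n)
{
    return memcpy(dst, src, n);   /* stand-in for a byte-loop copy */
}

static void *copy_unrolled(void *dst, const void *src, size_t n)
{
    return memcpy(dst, src, n);   /* stand-in for a tuned copy */
}

/* Every caller goes through this pointer, so the choice is made once
   rather than branched on at each call. */
void *(*kcopy)(void *, const void *, size_t) = copy_simple;

void copy_init(int cpu_has_fast_path)   /* called once at boot */
{
    if (cpu_has_fast_path)
        kcopy = copy_unrolled;          /* substitute the algorithm */
}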

As we all know, the major problem with one-size-fits-all algorithms is
scalability.  Algorithms that are efficient for small N in the order
statistic don't scale well, but algorithms for large N tend to have too much
overhead to justify using them when N is small.  List management (of which
operating systems are major users) gives a trivial example.  A list that
might have a half dozen items at most is trivial to maintain in sorted order
and to search linearly, while one with thousands of entries and frequent
insertions requires data structures that would have outrageous overhead for
small N, and may never be kept in sorted order at all.
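
To make the tradeoff concrete, a toy C sketch (hypothetical names):

#include <stddef.h>

struct entry { int key; void *data; };

/* Small N: keep the array sorted on insert and search linearly; the
   early exit on tbl[i].key > key is the only "optimization" needed. */
void *lookup_small(const struct entry *tbl, int n, int key)
{
    int i;
    for (i = 0; i < n && tbl[i].key <= key; i++)
        if (tbl[i].key == key)
            return tbl[i].data;
    return NULL;   /* with thousands of entries and frequent inserts,
                      a balanced tree or hash table wins instead */
}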

cc-NUMA complicates the problem because not only do you have the dimension
of growth to take into account, which could probably be coped with by
back-patching generalizations, but you also have variation in system design.
Relative costs of coherence change at different numbers of processors, and
some systems have complicated memory hierarchies while others tend to have a
small number of levels.  I've worked with machines where the "right"
approach was to treat them as clusters of SMPs, effectively requiring an
approach in which each "virtual" SMP ran its own independent kernel and a
lot of work had to be done to provide a single system image model and
support very efficient in-cluster IPC between the kernels.  (See, for
instance, my idea "Sender Mediated Communication," and John Wilkes' patent
on a related idea, documented in the Hamlyn papers.) On other machines,
there wasn't such a break in the memory hierarchy and running the whole
thing as one virtual SMP with algorithms tuned to the cost of sharing was
the right approach.  These systems may differ significantly in the source
code base of the implementation.  Processor scheduling, paging, even I/O
processing models can vary radically between machines.  (Try optimizing a
Tera using the same processor scheduling algorithms as are appropriate for a
T3, for example.)

But, as I say, this may all be a red herring, because the market place
doesn't tolerate a lot of diversity, and many of the interesting
architectures that the research community have worked on may never appear in
any significant numbers in the market place.  If you are able to limit your
solution to the problem to a fairly narrow class of cc-NUMA machines, then
the problem really becomes simply the run-time replacement of algorithms
based on system size.

Marty

-Original Message-
From: Jesse Noller [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 07, 2000 2:46 PM
To: Marty Fouts; [EMAIL PROTECTED]
Subject: RE: Scalability Efforts


But would it be possible to apply a sort of "Linux Server Tuning Best
Practices" method to items not unlike NUMA, but more specific to say,
webserver and file serving?

(this is a project i am working on, finding kernel and apache tuning
guidelines for maximum File/Web serving speed w/ the 2.4 kernel)

- Note, if anyone has any pointers, please let me know.

-Jesse

-Original Message-
From: Marty Fouts [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 07, 2000 5:30 PM
To: [EMAIL PROTECTED]
Subject: RE: Scalability Efforts


FWIW, large system scalability, especially NUMA is not tractable with a 'one
size (algorithm) fits all' approach, and can be a significant test of the
degree of modularity in your system.  Different relative costs of access to
the different levels of the memory hierarchy and different models of cache
concurrency, especially, tend to make what works for system A be maximally
pessimal for system B.

marty

...
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Scalability Efforts

2000-09-07 Thread Marty Fouts

FWIW, large system scalability, especially NUMA is not tractable with a 'one
size (algorithm) fits all' approach, and can be a significant test of the
degree of modularity in your system.  Different relative costs of access to
the different levels of the memory hierarchy and different models of cache
concurrency, especially, tend to make what works for system A be maximally
pessimal for system B.

marty

-Original Message-
From: Rik van Riel [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 07, 2000 11:20 AM
To: Henry Worth
Cc: [EMAIL PROTECTED]
Subject: Re: Scalability Efforts

On Thu, 7 Sep 2000, Henry Worth wrote:

> With all the talk of improving Linux's scalability to
> large-scale SMP and ccNUMA platforms -- including efforts
> at several HW companies and now OSDL forming to throw
> hardware at the effort -- is there any move afoot to
> coordinate these efforts?

Nothing coordinated, AFAIK...

> Or is it all, whatever there may be of it, taking
> place offline?

Most of the times I've talked about this topic it
was in person with other developers at various
conferences.

For the VM subsystem and the scheduler we have some
ideas to improve scalability for NUMA machines. It's
not been implemented yet, but for most of it the
design seems to be pretty ok and ready to be implemented
for 2.5.

OTOH, some of the devilish details still aren't resolved.
If there are people interested in discussing this topic,
I'll setup a mailing list for it ...

regards,

Rik
--
"What you're running that piece of shit Gnome?!?!"
   -- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/   http://www.surriel.com/

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: Availability of kdb

2000-09-06 Thread Marty Fouts

None of my arguments for kernel debuggers add up to "add new things faster".
If you want to be able to add new things faster, then you need to radically
restructure systems and your implementation process to better accommodate
modularization; a different process altogether.

My arguments for kernel debuggers add up to "understand the symptomological
behavior faster."  It has to do with reducing the overhead on those who are
doing maintenance so that they will have the time to do the thinking
necessary for doing the implementation right.  Some people don't need these
tools, and more power to them.  The vast majority of programmers, including
many of the careful effective ones you do want (or should want) contributing
to your system, do.

You cannot social-engineer competence by depriving people of tools.  You
can reinforce competence by giving them tools and teaching them how and when
to use them.

We've done device driver development here, and absolutely needed the
visibility provided by a kernel debugger because the hardware was broken or
poorly documented and no amount of staring at source code would have helped
us recognize that.

I've used kernel data gathering and visualization any number of times to
identify complex interactions of subsystems that no amount of staring at
code would have uncovered.  Doing so has allowed me to fix kernel problems
in hours that would have taken months to fix otherwise - hours because it
took minutes to correctly identify the bogus behavior and then hours to work
through the code and figure out the best solution.

There simply isn't "one right way" to do anything, even kernel development.

-Original Message-
From: Linus Torvalds [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, September 06, 2000 12:52 PM
To: Tigran Aivazian
Cc: Daniel Phillips; Mike Galbraith; [EMAIL PROTECTED]
Subject: Re: Availability of kdb


On Wed, 6 Sep 2000, Tigran Aivazian wrote:
>
> very nice monologue, thanks. It would be great to know Linus' opinion. I
> mean, I knew Linus' opinion of some years ago but perhaps it changed? He
> is a living being and not some set of rules written in stone so perhaps
> current stability/high quality of kdb suggests to Linus that it may be
> (just maybe) acceptable into the official tree?

I don't like debuggers. Never have, probably never will. I use gdb all the
time, but I tend to use it not as a debugger, but as a disassembler on
steroids that you can program.

None of the arguments for a kernel debugger has touched me in the least.
And trust me, over the years I've heard quite a lot of them. In the end,
they tend to boil down to basically:

 - it would be so much easier to do development, and we'd be able to add
   new things faster.

And quite frankly, I don't care. I don't think kernel development should
be "easy". I do not condone single-stepping through code to find the bug.
I do not think that extra visibility into the system is necessarily a good
thing.

Apparently, if you follow the arguments, not having a kernel debugger
leads to various maladies:
 - you crash when something goes wrong, and you fsck and it takes forever
   and you get frustrated.
 - people have given up on Linux kernel programming because it's too hard
   and too time-consuming
 - it takes longer to create new features.

And nobody has explained to me why these are _bad_ things.

To me, it's not a bug, it's a feature. Not only is it documented, but it's
_good_, so it obviously cannot be a bug.

"Takes longer to create new features" - this one in particular is not a
very strong argument for having a debugger. It's not as if lack of
features or new code would be a problem for Linux, or, in fact, for the
software industry as a whole. Quite the reverse. My biggest job is to say
"no" to new features, not trying to find them.

Oh. And sure, when things crash and you fsck and you didn't even get a
clue about what went wrong, you get frustrated. Tough. There are two kinds
of reactions to that: you start being careful, or you start whining about
a kernel debugger.

Quite frankly, I'd rather weed out the people who don't start being
careful early rather than late. That sounds callous, and by God, it _is_
callous. But it's not the kind of "if you can't stand the heat, get out
the the kitchen" kind of remark that some people take it for. No, it's
something much more deeper: I'd rather not work with people who aren't
careful. It's darwinism in software development.

It's a cold, callous argument that says that there are two kinds of
people, and I'd rather not work with the second kind. Live with it.

I'm a bastard. I have absolutely no clue why people can ever think
otherwise. Yet they do. People think I'm a nice guy, and the fact is that
I'm a scheming, conniving bastard who doesn't care for any hurt feelings
or lost hours of work if it just results in what I consider to be a better
system.

And I'm not just saying that. I'm really not a very nice person. I can say
"I don't care" with a straight 

RE: Availability of kdb

2000-09-06 Thread Marty Fouts

I have been involved in the freely-distributable software community since
1975.  (Yes, Virginia, it predates the Free Software Foundation, and, in
fact, can be traced back to the '50s.)  Freely-distributable software has
some advantages,  but I didn't see then and I don't see now any path by
which it will replace commercial software development.  Anyone who thinks
that IBM, HP, et al adopting Linux is evidence to the contrary doesn't
understand the lesson of the Open Software Foundation.

It is fun to participate in this list, and, if I could find the synergy, I
would be willing to direct my current company (which is using Linux) to
develop some enhancements to Linux (that we might need as part of our
product development) and to contribute those enhancements to the community.
(That's why I subscribed, to find out what we could expect from Linux and
what we could contribute.) I even have some long range projects that I might
be willing to pay "Linux kernel experts" to do for me and then contribute to
the community.  What I'm trying to say is that I'd like to see Linux and
Free BSD and the FSF and so on all to succeed at what they are trying to do,
but I haven't seen anything change in 25 years that makes me think that y'all
are having any fundamental impact on the economics of software development.

While I think that Merkey has a skewed view and an axe to grind, I think
that the level of optimism on the other side borders on the naïve.

-Original Message-
From: Tigran Aivazian [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, September 06, 2000 1:57 PM
To: Jeff V. Merkey
Cc: Linus Torvalds; Daniel Phillips; Mike Galbraith;
[EMAIL PROTECTED]
Subject: Re: Availability of kdb

On Wed, 6 Sep 2000, Jeff V. Merkey wrote:
> Then it may be that corporate America weeds out Linux over

more likely is that corporate America weeds out commercial software as a
model which was superseded by the free software. We (the creative anarchy
community led by Linus) are here to help that happen, gradually and slowly
without shocks and fits by those who love this world and the things that
are therein so much they choose to serve Mammon and make money on that
which should be inherently free (to wit, any form of information, isn't
it just 1s and 0s? :)

Regards,
Tigran

PS. If Linus objects to being a leader of such community, I may remind
that the keyword there is "anarchy" and that assumes that, to some extent,
the consent of a human to be or not to be its leader, may be freely
ignored by those being led, without even disturbing the symbiotic harmony
between the leader and those he leads.



-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: [ANNOUNCE] Withdrawl of Open Source NDS Project/NTFS/M2FS for

2000-09-06 Thread Marty Fouts

Isn't it time to offer the platitude about working smarter rather than
harder?

Nah, probably not.

Marty

-Original Message-
From: Alan Cox [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, September 06, 2000 2:20 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Subject: Re: [ANNOUNCE] Withdrawl of Open Source NDS Project/NTFS/M2FS for

>Attempting to change people's habits by making it hard to debug.
>
> Hard work now leads to less work later.

Hard work now leads to less work full stop

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



kernel debugging (was RE: [ANNOUNCE] Withdrawl of Open Source NDS Project/NTFS/M2FS for Linux)

2000-09-05 Thread Marty Fouts

I've debugged quite a few operating systems on a very wide range of hardware
over 25 years using a very wide range of tools and techniques, sometimes
even having to use logic analysers.  I've also watched this discussion for a
while. IMHO, y'all have conflated two quite different processes (possibly
three) and are trying to use your (lack of) tools as a form of social
engineering.

To the extent that computing is science, it is empirical science, and as
such any effective tool that gives you visibility into the running computer
belongs in your toolkit.  A good (remote) source-level debugger, *properly
used*, is one of the most effective ways of obtaining visibility that I
know.  Tied in to a decent source code browser, it can also be a very
effective way of coming up to speed on the ins and outs of someone else's
code.

There are, in essence, three parts to debugging a problem: figuring out
what the system is really doing; figuring out why what it is doing is wrong;
and figuring out the best way to make it behave less wrongly.  Debuggers and
source code browsers can figure prominently in the first (as should Meyer's
programming contracts or a similar model backed up by real asserts in the
code, but that's a different topic.)
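
For illustration, a contract backed by real asserts might look like this in
C (a hypothetical example; the point is that the assert documents the
contract and fails where the contract breaks, not where the symptom
surfaces):

#include <assert.h>
#include <stddef.h>

struct queue { size_t count, capacity; int *slots; };

/* Contract: the caller guarantees q is valid and not full; the routine
   guarantees the invariant count <= capacity on exit. */
void queue_push(struct queue *q, int v)
{
    assert(q != NULL);                /* precondition */
    assert(q->count < q->capacity);   /* precondition: not full */
    q->slots[q->count++] = v;
    assert(q->count <= q->capacity);  /* invariant, documented and checked */
}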

As I've posted earlier, our brains complement the computers, and we should
use both most effectively.  People are good at seeing patterns in the data,
but not so good at extracting the data or remembering it.  Good debuggers,
effectively used, help with both the extraction and the remembering.

Some people work best at identifying problems with abstraction and analysis.
That's the way they should work.  Others need hands on experience to
identify problems.  In the real world you need both kinds of people and you
need to supply tools for each.

Marty

-Original Message-
From: David S. Miller [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 05, 2000 5:03 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Subject: Re: [ANNOUNCE] Withdrawl of Open Source NDS Project/NTFS/M2FS
for Linux


   Date: Wed, 6 Sep 2000 12:00:13 +1200
   From: Chris Wedgwood <[EMAIL PROTECTED]>

   Right now as I see it (pretending everything is black and white);
   you, Dave, Linus and a few other people[1] are more than happy with
   debugging aids as they exist right now in a stock kernel.

   However, there are many many other people far less talented than
   yourselves, and for us less capable people having a compile-time
   option of IKD or something might really be of use

I think what it comes down to is that the folks who know the tree the
best and do the most work/fixing in it, think the debugging stuff
should remain a separate patch.

We believe that it doesn't belong in the main source tree mainly
for two reasons:

1) It is clutter.  I don't want to see the debugging/debugger code
   when most of the time it serves no purpose.

   NOTE: This is different than "BUG()" and "ASSERT()" which serve
         a different purpose because they not only act as a
         consistency check, but they also _document_ how the author
         of the code believed the world around it must behave :-)

2) It is hoped that because it isn't the default, some new people
   will take the quantum leap to actually try debugging using the
   best debugger any of us have, our brains, instead of relying on
   automated tools.

Later,
David S. Miller
[EMAIL PROTECTED]
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: [ANNOUNCE] Withdrawl of Open Source NDS Project/NTFS/M2FS for Linux

2000-09-04 Thread Marty Fouts

FWIW, although this is an interesting theory, in my experience, having a
good kernel debugger allows me *more* time to think clearly, rather than
less.  YMMV.

IMHO, the division of labor between man and computer should be that each
does what they are best at. In the case of debugging, this means letting the
machine do the bookkeeping things that debuggers are good for.

The best course is being able to solve the problem from first principles
from a problem description and the source code.  But there are plenty of
times when the problem description is ambiguous, or the source code is
someone else's but I need a fix anyway, or any of a thousand other reasons
why I end up using a debugger.

After all, if there is any science in "computer science", it is empirical
science, and the debugger is a lab tool that allows me to quickly test
hypotheses about the source of the problem.

It has also never been my experience that taking longer to scope out the
problem leads to a better fix.  Quite the contrary, especially when under
time pressures.  The sooner I can figure out what *is* broken, the more time
I have to think about how best to fix it.

While it may be true that (some) people spend more time thinking when they
are debugging without a debugger, it is probably also true that most of that
thought amounts to trying to figure out how to get the visibility the
debugger would have given you.

marty

-Original Message-
From: David S. Miller [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 04, 2000 5:04 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Subject: Re: [ANNOUNCE] Withdrawl of Open Source NDS Project/NTFS/M2FS for
Linux

   Date: Sat, 02 Sep 2000 15:58:50 -0600
   From: "Jeff V. Merkey" <[EMAIL PROTECTED]>

   I can only assume the reason for this is one of control, and that
   there are no valid technical reasons for it.  I have spent more
   nights with printk() than I care to.

And I bet the lessons learned and the issues involved in those nights
with printk will never leave your brain, you will remember precisely
in the future next time you see the same types of symptoms what kinds
of things to look for and where.

This is what a debugger does not do for you.  The debugger allows you
to be lazy, step around, "oh yeah check for NULL" and never have to
_think_ about what you're doing or the changes you're making or even
if the same bug might be elsewhere.

This is why Linus does not allow a debugging facility like this into
the kernel, so people spend time _thinking_ when they go hunting down
bugs.

It takes longer to nail a bug, yes, but the resulting fix is always
far superior.  And the person who discovers the bug leaves with a much
larger amount of knowledge about how that area of the kernel works.

Later,
David S. Miller
[EMAIL PROTECTED]
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: thread rant [semi-OT]

2000-09-02 Thread Marty Fouts


Just an aside on asynchronous I/O: concurrency by asynchronous I/O actually
predates concurrency via thread-like models, and goes back to the earliest
OS precursors.  Early work on thread-like concurrency models was, in part,
a response to the difficulties inherent in getting asynchronous I/O right,
and so now the pendulum is swinging back.

A pedantic nit: the basic Un*x I/O model, with synchronous interfaces,
predates Un*x networking by some time.  I would make the case that the
people who grafted Un*x networking (most notably sockets) onto Un*x didn't
really understand the I/O model, and crafted cruft that just happened to
have a few usable features, one being a sort-of way of sometimes getting
asynchrony.

Prior to POSIX, by the way, there were any number of Un*x variants that had
asynchronous I/O models, supporting any number of I/O completion
notification models, buffering schemes and (lack of) cancellation semantics.
My personal favorite was the variant in the Cray-2 Unicos; my personal least
favorite was Intergraph's Clix.

Marty

-Original Message-
From: Dan Maas [mailto:[EMAIL PROTECTED]]
Sent: Friday, September 01, 2000 11:50 PM
To: [EMAIL PROTECTED]
Subject: Re: thread rant [semi-OT]

[...]

Can we do better? Yes, thanks to various programming techniques that allow
us to keep more of the system busy. The most important bottleneck is
probably the network - it makes no sense for our server to wait while a slow
client takes its time acknowledging our packets. By using standard UNIX
multiplexed I/O (select()/poll()), we can send buffers of data to the kernel
just when space becomes available in the outgoing queue; we can also accept
client requests piecemeal, as the individual packets flow in. And while
we're waiting for packets from one client, we can be processing another
client's request.

The improved program performs better since it keeps the CPU and network busy
at the same time. However, it will be more difficult to write, since we have
to maintain the connection state manually, rather than implicitly on the
call stack.
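
For concreteness, the skeleton of such a multiplexed server loop, using
poll() (a sketch only: the caller fills in .fd and .events for each slot,
and accept/read/write handling and error checks are elided; names are
hypothetical):

#include <poll.h>

/* pfd[0] is the listening socket; pfd[1..nfds-1] are clients. */
void serve_loop(struct pollfd *pfd, int nfds)
{
    for (;;) {
        poll(pfd, nfds, -1);   /* sleep until some fd is ready */

        if (pfd[0].revents & POLLIN)
            ;  /* accept() the new connection, add its fd to pfd[] */

        for (int i = 1; i < nfds; i++) {
            if (pfd[i].revents & POLLIN)
                ;  /* read() the next piece of this client's request */
            if (pfd[i].revents & POLLOUT)
                ;  /* write() the next buffer of this client's reply */
        }
    }
}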


So now the server handles many clients at once, and it gracefully handles
slow clients. Can we do even better? Yes, let's look at the next
bottleneck - disk I/O. If a client asks for a file that's not in memory, the
whole server will come to a halt while it read()s the data in. But the
SCSI/IDE controller is smart enough to handle this alone; why not let the
CPU and network take care of other clients while the disk does its work?

How do we go about doing this? Well, it's UNIX, right? We talk to disk files
the same way we talk to network sockets, so let's just select()/poll() on
the disk files too, and everything will be dandy... (Unfortunately we can't
do that - the designers of UNIX made a huge mistake and decided against
implementing non-blocking disk I/O as they had with network I/O. Big booboo.
For that reason, it was impossible to do concurrent disk I/O until the POSIX
Asynchronous I/O standard came along. So we go learn this whole bloated API,
in the process finding out that we can no longer use select()/poll(), and
must switch to POSIX RT signals - sigwaitinfo() - to control our server***).
After the dust has settled, we can now keep the CPU, network card, and the
disk busy all the time -- so our server is even faster.
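
A compressed sketch of that AIO path (hypothetical helper names; error
checking omitted, and note the completion signal must be blocked before any
request is issued or sigwaitinfo() can miss it):

#include <aio.h>
#include <signal.h>
#include <stddef.h>
#include <string.h>

void start_read(struct aiocb *cb, int fd, void *buf, size_t n, int sig)
{
    memset(cb, 0, sizeof *cb);
    cb->aio_fildes = fd;
    cb->aio_buf = buf;
    cb->aio_nbytes = n;
    cb->aio_offset = 0;
    cb->aio_sigevent.sigev_notify = SIGEV_SIGNAL;
    cb->aio_sigevent.sigev_signo = sig;
    cb->aio_sigevent.sigev_value.sival_ptr = cb;
    aio_read(cb);   /* returns immediately; the disk works on its own */
}

struct aiocb *wait_one(int sig)
{
    sigset_t set;
    siginfo_t info;
    sigemptyset(&set);
    sigaddset(&set, sig);
    sigwaitinfo(&set, &info);          /* blocks until some read finishes */
    return info.si_value.sival_ptr;    /* then collect aio_return() on it */
}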

[...]
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



RE: thread rant

2000-09-02 Thread Marty Fouts

I'm confused.  Threads are harder than *what* to get right?

If you need concurrency, you need concurrency, and any existing model is
hard.  Besides, at some level, all of the concurrency models come down to a
variant on threads, anyway.

-Original Message-
From: Alexander Viro [mailto:[EMAIL PROTECTED]]
Sent: Saturday, September 02, 2000 1:07 AM
To: dean gaudet
Cc: Mike A. Harris; Michael Bacarella; Linux Kernel mailing list
Subject: Re: thread rant




On Sat, 2 Sep 2000, dean gaudet wrote:

> the thread bashing is mostly bashing programs which do things such as a
> thread per-connection.  this is the most obvious, and easy way to use
> threads, but it's not necessarily the best performance, and certainly
> doesn't scale.  (on the scalability side just ask yourself how much RAM is
> consumed by stacks -- how many cache lines will that consume, and how many
> TLB entries.  it sucks pretty fast.)
 
> state machines are hard.  while people on linux-kernel may be hot shot
> programmers who can do state machines in their sleep, this is definitely
> not the case for the average programmer.
> 
> fortunately fully threaded and fully state driven are two opposite ends of
> a spectrum, and there are lots of useful compromises in between where
> threads are used in a way that allows the average programmer to
> maintain/extend a codebase; while also getting the scalability of state
> machines.

Lovely. You've completely sidestepped the main part:

*threads* *are* *hard* *to* *write* *correctly*

Average programmers will fuck up and miss tons of race conditions.
Better-than-average programmers do. If you've got a shared resource - you
are in for problems. Threads are useful. But they take more effort and are
much harder to debug _and_ to prove correct. In other words, that's one of
the last resort tools, not the first one. If you can do it without shared
state - don't bother with threads. Same as with rewriting in assembler - do
it with small critical parts and don't unless you absolutely have to. The
first rule of optimisation: don't. Keep the critical sections small and if
you can avoid them - it's worth the effort. You'll win a lot when it
comes to changing the thing. And you will have to change it, sooner or
later. Same goes for debugging. Non-threaded code is easier to understand.
Yes, you may be very clever. But you'll have to debug it on Friday evening
after a hard week when you want only one thing - go home and sleep. Or
somebody else will and he will curse you. KISS. And threads are _not_
simple.
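
A miniature of the classic race being described, in C (a hypothetical
sketch): two threads incrementing a shared counter lose updates, because
counter++ is a read-modify-write; the fix is the usual small critical
section.

#include <pthread.h>
#include <stddef.h>

static long counter;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);   /* without the lock, two threads
                                        interleave the read-modify-write
                                        and increments simply vanish */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}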

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/