Re: Current status of NTFS support

2001-04-20 Thread J. Dow

From: "Lee Leahu" <[EMAIL PROTECTED]>

> would somebody be kind enough to explain why writing to
> the ntfs file system is extremely dangerous, and what are the
> developers doing to make writing to the ntfs filesystem safe?

My understanding of the situation is that writing to an NTFS volume is not
quite 100% guaranteed to destroy the disk directory structure. MS mutates it
faster than people can reverse engineer it in a proper "clean" manner. The
person who had been working on the issue had access to MS information in
support of some other products. MS came down on him about supporting NTFS. So
he has surrendered such materials as he had rather than continue with the MS
product support, and he is concentrating on Linux. But until his NDA runs out
he cannot work on the NTFS code. Other people have picked up the ball. But as
noted, MS mutates NTFS remarkably rapidly, so I'd not look for safe write
support for NTFS in the near future.

I have oversimplified the whole issue, for which I hope others forgive me. I
see no benefit in a rehash of the issue, so I am attempting to inject enough
information that it will be dropped.

{^_^}Joanne Dow, [EMAIL PROTECTED]




Re: Microsoft beginning to open source Windows 2000?

2001-03-08 Thread J. Dow

From: "Alan Cox" <[EMAIL PROTECTED]>

> > Please check out this article. Looks like Microsoft knows open source is the
> > thing of the future. I would consider that it is a beginning step for full
> > blown GPL
> 
> Oh sure
> 
> Maybe 1200 people
> 
> "Users are prohibited from amending"
> 
> Sorry but Linus had > 1200 people able to modify his code in 1992

So did BillyG. The difference is that BillyG's were all overworked hackers
who were on the MS campus under BillyG's whip^H^H^H^Hpay. I treated that
as proof that you need WAY more than that many monkeys to generate something
stable and workable if you adopt the Mongol hordes programming style.

BillyG HAS thousands changing the source code. He pays them to do it.
Linus has far fewer actually changing the source code, if I read this
list correctly. Experience suggests this is as it should be. Even in
coding, "too many cooks spoil the broth."

{^_-}




Re: Microsoft ZERO Sector Virus, Result of Taskfile WAR

2001-03-06 Thread J. Dow

From: "Jens Axboe" <[EMAIL PROTECTED]>
To: "Andre Hedrick" <[EMAIL PROTECTED]>
Cc: "Alan Cox" <[EMAIL PROTECTED]>; "Linus Torvalds"
<[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>

> > This is a LIE, it does not destroy the drive, only the partition table.
> > Please recall the limited effects of "DiskDestroyer" and "SCSIkiller"
> >
> > This is why we had the flaming discussion about command filters.
>
> But I might want to do this (write sector 0), why would we want

Jens, and others, I have noted a very simple data killer technique that
at LEAST works on Quantum SCSI drives as of a couple of years ago and some
other earlier drives I felt could be sacrificed to the test. You can write
as many blocks at once as SCSI supports to the drive as long as you do
*NOT* start at block zero. If you write more than one block to block zero,
the drive becomes unformatted. The only recovery is to reformat the
drive; the data on the drive is lost for good. I have, to my great chagrin,
discovered this twice, the hard way. Once on a large Micropolis hard disk I
was working with in the block zero area for partitioning purposes. And the
other time when I was attempting to make a complete duplicate of a 2G
Quantum SCSI disk to another identical 2G SCSI disk. I ended up writing a
script for the process that wrote one block to block zero and then proceeded
to use large blocks for the rest of the disk, using dd under 2.0.36 at the
time.
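
A minimal sketch of the shape of that script, with made-up device names and
a 1 MiB chunk size chosen purely for illustration (this is a reconstruction,
not the original script):

  # write block zero by itself, as a single 512-byte transfer
  dd if=/dev/sda of=/dev/sdb bs=512 count=1
  sync
  # fill in the rest of the first 1 MiB so the large-block pass can start
  # on a 1 MiB boundary; none of these writes starts at block zero
  dd if=/dev/sda of=/dev/sdb bs=512 skip=1 seek=1 count=2047
  sync
  # everything else in 1 MiB chunks; skip/seek counts are in bs units here
  dd if=/dev/sda of=/dev/sdb bs=1024k skip=1 seek=1

The sync calls are there to flush the small writes out before the big ones
start, so block zero never ends up part of a larger merged transfer.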

If this problem still exists, the lowest-level drivers in the OS should
offer protection against it so that people working at any higher level
do not see it and fall victim to it.

{^_^}Joanne Dow, [EMAIL PROTECTED]





Re: Linux on the Unisys ES7000 and CMP2 machines?

2001-03-04 Thread J. Dow

From: "Miles Lane" <[EMAIL PROTECTED]>

> I noticed that this article mentions that Unisys has
> no plans to port Linux to its "cellular multiprocessor"
> machines.  So, I am wondering if anyone is working
> on this independently.

Miles, if these babies are the 32 processor monsters that UniSys
has been making recently, there IS interest in getting Linux onto them.
But the people I know who have mentioned "interest", mostly from
a curiosity standpoint, have their hands neatly tied by Microsoft.
Ya see, the developers at UniSys have NT source licenses so they
can develop the HALs for the monsters. Microsoft insists that they
spend considerable time away from OS development before working
on another OS. So, no Linux port is in the offing, I suspect. The
people who KNOW the machine are not allowed to do it. And I can
guarantee you that the machines are not well documented at the
level a person making an NT port would need. (As an aside, it seems
the UniSys guys know more about how to debug HALs without fancy
ICEs than the MS guys do. At least the amount of travel between
Mission Viyoyo and Redmond suggests it.)

{^_^}   Joanne "Too many years in a DoD environment" Dow, who has
put a whole string of two's together to figure out the
above from clues laid by her HAL developer partner.
[EMAIL PROTECTED]





Re: Linux on the Unisys ES7000 and CMP2 machines?

2001-03-04 Thread J. Dow

From: "J Sloan" <[EMAIL PROTECTED]>

> My take on it is that unisys is an example of brain damage
> and it's easiest to ignore/work around them rather than
> trying to get them out of bed with microsoft. Nature will
> eventually take its course with unisys as it did with Dec.

jjs, you can take that to the bank as collateral. Alas, my partner
has been with them since Burroughs days as an undegreed OS developer.
Finding the same level of pay in the "sane world" is proving rather
annoyingly difficult here in the San Berdoo County area. So he is
riding it out till he can retire. {o.o}

{^_^}Joanne Dow, [EMAIL PROTECTED]





Re: rocketport pci question... it stopped working after 250 days uptime

2000-11-29 Thread J. Dow

From: "Federico Grau" <[EMAIL PROTECTED]>

> We have several linux boxes using 8 port rocketport pci multiport serial
> cards.  Earlier last week 3 of them stopped working within a 24 hour period.
> These three boxes had similar uptimes (since their last kernel rebuild); 249
> days, 248 days, 250 days.  Comparing the logs of each box, we saw that each
> box's rocketport stopped working after approximately 248 days 16 hours uptime.

If it was 248 days, 13 hours, 13 minutes, 56.48 seconds, this represents a
32-bit counter on a 5 ms clock overflowing. I'd look for that in the
RocketPort code, although I remember Jeff remarking about something else
failing at about the same uptime.
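
A quick sanity check of that figure (plain bc arithmetic, nothing pulled
from the RocketPort driver itself):

  echo '2^32 * 0.005' | bc -l         # 21474836.480 seconds until a 5 ms tick count wraps
  echo '21474836.48 / 86400' | bc -l  # about 248.55 days

248.55 days is 248 days plus 13 hours 13 minutes and change, which is why
the reported uptimes above are so suggestive.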

{^_^}Joanne Dow, [EMAIL PROTECTED]





Re: About IP address

2000-11-24 Thread J. Dow

From: "John Crowhurst" <[EMAIL PROTECTED]>

> > For example, Class B address range is 128.1.0.0 ~ 191.254.0.0
> > 
> > Why can't 128.0.0.0 and 191.255.0.0 be used?
> > 
> > I can't understand it
> 
> This is because it's the network and broadcast addresses of a Class A address
> range. Simple answer :)

That is not a responsive answer, John. And since you gave it, I issue you the
challenge to declare why 128.0.0.1 through 191.255.255.254 are not legal
address ranges.

{^_-}





Re: malloc(1/0) ??

2000-11-06 Thread J. Dow

From: "Dan Kegel" <[EMAIL PROTECTED]>
> [EMAIL PROTECTED] asked:
> > [Why does this program not crash?]
> >
> > main() 
> > { 
> >char *s; 
> >s = (char*)malloc(0); 
> >strcpy(s,"f"); 
> >printf("%s\n",s); 
> > } 
> 
> It doesn't crash because the standard malloc is
> optimized for speed, not for finding bugs.
> 
> Try linking it with a debugging malloc, e.g.
>   cc bug.c -lefence
> and watch it dump core.

I'm not sure that is fully responsive, Dan. Why doesn't the
strcpy throw a hissyfit and coredump?

{^_^}




Re: Play Kernel Hangman!

2000-11-06 Thread J. Dow

From: "Leen Besselink" <[EMAIL PROTECTED]>

> On Mon, 6 Nov 2000, Jeff Dike wrote:
>
> > After a stranger than usual late-night #kernelnewbies session on Thursday, I
> > was inspired to come up with Kernel Hangman.  This is the traditional game of
> > hangman, except that the words you have to guess are kernel symbols.
> >
> > So, test your knowledge of kernel trivia and play it at
> > http://user-mode-linux.sourceforge.net/cgi-bin/hangman
> >
> > Jeff
>
> Actually, OpenBSD already has this (in the kernel!). After a kernel crash
> once, I got into the kernel debugger. I didn't really know how to use it, but
> I could play hangman. I just downloaded the source

Now that might be the best argument for a kernel debugger we've seen yet.

{O,o}Joanne Dow, somewhat crazed.




Re: Off-Topic (or maybe on-topic)

2000-10-27 Thread J. Dow

From: <[EMAIL PROTECTED]>

> If Bill said 'screw you' to the blackmailer and made the press release,
> we should see the source on web sites soon.  Then we can see how bad it
> really is.  Maybe even fix it.

Dave, my partner has legal access to the MS source code. In some of my own
work I discovered an interesting apparent HAL bug related to ACPI and
the PerformanceCounter API. A fix for a bad Intel chip (a 24-bit counter that
doesn't always count correctly) was falsely triggered by my K7M motherboard
with a 700MHz Athlon on it. He adapts the HALs for some behemoth machines. So
he has seen the code involved. It is literally chock full of hacks and patches
and such - because of chip hardware defects. I'd be VERY careful about
casually going in and patching or repairing that source code based on such
dinner-table conversation about the HAL code as we've had. (I know no details.
I just know he regularly moans about it. - I bet he's having an interesting
day up there today. He's there for a meeting with the W2K folks. I'll have
to ask him how the anthill was today when he gets home.)

{^_-}Joanne Dow, [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED]





Fw: failure to burn CDs under 2.4.0-test9

2000-10-06 Thread J. Dow


- Original Message - 
From: "J. Dow" <[EMAIL PROTECTED]>
To: "Andre Hedrick" <[EMAIL PROTECTED]>
Sent: Friday, October 06, 2000 0:35
Subject: Re: failure to burn CDs under 2.4.0-test9


> From: "Andre Hedrick" <[EMAIL PROTECTED]>
> 
> > On Thu, 5 Oct 2000, J. Dow wrote:
> > 
> > > For that matter Andre a 4 speed HP can certainly burn at 4 speed except
> > > that cdrecord and the OS conspire to prevent this through a mathematical
> > > error. It's rather a tad frustrating.
> > 
> > Explain, please
> 
> I skipped all the mkisofs stuff. I downloaded the 7.0 images and tried to
> burn them to CDROM on CDs rated for 6 speed and a drive, HP CDWriter+ 8100,
> rated for 4 speed on write. The system is an Athlon 700MHz with a K7M
> board. The CDWriter+ is IDE. The OS at the time was the -22 build from
> RedHat's pinstripe/.../preview directory recompiled in an attempt to
> make the via82cxxx_audio work. (It works right under 2.4.0-test8 I am
> running now.)
> 
> > cdrecord -v speed=4 dev=0,0,0 -eject -data $ISO9660_PATH/cd_image
> 
> Remarkably similar to the command I used. It still burned at 2 speed. I
> traced through the cdrecord code and three levels of OS driver finding
> several conversions from nominal speed ratings to "precise" speed ratings.
> The tool concluded that the drive had reported it could do 700kB/S while
> the software insisted it needed to be capable of at least 705.6kB/S. It
> looked like in some places the kernel used a nominal 175kB/S as the
> conversion figure and in others 176.4kB/S. About then the maze of twisty
> little passages and the loss of time to pursue it to a conclusion wore
> thin and I moved on to making money to pay for my food and housing and
> clothing by being a mercenary instead of a philanthropist. I could have
> done a quick hack to cdrecord, except it appears to be intimately tied
> to too many other things whose source I'd have to install and deal
> with. 'Sides, it looked like the kernel was slightly internally schizoid.
> That MIGHT be the proper place to repair things. And ide-scsi or ide-cd
> might be the proper place to start investigating.
> 
> This problem seems to have existed since 2.0.xx days when I first noticed
> I could not get cdrecord to even TRY to record at the drive and disc
> rated speeds. (Nor can I get it to do it with 2.2.16 on a machine with a
> SCSI HP CDWriter+ 9200 which won't do its rated speed, either.)
> 
> {^_^}
> 
> 




Re: failure to burn CDs under 2.4.0-test9

2000-10-05 Thread J. Dow

From: "Andre Hedrick" <[EMAIL PROTECTED]>
> On Thu, 5 Oct 2000, Jeff V. Merkey wrote:
> 
> > I am seeing this as well.  I got around it by setting speed=2.  If you 
> > are using one of the newer R/W CD/DVD drives (which are slower than 
> > crap, BTW on Linux), you should set the speed manually and try 
> > progressively slower settings until you find one that works.
> 
> > cdrecord -v speed=2 dev=1,0,0 file.iso
> 
> Sorry Jeff, I have to call you on that one.
> 
> HP's are known to burn at 4x and 8x clean.
> I just got my 8x CD-RW HP 9100 series and will verify the issues.
> Also will try to verify on my Panasonic DVD-RAM ATAPI.
> Linux DVD-RAM is looking like it will be showcased at Comdex in the
> DVD-RAM Pavilion.

For that matter, Andre, a 4-speed HP can certainly burn at 4 speed, except
that cdrecord and the OS conspire to prevent this through a mathematical
error. It's rather a tad frustrating.
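
To spell out the sort of mismatch I mean (a back-of-the-envelope sketch
using the per-1x conversion figures I dug out of the code, 175 versus
176.4 kB/s; treat it as illustrative, not as a quote from either source
tree):

  echo '4 * 175'   | bc   # 700   kB/s -- what the drive's report converts to
  echo '4 * 176.4' | bc   # 705.6 kB/s -- the minimum the software demands for 4x

700 falls just short of 705.6, so the request for 4 speed quietly drops
back to 2 speed.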

{^_^}




Re: Tux2 - evil patents sighted

2000-10-03 Thread J. Dow

From: "Daniel Phillips" <[EMAIL PROTECTED]>

> Yes, I know the game, Unisys played it with gif.  Wait until it's in
> widespread use then appear out of the woodwork and demand licence fees. 
> It's called submarining.  It's evil.  People and corporations who do it
> are little better than thugs.

This one is a bad example, Daniel. The word from inside UniSys is that
this was pure ineptitude in action.
{o.o}




Re: What is up with Redhat 7.0?

2000-09-30 Thread J. Dow

The install process for 7.0 includes the opportunity to install kgcc. If
you look at its description, it tells you it is for kernel compiles. Use it.
It works. Quit complaining and RTFM sometime.
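
If it helps, the usual invocation is something along these lines (a sketch,
not gospel -- check your own tree's top-level Makefile first):

  # with a 2.4-era Makefile, overriding CC on the command line is enough
  make CC=kgcc dep bzImage modules
  # if your tree folds -D__KERNEL__ and the include path into the CC line
  # itself, edit that line in the top-level Makefile to say kgcc instead

Either way, the point is simply that the kernel gets compiled with kgcc
while everything else keeps using the regular gcc.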

This list has enough traffic it should not have to suffer from people who
cannot RTFM.

{O.O}

- Original Message - 
From: "Daniel Stone" <[EMAIL PROTECTED]>


> OK, but I can't leave without pointing out that having gcc 2.96 breaks
> compiling gcc 2.95.2. I've got Debian for my main machine and RH7 the other
> machine on my desk as well as a couple of other test boxen (have to be
> administered by clueless WinNT-type operators, so Debian was out), and RH7
> refuses to compile 2.2.17 or 2.4.0-test9-pre7. "Aha!" thinks Daniel, "I'll
> just recompile gcc 2.95.2 and all will be well!". No joy; it refuses to
> compile. Shame, since RH7 has improved dramatically in terms of supporting
> hardware RAID 5 as the root partition from RH6.2 (i.e. from not at all to
> working perfectly).





Re: Availability of kdb

2000-09-18 Thread J. Dow

From: "Jeff V. Merkey" <[EMAIL PROTECTED]>
> 
> Marty,
> 
> I think they said they could care less about kernel debuggers.  Just go
> write one, use Keith's or ours or whatever, and do what you want with
> your Linux development -- Linus doesn't seem to care if you just make a
> fork of Linux or someone else does with a debugger for your projects. 
> These guys have been debugging and developing their OS over the internet
> for a long time, their debugging methods seem more tied to their
> "telepathic repore" with each other and some very solid and studious
> skills at reviewing code rapidly and a thorough understanding of it. 
> They also have the luxury of taking the world at their own pace with
> Linux evolution and have stated no desire to change their philosophy.  

Jeff et al who might prefer a kernel debugger,

One should note that when a person or critter is backed into a corner
and pressured hard enough to make an "over my dead body"
level statement, more pressure is likely to solidify the position rather
than change it. In the case of a critter it is tied up with survival. With a
person it is often tied up with "face". At this point Linus has been led
into making a very strong negative comment about kernel debuggers.
He is pretty much in a position now that "demands" he not change that
opinion lest he appear to be a weak leader. We were all foolish to
back him into a corner.

I see an image of a penguin. He is wearing a bandana around his
forehead and cammies for clothes. He is armed with an impressive
array of things which carve and cut and things which go bang and
boom and make holes in other things. He is glaring over the top of
a foxhole filled with even more such tools screaming, "You'll have that
kernel debugger in my main Linux build over my cold dead body!"

Now, who makes the cartoon for that one? (And if the penguin had
a superficial resemblance to Linus it'd be DELIGHTFUL!)

{^_-}Joanne "A crazed maniac herself" Dow, [EMAIL PROTECTED]





Re: Availability of kdb

2000-09-10 Thread J. Dow

From: "Stephen E. Clark" <[EMAIL PROTECTED]>

> Linus Torvalds wrote:
> >
> > On Sat, 9 Sep 2000, Oliver Xymoron wrote:
> > >
> > > Tools are tools. They don't make better code. They make better code easier
> > > if used properly.
> >
> > I think you missed the point of my original reply completely.
> >
> > The _technical_ side of the tool in question is completely secondary.
> >
> > The social engineering side is very real, and immediate.
> ...
>
> > Linus
> >
>
> Then why don't we get rid of the compilers and assemblers and go back to
> the old way of doing it
> all - coding on the bare metal. Believe it or not at one time it was
> done this way. Imagine where
> we would be if everyone had said lets not invent tools to make ourselves
> more productive.
>
> My $.02
>
> Steve Clark

And for my severely depreciated $0.02, I am becoming concerned
that these guys care more about some macho ideal of
generating programs while half crippled than about having things
work properly and maintainably no matter what gets in the way.
Art has flaws in it that have been painted over, often two or three
times. I grew up with a giant painting of Beethoven alongside the
dinner table. It had been presented to my step-grandfather by
the Leipzig Symphony Orchestra. It captured the brooding artist
wonderfully. And in humid weather you could see his third hand,
the one the artist didn't like and painted over.

For all the zen meditation on code I begin to wonder how many of
the fixes really are fixes and how many are painted-over features
that didn't quite work out. It worries me no small bit.

And here I thought macho didn't fit well with people who
used their brains. I see it is as alive and well here as on the
streets of East LA.

{O.O}




Re: Availability of kdb

2000-09-10 Thread J. Dow

From: "Linus Torvalds" <[EMAIL PROTECTED]>
> Yes, using a power-drill and other tools makes a lot of carpentry easier.
> To the point that a lot of carpenters don't even use their hands much any
> more. Almost all the "carpentry" today is 99% automated, and sure, it
> works wonderfully - especially as you in carpentry cannot do it any other
> way if you want to mass-produce stuff.
> 
> But take a moment to look at it the other way. 
> 
> If you want to find the true carpenters today, what do you do? Not just "a
> carpenter". But THE carpenter.
> 
> I'm saying that maybe you put up a carpentry shop where everything is
> lovingly hand-crafted and tools are not considered to be the most
> important part - or even necessarily good. And yes, some people
> (carpenters in every sense of the word) will be frustrated. They can't use
> the power-lathe that they are used to. It doesn't suit them. They _know_
> that they are missing something.
> 
> But in the end, maybe the rule to only use hand power makes sense. Not
> because hand-power is _better_. But because it brings in the kind of
> people who love to work with their hands, who love to _feel_ the wood with
> their fingers, and because of that their holes are not always perfectly
> aligned, not always at the same place. The kind of carpenter that looks at
> the grain of the wood, and allows the grain of the wood to help form the
> finished product.
> 
> The kind of carpenter who, in a word, is more than _just_ a carpenter.
> 
>   [ Insert a silent minute to contemplate the beauty of the world here. ]

Properly contemplated. And I wonder at the hypocrisy of using a compiler
or an assembler instead of carefully hand-crafted bits on a blank disk.

{o.o}




Re: Linux-2.4.0-test8-pre6

2000-09-07 Thread J. Dow

> obpainintheass:  haven't you anti-debugger-religion folks been claiming
> that if you don't have a debugger you're forced to "think about the code
> to find the correct fix"?  so, like, why are you guessing right now?  :)

dean, that is another man behind the curtain we are supposed to ignore
when our annoying little dog finds him.

{^_-}





Re: [OT] Re: Availability of kdb

2000-09-07 Thread J. Dow

Timur,
> Well, if it really is just his hobby, then he shouldn't be chanting the "World
> Domination" mantra.  Either Linux belongs to Linus, in which case it's
> irrelevant outside his personal world, or it is a tool for all computer users.
> If Linus really doesn't care who uses his OS, then he should not be encouraging
> community participation, and he shouldn't be accepting speaking engagements at
> major conventions where business users attend to decide whether Linux is for
> them or not.

Shh, you're not supposed to notice that man behind the curtain
over there that your annoying little doggy found.

{^_-}




Re: Availability of kdb

2000-09-07 Thread J. Dow

From: "Horst von Brand" <[EMAIL PROTECTED]>

> "J. Dow" <[EMAIL PROTECTED]> said:
> 
> [...]
> 
> > The point is that WITH a debugger you have to take that step as well.
> > A person without the self discipline to do that is still a child and should
> > not be in this business. The debugger gives you a better picture of what
> > is actually happening. If that leverage is considered to be a bad thing
> > I am surprised and dismayed. A bloody LOT of personal experience 
> > and technological bloody noses suggests it is a very good thing.
> 
> This is true as long as the debugger hooks have no (or very minimal) impact
> on the instrumented system. Impossible (or very nearly so) in the case of a
> massively parallel program like the kernel. Where it is possible it has
> been done, in general. General hooks into the distributed subsystems inside
> the kernel are only a bit easier to maintain than the instrumented code,
> and they impact its performance, stability and readability.

Oh bat doodoo. In the first place, you use a debug build with the debugger
built in for debugging. You run a real build without the debugger for
production. In the second place, since you have the sense to follow the
aforementioned good practice, you keep in mind that there are interactions
involved to indicate whether the problem goes away or not and how the
problem manifestation changes with the debugger present. In the third
place, if you are reduced to printk, you can have even worse timing issues
than with a well-constructed debugger.

If you have that well-constructed debugger, you have a patch to install it.
If there is a "make debugged" and a "make nondebugged" target in
the kernel Makefile, and the debugger is not patched in for most people
until they issue the "make debugged" command, you have exactly what
you have now, with the addition of a debugger for which there is a huge
incentive to maintain it.

Heck, run up proposed patches in the debugger, confirm your desk
analysis of what they do, THEN install the patches into the kernel
formally. (And for Ghu's sake make it a source level debugger if you
want it to work right.)

> You did not build a logic analyzer and circuit simulator into each and
> every transmitter you built, did you?

I built in the test points these tools attached to in each and every module
of each and every radio. Except obviously the simulator was used beforehand
to predict circuit performance. If you design for proper test points
there is little or no effect on operation and a great improvement in ease
of maintenance and tuneup.

> I've used debuggers too, and do so sometimes to check on stuff I write when
> I'm truly stumped, but in general I stay away from them. I (try to) write
> stuff in modular, cleanly separate parts with (I hope) well understood
> interactions. That makes isolating bugs easy enough without a detailed
> step-by-step following of the program, at least most of the time. Debuggers
> _can_ be useful, no question about that. But you can also waste a huge
> amount of time with them. They just give you _one_ very narrow picture of
> what is going on, and that picture is from a quite useless perspective most
> of the time.

That design approach plus a good debugger can reach the finish line
of clean, completed, and debugged (to the same level) code quicker than
without the good debugger. The times I have most loved the debugger are
when the code generated is very different from the code laid down, either
due to a typo I continually overlooked (I am human - I think) or a compiler
bug. Sometimes it actually adds time to development if I didn't NEED to run
that debugger-based verification that what I intended is what appeared.
(Sadly I don't quite have the self-discipline to make that run EVERY time,
and even worse, sometimes I have not had the debugger to work with.)
So far I have noticed that the debuggers catch a wider range of issues
than the desk analysis does.

{^_^}




Re: [OT] Re: Availability of kdb

2000-09-07 Thread J. Dow

> On Thu, 7 Sep 2000, George Anzinger wrote:
> 
> > Chris Wedgwood wrote:
> > > 
> > > On Wed, Sep 06, 2000 at 12:52:29PM -0700, Linus Torvalds wrote:
> > > 
> > > [... words of wisdom removed for brevity ...]
> > > 
> > >  I'm a bastard, and proud of it!
> > > 
> > > Linus
> > > 
> > > Anyone else think copyleft could make a shirt from this?
> > 
> > I like this one better:
> > 
> > "And I'm right.  I'm always right, but in this case I'm just a bit more
> > right than I usually am." -- Linus Torvalds, Sunday Aug 27, 2000.
> > 
> 
> I like this one even better:
> 
> "Little children, keep yourselves from idols" -- St John, Ist century.
> 
> Regards,
> Tigran

Aw, Tigran, give the kid his hobby, OK? We can try to bang some
sense into his head and suggest ways his hobby could offer more
satisfaction from good results achieved and make it more fun for
the rest of us. But this IS his model train setup we're playing on. So
in the final analysis he gets to choose the means of train control we
use and everything else, as well.

{^_-}


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



Re: Availability of kdb

2000-09-06 Thread J. Dow

> Or, to misquote Feynman (another cantankerous bastard, but proud of it):
> 
> "Look at the problem. Think really hard. And write the correct code."

In a smallish voice I note that the debugger helps you look at the problem.
It is your X-Ray vision.

{o.o}




Re: Availability of kdb

2000-09-06 Thread J. Dow

Quoth Linus
> Apparently, if you follow the arguments, not having a kernel debugger
> leads to various maladies:
>  - you crash when something goes wrong, and you fsck and it takes forever
>and you get frustrated.
>  - people have given up on Linux kernel programming because it's too hard
>and too time-consuming
>  - it takes longer to create new features.

Quoth an unrepentant jdow:
Nice men of straw there. I note you properly demolished them.


> It's partly "source vs binary", but it's more than that. It's not that you
> have to look at the sources (of course you have to - and any good debugger
> will make that _easy_). It's that you have to look at the level _above_
> sources. At the meaning of things. Without a debugger, you basically have
> to go the next step: understand what the program does. Not just that
> particular line.

The point is that WITH a debugger you have to take that step as well.
A person without the self-discipline to do that is still a child and should
not be in this business. The debugger gives you a better picture of what
is actually happening. If that leverage is considered to be a bad thing,
I am surprised and dismayed. A bloody LOT of personal experience
and technological bloody noses suggests it is a very good thing.

> And quite frankly, for most of the real problems (as opposed to the stupid
> bugs - of which there are many, as the latest crap with "truncate()" has
> shown us) a debugger doesn't much help. And the real problems are what I
> worry about. The rest is just details. It will get fixed eventually. 

Is somebody saying that a debugger is the key to heaven? I certainly
am not. It is the platform at the top of the ladder that holds your paint
bucket so that you don't have to climb up and down the ladder of your
understanding quite so often. It also beats you over the head with
facts when your head has it wrong. And I must admit that is a painful
hit directly to the ego. {^_-}

> Because I'm a bastard, and proud of it!

And I am an older bitch and proud of it. I've whupped bigger boys than
you.

{^,-}At least I ended on some humor.
(And I even arranged that you only see the list copy, reducing
your email inundation by one. Er, why's the list set up without
a reply-to for the list?)





Re: [ANNOUNCE] Withdrawl of Open Source NDS Project/NTFS/M2FS for Linux

2000-09-06 Thread J. Dow

From: "Horst von Brand" <[EMAIL PROTECTED]>
> Problem is:
> 
> - Debugging code has to be written, integrated and debugged. It has to be
>   designed for collecting certain types of data. If you get the data to be
>   collected wrong, it is useless (and as you don't know what bugs you are
>   looking for, you _will_ get it wrong most of the time, unless you collect
>   everything in sight)
> - Debugging code impacts readability, writeability, and performance of the
>   instrumented code, specially if it is pervasive (and most functionality
>   isn't localized)
> - Debugging harnesses have to evolve together with the instrumented
>   code. If they don't, they are just a liability. If they do, they double
>   (probably more) the development time, as they have to be kept in synch.
> - Broken debugging code impacts stability
> 
> Do we want Linus & Co. writing cool kernel code or writing a whiz-bang
> kernel code debugger? Developer time _is_ finite, you know...

So you place your money into the bank until you have a "large enough sum
to be worth investing in a mutual fund or stock", I presume. If not, then you
SHOULD understand the invest-now-for-returns-later idea. If the time is
invested now, the return on investment later will be far greater than if they
finally, grudgingly try to do it later.

While I was at Magnavox I watched several software projects from
proposal through production code. The more time that was invested
up front in tools, the more likely the project was to be on time and on
budget or even under budget. (And this was with that roundly despised
thing called Ada. Go figure.)

I rather strongly think it is well past time to make the debugger
investment. But building a good one is hard to do. Where is someone
smart enough and capable enough to get the core built so that there
is a rational debugger project to parallel the kernel development?
It's not a glamorous job. But somebody ought to grit their teeth and
get their name on The Linux Kernel Debugger. I suspect the community
would remember THAT person a LONG time. I kinda wish I had the
time and that it better fit my skill sets. I know the people who fit the
project are out there earning REALLY big bucks from Raytheon
and Rockwell as well as some of the more often thought of companies.

> Witness the people who have argued _against_ integrating debugging code
> into the kernel, *even code they designed and wrote themselves*.

"If it was hard to design by God it should be hard to maintain, too."

> Check out stuff like the user-level kernel, AFAIU there it is possible to
> attach a run-of-the-mill debugger to a live kernel. Or look at the remote
> debugging stubs for gdb.
> 
> It is not that they don't want better tools, the problem is that the tools
> available (or buildable at a reasonable cost) are woefully inadequate to
> the task at hand. Typical (low-level!)  question when debugging is "Where
> the $EXPLETIVE does this $WRONG_VALUE in $RANDOM_VARIABLE come from?", and
> no current debugger is able to give you even that little. Sure, you can
> single-step to the point of failure, but it is often faster to just grep
> for the places where it can be changed. Don't even think about asking stuff
> like "Why is $RANDOM_FUNCTION being called so much?". Given this scenario,
> the only useful tool is a good understanding of the kernel; instrumentation
> which gets more in the way than its usefulness warrants is just left out,
> or rots away.

Dr. Horst H. von Brand, I think you are pointing out one of the more
splendidly large worms in the Open Source apple. Better tools are
needed, and if they existed they would be used. But nobody wants the
job. So it doesn't get done.

Alas, *I* do not have a solution other than maybe to embarrass someone
into doing what has to be done and getting intense satisfaction about it
when it is done.

{^_^}






Re: [ANNOUNCE] Withdrawl of Open Source NDS Project/NTFS/M2FS for Linux

2000-09-06 Thread J. Dow

> > A good debugger is a very 
> > good leveraging agent. I can cut a 2x4 with a largish pocket knife,
> > in theory. (I have never wasted the time.) In a pinch I have cut a
> > 2X4 with a hand saw. I can see that if I wanted to do this for any
> > serious work power tools are required. The same logic seems
> > to fall into the programming realm.
> 
> I disagree. No one here is dumb enough to use a wholely inappropriate
> tool for a particular task. But using a debugger is often (but not
> always) like sawing bits off your 2x4 until it happens to fit the
> gap. What you need to do is to understand the problem parameters,
> measure them, mark your 2x4, then cut using whatever tool is best
> suited to the job. In woodwork the results tend to be superior :-)
> 
> Mike

Sigh, one more try to get youse guys to understand.

I started out as an RF engineer. I note that I was considered damn good
at it. Part of the reason I worked the miracles I worked, designing radios
for the military with specifications straight out of science fiction, is tools.
The one indispensable tool was a deep understanding of which way the
electrons flow and how. Another was mathematics. A third is experience.
Without these basics no other tool would have led me to the solutions I
found.

But I also used other power tools. I built my own computer-aided circuit
analysis program. With that program I found some problems early on
that would have kept me at the test bench for weeks tracking them down. I
added one resistor to a variable tuned circuit and increased the circuit's
usable dynamic range by 30dB. I confirmed it with another tool I consider
indispensable, the spectrum analyzer. A couple of years later we got our
first network analyzer and I was in hog heaven. When we finally got a
computer-controlled network analyzer I did even better. I wrote some
custom software for it that led to being able to calibrate the test set
for the GPS navigation data unit to 10 times the precision of the NDU
itself in 30 minutes, from dragging the equipment into the room to tear-down.
When the poor sod from Bendix (the folks that made that iteration of the
NDU) who was visiting that day saw this he died. It took them a full day
to get to the same results AND they had not noticed some effects I
pointed out to him DURING that 30 minutes. *THIS* is what a debugger
is for. It is a window on the software you are attempting to debug that
allows even the best in the business to do better. I rather immodestly
lay claim to being one of the best RF engineers in the country in the
70s for high dynamic range antijam receivers and frequency synthesizers.
I might not have been so good except that I adopted tools and used them
WITH my knowledge gained from building ham radio equipment of my
own since the 9th grade. Experience, knowledge, AND TOOLS lead to
the best product. Cripple your tools and you cripple your product.

30 years of experience have proven this to me over and over again from
watching auto mechanics and ditch diggers through every engineering
discipline I have ever paused to observe. Only a damnfool eschews good
tools because of some sense of "pride" that doing it the caveman way
"forces me to think more." Son, if you need to be forced into thinking you
are in the wrong business. I got into SW because RF engineering was
getting boring and this was more challenging and fun. ('Sides, it gets VERY
tiring for a "guril" to fight most of the RF weenies her age who think that
since she is a "guril" she cannot POSSIBLY understand electrons. I had
to prove them wrong and fools too many times. I got bored. SW was more
"forgiving" in that regard. It allowed me more time to concentrate on
"doing it right and well." I figure I am "good" but not "great" with SW. Er,
the company I last worked for thought different. I got all the jobs nobody
else seemed able to do. Sadly I had to do most of them without adequate
tools. I really learned to like the leverage tools give. Tools do not SOLVE
the problems. But they leverage my knowledge so that *I* can solve the
problem. It is entirely up to me to have the self-discipline to think through
what I can learn with the debugger until I have a solution that "makes
sense". "It works" does not always "make sense". So I usually do not
stop at "it works". That is being an adult and having some self-discipline
as opposed to being a child in a high-tech playground. I get adult-level
rewards and satisfaction that the children miss, too. The "hit" or the
"high" is greater the adult way.

{^_^}





Re: [ANNOUNCE] Withdrawl of Open Source NDS Project/NTFS/M2FS for Linux

2000-09-06 Thread J. Dow

From: "Ingo Molnar" <[EMAIL PROTECTED]>
> > If the Kernel Debugger creates faulty solutions through lack of
> > thinking, and asking why, then surely printk is at least as bad
> > because it allows somebody to view the operation of the kernel through
> > a keyhole darkly. [...]
> 
> i'd like to quote David here, because i cannot put it any simpler:
> 
>  " It is hoped that because it isn't the default, some new people
>will take the quantum leap to actually try debugging using the
>best debugger any of us have, our brains, instead of relying on
>automated tools. "

I note again that the same argument applies vis-a-vis printk
and desk checks with paper and pencil. The printk leverages
the capable person's time. The kernel debugger leverages
the capable person's time. What IS this urge to be handicapped
when trying to debug the most important pieces of what gets
delivered on the distribution CDROMs? Is it, "I'm so hairy-chested
that I can code with one metaphorical arm tied behind my
equally cliched back"? Rather seriously, this seems to be a
rather transparent attempt to keep the old boys' club closed
rather than an attempt to get the most bang for each old boy's
hour of debugging and coding time.

Good tools do not foster bad code. People foster bad code.
The converse is also true.

> my claim (which others share) is that we need more people who can debug
> the really tough problems (for which there are no tools in any OS) with
> their brains, and also we need people who will produce code with less bugs
> in the future.

And absence of tools fosters this? I would think it would drive many
serious people off, figuring it is a fancy toy regardless of how effective
it may be at serving up web pages for ever and ever, amen.

In that regard I enjoy my "Yes!" (with a raised "I won" fist) reactions
when I finally knock down a problem. But I have gotten older now and
realize that 16 million seconds per year is about all I get on a practical
basis for generating these moments. I want to leverage my talents
for the best chances of creating these moments and knowing these
moments are valid and not spurious. A good debugger is a very 
good leveraging agent. I can cut a 2x4 with a largish pocket knife,
in theory. (I have never wasted the time.) In a pinch I have cut a
2x4 with a hand saw. I can see that if I wanted to do this for any
serious work, power tools would be required. The same logic seems
to apply in the programming realm.

> There is also the important question of 'bug prevention'. The kernel isnt
> some magical soup which must be debugged only, code is *added* and
> debugged. If people who write code use more code reviews to fix bugs, then
> as a side-effect they'll sooner or later write code that is less prone to
> bugs. This is because they identify the bug-risks based on the code
> pattern - if you use a debugger mainly then you dont really see the code
> pattern but the current state of the system, which you validate. So the
> difference is this:
> 
>  - compare code, algorithm and concept with the original intention;
>analyze the symptoms and find the bug
> 
>  - compare the system state discovered through the debugger with the
>intended state of the system. Potentially step through the code before
>and after the faulty behavior, try to identify the 'point of bug' and
>constantly compare actual system state with intended system state.
>(it's certainly more complex than this, but you get the point.) This is
>why tools/features visualizing system state are so popular.
> 
> i claim that the second behavior is 'passive', 'disconnected' and has no
> connection to the code itself, and thus tends to lead to inferior code. It
> leads to the frequent behavior of 'patching the state', not modifying the
> code itself. Eg. 'ok, we have a NULL here, lets return then so it wont
> crash later in the function.'
> 
> The first behavior IMO produces a more 'integrated' coding style, where
> designing, writing and debugging code is closely interwoven, and naturally
> leads to higher quality code. Eg. 'we must never get a NULL here, who
> called this function and why??'.

This is all motherhood and has little or nothing to do with the presence or
absence of leveraging agents. Sure, a dolt will produce more doltish code
per mega disk revolution with the leveraging agent than without. At the
other extremity, a guru will produce more good code per mega disk
revolution, too.

Linux's Open Source nature leverages quantities of people fairly effectively.
Some attention appears to be needed to leverage the abilities of the few
GOOD people. (And I note that a good debugger is a good way to figure
out how the code works for some people who do not visualize from written
code all that well. Since Linux documentation is little or no better than NT
documentation these people are stranded unless they can see what is
happening "in vitro".)

{^_^}


Re: [ANNOUNCE] Withdrawl of Open Source NDS Project/NTFS/M2FS for Linux

2000-09-06 Thread J. Dow

From: "Ingo Molnar" <[EMAIL PROTECTED]>
> 
> On Tue, 5 Sep 2000, Richard Gooch wrote:
> 
> > Would you classify IKD as a pile of warts you wouldn't want to see in
> > the kernel?
> 
> the quality of IKD is IMO excellent ( having written parts of it),
> yet i wouldnt want to see it in the kernel. That having said, i *did*
> author and integrate one of the IKD subsystems into the mainstream kernel
> - the NMI oopser on SMP systems. If a debugging aid is localized then
> there are no source-code health issues. In the case of the NMI-oopser the
> case was even clearer: nor a developer, nor a user can do anything useful
> with a hard lockup, apart from complaining that it 'locked up'. We clearly
> needed more information than that.
> 
> KDB is not a code health issue either, it's quite localized. But KDB has
> the other bad social side-effect David was talking about, it promotes
> band-aids. So it's a tough call IMO.

I guess this has dragged on long enough that I feel the urge to stick my
oversize sandals into the mess here.

For decades now I have observed that no tool ever makes a
"hack" into an "artist". All the tool does is allow both to make
their product, mess or art, quicker and with fewer basic errors.
A word processor does not turn the average Joe up the street
into an award winning novelist. But if it is reasonably well designed
there is a prayer that Joe will make fewer basic spelling or
textual errors.

A Kernel Debugger is just another such tool. It helps you find
"something interesting" quicker and with less pain. A great
debugger also offers you interesting views to help you
understand WHY what you are seeing is happening. If the
person using the debugger fails to ask that question he's
created his mess quicker. If the person using the debugger
asks the question and seriously ponders the answer the
kernel debugger is a handy tool for discovering why the
problem is happening. The kernel debugger, or any other
good debugging tool, gives the capable user a cleaner and
more efficient view of what is happening.

If the Kernel Debugger creates faulty solutions through lack
of thinking, and asking why, then surely printk is at least as
bad because it allows somebody to view the operation of
the kernel through a keyhole darkly. This view also fosters
"quick solutions" rather than careful analysis and desk
checking etc. Again, both are tools. They make things happen
quicker. In capable hands you get properly debugged code
quicker. In novice hands you get quicker bandaids placed on
the wrong sores. At least if the novice characterizes a problem
and points to a place where the problem is evident through
the patch presented the capable kernel author can start there
and use his or her thinking process more efficiently. It leverages
the capable person's abilities.

And in my limited experience with the NT kernel debugger you
sometimes can find problems that printk and looking at the same
code over and over again miss.

I guess the refrain is the same as the gun lovers' refrain. Kernel
Debuggers do not create problems. People create problems.

{^_^}Joanne Dow, a "debugger" for most of my professional life,
 both electronics hardware of many types and software.





Re: A useful DNS setup....

2000-09-04 Thread J. Dow

Sorry - I punched the wrong key on that message.
Mea Culpa - mea maxima culpa.
{o.o}





A useful DNS setup....

2000-09-04 Thread J. Dow

OK, I decided to do it, since it gave me a chance to tweak two people
in the nose, one who really earned it and one who probably should know
better. It is up on my web page and may be used and copied freely by
anyone whose email service does not have Earthlink.net black holed.
(Somebody who himself has an email service which black holes messages
foamed at the mouth at another, a major figure in the community, whose
email service blocked the first somebody's email. I thumb my metaphorical
nose at both of them. {^_-} I figure it is really a meaningless gentle jibe but
who am I to waste an opportunity?)

url: http://home.earthlink.net/~jdow/DNS_Setup.html

Um, no links yet, from or to the main page. It is mostly straight text
suitable for cut and paste.

Enjoy - I hope.

{^_^}





Re: [bug] test8-preX crashes X on APM resume >>>Re: [Fwd: Returned mail: see transcript for details]

2000-09-04 Thread J. Dow

From: "David Ford" <[EMAIL PROTECTED]>

> Alan Cox wrote:
>
> > > My server is in the tested/good list w/ orbs.  Aren't you following your own advice
> > > about properly setting up your MTA to allow good guys and stop bad guys in accord
> > > with ORBS DNS?
> >
> > I get too much junk to care about it.
> >
> > Alan
>
> How are we supposed to properly contact maintainers and post bugs and solutions when
> you're rejecting our mail?
>

Unfoam thy mouth, David. And while you are at it, clean up YOUR email filters.
It appears that both you and Alan have mail-blocking hangovers from the
ill-conceived and IMHO unethical black hole project that Paul Vixie runs. I am
not sure about "kalifornia.com" at all. But I am aware that Earthlink filters
junk mail. It does not do it in a manner Paul approves of, since it does not
use his DNS-based crud. Instead it uses a slowdown process for repeated
messages from a single source. The server slows its response as more messages
are posted, reaching delays of several minutes per message. It makes spamming
through Earthlink's email processes a very slow and unprofitable process.

I have a pair of email addresses, one at Earthlink and one at BIX. In the last
two years or more I have not received more than two spams that I could
actually trace back to Earthlink. I have received several that had forged
headers indicating Earthlink as the source. The mail was not, however, sent
through Earthlink. I suggest both you, David, and you, Alan, revise your
filters to match reality rather than your past prejudices.

(What is interesting in Alan's case is that one email from me made it
through and a more recent one didn't. I figure on the more recent one
"scroot". I'll let someone else find the problem and solve it. I haven't the
time to play idiot email filter games and I have better things to do than
work and have my email rejected. So I simply wander off elsewhere
for a while until the children grow up. I just HAD to respond to David's
being the pot foaming at the mouth calling the Alan Cox kettle black.)

{o.o}




Re: [ANNOUNCE] Withdrawl of Open Source NDS Project/NTFS/M2FS for Linux

2000-09-03 Thread J. Dow

> That's B.S.  The GPL is a Copyright license; it applies whether or not
> it is in the kernel.  Microsoft (or anyone else for that matter) can't
> take your code and use it without consent.  The GPL is one way of giving
> consent, with certain strings attached.

And, Ted, THAT is brown steaming matter coming from the south end of a
fertile male bovine. Who is going to have the money to beat MS at the "I
have more attorneys than you" game? And what constitutes copying in this
sort of instance? It would be an expensive lawsuit. I suspect that, barring
some obvious screwup somewhere, MS is immune to worrying its overgrown
head about GPL issues.

{^_^}




Re: thread rant [semi-OT]

2000-09-02 Thread J. Dow

> In summary, when "multithreading" floats into your mind, think
> "concurrency." Think very carefully about how you might simultaneously
> exploit all of the independent resources in your computer. Due to the long
> and complex history of OS development, a different API is usually required
> to communicate with each device. (e.g. old-school UNIX has always handled
> non-blocking network I/O with select(), but non-blocking disk I/O is rather
> new and must be done with AIO or threads; and don't even ask about
> asynchronous access to the graphics card =).

Dan, another thing to consider with multithreading is that it is a way to
avoid "convoy" effects if there is a nice priority mechanism for first-in,
first-out messaging. Until NT crufted in its IO Completion model it was
highly prone to either starvation or convoy problems with certain workloads.
If you fired off reads on several interfaces which all experienced about the
same condition, the first one in the array searched by the multiple object
wait function was more likely to get serviced than the others. The initial
workaround reordered the list each pass through the wait. IO Completion
actually added first-in, first-out messaging for all the messages
reaching that particular wait-for-io-completion call. This made a VERY
measurable improvement in performance overall. You can run into
the same problems with pseudo-threading in state machines with
polling loops, only it's a whole lot uglier. (Been there, done that, kicked
myself, bought the teeshirt in disgrace, and retreated to a proper
solution.)
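
For anyone who has not fought that particular war, here is a bare-bones sketch
of the two styles in C; the handle setup, names, and counts are invented for
illustration, not lifted from any real driver. WaitForMultipleObjects() reports
the lowest-index signaled handle, so index 0 wins ties unless you rotate the
array yourself, while a completion port hands completions back to
GetQueuedCompletionStatus() in the order they were queued.

#include <windows.h>

#define NUM_IFACES 4    /* illustrative count */

/* Array-wait style: WaitForMultipleObjects() reports the LOWEST signaled
 * index, so under load interface 0 starves the rest unless the caller
 * rotates the handle array between passes. */
void serve_with_wait(HANDLE ev[NUM_IFACES])
{
    HANDLE order[NUM_IFACES];
    DWORD r;
    int first = 0;                          /* rotation point */
    int i, which;

    for (;;) {
        for (i = 0; i < NUM_IFACES; i++)
            order[i] = ev[(first + i) % NUM_IFACES];

        r = WaitForMultipleObjects(NUM_IFACES, order, FALSE, INFINITE);
        if (r < WAIT_OBJECT_0 + NUM_IFACES) {
            which = (first + (int)(r - WAIT_OBJECT_0)) % NUM_IFACES;
            /* service_interface(which);  -- hypothetical handler */
            first = (first + 1) % NUM_IFACES;
        }
    }
}

/* Completion-port style: completions come back first in, first out via
 * GetQueuedCompletionStatus(), so no single interface can hog the wait. */
void serve_with_iocp(HANDLE iocp)
{
    DWORD nbytes;
    ULONG_PTR key;              /* which interface posted this completion */
    OVERLAPPED *ov;

    for (;;) {
        if (GetQueuedCompletionStatus(iocp, &nbytes, &key, &ov, INFINITE)) {
            /* service_interface((int)key, ov, nbytes);  -- hypothetical */
        }
    }
}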

(Heh, incidentally I was rather surprised, by the time I hit NT tasking,
to discover its array-based message handling. I was used to the
FIFO message queueing that AmigaDOS enjoyed from day
zero. So I wrote as if I had that on NT. I learned the hard way.)

{^_^}


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



Re: thread rant

2000-09-01 Thread J. Dow

(Hm, I meant for a copy of this to go to the list, too. So here it is.)

Mike Harris comments:
> > I've heard comments from Alan, and others in the past bashing
> > threads, and I can understand the "threads are for people who
> > can't write state machines" comments I've heard, but what other
> > ways are there of accomplishing the goals that threads solve in
> > an acceptable manner that gives good performance without coding
> > complexity?
>
> Threads are a handy way to allow prioritized state machine
> operations. State machines are nice and I use them and have
> used them to good effect. (The MX3000 SatCom modem data
> mode and fax mode are both state machines - as far as the
> Inmarsat spec vs the Facsimile spec allowed.) I also use
> multithreaded code. I use it when I want to switch from thread
> to thread based on input events and priority. I don't want to
> continue to run through a low priority "state" before I can service
> a high priority "state". Threads are the mechanism. People who
> make declarations such as you cite remind me of people who
> yank the gas pliers out of their back pockets to pound in 10
> penny nails. If you need a hammer and do not have one then
> any tool begins to look like a hammer.
>
> > This is all just curiosity.  I've considered trying some thread
> > programming, but if it is as stupid as it sounds, I'd rather
> > learn the "right" way of writing code that would ordinarily be
> > done with threads, etc..  Right now, I'm using fork() all over
> > the place and don't much care how much waste it is...  I'd like
> > to though.
>
> Think on what it is you want to do. State machines are REAL
> good when every state should run to completion before you
> run the next state. State machines are not good when your
> program has functions that must be run at a higher priority
> than the others' while the others must not block when the
> higher priority thread blocks. Use the two tools with some
> discrimination and you can get wonderful results.
>
> > >The fact that the system implements threads speaks enough about
> > >its capabilities, i.e., it's trying hard to suck less. So, from my POV,
> > >we're looking to make Linux suck more by effectively emulating systems
> > >that are trying to suck less.
> >
> > Makes sense... if you understand the details of why threads
> > suck.  I understand there are some cache coherency issues, and
> > I've heard of other things as well, but is there an FAQ out there
> > saying "Why to not use threads?" that goes into good detail and
> > provides alternatives?
>
> Doesn't make sense to me, Mike. But then I use threads where
> threads are appropriate and state machines where state machines
> are appropriate and linear code where linear code is appropriate.
> Consider threads a tool. Learn how the tool works. Then select it
> and use it when the situation warrants it.
>
> > >But, I've never done anything worthwhile for Linux, so take this for what
> > >it's worth, from an asshole.
> >
> > Works for me.  ;o)
>
> Sometimes I remember I am a lady and that ladies don't reply to that
> level of comment. 'Sides, if I did reply it'd not be by fainting. He might
> discover a new aperture somewhere on his body that resembles
> the orifice referenced that he didn't plan on or have OEM.
>
> {^_-}
>
>
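
To put the contrast in the smallest possible terms, here is a bare-bones
sketch; the states, handlers, and pthread calls are invented for illustration
rather than taken from any real project. The state machine runs each state to
completion before picking the next; the threaded version lets the scheduler
preempt the bulk work the moment the urgent work becomes runnable.

#include <pthread.h>

enum state { ST_IDLE, ST_RX, ST_TX, ST_DONE };   /* invented states */

/* Run-to-completion state machine: a slow, low-priority state delays
 * everything queued behind it until it finishes. */
void run_state_machine(void)
{
    enum state s = ST_IDLE;

    while (s != ST_DONE) {
        switch (s) {
        case ST_IDLE: s = ST_RX;   break;   /* wait_for_event()   */
        case ST_RX:   s = ST_TX;   break;   /* handle_receive()   */
        case ST_TX:   s = ST_DONE; break;   /* handle_transmit()  */
        default:      s = ST_DONE; break;
        }
    }
}

/* Threaded version: the scheduler preempts the bulk worker the moment the
 * urgent worker becomes runnable, so urgent work never waits for a state
 * that insists on running to completion first. */
static void *bulk_work(void *arg)   { /* low priority processing */ return arg; }
static void *urgent_work(void *arg) { /* high priority input     */ return arg; }

void run_threaded(void)
{
    pthread_t lo, hi;

    pthread_create(&lo, NULL, bulk_work, NULL);
    pthread_create(&hi, NULL, urgent_work, NULL);
    pthread_join(hi, NULL);
    pthread_join(lo, NULL);
}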

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



Re: Press release - here we go again!

2000-09-01 Thread J. Dow

> >And it got lost. That should tell you something. Perhaps something like
> >"*Advanced interface support for USB, FireWire, and AGP!"
> >
> >Then place any expository text indented under that as complete sentences.
> >This treats the bulleted items as "titles". Your target audience despises
> >incomplete sentences and clunky grammar. (And do rest assured that folks
> >like Jerry Pournelle have posted some REAL clunkers, worse than anything
> >you did, on BIX for our chuckles.)
>
> Interesting juxtaposition, Dow.  Your suggestion includes a bang
> (exclamation mark) and then you mention the most vicious bang-hater I've
> ever run into, Jerry Pournelle, in the next paragraph.  Whatever will you
> do next, Oh Emoticon Person?  :)

There is that consideration. I figured I'd try to be nice to the guy for
even making the effort.

> (This is a VERY serious thing -- one sign of an amateur press release is
> The! Excessive! Use! of! the! Ballbat! Character! -- I know columnists who
> stop reading and start scanning for exclamation marks when they encounter
> the first one.  Because all too many puff-piece writers place the bang in
> the headline, that means the entire release is ignored.  I remember when I
> was including some C code in an article I was writing for a magazine; the
> editor said to take out all of those bangs!  It took 10 minutes with a copy
> of K&R to show him that the exclamation marks were operators, not
> emphasis.)
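
(For anyone who missed the joke: in C the bang is the logical-NOT operator,
so lines like the made-up fragment below are operators at work, not
enthusiasm.)

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("config.txt", "r");    /* made-up filename */

    if (!f) {                              /* "if NOT f": the open failed */
        perror("fopen");
        return 1;
    }
    if (!feof(f))                          /* "NOT at end of file yet"    */
        printf("still data to read\n");

    fclose(f);
    return 0;
}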

Reminds me of the DoD "proof reader" who ordered us to take the
"fork()" references out of our source code on some military software we
had written for the Air Force some years ago. That "get the fork() out of
there" issue took about 6 months to resolve. The AIr Force employed
some woefully ignorant consultants on that project. 

Er, are you the Satch formerly of BIX or "are there two?"

{^_-}


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



Re: MTBF data for linux

2000-09-01 Thread J. Dow

From: "Chris Wedgwood" <[EMAIL PROTECTED]>
To: "Alan Cox" <[EMAIL PROTECTED]>

> We ran 1.2.13lmp for about 1100 days before the box finally got
> turned off - twice around the uptime clock and more
> 
> That must be some kind of unofficial record... I thought 400+ days
> was pretty neat, but 1100 days is really impressive, especially on a
> kernel which has races with jiffie wraps...

Um, another "uptime benchmark" is the little machine (old 75MHz
Pentium) we use here as our internal net's gateway to the internet
via IP Masquerading. It is the machine on the net with the honorary
unimaginative name, linux. Little linux endeared himself to me by
reaching 450+ days uptime as a v.90 dialup and internal http server
before I rebooted him for some upgrades. We went DSL. I needed
to install a new NIC. I'm not adventuresome enough to hotplug a
PCI card. So at something like 450 days, 6 hours and somewhat
more than another half hour he was cleanly shutdown. I installed a
new disk as hdc, a new CDROM as hdd, a new NIC, and rebooted.
He came up clean first time. Then I partitioned and mounted the
new disk, transferred critical files, and rebooted again to install
RedHat 6.2 and once more to upgrade the kernel. He has a month
of uptime at this point. (I took him down one more time to remove
the tiny original disks he ran on and move the 2gig drive to hda
and the CDROM to hdc after he'd been up and clean about a week
and a half.)

It appears Linux stays up longer than you care to leave it up. I
admit this was with no load to speak of. But it was kinda fun to
hold a birthday party for him at one year. (Heh, I even moved him
across the room sitting on top of the UPS when I rearranged the
furniture on my side of the room. Hey, I'm female. I get to do silly
things like that. {O,o})

Er, do the Tru64 machines have any practical uptime problem?
The one BIX runs on seems to accumulate some impressive
uptime under the load the tiny number of remaining members
place on the machine.

{^_^}

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



Re: Press release - here we go again!

2000-09-01 Thread J. Dow

> I am not a lawyer, marketing manager, marketer, salesperson, pre-sales
> person, or indeed even a "real" kernel hacker. I'm a bloody high school
> student. Hence the lack of the "journalistic touch". I'm just hacking away,
> hoping someone will notice, tell me everything to fix, I fix it, and in the
> end I get credit, and people hail me down the streets with a ticker-tape
> parade.

Thank you for doing it, Daniel. SOMEBODY needs to do it. You elected to.
Stephan is attempting to help you do it right. (If he is who I believe him to
be (there can't be TWO, can there?) he's had plenty of experience listening to
the sort of pundits you want to reach complain about flakey press releases.)
Um, I've seen this litany before - in almost the same order on BIX. {^_-}

> Read about the second paragraph of the initial email I sent out. A feature
> writer for a well-known computer magazine here tried to contact one of the
> higher deities and they more or less told him to bugger off. Do we want to
> do this to ALL the press? (besides, I don't have Linus' phone number on
> hand).

This is indeed "Not A Good Thing" (tm). Somebody should bite the bullet
and be there to answer questions. (I'm not volunteering. I don't know
enough to answer the questions cogently. But I did make one minor
successful kernel hack to keep one of my machines working past Y2K
when its BIOS would lock up on boot with a divide-by-zero error.)

> That was actually one of the things I red-flagged myself for and pondered
> about. Then I decided I'd get flamed for it on l-k and just hoped to god
> that someone would come up with something constructive.

If you refer to a trademarked item, mark it so and place the usual
footnote. It covers your asterisk. {o.o}

> > 3.) The language is a bit (and here my own deficit strikes me as I'm
> > searching for the right word, let's try) 'unprofessional'. With that I
> > mean that it is not the 'typical' language of press releases. I quote:
> > 
> > >   * Enterprise Ready! Linux 2.4 includes changes that
> > > make it even more ready for Enterprise environments.
> > 
> > >   * Linux also includes Logical Volume Manager, for easy
> > > administration of disk space; you can also combine
> > > several hard drives/partitions into one for even more
> > > space and ease!
> > 
> > >   * More interface support! Linux is now even better
> > > supported for the desktop with the advent of support
> > > for USB, FireWire and AGP.
> > 
> > > Linux includes a new architecture, known as Netfilter,
> > > to act as a firewall (security - choose what gets
> > > through and what doesn't), and masquerading server
> > > (multiple machines can share the one connection without
> > > any fuss or hassle). Netfilter is now much quicker, 
> > 
>
> Yeah, well scroll up. I will go and read the KDE press release, but don't
> mistake me for a professional journalist, or someone who sits there all day
> slamming out quality press releases.

The phrasing is inconsistent. Cleaning that piece up is good training for
engineers as well as PR authors. (English is my primary language - heck,
my only people language - and I am still learning to use it correctly. If you
spend the time now it will be a BIG win come time to look for a job. I turned
down a few employment prospects when I was in the hiring loop because they
could not express themselves clearly.)

> > One should take time and discuss what _groundbreaking_ new features
> > Linux 2.4 will have when compared to both 2.2 and the competitors. I'm
> > thinking of the scalability issues, as well as all the other
> > 'enterprise' enhancements like maximum memory/processors supported. On
> > the other end of the user scale you might want to proudly present Linux
> > as the first system to support ATA100. Full USB support comes to mind.
>
> Well, does Linus, Alan, etc, want to share their viewpoint? ATA100 is a very
> good one, yes, and I actually did put in USB if you bothered to read the

And it got lost. That should tell you something. Perhaps something like
"*Advanced interface support for USB, FireWire, and AGP!"

Then place any expository text indented under that as complete sentences.
This treats the bulleted items as "titles". Your target audience despises
incomplete sentences and clunky grammar. (And do rest assured that folks
like Jerry Pournelle have posted some REAL clunkers, worse than anything
you did, on BIX for our chuckles.)

> press release, indeed what you pasted also. And the first thing I wrote was
> about how it's so much better for SMP and 64gig of ram, and >2gig files
> ...

> > One should single out what Linux 2.4 can do better than the competitors,
> > but in a respectful (w.r.t. competitors) and indirect way. And one
> > should take the time and space to go into some application examples to
> > elaborate on these outstanding features.
>
> Application != kernel.

Enh, one COULD a

Re: hfs support for blocksize != 512

2000-08-31 Thread J. Dow

Alexander wrote vs I wrote vs he wrote etc.

> > > And let's not go into the links to directories, implemented well
> > > after it became painfully obvious that they were an invitation for
> > > troubles (from looking into Amiga newsgroups it seems that miracle
> > > didn't happen - I've seen quite a few complaints about fs breakage
> > > answered with "don't use links to directories, they are broken").
> >
> > They MAY be fixed in the OS3.5 BoingBag 2 (service pack 2 with a
> > cutsiepie name). Heinz has committed yet another rewrite.
>
> Ouch... Why did he do them (links to directories, that is), in the
> first place?

Since you asked - but I am warning you that you don't want to know.
Well, maybe you do - there is a project to port UNIXy tools to every
platform in existence. While I like some of the people involved just
a whole lot, I dislike the way they have done it. They attempt to pervert
other filesystems into UNIX lookalikes. They needed links. They
pestered the Commodore people until, in desperation to shut them
up, Randall made an effort. As you note, as a filesystem AFFS is not
well suited to links. (But then a lightweight threaded OS is not well
suited to several popular GCCisms such as huge amounts of data
on the stack. It takes programmer discipline to write threaded programs
properly. But the results are, in my experience, very well worth it. And
avoiding stack overflow on small stack spaces is one of the keys,
unless the OS has done what BeOS did by assigning absurd default
stack spaces to accommodate GeekGadgets.)
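
To reduce that stack discipline to a toy example - sizes and names invented,
and nothing Amiga- or BeOS-specific about it - big scratch buffers go on the
heap, and if a thread genuinely needs more stack you ask for it explicitly
rather than hoping the default is generous:

#include <pthread.h>
#include <stdlib.h>

#define SCRATCH_SIZE (256 * 1024)      /* illustrative size */

static void *worker(void *arg)
{
    /* char scratch[SCRATCH_SIZE];  <- would blow a small fixed thread stack */
    char *scratch = malloc(SCRATCH_SIZE);   /* heap instead */

    if (scratch) {
        /* ... do the real work here ... */
        free(scratch);
    }
    return arg;
}

int spawn_worker(pthread_t *tid)
{
    pthread_attr_t attr;
    int ret;

    pthread_attr_init(&attr);
    /* If the thread truly needs stack, say so explicitly rather than
     * trusting the default to be generous. */
    pthread_attr_setstacksize(&attr, 512 * 1024);
    ret = pthread_create(tid, &attr, worker, NULL);
    pthread_attr_destroy(&attr);
    return ret;
}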

> > > Anyway, it's all history. We can't unroll the kludge, no matter
> > > what we do. We've got what we've got. And I'm not too interested in
> > > distribution of the blame between the people in team that seems to be
> > > dissolved years ago. I consider AFFS we have to deal with as a poor excuse
> > > of design and I think that it gives more than enough reasons for that.
> > > In alternative history it might be better. So might many other things.

> >
> > Indeed, poor or not it exists and we live with it in the Amiga community.
> > (Um, I wonder if I could talk Hendrix into a copy of the source for SFS so
> > it could be ported to Linux. These days I prefer it to FFS. {^_-})
>
> Hmm... What, format description is not available?

SFS is a private effort on Hendrix's part. It is wholly unrelated to FFS.
But it does work on the Amiga, fairly nicely. I'm not sure how much of
his structures he has released publicly. (It is also in perpetual beta.)

> > If you want I can bend your ear on things Amiga for longer than your
> > patience stretches, I suspect. (I've been following the threads discussions
>
> alt.folklore.computers is -> that way ;-) Let's take it there...

Private email is easier. I've had my Use(less)Net aversion therapy. (I
got well and good spoiled by BIX in its prime. It had the highest signal
to noise ratio I have experienced yet.)

Er, and if you want info about the latest changes to the RDB spec to
incorporate into the kernel's attempts to read the AFFS boot sectors,
"I can help you." Er, I am the person guilty of the
latest perversions. {^_-}

> ObWTF: WTF did these guys drop QNX when they clearly wanted RTOS? Do they
> have somebody who
> a) knew the difference between RT and TS and
> b) knew that Linux is TS?

Um, it would not be either polite or politic to say what I "really" think here
or anywhere else. I am reserving judgement until I have had a chance to test
what a full up "elate" can do. I am willing to be (very) surprised.

The interesting thing is that we need an OS somewhere between QNX and
Linux. Sadly it appears that NT is about what fits in the niche and is well
enough known to sell to our customers, theme parks, theaters, cruise ships,
and other venues. It's a small niche. We have fun doing it. And we make enough
money to finance doing more of it. (And so far the customers are more than
passing honest, which is amazingly refreshing. And it's amazing fun to walk
into a place like Wonderland in the Toronto area and see their volcano really
work knowing that it is our tool making it go. The looks on the other people's
faces are VERY rewarding.)

(Er, by day Loren, my partner, is a mild-mannered (hah) OS hacker for
UniSys. Lately he's been "doing" specialty HALs for their BIG machines.
He has some interesting opinions about the various OSs out there and
how people are relearning what he and his buddies learned, often the
hard way, 20 years ago. He nearly single-handedly built some of the
Burroughs OSs. Chuckle - he was hired to write documentation. They
discovered he was fixing bugs he found while testing his documentation.
Suddenly he was tossed into the OS group as a programmer.)

{^_^}


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



Re: hfs support for blocksize != 512

2000-08-31 Thread J. Dow

Quoth a misinformed Alexander Viro re AFFS,
> As for the silliness of the OFS... I apologize for repeating the
> story if you know it already, but anyway: OFS looks awfully similar to
> Alto filesystem. With one crucial difference: Alto kept the header/footer
> equivalents in the sector framing. No silly 400-odd byte sectors for them.
> That layout made a lot of sense - you could easily recover from many disk
> faults, yodda, yodda, _without_ sacrificing performance. The whole design
> relied on ability to put pieces of metadata in the sector framing. Take
> that away and you've lost _very_ large part of the benefits. So large that
> the whole design ought to be rethought - tradeoffs change big way.
>
> OFS took that away. Mechanically. It just stuffed the headers into
> the data part of sectors. I don't know the story behind that decision -
> being a jaded bastard I suspect that Commodore PHBs decided to save a
> bit on floppy controller price and did it well after the initial design

Comododo PHBs had nothing to do with it. And the Commododo floppy
disk format is quite literally unreadable with a PC-style controller. It was
not an economic decision. If you are going to carp, please do so from a
basis of real knowledge, Alexander. (The REAL blame for the disk fiasco
goes to the people at Metacrap^H^H^H^HComCo.)

> was done and so close to release that redesign was impossible for
> schedule reasons, but it might be something else. We'll probably never
> know unless somebody who had been in the original design team will leak
> it. But whatever reasons were behind that decision, OFS was either blindly
> copied without a single thought about very serious design factor _or_
> had been crippled at some point before the release. If it's the latter - I
> commiserate with their fs folks. If it's the former... well, I think that
> it says quite a few things about their clue level.

Metacomco designed it based on their TripOS. OFS is very good for
repairing the filesystem in the event of a problem, although the so-called
DiskDoctor they provided quickly earned the name DiskDestroyer.
Metacomco and BSTRINGS and BPOINTERS and all that nonsense
entered the picture when it was decided the originally planned OS
would take too long to develop. So what Metacomco had was grafted
onto what the old Amiga Inc had done, resulting in a hodgepodge
mess.

> AFFS took the headers out of the data sectors. But that killed the
> whole reason behind having them anywhere - if you can't tell data blocks
> from the rest, what's the point of marking free and metadata ones?

> Now, links were total lossage - I think that even if you have some

Kemo Sabe, links never existed UNTIL the Amiga FFS was developed,
redeveloped, and redeveloped again.

> doubts about that now, you will lose them when you will write down the
> operations needed for rename(). And I mean pure set of on-disk changes -
> forget about dentries, inodes and other in-core data.
>
> Why did they do it that way? Beats me. AmigaOS is a microkernel,
> so replacing fs driver should be very easy. It ought to be easier than in
> Linux. And they've pulled out the change from OFS to AFFS, so the
> filesystem conversion was not an issue. Dunno how about UNIX-friendliness,
> but their implementation of links definitely was not friendly to their own
> OS.

As it turns out, many of the recovery tools people built worked remarkably
well on FFS, with little modification, when it was introduced. (Most of the
time tracing the actual data blocks was not necessary for rebuilding the
disk. Thus the data block metadata loss was not crippling.) FFS appeared
in its first versions with AmigaDOS 1.3. (Er, if you want a copy of some of
the earliest versions sent to developers for testing I can arrange something
in that regard. I believe I still have most of that "stuff".) It underwent
several rewrites as successive developers worked on it and new demands
were placed on it. One major change is evidenced in the hash algorithm
used for the original OFS and FFS. It fails to treat international characters
correctly when removing case. The international version corrected this
deficiency. The old cruft is preserved for reading old disks. Later on
DirCache was added, principally for floppy disks. About that time Randall
added both so-called soft links and hard links. For what it is worth, it took
a long, long time and a series of modifications before either of them worked
adequately.
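
For the curious, the hash in question is roughly the following. This is
reconstructed from the published format descriptions as I remember them, so
treat the constants - the multiplier, the mask, the 72-slot table - as
illustrative rather than gospel. The old case fold only handled plain ASCII,
which is what mangled international names; the "international" variant
extends the fold to the Latin-1 letters.

#include <string.h>

/* Old case fold: plain ASCII only, so the accented Latin-1 letters hash
 * differently in upper and lower case. */
static unsigned char old_toupper(unsigned char c)
{
    return (c >= 'a' && c <= 'z') ? c - ('a' - 'A') : c;
}

/* "International" case fold: also folds the Latin-1 letters 0xE0-0xFE,
 * skipping 0xF7 (the division sign). */
static unsigned char intl_toupper(unsigned char c)
{
    if ((c >= 'a' && c <= 'z') ||
        (c >= 0xE0 && c <= 0xFE && c != 0xF7))
        return c - ('a' - 'A');
    return c;
}

/* Pick a slot in a directory block's hash table (72 slots on a 512-byte
 * block filesystem, if memory serves). */
static unsigned int affs_name_hash(const char *name, int intl)
{
    const unsigned char *p = (const unsigned char *)name;
    unsigned int hash = (unsigned int)strlen(name);

    for (; *p; p++)
        hash = (hash * 13 +
                (intl ? intl_toupper(*p) : old_toupper(*p))) & 0x7ff;
    return hash % 72;
}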

> And let's not go into the links to directories, implemented well
> after it became painfully obvious that they were an invitation for
> troubles (from looking into Amiga newsgroups it seems that miracle
> didn't happen - I've seen quite a few complaints about fs breakage
> answered with "don't use links to directories, they are broken").

They MAY be fixed in the OS3.5 BoingBag 2 (service pack 2 with a
cutsiepie name). Heinz has committed yet another rewrite.

> Anyway, it's all history. We can't unroll the kludge, no matter
> what we do. We've got w