Re: Insanely high baud rates

2018-10-11 Thread Craig Milo Rogers
On 18.10.11, Alan Cox wrote:
> I mean - what is the baud rate of a pty  ?

Solaris made the distinction between B0, which means pty hangup mode,
and any other baud rate:

https://docs.oracle.com/cd/E88353_01/html/E37851/pty-4d.html

But... why not implement a pty bandwidth-limitation layer?  You say I
need to justify it?  It's for, uh... protecting the system from unrestricted
pty usage DoS attacks!  Yeah.  That's what it's for.
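Tongue-in-cheek or not, such a layer would amount to a rate limiter on
pty writes.  A minimal token-bucket sketch (Python, purely illustrative;
no such pty layer exists, and every name here is invented):

```python
class TokenBucket:
    """Toy token bucket: 'rate' tokens (bytes) per second, up to 'burst'."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst   # start full
        self.last = 0.0       # timestamp of the previous call

    def allow(self, nbytes, now):
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False          # caller would throttle or drop the write
```

A pty layer built this way would call allow() once per write and sleep
(or drop) whenever it returns False.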

        Craig Milo Rogers


Re: [PATCH 0/5] kstrdup optimization

2015-01-13 Thread Craig Milo Rogers
> As kfree_const() has the exact same signature as kfree(), the risk of
> accidentally passing pointers returned from kstrdup_const() to kfree() seems
> high, which may lead to memory corruption if the pointer doesn't point to
> allocated memory.
...
>> To verify if the source is in .rodata, the function checks whether the
>> address lies between the sentinels __start_rodata and __end_rodata.  I guess
>> it should work with all architectures.

kfree() could also check whether the region being freed is in .rodata
and ignore the call, in which case kfree_const() would not be needed.  If making
this check all the time leads to a significant decrease in performance (numbers
needed here), another option is to keep kfree_const() but add a check to kfree(),
when compiled for debugging, that issues a suitable complaint if the region being
freed is in .rodata.
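A toy model of that sentinel check (Python, illustrative only; the
addresses and names are stand-ins, not kernel code):

```python
# Model .rodata as an address interval, standing in for the region
# between the __start_rodata and __end_rodata sentinels.
RODATA_START, RODATA_END = 0x1000, 0x2000

allocated = set()     # addresses handed out by a toy allocator
next_addr = [0x8000]

def kmalloc_str():
    addr = next_addr[0]
    next_addr[0] += 0x100
    allocated.add(addr)
    return addr

def kstrdup_const(addr):
    # Pointers into .rodata are returned as-is; anything else is "copied".
    if RODATA_START <= addr < RODATA_END:
        return addr
    return kmalloc_str()

def kfree(addr):
    # The check proposed above: silently ignore frees of .rodata pointers,
    # so a separate kfree_const() is unnecessary.
    if RODATA_START <= addr < RODATA_END:
        return
    allocated.discard(addr)
```

With this shape, passing a kstrdup_const() result to the plain kfree()
is harmless rather than a memory-corruption hazard.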

    Craig Milo Rogers
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Space-Certified CPUs and Linux

2012-08-24 Thread Craig Milo Rogers
On 12.08.24, Theodore Ts'o wrote:
> Random question.  As I recall, the Space Shuttle and the International
> Space Station were only using 80386s because they have to be hardened
> against radiation/cosmic rays, as well as all of the other mechanical
> and thermal stresses associated with being in a spacecraft.  Are there
> any newer-generation CPUs which are space-certified at this point?

The MAESTRO processor is a rad-hard-by-design variant of the Tilera
architecture, intended for space applications.  Linux runs on it.

The Mongoose-V is a rad-hard MIPS R3000 processor.  It can run
VxWorks, made by Wind River (a subsidiary of Intel since 2009).

http://www.synova.com/proc/mg5.html

Rad-hard PowerPCs are the current space workhorse.  Several variants
are available.  NASA has run Linux on at least one of them in space, but I
believe that VxWorks is more the norm.

> (Of course, I'm rather doubtful that NASA would ever be willing to use
> Linux on something like the Curiosity Mars Rover, but I could imagine
> Linux being used in a non-mission-critical system on the ISS.)

Linux has been used on scientific equipment sent to the ISS.
Not, I think, on the avionics.

Curiosity runs VxWorks, as do the SpaceX Falcon 9 and Dragon.

        Craig Milo Rogers


Re: Cosmetic JFFS patch.

2001-06-28 Thread Craig Milo Rogers

>Print all copyright, config, etc. as KERN_DEBUG.

How about a new level, say "KERN_CONFIG", with a "show-config"
parameter to enable displaying KERN_CONFIG messages?
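A sketch of how such a gate might behave (Python model; KERN_CONFIG and
the show-config parameter are hypothetical, invented for this suggestion,
not real printk levels or boot options):

```python
KERN_INFO = "<6>"
KERN_DEBUG = "<7>"
KERN_CONFIG = "<8>"   # hypothetical new level for copyright/config chatter

log_buffer = []

def printk(level, msg, show_config=False):
    # 'show_config' stands in for a "show-config" boot parameter:
    # KERN_CONFIG messages are suppressed unless it was given.
    if level == KERN_CONFIG and not show_config:
        return
    log_buffer.append(level + msg)
```

The point of the extra level is that the copyright/config noise stays
available on demand without cluttering a normal boot.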

        Craig Milo Rogers



Re: Cosmetic JFFS patch.

2001-06-28 Thread Craig Milo Rogers

>Q: Would it be worth making the module author/version strings survive in
>a non modular build but stuffed into their own section so you can pull them
>out with some magic that we'd include in 'REPORTING-BUGS'

In a /proc file, maybe?  A single file ("/proc/authors"?
"/proc/versions"? "/proc/brags"? "/proc/kvell"?)  could present the
whole section.  Alternatively, you could have one /proc file per
attributed source file; I suspect that would be messier to code.  In a
modular system, would it be feasible to dynamically link/unlink
attribution strings from a global list as modules are loaded/unloaded,
and display linked attributions along with static ones in the /proc
file?
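The load/unload idea can be modeled as a registry feeding a single read
routine (Python sketch; /proc/authors and every name below are
hypothetical):

```python
# Attributions compiled statically into the kernel image.
static_attributions = ["vmlinux: Example Author <author@example.invalid>"]

# Attributions linked/unlinked dynamically as modules come and go.
module_attributions = {}   # module name -> attribution string

def load_module(name, attribution):
    module_attributions[name] = attribution   # link on load

def unload_module(name):
    module_attributions.pop(name, None)       # unlink on unload

def read_proc_authors():
    # What a read of the hypothetical /proc/authors might return:
    # static attributions followed by those of currently loaded modules.
    return "\n".join(static_attributions + sorted(module_attributions.values()))
```

One file presenting the whole list keeps the /proc clutter down compared
to a file per attributed source file.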

Extrapolating from past behavior into the future:  someone will
submit code with a multi-page attribution string.  It is likely that
we'd need a formal policy on the length, content, and maybe even format
of attribution strings.

Craig Milo Rogers




Re: Controversy over dynamic linking -- how to end the panic

2001-06-21 Thread Craig Milo Rogers

IANAL.  I also dislike fencepost errors.  Hence, these
comments.

The GNU GPL Version 2, June 1991, (hereafter the GPL), applies
"to the modified work as a whole".  Consequently:

>2. A driver or other kernel component which is statically linked to
>   the kernel *is* to be considered a derivative work.

The kernel image, including the statically-linked device
driver, is the primary derived work licensed by the GPL.  That portion
of the kernel image that represents some actual device driver binary
code (continuing the example above, and assuming that the device
driver's unlinked object code isn't already subject to the provisions
of the GPL), may or may not be a derived work under copyright law,
depending upon what modifications the linker made to the binary code
during linking. If the device driver didn't become a derived work
during compilation, and it didn't (for example) resolve any kernel
symbols during linking, it would not be a derived work under copyright
law, right?  Of course, right.

>3. A kernel module loaded at runtime, after kernel build, *is not*
>   to be considered a derivative work.

The in-core kernel image, including a dynamically-loaded
driver, is clearly a derived work per copyright law.  As above, the
portion consisting only of the dynamically-loaded driver's binary code
may or may not be a derived work per the GPL.  It doesn't much matter
under the GPL, anyway, so long as the in-core kernel image isn't
"copied or distributed".

Looking to the future, what if Linux is enhanced with the
ability to take a snapshot of a running system (including any
dynamically-loaded driver modules), and the snapshots are copied and
distributed by, say, a PDA vendor to make an instant-on Linux system?
I think it's reasonable to argue that the GPL requirements must apply
to all parts of the distributed kernel image, even the parts that were
derived from pre-snapshot dynamically-loaded modules.


The act of "copying and distributing" a linked kernel image
containing GPL'ed code and, say, a non-GPL'ed device driver, requires
that the distributor follow the requirements of the GPL as applied to
the work as a whole, which means that source code must be available
for *all* portions of the linked kernel, which means that source code
for the driver must be made available... under the terms of sections 1
and 2 of the GPL.  The GPL's source code availability requirement
applies even to those identifiable sections of the linked kernel image
which themselves are not derived (per copyright law) from GPL'ed
sources.

Remember, however, that the GPL, as written, imposes
requirements upon the *distributor* of a combined work, not upon
the owner of the non-GPL'ed portions that were included in the
combined work.  It is the distributor's responsibility to make source
code available as required by the GPL.  This is *not* the same as
saying that any non-GPL'ed source code in question has, automatically,
itself become licensed under the GPL (sections 1 and 2).

Craig Milo Rogers



Re: Getting FS access events

2001-05-15 Thread Craig Milo Rogers

>And because your suspend/resume idea isn't really going to help me
>much. That's because my boot scripts have the notion of
>"personalities" (change the boot configuration by asking the user
>early on in the boot process). If I suspend after I've got XDM
>running, it's too late.

Preface: As has been mentioned on this discussion thread, some
disk devices maintain a cache of their own, running on a small (by
today's standards) CPU.  These caches are probably sector oriented,
not block oriented, but are almost certainly not page oriented or
filesystem oriented.  Well, OK, some might have DOS filesystem
knowledge built in, I suppose... yuck!

Anyway, although there may be slight differences, they are
effectively block-oriented caches.  As long as they are write-through
(and/or there are cache-flushing commands, etc.), they are reasonably
coherent with the operating system's main cache, and they meet the
expectations of database programs, etc., that want stable storage.

In terms of efficiency, there are questions about read-ahead,
write-behind, write-through with invalidation or write-through with
cache update -- the usual stuff.  I leave it as an exercise for the
reader to decide how to best tune their system, and merely assert that
it can be done.

Imagine, as a mental exercise, that you move this
block-oriented cache out of the disk drive, and into the main CPU and
operating system, say roughly at the disk driver level.  We lose the
efficiency of having the small CPU do the block lookups, but a hashed
block lookup is rather cheap nowadays, wouldn't you say?  Ignoring
issues of, "What if the disk drive fails independently of the main
CPU, or vice versa?", the transplanted block cache should operate
pretty much as it did in the disk drive.
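The transplanted cache amounts to a hash-indexed, write-through layer
above the device.  A minimal sketch (Python; the dicts stand in for the
hash table and the disk, nothing more):

```python
disk = {}    # block number -> data, standing in for the device
cache = {}   # the transplanted block cache: a hash-indexed map

def read_block(blkno):
    if blkno in cache:               # the cheap hashed lookup
        return cache[blkno]
    data = disk.get(blkno, b"\x00" * 512)
    cache[blkno] = data
    return data

def write_block(blkno, data):
    cache[blkno] = data
    disk[blkno] = data               # write-through: device stays coherent

def flush_cache():
    cache.clear()                    # nothing is dirty, so flush is just discard
```

Because writes go straight through, the cache can be dropped at any time
without losing data, which is exactly the property that lets a page
cache sit safely above it.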

In particular, it should continue to operate properly with the
main CPU's main page cache.

Conclusion: a page cache can successfully run over an
appropriately designed block cache.  QED.

What's the hitch?  It's the "appropriately designed"
constraint.  It is quite possible that the Linux block cache is not
designed (data structures and code paths considered together) in a way
that allows it to mimic a simple disk drive's block cache.  I assume
that there's some impediment, or this discussion wouldn't have lasted
so long -- the idea of using the Linux block cache to model a disk
drive's block cache is pretty obvious, after all.

>So what I want is a solution that will keep the kernel clean (believe
>me, I really do want to keep it clean), but gives me a fast boot too.
>And I believe the solution is out there. We just haven't found it yet.

Well, if you want a fast boot *on a single type of disk
drive*, and the existing Linux block cache doesn't work, you could
extend the driver for that hardware with an optional block cache,
independently of Linux' block cache, along with an appropriate
interface to populate it with boot-time blocks, and to flush it when
no longer needed.  That's not exactly clean, though, is it?

You could extend the md (or LVM) drivers, or create a new
driver similar to one of them, that incorporates a simple block cache,
with appropriate mechanisms for populating and flushing it.  Clean?
Er, no... rather muddy, in fact.

You might want to lock down the pages that you've
prepopulated, rather than let them be discarded before they're needed.
This could be designed into a new block cache, but you might need to
play some accounting games to get it right with the existing block
cache.

Finally, there's Linus' offer for a preread call, to
prepopulate the page cache.  By virtue of your knowledge of the
underlying implementation of the system, you could preload the file
system index pages into the block cache, and load the data pages into
the page cache.  Clean!  Sewer-like!

Craig Milo Rogers




Re: UDP stop transmitting packets!!!

2001-03-16 Thread Craig Milo Rogers

>In fact, the current choice is optimal.  If the problem is that we are
>being hit with too many packets too quickly, the most desirable course
>of action is the one which requires the least amount of computing
>power.  Doing nothing to the receive queue is better than trying to
>"dequeue" some of the packets there to allow the new one to be added.

A study by Greg Finn <[EMAIL PROTECTED]> determined that randomly
dropping packets in a congested queue may be preferable to dropping
only newly arrived packets, which can be suboptimal depending upon the
details of how your packets are generated, of course.  YMMV.

"A Connectionless Congestion Control Algorithm"
Finn, Greg
ACM Computer Communication Review, Vol. 19, No. 5, pp. 12-31, Oct. 1989.

The way I view this result is that each packet is part of a
flow (true even for most UDP packets).  Dropping a packet penalizes
the flow.  All packets in a queue contribute to the queue's
congestion, not simply the most recently-arrived packet.  Dropping a
random packet in the queue distributes the penalty among the flows in
the queue.  On average, this is better than always
dropping the latest packet.
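The two policies compared above, as a sketch (Python; fixed queue limit,
seeded RNG so the random choice is repeatable):

```python
import random

def enqueue_tail_drop(queue, pkt, limit):
    # Conventional policy: when full, drop the newly arrived packet.
    if len(queue) >= limit:
        return False
    queue.append(pkt)
    return True

def enqueue_random_drop(queue, pkt, limit, rng=random):
    # Finn-style policy: admit the packet, then evict a random resident,
    # spreading the drop penalty across all flows in the queue.
    queue.append(pkt)
    if len(queue) > limit:
        queue.pop(rng.randrange(len(queue)))
```

Under tail drop, a flow that happens to arrive during congestion takes
the whole penalty; under random drop, occupancy in the queue is what
attracts drops.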

    Craig Milo Rogers



Re: [UPDATE] zerocopy.. While working on ip.h stuff

2001-02-26 Thread Craig Milo Rogers

>> a competing philosophy that said that the IP checksum must be
>> recomputed incrementally at routers to catch hardware problems in the
...
>ah.. we do recalculate IP Checksums now..  when we update any of the 
>timestamp rr options etc..

But, do you do it incrementally?  By which I mean: subtract
(appropriately) the old value of the octet from the existing checksum
field in the packet, then add (appropriately) the new value of the
octet to the checksum.  Simply recalculating the IP checksum from
scratch can generate a "correct" checksum for a packet that was
damaged*** while waiting around in memory.

I don't know if people worry about this now, but 20 years ago
there was a fuss about it.  Further discussion offline, please.
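For reference, the incremental update works like this (Python sketch of
the one's-complement arithmetic; cf. RFC 1071 and RFC 1624):

```python
def cksum16(words):
    # Internet checksum: one's-complement sum of 16-bit words, complemented.
    s = sum(words)
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)   # end-around carry
    return ~s & 0xFFFF

def incr_update(old_cksum, old_word, new_word):
    # RFC 1624: HC' = ~(~HC + ~m + m').  Only the deliberately changed
    # word enters the computation, so a bit flipped elsewhere in the
    # header while it sat in memory is not silently "re-blessed".
    s = (~old_cksum & 0xFFFF) + (~old_word & 0xFFFF) + new_word
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF
```

A from-scratch recomputation would instead sum the damaged header and
happily certify it.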

        Craig Milo Rogers

*** Maybe by hardware trouble, or maybe because someone followed a bad
pointer and stomped on part of the header.



2.2.19pre15: drivers/net/Config.in: 359: bad if condition

2001-02-26 Thread Craig Milo Rogers

After building a patched source tree, and running "make xmenu" on a
RH6.2 system with most relevant RPMs installed, I see:

drivers/net/Config.in: 359: bad if condition

The following line:
if [ "$CONFIG_EXPERIMENTAL" = y -a "$CONFIG_WAN_ROUTER" != "n" ]; then

should be:
if [ "$CONFIG_EXPERIMENTAL" = "y" -a "$CONFIG_WAN_ROUTER" != "n" ]; then


Craig Milo Rogers