On Fri, Sep 01, 2000 at 04:35:26PM +0200, [EMAIL PROTECTED] wrote:
> I don't agree.
> The world - where the program runs - is nondeterministic.
> A program must adapt itself to varying circumstances, including a lack
> of resources. Say I have N message buffers in a pool. What if
> there is no free buffer left?
> Running out of memory is just a situation the task must handle
> if necessary. That is its problem, not yours.
For me, there is a critical difference between:
Spec 1: If there are no more than N events every K time units the
system will never drop data.
and
Spec 2: If total system memory demand allows, the system will never
drop data ...
If you allocate memory from a shared dynamic pool with open
access to that pool, then any user of system resources, or even a shift
in the timing of the non-RT portion of your total system, may
cause failures.
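A Spec-1-style design sidesteps this by preallocating its worst-case N
buffers at initialization, so exhaustion can only mean the caller broke
the "no more than N events per K time units" bound. A minimal sketch in
C (pool size, buffer size, and all names are hypothetical, and it
assumes a single RT context, so no locking is shown):

```c
#include <stddef.h>

#define POOL_SIZE 8   /* N, fixed at design time from the spec (assumption) */
#define BUF_BYTES 64

struct msg_buf {
    struct msg_buf *next;
    char data[BUF_BYTES];
};

/* All storage is static: nothing is taken from a shared dynamic pool,
 * so non-RT activity cannot cause an allocation to fail. */
static struct msg_buf pool[POOL_SIZE];
static struct msg_buf *free_list;

void pool_init(void)
{
    int i;
    free_list = NULL;
    for (i = 0; i < POOL_SIZE; i++) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

/* O(1), no system calls: usable from an RT context. Returns NULL only
 * when more than POOL_SIZE buffers are live at once, i.e. when the
 * event-rate bound in the spec has been violated. */
struct msg_buf *pool_alloc(void)
{
    struct msg_buf *b = free_list;
    if (b)
        free_list = b->next;
    return b;
}

void pool_free(struct msg_buf *b)
{
    b->next = free_list;
    free_list = b;
}
```

The point of the sketch is that the failure case is defined by the
specification, not by whatever else happens to be running on the machine.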
Running out of memory is tough enough in the non-RT domain. The Linux kernel
list features endless discussions on how to keep the Linux kernel afloat
when there is no more space. I think it all comes down to the specification of
the system. If non-deterministic failures are OK, then dynamic memory allocation
is OK. But if non-deterministic failures are OK, why not run in the Linux domain
anyway?
--
---------------------------------------------------------
Victor Yodaiken
Finite State Machine Labs: The RTLinux Company.
www.fsmlabs.com www.rtlinux.com
-- [rtl] ---
To unsubscribe:
echo "unsubscribe rtl" | mail [EMAIL PROTECTED] OR
echo "unsubscribe rtl <Your_email>" | mail [EMAIL PROTECTED]
---
For more information on Real-Time Linux see:
http://www.rtlinux.org/rtlinux/