On 11/14/13 8:54 PM, Daniel Micay wrote:
> On Thu, Nov 14, 2013 at 11:40 PM, Paul Nathan
> <pnat...@vandals.uidaho.edu> wrote:
>>
>> * Linux is not the only platform that matters. I would actually argue
>> that other operating systems, in particular those in the embedded &
>> RTOS space, are the platforms that Rust needs to be careful to map
>> against. Names of such operating systems include QNX, VxWorks,
>> ThreadX, L4, etc. These systems are designed very carefully to be fault
>> tolerant, deterministic and reliable; failure in design is often
>> literally "not an option" with the software systems that build on them.
>> These are design goals that Rust, in part, shares. Being able to
>> carefully manage memory, tasks, etc, and have strong type safety is
>> something I believe that will be very attractive to the safety critical
>> space (obviously after Rust proves itself).
> 
> Is being able to handle dynamic resource exhaustion failures
> important, or are the resources (memory, threads, file descriptors)
> usually allocated up-front?

Depends upon the size of the system. For a hard real-time task or device
(e.g., aircraft, automotive control, or a medical system), it is better
to allocate up-front, as that simplifies your analysis of interrupts and
timing. However, components may have soft real-time requirements or no
real-time requirements at all; these are more likely to use dynamic
allocation. I have seen a real-time Linux system where one thread was
designated real-time and the other threads were catch-as-catch-can. In
such a system, it is likely that as hardware designs progress, the hard
RT thread/task/process will be pinned to one core/CPU and the rest will
sit on another core/CPU in order to guarantee that there is no
contention for the RT thread. Keep in mind that in hard real-time
applications, MMUs have been disabled in past designs to guarantee
execution time, and will probably continue to be disabled on certain CPUs.

> 
>> * Not only is Linux not the only platform, assuming that *LLVM* is the
>> only platform is a bad idea as well. Designing for only LLVM's
>> capabilities ignores the possibility of a Rust compiler targeting (say)
>> Atmel chips. Making sufficient assumptions about a run-time model that
>> prevents retargeting (of course, retargeting by a funded group of
>> full-time engineers is what I mean, not hackable in a weekend by,
>> say, me, a n00b) to a different non-LLVM-supported chip will also be a major
>> problem.
> 
> Adding a backend to LLVM will be much easier than porting Rust to
> another compiler and maintaining it.

If Rust attains an ANSI/IEC/ISO standard and sees use on a multi-decade
scale, I would expect multiple implementations of Rust with separate
codebases.

> 
>> * One of the BIG problems with D uptake is the split library problem
>> referred to before. They could not get a comfortable standard library
>> for a long time, despite some extremely bright and decently famous
>> engineers working on D. My understanding is that it's mostly been solved
>> now (after what, 10 years?).  That'd be a disaster for Rust if things
>> split badly at the interface level.
> 
> An alternative library is a far better situation than not having good
> real-time/embedded/freestanding support. I'll have a rejected pull
> request or RFC to point at for any divergence taken by rust-core, and
> it won't make any pointless bikeshed changes like renaming an API.

Multiplicity of implementation is fine (glibc, eglibc); divergence of
policy & interface is not. If a reimplemented standard library shows up
that is designed not to GC and can be plugged into QNX threads, yet
conforms to the standard API, glorious. I think the trait system should
support this well. :-)


-- 
Regards,
Paul


_______________________________________________
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev
