Re: [rust-dev] Implementation complexity
On 11/15/2013 05:52 AM, Patrick Walton wrote:
>> * One of the BIG problems with D uptake is the split library problem referred to before. They could not get a comfortable standard library for a long time, despite some extremely bright and decently famous engineers working on D. My understanding is that it's mostly been solved now (after what, 10 years?). That'd be a disaster for Rust if things split badly at the interface level.
>
> I agree, and I would like to prevent divergence. Divergence of *implementation* is OK and probably inevitable if Rust succeeds; divergence of API for no reason can harm the ecosystem.

If I may say a word on this (I have also been a D user): all of this may well also be a human problem. People feel bad whenever decisions on points important to them are made in an opaque, rushed, or uncooperative manner. D's first official standard library was somewhat like that, and felt more or less undesigned (or insufficiently designed), while the core language was already quite good, pleasant, and strongly usable.

I'd rather wait one, or even two or three, years longer than expected for Rust 1.0 and get a design that most of us will joyfully support, or at least agree with... After all, we have been waiting for a good static systems programming language for, what, 40 years? (Please don't rush 1.0; take the time needed.)

Denis

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev
Re: [rust-dev] Implementation complexity
On Thu, Nov 14, 2013 at 7:31 PM, Nathan Myers n...@cantrip.org wrote:
> On 11/11/2013 03:52 PM, Gaetan wrote:
>> Can we have two Rusts? The first one would be easy to learn, easy to read, and do most of what one would expect: on-demand garbage collector, traits, owned pointers, ... The second one would include all the advanced features we don't actually need every day.
>
> This is a special case of the general design principle: push policy choices up, implementation details down. There's no need to choose between M:N vs. 1:1 threading, or contiguous vs. segmented stacks, at the language design level. It just takes different kinds of spawn(). The default chosen is whatever works most transparently. Similarly, a thread with a tiny or segmented stack is not what we usually want, but when we (as users) determine we can live with its limitations and costs -- including expensive call/return across segment boundaries, and a special FFI protocol -- there's no fundamental reason not to support it.

In many cases, there is a need to choose between supporting one or the other, or having sub-par support for both. If segmented stacks are supported, there will be a prelude in nearly every single function to check the available stack space, and all foreign function calls have to be carefully annotated. Go always uses segmented stacks, and therefore reimplements the functionality offered by the C standard library in order to avoid stack switches. It even uses a special calling convention to make context switches cheap (on a modern Intel CPU, only 3 registers to swap, instead of 16 general-purpose ones + float state + 32 AVX registers + segment registers and more).

The same is true for 1:1 vs. M:N threading. If a task doesn't map 1:1 to a thread ID and thread-local data, support for C libraries using thread-local data will always be stuck with an API inferior to C/C++'s. There's also the inability to directly use static thread-local data, which is very fast and easy to use.

> There are practical reasons, though.
> Each choice offered adds to the complexity of the implementation, and multiplies the testing needed. We don't want it to be very expensive to port the Rust runtime to a new platform, so these special modes should be limited in number, and optional. Ideally a program could try to use one and, when it fails, fall back to the default mode. There is no need to make this falling back invisible, but there are good reasons not to.

An example of this is that Rust's standard library currently assumes every CPU has only the baseline set of registers for its architecture, and doesn't swap the others. For an example of the consequences, Rust doesn't support using AVX on x86_64, or SSE/AVX on x86. It would be incredibly complex to support every variation of the various architectures via runtime selection of the context-switching code. The reason for Rust not simply supporting (almost) every architecture that LLVM supports out-of-the-box is M:N threading.
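[Editorial note: Nathan's "different kinds of spawn()" idea is essentially what Rust's standard library later shipped. A minimal sketch using modern `std::thread` (which postdates this 2013 thread); the stack size and return values are illustrative, not from the discussion:]

```rust
use std::thread;

// Spawn with the default policy: a plain 1:1 OS thread with the
// platform's default stack size.
fn spawn_default() -> i32 {
    thread::spawn(|| 2 + 2).join().unwrap()
}

// Opt into a non-default policy (here, a small fixed stack) at the
// spawn call site, rather than at the language-design level.
fn spawn_small_stack() -> i32 {
    thread::Builder::new()
        .stack_size(64 * 1024) // 64 KiB stack, chosen per-thread
        .spawn(|| 3 * 3)
        .expect("failed to spawn thread with a small stack")
        .join()
        .unwrap()
}

fn main() {
    assert_eq!(spawn_default(), 4);
    assert_eq!(spawn_small_stack(), 9);
}
```

The policy (stack size) lives in the builder, not the language, which is the "push policy choices up" principle in miniature.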
Re: [rust-dev] Implementation complexity
On 11/15/13 9:51 AM, Daniel Micay wrote:
> The same is true for 1:1 vs. M:N threading. If a task doesn't map 1:1 to a thread ID and thread-local data, support for C libraries using thread-local data will always be stuck with an inferior API to C/C++.

Unless you pin the task, no?

> There's also the inability to directly use static thread-local data which is very fast and easy to use.

I think we could just implement static task-local data using some sort of life-before-main under the hood.

Patrick
Re: [rust-dev] Implementation complexity
On Thu, Nov 14, 2013 at 7:53 PM, Patrick Walton pcwal...@mozilla.com wrote:
> On 11/15/13 9:51 AM, Daniel Micay wrote:
>> The same is true for 1:1 vs. M:N threading. If a task doesn't map 1:1 to a thread ID and thread-local data, support for C libraries using thread-local data will always be stuck with an inferior API to C/C++.
>
> Unless you pin the task, no?

If you pin the task and other tasks aren't allowed to use the thread, it would work. The library would have to do this in all of its entry points to provide safety, and there would have to be no way of unpinning a task in safe code, or at least a separate, mandatory pinning concept.

>> There's also the inability to directly use static thread-local data which is very fast and easy to use.
>
> I think we could just implement static task-local data using some sort of life-before-main under the hood.

The issue is that static thread-local data is accessed by emitting loads and stores through an x86 segment register (and similarly on other architectures), so 1:1 vs. M:N would become a compile-time choice.
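[Editorial note: the static thread-local data under discussion is what Rust's standard library eventually exposed as the `thread_local!` macro; on common targets the generated accesses go through a dedicated TLS register (e.g. an x86-64 segment register), which is only sound when a task owns its OS thread. A sketch with modern std, not the 2013 API:]

```rust
use std::cell::Cell;
use std::thread;

// One independent copy of COUNTER per OS thread; access needs no locks
// because no other thread can ever see this thread's copy.
thread_local! {
    static COUNTER: Cell<u32> = Cell::new(0);
}

fn bump() -> u32 {
    COUNTER.with(|c| {
        c.set(c.get() + 1);
        c.get()
    })
}

fn main() {
    assert_eq!(bump(), 1);
    assert_eq!(bump(), 2);

    // A fresh thread starts from its own zero-initialized copy,
    // unaffected by the main thread's increments.
    let seen = thread::spawn(bump).join().unwrap();
    assert_eq!(seen, 1);
}
```

Under M:N scheduling, a task migrating between threads mid-call would silently switch which copy of `COUNTER` it sees, which is why Daniel argues the access strategy becomes a compile-time choice.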
Re: [rust-dev] Implementation complexity
On 11/15/13 9:51 AM, Daniel Micay wrote:
> The reason for Rust not simply supporting (almost) every architecture that LLVM supports out-of-the-box is M:N threading.

I take issue with this. The *language* supports almost every architecture that LLVM supports. The *runtime* is just part of the standard library, and the standard library will always need porting work to support different platforms well.

Patrick
Re: [rust-dev] Implementation complexity
On 11/15/13 10:00 AM, Daniel Micay wrote:
> If you pin the task and other tasks aren't allowed to use the thread, it would work. The library would have to do this in all the entry points to provide safety and there would have to be no way of unpinning a task in safe code or at least a separate mandatory pinning concept.

Mandatory pinning for safety is needed anyway, for example for OpenGL.

> The issue is that the thread-local data is accessed through an x86 segment register (and similarly on other architectures) so 1:1 vs. M:N would become a compile-time choice.

I'm fine with that.

Patrick
Re: [rust-dev] Implementation complexity
On Thu, Nov 14, 2013 at 8:02 PM, Patrick Walton pcwal...@mozilla.com wrote:
> On 11/15/13 9:51 AM, Daniel Micay wrote:
>> The reason for Rust not simply supporting (almost) every architecture that LLVM supports out-of-the-box is M:N threading.
>
> I take issue with this. The *language* supports almost every architecture that LLVM supports. The *runtime* is just part of the standard library, and the standard library will always have porting work needed to support different platforms well.

That's true, but for most people the standard library is part of what Rust is. I don't think there's much porting work to do for most architectures beyond updating the context-switch assembly code. It's true that, due to our hard-wired definitions of C types, there is a lot in `std::libc` to update, but we're already going to need to switch to auto-generating this in order to support alternative C libraries.
Re: [rust-dev] Implementation complexity
On 11/14/13 4:31 PM, Nathan Myers wrote:
> On 11/11/2013 03:52 PM, Gaetan wrote:
>> Can we have two Rusts? [...]
>
> This is a special case of the general design principle: push policy choices up, implementation details down. There's no need to choose between M:N vs. 1:1 threading, or contiguous vs. segmented stacks, at the language design level. It just takes different kinds of spawn(). [...] There are practical reasons, though. Each choice offered adds to the complexity of the implementation, and multiplies the testing needed. [...]
>
> Making the choice of default mode depend on the platform (1:1 here, M:N there) might force complexity on users not necessarily equipped to cope with it, so it is best to make the defaults the same in all environments, wherever practical. (Graydon et al. understand all this, but it might not be obvious to all of the rapidly growing readership here.)
>
> Nathan Myers
> n...@cantrip.org

At the risk of increasing the current noise on the list, I want to make a few points about the current arguments, based on my education and experience in the embedded critical-infrastructure space.
Understand, I don't want to call anyone out; I'm not a threading expert, and I'm not a core developer or committer, but I have been watching the Rust mailing list and I plan to use Rust in the future. I want these thoughts to be heard, weighed, and incorporated into the decision-making process.

* Linux is not the only platform that matters. I would actually argue that other operating systems, in particular those in the embedded RTOS space, are the OS platforms that need to be held up as platforms to be careful to map against. Names of such operating systems include QNX, VxWorks, ThreadX, L4, etc. These systems are designed very carefully to be fault-tolerant, deterministic, and reliable; failure in design is often literally not an option for the software systems that build on them. These are design goals that Rust, in part, shares. Being able to carefully manage memory, tasks, etc., and to have strong type safety, is something I believe will be very attractive to the safety-critical space (obviously after Rust proves itself).

* Not only is Linux not the only platform; assuming that *LLVM* is the only platform is a bad idea as well. Designing for only LLVM's capabilities ignores the possibility of a Rust compiler targeting (say) Atmel chips. Making so many assumptions about a run-time model that retargeting to a different, non-LLVM-supported chip becomes impossible (of course, retargeting by a funded group of full-time engineers is what I mean, not something hackable in a weekend by, say, me, a n00b) will also be a major problem.

* One of the BIG problems with D uptake is the split-library problem referred to before. They could not get a comfortable standard library for a long time, despite some extremely bright and decently famous engineers working on D. My understanding is that it's mostly been solved now (after what, 10 years?). That'd be a disaster for Rust if things split badly at the interface level.
My perspective is that Rust's future place is in systems that need reliability, achieved through the following characteristics: low defects, controllable memory usage, and controllable time usage. In short, replacing C in ten to twenty years' time. I am also taking the *corporate* perspective, which is partially driven by risk mitigation and caution; seeing core contributors argue about runtime implementation *without* talking about systems much beyond Linux/LLVM is concerning.

In summary, please remember the hidden embedded world of computing in your discussions.

--
Best Regards,
Paul Nathan
Re: [rust-dev] Implementation complexity
On 11/15/13 1:40 PM, Paul Nathan wrote:
> * Linux is not the only platform that matters. I would actually argue that other operating systems, in particular those in the embedded RTOS space, are the OS platforms that need to be held up as platforms to be careful to map against. Names of such operating systems include QNX, VxWorks, ThreadX, L4, etc. [...]

I agree. Note that the pthreads API was, as far as I'm aware, explicitly designed to be implementable on both M:N and 1:1 scheduling models. And, indeed, there have been shipping implementations of pthreads on both. So we're essentially just following in POSIX's footsteps here.

> * Not only is Linux not the only platform; assuming that *LLVM* is the only platform is a bad idea as well. [...]

Agreed.

> * One of the BIG problems with D uptake is the split-library problem referred to before. [...] That'd be a disaster for Rust if things split badly at the interface level.
I agree, and I would like to prevent divergence. Divergence of *implementation* is OK and probably inevitable if Rust succeeds; divergence of API for no reason can harm the ecosystem.

Patrick
Re: [rust-dev] Implementation complexity
On Thu, Nov 14, 2013 at 11:40 PM, Paul Nathan pnat...@vandals.uidaho.edu wrote:
> * Linux is not the only platform that matters. I would actually argue that other operating systems, in particular those in the embedded RTOS space, are the OS platforms that need to be held up as platforms to be careful to map against. Names of such operating systems include QNX, VxWorks, ThreadX, L4, etc. [...]

Is being able to handle dynamic resource-exhaustion failures important, or are the resources (memory, threads, file descriptors) usually allocated up-front?

> * Not only is Linux not the only platform; assuming that *LLVM* is the only platform is a bad idea as well. [...]

Adding a backend to LLVM will be much easier than porting Rust to another compiler and maintaining it.

> * One of the BIG problems with D uptake is the split-library problem referred to before. [...] That'd be a disaster for Rust if things split badly at the interface level.
An alternative library is a far better situation than not having good real-time/embedded/freestanding support. I'll have a rejected pull request or RFC to point at for any divergence taken by rust-core, and it won't make any pointless bikeshed changes like renaming an API.
Re: [rust-dev] Implementation complexity
On 11/15/13 1:54 PM, Daniel Micay wrote:
> An alternative library is a far better situation than not having good real-time/embedded/freestanding support. I'll have a rejected pull request or RFC to point at for any divergence taken by rust-core, and it won't make any pointless bikeshed changes like renaming an API.

Glad to see that, and thanks.

Patrick
Re: [rust-dev] Implementation complexity
On 11/14/13 8:54 PM, Daniel Micay wrote:
> On Thu, Nov 14, 2013 at 11:40 PM, Paul Nathan pnat...@vandals.uidaho.edu wrote:
>> * Linux is not the only platform that matters. [...]
>
> Is being able to handle dynamic resource exhaustion failures important, or are the resources (memory, threads, file descriptors) usually allocated up-front?

It depends on the size of the system. For a hard real-time task or device (e.g., aircraft, car response, medical system), it is better to allocate up-front, as that simplifies your analysis of interrupts and timing. However, components may have soft real-time requirements or no real-time requirements; these are more likely to use dynamic allocation. I have seen a real-time Linux system where one thread was designated real-time and the other threads were catch-as-catch-can. In such a system, it is likely that as hardware designs progress, the hard-RT thread/task/process will be pinned to one core/CPU and the rest will sit on another, in order to guarantee that there is no contention with the RT thread.

Keep in mind that in hard real-time applications, MMUs have been disabled in past designs (to guarantee time of execution) and will probably continue to be disabled for certain CPUs.
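[Editorial note: the allocate-up-front pattern described here can be sketched in Rust. The buffer size and helper name are illustrative, not from the thread; the point is that all memory is acquired at startup, so the steady-state loop performs no allocation and is easier to bound in time:]

```rust
// Reserve all storage once, then fill it without ever reallocating.
fn fill_preallocated(n: usize) -> Vec<u32> {
    // All memory acquired up front, before steady-state work begins.
    let mut samples: Vec<u32> = Vec::with_capacity(n);
    let buf = samples.as_ptr();

    // Steady-state: only writes into storage that already exists.
    for i in 0..n {
        samples.push(i as u32);
    }

    // The buffer never moved, i.e. no reallocation happened in the loop.
    assert_eq!(samples.as_ptr(), buf);
    samples
}

fn main() {
    let s = fill_preallocated(1024);
    assert_eq!(s.len(), 1024);
    assert_eq!(s[0], 0);
    assert_eq!(s[1023], 1023);
}
```

For a hard-RT component one would go further (fixed-size arrays, no heap at all), but the reserve-then-run discipline is the same.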
>> * Not only is Linux not the only platform; assuming that *LLVM* is the only platform is a bad idea as well. [...]
>
> Adding a backend to LLVM will be much easier than porting Rust to another compiler and maintaining it.

If Rust attains an ANSI/IEC/ISO standard and is used on the multi-decade scale, I would expect multiple implementations of Rust with separate codebases.

>> * One of the BIG problems with D uptake is the split-library problem referred to before. [...]
>
> An alternative library is a far better situation than not having good real-time/embedded/freestanding support. I'll have a rejected pull request or RFC to point at for any divergence taken by rust-core, and it won't make any pointless bikeshed changes like renaming an API.

Multiplicity of implementation is fine (glibc, eglibc); divergence of the policy interface is not. If a reimplemented standard library shows up that is designed not to GC, can be plugged into QNX threads, and still conforms to the standard API: glorious. I think the trait system should support this well. :-)

--
Regards,
Paul