On Tue, Oct 29, 2013 at 7:08 PM, Niko Matsakis wrote:
> If I understand correctly, what you are proposing is to offer fixed
> size stacks and to use a guard page to check for stack overflow (vs a
> stack preamble)?
>
> My two thoughts are:
>
> 1. I do think segmented stacks offer several tangible
Simon, one thing you may want to test is 10-20 senders to 1 receiver.
Multiple senders have completely different behaviour and can create a lot
of contention around locks / interlocked calls. Also check what happens
to CPU usage when the receiver blocks on a 100ms disk access every 100ms.
Disruptor as
Note: prefetch and non-temporal instructions really help with this.
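The many-senders-to-one-receiver test suggested above can be sketched with today's `std::sync::mpsc`; the sender count, message count, and the `run` helper are illustrative assumptions, not figures from the thread:

```rust
use std::sync::mpsc;
use std::thread;

// N senders feeding one receiver over a shared channel; counts here are
// illustrative assumptions.
fn run(senders: usize, msgs_per_sender: usize) -> usize {
    let (tx, rx) = mpsc::channel::<usize>();
    let mut handles = Vec::new();
    for id in 0..senders {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            for _ in 0..msgs_per_sender {
                tx.send(id).unwrap();
            }
        }));
    }
    // Drop the original sender so `recv` returns Err once all threads finish.
    drop(tx);
    let mut received = 0;
    while rx.recv().is_ok() {
        received += 1;
    }
    for h in handles {
        h.join().unwrap();
    }
    received
}

fn main() {
    // 16 senders to 1 receiver: the contended shape worth measuring.
    assert_eq!(run(16, 1000), 16_000);
    println!("ok");
}
```

To see the contention effects the email describes, one would time `run` while varying the sender count; with a single shared consumer, throughput typically degrades well before sender count reaches core count.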
>> I'm not deeply familiar with Disruptor, but I believe that it uses bounded
>> queues. My general feeling thus far is that, as the general 'go-to' channel
>> type, people should not be using bounded queues that block the sender wh
On Tue, Oct 29, 2013 at 3:30 PM, Brian Anderson wrote:
> On 10/28/2013 10:02 PM, Simon Ruggier wrote:
>
> Greetings fellow Rustians!
>
> First of all, thanks for working on such a great language. I really like
> the clean syntax, increased safety, separation of data from function
> definitions, a
I certainly like the idea of exposing a "low stack" check to the user
so that they can do better recovery. I also like the idea of
`call_with_new_stack`. I am not sure if this means that the default
recovery should be *abort* vs *task failure* (which is already fairly
drastic).
But I guess it is a
On Tue, Oct 29, 2013 at 04:24:49PM -0400, Daniel Micay wrote:
> If we want to unwind on task failure, we'll need to disable the `prune-eh`
> pass that bubbles up `nounwind` since every function will be able to
> unwind. I think it will cause a very significant increase in code size.
That's too bad
SpiderMonkey uses recursive algorithms in quite a few places. As the
level of recursion is at the mercy of JS code, checking for stack
exhaustion is a must. For that, the code explicitly compares the address
of a local variable with a limit set as part of thread
initialization. If the limit is breached,
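A minimal sketch of that SpiderMonkey-style check, written in Rust for consistency with the rest of the thread; the budget size, the downward-growth assumption, and all names here are mine, not SpiderMonkey's:

```rust
use std::hint::black_box;

// Approximate the current stack pointer via the address of a local.
fn approx_sp() -> usize {
    let marker = 0u8;
    black_box(&marker) as *const u8 as usize
}

struct StackLimit {
    base: usize,
    max_bytes: usize,
}

impl StackLimit {
    // Record a base address near the start of the thread's work.
    fn new(max_bytes: usize) -> StackLimit {
        StackLimit { base: approx_sp(), max_bytes }
    }
    // Assumes the stack grows downward, as on common platforms.
    fn exhausted(&self) -> bool {
        self.base.saturating_sub(approx_sp()) > self.max_bytes
    }
}

fn recurse(depth: usize, limit: &StackLimit) -> Result<usize, &'static str> {
    // Pad each frame so the depth check trips long before a real overflow.
    let pad = [0u8; 128];
    black_box(&pad);
    if limit.exhausted() {
        return Err("stack limit breached");
    }
    if depth == 0 {
        Ok(0)
    } else {
        recurse(depth - 1, limit).map(|d| d + 1)
    }
}

fn main() {
    let limit = StackLimit::new(64 * 1024); // 64 KiB budget, arbitrary
    // Shallow recursion fits; runaway recursion fails cleanly instead of
    // overflowing the real stack.
    assert!(recurse(100, &limit).is_ok());
    assert!(recurse(10_000_000, &limit).is_err());
    println!("ok");
}
```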
I don't think anything but (correct) strict static analysis would have
helped in that case. Embedded systems often avoid dynamic memory allocation
completely, because dynamic out-of-memory conditions would be unacceptable.
That's likely why there was so much data on the stack in the first place. A
Stack overflow gets a mention in this article:
http://www.edn.com/design/automotive/4423428/Toyota-s-killer-firmware--Bad-design-and-its-consequences :)
--
Ziad
On Tue, Oct 29, 2013 at 1:24 PM, Daniel Micay wrote:
> On Tue, Oct 29, 2013 at 7:08 AM, Niko Matsakis wrote:
>
>> If I understand cor
On Tue, Oct 29, 2013 at 7:08 AM, Niko Matsakis wrote:
> If I understand correctly, what you are proposing is to offer fixed
> size stacks and to use a guard page to check for stack overflow (vs a
> stack preamble)?
>
> My two thoughts are:
>
> 1. I do think segmented stacks offer several tangible
On 10/28/2013 10:02 PM, Simon Ruggier wrote:
Greetings fellow Rustians!
First of all, thanks for working on such a great language. I really
like the clean syntax, increased safety, separation of data from
function definitions, and freedom from having to declare duplicate
method prototypes in
On 29 October 2013 16:43, Niko Matsakis wrote:
> But I've been watching the code I write now and I've found that many
> times a recursion-based solution is much cleaner. Moreover, since we
> integrate recursion specially with the type system in the form of
> lifetimes, it can also express things t
On 29 October 2013 12:08, Niko Matsakis wrote:
> One non-technical difficulty to failing on overflow is how to handle
> user-defined destructors when there is no stack to run them on
From C++ experience, the need to handle deep recursion comes from code
like parsers or tree-like structure navigation.
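The rewrite that kind of code needs, from call-stack recursion to an explicit heap-allocated stack, can be sketched for a toy binary tree (the `Tree` type and the sum functions are illustrative, not from the thread):

```rust
// Hypothetical tree type; the point is replacing call-stack recursion with
// an explicit Vec-based stack so depth is bounded by heap, not stack.
enum Tree {
    Leaf(i64),
    Node(Box<Tree>, Box<Tree>),
}

// Recursive sum: depth limited by the call stack.
fn sum_recursive(t: &Tree) -> i64 {
    match t {
        Tree::Leaf(v) => *v,
        Tree::Node(l, r) => sum_recursive(l) + sum_recursive(r),
    }
}

// Iterative sum with an explicit stack: depth limited only by memory.
fn sum_explicit(t: &Tree) -> i64 {
    let mut stack = vec![t];
    let mut total = 0;
    while let Some(node) = stack.pop() {
        match node {
            Tree::Leaf(v) => total += *v,
            Tree::Node(l, r) => {
                stack.push(l);
                stack.push(r);
            }
        }
    }
    total
}

fn main() {
    // A degenerate left-leaning chain: the shape that breaks naive recursion.
    let mut t = Tree::Leaf(1);
    for _ in 0..1000 {
        t = Tree::Node(Box::new(t), Box::new(Tree::Leaf(1)));
    }
    assert_eq!(sum_recursive(&t), 1001);
    assert_eq!(sum_explicit(&t), 1001);
    println!("ok");
}
```

The explicit-stack version also illustrates the compactness argument made elsewhere in the thread: each `Vec` entry is one pointer, versus a full stack frame per level of recursion.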
On Tue, Oct 29, 2013 at 10:14:13PM +1100, Brendan Zabarauskas wrote:
> > struct ntimes(times: uint, value: T) -> T;
>
>
> Does this syntax work at the moment?
My mistake, I meant to leave off the `-> T`. It would just be:
struct ntimes(times: uint, value: T);
impl ntimes {
On Tue, Oct 29, 2013 at 04:21:35PM +0100, Igor Bukanov wrote:
> This is a weak argument. If one needs that much memory, then using an
> explicit stack is a must, as it allows for a significantly more compact
> memory representation.
I considered this when I wrote the e-mail. I partially agree but not
f
On 29 October 2013 12:08, Niko Matsakis wrote:
> 1. I do think segmented stacks offer several tangible benefits:
>
> - Recursion limited only by available memory / address space
This is a weak argument. If one needs that much memory, then using an
explicit stack is a must as it allows for signifi
On Tue, Oct 29, 2013 at 7:08 AM, Niko Matsakis wrote:
> If I understand correctly, what you are proposing is to offer fixed
> size stacks and to use a guard page to check for stack overflow (vs a
> stack preamble)?
>
> My two thoughts are:
>
> 1. I do think segmented stacks offer several tangible
> struct ntimes(times: uint, value: T) -> T;
Does this syntax work at the moment?
~Brendan
On 29/10/2013, at 9:33 PM, Niko Matsakis wrote:
> Incidentally, my preferred way to "return a closure" is to use
> an impl like so:
>
>struct ntimes(times: uint, value: T) -> T;
>impl ntimes {
If I understand correctly, what you are proposing is to offer fixed
size stacks and to use a guard page to check for stack overflow (vs a
stack preamble)?
My two thoughts are:
1. I do think segmented stacks offer several tangible benefits:
- Recursion limited only by available memory / address s
You may just want to enable the garbage-collection feature gate by
adding `#[feature(managed_boxes)];` to the top of the file. We have
currently made `@T` types "opt-in" because we expect the syntax to
change (as the message notes) and we expect the collector to change
from a ref-counting to a trac
Incidentally, my preferred way to "return a closure" is to use
an impl like so:
struct ntimes(times: uint, value: T) -> T;
impl ntimes {
fn call(&mut self) -> Option {
if self.times == 0 {
None
} else {
self.times -= 1;
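The preview cuts off Niko's example, which uses a proposed struct-call syntax from 2013. As a hedged guess at the intended behaviour (yield `value`, `times` times), the same state-in-a-struct pattern is what the `Iterator` trait expresses in present-day Rust:

```rust
// Guess at the intended semantics of `ntimes`: a struct holding captured
// state whose `next` yields `value` until `times` is exhausted.
struct NTimes<T: Clone> {
    times: usize,
    value: T,
}

impl<T: Clone> Iterator for NTimes<T> {
    type Item = T;
    fn next(&mut self) -> Option<T> {
        if self.times == 0 {
            None
        } else {
            self.times -= 1;
            Some(self.value.clone())
        }
    }
}

fn main() {
    let xs: Vec<i32> = NTimes { times: 3, value: 7 }.collect();
    assert_eq!(xs, vec![7, 7, 7]);
    println!("ok");
}
```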
Indeed. It is invaluable in helping me decide when to pull a new master and
be prepared for what I need to fix in my code. Many thanks for doing this!
On Tue, Oct 29, 2013 at 10:51 AM, Gaetan wrote:
> +1 I just subscribed yesterday and I really appreciate this overview :)
>
> -
> Gaetan
>
You can use the `--emit-llvm` flag with rustc to check out the IR. To raise
the optimisation level, you can use `--opt-level LEVEL`, or just `-O` for an
optimisation level of 2.
~Brendan
On 29/10/2013, at 6:00 AM, Rémi Fontan wrote:
> Thanks.
>
> I hope that LLVM is capable of optimizing t
Thanks to everyone who replied. Daniel Micay gave me more ideas on the
IRC channel about the move semantics. The final program, which compiled
and worked fine (with borrow/clone), is given below for
reference. Additionally, I later found Patrick Walton's blog post
explaining reference counting
+1 I just subscribed yesterday and I really appreciate this overview :)
-
Gaetan
2013/10/29 Jack Moffitt
> > I just wanted to thank you for the "This Week in Rust" notes. I love
> > reading them and I am sure that I am not the only one who appreciates the
> > effort that you put into ea