On 10/30/13 4:31 PM, Robert O'Callahan wrote:
RWarcs are worse. However, tasks that share an RWarc can't generally fail
independently, I assume. So one possibility is to group tasks into units
of failure, require that RWarcs can only be shared within such units,
and account for memory usage at fai…
On Thu, Oct 31, 2013 at 12:16 PM, Patrick Walton wrote:
> On 10/30/13 4:12 PM, Robert O'Callahan wrote:
>
>> Since tasks don't share heaps, bounding their memory usage seems
>> tractable; it becomes an accounting problem. Instead of using explicit
>> counters I suggest following the lead of Gecko's…
On Thu, Oct 31, 2013 at 12:21 PM, Patrick Walton wrote:
> On 10/30/13 4:16 PM, Patrick Walton wrote:
>
>> We don't have precise stack maps, though
>>
>
> I should add that this is an LLVM problem, not a language problem.
> Although it's a big LLVM problem that is a lot of work to fix. Maybe some
> of Apple's changes will help here.
On 10/30/13 4:16 PM, Patrick Walton wrote:
We don't have precise stack maps, though
I should add that this is an LLVM problem, not a language problem.
Although it's a big LLVM problem that is a lot of work to fix. Maybe
some of Apple's changes will help here.
(GCC has the same problem, by the way.)
On 10/30/13 4:12 PM, Robert O'Callahan wrote:
Since tasks don't share heaps, bounding their memory usage seems
tractable; it becomes an accounting problem. Instead of using explicit
counters I suggest following the lead of Gecko's MemShrink project and
building infrastructure to compute the memory…
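The traversal-based accounting Robert describes (as opposed to explicit counters) can be sketched in modern Rust as follows; the `HeapSizeOf` trait and `TaskState` type are illustrative names, not anything that existed at the time (Servo later grew a similar trait):

```rust
// Hypothetical traversal-based accounting: each type reports the heap
// bytes it owns, and a task's usage is computed on demand by walking
// its roots instead of charging a counter on every allocation.
trait HeapSizeOf {
    /// Heap bytes owned by this value's children (excluding `self` itself).
    fn heap_size_of_children(&self) -> usize;
}

impl HeapSizeOf for Vec<u8> {
    fn heap_size_of_children(&self) -> usize {
        self.capacity() // one byte per slot of backing storage
    }
}

// Illustrative stand-in for a task's root data structures.
struct TaskState {
    inbox: Vec<u8>,
    scratch: Vec<u8>,
}

impl HeapSizeOf for TaskState {
    fn heap_size_of_children(&self) -> usize {
        self.inbox.heap_size_of_children() + self.scratch.heap_size_of_children()
    }
}

fn main() {
    let state = TaskState {
        inbox: Vec::with_capacity(256),
        scratch: Vec::with_capacity(1024),
    };
    println!("task owns {} heap bytes", state.heap_size_of_children());
}
```

The appeal of this shape is that the hot allocation path stays untouched; the cost is paid only when a measurement is requested.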
On Thu, Oct 31, 2013 at 12:12 PM, Robert O'Callahan wrote:
> Since tasks don't share heaps, bounding their memory usage seems
> tractable; it becomes an accounting problem. Instead of using explicit
> counters I suggest following the lead of Gecko's MemShrink project and
> building infrastructure…
On Wed, Oct 30, 2013 at 3:17 PM, Niko Matsakis wrote:
> But I guess it is a legitimate question: to what extent should we
> permit safe rust code to bring a system to its knees? We can't truly
> execute untrusted code, since it could invoke native things or include
> unsafe blocks, but it'd be nice…
I really like the idea of a task being a sandbox (if pure/no-unsafe Rust).
It seems (relatively) easy for a task to keep count of the number of bytes
it allocated (or the number of blocks); both heap-allocated and
stack-allocated blocks could be meshed together there (after all, both
consume memory…
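A byte-count budget of the kind described could look something like this in modern Rust; the `Budget` type and the fixed limit are illustrative, and a real implementation would hook the task's allocator rather than be called by hand:

```rust
// Hypothetical per-task allocation budget: each allocation is "charged"
// against a fixed byte limit, and exceeding the limit surfaces an error
// that the task runtime could turn into task failure.
struct Budget {
    used: usize,
    limit: usize,
}

impl Budget {
    fn new(limit: usize) -> Budget {
        Budget { used: 0, limit }
    }

    /// Record `bytes` of allocation; fail if the budget would be exceeded.
    fn charge(&mut self, bytes: usize) -> Result<(), &'static str> {
        if self.used + bytes > self.limit {
            Err("task exceeded its memory budget")
        } else {
            self.used += bytes;
            Ok(())
        }
    }
}

fn main() {
    let mut budget = Budget::new(1024);
    assert!(budget.charge(512).is_ok());
    assert!(budget.charge(512).is_ok()); // exactly at the limit is fine
    assert!(budget.charge(1).is_err()); // one byte over would fail the task
    println!("used {} of {} bytes", budget.used, budget.limit);
}
```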
I certainly like the idea of exposing a "low stack" check to the user
so that they can do better recovery. I also like the idea of
`call_with_new_stack`. I am not sure if this means that the default
recovery should be *abort* vs *task failure* (which is already fairly
drastic).
But I guess it is a legitimate question…
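One way to read the `call_with_new_stack` idea in today's terms: run the closure on a fresh thread with a larger stack and join it. The helper below is a sketch of that reading, not the proposed API; the function name and the 16 MiB size are illustrative.

```rust
use std::thread;

// Sketch of a `call_with_new_stack`-style helper: instead of aborting when
// the stack runs low, re-run the work on a fresh thread with a bigger stack.
fn call_with_new_stack<T, F>(f: F) -> T
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    thread::Builder::new()
        .stack_size(16 * 1024 * 1024) // 16 MiB, larger than the default
        .spawn(f)
        .expect("could not spawn worker thread")
        .join()
        .expect("worker thread panicked")
}

fn main() {
    // Recursion this deep could be risky on a small stack.
    fn depth(n: u64) -> u64 {
        if n == 0 { 0 } else { 1 + depth(n - 1) }
    }
    println!("reached depth {}", call_with_new_stack(|| depth(50_000)));
}
```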
SpiderMonkey uses recursive algorithms in quite a few places. As the
level of recursion is at the mercy of JS code, checking for stack
exhaustion is a must. For that the code explicitly compares the address
of a local variable with a limit set as part of thread
initialization. If the limit is breached,…
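That check translates fairly directly; here is a sketch in Rust. The downward-growing stack and the 64 KiB budget are assumptions, and a production version would derive the limit from the OS-reported thread stack bounds rather than an address taken at startup.

```rust
// Sketch of the SpiderMonkey-style check: record a stack address at
// "thread initialization", derive a limit from it, and have recursive
// code compare the address of a fresh local against that limit.
const STACK_BUDGET: usize = 64 * 1024; // illustrative per-thread budget

fn stack_pos() -> usize {
    let marker = 0u8;
    // black_box keeps the marker from being optimized off the stack.
    std::hint::black_box(&marker) as *const u8 as usize
}

fn recurse(limit: usize, depth: usize) -> Result<usize, &'static str> {
    if stack_pos() < limit {
        return Err("stack limit breached");
    }
    // Consume some stack per frame so the demo terminates quickly.
    let _frame = std::hint::black_box([0u8; 1024]);
    match recurse(limit, depth + 1) {
        Err(_) => Ok(depth), // recover gracefully at the deepest safe frame
        ok => ok,
    }
}

fn main() {
    let limit = stack_pos() - STACK_BUDGET; // set once, at "thread init"
    let depth = recurse(limit, 0).expect("check fired before any frame ran");
    println!("recovered after {} frames", depth);
}
```

Unlike guard-page faults, breaching this software limit leaves the program in a state where ordinary error handling (or task failure) can still run.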