12-Apr-2013 19:27, Manu wrote:
On 13 April 2013 00:56, Dmitry Olshansky <dmitry.o...@gmail.com> wrote:

    12-Apr-2013 18:40, Manu wrote:


        It measures in the megabytes on PCs. I'm used to working on
        machines with ~32k stack, I use it aggressively, and I don't
        tend to run out.


    Just a moment ago you were arguing for some quite different platform :)


Hey? I think 32k is pretty small, and I find it comfortable enough.

For an app - yes; for a library - no. As long as you control the total usage of the stack all is fine; once you don't - too bad. A library doesn't control it.

I'm just saying that most people who are only concerned with PCs have
nothing to worry about.

Maybe. It depends on how many libraries in the call chain each make the "reasonable" assumption that they can burn a few K of stack.


    Regardless - think fibers on, say, servers. These get no more than
    around 64K of stack. In fact, current D fibers have something like
    16K or 32K.
    There was even a report that writeln triggered a stack overflow with
    these, so the size was extended a bit.


If it was overflowing 64k, then the function that allocates 1k for some
string processing is not taking the significant share.

    In general, simple non-GUI threads on Windows get 64K by default (IIRC).


And that's heaps.

So what? It still overflows.
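
For reference, this is how little stack a fiber gets unless the code creating it says otherwise - a minimal sketch against core.thread; the 64K figure below is just the ballpark mentioned above, not a recommendation:

import core.thread : Fiber;

void worker()
{
    // Everything called from here -- including any library code that
    // "just burns a couple of K" -- has to fit in this fiber's stack.
}

void main()
{
    // Explicit stack size; the defaults discussed above were in the
    // same 16K-64K ballpark.
    auto f = new Fiber(&worker, 64 * 1024);
    f.call();
}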

             Maybe just adding a separate thread-local growable stack
             for data would work - at least it wouldn't depend on sheer
             luck and particular OS settings.


        If you're saying the stack is a limited resource, therefore
        it's unsafe to use it, then you might as well argue that
        calling any function is an unsafe process.


    s/limited/unpredictably limited/

    "calling any function is an  unsafe process" - indeed in principle
    you don't know how much of stack these use unless you measure them
    or analyze otherwise.

    It's awful that on a 32-bit system you can't expect the stack to be
    arbitrarily large (as much as you'd use of it), since threads quickly
    use up all of the virtual memory. On 64-bit something to that effect
    is achievable.


Just apply some common sense to your stack usage. Don't allocate
hundreds of kb (or even 10s of kb) at a time.

Helpless rules; it's like "don't follow suspicious links on the web". Neither works RELIABLY.
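
To make the "separate thread-local growable stack for data" idea from above concrete - a rough sketch, all names made up, nothing like this exists in druntime, error handling omitted:

struct ScratchStack
{
    private ubyte[] buf;
    private size_t top;

    // Reserve n bytes of scratch space, growing the backing array
    // on the GC heap if needed -- the OS thread/fiber stack is untouched.
    ubyte[] push(size_t n)
    {
        if (top + n > buf.length)
            buf.length = (top + n) * 2;
        auto slice = buf[top .. top + n];
        top += n;
        return slice;
    }

    // Release the most recently pushed n bytes.
    void pop(size_t n) { top -= n; }
}

// Module-level variables are thread-local in D, so each thread
// gets its own scratch stack.
ScratchStack scratch;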

     > I wouldn't personally burn more than 1k
     > in a single function, unless I knew it was close to a leaf by design
     > (which many library calls are).

    And you trust that nobody will build a ton of wrappers on top of
    your function (even a close-to-leaf one)? Come on, "they" all do it.


A ton of wrappers in Phobos? If it's their own program and they write
code like that, they're probably using the heap.

Imagine some other third-party library that uses Phobos and also avoids heap allocation, because why not burn some stack... And so on up the chain of libraries/frameworks.

What I'm trying (and obviously failing) to show is that you can't rely on goodwill in stack usage, and unless you see the whole picture you'd rather not reach for alloca (e.g. Phobos can't).
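
That is, the problem is never one function in isolation, it's the composition. Something like this looks perfectly reasonable on its own (processRecord is a made-up example; alloca as declared in core.stdc.stdlib):

import core.stdc.stdlib : alloca;

// Looks harmless by itself: "only" 4K of stack for some scratch space.
void processRecord(const(char)[] record)
{
    enum scratchSize = 4096;
    auto buf = (cast(char*) alloca(scratchSize))[0 .. scratchSize];

    // Whose stack did that come from? A 1 MB main thread, a 64K worker
    // thread, or a 16K fiber three frameworks up the call chain -- this
    // function has no way to know, and neither does its caller.
    assert(record.length <= buf.length);
    buf[0 .. record.length] = record[];
    // ... do the actual work with buf ...
}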


        90% of what we're dealing with here are strings, and they tend to
        measure in the 10s of bytes.

    ???
    Not at all, paths could easily get longer than 256, much to legacy
    apps' chagrin.


And they fall back to the heap. No problem.

Sadly they don't know until it hits them. Then they may choose to rewrite the libraries they use; nice prospect.
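
For concreteness, the "fall back to the heap" pattern being discussed looks roughly like this (buildFullPath is a made-up name, not a Phobos function; 256 is just the legacy limit mentioned above):

char[] buildFullPath(const(char)[] dir, const(char)[] name, char[] stackBuf)
{
    immutable needed = dir.length + 1 + name.length;
    // Common case: the result fits in the caller-provided stack buffer.
    // Long paths silently switch to the GC heap instead.
    auto dst = needed <= stackBuf.length ? stackBuf[0 .. needed]
                                         : new char[](needed);
    dst[0 .. dir.length] = dir[];
    dst[dir.length] = '/';
    dst[dir.length + 1 .. $] = name[];
    return dst;
}

void main()
{
    char[256] buf = void;   // a couple hundred bytes of stack
    auto p = buildFullPath("/usr/include", "stdio.h", buf[]);
    assert(p == "/usr/include/stdio.h");
}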

If I care about avoiding heap usage, I can know to avoid such deep
paths. I.e., it can be controlled by the user.

Yes - the user, not the library writer, damn it.

--
Dmitry Olshansky
