Building expressions and calling eval() and such are really just there
as an escape hatch; you're not supposed to do lots of programming via
eval(). If you look at the history of Lisp, originally it was all
about dynamic scope and building expressions, but it became clear that
this was not an efficient abstraction, and Scheme switched to static
scope, compile-time macros, and syntax objects that are no longer the
familiar lists-of-symbols.

Building expressions is sort of a global maximum of flexibility, and
therefore not efficient unless the cost is highly amortized. It is
similar to a C program that prints out another C program, and invokes
the compiler on that. Nobody would expect that to be fast or
convenient. Having eval() is just a slightly more convenient version
of that.
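
For instance, in Julia terms (just a sketch to make the analogy concrete):

    # build an expression at run time, then hand it to eval();
    # morally the same as generating C source and invoking the compiler on it
    ex = Expr(:call, :+, 1, 2)   # i.e. :(1 + 2)
    eval(ex)                     # => 3, evaluated at global (module) scope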

On Mon, Feb 10, 2014 at 4:49 PM, Stefan Karpinski <ste...@karpinski.org> wrote:
> It's completely consistent - nothing ever reifies or reflects on local
> scopes. It just may not be what you're used to coming from Lisp or Scheme
> (or other dynamic languages).
>
>
> On Mon, Feb 10, 2014 at 3:10 PM, Bill Janssen <bill.jans...@gmail.com>
> wrote:
>>
>> My concern is that there seem to be a lot of special-case things about
>> scoping and evaluation, which makes actual use of homoiconicity rather
>> painful/difficult, and produces unexpected results for some classes of
>> programs.  Perhaps David Moon's idea of Contexts would address this.  But
>> they have to be used consistently.
>>
>> Meanwhile my workaround of generating new modules on the fly and using
>> them as evaluation contexts seems to be possible.
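>>
>> Roughly like this (a minimal sketch; the exact Module constructor
>> arguments and the two-argument form of eval may differ across Julia
>> versions):
>>
>>     m = Module(:Sandbox)      # fresh module to use as the evaluation context
>>     eval(m, :(using Base))    # may be needed if Base isn't imported by default
>>     eval(m, :(x = 41))        # bindings land in Sandbox, not Main
>>     eval(m, :(x + 1))         # => 42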
>>
>> Thanks.
>>
>> Bill
>>
>> On Sunday, February 9, 2014 7:22:03 PM UTC-8, Stefan Karpinski wrote:
>>>
>>> On Sun, Feb 9, 2014 at 7:10 PM, Bill Janssen <bill.j...@gmail.com> wrote:
>>>>
>>>> I just want to be able to get my hands on the inner scope blocks.
>>>
>>>
>>> This you can't do, by design. When a dynamic language is implemented as a
>>> straightforward interpreter, local environments are just hash tables and
>>> it's so easy to expose them as first-class values that it's difficult to
>>> resist the temptation to do so - especially since it seems like such a
>>> powerful feature. But allowing local environments to escape is a disaster as
>>> soon as you want to optimize your code. Unless the compiler can prove that
>>> the local environment can't possibly escape, it can't optimize anything,
>>> because if the local environment escapes, basically any crazy thing can
>>> happen at any point in time: every function might change the value of every
>>> local binding, including changing the type of values that variables are
>>> bound to, and introduce entirely new bindings. Not only is this terrible for
>>> compilers trying to optimize code, but it makes it equally impossible for
>>> programmers to reason about what the code will do. Sure, you can tell
>>> people not to do that sort of thing, but then why allow it in the first
>>> place? These are exactly the kind of marginal features that make traditional
>>> highly dynamic languages so difficult to optimize. Rather than implementing
>>> fancy compiler shenanigans that try to prove that local environments don't
>>> escape, Julia simply doesn't reify local environments, via eval or
>>> otherwise.
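>>>
>>> To make that concrete: eval in Julia always works in a module's global
>>> scope, so it can neither see nor modify locals (a small sketch):
>>>
>>>     function f()
>>>         x = 1
>>>         eval(:(x = 2))   # assigns a global x in the enclosing module
>>>         return x         # the local x is untouched
>>>     end
>>>     f()                  # => 1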
>>>
>>> Sorry for the rant, this should probably go in our FAQ.
>
>
