That's great :) I've uploaded a minor update to the issue that allows
closures and array declarations to contain line returns, e.g.

foo = {
  echo $it
}

However, my ANTLR breaks down at this time of night trying to get it to support:

foo = {
  echo $it
  echo $it again
}
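
Something along these lines might do it - a rough sketch only, and the rule
and token names below are made up rather than taken from the attached gogo.g:

closure   : '{' NL* statement ( ( ';' | NL )+ statement )* NL* '}' ;
statement : token+ ;

i.e. allow one or more newlines or semicolons between statements inside the
braces, and skip any leading or trailing newlines.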

Possibly all will become clear on the morrow...

Regards,

Dave

On Mon, Aug 24, 2009 at 9:52 PM, Guillaume Nodet<gno...@gmail.com> wrote:
> As a starting point for the discussion, I've started writing an ANTLR
> grammar and I attached it to FELIX-1521:
>  https://issues.apache.org/jira/secure/attachment/12417521/gogo.g
>
> On Mon, Aug 24, 2009 at 15:55, David Savage <dave.sav...@paremus.com> wrote:
>
>> Hi Peter,
>>
>> I absolutely agree with you about parser size, but I wonder if this
>> might be a premature optimization? The concern is that in the urge to
>> optimize early we miss some subtle parts of the language specification.
>>
>> In theory, the existence of the grammar - verified via a grammar tool
>> such as ANTLR, Yacc, etc. - is a test that the language is logically
>> self-consistent. Whether or not we then use the actual auto-generated
>> parsers in the "real" implementation is another thing. But having it
>> available as a sanity check, to confirm that an optimized version is
>> behaving as per the language spec, is potentially very useful.
>>
>> However I also know there are problems associated with ambiguities in
>> these sorts of grammars, i.e. when certain input can be matched by two
>> or more different rule invocations. This /could/ be the case when
>> mixing IO and OO semantics, which potentially argues against this
>> route. I guess I just wanted to flag the fact that we may be getting
>> into deep water...
>>
>> Either way, I do think it would be worthwhile defining some commands
>> whose base-level semantics we all agree on, e.g. the IO vs OO options
>> for echo.
>>
>> Regards,
>>
>> Dave
>>
>> On Mon, Aug 24, 2009 at 1:37 PM, Peter Kriens<peter.kri...@aqute.biz>
>> wrote:
>> > One of the key reasons for the grammar is that the parser could be
>> > trivial. You basically only need a token and string parser and
>> > recursively parse blocks enclosed by {}, (), <>, [], etc. A trivial
>> > grammar that is very similar to Tcl. Though it is a nice exercise to
>> > turn it into an ANTLR grammar, the increased size would void one of
>> > my primary design goals. That is, if you look at BeanShell it is
>> > around 130k, and that is largely the zillion parse classes.
>> >
>> > I think keeping this simplicity has some value ...
>> >
>> > Kind regards,
>> >
>> >        Peter Kriens
>> >
>> >
>> >
>> >
>> > On 21 aug 2009, at 21:34, David Savage wrote:
>> >
>> >> On Fri, Aug 21, 2009 at 4:50 PM, Guillaume Nodet<gno...@gmail.com>
>> >>> wrote:
>> >>>
>> >>> Right, I think it makes sense to define the grammar.
>> >>> But the grammar just defines ... well, the syntax, not the semantics.
>> >>> I think most of the discussion is about what semantics to apply to the
>> >>> grammar.
>> >>
>> >> Hmmm, but having agreed on a syntax, doesn't that mean we then have a
>> >> set of tokens to which we can apply semantics? At the risk of
>> >> reopening the a,b,c debate...
>> >>
>> >> ab
>> >> a b
>> >> a ; b
>> >> a = b
>> >> a | b
>> >> a {b}
>> >> a <b>
>> >> "a b"
>> >> a = $b
>> >> a = {b}
>> >> a = <b>
>> >> $ab
>> >> $a b
>> >>
>> >> These generate a variety of different tokens, and it is the tokens to
>> >> which we apply semantics? Also, the expected result of each of these
>> >> varies depending on whether a or b is assigned to be a command or an
>> >> object, and also on whether it does IO, is a value or returns a value.
>> >>
>> >> I think if we focus on one part of the problem and tweak the parsing
>> >> code to fix it, we are highly likely to generate some other unexpected
>> >> behaviour. Whereas if we define a parser that can generate a set of
>> >> tokens via a grammar, the runtime only has to define the actions it
>> >> applies to those tokens?
>> >>
>> >> Though it still remains to be seen whether a grammar is possible, so
>> >> perhaps this is a moot point until one is available - hopefully the
>> >> above illustrates the concern though?
>> >>
>> >>>
>> >>> For commands, imnsho, I think we should discuss and encourage the use
>> >>> of what I committed in
>> >>>
>> >>> https://svn.apache.org/repos/asf/felix/trunk/gogo/commands/src/main/java/org/apache/felix/gogo/commands/
>> >>>
>> >>> Those are much more powerful than a simple method on an object,
>> >>> because they allow:
>> >>>  * help on the command
>> >>>  * automatic parsing / conversion of arguments
>> >>>  * variable number of arguments
>> >>>  * command line switches
>> >>>
>> >>
>> >> Certainly I think those classes look very useful, and wrapper classes
>> >> and utilities are great in real-world scenarios to avoid code
>> >> duplication. But I don't think we should ignore the OO case. Being
>> >> able to call things like String.indexOf and startsWith type methods in
>> >> a shell, without having to generate tiny wrapper commands to achieve
>> >> things the VM already provides, helps with the DRY aspect of software
>> >> engineering.
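>> >>
>> >> For example, something like this (a purely hypothetical session, just
>> >> using the $var method-call syntax from the a,b,c examples above):
>> >>
>> >> str = "hello world"
>> >> $str startsWith "hello"
>> >> $str indexOf "world"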
>> >>
>> >>> We have a bunch of commands defined in Karaf that are not specific to
>> >>> Karaf, so it would make sense to move them into a common ground.
>> >>> You can find some of them at:
>> >>>
>> >>> https://svn.apache.org/repos/asf/felix/trunk/karaf/gshell/gshell-osgi/src/main/java/org/apache/felix/karaf/gshell/osgi/
>> >>> The only problem is that those are slightly tied to blueprint atm.
>> >>
>> >> At this level of the debate I'm not sure we need "real" commands, only
>> >> to agree on a set of representative commands - so we know exactly what
>> >> echo does, for example. If those representative commands are also
>> >> useful that's obviously a bonus, but at least if we're agreed on what
>> >> the various types of commands and tokens are, then we should be able
>> >> to agree (or not) on how to treat them in a shell environment - which
>> >> avoids the potential for circular debates. Finally, we should certainly
>> >> implement these representative commands, to allow us to create
>> >> meaningful tests.
>> >>
>> >> Again just some thoughts...
>> >>
>> >> Regards,
>> >>
>> >> Dave
>> >>
>> >>>
>> >>> On Fri, Aug 21, 2009 at 16:54, David Savage <dave.sav...@paremus.com>
>> >>> wrote:
>> >>>>
>> >>>> Hi there,
>> >>>>
>> >>>> Seems like a good time to wade into the discussion ;) It could be
>> >>>> beneficial to start looking at the TSL syntax from the point of view
>> >>>> of a formal grammar specification such as ANTLR? Currently most of
>> >>>> the testing is via unit tests and the parsing is done by custom Java
>> >>>> code. The danger is that without a formal grammar specification we
>> >>>> may "fix" one part of the parser to handle one use case only to
>> >>>> expose a secondary problem. Unit tests are certainly one way to catch
>> >>>> this, but they can only be as good as the number of tests we define,
>> >>>> whereas I believe a logically consistent grammar is a test in itself?
>> >>>>
>> >>>> Of course this may also be a fool's errand (firstly as I have no
>> >>>> experience with ANTLR grammars), and looking at this link (which I
>> >>>> came across by searching for "bash antlr grammar" on Google) it may
>> >>>> not even be possible - though the post is very old:
>> >>>>
>> >>>> http://www.antlr.org/pipermail/antlr-interest/2006-May/016235.html
>> >>>>
>> >>>> Wondering if there are any language experts following this list who
>> >>>> can comment?
>> >>>>
>> >>>> I'm also wondering if it may make sense to separate the shell that
>> >>>> interprets the commands from the runtime that provides the commands.
>> >>>> I really like the TSL impl, but in the end it's just one possible way
>> >>>> of running commands? One of the really nice things about RFC 132 is
>> >>>> that it provides a common way for developers to add commands in OSGi,
>> >>>> but how we refer to those commands is a secondary issue - here I'm
>> >>>> thinking of the sh/csh/bash type debate...
>> >>>>
>> >>>> Finally, though the "echo echo" type debate is good for simplifying
>> >>>> the problem down in unit tests and in email, it does depend on how we
>> >>>> think echo is defined. Does echo return a value, does it write the
>> >>>> result to the stream, or both (?!). It seems like it would be useful
>> >>>> to define a set of unambiguous commands for the debate? I guess these
>> >>>> could be abstract commands which could go on a wiki or some such?
>> >>>> Some examples:
>> >>>>
>> >>>> nsc -> no such command
>> >>>> void -> command that does nothing and returns nothing
>> >>>> echo -s hello -> echo "hello" to stream
>> >>>> echo -v hello -> echo "hello" to value
>> >>>> echo -sv hello -> echo "hello" to stream and value
>> >>>> array a b c -> returns [a,b,c] as an array
>> >>>>
>> >>>> Others?
>> >>>>
>> >>>> Just my 2 pence anyway...
>> >>>>
>> >>>> Regards,
>> >>>>
>> >>>> Dave
>> >>>>
>> >>>> On Thu, Aug 20, 2009 at 5:45 PM, Guillaume Nodet<gno...@gmail.com>
>> >>>> wrote:
>> >>>>>
>> >>>>> On Thu, Aug 20, 2009 at 15:40, Derek Baum <derek.b...@paremus.com>
>> >>>>> wrote:
>> >>>>>
>> >>>>>>
>> >>>>>>> I disagree with having eval as a command.  The reason is that it
>> >>>>>>> has two side effects:
>> >>>>>>>
>> >>>>>>>  * all parameters are evaluated once more, so that $xxx expansion
>> >>>>>>> will be
>> >>>>>>> done once again, and it could lead to unwanted results
>> >>>>>>
>> >>>>>> this is offset by not implicitly evaluating the args - re-evaluation
>> >>>>>> only occurs when explicitly invoking eval.
>> >>>>>>
>> >>>>>>>  * all parameters are converted to strings, which I think is not
>> >>>>>>> what is expected.
>> >>>>>>
>> >>>>>> I'm not sure this is a problem. The 3.patch eval is like eval in
>> >>>>>> bash, and can be used to re-evaluate a string as a script.
>> >>>>>>
>> >>>>>> Derek
>> >>>>>>
>> >>>>>
>> >>>>>
>> >>>>> Well, I think this really leads to undesirable effects:
>> >>>>>>
>> >>>>>> x = <bundle 0>
>> >>>>>
>> >>>>> ...
>> >>>>>>
>> >>>>>> $x toString
>> >>>>>
>> >>>>> org.apache.felix.framework [0]
>> >>>>>>
>> >>>>>> eval $x toString
>> >>>>>
>> >>>>>  Command not found *:org.apache.felix.framework
>> >>>>>
>> >>>>> I think both should be identical.
>> >>>>> If you want to evaluate the arguments as a fully new command line,
>> >>>>> you could use quoting:
>> >>>>>>
>> >>>>>> eval "$x toString"
>> >>>>>
>> >>>>> But the opposite can't be done.
>> >>>>> So I still think we should come back to my earlier proposal about
>> >>>>> making it a real keyword instead of a command.
>> >>>>>
>> >>>>> I think this is independent of whether arguments are re-parsed,
>> >>>>> though they are related.
>> >>>>>
>> >>>>> I've also spotted something else, but it looks like a different
>> >>>>> problem:
>> >>>>>>
>> >>>>>> echo "$x"
>> >>>>>
>> >>>>> java.lang.Exception: Unable to convert from
>> >>>>> [org.apache.felix.framework [0]] to java.util.List<java.lang.String>
>> >>>>> (error converting collection entry)
>> >>>>>
>> >>>>> I would have thought it would behave the same as
>> >>>>>>
>> >>>>>> echo <$x toString>
>> >>>>>
>> >>>>>
>> >>>>> --
>> >>>>> Cheers,
>> >>>>> Guillaume Nodet
>> >>>>> ------------------------
>> >>>>> Blog: http://gnodet.blogspot.com/
>> >>>>> ------------------------
>> >>>>> Open Source SOA
>> >>>>> http://fusesource.com
>> >>>>>
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Cheers,
>> >>> Guillaume Nodet
>> >>> ------------------------
>> >>> Blog: http://gnodet.blogspot.com/
>> >>> ------------------------
>> >>> Open Source SOA
>> >>> http://fusesource.com
>> >>>
>> >>>
>> >>>
>> >
>> >
>>
>
>
>
> --
> Cheers,
> Guillaume Nodet
> ------------------------
> Blog: http://gnodet.blogspot.com/
> ------------------------
> Open Source SOA
> http://fusesource.com
>
