At 9:14 PM +0100 3/19/03, Matthijs van Duin wrote:
On Wed, Mar 19, 2003 at 02:31:58PM -0500, Dan Sugalski wrote:
Well, I'm not 100% sure we need it for rules. Simon's point is well-taken, but on further reflection what we're doing is subclassing the existing grammar and reinvoking the regex engine on that subclassed grammar, rather than redefining the grammar actually in use. The former doesn't require runtime redefinitions, the latter does, and I think we're going to use the former scheme.

That's not the impression I got from Simon


It would also be rather annoying.. think about balanced braces etc. Take this rather contrived, but valid example:

$x ~~ m X {
        macro ... yada yada yada;
        } X;

It seems to me that you're really inside a grammar rule when that macro is defined.

Right. Macro definition ends, you subclass off the parser object, then immediately call into it. It eats input until the end of the regex, at which point it exits (and so does the parent, for lack of input), and the resulting parse tree is turned into bytecode and executed.
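A rough sketch in Python of that "subclass and reinvoke" scheme (the `Parser` class and `with_macro` method here are invented for illustration; they are not Parrot's actual API):

```python
# Toy model of "subclass the parser, don't mutate the live grammar":
# a macro definition derives a new parser object carrying the extra rule,
# and the parent's rule set is never touched.

class Parser:
    def __init__(self, rules):
        self.rules = dict(rules)

    def with_macro(self, name, rule):
        # Derive a child parser (the "subclass") with the new rule added.
        child = Parser(self.rules)
        child.rules[name] = rule
        return child

    def known_rules(self):
        return sorted(self.rules)

parent = Parser({"ident": r"\w+"})
child = parent.with_macro("yada", r"yada+")
# The child is immediately invoked on the rest of the input; when it runs
# out of input, control falls back to the parent, whose grammar never
# changed -- no runtime redefinition of the grammar actually in use.
```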


Otherwise you'd have to keep a lot of state outside the parser to keep track of such things, which is exactly what perl grammars were supposed to avoid I think.

You, as a user-level programmer, don't have to track the state. The parser code will, but that's not a big deal.


We'll need to meet in the middle..

Well, not to be too cranky (I'm somewhat ill at the moment, so I'll apologize in advance) but... no. No, we don't actually have to, though if we could that'd be nice.

OK, strictly speaking that's true, but I think we can


Semantics. Until Larry's nailed down what he wants, there are issues of reestablishing hypotheticals on continuation reinvocation,

They should be, though: if a variable was hypothesized when the continuation was taken, then it should be hypothesized when that continuation is invoked.

Should they? Does hypotheticalization count as data modification (in which case it shouldn't) or control modification (in which case it should), and do you restore the hypothetical value at the time the continuation was taken or just re-hypotheticalize the variables? (Which makes continuations potentially more expensive, as you then need to save off more info so that on invocation you can restore the hypothetical state.)


What about co-routines, then? And does a yield from a coroutine count as normal or abnormal exit for pushing of hypothetical state outward, or doesn't it count at all?

flushing those hypotheticals multiple times,

No idea what you mean

I hypotheticalize the variables. I then take a continuation. Flow continues normally, exits off the end normally, hypothetical values get pushed out. I invoke the continuation, flow continues, exits normally. Do I push the values out again?
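A toy model (Python; a dict stands in for the enclosing scope, and a plain re-call stands in for re-invoking the continuation) of the double push-out question:

```python
# Hypothetical $x gets a new value inside the block; a normal exit pushes
# that value outward. If a continuation re-enters the block and it exits
# normally a second time, a naive implementation pushes the value out again.

outer = {"x": 1}
pushed = []

def block():
    hypo = {"x": 99}       # hypotheticalize $x
    # ... imagine a continuation is taken somewhere in here ...
    outer.update(hypo)     # normal exit: hypothetical value pushed outward
    pushed.append(outer["x"])

block()   # first, ordinary execution, normal exit
block()   # re-invoking the "continuation" replays the normal exit
# pushed == [99, 99]: the push-out happened twice, which is exactly
# the ambiguity being raised.
```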



what happens to hypotheticals when you invoke a continuation with hypotheticals in effect,


Basically de-hypothesize all current hypotheticals,

How? Successfully or unsuccessfully? Does it even *count* as an exit at all if there's a pending continuation that could potentially exit the hypotheticalizing block later?


what happens to hypotheticals inside of coroutines when you establish them then yield out,

This follows directly from the implementation of coroutines: the first yield is a normal return, so if you hypothesize $x before that, it'll stay hypothesized. If you then hypothesize $y outside the coroutine and call the coroutine again, $y will be de-hypothesized.

Why? That doesn't make much sense, really. If a variable is hypotheticalized outside the coroutine when I invoke it, the coroutine should see the hypothetical variable. But what about yields from within a coroutine that's hypotheticalized a variable? That's neither a normal nor an abnormal return, so what happens?
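Python generators make the question concrete: a yield suspends the frame rather than unwinding it, so it is neither a normal nor an abnormal return, and the coroutine's locals (here standing in for a hypotheticalized variable) simply stay live across the suspension:

```python
import inspect

def coro():
    x = "hypothesized"   # stand-in for hypotheticalizing $x inside the coroutine
    yield x              # suspend: the frame is kept alive, nothing unwinds
    yield x              # on resume, x is still bound

g = coro()
first = next(g)
state = inspect.getgeneratorstate(g)   # frame is merely suspended, not exited
second = next(g)
# Neither exit rule (push hypotheticals out on success, discard on failure)
# obviously applies at the suspension point -- that's the open question.
```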


If the coroutine then hypothesizes $z and yields out, $z will be de-hypothesized and $y re-hypothesized. $x will be unaffected by all this.

Yech. I don't think that's the right thing to do.


and when hypotheticals are visible to other threads.

I haven't thought of that, but to be honest I'm not a big fan of preemptive threading anyway.

Doesn't matter whether you like it or not, they're a fact that must be dealt with. (And scare up a dual or better processor machine and I'll blow the doors off a cooperative threading scheme, synchronization overhead or not)


I read through your proposal (I'm assuming it's the one that started this

Sounds like a good deal? :-)

At the moment, no. It seems like a potentially large amount of overhead for no particular purpose, really.

I have to admit I don't know the details of how your system works, but what I had in mind didn't have any extra overhead at all -- under the (apparently still debatable) assumption that you need to look up subrules at runtime anyway.


You do agree that if that is possible, it *is* a good deal?

No. Honestly I still don't see the *point*, certainly not in regards to regular expressions and rules. The hypothetical issues need dealing with in general for threads, coroutines, and continuations, but I don't see how any of this brings anything to rules for the parsing engine.


The flow control semantics the regex/parser needs to deal with are small and simple. I just don't see the point of trying to make it more complex.

I don't see any win in the regex case, and you're not generalizing it out to the point where there's a win there. (I can see where it would be useful in the general case, but we've come nowhere near touching that)

We have come near it.. backtracking is easy using continuations, and we can certainly have rules set the standard for the general case.

We're not backtracking with continuations, though. -- Dan
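For contrast, here is a minimal sketch (Python; the pattern representation is invented for illustration) of backtracking with an explicit stack of choice points rather than continuations, which is how most regex engines are actually structured:

```python
def match(pattern, text):
    """pattern is a list: a set of characters matches one char from that
    set; ('*', charset) matches zero or more chars from charset, greedily."""
    stack = [(0, 0)]                       # choice points: (pattern idx, text idx)
    while stack:
        pi, ti = stack.pop()
        while True:
            if pi == len(pattern):
                if ti == len(text):
                    return True
                break                      # dead end: pop a choice point
            item = pattern[pi]
            if isinstance(item, tuple):    # ('*', charset), greedy
                charset = item[1]
                end = ti
                while end < len(text) and text[end] in charset:
                    end += 1
                for t2 in range(ti, end):  # shorter matches become choice points
                    stack.append((pi + 1, t2))
                pi, ti = pi + 1, end       # try the longest match first
            elif ti < len(text) and text[ti] in item:
                pi, ti = pi + 1, ti + 1
            else:
                break                      # mismatch: backtrack
    return False

# ('*' over {a,b}) followed by a literal 'b', against "aab": the star first
# eats everything, then backtracks one character via a saved choice point.
assert match([('*', set("ab")), set("b")], "aab")
```

All the backtracking state lives in one ordinary stack, so no continuation machinery (and none of the hypothetical-variable questions above) is needed for the regex engine itself.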

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                         have teddy bears and even
                                      teddy bears get drunk
