Andres Freund writes:
> On 2012-12-20 23:12:55 +0000, McDevitt, Charles wrote:
>>> Another way of attack along these lines might be to use the %glr-parser
>>> and then try to cut back on all those redundant rules that were put in
>>> to avoid conflicts. The number of key words categories and such
>>
>> The GLR output from Bison is licensed under the GPL (unlike the LALR
>> output). So using Bison's GLR mode isn't an option.
>
> That's not the case anymore:
> http://www.gnu.org/software/bison/manual/html_node/Conditions.html

Sorry! My mistake... I didn't realize they changed the rules.
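For context, switching a Bison grammar to the GLR algorithm is a single directive; a toy sketch (this deliberately ambiguous `expr` grammar is illustrative only, not PostgreSQL's gram.y):

```yacc
/* Toy grammar, not PostgreSQL's. With %glr-parser, Bison generates a
 * GLR parser that splits the parse stack on conflicts at runtime
 * instead of requiring the grammar to be conflict-free at build time. */
%glr-parser
%token NUM
%%
expr: expr '+' expr   /* ambiguous without an associativity declaration */
    | NUM
    ;
%%
```

Bison still reports the conflict at generation time, but the GLR machinery resolves it while parsing, which is what would let redundant disambiguation rules be dropped.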
On 2012-12-20 23:12:55 +0000, McDevitt, Charles wrote:
> Another way of attack along these lines might be to use the %glr-parser
> and then try to cut back on all those redundant rules that were put in
> to avoid conflicts. The number of key words categories and such could
> perhaps also be reduced that way.
>
> Of course, this is mostly speculation...
On 2012-12-20 15:58:12 +0000, Greg Stark wrote:
> On Thu, Dec 20, 2012 at 3:18 AM, Tom Lane wrote:
>> Greg Stark writes:
>>> But I'm not entirely convinced any of this is actually useful. Just
>>> because the transition table is large doesn't mean it's inefficient.
>>
>> That's a fair point.
On Thu, Dec 20, 2012 at 3:18 AM, Tom Lane wrote:
> Greg Stark writes:
>> But I'm not entirely convinced any of this is actually useful. Just
>> because the transition table is large doesn't mean it's inefficient.
>
> That's a fair point. However, I've often noticed base_yyparse() showing
> up rather high on profiles...
On 2012-12-20 16:04:49 +0100, Andres Freund wrote:
> On 2012-12-20 15:51:37 +0100, Andres Freund wrote:
> When doing a source/assembly annotation in SearchCatCache about half of
> the misses are attributed to the memcpy() directly at the beginning.

Using a separate array for storing the arguments...
On 2012-12-20 15:51:37 +0100, Andres Freund wrote:
> On 2012-12-20 15:45:47 +0100, Andres Freund wrote:
>> On 2012-12-20 09:11:46 -0500, Robert Haas wrote:
>>> On Thu, Dec 20, 2012 at 8:55 AM, Simon Riggs wrote:
>>>> On 18 December 2012 22:10, Robert Haas wrote:
>>>>> Well that would...
On 2012-12-20 15:45:47 +0100, Andres Freund wrote:
> On 2012-12-20 09:11:46 -0500, Robert Haas wrote:
>> On Thu, Dec 20, 2012 at 8:55 AM, Simon Riggs wrote:
>>> On 18 December 2012 22:10, Robert Haas wrote:
>>>> Well that would be nice, but the problem is that I see no way to
>>>> implement...
On 2012-12-20 09:11:46 -0500, Robert Haas wrote:
> On Thu, Dec 20, 2012 at 8:55 AM, Simon Riggs wrote:
>> On 18 December 2012 22:10, Robert Haas wrote:
>>> Well that would be nice, but the problem is that I see no way to
>>> implement it. If, with a unified parser, the parser is 14% of our
>>> source code...
On Thu, Dec 20, 2012 at 8:55 AM, Simon Riggs wrote:
> On 18 December 2012 22:10, Robert Haas wrote:
>> Well that would be nice, but the problem is that I see no way to
>> implement it. If, with a unified parser, the parser is 14% of our
>> source code, then splitting it in two will probably crank...
On 18 December 2012 22:10, Robert Haas wrote:
> Well that would be nice, but the problem is that I see no way to
> implement it. If, with a unified parser, the parser is 14% of our
> source code, then splitting it in two will probably crank that number
> up well over 20%, because there will be d...
On 12/18/12 5:10 PM, Robert Haas wrote:
> I can't help but suspect that the way we handle keywords today is
> monumentally inefficient. The unreserved_keyword products, et al,
> just seem somehow badly wrong-headed. We take the trouble to
> distinguish all of those cases so that we can turn around...
On Wed, Dec 19, 2012 at 10:18 PM, Tom Lane wrote:
>> valgrind comes with a tool called cachegrind which can emulate the
>> cache algorithm on some variants of various CPUs and produce reports.
>> Can it be made to produce a report for a specific block of memory?
>
> I believe that oprofile can be...
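For reference, the cachegrind workflow Greg describes looks roughly like this (the binary invocation and data-directory path are placeholders, not a tested recipe):

```shell
# Run the workload under cachegrind's cache simulation (expect a large
# slowdown); results land in a cachegrind.out.<pid> file.
valgrind --tool=cachegrind ./postgres --single -D /tmp/pgdata postgres

# Summarize simulated miss counts per function.
cg_annotate cachegrind.out.<pid>
```

Cachegrind reports per-function (and, with source annotation, per-line) miss counts; it does not natively scope a report to one block of memory, which is the limitation being asked about.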
Greg Stark writes:
> But I'm not entirely convinced any of this is actually useful. Just
> because the transition table is large doesn't mean it's inefficient.

That's a fair point. However, I've often noticed base_yyparse() showing
up rather high on profiles --- higher than seemed plausible at t...
On Tue, Dec 18, 2012 at 10:44 PM, Robert Haas wrote:
> Yeah, that's why I don't know how to make it work. It feels like this
> is partly an artifact of the tool, though. I mean, suppose we haven't
> read anything yet. Then, the next token can't be an IDENT, so if we
> see an unreserved keyword, we...
On Tue, Dec 18, 2012 at 10:33:12AM +0100, Dimitri Fontaine wrote:
> Robert Haas writes:
>> And on the other hand, if you could get a clean split between the two
>> grammars, then regardless of exactly what the split was, it might seem
>> a win. But it seemed to me when I looked at this that you'd have to...
On Tue, Dec 18, 2012 at 5:24 PM, Peter Eisentraut wrote:
> On 12/18/12 5:10 PM, Robert Haas wrote:
>> I can't help but suspect that the way we handle keywords today is
>> monumentally inefficient. The unreserved_keyword products, et al,
>> just seem somehow badly wrong-headed. We take the trouble...
On 12/18/12 5:10 PM, Robert Haas wrote:
> I can't help but suspect that the way we handle keywords today is
> monumentally inefficient. The unreserved_keyword products, et al,
> just seem somehow badly wrong-headed. We take the trouble to
> distinguish all of those cases so that we can turn around...
On Tue, Dec 18, 2012 at 4:33 AM, Dimitri Fontaine wrote:
> Robert Haas writes:
>> And on the other hand, if you could get a clean split between the two
>> grammars, then regardless of exactly what the split was, it might seem
>> a win. But it seemed to me when I looked at this that you'd have to...
Robert Haas writes:
> And on the other hand, if you could get a clean split between the two
> grammars, then regardless of exactly what the split was, it might seem
> a win. But it seemed to me when I looked at this that you'd have to
> duplicate a lot of stuff and the small parser still wouldn't...
Robert Haas writes:
> I'm frankly kind of shocked at the revelation that the parser is
> already 14% of the backend. I knew it was big; I didn't realize it
> was THAT big.

Yeah, likewise. It makes me wonder whether we aren't past the size
of grammar that bison is a good choice for: considering...
On Sat, Dec 15, 2012 at 11:52 AM, Tom Lane wrote:
> "Kevin Grittner" writes:
>> Tom Lane wrote:
>>> the parser tables are basically number-of-tokens wide by
>>> number-of-states high. (In HEAD there are 433 tokens known to the
>>> grammar, all but 30 of which are keywords, and 4367 states.)
"Kevin Grittner" writes:
> Tom Lane wrote:
>> the parser tables are basically number-of-tokens wide by
>> number-of-states high. (In HEAD there are 433 tokens known to the
>> grammar, all but 30 of which are keywords, and 4367 states.)
>>
>> Splitting the grammar into multiple grammars is unlikel
Tom Lane wrote:
> the parser tables are basically number-of-tokens wide by
> number-of-states high. (In HEAD there are 433 tokens known to the
> grammar, all but 30 of which are keywords, and 4367 states.)
>
> Splitting the grammar into multiple grammars is unlikely to do
> much to improve this ---...
Tom Lane writes:
> Let me explain to you why there will never be a situation where we can
> consider new keywords to be zero-cost.

Thanks for taking the time to do this.

> Splitting the grammar into multiple grammars is unlikely to do much to
> improve this --- in fact, it could easily make matters...
Dimitri Fontaine writes:
> Now, what about splitting those grammars in order to freely add any new
> production rules with or without new keywords for administrative
> commands, without a blink of thought about the main parser grammar.

Let me explain to you why there will never be a situation where...
Robert Haas writes:
> ISTM that a PGC_SUSER GUC, as I proposed previously, would serve this
> need adequately, without the cost of more cruft in gram.y.

I can't help but think about the experiments you did some time ago about
splitting the grammar into different sub-grammars (for lack of a better...