Re: [PATCH WIP] Use Levenshtein distance for various misspellings in C frontend v2

2015-09-17 Thread Richard Biener
On Wed, Sep 16, 2015 at 5:45 PM, Manuel López-Ibáñez
 wrote:
> On 16 September 2015 at 15:33, Richard Biener
>  wrote:
>> On Wed, Sep 16, 2015 at 3:22 PM, Michael Matz  wrote:
 if we suggest 'foo' instead of foz then we'll get a more confusing followup
 error if we actually use it.
>>>
>>> This particular case could be solved by ruling out candidates of the wrong
>>> kind (here, something that can be assigned to, vs. a function).  But it
>>> might actually be too early in parsing to say that there will be an
>>> assignment.  I don't think _this_ problem should block the patch.
>
> Indeed. The patch by David does not try to fix up the code; it merely
> suggests a possible candidate. The follow-up errors should be the same
> before and after. Such suggestions will never be 100% right: even if
> the suggestion makes the code compile and run, it may still be the
> wrong one. A wrong suggestion is far less serious than a wrong
> uninitialized or Warray-bounds warning, and we can live with those. Why
> does this need to be perfect from the very beginning?
>
> BTW, there is a PR for this: 
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=52277
>
>> I wonder if we can tentatively parse with the choice at hand, only allowing
>> (and even suggesting?) it if that works out.
>
> This would require queuing the error, fixing up the wrong name and
> continuing to parse. If there is another error, ignore that one and emit
> the original error without the suggestion. The problem here is that we do
> not know whether the additional error is actually caused by the fix-up we
> did or is a pre-existing error. It would be equally bad to emit errors
> caused by the fix-up as to emit just a single error for the typo. We would
> need to roll back the tentative parse and do a definitive parse anyway.
> This does not seem possible at the moment because the parsers maintain a
> lot of global state that is not easy to roll back. We cannot simply create
> a copy of the parser state and throw it away later to continue as if the
> tentative parse had not happened.
>
> I'm not even sure whether, in general, one can stop at the statement level
> or whether we would need to parse the whole function (or translation unit)
> to be able to tell if the suggestion is a valid candidate.

I was suggesting to only tentatively finish parsing the "current construct".
No idea how to best figure that out to the extent needed to make the tentative
parse useful.  Say we have "a + s.foz" and the field foz is not there
but foo is; if we continue parsing with 'foo' instead, but 'foo' has
a type that makes "a + s.foo" invalid, then we probably shouldn't suggest
it.  It _might_ be reasonably "easy" to implement that, but I'm not sure.
There might be a field named fz (with the same or a bigger Levenshtein
distance) with the correct type.  Of course it might be that I misspelled
's' and meant 'r' instead, which has a field foz of the correct type... (and
's' is available as well).

I agree that we don't have to solve all this in the first iteration.

Richard.

> Cheers,
>
> Manuel.


Re: [PATCH WIP] Use Levenshtein distance for various misspellings in C frontend v2

2015-09-17 Thread David Malcolm
On Thu, 2015-09-17 at 13:31 -0600, Jeff Law wrote:
> On 09/16/2015 02:34 AM, Richard Biener wrote:
> >
> > Btw, this looks quite expensive - I'm sure we want to limit the effort
> > here a bit.
> A limiter is reasonable, though as it's been pointed out this only fires 
> during error processing, so we probably have more leeway to take time 
> and see if we can do better error recovery.
> 
> FWIW, I've used this algorithm in totally unrelated projects and while 
> it seems expensive, it's worked out quite nicely.
> 
> >
> > So while the idea might be an improvement in selected cases, it can cause
> > confusion as well.  And if using the suggestion for further parsing it can
> > cause worse followup errors (unless we can limit such "fixup" use to the
> > cases where we can parse the result without errors).  Consider
> >
> > foo()
> > {
> >foz = 1;
> > }
> >
> > if we suggest 'foo' instead of foz then we'll get a more confusing followup
> > error if we actually use it.
> True.  This kind of problem is probably inherent in this kind of "I'm
> going to assume you meant..." error-recovery mechanism.
> 
> And just to be clear, even in a successful recovery scenario, we still 
> issue an error.  The error recovery is just meant to try and give the 
> user a hint what might have gone wrong and gracefully handle the case 
> where they just made a minor goof.  

(nods)

> Obviously the idea here is to cut 
> down on the number of iterations of the edit-compile cycle one has to do :-)

In my mind it's more about saving the user from having to locate the
field they really meant within the corresponding structure declaration
(either by grep, or by some cross-referencing tool).

A lot of the time I find myself wishing that the compiler had issued a
note saying "here's the declaration of the struct in question", which
would make it easy for me to go straight there in Emacs.

I wonder what proportion of our users use a cross-referencing tool or
have an IDE that can find this stuff for them, vs those that rely on
grep, and if that should mean something for our diagnostics (I tend to
just rely on grep).

This is rather tangential to this RFE, of course.

Dave



Re: [PATCH WIP] Use Levenshtein distance for various misspellings in C frontend v2

2015-09-17 Thread Jeff Law

On 09/16/2015 02:34 AM, Richard Biener wrote:


Btw, this looks quite expensive - I'm sure we want to limit the effort
here a bit.
A limiter is reasonable, though as it's been pointed out this only fires 
during error processing, so we probably have more leeway to take time 
and see if we can do better error recovery.


FWIW, I've used this algorithm in totally unrelated projects and while 
it seems expensive, it's worked out quite nicely.




So while the idea might be an improvement in selected cases, it can cause
confusion as well.  And if using the suggestion for further parsing it can
cause worse followup errors (unless we can limit such "fixup" use to the
cases where we can parse the result without errors).  Consider

foo()
{
   foz = 1;
}

if we suggest 'foo' instead of foz then we'll get a more confusing followup
error if we actually use it.
True.  This kind of problem is probably inherent in this kind of "I'm 
going to assume you meant..." error-recovery mechanism.


And just to be clear, even in a successful recovery scenario, we still 
issue an error.  The error recovery is just meant to try and give the 
user a hint what might have gone wrong and gracefully handle the case 
where they just made a minor goof.  Obviously the idea here is to cut 
down on the number of iterations of the edit-compile cycle one has to do :-)



Jeff


Re: [PATCH WIP] Use Levenshtein distance for various misspellings in C frontend v2

2015-09-17 Thread Manuel López-Ibáñez
On 17 September 2015 at 21:57, David Malcolm  wrote:
> In my mind it's more about saving the user from having to locate the
> field they really meant within the corresponding structure declaration
> (either by grep, or by some cross-referencing tool).

I think it is more than that. After a long coding session, one can
start to wonder why the compiler cannot find type_of_unknwon_predicate
or firstColourInColumn (ah! it was type_of_unknown_predicate and
firstColorInColumn!).

Or when we extend this to options (PR67613), why I get

error: unrecognized command line option '-Weffic++'

when I just read it in the manual!

Cheers,

Manuel.


Re: [PATCH WIP] Use Levenshtein distance for various misspellings in C frontend v2

2015-09-16 Thread Richard Biener
On Tue, Sep 15, 2015 at 5:38 PM, David Malcolm  wrote:
> Updated patch attached, which is now independent of the rest of the
> patch kit; see below.  Various other comments inline.
>
> On Fri, 2015-09-11 at 17:30 +0200, Manuel López-Ibáñez wrote:
> On 10/09/15 22:28, David Malcolm wrote:
>> > There are a couple of FIXMEs here:
>> > * where to call levenshtein_distance_unit_tests
>>
>> Should this be part of make check? Perhaps a small program that is compiled
>> and linked with spellcheck.c? This would be possible if spellcheck.c did not
>> depend on tree.h or tm.h, which I doubt it needs to.
>
> Ideally I'd like to put them into a unittest plugin I've been working on:
>  https://gcc.gnu.org/ml/gcc-patches/2015-06/msg00765.html
> In the meantime, they only get run in an ENABLE_CHECKING build.
>
>> > * should we attempt error-recovery in c-typeck.c:build_component_ref
>>
>> I would say yes, but why not leave this discussion to a later patch? The
>> current one seems useful enough.
>
> (nods)
>
>> > +
>> > +/* Look for the closest match for NAME within the currently valid
>> > +   scopes.
>> > +
>> > +   This finds the identifier with the lowest Levenshtein distance to
>> > +   NAME.  If there are multiple candidates with equal minimal distance,
>> > +   the first one found is returned.  Scopes are searched from innermost
>> > +   outwards, and within a scope in reverse order of declaration, thus
>> > +   benefiting candidates "near" to the current scope.  */
>> > +
>> > +tree
>> > +lookup_name_fuzzy (tree name)
>> > +{
>> > +  gcc_assert (TREE_CODE (name) == IDENTIFIER_NODE);
>> > +
>> > +  c_binding *best_binding = NULL;
>> > +  int best_distance = INT_MAX;
>> > +
>> > +  for (c_scope *scope = current_scope; scope; scope = scope->outer)
>> > +    for (c_binding *binding = scope->bindings; binding; binding = binding->prev)
>> > +  {
>> > +   if (!binding->id)
>> > + continue;
>> > +   int dist = levenshtein_distance (name, binding->id);
>> > +   if (dist < best_distance)

Btw, this looks quite expensive - I'm sure we want to limit the effort
here a bit.
Also, we should not allow arbitrary "best" distances, and we should not do
this for very simple identifiers such as 'i'.  Say,

foo()
{
  int i;
  for (i =0; i<10; ++i)
   for (j = 0; j < 12; ++j)
;
}

I don't want us to suggest using 'i' instead of 'j' (a good hint is that I
used 'j' multiple times).
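
One way to bound the effort would be to compute the distance with a
caller-supplied cap and give up as soon as the cap cannot be met any more.
Purely as an illustration (the helper name and the cap handling below are
mine, not the patch's):

#include <string.h>
#include <algorithm>
#include <vector>

/* Illustrative only: Levenshtein distance between S and T, but bail out
   once the result is guaranteed to exceed CAP, so hopeless candidates
   are rejected cheaply.  */
static unsigned int
capped_edit_distance (const char *s, const char *t, unsigned int cap)
{
  size_t m = strlen (s), n = strlen (t);
  size_t len_diff = m > n ? m - n : n - m;
  if (len_diff > cap)
    return cap + 1;  /* The length difference alone already exceeds CAP.  */

  std::vector<unsigned int> prev (n + 1), cur (n + 1);
  for (size_t j = 0; j <= n; j++)
    prev[j] = j;

  for (size_t i = 1; i <= m; i++)
    {
      cur[0] = i;
      unsigned int row_min = cur[0];
      for (size_t j = 1; j <= n; j++)
        {
          unsigned int subst = prev[j - 1] + (s[i - 1] == t[j - 1] ? 0 : 1);
          cur[j] = std::min (subst, std::min (prev[j] + 1, cur[j - 1] + 1));
          row_min = std::min (row_min, cur[j]);
        }
      /* Every later cell derives from this row by steps that never decrease
         the value, so once the row minimum exceeds CAP the final answer
         must exceed it too.  */
      if (row_min > cap)
        return cap + 1;
      prev.swap (cur);
    }
  return prev[n];
}

With a small cap most bindings would be rejected after a row or two, and
very short identifiers such as 'i' could simply be skipped before computing
any distance at all.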

So while the idea might be an improvement in selected cases, it can cause
confusion as well.  And if using the suggestion for further parsing it can
cause worse followup errors (unless we can limit such "fixup" use to the
cases where we can parse the result without errors).  Consider

foo()
{
  foz = 1;
}

if we suggest 'foo' instead of foz then we'll get a more confusing followup
error if we actually use it.

But maybe you already handle all these cases (didn't look at the patch,
just saw the above expensive loop plus dropped some obvious concerns).

Richard.

>> I guess 'dist' cannot be negative. Can it be zero? If not, wouldn't it be
>> appropriate to exit as soon as it becomes 1?
>
> It can't be negative, so I've converted it to unsigned int, and introduced an
> "edit_distance_t" typedef for it.
>
> It would be appropriate to exit as soon as we reach 1 if we agree
> that lookup_name_fuzzy isn't intended to find exact matches (since
> otherwise we might fail to return an exact match if we see a
> distance 1 match first).
>
> I haven't implemented that early bailout in this iteration of the
> patch; should I?
>
>> Is this code discriminating between types and names? That is, what happens 
>> for:
>>
>> typedef int ins;
>>
>> int foo(void)
>> {
>> int inr;
>> inp x;
>> }
>
> Thanks.  I've fixed that.
>
>> > +/* Recursively append candidate IDENTIFIER_NODEs to CANDIDATES.  */
>> > +
>> > +static void
>> > +lookup_field_fuzzy_find_candidates (tree type, tree component,
>> > +   vec<tree> *candidates)
>> > +{
>> > +  tree field;
>> > +  for (field = TYPE_FIELDS (type); field; field = DECL_CHAIN (field))
>> > +{
>> > +  if (DECL_NAME (field) == NULL_TREE
>> > + && (TREE_CODE (TREE_TYPE (field)) == RECORD_TYPE
>> > + || TREE_CODE (TREE_TYPE (field)) == UNION_TYPE))
>> > +   {
>> > + lookup_field_fuzzy_find_candidates (TREE_TYPE (field),
>> > + component,
>> > + candidates);
>> > +   }
>> > +
>> > +  if (DECL_NAME (field))
>> > +   candidates->safe_push (field);
>> > +}
>> > +}
>>
>> This is appending inner-most, isn't it? Thus, given:
>
> Yes.
>
>> struct s{
>>  struct j { int aa; } kk;
>>  int aa;
>> };
>>
>> void foo(struct s x)
>> {
>>  x.ab;
>> }
>>
>> it will find s::j::aa before s::aa, no?
>
> AIUI, it doesn't look inside the "kk", only for anonymous structs.
>
> I added a test for this.
>
>> >   tree
>> > -build_component_ref (location_t loc, tree datum, tree component)
>> > 

Re: [PATCH WIP] Use Levenshtein distance for various misspellings in C frontend v2

2015-09-16 Thread Richard Biener
On Wed, Sep 16, 2015 at 3:22 PM, Michael Matz  wrote:
> Hi,
>
> On Wed, 16 Sep 2015, Richard Biener wrote:
>
>> Btw, this looks quite expensive - I'm sure we want to limit the effort
>> here a bit.
>
> I'm not so sure.  It's only used for printing an error, so walking all
> available decls is expensive but IMHO not too much so.

Well, as we're not stopping at the very first error, creating an artificial
testcase that hits this quite badly should be possible.  Maybe only try this
for the first error and not for follow-ups?

>> I don't want us to suggest using 'i' instead of 'j' (a good hint is that I
>> used 'j' multiple times).
>
> Well, there will always be cases where the suggestion is actually wrong.
> How do you propose to deal with this?  The above case could be solved by
> not giving hints when the Levenshtein distance is as long as the string
> length (which makes sense, because then there's no relation at all between
> the string and the suggestion).
>
>> So while the idea might be an improvement in selected cases, it can cause
>> confusion as well.  And if using the suggestion for further parsing it
>> can cause worse followup errors (unless we can limit such "fixup" use to
>> the cases where we can parse the result without errors).  Consider
>>
>> foo()
>> {
>>   foz = 1;
>> }
>>
>> if we suggest 'foo' instead of foz then we'll get a more confusing followup
>> error if we actually use it.
>
> This particular case could be solved by ruling out candidates of the wrong
> kind (here, something that can be assigned to, vs. a function).  But it
> might actually be too early in parsing to say that there will be an
> assignment.  I don't think _this_ problem should block the patch.

I wonder if we can tentatively parse with the choice at hand, only allowing
(and even suggesting?) it if that works out.

Richard.

>
> Ciao,
> Michael.


Re: [PATCH WIP] Use Levenshtein distance for various misspellings in C frontend v2

2015-09-16 Thread Michael Matz
Hi,

On Wed, 16 Sep 2015, Richard Biener wrote:

> Btw, this looks quite expensive - I'm sure we want to limit the effort
> here a bit.

I'm not so sure.  It's only used for printing an error, so walking all 
available decls is expensive but IMHO not too much so.

> I don't want us to suggest using 'i' instead of 'j' (a good hint is that I 
> used 'j' multiple times).

Well, there will always be cases where the suggestion is actually wrong.  
How do you propose to deal with this?  The above case could be solved by 
not giving hints when the Levenshtein distance is as long as the string 
length (which makes sense, because then there's no relation at all between 
the string and the suggestion).
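
Expressed as code, that rule might look like the sketch below (the helper
name is invented here, and the patch may well pick a different threshold):

#include <string.h>

/* Sketch of the cutoff described above: if the edit distance is at least
   as long as what the user typed, the candidate shares essentially nothing
   with it, so no hint is offered.  */
static bool
hint_worth_giving_p (const char *typed, unsigned int dist)
{
  return dist < strlen (typed);
}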

> So while the idea might be an improvement in selected cases, it can cause 
> confusion as well.  And if using the suggestion for further parsing it 
> can cause worse followup errors (unless we can limit such "fixup" use to 
> the cases where we can parse the result without errors).  Consider
> 
> foo()
> {
>   foz = 1;
> }
> 
> if we suggest 'foo' instead of foz then we'll get a more confusing followup
> error if we actually use it.

This particular case could be solved by ruling out candidates of the wrong 
kind (here, something that can be assigned to, vs. a function).  But it 
might actually be too early in parsing to say that there will be an 
assignment.  I don't think _this_ problem should block the patch.


Ciao,
Michael.


Re: [PATCH WIP] Use Levenshtein distance for various misspellings in C frontend v2

2015-09-16 Thread Manuel López-Ibáñez
On 16 September 2015 at 15:33, Richard Biener
 wrote:
> On Wed, Sep 16, 2015 at 3:22 PM, Michael Matz  wrote:
>>> if we suggest 'foo' instead of foz then we'll get a more confusing followup
>>> error if we actually use it.
>>
>> This particular case could be solved by ruling out candidates of the wrong
>> kind (here, something that can be assigned to, vs. a function).  But it
>> might actually be too early in parsing to say that there will be an
>> assignment.  I don't think _this_ problem should block the patch.

Indeed. The patch by David does not try to fix up the code; it merely
suggests a possible candidate. The follow-up errors should be the same
before and after. Such suggestions will never be 100% right: even if
the suggestion makes the code compile and run, it may still be the
wrong one. A wrong suggestion is far less serious than a wrong
uninitialized or Warray-bounds warning, and we can live with those. Why
does this need to be perfect from the very beginning?

BTW, there is a PR for this: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=52277

> I wonder if we can tentatively parse with the choice at hand, only allowing
> (and even suggesting?) it if that works out.

This would require queuing the error, fixing up the wrong name and
continuing to parse. If there is another error, ignore that one and emit
the original error without the suggestion. The problem here is that we do
not know whether the additional error is actually caused by the fix-up we
did or is a pre-existing error. It would be equally bad to emit errors
caused by the fix-up as to emit just a single error for the typo. We would
need to roll back the tentative parse and do a definitive parse anyway.
This does not seem possible at the moment because the parsers maintain a
lot of global state that is not easy to roll back. We cannot simply create
a copy of the parser state and throw it away later to continue as if the
tentative parse had not happened.

I'm not even sure whether, in general, one can stop at the statement level
or whether we would need to parse the whole function (or translation unit)
to be able to tell if the suggestion is a valid candidate.

Cheers,

Manuel.


Re: [PATCH WIP] Use Levenshtein distance for various misspellings in C frontend v2

2015-09-15 Thread Manuel López-Ibáñez
On 15 September 2015 at 17:38, David Malcolm  wrote:
> It would be appropriate to exit as soon as we reach 1 if we agree
> that lookup_name_fuzzy isn't intended to find exact matches (since
> otherwise we might fail to return an exact match if we see a
> distance 1 match first).
>
> I haven't implemented that early bailout in this iteration of the
> patch; should I?

Everything I say is a mere suggestion since I cannot approve patches,
so I would say: "whatever the approvers say!" :-)

Nevertheless, how would an exact match play out?

unknown type name 'inp'; did you mean 'inp'?


> Remaining work:
>   * the FIXME about where to call levenshtein_distance_unit_tests;
> there's an argument that this could be moved to libiberty (is C++
> allowed in libiberty?); I'd prefer to get the unittest idea from
>  https://gcc.gnu.org/ml/gcc-patches/2015-06/msg00765.html
> into trunk, and then move it there.  Right now it's all
> gcc_assert, so it optimizes away in a production build.

That would be gcc_checking_assert, no? gcc_assert() should still work
in release mode, AFAIU.

>   * try existing testcases as noted by Manu above

I think the most useful part of checking those is that we have really
wacky testcases and it may show cases where things go horribly wrong.
Plus, if the suggestion is perfect, then you have another testcase for
free. This is what I was doing with the Wformat precise locations.

> It also strikes me that sometimes a "misspelling" is a missing
> header file, and that the most helpful thing to do might be to
> suggest including that header file.  For instance given:
>   $ cat /tmp/foo.c
>   int64_t i;
>
>   $ ./xgcc -B. /tmp/foo.c
>   /tmp/foo.c:1:1: error: unknown type name ‘int64_t’
>   int64_t i;
>   ^
> (where the suggestion of "int" is suppressed due to the distance
> being too long) it might be helpful to print:
>   /tmp/foo.c:1:1: error: unknown type name 'int64_t'; did you mean to include 
> '<stdint.h>'?
>   int64_t i;
>   ^
> That does seem like a separate enhancement, though.

We already suggest header files for built-in functions:
https://gcc.gnu.org/PR59717
Doing the same for "standard" types would not be a stretch, but yes,
it is a separate thing.
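
If someone picks that up later, the data side could be as simple as a table
mapping well-known type names to the header that declares them.  The entries
below are only a hypothetical illustration, not something in this patch:

struct missing_header_hint
{
  const char *type_name;
  const char *header;
};

/* Hypothetical sample entries; a real table would be larger (or generated).  */
static const missing_header_hint std_type_headers[] =
{
  { "int64_t",  "<stdint.h>"  },
  { "uint32_t", "<stdint.h>"  },
  { "size_t",   "<stddef.h>"  },
  { "FILE",     "<stdio.h>"   },
  { "bool",     "<stdbool.h>" },
};

The diagnostic code would consult such a table before falling back to the
Levenshtein-based suggestion.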


> diff --git a/gcc/spellcheck.c b/gcc/spellcheck.c
> new file mode 100644
> index 000..c407aa0
> --- /dev/null
> +++ b/gcc/spellcheck.c
> +#include "config.h"
> +#include "system.h"
> +#include "coretypes.h"
> +#include "tm.h"
> +#include "tree.h"
> +#include "spellcheck.h"

Why tm.h?

Great work!

Manuel.


[PATCH WIP] Use Levenshtein distance for various misspellings in C frontend v2

2015-09-15 Thread David Malcolm
Updated patch attached, which is now independent of the rest of the
patch kit; see below.  Various other comments inline.

On Fri, 2015-09-11 at 17:30 +0200, Manuel López-Ibáñez wrote:
On 10/09/15 22:28, David Malcolm wrote:
> > There are a couple of FIXMEs here:
> > * where to call levenshtein_distance_unit_tests
>
> Should this be part of make check? Perhaps a small program that is compiled
> and linked with spellcheck.c? This would be possible if spellcheck.c did not
> depend on tree.h or tm.h, which I doubt it needs to.

Ideally I'd like to put them into a unittest plugin I've been working on:
 https://gcc.gnu.org/ml/gcc-patches/2015-06/msg00765.html
In the meantime, they only get run in an ENABLE_CHECKING build.

> > * should we attempt error-recovery in c-typeck.c:build_component_ref
>
> I would say yes, but why not leave this discussion to a later patch? The
> current one seems useful enough.

(nods)

> > +
> > +/* Look for the closest match for NAME within the currently valid
> > +   scopes.
> > +
> > +   This finds the identifier with the lowest Levenshtein distance to
> > +   NAME.  If there are multiple candidates with equal minimal distance,
> > +   the first one found is returned.  Scopes are searched from innermost
> > +   outwards, and within a scope in reverse order of declaration, thus
> > +   benefiting candidates "near" to the current scope.  */
> > +
> > +tree
> > +lookup_name_fuzzy (tree name)
> > +{
> > +  gcc_assert (TREE_CODE (name) == IDENTIFIER_NODE);
> > +
> > +  c_binding *best_binding = NULL;
> > +  int best_distance = INT_MAX;
> > +
> > +  for (c_scope *scope = current_scope; scope; scope = scope->outer)
> > +    for (c_binding *binding = scope->bindings; binding; binding = binding->prev)
> > +  {
> > +   if (!binding->id)
> > + continue;
> > +   int dist = levenshtein_distance (name, binding->id);
> > +   if (dist < best_distance)
>
> I guess 'dist' cannot be negative. Can it be zero? If not, wouldn't it be
> appropriate to exit as soon as it becomes 1?

It can't be negative, so I've converted it to unsigned int, and introduced an
"edit_distance_t" typedef for it.

It would be appropriate to exit as soon as we reach 1 if we agree
that lookup_name_fuzzy isn't intended to find exact matches (since
otherwise we might fail to return an exact match if we see a
distance 1 match first).

I haven't implemented that early bailout in this iteration of the
patch; should I?
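
For what it's worth, here is a self-contained sketch of that bailout,
assuming some string-based levenshtein_distance helper (the exact signature
in the patch may differ); the function below and its name are only for
illustration, and it excludes exact matches per the premise above:

#include <limits.h>
#include <stddef.h>

/* Assumed helper; the signature in the actual patch may differ.  */
extern unsigned int levenshtein_distance (const char *s, const char *t);

/* Return the closest non-identical candidate to NAME, or NULL.  Stops as
   soon as a distance-1 candidate is found, since no non-identical string
   can be closer than that.  */
static const char *
closest_nonexact_match (const char *name,
                        const char *const *candidates, size_t n_candidates)
{
  const char *best = NULL;
  unsigned int best_dist = UINT_MAX;
  for (size_t i = 0; i < n_candidates; i++)
    {
      unsigned int dist = levenshtein_distance (name, candidates[i]);
      if (dist == 0)
        continue;  /* Exact match: nothing worth suggesting.  */
      if (dist < best_dist)
        {
          best = candidates[i];
          best_dist = dist;
          if (dist == 1)
            break;  /* No non-identical name can beat distance 1.  */
        }
    }
  return best;
}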

> Is this code discriminating between types and names? That is, what happens 
> for:
>
> typedef int ins;
>
> int foo(void)
> {
> int inr;
> inp x;
> }

Thanks.  I've fixed that.

> > +/* Recursively append candidate IDENTIFIER_NODEs to CANDIDATES.  */
> > +
> > +static void
> > +lookup_field_fuzzy_find_candidates (tree type, tree component,
> > +   vec<tree> *candidates)
> > +{
> > +  tree field;
> > +  for (field = TYPE_FIELDS (type); field; field = DECL_CHAIN (field))
> > +{
> > +  if (DECL_NAME (field) == NULL_TREE
> > + && (TREE_CODE (TREE_TYPE (field)) == RECORD_TYPE
> > + || TREE_CODE (TREE_TYPE (field)) == UNION_TYPE))
> > +   {
> > + lookup_field_fuzzy_find_candidates (TREE_TYPE (field),
> > + component,
> > + candidates);
> > +   }
> > +
> > +  if (DECL_NAME (field))
> > +   candidates->safe_push (field);
> > +}
> > +}
>
> This is appending inner-most, isn't it? Thus, given:

Yes.

> struct s{
>  struct j { int aa; } kk;
>  int aa;
> };
>
> void foo(struct s x)
> {
>  x.ab;
> }
>
> it will find s::j::aa before s::aa, no?

AIUI, it doesn't look inside the "kk", only for anonymous structs.

I added a test for this.

> >   tree
> > -build_component_ref (location_t loc, tree datum, tree component)
> > +build_component_ref (location_t loc, tree datum, tree component,
> > +source_range *ident_range)
> >   {
> > tree type = TREE_TYPE (datum);
> > enum tree_code code = TREE_CODE (type);
> > @@ -2294,7 +2356,31 @@ build_component_ref (location_t loc, tree datum, tree component)
> >
> > if (!field)
> > {
> > - error_at (loc, "%qT has no member named %qE", type, component);
> > + if (!ident_range)
> > +   {
> > + error_at (loc, "%qT has no member named %qE",
> > +   type, component);
> > + return error_mark_node;
> > +   }
> > + gcc_rich_location richloc (*ident_range);
> > + if (TREE_CODE (datum) == INDIRECT_REF)
> > +   richloc.add_expr (TREE_OPERAND (datum, 0));
> > + else
> > +   richloc.add_expr (datum);
> > + field = lookup_field_fuzzy (type, component);
> > + if (field)
> > +   {
> > + error_at_rich_loc
> > +   (&richloc,
> > +"%qT has no member named %qE; did you mean %qE?",
> > +type, component, field);
> > + /* FIXME: error recovery: should we try to keep going,
> > +