cor3ntin added inline comments.

================
Comment at: clang/test/SemaCXX/overloaded-operator-decl.cpp:64
+class E {};
+void operator+(E, ...) {} // expected-error{{overloaded 'operator+' cannot be variadic}}
+void d() { E() + E(); }
----------------
aaron.ballman wrote:
> cor3ntin wrote:
> > aaron.ballman wrote:
> > > Fznamznon wrote:
> > > > aaron.ballman wrote:
> > > > > I think it might make sense to extend the test coverage for the other 
> > > > > operators you can overload, just to demonstrate we diagnose them all 
> > > > > consistently. WDYT?
> > > > Okay, while trying to add more test cases, I discovered that the following
> > > > ```
> > > > class E {};
> > > > bool operator<(const E& lhs, ...);
> > > > auto operator<=>(const E& lhs, ...);
> > > > 
> > > > void d() {
> > > >   E() < E();
> > > > }
> > > > ```
> > > > crashes even with the patch, since the code that searches for the best 
> > > > overload candidate doesn't consider the possibility that they are 
> > > > variadic.
> > > > The code around overloading is actually pretty inconsistent: in some 
> > > > places invalid candidates are considered, and in others they are not, so 
> > > > I spent some time not knowing what to do.
> > > > I'm now inclined to think we just shouldn't consider invalid candidates, 
> > > > as @shafik suggests. WDYT?
> > > > 
> > > Overload resolution doesn't need to produce a candidate that's viable to 
> > > call; C++ lets you resolve to the "best viable function" only to say "and 
> > > that one wasn't good enough either." e.g., 
> > > http://eel.is/c++draft/over#match.general-3
> > > 
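> > > For instance (illustrative names, not code from this patch), a deleted 
> > > function is a perfectly good candidate: overload resolution can select it 
> > > as the best viable function, and only then is the call rejected:
> > > ```
> > > // illustrative example, not from the patch under review
> > > void f(double) = delete;
> > > void f(int);
> > > 
> > > void g() {
> > >   f(1.0); // resolves to f(double) as the best viable function, then the
> > >           // call is ill-formed because the selected function is deleted
> > > }
> > > ```
> > > 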
> > > I've not yet spotted anything in http://eel.is/c++draft/over that says 
> > > invalid declarations should/should not be added to the initial candidate 
> > > set. I *think* the intention is that if name lookup can find the name, it 
> > > goes into the candidate set. Then that set is processed to remove 
> > > functions that are not viable (http://eel.is/c++draft/over.match.viable). 
> > > Then we find the best viable function from that set.
> > > 
> > > I think we should be keeping the function in the candidate set so long as 
> > > it matches the rules in http://eel.is/c++draft/over.match.viable, even if 
> > > the declaration is otherwise invalid. Otherwise, correcting an unrelated 
> > > issue might change overload resolution to find a completely different 
> > > function. e.g., in my example above, we'd select `void overloaded(int);` 
> > > as the best viable function, but when the user corrects the `float` 
> > > function, we'd change to call that instead. I think it's easier to 
> > > understand what's going on when picking the `float` overload to begin 
> > > with and saying "but we can't call that because it's busted".
> > > 
> > > CC @cor3ntin @hubert.reinterpretcast @rsmith for some extra opinions, as 
> > > I'm not certain if I'm interpreting the standard correctly or not.
> > I'm not sure how much the standardese matters here, as the declarations of 
> > the operators are ill-formed anyway.
> > But from a QOI perspective, I think keeping in the set everything the users 
> > might reasonably expect to be considered makes sense to me, as it *should* 
> > lead to better diagnostics.
> Thanks for the extra perspective! Yeah, I think that's what I've been 
> convincing myself of. Effectively:
> ```
> int overloaded(int);
> float overloaded(float) noexcept(error()); // error: invalid declaration
> 
> int main() {
>   (void)overloaded(1.0f); // #1
> }
> ```
> We're always going to get the invalid declaration diagnostic. The question is 
> whether we want a diagnostic at #1 that says "can't call this overload" or 
> whether we want #1 to have no diagnostic (because it picked the `int` 
> overload). From a purely diagnostic perspective, I think I can see arguments 
> either way. But if we change it slightly:
> ```
> template <typename Ty>
> constexpr int func() { return 0; }
> 
> template <>
> constexpr int func<int>() { return 1; }
> 
> template <>
> constexpr int func<float>() { return 2; }
> 
> int overloaded(int);
> float overloaded(float) noexcept(error()); // error: invalid declaration
> 
> static_assert(func<decltype(overloaded(1.0f))>() == 2); // #1
> ```
> we still get the invalid declaration diagnostic, but now we get a failing 
> static assertion because we picked the `int` overload rather than the `float` 
> overload. I think this is clearly worse behavior than if we issued an 
> additional diagnostic at #1 saying that the overload we picked (`float`) was 
> invalid.
> 
> https://godbolt.org/z/34aGMY43n
> 
> WDYT?
Agreed. If I write an overload, I presumably want the compiler to consider it, 
even if I botched it. And apparently, the issue is not that we don't do that, 
but rather that we don't always do that. We should strive to be consistent.

It's not critical, because even in the `static_assert` case the only thing a 
user can do is fix the declaration, at which point they would either get an 
ambiguity diagnostic or have the correct best match picked. But it's nicer to 
have more diagnostics when we can.
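
For what it's worth, a rough sketch of the broader test coverage suggested 
earlier might look something like the following. The class name, the operator 
list, and the exact diagnostic wording per operator are assumptions that just 
follow the pattern already in overloaded-operator-decl.cpp; the uses at the 
end are there so the overload-resolution path that crashed is exercised too, 
and may need their own expected diagnostics depending on the final behavior:
```
// sketch only; diagnostic texts assumed to follow the existing pattern
class F {};
void operator-(F, ...);           // expected-error{{overloaded 'operator-' cannot be variadic}}
bool operator<(const F &, ...);   // expected-error{{overloaded 'operator<' cannot be variadic}}
auto operator<=>(const F &, ...); // expected-error{{overloaded 'operator<=>' cannot be variadic}}
F &operator+=(F &, ...);          // expected-error{{overloaded 'operator+=' cannot be variadic}}

void uses(F a, F b) {
  a - b;  // these uses make overload resolution look at the variadic
  a < b;  // candidates, covering the code path that crashed
  a += b;
}
```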


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D156244/new/

https://reviews.llvm.org/D156244
