On Thu, Oct 20, 2022 at 5:36 PM Ian Lance Taylor <i...@golang.org> wrote:

> On Thu, Oct 20, 2022 at 2:14 PM 'Daniel Lepage' via golang-nuts
> <golang-nuts@googlegroups.com> wrote:
> >
> > I'm not sure if this is the right forum to ask this question - please
> let me know if there's somewhere better!
> >
> > I'm wondering why most of the Go 2 error handling proposals I've seen
> are intent on not letting a function continue after detecting an error -
> this seems weird to me, since we already have panic() as a way for a
> function to signal an error it doesn't expect the caller to recover from.
> If a function returns an error instead of panicking, doesn't that mean the
> author of the function believes that it is both possible and reasonable for
> a caller to recover from this error and keep going?
> >
> > I looked through a bunch of different error proposals among the PRs and
> linked from various summary pages, but I can't find if anyone has proposed
> something simple like
> >
> > var x0 float
> > try {
> >    x0 = check DoSomeMath(check FetchSomething(), check
> ComputeSomething())
> > } handle err {
> >    log.Info("Unable to estimate initial approximation, defaulting to
> 1...")
> >    x0 = 1
> > }
> > // code continues and does things with x
> >
> > This makes it easy to see what error handling will happen at any point
> within the function, keeps the control flow linear (so that, unlike
> defer()-based recovery, you don't have to skip ahead in the function to get
> context before the handler makes sense - the context comes first, followed
> by the handling code), and allows code to recover from errors without
> aborting an entire function.
> >
> > As a bonus it'll be easier for new Go programmers to learn because it's
> structured the same way as try/catch or try/except blocks in numerous other
> languages.
> >
> > This seems like an obvious enough suggestion that someone must have made
> it already, but I haven't been able to find that among all the other error
> handling proposals and discussions.
> >
> > Has this in fact been proposed already, and if so where can I find
> discussion on why it was rejected?
>
> I can't recall any error handling proposals that are quite like that,
> though you may want to look at the meta-issue
> https://go.dev/issue/40432.
>
> Most error handling proposals are trying to reduce the boilerplate of
> error checking.  If I rewrite your example with Go as it exists today
> I come up with this:
>
> var x0 float
> x0, err = check DoSomeMath(check FetchSomething(), check
> ComputeSomething())
> if err != nil {
>    log.Info("Unable to estimate initial approximation, defaulting to 1...")
>    x0 = 1
> }
> // code continues and does things with x
>
> That isn't noticeably longer than what you wrote.  So your suggestion
> doesn't appear to reduce boilerplate much.
>

Sorry, I should have been clearer - what I am proposing is both try/handle
blocks *and* a `check` expression that triggers the handler. The line
`check DoSomeMath(check FetchSomething(), check ComputeSomething())` cannot
(as far as I know) be written in existing Go; in modern Go my example would
have to look something like:

var x0 float64
var err error
a, err := FetchSomething()
if err != nil {
   log.Info("Unable to estimate initial approximation, defaulting to 1...")
   x0 = 1
} else {
   b, err := ComputeSomething()
   if err != nil {
      log.Info("Unable to estimate initial approximation, defaulting to 1...")
      x0 = 1
   } else {
      x0, err = DoSomeMath(a, b)
      if err != nil {
         log.Info("Unable to estimate initial approximation, defaulting to 1...")
         x0 = 1
      }
   }
}
// code continues and does things with x0

But I also don't want to get bogged down in this specific example - the
above could be somewhat streamlined with an intermediate ComputeX0
function, for instance. The more general point I'm trying to make is that
there definitely are cases where a function wants to continue after
handling one or more errors, and I am perplexed as to why every proposal
I've seen seems designed to prevent this.


> Specifically, most error handling proposals are focused on reducing
> the boilerplate in
>
> if err != nil {
>     return 0, fmt.Errorf("Oh noes: %v", err)
> }
>

TBH I'm less concerned with the boilerplate in general (though I wouldn't
mind reducing it), and more concerned about the specific problem of using
function return values within an expression - I think there are many cases
where it's useful to be able to write code of the form `fn1(fn2())`, but if
fn2 could produce an error then you have to fall back to

tmp, err := fn2()
if err != nil {
    return fmt.Errorf(...)
}
fn1(tmp)

which means you're now using temporary variables and obscuring what's
actually happening by sticking a bunch of error handling in between the
various calls.
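Incidentally, the closest approximation of this chaining that I know of in
current Go is a generic panic-based helper plus a deferred handler. This is
just a sketch to illustrate the mechanics, not something I'm advocating
(the names check and handle here are illustrative, and fn1/fn2 are made-up
placeholders):

```go
package main

import (
	"errors"
	"fmt"
)

// check unwraps a (value, error) pair, panicking with the error so a
// deferred handler can recover it. This emulates a `check` expression.
func check[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

// handle recovers an error panic raised by check and passes it to fn.
// It must be called directly by defer for recover() to work.
func handle(fn func(error)) {
	if r := recover(); r != nil {
		if err, ok := r.(error); ok {
			fn(err)
			return
		}
		panic(r) // not one of ours; re-raise
	}
}

func fn2() (int, error) { return 0, errors.New("boom") }
func fn1(x int) int     { return x + 1 }

func run() (result int, err error) {
	defer handle(func(e error) { err = fmt.Errorf("wrapped: %w", e) })
	// The chained call the email is talking about: no temporaries,
	// no inline error plumbing.
	return fn1(check(fn2())), nil
}

func main() {
	_, err := run()
	fmt.Println(err)
}
```

It works, but it abuses panic for control flow and the handler is hidden at
the top of the function - which is exactly the kind of non-linear reading
order an explicit try/handle block would avoid.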

The premise that being able to chain functions like this is useful is, I
think, validated by the existence of various testing code that passes a
testing.TB around so that the helpers can call t.Fatal instead of returning
errors. As a real-world example, consider the open-source Ondatra project (
https://github.com/openconfig/ondatra), which has numerous functions that
use the "pass in a TB and Fatal on errors" pattern. You can see this API in
use in, e.g., this test
<https://github.com/openconfig/featureprofiles/blob/6a8da17dd327bf17e928aa90853d75bcbc9d2362/feature/bgp/policybase/ate_tests/route_installation_test/route_installation_test.go#L122>
from the OpenConfig featureprofiles project, which contains the snippet:

func configureDUT(t *testing.T, dut *ondatra.DUTDevice) {
    dc := dut.Config()
    i1 := dutSrc.NewInterface(dut.Port(t, "port1").Name())
    dc.Interface(i1.GetName()).Replace(t, i1)
    i2 := dutDst.NewInterface(dut.Port(t, "port2").Name())
    dc.Interface(i2.GetName()).Replace(t, i2)
}

Both the Port method and the Replace method above can fail; if they do,
they call t.Fatal instead of returning an error. If they returned errors
instead, the code above would be significantly longer and harder to read,
something like:

func configureDUT(dut *ondatra.DUTDevice) error {
   dc := dut.Config()
   port1, err := dut.Port("port1")
   if err != nil {
      return err
   }
   i1 := dutSrc.NewInterface(port1.Name())
   if _, err := dc.Interface(i1.GetName()).Replace(i1); err != nil {
      return err
   }
   port2, err := dut.Port("port2")
   if err != nil {
      return err
   }
   i2 := dutDst.NewInterface(port2.Name())
   if _, err := dc.Interface(i2.GetName()).Replace(i2); err != nil {
      return err
   }
   return nil
}

And this is just a single function from a much longer test, which is a
single test out of dozens within the project, and that's just within this
particular project which is definitely not the only project using Ondatra.

So it makes sense that the Ondatra authors opted to pass a TB everywhere,
but it also means A) that none of the tooling they built can ever be used
for anything BUT tests, B) that any test using these helpers MUST abort
completely if an error happens, and C) that it's not obvious just from
looking at the test code that every call to Replace or Port is actually an
assertion helper that will kill your test if something goes wrong. I would
argue that they shouldn't have had to make this tradeoff - it should be
possible in Go to achieve the streamlined style without assertion helpers.

The point of try/check/handle would be that you could rewrite the above as:

func configureDUT(dut *ondatra.DUTDevice) error {
  try {
    dc := dut.Config()
    i1 := dutSrc.NewInterface((check dut.Port("port1")).Name())
    check dc.Interface(i1.GetName()).Replace(i1)
    i2 := dutDst.NewInterface((check dut.Port("port2")).Name())
    check dc.Interface(i2.GetName()).Replace(i2)
  } handle err {
    return fmt.Errorf("configuring DUT: %w", err)
  }
}

The actual logic is just as streamlined as it is right now, but it achieves
this without any hidden assertions or dependency on testing.TB, and now it
allows the caller to decide whether the error is fatal for the test or not.

The biggest differences between this and e.g. the original Go 2 draft
proposal are 1) it can be used to consolidate error boilerplate regardless
of whether or not the boilerplate returns (and thus terminates the
function), and 2) it limits "check" to within explicit "try" blocks, such
that every function must still have explicit error handling for any errors
that arise in it.

Note that point 2 could be changed - if "check" expressions were allowed
outside of try/handle blocks, essentially by giving every function an
implicit "try { <func body> } handle err { return ..., err }", then this
would eliminate more boilerplate, but at the cost of allowing functions to
have entirely implicit error handling; whether to do that should be a
separate discussion beyond the scope of this proposal. This
proposal is solely about allowing the error handling of multiple calls to
be consolidated into a single handler block, reducing duplicate boilerplate
and allowing natural use of return values even when functions also return
errors.
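In terms of semantics, one way to pin down what a try/handle block means is
as sugar for an immediately-invoked closure feeding a single handler. This
is only a desugaring sketch with placeholder functions (and it glosses over
`return`/`break` inside the try block, which in the real proposal would act
on the enclosing function, not the closure):

```go
package main

import (
	"errors"
	"fmt"
)

// Placeholder fallible functions; FetchSomething fails here so the
// handler path is exercised.
func FetchSomething() (int, error)   { return 0, errors.New("network down") }
func ComputeSomething() (int, error) { return 3, nil }

func main() {
	x0 := 0
	// try { x0 = ... } handle err { ... } would desugar to roughly:
	if err := func() error {
		a, err := FetchSomething() // each `check` becomes a return-on-error
		if err != nil {
			return err
		}
		b, err := ComputeSomething()
		if err != nil {
			return err
		}
		x0 = a + b
		return nil
	}(); err != nil {
		// handler block: runs once, then control flow continues below
		fmt.Println("Unable to estimate initial approximation, defaulting to 1...")
		x0 = 1
	}
	fmt.Println("x0 =", x0) // code continues and does things with x0
}
```

The key property is visible in the desugaring: the handler is ordinary
straight-line code, so execution falls through to the statements after it
rather than being forced to abort the whole function.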

It sounds like this hasn't been proposed before, so I can go ahead and
write up a formal proposal for this.

Thanks,
Dan
