On Tue, Dec 16, 2008 at 11:15 AM, Lex Spoon <sp...@google.com> wrote:
> My concern is that there were high hopes that the vast majority of
> method calls in a program could actually be known-synchronous.  The
> problem is that for any operation that might happen asynchronously,
> the developer needs to deal with both the lag before it happens and
> the possibility it will fail.  Doesn't this patch make these issues
> hard to address?  If you do a.b().c().d(), it is initially compact,
> but how do you keep it compact once you add error handling and lag
> handling?  You seem to end up needing a callback for b(), c(), and d()
> anyway.


Okay, after talking with you on the phone I see I completely
misunderstood the idea.  Normally, there would not be chaining of this
kind, and normally, people would be very careful about what methods
they mark as deferrable.  Most frequently, these are fire-and-forget
methods, such as "show tab 3".  They wouldn't be making
hundreds of these proxies, but a dozen or two, and they'll know that
they're getting a round-trip penalty for each one.  I actually like
the coding pattern you are trying to support, now that I understand
it.
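
Just to restate the pattern in code, to make sure I'm picturing it
right (the names below are invented, not the patch's actual API):

  // A deferrable, fire-and-forget interface: callers need no return
  // value and can tolerate the call landing after the code fragment
  // has been fetched.
  interface TabControl {
    void showTab(int index);
  }

  // However the generated proxy is actually obtained, usage stays compact:
  TabControl tabs = GWT.create(DeferredTabControl.class);  // hypothetical proxy class
  tabs.showTab(3);  // queued until the fragment arrives, then runs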

I'll review the implementation.

Design-wise, the main remaining question I have is about AllowNonVoid.
 The main idea is that an existing interface could be reused with
minimal change.  That's probably the most common way runAsync will be
used: on an existing app that has grown large.  The issue is that
existing interfaces will have some methods that don't work as deferred
because they return some value.  I believe the idea in the current
patch is for the generator to object by default, but to provide an
override that allows the interface to go through anyway.  It will all
work so long as the callers can tolerate seeing the default value up
until the concrete implementation is actually loaded.
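
Concretely, I picture the caller's side looking roughly like this (the
proxy name is invented):

  interface Counter {
    void increment();
    int getValue();   // non-void, so the generator objects unless overridden
  }

  Counter counter = GWT.create(DeferredCounter.class);  // hypothetical
  counter.increment();         // deferred until the fragment loads
  int v = counter.getValue();  // the default value (0) until then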

It looks a little better to me to encourage the idea of splitting
legacy interfaces into two parts: the half that is deferrable and the
half that is not.  For example, given this legacy interface:

  // old version
  interface Adder {
    void add(int i);

    int getValue();
  }


The two parts would be:

  interface DeferrablePartOfAdder {
    void add(int i);
  }

  interface Adder extends DeferrablePartOfAdder {
    int getValue();
  }


Only the deferrable part would have the deferred wrapper methods
generated.  For the rest of the interface, callers would still have to
either use some sort of callback or ensure that an instance of Adder
is in scope.  They wouldn't be stuck, but they wouldn't get to use the
boilerplate removal.
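
For that non-deferrable half, the callback route would presumably be
the usual runAsync shape, something like this (adderProvider is a
made-up accessor for whatever supplies the concrete instance):

  GWT.runAsync(new RunAsyncCallback() {
    public void onSuccess() {
      Adder adder = adderProvider.get();  // hypothetical
      int value = adder.getValue();
      // ... use value ...
    }

    public void onFailure(Throwable reason) {
      Window.alert("failed to load: " + reason);
    }
  });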

If we encouraged people to work that way, there would be less pressure
to have a big hammer that just makes an interface go through the
compiler.  Instead, we could provide a little hammer and let people
annotate individual methods, like this:

  interface Adder {
    void add(int i);

    @IfNotLoaded(-1)
    int getValue();
  }


This way still supports the big-hammer use case, just more verbosely:
people have to annotate each value-returning method individually.
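
If we went that route, the annotation itself could be something as
simple as this (just a sketch; the name, retention, and the story for
non-int return types are all up for grabs):

  import java.lang.annotation.ElementType;
  import java.lang.annotation.Retention;
  import java.lang.annotation.RetentionPolicy;
  import java.lang.annotation.Target;

  /**
   * Marks a value-returning method as safe to call before the concrete
   * implementation has loaded; the value is what callers see until then.
   */
  @Target(ElementType.METHOD)
  @Retention(RetentionPolicy.CLASS)
  public @interface IfNotLoaded {
    int value();
  }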

What do you think about encouraging people to split their interfaces
like this?  Even aside from the return-value issue, it seems like some
methods don't really work when deferred anyway.

-Lex
