On Thu, Jan 17, 2019 at 12:31 PM <davis.bry...@gmail.com> wrote:

> After researching a bit, I believe the issue was that the proxy on the
> server side was closing the connection after a few minutes of idle time, and
> the client's ManagedChannel didn't automatically detect that and reconnect
> when it happened. When constructing the ManagedChannel, I added an
> idleTimeout, which proactively shuts the connection down when it's idle and
> reestablishes it when it's needed again, and this seems to solve the
> problem. So the new channel construction looks like this:
>
> @Singleton
> @Provides
> fun providesMyClient(app: Application): MyClient {
>     val channel = AndroidChannelBuilder
>             .forAddress("example.com", 443)
>             .overrideAuthority("example.com")
>             .context(app.applicationContext)
>             .idleTimeout(60, TimeUnit.SECONDS)
>             .build()
>     return MyClient(channel)
> }
>
> To anyone who might see this, does that seem like a plausible explanation?
>
>
The explanation seems plausible, but I would generally expect that when the
proxy closes the connection, this would be noticed by the gRPC client. For
example, if the TCP socket is closed by the proxy, then the managed channel
will see this and try to reconnect. Can you provide some more details about
what proxy is in use, and how you were able to determine that the proxy is
closing the connection?
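
If it would help with debugging, one way to check whether the channel actually
notices the disconnect is to watch its connectivity state. A rough sketch in
Kotlin, using the experimental getState/notifyWhenStateChanged APIs on
ManagedChannel:

fun watchChannelState(channel: ManagedChannel) {
    // Log the current state, then re-register for the next transition.
    val state = channel.getState(false)
    Log.d("grpc", "channel state: $state")
    channel.notifyWhenStateChanged(state) { watchChannelState(channel) }
}

If the proxy closes the TCP connection and the client sees it, you should see
the channel move from READY to IDLE or TRANSIENT_FAILURE; if no transition is
logged before the failing calls, that would support the theory that the close
is not being detected.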

If you can deterministically reproduce the DEADLINE_EXCEEDED errors from
the original email, it may also be helpful to ensure that you observe the
same behavior when using OkHttpChannelBuilder directly instead of
AndroidChannelBuilder. AndroidChannelBuilder is only intended to respond to
changes in the device's internet state, so it should be irrelevant to
detecting (or failing to detect) server-side disconnections, but it's a
relatively new feature, so it would be worth ruling out as a source of the
problem.
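
For comparison, the equivalent channel built directly with OkHttpChannelBuilder
would look roughly like this (same settings, just without the Android context):

val channel = OkHttpChannelBuilder
        .forAddress("example.com", 443)
        .overrideAuthority("example.com")
        .idleTimeout(60, TimeUnit.SECONDS)
        .build()

If the DEADLINE_EXCEEDED errors still reproduce with that channel, it would
rule out AndroidChannelBuilder as the cause.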

Thanks,

Eric

>
> On Wednesday, January 16, 2019 at 7:30:42 PM UTC-6, davis....@gmail.com
> wrote:
>>
>> I believe I may not understand something about how gRPC channels, stubs,
>> and transports work. I have an Android app that creates a channel and a
>> single blocking stub and injects it with Dagger when the application is
>> initialized. When I need to make a gRPC call, I have a method in my client
>> that calls a method on that stub. After the app is idle for a while, all of
>> my calls return DEADLINE_EXCEEDED errors, though no calls show up in the
>> server logs.
>>
>> @Singleton
>> @Provides
>> fun providesMyClient(app: Application): MyClient {
>>     val channel = AndroidChannelBuilder
>>             .forAddress("example.com", 443)
>>             .overrideAuthority("example.com")
>>             .context(app.applicationContext)
>>             .build()
>>     return MyClient(channel)
>> }
>>
>> My client class has functions that make each call with a deadline:
>>
>> class MyClient(channel: ManagedChannel) {
>>     private val blockingStub: MyServiceGrpc.MyServiceBlockingStub =
>>             MyServiceGrpc.newBlockingStub(channel)
>>
>>     fun getStuff(): StuffResponse =
>>             blockingStub
>>                     .withDeadlineAfter(7, TimeUnit.SECONDS)
>>                     .getStuff(stuffRequest())
>>
>>     fun getOtherStuff(): StuffResponse =
>>             blockingStub
>>                     .withDeadlineAfter(7, TimeUnit.SECONDS)
>>                     .getOtherStuff(stuffRequest())
>> }
>>
>> I make the calls to the server inside a LiveData class in My Repository,
>> where the call looks like this: myClient.getStuff()
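>>
>> Roughly, that LiveData looks like the sketch below (simplified from the
>> real code; the class name and error handling are made up for the example):
>>
>> class StuffLiveData(private val myClient: MyClient) : LiveData<StuffResponse>() {
>>     override fun onActive() {
>>         // Blocking stub calls can't run on the main thread, so use a worker thread.
>>         Thread {
>>             try {
>>                 postValue(myClient.getStuff())
>>             } catch (e: StatusRuntimeException) {
>>                 // This is where the DEADLINE_EXCEEDED errors surface.
>>             }
>>         }.start()
>>     }
>> }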
>>
>> I am guessing that the channel loses its connection at some point, and
>> then all of the subsequent stubs simply can't connect, but I don't see
>> anything in the AndroidChannelBuilder documentation about how to handle
>> this (I believed it reconnected automatically). Is it possible that the
>> channel I use to create my blocking stub gets stale, and that I should be
>> creating a new blocking stub each time I call getStuff()? Any help in
>> understanding this would be greatly appreciated.
>>
