Re: [grpc-io] Re: gRPC client side RoundRobin loadbalancing w/ Consul DNS

2019-01-17 Thread Ram Kumar Rengaswamy
Hmm ... It's unfortunate that there is no way to force the C++ client to
periodically refresh its list of IP addresses. That's a showstopper, as
our backends scale up elastically and there is no way for the gRPC client
to become aware of them.

Q1: If we implement our own lookaside LB, could we configure the client to
consult this LB for a fresh set of IP addresses periodically?
Q2: Can the lookaside LB be within the client process?


On Thu, Jan 17, 2019, 11:00 PM apolcyn via grpc.io wrote:
> I can add to a couple of questions.
> Re: > "Do gRPC clients honor DNS TTL ?"
>
> gRPC clients don't look at TTLs at all. In C++, a gRPC client channel
> will request its DNS resolver to re-resolve when it determines that it has
> reached "transient failure" state. The details of when exactly it reaches
> that state depends on the load balancing policy in use. In "round robin",
> it would be roughly when all individual connections in the list reach
> "transient failure", i.e. if the connections all break. Effectively, if
> backends are moving around and things break, then the default client will
> re-resolve. But if you want the DNS resolution to be up to date for
> different reasons, there's no polling built in to the default DNS resolver.
>
> This could be done with a custom resolver, but in C++ the resolver API
> isn't currently public. I understand that making that API public is
> something in progress though.
>
> Re: > "Q2: Is it possible for gRPC to resolve DNS via TCP instead of UDP ?"
>
> If the DNS server sends back a large response (c-ares considers a large
> response greater than 512 bytes), or if the response has a "truncated" bit
> set, then it will re-send its query over TCP. I can confirm this for
> "ares"; I believe it is also the case for "native". (In C++ there are two
> DNS resolvers: "ares" (c-ares) and "native" (getaddrinfo); "native" is the
> default one right now, but "ares" should become the default in the upcoming
> 1.19 release.)
>
>
>
>
>
>
> On Thursday, January 17, 2019 at 6:23:37 PM UTC-8, Carl Mastrangelo wrote:
>>
>> I know you asked for C++, but at least for Java we do not honor TTL
>> (because the JVM doesn't surface it to us).  If you implement your own
>> NameResolver (not as hard as it sounds!) you can honor these TTLs.
>>
>> I believe C++ uses the c-ares resolver, which IIRC can resort to doing TCP
>> lookups if the response size is too large.  Alas, I cannot answer with any
>> more detail.
>>
>> gRPC has the option to do health checks, but what I think you actually
>> want are keep-alives.  This is configurable on the channel and the server.
>> If you can add more detail about the problem you are trying to avoid, I can
>> give a better answer.
>>
>> As for if DNS is a really bad idea:  Not really.  It has issues, but none
>> of them are particularly damning.   For example, when you add a new server
>> most clients won't find out about it until they poll again.  gRPC is
>> designed around a push based name resolution model, with clients being told
>> what servers they can talk to.   DNS is adapted onto this model, by
>> periodically spawning a thread and notifying the client via the
>> push-interface.
>>
>> The DNS support is pretty good in gRPC, to the point that implementing a
>> custom DNS resolver is likely to cause more issues (what happens if the A
>> lookups succeed, but the AAAA fail?, what happens if there are lots of
>> addresses for a single endpoint?, etc.)
>>
>> One last thing to consider:  the loadbalancer in gRPC is independent of
>> the name resolver.  You could continue to use DNS (and do SRV lookups and
>> such) and pass that info into your own custom client-side LB.  This is what
>> gRPCLB does, but you could customize your own copy to not depend on a
>> gRPCLB server.   There's lots of options here.
>>
>>
>>
>> On Wednesday, January 16, 2019 at 5:01:33 PM UTC-8, Ram Kumar Rengaswamy
>> wrote:
>>>
>>> Hello ... We are looking to setup client-side loadbalancing in GRPC
>>> (C++).
>>> Our current plan roughly is the following:
>>> 1. Use consul for service discovery, health checks etc.
>>> 2. Expose the IP addresses behind a service to GRPC client via Consul
>>> DNS Interface 
>>> 3. Configure the client to use simple round_robin loadbalancing (All our
>>> servers have the same capacity and therefore we don't need any
>>> sophisticated load balancing)
>>>
>>> Before we embark on this path, it would be great if someone with gRPC
>>> production experience could answer a few questions.
>>> Q1: We plan to use a low DNS TTL (say 30s) to force the clients to have
>>> the most up to date service discovery information. Do gRPC clients honor
>>> DNS TTL ?
>>> Q2: Is it possible for gRPC to resolve DNS via TCP instead of UDP ? We
>>> could have a couple of hundred backends for a service.
>>> Q3: Does gRPC do its own health checks and mark unhealthy connections?
>>>
>>> Also from experience, do folks think that this is a really bad idea and
>>

[grpc-io] Does gRPC run on QNX?

2019-01-17 Thread mjgigli
Is anyone out there running gRPC on QNX platforms? I can't find many people 
talking about it online, and it's not listed as officially supported on 
gRPC's website, but I can't think of any particular reason why it 
shouldn't work. Just curious whether anyone out there has 
successfully run gRPC on QNX.

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/1cf7fab0-0730-422c-afde-24b15baaa1c7%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: gRPC client side RoundRobin loadbalancing w/ Consul DNS

2019-01-17 Thread apolcyn via grpc.io
I can add to a couple of questions.
Re: > "Do gRPC clients honor DNS TTL ?"

gRPC clients don't look at TTLs at all. In C++, a gRPC client channel will 
request its DNS resolver to re-resolve when it determines that it has 
reached "transient failure" state. The details of when exactly it reaches 
that state depends on the load balancing policy in use. In "round robin", 
it would be roughly when all individual connections in the list reach 
"transient failure", i.e. if the connections all break. Effectively, if 
backends are moving around and things break, then the default client will 
re-resolve. But if you want the DNS resolution to be up to date for 
different reasons, there's no polling built in to the default DNS resolver.

This could be done with a custom resolver, but in C++ the resolver API 
isn't currently public. I understand that making that API public is 
something in progress though.
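For languages where the resolver API is public (e.g. grpc-java's NameResolver), the periodic refresh asked about here reduces to a small polling loop. Below is a minimal, library-agnostic sketch of that pattern; the class and method names are hypothetical illustrations, not gRPC API:

```java
import java.util.List;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Hypothetical polling resolver: re-runs a lookup function on a fixed
// interval and pushes the result to a listener whenever it changes.
// This mirrors what a custom resolver would do; it is NOT gRPC API.
class PollingResolver {
    private final Supplier<List<String>> lookup;   // e.g. a DNS query
    private final Consumer<List<String>> listener; // e.g. the channel's address listener
    private List<String> last = List.of();

    PollingResolver(Supplier<List<String>> lookup, Consumer<List<String>> listener) {
        this.lookup = lookup;
        this.listener = listener;
    }

    // One poll step: notify only when the address set actually changed.
    void pollOnce() {
        List<String> current = lookup.get();
        if (!current.equals(last)) {
            last = current;
            listener.accept(current);
        }
    }

    // Schedule pollOnce() at a fixed period (e.g. roughly the DNS TTL).
    ScheduledFuture<?> start(ScheduledExecutorService exec, long periodSeconds) {
        return exec.scheduleAtFixedRate(this::pollOnce, 0, periodSeconds, TimeUnit.SECONDS);
    }
}
```

In a real deployment, `lookup` would wrap the DNS query against (say) Consul's DNS interface, and `listener` would hand the fresh address list to the load-balancing policy.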

Re: > "Q2: Is it possible for gRPC to resolve DNS via TCP instead of UDP ?"

If the DNS server sends back a large response (c-ares considers a large 
response greater than 512 bytes), or if the response has a "truncated" bit 
set, then it will re-send its query over TCP. I can confirm this for 
"ares"; I believe it is also the case for "native". (In C++ there are two 
DNS resolvers: "ares" (c-ares) and "native" (getaddrinfo); "native" is the 
default one right now, but "ares" should become the default in the upcoming 
1.19 release.)
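The UDP-to-TCP retry rule described above boils down to a one-line predicate. A hedged sketch (hypothetical helper, not c-ares API; 512 bytes is the classic DNS-over-UDP payload cap from RFC 1035, and the "truncated" flag is the TC bit in the DNS header):

```java
// Sketch of the UDP->TCP retry decision described above.
// Illustrative helper only -- c-ares implements this internally.
class DnsTransport {
    static final int UDP_PAYLOAD_LIMIT = 512;

    // Retry over TCP when the answer could not fit in a UDP datagram.
    static boolean shouldRetryOverTcp(int responseBytes, boolean truncatedBitSet) {
        return truncatedBitSet || responseBytes > UDP_PAYLOAD_LIMIT;
    }
}
```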






On Thursday, January 17, 2019 at 6:23:37 PM UTC-8, Carl Mastrangelo wrote:
>
> I know you asked for C++, but at least for Java we do not honor TTL
> (because the JVM doesn't surface it to us).  If you implement your own
> NameResolver (not as hard as it sounds!) you can honor these TTLs.
>
> I believe C++ uses the c-ares resolver, which IIRC can resort to doing TCP 
> lookups if the response size is too large.  Alas, I cannot answer with any 
> more detail.
>
> gRPC has the option to do health checks, but what I think you actually 
> want are keep-alives.  This is configurable on the channel and the server.  
> If you can add more detail about the problem you are trying to avoid, I can 
> give a better answer.
>
> As for if DNS is a really bad idea:  Not really.  It has issues, but none 
> of them are particularly damning.   For example, when you add a new server 
> most clients won't find out about it until they poll again.  gRPC is 
> designed around a push based name resolution model, with clients being told 
> what servers they can talk to.   DNS is adapted onto this model, by 
> periodically spawning a thread and notifying the client via the 
> push-interface.   
>
> The DNS support is pretty good in gRPC, to the point that implementing a 
> custom DNS resolver is likely to cause more issues (what happens if the A 
> lookups succeed, but the AAAA fail?, what happens if there are lots of 
> addresses for a single endpoint?, etc.)
>
> One last thing to consider:  the loadbalancer in gRPC is independent of 
> the name resolver.  You could continue to use DNS (and do SRV lookups and 
> such) and pass that info into your own custom client-side LB.  This is what 
> gRPCLB does, but you could customize your own copy to not depend on a 
> gRPCLB server.   There's lots of options here. 
>
>
>
> On Wednesday, January 16, 2019 at 5:01:33 PM UTC-8, Ram Kumar Rengaswamy 
> wrote:
>>
>> Hello ... We are looking to setup client-side loadbalancing in GRPC (C++).
>> Our current plan roughly is the following:
>> 1. Use consul for service discovery, health checks etc.
>> 2. Expose the IP addresses behind a service to GRPC client via Consul 
>> DNS Interface 
>> 3. Configure the client to use simple round_robin loadbalancing (All our 
>> servers have the same capacity and therefore we don't need any 
>> sophisticated load balancing)
>>
>> Before we embark on this path, it would be great if someone with gRPC 
>> production experience could answer a few questions.
>> Q1: We plan to use a low DNS TTL (say 30s) to force the clients to have 
>> the most up to date service discovery information. Do gRPC clients honor 
>> DNS TTL ?
>> Q2: Is it possible for gRPC to resolve DNS via TCP instead of UDP ? We 
>> could have a couple of hundred backends for a service.
>> Q3: Does gRPC do its own health checks and mark unhealthy connections?
>>
>> Also from experience, do folks think that this is a really bad idea and 
>> we should really use grpclb policy and implement a look-aside 
>> loadbalancer instead ?
>>
>> Thanks,
>> -Ram
>>
>


[grpc-io] Re: errors when linking with protobuf-lite

2019-01-17 Thread 'Carl Mastrangelo' via grpc.io
Do you have c-ares ( https://c-ares.haxx.se/ ) somewhere on your system?

On Thursday, January 17, 2019 at 11:10:17 AM UTC-8, 
joe.p...@decisionsciencescorp.com wrote:
>
> I have built gRPC with protobuf-lite support and added the
> -DGRPC_USE_PROTO_LITE compiler switch, along with option optimize_for =
> LITE_RUNTIME; in my protobuf definition file. My program compiles OK but
> generates the following undefined references during linking:
>
> grpc_ares_wrapper.cc:(.text+0x7c8): undefined reference to `ares_inet_ntop'
> grpc_ares_wrapper.cc:(.text+0x9a7): undefined reference to `ares_strerror'
> grpc_ares_wrapper.cc:(.text+0xb44): undefined reference to `ares_parse_srv_reply'
> grpc_ares_wrapper.cc:(.text+0xbd7): undefined reference to `ares_gethostbyname'
> grpc_ares_wrapper.cc:(.text+0xc33): undefined reference to `ares_gethostbyname'
> grpc_ares_wrapper.cc:(.text+0xc6e): undefined reference to `ares_free_data'
> grpc_ares_wrapper.cc:(.text+0xe02): undefined reference to `ares_parse_txt_reply_ext'
> grpc_ares_wrapper.cc:(.text+0xfbd): undefined reference to `ares_free_data'
> grpc_ares_wrapper.cc:(.text+0x155f): undefined reference to `ares_set_servers_ports'
> grpc_ares_wrapper.cc:(.text+0x16cf): undefined reference to `ares_gethostbyname'
> grpc_ares_wrapper.cc:(.text+0x173a): undefined reference to `ares_query'
> grpc_ares_wrapper.cc:(.text+0x17bb): undefined reference to `ares_search'
> grpc_ares_wrapper.cc:(.text+0x1e90): undefined reference to `ares_library_init'
>
>
> Note that I am linking against the following grpc libs:
>
> libgrpc++.a
> libgrpc.a
> libaddress_sorting.a
> libgpr.a
> libgrpc_cronet.a
> libgrpc++_cronet.a
> libgrpc++_error_details.a
> libgrpc_plugin_support.a
> libgrpcpp_channelz.a
> libgrpc++_reflection.a
> libgrpc_unsecure.a
> libgrpc++_unsecure.a
>



[grpc-io] Re: gRPC client side RoundRobin loadbalancing w/ Consul DNS

2019-01-17 Thread 'Carl Mastrangelo' via grpc.io
I know you asked for C++, but at least for Java we do not honor TTL
(because the JVM doesn't surface it to us).  If you implement your own
NameResolver (not as hard as it sounds!) you can honor these TTLs.

I believe C++ uses the c-ares resolver, which IIRC can resort to doing TCP 
lookups if the response size is too large.  Alas, I cannot answer with any 
more detail.

gRPC has the option to do health checks, but what I think you actually want 
are keep-alives.  This is configurable on the channel and the server.  If 
you can add more detail about the problem you are trying to avoid, I can 
give a better answer.

As for if DNS is a really bad idea:  Not really.  It has issues, but none 
of them are particularly damning.   For example, when you add a new server 
most clients won't find out about it until they poll again.  gRPC is 
designed around a push based name resolution model, with clients being told 
what servers they can talk to.   DNS is adapted onto this model, by 
periodically spawning a thread and notifying the client via the 
push-interface.   

The DNS support is pretty good in gRPC, to the point that implementing a 
custom DNS resolver is likely to cause more issues (what happens if the A 
lookups succeed, but the AAAA fail?, what happens if there are lots of 
addresses for a single endpoint?, etc.)

One last thing to consider:  the loadbalancer in gRPC is independent of the 
name resolver.  You could continue to use DNS (and do SRV lookups and such) 
and pass that info into your own custom client-side LB.  This is what 
gRPCLB does, but you could customize your own copy to not depend on a 
gRPCLB server.   There's lots of options here. 
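The split described above, a name resolver that feeds addresses into an independent client-side LB, can be sketched without any gRPC types. A minimal round-robin picker (illustrative only; a real gRPC LB policy also tracks per-connection state):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal client-side round-robin picker of the kind described above:
// the resolver pushes an address list in, pick() rotates through it.
// Not gRPC API -- a sketch of the technique.
class RoundRobinPicker {
    private volatile List<String> addresses = List.of();
    private final AtomicInteger next = new AtomicInteger();

    // Called by the name resolver whenever the backend set changes.
    void onAddresses(List<String> fresh) {
        addresses = List.copyOf(fresh);
    }

    // Called per RPC to choose a backend.
    String pick() {
        List<String> snapshot = addresses;
        if (snapshot.isEmpty()) {
            throw new IllegalStateException("no backends resolved");
        }
        int i = Math.floorMod(next.getAndIncrement(), snapshot.size());
        return snapshot.get(i);
    }
}
```

The design point this illustrates is exactly the one above: `onAddresses` could be fed by DNS, SRV lookups, or a lookaside server, and the picking policy never needs to know which.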



On Wednesday, January 16, 2019 at 5:01:33 PM UTC-8, Ram Kumar Rengaswamy 
wrote:
>
> Hello ... We are looking to set up client-side load balancing in gRPC (C++).
> Our current plan is roughly the following:
> 1. Use Consul for service discovery, health checks, etc.
> 2. Expose the IP addresses behind a service to the gRPC client via the
> Consul DNS interface.
> 3. Configure the client to use simple round_robin load balancing (all our
> servers have the same capacity, so we don't need any sophisticated load
> balancing).
>
> Before we embark on this path, it would be great if someone with gRPC 
> production experience could answer a few questions.
> Q1: We plan to use a low DNS TTL (say 30s) to force the clients to have
> the most up-to-date service-discovery information. Do gRPC clients honor
> DNS TTL?
> Q2: Is it possible for gRPC to resolve DNS via TCP instead of UDP? We
> could have a couple of hundred backends for a service.
> Q3: Does gRPC do its own health checks and mark unhealthy connections?
>
> Also from experience, do folks think that this is a really bad idea and we 
> should really use grpclb policy and implement a look-aside loadbalancer 
> instead ?
>
> Thanks,
> -Ram
>



[grpc-io] Re: gRPC java client side TLS authentication using root.pem file and using username and password.

2019-01-17 Thread 'Carl Mastrangelo' via grpc.io
You are going to need to clarify some more; I can't tell what's going on in 
your setup. Where do the username and password come from? Why aren't you 
using an authentication token? Have you read our Mutual TLS guide here: 
https://github.com/grpc/grpc-java/blob/master/SECURITY.md#mutual-tls

On Tuesday, January 15, 2019 at 1:09:39 PM UTC-8, Kishore Ganipineni wrote:
>
> SSL/TLS authentication of gRPC using a root.pem file and username & password
> at the client side.
>
> To authenticate the gRPC server using a root pem certificate file and
> credentials in C++, we have a facility to provide both options from the
> client, like below.
>
> pem file setup using environment variable option (C++):
>
> setenv("GRPC_DEFAULT_SSL_ROOTS_FILE_PATH", fileBuff1, true);
> sprintf(setSecBuff, "chmod 777 %s", fileBuff1);
> system(setSecBuff);
> Creating Channel Using ssl options(keyPassword if any):
>
> SslCredentialsOptions ssl_opts;
> TelemAsyncClient 
> telemAsyncClient(grpc::CreateChannel(std::string(hostIpStr), 
> grpc::SslCredentials(ssl_opts), ChannelArguments()));
> Passing credentials using ClientContext(C++):
>
> ClientContext context;
> CompletionQueue cq;
> Status status;
>
> context.AddMetadata("username", userid); 
> context.AddMetadata("password", password);  
>
>
> // Print Populated GetRequest
> printGetRequest(&getReq); 
> std::unique_ptr > 
> rpc(stub_->AsyncGet(&context, getReq, &cq));
> In Java we have a facility to pass the pem file, but how do we pass the
> credentials? Java code to pass the pem file:
>
> ManagedChannel channel = NettyChannelBuilder.forAddress(ip, port)
> .useTransportSecurity()
> .negotiationType(NegotiationType.TLS)
> .sslContext(GrpcSslContexts.forClient()
> .trustManager(new File("/test.pem"))
> .clientAuth(ClientAuth.REQUIRE)
> .build())
> .overrideAuthority("test")
> .build();
> I tried to set the credentials using the CallCredentials and ClientInterceptor
> options, but none of them worked. The username is not received on the server
> side, hence the io.grpc.StatusRuntimeException: UNAUTHENTICATED exception.
>
> CallCredentials Tried:
>
> OpenConfigGrpc.OpenConfigBlockingStub blockingStub = 
> OpenConfigGrpc.newBlockingStub(channel).withCallCredentials(credentials);
>
> public void applyRequestMetadata(MethodDescriptor<?, ?> methodDescriptor,
> Attributes attributes, Executor executor, final MetadataApplier
> metadataApplier) {
> String authority = attributes.get(ATTR_AUTHORITY);
> Attributes.Key<String> usernameKey = Attributes.Key.of("userId");
> Attributes.Key<String> passwordKey = Attributes.Key.of("password");
> attributes.newBuilder().set(usernameKey, username).build();
> attributes.newBuilder().set(passwordKey, pasfhocal).build();
> System.out.println(authority);
> executor.execute(new Runnable() {
> public void run() {
> try {
> Metadata headers = new Metadata();
> Metadata.Key<String> usernameKey = 
> Metadata.Key.of("userId", Metadata.ASCII_STRING_MARSHALLER);
> Metadata.Key<String> passwordKey = 
> Metadata.Key.of("password", Metadata.ASCII_STRING_MARSHALLER);
> headers.put(usernameKey, username);
> headers.put(passwordKey, pasfhocal);
> metadataApplier.apply(headers);
> } catch (Exception e) {
> 
> metadataApplier.fail(Status.UNAUTHENTICATED.withCause(e));
> e.printStackTrace();
> }finally{
> logger.info("Inside CienaCallCredentials finally.");
> }
> }
> });
> }
> Interceptors Tried:
>
> OpenConfigGrpc.OpenConfigBlockingStub blockingStub = 
> OpenConfigGrpc.newBlockingStub(channel).withInterceptors(interceptors);
>
> public <ReqT, RespT> ClientCall<ReqT, RespT>
> interceptCall(MethodDescriptor<ReqT, RespT> methodDescriptor, CallOptions
> callOptions, Channel channel) {
> return new ForwardingClientCall.SimpleForwardingClientCall<ReqT,
> RespT>(channel.newCall(methodDescriptor, callOptions)) {
> @Override
> public void start(Listener responseListener, Metadata 
> headers) {
> callOptions.withCallCredentials(credentials);
> Metadata.Key<String> usernameKey = 
> Metadata.Key.of("usernId", Metadata.ASCII_STRING_MARSHALLER);
> headers.put(usernameKey, username);
> Metadata.Key<String> passwordKey = 
> Metadata.Key.of("password", Metadata.ASCII_STRING_MARSHALLER);
> headers.put(passwordKey, pasfhocal);
> super.start(responseListener, headers);
> }
> };
> }
> Your help would be much appreciated if someone can explain how to
> authenticate gRPC using a root.pem file plus a username and password.
>
> Thanks in Advance, Kishore
>
>
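Independent of grpc-java specifics, the pattern being attempted above is "merge a fixed set of credential headers into every outgoing call's metadata", and the header key strings must match byte-for-byte what the server reads. A library-agnostic sketch of just that merging step (hypothetical class, not grpc-java API):

```java
import java.util.HashMap;
import java.util.Map;

// Library-agnostic sketch of the "attach credentials to every call" pattern
// the interceptor above is aiming for. Not grpc-java API.
class CredentialHeaders {
    private final Map<String, String> fixed = new HashMap<>();

    CredentialHeaders(String username, String password) {
        // These key strings must match exactly what the server reads.
        fixed.put("username", username);
        fixed.put("password", password);
    }

    // Merge the credential headers into an outgoing call's metadata,
    // leaving any per-call headers already present intact.
    Map<String, String> apply(Map<String, String> callHeaders) {
        Map<String, String> merged = new HashMap<>(callHeaders);
        merged.putAll(fixed);
        return merged;
    }
}
```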


[grpc-io] Re: gRPC java (1.16) RoundRobinLoadBalancer is not able to load balancing to the newly added server

2019-01-17 Thread eleanore . jin
Got it! Thanks a lot

On Thursday, January 17, 2019 at 2:35:54 PM UTC-8, Kun Zhang wrote:
>
> You don't need to worry about the timing. As soon as the Subchannel 
> becomes ready, RoundRobinLoadBalancer should notice that by yet another 
> call to updateBalancingState() and add it to the round-robin list. If you 
> continue debugging, you should be able to see that.
>
> On Wednesday, January 16, 2019 at 1:44:41 PM UTC-8, eleano...@gmail.com 
> wrote:
>>
>> Hi Kun, 
>>
>> I am trying to debug further:
>> io.grpc.util.RoundRobinLoadBalancerFactory::handleResolvedAddressGroups
>> is called when NameResolver.Listener::onAddresses is called.
>>
>> Inside the handleResolvedAddressGroups method, it calls
>> updateBalancingState(getAggregatedState(), getAggregatedError()), where
>> it seems getAggregatedState() is not returning the subchannel state as
>> READY; it is sometimes CONNECTING, sometimes IDLE.
>>
>> Then in updateBalancingState(), it will only put those subchannel's state 
>> with READY in the activeList. 
>>
>> So just wonder is there anyway to ensure the sub channel is READY when 
>> updating the loadbalancer ?
>>
>> On Wednesday, January 16, 2019 at 12:50:04 PM UTC-8, eleano...@gmail.com 
>> wrote:
>>>
>>> Hi Kun, 
>>>  
>>> I did see that the new server3 (listening on 9097) has its 
>>> InternalSubchannel gets created:
>>>
>>>  [io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-9) 
>>> [io.grpc.internal.InternalSubchannel-20] 
>>> io.grpc.netty.NettyClientTransport-21 for localhost/127.0.0.1:9097 is 
>>> ready
>>>
>>> On Wednesday, January 9, 2019 at 10:18:47 AM UTC-8, eleano...@gmail.com 
>>> wrote:

 Hi, 

 in my Java gRPC client, when I create the ManagedChannel, I am passing 
 my custom NameResolver, and using RoundRobinLoadBalancer. When my 
 NameResolver is notified of a change to the server list (new server 
 added), it will call Listener.onAddresses and pass the updated list.

 I see from the log that onAddresses is called from 
 NameResolverListenerImpl (9097 is the new server address added):

 resolved address: [[addrs=[localhost/127.0.0.1:9096], attrs={}], 
 [addrs=[localhost/127.0.0.1:9097], attrs={}]], config={}


 However, the traffic is not going to the new server. Did I miss 
 anything?


 Thanks a lot!
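The behavior Kun describes, where the round-robin list is rebuilt from only the subchannels currently in READY state, can be modeled in a few lines. An illustrative sketch (a simplified model, not grpc-java internals):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the behavior discussed above: the active round-robin list is
// rebuilt from only the subchannels currently in READY state, so a newly
// added backend joins rotation as soon as its connection becomes READY.
// Illustrative model only, not grpc-java internals.
class ReadyList {
    enum State { IDLE, CONNECTING, READY, TRANSIENT_FAILURE }

    private final Map<String, State> subchannels = new LinkedHashMap<>();

    // Record the latest connectivity state for a backend address.
    void updateState(String address, State state) {
        subchannels.put(address, state);
    }

    // Analogous to updateBalancingState(): only READY subchannels are active.
    List<String> activeList() {
        List<String> active = new ArrayList<>();
        for (Map.Entry<String, State> e : subchannels.entrySet()) {
            if (e.getValue() == State.READY) {
                active.add(e.getKey());
            }
        }
        return active;
    }
}
```

In the scenario from this thread, the new 9097 backend would first report CONNECTING (absent from the active list) and be added to rotation on the state change to READY.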









Re: [grpc-io] Re: gRPC Android stub creation best practice? DEADLINE_EXCEEDED but no request made to server

2019-01-17 Thread robert engels
A lot of proxies - at least the firewall kind - don’t implement the TCP 
protocol to close the connection for an idle timeout - they just drop their own 
side / mapping.

When you attempt to send via that connection later on, you will get the 
error at the TCP layer.

> On Jan 17, 2019, at 5:51 PM, 'Eric Gribkoff' via grpc.io 
>  wrote:
> 
> 
> 
> On Thu, Jan 17, 2019 at 12:31 PM  wrote:
> After researching a bit, I believe the issue was that the proxy on the server 
> was closing the connection after a few minutes of idle time, and the client 
> ManagedChannel didn't automatically detect that and connect again when that 
> happened. When constructing the ManagedChannel, I added an idleTimeout to it, 
> which will proactively kill the connection when it's idle, and reestablish it 
> when it's needed again, and this seems to solve the problem. So the new 
> channel construction looks like this:
> 
> @Singleton
> @Provides
> fun providesMyClient(app: Application): MyClient {
> val channel = AndroidChannelBuilder
> .forAddress("example.com", 443)
> .overrideAuthority("example.com")
> .context(app.applicationContext)
> .idleTimeout(60, TimeUnit.SECONDS)
> .build()
> return MyClient(channel)
> }
> To anyone who might see this, does that seem like a plausible explanation?
> 
> 
> The explanation seems plausible, but I would generally expect that when the 
> proxy closes the connection, this would be noticed by the gRPC client. For 
> example, if the TCP socket is closed by the proxy, then the managed channel 
> will see this and try to reconnect. Can you provide some more details about 
> what proxy is in use, and how you were able to determine that the proxy is 
> closing the connection?
> 
> If you can deterministically reproduce the DEADLINE_EXCEEDED errors from the 
> original email, it may also be helpful to ensure that you observe the same 
> behavior when using OkHttpChannelBuilder directly instead of 
> AndroidChannelBuilder. AndroidChannelBuilder is only intended to respond to 
> changes in the device's internet state, so it should be irrelevant to 
> detecting (or failing to detect) server-side disconnections, but it's a 
> relatively new feature and would be worth ruling it out as a source of the 
> problem.
> 
> Thanks,
> 
> Eric
> 
> 
>   
> 
> On Wednesday, January 16, 2019 at 7:30:42 PM UTC-6, davis@gmail.com 
>  wrote:
> I believe I may not understand something about how gRPC Channels, Stubs, And 
> Transports work. I have an Android app that creates a channel and a single 
> blocking stub and injects it with dagger when the application is initialized. 
> When I need to make a grpc call, I have a method in my client, that calls a 
> method with that stub. After the app is idle a while, all of my calls return 
> DEADLINE_EXCEEDED errors, though there are no calls showing up in the server 
> logs.
> 
> @Singleton
> @Provides
> fun providesMyClient(app: Application): MyClient {
> val channel = AndroidChannelBuilder
> .forAddress("example.com", 443)
> .overrideAuthority("example.com")
> .context(app.applicationContext)
> .build()
> return MyClient(channel)
> }
> Where my client class has a function to return a request with a deadline:
> 
> class MyClient(channel: ManagedChannel) {
>     private val blockingStub: MyServiceGrpc.MyServiceBlockingStub =
>         MyServiceGrpc.newBlockingStub(channel)
>
>     fun getStuff(): StuffResponse =
>         blockingStub
>             .withDeadlineAfter(7, TimeUnit.SECONDS)
>             .getStuff(stuffRequest())
>
>     fun getOtherStuff(): StuffResponse =
>         blockingStub
>             .withDeadlineAfter(7, TimeUnit.SECONDS)
>             .getOtherStuff(stuffRequest())
> }
> I make the calls to the server inside a LiveData class in My Repository, 
> where the call looks like this: myClient.getStuff()
> 
> I am guessing that the channel loses its connection at some point, and then 
> all of the subsequent stubs simply can't connect, but I don't see anywhere in 
> the AndroidChannelBuilder documentation that talks about how to handle this 
> (I believed it reconnected automatically). Is it possible that the channel I 
> use to create my blocking stub gets stale, and I should be creating a new 
> blocking stub each time I call getStuff()? Any help in understanding this 
> would be greatly appreciated.
> 
> 

Re: [grpc-io] Re: gRPC Android stub creation best practice? DEADLINE_EXCEEDED but no request made to server

2019-01-17 Thread 'Eric Gribkoff' via grpc.io
On Thu, Jan 17, 2019 at 12:31 PM  wrote:

> After researching a bit, I believe the issue was that the proxy on the
> server was closing the connection after a few minutes of idle time, and the
> client ManagedChannel didn't automatically detect that and connect again
> when that happened. When constructing the ManagedChannel, I added an
> idleTimeout to it, which will proactively kill the connection when it's
> idle, and reestablish it when it's needed again, and this seems to solve
> the problem. So the new channel construction looks like this:
>
> @Singleton
> @Provides
> fun providesMyClient(app: Application): MyClient {
> val channel = AndroidChannelBuilder
> .forAddress("example.com", 443)
> .overrideAuthority("example.com")
> .context(app.applicationContext)
> .idleTimeout(60, TimeUnit.SECONDS)
> .build()
> return MyClient(channel)
> }
>
> To anyone who might see this, does that seem like a plausible explanation?
>
>
The explanation seems plausible, but I would generally expect that when the
proxy closes the connection, this would be noticed by the gRPC client. For
example, if the TCP socket is closed by the proxy, then the managed channel
will see this and try to reconnect. Can you provide some more details about
what proxy is in use, and how you were able to determine that the proxy is
closing the connection?

If you can deterministically reproduce the DEADLINE_EXCEEDED errors from
the original email, it may also be helpful to ensure that you observe the
same behavior when using OkHttpChannelBuilder directly instead of
AndroidChannelBuilder. AndroidChannelBuilder is only intended to respond to
changes in the device's internet state, so it should be irrelevant to
detecting (or failing to detect) server-side disconnections, but it's a
relatively new feature and would be worth ruling it out as a source of the
problem.

Thanks,

Eric




>
> On Wednesday, January 16, 2019 at 7:30:42 PM UTC-6, davis@gmail.com
> wrote:
>>
>> I believe I may not understand something about how gRPC Channels, Stubs,
>> And Transports work. I have an Android app that creates a channel and a
>> single blocking stub and injects it with dagger when the application is
>> initialized. When I need to make a grpc call, I have a method in my client,
>> that calls a method with that stub. After the app is idle a while, all of
>> my calls return DEADLINE_EXCEEDED errors, though there are no calls showing
>> up in the server logs.
>>
>> @Singleton
>> @Provides
>> fun providesMyClient(app: Application): MyClient {
>>     val channel = AndroidChannelBuilder
>>         .forAddress("example.com", 443)
>>         .overrideAuthority("example.com")
>>         .context(app.applicationContext)
>>         .build()
>>     return MyClient(channel)
>> }
>>
>> Where my client class has a function to return a request with a deadline:
>>
>> class MyClient(channel: ManagedChannel) {
>>     private val blockingStub: MyServiceGrpc.MyServiceBlockingStub =
>>         MyServiceGrpc.newBlockingStub(channel)
>>
>>     fun getStuff(): StuffResponse =
>>         blockingStub
>>             .withDeadlineAfter(7, TimeUnit.SECONDS)
>>             .getStuff(stuffRequest())
>>
>>     fun getOtherStuff(): StuffResponse =
>>         blockingStub
>>             .withDeadlineAfter(7, TimeUnit.SECONDS)
>>             .getOtherStuff(stuffRequest())
>> }
>>
>> I make the calls to the server inside a LiveData class in My Repository,
>> where the call looks like this: myClient.getStuff()
>>
>> I am guessing that the channel loses its connection at some point, and
>> then all of the subsequent stubs simply can't connect, but I don't see
>> anywhere in the AndroidChannelBuilder documentation that talks about how to
>> handle this (I believed it reconnected automatically). Is it possible that
>> the channel I use to create my blocking stub gets stale, and I should be
>> creating a new blocking stub each time I call getStuff()? Any help in
>> understanding this would be greatly appreciated.
>>
> --
> You received this message because you are subscribed to the Google Groups "
> grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to grpc-io+unsubscr...@googlegroups.com.
> To post to this group, send email to grpc-io@googlegroups.com.
> Visit this group at https://groups.google.com/group/grpc-io.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/grpc-io/1202aad5-4897-4bbb-a238-34edae74e368%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>


[grpc-io] Re: gRPC java (1.16) RoundRobinLoadBalancer is not able to load balancing to the newly added server

2019-01-17 Thread 'Kun Zhang' via grpc.io
You don't need to worry about the timing. As soon as the Subchannel becomes 
ready, RoundRobinLoadBalancer should notice that by yet another call to 
updateBalancingState() and add it to the round-robin list. If you continue 
debugging, you should be able to see that.
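The behavior described above can be sketched in a few lines of plain Java. This is not the real grpc-java API; `Sub` and `State` below are stand-ins for Subchannel and ConnectivityState, illustrating why a CONNECTING subchannel receives no traffic until a later state change triggers another update that marks it READY:

```java
import java.util.ArrayList;
import java.util.List;

public class ReadyRoundRobin {
    enum State { IDLE, CONNECTING, READY, TRANSIENT_FAILURE }

    // Stand-in for a grpc-java Subchannel: just an address and a state.
    static class Sub {
        final String addr; final State state;
        Sub(String addr, State state) { this.addr = addr; this.state = state; }
    }

    private final List<String> active = new ArrayList<>();
    private int index = 0;

    // Mirrors the described updateBalancingState(): only READY subchannels
    // enter the round-robin list; CONNECTING/IDLE ones are skipped until a
    // later state change triggers another update.
    void update(List<Sub> all) {
        active.clear();
        for (Sub s : all) {
            if (s.state == State.READY) active.add(s.addr);
        }
        index = 0;
    }

    String pick() {
        if (active.isEmpty()) return null; // the real policy buffers or fails the RPC
        String choice = active.get(index % active.size());
        index++;
        return choice;
    }

    public static void main(String[] args) {
        ReadyRoundRobin rr = new ReadyRoundRobin();
        // :9097 is still CONNECTING, so it gets no traffic yet...
        rr.update(List.of(new Sub(":9096", State.READY),
                          new Sub(":9097", State.CONNECTING)));
        System.out.println(rr.pick() + " " + rr.pick()); // :9096 :9096
        // ...until its state change causes another update with READY.
        rr.update(List.of(new Sub(":9096", State.READY),
                          new Sub(":9097", State.READY)));
        System.out.println(rr.pick() + " " + rr.pick()); // :9096 :9097
    }
}
```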

On Wednesday, January 16, 2019 at 1:44:41 PM UTC-8, eleano...@gmail.com 
wrote:
>
> Hi Kun, 
>
> I am trying to debug further: 
> io.grpc.util.RoundRobinLoadBalancerFactory::handleResolvedAddressGroups 
> is called when NameResolver.Listener::onAddress is called.
>
> Inside the handleResolvedAddressGroups method, it calls 
> updateBalancingState(getAggregatedState(), getAggregatedError()); it seems 
> getAggregatedState() is not returning the subchannel state as READY: 
> sometimes CONNECTING, sometimes IDLE.
>
> Then updateBalancingState() only puts subchannels whose state is READY 
> into the activeList.
>
> So I just wonder: is there any way to ensure the subchannel is READY when 
> updating the load balancer?
>
> On Wednesday, January 16, 2019 at 12:50:04 PM UTC-8, eleano...@gmail.com 
> wrote:
>>
>> Hi Kun, 
>>  
>> I did see that the new server3 (listening on 9097) has its 
>> InternalSubchannel gets created:
>>
>>  [io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-9) 
>> [io.grpc.internal.InternalSubchannel-20] 
>> io.grpc.netty.NettyClientTransport-21 for localhost/127.0.0.1:9097 is 
>> ready
>>  [io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-9) 
>> [io.grpc.internal.InternalSubchannel-20] 
>> io.grpc.netty.NettyClientTransport-21 for localhost/127.0.0.1:9097 is 
>> ready
>>
>> On Wednesday, January 9, 2019 at 10:18:47 AM UTC-8, eleano...@gmail.com 
>> wrote:
>>>
>>> Hi, 
>>>
>>> in my java gRPC client, when I create the ManagedChannel, I am passing 
>>> my custom NameResolver, and using RoundRobinLoadBalancer. When my 
>>> NameResolver is notified with a change to the server list (new server 
>>> added), it will call Listener.onAddress and pass the updated the list.
>>>
>>> I see from the Log: the onAddress is called from 
>>> NameResolverListenerImpl, (9097 is the new server address added)
>>>
>>> resolved address: [[addrs=[localhost/127.0.0.1:9096], attrs={}], 
>>> [addrs=[localhost/127.0.0.1:9097], attrs={}]], config={}
>>>
>>>
>>> however, the traffic is not coming to the new server, did I miss 
>>> anything?
>>>
>>>
>>> Thanks a lot!
>>>
>>>
>>>
>>>
>>>
>>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/3f7c0182-d7ad-485d-b655-f7fd296913b2%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: grpc-java DnsNameResolver behavior (Kubernetes pod failing scenario behind Kube DNS)

2019-01-17 Thread Yee-Ning Cheng
What is the default exponential backoff configuration?
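For reference, the defaults in gRPC's connection backoff spec are an initial backoff of 1s, a multiplier of 1.6, and a cap of 120s, with up to 20% random jitter applied on top (plus a minimum connect timeout of 20s). A small sketch of the deterministic part of that schedule; the constants here are taken from the spec, not read from any gRPC library, so treat them as an assumption about the version in use:

```java
public class BackoffSchedule {
    // Defaults from the gRPC connection-backoff spec; runtime jitter of
    // up to +/-20% is applied on top of these values and omitted here.
    static final double INITIAL_BACKOFF_S = 1.0;
    static final double MULTIPLIER = 1.6;
    static final double MAX_BACKOFF_S = 120.0;

    // Wait (in seconds) before each of the first `attempts` reconnects.
    static double[] schedule(int attempts) {
        double[] out = new double[attempts];
        double backoff = INITIAL_BACKOFF_S;
        for (int i = 0; i < attempts; i++) {
            out[i] = Math.min(backoff, MAX_BACKOFF_S);
            backoff *= MULTIPLIER;
        }
        return out;
    }

    public static void main(String[] args) {
        for (double d : schedule(12)) System.out.printf("%.2fs ", d);
        System.out.println();
        // 1.00s 1.60s 2.56s 4.10s 6.55s 10.49s 16.78s 26.84s 42.95s 68.72s 109.95s 120.00s
    }
}
```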

I have used dig on the DNS record and it has a 30s TTL, so that does not 
seem to be the issue.

With regards to the JVM DNS caching, I tried setting a variety of 
properties and none of them seem to work.


I tried setting the following property right before my main, but it does 
not work.

object ClientDriver {

  java.security.Security.setProperty("networkaddress.cache.ttl", "20")

  def main(arg: Array[String]) = {
    // Code
  }
}


I even tried setting the following system property which didn't work either

-Dsun.net.inetaddr.ttl=20
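A likely reason neither property took effect: the JVM reads the address cache policy only once, when InetAddress caching is first initialized, so `networkaddress.cache.ttl` (a security property) and `sun.net.inetaddr.ttl` (a system property, consulted only when the security property is unset) must be in place before the first hostname lookup anywhere in the process, including lookups triggered indirectly by other libraries during startup. A minimal sketch:

```java
import java.security.Security;

public class DnsTtlConfig {
    // Call this before anything in the process resolves a hostname;
    // once the JVM has performed its first lookup, the cache policy is fixed.
    static void configureDnsTtl() {
        Security.setProperty("networkaddress.cache.ttl", "20");         // cache hits for 20s
        Security.setProperty("networkaddress.cache.negative.ttl", "5"); // cache failures for 5s
    }

    public static void main(String[] args) {
        configureDnsTtl();
        System.out.println(Security.getProperty("networkaddress.cache.ttl")); // 20
    }
}
```

If any code path resolves a hostname before `configureDnsTtl()` runs, the setting is silently ignored, which would match the symptoms described above.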



On Thursday, January 17, 2019 at 2:04:45 PM UTC-5, Kun Zhang wrote:
>
> Even though the first DNS refresh is too early to notice the new address, 
> as long as the old address is still returned, RoundRobin will continue 
> trying to connect to the old address (subject to exponential back-off of 
> Subchannel reconnections). Of course it will fail, but whenever it does, a 
> new DNS refresh will be triggered. Eventually you will get the new address.
>
> If you have waited long enough and still not seen the new address, it may 
> be due to the TTL of the DNS record, or more likely, JVM's DNS caching.
>
> On Tuesday, January 15, 2019 at 2:36:39 PM UTC-8, Yee-Ning Cheng wrote:
>>
>> Hi,
>>
>> I have a gRPC client using the default DnsNameResolver and 
>> RoundRobinLoadBalancer that is connected to gRPC servers on Kubernetes 
>> using the Kube DNS endpoint.  The servers are deployed as Kube pods and may 
>> fail.  I see that when a pod fails, the onStateChange gets called to 
>> refresh the DnsNameResolver.  The problem is that the new Kube pod that 
>> gets spun up in the old pod's place is not up yet when the resolver is 
>> trying to refresh the subchannel state and doesn't see the new pod.  And 
>> thus, the client is not able to see the new pod and does not connect to it.
>>
>> Is there a configuration I am missing or is there a way to refresh the 
>> resolver on a scheduled timer?
>>
>> Thanks,
>>
>> Yee-Ning
>>
>
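On the scheduled-timer question: there is no built-in polling in the default DnsNameResolver (as of grpc-java 1.16), but a custom NameResolver can drive its own periodic refresh and hand fresh addresses to its Listener. A dependency-free sketch of just the timer plumbing; the `refresh` Runnable stands in for the resolver's actual re-resolution logic, which is an assumption here rather than real grpc-java code:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicRefresher {
    private final ScheduledExecutorService timer =
        Executors.newSingleThreadScheduledExecutor();

    // In a real custom NameResolver, `refresh` would re-resolve and push the
    // resulting address list to the resolver's Listener.
    void start(Runnable refresh, long periodSeconds) {
        timer.scheduleWithFixedDelay(refresh, periodSeconds, periodSeconds,
                                     TimeUnit.SECONDS);
    }

    void stop() { timer.shutdownNow(); }

    public static void main(String[] args) throws InterruptedException {
        PeriodicRefresher r = new PeriodicRefresher();
        CountDownLatch twoRefreshes = new CountDownLatch(2);
        r.start(twoRefreshes::countDown, 1);
        // Wait until at least two periodic refreshes have fired.
        System.out.println(twoRefreshes.await(10, TimeUnit.SECONDS));
        r.stop();
    }
}
```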

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/93bc0f43-468e-401c-ac39-dd6488b617ac%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] Re: gRPC Android stub creation best practice? DEADLINE_EXCEEDED but no request made to server

2019-01-17 Thread robert engels

It should; you just need a keepalive time lower than the proxy's idle timeout.

Which is better depends a lot on how many simultaneous connections you expect 
to the server (i.e. how many client processes/machines). If that number is 
small, it would be much more efficient, and give better latency, to use 
keepalives rather than rebuild the connection. If a lot of requests are coming 
in anyway, it won't even matter.
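The relevant grpc-java knobs on ManagedChannelBuilder are keepAliveTime (interval between HTTP/2 pings), keepAliveTimeout (how long to wait for the ping ack), and keepAliveWithoutCalls (send pings even with no active RPC, which is what an idle-but-warm channel needs). The interval just has to sit comfortably below the proxy's idle timeout. A trivial helper to make that rule of thumb explicit; the 10-second floor reflects my understanding that grpc-java clamps very small keepalive intervals, so verify it against the version in use:

```java
import java.time.Duration;

public class KeepAlivePlanner {
    // Pick a keepalive interval safely below the proxy's idle timeout,
    // leaving headroom for scheduling delay and the ping round trip.
    static Duration keepAliveFor(Duration proxyIdleTimeout) {
        Duration half = proxyIdleTimeout.dividedBy(2);
        Duration floor = Duration.ofSeconds(10); // assumed grpc-java minimum
        return half.compareTo(floor) < 0 ? floor : half;
    }

    public static void main(String[] args) {
        // e.g. a proxy that drops idle connections after 5 minutes:
        System.out.println(keepAliveFor(Duration.ofMinutes(5))); // PT2M30S
        // The chosen value would feed builder.keepAliveTime(...), together
        // with keepAliveWithoutCalls(true) for a channel that sits idle.
    }
}
```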

> On Jan 17, 2019, at 2:41 PM, Bryant Davis  wrote:
> 
> Thanks for your response, Robert!  I wasn't clear on how the keepalives work, 
> and saw some warning in the docs about increasing load on servers, but 
> perhaps that's better than redoing the TLS handshake each time?  There seem to 
> be 3 options: keepAliveTime, keepAliveTimeout, and keepAliveWithoutCalls.  I 
> suppose I would use keepAliveTime, and that would prevent the connection from 
> closing?
> 
> On Thu, Jan 17, 2019 at 2:32 PM robert engels  > wrote:
> Yes, it might be more efficient to use keep-alives rather than destroying 
> and rebuilding the connections, but that will depend on your setup/usage.
> 
>> On Jan 17, 2019, at 2:31 PM, davis.bry...@gmail.com 
>>  wrote:
>> 
>> After researching a bit, I believe the issue was that the proxy on the 
>> server was closing the connection after a few minutes of idle time, and the 
>> client ManagedChannel didn't automatically detect that and connect again 
>> when that happened. When constructing the ManagedChannel, I added an 
>> idleTimeout to it, which will proactively kill the connection when it's 
>> idle, and reestablish it when it's needed again, and this seems to solve the 
>> problem. So the new channel construction looks like this:
>> 
>> @Singleton
>> @Provides
>> fun providesMyClient(app: Application): MyClient {
>> val channel = AndroidChannelBuilder
>> .forAddress("example.com", 443)
>> .overrideAuthority("example.com")
>> .context(app.applicationContext)
>> .idleTimeout(60, TimeUnit.SECONDS)
>> .build()
>> return MyClient(channel)
>> }
>> To anyone who might see this, does that seem like a plausible explanation?
>> 
>> 
>> On Wednesday, January 16, 2019 at 7:30:42 PM UTC-6, davis@gmail.com 
>>  wrote:
>> I believe I may not understand something about how gRPC Channels, Stubs, And 
>> Transports work. I have an Android app that creates a channel and a single 
>> blocking stub and injects it with dagger when the application is 
>> initialized. When I need to make a grpc call, I have a method in my client, 
>> that calls a method with that stub. After the app is idle a while, all of my 
>> calls return DEADLINE_EXCEEDED errors, though there are no calls showing up 
>> in the server logs.
>> 
>> @Singleton
>> @Provides
>> fun providesMyClient(app: Application): MyClient {
>> val channel = AndroidChannelBuilder
>> .forAddress("example.com ", 443)
>> .overrideAuthority("example.com ")
>> .context(app.applicationContext)
>> .build()
>> return MyClient(channel)
>> }
>> Where my client class has a function to return a request with a deadline:
>> 
>> class MyClient(channel: ManagedChannel) {
>>     private val blockingStub: MyServiceGrpc.MyServiceBlockingStub =
>>         MyServiceGrpc.newBlockingStub(channel)
>>
>>     fun getStuff(): StuffResponse =
>>         blockingStub
>>             .withDeadlineAfter(7, TimeUnit.SECONDS)
>>             .getStuff(stuffRequest())
>>
>>     fun getOtherStuff(): StuffResponse =
>>         blockingStub
>>             .withDeadlineAfter(7, TimeUnit.SECONDS)
>>             .getOtherStuff(stuffRequest())
>> }
>> I make the calls to the server inside a LiveData class in My Repository, 
>> where the call looks like this: myClient.getStuff()
>> 
>> I am guessing that the channel loses its connection at some point, and then 
>> all of the subsequent stubs simply can't connect, but I don't see anywhere 
>> in the AndroidChannelBuilder documentation that talks about how to handle 
>> this (I believed it reconnected automatically). Is it possible that the 
>> channel I use to create my blocking stub gets stale, and I should be 
>> creating a new blocking stub each time I call getStuff()? Any help in 
>> understanding this would be greatly appreciated.
>> 
>> 

Re: [grpc-io] Re: gRPC Android stub creation best practice? DEADLINE_EXCEEDED but no request made to server

2019-01-17 Thread robert engels
Yes, it might be more efficient to use keep-alives rather than destroying 
and rebuilding the connections, but that will depend on your setup/usage.

> On Jan 17, 2019, at 2:31 PM, davis.bry...@gmail.com wrote:
> 
> After researching a bit, I believe the issue was that the proxy on the server 
> was closing the connection after a few minutes of idle time, and the client 
> ManagedChannel didn't automatically detect that and connect again when that 
> happened. When constructing the ManagedChannel, I added an idleTimeout to it, 
> which will proactively kill the connection when it's idle, and reestablish it 
> when it's needed again, and this seems to solve the problem. So the new 
> channel construction looks like this:
> 
> @Singleton
> @Provides
> fun providesMyClient(app: Application): MyClient {
> val channel = AndroidChannelBuilder
> .forAddress("example.com", 443)
> .overrideAuthority("example.com")
> .context(app.applicationContext)
> .idleTimeout(60, TimeUnit.SECONDS)
> .build()
> return MyClient(channel)
> }
> To anyone who might see this, does that seem like a plausible explanation?
> 
> 
> On Wednesday, January 16, 2019 at 7:30:42 PM UTC-6, davis@gmail.com wrote:
> I believe I may not understand something about how gRPC Channels, Stubs, And 
> Transports work. I have an Android app that creates a channel and a single 
> blocking stub and injects it with dagger when the application is initialized. 
> When I need to make a grpc call, I have a method in my client, that calls a 
> method with that stub. After the app is idle a while, all of my calls return 
> DEADLINE_EXCEEDED errors, though there are no calls showing up in the server 
> logs.
> 
> @Singleton
> @Provides
> fun providesMyClient(app: Application): MyClient {
> val channel = AndroidChannelBuilder
> .forAddress("example.com", 443)
> .overrideAuthority("example.com")
> .context(app.applicationContext)
> .build()
> return MyClient(channel)
> }
> Where my client class has a function to return a request with a deadline:
> 
> class MyClient(channel: ManagedChannel) {
>     private val blockingStub: MyServiceGrpc.MyServiceBlockingStub =
>         MyServiceGrpc.newBlockingStub(channel)
>
>     fun getStuff(): StuffResponse =
>         blockingStub
>             .withDeadlineAfter(7, TimeUnit.SECONDS)
>             .getStuff(stuffRequest())
>
>     fun getOtherStuff(): StuffResponse =
>         blockingStub
>             .withDeadlineAfter(7, TimeUnit.SECONDS)
>             .getOtherStuff(stuffRequest())
> }
> I make the calls to the server inside a LiveData class in My Repository, 
> where the call looks like this: myClient.getStuff()
> 
> I am guessing that the channel loses its connection at some point, and then 
> all of the subsequent stubs simply can't connect, but I don't see anywhere in 
> the AndroidChannelBuilder documentation that talks about how to handle this 
> (I believed it reconnected automatically). Is it possible that the channel I 
> use to create my blocking stub gets stale, and I should be creating a new 
> blocking stub each time I call getStuff()? Any help in understanding this 
> would be greatly appreciated.
> 
> 

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/1014FB70-8708-42FC-B810-8951D4BF7B99%40earthlink.net.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: gRPC Android stub creation best practice? DEADLINE_EXCEEDED but no request made to server

2019-01-17 Thread davis . bryant


After researching a bit, I believe the issue was that the proxy on the 
server was closing the connection after a few minutes of idle time, and the 
client ManagedChannel didn't automatically detect that and connect again 
when that happened. When constructing the ManagedChannel, I added an 
idleTimeout to it, which will proactively kill the connection when it's 
idle, and reestablish it when it's needed again, and this seems to solve 
the problem. So the new channel construction looks like this:

@Singleton
@Provides
fun providesMyClient(app: Application): MyClient {
    val channel = AndroidChannelBuilder
        .forAddress("example.com", 443)
        .overrideAuthority("example.com")
        .context(app.applicationContext)
        .idleTimeout(60, TimeUnit.SECONDS)
        .build()
    return MyClient(channel)
}

To anyone who might see this, does that seem like a plausible explanation?


On Wednesday, January 16, 2019 at 7:30:42 PM UTC-6, davis@gmail.com 
wrote:
>
> I believe I may not understand something about how gRPC Channels, Stubs, 
> And Transports work. I have an Android app that creates a channel and a 
> single blocking stub and injects it with dagger when the application is 
> initialized. When I need to make a grpc call, I have a method in my client, 
> that calls a method with that stub. After the app is idle a while, all of 
> my calls return DEADLINE_EXCEEDED errors, though there are no calls showing 
> up in the server logs.
>
> @Singleton
> @Provides
> fun providesMyClient(app: Application): MyClient {
>     val channel = AndroidChannelBuilder
>         .forAddress("example.com", 443)
>         .overrideAuthority("example.com")
>         .context(app.applicationContext)
>         .build()
>     return MyClient(channel)
> }
>
> Where my client class has a function to return a request with a deadline:
>
> class MyClient(channel: ManagedChannel) {
>     private val blockingStub: MyServiceGrpc.MyServiceBlockingStub =
>         MyServiceGrpc.newBlockingStub(channel)
>
>     fun getStuff(): StuffResponse =
>         blockingStub
>             .withDeadlineAfter(7, TimeUnit.SECONDS)
>             .getStuff(stuffRequest())
>
>     fun getOtherStuff(): StuffResponse =
>         blockingStub
>             .withDeadlineAfter(7, TimeUnit.SECONDS)
>             .getOtherStuff(stuffRequest())
> }
>
> I make the calls to the server inside a LiveData class in My Repository, 
> where the call looks like this: myClient.getStuff()
>
> I am guessing that the channel loses its connection at some point, and 
> then all of the subsequent stubs simply can't connect, but I don't see 
> anywhere in the AndroidChannelBuilder documentation that talks about how to 
> handle this (I believed it reconnected automatically). Is it possible that 
> the channel I use to create my blocking stub gets stale, and I should be 
> creating a new blocking stub each time I call getStuff()? Any help in 
> understanding this would be greatly appreciated.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/1202aad5-4897-4bbb-a238-34edae74e368%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] Why does it take a long time to connect sometimes in C++?

2019-01-17 Thread robert engels
If you are running a tight loop with lots of connection attempts, there are a 
lot of reasons it can fail, usually resources (number of connections): while 
the OS is waiting to close the existing connections, future attempts will 
fail.

> On Jan 17, 2019, at 1:54 PM, br...@forceconstant.com wrote:
> 
> I don't really understand the question, but  I have tested retry by just 
> starting and stopping server.
> 
> 
> On Thursday, January 17, 2019 at 2:46:44 PM UTC-5, robert engels wrote:
> How are you testing the retry - pulling plug? iptables ?
> 
>> On Jan 17, 2019, at 1:39 PM, br...@forceconstant.com <> wrote:
>> 
>> 
>> I have a gRPC streaming client, that has to handle server going up and down, 
>> so I have a while loop, but sometimes it works fine, but other times it 
>> takes 15 seconds to connect even on the same machine. Is it something wrong 
>> with my code, or how can I debug? As you can see below I have debug to print 
>> out channel state, and is mostly GRPC_CHANNEL_CONNECTING  or 
>> GRPC_CHANNEL_TRANSIENT_FAILURE , but still can take 15 seconds to connect. I 
>> haven't found a pattern. Can someone tell me how I get it to connect faster 
>> and more reliably?  Thanks.  Note I am using a deadline, so that I can shut 
>> everything down at the end gracefully, and not have it block forever.
>> 
>> 
>> 
>> ...
>> 
>> channel = grpc::CreateChannel(asServerAddress, channel_creds);
>> 
>>  while ((channel->GetState(true) != GRPC_CHANNEL_READY))
>> {
>>   time_point deadline = std::chrono::system_clock::now() + 
>> std::chrono::milliseconds(1000);
>>   
>>   channel->WaitForConnected(deadline);
>>   std::cout << "." << channel->GetState(false) << std::flush ;
>> }
>> std::cout << "Client Connected" << std::endl;
>> 
>> 
>> 
>> 
>> 
> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to grpc-io+unsubscr...@googlegroups.com 
> .
> To post to this group, send email to grpc-io@googlegroups.com 
> .
> Visit this group at https://groups.google.com/group/grpc-io 
> .
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/grpc-io/6661bf81-8734-4c0f-a6a0-fc5b1adfce8e%40googlegroups.com
>  
> .
> For more options, visit https://groups.google.com/d/optout 
> .

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/315A962D-4286-47E2-921C-305AE214574C%40earthlink.net.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] Why does it take a long time to connect sometimes in C++?

2019-01-17 Thread brian
Another point: when I look at connections with netstat, I don't see gRPC even 
trying to connect until the connection actually happens. So I am not sure what 
it is waiting for.

On Thursday, January 17, 2019 at 2:54:28 PM UTC-5, br...@forceconstant.com 
wrote:
>
> I don't really understand the question, but  I have tested retry by just 
> starting and stopping server.
>
>
> On Thursday, January 17, 2019 at 2:46:44 PM UTC-5, robert engels wrote:
>>
>> How are you testing the retry - pulling plug? iptables ?
>>
>> On Jan 17, 2019, at 1:39 PM, br...@forceconstant.com wrote:
>>
>>
>> I have a gRPC streaming client, that has to handle server going up and 
>> down, so I have a while loop, but sometimes it works fine, but other times 
>> it takes 15 seconds to connect even on the same machine. Is it something 
>> wrong with my code, or how can I debug? As you can see below I have debug 
>> to print out channel state, and is mostly GRPC_CHANNEL_CONNECTING  or 
>> GRPC_CHANNEL_TRANSIENT_FAILURE , but still can take 15 seconds to 
>> connect. I haven't found a pattern. Can someone tell me how I get it to 
>> connect faster and more reliably?  Thanks.  Note I am using a deadline, so 
>> that I can shut everything down at the end gracefully, and not have it 
>> block forever.
>>
>>
>>
>> ...
>>
>> channel = grpc::CreateChannel(asServerAddress, channel_creds);
>>
>>  while ((channel->GetState(true) != GRPC_CHANNEL_READY))
>> {
>>   time_point deadline = std::chrono::system_clock::now() + 
>> std::chrono::milliseconds(1000);
>>   
>>   channel->WaitForConnected(deadline);
>>   std::cout << "." << channel->GetState(false) << std::flush ;
>> }
>> std::cout << "Client Connected" << std::endl;
>>
>> 
>>
>>
>>
>>
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/6f75326d-163b-4af0-b63a-1f812ec2a02c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] Why does it take a long time to connect sometimes in C++?

2019-01-17 Thread brian
I don't really understand the question, but I have tested retry by just 
starting and stopping the server.


On Thursday, January 17, 2019 at 2:46:44 PM UTC-5, robert engels wrote:
>
> How are you testing the retry - pulling plug? iptables ?
>
> On Jan 17, 2019, at 1:39 PM, br...@forceconstant.com  wrote:
>
>
> I have a gRPC streaming client, that has to handle server going up and 
> down, so I have a while loop, but sometimes it works fine, but other times 
> it takes 15 seconds to connect even on the same machine. Is it something 
> wrong with my code, or how can I debug? As you can see below I have debug 
> to print out channel state, and is mostly GRPC_CHANNEL_CONNECTING  or 
> GRPC_CHANNEL_TRANSIENT_FAILURE , but still can take 15 seconds to 
> connect. I haven't found a pattern. Can someone tell me how I get it to 
> connect faster and more reliably?  Thanks.  Note I am using a deadline, so 
> that I can shut everything down at the end gracefully, and not have it 
> block forever.
>
>
>
> ...
>
> channel = grpc::CreateChannel(asServerAddress, channel_creds);
>
>  while ((channel->GetState(true) != GRPC_CHANNEL_READY))
> {
>   time_point deadline = std::chrono::system_clock::now() + 
> std::chrono::milliseconds(1000);
>   
>   channel->WaitForConnected(deadline);
>   std::cout << "." << channel->GetState(false) << std::flush ;
> }
> std::cout << "Client Connected" << std::endl;
>
> 
>
>
>
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/6661bf81-8734-4c0f-a6a0-fc5b1adfce8e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] Why does it take a long time to connect sometimes in C++?

2019-01-17 Thread robert engels
How are you testing the retry - pulling plug? iptables ?

> On Jan 17, 2019, at 1:39 PM, br...@forceconstant.com wrote:
> 
> 
> I have a gRPC streaming client, that has to handle server going up and down, 
> so I have a while loop, but sometimes it works fine, but other times it takes 
> 15 seconds to connect even on the same machine. Is it something wrong with my 
> code, or how can I debug? As you can see below I have debug to print out 
> channel state, and is mostly GRPC_CHANNEL_CONNECTING  or 
> GRPC_CHANNEL_TRANSIENT_FAILURE , but still can take 15 seconds to connect. I 
> haven't found a pattern. Can someone tell me how I get it to connect faster 
> and more reliably?  Thanks.  Note I am using a deadline, so that I can shut 
> everything down at the end gracefully, and not have it block forever.
> 
> 
> 
> ...
> 
> channel = grpc::CreateChannel(asServerAddress, channel_creds);
> 
>  while ((channel->GetState(true) != GRPC_CHANNEL_READY))
> {
>   time_point deadline = std::chrono::system_clock::now() + 
> std::chrono::milliseconds(1000);
>   
>   channel->WaitForConnected(deadline);
>   std::cout << "." << channel->GetState(false) << std::flush ;
> }
> std::cout << "Client Connected" << std::endl;
> 
> 
> 
> 
> 

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/C58BBE6F-893E-4074-8650-DF3B37DAE013%40earthlink.net.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Why does it take a long time to connect sometimes in C++?

2019-01-17 Thread brian

I have a gRPC streaming client that has to handle the server going up and 
down, so I have a while loop. Sometimes it works fine, but other times it 
takes 15 seconds to connect, even on the same machine. Is something wrong 
with my code, and how can I debug it? As you can see below, I print the 
channel state, which is mostly GRPC_CHANNEL_CONNECTING or 
GRPC_CHANNEL_TRANSIENT_FAILURE, but it can still take 15 seconds to 
connect. I haven't found a pattern. Can someone tell me how to get it to 
connect faster and more reliably? Thanks. Note that I am using a deadline 
so that I can shut everything down gracefully at the end and not have it 
block forever.



...

channel = grpc::CreateChannel(asServerAddress, channel_creds);

while (channel->GetState(true) != GRPC_CHANNEL_READY)
{
  std::chrono::system_clock::time_point deadline =
      std::chrono::system_clock::now() + std::chrono::milliseconds(1000);

  channel->WaitForConnected(deadline);
  std::cout << "." << channel->GetState(false) << std::flush;
}
std::cout << "Client Connected" << std::endl;
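One likely explanation for the ~15-second gaps is gRPC's documented connection backoff: after each failed connection attempt the channel waits roughly 1 s growing by a factor of 1.6 (with jitter, capped at 120 s by default) before retrying. This stdlib-only sketch tallies that schedule; the constants mirror the documented defaults but are illustrative, not read from the library:

```cpp
#include <algorithm>

// Documented gRPC reconnect backoff defaults (jitter omitted for clarity):
// INITIAL_BACKOFF = 1 s, MULTIPLIER = 1.6, MAX_BACKOFF = 120 s.
double backoff_before_attempt(int failures) {
    double b = 1.0;
    for (int i = 0; i < failures; ++i)
        b = std::min(b * 1.6, 120.0);
    return b;
}

// Total time spent backing off across the first n failed attempts.
double total_backoff(int n) {
    double total = 0.0;
    for (int i = 0; i < n; ++i)
        total += backoff_before_attempt(i);
    return total;  // n = 5 gives 1 + 1.6 + 2.56 + 4.10 + 6.55 ~= 15.8 s
}
```

Five consecutive failed attempts already account for roughly 15.8 s of waiting, which matches the delay described above. If the channel arguments GRPC_ARG_INITIAL_RECONNECT_BACKOFF_MS and GRPC_ARG_MAX_RECONNECT_BACKOFF_MS are available in your gRPC version, they let you tighten this schedule via grpc::ChannelArguments::SetInt before creating the channel.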






[grpc-io] errors when linking with protobuf-lite

2019-01-17 Thread joe . parness
I have built gRPC with protobuf-lite support, adding the 
-DGRPC_USE_PROTO_LITE compiler switch along with "option optimize_for = 
LITE_RUNTIME;" in my protobuf definition file. My program compiles OK but 
produces the following undefined references at link time:

grpc_ares_wrapper.cc:(.text+0x7c8): undefined reference to `ares_inet_ntop'
grpc_ares_wrapper.cc:(.text+0x9a7): undefined reference to `ares_strerror'
grpc_ares_wrapper.cc:(.text+0xb44): undefined reference to `ares_parse_srv_reply'
grpc_ares_wrapper.cc:(.text+0xbd7): undefined reference to `ares_gethostbyname'
grpc_ares_wrapper.cc:(.text+0xc33): undefined reference to `ares_gethostbyname'
grpc_ares_wrapper.cc:(.text+0xc6e): undefined reference to `ares_free_data'
grpc_ares_wrapper.cc:(.text+0xe02): undefined reference to `ares_parse_txt_reply_ext'
grpc_ares_wrapper.cc:(.text+0xfbd): undefined reference to `ares_free_data'
grpc_ares_wrapper.cc:(.text+0x155f): undefined reference to `ares_set_servers_ports'
grpc_ares_wrapper.cc:(.text+0x16cf): undefined reference to `ares_gethostbyname'
grpc_ares_wrapper.cc:(.text+0x173a): undefined reference to `ares_query'
grpc_ares_wrapper.cc:(.text+0x17bb): undefined reference to `ares_search'
grpc_ares_wrapper.cc:(.text+0x1e90): undefined reference to `ares_library_init'


Note that I am linking against the following gRPC libs:

libgrpc++.a
libgrpc.a
libaddress_sorting.a
libgpr.a
libgrpc_cronet.a
libgrpc++_cronet.a
libgrpc++_error_details.a
libgrpc_plugin_support.a
libgrpcpp_channelz.a
libgrpc++_reflection.a
libgrpc_unsecure.a
libgrpc++_unsecure.a
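All of the undefined symbols (ares_inet_ntop, ares_gethostbyname, and so on) come from c-ares, the DNS resolver library gRPC uses, which is missing from the library list. Assuming a typical static build, the fix is to add the c-ares archive after libgrpc.a, since a static linker resolves symbols left to right. A sketch of the link command; library names, paths, and the exact set of dependencies are assumptions to adjust for your install:

```shell
# Link-command sketch -- gRPC's third_party build may install the archive
# as libcares.a; place it AFTER the gRPC archives that reference it.
g++ -o myprog main.o my.pb.o my.grpc.pb.o \
    -lgrpc++ -lgrpc -laddress_sorting -lgpr \
    -lcares -lprotobuf-lite -lssl -lcrypto -lz
```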


-- 
This email and its contents are confidential. If you are not the intended 
recipient, please do not disclose or use the information within this email 
or its attachments. If you have received this email in error, please report 
the error to the sender by return email and delete this communication from 
your records.



[grpc-io] Re: grpc-java DnsNameResolver behavior (Kubernetes pod failing scenario behind Kube DNS)

2019-01-17 Thread 'Kun Zhang' via grpc.io
Even though the first DNS refresh may come too early to notice the new 
address, as long as the old address is still returned, RoundRobin will keep 
trying to connect to it (subject to exponential back-off of Subchannel 
reconnections). Those attempts will of course fail, but each failure 
triggers a new DNS refresh, so eventually you will pick up the new address.

If you have waited long enough and still have not seen the new address, it 
may be due to the TTL of the DNS record or, more likely, to the JVM's DNS 
caching.
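A quick way to rule out JVM-level caching is to cap the networkaddress.cache.ttl security property (which InetAddress consults) before the first lookup the JVM performs. A minimal sketch; the 30-second value is illustrative, not a recommendation:

```java
import java.security.Security;

public class DnsCacheTtlDemo {
    public static void main(String[] args) {
        // Must run before the JVM's first InetAddress lookup:
        // cache successful lookups for at most 30 s, and do not
        // cache failed lookups at all.
        Security.setProperty("networkaddress.cache.ttl", "30");
        Security.setProperty("networkaddress.cache.negative.ttl", "0");

        System.out.println(Security.getProperty("networkaddress.cache.ttl"));
        // prints "30"
    }
}
```

The same properties can be set once for all applications in the JRE's java.security file instead of in code.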

On Tuesday, January 15, 2019 at 2:36:39 PM UTC-8, Yee-Ning Cheng wrote:
>
> Hi,
>
> I have a gRPC client using the default DnsNameResolver and 
> RoundRobinLoadBalancer that is connected to gRPC servers on Kubernetes 
> using the Kube DNS endpoint.  The servers are deployed as Kube pods and may 
> fail.  I see that when a pod fails, the onStateChange gets called to 
> refresh the DnsNameResolver.  The problem is that the new Kube pod that 
> gets spun up in the old pod's place is not up yet when the resolver is 
> trying to refresh the subchannel state and doesn't see the new pod.  And 
> thus, the client is not able to see the new pod and does not connect to it.
>
> Is there a configuration I am missing or is there a way to refresh the 
> resolver on a scheduled timer?
>
> Thanks,
>
> Yee-Ning
>
