[grpc-io] Grpc gateway implementation

2018-08-20 Thread nandini . dbit


Hi,

   My gRPC server is written in Java, and the gRPC gateway is written in 
Go. 
  
   I have been following the grpc-gateway documentation at the URL below:
  
 https://grpc-ecosystem.github.io/grpc-gateway/docs/usage.html


When I hit the URL from my browser, it gives an error saying:

{"error":"Method not found", "code":12}

Please suggest.
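(Editor's note, a sketch under stated assumptions: status code 12 is UNIMPLEMENTED, which usually means the path the gateway forwards does not match any method the Java server registers — package, service, and method names must match exactly — or the HTTP mapping is missing from the proto. The names below are hypothetical; the shape of the annotation consumed by protoc-gen-grpc-gateway is:)

```proto
syntax = "proto3";

package example.v1;  // must match the package the Java server serves

import "google/api/annotations.proto";

service EchoService {
  rpc Echo(EchoRequest) returns (EchoReply) {
    // Without an http rule, a browser URL cannot be routed to the RPC
    // and the gateway replies {"error":"Method not found","code":12}.
    option (google.api.http) = {
      post: "/v1/echo"
      body: "*"
    };
  }
}

message EchoRequest { string message = 1; }
message EchoReply { string message = 1; }
```

It is also worth double-checking that the gateway is dialing the host and port the Java server actually listens on.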







-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/baf1ec07-e3a5-4bb8-ada2-036e838c2876%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: gRPC java multiple bi-directional streams share same channel is faster than 1 channel per streaming

2018-08-20 Thread 'Carl Mastrangelo' via grpc.io
1 TCP connection for all streams (really RPCs) should be faster.  

On Monday, August 20, 2018 at 1:15:14 PM UTC-7, eleano...@gmail.com wrote:
>
> Hi Carl, 
>
> It is hard to show my code as I have a wrapper API on top of grpc. 
>
> However, are you suggesting that using 1 tcp connection per stream should 
> be faster than using 1 tcp connection for all streams?
>
> On Monday, August 20, 2018 at 11:14:43 AM UTC-7, Carl Mastrangelo wrote:
>>
>> Can you show your code?   This may just be a threading problem.  
>>
>> On Saturday, August 18, 2018 at 9:02:59 PM UTC-7, eleano...@gmail.com 
>> wrote:
>>>
>>> Hi Srini, 
>>>
>>> The way I do it:
>>> For a single connection:
>>> 1. Send 1 request via the request StreamObserver, to let the initial 
>>> connection get established.
>>> 2. Start the timer, then send 10,000 requests.
>>> 3. End the timer when all results have been seen in the response 
>>> StreamObserver.onNext() that the client passed to the server; the logic 
>>> is just System.out.println.
>>>
>>> For multiple connections:
>>> 1. Send 1 request for each channel created, to let the initial 
>>> connection get established.
>>> 2. Start the timer, then send 1000 requests per connection; with 10 
>>> connections, that is 10,000 requests in total.
>>> 3. End the timer when all the results have been seen in the response 
>>> StreamObserver.onNext() that the client passed to the server, for all 
>>> connections; the logic is just System.out.println.
>>>
>>> Thanks!
>>>
>>> On Saturday, August 18, 2018 at 8:37:22 PM UTC-7, Srini Polavarapu wrote:

 Could you provide some stats on your observation and how you are 
 measuring this? Two streams sharing a connection could be faster than 
 streams on separate connections for these reasons:
 - One less socket to service: fewer system calls, less context switching, 
 fewer cache misses, etc.
 - Better batching of data from different streams on a single connection, 
 resulting in better connection utilization and a larger average packet 
 size on the wire.

 On Friday, August 17, 2018 at 3:30:17 PM UTC-7, eleano...@gmail.com 
 wrote:
>
> Hi Carl, 
>
> Thanks for the very detailed explanation! My question is why I 
> observed that using a separate TCP connection per stream was SLOWER!
>
> If a single TCP connection for multiple streams is faster (regardless 
> of the reason), will the connection get saturated? E.g., too many 
> streams sending on the same TCP connection.
>
>
> On Friday, August 17, 2018 at 3:25:54 PM UTC-7, Carl Mastrangelo wrote:
>>
>> I may have misinterpreted your question; are you asking why gRPC 
>> prefers to use a single connection, or why you observed that using a 
>> separate TCP connection per stream was faster?
>>
>> If the first, the reason is that the number of TCP connections may be 
>> limited.   For example, making gRPC requests from the browser may limit 
>> how many connections can exist.   Also, a proxy between the client and 
>> server may limit the number of connections.   Connection setup and 
>> teardown is slower due to the TCP 3-way handshake, so gRPC (really 
>> HTTP/2) prefers to reuse a connection.
>>
>> If the second, then I am not sure.   If you are benchmarking with 
>> Java, I strongly recommend using the JMH benchmarking framework.  It's 
>> difficult to set up, but it provides the most accurate, believable 
>> benchmark results.
>>
>> On Friday, August 17, 2018 at 2:09:20 PM UTC-7, eleano...@gmail.com 
>> wrote:
>>>
>>> Hi Carl, 
>>>
>>> Thanks for the explanation; however, that still does not explain why 
>>> using a single TCP connection for multiple StreamObservers is faster 
>>> than using 1 TCP connection per stream. 
>>>
>>> On Friday, August 17, 2018 at 12:45:32 PM UTC-7, Carl Mastrangelo 
>>> wrote:

 gRPC does connection management for you.  If you don't have any 
 active RPCs, it will not actively create connections for you.  

 You can force gRPC to create a connection eagerly by calling 
 ManagedChannel.getState(true), which requests that the channel enter 
 the ready state. 

 Do note that in Java, class loading is done lazily, so you may be 
 measuring connection time plus class-loading time if you only measure 
 on the first connection.
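(Editor's note: the warm-up step described above could be sketched as follows; the target host, port, and plaintext setting are assumptions for illustration, and the snippet needs the grpc-java dependency, so it is shown for shape only.)

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public final class WarmUp {
    // Ask the channel to connect eagerly so that connection setup (and
    // first-call class loading) is excluded from later measurements.
    static ManagedChannel warm(String host, int port) {
        ManagedChannel channel = ManagedChannelBuilder.forAddress(host, port)
                .usePlaintext()
                .build();
        channel.getState(true); // true = request a connection now
        return channel;
    }
}
```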

 On Friday, August 17, 2018 at 9:17:16 AM UTC-7, eleano...@gmail.com 
 wrote:
>
> Hi, 
>
> I am doing some experiments with gRPC Java to determine the right 
> gRPC call type to use. 
>
> Here is my finding:
>
> Creating 4 sets of StreamObservers (1 for the client to send requests, 
> 1 for the server to send responses) and sending on the same channel is 
> slightly faster than sending on 1 channel per stream. 
> I have already eliminated the time of creating the initial TCP 
> connection by making an initial call to let the connection be 
> established, then starting the timer.
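(Editor's note: the measurement procedure quoted above — warm up, start the timer, send N requests, stop when the response observer has seen N replies — can be sketched as a small stdlib-only harness. All names are hypothetical; the gRPC calls are stood in for by a callback, and the response StreamObserver's onNext() is represented by a latch it would count down.)

```java
import java.util.concurrent.CountDownLatch;
import java.util.function.IntConsumer;

public final class StreamTimer {
    // Times n sends and waits until the latch (counted down once per
    // response seen in onNext()) reaches zero, mirroring steps 2 and 3.
    public static long timeNanos(int n, IntConsumer send, CountDownLatch done) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            send.accept(i); // e.g. requestObserver.onNext(buildRequest(i))
        }
        try {
            done.await(); // released when all n responses have been seen
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return System.nanoTime() - start;
    }
}
```

In a real benchmark the warm-up request from step 1 would happen before calling timeNanos, and (per Carl's advice) JMH would give far more trustworthy numbers than a single hand-rolled timing pass.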

[grpc-io] Re: how to use grpc java load balancing library with a list of server ip address directly

2018-08-20 Thread 'Carl Mastrangelo' via grpc.io
Look in the gRPC source code for DirectAddressNameResolverFactory, which 
you can copy into your project.  This does what you want.
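(Editor's note: a stdlib-only sketch of the static-address half of that approach, with hypothetical names. It parses a fixed "host:port,host:port" target string into socket addresses; a custom NameResolver — such as a copy of DirectAddressNameResolverFactory — would hand these addresses to the channel so the round-robin LoadBalancer can spread calls across them.)

```java
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.List;

public final class StaticTargets {
    // Parse "host:port,host:port,..." into unresolved socket addresses,
    // the form a NameResolver would report to the channel.
    public static List<InetSocketAddress> parse(String targets) {
        List<InetSocketAddress> out = new ArrayList<>();
        for (String t : targets.split(",")) {
            int i = t.lastIndexOf(':');
            out.add(InetSocketAddress.createUnresolved(
                    t.substring(0, i), Integer.parseInt(t.substring(i + 1))));
        }
        return out;
    }
}
```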

On Monday, August 20, 2018 at 1:11:41 PM UTC-7, eleano...@gmail.com wrote:
>
> Hi, 
>
> I would like to use the gRPC Java load balancing library. I looked at the 
> example code; it looks like below:
>
> public HelloWorldClient(String zkAddr) {
>   this(ManagedChannelBuilder.forTarget(zkAddr)
>   .loadBalancerFactory(RoundRobinLoadBalancerFactory.getInstance())
>   .nameResolverFactory(new ZkNameResolverProvider())
>   .usePlaintext(true));
> }
>
>
> I would like to pass ManagedChannelBuilder a list of service IP addresses 
> directly, rather than letting a NameResolver resolve them for me. Then I 
> still want to use 
> .loadBalancerFactory(RoundRobinLoadBalancerFactory.getInstance()).
>
> Is there a way to do it?
>
>
> Thanks a lot!
>
>



[grpc-io] Re: Establishing multiple grpc subchannels for a single resolved host

2018-08-20 Thread 'Srini Polavarapu' via grpc.io
Keepalive should work here. You'll have to configure a few other params if 
you have long-lived streams with low activity. Along with 
keepalive_permit_without_calls, you may have to configure 
max_pings_without_data or min_sent_ping_interval_without_data too. You may 
also have to configure min_recv_ping_interval_without_data on the server 
side. See the details in this document: 
https://github.com/grpc/grpc/blob/master/doc/keepalive.md
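(Editor's note: combining Alysha's client config with the parameters Srini mentions could look like the channel-args sketch below. Values are illustrative, and the "grpc.*" string names are the channel-arg spellings of the keepalive.md knobs; double-check them against your gRPC version.)

```
# client channel args
'grpc.keepalive_time_ms': 1000,
'grpc.keepalive_timeout_ms': 5000,
'grpc.keepalive_permit_without_calls': 1,
'grpc.http2.max_pings_without_data': 0,        # 0 = no limit on pings without data
'grpc.http2.min_time_between_pings_ms': 1000,  # i.e. min_sent_ping_interval_without_data

# server channel args
'grpc.http2.min_ping_interval_without_data_ms': 1000  # i.e. min_recv_ping_interval_without_data
```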

On Monday, August 20, 2018 at 6:23:22 AM UTC-7, alysha@shopify.com 
wrote:
>
> Hey Srini,
>
> I've tested pretty aggressive KeepAlive config with the following 
> parameters:
>
> 'grpc.http2.min_time_between_pings_ms': 1000,
> 'grpc.keepalive_time_ms': 1000,
> 'grpc.keepalive_permit_without_calls': 1
>
> Is there anything I'm missing? Ideally I would like this solution to 
> handle both explicit RST and also things like firewalls blackholing 
> inactive connections (which we've seen happen in the past), so getting 
> keepalive to detect a dead connection would be great.
>
> Thanks,
> Alysha
>
> On Friday, August 17, 2018 at 8:17:43 PM UTC-4, Srini Polavarapu wrote:
>>
>> Hi Alysha,
>>
>> How did you confirm that the client is going into backoff and that it is 
>> indeed receiving a RST when nginx goes away? Have you looked at the logs 
>> gRPC generates when this happens? One possibility is that nginx doesn't 
>> send a RST and the client doesn't know that the connection is broken 
>> until a TCP timeout occurs. Using keepalive will help in this case.
>>
>> You can try using wait_for_ready=false so the call fails immediately and 
>> you can retry.
>>
>> A recent PR allows you to reset the backoff period: 
>> https://github.com/grpc/grpc/pull/16225. It is experimental and doesn't 
>> have a Python or Ruby API, so it can't be of immediate help.
>>
>> On Friday, August 17, 2018 at 12:58:12 PM UTC-7, alysha@shopify.com 
>> wrote:
>>>
>>> Hey Carl,
>>>
>>> This is with L7 nginx balancing; the reason we moved to nginx from L4 
>>> balancers was so we could do per-call balancing (instead of 
>>> per-connection with L4).
>>>
>>> >  In an ideal world, nginx would send a GOAWAY frame to both the client 
>>> and the server, and allow all the RPCs to complete before tearing down the 
>>> connection.
>>>
>>>  I agree a GOAWAY would be better but it seems like nginx doesn't do 
>>> that (at least yet), they just RST the connection :(
>>>
>>> > The client knows how to reschedule an unstarted RPC onto a different 
>>> connection, without returning an UNAVAILABLE.  
>>>
>>> Even when we were using L4 it seemed like a GOAWAY from the Go server 
>>> would put the Core clients in a backoff state instead of retrying 
>>> immediately. The only solution that worked was a round-robin over multiple 
>>> connections and a slow-enough rolling restart so the connections could 
>>> re-establish before the next one died.
>>>
>>> > When you say multiple connections to a single IP, does that mean 
>>> multiple nginx instances listening on different ports?
>>>
>>> No, it's a pool of ~20 ingress nginx instances with an L4 load balancer, 
>>> so traffic looks like client -> L4 LB -> nginx L7 -> backend GRPC pod. The 
>>> problem is the L4 LB in front of nginx has a single public IP.
>>>
>>> > I'm most familiar with Java, which can actually do what you want.  The 
>>> normal way is to create a custom NameResolver that returns multiple 
>>> addresses for a single name, which a RoundRobin load balancer will use
>>>
>>> Yeah I considered writing something similar in Core but I was worried it 
>>> wouldn't be adopted upstream because of the move to external LBs? It's very 
>>> tough (impossible?) to add new resolvers to Ruby or Python without 
>>> rebuilding the whole extension, and we're pretty worried about maintaining 
>>> a fork of the C++ implementation. It's nice to hear the approach has some 
>>> merits, I might experiment with it.
>>>
>>> Thanks,
>>> Alysha
>>>
>>> On Friday, August 17, 2018 at 3:42:31 PM UTC-4, Carl Mastrangelo wrote:

 Hi Alysha,

 Do you know if nginx is balancing at L4 or L7?    In an ideal 
 world, nginx would send a GOAWAY frame to both the client and the server, 
 and allow all the RPCs to complete before tearing down the connection.  
  The client knows how to reschedule an unstarted RPC onto a different 
 connection, without returning an UNAVAILABLE.  

 When you say multiple connections to a single IP, does that mean 
 multiple nginx instances listening on different ports?

 I'm most familiar with Java, which can actually do what you want.  The 
 normal way is to create a custom NameResolver that returns multiple 
 addresses for a single name, which a RoundRobin load balancer will use.  
 It sounds like you aren't using Java, but since the implementations are 
 all similar there may be a way to do so.

[grpc-io] Re: gRPC java multiple bi-directional streams share same channel is faster than 1 channel per streaming

2018-08-20 Thread eleanore . jin
Hi Carl, 

It is hard to show my code as I have a wrapper API on top of grpc. 

However, are you suggesting that using 1 tcp connection per stream should 
be faster than using 1 tcp connection for all streams?

On Monday, August 20, 2018 at 11:14:43 AM UTC-7, Carl Mastrangelo wrote:
>
> Can you show your code?   This may just be a threading problem.  
>
> On Saturday, August 18, 2018 at 9:02:59 PM UTC-7, eleano...@gmail.com 
> wrote:
>>
>> Hi Srini, 
>>
>> The way I do it:
>> For a single connection:
>> 1. Send 1 request via the request StreamObserver, to let the initial 
>> connection get established.
>> 2. Start the timer, then send 10,000 requests.
>> 3. End the timer when all results have been seen in the response 
>> StreamObserver.onNext() that the client passed to the server; the logic 
>> is just System.out.println.
>>
>> For multiple connections:
>> 1. Send 1 request for each channel created, to let the initial 
>> connection get established.
>> 2. Start the timer, then send 1000 requests per connection; with 10 
>> connections, that is 10,000 requests in total.
>> 3. End the timer when all the results have been seen in the response 
>> StreamObserver.onNext() that the client passed to the server, for all 
>> connections; the logic is just System.out.println.
>>
>> Thanks!
>>
>> On Saturday, August 18, 2018 at 8:37:22 PM UTC-7, Srini Polavarapu wrote:
>>>
>>> Could you provide some stats on your observation and how you are 
>>> measuring this? Two streams sharing a connection could be faster than 
>>> streams on separate connections for these reasons:
>>> - One less socket to service: fewer system calls, less context 
>>> switching, fewer cache misses, etc.
>>> - Better batching of data from different streams on a single connection, 
>>> resulting in better connection utilization and a larger average packet 
>>> size on the wire.
>>>
>>> On Friday, August 17, 2018 at 3:30:17 PM UTC-7, eleano...@gmail.com 
>>> wrote:

 Hi Carl, 

 Thanks for the very detailed explanation! My question is why I observed 
 that using a separate TCP connection per stream was SLOWER!

 If a single TCP connection for multiple streams is faster (regardless 
 of the reason), will the connection get saturated? E.g., too many 
 streams sending on the same TCP connection.


 On Friday, August 17, 2018 at 3:25:54 PM UTC-7, Carl Mastrangelo wrote:
>
> I may have misinterpreted your question; are you asking why gRPC 
> prefers to use a single connection, or why you observed that using a 
> separate TCP connection per stream was faster?
>
> If the first, the reason is that the number of TCP connections may be 
> limited.   For example, making gRPC requests from the browser may limit 
> how many connections can exist.   Also, a proxy between the client and 
> server may limit the number of connections.   Connection setup and 
> teardown is slower due to the TCP 3-way handshake, so gRPC (really 
> HTTP/2) prefers to reuse a connection.
>
> If the second, then I am not sure.   If you are benchmarking with 
> Java, I strongly recommend using the JMH benchmarking framework.  It's 
> difficult to set up, but it provides the most accurate, believable 
> benchmark results.
>
> On Friday, August 17, 2018 at 2:09:20 PM UTC-7, eleano...@gmail.com 
> wrote:
>>
>> Hi Carl, 
>>
>> Thanks for the explanation; however, that still does not explain why 
>> using a single TCP connection for multiple StreamObservers is faster 
>> than using 1 TCP connection per stream. 
>>
>> On Friday, August 17, 2018 at 12:45:32 PM UTC-7, Carl Mastrangelo 
>> wrote:
>>>
>>> gRPC does connection management for you.  If you don't have any 
>>> active RPCs, it will not actively create connections for you.  
>>>
>>> You can force gRPC to create a connection eagerly by calling 
>>> ManagedChannel.getState(true), which requests that the channel enter 
>>> the ready state. 
>>>
>>> Do note that in Java, class loading is done lazily, so you may be 
>>> measuring connection time plus class-loading time if you only measure 
>>> on the first connection.
>>>
>>> On Friday, August 17, 2018 at 9:17:16 AM UTC-7, eleano...@gmail.com 
>>> wrote:

 Hi, 

 I am doing some experiments with gRPC Java to determine the right 
 gRPC call type to use. 

 Here is my finding:

 Creating 4 sets of StreamObservers (1 for the client to send requests, 
 1 for the server to send responses) and sending on the same channel is 
 slightly faster than sending on 1 channel per stream. 
 I have already eliminated the time of creating the initial TCP 
 connection by making an initial call to let the connection be 
 established, then starting the timer. 

 I just wonder why this is the case?

 Thanks!




[grpc-io] how to use grpc java load balancing library with a list of server ip address directly

2018-08-20 Thread eleanore . jin
Hi, 

I would like to use the gRPC Java load balancing library. I looked at the 
example code; it looks like below:

public HelloWorldClient(String zkAddr) {
  this(ManagedChannelBuilder.forTarget(zkAddr)
  .loadBalancerFactory(RoundRobinLoadBalancerFactory.getInstance())
  .nameResolverFactory(new ZkNameResolverProvider())
  .usePlaintext(true));
}


I would like to pass ManagedChannelBuilder a list of service IP addresses 
directly, rather than letting a NameResolver resolve them for me. Then I 
still want to use 
.loadBalancerFactory(RoundRobinLoadBalancerFactory.getInstance()).

Is there a way to do it?


Thanks a lot!



[grpc-io] Re: gRPC java multiple bi-directional streams share same channel is faster than 1 channel per streaming

2018-08-20 Thread 'Carl Mastrangelo' via grpc.io
Can you show your code?   This may just be a threading problem.  

On Saturday, August 18, 2018 at 9:02:59 PM UTC-7, eleano...@gmail.com wrote:
>
> Hi Srini, 
>
> The way I do it:
> For a single connection:
> 1. Send 1 request via the request StreamObserver, to let the initial 
> connection get established.
> 2. Start the timer, then send 10,000 requests.
> 3. End the timer when all results have been seen in the response 
> StreamObserver.onNext() that the client passed to the server; the logic 
> is just System.out.println.
>
> For multiple connections:
> 1. Send 1 request for each channel created, to let the initial 
> connection get established.
> 2. Start the timer, then send 1000 requests per connection; with 10 
> connections, that is 10,000 requests in total.
> 3. End the timer when all the results have been seen in the response 
> StreamObserver.onNext() that the client passed to the server, for all 
> connections; the logic is just System.out.println.
>
> Thanks!
>
> On Saturday, August 18, 2018 at 8:37:22 PM UTC-7, Srini Polavarapu wrote:
>>
>> Could you provide some stats on your observation and how you are 
>> measuring this? Two streams sharing a connection could be faster than 
>> streams on separate connections for these reasons:
>> - One less socket to service: fewer system calls, less context 
>> switching, fewer cache misses, etc.
>> - Better batching of data from different streams on a single connection, 
>> resulting in better connection utilization and a larger average packet 
>> size on the wire.
>>
>> On Friday, August 17, 2018 at 3:30:17 PM UTC-7, eleano...@gmail.com 
>> wrote:
>>>
>>> Hi Carl, 
>>>
>>> Thanks for the very detailed explanation! My question is why I observed 
>>> that using a separate TCP connection per stream was SLOWER!
>>>
>>> If a single TCP connection for multiple streams is faster (regardless 
>>> of the reason), will the connection get saturated? E.g., too many 
>>> streams sending on the same TCP connection.
>>>
>>>
>>> On Friday, August 17, 2018 at 3:25:54 PM UTC-7, Carl Mastrangelo wrote:

 I may have misinterpreted your question; are you asking why gRPC 
 prefers to use a single connection, or why you observed that using a 
 separate TCP connection per stream was faster?

 If the first, the reason is that the number of TCP connections may be 
 limited.   For example, making gRPC requests from the browser may limit 
 how many connections can exist.   Also, a proxy between the client and 
 server may limit the number of connections.   Connection setup and 
 teardown is slower due to the TCP 3-way handshake, so gRPC (really 
 HTTP/2) prefers to reuse a connection.

 If the second, then I am not sure.   If you are benchmarking with Java, 
 I strongly recommend using the JMH benchmarking framework.  It's 
 difficult to set up, but it provides the most accurate, believable 
 benchmark results.

 On Friday, August 17, 2018 at 2:09:20 PM UTC-7, eleano...@gmail.com 
 wrote:
>
> Hi Carl, 
>
> Thanks for the explanation; however, that still does not explain why 
> using a single TCP connection for multiple StreamObservers is faster 
> than using 1 TCP connection per stream. 
>
> On Friday, August 17, 2018 at 12:45:32 PM UTC-7, Carl Mastrangelo 
> wrote:
>>
>> gRPC does connection management for you.  If you don't have any 
>> active RPCs, it will not actively create connections for you.  
>>
>> You can force gRPC to create a connection eagerly by calling 
>> ManagedChannel.getState(true), which requests that the channel enter 
>> the ready state. 
>>
>> Do note that in Java, class loading is done lazily, so you may be 
>> measuring connection time plus class-loading time if you only measure 
>> on the first connection.
>>
>> On Friday, August 17, 2018 at 9:17:16 AM UTC-7, eleano...@gmail.com 
>> wrote:
>>>
>>> Hi, 
>>>
>>> I am doing some experiments with gRPC Java to determine the right 
>>> gRPC call type to use. 
>>>
>>> Here is my finding:
>>>
>>> Creating 4 sets of StreamObservers (1 for the client to send requests, 
>>> 1 for the server to send responses) and sending on the same channel is 
>>> slightly faster than sending on 1 channel per stream. 
>>> I have already eliminated the time of creating the initial TCP 
>>> connection by making an initial call to let the connection be 
>>> established, then starting the timer. 
>>>
>>> I just wonder why this is the case?
>>>
>>> Thanks!
>>>
>>>


Re: [grpc-io] if we switch from netty to okhttp in a non-android project, is there any potential difference?

2018-08-20 Thread 'Eric Anderson' via grpc.io
grpc-okhttp works fine on regular, non-Android Java versions. Security is
different from grpc-netty, so you can't use netty-tcnative. You can use
Java 9+ or Conscrypt. If using Conscrypt, you should install it as the
default provider with Security.insertProviderAt(Conscrypt.newProvider(), 1)
before calling gRPC. Netty uses non-blocking I/O via NIO, whereas OkHttp
uses blocking I/O with Input/OutputStreams. Thus each TCP connection with
OkHttp will use 1-2 threads.
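(Editor's note: a sketch of that setup, assuming the org.conscrypt artifact and grpc-okhttp are on the classpath; host, port, and class names are illustrative. It needs those dependencies, so it is shown for shape only.)

```java
import io.grpc.ManagedChannel;
import io.grpc.okhttp.OkHttpChannelBuilder;
import java.security.Security;
import org.conscrypt.Conscrypt;

public final class OkHttpSetup {
    static ManagedChannel channel(String host, int port) {
        // Install Conscrypt as the top-priority provider before any gRPC use,
        // so TLS goes through it rather than the stock JSSE provider.
        Security.insertProviderAt(Conscrypt.newProvider(), 1);
        return OkHttpChannelBuilder.forAddress(host, port).build();
    }
}
```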

Why are you interested in using okhttp with non-Android?

On Thu, Aug 16, 2018 at 2:56 PM Grpc learner  wrote:

> If we are not building Android software but a regular Java Maven project
> for PC/Linux, what is different?
>
>
>





[grpc-io] Re: Establishing multiple grpc subchannels for a single resolved host

2018-08-20 Thread alysha.gardner via grpc.io
Hey Srini,

I've tested pretty aggressive KeepAlive config with the following 
parameters:

'grpc.http2.min_time_between_pings_ms': 1000,
'grpc.keepalive_time_ms': 1000,
'grpc.keepalive_permit_without_calls': 1

Is there anything I'm missing? Ideally I would like this solution to handle 
both explicit RST and also things like firewalls blackholing inactive 
connections (which we've seen happen in the past), so getting keepalive to 
detect a dead connection would be great.

Thanks,
Alysha

On Friday, August 17, 2018 at 8:17:43 PM UTC-4, Srini Polavarapu wrote:
>
> Hi Alysha,
>
> How did you confirm that the client is going into backoff and that it is 
> indeed receiving a RST when nginx goes away? Have you looked at the logs 
> gRPC generates when this happens? One possibility is that nginx doesn't 
> send a RST and the client doesn't know that the connection is broken 
> until a TCP timeout occurs. Using keepalive will help in this case.
>
> You can try using wait_for_ready=false so the call fails immediately and 
> you can retry.
>
> A recent PR allows you to reset the backoff period: 
> https://github.com/grpc/grpc/pull/16225. It is experimental and doesn't 
> have a Python or Ruby API, so it can't be of immediate help.
>
> On Friday, August 17, 2018 at 12:58:12 PM UTC-7, alysha@shopify.com 
> wrote:
>>
>> Hey Carl,
>>
>> This is with L7 nginx balancing; the reason we moved to nginx from L4 
>> balancers was so we could do per-call balancing (instead of 
>> per-connection with L4).
>>
>> >  In an ideal world, nginx would send a GOAWAY frame to both the client 
>> and the server, and allow all the RPCs to complete before tearing down the 
>> connection.
>>
>>  I agree a GOAWAY would be better but it seems like nginx doesn't do that 
>> (at least yet), they just RST the connection :(
>>
>> > The client knows how to reschedule an unstarted RPC onto a different 
>> connection, without returning an UNAVAILABLE.  
>>
>> Even when we were using L4 it seemed like a GOAWAY from the Go server 
>> would put the Core clients in a backoff state instead of retrying 
>> immediately. The only solution that worked was a round-robin over multiple 
>> connections and a slow-enough rolling restart so the connections could 
>> re-establish before the next one died.
>>
>> > When you say multiple connections to a single IP, does that mean 
>> multiple nginx instances listening on different ports?
>>
>> No, it's a pool of ~20 ingress nginx instances with an L4 load balancer, 
>> so traffic looks like client -> L4 LB -> nginx L7 -> backend GRPC pod. The 
>> problem is the L4 LB in front of nginx has a single public IP.
>>
>> > I'm most familiar with Java, which can actually do what you want.  The 
>> normal way is to create a custom NameResolver that returns multiple 
>> addresses for a single name, which a RoundRobin load balancer will use
>>
>> Yeah I considered writing something similar in Core but I was worried it 
>> wouldn't be adopted upstream because of the move to external LBs? It's very 
>> tough (impossible?) to add new resolvers to Ruby or Python without 
>> rebuilding the whole extension, and we're pretty worried about maintaining 
>> a fork of the C++ implementation. It's nice to hear the approach has some 
>> merits, I might experiment with it.
>>
>> Thanks,
>> Alysha
>>
>> On Friday, August 17, 2018 at 3:42:31 PM UTC-4, Carl Mastrangelo wrote:
>>>
>>> Hi Alysha,
>>>
>>> Do you know if nginx is balancing at L4 or L7? In an ideal world, 
>>> nginx would send a GOAWAY frame to both the client and the server, and 
>>> allow all the RPCs to complete before tearing down the connection.   The 
>>> client knows how to reschedule an unstarted RPC onto a different 
>>> connection, without returning an UNAVAILABLE.  
>>>
>>> When you say multiple connections to a single IP, does that mean 
>>> multiple nginx instances listening on different ports?
>>>
>>> I'm most familiar with Java, which can actually do what you want.  The 
>>> normal way is to create a custom NameResolver that returns multiple 
>>> addresses for a single name, which a RoundRobin load balancer will use.  
>>> It sounds like you aren't using Java, but since the implementations are 
>>> all similar there may be a way to do so.  
>>>
>>> On Friday, August 17, 2018 at 8:46:49 AM UTC-7, alysha@shopify.com 
>>> wrote:

 Hi grpc people!

 We have a setup where we're running a grpc service (written in Go) on 
 GKE, and we're accepting traffic from outside the cluster through nginx 
 ingresses. Our clients are all using Core GRPC libraries (mostly Ruby) to 
 make calls to the nginx ingress, which load-balances per-call to our 
 backend pods.

 The problem we have with this setup is that whenever the nginx 
 ingresses reload they drop all client connections, which results in spikes 
 of Unavailable errors from

[grpc-io] Re: Do we have any examples to use grpc in C

2018-08-20 Thread Prashant Shubham
Can anyone give guidance on how to write a C wrapper for gRPC?  

On Wednesday, January 27, 2016 at 5:12:53 AM UTC+5:30, yyd...@gmail.com 
wrote:
>
> Hi guys,
>
> How to use grpc in C?
>
> From the thread history, the suggestion is to use C++. We understand 
> this, but we need to use gRPC from C, as our existing code base is C.
>
> Is it reasonable to wrap the gRPC code in a shared library for our use 
> case and call it from C? 
>
> Or would you suggest using the low-level C API with limited functionality?
>
> Thanks a lot!
>
