Re: [grpc-io] [android][java][cloud-speech] how can I know about the channel status?

2017-03-27 Thread David Edery
Thank you for your answer :)

On Monday, March 27, 2017 at 7:19:38 PM UTC+3, Eric Anderson wrote:
> On Sun, Mar 26, 2017 at 9:28 AM, David Edery wrote:
>> 500ms is too much for my app to wait before streaming. This is why I
>> prepare everything before and I

Re: [grpc-io] Question about timeout detection at streaming

2017-03-27 Thread 'Kun Zhang' via grpc.io
Not sure what code you are referring to. Use OkHttpChannelBuilder.enableKeepAlive() or NettyChannelBuilder.enableKeepAlive().
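The keep-alive those builder methods enable amounts to: periodically ping the server, and treat the connection as dead if no acknowledgement comes back within a timeout. A minimal, library-free sketch of that idea (the `Transport` interface and timeout values are illustrative, not grpc-java API):

```java
// Conceptual sketch of keep-alive: ping periodically and declare the
// connection dead when an ack does not arrive within the timeout window.
// In real grpc-java this is configured on the channel builder instead.
public class KeepAliveSketch {
    interface Transport {
        // Returns true if a ping ack arrived within timeoutMillis.
        boolean ping(long timeoutMillis) throws InterruptedException;
    }

    static boolean connectionAlive(Transport t, int pings, long timeoutMillis)
            throws InterruptedException {
        for (int i = 0; i < pings; i++) {
            if (!t.ping(timeoutMillis)) {
                return false; // stuck connection detected: surface an error to the RPC
            }
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        Transport healthy = timeout -> true;  // always acks promptly
        Transport stuck = timeout -> {        // never acks: simulates a dead TCP peer
            Thread.sleep(timeout);
            return false;
        };
        System.out.println(connectionAlive(healthy, 3, 10)); // true
        System.out.println(connectionAlive(stuck, 3, 10));   // false
    }
}
```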

[grpc-io] Is there any way to determine when an RPC ends using Go's StreamClientInterceptor?

2017-03-27 Thread ryan . burn
I'm working on instrumenting Go's streaming RPCs for OpenTracing. Is there a good way to figure out when an RPC ends on the client side so that a span can be accurately measured? (I also asked this on StackOverflow: http://stackoverflow.com/q/42988396). I've been looking into returning a

Re: [grpc-io] Question about timeout detection at streaming

2017-03-27 Thread Carfield Yim
Cool, thanks a lot. This is the sample code, right? https://github.com/grpc/grpc-java/issues/1648 And at the moment it is only for the Java client, right?

On Mon, Mar 27, 2017 at 11:45 PM, 'Kun Zhang' via grpc.io <grpc-io@googlegroups.com> wrote:
> Ideally gRPC client library should detect the

Re: [grpc-io] Re: Can more than one grpc service run in a single process?

2017-03-27 Thread 'Eric Anderson' via grpc.io
On Wed, Mar 22, 2017 at 11:09 PM, wrote:
> On Thursday, March 23, 2017 at 11:27:49 AM UTC+5:30, falco...@gmail.com wrote:
>> Hey Eric. He asked about the C++ API and not C# :) I didn't answer since
>> I don't know the C++ API.
>
> I don't know where I saw C#, but I did

Re: [grpc-io] [android][java][cloud-speech] how can I know about the channel status?

2017-03-27 Thread 'Eric Anderson' via grpc.io
On Sun, Mar 26, 2017 at 9:28 AM, David Edery wrote:
> 500ms is too much for my app to wait before streaming. This is why I
> prepare everything before and I make sure that at the end of a recognition
> operation the full structure is prepared for the next iteration.

Re: [grpc-io] Question about timeout detection at streaming

2017-03-27 Thread 'Kun Zhang' via grpc.io
Ideally the gRPC client library should detect the stuck connection and give you an error, which is what the keep-alive option tries to achieve.

On Saturday, March 25, 2017 at 8:21:38 AM UTC-7, Carfield Yim wrote:
> oh, I see, so even if the server just doesn't get stuck, I should still get
> an

Re: [grpc-io] URGENT: Crash in grpc code while running asynchronous stream for a long time.

2017-03-27 Thread 'Craig Tiller' via grpc.io
If gRPC allowed you to start a new write before the old one completed, there would be an opportunity for unbounded memory growth, which could crash your application (out-of-memory errors). Instead, we limit the outstanding write requests to one and force the application to deal with push-back. In
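The one-outstanding-write rule described above can be modeled with a single-permit semaphore: a second write attempt is refused (push-back) rather than buffered without bound. A stdlib-only sketch; the class and method names are illustrative, not the gRPC API:

```java
import java.util.concurrent.Semaphore;

public class WritePushback {
    // One permit = at most one outstanding write, mirroring gRPC's rule.
    private final Semaphore outstanding = new Semaphore(1);

    // Returns false (push-back) instead of queueing unboundedly
    // when a write is already in flight.
    boolean tryStartWrite() {
        return outstanding.tryAcquire();
    }

    // Called when the completion queue delivers the previous write's tag.
    void onWriteComplete() {
        outstanding.release();
    }

    public static void main(String[] args) {
        WritePushback w = new WritePushback();
        System.out.println(w.tryStartWrite()); // true: first write starts
        System.out.println(w.tryStartWrite()); // false: push-back, write pending
        w.onWriteComplete();                   // tag arrives for the first write
        System.out.println(w.tryStartWrite()); // true: a new write may start
    }
}
```

The application reacts to a `false` by deferring the message, not by retrying in a tight loop; that keeps memory usage bounded no matter how slow the peer is.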

Re: [grpc-io] URGENT: Crash in grpc code while running asynchronous stream for a long time.

2017-03-27 Thread Chaitanya Gangwar
Thanks Craig for your response. In my design, I have one thread where I start the gRPC server and wait on the completion queue for events. As soon as any client connects, I store the AsyncWriter pointer in a map, and from some other thread I am doing a post on that AsyncWriter every 5 secs. Is this a

Re: [grpc-io] URGENT: Crash in grpc code while running asynchronous stream for a long time.

2017-03-27 Thread 'Craig Tiller' via grpc.io
It looks like you're trying to start a new write while there's one already outstanding. This is unsupported and will crash (although we should admittedly give better messaging). You need to ensure that the previous write has delivered its tag back via the completion queue before starting a new
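A common fix for the design in this thread (a timer thread writing every 5 s while another thread drains the completion queue) is to serialize writes through a pending queue: the producer only enqueues, and the next write is issued only after the previous write's tag has come back. A stdlib-only sketch of that discipline; names like `onWriteTag` are illustrative, not the gRPC C++ API:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class SerializedWriter {
    private final Queue<String> pending = new ArrayDeque<>();
    private boolean writeInFlight = false;
    int writesIssued = 0; // visible for the demo below

    // Producer thread (e.g. the 5-second timer) only enqueues;
    // it never calls Write() on the stream directly.
    synchronized void post(String msg) {
        pending.add(msg);
        maybeWrite();
    }

    // Completion-queue thread calls this when the previous write's tag arrives.
    synchronized void onWriteTag() {
        writeInFlight = false;
        maybeWrite();
    }

    private void maybeWrite() {
        if (!writeInFlight && !pending.isEmpty()) {
            pending.poll();       // hand exactly one message to the (stubbed) writer
            writeInFlight = true; // no new write until its tag comes back
            writesIssued++;
        }
    }

    public static void main(String[] args) {
        SerializedWriter w = new SerializedWriter();
        w.post("a"); w.post("b"); w.post("c");
        System.out.println(w.writesIssued); // 1: only the first write was issued
        w.onWriteTag();
        System.out.println(w.writesIssued); // 2: the next write starts after the tag
    }
}
```

Because only one thread ever transitions `writeInFlight`, there is never a second write outstanding, which avoids the crash described above.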

[grpc-io] URGENT: Crash in grpc code while running asynchronous stream for a long time.

2017-03-27 Thread Chaitanya Gangwar
Hi, I am seeing the following crash in grpc code. I have a 5-node setup, where all nodes are streaming out data. After a few hours of streaming, I am seeing a crash on 2 nodes but the other 3 nodes are working fine. It is not always reproducible. Looks like some timing issue. Below is the crash: #0