[grpc-io] Re: how can i use channelz in java?

2019-07-31 Thread 'Carl Mastrangelo' via grpc.io
Hi!  I am the primary author of Channelz.  Right now Channelz is a gRPC 
service, which means that it needs to be manually added to your server and 
called using a gRPC client.  If you have a working server, just 
add ChannelzService to it.

At the moment, we have a very hacky web UI for making gRPC-Web calls to the 
server and building a little HTML, but it is more of a proof of concept.  
You can also use grpc_cli to make the calls and get data for specific 
channels.  I'm sorry the user experience around this isn't very good yet, 
but all the core functionality should work via the RPC interface. 
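For grpc-java, the wiring is a small configuration sketch like the one below (a hedged example: `MyServiceImpl` and the port are placeholders, and `ChannelzService` is provided by the `grpc-services` artifact):

```java
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.services.ChannelzService;

public class ChannelzServerSketch {
    public static void main(String[] args) throws Exception {
        Server server = ServerBuilder.forPort(8980)           // placeholder port
            .addService(new MyServiceImpl())                  // your existing service (placeholder)
            // Expose the Channelz debug service next to it; the argument is the
            // maximum page size for paginated Channelz responses.
            .addService(ChannelzService.newInstance(100))
            .build()
            .start();
        server.awaitTermination();
    }
}
```

Once the server is up, a generic client such as grpc_cli should be able to call methods on the `grpc.channelz.v1.Channelz` service (for example `GetTopChannels`) against that port.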

On Monday, July 29, 2019 at 1:32:05 PM UTC-7, Elhanan Maayan wrote:
>
> hi.. 
> i'm trying to debug why my channels keep receiving data after 
> i call shutdown and await termination. 
>
> i've seen the go samples, but there doesn't seem to be 
> a RegisterChannelzServiceToServer method in the java api. 
>
> i already have the docker image running, and i'm assuming i need to 
> configure it somehow to connect to the channelz running on my machine, 
> but i'm not sure how to use the channelz api and connect it to the app.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/10329f57-5c48-4c03-8c82-93892f9f0a52%40googlegroups.com.


[grpc-io] Re: receiving the byte array for a protobuff in a streaming grpc

2019-06-24 Thread 'Carl Mastrangelo' via grpc.io
You don't have to use the proto-generated stubs.  You can make your own 
serializer / deserializer and persist the data.  You can even delegate to 
the proto serializers and reuse them, if you want.  (This is for Java.)
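The idea can be sketched without any gRPC types: wrap an existing serializer so it records the raw wire bytes before delegating. (In grpc-java the real interface is `MethodDescriptor.Marshaller`, whose `parse`/`stream` methods you would wrap the same way; the `Marshaller` interface below is a simplified stand-in for illustration.)

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for grpc-java's MethodDescriptor.Marshaller<T>.
interface Marshaller<T> {
    byte[] stream(T value);
    T parse(byte[] bytes);
}

// Records every message's wire bytes, then delegates to the wrapped marshaller.
class RecordingMarshaller<T> implements Marshaller<T> {
    private final Marshaller<T> delegate;
    private final List<byte[]> recorded = new ArrayList<>();

    RecordingMarshaller(Marshaller<T> delegate) { this.delegate = delegate; }

    @Override public byte[] stream(T value) {
        byte[] bytes = delegate.stream(value);
        recorded.add(bytes.clone());   // keep a copy for the recording file
        return bytes;
    }

    @Override public T parse(byte[] bytes) {
        recorded.add(bytes.clone());   // capture inbound bytes before decoding
        return delegate.parse(bytes);
    }

    List<byte[]> recorded() { return recorded; }
}
```

The recorded byte arrays could then be appended to a file and replayed later without ever decoding them into message objects.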

On Monday, June 24, 2019 at 7:18:50 AM UTC-7, Elhanan Maayan wrote:
>
> hi..
>
> is there a way for a streaming call in grpc to accept its binary array 
> prior to it being turned into a protobuf? 
>
> i'm aware that protobuf itself has this ability, but getting the byte 
> array, turning it into an object and then turning it back into a byte array 
> seems redundant. 
>
> the reason is that i'd like to create a "recording file" of the incoming 
> traffic (to be played back later).
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/4d64a907-1311-4d5d-b540-aa94256b4174%40googlegroups.com.


[grpc-io] Re: Thread leak while use ManagedChannel in grpc Java

2019-06-24 Thread 'Carl Mastrangelo' via grpc.io
You'll need to include a thread dump to diagnose this further.  

On Wednesday, June 19, 2019 at 8:52:54 PM UTC-7, Deepak Agarwal wrote:
>
> Hi all,
>   I am using grpc java as a grpc client to connect to a remote GRPC 
> server. The RPC is a server streaming RPC.
> I first build the channel (have not provided customized executor), and 
> then invoke the RPC.
>
> Scenario:
> 1. I have several destination servers that I have to connect to.
> 2. For one destination, the same RPC can be invoked multiple times (with 
> different parameters).
> 3. The RPC is a server-streaming RPC.
>
> What did I do:
> 1. I created one ManagedChannel (have not provided a customized executor) for 
> each destination server.
> 2. On the same ManagedChannel, I invoke multiple RPCs as and when 
> required, and the RPCs keep running (the server keeps streaming) until the 
> user stops the operation.
>
> Problem:
> After a few hours, I see many threads (several hundred) that were 
> spawned by the application, and the count keeps increasing with time.
>
> PS: I don't have a thread dump. The thread leak was identified by an 
> external tool in a production environment.
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/55bf968a-5a00-4975-8508-6aa615ec830d%40googlegroups.com.


[grpc-io] Re: Implementing client streaming on server side (java)

2019-06-19 Thread 'Carl Mastrangelo' via grpc.io
A regular HashMap is enough, as long as there is exactly one map per RPC.  If 
there are multiple RPCs, you will need to synchronize.  (Stated 
differently: each request StreamObserver and map pair should be one-to-one 
and onto, i.e. a bijection.) 


More detail: each RPC has an associated SerializingExecutor that handles 
the callbacks for that RPC.  The SerializingExecutor is executed inside of 
the executor you provided to the ServerBuilder at construction.  The SE 
ensures that no callback for the given RPC overlaps with any other callback 
for the same RPC.  Thus, you can be sure that you don't get onCancelled in 
the middle of onMessage.  The events may happen on different threads, 
which is up to the executor you provided to the server.  This is okay, 
because there is a "happens-before" relationship between the callbacks: 
the necessary synchronization barriers are already present, so you don't 
need to add your own.  
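The behavior described above can be sketched in plain Java. The class below is a simplified analogue of grpc-java's internal SerializingExecutor, written for illustration only: tasks run one at a time, in submission order, on whatever delegate executor you supply, so callbacks never overlap even when the underlying pool has many threads.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Executor;

// Runs submitted tasks one at a time, in submission order, on the delegate
// executor. A simplified analogue of grpc-java's internal SerializingExecutor.
class SerializingExecutorSketch implements Executor {
    private final Executor delegate;
    private final Queue<Runnable> queue = new ArrayDeque<>();
    private boolean running;   // guarded by 'this'

    SerializingExecutorSketch(Executor delegate) { this.delegate = delegate; }

    @Override public void execute(Runnable task) {
        synchronized (this) {
            queue.add(task);
            if (running) return;   // a drain loop is already scheduled
            running = true;
        }
        delegate.execute(this::drain);
    }

    private void drain() {
        while (true) {
            Runnable next;
            synchronized (this) {
                next = queue.poll();
                if (next == null) { running = false; return; }
            }
            next.run();   // tasks never overlap: only one drain loop is live
        }
    }
}
```

The `synchronized` blocks also provide the happens-before edges between consecutive tasks, which is why the HashMap in the question needs no extra synchronization within a single RPC.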

On Wednesday, June 19, 2019 at 12:50:56 AM UTC-7, Alexander Furer wrote:
>
> Thanks Carl, but I'll rephrase the question.
> On the server side, my implementation of 'onNext' aggregates the streaming 
> messages into some in-memory storage, say a Map. No one else is accessing it 
> until I get onCompleted from the client. 
> Does this map need to be a ConcurrentMap (in Java terms), or is a regular 
> HashMap enough?
> I'm asking whether each 'onNext' waits for completion of the previous one 
> before being executed, or whether they can be invoked in parallel.
>  
>
>
> On Wednesday, June 12, 2019 at 7:41:00 PM UTC+3, Carl Mastrangelo wrote:
>>
>> What that means is that the messages will never be reordered.  You MUST 
>> synchronize access to the RPC, either by ensuring only one thread ever 
>> accesses it, or adding your own synchronization.  
>>
>> The comment you see is more in regards to other network stuff (like UDP), 
>> where packets can be reordered.  
>>
>> On Wednesday, June 12, 2019 at 1:38:15 AM UTC-7, Alexander Furer wrote:
>>>
>>> From the grpc guide "gRPC guarantees message ordering within an 
>>> individual RPC call."
>>>
>>> Given the service definition  
>>>
>>> rpc LotsOfGreetings(stream HelloRequest) returns (HelloResponse) {}
>>>
>>> does it mean that implementation of  onNext(HelloRequest request)  does 
>>> NOT need to be synchronized ? 
>>> In the other words, the onNext(HelloRequest request) is invoked  
>>> sequentially one after another ?
>>> Thanks
>>>
>>>
>>>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/28384952-e945-4f85-b1de-84b1455d8715%40googlegroups.com.


[grpc-io] Re: Julia language

2019-06-17 Thread 'Carl Mastrangelo' via grpc.io
Sorry, I don't think anyone on the gRPC team has Julia experience.  

On Sunday, June 16, 2019 at 12:56:17 AM UTC-7, ondrej...@gmail.com wrote:
>
> Hi,
>
>  Are there any plans to support the Julia language?
>
> Best Regards,
>
> Ondra
>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/b91806f6-18ef-40f3-8fe3-5a10ffa515f7%40googlegroups.com.


[grpc-io] Re: java grpc server how to handle RejectedExecutionException

2019-06-17 Thread 'Carl Mastrangelo' via grpc.io
Never set a max queue size on the executor.  Instead, use the 
ClientCall.request() and ServerCall.request() methods to avoid overloading 
the queue.  If your server or client is overloaded, just accept the RPC and 
immediately close it.
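To see why a bounded queue bites, here is a stdlib-only demonstration (not gRPC code): once the pool and its queue are saturated, `ThreadPoolExecutor` throws `RejectedExecutionException` out of `execute`, and inside a gRPC server that exception surfaces on an internal dispatch path where there is no RPC context left to translate it into a status the client can distinguish.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedExecutorDemo {
    public static boolean saturate() {
        // 1 thread, queue capacity 1: the third task has nowhere to go.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(1));
        CountDownLatch release = new CountDownLatch(1);
        boolean rejected = false;
        try {
            pool.execute(() -> await(release));  // occupies the single worker
            pool.execute(() -> {});              // fills the queue
            pool.execute(() -> {});              // no room: throws on the caller
        } catch (RejectedExecutionException e) {
            rejected = true;                     // this is what gRPC would hit
        } finally {
            release.countDown();
            pool.shutdown();
        }
        return rejected;
    }

    private static void await(CountDownLatch l) {
        try { l.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```

With an unbounded executor plus per-call `request()` flow control, backpressure is applied per RPC instead, and an overloaded server can still accept a call and close it immediately with a status of its choosing.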
 

On Monday, June 10, 2019 at 8:48:16 PM UTC-7, fangli...@126.com wrote:
>
> hi all
>    when building a grpc server with ServerBuilder, I use 
> ServerBuilder.executor(myExecutor) to bind my own executor. In myExecutor I 
> set the max thread count and queue capacity, and a 
> RejectedExecutionException is thrown when the executor is full.
>
>    my question is: when the executor is full, how do I catch the 
> exception and convert it to a gRPC status? I also want the gRPC client to 
> be able to tell that the error was caused by the executor's rejection.
>
>    thank you.
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/46e5c1e2-be43-4cfb-bbb5-be4b9e45d36c%40googlegroups.com.


[grpc-io] Re: Switch Network interfaces while stream is running

2019-06-14 Thread 'Carl Mastrangelo' via grpc.io
Replies inline:



On Friday, June 14, 2019 at 12:15:29 AM UTC-7, tob...@googlemail.com wrote:
>
> Hi group,
>
> are there any experiences here how GRPC handles a switch of the network 
> interface while a bi-di stream is running?
>

Sorry, this is not supported currently.   It is theoretically solvable if 
using QUIC, which has a separate connection identifier in each packet, but  
I don't think gRPC supports it yet.   File an issue on the GitHub tracker 
if you are interested in it. 
 

> E.g. I start a connection on a laptop that is using LAN and WLAN.  Due to 
> the interface metric, I would expect that the channel would prefer the LAN 
> connection. 
>
> But what happens to the stream if I pull the network cable?
>

It will take a while for the client to realize the packets are not being 
responded to.  You can set a keepalive timer on the channel to make sure you 
notice within some time frame.
 

>
> Now a WLAN only scenario - e.g. I walk with the laptop through a large 
> building with many access points.
> What happens to the stream if the WLAN switches to a different accesspoint 
> (still connected to the same network)?
>
> Are there any differences between IPv4 and IPv6?
>

Not really, at least in the scope of this question.

 

>
> I would appreciate any thoughts or input on this matter.
>
> Thanks
> Tobias
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/9375d9e7-0790-489c-95bc-633bce0cf45b%40googlegroups.com.


[grpc-io] Re: Implementing client streaming on server side (java)

2019-06-12 Thread 'Carl Mastrangelo' via grpc.io
What that means is that the messages will never be reordered.  You MUST 
synchronize access to the RPC, either by ensuring only one thread ever 
accesses it, or adding your own synchronization.  

The comment you see is more in regards to other network stuff (like UDP), 
where packets can be reordered.  

On Wednesday, June 12, 2019 at 1:38:15 AM UTC-7, Alexander Furer wrote:
>
> From the grpc guide "gRPC guarantees message ordering within an individual 
> RPC call."
>
> Given the service definition  
>
> rpc LotsOfGreetings(stream HelloRequest) returns (HelloResponse) {}
>
> does it mean that implementation of  onNext(HelloRequest request)  does 
> NOT need to be synchronized ? 
> In the other words, the onNext(HelloRequest request) is invoked  
> sequentially one after another ?
> Thanks
>
>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/1c1c6ed8-ff72-4650-a086-8c4af1bac7bb%40googlegroups.com.


[grpc-io] Re: Allow complete override of User-Agent Header on a per-RPC level from grpc client libraries

2019-06-12 Thread 'Carl Mastrangelo' via grpc.io
Probably a better place to bring this up is as an issue on the GitHub 
tracker, as a feature request: https://github.com/grpc/grpc-java/issues

On Tuesday, June 11, 2019 at 12:40:18 PM UTC-7, Asad Ali wrote:
>
> bump.
>
> i am not sure if this was the right forum to bring this up.
> any advice on who/where to continue this conversation with is much 
> appreciated..
>  
> -A
>
> On Friday, June 7, 2019 at 4:21:58 PM UTC-7, Asad Ali wrote:
>>
>> tl;dr
>>
>> This is a followup from a discussion that was initiated on gitter 
>> grpc/grpc channel.
>> Currently the grpc/java library reuses the User-Agent from the channel 
>> for each RPC
>> and discards User-Agent by treating it as a reserved header. 
>>
>> However, User-Agent is not a reserved header and this creates 
>> complications
>> when trying to write a proxy-like gRPC service for HTTP endpoints that 
>> care
>> about User-Agent for response customization.
>>
>> Posting it here to get more ideas about how to resolve this.
>>
>> quoting the conversation below:
>>
>> Asad @asadali Jun 04 13:10
>> What was the underlying reason for the restriction on not allowing 
>> User-Agent
>> to be overridden on a per-call basis?  can't seem to find a spec which 
>> reserves
>> the User-Agent string for gRPC/HTTP2 and yet there is code in place in the
>> libraries (grpc-java/ grpc-go ..) to discard any user-supplied metadata
>> regarding User-Agent and always use the channel's value eg:
>> Utils.convertServerHeaders
>>
>> Asad @asadali Jun 04 13:16
>> use-case:
>> client ---> httpSVC-A ---> grpcSVC-B ---> httpSVC-C
>> how can the client's user-agent be conveyed to httpSVC-C?  if A-B have 
>> ONLY one
>> channel open between them with a channel-level User-Agent that can't be
>> overridden
>> @ejona86 ^ question regarding user-agent behavior
>>
>> Eric Anderson @ejona86 Jun 04 14:34
>> @asadali, user-agent is a built-in feature as gRPC itself sends it.  
>> There is
>> an API to change what gRPC sends, but there didn't seem to be any need to 
>> allow
>> it to be changed per-RPC.
>>
>> Asad @asadali Jun 04 14:43
>> @ejona86 we seem to have a use-case in which a per-RPC user-agent will 
>> make
>> things easier for us. The alternate is to use custom metadata fields to
>> preserve this information. that approach seems non-standard and we were 
>> hoping
>> to avoid it.  Will it be possible to include a per-RPC user-agent in 
>> gRPC? i
>> will be happy to code it up. but based on what i read in past issues, this
>> request was repeatedly turned down.
>>
>> Eric Anderson @ejona86 Jun 04 14:44
>> That is a cross-language decision. You would need to make clear what the
>> use-case for it is.  Right now, it isn't clear what the use-case is.
>> Oh. I see now.
>> You want to communicate the origin client's user-agent to SVC-C
>> Yeah. That's not appropriate for user-agent.
>>
>> Asad @asadali Jun 04 14:45
>> ack
>>
>> Eric Anderson @ejona86 Jun 04 14:46
>>  unless you are making something closer to a proxy. Maybe.  It sort of
>> seems like a can of worms. It just makes a mess of things.
>> But I think I understand now.
>>
>> Asad @asadali Jun 04 14:48
>> so the intermediate gateways aren't pure proxies but maybe more like
>> aggregators. in the non-GRPC world, the implementation made an assumption 
>> that
>> User-Agent is the originating client's user-agent. and all intermediate 
>> hops
>> honored that.  I agree, that this is a very loose reading of the spec. I 
>> feel
>> the more logical method is to update the user-agent on each hop
>> however, systems built around that assumption aren't happy when they lose 
>> this
>> info :( IMO, gRPC clients can default to per-channel behavior but the 
>> choice
>> should ultimately be left to the user if they want to override it
>>
>> Eric Anderson @ejona86 Jun 04 14:51
>> Well, today the application can't set the entire user-agent. gRPC will 
>> always
>> include itself in the user-agent.  I'm trying to check what HTTP says to 
>> do for
>> user-agent and proxies.
>>
>> Asad @asadali Jun 04 14:52
>> yeah i can use another opinion on this. and current gRPC behavior is what 
>> I am
>> trying to rationalize. does it need to always include its user-agent?
>>
>> Eric Anderson @ejona86 Jun 04 17:31
>> @asadali, proxies do forward the user-agent. We do want to enable grpc 
>> proxies,
>> so that does mean we should forward the user-agent. Although on the 
>> server, any
>> compatibility quirks would generally be with the proxy, not the 
>> end-client. So
>> it still seems muddled, but it does seem we should consider it. 
>>
>>  
>>
>>>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/1a4a2f68-18f4-4295-815b-df0aef8ca63b%40googlegroups.com.


[grpc-io] Re: Detecting bad connection in java

2019-06-05 Thread 'Carl Mastrangelo' via grpc.io
You'll need to read the docs of that method I linked (and the other ones 
with "keepalive" in the name).  It discusses what will happen with regards 
to active calls.

On Wednesday, June 5, 2019 at 11:02:34 AM UTC-7, yfe...@gmail.com wrote:
>
> Hey Carl, 
>
> Thanks for your help with this!
>
> Is it correct that if I am submitting calls every 10 seconds, but my 
> keepAliveTime > 10 seconds, then no keepalives would be sent? Or are 
> keepalives sent regardless of request activity?
> Is it only failed keepalive pings that will indicate a channel is down, 
> and not regular requests? 
>
> Thanks again,
>
> Yosef
>
> On Wednesday, June 5, 2019 at 1:57:25 PM UTC-4, Carl Mastrangelo wrote:
>>
>> Try setting the keep alive settings defined here:
>>
>>
>> https://grpc.github.io/grpc-java/javadoc/io/grpc/ManagedChannelBuilder.html#keepAliveTimeout-long-java.util.concurrent.TimeUnit-
>>
>> On Wednesday, June 5, 2019 at 10:48:38 AM UTC-7, yfe...@gmail.com wrote:
>>>
>>> Hey folks, 
>>>
>>> I'm making calls using a `ManagedChannel` over a VPN connection. When 
>>> the VPN renegotiates (~ 8 hours), a periodic call (issued every 10 seconds) 
>>> that would ordinarily take 1ms times out repeatedly for about 15 min 
>>> instead before the `ManagedChannel` figures out the connection is bad 
>>> (fails with `RuntimeException: UNAVAILABLE`) and resets. Is there a way to 
>>> make that happen faster? I've set the deadline for this call to 2 seconds 
>>> which is typically more than enough time (usually comes back in under 10 
>>> ms). The logs end up looking something like 
>>>
>>>
>>> 14:14:05.256 io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline 
>>> exceeded after 401019ns
>>> 14:14:15.257 io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline 
>>> exceeded after 401019ns
>>>
>>> ... call is made every 10 sec and fails like this for 15 min before 
>>>
>>> 14:30:42.236 java.io.IOException: Connection timed out
>>> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>>> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>>> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>>> at sun.nio.ch.IOUtil.read(IOUtil.java:192)
>>> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
>>> at 
>>> io.grpc.netty.shaded.io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288)
>>> at 
>>> io.grpc.netty.shaded.io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132)
>>> at 
>>> io.grpc.netty.shaded.io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:347)
>>> at 
>>> io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
>>> at 
>>> io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656)
>>> at 
>>> io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:591)
>>> at 
>>> io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:508)
>>> at 
>>> io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470)
>>> at 
>>> io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
>>> at 
>>> io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>>> at java.lang.Thread.run(Thread.java:748)
>>> Wrapped by: io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
>>>
>>> After which everything resumes normally.
>>>
>>> How can I get `ManagedChannel` to pick up that the connection is bad and 
>>> to reset earlier than 15min. I've tested outside of grpc that the VPN 
>>> renegotiation takes about 2 seconds, not anywhere close to 15 min.
>>>
>>> I'm using GRPC for java 1.18.0 with Netty
>>>
>>> Many thanks!
>>>
>>> Yosef
>>>
>>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/a9484748-ec96-492b-9b58-9005de6db80b%40googlegroups.com.


[grpc-io] Re: Detecting bad connection in java

2019-06-05 Thread 'Carl Mastrangelo' via grpc.io
Try setting the keep alive settings defined here:

https://grpc.github.io/grpc-java/javadoc/io/grpc/ManagedChannelBuilder.html#keepAliveTimeout-long-java.util.concurrent.TimeUnit-
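In builder form, that configuration looks roughly like the sketch below (a hedged configuration example against the grpc-java `ManagedChannelBuilder` API; the target address and the specific durations are illustrative placeholders, not recommendations):

```java
import java.util.concurrent.TimeUnit;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

class KeepAliveChannelConfig {
    static ManagedChannel build() {
        return ManagedChannelBuilder.forAddress("example.com", 443)  // placeholder target
            // Send an HTTP/2 PING after this much transport inactivity...
            .keepAliveTime(30, TimeUnit.SECONDS)
            // ...and consider the connection dead if no ack arrives in this window.
            .keepAliveTimeout(5, TimeUnit.SECONDS)
            // Ping even while no RPCs are active, so a dead link is noticed early.
            .keepAliveWithoutCalls(true)
            .build();
    }
}
```

Note that servers can reject overly aggressive keepalives, so conservative values are safer; the javadoc linked above describes how pending calls behave when a keepalive ping fails.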

On Wednesday, June 5, 2019 at 10:48:38 AM UTC-7, yfe...@gmail.com wrote:
>
> Hey folks, 
>
> I'm making calls using a `ManagedChannel` over a VPN connection. When the 
> VPN renegotiates (~ 8 hours), a periodic call (issued every 10 seconds) 
> that would ordinarily take 1ms times out repeatedly for about 15 min 
> instead before the `ManagedChannel` figures out the connection is bad 
> (fails with `RuntimeException: UNAVAILABLE`) and resets. Is there a way to 
> make that happen faster? I've set the deadline for this call to 2 seconds 
> which is typically more than enough time (usually comes back in under 10 
> ms). The logs end up looking something like 
>
>
> 14:14:05.256 io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline 
> exceeded after 401019ns
> 14:14:15.257 io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline 
> exceeded after 401019ns
>
> ... call is made every 10 sec and fails like this for 15 min before 
>
> 14:30:42.236 java.io.IOException: Connection timed out
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> at sun.nio.ch.IOUtil.read(IOUtil.java:192)
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
> at 
> io.grpc.netty.shaded.io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288)
> at 
> io.grpc.netty.shaded.io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132)
> at 
> io.grpc.netty.shaded.io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:347)
> at 
> io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
> at 
> io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656)
> at 
> io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:591)
> at 
> io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:508)
> at 
> io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470)
> at 
> io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
> at 
> io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
> at java.lang.Thread.run(Thread.java:748)
> Wrapped by: io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
>
> After which everything resumes normally.
>
> How can I get `ManagedChannel` to pick up that the connection is bad and 
> to reset earlier than 15min. I've tested outside of grpc that the VPN 
> renegotiation takes about 2 seconds, not anywhere close to 15 min.
>
> I'm using GRPC for java 1.18.0 with Netty
>
> Many thanks!
>
> Yosef
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/832fcfb2-d64e-4ab8-b8a9-0af95e8c77b1%40googlegroups.com.


[grpc-io] Re: Servers in PHP?

2019-06-05 Thread 'Carl Mastrangelo' via grpc.io
There isn't support for running a PHP gRPC server (and nothing on the 
roadmap either).  You might be able to convert an HTTP/2 request to an 
HTTP/1.1 request and forward that to the PHP server, but I am not sure how 
the server could respond with trailers.  The spec for gRPC is here: 
https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md

You could in theory directly respond using the headers (and trailers) 
described there.  You wouldn't even need the gRPC library to do it.
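For anyone attempting the "respond directly" route, the message body format from the PROTOCOL-HTTP2 spec is simple to produce by hand: each message on the wire is a 1-byte compressed flag, a 4-byte big-endian length, then the payload. A stdlib-only sketch of that framing (in Java rather than PHP, to match the rest of this digest):

```java
import java.nio.ByteBuffer;

public class GrpcFraming {
    // Frame a single message per gRPC's Length-Prefixed-Message format:
    // 1 byte compressed flag, 4-byte big-endian length, then the payload.
    public static byte[] frame(byte[] message, boolean compressed) {
        ByteBuffer buf = ByteBuffer.allocate(5 + message.length);
        buf.put((byte) (compressed ? 1 : 0));
        buf.putInt(message.length);   // ByteBuffer defaults to big-endian
        buf.put(message);
        return buf.array();
    }

    // Parse one frame back out; returns the payload bytes.
    public static byte[] unframe(byte[] frame) {
        ByteBuffer buf = ByteBuffer.wrap(frame);
        buf.get();                    // compressed flag (ignored here)
        int len = buf.getInt();
        byte[] payload = new byte[len];
        buf.get(payload);
        return payload;
    }
}
```

The hard part for PHP remains the HTTP/2 trailers carrying `grpc-status`, which the framing alone does not solve.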

On Friday, May 24, 2019 at 5:19:29 AM UTC-7, 
andrew...@global-fashion-group.com wrote:
>
> From a naive, first-principles standpoint: given that gRPC is implemented over 
> HTTP/2, is it possible to implement a server in PHP mediated by a proxy such 
> as NGINX, or with mod_php?
>
> It might only be possible with unary RPCs, but that's as much as REST provides 
> us now anyway.
>
> On Wednesday, January 27, 2016 at 11:23:26 PM UTC+1, 
> stephe...@bigcommerce.com wrote:
>>
>> Hey all—
>>
>> It appears as of right now you can only create CLIENTS in PHP, but not 
>> servers. I was wondering what the technical blockers behind this were and 
>> if it's on the roadmap for a future release?
>>
>> Thanks!
>>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/826680e7-a15d-419f-8e89-f465937d12d2%40googlegroups.com.


[grpc-io] Re: Is it possible to setup gRPC on board

2019-06-05 Thread 'Carl Mastrangelo' via grpc.io
We don't support C directly; it is only used to interact across language 
boundaries (such as plugging into Python, Ruby, PHP, etc.).

That said, gRPC does run in a lot of environments, and on varied hardware.  
I suspect you could get it working.

On Tuesday, May 28, 2019 at 1:38:25 AM UTC-7, trun...@gmail.com wrote:
>
> In the introduction about gRPC, it says it is possible to support C++, Java, 
> etc., but C doesn't seem to be mentioned. Is it possible to set up gRPC in C 
> on hardware (an ESP32 board)?
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/c0c59410-74c1-4f64-bfd2-932e0d1a1511%40googlegroups.com.


[grpc-io] Re: c++ empty response async example

2019-06-05 Thread 'Carl Mastrangelo' via grpc.io
Can you build the examples that come with gRPC? Your error looks like you 
might not have protobuf installed.

On Thursday, May 30, 2019 at 8:53:40 AM UTC-7, 윤석영 wrote:
>
> Can I get example code for a C++ empty response in async mode?
>
> My current code causes a compile error. 
>
> 
> struct AsyncClientCall {
> google.protobuf.Empty reply;
> ClientContext context;
> Status status;
> std::unique_ptr> 
> response_reader;
> };
>
>
>
> In file included from ./src/CommGrpcService.cpp:1:0:
> ./hdr/CommGrpcService.h:28:9: error: 'google' does not name a type
>  google.protobuf.Empty reply;
>  ^
> ./hdr/CommGrpcService.h:31:67: error: template argument 1 is invalid
>  std::unique_ptr> 
> response_reader;
>^
> ./hdr/CommGrpcService.h:31:72: error: template argument 1 is invalid
>  std::unique_ptr> 
> response_reader;
> ^
> ./hdr/CommGrpcService.h:31:72: error: template argument 2 is invalid
>
>
>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/5929f668-98ca-4bac-9d10-467cb8faf520%40googlegroups.com.


[grpc-io] Re: Building gRPC, Protobuf, etc with SCons

2019-06-05 Thread 'Carl Mastrangelo' via grpc.io
Hi,

We currently don't support SCons, and don't know of anyone else who has 
used it with gRPC.  We are not opposed to accepting contributions that 
generate the configuration for SCons from our source of truth (a YAML 
file which we use to generate Make, CMake, and Bazel build files).   

On Thursday, May 30, 2019 at 1:44:49 PM UTC-7, nathan...@gmail.com wrote:
>
> Hello,
>
> I work at a company that relies heavily on SCons for its C++ build tooling 
> (don't ask me why). I've found some examples of generating Protocol Buffer 
> files with SCons:
>
> https://bitbucket.org/scons/scons/wiki/ProtocBuilder
>
> But I can't find any working examples searching online for building the 
> gRPC C/C++ libraries and *.grpc.pb.cc, *.grpc.pb.h files, etc. using 
> SCons. My immediate goal is to build the helloworld example with 
> SCons as a proof of concept.
>
> Asking the community for any references/instructions if you are aware of 
> any.
>
> - NM
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/347c291c-9e17-4447-b1b4-827c9c2138a6%40googlegroups.com.


[grpc-io] Re: When grpc client buffers in HTTP/2

2019-06-05 Thread 'Carl Mastrangelo' via grpc.io
Hi, 

The deadline starts as soon as the call gets from the stub into the core 
library (i.e. after it passes through all interceptors).  The "deadline" is a 
specific point in time after which the RPC is no longer valid.  When you 
set a deadline on an RPC, the buffering doesn't come into account, since 
buffered or not, that point in time has not changed.  Compare this 
to the similar "timeout", which is a relative amount of time and does get 
affected by buffering.  gRPC exposes the former to make reasoning about 
when the RPC expires simpler.
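The distinction is just arithmetic: a deadline is fixed at RPC creation, so any buffering delay eats into the remaining time, whereas a timeout restarted at send-time would not. A small sketch (plain Java, illustrative names only):

```java
public class DeadlineVsTimeout {
    // A deadline is an absolute instant, fixed when the RPC is created.
    public static long deadlineNanos(long nowNanos, long timeoutNanos) {
        return nowNanos + timeoutNanos;
    }

    // Time remaining once some delay (e.g. client-side buffering) has elapsed;
    // the deadline itself never moves.
    public static long remainingNanos(long deadlineNanos, long nowNanos) {
        return deadlineNanos - nowNanos;
    }
}
```

So with a 2-second budget, 1.5 seconds spent buffering before the message hits the wire leaves only 0.5 seconds for the server and the network.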

On Thursday, May 30, 2019 at 6:12:01 PM UTC-7, hitti...@gmail.com wrote:
>
> Recently, I asked a question about when the deadline starts to count down (grpc 
> Issue #19155 ). One person told me 
> that the time taken for buffering in HTTP/2, TCP and the network is all 
> part of the deadline countdown. But when does the grpc client buffer in 
> HTTP/2? Does the client buffer the request when the user calls 
> `stub->Echo(&context, request, response)`? I hope anyone acquainted with 
> that can help me. Thanks! 
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/da8aa1fa-527d-4932-b6eb-eeaa383bc84f%40googlegroups.com.


[grpc-io] Re: Relying on IllegalStateException: Call already half-closed

2019-06-05 Thread 'Carl Mastrangelo' via grpc.io
There is no side effect.  The gRPC library checks the state before making 
any mutations.  It's still a bug to call it twice.  What is the use case?
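If application code can't easily guarantee a single completion, a small guard can make the forwarding idempotent. This is a hypothetical wrapper, not part of grpc-java, and calling onCompleted twice remains a bug worth fixing at the source:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical helper: forwards a completion callback at most once,
// so a racy second call is swallowed instead of throwing.
public class CompleteOnce {
    private final AtomicBoolean completed = new AtomicBoolean();
    private final Runnable onCompleted;

    public CompleteOnce(Runnable onCompleted) {
        this.onCompleted = onCompleted;
    }

    /** Returns true if this call actually forwarded the completion. */
    public boolean complete() {
        if (completed.compareAndSet(false, true)) {
            onCompleted.run();
            return true;
        }
        return false;  // already half-closed; ignore the duplicate
    }
}
```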

On Tuesday, June 4, 2019 at 2:18:03 AM UTC-7, changxu wrote:
>
> Hi all,
>
> I wanted to know if sending second onCompleted on already half-closed rpc 
> might have any side effects beyond throwing IllegalStateException? 
>



Re: [grpc-io] Re: Standardization of rich error reporting via google.rpc.Status

2019-05-31 Thread 'Carl Mastrangelo' via grpc.io
A PR would be great.  I would be happy to review it, and find some other 
people to look as well.  

On Thursday, May 30, 2019 at 7:57:44 PM UTC-7, Chris Toomey wrote:
>
> What I was thinking of was something short and high level, added as a 
> section to https://grpc.io/docs/guides/error/ to 1) make users aware that 
> it's not just them that's thinking "wow, gRPC's error model is REALLY 
> limited, how am I going to ...", and 2) mentioning some approaches that 
> have been used to address this.
>
> I'd be happy to take a stab at it and submit a PR, but something like this:
>
> *Options for Implementing Richer Error Models*
>
> The error model described above is the only official gRPC error model and 
> is supported by all gRPC client/server libraries and is independent of the 
> gRPC data format (whether protocol buffers or something else). As such it 
> is necessarily very limited and lacks the ability to communicate error 
> details in a standard way.
>
> If you're using protocol buffers as your data format, however, many of the 
> gRPC libraries now support the richer error model developed and used by 
> Google as described here <https://cloud.google.com/apis/design/errors>. 
> If you're not using protocol buffers, but do want to continue supporting 
> the standard over-the-wire gRPC error model, you could similarly use gRPC 
> response metadata to convey error details by documenting your 
> representation model and optionally creating helper libraries to augment 
> the gRPC libraries and assist with producing and consuming the error 
> details.
>
> Some things to be aware of if you adopt such an extended error model, 
> however are ...
>
> On Thu, May 30, 2019 at 10:17 AM 'Carl Mastrangelo' via grpc.io <
> grp...@googlegroups.com > wrote:
>
>> Because writing good documentation is time-consuming :)   At the time I 
>> joined the gRPC team, there was an HTTP RFC in the works for standardizing 
>> on a JSON error format (it didn't pan out).  Some of the original authors 
>> were exploring using it, and defining conversions between Google's own types 
>> and the "standard"-to-be.
>>
>> I wouldn't mind publishing *my* experience error handling (and I do have 
>> a lot), but it's a higher bar to speak on behalf of the rest of the team.  
>> There are some big issues that have answers that go either way:
>>
>> 1.  Should gRPC use HTTP error codes in addition to the one from the 
>> status?  Some of them seem easy enough to write (UNIMPLEMENTED -> 404). 
>> Some are more difficult (UNKNOWN -> ??   500?  )  
>> 2.  What should happen with packed error values in google.rpc.Status?  
>>  Should they be expanded into HTTP fields?Should they remain in the 
>> proto so processors can skip parsing?  
>>
>> Those are the two I know of off the top of my head, and there's a bunch of 
>> sub problems in that space that you can see gRPC *did* take a stance on 
>> (like inner error codes and messages matching the encapsulating message's 
>> error codes).   Reasonable people could go either way on several of these 
>> issues, and different environments will bias people's answers.  (If you 
>> work in Go exclusively, you might think stack traces in errors are 
>> wasteful, if you work in Java, you might be more accepting).
>>
>>
>> I am not sure this stuff answers your question.   I really wish gRPC 
>> had a mini design series talking about the technical decisions we have made 
>> over the years.  But as I started with: writing takes time.
>>
>>
>> On Wednesday, May 29, 2019 at 9:57:39 PM UTC-7, Chris Toomey wrote:
>>>
>>> Why the reluctance to publish guidance on this key aspect of gRPC API 
>>> design?
>>>
>>> Has anybody besides Google built a gRPC/protobuf API that provides rich 
>>> error reporting? How did you implement it, and would you have benefitted 
>>> from some published guidance when you started?
>>>
>



[grpc-io] Re: Standardization of rich error reporting via google.rpc.Status

2019-05-30 Thread 'Carl Mastrangelo' via grpc.io
Because writing good documentation is time-consuming :)   At the time I 
joined the gRPC team, there was an HTTP RFC in the works for standardizing 
on a JSON error format (it didn't pan out).  Some of the original authors 
were exploring using it, and defining conversions between Google's own types 
and the "standard"-to-be.

I wouldn't mind publishing *my* experience error handling (and I do have a 
lot), but it's a higher bar to speak on behalf of the rest of the team.  
There are some big issues that have answers that go either way:

1.  Should gRPC use HTTP error codes in addition to the one from the 
status?  Some of them seem easy enough to write (UNIMPLEMENTED -> 404). 
Some are more difficult (UNKNOWN -> ??   500?  )  
2.  What should happen with packed error values in google.rpc.Status?  
 Should they be expanded into HTTP fields?Should they remain in the 
proto so processors can skip parsing?  

Those are the two I know of off the top of my head, and there's a bunch of 
sub problems in that space that you can see gRPC *did* take a stance on 
(like inner error codes and messages matching the encapsulating message's 
error codes).   Reasonable people could go either way on several of these 
issues, and different environments will bias people's answers.  (If you 
work in Go exclusively, you might think stack traces in errors are 
wasteful, if you work in Java, you might be more accepting).


I am not sure this stuff answers your question.   I really wish gRPC 
had a mini design series talking about the technical decisions we have made 
over the years.  But as I started with: writing takes time.


On Wednesday, May 29, 2019 at 9:57:39 PM UTC-7, Chris Toomey wrote:
>
> Why the reluctance to publish guidance on this key aspect of gRPC API 
> design?
>
> Has anybody besides Google built a gRPC/protobuf API that provides rich 
> error reporting? How did you implement it, and would you have benefitted 
> from some published guidance when you started?
>



[grpc-io] Re: Standardization of rich error reporting via google.rpc.Status

2019-05-23 Thread 'Carl Mastrangelo' via grpc.io
gRPC is not directly a Google product; it has been donated to the CNCF.   Google 
has use cases where google.rpc.Status is sufficient, but there are a LOT of 
other companies that have different use cases and requirements.  The Proto 
status can be impractical on Android for instance, where Protos cause large 
binary bloat and method count.  

Also, recall that gRPC uses HTTP/2, which means trailers typically will 
contain the status.  Proxies, logging, and other request processors would 
have to unpack the proto to use the fields.  This makes it hard to use 
prebuilt tools to monitor the traffic.  As an example, consider two 
approaches to expressing a DEADLINE_EXCEEDED scenario.   In this scenario, 
your monitoring wants to know if the deadline was exceeded due to the 
server timing out, or because one of the dependent backends of your server 
timed out.   If it was the first, it should alert your monitoring, in the 
second case, the dependent backend should be alerting.   To express this 
dichotomy, you include either a custom field in status Proto, or a custom 
header in your gRPC trailing Metadata.  Lastly, suppose you have a proxy 
(like Envoy, or nginx) that can see the responses of your server and fire 
alerts.  

In the proto scenario, your proxy (or other alerter) cannot look into the 
fields, so it has to assume all DEADLINE_EXCEEDED errors are noteworthy, 
since it cannot inspect the proto.  In the trailing Metadata scenario, a 
lot more tooling can inspect the headers and make a decision.   

This scenario is a little contrived, but coming back:  If we standardized 
on the Proto, it would exclude some valid use cases, for a proto 
dependency, and make gRPC not protocol agnostic.   Google /can/ decide for 
all of its code that Proto is okay, but it cannot dictate that it is right for 
everyone.   (what about the gRPC+Thrift users?)
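A rough sketch of the trailing-metadata approach described above, with a plain Map standing in for gRPC Metadata and a made-up `deadline-source` trailer key (the key name and defaulting behavior are assumptions for illustration):

```java
import java.util.Map;

// Sketch: a proxy or alerter can branch on a plain ASCII trailer
// without parsing any proto.
public class DeadlineSourceCheck {
    static final String DEADLINE_SOURCE_KEY = "deadline-source";  // hypothetical custom trailer

    /** True if this DEADLINE_EXCEEDED originated at the server itself (alert us);
        false if a dependent backend timed out (its owners should alert). */
    static boolean shouldAlert(Map<String, String> trailers) {
        // Assume "server" when the trailer is absent.
        return "server".equals(trailers.getOrDefault(DEADLINE_SOURCE_KEY, "server"));
    }

    public static void main(String[] args) {
        System.out.println(shouldAlert(Map.of(DEADLINE_SOURCE_KEY, "server")));   // true
        System.out.println(shouldAlert(Map.of(DEADLINE_SOURCE_KEY, "backend")));  // false
    }
}
```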

On Wednesday, May 22, 2019 at 7:14:15 PM UTC-7, Chris Toomey wrote:
>
> Sure, but if people are working on a concerted effort to support 
> google.rpc.Status in a bunch of 
> grpc language libraries (Java, Go, grpc-web/JS, ...) so that developers 
> building grpc apps with those languages can use it for richer error 
> reporting, this effort should be publicized so that developers can take 
> advantage of it when appropriate.
>
> If it's good enough for Google Cloud to standardize on, why shouldn't it 
> be documented for others to standardize on?
>
> On Wednesday, May 22, 2019 at 11:26:23 AM UTC-7, Penn (Dapeng) Zhang wrote:
>>
>> The grpc core library io.grpc:grpc-core does 
>> not and should not depend on protobuf, so we cannot use 
>> google.rpc.Status as a 
>> replacement of io.grpc.Status at 
>> the core library level. 
>>
>



[grpc-io] Re: Retry to establish TCP Connection

2019-05-23 Thread 'Carl Mastrangelo' via grpc.io
The client does reconnect, but the RPC may fail.  RPCs that have not yet 
started will likely be sent to a different backend instead of waiting on a 
re-established connection.  The number of retries is not limited.  Instead, 
the deadline on the RPC controls how long before the RPC fails.  The 
connection will be retried indefinitely, using exponential back off (and 
assuming you are not using a custom load balancer, which allows you to more 
finely control this behavior).

There is no public way to control the connection retry behavior, but there 
is an in-progress feature to control RPC retries.
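As I recall, the gRPC connection-backoff doc uses roughly these parameters: an initial 1s delay, a 1.6x multiplier, and a 120s cap (real clients also apply roughly ±20% jitter, omitted here for determinism). A sketch of the resulting reconnect schedule:

```java
// Sketch of the exponential backoff schedule between reconnect attempts.
// Parameters are from the gRPC connection-backoff doc as I recall them;
// jitter is intentionally left out so the output is deterministic.
public class ReconnectBackoff {
    static double[] schedule(int attempts) {
        double[] delays = new double[attempts];
        double backoff = 1.0;  // seconds
        for (int i = 0; i < attempts; i++) {
            delays[i] = backoff;
            backoff = Math.min(backoff * 1.6, 120.0);  // grow 1.6x, capped at 120s
        }
        return delays;
    }

    public static void main(String[] args) {
        for (double d : schedule(8)) {
            System.out.printf("%.2fs ", d);  // 1.00s 1.60s 2.56s 4.10s ...
        }
    }
}
```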

On Wednesday, May 22, 2019 at 7:07:24 PM UTC-7, 윤석영 wrote:
>
> If TCP connection is disconnected (due to Server Down, etc.) after TCP 
> connection is established.
>
>  - does Client perform TCP connection retry operation?
>  - If so, is the number of retries or time limited?
>  - And can I set the retry cycle and the number of times?
>



[grpc-io] Re: How can GRPC server know if client is down when data is streaming from client to server .

2019-05-21 Thread 'Carl Mastrangelo' via grpc.io
You can tell if a client is done using ServerTransportFilter, but you 
cannot use it to associate if a particular RPC is associated with the 
transport.  You can only tell if a client RPC is done (successful or not) 
using the StreamObserver api (and sub classes), or using the 
ServerCall.Listener API directly.

On Thursday, May 9, 2019 at 11:22:00 AM UTC-7, Sidhartha Thota wrote:
>
> Hi Team,
>
> I have a following situation. I have Java GRPC server listening for client 
> data. Client opens the connection and streams data to Server. It is 
> unidirectional stream from client to server or Client side streaming use 
> case.
>
> Client is able to know when server is down (when the write is failed) but 
> how can GRPC server know if a client is down? In precise, is any server 
> callback is called if a connected client is down? or is there any other 
> way, I can keep track of my clients from server end.
>
> Next, how can server know the client details which sent the data? Because, 
> onNext() [while listening to client data at server] method do not have any 
> info to know which client sent the corresponding data.
>
> -- 
> Thanks,
> Sidhartha Thota.
>



[grpc-io] Re: Where is the ServiceConfig proto definition?

2019-05-21 Thread 'Carl Mastrangelo' via grpc.io
Alas, there is no public proto.  There was one internal to Google, but when 
Service Config was open sourced the definition was made to be JSON to avoid 
taking a proto dependency.

The specification of it is structurally compatible to a Proto (using the 
proto3 JSON form), but gRPC doesn't use the proto form.  
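For illustration, a minimal service config in the JSON form (the service/method names and values here are placeholders, not a canonical example):

```json
{
  "methodConfig": [{
    "name": [{ "service": "helloworld.Greeter", "method": "SayHello" }],
    "timeout": "1.5s",
    "waitForReady": true
  }]
}
```

Note the `timeout` value uses the proto3 JSON encoding of `Duration`, which is what the doc's comments refer to.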

On Thursday, May 16, 2019 at 6:31:29 AM UTC-7, Josh Humphries wrote:
>
> The doc on service configs include phrases like so:
>
> // The format of the value is that of the 'Duration' type defined here:
> // https://developers.google.com/protocol-buffers/docs/proto3#json
>
> This makes it apparent that somewhere there is a canonical version of this 
> structure defined as a proto.
>
> The gRFC for health checks makes it even more obvious:
>
> We will then add the following top-level field to the ServiceConfig proto:
>
> HealthCheckConfig health_check_config = 4;
>
> However, I can find no definition for a ServiceConfig proto anywhere. I've 
> looked in a few Google and gRPC repos, including the google.rpc package in 
> the googleapis repo (
> https://github.com/googleapis/googleapis/tree/master/google/rpc).
>
> Furthermore, the Go runtime (maybe others?) uses an unstructured JSON blob 
> for providing this configuration, when a much better API would allow for 
> providing an actual structured type (such as the Go structs generated from 
> the ServiceConfig proto).
>
> I'm currently working on a package (not open-source, at least not yet) to 
> make it easy to configure services and have them expose their own configs 
> directly via an RPC interface (so instead of having to use DNS or other 
> service discovery mechanisms, a client can just ask the server for its 
> config via an RPC). That means I am creating a proto representation of it. 
> But it would be nice if I could just lean on some standard, canonical 
> definition of the proto instead.
>
> 
> *Josh Humphries*
> jh...@bluegosling.com 
>



[grpc-io] Re: gRPC-Java deadline exceeded scalability

2019-05-21 Thread 'Carl Mastrangelo' via grpc.io
Responses inline

On Tuesday, May 21, 2019 at 7:07:11 AM UTC-7, sut...@gmail.com wrote:
>
> Hi folks,
>
> I'm interested in understanding how gRPC-Java might behave in a 
> pathological situation where a vast number of client requests start timing 
> out causing a deadline exceeded avalanche. If a gRPC java client with 50k 
> "sessions" is partitioned away from the server, it might start seeing a 
> very large number deadline exceeded. Subsequent retry attempts might also 
> meet the same fate---adding another 50k deadline exceeded. 
>
> What might be an effect on gRPC java due to such an avalanche of deadline 
> exceeded? Are the deadlines reported to the caller in a timely fashion? 
> What might be the impact on well-behaved sessions?
>

They are reported.  Both the client and server know the deadline, so there 
is no risk of one side not being aware.  If you are using Netty, the 
deadlines are enforced using the EventLoopGroup's scheduler.  I'm not sure 
if they use Netty's HashedWheelTimer.
 

>
> Client might do clever things like waiting a random backoff before 
> retrying to avoid clustering. 
>

gRPC has an in progress retry mechanism that does auto backoff.  By 
default, there are no RPC retries, so you would need to do this at the 
application level today.
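Until built-in retries land, application-level retry usually looks something like this sketch (the backoff values are arbitrary, and `Callable` stands in for issuing the RPC):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

// Application-level retry sketch: randomized ("full jitter") exponential
// backoff between attempts, so retrying clients don't cluster.
public class RetryWithBackoff {
    static <T> T call(Callable<T> rpc, int maxAttempts) throws Exception {
        long backoffMillis = 100;  // arbitrary starting backoff
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return rpc.call();
            } catch (Exception e) {
                last = e;
                // Sleep a random amount up to the current backoff, then double it.
                Thread.sleep(ThreadLocalRandom.current().nextLong(backoffMillis + 1));
                backoffMillis = Math.min(backoffMillis * 2, 10_000);
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        String out = call(() -> {
            if (++calls[0] < 3) throw new RuntimeException("UNAVAILABLE");
            return "ok";
        }, 5);
        System.out.println(out + " after " + calls[0] + " attempts");  // ok after 3 attempts
    }
}
```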
 

>
> Does gRPC Java use a scalable approach to track deadlines? A paranoid 
> client might rely on its own HashedTimerWheel 
> -based 
> deadline exceeded reporting mechanism without relying on gRPC. Do you see a 
> need for such a thing?
>
> Regards,
> Sumant
>



[grpc-io] Re: Can we intercept an incoming grpc request (Java ) and send the response back gracefully

2019-05-02 Thread 'Carl Mastrangelo' via grpc.io
That sounds right.  You should really include an error message along with 
your status, but that is the way I would go about it.  I vaguely recall 
gRPC (the library) doing the same thing, but I can't find it at the moment.

On Wednesday, May 1, 2019 at 2:04:35 PM UTC-7, anu@gmail.com wrote:
>
> Hi Friends,
>
> We have gRPC server which receives the request, and based upon one 
> specific validation, we want to return the request without sending it to 
> service.
>
> We implemented the below class..
>
> public class MyInterceptor implements ServerInterceptor {
>
> @Override
> public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
> ServerCall<ReqT, RespT> call, Metadata headers, 
> ServerCallHandler<ReqT, RespT> next) {
>
>if(!someCondition) {
>  call.close(Status.UNKNOWN, headers);
>  return new ServerCall.Listener<ReqT>() {};   
>
>}
>
> return next.startCall(call, headers);
>
> }
>
> } 
>
> the problem with this code is that, at client end it gives the below error 
> trace..
> Exception in thread "main" io.grpc.StatusRuntimeException: UNKNOWN
>
> can we send the proper response with error code so that client will get 
> specific error code and not the above error message
>
> Thanks in advance for your help and advice.
> Regards
> Anurag
>



[grpc-io] Re: Which k8s ingress controller is "blessed" by gRPC community?

2019-05-02 Thread 'Carl Mastrangelo' via grpc.io
I haven't personally run with it, but there is active development from 
Google on Envoy, so it typically has good support.  Any of those with Envoy 
support would be what I would look at first.  (again, taking into account 
my inexperience).

On Tuesday, April 30, 2019 at 2:54:01 PM UTC-7, nathan...@gmail.com wrote:
>
> Hello,
>
> I've been using the Kubernetes nginx-ingress controller 
>  for exposing my gRPC 
> servers with clients outside the cluster. However, I have had a somewhat 
> disappointing experience using the nginx-ingress and have filed these 
> issues accordingly:
>
>- one connection to ingress, multiple connections to pods 
>
>- disproportionate load balancing 
>
>
> There are several options 
> 
>  
> other than the nginx. For developers running gRPC servers on Kubernetes, 
> what have you found is the best ingress controller solution for your 
> applications?
>



[grpc-io] Re: Grpc Java Server Side Streaming server stream cancellation detection

2019-05-02 Thread 'Carl Mastrangelo' via grpc.io
You can turn on keep alives on the channel / server builder, but that's 
about the best you can do.  The problem is that a broken connection just 
means silence.  You can't tell the difference between the packets taking a 
long time, and the packets not arriving.  Keep-alives let you check the 
connection is still active, and close it after a time out (like 10s).  

On Tuesday, April 30, 2019 at 9:10:22 AM UTC-7, Isuru Samaraweera wrote:
>
> Hi All,
> I am using grpc java with serverside streaming.In the grpc server side 
> streaming service I am doing streaming such as below to detect stream 
> cancellations due to network errors.
>
> while(true) {
> if(!Context.current().isCancelled()) {
> responseObserver.onNext(response);
> } else break;
>
> The issue is that when I disconnect the client, onNext is still invoked around 100 
> times before the network failure is detected by Context.current().isCancelled().
>
> To prevent this I did as below.
> while(true) {
> Thread.sleep(500)
> if(!Context.current().isCancelled()) {
> responseObserver.onNext(response);
> } else break;
>
> Now it seems isCancelled is detected before onNext is called when network 
> between client and server is disrupted. This implies cancel=true is updated 
> by a separate thread apart from main thread??
>
>
> Is there any better way of doing this using GRPC Java?
>
> Thanks
> Isuru
>
>
>
>
>
>



[grpc-io] Re: how to bind grpc client to specific network interface

2019-04-26 Thread 'Carl Mastrangelo' via grpc.io
What language?

On Friday, April 26, 2019 at 2:58:08 PM UTC-7, 
joe.p...@decisionsciencescorp.com wrote:
>
> Is it possible to pass a parameter to grpc that allows clients to pick a 
> specific network interface on a computer with multiple ethernet interfaces? 
>



[grpc-io] Re: We made a gRPC GUI Client

2019-04-19 Thread 'Carl Mastrangelo' via grpc.io
Looks neat!  You should post it too on twitter.  

On Thursday, April 18, 2019 at 9:44:03 AM UTC-7, roca...@gmail.com wrote:
>
> For those interested in gRPCs - myself and a couple of developer friends 
> put together a gRPC GUI client! Here are some details.
>
> - It's written in TS, aimed at Javascript denizens interested in gRPCs and 
> looking for an easier way to dabble and test.
>
> - We have compiled binaries, courtesy of Electron.
>
> - It has basic functionality. There's a long way to go, but you can 
> configure messages and mock basic requests.
>
> - If you care to contribute,
>
> https://muninrpc.dev/
>
> www.github.com/muninrpc/muninrpc
>
> Let us know what you think!
>



[grpc-io] Re: Java gRPC: multiple bidirectional streams, onNext blocking problems

2019-04-12 Thread 'Carl Mastrangelo' via grpc.io
What executor are you using?  If you are using a fixed threadpoolexecutor, 
having multiple hanging calls could starve the other ones.   Do you notice 
the same blocking using ForkJoinPool or cached threadpoolexecutor? 
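The starvation described here is easy to reproduce with plain executors, independent of gRPC: a single-thread fixed pool lets one blocked callback stall every other callback, while a cached pool keeps spawning threads. A sketch:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// With a fixed pool of 1 thread, one blocked task prevents any other task
// from starting; a cached pool creates a new thread, so others still run.
public class PoolStarvation {
    // Returns true if a second task manages to start while the first is blocked.
    static boolean secondTaskStarts(ExecutorService pool) throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        CountDownLatch secondStarted = new CountDownLatch(1);
        pool.execute(() -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        });
        pool.execute(secondStarted::countDown);
        boolean started = secondStarted.await(200, TimeUnit.MILLISECONDS);
        release.countDown();  // unblock the first task and clean up
        pool.shutdown();
        return started;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("fixed(1): " + secondTaskStarts(Executors.newFixedThreadPool(1)));   // false
        System.out.println("cached:   " + secondTaskStarts(Executors.newCachedThreadPool()));   // true
    }
}
```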

On Thursday, April 11, 2019 at 9:07:54 PM UTC-7, matt.m...@lucidworks.com 
wrote:
>
> Thanks Carl. I have tried using various executors on the client channel 
> and stub, but I still see that if an onNext thread blocks, all other client 
> streams go into a waiting state. By all other client threads, I mean that 
> when the client JVM starts up, it sends multiple bidi calls and those 
> streams sit around, waiting to respond to server messages, which usually 
> results in the clients sending lots of messages back. The only way I've been 
> able to get the client app threads working really well (all threads busy) 
> is to immediately push messages from the client's onNext into a queue + 
> return asap. From there, a client app thread pool pulls from the queue and 
> eventually begins sending messages back.
>
> - Matt
>
> On Tuesday, April 9, 2019 at 12:36:48 PM UTC-4, Carl Mastrangelo wrote:
>>
>> StreamObserver is not threadsafe, and they need external synchronization 
>> to be shared.   
>>
>> Also, gRPC threads are provided by your application when you pass an 
>> Executor to the ManagedChannel or the Server.   You can control that 
>> blocking.
>>
>> On Saturday, April 6, 2019 at 10:09:10 AM UTC-7, Matt Mitchell wrote:
>>>
>>> Hi Carl,
>>>
>>> I meant to get back to you sooner, but while building up an example I 
>>> figured out what the problem was. All of the gRPC onNext threads were 
>>> performing the application work, and blocking until done. Some of this work 
>>> involves reading from file systems, network calls etc..
>>> During that work, the gRPC thread is then blocked, and if the blocking 
>>> lasts for more than just a second or so, all other gRPC/onNext threads halt 
>>> as well.
>>>
>>> I do have other questions on the best way for application threads to 
>>> "claim" a StreamObserver though (there is a "pool" of them). Most examples 
>>> I've seen use synchronized, and maybe that'll be fine, but I could see 
>>> something like a ring buffer / circular queue working well for this too?
>>>
>>> - Matt 
>>>
>>> On Thursday, March 28, 2019 at 4:20:18 PM UTC-4, Carl Mastrangelo wrote:

 I'm having a hard time understanding your example, can you provide a 
 sample snippet of your sever side StreamObserver?

 On Tuesday, March 26, 2019 at 9:54:30 AM UTC-7, Matt Mitchell wrote:
>
> Hello. I'm debugging bidirectional streaming application. A server 
> thread obtains a client StreamObserver, sends a message and waits 
> (countDownLatch), while a call back receives and processes client 
> messages 
> until it receives a "done" message. When "done", countdown() is called on 
> the latch and the StreamObserver is made available to other server 
> threads.
>
> When a client receives a call, it blocks while also emitting to the 
> serverStream, ending with the "done" message. That is when the 
> onNext(serverMessage) call becomes unblocked. This blocking causes all 
> other threads to go into a WAITING state. Even streams created from 
> different channels/thread-pools have this problem. I'm guessing this is 
> expected behavior, having to do with the event execution? How can this be 
> avoided?
>
> Thanks,
> - Matt
>
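The hand-off described above (onNext enqueues and returns immediately, while an application pool drains the queue) can be sketched without any gRPC types:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the hand-off pattern: the gRPC callback thread only enqueues
// and returns, so it never blocks on application work.
public class HandOff {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Called from the gRPC thread (e.g. inside StreamObserver.onNext): cheap, non-blocking.
    public void onNext(String message) {
        queue.add(message);
    }

    // Called from an application worker pool: may block as long as it likes.
    public List<String> drain(int expected) throws InterruptedException {
        List<String> out = new ArrayList<>();
        while (out.size() < expected) {
            out.add(queue.take());
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        HandOff h = new HandOff();
        h.onNext("a");
        h.onNext("b");
        System.out.println(h.drain(2));  // [a, b]
    }
}
```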




[grpc-io] Re: Posting example code for asynchronous streaming implementation [subscription based services]

2019-04-10 Thread 'Carl Mastrangelo' via grpc.io
You can normally get multiple wakeups from gRPC.  I think it is documented 
implicitly by the wire protocol.  

On Wednesday, April 10, 2019 at 6:12:50 AM UTC-7, windf...@gmail.com wrote:
>
> This really helped me, just wanted to thank you for it.  The trick I was 
> missing is the multiple wake ups on the queue (CREATE, PROCESS, PROCESSED, 
> FINISH).  Does anyone know where this is documented?
>
> Thanks,
> -Alex
>
> On Thursday, April 27, 2017 at 2:10:56 AM UTC-4, Kuldeep Melligeri wrote:
>>
>> While implementing asynchronous streaming gRPC, there are no straightforward 
>> examples that can be used; it was a hard time for me to implement it. 
>> Now that I have implemented the hello world version of async stream 
>> version, I thought I will share.
>>
>> In this example, the client requests a stream of replies by initiating an 
>> asynchronous RPC, and the server responds with 5 replies; after the server 
>> is done with the 5 replies it closes the RPC by calling Finish, and then 
>> the client closes the RPC.
>>
>> Hope it will be helpful.
>>
>> Thanks
>> Kuldeep
>>
>> *Protocol Buffer:*
>> // The greeting service definition.
>> service Greeter {
>>   // Sends a greeting
>>   rpc SayHello (HelloRequest) returns (stream HelloReply) {}
>> }
>>
>> // The request message containing the user's name.
>> message HelloRequest {
>>   string name = 1;
>> }
>>
>> // The response message containing the greetings
>> message HelloReply {
>>   string message = 1;
>> }
>>
>> Server Code:
>> class ServerImpl final {
>> public:
>> ~ServerImpl() {
>> server_->Shutdown();
>> // Always shutdown the completion queue after the server.
>> cq_->Shutdown();
>> }
>>
>> // There is no shutdown handling in this code.
>> void Run() {
>> std::string server_address("0.0.0.0:50051");
>>
>> ServerBuilder builder;
>> // Listen on the given address without any authentication 
>> mechanism.
>> builder.AddListeningPort(server_address, 
>> grpc::InsecureServerCredentials());
>> // Register "service_" as the instance through which we'll 
>> communicate with
>> // clients. In this case it corresponds to an *asynchronous* 
>> service.
>> builder.RegisterService(&service_);
>> // Get hold of the completion queue used for the asynchronous 
>> communication
>> // with the gRPC runtime.
>> cq_ = builder.AddCompletionQueue();
>> // Finally assemble the server.
>> server_ = builder.BuildAndStart();
>> std::cout << "Server listening on " << server_address << 
>> std::endl;
>>
>> // Proceed to the server's main loop.
>> HandleRpcs();
>> }
>>
>> private:
>> // Class encompassing the state and logic needed to serve a request.
>> class CallData {
>> public:
>> // Take in the "service" instance (in this case representing an 
>> asynchronous
>> // server) and the completion queue "cq" used for asynchronous 
>> communication
>> // with the gRPC runtime.
>> CallData(Greeter::AsyncService* service, ServerCompletionQueue* 
>> cq)
>> : service_(service), cq_(cq), repliesSent_(0), 
>> responder_(&ctx_), status_(CREATE) {
>> // Invoke the serving logic right away.
>> Proceed();
>> }
>>
>> void Proceed() {
>> if (status_ == CREATE) {
>> // Make this instance progress to the PROCESS state.
>> status_ = PROCESS;
>> std::cout << "Creating Call data for new client 
>> connections: " << this << std::endl;
>> // As part of the initial CREATE state, we *request* that 
>> the system
>> // start processing SayHello requests. In this request, 
>> "this" acts as
>> // the tag uniquely identifying the request (so that 
>> different CallData
>> // instances can serve different requests concurrently), 
>> in this case
>> // the memory address of this CallData instance.
>> service_->RequestSayHello(&ctx_, &request_, &responder_, 
>> cq_, cq_,
>>   (void*) this);
>> } else if (status_ == PROCESS) {
>> // Spawn a new CallData instance to serve new clients 
>> while we process
>> // the one for this CallData. The instance will 
>> deallocate itself as
>> // part of its FINISH state.
>> new CallData(service_, cq_);
>>
>> // The actual processing.
>> std::string prefix("Hello ");
>> reply_.set_message(prefix + request_.name() +
>> std::to_string(repliesSent_ + 1));
>> std::cout << "Sending response: " << this << " : " << 
>> reply_.message() << std::endl;
>> responder_.Write(reply_, this);
>> status_ = PROCESSING;
>> repliesSent_++;
>>

[grpc-io] Re: Load balancing over pool of bidirectional streams

2019-04-09 Thread 'Carl Mastrangelo' via grpc.io
Not sure what you mean.  gRPC Channels are load balanced, but not the 
messages on individual RPCs.  If you repeatedly fire unary RPCs on your 
stub (or Channel) they will be load balanced across all the backends.
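The distinction can be sketched in plain Java (the names below are hypothetical; in a real client the equivalent choice is made by a round_robin LoadBalancer once per RPC, not once per message):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class Main {
    // Hypothetical round-robin picker: each *call* (not each message on a
    // stream) picks the next backend in turn, which is roughly what the
    // round_robin policy does per RPC.
    static final List<String> backends =
            List.of("10.0.0.1:50051", "10.0.0.2:50051", "10.0.0.3:50051");
    static final AtomicInteger next = new AtomicInteger();

    static String pickBackendForCall() {
        return backends.get(Math.floorMod(next.getAndIncrement(), backends.size()));
    }

    public static void main(String[] args) {
        // Three unary RPCs land on three different backends...
        for (int i = 0; i < 3; i++) {
            System.out.println("unary call " + i + " -> " + pickBackendForCall());
        }
        // ...but every message on one streaming RPC shares the backend
        // chosen when the call started.
        String streamBackend = pickBackendForCall();
        System.out.println("stream messages -> " + streamBackend);
    }
}
```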

On Saturday, April 6, 2019 at 10:21:41 AM UTC-7, Matt Mitchell wrote:
>
> Hi. Does gRPC Java now provide ways to load balance across a set of 
> StreamObservers? I came across this: 
> https://github.com/grpc/grpc/blob/master/src/proto/grpc/lb/v1/load_balancer.proto
>  and 
> wondering if there might be a bidi LB example which implements that?
>
> Thanks,
> - Matt
>



[grpc-io] Re: Java gRPC: multiple bidirectional streams, onNext blocking problems

2019-04-09 Thread 'Carl Mastrangelo' via grpc.io
StreamObserver is not threadsafe, and they need external synchronization to 
be shared.   

Also, gRPC threads are provided by your application when you pass an 
Executor to the ManagedChannel or the Server.   You can control that 
blocking.
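A minimal sketch of what "external synchronization" can look like. To keep the example self-contained it declares a simplified stand-in for io.grpc.stub.StreamObserver (the real interface lives in grpc-stub); the wrapper class name is hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for io.grpc.stub.StreamObserver.
interface StreamObserver<T> {
    void onNext(T value);
    void onCompleted();
}

// One way to share a non-thread-safe StreamObserver across threads:
// funnel every call through a single lock.
final class SynchronizedStreamObserver<T> implements StreamObserver<T> {
    private final StreamObserver<T> delegate;
    private final Object lock = new Object();

    SynchronizedStreamObserver(StreamObserver<T> delegate) {
        this.delegate = delegate;
    }

    @Override public void onNext(T value) {
        synchronized (lock) { delegate.onNext(value); }
    }

    @Override public void onCompleted() {
        synchronized (lock) { delegate.onCompleted(); }
    }
}

public class Main {
    public static void main(String[] args) throws Exception {
        List<String> sent = new ArrayList<>();
        StreamObserver<String> raw = new StreamObserver<String>() {
            @Override public void onNext(String v) { sent.add(v); }
            @Override public void onCompleted() { sent.add("<done>"); }
        };
        StreamObserver<String> safe = new SynchronizedStreamObserver<>(raw);
        Thread t1 = new Thread(() -> safe.onNext("a"));
        Thread t2 = new Thread(() -> safe.onNext("b"));
        t1.start(); t2.start(); t1.join(); t2.join();
        safe.onCompleted();
        System.out.println(sent.size()); // two messages plus <done>
    }
}
```

Holding one lock per observer serializes the calls without blocking unrelated streams.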

On Saturday, April 6, 2019 at 10:09:10 AM UTC-7, Matt Mitchell wrote:
>
> Hi Carl,
>
> I meant to get back to you sooner, but while building up an example I 
> figured out what the problem was. All of the gRPC onNext threads were 
> performing the application work, and blocking until done. Some of this work 
> involves reading from file systems, network calls etc..
> During that work, the gRPC thread is then blocked, and if the blocking 
> lasts for more than just a second or so, all other gRPC/onNext threads halt 
> as well.
>
> I do have other questions on the best way for application threads to 
> "claim" a StreamObserver though (there is a "pool" of them). Most examples 
> I've seen use synchronized, and maybe that'll be fine, but I could see 
> something like a ring buffer / circular queue working well for this too?
>
> - Matt 
>
> On Thursday, March 28, 2019 at 4:20:18 PM UTC-4, Carl Mastrangelo wrote:
>>
>> I'm having a hard time understanding your example, can you provide a 
>> sample snippet of your server-side StreamObserver?
>>
>> On Tuesday, March 26, 2019 at 9:54:30 AM UTC-7, Matt Mitchell wrote:
>>>
>>> Hello. I'm debugging a bidirectional streaming application. A server 
>>> thread obtains a client StreamObserver, sends a message and waits 
>>> (countDownLatch), while a call back receives and processes client messages 
>>> until it receives a "done" message. When "done", countdown() is called on 
>>> the latch and the StreamObserver is made available to other server threads.
>>>
>>> When a client receives a call, it blocks while also emitting to the 
>>> serverStream, ending with the "done" message. That is when the 
>>> onNext(serverMessage) call becomes unblocked. This blocking causes all 
>>> other threads to go into a WAITING state. Even streams created from 
>>> different channels/thread-pools have this problem. I'm guessing this is 
>>> expected behavior, having to do with the event execution? How can this be 
>>> avoided?
>>>
>>> Thanks,
>>> - Matt
>>>
>>



[grpc-io] Re: Java gRPC: multiple bidirectional streams, onNext blocking problems

2019-03-28 Thread 'Carl Mastrangelo' via grpc.io
I'm having a hard time understanding your example, can you provide a sample 
snippet of your server-side StreamObserver?

On Tuesday, March 26, 2019 at 9:54:30 AM UTC-7, Matt Mitchell wrote:
>
> Hello. I'm debugging a bidirectional streaming application. A server thread 
> obtains a client StreamObserver, sends a message and waits 
> (countDownLatch), while a call back receives and processes client messages 
> until it receives a "done" message. When "done", countdown() is called on 
> the latch and the StreamObserver is made available to other server threads.
>
> When a client receives a call, it blocks while also emitting to the 
> serverStream, ending with the "done" message. That is when the 
> onNext(serverMessage) call becomes unblocked. This blocking causes all 
> other threads to go into a WAITING state. Even streams created from 
> different channels/thread-pools have this problem. I'm guessing this is 
> expected behavior, having to do with the event execution? How can this be 
> avoided?
>
> Thanks,
> - Matt
>



[grpc-io] Re: Return scalar type field instead of message

2019-03-20 Thread 'Carl Mastrangelo' via grpc.io
This kind of question would be better asked on StackOverflow.   The short 
answer is that Messages provide the backwards and forwards compatibility 
guarantees that Protobuf offers.  bools do not.
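For example, wrapping the result in a message leaves room to grow (a sketch; `SearchResponse` and its field are hypothetical names):

```proto
service SearchService {
  // Returning a message instead of a bare bool means new fields
  // (an error detail, a score, etc.) can be added later without
  // breaking existing clients.
  rpc Search (SearchRequest) returns (SearchResponse);
}

message SearchResponse {
  bool found = 1;
}
```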

On Wednesday, March 20, 2019 at 3:10:02 AM UTC-7, Martin Scholz wrote:
>
> Why it is not possible to return a unnamed value instead of defining a 
> message with a named parameter? For example 
>
> service SearchService {
>   rpc Search (SearchRequest) returns (bool);
> }
>



[grpc-io] Re: Deadlines with infinite streams

2019-03-18 Thread 'Carl Mastrangelo' via grpc.io
Deadlines apply to the whole RPC.  However, you can make an interceptor 
that cancels the RPC if a message hasn't been received in a time period.  
Without going into too much detail, there are lots of different ways to 
handle deadlines in the streaming case, so gRPC punts this up to the 
application.  For the (extremely common) case of Unary calls, the deadline 
is the right way to make sure the call eventually ends.
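The idea behind such an interceptor can be sketched without the gRPC API at all: a watchdog whose timer is pushed forward by every message, firing a cancel action only after the stream goes quiet. (Everything below is illustrative; in a real client this logic would sit in a ClientInterceptor and invoke ClientCall.cancel().)

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical per-stream idle watchdog: instead of a whole-RPC deadline,
// run onIdle when no message has arrived within idleMillis.
final class IdleWatchdog {
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();
    private final Runnable onIdle;
    private final long idleMillis;
    private ScheduledFuture<?> pending;

    IdleWatchdog(long idleMillis, Runnable onIdle) {
        this.idleMillis = idleMillis;
        this.onIdle = onIdle;
    }

    // Call from onNext(): every message resets the idle timer.
    synchronized void onMessage() {
        if (pending != null) pending.cancel(false);
        pending = timer.schedule(onIdle, idleMillis, TimeUnit.MILLISECONDS);
    }

    synchronized void shutdown() {
        if (pending != null) pending.cancel(false);
        timer.shutdownNow();
    }
}

public class Main {
    public static void main(String[] args) throws Exception {
        AtomicBoolean cancelled = new AtomicBoolean(false);
        IdleWatchdog w = new IdleWatchdog(100, () -> cancelled.set(true));
        for (int i = 0; i < 3; i++) {   // steady messages keep the stream alive
            w.onMessage();
            Thread.sleep(30);
        }
        System.out.println("after traffic: " + cancelled.get());
        Thread.sleep(250);              // silence longer than the idle window
        System.out.println("after silence: " + cancelled.get());
        w.shutdown();
    }
}
```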


On Monday, March 18, 2019 at 1:41:04 PM UTC-7, Mark Fine wrote:
>
> How do people manage deadlines for infinite streams? The documentation on 
> ClientContext set_deadline()
>
>
> https://grpc.io/grpc/cpp/classgrpc_1_1_client_context.html#ad4e16866fee3f6ee5a10efb5be6f4da6
>
> indicates that "This method should only be called before invoking the rpc" 
> - it doesn't seem possible to re-set the deadline throughout the lifecycle 
> of the connection. Are deadlines precluded from infinite streams? How do 
> people determine when they're infinite streams potentially get stalled? 
> Out-of-band detection?
>
> Thanks!
> Mark
>



[grpc-io] Re: GRPC Communication with multiple remote agents and also scale

2019-03-14 Thread 'Carl Mastrangelo' via grpc.io
Start 1000 Channels, send the messages, collect the responses.   gRPC 
should easily scale to that number.   Can you clarify more what you are 
looking for?
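The fan-out pattern itself is simple to sketch. Here a stub function stands in for the real async gRPC call so the example is self-contained; in practice each agent would get its own ManagedChannel and the request would go through a future stub:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Main {
    // Stand-in for an async gRPC call to one remote agent; it just echoes
    // the agent id so the sketch runs without any network.
    static CompletableFuture<String> callAgent(int agentId, ExecutorService pool) {
        return CompletableFuture.supplyAsync(() -> "reply-from-agent-" + agentId, pool);
    }

    public static void main(String[] args) throws Exception {
        int agents = 1000;
        ExecutorService pool = Executors.newFixedThreadPool(32);

        // Fire all requests in parallel...
        List<CompletableFuture<String>> futures = new ArrayList<>();
        for (int i = 0; i < agents; i++) {
            futures.add(callAgent(i, pool));
        }

        // ...then collect the responses.
        List<String> replies = new ArrayList<>();
        for (CompletableFuture<String> f : futures) {
            replies.add(f.join());
        }
        pool.shutdown();
        System.out.println(replies.size());
        System.out.println(replies.get(0));
    }
}
```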

On Wednesday, March 13, 2019 at 8:10:22 PM UTC-7, Vishal Pandya wrote:
>
> Can somebody please throw light on asynchronous solutions to how to 
> communicate to possibly thousands of remote agents using grpc? Basically a 
> Java process(sending requests, possibly scheduled) to a large number of 
> remote java processes deployed on a large number of remote machines, over 
> grpc. The request is sent to all  the agents in parallel and the responses 
> processed. Thank you.
>



Re: [grpc-io] Re: gRFC A3: Channel Tracing

2019-03-07 Thread 'Carl Mastrangelo' via grpc.io
Inline responses:

On Thursday, March 7, 2019 at 7:19:41 AM UTC-8, Mark D. Roth wrote:
>
> On Wed, Mar 6, 2019 at 2:29 PM 'Carl Mastrangelo' via grpc.io <
> grp...@googlegroups.com > wrote:
>
>> Mark: I would be interested in taking over this, assuming you don't have 
>> many more concerns.  
>>
>
> I'd be fine with you considering yourself the new owner of this design.  
> However, the gRFC has already been merged, so there's no need for more work 
> here unless there are changes we want to see.  (I do notice that the gRFC 
> still says it's in state "Draft", but this was supposed to have been 
> finalized a while ago, so I suspect that Noah simply forgot to change that 
> when he merged.)
>  
>
>>
>> I have two changes I would like to propose as well:
>>
>> 1.   Add a CT_DEBUG level between CT_UNKNOWN and CT_INFO.   ChannelTraces 
>> at the DEBUG level are *never* surfaced up to channelz, and implementations 
>> are free to not implement it.  However, implementations that do want to 
>> surface debug info may.  This is relevant for Java, maybe C++, where there 
>> are leveled loggers that can go below the info level.  We use the channel 
>> trace log level throughout our implementation, and convert it to the 
>> relevant Channelz or logger level.  It would be nice if we could skip the 
>> Channelz conversion half.
>>
>
> I don't think we want to clutter up the channel trace buffer with 
> debug-level messages, since the buffer is of limited size, and we don't 
> want debug messages to cause important messages from earlier to be expired 
> out of the buffer.  So I would rather not add this mechanism to the design.
>

Agreed.  The proposal is to add another enum value and throw away 
everything with that value for the trace buffer.  There is no extra logic.
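That "no extra logic" claim can be illustrated with a few lines (the enum values mirror the proposed Channel Trace severities; the buffer shape and names are hypothetical):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class Main {
    // Hypothetical severities with DEBUG below INFO, as proposed.
    enum Severity { CT_UNKNOWN, CT_DEBUG, CT_INFO, CT_WARNING, CT_ERROR }

    static final int MAX_EVENTS = 100;
    static final Deque<String> traceBuffer = new ArrayDeque<>();

    // DEBUG events may still go to a local logger, but are never stored in
    // the channelz trace buffer, so they cannot evict higher-severity events.
    static void addEvent(Severity s, String description) {
        if (s == Severity.CT_DEBUG) {
            return; // dropped before reaching the bounded buffer
        }
        if (traceBuffer.size() == MAX_EVENTS) {
            traceBuffer.removeFirst();
        }
        traceBuffer.addLast(description);
    }

    public static void main(String[] args) {
        addEvent(Severity.CT_INFO, "subchannel created");
        addEvent(Severity.CT_DEBUG, "noisy handshake detail");
        addEvent(Severity.CT_WARNING, "address resolution slow");
        System.out.println(traceBuffer.size());
    }
}
```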
 

>
> I think it was probably a poor choice for Java to use an API with 
> different log levels.  If there's an API for adding channel trace events, 
> that API should expose only the log levels used for channel trace events; 
> we should not conflate that with any pre-existing log levels used for 
> different purposes.
>

Your suggestion is a tradeoff that Java traded the other way on.  The 
events relevant to channels is a superset of events representable by 
Channel Trace.  We don't want two calls (a log() and a trace()) per event 
callsite, so we combine them.  Leveling them down keeps the noise down when 
things are going well, and the default access for events is through the 
Channelz UI.  When things are bad, and the events are too numerous, they 
have to be logged or thrown away.  Hence, they have to be logged, and we 
have to pick some level to log them at. The C++ gRPC implementation groups 
such events by type (as args to GRPC_TRACE), while Java gRPC groups them by 
severity (via -Dio.grpc.ChannelLogger.level).  I think Java's 
implementation is easier to use.

But, I think time (and users) will tell if this approach is correct, and 
diversity in this area will help us find good solutions more quickly.  
 

>
> On a related note, Abhishek did suggest one change a while back, which I 
> don't think Noah ever got around to.  Currently, the trace event buffer 
> size is set in terms of number of events.  However, Abhishek suggested that 
> it should instead be expressed in terms of total amount of memory used.  
> The distinction is, of course, that instead of the buffer fitting a fixed 
> number of events, the number would fluctuate based on the size of the 
> events in memory.
>  
>
>>
>> 2.  Expose Channel trace (as perhaps a TransportTrace) that is specific 
>> to a socket.  Right now it's just for channels and subchannels, which means 
>> if there is a lot of activity on sockets for a given subchannel, the trace 
>> events all get mixed up with the Subchannels logs.   I can make a concrete 
>> proposal if you are generally onboard with the idea.
>>
>
> I'm not opposed to this if there is a use-case for it, but I'd like to 
> hear examples of use-cases to justify this.  Can you give examples of 
> events that would be associated with the transport instead of the 
> subchannel?
>

Main use case is during connection setup, where there can be multiple 
stages.  For example, while debugging an internal issue, a user claimed the 
initial RPC timed out, but the channel stayed stuck in Connecting.  
Transport level events would show how far into the connection pipeline gRPC 
made it.   

A non-internal example is when connecting via SSL.  If the SSL handshake 
completes but the remote server never sends the HTTP/2 handshake, the 
connection won't fail, but will eventually time out.  The details of how far

Re: [grpc-io] Re: gRFC A3: Channel Tracing

2019-03-06 Thread 'Carl Mastrangelo' via grpc.io
Mark: I would be interested in taking over this, assuming you don't have 
many more concerns.  

I have two changes I would like to propose as well:

1.   Add a CT_DEBUG level between CT_UNKNOWN and CT_INFO.   ChannelTraces 
at the DEBUG level are *never* surfaced up to channelz, and implementations 
are free to not implement it.  However, implementations that do want to 
surface debug info may.  This is relevant for Java, maybe C++, where there 
are leveled loggers that can go below the info level.  We use the channel 
trace log level throughout our implementation, and convert it to the 
relevant Channelz or logger level.  It would be nice if we could skip the 
Channelz conversion half.

2.  Expose Channel trace (as perhaps a TransportTrace) that is specific to 
a socket.  Right now it's just for channels and subchannels, which means if 
there is a lot of activity on sockets for a given subchannel, the trace 
events all get mixed up with the Subchannels logs.   I can make a concrete 
proposal if you are generally onboard with the idea.

On Monday, January 30, 2017 at 9:46:26 AM UTC-8, Mark D. Roth wrote:
>
> (+ctiller)
>
> Overall, this looks pretty decent.  Here are a few initial thoughts...
>
> I like the idea of using JSON for the returned tracing data for C-core, 
> especially since it means less overhead in wrapped languages that want to 
> expose the new tracing APIs.  However, JSON may not be the desired form for 
> this data in all languages; the Java and Go folks may prefer some 
> language-specific data structure.  I suggest checking with folks on those 
> teams to see what they think.  (If we are going to have a 
> language-independent server UI for displaying trace data, then that may be 
> a good argument for all languages using JSON, but we need to make sure 
> everyone is on board with that.)
>
> The gRFC should document the schema for the JSON data.  In particular, we 
> should probably make sure that the JSON data is in a form that can be 
> automatically converted into a protobuf (which we'll want to define), as 
> per https://developers.google.com/protocol-buffers/docs/proto3#json.
>
> In terms of the C-core implementation, as you and I and Craig discussed 
> last week, the grpc_subchannel_tracer struct will probably need a refcount, 
> since it may be referenced by multiple parent channels.  Whenever a parent 
> channel gets a trace node indicating that a subchannel has been added or 
> removed from the parent channel, that trace node should hold a reference to 
> the subchannel trace.  Thus, the subchannel trace will live until the last 
> node referencing it is removed from the parent channels' buffers.  (Update: 
> Ah, I see you mentioned this at the very end of the doc.  It might be 
> useful to make this clear earlier, when the data structures themselves are 
> presented.)
>
> You might also consider making the list of subchannel tracers a 
> doubly-linked list, so that it's easier to delete entries from the middle 
> of the list.
>
> It might be advantageous to use grpc_channel_tracer for both parent 
> channels and subchannels, so that you don't need a separate internal API 
> for adding nodes to each type.  Or perhaps simply create some common base 
> class for the head_trace and tail_trace fields, and 
> have grpc_channel_tracer_add_trace() operate on that base class.
>
> Please let me know if you have any questions or concerns about any of this.
>
> On Wed, Jan 25, 2017 at 11:18 AM, ncteisen via grpc.io <
> grp...@googlegroups.com > wrote:
>
>> My first link was to the blob, so it is stale.
>>
>> Instead use this link  to the 
>> pull request itself.
>>
>> On Wednesday, January 25, 2017 at 10:16:46 AM UTC-8, ncte...@google.com 
>> wrote:
>>>
>>> I've created a gRFC describing the design and implementation plan for 
>>> gRPC Channel Tracing
>>>
>>> Take a look at the planning doc. 
>>> 
>>>
>>> Would love to hear some feedback on the design!
>>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "grpc.io" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to grpc-io+u...@googlegroups.com .
>> To post to this group, send email to grp...@googlegroups.com 
>> .
>> Visit this group at https://groups.google.com/group/grpc-io.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/grpc-io/90af5752-0d28-41ad-8887-372070ad2430%40googlegroups.com
>>  
>> 
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> -- 
> Mark D. Roth >
> Software Engineer
> Google, Inc.
>


[grpc-io] gRPC Java 1.19.0 Released

2019-02-28 Thread 'Carl Mastrangelo' via grpc.io
gRPC Java 1.19.0 is now released.   It should be ready use from Maven 
Central and JCenter.

https://github.com/grpc/grpc-java/releases/tag/v1.19.0

Dependencies and Build Changes
   
   - Upgraded to protobuf 3.6.1 (#5320)
   - Google App Engine Java 7 is no longer supported, as it was shut down. Java 8 is supported.
   - Upgraded Guava to 26.0-android
   - Add "fake" Bazel dependency on Guava's failureaccess to fix dependency handling issue in maven_jar (#5350)
   - Upgraded OpenCensus to v0.19.2 (#5329)

Bug Fixes
   
   - Fixed Service Config DNS parsing to match specification (Service Config is still off by default) (#5293)
   - OkHttp no longer spams NPE when connecting to a server that's down (#5260)
   - Context avoids leaking ClassLoader through a ThreadLocal (#5290)
   - Status is now preserved when getting a RST_STREAM with no error (#5264)
   - Removed Channel reference from ManagedChannelWrapper to avoid a memory leak (#5283)
   - Avoid NPE in Cronet after the transport has shutdown (#5275)
   - Fixed a channel panic caused by calling NameResolver.refresh() (#5223)

New Features
   
   - New artifact grpc-bom is added (#5209)
   - Each ManagedChannel can now have its own ProxyDetector (#5173)

Behavior Changes
   
   - If enabled, health checking defaults to SERVING if the name is unspecified (#5274)
   - Graceful Netty server shutdown now issues two GOAWAYs (#5351)
   - Client-side health checking now warns if disabled (#5261)

API Changes
   
   - Removed DEFAULT_CONNECTION_SPEC from OkHttpChannelBuilder (#5309)
   - NettyChannelBuilder now accepts a channelFactory (#5312)
   - NettyServerBuilder supports listening on multiple sockets (#5306)
   - CallCredentials is now preferred over CallCredentials2 (#5216)
   - ProxiedSocketAddress is added as an Experimental API (#5344)
   - Added NameResolver.Helper, for use with the new NameResolver.newNameResolver() overload (#5345)
   - Deprecated the previous NameResolver.newNameResolver() overload (#5345)

Documentation
   
   - SECURITY.md recommendations updated and reorganized (#5281)

Acknowledgments

Thanks to all of our contributors:

   - Arajit Samanta @arajitsamanta
   - Bogdan Drutu @bogdandrutu
   - Danna Kelmer @dkelmer
   - Ignacio del Valle Alles @idelvall
   - Michael Plump @plumpy
   - Tim van der Lippe @TimvdLippe
   - Yang Song @songy23
   - kenji yoshida @xuwei-k



[grpc-io] Re: how to detect version of gRPC?

2019-02-28 Thread 'Carl Mastrangelo' via grpc.io
What language are you using?

On Thursday, February 28, 2019 at 7:43:51 AM UTC-8, heman...@gmail.com 
wrote:
>
> I downloaded gRPC for use from github.  How do I find out what version of 
> gRPC this software is?  I don't see a VERSION file with the software.  
>
> For example, with protobuf, I can use 'protoc --version` to find out the 
> version of protobuf I am using.
>
> Best,
>
> Hemant
>



[grpc-io] Re: Unimplemented Method on Linux / Windows

2019-02-28 Thread 'Carl Mastrangelo' via grpc.io
That sounds like a bug in your code.  That error would not be raised by 
changing JVMs.

On Thursday, February 28, 2019 at 6:11:54 AM UTC-8, marcel.moehring wrote:
>
> Hello,
>
>  
>
> we are currently migrating projects to JDK 11 and are experiencing the 
> strange problem where JARs compiled on macOS do not longer work on Linux or 
> Windows.
>
> For every request the server answers with UNIMPLEMENTED “Method not found”.
>
>  
>
> The only native parts we know of are tcnative and when compiling we use 
> the compile target arch (or the uber jar for testing).
>
>  
>
> Any ideas?
>
>  
>
>  
>
> Thanks,
>
> Marcel
>
>  
>



[grpc-io] Re: Future of GRPC-LB

2019-02-25 Thread 'Carl Mastrangelo' via grpc.io
Like Penn said, you can turn it on (it's experimental), but it will eventually 
be replaced.  The flag itself is pretty simple, but the rest of the 
machinery needs to be set up properly for it to work.  We (gRPC 
maintainers) are not comfortable supporting this yet, hence the extra 
effort to turn it on.   The gRPCLB Load Balancer is experimental, so we 
will likely remove it at some point.  We will give a notice in one of the 
upcoming releases that it is deprecated, and then remove it the release 
after.   Since the replacement isn't yet ready, it has not been removed.

Sorry to be so non-committal, but it seems like XDS is a better long term 
LB solution, and we don't want to support two competing implementations.

On Saturday, February 23, 2019 at 12:48:43 AM UTC-8, blazej...@gmail.com 
wrote:
>
> And what about SRV records lookup: now I have to set this flag:
>
> io.grpc.internal.DnsNameResolverProvider.enable_grpclb
>>
>
> to true, and there was a commit some time ago which enabled it by default: 
> https://github.com/grpc/grpc-java/commit/c729a0f76b244da9f4aebc40896b2fb891d1b5c4
>  
> and now it has been reverted: https://github.com/grpc/grpc-java/pull/5232 
> - how it is eventually going to be? 
>
>
> On Friday, 22 February 2019 at 21:16:54 UTC+1, Penn (Dapeng) 
> Zhang wrote:
>>
>> Neither grpclb nor xds will be enabled by default, grpclb need be 
>> explicitly enabled by a service config or a ManagedChannelBuilder option, 
>> and xds need be explicitly enabled by a service config.  Grpclb will 
>> eventually be replaced by xds based solution in the future, but the 
>> grpc-grpclb  maven 
>> artifact will stay and work for a long time (for as many new releases as 
>> possible). When grpclb is not available for a new grpc release, your client 
>> can still automatically switch to a fallback loadbalancer (pick_first).
>>
>> On Friday, February 22, 2019 at 8:52:16 AM UTC-8, blazej...@gmail.com 
>> wrote:
>>>
>>> What is the status of GRPCLB - are there any plans to enable it by 
>>> default and finish the experimental stage (we want to start using it in 
>>> production), or opposite, you plan to abandon it? I am confused, because 
>>> I've read this PR: https://github.com/grpc/grpc-java/pull/5232:
>>>
>>> SRV has not yet been enabled in a release. 

 *Since work is rapidly underway to replace GRPC-LB with a service 
 config+XDS-based solution, there's now thoughts that we won't ever enable 
 grpclb by default* (but
 may allow it to be automatically enabled when using GoogleDefaultChannel
 or similar). Since things are being worked out, disable it.
>>>
>>>
>>> It will be really helpful to us to know, what is the plan for it :)
>>>
>>



Re: [grpc-io] Re: debugging opencensus rpcz?

2019-02-25 Thread 'Carl Mastrangelo' via grpc.io
+Dino, who may be able to answer this.

On Mon, Feb 25, 2019 at 9:48 AM Derek Perez  wrote:

> Yeah, I have it wired up and I can see statsz and tracez flowing but rpcz
> is empty...What should I be seeing there?
>
> On Mon, Feb 25, 2019 at 9:43 AM 'Carl Mastrangelo' via grpc.io <
> grpc-io@googlegroups.com> wrote:
>
>> Do you mean the Census Z page implementation?
>>
>> On Sunday, February 24, 2019 at 7:39:34 PM UTC-8, Derek Perez wrote:
>>>
>>>  I have zpages enabled on my Netty-based grpc server and I'm not seeing
>>> any data when I visit /rpcz
>>>
>>> I do see data when I visit /tracez so it's definitely wired up and
>>> exporting data properly. Any ideas?
>>>
>>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAAcqB%2Bt34gO9PJfBtUb6Nedb5d8DhrR46h8y3j0uCf1zA%3DZEvA%40mail.gmail.com.


[grpc-io] Re: debugging opencensus rpcz?

2019-02-25 Thread 'Carl Mastrangelo' via grpc.io
Do you mean the Census Z page implementation?

On Sunday, February 24, 2019 at 7:39:34 PM UTC-8, Derek Perez wrote:
>
>  I have zpages enabled on my Netty-based grpc server and I'm not seeing 
> any data when I visit /rpcz
>
> I do see data when I visit /tracez so it's definitely wired up and 
> exporting data properly. Any ideas?
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/4ec47775-f993-4a92-b177-6b5984a66684%40googlegroups.com.


Re: [grpc-io] Re: grpc-java: Register all channels / access root channel list

2019-02-22 Thread 'Carl Mastrangelo' via grpc.io
You can see sample usage here:
https://github.com/grpc/grpc-java/blob/3a39b81cf54b3e93b06f8b7375f2bcbf0f712612/services/src/test/java/io/grpc/services/HealthStatusManagerTest.java#L48
Again, it works if you can pass in the Channel, but is less useful
otherwise.

I see
http://googleapis.github.io/gax-java/1.23.0/apidocs/com/google/api/gax/grpc/GrpcTransportChannel.html
  as a possible TransportChannelProvider, but I haven't personally used
it.   It seems like the right avenue to follow.

I'm curious if this ends up working out.


On Fri, Feb 22, 2019 at 2:00 PM Sam Gammon  wrote:

> Thank you for your thoughtful response, that all makes sense.
>
> Is there a code sample somewhere of the GrpcServerRule stuff? I’ll dig
> into the source and see what I can find. And does it shut down all
> channels, or just server side channels?
>
> The exceptions usually come from Firestore channels, and I can see an
> interface in the service builder that accepts a TransportChannelProvider.
>
> However I have never encountered the TransportChannelProvider and don’t
> know how I might go about providing my own, which might facilitate
> registering channels so they can be shut down.
>
> Thank you again for your help.
>
> Sam
>
> On Fri, Feb 22, 2019 at 1:48 PM 'Carl Mastrangelo' via grpc.io <
> grpc-io@googlegroups.com> wrote:
>
>> The warning you see is triggered by the garbage collector (via a
>> WeakRef), so likely the Channel won't show up in Channelz when you see the
>> message.  If you don't have references to these channels, then it won't be
>> possible to shut them down directly.   If you can create them (such as in
>> your tests), we provide some helper classes to auto create and shut them
>> down such as GrpcServerRule.
>>
>>
>> One other thing: you could (not saying it's a good idea), silence these
>> logs for your tests.  I would strongly encourage you to file bugs to the
>> projects that create the channels, (which is why we include the creation
>> site stack trace), but mute them if you can't take any action in response
>> to them.
>>
>>
>> On Friday, February 22, 2019 at 12:05:20 PM UTC-8, Sam G wrote:
>>>
>>> It appears the stacktrace I posted is written in white font. Apologies
>>> for that... here it is again, but this time, legibly:
>>>
>>>
>>> Feb 21, 2019 8:05:57 PM
>>> io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference
>>> cleanQueue
>>>
>>> SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=11, target=firestore.googleapis.com:443} was not shutdown properly!!!
>>> ~*~*~*
>>>
>>> Make sure to call shutdown()/shutdownNow() and wait until
>>> awaitTermination() returns true.
>>>
>>> java.lang.RuntimeException: ManagedChannel allocation site
>>>
>>> at
>>> io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:103)
>>>
>>> at
>>> io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
>>>
>>> at
>>> io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
>>>
>>> at
>>> io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:419)
>>>
>>> at
>>> com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:254)
>>>
>>> at
>>> com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:165)
>>>
>>> at
>>> com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:157)
>>>
>>> at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:157)
>>>
>>> at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:122)
>>>
>>> at
>>> com.google.cloud.firestore.spi.v1beta1.GrpcFirestoreRpc.<init>(GrpcFirestoreRpc.java:121)
>>>
>>> at
>>> com.google.cloud.firestore.FirestoreOptions$DefaultFirestoreRpcFactory.create(FirestoreOptions.java:80)
>>>
>>> at
>>> com.google.cloud.firestore.FirestoreOptions$DefaultFirestoreRpcFactory.create(FirestoreOptions.java:72)
>>>
>>> at com.google.cloud.ServiceOptions.getRpc(ServiceOptions.java:509)
>>>
>>> at
>>> com.google.cloud.firestore.FirestoreOptions.getFirestoreRpc(FirestoreOptions.java:315)

[grpc-io] Re: grpc-java: Register all channels / access root channel list

2019-02-22 Thread 'Carl Mastrangelo' via grpc.io
The warning you see is triggered by the garbage collector (via a WeakRef), 
so likely the Channel won't show up in Channelz when you see the message.  
If you don't have references to these channels, then it won't be possible 
to shut them down directly.   If you can create them (such as in your 
tests), we provide some helper classes to auto create and shut them down 
such as GrpcServerRule.  


One other thing: you could (not saying it's a good idea), silence these 
logs for your tests.  I would strongly encourage you to file bugs to the 
projects that create the channels, (which is why we include the creation 
site stack trace), but mute them if you can't take any action in response 
to them.   
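
For reference, the shutdown the warning asks for looks like this (a sketch; it assumes you hold a reference to a ManagedChannel you built yourself, and that grpc-java is on the classpath):

```java
// Sketch: graceful shutdown of a channel you own.
channel.shutdown();  // stop accepting new RPCs, let in-flight RPCs finish
if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {
  channel.shutdownNow();  // force-close anything still lingering
  channel.awaitTermination(5, TimeUnit.SECONDS);
}
```

If the channel is created inside a library (as with the Cloud client libraries here), you never get that reference, which is why filing bugs against those projects is the right move.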

On Friday, February 22, 2019 at 12:05:20 PM UTC-8, Sam G wrote:
>
> It appears the stacktrace I posted is written in white font. Apologies for 
> that... here it is again, but this time, legibly:
>
>
> Feb 21, 2019 8:05:57 PM 
> io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference 
> cleanQueue
>
> SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=11, target=firestore.googleapis.com:443} was not shutdown properly!!! 
> ~*~*~*
>
> Make sure to call shutdown()/shutdownNow() and wait until 
> awaitTermination() returns true.
>
> java.lang.RuntimeException: ManagedChannel allocation site
>
> at 
> io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:103)
>
> at 
> io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
>
> at 
> io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
>
> at 
> io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:419)
>
> at 
> com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:254)
>
> at 
> com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:165)
>
> at 
> com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:157)
>
> at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:157)
>
> at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:122)
>
> at 
> com.google.cloud.firestore.spi.v1beta1.GrpcFirestoreRpc.<init>(GrpcFirestoreRpc.java:121)
>
> at 
> com.google.cloud.firestore.FirestoreOptions$DefaultFirestoreRpcFactory.create(FirestoreOptions.java:80)
>
> at 
> com.google.cloud.firestore.FirestoreOptions$DefaultFirestoreRpcFactory.create(FirestoreOptions.java:72)
>
> at com.google.cloud.ServiceOptions.getRpc(ServiceOptions.java:509)
>
> at 
> com.google.cloud.firestore.FirestoreOptions.getFirestoreRpc(FirestoreOptions.java:315)
>
> at com.google.cloud.firestore.FirestoreImpl.<init>(FirestoreImpl.java:76)
>
> at 
> com.google.cloud.firestore.FirestoreOptions$DefaultFirestoreFactory.create(FirestoreOptions.java:63)
>
> at 
> com.google.cloud.firestore.FirestoreOptions$DefaultFirestoreFactory.create(FirestoreOptions.java:56)
>
> at com.google.cloud.ServiceOptions.getService(ServiceOptions.java:497)
>
> at io.[ REDACTED ].impl.FirestoreService.<init>(FirestoreService.kt:451)
>
> [... more frames ...]
>
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>
> at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119)
>
> at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)
>
>
> On Friday, February 22, 2019 at 11:56:13 AM UTC-8, Sam G wrote:
>>
>> Hey gRPC,
>>
>> I am working in *grpc-java *and looking for a way to access all of my 
>> currently-open root channels.
>> I am aware of Channelz's existence, perhaps that would help? But I have 
>> so far been unable to find documentation to that effect (i.e. fetching all 
>> living channels, regardless of OkHttp/Netty implementation).
>>
>> Some of our gRPC services make use of Google's Cloud APIs for Java and so 
>> they have their own ManagedChannel connections up to, say, Firestore and 
>> Cloud Logging.
>>
>> When shutting down our server, it often closes the active connections as 
>> the JVM exits, rather than giving them time to exit gracefully (this could 
>> be some implementation problem on our side, but we have encountered other 
>> people with this same issue).
>>
>> This isn't much of an issue in production, where servers are long-lived 
>> and rarely shut down.

[grpc-io] Re: Java grpc FutureStub threads question

2019-02-21 Thread 'Carl Mastrangelo' via grpc.io
By executing, you mean you are calling methods on the Future Stub?  If so, 
the completion of the future, and other callbacks, are executed on the 
executor provided to the channel when it was created (you can also fork the 
stub with your own executor).  gRPC will always complete the future on 
executor passed in, as there may be requirements (like the presence of 
Thread Locals, etc.) on that executor.  If you are worried about the app 
taking too long on one of the threads *you* provided to gRPC, you can 
always ask the application to provide you with an executor.  If this is not 
possible, and you don't particularly care about threading overhead, you can 
use a cached threadpool which will bring new threads into existence if the 
app is blocking for too long.  (Cached is also the default for gRPC itself, 
for the same reason). 
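
As a pure-JDK illustration of the executor hand-off (this uses CompletableFuture rather than gRPC's ListenableFuture, so treat it as an analogy, not the gRPC API itself): the callback runs on whatever executor you supply, never on the thread that completed the future.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CallbackExecutorDemo {
    // Returns the name of the thread the callback actually ran on.
    static String runCallback() throws Exception {
        // Cached pool, mirroring gRPC's default: new threads appear if
        // application callbacks block for too long.
        ExecutorService appExecutor =
                Executors.newCachedThreadPool(r -> new Thread(r, "app-callback"));
        try {
            // Simulate an async RPC result; the callback is forced onto the
            // application-provided executor, not the completing thread.
            return CompletableFuture.supplyAsync(() -> "OK")
                    .thenApplyAsync(r -> Thread.currentThread().getName(), appExecutor)
                    .get();
        } finally {
            appExecutor.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("callback ran on: " + runCallback());
    }
}
```

The same reasoning applies when a library asks the application for an executor: the library never ties up its own threads on application work.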

On Thursday, February 21, 2019 at 1:08:08 PM UTC-8, cr2...@gmail.com wrote:
>
> If as a library you're executing grpc future calls on behalf of an 
> application is there an issue just using the grpc callback threads? Would 
> there be any need to *transfer* this back to an application provided thread 
> to return the grpc thread back to its pool?  As a library there's really 
> no control over what the application may do or how long it will use that 
> thread. 
>
> Thanks.
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/e79e6b9e-6127-46ab-9905-af130d39abcc%40googlegroups.com.


[grpc-io] Re: Client-side visibility of server canceling context

2019-02-21 Thread 'Carl Mastrangelo' via grpc.io
Let me re-establish some things just to make sure we are on the same page.

1.  Servers are always async.   Server Stubs are always passed a Response 
observer.   For Bidi and Client Streaming, they also return a Request 
Observer.
2.  Servers are usually on the receiving end of cancellation.  Servers find 
out about cancellation by inspecting the Context.current().isCancelled() 
value.
3.  Servers can also check for early termination of the RPC for Bidi and 
ClientStreaming by getting onError() called on the Request observer.

4.  Clients usually don't check for cancellation.  Clients find out the 
RPC is prematurely terminated by getting an onError() callback with a 
StatusRuntimeException or StatusException with a status Code.  From the 
client's POV, server cancellation and regular failure look the same.
5.  Clients using the blocking API usually use Context cancellation to end 
the call early.   (because there is no other way)
6.  A blocking client will take the current context, wrap a cancellable 
context around it, and install the new ctx back into the current.  (This 
was the attach() suggestion I made above).  When the RPC is complete, they 
swap the old context back into place.
7.  ClientResponseObserver is probably not what you want here.  It's meant 
to modify flow control on the client stub.  It's rarely used.

With these in mind, it seems like your server should just do 
responseObserver.onError(Status.CANCELLED.withDescription("Server is 
cancelling").asRuntimeException());  The client will be notified by the 
onError() invocation on its response observer.  (I think you said it's 
bidi, so this would be the observer "returned" from your client stub.)

Does this make sense?
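
Concretely, the server-side suggestion looks something like this (a sketch; the request/response type names are carried over from your example, and grpc-java stubs are assumed):

```java
// Server-side handler: terminating the RPC with CANCELLED causes the
// client's response observer to receive onError() with that status.
public void testStream(TestRequest request,
                       StreamObserver<TestResponse> responseObserver) {
  responseObserver.onError(
      Status.CANCELLED
          .withDescription("Server is cancelling")
          .asRuntimeException());
}
```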


On Thursday, February 21, 2019 at 11:20:54 AM UTC-8, rmar...@hubspot.com 
wrote:
>
> Thanks for the reply. I updated my test server method to the following 
> (hopefully I understood correctly), although I observed no differences in 
> behavior on the client-side:
>
> public void testStream(TestRequest request, StreamObserver 
> responseObserver) {
>   Context originalContext = Context.current();
>   CancellableContext cancellableContext = originalContext.withCancellation();
>   cancellableContext.attach();
>   cancellableContext.detachAndCancel(originalContext, new 
> RuntimeException("Server context canceled"));
> }
>
>
> It's the ClientResponseObserver that I'm inspecting to observe the 
> cancellation, I suppose maybe expecting `onError` to get called on behalf 
> of the server cancellation.
>
> On Thursday, February 21, 2019 at 1:50:23 PM UTC-5, Carl Mastrangelo wrote:
>>
>> I think you need to call `attach()` on the context to install it (rather 
>> than just construct it).  Normally this sticks it in a thread local which 
>> gRPC then reads from.
>>
>> On Wednesday, February 20, 2019 at 2:08:03 PM UTC-8, rmar...@hubspot.com 
>> wrote:
>>>
>>>
>>> In the context of server streaming or bi-directional streaming calls, 
>>> I'm under the impression the client should be able to receive notice when 
>>> the server has decided to cancel the context for the call. I've been trying 
>>> to write up a test to capture this behavior, but it doesn't seem like 
>>> canceling the context on the server is triggering anything in the response 
>>> observer in the client side. Maybe I'm misinterpreting how the client 
>>> interacts with a server cancellation or just setting up the test case wrong?
>>>
>>> My client test call is something like:
>>>
>>> ClientResponseObserver closingObserver = 
>>> ClosingStreamObserver.closingClientResponseObserver(clientTestCloseable, 
>>> responseObserver);
>>> serviceStub.testStream(req, closingObserver);
>>>
>>>
>>> and then I wait a bit to see if anything propagates (I've got an atomic 
>>> boolean that I flip when `onError` or `onCompleted` get called. I had also 
>>> tried a listener on the current context on the client side as well, but 
>>> that seemed irrelevant.
>>>
>>>
>>> On the server side I'm canceling the context something like this:
>>>
>>> public void testStream(TestRequest request, StreamObserver 
>>> responseObserver) {
>>>   CancellableContext cancellableContext = 
>>> Context.current().withCancellation();
>>>   cancellableContext.cancel(new RuntimeException("Server context 
>>> canceled"));
>>> }
>>>
>>>
>>> To step back a bit, I'm looking to know if and how a server can signal it 
>>> is canceling a request to the client.
>>>
>>>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/3d6aa170-14a5-4c49-b3cb-0db48e79be2e%40googlegroups.com.


[grpc-io] Re: Using multiple late-bound services

2019-02-21 Thread 'Carl Mastrangelo' via grpc.io
Inline responses

On Wednesday, February 20, 2019 at 2:56:12 PM UTC-8, Geoff Groos wrote:
>
> I think we're on the same page.
>
> So if you've got an API user who connect to your API with some kind of 
> functionality that he wants to offer, (thinking in the vein of callbacks in 
> C# or java), then your two best solutions are:
>
>- use a bidirectional stream, such that the client-simulator sends one 
>message, the optimizer-server-to-client-simulator stream is held open 
> until 
>the optimizer-server would invoke the callback, at which point it sends 
> one 
>message to the client-simulator, then the client-simulator sends one 
>message back to the optimizer-server containing the result from the 
>callback's execution. Both sides then "onComplete". 
>
>rpc callbackBasedFeature(stream FeatureRegistrationOrResult) returns 
> (stream 
>FeatureOneCallbackInvocation)
>
>message FeatureRegistrationOrResult {
>  oneof message {
>FeatureRegistration registration = 1 //registers callback
>FeatureResult result = 2 //result from callback evaluation
>  }
>}
>message FeatureOneCallbackInvocation {
>  //callback parameters
>}
>
>- pro: it allows you to expose callback-based functionality as single 
>   service/method end-points. 
>   - pro: it allows for a reasonably-trivial and correct error handler 
>   in the form of `try { process(inboundStream.next()) } catch(Exception 
> e) { 
>   outputStream.onError(e) }`. Assuming callback execution is involved 
>   synchronously there is no chance of multiplexing problems (IE: there is 
> no 
>   chance you fail to understand which error corresponds to which 
> request). 
>   - con: if you have more than one such callback you're requiring 
>   that API callers employ some concurrency. 
>   - con: Streams are also usually multi-element'd, but if your 
>   callback is designed as a one-off rather than an event bus then you're 
>   likely only ever using streams of one element, which is surprising. 
>   
>
>- use separate servers, a registration message containing enough for 
>the "first-server" (optimizer) to connect to client "servers". 
>
> service OptimizerServer {
> rpc register(RegistrationRequest) returns (RegistrationResponse);
> }
> service SimulatorClientServer {
> rpc feature(FeatureRequest) returns (FeatureResponse);
> }
>
> message RegistrationRequest {
>   string host = 1;
>   int32 port = 2;
>   //other meta im missing? routing or DNS or some such?
> }
>
> Thus the client-simulator first starts its own GRPC server with a 
> SimulatorClientServer service, then connects to the optimizer-server, 
> calls register(host = getLocalHost(), port = 12345), and waits for 
> invocations of feature
>
>- pro: it requires the minimum of grpc packaging at invocation time. 
>Once setup and running this is two very REST-friendly services talking to 
>each other. 
>- pro: no use of streams on this feature set. Streams are left to 
>their (intended?) purpose of sending back multiple delayed elements on 
>features.
>- pro: the problem of errors is trivial since all methods are exposed 
>as simple RPC endpoints: success calls `onNext; onComplete` and errors 
> call 
>`onError; onComplete`. 
>- pro: clients will end up multi-threaded, but they will leverage the 
>~elegant thread-pools provided by grpc by default. 
>- con: clients now require a webplatform (netty by default on 
>grpc/java) to function. 
>- con: it requires some fairly strange logic to setup
>- con: uses as many ports as there are callback users. 
>
> Biggest con (despite me having suggested it): it plays badly with 
firewalls.  This will make things super complicated.   It's just 
technically possible.

 

> The original solution I came up with in my question seems clearly inferior 
> to these two solutions. 
>
> And I didn't think about the dependency issue until just now: requiring 
> that our API users bundle netty or similar is quite a big ask, and has many 
> implications for deployment. 
>
> I think bidirectional streams might be the best way to go.
>
> Thanks for your help!
>
>
> PS:
>
> Is there any chance you think the GRPC guys themselves would be willing to 
> change the spec to allow this? 
>

This has been an ask for a long time, but never really prioritized by the 
gRPC team.   We called it gRPC-on-gRPC, where a client opens a gRPC 
Connection, and then the server tunnels gRPC requests on top of it, 
inverting the relationship.  The main use is command and control, (like IoT 
devices that want to listen for commands, but may be behind a firewall, or 
something).  However, the overwhelming majority of RPC users do basic 
Request-Response pairs and then the RPC is done.  The complication to the 
gRPC library (which is already super complex) isn't worth it right now.  

The alternative is

[grpc-io] Re: Client-side visibility of server canceling context

2019-02-21 Thread 'Carl Mastrangelo' via grpc.io
I think you need to call `attach()` on the context to install it (rather 
than just construct it).  Normally this sticks it in a thread local which 
gRPC then reads from.
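
The attach/detach dance looks roughly like this (a sketch; it assumes the grpc-context artifact, and the work in the middle is illustrative):

```java
// Sketch: install a cancellable Context into the thread-local so that
// gRPC (and anything calling Context.current()) observes it.
Context.CancellableContext ctx = Context.current().withCancellation();
Context previous = ctx.attach();  // attach() returns the previously-current Context
try {
  // ... run work that checks Context.current().isCancelled() ...
} finally {
  // Restore the old Context and cancel ours (null cause, or pass a Throwable).
  ctx.detachAndCancel(previous, null);
}
```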

On Wednesday, February 20, 2019 at 2:08:03 PM UTC-8, rmar...@hubspot.com 
wrote:
>
>
> In the context of server streaming or bi-directional streaming calls, I'm 
> under the impression the client should be able to receive notice when the 
> server has decided to cancel the context for the call. I've been trying to 
> write up a test to capture this behavior, but it doesn't seem like 
> canceling the context on the server is triggering anything in the response 
> observer in the client side. Maybe I'm misinterpreting how the client 
> interacts with a server cancellation or just setting up the test case wrong?
>
> My client test call is something like:
>
> ClientResponseObserver closingObserver = 
> ClosingStreamObserver.closingClientResponseObserver(clientTestCloseable, 
> responseObserver);
> serviceStub.testStream(req, closingObserver);
>
>
> and then I wait a bit to see if anything propagates (I've got an atomic 
> boolean that I flip when `onError` or `onCompleted` get called. I had also 
> tried a listener on the current context on the client side as well, but that 
> seemed irrelevant.
>
>
> On the server side I'm canceling the context something like this:
>
> public void testStream(TestRequest request, StreamObserver 
> responseObserver) {
>   CancellableContext cancellableContext = 
> Context.current().withCancellation();
>   cancellableContext.cancel(new RuntimeException("Server context canceled"));
> }
>
>
> To step back a bit, I'm looking to know if and how a server can signal it is 
> canceling a request to the client.
>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/b510d79c-a9e5-43e6-b677-eb55119f7641%40googlegroups.com.


[grpc-io] Re: Using multiple late-bound services

2019-02-15 Thread 'Carl Mastrangelo' via grpc.io
Inline responses

On Thursday, February 14, 2019 at 3:10:45 PM UTC-8, Geoff Groos wrote:
>
> Thanks Carl,
>
> I think the client-server naming is only causing me problems, so Instead 
> I'll use the real names which are optimizer (server above) and simulator 
> (client above). They are more like peers than client/server, because each 
> offers functionality.
>

In gRPC, clients always initiate, so they aren't peers so much.   You can 
use streaming to work around this, by having the client "prime" the RPC by 
sending a dummy request, and having the server send responses.  These 
responses are handled by the client, which then sends more "requests", 
inverting the relationship.  

Alternatively, you could have your client be combined with a local server, 
and then advertise the ip:port to the actual server, which then is combined 
with its own client.  

Not clean I know, but bidirectional streaming RPCs are the closest thing to 
peers that gRPC can offer.  
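
A sketch of that "priming" pattern in proto (all names here are illustrative, not taken from your API):

```proto
syntax = "proto3";

service Optimizer {
  // The client opens the stream and sends Subscribe once; after that the
  // server drives the conversation by sending Commands, and the client
  // answers with Results on the same stream.
  rpc Attach (stream ClientToServer) returns (stream Command);
}

message ClientToServer {
  oneof msg {
    Subscribe subscribe = 1;  // the "dummy request" that primes the RPC
    Result result = 2;        // answer to an earlier Command
  }
}

message Subscribe {}
message Command { string name = 1; }
message Result { string payload = 1; }
```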

 

>
> If its the case that one process can start a service, have clients connect 
> to it, and then register services that they offer with that server, then 
> you're correct, and I do only need one server. The key is that the client 
> needs to be able to state "I offer this service, and heres how you can send 
> me messages". I'm just not sure how to implement that in GRPC.
>
> I think I did indeed have the terminology correct: I do want multiple 
> servers, each offering one service. The idea is that a simulator would 
> connect to an already running optimizer, start their own server running a 
> single instance of the service 'EndpointClientMustImplement', bind it 
> with protobuf, then call 'register' on our optimizer with a token that 
> contains details on "heres how you can connect to the service I just 
> started".
>
> The only downside to your suggestion is that it would require 
> multi-threading, because user code would have to call `register`, and then 
> produce two threads (or coroutines) to consume all the messages in both the 
> `featureC` stream and the `featureD` stream. But it does address some of my 
> concerns. 
>

I would not consider threading to be a serious concern here (and I say that 
as someone who has spent significant time optimizing gRPC).  You will 
likely need to give up the blocking API anyways, which means you can have 
requests and responses happen on the same thread, keeping a clear 
synchronization order between the two.  
 

>
> Still, I like the elegance of the solution I was asking for: when a 
> client-simulator connects to a server-optimizer, it starts its own service 
> and tells the optimizer it connects to that it should call back to this 
> service at some token.
>
> Can it be done?
>
> On Thursday, 14 February 2019 08:59:35 UTC-8, Carl Mastrangelo wrote:
>>
>> Some comments / questions:
>>
>> 1.  Why doesn't "rpc register" get split into two methods, one per type?  
>> Like "rpc registerCee (CeeRegRequest) returns (CeeRegResponse);"
>>
>> 2.  Being careful with terminology, you have multiple "services" on a 
>> single "server", and the "server" is at one address.   
>>
>> 3.  You can find all services, methods, and types using the reflection 
>> api, typically by adding ProtoReflectionService to your Server.  
>>
>> 4.  BindableService and ServerServiceDefinition are standard and stable 
>> API, you can make them if you want.  The Protobuf generated code makes its 
>> own (and is complicated for other reasons) but you can safely and easily 
>> construct one that you prefer.
>>
>> 5.  Multiple ports is usually for something special, like different 
>> socket options per port, or different security levels.  That is a more 
>> advanced feature less related to API. 
>>
>> On Wednesday, February 13, 2019 at 10:58:51 AM UTC-8, Geoff Groos wrote:
>>>
>>> Hey everyone
>>>
>>> I'm building an API with GRPC which currently looks like this:
>>>
>>> service OurEndpoint {
>>>rpc register (RegistrationForFeatureCeeAndDee) returns (stream 
>>> FeatureCeeOrDeeRequest) {}
>>>  
>>>rpc featureA (FeatureAyeRequest) returns (FeatureAyeReponse) {}
>>>rpc featureB (FeatureBeeRequest) returns (FeatureBeeResponse) {}
>>>
>>>rpc offerFeatureC(FeatureCeeResponse) returns (Confirmation) {}
>>>rpc offerFeatureD(FeatureDeeResponse) returns (Confirmation) {}
>>>rpc offerCeeOrDeeFailed(FailureResponse) returns (Confirmation) {}
>>> }
>>>
>>>
>>> message FeatureCeeOrDeeRequest {
>>> oneof request {
>>> FeatureDeeRequest deeRequest = 1;
>>> FeatureCeeRequest ceeRequest = 2;  
>>> }
>>> }
>>>
>>>
>>> message Confirmation {}
>>>
>>> Note that features A and B are fairly traditional client-driven 
>>> request-response pairs.
>>>
>>> Features C and D are callbacks; the client registers with
>>>
>>> I can provide answers to C and D, send me a message and I'll call 
>>> offerFeatureResponse as appropriate.
>>>
>>>
>>> I don't like this. It makes our application code complex. We

[grpc-io] Re: Using multiple late-bound services

2019-02-14 Thread 'Carl Mastrangelo' via grpc.io
Some comments / questions:

1.  Why doesn't "rpc register" get split into two methods, one per type?  
Like "rpc registerCee (CeeRegRequest) returns (CeeRegResponse);"

2.  Being careful with terminology, you have multiple "services" on a single 
"server", and the "server" is at one address.   

3.  You can find all services, methods, and types using the reflection api, 
typically by adding ProtoReflectionService to your Server.  

4.  BindableService and ServerServiceDefinition are standard and stable 
API, you can make them if you want.  The Protobuf generated code makes its 
own (and is complicated for other reasons) but you can safely and easily 
construct one that you prefer.

5.  Multiple ports is usually for something special, like different socket 
options per port, or different security levels.  That is a more advanced 
feature less related to API. 
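Point 4 can be made concrete without touching the codegen at all. The sketch below is plain Java with no gRPC dependency: a toy registry that, in the same spirit as a hand-built ServerServiceDefinition, binds a handler under a service name chosen at runtime and dispatches by that name. Every class and method name here is illustrative, not grpc-java API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Toy dispatcher: services are registered under names chosen at runtime,
// the way a hand-built ServerServiceDefinition can carry a dynamic name.
public class LateBoundRegistry {
    private final Map<String, Function<String, String>> services = new ConcurrentHashMap<>();

    // The name is a required parameter, so a service cannot be registered
    // without one -- the "open on a constructor param" idea from the question.
    public void register(String serviceName, Function<String, String> handler) {
        services.put(serviceName, handler);
    }

    // Analogous in spirit to the server routing an incoming RPC by its
    // "/ServiceName/MethodName" path.
    public String call(String serviceName, String request) {
        Function<String, String> handler = services.get(serviceName);
        if (handler == null) {
            throw new IllegalArgumentException("UNIMPLEMENTED: " + serviceName);
        }
        return handler.apply(request);
    }

    public static void main(String[] args) {
        LateBoundRegistry registry = new LateBoundRegistry();
        // Two instances of the "same" service, distinguished only by a
        // runtime-chosen name (e.g. a per-client connection token):
        registry.register("client-abc123", req -> "featureC answer for " + req);
        registry.register("client-def456", req -> "featureC answer for " + req);
        System.out.println(registry.call("client-abc123", "x"));
    }
}
```

In real grpc-java the equivalent is building a ServerServiceDefinition whose service name embeds the token, adding it to one Server on one port, and letting the server route by the fully qualified method name; the registry above only shows the dispatch idea.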

On Wednesday, February 13, 2019 at 10:58:51 AM UTC-8, Geoff Groos wrote:
>
> Hey everyone
>
> I'm building an API with GRPC which currently looks like this:
>
> service OurEndpoint {
>rpc register (RegistrationForFeatureCeeAndDee) returns (stream 
> FeatureCeeOrDeeRequest) {}
>  
>    rpc featureA (FeatureAyeRequest) returns (FeatureAyeResponse) {}
>rpc featureB (FeatureBeeRequest) returns (FeatureBeeResponse) {}
>
>rpc offerFeatureC(FeatureCeeResponse) returns (Confirmation) {}
>rpc offerFeatureD(FeatureDeeResponse) returns (Confirmation) {}
>rpc offerCeeOrDeeFailed(FailureResponse) returns (Confirmation) {}
> }
>
>
> message FeatureCeeOrDeeRequest {
> oneof request {
> FeatureDeeRequest deeRequest = 1;
> FeatureCeeRequest ceeRequest = 2;  
> }
> }
>
>
> message Confirmation {}
>
> Note that features A and B are fairly traditional client-driven 
> request-response pairs.
>
> Features C and D are callbacks; the client effectively registers with:
>
> "I can provide answers to C and D; send me a message and I'll call 
> offerFeatureResponse as appropriate."
>
>
> I don't like this. It makes our application code complex. We effectively 
> have to build our own multiplexer for things like offerCeeOrDeeFailed
>
> What I'd really rather do is this:
>
> service OurEndpoint {
>rpc register (RegistrationForFeatureCeeAndDee) returns (Confirmation) 
> {}
>  
>    rpc featureA (FeatureAyeRequest) returns (FeatureAyeResponse) {}
>rpc featureB (FeatureBeeRequest) returns (FeatureBeeResponse) {}  
> }
> service EndpointClientMustImplement {
>rpc featureC(FeatureCeeRequest) returns (FeatureCeeResponse) {}
>rpc featureD(FeatureDeeRequest) returns (FeatureDeeResponse) {}
> }
>
>
> message RegistrationForFeatureCeeAndDee {
>ConnectionToken name = 1;
> }
>
>
> message Confirmation {}
>
>
> The problem here is how to go about implementing ConnectionToken and its 
> handler. Ideally I'd like some code like this:
>
> //kotlin, which is on the jvm.
> override fun register(request: RegistrationForFeatureCeeAndDee, response: 
> ResponseObserver) {
>
> //...
>
> val channel: Channel = ManagedChannelBuilder
> .for("localhost", 5551) // a port shared by the service 
> handling this very response
> .build()
>
> val stub: EndpointClientMustImplement = EndpointClientMustImplement.
> newBuilder()
> .withServiceNameOrSimilar(request.name)
> .build()
>
> //
> }
>
> What is the best way to go about this?
> 1. Can I have multiple servers at a single address?
> 2. What's the best way to find a service instance by name at runtime rather 
> than by a type-derived (and thus statically bound) name? I suspect the 
> BindableService and ServerServiceDefinitions will help me here, but I 
> really don't want to mess with the method-table building and the code 
> generating system seems opaque. 
>
> I guess my ideal solution would be to ask the code generator to generate 
> code that is open on its service name -- ideally open on a constructor 
> param such that there is no way to instantiate the service without 
> specifying its service name.
>
> Or, perhaps there's some other strategy I should be using? I could of 
> course specify port numbers and then instantiate gRPC services once per 
> port, but that means I'm bounded on the number of ports I'm using by the 
> number of active API users I have, which is very strange.
>
> Many thanks!
>
> -Geoff
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/81261a66-6f7a-47f2-9532-fceb1191e239%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: SEGFAULT in greeter_client

2019-02-13 Thread 'Carl Mastrangelo' via grpc.io
Hi, can you file an issue on gRPC's GitHub issue 
tracker?  https://github.com/grpc/grpc/issues/new

On Thursday, January 24, 2019 at 7:37:05 AM UTC-8, Gautham B A wrote:
>
> Hi all,
>
> I just cloned and built gRPC 
> (SHA 9ed8734efb9b1b2cd892942c2c6dd57e903ce719). I'm getting SEGFAULT when I 
> try to run greeter_client in C++. It SEGFAULTs when the RPC call is made -
>
> Status status = stub_->SayHello(&context, request, &reply);
>
> Here's how I'm building greeter_client -
>
> cmake_minimum_required(VERSION 3.13)
> project(HelloWorld)
>
> set(CMAKE_CXX_STANDARD 17)
>
> set(GRPC_BUILD_DIR
> /Users/gautham/projects/github/grpc)
>
> set(LIB_GRPC
> ${GRPC_BUILD_DIR}/libs/opt/libgpr.dylib
> ${GRPC_BUILD_DIR}/libs/opt/libaddress_sorting.dylib
> ${GRPC_BUILD_DIR}/libs/opt/libgrpc++.dylib
> ${GRPC_BUILD_DIR}/libs/opt/libgrpc++_cronet.dylib
> ${GRPC_BUILD_DIR}/libs/opt/libgrpc++_error_details.dylib
> ${GRPC_BUILD_DIR}/libs/opt/libgrpc++_reflection.dylib
> ${GRPC_BUILD_DIR}/libs/opt/libgrpc++_unsecure.dylib
> ${GRPC_BUILD_DIR}/libs/opt/libgrpc.dylib
> ${GRPC_BUILD_DIR}/libs/opt/libgrpc_cronet.dylib
> ${GRPC_BUILD_DIR}/libs/opt/libgrpc_unsecure.dylib
> ${GRPC_BUILD_DIR}/libs/opt/libgrpcpp_channelz.dylib
> )
>
> set(LIB_PROTOBUF
> 
> ${GRPC_BUILD_DIR}/third_party/protobuf/src/.libs/libprotobuf-lite.17.dylib
> 
> ${GRPC_BUILD_DIR}/third_party/protobuf/src/.libs/libprotobuf-lite.dylib
> 
> ${GRPC_BUILD_DIR}/third_party/protobuf/src/.libs/libprotobuf.17.dylib
> ${GRPC_BUILD_DIR}/third_party/protobuf/src/.libs/libprotobuf.dylib
> ${GRPC_BUILD_DIR}/third_party/protobuf/src/.libs/libprotoc.17.dylib
> ${GRPC_BUILD_DIR}/third_party/protobuf/src/.libs/libprotoc.dylib
> )
>
> include_directories(
> ${GRPC_BUILD_DIR}/include
> )
>
> add_executable(greeter_client
> greeter_client.cc
> helloworld.grpc.pb.cc
> helloworld.pb.cc
> )
>
> target_link_libraries(greeter_client
> ${LIB_GRPC}
> ${LIB_PROTOBUF}
> )
>
> Here's the coredump -
> * thread #1, stop reason = signal SIGSTOP
>   * frame #0: 0x7fffa253a19e libsystem_kernel.dylib`poll + 10
> frame #1: 0x00010e6c01a6 
> libgrpc.dylib`pollset_work(pollset=, 
> worker_hdl=0x7fff519dded8, deadline=) at 
> ev_poll_posix.cc:1063 [opt]
> frame #2: 0x00010e6e5999 
> libgrpc.dylib`cq_pluck(cq=0x7fad6240ae40, tag=0x7fff519de200, 
> deadline=, reserved=) at completion_queue.cc:1282 
> [opt]
> frame #3: 0x00010e22c4d1 
> greeter_client`grpc::CompletionQueue::Pluck(grpc::internal::CompletionQueueTag*)
>  
> + 161
> frame #4: 0x00010e22b810 
> greeter_client`grpc::internal::BlockingUnaryCallImpl<helloworld::HelloRequest, 
> helloworld::HelloReply>::BlockingUnaryCallImpl(grpc::ChannelInterface*, 
> grpc::internal::RpcMethod const&, grpc::ClientContext*, 
> helloworld::HelloRequest const&, helloworld::HelloReply*) + 704
> frame #5: 0x00010e22b4ed 
> greeter_client`grpc::internal::BlockingUnaryCallImpl<helloworld::HelloRequest, 
> helloworld::HelloReply>::BlockingUnaryCallImpl(grpc::ChannelInterface*, 
> grpc::internal::RpcMethod const&, grpc::ClientContext*, 
> helloworld::HelloRequest const&, helloworld::HelloReply*) + 61
> frame #6: 0x00010e228921 greeter_client`grpc::Status 
> grpc::internal::BlockingUnaryCall<helloworld::HelloRequest, 
> helloworld::HelloReply>(grpc::ChannelInterface*, grpc::internal::RpcMethod 
> const&, grpc::ClientContext*, helloworld::HelloRequest const&, 
> helloworld::HelloReply*) + 81
> frame #7: 0x00010e2288c5 
> greeter_client`helloworld::Greeter::Stub::SayHello(grpc::ClientContext*, 
> helloworld::HelloRequest const&, helloworld::HelloReply*) + 85
> frame #8: 0x00010e226ecb 
> greeter_client`GreeterClient::SayHello(std::__1::basic_string<char, 
> std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 235
> frame #9: 0x00010e226c05 greeter_client`main + 469
> frame #10: 0x7fffa240a235 libdyld.dylib`start + 1
> frame #11: 0x7fffa240a235 libdyld.dylib`start + 1
>
> I'm using macOS Sierra 10.12.6
>
> Compiler -
> clang
> Apple LLVM version 9.0.0 (clang-900.0.39.2)
> Target: x86_64-apple-darwin16.7.0
> Thread model: posix
>
> Can anyone please help me?
>
> Thanks,
> --Gautham
>
>



[grpc-io] Re: StatusRuntimeException with gRPC stream: Channel closed

2019-02-13 Thread 'Carl Mastrangelo' via grpc.io
Cancellation is usually initiated by your application code, unless there is a 
proxy in the path.  There is no default time limit, and RPCs will never 
time out on their own.   You can add one by setting a deadline on the stub.
RPCs that hit a deadline fail with a DEADLINE_EXCEEDED status code, rather 
than CANCELLED.
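To make the deadline semantics concrete: a deadline is an absolute point in time, fixed once when the RPC starts, rather than an idle timer that resets per message. The sketch below uses only java.time, not grpc-java's actual Deadline class; the withDeadlineAfter reference in the comment is the real stub method it is loosely modeled on.

```java
import java.time.Duration;
import java.time.Instant;

public class DeadlineSketch {
    // The deadline is fixed at call start; every attempt of the RPC shares it.
    static Instant deadlineAfter(Instant start, Duration timeout) {
        return start.plus(timeout);
    }

    static boolean exceeded(Instant deadline, Instant now) {
        return now.isAfter(deadline);
    }

    public static void main(String[] args) {
        Instant start = Instant.parse("2019-02-13T00:00:00Z");
        // Loosely analogous to stub.withDeadlineAfter(5, TimeUnit.SECONDS):
        Instant deadline = deadlineAfter(start, Duration.ofSeconds(5));
        System.out.println(exceeded(deadline, start.plusSeconds(3))); // false: within budget
        System.out.println(exceeded(deadline, start.plusSeconds(6))); // true: DEADLINE_EXCEEDED territory
    }
}
```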

On Tuesday, February 12, 2019 at 8:24:27 PM UTC-8, mur...@akruta.com wrote:
>
> This error occurs on the server side.
> Also, what is the default timelimit for an rpc call?
>
>
>
> On Tuesday, February 12, 2019 at 8:01:44 PM UTC-8, mur...@akruta.com 
> wrote:
>>
>> Hi all,
>>
>> I am trying to use multiple grpc bidirectional streams in the same 
>> service between a client and server. And every now and then I get the 
>> following error:
>> ```
>>  io.grpc.StatusRuntimeException: CANCELLED: cancelled before receiving 
>> half close
>> at io.grpc.Status.asRuntimeException(Status.java:517)
>> at 
>> io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onCancel(ServerCalls.java:272)
>> at 
>> io.grpc.PartialForwardingServerCallListener.onCancel(PartialForwardingServerCallListener.java:40)
>> at 
>> io.grpc.ForwardingServerCallListener.onCancel(ForwardingServerCallListener.java:23)
>> at 
>> io.grpc.ForwardingServerCallListener$SimpleForwardingServerCallListener.onCancel(ForwardingServerCallListener.java:40)
>> at 
>> io.grpc.Contexts$ContextualizedServerCallListener.onCancel(Contexts.java:96)
>> at 
>> io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.closed(ServerCallImpl.java:293)
>> at 
>> io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1Closed.runInContext(ServerImpl.java:738)
>> at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>> at 
>> io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
>> at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
>> at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
>> at java.lang.Thread.run(Thread.java:764)
>> ```
>> did anyone else face this issue?
>> any pointers would be really appreciated.
>>
>> Thanks,
>> Murali
>>
>>



[grpc-io] Re: Does grpc-java supports cert reload without restarting server?

2019-02-07 Thread 'Carl Mastrangelo' via grpc.io
You are correct, Java doesn't support this.   However, if you are using the 
round robin load balancer in your client, you should be able to gracefully 
restart your servers with the new certificate without dropping any requests.

On Thursday, February 7, 2019 at 6:52:46 AM UTC-8, Danesh Kuruppu wrote:
>
> Hi Daisy,
>
> Does grpc-java supports cert refresh without restarting server?
>>
>
> AFAIK, this is not supported yet. We need to restart the server.
> Please correct me if I am wrong.
>
> Thanks
> Danesh
>



Re: [grpc-io] Re: Is there a python interceptor that generates access logs?

2019-02-07 Thread 'Carl Mastrangelo' via grpc.io
I work on the gRPC Java implementation, but I actually use the Go version 
(server and client).  It has been very easy to get started and deploy.   I 
would recommend the Go version since you can forget about almost all 
threading issues in your server handler.

On Thursday, February 7, 2019 at 9:25:59 AM UTC-8, robert engels wrote:
>
> Might be a better question, is it silly to start any python service in 
> 2019, given the alternatives available like Go. :) 
>
> > On Feb 7, 2019, at 10:39 AM, Ross Vandegrift wrote: 
> > 
> > On Wed, 2019-02-06 at 09:37 -0800, 'Carl Mastrangelo' via grpc.io 
> wrote: 
> >> There was some work on binary logs to dump the traffic to disk (for 
> replay), 
> >> but it wasn't anything like access logs.   I assume you mean something 
> like 
> >> Apache's log per request.   
> > 
> > Yes, that's what I'm thinking.  Mostly just to provide simple 
> troubleshooting 
> > info for developers.  I might write one, thanks for the list of 
> potential 
> > pitfalls. 
> > 
> > Side concern: it seems like most of the grpc ecosystem is built around 
> golang. 
> > Is it silly to start a new grpc python service in 2019? 
> > 
> > Thanks for the feedback, 
> > Ross 
> > 
>
>



[grpc-io] Re: Is there a python interceptor that generates access logs?

2019-02-06 Thread 'Carl Mastrangelo' via grpc.io
There was some work on binary logs to dump the traffic to disk (for 
replay), but it wasn't anything like access logs.   I assume you mean 
something like Apache's per-request log.  

If you are considering writing one, I foresee a few problems:

1.  Streaming calls may take a long time.  Should the beginning or end be 
logged, or both?
2.  Request size is tricky.   Do you include the headers and trailers?  
3.  Writing and flushing to disk (or to a remote log service) might need 
some buffering considerations.

On Wednesday, February 6, 2019 at 9:12:12 AM UTC-8, 
ross.va...@cleardata.com wrote:
>
> Hello,
>
> I'm working on a grpc service in python and would like to have the server 
> write simple access logs.  Seems like someone else has probably run into 
> that, but I can't find an existing interceptor. Is there one out there?
>
> Thanks,
> Ross
>



[grpc-io] Re: HelpNeeded: in RoundRobinLoadBalancerFactory usage properly

2019-02-01 Thread 'Carl Mastrangelo' via grpc.io
Is there another way to get the addresses of all your backends?   If the 
DNS query only returns some of the results, gRPC won't try to query it 
again until some of the connections fail (or a timeout expires).  You can 
see what backends are returned by calling the 'dig' tool on Linux a few 
times to see what response you get.   

One alternative is to implement a custom NameResolver (not as hard as it 
sounds!) to issue multiple queries and aggregate the backends.   That would 
get you more backends (making some assumptions about your setup) to pass to 
the load balancer.
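The "issue multiple queries and aggregate the backends" idea can be sketched with nothing but the JDK resolver. This is not the grpc-java NameResolver API; it only shows the aggregation step such a resolver would perform before handing the address list to the load balancer. The class name and the use of "localhost" are illustrative.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.LinkedHashSet;
import java.util.Set;

public class AggregatingLookup {
    // Query the same name several times and union the results, so a DNS
    // server that returns a partial or rotating answer still yields the
    // full backend set over a few attempts.
    static Set<String> resolveAll(String host, int attempts) {
        Set<String> addresses = new LinkedHashSet<>();
        for (int i = 0; i < attempts; i++) {
            try {
                for (InetAddress a : InetAddress.getAllByName(host)) {
                    addresses.add(a.getHostAddress());
                }
            } catch (UnknownHostException e) {
                // A failed query simply contributes no addresses; keep going.
            }
        }
        return addresses;
    }

    public static void main(String[] args) {
        // "localhost" keeps the example network-free and deterministic.
        System.out.println(resolveAll("localhost", 3));
    }
}
```

A real custom NameResolver would run this aggregation on refresh and push the resulting list to the channel, which then feeds it to the round-robin balancer.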

On Thursday, January 31, 2019 at 4:03:58 PM UTC-8, Vinay Kumar Marpina 
wrote:
>
> Dear Concerned,
>
> Scenario: I have a gRPC service hosted on AWS EKS with a single load 
> balancer (ELB) pointing to 5 replicas. My gRPC client creates a single 
> channel and tries to send 1000s of requests over it. If I use multiple 
> channels the load balancer distributes the load properly; if not, it 
> sends everything to only one pod out of the 5 replicas. 
>
> So what I did is use a RoundRobinLoadBalancerFactory instance in the 
> ManagedChannelBuilder to build the channel. Now when I send 100's of 
> requests through a single channel it is able to distribute the load, but it 
> only distributes to 60% of the replicas based on its choice and never sends 
> anything to the rest. Example: 3 replicas, only 2 get hit; 5 replicas, 3 get 
> hit; etc...
>
> When I started debugging I found that, irrespective of the number of 
> replicas, DnsNameResolver always resolves only 3 servers. It's pretty weird. 
> Am I missing something? resolutionResults or Servers should match the 
> number of replicas, right? Even when I have 2 replicas I see 3 servers in 
> DnsNameResolver. I think my understanding of the source code is a bit off. 
> Can someone please help me find what I am missing.
>
> If this is not the right place can you please guide me to correct place 
> where i can find help. If i need to provide any more info please let me 
> know.
>
> Thanking you in Advance for your valuable time.
>
> Regards,
> -Vinay,
>



[grpc-io] Re: async gRPC - error conditions

2019-02-01 Thread 'Carl Mastrangelo' via grpc.io
Partial answers:

Callers don't drop connections due to timeouts.   RPCs can be dropped due to 
timeouts, but connections are not.  In the event that an RPC times out, the 
client-side load balancer will decide whether the RPC can be rescheduled onto 
a new connection.   gRPC comes with two load balancers by default: 
PickFirst and RoundRobin.  PickFirst prefers to use a single connection, 
even if there are multiple backends, and is the default.  
RoundRobin spreads load evenly across backends, and can send new RPCs to 
non-failing connections.  

I can't comment on the C++ API, unfortunately.
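The difference between the two default policies can be shown as a picker sketch. This is plain Java for illustration; grpc-java's real LoadBalancer SPI is more involved, and the class and backend names here are made up.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class PickerSketch {
    // PickFirst: always the same backend while its connection stays healthy.
    static String pickFirst(List<String> backends) {
        return backends.get(0);
    }

    // RoundRobin: rotate through backends so load spreads evenly and a new
    // RPC can land on a non-failing connection.
    private final AtomicInteger next = new AtomicInteger();

    String roundRobin(List<String> backends) {
        int i = Math.floorMod(next.getAndIncrement(), backends.size());
        return backends.get(i);
    }

    public static void main(String[] args) {
        List<String> backends = List.of("10.0.0.1", "10.0.0.2", "10.0.0.3");
        System.out.println(pickFirst(backends));          // 10.0.0.1 every time
        PickerSketch rr = new PickerSketch();
        for (int i = 0; i < 4; i++) {
            System.out.println(rr.roundRobin(backends));  // .1, .2, .3, .1
        }
    }
}
```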

On Thursday, January 31, 2019 at 12:54:19 AM UTC-8, Stephan Menzel wrote:
>
> Hello group,
>
> I am in the process of transforming a number of gRPC services to an async 
> approach. To this end, I have implemented a base architecture for my calls, 
> starting out with the async server example in the code and extending it 
> with a template base class for my calls. Unlike the example, the actual 
> work is done in a workerthread running a number of fibers where I post all 
> the work into.
>
> As I get along transforming all the calls a question arises that puzzled 
> me for quite some time and that I like to find an answer for in order to 
> get my implementation stabilized. I am referring to this example: 
> https://github.com/grpc/grpc/blob/master/examples/cpp/helloworld/greeter_async_server.cc
>
> The question is: What happens when things go south?
>
> For example, when the caller drops the connection because it runs into its 
> timeout. What happens to the call? Will it somehow be shut down? And how do 
> I notice that? Since my actual work is happening in another thread I need 
> to understand the lifecycle of the object better.
>
> The same goes for the call itself to timeout. Some of my implementations 
> involve blocking file IO and other stuff that may exceed a timeout I am 
> setting myself. Am I right assuming that I'm supposed to call Finish() 
> anyway? And would I be able to tell Finish() that this call is faulty or 
> would I use the actual response payload to relay this information?
>
> What other error conditions should I take care of that the example doesn't 
> cover? I'm thinking unparseable calls? Unimplemented methods? Anything 
> really.
>
> Can anyone shed some light on this?
>
> Best regards...
>
> Stephan
>
>



[grpc-io] Re: errors when linking with protobuf-lite

2019-01-17 Thread 'Carl Mastrangelo' via grpc.io
Do you have c-ares ( https://c-ares.haxx.se/ ) somewhere on your system?

On Thursday, January 17, 2019 at 11:10:17 AM UTC-8, 
joe.p...@decisionsciencescorp.com wrote:
>
> I have built gRPC with protobuf-lite support and added 
> the  -DGRPC_USE_PROTO_LITE compiler switch along with option optimize_for 
> = LITE_RUNTIME; to my protobuf definition file. My program compiles ok 
> but generates the following undefined references during link:
>
> grpc_ares_wrapper.cc:(.text+0x7c8): undefined reference to `ares_inet_ntop'
>
> grpc_ares_wrapper.cc:(.text+0x9a7): undefined reference to `ares_strerror'
>
> grpc_ares_wrapper.cc:(.text+0xb44): undefined reference to 
> `ares_parse_srv_reply'
>
> grpc_ares_wrapper.cc:(.text+0xbd7): undefined reference to 
> `ares_gethostbyname'
>
> grpc_ares_wrapper.cc:(.text+0xc33): undefined reference to 
> `ares_gethostbyname'
>
> grpc_ares_wrapper.cc:(.text+0xc6e): undefined reference to `ares_free_data'
>
> grpc_ares_wrapper.cc:(.text+0xe02): undefined reference to 
> `ares_parse_txt_reply_ext'
>
> grpc_ares_wrapper.cc:(.text+0xfbd): undefined reference to `ares_free_data'
>
> grpc_ares_wrapper.cc:(.text+0x155f): undefined reference to 
> `ares_set_servers_ports'
>
> grpc_ares_wrapper.cc:(.text+0x16cf): undefined reference to 
> `ares_gethostbyname'
>
> grpc_ares_wrapper.cc:(.text+0x173a): undefined reference to `ares_query'
>
> grpc_ares_wrapper.cc:(.text+0x17bb): undefined reference to `ares_search'
>
> grpc_ares_wrapper.cc:(.text+0x1e90): undefined reference to 
> `ares_library_init'
>
>
> Note that I am linking against the following grpc libs:
>
> libgrpc++.a
>
> libgrpc.a
>
> libaddress_sorting.a
>
> libgpr.a
>
> libgrpc_cronet.a
>
> libgrpc++_cronet.a
>
> libgrpc++_error_details.a
>
> libgrpc_plugin_support.a
>
> libgrpcpp_channelz.a
>
> libgrpc++_reflection.a
>
> libgrpc_unsecure.a
>
> libgrpc++_unsecure.a
>
>



[grpc-io] Re: gRPC client side RoundRobin loadbalancing w/ Consul DNS

2019-01-17 Thread 'Carl Mastrangelo' via grpc.io
I know you asked for C++, but at least for Java we do not honor TTL 
(because the JVM doesn't surface it to us).  If you implement your own 
NameResolver (not as hard as it sounds!) you can honor these TTLs.  

I believe C++ uses the c-ares resolver, which IIRC can resort to doing TCP 
lookups if the response size is too large.  Alas, I cannot answer with any 
more detail.

gRPC has the option to do health checks, but what I think you actually want 
are keep-alives.  This is configurable on the channel and the server.  If 
you can add more detail about the problem you are trying to avoid, I can 
give a better answer.

As for whether DNS is a really bad idea:  not really.  It has issues, but none 
of them are particularly damning.   For example, when you add a new server, 
most clients won't find out about it until they poll again.  gRPC is 
designed around a push-based name resolution model, with clients being told 
what servers they can talk to.   DNS is adapted onto this model by 
periodically spawning a thread and notifying the client via the 
push interface.

The DNS support in gRPC is pretty good, to the point that implementing a 
custom DNS resolver is likely to cause more issues (what happens if the A 
lookups succeed but the AAAA lookups fail?  what happens if there are lots of 
addresses for a single endpoint?  etc.)

One last thing to consider:  the loadbalancer in gRPC is independent of the 
name resolver.  You could continue to use DNS (and do SRV lookups and such) 
and pass that info into your own custom client-side LB.  This is what 
gRPCLB does, but you could customize your own copy to not depend on a 
gRPCLB server.   There's lots of options here. 
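That separation (the resolver produces addresses; the balancer picks among them) can be sketched as two independent interfaces wired together. The interface and method names below are illustrative, not grpc-java's actual NameResolver/LoadBalancer SPI; the point is only that either side can be swapped without touching the other.

```java
import java.util.List;

public class ResolverBalancerSketch {
    // Name resolution: where the addresses come from (DNS, SRV records, a file, ...).
    interface Resolver { List<String> resolve(); }

    // Load balancing: given the current addresses, which one serves the next RPC.
    interface Balancer { String pick(List<String> addresses); }

    static String nextBackend(Resolver resolver, Balancer balancer) {
        return balancer.pick(resolver.resolve());
    }

    public static void main(String[] args) {
        Resolver dnsLike = () -> List.of("10.0.0.1", "10.0.0.2");
        Balancer firstHealthy = addrs -> addrs.get(0);
        // Swap in SRV-based resolution or a custom LB without changing the other side:
        System.out.println(nextBackend(dnsLike, firstHealthy)); // 10.0.0.1
    }
}
```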



On Wednesday, January 16, 2019 at 5:01:33 PM UTC-8, Ram Kumar Rengaswamy 
wrote:
>
> Hello ... We are looking to setup client-side loadbalancing in GRPC (C++).
> Our current plan roughly is the following:
> 1. Use consul for service discovery, health checks etc.
> 2. Expose the IP addresses behind a service to GRPC client via Consul DNS 
> Interface 
> 3. Configure the client to use simple round_robin loadbalancing (All our 
> servers have the same capacity and therefore we don't need any 
> sophisticated load balancing)
>
> Before we embark on this path, it would be great if someone with gRPC 
> production experience could answer a few questions.
> Q1: We plan to use a low DNS TTL (say 30s) to force the clients to have 
> the most up to date service discovery information. Do gRPC clients honor 
> DNS TTL ?
> Q2: Is it possible for gRPC to resolve DNS via TCP instead of UDP ? We 
> could have a couple of hundred backends for a service.
> Q3: Does gRPC do its own health checks and mark unhealthy connections?
>
> Also from experience, do folks think that this is a really bad idea and we 
> should really use grpclb policy and implement a look-aside loadbalancer 
> instead ?
>
> Thanks,
> -Ram
>



[grpc-io] Re: gRPC java client side TLS authentication using root.pem file and using username and password.

2019-01-17 Thread 'Carl Mastrangelo' via grpc.io
You are going to need to clarify some more; I can't tell what's going on in 
your setup.   Where do the username and password come from?  Why aren't you 
using an authentication token?   Have you read our Mutual TLS guide 
here: https://github.com/grpc/grpc-java/blob/master/SECURITY.md#mutual-tls

On Tuesday, January 15, 2019 at 1:09:39 PM UTC-8, Kishore Ganipineni wrote:
>
> SSL/TLS authentication of gRPC using a root.pem file and username & password 
> on the client side.
>
> To authenticate the gRPC server using a root pem certificate file and 
> credentials in C++, we have a facility to provide both options from the 
> client like below.
>
> pem file setup using environment variable option (C++):
>
> setenv("GRPC_DEFAULT_SSL_ROOTS_FILE_PATH", fileBuff1, true);
> sprintf(setSecBuff, "chmod 777 %s", fileBuff1);
> system(setSecBuff);
> Creating Channel Using ssl options(keyPassword if any):
>
> SslCredentialsOptions ssl_opts;
> TelemAsyncClient 
> telemAsyncClient(grpc::CreateChannel(std::string(hostIpStr), 
> grpc::SslCredentials(ssl_opts), ChannelArguments()));
> Passing credentials using ClientContext(C++):
>
> ClientContext context;
> CompletionQueue cq;
> Status status;
>
> context.AddMetadata("username", userid); 
> context.AddMetadata("password", password);  
>
>
> // Print Populated GetRequest
> printGetRequest(&getReq); 
> std::unique_ptr > 
> rpc(stub_->AsyncGet(&context, getReq, &cq));
> In Java we have a facility to pass the pem file, but how do we pass the 
> credentials? Java code to pass the pem file: 
>
> ManagedChannel channel = NettyChannelBuilder.forAddress(ip, port)
> .useTransportSecurity()
> .negotiationType(NegotiationType.TLS)
> .sslContext(GrpcSslContexts.forClient()
> .trustManager(new File("/test.pem"))
> .clientAuth(ClientAuth.REQUIRE)
> .build())
> .overrideAuthority("test")
> .build();
> I tried to set the credentials using the CallCredentials and ClientInterceptor 
> options, but none of them worked. The server side is not receiving the 
> username. Hence I am getting an io.grpc.StatusRuntimeException: 
> UNAUTHENTICATED exception.
>
> CallCredentials Tried:
>
> OpenConfigGrpc.OpenConfigBlockingStub blockingStub = 
> OpenConfigGrpc.newBlockingStub(channel).withCallCredentials(credentials);
>
> public void applyRequestMetadata(MethodDescriptor<?, ?> methodDescriptor, 
> Attributes attributes, Executor executor, final MetadataApplier 
> metadataApplier) {
> String authority = attributes.get(ATTR_AUTHORITY);
> Attributes.Key<String> usernameKey = Attributes.Key.of("userId");
> Attributes.Key<String> passwordKey = Attributes.Key.of("password");
> attributes.newBuilder().set(usernameKey, username).build();
> attributes.newBuilder().set(passwordKey, pasfhocal).build();
> System.out.println(authority);
> executor.execute(new Runnable() {
> public void run() {
> try {
> Metadata headers = new Metadata();
> Metadata.Key<String> usernameKey = 
> Metadata.Key.of("userId", Metadata.ASCII_STRING_MARSHALLER);
> Metadata.Key<String> passwordKey = 
> Metadata.Key.of("password", Metadata.ASCII_STRING_MARSHALLER);
> headers.put(usernameKey, username);
> headers.put(passwordKey, pasfhocal);
> metadataApplier.apply(headers);
> } catch (Exception e) {
> 
> metadataApplier.fail(Status.UNAUTHENTICATED.withCause(e));
> e.printStackTrace();
> }finally{
> logger.info("Inside CienaCallCredentials finally.");
> }
> }
> });
> }
> Interceptors Tried:
>
> OpenConfigGrpc.OpenConfigBlockingStub blockingStub = 
> OpenConfigGrpc.newBlockingStub(channel).withInterceptors(interceptors);
>
> public <ReqT, RespT> ClientCall<ReqT, RespT> 
> interceptCall(MethodDescriptor<ReqT, RespT> methodDescriptor, CallOptions 
> callOptions, Channel channel) {
> return new ForwardingClientCall.SimpleForwardingClientCall<ReqT, 
> RespT>(channel.newCall(methodDescriptor, callOptions)) {
> @Override
> public void start(Listener<RespT> responseListener, Metadata 
> headers) {
> callOptions.withCallCredentials(credentials);
> Metadata.Key<String> usernameKey = 
> Metadata.Key.of("usernId", Metadata.ASCII_STRING_MARSHALLER);
> headers.put(usernameKey, username);
> Metadata.Key<String> passwordKey = 
> Metadata.Key.of("password", Metadata.ASCII_STRING_MARSHALLER);
> headers.put(passwordKey, pasfhocal);
> super.start(responseListener, headers);
> }
> };
> }
> Your help would be much appreciated if someone can advise on how to 
> authenticate gRPC using a root.pem file together with a username and password.
>
> Thanks in Advance, Kishore
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.

[grpc-io] Re: relaying grpc calls

2019-01-14 Thread 'Carl Mastrangelo' via grpc.io
Yes, this is possible.  You'll need to implement some unpacking scheme on 
L, but it can then forward the raw bytes of the request.   Note that if you 
are  using Proto, you may not be able to use all the generated stub code.  
 If C, L, and R all know about the same message types, then you can use the 
proto stubs.

Note also that the headers will need to be captured from C->L, and the 
response trailers from R->L.  If you wanted this to be "delayed" in any 
way, it may get more tricky.
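
The raw-bytes forwarding itself comes down to a marshaller that does no real 
(de)serialization. Below is a minimal standalone sketch of that idea — in 
grpc-java these two methods would back a MethodDescriptor.Marshaller<byte[]>; 
the class name is illustrative, not part of the library:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

// Identity marshaller: every message is treated as opaque bytes, so the
// relay L can forward C's request to R without knowing the proto schema.
// (Shown standalone; in grpc-java these methods would implement
// MethodDescriptor.Marshaller<byte[]>.)
public final class RawBytesMarshaller {

    // "Serialize": the message already is bytes, just wrap it in a stream.
    public InputStream stream(byte[] value) {
        return new ByteArrayInputStream(value);
    }

    // "Deserialize": drain the wire bytes back into a byte array.
    public byte[] parse(InputStream stream) {
        try {
            return stream.readAllBytes();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

L can register a generic method using this marshaller, then replay the parsed 
bytes on a new call toward R, copying headers and trailers across as described.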

On Sunday, January 13, 2019 at 12:52:29 AM UTC-8, dan.b...@huawei.com wrote:
>
> Consider a grpc client C, and servers L & R (local & remote). 
> Servers L & R are connected. Client C can reach only server L.
>
> Client C needs to send a request to server R. 
>
> Can I create a "relay_request" message that server L will support, where 
> the body is the request to send to server R? (also dealing with relaying 
> the reply)
>
> Is there anything better I can do for such a scenario?
>
> Thanks,
> Dan
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/fc766ee0-a9b7-46f2-bbd5-024c1fba3e1a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: gRPC inproc transport

2019-01-03 Thread 'Carl Mastrangelo' via grpc.io
I am the author of the blog post.  The pictures included are an 
approximation of the truth, which would otherwise be very complicated.  I 
tried to convey as much information as possible without being too 
inaccurate, but as you noticed it isn't right for C# (or any wrapped 
language except for C++).

On Wednesday, January 2, 2019 at 12:35:23 AM UTC-8, Jan Tattermusch wrote:
>
> Hey, the API for using inproc channels is currently not exposed in C#.
>
> I filed https://github.com/grpc/grpc.github.io/issues/803 to point out 
> this inaccuracy.
>
> On Saturday, December 22, 2018 at 5:46:21 AM UTC+1, vadim@gmail.com 
> wrote:
>>
>> Hello,
>>
>> This blog post https://grpc.io/blog/grpc-stacks mentions that it's 
>> possible to use In-Process transport with C# library. What's the right 
>> usage of this transport from wrapping languages? Is this transport built-in 
>> or is it necessary to recompile the library to enable it? I couldn't find 
>> examples.
>>
>> Thanks in advance.
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/4e8db541-9ccd-4682-8783-d10f1b0ceb1d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: What is the maximum message size

2018-12-17 Thread 'Carl Mastrangelo' via grpc.io
The largest I've seen is about 40M, and it was due to extremely complex 
proto files, rather than large byte arrays.  The large proto had to be 
updated transactionally, so it couldn't be split into smaller parts. 
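
For buffers in the 200M range, a common pattern is to avoid one giant unary 
message and instead stream fixed-size chunks (gRPC's default max inbound 
message size is 4 MiB, though it can be raised on the channel and server 
builders). A rough sketch of the client-side splitting — the class name and 
chunk size here are illustrative:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Split a large payload into fixed-size chunks so each streamed message
// stays under the configured message-size limit.
public final class Chunker {

    public static List<byte[]> split(byte[] payload, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < payload.length; off += chunkSize) {
            int end = Math.min(off + chunkSize, payload.length);
            chunks.add(Arrays.copyOfRange(payload, off, end));
        }
        return chunks;
    }
}
```

Each chunk becomes one message on a client-streaming RPC and the server 
reassembles; the alternative is raising the message-size limits on both sides, 
at the cost of larger buffers.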

On Thursday, December 13, 2018 at 10:14:06 AM UTC-8, Oregonduckman wrote:
>
> I have read several posts about the maximum size gRPC will transport and 
> posts range from 4M to 2G. I am interested to know what others are doing to 
> transport large buffers on the order of 200M using gRPC.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/b3c39e97-60bc-4c1e-a9ff-be2d290e920e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: The bidirectional stub life cycle

2018-12-17 Thread 'Carl Mastrangelo' via grpc.io
Responses inline

On Thursday, December 13, 2018 at 7:06:13 AM UTC-8, for.sh...@gmail.com 
wrote:
>
> Hi All
>
> I have a question about the bidirectional stub. I'm using gRPC-java 1.16.1.
>
> If the server side calls onCompleted or onError, does it mean the stub has 
> already been closed, or just the server side?
>

From the point of view of the server, it is closed.  There is a delay 
propagating this to the client, but it is no longer usable by the server.  
 

> This leads to another question. Once the server has invoked onError, does the 
> client side need to call onComplete/onError to close its stream?
>

No, the client will tear down the stream.  The reason is that the server 
sends the "Status" in the trailing Metadata, which terminates the RPC.
 

>
> I wrote a simple echo service to verify this question.
> At first, the server invoked onError to tell the client. The client then 
> invoked onCompleted to tell the server that the client stream was completed.
> However I found that the onCompleted method in the server observer was never 
> invoked. So is it indispensable? Does it cause any connection leak?
>

onCompleted (from the server's point of view) means "I'm done with this, 
please clean things up".  There may have been messages en route from the 
client to the server when the server called onCompleted.  The server has 
already indicated that it doesn't want anything more to do with that RPC.  
 

>
> I'm confused by the gRPC guideline concepts.
>
> RPC termination
>
> In gRPC, both the client and server make independent and local 
> determinations of the success of the call, and their conclusions may not 
> match. This means that, for example, you could have an RPC that finishes 
> successfully on the server side (“I have sent all my responses!”) but fails 
> on the client side (“The responses arrived after my deadline!”). It’s also 
> possible for a server to decide to complete before a client has sent all 
> its requests.
>

Consider an RPC with a deadline of 5 seconds, and a network delay of 1 
second.  The following timeline explains how this can happen:

t0.0s: Client starts an RPC with deadline 5 seconds
t1.0s: Server Receives RPC, sees a deadline of 5 seconds
t4.9s: Server finishes the RPC after 3.9s and sends the response, thinking it 
has succeeded
t5.0s: Client deadline is reached, the RPC is cancelled.
t5.9s: Client gets the server message, for an already failed RPC.  This is 
discarded.
t6.0s: Server gets notice of the Client cancellation, for an already 
successful RPC.  This is discarded.


Both endpoints have to discard any extra messages and headers after they 
complete the RPC, or else they would have to keep RPC state around in 
memory forever.  Suppose the server never responded at all.  The client 
would have to hold on to the status of the RPC forever, not knowing that 
the server is gone.   

 

>
> Thanks
>
> Shin
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/94c0f9bd-26c4-4486-9d62-4e89e11ad77d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: Measuring latency using ClientInterceptor/ForwardingClientCallListner

2018-12-05 Thread 'Carl Mastrangelo' via grpc.io
It is not correct.   The start time needs to be set when ClientCall.start() 
is invoked, rather than when the listener is constructed.  Aside from that 
it looks okay. 
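
Abstracted away from the gRPC types, the fix looks like this: capture the 
timestamp when start() runs, not when the listener object is built, since the 
object may be constructed well before the call actually begins. The class 
below is a toy model, not the real gRPC API:

```java
// Corrected timing, abstracted from the gRPC types: the clock starts when
// start() runs (corresponding to ClientCall.start()), not at construction.
final class CallTimer {
    private long startNanos;

    void start() {                     // corresponds to ClientCall.start()
        startNanos = System.nanoTime();
    }

    long elapsedNanosAtClose() {       // corresponds to Listener.onClose()
        return System.nanoTime() - startNanos;
    }
}
```

In the Scala interceptor above, that means moving the `startTime = 
System.nanoTime` assignment out of the listener's constructor and into an 
override of the client call's start().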

On Tuesday, December 4, 2018 at 2:26:22 PM UTC-8, rahul.n.m...@verizon.com 
wrote:
>
> Hi
>
> I am working on quick solution to measure grpc call latency with more 
> precision. I wrote sample ClientInterceptor that does the same. Below code 
> is in Scala but I believe it is readable and close enough to Java. 
>
> Below I am recording the start time when the ClientCallListener is initialized, 
> and then measuring the end time when onClose is called. I just want to make 
> sure this is the correct way to measure latency.  Let me know if you have any 
> questions.
>
>
> class ClientPerformanceCallListner[S](val delegate1: ClientCall.Listener[S]) 
> extends ForwardingClientCallListener[S] {
>
>   var startTime = System.nanoTime
>
>   override def delegate(): ClientCall.Listener[S] = delegate1
>
>   override def onMessage(message: S): Unit = {
>
> super.onMessage(message)
>   }
>
>   override def onClose(status: Status, trailers: Metadata): Unit = {
> val elapsedTime = TimeUnit.NANOSECONDS.toMicros(System.nanoTime - 
> startTime)
> println(s"elapsed time:: ${elapsedTime}µs")//println("closing connection")
> super.onClose(status, trailers)
>   }
>
> }
>
>
>
>
> Thanks
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/c4ce349f-1dcd-4ce4-8779-c9544aa2cb65%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] Re: gRPC over the internet at scale

2018-12-03 Thread 'Carl Mastrangelo' via grpc.io
I can't go into details about the apps, sorry.

On Monday, December 3, 2018 at 7:52:46 AM UTC-8, Cyrus Katrak wrote:
>
> Thanks Carl, knowing that Google is deploying gRPC on mobile clients 
> certainly inspires confidence in this use case. Can you share more details, 
> like which apps?
>
> Yes, our concern with network middle boxes is primarily around their lack 
> of support / active blocking of HTTP2 upgrades.
>
> On Thu, Nov 29, 2018 at 1:10 PM 'Carl Mastrangelo' via grpc.io <
> grp...@googlegroups.com > wrote:
>
>> I can't speak for most middle boxes, but at least nginx does have pretty 
>> good http2 support.   I believe that is pretty common.
>>
>> The main issue I see with middle boxes is not supporting TLS ALPN (which 
>> HTTP/2 depends on).   This is common in packet inspecting firewalls, which 
>> lag in functionality.
>>
>> That said, Google is gainfully using gRPC in some flagship Android 
>> applications, so it definitely has some experience in exotic environments.
>>
>>
>>
>> On Monday, November 26, 2018 at 10:25:01 AM UTC-8, cka...@slack-corp.com 
>> wrote:
>>>
>>> I'm evaluating use of gRPC between mobile clients and servers, and was 
>>> curious if anyone has experience using gRPC in a production setting, over 
>>> the internet, at scale. I'm generally curious about the use cases, learning 
>>> and outcomes of switching to gRPC. A particular area of concern we have is 
>>> the reliance on HTTP2 and lack of automated fallback to HTTP1. Even if we 
>>> get all our infrastructure to support terminating HTTP2-gRPC, we are 
>>> concerned that network middle boxes may interfere with, or block, gRPC 
>>> connections.
>>>
>>> Thanks,
>>> Cy
>>>
>> -- 
>> You received this message because you are subscribed to a topic in the 
>> Google Groups "grpc.io" group.
>> To unsubscribe from this topic, visit 
>> https://groups.google.com/d/topic/grpc-io/zhhPT2ZwrUA/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to 
>> grpc-io+u...@googlegroups.com .
>> To post to this group, send email to grp...@googlegroups.com 
>> .
>> Visit this group at https://groups.google.com/group/grpc-io.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/grpc-io/be58d750-2a40-4835-b497-81efd8a67a4a%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/grpc-io/be58d750-2a40-4835-b497-81efd8a67a4a%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/5b2d0c12-93f4-43bb-b472-ce36e998d832%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: grpc connections

2018-12-03 Thread 'Carl Mastrangelo' via grpc.io
In your case, 1 Channel, 1 connection.  Since there is one single IP 
address, all traffic will be routed to it.  (if you had 4 IP addresses, it 
would be 1 channel, 4 connections).   


I'm not sure what to say about the usage of Netty.  The connections are 
actually synchronous, but use non-blocking IO.  When they go bad they 
notify the channel, which notifies the client side load balancer.


On Sunday, December 2, 2018 at 4:41:22 AM UTC-8, mailar...@gmail.com wrote:
>
>
>
> On Sunday, December 2, 2018 at 6:04:02 PM UTC+5:30, mailar...@gmail.com 
> wrote:
>>
>>
>> Thanks Carl, appreciate your detailed response, helps clear my confusions.
>>
>> Regarding Load balancer, yes, you are right, i was talking about server 
>> side LB like a LinkerD or Ngnix.
>>
>> Actually this is my setup 
>>
>   grpc client --> Virtual IP --> Load balancer 1 --> grpc server 1, grpc server 2
>                              --> Load balancer 2 --> grpc server 3, grpc server 4
>
>
>Based on the setup above, assuming everyone is on "round robin", 
> will there be 4 connections per channel, or 1 channel used across multiple 
> grpc servers?
> Internally I see that netty is used, and async stubs, which implies async 
> connections. It adds to the confusion: how are async connections managed? 
> Do they follow the same pattern as synchronous ones? 
>
> Thanks again !
>
> AK
>
>  
>
>>  
>>
>>
>>
>>
>> On Saturday, December 1, 2018 at 12:40:41 AM UTC+5:30, Carl Mastrangelo 
>> wrote:
>>>
>>> gRPC uses a "channel" rather than a "connection" as the surface level 
>>> API.   A channel is a managed set of connections to your server(s).  The 
>>> number of connections could drop down to 0 if they are idle, or be as large 
>>> as every server returned by your DNS entries.   gRPC has two pluggable 
>>> components called a "name resolver" and a "load balancer" which both run 
>>> inside of your client.   The name resolver (typically DNS), converts the 
>>> "target" string you connect to, and turns it into a set of addresses.  The 
>>> load balancer is notified of the addresses, and decides which ones to 
>>> create connections to, and which ones receive your RPCs.   By default, we 
>>> use a "pick first" load balancer which only makes one connection.
>>>
>>> If the connection between a client and a particular server breaks, the 
>>> connection is removed from the set, and if another connection is available, 
>>> gRPC will fire RPCs on the healthy one.  If not, gRPC will attempt to 
>>> establish a new connection, up until the point that your RPC is 
>>> canceled or its deadline is exceeded.
>>>
>>> The word load balancer is overloaded here, because client side load 
>>> balancing is different than server side.   I think you are talking about 
>>> server side.   You will need to modify your client side load balancer to 
>>> "round robin", which will establish connections to every backend server you 
>>> have.   When servers terminate, or are added to the pool of backends, it's 
>>> up to the client side load balancer / name resolver to notice these 
>>> differences and adjust the connections.  gRPC does this by default (via the 
>>> connections actually breaking), but it's faster if your name resolver 
>>> (again, typically DNS) notices the server set has changed. 
>>>
>>> On Friday, November 30, 2018 at 7:20:27 AM UTC-8, fairly accurate wrote:


 Hello All,
   I'm a newbie grpc user. Actually, i'm using a product that is built 
 over grpc.
 I have read stuff about grpc, pardon my ignorance, some basic questions 
 on grpc connections.

 1. I understand it is 1 connection per client server pair. Does this 
 mean a connection is established between client and server, and stays 
 alive 
 until one of them ceases to exist ?
 Also, if the connection drops, a new connection is initiated ?

 2. I have a load balancer placed between the client and server. The LB 
 has 2 servers. In this case is 1 connection to each server or just one to 
 LB and the load is balanced between the two servers, or 2 connections ?

 Appreciate your response.

 AK

>>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/d19f6e08-64d6-4100-ac2d-754414a46582%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: Measuring gRPC-Java Unary Client Requests Latency From Interceptor

2018-11-30 Thread 'Carl Mastrangelo' via grpc.io
One of the JVM contributors, Aleksey Shipilёv, has a great blog post on 
nanoTime.  I highly recommend his short post about 
it: https://shipilev.net/blog/2014/nanotrusting-nanotime/



On Friday, November 30, 2018 at 8:01:53 AM UTC-8, Yee-Ning Cheng wrote:
>
> Ah you were right, user error.. wasn't counting the timings when there was 
> an exception thrown.
>
> Yeah I've been told nanoTime() is better for measuring elapsed time, but 
> actually read somewhere it has a bigger performance hit.
>
> Thanks!
>
> On Thursday, November 29, 2018 at 5:34:45 PM UTC-5, Carl Mastrangelo wrote:
>>
>> System.currentTimeMillis() is not ideal for measuring elapsed time.  It's 
>> meant for measuring the current time.  Instead, try doing:
>>
>>
>> long startNanos = System.nanoTime();
>>
>> // ... Do the RPC
>>
>> long stopNanos = System.nanoTime();
>>
>> tick("GrpcClient.Get", TimeUnit.NANOSECONDS.toMillis(stopNanos - 
>> startNanos));
>>
>>
>> As for the discrepancy, are you sure you don't have any exceptions being 
>> thrown?  If the call failed the time would not be recorded.  This is 
>> important for deadline exceeded calls, because they would make the average 
>> latency go way up.
>>
>>
>>
>>
>> On Thursday, November 29, 2018 at 1:24:20 PM UTC-8, Yee-Ning Cheng wrote:
>>>
>>> I also have a metric that surrounds the actual blocking call (I am using 
>>> the blocking stub in this case).
>>>
>>>
>>> Long time = System.currentTimeMillis();
>>> Userprofile.ReadResponse resp = blockingStub
>>> .withDeadlineAfter(timeout, TimeUnit.MILLISECONDS)
>>> .get(req.build());
>>> tick("GrpcClient.Get", System.currentTimeMillis() - time);
>>>
>>>
>>>
>>> The issue is that this metric (the one surrounding the blocking call) is 
>>> reporting a much lower latency (~100ms vs ~400ms).  Why is there such a 
>>> discrepancy?  And which one is correct?
>>>
>>>
>>> I will look into the ClientStreamTracer to see what's there as well. Thanks!
>>>
>>>
>>> On Thursday, November 29, 2018 at 3:57:31 PM UTC-5, Carl Mastrangelo 
>>> wrote:

 That is one way.  For more precise (but about as accurate) numbers, 
 consider using a ClientStreamTracer, which you can set on the 
 ManagedChannelBuilder.  That has more fine-grained events about an RPC. 

 On Wednesday, November 28, 2018 at 1:55:12 PM UTC-8, Yee-Ning Cheng 
 wrote:
>
> I am trying to measure gRPC unary requests from a client.
>
> I have implemented an interceptor very similar to
>
>
> https://github.com/grpc-ecosystem/java-grpc-prometheus/blob/master/src/main/java/me/dinowernli/grpc/prometheus/MonitoringClientCallListener.java
>
> I also have a metric surrounding the client call and this time is much 
> lower than the time measured from the interceptor.
>
> Is the above interceptor implementation the correct way to measure 
> each unary request from the client?
>
> Thanks
>


-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/9b35f1bb-253d-4b0f-a155-0e524fde7050%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: grpc connections

2018-11-30 Thread 'Carl Mastrangelo' via grpc.io
gRPC uses a "channel" rather than a "connection" as the surface level API.  
 A channel is a managed set of connections to your server(s).  The number 
of connections could drop down to 0 if they are idle, or be as large as 
every server returned by your DNS entries.   gRPC has two pluggable 
components called a "name resolver" and a "load balancer" which both run 
inside of your client.   The name resolver (typically DNS), converts the 
"target" string you connect to, and turns it into a set of addresses.  The 
load balancer is notified of the addresses, and decides which ones to 
create connections to, and which ones receive your RPCs.   By default, we 
use a "pick first" load balancer which only makes one connection.

If the connection between a client and a particular server breaks, the 
connection is removed from the set, and if another connection is available, 
gRPC will fire RPCs on the healthy one.  If not, gRPC will attempt to 
establish a new connection, up until the point that your RPC is 
canceled or its deadline is exceeded.

The word load balancer is overloaded here, because client side load 
balancing is different than server side.   I think you are talking about 
server side.   You will need to modify your client side load balancer to 
"round robin", which will establish connections to every backend server you 
have.   When servers terminate, or are added to the pool of backends, it's 
up to the client side load balancer / name resolver to notice these 
differences and adjust the connections.  gRPC does this by default (via the 
connections actually breaking), but it's faster if your name resolver 
(again, typically DNS) notices the server set has changed. 
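
As a toy model of the round-robin piece (the names here are illustrative, not 
the real gRPC API): the name resolver yields a list of addresses, and the 
policy hands each RPC the next address in turn:

```java
import java.util.List;

// Toy round-robin picker over the resolver's address list.  gRPC's real
// "round_robin" policy additionally tracks per-address connectivity state
// and skips broken connections.
final class RoundRobinPicker {
    private final List<String> addresses;
    private int next = 0;

    RoundRobinPicker(List<String> addresses) {
        this.addresses = addresses;
    }

    // Each RPC gets the next address, cycling through the whole list.
    String pick() {
        String addr = addresses.get(next);
        next = (next + 1) % addresses.size();
        return addr;
    }
}
```

"pick_first", by contrast, would always return the first working address, 
which is why it produces a single connection.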

On Friday, November 30, 2018 at 7:20:27 AM UTC-8, fairly accurate wrote:
>
>
> Hello All,
>   I'm a newbie grpc user. Actually, i'm using a product that is built over 
> grpc.
> I have read stuff about grpc, pardon my ignorance, some basic questions on 
> grpc connections.
>
> 1. I understand it is 1 connection per client server pair. Does this mean 
> a connection is established between client and server, and stays alive 
> until one of them ceases to exist ?
> Also, if the connection drops, a new connection is initiated ?
>
> 2. I have a load balancer placed between the client and server. The LB has 
> 2 servers. In this case is 1 connection to each server or just one to LB 
> and the load is balanced between the two servers, or 2 connections ?
>
> Appreciate your response.
>
> AK
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/18257045-84bf-4c07-b8ab-db8b5caee8d1%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: Repeated blog article rss feed

2018-11-30 Thread 'Carl Mastrangelo' via grpc.io
This has been fixed, and unfortunately caused the URL to change.  It's now 
at https://grpc.io/blog/grpc_on_http2

On Thursday, November 29, 2018 at 5:39:03 PM UTC-8, alan...@gmail.com wrote:
>
> Thanks for your suggestion; issue filed.
>
> On Friday, November 30, 2018 at 3:08:05 AM UTC+8, Carl Mastrangelo wrote:
>>
>> I believe the RSS feed is from github.io, so you may need to file a bug 
>> there.  I have emailed them in the past (their email is on the GitHub 
>> contact page), and they have been helpful to me in the past. 
>>
>> On Tuesday, November 27, 2018 at 5:46:17 PM UTC-8, alan...@gmail.com 
>> wrote:
>>>
>>> Hi there,
>>>
>>> I subscribe the blog rss feed, there is a strange phenomenon since two 
>>> monthg ago, that is, an article rss feed repeats in random period, its 
>>> title is "In a previous article , we explored how HTTP/2 dramatically 
>>> increases network efficiency and enables real-time communic...", and links 
>>> to https://grpc.io//2018/11/27/2018-08-20-grpc-on-http2.html
>>>
>>> Anyone can fix that? Thanks.
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/bd78de1e-0946-4e81-9be6-64f7c601ebf3%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: Measuring gRPC-Java Unary Client Requests Latency From Interceptor

2018-11-29 Thread 'Carl Mastrangelo' via grpc.io


System.currentTimeMillis() is not ideal for measuring elapsed time.  It's meant 
for measuring the current time.  Instead, try doing:


long startNanos = System.nanoTime();

// ... Do the RPC

long stopNanos = System.nanoTime();

tick("GrpcClient.Get", TimeUnit.NANOSECONDS.toMillis(stopNanos - startNanos));


As for the discrepancy, are you sure you don't have any exceptions being 
thrown?  If the call failed the time would not be recorded.  This is important 
for deadline exceeded calls, because they would make the average latency go way 
up.




On Thursday, November 29, 2018 at 1:24:20 PM UTC-8, Yee-Ning Cheng wrote:
>
> I also have a metric that surrounds the actual blocking call (I am using the 
> blocking stub in this case).
>
>
> Long time = System.currentTimeMillis();
> Userprofile.ReadResponse resp = blockingStub
> .withDeadlineAfter(timeout, TimeUnit.MILLISECONDS)
> .get(req.build());
> tick("GrpcClient.Get", System.currentTimeMillis() - time);
>
>
>
> The issue is that this metric (the one surrounding the blocking call) is 
> reporting a much lower latency (~100ms vs ~400ms).  Why is there such a 
> discrepancy?  And which one is correct?
>
>
> I will look into the ClientStreamTracer to see what's there as well. Thanks!
>
>
> On Thursday, November 29, 2018 at 3:57:31 PM UTC-5, Carl Mastrangelo wrote:
>>
>> That is one way.  For more precise (but about as accurate) numbers, 
>> consider using a ClientStreamTracer, which you can set on the 
>> ManagedChannelBuilder.  That has more fine-grained events about an RPC. 
>>
>> On Wednesday, November 28, 2018 at 1:55:12 PM UTC-8, Yee-Ning Cheng wrote:
>>>
>>> I am trying to measure gRPC unary requests from a client.
>>>
>>> I have implemented an interceptor very similar to
>>>
>>>
>>> https://github.com/grpc-ecosystem/java-grpc-prometheus/blob/master/src/main/java/me/dinowernli/grpc/prometheus/MonitoringClientCallListener.java
>>>
>>> I also have a metric surrounding the client call and this time is much 
>>> lower than the time measured from the interceptor.
>>>
>>> Is the above interceptor implementation the correct way to measure each 
>>> unary request from the client?
>>>
>>> Thanks
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/1edc0fa9-05a1-4e4f-84cd-e07e59769a0c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: Fail over takes too long because of TCP retransmission

2018-11-29 Thread 'Carl Mastrangelo' via grpc.io
The main way we solve this is by enabling client side keep-alives, which 
IIRC run once every 45 seconds if there are no active RPCs.  These are 
implemented in gRPC as HTTP/2 Ping frames.   I can't say I know where this 
is for Go, but in Java this is an actively used feature.

On Monday, November 26, 2018 at 11:26:39 AM UTC-8, John Shahid wrote:
>
>
> We ended up adding the following to `Dial': 
>
> grpc.WithKeepaliveParams(keepalive.ClientParameters{ 
> Time: 10 * time.Second, 
> }) 
>
> This required bumping grpc to a commit that included the fix in 
> https://github.com/grpc/grpc-go/pull/2307 which sets the 
> TCP_USER_TIMEOUT socket option on Linux.  On a side note, this issue 
> doesn't affect windows clients.  It looks like by default windows 
> retransmissions are much lower than on GNU/Linux. 
>
>
> John Shahid > writes: 
>
> > Hi all, 
> > 
> > We just ran into an interesting issue.  We are using grpc-go for both 
> > the client and server implementation.  There are two instance of the 
> > server deployed for HA.  Clients use dns name lookup and usually are 
> > split evenly between the two servers. 
> > 
> > One of the servers had a network issue and wasn't reachable (we were 
> > able to simulate this situation by adding an iptables rule to drop 
> > packets destined to one of the two servers).  The DNS server immediately 
> > detect that one of the servers isn't reachable and removes it from the 
> > pool.  What we observed is that clients connected to that instance will 
> > keep getting "context deadline exceeded" errors for about 15 minutes. 
> > The tcpdump show multiple retransmission attempts.  The client will 
> > eventually (after ~15 minutes) reconnect to the healthy instance. 
> > 
> > Is there a way to speed up the fail over without changing the number of 
> > TCP retransmissions in `/proc/sys/net/ipv4/tcp_retries2' ? 
> > 
> > Thanks, 
> > 
> > JS 
>



[grpc-io] Re: Binlog as a gRPC service?

2018-11-29 Thread 'Carl Mastrangelo' via grpc.io
I'm not sure that there is a service in mind, but it would need some sort 
of special treatment if it did exist. (i.e. do binlog gRPCs themselves get 
logged?)

Also, what happens if the log service hangs, gets backed up, or is 
unreachable?  

On Wednesday, November 28, 2018 at 5:22:43 AM UTC-8, to...@spotify.com 
wrote:
>
>
> Is there a plan to support on-demand streaming of binary logs by providing 
> it as a gRPC service or maybe a reason not to support it?
>
> With this example service definition:   
>
> service BinaryLogz {
>   rpc GetBinaryLogs(BinaryLogRequest) returns (stream BinaryLogResponse);
> }
>
> we wouldn't require any additional logging infrastructure for the simple 
> case of taping into a single instance request/response flow.  
>
> As simple CLI could be use to stream the response to stdout or dump it to 
> a file to be replayed/inspected later:   
>
> gdump -d '{"log_filter": 
> "Foo/*,Foo/Bar{m:256}"}' grpc.binarylog.v1alpha.BinaryLogz/GetBinaryLogs -o 
> binlog.rio
>
> I understand that this might be complicated to implement for all languages 
> and might not make sense to have in core, but I wanted to to float the 
> idea.  
>



[grpc-io] Re: java LB round-robin has 30 minutes blank window before re-resolve

2018-11-29 Thread 'Carl Mastrangelo' via grpc.io
Responses inline

On Wednesday, November 28, 2018 at 2:23:13 PM UTC-8, eleano...@gmail.com 
wrote:
>
> Here is the test case:
>
> I have implemented my custom NameResolver, and using RoundRobinLoadBalancer 
> in managedChannelBuilder. 
>
> 1. initially has 4 instances running (serverA, serverB, serverC, serverD)
>
> 2. then kill 2 instances (serverC, serverD), then serverA and serverB 
> continues serving the request
>

Do you mean gracefully shut down, or just pull the plug?  gRPC has no way of 
knowing the latter case, which means you need to turn on keep-alives in the 
channel.
 

>
> 3. then create 2 more instances (serverE, serverF), only serverA and 
> serverB continues serving the request, since the NameResolver::refresh is 
> only triggered due to connection failures or GOAWAY signal.
>

Name resolvers are meant to be push based.   It is expected that some other 
service will notify your name resolver when new servers enter the pool.  
 DNS is pull based, so we implemented it as a timer-based refresh, but that 
isn't desirable.  If your custom resolver pulls, you'll have to 
use a timer like DNS does. 
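That pull-plus-timer approach can be sketched with a ScheduledExecutorService. The Listener interface and method names below are hypothetical stand-ins, not grpc-java's NameResolver API:

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PollingResolver {
    /** Hypothetical callback; grpc-java's real NameResolver.Listener differs. */
    public interface Listener {
        void onAddresses(List<String> addresses);
    }

    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();
    private volatile List<String> lastSeen = List.of();

    /** One pull: re-query the source and push only when the set changed. */
    public void refreshOnce(Listener listener) {
        List<String> fresh = lookup();
        if (!fresh.equals(lastSeen)) {
            lastSeen = fresh;
            listener.onAddresses(fresh);
        }
    }

    /** DNS-style behavior: repeat the pull on a fixed period. */
    public void start(Listener listener, long periodSeconds) {
        timer.scheduleAtFixedRate(() -> refreshOnce(listener),
                0, periodSeconds, TimeUnit.SECONDS);
    }

    /** Placeholder lookup; a real resolver would query DNS or a registry. */
    protected List<String> lookup() {
        return List.of("serverE:50051", "serverF:50051");
    }

    public void shutdown() {
        timer.shutdownNow();
    }
}
```

A push-based resolver would drop the timer entirely and call the listener from whatever service announces membership changes.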
 

>
> 4. then kill serverA and serverB, there is 30 minutes blank window, that 
> gRPC seems not doing anything, then after 30 minutes NameResolver::refresh 
> is triggered and the messages are served by serverE and serverF. (seems no 
> messaging loss).
>
> Can someone please suggest why there is a 30 minutes blank window, and is 
> there anyway we can configure it to be shorter?
>
> Thanks a lot!
>



[grpc-io] Re: gRPC over the internet at scale

2018-11-29 Thread 'Carl Mastrangelo' via grpc.io
I can't speak for most middle boxes, but at least nginx does have pretty 
good http2 support.   I believe that is pretty common.

The main issue I see with middle boxes is not supporting TLS ALPN (which 
HTTP/2 depends on).   This is common in packet inspecting firewalls, which 
lag in functionality.

That said, Google is gainfully using gRPC in some flagship Android 
applications, so it definitely has some experience in exotic environments.



On Monday, November 26, 2018 at 10:25:01 AM UTC-8, cka...@slack-corp.com 
wrote:
>
> I'm evaluating use of gRPC between mobile clients and servers, and was 
> curious if anyone has experience using gRPC in a production setting, over 
> the internet, at scale. I'm generally curious about the use cases, learning 
> and outcomes of switching to gRPC. A particular area of concern we have is 
> the reliance on HTTP2 and lack of automated fallback to HTTP1. Even if we 
> get all our infrastructure to support terminating HTTP2-gRPC, we are 
> concerned that network middle boxes may interfere with, or block, gRPC 
> connections.
>
> Thanks,
> Cy
>



[grpc-io] Re: [java] - gRPC ThreadPoolExecutor and bounded queue

2018-11-29 Thread 'Carl Mastrangelo' via grpc.io
I would modify the counters before calling super (in case the super call 
throws an exception).  Also, I (personally) would fail the RPCs early rather 
than resizing the pool.   You can invoke call.close(Status.CANCELLED, new 
Metadata()), and then return a no-op listener, instead of invoking methods on next.
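The counting half of that suggestion, independent of any gRPC types, is an atomic reserve/release around a limit: reserve a slot before starting work, and refuse (rather than queue) when the limit is hit. The class name and limit below are illustrative:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CallLimiter {
    private final int maxActive;
    private final AtomicInteger active = new AtomicInteger();

    public CallLimiter(int maxActive) {
        this.maxActive = maxActive;
    }

    /** Try to reserve a slot; on false, fail the RPC immediately. */
    public boolean tryBegin() {
        while (true) {
            int cur = active.get();
            if (cur >= maxActive) {
                return false; // over the limit: reject instead of queueing
            }
            if (active.compareAndSet(cur, cur + 1)) {
                return true;  // slot reserved
            }
            // CAS lost to a concurrent caller; re-read and retry.
        }
    }

    /** Release the slot from onComplete()/onCancel(). */
    public void end() {
        active.decrementAndGet();
    }

    public int active() {
        return active.get();
    }
}
```

In an interceptor, a false from tryBegin() is where you would close the call with a status like RESOURCE_EXHAUSTED and return the no-op listener; end() must be called from both terminal callbacks so slots are never leaked.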

On Thursday, November 29, 2018 at 12:56:00 PM UTC-8, 
in...@olivierboucher.com wrote:
>
> Thank you Carl.
>
> Is something like this what you meant? I striped out the resizing logic... 
> activeCalls is an AtomicInteger
>
> @Override
> public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
> ServerCall<ReqT, RespT> call, Metadata headers,
> ServerCallHandler<ReqT, RespT> next) {
> resizePool(activeCalls.incrementAndGet());
> ServerCall.Listener<ReqT> delegate = next.startCall(call, headers);
> return new 
> ForwardingServerCallListener.SimpleForwardingServerCallListener<ReqT>(delegate)
> {
> @Override
> public void onCancel() {
> super.onCancel();
> activeCalls.decrementAndGet();
> }
>
> @Override
> public void onComplete() {
> super.onComplete();
> activeCalls.decrementAndGet();
> }
> };
> }
>
>
>
>
>
> On Thursday, 29 November 2018 13:57:36 UTC-5, Carl Mastrangelo wrote:
>>
>> You *really* don't want to limit the queue size.  The queue is not per 
>> RPC, but per RPC callback event.   If the enqueue'd callback (like headers 
>> received, data received, cancellation, etc.) gets dropped, the RPC will be 
>> in a zombie state and never able to finish or die.  Additionally, if you 
>> block on attempting to add callbacks (instead of just failing them), you 
>> run the risk of deadlocking, because the net thread will be blocked on the 
>> application thread.
>>
>> The BlockingQueue in the executor is not a good fit for async callbacks.  
>>  It would be much better to install an Interceptor that keeps track of the 
>> number of active calls, and simply fails RPCs (instead of callbacks) if the 
>> number gets too high.   
>>
>> On Thursday, November 29, 2018 at 8:43:04 AM UTC-8, 
>> in...@olivierboucher.com wrote:
>>>
>>>
>>> Hi everyone,
>>>
>>> Our gRPC server runs on a ThreadPoolExecutor with a corePoolSize of 4 
>>> and a maximumPoolSize of 16. In order to have the pool size increase, we 
>>> provide a BlockingQueue with a bounded size of 20. 
>>>
>>> Sometimes short bursts happen and we're perfectly fine with dropping 
>>> requests at this moment, we provided a custom RejectionExecutionHandler 
>>> that increases a counter we are monitoring. However, this rejection handler 
>>> is not aware of the request itself, it only sees a Runnable.
>>>
>>> My question is: are the requests automatically canceled if they could 
>>> not get queued? Do I need to cancel them manually somehow?
>>>
>>> Thanks
>>>
>>



[grpc-io] Re: Measuring gRPC-Java Unary Client Requests Latency From Interceptor

2018-11-29 Thread 'Carl Mastrangelo' via grpc.io
That is one way.  For more precise (but about as accurate) numbers, 
consider using a ClientStreamTracer, which you can set on the 
ManagedChannelBuilder.  That has more fine-grained events about an RPC. 

On Wednesday, November 28, 2018 at 1:55:12 PM UTC-8, Yee-Ning Cheng wrote:
>
> I am trying to measure gRPC unary requests from a client.
>
> I have implemented an interceptor very similar to
>
>
> https://github.com/grpc-ecosystem/java-grpc-prometheus/blob/master/src/main/java/me/dinowernli/grpc/prometheus/MonitoringClientCallListener.java
>
> I also have a metric surrounding the client call and this time is much 
> lower than the time measured from the interceptor.
>
> Is the above interceptor implementation the correct way to measure each 
> unary request from the client?
>
> Thanks
>



Re: [grpc-io] Thread configuration for low latency in high throughput scenario

2018-11-29 Thread 'Carl Mastrangelo' via grpc.io
+1 To what Robert said.   You have a couple options here:   

* Use ForkJoinPool, which scales much more gracefully under load.
* If your RPC logic is pretty simple, and does not block (like, ever), 
you can use directExecutor() on the server builder and run RPCs inline.  
This avoids the need for the executor, and pushes all work into the worker 
EventLoopGroup.
* Consider using an event loop and channel type that scale better under 
load.  I recommend EpollServerSocketChannel, and the corresponding event 
loop.
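The directExecutor() option in the second bullet is, at bottom, an executor that runs tasks inline on the calling thread. The sketch below contrasts inline execution with a thread hop; the thread-per-task POOLED executor is only a stand-in for a real pool such as ForkJoinPool:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executor;

public class ExecutorChoice {
    // A direct executor is nothing more than this: run the task inline.
    static final Executor DIRECT = Runnable::run;

    // Stand-in for a pooled executor: hop to another thread per task.
    static final Executor POOLED = task -> new Thread(task, "app-worker").start();

    /** Reports whether the task ran on the submitting thread or hopped. */
    static String runOn(Executor executor) {
        final String caller = Thread.currentThread().getName();
        final String[] ran = new String[1];
        CountDownLatch done = new CountDownLatch(1);
        executor.execute(() -> {
            ran[0] = Thread.currentThread().getName();
            done.countDown();
        });
        try {
            done.await();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return caller.equals(ran[0]) ? "inline" : "hop";
    }
}
```

This is also why directExecutor() is only safe for handlers that never block: an inline handler that stalls on a 200ms external call stalls the event loop with it, which matches the snowballing latency described in this thread.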

On Tuesday, November 27, 2018 at 11:30:02 AM UTC-8, robert engels wrote:
>
> If you get multiple requests that are external, and all of these take 
> 200ms, you are going to be blocked… if the requests are IO bound, then 4 is 
> too small for the pool, and by increasing the pool size if other requests 
> arrive that are not external they can be handled
>
> On Nov 27, 2018, at 1:19 PM, Hugo Migneron wrote:
>
> Hi,
>
> We run a high throughput gRPC server (in Java) with a target latency of 
> sub 15ms. A vast majority of the requests are executed within that 
> timeframe. However, a small percentage of the requests rely on an external 
> service that must be executed inline. It is generally fast (~20ms) but can 
> sometimes be slow (200+ ms). We do timeout these external service requests 
> at 200ms, but we noticed that when timeouts happen our event loop blocks, 
> the effect on latency snowballs, and we quickly start taking more than 2 
> seconds to process requests. Things remain that way for a few seconds until 
> the latency gets back on track again.
>
> We run on kubernetes and our docker container is provided with about 1.5 
> to 2 CPU. There are many replicas of the same server.
>
> Here's how we start the server : 
>
> ```
>
> LinkedBlockingQueue<Runnable> workerQueue = new LinkedBlockingQueue<>();
> EFThreadFactory threadFactory = new EFThreadFactory(); // Does nothing 
> fancy/special
> ExecutorService executorService = new ThreadPoolExecutor(4, 4, 0L, 
> TimeUnit.MILLISECONDS, workerQueue, threadFactory);
> Server server = NettyServerBuilder.forPort(port)
> .executor(executorService)
> .addService(new OurService())
> .build()
> .start();
>
> ```
>
> Does this configuration make sense given our goals and the situation we're 
> in ? If not, what would be the optimal configuration to avoid blocking the 
> event loop ?
>
> Would increasing `maximumPoolSize` of the executor be of any benefit 
> given the low amount of CPU each server gets ?
>
> If thread configuration / the posted code is not the issue here, any 
> pointers as to what I should look at / understand in order to solve the 
> problem ?
>
> Thank you !
>
>



[grpc-io] Re: Executing the client's send message in a separate thread?

2018-11-29 Thread 'Carl Mastrangelo' via grpc.io
Are you using Netty?  If so, you can set a direct executor (using 
MoreExecutors.directExecutor()).For the outbound path it will involve a 
thread hop (see WriteQueue), but for inbound it will not.   If you respond 
to RPCs directly on the thread that receives them, then there is no thread 
hop at all. 

On Tuesday, November 27, 2018 at 10:20:42 AM UTC-8, ted_g...@yahoo.com 
wrote:
>
>
> When making an async call from a Java client we are seeing that the 
> outgoing call happens on the originating thread.  This takes about 40 
> microseconds on our machine and we'd like the originating thread not to be 
> blocked for that time.  
>
> How do we configure gRPC such that the outgoing call is built and sent out 
> the socket on a worker thread?  We have tried various Executors with no 
> luck.
>



[grpc-io] Re: Repeated blog article rss feed

2018-11-29 Thread 'Carl Mastrangelo' via grpc.io
I believe the RSS feed is from github.io, so you may need to file a bug 
there.  I have emailed them in the past (their email is on the GitHub 
contact page), and they have been helpful. 

On Tuesday, November 27, 2018 at 5:46:17 PM UTC-8, alan...@gmail.com wrote:
>
> Hi there,
>
> I subscribe the blog rss feed, there is a strange phenomenon since two 
> monthg ago, that is, an article rss feed repeats in random period, its 
> title is "In a previous article , we explored how HTTP/2 dramatically 
> increases network efficiency and enables real-time communic...", and links 
> to https://grpc.io//2018/11/27/2018-08-20-grpc-on-http2.html
>
> Anyone can fix that? Thanks.
>



[grpc-io] Re: Recommended gRPC URI scheme

2018-11-29 Thread 'Carl Mastrangelo' via grpc.io
Hi Jakob!

gRPC has its own name resolution built into its own URLs, so it may get 
confusing.  For example, a dns based gRPC service would be 
dns:///localhost:9988 .   Note the actual address info is in the /path/ of 
the URI, rather than the host or authority.  For domain socket based 
connections, it's uds:///tmp/file.sock

If you plan on supporting the multiple name resolution schemes gRPC has, 
you would need to embed the full gRPC URL into the URL you planned on 
using.   Like grpc:///dns:///localhost:9988

As for the name, I would avoid including the "s" at the end.   Several 
plaintext protocols are actually secure (such as in memory, UDS, or via a 
Secure proxy). If you can limit the scope of supported protocols, you 
can probably do something like grpc+plaintext:///   or something along 
those lines, as I have seen that elsewhere.
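On the parsing side, java.net.URI already separates these pieces, and it shows why the note above matters: in a gRPC-style target such as dns:///localhost:9988 the address really does land in the path component, not the authority. A small sketch (the helper names are made up for illustration):

```java
import java.net.URI;

public class SchemeSelector {
    /** Protocol implied by the flag value's scheme, e.g. "grpc+plaintext" -> "grpc". */
    static String protocolOf(String flag) {
        URI uri = URI.create(flag);
        String scheme = uri.getScheme();
        if (scheme == null) {
            return "default"; // no scheme given; caller picks a default protocol
        }
        // "grpc+plaintext" style: protocol before '+', transport variant after.
        return scheme.split("\\+")[0];
    }

    /** For gRPC-style targets the address sits in the path, not the authority. */
    static String targetOf(String flag) {
        String path = URI.create(flag).getPath();
        return path.startsWith("/") ? path.substring(1) : path;
    }
}
```

RFC 3986 allows '+' in scheme names, so a scheme like grpc+plaintext parses without any custom handling.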


HTH,
Carl 

On Thursday, November 29, 2018 at 4:47:08 AM UTC-8, Jakob Buchgraber wrote:
>
> Hi,
>
> in our project we want to allow a user to specify a URI to a service 
> endpoint via a flag. We support multiple protocols
> for talking to service endpoints, including gRPC. The current plan is to 
> select the protocol to use based on the scheme
> component of the URI.
>
> Do you provide any guidance as to what name to use for the scheme? I was 
> thinking "grpc" and "grpcs" would be 
> reasonable choices.
>
> Thanks,
> Jakob
>



[grpc-io] Re: [java] - gRPC ThreadPoolExecutor and bounded queue

2018-11-29 Thread 'Carl Mastrangelo' via grpc.io
You *really* don't want to limit the queue size.  The queue is not per RPC, 
but per RPC callback event.   If the enqueue'd callback (like headers 
received, data received, cancellation, etc.) gets dropped, the RPC will be 
in a zombie state and never able to finish or die.  Additionally, if you 
block on attempting to add callbacks (instead of just failing them), you 
run the risk of deadlocking, because the net thread will be blocked on the 
application thread.

The BlockingQueue in the executor is not a good fit for async callbacks.  
 It would be much better to install an Interceptor that keeps track of the 
number of active calls, and simply fails RPCs (instead of callbacks) if the 
number gets too high.   

On Thursday, November 29, 2018 at 8:43:04 AM UTC-8, 
in...@olivierboucher.com wrote:
>
>
> Hi everyone,
>
> Our gRPC server runs on a ThreadPoolExecutor with a corePoolSize of 4 and 
> a maximumPoolSize of 16. In order to have the pool size increase, we 
> provide a BlockingQueue with a bounded size of 20. 
>
> Sometimes short bursts happen and we're perfectly fine with dropping 
> requests at this moment, we provided a custom RejectionExecutionHandler 
> that increases a counter we are monitoring. However, this rejection handler 
> is not aware of the request itself, it only sees a Runnable.
>
> My question is: are the requests automatically canceled if they could not 
> get queued? Do I need to cancel them manually somehow?
>
> Thanks
>



Re: [grpc-io] How to set source ip address in grpc client application

2018-11-15 Thread 'Carl Mastrangelo' via grpc.io
I believe the original question was for Python, not Java.   There 
currently isn't a way to get the currently bound address in Java *except* 
via the Channelz service.   The issue you found is a tracking issue for an 
experimental feature to customize connection setup.  The use case is 
narrowly scoped, so I don't think it's what you are looking for.  

On Wednesday, November 14, 2018 at 11:32:35 PM UTC-8, xing...@gmail.com 
wrote:
>
>
> found this, the feature seems under developing . 
> https://github.com/grpc/grpc-java/issues/4900
>
> On Thursday, November 15, 2018 at 2:58:16 PM UTC+8, xing...@gmail.com 
> wrote:
>>
>> Hi, 
>>
>> I am facing the same problem, does there any solutions?
>>
>>
>> Thanks 
>> xcc
>>
>> On Friday, February 23, 2018 at 2:13:29 PM UTC+8, dekum...@gmail.com 
>> wrote:
>>>
>>> Hi,
>>>
>>> Is there way in grpc to bind to source ip address. 
>>> In scenario of multiple physical ecmp interface to reach server it's 
>>> better to use loopback interface source ip.
>>>
>>> Thanks,
>>> Deepak
>>>
>>> On Wednesday, October 18, 2017 at 8:43:09 AM UTC-7, Nathaniel Manista 
>>> wrote:

On Wed, Oct 18, 2017 at 12:05 AM, GVA Rao wrote:

> I would like my grpc client application to carry specified source ip 
> in case client has multiple hops to reach grpc server.
> grpc insecure_channel rpc has only destination ip address i.e. server 
> address field but not client source ip field.
> Is there a way to set source ip address in grpc client application?  
> If not in grpc is there way we can set source in python application and 
> use insecure_channel 
> as is?
>

 This sounds like something that you would want to include in the 
 metadata you pass when invoking your RPCs.
 -Nathaniel

>>>



[grpc-io] Re: How to implement asynchronous rpc in grpc?

2018-11-15 Thread 'Carl Mastrangelo' via grpc.io
Future stub is only suitable for Unary RPCs.   If you look at how the stub 
library is implemented, BlockingStubs wrap the FutureStubs, which wrap the 
regular Stubs, which themselves wrap ClientCall and ServerCall.  All are 
layers on top of the other, and get more advanced the farther down the 
layers you go.  The higher up, the more simple it is to use.

On Wednesday, November 14, 2018 at 1:57:20 PM UTC-8, constan...@gmail.com 
wrote:
>
> Why not use FutureStub? It seems to return a ListenableFuture already 
> for the client to use? 
>
>
> On Wednesday, November 14, 2018 at 2:28:56 PM UTC-5, Carl Mastrangelo wrote:
>>
>> Yes.  It is still async.  
>>
>> Do be aware that flow control works slightly differently for RPCs that 
>> don't have "stream" on them (we call these "unary" RPCs).   This is not 
>> usually an issue unless you are sending very fast or lots of data (like 
>> sustained Gigabits  per second).  
>>
>> On Wednesday, November 14, 2018 at 9:41:45 AM UTC-8, qplc wrote:
>>>
>>> Thank you Carl for your response.
>>>
>>> What if I don't use prefix 'stream' in service definition, shall rpc 
>>> calls still be executed in asynchronous manner by implementing 
>>> TestServiceGrpc.TestServiceStub?
>>>
>>> Modified service def:
>>> service TestService {
>>>   rpc testRPCCall(Test) returns (Test) {}
>>> }
>>>
>>>
>>> On Monday, November 12, 2018 at 5:37:56 PM UTC+5:30, qplc wrote:

 Hi,

 I've implemented below service definition in my grpc server/client 
 application.

 service TestService {
   rpc testRPCCall(stream Test) returns (stream Test) {}
 }

 I found below stubs can be implemented on proto file compilation.
 TestServiceGrpc.TestServiceStub
 TestServiceGrpc.TestServiceBlockingStub
 TestServiceGrpc.TestServiceFutureStub
 TestServiceGrpc.TestServiceImplBase

 I want to adapt asynchronous behavior of rpc calls. But, I'm not sure 
 which one of above should be implemented. Is it mandatory to stream a rpc 
 call(stream Test) as mentioned in above service definition for 
 asynchronous 
 implementation?


 Thanks in advance.





[grpc-io] Re: How to implement asynchronous rpc in grpc?

2018-11-14 Thread 'Carl Mastrangelo' via grpc.io
Yes.  It is still async.  

Do be aware that flow control works slightly differently for RPCs that 
don't have "stream" on them (we call these "unary" RPCs).   This is not 
usually an issue unless you are sending very fast or lots of data (like 
sustained Gigabits  per second).  

On Wednesday, November 14, 2018 at 9:41:45 AM UTC-8, qplc wrote:
>
> Thank you Carl for your response.
>
> What if I don't use prefix 'stream' in service definition, shall rpc calls 
> still be executed in asynchronous manner by implementing 
> TestServiceGrpc.TestServiceStub?
>
> Modified service def:
> service TestService {
>   rpc testRPCCall(Test) returns (Test) {}
> }
>
>
> On Monday, November 12, 2018 at 5:37:56 PM UTC+5:30, qplc wrote:
>>
>> Hi,
>>
>> I've implemented below service definition in my grpc server/client 
>> application.
>>
>> service TestService {
>>   rpc testRPCCall(stream Test) returns (stream Test) {}
>> }
>>
>> I found below stubs can be implemented on proto file compilation.
>> TestServiceGrpc.TestServiceStub
>> TestServiceGrpc.TestServiceBlockingStub
>> TestServiceGrpc.TestServiceFutureStub
>> TestServiceGrpc.TestServiceImplBase
>>
>> I want to adapt asynchronous behavior of rpc calls. But, I'm not sure 
>> which one of above should be implemented. Is it mandatory to stream a rpc 
>> call(stream Test) as mentioned in above service definition for asynchronous 
>> implementation?
>>
>>
>> Thanks in advance.
>>
>>



[grpc-io] Re: How to implement asynchronous rpc in grpc?

2018-11-14 Thread 'Carl Mastrangelo' via grpc.io
TestServiceStub is the main asynchronous stub, and is built around the idea 
of the Observable / Observer pattern.  RxJava uses this pattern, for 
instance.   

One thing to note, if you don't like those stubs, you can call the 
ClientCall and ClientCall.Listener directly.   You can't use the generated 
stub library, but it does have async behavior.   (Personally, I use it, but 
copy-paste the MethodDescriptors from the generated code.) 
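Stripped of the gRPC types, the shape shared by the async stub and the raw ClientCall.Listener is plain observer callbacks: zero or more messages, then exactly one close. A synchronous stdlib sketch of that contract (names are illustrative; the real listener also has callbacks like onHeaders and onReady, and delivery happens off-thread):

```java
import java.util.function.Function;

public class AsyncCall<ReqT, RespT> {
    /** Mirrors the observer shape: message callbacks, then a single close. */
    public interface Listener<RespT> {
        void onMessage(RespT message);
        void onClose(String status);
    }

    private final Function<ReqT, RespT> handler;

    public AsyncCall(Function<ReqT, RespT> handler) {
        this.handler = handler;
    }

    /** Start the call: deliver the response, then close exactly once. */
    public void start(ReqT request, Listener<RespT> listener) {
        listener.onMessage(handler.apply(request));
        listener.onClose("OK");
    }
}
```

A unary RPC delivers one onMessage before onClose; a server-streaming RPC would deliver many, which is the only structural difference between the two on this interface.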

On Monday, November 12, 2018 at 4:07:56 AM UTC-8, qplc wrote:
>
> Hi,
>
> I've implemented below service definition in my grpc server/client 
> application.
>
> service TestService {
>   rpc testRPCCall(stream Test) returns (stream Test) {}
> }
>
> I found below stubs can be implemented on proto file compilation.
> TestServiceGrpc.TestServiceStub
> TestServiceGrpc.TestServiceBlockingStub
> TestServiceGrpc.TestServiceFutureStub
> TestServiceGrpc.TestServiceImplBase
>
> I want to adapt asynchronous behavior of rpc calls. But, I'm not sure 
> which one of above should be implemented. Is it mandatory to stream a rpc 
> call(stream Test) as mentioned in above service definition for asynchronous 
> implementation?
>
>
> Thanks in advance.
>
>



[grpc-io] Re: JSON/Http1.x for testing Java based gRPC services

2018-11-14 Thread 'Carl Mastrangelo' via grpc.io
Not a complete answer, but we typically use grpc_cli (located somewhere in 
the github.com/grpc/grpc repo, but I don't recall where), which allows you 
to poke at services.  The server needs to expose the reflection service, 
which Java exposes in the grpc-services maven library.   

As for plaintext, you can use the proto text format (in Java this class is 
called TextFormat).  I personally like the proto text format better (no 
trailing commas, repeated fields don't require list syntax, compilable).
If you want all your data to be passed as plaintext, rather than just for 
debugging, you can swap out the Marshaller to be any format.   I have a 
blog post and working example of how to use JSON in gRPC with no Proto 
dependency at all:   https://grpc.io/blog/grpc-with-json
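
To illustrate the Marshaller idea: gRPC only needs a way to turn a message 
into bytes and back, so any wire format works.  The interface below is a 
standalone stand-in mirroring the shape of gRPC's 
MethodDescriptor.Marshaller, with a trivial UTF-8 string codec where a real 
implementation would plug in a JSON (or any other) library:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

public class MarshallerSketch {
    // Stand-in mirroring io.grpc.MethodDescriptor.Marshaller<T>.
    interface Marshaller<T> {
        InputStream stream(T value);   // serialize a message to bytes
        T parse(InputStream in);       // deserialize bytes back to a message
    }

    // Trivial UTF-8 text codec; swap the bodies for a JSON library to get
    // JSON on the wire with no proto dependency at all.
    static final Marshaller<String> TEXT = new Marshaller<String>() {
        @Override public InputStream stream(String value) {
            return new ByteArrayInputStream(value.getBytes(StandardCharsets.UTF_8));
        }
        @Override public String parse(InputStream in) {
            try {
                return new String(in.readAllBytes(), StandardCharsets.UTF_8);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    };

    public static void main(String[] args) {
        String roundTripped = TEXT.parse(TEXT.stream("hello"));
        System.out.println(roundTripped); // hello
    }
}
```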

If you just want it for debugging, you'll have to use a tool that can 
decode it.   That said, with server reflection turned on, using a tool is 
not so bad.   (and you need a tool anyways to de-minify your JSON!).

HTH

Carl,

On Monday, November 12, 2018 at 12:14:45 AM UTC-8, regu...@gmail.com wrote:
>
> We are moving our services from REST JSON/H1 to gRPC. Teams are very 
> comfortable with tools like Postman and Swagger for integration testing for 
> two reasons:
>
>- No compilation needed to invoke a service
>- Text based data definition format i.e. JSON
>
> Our services are implemented using grpc-java and therefore am looking for 
> libraries/approaches to make this work on the JVM. Has anything been done 
> in Java similar to what is described here for go : 
> https://github.com/jnewmano/grpc-json-proxy?
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/63e8a0fe-0c44-4bc6-a11d-c5cbf66946f2%40googlegroups.com.


[grpc-io] Re: Future of github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc?

2018-11-14 Thread 'Carl Mastrangelo' via grpc.io
Maybe you can ping the author on GitHub?  I'm not sure myself, but you're 
right there should not be two copies.

On Sunday, November 11, 2018 at 3:21:35 AM UTC-8, Tom Wilkie wrote:
>
> Hello!
>
> (I'm not that familiar with the situation and couldn't find any previous 
> discussion of this, sorry if its already been covered.  This email is cross 
> posted to both the opentracing and grpc groups.)
>
> github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc seems to have been 
> forked to github.com/opentracing-contrib/go-grpc, and there now seems to 
> be more activity on the fork than on the original.  This is based on a very 
> small sample size though...
>
> Its confusing to users to have two copies of the code, so should be either 
> (a) recognise the fork as the now "official" home, and at least add 
> something to the README to direct users here or (b) perhaps invite the 
> maintainer of the fork to the original repo so we can move it along, and 
> close the fork?
>
> Either repo seems a reasonable home to me.  For reference, here is the 
> discussion that triggered this email: 
> https://github.com/cortexproject/cortex/pull/1113
>
> Thanks
>
> Tom
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/484175e7-2a49-4ef1-b04c-9123336efb85%40googlegroups.com.


[grpc-io] Re: nettyserver await termination fail with spring boot

2018-11-14 Thread 'Carl Mastrangelo' via grpc.io
Can you call awaitTermination in stop()?  You can also use shutdownNow() to 
force a more aggressive shutdown.
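
For what it's worth, grpc-java's Server exposes the same 
shutdown()/awaitTermination()/shutdownNow() lifecycle shape as 
java.util.concurrent.ExecutorService, so the ordering can be sketched with 
the stdlib class (the timeout values here are arbitrary):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GracefulShutdown {
    // Stop accepting new work, wait for in-flight work, then force if needed.
    static boolean stop(ExecutorService server) throws InterruptedException {
        server.shutdown();                                 // graceful: reject new tasks
        if (!server.awaitTermination(5, TimeUnit.SECONDS)) {
            server.shutdownNow();                          // aggressive: interrupt in-flight tasks
            return server.awaitTermination(5, TimeUnit.SECONDS);
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(() -> System.out.println("working"));
        System.out.println("terminated: " + stop(pool));
    }
}
```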

On Saturday, November 10, 2018 at 5:02:50 AM UTC-8, omid pourhadi wrote:
>
> Hi,
>
> I'm trying to create a Spring Boot app with gRPC, so I implemented a gRPC 
> NettyServer to run Spring Boot on top of it, but the app terminates right 
> after starting.
> How can I make the gRPC NettyServer await termination in a Spring Boot web 
> server?
>
> here is the code : 
>
> import java.io.IOException;
>
> import org.springframework.boot.web.server.WebServer;
> import org.springframework.boot.web.server.WebServerException;
>
> import io.grpc.Server;
> import io.grpc.netty.shaded.io.grpc.netty.NettyServerBuilder;
>
> public class NettyWebServer implements WebServer
> {
>
> 
> public static final int DEFAULT_PORT = 50051;
> 
> Server server;
> 
> @Override
> public void start() throws WebServerException
> {
> if(server == null)
> server = NettyServerBuilder.forPort(DEFAULT_PORT).build();
> 
> try
> {
> server.start();
> }
> catch (IOException e)
> {
> e.printStackTrace();
> }
> Runtime.getRuntime().addShutdownHook(new Thread() {
> @Override
> public void run()
> {
> // Use stderr here since the logger may have been reset by its
> // JVM shutdown hook.
> System.err.println("*** shutting down gRPC server since JVM is shutting down");
> NettyWebServer.this.stop();
> System.err.println("*** server shut down");
> }
> });
> startDaemonAwaitThread();
> 
> }
>
> @Override
> public void stop() throws WebServerException
> {
> if (server != null)
> {
> server.shutdown();
> }
> }
>
> @Override
> public int getPort()
> {
> return DEFAULT_PORT;
> }
> 
> private void startDaemonAwaitThread() {
> Thread awaitThread = new Thread(()->{
> try {
> NettyWebServer.this.server.awaitTermination();
> } catch (InterruptedException e) {
> //log.error("gRPC server stopped.", e);
> }
> });
> awaitThread.setContextClassLoader(getClass().getClassLoader());
> awaitThread.setDaemon(false);
> awaitThread.start();
> }
>
> }
>
>
>
> and here is the spring boot app config
>
> @SpringBootApplication
> public class GrpcBoot
> {
>
> public static void main(String[] args) throws Exception
> {
> SpringApplication.run(GrpcBoot.class, args);
> }
>
> 
> @Bean
> ServletWebServerFactory servletWebServerFactory()
> {
> return new ServletWebServerFactory() {
> 
> @Override
> public WebServer getWebServer(ServletContextInitializer... 
> initializers)
> {
> return new NettyWebServer();
> }
> };
> }
> 
> 
> 
> }
>
>
> here is the log after termination :
>
> 2018-11-10 16:32:02.499  INFO 17044 --- [   main] 
>>> com.omid.grpc.boot.GrpcBoot  : Starting GrpcBoot on opourhadi 
>>> with PID 17044 
>>> (/home/omidp/workspace-grpc/grpc-crud/grpc-server/target/classes started 
>>
>> 2018-11-10 16:32:02.502  INFO 17044 --- [   main] 
>>> com.omid.grpc.boot.GrpcBoot  : No active profile set, falling 
>>> back to default profiles: default
>>
>> 2018-11-10 16:32:02.534  INFO 17044 --- [   main] 
>>> s.c.a.AnnotationConfigApplicationContext : Refreshing 
>>> org.springframework.context.annotation.AnnotationConfigApplicationContext@3c9d0b9d:
>>>  
>>> startup date [Sat Nov 10 16:32:02 IRST 2018]; root of context hierarchy
>>
>> 2018-11-10 16:32:03.054  INFO 17044 --- [   main] 
>>> o.s.j.e.a.AnnotationMBeanExporter: Registering beans for JMX 
>>> exposure on startup
>>
>> 2018-11-10 16:32:03.063  INFO 17044 --- [   main] 
>>> com.omid.grpc.boot.GrpcBoot  : Started GrpcBoot in 0.738 
>>> seconds (JVM running for 1.022)
>>
>> 2018-11-10 16:32:03.065  INFO 17044 --- [   Thread-2] 
>>> s.c.a.AnnotationConfigApplicationContext : Closing 
>>> org.springframework.context.annotation.AnnotationConfigApplicationContext@3c9d0b9d:
>>>  
>>> startup date [Sat Nov 10 16:32:02 IRST 2018]; root of context hierarchy
>>
>> 2018-11-10 16:32:03.067  INFO 17044 --- [   Thread-2] 
>>> o.s.j.e.a.AnnotationMBeanExporter: Unregistering JMX-exposed beans 
>>> on shutdown
>>
>>
>>
>


[grpc-io] Re: The version of protoc in grpcio-tools

2018-11-14 Thread 'Carl Mastrangelo' via grpc.io
I think gRPC picks the one on your PATH first IIRC.

On Friday, November 9, 2018 at 3:06:48 AM UTC-8, Rui Li wrote:
>
> Hi team,
>
> I just started using gRPC recently. I wonder is there a way to 
> decide/control the protoc version of grpcio-tools? I'm asking because the 
> instructions indicate I can use grpcio-tools to generate code for Python. 
> So I need to make sure the generated code is compatible with my PB runtime. 
> Thanks in advance.
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/bfd60083-4b53-47e5-868f-0a1d3d8f947b%40googlegroups.com.


[grpc-io] Re: gRPC performance benchmark

2018-11-06 Thread 'Carl Mastrangelo' via grpc.io
I'm not sure ab is a very good benchmark tool.  It used to be years ago, 
but I haven't seen it used in a long time.  Also, by using it you avoid the 
deserialization that a real client would do.  



On Tuesday, November 6, 2018 at 10:39:49 AM UTC-8, din...@wepay.com wrote:
>
> Hi,
>
> Using gRPC Java RouteGuide Example available in github, I was running a 
> gRPC server and a REST server which contains gRPC client. Using this setup, 
> I tried to run AB (Apache Bench) tool to find the performance of unary and 
> server side streaming calls.
>
> I noticed that response times get worse as we increase the concurrency 
> level (the change is especially significant for server-side streaming 
> calls). I would like to know if there are any recommended best practices, 
> or if this is an expected increase in latency.
>
> FYI, I created a single ManagedChannel instance for the client at startup 
> time. I have also attached the performance graphs.
>
> Regards,
> Dinesh P S
>
>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/05956bfc-32bf-4d8a-847d-230b22ff1b02%40googlegroups.com.


[grpc-io] Re: gRPC & transaction support

2018-11-05 Thread 'Carl Mastrangelo' via grpc.io


I think OP hit the nail on the head with REST being a bad fit for 
transactions.  A previous team I worked on had pretty much the same 
problem.  There are two solutions to getting transaction-like semantics.

1.  Make all RPCs compareAndSwap-like, effectively making them atomic 
operations.  When querying an object from the service, every object needs a 
unique version (something like a timestamp).  When making updates, the 
system compares the modified time on the request object with the one it has 
stored and makes sure there haven't been any changes.   This works for 
objects that are updated infrequently, and which don't involve other 
dependent objects.  

2.  Make streaming RPCs a transaction.  One good thing about streaming RPCs 
is that the messages you send and receive are effectively a consistent 
snapshot.  When you half-close the streaming RPC, it attempts to commit the 
transaction as a whole, or else returns a failure so you can try again.  This 
makes multi-object updates much easier.  The downside is that the API is 
uglier, because effectively you have a single "Transaction RPC" and all your 
actual calls are just submessages.   It works, but things like stats, auth, 
interception, etc. get more complicated.   


Personally, I would structure my data to prefer option one, even though it 
is less powerful.  I *really* don't like the idea of implementing my own 
deadlock detection or lock ordering for an RPC service.   If you know 
locking is not a problem, I think both are valid solutions. 
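
Option one can be sketched with a hypothetical in-memory store (the class 
and method names below are illustrative, not any gRPC API): each object 
carries a version, and an update only commits if the caller's version still 
matches the stored one.

```java
import java.util.HashMap;
import java.util.Map;

public class VersionedStore {
    private final Map<String, String> values = new HashMap<>();
    private final Map<String, Long> versions = new HashMap<>();

    // The version a client should echo back on its next update attempt.
    public synchronized long version(String key) {
        return versions.getOrDefault(key, 0L);
    }

    // Compare-and-swap-like update: commits only if expectedVersion matches
    // the stored version; otherwise the client must re-read and retry.
    public synchronized boolean update(String key, String value, long expectedVersion) {
        long current = versions.getOrDefault(key, 0L);
        if (current != expectedVersion) {
            return false; // someone else modified the object first
        }
        values.put(key, value);
        versions.put(key, current + 1);
        return true;
    }
}
```

A stale writer gets false back instead of silently clobbering a concurrent 
update, which is the whole trick; no locks are held across RPCs.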

  

On Monday, November 5, 2018 at 1:16:10 AM UTC-8, glert...@gmail.com wrote:
>
> Dead issue but I would like to resurrect it because this wasn't answered 
> at all.
>
> Simple use case which can easily illustrate the problem: Two different 
> services OrderService (with CreateOrder method) and AuditService (with 
> Audit method). You want to create the order and, in case everything 
> succeeded, log an audit entry. If you log an entry beforehand you could end 
> with an audit log which never happened because the create order task 
> failed. If you (try to) log an entry afterwards, the audit task could fail 
> and end not logging something that happened which fails its sole purpose of 
> having an audit log at all.
>
> What do you guys at Google do?
> * Compensate?
> * Nothing more than live with it?
> * In this concrete case having a custom audit log per service and the CDC 
> (Change Data Capture) and replicate to the central service?
>
> @Jiri what did you end up doing?
>
> Thanks,
>
>
> On Wednesday, September 9, 2015 at 7:47:51 PM UTC+2, Jorge Canizales wrote:
>>
>> For Google's JSON/REST APIs we use ETag headers (optimistic concurrency) 
>> to do these things. That's something easy to implement on top of gRPC, 
>> using the request and response metadata to send the equivalent headers.
>>
>> On Wednesday, August 5, 2015 at 1:45:53 AM UTC-7, Jiri Jetmar wrote:
>>>
>>> Hi guys, 
>>>
>>> we are (re-) designing a new RPC-based approach for our backoffice 
>>> services and we are considering the usage of gRPC. Currently we are using a 
>>> REST method to call our services, but we realize with time to design a nice 
>>> REST API is a really hard job and when we look to our internal APIs it 
>>> looks more RPC then REST. Therefore the shift to pure RPC is valid 
>>> alternative. I;m not talking here about public APIs - they will continue to 
>>> be REST-based.. 
>>>
>>> Now, when there are a number of microservices that are/can be 
>>> distributed one has to compensate issues during commands (write 
>>> interactions, aka HTTP POST, PUT, DELETE). Currently we are using the TCC 
>>> (try-confirm-cancel) pattern. 
>>>
>>> I'm curious how you guys at Google are solving it ? How you are solving 
>>> the issue with distributed transaction on top of the RPC services ? Are you 
>>> doing to solve it on a more technical level (e.g. a kind of transactional 
>>> monitor), or are you considering it more on a functional/application level 
>>> where the calling client has to compensate failed commands to a service ?
>>>
>>> Are the any plans to propose something for gRPC.io ?
>>>
>>> Thank you. 
>>>
>>> Cheers, 
>>> Jiri
>>>
>>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/97e93eda-97e0-4a88-8c57-66b62a0c9abf%40googlegroups.com.


Re: [grpc-io] Unable to set-up SSL on gRPC server. org/eclipse/jetty/alpn/ALPN ClassNotFound.

2018-11-05 Thread 'Carl Mastrangelo' via grpc.io
Random guess, but doesn't tcnative need to be a stronger dependency than 
compile?  It needs to be a compile+runtime dependency.

On Thursday, November 1, 2018 at 2:00:22 PM UTC-7, Jaroslav Gorskov wrote:
>
> EDIT: I added a -javaagent JVM argument and pointed it to ALPN. Apparently 
> it needs to be executed before the JVM main method. The question of why 
> gRPC doesn't recognize that I have OpenSSL available is still open.
>
> Thu, Nov 1, 2018 at 16:25, >:
>
>> I want to set-up SSL on my GRPC server. Here is what I'm doing:
>>
>>  File certChain = new File("conf/server.crt");
>>  File privateKey = new File("conf/pkcs8_key.pem");
>>
>>
>>  Server server = NettyServerBuilder.forPort(8080)
>>  .useTransportSecurity(certChain, privateKey)
>>  .addService(new HelloWorldService())
>>  .build();
>>
>> *I am getting following error stack:*
>>
>> Exception in thread "main" java.lang.IllegalArgumentException: Jetty ALPN
>> /NPN has not been properly configured.
>>  at io.grpc.netty.GrpcSslContexts.selectApplicationProtocolConfig(
>> GrpcSslContexts.java:162)
>>  at io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:136)
>>  at io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:124)
>>  at io.grpc.netty.GrpcSslContexts.forServer(GrpcSslContexts.java:104)
>>  at io.grpc.netty.NettyServerBuilder.useTransportSecurity(
>> NettyServerBuilder.java:404)
>>  at server.MyServer.main(MyServer.java:22)
>> Caused by: java.lang.ClassNotFoundException: org/eclipse/jetty/alpn/ALPN
>>  at java.lang.Class.forName0(Native Method)
>>  at java.lang.Class.forName(Class.java:348)
>>  at io.grpc.netty.JettyTlsUtil.isJettyAlpnConfigured(JettyTlsUtil.java:34
>> )
>>  at io.grpc.netty.GrpcSslContexts.selectApplicationProtocolConfig(
>> GrpcSslContexts.java:153)
>>  ... 5 more
>>
>>
>> *Here is my gradle dependency block:*
>>
>> dependencies {
>> compile("io.grpc:grpc-netty:1.7.0")
>> compile("io.grpc:grpc-protobuf:1.7.0")
>> compile("io.grpc:grpc-stub:1.7.0")
>>  
>>  compile group: 'io.netty', name: 'netty-handler', version: 
>> '4.1.16.Final'
>>  compile group: 'io.netty', name: 'netty-tcnative-boringssl-static', 
>> version: '2.0.6.Final'
>>  compile group: 'com.google.gradle', name: 'osdetector-gradle-plugin', 
>> version: '1.2.1'
>>  
>> }
>>
>>
>> Grpc doesn't think I have OpenSSL as SSLProvider available.
>>   private static SslProvider defaultSslProvider() {
>> return OpenSsl.isAvailable() ? SslProvider.OPENSSL : SslProvider.JDK;
>>   }
>>
>> Yet, for OpenSSL to be available, I need *netty-tcnative-boringssl-static 
>> *on my classpath. Which I have.
>>
>> Even with JDK as SSLProvider, I don't understand why can't it load class 
>> in this grpc internal method:
>>
>>   static synchronized boolean isJettyAlpnConfigured() {
>> try {
>>   Class.forName("org.eclipse.jetty.alpn.ALPN", true, null);
>>   return true;
>> } catch (ClassNotFoundException e) {
>>   jettyAlpnUnavailabilityCause = e;
>>   return false;
>> }
>>   }
>>
>> I do have ALPN in my classpath, as well.
>>
>>
>> Any help is appreciated!!
>>
>> -- 
>> You received this message because you are subscribed to a topic in the 
>> Google Groups "grpc.io" group.
>> To unsubscribe from this topic, visit 
>> https://groups.google.com/d/topic/grpc-io/v3saMxqOVOw/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to 
>> grpc-io+u...@googlegroups.com .
>> To post to this group, send email to grp...@googlegroups.com 
>> .
>> Visit this group at https://groups.google.com/group/grpc-io.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/grpc-io/0377d3cb-91aa-40bf-9850-acebc5bfbc97%40googlegroups.com
>>  
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/1903f496-8fd3-4a84-aa58-d32dd8374991%40googlegroups.com.


[grpc-io] Re: Is there a way to queue RPC call in gRPC

2018-10-29 Thread 'Carl Mastrangelo' via grpc.io
What language are you using?

On Sunday, October 28, 2018 at 11:30:14 PM UTC-7, deva...@gmail.com wrote:
>
> Hi,
>
> I want to limit the number of threads in gRPC, which I can do by setting a 
> maximum number of threads.
> But if requests are already in flight on all of those threads, the next 
> RPC call immediately fails with RESOURCE_EXHAUSTED.
>
> Is there a way that I can queue such a request so that it executes once 
> the in-flight threads finish processing?
>
>
>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/57b4d9ae-0f3a-460f-877e-a4b99fa8c4de%40googlegroups.com.


Re: [grpc-io] Re: python support multi-thread

2018-10-26 Thread 'Carl Mastrangelo' via grpc.io
+Lidi

On Fri, Oct 26, 2018 at 11:23 AM Vaibhav Bedi 
wrote:

> Can you share some reference?
>
> On Fri, Oct 26, 2018 at 11:51 PM Vaibhav Bedi 
> wrote:
>
>> My problem is this:
>> I want to write multi-threaded gRPC Python code for a client-server
>> application; both client and server should use
>> threads in order to handle multiple requests at the same time. The client
>> simulates a gateway that uploads data to the server. This data should
>> be an array of objects.
>> The server receives these data and prints them in a multi-threaded
>> way.
>>
>> Thank you
>>
>> On Fri, Oct 26, 2018 at 11:46 PM 'Carl Mastrangelo' via grpc.io <
>> grpc-io@googlegroups.com> wrote:
>>
>>> Yes it does.  If you provide more information about what you want to do,
>>> we can give a better answer.
>>>
>>> On Thursday, October 25, 2018 at 9:11:49 AM UTC-7, rob.vai...@gmail.com
>>> wrote:
>>>>
>>>> hi
>>>>
>>>> I want to know Is grpc python support multi-thread?
>>>>
>>> --
>>> You received this message because you are subscribed to a topic in the
>>> Google Groups "grpc.io" group.
>>> To unsubscribe from this topic, visit
>>> https://groups.google.com/d/topic/grpc-io/TYz7WUUJkiw/unsubscribe.
>>> To unsubscribe from this group and all its topics, send an email to
>>> grpc-io+unsubscr...@googlegroups.com.
>>> To post to this group, send email to grpc-io@googlegroups.com.
>>> Visit this group at https://groups.google.com/group/grpc-io.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/grpc-io/31b617e5-629f-473b-8dcf-d9af2d17a0a5%40googlegroups.com
>>> <https://groups.google.com/d/msgid/grpc-io/31b617e5-629f-473b-8dcf-d9af2d17a0a5%40googlegroups.com?utm_medium=email&utm_source=footer>
>>> .
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>
>> --
>> Sincerely,
>> Vaibhav Bedi
>> Email ID- rob.vaibhavb...@gmail.com
>> Contact Number-8950597710
>> Website-http://www.vaibhavbedi.com/
>>
>
>
> --
> Sincerely,
> Vaibhav Bedi
> Email ID- rob.vaibhavb...@gmail.com
> Contact Number-8950597710
> Website-http://www.vaibhavbedi.com/
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAAcqB%2BtD3F-Og_wcE73f4EXa-SN00k9KSCZyQxagLaEcDChzpQ%40mail.gmail.com.


[grpc-io] Re: python support multi-thread

2018-10-26 Thread 'Carl Mastrangelo' via grpc.io
Yes it does.  If you provide more information about what you want to do, we 
can give a better answer.

On Thursday, October 25, 2018 at 9:11:49 AM UTC-7, rob.vai...@gmail.com 
wrote:
>
> hi
>  
> I want to know Is grpc python support multi-thread?
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/31b617e5-629f-473b-8dcf-d9af2d17a0a5%40googlegroups.com.


[grpc-io] Re: Servers in PHP?

2018-10-25 Thread 'Carl Mastrangelo' via grpc.io
It may not be the same thing, but this was posted to twitter 
recently:  https://github.com/spiral/php-grpc

On Thursday, October 25, 2018 at 1:28:12 AM UTC-7, klingenbe...@gmail.com 
wrote:
>
> What is this lib you mention? I can't find anything in this thread 
> actually mentioning where one can find it.
>
> I would be very interested in trying this out.
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/ea1d21b8-ed6e-4d0b-8205-c3ac921b515a%40googlegroups.com.

