Re: [grpc-io] Re: Bi-directional Streams with Multithreads-JAVA

2021-05-26 Thread 'Eric Gribkoff' via grpc.io
Hi,

It sounds like you are trying to send multiple messages for a unary RPC,
which results in the "Too many responses" error. The distinction between
unary-response and server-streaming RPC here is about the semantics of the
service - namely, how many responses the server can send - and not
performance. Both unary and streaming RPCs use streams under the hood. It
sounds like you may have other synchronization challenges with your
specific service, which would be up to your application, not gRPC's stream
handler, to enforce; but you could also consider simply changing your RPC
service definition to specify that the server response will be streaming.
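
For what it's worth, a minimal sketch of one way to serialize writes to a
shared stream (this wrapper class is hypothetical; grpc-java StreamObserver
instances are not thread-safe, so concurrent writers must synchronize):

```
import io.grpc.stub.StreamObserver;

// Hypothetical wrapper that funnels concurrent writes through one lock so
// calls on the underlying observer never interleave across request threads.
public final class SerializedObserver<T> {
  private final Object lock = new Object();
  private final StreamObserver<T> delegate;

  public SerializedObserver(StreamObserver<T> delegate) {
    this.delegate = delegate;
  }

  public void onNext(T message) {
    synchronized (lock) {
      delegate.onNext(message);
    }
  }

  public void onCompleted() {
    synchronized (lock) {
      delegate.onCompleted();
    }
  }
}
```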

There was also a somewhat related discussion that you might find helpful on
this old question on the grpc-java repository:
https://github.com/grpc/grpc-java/issues/6323

Thanks,

Eric

On Wed, May 26, 2021 at 10:01 PM nitish bhardwaj <
bhardwaj.nitis...@gmail.com> wrote:

> *NOTE*: The same code works perfectly if I switch from streams to normal
> gRPC calls.
>
> But to avoid extra latency, I need to use streams.
>
> On Thursday, May 27, 2021 at 10:25:55 AM UTC+5:30 nitish bhardwaj wrote:
>
>> Hi,
>>
>> I am trying to use bi-directional streams with Java. Everything works as
>> expected in a POC of a bidi stream where I have one client and a server
>> which reads and writes to it.
>>
>> But it starts to break when I use the same approach in a more realistic
>> setup.
>>
>> *Use case:* I am trying to implement gossip using gRPC in Java. Whenever
>> any server receives a request, it gossips to two other servers using gRPC
>> streams. For instance, I have 4 servers: server1, server2, server3 and
>> server4. Every server has a client configured to connect to each of the
>> other servers.
>>
>> Whenever any server receives a message, it selects two other servers and
>> transmits the message to them, and those servers do the same.
>>
>> *Issue:* As each request is served in a new thread by gRPC, I get an
>> error:
>> Cancelling the stream with status Status{code=INTERNAL, description=Too
>> many responses, cause=null}
>>
>> That's because reads and writes to the stream aren't synchronized across
>> requests: the stream may not have returned a response from the server yet
>> when a new request writes fresh data to it.
>>
>> What can be done to overcome this problem?
>>
>> Thanks for the support!!
>>


[grpc-io] gRPC-Java 1.38.0 released

2021-05-19 Thread 'Eric Gribkoff' via grpc.io
gRPC Java 1.38.0 is released and is available on Maven Central.
https://github.com/grpc/grpc-java/releases/tag/v1.38.0
API Changes

   - services: move classes with protobuf dependency into
   io.grpc.protobuf.services. Users currently using BinaryLogging,
   HealthChecking, Channelz should migrate to use the corresponding classes
   in io.grpc.protobuf.services. (#8056)
   - ChannelCredentials and ServerCredentials are now stable. Notably, this
   also includes TlsChannelCredentials and TlsServerCredentials, which allow
   mTLS configuration without a direct dependency on Netty. The description
   of the new API can be found in gRFC L74. These APIs are intended to
   “replace” the implicit security defaults of channels/servers as well as
   the usePlaintext() and useTransportSecurity() methods on the channel and
   server builders. The previous APIs are stable, so they will not be
   removed. Over time, documentation and examples will be migrated to the
   new API; a sketch follows this list.
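
For reference, a minimal sketch of the new API (TlsChannelCredentials and
Grpc.newChannelBuilder are the real io.grpc classes; the target and the
certificate file paths are placeholders):

```
import io.grpc.Grpc;
import io.grpc.ManagedChannel;
import io.grpc.TlsChannelCredentials;

import java.io.File;
import java.io.IOException;

public class TlsChannelExample {
  public static void main(String[] args) throws IOException {
    // mTLS without a direct Netty dependency: client cert/key plus the CA
    // used to verify the server (file names are placeholders).
    ManagedChannel channel = Grpc.newChannelBuilder(
            "example.com:443",
            TlsChannelCredentials.newBuilder()
                .keyManager(new File("client.pem"), new File("client.key"))
                .trustManager(new File("ca.pem"))
                .build())
        .build();
    // ... create stubs against the channel, then shut it down when done.
    channel.shutdownNow();
  }
}
```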

Bug Fixes

   - xds: Fixed a bug where xDS users could experience a null pointer
   exception in rare cases (#8087)
   - netty: Fixed a bug where client RPCs could fail with the wrong
   exception, with the message "Maximum active streams violated for this
   endpoint", when receiving GOAWAY while MAX_CONCURRENT_STREAMS is reached.
   After the fix, the client RPC fails with UNAVAILABLE status in such a
   scenario. (#8020)
   - xds: Fixed a bug where xDS LB policies could process and propagate
   load balancing state updates from their child LB policies after being
   shut down themselves. This could cascade and result in hard-to-reason-about
   behavior if any one layer of the LB policies did not clean up its
   internal state after shutdown.

Behavior Changes

   - core, grpclb, xds: let leaf LB policies explicitly refresh name
   resolution when a subchannel connection is broken. Custom LoadBalancer
   implementations should refresh name resolution (with
   Helper.refreshNameResolution()) when a subchannel they created becomes
   IDLE or TRANSIENT_FAILURE. Currently the Channel will do this for you and
   log a warning, but that behavior will be removed in future releases.
   (#8048)
   - netty: Added support for OpenJSSE

Dependencies

   - Upgrade Guava to 30.1 (#8100). As part of #4671, grpc-java will drop
   support for Java 7, with no impact to the Android API levels supported.
   Guava is going through the same process, and this Guava release warns
   when used on Java 7. If you are using Java 7 and are impacted, please
   comment on #4671. The Java 7 check may be noticed by Android builds and
   fail without language-level desugaring. We expect most users have already
   enabled language-level desugaring, but if not it would be necessary to
   add the following to your build.gradle:

android {
  compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
  }
}


   - auth: Allow pre- and post-0.25.0 behavior from
   google-auth-library-java, for Bazel users. google-auth-library-java
   0.25.0 changed its behavior for JWT, which caused a gRPC test to fail.
   The failure was benign but prevented Bazel users from using newer
   versions of the library



Re: [grpc-io] Re: (grpc-java) Detecting client network disconnect and connect

2021-01-20 Thread 'Eric Gribkoff' via grpc.io
I believe that
https://grpc.github.io/grpc-java/javadoc/io/grpc/ManagedChannel.html#getState-boolean-
and
https://grpc.github.io/grpc-java/javadoc/io/grpc/ManagedChannel.html#notifyWhenStateChanged-io.grpc.ConnectivityState-java.lang.Runnable-
are the APIs that you are looking for.
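
A minimal sketch of using the two together (the target and plaintext setup
are placeholders):

```
import io.grpc.ConnectivityState;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class ConnectivityWatcher {
  // Logs each state transition; notifyWhenStateChanged() fires once, so the
  // callback re-registers itself to keep observing.
  static void watch(ManagedChannel channel) {
    ConnectivityState current = channel.getState(/* requestConnection= */ true);
    System.out.println("channel state: " + current);
    channel.notifyWhenStateChanged(current, () -> watch(channel));
  }

  public static void main(String[] args) throws InterruptedException {
    ManagedChannel channel =
        ManagedChannelBuilder.forTarget("localhost:50051").usePlaintext().build();
    watch(channel);
    Thread.sleep(60_000); // keep the process alive to observe transitions
    channel.shutdownNow();
  }
}
```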

Thanks,

Eric

On Tue, Jan 19, 2021 at 10:43 PM Sivarama Prasad <
sivaramaprasad1...@gmail.com> wrote:

> Please help in resolving the issue that I am facing.
>
> On Tuesday, January 19, 2021 at 12:21:28 PM UTC+5:30 Sivarama Prasad wrote:
>
>> Greetings,
>>
>> I am working on GRPC java assignment.
>>
>> Short desc abt requirement:
>>
>> multiple client applications and single server.
>>
>> Every time client stores the message in Database and upon successful
>> storage, pass on the message to grpc server and store the message in server
>> database.
>>
>> So, Client database and server database has the same set of database
>> records.
>>
>> Problem stmt:
>> When there is a network disconnect, client is unable to connect the
>> server and hence there is a mismatch in database records(client vs Server)
>>
>> I am looking for a grpc-java flag/status to know disconnect and connect
>> events so that I can pic the disconnect timestamp and push all the messages
>> to server to be in sync the database records.
>>
>> Please provide me code snippet , sample/example to achive this.
>>
>> Thanks in advance.
>> Sivaram
>>


Re: [grpc-io] gRPC-Java method overloading

2021-01-20 Thread 'Eric Gribkoff' via grpc.io
Assuming that you are asking about overloading the method names defined in
your service's proto file, this StackOverflow question and answer addresses
this issue:
https://stackoverflow.com/questions/65034685/is-it-possible-the-grpc-functions-overloading/65034755#65034755
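
For context, a small sketch of why overloading can't work: grpc-java
dispatches purely on the fully-qualified method name carried with each
request, never on parameter types (the Greeter names below come from the
standard helloworld example shipped with grpc-java):

```
import io.grpc.MethodDescriptor;
import io.grpc.examples.helloworld.GreeterGrpc;
import io.grpc.examples.helloworld.HelloReply;
import io.grpc.examples.helloworld.HelloRequest;

public class MethodNames {
  public static void main(String[] args) {
    // Every RPC is identified by "package.Service/Method", so two methods
    // can never share a name; emulate overloads with distinct names
    // (e.g., SayHello and SayHelloDetailed).
    MethodDescriptor<HelloRequest, HelloReply> md = GreeterGrpc.getSayHelloMethod();
    System.out.println(md.getFullMethodName()); // helloworld.Greeter/SayHello
  }
}
```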

Thanks,

Eric

On Sat, Jan 16, 2021 at 5:02 AM Anmol Mishra 
wrote:

> Hello,
>
> In gRPC java, Is there any way of method overloading?, How to implement it?
>
> How does gRPC java resolves methods internally?
>
> Thanks for reading so far
> - anmol mishra
>


[grpc-io] gRPC-Java v1.35.0 Released

2021-01-13 Thread 'Eric Gribkoff' via grpc.io
gRPC Java 1.35.0 is released and should be available on Maven Central and
JCenter.

https://github.com/grpc/grpc-java/releases/tag/v1.35.0
Bug Fixes

   - core: Fix CompositeChannelCredentials to no longer use CallCredentials
   for OOB channels. OOB channels are available for load balancing policies
   to use to communicate with an LB server; they are mainly used by gRPC-LB.
   This resolves the incompatibility of the 1.34.0 release with
   googleapis.com.
   - alts: Limit the number of concurrent handshakes to 32. ALTS uses
   blocking RPCs for handshakes. If the handshake server has a limit on the
   number of concurrent handshakes, this can produce a deadlock. Limiting to
   32 should work around the problem in the majority of cases. A later fix
   will allow handshake RPCs to be asynchronous
   - xds: Fix missed class relocations for generated code. grpc-xds
   previously exposed generated code for multiple third-party protobuf
   classes outside of the io.grpc package. These are now shaded to avoid
   colliding with other users of those classes
   - xds: Fix a user-visible stack trace showing
   java.util.NoSuchElementException when the environment variable
   GRPC_XDS_EXPERIMENTAL_SECURITY_SUPPORT was set and the application
   contained an xDS-configured gRPC server. The exception was benign and was
   seen when the connection was dropped before an SslContextProvider was
   available.
   - xds: decouple xDS channel creation and bootstrapping. This fixes a bug
   caused by the lifecycle mismatch between XdsClient and its channel to the
   xDS server. Creating a new XdsClient (after the previous one was shut
   down because no Channel was using it) now creates and uses a new xDS
   channel.

Dependencies

   - Guava updated to 30.0-android
   - Animal Sniffer annotations updated to 1.19
   - Error Prone annotations updated to 2.4.0
   - Perfmark updated to 0.23.0
   - compiler: Linux artifacts now built using CentOS 7. Previously CentOS
   6 was used, but that distribution is discontinued and no longer available
   in our build infrastructure
   - netty: Upgrade to Netty 4.1.52 and tcnative 2.0.34. Note that this
   Netty release enables TLSv1.3 support. mTLS failures with TLSv1.3 will have
   different error messages than in TLSv1.2
   - auth,alts: google-auth-library-java updated to 0.22.2
   - census: OpenCensus updated to 0.28.0
   - protobuf: googleapi’s common protos updated to 2.0.1
   - okhttp: Okio updated to 1.17.5
   - xds: re2j updated to 1.5
   - xds: bouncycastle updated to 1.67
   - gradle: bumped protobuf-gradle-plugin version to 0.8.14
   - android, cronet: upgraded the latest support Android version to 29

Acknowledgments

@amnox 
@horizonzy 
@wanyingd1996 



Re: [grpc-io] Re: Use Cronet underneath GRPC

2020-09-28 Thread 'Eric Gribkoff' via grpc.io
Hi,

Jan is not back just yet but I believe I can largely answer your questions:

1) gRPC C# does not support using Cronet as the underlying transport (at
least, I could not find any API in the C# implementation for enabling
Cronet, which I'm pretty sure would be a prerequisite). I'm not aware of
all the architectural details of how C# is built on top of gRPC core, but I
think it should be possible to wrap the Cronet transport and expose an API
to use it via C# (caveat: this has worked for other languages, but there
could be blockers for doing so in C#). I suggest filing a feature request
for Cronet with C# support at github.com/grpc/grpc.

2) Technically you could implement your own transport based on a UDP
client, but I'm pretty sure the core APIs for this are not considered
public/stable, meaning that they are intended for the interaction between
core and the wrapped languages and could potentially change. I would expect
this approach to be a significant amount of work requiring detailed
knowledge of some gRPC internals. If you do choose to go this route, I
would suggest starting another email thread to discuss whether/how it's
advised for gRPC users to actually rely on the transport API (I don't
primarily work on core so not sure about best practices here).

3) The http/2 client inside of gRPC core is not exposed for use without the
RPC layer.

Best,

Eric

On Fri, Sep 18, 2020 at 11:58 AM ashish tiwari 
wrote:

> Thanks for the response!! I will wait for the reply. Basically, based on
> the document
> https://grpc.github.io/grpc/core/md_doc_core_transport_explainer.html,
> the Cronet transport is supported or a new transport can be implemented.
> So just to provide more clarification, I am trying to find out if that is
> possible for gRPC C# as well, and if it is not supported right now, is
> there a plan to support it in the future? Also, since it mentions that we
> can implement our own custom transport, does that mean we can implement a
> transport based on a UDP client, and how much effort would that require?
> Also, is it possible to expose the underlying HTTP client currently used
> with gRPC for other purposes as well, without having the RPC layer?
>
> On Friday, September 18, 2020 at 11:05:59 AM UTC-7 yulin...@google.com
> wrote:
>
>> Hi Ashish,
>>
>> I'm not familiar with gRPC C#; I've cc'ed the owner of gRPC-C#, but he's
>> on vacation now. I'm not completely sure, but it looks like gRPC-C#
>> doesn't support the Cronet transport for now.
>>
>> On Fri, Sep 18, 2020 at 10:45 AM ashish tiwari 
>> wrote:
>>
>>> Hi,
>>>
>>>   Can someone please help with the previous question? I am trying to use
>>> gRPC C# on Unity for a gaming application and am looking into possible
>>> ways to use Cronet or any other UDP transport underneath gRPC.
>>>
>>> On Friday, September 11, 2020 at 4:59:08 PM UTC-7 ashish tiwari wrote:
>>>
 Is the Cronet transport available for gRPC C# as well? I am looking into a
 gaming application where I would like to use gRPC C#. I am looking into
 possible ways to use Cronet for QUIC. If it is not supported, is there a
 way that I can use any other UDP client/transport underneath gRPC?

 On Friday, September 11, 2020 at 4:53:33 PM UTC-7 yulin...@google.com
 wrote:

> Hi Ashish, if you use gRPC-Objc, you can simply switch to Cronet
> transport by setting *callOptions.transportType ==
> GRPCTransportTypeCronet*. If you are using gRPC-C++, you can also use
> Cronet underneath gRPC; here's an example using gRPC with the Cronet
> native interface:
>
> ```
> struct Cronet_Engine* cronetEngine = createEngineFunc();
> // get the stream_engine ptr for gRPC-cronet
> stream_engine* engine = Cronet_Engine_GetStreamEngine(cronetEngine);
>
> std::shared_ptr<::grpc::ChannelCredentials> credentials =
>     ::grpc::CronetChannelCredentials(engine);
>
> // create cronet channel
> std::shared_ptr<::grpc::Channel> channel =
>     ::grpc::CreateCustomChannel(_endpoint, credentials, _args);
>
> // create stub with cronet channel
> auto stub = SomeService::NewStub(channel);
> // make async RPC calls with stub...
> ```
>
> On Friday, September 11, 2020 at 4:21:56 PM UTC-7 simpl...@gmail.com
> wrote:
>
>> Is it possible to use Cronet underneath GRPC for quic support?
>

Re: [grpc-io] Under high load, clients get StatusRuntimeException: UNKNOWN: channel closed

2020-07-31 Thread 'Eric Gribkoff' via grpc.io
Hi Ethan,

That sounds like a reasonable hypothesis (although I should caveat this
with I'm far from an expert on gRPC Netty server best practices). Did you
observe any errors or warnings in the server-side logs when this was
occurring? By default, a gRPC Netty server will use a cached thread pool -
the general recommendation (e.g., here

and here
)
is to specify a fixed thread pool instead via `ServerBuilder.executor`. I
think it may be possible that, if your cached thread pool running your
server application code has spun up lots of threads that are long-running,
it may starve other threads, resulting in the Netty event loops (see
https://groups.google.com/g/grpc-io/c/LrnAbWFozb0) handling incoming
connections being unable to run. Based on the recommendations for using a
fixed thread pool executor instead, I would recommend trying that, making
sure the size of the fixed thread pool is capped at a # of threads that
won't prevent the CPU from running other threads as well.
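
A minimal sketch of that setup (the port, pool size, and service are
placeholders; ServerBuilder.executor is the real grpc-java API):

```
import io.grpc.Server;
import io.grpc.ServerBuilder;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FixedExecutorServer {
  public static void main(String[] args) throws Exception {
    // Cap application threads so the Netty event loops are never starved;
    // the pool size here is only illustrative - size it for your workload.
    ExecutorService appExecutor = Executors.newFixedThreadPool(16);
    Server server = ServerBuilder.forPort(50051)
        .executor(appExecutor)
        // .addService(new MyServiceImpl())  // hypothetical service impl
        .build()
        .start();
    server.awaitTermination();
  }
}
```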

Thanks,

Eric

On Thu, Jul 30, 2020 at 1:34 PM Ethan Cheng 
wrote:

> Hi Eric, thanks for looking into this.
> I agree that this has nothing to do with `MAX_CONCURRENT_STREAMS`, and I
> can see why you are thinking about blocking in Netty's event loop.
> Since a lot of our requests are long-running streaming requests which
> transmit a lot of data, is it possible that they are using most of the
> threads in the event loop, which leads to a kind of `thread starvation`
> for the connections coming from the troublesome clients? Then these
> connections stay idle past the `DEFAULT_SERVER_KEEPALIVE_TIMEOUT_NANOS`
> and so become inactive?
> If this is the situation, should we customize the thread pool in Netty?
>
> On Wednesday, July 29, 2020 at 10:25:28 PM UTC-7 ericgr...@google.com
> wrote:
>
>> The first stack trace's cause is truncated so I'm not sure what happened
>> there, but the second includes "at
>> io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.setHandshakeFailure(SslHandler.java:1792)"
>> after
>> " 
>> io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.channelInactive(SslHandler.java:1073)
>> ", which I believe indicates that the channel - for an unknown (possibly
>> truncated in the stack trace?) reason - moved to idle during the TLS
>> handshake. As for why this would happen under load: just taking a guess,
>> but is it possible your server is performing a blocking operation in the
>> Netty event loop? I don't think MAX_CONCURRENT_STREAMS-related capacity
>> issue would result in this stack trace, as the connection should be more or
>> less cleanly rejected rather than moving a connected channel to inactive
>> mid-handshake as suggested by the provided stacktrace.
>>
>> Thanks,
>>
>> Eric
>>
>> On Wed, Jul 29, 2020 at 4:57 PM Ethan Cheng 
>> wrote:
>>
>>> We had an incident earlier where one of our clients had an improper
>>> retry policy that led to a huge wave of requests to the server. Then all
>>> clients started to see this `StatusRuntimeException: UNKNOWN: channel
>>> closed` at the client side.
>>>
>>> for one client, it was:
>>> ```
>>> @40005f0744ef2096858c java.util.concurrent.CompletionException:
>>> io.grpc.StatusRuntimeException: UNKNOWN: channel closed
>>> @40005f0744ef2096858c Channel Pipeline: [SslHandler#0,
>>> ProtocolNegotiators$ClientTlsHandler#0,
>>> WriteBufferingAndExceptionHandler#0, DefaultChannelPipeline$TailContext#0]
>>> @40005f0744ef20968974 at
>>> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
>>> ~[?:?]
>>> @40005f0744ef2096df64 at
>>> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
>>> ~[?:?]
>>> @40005f0744ef2096df64 at
>>> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632)
>>> ~[?:?]
>>> @40005f0744ef2096e34c at
>>> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>>> [?:?]
>>> @40005f0744ef2096e34c at
>>> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>>> [?:?]
>>> @40005f0744ef2096ef04 at
>>> io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:449)
>>> [grpc-stub-1.27.1.jar:1.27.1]
>>> @40005f0744ef2096fabc at
>>> io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:426)
>>> [grpc-core-1.29.0.jar:1.29.0]
>>> @40005f0744ef2096fea4 at
>>> io.grpc.internal.ClientCallImpl.access$500(ClientCallImpl.java:66)
>>> [grpc-core-1.29.0.jar:1.29.0]
>>> @40005f0744ef2096fea4 at
>>> io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:689)
>>> [grpc-core-1.29.0.jar:1.29.0]
>>> @40005f0744ef20970674 at
>>> 

[grpc-io] gRPC-Java 1.31.0 Released

2020-07-30 Thread 'Eric Gribkoff' via grpc.io
gRPC Java 1.31.0 is released and available on Maven Central. The release
should be available on JCenter shortly.

*gRPC Java 1.31.0 Release Notes*

API Changes

   - api: ManagedChannelBuilder.nameResolverFactory is now marked
   deprecated. It has long been our plan to remove this function, but that
   was not previously communicated. Most usages should be able to globally
   register via the SPI mechanism or NameResolverRegistry.register(). There
   is a plan to add a method to ManagedChannelBuilder to specify the default
   target scheme for the channel. If your use case is not covered, please
   inform us on #7133

New Features

   - The following new xDS functionality is added in this release:
  - Request matching based on path (prefix, full path and safe regex)
  and headers.
  - Request routing to multiple clusters based on weights.
  - The xDS features supported in a given release are documented here.
   - api: Added LoadBalancer.Helper.createResolvingOobChannelBuilder(). It
   is similar to LoadBalancer.Helper.createResolvingOobChannel() except that
   it allows configuring the channel (#7136)

Bug Fixes

   - netty: return status code UNAVAILABLE when the Netty channel has an
   unresolved InetSocketAddress (#7023)
   - core: fix a bug where a call may hang when using manual flow control
   and gRPC retry is enabled (#6817)

Documentation

   - stub: Documented more behavior of ClientCalls and ServerCalls, with
   regard to ClientResponseObserver, ClientCallStreamObserver,
   ServerCallStreamObserver, and exceptions
   - api: Documented how Providers may be used in their respective class
   documentation. Previously you “just had to know” the SPI mechanism was
   available

Dependencies

   - Update guava to 29.0 (#7079)

Examples

   - examples: Add client/server retrying example via service config (#7111)

Acknowledgements

@alexanderscott 
@AnarSultanov 
@cindyxue 
@d-reidenbach 
@elharo 
@gsharma 
@reggiemcdonald 



Re: [grpc-io] Under high load, clients get StatusRuntimeException: UNKNOWN: channel closed

2020-07-29 Thread 'Eric Gribkoff' via grpc.io
The first stack trace's cause is truncated so I'm not sure what happened
there, but the second includes "at
io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.setHandshakeFailure(SslHandler.java:1792)"
after
" 
io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.channelInactive(SslHandler.java:1073)
", which I believe indicates that the channel - for an unknown (possibly
truncated in the stack trace?) reason - moved to idle during the TLS
handshake. As for why this would happen under load: just taking a guess,
but is it possible your server is performing a blocking operation in the
Netty event loop? I don't think MAX_CONCURRENT_STREAMS-related capacity
issue would result in this stack trace, as the connection should be more or
less cleanly rejected rather than moving a connected channel to inactive
mid-handshake as suggested by the provided stacktrace.

Thanks,

Eric

On Wed, Jul 29, 2020 at 4:57 PM Ethan Cheng 
wrote:

> We had an incident earlier where one of our clients had an improper retry
> policy that led to a huge wave of requests to the server. Then all clients
> started to see this `StatusRuntimeException: UNKNOWN: channel closed` at
> the client side.
>
> for one client, it was:
> ```
> @40005f0744ef2096858c java.util.concurrent.CompletionException:
> io.grpc.StatusRuntimeException: UNKNOWN: channel closed
> @40005f0744ef2096858c Channel Pipeline: [SslHandler#0,
> ProtocolNegotiators$ClientTlsHandler#0,
> WriteBufferingAndExceptionHandler#0, DefaultChannelPipeline$TailContext#0]
> @40005f0744ef20968974 at
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
> ~[?:?]
> @40005f0744ef2096df64 at
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
> ~[?:?]
> @40005f0744ef2096df64 at
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632)
> ~[?:?]
> @40005f0744ef2096e34c at
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> [?:?]
> @40005f0744ef2096e34c at
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
> [?:?]
> @40005f0744ef2096ef04 at
> io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:449)
> [grpc-stub-1.27.1.jar:1.27.1]
> @40005f0744ef2096fabc at
> io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:426)
> [grpc-core-1.29.0.jar:1.29.0]
> @40005f0744ef2096fea4 at
> io.grpc.internal.ClientCallImpl.access$500(ClientCallImpl.java:66)
> [grpc-core-1.29.0.jar:1.29.0]
> @40005f0744ef2096fea4 at
> io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:689)
> [grpc-core-1.29.0.jar:1.29.0]
> @40005f0744ef20970674 at
> io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$900(ClientCallImpl.java:577)
> [grpc-core-1.29.0.jar:1.29.0]
> @40005f0744ef20970a5c at
> io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:751)
> [grpc-core-1.29.0.jar:1.29.0]
> @40005f0744ef20970a5c at
> io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:740)
> [grpc-core-1.29.0.jar:1.29.0]
> @40005f0744ef2097604c at
> io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
> [grpc-core-1.29.0.jar:1.29.0]
> @40005f0744ef20976434 at
> io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
> [grpc-core-1.29.0.jar:1.29.0]
> @40005f0744ef20976434 at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> [?:?]
> @40005f0744ef20976fec at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> [?:?]
> @40005f0744ef20976fec at java.lang.Thread.run(Thread.java:834) [?:?]
> @40005f0744ef20976fec Caused by: io.grpc.StatusRuntimeException:
> UNKNOWN: channel closed
> @40005f0744ef209773d4 Channel Pipeline: [SslHandler#0,
> ProtocolNegotiators$ClientTlsHandler#0,
> WriteBufferingAndExceptionHandler#0, DefaultChannelPipeline$TailContext#0]
> @40005f0744ef209777bc at
> io.grpc.Status.asRuntimeException(Status.java:533)
> ~[grpc-api-1.29.0.jar:1.29.0]
> @40005f0744ef209777bc ... 12 more
> ```
>
> for another client, it was:
> ```
> Caused by: java.nio.channels.ClosedChannelException
> at
> io.grpc.netty.shaded.io.grpc.netty.Utils.statusFromThrowable(Utils.java:168)
> at
> io.grpc.netty.shaded.io.grpc.netty.NettyClientTransport$5.operationComplete(NettyClientTransport.java:267)
> at
> io.grpc.netty.shaded.io.grpc.netty.NettyClientTransport$5.operationComplete(NettyClientTransport.java:261)
> at
> io.grpc.netty.shaded.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
> at
> io.grpc.netty.shaded.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:485)
> at
> io.grpc.netty.shaded.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
> at
> 

Re: [grpc-io] Re: Grpc excessive default memory usage

2020-05-14 Thread 'Eric Gribkoff' via grpc.io
Thanks for the additional information.

On Wed, May 13, 2020 at 11:41 PM  wrote:

> Thanks Eric for the reply.
>
> A couple of things. I don't quite follow when you say that gRPC Java is
> not optimized for this type of large file. My individual message size is
> only 1024 bytes. The program reads this many bytes from the file at a time
> and passes them to the stream observer's onNext() call. So each message
> sent on the wire for each onNext() should be little over 1 KB.
>

That makes sense; what I was considering as a possibility was that the
messages were going to onNext() *much* faster than they were going out on
the wire. In the worst case, the entire file would be buffered into memory
before anything was sent over the wire.
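
For reference, a hedged sketch of the flow-control pattern that avoids that
worst case (ServerCallStreamObserver, isReady(), and setOnReadyHandler()
are the real grpc-java APIs; the helper itself is hypothetical):

```
import io.grpc.stub.ServerCallStreamObserver;
import io.grpc.stub.StreamObserver;

import java.util.Iterator;
import java.util.concurrent.atomic.AtomicBoolean;

public final class FlowControlledWriter {
  // Call from the server method handler before returning: writes messages
  // only while the transport is ready, instead of buffering them all.
  public static <T> void write(StreamObserver<T> rawObserver, Iterator<T> messages) {
    ServerCallStreamObserver<T> observer = (ServerCallStreamObserver<T>) rawObserver;
    AtomicBoolean done = new AtomicBoolean();
    observer.setOnReadyHandler(() -> {
      // Invoked whenever outbound buffer space frees up; drain until full.
      while (observer.isReady() && messages.hasNext()) {
        observer.onNext(messages.next());
      }
      if (!messages.hasNext() && done.compareAndSet(false, true)) {
        observer.onCompleted();
      }
    });
  }
}
```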

>
> Also, I just changed the gRPC version to *1.29.0* and saw that the native
> memory usage is almost zero with this version. The high usage that I saw
> was on version *1.21.0*.
>
>
Even better; most likely the memory usage was optimized or, if a leak,
fixed since version 1.21.

Thanks for the update!

Eric


> Regards,
>
> On Thursday, May 14, 2020 at 3:26:12 AM UTC+5:30, Eric Gribkoff wrote:
>>
>> gRPC Java may not be optimized for this type of large file transfer, and
>> is likely copying the provided bytes more than once, which could explain
>> why you see additional direct memory allocation. These should all be
>> released once the data goes out over the wire. It looks from your code
>> snippet that the entire file is immediately buffered - we would expect to
>> see the memory consumption decrease as bytes are sent out to the client.
>>
>> Further investigation of this would probably be best carried out on the
>> grpc java repository at https://github.com/grpc/grpc-java/ if you'd care
>> to file an issue/reproduction case there.
>>
>> Thanks,
>>
>> Eric
>> On Monday, May 11, 2020 at 3:22:35 AM UTC-7 ravi@gmail.com wrote:
>>
>>> I have been testing gRPC behaviour for larger message sizes on my
>>> machine. I have a single client to which I am streaming a 603 MB video
>>> file via streaming gRPC. I ran into OOM while testing and found that in
>>> the case of slow clients, response messages were getting queued up and I
>>> was getting the below error message:
>>> io.netty.util.internal.OutOfDirectMemoryError: failed to allocate
>>> 16777216 byte(s) of direct memory (used: 1873805512, max: 1890582528)
>>>
>>>
>>> Now this is fixable via flow control (using the onReady() channel
>>> handler), but out of curiosity I increased my direct memory to 4 GB via
>>> the -XX:MaxDirectMemorySize=4g JVM flag to force queuing of all response
>>> messages so that the client can consume at its own pace. It completed
>>> successfully. But I observed that I ended up using 2.4 GB of direct
>>> memory. I checked it via usedDirectMemory() exposed by Netty for
>>> ByteBufAllocatorMetric.
>>>
>>> *Isn't this too much for a 603 MB file, as it is 3 times the total file
>>> size?* Below is the code snippet that I am using:
>>>
>>> stream = new FileInputStream(file);
>>> byte[] buffer = new byte[1024];
>>> ByteBufAllocator byteBufAllocator = ByteBufAllocator.DEFAULT;
>>> int length;
>>> while ((length = stream.read(buffer)) > 0) {
>>>   response.onNext(VideoResponse.newBuilder()
>>>       .setVideoBytes(ByteString.copyFrom(buffer, 0, length))
>>>       .build());
>>>   if (byteBufAllocator instanceof ByteBufAllocatorMetricProvider) {
>>>     ByteBufAllocatorMetric metric =
>>>         ((ByteBufAllocatorMetricProvider) byteBufAllocator).metric();
>>>     System.out.println(metric.usedDirectMemory() / (1024 * 1024));
>>>   }
>>> }
>>>


Re: [grpc-io] Android DownloadManager with gRPC

2020-03-26 Thread 'Eric Gribkoff' via grpc.io
As a commenter on your StackOverflow question suggested, gRPC is for
structured data, not raw file downloads. Trying to map between a URI and the
RPC call would not be enough to make DownloadManager work with gRPC,
because the gRPC server is going to send and expect additional data beyond
the raw file itself that will not be understood or sent by DownloadManager.

You can certainly use gRPC server and client to send large files, but it's
not going to have the same set of features built-in for large file
downloading (such as resuming your download later if the connection is
broken) as using something like a HTTP server + a specific file download
library/client. I would recommend just running a separate HTTP server for
your file download needs.

Thanks,

Eric

On Thu, Mar 26, 2020 at 4:29 AM clement jean 
wrote:

> Hi,
>
>
>
> I recently posted a question on StackOverflow concerning the problem of
> using DownloadManager with gRPC.
>
> To summarize, I want to serve static files from my server (C++) and be
> able to download them via DownloadManager (Kotlin).
>
> However, I need a URI to use DownloadManager. I potentially have a
> solution:
>
>- I could map the RPC call to a URI (like the static_file
>attribute in a Google Cloud YAML configuration file)
>
> But I'm working on a standalone server, not on Google Cloud; would it
> still be feasible to use a YAML config file from gRPC C++ and serve these
> files?
>
>
>
> For more information on the problem, please check the question on
> StackOverflow
> 
>
>
>
> Thank you in advance
>
>
>
> Clement Jean
>


Re: [grpc-io] Huge increase of precompiled native libraries for Android and iOS

2020-02-19 Thread 'Eric Gribkoff' via grpc.io
+Jan Tattermusch, who (I think) knows more about
our Unity builds

On Wed, Feb 19, 2020 at 10:13 AM 'Jihun Cho' via grpc.io <
grpc-io@googlegroups.com> wrote:

> We have completely different codebases for iOS and Android, so it's
> interesting that this happened on both sides.
> I can't speak for the iOS part, but for the Android build we check the
> binary size, so I am a bit surprised that you experienced this.
>
> can you narrow down which version caused the binary size increase?
> also, can you have some break down other than total binary size?
>
> On Mon, Feb 17, 2020 at 12:12 AM Ronny López  wrote:
>
>> Hi,
>>
>> We use the experimental Unity package which includes precompiled native
>> libraries Android and iOS.
>>
>> In the latest versions (concretely, while upgrading from version 1.23.x
>> to 1.27.x), the size of those precompiled native libraries has doubled.
>>
>> In Android, from ~20 MB to ~42 MB
>> In iOS, from ~90 MB to ~208 MB
>>
>> This makes the package nearly unusable on mobile.
>>
>> Do you folks know what is provoking such huge increase in size?
>>
>>
>> Thanks
>>
>>
>>


Re: [grpc-io] what's the difference between enableRetry and waitForReady?

2019-12-26 Thread 'Eric Gribkoff' via grpc.io
No, if you have configured automatic retries for that error code, it will
be automatically retried. The spec for this is at
https://github.com/grpc/proposal/blob/master/A6-client-retries.md. Avoiding
a callback to the user while the RPC is automatically being retried was an
explicit goal of the design: if the client needs to alter its behavior due
to retries, then it should not be using gRPC's retry support and should
instead implement retry logic in the application layer. However, in the
metadata of the RPC on the client side (when it eventually succeeds or
fails), gRPC will insert an integer value for the
`grpc-previous-rpc-attempts` key that lets you know how many attempts were
made, but not their error codes.
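
A minimal sketch of enabling retries via a service config (the service name
and retry parameters are placeholders; defaultServiceConfig() and
enableRetry() are the real ManagedChannelBuilder APIs):

```
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

import java.util.List;
import java.util.Map;

public class RetryConfigExample {
  public static void main(String[] args) {
    // Service config in the map form grpc-java accepts; numbers must be
    // Doubles, following the JSON-like schema in the A6 gRFC.
    Map<String, ?> retryPolicy = Map.of(
        "maxAttempts", 4.0,
        "initialBackoff", "0.5s",
        "maxBackoff", "10s",
        "backoffMultiplier", 2.0,
        "retryableStatusCodes", List.of("UNAVAILABLE"));
    Map<String, ?> serviceConfig = Map.of(
        "methodConfig", List.of(Map.of(
            "name", List.of(Map.of("service", "helloworld.Greeter")),
            "retryPolicy", retryPolicy)));

    ManagedChannel channel = ManagedChannelBuilder.forTarget("localhost:50051")
        .defaultServiceConfig(serviceConfig)
        .enableRetry() // retries are disabled by default
        .usePlaintext()
        .build();
    // RPCs on stubs built from this channel now retry UNAVAILABLE failures.
    channel.shutdownNow();
  }
}
```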

On Thu, Dec 26, 2019 at 12:58 PM Elhanan Maayan 
wrote:

> Will I get to see this error code? I mean, let's assume I've configured
> it with several retriable error codes; can my application receive any
> callback about it?
>
> On Thu, Dec 26, 2019 at 10:54 PM Eric Gribkoff 
> wrote:
>
>> A major difference is that, if retries are enabled, the server can reply
>> with an error code that's been configured as retryable and the client can
>> retry the RPC.
>>
>> On Thu, Dec 26, 2019 at 2:07 AM Elhanan Maayan 
>> wrote:
>>
>>>
>>> From my POV those seem to be identical, except for the fact that
>>> waitForReady doesn't execute calls until the channel is ready while
>>> enableRetry does; why should that matter? (if I'm using streaming RPC)
>>>


Re: [grpc-io] what's the difference between enableRetry and waitForReady?

2019-12-26 Thread 'Eric Gribkoff' via grpc.io
A major difference is that, if retries are enabled, the server can reply
with an error code that's been configured as retryable and the client can
retry the RPC.

On Thu, Dec 26, 2019 at 2:07 AM Elhanan Maayan  wrote:

>
> From my POV those seem to be identical, except for the fact that
> waitForReady doesn't execute calls until the channel is ready while
> enableRetry does; why should that matter? (if I'm using streaming RPC)
>


Re: [grpc-io] configure connect timeout for grpc java client?

2019-12-23 Thread 'Eric Gribkoff' via grpc.io
Based on your other post, it sounds like you are using wait for ready. I'm
unclear on why the client would be willing to wait, say, 1 minute for a
connection to establish but only 15 seconds for the server to respond once
it receives the RPC. It would seem more reasonable for the client to have a
single deadline that applies to the RPC itself, and any server-processing
"deadline" can be enforced by the server itself (it knows exactly when it
received the RPC).

If this doesn't apply to your scenario somehow, then you should explicitly
- and separately from any particular RPC - monitor the connection state of
your channel via
https://grpc.github.io/grpc-java/javadoc/io/grpc/ManagedChannel.html#getState-boolean-
 and
https://grpc.github.io/grpc-java/javadoc/io/grpc/ManagedChannel.html#notifyWhenStateChanged-io.grpc.ConnectivityState-java.lang.Runnable-
.
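
A minimal sketch of the single-deadline approach (withDeadlineAfter and
withWaitForReady are the real stub APIs; the stub and method names in the
comment are hypothetical):

```
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

import java.util.concurrent.TimeUnit;

public class DeadlineExample {
  public static void main(String[] args) {
    ManagedChannel channel =
        ManagedChannelBuilder.forTarget("localhost:50051").usePlaintext().build();
    // One deadline covers connecting *and* server processing. With
    // wait-for-ready the RPC queues until the channel is READY instead of
    // failing fast, and still fails with DEADLINE_EXCEEDED at 15 seconds:
    //
    //   GreeterGrpc.newBlockingStub(channel)
    //       .withWaitForReady()
    //       .withDeadlineAfter(15, TimeUnit.SECONDS)
    //       .sayHello(request);
    channel.shutdown();
  }
}
```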

- Eric

On Sat, Dec 21, 2019 at 8:26 AM Elhanan Maayan  wrote:

> This is what we initially did, with the deadline set to 15 seconds, so
> the deadline triggered first.
>
> The problem is that it hides the true feedback: UNAVAILABLE means you
> have a network configuration issue - you were unable to establish a
> connection to the server, whether because of a wrong IP, an unresponsive
> host, a firewall, etc. - while DEADLINE_EXCEEDED means you didn't get a
> response from the server.
> The fact that it starts counting from the moment you called a method, and
> not from the moment you actually connected, seems like a bug, because I'd
> be interested in a deadline measured from the moment I was actually able
> to send a signal to the server.
>
> On Friday, December 20, 2019 at 8:55:02 PM UTC+2, Eric Gribkoff wrote:
>>
>> Do you actually care about the underlying connection backoff attempts of
>> the channel? From your statement about getting an UNAVAILABLE response
>> within 10 seconds, it sounds like you are primarily interested in your RPCs
>> failing when the connection isn't established within a given time period.
>> For that, you should just set a deadline on your calls. This can be done on
>> a stub via
>> https://grpc.github.io/grpc-java/javadoc/io/grpc/stub/AbstractStub.html#withDeadlineAfter-long-java.util.concurrent.TimeUnit-,
>> where the set deadline applies to all calls on the returned stub - e.g.,
>> typical usage would be stub.withDeadlineAfter().rpcMethod().
>>
>> Thanks,
>>
>> Eric
>>
>> On Fri, Dec 20, 2019 at 2:24 AM 'Christoph John' via grpc.io <
>> grp...@googlegroups.com> wrote:
>>
>>> Hmm, never wanted to do that. But I guess you'll have to shutdown() the
>>> channel.
>>>
>>> Chris.
>>>
>>> On 20.12.19 09:20, Elhanan Maayan wrote:
>>>
>>> Thanks, I don't know how I missed that; I looked at it before.
>>> By the way, how can I stop it from retrying?
>>>
>>> On Thu, Dec 19, 2019 at 10:56 PM Christoph John 
>>> wrote:
>>>
 I think this can be done via the channel options of the
 NettyChannelBuilder.

 NettyChannelBuilder.forAddress( host, port ).withOption(
 ChannelOption.CONNECT_TIMEOUT_MILLIS, 1 ).build();

 Cheers,
 Chris.


 On 19.12.19 20:57, Elhanan Maayan wrote:

 Hi... is there a way to configure a connect timeout? For example, when
 trying to connect to a host that's not responding, it currently seems to
 be set to 20 seconds, but I'd like to shorten it to 10 seconds, after
 which I'll get UNAVAILABLE.

 I've looked at every Git issue there is, but it doesn't seem to be
 implemented, or I don't see where it's configured.



>>> --
>>> Christoph John
>>> Software Engineering
>>> T +49 241 557080-28christ...@macd.com
>>>
>>> MACD GmbH
>>> Oppenhoffallee 103
>>> 52066 Aachen, Germanywww.macd.com
>>>
>>> Amtsgericht Aachen: HRB 8151
>>> Ust.-Id: DE 813021663
>>> Geschäftsführer: George Macdonald
>>>

Re: [grpc-io] configure connect timeout for grpc java client?

2019-12-20 Thread 'Eric Gribkoff' via grpc.io
Do you actually care about the underlying connection backoff attempts of
the channel? From your statement about getting an UNAVAILABLE response
within 10 seconds, it sounds like you are primarily interested in your RPCs
failing when the connection isn't established within a given time period.
For that, you should just set a deadline on your calls. This can be done on
a stub via
https://grpc.github.io/grpc-java/javadoc/io/grpc/stub/AbstractStub.html#withDeadlineAfter-long-java.util.concurrent.TimeUnit-,
where the set deadline applies to all calls on the returned stub - e.g.,
typical usage would be stub.withDeadlineAfter().rpcMethod().

Thanks,

Eric

On Fri, Dec 20, 2019 at 2:24 AM 'Christoph John' via grpc.io <
grpc-io@googlegroups.com> wrote:

> Hmm, never wanted to do that. But I guess you'll have to shutdown() the
> channel.
>
> Chris.
>
> On 20.12.19 09:20, Elhanan Maayan wrote:
>
> Thanks, I don't know how I missed that; I looked at it before.
> By the way, how can I stop it from retrying?
>
> On Thu, Dec 19, 2019 at 10:56 PM Christoph John 
> wrote:
>
>> I think this can be done via the channel options of the
>> NettyChannelBuilder.
>>
>> NettyChannelBuilder.forAddress( host, port ).withOption(
>> ChannelOption.CONNECT_TIMEOUT_MILLIS, 1 ).build();
>>
>> Cheers,
>> Chris.
>>
>>
>> On 19.12.19 20:57, Elhanan Maayan wrote:
>>
>> Hi... is there a way to configure a connect timeout? For example, when
>> trying to connect to a host that's not responding, it currently seems to
>> be set to 20 seconds, but I'd like to shorten it to 10 seconds, after
>> which I'll get UNAVAILABLE.
>>
>> I've looked at every Git issue there is, but it doesn't seem to be
>> implemented, or I don't see where it's configured.
>>
>>
> --
> Christoph John
> Software Engineering
> T +49 241 557080-28 <+49%20241%2055708028>christoph.j...@macd.com
>
> MACD GmbH
> Oppenhoffallee 103
> 52066 Aachen, Germanywww.macd.com
>
> Amtsgericht Aachen: HRB 8151
> Ust.-Id: DE 813021663
> Geschäftsführer: George Macdonald
>



[grpc-io] Re: gRPC-Java v1.26.0 Released

2019-12-19 Thread 'Eric Gribkoff' via grpc.io
I overlooked some items that should have been included in the previous
release note email. The entry at
https://github.com/grpc/grpc-java/releases/tag/v1.26.0 is now amended, with
the following additions:

Dependencies

   - Bump google-auth-library-credentials and
   google-auth-library-oauth2-http to 0.18.0 (#6360)
   - Bump mockito to 2.28.2 (#6355)

Bug Fixes

   - alts: fix lazychannel close (#6475)

New Features

   - Pre-build binaries for the aarch64 platform
   - Add s390x cross-compiling support

Acknowledgements

   - Carl Mastrangelo (@carl-mastrangelo)
   - Elliotte Rusty Harold (@elharo)
   - Liu sheng (@liusheng)
   - Nayana Thorat (@Nayana-ibm)
   - Steve Rao (@steverao)
   - Tomo Suzuki (@suztomo)
   - Yongwoo Noh (@yonguno)


On Wed, Dec 18, 2019 at 3:40 PM Eric Gribkoff 
wrote:

> gRPC Java 1.26.0 is released and should be available on Maven Central and
> JCenter.
>
> https://github.com/grpc/grpc-java/releases/tag/v1.26.0
>
> Dependencies
>
>- Bump protobuf-java to 3.11.0.
>- Bump protobuf-javalite to 3.11.0. This brings lite in line with full
>protobuf. Be aware that the Maven artifact name changed for Protobuf lite.
>The dependency is now com.google.protobuf:protobuf-javalite instead of
>com.google.protobuf:protobuf-lite
>- Bump gson to 2.8.6
>
> Bug Fixes
>
>- netty, okhttp: Known IOExceptions are logged at FINE level
>- interop-testing, benchmarks: missing executables (since 1.19.0) are
>now published again
>- cronet: the grpc-cronet artifact contained an empty .aar because code
>shrinking was enabled; now it is fixed.
>
> API Changes
>
>- api, core: make channel logger accessible through NameResolver.Args
>(#6430)
>- api, core: make scheduled executor service accessible for
>NameResolver.Args (#6455)
>- stub, compiler: generated stubs now extend AbstractStub subclasses to
>indicate stub type (AbstractAsyncStub, AbstractBlockingStub,
>AbstractFutureStub)
>- api: the deprecated API ManagedChannelBuilder.usePlaintext(boolean) is
>removed (#1772, #6440).
>
>



[grpc-io] gRPC-Java v1.26.0 Released

2019-12-18 Thread 'Eric Gribkoff' via grpc.io
gRPC Java 1.26.0 is released and should be available on Maven Central and
JCenter.

https://github.com/grpc/grpc-java/releases/tag/v1.26.0

Dependencies

   - Bump protobuf-java to 3.11.0.
   - Bump protobuf-javalite to 3.11.0. This brings lite in line with full
   protobuf. Be aware that the Maven artifact name changed for Protobuf
   lite. The dependency is now com.google.protobuf:protobuf-javalite
   instead of com.google.protobuf:protobuf-lite
   - Bump gson to 2.8.6

Bug Fixes

   - netty, okhttp: Known IOExceptions are logged at FINE level
   - interop-testing, benchmarks: missing executables (since 1.19.0) are
   now published again
   - cronet: the grpc-cronet artifact contained an empty .aar because code
   shrinking was enabled; now it is fixed.

API Changes

   - api, core: make channel logger accessible through NameResolver.Args
   (#6430)
   - api, core: make scheduled executor service accessible for
   NameResolver.Args (#6455)
   - stub, compiler: generated stubs now extend AbstractStub subclasses to
   indicate stub type (AbstractAsyncStub, AbstractBlockingStub,
   AbstractFutureStub)
   - api: the deprecated API ManagedChannelBuilder.usePlaintext(boolean) is
   removed (#1772, #6440).



Re: [grpc-io] Re: Python Rendezvous exception when running python server, python client and C++ client on the same PC

2019-03-04 Thread 'Eric Gribkoff' via grpc.io
+Lidi Zheng , who will be available for any follow-up
questions (it will be easier for him to notice your questions if you
include his email address on the "to:" line)

Hi Alex,

Sorry for the delay. I was not able to reproduce the problem; it looks like
you are running on Windows, in which case gRPC's fork handlers are not
registered/run, so those shouldn't be the cause here. Since the
reproduction example also uses CherryPy websockets, it's quite possible the
issue stems from that software rather than the gRPC stack - we'd likely
need a reproduction case that only uses gRPC, without the websockets, to be
able to help debug this further.

Thanks,

Eric

On Mon, Mar 4, 2019 at 2:36 AM Alex  wrote:

> Hi Eric,
>
> Just wondering if you had time to run my attached example and managed to
> reproduce the problem?
>
> Thanks,
> Alex.
>
> On Wednesday, February 20, 2019 at 7:04:51 PM UTC, Eric Gribkoff wrote:
>>
>> Can you post the code you're using to reproduce this error? If you're
>> using subprocess.Popen (or otherwise using fork+exec) to start the C++ grpc
>> client process, the C++ client itself cannot be interfering with the Python
>> process. Something could be going wrong in the gRPC core fork handlers,
>> however - you can try running with the environment variable
>> `GRPC_ENABLE_FORK_SUPPORT=0` to disable this feature and see if it fixes
>> the issue.
>>
>> Also, in your step 5 you note that the C++ client isn't communicating
>> with the server. If you remove the fork+exec of a C++ subprocess
>> altogether, do you still see this intermittent exception in the Python
>> client?
>>
>> Eric
>>
>> On Wed, Feb 20, 2019 at 6:57 AM Alex  wrote:
>>
>>> I should add that the Python client application which owns the Python
>>> grpc client is the one that runs the C++ grpc client as a subprocess in
>>> case that makes a difference.


Re: [grpc-io] Re: Python Rendezvous exception when running python server, python client and C++ client on the same PC

2019-02-20 Thread 'Eric Gribkoff' via grpc.io
Can you post the code you're using to reproduce this error? If you're using
subprocess.Popen (or otherwise using fork+exec) to start the C++ grpc
client process, the C++ client itself cannot be interfering with the Python
process. Something could be going wrong in the gRPC core fork handlers,
however - you can try running with the environment variable
`GRPC_ENABLE_FORK_SUPPORT=0` to disable this feature and see if it fixes
the issue.
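
For example (a sketch - the shell form is the most reliable way to set it,
since the variable must be visible before gRPC initializes):

# In the shell:
#   GRPC_ENABLE_FORK_SUPPORT=0 python client.py
#
# Or in-process, set before grpc is imported:
import os
os.environ['GRPC_ENABLE_FORK_SUPPORT'] = '0'

import grpc  # must come after the environment variable is set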

Also, in your step 5 you note that the C++ client isn't communicating with
the server. If you remove the fork+exec of a C++ subprocess altogether, do
you still see this intermittent exception in the Python client?

Eric

On Wed, Feb 20, 2019 at 6:57 AM Alex  wrote:

> I should add that the Python client application which owns the Python grpc
> client is the one that runs the C++ grpc client as a subprocess in case
> that makes a difference.
>


Re: [grpc-io] Re: gRPC Android stub creation best practice? DEADLINE_EXCEEDED but no request made to server

2019-01-17 Thread 'Eric Gribkoff' via grpc.io
On Thu, Jan 17, 2019 at 12:31 PM  wrote:

> After researching a bit, I believe the issue was that the proxy on the
> server was closing the connection after a few minutes of idle time, and the
> client ManagedChannel didn't automatically detect that and connect again
> when that happened. When constructing the ManagedChannel, I added an
> idleTimeout to it, which will proactively kill the connection when it's
> idle, and reestablish it when it's needed again, and this seems to solve
> the problem. So the new channel construction looks like this:
>
> @Singleton
> @Provides
> fun providesMyClient(app: Application): MyClient {
>     val channel = AndroidChannelBuilder
>         .forAddress("example.com", 443)
>         .overrideAuthority("example.com")
>         .context(app.applicationContext)
>         .idleTimeout(60, TimeUnit.SECONDS)
>         .build()
>     return MyClient(channel)
> }
>
> To anyone who might see this, does that seem like a plausible explanation?
>
>
The explanation seems plausible, but I would generally expect that when the
proxy closes the connection, this would be noticed by the gRPC client. For
example, if the TCP socket is closed by the proxy, then the managed channel
will see this and try to reconnect. Can you provide some more details about
what proxy is in use, and how you were able to determine that the proxy is
closing the connection?

If you can deterministically reproduce the DEADLINE_EXCEEDED errors from
the original email, it may also be helpful to ensure that you observe the
same behavior when using OkHttpChannelBuilder directly instead of
AndroidChannelBuilder. AndroidChannelBuilder is only intended to respond to
changes in the device's internet state, so it should be irrelevant to
detecting (or failing to detect) server-side disconnections, but it's a
relatively new feature and worth ruling out as a source of the problem.

Thanks,

Eric




>
> On Wednesday, January 16, 2019 at 7:30:42 PM UTC-6, davis@gmail.com
> wrote:
>>
>> I believe I may not understand something about how gRPC Channels, Stubs,
>> And Transports work. I have an Android app that creates a channel and a
>> single blocking stub and injects it with dagger when the application is
>> initialized. When I need to make a grpc call, I have a method in my client,
>> that calls a method with that stub. After the app is idle a while, all of
>> my calls return DEADLINE_EXCEEDED errors, though there are no calls showing
>> up in the server logs.
>>
>> @Singleton
>> @Provides
>> fun providesMyClient(app: Application): MyClient {
>>     val channel = AndroidChannelBuilder
>>         .forAddress("example.com", 443)
>>         .overrideAuthority("example.com")
>>         .context(app.applicationContext)
>>         .build()
>>     return MyClient(channel)
>> }
>>
>> Where my client class has a function to return a request with a deadline:
>>
>> class MyClient(channel: ManagedChannel) {
>>     private val blockingStub: MyServiceGrpc.MyServiceBlockingStub =
>>         MyServiceGrpc.newBlockingStub(channel)
>>
>>     fun getStuff(): StuffResponse =
>>         blockingStub
>>             .withDeadlineAfter(7, TimeUnit.SECONDS)
>>             .getStuff(stuffRequest())
>>
>>     fun getOtherStuff(): StuffResponse =
>>         blockingStub
>>             .withDeadlineAfter(7, TimeUnit.SECONDS)
>>             .getOtherStuff(stuffRequest())
>> }
>>
>> I make the calls to the server inside a LiveData class in My Repository,
>> where the call looks like this: myClient.getStuff()
>>
>> I am guessing that the channel loses its connection at some point, and
>> then all of the subsequent stubs simply can't connect, but I don't see
>> anywhere in the AndroidChannelBuilder documentation that talks about how to
>> handle this (I believed it reconnected automatically). Is it possible that
>> the channel I use to create my blocking stub gets stale, and I should be
>> creating a new blocking stub each time I call getStuff()? Any help in
>> understanding this would be greatly appreciated.
>>

Re: [grpc-io] Kill server task if client disconnects? (python)

2018-12-18 Thread 'Eric Gribkoff' via grpc.io
On Tue, Dec 18, 2018 at 10:45 AM  wrote:

> Thanks, Eric.  That makes some degree of sense, although there are a few
> cases we still won't be able to deal with, I suspect (and we may have
> trouble later anyway... in some cases our server program has to shell out
> to run a separate program, and if that runs into the fork trouble and can't
> be supported by GRPC we may be stuck with a very clanky REST
> implementation).
>
>
Sorry, I should have been more precise in my earlier response: you are fine
to use fork+exec (e.g., subprocess.Popen) to run a separate program in a
new shell. (Caveat: we had a bug that may cause problems even with
fork+exec when using Python 3. The fix is now merged and will be in the
next release; our nightly builds will also include the fix ~tomorrow if you
are hitting this issue.) The issues on the server side with fork arise when
using libraries that fork and, rather than exec'ing a new program, continue
to run the original program in the child process, e.g., Python's
multiprocessing module.
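
To make the distinction concrete, a minimal sketch:

import multiprocessing
import subprocess

def some_function():
    pass  # placeholder for work that would run in a forked child

# Fine: fork+exec - the child immediately becomes a new program and never
# runs gRPC state inherited from the parent.
subprocess.Popen(['/bin/echo', 'hello'])

# Problematic: fork without exec - the child keeps executing the parent's
# Python process image, including any inherited gRPC threads/state.
multiprocessing.Process(target=some_function).start()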



> Hmm, quite a pickle.  I can see I'll be playing with a bunch of toy
> problems for a bit before even considering doing a migration to GRPC.  Most
> disagreeable, but we'll see what we get.
>
> Can grpc client stubs be used from within grpc servicers?  (imagining
> fracturing this whole thing into microservices even if that doesn't solve
> this particular problem).
>

Absolutely, and that's an intended/common usage.

Thanks,

Eric


>
> On Tuesday, December 18, 2018 at 12:32:15 PM UTC-6, Eric Gribkoff wrote:
>>
>>
>>
>> On Tue, Dec 18, 2018 at 10:17 AM  wrote:
>>
>>> Hmm; I'm having some luck looking at the context, which quite happily
>>> changes from is_active() to not is_active() the instant I kill the waiting
>>> client.  So I thought I'd proceed with something like
>>>
>>> while not my_future.done():
>>>     if not context.is_active():
>>>         my_future.cancel()
>>>
>>>
>> Consider using add_callback
>>  on
>> the RpcContext instead, so you don't have to poll.
>>
>>
>>> Terminating the worker thread/process is actually vexing me though!  I
>>> tried having a ThreadPoolExecutor to give me a future for the worker task,
>>> but you can't really cancel a future from a thread, it turns out (you can
>>> only cancel it if it hasn't started running; once it's started, it still
>>> goes to completion).  So I've tried having a separate ProcessPoolExecutor
>>> (maybe processes can be killed?) but that's not actually going so well
>>> either, as attempts to use that to generate futures results in some odd
>>> "Failed accept4: Invalid Argument" errors which I can't quite work through.
>>>
>>>
>> ProcessPoolExecutor will fork subprocesses, and gRPC servers (and many
>> other multi-threaded libraries) are not compatible with this. There is some
>> discussion around this in https://github.com/grpc/grpc/issues/16001. You
>> could pre-fork (fork before creating the gRPC server), but I don't think
>> this will help with your goal of cancelling long-running jobs. It's
>> difficult to cleanly kill subprocesses, as they may be in the middle of an
>> operation that you would really like to clean up gracefully.
>>
>>
>>> Most confusing.  I wonder if I'll need to subclass grpc.server or if my
>>> servicer can manually run a secondary process or some such.
>>>
>>> Still, surprising to me this isn't a solved problem built into GRPC.  I
>>> feel like I'm missing something really obvious.
>>>
>>>
>> I wouldn't consider cancelling long running jobs spawned by your server
>> as part of the functionality that gRPC is intended for - this is a task
>> that can came up regardless of what server protocol you are using, and will
>> arise often even on non-server applications. A standard approach for this
>> in a multi-threaded environment would be setting a cancel boolean variable
>> (e.g., in your gRPC servicer implementation) that your task (the
>> long-running subroutine) periodically checks for to exit early. This should
>> be compatible with ThreadPoolExecutor.
>>
>> Thanks,
>>
>> Eric
>>
>>
>>> On Monday, December 17, 2018 at 1:35:41 PM UTC-6, robert engels wrote:

 You don’t have to - just use the future as described - if the stream is
 cancelled by the client - you can cancel the future - if the future
 completes you send the result back in the stream (if any) - you don’t have
 to keep sending messages as long as the keep alive is on.

 On Dec 17, 2018, at 1:32 PM, vbp...@gmail.com wrote:

 Good idea, but the problem I have with this (if I understand you right)
 is that some of the server tasks are just these big monolithic calls that
 sit there doing CPU-intensive work (sometimes in a third-party library;
 it's not trivial to change them to stream back progress reports or
 anything).

 So it feels like some way of running them in a separate thread and
 having an overseer method able to kill them if the client disconnects is
 the way to go.

Re: [grpc-io] Kill server task if client disconnects? (python)

2018-12-18 Thread 'Eric Gribkoff' via grpc.io
On Tue, Dec 18, 2018 at 10:17 AM  wrote:

> Hmm; I'm having some luck looking at the context, which quite happily
> changes from is_active() to not is_active() the instant I kill the waiting
> client.  So I thought I'd proceed with something like
>
> while not my_future.done():
>     if not context.is_active():
>         my_future.cancel()
>
>
Consider using add_callback on the RpcContext instead, so you don't have to
poll.
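
A sketch of that approach (the servicer and generated-module names are
hypothetical, and note that your caveat below still applies - cancel() only
stops a job that has not yet started running):

from concurrent import futures

import my_pb2_grpc  # hypothetical generated module

_worker_pool = futures.ThreadPoolExecutor(max_workers=4)

def long_running_job(request):
    ...  # the CPU-intensive work

class MyServicer(my_pb2_grpc.MyServiceServicer):
    def MyMethod(self, request, context):
        job = _worker_pool.submit(long_running_job, request)
        # Runs when the RPC terminates for any reason, including the
        # client disconnecting mid-call.
        context.add_callback(job.cancel)
        # Raises CancelledError (failing the RPC) if the job was cancelled
        # before it started.
        return job.result()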


> Terminating the worker thread/process is actually vexing me though!  I
> tried having a ThreadPoolExecutor to give me a future for the worker task,
> but you can't really cancel a future from a thread, it turns out (you can
> only cancel it if it hasn't started running; once it's started, it still
> goes to completion).  So I've tried having a separate ProcessPoolExecutor
> (maybe processes can be killed?) but that's not actually going so well
> either, as attempts to use that to generate futures results in some odd
> "Failed accept4: Invalid Argument" errors which I can't quite work through.
>
>
ProcessPoolExecutor will fork subprocesses, and gRPC servers (and many
other multi-threaded libraries) are not compatible with this. There is some
discussion around this in https://github.com/grpc/grpc/issues/16001. You
could pre-fork (fork before creating the gRPC server), but I don't think
this will help with your goal of cancelling long-running jobs. It's
difficult to cleanly kill subprocesses, as they may be in the middle of an
operation that you would really like to clean up gracefully.


> Most confusing.  I wonder if I'll need to subclass grpc.server or if my
> servicer can manually run a secondary process or some such.
>
> Still, surprising to me this isn't a solved problem built into GRPC.  I
> feel like I'm missing something really obvious.
>
>
I wouldn't consider cancelling long-running jobs spawned by your server as
part of the functionality that gRPC is intended for - this is a task that
can come up regardless of what server protocol you are using, and will
arise often even in non-server applications. A standard approach for this
in a multi-threaded environment is to set a cancel boolean variable (e.g.,
in your gRPC servicer implementation) that your task (the long-running
subroutine) periodically checks in order to exit early. This should be
compatible with ThreadPoolExecutor.
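
For example (a sketch; process() stands in for one unit of your real work):

import threading

def process(chunk):
    pass  # placeholder for one unit of CPU-intensive work

class CancellableJob:
    """Long-running task that periodically checks a cancel flag."""

    def __init__(self, chunks):
        self._chunks = chunks
        self.cancelled = threading.Event()

    def run(self):
        for chunk in self._chunks:
            if self.cancelled.is_set():
                return  # the client went away; exit early
            process(chunk)

# In the servicer method (sketch):
#   job = CancellableJob(chunks)
#   context.add_callback(job.cancelled.set)  # flip the flag on disconnect
#   job.run()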

Thanks,

Eric


> On Monday, December 17, 2018 at 1:35:41 PM UTC-6, robert engels wrote:
>>
>> You don’t have to - just use the future as described - if the stream is
>> cancelled by the client - you can cancel the future - if the future
>> completes you send the result back in the stream (if any) - you don’t have
>> to keep sending messages as long as the keep alive is on.
>>
>> On Dec 17, 2018, at 1:32 PM, vbp...@gmail.com wrote:
>>
>> Good idea, but the problem I have with this (if I understand you right)
>> is that some of the server tasks are just these big monolithic calls that
>> sit there doing CPU-intensive work (sometimes in a third-party library;
>> it's not trivial to change them to stream back progress reports or
>> anything).
>>
>> So it feels like some way of running them in a separate thread and having
>> an overseer method able to kill them if the client disconnects is the way
>> to go.  We're already using a ThreadPoolExecutor to run worker threads so I
>> feel like there's something that can be done on that side... just seems
>> like this ought to be a Really Common Problem, so I'm surprised it's either
>> not directly addressed or at least commonly answered.
>>
>> On Monday, December 17, 2018 at 1:27:39 PM UTC-6, robert engels wrote:
>>>
>>> You can do this if you use the streaming protocol - that is the only way
>>> I know to have any facilities to determine when a “client disconnects”.
>>>
>>> On Dec 17, 2018, at 1:24 PM, vbp...@gmail.com wrote:
>>>
>>> I'm sure it's been answered before but I've searched for quite a while
>>> and not found anything, so apologies:
>>>
>>> We're using python... we've got server tasks that can last quite a while
>>> (minutes) and chew up lots of CPU.  Right now we're using REST, and when/if
>>> the client disconnects before return, the task keeps running on the server
>>> side.  This is unfortunate; it's costly (since the server may be using
>>> for-pay services remotely, leaving the task running could cost the client)
>>> and vulnerable (a malicious client could just start and immediately
>>> disconnect hundreds of tasks and lock the server up for quite a while).
>>>
>>> I was hoping that a move to GRPC, in addition to solving other problems,
>>> would provide a clean way to deal with this.  But it's not immediately
>>> obvious how to do so.  I could see maybe manually starting a thread/Future
>>> for the worker process and iterating sleeping until either the context is
>>> invalid or the thread/future returns, but I feel like that's manually
>>> hacking something that probably exists 

Re: [grpc-io] how do I write tests for my gRPC server? (Python)

2018-12-12 Thread 'Eric Gribkoff' via grpc.io
This was answered on Gitter earlier this week, but reposting my response
here for anyone who finds this in the future:

We don't have a stand-alone example of unit testing a gRPC service (yet - I
> just filed grpc/grpc#17453).
> The APIs documented at https://grpc.io/grpc/python/grpc_testing.html are
> what you should be using, but the best I can do right now as far as
> pointing you to a "simple" usage of these is the
> test_successful_unary_unary method in
> https://github.com/grpc/grpc/blob/master/src/python/grpcio_tests/tests/testing/_server_test.py
>
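
Applied to your HelloService below, a test could look roughly like this (a
sketch against the grpc_testing API; it assumes the generated module is
hello_pb2 and that HelloServicer is importable from a server module):

import unittest

import grpc
import grpc_testing

import hello_pb2
from server import HelloServicer  # hypothetical location of your servicer

class HelloServiceTest(unittest.TestCase):
    def setUp(self):
        servicers = {
            hello_pb2.DESCRIPTOR.services_by_name['HelloService']:
                HelloServicer()
        }
        self._server = grpc_testing.server_from_dictionary(
            servicers, grpc_testing.strict_real_time())

    def test_say_hello(self):
        method = (hello_pb2.DESCRIPTOR
                  .services_by_name['HelloService']
                  .methods_by_name['SayHello'])
        rpc = self._server.invoke_unary_unary(
            method, (), hello_pb2.HelloReq(Name='Bob'), None)
        response, trailing_metadata, code, details = rpc.termination()
        self.assertEqual(grpc.StatusCode.OK, code)
        self.assertEqual('Hey, Bob!', response.Result)

if __name__ == '__main__':
    unittest.main()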

On Sun, Dec 9, 2018 at 10:27 PM  wrote:

> Let's say I have a simple proto:
>
> service HelloService {
>   rpc SayHello(HelloReq) returns (HelloResp) {};
> }
>
>
> message HelloReq {
>   string Name = 1;
> }
>
>
> message HelloResp {
>   string Result = 1;
> }
>
>
>
> and my SayHello is implemented like:
>
> class HelloServicer(hello_pb2_grpc.HelloServiceServicer):
>
>
>     def SayHello(self, request, context):
>         if len(request.Name) >= 10:
>             msg = 'Length of `Name` cannot be more than 10 characters'
>             context.set_details(msg)
>             context.set_code(grpc.StatusCode.INVALID_ARGUMENT)
>             return hello_pb2.HelloResp()
>
>
>         return hello_pb2.HelloResp(Result="Hey, {}!".format(request.Name))
>
> Can anyone tell me how do I write tests for my server? I checked this repo
> -
> https://github.com/grpc/grpc/tree/master/src/python/grpcio_tests/tests/testing
> or the docs here - https://grpc.io/grpc/python/grpc_testing.html but I
> couldn't understand anything at all.
>
> I really appreciate some help, thank you!
>


[grpc-io] Re: crash when running server on a used address

2018-11-07 Thread 'Eric Gribkoff' via grpc.io

This question was also asked on github, so linking to the discussion for 
future searchers: https://github.com/grpc/grpc/issues/17075


On Wednesday, October 31, 2018 at 10:10:44 AM UTC-7, Stefan Seefeld wrote:
>
>
> Hello,
>
> I'm using the C++ API to write an RPC server, following the provided 
> example code:
> ```
>   std::string server_address("0.0.0.0:50051");
>   GreeterServiceImpl service;
>   ServerBuilder builder;
>   builder.AddListeningPort(server_address, 
> grpc::InsecureServerCredentials());
>   builder.RegisterService(&service);
>   std::unique_ptr<Server> server(builder.BuildAndStart());
>   server->Wait();
> ```
>
> whenever another process is already using the above address, this code 
> results in a segfault, with this error being printed out:
>
> E1031 13:01:42.858837542   17537 server_chttp2.cc:40]
> {"created":"@1541005302.858452834","description":"No address added out of 
> total 1 
> resolved","file":"/home2/shenderson/GRPC/Google/grpc/git/grpc/src/core/ext/transport/chttp2/server/chttp2_server.cc","file_line":307,"referenced_errors":[{"created":"@1541005302.858443799","description":"Failed
>  
> to add any wildcard 
> listeners","file":"/home2/shenderson/GRPC/Google/grpc/git/grpc/src/core/lib/iomgr/tcp_server_posix.cc","file_line":324,"referenced_errors":[{"created":"@1541005302.858395680","description":"Unable
>  
> to configure 
> socket","fd":5,"file":"/home2/shenderson/GRPC/Google/grpc/git/grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":214,"referenced_errors":[{"created":"@1541005302.858378854","description":"OS
>  
> Error","errno":98,"file":"/home2/shenderson/GRPC/Google/grpc/git/grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":187,"os_error":"Address
>  
> already in 
> use","syscall":"bind"}]},{"created":"@1541005302.858442040","description":"Unable
>  
> to configure 
> socket","fd":5,"file":"/home2/shenderson/GRPC/Google/grpc/git/grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":214,"referenced_errors":[{"created":"@1541005302.858431608","description":"OS
>  
> Error","errno":98,"file":"/home2/shenderson/GRPC/Google/grpc/git/grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":187,"os_error":"Address
>  
> already in use","syscall":"bind"}]}]}]}
> Segmentation fault (core dumped)
>
> I understand that the error stems from an unsuccessful socket bind. What I 
> don't understand is whether I should check some error condition in my own 
> code above, or whether it's the gRPC implementation that wrongly ignores 
> this error internally, later causing the crash.
>
> I'm running gRPC 1.14.0 (on Linux).
>
> Is this a known issue ?
>
> Thanks,
> Stefan
>



Re: [grpc-io] Re: tcp connection management

2018-08-08 Thread 'Eric Gribkoff' via grpc.io
There should only be a single TCP connection when sending five unary calls.
Can you post a code sample of how you are testing this? It sounds like you
might be re-creating the gRPC channel for each call, which would create a
separate TCP connection for each RPC. You should create only one channel,
and use this to send multiple RPCs over the same TCP connection.
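
Sketched in Python for concreteness (generated module names are
hypothetical); the Java pattern is the same shape - build one
ManagedChannel, create your stub(s) from it, and reuse it for every call:

import grpc

import helloworld_pb2        # hypothetical generated modules
import helloworld_pb2_grpc

# One channel maps to one underlying HTTP/2 (TCP) connection per target.
channel = grpc.insecure_channel('localhost:50051')
stub = helloworld_pb2_grpc.GreeterStub(channel)

# All five unary calls are multiplexed over that single connection.
for i in range(5):
    stub.SayHello(helloworld_pb2.HelloRequest(name=str(i)))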

Eric


On Tue, Aug 7, 2018 at 4:22 PM  wrote:

> BTW, I am using grpc-java
>
> On Tuesday, August 7, 2018 at 4:21:53 PM UTC-7, eleano...@gmail.com wrote:
>>
>>
>> Hi,
>>
>> I am doing an experiment to decide whether my application should choose
>> unary call, or bi-directional streaming. Here is what I observe by enable
>> the debug logging:
>>
>> for unary call, the tcp connection is created per call:
>>
>> client side single thread making 5 calls in a for loop: total 5 tcp
>> connections - using blocking stub
>> client side multi-threaded making 5 calls at the same time: total 5 tcp
>> connections - using blocking stub
>> bi-directional streaming making 5 requests: total 1 tcp connection -
>> using async stub
>>
>> So that means for unary call, it will always create new tcp connection
>> every time? Can you please confirm this behaviour?
>>
>> Thanks!
>>


[grpc-io] Re: How to run Python examples with gRPC built from sources?

2018-04-11 Thread 'Eric Gribkoff' via grpc.io
Build instructions for gRPC Python are located 
in https://github.com/grpc/grpc/blob/master/src/python/grpcio/README.rst.

Thanks,

Eric

On Saturday, April 7, 2018 at 2:23:39 PM UTC-7, Mohamed Moanis wrote:
>
> Hi gRPC!
>
> I have a question about running Python examples and working with gRPC 
> Python in general.
>
> How to build/install the gRPC Python from sources so that I can use it for 
> example with the examples? Right now I followed the tutorial and installed 
> grpcio and grpc-tools in a virtualenv to run the examples but I want to use 
> a build from source.
>
> Additionally, how can I "make clean" the Cython files I built with 
> "setup.py"?
>
> Thanks.
>



[grpc-io] Re: grpc streaming large file

2018-04-11 Thread 'Eric Gribkoff' via grpc.io

Either approach could be appropriate: dividing the large message into 
chunks or sending it all in one message (note: gRPC has a default max 
message size that varies by language but can be configured). Which one 
performs best for your use case will depend on a number of other factors. 
There was some earlier discussion around this 
in 
https://groups.google.com/forum/?utm_medium=email_source=footer#!msg/grpc-io/MbDTqNXhv7o/cvPjrhwCAgAJ.
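
To make both options concrete, here is a Python sketch (the channel option
keys are standard gRPC core options; the servicer half assumes a request
stream whose first message carries the parameters and later messages carry
file chunks, along the lines of your proposed proto below; applis_pb2,
applis_pb2_grpc, and run_mesher are hypothetical):

import grpc

import applis_pb2        # hypothetical generated modules
import applis_pb2_grpc

# Option 1: send the file in a single message and raise the size limits on
# both client and server (100 MB here is an arbitrary choice).
_MAX = 100 * 1024 * 1024
channel = grpc.insecure_channel('localhost:50051', options=[
    ('grpc.max_send_message_length', _MAX),
    ('grpc.max_receive_message_length', _MAX),
])

def run_mesher(image_bytes, params):
    ...  # placeholder for the meshing work; returns the NAS file as bytes

# Option 2: chunked streaming with a oneof, so the parameters and the file
# data travel in one call and the server stays a simple loop.
class ApplisServicer(applis_pb2_grpc.applisServicer):
    def GenerateVoxelMesh(self, request_iterator, context):
        params = None
        image = bytearray()
        for req in request_iterator:
            if req.WhichOneof('test_oneof') == 'mesh_params':
                params = req.mesh_params
            else:
                image.extend(req.image_input.chunk)
        nas = run_mesher(bytes(image), params)
        chunk_size = 64 * 1024
        for offset in range(0, len(nas), chunk_size):
            yield applis_pb2.VoxelMeshReply(
                nas_output=applis_pb2.FileChunk(
                    chunk=nas[offset:offset + chunk_size]))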

Thanks,

Eric

On Tuesday, April 10, 2018 at 2:39:51 PM UTC-7, Weidong Lian wrote:
>
> Hello grpcs,
>
> I have the following task sent from client to server.
>
> service applis {
>   rpc GenerateVoxelMesh(VoxelMeshRequest) returns (VoxelMeshReply) {}
> }
>
> message VoxelMeshRequest {
>   any image_input = 1;
>   bool smooth = 4;
>   int32 iterations = 5;   
>   double mu = 6;  
>   double lambda = 7;   
>   double scale = 8;  
>   repeated int32 part_ids = 9; 
> }
>
> message VoxelMeshReply {
>   any nas_output = 1; // text file
> }
>
> The image_input, nas_output are the binary files that can be fairly large 
> sometimes. I would guess the `any` is not a recommended type. 
> It is preferred to use stream chunk bytes to send and receive the image 
> and nas files. However, if we stream chunk file, we can not send
> the other request parameters at one call. We will have to make multiple 
> calls and make server side a state machine. It increases the complexity.
>
> I am just wondering if there any more element design or what the idiomatic 
> way of doing this in grpc? 
>
> the possible design like below.
>
> service applis {
>   rpc GenerateVoxelMesh(stream VoxelMeshRequest) returns (stream 
> VoxelMeshReply) {}
> }
>
> message VoxelMeshRequest {
> oneof test_oneof {
>    FileChunk image_input = 1;
>    VoxelMeshParameters mesh_params = 2;
> }
> }
>
> message FileChunk {
>  bytes chunk = 1;
> }
>
> message VoxelMeshParameters {
>   bool smooth = 4;
>   int32 iterations = 5;   
>   double mu = 6;  
>   double lambda = 7;   
>   double scale = 8;  
>   repeated int32 part_ids = 9; 
> }
>
> message VoxelMeshReply {
>   FileChunk nas_output = 1; // text file
> }
>
> Any suggestion will be appreciated. 
> Thanks in advance,
> Weidong
>



Re: [grpc-io] Re: [gRPC-Java] Minimum version of Guava required to work with gRPC.

2018-03-02 Thread 'Eric Gribkoff' via grpc.io
You mention proguard, so I assume you're building for Android? If you're
running proguard, you shouldn't be seeing anything like a 2.3MB size from
the guava dependency. You can see the proguard configuration for our
Android interop test app here:
https://github.com/grpc/grpc-java/blob/master/android-interop-testing/app/proguard-rules.pro
.

With this configuration, our entire interop app APK after proguard is only
1.4MB, and the com.google.common.* dependencies from Guava are only 65.7KB.

Eric


On Fri, Mar 2, 2018 at 12:46 PM, 'Carl Mastrangelo' via grpc.io <
grpc-io@googlegroups.com> wrote:

> We (gRPC) avoid depending on anything in common.collect from guava, so in
> theory you should be able to trim your deps down.  I don't know about open
> census's API.
>
> On Thursday, March 1, 2018 at 9:48:59 PM UTC-8, dknku...@gmail.com wrote:
>>
>> Hi Devs,
>>
>> Currently I am working on implementing gRPC support for my product. Since
>> grpc-core has a transitive dependency to guava library. I need to include
>> guava library alone with grpc libraries in my distribution. Since guava
>> library size is around 2.3MB, I am trying to create miniature version of
>> guava library using ProGuard[1].
>>
>> After going through the gRPC-java code, identified following
>> classes/packages are used in gRPC-core library and dependent opencensus-api,
>>
>> com.google.common.base.**
>>
>> com.google.common.util.concurrent.**
>>
>> com.google.common.collect.Maps
>>
>> com.google.common.collect.ImmutableMultiset
>>
>> com.google.common.collect.ImmutableList
>>
>> com.google.common.collect.HashMultiset
>>
>> com.google.common.collect.Lists
>>
>> com.google.common.collect.Multiset
>>
>> com.google.common.io.**
>>
>>
>> I am able to create miniature version which is only 850kB size and it
>> worked fine for the basic gRPC operation. I would like to know whether it
>> is ok to create miniature guava version to work with gRPC or is there other
>> recommended way. It would be great, if you can give me list of guava
>> classes/packages(minimum version of Guava we require) used in gRPC.
>>
>> 1. https://github.com/google/guava/wiki/UsingProGuardWithGuava
>>
>> Appreciate your response.
>>
>> Thanks
>> Danesh
>>


[grpc-io] gRPC Java v1.10.0 Released

2018-02-15 Thread 'Eric Gribkoff' via grpc.io
gRPC Java 1.10.0 is now released and available on Maven Central. 

v1.11.0 is scheduled for March 26th.

Dependencies

   - Update opencensus to 0.11.0 (#3993).

Bug fixes

   - bazel: fix protobuf sha256 (#3924).
   - core: fix a regression in v1.9.0 where the user-agent would be
   "grpc-java-netty1.9.0" instead of "grpc-java-netty/1.9.0" (#3974).
   - netty: work around a Netty regression in v1.9.0 (netty/netty#7639)
   which caused TLS failures to be reported as "UNAVAILABLE: Channel closed
   while performing protocol negotiation" instead of with useful failure
   information (#4033, #3997).
   - netty: fix a regression in v1.9.0 where using GRPC_PROXY_EXP with the
   Netty transport would cause an UnresolvedAddressException (#4027).
   ProxySelector (including the default one that processes
   -Dhttps.proxyHost) is still known-broken for Netty; this is being
   tracked in #4029.
   - netty: avoid a NullPointerException in
   NettyServerHandler.newStreamException (#3932).
   - core: MethodDescriptor.toBuilder() now copies the schemaDescriptor
   (#3941).
   - core: change the @Internal and @ExperimentalApi retention policies to
   CLASS, to enable the upcoming grpc/grpc-java-api-checker to access these
   annotations (#3994).
   - compiler: avoid invoking an experimental method in generated code, to
   clean up the output of the upcoming grpc/grpc-java-api-checker (#4055).

API changes

   - core: delete outboundMessage() and inboundMessage() on StreamTracer
   (#4014).
   - core: deprecate passing ServerCall to StatsTraceContext (#3912).

Features

   - netty: use Java 9 ALPN if available (#3555). We still want to improve
   testing with Java 9 ALPN, but it has been reported to work.
   - cronet: add build support (#3965).
   - bazel: use the same library versions that Gradle uses (#3911). (This
   upgrades Bazel to Netty 4.1.17.)



[grpc-io] Re: gRPC via USB

2018-02-14 Thread 'Eric Gribkoff' via grpc.io

USB as a transport isn't currently supported. There was a recent discussion 
around this with further details 
in 
https://groups.google.com/forum/?utm_medium=email_source=footer#!msg/grpc-io/rCOTPM65A7U/QkwcLEV3AQAJ

Thanks,

Eric

On Tuesday, February 13, 2018 at 9:44:49 AM UTC-8, banshe...@googlemail.com 
wrote:
>
> Hey,
>
> is it possible to somehow use gRPC with USB, be it with 3rd party 
> implementations? The bandwidth limitation of Gigabit Ethernet is an issue 
> for us and we'd like to achieve the data rates of a USB connection for big 
> data types.
>



Re: [grpc-io] What is the minimum android/iOS version that gRPC support?

2018-01-16 Thread 'Eric Gribkoff' via grpc.io
+mxyan for iOS. On Android, we support API levels 14 and up, as this
matches the requirement of recent versions of Google Play Services, which
is used to obtain an up-to-date TLS 1.2 implementation on older phones: see
our security doc. You may be able to get gRPC running on older Android API
levels, but this isn't something we actively test or (at this time)
explicitly support.

Eric

On Tue, Jan 16, 2018 at 8:50 PM, 'yz' via grpc.io 
wrote:

> I did not find it on the gRPC website. Does anyone have a clue?
>



Re: [grpc-io] IllegalStateException from SimpleForwardingServerCallListener#onHalfClose

2018-01-09 Thread 'Eric Gribkoff' via grpc.io
Can you provide the IllegalStateException that you're seeing? Is the
exception coming from your ServerInterceptor implementation or from the
delegate Listener?

On Mon, Jan 8, 2018 at 6:46 PM,  wrote:

> Hi,
>
> We have a pretty straight forward ServerInterceptor implementation that
> seems to randomly throw an IllegalStateException from its onHalfClose
> method, which is simply calling the original ServerCall.Listener (from
> next.startCall(...)) - the server then gets a CANCELLED result from the
> client, and the connection (long term bidi stream) is then lost.
>
> What could be causing this? Is there a way to get more info from the logs?
>
> Thanks,
> - Matt
>
> Btw, this is the exception we get from gRPC in the logs (
> grpc-netty-shadeddep-4.0.0-SNAPSHOT is just a shaded dep around netty):
>
> io.grpc.StatusRuntimeException: CANCELLED
>   at io.grpc.Status.asRuntimeException(Status.java:526) 
> ~[grpc-netty-shadeddep-4.0.0-SNAPSHOT.jar:?]
>   at 
> io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:385)
>  [grpc-stub-1.6.1.jar:1.6.1]
>   at 
> io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:422) 
> [grpc-netty-shadeddep-4.0.0-SNAPSHOT.jar:?]
>   at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:61) 
> [grpc-netty-shadeddep-4.0.0-SNAPSHOT.jar:?]
>   at 
> io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:504)
>  [grpc-netty-shadeddep-4.0.0-SNAPSHOT.jar:?]
>   at 
> io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:425)
>  [grpc-netty-shadeddep-4.0.0-SNAPSHOT.jar:?]
>   at 
> io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:536)
>  [grpc-netty-shadeddep-4.0.0-SNAPSHOT.jar:?]
>   at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37) 
> [grpc-netty-shadeddep-4.0.0-SNAPSHOT.jar:?]
>   at 
> io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:102) 
> [grpc-netty-shadeddep-4.0.0-SNAPSHOT.jar:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_111]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_111]
>   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
>


Re: [grpc-io] turn off priority flag on java server?

2017-11-07 Thread 'Eric Gribkoff' via grpc.io
Currently there is no way to do this in gRPC Java. The PRIORITY is coming
from Netty, and Netty would need to be changed to disable sending it. For
now, getting this issue fixed on the grpc/grpc side seems like the correct
way forward, and it looks like the C-team is now reviewing the fix in your
PR.

Thanks,

Eric

On Tue, Nov 7, 2017 at 9:24 AM,  wrote:

> Is there a way to ask the Java gRPC server to not set the PRIORITY flag on
> HTTP2 frames?
>
> I want this because I'm waiting for this bug fix to arrive for Python
> clients:
>
> https://github.com/grpc/grpc/pull/13201
>
>
> If I could prevent the priority flag from being set on the Java server
> side I could avoid this bug while I wait for a release of the Python client
> that has the fix.
>
> Whitney
>


Re: [grpc-io] Re: Trouble with java quickstart for grpc: unable to locate branch 1.6.1

2017-09-18 Thread 'Eric Gribkoff' via grpc.io
I'm not sure why you're unable to check out the tagged release. It is
available on GitHub (https://github.com/grpc/grpc-java/releases/tag/v1.6.1)
and the command (git clone -b ...) works for me. Is it possible you're
getting a cached version of grpc-java?

Thanks,

Eric

On Mon, Sep 18, 2017 at 11:03 AM, Jeff Gaer 
wrote:

> I get the same result with 1.6.1. I had tried the 1.6.2 just as a guess
> along with a variety of other versions. Sorry that I posted the wrong
> command line when I asked the questions. I'm guessing it is a git config
> issue, but I don't have a clue on where to look. There is not of lot of
> content in my gitconfig so I assume I am running with mostly defaults.
>
>
> jgaer@ljgaer1_/data/grpc: git clone -b v1.6.1
> https://github.com/grpc/grpc-java
> Initialized empty Git repository in /data/grpc/grpc-java/.git/
> remote: Counting objects: 52663, done.
> remote: Compressing objects: 100% (30/30), done.
> remote: Total 52663 (delta 7), reused 22 (delta 1), pack-reused 52621
> Receiving objects: 100% (52663/52663), 17.75 MiB | 694 KiB/s, done.
> Resolving deltas: 100% (21774/21774), done.
> warning: Remote branch v1.6.1 not found in upstream origin, using HEAD
> instead
>


Re: [grpc-io] Trouble with java quickstart for grpc: unable to locate branch 1.6.1

2017-09-18 Thread 'Eric Gribkoff' via grpc.io
It should be v1.6.1, not v1.6.2. See
https://grpc.io/docs/quickstart/java.html, which says to checkout v1.6.1.
Did you find another document saying to use v1.6.2? If so, please let me
know the source for that and I will correct it.

Thanks,

Eric

On Mon, Sep 18, 2017 at 9:26 AM, Jeff Gaer  wrote:

>
> Hi,
>
> I am trying to execute the steps in the java quickstart for grpc. When I
> execute git clone -b v1.6.2 https://github.com/grpc/grpc-java I get a
> warning: "warning: Remote branch v1.6.2 not found in upstream origin, using
> HEAD instead", and the checked-out version seems to be the latest snapshot.
> I am unable to build the examples as it cannot locate the grpc
> dependencies (i.e. Could not find io.grpc:grpc-netty:1.7.0-SNAPSHOT). I
> googled the dependency issue and found that it is a result of trying to
> build a snapshot version, which I guess is what would be on HEAD. How do I
> check out the branch successfully? I am not a big gradle user so pardon the
> dumb question.
>


Re: [grpc-io] Re: Can someone give me an example with grpc-java and ssl ?

2017-08-16 Thread 'Eric Gribkoff' via grpc.io
TLS is on by default for OkHttp channels. See our android-interop-test app
for an example of how to create a secure connection using a test
certificate, or just remove the call to
ManagedChannelBuilder.usePlaintext(true) from the channel construction in
our Android Hello World app to have it use TLS when connecting.

Eric


On Aug 16, 2017 3:18 AM,  wrote:

> Since Netty can't be used in Android, how it should be done in an Android
> Client? I couldn't find any useful example.
>
> On Thursday, January 14, 2016 at 1:43:32 PM UTC+5:30, Young You wrote:
>>
>> It works! Thank you very much!
>>
>> On Thursday, January 14, 2016 at 3:44:41 AM UTC+8, Eric Anderson wrote:
>>>
>>> Yep, you need to use GrpcSslContexts:
>>> GrpcSslContexts.forClient().trustManager(...).build();
>>> GrpcSslContexts.forServer(new File...).build();
>>>
>>> Alternatively, you could use GrpcSslContexts.configure(...), but I'd
>>> suggest the easier forms above.
>>>
>>> On Wed, Jan 13, 2016 at 1:35 AM, Young You  wrote:
>>>
 // Server

 SslContext sslContext = SslContextBuilder.forServer(
     new File("/Users/u/Desktop/api.grpc/src/main/resources/server.crt"),
     new File("/Users/u/Desktop/api.grpc/src/main/resources/private_key_pkcs8.pem"))
     .build();

 server = NettyServerBuilder.forPort(port).sslContext(sslContext)
     .addService(GreeterGrpc.bindService(new GreeterImpl())).build()
     .start();


 // Client
 SslContext sslContext = SslContextBuilder.forClient().trustManager(
     new File("/Users/u/Desktop/api.grpc/src/main/resources/server.crt"))
     .build();
 channel = NettyChannelBuilder.forAddress(host, port)
     .sslContext(sslContext)
     .build();
 blockingStub = GreeterGrpc.newBlockingStub(channel);


 Both server and client do not work, I have tried with another client
 and server written in Ruby.


 On Friday, January 8, 2016 at 11:10:59 AM UTC+8, Young You wrote:

> Can someone give me an example with grpc-java and ssl ?
>
> My code returns this error
>
> io.grpc.StatusRuntimeException: UNKNOWN
> at io.grpc.Status.asRuntimeException(Status.java:430)
> at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:156)
> at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:106)
> at ex.grpc.GreeterGrpc$GreeterBlockingStub.sayHello(GreeterGrpc
> .java:109)
> at com.chinark.api.helloworld.HelloWorldClient.greet(HelloWorld
> Client.java:45)
> at com.chinark.api.helloworld.HelloWorldClient.main(HelloWorldC
> lient.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAcce
> ssorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMe
> thodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at com.intellij.rt.execution.application.AppMain.main(AppMain.j
> ava:144)
> Caused by: java.lang.Exception: Failed ALPN negotiation: Unable to
> find compatible protocol.
> at io.grpc.netty.ProtocolNegotiators$BufferUntilTlsNegotiatedHa
> ndler.userEventTriggered(ProtocolNegotiators.java:400)
> at io.netty.channel.ChannelHandlerInvokerUtil.invokeUserEventTr
> iggeredNow(ChannelHandlerInvokerUtil.java:75)
> at io.netty.channel.DefaultChannelHandlerInvoker.invokeUserEven
> tTriggered(DefaultChannelHandlerInvoker.java:135)
> at io.netty.channel.AbstractChannelHandlerContext.fireUserEvent
> Triggered(AbstractChannelHandlerContext.java:149)
> at io.netty.channel.ChannelInboundHandlerAdapter.userEventTrigg
> ered(ChannelInboundHandlerAdapter.java:108)
> at io.netty.channel.ChannelHandlerInvokerUtil.invokeUserEventTr
> iggeredNow(ChannelHandlerInvokerUtil.java:75)
> at io.netty.channel.DefaultChannelHandlerInvoker.invokeUserEven
> tTriggered(DefaultChannelHandlerInvoker.java:135)
> at io.netty.channel.AbstractChannelHandlerContext.fireUserEvent
> Triggered(AbstractChannelHandlerContext.java:149)
> at io.netty.handler.ssl.SslHandler.setHandshakeSuccess(SslHandl
> er.java:1240)
> at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1067)
> at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:965)
> at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteT
> oMessageDecoder.java:327)
> at 

Re: [grpc-io] SSL error with GRPC Java

2017-08-09 Thread 'Eric Gribkoff' via grpc.io
This is for an OSGi bundle? It looks like the errors you're getting are
internal to Netty, and indicate that your bundle is not correctly adding
netty-tcnative to the classpath. I don't have any experience with OSGi, but
you may be able to get help with the class loading issue at
https://github.com/netty/netty.

Eric

On Sat, Aug 5, 2017 at 8:58 PM,  wrote:

>
> JDK version : 1.8u77
>
> proto3.0.3 version
>
> I have tried incorporating SSL into current application. Please find below
> approaches we have tried.
> 1) OpenSSL Static approach
>
> We have added the io.netty.tcnative-boringssl-static, io.netty.handler
> and bundles to com.pelco.vms.pelcotools.application.bnd and
>
> Tried the below code snippet (added to RPCHandler) :
>
>
> *SslContext sslContext = SslContextBuilder.forServer(certificatePemFile,
> privateKeyPemFile))*
> *
> .sslProvider(SslProvider.OPENSSL)*
> * .build();*
> *server = NettyServerBuilder.forAddress(new
> InetSocketAddress(InetAddress.getLoopbackAddress(), 8443))*
> *   .addService(service)*
> *   .sslContext(sslContext)*
> *   .build()*
> *   .start();*
>
>
> But we are receiving the below exception while building the SslContext.
>
> java.lang.UnsatisfiedLinkError: failed to load the required native library
> at io.netty.handler.ssl.OpenSsl.ensureAvailability(OpenSsl.java:311)
> at io.netty.handler.ssl.ReferenceCountedOpenSslContext.<init>(ReferenceCountedOpenSslContext.java:230)
> at io.netty.handler.ssl.OpenSslContext.<init>(OpenSslContext.java:43)
> at io.netty.handler.ssl.OpenSslServerContext.<init>(OpenSslServerContext.java:347)
> at io.netty.handler.ssl.OpenSslServerContext.<init>(OpenSslServerContext.java:335)
> at io.netty.handler.ssl.SslContext.newServerContextInternal(SslContext.java:421)
> at io.netty.handler.ssl.SslContextBuilder.build(SslContextBuilder.java:441)
> at com.pelco.vms.pelcotools.handlers.RPCHandler.start(RPCHandler.java:105)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at org.apache.felix.scr.impl.helper.BaseMethod.invokeMethod(BaseMethod.java:222)
> at org.apache.felix.scr.impl.helper.BaseMethod.access$500(BaseMethod.java:37)
> at org.apache.felix.scr.impl.helper.BaseMethod$Resolved.invoke(BaseMethod.java:615)
> at org.apache.felix.scr.impl.helper.BaseMethod.invoke(BaseMethod.java:499)
> at org.apache.felix.scr.impl.helper.ActivateMethod.invoke(ActivateMethod.java:295)
> at org.apache.felix.scr.impl.manager.SingleComponentManager.createImplementationObject(SingleComponentManager.java:302)
> at org.apache.felix.scr.impl.manager.SingleComponentManager.createComponent(SingleComponentManager.java:113)
> at org.apache.felix.scr.impl.manager.SingleComponentManager.getService(SingleComponentManager.java:866)
> at org.apache.felix.scr.impl.manager.SingleComponentManager.getServiceInternal(SingleComponentManager.java:833)
> at org.apache.felix.scr.impl.manager.AbstractComponentManager.activateInternal(AbstractComponentManager.java:724)
> at org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:954)
> at org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:915)
> at org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1215)
> at org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1136)
> at org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.trackAdding(ServiceTracker.java:945)
> at org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.track(ServiceTracker.java:881)
> at org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.serviceChanged(ServiceTracker.java:1167)
> at org.apache.felix.scr.impl.BundleComponentActivator$ListenerInfo.serviceChanged(BundleComponentActivator.java:120)
> at org.apache.felix.framework.util.EventDispatcher.invokeServiceListenerCallback(EventDispatcher.java:987)
> at 

Re: [grpc-io] 2 Client running on the same machine

2017-07-26 Thread 'Eric Gribkoff' via grpc.io
Hi Fabio,

I'm not sure exactly what you mean by the "server side" part when you say
both clients are doing a "BIDI stream server side". But two separate
clients connected to the same server should work just fine. Do you see
anything in the server logs after the first client disconnects?
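
For reference, here is a minimal sketch of what each client could look like
(the Chat service, ChatMessage type, and generated stub names are hypothetical
placeholders, and usePlaintext(true) is the 1.x-era API). Each JVM owns its own
channel and its own stream, so closing one client's stream should not affect
the other's:

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;

public class BidiClient {
  public static void main(String[] args) throws Exception {
    // Each JVM builds its own channel and opens its own BIDI stream.
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("localhost", 50051).usePlaintext(true).build();
    ChatGrpc.ChatStub stub = ChatGrpc.newStub(channel); // hypothetical generated stub

    StreamObserver<ChatMessage> requests = stub.chat(new StreamObserver<ChatMessage>() {
      @Override public void onNext(ChatMessage msg) {
        System.out.println("server sent: " + msg); // log each server message
      }
      @Override public void onError(Throwable t) {
        t.printStackTrace(); // should fire only for this client's own stream
      }
      @Override public void onCompleted() {
        System.out.println("server closed the stream");
      }
    });

    requests.onNext(ChatMessage.getDefaultInstance()); // send an initial message
    Thread.sleep(60_000); // keep the stream open while the other client exits
    requests.onCompleted();
    channel.shutdown();
  }
}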

Eric

On Wed, Jul 26, 2017 at 2:48 AM,  wrote:

> Hi all,
>
> I have 2 gRPC clients written in Java running on the same machine but in 2
> different JVMs. Both clients are doing a BIDI stream server side on the
> same endpoint/method, and I log each time the server sends a message. The
> problem I'm facing is that when I stop one client, I don't see logs for the
> other client, and after a timeout it receives the onError message. Do you
> have any insight? Is it a bug?
>
> thanks,
> Fabio
>
> --
> You received this message because you are subscribed to the Google Groups "
> grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to grpc-io+unsubscr...@googlegroups.com.
> To post to this group, send email to grpc-io@googlegroups.com.
> Visit this group at https://groups.google.com/group/grpc-io.
> To view this discussion on the web visit https://groups.google.com/d/
> msgid/grpc-io/597335c4-3a97-4638-93c2-df95c56f50e9%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CALUXJ7gW1SkBVNUfb9r9OkARmXsTvmucqwPHkd4bKwdAY6rsZA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] One file has both java and javalite version

2017-07-19 Thread 'Eric Gribkoff' via grpc.io
It's probably not a good idea to try to include your server and Android app
in the same project; at the least, you should not be trying to run a gRPC
server via Android Studio. You need your server to depend on grpc-netty,
and this is not intended/supported for Android.

I would suggest creating two projects, one for Android, one for your
server, and just copy and modify the existing examples to create each
project. If you want to share the same .proto definitions between the
projects, just put the protos in a separate project that the other two
depend on.

Hope this helps.

Eric

On Tue, Jul 18, 2017 at 9:57 PM,  wrote:

> Yes, the example project builds perfectly. But as soon as I copy settings
> over to my projects, things start to break.
>
> I'm new to Android, Gradle, and gRPC, so it's easy for me to miss
> something. Sorry to bother you.
>
> From a beginner's point of view, the example project is too complicated
> and is not just "hello world". It's like a sub-project of the gRPC project
> and contains advanced examples, so I'm not sure which settings/code are
> essential to the simplest helloworld demo, and it isn't organized in
> Android Studio's way. Just my two cents.
>
> I use Android Studio and my project structure is this: I have an Android
> module named Client and a Java module named Protocol. The Protocol module
> contains the proto files and generates the necessary gRPC classes. The
> Client module references Protocol, and in the future, when I create a
> Server module, it could also reference the Protocol module.
>
> I committed the project to GitHub. I would really appreciate it if you
> could clone my debug branch and take a look.
>
> git clone -b debug https://github.com/gqqnbig/SimpleChat2017
>
> On Tuesday, July 18, 2017 at 1:48:09 PM UTC-7, Eric Gribkoff wrote:
>>
>> You still need the javalite {} block in both protobuf.plugins and
>> protobuf.generateProtoTasks. I would suggest following these instructions
>> to build the example gRPC Android Hello World app (here is its
>> build.gradle), confirm the example builds in your local environment, then
>> adapt the working example to your project. If the Hello World app doesn't
>> build for you, let me know.
>>
>> Best,
>>
>> Eric
>>
>> On Sun, Jul 16, 2017 at 3:51 PM,  wrote:
>>
>>> I now have another problem. Do you mind looking again?
>>>
>>> I make the project, but Android Studio cannot find many packages. See
>>> the screenshot.
>>>
>>>
>>> 
>>>
>>>
>>>
>>> I already have
>>>
>>>
>>> dependencies {
>>> compile 'io.grpc:grpc-okhttp:1.4.0'
>>> compile 'io.grpc:grpc-protobuf-lite:1.4.0'
>>> compile 'io.grpc:grpc-stub:1.4.0'
>>> compile 'javax.annotation:javax.annotation-api:1.2'
>>> }
>>>
>>>
>>> in build.gradle.
>>>
>>>
>>> Am I still missing anything?
>>>
>>>
On Sunday, July 16, 2017 at 11:40:01 AM UTC-7, Eric Gribkoff wrote:

 Can you try "./gradlew clean" and then rebuild? You can encounter this
 type of error when you've already built a project using one flavor of
 protobuf, then switch to another. The generated files from the old proto
 flavor are not always removed automatically, causing conflicts, but
 "./gradlew clean" will clear them out.

 Thanks,

 Eric

 On Sat, Jul 15, 2017 at 11:21 PM,  wrote:

> This is my build.gradle.
>
> apply plugin: 'java'
>
> dependencies {
> compile fileTree(dir: 'libs', include: ['*.jar'])
>
> compile 'io.grpc:grpc-okhttp:1.4.0'
> compile 'io.grpc:grpc-protobuf-lite:1.4.0'
> compile 'io.grpc:grpc-stub:1.4.0'
> compile 'javax.annotation:javax.annotation-api:1.2'
> }
>
> sourceCompatibility = "1.7"
> targetCompatibility = "1.7"
>
>
> apply plugin: 'com.google.protobuf'
>
> protobuf {
> protoc {
> artifact = 'com.google.protobuf:protoc:3.0.0'
> }
> plugins {
> javalite {
> artifact = "com.google.protobuf:protoc-gen-javalite:3.0.0"
> }
> grpc {
> artifact = 'io.grpc:protoc-gen-grpc-java:1.0.0' // CURRENT_GRPC_VERSION
> }
> }
> generateProtoTasks {
> all().each { task ->
> task.plugins {
> javalite {}
> grpc {
> // Options added to --grpc_out
> option 'lite'
> }
> }
> }
> }
> }
>
>
>
> I copied helloworld.proto to my module. Once I make the module, Android
> Studio gives the error Duplicated class
> 

Re: [grpc-io] One file has both java and javalite version

2017-07-18 Thread 'Eric Gribkoff' via grpc.io
You still need the javalite {} block in both protobuf.plugins and
protobuf.generateProtoTasks. I would suggest following these instructions
to build the example gRPC Android Hello World app (here is its
build.gradle), confirm the example builds in your local environment, then
adapt the working example to your project. If the Hello World app doesn't
build for you, let me know.

Best,

Eric

On Sun, Jul 16, 2017 at 3:51 PM,  wrote:

> I now have another problem. Do you mind looking again?
>
> I make the project, but Android Studio cannot find many packages. See the
> screenshot.
>
>
> 
>
>
>
> I already have
>
>
> dependencies {
> compile 'io.grpc:grpc-okhttp:1.4.0'
> compile 'io.grpc:grpc-protobuf-lite:1.4.0'
> compile 'io.grpc:grpc-stub:1.4.0'
> compile 'javax.annotation:javax.annotation-api:1.2'
> }
>
>
> in build.gradle.
>
>
> Am I still missing anything?
>
>
On Sunday, July 16, 2017 at 11:40:01 AM UTC-7, Eric Gribkoff wrote:
>>
>> Can you try "./gradlew clean" and then rebuild? You can encounter this
>> type of error when you've already built a project using one flavor of
>> protobuf, then switch to another. The generated files from the old proto
>> flavor are not always removed automatically, causing conflicts, but
>> "./gradlew clean" will clear them out.
>>
>> Thanks,
>>
>> Eric
>>
>> On Sat, Jul 15, 2017 at 11:21 PM,  wrote:
>>
>>> This is my build.gradle.
>>>
>>> apply plugin: 'java'
>>>
>>> dependencies {
>>> compile fileTree(dir: 'libs', include: ['*.jar'])
>>>
>>> compile 'io.grpc:grpc-okhttp:1.4.0'
>>> compile 'io.grpc:grpc-protobuf-lite:1.4.0'
>>> compile 'io.grpc:grpc-stub:1.4.0'
>>> compile 'javax.annotation:javax.annotation-api:1.2'
>>> }
>>>
>>> sourceCompatibility = "1.7"
>>> targetCompatibility = "1.7"
>>>
>>>
>>> apply plugin: 'com.google.protobuf'
>>>
>>> protobuf {
>>> protoc {
>>> artifact = 'com.google.protobuf:protoc:3.0.0'
>>> }
>>> plugins {
>>> javalite {
>>> artifact = "com.google.protobuf:protoc-gen-javalite:3.0.0"
>>> }
>>> grpc {
>>> artifact = 'io.grpc:protoc-gen-grpc-java:1.0.0' // CURRENT_GRPC_VERSION
>>> }
>>> }
>>> generateProtoTasks {
>>> all().each { task ->
>>> task.plugins {
>>> javalite {}
>>> grpc {
>>> // Options added to --grpc_out
>>> option 'lite'
>>> }
>>> }
>>> }
>>> }
>>> }
>>>
>>>
>>>
>>> I copied helloworld.proto to my module. Once I make the module, Android
>>> Studio gives the error Duplicated class io.grpc.examples.helloworld.HelloReply.
>>> I searched the file explorer; the same class exists in both
>>> build\generated\source\proto\main\java\io\grpc\examples\helloworld\HelloReply.java
>>> and
>>> D:\SimpleChat\protocol\build\generated\source\proto\main\javalite\io\grpc\examples\helloworld\HelloReply.java.
>>>
>>>
>>>
>>> 
>>>
>>>
>>>
>>> I checked the sample repository at https://github.com/grpc/grpc-java; it
>>> only has the java one.
>>>
>>>
>>> How do I resolve this?
>>>
>>>
>>>
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "grpc.io" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to grpc-io+u...@googlegroups.com.
>>> To post to this group, send email to grp...@googlegroups.com.
>>> Visit this group at https://groups.google.com/group/grpc-io.
>>> To view this discussion on the web visit https://groups.google.com/d/ms
>>> gid/grpc-io/2071b2d9-9168-4b20-83ec-769215328a83%40googlegroups.com
>>> 
>>> .
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>> --
> You received this message because you are subscribed to the Google Groups "
> grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to grpc-io+unsubscr...@googlegroups.com.
> To post to this group, send email to grpc-io@googlegroups.com.
> Visit this group at https://groups.google.com/group/grpc-io.
> To view this discussion on the web visit https://groups.google.com/d/ms
> gid/grpc-io/8c810d1e-727e-4df0-a046-431fde8f3ee9%40googlegroups.com
> 
> .
>
> For more options, visit 

Re: [grpc-io] How to access error_details via grpc-java API?

2017-05-12 Thread 'Eric Gribkoff' via grpc.io
You can use the utility method StatusProto#fromThrowable to extract the
error details as a com.google.rpc.Status proto. Metadata keys beginning
with "grpc-" are intended for internal use by the gRPC library, so the
Metadata.Key itself is not public.
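
For example, something like this on the client side (a sketch; the stub call
and the detail message types are placeholders for your own):

import com.google.protobuf.Any;
import com.google.rpc.Status;
import io.grpc.StatusRuntimeException;
import io.grpc.protobuf.StatusProto;

try {
  blockingStub.someRpc(request); // hypothetical generated stub call
} catch (StatusRuntimeException e) {
  // fromThrowable returns the com.google.rpc.Status packed into
  // grpc-status-details-bin, or null if the server did not send one.
  Status status = StatusProto.fromThrowable(e);
  if (status != null) {
    for (Any detail : status.getDetailsList()) {
      // unpack each expected type, e.g. detail.unpack(MyErrorDetail.class)
    }
  }
}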

Thanks,

Eric

On Fri, May 12, 2017 at 8:17 AM, Przemysław Sobala <
przemyslaw.sob...@gmail.com> wrote:

> Hi, how do I use error_details sent from the server via (C++):
>
> grpc::ServerAsyncResponseWriter::FinishWithError(Status(StatusCode code,
> const grpc::string& error_message, const grpc::string& error_details))
>
> on the Java client, other than using (Java):
>
> Metadata m = Status.trailersFromThrowable(t);
> byte[] error_details = m.get(Metadata.Key.of("grpc-status-details-bin",
> Metadata.BINARY_BYTE_MARSHALLER));
>
> That key that I have to pass - "grpc-status-details-bin" - is what
> bothers me the most. Should it be accessible via some Java API?
> --
> regards
> Przemysław Sobala
>
> --
> You received this message because you are subscribed to the Google Groups "
> grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to grpc-io+unsubscr...@googlegroups.com.
> To post to this group, send email to grpc-io@googlegroups.com.
> Visit this group at https://groups.google.com/group/grpc-io.
> To view this discussion on the web visit https://groups.google.com/d/
> msgid/grpc-io/953bee4b-c263-481d-a7fe-3441a2ecb560%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CALUXJ7jPigG8MVzYinae-aydSv1G2MQEyLowPLQwZtOgULwJhg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] 2D array in proto file for C++ grpc

2017-03-14 Thread 'Eric Gribkoff' via grpc.io
If I'm understanding your question correctly, you are asking about setting
nested message values in a proto using the C++ API. The protobuf
documentation provides an example of doing this at
https://developers.google.com/protocol-buffers/docs/cpptutorial#writing-a-message
.



On Thu, Mar 9, 2017 at 11:02 AM, Anirudh Kasturi 
wrote:

> Hello folks,
>
> I have a 2D array emulation in my proto file.
>
> In C++, after generating the protobuf files, I see I have methods
> declared for adding "columns" (add_columns) in the pb.h without any
> parameters. Also, there is another method declared to add "records"
> (add_records) in the pb.h without any parameters.
>
> In Java, the generated functions accept a message builder as a parameter
> and it is easy to construct the request.
>
> In C++, for fields in the proto file with standard datatypes like string
> and int, I have setters that accept the string and int as parameters. For
> type google.protobuf.Value or type Record, the generated methods take no parameters.
>
> Here is the code. How can I populate the request with the "columns"
> values of type google.protobuf.Value and "records" values of type Record?
> Any help is appreciated. Thank you!
>
> message DataMessage {
>
> int32 Status = 1;
>
> int32 Entries = 2;
>
> repeated string columnNames = 4;
>
>
> // By repeating this message, we somewhat emulate a 2D array
>
> message Record {
>
> repeated google.protobuf.Value columns = 1;
>
> }
>
> repeated Record records = 5;
>
> }
>
>
>
>
> --
> You received this message because you are subscribed to the Google Groups "
> grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to grpc-io+unsubscr...@googlegroups.com.
> To post to this group, send email to grpc-io@googlegroups.com.
> Visit this group at https://groups.google.com/group/grpc-io.
> To view this discussion on the web visit https://groups.google.com/d/
> msgid/grpc-io/8da02464-6e75-43d1-90f8-ba8ff97cc515%40googlegroups.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CALUXJ7i5yJPLH-twqG3PWYvC2mcxNU2e9NMZ9%2Bn0t2ghyV6MKw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] gRPC A6: Retries

2017-03-02 Thread 'Eric Gribkoff' via grpc.io
On Thu, Mar 2, 2017 at 9:03 AM, 'Eric Anderson' via grpc.io <
grpc-io@googlegroups.com> wrote:

> On Thu, Mar 2, 2017 at 8:38 AM, Mark D. Roth  wrote:
>
>> On Thu, Mar 2, 2017 at 8:24 AM, Eric Gribkoff 
>> wrote:
>>
>>> On Thu, Mar 2, 2017 at 8:15 AM, Mark D. Roth  wrote:

 I agree that we don't need to say anything about whether or not the
 server delays sending Response-Headers until a message is sent.  However, I
 think we should say that if the server is going to immediately signal
 failure without sending any messages, it should send Trailers-Only instead
 of Response-Headers followed by Trailers.


>>>
>>> This is in the retry gRFC doc now (https://github.com/ncteisen/proposal/blob/ad060be281c45c262e71a56e5777d26616dad69f/A6.md#when-retries-are-valid).
>>>
>>
> The language is still confusing:
>
>> The client receives a non-error response from the server. Because of the
>> gRPC wire specification, this will always be a Response-Headers frame
>> containing the initial metadata.
>
>
> What does "non-error response" mean there? I would have expected that
> means receiving a Status in some way (which is part of Response), as
> otherwise how is "error" decided. But the next part shows that isn't the
> case since Status isn't in Response-Headers.
>
>
The second sentence is defining what "non-error response" means: a
Response-Headers frame. The only alternative (an "error" response) is
Trailers-Only. I can choose a name other than "non-error response" to make
this clear.


>>> The wire spec *almost* says it: "Trailers-Only is permitted for calls
>>> that produce an immediate error" (https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md).
>>> Do you want this changed in the wire spec itself or is the inclusion in
>>> the gRFC for retries sufficient?
>>>
>>
>> I think it would be good to also change the wire spec doc.  We should do
>> something like changing "is permitted" to "SHOULD be used".  We may even
>> want to specifically mention that this is important for retry functionality
>> to work right.
>>
>
> Changing to 'should' sounds fine. Although maybe there should be a note
> that clients can't decide if something is an 'immediate error' so there
> must not be any validation for it client-side.
>
> --
> You received this message because you are subscribed to the Google Groups "
> grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to grpc-io+unsubscr...@googlegroups.com.
> To post to this group, send email to grpc-io@googlegroups.com.
> Visit this group at https://groups.google.com/group/grpc-io.
> To view this discussion on the web visit https://groups.google.com/d/
> msgid/grpc-io/CA%2B4M1oON-6sgSW%3DLLJZLABLm_RFCFgNb%
> 2Bki6%2BbwJuxMMPXMxUA%40mail.gmail.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CALUXJ7j%3DbnPCNgNZw5m444r3eaaVYCYSuY%2BwaM95oQhK-xAdvA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] gRPC A6: Retries

2017-03-02 Thread 'Eric Gribkoff' via grpc.io
On Thu, Mar 2, 2017 at 8:15 AM, Mark D. Roth <r...@google.com> wrote:

> On Thu, Mar 2, 2017 at 8:09 AM, Eric Gribkoff <ericgribk...@google.com>
> wrote:
>
>> I've update the gRFC document to include the latest discussions here.
>>
>> On Thu, Mar 2, 2017 at 7:20 AM, Mark D. Roth <r...@google.com> wrote:
>>
>>> On Wed, Mar 1, 2017 at 2:47 PM, 'Eric Gribkoff' via grpc.io <
>>> grpc-io@googlegroups.com> wrote:
>>>
>>>> I think the terminology here gets confusing between initial/trailing
>>>> metadata, gRPC rule names, and HTTP/2 frame types. Our retry design doc was
>>>> indeed underspecified in regards to dealing with initial metadata, and will
>>>> be updated. I go over all of the considerations in detail below.
>>>>
>>>> For clarity, I will use all caps for the names of HTTP/2 frame types,
>>>> e.g., HEADERS frame, and use the capitalized gRPC rule names from the
>>>> specification
>>>> <https://github.com/grpc/grpc/blob/f1666d48244143ddaf463523030ee76cc0fe691c/doc/PROTOCOL-HTTP2.md>
>>>> .
>>>>
>>>> The gRPC specification ensures that a status (containing a gRPC status
>>>> code) is only sent in Trailers, which is contained in an HTTP/2 HEADERS
>>>> frame. The only way that the gRPC status code can be contained in the first
>>>> HTTP/2 frame received is if the server sends a Trailers-Only response.
>>>>
>>>> Otherwise, the gRPC spec mandates that the first frame sent be the
>>>> Response-Headers (again, sent in an HTTP/2 HEADERS frame). Response-Headers
>>>> includes (optional) Custom-Metadata, which is usually what we are talking
>>>> about when we say "initial metadata".
>>>>
>>>> Regardless of whether the Response-Headers includes anything in its
>>>> Custom-Metadata, if the gRPC client library notifies the client application
>>>> layer of what metadata is (or is not) included, we now have to view the RPC
>>>> as committed, aka no longer retryable. This is the only option, as a later
>>>> retry attempt could receive different Custom-Metadata, contradicting what
>>>> we've already told the client application layer.
>>>>
>>>> We cannot include gRPC status codes in the Response-Headers along with
>>>> "initial metadata". It's perfectly valid according to the spec for a server
>>>> to send metadata along a stream in its Response-Headers, wait for one hour,
>>>> then (without having sent any messages), close the stream with a retryable
>>>> error.
>>>>
>>>> However, the proposal that a server include the gRPC status code (if
>>>> known) in the initial response is still sound. Concretely, this means: if a
>>>> gRPC server has not yet sent Response-Headers and receives an error
>>>> response, it should send a Trailers-Only response containing the gRPC
>>>> status code. This would allow retry attempts on the client-side to proceed,
>>>> if applicable. This is going to be superior to sending Response-Headers
>>>> immediately followed by Trailers, which would cause the RPC to become
>>>> committed on the client side (if the Response-Header metadata is made
>>>> available to the client application layer) and stop retry attempts.
>>>>
>>>> We still can encounter the case where a server intentionally sends
>>>> Response-Headers to open a stream, then eventually closes the stream with
>>>> an error without ever sending any messages. Such cases would not be
>>>> retryable, but I think it's fair to argue that if the server *has* to send
>>>> metadata in advance of sending any responses, that metadata is actually a
>>>> response, and should be treated as such (i.e., their metadata just ensured
>>>> the RPC will be committed on the client-side).
>>>>
>>>> Rather than either explicitly disallowing such behavior by modifying
>>>> some specification (this behavior is currently entirely unspecified, so
>>>> while specification is worthwhile, it should be separate from the retry
>>>> policy design currently under discussion), we can just change the default
>>>> server behavior of C++, and Go if necessary, to match Java. In Java
>>>> servers, the Response-Headers are delayed until some response message is
>>>> sent. If the server application returns an error status before sending a
>>>> message, then Trailers-O

Re: [grpc-io] gRPC A6: Retries

2017-03-01 Thread 'Eric Gribkoff' via grpc.io
I think the terminology here gets confusing between initial/trailing
metadata, gRPC rule names, and HTTP/2 frame types. Our retry design doc was
indeed underspecified in regards to dealing with initial metadata, and will
be updated. I go over all of the considerations in detail below.

For clarity, I will use all caps for the names of HTTP/2 frame types, e.g.,
HEADERS frame, and use the capitalized gRPC rule names from the
specification (https://github.com/grpc/grpc/blob/f1666d48244143ddaf463523030ee76cc0fe691c/doc/PROTOCOL-HTTP2.md).

The gRPC specification ensures that a status (containing a gRPC status
code) is only sent in Trailers, which is contained in an HTTP/2 HEADERS
frame. The only way that the gRPC status code can be contained in the first
HTTP/2 frame received is if the server sends a Trailers-Only response.

Otherwise, the gRPC spec mandates that the first frame sent be the
Response-Headers (again, sent in an HTTP/2 HEADERS frame). Response-Headers
includes (optional) Custom-Metadata, which is usually what we are talking
about when we say "initial metadata".

Regardless of whether the Response-Headers includes anything in its
Custom-Metadata, if the gRPC client library notifies the client application
layer of what metadata is (or is not) included, we now have to view the RPC
as committed, aka no longer retryable. This is the only option, as a later
retry attempt could receive different Custom-Metadata, contradicting what
we've already told the client application layer.

We cannot include gRPC status codes in the Response-Headers along with
"initial metadata". It's perfectly valid according to the spec for a server
to send metadata along a stream in its Response-Headers, wait for one hour,
then (without having sent any messages), close the stream with a retryable
error.

However, the proposal that a server include the gRPC status code (if known)
in the initial response is still sound. Concretely, this means: if a gRPC
server has not yet sent Response-Headers and receives an error response, it
should send a Trailers-Only response containing the gRPC status code. This
would allow retry attempts on the client-side to proceed, if applicable.
This is going to be superior to sending Response-Headers immediately
followed by Trailers, which would cause the RPC to become committed on the
client side (if the Response-Header metadata is made available to the
client application layer) and stop retry attempts.

We still can encounter the case where a server intentionally sends
Response-Headers to open a stream, then eventually closes the stream with
an error without ever sending any messages. Such cases would not be
retryable, but I think it's fair to argue that if the server *has* to send
metadata in advance of sending any responses, that metadata is actually a
response, and should be treated as such (i.e., their metadata just ensured
the RPC will be committed on the client-side).

Rather than either explicitly disallowing such behavior by modifying some
specification (this behavior is currently entirely unspecified, so while
specification is worthwhile, it should be separate from the retry policy
design currently under discussion), we can just change the default server
behavior of C++, and Go if necessary, to match Java. In Java servers, the
Response-Headers are delayed until some response message is sent. If the
server application returns an error status before sending a message, then
Trailers-Only is sent instead of Response-Headers.
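
To make that concrete, here is a sketch of a Java server method (the service,
message types, and overloaded() check are hypothetical):

import io.grpc.Status;
import io.grpc.stub.StreamObserver;

@Override
public void getFoo(FooRequest request, StreamObserver<FooReply> responseObserver) {
  if (overloaded()) { // hypothetical failure condition
    // No message has been sent, so grpc-java has not flushed Response-Headers;
    // this error goes out as Trailers-Only and the RPC can remain retryable.
    responseObserver.onError(
        Status.UNAVAILABLE.withDescription("try again later").asRuntimeException());
    return;
  }
  // After the first onNext, Response-Headers have been sent and the RPC is
  // committed client-side; a later error would be Response-Headers + Trailers.
  responseObserver.onNext(FooReply.getDefaultInstance());
  responseObserver.onCompleted();
}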

We can also leave it up to the gRPC client library implementation to decide
when an RPC is committed based on received Response-Headers. If and while
the client library can guarantee that the presence (or absence) of initial
metadata is not visible to the client application layer, the RPC can be
considered uncommitted. This is an implementation detail that should very
rarely be necessary if the above change is made to default server behavior,
but it would not violate anything in the retry spec or semantics.

Eric

On Wed, Mar 1, 2017 at 11:32 AM, 'Eric Anderson' via grpc.io <
grpc-io@googlegroups.com> wrote:

> On Wed, Mar 1, 2017 at 10:51 AM, 'Mark D. Roth' via grpc.io <
> grpc-io@googlegroups.com> wrote:
>
>> On Wed, Mar 1, 2017 at 10:20 AM, 'Eric Anderson' via grpc.io <
>> grpc-io@googlegroups.com> wrote:
>>
>>> What? That does not seem to be a proper understanding of the text, or
>>> the text is wrongly worded. Why would the RPC be "committed as soon as it
>>> receives the initial metadata"? That isn't in the text... In your example
>>> it seems it would be committed at "the trailing metadata that includes a
>>> status" as long as that status was OK, as per the "an explicit OK status"
>>> in the text.
>>>
>>
>> The language in the above quote is probably not as specific as it should
>> be, at least with respect to the wire protocol.  The intent here is that
>> the RPC should be considered committed when it receives either initial
>> metadata or a payload message.
>>