[grpc-io] Re: C++ Interceptor for authorization

2020-09-30 Thread 'yas...@google.com' via grpc.io
We've had a similar request/question 
in https://github.com/grpc/grpc/issues/24017

On Tuesday, September 29, 2020 at 11:03:21 AM UTC-7 ayeg...@gmail.com wrote:

> I have a fairly simple use case - check the headers of an incoming RPC 
> call for a special string indicating authorization to use the service. 
> Currently I am experimenting based off of this example 
> https://github.com/grpc/grpc/blob/7bf82de9eda0aa8fecfe5edb33834f1b272be30b/test/cpp/end2end/server_interceptors_end2end_test.cc.
>  
> Specifically the `LoggerInterceptor` example. The happy path involves 
> calling `Proceed` on the `InterceptorBatchMethods`, but `Hijack` kills the 
> program.
>
> My main question is how do I `fail` the connection when the authorization 
> is not present?
>
> Sincerely,
> Aleks
>



Re: [grpc-io] Synchronous and asynchronous waiting in GRPC service methods

2020-09-30 Thread 'Eric Anderson' via grpc.io
On Wed, Sep 30, 2020 at 1:18 AM 'weißnet auchnicht' via grpc.io <
grpc-io@googlegroups.com> wrote:

> Increasing the thread pool size is probably not a good solution, as it
> would always have to be greater than or equal to the number of concurrently
> active requests for these 2 methods.
>

Increasing the thread pool size is an *easy* solution and keeps the code
simple. But yes, it really requires an unbounded thread pool to avoid
issues, although you could limit the service to 10 outstanding RPCs and
fail any above that threshold.
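
A minimal sketch of that bounded variant, assuming the Event/Confirmation types and the MyServiceGrpc base class from the original post (the class name, the Semaphore, and the limit of 10 are illustrative, and the actual work is elided):

import io.grpc.Status;
import io.grpc.stub.StreamObserver;

import java.util.concurrent.Semaphore;

public class BoundedService extends MyServiceGrpc.MyServiceImplBase {
    // Allow at most 10 RPCs to be in flight at once; fail the rest immediately.
    private final Semaphore inFlight = new Semaphore(10);

    @Override
    public void myServiceMethodB(Event event, StreamObserver<Confirmation> responseObserver) {
        if (!inFlight.tryAcquire()) {
            responseObserver.onError(
                    Status.RESOURCE_EXHAUSTED
                            .withDescription("Too many outstanding requests")
                            .asRuntimeException());
            return;
        }
        try {
            // ... do the work that needs the shared resource ...
            responseObserver.onNext(Confirmation.newBuilder().build());
            responseObserver.onCompleted();
        } finally {
            inFlight.release();
        }
    }
}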

However, now I have to manage an application-side buffer for each active
> request until this request gets the lock. I hoped that there was a way to
> let GRPC handle the problem by buffering the requests at a higher level
> instead of this lower application-side level and handle the exception
> (discard the request, send an exception to the client, ...) when the buffer
> fills.
>

That is possible; we call it flow control. You can take a look at the
manualflowcontrol example. Unfortunately it is an invisible API unless
someone (like an example, or me) refers you to it.

Basically, you want to cast the StreamObserver responseObserver
to ServerCallStreamObserver. At that point you can call disableAutoRequest().
By default, every time you return from onNext(), the stub will call
request(1) on your behalf, asking for another request. With this API you
disable that behavior to manage incoming messages yourself.

So you'd call disableAutoRequest(), and once you acquire the lock you'd
call request(1) each time you're ready for a message.
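
A minimal sketch of what that could look like for the client-streaming method from this thread. Event, Confirmation, MyServiceGrpc, and the lock come from the original post; the class name, the lockWaiter executor, and the AtomicBoolean bookkeeping are illustrative additions, and error/cancellation handling is simplified:

import io.grpc.Status;
import io.grpc.stub.ServerCallStreamObserver;
import io.grpc.stub.StreamObserver;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicBoolean;

public class FlowControlledService extends MyServiceGrpc.MyServiceImplBase {
    private final Semaphore lock = new Semaphore(1);
    // Hypothetical helper: waiting for the lock happens off the gRPC callback thread.
    private final ExecutorService lockWaiter = Executors.newSingleThreadExecutor();

    @Override
    public StreamObserver<Event> myServiceMethodA(StreamObserver<Confirmation> responseObserver) {
        ServerCallStreamObserver<Confirmation> serverObserver =
                (ServerCallStreamObserver<Confirmation>) responseObserver;
        serverObserver.disableAutoRequest(); // stop the stub from calling request(1) for us

        AtomicBoolean lockHeld = new AtomicBoolean(false);
        // Ask the transport for the first message only once we actually hold the lock.
        lockWaiter.submit(() -> {
            try {
                lock.acquire();
                lockHeld.set(true);
                serverObserver.request(1);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                serverObserver.onError(Status.ABORTED.withCause(e).asRuntimeException());
            }
        });

        return new StreamObserver<Event>() {
            private final List<Event> events = new ArrayList<>();

            @Override
            public void onNext(Event event) {
                events.add(event);         // buffer/preprocess as in the original code
                serverObserver.request(1); // ready for the next message
            }

            @Override
            public void onError(Throwable t) {
                if (lockHeld.getAndSet(false)) {
                    lock.release();
                }
            }

            @Override
            public void onCompleted() {
                // The asynchronous store() from the original post is elided here.
                responseObserver.onNext(Confirmation.newBuilder().build());
                responseObserver.onCompleted();
                if (lockHeld.getAndSet(false)) {
                    lock.release();
                }
            }
        };
    }
}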

For example, by returning a Future instead of a
> StreamObserver as I know it from ASP.NET MVC asynchronous controller
> methods (
> https://docs.microsoft.com/en-us/aspnet/mvc/overview/performance/using-asynchronous-methods-in-aspnet-mvc-4#CreatingAsynchGizmos).
> Probably there are similar examples in the Java world, but I have less
> experience with the Java ecosystem, yet.
>

Your difficulty is associated with using client streaming, so that API
doesn't really seem related, although I'm not too familiar with it. Also,
C# has async/await, which makes things like this a *lot* easier.



Re: [grpc-io] Re: Synchronous and asynchronous waiting in GRPC service methods

2020-09-30 Thread 'sonstige...@googlemail.com' via grpc.io

Thanks for your replies so far!

I configured the thread pool with only a single thread just for 
demonstration purposes. In production, I intend to use multiple threads 
for GRPC. However, as I suggested in my initial question, I do not think 
that my general problem is solvable by changing the number of threads.




Re: [grpc-io] Re: Synchronous and asynchronous waiting in GRPC service methods

2020-09-30 Thread 'Eric Anderson' via grpc.io
You should not use directExecutor. It offers no benefits over the
single-thread thread pool for your use-case. I'm writing a lengthier reply.

On Wed, Sep 30, 2020 at 11:37 AM 'sanjay...@google.com' via grpc.io <
grpc-io@googlegroups.com> wrote:

> Sorry, just realized you are using client streaming. If you need to share
> the resource just between 2 methods of the same service. Can you get rid of
> your lock and just use directExecutor
> https://github.com/grpc/grpc-java/blob/master/api/src/main/java/io/grpc/ServerBuilder.java#L58
> ?
>
> Alternatively you can use 2 threads instead of the single-thread-executor
> so that one of the threads will be used to complete the current request
> while the other thread is receiving next request(s).
>
> On Wednesday, September 30, 2020 at 1:18:54 AM UTC-7 weißnet auchnicht
> wrote:
>
>> Hi,
>>
>> my service has 2 methods, which share a resource. Each method invocation
>> needs exclusive access to this resource. In the example the client makes 5
>> successive requests for one of these methods, sends a message using the
>> stream observer and completes the stream observer. Then, without further
>> delay the next request is made in the same way.
>>
>> The full source code of the sample application is here:
>> https://github.com/Niklas-Peter/grpc-async.
>>
>> The GRPC server is configured with
>> ServerBuilder.executor(Executors.newSingleThreadExecutor());
>>
>> Here the most important extracts:
>>
>> *Client code:*
>> @Slf4j
>> public class MyServiceClient {
>> @SneakyThrows
>> public void myServiceMethodA() {
>> log.info("myServiceMethodA(): started.");
>> var writeConnection = stub.myServiceMethodA(new
>> LoggingStreamObserver());
>> log.info("myServiceMethodA(): received write connection.");
>>
>> writeConnection.onNext(createEvent());
>> writeConnection.onCompleted();
>> }
>> }
>>
>> *Synchronous service implementation:*
>> @Slf4j
>> public class MySyncService extends MyServiceGrpc.MyServiceImplBase {
>> private final Semaphore lock = new Semaphore(1);
>>
>> @SneakyThrows
>> @Override
>> public StreamObserver
>> myServiceMethodA(StreamObserver responseObserver) {
>> log.info(responseObserver + ": Acquiring lock ...");
>> if (!lock.tryAcquire(10, TimeUnit.SECONDS)) {
>> log.warn(responseObserver + ": Lock acquire timeout
>> exceeded");
>> return new NoOpEventStreamObserver(); // Only to prevent
>> exceptions in the log.
>> }
>>
>> log.info(responseObserver + ": Acquired lock.");
>>
>> return new StreamObserver<>() {
>> private final List events = new ArrayList<>();
>>
>> @Override
>> public void onNext(Event event) {
>> log.info(responseObserver + ": Received event.");
>>
>> var preprocessedEvent = preprocess(event);
>> events.add(preprocessedEvent);
>> }
>>
>> @Override
>> public void onCompleted() {
>> log.info(responseObserver + ": Received complete.");
>>
>> var storageLibrary = new StorageLibrary();
>>
>> storageLibrary.store(events.toArray(Event[]::new)).handle((unused,
>> throwable) -> {
>> log.info(responseObserver + ": Store completed.");
>>
>>
>> responseObserver.onNext(Confirmation.newBuilder().build());
>> responseObserver.onCompleted();
>>
>> lock.release();
>>
>> return null;
>> });
>> }
>>
>> private Event preprocess(Event event) {
>> // The preprocessing already requires the lock.
>> return event;
>> }
>> };
>> }
>>
>> @SneakyThrows
>> @Override
>> public void myServiceMethodB(Event event,
>> StreamObserver responseObserver) {
>> if (!lock.tryAcquire(5, TimeUnit.SECONDS))
>> throw new TimeoutException("The lock acquire timeout
>> exceeded.");
>>
>> // Requires exclusive access to a shared resource and uses async
>> I/O.
>> var storageLibrary = new StorageLibrary();
>> storageLibrary.store(event).handle((unused, throwable) -> {
>> responseObserver.onNext(Confirmation.newBuilder().build());
>> responseObserver.onCompleted();
>>
>> lock.release();
>>
>> return null;
>> });
>> }
>>
>>
>> @Override
>> public void otherServiceMethod(Event request,
>> StreamObserver responseObserver) {
>> // Do something independent from the other service methods.
>> }
>> }
>>
>> *Output:*
>> 09:53:03.453 [pool-1-thread-1] INFO MySyncService -
>> io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl@580c60dd:
>> Acquiring lock ...
>> 09:53:03.453 [pool-1-thread-1] INFO MySyncService -
>> io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl@580c60dd: Acquired
>> lock.
>> 09:53:03.456 

[grpc-io] Re: Synchronous and asynchronous waiting in GRPC service methods

2020-09-30 Thread 'sanjay...@google.com' via grpc.io
Sorry, just realized you are using client streaming. If you need to share
the resource just between 2 methods of the same service, can you get rid of
your lock and just use directExecutor
(https://github.com/grpc/grpc-java/blob/master/api/src/main/java/io/grpc/ServerBuilder.java#L58)?

Alternatively, you can use 2 threads instead of the single-thread executor,
so that one of the threads can complete the current request
while the other thread receives the next request(s), as sketched below.
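
A minimal sketch of that setup, assuming the MySyncService from the original post (the class name and port are illustrative):

import io.grpc.Server;
import io.grpc.ServerBuilder;

import java.util.concurrent.Executors;

public class TwoThreadServer {
    public static void main(String[] args) throws Exception {
        // Two application threads: one can sit in the blocking tryAcquire()
        // while the other keeps delivering messages for the request that holds the lock.
        Server server = ServerBuilder.forPort(50051)
                .executor(Executors.newFixedThreadPool(2))
                .addService(new MySyncService())
                .build()
                .start();
        server.awaitTermination();
    }
}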

On Wednesday, September 30, 2020 at 1:18:54 AM UTC-7 weißnet auchnicht 
wrote:

> Hi,
>
> my service has 2 methods, which share a resource. Each method invocation 
> needs exclusive access to this resource. In the example the client makes 5 
> successive requests for one of these methods, sends a message using the 
> stream observer and completes the stream observer. Then, without further 
> delay the next request is made in the same way.
>
> The full source code of the sample application is here: 
> https://github.com/Niklas-Peter/grpc-async.
>
> The GRPC server is configured with 
> ServerBuilder.executor(Executors.newSingleThreadExecutor());
>
> Here the most important extracts:
>
> *Client code:*
> @Slf4j
> public class MyServiceClient {
> @SneakyThrows
> public void myServiceMethodA() {
> log.info("myServiceMethodA(): started.");
> var writeConnection = stub.myServiceMethodA(new 
> LoggingStreamObserver());
> log.info("myServiceMethodA(): received write connection.");
>
> writeConnection.onNext(createEvent());
> writeConnection.onCompleted();
> }
> }
>
> *Synchronous service implementation:*
> @Slf4j
> public class MySyncService extends MyServiceGrpc.MyServiceImplBase {
> private final Semaphore lock = new Semaphore(1);
>
> @SneakyThrows
> @Override
> public StreamObserver 
> myServiceMethodA(StreamObserver responseObserver) {
> log.info(responseObserver + ": Acquiring lock ...");
> if (!lock.tryAcquire(10, TimeUnit.SECONDS)) {
> log.warn(responseObserver + ": Lock acquire timeout exceeded");
> return new NoOpEventStreamObserver(); // Only to prevent 
> exceptions in the log.
> }
>
> log.info(responseObserver + ": Acquired lock.");
>
> return new StreamObserver<>() {
> private final List events = new ArrayList<>();
>
> @Override
> public void onNext(Event event) {
> log.info(responseObserver + ": Received event.");
>
> var preprocessedEvent = preprocess(event);
> events.add(preprocessedEvent);
> }
>
> @Override
> public void onCompleted() {
> log.info(responseObserver + ": Received complete.");
>
> var storageLibrary = new StorageLibrary();
> 
> storageLibrary.store(events.toArray(Event[]::new)).handle((unused, 
> throwable) -> {
> log.info(responseObserver + ": Store completed.");
>
> 
> responseObserver.onNext(Confirmation.newBuilder().build());
> responseObserver.onCompleted();
>
> lock.release();
>
> return null;
> });
> }
>
> private Event preprocess(Event event) {
> // The preprocessing already requires the lock.
> return event;
> }
> };
> }
>
> @SneakyThrows
> @Override
> public void myServiceMethodB(Event event, StreamObserver 
> responseObserver) {
> if (!lock.tryAcquire(5, TimeUnit.SECONDS))
> throw new TimeoutException("The lock acquire timeout 
> exceeded.");
>
> // Requires exclusive access to a shared resource and uses async 
> I/O.
> var storageLibrary = new StorageLibrary();
> storageLibrary.store(event).handle((unused, throwable) -> {
> responseObserver.onNext(Confirmation.newBuilder().build());
> responseObserver.onCompleted();
>
> lock.release();
>
> return null;
> });
> }
>
>
> @Override
> public void otherServiceMethod(Event request, 
> StreamObserver responseObserver) {
> // Do something independent from the other service methods.
> }
> }
>
> *Output:*
> 09:53:03.453 [pool-1-thread-1] INFO MySyncService - 
> io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl@580c60dd: Acquiring 
> lock ...
> 09:53:03.453 [pool-1-thread-1] INFO MySyncService - 
> io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl@580c60dd: Acquired 
> lock.
> 09:53:03.456 [pool-1-thread-1] INFO MySyncService - 
> io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl@4a5ebb14: Acquiring 
> lock ...
> 09:53:13.465 [pool-1-thread-1] WARN MySyncService - 
> io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl@4a5ebb14: Lock 
> acquire timeout exceeded
> 09:53:13.465 [pool-1-thread-1] INFO MySyncService - 

Re: [grpc-io] Re: gRPC A6: Retries

2020-09-30 Thread 'zda...@google.com' via grpc.io
Agree with Eric. I'll also note that if the connection is broken in the middle
of an RPC after the client has received partial data from the server, say, only
the response headers, then although the channel will be reconnected
automatically by the library, that individual RPC is not retried
automatically; see the definition of "committed" in the retry design for details.

On Wednesday, September 30, 2020 at 9:37:24 AM UTC-7 Eric Anderson wrote:

> You need to call `enableRetry()` on the channel builder. See the retry
> example and example config.
>
> I think your methodConfig may not be selected because there is no 'name' 
> list for methods to match. Now that we support wildcard service names, you 
> could probably use methodConfig.put("name", 
> Arrays.asList(Collections.emptyMap())).
>
> I'll note that reconnect attempts are completely separate from RPC 
> retries. gRPC always has reconnect behavior enabled.
>
> On Wed, Sep 30, 2020 at 8:39 AM 'Mark D. Roth' via grpc.io <
> grp...@googlegroups.com> wrote:
>
>> As per discussion earlier in this thread, we haven't yet finished 
>> implementing the retry functionality, so it's not yet enabled by default.  
>> I believe that in Java, you may be able to use it, albeit with some 
>> caveats.  Penn (CC'ed) can tell you what the current status is in Java.
>>
>> On Wed, Sep 30, 2020 at 8:35 AM Guillermo Romero  
>> wrote:
>>
>>> Thanks Mark: 
>>>
>>> So.
>>>what's level this retry policy works?. 
>>>
>>>
>>> final Map retryPolicy = new HashMap<>();
>>> retryPolicy.put("maxAttempts", 10D);
>>> retryPolicy.put("initialBackoff", "10s");
>>> retryPolicy.put("maxBackoff", "30s");
>>> retryPolicy.put("backoffMultiplier", 2D);
>>> retryPolicy.put("retryableStatusCodes", Arrays.asList(
>>> "UNAVAILABLE" , "RESOURCE_EXHAUSTED" , "INTERNAL"));
>>> final Map methodConfig = new HashMap<>();
>>> methodConfig.put("retryPolicy", retryPolicy);
>>>
>>> final Map serviceConfig = new HashMap<>();
>>> serviceConfig.put("methodConfig", Collections.singletonList(
>>> methodConfig));
>>>
>>> I'm having a problem with netty client, it thows an exception  when tcp 
>>> breaks an not try to reconnect N times (MaxAttemps) - 
>>>
>>>
>>>
>>> El miércoles, 30 de septiembre de 2020 a las 12:09:25 UTC-3, Mark D. 
>>> Roth escribió:
>>>
 gRPC client channels will automatically reconnect to the server when 
 the TCP connection fails.  That has nothing to do with the retry feature, 
 and it's not something you need to configure -- it will happen 
 automatically.

 Now, if an individual request is already in-flight when the TCP 
 connection fails, that will cause the request to fail.  And in that case, 
 retrying the request would be what you want.

 On Wed, Sep 30, 2020 at 6:01 AM Guillermo Romero  
 wrote:

> Hi: 
> I'm using Jboss Netty as Grpc client, and my doubts are related to 
> the Retry Policy. My understanding is that the Retry Policy is related to 
> the internal message transport between the client and the server using 
> the 
> gRPC protocol.
>  But my problem is related to the TCP breaks, there is a way of write 
> a TCP retry policy?
>
>
> El viernes, 10 de febrero de 2017 a las 21:31:01 UTC-3, 
> ncte...@google.com escribió:
>
>> I've created a gRFC describing the design and implementation plan for 
>> gRPC Retries.
>>
>> Take a look at the gRPC on Github 
>> .
>>
> -- 
>
 You received this message because you are subscribed to the Google 
> Groups "grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send 
> an email to grpc-io+u...@googlegroups.com.
>
 To view this discussion on the web visit 
> https://groups.google.com/d/msgid/grpc-io/33fc1235-073a-48ca-b387-b964b74e366fn%40googlegroups.com
>  
> 
> .
>


 -- 
 Mark D. Roth 
 Software Engineer
 Google, Inc.

>>> -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "grpc.io" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to grpc-io+u...@googlegroups.com.
>>> To view this discussion on the web visit 
>>> 

Re: [grpc-io] Re: gRPC A6: Retries

2020-09-30 Thread 'Mark D. Roth' via grpc.io
It's definitely something that we want to finish.  I personally spent
almost a year working on the C-core implementation, and it's mostly
complete, but not quite enough to actually use yet -- there's still a bit
of missing functionality to implement, and there are some design issues
related to stats that we need to resolve.

Unfortunately, we've had other higher priority items come up that have
required us to set this aside.  I hope to be able to get back to finishing
this up in Q2 next year.

On Wed, Sep 30, 2020 at 9:12 AM Nathan Roberson 
wrote:

> I would advocate for finishing this implementation and releasing for C++
> as a high priority item. :)
>
>
>
> On Wednesday, September 30, 2020 at 8:39:51 AM UTC-7 Mark D. Roth wrote:
>
>> As per discussion earlier in this thread, we haven't yet finished
>> implementing the retry functionality, so it's not yet enabled by default.
>> I believe that in Java, you may be able to use it, albeit with some
>> caveats.  Penn (CC'ed) can tell you what the current status is in Java.
>>
>> On Wed, Sep 30, 2020 at 8:35 AM Guillermo Romero 
>> wrote:
>>
>>> Thanks Mark:
>>>
>>> So.
>>>what's level this retry policy works?.
>>>
>>>
>>> final Map retryPolicy = new HashMap<>();
>>> retryPolicy.put("maxAttempts", 10D);
>>> retryPolicy.put("initialBackoff", "10s");
>>> retryPolicy.put("maxBackoff", "30s");
>>> retryPolicy.put("backoffMultiplier", 2D);
>>> retryPolicy.put("retryableStatusCodes", Arrays.asList(
>>> "UNAVAILABLE" , "RESOURCE_EXHAUSTED" , "INTERNAL"));
>>> final Map methodConfig = new HashMap<>();
>>> methodConfig.put("retryPolicy", retryPolicy);
>>>
>>> final Map serviceConfig = new HashMap<>();
>>> serviceConfig.put("methodConfig", Collections.singletonList(
>>> methodConfig));
>>>
>>> I'm having a problem with netty client, it thows an exception  when tcp
>>> breaks an not try to reconnect N times (MaxAttemps) -
>>>
>>>
>>>
>>> El miércoles, 30 de septiembre de 2020 a las 12:09:25 UTC-3, Mark D.
>>> Roth escribió:
>>>
 gRPC client channels will automatically reconnect to the server when
 the TCP connection fails.  That has nothing to do with the retry feature,
 and it's not something you need to configure -- it will happen
 automatically.

 Now, if an individual request is already in-flight when the TCP
 connection fails, that will cause the request to fail.  And in that case,
 retrying the request would be what you want.

 On Wed, Sep 30, 2020 at 6:01 AM Guillermo Romero 
 wrote:

> Hi:
> I'm using Jboss Netty as Grpc client, and my doubts are related to
> the Retry Policy. My understanding is that the Retry Policy is related to
> the internal message transport between the client and the server using the
> gRPC protocol.
>  But my problem is related to the TCP breaks, there is a way of write
> a TCP retry policy?
>
>
> El viernes, 10 de febrero de 2017 a las 21:31:01 UTC-3,
> ncte...@google.com escribió:
>
>> I've created a gRFC describing the design and implementation plan for
>> gRPC Retries.
>>
>> Take a look at the gRPC on Github
>> .
>>
> --
>
 You received this message because you are subscribed to the Google
> Groups "grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send
> an email to grpc-io+u...@googlegroups.com.
>
 To view this discussion on the web visit
> https://groups.google.com/d/msgid/grpc-io/33fc1235-073a-48ca-b387-b964b74e366fn%40googlegroups.com
> 
> .
>


 --
 Mark D. Roth 
 Software Engineer
 Google, Inc.

>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "grpc.io" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to grpc-io+u...@googlegroups.com.
>>>
>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/grpc-io/6319a287-42c4-424d-8f40-3dfd97a8be5dn%40googlegroups.com
>>> 
>>> .
>>>
>>
>>
>> --
>> Mark D. Roth 
>> Software Engineer
>> Google, Inc.
>>
> --
> You received this message because you are subscribed to the Google Groups "
> grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to grpc-io+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/grpc-io/4c3a51ea-4609-4ac5-b3e7-5dea86fee824n%40googlegroups.com
> 
> .
>


-- 
Mark D. Roth 
Software Engineer

Re: [grpc-io] Re: gRPC A6: Retries

2020-09-30 Thread 'Eric Anderson' via grpc.io
You need to call `enableRetry()` on the channel builder. See the retry
example and example config.

I think your methodConfig may not be selected because there is no 'name'
list for methods to match. Now that we support wildcard service names, you
could probably use methodConfig.put("name",
Arrays.asList(Collections.emptyMap())).
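
Putting those two pieces together, a minimal sketch (the class name and target are illustrative; the retryPolicy values are the ones from the quoted snippet):

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class RetryChannel {
    public static ManagedChannel create(String target) {
        Map<String, Object> retryPolicy = new HashMap<>();
        retryPolicy.put("maxAttempts", 10D);
        retryPolicy.put("initialBackoff", "10s");
        retryPolicy.put("maxBackoff", "30s");
        retryPolicy.put("backoffMultiplier", 2D);
        retryPolicy.put("retryableStatusCodes",
                Arrays.asList("UNAVAILABLE", "RESOURCE_EXHAUSTED", "INTERNAL"));

        Map<String, Object> methodConfig = new HashMap<>();
        // Wildcard name entry so the config matches every method.
        methodConfig.put("name", Arrays.asList(Collections.emptyMap()));
        methodConfig.put("retryPolicy", retryPolicy);

        Map<String, Object> serviceConfig = new HashMap<>();
        serviceConfig.put("methodConfig", Collections.singletonList(methodConfig));

        return ManagedChannelBuilder.forTarget(target)
                .defaultServiceConfig(serviceConfig)
                .enableRetry() // retries are off by default, so turn them on explicitly
                .build();
    }
}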

I'll note that reconnect attempts are completely separate from RPC retries.
gRPC always has reconnect behavior enabled.

On Wed, Sep 30, 2020 at 8:39 AM 'Mark D. Roth' via grpc.io <
grpc-io@googlegroups.com> wrote:

> As per discussion earlier in this thread, we haven't yet finished
> implementing the retry functionality, so it's not yet enabled by default.
> I believe that in Java, you may be able to use it, albeit with some
> caveats.  Penn (CC'ed) can tell you what the current status is in Java.
>
> On Wed, Sep 30, 2020 at 8:35 AM Guillermo Romero <
> guillermo.rom...@gmail.com> wrote:
>
>> Thanks Mark:
>>
>> So.
>>what's level this retry policy works?.
>>
>>
>> final Map retryPolicy = new HashMap<>();
>> retryPolicy.put("maxAttempts", 10D);
>> retryPolicy.put("initialBackoff", "10s");
>> retryPolicy.put("maxBackoff", "30s");
>> retryPolicy.put("backoffMultiplier", 2D);
>> retryPolicy.put("retryableStatusCodes", Arrays.asList(
>> "UNAVAILABLE" , "RESOURCE_EXHAUSTED" , "INTERNAL"));
>> final Map methodConfig = new HashMap<>();
>> methodConfig.put("retryPolicy", retryPolicy);
>>
>> final Map serviceConfig = new HashMap<>();
>> serviceConfig.put("methodConfig", Collections.singletonList(
>> methodConfig));
>>
>> I'm having a problem with netty client, it thows an exception  when tcp
>> breaks an not try to reconnect N times (MaxAttemps) -
>>
>>
>>
>> El miércoles, 30 de septiembre de 2020 a las 12:09:25 UTC-3, Mark D. Roth
>> escribió:
>>
>>> gRPC client channels will automatically reconnect to the server when the
>>> TCP connection fails.  That has nothing to do with the retry feature, and
>>> it's not something you need to configure -- it will happen automatically.
>>>
>>> Now, if an individual request is already in-flight when the TCP
>>> connection fails, that will cause the request to fail.  And in that case,
>>> retrying the request would be what you want.
>>>
>>> On Wed, Sep 30, 2020 at 6:01 AM Guillermo Romero 
>>> wrote:
>>>
 Hi:
 I'm using Jboss Netty as Grpc client, and my doubts are related to
 the Retry Policy. My understanding is that the Retry Policy is related to
 the internal message transport between the client and the server using the
 gRPC protocol.
  But my problem is related to the TCP breaks, there is a way of write a
 TCP retry policy?


 El viernes, 10 de febrero de 2017 a las 21:31:01 UTC-3,
 ncte...@google.com escribió:

> I've created a gRFC describing the design and implementation plan for
> gRPC Retries.
>
> Take a look at the gRPC on Github
> .
>
 --

>>> You received this message because you are subscribed to the Google
 Groups "grpc.io" group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to grpc-io+u...@googlegroups.com.

>>> To view this discussion on the web visit
 https://groups.google.com/d/msgid/grpc-io/33fc1235-073a-48ca-b387-b964b74e366fn%40googlegroups.com
 
 .

>>>
>>>
>>> --
>>> Mark D. Roth 
>>> Software Engineer
>>> Google, Inc.
>>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "grpc.io" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to grpc-io+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/grpc-io/6319a287-42c4-424d-8f40-3dfd97a8be5dn%40googlegroups.com
>> 
>> .
>>
>
>
> --
> Mark D. Roth 
> Software Engineer
> Google, Inc.
>
> --
> You received this message because you are subscribed to the Google Groups "
> grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to grpc-io+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/grpc-io/CAJgPXp44%3DNF95XfmGCk4nzBCuO89HaR886XSKeYZMCC-i6m3Fg%40mail.gmail.com
> 

Re: [grpc-io] Re: gRPC A6: Retries

2020-09-30 Thread Nathan Roberson
I would advocate for finishing this implementation and releasing for C++ as 
a high priority item. :)



On Wednesday, September 30, 2020 at 8:39:51 AM UTC-7 Mark D. Roth wrote:

> As per discussion earlier in this thread, we haven't yet finished 
> implementing the retry functionality, so it's not yet enabled by default.  
> I believe that in Java, you may be able to use it, albeit with some 
> caveats.  Penn (CC'ed) can tell you what the current status is in Java.
>
> On Wed, Sep 30, 2020 at 8:35 AM Guillermo Romero  
> wrote:
>
>> Thanks Mark: 
>>
>> So.
>>what's level this retry policy works?. 
>>
>>
>> final Map retryPolicy = new HashMap<>();
>> retryPolicy.put("maxAttempts", 10D);
>> retryPolicy.put("initialBackoff", "10s");
>> retryPolicy.put("maxBackoff", "30s");
>> retryPolicy.put("backoffMultiplier", 2D);
>> retryPolicy.put("retryableStatusCodes", Arrays.asList(
>> "UNAVAILABLE" , "RESOURCE_EXHAUSTED" , "INTERNAL"));
>> final Map methodConfig = new HashMap<>();
>> methodConfig.put("retryPolicy", retryPolicy);
>>
>> final Map serviceConfig = new HashMap<>();
>> serviceConfig.put("methodConfig", Collections.singletonList(
>> methodConfig));
>>
>> I'm having a problem with netty client, it thows an exception  when tcp 
>> breaks an not try to reconnect N times (MaxAttemps) - 
>>
>>
>>
>> El miércoles, 30 de septiembre de 2020 a las 12:09:25 UTC-3, Mark D. Roth 
>> escribió:
>>
>>> gRPC client channels will automatically reconnect to the server when the 
>>> TCP connection fails.  That has nothing to do with the retry feature, and 
>>> it's not something you need to configure -- it will happen automatically.
>>>
>>> Now, if an individual request is already in-flight when the TCP 
>>> connection fails, that will cause the request to fail.  And in that case, 
>>> retrying the request would be what you want.
>>>
>>> On Wed, Sep 30, 2020 at 6:01 AM Guillermo Romero  
>>> wrote:
>>>
 Hi: 
 I'm using Jboss Netty as Grpc client, and my doubts are related to 
 the Retry Policy. My understanding is that the Retry Policy is related to 
 the internal message transport between the client and the server using the 
 gRPC protocol.
  But my problem is related to the TCP breaks, there is a way of write a 
 TCP retry policy?


 El viernes, 10 de febrero de 2017 a las 21:31:01 UTC-3, 
 ncte...@google.com escribió:

> I've created a gRFC describing the design and implementation plan for 
> gRPC Retries.
>
> Take a look at the gRPC on Github 
> .
>
 -- 

>>> You received this message because you are subscribed to the Google 
 Groups "grpc.io" group.
 To unsubscribe from this group and stop receiving emails from it, send 
 an email to grpc-io+u...@googlegroups.com.

>>> To view this discussion on the web visit 
 https://groups.google.com/d/msgid/grpc-io/33fc1235-073a-48ca-b387-b964b74e366fn%40googlegroups.com
  
 
 .

>>>
>>>
>>> -- 
>>> Mark D. Roth 
>>> Software Engineer
>>> Google, Inc.
>>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "grpc.io" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to grpc-io+u...@googlegroups.com.
>>
> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/grpc-io/6319a287-42c4-424d-8f40-3dfd97a8be5dn%40googlegroups.com
>>  
>> 
>> .
>>
>
>
> -- 
> Mark D. Roth 
> Software Engineer
> Google, Inc.
>



Re: [grpc-io] Re: gRPC A6: Retries

2020-09-30 Thread 'Mark D. Roth' via grpc.io
As per discussion earlier in this thread, we haven't yet finished
implementing the retry functionality, so it's not yet enabled by default.
I believe that in Java, you may be able to use it, albeit with some
caveats.  Penn (CC'ed) can tell you what the current status is in Java.

On Wed, Sep 30, 2020 at 8:35 AM Guillermo Romero 
wrote:

> Thanks Mark:
>
> So.
>what's level this retry policy works?.
>
>
> final Map retryPolicy = new HashMap<>();
> retryPolicy.put("maxAttempts", 10D);
> retryPolicy.put("initialBackoff", "10s");
> retryPolicy.put("maxBackoff", "30s");
> retryPolicy.put("backoffMultiplier", 2D);
> retryPolicy.put("retryableStatusCodes", Arrays.asList(
> "UNAVAILABLE" , "RESOURCE_EXHAUSTED" , "INTERNAL"));
> final Map methodConfig = new HashMap<>();
> methodConfig.put("retryPolicy", retryPolicy);
>
> final Map serviceConfig = new HashMap<>();
> serviceConfig.put("methodConfig", Collections.singletonList(
> methodConfig));
>
> I'm having a problem with netty client, it thows an exception  when tcp
> breaks an not try to reconnect N times (MaxAttemps) -
>
>
>
> El miércoles, 30 de septiembre de 2020 a las 12:09:25 UTC-3, Mark D. Roth
> escribió:
>
>> gRPC client channels will automatically reconnect to the server when the
>> TCP connection fails.  That has nothing to do with the retry feature, and
>> it's not something you need to configure -- it will happen automatically.
>>
>> Now, if an individual request is already in-flight when the TCP
>> connection fails, that will cause the request to fail.  And in that case,
>> retrying the request would be what you want.
>>
>> On Wed, Sep 30, 2020 at 6:01 AM Guillermo Romero 
>> wrote:
>>
>>> Hi:
>>> I'm using Jboss Netty as Grpc client, and my doubts are related to
>>> the Retry Policy. My understanding is that the Retry Policy is related to
>>> the internal message transport between the client and the server using the
>>> gRPC protocol.
>>>  But my problem is related to the TCP breaks, there is a way of write a
>>> TCP retry policy?
>>>
>>>
>>> El viernes, 10 de febrero de 2017 a las 21:31:01 UTC-3,
>>> ncte...@google.com escribió:
>>>
 I've created a gRFC describing the design and implementation plan for
 gRPC Retries.

 Take a look at the gRPC on Github
 .

>>> --
>>>
>> You received this message because you are subscribed to the Google Groups
>>> "grpc.io" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to grpc-io+u...@googlegroups.com.
>>>
>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/grpc-io/33fc1235-073a-48ca-b387-b964b74e366fn%40googlegroups.com
>>> 
>>> .
>>>
>>
>>
>> --
>> Mark D. Roth 
>> Software Engineer
>> Google, Inc.
>>
> --
> You received this message because you are subscribed to the Google Groups "
> grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to grpc-io+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/grpc-io/6319a287-42c4-424d-8f40-3dfd97a8be5dn%40googlegroups.com
> 
> .
>


-- 
Mark D. Roth 
Software Engineer
Google, Inc.



Re: [grpc-io] Re: gRPC A6: Retries

2020-09-30 Thread Guillermo Romero
Thanks Mark:

So, at what level does this retry policy work?


final Map<String, Object> retryPolicy = new HashMap<>();
retryPolicy.put("maxAttempts", 10D);
retryPolicy.put("initialBackoff", "10s");
retryPolicy.put("maxBackoff", "30s");
retryPolicy.put("backoffMultiplier", 2D);
retryPolicy.put("retryableStatusCodes",
        Arrays.asList("UNAVAILABLE", "RESOURCE_EXHAUSTED", "INTERNAL"));
final Map<String, Object> methodConfig = new HashMap<>();
methodConfig.put("retryPolicy", retryPolicy);

final Map<String, Object> serviceConfig = new HashMap<>();
serviceConfig.put("methodConfig", Collections.singletonList(methodConfig));

I'm having a problem with the Netty client: it throws an exception when the TCP
connection breaks and does not retry N times (maxAttempts).



El miércoles, 30 de septiembre de 2020 a las 12:09:25 UTC-3, Mark D. Roth 
escribió:

> gRPC client channels will automatically reconnect to the server when the 
> TCP connection fails.  That has nothing to do with the retry feature, and 
> it's not something you need to configure -- it will happen automatically.
>
> Now, if an individual request is already in-flight when the TCP connection 
> fails, that will cause the request to fail.  And in that case, retrying the 
> request would be what you want.
>
> On Wed, Sep 30, 2020 at 6:01 AM Guillermo Romero  
> wrote:
>
>> Hi: 
>> I'm using Jboss Netty as Grpc client, and my doubts are related to 
>> the Retry Policy. My understanding is that the Retry Policy is related to 
>> the internal message transport between the client and the server using the 
>> gRPC protocol.
>>  But my problem is related to the TCP breaks, there is a way of write a 
>> TCP retry policy?
>>
>>
>> El viernes, 10 de febrero de 2017 a las 21:31:01 UTC-3, 
>> ncte...@google.com escribió:
>>
>>> I've created a gRFC describing the design and implementation plan for 
>>> gRPC Retries.
>>>
>>> Take a look at the gRPC on Github 
>>> .
>>>
>> -- 
>>
> You received this message because you are subscribed to the Google Groups "
>> grpc.io" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to grpc-io+u...@googlegroups.com.
>>
> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/grpc-io/33fc1235-073a-48ca-b387-b964b74e366fn%40googlegroups.com
>>  
>> 
>> .
>>
>
>
> -- 
> Mark D. Roth 
> Software Engineer
> Google, Inc.
>



Re: [grpc-io] Re: gRPC A6: Retries

2020-09-30 Thread 'Mark D. Roth' via grpc.io
gRPC client channels will automatically reconnect to the server when the
TCP connection fails.  That has nothing to do with the retry feature, and
it's not something you need to configure -- it will happen automatically.

Now, if an individual request is already in-flight when the TCP connection
fails, that will cause the request to fail.  And in that case, retrying the
request would be what you want.

On Wed, Sep 30, 2020 at 6:01 AM Guillermo Romero 
wrote:

> Hi:
> I'm using Jboss Netty as Grpc client, and my doubts are related to the
> Retry Policy. My understanding is that the Retry Policy is related to the
> internal message transport between the client and the server using the gRPC
> protocol.
>  But my problem is related to the TCP breaks, there is a way of write a
> TCP retry policy?
>
>
> El viernes, 10 de febrero de 2017 a las 21:31:01 UTC-3, ncte...@google.com
> escribió:
>
>> I've created a gRFC describing the design and implementation plan for
>> gRPC Retries.
>>
>> Take a look at the gRPC on Github
>> .
>>
> --
> You received this message because you are subscribed to the Google Groups "
> grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to grpc-io+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/grpc-io/33fc1235-073a-48ca-b387-b964b74e366fn%40googlegroups.com
> 
> .
>


-- 
Mark D. Roth 
Software Engineer
Google, Inc.



[grpc-io] Re: gRPC A6: Retries

2020-09-30 Thread Guillermo Romero
Hi:
I'm using JBoss Netty as the gRPC client, and my doubts are related to the
Retry Policy. My understanding is that the Retry Policy is related to the
internal message transport between the client and the server using the gRPC
protocol.
But my problem is related to TCP breaks; is there a way to write a TCP
retry policy?


El viernes, 10 de febrero de 2017 a las 21:31:01 UTC-3, ncte...@google.com 
escribió:

> I've created a gRFC describing the design and implementation plan for gRPC 
> Retries.
>
> Take a look at the gRFC on Github.
>



[grpc-io] Synchronous and asynchronous waiting in GRPC service methods

2020-09-30 Thread 'weißnet auchnicht' via grpc.io
Hi,

my service has 2 methods which share a resource. Each method invocation
needs exclusive access to this resource. In the example, the client makes 5
successive requests to one of these methods: each request sends a message using
the stream observer and then completes the stream observer. Then, without
further delay, the next request is made in the same way.

The full source code of the sample application is here: 
https://github.com/Niklas-Peter/grpc-async.

The GRPC server is configured with 
ServerBuilder.executor(Executors.newSingleThreadExecutor());

Here are the most important extracts:

*Client code:*
@Slf4j
public class MyServiceClient {
    @SneakyThrows
    public void myServiceMethodA() {
        log.info("myServiceMethodA(): started.");
        var writeConnection = stub.myServiceMethodA(new LoggingStreamObserver());
        log.info("myServiceMethodA(): received write connection.");

        writeConnection.onNext(createEvent());
        writeConnection.onCompleted();
    }
}

*Synchronous service implementation:*
@Slf4j
public class MySyncService extends MyServiceGrpc.MyServiceImplBase {
    private final Semaphore lock = new Semaphore(1);

    @SneakyThrows
    @Override
    public StreamObserver<Event> myServiceMethodA(StreamObserver<Confirmation> responseObserver) {
        log.info(responseObserver + ": Acquiring lock ...");
        if (!lock.tryAcquire(10, TimeUnit.SECONDS)) {
            log.warn(responseObserver + ": Lock acquire timeout exceeded");
            return new NoOpEventStreamObserver(); // Only to prevent exceptions in the log.
        }

        log.info(responseObserver + ": Acquired lock.");

        return new StreamObserver<>() {
            private final List<Event> events = new ArrayList<>();

            @Override
            public void onNext(Event event) {
                log.info(responseObserver + ": Received event.");

                var preprocessedEvent = preprocess(event);
                events.add(preprocessedEvent);
            }

            @Override
            public void onCompleted() {
                log.info(responseObserver + ": Received complete.");

                var storageLibrary = new StorageLibrary();

                storageLibrary.store(events.toArray(Event[]::new)).handle((unused, throwable) -> {
                    log.info(responseObserver + ": Store completed.");

                    responseObserver.onNext(Confirmation.newBuilder().build());
                    responseObserver.onCompleted();

                    lock.release();

                    return null;
                });
            }

            private Event preprocess(Event event) {
                // The preprocessing already requires the lock.
                return event;
            }
        };
    }

    @SneakyThrows
    @Override
    public void myServiceMethodB(Event event, StreamObserver<Confirmation> responseObserver) {
        if (!lock.tryAcquire(5, TimeUnit.SECONDS))
            throw new TimeoutException("The lock acquire timeout exceeded.");

        // Requires exclusive access to a shared resource and uses async I/O.
        var storageLibrary = new StorageLibrary();
        storageLibrary.store(event).handle((unused, throwable) -> {
            responseObserver.onNext(Confirmation.newBuilder().build());
            responseObserver.onCompleted();

            lock.release();

            return null;
        });
    }

    @Override
    public void otherServiceMethod(Event request, StreamObserver<Confirmation> responseObserver) {
        // Do something independent from the other service methods.
    }
}

*Output:*
09:53:03.453 [pool-1-thread-1] INFO MySyncService - io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl@580c60dd: Acquiring lock ...
09:53:03.453 [pool-1-thread-1] INFO MySyncService - io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl@580c60dd: Acquired lock.
09:53:03.456 [pool-1-thread-1] INFO MySyncService - io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl@4a5ebb14: Acquiring lock ...
09:53:13.465 [pool-1-thread-1] WARN MySyncService - io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl@4a5ebb14: Lock acquire timeout exceeded
09:53:13.465 [pool-1-thread-1] INFO MySyncService - io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl@40947ea2: Acquiring lock ...
09:53:23.484 [pool-1-thread-1] WARN MySyncService - io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl@40947ea2: Lock acquire timeout exceeded
09:53:23.484 [pool-1-thread-1] INFO MySyncService - io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl@23f24271: Acquiring lock ...
09:53:33.496 [pool-1-thread-1] WARN MySyncService - io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl@23f24271: Lock acquire timeout exceeded
09:53:33.496 [pool-1-thread-1] INFO MySyncService - io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl@44de4271: Acquiring lock ...
09:53:43.501 [pool-1-thread-1] WARN MySyncService - io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl@44de4271: Lock acquire timeout exceeded