Re: [grpc-io] Node group based XDS routing

2021-08-16 Thread Lukáš Drbal
Ach, sorry, my bad.
Thanks.

L.

On Monday, August 16, 2021 at 7:57:28 PM UTC+2 Mark D. Roth wrote:

> I am not familiar with java-control-plane, so I can't answer that.  You 
> might try asking in their developer community.
Re: [grpc-io] Node group based XDS routing

2021-08-16 Thread 'Mark D. Roth' via grpc.io
I am not familiar with java-control-plane, so I can't answer that.  You
might try asking in their developer community.


Re: [grpc-io] Node group based XDS routing

2021-08-16 Thread Lukáš Drbal
Hello Mark,

First of all, thanks a lot for your reply; things are much clearer to me now.

Do you have any idea how to distinguish connected clients? I was looking
for some information I could use, but I don't see anything usable in the
NodeGroup interface [1]. It only exposes the `Node` protobuf object, and I
don't see anything there that helps.

Any idea or example?

Thanks!

[1] 
https://github.com/envoyproxy/java-control-plane/blob/main/cache/src/main/java/io/envoyproxy/controlplane/cache/NodeGroup.java
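For concreteness, the kind of grouping I have in mind (a sketch with stdlib stand-ins for the real `io.envoyproxy` types, since I may be misreading the interface):

```java
// Illustrative only: Node and NodeGroup below are hypothetical stand-ins for
// io.envoyproxy.envoy.config.core.v3.Node and
// io.envoyproxy.controlplane.cache.NodeGroup; this sketch does not import them.
interface NodeGroup<T> {
    T hash(Node node);
}

record Node(String cluster) {}  // the real proto exposes getCluster()

// Group connected clients purely by their advertised node.cluster.
class ClusterNodeGroup implements NodeGroup<String> {
    @Override
    public String hash(Node node) {
        // Empty when a request omits the node identifier, so such
        // requests all land in the "" (default) group.
        return node.cluster();
    }
}
```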




Re: [grpc-io] Node group based XDS routing

2021-08-16 Thread 'Mark D. Roth' via grpc.io
The xDS protocol does not require the node information to be sent by the
client for every request on the stream; the client needs to send it only on
the first request on the stream.  Quoting this section of the xDS spec:

> Only the first request on a stream is guaranteed to carry the node
> identifier. The subsequent discovery requests on the same stream may carry
> an empty node identifier. This holds true regardless of the acceptance of
> the discovery responses on the same stream. The node identifier should
> always be identical if present more than once on the stream. It is
> sufficient to only check the first message for the node identifier as a
> result.


The Java implementation may currently happen to send the node information
with every request on the stream, but it's not required to do that, and
your xDS server should not expect that behavior.  I think you need to
change your xDS server to look at the node information on the first request
on the stream and store the node.cluster field so that it knows the value
when it sees subsequent requests on the same stream.
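A minimal sketch of that per-stream caching (the type and its string-based API are illustrative stand-ins, not grpc-java or java-control-plane classes):

```java
import java.util.Optional;

// One instance per ADS stream: remembers node.cluster from the first request
// that carries a node identifier, and reuses it for later requests on the
// same stream that leave the node empty.
class StreamState {
    private Optional<String> cluster = Optional.empty();

    String clusterFor(String clusterInRequest) {
        if (cluster.isEmpty() && !clusterInRequest.isEmpty()) {
            cluster = Optional.of(clusterInRequest);
        }
        return cluster.orElse("");
    }
}
```

With this in place, later requests on a C++ client's stream (which carry an empty node) would still resolve to the group chosen by the first request.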

I hope this information is helpful.



-- 
Mark D. Roth 
Software Engineer
Google, Inc.

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAJgPXp5yn7HN15w-o7Ry2xJg15FA3E7zZ1Zr_NdyFG824wDBZg%40mail.gmail.com.


[grpc-io] Re: Browsers' stream API as "native" transport for PROTOCOL-WEB / Grpc-web

2021-08-16 Thread 'wen...@google.com' via grpc.io
Thanks for the note. We should update the doc (i.e. remove the timeframe).




-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/ee80f6fe-56ff-4e26-8e4c-7f692c86fbe9n%40googlegroups.com.


[grpc-io] Benchmark data for gRPC + xDS. vs envoy

2021-08-16 Thread 'Gaurav Poothia' via grpc.io
Hello,
I saw a talk by Mark Roth from EnvoyCon saying that the gRPC proxyless mesh
has better QPS per CPU-second and lower latency than Envoy, all of which is
of course expected.

Can anyone please share results/setup from benchmarks around these two
metrics?
It would be great to understand the performance benefits more deeply.

Thanks!
Gaurav

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAFJ0%2B9-h7P_hrQjmgGyfF4jT6UGD%3D42V8a7zy3cqxKHLG4fOsQ%40mail.gmail.com.


[grpc-io] Browsers' stream API as "native" transport for PROTOCOL-WEB / Grpc-web

2021-08-16 Thread Fabio Monte
Hi,

I have been following "native" support for a "real" gRPC client in browsers
for quite some time, and was wondering whether what you can read in the
GitHub doc is still the aimed goal?
Quote from Design Goals:

   - *"become optional (in 1-2 years) when browsers are able to speak the
   native gRPC protocol via the new whatwg streams API"*

https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-WEB.md#design-goals

---> I was wondering whether this is still relevant? Are you any closer to
that? (If I remember correctly, at least 2 years have passed since I read
that sentence, haha, although I admit I did not look at the commit log.)

Thanks for the help

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/4acd0eba-ddf3-4dbb-a750-620858edaa5an%40googlegroups.com.


[grpc-io] Re: java: client cancels, but StatusRuntimeException not thrown on the server

2021-08-16 Thread Piotr Morgwai Kotarbinski
I've looked into the source and now I can see why it happens. I think it's
a bug, so I filed an issue with an exact description of what should be fixed:
https://github.com/grpc/grpc-java/issues/8409


-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/e530fa6e-d766-47b1-b2c5-b29dca2c6bedn%40googlegroups.com.


[grpc-io] java: client cancels, but StatusRuntimeException not thrown on the server

2021-08-16 Thread Piotr Morgwai Kotarbinski
Hi all,
I have the simplest possible server-streaming method:

service CancelService {
    rpc fun(Empty) returns (stream Empty) {}
}

an implementation that sleeps for 1s and then sends a reply:

public class CancelService extends CancelServiceImplBase {

    @Override
    public void fun(Empty request, StreamObserver<Empty> basicResponseObserver) {
        final var responseObserver = (ServerCallStreamObserver<Empty>) basicResponseObserver;
        try {
            Thread.sleep(1000l);  // longer than client's deadline
            System.out.println("isCancelled: " + responseObserver.isCancelled());
            responseObserver.onNext(request);
            responseObserver.onCompleted();
            System.out.println("completed successfully");
        } catch (StatusRuntimeException e) {
            System.out.println("StatusRuntimeException" + e);
        } catch (Throwable t) {
            System.out.println("server error" + t);
            responseObserver.onError(Status.INTERNAL.withCause(t).asException());
            if (t instanceof Error) throw (Error) t;
        }
    }

    public static void main(String[] args) throws Exception {
        final Server server = NettyServerBuilder
                .forPort()
                .addService(new CancelService())
                .build()
                .start();
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try { server.shutdown().awaitTermination(5, TimeUnit.SECONDS); } catch (Exception e) {}
        }));
        System.out.println("server started");
        server.awaitTermination();
    }
}

and a client that sets the deadline to 0.5s:

public static void main(String[] args) throws Exception {
    final var channel = ManagedChannelBuilder
            .forTarget("localhost:")
            .usePlaintext()
            .build();
    final var connector = CancelServiceGrpc.newBlockingStub(channel)
            .withDeadlineAfter(500l, TimeUnit.MILLISECONDS);
    final var results = connector.fun(Empty.newBuilder().build());
    while (results.hasNext()) {
        System.out.println("got result: " + results.next());
    }
    System.out.println("call successful");
}

when run, on the client side I get an exception as expected:

Exception in thread "main" io.grpc.StatusRuntimeException: 
DEADLINE_EXCEEDED: deadline exceeded (...)

on the server, however, responseObserver.isCancelled() returns true, but 
StatusRuntimeException is not thrown when calling onNext(...) and 
onCompleted(), and the call finishes normally:

server started
isCancelled: true
completed successfully

is this expected behavior? I thought a StatusRuntimeException with 
Status.CANCELLED should be thrown, no? (unless an onCancelHandler is set, 
which is not the case here)

To make things more confusing, if I dispatch the work in fun(...) to an 
executor like this:

ThreadPoolExecutor executor =
    new ThreadPoolExecutor(3, 3, 0, TimeUnit.DAYS, new LinkedBlockingQueue<>());

@Override
public void fun(Empty request, StreamObserver<Empty> basicResponseObserver) {
    final var responseObserver = (ServerCallStreamObserver<Empty>) basicResponseObserver;
    executor.execute(() -> {
        try {
            Thread.sleep(1000l);  // longer than client's deadline
            System.out.println("isCancelled: " + responseObserver.isCancelled());
            responseObserver.onNext(request);
            responseObserver.onCompleted();
            System.out.println("completed successfully");
        } catch (StatusRuntimeException e) {
            System.out.println("StatusRuntimeException" + e);
        } catch (Throwable t) {
            System.out.println("server error" + t);
            responseObserver.onError(Status.INTERNAL.withCause(t).asException());
            if (t instanceof Error) throw (Error) t;
        }
    });
}


then I do get a StatusRuntimeException:

server started
isCancelled: true
StatusRuntimeExceptionio.grpc.StatusRuntimeException: CANCELLED: call 
already cancelled. Use ServerCallStreamObserver.setOnCancelHandler() to 
disable this exception

Shouldn't the behavior be consistent in both of these cases?
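For reference, the workaround I can think of is to guard each send on isCancelled() rather than relying on the exception; sketched here with a hypothetical stand-in for the observer's cancellation state (not a grpc-java type):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Stand-in for the cancellation state a ServerCallStreamObserver tracks;
// illustrative only.
class CallState {
    final AtomicBoolean cancelled = new AtomicBoolean(false);

    // Send only if the client has not cancelled; mirrors guarding onNext()/
    // onCompleted() with isCancelled() instead of catching CANCELLED.
    boolean trySend(Runnable send) {
        if (cancelled.get()) {
            return false;  // drop the response silently
        }
        send.run();
        return true;
    }
}
```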

Thanks!
 

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/253a2c7f-4b36-4003-bd22-c9fd66b4d0a6n%40googlegroups.com.


[grpc-io] Node group based XDS routing

2021-08-16 Thread Lukáš Drbal
Hello everyone,

We are trying to set up routing via xDS to our gRPC services. Routing should 
be based on the `node.cluster` information provided by the client.

Basically, we would like to have 2 groups of gRPC clusters (priority and 
normal) with the same endpoints, and choose the right one by the client's 
`node.cluster` identification.
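For context: a gRPC xDS client picks up `node.cluster` from its bootstrap file (the one referenced by the GRPC_XDS_BOOTSTRAP environment variable). A minimal sketch, with placeholder values:

{
  "xds_servers": [
    {
      "server_uri": "xds-server.example.com:18000",
      "channel_creds": [ { "type": "insecure" } ]
    }
  ],
  "node": {
    "id": "client-1",
    "cluster": "priority"
  }
}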

I have a very minimal setup [1] which works exactly as we expected for the 
Java client but doesn't work for C++ (or grpc_cli). The node hashing 
implementation is at [2]. This is a minimal setup to reproduce the 
behaviour; the regular routing is more complicated.

From the logs it looks like, for C++, the xDS server receives the 
`node.cluster` information only in the first request.

From Java I see the cluster in all requests:

[grpc-default-executor-0] INFO org.example.xds.routing.XdsServer - Routing [priority] to priority group.
[grpc-default-executor-1] INFO org.example.xds.routing.XdsServer - Routing [priority] to priority group.
[grpc-default-executor-1] INFO org.example.xds.routing.XdsServer - Routing [priority] to priority group.
[grpc-default-executor-0] INFO org.example.xds.routing.XdsServer - Routing [priority] to priority group.
[grpc-default-executor-1] INFO org.example.xds.routing.XdsServer - Routing [priority] to priority group.
[grpc-default-executor-2] INFO org.example.xds.routing.XdsServer - Routing [priority] to priority group.
[grpc-default-executor-1] INFO org.example.xds.routing.XdsServer - Routing [priority] to priority group.
[grpc-default-executor-2] INFO org.example.xds.routing.XdsServer - Routing [priority] to priority group.

But from grpc_cli / C++ I see the cluster only in the first request:

[grpc-default-executor-0] INFO org.example.xds.routing.XdsServer - Routing [priority] to priority group.
[grpc-default-executor-0] INFO org.example.xds.routing.XdsServer - Routing [] to normal group.
[grpc-default-executor-0] INFO org.example.xds.routing.XdsServer - Routing [] to normal group.

This leads to the expected error when the C++ client tries to get the 
priority listeners and routes from the default group.

Can somebody give me any hint what's wrong here?

Thanks a lot!

L.

[1] https://github.com/LesTR/xds-routing-test
[2] 
https://github.com/LesTR/xds-routing-test/blob/master/src/main/java/org/example/xds/routing/XdsServer.java#L61

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/1de7e140-862f-414f-b25a-7b1afc4069can%40googlegroups.com.