[grpc-io] dealing with variable network latency and streams of big request messages

2022-09-26 Thread Piotr Morgwai Kotarbinski
Hello All,
I currently have a bi-di gRPC service dealing with request streams that 
often contain quite big messages (several MBs). The resources needed to 
process request messages are limited, so to avoid buffering such big 
messages, the service uses manual flow-control and requests only as many 
messages as it can process at a given time without needing to wait for 
resources (if there are enough resources, messages from a single stream may 
be processed concurrently in separate threads).

Currently all clients of this service are in the same cluster, so network 
delays are negligible: request messages arrive almost instantly when 
requested, so the fraction of time when resources sit idle is also 
negligible. However, soon there will be calls from clients in different 
regions, and later also from mobile clients. Am I right to assume that the 
current flow-control strategy will then result in much more wasted idle 
time on the processing resources? The service is implemented in Java, in 
case it matters.

As I understand it, if both the delay and the average processing time are 
stable, the service could simply request an additional `networkDelay / 
averageMessageProcessingTime` messages each time. That condition, however, 
will not always hold, especially for mobile clients. Do I understand 
correctly that, in order to optimize resource utilization, I would need to 
implement something similar to dynamic TCP window-size negotiation (but 
counted in messages, not bytes), or is there maybe some built-in, 
ready-to-use mechanism already in place to deal with this?
In case I need to deal with it myself, are there any APIs in the Java 
library that may be helpful? For example, for measuring the network 
delivery delay of messages, the number of request messages currently 
buffered on the server side awaiting processing, etc.
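To make the idea concrete, here is a rough sketch of the kind of adaptive window I have in mind; all class and method names are made up (this is not a gRPC API), and the EMA smoothing is just one arbitrary choice:

```java
// Rough sketch of a per-call adaptive request window: keep roughly
// 1 + networkDelay / averageProcessingTime messages outstanding.
// All names here are hypothetical; this is not a gRPC API.
import java.time.Duration;

public class AdaptiveWindow {

    private Duration avgDelivery = Duration.ZERO;    // smoothed network delivery delay
    private Duration avgProcessing = Duration.ZERO;  // smoothed per-message processing time

    /** Exponential moving average, alpha = 1/8 (as in TCP's SRTT), seeded by the first sample. */
    private static Duration ema(Duration avg, Duration sample) {
        if (avg.isZero()) return sample;
        return avg.multipliedBy(7).plus(sample).dividedBy(8);
    }

    public void recordDelivery(Duration d)   { avgDelivery = ema(avgDelivery, d); }
    public void recordProcessing(Duration p) { avgProcessing = ema(avgProcessing, p); }

    /** How many messages to keep requested-but-unprocessed so resources never sit idle. */
    public int window() {
        if (avgProcessing.isZero()) return 1;
        long extra = avgDelivery.toNanos() / avgProcessing.toNanos();
        return 1 + (int) Math.min(extra, 64);  // cap to bound server-side buffering of big messages
    }
}
```

The service would then call `request(window() - outstanding)` from its message handler instead of a fixed `request(1)`.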

Thanks!

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/553aa6c3-713f-438c-9839-58a8f8d245e7n%40googlegroups.com.


[grpc-io] Re: Flutter with GRPC

2022-08-21 Thread Piotr Morgwai Kotarbinski
I think "full" gRPC is not yet supported in browsers generally, due to 
browser platform limitations (this might have changed though: you should 
verify that this info is up to date). There's a protocol called gRPC-Web 
designed exactly to make up for this, and it is definitely supported on 
Flutter:
https://pub.dev/documentation/grpc/latest/grpc_web/grpc_web-library.html

Furthermore, there's also a higher-level abstraction wrapper that 
automatically switches between "native" gRPC and gRPC-Web depending on 
which platform the build was made for:
https://pub.dev/documentation/grpc/latest/grpc_or_grpcweb/grpc_or_grpcweb-library.html

Hope this helps :)

On Wednesday, August 17, 2022 at 8:25:55 PM UTC+7 ricardo.s...@gmail.com 
wrote:

> Hi Google lovers!
> I'm new to all this, and I'm trying to create an example of Flutter (on 
> the web) with gRPC.
> Unfortunately I noticed that gRPC on Flutter doesn't support the web.
> Am I doing this wrong?
> Why doesn't this lib support the web?
> https://pub.dev/packages/grpc



[grpc-io] Re: Long lived bidirectional stream closed after laptop goes to sleep

2021-10-19 Thread Piotr Morgwai Kotarbinski
I believe it's expected behavior: your laptop cannot correctly reply to any 
messages or HTTP/2 control frames while sleeping, so terminating the call 
seems like quite a reasonable thing to do.
I don't know the low-level details of HTTP/2 stream control, though, so 
hopefully someone with better knowledge in this area can give a better 
answer.
Furthermore, it may depend on TCP socket options: if the other side 
performs TCP keep-alive checks and notices that your laptop is gone, it 
will terminate the connection.
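For reference, if the client side needs to detect a dead peer itself, grpc-java exposes the client keepalive knobs (the counterpart of GRPC_ARG_KEEPALIVE_TIME_MS in C-core) on the channel builder; a sketch with arbitrary example values and a hypothetical target address:

```java
import java.util.concurrent.TimeUnit;
import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;

// Client-side keepalive so a dead peer is noticed even on an otherwise idle
// long-lived stream; the values below are arbitrary examples, not recommendations.
ManagedChannel channel = NettyChannelBuilder
        .forAddress("example.com", 443)           // hypothetical target
        .keepAliveTime(30, TimeUnit.SECONDS)      // send a ping after 30s of inactivity
        .keepAliveTimeout(10, TimeUnit.SECONDS)   // close the transport if no ping ack within 10s
        .keepAliveWithoutCalls(true)              // keep pinging even while no RPC is active
        .build();
```

Note that the server may need to permit such pings (e.g. permitKeepAliveWithoutCalls / permitKeepAliveTime on the server builder), otherwise it can close the connection with a "too_many_pings" GOAWAY.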

Cheers!
On Monday, October 18, 2021 at 6:38:38 PM UTC+7 IIFE wrote:

> Hi,
>
> I've got a sync C++ client that opens a long lived bidirectional stream to 
> a sync C++ server. I'm seeing an issue where, if I keep my client open and 
> then put my machine to sleep, the connection is terminated after a number 
> of hours. Here are the logs I got.
>
> I connected the client at 00:25, and then got the following log:
> 18/10/2021 03:07:48.578
> File: src\core\ext\transport\chttp2\transport\chttp2_transport.cc
> Line: 2871
> Message: ipv4:127.0.0.1:51656: Keepalive watchdog fired. Closing 
> transport.
>
> Then at the time I woke up my machine from sleep mode, I saw the following 
> log:
> 18/10/2021 09:22:15.372
> src\core\lib\security\transport\security_handshaker.cc
> Line: 183
> Message: Security handshake failed: 
> {"created":"@1634545335.37200","description":"Handshake read 
> failed","file":"src\17cc203898-db2679e7f2.clean\src\core\lib\security\transport\security_handshaker.cc","file_line":425,"referenced_errors":[{"created":"@1634545335.37200","description":"TCP
>  
> stream shutting 
> down","file":"src\17cc203898-db2679e7f2.clean\src\core\lib\iomgr\tcp_windows.cc","file_line":228,"referenced_errors":[{"created":"@1634545334.50300","description":"Handshake
>  
> timed 
> out","file":"src\17cc203898-db2679e7f2.clean\src\core\lib\channel\handshaker.cc","file_line":163}]}]}
>
> I haven't set GRPC_ARG_KEEPALIVE_TIME_MS on the client channel, which 
> according to the documentation is disabled by default. Do I need to 
> configure keepalive settings on the client to resolve this issue, or is 
> there something else going on?
>
> Thanks
>
>



[grpc-io] Re: One client many servers

2021-10-18 Thread Piotr Morgwai Kotarbinski
If all the function calls are unary, then it seems to me it would probably 
be easier to use some existing publish-subscribe system rather than bare 
gRPC.
In such a setup, each function would have its own "requests topic", to 
which the caller (the PC in your case) would publish request messages and 
which the subscribed devices would process. Replies from the devices could 
either be sent to the function's corresponding "replies topic" (actually, 
for replies a queue may be better than a topic) to which the caller is 
subscribed, OR directly to the caller via a unary call (gRPC or other) with 
a void return type.
Alternatively, there could be just one topic for all the functions, and 
each request message would contain a function id and a "polymorphic" list 
of arguments specific to the given function.

If some of the calls are streaming, then gRPC is probably still your best 
option, but it may depend on how tightly the request and reply streams are 
related (for example, whether replies from devices will have an impact on 
the content of subsequent request messages, and whether reply messages from 
a given device impact subsequent request messages for that device only or 
for all devices).
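To make the "one topic for all functions" variant concrete, here is a toy, in-memory sketch; a real broker would replace the map and queue, and all names are made up:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.function.Function;

// Toy in-memory stand-in for the single-topic layout: every request carries a
// function id plus a "polymorphic" argument list; devices subscribe handlers
// per function id, and replies land on a shared reply queue.
public class FunctionTopic {

    public record Request(String functionId, List<Object> args) {}

    private final Map<String, Function<List<Object>, Object>> handlers = new HashMap<>();
    private final Queue<Object> replyQueue = new ArrayDeque<>();  // the "replies queue"

    /** A device registers how it executes the given function. */
    public void subscribe(String functionId, Function<List<Object>, Object> handler) {
        handlers.put(functionId, handler);
    }

    /** The caller (the PC) publishes a request; the subscribed handler runs and replies. */
    public void publish(Request request) {
        var handler = handlers.get(request.functionId());
        if (handler != null) replyQueue.add(handler.apply(request.args()));
    }

    public Object pollReply() { return replyQueue.poll(); }
}
```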

Cheers!

On Monday, October 18, 2021 at 11:09:45 PM UTC+7 Fabiano Ferronato wrote:

> I have a problem to solve: one computer (PC) will send requests to many 
> devices (e.g. RPi). The devices will execute the request and respond.
>
> Is it possible to use gRPC ? 
>
> From the documentation (Introduction) it shows the opposite: clients 
> sending requests to a server. So maybe I'm going the wrong way choosing 
> gRPC.
>
> Any help is much appreciated.
>
>
>  
>



[grpc-io] Re: Can i get socket channel?

2021-10-15 Thread Piotr Morgwai Kotarbinski
I think you may get better results on some Netty mailing lists/forums, as 
your question is not really gRPC-specific as currently stated.

Cheers!
On Wednesday, October 13, 2021 at 9:41:06 AM UTC+7 10367...@qq.com wrote:

> hello, everyone.
> I met a problem recently: how can I get the socket channel?
> For example, I want to get the SocketChannel object in 
> ChannelInitializer#initChannel.
> Could a public method be provided to expose it?
>
>
>



[grpc-io] Re: Is support for "real" GRPC in browsers through Streams API still ongoing?

2021-08-25 Thread Piotr Morgwai Kotarbinski
Hi,
This seems like a major (sub-)project, so, unless Google already has a plan 
for this, it seems unlikely that Google will dedicate resources to it just 
because a few people starred a feature request on GitHub ;-)
Hence, knowing whether Google has such plans may be useful for people from 
the open-source community who would consider dedicating their time to 
developing this themselves (i.e., if Google already has a plan for this, 
then it probably does not make sense to duplicate the effort).

Cheers!

On Thursday, August 26, 2021 at 12:55:53 AM UTC+7 zi...@google.com wrote:

> Thank you for asking. If this feature looks interesting but is not yet 
> available in open source, I would encourage you to open a feature-request 
> issue on GitHub, adding your use cases and any suggestions.
> Once it is available, you would see it in the release notes and here as well.
>
> On Tuesday, August 24, 2021 at 8:01:36 AM UTC-7 Fabio Monte wrote:
>
>>
>> I dare BUMP this thread. Thanks for any insights from "insiders" kinda. 
>> Thanks
>> On Wednesday, August 18, 2021 at 2:38:19 PM UTC+2 Fabio Monte wrote:
>>
>>> Hi again,
>>>
>>> Following my question https://groups.google.com/g/grpc-io/c/RUG-nzLXSWY, 
>>> and still reading the docs/forums/GitHub etc., I dare ask in a clearer 
>>> way:
>>>  ---> Is support for "native" / "real" gRPC in browsers (and not the 
>>> approximation we currently have with PROTOCOL-WEB.md / gRPC-Web plus the 
>>> need to rely on a proxy for translation) still an active path of 
>>> development currently followed by the community / Google? According to 
>>> the docs, it was at least waiting on the release of the Streams API 
>>> (which landed quite some time ago, particularly in the Chrome family), 
>>> since browsers were lacking the primitives needed to implement gRPC 
>>> fully.
>>> Are the chokepoints in particular places?
>>>
>>> Thanks for any clarification.
>>>
>>



[grpc-io] Re: java: client cancels, but StatusRuntimeException not thrown on the server

2021-08-16 Thread Piotr Morgwai Kotarbinski
I've looked into the source and now I can see why it happens. I think it's 
a bug, so I filed an issue with an exact description of what should be 
fixed: https://github.com/grpc/grpc-java/issues/8409




[grpc-io] java: client cancels, but StatusRuntimeException not thrown on the server

2021-08-16 Thread Piotr Morgwai Kotarbinski
Hi all,
I have the simplest possible server-streaming method:

service CancelService {
    rpc fun(Empty) returns (stream Empty) {}
}

an implementation that sleeps for 1s and then sends reply:

public class CancelService extends CancelServiceImplBase {

    @Override
    public void fun(Empty request, StreamObserver<Empty> basicResponseObserver) {
        final var responseObserver = (ServerCallStreamObserver<Empty>) basicResponseObserver;
        try {
            Thread.sleep(1000l);  // longer than client's deadline
            System.out.println("isCancelled: " + responseObserver.isCancelled());
            responseObserver.onNext(request);
            responseObserver.onCompleted();
            System.out.println("completed successfully");
        } catch (StatusRuntimeException e) {
            System.out.println("StatusRuntimeException" + e);
        } catch (Throwable t) {
            System.out.println("server error" + t);
            responseObserver.onError(Status.INTERNAL.withCause(t).asException());
            if (t instanceof Error) throw (Error) t;
        }
    }

    public static void main(String[] args) throws Exception {
        final Server server = NettyServerBuilder
                .forPort()
                .addService(new CancelService())
                .build()
                .start();
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try { server.shutdown().awaitTermination(5, TimeUnit.SECONDS); } catch (Exception e) {}
        }));
        System.out.println("server started");
        server.awaitTermination();
    }
}

and a client that sets deadline to 0.5s:

public static void main(String[] args) throws Exception {
    final var channel = ManagedChannelBuilder
            .forTarget("localhost:")
            .usePlaintext()
            .build();
    final var connector = CancelServiceGrpc.newBlockingStub(channel)
            .withDeadlineAfter(500l, TimeUnit.MILLISECONDS);
    final var results = connector.fun(Empty.newBuilder().build());
    while (results.hasNext()) {
        System.out.println("got result: " + results.next());
    }
    System.out.println("call successful");
}

When run, on the client side I get an exception as expected:

Exception in thread "main" io.grpc.StatusRuntimeException: 
DEADLINE_EXCEEDED: deadline exceeded (...)

On the server, however, responseObserver.isCancelled() returns true, but no 
StatusRuntimeException is thrown when calling onNext(...) and 
onCompleted(), and the call finishes as normal:

server started
isCancelled: true
completed successfully

Is this expected behavior? I thought a StatusRuntimeException with 
Status.CANCELLED should be thrown, no? (Unless an onCancelHandler is set, 
which is not the case here.)

To make things more confusing, if I dispatch work in fun(...) to some 
executor like this:

ThreadPoolExecutor executor =
        new ThreadPoolExecutor(3, 3, 0, TimeUnit.DAYS, new LinkedBlockingQueue<>());

@Override
public void fun(Empty request, StreamObserver<Empty> basicResponseObserver) {
    final var responseObserver = (ServerCallStreamObserver<Empty>) basicResponseObserver;
    executor.execute(() -> {
        try {
            Thread.sleep(1000l);  // longer than client's deadline
            System.out.println("isCancelled: " + responseObserver.isCancelled());
            responseObserver.onNext(request);
            responseObserver.onCompleted();
            System.out.println("completed successfully");
        } catch (StatusRuntimeException e) {
            System.out.println("StatusRuntimeException" + e);
        } catch (Throwable t) {
            System.out.println("server error" + t);
            responseObserver.onError(Status.INTERNAL.withCause(t).asException());
            if (t instanceof Error) throw (Error) t;
        }
    });
}


then I do get a StatusRuntimeException:

server started
isCancelled: true
StatusRuntimeExceptionio.grpc.StatusRuntimeException: CANCELLED: call 
already cancelled. Use ServerCallStreamObserver.setOnCancelHandler() to 
disable this exception

Shouldn't the behavior be consistent in both of these cases?

Thanks!
 



[grpc-io] Re: java: deadlock in a server-streaming RPC

2021-06-05 Thread Piotr Morgwai Kotarbinski
OK, solved it:
the deadlock was caused by the fact that a Listener is guaranteed to be 
called by at most 1 thread concurrently. Therefore, after I blocked a thread 
at line 50 
(https://github.com/morgwai/grpc-deadlock/blob/master/src/main/java/pl/morgwai/samples/grpc/deadlock/EchoService.java#L50),
that thread was still holding the Listener's lock (the user method is 
called by Listener.onHalfClose() in the case of unary clients), so no other 
thread could possibly call Listener.onReady(), which calls the 
onReadyHandler that would have notified the first, blocked thread.
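The mechanism can be demonstrated without gRPC at all: model the Listener as a single-threaded, serialized executor and block inside one of its callbacks (the class name and the 500ms timeout are made up for the demo):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Pure-Java rendition of the deadlock above: the stream Listener behaves like a
// single-threaded serialized executor, so blocking inside one callback
// ("onHalfClose") prevents the callback that would unblock it ("onReady")
// from ever running.
public class SerializedCallbackDeadlock {

    public static boolean demonstrate() throws Exception {
        var listenerThread = Executors.newSingleThreadExecutor();
        var ready = new CountDownLatch(1);
        // "onHalfClose()": blocks, waiting for a signal from "onReady()"
        var call = listenerThread.submit(() -> ready.await(500, TimeUnit.MILLISECONDS));
        // "onReady()": queued behind the blocked callback, so it can never start
        listenerThread.submit(ready::countDown);
        boolean gotSignal = call.get();  // false: the wait timed out
        listenerThread.shutdownNow();
        return gotSignal;
    }
}
```

With a real deadlock there is no timeout, so the call hangs forever instead of merely timing out.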

I've refactored the code to never block the thread, by doing the actual 
work in the onReadyHandler, so now it works fine:

public void multiEcho(EchoRequest verbalVomit, StreamObserver<EchoResposne> responseObserver) {
    log.fine("someone has just emitted an inconsiderated verbal vomit");
    int[] repsRemainingHolder = { verbalVomit.getReps() };
    var echoObserver = (ServerCallStreamObserver<EchoResposne>) responseObserver;
    echoObserver.setOnReadyHandler(() -> {
        log.finer("sink ready");
        try {
            while (
                repsRemainingHolder[0] > 0
                && echoObserver.isReady()
                && ! echoObserver.isCancelled()
            ) {
                // multiply the content to fill the buffer faster
                var echoBuilder = new StringBuilder();
                for (int j = 0; j < MULTIPLY_FACTOR; j++) echoBuilder.append(verbalVomit);
                var echoedVomit =
                        EchoResposne.newBuilder().setEchoedVomit(echoBuilder.toString()).build();

                if (log.isLoggable(Level.FINEST)) log.finest("echo");
                repsRemainingHolder[0]--;
                echoObserver.onNext(echoedVomit);
            }
            if (echoObserver.isCancelled()) {
                log.fine("client cancelled the call 2");
                return;
            }
            if (repsRemainingHolder[0] == 0) {
                echoObserver.onCompleted();
                log.fine("done");
                return;
            }
            log.finer("sink clogged at rep "
                    + (verbalVomit.getReps() - repsRemainingHolder[0] + 1));
        } catch (StatusRuntimeException e) {
            if (e.getStatus().getCode() == Code.CANCELLED) {
                log.fine("client cancelled the call 1");
            } else {
                log.severe("server error: " + e);
                e.printStackTrace();
            }
        } catch (Exception e) {
            log.severe("server error: " + e);
            e.printStackTrace();
            echoObserver.onError(Status.INTERNAL.withCause(e).asException());
        }
    });
}

(hope the formatting will be ok this time ;-] )

The above code is in the branch named 'solution' of the previously mentioned 
github repo: 
https://github.com/morgwai/grpc-deadlock/blob/solution/src/main/java/pl/morgwai/samples/grpc/deadlock/EchoService.java

Cheers!






[grpc-io] Re: java: deadlock in a server-streaming RPC

2021-06-05 Thread Piotr Morgwai Kotarbinski
I was trying this with grpc-1.38.0 on openjdk-11 on ubuntu-18.04 in case it 
matters.



[grpc-io] Re: java: deadlock in a server-streaming RPC

2021-06-05 Thread Piotr Morgwai Kotarbinski
sorry for the formatting, let me try again:
I have the following server-streaming method:

> public void multiEcho(EchoRequest verbalVomit, StreamObserver<EchoResposne> responseObserver) {
>     log.fine("someone has just emitted an inconsiderated verbal vomit");
>     var callMonitor = new Object();
>     var echoObserver = (ServerCallStreamObserver<EchoResposne>) responseObserver;
>     echoObserver.setOnReadyHandler(() -> {
>         log.finer("sink ready");
>         synchronized (callMonitor) { callMonitor.notifyAll(); }
>     });
>     echoObserver.setOnCancelHandler(() -> {
>         log.fine("client cancelled the call 1");
>         synchronized (callMonitor) { callMonitor.notifyAll(); }
>     });
>
>     try {
>         for (int i = 1; i <= verbalVomit.getReps(); i++) {
>             if (echoObserver.isCancelled()) {
>                 log.fine("client cancelled the call 2");
>                 return;
>             }
>             synchronized (callMonitor) {
>                 while ( ! echoObserver.isReady()) {
>                     log.finer("sink clogged at rep " + i);
>                     callMonitor.wait();
>                 }
>             }
>
>             // multiply the content to fill the buffer faster
>             var echoBuilder = new StringBuilder();
>             for (int j = 0; j < MULTIPLY_FACTOR; j++) {
>                 echoBuilder.append(verbalVomit.getInconsideratedVerbalVomit());
>             }
>             var echoedVomit =
>                     EchoResposne.newBuilder().setEchoedVomit(echoBuilder.toString()).build();
>
>             if (log.isLoggable(Level.FINEST)) log.finest("echo");
>             echoObserver.onNext(echoedVomit);
>         }
>         echoObserver.onCompleted();
>     } catch (StatusRuntimeException e) {
>         if (e.getStatus().getCode() == Code.CANCELLED) {
>             log.fine("client cancelled the call 3");
>         } else {
>             log.severe("server error: " + e);
>             e.printStackTrace();
>         }
>     } catch (Exception e) {
>         log.severe("server error: " + e);
>         e.printStackTrace();
>         echoObserver.onError(Status.INTERNAL.withCause(e).asException());
>     }
> }

I create the server this way:

> echoServer = NettyServerBuilder
> .forPort(port)
> .maxConnectionAge(10, TimeUnit.MINUTES)
> .maxConnectionAgeGrace(12, TimeUnit.HOURS)
> .addService(new EchoService())
> .build();

and the client looks like this:

> var connector = EchoServiceGrpc.newBlockingStub(channel);
> var request = EchoRequest
> .newBuilder()
> .setInconsideratedVerbalVomit(
> "blhh" +
> "")  // 64B
> .setReps(100)
> .build();
> var vomitIterator = connector.multiEcho(request);
> while (vomitIterator.hasNext()) {
> vomitIterator.next();
> System.out.println("got echo");
> }

and it dead-locks after just a few messages  ;-]

server output looks like this:

> started gRPC EchoServer on port 
> Jun 05, 2021 8:22:52 P.M. pl.morgwai.samples.grpc.deadlock.EchoService 
multiEcho
> FINE: someone has just emitted an inconsiderated verbal vomit
> Jun 05, 2021 8:22:53 P.M. pl.morgwai.samples.grpc.deadlock.EchoService 
multiEcho
> FINER: sink clogged at rep 7

...and clients:

> got echo
> got echo
> got echo
> got echo
> got echo
> got echo

...and they both hang indefinitely :(

am I doing something wrong or is it a bug?

The interesting part is that if I use a direct executor in the server and 
dispatch the work to a separate executor, then it works without any 
problems (that's what I usually do, so I've never encountered this problem 
before), i.e.:

> public void multiEcho(EchoRequest verbalVomit, StreamObserver<EchoResposne> responseObserver) {
>     log.fine("someone has just emitted an inconsiderated verbal vomit");
>     var callMonitor = new Object();
>     var echoObserver = (ServerCallStreamObserver<EchoResposne>) responseObserver;
>     echoObserver.setOnReadyHandler(() -> {
>         log.finer("sink ready");
>         synchronized (callMonitor) { callMonitor.notifyAll(); }
>     });
>     echoObserver.setOnCancelHandler(() -> {
>         log.fine("client cancelled the call 1");
>         synchronized (callMonitor) { callMonitor.notifyAll(); }
>     });
>
>     cpuIntensiveOpExecutor.execute(() -> {
>         try {
>             for (int i = 1; i <= verbalVomit.getReps(); i++) {
(...)

and

> echoServer = NettyServerBuilder
>     .forPort(port)
>     .maxConnectionAge(10, TimeUnit.MINUTES)
>     .maxConnectionAgeGrace(12, TimeUnit.HOURS)
>     .addService(new EchoService())
(...)

[grpc-io] java: deadlock in a server-streaming RPC

2021-06-05 Thread Piotr Morgwai Kotarbinski
Hi all,
I have the following server-streaming method:

> public void multiEcho(EchoRequest verbalVomit, StreamObserver<EchoResponse> 
> responseObserver) {
> log.fine("someone has just emitted an inconsiderated verbal vomit");
> var callMonitor = new Object();
> var echoObserver = (ServerCallStreamObserver<EchoResponse>) 
> responseObserver;
> echoObserver.setOnReadyHandler(() -> {
> log.finer("sink ready");
> synchronized (callMonitor) { callMonitor.notifyAll(); }
> });
> echoObserver.setOnCancelHandler(() -> {
> log.fine("client cancelled the call 1");
> synchronized (callMonitor) { callMonitor.notifyAll(); }
> });
> 
> try {
> for (int i = 1; i <= verbalVomit.getReps(); i++) {
> if (echoObserver.isCancelled()) {
> log.fine("client cancelled the call 2");
> return;
> }
> synchronized (callMonitor) {
> while( ! echoObserver.isReady()) {
> log.finer("sink clogged at rep " + i);
> callMonitor.wait();
> }
> }
> 
> // multiply the content to fill the buffer faster
> var echoBuilder = new StringBuilder();
> for (int j = 0; j < MULTIPLY_FACTOR; j++) {
> 
> echoBuilder.append(verbalVomit.getInconsideratedVerbalVomit());
> }
> var echoedVomit =
> 
> EchoResponse.newBuilder().setEchoedVomit(echoBuilder.toString()).build();
> 
> if (log.isLoggable(Level.FINEST)) log.finest("echo");
> echoObserver.onNext(echoedVomit);
> }
> echoObserver.onCompleted();
> } catch (StatusRuntimeException e) {
> if (e.getStatus().getCode() == Code.CANCELLED) {
> log.fine("client cancelled the call 3");
> } else {
> log.severe("server error: " + e);
> e.printStackTrace();
> }
> } catch (Exception e) {
> log.severe("server error: " + e);
> e.printStackTrace();
> echoObserver.onError(Status.INTERNAL.withCause(e).asException());
> }
> }

I create the server this way:

> echoServer = NettyServerBuilder
>   .forPort(port)
>   .maxConnectionAge(10, TimeUnit.MINUTES)
>   .maxConnectionAgeGrace(12, TimeUnit.HOURS)
>   .addService(new EchoService())
>   .build();

and the client looks like this:

> var connector = EchoServiceGrpc.newBlockingStub(channel);
> var request = EchoRequest
>   .newBuilder()
>   .setInconsideratedVerbalVomit(
>   "blhh" +
>   "")  // 64B
>   .setReps(100)
>   .build();
> var vomitIterator = connector.multiEcho(request);
> while (vomitIterator.hasNext()) {
>   vomitIterator.next();
>   System.out.println("got echo");
> }

and it deadlocks after just a few messages  ;-]

server output looks like this:

> started gRPC EchoServer on port 
> Jun 05, 2021 8:22:52 P.M. pl.morgwai.samples.grpc.deadlock.EchoService 
> multiEcho
> FINE: someone has just emitted an inconsiderated verbal vomit
> Jun 05, 2021 8:22:53 P.M. pl.morgwai.samples.grpc.deadlock.EchoService 
> multiEcho
> FINER: sink clogged at rep 7

...and clients:

> got echo
> got echo
> got echo
> got echo
> got echo
> got echo

...and they both hang indefinitely :(

Am I doing something wrong, or is it a bug?

The interesting part is that if I use direct executor in the server and 
dispatch the work to a separate executor, then it works without any problems 
(that's what I usually do, so I've never encountered this problem before).
i.e.: 

> public void multiEcho(EchoRequest verbalVomit, StreamObserver<EchoResponse> 
> responseObserver) {
> log.fine("someone has just emitted an inconsiderated verbal vomit");
> var callMonitor = new Object();
> var echoObserver = (ServerCallStreamObserver<EchoResponse>) 
> responseObserver;
> echoObserver.setOnReadyHandler(() -> {
> log.finer("sink ready");
> synchronized (callMonitor) { callMonitor.notifyAll(); }
> });
> echoObserver.setOnCancelHandler(() -> {
> log.fine("client cancelled the call 1");
> synchronized (callMonitor) { callMonitor.notifyAll(); }
> });
> 
> cpuIntensiveOpExecutor.execute(() -> {
> try {
> for (int i = 1; i <= verbalVomit.getReps(); i++) {
(...)

and

> echoServer = NettyServerBuilder
>   .forPort(port)
>   .maxConnectionAge(10, TimeUnit.MINUTES)
>   .maxConnectionAgeGrace(12, TimeUnit.HOURS)
>   .addService(new EchoService())
>   .directExecutor()
>   .build();
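
The working variant above (dispatching the loop off the gRPC executor) hints at the likely constraint: with the default executor, the onReadyHandler callback is queued on the same serializing executor that is blocked in `callMonitor.wait()`, so it can never fire. The alternative to blocking is to drive sends from the onReadyHandler itself, draining while `isReady()` and returning otherwise. Below is a minimal, self-contained model of that drain loop; `MockCallObserver`, `grantWindow`, and friends are invented stand-ins, not the real grpc-java API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Minimal single-threaded model of an onReadyHandler-driven send loop:
 * instead of blocking in wait() until isReady(), the handler itself drains
 * as many messages as the sink will accept and then simply returns.
 */
class OnReadyDrainSketch {

    /** Stand-in for ServerCallStreamObserver: hypothetical, not the real API. */
    static class MockCallObserver {
        final List<String> sent = new ArrayList<>();
        int readyBudget = 0;          // messages that fit before isReady() turns false
        Runnable onReadyHandler;

        boolean isReady() { return readyBudget > 0; }
        void onNext(String msg) { sent.add(msg); readyBudget--; }

        /** The "transport" granting more window re-fires the handler. */
        void grantWindow(int n) { readyBudget += n; onReadyHandler.run(); }
    }

    static List<String> streamAll(MockCallObserver observer, int total) {
        AtomicInteger next = new AtomicInteger(1);
        observer.onReadyHandler = () -> {
            // Drain while the sink is ready; never block waiting for it.
            while (observer.isReady() && next.get() <= total) {
                observer.onNext("echo-" + next.getAndIncrement());
            }
        };
        observer.onReadyHandler.run();   // initial kick, mimicking the first callback
        return observer.sent;
    }
}
```

The key property: no thread ever parks waiting for readiness, so a single-threaded executor can still deliver the readiness notifications.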

A full working example (dead-locking that is) can be found on github: 
https://github.com/morgwai/grpc-deadlock

Any hints will be much appreciated :)

Thanks!

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an 
email to grpc-io+unsubscr...@googlegroups.com.

Re: [grpc-io] Re: abstract gRPC protocol clarification: client half-closing

2021-06-01 Thread Piotr Morgwai Kotarbinski
Thanks for the answer! I've created a tiny PR with docs 
update: https://github.com/grpc/grpc/pull/26396

Thanks!

On Monday, May 31, 2021 at 12:49:50 AM UTC+7 sanjay...@google.com wrote:

> https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#requests 
> mentions EOS which is described as 
>
> "For requests, EOS (end-of-stream) is indicated by the presence of the 
> END_STREAM flag on the last received DATA frame. In scenarios where the 
> Request stream needs to be closed but no data remains to be sent 
> implementations MUST send an empty DATA frame with this flag set."
>
>
> On Sun, May 30, 2021 at 7:18 AM Piotr Morgwai Kotarbinski <
> mor...@gmail.com> wrote:
>
>> I've noticed that in case of the java client code, calling 
>> `requestObserver.onCompleted()` results in a call to `
>> io.netty.handler.codec.http2.DefaultHttp2FrameWriter.writeData(...) 
>> <https://github.com/netty/netty/blob/18e92304a700c1b3664f5a3cf24ee3ed58bafbdd/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java#L136>`
>>  
>> with `endStream` param set to `true` in case of netty, and in case of 
>> okHttp in a call to `io.grpc.okhttp.internal.framed.Http2.Writer.data() 
>> <https://github.com/grpc/grpc-java/blob/d4e90a78fdb6af6579364c4b622edbfad9ebf37f/okhttp/third_party/okhttp/main/java/io/grpc/okhttp/internal/framed/Http2.java#L493>`
>>  
>> with `outFinished` param set to `true`. In both cases this results in 
>> `END_STREAM` flag to be set, so it seems my guess was correct. However some 
>> confirmation from someone more familiar with the protocol internals would 
>> be welcomed :)
>>
>> Thanks!
>>
>> On Sunday, May 30, 2021 at 2:26:51 PM UTC+7 Piotr Morgwai Kotarbinski 
>> wrote:
>>
>>> Hi all,
>>> I was reading the protocol overview here 
>>> https://github.com/grpc/grpc/blob/master/CONCEPTS.md#abstract-grpc-protocol
>>> One thing that is unclear to me is how a streaming client signals end of 
>>> its stream to a server (so called 'client half-closing'). The abstract part 
>>> says nothing about it. My only guess that in case of HTTP/2 implementation 
>>> clients basically close their input stream, but not sure about it. Could 
>>> someone with a deep knowledge of the protocol clarify this please?
>>>
>>> Thanks!
>>>



[grpc-io] Re: abstract gRPC protocol clarification: client half-closing

2021-05-30 Thread Piotr Morgwai Kotarbinski
I've noticed that in the case of the Java client code, calling 
`requestObserver.onCompleted()` results, with Netty, in a call to 
`io.netty.handler.codec.http2.DefaultHttp2FrameWriter.writeData(...) 
<https://github.com/netty/netty/blob/18e92304a700c1b3664f5a3cf24ee3ed58bafbdd/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java#L136>` 
with the `endStream` param set to `true`, and, with okHttp, in a call to 
`io.grpc.okhttp.internal.framed.Http2.Writer.data() 
<https://github.com/grpc/grpc-java/blob/d4e90a78fdb6af6579364c4b622edbfad9ebf37f/okhttp/third_party/okhttp/main/java/io/grpc/okhttp/internal/framed/Http2.java#L493>` 
with the `outFinished` param set to `true`. In both cases this results in the 
`END_STREAM` flag being set, so it seems my guess was correct. However, some 
confirmation from someone more familiar with the protocol internals would 
be welcome :)

Thanks!
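
For reference, the half-close signal discussed above is an empty HTTP/2 DATA frame with the END_STREAM flag set, so its entire wire image is just the 9-octet frame header. A sketch of those bytes following the HTTP/2 framing layout (illustrative only, not taken from any gRPC implementation):

```java
/**
 * Sketch of the wire bytes for the empty DATA frame with END_STREAM set
 * that signals client half-close, per the HTTP/2 framing layout:
 * Length(24) | Type(8) | Flags(8) | R(1)+StreamId(31).
 */
class Http2FrameSketch {
    static byte[] emptyDataFrameWithEndStream(int streamId) {
        byte[] frame = new byte[9];      // header only: the payload length is 0
        // frame[0..2]: 24-bit length, all zero for an empty payload
        frame[3] = 0x0;                  // type: DATA
        frame[4] = 0x1;                  // flags: END_STREAM
        frame[5] = (byte) ((streamId >>> 24) & 0x7F);  // reserved bit cleared
        frame[6] = (byte) (streamId >>> 16);
        frame[7] = (byte) (streamId >>> 8);
        frame[8] = (byte) streamId;
        return frame;
    }
}
```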

On Sunday, May 30, 2021 at 2:26:51 PM UTC+7 Piotr Morgwai Kotarbinski wrote:

> Hi all,
> I was reading the protocol overview here 
> https://github.com/grpc/grpc/blob/master/CONCEPTS.md#abstract-grpc-protocol
> One thing that is unclear to me is how a streaming client signals end of 
> its stream to a server (so called 'client half-closing'). The abstract part 
> says nothing about it. My only guess that in case of HTTP/2 implementation 
> clients basically close their input stream, but not sure about it. Could 
> someone with a deep knowledge of the protocol clarify this please?
>
> Thanks!
>



[grpc-io] abstract gRPC protocol clarification: client half-closing

2021-05-30 Thread Piotr Morgwai Kotarbinski
Hi all,
I was reading the protocol overview here 
https://github.com/grpc/grpc/blob/master/CONCEPTS.md#abstract-grpc-protocol
One thing that is unclear to me is how a streaming client signals the end of 
its stream to a server (so-called 'client half-closing'). The abstract part 
says nothing about it. My only guess is that in the case of the HTTP/2 
implementation clients basically close their input stream, but I'm not sure 
about it. Could someone with a deep knowledge of the protocol clarify this, 
please?

Thanks!



[grpc-io] Re: Example on using inproc transport rather than http2

2021-05-11 Thread Piotr Morgwai Kotarbinski
I think no gRPC language implementation supports a "pluggable" transport 
mechanism, so you would need to customize internal code (I may be wrong 
though).
Some time ago some awesome folks developed gRPC over USB: you can have 
a look at the thread, there's a link in there somewhere: 
https://groups.google.com/g/grpc-io/c/ZS7yqRRfviY/m/1FGWMzT3CwAJ
In Java you would probably need to customize ServerBuilder and 
ManagedChannelBuilder.

Cheers!

On Tuesday, May 11, 2021 at 11:55:41 PM UTC+7 pei...@gmail.com wrote:

> Hello,
> I'm new to gRPC and exploring the features. I'm interested in different 
> transport plugins. How can I write a simple app (both client and server) 
> using inproc transport rather than http2? Is there any sample test case 
> available?
>
> Thanks.
>



Re: [grpc-io] Re: java concurrent ordered response buffer

2021-05-11 Thread Piotr Morgwai Kotarbinski
pushed both classes (and some others) to central: 
https://search.maven.org/artifact/pl.morgwai.base/grpc-utils/1.0-alpha1/jar

Cheers!
On Thursday, May 6, 2021 at 1:58:01 AM UTC+7 Piotr Morgwai Kotarbinski 
wrote:

> to fulfill the void left by my previous message  ;-)  here is 
> ConcurrentRequestObserver which eases up developing bi-di streaming 
> methods that dispatch work to other threads but don't need to preserve 
> order of responses:
>
>
> https://github.com/morgwai/grpc-utils/blob/master/src/main/java/pl/morgwai/base/grpc/utils/ConcurrentRequestObserver.java
>
> it handles all the synchronization and manual flow control to maintain the 
> desired concurrency level while also preventing an excessive response 
> messages buffering.
> This one is gRPC specific as  websockets do not have half-closing and 
> manual flow control, so not much left to ease-up there  ;-]
>
> Cheers!
>
> On Friday, April 30, 2021 at 4:10:25 PM UTC+7 Piotr Morgwai Kotarbinski 
> wrote:
>
>> please note that this class should only be used if the response messages 
>> order requirement cannot be dropped: if you control a given proto interface 
>> definition, then it's more efficient to add some unique id to request 
>> messages, include it in response messages and send them as soon as they are 
>> produced, so nothing needs to be buffered.
>>
>> Cheers!
>>
>> On Friday, April 30, 2021 at 8:59:11 AM UTC+7 nitish bhardwaj wrote:
>>
>>>
>>> Thanks for contributing  *OrderedConcurrentOutputBuffer. * I totally 
>>> agree, this would be really useful to utilize cores efficiently.
>>>
>>>
>>> On Wednesday, April 28, 2021 at 1:27:46 PM UTC+5:30 mor...@gmail.com 
>>> wrote:
>>>
>>>> Thanks :)
>>>>
>>>> I was surprised actually, because I thought parallel request message 
>>>> processing was a common use-case, both for gRPC and websockets
>>>> (for example if we have a service that does some single-threaded 
>>>> graphic processing on received images and sends back a modified version of 
>>>> a given image, it would be most efficient to dispatch the processing to a 
>>>> thread pool with a size corresponding to available CPU/GPU cores, right? 
>>>> As 
>>>> processing them sequentially would utilize just 1 core per request stream, 
>>>> so in case of low number of concurrent request streams, we would be 
>>>> underutilizing the cores).
>>>>
>>>> Cheers! 
>>>>
>>>> On Wednesday, April 28, 2021 at 6:52:08 AM UTC+7 Eric Anderson wrote:
>>>>
>>>>> Yeah, we don't have anything pre-existing that does something like 
>>>>> that; it gets into the specifics of your use-case. Making something 
>>>>> yourself was appropriate. I will say that the strategy used in 
>>>>> OrderedConcurrentOutputBuffer with the Buckets seems really clean.
>>>>>
>>>>> On Thu, Apr 22, 2021 at 9:21 AM Piotr Morgwai Kotarbinski <
>>>>> mor...@gmail.com> wrote:
>>>>>
>>>>>> in case someone needs it also, I've written it myself due to lack of 
>>>>>> answers either here and on SO:
>>>>>>
>>>>>> https://github.com/morgwai/java-utils/blob/master/src/main/java/pl/morgwai/base/utils/OrderedConcurrentOutputBuffer.java
>>>>>> feedback is welcome :)
>>>>>> On Tuesday, April 20, 2021 at 11:09:59 PM UTC+7 Piotr Morgwai 
>>>>>> Kotarbinski wrote:
>>>>>>
>>>>>>> Hello
>>>>>>> i have a stream of messages coming from a websocket or a grpc 
>>>>>>> client. for each message my service produces 0 or more reply messages. 
>>>>>>> by 
>>>>>>> default both websocket endpoints and grpc request observers are 
>>>>>>> guaranteed 
>>>>>>> to be called by maximum 1 thread concurrently, so my replies are sent 
>>>>>>> in 
>>>>>>> the same order as requests. Now I want to dispatch request processing 
>>>>>>> to 
>>>>>>> other threads and process them in parallel, but still keep the order. 
>>>>>>> Therefore, I need some "concurrent ordered response buffer", which will 
>>>>>>> buffer replies to a given request message until processing of previous 
>>>>>>> requests is finished and replies to them are sent (in order they were 
>>>

Re: [grpc-io] Re: java concurrent ordered response buffer

2021-05-05 Thread Piotr Morgwai Kotarbinski
to fulfill the void left by my previous message  ;-)  here is 
ConcurrentRequestObserver, which makes it easier to develop bi-di streaming 
methods that dispatch work to other threads but don't need to preserve the 
order of responses:

https://github.com/morgwai/grpc-utils/blob/master/src/main/java/pl/morgwai/base/grpc/utils/ConcurrentRequestObserver.java

it handles all the synchronization and manual flow control to maintain the 
desired concurrency level while also preventing excessive buffering of 
response messages.
This one is gRPC-specific, as websockets have neither half-closing nor 
manual flow control, so there's not much left to ease up there  ;-]

Cheers!
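
The flow-control scheme described above (keep a desired number of request messages in flight by requesting one more each time one finishes processing) can be modeled in a few lines. Everything below is an invented, single-threaded mock, not the real ConcurrentRequestObserver API; the real class additionally has to synchronize this across worker threads:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Toy model of concurrency-limited manual flow control: request an initial
 * window of `concurrency` messages, then call request(1) whenever a worker
 * finishes one, so at most `concurrency` messages are in flight.
 */
class ConcurrentFlowControlSketch {
    /** Hypothetical stand-in for the call's request(n) hook. */
    interface Call { void request(int n); }

    private final Call call;
    private int inFlight = 0;
    final List<Integer> inFlightSamples = new ArrayList<>();

    ConcurrentFlowControlSketch(Call call, int concurrency) {
        this.call = call;
        call.request(concurrency);   // open the initial window
    }

    /** A request message arrived and was handed to a worker. */
    void onRequestMessage() { inFlight++; inFlightSamples.add(inFlight); }

    /** A worker finished one message: widen the window by one. */
    void onMessageProcessed() { inFlight--; call.request(1); }
}
```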

On Friday, April 30, 2021 at 4:10:25 PM UTC+7 Piotr Morgwai Kotarbinski 
wrote:

> please note that this class should only be used if the response messages 
> order requirement cannot be dropped: if you control a given proto interface 
> definition, then it's more efficient to add some unique id to request 
> messages, include it in response messages and send them as soon as they are 
> produced, so nothing needs to be buffered.
>
> Cheers!
>
> On Friday, April 30, 2021 at 8:59:11 AM UTC+7 nitish bhardwaj wrote:
>
>>
>> Thanks for contributing  *OrderedConcurrentOutputBuffer. * I totally 
>> agree, this would be really useful to utilize cores efficiently.
>>
>>
>> On Wednesday, April 28, 2021 at 1:27:46 PM UTC+5:30 mor...@gmail.com 
>> wrote:
>>
>>> Thanks :)
>>>
>>> I was surprised actually, because I thought parallel request message 
>>> processing was a common use-case, both for gRPC and websockets
>>> (for example if we have a service that does some single-threaded graphic 
>>> processing on received images and sends back a modified version of a given 
>>> image, it would be most efficient to dispatch the processing to a thread 
>>> pool with a size corresponding to available CPU/GPU cores, right? As 
>>> processing them sequentially would utilize just 1 core per request stream, 
>>> so in case of low number of concurrent request streams, we would be 
>>> underutilizing the cores).
>>>
>>> Cheers! 
>>>
>>> On Wednesday, April 28, 2021 at 6:52:08 AM UTC+7 Eric Anderson wrote:
>>>
>>>> Yeah, we don't have anything pre-existing that does something like 
>>>> that; it gets into the specifics of your use-case. Making something 
>>>> yourself was appropriate. I will say that the strategy used in 
>>>> OrderedConcurrentOutputBuffer with the Buckets seems really clean.
>>>>
>>>> On Thu, Apr 22, 2021 at 9:21 AM Piotr Morgwai Kotarbinski <
>>>> mor...@gmail.com> wrote:
>>>>
>>>>> in case someone needs it also, I've written it myself due to lack of 
>>>>> answers either here and on SO:
>>>>>
>>>>> https://github.com/morgwai/java-utils/blob/master/src/main/java/pl/morgwai/base/utils/OrderedConcurrentOutputBuffer.java
>>>>> feedback is welcome :)
>>>>> On Tuesday, April 20, 2021 at 11:09:59 PM UTC+7 Piotr Morgwai 
>>>>> Kotarbinski wrote:
>>>>>
>>>>>> Hello
>>>>>> i have a stream of messages coming from a websocket or a grpc client. 
>>>>>> for each message my service produces 0 or more reply messages. by 
>>>>>> default 
>>>>>> both websocket endpoints and grpc request observers are guaranteed to be 
>>>>>> called by maximum 1 thread concurrently, so my replies are sent in the 
>>>>>> same 
>>>>>> order as requests. Now I want to dispatch request processing to other 
>>>>>> threads and process them in parallel, but still keep the order. 
>>>>>> Therefore, 
>>>>>> I need some "concurrent ordered response buffer", which will buffer 
>>>>>> replies 
>>>>>> to a given request message until processing of previous requests is 
>>>>>> finished and replies to them are sent (in order they were produced 
>>>>>> within 
>>>>>> each "request bucket").
>>>>>>
>>>>>> I can develop such class myself, but it seems a common case, so I was 
>>>>>> wondering if maybe such thing already exists (to not reinvent the 
>>>>>> wheel). 
>>>>>> however I could not easily find anything on the web nor get any answer 
>>>>>> on 
>>>>>> SO 
>>>>>> <https://stackoverflow.com/questions/67174565/java-concurrent-ordered-re

Re: [grpc-io] Re: java concurrent ordered response buffer

2021-04-30 Thread Piotr Morgwai Kotarbinski
please note that this class should only be used if the response-message 
ordering requirement cannot be dropped: if you control a given proto 
interface definition, then it's more efficient to add some unique id to 
request messages, include it in response messages, and send them as soon as 
they are produced, so nothing needs to be buffered.

Cheers!
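
A toy illustration of the id-tagging alternative described above; all types and field names here are hypothetical and not tied to any real proto:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Tag each request with a unique id, echo it back in the response, and let
 * the consumer correlate by id, so responses may be sent in any order and
 * nothing needs to be buffered for ordering.
 */
class IdCorrelationSketch {
    static class Request {
        final long id; final String payload;
        Request(long id, String payload) { this.id = id; this.payload = payload; }
    }
    static class Response {
        final long id; final String result;
        Response(long id, String result) { this.id = id; this.result = result; }
    }

    /** "Server" side: do the work and echo the request's id back. */
    static Response handle(Request req) {
        return new Response(req.id, req.payload.toUpperCase());
    }

    /** "Client" side: match out-of-order responses to requests by id. */
    static Map<Long, String> collect(Iterable<Response> responses) {
        Map<Long, String> byId = new HashMap<>();
        for (Response r : responses) byId.put(r.id, r.result);
        return byId;
    }
}
```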

On Friday, April 30, 2021 at 8:59:11 AM UTC+7 nitish bhardwaj wrote:

>
> Thanks for contributing  *OrderedConcurrentOutputBuffer. * I totally 
> agree, this would be really useful to utilize cores efficiently.
>
>
> On Wednesday, April 28, 2021 at 1:27:46 PM UTC+5:30 mor...@gmail.com 
> wrote:
>
>> Thanks :)
>>
>> I was surprised actually, because I thought parallel request message 
>> processing was a common use-case, both for gRPC and websockets
>> (for example if we have a service that does some single-threaded graphic 
>> processing on received images and sends back a modified version of a given 
>> image, it would be most efficient to dispatch the processing to a thread 
>> pool with a size corresponding to available CPU/GPU cores, right? As 
>> processing them sequentially would utilize just 1 core per request stream, 
>> so in case of low number of concurrent request streams, we would be 
>> underutilizing the cores).
>>
>> Cheers! 
>>
>> On Wednesday, April 28, 2021 at 6:52:08 AM UTC+7 Eric Anderson wrote:
>>
>>> Yeah, we don't have anything pre-existing that does something like that; 
>>> it gets into the specifics of your use-case. Making something yourself was 
>>> appropriate. I will say that the strategy used in 
>>> OrderedConcurrentOutputBuffer with the Buckets seems really clean.
>>>
>>> On Thu, Apr 22, 2021 at 9:21 AM Piotr Morgwai Kotarbinski <
>>> mor...@gmail.com> wrote:
>>>
>>>> in case someone needs it also, I've written it myself due to lack of 
>>>> answers either here and on SO:
>>>>
>>>> https://github.com/morgwai/java-utils/blob/master/src/main/java/pl/morgwai/base/utils/OrderedConcurrentOutputBuffer.java
>>>> feedback is welcome :)
>>>> On Tuesday, April 20, 2021 at 11:09:59 PM UTC+7 Piotr Morgwai 
>>>> Kotarbinski wrote:
>>>>
>>>>> Hello
>>>>> i have a stream of messages coming from a websocket or a grpc client. 
>>>>> for each message my service produces 0 or more reply messages. by default 
>>>>> both websocket endpoints and grpc request observers are guaranteed to be 
>>>>> called by maximum 1 thread concurrently, so my replies are sent in the 
>>>>> same 
>>>>> order as requests. Now I want to dispatch request processing to other 
>>>>> threads and process them in parallel, but still keep the order. 
>>>>> Therefore, 
>>>>> I need some "concurrent ordered response buffer", which will buffer 
>>>>> replies 
>>>>> to a given request message until processing of previous requests is 
>>>>> finished and replies to them are sent (in order they were produced within 
>>>>> each "request bucket").
>>>>>
>>>>> I can develop such class myself, but it seems a common case, so I was 
>>>>> wondering if maybe such thing already exists (to not reinvent the wheel). 
>>>>> however I could not easily find anything on the web nor get any answer on 
>>>>> SO 
>>>>> <https://stackoverflow.com/questions/67174565/java-concurrent-ordered-response-buffer>
>>>>> . does anyone knows about something like this?
>>>>>
>>>>> Thanks!
>>>>>



[grpc-io] Re: long live grpc issue in bidi stream

2021-04-29 Thread Piotr Morgwai Kotarbinski
it's very hard to figure out what you are trying to achieve and how you 
expect this code to behave: you should narrow the problem down and post a 
minimal example that causes it. Moreover, the code is in some intermediate, 
inconsistent state and wouldn't even compile (the proto has functions 
*GetRequest* and *SendDataToServer*, but then in your code you call 
something named *sendRequest*; *GrpcClient* has something that looks like a 
constructor but is named *GrpcClient2*; etc.), which makes it even harder 
to read.
If you ask random people on the internet for help, you should make it easy 
for them to help you ;-)

Cheers!

On Wednesday, April 28, 2021 at 1:16:23 PM UTC+7 simpl...@gmail.com wrote:

> Hi All
>
> In my application I have a client which sends a stream of records and the 
> server streams back acknowledgments. Things are working fine for the first 
> request; however, for subsequent requests I am getting the errors below. The 
> subsequent request is made after 2 or 3 seconds (GetRequest in the proto file 
> below).
>
> On Server side I am getting
> Cancelling the stream with status Status{code=INTERNAL, description=Too 
> many responses, cause=null}
>
> On client Side I am getting 
>
> CANCELLED: RST_STREAM closed stream. HTTP/2 error code: CANCEL
>
>
> Please find the code:
> *proto file*
> syntax = "proto3";
> option java_multiple_files = true;
> option java_package = "io.grpc.examples.tps";
> option java_outer_classname = "TpsProto";
> option objc_class_prefix = "tps";
>
> package tps;
> // Interface exported by the server.
> service TPS {
>   rpc GetRequest(Request) returns (Acknowledgement) {}// takes data from 
> client and stream data to different grpc server
>   rpc SendDataToServer(stream Request) returns (stream Acknowledgement) {}
> }
>
>  message  Request  {
>   string policyId = 1;
>   string txnId = 2;
>   string clientId = 3;
>   }
>   
>   message Acknowledgement {
>   string txnId = 1;
>   string clientId = 2;
>   }
>   
>
> Client Side code:
>
> public final class GrpcClient {
> private final Semaphore limiter = new Semaphore(1000);
> private final List channels;
> private final List futureStubList;
> StreamObserver str;
> public GrpcClient2(String host, int port) {
> channels = new ArrayList<>();
> futureStubList = new ArrayList<>();
> ManagedChannel channel =null;
>  channel = NettyChannelBuilder.forAddress(host, port)
> 
> .usePlaintext().keepAliveWithoutCalls(true).keepAliveTime(20, TimeUnit.DAYS)
> .build();
>  
>  channels.add(channel);
>  futureStubList.add(TPSGrpc.newStub(channel));
>  str = futureStubList.get(0).sendRequest(new 
> StreamObserver() {
>
>   @Override
>   public void onNext(Acknowledgement value) {
>   // TODO Auto-generated method stub
>   System.out.println(value.getTxnId());
>   
>   }
>   @Override
>   public void onError(Throwable t) {
>   // TODO Auto-generated method stub
>   System.out.println(t.getMessage());
>   
>   }
>   @Override
>   public void onCompleted() {
>   // TODO Auto-generated method stub
>   System.out.println("comp");
>   
>   }
>   });
> }
> public void shutdown() throws InterruptedException {
> }
> public void verifyAsync(Request request) throws InterruptedException {
> limiter.acquire();
>str.onNext(request);
> }
> }
>
>
> Server Side
>
>
> public class Service extends TPSGrpc.TPSImplBase{
>
>
> static Map urlVsGrpcClient = new HashMap<>();
> @Override
> public void getRequest(Request request, StreamObserver 
> responseObserver) {
> Request clone = Request.newBuilder().setClientId(request.getClientId())
> .setTxnId(request.getTxnId()).build();
>
> responseObserver.onNext(Acknowledgement.newBuilder().setTxnId("ab").build());
> responseObserver.onCompleted();
> CompletableFuture.runAsync(new Runnable() {
> @Override
> public void run() {
> for(int i=0;i callNode(GrpcServer.node.get(i), clone);
> }
> }
> });
> }
> @Override
> public StreamObserver sendRequest(StreamObserver 
> responseObserver) {
>  
> return new StreamObserver() {
>
> @Override
> public void onNext(Request value) {
> Acknowledgement ack = 
> Acknowledgement.newBuilder().setClientId(value.getClientId()).
>   setTxnId(value.getTxnId()).build();
> responseObserver.onNext(ack);
> System.out.println(value.getTxnId());
> }
>
> @Override
> public void onError(Throwable t) {
> }
> @Override
> public void onCompleted() {
> responseObserver.onCompleted();
> System.out.println("server comp");
> }
> };
> }
> private void callNode(String node, Request request) {
> GrpcClient client = urlVsGrpcClient.get(node);
> if (client == null) {
> String[] split = node.split(":");
> client = new GrpcClient(split[0], Integer.parseInt(split[1]));
> urlVsGrpcClient.put(node, client);
> }
> try {
> client.verifyAsync(request);
> } catch (InterruptedException e) {
> e.printStackTrace();
> } 
> }
> }
>
>
> Please 

Re: [grpc-io] Re: java concurrent ordered response buffer

2021-04-28 Thread Piotr Morgwai Kotarbinski
Thanks :)

I was surprised actually, because I thought parallel request message 
processing was a common use-case, both for gRPC and websockets
(for example if we have a service that does some single-threaded graphic 
processing on received images and sends back a modified version of a given 
image, it would be most efficient to dispatch the processing to a thread 
pool with a size corresponding to available CPU/GPU cores, right? As 
processing them sequentially would utilize just 1 core per request stream, 
so in case of low number of concurrent request streams, we would be 
underutilizing the cores).

Cheers! 
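
The bucket idea behind OrderedConcurrentOutputBuffer can be modeled minimally as follows. This is a simplified sketch of the ordering logic only, not the real class: the real buffer also streams the head bucket through without buffering, while here everything is held until its bucket closes:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Toy ordered concurrent output buffer: each request reserves a bucket in
 * arrival order; replies buffer per bucket and flush to the output only
 * once every preceding bucket has been closed.
 */
class OrderedBufferSketch {
    private final List<List<String>> buckets = new ArrayList<>();
    private final List<Boolean> closed = new ArrayList<>();
    private final List<String> output = new ArrayList<>();
    private int nextToFlush = 0;

    /** Reserve a bucket for the next request, returning its index. */
    synchronized int addBucket() {
        buckets.add(new ArrayList<>());
        closed.add(false);
        return buckets.size() - 1;
    }

    /** Buffer a reply produced (possibly out of order) for the given bucket. */
    synchronized void write(int bucket, String reply) {
        buckets.get(bucket).add(reply);
    }

    /** Mark a bucket done and flush all consecutive closed buckets from the head. */
    synchronized void close(int bucket) {
        closed.set(bucket, true);
        while (nextToFlush < buckets.size() && closed.get(nextToFlush)) {
            output.addAll(buckets.get(nextToFlush));
            nextToFlush++;
        }
    }

    synchronized List<String> output() { return output; }
}
```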

On Wednesday, April 28, 2021 at 6:52:08 AM UTC+7 Eric Anderson wrote:

> Yeah, we don't have anything pre-existing that does something like that; 
> it gets into the specifics of your use-case. Making something yourself was 
> appropriate. I will say that the strategy used in 
> OrderedConcurrentOutputBuffer with the Buckets seems really clean.
>
> On Thu, Apr 22, 2021 at 9:21 AM Piotr Morgwai Kotarbinski <
> mor...@gmail.com> wrote:
>
>> in case someone needs it also, I've written it myself due to lack of 
>> answers either here and on SO:
>>
>> https://github.com/morgwai/java-utils/blob/master/src/main/java/pl/morgwai/base/utils/OrderedConcurrentOutputBuffer.java
>> feedback is welcome :)
>> On Tuesday, April 20, 2021 at 11:09:59 PM UTC+7 Piotr Morgwai Kotarbinski 
>> wrote:
>>
>>> Hello
>>> I have a stream of messages coming from a WebSocket or a gRPC client. 
>>> For each message my service produces 0 or more reply messages. By 
>>> default, both WebSocket endpoints and gRPC request observers are 
>>> guaranteed to be called by at most 1 thread concurrently, so my replies 
>>> are sent in the same order as the requests. Now I want to dispatch 
>>> request processing to other threads and process messages in parallel, 
>>> but still keep the order. Therefore, I need some "concurrent ordered 
>>> response buffer", which will buffer the replies to a given request 
>>> message until the processing of the previous requests is finished and 
>>> the replies to them have been sent (in the order they were produced 
>>> within each "request bucket").
>>>
>>> I could develop such a class myself, but it seems a common case, so I 
>>> was wondering whether such a thing already exists (to not reinvent the 
>>> wheel). However, I could not easily find anything on the web, nor get 
>>> any answer on SO 
>>> <https://stackoverflow.com/questions/67174565/java-concurrent-ordered-response-buffer>
>>> . Does anyone know about something like this?
>>>
>>> Thanks!
>>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/e9405581-0290-468b-bc06-db27ef4e1064n%40googlegroups.com.


[grpc-io] Re: java concurrent ordered response buffer

2021-04-22 Thread Piotr Morgwai Kotarbinski
In case someone else needs it too: I've written it myself, due to the lack 
of answers both here and on SO:
https://github.com/morgwai/java-utils/blob/master/src/main/java/pl/morgwai/base/utils/OrderedConcurrentOutputBuffer.java
Feedback is welcome :)
On Tuesday, April 20, 2021 at 11:09:59 PM UTC+7 Piotr Morgwai Kotarbinski 
wrote:

> Hello
> I have a stream of messages coming from a WebSocket or a gRPC client. For 
> each message my service produces 0 or more reply messages. By default, 
> both WebSocket endpoints and gRPC request observers are guaranteed to be 
> called by at most 1 thread concurrently, so my replies are sent in the 
> same order as the requests. Now I want to dispatch request processing to 
> other threads and process messages in parallel, but still keep the order. 
> Therefore, I need some "concurrent ordered response buffer", which will 
> buffer the replies to a given request message until the processing of the 
> previous requests is finished and the replies to them have been sent (in 
> the order they were produced within each "request bucket").
>
> I could develop such a class myself, but it seems a common case, so I was 
> wondering whether such a thing already exists (to not reinvent the wheel). 
> However, I could not easily find anything on the web, nor get any answer 
> on SO 
> <https://stackoverflow.com/questions/67174565/java-concurrent-ordered-response-buffer>
> . Does anyone know about something like this?
>
> Thanks!
>



[grpc-io] java concurrent ordered response buffer

2021-04-20 Thread Piotr Morgwai Kotarbinski
Hello
I have a stream of messages coming from a WebSocket or a gRPC client. For 
each message my service produces 0 or more reply messages. By default, both 
WebSocket endpoints and gRPC request observers are guaranteed to be called 
by at most 1 thread concurrently, so my replies are sent in the same order 
as the requests. Now I want to dispatch request processing to other threads 
and process messages in parallel, but still keep the order. Therefore, I 
need some "concurrent ordered response buffer", which will buffer the 
replies to a given request message until the processing of the previous 
requests is finished and the replies to them have been sent (in the order 
they were produced within each "request bucket").

I could develop such a class myself, but it seems a common case, so I was 
wondering whether such a thing already exists (to not reinvent the wheel). 
However, I could not easily find anything on the web, nor get any answer on SO 
<https://stackoverflow.com/questions/67174565/java-concurrent-ordered-response-buffer>. 
Does anyone know about something like this?

Thanks!
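For what it's worth, a minimal sketch of such a buffer could look as below. All names here are mine, chosen for illustration only (this is not any existing library's API): each request message gets a "bucket", replies to the head bucket are written through immediately, replies to later buckets are held until all earlier buckets have been closed and flushed.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

/** Buffers replies per request "bucket" and releases them in request order. */
class OrderedReplyBuffer<T> {

    private final Consumer<T> output;  // the underlying stream, e.g. responseObserver::onNext
    private Bucket head;               // the oldest bucket not yet fully flushed
    private Bucket tail;               // the most recently created bucket

    OrderedReplyBuffer(Consumer<T> output) { this.output = output; }

    /** Must be called once per request message, in request order. */
    synchronized Bucket newBucket() {
        Bucket bucket = new Bucket();
        if (head == null) head = bucket; else tail.next = bucket;
        tail = bucket;
        return bucket;
    }

    /** Sends buffered replies of the leading buckets until an unclosed one is found. Call under lock. */
    private void flush() {
        while (head != null) {
            for (T reply : head.buffered) output.accept(reply);
            head.buffered.clear();
            if (!head.closed) return;  // head still open: later buckets keep buffering
            head = head.next;
        }
    }

    class Bucket {
        private final Queue<T> buffered = new ArrayDeque<>();
        private boolean closed;
        private Bucket next;

        /** Sends the reply immediately if this bucket is the head, buffers it otherwise. */
        void add(T reply) {
            synchronized (OrderedReplyBuffer.this) {
                if (this == head) output.accept(reply); else buffered.add(reply);
            }
        }

        /** Marks processing of this bucket's request message as finished. */
        void close() {
            synchronized (OrderedReplyBuffer.this) {
                closed = true;
                if (this == head) flush();
            }
        }
    }
}
```

With this, even if bucket 2's replies are produced before bucket 1 is closed, they are only emitted once bucket 1 has been flushed, so the output order matches the request order.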



[grpc-io] Re: gRPC Java on FreeBSD: did anyone try?

2021-04-19 Thread Piotr Morgwai Kotarbinski
You haven't included enough info, so we can only guess (though with high 
probability) that the error is caused by the fact that there's no protoc 
binary Maven artifact for FreeBSD in the Maven Central 
repo: https://repo1.maven.org/maven2/com/google/protobuf/protoc/3.12.0/
It should be possible to work around this by building protoc first and 
installing it into your local Maven repo.
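For example, something along these lines might work (a sketch, not a verified recipe: the file path is a placeholder and the `freebsd-x86_64` classifier is a guess; use whatever coordinates appear in your build's resolution error):

```shell
# Build protoc from source first (see protobuf's build instructions), then
# install the resulting binary into the local Maven repo under the
# coordinates your build tries to resolve.
mvn install:install-file \
    -Dfile=/path/to/your/locally/built/protoc \
    -DgroupId=com.google.protobuf \
    -DartifactId=protoc \
    -Dversion=3.12.0 \
    -Dclassifier=freebsd-x86_64 \
    -Dpackaging=exe
```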

Cheers!

On Tuesday, April 6, 2021 at 11:27:45 AM UTC+7 tar...@gmail.com wrote:

> Well, Java is supposed to be multiplatform, and gRPC likewise should not 
> be system-dependent.
> However, I cannot build gRPC-Java because of an "Unknown OS: FreeBSD" error.
> As far as I understand, just adding FreeBSD to the list should work; 
> there's nothing THAT different, unless gRPC uses something kernel-dependent.
> How can I do it?
>
> Alex
>



[grpc-io] announce: unofficial gRPC Guice scopes

2021-04-17 Thread Piotr Morgwai Kotarbinski
Hello all,
Recently I've finally found some time to polish and generalize a set of 
classes providing Guice scopes for gRPC that I had been copy-paste-tweaking 
in every grpc-java project for the last few years: maybe someone else will 
find it useful :)

https://github.com/morgwai/grpc-scopes

It provides `rpcScope` and `listenerCallScope`. Oversimplifying, in the case 
of a streaming client, `listenerCallScope` spans the processing of a single 
message from the client's stream, while `rpcScope` spans a given RPC as a 
whole. Oversimplifying again, in the case of a unary client, these 2 scopes 
have roughly the same span.

Feedback is welcome :)

Cheers!
