Hm, I would have thought that seeing the TCP connection broken would at 
some point cause StreamObserver.onNext() (on the server side) to complain.

I've set all four options you mentioned on my Netty server. When the gRPC 
call starts, the server enters a loop where it calls StreamObserver.onNext() 
basically forever until the server shuts down, at which point it calls 
onCompleted().

When the connection from client to server is made, I can see keepalive 
pings via the grpc debug logs. Once the time surpasses the maxConnectionAge 
+ grace period, the client disconnects. I'm assuming this is an 
"ungraceful" disconnect, since it only disconnects once the grace period is 
over (as the stream is still producing more messages).
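
In case it matters, by "grpc debug logs" I mean java.util.logging turned up 
to FINE for the io.grpc loggers, roughly like the sketch below; that's where 
the Netty transport's HTTP/2 frame logging (including the keepalive PINGs) 
shows up for me. The logger name and levels here are just my assumptions 
about a typical setup:

// Sketch: raise java.util.logging levels so the gRPC Netty transport's
// frame logging (including keepalive PING frames) reaches the console.
ConsoleHandler consoleHandler = new ConsoleHandler();
consoleHandler.setLevel(Level.FINE);
Logger grpcLogger = Logger.getLogger("io.grpc");
grpcLogger.setLevel(Level.FINE);
grpcLogger.addHandler(consoleHandler);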

However, the server continues to execute its loop, calling onNext() ad 
infinitum. It seems to completely ignore both that the client no longer 
accepts messages and that the client no longer responds to the keepalive 
pings. So now it looks like my problem is that the server *never* recognizes 
that the client has disappeared.

Here's my server configuration. The time values aren't meant to be 
realistic; I just wanted to see the behavior in a manual test:

Server server = NettyServerBuilder.forPort(port)
            .addService(testService)
            .executor(Executors.newFixedThreadPool(5))
            .keepAliveTime(10, TimeUnit.SECONDS)
            .keepAliveTimeout(10, TimeUnit.SECONDS)
            .maxConnectionAge(30, TimeUnit.SECONDS)
            .maxConnectionAgeGrace(5, TimeUnit.SECONDS)
            .maxConnectionIdle(5, TimeUnit.SECONDS)
            .build();

testService implements this server-side streaming method:

    // For the purpose of this test, this queue is pre-filled with a handful
    // of StreamItems and never has any other objects added to it.
    BlockingQueue<StreamItem> queue = new LinkedBlockingQueue<>();

    @Override
    public void testStream(StreamRequest request,
            StreamObserver<StreamItem> responseObserver) {
        try {
            while (!Thread.interrupted()) {
                StreamItem next = queue.poll();
                if (next == null) {
                    LOGGER.debug("Queue is empty, waiting before next poll()");
                    Thread.sleep(1000);
                    // Send the default instance, which is empty, to signal
                    // that the queue is empty right now.
                    responseObserver.onNext(StreamItem.getDefaultInstance());
                } else {
                    responseObserver.onNext(next);
                }
            }
        } catch (InterruptedException e) {
            LOGGER.info("Interrupted. Exiting loop.");
        } catch (Exception e) {
            LOGGER.error("Unexpected error", e);
            responseObserver.onError(e);
            return;
        }
        responseObserver.onCompleted();
    }
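
Incidentally, one variation I've been meaning to try is casting the observer 
to ServerCallStreamObserver so the loop can notice when the call has been 
cancelled. This is just a sketch of that idea; I haven't confirmed whether it 
actually fires when the client dies ungracefully:

    @Override
    public void testStream(StreamRequest request,
            StreamObserver<StreamItem> responseObserver) {
        // The observer passed to a server implementation is a
        // ServerCallStreamObserver, which exposes cancellation of the call.
        ServerCallStreamObserver<StreamItem> serverObserver =
                (ServerCallStreamObserver<StreamItem>) responseObserver;
        serverObserver.setOnCancelHandler(
                () -> LOGGER.info("Call cancelled (client gone?). Stopping stream."));
        try {
            while (!Thread.interrupted() && !serverObserver.isCancelled()) {
                StreamItem next = queue.poll();
                if (next == null) {
                    Thread.sleep(1000);
                    serverObserver.onNext(StreamItem.getDefaultInstance());
                } else {
                    serverObserver.onNext(next);
                }
            }
        } catch (InterruptedException e) {
            LOGGER.info("Interrupted. Exiting loop.");
        }
        // onCompleted() would throw if the call was already cancelled.
        if (!serverObserver.isCancelled()) {
            responseObserver.onCompleted();
        }
    }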

And here's my test client code:

ManagedChannel channel = ManagedChannelBuilder.forTarget(target)
        .usePlaintext(true)
        .build();

TestServiceGrpc.TestServiceBlockingStub blockingStub =
        TestServiceGrpc.newBlockingStub(channel);

try {
    Iterator<StreamItem> stream = blockingStub.testStream(subscriptionRequest);
    while (stream.hasNext()) {
        LOGGER.debug("Received an item:\n{}", stream.next());
    }
} catch (StatusRuntimeException e) {
    LOGGER.error("GRPC Exception: {}", e.getStatus());
}
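
To be clear about terminology: by a "graceful" client shutdown I mean the 
client tearing the channel down along these lines before exiting (a sketch; 
the timeout is arbitrary). The "ungraceful" case is when none of this runs, 
e.g. the client process is killed mid-stream:

// Sketch of a clean client-side teardown: stop new calls, wait briefly for
// in-flight work, then force-cancel whatever is left.
channel.shutdown();
try {
    if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {
        channel.shutdownNow();
    }
} catch (InterruptedException e) {
    channel.shutdownNow();
    Thread.currentThread().interrupt();
}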

Is there something wrong here? I feel like I'm missing something.


On Thursday, April 12, 2018 at 2:35:19 PM UTC-7, Carl Mastrangelo wrote:
>
> This can be eventually detected by 
> setting keepAliveTime(), keepAliveTimeout(), maxConnectionIdle(), 
> maxConnectionAge(), and so forth on your ServerBuilder.    
>
> In general, it's not possible to quickly detect if the remote side has 
> silently stopped, so the best we can do is actively check, and set 
> timeouts. 
>
> On Thursday, April 12, 2018 at 2:07:54 PM UTC-7, Christopher Schechter 
> wrote:
>>
>> Hi all,
>>
>> I'm working on setting up a server-side stream GRPC. When the stream is 
>> started, the server should stream messages to the client until either the 
>> server or the client shuts down.
>>
>> When the server shuts down, this is easy - it calls 
>> StreamObserver.onCompleted() and the client is then notified that the 
>> stream is ending.
>>
>> When clients shut down, they can do a graceful shutdown with 
>> ManagedChannel.shutdownNow() to stop the stream from their end. However, 
>> when the client shuts down ungracefully, the server never notices, and can 
>> continue to call StreamObserver.onNext() basically forever.
>>
>> My question is, is there a way to detect this situation from the server 
>> side when the client shuts down ungracefully? I would expect an exception 
>> to be thrown at some point, but that never happens. Can I manually check 
>> something to see whether the connection is broken?
>>
>> I've seen one other mention of a similar/same issue in this topic 
>> <https://groups.google.com/forum/#!msg/grpc-io/aFsIiGnQKTY/ipbaPKuvBgAJ>, 
>> which is from a long time ago but I don't see a resolution to it.
>>
>> Thanks,
>> Chris
>>
>
