>  Is it possible some new clients (esp clients using another gRPC 
language) are now connecting which is why you are seeing this error?

Both the client and the server are backend Java services, and both use the 
same set of gRPC + Protobuf libraries. After removing the method below on the 
client side, there have been no more errors for a few days (the communication 
is bi-di streaming and is always up; messages are exchanged constantly):

NettyChannelBuilder...enableFullStreamDecompression()
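
For reference, the client channel is built roughly like the sketch below 
(host, port and the TLS call are placeholders, not our exact setup; the 
commented-out line is the one we removed):

import io.grpc.ManagedChannel;
import io.grpc.netty.shaded.io.grpc.netty.NettyChannelBuilder;

public class ClientChannelSketch {
    static ManagedChannel buildChannel() {
        return NettyChannelBuilder
                .forAddress("server.example.com", 8443) // placeholder host/port
                .useTransportSecurity()                 // placeholder TLS setup
                // .enableFullStreamDecompression()     // the call we removed
                .build();
    }
}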

It might not be the cause, though (the error was on the server side), since 
the API docs say:


*Enables full-stream decompression of inbound streams. This will cause the 
channel's outbound headers to advertise support for GZIP compressed streams, 
and gRPC servers which support the feature may respond with a GZIP 
compressed stream.*
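
As I read that, a server only responds with a GZIP-compressed stream if the 
handler side opts in, e.g. via ServerCall#setCompression. A minimal sketch 
(not our code, just to illustrate the javadoc; the interceptor name is made 
up):

import io.grpc.Metadata;
import io.grpc.ServerCall;
import io.grpc.ServerCallHandler;
import io.grpc.ServerInterceptor;

public class GzipResponseInterceptor implements ServerInterceptor {
    @Override
    public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
            ServerCall<ReqT, RespT> call, Metadata headers,
            ServerCallHandler<ReqT, RespT> next) {
        // Only takes effect if the client advertised gzip in grpc-accept-encoding.
        call.setCompression("gzip");
        return next.startCall(call, headers);
    }
}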



> Can you enable debug/trace logging to see the value of the 
"grpc-encoding"  header for the offending RPC? That will give us some idea 
whether you need to use a custom decompressor registry.

Could you please provide more info on how to do that?

At present we have the following logging config:

<logger name="io.grpc.netty.shaded.io.grpc.netty">
  <level value="debug"/>
  <appender-ref ref="GrpcCoreAppender"/>
</logger>

which produces log output like the following (no "grpc-encoding" header in it):

DEBUG netty.NettyServerHandler - [id: 0x9f841cd8, L:/xxx:8443 - 
R:/yyy:43421] INBOUND PING: ack=false bytes=1234
DEBUG netty.NettyServerHandler - [id: 0x9f841cd8, L:/xxx:8443 - 
R:/yyy:43421] OUTBOUND PING: ack=true bytes=1234

DEBUG netty.NettyServerHandler - [id: 0x9f841cd8, L:/xxx:8443 - 
R:/yyy:43421] OUTBOUND DATA: streamId=3 padding=0 endStream=false 
length=249 
bytes=00000000c708820610f8dc9594df30186b22b8010a0732353233363432120d6f706f732d3239373535316333180120033a06455552555344420b313030323432...

DEBUG netty.NettyServerHandler - [id: 0x9f841cd8, L:/xxx:8443 - 
R:/yyy:43421] INBOUND DATA: streamId=3 padding=0 endStream=false length=48 
bytes=000000002b08e00110fbdc9594df301804221d0a0732353233363432120c6f72642d3239373735316333180220012803
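
In the meantime I could add a ServerInterceptor that logs the inbound headers 
per call, something like the sketch below (assuming the grpc-encoding value is 
still visible in the Metadata handed to interceptors, which I have not 
verified; class and logger names are made up):

import java.util.logging.Logger;

import io.grpc.Metadata;
import io.grpc.ServerCall;
import io.grpc.ServerCallHandler;
import io.grpc.ServerInterceptor;

public class EncodingLoggingInterceptor implements ServerInterceptor {
    private static final Logger log =
            Logger.getLogger(EncodingLoggingInterceptor.class.getName());
    private static final Metadata.Key<String> GRPC_ENCODING =
            Metadata.Key.of("grpc-encoding", Metadata.ASCII_STRING_MARSHALLER);

    @Override
    public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
            ServerCall<ReqT, RespT> call, Metadata headers,
            ServerCallHandler<ReqT, RespT> next) {
        // May log null if the header is absent or stripped before interceptors run.
        log.info(call.getMethodDescriptor().getFullMethodName()
                + " grpc-encoding=" + headers.get(GRPC_ENCODING));
        return next.startCall(call, headers);
    }
}

It would be registered on the service via 
ServerInterceptors.intercept(service, new EncodingLoggingInterceptor()).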


On Thursday, January 26, 2023 at 9:53:00 PM UTC+2 sanjay...@google.com 
wrote:
> There were plenty of messages processed for a few days, and all of a sudden 
this exception breaks the connection. 

Is it possible some new clients (esp clients using another gRPC language) 
are now connecting which is why you are seeing this error?

> What does it mean, and how can it be solved?

Can you enable debug/trace logging to see the value of the "grpc-encoding"  
header for the offending RPC? That will give us some idea whether you need 
to use a custom decompressor registry.



On Monday, January 23, 2023 at 9:41:17 PM UTC-8 chris...@gmail.com wrote:
Hello Everyone,

We got this error on the server side of a bi-di streaming call:

2023-01-23 23:41:54,879 ERROR [GRPC worker, id #1]  
io.grpc.netty.shaded.io.grpc.netty.NettyServerStream$TransportState 
deframeFailed
WARNING: Exception processing message
io.grpc.StatusRuntimeException: INTERNAL: Can't decode compressed gRPC 
message as compression not configured
        at io.grpc.Status.asRuntimeException(Status.java:526)
        at 
io.grpc.internal.MessageDeframer.getCompressedBody(MessageDeframer.java:428)
        at 
io.grpc.internal.MessageDeframer.processBody(MessageDeframer.java:410)
        at 
io.grpc.internal.MessageDeframer.deliver(MessageDeframer.java:275)
        at 
io.grpc.internal.MessageDeframer.request(MessageDeframer.java:161)
        at 
io.grpc.internal.AbstractStream$TransportState$1RequestRunnable.run(AbstractStream.java:236)
        at 
io.grpc.netty.shaded.io.grpc.netty.NettyServerStream$TransportState$1.run(NettyServerStream.java:202)
        at 
io.grpc.netty.shaded.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
        at 
io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
        at 
io.grpc.netty.shaded.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
        at 
io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
        at 
io.grpc.netty.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at java.lang.Thread.run(Thread.java:748)


There were plenty of messages processed for a few days, and all of a sudden 
this exception breaks the connection. What does it mean, and how can it be 
solved?

Is there a way to trap and "swallow" such an exception so that the channel 
does not get closed?

Using grpc netty shaded 1.42.1 with epoll.

Regards,
Chris
