Thanks a lot, Jan!

I will look into that code.

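For reference, the zero-copy path Jan describes below can be sketched with the context-based marshaller API that Grpc.Core 2.32.0+ exposes. This is a sketch, not the library's generated code: the helper name `ZeroCopyMarshaller` is hypothetical, but `SerializationContext.GetBufferWriter()`, `SetPayloadLength()`, `Complete()`, and `DeserializationContext.PayloadAsReadOnlySequence()` are the real APIs involved.

```csharp
using System.Buffers;
using Google.Protobuf;
using Grpc.Core;

// Hypothetical helper illustrating how Grpc.Core 2.32.0+ can marshal
// protobuf messages without an intermediate managed byte[] copy.
public static class ZeroCopyMarshaller
{
    public static Marshaller<T> Create<T>(MessageParser<T> parser)
        where T : IMessage<T>
    {
        return new Marshaller<T>(
            // Serialization: write straight into the context's
            // IBufferWriter, which is backed by native memory that the
            // C core library consumes directly.
            (message, context) =>
            {
                context.SetPayloadLength(message.CalculateSize());
                message.WriteTo(context.GetBufferWriter());
                context.Complete();
            },
            // Deserialization: parse from a ReadOnlySequence<byte> that
            // is a view over the native slice buffer (no copy of the
            // payload into managed memory).
            context => parser.ParseFrom(context.PayloadAsReadOnlySequence()));
    }
}
```

This mirrors what the protoc-generated gRPC C# stubs do since the PR linked below; a custom marshaller like this would only be needed for hand-written service bindings.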
On Monday, October 19, 2020 at 4:21:25 PM UTC+8, Jan Tattermusch wrote:

> This is best answered by looking at the sources:
> Deserialization: 
> https://github.com/grpc/grpc/blob/master/src/csharp/Grpc.Core/Internal/DefaultDeserializationContext.cs
> Serialization: 
> https://github.com/grpc/grpc/blob/master/src/csharp/Grpc.Core/Internal/DefaultSerializationContext.cs
>
> Starting from Grpc.Core 2.32.0 (which includes 
> https://github.com/grpc/grpc/pull/23485/files), protobufs are serialized 
> to an IBufferWriter that writes directly to native 
> memory (which is consumed by the C core native library). On the 
> deserialization side, protobufs can be parsed directly from the native 
> memory ("slice buffer") returned by the native C core library (the native 
> slice buffer is transformed into a ReadOnlySequence "view" without copying 
> the data).
>
> On Thursday, October 8, 2020 at 3:15:19 PM UTC+2 [email protected] wrote:
>
>> Hi, 
>>
>> I have two questions regarding request/response stream memory 
>> management in Grpc.Core:
>>
>> 1. Is the request/response stream buffer pooled under the covers, so 
>> that memory allocation stays controlled even when request volume and size 
>> are both large? If so, are there knobs that can be tuned at the 
>> application layer? For instance, if the application finds the machine is 
>> not under memory pressure, could it expand the stream pool for gRPC 
>> requests/responses?
>>
>> 2. Suppose a request contains a "bytes" field whose data is quite large. 
>> When the client commits the async request and Grpc.Core prepares the 
>> request stream data, does Grpc.Core copy the "bytes" data into the 
>> request stream buffer? If so, since the data is large, is it possible to 
>> avoid the copy, e.g. by accessing the request stream buffer from the 
>> application layer?
>>
>> Thanks a lot!
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.