I stumbled upon a post from 2010 that mentions using C extensions to improve 
Python protobuf serialization times:
https://groups.google.com/g/protobuf/c/z7E80KYJscc/m/ysCjHHmoraUJ
where the author reports "~13x speedups" when using a C extension, because 
"Python code with PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp actually
also *searches for the symbols for any pre-generated C++ code in the
current process*, and uses them if available instead of
DynamicMessage." I can find reference code on GitHub 
<https://github.com/CampagneLaboratory/goby3/blob/9cd384a0b4afa5c85e70ba5a6b14ed6066d0a7de/python/setup.py#L30>
that uses this technique and cites a corresponding speedup, but I notice 
that all the code using this approach is roughly ten years old.
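
For reference, here is a minimal sketch of how I'm checking which backend 
the runtime actually picked. It relies on 
google.protobuf.internal.api_implementation, which is an internal module, 
so I'm only using it for diagnosis:

    import os

    # The backend is chosen at import time, so set this before importing
    # any protobuf (or generated *_pb2) modules.
    os.environ.setdefault("PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION", "cpp")

    from google.protobuf.internal import api_implementation

    # Prints which backend was actually loaded, e.g. "cpp" or "python".
    print("protobuf backend:", api_implementation.Type())
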
Is this approach still beneficial in any way for improving serialization 
performance in Python? Or would protobuf 3.19+ with 
PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp result in equally performant 
serialization when serializing in Python 3?

Thanks
-  Daniel
