chaokunyang opened a new issue, #1993:
URL: https://github.com/apache/fury/issues/1993
### Feature Request
pyfury is 3x faster than pickle for serialization and 2x faster for deserialization; here is the benchmark code:
```python
import pickle
import time
from dataclasses import dataclass
from typing import Any, Dict, List

import pyfury


@dataclass
class ComplexObject1:
    f1: Any = None
    f2: str = None
    f3: List[str] = None
    f4: Dict[pyfury.Int8Type, pyfury.Int32Type] = None
    f5: pyfury.Int8Type = None
    f6: pyfury.Int16Type = None
    f7: pyfury.Int32Type = None
    f8: pyfury.Int64Type = None
    f9: pyfury.Float32Type = None
    f10: pyfury.Float64Type = None
    f12: List[pyfury.Int16Type] = None


@dataclass
class ComplexObject2:
    f1: Any
    f2: Dict[pyfury.Int8Type, pyfury.Int32Type]


fury = pyfury.Fury(language=pyfury.Language.PYTHON)
fury.register_type(ComplexObject1)
fury.register_type(ComplexObject2)

# COMPLEX_OBJECT is a pre-built ComplexObject1 instance (its construction is not shown here).
o = COMPLEX_OBJECT
# Same iteration count for both libraries so the wall-clock times are directly comparable.
N = 500000

start = time.time()
binary = fury.serialize(o)
for i in range(N):
    # binary = fury.serialize(o)  # uncomment to benchmark serialization instead
    fury.deserialize(binary)
print(time.time() - start)

start = time.time()
binary = pickle.dumps(o)
for i in range(N):
    # binary = pickle.dumps(o)  # uncomment to benchmark serialization instead
    pickle.loads(binary)
print(time.time() - start)
```
But the performance is still not fast enough; from the flame graph, we can see there is still room for performance improvement.
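
For anyone who wants to reproduce this kind of profile, the snippet below is a minimal sketch using Python's built-in cProfile rather than the flame-graph tooling used above (the exact tool is not named in this issue); it assumes the `fury` and `binary` objects from the benchmark code, and cProfile will not attribute time spent inside Cython/C extension code:

```python
# Minimal profiling sketch (an assumption, not the setup that produced the flame graph);
# reuses `fury` and `binary` from the benchmark code above.
import cProfile
import pstats


def deserialize_loop(n=500000):
    # Hot loop matching the benchmark: repeatedly deserialize the same payload.
    for _ in range(n):
        fury.deserialize(binary)


cProfile.run("deserialize_loop()", "fury_deserialize.prof")
# Print the 20 call sites with the most cumulative time.
pstats.Stats("fury_deserialize.prof").sort_stats("cumulative").print_stats(20)
```

A sampling profiler such as py-spy can also generate a flame graph of the same loop.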

### Is your feature request related to a problem? Please describe
_No response_
### Describe the solution you'd like
_No response_
### Describe alternatives you've considered
_No response_
### Additional context
_No response_