sjperkins commented on issue #16264:
URL: https://github.com/apache/arrow/issues/16264#issuecomment-2309951781

   Replying to @rok 's comment 
   
   - https://github.com/apache/arrow/issues/43809#issuecomment-2308936512
   
   here as it's probably more relevant to this thread.
    
   > Oh interesting approach. Is there other systems that do this? Would this 
approach be better fitted for vectorization? I suppose it would be more 
efficient for Parquet.
   
   I'm not aware of other systems that do this, and I haven't considered the 
benefits for vectorization. Mostly it satisfies the fixed-width requirements in 
FixedShapeTensor (and, I guess, VariableShapeTensor).
   
   
https://github.com/apache/arrow/blob/51e9f70f94cd09a0a08196afdd2f4fc644666b5e/cpp/src/arrow/extension/fixed_shape_tensor.cc#L208-L220
   
   Then one could interpret the raw bytes as the real and imaginary components. 
I guess one would have to consider endianness here.
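   As a minimal sketch of that interpretation (plain Python; the layout here is an assumption, not Arrow's: little-endian real part followed by little-endian imaginary part, 16 bytes total for a complex128):

```python
import struct

# Hypothetical layout assumption: a complex128 is stored as 16 raw bytes,
# little-endian ("<") real float64 followed by little-endian imaginary float64.
def pack_complex128(z: complex) -> bytes:
    return struct.pack("<dd", z.real, z.imag)

def unpack_complex128(raw: bytes) -> complex:
    re, im = struct.unpack("<dd", raw)
    return complex(re, im)

raw = pack_complex128(3.0 - 4.0j)
assert len(raw) == 16                       # fits a 16-byte fixed-width slot
assert unpack_complex128(raw) == 3.0 - 4.0j # round-trips losslessly
```

   Making the endianness explicit in the encoding (rather than using native byte order) is what would let such bytes be exchanged safely across platforms.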
   
   From hazy memory, the previous attempt created numeric primitive complex64 
and complex128 types based on std::complex<float> and std::complex<double>,
   
   - https://github.com/apache/arrow/pull/10452
   
   but this resulted in extensive changes throughout the code base due to the 
need to support the relevant operations for those types. This probably also has 
knock-on effects on binary size.
   
    A FixedSizeBinary type (byte width 8 for complex64, 16 for complex128) 
might be a good compromise: it provides a fixed-width type without requiring 
every primitive operation for complex numbers to be implemented.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
