This is an automated email from the ASF dual-hosted git repository.

jorisvandenbossche pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/arrow.git


The following commit(s) were added to refs/heads/main by this push:
     new 334c93726f GH-37598: [Python][Interchange Protocol] Fix the from_dataframe implementation to use the column dtype (#37986)
334c93726f is described below

commit 334c93726f3c9d9e69eb51136c7f354974c739fd
Author: Alenka Frim <[email protected]>
AuthorDate: Thu Oct 5 11:02:23 2023 +0200

    GH-37598: [Python][Interchange Protocol] Fix the from_dataframe implementation to use the column dtype (#37986)
    
    ### Rationale for this change
    
    We have been defining buffer dtypes for string and timestamp types
    incorrectly in the DataFrame Interchange Protocol implementation. This PR
    is the first step of the fix and deals with the `from_dataframe` part.
    The next two steps to resolve the connected issue are:
    
    2. Make sure other libraries have also updated their `from_dataframe`
    implementations.
    3. Fix the data buffer dtypes for strings and timestamps.
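    To illustrate the mismatch described above, here is a minimal, self-contained sketch (using hypothetical enum values that mirror the protocol's `DtypeKind`; the real enum lives in the interchange protocol spec and `pyarrow.interchange.column`). For a string column, the column dtype describes logical strings, while its data buffer only holds raw UTF-8 bytes, so the buffer-level dtype should be an unsigned 8-bit integer:
    
    ```python
    from enum import IntEnum
    
    # Hypothetical mirror of the protocol's DtypeKind enum (values per the
    # DataFrame Interchange Protocol spec).
    class DtypeKind(IntEnum):
        UINT = 1
        STRING = 21
    
    # Dtype tuples are (kind, bit-width, format string, endianness).
    column_dtype = (DtypeKind.STRING, 8, "u", "=")  # logical type of the column
    buffer_dtype = (DtypeKind.UINT, 8, "C", "=")    # physical bytes in the data buffer
    
    # Interpreting the buffers via the buffer dtype (the old behaviour) would
    # reconstruct a uint8 array instead of a string array, which is why the fix
    # passes the column dtype through instead.
    assert column_dtype[0] != buffer_dtype[0]
    ```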
    
    ### What changes are included in this PR?
    
    Fix the `from_dataframe` implementation to use the column dtype rather
    than the data buffer dtype to interpret the buffers. Only for the indices
    of a categorical column do we still use the buffer dtype, in order to
    convert the indices when constructing the `DictionaryArray`.
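    The call-site change can be sketched as follows. This is a hypothetical stand-in (plain dicts and strings instead of real protocol buffers and `pyarrow` arrays) showing the shape of the new behaviour: `buffers_to_array` now receives the dtype explicitly, ignoring the dtype stored alongside the data buffer, except for categorical indices where the buffer dtype remains the right one:
    
    ```python
    def buffers_to_array(buffers, data_type, length):
        """Pretend-build an array: record which dtype was used to interpret it."""
        data_buff, _buffer_dtype = buffers["data"]  # buffer-level dtype is ignored
        return {"data": data_buff, "dtype": data_type, "length": length}
    
    # Hypothetical buffers dict: data bytes paired with a buffer-level dtype.
    buffers = {"data": (b"spam", ("UINT", 8, "C", "="))}
    
    # Regular column: interpret the buffers with the *column* dtype.
    col_dtype = ("STRING", 8, "u", "=")
    arr = buffers_to_array(buffers, col_dtype, 1)
    assert arr["dtype"] == col_dtype
    
    # Categorical indices: the *buffer* dtype is still the one to use, so the
    # indices can be converted when building the dictionary-encoded array.
    _, idx_dtype = buffers["data"]
    idx = buffers_to_array(buffers, idx_dtype, 1)
    assert idx["dtype"] == idx_dtype
    ```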
    
    ### Are these changes tested?
    
    No new tests are added, but all existing tests should pass, which
    exercises the changed code paths.
    
    ### Are there any user-facing changes?
    
    No.
    * Closes: #37598
    
    Lead-authored-by: AlenkaF <[email protected]>
    Co-authored-by: Alenka Frim <[email protected]>
    Co-authored-by: Joris Van den Bossche <[email protected]>
    Signed-off-by: Joris Van den Bossche <[email protected]>
---
 python/pyarrow/interchange/from_dataframe.py | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/python/pyarrow/interchange/from_dataframe.py b/python/pyarrow/interchange/from_dataframe.py
index 1d41aa8d7e..d653054e91 100644
--- a/python/pyarrow/interchange/from_dataframe.py
+++ b/python/pyarrow/interchange/from_dataframe.py
@@ -19,6 +19,7 @@ from __future__ import annotations
 
 from typing import (
     Any,
+    Tuple,
 )
 
 from pyarrow.interchange.column import (
@@ -204,7 +205,9 @@ def column_to_array(
     pa.Array
     """
     buffers = col.get_buffers()
-    data = buffers_to_array(buffers, col.size(),
+    data_type = col.dtype
+    data = buffers_to_array(buffers, data_type,
+                            col.size(),
                             col.describe_null,
                             col.offset,
                             allow_copy)
@@ -236,7 +239,9 @@ def bool_column_to_array(
         )
 
     buffers = col.get_buffers()
-    data = buffers_to_array(buffers, col.size(),
+    data_type = col.dtype
+    data = buffers_to_array(buffers, data_type,
+                            col.size(),
                             col.describe_null,
                             col.offset)
     data = pc.cast(data, pa.bool_())
@@ -274,11 +279,15 @@ def categorical_column_to_dictionary(
         raise NotImplementedError(
             "Non-dictionary categoricals not supported yet")
 
+    # We need to first convert the dictionary column
     cat_column = categorical["categories"]
     dictionary = column_to_array(cat_column)
-
+    # Then we need to convert the indices
+    # Here we need to use the buffer data type!
     buffers = col.get_buffers()
-    indices = buffers_to_array(buffers, col.size(),
+    _, data_type = buffers["data"]
+    indices = buffers_to_array(buffers, data_type,
+                               col.size(),
                                col.describe_null,
                                col.offset)
 
@@ -326,6 +335,7 @@ def map_date_type(data_type):
 
 def buffers_to_array(
     buffers: ColumnBuffers,
+    data_type: Tuple[DtypeKind, int, str, str],
     length: int,
     describe_null: ColumnNullType,
     offset: int = 0,
@@ -339,6 +349,9 @@ def buffers_to_array(
     buffer : ColumnBuffers
         Dictionary containing tuples of underlying buffers and
         their associated dtype.
+    data_type : Tuple[DtypeKind, int, str, str],
+        Dtype description of the column as a tuple ``(kind, bit-width, format string,
+        endianness)``.
     length : int
         The number of values in the array.
     describe_null: ColumnNullType
@@ -360,7 +373,7 @@ def buffers_to_array(
     is responsible for keeping the memory owner object alive as long as
     the returned PyArrow array is being used.
     """
-    data_buff, data_type = buffers["data"]
+    data_buff, _ = buffers["data"]
     try:
         validity_buff, validity_dtype = buffers["validity"]
     except TypeError:
