jorisvandenbossche commented on a change in pull request #10101:
URL: https://github.com/apache/arrow/pull/10101#discussion_r616728155



##########
File path: python/pyarrow/array.pxi
##########
@@ -1170,7 +1170,13 @@ cdef class Array(_PandasConvertible):
         array = PyObject_to_object(out)
 
         if isinstance(array, dict):
-            array = np.take(array['dictionary'], array['indices'])
+            if zero_copy_only or not self.null_count:
+                # zero_copy doesn't allow for nulls to be in the array
+                array = np.take(array['dictionary'], array['indices'])
+            else:
+                missings = array["indices"] < 0
+                array = np.take(array['dictionary'], array['indices'])
+                array[missings] = np.NaN

Review comment:
      This will not work for all data types, I think (e.g. a dictionary array
with an integer dictionary, where the resulting numpy array has an integer
dtype and cannot hold `np.nan`). Also, for strings, we might want to use
`None` instead of `np.nan`, since that is what we do in the basic conversion
as well (e.g. see `pa.array(['a', None]).to_numpy(zero_copy_only=False)`).
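
   For example, an untested sketch to illustrate (it relies only on numpy's
casting rules and the existing string conversion behaviour, not on anything
in this PR):

```python
import numpy as np
import pyarrow as pa

# With an integer dictionary, np.take yields an integer-dtype array,
# and assigning np.nan into it raises a ValueError.
dictionary = np.array([10, 20, 30], dtype=np.int64)
indices = np.array([0, -1, 2])        # -1 marking a null slot
missing = indices < 0

dense = np.take(dictionary, indices)  # dtype stays int64
try:
    dense[missing] = np.nan
except ValueError as exc:
    print(exc)  # cannot convert float NaN to integer

# For strings, the basic conversion uses None (not np.nan) for nulls:
print(pa.array(['a', None]).to_numpy(zero_copy_only=False))
# -> array(['a', None], dtype=object)
```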
   
   Given those complexities, I am wondering if it might not be easier to 
*first* convert to a "dense" array before doing the arrow->python conversion.
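
   A rough sketch of that idea (untested; it assumes `pyarrow.compute.take`,
which propagates null indices as nulls; depending on the pyarrow version,
`DictionaryArray.dictionary_decode()` should give the same result):

```python
import pyarrow as pa
import pyarrow.compute as pc

arr = pa.array(['a', 'b', None, 'a']).dictionary_encode()

# Decode on the Arrow side first, so nulls go through the same path
# as the plain (non-dictionary) arrow->python conversion.
dense = pc.take(arr.dictionary, arr.indices)
print(dense.to_numpy(zero_copy_only=False))
# -> array(['a', 'b', None, 'a'], dtype=object)
```

   That way the null handling stays in one place instead of being
re-implemented per dtype on the numpy side.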



