paleolimbot commented on code in PR #288:
URL: https://github.com/apache/arrow-nanoarrow/pull/288#discussion_r1311660331


##########
r/R/convert-array.R:
##########
@@ -85,14 +85,28 @@ convert_array <- function(array, to = NULL, ...) {
 #' @export
 convert_array.default <- function(array, to = NULL, ..., .from_c = FALSE) {
   if (.from_c) {
+    # Handle extension conversion
+    # We don't need the user-friendly versions and this is performance-sensitive
+    schema <- .Call(nanoarrow_c_infer_schema_array, array)
+    parsed <- .Call(nanoarrow_c_schema_parse, schema)
+    if (!is.null(parsed$extension_name)) {
+      spec <- resolve_nanoarrow_extension(parsed$extension_name)
+      return(convert_array_extension(spec, array, to, ...))
+    }
+
     # Handle default dictionary conversion since it's the same for all types
     dictionary <- array$dictionary
 
     if (!is.null(dictionary)) {
       values <- .Call(nanoarrow_c_convert_array, dictionary, to)
       array$dictionary <- NULL
       indices <- .Call(nanoarrow_c_convert_array, array, integer())

Review Comment:
   That is definitely a good optimization...ideally there would be an offset +
scale for all numerics (scale is already used for timestamp and difftime
conversion). I would like to move the conversion process to something like
`get_converter(array, to)`, which would return a converter that could be
constructed like `int_vector_converter(offset = 1)`. I'm currently trying (for
better or possibly worse) to provide full type coverage (and associated test
coverage), at the expense of speed for the less frequently used types.
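
   To make that idea concrete, here is a minimal sketch of the converter-factory
pattern. This is not the nanoarrow API: `get_converter()`,
`int_vector_converter()`, and `dbl_vector_converter()` are hypothetical names
taken from the comment above, and the type strings are illustrative only. A
real implementation would resolve the converter from the parsed schema and the
`to` target once, then apply the returned closure to the buffers.

```r
# Hypothetical sketch: converters are closures constructed once with their
# offset/scale, so per-element conversion is just vectorized arithmetic.

int_vector_converter <- function(offset = 0L) {
  function(values) as.integer(values) + offset
}

dbl_vector_converter <- function(offset = 0, scale = 1) {
  function(values) (as.double(values) + offset) * scale
}

# Hypothetical factory: dispatch on the (illustrative) type string.
# A real version would also inspect `to` to pick the target representation.
get_converter <- function(type, to = NULL) {
  switch(
    type,
    "int32" = int_vector_converter(),
    # e.g. duration in milliseconds -> difftime-like seconds (scale = 1/1000)
    "duration[ms]" = dbl_vector_converter(scale = 1 / 1000),
    stop(sprintf("No converter registered for type '%s'", type))
  )
}

# Usage: resolve the converter once, then reuse it
convert <- get_converter("duration[ms]")
convert(c(1500, 3000))
#> [1] 1.5 3.0
```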


