CurtHagenlocher commented on issue #3134:
URL: https://github.com/apache/arrow-adbc/issues/3134#issuecomment-3062332255

   I'm curious about how this works for the Dremio ODBC driver (either with or 
without Flight); does it wait until it's read all the chunks before it reports 
the schema of the result set?
   
   The only somewhat-analogous thing I've seen is with Snowflake `NUMBER(N, 0)` 
columns, where an Arrow-formatted data chunk might be represented with an 
integer type of lower precision than the column declares. But the reported 
schema always contains the maximum precision, so the driver just needs to cast 
the columns in individual chunks in order to assemble the final output. (And 
there's no FlightSQL in this picture; just Arrow-formatted data.)
   
   There's also some overlap with schema evolution in formats like Delta, where 
columns in the individual Parquet files don't have to exactly match the table's 
declared schema. They do have to be compatible with it, though, so a table with 
an `int` column can't have a Parquet file where that column is an `int64`.

