zeroshade commented on issue #3134:
URL: https://github.com/apache/arrow-adbc/issues/3134#issuecomment-3063654670

   > It may still be possible as an option (most of this functionality should 
be available in arrow-go already) but I'd like to see what @zeroshade thinks 
(should we move this to the mailing list to put in front of the other Flight 
SQL users?).
   
   Personally, my view is similar to @lidavidm's: I'm curious why there isn't a 
consistent schema in the first place that the Dremio workers could cast to 
before emitting results. Like @CurtHagenlocher, I'd also like to know how the 
Dremio ODBC driver avoids this issue, given that, as far as I'm aware, it also 
uses Arrow Flight/Flight SQL under the hood and would therefore run into the 
same problem.
   
   The issue raised with Snowflake is that, internally, Snowflake does not 
track a min/max over the entire result set, so it cannot know the final Arrow 
schema up front and instead uses the smallest precision that fits each 
individual chunk. Dremio, by contrast, already uses Arrow internally throughout 
the entire system, so everything should in theory map cleanly to Arrow types, 
letting the planner and workers know exactly what to cast to.
   
   If this can't be done on the server side, I'd prefer we create a dedicated 
*dremio* ADBC driver that performs any necessary casting and handling of 
inconsistencies, rather than pushing this onto the generic Flight SQL driver. 
As David said, Flight SQL shouldn't *require* a client to perform casts and 
transformations; it should only require handling the Arrow memory format.

