joellubi commented on code in PR #1456:
URL: https://github.com/apache/arrow-adbc/pull/1456#discussion_r1464234874


##########
go/adbc/driver/snowflake/record_reader.go:
##########
@@ -212,13 +215,7 @@ func getTransformer(sc *arrow.Schema, ld gosnowflake.ArrowStreamLoader, useHighP
                                                        continue
                                                }
 
-                                               q := int64(t) / int64(math.Pow10(int(srcMeta.Scale)))
-                                               r := int64(t) % int64(math.Pow10(int(srcMeta.Scale)))
-                                               v, err := arrow.TimestampFromTime(time.Unix(q, r), dt.Unit)
-                                               if err != nil {
-                                                       return nil, err
-                                               }
-                                               tb.Append(v)
+                                               tb.Append(arrow.Timestamp(t))

Review Comment:
   I think we might have to make this assumption one way or another. In the existing approach we rely on `srcMeta.Scale` being correct, which comes from Snowflake. If that doesn't match the actual scale of the `int64` value we get, then the calculations to get `q` and `r` (`sec` and `nsec`) will be wrong as well. Given that assumption, the additional computation isn't necessary.
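
   A minimal sketch of why the cast is equivalent under that assumption (plain Go, with a hypothetical microsecond-scale value; `q`, `r`, and `scale` mirror the removed code, not the driver itself): splitting the raw value by `10^scale` and recombining at the same scale is an identity, so when `srcMeta.Scale` agrees with the target unit, `arrow.Timestamp(t)` carries the same information without the round trip through `time.Unix`.

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Hypothetical raw Snowflake value: epoch time at microsecond scale (scale = 6).
	var t int64 = 1_705_000_000_123_456
	scale := 6

	// The removed code split the raw value into whole seconds and a
	// sub-second remainder using srcMeta.Scale:
	q := t / int64(math.Pow10(scale)) // whole seconds
	r := t % int64(math.Pow10(scale)) // remainder, in units of 10^-scale seconds

	// Recombining at the same scale reproduces the original value exactly,
	// so the split-and-rebuild is an identity when the reported scale is correct.
	rebuilt := q*int64(math.Pow10(scale)) + r
	fmt.Println(rebuilt == t) // true
}
```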


