zeroshade commented on issue #2084:
URL: https://github.com/apache/arrow-adbc/issues/2084#issuecomment-2297723386
@pkit Currently, decimal is not supported when using Arrow record batch data
as bind parameters, i.e. executing a query like `SELECT * FROM table
WHERE col = ?` and binding a decimal value for the placeholder.
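For illustration, here's a minimal sketch of the pattern that currently fails
(assuming the DBAPI `parameters` argument accepts Arrow data; `uri` and the
table/column names are placeholders matching the example below):
```
>>> import pyarrow as pa
>>> import adbc_driver_snowflake.dbapi
>>> conn = adbc_driver_snowflake.dbapi.connect(uri)
>>> cur = conn.cursor()
>>> # Binding a decimal128 Arrow batch for the `?` placeholder is what fails today
>>> params = pa.record_batch([pa.array([1], type=pa.decimal128(38, 0))], names=['NUMBERTYPE'])
>>> cur.execute('SELECT * FROM NUMBER_TEST WHERE NUMBERTYPE = ?', parameters=params)
```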
If you're doing a bulk insert, you should use `cursor.adbc_ingest` instead,
which has no issues with decimal data:
```
>>> tbl
pyarrow.Table
NUMBERTYPE: decimal128(38, 0)
NUMBERFLOAT: decimal128(15, 2)
----
NUMBERTYPE: [[1, 12345678901234567890123456789012345678]]
NUMBERFLOAT: [[1234567.89,9876543210.99]]
>>> import adbc_driver_snowflake.dbapi
>>> conn = adbc_driver_snowflake.dbapi.connect(uri)
>>> cur = conn.cursor()
>>> cur.adbc_ingest('NUMBER_TEST', tbl)
0
>>> cur.execute('SELECT * FROM NUMBER_TEST')
>>> cur.fetch_arrow_table()
pyarrow.Table
NUMBERTYPE: decimal128(38, 0)
NUMBERFLOAT: decimal128(15, 2)
----
NUMBERTYPE: [[1, 12345678901234567890123456789012345678]]
NUMBERFLOAT: [[1234567.89,9876543210.99]]
```
By the same token, by default I believe we return all `NUMBER(38, 0)` columns
as decimal128, but there is an option,
`adbc.snowflake.sql.client_option.use_high_precision`, which can be set to
"false" to have fixed-point data with a scale of 0 returned as int64 columns instead.
If this doesn't sufficiently answer your issue, could you share the code that
produced the error?