Eugene-Mark commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1160577823
Document updated; thanks for the previous valuable comments!
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
Eugene-Mark commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1159351619
It's interesting that the previous commit could pass the test, and some other PRs
can pass it as well. I will try reverting some changes to see whether I can make
it pass.
Eugene-Mark commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1152503470
Please help relaunch the test once the CI issue has been fixed,
thanks!
Eugene-Mark commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1152425534
Jenkins retest this please
Eugene-Mark commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1150774845
The test failure seems to have no relationship with the committed code; several
recent PRs failed with the same error, like [this
Eugene-Mark commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1150603723
retest this please
Eugene-Mark commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1147448291
Agreed with you that it's better not to modify the Oracle-related part; I just
removed it from the commit.
Yes, I suggest we use scale = 18.
And for precision, when `Number(*)` or
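The precision/scale fallback under discussion can be sketched as a small helper. This is a hypothetical illustration, not the actual `TeradataDialect` code: the class and method names are invented here, and the zero-precision check for detecting `NUMBER(*)` is an assumption based on this thread.

```java
// Hypothetical sketch of the precision/scale choice discussed in this PR.
// Assumption: for NUMBER(*), the driver reports no usable precision, so we
// fall back to the widest Spark decimal (precision 38) with the suggested
// scale of 18; an explicitly declared NUMBER(p, s) keeps its own values.
public class TeradataDecimalMapping {
    static final int MAX_PRECISION = 38; // DecimalType's maximum precision in Spark
    static final int DEFAULT_SCALE = 18; // scale suggested in this discussion

    /** Returns {precision, scale} a dialect could hand to DecimalType. */
    static int[] decimalFor(int reportedPrecision, int reportedScale) {
        if (reportedPrecision <= 0) {
            // NUMBER(*) case (assumption): no fixed precision reported.
            return new int[] { MAX_PRECISION, DEFAULT_SCALE };
        }
        return new int[] { reportedPrecision, reportedScale };
    }
}
```

Under these assumptions, `decimalFor(0, 0)` yields `{38, 18}`, while a declared `NUMBER(10, 2)` maps through unchanged as `{10, 2}`.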
Eugene-Mark commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1147065740
We have "maximum scale" [defined in
Eugene-Mark commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1146822523
For NUMBER(*) on Teradata, the scale is not fixed but can adapt itself to
different values; as they said, it's only constrained by the `system limit`. So the
issue for Teradata is about
Eugene-Mark commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1146638887
@srowen Thanks for your response. For the first part, `indicate NUMBER with the
system limits for precision and scale`, we didn't find more explanation about
it. It sounds like the
Eugene-Mark commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1137093715
@HyukjinKwon @srowen I just updated the latest comment with some findings
about the root cause of the issue and the current solution. Any comments are
welcome, thanks!
Eugene-Mark commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1129569337
@srowen It should be a Teradata-specific issue. I tried reading the data with the
Teradata driver (`terajdbc4` and `tdgssconfig`), and the data read contains the
fractional part. The code is sth
Eugene-Mark commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1129110003
@HyukjinKwon The issue [SPARK-38846](https://issues.apache.org/jira/browse/SPARK-38846)
shows that Teradata's Number type will lose its fractional part after being
loaded into Spark. We
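The symptom can be reproduced with plain `java.math.BigDecimal`, independent of Teradata or Spark: coercing a value to scale 0 (which is effectively what an inferred `DecimalType(p, 0)` does) drops the fraction, while the scale of 18 proposed in this PR preserves it. A minimal illustration:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class FractionLossDemo {
    public static void main(String[] args) {
        BigDecimal value = new BigDecimal("1234.56");

        // Scale 0, as inferred for Teradata NUMBER before this fix:
        // the fractional part is dropped.
        BigDecimal truncated = value.setScale(0, RoundingMode.DOWN);
        System.out.println(truncated);  // 1234

        // Scale 18, as proposed in this PR: the fraction is preserved.
        BigDecimal preserved = value.setScale(18, RoundingMode.DOWN);
        System.out.println(preserved);  // 1234.560000000000000000
    }
}
```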
Eugene-Mark commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1129099918
@srowen I'm also not a Teradata guy; I just invoke Teradata's API from Spark
and found the issue. I didn't find any document explaining the issue on the
Teradata side. I tried to print
Eugene-Mark commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1126221985
@HyukjinKwon @srowen It would be appreciated if the PR could be reviewed soon.
Thanks!