Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/22470
@mgaido91 you are right, this still has behavior changes if the intermediate
result exceeds the max precision. Since most storage systems (Hive, Parquet,
etc.) don't support negative scale, I think
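The concern above is that an intermediate result can exceed Spark's maximum decimal precision of 38 digits. As a minimal JVM illustration (plain `java.math.BigDecimal`, not Spark's `DecimalType` itself), the exact product of two 20-digit values needs 40 digits, which cannot fit at precision 38 without rounding or overflow:

```java
import java.math.BigDecimal;

public class PrecisionOverflow {
    public static void main(String[] args) {
        // Two 20-digit values: their exact product needs 40 digits,
        // which exceeds Spark's DecimalType maximum precision of 38.
        BigDecimal a = new BigDecimal("12345678901234567890");
        BigDecimal b = new BigDecimal("98765432109876543210");
        BigDecimal product = a.multiply(b); // exact in plain BigDecimal
        System.out.println(product.precision()); // prints 40
    }
}
```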
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/22470
@mgaido91 makes sense. Actually @cloud-fan had asked me to write some test
cases for decimal values with negative scale in another PR. While I was
playing around, I found this issue. It seemed to me
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22470
@dilipbiswal yes, I definitely think that in general we should eventually
forbid negative scales. I only thought that this should be done in 3.0 rather
than now. And for now the safest option to me
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/22470
@mgaido91 @cloud-fan On the other hand... some use cases may work better
:-) , for example
Before
```
scala> spark.sql("create table dec as select (1e36 * 1) as col1")
```
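For context on why a literal like `1e36` can end up with a negative scale: on the JVM, scientific notation parses into an unscaled value with a negative scale. A sketch with plain `java.math.BigDecimal` (Spark's `Decimal` wraps the same representation, though how SQL literals are typed can vary by version and configuration):

```java
import java.math.BigDecimal;

public class NegativeScaleDemo {
    public static void main(String[] args) {
        // Scientific notation parses as unscaled value 1 with scale -36.
        BigDecimal d = new BigDecimal("1e36");
        System.out.println(d.unscaledValue()); // prints 1
        System.out.println(d.scale());         // prints -36
        System.out.println(d.precision());     // prints 1
        // Multiplication adds the scales, so the negative scale survives:
        System.out.println(d.multiply(BigDecimal.ONE).scale()); // prints -36
    }
}
```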
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22470
@cloud-fan I think the main problem with this (and it is the reason why I
haven't proposed it) is that the range of operations supported would be
smaller, so we may forbid operations which now
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/22470
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/96271/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/22470
Merged build finished. Test FAILed.
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/22470
**[Test build #96271 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/96271/testReport)**
for PR 22470 at commit
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/22470
@cloud-fan Could you please check CSVInferSchema::tryParseDecimal()? There
is a condition that checks for negative scale.
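The comment above refers to an existing negative-scale guard in Spark's CSV schema inference. A hypothetical, heavily simplified sketch of that kind of guard using plain `java.math.BigDecimal` (the names and logic here are illustrative only, not the actual Spark code):

```java
import java.math.BigDecimal;
import java.util.Optional;

public class TryParseDecimalSketch {
    // Hypothetical, simplified version of the kind of guard discussed in
    // the thread -- NOT the actual Spark CSVInferSchema implementation.
    static Optional<BigDecimal> tryParseDecimal(String field) {
        try {
            BigDecimal bd = new BigDecimal(field);
            // Reject negative scales, e.g. "1e36" parses with scale -36.
            if (bd.scale() < 0) {
                return Optional.empty();
            }
            return Optional.of(bd);
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        System.out.println(tryParseDecimal("3.14").isPresent()); // prints true
        System.out.println(tryParseDecimal("1e36").isPresent()); // prints false
        System.out.println(tryParseDecimal("abc").isPresent());  // prints false
    }
}
```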
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/22470
**[Test build #96271 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/96271/testReport)**
for PR 22470 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/22470
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/22470
Merged build finished. Test PASSed.
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/22470
cc @mgaido91 @hvanhovell @gatorsmile
---