Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-154087900
Please see PR #9495 for the oracle dialect solution proposed above.
Github user travishegner closed the pull request at:
https://github.com/apache/spark/pull/8780
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-154004331
**[Test build #1986 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/1986/consoleFull)**
for PR 8780 at commit
[`d11141c`](https://gi
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-154003137
**[Test build #1986 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/1986/consoleFull)**
for PR 8780 at commit
[`d11141c`](https://git
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-153918977
@travishegner Will you have time to continue your work? I think our
resolution is to create an Oracle dialect and register it automatically (see
https://github.com/apach
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-145693030
@travishegner looks like it is best to just do it in the oracle dialect.
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-145609065
@cloud-fan @bdolbeare @davies I'm certainly open to doing this in an oracle
specific way if that is what is required. I was simply hoping to solve my
problem while
Github user travishegner commented on a diff in the pull request:
https://github.com/apache/spark/pull/8780#discussion_r41172742
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/DecimalType.scala ---
@@ -140,7 +140,12 @@ object DecimalType extends AbstractDataType {
Github user travishegner commented on a diff in the pull request:
https://github.com/apache/spark/pull/8780#discussion_r41171411
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -66,9 +66,7 @@ private[sql] object JDBCRDD extend
Github user bdolbeare commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-145598968
The problem with Oracle is that you can define numbers without providing
precision or scale:
column_name NUMBER (this is the only case that doesn't work v
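For context, a minimal sketch of how this surfaces through plain JDBC metadata; the connection URL, table, and column names below are placeholders, not anything from this PR:
```scala
import java.sql.DriverManager

// Hypothetical connection and query; for a column declared simply as NUMBER,
// the Oracle JDBC driver typically reports precision 0 and scale -127.
val conn = DriverManager.getConnection(
  "jdbc:oracle:thin:@//dbhost:1521/SOMESERVICE", "user", "password")
val rs = conn.createStatement()
  .executeQuery("SELECT unqualified_number_col FROM some_table WHERE 1 = 0")
val md = rs.getMetaData
println(md.getPrecision(1)) // typically 0    -- no precision defined
println(md.getScale(1))     // typically -127 -- Oracle's sentinel for undefined scale
conn.close()
```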
Github user cloud-fan commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-144541760
I met this problem before, and actually it's not Spark that detects them as
0 and -127, but JDBC. My solution is just adding an `OracleDialect` to handle
this special
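A minimal sketch of the dialect-based approach suggested here, assuming the JdbcDialect/JdbcDialects API of the time; the class name and the fallback DecimalType(38, 10) are illustrative choices, not the values the project ultimately settled on:
```scala
import java.sql.Types

import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}
import org.apache.spark.sql.types._

case object OracleNumberDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:oracle")

  override def getCatalystType(
      sqlType: Int, typeName: String, size: Int, md: MetadataBuilder): Option[DataType] = {
    // An unqualified Oracle NUMBER arrives as precision 0 / scale -127, which is
    // not a valid Spark DecimalType, so map it to a bounded default instead.
    if (sqlType == Types.NUMERIC && size == 0) Some(DecimalType(38, 10))
    else None // defer to the generic JDBC type mapping
  }
}

// Registering the dialect makes Spark consult it for matching JDBC URLs.
JdbcDialects.registerDialect(OracleNumberDialect)
```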
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/8780#discussion_r40835035
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/DecimalType.scala ---
@@ -140,7 +140,12 @@ object DecimalType extends AbstractDataType {
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/8780#discussion_r40834945
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -66,9 +66,7 @@ private[sql] object JDBCRDD extends Logg
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-144502287
cc @davies to take a quick look
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-144054401
So any thoughts on merging this?
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-142282068
I'm making sure the new version builds, but here are the rules:
```scala
private[sql] def bounded(precision: Int, scale: Int): DecimalType = {
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-142103750
precision = 10 and scale = -20 should be fine.
```
scala> Seq((121, 134)).toDF("a","b")
res0: org.apache.spark.sql.DataFrame = [a: int, b: int]
scal
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-142100699
OK, after looking at this a little further, it seems that
DecimalType.bounded() should be called regardless of precision and scale values
in JDBCRDD.scala, and then
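For reference, a simplified standalone rendering of what that helper does, assuming the usual cap of 38 on both precision and scale (a sketch, not the exact source):
```scala
import scala.math.min

import org.apache.spark.sql.types.DecimalType

// Clamp precision and scale to Spark's maximum before building a DecimalType.
def boundedDecimal(precision: Int, scale: Int): DecimalType = {
  val maxPrecision = 38
  val maxScale = 38
  DecimalType(min(precision, maxPrecision), min(scale, maxScale))
}

boundedDecimal(100, 10) // DecimalType(38,10): oversized precision gets capped
```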
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-142095013
Oh actually - let me correct it.
If scale is positive, then precision needs to be >= scale.
If scale is negative, then precision can be anything (>0).
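A small sketch encoding that rule (the helper name is made up for illustration):
```scala
// A positive scale must not exceed the precision; a negative scale only
// requires the precision itself to be positive.
def isValidPrecisionScale(precision: Int, scale: Int): Boolean =
  if (scale >= 0) precision > 0 && precision >= scale
  else precision > 0

isValidPrecisionScale(10, 5)   // true
isValidPrecisionScale(5, 10)   // false: scale exceeds precision
isValidPrecisionScale(10, -20) // true: negative scale only needs precision > 0
isValidPrecisionScale(0, -127) // false: the Oracle "undefined" case
```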
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-142045620
But a negative scale is inherently less than a defined precision... or do
you mean precision should never be less than the absolute value of scale? Is
that somethin
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-142037639
They would all be null then. It doesn't make sense to have precision <
scale.
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-142004449
Working on a new patch... Would it ever be possible to have a case where
precision is 0 (essentially undefined), but scale is still intentionally set?
Or is it that
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-141609708
That should work.
On Sep 18, 2015, at 7:15 AM, Travis Hegner wrote:
That is exactly what I was afraid of. Would the patch make more sense to
*only* che
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-141462484
That is exactly what I was afraid of. Would the patch make more sense to
*only* check precision for a zero value? Does it ever make sense to have a
precision of zer
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-141351034
(I actually don't know if Spark implements this correctly -- we should test
it)
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-141350914
Actually scale can be negative. It just means the number of 0s to the left
of the decimal point.
For example, for number 123, precision = 2 and scale = -1, then 123 wou
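The truncated example can be illustrated with java.math.BigDecimal, which uses the same convention for negative scale:
```scala
import java.math.{BigDecimal => JBigDecimal, MathContext}

// Rounding 123 to 2 significant digits leaves an unscaled value of 12 with
// scale -1, i.e. 12 * 10^1 = 120.
val d = new JBigDecimal("123").round(new MathContext(2))
println(d.precision)     // 2
println(d.scale)         // -1
println(d.unscaledValue) // 12
println(d.doubleValue)   // 120.0
```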
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-140936357
I'm not sure if oracle can be associated with anything *reasonable*, but
sometimes you have to play the hand you are dealt. :)
I can only answer your questi
Github user holdenk commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-140913785
I'm not super sure on that; one question would be whether this is reasonable
behavior for all databases or only Oracle.
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-140913028
Yes, that is the intention. Is this the proper way to address that issue?
On Wed, Sep 16, 2015, 5:14 PM Holden Karau wrote:
> So I understand, the
Github user holdenk commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-140896861
So I understand, the goal of this patch is that if an invalid value is
returned (e.g. a precision or scale <= 0), then the defaults are used yes?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-140881328
Can one of the admins verify this patch?
GitHub user travishegner opened a pull request:
https://github.com/apache/spark/pull/8780
[SPARK-10648] Proposed bug fix when oracle returns -127 as a scale to a
numeric type
In my environment, the precision and scale are undefined in the oracle
database, but spark is detecting the
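A hedged sketch of the kind of read that hits this code path; the URL, table name, and credentials are placeholders (Spark 1.x SQLContext API):
```scala
import java.util.Properties

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Placeholder setup; any table with an unqualified NUMBER column will have its
// JDBC metadata reported as precision 0 / scale -127 when read this way.
val sc = new SparkContext(
  new SparkConf().setAppName("oracle-number-repro").setMaster("local[*]"))
val sqlContext = new SQLContext(sc)

val props = new Properties()
props.setProperty("user", "someuser")
props.setProperty("password", "somepassword")

val df = sqlContext.read.jdbc(
  "jdbc:oracle:thin:@//dbhost:1521/SOMESERVICE", "SOME_TABLE", props)
df.printSchema()
```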