GitHub user travishegner opened a pull request:
https://github.com/apache/spark/pull/19191
[SPARK-21958][ML] Word2VecModel save: transform data in the cluster
## What changes were proposed in this pull request?
Change a data transformation while saving a Word2VecModel to
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/9495#issuecomment-154423022
Thanks @yhuai for taking care of this!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-154087900
Please see PR #9495 for the oracle dialect solution proposed above.
Github user travishegner closed the pull request at:
https://github.com/apache/spark/pull/8780
GitHub user travishegner opened a pull request:
https://github.com/apache/spark/pull/9495
Oracle dialect to handle nonspecific numeric types
This is the alternative/agreed upon solution to PR #8780.
Creating an OracleDialect to handle the nonspecific numeric types that can
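The excerpt is cut off above, but the dialect idea can be sketched without Spark itself. Everything below (the object name, the sentinel checks, and the fallback of precision 38 / scale 10) is an illustrative assumption, not the PR's actual code: Oracle reports an unconstrained NUMBER column with precision 0 and scale -127, and the dialect substitutes a concrete, bounded decimal instead of failing.

```scala
// Illustrative sketch only: decide which (precision, scale) Spark should
// use for an Oracle NUMERIC column, given what the JDBC driver reports.
// Oracle uses precision 0 and scale -127 as "unspecified" sentinels.
object OracleNumericMapping {
  val MaxPrecision = 38 // Spark's DecimalType upper bound

  def resolve(precision: Int, scale: Int): (Int, Int) =
    if (precision == 0 || scale == -127)
      (MaxPrecision, 10) // assumed fallback for a plain, unconstrained NUMBER
    else
      (math.min(precision, MaxPrecision), math.min(scale, MaxPrecision))
}
```

A real dialect would make this decision inside `JdbcDialect.getCatalystType` and return `Some(DecimalType(p, s))`; the point here is only that the sentinel case is handled explicitly rather than producing an invalid type.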
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-145609065
@cloud-fan @bdolbeare @davies I'm certainly open to doing this in an
Oracle-specific way if that is what is required. I was simply hoping to solve
my problem
Github user travishegner commented on a diff in the pull request:
https://github.com/apache/spark/pull/8780#discussion_r41172742
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/types/DecimalType.scala ---
@@ -140,7 +140,12 @@ object DecimalType extends AbstractDataType
Github user travishegner commented on a diff in the pull request:
https://github.com/apache/spark/pull/8780#discussion_r41171411
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala ---
@@ -66,9 +66,7 @@ private[sql] object JDBCRDD
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-144054401
So any thoughts on merging this?
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-142282068
I'm making sure the new version builds, but here are the rules:
```scala
private[sql] def bounded(precision: Int, scale: Int): DecimalType = {
  DecimalType(min(precision, MAX_PRECISION), min(scale, MAX_SCALE))
}
```
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-142100699
OK, after looking at this a little further, it seems that
DecimalType.bounded() should be called regardless of precision and scale values
in JDBCRDD.scala, and
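The suggestion of applying the bounding step regardless of the reported values can be sketched standalone. The 38-digit limit mirrors Spark's decimal maximum; the choice to floor a negative scale at 0 and force a minimum precision of 1 is an assumption for illustration:

```scala
// Sketch: clamp whatever the JDBC driver reports into a representable
// (precision, scale) pair. Because it runs unconditionally, sentinel
// values such as scale -127 can never reach DecimalType's constructor.
object BoundedSketch {
  val MaxPrecision = 38

  def bounded(precision: Int, scale: Int): (Int, Int) = {
    val s = math.min(math.max(scale, 0), MaxPrecision)                  // no negative scale
    val p = math.min(math.max(precision, math.max(s, 1)), MaxPrecision) // p >= s and p >= 1
    (p, s)
  }
}
```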
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-142045620
But a negative scale is inherently less than a defined precision... or do
you mean precision should never be less than the absolute value of scale? Is
that
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-142004449
Working on a new patch... Would it ever be possible to have a case where
precision is 0 (essentially undefined), but scale is still intentionally set?
Or is it
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-141462484
That is exactly what I was afraid of. Would the patch make more sense to
*only* check precision for a zero value? Does it ever make sense to have a
precision of
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-140936357
I'm not sure if Oracle can be associated with anything *reasonable*, but
sometimes you have to play the hand you are dealt. :)
I can only answer
Github user travishegner commented on the pull request:
https://github.com/apache/spark/pull/8780#issuecomment-140913028
Yes, that is the intention. Is this the proper way to address that issue?
On Wed, Sep 16, 2015, 5:14 PM Holden Karau wrote:
> So I understand,
GitHub user travishegner opened a pull request:
https://github.com/apache/spark/pull/8780
[SPARK-10648] Proposed bug fix when Oracle returns -127 as a scale to a
numeric type
In my environment, the precision and scale are undefined in the Oracle
database, but Spark is detecting
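A concrete illustration of the reported failure mode (the constraint values restate Spark's historical decimal rules, given here as an assumption): the driver hands back precision 0 and scale -127 for a plain NUMBER, which no valid decimal declaration can satisfy.

```scala
// Sketch of the validity check that an unconstrained Oracle NUMBER fails:
// Spark's decimals historically required 1 <= precision <= 38 and
// 0 <= scale <= precision.
object JdbcDecimalCheck {
  def isValidDecimal(precision: Int, scale: Int): Boolean =
    precision >= 1 && precision <= 38 && scale >= 0 && scale <= precision
}
```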