GitHub user tmnd1991 opened a pull request:
https://github.com/apache/spark/pull/21693
[SPARK-24673]
## What changes were proposed in this pull request?
Add an overloaded version of `from_utc_timestamp` and `to_utc_timestamp`
that takes the second argument as a `Column` instead of a `String`.
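For context, a minimal sketch of the function's semantics using plain `java.time` (the helper name `fromUtcTimestamp` and the sample values are illustrative, not Spark's implementation):

```scala
import java.time.{LocalDateTime, ZoneId, ZoneOffset}

// Semantics of from_utc_timestamp, sketched with java.time:
// treat `utc` as an instant in UTC, then render it as wall-clock
// time in the zone named by `tz`.
def fromUtcTimestamp(utc: LocalDateTime, tz: String): LocalDateTime =
  utc.atOffset(ZoneOffset.UTC).toInstant.atZone(ZoneId.of(tz)).toLocalDateTime

// The point of the overload: with a Column-typed second argument the
// zone can vary per row, e.g. from_utc_timestamp(col("ts"), col("tz")),
// rather than one literal zone for the whole column.
```

For example, 2018-07-01 12:00 UTC rendered in `Asia/Tokyo` (UTC+9) comes out as 21:00 the same day.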
Github user tmnd1991 commented on the issue:
https://github.com/apache/spark/pull/13509
I corrected the style errors you pointed out. If the default values really
cannot be retrieved, I will leave the 64m hard-coded as it is.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user tmnd1991 commented on the issue:
https://github.com/apache/spark/pull/13509
The only thing I don't like is the hard-coded "64m", but I couldn't find
where the default Spark confs are stored!
---
Github user tmnd1991 commented on the issue:
https://github.com/apache/spark/pull/13509
Can anyone verify this?
---
Github user tmnd1991 commented on the issue:
https://github.com/apache/spark/pull/13509
I noticed a Scala style error; please wait for the new commit before
triggering a Jenkins build.
---
Github user tmnd1991 commented on the issue:
https://github.com/apache/spark/pull/13509
(Fix the title please)
https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark
---
GitHub user tmnd1991 opened a pull request:
https://github.com/apache/spark/pull/13509
SPARK-15740
## What changes were proposed in this pull request?
"test big model load / save" in Word2VecSuite, lately resulted into OOM.
Therefore we decided to make the pa
Github user tmnd1991 commented on the pull request:
https://github.com/apache/spark/pull/9989#issuecomment-160271736
Something went wrong with the commit.
It should be fine now. Never commit before a good coffee!
---
Github user tmnd1991 commented on the pull request:
https://github.com/apache/spark/pull/9989#issuecomment-160269241
I adjusted the code style.
---
Github user tmnd1991 commented on the pull request:
https://github.com/apache/spark/pull/9989#issuecomment-160266635
Yes, it was wrong; I fixed it, sorry.
---
Github user tmnd1991 commented on the pull request:
https://github.com/apache/spark/pull/9989#issuecomment-159875793
I adjusted the code as you suggested
---
Github user tmnd1991 commented on the pull request:
https://github.com/apache/spark/pull/9989#issuecomment-159866134
I explained it on the JIRA issue; I'll explain it again here:
since `spark.kryoserializer.buffer.max` defaults to 64MB, I decided to
increase the numb
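For reference, the conf mentioned above can also be raised without touching the code; a hedged example ("512m" is an arbitrary value for illustration, not the one discussed in the PR):

```
# spark-defaults.conf (or pass via --conf on spark-submit):
# spark.kryoserializer.buffer.max defaults to 64m
spark.kryoserializer.buffer.max  512m
```

Equivalently on the command line: `spark-submit --conf spark.kryoserializer.buffer.max=512m ...`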
GitHub user tmnd1991 opened a pull request:
https://github.com/apache/spark/pull/9989
[SPARK-11932]
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tmnd1991/spark SPARK-11932
Alternatively you can review and apply these changes
Github user tmnd1991 closed the pull request at:
https://github.com/apache/spark/pull/8301
---
GitHub user tmnd1991 opened a pull request:
https://github.com/apache/spark/pull/8301
[SPARK-10105] Add most frequent k parameter to Word2Vec
When training Word2Vec on a really big dataset, it's really hard to
choose the right `minCount` parameter; it would really help to hav
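The idea can be sketched in plain Scala: rather than guessing a frequency cutoff (`minCount`), keep the k most frequent words from the vocabulary counts (illustrative only, not the API proposed in the PR):

```scala
// Keep only the k most frequent words from raw corpus counts,
// instead of filtering by a hand-tuned minCount threshold.
def mostFrequentK(counts: Map[String, Long], k: Int): Set[String] =
  counts.toSeq.sortBy(-_._2).take(k).map(_._1).toSet
```

The advantage over `minCount` is that the vocabulary size is bounded up front, regardless of the corpus's frequency distribution.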