Github user krishnakalyan3 closed the pull request at:
https://github.com/apache/spark/pull/14741
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/16767
ping @wangmiao1981 @felixcheung
---
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/16767
@wangmiao1981 sorry, had made an erroneous commit. Could you please review
the PR?
---
Github user krishnakalyan3 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16767#discussion_r98974015
--- Diff: R/pkg/vignettes/sparkr-vignettes.Rmd ---
@@ -494,6 +494,8 @@ SparkR supports the following machine learning models and algorithms
GitHub user krishnakalyan3 opened a pull request:
https://github.com/apache/spark/pull/16767
[SPARK-19386][SPARKR][DOC]
## What changes were proposed in this pull request?
Update programming guide, example and vignette with Bisecting k-means.
You can merge this pull request
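The pull request above documents Bisecting k-means in the SparkR guide and vignette. For context, the algorithm's core idea (start with one cluster, repeatedly bisect the largest via 2-means) can be sketched in plain Python; this is a hypothetical illustration with made-up helper names, not SparkR's or MLlib's implementation:

```python
import math

def dist(p, q):
    # Euclidean distance between two points (lists of floats).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

def mean(ps):
    # Component-wise mean of a non-empty list of points.
    return [sum(xs) / len(ps) for xs in zip(*ps)]

def kmeans2(points, iters=10):
    # Plain 2-means: seed centers from the first and last points,
    # then alternate assignment and center updates.
    c0, c1 = points[0], points[-1]
    for _ in range(iters):
        a, b = [], []
        for p in points:
            (a if dist(p, c0) <= dist(p, c1) else b).append(p)
        if a:
            c0 = mean(a)
        if b:
            c1 = mean(b)
    return a, b

def bisecting_kmeans(points, k):
    # Start with one cluster; repeatedly split the largest until k remain.
    clusters = [points]
    while len(clusters) < k:
        clusters.sort(key=len)
        largest = clusters.pop()
        a, b = kmeans2(largest)
        clusters.extend([c for c in (a, b) if c])
    return clusters
```

In SparkR itself the entry point is `spark.bisectingKMeans`, which is what the vignette change documents.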
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/14741
ping @shivaram @davies, I am planning to revisit this PR.
Could you please let me know which daemon process on Linux we are trying to
interrupt? I am assuming it's the R process
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/16242
cc @MLnick and @holdenk
---
GitHub user krishnakalyan3 opened a pull request:
https://github.com/apache/spark/pull/16242
[SPARK-18628][ML] Update Scala param and Python param to have quotes
## What changes were proposed in this pull request?
Updated Scala param and Python param to have quotes around
Github user krishnakalyan3 closed the pull request at:
https://github.com/apache/spark/pull/15755
---
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/15755
@srowen will do.
---
GitHub user krishnakalyan3 opened a pull request:
https://github.com/apache/spark/pull/15755
[SPARK-15902][PySpark] Add deprecation warning if Python version is below 2.7
## What changes were proposed in this pull request?
Deprecation warning if we detect we are running
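The check proposed in SPARK-15902 can be sketched roughly as follows; the function name is hypothetical and this is not the actual patch:

```python
import sys
import warnings

def warn_if_old_python():
    # Emit a deprecation warning when the running interpreter is
    # older than Python 2.7; returns True if a warning was issued.
    if sys.version_info[:2] < (2, 7):
        warnings.warn(
            "Support for Python versions below 2.7 is deprecated",
            DeprecationWarning)
        return True
    return False
```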
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/14741
@shivaram thanks for the advice.
An issue I am facing:
- While reading a large file from RStudio and trying to kill the
process using `Sys.getpid()`, I tried
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/14741
@shivaram @davies
- The signature of the readBin function is
`readBin(con, what, n, as.integer(size), endian)`
What should the value of `what` be when the process is interrupted?
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/14741
@shivaram I am not sure how to go about the retry method. Could you
please share an example that I could refer to?
---
GitHub user krishnakalyan3 opened a pull request:
https://github.com/apache/spark/pull/14741
[SPARK-6832][SPARKR][WIP]Handle partial reads in SparkR
## What changes were proposed in this pull request?
Handle partial reads in SparkR by implementing a retry method in R
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/13894
@MLnick @holdenk @jkbradley thanks for the reviews.
---
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/13894
cc @MLnick @holdenk
---
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/14179
@shivaram @felixcheung thanks for the reviews. Will keep the feedback in
mind.
---
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/14179
@felixcheung @shivaram Is the current state okay?
---
Github user krishnakalyan3 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14179#discussion_r71077269
--- Diff: R/pkg/R/sparkR.R ---
@@ -155,6 +155,10 @@ sparkR.sparkContext <- function(
existingPort <- Sys.
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/14179
@felixcheung @shivaram I am not sure if the warning message is clear
enough. I did the best I could with the 100-character limit. I am not sure which
SparkR unit tests fail from the logs
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/14179
@felixcheung my local unit tests still fail; anyway, thanks for the
clarification.
---
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/14179
@shivaram @felixcheung My patch fails the SparkR unit tests (./R/run-tests.sh).
Logs https://gist.github.com/krishnakalyan3/6585a1007b731e82fede1b942ea00bec
I am not sure how to go about
Github user krishnakalyan3 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14179#discussion_r70899726
--- Diff: R/pkg/R/sparkR.R ---
@@ -155,6 +155,9 @@ sparkR.sparkContext <- function(
existingPort <- Sys.
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/13894
retest this please
---
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/14179
cc @shivaram
---
GitHub user krishnakalyan3 opened a pull request:
https://github.com/apache/spark/pull/14179
[SPARK-16055][SPARKR] warning added while using sparkPackages with
spark-submit
## What changes were proposed in this pull request?
SPARK-16055
sparkPackages - argument is passed
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/13894
cc @holdenk @MLnick @jkbradley. Does the current state look good?
---
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/13894
@holdenk @MLnick sorry for so many changes, newbie here. Please let me know
if the current state is okay.
---
Github user krishnakalyan3 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13894#discussion_r69567260
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tuning/CrossValidator.scala ---
@@ -57,7 +57,7 @@ private[ml] trait CrossValidatorParams extends
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/13894
@holdenk @MLnick is the current update okay?
---
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/13894
Updated the doc based on the reviews. Thanks for the review comments
@holdenk and @MLnick.
---
Github user krishnakalyan3 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13894#discussion_r69103201
--- Diff: python/pyspark/ml/tuning.py ---
@@ -266,7 +269,7 @@ class CrossValidatorModel(Model, ValidatorParams):
"""
Github user krishnakalyan3 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13894#discussion_r69086192
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tuning/CrossValidator.scala ---
@@ -194,8 +194,7 @@ object CrossValidator extends MLReadable
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/13894
cc @holdenk
---
GitHub user krishnakalyan3 opened a pull request:
https://github.com/apache/spark/pull/13894
[SPARK-15254][DOC] Improve ML pipeline Cross Validation Scaladoc & PyDoc
## What changes were proposed in this pull request?
Updated ML pipeline Cross Validation Scaladoc & PyDoc
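For context, the k-fold splitting that the CrossValidator docs describe can be sketched in plain Python; the helper name is hypothetical and this is not the Spark implementation:

```python
def kfold_indices(n, num_folds=3):
    # Split indices 0..n-1 into num_folds folds; each fold serves once
    # as the validation set while the remaining folds form the training set.
    folds = [list(range(i, n, num_folds)) for i in range(num_folds)]
    splits = []
    for i in range(num_folds):
        valid = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((sorted(train), sorted(valid)))
    return splits
```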
Github user krishnakalyan3 commented on the pull request:
https://github.com/apache/spark/pull/13268#issuecomment-221126159
@holdenk @shivaram added the reverse conversion details. Please let me know
if it's okay.
Thanks
---
Github user krishnakalyan3 commented on the pull request:
https://github.com/apache/spark/pull/13268#issuecomment-221124498
@holdenk @shivaram will add that. Thanks
---
GitHub user krishnakalyan3 opened a pull request:
https://github.com/apache/spark/pull/13268
[SPARK-12071][Doc] Document the behaviour of NA in R
## What changes were proposed in this pull request?
Under the "Upgrading From SparkR 1.5.x to 1.6.x" section, added the information