[
https://issues.apache.org/jira/browse/SPARK-26257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782633#comment-16782633
]
Jeff Zhang commented on SPARK-26257:
I know Apache Beam provides one abstraction layer for multiple
[
https://issues.apache.org/jira/browse/SPARK-22640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270139#comment-16270139
]
Jeff Zhang commented on SPARK-22640:
you need to use spark.yarn.appMasterEnv since you are using yarn
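A hedged sketch of what that advice looks like in practice; the environment variable name (PYSPARK_PYTHON), the path, and the application file are illustrative assumptions, not taken from the thread:

```shell
# Environment variables set via spark.yarn.appMasterEnv.* are applied to the
# YARN application master process (the driver, in yarn-cluster mode).
spark-submit \
  --master yarn --deploy-mode cluster \
  --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=/opt/python/bin/python \
  app.py
```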
[
https://issues.apache.org/jira/browse/SPARK-22095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175770#comment-16175770
]
Jeff Zhang commented on SPARK-22095:
Could you tell me how to reproduce this issue?
[
https://issues.apache.org/jira/browse/SPARK-21186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16071254#comment-16071254
]
Jeff Zhang commented on SPARK-21186:
I think this is due to how spark-deep-learning distributes its
[
https://issues.apache.org/jira/browse/SPARK-20249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-20249:
---
Component/s: PySpark
> Add summary for LinearSVCModel
[
https://issues.apache.org/jira/browse/SPARK-20249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15960540#comment-15960540
]
Jeff Zhang commented on SPARK-20249:
Will work on it.
> Add summary for LinearSVCModel
Jeff Zhang created SPARK-20249:
--
Summary: Add summary for LinearSVCModel
Key: SPARK-20249
URL: https://issues.apache.org/jira/browse/SPARK-20249
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15930922#comment-15930922
]
Jeff Zhang edited comment on SPARK-20001 at 3/18/17 1:08 PM:
-
Thanks
[
https://issues.apache.org/jira/browse/SPARK-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15930922#comment-15930922
]
Jeff Zhang edited comment on SPARK-20001 at 3/18/17 12:17 AM:
--
Thanks
[
https://issues.apache.org/jira/browse/SPARK-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15930922#comment-15930922
]
Jeff Zhang edited comment on SPARK-20001 at 3/18/17 12:14 AM:
--
Thanks
[
https://issues.apache.org/jira/browse/SPARK-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-20001:
---
Comment: was deleted
(was: Thanks [~dansanduleac] It looks like we are doing similar things; recently I
[
https://issues.apache.org/jira/browse/SPARK-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15930924#comment-15930924
]
Jeff Zhang commented on SPARK-20001:
Thanks [~dansanduleac] It looks like we are doing similar things,
[
https://issues.apache.org/jira/browse/SPARK-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15930922#comment-15930922
]
Jeff Zhang commented on SPARK-20001:
Thanks [~dansanduleac] It looks like we are doing similar things,
[
https://issues.apache.org/jira/browse/SPARK-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923683#comment-15923683
]
Jeff Zhang commented on SPARK-13587:
I linked a detailed document about how to use it in both batch
[
https://issues.apache.org/jira/browse/SPARK-19439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901290#comment-15901290
]
Jeff Zhang commented on SPARK-19439:
Make sense, I will work on it.
> PySpark's
Jeff Zhang created SPARK-19572:
--
Summary: Allow to disable hive in sparkR shell
Key: SPARK-19572
URL: https://issues.apache.org/jira/browse/SPARK-19572
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-19572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-19572:
---
Description: SPARK-15236 does this for the Scala shell; this ticket is for the
sparkR shell. This is not only
[
https://issues.apache.org/jira/browse/SPARK-19570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-19570:
---
Description: SPARK-15236 does this for the Scala shell; this ticket is for the
pyspark shell. This is not
[
https://issues.apache.org/jira/browse/SPARK-19570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-19570:
---
Description: SPARK-15236 does this for the Scala shell; this ticket is for the
pyspark shell. This is not
[
https://issues.apache.org/jira/browse/SPARK-19570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-19570:
---
Description: SPARK-15236 does this for the Scala shell; this ticket is for the
pyspark shell.
> Allow to
Jeff Zhang created SPARK-19570:
--
Summary: Allow to disable hive in pyspark shell
Key: SPARK-19570
URL: https://issues.apache.org/jira/browse/SPARK-19570
Project: Spark
Issue Type: Improvement
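A hedged sketch of what "disable hive" could mean at the command line; the use of the `spark.sql.catalogImplementation` option here is an assumption based on how Spark 2.x selects its catalog, not something stated in the ticket:

```shell
# Start the pyspark shell with the in-memory catalog instead of Hive.
pyspark --conf spark.sql.catalogImplementation=in-memory
```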
[
https://issues.apache.org/jira/browse/SPARK-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang closed SPARK-19096.
--
Resolution: Invalid
Will do it in SPARK-13587
> Kmeans.py application fails with virtualenv and due
[
https://issues.apache.org/jira/browse/SPARK-19095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang resolved SPARK-19095.
Resolution: Invalid
Will do it in SPARK-13587
> virtualenv example does not work in yarn cluster
[
https://issues.apache.org/jira/browse/SPARK-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747407#comment-15747407
]
Jeff Zhang commented on SPARK-13587:
If it is a pretty large cluster, then I would suggest setting up a
[
https://issues.apache.org/jira/browse/SPARK-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747338#comment-15747338
]
Jeff Zhang commented on SPARK-13587:
[~prasanna.santha...@icloud.com] I don't understand how this can
[
https://issues.apache.org/jira/browse/SPARK-18786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-18786:
---
Component/s: PySpark
> pySpark SQLContext.getOrCreate(sc) take stopped sparkContext
[
https://issues.apache.org/jira/browse/SPARK-18405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15696978#comment-15696978
]
Jeff Zhang edited comment on SPARK-18405 at 11/26/16 1:01 AM:
--
I think he
[
https://issues.apache.org/jira/browse/SPARK-18405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15696978#comment-15696978
]
Jeff Zhang commented on SPARK-18405:
I think he means to launch multiple Spark thrift servers in
[
https://issues.apache.org/jira/browse/SPARK-18160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-18160:
---
Summary: spark.files & spark.jars should not be passed to driver in yarn
mode (was: spark.files
[
https://issues.apache.org/jira/browse/SPARK-18160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-18160:
---
Summary: spark.files should not passed to driver in yarn-cluster mode
(was: SparkContext.addFile
[
https://issues.apache.org/jira/browse/SPARK-18160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-18160:
---
Summary: spark.files should not be passed to driver in yarn-cluster mode
(was: spark.files should
[
https://issues.apache.org/jira/browse/SPARK-18160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-18160:
---
Description:
The following command will fail for Spark 2.0
{noformat}
bin/spark-submit --class
[
https://issues.apache.org/jira/browse/SPARK-18160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-18160:
---
Description:
The following command will fail for Spark 2.0
{noformat}
bin/spark-submit --class
[
https://issues.apache.org/jira/browse/SPARK-18160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-18160:
---
Description:
{noformat}
bin/spark-submit --class org.apache.spark.examples.SparkPi --master
Jeff Zhang created SPARK-18160:
--
Summary: SparkContext.addFile doesn't work in yarn-cluster mode
Key: SPARK-18160
URL: https://issues.apache.org/jira/browse/SPARK-18160
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-16321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-16321:
---
Component/s: (was: PySpark)
SQL
> [Spark 2.0] Performance regression when
[
https://issues.apache.org/jira/browse/SPARK-17904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571366#comment-15571366
]
Jeff Zhang commented on SPARK-17904:
It makes sense to provide such an API to install packages; one
Jeff Zhang created SPARK-17605:
--
Summary: Add option spark.usePython and spark.useR for
applications that use both pyspark and sparkr
Key: SPARK-17605
URL: https://issues.apache.org/jira/browse/SPARK-17605
[
https://issues.apache.org/jira/browse/SPARK-17054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang closed SPARK-17054.
--
Resolution: Won't Fix
Closing it as it is resolved somewhere else.
> SparkR can not run in
[
https://issues.apache.org/jira/browse/SPARK-17428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475664#comment-15475664
]
Jeff Zhang commented on SPARK-17428:
Found another elegant way to specify the version, using devtools
[
https://issues.apache.org/jira/browse/SPARK-17428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475645#comment-15475645
]
Jeff Zhang commented on SPARK-17428:
I just linked the JIRA of the Python virtualenv. It seems R support
[
https://issues.apache.org/jira/browse/SPARK-17428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475630#comment-15475630
]
Jeff Zhang commented on SPARK-17428:
The source code URL needs to be specified for the version.
[
https://issues.apache.org/jira/browse/SPARK-17261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15444981#comment-15444981
]
Jeff Zhang commented on SPARK-17261:
It works if you change 'sc._instantiatedContext = None' to
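The comment above is truncated, so here is only a hedged illustration of the underlying pitfall it points at (a cached singleton surviving a stopped context); all class and attribute names below are stand-ins, not the real PySpark internals:

```python
# Sketch: getOrCreate must invalidate its cached instance when the
# underlying context has been stopped, otherwise callers receive a
# wrapper around a dead context.
class Context:
    def __init__(self):
        self.stopped = False

    def stop(self):
        self.stopped = True


class SQLWrapper:
    _instantiated = None  # cached singleton, analogous to _instantiatedContext

    def __init__(self, ctx):
        self.ctx = ctx

    @classmethod
    def get_or_create(cls, ctx):
        # Recreate the wrapper if nothing is cached, or if the cached
        # wrapper holds a context that has since been stopped.
        if cls._instantiated is None or cls._instantiated.ctx.stopped:
            cls._instantiated = cls(ctx)
        return cls._instantiated
```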
[
https://issues.apache.org/jira/browse/SPARK-17261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15444694#comment-15444694
]
Jeff Zhang commented on SPARK-17261:
[~dongjoon] spark-shell works well for me. It seems your case is
Jeff Zhang created SPARK-17210:
--
Summary: sparkr.zip is not distributed to executors when run
sparkr in RStudio
Key: SPARK-17210
URL: https://issues.apache.org/jira/browse/SPARK-17210
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15432195#comment-15432195
]
Jeff Zhang commented on SPARK-14501:
I am not working on it now, as it is a duplicate of SPARK-14503
[
https://issues.apache.org/jira/browse/SPARK-17157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-17157:
---
Issue Type: Sub-task (was: New Feature)
Parent: SPARK-16442
> Add multiclass logistic
Jeff Zhang created SPARK-17178:
--
Summary: Allow to set sparkr shell command through --conf
Key: SPARK-17178
URL: https://issues.apache.org/jira/browse/SPARK-17178
Project: Spark
Issue Type:
Jeff Zhang created SPARK-17125:
--
Summary: Allow to specify spark config using non-string type in
SparkR
Key: SPARK-17125
URL: https://issues.apache.org/jira/browse/SPARK-17125
Project: Spark
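A minimal sketch of what "non-string config values" could require under the hood: coercing each value to the string form Spark expects. The lowercase-boolean convention and the helper name are assumptions for illustration:

```python
# Coerce a config value of arbitrary type to the string Spark configs use.
def to_conf_value(value):
    if isinstance(value, bool):
        # Spark config strings conventionally use lowercase booleans.
        return "true" if value else "false"
    return str(value)
```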
[
https://issues.apache.org/jira/browse/SPARK-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15425965#comment-15425965
]
Jeff Zhang commented on SPARK-17116:
Is it possible to raise an error if no such param exists
Jeff Zhang created SPARK-17121:
--
Summary: Support _HOST replacement for principal
Key: SPARK-17121
URL: https://issues.apache.org/jira/browse/SPARK-17121
Project: Spark
Issue Type: Improvement
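The `_HOST` placeholder convention comes from Hadoop's Kerberos support; a hedged sketch of the substitution (the helper name and signature are illustrative assumptions):

```python
import socket

def expand_principal(principal, hostname=None):
    """Replace the _HOST placeholder in a Kerberos principal with the
    machine's fully qualified domain name (or an explicit hostname)."""
    host = hostname or socket.getfqdn()
    return principal.replace("_HOST", host)
```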
Jeff Zhang created SPARK-17103:
--
Summary: Can not define class variable in repl
Key: SPARK-17103
URL: https://issues.apache.org/jira/browse/SPARK-17103
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-17054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15423745#comment-15423745
]
Jeff Zhang commented on SPARK-17054:
I pushed another commit to disable downloading Spark if it is
[
https://issues.apache.org/jira/browse/SPARK-16578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15423741#comment-15423741
]
Jeff Zhang commented on SPARK-16578:
Another scenario I'd like to clarify: say we launch R
[
https://issues.apache.org/jira/browse/SPARK-17054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422058#comment-15422058
]
Jeff Zhang commented on SPARK-17054:
I have a single-node Hadoop cluster on my laptop, and I run R
[
https://issues.apache.org/jira/browse/SPARK-16578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421878#comment-15421878
]
Jeff Zhang commented on SPARK-16578:
I think this feature can also be applied in pyspark.
[
https://issues.apache.org/jira/browse/SPARK-17054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421851#comment-15421851
]
Jeff Zhang commented on SPARK-17054:
Here's the command I run.
{code}
bin/spark-submit --master
[
https://issues.apache.org/jira/browse/SPARK-17054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421846#comment-15421846
]
Jeff Zhang commented on SPARK-17054:
Do you run it in yarn-cluster mode?
> SparkR can not run in
[
https://issues.apache.org/jira/browse/SPARK-16578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15420749#comment-15420749
]
Jeff Zhang commented on SPARK-16578:
I think one purpose of this ticket is to share the same
[
https://issues.apache.org/jira/browse/SPARK-17054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15420553#comment-15420553
]
Jeff Zhang commented on SPARK-17054:
Although I can fix it by using the correct cache dir for macOS,
Jeff Zhang created SPARK-17054:
--
Summary: SparkR can not run in yarn-cluster mode on mac os
Key: SPARK-17054
URL: https://issues.apache.org/jira/browse/SPARK-17054
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-16781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15420547#comment-15420547
]
Jeff Zhang commented on SPARK-16781:
JAVA_HOME will be set by YARN; not sure about other cluster
[
https://issues.apache.org/jira/browse/SPARK-15882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15418585#comment-15418585
]
Jeff Zhang commented on SPARK-15882:
I think it is better to keep the RDD API underneath, as I don't see
[
https://issues.apache.org/jira/browse/SPARK-16965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-16965:
---
Component/s: PySpark
MLlib
> Fix bound checking for SparseVector
Jeff Zhang created SPARK-16965:
--
Summary: Fix bound checking for SparseVector
Key: SPARK-16965
URL: https://issues.apache.org/jira/browse/SPARK-16965
Project: Spark
Issue Type: Bug
Affects
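A hedged sketch of the kind of bound checking such a ticket asks for: sparse-vector indices must be non-negative, strictly increasing, and smaller than the declared size. This mirrors, but is not, the real SparseVector validation code:

```python
def check_sparse(size, indices, values):
    """Validate the (size, indices, values) triple of a sparse vector."""
    if len(indices) != len(values):
        raise ValueError("indices and values must have the same length")
    prev = -1
    for i in indices:
        if i <= prev:
            raise ValueError("indices must be strictly increasing")
        if i < 0 or i >= size:
            raise ValueError("index %d out of bounds for size %d" % (i, size))
        prev = i
```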
[
https://issues.apache.org/jira/browse/SPARK-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15407550#comment-15407550
]
Jeff Zhang commented on SPARK-16890:
If I remember correctly, it is by design. Because in the sql
[
https://issues.apache.org/jira/browse/SPARK-16367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364972#comment-15364972
]
Jeff Zhang commented on SPARK-16367:
[~gae...@xeberon.net] I still don't understand how the binary
[
https://issues.apache.org/jira/browse/SPARK-16367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15363509#comment-15363509
]
Jeff Zhang commented on SPARK-16367:
Preparing the wheelhouse seems time-consuming to me, especially
[
https://issues.apache.org/jira/browse/SPARK-16367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15363513#comment-15363513
]
Jeff Zhang commented on SPARK-16367:
Oh, happen to find this project to build local python package
[
https://issues.apache.org/jira/browse/SPARK-16367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15362764#comment-15362764
]
Jeff Zhang commented on SPARK-16367:
[~gae...@xeberon.net] Thanks for the new idea, this makes the
[
https://issues.apache.org/jira/browse/SPARK-16324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359361#comment-15359361
]
Jeff Zhang commented on SPARK-16324:
I think this is by design
{code}
override def nullSafeEval(s:
[
https://issues.apache.org/jira/browse/SPARK-16321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359339#comment-15359339
]
Jeff Zhang commented on SPARK-16321:
This could be due to a lot of things, maybe reading the Parquet file,
[
https://issues.apache.org/jira/browse/SPARK-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351512#comment-15351512
]
Jeff Zhang commented on SPARK-13587:
Thanks [~gae...@xeberon.net] Have you taken a look at my PR?
[
https://issues.apache.org/jira/browse/SPARK-16168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346487#comment-15346487
]
Jeff Zhang edited comment on SPARK-16168 at 6/23/16 2:10 PM:
-
I don't think
[
https://issues.apache.org/jira/browse/SPARK-16168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346487#comment-15346487
]
Jeff Zhang commented on SPARK-16168:
I don't think it is a Spark issue; it is more likely your query
[
https://issues.apache.org/jira/browse/SPARK-15345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15345652#comment-15345652
]
Jeff Zhang commented on SPARK-15345:
It has been resolved in
[
https://issues.apache.org/jira/browse/SPARK-15345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang closed SPARK-15345.
--
Resolution: Fixed
> SparkSession's conf doesn't take effect when there's already an existing
[
https://issues.apache.org/jira/browse/SPARK-15705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-15705:
---
Comment: was deleted
(was: I will take a look at it. )
> Spark won't read ORC schema from metastore
[
https://issues.apache.org/jira/browse/SPARK-15705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15345533#comment-15345533
]
Jeff Zhang commented on SPARK-15705:
I will take a look at it.
> Spark won't read ORC schema from
[
https://issues.apache.org/jira/browse/SPARK-16065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343953#comment-15343953
]
Jeff Zhang commented on SPARK-16065:
Would you mind pasting the code around line 22 of test.scala?
[
https://issues.apache.org/jira/browse/SPARK-16013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335747#comment-15335747
]
Jeff Zhang commented on SPARK-16013:
Found SPARK-11562, although it is not necessary in spark 2.0, I
[
https://issues.apache.org/jira/browse/SPARK-16013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335727#comment-15335727
]
Jeff Zhang commented on SPARK-16013:
I mean to introduce this in 1.6, as in Spark 2.0 we can disable
[
https://issues.apache.org/jira/browse/SPARK-16013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335727#comment-15335727
]
Jeff Zhang edited comment on SPARK-16013 at 6/17/16 8:53 AM:
-
I mean to
Jeff Zhang created SPARK-16013:
--
Summary: Add option to disable HiveContext in spark-shell/pyspark
Key: SPARK-16013
URL: https://issues.apache.org/jira/browse/SPARK-16013
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-15993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335218#comment-15335218
]
Jeff Zhang commented on SPARK-15993:
RuntimeConfig in the Scala API is mutable; if it doesn't work in
[
https://issues.apache.org/jira/browse/SPARK-15909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15333289#comment-15333289
]
Jeff Zhang commented on SPARK-15909:
If I remember correctly, pyspark can only run in cluster mode in
[
https://issues.apache.org/jira/browse/SPARK-15930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329074#comment-15329074
]
Jeff Zhang commented on SPARK-15930:
I see, I guess you are trying to get the total number of
[
https://issues.apache.org/jira/browse/SPARK-15930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329059#comment-15329059
]
Jeff Zhang commented on SPARK-15930:
Can't we get the count from freqItemsets in FPGrowthModel?
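A hedged sketch of that suggestion: look an itemset's frequency up from the model's frequent-itemsets list. Here `freq_itemsets` is a plain-Python stand-in for `FPGrowthModel.freqItemsets`, represented as (itemset, frequency) pairs:

```python
def itemset_count(freq_itemsets, items):
    """Return the recorded frequency of an itemset (order-insensitive),
    or 0 if the itemset was not frequent enough to be recorded."""
    target = sorted(items)
    for itemset, freq in freq_itemsets:
        if sorted(itemset) == target:
            return freq
    return 0
```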
[
https://issues.apache.org/jira/browse/SPARK-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15326977#comment-15326977
]
Jeff Zhang commented on SPARK-14503:
[~GayathriMurali] [~yuhaoyan] Are you still working on this? If
[
https://issues.apache.org/jira/browse/SPARK-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-15819:
---
Component/s: PySpark
ML
> Add KMeanSummary in KMeans of PySpark
[
https://issues.apache.org/jira/browse/SPARK-15751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang closed SPARK-15751.
--
Resolution: Won't Fix
> Add generateAssociationRules in fpm in pyspark
[
https://issues.apache.org/jira/browse/SPARK-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325515#comment-15325515
]
Jeff Zhang commented on SPARK-14501:
Working on it.
> spark.ml parity for fpm - frequent items
[
https://issues.apache.org/jira/browse/SPARK-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324323#comment-15324323
]
Jeff Zhang edited comment on SPARK-13587 at 6/10/16 12:01 PM:
--
Sorry, guys,
[
https://issues.apache.org/jira/browse/SPARK-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324323#comment-15324323
]
Jeff Zhang edited comment on SPARK-13587 at 6/10/16 11:43 AM:
--
Sorry, guys,
[
https://issues.apache.org/jira/browse/SPARK-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324323#comment-15324323
]
Jeff Zhang commented on SPARK-13587:
Sorry, guys, I have been busy with other stuff recently and am late for
Jeff Zhang created SPARK-15819:
--
Summary: Add KMeanSummary in KMeans of PySpark
Key: SPARK-15819
URL: https://issues.apache.org/jira/browse/SPARK-15819
Project: Spark
Issue Type: Improvement
Jeff Zhang created SPARK-15803:
--
Summary: Support with statement syntax for SparkSession
Key: SPARK-15803
URL: https://issues.apache.org/jira/browse/SPARK-15803
Project: Spark
Issue Type:
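A hedged sketch of what "with statement syntax" could look like: a context manager that stops the session on exit. `FakeSession` is a stand-in for SparkSession so the sketch stays self-contained; the real feature would presumably live on SparkSession itself:

```python
from contextlib import contextmanager

class FakeSession:
    """Stand-in for SparkSession: something that must be stopped."""
    def __init__(self):
        self.stopped = False

    def stop(self):
        self.stopped = True

@contextmanager
def session_scope(factory):
    """Create a session and guarantee it is stopped when the block exits."""
    session = factory()
    try:
        yield session
    finally:
        session.stop()
```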
[
https://issues.apache.org/jira/browse/SPARK-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317975#comment-15317975
]
Jeff Zhang commented on SPARK-3451:
---
+1 for this feature, or allow specifying a jar folder.
[
https://issues.apache.org/jira/browse/SPARK-15779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317959#comment-15317959
]
Jeff Zhang edited comment on SPARK-15779 at 6/7/16 6:27 AM:
Actually it is
[
https://issues.apache.org/jira/browse/SPARK-15779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317959#comment-15317959
]
Jeff Zhang commented on SPARK-15779:
You need to specify hive.execution.engine=mr in your
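The comment is truncated, so as a hedged illustration only: one place such a property can live is `hive-site.xml` (it could equally be passed per-session, e.g. via `--hiveconf`); the placement below is an assumption, not from the thread:

```xml
<!-- Sketch: force Hive to use the MapReduce execution engine. -->
<property>
  <name>hive.execution.engine</name>
  <value>mr</value>
</property>
```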