[
https://issues.apache.org/jira/browse/SPARK-17139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15442542#comment-15442542
]
Weichen Xu commented on SPARK-17139:
Because the LOR & MLOR interfaces need to be unified, I will create
[
https://issues.apache.org/jira/browse/SPARK-17138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-17138:
Assignee: (was: Apache Spark)
> Python API for multinomial logistic regression
>
[
https://issues.apache.org/jira/browse/SPARK-17138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15442537#comment-15442537
]
Apache Spark commented on SPARK-17138:
--
User 'WeichenXu123' has created a pull request for this
[
https://issues.apache.org/jira/browse/SPARK-17138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-17138:
Assignee: Apache Spark
> Python API for multinomial logistic regression
>
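The entries above track a Python API for multinomial logistic regression (SPARK-17138). As an illustrative sketch only, not Spark's implementation, the multinomial prediction rule is a softmax over per-class linear scores; every name below is hypothetical:

```python
import math

def softmax(scores):
    # Numerically stable softmax: shift by the max, exponentiate, normalize.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict(weights, intercepts, features):
    # Multinomial prediction: one linear score per class, then argmax of
    # the softmax probabilities. weights holds one coefficient vector per
    # class, intercepts one bias per class.
    scores = [sum(w * x for w, x in zip(wv, features)) + b
              for wv, b in zip(weights, intercepts)]
    probs = softmax(scores)
    return max(range(len(probs)), key=probs.__getitem__), probs
```

The Python API wrapped in the linked pull request exposes this model family through pyspark.ml; the sketch only shows the prediction math.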
[
https://issues.apache.org/jira/browse/SPARK-17264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15442475#comment-15442475
]
Hyukjin Kwon commented on SPARK-17264:
--
Is this a duplicate of SPARK-15472?
> DataStreamWriter does
[
https://issues.apache.org/jira/browse/SPARK-10834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Josh Rosen resolved SPARK-10834.
Resolution: Fixed
As of (at least) Spark 2.0 we now support INSERT INTO ... VALUES, so I'm going
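The resolution above notes that Spark 2.0 supports the standard `INSERT INTO ... VALUES` form. The surface syntax is common SQL, illustrated here with Python's stdlib sqlite3 rather than Spark itself; the table and columns are made up for the example:

```python
import sqlite3

# Standard-SQL multi-row INSERT INTO ... VALUES, demonstrated on an
# in-memory SQLite database. Spark SQL accepts the same statement shape.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE points (id INTEGER, label TEXT)")
conn.execute("INSERT INTO points VALUES (1, 'a'), (2, 'b')")
rows = conn.execute("SELECT id, label FROM points ORDER BY id").fetchall()
```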
[
https://issues.apache.org/jira/browse/SPARK-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Josh Rosen resolved SPARK-11299.
Resolution: Fixed
This was fixed by my PR in 2015.
> SQL Programming Guide's link to DataFrame
[
https://issues.apache.org/jira/browse/SPARK-17275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15442059#comment-15442059
]
Shivaram Venkataraman commented on SPARK-17275:
---
I don't think anything changed there
[
https://issues.apache.org/jira/browse/SPARK-17281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15441999#comment-15441999
]
Apache Spark commented on SPARK-17281:
--
User 'WeichenXu123' has created a pull request for this
[
https://issues.apache.org/jira/browse/SPARK-17281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-17281:
Assignee: (was: Apache Spark)
> Add treeAggregateDepth parameter for
[
https://issues.apache.org/jira/browse/SPARK-17281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-17281:
Assignee: Apache Spark
> Add treeAggregateDepth parameter for AFTSurvivalRegression
>
Weichen Xu created SPARK-17281:
--
Summary: Add treeAggregateDepth parameter for AFTSurvivalRegression
Key: SPARK-17281
URL: https://issues.apache.org/jira/browse/SPARK-17281
Project: Spark
Issue
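SPARK-17281 proposes exposing a treeAggregateDepth parameter. As a rough pure-Python sketch of the idea behind tree aggregation (not Spark's RDD.treeAggregate, whose grouping strategy differs): partials are merged in rounds so the final combine touches fewer results at once.

```python
from functools import reduce

def tree_aggregate(partitions, seq_op, comb_op, zero, depth=2):
    # Aggregate each partition locally, then merge the partial results
    # in up to (depth - 1) pairwise rounds before the final combine.
    # A deeper tree means the last step merges fewer partials, which is
    # the motivation for exposing depth as a tuning parameter.
    partials = [reduce(seq_op, part, zero) for part in partitions]
    while depth > 1 and len(partials) > 1:
        # One tree level: merge adjacent pairs of partials.
        partials = [reduce(comb_op, partials[i:i + 2])
                    for i in range(0, len(partials), 2)]
        depth -= 1
    return reduce(comb_op, partials)
```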
[
https://issues.apache.org/jira/browse/SPARK-17280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15441981#comment-15441981
]
Cody Koeninger commented on SPARK-17280:
I can take a look but there's not a lot to go on.
>
[
https://issues.apache.org/jira/browse/SPARK-17280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yin Huai updated SPARK-17280:
-
Summary: Flaky test: org.apache.spark.streaming.kafka010.JavaKafkaRDDSuite
and
[
https://issues.apache.org/jira/browse/SPARK-17280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yin Huai updated SPARK-17280:
-
Description:
https://spark-tests.appspot.com/builds/spark-master-test-maven-hadoop-2.2/1793
[
https://issues.apache.org/jira/browse/SPARK-17280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15441885#comment-15441885
]
Yin Huai commented on SPARK-17280:
--
[~c...@koeninger.org] Will you have time to take a look at these two
Yin Huai created SPARK-17280:
Summary: Flaky test:
org.apache.spark.streaming.kafka010.JavaKafkaRDDSuite
Key: SPARK-17280
URL: https://issues.apache.org/jira/browse/SPARK-17280
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15441782#comment-15441782
]
Darren Fu commented on SPARK-12394:
---
Good news, Tejas!
I think SMB join is a required feature to make
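The comment above refers to sort-merge-bucket (SMB) joins. As a minimal sketch of the merge phase only (bucketing and spill handling omitted, unique keys per side assumed for brevity), two inputs already sorted by join key can be joined in a single linear pass:

```python
def sort_merge_join(left, right):
    # Merge two (key, value) lists already sorted by key, emitting
    # (key, left_value, right_value) for keys present on both sides.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, lv = left[i]
        rk, rv = right[j]
        if lk == rk:
            out.append((lk, lv, rv))
            i += 1
            j += 1
        elif lk < rk:
            i += 1  # left key has no match; advance left cursor
        else:
            j += 1  # right key has no match; advance right cursor
    return out
```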
[
https://issues.apache.org/jira/browse/SPARK-16957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15441640#comment-15441640
]
Abdeali Kothari commented on SPARK-16957:
-
Hi, I'd like to begin contributing, and this seems
[
https://issues.apache.org/jira/browse/SPARK-13525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15441107#comment-15441107
]
Arihanth Jain edited comment on SPARK-13525 at 8/27/16 1:58 PM:
I checked
[
https://issues.apache.org/jira/browse/SPARK-13525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15441107#comment-15441107
]
Arihanth Jain commented on SPARK-13525:
---
I checked for localhost and it works. The spark cluster
[
https://issues.apache.org/jira/browse/SPARK-9066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong closed SPARK-9066.
---
Resolution: Fixed
> Improve cartesian performance
> --
>
> Key:
[
https://issues.apache.org/jira/browse/SPARK-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong closed SPARK-13768.
Resolution: Fixed
> Set hive conf failed use --hiveconf when beeline connect to thriftserver
>
[
https://issues.apache.org/jira/browse/SPARK-17279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-17279:
Assignee: Apache Spark (was: Wenchen Fan)
> better error message for NPE during ScalaUDF
[
https://issues.apache.org/jira/browse/SPARK-17279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-17279:
Assignee: Wenchen Fan (was: Apache Spark)
> better error message for NPE during ScalaUDF
[
https://issues.apache.org/jira/browse/SPARK-17279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15441008#comment-15441008
]
Apache Spark commented on SPARK-17279:
--
User 'cloud-fan' has created a pull request for this issue:
Wenchen Fan created SPARK-17279:
---
Summary: better error message for NPE during ScalaUDF execution
Key: SPARK-17279
URL: https://issues.apache.org/jira/browse/SPARK-17279
Project: Spark
Issue
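SPARK-17279 asks for a better error message when a user-defined function hits an NPE during execution. A hedged sketch of the general technique (attach the offending input to the re-raised error); the wrapper name and message are invented for illustration and are not Spark's API:

```python
def with_input_context(fn):
    # Wrap a UDF-like function so that any failure is re-raised with the
    # input that triggered it, while chaining the original exception.
    def wrapped(*args):
        try:
            return fn(*args)
        except Exception as exc:
            raise RuntimeError(
                f"Failed to execute user function on input {args!r}") from exc
    return wrapped
```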
Wenchen Fan created SPARK-17278:
---
Summary: better error message for NPE during ScalaUDF execution
Key: SPARK-17278
URL: https://issues.apache.org/jira/browse/SPARK-17278
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-17277:
-
Description:
Now we can't use "SET k=v" to set Hive conf, for example: run below SQL in
spark-sql
[
https://issues.apache.org/jira/browse/SPARK-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-17277:
-
Description:
Now we can't use "SET k=v" to set Hive conf, for example: run below SQL in
spark-sql
[
https://issues.apache.org/jira/browse/SPARK-15044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen resolved SPARK-15044.
---
Resolution: Not A Problem
> spark-sql will throw "input path does not exist" exception if it handles
Weizhong created SPARK-17277:
Summary: Set hive conf failed
Key: SPARK-17277
URL: https://issues.apache.org/jira/browse/SPARK-17277
Project: Spark
Issue Type: Bug
Components: SQL
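The truncated description for SPARK-17277 concerns the `SET k=v` command failing to set Hive configuration from spark-sql. For context only, this is the general shape of the command in a spark-sql session; the property shown is an ordinary Hive conf picked for illustration, not necessarily the one from the report:

```sql
-- Set a Hive conf from the spark-sql shell:
SET hive.exec.dynamic.partition.mode=nonstrict;
-- Read a property back:
SET hive.exec.dynamic.partition.mode;
```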
[
https://issues.apache.org/jira/browse/SPARK-17236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen resolved SPARK-17236.
---
Resolution: Not A Problem
I'm not sure how to resolve it, but if it's more an HBase issue, ask the
[
https://issues.apache.org/jira/browse/SPARK-17214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15440966#comment-15440966
]
Sean Owen commented on SPARK-17214:
---
I think the issue is that the 'underlying' dataframe hasn't
[
https://issues.apache.org/jira/browse/SPARK-17143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen resolved SPARK-17143.
---
Resolution: Not A Problem
> pyspark unable to create UDF: java.lang.RuntimeException:
>
[
https://issues.apache.org/jira/browse/SPARK-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen resolved SPARK-17001.
---
Resolution: Fixed
Fix Version/s: 2.1.0
Issue resolved by pull request 14663
[
https://issues.apache.org/jira/browse/SPARK-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen reassigned SPARK-17001:
-
Assignee: Sean Owen
> Enable standardScaler to standardize sparse vectors when withMean=True
>
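SPARK-17001 enables StandardScaler to handle sparse vectors with withMean=True. The reason this was previously disallowed: subtracting the mean turns zero entries nonzero, densifying the vector. A minimal per-column sketch of the standardization itself (population variance, names hypothetical), not Spark's StandardScaler:

```python
def standardize(column, with_mean=True):
    # Standardize one feature column: optionally subtract the mean,
    # then divide by the (population) standard deviation.
    n = len(column)
    mean = sum(column) / n
    var = sum((x - mean) ** 2 for x in column) / n
    std = var ** 0.5 or 1.0  # guard against a zero-variance column
    shift = mean if with_mean else 0.0
    return [(x - shift) / std for x in column]
```

Note how with_mean=True maps the zeros of a mostly-sparse column to a nonzero constant, which is exactly the densification cost the issue accepts.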
[
https://issues.apache.org/jira/browse/SPARK-17216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen updated SPARK-17216:
--
Assignee: Robert Kruszewski
Component/s: Web UI
> Even timeline for a stage doesn't cover 100%
[
https://issues.apache.org/jira/browse/SPARK-17216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen resolved SPARK-17216.
---
Resolution: Fixed
Fix Version/s: 2.1.0
2.0.1
Issue resolved by pull
[
https://issues.apache.org/jira/browse/SPARK-15382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen updated SPARK-15382:
--
Assignee: Takeshi Yamamuro
> monotonicallyIncreasingId doesn't work when data is upsampled
>
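For context on SPARK-15382: monotonically_increasing_id builds each ID from the partition index in the upper bits and the row position within the partition in the lower 33 bits, so IDs are monotonic and unique but not consecutive, and they depend on how rows land in partitions (which operations like upsampling can change). A pure-Python sketch of that documented scheme:

```python
def monotonic_ids(partition_sizes):
    # Sketch of Spark's monotonically increasing ID scheme:
    # (partition index << 33) + row position within the partition.
    # partition_sizes lists the row count of each partition in order.
    ids = []
    for part, size in enumerate(partition_sizes):
        ids.extend((part << 33) + row for row in range(size))
    return ids
```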
[
https://issues.apache.org/jira/browse/SPARK-17274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Reynold Xin resolved SPARK-17274.
-
Resolution: Fixed
Fix Version/s: 2.1.0
2.0.1
> Move join optimizer
[
https://issues.apache.org/jira/browse/SPARK-17273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Reynold Xin resolved SPARK-17273.
-
Resolution: Fixed
Fix Version/s: 2.1.0
> Move expression optimizer rules into a separate
[
https://issues.apache.org/jira/browse/SPARK-17272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Reynold Xin resolved SPARK-17272.
-
Resolution: Fixed
Fix Version/s: 2.1.0
> Move subquery optimizer rules into its own file
[
https://issues.apache.org/jira/browse/SPARK-17270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Reynold Xin updated SPARK-17270:
Fix Version/s: 2.0.1
> Move object optimization rules into its own file
>
[
https://issues.apache.org/jira/browse/SPARK-17110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tomer Kaftan updated SPARK-17110:
-
Environment: Cluster of 2 AWS r3.xlarge slaves launched via ec2 scripts,
Spark 2.0.0, hadoop:
[
https://issues.apache.org/jira/browse/SPARK-17110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15440817#comment-15440817
]
Tomer Kaftan commented on SPARK-17110:
--
Hi Miao,
That setup wouldn't cause this bug to appear
[
https://issues.apache.org/jira/browse/SPARK-17276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15440812#comment-15440812
]
Apache Spark commented on SPARK-17276:
--
User 'keypointt' has created a pull request for this issue:
[
https://issues.apache.org/jira/browse/SPARK-17276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-17276:
Assignee: (was: Apache Spark)
> Stop environment parameters flooding Jenkins build
[
https://issues.apache.org/jira/browse/SPARK-17276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-17276:
Assignee: Apache Spark
> Stop environment parameters flooding Jenkins build output
>
[
https://issues.apache.org/jira/browse/SPARK-17254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-17254:
Assignee: Apache Spark
> Filter operator should have “stop if false” semantics for sorted
[
https://issues.apache.org/jira/browse/SPARK-17254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15440806#comment-15440806
]
Apache Spark commented on SPARK-17254:
--
User 'viirya' has created a pull request for this issue:
[
https://issues.apache.org/jira/browse/SPARK-17254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-17254:
Assignee: (was: Apache Spark)
> Filter operator should have “stop if false” semantics
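The "stop if false" semantics proposed in SPARK-17254 can be illustrated with itertools.takewhile: when the input is sorted on the filtered column, a range predicate can stop scanning at the first failing row instead of testing every row. This is only an analogy for the proposed operator behavior, not Spark code:

```python
from itertools import takewhile

rows = [3, 17, 42, 99, 120, 500]  # input sorted on the filtered column

# Plain filter semantics: every row is examined.
scan_all = [x for x in rows if x < 100]

# "Stop if false" semantics: scanning ends at the first row >= 100.
stop_early = list(takewhile(lambda x: x < 100, rows))
```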
[
https://issues.apache.org/jira/browse/SPARK-17110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tomer Kaftan updated SPARK-17110:
-
Description:
In Pyspark 2.0.0, any task that accesses cached data non-locally throws a