[ https://issues.apache.org/jira/browse/SPARK-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258896#comment-16258896 ]
Apache Spark commented on SPARK-9853:
-
User 'yucai' has created a pull request for this issue:
[ https://issues.apache.org/jira/browse/SPARK-16996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258894#comment-16258894 ]
Maciej Bryński commented on SPARK-16996:
[~ste...@apache.org]
I didn't replace spark-hive.jar but
[ https://issues.apache.org/jira/browse/SPARK-22541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258886#comment-16258886 ]
Apache Spark commented on SPARK-22541:
--
User 'viirya' has created a pull request for this issue:
[ https://issues.apache.org/jira/browse/SPARK-22541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Apache Spark reassigned SPARK-22541:
Assignee: Apache Spark
> Dataframes: applying multiple filters one after another using
[ https://issues.apache.org/jira/browse/SPARK-22541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Apache Spark reassigned SPARK-22541:
Assignee: (was: Apache Spark)
> Dataframes: applying multiple filters one after
[ https://issues.apache.org/jira/browse/SPARK-22559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Apache Spark reassigned SPARK-22559:
Assignee: (was: Apache Spark)
> history server: handle exception on opening corrupted
[ https://issues.apache.org/jira/browse/SPARK-22559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258881#comment-16258881 ]
Apache Spark commented on SPARK-22559:
--
User 'gengliangwang' has created a pull request for this
[ https://issues.apache.org/jira/browse/SPARK-22559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Apache Spark reassigned SPARK-22559:
Assignee: Apache Spark
> history server: handle exception on opening corrupted
[ https://issues.apache.org/jira/browse/SPARK-22541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258873#comment-16258873 ]
Liang-Chi Hsieh edited comment on SPARK-22541 at 11/20/17 7:14 AM:
---
Gengliang Wang created SPARK-22559:
--
Summary: history server: handle exception on opening corrupted
listing.ldb
Key: SPARK-22559
URL: https://issues.apache.org/jira/browse/SPARK-22559
Project: Spark
[ https://issues.apache.org/jira/browse/SPARK-22541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258873#comment-16258873 ]
Liang-Chi Hsieh commented on SPARK-22541:
-
Similar to the case of using python udfs with
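The comments in this thread concern the issue title above: applying multiple DataFrame filters one after another with Python UDFs. As a plain-Python sketch of the underlying point (no Spark required; names are illustrative), successive filters are logically equivalent to a single combined predicate, which is why how the optimizer combines them matters for performance:

```python
# Plain-Python sketch (no Spark needed): two filters applied one after
# another are equivalent to one combined predicate over the same data.
data = list(range(10))

# Filters chained one after another, as in the issue title.
chained = [x for x in data if x % 2 == 0]
chained = [x for x in chained if x > 2]

# One combined predicate, the form an optimizer may collapse them into.
combined = [x for x in data if x % 2 == 0 and x > 2]

print(chained == combined)  # True
```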
[ https://issues.apache.org/jira/browse/SPARK-22541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258868#comment-16258868 ]
Liang-Chi Hsieh edited comment on SPARK-22541 at 11/20/17 7:01 AM:
---
[ https://issues.apache.org/jira/browse/SPARK-22541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258868#comment-16258868 ]
Liang-Chi Hsieh commented on SPARK-22541:
-
Sorry, my previous reply is not completely correct.
KhajaAsmath Mohammed created SPARK-22558:
Summary: SparkHiveDynamicPartition fails when trying to write data
from kafka to hive using spark streaming
Key: SPARK-22558
URL:
[ https://issues.apache.org/jira/browse/SPARK-22554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon resolved SPARK-22554.
--
Resolution: Fixed
Fix Version/s: 2.3.0
Issue resolved by pull request 19782
[ https://issues.apache.org/jira/browse/SPARK-22554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon reassigned SPARK-22554:
Assignee: Hyukjin Kwon
> Add a config to control if PySpark should use daemon or not
>
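SPARK-22554 adds a switch for whether PySpark uses its worker daemon. Assuming the switch landed as the property `spark.python.use.daemon` (an assumption inferred from the issue title and its pull request, not stated in the fragment above), it would be toggled like any other Spark property, e.g. in a spark-defaults.conf fragment:

```properties
# Hypothetical spark-defaults.conf fragment; the property name is an
# assumption inferred from the SPARK-22554 title.
spark.python.use.daemon   false
```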
[ https://issues.apache.org/jira/browse/SPARK-22557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon resolved SPARK-22557.
--
Resolution: Fixed
Fix Version/s: 2.3.0
Issue resolved by pull request 19784
[ https://issues.apache.org/jira/browse/SPARK-22557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon reassigned SPARK-22557:
Assignee: Dongjoon Hyun
> Use ThreadSignaler explicitly
> -
>
[ https://issues.apache.org/jira/browse/SPARK-22556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258697#comment-16258697 ]
Sean Owen commented on SPARK-22556:
---
Are you saying the behavior is incorrect or undesirable? if it's
[ https://issues.apache.org/jira/browse/SPARK-22557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258690#comment-16258690 ]
Apache Spark commented on SPARK-22557:
--
User 'dongjoon-hyun' has created a pull request for this
[ https://issues.apache.org/jira/browse/SPARK-22557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Apache Spark reassigned SPARK-22557:
Assignee: (was: Apache Spark)
> Use ThreadSignaler explicitly
>
[ https://issues.apache.org/jira/browse/SPARK-22557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Apache Spark reassigned SPARK-22557:
Assignee: Apache Spark
> Use ThreadSignaler explicitly
> -
>
Dongjoon Hyun created SPARK-22557:
-
Summary: Use ThreadSignaler explicitly
Key: SPARK-22557
URL: https://issues.apache.org/jira/browse/SPARK-22557
Project: Spark
Issue Type: Bug
Thiago Rodrigues Baldim created SPARK-22556:
---
Summary: WrappedArray with Explode Function create WrappedArray
with 1 object.
Key: SPARK-22556
URL: https://issues.apache.org/jira/browse/SPARK-22556
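The report above (truncated) involves `explode` on an array-typed column. As a plain-Python sketch of explode's expected semantics (no Spark required; the data is illustrative), each element of the array becomes its own output row, with the other columns repeated:

```python
# Plain-Python model of DataFrame explode semantics: one output row per
# array element, keeping the other columns. No Spark required.
rows = [("a", [1, 2, 3]), ("b", [4])]
exploded = [(key, elem) for key, arr in rows for elem in arr]
print(exploded)  # [('a', 1), ('a', 2), ('a', 3), ('b', 4)]
```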
[ https://issues.apache.org/jira/browse/SPARK-20201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Felix Cheung updated SPARK-20201:
-
Target Version/s: 2.2.1
> Flaky Test: org.apache.spark.sql.catalyst.expressions.OrderingSuite
>
[ https://issues.apache.org/jira/browse/SPARK-22543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Felix Cheung updated SPARK-22543:
-
Target Version/s: 2.2.1, 2.3.0
> fix java 64kb compile error for deeply nested expressions
>
[ https://issues.apache.org/jira/browse/SPARK-22495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Felix Cheung updated SPARK-22495:
-
Target Version/s: 2.2.1, 2.3.0
> Fix setup of SPARK_HOME variable on Windows
>
[ https://issues.apache.org/jira/browse/SPARK-21322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258601#comment-16258601 ]
Ron Hu commented on SPARK-21322:
Pull request 19357 was created while there were several dependencies
[ https://issues.apache.org/jira/browse/SPARK-21322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258596#comment-16258596 ]
Apache Spark commented on SPARK-21322:
--
User 'ron8hu' has created a pull request for this issue:
Andrew Crosby created SPARK-22555:
-
Summary: Possibly incorrect scaling of L2 regularization strength
in LinearRegression
Key: SPARK-22555
URL: https://issues.apache.org/jira/browse/SPARK-22555
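For reference when reading SPARK-22555: the standard ridge (L2-regularized least-squares) objective has the form below; how the strength λ interacts with per-feature standardization is precisely what the report questions. The notation is a generic sketch, not taken from the Spark source:

```latex
\min_{w} \; \frac{1}{2n} \sum_{i=1}^{n} \left( y_i - w^{\top} x_i \right)^2
         + \frac{\lambda}{2} \, \lVert w \rVert_2^2
```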
[ https://issues.apache.org/jira/browse/SPARK-19476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258487#comment-16258487 ]
Sean Owen commented on SPARK-19476:
---
Ok. These limitations are from your app though (no batching, high
[ https://issues.apache.org/jira/browse/SPARK-19476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258485#comment-16258485 ]
Gal Topper commented on SPARK-19476:
The DB supports concurrent requests, but not batching. Meaning
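The pattern this thread debates — issuing concurrent, non-batched requests from within a single partition rather than adding partitions — can be sketched in plain Python. `fake_request` is a stand-in for one DB round-trip, not a Spark or database API:

```python
from concurrent.futures import ThreadPoolExecutor

def process_partition(records, max_workers=4):
    """Sketch of issuing concurrent, non-batched requests from within
    one partition, as debated in SPARK-19476. fake_request stands in
    for a real per-record call to an external database."""
    def fake_request(record):
        return record * 2  # placeholder for one round-trip to the DB
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map preserves input order even though requests run concurrently
        return list(pool.map(fake_request, records))

print(process_partition([1, 2, 3]))  # [2, 4, 6]
```

Sean Owen's counterpoint in the thread is that simply repartitioning to more, smaller partitions often achieves the same concurrency without per-partition thread management.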
[ https://issues.apache.org/jira/browse/SPARK-19476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258482#comment-16258482 ]
Sean Owen commented on SPARK-19476:
---
But if the DB doesn't like more than 1 concurrent request how do
[ https://issues.apache.org/jira/browse/SPARK-22393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258476#comment-16258476 ]
Mark Petruska commented on SPARK-22393:
---
Trace of the 2.11 version:
{code}
...
parse("
class
[ https://issues.apache.org/jira/browse/SPARK-19476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258472#comment-16258472 ]
Gal Topper commented on SPARK-19476:
> why not more partitions?
Because the overhead of 1 slot (or
[ https://issues.apache.org/jira/browse/SPARK-22393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258469#comment-16258469 ]
Mark Petruska commented on SPARK-22393:
---
The difference between scala repls 2.11 and 2.12 is seen
[ https://issues.apache.org/jira/browse/SPARK-22393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258468#comment-16258468 ]
Mark Petruska commented on SPARK-22393:
---
With the 2.12 build:
{code}
import
[ https://issues.apache.org/jira/browse/SPARK-19476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen resolved SPARK-19476.
---
Resolution: Not A Problem
> Running threads in Spark DataFrame foreachPartition() causes
>
[ https://issues.apache.org/jira/browse/SPARK-19476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258467#comment-16258467 ]
Sean Owen commented on SPARK-19476:
---
Then it's just back to the question: why not more partitions? Why
[ https://issues.apache.org/jira/browse/SPARK-22393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258441#comment-16258441 ]
Mark Petruska commented on SPARK-22393:
---
Tested with spark-shell build 2.11:
{code}
import