[
https://issues.apache.org/jira/browse/SPARK-1424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Michael Armbrust resolved SPARK-1424.
-
Resolution: Fixed
InsertInto should work on JavaSchemaRDD as well.
Saisai Shao created SPARK-1510:
--
Summary: Add Spark Streaming metrics source for metrics system
Key: SPARK-1510
URL: https://issues.apache.org/jira/browse/SPARK-1510
Project: Spark
Issue Type:
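SPARK-1510 asks for a Spark Streaming source for Spark's metrics system, which is built on the Codahale/Dropwizard metrics library: a named "source" exposes gauges that the metrics system polls. As a hedged sketch only, the trait and class below are simplified stand-ins for that shape, not Spark's actual `Source` API; all names here are illustrative.

```scala
// Simplified stand-in for a metrics "source": a name plus a set of
// gauges, each a no-arg function the metrics system can poll.
trait Source {
  def sourceName: String
  def gauges: Map[String, () => Long]
}

// A hypothetical streaming source reporting one counter-style gauge,
// fed by a callback into the running streaming job.
class StreamingSource(receivedRecords: () => Long) extends Source {
  val sourceName = "streaming"
  val gauges = Map("receivedRecords" -> receivedRecords)
}
```

The callback style keeps the source passive: the job updates its own counters, and the metrics system reads them on its own schedule.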
[
https://issues.apache.org/jira/browse/SPARK-1175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13970641#comment-13970641
]
Tal Sliwowicz commented on SPARK-1175:
--
This prevents us from having a real automated
[
https://issues.apache.org/jira/browse/SPARK-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ye Xianjin updated SPARK-1511:
--
Affects Version/s: 0.8.1
0.9.0
Update TestUtils.createCompiledClass() API to
[
https://issues.apache.org/jira/browse/SPARK-1399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13971415#comment-13971415
]
Lianhui Wang commented on SPARK-1399:
-
I think the user-defined accumulators of every
[
https://issues.apache.org/jira/browse/SPARK-1175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13971520#comment-13971520
]
Tal Sliwowicz commented on SPARK-1175:
--
Yes, on shutting down a long-running job,
wangfei created SPARK-1512:
--
Summary: improve spark sql to support table with more than 22 fields
Key: SPARK-1512
URL: https://issues.apache.org/jira/browse/SPARK-1512
Project: Spark
Issue Type:
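The 22-field ceiling behind SPARK-1512 comes from the host language: in Scala 2.x, tuples (and, in Scala 2.10, case classes) are capped at 22 fields, so schema inference that maps a table row onto a tuple or case class cannot describe wider tables. The sketch below only illustrates that limitation and one workaround; the names are illustrative, not Spark's API.

```scala
object WideRowSketch {
  // Tuple22 is the widest tuple Scala 2 provides; a 23-element tuple
  // literal would not compile.
  val row22 = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
               12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22)

  // One way around the cap: keep the values in a Seq and carry the
  // column names separately, instead of encoding them in a product type.
  val wideRow: Seq[Int] = (1 to 30).toVector
}
```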
[
https://issues.apache.org/jira/browse/SPARK-1277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-1277.
Resolution: Won't Fix
Subsumed by other configuration patches.
Automatically set the UI
[
https://issues.apache.org/jira/browse/SPARK-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-1497.
Resolution: Fixed
Assignee: Sean Owen
Spark YARN code isn't checked with Scalastyle
Cheng Lian created SPARK-1513:
-
Summary: Specialized ColumnType for Timestamp
Key: SPARK-1513
URL: https://issues.apache.org/jira/browse/SPARK-1513
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-1514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-1514:
---
Summary: Standardize process for creating Spark packages (was: Standardize
way to create Spark releases)
Patrick Wendell created SPARK-1514:
--
Summary: Standardize way to create Spark releases
Key: SPARK-1514
URL: https://issues.apache.org/jira/browse/SPARK-1514
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-1469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-1469.
Resolution: Fixed
Fix Version/s: (was: 1.1.0)
1.0.0
[
https://issues.apache.org/jira/browse/SPARK-1439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13971662#comment-13971662
]
Sean Owen commented on SPARK-1439:
--
I had a run at this today. First I tried Maven-based
[
https://issues.apache.org/jira/browse/SPARK-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13971671#comment-13971671
]
haosdent commented on SPARK-1496:
-
Should it return Option[Seq[String]]? Maybe I could
DB Tsai created SPARK-1516:
--
Summary: Yarn Client should not call System.exit, should throw exception instead.
Key: SPARK-1516
URL: https://issues.apache.org/jira/browse/SPARK-1516
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-1516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
DB Tsai updated SPARK-1516:
---
Assignee: (was: DB Tsai)
Yarn Client should not call System.exit, should throw exception instead.
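The motivation in SPARK-1516 is that `System.exit` tears down the entire JVM, killing any program that embeds the client; an exception lets the caller decide how to react. A minimal sketch of that change, assuming illustrative names (`ClientException`, `SketchClient.submit` are not Spark's actual API):

```scala
// Signal failure with an exception the caller can catch, instead of
// terminating the whole JVM from library code.
class ClientException(msg: String) extends RuntimeException(msg)

object SketchClient {
  def submit(args: Array[String]): String = {
    if (args.isEmpty) {
      // Before: System.exit(1) here would kill the embedding process.
      throw new ClientException("no application jar supplied")
    }
    s"submitted: ${args.mkString(" ")}"
  }
}
```

An embedding application can now wrap `submit` in a try/catch (or `scala.util.Try`) and recover, retry, or report the failure on its own terms.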
Patrick Wendell created SPARK-1517:
--
Summary: Publish nightly snapshots of documentation, maven artifacts, and binary builds
Key: SPARK-1517
URL: https://issues.apache.org/jira/browse/SPARK-1517
[
https://issues.apache.org/jira/browse/SPARK-1465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Thomas Graves resolved SPARK-1465.
--
Resolution: Fixed
Spark compilation is broken with the latest hadoop-2.4.0 release
Marcelo Vanzin created SPARK-1518:
-
Summary: Spark master doesn't compile against hadoop-common trunk
Key: SPARK-1518
URL: https://issues.apache.org/jira/browse/SPARK-1518
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Nan Zhu updated SPARK-1519:
---
Description:
Though the Scala implementation provides the minPartitions parameter in
wholeTextFiles, PySpark
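The ask in SPARK-1519 is a pass-through: the Scala API takes minPartitions as an optional second argument, and the PySpark wrapper should forward the same parameter. As an illustrative sketch only (the object, return type, and default value below are assumptions, not Spark's implementation):

```scala
// Sketch of an optional-parameter API of the kind PySpark would forward to.
object WholeTextFilesSketch {
  val defaultMinPartitions = 2 // assumed default, for illustration only

  // Echoes back the path and partition count a real call would use.
  def wholeTextFiles(path: String,
                     minPartitions: Int = defaultMinPartitions): (String, Int) =
    (path, minPartitions)
}
```

A wrapper that omits the second argument silently pins callers to the default, which is exactly the gap the issue describes.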
[
https://issues.apache.org/jira/browse/SPARK-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13971961#comment-13971961
]
Nan Zhu commented on SPARK-1483:
made the PR: https://github.com/apache/spark/pull/430
[
https://issues.apache.org/jira/browse/SPARK-1462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Matei Zaharia resolved SPARK-1462.
--
Resolution: Fixed
Fix Version/s: 1.0.0
Examples of ML algorithms are using deprecated
[
https://issues.apache.org/jira/browse/SPARK-1475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972225#comment-13972225
]
Kan Zhang commented on SPARK-1475:
--
A second PR that fixes the unit test introduced