Github user scwf closed the pull request at:
https://github.com/apache/spark/pull/1385
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/1385#discussion_r15055059
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -128,25 +123,13 @@ class HadoopRDD[K, V](
// Returns a JobConf that
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/1385#discussion_r14866324
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -552,17 +552,10 @@ class SparkContext(config: SparkConf) extends Logging {
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/1385#discussion_r14866367
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -128,25 +123,13 @@ class HadoopRDD[K, V](
// Returns a JobConf that will
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1385#discussion_r14866389
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -552,17 +552,10 @@ class SparkContext(config: SparkConf) extends Logging {
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/1385#discussion_r14866988
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -206,17 +202,10 @@ class HadoopTableReader(@transient _tableDesc:
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1385#discussion_r14867102
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -206,17 +202,10 @@ class HadoopTableReader(@transient _tableDesc:
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/1385#discussion_r14868164
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -206,17 +202,10 @@ class HadoopTableReader(@transient _tableDesc:
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/1385#issuecomment-48834426
Is this related to the other conf-related concurrency issue that was fixed
recently? https://github.com/apache/spark/pull/1273
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/1385#issuecomment-48861711
@rxin and @aarondav, yeah, the master branch deadlocks; it seems the locks from
#1273 and HADOOP-10456 together lead to the problem when running a Hive self-join query:
hql(SELECT t1.a,
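The deadlock described above reads like a classic lock-ordering problem: one code path holds a per-SparkContext lock while cloning a Hadoop Configuration (which, per HADOOP-10456, takes a class-level lock), while a concurrent path nests the same two locks the other way. The sketch below is illustrative only; `confLock`, `classLock`, and `cloneJobConf` are hypothetical stand-ins, not the actual Spark or Hadoop code. It shows the avoidance principle: if every thread takes the locks in one consistent order, opposite-order nesting, and hence deadlock, cannot occur.

```scala
// Hypothetical stand-ins: `confLock` models a per-SparkContext conf lock
// (as in #1273) and `classLock` models Configuration's class-level lock
// (HADOOP-10456). Neither name exists in the real Spark/Hadoop source.
object LockOrderingDemo {
  private val confLock  = new Object
  private val classLock = new Object
  @volatile var cloned = 0

  // Deadlock requires two threads acquiring these locks in opposite orders.
  // Acquiring them in one fixed order, as here, makes that impossible.
  def cloneJobConf(): Unit = confLock.synchronized {
    classLock.synchronized {
      cloned += 1 // stands in for creating a JobConf copy under both locks
    }
  }

  def main(args: Array[String]): Unit = {
    // Two concurrent "clone" calls, mimicking a self-join reading the
    // same table from two tasks; both complete because the lock order
    // is consistent across threads.
    val threads = (1 to 2).map(_ => new Thread(() => cloneJobConf()))
    threads.foreach(_.start())
    threads.foreach(_.join())
    println(s"cloned=$cloned")
  }
}
```

A self-join is a natural trigger because both sides of the join clone a JobConf for the same underlying table concurrently, so two threads contend on the same pair of locks at the same time.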
Github user aarondav commented on a diff in the pull request:
https://github.com/apache/spark/pull/1385#discussion_r14862963
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -128,25 +123,13 @@ class HadoopRDD[K, V](
// Returns a JobConf that
Github user aarondav commented on a diff in the pull request:
https://github.com/apache/spark/pull/1385#discussion_r14862987
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -552,17 +552,10 @@ class SparkContext(config: SparkConf) extends Logging {
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1385#discussion_r14864755
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -206,17 +202,10 @@ class HadoopTableReader(@transient _tableDesc:
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1385#discussion_r14864765
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -552,17 +552,10 @@ class SparkContext(config: SparkConf) extends Logging
{