[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-08-01 Thread mateiz
Github user mateiz commented on the pull request:

https://github.com/apache/spark/pull/1578#issuecomment-50855127
  
Thanks for the changes! I've merged this in.




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-08-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/1578




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-31 Thread mateiz
Github user mateiz commented on a diff in the pull request:

https://github.com/apache/spark/pull/1578#discussion_r15628876
  
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockFetcherIterator.scala ---
@@ -199,15 +199,22 @@ object BlockFetcherIterator {
       // Get the local blocks while remote blocks are being fetched. Note that it's okay to do
       // these all at once because they will just memory-map some files, so they won't consume
       // any memory that might exceed our maxBytesInFlight
-      for (id <- localBlocksToFetch) {
-        getLocalFromDisk(id, serializer) match {
-          case Some(iter) => {
-            // Pass 0 as size since it's not in flight
-            results.put(new FetchResult(id, 0, () => iter))
-            logDebug("Got local block " + id)
-          }
-          case None => {
-            throw new BlockException(id, "Could not get block " + id + " from local machine")
+      var fetchIndex = 0
+      try {
+        for (id <- localBlocksToFetch) {
+
+          // getLocalFromDisk never return None but throws BlockException
+          val iter = getLocalFromDisk(id, serializer).get
+          // Pass 0 as size since it's not in flight
+          results.put(new FetchResult(id, 0, () => iter))
+          fetchIndex += 1
+          logDebug("Got local block " + id)
+        }
+      } catch {
+        case e: Exception => {
+          logError(s"Error occurred while fetching local blocks", e)
+          for (id <- localBlocksToFetch.drop(fetchIndex)) {
+            results.put(new FetchResult(id, -1, null))
--- End diff --

I thought next() would return a failure block, and then the caller of 
BlockFetcherIterator would just stop. Did you see it not doing that? I think all 
you have to do is put *one* FetchResult with size = -1 in the queue and return, 
and everything will be fine.
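For context, a minimal sketch (not code from this PR) of the caller-side behavior described above: a failed result comes back from next() as `(blockId, None)`, and the first such pair is enough to stop the consumer. The `FailFastConsumer` object and the use of a plain `SparkException` in place of the real `FetchFailedException` are illustrative assumptions.

```scala
import org.apache.spark.SparkException
import org.apache.spark.storage.BlockId

// Hypothetical consumer of BlockFetcherIterator results: the first failed
// fetch, surfaced as (blockId, None), aborts the whole iteration, so a single
// failed FetchResult with size = -1 in the queue is enough to stop the caller.
object FailFastConsumer {
  def consume(itr: Iterator[(BlockId, Option[Iterator[Any]])]): Iterator[Any] =
    itr.flatMap {
      case (_, Some(blockData)) => blockData
      case (blockId, None) =>
        // In Spark this is where a FetchFailedException would be raised;
        // a plain SparkException stands in for it in this sketch.
        throw new SparkException("Failed to fetch block " + blockId)
    }
}
```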




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-31 Thread sarutak
Github user sarutak commented on a diff in the pull request:

https://github.com/apache/spark/pull/1578#discussion_r15646280
  
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockFetcherIterator.scala ---
@@ -199,15 +199,22 @@ object BlockFetcherIterator {
       // Get the local blocks while remote blocks are being fetched. Note that it's okay to do
       // these all at once because they will just memory-map some files, so they won't consume
       // any memory that might exceed our maxBytesInFlight
-      for (id <- localBlocksToFetch) {
-        getLocalFromDisk(id, serializer) match {
-          case Some(iter) => {
-            // Pass 0 as size since it's not in flight
-            results.put(new FetchResult(id, 0, () => iter))
-            logDebug("Got local block " + id)
-          }
-          case None => {
-            throw new BlockException(id, "Could not get block " + id + " from local machine")
+      var fetchIndex = 0
+      try {
+        for (id <- localBlocksToFetch) {
+
+          // getLocalFromDisk never return None but throws BlockException
+          val iter = getLocalFromDisk(id, serializer).get
+          // Pass 0 as size since it's not in flight
+          results.put(new FetchResult(id, 0, () => iter))
+          fetchIndex += 1
+          logDebug("Got local block " + id)
+        }
+      } catch {
+        case e: Exception => {
+          logError(s"Error occurred while fetching local blocks", e)
+          for (id <- localBlocksToFetch.drop(fetchIndex)) {
+            results.put(new FetchResult(id, -1, null))
--- End diff --

You're right; in the current usage of BlockFetcherIterator, next() is not invoked 
after a FetchFailedException has been thrown.
Still, I think it's a bit of a problem that next() can be invoked after a 
FetchFailedException, even if no current caller does so.
I think it's better to prohibit invoking next() after a FetchFailedException, 
to clearly express the correct usage of the method.
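A standalone sketch of that idea (hypothetical names, not code from this PR): a `fetchFailed` flag set when a failed result is seen, after which both hasNext and next() refuse to continue.

```scala
import java.util.concurrent.LinkedBlockingQueue

// Simplified stand-in for the FetchResult in BlockFetcherIterator:
// size == -1 marks a failed fetch.
case class FetchResult(blockId: String, size: Long, deserialize: () => Iterator[Any]) {
  def failed: Boolean = size == -1
}

// Sketch of a "fail fast, then refuse further next() calls" iterator.
class GuardedFetchIterator(results: LinkedBlockingQueue[FetchResult], totalBlocks: Int)
  extends Iterator[(String, Option[Iterator[Any]])] {

  private var resultsGotten = 0
  private var fetchFailed = false

  override def hasNext: Boolean = !fetchFailed && resultsGotten < totalBlocks

  override def next(): (String, Option[Iterator[Any]]) = {
    if (fetchFailed) {
      throw new IllegalStateException("next() called after a fetch failure")
    }
    resultsGotten += 1
    val result = results.take()
    if (result.failed) {
      fetchFailed = true // stop handing out further results
      (result.blockId, None)
    } else {
      (result.blockId, Some(result.deserialize()))
    }
  }
}
```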




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-31 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1578#issuecomment-50838402
  
QA tests have started for PR 1578. This patch merges cleanly.
View progress: 
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17618/consoleFull




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-31 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1578#issuecomment-50841362
  
QA results for PR 1578:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds no public classes

For more information see test output:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17618/consoleFull




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-30 Thread mateiz
Github user mateiz commented on a diff in the pull request:

https://github.com/apache/spark/pull/1578#discussion_r15615048
  
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockFetcherIterator.scala ---
@@ -199,15 +199,22 @@ object BlockFetcherIterator {
       // Get the local blocks while remote blocks are being fetched. Note that it's okay to do
       // these all at once because they will just memory-map some files, so they won't consume
       // any memory that might exceed our maxBytesInFlight
-      for (id <- localBlocksToFetch) {
-        getLocalFromDisk(id, serializer) match {
-          case Some(iter) => {
-            // Pass 0 as size since it's not in flight
-            results.put(new FetchResult(id, 0, () => iter))
-            logDebug("Got local block " + id)
-          }
-          case None => {
-            throw new BlockException(id, "Could not get block " + id + " from local machine")
+      var fetchIndex = 0
+      try {
+        for (id <- localBlocksToFetch) {
+
+          // getLocalFromDisk never return None but throws BlockException
+          val iter = getLocalFromDisk(id, serializer).get
+          // Pass 0 as size since it's not in flight
+          results.put(new FetchResult(id, 0, () => iter))
+          fetchIndex += 1
+          logDebug("Got local block " + id)
+        }
+      } catch {
+        case e: Exception => {
+          logError(s"Error occurred while fetching local blocks", e)
+          for (id <- localBlocksToFetch.drop(fetchIndex)) {
+            results.put(new FetchResult(id, -1, null))
--- End diff --

I wouldn't do drop and such on a ConcurrentQueue, since it might drop stuff 
other threads were adding. Just do a results.put on the failed block and don't 
worry about dropping other ones. You can actually move the try/catch into the 
for loop and add a return at the bottom of the catch after adding this 
failing FetchResult.
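Roughly, the shape being suggested looks like this — a method-level sketch against the names in the diff above (localBlocksToFetch, getLocalFromDisk, results, FetchResult), not the final merged code:

```scala
// Sketch: try/catch inside the loop; on the first failure, enqueue a single
// failed FetchResult (size = -1) and return immediately.
protected def getLocalBlocks() {
  for (id <- localBlocksToFetch) {
    try {
      // getLocalFromDisk never returns None but throws BlockException
      val iter = getLocalFromDisk(id, serializer).get
      // Pass 0 as size since it's not in flight
      results.put(new FetchResult(id, 0, () => iter))
      logDebug("Got local block " + id)
    } catch {
      case e: Exception =>
        logError("Error occurred while fetching local blocks", e)
        results.put(new FetchResult(id, -1, null))
        return
    }
  }
}
```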




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-30 Thread mateiz
Github user mateiz commented on a diff in the pull request:

https://github.com/apache/spark/pull/1578#discussion_r15615178
  
--- Diff: core/src/test/scala/org/apache/spark/storage/BlockFetcherIteratorSuite.scala ---
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.storage
+
+import org.scalatest.{FunSuite, Matchers}
+import org.scalatest.PrivateMethodTester._
+
+import org.mockito.Mockito._
+import org.mockito.Matchers.{any, eq => meq}
+import org.mockito.stubbing.Answer
+import org.mockito.invocation.InvocationOnMock
+
+import org.apache.spark._
+import org.apache.spark.storage.BlockFetcherIterator._
+import org.apache.spark.network.{ConnectionManager, ConnectionManagerId, Message}
+
+class BlockFetcherIteratorSuite extends FunSuite with Matchers {
+
+  test("block fetch from local fails using BasicBlockFetcherIterator") {
+    val blockManager = mock(classOf[BlockManager])
+    val connManager = mock(classOf[ConnectionManager])
+    doReturn(connManager).when(blockManager).connectionManager
+    doReturn(BlockManagerId("test-client", "test-client", 1, 0)).when(blockManager).blockManagerId
+
+    doReturn((48 * 1024 * 1024).asInstanceOf[Long]).when(blockManager).maxBytesInFlight
+
+    val blIds = Array[BlockId](
+      ShuffleBlockId(0,0,0),
+      ShuffleBlockId(0,1,0),
+      ShuffleBlockId(0,2,0),
+      ShuffleBlockId(0,3,0),
+      ShuffleBlockId(0,4,0))
+
+    val optItr = mock(classOf[Option[Iterator[Any]]])
+    val answer = new Answer[Option[Iterator[Any]]] {
+      override def answer(invocation: InvocationOnMock) = Option[Iterator[Any]] {
+        throw new Exception
+      }
+    }
+
+    // 3rd block is going to fail
+    doReturn(optItr).when(blockManager).getLocalFromDisk(meq(blIds(0)), any())
+    doReturn(optItr).when(blockManager).getLocalFromDisk(meq(blIds(1)), any())
+    doAnswer(answer).when(blockManager).getLocalFromDisk(meq(blIds(2)), any())
+    doReturn(optItr).when(blockManager).getLocalFromDisk(meq(blIds(3)), any())
+    doReturn(optItr).when(blockManager).getLocalFromDisk(meq(blIds(4)), any())
+
+    val bmId = BlockManagerId("test-client", "test-client", 1, 0)
+    val blocksByAddress = Seq[(BlockManagerId, Seq[(BlockId, Long)])](
+      (bmId, blIds.map(blId => (blId, 1.asInstanceOf[Long])).toSeq)
+    )
+
+    val iterator = new BasicBlockFetcherIterator(blockManager,
+      blocksByAddress, null)
+
+    iterator.initialize()
+
+    // 3rd getLocalFromDisk invocation should be failed
+    verify(blockManager, times(3)).getLocalFromDisk(any(), any())
+
+    (iterator.hasNext) should be(true)
--- End diff --

Put a space after the `be` if you use this syntax. FYI it's also okay to do 
`assert(iterator.hasNext === true)`, or, for booleans, 
`assert(iterator.hasNext, "iterator did not have next")` (for a nicer error 
message).
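For reference, the three styles side by side (assuming a FunSuite mixed with Matchers, as in this test file):

```scala
// Matchers style: note the space after `be`
iterator.hasNext should be (true)

// Triple-equals assertion: reports expected vs. actual on failure
assert(iterator.hasNext === true)

// Boolean assertion with a clue string for a nicer error message
assert(iterator.hasNext, "iterator did not have next")
```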




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-30 Thread mateiz
Github user mateiz commented on the pull request:

https://github.com/apache/spark/pull/1578#issuecomment-50685938
  
Thanks for adding the test! I had one more comment on using drop() on the 
concurrent queue -- it seems like it might be troublesome. I'd rather just put 
the failed result and exit from getLocalBlocks.




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-30 Thread sarutak
Github user sarutak commented on a diff in the pull request:

https://github.com/apache/spark/pull/1578#discussion_r15627125
  
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockFetcherIterator.scala ---
@@ -199,15 +199,22 @@ object BlockFetcherIterator {
       // Get the local blocks while remote blocks are being fetched. Note that it's okay to do
       // these all at once because they will just memory-map some files, so they won't consume
       // any memory that might exceed our maxBytesInFlight
-      for (id <- localBlocksToFetch) {
-        getLocalFromDisk(id, serializer) match {
-          case Some(iter) => {
-            // Pass 0 as size since it's not in flight
-            results.put(new FetchResult(id, 0, () => iter))
-            logDebug("Got local block " + id)
-          }
-          case None => {
-            throw new BlockException(id, "Could not get block " + id + " from local machine")
+      var fetchIndex = 0
+      try {
+        for (id <- localBlocksToFetch) {
+
+          // getLocalFromDisk never return None but throws BlockException
+          val iter = getLocalFromDisk(id, serializer).get
+          // Pass 0 as size since it's not in flight
+          results.put(new FetchResult(id, 0, () => iter))
+          fetchIndex += 1
+          logDebug("Got local block " + id)
+        }
+      } catch {
+        case e: Exception => {
+          logError(s"Error occurred while fetching local blocks", e)
+          for (id <- localBlocksToFetch.drop(fetchIndex)) {
+            results.put(new FetchResult(id, -1, null))
--- End diff --

Thank you for your comment, @mateiz.

> I wouldn't do drop and such on a ConcurrentQueue, since it might drop stuff other threads
> were adding. Just do a results.put on the failed block and don't worry about dropping other
> ones. You can actually move the try/catch into the for loop and add a return at the bottom
> of the catch after adding this failing FetchResult.

But if it returns from getLocalBlocks immediately, the rest of the FetchResults are never put 
into results, and we would wait on results.take() in the next() method forever, right? 
results is an instance of LinkedBlockingQueue, and its take() method is a blocking 
method.
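A small standalone illustration of that concern, using a plain LinkedBlockingQueue rather than Spark code: if the caller expects one result per block but only the single failed result is ever enqueued, the next take() blocks forever (the demo uses poll with a timeout so it terminates).

```scala
import java.util.concurrent.{LinkedBlockingQueue, TimeUnit}

object BlockingTakeDemo {
  def main(args: Array[String]): Unit = {
    val results = new LinkedBlockingQueue[String]()
    val expectedBlocks = 3

    // Only the single "failed" result is enqueued, as in the return-early
    // proposal; the remaining results never arrive.
    results.put("failed-block")

    for (i <- 1 to expectedBlocks) {
      // A bare results.take() would hang forever from the second iteration on.
      val r = Option(results.poll(1, TimeUnit.SECONDS))
      println(s"result $i: " + r.getOrElse("timed out (take() would block here)"))
    }
  }
}
```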




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-29 Thread sarutak
Github user sarutak commented on the pull request:

https://github.com/apache/spark/pull/1578#issuecomment-50528558
  
I've modified BasicBlockFetcherIterator to fail fast and added test cases.




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-26 Thread witgo
Github user witgo commented on the pull request:

https://github.com/apache/spark/pull/1578#issuecomment-50232324
  
Should a `FetchFailedException` also be thrown here?
```scala
override def next(): (BlockId, Option[Iterator[Any]]) = {
  resultsGotten += 1
  val startFetchWait = System.currentTimeMillis()
  val result = results.take()
  val stopFetchWait = System.currentTimeMillis()
  _fetchWaitTime += (stopFetchWait - startFetchWait)
  if (!result.failed) bytesInFlight -= result.size
  while (!fetchRequests.isEmpty &&
    (bytesInFlight == 0 || bytesInFlight + fetchRequests.front.size <= maxBytesInFlight)) {
    sendRequest(fetchRequests.dequeue())
  }
  (result.blockId, if (result.failed) None else Some(result.deserialize()))
}
```
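A hedged sketch of that alternative: next() raising the failure itself instead of returning `(blockId, None)`. The exact FetchFailedException constructor isn't shown in this thread, so a generic org.apache.spark.SparkException stands in for it here:

```scala
// Fragment, same structure as the next() quoted above.
val result = results.take()
if (!result.failed) {
  bytesInFlight -= result.size
} else {
  // The real code would throw a FetchFailedException carrying the block's
  // BlockManagerId and shuffle/map/reduce ids.
  throw new SparkException("Fetch failed for block " + result.blockId)
}
```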




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-25 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1578#issuecomment-50110964
  
QA results for PR 1578:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds no public classes

For more information see test output:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17166/consoleFull




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-25 Thread mateiz
Github user mateiz commented on a diff in the pull request:

https://github.com/apache/spark/pull/1578#discussion_r15426042
  
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockFetcherIterator.scala ---
@@ -200,14 +200,21 @@ object BlockFetcherIterator {
       // these all at once because they will just memory-map some files, so they won't consume
       // any memory that might exceed our maxBytesInFlight
       for (id <- localBlocksToFetch) {
-        getLocalFromDisk(id, serializer) match {
-          case Some(iter) => {
-            // Pass 0 as size since it's not in flight
-            results.put(new FetchResult(id, 0, () => iter))
-            logDebug("Got local block " + id)
+        try{
--- End diff --

Small code style thing, add a space before the {




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-25 Thread mateiz
Github user mateiz commented on a diff in the pull request:

https://github.com/apache/spark/pull/1578#discussion_r15426163
  
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockFetcherIterator.scala ---
@@ -200,14 +200,21 @@ object BlockFetcherIterator {
       // these all at once because they will just memory-map some files, so they won't consume
       // any memory that might exceed our maxBytesInFlight
       for (id <- localBlocksToFetch) {
-        getLocalFromDisk(id, serializer) match {
-          case Some(iter) => {
-            // Pass 0 as size since it's not in flight
-            results.put(new FetchResult(id, 0, () => iter))
-            logDebug("Got local block " + id)
+        try{
+          getLocalFromDisk(id, serializer) match {
+            case Some(iter) => {
+              // Pass 0 as size since it's not in flight
+              results.put(new FetchResult(id, 0, () => iter))
+              logDebug("Got local block " + id)
+            }
+            case None => {
+              throw new BlockException(id, "Could not get block " + id + " from local machine")
+            }
           }
-          case None => {
-            throw new BlockException(id, "Could not get block " + id + " from local machine")
+        } catch {
+          case e: Exception => {
+            logError(s"Error occurred while fetch local block $id", e)
+            results.put(new FetchResult(id, -1, null))
           }
--- End diff --

Why do we throw an exception above and then immediately catch it, instead 
of doing results.put above? Is there any other kind of error that can happen 
beyond getLocalFromDisk returning None?

Also, the current code seems to forget the exception: it just puts in a 
failed result. Is this intentional, i.e. will we get a FetchFailedException later? 
It seems we should return from this method ASAP if there's a problem.
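In other words, the alternative being asked about would be a direct put on the None branch, something like this sketch against the diff above (not actual PR code):

```scala
getLocalFromDisk(id, serializer) match {
  case Some(iter) =>
    // Pass 0 as size since it's not in flight
    results.put(new FetchResult(id, 0, () => iter))
    logDebug("Got local block " + id)
  case None =>
    // Record the failure directly instead of throwing and immediately catching
    logError("Could not get block " + id + " from local machine")
    results.put(new FetchResult(id, -1, null))
}
```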




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-25 Thread sarutak
Github user sarutak commented on a diff in the pull request:

https://github.com/apache/spark/pull/1578#discussion_r15430260
  
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockFetcherIterator.scala ---
@@ -200,14 +200,21 @@ object BlockFetcherIterator {
       // these all at once because they will just memory-map some files, so they won't consume
       // any memory that might exceed our maxBytesInFlight
       for (id <- localBlocksToFetch) {
-        getLocalFromDisk(id, serializer) match {
-          case Some(iter) => {
-            // Pass 0 as size since it's not in flight
-            results.put(new FetchResult(id, 0, () => iter))
-            logDebug("Got local block " + id)
+        try{
+          getLocalFromDisk(id, serializer) match {
+            case Some(iter) => {
+              // Pass 0 as size since it's not in flight
+              results.put(new FetchResult(id, 0, () => iter))
+              logDebug("Got local block " + id)
+            }
+            case None => {
+              throw new BlockException(id, "Could not get block " + id + " from local machine")
+            }
           }
-          case None => {
-            throw new BlockException(id, "Could not get block " + id + " from local machine")
+        } catch {
+          case e: Exception => {
+            logError(s"Error occurred while fetch local block $id", e)
+            results.put(new FetchResult(id, -1, null))
           }
--- End diff --

Actually, getLocalFromDisk never returns None but can throw a BlockException, 
so I think the case None block above is useless and we should remove it rather 
than doing results.put.

> Is there any other kind of error that can happen beyond getLocalFromDisk
> returning None?

Yes: a BlockException can be thrown from getLocalFromDisk, and a 
FileNotFoundException from DiskStore#getBytes when it fails to fetch 
shuffle_*_* from the local disk.

> Also, the current code seems to forget the exception: it just puts in a
> failed result. Is this intentional, i.e. will we get a FetchFailedException later?

It's in order to get a FetchFailedException later. If we return from 
BasicBlockFetcherIterator#getLocalBlocks, we can't know whether the rest of the 
blocks can be read successfully or not.








[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-25 Thread sarutak
Github user sarutak commented on the pull request:

https://github.com/apache/spark/pull/1578#issuecomment-50217313
  
@pwendell I found this issue when I simulated a disk fault. When a shuffle_*_* 
file cannot be opened successfully, a FileNotFoundException is thrown from the 
constructor of RandomAccessFile in DiskStore#getBytes.

Yes, I will add test cases later.
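For illustration, the failure mode described above can be reproduced with RandomAccessFile alone; the path below is made up:

```scala
import java.io.{FileNotFoundException, RandomAccessFile}

object MissingShuffleFileDemo {
  def main(args: Array[String]): Unit = {
    try {
      // Opening a non-existent file in read mode throws FileNotFoundException
      // from the RandomAccessFile constructor, as in DiskStore#getBytes.
      new RandomAccessFile("/tmp/nonexistent/shuffle_0_0_0", "r")
    } catch {
      case e: FileNotFoundException =>
        println("local fetch failed: " + e.getMessage)
    }
  }
}
```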




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-24 Thread sarutak
GitHub user sarutak opened a pull request:

https://github.com/apache/spark/pull/1578

[SPARK-2670] FetchFailedException should be thrown when local fetch has 
failed



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sarutak/spark SPARK-2670

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/1578.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1578


commit e310c0ba502496b5790791c92d7adb7691562835
Author: Kousuke Saruta saru...@oss.nttdata.co.jp
Date:   2014-07-24T19:09:38Z

Modified BlockFetcherIterator to handle local fetch failure as fetch fail






[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-24 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/1578#issuecomment-50064004
  
Can one of the admins verify this patch?




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-24 Thread pwendell
Github user pwendell commented on the pull request:

https://github.com/apache/spark/pull/1578#issuecomment-50108622
  
Thanks - this is a good idea. Two questions: (a) what type of exception 
have you seen here? (b) could you add a unit test for this? Jenkins, test this 
please.




[GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...

2014-07-24 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1578#issuecomment-50108754
  
QA tests have started for PR 1578. This patch merges cleanly.
View progress: 
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17166/consoleFull

