I missed that you're running in standalone mode, so ignore the "local[3]"
example ;)  How many separate receivers are listening to RabbitMQ topics?

This might not be the problem, but I'm just eliminating possibilities.
Another is that the inbound data rate exceeds your ability to process it.
What's your streaming batch window size?
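If ingestion is outrunning processing, one knob worth checking (verify it
against the configuration docs for your Spark version) is the per-receiver
rate cap, alongside a larger batch window. A sketch of the setting, with a
placeholder value:

```properties
# spark-defaults.conf -- the 5000 is a placeholder, tune for your workload
# Max records per second per receiver; unset means unlimited.
spark.streaming.receiver.maxRate  5000
```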

See also here for ideas:
http://spark.apache.org/docs/1.2.1/streaming-programming-guide.html#performance-tuning
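To make the core-budget rule from earlier in the thread concrete: each
receiver pins one core for its lifetime, so the app needs more cores than it
has input streams. A minimal sketch (the helper name `minExecutorCores` is
hypothetical, not a Spark API):

```scala
// Hypothetical helper: each receiver occupies one core for its lifetime,
// so a streaming app needs one core per receiver plus at least one
// core left over for batch processing.
def minExecutorCores(numReceivers: Int, processingCores: Int = 1): Int =
  numReceivers + math.max(processingCores, 1)

// Two RabbitMQ input streams -> at least 3 cores, e.g. "local[3]" in
// local mode, or spark.cores.max >= 3 on a standalone cluster.
println(minExecutorCores(2))
```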


Dean Wampler, Ph.D.
Author: Programming Scala, 2nd Edition
<http://shop.oreilly.com/product/0636920033073.do> (O'Reilly)
Typesafe <http://typesafe.com>
@deanwampler <http://twitter.com/deanwampler>
http://polyglotprogramming.com

On Thu, Apr 2, 2015 at 10:57 AM, Bill Young <[email protected]>
wrote:

> Sorry for the obvious typo, I have 4 workers with 16 cores total*
>
> On Thu, Apr 2, 2015 at 11:56 AM, Bill Young <[email protected]>
> wrote:
>
>> Thank you for the response, Dean. There are 2 worker nodes, with 8 cores
>> total, attached to the stream. I have the following settings applied:
>>
>> spark.executor.memory 21475m
>> spark.cores.max 16
>> spark.driver.memory 5235m
>>
>>
>> On Thu, Apr 2, 2015 at 11:50 AM, Dean Wampler <[email protected]>
>> wrote:
>>
>>> Are you allocating 1 core per input stream, plus additional cores for the
>>> rest of the processing? Each input stream receiver requires a dedicated
>>> core. So, if you have two input streams, you'll need "local[3]" at least.
>>>
>>>
>>> On Thu, Apr 2, 2015 at 11:45 AM, byoung <[email protected]>
>>> wrote:
>>>
>>>> I am running a Spark Streaming standalone cluster, connected to RabbitMQ
>>>> endpoint(s). The application will run for 20-30 minutes before failing
>>>> with the following error:
>>>>
>>>> WARN 2015-04-01 21:00:53,944 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 22 - Ask timed out on [Actor[akka.tcp://[email protected]:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]}
>>>> WARN 2015-04-01 21:00:53,944 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 23 - Ask timed out on [Actor[akka.tcp://[email protected]:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]}
>>>> WARN 2015-04-01 21:00:53,951 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 20 - Ask timed out on [Actor[akka.tcp://[email protected]:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]}
>>>> WARN 2015-04-01 21:00:53,951 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 19 - Ask timed out on [Actor[akka.tcp://[email protected]:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]}
>>>> WARN 2015-04-01 21:00:53,952 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 18 - Ask timed out on [Actor[akka.tcp://[email protected]:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]}
>>>> WARN 2015-04-01 21:00:53,952 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 17 - Ask timed out on [Actor[akka.tcp://[email protected]:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]}
>>>> WARN 2015-04-01 21:00:53,952 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 16 - Ask timed out on [Actor[akka.tcp://[email protected]:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]}
>>>> WARN 2015-04-01 21:00:54,151 org.apache.spark.streaming.scheduler.ReceiverTracker.logWarning.71: Error reported by receiver for stream 0: Error in block pushing thread - java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
>>>>     at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>>>>     at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>>>>     at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
>>>>     at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>>>>     at scala.concurrent.Await$.result(package.scala:107)
>>>>     at org.apache.spark.streaming.receiver.ReceiverSupervisorImpl.pushAndReportBlock(ReceiverSupervisorImpl.scala:166)
>>>>     at org.apache.spark.streaming.receiver.ReceiverSupervisorImpl.pushArrayBuffer(ReceiverSupervisorImpl.scala:127)
>>>>     at org.apache.spark.streaming.receiver.ReceiverSupervisorImpl$$anon$2.onPushBlock(ReceiverSupervisorImpl.scala:112)
>>>>     at org.apache.spark.streaming.receiver.BlockGenerator.pushBlock(BlockGenerator.scala:182)
>>>>     at org.apache.spark.streaming.receiver.BlockGenerator.org$apache$spark$streaming$receiver$BlockGenerator$$keepPushingBlocks(BlockGenerator.scala:155)
>>>>     at org.apache.spark.streaming.receiver.BlockGenerator$$anon$1.run(BlockGenerator.scala:87)
>>>>
>>>>
>>>> Has anyone run into this before?
>>>>
>>>>
>>>>
>>>> --
>>>> View this message in context:
>>>> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-Error-in-block-pushing-thread-tp22356.html
>>>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>>>
>>>>
>>>
>>
>>
>> --
>> Bill Young
>> Threat Stack | Senior Infrastructure Engineer
>> http://www.threatstack.com
>>
>
>
>
>
