Also, my job is map-only, so there is no shuffle/reduce phase.

On Fri, Feb 27, 2015 at 7:10 PM, Mukesh Jha <me.mukesh....@gmail.com> wrote:

> I'm streaming data from a Kafka topic using KafkaUtils, doing some
> computation, and writing the records to HBase.
>
> Storage level is MEMORY_AND_DISK_SER.
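
For context, a receiver-based setup like the one described above can be sketched as follows. This is only an illustration of the pattern, not the original job: the ZooKeeper quorum, topic name, batch interval, and the `writeToHBase` helper are all hypothetical.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object KafkaToHBase {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka-to-hbase")
    val ssc  = new StreamingContext(conf, Seconds(10)) // batch interval is illustrative

    // Receiver-based stream; received blocks are persisted with
    // MEMORY_AND_DISK_SER, as in the job described above.
    val lines = KafkaUtils.createStream(
      ssc,
      "zk1:2181",          // ZooKeeper quorum (illustrative)
      "consumer-group",    // consumer group id (illustrative)
      Map("events" -> 4),  // topic -> receiver thread count (illustrative)
      StorageLevel.MEMORY_AND_DISK_SER)

    // Map-only pipeline: no shuffle/reduce phase.
    lines.map(_._2).foreachRDD { rdd =>
      rdd.foreachPartition { records =>
        records.foreach(writeToHBase) // hypothetical HBase writer
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }

  def writeToHBase(record: String): Unit = ??? // sketch only
}
```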
> On 27 Feb 2015 16:20, "Akhil Das" <ak...@sigmoidanalytics.com> wrote:
>
>> You could be hitting this issue:
>> https://issues.apache.org/jira/browse/SPARK-4516
>> Apart from that, a little more information about your job would be helpful.
>>
>> Thanks
>> Best Regards
>>
>> On Wed, Feb 25, 2015 at 11:34 AM, Mukesh Jha <me.mukesh....@gmail.com>
>> wrote:
>>
>>> Hi Experts,
>>>
>>> My Spark job is failing with the error below.
>>>
>>> From the logs I can see that input-3-1424842351600 was added at 05:32:32
>>> and was never purged out of memory. The executor also still reports *2.1 GB*
>>> of free memory.
>>>
>>> Please help me figure out why the executors cannot fetch this input.
>>>
>>> Thanks for any help. Cheers.
>>>
>>>
>>> *Logs*
>>> 15/02/25 05:32:32 INFO storage.BlockManagerInfo: Added
>>> input-3-1424842351600 in memory on
>>> chsnmphbase31.usdc2.oraclecloud.com:50208 (size: 276.1 KB, free: 2.1 GB)
>>> .
>>> .
>>> 15/02/25 05:32:43 INFO storage.BlockManagerInfo: Added
>>> input-1-1424842362600 in memory on chsnmphbase30.usdc2.cloud.com:35919
>>> (size: 232.3 KB, free: 2.1 GB)
>>> 15/02/25 05:32:43 INFO storage.BlockManagerInfo: Added
>>> input-4-1424842363000 in memory on chsnmphbase23.usdc2.cloud.com:37751
>>> (size: 291.4 KB, free: 2.1 GB)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 32.1 in
>>> stage 451.0 (TID 22511, chsnmphbase19.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 37.1 in
>>> stage 451.0 (TID 22512, chsnmphbase23.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 31.1 in
>>> stage 451.0 (TID 22513, chsnmphbase30.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 34.1 in
>>> stage 451.0 (TID 22514, chsnmphbase26.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 36.1 in
>>> stage 451.0 (TID 22515, chsnmphbase19.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 39.1 in
>>> stage 451.0 (TID 22516, chsnmphbase23.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 30.1 in
>>> stage 451.0 (TID 22517, chsnmphbase30.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 33.1 in
>>> stage 451.0 (TID 22518, chsnmphbase26.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 35.1 in
>>> stage 451.0 (TID 22519, chsnmphbase19.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 38.1 in
>>> stage 451.0 (TID 22520, chsnmphbase23.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 WARN scheduler.TaskSetManager: Lost task 32.1 in stage
>>> 451.0 (TID 22511, chsnmphbase19.usdc2.cloud.com): java.lang.Exception:
>>> Could not compute split, block input-3-1424842351600 not found
>>>         at org.apache.spark.rdd.BlockRDD.compute(BlockRDD.scala:51)
>>>         at
>>> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
>>>         at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
>>>         at
>>> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
>>>         at
>>> org.apache.spark.rdd.FlatMappedRDD.compute(FlatMappedRDD.scala:33)
>>>         at
>>> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
>>>         at
>>> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
>>>         at org.apache.spark.scheduler.Task.run(Task.scala:56)
>>>         at
>>> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>>>         at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>         at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>         at java.lang.Thread.run(Thread.java:745)
>>>
>>> 15/02/25 05:32:43 WARN scheduler.TaskSetManager: Lost task 36.1 in stage
>>> 451.0 (TID 22515, chsnmphbase19.usdc2.cloud.com): java.lang.Exception:
>>> Could not compute split, block input-3-1424842355600 not found
>>>         at org.apache.spark.rdd.BlockRDD.compute(BlockRDD.scala:51)
>>>
>>> --
>>> Thanks & Regards,
>>>
>>> *Mukesh Jha <me.mukesh....@gmail.com>*
>>>
>>
>>
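
For readers hitting the same "Could not compute split, block input-... not found" failure: the receiver stores incoming blocks in the executors' BlockManager, and if a block is evicted or its executor is lost before the batch that reads it runs, the task fails exactly as in the log above. From Spark 1.2 onward, one common mitigation is to enable the receiver write-ahead log together with checkpointing, so lost blocks can be re-read from reliable storage instead of failing the task. A minimal sketch (the checkpoint path is illustrative):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("kafka-to-hbase")
  // Persist received blocks to reliable storage before they are used,
  // so a lost block does not kill the task that needs it.
  .set("spark.streaming.receiver.writeAheadLog.enable", "true")

val ssc = new StreamingContext(conf, Seconds(10))
ssc.checkpoint("hdfs:///tmp/streaming-checkpoint") // illustrative path
```

The write-ahead log adds I/O per received block, so throughput drops somewhat; whether that trade-off is acceptable depends on the job's delivery guarantees.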


-- 


Thanks & Regards,

*Mukesh Jha <me.mukesh....@gmail.com>*
