>>> ... ip-10-252-5-62.asskickery.us): java.lang.Exception:
>>> Could not compute split, block input-0-1410443074600 not found
>>> org.apache.spark.rdd.BlockRDD.compute(BlockRDD.scala:51)
>>> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>>> org.apache.spark...
>>>>> ...processing? Is it because of a resource limitation (say, all cores are
>>>>> busy pulling), or is it by design? I am setting the executor-memory to 10G
>>>>> and the driver-memory to 4G.
>>>>>
>>>>> 2. This i
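For reference, a minimal sketch of how the memory settings mentioned above are
usually expressed (the app name is a placeholder, and note that
spark.driver.memory normally has to be supplied before the driver JVM starts,
e.g. via spark-submit --driver-memory 4G, rather than in code):

import org.apache.spark.SparkConf

// 10G per executor and 4G for the driver, as in the quoted message.
val conf = new SparkConf()
  .setAppName("kafka-streaming-app")   // placeholder app name
  .set("spark.executor.memory", "10g")
  .set("spark.driver.memory", "4g")    // usually passed to spark-submit instead,
                                       // e.g. --driver-memory 4G, so it takes
                                       // effect before the driver JVM starts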
>>>> ...I can see, very frequently, that blocks are selected to be
>>>> removed... these kinds of entries are all over the place. But when a block
>>>> is removed, the problem below happens... Maybe this issue causes issue 1,
>>>> that no jobs are getting processed...
>>>
>>> ... (size: 12.1 MB, free: 100.6 MB)
>>> ...
>>>
>>> INFO : org.apache.spark.storage.BlockManagerInfo - Removed
>>> input-0-1410443074600 on ip-10-252-5-62.asskickery.us:37033 in memory
>>> (size: 12.1 MB, free: 154.6 MB)
>>> ..........
>>
>>
>>
>> org.apache.spark.SparkException: *Job aborted due to stage failure*:
>> Task 0 in stage 7.0 failed 4 times, most recent failure: Lost task 0.3 in
>> stage 7.0 (TID 139, ip-10-252-5-62.asskickery.us): java.lang.Exception:
>> Could not compute split, block input-0-1410443074600 not found
>>
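The "Could not compute split, block input-0-... not found" error generally
means that a block written by the receiver was evicted or lost before the task
that needed it ran. A minimal sketch of one common mitigation, assuming the
receiver-based KafkaUtils.createStream API (the master, ZooKeeper quorum,
consumer group, and topic map below are placeholders, not the poster's setup):

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val ssc = new StreamingContext(
  new SparkConf()
    .setAppName("kafka-block-demo")
    .setMaster("local[2]"),            // placeholder; at least 2 threads for a receiver
  Seconds(10))                         // placeholder batch interval

// Keep received blocks in memory and on disk, replicated to a second node,
// so an evicted in-memory copy does not make the split unrecoverable.
val stream = KafkaUtils.createStream(
  ssc,
  "zk-host:2181",                      // ZooKeeper quorum (placeholder)
  "my-consumer-group",                 // consumer group id (placeholder)
  Map("my-topic" -> 1),                // topic -> receiver thread count (placeholder)
  StorageLevel.MEMORY_AND_DISK_SER_2)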
Dear all,
I am sorry; this was a false alarm.
There was an issue in my RDD processing logic which led to a large
backlog. Once I fixed the issues in my processing logic, I can see all
messages being pulled nicely without any Block Removed errors. I need to
tune certain configurations in my Kafka
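For anyone hitting the same backlog symptom: one knob sometimes used to keep a
receiver from piling up unprocessed blocks is spark.streaming.receiver.maxRate
(records per second per receiver). This is only a sketch of that option, not
necessarily the configuration tuned here, and the rate value is arbitrary:

import org.apache.spark.SparkConf

// Cap each receiver at 10,000 records/second (arbitrary example value) so
// blocks are not generated faster than the batches can drain them.
val conf = new SparkConf()
  .set("spark.streaming.receiver.maxRate", "10000")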
This is my case with a broadcast variable:

14/07/21 19:49:13 INFO Executor: Running task ID 4
14/07/21 19:49:13 INFO DAGScheduler: Completed ResultTask(0, 2)
14/07/21 19:49:13 INFO TaskSetManager: Finished TID 2 in 95 ms on localhost (progress: 3/106)
14/07/21 19:49:13 INFO TableOutputFormat:
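For context, a minimal sketch of the broadcast-variable pattern involved here
(the lookup table and the job itself are placeholders, not the poster's actual
code):

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("broadcast-demo").setMaster("local[2]"))

// Ship a read-only lookup table to every executor once, instead of
// serializing it into every task closure.
val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))

val total = sc.parallelize(Seq("a", "b", "a"))
  .map(k => lookup.value.getOrElse(k, 0))  // tasks read the broadcast value
  .reduce(_ + _)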
Hi,
Can you attach more logs, so we can see whether there are any entries from the
ContextCleaner? (One way to rule the cleaner out is sketched after the quoted
message below.) I met a very similar issue before… but it hasn't been resolved
yet.
Best,
--
Nan Zhu
On Thursday, September 11, 2014 at 10:13 AM, Dibyendu Bhattacharya wrote:
> Dear All,
>
> Not sure if this is a false alarm.
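Regarding the ContextCleaner question above: the cleaner (controlled by the
spark.cleaner.referenceTracking setting) removes blocks for RDDs, shuffles and
broadcast variables that have gone out of scope, and that cleanup shows up as
"Removed ..." lines in the BlockManager logs. A minimal sketch of disabling it
temporarily to rule it out; this is a diagnostic step only, not a recommended
production setting:

import org.apache.spark.SparkConf

// Turn off reference-tracking cleanup so that "Removed ..." block messages
// cannot be coming from the ContextCleaner while diagnosing the problem.
val conf = new SparkConf()
  .set("spark.cleaner.referenceTracking", "false")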