You posted system specifics earlier; would you mind posting them again? I
can't find them in the thread.


On May 13, 2011, at 8:05 AM, Adi <adi.pan...@gmail.com> wrote:

>>> Is there a reason for using OpenJDK and not Sun's JDK?
> 
> The cluster we are seeing the problem in uses Sun's JDK:
> 
>   java version "1.6.0_21"
>   Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
>   Java HotSpot(TM) 64-Bit Server VM (build 17.0-b16, mixed mode)
> 
> The standalone node where I tried to reproduce the issue uses OpenJDK, and
> it does not see this problem: it is able to reuse JVMs.
> 
> -Adi
> 
>> Also... I believe there were noted issues with the .17 JDK. I will look for
>> a link and post it if I can find one.
>> 
> 
>> Otherwise, I have seen this behaviour before: Hadoop detaches from the
>> JVM and stops seeing it.
>> 
>> I think your problem lies in the JDK and not Hadoop.
>> 
>> 
>> On May 12, 2011 at 8:12 PM, Adi <adi.pan...@gmail.com> wrote:
>> 
>>>>> 2011-05-12 13:52:04,147 WARN org.apache.hadoop.mapreduce.util.ProcessTree:
>>>>> Error executing shell command
>>>>> org.apache.hadoop.util.Shell$ExitCodeException: kill -12545: No such process
>>>> 
>>>> 
>>>> Your logs showed that Hadoop tried to kill processes but the kill
>>>> command claimed they didn't exist. The next time you see this problem,
>>>> can you check the logs and see if any of the PIDs that appear in the
>>>> logs are in fact still running?
>>>> 
>>>> A more likely scenario is that Hadoop's tracking of child VMs is
>>>> getting out of sync, but I'm not sure what would cause that.
>>>> 
>>>> 
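(The check suggested above can be sketched in shell. `kill -0` is a standard way to test for a live process without signalling it; the PID 12545 is just the one from the example log line, so substitute the PIDs from your own logs.)

```shell
#!/bin/sh
# Sketch: check whether a PID from the TaskTracker log is still alive.
# `kill -0` sends no signal; it only tests whether the process exists
# and we are allowed to signal it.
pid_alive() {
    kill -0 "$1" 2>/dev/null
}

# 12545 is the PID from the log line quoted above; substitute the
# PIDs you find in your own logs.
if pid_alive 12545; then
    echo "12545 is still running"
else
    echo "12545 is gone (or not ours to signal)"
fi
```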
>>> Yes, those java processes are in fact running. The error messages do not
>>> always show up, only sometimes, but the processes never get cleaned up.
>>> 
>>> -Adi
>> 
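(For tracking down the leaked child JVMs Adi describes, one way to spot them is to list them alongside their parent PID: a child whose TaskTracker is gone will have been reparented to init and show PPID 1. This is only a sketch, assuming a Linux node; `org.apache.hadoop.mapred.Child` is the task JVM main class in Hadoop of that era, so adjust the pattern for your version.)

```shell
#!/bin/sh
# Sketch: list candidate leaked task JVMs with their parent PID.
# A child whose TaskTracker has died shows PPID 1 (reparented to init).
list_procs() {
    # Match the given pattern in the full command line; exclude the
    # grep process itself from the output.
    ps -eo pid,ppid,etime,args | grep "$1" | grep -v grep
}

list_procs 'org.apache.hadoop.mapred.Child'
```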
