>
> so the question is: can we achieve reading the counters using simple
> Java APIs? Does anyone have an idea how the default JobTracker JSP works?
> We wanted to build something similar to this.
>
> thanks
> Rajan Dev
>
Best regards,
Ted Xu
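For what it's worth, the JobTracker JSP pages pull this information through the client API, so the same data can be fetched from plain Java. A minimal sketch using the old `org.apache.hadoop.mapred` API (0.19-era; the command-line job ID argument here is just an assumption, and the `JobConf` must point at your cluster's JobTracker):

```java
import java.io.IOException;

import org.apache.hadoop.mapred.Counters;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

public class CounterDump {
    public static void main(String[] args) throws IOException {
        // JobConf picks up mapred.job.tracker etc. from the classpath configs.
        JobConf conf = new JobConf();
        JobClient client = new JobClient(conf);

        // Look up a running or completed job by its ID, e.g. "job_200912040001_0001".
        RunningJob job = client.getJob(JobID.forName(args[0]));
        Counters counters = job.getCounters();

        // Counters is an Iterable of groups; each group iterates its counters.
        for (Counters.Group group : counters) {
            for (Counters.Counter counter : group) {
                System.out.println(group.getDisplayName() + "\t"
                        + counter.getDisplayName() + "\t" + counter.getCounter());
            }
        }
    }
}
```

This prints one line per counter (group, name, value), which is roughly what the JSP renders as a table.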
Hi Daniel,
I think there are better solutions, but simply chopping the input file into
pieces (e.g. 10 URLs per file) should work.
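The chopping itself is plain Java; a sketch of the idea, using only the standard library (the file names and chunk size of 10 are arbitrary):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class SplitUrls {
    // Write the input URLs into chunk files of `perFile` lines each,
    // so each chunk file becomes the input of one map task.
    static List<Path> split(List<String> urls, int perFile, Path outDir)
            throws IOException {
        Files.createDirectories(outDir);
        List<Path> chunks = new ArrayList<>();
        for (int i = 0; i < urls.size(); i += perFile) {
            Path chunk = outDir.resolve("part-" + (i / perFile));
            Files.write(chunk, urls.subList(i, Math.min(i + perFile, urls.size())));
            chunks.add(chunk);
        }
        return chunks;
    }

    public static void main(String[] args) throws IOException {
        List<String> urls = new ArrayList<>();
        for (int i = 0; i < 25; i++) {
            urls.add("http://example.com/img" + i + ".jpg");
        }
        List<Path> chunks = split(urls, 10, Files.createTempDirectory("splits"));
        // 25 URLs at 10 per file -> 3 chunk files
        System.out.println(chunks.size());
    }
}
```

With the default FileInputFormat each chunk file then yields at least one split, so the job fans out across the cluster instead of running as a single map.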
2009/12/4 Daniel Garcia
> Hello!
>
> I'm trying to rewrite an image-resizing program in terms of map/reduce. The
> problem I see is that the job is not broken up into
ly something is wrong in your hdfs
> cluster.
>
>
> On Thu, Nov 12, 2009 at 7:06 AM, Ted Xu wrote:
>
>> hi all,
>>
>> We are using hadoop-0.19.1 on about 200 nodes. We find that lots of
>> slaves keep Child processes around even after the job is done.
>>
>> H
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
> at org.apache.hadoop.mapred.Child.main(Child.java:158)
>
> "VM Thread" prio=10 tid=0x080ff000 nid=0x6038 runnable
>
> "GC task thread#0 (ParallelGC)" prio=10 tid=0x08062400 nid=0x6034 runnable
>
> "GC task thread#1 (ParallelGC)" prio=10 tid=0x08063800 nid=0x6035 runnable
>
> "GC task thread#2 (ParallelGC)" prio=10 tid=0x08065000 nid=0x6036 runnable
>
> "GC task thread#3 (ParallelGC)" prio=10 tid=0x08066400 nid=0x6037 runnable
>
> "VM Periodic Task Thread" prio=10 tid=0x0811e400 nid=0x603f waiting on
> condition
>
> JNI global references: 738
>
It seems the process is blocked by the DFS client. Can anyone tell me how to
avoid it?
Best Regards,
Ted Xu