Thanks a lot for the replies.
To me it is clear when data locality gets broken (and it is not only the
failure of the RS; there are other cases). I was hoping more for
suggestions around this particular use case: assuming that nodes/RSs are
stable, how to make sure to achieve data locality
-XX:HeapDumpPath=/path/to/heap.dump
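For reference, a sketch of how the two flags could be combined in mapred-site.xml (the heap size and dump path below are illustrative assumptions, not values from your cluster):

```xml
<!-- Illustrative mapred-site.xml fragment: JVM options passed to child
     task processes. The dump path is an example; any directory writable
     by the task's user would do. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/heap.dump</value>
</property>
```

Note that without -XX:HeapDumpPath the JVM writes java_pid&lt;pid&gt;.hprof into the process working directory, which for a task is its local task working directory.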
-Original message-
> From:Marek Miglinski
> Sent: Wed 18-Jul-2012 19:51
> To: mapreduce-user@hadoop.apache.org
> Subject: location of Java heap dumps
>
> Hi all,
>
> I have a setting of -XX:+HeapDumpOnOutOfMemoryError on all nodes and I don't
>
Hi all,
I have the -XX:+HeapDumpOnOutOfMemoryError setting on all nodes, and I don't
have permission to set the location where those dumps will be saved, so I get
this message in my mapred process:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid10687.hprof ...
Heap dump file created
Hi Syed,
Please do go through the tutorial completely; it helps you understand
what's possible and what's not, and how to do certain things:
http://hadoop.apache.org/common/docs/stable/mapred_tutorial.html#Reducer
As it mentions:
"The number of reduces for the job is set by the user via
JobConf.setNumReduceTasks(int)."
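As a minimal sketch (using the old org.apache.hadoop.mapred API from the tutorial; the class name, job name, and reducer count here are made-up examples, and the mapper/reducer/path setup is elided):

```java
// Illustrative job driver: the reducer count is a per-job setting on the
// JobConf, unlike mapred.tasktracker.reduce.tasks.maximum, which only caps
// how many reduce slots a single TaskTracker offers.
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class MyJobDriver {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(MyJobDriver.class);
    conf.setJobName("my-job");
    // Request 4 reduce tasks for this job.
    conf.setNumReduceTasks(4);
    // ... set mapper, reducer, input/output formats and paths as usual ...
    JobClient.runJob(conf);
  }
}
```

The same value can also be supplied at submit time with -D mapred.reduce.tasks=4 if the driver uses ToolRunner/GenericOptionsParser.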
Team,
Is there a way to increase the number of reducers in a MapReduce program?
I have increased mapred.tasktracker.reduce.tasks.maximum = 2 in the
configuration. Is there a way we can increase it in the program?
Thanks and Regards,
S SYED ABDUL KATHER
Hello,
As far as I understand, the Bulk Import functionality does not take data
locality into account. The MR job will create as many reducer tasks as there
are regions to write into, but it will not "advise" on which nodes to run
these tasks. In that case the reducer task which writes the HFiles of some