be inherent (not the same hardware etc.).
Hope you find some of this useful.
Regards
GorGo
--
View this message in context:
http://old.nabble.com/increase-number-of-map-tasks-tp33107775p33132789.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
task for
each line, but as I was using C++ Pipes that was not an option for me.
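On the Java side, the usual way to get one map task per line is an NLineInputFormat-style splitter. A minimal sketch of the idea in Python (illustrative only, not Hadoop's actual code; the function name is mine):

```python
# Sketch of NLineInputFormat-style splitting: each group of n input
# lines becomes its own split, and Hadoop spawns one map task per split.

def n_line_splits(lines, n=1):
    """Group input lines into splits of n lines each."""
    return [lines[i:i + n] for i in range(0, len(lines), n)]

records = ["data%d" % i for i in range(1, 11)]  # ten one-line records
splits = n_line_splits(records, n=1)
# with n=1, each of the 10 lines becomes its own split,
# so up to 10 map tasks would be spawned
```

With n greater than 1 the same sketch batches several lines into one task, which is how you would trade task-startup overhead against parallelism.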
Hope this helps
GorGo
sset wrote:
Hello,
In HDFS we have set the block size to 40 bytes. The input data set is as
below, with each record terminated by a line feed.
data1 (5*8=40 bytes)
data2
..
...
data10
C++ or Java?
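If each record in the quoted example really fills one 40-byte block, the default size-based splitting would already yield one map task per record. A rough sketch of that arithmetic (simplified from classic FileInputFormat behaviour; the split size is the block size clamped between the configured minimum and maximum, and the function names here are mine):

```python
import math

def split_size(block_size, min_size=1, max_size=2**63 - 1):
    # Simplified version of Hadoop's computeSplitSize:
    # max(minSize, min(maxSize, blockSize))
    return max(min_size, min(max_size, block_size))

def num_splits(file_size, block_size):
    # Number of input splits, and hence the upper bound on map tasks
    return math.ceil(file_size / split_size(block_size))

# Ten 40-byte records => a 400-byte file; with a 40-byte block size
# this gives 10 splits, i.e. up to 10 map tasks.
```

Note this only bounds the task count; whether all 10 run in parallel still depends on the cluster's available map slots.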
I would also like information on how Hadoop maintains data locality
between RecordReaders and the spawned map tasks.
Any information is most welcome.
Regards
GorGo
--
View this message in context:
http://old.nabble.com/Hadoop-PIPES-job-using-C%2B%2B-and-binary-data-results