Hi,
Hadoop neither reads one line at a time, nor fetches dfs.block.size worth of
lines into a buffer.
Actually, for TextInputFormat, it reads io.file.buffer.size bytes of text
into a buffer each time;
this can be seen in the Hadoop source file LineReader.java.
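The buffering described above can be sketched as follows. This is a minimal illustration, not Hadoop's actual LineReader code: the class name ChunkedLineReader and the simplified newline handling (no `\r\n`, no maximum line length) are my own, but it shows the same idea of refilling a fixed-size byte buffer and carving lines out of it, rather than issuing one read per line.

```java
import java.io.IOException;
import java.io.InputStream;

// Sketch of buffered line reading (hypothetical class, not Hadoop's
// LineReader): the stream is read in fixed-size chunks, and lines are
// extracted from the in-memory buffer. In Hadoop, the chunk size would
// be io.file.buffer.size.
public class ChunkedLineReader {
    private final InputStream in;
    private final byte[] buffer;   // plays the role of the io.file.buffer.size buffer
    private int bufferLength = 0;  // number of valid bytes in the buffer
    private int bufferPos = 0;     // index of the next byte to consume

    public ChunkedLineReader(InputStream in, int bufferSize) {
        this.in = in;
        this.buffer = new byte[bufferSize];
    }

    /** Returns the next line without its trailing '\n', or null at EOF. */
    public String readLine() throws IOException {
        StringBuilder line = new StringBuilder();
        while (true) {
            if (bufferPos >= bufferLength) {   // buffer exhausted: refill it
                bufferLength = in.read(buffer);
                bufferPos = 0;
                if (bufferLength <= 0) {       // EOF reached
                    return line.length() > 0 ? line.toString() : null;
                }
            }
            byte b = buffer[bufferPos++];
            if (b == '\n') {
                return line.toString();
            }
            line.append((char) b);
        }
    }
}
```

With a buffer size of 4, reading the stream "a\nbb\nccc" yields the lines "a", "bb", and "ccc" while the underlying stream is read only in 4-byte chunks.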
2011/10/5 Mark question
> Hello,
>
Install Hadoop on your local machine, copy the configuration files from the
remote Hadoop cluster server to your local machine (including the hosts
file), then you can just submit a *.jar locally as before.
2011/10/5 oleksiy
>
> Hello,
>
> I'm trying to find a way how to run hadoop MapReduce ap
Hi all,
I am benchmarking a Hadoop Cluster with the hadoop-*-test.jar TestDFSIO
but the following error returns:
File /usr/hadoop-0.20.2/libhdfs/libhdfs.so.1 does not exist.
How to solve this problem?
Thanks!
Thanks a lot!
Yang Xiaoliang
2011/2/25 maha
> Hi Yang,
>
> The problem could be solved using the following link:
> http://www.roseindia.net/java/java-get-example/get-memory-usage.shtml
> You need to use other memory managers like the Garbage collector and its
> finalize
I had also encountered the same problem a few days ago.
Does anyone have another method?
2011/2/24 maha
> Based on the Java function documentation, it gives only an approximation of
> the available memory, so I need to tweak it with other functions.
> So it's a Java issue, not a Hadoop one.
>
> Thanks anyways,
> Maha
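The approximation Maha describes can be seen with Java's standard Runtime methods. The sketch below is an illustration under my own naming (MemorySnapshot is a hypothetical helper, not from the thread or the linked page): freeMemory() reflects the heap at an arbitrary point in the GC cycle, so the derived "used" figure drifts even when live data does not, and calling System.gc() first, which is only a hint to the JVM, usually steadies the reading.

```java
// Hypothetical helper showing why Runtime memory figures are approximate:
// the used-heap number depends on when garbage collection last ran.
public class MemorySnapshot {
    /** Approximate bytes of heap currently in use. */
    public static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    /** The same measurement after suggesting a GC cycle to the JVM. */
    public static long usedBytesAfterGc() {
        System.gc();  // only a hint; the JVM is free to ignore it
        return usedBytes();
    }
}
```

Comparing usedBytes() before and after usedBytesAfterGc() on a busy program shows how much of the raw figure is just uncollected garbage.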