> On 06/17/2011 09:51 AM, Lemon Cheng wrote:
>
> Hi,
>
> Thanks for your reply.
> I am not sure about that. How can I verify it?
>
> What are your dfs.tmp.dir and dfs.data.dir values?
>
> You can check the DataNodes' health with: bin/slaves.sh jps | grep DataNode | sort
> Can you also give us the output of "hadoop dfs -ls"?
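> For example, something along these lines should do it (just a sketch; I am
> assuming the usual conf/*-site.xml layout of 0.20.x, and hadoop.tmp.dir is
> the standard name for the temp directory property):
>
>   # print the configured data/tmp directories from the site config files
>   grep -E -B1 -A2 'dfs.data.dir|hadoop.tmp.dir' conf/*-site.xml
>
>   # confirm a DataNode JVM is alive on every slave
>   bin/slaves.sh jps | grep -i datanode | sort
>
>   # and the directory listing as the NameNode sees it
>   bin/hadoop dfs -ls /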
>
>
>
> On Jun 17, 2011, at 10:21, Lemon Cheng wrote:
>
> Hi,
>
> Thanks for your reply.
> I am not sure about that. How can I verify it?
>
> I checked localhost:50070, and it shows 1 live node and 0 dead nodes.
> The DataNode log shows:
>
> ... BlockReport of 0 blocks got processed in 1 msecs
> 2011-06-17 21:56:02,248 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 1 msecs
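>
> To dig further, I am thinking of running something like this (a rough
> sketch using the standard 0.20 command-line tools; the path is just my
> own input file):
>
>   # capacity summary and the list of DataNodes registered with the NameNode
>   bin/hadoop dfsadmin -report
>
>   # check whether the file's blocks exist and where their replicas live
>   bin/hadoop fsck /usr/lemon/wordcount/input/file01 -files -blocks -locations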
On Fri, Jun 17, 2011 at 9:42 PM, Marcos Ortiz wrote:
> On 06/17/2011 07:41 AM, Lemon Cheng wrote:
>
Hi,
I am using hadoop-0.20.2. After calling ./start-all.sh, I can run
"hadoop dfs -ls" without any problem.
However, when I run "hadoop dfs -cat /usr/lemon/wordcount/input/file01",
I get the error shown below.
I have searched the web for this problem, but I can't find a solution
that helps me solve it.
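
For reference, the exact sequence I run is roughly this (paths are from my
own setup):

  ./start-all.sh                                      # from the bin/ directory
  hadoop dfs -ls                                      # this works
  hadoop dfs -cat /usr/lemon/wordcount/input/file01   # this is the call that fails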