On Tue, Apr 23, 2013 at 9:23 PM, Mohammad Tariq wrote:
> What should I do on the namenode and datanode? Thank you very much
As Tariq asked, can you provide snapshots of the datanode logs?
*Thanks & Regards*
∞
Shashwat Shriparv
Hi there,
Could you please show me your config files and DN error logs?
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 23, 2013 at 4:35 PM, 超级塞亚人 wrote:
> Asking for help! I'm facing a "no datanode to stop" problem: the namenode
> has been started, but the datanode can't be started.
Asking for help! I'm facing a "no datanode to stop" problem: the namenode
has been started, but the datanode can't be started. What should I do on the
namenode and datanode? Thank you very much.
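Since the replies above ask for the datanode logs, a quick first check on the datanode host might look like the sketch below. The log directory and filename pattern are assumptions for a typical tarball install; adjust them for your layout.

```shell
# Log directory is an assumption for a typical install -- adjust to yours.
LOG_DIR="${HADOOP_LOG_DIR:-${HADOOP_HOME:-/usr/local/hadoop}/logs}"

# Is the DataNode JVM up at all? (jps ships with the JDK)
if jps 2>/dev/null | grep -q DataNode; then
  STATUS="running"
else
  STATUS="not running"
fi
echo "DataNode is $STATUS"

# If it is not running, the real startup error is in the newest datanode
# log; a namespaceID mismatch after reformatting the namenode is a common
# cause. The command is printed here rather than executed:
echo "tail -n 100 $LOG_DIR/hadoop-*-datanode-*.log"
```

Whatever the log shows is what the folks on this thread will need to diagnose further.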
2013/4/19 超级塞亚人
> I have a problem. Our cluster has 32 nodes, and each disk is 1 TB. I want
> to upload a 2 TB file to HDFS. How can I put the file on the namenode and
> upload it to HDFS?
>> the hdfs dfs command reading from stdin. You might have to correct
>> the above syntax, I just wrote it off the top of my head.
>>
>> Dave
>>
>
> *From:* 超级塞亚人 [mailto:shel...@gmail.com]
> *Sent:* Friday, April 19, 2013 11:35 AM
> *To:* user@hadoop.apache.org
> *Subject:* Uploading file to HDFS
>
>
> I have a problem. Our cluster has 32 nodes, and each disk is 1 TB. I want
> to upload a 2 TB file to HDFS. How can I put the file on the namenode and
> upload it to HDFS?
>
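The stdin route Dave sketches above looks roughly like the pipeline below; `-` tells `-put` to read from standard input. The paths are made-up placeholders, and, as Dave says himself, the exact syntax is worth double-checking against your Hadoop version.

```shell
# Build the pipeline as a string so it can be inspected before running.
# Source directory and HDFS destination are hypothetical placeholders.
CMD='tar czf - /data/dir | hdfs dfs -put - /user/hadoop/archive.tgz'
echo "$CMD"   # printed here for illustration; run it with: eval "$CMD"
```

This pattern is handy when the data is generated on the fly, or when the source machine has the Hadoop client but the file itself arrives over a pipe.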
I'd love to hear your final solution, as I've also been having fits getting
into HDFS from outside the Hadoop environment. I wish it natively supported
NFS mounts or some lightweight, easy-to-install remote DFS tools.
Dave
-----Original Message-----
From: Harsh J [mailto:ha...@cloudera.
Can you not simply do a fs -put from the location where the 2 TB file
currently resides? HDFS should be able to consume it just fine, as the
client chunks them into fixed size blocks.
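Harsh's suggestion amounts to a single command. A sketch, assuming the file sits at /data/big.file and the machine running it has the HDFS client configured (both paths are placeholders):

```shell
# The HDFS client streams the file and cuts it into fixed-size blocks
# (dfs.blocksize), spreading them across the datanodes, so no single
# 1 TB disk ever needs to hold the whole 2 TB file.
SRC=/data/big.file
DEST=/user/hadoop/big.file
CMD="hdfs dfs -put $SRC $DEST"
echo "$CMD"   # printed here for illustration; run it on a client node
```

The upload runs from wherever the file currently lives; there is no need to stage the file on the namenode itself.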
On Fri, Apr 19, 2013 at 10:05 AM, 超级塞亚人 wrote:
> I have a problem. Our cluster has 32 nodes, and each disk is 1 TB. I want
> to upload a 2 TB file to HDFS. How can I put the file on the namenode and
> upload it to HDFS?
I have a problem. Our cluster has 32 nodes, and each disk is 1 TB. I want to
upload a 2 TB file to HDFS. How can I put the file on the namenode and upload
it to HDFS?
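One more thing worth checking before the upload: HDFS replicates every block, so with the default replication factor of 3 a 2 TB file occupies about 6 TB of raw cluster space. A back-of-the-envelope check (node and disk numbers from the question; the replication factor is an assumption, check dfs.replication in hdfs-site.xml):

```shell
FILE_TB=2   # file size from the question
REPL=3      # assumed default dfs.replication
NODES=32
DISK_TB=1   # per-node disk from the question

RAW_NEEDED=$((FILE_TB * REPL))
RAW_AVAIL=$((NODES * DISK_TB))
echo "need ${RAW_NEEDED} TB raw out of ${RAW_AVAIL} TB total"
# -> need 6 TB raw out of 32 TB total
# Live numbers for an actual cluster: hdfs dfsadmin -report
```

So the upload fits comfortably, as long as the datanodes don't already hold much other data.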