Hi guys,
I want to move all the blocks from 50 datanodes to another 50 new
datanodes. One straightforward idea is to add the 50 new nodes to the
Hadoop cluster first, then decommission the other 50 nodes one by one.
But I believe that is not an efficient way to reach the goal.
So I plan to ge
Hi Bing,
I would recommend that you add your 50 new nodes to the cluster and then
decommission the 50 nodes you want to get rid of. You can do the decommission
in one operation (albeit a lengthy one) by adding the nodes you want to
decommission to your HDFS exclude file and running `hdfs
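A minimal sketch of that decommissioning flow, assuming the exclude file is the one referenced by `dfs.hosts.exclude` in hdfs-site.xml (the file path and hostnames below are illustrative):

```shell
# Append the hostnames of the nodes to retire to the HDFS exclude file
# (the file pointed to by dfs.hosts.exclude in hdfs-site.xml).
echo "old-node-01.example.com" >> /etc/hadoop/conf/dfs.exclude
echo "old-node-02.example.com" >> /etc/hadoop/conf/dfs.exclude

# Tell the NameNode to re-read its include/exclude files; it will begin
# re-replicating the excluded nodes' blocks onto the remaining live nodes.
hdfs dfsadmin -refreshNodes

# Watch progress: the nodes move from "Decommission In Progress"
# to "Decommissioned" once all their blocks are safely replicated.
hdfs dfsadmin -report
```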
Hi
Adding to Harshit Mathur's reply:
What should be the main class in the manifest file
From what I remember, you must not set it (i.e., if you set it to MyMain1,
then you can't use MyMain2).
hth
Gabriel Balan
On 7/1/2015 12:02 AM, Harshit Mathur wrote:
Yes you can do this.
You can have
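To illustrate the point above: when the jar's manifest has no Main-Class entry, you choose the entry point on every launch (the jar name and class names here are hypothetical):

```shell
# No Main-Class in the manifest, so any driver class can be named at launch:
hadoop jar myapp.jar com.example.MyMain1 input/ output1/
hadoop jar myapp.jar com.example.MyMain2 input/ output2/
```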
The best way to do this is by decommissioning nodes, as Rich said.
Another way could be to bump up the replication factor with the setrep command
(without the -w option), wait a few hours, and then reset the
replication factor back to the original.
Then start the decommissioning process, it will be
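A sketch of that setrep approach (the factor values and path are illustrative; the original factor is assumed to be 3):

```shell
# Temporarily raise the replication factor so extra copies land on the
# new nodes; -R recurses over the path, and omitting -w returns
# immediately instead of blocking until replication completes.
hdfs dfs -setrep -R 4 /

# Hours later, once replication has caught up, drop it back to the original:
hdfs dfs -setrep -R 3 /
```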
I am running Hadoop MRv2 on a cluster with 4 nodes. Java 8 is installed.
The resource manager and the node manager start normally, but during
execution the resource manager crashes with the error below. Any
help solving this? Is it a problem related to the Java heap, or memory?
Jul 01
Can you check /var/log/messages to see if there is some clue ?
Which hadoop release are you using ?
Can you provide the command line for the resource manager ?
Thanks
On Wed, Jul 1, 2015 at 9:38 AM, xeonmailinglist-gmail <
xeonmailingl...@gmail.com> wrote:
> I am running the hadoop MRv2 in a
I agree with what Gabriel said. I avoided setting the main class in the manifest
and it did work. Thanks.
On Wed, Jul 1, 2015 at 9:25 AM, gabriel balan
wrote:
> Hi
>
> Adding to Harshit Mathur's reply:
>
> What should be the main class in the manifest file
>
> From what I remember, you must not set that. (
I have no file /var/log/messages.
I am using hadoop-2.6.0
Wellington:~/repositories/git/hadoop-2.6.0$ ./sbin/start-yarn.sh
On 07/01/2015 05:56 PM, Ted Yu wrote:
Can you check /var/log/messages to see if there is some clue ?
Which hadoop release are you using ?
Can you provide the command li
I have a question regarding the scalability of the name node. Typically the name
node handles two types of clients:
1. Internal clients (data nodes that are part of the Hadoop cluster)
2. External clients (client nodes requesting block locations in order
to perform reads/writes on data nodes)
I am not much conc
That's just some warnings from the web component. It should do no harm to your RM.
You should check the RM log. Check whether you defined HADOOP_YARN_HOME/logs or
YARN_LOG_DIR, where the daemon log lives.
Thanks,
Zhijie
From: xeonmailinglist-gmail
Sent: Wednesday, July
output means the path to the history file of the job you want to view on HDFS.
Thanks,
Zhijie
From: mehdi benchoufi
Sent: Saturday, June 20, 2015 12:09 PM
To: user@hadoop.apache.org
Subject: accessing hadoop job history
Hi,
I am new to Hadoop and when I run
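One way to view that history file, assuming the question is about a finished MapReduce job (the .jhist path below is hypothetical; the real file lives under the directory configured by mapreduce.jobhistory.done-dir):

```shell
# Print the full history (counters, task attempts, timings) of a finished
# job from its history file on HDFS. The path is illustrative.
mapred job -history /mr-history/done/2015/07/01/000000/job_1435700000000_0001.jhist
```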
+1 Zhijie, or if that doesn't work maybe you can run
*ps -aef | grep hadoop*
in a terminal and check the value of *-Dyarn.log.dir*;
that should tell you where the logs are getting written.
On Thu, Jul 2, 2015 at 10:13 AM, Zhijie Shen wrote:
> That's just some warnings from web component. It shou
Hey there,
Try posting to u...@hive.apache.org.
To answer your question... AFAIK you can do this with hive, but as a CTAS
operation:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-CreateTableAsSelect(CTAS)
.
On Mon, Jun 29, 2015 at 8:01 PM, Kumar Jayapal w
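A minimal sketch of such a CTAS statement run from the Hive CLI (the table and column names are illustrative, not from the original question):

```shell
# Create a new table populated by a SELECT in a single CTAS statement.
hive -e "CREATE TABLE sales_2015 AS SELECT * FROM sales WHERE year = 2015;"
```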