But apart from storing metadata info, is there anything more the NN/JT machines
are doing?
So can I say I can survive with a less powerful NN if I am not dealing with lots of
files in HDFS?
On Thu, Sep 22, 2011 at 11:08 AM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
By just changing the
You may be missing the kerberos principal for the namenode in your
configuration used to connect to NameNode. Check your configuration for
dfs.namenode.kerberos.principal and set it to the same value as on NN.
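As a sketch, the client-side property would look something like this (the principal here is a placeholder, not a value from this thread; the standard form uses `_HOST` substitution):

```xml
<!-- hdfs-site.xml on the client: must match the principal the NN runs as -->
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>nn/_HOST@EXAMPLE.COM</value>
</property>
```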
HTH
+Vinod
On Thu, Sep 22, 2011 at 4:06 AM, Sivva svijaysand...@gmail.com wrote:
Hi,
I am CC'ing this to hive-user as well.
I tried to do a simple join between two tables, 2.2GB and 137MB.
select count(*) from A JOIN B ON (A.a = B.b);
The query ran for 7 hours. I am sure this is not normal. The reducer gets
stuck at the reduce phase. The map and copy phases complete just in
In the NN many daemons will run: replicating blocks from one DN to another DN
when there are not enough replicas, SafeMode monitoring, LeaseManager,
HeartbeatMonitoring, IPC handlers, etc. It will also maintain the
block-to-machine-list mappings in memory.
In JT also there are many
On Thu, Sep 22, 2011 at 11:44 AM, praveenesh kumar praveen...@gmail.com wrote:
But apart from storing metadata info, is there anything more the NN/JT machines
are doing?
So can I say I can survive with a less powerful NN if I am not dealing with lots of
files in HDFS?
snip
The JT and NN are your
Hi Uma,
You got me right!
Actually, without any patch, when I modified the appropriate mapred-site.xml and
capacity-scheduler.xml and copied the capacity-scheduler jar accordingly,
I am able to see the queues in the JobTracker GUI, but both queues show the same
set of jobs executing.
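For reference, a minimal sketch of the scheduler wiring, assuming the Hadoop 0.20-era property names (the second queue name here is made up):

```xml
<!-- mapred-site.xml -->
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.CapacityTaskScheduler</value>
</property>
<property>
  <name>mapred.queue.names</name>
  <value>default,queue2</value>
</property>
```

Note that jobs also need to set mapred.job.queue.name to land in a non-default queue; if none of the jobs set it, both queues would appear to show the same set of jobs.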
I ran with trace and topology files
On 22/09/11 05:42, praveenesh kumar wrote:
Hi all,
Can we replace our namenode machine later with some other machine?
Actually I got a new server machine in my cluster, and now I want to make
this machine my new namenode and jobtracker node.
Also, does the Namenode/JobTracker machine's
Hi Arun,
I have gone through the logs. The Mumak simulator is trying to start the job
tracker, and the job tracker is failing to start because it is not able to
create the /jobtracker/jobsinfo directory.
I think the directory doesn't have enough permissions. Please check the
permissions or any other
Yes Devaraj,
From the logs, it looks like it failed to create /jobtracker/jobsInfo.
code snippet:
if (!fs.exists(path)) {
  if (!fs.mkdirs(path, new FsPermission(JOB_STATUS_STORE_DIR_PERMISSION))) {
    throw new IOException(
        "CompletedJobStatusStore mkdirs failed to create " + path.toString());
  }
}
I agree w Steve except on one thing...
RAID 5 Bad. RAID 10 (1+0) good.
Sorry, this goes back to my RDBMS days, where RAID 5 will kill your performance
and worse...
Date: Thu, 22 Sep 2011 11:28:39 +0100
From: ste...@apache.org
To: common-user@hadoop.apache.org
Subject: Re: Can we replace
On 22/09/11 17:13, Michael Segel wrote:
I agree w Steve except on one thing...
RAID 5 Bad. RAID 10 (1+0) good.
Sorry, this goes back to my RDBMS days, where RAID 5 will kill your performance
and worse...
sorry, I should have said RAID !=5. The main thing is you don't want the
NN data lost.
I would like to have a robust setup for anything residing on our edge nodes,
which is where these two daemons will be, and I was curious if anyone had any
suggestions on how to replicate / keep an active clone of the metadata for
these components. We already use DRBD and a VIP to get around
Well, you could do RAID 1, which is just mirroring.
I don't think you need to do any RAID 0 or RAID 5 (striping) to get better
performance.
Also if you're using a 1U box, you just need 2 SATA drives internal and then
NFS mount a drive from your SN for your backup copy...
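One way to sketch that setup (paths here are illustrative): dfs.name.dir accepts a comma-separated list, and the NN writes its image and edits to every listed directory, so adding an NFS-mounted entry gives you an off-box copy of the metadata:

```xml
<!-- hdfs-site.xml on the NN; the second directory is the NFS mount -->
<property>
  <name>dfs.name.dir</name>
  <value>/data/dfs/name,/mnt/nfs/dfs/name</value>
</property>
```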
Date: Thu, 22 Sep 2011
Hello,
I am trying to automate formatting an HDFS volume. Is there any way to do this
without the interaction (and without using expect)?
Cheers,
Ivan
you could try
echo yes | bin/hadoop namenode -format
--
Arpit
ar...@hortonworks.com
On Sep 22, 2011, at 2:43 PM, ivan.nov...@emc.com wrote:
Hello,
I am trying to automate formatting an HDFS volume. Is there any way to do
this without the interaction (and without using expect)?
Cheers,
Ivan
echo 'Y' | hadoop namenode -format
should work.
Raj
From: ivan.nov...@emc.com ivan.nov...@emc.com
To: common-user@hadoop.apache.org
Sent: Thursday, September 22, 2011 2:43 PM
Subject: formatting hdfs without user interaction
Hello,
I am trying to
yes | hadoop namenode -format
The yes program simply outputs 'y' repeatedly. echo yes will just
print yes once to stdout.
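The difference is easy to check on its own, without touching the namenode (plain shell utilities, no Hadoop commands here):

```shell
# 'echo yes' writes its argument once and exits:
echo yes | head -n 3
# prints: yes

# 'yes' repeats its argument (default 'y') until the pipe closes:
yes Y | head -n 3
# prints three lines of: Y
```

So piping `echo yes` only answers the first prompt, while `yes Y` keeps answering for as long as the format command keeps asking.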
-- Adam
-Original Message-
From: ivan.nov...@emc.com [mailto:ivan.nov...@emc.com]
Sent: Thursday, September 22, 2011 6:00 PM
To: common-user@hadoop.apache.org;
I was reviewing a video from Hadoop Summit 2011[1] where Arun Murthy mentioned
that MRv2 was moving towards protocol buffers as the wire format but I feel
like this is contrary to an Avro presentation that Doug Cutting did back in
Hadoop World '09[2]. I haven't stayed up to date with the Jira
The reason you are getting multiple prompts is that you have multiple dirs
defined in dfs.name.dir.
A simple expect script would take care of this.
#!/usr/bin/expect -f
spawn /bin/hadoop namenode -format
expect "Re-format filesystem in"
send "Y\r"
expect "Re-format filesystem in"
send "Y\r"
expect eof
Yeah I have a secondary namenode as well so 2 directories.
I was trying to avoid expect if possible. But this is always an option.
Cheers,
Ivan
On 9/22/11 3:17 PM, Arpit Gupta ar...@hortonworks.com wrote:
The reason you are getting multiple prompts is that you have multiple
dirs defined
Hi Adam,
Well, the yes program prints lower-case y's, and apparently only capital Y
is accepted.
But by creating my own Yes program that spews Y's to stdout, it works :)
Cheers,
Ivan
On 9/22/11 3:02 PM, Adam Shook ash...@clearedgeit.com wrote:
yes | hadoop namenode -format
The yes program
Ivan,
Writing your own program was overkill.
The 'yes' coreutil is pretty silly, but nifty at the same time. It
accepts an argument, which it will repeat infinitely.
So:
$ yes Y | hadoop namenode -format
Would do it for you.
(Note that in a future release, saner answers will be