Please see
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1-latest/bk_system-admin-guide/content/ch_hadoop-ha-3-2.html
Look for 'Standby NameNode'
Cheers
On Wed, Jan 7, 2015 at 8:57 PM, Shashidhar Rao
wrote:
Hi,
Thanks, it helped me a lot. But I am not seeing any Secondary NameNode, so my
question is: is a Secondary NameNode required in an HA Hadoop cluster?
Thanks
Sd
On Thu, Jan 8, 2015 at 9:38 AM, Ted Yu wrote:
Please take a look at
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1-latest/bk_system-admin-guide/content/ch_hadoop-ha-3-1.html
Cheers
On Wed, Jan 7, 2015 at 8:00 PM, Shashidhar Rao
wrote:
Hi,
I want to set up Hadoop HA with 2 NameNodes and journal nodes, but I am not
sure what the configuration of the journal nodes should be:
1. How many journal nodes do I need?
2. What should the machine configuration be (RAM, disk size, etc.)?
This is for a fairly small cluster dealing with 20 terabytes.
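For what it's worth, the usual quorum journal setup is an odd number of JournalNodes, three at minimum (a quorum of 3 tolerates one failure), and they are lightweight daemons that need only modest RAM and disk. A minimal hdfs-site.xml sketch, with the hostnames, nameservice id, and local edits directory as placeholders:

```xml
<!-- Hypothetical fragment: 3 JournalNodes forming the shared edits quorum. -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
</property>
<!-- Local directory on each JournalNode host where edits are stored. -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/data/hadoop/journal</value>
</property>
```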
Also, bash wildcard expansion should automatically put the full list of
matching files into the list of local source arguments prior to execution.
For example, assuming 3 files named hello1, hello2 and hello3, then running
the following command...
hdfs dfs -put hello* /user/chris
...should turn into the equivalent of:
hdfs dfs -put hello1 hello2 hello3 /user/chris
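You can see that expansion happen without touching HDFS at all, since the shell rewrites the argument list before the command ever runs (the scratch directory and filenames below are just for the demo):

```shell
# Work in a scratch directory so the glob matches only the demo files.
dir=$(mktemp -d)
cd "$dir"
touch hello1 hello2 hello3
# The shell expands hello* into the sorted list of matching names before
# the command (here echo, standing in for hdfs dfs -put) is executed.
expanded=$(echo hello*)
echo "$expanded"
```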
Hi Yogendra,
This is something that I had fixed as part of HDFS-573.
https://issues.apache.org/jira/browse/HDFS-573
The relevant portion of hadoop-hdfs-project/hadoop-hdfs/pom.xml is here,
where we set up HADOOP_HOME and PATH to reference hadoop.dll and
winutils.exe in the hadoop-common-project/
Try:
javac -cp $($HADOOP_HOME/bin/hadoop classpath):. abc.java
Cheers
On Wed, Jan 7, 2015 at 3:08 PM, Arko Provo Mukherjee <
arkoprovomukher...@gmail.com> wrote:
Hello Gurus,
I have been using Hadoop 1.x and have been compiling my programs as follows:
$ javac -cp /hadoop-core-1.a.b.jar abc.java
However, I am unable to find the core jar for Hadoop 2.4.1. I see lots of
jars in /share/hadoop but am unsure which ones I need to compile against for
regular MapReduce applications.
This is more of a general cluster management question.
Do you use Chef or Puppet or any similar products?
Sent from my iPhone
See http://search-hadoop.com/m/uUC1020JbAn1
Once the patch for hadoop is applied, you can build and deploy onto your
cluster.
Cheers
On Wed, Jan 7, 2015 at 12:51 PM, Sajid Syed wrote:
Hi All,
Is there an easy way, or are there links that show how to install patches
(OS / Hadoop) on a 100-node cluster?
Thanks
SP
I have made the changes in the file but still nothing happens. Please help,
I am a beginner in this.
kmer.java is attached herewith.
On Wed, Jan 7, 2015 at 7:03 PM, Vandana kumari
wrote:
Yes, there is no compilation error. But thank you for finding the mistakes;
I will make the above changes and let you all know.
On Wed, Jan 7, 2015 at 6:52 PM, Shahab Yunus wrote:
First try:
You should use the @Override annotation before the map and reduce methods so
that they are actually called by the framework.
Like this:
@Override
public void map(LongWritable k, Text v, Context con) throws
IOException, InterruptedException
{...
Do the same for the 'reduce' method.
Regards,
Shahab
On Wed, Jan 7,
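To see why the annotation matters, here is a plain-Java illustration of the failure mode, with no Hadoop dependencies (the class names are made up for the demo): a subclass method whose signature does not exactly match the parent's becomes an overload instead of an override, so a framework-side call silently keeps the parent's default behavior; the same mistake makes Hadoop fall back to the identity mapper and produce an empty-looking job.

```java
class Base {
    String process(Object in) { return "default"; }
}

class WrongChild extends Base {
    // Looks like an override, but the parameter type differs (String vs
    // Object), so this is a new overload; Base.process(Object) still runs.
    String process(String in) { return "custom"; }
}

class RightChild extends Base {
    @Override  // the compiler rejects this line if the signature is wrong
    String process(Object in) { return "custom"; }
}

public class OverrideDemo {
    public static void main(String[] args) {
        Base wrong = new WrongChild();
        Base right = new RightChild();
        System.out.println(wrong.process("x")); // prints: default
        System.out.println(right.process("x")); // prints: custom
    }
}
```

With @Override on the Hadoop map/reduce methods, a signature mistake becomes a compile error instead of an empty output directory.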
Hi,
Does this even compile? In the mapper, the variable sum doesn't seem to be
defined:
con.write(new Text(y), new IntWritable(sum));
2015-01-07 13:58 GMT+01:00 Vandana kumari :
Vandana,
Also, in the code that you attached, I see 2 compilation errors. How is it
even compiling? Or am I missing something here?
1- 'sum' is not declared in the Map class.
2- The 'sum' in the Reduce class needs to be wrapped in an IntWritable when
written to the context, not passed directly as an int.
Regards
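For reference, a reduce method with both issues addressed could look like the sketch below (assuming the usual count-style signature; the class name and generic types are illustrative, not taken from the attached kmer.java):

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class KmerReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context con)
            throws IOException, InterruptedException {
        // Issue 1: declare 'sum' before accumulating into it.
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        // Issue 2: wrap the int in an IntWritable before writing it out.
        con.write(key, new IntWritable(sum));
    }
}
```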
On Wed, Jan 7, 2015 at 4:16 PM, saurabh chhajed
wrote:
Can you attach .java file instead of .class ?
On Wed, Jan 7, 2015 at 4:04 PM, Vandana kumari
wrote:
Hello all,
I have written a k-mer MapReduce program in Java. There is no error in the
program, and when I run it, it creates an output directory in the Hadoop file
system, but that directory is empty.
Please help me resolve this issue.
The input file (test.txt) and kmer.java are attached herewith.
--
Thanks and
You can configure your third MapReduce job using MultipleInputs and read
those files into your job. If the files are small, then you can consider the
DistributedCache, which will give you optimal performance if you are joining
the datasets of file1 and file2. I would also recommend that you use s
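A sketch of how the driver side of that wiring might look with the org.apache.hadoop.mapreduce.lib.input.MultipleInputs API (the HDFS paths are placeholders, and the identity Mapper stands in for the per-dataset mapper classes you would write):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class JoinDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "join file1+file2");
        job.setJarByClass(JoinDriver.class);
        // Each dataset can get its own mapper class; the identity Mapper is
        // used here only as a compilable stand-in. Both mappers should emit
        // the join key so the reducer sees records from both files together.
        MultipleInputs.addInputPath(job, new Path("/data/file1"),
                TextInputFormat.class, Mapper.class);
        MultipleInputs.addInputPath(job, new Path("/data/file2"),
                TextInputFormat.class, Mapper.class);
        FileOutputFormat.setOutputPath(job, new Path("/data/joined"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```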