Hi Jason,
Mehmet is exactly correct: without reducers we cannot improve performance. You can tune the number of mappers and reducers for any data-processing job, and you will get the output with good performance.
Thanks Regards,
Ramesh.Narasingu
On Tue, Sep 11, 2012 at 9:31 AM, Mehmet
Hi Vinod,
Please check that the input file location and the output file location are not the same. Put your input file into HDFS first and then run the MR job; it should work fine.
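A sketch of those steps from the command line (the paths, jar name, and class name below are assumptions for illustration, not Vinod's actual job):

```shell
# Copy the local input file into HDFS before running the job.
hadoop fs -mkdir /user/vinod/input
hadoop fs -put data.txt /user/vinod/input

# The job fails if the output directory already exists, so remove any stale one.
hadoop fs -rmr /user/vinod/output

# Run the MR job with distinct input and output paths.
hadoop jar wordcount.jar WordCount /user/vinod/input /user/vinod/output

# Inspect the result.
hadoop fs -cat /user/vinod/output/part-00000
```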
Thanks Regards,
Ramesh.Narasingu
On Tue, Sep 11, 2012 at 4:23 AM, Vinod Kumar Vavilapalli
Hi,
I think there is a single command that builds all the applications from the top level.
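As a sketch (assuming a trunk checkout with the usual build prerequisites installed), one Maven invocation from the top-level directory builds every module:

```shell
# Build all modules in one pass; run the tests separately afterwards.
cd hadoop-trunk            # hypothetical checkout directory
mvn clean install -DskipTests
mvn test                   # then run the unit tests
```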
Thanks Regards,
Ramesh.Narasingu
On Tue, Sep 11, 2012 at 2:28 PM, Tony Burton tbur...@sportingindex.comwrote:
Hi,
I’ve checked out the hadoop trunk, and I’m running “mvn test” on the
Hi Andy,
Please set the environment variables in .bashrc (e.g. with gedit): set the JAVA_HOME environment variable there, and in the hadoop-site.xml, hadoop-core.xml, and hadoop-default.xml files specify which Hadoop version and settings you are using; then save and close .bashrc.
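A minimal sketch of the .bashrc lines (the JDK and Hadoop install paths below are assumptions; substitute your own):

```shell
# Hypothetical install locations -- adjust for your machine.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export HADOOP_HOME=/usr/local/hadoop
export PATH="$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin"
```

After editing, run `source ~/.bashrc` so the current shell picks up the new variables.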
Hi Users,
Hadoop distributes the data across HDFS so that MapReduce tasks can work on it together. The framework keeps track of which block goes to which DataNode and how it all works; each task runs in its own JVM on a DataNode, and those JVMs can each process a huge amount of data.
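You can see that placement yourself: on a running cluster, `hadoop fsck` reports which blocks of a file landed on which DataNodes (the file path below is hypothetical):

```shell
# List the blocks of a file and the DataNodes holding each replica.
hadoop fsck /user/hadoop/input/data.txt -files -blocks -locations
```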
Hi Abhay,
The NameNode holds the addresses of all the DataNodes; MapReduce does the actual data processing. First the data set is put into the HDFS filesystem, and then the Hadoop jar file is run. Map output is shuffled, sorted, and grouped together by key, and handed to the reducers once the map tasks are completed.
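That map → shuffle/sort → reduce flow can be sketched with Hadoop Streaming, using plain Unix tools as mapper and reducer (the streaming jar path and HDFS paths are assumptions for illustration):

```shell
# Mapper emits each input line as-is; the framework shuffles and sorts
# by key; the reducer then sees its keys grouped together and counts them.
hadoop jar "$HADOOP_HOME"/contrib/streaming/hadoop-streaming.jar \
    -input /user/abhay/input \
    -output /user/abhay/output \
    -mapper /bin/cat \
    -reducer /usr/bin/wc
```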
Hi Rekha,
What I meant is: first he has to install Java, then set JAVA_HOME to the Java installation directory (which you can check with pwd, the present working directory), and then set the classpath for the Hadoop installation.
Thanks Regards,
Ramesh.Narasingu
On Tue, Sep 4, 2012 at 1:36 PM, Joshi, Rekha rekha_jo...@intuit.com wrote:
Hi Pat,
Please specify the correct input file location.
Thanks Regards,
Ramesh.Narasingu
On Mon, Sep 3, 2012 at 9:28 PM, Pat Ferrel p...@occamsmachete.com wrote:
Using hadoop with mahout in a local filesystem/non-hdfs config for
debugging purposes inside Intellij IDEA. When I run one
Again, appreciate your help.
Andy
On 4 September 2012 18:13, Narasingu Ramesh ramesh.narasi...@gmail.comwrote: