Hi,
I was trying to set the output format class using job.setOutputFormatClass()
in the newer API (org.apache.hadoop.mapreduce), however the method does not
accept a 'MapFileOutputFormat.class' argument. MapFileOutputFormat is
imported from the older API (org.apache.hadoop.mapred) in Eclipse and there
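For what it's worth, a minimal sketch of what the job setup looks like when it does work. This assumes a Hadoop release (0.21+/2.x) that ships MapFileOutputFormat in the new API under org.apache.hadoop.mapreduce.lib.output; on Hadoop 1.x the class exists only in the old org.apache.hadoop.mapred package, which is why the new-API setOutputFormatClass() rejects it. Class name and output path below are made up:

```java
// Sketch only - assumes a Hadoop version where MapFileOutputFormat
// exists in the new API (org.apache.hadoop.mapreduce.lib.output).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat;

public class MapFileJobSketch {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "mapfile-output");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        // compiles only when MapFileOutputFormat is the mapreduce-package one;
        // the mapred-package class is not assignable to the new-API signature
        job.setOutputFormatClass(MapFileOutputFormat.class);
        MapFileOutputFormat.setOutputPath(job, new Path("/out"));
    }
}
```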
Your use of 'index' is indeed not clear. Are you talking about Hive or
HBase?
I can confirm that you will have one result file per reducer. Of course,
for efficiency reasons, you need to limit the number of files. But if you
are using multiple reducers it should mean that one reducer isn't fast
enough.
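The one-file-per-reducer behaviour follows from how keys are partitioned across reducers. A standalone sketch of the arithmetic (mirroring the default org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, but with no Hadoop dependency; key and count are arbitrary examples):

```java
// Standalone illustration of Hadoop's default hash-partitioning logic:
// each key is assigned to one of N reducers, and each reducer writes
// exactly one output file (part-r-00000 ... part-r-000NN).
public class PartitionSketch {
    static int partition(String key, int numReduceTasks) {
        // mask the sign bit so the result is always non-negative
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        int reducers = 100;
        int p = partition("some-key", reducers);
        System.out.println("key goes to reducer " + p
                + " -> file part-r-" + String.format("%05d", p));
    }
}
```

So with 100 reducers you get 100 part files, each holding the keys hashed to that partition.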
HI guys :
I have an EMR job which seems to be loading "old" versions of an
aws-sdk-java jar. I looked closer and found that
the hadoop nodes I'm using in fact have an old version of a jar in $HOME/lib/
which is causing the problem.
This is most commonly seen, for example, with jackson json jars.
Wha
Thanks for the info, I'll give it a try and update you soon.
On Thu, Jul 26, 2012 at 4:21 AM, Dave Beech wrote:
> Apache Oozie ( the workflow / coordination tool for Hadoop) has a feature
> similar to this.
>
> Take a look at
>
> http://incubator.apache.org/oozie/docs/3.2.0-incubating/docs/CoordinatorFunctionalSpec.html
You'd better check the dead node's log. By default, the logs are in
${HADOOP_HOME}/logs directory.
Liyin
-----Original Message-----
From: Barry, Sean F [mailto:sean.f.ba...@intel.com]
Sent: July 27, 2012 7:20
To: common-user@hadoop.apache.org
Subject: Multinode cluster only recognizes 1 node
Hi,
I just set up a 2
Hi Syed,
Do you mean I need to deploy the mahout jars to the lib directory of
the master node? Or all the data nodes? Or is there a way to simply
tell the hadoop job launcher to upload the jars itself?
Steve
On Thu, Jul 26, 2012 at 6:10 PM, syed kather wrote:
> Hi Steve,
> I hope you had missed
Mike,
Can you please give more details. The context is not clear. Can you share your
use case if possible?
On Jul 24, 2012 1:40 AM, "Mike S" wrote:
> If I set my reducer output to map file output format and the job would
> say have 100 reducers, will the output generate 100 different index
> files (on
Can you paste the output from when you execute start-all.sh in the terminal?
When you do ssh to the slave, is it working fine?
On Jul 27, 2012 4:50 AM, "Barry, Sean F" wrote:
> Hi,
>
> I just set up a 2 node POC cluster and I am currently having an issue with
> it. I ran a wordcount MR test
Hi Steve,
I hope you had missed that specific jar to copy into your Hadoop lib
directories. Have a look at your lib.
On Jul 27, 2012 4:49 AM, "Steve Armstrong" wrote:
> Hello,
>
> I'm trying to trigger a Mahout job from inside my Java application
> (running in Eclipse), and get it running on my
Hi,
I just set up a 2 node POC cluster and I am currently having an issue with it.
I ran a wordcount MR test on my cluster to see if it was working and noticed
that the Web ui at localhost:50030 showed that I only have 1 live node. I
followed the tutorial step by step and I cannot seem to figur
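One common cause worth checking (an assumption on my part, not a diagnosis) is the slaves file on the master: if a slave hostname is missing there, start-all.sh never starts daemons on it and the web UI shows only one live node. For a Hadoop 1.x two-node cluster the master's conf files typically look like this ("master" and "slave" are placeholder hostnames):

```
# conf/masters - node that runs the SecondaryNameNode
master

# conf/slaves - every node that should run a DataNode/TaskTracker
master
slave
```

Also check the DataNode log on the second node to see whether it connected to the NameNode at all.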
Hello,
I'm trying to trigger a Mahout job from inside my Java application
(running in Eclipse), and get it running on my cluster. I have a main
class that simply contains:
String[] args = new String[] { "--input", "/input/triples.csv",
"--output", "/output/vectors.txt", "--similarityClassname",
V
OK I think I understand it now. You probably have ACLs enabled, but no
web filter on the RM to let you sign in as a given user. As such the
default filter is making you be Dr. Who, or whomever else it is, but the
ACL check in the web service is rejecting Dr. Who, because that is not the
correct user.
Apache Oozie ( the workflow / coordination tool for Hadoop) has a feature
similar to this.
Take a look at
http://incubator.apache.org/oozie/docs/3.2.0-incubating/docs/CoordinatorFunctionalSpec.html
there are triggers which allow you to perform actions when new data
appears on the filesystem in a
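To make the data-availability trigger concrete, here is a rough, hypothetical coordinator sketch following that spec. All names, paths, dates and frequencies below are placeholders, not from the original mail:

```xml
<!-- Hypothetical Oozie coordinator: runs a workflow once a day's input
     directory appears in HDFS. Everything here is a placeholder. -->
<coordinator-app name="daily-trigger" frequency="${coord:days(1)}"
                 start="2012-07-01T00:00Z" end="2012-12-31T00:00Z"
                 timezone="UTC" xmlns="uri:oozie:coordinator:0.2">
  <datasets>
    <dataset name="input" frequency="${coord:days(1)}"
             initial-instance="2012-07-01T00:00Z" timezone="UTC">
      <uri-template>hdfs:///data/logs/${YEAR}/${MONTH}/${DAY}</uri-template>
    </dataset>
  </datasets>
  <input-events>
    <!-- the action below is held until this dataset instance exists -->
    <data-in name="wait-for-input" dataset="input">
      <instance>${coord:current(0)}</instance>
    </data-in>
  </input-events>
  <action>
    <workflow>
      <app-path>hdfs:///apps/my-workflow</app-path>
    </workflow>
  </action>
</coordinator-app>
```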
Hi,
Summarizing the two previous answers : yes, multithreaded mappers are
supported.
However, the most common use case is to increase the number of map 'slots'.
Have you considered the latter? If so, why would it not help you?
In that case, why do you think multithreading might help you?
In orde
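For reference, multithreaded mapping is wired up through MultithreadedMapper in the new API. A sketch of the job setup, assuming Hadoop on the classpath; MyMapper is a placeholder for your real (thread-safe) mapper class:

```java
// Sketch only - MultithreadedMapper runs several invocations of your
// map() concurrently inside one map task JVM, so the wrapped mapper
// must be thread-safe. MyMapper is a hypothetical class name.
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;

public class MultithreadedSetupSketch {
    public static void configure(Job job) {
        job.setMapperClass(MultithreadedMapper.class);
        // tell the wrapper which mapper to actually run, and with how many threads
        MultithreadedMapper.setMapperClass(job, MyMapper.class);
        MultithreadedMapper.setNumberOfThreads(job, 4);
    }
}
```

This mainly helps when each map() call blocks on I/O (e.g. remote lookups); for CPU-bound maps, more map slots is usually the better lever.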
hi user,
https://issues.apache.org/jira/browse/HADOOP-7750
It appears my experiment hits a problem like the one in this link.
My hadoop version is 1.0.3-release.
I set up hadoop with the user 'hadoop'
and opened the datanode ports 1103 and 1104,
but the problem still appears.
my namenode an
Hi Bobby
Thanks for the reply. My REST calls are working fine since I set the
'hadoop.http.staticuser.user' property to 'prajakta' instead of Dr.Who in
core-site.xml . I didn't get time to figure out the reason behind it as I
just moved on to further coding :)
Thanks,
Prajakta
On Thu, Jul 26,
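For anyone hitting the same thing, the override described above is a one-property change in core-site.xml, replacing the Dr.Who default mentioned in the thread (value shown is the one from this mail):

```xml
<!-- core-site.xml: user the Hadoop HTTP endpoints act as
     when security is off -->
<property>
  <name>hadoop.http.staticuser.user</name>
  <value>prajakta</value>
</property>
```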
Hi all,
I configured like below in hdfs-site.xml:

  <property>
    <name>dfs.namenode.kerberos.principal</name>
    <value>nn/_HOST@site</value>
  </property>
  <property>
    <name>dfs.web.authentication.kerberos.principal</name>
    <value>host/_HOST@site</value>
  </property>

When starting up the namenode, I found that the namenode will use the
principal nn/167-52-0-56@site to log in, but the http server wi
Thank you
I've been researching based on your opinions, and found the two
solutions below.
These are the answers for anyone who has the FileSystem closed issue like me.
- close it in your cleanup method when you have JVM reuse turned on
(mapred.job.reuse.jvm.num.tasks)
- set "fs.hdfs.impl.disable.ca
HBase has coprocessors which might help you with your use case.
https://blogs.apache.org/hbase/entry/coprocessor_introduction
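For concreteness, a rough observer sketch of the trigger-like behaviour. This assumes an HBase 0.92/0.94-era API where BaseRegionObserver exposes postPut(); the class name is made up and the method body is left as a stub:

```java
// Hypothetical trigger-like coprocessor - assumes an HBase 0.92/0.94-era
// RegionObserver API. Class name and behaviour are illustrative only.
import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

public class PutTrigger extends BaseRegionObserver {
    @Override
    public void postPut(ObserverContext<RegionCoprocessorEnvironment> ctx,
                        Put put, WALEdit edit, boolean writeToWAL)
            throws IOException {
        // runs after every Put on regions of the table this observer is
        // attached to - the closest HBase analogue to a row-level trigger
    }
}
```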
Regards
Bertrand
On Thu, Jul 26, 2012 at 4:08 AM, Sandeep Reddy P <
sandeepreddy.3...@gmail.com> wrote:
> Hi,
> Is it possible to implement triggers or listeners or obs