Hi Sujit,
Can you check the JobTracker logs for info related to job_201204082039_0002?
From those you can find out what the status/error is.
If you share the job_201204082039_0002 related info from the JobTracker/TaskTracker
logs, we can help better.
Thanks
Devaraj
Hi Friends,
I am not able to run the word count example.
Please look into the issue below.
I am not able to find what exactly the issue is, and I am not able to troubleshoot it.
Please help me out :)
Thanks in advance.
Kind Regards
Sujit
On Mon, Apr 9, 2012 at 8:56 AM, Sujit Dhamale wrote:
> Hi all,
> I did al
After moving the includes and excludes files from /root/ to
$HADOOP_HOME/conf, the problem was resolved. Really strange.
On April 10, 2012 at 10:45 AM, air wrote:
>
>
> -- Forwarded message --
> From: air
> Date: April 10, 2012, 9:46 AM
> Subject: CDH3u3 problem when decommission nodes
> To: CDH Users
>
>
>
> all operati
-- Forwarded message --
From: air
Date: April 10, 2012, 9:46 AM
Subject: CDH3u3 problem when decommission nodes
To: CDH Users
All operations are on the (JT + NN) node.
I created a file called hosts.include with all (DN + TT) nodes listed in it.
I also created a file with all the nodes that need to be decommission
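For context, a hedged sketch of how those files are usually wired up in
hdfs-site.xml on the NameNode (paths assume CDH3's default HADOOP_HOME of
/usr/lib/hadoop; the exclude file name is illustrative), after which
"hadoop dfsadmin -refreshNodes" makes the NameNode re-read both lists:

<property>
  <name>dfs.hosts</name>
  <!-- DataNodes allowed to connect -->
  <value>/usr/lib/hadoop/conf/hosts.include</value>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <!-- DataNodes to decommission -->
  <value>/usr/lib/hadoop/conf/hosts.exclude</value>
</property>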
On Mon, Apr 9, 2012 at 5:29 PM, Deepak Nettem wrote:
> Hi Stan,
>
> Just out of curiosity, care to explain the use case a bit?
>
Very simply: lots of reasonably small files which I can't control,
i.e., changing block size is not an option. Note that this is not an
issue in pig or hive, both of w
Hi Stan,
Just out of curiosity, care to explain the use case a bit?
On Mon, Apr 9, 2012 at 5:25 PM, Stan Rosenberg wrote:
> Hi,
>
> I just came across a use case requiring CombineFileInputFormat under
> hadoop 0.20.2. I was surprised that the API does not provide a default
> implementation.
Hi,
I just came across a use case requiring CombineFileInputFormat under
hadoop 0.20.2. I was surprised that the API does not provide a default
implementation. A cursory check against the newer APIs returned the same result.
What's the rationale? I ended up writing my own implementation.
How
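For anyone hitting the same gap, a minimal sketch of such an implementation
against the old mapred API of 0.20.2 (class names here are mine, not from any
release; the wrapper simply hands each chunk of the combined split to a
LineRecordReader):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.LineRecordReader;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.lib.CombineFileInputFormat;
import org.apache.hadoop.mapred.lib.CombineFileRecordReader;
import org.apache.hadoop.mapred.lib.CombineFileSplit;

// Concrete CombineFileInputFormat for line-oriented text files.
public class CombineTextInputFormat
    extends CombineFileInputFormat<LongWritable, Text> {

  @Override
  public RecordReader<LongWritable, Text> getRecordReader(
      InputSplit split, JobConf job, Reporter reporter) throws IOException {
    // CombineFileRecordReader constructs one delegate reader per file chunk,
    // looking the delegate class up reflectively (hence the raw cast).
    return new CombineFileRecordReader<LongWritable, Text>(
        job, (CombineFileSplit) split, reporter,
        (Class) LineReaderWrapper.class);
  }

  // Delegate reader; CombineFileRecordReader requires exactly this
  // four-argument constructor signature.
  public static class LineReaderWrapper
      implements RecordReader<LongWritable, Text> {
    private final LineRecordReader delegate;

    public LineReaderWrapper(CombineFileSplit split, Configuration conf,
        Reporter reporter, Integer idx) throws IOException {
      // Re-materialize the idx-th chunk as an ordinary FileSplit.
      FileSplit fileSplit = new FileSplit(split.getPath(idx),
          split.getOffset(idx), split.getLength(idx), (String[]) null);
      delegate = new LineRecordReader(conf, fileSplit);
    }

    public boolean next(LongWritable key, Text value) throws IOException {
      return delegate.next(key, value);
    }
    public LongWritable createKey() { return delegate.createKey(); }
    public Text createValue() { return delegate.createValue(); }
    public long getPos() throws IOException { return delegate.getPos(); }
    public float getProgress() throws IOException { return delegate.getProgress(); }
    public void close() throws IOException { delegate.close(); }
  }
}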
Sky,
Yes, you are right in your summary. Thanks also for reporting back on
the 0.205/1.x issue.
One ugly hack that comes to mind would be to manipulate the
classloader at runtime. I've not tried it out to know for sure if it's
possible/will work, but I just thought I'd note it down.
However, y
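Untested, but the shape of that hack would be a child-first classloader along
these lines, which prefers the job's own jars over whatever the parent
(Hadoop's classpath) has already loaded:

import java.net.URL;
import java.net.URLClassLoader;

// Child-first (parent-last) classloader: tries its own URLs before
// delegating to the parent, inverting the usual delegation order.
public class ChildFirstClassLoader extends URLClassLoader {

  public ChildFirstClassLoader(URL[] urls, ClassLoader parent) {
    super(urls, parent);
  }

  @Override
  protected synchronized Class<?> loadClass(String name, boolean resolve)
      throws ClassNotFoundException {
    Class<?> c = findLoadedClass(name);
    if (c == null) {
      try {
        c = findClass(name);              // try our own URLs first...
      } catch (ClassNotFoundException e) {
        c = super.loadClass(name, false); // ...then fall back to the parent
      }
    }
    if (resolve) {
      resolveClass(c);
    }
    return c;
  }
}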
Thanks for the reply. I appreciate your helpfulness. I created the jars by
following the instructions at http://blog.mafr.de/2010/07/24/maven-hadoop-job/.
So the external jars are stored in the lib/ folder within the jar.
Am I summarizing this correctly:
1. If hadoop version = 0.20.203 or lower - then, the
Hi,
With my configuration in place, I simply do:
"hadoop datanode 2>&1 > /tmp/datanode-$RANDOM.log &" required number
of times. I then track the launched instances with "jps", so I can
send them quit signals when I want to tear them down again.
On Tue, Apr 10, 2012 at 1:52 AM, Barry, Sean F wro
Harsh,
I am interested in adding datanodes just for testing.
I have a few more things I should have said earlier.
My current cluster looks like this, which I set up exactly like the tutorial at
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
except I am runni
Thanks.
This:
>
> FileSystem fs = FileSystem.get(conf);
> DistributedCache.addFileToClassPath(
>     new Path("hdfs://localhost:9000/my.app/batch.jar"), conf, fs);
didn't work, but the uber-jar /lib subfolder did work.
thanks again,
Nick
On Apr 9, 2012, at 2:44 PM, Harsh J wrote:
> Nick,
>
> F
Nick,
For jars, you need to use DistributedCache.addFileToClassPath(…). The
archive-adding function is only to be used when you want the file
you're adding to also be extracted into the task's working directory.
Also, you don't need to use the DistributedFileSystem classes (it's not
supposed to be pub
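In other words, something along these lines inside your Tool (a sketch; the
jar path is the one from your snippet and must already exist in HDFS):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.Tool;

public class MyJob extends Configured implements Tool {
  public int run(String[] args) throws Exception {
    Configuration conf = getConf();
    // Plain FileSystem.get(conf); no need for DistributedFileSystem directly.
    FileSystem fs = FileSystem.get(conf);
    // Puts the HDFS-resident jar on the tasks' classpath (path as in your snippet).
    DistributedCache.addFileToClassPath(
        new Path("hdfs://localhost:9000/my.app/batch.jar"), conf, fs);
    // ... then configure and submit the job with this conf ...
    return 0;
  }
}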
Barry,
Depends on what you'll be testing. If you want more daemons, then yes
you need to add more nodes onto the same box (configs may be tweaked
to achieve this). If you just want MR to provide more slots for tasks,
then a specific task tracker property alone may be edited.
For more daemons, see
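For the slots route, the usual knobs are the TaskTracker slot maximums in
mapred-site.xml on each TT (values below are illustrative; the daemon must be
restarted to pick them up):

<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <!-- concurrent map slots on this TaskTracker; default is 2 -->
  <value>4</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <!-- concurrent reduce slots; default is 2 -->
  <value>4</value>
</property>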
Has anyone faced a similar issue, or does anyone know what the issue might be?
Thanks in advance.
On Thu, Apr 5, 2012 at 10:52 AM, Prashant Kommireddi wrote:
> Thanks Nitin.
>
> I believe the config key you mentioned controls the task attempts logs
> that go under - ${hadoop.log.dir}/userlogs.
>
> The ones that I
Hi,
Using Hadoop 1.0.1, I'm trying to use the DistributedCache to add additional
jars to the classpath used by my Mappers, but I can't get it to work. In the
run(String[] args) method of my Tool implementation, I've tried:
FileSystem fs = DistributedFileSystem.get(conf);
DistributedCache.addArchi
What a well-structured question!
On Sun, Apr 8, 2012 at 6:37 AM, Tom Ferguson wrote:
> Hello,
>
> I'm very new to Hadoop and I am trying to carry out a proof of concept for
> processing some trading data. I am from a .net background, so I am trying
> to prove whether it can be done primarily us
Hi: Well-phrased question. I think you will need to read up on
reducers, and then you will see the light.
1) In your mapper, emit (date, tradeValue) pairs.
2) Then Hadoop will send the following to the reducers:
date1,tradeValues[]
date2,tradeValues[]
...
3) Then, in your reducer, you wi
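A minimal sketch of those steps in Java (new API; the CSV layout and the sum
aggregate are assumptions for illustration only):

import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class TradesByDate {

  // Step 1: emit (date, tradeValue) per input record.
  public static class TradeMapper
      extends Mapper<LongWritable, Text, Text, DoubleWritable> {
    @Override
    protected void map(LongWritable offset, Text line, Context ctx)
        throws IOException, InterruptedException {
      // Assumed input layout: date,symbol,tradeValue
      String[] f = line.toString().split(",");
      ctx.write(new Text(f[0]), new DoubleWritable(Double.parseDouble(f[2])));
    }
  }

  // Steps 2-3: the framework groups values by date; the reducer sees one
  // (date, all trade values for that date) pair at a time and aggregates.
  public static class TradeReducer
      extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
    @Override
    protected void reduce(Text date, Iterable<DoubleWritable> values, Context ctx)
        throws IOException, InterruptedException {
      double total = 0;
      for (DoubleWritable v : values) {
        total += v.get();  // e.g., total traded value for this date
      }
      ctx.write(date, new DoubleWritable(total));
    }
  }
}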
Resending my query below... it didn't seem to post the first time.
Thanks,
Tom
On Apr 8, 2012 11:37 AM, "Tom Ferguson" wrote:
> Hello,
>
> I'm very new to Hadoop and I am trying to carry out a proof of concept
> for processing some trading data. I am from a .net background, so I am
> trying to pro
Hello,
I'm very new to Hadoop and I am trying to carry out a proof of concept for
processing some trading data. I am from a .net background, so I am trying
to prove whether it can be done primarily using C#; therefore I am looking
at the Hadoop Streaming job (from the Hadoop examples) to call in
I start Hadoop and get this error:
2012-04-09 19:06:57,971 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:1091)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.a
Hi all,
I currently have a 2-node cluster up and running. But now I face a new issue:
one of my nodes is running a Datanode and a Tasktracker on a 4-core machine, and
in order to do a bit of proof-of-concept testing I would like to have 4 nodes
running on that particular machine. Does this mean
Your resume has been received and will be used for the next thousand positions...
Thank you for your interest.
On Mon, Apr 9, 2012 at 11:03 AM, Vishal Kumar Gupta wrote:
> hi Sarah,
>
> Please find my updated resume attached with this mail.
>
> Regards,
> vishal
>
>
> 2012/4/9 Bing Li
>
>> A world-renowned large IT company (ranked top 3
TLDR :/
Besides, this isn't a jobs list.
Cos
On Mon, Apr 09, 2012 at 10:59PM, Bing Li wrote:
> A world-renowned large IT company (ranked top 3) is hiring Hadoop experts for its development center (Beijing) - not a headhunter
>
> Job description:
> Hadoop system and platform development (architects, senior developers)
>
>
> Job requirements:
>
> 1. Experience designing and developing large distributed systems (3+ years of experience, 5+ years for architects); large-scale hands-on Hadoop experience preferred
>
> 2. Strong programming and debugging experience (Java or C++/C), a solid theoretical CS foundation, and the ability to learn quickly
I'm sure I speak quite accurately for the moderators: ***This is not a job
board***
Jay Vyas
MMSB
UCHC
On Apr 9, 2012, at 10:03 AM, Vishal Kumar Gupta wrote:
> hi Sarah,
>
> Please find my updated resume attached with this mail.
>
> Regards,
> vishal
>
> 2012/4/9 Bing Li
> A world-renowned large IT company (
hi Sarah,
Please find my updated resume attached with this mail.
Regards,
vishal
2012/4/9 Bing Li
> A world-renowned large IT company (ranked top 3) is hiring Hadoop experts for its development center (Beijing) - not a headhunter
>
> Job description:
> Hadoop system and platform development (architects, senior developers)
>
>
> Job requirements:
>
> 1. Experience designing and developing large distributed systems (3+ years of experience, 5+ years for architects); large-scale hands-on Hadoop experience preferred
>
> 2. Strong programming and debugging experience (Java or C++/C), a solid theoretical CS foundation, and quick
A world-renowned large IT company (ranked top 3) is hiring Hadoop experts for its development center (Beijing) - not a headhunter
Job description:
Hadoop system and platform development (architects, senior developers)
Job requirements:
1. Experience designing and developing large distributed systems (3+ years of experience, 5+ years for architects); large-scale hands-on Hadoop experience preferred
2. Strong programming and debugging experience (Java or C++/C), a solid theoretical CS foundation, and the ability to learn quickly
3. Strong communication and collaboration skills; proficient in English (including spoken)
*We offer a competitive package - welcome aboard*
If interested, please send your resume to: sarah.lib...@gmail.com
Hi, I am using Hadoop and HBase. When I tried to start Hadoop, it started fine,
but when I tried to start HBase, it shows exceptions in the log files: Hadoop
is refusing the connection on port 54310 of localhost. The logs are given
below:
Mon Apr 9 12:28:15 PKT 2012 Starting master on hbase
ulimit
The answer is a bit messy.
Perhaps you can set the environment variable "export
HADOOP_USER_CLASSPATH_FIRST=true" before you do a "hadoop jar …" to
launch your job. However, although this approach is present in
0.20.204+ (0.20.205, and 1.0.x), I am not sure if it makes an impact on
the tasks as well. I