That is the reason you see two datanode log files: one with the old host
name (which got created before the restart) and another with the new host name.
Thanks
Devaraj k
-Original Message-
From: manishbh...@rocketmail.com [mailto:manishbh...@rocketmail.com]
Sent: 25 July 2013 13:52
To
You can use the 'ps -ef | grep datanode' shell command to know how many
datanode processes are running at this moment.
Thanks
Devaraj k
-Original Message-
From: Manish Bhoge [mailto:manishbh...@rocketmail.com]
Sent: 25 July 2013 12:56
To: common-user@hadoop.apache.org
Subject: Re: Multiple data no
Hi Manish,
Can you check how many datanode processes are really running on the machine
using the 'jps' or 'ps' command?
Thanks
Devaraj k
-Original Message-
From: Manish Bhoge [mailto:manishbh...@rocketmail.com]
Sent: 25 July 2013 12:29
To: common-user@hadoo
Hi,
You could send the file's meta info (e.g. its path) to the map function as the
key/value through the split, and then read the entire file inside your map function.
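A minimal sketch of that idea, assuming the split delivers the file path as the map value (the class name and layout below are illustrative, not from this thread):

import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper: the incoming value is assumed to hold a file path,
// and the whole file is read inside map().
public class WholeFileMapper extends Mapper<LongWritable, Text, Text, Text> {

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    Path file = new Path(value.toString());
    FileSystem fs = file.getFileSystem(context.getConfiguration());
    FSDataInputStream in = fs.open(file);
    try {
      byte[] contents = new byte[(int) fs.getFileStatus(file).getLen()];
      IOUtils.readFully(in, contents, 0, contents.length);
      // Emit the path as the key and the whole file content as the value.
      context.write(value, new Text(contents));
    } finally {
      IOUtils.closeStream(in);
    }
  }
}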
Thanks
Devaraj k
-Original Message-
From: Kasi Subrahmanyam [mailto:kasisubbu...@gmail.com]
Sent: 11 July 2013 13:38
To: common-user
Hi Kasi,
I think the MapR mailing list is a better place to ask this question.
Thanks
Devaraj k
From: Kasi Subrahmanyam [mailto:kasisubbu...@gmail.com]
Sent: 04 July 2013 08:49
To: common-user@hadoop.apache.org; mapreduce-u...@hadoop.apache.org
Subject: Output Directory not getting created
Hi
Can you share the exception stack trace and the piece of code where you are
trying to create the output directory?
Thanks
Devaraj
From: Ondřej Klimpera [klimp...@fit.cvut.cz]
Sent: Tuesday, June 19, 2012 6:03 PM
To: common-user@hadoop.apache.org
Subject: Creating MapFile.Reader
By default it uses TextOutputFormat (a subclass of FileOutputFormat), which
checks for the output path.
You can use NullOutputFormat or your own custom output format which doesn't do
anything for your job.
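For example, a minimal sketch of wiring that in with the new API (the job name is illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class NullOutputExample {
  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "no-output-job");
    // NullOutputFormat discards all output, so no output directory is
    // required and FileOutputFormat's output-path check never runs.
    job.setOutputFormatClass(NullOutputFormat.class);
    // ... set mapper/reducer and input format as usual, then submit.
  }
}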
Thanks
Devaraj
From: huanchen.zhang [huanchen.zh...@i
o put something special to the context to specify
the "empty" output?
Regards
Murat
On Mon, Jun 4, 2012 at 2:38 PM, Devaraj k wrote:
> Hi Murat,
>
> As Praveenesh explained, you can control the map outputs as you want.
>
> map() function will be called for each input i.e map
Hi Murat,
As Praveenesh explained, you can control the map outputs as you want.
The map() function will be called for each input, i.e. the map() function is
invoked multiple times with different inputs in the same mapper. You can add
logs in the map function to check what is happening in it.
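A minimal sketch of that, with logging and a condition under which nothing is written (so those records contribute an "empty" output); the class and condition are illustrative:

import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LoggingMapper extends Mapper<LongWritable, Text, Text, Text> {
  private static final Log LOG = LogFactory.getLog(LoggingMapper.class);

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // map() is invoked once per input record of the split.
    LOG.info("map() called with key=" + key + " value=" + value);
    if (value.getLength() == 0) {
      return;  // write nothing: this record produces no output at all
    }
    context.write(new Text("seen"), value);
  }
}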
Thanks
If you don't specify a grouping comparator for your job, it uses the output key
comparator class for grouping.
This comparator should be provided if the equivalence rules for grouping keys
differ from those used for sorting the intermediate keys.
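A minimal sketch of supplying a separate grouping comparator; the "primary|secondary" key layout is just an assumption for illustration:

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

// Groups keys of the form "primary|secondary" by the primary part only,
// while the output key comparator can still sort on the full key.
public class PrimaryGroupingComparator extends WritableComparator {
  public PrimaryGroupingComparator() {
    super(Text.class, true);
  }

  @Override
  public int compare(WritableComparable a, WritableComparable b) {
    String left = a.toString().split("\\|")[0];
    String right = b.toString().split("\\|")[0];
    return left.compareTo(right);
  }
}

// In the job setup:
//   job.setGroupingComparatorClass(PrimaryGroupingComparator.class);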
Thanks
Devaraj
__
Can you check whether the ValueCollection.write(DataOutput) method is writing
exactly what you expect to read back in the readFields() method?
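For reference, a minimal sketch of a custom Writable whose write() and readFields() stay symmetric (the field layout is illustrative, not the original ValueCollection):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.Writable;

public class SimpleValueCollection implements Writable {
  private List<String> values = new ArrayList<String>();

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeInt(values.size());          // write the count first...
    for (String v : values) {
      out.writeUTF(v);                    // ...then each element
    }
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    values.clear();                       // Writables are reused; reset state
    int count = in.readInt();             // read back in exactly the same order
    for (int i = 0; i < count; i++) {
      values.add(in.readUTF());
    }
  }
}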
Thanks
Devaraj
From: Arpit Wanchoo [arpit.wanc...@guavus.com]
Sent: Thursday, May 31, 2012 2:57 PM
To:
Subject: Re: MapReduce c
>1) I am not sure that whether I should start the rebalance on the namenode or
>on each new datanode.
You can run the balancer on any node. It is not recommended to run it on the
namenode; it is better to run it on a node which has less load.
>2) should I set the bandwidth on each datanode or just onl
Hi Gump,
MapReduce fits well for solving these types of (join) problems.
I hope this will help you to solve the described problem:
1. Map output key and value classes: write a map output key class (Text.class)
and a value class (CombinedValue.class). Here the value class should be able to
hold the values from both inputs (a sketch follows below).
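A minimal sketch of what such a combined value class could look like; the tag field and layout are an assumption for illustration, not the original code:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

// Holds a value from either side of the join plus a tag saying which
// input it came from, so the reducer can separate the two sides.
public class CombinedValue implements Writable {
  private int tag;             // e.g. 0 = left input, 1 = right input
  private String value = "";

  public CombinedValue() { }

  public CombinedValue(int tag, String value) {
    this.tag = tag;
    this.value = value;
  }

  public int getTag() { return tag; }
  public String getValue() { return value; }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeInt(tag);
    out.writeUTF(value);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    tag = in.readInt();
    value = in.readUTF();
  }
}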
Hi John,
You can extend FileInputFormat (or implement InputFormat) and then you need to
implement the below methods.
1. InputSplit[] getSplits(JobConf job, int numSplits): for splitting the
input files logically for the job. If FileInputFormat.getSplits(JobConf job,
int numSplits) suits your need, you can reuse it as is (a sketch follows below).
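A minimal sketch with the old mapred API that the reply refers to, reusing the inherited getSplits() and only supplying a record reader; the class is illustrative:

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.LineRecordReader;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;

// Keeps FileInputFormat.getSplits() as-is and only supplies the reader.
public class MyInputFormat extends FileInputFormat<LongWritable, Text> {

  @Override
  public RecordReader<LongWritable, Text> getRecordReader(
      InputSplit split, JobConf job, Reporter reporter) throws IOException {
    reporter.setStatus(split.toString());
    // Delegates to the standard line reader; replace with custom parsing
    // logic if the records are not line-oriented.
    return new LineRecordReader(job, (FileSplit) split);
  }
}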
Hi Tousif,
You can kill the running job using the killJob() client API.
If you want to kill the job from within the job itself, you can get the job id
from the task attempt id in the map() or reduce() functions, and then invoke the
killJob() API based on your condition (see the sketch below).
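A minimal sketch of that idea with the old mapred API, assuming the task reads its own attempt id from the "mapred.task.id" property; error handling is left out:

import java.io.IOException;

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.TaskAttemptID;

public class JobKiller {
  // Call from map() or reduce() once the kill condition is met,
  // passing the task's JobConf.
  public static void killOwnJob(JobConf conf) throws IOException {
    String attemptId = conf.get("mapred.task.id");
    // Derive the job id from the task attempt id.
    String jobIdStr = TaskAttemptID.forName(attemptId).getJobID().toString();
    RunningJob running = new JobClient(conf).getJob(JobID.forName(jobIdStr));
    if (running != null) {
      running.killJob();
    }
  }
}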
Thanks
Devaraj
__
Hi Wang Ruijun,
You can do it this way (see the sketch below):
1. Set the value in the job configuration with some property name before
submitting the job.
2. Get the value in the map() function using the same property name from the
configuration, and then perform your business logic.
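A minimal sketch of both sides; the property name and values are illustrative:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;

public class ConfigPassingExample {

  public static class MyMapper extends Mapper<LongWritable, Text, Text, Text> {
    private String threshold;

    @Override
    protected void setup(Context context) {
      // 2. Read the value back from the configuration inside the task.
      threshold = context.getConfiguration().get("my.app.threshold", "10");
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      // ... use 'threshold' in the business logic ...
      context.write(new Text(threshold), value);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // 1. Set the value before submitting the job.
    conf.set("my.app.threshold", "42");
    Job job = new Job(conf, "config-passing");
    job.setMapperClass(MyMapper.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    // ... set input/output formats and paths as usual ...
    job.waitForCompletion(true);
  }
}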
Thanks
Devaraj
tand it now. But, is it possible to write a program using the
JobClient to submit the hadoop job?
To do that I have to create a JobConf manually. Am I thinking right?
Arindam
On Wed, Apr 25, 2012 at 10:56 AM, Devaraj k wrote:
> Hi Arindam,
>
>hadoop jar jarFileName MainClassName
&
Hi Arindam,
hadoop jar jarFileName MainClassName
The above command will not submit the job by itself. This command only executes
the jar file using the main class (the Main-Class present in the manifest info
if available, otherwise the class name, i.e. MainClassName in the above command,
passed as an argument). If w
Hi Sunil,
Please check HarFileSystem (Hadoop Archive FileSystem); it will be useful
for solving your problem.
Thanks
Devaraj
From: Sunil S Nandihalli [sunil.nandiha...@gmail.com]
Sent: Tuesday, April 24, 2012 7:12 PM
To: common-user@hadoop.apache.org
Sub
Hi Lac,
As per my understanding of your problem description, you need to do the
below things.
1. Mapper: Write a mapper which reads records from the input files and converts
them into keys and values. Here the key should contain teacher id, class id and
number of students; the value can be empty (or null). A sketch follows below.
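A minimal sketch of such a mapper, assuming tab-separated input where the first three fields are teacher id, class id and number of students (the field layout is an assumption):

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TeacherClassMapper
    extends Mapper<LongWritable, Text, Text, NullWritable> {

  @Override
  protected void map(LongWritable offset, Text line, Context context)
      throws IOException, InterruptedException {
    String[] fields = line.toString().split("\t");
    if (fields.length < 3) {
      return;  // skip malformed records
    }
    // Composite key: teacherId, classId, number of students; empty value.
    Text key = new Text(fields[0] + "\t" + fields[1] + "\t" + fields[2]);
    context.write(key, NullWritable.get());
  }
}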
Hi Sujit,
Can you check the JobTracker logs for the job_201204082039_0002 related info?
There you can find out the status/error.
If you share the job_201204082039_0002 related info from the JobTracker/TaskTracker,
I can help better.
Thanks
Devaraj
From
Hi Arun,
You can enable rack awareness for your hadoop cluster by configuring
the "topology.script.file.name" property.
Please go through this link for more details about rack awareness.
http://hadoop.apache.org/common/docs/r0.19.2/cluster_setup.html#Hadoop+Rack+Awareness
1.getValue());
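For reference, a minimal sketch of reading a job counter through the old client API, which is one way such a getValue() call can be reached; the job id, group and counter names are placeholders:

import java.io.IOException;

import org.apache.hadoop.mapred.Counters;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

public class CounterReader {
  public static long readCounter(JobConf conf, String jobIdStr)
      throws IOException {
    RunningJob job = new JobClient(conf).getJob(JobID.forName(jobIdStr));
    Counters counters = job.getCounters();
    // Same counters that the web GUI displays, looked up by group and name.
    return counters.findCounter("org.apache.hadoop.mapred.Task$Counter",
                                "MAP_INPUT_RECORDS").getValue();
  }
}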
Devaraj K
-Original Message-
From: ArunKumar [mailto:arunk...@gmail.com]
Sent: Sunday, December 11, 2011 12:15 PM
To: hadoop-u...@lucene.apache.org
Subject: Accessing Job counters displayed in WEB GUI in Hadoop Code
Hi guys!
Can I access the Job counters displayed in
Can you try increasing the max heap memory and check whether you still face the
problem?
Devaraj K
-Original Message-
From: Niranjan Balasubramanian [mailto:niran...@cs.washington.edu]
Sent: Thursday, December 08, 2011 11:09 PM
To: common-user@hadoop.apache.org
Subject: Re: OOM Error Map
Which version of hadoop are you using?
Devaraj K
-Original Message-
From: Niranjan Balasubramanian [mailto:niran...@cs.washington.edu]
Sent: Thursday, December 08, 2011 12:21 AM
To: common-user@hadoop.apache.org
Subject: OOM Error Map output copy.
All
I am encountering the following out-of-mem
t for DBInputFormat; it supports only the input
formats which use a file path as the input path.
If you explain your use case in more detail, I may be able to help you better.
Devaraj K
-Original Message-
From: Praveen Sripati [mailto:praveensrip...@gmail.com]
Sent: Tuesday, December 06, 20
For submitting this, you need to add the hadoop jar files and configuration
files to the class path of the application from where you want to submit the job.
You can refer to these docs for more info on the Job APIs:
http://hadoop.apache.org/mapreduce/docs/current/api/org/apache/hadoop/mapreduce/Job.html
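A minimal sketch of submitting a job programmatically through that Job API; the job name and paths are illustrative, and the mapper/reducer setup is left as a comment:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SubmitFromCode {
  public static void main(String[] args) throws Exception {
    // The Configuration picks up the cluster's *-site.xml files from the
    // classpath, which is why the configuration files and hadoop jars must be
    // on the submitting application's classpath.
    Configuration conf = new Configuration();
    Job job = new Job(conf, "submitted-from-code");
    job.setJarByClass(SubmitFromCode.class);
    // job.setMapperClass(...) / job.setReducerClass(...) as needed
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path("in"));
    FileOutputFormat.setOutputPath(job, new Path("out"));
    job.submit();  // or job.waitForCompletion(true) to block until it finishes
  }
}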
Devaraj K
-Original Message-
From: Oleg Ruchovets [mailto:o
w Path("in"));
job.setOutputPath(new Path("out"));
job.setMapperClass(MyJob.MyMapper.class);
job.setReducerClass(MyJob.MyReducer.class);
// Submit the job, then poll for progress until the job is complete
JobClient.runJob(job);
I hope this helps.
Hi Bharath,
There are a few reasons that can cause this problem. I have listed below some
reasons with solutions; this might help you to solve it. If you post the logs,
the problem can be figured out.
Reason 1:
It could be that the mapping in the /etc/hosts file is not present.
The DNS server is d
ns or any other reason why it is failing to create the dir.
Devaraj K
-Original Message-
From: arun k [mailto:arunk...@gmail.com]
Sent: Thursday, September 22, 2011 3:57 PM
To: common-user@hadoop.apache.org
Subject: Re: Making Mumak work with capacity scheduler
Hi Uma !
u got me ri
Send a mail to common-user-unsubscr...@hadoop.apache.org from your mail account
to unsubscribe.
http://hadoop.apache.org/common/mailing_lists.html
Devaraj K
-Original Message-
From: Hulme, Jill [mailto:jhu
/mapred/RunningJob.html#killTask(org.apache.hadoop.mapred.TaskAttemptID, boolean)
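A minimal sketch of calling that API; the job and task attempt ids below are placeholders:

import java.io.IOException;

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.TaskAttemptID;

public class TaskKiller {
  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf();
    RunningJob job = new JobClient(conf)
        .getJob(JobID.forName("job_201108040000_0001"));   // placeholder id
    if (job != null) {
      // Second argument: false = just kill the attempt, true = mark it failed.
      job.killTask(
          TaskAttemptID.forName("attempt_201108040000_0001_m_000003_0"), false);
    }
  }
}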
Devaraj K
-Original Message-
From: Aleksandr Elbakyan [mailto:ramal...@yahoo.com]
Sent: Thursday, August 04, 2011 5:10 AM
To: common-user@hadoop.apache.org
Subject: Re: Kill Task Programmatically
Hello
Daniel, you can find those stdout statements in the "{LOG
Directory}/userlogs/{task attempt id}/stdout" file.
In the same way, you can find stderr statements in "{LOG Directory}/userlogs/{task
attempt id}/stderr" and log statements in "{LOG Directory}/userlogs/{task
attem
Madhu,
Can you check the client logs to see whether any error/exception is coming while
submitting the job?
Devaraj K
-Original Message-
From: Harsh J [mailto:ha...@cloudera.com]
Sent: Tuesday, July 26, 2011 5:01 PM
To: common-user@hadoop.apache.org
Subject: Re: Submitting and running
mit();
For submitting this, you need to add the hadoop jar files and configuration
files to the class path of the application from where you want to submit the
job.
You can refer to these docs for more info on the Job APIs:
http://hadoop.apache.org/mapreduce/docs/current/api/org/apache/had
the compiled JSP
files are not coming into the Java classpath.
Devaraj K
-Original Message-
From: Adarsh Sharma [mailto:adarsh.sha...@orkash.com]
Sent: Thursday, July 14, 2011 6:32 PM
To: common-user@hadoop.apache.org
Subject: Re: HTTP Error
Any update on the HTTP Error : Still the
Hi Teng,
As per the exception stack trace, it is not invoking the TaskMapper.map()
method; it is invoking the default Mapper.map() method instead. Can you recheck
the configurations and the job code to see whether they have been copied properly?
Devaraj K
Class,
Class mapperClass)
Devaraj K
-
-core jar in your eclipse plug-in with the jar
that the hadoop cluster is using, and check.
Devaraj K
_
From: praveenesh kumar [mailto:praveen...@gmail.com]
Sent: Wednesday, June 22, 2011 12:07 PM
To: common-user@hadoop.apache.org; devara...@huawei.com
Subject: Re: Hadoop eclipse p
will work fine.
Devaraj K
-Original Message-
From: praveenesh kumar [mailto:praveen...@gmail.com]
Sent: Wednesday, June 22, 2011 11:25 AM
To: common-user@hadoop.apache.org
Subject: Hadoop eclipse plugin stopped working after replacing hadoop-0.20.2
jar files with hadoop-0.20-append jar
Drew,
Running hadoop on Windows is not supported.
You can build/test hadoop on Windows using Cygwin, but if you want to run it
there, you will face many problems.
Devaraj K
-
From: Drew Gross [mailto:drew.a.gr...@gmail.com
Hi Daniel,
We also faced this problem when we tried to build a hadoop component using a
proxy internet connection. We were able to build it after making this change in
the ivy source, i.e. changing the request method from HEAD to GET.
Class Name :
apache-ivy-2.2.0\src\java\org\apache\ivy\util\url\B