Hi,
While running a MapReduce task, I get a weird error: the task process
exits with a non-zero status of 126. I tried hunting for it online but
couldn't find any lead. Any pointers?
Regards,
Raakhi
tion.
>
> - Aaron
>
> On Wed, Mar 3, 2010 at 4:13 AM, Rakhi Khatwani
> wrote:
>
> > Hi,
> > I am running a job which has a lot of preprocessing involved, so when
> > I run my class from a jar file, somehow it terminates after some time
> > without gi
Hi,
I am running a job which has a lot of preprocessing involved, so when I
run my class from a jar file, somehow it terminates after some time without
giving any exception.
I have tried running the same program several times, and every time it
terminates at a different location in the code (during
Hi,
Has anyone tried creating a custom InputFormat which reads from a
Solr index for processing with MapReduce? Is it possible, and if so, how?
Regards,
Raakhi
Hi,
I have been trying to implement custom input and output formats. I
was able to create a custom output format, but when I call a MapReduce
method which takes in a file, using the custom input format, I get an
exception:
java.lang.NullPointerException
at Beans.Content.write(Co
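A frequent cause of a NullPointerException inside a custom Writable's write(DataOutput) is calling writeUTF (or a similar DataOutput method) on a field that is still null. Below is a minimal sketch of the usual presence-flag guard. The Content class and its field names here are only assumptions modeled on this thread, and it is shown with plain java.io (no Hadoop dependency) so it runs standalone:

```java
import java.io.*;

// Sketch of the null-guard pattern typically needed in a custom Writable's
// write(DataOutput)/readFields(DataInput) pair. Field names are invented.
public class Content {
    String media, status, link, title;

    // Write a presence flag before each nullable field:
    // DataOutput.writeUTF(null) throws a NullPointerException.
    static void writeNullable(DataOutput out, String s) throws IOException {
        out.writeBoolean(s != null);
        if (s != null) out.writeUTF(s);
    }

    static String readNullable(DataInput in) throws IOException {
        return in.readBoolean() ? in.readUTF() : null;
    }

    void write(DataOutput out) throws IOException {
        writeNullable(out, media);
        writeNullable(out, status);
        writeNullable(out, link);
        writeNullable(out, title);
    }

    void readFields(DataInput in) throws IOException {
        media  = readNullable(in);
        status = readNullable(in);
        link   = readNullable(in);
        title  = readNullable(in);
    }

    public static void main(String[] args) throws IOException {
        Content c = new Content();
        c.media = "video";            // title/status/link left null on purpose:
                                      // a naive writeUTF-based write() would crash here
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        c.write(new DataOutputStream(buf));
        Content d = new Content();
        d.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(d.media + " " + d.title); // video null
    }
}
```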
Hi,
I am writing a MapReduce program which reads a file from HDFS and
stores the contents in a static map (declared and initialized before executing
MapReduce). However, after executing the MapReduce program, my map
contains 0 elements. Is there any way I can make the data persistent in
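The static map comes back empty because each map task runs in its own JVM (often on another machine), so mutations to a static field there never reach the JVM that submitted the job. The usual fix is to write the results out (to HDFS in a real job) and reload them after the job completes. A dependency-free sketch of that save/reload step, using a local file and an invented word-count map:

```java
import java.io.*;
import java.util.*;

// Persist a map to a file and reload it, instead of relying on a static
// field surviving across JVMs. A real job would write to HDFS; a local
// temp file stands in for it here.
public class PersistMap {
    static void save(Map<String, Integer> m, File f) throws IOException {
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream(f))) {
            out.writeInt(m.size());
            for (Map.Entry<String, Integer> e : m.entrySet()) {
                out.writeUTF(e.getKey());
                out.writeInt(e.getValue());
            }
        }
    }

    static Map<String, Integer> load(File f) throws IOException {
        Map<String, Integer> m = new HashMap<>();
        try (DataInputStream in = new DataInputStream(new FileInputStream(f))) {
            int n = in.readInt();
            for (int i = 0; i < n; i++) m.put(in.readUTF(), in.readInt());
        }
        return m;
    }

    public static void main(String[] args) throws IOException {
        Map<String, Integer> counts = new HashMap<>();
        counts.put("hadoop", 3);                       // stand-in for job output
        File f = File.createTempFile("counts", ".bin");
        save(counts, f);
        Map<String, Integer> reloaded = load(f);       // done after the job finishes
        System.out.println(reloaded.get("hadoop")); // 3
    }
}
```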
Hi,
Suppose I have an HDFS file with 10,000 entries, and I want my job to
process 100 records at a time (to minimize loss of data during job
crashes, network errors, etc.). So if a job can read a subset of records from
a file in HDFS, I can combine it with chaining to achieve my objective. For
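One stock way to get fixed-size batches per task in Hadoop of this vintage is NLineInputFormat, which hands every map task the same number of input lines. A configuration sketch against the old org.apache.hadoop.mapred API (MyJob is a hypothetical driver class; this is a fragment, not a complete driver):

```java
// Configuration sketch: NLineInputFormat gives each map task a fixed
// number of input lines, so each task sees a bounded batch of records.
JobConf conf = new JobConf(MyJob.class);   // MyJob is hypothetical
conf.setInputFormat(org.apache.hadoop.mapred.lib.NLineInputFormat.class);
conf.setInt("mapred.line.input.format.linespermap", 100); // 100 records per map
```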
Hi,
What is the difference between a Mapper and a MapRunnable, and how is
each used?
Regards
Raakhi
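Roughly: the framework calls a Mapper's map() once per input record, while a MapRunnable's run() is handed the whole record stream and drives the loop itself (which is what makes things like multithreaded map runners possible). A dependency-free sketch of that contract, with the real org.apache.hadoop.mapred.Mapper and MapRunnable types simplified to plain strings:

```java
import java.util.*;

// Simplified model of the Mapper vs. MapRunnable contract. Real Hadoop
// types (RecordReader, OutputCollector, Reporter) are reduced to an
// Iterator and a List for illustration.
public class MapRunnableSketch {
    // A Mapper is invoked once per record by the framework...
    interface Mapper { void map(String key, String value, List<String> output); }

    static void frameworkDrivenLoop(Mapper m, Iterator<String[]> records, List<String> out) {
        while (records.hasNext()) {
            String[] kv = records.next();
            m.map(kv[0], kv[1], out);   // framework owns the loop
        }
    }

    // ...whereas a MapRunnable receives the record stream itself and owns
    // the loop, so it can batch, thread, or reorder the reads.
    interface MapRunnable { void run(Iterator<String[]> records, List<String> output); }

    public static void main(String[] args) {
        List<String[]> input = Arrays.asList(
                new String[]{"k1", "a"}, new String[]{"k2", "b"});
        List<String> out1 = new ArrayList<>(), out2 = new ArrayList<>();

        Mapper upper = (k, v, out) -> out.add(v.toUpperCase());
        frameworkDrivenLoop(upper, input.iterator(), out1);

        MapRunnable runner = (records, out) -> {
            while (records.hasNext()) out.add(records.next()[1].toUpperCase());
        };
        runner.run(input.iterator(), out2);

        System.out.println(out1 + " " + out2); // [A, B] [A, B]
    }
}
```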
Hi,
I am running a MapReduce program which reads data from a file,
processes it, and writes the output into another file.
I run 4 maps and 4 reduces, and my output is as follows:
09/08/27 17:34:37 INFO mapred.JobClient: Running job: job_200908271142_0026
09/08/27 17:34:38 INFO mapred.JobCli
Hi,
I was trying to run a MapReduce program which reads a text file filled
with some keywords.
My map task takes each of these keywords, does some processing, and returns a
complex object url which contains media, status, link, and title (each being
a string).
My reduce class simply has one line
Hi,
I am not very clear on how the memcache works.
1. When you set the memcache to, say, 1MB, does HBase write all the table
information into some cache memory, and when the size reaches 1MB, write it
into Hadoop, after which the replication takes place?
2. Is there any minimum limit
Hi,
I just wanted to know what happens if we set the replication factor greater
than the number of nodes in the cluster.
For example, I have only 3 nodes in my cluster but I set the replication
factor to 5.
Will it create 3 copies and save one on each node, or can it create more than
one copy per n
It doesn't support job suspension, only the ability to
> kill jobs.
>
> Cheers,
> Tom
>
> On Mon, Jul 20, 2009 at 9:39 AM, Rakhi Khatwani
> wrote:
> > Hi,
> > I have a scenario in which I have a list of 5 jobs, and an event
> > handler, for example when tr
Hi,
I was going through ZooKeeper and am really interested in implementing
it. I am using hadoop-0.19.0 but couldn't find enough documentation to
help me use ZooKeeper with hadoop-0.19.0.
Has anyone tried it with hadoop-0.19.0 or hadoop-0.19.1?
regards,
Raakhi
>         oldStatus = status;
>     }
>     try {
>         Thread.sleep(1000);
>     } catch (InterruptedException e) {
>         // ignore
>     }
> }
>
> Hope this helps.
>
> Tom
>
> On Fri, Jul 17, 2009 at 9:10 AM, Rakhi Khatwani
> wrote:
> >
per will be called with the key, value pair from the
> output.collect,
> when map2.mapper returns, the line of code after the Map1 output.collect
> will execute.
>
> If you pass true instead of false in your example above, a copy of the
> key/value pair will be passed.
>
>
> On
Hi,
I am trying out a simple example in which you have two maps and one
reduce: Map1, Map2, and Reduce1.
Now I want to execute my job in the following fashion:
Map1 -> Map2 -> Reduce1 (the output of Map1 goes into Map2, and the output of
Map2 goes into Reduce1).
I have declared my conf as follows:
JobCo
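A pipeline like this is what ChainMapper/ChainReducer in the old org.apache.hadoop.mapred API are for, which is also what the byValue discussion in the quoted reply above refers to. A configuration sketch (a fragment, not a complete driver): the LongWritable/Text key/value types and the MyDriver class are assumptions; Map1, Map2, and Reduce1 are the classes from the question.

```java
// Chain Map1 -> Map2 -> Reduce1 in one job. Each stage's output key/value
// types must match the next stage's input types; the ones used here are
// assumptions for illustration.
JobConf conf = new JobConf(MyDriver.class);   // MyDriver is hypothetical

JobConf map1Conf = new JobConf(false);
ChainMapper.addMapper(conf, Map1.class,
    LongWritable.class, Text.class, Text.class, Text.class,
    false, map1Conf);                         // byValue = false: pass by reference

JobConf map2Conf = new JobConf(false);
ChainMapper.addMapper(conf, Map2.class,
    Text.class, Text.class, Text.class, Text.class,
    false, map2Conf);

JobConf reduceConf = new JobConf(false);
ChainReducer.setReducer(conf, Reduce1.class,
    Text.class, Text.class, Text.class, Text.class,
    false, reduceConf);
```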
Hi,
I was trying out a MapReduce example using JobControl.
I create a JobConf object conf1 and add the necessary information,
then I create a Job object:
Job job1 = new Job(conf1);
and then I declare a JobControl object as follows:
JobControl jobControl = new JobControl("JobControl1");
r
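For completeness, a sketch of how such a JobControl is usually driven (old org.apache.hadoop.mapred.jobcontrol API; a fragment, not a complete program). conf1 and conf2 are assumed to be fully configured JobConf objects, and JobControl's run() loop has to be pushed onto its own thread:

```java
// JobControl starts a job only once the jobs it depends on have finished,
// and its run() loop is driven from a separate thread while the driver
// polls for completion.
Job job1 = new Job(conf1);
Job job2 = new Job(conf2);
job2.addDependingJob(job1);              // job2 starts only after job1 succeeds

JobControl jobControl = new JobControl("JobControl1");
jobControl.addJob(job1);
jobControl.addJob(job2);

Thread runner = new Thread(jobControl);  // JobControl implements Runnable
runner.start();
while (!jobControl.allFinished()) {
    Thread.sleep(1000);                  // poll until every job has completed
}
jobControl.stop();
```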
> contrib/index/hadoop-0.19.1-index.jar -inputPaths -outputPath
> -indexPath -conf
> src/contrib/index/conf/index-config.xml
>
> I hope this helps.
>
> Regards,
> - Bhushan
>
>
> -----Original Message-----
> From: Rakhi Khatwani [mailto:rakhi.khatw...@gmai
Hi,
I was going through the Hadoop contrib modules and came across
hadoop-index.jar, and was wondering how to use it. There is no help online. I
would greatly appreciate a small tutorial. For example, I have a file which
contains data and I would like to create an index for it. How do I go about
it?
Rega
20 matches