On Sat, Mar 26, 2022, 10:13 Ravikumar Govindarajan <ravikumar.govindara...@gmail.com> wrote:
unsubscribe
Hi,
I have a requirement where I have to send one line of a file to each mapper,
but I am doing it using Hive.
How can we implement the functionality of NLineInputFormat in Hive?
I couldn't find this, and I tried the following configuration in Hive:
set hive.merge.mapfiles=false;
set
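For reference, this is roughly how NLineInputFormat is wired into a plain MapReduce driver (an untested sketch against the newer mapreduce API, with the default identity mapper), in case running it as a separate MR step is an option:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class OneLinePerMapper {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "one line per mapper");
    job.setJarByClass(OneLinePerMapper.class);

    // Each split (and therefore each map task) gets exactly one line.
    job.setInputFormatClass(NLineInputFormat.class);
    NLineInputFormat.setNumLinesPerSplit(job, 1);
    NLineInputFormat.addInputPath(job, new Path(args[0]));

    // No custom mapper or reducer: the identity mapper just passes
    // (offset, line) through, so the splitting behaviour is visible.
    job.setNumReduceTasks(0);
    job.setOutputKeyClass(LongWritable.class);
    job.setOutputValueClass(Text.class);
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}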
Hi,
I am able to find that we have a dedicated API for processing images in Hadoop,
using HIPI.
Why don't we have the same for videos?
Thanks,
Subbu
Hi Team,
I have a file which has semi-structured text data with no definite start
and end points.
How can I send the entire content of the file at once, as key or value, to
the mapper instead of line by line?
Thanks,
Subbu
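One common way to do this (a rough, untested sketch against the newer mapreduce API) is a custom FileInputFormat that never splits a file, plus a RecordReader that emits the whole file content as a single value; the job then only needs job.setInputFormatClass(WholeFileInputFormat.class):

import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// One record per file: the key is ignored, the value is the whole file content.
public class WholeFileInputFormat extends FileInputFormat<NullWritable, BytesWritable> {

  @Override
  protected boolean isSplitable(JobContext context, Path file) {
    return false;   // never split, so one mapper sees the complete file
  }

  @Override
  public RecordReader<NullWritable, BytesWritable> createRecordReader(
      InputSplit split, TaskAttemptContext context) {
    return new WholeFileRecordReader();
  }

  public static class WholeFileRecordReader extends RecordReader<NullWritable, BytesWritable> {
    private FileSplit split;
    private TaskAttemptContext context;
    private final BytesWritable value = new BytesWritable();
    private boolean processed = false;

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context) {
      this.split = (FileSplit) split;
      this.context = context;
    }

    @Override
    public boolean nextKeyValue() throws IOException {
      if (processed) {
        return false;
      }
      // Read the entire file into one BytesWritable value.
      byte[] contents = new byte[(int) split.getLength()];
      Path file = split.getPath();
      FileSystem fs = file.getFileSystem(context.getConfiguration());
      FSDataInputStream in = fs.open(file);
      try {
        IOUtils.readFully(in, contents, 0, contents.length);
        value.set(contents, 0, contents.length);
      } finally {
        IOUtils.closeStream(in);
      }
      processed = true;
      return true;
    }

    @Override
    public NullWritable getCurrentKey() { return NullWritable.get(); }

    @Override
    public BytesWritable getCurrentValue() { return value; }

    @Override
    public float getProgress() { return processed ? 1.0f : 0.0f; }

    @Override
    public void close() { }
  }
}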
Hi,
There is a production cluster which has MapR installed with Hadoop under user A.
I am trying to run a Hadoop job as another user, B.
My job is unable to create output in the filesystem under user B, with the
following error:
13/07/03 09:34:00 INFO mapred.FileInputFormat: Total input
Hi,
I have around 4 jobs running in a controller.
How can I have a single unique counter present in all the jobs and
incremented wherever it is used in a job?
For example: consider a counter ACount.
If job 1 increments the counter by 2, job 3 by 5, and job 4 by 6,
can I have the counter
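As far as I know, counters are scoped to a single job, so the usual workaround is to use the same enum in every job and sum the per-job values in the controller once the jobs finish. A rough sketch (class and counter names made up):

import org.apache.hadoop.mapreduce.Job;

public class CounterAggregation {

  // One counter name shared by every job in the controller.
  public enum MyCounters { ACOUNT }

  // Inside any mapper or reducer, increment it the usual way, e.g.:
  //   context.getCounter(MyCounters.ACOUNT).increment(2);

  // Counters live per job, so after the controller finishes the per-job
  // values have to be read back and summed in the driver.
  public static long totalACount(Job... jobs) throws Exception {
    long total = 0;
    for (Job job : jobs) {
      total += job.getCounters().findCounter(MyCounters.ACOUNT).getValue();
    }
    return total;
  }
}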
what you need here. Take a look
at its workflow running/management features:
http://incubator.apache.org/oozie/
Is this what you're looking for?
On Sat, Jun 30, 2012 at 3:30 PM, Kasi Subrahmanyam
kasisubbu...@gmail.com wrote:
Hi,
If I have few jobs added to a controller, and I explicitly killed a job in
between (assuming all the other jobs failed due
Hi Michael,
The problem for the second question can be solved if you use the
SequenceFileOutputFormat for the first job output and the
SequenceFileInputFormat for the second job input.
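Roughly like this, for example (an untested sketch; the intermediate path and the key/value types are placeholders for whatever your first job actually emits):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class ChainedJobs {
  public static void configure(Configuration conf) throws Exception {
    Path intermediate = new Path("/tmp/intermediate");   // placeholder path

    // Job 1 writes its (Text, LongWritable) output as a binary sequence file.
    Job job1 = Job.getInstance(conf, "first");
    job1.setOutputFormatClass(SequenceFileOutputFormat.class);
    job1.setOutputKeyClass(Text.class);
    job1.setOutputValueClass(LongWritable.class);
    FileOutputFormat.setOutputPath(job1, intermediate);

    // Job 2 reads the same pairs back with the same types, so the second
    // mapper receives (Text, LongWritable) directly instead of parsing text.
    Job job2 = Job.getInstance(conf, "second");
    job2.setInputFormatClass(SequenceFileInputFormat.class);
    FileInputFormat.addInputPath(job2, intermediate);
  }
}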
On Thu, Jun 14, 2012 at 11:11 PM, Michael Parker michael.g.par...@gmail.com
wrote:
Hi all,
One more
Hi Ali,
I also faced this error when I ran the jobs either locally or in a cluster.
I was able to solve this problem by removing the .crc file created in the
input folder for this job.
Please check that there is no .crc file in the input.
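If it helps, a throwaway snippet like the one below (plain Java, assuming the input folder is on the local filesystem) can clear them out:

import java.io.File;

public class RemoveCrcFiles {
  public static void main(String[] args) {
    // Delete stray checksum files such as ".part-00000.crc" from a local input folder.
    File[] files = new File(args[0]).listFiles();
    if (files == null) return;
    for (File f : files) {
      if (f.getName().endsWith(".crc")) {
        System.out.println("deleting " + f);
        f.delete();
      }
    }
  }
}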
I hope this solves the problem.
Thanks,
Subbu
On Wed, May
https://issues.apache.org/jira/browse/MAPREDUCE and post back the ID
on this thread for posterity?
On Thu, May 3, 2012 at 6:25 PM, Kasi Subrahmanyam
kasisubbu...@gmail.com wrote:
Hi,
Could anyone suggest how to get the filename in the mapper?
I have gone through
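For what it is worth, with the newer mapreduce API the usual way is to cast the input split in setup(). An untested sketch, assuming a FileInputFormat-based job:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class FileNameMapper extends Mapper<LongWritable, Text, Text, Text> {
  private String fileName;

  @Override
  protected void setup(Context context) {
    // With FileInputFormat-based inputs, the split is a FileSplit,
    // which carries the path of the file this mapper is reading.
    fileName = ((FileSplit) context.getInputSplit()).getPath().getName();
  }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // Emit the source file name alongside each input line.
    context.write(new Text(fileName), value);
  }
}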
a...@hortonworks.com wrote:
Use OutputCommitter.(abortJob, commitJob):
http://hadoop.apache.org/common/docs/r1.0.2/api/org/apache/hadoop/mapred/OutputCommitter.html
Arun
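A minimal committer sketch (untested, shown with the newer mapreduce API rather than the mapred one in the link above) might look like the following; it gets used once your OutputFormat's getOutputCommitter() returns it:

import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.JobStatus;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;

// commitJob/abortJob run exactly once when the whole job succeeds or fails,
// regardless of how many reduce tasks there are.
public class AfterJobCommitter extends FileOutputCommitter {

  public AfterJobCommitter(Path outputPath, TaskAttemptContext context) throws IOException {
    super(outputPath, context);
  }

  @Override
  public void commitJob(JobContext context) throws IOException {
    super.commitJob(context);
    // "afterJob" work on success goes here, e.g. writing a done marker.
  }

  @Override
  public void abortJob(JobContext context, JobStatus.State state) throws IOException {
    super.abortJob(context, state);
    // cleanup on failure or kill goes here.
  }
}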
On Apr 26, 2012, at 4:44 PM, kasi subrahmanyam wrote:
Hi
I have a few jobs added to a job controller.
I need an afterJob() to be executed
, kasi subrahmanyam kasisubbu...@gmail.com wrote:
Hi Arun,
I can see that the output committer is present in the reducer.
How do I make sure that this committer runs at the end of the job, or does
it run by default at the end of the job?
I can have more than one reducer task.
On Sun, Apr
Hi Onder,
You could try to format the namenode and restart the daemons;
that solved my problem most of the time.
Maybe the running daemons were not able to pick up all the datanode
configurations.
On Sat, Apr 28, 2012 at 4:23 PM, Onder SEZGIN ondersez...@gmail.com wrote:
Hi,
I am
Hi Shirsh,
You need to increase the heap size of the JVM for the changes you made for
mapred.child to take effect.
These changes need to be made in the files of the Hadoop configuration
folder.
In hadoop-env.sh, increase the heap size to maybe 2000. (I think this
property by default will be
Hi Sujit,
I think it is a problem with the host names configuration.
Could you please check whether you added the host names of the master and
the slaves in the /etc/hosts file of all the nodes?
On Mon, Apr 2, 2012 at 8:00 PM, Sujit Dhamale sujitdhamal...@gmail.com wrote:
Can some one please
Hi Pedro, I am not sure we have a single method for reading the data in
output files for different output formats.
But for sequence files we can use the SequenceFile.Reader class in the API to
read the sequence files.
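Something along these lines (an untested sketch) will dump any sequence file, since the key and value classes are recorded in the file header:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.util.ReflectionUtils;

public class SequenceFileDump {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path(args[0]);
    FileSystem fs = FileSystem.get(conf);

    SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
    try {
      // Key/value classes come from the file header, so the reader can
      // instantiate them without knowing the job that wrote the file.
      Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
      Writable value = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);
      while (reader.next(key, value)) {
        System.out.println(key + "\t" + value);
      }
    } finally {
      reader.close();
    }
  }
}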
On Fri, Mar 30, 2012 at 10:49 PM, Pedro Costa psdc1...@gmail.com wrote:
The
Try checking the logs in the logs folder for the datanode. It might give
some lead.
Maybe there is a mismatch between the namespace IDs stored on the datanode
and the ones the namenode expects when the datanode starts.
On Fri, Mar 30, 2012 at 10:32 PM, Ben Cuthbert bencuthb...@ymail.com wrote:
All
We have a master in
Hi Oliver,
I am not sure whether my suggestion will solve your problem, or whether it is
already solved on your side.
It seems the task tracker is having a problem accessing the tmp directory.
Try going to the core-site.xml and mapred-site.xml and changing the tmp
directory to a new one.
If this is not yet working