I found the solution to the problem!
As Harsh said, I wasn't using FileOutputCommitter, because that committer is
not used by TableOutputFormat.
I changed the TableOutputFormat class to use this committer and now everything
works great!
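Roughly, the change amounts to making the output format hand back a
FileOutputCommitter instead of the no-op TableOutputCommitter. A minimal
sketch (the subclass name is illustrative, and it assumes the job's output
path was set with FileOutputFormat.setOutputPath):

    import java.io.IOException;

    import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
    import org.apache.hadoop.mapreduce.OutputCommitter;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Sketch: a TableOutputFormat whose committer promotes side files
    // (e.g. MultipleOutputs files) from the attempt directory into the
    // job output directory on commit.
    public class CommittingTableOutputFormat<KEY> extends TableOutputFormat<KEY> {
        @Override
        public OutputCommitter getOutputCommitter(TaskAttemptContext context)
                throws IOException, InterruptedException {
            // The stock TableOutputFormat returns a no-op committer, so files
            // written under _temporary/attempt_* are never moved on commit.
            return new FileOutputCommitter(
                    FileOutputFormat.getOutputPath(context), context);
        }
    }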
Thank you for your help!!!
From: antonopoulos...@hotmail.com
I forgot to mention that there is no _SUCCESS file in the output directory.
Thanks for your replies!
I use TableOutputFormat to delete entries from an HBase table and MO for
HFileOutputFormat.
Until yesterday I used plain HFileOutputFormat output (not MO) and the files
appeared in the output directory and not in the temporary folder.
I performed the deletes manually
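For reference, the delete side follows the usual TableReducer pattern, along
these lines (a sketch; the reducer name and the Text input value type are
illustrative):

    import java.io.IOException;

    import org.apache.hadoop.hbase.client.Delete;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableReducer;
    import org.apache.hadoop.io.Text;

    // Sketch: TableOutputFormat accepts Delete as well as Put, so deletes
    // can be emitted straight from the reducer.
    public class DeleteEntriesReducer
            extends TableReducer<ImmutableBytesWritable, Text, ImmutableBytesWritable> {
        @Override
        protected void reduce(ImmutableBytesWritable rowKey, Iterable<Text> values,
                Context context) throws IOException, InterruptedException {
            // Emit a Delete for the row; TableOutputFormat applies it to the table.
            context.write(rowKey, new Delete(rowKey.get()));
        }
    }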
Panayotis,
I've not seen this happen yet. I've regularly used MO to write my files,
and both TextFileO/F and NullO/F have worked fine despite my never writing
a byte to their collectors. In fact, the MO test case also passes when I
modify it to never emit to the default output sink.
Are you using
On 05/30/2011 11:02 AM, Panayotis Antonopoulos wrote:
> Hello,
> I just noticed that files created using MultipleOutputs remain in attempt
> sub-folders of the temporary folder when there is no normal output (via
> context.write(...)).
> Has anyone else noticed that?
> Is there any way
On Mon, 30 May 2011 09:43:14 -0700, Alejandro Abdelnur wrote:
> If you still want to start your MR job from your Java action, then your
> Java action should do all the setup the MapReduceMain class does before
> starting the MR job (this will ensure delegation tokens and the
> distributed cache are available)
John,
Now I get what you are trying to do.
My recommendation would be:
* Use a Java action to do all the stuff prior to starting your MR job
* Use a mapreduce action to start your MR job
* If you need to propagate properties from the Java action to the MR action
you can use the <capture-output/> flag (see the sketch below).
If you still want to start your MR job from your Java action, then your Java
action should do all the setup the MapReduceMain class does before starting
the MR job (this will ensure delegation tokens and the distributed cache are
available).
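A minimal sketch of that handoff (the property and class names are
illustrative): with <capture-output/> set on the Java action, Oozie passes
the location of a properties file through the oozie.action.output.properties
system property, and whatever the main class stores there becomes visible to
later actions:

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.OutputStream;
    import java.util.Properties;

    // Sketch: a Java action main that hands values to downstream actions.
    public class SetupMain {
        public static void main(String[] args) throws Exception {
            // ... setup work prior to the MR job goes here ...

            Properties props = new Properties();
            props.setProperty("computed.input.dir", "/user/wf/input"); // illustrative

            // Oozie supplies the capture-output file location as a system property.
            File outFile = new File(System.getProperty("oozie.action.output.properties"));
            OutputStream os = new FileOutputStream(outFile);
            props.store(os, "");
            os.close();
        }
    }

The downstream mapreduce action can then read the value with
${wf:actionData('setup-node')['computed.input.dir']} (node name illustrative).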
Hello,
I just noticed that files created using MultipleOutputs remain in attempt
sub-folders of the temporary folder when there is no normal output (via
context.write(...)).
Has anyone else noticed that?
Is there any way to change that and make the files appear in the output
directory?
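A simplified sketch of the pattern in question (the names and the plain Text
types are illustrative; the real job writes HFiles through MO): every record
goes to a named output declared in the driver with
MultipleOutputs.addNamedOutput(job, "side", TextOutputFormat.class,
Text.class, Text.class), and context.write() is never called:

    import java.io.IOException;

    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

    // Sketch: a reducer that writes only to a MultipleOutputs named output,
    // never to the default sink via context.write().
    public class SideOnlyReducer extends Reducer<Text, Text, NullWritable, NullWritable> {
        private MultipleOutputs<NullWritable, NullWritable> mos;

        @Override
        protected void setup(Context context) {
            mos = new MultipleOutputs<NullWritable, NullWritable>(context);
        }

        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            for (Text value : values) {
                mos.write("side", key, value); // named output only
            }
        }

        @Override
        protected void cleanup(Context context) throws IOException, InterruptedException {
            mos.close(); // otherwise the side files never leave the attempt directory
        }
    }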
Praveen,
Yes, in the 0.21.x series the jars are broken down by the project they
belong to (Common, HDFS, and MapReduce). This was due to the split of the
Hadoop project into subprojects.
In 0.20.x it is one single jar containing the class files of all three
projects (the hadoop-core jar).
Hi,
I have extracted the hadoop-0.20.2, hadoop-0.20.203.0 and hadoop-0.21.0
files.
In the hadoop-0.21.0 folder the hadoop-hdfs-0.21.0.jar,
hadoop-mapred-0.21.0.jar and the hadoop-common-0.21.0.jar files are there.
But in the hadoop-0.20.2 and hadoop-0.20.203.0 releases the same files are
missing.
On Fri, 27 May 2011 15:47:23 -0700, Alejandro Abdelnur wrote:
> John,
>
> If you are using Oozie, dropping all the JARs your MR jobs need in the
> Oozie WF lib/ directory should suffice. Oozie will make sure all those
> JARs are in the distributed cache.
That doesn't seem to work. I have this
I made a mistake: of course SpecificRecord is in the Cassandra API, but my
question is: why do I have this problem? Or is there another way to write
back the result?
Thanks
2011/5/30 Laurent Hatier
> Hi everybody,
>
> I have a little problem with the cassandra-all jar file: when I want to
Hi everybody,
I have a little problem with the cassandra-all jar file: when I want to
write the result of the MapReduce back to the DB, it tells me that the
SpecificRecord class (Hector API) is not found... I have already checked
this dependency and it's OK. Do I have to use the Cassandra API, or is it
a tech
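For what it's worth, one common cause of this symptom is that the jar is on
the client classpath but never ships to the map/reduce tasks; a hedged
sketch of shipping the dependencies through the distributed cache (the HDFS
paths, class name, and job name are hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;

    // Sketch: ship dependency jars to the tasks so classes such as
    // SpecificRecord resolve at runtime. Paths are hypothetical.
    public class Driver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            DistributedCache.addFileToClassPath(new Path("/libs/cassandra-all.jar"), conf);
            DistributedCache.addFileToClassPath(new Path("/libs/hector-core.jar"), conf);

            Job job = new Job(conf, "write-back-to-cassandra");
            // ... mapper/reducer/output setup, then job.waitForCompletion(true) ...
        }
    }

The same effect is available from the command line via -libjars when the
driver goes through GenericOptionsParser/Tool.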