Aah! I had only ever thought about setting io.serializations at the job
level; this never occurred to me. Will try the site-wide approach. Thanks again.
On 28 Jul 2012 06:16, "Harsh J" wrote:
Ah, that may be because the core-site.xml has the io.serializations
property fully defined for Gora as well. You can use that as an
alternative fix: supply a core-site.xml across the tasktrackers that
also carries the serialization class Gora requires. I failed to think of
that as a solution.
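A site-wide fix along these lines might look like the fragment below, placed in the core-site.xml shipped to every tasktracker. This is a hedged sketch: the Gora class names are assumptions based on gora-core and should be verified against the Gora version in use.

```xml
<!-- Sketch only: keep Hadoop's default Writable serialization and append
     the serializations Gora needs. Class names are assumptions. -->
<property>
  <name>io.serializations</name>
  <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.gora.mapreduce.StringSerialization,org.apache.gora.mapreduce.PersistentSerialization</value>
</property>
```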
Okay. But this issue didn't present itself when run in standalone mode. :)
On 28 Jul 2012 06:02, "Harsh J" wrote:
I find it easier to run jobs via MRUnit (http://mrunit.apache.org,
TDD) first, or via LocalJobRunner, for debug purposes.
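For the LocalJobRunner route, a minimal configuration sketch is below. The property name shown is the Hadoop 1.x one; on MR2/YARN the equivalent is setting mapreduce.framework.name to "local". Treat this as an assumption to check against your Hadoop version.

```xml
<!-- mapred-site.xml sketch: run the job in-process via LocalJobRunner
     for debugging (Hadoop 1.x property name; MR2 uses
     mapreduce.framework.name=local instead). -->
<property>
  <name>mapred.job.tracker</name>
  <value>local</value>
</property>
```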
On Sat, Jul 28, 2012 at 5:53 AM, Sriram Ramachandrasekaran
wrote:
Hello Harsh,
Thanks for your investigation. While we were debugging, I saw the exact
same thing. As you pointed out, we suspected it to be the problem, so we
set the job conf object directly on Gora's query object.
It goes something like this:
query.setConf..(job.getConfig..())
And then I saw that it
Hi Sriram,
I suspect the following in Gora to somehow be causing this issue:
IOUtils source:
http://svn.apache.org/viewvc/gora/trunk/gora-core/src/main/java/org/apache/gora/util/IOUtils.java?view=markup
QueryBase source:
http://svn.apache.org/viewvc/gora/trunk/gora-core/src/main/java/org/apache/g
Bejoy,
Thanks a lot for your response. You are right. The problem is with the
misconfigured nproc on the OS.
Originally, my limits.conf file was something like this:
* hard nofile 100
* soft nofile 100
* hard nproc 32
* soft nproc 32
but for some reason l
Hi Ben
This error happens when the MapReduce job spawns more processes than
the underlying OS allows. You need to increase the nproc value if it is
still at the default.
You can get the current value on Linux using
ulimit -u
The default is 1024 I guess. Check that for the user that r
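The check above can be scripted as follows. The limits.conf lines in the comment are illustrative values for a hypothetical "hadoop" user, not a tuning recommendation.

```shell
# Print the max user processes (nproc) limit for the current user.
current=$(ulimit -u)
echo "max user processes: $current"

# If this is still the distro default, raise it by adding lines like
# these to /etc/security/limits.conf (user and values are illustrative):
#   hadoop hard nproc 32768
#   hadoop soft nproc 32768
```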
Hi
I'm having a similar problem, so I'll continue on this mailing list to
describe my issue.
I ran an MR job that takes 70GB of input and creates 1098 mappers and 100
reducers to process tasks (on a 9-node Hadoop cluster),
but the job fails and 4 datanodes die after a few minutes (processes are
still running, bu
Hello,
I have an MR job that talks to HBase; I use Gora to talk to HBase. Gora
also provides a couple of classes that can be extended to write Mappers
and Reducers when the mappers need input from an HBase store and the
reducers need to write it out to an HBase store. This is the reason why
I use Gora.