The failure appears to occur in code in the system dynamic linker, which implies either a shared library compatibility problem or a heap shortfall.
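One way to check the shared-library side is to inspect what the native Hadoop library actually links against and which glibc the dynamic linker provides. A minimal sketch; the `libhadoop.so` path is an assumption and should be adjusted to your install:

```shell
# Assumption: typical 0.20-era native library location; adjust as needed.
LIBHADOOP=/usr/lib/hadoop/lib/native/Linux-amd64-64/libhadoop.so
# List the shared libraries libhadoop.so resolves at load time.
[ -f "$LIBHADOOP" ] && ldd "$LIBHADOOP"
# Report the glibc version the system dynamic linker provides.
getconf GNU_LIBC_VERSION
```

Comparing the `ldd` output and glibc version against a machine where libhadoop works would show whether the linker is resolving incompatible library versions.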
On Mon, Oct 26, 2009 at 2:25 PM, Ed Mazur <ma...@cs.umass.edu> wrote:
> Err, disregard that.
>
> $ cat /proc/version
> Linux version 2.6.9-89.0.9.plus.c4smp (mockbu...@builder10.centos.org)
> (gcc version 3.4.6 20060404 (Red Hat 3.4.6-11)) #1 SMP Mon Aug 24
> 09:06:26 EDT 2009
>
> Ed
>
> On Mon, Oct 26, 2009 at 3:23 PM, Ed Mazur <ma...@cs.umass.edu> wrote:
> > $ cat /etc/*-release
> > CentOS release 4.5 (Final)
> > Rocks release 4.3 (Mars Hill)
> >
> > Ed
> >
> > On Mon, Oct 26, 2009 at 11:21 AM, Todd Lipcon <t...@cloudera.com> wrote:
> >> What Linux distro are you running? It seems vaguely possible that you're
> >> using some incompatible library versions compared to what everyone else
> >> has tested libhadoop with.
> >>
> >> -Todd
> >>
> >> On Sun, Oct 25, 2009 at 8:36 PM, Ed Mazur <ma...@cs.umass.edu> wrote:
> >>>
> >>> I'm having problems on 0.20.0 when map output compression is enabled.
> >>> Map tasks complete (TaskRunner: Task 'attempt_*' done), but it looks
> >>> like the JVM running the task crashes immediately after. Here's the
> >>> TaskTracker log:
> >>>
> >>> java.io.IOException: Task process exit with nonzero status of 134.
> >>> at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
> >>>
> >>> An error per task attempt. Each also produces a JRE error report file:
> >>>
> >>> http://pastebin.com/f590087f0
> >>>
> >>> This was using DefaultCodec. I observed similar results with GzipCodec.
> >>>
> >>> Ed Mazur

--
Pro Hadoop, a book to guide you from beginner to hadoop mastery,
http://www.amazon.com/dp/1430219424?tag=jewlerymall
www.prohadoopbook.com a community for Hadoop Professionals
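For what it's worth, the "nonzero status of 134" in the TaskTracker log decodes directly: by shell convention, an exit status above 128 means the process died on signal (status - 128), and 134 - 128 = 6 is SIGABRT. That is consistent with a native abort in the JVM (e.g. from glibc or the dynamic linker) rather than a Java-level exception. A quick sketch of the arithmetic:

```shell
# Exit statuses above 128 encode "killed by signal (status - 128)".
status=134
sig=$((status - 128))
# kill -l translates a signal number to its name; 6 is ABRT.
kill -l "$sig"
```

Running this prints the signal name, confirming the child JVM was aborted rather than exiting cleanly with an error code.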