I'm running a job in local mode, and have found that it returns immediately,
switching the job state to FAILURE.
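For context, the driver follows the stock local-mode pattern; here is a minimal
sketch of that setup (the class name, paths, and job name are hypothetical,
using the Hadoop 1.x API to match the job_local* directories below):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class LocalModeDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("mapred.job.tracker", "local");  // in-process LocalJobRunner
        conf.set("fs.default.name", "file:///");  // local filesystem, no HDFS

        Job job = new Job(conf, "local-test");
        job.setJarByClass(LocalModeDriver.class);
        // No mapper/reducer set, so the identity Mapper/Reducer run;
        // the default TextInputFormat emits LongWritable/Text pairs.
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path("/tmp/input"));
        FileOutputFormat.setOutputPath(job, new Path("/tmp/output"));

        // verbose=true prints progress and counters to stdout
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}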

From the /tmp/hadoop-jay directory, I can see that an attempt was clearly made
to run the job, and that some files were created... but I don't see any clues.

├── [        102]  local
│   └── [        102]  localRunner
│       └── [        170]  jay
│           ├── [         68]  job_local1531736937_0001
│           ├── [         68]  job_local218993552_0002
│           └── [        136]  jobcache
│               ├── [        102]  job_local1531736937_0001
│               │   └── [        102]  attempt_local1531736937_0001_m_000000_0
│               │       └── [        136]  output
│               │           ├── [         14]  file.out
│               │           └── [         32]  file.out.index
│               └── [        102]  job_local218993552_0002
│                   └── [        102]  attempt_local218993552_0002_m_000000_0
│                       └── [        136]  output
│                           ├── [         14]  file.out
│                           └── [         32]  file.out.index
└── [        136]  staging
    ├── [        102]  jay1531736937
    └── [        102]  jay218993552


Any thoughts on how I can further diagnose what's happening and why my job
fails without a stacktrace? Because I don't have Hadoop installed on the
system (i.e., I'm just running a Java app that fires up a Hadoop client
locally), I can't see anything in /var/log.
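
One theory: since the client JVM has no log4j.properties on its classpath,
Hadoop's loggers have no appender and the LocalJobRunner's stacktrace is
silently dropped. A minimal sketch of what I could wire up before submitting
the job (the class name is hypothetical, assuming log4j 1.x, which the Hadoop
client already pulls in):

import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class EnableHadoopLogs {
    // Call once, before job submission.
    public static void enable() {
        BasicConfigurator.configure();               // console appender on the root logger
        Logger.getRootLogger().setLevel(Level.INFO);
        // The LocalJobRunner logs the failing task's exception under this package:
        Logger.getLogger("org.apache.hadoop.mapred").setLevel(Level.DEBUG);
    }
}

With that in place, the task attempt's exception should show up on stdout
instead of vanishing.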




-- 
Jay Vyas
http://jayunit100.blogspot.com
