[ https://issues.apache.org/jira/browse/HIVE-4029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Namit Jain updated HIVE-4029:
-----------------------------

      Resolution: Fixed
    Hadoop Flags: Reviewed
          Status: Resolved  (was: Patch Available)

Committed. Thanks, Brock.
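
For anyone who hits this before picking up the patch: the trace below shows the NPE coming out of HiveProfilePublisher.publishStat when it is invoked from HiveProfiler.close, i.e. the publisher is asked to write stats with state that was never initialized. The committed change is in the attached Hive-4029-D8649-0.diff; the snippet here is only a rough sketch of the kind of null guard that avoids this failure mode, with hypothetical class, field, and parameter names rather than the real profiler code.

{noformat}
import java.sql.Connection;
import java.util.Map;

// Illustrative sketch only -- NOT the committed HIVE-4029 patch (that lives in
// Hive-4029-D8649-0.diff). Class, field, and parameter names are hypothetical.
public class ProfilePublisherSketch {

  // Stand-in for whatever handle the real publisher holds onto; it stays null
  // if the connection to the stats store was never set up.
  private Connection conn;

  /**
   * Publish one profiler statistic. Instead of dereferencing possibly-null
   * state when the operator closes (the failure mode in the stack trace
   * below), bail out and report failure so the query can still complete.
   */
  public boolean publishStat(String operatorName, Map<String, String> stats) {
    if (conn == null || stats == null || stats.isEmpty()) {
      return false; // nothing to publish, or nowhere to publish it
    }
    // ... real publishing against the stats store would go here ...
    return true;
  }
}
{noformat}

Returning false rather than throwing keeps a profiling hook from failing the whole job, which seems preferable for an instrumentation path; whether the actual patch takes this shape or instead fixes the initialization, the repro below should stop failing once it is applied.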
                
> Hive Profiler dies with NPE
> ---------------------------
>
>                 Key: HIVE-4029
>                 URL: https://issues.apache.org/jira/browse/HIVE-4029
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.11.0
>            Reporter: Brock Noland
>            Assignee: Brock Noland
>             Fix For: 0.11.0
>
>         Attachments: Hive-4029-D8649-0.diff
>
>
> Steps to reproduce:
> {noformat}
> $ git clone https://github.com/apache/hive.git hive-profiler-npe
> Initialized empty Git repository in /home/brock/hive-profiler-npe/.git/
> remote: Counting objects: 73654, done.
> remote: Compressing objects: 100% (15383/15383), done.
> remote: Total 73654 (delta 44331), reused 71338 (delta 43054)
> Receiving objects: 100% (73654/73654), 42.78 MiB | 1.69 MiB/s, done.
> Resolving deltas: 100% (44331/44331), done.
> $ cd hive-profiler-npe/
> $ ant clean package
> $ cd build/dist/
> $ ./bin/hive
> hive> DROP TABLE IF EXISTS users;
> hive> CREATE TABLE users (
> user string,
> passwd string,
> uid int,
> gid int,
> name string,
> home string,
> shell string
> )
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY ':'
> STORED AS TEXTFILE;
> hive> LOAD DATA LOCAL INPATH '/etc/passwd' INTO TABLE users;
> hive> set hive.exec.operator.hooks=org.apache.hadoop.hive.ql.profiler.HiveProfiler;
> set hive.exec.operator.hooks=org.apache.hadoop.hive.ql.profiler.HiveProfiler
> hive> set hive.exec.post.hooks=org.apache.hadoop.hive.ql.hooks.HiveProfilerResultsHook;
> set hive.exec.post.hooks=org.apache.hadoop.hive.ql.hooks.HiveProfilerResultsHook
> hive> SET hive.exec.mode.local.auto=false;
> SET hive.exec.mode.local.auto=false
> hive> SET hive.task.progress=true;
> SET hive.task.progress=true
> hive> 
>     > select count(1) from users;
> select count(1) from users
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> Starting Job = job_201302131617_0022, Tracking URL = http://localhost.localdomain:50030/jobdetails.jsp?jobid=job_201302131617_0022
> Kill Command = /usr/local/hadoop-1.0.4/libexec/../bin/hadoop job  -kill job_201302131617_0022
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
> 2013-02-16 10:24:14,215 Stage-1 map = 0%,  reduce = 0%
> 2013-02-16 10:24:44,354 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201302131617_0022 with errors
> Error during job, obtaining debugging information...
> Job Tracking URL: http://localhost.localdomain:50030/jobdetails.jsp?jobid=job_201302131617_0022
> Examining task ID: task_201302131617_0022_m_000002 (and more) from job job_201302131617_0022
> Task with the most failures(4): 
> -----
> Task ID:
>   task_201302131617_0022_m_000000
> URL:
>   http://localhost.localdomain:50030/taskdetails.jsp?jobid=job_201302131617_0022&tipid=task_201302131617_0022_m_000000
> -----
> Diagnostic Messages for this Task:
> java.lang.RuntimeException: Hive Runtime Error while closing operators
>       at org.apache.hadoop.hive.ql.exec.ExecMapper.close(ExecMapper.java:227)
>       at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
>       at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
>       at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
>       at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:396)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>       at org.apache.hadoop.mapred.Child.main(Child.java:249)
> Caused by: java.lang.NullPointerException
>       at org.apache.hadoop.hive.ql.profiler.HiveProfilePublisher.publishStat(HiveProfilePublisher.java:85)
>       at org.apache.hadoop.hive.ql.profiler.HiveProfiler.close(HiveProfiler.java:110)
>       at org.apache.hadoop.hive.ql.exec.Operator.closeOperatorHooks(Operator.java:452)
>       at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:605)
>       at org.apache.hadoop.hive.ql.exec.ExecMapper.close(ExecMapper.java:194)
>       ... 8 more
> FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
> MapReduce Jobs Launched: 
> Job 0: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
> Total MapReduce CPU Time Spent: 0 msec
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
