[ https://issues.apache.org/jira/browse/HIVE-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772453#comment-13772453 ]

Jason Dere commented on HIVE-5070:
----------------------------------

Just looking at this a bit closer: listLocatedStatus() was added in 
hadoop-0.22.0, but it is only as of hadoop-2.1.1 that MapReduce actually uses 
this method, which is what is breaking the unit tests. So I think it would be 
fine for this method to be overridden in the 20/20S/23 shims - the original 
patch v1 should be sufficient. Sorry for the extra work, Shanyu! Does anyone 
else agree that the v1 patch is enough to fix this issue, without having to 
resort to a new shim method?
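
The override being discussed follows ProxyFileSystem's usual pattern: swizzle 
the incoming path from the proxy scheme (pfile:) to the underlying scheme 
(file:) before delegating, then swizzle the paths in the results back, so the 
delegate never sees a "Wrong FS" path. A minimal self-contained sketch of that 
swizzling pattern (the class, method, and scheme names here are illustrative 
stand-ins, not the actual Hadoop/Hive API):

```java
import java.net.URI;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Illustrative stand-in for ProxyFileSystem's path swizzling: rewrite the
// proxy scheme ("pfile") to the underlying scheme ("file") before delegating,
// and rewrite returned paths back so callers keep seeing proxy paths.
public class ProxySwizzleSketch {
  private final String proxyScheme;      // e.g. "pfile"
  private final String underlyingScheme; // e.g. "file"

  public ProxySwizzleSketch(String proxyScheme, String underlyingScheme) {
    this.proxyScheme = proxyScheme;
    this.underlyingScheme = underlyingScheme;
  }

  // Analogue of swizzleParamPath(): proxy path -> underlying path.
  public URI swizzleParam(URI path) {
    return URI.create(path.toString()
        .replaceFirst("^" + proxyScheme + ":", underlyingScheme + ":"));
  }

  // Analogue of swizzling the returned statuses: underlying -> proxy path.
  public URI swizzleReturn(URI path) {
    return URI.create(path.toString()
        .replaceFirst("^" + underlyingScheme + ":", proxyScheme + ":"));
  }

  // Sketch of an overridden listing method: swizzle the argument, delegate,
  // then swizzle every returned path. The delegate stands in for the
  // superclass call (e.g. super.listLocatedStatus()).
  public List<URI> listLocatedStatus(URI dir,
                                     Function<URI, List<URI>> delegate) {
    return delegate.apply(swizzleParam(dir)).stream()
        .map(this::swizzleReturn)
        .collect(Collectors.toList());
  }
}
```

The real fix would apply this same swizzle-delegate-swizzle shape inside an 
@Override of FileSystem.listLocatedStatus(Path), so that 
FileInputFormat.getSplits() on hadoop-2.1.1 gets consistent pfile: results.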
                
> Need to implement listLocatedStatus() in ProxyFileSystem for 0.23 shim
> ----------------------------------------------------------------------
>
>                 Key: HIVE-5070
>                 URL: https://issues.apache.org/jira/browse/HIVE-5070
>             Project: Hive
>          Issue Type: Bug
>          Components: CLI
>    Affects Versions: 0.12.0
>            Reporter: shanyu zhao
>             Fix For: 0.13.0
>
>         Attachments: HIVE-5070.patch.txt, HIVE-5070-v2.patch, HIVE-5070-v3.patch
>
>
> MAPREDUCE-1981 introduced a new FileSystem API, listLocatedStatus(), which is 
> used in Hadoop's FileInputFormat.getSplits(). Hive's ProxyFileSystem class 
> needs to implement this API in order to make the Hive unit tests work.
> Otherwise, you'll see exceptions like the following when running the 
> TestCliDriver test case, e.g. when running allcolref_in_udf.q:
> [junit] Running org.apache.hadoop.hive.cli.TestCliDriver
>     [junit] Begin query: allcolref_in_udf.q
>     [junit] java.lang.IllegalArgumentException: Wrong FS: pfile:/GitHub/Monarch/project/hive-monarch/build/ql/test/data/warehouse/src, expected: file:///
>     [junit]   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
>     [junit]   at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:69)
>     [junit]   at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:375)
>     [junit]   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1482)
>     [junit]   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1522)
>     [junit]   at org.apache.hadoop.fs.FileSystem$4.<init>(FileSystem.java:1798)
>     [junit]   at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1797)
>     [junit]   at org.apache.hadoop.fs.ChecksumFileSystem.listLocatedStatus(ChecksumFileSystem.java:579)
>     [junit]   at org.apache.hadoop.fs.FilterFileSystem.listLocatedStatus(FilterFileSystem.java:235)
>     [junit]   at org.apache.hadoop.fs.FilterFileSystem.listLocatedStatus(FilterFileSystem.java:235)
>     [junit]   at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
>     [junit]   at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:217)
>     [junit]   at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:69)
>     [junit]   at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:385)
>     [junit]   at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:351)
>     [junit]   at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:389)
>     [junit]   at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:503)
>     [junit]   at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:495)
>     [junit]   at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:390)
>     [junit]   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
>     [junit]   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
>     [junit]   at java.security.AccessController.doPrivileged(Native Method)
>     [junit]   at javax.security.auth.Subject.doAs(Subject.java:396)
>     [junit]   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1481)
>     [junit]   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
>     [junit]   at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
>     [junit]   at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:552)
>     [junit]   at java.security.AccessController.doPrivileged(Native Method)
>     [junit]   at javax.security.auth.Subject.doAs(Subject.java:396)
>     [junit]   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1481)
>     [junit]   at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:552)
>     [junit]   at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:543)
>     [junit]   at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:448)
>     [junit]   at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
>     [junit]   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     [junit]   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     [junit]   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     [junit]   at java.lang.reflect.Method.invoke(Method.java:597)
>     [junit]   at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
