[jira] [Created] (HADOOP-14955) Document the support of multiple authentications for HTTP

2017-10-16 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-14955:
-

 Summary: Document the support of multiple authentications for HTTP
 Key: HADOOP-14955
 URL: https://issues.apache.org/jira/browse/HADOOP-14955
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Benoy Antony
Assignee: Benoy Antony


Due to the enhancements made in HADOOP-12082, Hadoop supports multiple 
authentication schemes for the HTTP protocol.

This needs to be documented for wider usage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13807) UGI renewal thread should be spawned only if the keytab is not external

2017-10-16 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-13807.
--
Resolution: Duplicate

Based on my understanding of this jira, it is a dup of HADOOP-13805.
I'll close this jira as a result. Please reopen if this is not the case. Thanks!

> UGI renewal thread should be spawned only if the keytab is not external
> -
>
> Key: HADOOP-13807
> URL: https://issues.apache.org/jira/browse/HADOOP-13807
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha1
>Reporter: Alejandro Abdelnur
>Priority: Minor
>
> The renewal thread should not be spawned if the keytab is external.
> Because of HADOOP-13805 there can be a case where a UGI does not have a 
> keytab because authentication is managed by the host program. In such a case 
> we should not spawn the renewal thread.
> Currently this logs the warning "Exception encountered while running the 
> renewal command. Aborting renew thread." and exits the thread. The warning 
> may be misleading, and running the thread is not really needed.
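
In code terms the proposal is essentially a guard around the thread creation. A minimal, self-contained sketch of that guard (illustrative only; the flag names are assumptions, not the actual UserGroupInformation fields):

{code}
// Minimal, self-contained sketch of the proposed behaviour -- NOT the actual
// UserGroupInformation code; the names below are assumptions for illustration.
public class RenewalThreadGuard {

  /** Decide whether a TGT-renewal thread should be started for a login. */
  static boolean shouldSpawnRenewalThread(boolean securityEnabled,
                                          boolean hasKeytab,
                                          boolean keytabIsExternal) {
    // No renewal when security is off, when there is no keytab at all,
    // or when the keytab (and hence renewal) is managed by the host program.
    return securityEnabled && hasKeytab && !keytabIsExternal;
  }

  public static void main(String[] args) {
    // External keytab (the HADOOP-13805 case): no renewal thread, no misleading warning.
    System.out.println(shouldSpawnRenewalThread(true, true, true));   // false
    // Regular keytab login: the renewal thread is wanted.
    System.out.println(shouldSpawnRenewalThread(true, true, false));  // true
  }
}
{code}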






[jira] [Created] (HADOOP-14954) MetricsSystemImpl#init should increment refCount when already initialized

2017-10-16 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14954:
---

 Summary: MetricsSystemImpl#init should increment refCount when 
already initialized
 Key: HADOOP-14954
 URL: https://issues.apache.org/jira/browse/HADOOP-14954
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.7.0
Reporter: John Zhuge
Priority: Minor


{{++refCount}} here in {{init}} should be symmetric to {{--refCount}} in 
{{shutdown}}.
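
A minimal sketch of the intended symmetry (illustrative only, not the actual MetricsSystemImpl code):

{code}
// Illustrative sketch of the intended reference counting -- not the actual
// MetricsSystemImpl code; heavily simplified for this example.
public class RefCountedMetricsSystem {
  private int refCount = 0;
  private boolean monitoring = false;

  public synchronized RefCountedMetricsSystem init(String prefix) {
    ++refCount;                 // count every init() call, even when already started
    if (monitoring) {
      return this;              // already initialized: just record the extra reference
    }
    // ... start sources, sinks, timers here ...
    monitoring = true;
    return this;
  }

  public synchronized boolean shutdown() {
    if (--refCount > 0) {
      return false;             // other users still hold a reference
    }
    // ... stop sources, sinks, timers here ...
    monitoring = false;
    return true;
  }
}
{code}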






[jira] [Resolved] (HADOOP-9296) Authenticating users from different realm without a trust relationship

2017-10-16 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony resolved HADOOP-9296.
--
Resolution: Won't Do

Cleaning up jiras which are not relevant anymore.

> Authenticating users from different realm without a trust relationship
> --
>
> Key: HADOOP-9296
> URL: https://issues.apache.org/jira/browse/HADOOP-9296
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: HADOOP-9296-1.1.patch, HADOOP-9296.patch, 
> HADOOP-9296.patch, multirealm.pdf
>
>
> Hadoop masters (JobTracker and NameNode) and slaves (DataNode and 
> TaskTracker) are part of the HADOOP domain, controlled by the Hadoop Active 
> Directory. 
> The users belong to the CORP domain, controlled by the CORP Active Directory. 
> In the absence of a one-way trust from the HADOOP domain to the CORP domain, 
> how will the Hadoop servers (JobTracker, NameNode) authenticate CORP users?
> The solution and implementation details are in the attachment.






[jira] [Resolved] (HADOOP-10057) Add ability in Hadoop servers (Namenode, JobTracker, Datanode ) to support multiple QOP (Authentication , Privacy) simultaneously

2017-10-16 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony resolved HADOOP-10057.
---
Resolution: Won't Do

Cleaning up jiras which are not relevant anymore.

> Add ability in Hadoop servers (Namenode, JobTracker, Datanode ) to support 
> multiple QOP  (Authentication , Privacy) simultaneously
> --
>
> Key: HADOOP-10057
> URL: https://issues.apache.org/jira/browse/HADOOP-10057
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.2.1
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: HADOOP-10057.pdf, hadoop-10057-branch-1.2.patch
>
>
> Add the ability in Hadoop servers (NameNode, JobTracker, DataNode) to support 
> multiple QOPs (Authentication, Privacy) simultaneously.
> Hadoop servers currently support only one QOP (quality of protection) for the 
> whole cluster.
> We want Hadoop servers to support multiple QOPs at the same time. 
> The logic used to determine the QOP should be pluggable.
> This will enable Hadoop servers to communicate with different types of 
> clients with different QOPs.
> A sample use case:
> Let each Hadoop server support two QOPs:
> 1. Authentication
> 2. Privacy (Privacy includes Authentication).
> The Hadoop servers and internal clients need to do Authentication only, 
> without incurring the cost of encryption. External clients use Privacy. 
> An IP-whitelist logic to determine the QOP is provided and used as the 
> default QOP resolution logic.
>  
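
For illustration, the pluggable resolution described above could be captured in an interface along the following lines (a hypothetical sketch; the names are not from the attached patch, and "auth"/"auth-conf" are the standard SASL QOP values):

{code}
import java.net.InetAddress;
import java.util.Set;

// Hypothetical pluggable QOP resolver -- names are illustrative only.
interface QopResolver {
  /** Return the SASL QOP values to offer to a connecting client. */
  Set<String> resolveQop(InetAddress clientAddress);
}

// Default resolution logic sketched in the description: internal (whitelisted)
// clients get "auth" only, everyone else must use "auth-conf" (privacy).
class IpWhitelistQopResolver implements QopResolver {
  private final Set<String> internalHosts;

  IpWhitelistQopResolver(Set<String> internalHosts) {
    this.internalHosts = internalHosts;
  }

  @Override
  public Set<String> resolveQop(InetAddress clientAddress) {
    return internalHosts.contains(clientAddress.getHostAddress())
        ? Set.of("auth")            // authentication only, no encryption cost
        : Set.of("auth-conf");      // privacy (includes authentication)
  }
}
{code}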






[jira] [Resolved] (HADOOP-9939) Custom Processing for Errors

2017-10-16 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony resolved HADOOP-9939.
--
Resolution: Won't Do

Cleaning up jiras which are not relevant anymore.

> Custom Processing for Errors
> 
>
> Key: HADOOP-9939
> URL: https://issues.apache.org/jira/browse/HADOOP-9939
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>Priority: Minor
> Attachments: hadoop-9939.patch
>
>   Original Estimate: 20h
>  Remaining Estimate: 20h
>
> We have a use case where we want to display a different error message and 
> take some bookkeeping actions when there is an authentication failure in 
> Hadoop.
> There could be other error cases where we want to associate custom actions 
> or messages.
> The work is to define a framework for attaching custom error processors as 
> part of exception handling, and to use that framework to display a custom 
> error message for authentication failures.
> Please review and let me know your comments or alternatives.
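
For illustration, the framework sketched above could look roughly like this (hypothetical names, not the API from the attached patch):

{code}
// Hypothetical sketch of a pluggable error processor -- names are
// illustrative only, not the API from the attached patch.
interface ErrorProcessor {
  /** Return a (possibly customized) message for the given failure. */
  String process(Throwable error, String defaultMessage);
}

// Example: customize the message shown on authentication failures and
// record the event for bookkeeping.
class AuthFailureProcessor implements ErrorProcessor {
  @Override
  public String process(Throwable error, String defaultMessage) {
    if (error instanceof SecurityException) {
      // bookkeeping hook would go here (metrics, audit log, ...)
      return "Authentication failed. Please check your Kerberos ticket "
          + "or contact the cluster administrators.";
    }
    return defaultMessage;
  }
}
{code}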






[jira] [Resolved] (HADOOP-8923) WEBUI shows an intermediatory page when the cookie expires.

2017-10-16 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony resolved HADOOP-8923.
--
Resolution: Won't Do

Cleaning up jiras which are not relevant anymore.

> WEBUI shows an intermediatory page when the cookie expires.
> ---
>
> Key: HADOOP-8923
> URL: https://issues.apache.org/jira/browse/HADOOP-8923
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.1.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>Priority: Minor
> Attachments: HADOOP-8923.patch
>
>
> The WEBUI does authentication (SPNEGO/custom) and then drops a cookie. 
> Once the cookie expires, the WEBUI displays a page saying that the 
> "authentication token expired". The user has to refresh the page to get 
> authenticated again. This page can be avoided and the user can be 
> re-authenticated without showing such a page.
> Also, when the cookie expires, a warning is logged, but there is no need 
> to log this as it is not of any significance.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-10-16 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/559/

[Oct 16, 2017 2:28:22 AM] (xiao) HDFS-12659. Update 
TestDeadDatanode#testNonDFSUsedONDeadNodeReReg to




-1 overall


The following subsystems voted -1:
unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin
   hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler
   hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy
   hadoop.yarn.server.TestDiskFailures

Timed out junit tests :

   org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA

   cc:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/559/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/559/artifact/out/diff-compile-javac-root.txt  [284K]

   checkstyle:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/559/artifact/out/diff-checkstyle-root.txt  [17M]

   pylint:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/559/artifact/out/diff-patch-pylint.txt  [20K]

   shellcheck:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/559/artifact/out/diff-patch-shellcheck.txt  [20K]

   shelldocs:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/559/artifact/out/diff-patch-shelldocs.txt  [12K]

   whitespace:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/559/artifact/out/whitespace-eol.txt  [11M]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/559/artifact/out/whitespace-tabs.txt  [1.3M]

   javadoc:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/559/artifact/out/diff-javadoc-javadoc-root.txt  [1.9M]

   unit:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/559/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt  [148K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/559/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt  [40K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/559/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt  [68K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/559/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt  [12K]

Powered by Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org


Re: [VOTE] Merge feature branch YARN-5355 (Timeline Service v2) to trunk

2017-10-16 Thread Vrushali C
Timeline Service v2 should be landing on branch2 shortly.

thanks
Vrushali

On Thu, Sep 7, 2017 at 4:57 PM, Vrushali C  wrote:

> Thanks everyone.
>
> It has been over a week (~9 days) since TSv2 has been merged to trunk with
> no problems thus far. We are now thinking about merging timeline service v2
> to branch2 some time in the next few weeks.
>
> So far, we have been maintaining a branch2 based YARN-5355_branch2 along
> with our trunk based feature branch YARN-5355. Varun Saxena has been
> diligently rebasing it to stay current with branch2.
>
> Currently we are in the process of testing it just like we did our due
> diligence with the trunk based YARN-5355 branch and will ensure the TSv2
> branch2 code is a stable state to be merged.
>
> We will send out another email when we are ready to merge to branch2.
> thanks
> Vrushali
>
> On Thu, Aug 31, 2017 at 12:33 PM, Subramaniam V K 
> wrote:
>
>> Good to see this merged. I have initiated a separate thread with a
>> smaller set of stakeholders to discuss inclusion in 2.9. We'll report back
>> to the 2.9 release thread as soon as we reach consensus.
>>
>> On Thu, Aug 31, 2017 at 10:39 AM, Ravi Prakash 
>> wrote:
>>
>>> +1 to maintaining history.
>>>
>>> On Wed, Aug 30, 2017 at 11:38 PM, varunsax...@apache.org <
>>> varun.saxena.apa...@gmail.com> wrote:
>>>
>>> > Yes, I had used "git merge --no-ff"  while merging ATSv2 to trunk.
>>> > Maintaining history I believe can be useful as it can make reverts
>>> > easier if at all required.
>>> > And can be an easy reference point to look at who had contributed what
>>> > without having to go back to the branch.
>>> >
>>> > Regards,
>>> > Varun Saxena.
>>> >
>>> > On Thu, Aug 31, 2017 at 3:56 AM, Vrushali C 
>>> > wrote:
>>> >
>>> > > Thanks Sangjin for the link to the previous discussions on this! I think
>>> > > that helps answer Steve's questions.
>>> > >
>>> > > As decided on that thread [1], YARN-5355 as a feature branch was merged
>>> > > to trunk via "git merge --no-ff".
>>> > >
>>> > > Although trunk already had TSv2 code (alpha1) prior to this merge, we
>>> > > chose to develop on a feature branch YARN-5355 so that we could control
>>> > > when changes went into trunk and didn't inadvertently disrupt trunk.
>>> > >
>>> > > Is the latest merge causing any conflicts or issues for s3guard, Steve?
>>> > >
>>> > > thanks
>>> > > Vrushali
>>> > > [1] https://lists.apache.org/thread.html/43cd65c6b6c3c0e8ac2b3c76afd9eff1f78b177fabe9c4a96d9b3d0b@1440189889@%3Ccommon-dev.hadoop.apache.org%3E
>>> > >
>>> > >
>>> > > On Wed, Aug 30, 2017 at 2:37 PM, Sangjin Lee  wrote:
>>> > >
>>> > >> I recall this discussion about a couple of years ago:
>>> > >> https://lists.apache.org/thread.html/43cd65c6b6c3c0e8ac2b3c76afd9eff1f78b177fabe9c4a96d9b3d0b@1440189889@%3Ccommon-dev.hadoop.apache.org%3E
>>> > >>
>>> > >> On Wed, Aug 30, 2017 at 2:32 PM, Steve Loughran <ste...@hortonworks.com>
>>> > >> wrote:
>>> > >>
>>> > >>> I'd have assumed it would have gone in as one single patch, rather than
>>> > >>> a full history. I don't see why the trunk needs all the evolutionary
>>> > >>> history of a build.
>>> > >>>
>>> > >>> What should our policy/process be here?
>>> > >>>
>>> > >>> I do currently plan to merge the s3guard in as one single squashed
>>> > >>> patch; just getting HADOOP-14809 sorted first.
>>> > >>>
>>> > >>>
>>> > >>> > On 30 Aug 2017, at 07:09, Vrushali C  wrote:
>>> > >>> >
>>> > >>> > I'm adding my +1 (binding) to conclude the vote.
>>> > >>> >
>>> > >>> > With 13 +1's (11 binding) and no -1's, the vote passes. We'll get on
>>> > >>> > with the merge to trunk shortly. Thanks everyone!
>>> > >>> >
>>> > >>> > Regards
>>> > >>> > Vrushali
>>> > >>> >
>>> > >>> >
>>> > >>> > On Tue, Aug 29, 2017 at 10:54 AM, varunsax...@apache.org <
>>> > >>> > varun.saxena.apa...@gmail.com> wrote:
>>> > >>> >
>>> > >>> >> +1 (binding).
>>> > >>> >>
>>> > >>> >> Kudos to all the team members for their great work!
>>> > >>> >>
>>> > >>> >> Being part of the ATSv2 team, I have been involved with either
>>> > >>> >> development or review of most of the JIRAs'.
>>> > >>> >> Tested ATSv2 in both secure and non-secure mode. Also verified that
>>> > >>> >> there is no impact when ATSv2 is turned off.
>>> > >>> >>
>>> > >>> >> Regards,
>>> > >>> >> Varun Saxena.
>>> > >>> >>
>>> > >>> >> On Tue, Aug 22, 2017 at 12:02 PM, Vrushali Channapattan <
>>> > >>> >> vrushalic2...@gmail.com> wrote:
>>> > >>> >>
>>> > >>> >>> Hi folks,
>>> > >>> >>>
>>> > >>> >>> Per earlier discussion [1], I'd like to start a formal vote to merge
>>> > >>> >>> feature branch YARN-5355 [2] (Timeline Service v.2) to trunk. The vote
>>> > >>> >>> will run for 7 days, and will end August 29 11:00 PM PDT.
>>> > >>> >>>
>>> > >>> >>> We have previously completed one merge onto trunk [3] and Timeline Se

[jira] [Created] (HADOOP-14952) Newest hadoop-client throws ClassNotFoundException

2017-10-16 Thread Kamil (JIRA)
Kamil created HADOOP-14952:
--

 Summary: Newest hadoop-client throws ClassNotFoundException
 Key: HADOOP-14952
 URL: https://issues.apache.org/jira/browse/HADOOP-14952
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kamil


I was using org.apache.hadoop:hadoop-client in version 2.7.4 and it worked 
fine, but recently I had problems with CGLIB (it was conflicting with Spring).
I decided to try version 3.0.0-beta1, but the server didn't start and failed 
with the following exception:
{code}
16-Oct-2017 10:27:12.918 SEVERE [localhost-startStop-1] 
org.apache.catalina.core.ContainerBase.addChildInternal ContainerBase.addChild: 
start:
 org.apache.catalina.LifecycleException: Failed to start component 
[StandardEngine[Catalina].StandardHost[localhost].StandardContext[]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:158)
at 
org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:724)
at 
org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:700)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
at 
org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1107)
at 
org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1841)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: 
com/sun/jersey/api/core/DefaultResourceConfig
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at 
org.apache.catalina.startup.WebappServiceLoader.loadServices(WebappServiceLoader.java:188)
at 
org.apache.catalina.startup.WebappServiceLoader.load(WebappServiceLoader.java:159)
at 
org.apache.catalina.startup.ContextConfig.processServletContainerInitializers(ContextConfig.java:1611)
at 
org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1131)
at 
org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:771)
at 
org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:298)
at 
org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:94)
at 
org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5092)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:152)
... 10 more
Caused by: java.lang.ClassNotFoundException: 
com.sun.jersey.api.core.DefaultResourceConfig
at 
org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1299)
at 
org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1133)
... 21 more
{code}

After adding com.sun.jersey:jersey-server:1.9.1 to my dependencies, the server 
started, but I think it should already be included in the hadoop-client 
dependencies.
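
For reference, the workaround described above amounts to declaring the Jersey dependency explicitly in the project's own pom.xml (a sketch of the reporter's workaround, using the version mentioned above; adjust as needed):

{code}
<!-- Workaround sketch: pull in the missing Jersey server classes explicitly. -->
<dependency>
  <groupId>com.sun.jersey</groupId>
  <artifactId>jersey-server</artifactId>
  <version>1.9.1</version>
</dependency>
{code}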






[jira] [Created] (HADOOP-14951) KMSACL implementation is not configurable

2017-10-16 Thread Zsombor Gegesy (JIRA)
Zsombor Gegesy created HADOOP-14951:
---

 Summary: KMSACL implementation is not configurable
 Key: HADOOP-14951
 URL: https://issues.apache.org/jira/browse/HADOOP-14951
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Reporter: Zsombor Gegesy


Currently it is not possible to customize the KMS's key management if the 
KMSACLs behaviour is not enough. If an external key management solution is 
used, it would need a higher-level API where it can decide whether the given 
operation is allowed or not.
To achieve this, one solution would be to introduce a new interface which 
could be implemented by KMSACLs - and also by other KMS authorizers - and to 
add a new configuration point where the actual interface implementation 
could be specified.
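
A hypothetical sketch of such an interface (names and signatures are illustrative assumptions, not an existing Hadoop API):

{code}
// Hypothetical authorization interface that KMSACLs -- or an external key
// management integration -- could implement. All names here are illustrative
// assumptions, not an existing Hadoop API.
public interface KMSAuthorizer {

  /** Operations the KMS needs an authorization decision for (illustrative subset). */
  enum KMSOp { CREATE_KEY, DELETE_KEY, ROLL_NEW_VERSION, GENERATE_EEK, DECRYPT_EEK, GET_METADATA }

  /** Called once at startup with the KMS configuration. */
  void init(org.apache.hadoop.conf.Configuration conf);

  /** Decide whether the given user may perform the given operation on the given key. */
  boolean isAllowed(org.apache.hadoop.security.UserGroupInformation ugi, KMSOp op, String keyName);
}
{code}

A new configuration property (for example something like kms.acl.authorizer.class, again purely hypothetical) could then select the implementation, defaulting to the current KMSACLs behaviour.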






[jira] [Created] (HADOOP-14950) har file system throws ArrayIndexOutOfBoundsException

2017-10-16 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-14950:


 Summary: har file system throws ArrayIndexOutOfBoundsException
 Key: HADOOP-14950
 URL: https://issues.apache.org/jira/browse/HADOOP-14950
 Project: Hadoop Common
  Issue Type: Bug
 Environment: CDH 5.9.2
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


When listing a har file system file, it throws an AIOOBE like the following:

{noformat}
$ hdfs dfs -ls har:///abc.har
-ls: Fatal internal error
java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.hadoop.fs.HarFileSystem$HarStatus.<init>(HarFileSystem.java:597)
at 
org.apache.hadoop.fs.HarFileSystem$HarMetaData.parseMetaData(HarFileSystem.java:1201)
at 
org.apache.hadoop.fs.HarFileSystem$HarMetaData.access$000(HarFileSystem.java:1098)
at org.apache.hadoop.fs.HarFileSystem.initialize(HarFileSystem.java:166)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2711)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:382)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:102)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
{noformat}

Checking the code, it looks like the _index file in the har is malformed. The 
parser expects two strings separated by a space on each line, and this AIOOBE 
is possible if the second string does not exist.

Filing this jira to improve the error handling of such a case.
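
A minimal sketch of the kind of defensive check that could replace the bare array access (illustrative only; the real HarFileSystem parsing is more involved):

{code}
import java.io.IOException;

// Illustrative parser for one line of a har _index file; the real
// HarFileSystem code differs, this only shows the defensive check.
final class HarIndexLineParser {

  static String[] parse(String line) throws IOException {
    String[] parts = line.split(" ", 2);
    if (parts.length < 2 || parts[1].isEmpty()) {
      // Fail with a descriptive error instead of an ArrayIndexOutOfBoundsException.
      throw new IOException("Malformed _index entry, expected '<name> <info>' but got: " + line);
    }
    return parts;
  }

  public static void main(String[] args) throws IOException {
    System.out.println(parse("part-0 dir")[1]);   // well-formed line
    try {
      parse("loneToken");                         // malformed line
    } catch (IOException e) {
      System.out.println(e.getMessage());         // clear error instead of AIOOBE
    }
  }
}
{code}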


